Planet Sakai

September 30, 2014

Ian Boston

AppleRAID low level fix

Anyone who uses AppleRAID will know how often it declares that a perfectly healthy disk is no longer a valid member of a RAID set. What you may not have experienced is when it won't rebuild. For a striped set, the only practical solution is a backup. For a mirror there are some things you can do. Typically, when diskutil or the GUI won't repair the set, the low-level AppleRAID.kext won't load, or loads and then fails, reporting that it can't get a controller object. In the logs you might also see that the RAID set is degraded or just offline. If it's really bad, Disk Utility and diskutil will hang somewhere in the kernel, and you won't be able to get a clean reboot.

Here is one way to fix it:

Unplug the disk subsystem causing the problem.

Reboot; you may have to pull the plug to force a shutdown.

Once up, move the AppleRAID.kext to a safe place, e.g.

mkdir ~/kext
sudo mv /System/Library/Extensions/AppleRAID.kext ~/kext

Watch the logs to see that kextcache has rebuilt the cache of kernel extensions. You should see something like

30/09/2014 13:21:37.518[456]: /: helper partitions appear up to date.

When you see that, you know that if you plug in the RAID subsystem the kernel won't be able to load the AppleRAID.kext, and so you will be able to manipulate the disks.

Plug in the RAID subsystem and check that it didn't load the kernel extension:

kextstat | grep AppleRAID

You will now be able to run diskutil list, and you should see your disks listed as Apple RAID disks, e.g.

$ diskutil list
/dev/disk2
 0: GUID_partition_scheme *750.2 GB disk2
 1: EFI 209.7 MB disk2s1
 2: Apple_RAID 749.8 GB disk2s2
 3: Apple_Boot Boot OS X 134.2 MB disk2s3
/dev/disk3
 0: GUID_partition_scheme *750.2 GB disk3
 1: EFI 209.7 MB disk3s1
 2: Apple_RAID 749.8 GB disk3s2
 3: Apple_Boot Boot OS X 134.2 MB disk3s3

At this point the disks are just plain disks. The AppleRAID kernel extension isn’t managing the disks. Verify with

$ diskutil appleRAID list
No AppleRAID sets found

Since you can't use them as a RAID any more, and so can't use the diskutil appleRAID delete command to convert the RAID set into normal disks, you have to trick OS X into mounting the disks. To do this you need to edit the partition table without touching the data on the disk. You can do this with gpt:

$ sudo gpt show disk2
start size index contents
0 1 PMBR
1 1 Pri GPT header
2 32 Pri GPT table
34 6
40 409600 1 GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
409640 1464471472 2 GPT part - 52414944-0000-11AA-AA11-00306543ECAC
1464881112 262144 3 GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
1465143256 7
1465143263 32 Sec GPT table
1465143295 1 Sec GPT header
$ sudo gpt show disk3
start size index contents
0 1 PMBR
1 1 Pri GPT header
2 32 Pri GPT table
34 6
40 409600 1 GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
409640 1464471472 2 GPT part - 52414944-0000-11AA-AA11-00306543ECAC
1464881112 262144 3 GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
1465143256 7
1465143263 32 Sec GPT table
1465143295 1 Sec GPT header
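Before editing anything, it is worth sanity-checking the geometry in the output above: each indexed entry's start plus its size should equal the start of whatever comes next. A quick Python check of the disk2 numbers (the same values appear for disk3):

```python
# (start, size) for the three indexed partitions of disk2, from the gpt output
parts = [(40, 409600), (409640, 1464471472), (1464881112, 262144)]

for (start, size), (next_start, _) in zip(parts, parts[1:]):
    # each partition should end exactly where the next region begins
    assert start + size == next_start

print("index 2: start", parts[1][0], "size", parts[1][1])
```

The start (409640) and size (1464471472) of index 2 are the values that must be reused when the entry is recreated.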

According to the partition table, the entry at index 2, with a partition type of 52414944-0000-11AA-AA11-00306543ECAC, is an Apple_RAID partition. It's actually HFS+ with some extra settings. Those settings get removed when converting it from RAID to non-RAID, but to get it mounted we can just change the partition type. First delete the entry from the partition table, then recreate it with the HFS+ type at exactly the same position and size.

$ sudo gpt remove -i 2 disk2
disk2s2 removed
$ sudo gpt add -b 409640 -s 1464471472 -t 48465300-0000-11AA-AA11-00306543ECAC disk2
disk2s2 added
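As an aside, Apple's partition type GUIDs are easy to tell apart because the first four bytes spell out an ASCII tag. A small Python illustration, using the GUIDs from the gpt output above:

```python
# Partition type GUIDs as shown by gpt; the leading four bytes are ASCII tags
guids = [
    "52414944-0000-11AA-AA11-00306543ECAC",  # Apple_RAID
    "48465300-0000-11AA-AA11-00306543ECAC",  # Apple_HFS (HFS+)
    "426F6F74-0000-11AA-AA11-00306543ECAC",  # Apple_Boot
]

for guid in guids:
    # decode the first GUID group as ASCII, dropping any NUL padding
    tag = bytes.fromhex(guid.split("-")[0]).decode("ascii").rstrip("\x00")
    print(guid.split("-")[0], "->", tag)
```

So the gpt add above simply relabels the partition from "RAID" to "HFS" while leaving the data untouched.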

OS X will mount the disk. It will probably tell you that it's been mounted read-only and can't be repaired. At that point you need to copy all the data off onto a clean disk, using rsync.

Once that is done you can do the same with the second disk and compare the differences between both of your RAID members. When you have all the data back, you can decide whether to leave the AppleRAID.kext disabled or use it again. I know what I will be doing.

by Ian at September 30, 2014 01:29 PM

September 26, 2014

Michael Feldstein

Investigation of IPEDS Distance Education Data Highlights System Not Ready for Modern Trends

This article is cross-posted to the WCET blog.

After billions of dollars spent on administrative computer systems and billions of dollars invested in ed tech companies, the U.S. higher education system is woefully out of date and unable to cope with major education trends such as online & hybrid education, flexible terms, and the expansion of continuing and extended education. Based on an investigation of the recently released distance education data for IPEDS, the primary national education database maintained by the National Center for Education Statistics (NCES), we have found significant confusion over basic definitions of terms, manual gathering of data outside of the computer systems designed to collect data, and, due to confusion over which students to include in IPEDS data, the systematic non-reporting of large numbers of degree-seeking students.

In Fall 2012, the IPEDS (Integrated Postsecondary Education Data System) data collection for the first time included distance education – primarily for online courses and programs. This data is important for policy makers and institutional enrollment management as well as for the companies serving the higher education market.

We first noticed the discrepancies based on feedback from analyses that we have both published on the e-Literate and WCET blogs. One of the most troubling calls came from a state university representative who said that the school has never reported any students who took their credit-bearing courses through its self-supported, continuing education program. Since they did not include those enrollments in reporting to the state, they did not report them to IPEDS. These were credits toward degrees and certificate programs offered by the university and therefore should have been included in IPEDS reporting, based on the following instructions.

Include all students enrolled for credit (courses or programs that can be applied towards the requirements for a postsecondary degree, diploma, certificate, or other formal award), regardless of whether or not they are seeking a degree or certificate.

Unfortunately, the instructions call out this confusing exclusion (one example out of four):

Exclude students who are not enrolled for credit. For example, exclude: Students enrolled exclusively in Continuing Education Units (CEUs).

How many schools have interpreted this continuing education exclusion to apply to all continuing education enrollments? To do an initial check, we contacted several campuses in the California State University system and were told that all IPEDS reporting was handled at the system level. Based on the introduction of the Fall 2012 distance education changes, Cal State re-evaluated whether to change their reporting policy. A system spokesman explained that:

I’ve spoken with our analytic studies staff and they’ve indicated that the standard practice for data reporting has been to share only data for state-supported enrollments. We have not been asked by IPEDS to do otherwise so when we report distance learning data next spring, we plan on once again sharing only state-supported students.

Within the Cal State system, this means that more than 50,000 students taking for-credit self-support courses will not be reported, and this student group has never been reported.

One of the reasons for the confusion, as well as the significance of this change, is that continuing education units have moved past their roots of offering CEUs and non-credit courses for the general public (hence the name continuing education) and taken up a new role of offering courses not funded by the state (hence self-support). Since these courses and programs are not state funded, they are not subject to the same oversight and restrictions as state-funded equivalents, such as maximum tuition per credit hour.

This situation allows continuing education units in public schools to become laboratories and innovators in online education. The flip side is that, given the non-state-funded nature of these courses and programs, it appears that schools may not be reporting these for-credit enrollments through IPEDS, whether or not the students were in online courses. However, the new distance education questions may actually trigger changes in reporting.

Do Other Colleges Also Omit Students from Their IPEDS Report?

Given what was learned from the California State University System, we were interested in learning if other colleges were having similar problems with reporting distance education enrollments to IPEDS. WCET conducted a non-scientific canvassing of colleges to get their feedback on what problems they may have encountered. Twenty-one institutions were selected through a non-scientific process of identifying colleges that reported enrollment figures that seemed incongruous with their size or distance education operations. See the “Appendix A: Methodology” for more details.

From early August to mid-September, we sought answers regarding whether the colleges reported all for-credit distance education and online enrollments for Fall 2012. If they did not, we asked about the size of the undercount and why some enrollments were not reported.

Typically, the response included some back-and-forth between the institutional research and distance education units at each college. Through these conversations, we quickly realized that we should have asked a question about the U.S. Department of Education’s definition of “distance education.”   Institutions were very unclear about what activities to include or exclude in their counts. Some used local definitions that varied from the federal expectations. As a result, we asked that question as often as we could.

The Responses

Twenty institutions provided useable responses. We agreed to keep responses confidential. Table 1 provides a very high level summary of the responses to the following two questions:

  • Counts Correct? – Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  • Problem with “Distance Education” Definition? – Although we did not specifically ask this question, several people volunteered that they had trouble applying the IPEDS definition.

Table 1: Counts for Institutional Responses

Counts Correct?
Problem with "Distance Education" Definition?

Of those that assured us that they submitted the correct distance education counts, some also reported having used their own definitions or processes for distance education. This would make their reported counts incomparable to the vast majority of others reporting.

One institution declined to respond. Given that its website advertises many hundreds of online courses, the distance education counts reported would lead us to believe that they either: a) under-reported, or b) average one or two students per online class. The second scenario seems unlikely.


This analysis found several issues that call into question the usability of IPEDS distance education enrollment counts and, more broadly and more disturbingly, IPEDS statistics in general.

There is a large undercount of distance education students

While only a few institutions reported an undercount, one was from the California State University System and another from a large university system in another populous state. Since the same procedures were used within each system, there are a few hundred thousand students who were not counted in just those two systems.

In California, they have never reported students enrolled in Continuing Education (self-support) units to IPEDS. A source of the problem may be the survey instructions. Respondents are asked to exclude: “Students enrolled exclusively in Continuing Education Units (CEUs).” The intent of this statement is to exclude those taking only non-credit courses. It is conceivable that some might misinterpret this to mean excluding those in the campus's continuing education division. What was supposed to be reported was the number of students taking for-credit courses, regardless of which college or institutional unit was responsible for offering the course.

In the other large system, they do not report out-of-state students, as those students do not receive funding from the state coffers.

It is unclear what the total undercount would be if we knew the actual numbers across all institutions. Given that the total number of “students enrolled exclusively in distance education courses” for Fall 2012 was 2,653,426, an undercount of a hundred thousand students just from these two systems would be nearly a 4% error. That percentage is attention-getting on its own.
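As a rough check of that figure (taking the hundred thousand as an assumed round number):

```python
# Back-of-the-envelope check of the undercount percentage discussed above
reported = 2_653_426   # Fall 2012 students enrolled exclusively in DE courses
undercount = 100_000   # assumed undercount from the two systems

error_pct = undercount / reported * 100
print(round(error_pct), "% error")  # rounds to 4
```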

The IPEDS methodology does not work for innovative programs…and this will only get worse

One institution that uses as many as 28 start dates for courses estimated that there was approximately a 40% undercount in its reported enrollments. A student completing a full complement of courses in a 15-week period might not be enrolled in all of those courses at the census date. With the increased use of competency-based programs, adaptive learning, and innovations still on the drawing board, it is conceivable that the census dates used by an institution (IPEDS gives some options) might not serve every type of educational offering.

The definition of ‘distance education’ is causing confusion

It is impossible to get an accurate count of anything if there is not a clear understanding of what should or should not be included in the count. The definition of a “distance education course” from the IPEDS Glossary is:

A course in which the instructional content is delivered exclusively via distance education.  Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.

Even with that definition, colleges faced problems counting ‘blended’ or ‘hybrid’ courses. What percentage of a course needs to be offered at a distance for it to be counted in the federal report? Some colleges had their own standard (or one prescribed by the state), and the percentage required to be labeled a “distance education” course varied greatly. One reported that it included all courses with more than 50% of the content offered at a distance.

To clarify the federal definition, one college said they called the IPEDS help desk. After escalating the issue to a second line manager, they were still unclear on exactly how to apply the definition.

The Online Learning Consortium is updating its distance education definitions. Its current work could inform IPEDS on possible definitions, but probably contains too many categories for such widespread data gathering.

There is a large overcount of distance education students

Because many colleges used their own definition, there is a massive overcount of distance education. At least, it is an overcount relative to the current IPEDS definition. This raises the question, is the near 100% standard imposed by that definition useful in interpreting activity in this mode of instruction? Is it the correct standard since no one else seems to use it?

In addressing the anomalies, IPEDS reporting becomes burdensome or the problems are ignored

In decentralized institutions or in institutions with “self-support” units that operate independently from the rest of campus, their data systems are often not connected. They are also faced with simultaneously having to reconcile differing “distance education” definitions. One choice for institutional researchers is to knit together numbers from incompatible data systems and/or with differing definitions. Often by hand. To their credit, institutional researchers overcome many such obstacles. Whether it is through misunderstanding the requirements or not having the ability to perform the work, some colleges did not tackle this burdensome task.

Conclusions – We Don’t Know

While these analyses have shed light on the subject, we are still left with the feeling that we don’t know what we don’t know. In brief, the biggest finding brings to mind former Secretary of Defense Donald Rumsfeld’s famous rambling:

There are known knowns. These are things we know that we know. We also know there are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are ones we don’t know we don’t know.

The net effect is not known

Some institutions reported accurately, some overcounted, some undercounted, some did both at the same time. What should the actual count be?

We don’t know.

The 2012 numbers are not a credible baseline

The distance education field looked forward to the Fall 2012 Enrollment statistics with distance education numbers as a welcome baseline for the size and growth of this mode of instruction. That is not possible, and the problems will persist with the Fall 2013 Enrollment report when those numbers are released. These problems can be fixed, but it will take work. When can we get a credible baseline?

We don’t know.

A large number of students have not been included on ANY IPEDS survey, EVER.

A bigger issue for the U.S. Department of Education goes well beyond the laser-focused issue of distance education enrollments. Our findings indicate that there are hundreds of thousands of students who have never been reported on any IPEDS survey that has ever been conducted. What is the impact on IPEDS? What is the impact on the states where they systematically underreported large numbers of students?

We don’t know.

Who is at fault?

Everybody and nobody.  IPEDS is faced with institutional practices that vary greatly and often change from year-to-year as innovations are introduced.  Institutional researchers are faced with reporting requirements that vary depending on the need, such as state oversight agencies, IPEDS, accrediting agencies, external surveys and ranking services, and internal pressures from the marketing and public relations staffs.  They do the best they can in a difficult situation. Meanwhile, we are in an environment in which innovations may no longer fit into classic definitional measurement boxes.

What to expect?

In the end, this expansion of data from NCES through the IPEDS database is a worthwhile effort in our opinion, and we should see greater usage of real data to support policy decisions and market decisions thanks to this effort. However, we recommend the following:

  • The data from the Fall 2012 and Fall 2013 reporting periods will include significant changes in methodology from participating institutions. Assuming that we get improved definitions over time, there will also be changes in reporting methodology at least through Fall 2015. We therefore recommend that analysts and policy-makers not put too much credence in year-over-year changes for the first two or three years.
  • The most immediate improvement available is for NCES to clarify and gain broader consensus on the distance education definitions. This process should include working with accrediting agencies, whose own definitions influence school reporting, as well as leading colleges and universities with extensive online experience.

Appendix A: Methodology

The Process for Selecting Institutions to Survey

The selection process for institutions to survey was neither random nor scientific. A multi-step process of identifying institutions that might have had problems in reporting distance education enrollments was undertaken. The goal was to identify twenty institutions to be canvassed. The steps included:

  • A first cut was created by an “eyeball” analysis of the Fall 2012 IPEDS Fall Enrollment database to identify institutions that may have had problems in responding to the distance education enrollment question.
    • Colleges that reported distance education enrollments that did not appear to be in scope with the size of the institution (i.e., a large institution with very low distance education enrollments) or what we knew about their distance education operations were included.
    • Special attention was paid to land grant colleges as they are likely to have self-funded continuing or distance education units.
    • Institutions in the California State University system were excluded.
    • This resulted in a list of a little more than 100 institutions.
  • The second cut was based upon:
    • Including colleges across different regions of the country.
    • Including a private college and an HBCU as indicators as to whether this problem might be found in colleges from those institutional categories.
    • Twenty institutions were identified.
  • In side discussions, a distance education leader at a public university agreed to participate in the survey. This brought the total to twenty-one institutions.

Questions Asked in the Survey

  1. Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  2. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, approximately how many enrollments are under-counted?
  3. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, why did you not report some enrollments?

The post Investigation of IPEDS Distance Education Data Highlights System Not Ready for Modern Trends appeared first on e-Literate.

by Phil Hill at September 26, 2014 06:00 AM

Steve Swinsburg

TextWrangler filters to tidy XML and tidy JSON

I work with XML and JSON a lot, often as the input to or output from web services. Generally it is unformatted, so before I can read the data I need it formatted and whitespaced. So here are some TextWrangler filters to tidy up XML and JSON documents. First, the XML one, a shell one-liner:

XMLLINT_INDENT=$'\t' xmllint --format --encode utf-8 -

Save this into a file called, say, Tidy XML. The JSON filter is a short Python script:

import fileinput
import json

print(json.dumps(json.loads(''.join([line.strip() for line in fileinput.input()])),
                 sort_keys=True, indent=2))

Save this into a file called, say, Tidy JSON.

Drop these into ~/Library/Application Support/TextWrangler/Text Filters. You can then run them on a file within TextWrangler by choosing Text > Apply Text Filter > [filter].
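To see what the JSON filter does without opening TextWrangler, the same two library calls can be exercised directly; the sample input here is made up:

```python
import json

# Compact, unformatted JSON, as a web service might return it
raw = '{"zeta": 1, "alpha": {"nested": [1, 2, 3]}}'

# Same approach as the filter: parse, then re-serialise sorted and indented
tidy = json.dumps(json.loads(raw), sort_keys=True, indent=2)
print(tidy)
```

The keys come back sorted and each nesting level is indented by two spaces, which is all the filter does.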

by steveswinsburg at September 26, 2014 12:31 AM

September 25, 2014

Adam Marshall

Break in WebLearn service: 25th Sept at 22:00

WebLearn will be unavailable for a short period between 22:00 and 22:30 on Thursday 25th September 2014 whilst repairs are made to the faulty TurnItIn configuration: this should fix the problems that users are experiencing relating to the TurnItIn integration within the Assignments tool. We sincerely apologise as there will be no service to users during this period.

Note: TurnItIn-enabled assignments created before the move to WebLearn 10 will now have their backlog of Originality Reports processed; unfortunately, TurnItIn-enabled assignments created after the move to WebLearn 10 will need to be recreated from scratch.

Please contact the WebLearn team if you have any questions or need help with this. Please accept our apologies for the inconvenience that this mistake may have caused.

by Adam Marshall at September 25, 2014 03:35 PM

Apereo OAE

Apereo OAE Heron is now available!

The Apereo Open Academic Environment (OAE) project team is extremely proud to announce the next major release of the Apereo Open Academic Environment; OAE Heron or OAE 9.

OAE Heron is a landmark release that introduces the long awaited folders functionality, allowing for sets of content items to be collected, organised, shared and curated. OAE Heron also provides full support for Shibboleth access management federations and brings improvements to activities, (email) notifications and the REST API documentation. Next to that, OAE Heron also ships with a wide range of overall usability improvements.



Using the personal and group libraries, Apereo OAE has always allowed collaboration to grow organically, reflecting how most of our collaborations work in real life. Individual content items could be shared with people and groups, making those items available in their respective libraries. This has always tested extremely well in usability testing, and not requiring the organisation of items upfront has been considered to reduce the obstacles to collaboration.

However, sustained usage and usability testing have also highlighted a number of challenges with this approach. First of all, it was difficult to group items that logically belong together (e.g. a set of field trip pictures) and to share and interact with them as a single unit. In addition, heavy use of the system showed that libraries could become quite hard to manage and clearly lacked some form of organisation.

Therefore, OAE Heron introduces the long-awaited folders functionality, a feature that we've been working on for an extended period of time and that has gone through many rounds of usability testing. OAE Folders allow a set of content items to be grouped into a folder. This folder can be shared with other people and groups and has its own permissions and metadata. A folder also gets its own thumbnail picture based on the items inside it, and folders will generate helpful activities, notifications and emails.

OAE Folders also stay true to the OAE philosophy, and therefore content items are never bound to a folder. This means that the items in a folder can still be used as independent content items and can be shared, discussed, etc. individually. It also means that a content item can belong to multiple folders at the same time, opening the door to re-mixing and content curation, allowing interesting new folders to be created from existing folders and content items.

Whilst maintaining the ability to grow collaboration organically, OAE Folders allow for a better and more logical organisation of content items and open the door to many interesting content re-use scenarios.

Shibboleth federations

Many countries around the world now expose their own Shibboleth access management federation. This provides an organised and managed way in which an application can be offered to many institutions at the same time, directly integrating with the institutional Single Sign On systems.

OAE Heron makes it possible for an OAE installation to become a recognised Service Provider for one or more of these federations. This dramatically simplifies the tenant creation process for an institution that's a member of one of these access management federations, making it possible to set up an OAE tenant with full Shibboleth SSO integration in a matter of minutes.

Email improvements

OAE Heron introduces significant email notification improvements for those users that have their email preference set to Immediate. OAE was already capable of aggregating a series of actions that happened in quick succession into a single email. OAE Heron makes this possible over a longer period of time, and will hold off sending an email until a series of events that would otherwise generate multiple email notifications has finished. This dramatically cuts down the number of emails that are sent out by OAE and provides a more intelligent email update to users.
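The general idea can be sketched as follows (an illustrative sketch only, not OAE's actual code; the class name and quiet period are invented): actions that would each trigger an email are queued, and a single digest is sent once the burst has gone quiet.

```python
class EmailAggregator:
    """Illustrative burst-aggregation sketch -- not OAE's implementation."""

    QUIET_PERIOD = 300  # seconds of inactivity before mailing (invented value)

    def __init__(self):
        self.pending = []       # queued event descriptions
        self.last_event = None  # timestamp of the most recent event

    def record(self, description, now):
        # Instead of emailing immediately, queue the event
        self.pending.append(description)
        self.last_event = now

    def flush(self, now):
        # Send one combined digest only once the burst has quieted down
        if self.pending and now - self.last_event >= self.QUIET_PERIOD:
            digest = "Digest: " + "; ".join(self.pending)
            self.pending = []
            return digest
        return None
```

A mail loop would call record as activities arrive and flush periodically; a flurry of actions then produces one email instead of many.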

The display of email notifications on mobile devices has also been improved significantly, making the content of the email much easier to read.

Activity improvements

OAE Heron offers more descriptive activity summaries, especially in the area of content creation. These will for example provide a much better overview of the context in which an activity happened.

In addition, OAE Heron ensures that the indicator for the number of unread notifications a user has is always completely accurate.

REST API documentation

OAE Heron continues to build on the REST API documentation that was introduced in OAE Griffin. It makes all possible responses for each of the REST endpoints available through the documentation UI and further improves the quality of the available documentation.

Try it out

OAE Heron can be tried out on the project's QA server at It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 9.0.0 and can be downloaded from the following repositories:


Documentation on how to install the system can be found at

Instruction on how to upgrade an OAE installation from version 8 to version 9 can be found at

The repository containing all deployment scripts can be found at

Get in touch

The project website can be found at The project blog will be updated with the latest project news from time to time, and can be found at

The mailing list used for Apereo OAE is You can subscribe to the mailing list at

Bugs and other issues can be reported in our issue tracker at

by Nicolaas Matthijs at September 25, 2014 12:22 PM

September 24, 2014

Sakai Project

Sakai Virtual Conference 2014: Bridging Education with Technology November 7, 2014

Sakai Virtual Conference 2014
Bridging Education with Technology
November 7, 2014 - Online   #SakaiVC14

Register now to attend the first ever Sakai Virtual Conference on Friday, November 7th!

September 24, 2014 04:43 PM

Apereo October-December 2014 Webinar Program

Webinars will use Big Blue Button. Choose Apereo Room 1, enter your name and the password apereo at -


September 24, 2014 04:40 PM

2014 Educause Conference - The Open Communities Reception hosted by Apereo Foundation and Open Source Initiative (OSI)

Tuesday September 30th, 2014
6:30 PM - 8:00 PM Eastern Time
Florida Ballroom A, Convention Level, Hyatt Regency Hotel

September 24, 2014 04:36 PM

September 23, 2014

Michael Feldstein

New LMS Market Data: Edutechnica provides one-year update

In Fall 2013 we saw a rich source of LMS market data emerge.

George Kroner, a former engineer at Blackboard who now works for University of Maryland University College (UMUC), has developed what may be the most thorough measurement of LMS adoption in higher education at Edutechnica (OK, he’s better at coding and analysis than site naming). This side project (not affiliated with UMUC) started two months ago based on George’s ambition to unite various learning communities with better data. He said that he was inspired by the Campus Computing Project (CCP) and that Edutechnica should be seen as complementary to the CCP.

The project is based on a web crawler that checks against national databases as a starting point to identify the higher education institution, then goes out to the official school web site to find the official LMS (or multiple LMSs officially used). The initial data is all based on the Anglosphere (US, UK, Canada, Australia), but there is no reason this data could not expand.

There is new data available in Edutechnica’s one-year update, with year-over-year comparisons available as well as improvements to the methodology. Note that the methodology has improved both in terms of setting the denominator and in terms of how many schools are included in the data collection.

The Fall 2014 data now includes all schools with more than 800 enrollments:

There’s more data available on the site, including measures of the Anglosphere (combining US, UK, Canada and Australia data) as well as comparison tables for 2013 to 2014. Go read the whole post.

[Chart: LMS Anglo 2014]

In the meantime, here are some initial notes on this data. Given the change in methodology, I will focus on major changes.

  • Blackboard’s BbLearn and ANGEL continue to lose market share in the US[1] - Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai are essentially unchanged in US - Using the 2013 to 2014 tables (> 2000 enrollments), D2L has added only 2 schools, Moodle none, and Sakai 2 schools.
  • Canvas is the fastest growing LMS and has overtaken D2L - Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have larger US market share than D2L (13.7% to 12.2% of total enrollments using table above).
  • BbLearn is popular in the UK while Moodle is the largest provider in Canada and Australia - The non-US numbers are worth reviewing, even without the same amount of detail as we have for US numbers.

While this data is very useful, I will again point out that no one to my knowledge has independently verified the accuracy of the data at this site. I have done sanity checks against Campus Computing and ITC data, but I do not have access to the Edutechnica specific mechanism for counting systems. In order to gain longer-term acceptance of these data sets, we will need some method to provide some level of verification.

In the meantime, enjoy the new market data.

Update: Allan Christie has a post up questioning the source data for Australia. I hope this information is used to improve the Edutechnica data set or at least leads to clarifications.

Put simply, it is generally accepted that there are 39 universities (38 public, 1 private) in Australia. Given the small number of universities and my knowledge of the sector I know that there are 20 (51%) universities which use Blackboard as their enterprise LMS, 16 (41%) use Moodle, and 3 (8%) use D2L. It is acknowledged that there are some departments within universities that use another LMS but according to Edutechnica’s methodology these were excluded from their analysis.

  1. Disclosure: Blackboard is a client of MindWires Consulting.

The post New LMS Market Data: Edutechnica provides one-year update appeared first on e-Literate.

by Phil Hill at September 23, 2014 10:58 AM

September 20, 2014

Michael Feldstein

On False Binaries, Walled Gardens, and Moneyball

D’Arcy Norman started a lively inter-blog conversation like we haven’t seen in the edublogosphere in quite a while with his post on the false binary between LMS and open. His main point is that, even if you think that the open web provides a better learning environment, an LMS provides a better-than-nothing learning environment for faculty who can’t or won’t go through the work of using open web tools, and in some cases may be perfectly adequate for the educational need at hand. The institution has an obligation to provide the least-common-denominator tool set in order to help raise the baseline, and the LMS is it. This provoked a number of responses, but I want to focus on Phil’s two responses, which talk at a conceptual level about building a bridge between the “walled garden” of the LMS and the open web (or, to draw on his analogy, keeping the garden but removing the walls that demarcate its border). There are some interesting implications from this line of reasoning that could be explored. What would be the most likely path for this interoperability to develop? What role would the LMS play when the change is complete? For that matter, what would the whole ecosystem look like?

Seemingly separately from this discussion, we have the new Unizin coalition. Every time that Phil or I write a post on the topic, the most common response we get is, “Uh…yeah, I still don’t get it. Tell me again what the point of Unizin is, please?” The truth is that the Unizin coalition is still holding its cards close to its vest. I suspect there are details of the deals being discussed in back rooms that are crucial to understanding why universities are potentially interested. That said, we do know a couple of broad, high-level ambitions that the Unizin leadership has discussed publicly. One of those is to advance the state of learning analytics. Colorado State University’s VP of Information Technology Pat Burns has frequently talked about “educational Moneyball” in the context of Unizin’s value proposition. And having spoken with a number of stakeholders at Unizin-curious schools, it is fair to say that there is a high level of frustration with the current state of play in commercial learning analytics offerings that is driving some of the interest. But the dots have not been connected for us. What is the most feasible path for advancing the state of learning analytics? And how could Unizin help in this regard?

It turns out that the walled garden questions and the learning analytics questions are related.

The Current State of Interoperability

Right now, our LMS gardens still have walls and very few doors, but they do have windows, thanks to the IMS LTI standard. You can do a few things with LTI, including the following:

  • Send a student from the LMS to someplace elsewhere on the web with single sign-on
  • Bring that “elsewhere” place inside the LMS experience by putting it in an iframe (again, with single sign-on)
  • Send assessment results (if there are any) back from that “elsewhere” to the LMS gradebook.

The first use case for LTI was to bring a third-party tool (like a web conferencing app or a subject-specific test engine) into the LMS, making it feel like a native tool. The second use case was to send students out to a tool that needed full control of the screen real estate (like an eBook reader or an immersive learning environment) but to make that process easier for students (through single sign-on) and teachers (through grade return). This is nice, as far as it goes, but it has some significant limitations. From a user experience perspective, it still privileges the LMS as “home base.” As D’Arcy points out, that’s fine for some uses and less fine for others. Further, when you go from the LMS to an LTI tool and back, there’s very little information shared between the tools. For example, you can use LTI to send a student from the LMS to a WordPress multiuser installation, have WordPress register that student and sign that student in, and even provision a new WordPress site for that student. But you can’t have it feed back information on all the student’s posts and comments into a dashboard that combines it with the student’s activity in the LMS and in other LTI tools. Nor can you use LTI to aggregate student posts from their respective WordPress blogs that are related to a specific topic. All of that would have to be coded separately (or, more likely, not done at all). This is less than ideal from both user experience and analytics perspectives.
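
For the curious, the plumbing behind an LTI 1.x launch is just a browser form POST whose parameters are signed with OAuth 1.0 HMAC-SHA1. Here is a minimal, self-contained sketch of that signing step; the tool URL, consumer key, secret, and launch values are all made up for illustration, and a real LMS would also generate a fresh nonce and timestamp per launch.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class LtiLaunchSketch {

    // OAuth uses RFC 3986 percent-encoding, slightly stricter than URLEncoder's
    static String enc(String s) throws Exception {
        return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
    }

    // Build the OAuth 1.0 signature base string and HMAC-SHA1 sign it.
    // A TreeMap keeps the parameters in the sorted order OAuth requires.
    public static String sign(String launchUrl, SortedMap<String, String> params,
                              String consumerSecret) throws Exception {
        StringBuilder norm = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (norm.length() > 0) norm.append('&');
            norm.append(enc(e.getKey())).append('=').append(enc(e.getValue()));
        }
        String base = "POST&" + enc(launchUrl) + "&" + enc(norm.toString());
        Mac mac = Mac.getInstance("HmacSHA1");
        // signing key = encoded consumer secret + "&" + (empty) token secret
        mac.init(new SecretKeySpec((enc(consumerSecret) + "&")
                .getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(base.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        SortedMap<String, String> p = new TreeMap<>();
        p.put("lti_message_type", "basic-lti-launch-request");
        p.put("lti_version", "LTI-1p0");
        p.put("resource_link_id", "course-101-unit-3"); // identifies the placement
        p.put("user_id", "student-42");
        p.put("roles", "Learner");
        p.put("oauth_consumer_key", "example-key");
        p.put("oauth_nonce", "demo-nonce");       // random in practice
        p.put("oauth_timestamp", "1411000000");   // "now" in practice
        p.put("oauth_signature_method", "HMAC-SHA1");
        p.put("oauth_version", "1.0");
        // The LMS POSTs p plus oauth_signature to the tool as a form submission
        System.out.println("oauth_signature="
                + sign("https://tool.example.edu/launch", p, "example-secret"));
    }
}
```

The tool recomputes the same signature from the shared secret to verify that the launch really came from the LMS — which is exactly why nothing richer than these launch parameters flows between the two systems.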

Enter Uniz…Er…Caliper

There is an IMS standard in development called Caliper that is intended to address this problem (among many others). I have described some of the details of it elsewhere, but for our current purposes the main thing you need to know is that it is based on the same concepts (although not the same technical standards) as the semantic web. What is that? Here’s a high-level explanation from the Man Himself, Mr. Tim Berners-Lee:

[Embedded video: Tim Berners-Lee explains the semantic web.]

The basic idea is that web sites “understand” each other. The LMS would “understand” that a blog provides posts and comments, both of which have authors and tags and categories, and some of which have parent/child relationships with others. Imagine if, during the LTI initial connection, the blog told the LMS about what it is and what it can provide. The LMS could then reply, “Great! I will send you some people who can be ‘authors’, and I will send you some assignments that can be ‘tags.’ Tell me about everything that goes on with my authors and tags.” This would allow instructors to combine blog data with LMS data in their LMS dashboard, start LMS discussion threads off of blog posts, and probably a bunch of other nifty things I haven’t thought of.

But that’s not the only way you could use Caliper. The thing about the semantic web is that it is not hub-and-spoke in design and does not have to have a “center.” It is truly federated. Perhaps the best analogy is to think of your mobile phone. Imagine if students had their own private learning data wallets, the same way that your phone has your contact information, location, and so on. Whenever a learning application—an LMS, a blog, a homework product, whatever—wanted to know something about you, you would get a warning telling you which information the app was asking to access and asking you to approve that access. (Goodbye, FERPA freakouts.) You could then work in those individual apps. You could authorize apps to share information with each other. And you would have your own personal notification center that would aggregate activity alerts from those apps. That notification center could become the primary interface for your learning activities across all the many apps you use. The PLE prototypes that I have seen basically tried to do a basic subset of this capability set using mostly RSS and a lot of duct tape. Caliper would enable a richer, more flexible version of this with a lot less point-to-point hand coding required. You could, for example, use any Caliper-enabled eBook reader that you choose on any device that you choose to do your course-related reading. You could choose to share your annotations with other people in the class and have their annotations appear in your reader. You could share information about what you’ve read and when you’ve read it (or not) with the instructor or with a FitBit-style analytics system that helps recommend better study habits. The LMS could remain primary, fade into the background, or go away entirely, based on the individual needs of the class and the students.

Caliper is being marketed as a learning analytics standard, but because it is based on the concepts underlying the semantic web, it is much more than that.
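
To make the idea concrete, here is a purely illustrative sketch of what a semantic activity event might look like, modeled as a plain Java map. The field names (actor, action, object, eventTime) echo the concepts discussed above, but this is not the actual Caliper schema — the spec was still in development — and all the URIs are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a Caliper-style event for "a student commented on a blog
// post tagged with an assignment", the kind of cross-tool fact LTI alone
// cannot report back to the LMS.
public class CaliperSketch {
    static Map<String, Object> blogCommentEvent() {
        Map<String, Object> actor = new LinkedHashMap<>();
        actor.put("id", "https://lms.example.edu/users/student-42"); // hypothetical URI
        actor.put("type", "Person");

        Map<String, Object> object = new LinkedHashMap<>();
        object.put("id", "https://blogs.example.edu/posts/123#comment-7"); // hypothetical URI
        object.put("type", "Comment");
        object.put("tag", "assignment-3"); // the LMS assignment acting as a tag

        Map<String, Object> event = new LinkedHashMap<>();
        event.put("actor", actor);
        event.put("action", "Commented");
        event.put("object", object);
        event.put("eventTime", "2014-09-20T16:08:00Z");
        return event;
    }

    public static void main(String[] args) {
        System.out.println(blogCommentEvent());
    }
}
```

Because every entity carries a typed, globally unique identifier, a dashboard — or a student's "learning data wallet" — can aggregate events like this from any number of tools without point-to-point integration code.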

Can Unizin Help?

One of the claims that Unizin stakeholders make is that the coalition can accelerate the arrival of useful learning analytics. We have very few specifics to back up this claim so far, but there are occasionally revealing tidbits. For example, University of Wisconsin CIO Bruce Maas wrote, “…IMS Global is already working with some Unizin institutions on new standards.” I assume he is primarily referring to Caliper, since it is the only new learning analytics standard that I know of at the IMS. His characterization is misleading, since it suggests a peer-to-peer relationship between the Unizin institutions and IMS. That is not what is happening. Some Unizin institutions are working in IMS on Caliper, by which I mean that they are participating in the working group. I do not mean to slight or denigrate their contributions. I know some of these folks. They are good, smart people, and I have no doubt that they are good contributors. But the IMS is leading the standards development process, and the Unizin institutions are participating side-by-side with other institutions and with vendors in that process.

Can Unizin help accelerate the process? Yes they can, in the same ways that other participants in the working group can. They can contribute representatives to the working groups, and those representatives can suggest use cases. They can review documents. They can write documents. They can implement working prototypes or push their vendors to do so. The latter is probably the biggest thing that anyone can do to move a standard forward. Sitting around a table and thinking about the standard is good and useful, but it’s not a real standard until multiple parties implement it. It’s pretty common for vendors to tell their customers, “Oh yes, of course we will implement Caliper, just as soon as the specification is finalized,” while failing to mention that the specification cannot be finalized until there are implementers. What you end up with is a bunch of kids standing around the pool, each waiting for somebody else to jump in first. In other words, what you end up with is paralysis. If Unizin can accelerate the rate of implementation and testing of the proposed specification by either implementing themselves or pushing their vendor(s) to implement, then they can accelerate the development of real market solutions for learning analytics. And once those solutions exist, then Unizin institutions (along with everyone else) can use them and try to discover how to use all that data to actually improve learning. These are not unique and earth-shaking contributions that only Unizin could make, but they are real and important ones. I hope that they make them.

The post On False Binaries, Walled Gardens, and Moneyball appeared first on e-Literate.

by Michael Feldstein at September 20, 2014 04:08 PM

September 19, 2014

Jason Shao

Searching for an ideal home whiteboard


I have to admit, a good whiteboard is one of my absolute favorite things in the world. While I absolutely spend all kinds of time writing in text editors and other digital media (and have tried just about every tablet/digital/smart-pen replacement for dumb pens and paper), there is something about how easy it is to work at a whiteboard, especially collaboratively. Maybe it’s good memories of doing work at the board in HS Math.

At home, I recently moved into a new apartment that has a slightly > 8′ long wall space right by the entry. While *clearly* too tight for me to want to put furniture on that wall, the space is *screaming* for a large whiteboard. One of my prime criteria is project/bucket-lists though – so I do expect items to stay up on the board for potentially a *loooooong* time. Looking at the options, it seems like we can figure something out:

  • Actually buying a whiteboard – about $300 for a melamine one, and $400-500 for one made of porcelain, which should last longer (though given I don’t use it all the time, melamine would probably be fine)
  • IdeaPaint – about $100-150, which I have used in offices before, and am a big fan of, but unfortunately requires *really* flat wall surfaces – and not sure it’s worth sanding and re-painting for the small number of blemishes (that absolutely will bother me). There are of course cheaper options – even Sherwin Williams seems to be getting in the game, but those seem to have mixed reviews
  • Mark R Board – the paper guys (Georgia Pacific) – a sample at:
  • Bathtub Reglaze Kit – about $30, plus something for probably a board or the like – seems like this is also a valid refinish strategy –
  • IKEA Hacking - about $120 to use a TORSBY glass-top table, picture ledge, and some mirror hangers. Example with pictures at:
  • White Tile Board – about $20 at Lowes, and even a bunch of comments that it’s a great DIY whiteboard, though some other people have posted notes about it not *quite* being the same, and definitely seeing ghosting if you leave writing on it for more than a few days
  • Decals. has some fascinating pre-printed ones – baseball fields, maps, graph paper – that seem really interesting. also has some

Across the different options, I have to admit I’m almost definitely going to look into the glass tabletop – I have lusted after that look for a while, and this looks like by far the most reasonable way to get there I’ve seen so far. Will post pics once I get something up.

… and then I can build one of these: :)


by jayshao at September 19, 2014 02:45 PM

Adam Marshall

The new WebLearn Lessons tool enables a better user experience

WebLearn 10 has brought with it the brand new Lessons tool. This tool allows one to set up a series of step-by-step learning exercises which students can be asked to work through in a structured manner.

Consider this example: without leaving the page, you can ask your students to read instructions, take part in a discussion, watch a video clip and finally answer a few questions to assess how well they understand what they have just learnt.

In the past it was not easy to link a number of WebLearn tools on one page to create a smooth web experience. The Lessons tool has now solved this problem!


Lessons is a new tool that allows you (as a maintainer/contributor) to organise resources, activities, and media on a single page. Each Lessons page can be customised to suit your needs, including links to other WebLearn tools, conditional release of items and content, etc.

Content from a number of WebLearn tools can be added to a Lessons page including Assignments, Polls, Forums, etc. We will be adding support for Tests in the next few weeks.

The “Add content to the Lessons page” section (see below) shows the full list of what you can have on a Lessons page.

How it works

If the Lessons tool is not in your site, you can easily add it via Site Info. Go to Site Info, click Edit Tools, select the Lessons tool and follow the on-screen instructions. As you can have more than one Lessons tool in a site, you can customise the name of each Lessons page.

Once Lessons is added to a site, it appears in the menu on the left hand side.


Develop a lesson

Click on the Lessons page link on the left hand side.  You can rename the lesson or add content to the lesson.


To rename a lesson you should click the ‘Settings’ icon.  Note that you can also configure other things here, for example, the time when the page is available.


Add content to the Lessons page

Go to a Lessons page: clicking on “Add Content” allows you to add the following content.

  • Add text:  the HTML editor enables you to add text, links, images, video/audio etc. to the page
  • Add multimedia: embed an image, video, Flash file, web page, etc. on the page
  • Add Resource: upload a file or use an existing file in Resources tool and link to it, or enter a URL to another site
  • Add Assignment: link to a WebLearn assignment
  • Add Quiz: link to a WebLearn test – not yet implemented
  • Add Forum Topic: link to a WebLearn forum topic
  • Add Question: create an embedded multiple-choice or short answer question. Site maintainers/contributors can view the results displayed as a bar chart
  • Add Comments tool: allow Access users to add comments to the page
  • Add Student Content: add a section where students can create their own pages
  • Add Subpage: add a child page on which you can create content, or links
  • Add Website: upload a ZIP file with web content
  • Add External Tool: add a tool using IMS Basic LTI

View a Lessons page as a Student

[Screenshots: student view of a Lessons page]

Photo credit: the two Creative Commons licensed images were produced by Longsight.

by Fawei Geng at September 19, 2014 01:36 PM

September 16, 2014

Adam Marshall

How to move the Home Page text into Resources

If the text for a Home Page has been entered via the WYSIWYG ‘rich text’ editor (rather than linking to a page in Resources), then all links without an explicit ‘target=’ attribute will open in a new tab, which is likely to alter the way a site’s home page works.

If this causes a problem, then follow this simple ‘recipe’. You do not need to do this if your ‘home page’ is currently a page in Resources.

1 Go to your site and click on the ‘Edit’ icon (pencil and paper)


2 Within the editor panel, select all text with your mouse and press CTRL-C (or equivalent) to copy the text. Make sure you scroll to the bottom of the page to get all text. (You may find it easier to switch to “Source” view to do this.)


3 Click ‘Cancel’ then navigate to the Resources Tool.


4 Create a new HTML page. You may want to create a special folder for this.


5 Paste the copied text into a new HTML page in Resources by clicking in the editor panel and pressing CTRL-V (or equivalent) then save the page (click ‘Continue’).


6 Give the page a sensible name. It is good practice to keep the “.html” extension.


7 Complete the Save process


8 Copy the URL of this page


9 Edit the Home Tool again, paste the URL into the box marked “Site Info URL” (underneath the editor panel) then save by clicking “Update Options”


10 Your page should look the same as before but will behave better. Check that there is nothing missing and check that the links work. If anything is wrong then repeat the process.

11 If you are happy, you may like to remove the original text from your Home page, but this is purely optional.

by Adam Marshall at September 16, 2014 03:21 PM

September 15, 2014


Changing Your Display Name in Sakai@UD

If you are a student and if your name in Sakai (or in other campus systems) is not what you want it to be, you can change it in UDSIS. more >

by Mathieu Plourde at September 15, 2014 04:23 PM

September 13, 2014

Alex Balleste

My history with Sakai

Tomorrow, September 13, is the 10th anniversary of Sakai at UdL. We went into production with Sakai 1.0 rc2 at the University of Lleida in 2004. Quite an achievement, and an adventure that has lasted 10 years and hopefully will last many more. Perhaps it was a little rushed, but luckily it worked out fine.

I will not tell the whole history of the UdL and Sakai; I'll tell you what I know and what I feel about my own history with Sakai, which is directly tied to the UdL. To get the full UdL version we would need many people's points of view.

So I will start before Sakai; we have to go back a few months. In January 2004 I applied in a competition for a temporary position at UdL, on a project to provide the University with an open source LMS. The tests were based on knowledge of Java programming with servlets and JSP, and knowledge of eLearning. The IT service management was looking for a Java developer profile, as they were evaluating the CourseWork platform from Stanford. They wanted developers to make improvements and adapt it to the UdL's needs. At that time, UdL ran WebCT and wanted to replace it with an open source system in the context of a free software migration across the whole University.

I had coded a little Java for my final degree project, but I didn’t know anything about servlets or JSP, so I bought a J2SE book, studied for some days beforehand, and took the test along with many others who wanted the position. I passed, and was lucky to win a programmer position on the “Virtual Campus” team with 2 other guys. David Barroso was already the team's analyst programmer, which meant he was my direct boss (a really good one).

We ran a pilot in CourseWork with a few subjects from the Computer Science degree, and it looked well adapted to our needs. We were also looking closely at the CHEF LMS. When the founding universities of Sakai announced that they would join forces to create an LMS based on the work behind those systems, the decision was made.

When Sakai started, it lacked many features that we thought necessary, like a gradebook and robust tools for assignments and assessment, but it still seemed a platform with great potential. It had big funding and the support of some of the best universities in the world, and that was enough for us to get into the project. The UdL's intention with Sakai was to go beyond the capabilities of an LMS and use it as a virtual space for the whole university: to provide, in the future, a set of community sites, and to use it for our intranet as well as a development framework for our applications.

So we started working on it, translating Sakai's interface into Catalan and adapting it to the institutional image. We created sites for the subjects of the degrees taught at the Escola Politècnica Superior of the UdL. On September 13, 2004 the platform went into production.

Sakai 1.0rc2 translated into Catalan and customized for UdL

During the process, we realized that translating the whole platform for each new version would be very expensive, and internationalization did not appear to be one of the community's imminent efforts, so the IT service manager Carles Mateu and David Barroso decided to offer support to internationalize Sakai. The idea was to provide a mechanism to translate Sakai easily, without having to modify the source code every time a new Sakai version was released. It was an essential feature for us, and it had to be done for us to continue with the Sakai project. David contacted the Sakai project chief director, Dr. Charles Severance, and offered our help to internationalize the whole of Sakai.

Chuck was glad about our offer and the work started soon. Beth Kirschner was the person in charge of managing our work and syncing it with the Sakai code, and I was lucky to have the responsibility of managing the task on our side. The first thing I did was a PoC with one tool: I extracted all the strings of a VM tool to a properties file, which was then loaded with Java Properties objects. The PoC worked well, but Beth encouraged me to use ResourceBundles instead of the plain Properties class. I wrote another PoC with those and it worked great. From that point began the tedious task of going through all the code to do this. The result was “tlang” and “rb” objects everywhere. That took 3 people between 2 and 3 months. We also used that process to write the Catalan translation. We used a Forge instance installed at UdL to synchronize these efforts: we implemented the changes there for Sakai 1.5, and when a tool was completely internationalized I notified Beth so she could apply the changes to Sakai’s main branch.
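
The pattern we ended up with can be sketched roughly like this. This is a toy example, not actual Sakai code: Sakai loads its bundles from .properties files on the classpath, whereas here the bundles are ListResourceBundle classes so the snippet is self-contained, and the "Chat"/"title" names are invented.

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class I18nSketch {
    // Stand-ins for per-tool .properties files (e.g. chat.properties and a
    // chat_ca.properties holding the Catalan translation).
    public static class Chat extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "title", "Chat Room" } };
        }
    }
    public static class Chat_ca extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "title", "Sala de xat" } };
        }
    }

    // Look up a string for the requested locale; unmatched locales fall back
    // to the base bundle, so new translations need no code changes at all.
    static String title(Locale locale) {
        ResourceBundle rb = ResourceBundle.getBundle("I18nSketch$Chat", locale,
                ResourceBundle.Control.getNoFallbackControl(
                        ResourceBundle.Control.FORMAT_CLASS));
        return rb.getString("title");
    }

    public static void main(String[] args) {
        System.out.println(title(new Locale("ca"))); // Sala de xat
        System.out.println(title(Locale.ENGLISH));   // Chat Room
    }
}
```

Once every tool fetched its strings through a bundle like this, shipping a new language meant dropping in one more translated bundle per tool — exactly what made the version-to-version translations sustainable.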

Although we worked on 1.5, the i18n changes were released in Sakai 2.0. For us it was a success, because it ensured that we could continue using the platform for longer. When version 2.0 came out we upgraded from our 1.0rc2. Only one word comes to mind when I remember that upgrade: PAIN. We had very little documentation and had to dig into the code for every error we found. We had to make a preliminary migration to 1.5, running scripts and processes on Sakai startup, and then upgrade to 2.0. The migration process failed on all sides, but with a lot of effort we finally went ahead with it.

Once we had the platform upgraded, we started to organize our virtual campus and university-wide LMS as an intranet, creating sites for specific areas and services and granting people access depending on the profile they had in LDAP. We also created sites for the rest of the degrees at our University.

From that moment our relationship with Sakai has not been so hard. Everything went better. The next version we ran was 2.2, which we upgraded to in 2006. By then we had been granted a Mellon Foundation award for our internationalization effort in Sakai. It is one of the things in my career that I am proudest of, but it was embittered because the prize money was never claimed; I did not find out until a couple of years after it happened. The award money was to be spent developing something interesting related to education, so in order to receive it a project proposal was needed, detailing how we would spend the $50K. UdL’s idea was to create a system to translate Sakai’s string bundles easily, as some tools already did back then (poedit, ...). The IT service direction thought it better that the project not be done by the same team that customized and internationalized Sakai at UdL (I guess they had other priorities in mind for us), but by people from a Computer Science research group at the UdL. I do not know why they never made the project or the proposal to get the award money, but nowadays I no longer mind. [Some light here ... see the comments]

Around that time our team started working on a new project that involved a large number of Catalan universities: the Campus Project. It initially began as a proposal to create an open source LMS from scratch to be used by them, led by the Open University of Catalonia (UOC). The UdL IT service direction board and David Barroso expressed their disagreement with spending 2M€ to finance such a project when open source LMSs like Moodle and Sakai already existed and that money could be invested in them. The project changed direction and tried to do something involving existing LMSs, so they decided to create a set of tools that would use an OKI OSID middleware implemented for both Moodle and Sakai. Although running external tools in the context of an LMS using standards, with a WS bus to interact with the LMSs’ APIs, was a good idea, I didn’t like how they wanted to use a double-level OKI OSID layer to interact with both LMS APIs. I thought that was too complex and hard to maintain.


We upgraded Sakai again in 2007, to version 2.4 (a release that gave us a lot of headaches). I also won the analyst programmer position on the Virtual Campus team that David Barroso vacated when he won the internal projects manager position. The selection process left me quite exhausted: long rounds of tests, delayed in time, and competition with nearly 30 colleagues made it hard to get the grades needed to win the position. Around then, the IT service direction board, Carles Mateu and Cesar Fernandez, resigned over discrepancies with the main university direction board about how to carry out the free software migration at the UdL. It was a shame, because since then we have experienced a strong rollback of free software policies and the situation of the entire IT service has worsened.

In September of that year, after the job competition had finished and I had been chosen, I went to spend a couple of weeks at the University of Michigan. My mission there was to work on the IMS-TI protocol with Dr. Chuck to see if we could use the standard as part of the Campus Project. Those two weeks were very helpful; we built several examples implementing IMS-TI with OSID. I spent a good time with Chuck and Beth in Ann Arbor during my visit to the United States, but I remember that trip especially fondly because, a few days before I went to Michigan, I got married in Las Vegas and we spent our honeymoon in New York.

Once back in Lleida, I insisted several times to the Campus Project architects on switching the standards for registering and launching apps to IMS-TI. Although the people leading the Campus Project loved the idea, they already had the architecture they wanted firmly in mind, so we went with the original plan.

Several of the partner universities in the project created tools for the system, and the UdL took on the responsibility of creating the OSID implementations for Sakai, as well as tools to register and launch remote tools within Sakai as if they were its own. Although implementing OSID was very tedious, it gave me a fairly deep knowledge of all the systems that later became the Sakai Kernel. Unfortunately, the Campus Project was never used, but in parallel IMS-LTI did end up winning.

In April 2008, taking advantage of a visit by Dr. Chuck to Barcelona to attend a conference organized by the Ramon Llull University, we had the first meeting of Spanish universities that ran, or were considering running, Sakai.

I went with the new director of IT services at the UdL, Carles Fornós. It was there I first saw Sakaigress, the furry pink Sakai mascot; Dr. Chuck was carrying her. I explained to my boss that these teddies were given as a reward for participation in the community, and the first thing he told me was, "we have to get one." During the meeting, the representatives of the two universities already running Sakai, the UPV and ourselves, explained a bit about our experience with Sakai and resolved the doubts raised by the other universities. At the end of the meeting, to everyone's surprise, Dr. Chuck gave the Sakaigress to us (UdL). He did it for two reasons, which he told me later: first, because we had been working hard in the community on internationalization and on promoting standards like IMS-TI through our work on the Campus Project implementation; and second, to silence some voices of doubt within our university about choosing Sakai instead of Moodle, reaffirming the community's commitment to our University.


During that meeting the idea of holding the first Sakai workshop also came up: a way to show people how to install Sakai, build tools, and discuss the platform. When my boss heard it, he whispered to me that we should volunteer to organize it, so I offered to do so.

At that meeting I also met the man in charge of implementing Sakai at the Valencian International University (VIU). We had talked with him and his technical staff by email about the OKI OSID implementation some days before, and they were very interested in that use case. Less than a month later, the team preparing the specifications for the VIU's Sakai implementation came to Lleida to visit us. Beforehand, I had tried to convince Carles Fornós to offer our services to the VIU: customizing Sakai for another university would have been very simple for us, and it was an opportunity to bring the UdL more funds to keep developers. Carles did not think it was a good idea, so I never even made the offer.
Moreover, when the UdL declined to offer services as an institution, I considered doing it personally with the help of some co-workers. At first the people responsible for the VIU's technical office liked the idea, but when the moment came to go ahead with the collaboration, the UdL's management board showed its disapproval (though not an outright prohibition), which made us pull back, given the risk of losing our jobs at the UdL if anything went wrong. In the end the work was done by Pentec-Setival (Samoo), and they did a great job. Perhaps it was the best outcome for the Spanish Sakai community, because it gave us a commercial provider supporting Sakai.

In June 2008 we held the first Sakai workshop. It was a very pleasant experience: our colleagues from the UPV, Raul Mengod and David Roldan, along with some staff from the UdL's Institute of Education Sciences (ICE), helped me give talks to other universities that were evaluating Sakai as their LMS.

Soon after, in February 2009, the second Sakai event was organized in Santiago de Compostela, and there the S2U group was consolidated. By then the UPNA was about to go into production, migrating the contents of its old LMS, WebCT. At that meeting I showed how to develop tools in Sakai. At the UdL we had upgraded to 2.5, and we also shared our impressions: we had suffered a lot of performance issues and crashes with 2.4, but 2.5 seemed a big improvement.

Days after that event, the UPV invited us to a presentation and a meeting with Michael Korcuska, the Sakai Foundation's executive director at the time. In Valencia I saw the preview of Sakai 3 for the first time. It was pitched as the new version that would replace Sakai 2; he told us the community would perhaps release a 2.7 version but not a 2.8, and it was expected for 2010.

Truth be told, I loved it, and I spent a lot of time tinkering with and learning the new technologies behind Sakai 3. I went to the workshops offered at the 2009 conference in Boston, and everything pointed to the community supporting the plan to move to Sakai 3, or at least so it seemed to me.

At the third S2U congress in November 2009, I gave a presentation on the benefits and the technology behind Sakai 3, to make people aware of the new road ahead for the LMS. Unfortunately we all know how it really went: Sakai 3 slowly shifted from "being the replacement" to "something complementary" and finally to "something totally different".

We did a proof of concept of a hybrid system combining Sakai CLE and OAE, Bedework, BigBlueButton, and Kaltura. The PoC was quite promising, but the shift in architecture, prompted by the poor results obtained with the chosen technology stack, frustrated our plans. OAE currently continues with another stack, but one far away from the idea we had in mind at first.

By then we owned a large number of tools developed for Sakai with JSF and Spring-Hibernate. That was a problem for the expected future migration between platforms 2 and 3. In late 2009 and early 2010 we started developing our own JS + REST framework on top of Sakai, so that tools could be implemented in a more neutral manner that would let us move between platforms in a less traumatic way. Thanks to everything I learned from the Sakai OAE technologies, I designed what is now our tool development framework for Sakai, DataCollector. It is a framework that lets us connect to multiple types of data sources and display them as JS apps inside Sakai. It uses Sakai realms as its permission mechanism and lets us build large template-based apps.
Gradually we have been replacing all the tools created in JSF (poorly maintainable) with ones based on our framework. Although in the end we did not move to the OAE platform, the framework has given us a set of apps that are more flexible and maintainable than those written in JSF.

In July 2010 we upgraded to version 2.7. We were still hoping to soon see Sakai OAE as part of our Virtual Campus ecosystem; everything seemed to fit pretty well. At the end of that month my first son was born, and I took a long paternity leave. I was away from the UdL, but I still wanted to attend the fourth Spanish Sakai congress in Barcelona in November to show all the work done with DataCollector. I went with my wife and my son, the youngest member of the S2U.

In June 2011 we had another meeting in Madrid, organized to show all the S2U members how to coordinate and use JIRA better, so that our contributions would be incorporated into the Sakai trunk. Some time before, we had arranged to implement some functionality together, and it had been difficult to get it into the main code; some universities paid Samoo to implement it, but the UM and the UdL preferred to implement it ourselves. What I really enjoyed in that meeting, though, was seeing how the UM had introduced Hudson into their CI process. I loved the idea, and my task over the following months was to refactor our whole process and automate builds, deployments, and tests with Jenkins and Selenium.
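The cycle a CI job like that drives boils down to build, deploy, then run acceptance tests. A minimal sketch of such a script, with the caveat that the Maven goals, deploy path, and profile name here are illustrative assumptions and not our actual UdL configuration:

```shell
#!/bin/sh
# Sketch of the build -> deploy -> test cycle a Jenkins job might run.
# Commands and paths are illustrative, not a real Sakai deployment.
set -e

DRY_RUN=${DRY_RUN:-1}   # defaults to a dry run so the sketch is safe to execute

run() {
  # Echo the command in dry-run mode; execute it otherwise.
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run mvn clean install                 # build the tools
run rsync -a target/ /opt/tomcat/     # deploy artifacts to the app server
run mvn verify -Pselenium             # run the Selenium acceptance suite
```

In Jenkins these three steps would typically become separate build steps (or pipeline stages), with the Selenium suite pointed at the freshly deployed instance.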

Looking back, I see that from 2010 to 2012 our involvement with the S2U and the whole Sakai community dropped considerably. I guess our eyes were on the shift to the new environment. We concentrated our efforts on developing the DataCollector framework as far as possible, to have a valid exit path for all the tools we had built since 2004. In addition, the S2U's objectives were not in line with ours at that moment: its approach focused on internationalization, which I believe was a mistake, because part of the community was already focused on that, and the S2U should not have concentrated only on those issues.

In July 2013 we did our sixth and, so far, last upgrade. During the upgrade to 2.9 we took the chance to migrate from our script-based user provisioning system to an implementation of Course Management. Mireia Calzada did an excellent job preparing the ETLs and helping to build a Hibernate-based implementation.

We took that opportunity to open up the functionality that lets teachers and students create their own sites to work together; now they have storage space for their own projects, communication tools, and so on. That gave us very good results, because people find the virtual campus more useful than in previous years. We also allowed teachers to invite external people and organize their sites as they wish. Many of the complaints we had received about the Sakai platform were not about features Sakai did not support, but about restrictions we ourselves had imposed.

The tasks around that upgrade allowed me to reconnect with the community: reporting and resolving bugs, participating in QA, and contributing what my colleagues and I had translated into Catalan.

During 2013 I also ventured into a personal project related to Sakai: together with Juanjo Meroño from Murcia, I created a feature that adds webcam streaming to Sakai's portal chat. The desire to contribute something personal to free software, and especially to Sakai, motivated me to take on this project. It was a very nice experience to work with the community again, and the help of Neal Caidin and Adrian Fish was key to getting it integrated into the Sakai main code.

In November 2013, Juanjo and I presented that feature at the sixth Spanish Sakai congress in Madrid. The important thing about that congress was that the whole S2U recovered its synergy; I am convinced that the University of Murcia staff were the key to inspiring the rest of us. If you are interested, you can read my impressions of the event in a previous blog post. Now we have weekly meetings and work as a team; resources flow smoothly to meet the needs of the group's members, and it is going pretty well.

Now I feel again that I am part of the Sakai community and the S2U. I guess that working closely with its members has let me believe that there is a bit of me in Sakai. I am waiting to find out when the next S2U meeting will be held, and maybe I will go with my second son, born this August.

And that is a brief summary of the history as I remember it; maybe some things were different, or happened at a different time. I just want to say thanks to the UdL, the Sakai project, and the S2U members for making this experience so amazing.

by Alex Ballesté at September 13, 2014 08:18 AM

September 09, 2014


New Terminology in Forums

In Sakai 2.9.3, the Forums tool uses a different terminology to refer to what used to be called a Thread. It is now called a Conversation. The following diagram presents graphically the new hierarchy of the terms now used in the Forums tool. Once in a topic, you can either start a top-level conversation or […] more >

by Mathieu Plourde at September 09, 2014 03:27 PM

Dr. Chuck

How to Achieve Vendor Lock-in with a Legit Open Source License – Affero GPL

Note: In this post I am not speaking for the University of Michigan, IMS, Longsight, or anyone else. I have no inside information on Kuali or Instructure and am basing all of my interpretations and commentary on the public communications from the web site and other publicly available materials. The opinions in this post are my own.

Before reading this blog post, please take a quick look at this video about Open Source:

The founding principles of Open Source from the video are as follows:

  1. Access to the source of any given work
  2. Free Remix and Redistribution of Any Given Work
  3. End to Predatory Vendor Lock-In
  4. Higher Degree of Cooperation

A decade ago, efforts like Jasig, Sakai, and Kuali were founded to collaboratively build open source software to meet the needs of higher education and achieve all of the above goals. Recently Kuali announced a pivot toward Professional Open Source. Several years ago the Sakai and Jasig communities decided to form a new shared non-profit organization called Apereo, moving away from Community Source and toward pure Apache-style open source. So, interestingly, at this time none of the projects that coined the term “Community Source” still use it to describe themselves.

In the August 22 Kuali announcement of the pivot from non-profit open source to for-profit open source, there was a theme of how much things have changed in the past decade since Kuali was founded:

…as we celebrate our innovative 2004 start and the progress of the last decade, we also know that we live in a world of change. Technology evolves. Economics evolve. Institutional needs evolve. We need to go faster. We need a path to a full suite of great products for institutions that want a suite. So it is quite natural that a 10-year-old software organization consolidates its insights and adapts to the opportunities ahead.

There were many elements in the August 22 announcement that merit discussion (i.e. here and here) but I will focus on these particular quotes from the FAQ that accompanied the August 22 announcement:

This plan is still under consideration. The current plan is for the Kuali codebase to be forked and re-licensed under Affero General Public License (AGPL).

The Kuali Foundation (.org) will still exist and will be a co-founder of the company. … The Foundation will provide initial capital investment for the company out of its reserves.

In a follow-up post five days later on August 27 they clarified the wording about licensing and capital:

All software that has been released under the current, Open Source Initiative approved Educational Community License (ECL) will and can continue under that license.

The software license for work done by the new entity and from its own capital will be the Open Source Initiative approved Affero GPL3 license (AGPL3).

While the details and overall intent of the August 22 and August 27 announcements from the Kuali Foundation may seem somewhat different, the AGPL3 license remains the central tenet of the Kuali pivot to professional open source.

The availability of the AGPL3 license, and its successful use to found and fund “open source” companies that can protect their intellectual property and force vendor lock-in, *is* the “change” of the past decade that underlies both of these announcements. It is what makes a pivot away from open source toward professional open source an investment with the potential for high returns to shareholders.

Before AGPL3

Before the AGPL3 license was created, there were two main approaches to open source licensing – Apache-style and GPL-style. The Apache-like licenses (including BSD, MIT, and ECL) allow commercial companies to participate fully in both the active development of the code base and the internal commercial use of that code base without regard to mixing of their proprietary code with the open source code.

The GNU Public License (GPL) had a “sticky” copyleft clause that forced any modifications of redistributed code to also be released open source. The GPL license was conceived pre-cloud and so its terms and conditions were all about distribution of software artifacts and not about standing up a cloud service with GPL code that had been modified by a company or mixed with proprietary code.

Many companies chose to keep it simple and avoided making any modifications to GPL software like the Linux kernel. Those companies could participate in Apache projects with gusto, but they kept the GPL projects at arm's length. Clever companies like IBM that wanted to advance the cause of GPL software like Linux would hire completely separate, isolated staff to work on Linux. They (and their lawyers) felt they could meet the terms of the GPL license by having one team tweak their cloud offerings based on GPL software and a completely separate team work on GPL software, never letting the two teams meet (kind of like matter and anti-matter).

So clever companies could work closely with GPL software and the associated projects if they were very careful. In a sense because GPL had this “loophole”, while it was not *easy* for commercial companies to engage in GPL projects when a company tweaked the GPL software for their own production use, it was *possible* for a diverse group of commercial companies to engage constructively in GPL projects. The Moodle project is a wonderful example of a great GPL project (well over a decade of success) with a rich multi-vendor ecosystem.

So back in 1997 the GPL and Apache-like licenses appeared far apart – but in practice, as the world moved to the cloud over the following decades, the copyleft clause in GPL became less and less of a problem. GPL-licensed code could leverage a rich commercial ecosystem almost as well as Apache-licensed code. The copyleft clause in GPL had become much weaker by 2005 because of the shift to the cloud.

AGPL – Fixing the “loophole” in GPL

The original purpose of the GPL license was to insist that over time all software would be open source and its clause to force redistribution was a core element.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

The “loophole” that let cloud vendors “have their cake and eat it too” was easily fixed by making the AGPL3 license tighter than the GPL license, adding this clause:

The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.

This seems simple enough. Fix the flaw. The GPL license did not imagine that someday software would not be “distributed” at all and only run in the cloud. The AGPL3 license solves that problem. Job done.

But solving one problem in the GPL pathos causes another in the marketplace. AGPL3 ensures that we can “see” the code that those who would remix and run on servers would develop, but it creates an unfortunate asymmetry that can be exploited to achieve a combination of vendor lock-in and open source.

AGPL3 = Open Source + Vendor Lock-In

The creators of GPL generally imagined that open source software would have a diverse community around it and that the GPL (and AGPL) licenses were a set of rules about how that community interacted with each other and constrain companies working with GPL software to bring their improvements back to the commons. But just like the GPL founders did not imagine the cloud, the AGPL creators did not imagine that open source software could be created in a proprietary organization and that the AGPL license would ensure that a diverse community would never form (or take a really long time to form) around the open source software.

These days in Educational Technology it is pretty easy to talk to someone on your Caltrans commute and get $60 million in venture capital for an educational technology startup. But your VCs want an exit strategy where they make a lot of money. There are likely no examples of VC-funded companies that used an Apache-like license for their core technology and were successful, or even funded at all. That hippie-share-everything crap just does not cut it with VCs. Vendor lock-in is the only way to protect asset value and flip that startup or go public.

Clever company founders figured out how to “have their cake and eat it too”. Here is the strategy. First take VC money and develop some new piece of software. Divide the software into two parts – (a) the part that looks nice but is missing major functionality and (b) the super-awesome add-ons to that software that really rock. You license (a) using the AGPL3 and license (b) as all rights reserved and never release that source code.

You then stand up a cloud instance of the software that combines (a) and (b), and you do not allow any self-hosted versions of the software, which would entail handing your (b) source code to your customers.

Since the (a) portion is incomplete, it poses no threat to the original vendor's commercial cloud offering. And since the (a) part is AGPL, it is impossible for a multi-vendor commercial ecosystem to emerge. If a small commercial competitor wants to augment the (a) code to compete with the initial vendor running (a)+(b) in the cloud, they are bound by the AGPL3 license to publish all of their improvements. This means that if the second company comes up with a better idea than the original company, the original company gets it, and any and all competitors of the second company get the improvement for free as well. But if the original company makes an improvement, they keep it hidden and proprietary, thus extending their advantage over all other commercial participants in the marketplace.

You can see this theme in the August 22 Kuali FAQ where they talk about “What happens to the Kuali Commercial Affiliates (KCAs)?”:

There will be ample and growing opportunities for the KCAs to engage with Kuali clients. The company would love for KCAs to take on 80% or more of the installation projects. The Kuali platform will continue to become more and more of a platform that KCAs can augment with add-ons and plugins. In addition, KCAs will likely be used to augment the company’s development of core code and for software projects for Kuali customers.

Reading this carefully, the role for companies other than Kuali, Inc. is to install the software developed by the new “Kuali, Inc.” company, or perhaps develop plugins. With the source code locked into AGPL3, the most a community of companies can be is “Kuali, Inc.'s little helpers”. The relationship is not a peer relationship.

When a company builds a proprietary product from scratch and releases a portion of it under AGPL3, there never was a commons, and the AGPL3 license is the best open source license the company can use to ensure that there never will be a true commons.

Revisiting – AGPL – Fixing the “bug” in GPL (oops)

Now, the AGPL3 advocates only actually achieve their goals when the original company goes out of business. Even then we never see the (b) component of the software; since the (a) part is open source, a truly open ecosystem could emerge around the carcass of the company – but by the time the company failed, it is not likely that its “half-eaten code carcass” would be all that useful.

What is far more likely is that the company using the AGPL strategy gets a few rounds of VC funding, thrives, and sells itself for a billion dollars or goes public for a few billion. After the founders pocket the cash, they no longer need to market themselves as “open source”, so they just change the license on (a) from AGPL3 to a proprietary license and stop redistributing the code. Since the (b) code was always proprietary, after a few months of non-open-source improvements to the (a) code, and given the deep interdependence of the (a) and (b) code, the open copy of (a) has effectively died on the vine. The resulting company has a wonderfully proprietary, closed-source product with no competitors, and the VCs have another half-billion dollars to give to some new person on a Caltrans ride. And the “wheel of life” goes on.

Each time open source loses and VCs and corporations win, I am sure somewhere in the world, about ten Teslas get ordered and a puppy cries while struggling to make it to the next level.

Proprietary Code is a Fine Business Model

By this time (if you have read this far) you have probably tagged this post as #tldr and #opensourcerant – it might indeed warrant #tldr – but it is not an open source rant.

I am a big fan of open source, but I am also a big fan of proprietary software development. Well over 90% of the educational technology market is proprietary software. Excellent proprietary offerings come from companies like Blackboard, Coursera, Instructure (part b), Piazza, Microsoft, Google, Edmodo, Flat World Knowledge, Pearson, McGraw Hill, Apple and many others. Without them, open source efforts like Sakai and Moodle would not exist. I am not so foolish as to believe that purely open source solutions will be sufficient to meet the needs of this market that I care so much about.

The right combination in a marketplace is a combination of healthy and competitive open source and proprietary products. This kind of healthy competition is great because choices make everyone stronger and keep teams motivated and moving forward:

  • Linux and Microsoft Windows
  • Microsoft Office and LibreOffice
  • Sakai and Blackboard
  • Apache HTTPd and Microsoft IIS
  • ….

The wisest of proprietary companies even see fit to invest in their open source competitors because they know it is a great way to make their own products better.

The reason that the “open source uber alles” strategy fails is that proprietary companies can raise capital far more effectively than open source efforts. This statement from an earlier Kuali blog post captures this nicely:

We need to accelerate completion of our full suite of Kuali software applications, and to do so we need access to substantially more capital than we have secured to date to meet this need of colleges and universities.

This is also why it is very rare for an open source product to dominate and push out proprietary competitors. Open source functions best as a healthy alternative and reasonably calm competitor.

AGPL3 + Proprietary + Cloud Strategy in Action

To their credit, Instructure has executed the AGPL3 open/closed hybrid strategy perfectly for their Canvas product. They have structured their software into two interlinked components and only released one of the components. They have shaded their marketing the right way so they sound “open source” to those who don’t know how to listen carefully. They let their fans breathlessly re-tell the story of “Instructure Open Source” and Instructure focuses on their core business of providing a successful cloud-hosted partially open product.

The Kuali pivot of the past few weeks to create Kuali, Inc. (actual name TBD) is pretty clearly an attempt to replicate the commercial success of the Instructure AGPL3 strategy, but in the academic business applications area. This particular statement from the August 22 Kuali announcement sums it up nicely:

From where will the founding investment come?

The Foundation will provide initial capital investment for the company out of its reserves. Future investment will come from entities that are aligned with Kuali’s mission and interested in long-term dividends. A first set of investors may be University foundations. There is no plan for an IPO or an acquisition.

Read this carefully. Read it like a lawyer, venture capitalist, or university foundation preparing to invest in Kuali, Inc. would read it. The investors in Kuali, Inc. may be more patient than the average investor – but they are not philanthropic organizations making a grant. The AGPL license strategy is essential to ensuring that an investment in Kuali, Inc. has the potential to repay its investors' capital, as well as a nice profit for its patient investors.

Is there any action that should be taken at this time? If I were involved in Kuali or on the board of directors of the Kuali Foundation, I would be very wary of any attempted change to the license of the code currently in the Kuali repository. A change to the kind of license, or a change to who “owns” the code, would be very significant. The good news is that, per the August 27 Kuali post, it appears that at least for now a board-level wholesale copyright change is off the table.

All software that has been released under the current, Open Source Initiative approved Educational Community License (ECL) will and can continue under that license.

I think a second issue concerns the individual Kuali projects. There are lots of Kuali projects, and each is at a different maturity level with its own community and its own leadership. The approach to Kuali, Inc. might differ across the various Kuali Foundation projects. In particular, if a project has a rich and diverse community of academic and commercial participants, it might be in that community's best interest to ignore Kuali, Inc. and just keep working with the ECL-licensed code base, managing its own community using open source principles.

If you are a member of a diverse community working on and using a Kuali project (Coeus and KFS are probably the best examples of this) you should be careful not to ignore a seemingly innocuous board action to switch to AGPL3 in any code base you are working on or depending on (including Rice). Right now because the code is licensed under the Apache-like Educational Community License, the fact that the Foundation “owns” the code hardly matters. In Apache-like licenses, the owner really has no more right to the code than the contributors. But as soon as the code you are working on or using is switched to AGPL3, it puts all the power in the hands of the copyright owner – not the community.

A worrisome scenario would be this: the license is quietly switched to AGPL3, the community continues to invest in the Kuali Foundation version of the code for a year or so, and then the Kuali Foundation Board transfers ownership of the code to someone else. At that point you would have to scramble and pick through the AGPL3 bits and separate them out if you really wanted to continue as a community. This is usually so painful after a year of development that no one ever does it.

The Winter of AGPL3 Discontent

If we look back at the four principles of open source that I used to start this article, we quickly can see how AGPL3 has allowed clever commercial companies to subvert the goals of Open Source to their own ends:

  • Access to the source of any given work – By encouraging companies to only open source a subset of their overall software, AGPL3 ensures that we will never see the source of the part (b) of their work and that we will only see the part (a) code until the company sells itself or goes public.
  • Free Remix and Redistribution of Any Given Work – This is true unless the remixing includes enhancing the AGPL work with proprietary value-add. But the owner of the AGPL-licensed software is completely free to mix in proprietary goodness – but no other company is allowed to do so.
  • End to Predatory Vendor Lock-In – Properly used, AGPL3 is the perfect tool to enable predatory vendor lock-in. Clueless consumers think they are purchasing an “open source” product with an exit strategy – but they are not.
  • Higher Degree of Cooperation – AGPL3 ensures that the copyright holder has complete and total control of how a cooperative community builds around software they hold the copyright to. Those who contribute improvements to AGPL3-licensed software line the pockets of the commercial company that owns the copyright on the software.

So AGPL3 is the perfect open source license for a company that thinks open source sounds great but an actual open community is a bad idea. The saddest part is that most of the companies that were using the “loophole” in GPL were doing so precisely so they could participate in and contribute to the open source community.


As I wrote about MySQL back in 2010, a copyright license alone does not protect an open source community:

Why an Open Source Community Should not cede Leadership to a Commercial Entity – MySql/Oracle

Many people think that simply releasing source code under an open license such as AGPL or GPL is “good enough” protection to ensure that software will always be open. For me, the license has always been a secondary issue – what matters is the health and vitality of the open community (the richness and depth of the bazaar around the software).

Luckily, the MySQL *community* saw the potential problem and made sure they had a community-owned version of the code, named MariaDB, which they have actively developed from the moment Oracle bought MySQL. I have not yet used MariaDB – but its existence is a reasonable insurance policy against Oracle “going rogue” with MySQL. So far, now more than four years later, Oracle has continued to do a reasonable job of managing MySQL for the common good, so I keep using it and teaching classes on it. But if MariaDB had not happened, by now the game would likely be over and MySQL would be a 100% proprietary product.

While I am sure that the creators of the Affero GPL were well intentioned, the short-term effect of the license is to give commercial cloud providers a wonderful tool to destroy open source communities or at least ensure that any significant participation in an open-source community is subject to the approval and controls of the copyright owner.

I have yet to see a situation where the AGPL3 license made the world a better place. I have only seen situations where it was used craftily to advance the ends of for-profit corporations that don’t really believe in open source.

It never bothers me when corporations try to make money – that is their purpose and I am glad they do it. But it bothers me when someone plays a shell game to suppress or eliminate an open source community. But frankly – even with that – corporations will and should take advantage of every trick in the book – and AGPL3 is the “new trick”.

Instead of hating corporations for being clever and maximizing revenue – we members of open source communities must simply be mindful of being led down the wrong path when it comes to software licensing.

Note: The author gratefully acknowledges the insightful comments from the reviewers of this article.

by Charles Severance at September 09, 2014 02:26 AM

August 28, 2014


Managing Announcements and Notifications

As the fall semester begins, IT staff members have received a number of inquiries related to notifications and sending announcements to students, especially with the growing number of instructors opting to use Canvas instead of Sakai. Below are some options and “gotchas” regarding notifications in UD-supported technologies, including Canvas, Sakai, and P.O. Box. I. Verify […] more >

by Mathieu Plourde at August 28, 2014 05:55 PM

August 27, 2014

Apereo OAE

Apereo OAE Griffin is now available!

The Apereo Open Academic Environment (OAE) project team is excited to announce the next major release of the Apereo Open Academic Environment: OAE Griffin, or OAE 8.

OAE Griffin brings a complete overhaul of the collaborative document experience, metadata widgets, full interactive REST API documentation and improved Office document previews. In addition, OAE Griffin introduces a wide range of incremental usability improvements, technical advances and bug fixes.


Collaborative documents

The collaborative document experience in OAE Griffin has been completely overhauled. Whilst OAE's collaborative note taking capabilities have consistently been identified as very useful during usability testing, the actual Etherpad editor user experience has always tested poorly and never felt like an inherent part of the OAE platform.

Therefore, OAE Griffin introduces a fully skinned and customised collaborative document editor. The Etherpad editor has been skinned to make it fit seamlessly into the overall OAE interface and a number of under-utilised features have been removed. The editor and toolbar now also behave a lot better on mobile devices. All of this creates a much cleaner, more integrated and easier to use collaborative document experience.

At the same time, the activities and notifications generated by collaborative documents have also been fine-tuned. OAE Griffin now detects which people have made a change and will generate accurate activities, providing a much better idea of what's been happening inside of a document.

Metadata widgets

It is now possible to see the metadata for all content items, discussions and groups. This includes the full title of the item, the description, who created it and when it was created. For content items and discussions, it is also possible to see the full list of managers, as well as the people and groups the item is shared with. All of this will provide a lot more context to an item, for example when discovering an interesting content item or when wondering who's involved in a discussion.

At the same time, the long-awaited download button has been provided for all content items, ensuring that the original file can easily be downloaded.

REST API Documentation

OAE Griffin introduces a REST API documentation framework and all of the OAE REST APIs have been fully documented. This work is based on a REST API documentation specification called Swagger, and offers a nice interactive UI where the documentation can be viewed and all of the REST endpoints can be tried.

This documentation is available on every OAE tenant and sits alongside the internal API documentation. All of this should provide sufficient information and documentation for widget development and integration with OAE.
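For illustration, a Swagger 1.2 resource declaration looks roughly like the following. The endpoint, nickname and model names here are hypothetical examples, not taken from the actual OAE API:

```json
{
  "apiVersion": "8.0.0",
  "swaggerVersion": "1.2",
  "basePath": "https://oae.example.org/api",
  "apis": [
    {
      "path": "/content/{contentId}",
      "operations": [
        {
          "method": "GET",
          "nickname": "getContent",
          "summary": "Get a full content profile",
          "type": "Content",
          "parameters": [
            {
              "name": "contentId",
              "paramType": "path",
              "description": "The id of the content item",
              "required": true,
              "type": "string"
            }
          ]
        }
      ]
    }
  ]
}
```

The interactive UI reads declarations like this one and renders a form for each operation, so every endpoint can be tried directly from the documentation page.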

Office documents

The OAE preview processor has been upgraded from LibreOffice 3.5 to LibreOffice 4.3. This brings tremendous improvements to the content previews that are generated for Office files (Word, Excel and PowerPoint). The display of shapes, pictures and tables in particular has been much improved, and some additional font support has been added as well.

Email improvements

The email notifications have been tweaked to ensure that emails sent out by OAE are as relevant as possible. At the same time, a number of visual improvements have been made to those emails to ensure that they look good on all devices.

Embedding improvements

Browsers have started introducing a set of new cross-protocol embedding restrictions, which were causing some embedded links not to display correctly in the content profile. Therefore, OAE Griffin puts a number of measures in place that improve link embedding and provide a fallback when a link cannot be embedded.

CAS Authentication

It is now possible to pick up and use SAML attributes released by a CAS authentication server. This allows for a user's profile metadata to be available immediately after signing into OAE for the first time, without having to pre-provision the account.
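As a rough sketch, the attributes arrive inside the SAML 1.1 assertion returned by the CAS server's validation response; the attribute names and values below are purely illustrative:

```xml
<saml1:AttributeStatement>
  <saml1:Attribute AttributeName="displayName"
                   AttributeNamespace="http://www.ja-sig.org/products/cas/">
    <saml1:AttributeValue>Jane Doe</saml1:AttributeValue>
  </saml1:Attribute>
  <saml1:Attribute AttributeName="mail"
                   AttributeNamespace="http://www.ja-sig.org/products/cas/">
    <saml1:AttributeValue>jane.doe@example.org</saml1:AttributeValue>
  </saml1:Attribute>
</saml1:AttributeStatement>
```

Mapping attributes like these onto the OAE user profile is what removes the need to pre-provision accounts before first sign-in.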


FontAwesome

The icons used in OAE Griffin have been upgraded from FontAwesome 3 to FontAwesome 4.3, allowing for a wider variety of icons to be used in widget development.

Apache Cassandra

OAE Griffin has been upgraded from Apache Cassandra 1.2.15 to Apache Cassandra 2.0.8, bringing a range of performance improvements, as well as support for lightweight transactions.
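Lightweight transactions in Cassandra 2.0 take the form of conditional CQL statements. A minimal sketch, using a hypothetical table rather than OAE's actual schema:

```sql
-- Conditional insert: only applied if no row with this key already exists.
INSERT INTO users (username, email)
VALUES ('jdoe', 'jdoe@example.org')
IF NOT EXISTS;

-- Conditional update: only applied if the current value matches.
UPDATE users SET email = 'new@example.org'
WHERE username = 'jdoe'
IF email = 'jdoe@example.org';
```

Under the hood these conditions are resolved with a Paxos round, so they cost more than plain writes and are best reserved for cases where a read-then-write race would corrupt data.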

Try it out

OAE Griffin can be tried out on the project's QA server. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 8.0.0 and can be downloaded from the following repositories:


Documentation on how to install the system can be found at

Instructions on how to upgrade an OAE installation from version 7 to version 8 can be found at

The repository containing all deployment scripts can be found at

Get in touch

The project website can be found at The project blog will be updated with the latest project news from time to time, and can be found at

The mailing list used for Apereo OAE is You can subscribe to the mailing list at

Bugs and other issues can be reported in our issue tracker at

by Nicolaas Matthijs at August 27, 2014 01:14 PM

August 25, 2014

Chris Coppola

Kuali 2.0

Last Friday, the Kuali Foundation made an announcement that stirred up quite a buzz. Kuali has formed a new Kuali Commercial entity (referred to below as Kuali-Company) that is a “for profit” enterprise owned by and aligned with the higher ed community through As a co-founder and community leader, rSmart has naturally been asked for comment, clarity, and information.

Let’s start with some facts…

  • Kuali software will continue to be open source.
  • Kuali will continue to be driven by higher education.
  • Kuali will continue to engage colleges and universities in the way that it always has.

… and we expect that…

  • Kuali-Company will be better at engaging the higher education community from a marketing and sales perspective.
  • Kuali will have a more effective product development capability organized in Kuali-Company.
  • Kuali-Company will deliver Kuali software in the cloud.

The Kuali mission is unwavering: to drive down the cost of administration for colleges and universities, keeping more money focused on the core teaching and research mission. Our (the Kuali community) mission hasn’t changed, but the ability to execute on it has improved dramatically. The former structure made it too difficult for colleges and universities to engage and benefit from Kuali’s work. This new model will simplify how institutions can engage. The former structure bred a lot of duplicative (and even competitive) work. The new structure will be more efficient.

People have been calling and asking what this means to rSmart. rSmart is made up of people who, like me, are passionate about the mission. We wake up every day and work hard to achieve the mission. rSmart has been involved in the leadership of Kuali since the day it started… actually before that, since we are one of Kuali’s founders. This change is no different, and we believe it is going to better enable us to fulfill our mission.

We’re excited about Kuali’s future. I’ve had the opportunity to get to know Joel Dehlin a bit and I’m confident that he is going to be a great addition and I look forward to working with him. rSmart is committed to our customers, to the Kuali mission, and to supporting this new direction.

Tagged: business, commercial-os, community, education, erp, future, kuali, open source, rSmart

by Chris at August 25, 2014 06:27 PM