Planet Sakai

May 01, 2016

Michael Feldstein

Fall 2014 IPEDS Data: Interactive table ranking DE programs by enrollment

Last week I shared a static view of the US institutions with the 30 highest enrollments of students taking at least one online (distance ed, or DE) course. But we can do better than that, thanks to some help from Justin Menard at LISTedTECH and his Tableau guidance.

The following interactive chart allows you to see the full rankings based on undergraduate, graduate and combined enrollments. It has two views – one for students taking at least one online course and one for exclusively online students. Note the following:

Tableau hints

  • (1) shows how you can change views by selecting the appropriate tab.
  • (2) shows how you can sort on any of the three measures (hover over the column header).
  • (3) shows the sector for each institution next to the institution name.

Have at it! You can also go directly to Tableau, allowing a wider view of the table.


The post Fall 2014 IPEDS Data: Interactive table ranking DE programs by enrollment appeared first on e-Literate.

by Phil Hill at May 01, 2016 11:33 PM

April 28, 2016

Michael Feldstein

No Filters: My ASU/GSV Conference Panel on Personalized Learning

ASU’s Lou Pugliese was kind enough to invite me to participate on a panel discussion on “Next-Generation Digital Platforms,” which was really about a soup of adaptive learning, CBE, and other stuff that the industry likes to lump under the heading “personalized learning” these days. One of the reasons the panel was interesting was that we had some smart people on the stage who were often talking past each other a little bit because the industry wants to talk about the things that it can do something about—features and algorithms and product design—rather than the really hard and important parts that it has little influence over—teaching practices and culture and other messy human stuff. I did see a number of signs at the conference (and on the panel) that ed tech businesses and investors are slowly getting smarter about understanding their respective roles and opportunities. But this particular topic threw the panel right into the briar patch. It’s hard to understand a problem space when you’re focusing on the wrong problems. I mean no disrespect to the panelists or to Lou; this is just a tough nut to crack.

I admit, I have few filters under the best of circumstances and none left at all by the second afternoon of an ASU/GSV conference. I was probably a little disruptive, but I prefer to think of it as disruptive innovation.

Here’s the video of the panel:

The post No Filters: My ASU/GSV Conference Panel on Personalized Learning appeared first on e-Literate.

by Michael Feldstein at April 28, 2016 01:57 PM

April 27, 2016

Michael Feldstein

Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution

The National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. Fall 2014 is the third year for which IPEDS includes distance education data.

Let’s look at the top 30 online programs for Fall 2014 (in terms of total number of students taking at least one online course). Some notes on the data source:

  • I have combined the categories ‘students exclusively taking distance education courses’ and ‘students taking some but not all distance education courses’ to obtain the ‘at least one online course’ category;
  • Each sector is listed by column;
  • IPEDS tracks data based on the accrediting body, which can differ for systems – I manually combined most for-profit systems into single institution entities, as well as Arizona State University[1];
  • See this post for Fall 2013 Top 30 data and see this post for Fall 2014 profile by sector and state.
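The roll-up described in the first note amounts to a simple per-institution sum. Here is a sketch with invented institution names and numbers (not actual IPEDS values):

```python
# Sketch of the category roll-up in the notes above; institution names and
# enrollment figures are invented, not actual IPEDS values.
rows = [
    {"institution": "Example University", "exclusive_de": 12000, "some_de": 8000},
    {"institution": "Sample State",       "exclusive_de":  3000, "some_de": 9000},
]

for row in rows:
    # 'at least one online course' = exclusively DE + some-but-not-all DE
    row["at_least_one_online"] = row["exclusive_de"] + row["some_de"]

# rank institutions by the combined measure, largest first
rows.sort(key=lambda r: r["at_least_one_online"], reverse=True)
```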

Fall 2014 Top 30 Largest Online Enrollments Per Institution

Number Of Students Taking At Least One Online Course (Graduate & Undergraduate Combined)

Top 30 Online Enrollments By Fall 2014 IPEDS Data

The post Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution appeared first on e-Literate.

by Phil Hill at April 27, 2016 11:21 PM

Apereo Foundation

April 26, 2016

Adam Marshall

WebLearn and Turnitin courses Trinity Term 2016

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the formal courses (3-hour sessions) can be downloaded for self study. Places are limited and bookings are required.

Click on the links provided to book a place, or for further information. Bookings open 30 days in advance, but you can express an interest in a course and receive a reminder to book when booking opens.

WebLearn courses:

Plagiarism awareness courses (Turnitin):

Byte-sized lunch time sessions:

These focus on particular tools, with plenty of time for questions and discussion.

User Group meeting:

by Jill Fresen at April 26, 2016 03:37 PM

Dr. Chuck

More Tsugi Refactoring – Removal of the mod folder

I completed the last of many refactoring steps of Tsugi yesterday, when I moved the contents of the “mod” folder into its own repository. The goal of all this refactoring was to get to the point where checking out the core Tsugi repository does not include any end-user tools – just the administrator, developer, key management, and support capabilities (LTI 2, CASA, ContentItem Store). The key point is that this console will also be used for the Java and NodeJS implementations of Tsugi until we build the console functionality in each of those languages, so it made no sense to drag in a bunch of PHP tools if you were just going to use the console. I wrote a bunch of new documentation showing how the new “pieces of Tsugi” fit together:

This means that as of this morning, if you do a “git pull” in your /tsugi folder, the mod folder will disappear. But have no fear – you can restore it with the following steps:

cd tsugi
git clone mod

And your mod folder will be restored. You will now have to do separate git pulls for both Tsugi and the mod folder.

I have all this in solid production (with the mod restored as above) with my Coursera and on-campus UMich courses, so I am pretty sure it holds together well.

This was the last of a multi-step refactor for this code to modularize it in multiple repositories so as to better prepare for Tsugi in multiple languages as well as plugging Tsugi into various production environments.

by Charles Severance at April 26, 2016 02:11 PM

April 22, 2016

Adam Marshall

WebLearn will be unavailable on Tuesday 26 April 2016 from 7-9am

There will be no service during this period; this is due to essential maintenance of the AFS filesystem.

We apologise for any inconvenience that this essential work may cause.

by Adam Marshall at April 22, 2016 03:32 PM

April 20, 2016

Apereo Foundation

April 19, 2016

Apereo Foundation

April 06, 2016

Adam Marshall

The Contact Us Tool

I thought it may be useful to show the information that is sent when one dispatches a message using the Contact Us form within WebLearn.

As you will see, the context from within WebLearn is included: username, email and so on, plus the URL of the site where the message was sent from. In addition, we are able to include information about the user’s browser which may be relevant to debugging a problem.

by Adam Marshall at April 06, 2016 02:59 PM

Dr. Chuck

Ring Fencing JSON-LD and Making JSON-LD Parseable Strictly as JSON

My debate with my colleagues[1, 2] about the perils of unconstrained JSON-LD as an API specification is coming to a positive conclusion. We have agreed to the following principles:

  • Our API standard is a JSON standard, and we will constrain our JSON-LD usage so that the API can be deterministically produced and consumed using *only* JSON parsing libraries. During de-serialization, it must be possible to parse the JSON deterministically using a JSON library without looking at the @context at all. During serialization, it must be possible to produce the correct JSON deterministically and add a hard-coded and well-understood @context section that does not need to change.
  • There should never be a requirement in the API specification or in our certification suite that forces the use of JSON-LD serialization or de-serialization on either end of the API.
  • If some software in the ecosystem covered by the standard decides to use JSON-LD serializers or de-serializers and cannot produce the canonical JSON form for our API, that software will be forced to change and generate the precise constrained JSON (i.e. we will ignore any attempts to coerce the rest of the ecosystem using our API to accept unconstrained JSON-LD).
  • Going forward we will make sure that the sample JSON we publish in our specifications will always be in JSON-LD Compacted form, with either a single @context or multiple contexts, with the default context included as “@vocab”, all fields in the default context having no prefixes, and all fields outside the default @context having simple and predictable prefixes.
  • We are hopeful and expect that Compacted JSON-LD is so well defined in the JSON-LD W3C specification that all implementations in all languages that produce compact JSON-LD with the same context will produce identical JSON. If for some strange reason, a particular JSON-LD compacting algorithm starts producing JSON that is incompatible with our canonical JSON – we will expect that the JSON-LD serializer will need changing – not our specification.
  • In the case of extending the data model, the prefixes used in the JSON will be agreed upon to maintain predictable JSON parsing. If we cannot pre-agree on the precise prefixes themselves then at least we can agree on a convention for prefix naming. I will recommend they start with “x_” to pay homage to the use of “X-” in RFC-822 and friends.
  • As we build API certification mechanisms we will check and validate incoming JSON to ensure that it is valid JSON-LD, issue a warning for any flawed JSON-LD, but consider that non-fatal and parse the content using only deterministic JSON parsing to judge whether or not an implementation passes certification.
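As a rough illustration of the production side of these principles, a producer can build plain JSON deterministically and bolt on a hard-coded @context. This is a sketch only: the IRIs, the "x_acme" prefix, and the serialize() helper are hypothetical placeholders, not taken from the actual specification.

```python
import json

# Sketch of the "hard-coded @context" principle. All IRIs, the "x_acme"
# prefix, and serialize() are hypothetical placeholders for illustration.
FIXED_CONTEXT = {
    "@vocab": "https://example.com/vocab/",
    "x_acme": "https://example.com/ext/acme/",
}

def serialize(fields, extensions=None):
    # Build plain JSON deterministically, then attach the fixed context.
    doc = {"@context": FIXED_CONTEXT, "@type": "Person"}
    doc.update(fields)                        # default-vocab fields: no prefix
    for key, value in (extensions or {}).items():
        doc["x_acme:" + key] = value          # extension fields: agreed prefix
    return json.dumps(doc, sort_keys=True)    # deterministic output

wire = serialize({"name": "Jane Doe", "jobTitle": "Professor"},
                 extensions={"debug": "42"})
```

Note that no JSON-LD library is involved at any point; a plain JSON serializer plus a fixed context is enough.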

It is our hope that for the next 3-5 years we can rely on JSON-only infrastructure while laying the groundwork for a future set of more elegant and expandable APIs using JSON-LD, once performance and ubiquity concerns around JSON-LD are addressed.

Some Sample JSON To Demonstrate the Point

Our typical serialization starts with the short form for a single default @context as in this example from the JSON-LD playground:

  {
    "@context": "",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Professor",
    "telephone": "(425) 123-4567",
    "url": ""
  }

But let’s say we want to extend this with a field – the @context would need to switch from a single string to an object that maps prefixes to IRIs, as shown below:

  {
    "@context": {
      "@vocab": "",
      "csev": ""
    },
    "@type": "Person",
    "url": "",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "csev:debug": "42"
  }

If you compact this with only the single default @context, all extensions get expanded:

  {
    "@context": "",
    "type": "Person",
    "": "42",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "schema:url": ""
  }

The resulting JSON is tacky and inelegant. If on the other hand you compact with this context:

  {
    "@context": {
      "@vocab": "",
      "csev": ""
    }
  }

You get JSON that is succinct and deterministic, with predictable prefixes; minus the context, it looks like clean JSON that one might design even without the influence of JSON-LD.

  {
    "@context": {
      "@vocab": "",
      "csev": ""
    },
    "@type": "Person",
    "csev:debug": "42",
    "jobTitle": "Professor",
    "name": "Jane Doe",
    "telephone": "(425) 123-4567",
    "url": ""
  }

What is beautiful here is that when you use the @vocab + extension prefixes as the @context, our “canonical JSON serialization” can be read by JSON-LD parsers and produced deterministically by a JSON-LD compact process.

In a sense, what we want for our canonical serialization is the output of a jsonld_compact operation; if you were to run the resulting JSON through jsonld_compact again, you would get the exact same JSON.

Taking this approach – pre-agreeing on all the official contexts and all prefixes for official contexts, as well as a prefix naming convention for any and all extensions – means we should be able to use pure-JSON libraries to parse the JSON whilst ignoring the @context completely.
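The consumption side of that approach can be sketched with nothing but a plain JSON parser, which reads the canonical form and never inspects the @context. The field names below (including the "csev:" extension prefix) follow the examples above; the sketch is illustrative, not part of any specification.

```python
import json

# Sketch of deterministic JSON-only consumption: parse with a plain JSON
# library and ignore the @context entirely. Field names follow the
# examples in this post.
wire = """{
  "@context": {"@vocab": "", "csev": ""},
  "@type": "Person",
  "csev:debug": "42",
  "name": "Jane Doe"
}"""

doc = json.loads(wire)
doc.pop("@context", None)        # deterministic parse: the context is ignored
name = doc["name"]               # default-vocab field, no prefix
debug = doc.get("csev:debug")    # extension field, agreed prefix
```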


Comments welcome. I expect this document will be revised and clarified over time to ensure that it truly represents a consensus position.

by Charles Severance at April 06, 2016 03:57 AM

April 05, 2016

Dr. Chuck

Abstract: Massively Open Online Courses (MOOCs) – Past, Present, and Future

This presentation will explore what it was like when MOOCs were first emerging in 2012 and what we have learned from the experience so far. Today, MOOC providers are increasingly focusing on becoming profitable, and this trend is changing both the nature of MOOCs and university relationships with MOOC platform providers. We will also look at how a university can scale the development of MOOCs and use knowledge gained in MOOCs to improve on-campus teaching, as well as how the MOOC market may change and how MOOC approaches and technologies may ultimately impact campus courses and programs.

by Charles Severance at April 05, 2016 01:37 PM

April 03, 2016

Steve Swinsburg

Tool properties in tool registration files

I discovered this feature by accident when setting up a new tool and configuring its registration file. The registration file is what you use to wire up a webapp in Sakai so that it can be added to sites. You can give it a title, description, tell it what site types are supported and a few other settings.

One of the recent features in Sakai is the ability to get a direct URL to any tool within Sakai. This is useful when you want to link to a tool without the portal around it.


Note that the links on the right-hand side are part of the tool registration. The ones on the left are controlled within the tool code itself, and together they make for a nice navbar when in full-screen mode.

However, if you have a tool that doesn’t need any header items, for example a summary tool or widget, and there are multiples of them on screen, you still get the Link and Help items, which can clutter the UI. You can disable the Help item in the tool registration file via:

<configuration name="help.button" value="false" />

The Link, however, doesn’t have a corresponding configuration option (an oversight maybe… blame me, I wrote the code…). You can disable it with a tool property, although this is normally something reserved for an admin user to set on the tool placement within the portal, which is a manual step per placement. What I have discovered is that you can add the tool property to the tool registration file and it is automatically linked up! Magic.

<configuration name="sakai:tool-directurl-enabled" value="true" />

This is coming in very handy as we are creating a series of relatively small widgets to place on the home screen of a site and the header toolbar was cluttering the UI. Now it is nice and clean with the header toolbar completely removed.
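Putting the pieces together, a registration file for such a widget might look like the following sketch. The tool id, title and description are invented; the two configuration lines are the ones discussed in this post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<registration>
  <!-- Hypothetical registration for a small home-screen widget -->
  <tool id="sakai.examplewidget"
        title="Example Widget"
        description="A hypothetical summary widget for the site home page.">
    <category name="project" />
    <!-- hide the Help item in the tool header -->
    <configuration name="help.button" value="false" />
    <!-- tool property wired up via the registration file, as described above -->
    <configuration name="sakai:tool-directurl-enabled" value="true" />
  </tool>
</registration>
```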


by steveswinsburg at April 03, 2016 10:14 PM

March 14, 2016

Sakai Project

QA Testing for Sakai 11 Is Underway

We need your help in this sizable effort of testing Sakai 11.

by Michelle Hall at March 14, 2016 11:48 PM

February 29, 2016

Sakai Project

Sakai 10.6 Release

The Sakai Core Team is happy to announce the Sakai 10.6 maintenance release.

by Michelle Hall at February 29, 2016 09:02 PM

February 12, 2016


Known Issue: Incomplete list of students enrolled by section in Roster and Gradebook 2

For course sites where multiple course sections have access, instructors or teaching assistants use the drop-down menu on top of their list of students in Gradebook 2 to filter the students by class section. Be aware that using this feature might only return a partial list of students. Gradebook 2 uses the Roster tool to…

by Mathieu Plourde at February 12, 2016 06:29 PM

February 03, 2016

Ian Boston

AI in FM

Limited experience in either of these fields (AI and financial markets) does not stop thought or research. At the risk of being corrected – from which I will learn – I’ll share those thoughts.

Early AI in FM was broadly expert systems, used to advise on hedging to minimise overnight risk, or to identify certain trends based on historical information. Like the early symbolic maths programs (1980s) that revolutionised the way theoretical problems can be solved (transformed) without error in a fraction of the time, early AI in FM put an expert, with a probability of correctness, on every desk. This is not the AI I am interested in. It is only artificial in the sense that it artificially encapsulates the knowledge of an expert; the intelligence is not artificially generated or acquired.

Machine learning covers many techniques. Supervised learning takes a set of inputs and allows the system to perform actions based on a set of policies to produce an output. Reinforcement learning favors the more successful policies by reinforcing the action: good machine, bad machine. The assumption is that the environment is stochastic, i.e. unpredictable due to the influence of randomness.
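As a toy illustration of reinforcement learning in this sense, here is a sketch in Python: two candidate "policies", a stochastic reward, and an update rule that reinforces the more successful one. The reward probabilities and learning parameters are invented for illustration.

```python
import random

# Toy sketch (invented numbers): two candidate policies, a stochastic
# environment, and an update rule that reinforces the one that earns more.
def reinforce(trials=2000, lr=0.1, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    value = [0.0, 0.0]                    # estimated value of each policy
    for _ in range(trials):
        if rng.random() < epsilon:        # occasionally explore at random
            action = rng.randrange(2)
        else:                             # otherwise exploit the better policy
            action = max((0, 1), key=lambda i: value[i])
        # stochastic reward: policy 1 pays off more often (0.6 vs 0.4)
        reward = 1.0 if rng.random() < (0.4, 0.6)[action] else 0.0
        value[action] += lr * (reward - value[action])   # reinforce
    return value

v = reinforce()   # v[1] should end up larger than v[0]
```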

Inputs and outputs are simple: they are a replay of the historical prices. There is no guarantee that future prices will behave in the same way as historical prices, but that is in the nature of a stochastic system. Reward is simple: profit or loss. What is not simple is the machine learning policies. AFAICT, machine learning for a stochastic system with a large amount of randomness can’t magic the policies out of thin air. Speech has rules, and image processing too; although there is randomness, policies can be defined.

At the purest level, excluding wrappers, financial markets are driven by millions of human brains attempting to make a profit out of buying and selling the same thing without adding any value to that thing. They are driven by emotion, fear and every aspect of human nature rationalised by economics: risk, a desire to exploit every new opportunity, and a desire to be a part of the crowd. Dominating means trading on infinitesimal margins, exploiting perfect arbitrage as the randomness exposes differences. That doesn’t mean the smaller trader can’t make money, as the smaller trader does not need to dominate, but it does mean that the larger a trader becomes, the more extreme the trades have to become to maintain the expected level of profits. I said excluding wrappers because wrappers do add value: they adjust the risk, for which the buyer pays a premium over the core assets. That premium allows the inventor of the wrapper to make a service profit in the belief that they can mitigate the risk. It is, when carefully chosen, a fair trade.

The key to machine learning is to find a successful set of policies: a model for success, or a model for the game. The game of Go has a simple model, the rules of the game, so it is possible to have a policy of “try everything”. Go is a very large but ultimately bounded Markov Decision Process (MDP). Try every move, and every theoretical policy can be tested. With feedback and iteration, input patterns can be recognised and successful outcomes can be found. The number of combinations is very large but finite – so large that classical methods are not feasible, but not infinite, so reinforcement machine learning becomes viable.

The MDP governing financial markets may be near infinite in size. While attempts to formalise it will appear to be successful, the events of 2007 have shown us that if we believe we have found the finite boundaries of an MDP representing trade, +1 means we have not. Just as finite+1 is no longer finite by the original definition, infinite+1 proves that what we thought was infinite is not: the nasty surprise just over the horizon.

by Ian at February 03, 2016 01:09 PM

December 15, 2015

Apereo OAE

Looking back at Diwali

Marist College’s week-long Diwali celebration concluded on November 13th with a wonderful closing reception attended by over 200 members of the Marist College community, along with their family and friends. Participants filled the room for a night of singing, dancing, Indian cuisine and a fashion show modeling traditional Indian apparel from various regions of the country.

"It was such a significant experience that brought a bit of India to campus for those who have traveled so far from their home ... even more so for those of us, like myself, who might never have the opportunity to visit," says Corri Nicoletti, Educational Technology Specialist in the Academic Technology & eLearning office at Marist College.

The closing reception was the perfect end to a five day exhibit to celebrate Diwali, the Hindu Festival of Lights. As Marist College boasts a large international population representing over 50 countries worldwide, this event was the perfect opportunity to share a significant cultural experience campus-wide. However, we decided to take it one step further. What if our families and friends could participate, no matter where they are? OAE served as the perfect platform for our students to share in their celebrations as well as to reconnect with those at home.

Diwali Closing Reception

As a result of using OAE, various family, friends, and other institutions were able to connect, regardless of their global location. Everyone was invited to post pictures of their celebrations wherever they were. Beginning October 14th, those involved in the event began posting images of the Rangoli workshop, followed by event preparations, the exhibit, and the final celebration. The OAE group and shared images for MyDiwali Celebration were visited numerous times throughout the month. Surprisingly, these visitors included global participants from as far as England, Australia, South Africa, and more!

Using the combination of social media, active global participants, and the collaborative and interactive nature of OAE, we extended the week-long Diwali event beyond the grounds of Marist College. We were able to reach friends and family elsewhere by sharing a diverse, culture-rich experience with those around us ... even if they were halfway around the world!

It was clearly evident just how much this meant to the students, who worked countless hours, day in and day out, to make it a success. Many of them were amazed at how much it felt like home.

December 15, 2015 07:21 PM

December 11, 2015

Apereo OAE


Since the Open Academic Environment's main cloud deployment, *Unity, rolled out to 20,000 universities and research institutions last month, one of the most common questions has been how so many people are able to use their campus credentials to sign in. I’m going to explain, but be warned: after that I’m going to say why I think this is the wrong question. The right question, I think, is, "Why did no one do this before?"

The Open Academic Environment software at the heart of *Unity integrates with most of the commonly used authentication strategies, including open standards such as Shibboleth. Using these different strategies we’ve been able to establish single sign on with almost half our 20,000 tenancies.

The benefits are real. You don’t have to remember another username and password; instead, you can sign into *Unity with your campus credentials. And so can the majority of your colleagues around the world. It’s one of the features that makes *Unity a uniquely suitable venue for all your research projects.

We’ve managed to hook up with so many universities partly by making bilateral arrangements, campus by campus. But we’ve also worked through the many access management federations to which we belong. These national federations act as brokers; on one side are the universities, on the other are service providers such as *Unity. The federations allow us to hook up with many institutions in one go, reducing the effort involved.

So, given that the username / password thing is one of the biggest barriers both to adoption and usage, why is it that none of our competitors have gone to the effort of integrating with institutions’ single sign on strategies? How is it that none of Facebook, Google, LinkedIn, or ResearchGate lets you use your campus credentials to sign in?

To see why, compare our old friend email with one of the newer services offered by these companies such as file sharing.

Email is a federated service based on open standards. Each university controls its own servers, data and users. Even if these days they may buy the service in from a cloud provider, the university retains control.

If you draw a diagram of the connections, it looks like this. Each individual dot connects to a university server, which connects to other servers, which connect to other individual dots. The connections between the users are mediated by their universities.

Email as a federated service

File sharing via, say, ResearchGate, is different. There’s no open standard. The service is not federated but owned by one company. It controls the servers, data and users. The university is, literally, nowhere.

In this diagram, each individual dot simply connects to the ResearchGate servers in the middle.

ResearchGate as a centralised service

What these Silicon Valley companies have done is to disintermediate the universities themselves.

Now, from the point of view of these companies, what happens if they integrate with an institution’s own single sign on system is that they reintroduce the university into the diagram. Now the university itself has re-acquired control. It will examine your terms and conditions and veto things it doesn’t like. It will demand that the privacy of its users is protected. It will demand ownership of the content they create. And if it doesn’t get it, it may switch off the single sign on and take its users elsewhere. Even worse, maybe 100 institutions might get together and move elsewhere all at the same time!

So that explains why I think the Silicon Valley companies don’t work with campus credentials. And it explains why universities should prefer services that do. It’s the difference between control that is centralised and control that is federated. Or, to put it another way, between colonisation and independence.

But, you may say, file sharing is different to email. File sharing can’t be provided in a federated way; it needs a centralised infrastructure. Indeed it does. But a centralised infrastructure does not have to mean centralised control. You can have centralised infrastructure in which each university owns and controls its own tenancy, its own users, its own data. This is *Unity, and the logo of the Open Academic Environment project shows you the kind of connections we have in mind.

Open Academic Environment

In the centre is the central OAE infrastructure (known to you as *Unity). This is connected to the institutions (the small dots), which in turn are connected to the individual users (the big dots).

This is exactly the arrangement that is effected when we integrate with your university’s single sign on system, and it reflects our vision. Not the disintermediation of the universities but rather their re-intermediation, a step which means empowerment for the university and respect for the user.
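The tenancy model described here can be sketched as a simple data structure: one shared infrastructure, many tenancies, each tenancy owning its own users and data. The names below are illustrative only and do not reflect the actual OAE schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: centralised infrastructure with per-institution
# control. Names are invented and do not reflect the real OAE schema.
@dataclass
class Tenancy:
    institution: str
    users: list = field(default_factory=list)   # controlled by the institution
    data: list = field(default_factory=list)    # owned by the institution

@dataclass
class SharedInfrastructure:
    tenancies: dict = field(default_factory=dict)

    def add_tenancy(self, institution):
        tenancy = Tenancy(institution)
        self.tenancies[institution] = tenancy
        return tenancy

unity = SharedInfrastructure()                  # one central infrastructure...
t = unity.add_tenancy("Example University")     # ...with per-tenancy control
t.users.append("researcher@example.edu")
```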

You can find out more about *Unity and the issues raised here by downloading our briefing for university Chief Information Officers.

December 11, 2015 11:43 AM