Planet Sakai

May 27, 2016

Michael Feldstein

Comparing Fully-Online vs Mixed-Course Enrollment Data

Mike Caulfield wrote a post yesterday about a new Blackboard report on design findings regarding online students. The focus of Mike’s post was that people often assume that the norm for an “online” student is taking all courses online, when in fact it is more common for students to take some courses online and some face-to-face (what Mike calls “Mixed-Course”). This distinction is important:

What shocks some people reading this, I think, is that online students would have face-to-face courses as a comparison or option, or that they would be consciously choosing between online and face-to-face in the course of a semester. But this is the norm now at state universities and community colleges; it’s only a secret to people not in those sorts of environments.

I agree that this is an important topic, so let’s look at the data in more detail (and for those wondering if you can connect to Tableau Public to edit online data visualizations while flying, the answer is yes if you’re patient). The best source of information is the IPEDS database, with the most recent data for Fall 2014. WCET put out an excellent report analyzing the distance education data in IPEDS – more on that later. Both IPEDS and WCET use the term “Some but not all” in the same manner as “Mixed-Course”. Based on overall data combining undergraduate and graduate students, Mike is right that more students are Mixed-Course than they are Fully-Online – 2.926 million vs. 2.824 million.

Fully Online vs Mix-and-match table

But this comparison varies by sector and by degree type as you can see below.

Fully-Online vs Mixed-Course

Some notes:

  • For graduate students in all sectors, Fully-Online is more common than Mixed-Course;
  • For the private not-for-profit and for-profit sectors, Fully-Online is more common than Mixed-Course;
  • The predominant case where Mixed-Course is most common is for public institutions – both 4-year and 2-year – for undergraduate degrees (as Mike pointed out); and
  • Due to the large size of those two public sectors – combined they represent 75% of all enrollments in the US – the overall numbers slightly favor Mixed-Course.
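
The overall gap is small, and a quick back-of-the-envelope check of the figures cited above makes that concrete (the two enrollment totals are the IPEDS Fall 2014 numbers quoted earlier in this post):

```python
# Overall Fall 2014 IPEDS distance-education enrollment, in millions,
# as cited above: "some but not all" (Mixed-Course) vs exclusively online.
mixed_course = 2.926
fully_online = 2.824

diff = mixed_course - fully_online
share = mixed_course / (mixed_course + fully_online)

print(f"Mixed-Course leads by {diff:.3f} million students")
print(f"Mixed-Course share of the two groups: {share:.1%}")
```

In other words, Mixed-Course leads by only about 100 thousand students out of roughly 5.75 million, which is why the sector-level breakdowns matter more than the headline comparison.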

What about growth measures? IPEDS only began collecting this level of distance education enrollment data for Fall 2012, and I have worked with WCET to show some data issues in IPEDS, particularly for Fall 2012. So be aware of some noise in these measurements. However, the WCET report shows overall growth trends, and Fully-Online (exclusively DE) is actually growing slightly faster than Mixed-Course – 225 thousand vs. 178 thousand. Note that WCET combined public 4-year and public 2-year into one bucket.

Some but not all growth

Exclusive DE growth

If you’d like to explore the data in more detail, see this interactive chart.


The post Comparing Fully-Online vs Mixed-Course Enrollment Data appeared first on e-Literate.

by Phil Hill at May 27, 2016 01:18 AM

May 23, 2016

Michael Feldstein

Previous LMS For Schools Moving to Canvas in US and Canada

During the most recent quarterly earnings call for Instructure, an analyst asked an interesting question (despite starting off from the Chris Farley Show format).

Corey Greendale (First Analysis Securities Corporation):  Awesome. A couple of other things, primarily on the Higher Ed space, but I guess on the education space – a couple questions about the competitive environment. And I don’t know if you will ever get into this level of granularity, but when you get competitive wins against Blackboard, are those predominantly from legacy ANGEL, or are you getting those wins as much from Learn as well?

Josh Coates (CEO of Instructure):  A lot of them are from Learn. You know, I don’t have the stats right off the top of my head, but a lot of the ANGEL and WebCT stuff has been mopped up in previous years, and so the majority of what’s left is Learn, and our win rate against Blackboard continues to be incredibly high, not just domestically but internationally as well.

In fact, I think three out of the four international schools that we announced this earnings period were Blackboard Learn replacements, so yes, Learn’s getting it.

The question gets to the issue of whether Canvas is just picking up higher education clients coming off of discontinued LMSs (ANGEL, WebCT, etc.) or if they are picking up clients from ongoing platforms such as Blackboard Learn. Beyond the obvious interest of investors and other ed tech vendors, this issue in general affects higher education institutions going through a vendor selection – for the system in consideration, are there many other schools considering the same migration path?

Thanks to the work we’ve been doing with LISTedTECH and our new subscription service, we can answer this question in a little more detail. One of the charts we share shows higher education migrations over the past five years in the US and Canada.


Looking at the bottom right of the chart, you can see that Canvas has picked up clients previously using WebCT, Learn, ANGEL, Moodle, LearningStudio, Sakai, Homegrown, and Brightspace (from D2L).

Canvas Wins

Josh could have answered that Canvas actually has picked up more clients formerly using Learn than those using ANGEL, but a small portion of Learn includes those using the discontinued Basic edition. Nevertheless, there are quite a few wins coming from systems that have not been discontinued, which I think was the main point of the question.

As you can see, there is interesting data on other systems as well. Some notes from this view:

  • Blackboard Learn new implementations have mostly come from the company’s own discontinued WebCT and ANGEL platforms, with some small contributions from LearningStudio and Moodle.

Learn Wins

  • D2L Brightspace has an impressive client retention rate, with very few clients migrating off of their system.

Brightspace Losses

What other trends do you see in this data?

The post Previous LMS For Schools Moving to Canvas in US and Canada appeared first on e-Literate.

by Phil Hill at May 23, 2016 10:39 PM

May 20, 2016

Adam Marshall

WebLearn unavailable on Tuesday 24 May 2016 from 7-9am

WebLearn will be unavailable on Tuesday 24 May 2016 from 7-9am. This is necessary in order to undertake essential maintenance of the underlying AFS file system. There will be no service during this period.

We apologise for any inconvenience that this essential work may cause.

by Adam Marshall at May 20, 2016 01:41 PM

May 16, 2016

Michael Feldstein

What Homework and Adaptive Platforms Are (and Aren’t) Good For

I was delighted that we were able to publish Mike Caulfield’s post on how ed tech gets personalization backwards, partly because Mike is such a unique and inventive thinker, but also because he provided such a great example of how “personalized learning” teaching techniques are different from adaptive content and other product capabilities.

The heart of his post is two stories about teachable moments he had with his daughters. In one, he helped his middle school-aged daughter understand why an Iranian author was worried that people in the Western world have harmful stereotypes of Iranians. In the other, he helped his high school-aged daughter see how her knowledge of the history of rocket science could be useful in answering a question she was asked about Churchill’s Iron Curtain speech. Mike’s stories show truly significant learning of the kind that changes students’ perspectives and, if we’re lucky, their lives. It is not just personalized but deeply personal. He was able to reach his daughters because he understood them as humans, well beyond the boundaries of a list of competencies they had or had not mastered within the disciplines they were studying.

For now and the foreseeable future, no robot tutor in the sky is going to be able to take Mike’s place in those conversations. This is the kind of personal teaching that humans are good at and robots are not. But neither are the tools we have today useless for this sort of teaching. Vendors, administrators, and faculty alike have broadly misunderstood their role and potential. In this post, I’m going to talk about both how these tools are useful for the kind of education that Mike cares about (and I care about) as well as, perhaps more importantly, why we are so prone to getting that role wrong.

Robot Tutors in the Weeds

One fact that shines through about Mike’s daughters is that they are both pretty smart. His middle schooler clearly read and understood the book by the Iranian author. She was stuck not on a “what” question but on a “why” question. His high schooler knew a lot about the space race—not just the what, but also the why. She just hadn’t yet seen the relevance of what she knew to a “why” question she was being asked in a different domain. Both girls were working on higher-order thinking skills. One of the reasons that Mike could teach them is that they already had a lot of foundational knowledge.

Not all students do. They don’t all have strong reading comprehension or study skills. Not everybody is good at remembering facts or distinguishing causal connections from co-occurring but irrelevant details. For example, some readers could benefit from being stopped every few paragraphs or pages and asked questions to help them check themselves to see if they’ve understood what they read (and become more self-aware about their reading comprehension in general). This is the sort of thing that computers are good at.

What happens in classrooms where students don’t have this sort of comprehension tutoring help (robot or otherwise)? Sometimes the students who need that help don’t get it, and they fail. They are never able to answer the “why” questions because they don’t know the “what.” Other times, the teacher slows down to cover the “what” in class in order to help the students who are struggling. This teaching strategy has a few side effects. First, it takes a lot of class time, which means that there is little or no time to discuss the “why.” This leaves kids like Mike’s daughters, who are ready and hungry for the “why,” bored. Second, all the students quickly learn that they don’t have to read the book because the teacher will go over the important parts in class. I hear complaints from teachers all the time that they have trouble getting “kids today” to read. I believe them. But I’m skeptical of the explanations that I hear for why this is so. I don’t think it’s primarily because of TV or YouTube or mobile phones. All of those factors fall under the larger umbrella cause that students don’t have to read anymore. Nowhere is that more true than in the classroom. If you were asked to read something in advance of a meeting, and you knew the person running the meeting would take almost all the meeting time reviewing the aspects of the reading that she thought were important for you to know, would you read in advance? Or would you find a better use for that time?

How Homework Got Broken and How Not to Fix It

Most teachers—especially middle school and above—have a passion for their subject. (Elementary school teachers, who are generalists, more often have a passion for the students, although the two are not mutually exclusive.) They love the “why” and want to talk about it. But they end up spending most of their time talking about the “what” because if they don’t they will leave some students behind. As we have seen, a side effect of this understandable behavior is that students learn not to do the homework, which means that they increasingly come into class not knowing the what. And the vicious cycle continues.

So teachers grade the homework, hoping that the grades will force the students to come to class prepared to discuss the “why.” To be clear, there are different reasons why teachers might want to count homework toward a course grade. One is when the homework is carefully constructed to incrementally build skills, so each homework assignment is essentially a summative assessment of the next small step on the hill the teacher is trying to get the students to climb. We see this most often in math, engineering, or other subjects where there is a strong emphasis on increasingly sophisticated application of procedural knowledge. But more often than not, teachers count day-to-day homework toward a course grade primarily because they are trying to motivate students to learn the “what” at home.

This approach has side effects of its own. Students are motivated by grades, but only to a point. They quickly become quite sophisticated at calculating how much of the homework they have to do in order to get the minimum grade that they want to achieve. Excellent students and weak ones alike make this calculation. Unfortunately, weak students often miscalculate, undershoot, and fail. Meanwhile, good students may be getting good grades, but they are not necessarily learning all that they could be. And, of course, the more the homework counts toward the course grade, the more incentive students have to cheat.

Part of my day job as a consultant is to help companies who design educational technology products understand teachers and students better. In the course of doing that work over the past few years, I have spoken to a lot of students and teachers about the homework problem. Many of the best teachers either don’t count homework toward the course grade or count it just a little—enough to communicate to the students that the homework matters, but not enough to trigger the what’s-the-minimum-I-have-to-do calculation. They use the grade as just one tool in an overall strategy designed to help students see that the “what” questions they are learning to answer in their homework are relevant to the far more interesting “why” questions about which the teachers are passionate and would like their students to become passionate about too. They pose mysteries at the end of class that the students can only solve with the knowledge they gain from doing the homework. Or they have little verbal in-class quizzes to keep the students on their toes, in the context of a discussion of how the tidbit in the verbal quiz matters to the larger topic being discussed.

Interestingly, a number of students have told me that they like in-class verbal quizzes. Well, “like” probably isn’t quite the right word. Appreciate. Value. Are grateful for. But they only will tell you this if you ask the right question, which is the following:

“How can you tell that your teacher cares about you?”

The story that we often tell ourselves about other people’s children is that they are lazy. They don’t like to work or to learn. But the first question that most students are trying to answer for themselves when they start a new class, particularly if that class is about something they don’t already care about, is “Does this teacher care if I learn?” If the answer is “no,” if the relationship is purely transactional, then most students will try to figure out the minimum cost they have to pay in order to get a satisfactory grade. Think back to your own school days. Didn’t you do that sometimes? I did. The less I thought the teacher respected me or cared about me, the harder I played the I’m-going-to-do-almost-nothing-and-still-ace-your-stupid-class-you-arrogant-ass game. And the more I could get away with it, the more I was convinced that the teacher didn’t care about me. “After all,” I reasoned, “nobody who really cares about me as a student would let me get away with being so damned lazy.”

This is a problem that grading homework won’t fix, robo-grading won’t fix, and adaptive robo-grading won’t fix. In fact, those strategies often make the situation worse. Two things enrage students more than almost anything:

  1. Making them buy expensive books that they never actually need and that the teacher never even mentions in class discussion
  2. Making them do hundreds of stupid homework problems that seem to have no obvious connection to anything on the tests (or, really, anything in the world) and that the teacher never even bothers to talk about in class

Luckily, these selfsame robo-homework tools actually can help avoid the cascade of course design failures I have traced in this post, if only they are designed and deployed a little differently.

Personalized Learning

Let’s review some of the things that educational software can do well:

  • Check students’ mastery of low-level cognitive skills such as memory of facts and application of procedural knowledge
  • Provide feedback to students on their progress toward mastery
  • Provide feedback to teachers on the students’ progress
  • In some (but not all) cases, help the students when they get stuck on low-level mastery skills

When I say “low-level,” that is not a value judgment. Mike’s daughter had to know basic facts about the U.S. and Soviet space programs in order to make inferences about their consequences for the broader political climate. The teacher clearly wanted to spend time with the students discussing the “why” question, and Mike’s story illustrated how humans are much better suited than robots for helping students learn to answer those sorts of questions. But we also know that teachers get stuck spending all their time on the “what” because some students get stuck there. Students don’t want to spend any more time on the “what” than their teachers do. It is in everybody’s interest to get students to learn as much of the basic facts and procedural knowledge as possible outside of class so that the teacher can spend class time on the really intellectually challenging aspects of the subject.

Software can help solve this problem by giving both students and teachers feedback on how the students are doing with the “what.” Some students, like Mike’s daughters, won’t need help, but it’s not an all-or-nothing thing. Most people are better at grasping the basics of some subjects than others, or better at some low-level cognitive tasks than others. Personally, my reading comprehension is very good, but I am horrible at memorizing. I was interested in science but ended up dropping every college science course I registered for because the memorization killed me every time. Tutoring software might have helped me. In high school, I scored very well on the physics Achievement test because I could derive most of the physics needed to answer the questions based on the “why” I had absorbed in class. But I did poorly in the class itself because I was bad at remembering and applying simple formulas.

I could have become good at physics. I could have learned to love it. When I was a kid, I used to write letters to NASA to request pictures from their telescopes. My high school teacher knew that about me, because I was in a small class, and because he was the kind of teacher who made a point of knowing that sort of thing about his students. But my college professors had no way of knowing given the contact that they had with me in their large lectures. If they could have seen my results on formative assessments, and if they had more time in class to help students like me with the sticking points rather than repeating the reading that almost nobody did because they knew the professor would repeat it, then I might have had a different relationship with the subject. I took every philosophy of science course I could but avoided the actual science classes because I was afraid of them.

Tools that can help students like me exist today. But more often than not, two common mistakes currently get in the way of them being used in ways that actually would have helped me get through physics (or biology, or art history). The first is grading. The minute the homework becomes high-stakes, it breaks the ability to help students who are stuck. Rather than reducing student anxiety about the course, it raises it. Rather than motivating students to do the best they can for the teacher and themselves, it motivates them to calculate the impact of each assignment on their grade. Students need to believe that mastering the “what” matters, but this is not the way to convince them.

This brings me to the second and related mistake, which is failing to make explicit connections between the “what” and the “why” for students. They need to understand the point of learning all that low-level stuff. I didn’t care about solving physics problems, but I did care about understanding physics. I might have been more motivated to take on the scary work that was hard for me if I had seen a clearer connection between the two. This is all about course design. It’s about using the homework tool in context. It’s about reclaiming classroom time to have discussions like the ones that Mike had with his daughters, and maybe sometimes to review the specific “what” problems that students are getting stuck on.

Putting all this together, fixing the problem of broken homework requires the three personalized learning strategies that Phil and I have been writing about:

  • Moving content broadcast—especially lectures about the “what”—out of the classroom to make room for discussions about the “why”
  • Making homework time contact time, so that students can get help from the teacher when they are stuck with the “what” and also see that the teacher cares about whether they are learning
  • Providing a tutor in cases where the software can help the student with the “what,” or maybe a human tutor by enabling the teacher to see where students are stuck and focus class time on getting them unstuck

The term of art for using homework this way is “continuous formative assessment”:

You don’t need technology to do this. It’s just a feedback loop that could be accomplished by manually marking up students’ work or otherwise guiding them as they work. Technology just provides the ability to amplify that feedback loop and make it less labor-intensive to implement. But most vendors aren’t optimizing their homework products for this kind of use. Instead, they spend all their time adding gradebook features and increasingly complex ways for instructors to customize problem sets and reduce cheating. And they do this, more often than not, because their customers ask them to. (Of course, they don’t often hear from non-customers who aren’t interested in graded homework but might be interested in continuous formative assessment.)
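
To make the feedback loop concrete, here is a toy sketch of the idea (the student names, concept tags, and review threshold are all invented for illustration; real products would draw responses from a homework platform):

```python
# Toy sketch of a continuous formative assessment loop: low-stakes checks
# are tagged with the "what" concept they probe, and the results tell the
# teacher which concepts to spend scarce class time unsticking.
from collections import defaultdict

# (student, concept probed, answered correctly?) -- invented sample data
responses = [
    ("alice", "newtons_2nd_law", True),
    ("alice", "unit_conversion", False),
    ("bob",   "newtons_2nd_law", False),
    ("bob",   "unit_conversion", False),
]

attempts = defaultdict(int)
missed = defaultdict(int)
for student, concept, correct in responses:
    attempts[concept] += 1
    if not correct:
        missed[concept] += 1

# Surface concepts most of the class is stuck on, rather than
# re-broadcasting all of the reading in class.
for concept in attempts:
    miss_rate = missed[concept] / attempts[concept]
    if miss_rate >= 0.5:
        print(f"Review in class: {concept} ({miss_rate:.0%} missed)")
```

The point is not the code, which a gradebook could not be simpler than; it is that the output drives teaching decisions instead of grades.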

The fundamental problem isn’t the tool or the vendor. It’s the cascade of unintended consequences caused by students who come into the class with different levels of skill and motivation, and the coping mechanisms teachers have employed to deal with the challenges of teaching a heterogeneous group. Right now, we are mostly getting products that are designed to minimize the pain caused by that cascade or tools that are designed to replicate the failures in a more automated and therefore cheaper way. But we could easily be getting products that help teachers to create that positive feedback loop between themselves and their students. If we want that to happen, then we have to start asking a different question:

How can the capabilities afforded by educational technologies empower teachers to learn and implement teaching strategies that work better for them and their students?


The post What Homework and Adaptive Platforms Are (and Aren’t) Good For appeared first on e-Literate.

by Michael Feldstein at May 16, 2016 03:28 PM

May 13, 2016

Sakai Project

Sakai 10.7 is released

The Sakai Core Team is happy to announce the Sakai 10.7 maintenance release for general availability!

by Michelle Hall at May 13, 2016 05:17 AM

VeriCite and Turnitin are now pre-packaged in Sakai

Sakai’s content review (or plagiarism detection) service has been completely revamped for the upcoming release.

by Michelle Hall at May 13, 2016 05:05 AM

April 26, 2016

Adam Marshall

WebLearn and Turnitin courses Trinity Term 2016

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the formal courses (3-hour sessions) can be downloaded for self study. Places are limited and bookings are required.

Click on the links provided to book a place, or for further information. Bookings open 30 days in advance, but you can express an interest in a course and receive a reminder to book when booking opens.

WebLearn courses:

Plagiarism awareness courses (Turnitin):

Byte-sized lunch time sessions:

These focus on particular tools, with plenty of time for questions and discussion.

User Group meeting:

by Jill Fresen at April 26, 2016 03:37 PM

Dr. Chuck

More Tsugi Refactoring – Removal of the mod folder

I completed the last of many refactoring steps of Tsugi yesterday, when I moved the contents of the “mod” folder into its own repository. The goal of all this refactoring was to get to the point where checking out the core Tsugi repository did not include any end-user tools – it would include just the administrator, developer, key management, and support capabilities (LTI 2, CASA, ContentItem Store). The key is that this console will also be used by the Java and NodeJS implementations of Tsugi until we build the console functionality in each of those languages, so it made no sense to drag in a bunch of PHP tools if you were just going to use the console. I wrote a bunch of new documentation showing how the new “pieces of Tsugi” fit together:

This means that as of this morning, if you do a “git pull” in your /tsugi folder, the mod folder will disappear. But have no fear – you can restore it with the following steps:

cd tsugi
git clone mod

And your mod folder will be restored. You will now have to do separate git pulls for both Tsugi and the mod folder.

I have all this in solid production (with the mod folder restored as above) with my Coursera and on-campus UMich courses. So I am pretty sure it holds together well.

This was the last of a multi-step refactor for this code to modularize it in multiple repositories so as to better prepare for Tsugi in multiple languages as well as plugging Tsugi into various production environments.

by Charles Severance at April 26, 2016 02:11 PM

April 22, 2016

Adam Marshall

WebLearn will be unavailable on Tuesday 26 April 2016 from 7-9am

There will be no service during this period; this is due to essential maintenance of the AFS file system.

We apologise for any inconvenience that this essential work may cause.

by Adam Marshall at April 22, 2016 03:32 PM

April 06, 2016

Dr. Chuck

Ring Fencing JSON-LD and Making JSON-LD Parseable Strictly as JSON

My debate with my colleagues [1, 2] about the perils of unconstrained JSON-LD as an API specification is coming to a positive conclusion. We have agreed to the following principles:

  • Our API standard is a JSON standard, and we will constrain our JSON-LD usage so that the API can be deterministically produced and consumed using *only* JSON parsing libraries. During de-serialization, it must be possible to parse the JSON deterministically using a JSON library without looking at the @context at all. It must be possible to produce the correct JSON deterministically and add a hard-coded and well-understood @context section that does not need to change.
  • There should never be a requirement in the API specification or in our certification suite that forces the use of JSON-LD serialization or de-serialization on either end of the API.
  • If some software in the ecosystem covered by the standard decides to use JSON-LD serializers or de-serializers and they cannot produce the canonical JSON form for our API – that software will be forced to change and generate the precise constrained JSON (i.e. we will ignore any attempts to coerce the rest of the ecosystem using our API to accept unconstrained JSON-LD).
  • Going forward we will make sure that the sample JSON we publish in our specifications will always be in JSON-LD Compacted form, with either a single @context or multiple contexts, with the default @context included as “@vocab”, all fields in the default context having no prefixes, and all fields outside the default @context having simple and predictable prefixes.
  • We are hopeful and expect that Compacted JSON-LD is so well defined in the JSON-LD W3C specification that all implementations in all languages that produce compact JSON-LD with the same context will produce identical JSON. If for some strange reason, a particular JSON-LD compacting algorithm starts producing JSON that is incompatible with our canonical JSON – we will expect that the JSON-LD serializer will need changing – not our specification.
  • In the case of extending the data model, the prefixes used in the JSON will be agreed upon to maintain predictable JSON parsing. If we cannot pre-agree on the precise prefixes themselves then at least we can agree on a convention for prefix naming. I will recommend they start with “x_” to pay homage to the use of “X-” in RFC-822 and friends.
  • As we build API certification mechanisms we will check and validate incoming JSON to ensure that it is valid JSON-LD and issue a warning for any flawed JSON-LD, but consider that non-fatal and parse the content using only deterministic JSON parsing to judge whether or not an implementation passes certification.

The hope is that for the next 3-5 years we can rely on JSON-only infrastructure, while at the same time laying the groundwork for a future set of more elegant and expandable APIs using JSON-LD once performance and ubiquity concerns around JSON-LD are addressed.

Some Sample JSON To Demonstrate the Point

Our typical serialization starts with the short form for a single default @context as in this example from the JSON-LD playground:

{
  "@context": "",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Professor",
  "telephone": "(425) 123-4567",
  "url": ""
}

But let’s say we want to extend this with a field – the @context would need to switch from a single string to an object that maps prefixes to IRIs, as shown below:

{
  "@context": {
    "@vocab": "",
    "csev": ""
  },
  "@type": "Person",
  "url": "",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "csev:debug": "42"
}

If you compact this with a single schema for – all extensions get expanded:

{
  "@context": "",
  "type": "Person",
  "": "42",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "schema:url": ""
}

The resulting JSON is tacky and inelegant. If on the other hand you compact with this context:

{
  "@context": {
    "@vocab": "",
    "csev": ""
  }
}

You get JSON that is succinct and deterministic, with predictable prefixes; minus the context, it looks like clean JSON that one might design even without the influence of JSON-LD.

{
  "@context": {
    "@vocab": "",
    "csev": ""
  },
  "@type": "Person",
  "csev:debug": "42",
  "jobTitle": "Professor",
  "name": "Jane Doe",
  "telephone": "(425) 123-4567",
  "url": ""
}

What is beautiful here is that when you use @vocab plus extension prefixes as the @context, our “canonical JSON serialization” can be read by JSON-LD parsers and produced deterministically by a JSON-LD compact process.

In a sense, what we want for our canonical serialization is the output of a jsonld_compact operation – and if you were to run the resulting JSON through jsonld_compact again, you would get the exact same JSON.

Taking this approach, and pre-agreeing on the official @context, on prefixes for all official contexts, and on a prefix naming convention for any and all extensions, means we should be able to use pure-JSON libraries to parse the JSON whilst ignoring the @context completely.
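As a small sketch of that claim (the context URLs are again placeholders of mine), a pure-JSON consumer can read the canonical serialization with Python's standard json module, skip the @context key, and treat prefixed extension keys as opaque strings:

```python
import json

# A canonical serialization in the @vocab-plus-prefixes style;
# the context URLs are illustrative placeholders.
doc = """
{
  "@context": {
    "@vocab": "http://schema.org/",
    "csev": "http://example.com/csev#"
  },
  "@type": "Person",
  "csev:debug": "42",
  "name": "Jane Doe",
  "jobTitle": "Professor"
}
"""

record = json.loads(doc)

# A pure-JSON consumer ignores @context entirely and treats
# "csev:debug" as just another opaque key.
fields = {k: v for k, v in record.items() if k != "@context"}
assert fields["name"] == "Jane Doe"
assert fields["csev:debug"] == "42"
```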


Comments welcome. I expect this document will be revised and clarified over time to ensure that it truly represents a consensus position.

by Charles Severance at April 06, 2016 03:57 AM

April 05, 2016

Dr. Chuck

Abstract: Massively Open Online Courses (MOOCs) – Past, Present, and Future

This presentation will explore what it was like when MOOCs first emerged in 2012 and what we have learned from the experience so far. Today, MOOC providers are increasingly focused on becoming profitable, and this trend is changing both the nature of MOOCs and university relationships with MOOC platform providers. We will also look at how a university can scale the development of MOOCs and use knowledge gained in MOOCs to improve on-campus teaching, and then look ahead at how the MOOC market may change and how MOOC approaches and technologies may ultimately impact campus courses and programs.

by Charles Severance at April 05, 2016 01:37 PM

April 03, 2016

Steve Swinsburg

Tool properties in tool registration files

I discovered this feature by accident when setting up a new tool and configuring its registration file. The registration file is what you use to wire up a webapp in Sakai so that it can be added to sites. You can give it a title, description, tell it what site types are supported and a few other settings.

One of the recent features in Sakai is the ability to get a direct URL to any tool within Sakai. This is useful when you want to link to a tool without the portal around it.


Note that the links on the right-hand side are part of the tool registration. The ones on the left are controlled within the tool code itself, and together they make for a nice navbar in full screen mode.

However, if you have a tool that doesn’t need any header items, for example a summary tool or widget, and there are multiples of them on screen, you still get the Link and Help items, which can clutter the UI. You can disable the Help item in the tool registration file via:

<configuration name="help.button" value="false" />

However, the Link doesn’t have a corresponding configuration option (an oversight maybe… blame me, I wrote the code…). You can disable it with a tool property, although tool properties are normally something reserved for an admin user to set on the tool placement within the portal, which is a manual step per placement. What I have discovered is that you can add the tool property to the tool registration file and it is automatically linked up! Magic.

<configuration name="sakai:tool-directurl-enabled" value="true" />

This is coming in very handy as we are creating a series of relatively small widgets to place on the home screen of a site and the header toolbar was cluttering the UI. Now it is nice and clean with the header toolbar completely removed.
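Putting the pieces together, a registration file carrying both settings might look roughly like the sketch below. The tool id, title, description and category are placeholder values, and `sakai:tool-directurl-enabled` is set to false here on the assumption that false hides the Link item, per the description above:

```xml
<!-- Illustrative sketch only: tool id, titles and category are placeholders -->
<registration>
  <tool id="sakai.mywidget"
        title="My Widget"
        description="A small summary widget for the site home page.">
    <category name="course" />
    <!-- hide the Help item in the tool header -->
    <configuration name="help.button" value="false" />
    <!-- tool property set directly in the registration file;
         assumed to hide the direct URL "Link" item when false -->
    <configuration name="sakai:tool-directurl-enabled" value="false" />
  </tool>
</registration>
```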


by steveswinsburg at April 03, 2016 10:14 PM

March 14, 2016

Sakai Project

QA Testing for Sakai 11 Is Underway

We need your help in this sizable effort of testing Sakai 11.

by Michelle Hall at March 14, 2016 11:48 PM

February 12, 2016


Known Issue: Incomplete list of students enrolled by section in Roster and Gradebook 2

For course sites where multiple course sections have access, instructors or teaching assistants use the drop-down menu at the top of their list of students in Gradebook 2 to filter the students by class section. Be aware that using this feature might only return a partial list of students. Gradebook 2 uses the Roster tool to… Continue reading

by Mathieu Plourde at February 12, 2016 06:29 PM

February 03, 2016

Ian Boston

AI in FM

Limited experience in either of these fields does not stop thought or research. At the risk of being corrected, from which I will learn, I’ll share those thoughts.

Early AI in FM was broadly expert systems, used to advise on hedging to minimise overnight risk, or to identify certain trends based on historical information. Like the early symbolic maths programs (1980s) that revolutionised the way theoretical problems could be solved (transformed) without error in a fraction of the time, early AI in FM put an expert, with a probability of correctness, on every desk. This is not the AI I am interested in. It is only artificial in the sense that it artificially encapsulates the knowledge of an expert; the intelligence is not artificially generated or acquired.

Machine learning covers many techniques. Supervised learning takes a set of inputs and allows the system to perform actions based on a set of policies to produce an output. Reinforcement learning favors the more successful policies by reinforcing the action: good machine, bad machine. The assumption is that the environment is stochastic, that is, unpredictable due to the influence of randomness.

Inputs and outputs are simple: they are a replay of the historical prices. There is no guarantee that future prices will behave in the same way as historical ones, but that is in the nature of a stochastic system. Reward is simple: profit or loss. What is not simple is the machine learning policies. AFAICT, machine learning, for a stochastic system with a large amount of randomness, can’t magic the policies out of thin air. Speech has rules, image processing too, and although there is randomness, policies can be defined. At the purest level, excluding wrappers, financial markets are driven by millions of human brains attempting to make a profit out of buying and selling the same thing without adding any value to that thing. They are driven by emotion, fear, and every aspect of human nature, rationalised by economics, risk, a desire to exploit every new opportunity, and a desire to be part of the crowd. Dominating means trading on infinitesimal margins, exploiting perfect arbitrage as the randomness exposes differences. That doesn’t mean the smaller trader can’t make money, as the smaller trader does not need to dominate, but it does mean that the larger a trader becomes, the more extreme the trades have to become to maintain the expected level of profits. I said excluding wrappers because they do add value: they adjust the risk, for which the buyer pays a premium over the core assets. That premium allows the inventor of the wrapper to make a service profit in the belief that they can mitigate the risk. It is, when carefully chosen, a fair trade.

The key to machine learning is to find a successful set of policies: a model for success, or a model for the game. The game of Go has a simple model, the rules of the game, so it’s possible to have a policy of “try everything”. Go is a very large but ultimately bounded Markov Decision Process (MDP). Try every move, and every theoretical policy can be tested; with feedback and iteration, input patterns can be recognised and successful outcomes found. The number of combinations is so large that classical methods are not feasible, but it is finite, so reinforcement machine learning becomes viable.
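To make the bounded-MDP point concrete, here is a toy sketch of my own (not from the post): tabular Q-learning on a five-state line, the simplest “try moves, reinforce what pays off” loop. Because the state space is tiny and finite, every action can be tried many times, and the learned greedy policy converges on always moving right, towards the reward.

```python
import random

# Toy bounded MDP: states 0..4 on a line; actions move -1 or +1
# (clamped at the edges); reward 1 for reaching state 4 (terminal).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for _ in range(500):                     # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: move right (+1) from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

This only works because the MDP is small and fully explorable; the essay's point is precisely that the market MDP may not be.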

The MDP governing financial markets may be near infinite in size. While attempts to formalise it will appear successful, the events of 2007 have shown us that whenever we believe we have found the finite boundaries of an MDP representing trade, a “+1” means we have not. Just as finite+1 is no longer finite by the original definition, infinite+1 proves that what we thought was infinite is not. The nasty surprise is just over the horizon.

by Ian at February 03, 2016 01:09 PM

December 15, 2015

Apereo OAE

Looking back at Diwali

Marist College’s week-long Diwali celebration concluded on November 13th with a wonderful closing reception attended by over 200 members of the Marist College community, along with their family and friends. Participants filled the room for a night of singing, dancing, Indian cuisine and a fashion show modeling traditional Indian apparel from various regions of the country.

"It was such a significant experience that brought a bit of India to campus for those who have traveled so far from their home ... even more so for those of us, like myself, who might never have the opportunity to visit," says Corri Nicoletti, Educational Technology Specialist in the Academic Technology & eLearning office at Marist College.

The closing reception was the perfect end to a five day exhibit to celebrate Diwali, the Hindu Festival of Lights. As Marist College boasts a large international population representing over 50 countries worldwide, this event was the perfect opportunity to share a significant cultural experience campus-wide. However, we decided to take it one step further. What if our families and friends could participate, no matter where they are? OAE served as the perfect platform for our students to share in their celebrations as well as to reconnect with those at home.

Diwali Closing Reception

As a result of using OAE, family members, friends, and other institutions were able to connect, regardless of their global location. Everyone was invited to post pictures of their celebrations wherever they were. Beginning October 14th, those involved in the event began posting images of the Rangoli workshop, followed by event preparations, the exhibit, and the final celebration. The OAE group and shared images for the MyDiwali Celebration were visited numerous times throughout the month. Surprisingly, these visitors included global participants from as far away as England, Australia, South Africa, and more!

Using the combination of social media, active global participants, and the collaborative and interactive nature of OAE, we extended the week-long Diwali event beyond the grounds of Marist College. We were able to reach friends and family elsewhere, sharing a diverse, culture-rich experience with those around us ... even if they were halfway around the world!

It was clearly evident just how much this meant to the students, who worked countless hours, day in and day out, to make it a success. Many of them were amazed at how much it felt like home.

December 15, 2015 07:21 PM

December 11, 2015

Apereo OAE


Since the Open Academic Environment's main cloud deployment, *Unity, rolled out to 20,000 universities and research institutions last month, one of the most common questions has been how so many people are able to use their campus credentials to sign in. I’m going to explain, but be warned: after that I’m going to say why I think this is the wrong question. The right question, I think, is, "Why did no one do this before?"

The Open Academic Environment software at the heart of *Unity integrates with most of the commonly used authentication strategies, including open standards such as Shibboleth. Using these different strategies we’ve been able to establish single sign on with almost half our 20,000 tenancies.

The benefits are real. You don’t have to remember another username and password; instead, you can sign into *Unity with your campus credentials. And so can the majority of your colleagues around the world. It’s one of the features that makes *Unity a uniquely suitable venue for all your research projects.

We’ve managed to hook up with so many universities partly by making bilateral arrangements, campus by campus. But we’ve also worked through the many access management federations to which we belong. These national federations act as brokers; on one side are the universities, on the other are service providers such as *Unity. The federations allow us to hook up with many institutions in one go, reducing the effort involved.

So, given that the username / password thing is one of the biggest barriers both to adoption and usage, why is it that none of our competitors have gone to the effort of integrating with institutions’ single sign on strategies? How is it that none of Facebook, Google, LinkedIn, or ResearchGate lets you use your campus credentials to sign in?

To see why, compare our old friend email with one of the newer services offered by these companies such as file sharing.

Email is a federated service based on open standards. Each university controls its own servers, data and users. Even if these days they may buy the service in from a cloud provider, the university retains control.

If you draw a diagram of the connections, it looks like this. Each individual dot connects to a university server, which connects to other servers, which connect to other individual dots. The connections between the users are mediated by their universities.

Email as a federated service

File sharing via, say, ResearchGate, is different. There’s no open standard. The service is not federated but owned by one company. It controls the servers, data and users. The university is, literally, nowhere.

In this diagram, each individual dot simply connects to the ResearchGate servers in the middle.

ResearchGate as a centralised service

What these Silicon Valley companies have done is to disintermediate the universities themselves.

Now, from the point of view of these companies, what happens if they integrate with an institution’s own single sign on system is that they reintroduce the university into the diagram. Now the university itself has re-acquired control. It will examine your terms and conditions and veto things it doesn’t like. It will demand that the privacy of its users is protected. It will demand ownership of the content they create. And if it doesn’t get it, it may switch off the single sign on and take its users elsewhere. Even worse, maybe 100 institutions might get together and move elsewhere all at the same time!

So that explains why I think the Silicon Valley companies don’t work with campus credentials. And it explains why universities should prefer services that do. It’s the difference between control that is centralised and control that is federated. Or, to put it another way, between colonisation and independence.

But, you may say, file sharing is different to email. File sharing can’t be provided in a federated way; it needs a centralised infrastructure. Indeed it does. But a centralised infrastructure does not have to mean centralised control. You can have centralised infrastructure in which each university owns and controls its own tenancy, its own users, its own data. This is *Unity, and the logo of the Open Academic Environment project shows you the kind of connections we have in mind.

Open Academic Environment

In the centre is the central OAE infrastructure (known to you as *Unity). This is connected to the institutions (the small dots), which in turn are connected to the individual users (the big dots).

This is exactly the arrangement that is effected when we integrate with your university’s single sign on system, and it reflects our vision. Not the disintermediation of the universities but rather their re-intermediation, a step which means empowerment for the university and respect for the user.

You can find out more about *Unity and the issues raised here by downloading our briefing for university Chief Information Officers.

December 11, 2015 11:43 AM