Planet Sakai

September 19, 2017

Adam Marshall

Lecture Capture (Replay) Forum – Wed 27 Sep PM

From the Replay Team

We will be holding the annual Replay lecture capture user forum at IT Services, Banbury Road on Wednesday 27th September from 14:00-17:30. There will be a buffet lunch beforehand from 13:00-14:00.

The forum is designed to bring Oxford staff up to date with the latest developments in the Replay lecture capture service and is an ideal opportunity to meet other Replay users from across the university.

Both academic and IT/AV/administrative staff are welcome to attend. The event is free, but please book if you plan to attend so that we can correctly assess numbers for lunch:

There will be plenty of opportunity for discussion, along with a series of brief talks, including:

  • How to get started using Replay
  • What’s new with the Replay service and the Panopto software
  • Case-studies from departments
  • Guest speakers from other UK institutions
  • The VLE review and new Panopto integration options, including embedding
  • and more…

There will also be a series of training sessions throughout the next term; dates are to be confirmed, and booking links will be posted at

As always, send any enquiries to the team at

by Adam Marshall at September 19, 2017 02:34 PM

September 18, 2017


Online Video Tutorial Authoring – Quick Overview

As an instructional designer, a key component of my work is creating instructional videos. While many platforms, software packages and workflows exist, here’s the workflow I use:

  1. Write the Script: This first step is critical, though to some it may seem rather artificial. Writing the script helps guide and direct the rest of the video development process. If the video is part of a larger series, including some ‘standard’ text at the beginning and end of the video helps keep things consistent. For example, in the tutorial videos created for our Online Instructor Certification Course, each script begins and ends with “This is a Johnson University Online tutorial.” Creating a script also helps ensure you include all the content you need, rather than ad-libbing only to realize later that you left something out. As the script is written, pay particular attention to consistency of wording and verification of the steps suggested to the viewer, so they’re easy to follow and replicate. Some of the script work also involves setting up the screens used, both as part of the development process and as part of making sure the script is accurate.


  2. Build the Visual Content: This next step could be wildly creative, but typically a standard format is chosen, especially if the video content will be included in a series or block of other videos. A 16:9 aspect ratio is often used for capturing content, since it can accommodate both text and image content more easily. Build the content using a set of tools you’re familiar with. The video above was built using the following set of tools:
    • Microsoft Word (for writing the script)
    • Microsoft PowerPoint (for creating a standard look, and inclusion of visual and textual content – it provides a sort of stage for the visual content)
    • Google Chrome (for demonstrating specific steps – layered on top of Microsoft PowerPoint) – though any browser would work
    • Screencast-O-Matic (Pro version for recording all visual and audio content)
    • A good-quality microphone such as this one
    • Evernote’s Skitch (for grabbing and annotating screenshots), though use of native screenshot functions and using PowerPoint to annotate is also OK
    • YouTube or Microsoft Stream (for creating auto-generated captions – if it’s difficult to keep to the original script)
    • Notepad, TextEdit or Adobe’s free Brackets (for correcting auto-generated caption files in VTT, SRT or SBV format)
    • Warpwire (for posting/streaming/sharing and tracking video content online); Sakai is typically used as the CMS to embed the content and provide additional access controls and content organization
  3. Record the Audio: Screencast-O-Matic has a great workflow for creating video content, and it even provides a way to create scripts and captions. I tend to record the audio first, which in some cases may require two to four takes. Recording the audio first makes it easier to create appropriate audio pauses and to use deliberate inflection and enunciation of terms. For anyone who has created a ‘music video’ or set images to audio content, this will seem pretty doable.
  4. Sync Audio and Visual Content: This is where the use of multiple tools really shines. Once the audio is recorded, Screencast-O-Matic makes it easy to re-record, retaining the audio portion and replacing just the visual portion of the project. Recording the visual content (PowerPoint and Chrome) is pretty much just listening to the audio and walking through the slides and steps using Chrome. Skitch or other screen capture software may have already been used to capture visual content I can bring attention to in the slides.
  5. Once the project is completed, Screencast-O-Matic provides a one-click upload to YouTube or a save-as-MP4 option; the MP4 can then be uploaded to Warpwire or Microsoft Stream.
  6. Once YouTube or Microsoft Stream has a viable caption file, it can be downloaded, corrected as needed, and then paired back with any of the streaming platforms.
  7. Posting the video within the CMS is as easy as using the LTI plugin (via Warpwire) or the embed code provided by any of the streaming platforms.
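Hand-editing captions in Notepad or TextEdit works, but recurring mis-transcriptions (auto-captioners reliably mangle product names) are easier to fix in bulk with a short script. Here is a hypothetical sketch; the correction table is invented:

```python
# Hypothetical sketch: batch-correct recurring mis-transcriptions in an
# auto-generated caption file (VTT/SRT/SBV are all plain text, so a
# simple find-and-replace works). The correction table below is invented.
CORRECTIONS = {
    "sock eye": "Sakai",      # proper nouns are frequent casualties
    "warp wire": "Warpwire",
}

def fix_captions(text: str) -> str:
    """Apply each correction across the whole caption file."""
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)
    return text
```

Read the downloaded caption file, run it through `fix_captions`, and write it back before pairing it with the streaming platform again.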

by Dave E. at September 18, 2017 04:03 PM

September 16, 2017

Dr. Chuck

Chuck’s “TV” Career – 1994-2000

It is a long time ago, but I was once on TV nationwide talking about the Internet:

The whole thing happened because my late friend Rich Wiggins and I had a friend named John Liskey, who was a reasonably high executive in the TCI cable organization. The three of us ended up at the bar in the early 1990s, where Rich and I would tease each other about the emerging Internet technologies. Rich was always the logical NYT-reading academic (who never even finished his BS) and I was the pragmatic grumpy alpha coder (who ultimately got a PhD) who knew the real world. So we would have spirited debates at the bar. John would sit and watch and say, “these conversations should be on TV”.

So in 1994, John got TCI to hire camera folks, studio folks, a director and editor and we taped a monthly show that was sent (on tape) to the TCI affiliates in the major markets which showed each show 10-20 times per month on the “local origination” channel.

It started in 1995, but the cable companies were in a feeding frenzy in the late 1990s, so each time the Lansing / East Lansing market was sold to some other company we had to rename the show. John just went to the new company and kept us in production and on the air. Our shows were:

Internet: TCI
Nothin’ but Net
North Coast Digital

In 1995-1996 we were avant-garde. We won two national awards in 1995 and a “Michigan Cable Emmy,” and we predated TechTV and Leo Laporte by three years. We were first on the scene, but as soon as TechTV arrived with 24-hour programming and national distribution, our days were numbered. We kept producing on a less-than-once-per-month schedule through 1999, and our last effort was to try to become reporters for TechTV:

TechTV Audition Tape

Our audition tape was viewed by TechTV, but by 1999 “Old Guy Nerds on TV” was not front and center in their programming. They wanted to feature young, good-looking people talking about video games, so we had no chance. Interestingly, Leo Laporte’s future was no longer on TechTV either, and he went on to create his own show on the Internet.

So it was a fun time, and it taught me how to talk to a camera and think on my feet.

My Internet History, Technology, and Security course on Coursera and my video-based column in IEEE Computer magazine were a way to revive my collection of material from those early days in the mid-1990s.

Secretly I just want to be the Anthony Bourdain of tech :)

by Charles Severance at September 16, 2017 02:34 PM

How I build Open Textbooks

I am often asked how I build textbooks. I am a little weird in that I am radically open and refuse to use a commercial service or any non-open software. I prefer a line-oriented format in GitHub, using open software and a process that I run myself.

The best example of how I write open books is here:

I write my books in Pandoc Markdown

I do pay for and use OmniGraffle for figures (Inkscape is free, but super hard to use):

I export the figures to SVG and EPS:

Then I use pandoc (which uses LaTeX) to produce PDF and epub
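In its simplest form that step is a pair of pandoc invocations. This is a minimal sketch with hypothetical file names; a real book build usually adds a metadata file and per-format options:

```shell
# Pandoc Markdown source -> PDF (pandoc drives a LaTeX engine internally)
pandoc book.md -o book.pdf

# The same source -> epub, ready for the Amazon/mobi pipeline
pandoc book.md -o book.epub
```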

Then I upload the PDF to CreateSpace and the epub to Amazon and it is auto-converted to mobi:

This is perhaps a more complex process than using Word, Pressbooks or GitBook, but I prefer a pipeline that I completely own, control and can adjust. Other methods are easier; I prefer control, ownership and introspection over ease of use.

by Charles Severance at September 16, 2017 02:20 PM

September 14, 2017

Michael Feldstein

Research in Translation: Cultural Limits of Self-Regulated Learning

We are currently facing two civilizational educational challenges. By “civilizational,” I mean that they go beyond country- or region-specific challenges. They are unprecedented in the history of humanity. The challenges I’m talking about are universal access to quality education and universal lifelong learning, both of which are almost certainly ones that you’re well aware of. But we don’t talk enough about how unprecedented they are, what we need to learn to do differently to meet them, and how we could go about learning what we need to learn.

One plausible solution to both civilizational challenges is to get a lot better at teaching humans how to get better at learning. Again, this is probably not a new or shocking assertion to you. But the research paper I’m going to describe here—”Eight-minute self-regulation intervention raises educational attainment at scale in individualist but not collectivist cultures” by René F. Kizilcec and Geoffrey L. Cohen— suggests that doing so is going to be particularly hard because the effectiveness of different self-education strategies is heavily mediated by contextual factors like culture.

At the very least, this means that our conversations about trendy approaches like adaptive learning and competency-based education need to become a lot more nuanced than they are right now. We’re not paying enough attention to the things we don’t know yet about the circumstances under which these strategies work.  But more profoundly, the findings of the paper raise questions about whether our fundamental approach to educational research is adequate to the task of learning how to meet these grand educational challenges of our age.

The Challenges: Universal and Lifelong Education

The first grand challenge is to give every human on the planet the access and support they need to achieve the highest level of education that suits their individual goals and abilities. No civilization has ever come close to achieving this before, particularly in a large, heterogeneous culture like the United States. And while the defunding of public college and university systems has increased the challenge, I’m not aware of any evidence that we could meet this challenge with our currently structured educational system at any economically feasible funding level.

At the same time, the idea of a “terminal degree” being synonymous with the end of an individual’s education is going away. We’ve heard talk for the past few decades about how “knowledge workers” are the future of the economy and how everyone will need to be “lifelong learners” because the skills and knowledge they need will change quickly and constantly. But now we’re really seeing it show up all over the economy and, as a civilization, we haven’t yet developed strategies to deal with it. For example, as the coal mining industry dies, we don’t know how to help all of those miners learn the skills they need to find new careers. Our existing systems are not adequate for those kinds of large-scale educational transitions. So the second grand civilizational challenge is universal lifelong learning.

These challenges are both partly ones of scale. We know how to help some humans achieve the highest level of education that suits their goals and abilities. We know how to help some miners learn what they need to make career transitions. We just don’t know how to do these things for everyone by growing our current system. Logistically, it’s something we’ve never done before, and economically, it’s not clear that we have the resources to give everybody access to the education they need, never mind at the level of quality that would eliminate any achievement gaps. At least, not with our current system.

Whether consciously or not, most of the high-profile efforts and many of the discussions around how to address these challenges have been heavily influenced by research that has come to be known as Benjamin Bloom’s “two sigma problem.” Before we can understand the full implications René F. Kizilcec and Geoffrey L. Cohen’s paper, we need to understand the two sigma problem, including some of the limitations of the research and the ways in which it has framed educational reform discussions.

Understanding the Two Sigma Problem

Benjamin Bloom is most famous for three contributions to educational research. The first is Bloom’s Taxonomy, which is not relevant to this post. The second is the mastery learning approach, which we’ll get to shortly. And the third is the two sigma problem.

A “sigma,” or “standard deviation,” is a statistical measure of how far from a group’s average something is. So it’s a relative concept. Height for an adult human being doesn’t vary nearly as much as, say, height for all primates. So one standard deviation, or sigma, from the average human height will be a smaller increment than one sigma for average primate height. In Bloom’s world, where he is measuring variation in grades in a class, a very rough proxy for one sigma is one letter grade in a final course grade, e.g., the difference between a C and a B.
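To make the relativity of a sigma concrete, here is a tiny Python sketch using Python’s standard library; the grades are invented for illustration:

```python
# A sigma is relative to the spread of the group being measured.
# The grades below are invented for illustration.
from statistics import mean, stdev

course_grades = [62, 70, 74, 78, 82, 86, 94]  # final percentages

mu = mean(course_grades)      # 78.0
sigma = stdev(course_grades)  # about 10.6 for this class

# A student one sigma above the mean sits at mu + sigma, which in
# Bloom's rough proxy is about one letter grade above average.
one_sigma_up = mu + sigma
```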

The popular interpretation of Bloom’s two sigma experimental result is that students who get one-on-one tutors do better by roughly two sigma, or two final course letter grades, than students in a typical class who do not get one-on-one tutoring. But that’s not quite right. Or at least, it’s not the full picture.

To start with, the two-sigma research is built on Bloom’s previous work on mastery learning. You may not be familiar with this term, but if you’re at all engaged with ed tech, then you’ve probably seen traces of its influence. The basic idea of mastery learning is that you break a subject up into small learning objectives that are properly sequenced, and students don’t move on to the next learning objective until they’ve demonstrated “mastery”—often defined in terms like “answering 90% or better of assessment questions correctly.” Bloom and his colleagues found that they could achieve a one-sigma improvement for students who were taught using mastery learning techniques over similar students who were not. Students who were taught using mastery learning and received one-on-one tutoring achieved a two-sigma improvement relative to the control groups.

Textbook publishers love this result because it gives them a direction for product development. They know how to break course subjects down into small, sequenced learning objectives. They also know how to set thresholds on assessments that unlock the next bit of content. Whether educators choose to actually employ mastery learning techniques is out of their control, but at least they can develop products that are friendly to that approach. And if they could somehow automate the mastery learning pedagogical techniques—say, through adaptive learning—then they could show that their products can improve students’ learning outcomes by two course grades over traditional teaching approaches.

But this approach and Bloom’s “two sigma” framing are littered with caveats (as research tends to be). First, his control groups were school children in traditional courses. If you vary that environment significantly—say, by putting students in a MOOC or teaching adults pursuing non-degree career development knowledge—I’m not aware of robust findings that Bloom’s results still hold. Second, not all subjects lend themselves equally well to being broken up into small, discrete, and straightforwardly measurable learning objectives that can be determinatively sequenced. Third, as far as learning outcomes go, a “letter grade” difference is not a terribly authentic measure of progress.

Lastly, and most relevant to the Kizilcec and Cohen paper, Bloom and his colleagues were never able to pin down exactly what it was about one-on-one tutoring that led to that second sigma of improvement. Bloom himself wrote,

It should be pointed out that the need for corrective work under tutoring is very small.

So what is it about one-on-one tutoring that made such a big difference? Bloom and his colleagues tried to isolate one or two variables that would account for it. They failed.

What if they failed because there aren’t just one or two factors that account for the difference that human tutors make? What if there are many different factors that affect different students to different degrees in different contexts? What would this mean for the whole personalized learning enterprise? Even more broadly, if the impacts of teaching interventions vary dramatically, singly and in combination, across a wide range of contextual factors, what are the implications for making progress in educational research? What does a science of learning look like in a world where isolating a variable in a particular experimental condition doesn’t tell us a whole lot that can be generalized very far?

These are the questions that ultimately interest me in the Kizilcec and Cohen paper. But as with all of these “Research in Translation” articles, we’re going to have to start by unpacking some of the discipline-specific knowledge that underpins the study itself. Some of it will probably be unfamiliar to you. For example, there’s a good chance that you haven’t run across the theory of cultural dimensions that the paper draws upon unless you’re a social psychologist or a sociologist. On the other hand, if you’re reading this blog, there’s a reasonable chance that you know at least a little bit about self-regulated learning. But even there we may find some aspects or implications that you’re not aware of.

In fact, let’s start with self-regulated learning.

Understanding Self-Regulation and Self-Regulated Learning

Kizilcec and Cohen aren’t concerned with mastery learning but rather self-regulated learning. If we forget about the “learning” part for a moment and focus on the “self-regulation” part, the concept is familiar enough in our daily lives.

Suppose that I want to lose 20 pounds. There are any number of strategies that I could employ to try to get myself to where I want to be. Here are a few:

  • Plan a beach vacation six months from now and think about how I want to look in my bathing suit
  • Read scary articles about the consequences of being overweight
  • Promise myself I will buy something I really want if I achieve my goal
  • Commit to giving money to a cause I hate if I don’t achieve my goal
  • Do a 20-minute cardio workout four times a week
  • Increase my fiber intake
  • Fast 8 hours out of every day
  • Reduce my sugar and carbohydrate intake
  • Adopt the cabbage soup diet
  • Wear ear magnets

Notice that there are two basic kinds of strategies on this list: motivation and implementation. Things that make me want to take action and actions that I can take.

Let’s say I decide that I’m going to try to lose weight by wearing ear magnets. Every morning I put them on, and every day I weigh myself. After a month, I have not lost weight. Being the introspective person that I am, I come to the conclusion that ear magnets are not helping me achieve my goal.

Next, I try reducing my sugar and carbohydrate intake. There’s only one problem: I can’t get myself to stick to the plan. Every morning, I reach for a bagel or muffin for breakfast. Whenever I’m out for dinner, I just can’t stop myself from ordering dessert. A month later, I still haven’t achieved my goal.

So I decide I will promise myself that I will buy that remote controlled submarine drone I’ve been lusting after if I can lose weight by giving up those bagels and desserts. That will be my next experiment. And so on.

That’s self-regulation in a nutshell:

  1. Set a goal
  2. Pick a strategy that might help you achieve your goal
  3. Monitor your progress toward your goal
  4. Reflect on whether your chosen strategy has resulted in satisfactory progress toward your goal
  5. Adjust your strategy (or stick with it) accordingly
  6. Rinse, repeat
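The loop above can be sketched in code. This is purely schematic; the strategies and the progress check are placeholders for illustration, not anything from the paper:

```python
# Schematic sketch of the self-regulation loop; every specific here
# (the strategies, the progress check) is an invented placeholder.
def self_regulate(goal_met, strategies, assess_progress, max_cycles=6):
    strategy = strategies.pop(0)               # 2. pick a strategy
    for _ in range(max_cycles):                # 6. rinse, repeat
        progress = assess_progress(strategy)   # 3. monitor progress
        if goal_met(progress):                 # 4. reflect on the result
            return strategy                    # goal achieved
        if strategies:                         # 5. adjust: try the next one
            strategy = strategies.pop(0)
    return None                                # out of cycles or ideas
```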

Apply this basic self-regulation approach to learning and you have…wait for it…self-regulated learning!


When I wrote at the top of the post that one plausible answer to our civilizational education challenges is to get a lot better at teaching humans how to get better at learning, that basically means getting a whole lot better at teaching students how to be self-regulated learners.

SRL post-dates Bloom’s work on mastery learning and it reflects a different focus. Mastery learning is primarily focused on the content that is being learned. Any consideration of learner motivation is a means to an end. In contrast, SRL is focused on achieving the learner’s goals more effectively. The goal may very well be to master some coherent set of content. But any consideration of content is a means to an end. Specifically, the learner’s end.

At least in theory. In practice, there are lots of attempts underway, with varying degrees of self-awareness, to marry mastery-based adaptive learning products with SRL techniques. For example, here’s an effort at Essex County College in Newark, NJ in which they recruited John Hudesman, a researcher in SRL, to marry the approach with an essentially mastery-based model:

The basic idea is that maybe the SRL feedback loop can help achieve that elusive second sigma. It’s more sophisticated than Bloom’s hunt for the one or two factors that account for one-on-one tutoring’s effect on all students in the sense that each student can individualize and hopefully find for herself the factors that will enable her to reach her goals.

But here, too, there are caveats.

The Limits of SRL

Let’s go back to that weight loss self-regulation problem. Suppose I’ve tried everything. Ear magnets. The cabbage soup diet. Exercise. Weightwatchers. Promises. Threats. Nothing works. In every case, either I can’t stick with the plan or I stick with it but don’t lose weight. Maybe, after many months of frustration, I talk to my doctor, who tells me that a side effect of my prescription medication is weight gain.


Now I’m in uncharted territory. Can I take a different medicine? Is there some other way to counterbalance the effect of the medication? Or am I just stuck being 20 pounds overweight?

There are limits to the power of self-regulation, including the power of self-regulated learning. If I’m a single parent working two jobs and driving for Uber in between to make ends meet, then no SRL strategy is going to give me time that I don’t have. If I have a learning disability, then I may need help finding SRL strategies that account for my particular situation.

Here’s the tricky part, which gets to the heart of the Kizilcec and Cohen paper: barriers that impact the effectiveness of SRL strategies might be non-obvious and influenced by things like culture. In particular, the paper examines the differences in average effectiveness of a couple of different self-regulation strategies for people from individualist cultures versus collectivist cultures. This distinction will take a little unpacking in order to understand how it affects SRL.

Suppose the week that I have a big assignment due, my uncle dies. I wasn’t particularly close to him, but my extended family is generally close-knit, and my relatives would be hurt if I didn’t come. I really value the closeness of my extended family, even if I didn’t really get to know my uncle well, and I care a lot about how they feel. It’s not really an option for me to skip the funeral, which is halfway across the country. My sense of social obligation to my family makes it impossible for me to employ the self-regulation strategy of setting aside the time I need to complete my schoolwork.

Imagine that your whole life were filled with these sorts of social obligations. Not just an occasional death in the family, but daily demands that you can’t predict and can’t ignore. In your world, there are many people who can ask you to change your plans at any time. Extended family members, neighbors, coworkers, or even strangers can place demands on you and, depending on the specifics, you really can’t say “no.” Because you live in a culture that places a high value on social bonds, and you have learned to place a high value on those bonds as well. The term of art for this kind of a culture is “collectivist,” and it contrasts with an “individualist” culture, where less emphasis is placed on social expectations and more on individual achievement.

At the very least, you will find yourself facing a problem similar to struggling to lose weight while taking a medication that causes weight gain. Time management strategies don’t work because you’re really not in control of your time. But it may go even deeper than that. If your world is highly unpredictable because demands that you can’t reject come at you daily and can’t be anticipated, then your fundamental idea of what it means to “manage time” may have to be different. You can’t just schedule certain nights of the week or reserve X hours to do your homework. You don’t have that power. Or, at least, it wouldn’t occur to you that exercising that power is a viable option, because you care deeply about what your willingness to fulfill your social obligations says about you as a person.

This is exactly the hypothesis that Kizilcec and Cohen wanted to test. They wanted to see whether there are differences in how SRL works for students in collectivist versus individualist cultures.

The stakes are high. Remember those two civilizational challenges: universal and lifelong education. To meet those challenges, we need to provide less traditional and formalized teacher support than the students in Bloom’s control groups got. We just don’t have enough teachers and classrooms to go around. This is a fundamentally different challenge than the one that Bloom was probing with his two sigma experiments. If it turns out that all kinds of factors, some as subtle as the nature of the social ties in the culture you come from, can impact the effectiveness of our ability to teach students how to teach themselves, then how can we possibly meet our unprecedented civilizational goals? How can we tease out all the many possible factors, particularly when they interact with each other? How would we even begin to go about figuring that out in a rigorous, evidence-grounded way?

Hold that thought. We’re going to return to it later in this post. First, though, we have to understand the experiment that Kizilcec and Cohen conducted to test whether this is even a problem. And to do that, we first need to understand one piece of social psychology.

Understanding Cultural Dimensions Theory

Kizilcec and Cohen wanted to figure out a way to test whether, on average, students from individualist cultures benefit more from being taught SRL techniques than students from collectivist cultures. The first thing they needed in order to do that is some way to define individualist versus collectivist cultures in a reasonably rigorous way.

As you might expect, the researchers didn’t just pull the idea of individualist and collectivist cultures out of thin air. There is a body of research literature from which they were drawing. In particular, they drew on the research of a social psychologist named Geert Hofstede. During the late 1960s, Hofstede worked at IBM, where he founded and led the Personnel Research Department. This was at a time when globalization was really beginning to take hold and American-founded companies like IBM were learning how to run divisions in other countries with very different cultures. In the early days, these companies believed that they could train their new international employees on IBM-standard management practices and all would be well. But it soon became apparent that the challenge was more complicated than they thought. People in other countries responded to IBM’s management practices differently.

At about this time, Hofstede stumbled upon a database of 117,000 attitude surveys from IBM employees all over the world. When analyzed for patterns on an individual level, the data were confusing. But when Hofstede grouped employees by nationality and looked for similarities and differences between national groups, some patterns began to emerge.

As I have written about before, I worry that our generally low level of statistical literacy means that many of us are prone to misread statistical results and have little confidence in statistical analysis. So I am making a practice of providing some explanation of the statistical methods used when I write up these Research in Translation posts. In Hofstede’s case, he used a method called “factor analysis.”

We can get a basic sense of the intuition that underlies that method through a common joke. You’ve probably seen lists with titles like “You might be a _________ if…”. The basic idea is that there are funny and non-obvious little traits and experiences that are shared by people of a certain type. When they are not mean, they are often inside jokes. For example, RallyPoint, a site that bills itself as “The Professional Military Network,” has an article entitled, “You might be a veteran if…” Item number five on the list is “You remember laughing at troops who thought 29% APR was good…” I don’t even know what that means, and I certainly wouldn’t think that an attitude about interest rates would be a marker of whether somebody is a veteran.

Factor analysis starts with a collection of seemingly unrelated variables (like answers on an employee attitude survey) and looks for how closely correlated they are. If a group of variables is highly correlated, then it may be because they are all indications of a hidden or “latent” variable.

“Oh, you answered ‘yes’ on eight out of these ten seemingly random questions. That suggests that you might be a veteran.”
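The intuition is easy to demonstrate with a toy simulation: generate survey answers where a handful of items are all driven by one hidden trait, and pairwise correlations reveal the grouping. All data here is simulated; this sketches the intuition behind factor analysis, not the full method:

```python
# Toy illustration of the intuition behind factor analysis: five survey
# items share one latent trait, five are pure noise, and pairwise
# correlations reveal the grouping. All data is simulated.
import random

random.seed(0)
n = 500
latent = [random.gauss(0, 1) for _ in range(n)]  # the hidden trait

def item(loading):
    """An observed survey item: loading * latent trait + noise."""
    return [loading * t + random.gauss(0, 1) for t in latent]

trait_items = [item(1.5) for _ in range(5)]  # driven by the latent trait
noise_items = [item(0.0) for _ in range(5)]  # unrelated to it

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

within = corr(trait_items[0], trait_items[1])  # high: shared factor
across = corr(trait_items[0], noise_items[0])  # near zero
```

Factor analysis runs this logic in reverse: starting from the observed correlations, it infers how many latent variables are needed to explain them.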

Hofstede applied factor analysis to national groups of employees and found evidence of four latent variables, which he called “cultural dimensions.” (Subsequent research has reproduced Hofstede’s results, and the theory has been refined and expanded.) He called one of the latent variables “Individualism/Collectivism.” It purports to capture the degree to which each national culture has a strong web of the kinds of social obligations and expectations that I described in the previous section. Hofstede has used the cumulative research to create a country-by-country comparative index of his different dimensions, which you can play around with here.

Kizilcec and Cohen used Hofstede’s country index of the Individualism/Collectivism dimension to provide some rigor to their question about cultural differences impacting the effectiveness of SRL techniques.

Understanding the Kizilcec and Cohen Paper

The researchers conducted experiments on two MOOCs. In each case, students were given eight-minute tutorials on two SRL techniques: Mental Contrasting (MC) and Implementation Intentions (II). If you recall the weight loss strategy list from earlier in the post, there were strategies that helped motivate me to do what I needed to do (like promising to buy myself a submarine drone if I met my weight loss goal) and other strategies for actually accomplishing my goal (like wearing ear magnets). MC and II fall into these two respective categories. MC is intended to help students self-motivate by (in the words of the authors)

…vividly elaborating on positive outcomes associated with attaining a goal (e.g., learning a new skill) followed by vividly elaborating on central hindrances in the present that might interfere (e.g., a busy work schedule). By juxtaposing the desired future with current obstacles, MC can strengthen goal commitment and striving. Insofar as the obstacles to goal attainment are seen as surmountable, MC induces a sense that the desired future is within one’s reach, thereby increasing commitment and effortful goal striving.

On the other side,

[t]he II procedure helps people plan how to overcome obstacles and execute goal-directed actions. It encourages people to generate concrete if–then plans. Unlike unstructured planning, an II links a specific situation to a goal-directed action. An example of an II is, “If I feel too tired after work to watch the next lecture, then I will make myself coffee to stay awake.” Forming an II facilitates goal attainment because it increases the likelihood that people will respond efficiently and even automatically to regular obstacles that threaten the completion of their goals.

Using their short MC and II tutorials, the researchers were able to increase MOOC completion by 32% over the control group in the first experiment and 15% in the second—for students from individualist countries. That’s a pretty remarkable result. If we’re trying to achieve these big civilizational goals of providing every human with higher education and lifelong learning advancement, then the possibility that we could increase completion rates in low-facilitation courses by up to 30% with an eight-minute lesson helping students get motivated and focused is pretty huge.

That’s the good news. The bad news is that students from collectivist cultures showed no significant benefit. In fact, using India as a country on the collectivist end of the scale, the researchers found that

relative to US respondents, Indian respondents reported that their social environment was more complex and that they shied away from forming if–then plans. Indian respondents listed more obstacles that could interfere with the goal of achieving a good grade in an online course than US respondents (India median = 4, US median = 3; Kruskal–Wallis χ2 = 9.50, P = 0.002). They were also more likely to report that if–then plans oversimplify the complexity and ignore the uncertainty of real-life situations [t(192) = 3.12, P = 0.002, d = 0.45].

In other words, Indian students were more likely to say that the II self-regulation strategy in general was unrealistic. It didn’t account for the complexity in their lives.

There’s a lot more to this study than I’m going to cover here. For example, students from collectivist countries showed some benefits from MC when decoupled from II, which is interesting to think about. Generally speaking, I can’t unpack all the background in these Research in Translation posts and still have room to cover all the nuances of the papers themselves (although one of my goals is to give readers enough background that they can read and understand the papers themselves).

But the headline here is provocative enough. When we think back to Bloom’s failure to find the one or two magic ingredients that tutors add which account for the second sigma of improvement, we can now see that the answer very likely is different from student to student. There is no silver bullet. In fact, there are all kinds of non-obvious factors that influence how well a given educational strategy works for a given student.

So if we still want to address those two civilizational challenges—or even an intermediate challenge along the way, like closing achievement gaps—then Kizilcec and Cohen’s study raises a critical question:

Now what?


Right about now, some of you are probably thinking, “So after making me read all of that, your grand conclusion is that students are individuals? Thanks for a whole lot of nothing, buddy.” Fair enough. But meeting these big educational challenges requires us to navigate between a rock and a hard place.

On the one hand, we don’t want to fall for easy answers. As a culture, we tend to be a little schizophrenic about our attitudes toward education. Even smart people who believe in their hearts that every student is an individual human with different needs and goals can all too easily slip into over-generalizations and solutionism when the conversation turns from individual students to solving large-scale educational problems.

On the other hand, if we’re committed to solving the big educational challenges, we can’t just shrug our shoulders and say, “It’s too hard. There’s no way to sort out all the factors.” We have to come up with a research approach that accounts for the fractal problem of student differences and how various combinations of those differences affect what works for different students in different learning contexts. The Kizilcec and Cohen paper is one model for what that kind of science could look like. But we need more. A lot more.

In my view, we need what I call “empirical educators” and what Candace Thille calls “citizen scientists.” We need to crowd-source this problem by recruiting front-line classroom educators as field researchers who work in cooperation with researchers trained in the learning sciences. For example, Kizilcec and Cohen’s experiments, however cleverly designed, can’t tell us how these results play out in courses that are not MOOCs. Or with students who are first-generation Americans whose families come from collectivist societies. Or whether there are identifiable factors that tell us which students from a collectivist country are most or least likely to match their cultural norm in terms of SRL. Or what other SRL techniques might be more effective given any combination of these variables. The amount of research one can imagine being generated off the results of this one study alone is massive.

We spend a lot of public and private money chasing silver bullets in education. I propose we would be better served by investing that money in providing educators with the training, support, and incentives to participate in the work of advancing the sciences of learning. At the very least, all professional educators should have a certain level of literacy on what we know about education, be able to read and understand the implications of a research paper, and believe that having this knowledge and these skills is a core part of their professional identity. Nobody should still be talking about learning styles, for example.

Some educators may take this a step further and learn how to make the classroom experiments that they intuitively conduct on a regular basis a little more rigorous. And some may actively collaborate with professional researchers or even design their own studies using the disciplinary research tools that they already know from their graduate training.

There won’t be one answer for every educator, any more than there will be one answer for every student. My country-doctor primary care physician has a different relationship to medical science than an oncologist working at Memorial Sloan Kettering Cancer Center. But they both believe that having some relationship to medical science is essential to doing their jobs. In education, where the fractal nature of the problems we are trying to understand requires us to run many experiments in many contexts, having all educators see themselves in some sense as citizen scientists is even more critical.

This post is part of our Research in Translation series, which is funded in part by the Bill & Melinda Gates Foundation. The findings and conclusions (or views) contained within are those of the authors and do not necessarily reflect positions or policies of the Bill & Melinda Gates Foundation.

The post Research in Translation: Cultural Limits of Self-Regulated Learning appeared first on e-Literate.

by Michael Feldstein at September 14, 2017 05:23 PM

September 11, 2017

Michael Feldstein

Some Ed Tech Perspective on UC’s Billion-Dollar Payroll System Fiasco

In 2011 the University of California laid out plans for a new payroll system called UCPath (for Payroll, Academic Personnel, Timekeeping, and Human Resources). The goal of the $170 million project was to save a reported $100 million per year eventually and to replace a 30-year-old Payroll Personnel System (PPS) that runs separately for each of the 11 UC locations with Oracle’s PeopleSoft payroll and HR systems. All systems were planned to be live by the end of 2014 and run centrally in a new UCPath processing center.

In 2014 we described how the project had grown from $170 million and 36 months to $220 million and 72 months. In spring of this year we described how the project was planned to cost $504 million and take 93 months (almost five years longer than originally planned).

A few weeks ago the state auditor released a report claiming that the project would really cost $942 million. The $942 million does not mean that the $504 million estimate has changed since spring, but the auditor does claim that UC is not reporting the full costs of the implementation. From the audit summary on page 1:

The Office of the President currently projects the implementation cost of UCPath to be $504 million—$334 million over its original estimate of $170 million—and it has delayed the date of UCPath’s implementation by nearly five years, to June 2019. Moreover, the $504 million estimate does not represent the full cost of the project because it includes just a fraction of the cost associated with the campuses’ implementation efforts and a shared services center, known as the UCPath Center. The full cost to the university of adopting UCPath is likely to be at least $942 million.

Most of this information was available in the spring, but the state auditor makes a compelling, well-documented argument.

The Worse Part

However, this is not the big news from the audit. In my 2014 post I commented on Christopher Newfield’s analysis at Remaking the University on the claimed benefits from the project:

What about the current estimate of benefits – is it $30 million per year as Chris described or closer to $100 million per year? One big concern I have is that the information on project benefits was not updated, presented to the regents, or asked by the regents.

Well it turns out that was exactly the problem based on this finding from the audit:

The Office of the President’s initial business case in 2011 asserted that UCPath would result in $753 million in cost savings, primarily from staffing reductions at the campuses. However, the UCPath project director told us that the Office of the President no longer expects to realize those projected savings. Several campuses also reported to us that they do not anticipate the staff reductions that the 2011 business case promised. In fact, in a status update to the University of California Board of Regents (regents) in July 2017, the Office of the President did not discuss any offsetting savings but rather discussed creating efficiencies and avoiding costs.

You read that right. The $753 million in savings that was the basis for the project is not going to materialize. There clearly was a need to replace 30-year-old systems, but the justification for the UCPath project and its specific approach was based on large staff cuts to be achieved by centralizing payroll for all 10 universities in the system. To get the true scale of the cost impacts of this project, look at this helpful chart from page 16 of the audit (note the $504 million in the top right – that is the cost claimed by UC):

What this means is that the net savings / cost have changed by almost $1.4 billion. Let that sink in. Billion with a ‘b’.

UC Response

The University of California Office of the President (UCOP) responded to the audit both formally in the audit report itself and informally through media statements. The official UCOP statement starting on page 35 of the audit mostly notes that President Napolitano was not at UC when UCPath started and that this is a necessary and complex project, claims they have already made improvements, and disputes some of the specific recommendations as being heavy-handed. But at no point does UCOP dispute the findings. What is most problematic is the emphatic claim at the end:

I have complete confidence in UC’s ability to continue successful implementation of UCPath, a necessary project with significant, expansive, and long-term benefits to the University.

There is no serious re-questioning of assumptions or of UC’s ability to finish the job, despite plenty of evidence pointing to fundamental problems in the project.

The UC response in the UCLA paper is even more problematic, as it mostly argues that the implementation itself only costs $504 million and that many other items are operational in nature.

Claire Doan, a UC Office of the President spokesperson, said the state audit includes additional costs that should not contribute to the overall cost estimate. [snip]

The UCPath Center will assume all payroll and human resources functions systemwide, according to the state audit. Doan added the UC believes the $130 million the state audit cited for the center’s operating cost should be included in the project’s operations budget, rather than its implementation budget, because the UC does not typically include operating expenses in project implementation costs.

In other words, UCOP is complaining about accounting methods while not disputing the findings. UCOP wants to just look at IT implementation costs, while the state auditor is looking at “the full cost to the university of adopting UCPath”.

Some Perspective

We here at e-Literate are focused more on ed tech – the impact of changes to teaching and learning enabled by technology. So it might help to add some ed tech perspective on this story.

Taking the well-grounded assumption that the project, or some form of it, was necessary, and making the assumption that UC’s original plan made some sense ($170 million for IT implementation), let’s look just at the impact of cost overruns.

  • Using the UCOP argument, the IT implementation cost overrun is currently $334 million
  • Using the state auditor argument, the total UCPath cost overrun is currently $636 million
  • Adding in the disappearance of planned savings, the change in savings / cost is almost $1.4 billion
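A quick tally of those figures, using the numbers as reported above (all values in millions of dollars):

```python
# Sanity check of the overrun figures quoted above, in millions of dollars.
original_it_estimate = 170   # UC's 2011 IT implementation estimate
current_it_cost = 504        # UCOP's current implementation figure
auditor_total_overrun = 636  # state auditor's full-cost overrun, as reported
lost_savings = 753           # 2011 business-case savings no longer expected

it_overrun = current_it_cost - original_it_estimate
net_swing = auditor_total_overrun + lost_savings

print(it_overrun)  # 334
print(net_swing)   # 1389 -- "almost $1.4 billion"
```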

Keep in mind that much of the project is funded by a 20-year bond. Some comparisons using that time frame (we’ll factor in inflation and cost increases by adding 1.5x for a range):

  • Based on typical UC campus costs and extrapolating, the cost of providing an LMS for every UC campus for 20 years is likely $66 – $99 million
  • Using EDUCAUSE Core Data of $96 – $110 per student median spend in the US, the costs of centralized instructional technology support of all applications and services for every UC campus for 20 years is likely $500 –  $850 million

The fallout from UCPath’s cost overruns and loss of planned savings likely exceeds the entire combined instructional technology budget for all 10 UC campuses. This project matters.

The post Some Ed Tech Perspective on UC’s Billion-Dollar Payroll System Fiasco appeared first on e-Literate.

by Phil Hill at September 11, 2017 03:32 AM

September 09, 2017

Dr. Chuck

The worst not-spam DNS Verification Email Ever – DreamHost

I am putting this post up as a public service since most of the results on the web are wrong for this topic.

Lately I have been moving a few domains to DreamHost because they have a super simple and 100% free way to use LetsEncrypt certificates on my domains.

This is my first experience with DreamHost and I am super impressed with the simplicity of their management UI, the competence of their tech support, and their free LetsEncrypt Certificates. I can have SSL even on domains that are only an HTTP redirect (I have a lot of those).

Transferring Domains

When you transfer a domain, there are lots of emails that go back and forth. Most of those make perfect sense. But there is one mail you get *after* the transfer is complete that completely looks like spam but turns out to be essential.

The mail is from (not what the DreamHost documentation claims) and has a subject line of:


And the text looks something like this (yes the question marks are there).

Fran?ais  Italiano  Portugu?s  Espa?ol  Deutsch  Polskie  Srpski

As of January 1, 2014, the Internet Corporation for Assigned Names
and Numbers (ICANN) has mandated that all ICANN accredited registrars
begin verifying the WHOIS contact information for all new domain
registrations and Registrant contact modifications.

The following Registrant contact information for one or more of
your domains has not yet been verified:

You are supposed to click on a link to to verify the domain. Here is a screen shot of the mail.

My “WOAH THERE THIS MUST BE SPAM” detector went off like crazy. I Googled around a bit and many folks felt it was Spam. So I just deleted them and went about my day.

TL;DR This message is not spam

Somewhat later I went into my DreamHost account and saw this under Domains -> Registrations


So I resent the mail and the spam-like message immediately showed up. At that point I should have just assumed it was not spam. But just to be sure I talked to DreamHost tech support and they verified it was OK.

I clicked on the verification link to in the email and it said “thanks” – and then after about 60 seconds the “need to verify” message went away in my DreamHost UI.

So this message is legit. It is an interesting question as to the possible harm that we do when legit messages look so much like spam and then turn out not to be spam. It took me 3 weeks to figure this out.


by Charles Severance at September 09, 2017 02:05 PM

September 08, 2017

Michael Feldstein

California Should Watch Arkansas Process for Creating New Online Institution

Two months ago I wrote a post about Governor Brown’s directive for a fully-online community college in California, noting that:

What this points to is that for a new fully-online institution to get to some meaningful level of enrollment (let’s say 20,000) in the same ballpark as these comparison schools, I estimate it would take a full decade at the least. This is the reason, by the way, that Mitch Daniels and Purdue University made the Kaplan University deal even though Kaplan’s enrollments are dropping. Daniels did not want to wait a decade to get to meaningful enrollment numbers for an online college serving working adults – if everything works out, within a year Purdue will have a fully-online institution serving 30,000+ working adults. That is a big if, by the way.

This estimate is probably optimistic, however, based on the outlook for eVersity, the fully-online institution being created in the state of Arkansas. The eVersity leaders have decided that they cannot wait for regional accreditation as reported at Inside Higher Ed today [emphasis added].

When the University of Arkansas System envisioned creating the online-only institution eVersity in 2014, it planned to follow the well-worn path trodden by other public higher education systems in launching fully online institutions: building on the accreditation of the system’s other universities before seeking independent approval from the regional accreditor.

But come January, eVersity will seek approval from the Distance Education Accrediting Commission — a national body that overwhelmingly accredits for-profit and nonprofit online institutions — rather than the Higher Learning Commission, which accredits all other public institutions in Arkansas and many nonprofit colleges in 18 other states.

One of the primary factors shaping eVersity’s decision is speed. The regional accreditor told the university that it could take roughly six years for HLC to award its stamp of approval, while DEAC — assuming it affirms eVersity in January — will have acted in just under two years. Institutional accreditation is required for eVersity students to gain access to federal financial aid, and to ensure that their credentials are valued by employers and others.

The challenges with national accreditation include severe limitations on students’ ability to transfer credits out of the school.

On the issue of speed, [senior policy analyst at the Center for American Progress] Flores noted that institutions waiting for regional accreditation can often apply for federal aid during the candidacy stage of their application, and that students who attend regionally accredited institutions will have a much easier time transferring their credits than those who attend nationally accredited ones. Flores said eVersity seemed like “a little bit of an odd fit” for DEAC, which typically accredits smaller for-profit institutions that don’t offer federal aid.

The IHE article (very well-written, by the way) described the path chosen by previous fully-online institutions.

A more conventional route to regional accreditation, however, is to start as a division of an already regionally accredited campus, said Goldstein. This is what the University of Maryland University College did before obtaining independent regional accreditation. Colorado State University Global Campus also went this route.

[Chief academic and operating officer of eVersity] Moore said that eVersity decided not to do that, as it did not want to be under the academic and administrative control of another University of Arkansas System institution. “We wanted the ability to be nimble and responsive and not burdened by legacy systems, practices and policies. There are certainly advantages to built-in infrastructures, but they also come with a cost,” said Moore.

Think about the implications – if a state wants a new, fully-online institution to serve working adults, there seem to be four choices before there is meaningful impact in the number of students enrolled in the institution:

  • Establish new, separate institution, choose regional accreditation, be patient in realistic enrollment growth, and expect 10 – 15 years for meaningful impact
  • Do the above but choose national accreditation and limit transfer ability and possibly impact enrollment, and expect 6 – 11 years
  • Establish division of another school using their accreditation, then spin off for separate institution later on, and risk getting caught up in traditional institution’s legacy policies and practices (unknown timescale)
  • Pull a Mitch Daniels and buy an existing online (or mostly online) institution through creative process, risk not being approved due to transfer of control, and risk getting caught up in the online institution’s legacy policies and practices – and expect 2 – 3 years if the bet works out

California likely faces similar choices with the fully-online college directive being evaluated this fall. This is a legacy-building project, but there will be real pressure to not have to wait 10 – 15 years to start getting meaningful impact. eVersity from Arkansas is going through this same process ahead of time, and the California team should learn lessons by watching what works and doesn’t work in this case.

More broadly, the IHE article ends with a key point about accreditation needing to change.

Russell Poulin, director of policy and analysis at the WICHE Cooperative for Educational Technologies, said that accreditors needed to figure out how to accredit new providers more quickly, without compromising on quality. “Accreditation is slow and innovation is fast; we are starting to see political and business pressure to find alternatives,” he said.

Read the entire IHE article. This subject is important.

The post California Should Watch Arkansas Process for Creating New Online Institution appeared first on e-Literate.

by Phil Hill at September 08, 2017 03:28 PM

September 01, 2017

Sakai Project

Sakai Docs Ride Along

Sakai Docs ride along - Learn about creating Sakai Online Help documentation September 8th, 10am Eastern

by MHall at September 01, 2017 05:38 PM

August 30, 2017

Sakai Project

Sakai get togethers - in person and online

Group photo from Sakai Camp 2017 in Orlando

Sakai is a virtual community and we often meet online through email, and in real time through the Apereo Slack channel and web conferences. We have so many meetings that we need a Sakai calendar to keep track of our meetings. 

Read about our upcoming get togethers!

SakaiCamp Lite
Sakai VC

by NealC at August 30, 2017 06:37 PM

Sakai 12 branch created!

We are finally here! A big milestone has been reached with the branching of Sakai 12.0. What is a "branch"? A branch means we've taken a snapshot in time of Sakai and put it to the side so we can improve it, mostly QA (quality assurance testing) and bug fixing, until we feel it is ready to release to the world and become a community supported release. We have a stretch goal from this point of releasing before the end of this year, 2017.

Check out some of our new features.

by NealC at August 30, 2017 06:00 PM

Apereo Foundation

2018 SakaiCamp registration is open!

SakaiCamp un-conference in Orlando Florida January 21 - 24, 2018 

by Michelle Hall at August 30, 2017 02:16 AM

July 18, 2017

Steve Swinsburg

An experiment with fitness trackers

I have had a fitness tracker of some description for many years. In fact I still have a stack of them. I used to think they were actually tracking stuff accurately. I compete with friends and we all have a good time. Lately though, I haven’t really seen the fitness benefits I would have expected from pushing myself to get higher and higher step counts. I am starting to think it is bullshit.

I’ve had the following:

  1. Fitbit Flex
  2. Samsung Gear Wear
  3. Fitbit Charge HR
  4. Xiaomi Mi Band
  5. Fitbit Alta
  6. Moto 360
  7. Phone in pocket, set up to send to Google Fit.
  8. Garmin ForeRunner 735XT (current)

Most days I would be getting 12K+ just by doing my daily activities (with a goal of 11K): getting ready for work and children ready for school (2.5K), taking the kids to school (1.2K), walking around work (3K), going for a walk at lunch (2K), picking up the kids and doing stuff around the house of an evening (3.5K) etc.

My routine hasn’t really changed for a while.

However, two weeks ago I bought the Garmin Forerunner 735XT, mainly because I was fed up with the lack of Android Wear watches in Australia as well as Fitbit’s lack of innovation. I love Android Wear and Google Fit and have many friends on Fitbit, but needed something to actually motivate me to exercise more.

The first thing I noticed is that my step count is far lower than any of the above fitness trackers. Like seriously lower. We are talking at least 30% or more lower. As I write this I am sitting at ~8.5K steps for the day and I have done all of the above plus walked to the shops and back (normally netting me at least 1.5K) and have switched to a standing desk at work which is about 3 metres closer to the kitchen than my original desk. So negligible distance change. The other day I even played table tennis at work (you should see my workplace) and it didn’t seem to net me as many steps as I would have expected.

Last night I went for a 30 min walk and snatched another 2K, which is pretty accurate given the distance and my stride length. I think the Fitbit would have given me double that.

This is interesting.

Either the Garmin is under-reporting or the others are over-reporting. I suspect the latter. The Garmin tracker cost me close to $600 so I am a bit more confident of its abilities than the $15 Mi band.

So, tomorrow I am performing an experiment.

As soon as I wake up I will be wearing my Garmin watch, with the Fitbit Charge HR right next to it, and keeping my phone in my pocket at all times. Both the watch and the Fitbit will be set up for left-hand use. The next day, I will add more devices to the mix.

I expect the Fitbit to get me to at least 11K, Google fit to be under that (9.5K) and Garmin to be under that again (8K). I expect the Mi band to be a lot more than the Fitbit.

The fitness tracker secret will be exposed!

by steveswinsburg at July 18, 2017 12:46 PM

July 07, 2017

Adam Marshall

Innovative use of WebLearn – Oxford Online Programme in Sleep Medicine

In 2014 Oxford University approved a brand new postgraduate programme in Sleep Medicine. The two-year online programme leads to a postgraduate diploma (PGDip) or a Master of Science degree (MSc).

The programme is hosted by the Sleep and Circadian Neuroscience Institute (SCNi), at the University of Oxford which “brings together world leading expertise in basic and human sleep and circadian research and in the evaluation and management of sleep disorders” (Nuffield Department of Clinical Neurosciences, 2016).

Learning technologists in Medical Sciences and IT Services were involved in building a customised portal and customised online course components in WebLearn. In tandem with the course development team, the learning technologists have tried hard to design a programme that attempts to imitate the face-to-face, personalised Oxford learning experience.

This approach is achieved through small student groups, moderated online discussions, live webinars and collaboration with subject specialists to reflect the most recent research findings. It was particularly important to employ aspects of personalisation, e.g. showing students only material that is relevant to them, at the appropriate time (depending on current module, week etc.).

In the structure of the online modules, the WebLearn ‘Lessons tool’ was used to offer the pedagogical advantage of tailoring a learning pathway for the students, with integrated content, relevant activities and assessment opportunities.

The customised interface and personalisation features were realised by taking advantage of WebLearn’s ‘behind-the-scenes’ RESTful web services API and rendered using a popular open source JavaScript framework called Angular 2. A very modest amount of development work was undertaken by the WebLearn team to make this approach possible.
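The personalisation described above boils down to filtering a course catalogue against a student's current position in the programme. The following is a minimal sketch of that idea only; the data model and function names are invented for illustration and are not WebLearn's actual API (the real implementation sits behind WebLearn's RESTful web services and the Angular 2 front end).

```python
# Hypothetical sketch of "show students only material that is relevant to
# them" -- names and data model are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    module: int
    week: int

def visible_items(items, current_module, current_week):
    """Return only the items a student should see right now: items in the
    student's current module, released up to the current week."""
    return [
        item for item in items
        if item.module == current_module and item.week <= current_week
    ]

catalogue = [
    ContentItem("Circadian rhythms overview", module=1, week=1),
    ContentItem("Sleep disorders webinar", module=1, week=2),
    ContentItem("Research methods", module=2, week=1),
]

# A student in module 1, week 1 sees only the first item.
print([i.title for i in visible_items(catalogue, 1, 1)])
```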


by Jill Fresen at July 07, 2017 11:13 AM

June 27, 2017

Apereo Foundation

June 20, 2017

Adam Marshall

System Improvements: WebLearn v11-ox6

WebLearn was upgraded on 20th June 2017 to version 11-ox6. We apologise for any inconvenience caused by the disruption.

Here is a list of some of the improvements:

  • Single file upload limit is now 250MB (Resources, Assignments etc.)
  • A link to one’s personal Calendar has been added to the top-right “personal” drop-down

  • Anonymous Submission sites
    • Site Info tool cannot now be removed in error
    • It is now not possible to change the Admin Site – all ‘submission’ sites are forced to be managed by Exams and Assessment
  • Favourite sites are now clickable

  • One can now hide or un-hide oneself in a site via Home > Preferences > Sites
  • Replay (Recorded Lectures)
    • All instances now have the same ‘play button’ icon
    • Individual recordings can now be inserted into Lessons (using IMS LTI Content Item Message)
  • Citations List improvements
  • Site Members will display the photos which have been set in a user’s Profile by default (as there are currently no available ‘official photos’)
  • Interactive videos (and other content types) from can now be used within Lessons (and Resources): “H5P makes it easy to create, share and reuse HTML5 content and applications. H5P empowers everyone to create rich and interactive web experiences more efficiently”. H5P includes
    • Interactive YouTube videos (annotate, ask questions etc.)
    • Image juxtaposition
    • Drag and drop / Drag the words
    • Hotspots
    • Many many more content types

  • Resources:
    • The superfluous recycle bin link has been removed
    • Folders can be expanded on a mobile phone
    • Emoticon images inserted pre-WebLearn 11 will now appear correctly
  • Forums and Topics are correctly copied during ‘Duplicate site’ and ‘Import from site’
  • Researcher Training Tool
    • Search Results page is now fully responsive
    • Improved rendering in Internet Explorer 11
  • Lessons tool: ‘Add section break above’ no longer results in two blocks appearing below
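For illustration of the new H5P support above: H5P content is typically embedded as an iframe accompanied by H5P's resizer script, which keeps the frame sized to its content. The helper below is a hypothetical sketch (the `h5pEmbedHtml` function name and the example content URL are invented for illustration):

```javascript
// Hypothetical sketch: build the markup for embedding a piece of H5P content.
// The resizer script listens for messages from the iframe so the frame
// height tracks the interactive content inside it.
function h5pEmbedHtml(contentUrl) {
  return (
    `<iframe src="${contentUrl}" width="100%" allowfullscreen frameborder="0"></iframe>\n` +
    `<script src="https://h5p.org/sites/all/modules/h5p/library/js/h5p-resizer.js"></script>`
  );
}

console.log(h5pEmbedHtml('https://h5p.org/h5p/embed/617'));
```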



by Adam Marshall at June 20, 2017 02:01 PM

June 16, 2017

Apereo OAE

OAE at Open Apereo 2017

The Open Apereo 2017 conference took place last week in Philadelphia and provided a great opportunity for the OAE Project team to meet and network for three whole days. The conference days were chock-full of interesting presentations and workshops, with the major topic being the next generation digital learning environment (NGDLE). Malcolm Brown's keynote was a particularly interesting take on this topic, although at that point the OAE team was still reeling from having a picture from our Tsugi meeting come up during the welcome speech - that was a surprising start to the conference! We noted how the words 'app store' kept popping up in presentations and in talks among the attendees - perhaps this is something we can work towards offering within OAE soon? Watch this space...

The team also met with people from many other Apereo projects and discussed current and future integration work with several project members, including Charles Severance from Tsugi, Opencast's Stephen Marquard, and Jesus and Fred from BigBlueButton. There's some exciting work to be done in the next few weeks... While Quetzal was released only a few days before the conference, we are already teeming with new ideas for OAE 14!

After the conference events were over on Wednesday, we gathered together to have a stakeholders meeting where we discussed strategy, priorities and next steps. We hope to be delivering some great news very soon.

During the conference, the OAE team also provided assistance to attendees in using the Open Apereo 2017 group hosted on *Unity that supported the online discussion of presentation topics. A lot of content was created during the conference days so be sure to check it out if you're looking for slides and/or links to recorded videos. The group is public and can be accessed from here.

OAE team members who attended the conference were Miguel and Salla from *Unity and Mathilde, Frédéric and Alain from ESUP-Portail.

June 16, 2017 12:00 PM

June 12, 2017

Apereo Foundation

June 01, 2017

Apereo OAE

Apereo OAE Quetzal is now available!

The Apereo Open Academic Environment (OAE) project is delighted to announce a new major release of the Open Academic Environment: OAE Quetzal, or OAE 13.

OAE Quetzal is an important release for the Open Academic Environment software and includes many new features and integration options that are moving OAE towards the next generation academic ecosystem for teaching and research.


LTI integration

LTI, or Learning Tools Interoperability, is a specification that gives developers of learning applications a standard way of integrating with different platforms. With Quetzal, Apereo OAE becomes an LTI consumer. In other words, users (currently only those with admin rights) can now add LTI-compatible tools to their groups for other group members to use.

These could be tools for tests, a course chat, a grade book - or perhaps a virtual chemistry lab! The only limit is what tools are available, and the number of LTI-compatible tools is growing all the time.

Video conferencing with Jitsi

Another important feature introduced to OAE in Quetzal is the ability to have face-to-face meetings using the embedded video conferencing tool, Jitsi. Jitsi is an open source project that allows users to talk to each other either one on one or in groups.

In OAE, it could have a number of uses - maybe a brainstorming session among members of a globally distributed research team, or holding office hours for students on a MOOC. Jitsi can be set up for all the tenancies under an OAE instance, or on a tenancy by tenancy basis.
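As an illustrative sketch (the naming scheme and the `jitsiRoomName` helper are assumptions for illustration, not OAE's actual code), a deterministic room name per group is enough to send all members of a group to the same meeting; the commented call shows how Jitsi Meet's IFrame external API would then open it in the browser:

```javascript
// Assumed naming scheme: derive a stable, URL-safe Jitsi room name from the
// tenancy and group id, so every group member lands in the same meeting.
function jitsiRoomName(tenant, groupId) {
  return `${tenant}-${groupId}`.toLowerCase().replace(/[^a-z0-9-]/g, '');
}

// In the browser, the room would then be opened with Jitsi Meet's external
// API (loaded from https://meet.jit.si/external_api.js):
//   new JitsiMeetExternalAPI('meet.jit.si', {
//     roomName: jitsiRoomName('oxford', 'g:research-team'),
//     parentNode: document.querySelector('#meeting'),
//   });

console.log(jitsiRoomName('oxford', 'g:Research-Team')); // → oxford-gresearch-team
```

Because the domain is a parameter, a tenancy could point at the public meet.jit.si service or at its own self-hosted Jitsi instance.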


Password recovery

This feature has been widely requested by users: the ability to reset a forgotten password. A user in such a predicament can now enter their username, and they will receive an email with a one-time link to reset their password. Many thanks to Steven Zhou for his work on this feature!

Dockerisation of the development environment

Many new developers have been intimidated by the setup required to get Open Academic Environment up and running locally. For their benefit, we have now created a development environment using Docker containers that allows newcomers to get up and running much more quickly.

We hope that this will attract new contributions and let more people get involved with OAE.

Try it out

OAE Quetzal can be experienced on the project's QA server. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 13.0.0 and can be downloaded from the following repositories:


Documentation on how to install the system can be found at

Instructions on how to upgrade an OAE installation from version 12 to version 13 can be found at

The repository containing all deployment scripts can be found at

Get in touch

The project website can be found at The project blog will be updated with the latest project news from time to time, and can be found at

The mailing list used for Apereo OAE is You can subscribe to the mailing list at

Bugs and other issues can be reported in our issue tracker at

June 01, 2017 05:00 PM

May 26, 2017


Interested in Transitioning a Sakai Course Site to Canvas?

There have been several inquiries from faculty wishing to move their current Sakai course to the Canvas Learning Management System. The following guide has been prepared to help those interested in making this transition. At any time, if you have questions or need additional assistance, please contact ATS, 116 Pearson Hall, 302-831-0640. Sakai to… Continue reading

by Nancy O'Laughlin at May 26, 2017 05:57 PM