Planet Sakai

February 16, 2018

Michael Feldstein

Personalized Learning: What It Really Is and Why It Really Matters

The following is a re-post of our 2016 EDUCAUSE Review article, with minor updates.

Let's be honest: as an academic term of art, personalized learning is horrible. It has almost no descriptive value. What does it mean to "personalize" learning? Isn't learning, which is done by individual learners, inherently personal? And who would want unpersonalized learning? Because the term carries so little semantic weight, it is a natural for marketing purposes: "Our personalized learning is new, improved, and 99.44% pure!" Unfortunately, this also sets it up perfectly for the inevitable War of Definitions. Remember the Great MOOC War a few years ago? Were MOOCs the creation of the Canadian Constructivists or of the Stanford professor who invented a self-driving car? Are we talking about an xMOOC or a cMOOC? Which one is the good one, and which one is the bad one? Now that the furor has died down, there is relatively little debate over the definition of the term MOOC and much more focus on how the family of approaches that are collected under that term can best serve different educational purposes.

Let's just skip to the end this time, shall we?

The two of us spent the past three years visiting colleges and universities that have undertaken so-called personalized learning projects, and we talked to the students, teachers, and administrators about what they are actually doing and why they are doing it. We visited a wide range of institutions and talked to a wide range of stakeholders, based on our daily work as consultants to colleges and universities and as analysts of the educational technology industry and our work on a grant funded by the Bill & Melinda Gates Foundation. Through these observations, we have been looking for the ground truth underneath the hype of personalized learning. As a result of this process, we observed a family of technology-enabled educational practices that are potentially useful for a range of educational challenges. We would like to share our framework, which we hope will be useful for thinking about (1) the circumstances under which personalized learning can help students and (2) the best way to evaluate the real educational value for products that are marketed under the personalized learning banner.

Personalized Learning as Practice

Imagine for a moment that personalized learning is not already a term in the ed tech lexicon and, further, that there is no need for any new term to be "catchy" or "sticky." The most descriptive label we could come up with for the practices that the two of us have observed in our school visits might be undepersonalized teaching. If the ideal, most personal teaching modality is one-to-one tutoring, there are many reasons why we fall short of this ideal in real-world classrooms. The most stereotypical depersonalized teaching experience is the large lecture class, but there are many other situations in which teachers do not connect with individual students and/or meet the students' specific needs. For example, even a small class might contain students with a wide-enough range of skills, aptitudes, and needs that the teacher cannot possibly serve them all equally well. Or a student may have needs (or aptitudes) that the teacher simply doesn't get an opportunity to see within the amount of contact time that the class allows. The truth is that students fall through the cracks all the time, even in the best classes taught by the best teachers. Failing a course is the most visible evidence, but more often students drift through the class and earn a passing grade—maybe even a good grade—without getting any lasting educational benefit.

If we choose to think of personalized learning as a practice rather than a product, we can start by taking a hard look at course designs and identifying those areas that fail to make meaningful individual contact with students. These gaps will be different from course to course, subject to subject, student population to student population, and teacher to teacher. Although there is no generic answer to the question of where students are most likely to fall through the cracks in a course, there are some patterns to look for (as we will discuss later in this article).

Technology then becomes an enabler for increasing meaningful personal contact. In our observations, we have seen three main technology-enabled strategies for lowering classroom barriers to one-on-one teacher/student (and student/student) interactions:

  1. Moving content broadcast out of the classroom: Even in relatively small classes, a lot of class time can be taken up with content broadcast such as lectures and announcements. Personalized learning strategies often try to move as much broadcast out of class time as possible in order to make room for more conversation. This strategy is sometimes called "flipping" because it is commonly accomplished by having the teacher record the lectures they would normally give in class and assign the lecture videos as homework, but it can be accomplished in other ways as well, for example with reading-based or problem-based course designs.
  2. Turning homework time into contact time: In a traditional class, much of the work that the students do is invisible to the teacher. For some aspects, such as homework problems, teachers can observe the results but are often severely limited by time constraints. In other cases, such as comprehension of assigned readings, the students' work is invisible to the teacher and can be observed only indirectly and with significant effort. Personalized learning approaches often allow the teacher to observe the students' work in digital products, so that there is more opportunity to coach students. Further, personalized learning often identifies meaningful trends in a student's work and calls the attention of both teacher and student to those trends through analytics.
  3. Providing tutoring: Sometimes students get stuck in problem areas that don't require help from a skilled human instructor. Although software isn't good at teaching everything, it can be good at teaching some things. Personalized learning approaches can offload the tutoring for those topics to adaptive learning software that gives students interactive feedback while also turning the students' work into contact time by making it observable to the teacher at a glance through analytics.

Personalized Learning Practices

None of these techniques, by themselves, undepersonalize the teaching. They generally need to be designed and implemented by skilled educators as part of a larger course design that is intended to address the particular problems of particular students. In the business world, an analogous initiative might be called "business process redesign." Emphasis is on process. The primary question being asked is, "What is the most effective way to accomplish the goal?" The redesigned process may well need software, but it is the process itself that matters. In personalized learning, the process we are redesigning is that of teaching individual students what they need to learn from a class as effectively as possible (though we can easily imagine applying the same kind of exercise to improving advising, course registration, or any other important function).

We saw a noteworthy example of this educational design work in action at Essex County College (ECC) in Newark, New Jersey. The majority of ECC students need to take developmental math in order to complete their degrees, and the majority of those do not pass developmental math. Of those who do, the majority do not pass the college-level math course that follows. ECC leadership believed that this educational failure could be attributed to two main factors. First, students came into developmental math with an enormous range of prior knowledge: some had the equivalent of a fourth-grade math education; others needed to learn only a few concepts. Students on one end of the spectrum typically got lost because they were not receiving the individual help they needed, whereas those on the other end often got bored and eventually failed or dropped out because they were being forced to spend a lot of their time on skills that they had already mastered. Second, many ECC students had never been taught good study skills, and faculty did not have the class time needed to teach those skills. So to address the two personalization gaps in this particular course, the college redesigned developmental math using personalized learning techniques.


ECC used an overall pedagogical framework called Self-Regulated Learning. Students in the course spend part of their class time in a computer lab, working at their own pace through an adaptive learning math program. Students who already know much of the content can move through it quickly, giving them more time to master the concepts that they have yet to learn. Students who have more to learn can take their time and get tutoring and reinforcement from the software. Teachers, now freed from the task of lecturing, roam the room and give individual attention to those students who need it. They can also see how students are doing, individually and as a class, through the software's analytics. But the course has another critical component that takes place outside the computer lab, separate from the technology. Every week, the teachers meet with the students to discuss learning goals and strategies. Students review the goals they set the previous week, discuss their progress toward those goals, evaluate whether the strategies they used helped them, and develop new goals for the next week.

Note the role of the software in this design. In the lab, it primarily takes on the role of tutor, helping most of the students most of the time with routine skill coaching and practice so that the teacher is freed up to give individual attention to those students who really need it. In the goal-setting sessions, the software acts mainly as a record keeper. It helps students track their time on task, number of problems solved, and so on. The teacher then helps students figure out what to do with that information. In both cases, the software is an important enabler of the new teaching practices. But the value that it adds is quite different from the way personalized learning software products are often characterized by sales reps, marketing materials, and many news stories. It is thus worth taking a minor detour from our exploration of personalized learning as a practice to examine the significant gap between the ground truth of the practice and the popular characterizations of the products.

Why Such Hype?

So far, we have deliberately and explicitly set aside the various policy, political, and business pressures that have brought the term personalized learning into broader use so that we could focus on the educational value that lurks underneath the hype. But it is also important to understand these pressures so that we can be on guard for the ways in which they might deform the discussion and distract us from the real value that we should be talking about. None of the three approaches that we identified above are particularly new; nor do they require fancy algorithms and expensive products to achieve. There are two specific reasons why these approaches are being attached to heavily marketed products right now.

First, on the policy side, there has been a shift in emphasis from access to degree completion. President Barack Obama set the tone by announcing a goal that the United States be number one in the world in the proportion of college graduates by 2020. Since then, state and federal policy makers have followed suit. Colleges and universities now have to account for gainful employment metrics and track their institutional scorecard results. Base funding, particularly for public institutions, is increasingly tied to performance against these and similar metrics. Grant funding is also increasingly outcomes-driven. As higher education institutions have narrowed their focus on these metrics, the students who fall through the cracks and fail out of the standard educational model have come into sharper relief.

With these policy changes and the funding that follows them, being "student-centric" is no longer a nice-to-have goal. Rather, it is a critical success factor for improving measurable student outcomes and therefore getting funding and being seen as a successful institution. For example, whereas in the past a few forward-thinking community college administrators might have thought to take on a project like ECC's personalized learning developmental math redesign out of a sense of mission, now every community college administrator must be looking for ways to improve degree completion by eliminating failure traps such as developmental math. The security of institutional funding depends on it.

The second big change has been the widespread commercialization of the adaptive learning techniques that have existed in educational research laboratories for over fifty years. As the term adaptive learning suggests, these products provide students with a certain amount of one-on-one tutoring (although the methods that these systems use for analyzing students' progress and providing useful feedback vary widely by product and discipline). This change in the market has been enabled by technological advances that are increasingly resulting in one networked computer for every student in a class and affordable developer access to machine learning tools.

Market forces are playing a big role in publishing too. Textbook publishers have found that their traditional business model is collapsing as more students find ways to avoid buying new textbooks. Cengage, for example, was forced to go through bankruptcy. McGraw-Hill Education was sold to a private equity firm. Pearson's stock is near historic lows. All of these companies have had multiple rounds of major layoffs. They are in desperate need of a new product, and they are increasingly latching onto the personalized learning trend as the cure for what ails them. Meanwhile, according to Ambient Insight, U.S. ed tech companies received $3.6 billion of angel and venture capital funding in 2015. (This doesn't even include mergers and acquisitions, which have also been huge.) Startup founders find themselves in an increasingly crowded field and are under strong pressure to promise and produce big results. Every vendor of a developmental math product, whether that vendor is an established textbook publisher or a young startup, is aware that campus presidents and provosts need to solve their degree-completion problem and that developmental math is very likely to be a big part of that problem. The vendors therefore market their products as a solution to degree completion. Every textbook vendor and aspiring textbook disruptor knows that stories about improving pass rates through technology sell. But what to call these products? Personalized learning is a term that sounds good without the inconvenience of having any obviously specific pedagogical meaning, so it becomes the flag that all vendors fly, even though different products do very different things and even though undepersonalization is rarely accomplished through software alone.

Thus, through policy and commercialization, the personalized learning marketing juggernaut was born. Unfortunately, the combination of the marketing shortcuts and the funding pressures created a strong temptation for magical thinking. Campus leaders are being asked to believe that they can solve their degree-completion and other accountability metric problems by buying software that will somehow magically provide personalized learning (in a way that faculty members, by implication, do not). For obvious reasons, faculty are likely to reject this stunted conception of personalized learning. Leaders who want to see their campus communities benefit from personalized learning approaches need to guard against product-centric characterizations and should suggest that discussions of vended solutions take place in the context of course and curricular designs that undepersonalize teaching. Otherwise, the baby will probably get thrown out with the bathwater.

Good Candidate Opportunities

One of the benefits of reframing personalized learning as undepersonalized teaching and focusing on the three techniques we outlined earlier is that faculty can readily translate this framework into their own contexts and start identifying opportunities that are good candidates for undepersonalization (not all of which will require vended products). In contrast to a product-centric conception of personalized learning, a practice-centered conception is something that faculty can own. That said, we have seen areas of opportunity where personalized learning is often a good fit, and not all of those areas are always obvious.

To begin with, any course that students enter with a wide range of prior knowledge and ability is a good candidate. ECC's developmental math is a prototypical example. Another example is Austin Community College's ACCelerator lab, used heavily for developmental math courses. But the course doesn't have to be remedial and the institution doesn't have to be access-oriented in order for personalized learning to be helpful. For example, at Middlebury College, a geography professor realized that some students in his course on geographical information systems (GIS) were struggling. For context, this is a general education course that is taken by students in a wide range of majors and specialties. In an elite college like Middlebury, nobody worries too much about completion rates, particularly at the institutional level. The professor simply observed that some students were working very hard. In fact, the course had such a strong reputation for difficulty that taking and passing it was considered a badge of honor. Students were sleeping in the labs. But the professor didn't see this as a sign that he was inspiring his students to work hard. He saw it, rather, as a course-design problem. Some students were working harder than they should because he wasn't reaching them in the way that they needed to be reached.

As it turns out, one critical skill that the course was teaching—spatial reasoning—is rarely taught in high school. A handful of the students in the class came in with either prior training or natural talent. They did well. The others were the ones who were sleeping in the labs. They needed more time and more help. The professor decided to make videos of his lectures and assign the videos as homework. With this change, struggling students could watch the videos as often as they needed in the (relative) comfort of their own dorm rooms while the professor's class time was also freed up for more interactive work. (After he had done this, a colleague told the professor that this technique is called "flipping the classroom"—a term he had never heard before.) He is also thinking about developing tutorial software that can help students work through the homework problems in ways that best suit their needs.


Another obvious opportunity to undepersonalize teaching is in large lecture courses. For example, administrators at the University of California, Davis, became interested in redesigning their survey biology and chemistry courses because they recognized that they were losing a high percentage of first-year students—the ones who typically take these large lecture courses. It is very easy for a student to become passive in this broadcast-heavy course design. The team involved in the course-redesign projects wanted students to both get more individual attention and take more individual responsibility for their learning. To accomplish these goals, the team employed personalized learning practices as a way of making room for more active learning in the classroom. Students used software-based homework to experience much of the content that had previously been delivered in lectures. Faculty redesigned their lecture periods to become interactive discussions. Meanwhile, the teaching assistants who ran the discussion sections used analytics from the homework software to identify the areas where students were struggling; as a result, they could better focus their class time on those areas. Importantly, teaching assistants received additional training in how to employ active learning principles in their teaching techniques. Once again, in contrast to marketing pitches and popular narratives, the software played only a supporting role, albeit an important one, in undepersonalizing the large lecture.


A less obvious opportunity for personalized learning is in the design of problem-based learning courses. The previous two examples fit within the common understanding of personalized learning being used to help students work through traditional, didactic courses with more support. But Arizona State University incorporated the "flipping" aspect of the technology into an online STEM lab course for non–science majors. In this course students, for their final project, are asked to evaluate the likelihood that there are other intelligent civilizations in a randomly assigned field of stars. The teaching philosophy of the faculty member who was the lead designer of the course is that he should be a coach or a guide, helping students navigate difficult problems. In his view, both content delivery and assessment are activities that take time away from that core function. He and his colleagues framed the course, Habitable Worlds, as a series of challenges. Overcoming each challenge requires the students to learn new knowledge and skills. The course design is based on mastery learning: students must demonstrate that they have learned one skill before moving on to the next. The course is also difficult. Students often get stuck, which is by design. Faculty and teaching assistants, freed up from both content delivery and assignment grading, spend most of their time responding to students' questions. And because the coursework is all software-based, they can see exactly what students are doing, how far students have progressed, and where students are struggling. Students can proceed at their own pace, moving quickly where they can and getting help where they need it. Yet despite the self-paced nature of the course, there is also a strong social component. Students can and often do seek out each other's help. Because personalized learning practices make space for more interactivity, these practices often go hand-in-hand with active learning. And active learning is often social.


We suspect there are many opportunities in addition to the ones we have identified here. If faculty are given a commonsense framework and a chance to experiment, refine, and share, they will find novel and exciting ways to better support their students' individual educational needs.

Doing It Right

Because personalized learning is a family of educational practices that support good course designs, implementing those practices well is not as simple as buying a product. To begin with, course design is always a time-consuming process when done correctly. Second, in many cases faculty will be trying techniques they have not used before, requiring them to teach in ways that are very different from how they have taught before, that are far removed from their experience (and therefore instincts) of what works and what doesn't, and that may have ripple effects they don't anticipate. On top of all this, the vast majority of faculty are neither trained in course design/research nor compensated for any time they invest in it. They will need time and support. In many cases, implementing personalized learning well can require an institutional effort analogous to the one required to implement an online learning program well.

Looking across a range of personalized learning projects that have had varying degrees of success, both at the schools we visited and elsewhere, we can identify six steps for a successful strategy. These steps could be applied to any number of pedagogical innovations.

  1. Identify the student need that is to be addressed. The various personalized learning approaches are just one set of tools in the toolbox. Successful programs generally start by identifying a significant educational problem that faculty and program staff believe can be corrected with a change in course design.
  2. Design the pedagogical structure. If the problem that is identified can be addressed through personalization, then how will the course support different students differently? The answer has to be more than just "adaptive learning." Successful programs identify opportunities in the course design to improve individual support for students.
  3. Pick the products or technologies. The details of different products or technological approaches are most meaningful when they impact what can be done with the course design. Successful programs pick the right tool based on the job at hand rather than on who has the best marketing pitch.
  4. Don't forget faculty training. Because personalized learning, done properly, generally means implementing new pedagogical approaches, faculty may need to learn to teach in ways that they haven't taught before. Successful programs provide faculty with training and pedagogical support.
  5. Don't forget technology support. Software helps with learning only when it works, and Murphy's Law can hit with a vengeance when technology is mixed with teaching. Successful programs make sure that faculty have the technology training, equipment, and support staff that they need in order to be successful.
  6. Be prepared to measure, fail, and iterate. Because personalized learning approaches often require new software, new teaching techniques for faculty, new responsibilities for students, and in some cases new scheduling challenges, institutions will almost inevitably get some things wrong in the first couple of iterations, and those mistakes may have real impact on outcomes. Successful programs approach implementation empirically but with patience.

On the bright side, the fact that personalized learning is now being attached to funding-related metrics such as degree-completion rates means that attaching institutional support costs to a funding stream will also be easier. In many cases, schools can build personalized learning "muscle mass" by focusing on metric-relevant projects first and then expanding the initiative once the critical institutional knowledge and support mechanisms have been put in place.

Final Thoughts: Ed Tech Groundhog Day

There is a lesson to be learned here, and it is broader than personalized learning. Every popular ed tech trend, going at least as far back as the original online asynchronous distance learning courses in the 1990s, has brought with it a food fight, with proponents hyping the trend as revolutionary and opponents attacking it as harmful. And in every case, a policy or other institutional driver has resulted in a rush of companies responding to the market opportunity created by that driver. Together, these forces generate hype and magical thinking, which in turn provoke an equal and opposite reaction.

In this article, we have tried to identify specific teaching practices being used by educators, and we have tried to describe them in commonsense terms that should make intuitive sense to experienced teachers. These practices always exist at the beginning. Some teacher somewhere comes up with a specific approach to a specific problem. External forces then make that problem a more institutionally consequential one, and companies rush in to name, market, and sell solutions. In the process, we lose track of the original educational idea. It's like playing a game of telephone in a noisy airport. Except in this case, the message in the game is an actual plan for how we are going to help our students, and when it gets to the end of the telephone line, we will act as if we received the message with perfect fidelity. And then fight over it. Endlessly. Much of the Gartner hype cycle can be attributed to this process.

We can break out of this hype cycle with a fairly simple (though not necessarily easy) approach. Whenever a new ed tech trend gets named—whether it is distance learning, adaptive learning, personalized learning, competency-based education, MOOCs, or something else—we should start trying to understand that trend by looking for the best examples of what teachers and students are doing when they are doing the thing we just named. We should ask them what they are doing, and why. We should ask how their practice is working and what they are learning and what they don't yet know. We should attach the name of the new trend to those educational practices and those reasons—rather than to any products, technologies, or services. We should not waste time debating whether the name we came up with for those practices is the perfect name or exactly what it includes or excludes. Instead, we should spend our time trying to understand the practices themselves and their applicability to the educational problems we are trying to solve.

Yes, personalized learning is a lousy term, but it is attached to legitimate educational practices that have the potential to improve the lives of many students. It is also a term that is trapped in the early stages of its hype cycle. So let's just skip to the end and break personalized learning out of the hype cycle by doing our best to understand—and explain—what it really is and why it really matters.

The post Personalized Learning: What It Really Is and Why It Really Matters appeared first on e-Literate.

by Phil Hill at February 16, 2018 03:00 PM

February 15, 2018

Michael Feldstein

Visibility As A Benefit: Ole Miss and UCF share their stories on courseware usage

In an article Michael and I wrote for EDUCAUSE Review in 2016, we described our view of personalized learning as "a family of teaching practices that are intended to help reach students in the metaphorical back row". One of the key practices focused on gaining increased visibility into student coursework.

These same automated homework tools can also give teachers an easy view into how their students are doing and create opportunities to engage with those students. "Analytics" in these tools are roughly analogous to your ability to scan the classroom visually and see, at a glance, who is paying attention, who looks confused, who has a question.

With digital courseware providing the homework tools, this focus on visibility into the learning process can apply across the entire course. What are different schools learning in this area?

As part of our e-Literate TV series of video case studies, we had a chance this fall, at the RealizeIt users conference, to interview representatives of several institutions that are focusing on the benefit of increased visibility into student learning through digital courseware.1

In the first episode we explored the challenge of going beyond pilots and deploying systems at scale. In this second episode I interview representatives from the University of Mississippi and the University of Central Florida, asking them to describe their experiences and focus on the issue of increased visibility.


We'll share one more set of interviews from this conference in the coming weeks.

This post is part of our e-Literate TV series, which is funded in part by the Bill & Melinda Gates Foundation. The findings and conclusions (or views) contained within are those of the authors and do not necessarily reflect positions or policies of the Bill & Melinda Gates Foundation.

  1. This post is not meant to endorse RealizeIt's platform over other companies' platforms. We are focusing on institutional perspectives and lessons to be learned.

The post Visibility As A Benefit: Ole Miss and UCF share their stories on courseware usage appeared first on e-Literate.

by Phil Hill at February 15, 2018 03:00 PM

February 14, 2018

Apereo Foundation

12th Annual LAMP Pedagogy and Technology Conference


The 12th annual Pedagogy and Technology Conference will be held July 24 through 26 in Berea, Kentucky. The keynote speaker will be Wilma Hodges. Registration is now open.

by Michelle Hall at February 14, 2018 06:10 PM

Michael Feldstein

An Alternative to the Engineering Model of Personalized Learning

There is an article in EdWeek that quotes Larry Berger, CEO of Amplify, in his "confession" about personalized learning. The focus is on K-12 education but applies directly to higher ed as well.

Until a few years ago, I was a great believer in what might be called the "engineering" model of personalized learning, which is still what most people mean by personalized learning. The model works as follows:

You start with a map of all the things that kids need to learn.

Then you measure the kids so that you can place each kid on the map in just the spot where they know everything behind them, and in front of them is what they should learn next.

Then you assemble a vast library of learning objects and ask an algorithm to sort through it to find the optimal learning object for each kid at that particular moment.

Then you make each kid use the learning object.

Then you measure the kids again. If they have learned what you wanted them to learn, you move them to the next place on the map. If they didn't learn it, you try something simpler.

If the map, the assessments, and the library were used by millions of kids, then the algorithms would get smarter and smarter, and make better, more personalized choices about which things to put in front of which kids.

I spent a decade believing in this model—the map, the measure, and the library, all powered by big data algorithms.

Here's the problem: The map doesn't exist, the measurement is impossible, and we have, collectively, built only 5% of the library. [snip]

So we need to move beyond this engineering model. Once we do, we find many more compelling and more realistic frontiers of personalized learning opening up.

Larry is exactly right that there is a fundamental problem with the assumptions behind what he calls the engineering model of personalized learning. But there are alternate models that offer "more compelling and more realistic frontiers". We have described this contrast in models at e-Literate, most directly in Michael's post The Battle for “Personalized Learning”.

Phil and I have decided to claim this prime piece of linguistic real estate. We are asserting squatters' rights.

We hereby decree, by the power vested in us by nobody at all, that "personalized learning" shall henceforth refer to a family of teaching practices that are intended to help reach students in the metaphorical back row. The ones who are bored, or confused, or tuned out, or feeling stupid. Personalized learning practices are almost always ones that teachers have been using for a very long time but that digital tools can support or enhance. Here are a few that we have identified so far:

Move content broadcast out of the classroom: In many disciplines, the ideal teaching format is a seminar, in which students spend class time engaged in conversation with a professor. In others, it is a lab. Both models have students actively engaged in academic practice during class time, when the professor, as the expert practitioner, is present to coach them. Every class spent lecturing is a wasted coaching opportunity.

Many disciplines have traditionally used assigned readings to move content broadcast out of the classroom, and some still do. But it is not always possible to find readings that capture what you want to cover, and in any case, it is becoming harder to persuade students to read. Luckily, there are tools that can help with this problem. You can record and post your lectures as videos, which students can watch as many times as they need to absorb what you’re trying to tell them. You can assign podcasts that they can listen to on the go, or find interactive content that keeps them more engaged.

Make homework time contact time: Good teachers help students see the direct connection between the work they do at home and the overall purpose of the class. They do this in a variety of ways. Sometimes they mark up and comment on the student work. Sometimes they ask the students questions in class that require them to build on the work they did at home. For a variety of reasons, which often boil down to professors’ having less available time per student, this has become harder to do. The great crutch that is now being used to limp along without actually solving this problem is robo-graded homework assignments. By itself, automated practice might help some students drag themselves through to the end of the semester. But it doesn’t often inspire them to think that maybe they are not destined to be the student in the back row forever. (There are important exceptions to this rule, which I address below.)

On the other hand, these same automated homework tools can also give teachers an easy view into how their students are doing and create opportunities to engage with those students. "Analytics" in these tools are roughly analogous to your ability to scan the classroom visually and see, at a glance, who is paying attention, who looks confused, who has a question. Nor are these the only tools available for making homework time feel less isolated and pointless. Any homework activity that is done electronically can be socially connected. Group work done on a discussion board can be read over by the professor when she has time. Highlights and margin notes on readings can be shared and discussed in class. This sort of effort on the professor’s part doesn’t have to be exhaustive (or exhausting). Sometimes a small gesture to show a student that you see her is all it takes.

Hire a tutor: You know what tutors are typically good for in your particular discipline. You also know that there generally aren’t enough good ones available, and that even when there are, it’s tough to get students to come into the tutoring center. One of the best uses of machine-graded homework systems, especially when they are "adaptive," is to treat them as personal tutors that are available to students whenever they need them and wherever they are. They aren’t perfect, but what tutors are? Sometimes getting students out of the back row means helping them to believe that they are capable of learning. And sometimes students are willing to pose a question to a computer that they would be embarrassed to ask in person. In those cases, a little extra practice and feedback on the basics, without judgment, can make all the difference — even if the feedback comes from a machine. And if adaptive learning robo-tutors don’t fit the needs of your students and your discipline, technology also makes it possible to connect students with actual human tutors, who are available online to help them get through the rough spots.

We wrote more extensively about this description of personalized learning at EDUCAUSE Review in 2016 at "Personalized Learning: What It Really Is and Why It Really Matters".

There is a battle for personalized learning, and the description of the engineering model is useful for understanding one approach (unfortunately the one most often used in marketing and by ed reformers). But there is an alternative and it is more compelling.

The post An Alternative to the Engineering Model of Personalized Learning appeared first on e-Literate.

by Phil Hill at February 14, 2018 05:47 PM

Apereo Foundation

Preview Artifact: Sakai Manifesto


The annual SakaiCamp held in Orlando in January produced an unusual artifact this year. The 20+ participants drafted a "Sakai Manifesto": the principles and practices that developers follow when improving Sakai.

by Michelle Hall at February 14, 2018 05:44 PM

February 07, 2018

Adam Marshall

WebLearn Improvements – Version 11-ox8.1, released w/c 4 December 2017

Oops! Note/disclaimer: I just noticed that I had inadvertently forgotten to publish this blog post. Many apologies for the delay.

WebLearn was upgraded during the week of 4 December 2017 to version 11-ox8.1. There was no downtime associated with this release.

  • System emails now originate from a “black hole” address
  • In the Lessons tool, the embedded “Forums widget” now correctly displays the name of the person who initiated the conversation (rather than the last person to read a post)
  • In the Lessons tool, the embedded ‘Calendar Widget’ now correctly displays the event icons
  • In the Lessons tool, the “Calendar widget” now changes to the correct colour when the colour scheme is modified
  • The “Recorded Lectures” dashboard is now available to all users in their home site (known as ‘My Home’); this dashboard is also available in the “Avatar Menu” (top right)

  • Joinable sites that are only available to Oxford SSO accounts now have a better description in the Site Info tool
  • External users now have access to a collated list of announcements on their home site
  • On new sites, the main panel on the ‘Overview’ page now has a more appropriate heading of ‘Welcome’
  • In the ‘Site Members’ (Roster) tool, site participants with the ‘maintain’ and ‘contribute’ roles now have permission to view site visits (prior to this fix, the permission to do this had to be set manually on each site)
  • The error message that one sees when attempting to complete a ‘single-attempt’ survey has been improved
  • “Access” (read-only) view of resources now uses Font Awesome icons

Anonymous Submission (AS) Sites / Assignment Tool

  • Students can now resubmit in an Anonymous Submission (AS) site
  • Participants with the “marker role” now have their own personal Drop Box (one use of this new functionality is for markers to exchange essays and marks with departmental administrators) – students will not see the drop box in the left-hand side page menu
  • Participants with the ‘maintain’ and ‘contribute’ roles can now see and edit each other’s draft assignments

by Adam Marshall at February 07, 2018 04:04 PM

February 06, 2018

Adam Marshall

“The Only Way Is Up” – the increasing use of WebLearn over the last few years

I was just preparing the monthly report for the WebLearn service and thought it may be interesting to look at the long term trend in WebLearn usage. WebLearn has been using Google Analytics since January 2015 so I plotted a graph showing how the “Number of Sessions per Month” has increased over the last 3 years – as you can see it’s a fairly steady increase.

by Adam Marshall at February 06, 2018 11:41 AM

January 26, 2018

Dr. Chuck

Abstract: Learning Management Systems, Educational App Stores, Repositories, and Analytics – An Ecosystem Approach

The idea of a “Next Generation Digital Learning Environment” (NGDLE) is now several years old, and while the commercial products seem to be happy with the status quo of a monolithic LMS with mostly-proprietary integrations, the Apereo open source communities are adopting this new model across the board. Apereo is showing the path to the NGDLE by moving from a single LMS product (Sakai) to a situation where educational needs can be met from any number of open source projects like Tsugi, Equella, Open Learning Warehouse, Xerte, and others. Much like Sakai’s “interoperability first” approach in 2004 radically changed the educational technology marketplace, Apereo’s NGDLE efforts in 2018 are laying the groundwork that will dramatically transform the educational technology market for the next decade. The exciting part of this effort is that there are already production-ready open source projects that let us explore the next-generation ecosystem in action and begin moving toward a truly next-generation experience in educational technology.

Submitted to: JaSakai 2018

by Charles Severance at January 26, 2018 04:49 PM

Implementing a Standards Compliant Educational App Store with Tsugi (Educause)

Interoperability standards have matured enough to enable the creation of a standards based Application Store for Education that can be used in all the major LMS systems, MOOC platforms, and even Google Classroom.  An extensible application store is an important first step towards a Next Generation Digital Learning Ecosystem (NGDLE).

Keywords: NGDLE, Standards, Application Store, Interoperability

When you combine the IMS Learning Tools Interoperability (LTI), Deep Linking (Content Item), and Common Cartridge standards and use them together in a coordinated fashion, you can build an Educational App Store that integrates smoothly into the major LMS systems and MOOC platforms using IMS standards. Tsugi tools can also integrate into Google Classroom. Tsugi is a software framework that reduces the effort required to build IMS standards-compliant applications and integrate them into a learning ecosystem. A number of open source Tsugi tools are hosted and free to use online. This presentation will highlight how IMS standards can be used to deploy an educational app store and discuss how an app store lays a foundation for a Next Generation Digital Learning Ecosystem (NGDLE).

Participants will see a real, tangible element of the NGDLE. Tsugi is the first standards-based Application Store for education. While not everyone will walk out and start using it, the presentation will help us better understand what the NGDLE will look like.

Participants will be given a usability exercise to perform on the site, and we will gather feedback and report on it several times during the presentation.

One hope for Tsugi is to simplify the building of educational tools to the point where faculty, students, and instructional designers can take part in building our educational technology infrastructure.

Chuck led the development of the initial IMS LTI specification and is the lead developer of the Tsugi and TsugiCloud projects. He also has a tattoo that commemorates the major LMS systems that support LTI.

Chuck is a professor at the University of Michigan School of Information and uses a Tsugi App store to support his on campus classes in Canvas and support 100K students in his 10 Coursera courses and two specializations.

Submitted to: Educause 2018

by Charles Severance at January 26, 2018 04:05 PM

Sakai Community Update 2018

This presentation will review the progress on Sakai in 2017-2018, covering the Sakai 11 and 12 releases and looking ahead towards the Sakai 13 release.  We will review new features in Sakai 12, report from SakaiCamp 2018, the Sakai Virtual Conference, and FARM Projects.  We will update attendees on accessibility, QA efforts, documentation efforts, standards compliance, and marketing efforts. We will talk about the future arc of Sakai and how we intend to move Sakai forward to be part of a Next Generation Learning Ecosystem. We will cover these and other aspects of the Sakai product and community in a fun and upbeat talk show format.

Submitted to: Open Apereo 2018

by Charles Severance at January 26, 2018 03:26 PM

Adam Marshall

Improvements in WebLearn v11-ox9 released on 23 Jan 2018

A new version of WebLearn (version 11-ox9) was released on Tuesday 23 January 2018. There have been a number of improvements especially in the area of anonymous essay submissions.

Here is a breakdown of the enhancements.

Anonymous Submissions / Assignments

  • A warning is now issued if a file has been uploaded into the Assignment tool but the user hasn’t opted to ‘Submit’
  • The Turnitin Originality Report no longer loses anonymity once the due date has passed
  • Submission sites now have their own section in the Sites Drawer

Bulk Creation of Internal Subgroups

This release sees big improvements in the area of bulk creation of sub-groups (this is in Site Info > Manage Subgroups > Bulk Creation). It is possible to define multiple sets of groups and users in a file and have them created all at once. This facility should be particularly useful for Anonymous Submissions.

Visit Site Info and opt to Manage Subgroups.


On the next screen you are given the opportunity to upload a CSV file which can be generated by a spreadsheet application such as Excel. The contents of the CSV file can also be pasted into an on-screen textarea.
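To illustrate the expected input, here is a hypothetical bulk-creation file. The exact column layout WebLearn expects is not documented here; the two-column "group title,username" shape below is purely an assumption for illustration, so check the on-screen guidance before uploading.

```shell
# Hypothetical example only: the "group title,username" column layout
# is an assumption -- consult the on-screen guidance in Site Info >
# Manage Subgroups for the format WebLearn actually expects.
cat > subgroups.csv <<'EOF'
Essay Group A,user0001
Essay Group A,user0002
Essay Group B,user0003
Essay Group B,user0004
EOF

# The same rows could equally be pasted into the on-screen textarea.
cat subgroups.csv
```

Generating the file this way (or exporting it from a spreadsheet) makes it easy to define many subgroups at once.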

Contact Us Tool

  • The link to the WebLearn Guidance site has been corrected
  • If a user tries to visit a site to which they don’t have access, the correct contact details are now shown, making it much easier to ask to be made a site member


  • In the Lessons tool, on a public site, hyperlinks inserted via ‘add content’ now work correctly for non-logged in users
  • There should no longer be any emails sent with the old return address
  • The title of the main panel of the Overview page has been changed from (the meaningless) ‘Site Information Display’ to the more sensible ‘Welcome!’

by Adam Marshall at January 26, 2018 12:16 PM

January 16, 2018

Apereo Foundation

ATLAS Open and Call for Peer Reviewers


ATLAS 2018 is now OPEN! Please submit by February 26, 2018. The committee is seeking peer reviewers for the submissions.

by Michelle Hall at January 16, 2018 09:01 PM

October 11, 2017

Apereo OAE

Getting started with LibreOffice Online - a step-by-step guide from the OAE Project

As developers working on the Apereo Open Academic Environment, we are constantly looking for ways to make OAE better for our users in universities. One thing they often ask for is a more powerful word processor and a wider range of office tools. So we decided to take a look at LibreOffice Online, the new cloud version of the LibreOffice suite.

On paper, LibreOffice Online looks like the answer to all of our problems. It’s got the functionality, it's open source, it's under active development - plus it's backed by The Document Foundation, a well-established non-profit organisation.

However, it was pretty difficult to find any instructions on how to set up LibreOffice Online locally, or on how to integrate it with your own project. Much of the documentation that was available was focused on a commercial spin-off, Collabora Online, and there was little by way of instructions on how to build LibreOffice Online from source. We also couldn't find a community of people trying to do the same thing. (A notable exception to this is m-jowett who we found on GitHub).

Despite this, we decided to press on. It turned out to be even trickier than we expected, and so I decided to write up this post, partly to make it easier for others and partly in the hope that it might help get a bit more of a community going.

Most of the documentation recommends running LibreOffice Online (or LOO) using the official Docker container, found here. Since we recently introduced a dockerised development setup for OAE, this seems like a good fit. A downside to this is that you can’t tweak the compilation settings, and by default, LOO is limited to 20 connections and 10 documents.

While this limitation is fine for development, OAE deployments typically have tens or hundreds of thousands of users. We therefore decided to work on compiling LOO from source to see whether it would be possible to configure it in a way that allows it to support these kinds of numbers. As expected, this made the project substantially more challenging.

I’ve written down the steps to compile and install LOO in this way below. I’m writing this on Linux but they should work for OSX as well.

Installation steps

These installation steps rely heavily on this setup gist on GitHub by m-jowett, but have been updated for the latest version of LibreOffice Online. To install everything from source, you will need to have git and Node.js installed; if you don’t already have them, you can install both (plus npm, the node package manager) with sudo apt-get install git nodejs npm. You need to symlink Node.js to /usr/bin/node with sudo ln -s /usr/bin/nodejs /usr/bin/node for the makefiles. You’ll also need to install several dependencies, so I recommend creating a new directory for this project to keep everything in one place. From your new directory, you can then clone the LOO repository from the read-only GitHub mirror using git clone.

Next, you’ll need to install some dependencies. Let’s start with the C++ library POCO. POCO has dependencies of its own, which you can install using apt: sudo apt-get install openssl g++ libssl-dev. Then you can download the source code for POCO itself with wget. Uncompress the source files and, as root, run the following commands from your newly uncompressed POCO directory:

./configure --prefix=/opt/poco
make install

This installs POCO at /opt/poco.

Then we need to install the LibreOffice Core. Go back to the top-level project directory and clone the core repository with git clone. Go into the new 'core' folder. Compiling the core from source requires some more dependencies from apt. Make sure the deb-src line in /etc/apt/sources.list is not commented out. The exact line will depend on your locale and distro, but for me it’s deb-src xenial main restricted. Next, run the following commands:

sudo apt-get update
sudo apt-get build-dep libreoffice
sudo apt-get install libkrb5-dev

You can also now set the $MASTER environment variable, which will be used when configuring parts of LibreOffice Online:

export MASTER=$(pwd)

Then run ./autogen.sh to prepare for building the source. Finally, run make to build the LibreOffice Core. This will take a long time, so you might want to leave it running while you do something else.

After the core is built successfully, go back to your project root folder and switch to the LibreOffice Online folder, online/. I recommend checking out the latest release, which for me was 2.1.2-13: git checkout 2.1.2-13. We need to install yet more dependencies: sudo apt-get install -y libpng12-dev libcap-dev libtool m4 automake libcppunit-dev libcppunit-doc pkg-config, after which you should install jake using npm: npm install -g jake. We will also need a Python library called polib. If you don’t have pip installed, first install it using sudo apt-get install python-pip, then install the polib library using pip install polib. We should also set some environment variables while we’re here:

export SYSTEMPLATE=$(pwd)/systemplate
export ROOTFORJAILS=$(pwd)/jails

Run ./autogen.sh to create the configuration file, then run the configuration script with:

./configure --enable-silent-rules --with-lokit-path=${MASTER}/include --with-lo-path=${MASTER}/instdir --enable-debug --with-poco-includes=/opt/poco/include --with-poco-libs=/opt/poco/lib --with-max-connections=100000 --with-max-documents=100000

Next, build the websocket server, loolwsd, using make. Create the caching directory in the default location with sudo mkdir -p /usr/local/var/cache/loolwsd, then change caching permissions with sudo chmod -R 777 /usr/local/var/cache/loolwsd. Test that you can run loolwsd with make run. Try accessing the admin panel at https://localhost:9980/loleaflet/dist/admin/admin.html. You can stop it with CTRL+C.

That, as they say, is it. You should now have a LibreOffice Online installation with maximum connections and maximum documents both set to 100000. You can adjust these numbers to your liking by changing the --with-max-connections and --with-max-documents flags when configuring loolwsd.
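Since the connection and document limits are baked in at configure time, it can be handy to wrap that step in a small script. Here is a minimal sketch that assembles the same ./configure invocation used above with the limits as parameters; it only echoes the command (a dry run) so you can inspect it before running it for real. It assumes MASTER was exported from your core checkout and POCO lives at /opt/poco, as in the steps above.

```shell
#!/bin/sh
# Sketch: build the loolwsd configure command with adjustable limits.
# Assumes MASTER points at your LibreOffice core checkout (exported
# earlier) and POCO is installed at /opt/poco as described above.
MAX_CONNECTIONS="${MAX_CONNECTIONS:-100000}"
MAX_DOCUMENTS="${MAX_DOCUMENTS:-100000}"

CONFIGURE_CMD="./configure --enable-silent-rules \
--with-lokit-path=${MASTER}/include \
--with-lo-path=${MASTER}/instdir \
--enable-debug \
--with-poco-includes=/opt/poco/include \
--with-poco-libs=/opt/poco/lib \
--with-max-connections=${MAX_CONNECTIONS} \
--with-max-documents=${MAX_DOCUMENTS}"

# Dry run: print the command instead of executing it.
echo "$CONFIGURE_CMD"
```

Setting MAX_CONNECTIONS or MAX_DOCUMENTS in the environment before running the script changes the corresponding flags without editing anything by hand.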

Final words

Overall, I found this whole experience a bit discouraging. There was a lot of painful trial and error. We are still hoping to use LibreOffice Online for OAE in the future, but I wish it were easier to work with. We'll be posting a request in The Document Foundation's LibreOffice forum for a Docker version without the user limits to be released in future.

If you're also thinking about using LOO, or are already using it, and would like to swap notes, we'd love to hear from you. You can contact us via our mailing list or get in touch with us directly.

October 11, 2017 11:00 AM

September 18, 2017


Online Video Tutorial Authoring – Quick Overview

As an instructional designer, a key component of my work is creating instructional videos. While many platforms, software packages and workflows exist, here’s the workflow I use:

    1. Write the Script: This first step is critical, though to some it may seem rather artificial. Writing the script helps guide and direct the rest of the video development process. If the video is part of a larger series, inclusion of some ‘standard’ text at the beginning and end of the video helps keep things consistent. For example, in the tutorial videos created for our Online Instructor Certification Course, each script begins and ends with “This is a Johnson University Online tutorial.” Creating a script also helps ensure you include all the content you need to, rather than ad-libbing, only to realize later you left something out. As the script is written, particular attention has to be paid to consistency of wording and verification of the steps suggested to the viewer, so they’re easy to follow and replicate. Some of the script work also involves setting up the screens used, both as part of the development process and as part of making sure the script is accurate.


  2. Build the Visual Content: This next step could be wildly creative, but typically a standard format is chosen, especially if the video content will be included in a series or block of other videos. A 16:9 aspect ratio is often used for capturing content and can include both text and image content more easily. Build the content using a set of tools you’re familiar with. The video above was built using the following set of tools:
    • Microsoft Word (for writing the script)
    • Microsoft PowerPoint (for creating a standard look, and inclusion of visual and textual content – it provides a sort of stage for the visual content)
    • Google Chrome (for demonstrating specific steps – layered on top of Microsoft PowerPoint) – though any browser would work
    • Screencast-O-Matic (Pro version for recording all visual and audio content)
    • A good-quality microphone
    • Evernote’s Skitch (for grabbing and annotating screenshots), though use of native screenshot functions and using PowerPoint to annotate is also OK
    • YouTube or Microsoft Stream (for creating auto-generated captions – if it’s difficult to keep to the original script)
    • Notepad, TextEdit or Adobe’s free Brackets for correcting/editing/fixing auto-generated caption files (VTT, SRT or SBV)
    • Warpwire to post/stream/share/place and track video content online. Sakai is typically used as the CMS to embed the content and provide additional access controls and content organization
  3. Record the Audio: Screencast-O-Matic has a great workflow for creating video content, and it even provides a way to create scripts and captions. I tend to record the audio first, which in some cases may require two to four takes. Recording the audio first makes it easier to create appropriate pauses and to use deliberate inflection and enunciation of terms. For anyone who has created a ‘music video’ or set images to audio content, this will seem pretty doable.
  4. Sync Audio and Visual Content: This is where the use of multiple tools really shines. Once the audio is recorded, Screencast-O-Matic makes it easy to re-record, retaining the audio portion and replacing just the visual portion of the project. Recording the visual content (PowerPoint and Chrome) is pretty much just listening to the audio and walking through the slides and steps using Chrome. Skitch or other screen capture software may have already been used to capture visual content I can bring attention to in the slides.
  5. Once the project is completed, Screencast-O-Matic provides a one-click upload to YouTube or a save as an MP4 file, which can then be uploaded to Warpwire or Microsoft Stream.
  6. Once YouTube or Microsoft Stream has a viable caption file, it can be downloaded, corrected (as needed) and then paired back with any of the streaming platforms.
  7. Posting the video within the CMS is as easy as using the LTI plugin (via Warpwire) or using the embed code provided by any of the streaming platforms.
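When the same transcription mistake recurs throughout an auto-generated caption file, a stream edit can beat fixing each cue by hand. A small sketch, using a made-up SRT snippet and a hypothetical mis-heard term:

```shell
# Create a tiny sample SRT file (made-up content for illustration).
cat > captions.srt <<'EOF'
1
00:00:01,000 --> 00:00:04,000
This is a Johnson University Online tutorial on power point.
EOF

# Fix a recurring mis-transcription in one pass; .bak keeps a backup.
sed -i.bak 's/power point/PowerPoint/g' captions.srt
cat captions.srt
```

The corrected caption file can then be paired back with the video on Warpwire, YouTube or Microsoft Stream, just as with a hand-edited file.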

by Dave E. at September 18, 2017 04:03 PM

September 01, 2017

Sakai Project

Sakai Docs Ride Along

Sakai Docs ride along - Learn about creating Sakai Online Help documentation September 8th, 10am Eastern

by MHall at September 01, 2017 05:38 PM

August 30, 2017

Sakai Project

Sakai get togethers - in person and online

Sakai is a virtual community: we often meet online through email, and in real time through the Apereo Slack channel and web conferences. We have so many meetings that we need a Sakai calendar to keep track of them all.

Read about our upcoming get togethers!

SakaiCamp Lite
Sakai VC

by NealC at August 30, 2017 06:37 PM

Sakai 12 branch created!

We are finally here! A big milestone has been reached with the branching of Sakai 12.0. What is a "branch"? A branch means we've taken a snapshot in time of Sakai and put it to the side so we can improve it, mostly through QA (quality assurance testing) and bug fixing, until we feel it is ready to release to the world and become a community-supported release. We have a stretch goal from this point of releasing before the end of this year, 2017.

Check out some of our new features.

by NealC at August 30, 2017 06:00 PM

July 18, 2017

Steve Swinsburg

An experiment with fitness trackers

I have had a fitness tracker of some description for many years. In fact, I still have a stack of them. I used to think they were actually tracking stuff accurately. I compete with friends and we all have a good time. Lately, though, I haven’t really seen the fitness benefits I would have expected from pushing myself to get higher and higher step counts. I am starting to think it is bullshit.

I’ve had the following:

  1. Fitbit Flex
  2. Samsung Gear Wear
  3. Fitbit Charge HR
  4. Xiaomi Mi Band
  5. Fitbit Alta
  6. Moto 360
  7. Phone in pocket, set up to send to Google Fit.
  8. Garmin ForeRunner 735XT (current)

Most days I would be getting 12K+ just by doing my daily activities (with a goal of 11K): getting ready for work and children ready for school (2.5K), taking the kids to school (1.2K), walking around work (3K), going for a walk at lunch (2K), picking up the kids and doing stuff around the house of an evening (3.5K) etc.

My routine hasn’t really changed for a while.

However, two weeks ago I bought the Garmin Forerunner 735XT, mainly because I was fed up with the lack of Android Wear watches in Australia as well as Fitbit’s lack of innovation. I love Android Wear and Google Fit and have many friends on Fitbit, but needed something to actually motivate me to exercise more.

The first thing I noticed is that my step count is far lower than with any of the above fitness trackers. Like, seriously lower. We are talking at least 30% lower. As I write this I am sitting at ~8.5K steps for the day, and I have done all of the above plus walked to the shops and back (normally netting me at least 1.5K) and have switched to a standing desk at work which is about 3 metres closer to the kitchen than my original desk. So, negligible distance change. The other day I even played table tennis at work (you should see my workplace) and it didn’t seem to net me as many steps as I would have expected.

Last night I went for a 30 min walk and snatched another 2K, which is pretty accurate given the distance and my stride length. I think the Fitbit would have given me double that.

This is interesting.

Either the Garmin is under-reporting or the others are over-reporting. I suspect the latter. The Garmin tracker cost me close to $600 so I am a bit more confident of its abilities than the $15 Mi band.

So, tomorrow I am performing an experiment.

As soon as I wake up I will be wearing my Garmin watch with the Fitbit Charge HR right next to it, and keeping my phone in my pocket at all times. Both the watch and the Fitbit will be set up for left-hand use. The next day, I will add more devices to the mix.

I expect the Fitbit to get me to at least 11K, Google Fit to be under that (9.5K), and the Garmin to be under that again (8K). I expect the Mi Band to report a lot more than the Fitbit.
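Treating the Garmin as the baseline, those predictions can be expressed as over-reporting percentages. A quick sketch using the predicted totals above (the numbers are the author's guesses, not measurements):

```python
# Predicted end-of-day step totals for the same day's activity
predictions = {"Garmin": 8000, "Google Fit": 9500, "Fitbit": 11000}
baseline = predictions["Garmin"]

for tracker, steps in predictions.items():
    over = (steps - baseline) / baseline * 100
    print(f"{tracker}: {steps} steps ({over:+.1f}% vs Garmin)")
```

On these numbers the Fitbit would be over-reporting by 37.5% relative to the Garmin, which is consistent with the "at least 30%" gap observed so far.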

The fitness tracker secret will be exposed!

by steveswinsburg at July 18, 2017 12:46 PM

June 16, 2017

Apereo OAE

OAE at Open Apereo 2017

The Open Apereo 2017 conference took place last week in Philadelphia, and it provided a great opportunity for the OAE project team to meet and network for three whole days. The conference days were chock-full of interesting presentations and workshops, with the major topic being the next generation digital learning environment (NGDLE). Malcolm Brown's keynote was a particularly interesting take on this topic, although at that point the OAE team was still reeling from having a picture from our Tsugi meeting come up during the welcome speech: a surprising start to the conference! We noted how the words 'app store' kept popping up in presentations and in conversations among attendees. Perhaps this is something we can work towards offering within OAE soon? Watch this space...

The team also met people from many other Apereo projects and discussed current and future integration work with several project members, including Charles Severance from Tsugi, Opencast's Stephen Marquard, and Jesus and Fred from BigBlueButton. There's some exciting work to be done in the next few weeks... Although Quetzal was released only a few days before the conference, we are already teeming with new ideas for OAE 14!

After the conference events were over on Wednesday, we gathered together to have a stakeholders meeting where we discussed strategy, priorities and next steps. We hope to be delivering some great news very soon.

During the conference, the OAE team also helped attendees use the Open Apereo 2017 group hosted on *Unity, which supported online discussion of presentation topics. A lot of content was created during the conference days, so be sure to check it out if you're looking for slides and/or links to recorded videos. The group is public.

OAE team members who attended the conference were Miguel and Salla from *Unity and Mathilde, Frédéric and Alain from ESUP-Portail.

June 16, 2017 12:00 PM

June 01, 2017

Apereo OAE

Apereo OAE Quetzal is now available!

The Apereo Open Academic Environment (OAE) project is delighted to announce a new major release: OAE Quetzal, or OAE 13.

OAE Quetzal is an important release for the Open Academic Environment software and includes many new features and integration options that move OAE towards becoming the next-generation academic ecosystem for teaching and research.


LTI integration

LTI, or Learning Tools Interoperability, is a specification that gives developers of learning applications a standard way of integrating with different platforms. With Quetzal, Apereo OAE becomes an LTI consumer. In other words, users (currently only those with admin rights) can now add LTI-compliant tools to their groups for other group members to use.

These could be tools for tests, a course chat, a grade book - or perhaps a virtual chemistry lab! The only limit is what tools are available, and the number of LTI-compatible tools is growing all the time.
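For the curious, an LTI 1.1 launch is essentially a form POST of named parameters signed with OAuth 1.0 HMAC-SHA1. The parameter names below come from the IMS LTI 1.1 specification; the tool URL, key, and secret are made-up placeholders, and this is a rough illustration of the mechanism rather than OAE's actual consumer code:

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def sign_lti_launch(url, params, consumer_key, shared_secret):
    """Add OAuth 1.0 fields and an HMAC-SHA1 signature to LTI launch params."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_timestamp": str(int(time.time())),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded URL & encoded sorted params
    encoded = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(all_params.items())
    )
    base = "&".join(["POST", quote(url, safe=""), quote(encoded, safe="")])
    key = quote(shared_secret, safe="") + "&"  # no token secret in LTI 1.1
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

# Minimal required launch parameters per the LTI 1.1 spec
launch = sign_lti_launch(
    "https://tool.example.org/launch",       # hypothetical tool URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "oae-group-42",  # hypothetical link id
    },
    consumer_key="my-key",
    shared_secret="my-secret",
)
```

The tool provider recomputes the same signature with the shared secret and rejects the launch if it doesn't match.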

Video conferencing with Jitsi

Another important feature introduced to OAE in Quetzal is the ability to have face-to-face meetings using the embedded video conferencing tool, Jitsi. Jitsi is an open source project that allows users to talk to each other either one on one or in groups.

In OAE, it could have a number of uses: a brainstorming session among members of a globally distributed research team, say, or holding office hours for students on a MOOC. Jitsi can be set up for all the tenancies under an OAE instance, or on a tenancy-by-tenancy basis.
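At its simplest, embedding Jitsi amounts to pointing each group at its own meeting room on a Jitsi server. A hypothetical sketch of how per-group room URLs might be derived; the domain and the naming scheme are assumptions for illustration, not OAE's actual implementation:

```python
import re

def jitsi_room_url(tenancy: str, group_name: str,
                   domain: str = "meet.jit.si") -> str:
    """Derive a stable, URL-safe Jitsi room URL for a group on a tenancy."""
    raw = f"{tenancy}-{group_name}".lower()
    # Collapse anything that isn't a lowercase letter or digit into hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", raw).strip("-")
    return f"https://{domain}/{slug}"

print(jitsi_room_url("cam", "Quantum Research Team"))
# https://meet.jit.si/cam-quantum-research-team
```

Including the tenancy in the slug keeps room names from colliding across tenancies on the same OAE instance.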


Password recovery

This feature has been widely requested by users: the ability to reset their password if they have forgotten it. Now a user in such a predicament can enter their username, and they will receive an email with a one-time link to reset their password. Many thanks to Steven Zhou for his work on this feature!
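The usual pattern behind a one-time reset link like this: generate a random token, store only its hash along with an expiry time, email the token to the user, and invalidate it on first use. A minimal sketch of that pattern (an in-memory dict stands in for a database; this is not OAE's actual code):

```python
import hashlib
import secrets
import time

RESET_TTL = 3600  # token valid for one hour
_pending = {}     # username -> (token_hash, expires_at)

def issue_reset_token(username: str) -> str:
    """Create a one-time token; only its hash is stored server-side."""
    token = secrets.token_urlsafe(32)
    _pending[username] = (hashlib.sha256(token.encode()).hexdigest(),
                          time.time() + RESET_TTL)
    return token  # sent to the user as part of the emailed reset link

def redeem_reset_token(username: str, token: str) -> bool:
    """Valid only once, and only before expiry."""
    entry = _pending.pop(username, None)  # pop: a token can't be reused
    if entry is None:
        return False
    token_hash, expires_at = entry
    supplied_hash = hashlib.sha256(token.encode()).hexdigest()
    return time.time() < expires_at and \
        secrets.compare_digest(token_hash, supplied_hash)
```

Storing only the hash means a leaked database doesn't expose usable reset links, and `secrets.compare_digest` avoids timing side channels during verification.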

Dockerisation of the development environment

Many new developers have been intimidated by the setup required to get the Open Academic Environment up and running locally. For their benefit, we have created a development environment using Docker containers that lets newcomers get up and running much more quickly.

We hope that this will attract new contributions and let more people get involved with OAE.

Try it out

OAE Quetzal can be experienced on the project's QA server. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 13.0.0 and can be downloaded from the following repositories:


Documentation on how to install the system can be found at

Instruction on how to upgrade an OAE installation from version 12 to version 13 can be found at

The repository containing all deployment scripts can be found at

Get in touch

The latest project news is published from time to time on the project website and blog.

Apereo OAE has a public mailing list, which you can subscribe to.

Bugs and other issues can be reported in our issue tracker at

June 01, 2017 05:00 PM