Planet Sakai

April 15, 2019

Michael Feldstein

Carnegie Mellon and Lumen Learning Announce EEP-Relevant Collaboration

Late last week, Carnegie Mellon University (CMU) and Lumen Learning jointly issued a press release announcing a collaboration: an effort to integrate the Lumen-developed RISE analytical framework for improving curricular materials into the toolkit that Carnegie Mellon will be contributing under open licenses (and unveiling at the Empirical Educator Project (EEP) summit that they are hosting in May).

To be clear, Lumen and Carnegie Mellon are long-time collaborators, and this particular project probably would have happened without either EEP or CMU's decision to contribute the software that they are now openly licensing. But it is worth talking about in this context for two reasons. First, it provides a great, simple, easy-to-understand example of a subset of the kinds of collaborations we hope to catalyze. And second, it illustrates how CMU's contribution and the growth of the EEP network can amplify the value of such contributions.


The RISE framework is pretty easy to understand. RISE stands for Resource Inspection, Selection, and Enhancement. Lumen's focus is on using it to improve Open Educational Resources (OER) because that's what they do, but there's nothing about RISE that only works with OER. As long as you have the right to modify the curricular materials you are working with—even if that means removing something proprietary and replacing it with something of your own making—then the RISE framework is potentially useful.

From the paper:

In order to continuously improve open educational resources, an automated process and framework is needed to make course content improvement practical, inexpensive, and efficient. One way that resources could be programmatically identified is to use a metric combining resource use and student grade on the corresponding outcome to identify whether the resource was similar to or different than other resources. Resources that were significantly different than others can be flagged for examination by instructional designers to determine why the resource was more or less effective than other resources. To achieve this, we propose the Resource Inspection, Selection, and Enhancement (RISE) Framework as a simple framework for using learning analytics to identify open educational resources that are good candidates for improvement efforts.

The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate. In other words, learning outcome alignment of both content and assessment is critical to enabling the proposed framework. Our framework is flexible regarding the number of resources aligned with a single outcome and the number of items assessing a single outcome.

The framework is composed of a 2 x 2 matrix. Student grade on assessment is on the y-axis. The x-axis is more flexible, and can include resource usage metrics such as pageviews, time spent, or content page ratings. Each resource can be classified as either high or low on each axis by splitting resources into categories based on the median value. By locating each resource within this matrix, we can examine the relationship between resource usage and student performance on related assessments. In Figure 2, we have identified possible reasons that may cause a resource to be categorized in a particular quadrant using resource use (x-axis) and grades (y-axis).

Figure 2. A partial list of reasons OER might receive a particular classification within the RISE framework.

By utilizing this framework, designers can identify resources in their courses that are good candidates for additional improvement efforts. For instance, if a resource is in the High Use, High Grades quadrant, it may act as a model for other resources in the class. If a resource falls into the Low Use, Low Grades quadrant, it may warrant further evaluation by the designers to understand why students are ignoring it or why it is not contributing to student success. The goal of the framework is not to make specific design recommendations, but to provide a means of identifying resources that should be evaluated and improved.

Let's break this down.

RISE is designed to work with a certain type of common course design, where content and assessment items are both aligned to learning objectives. This design paradigm doesn't work for every course, but it works for many courses. The work of aligning the course content and assessment questions with specific learning objectives is intended to pay dividends in terms of helping the course designers and instructors gain added visibility into whether their course design is accomplishing what it was intended to accomplish. The 2x2 matrix in the RISE paper captures this value rather intuitively. Let's look at it again:

Each box captures potential explanations that would be fairly obvious candidates to most instructors. For example, if students are spending a lot of time looking at the content but still scoring poorly on related test questions, some possible explanations are that (1) the teaching content is poorly designed, (2) the assessment questions are poorly written, or (3) the concept is hard for students to learn. There may be other explanations as well. But just seeing that students who are spending a lot of time on particular content are still doing poorly on the related assessment questions leads the instructor and the content designer (who may or may not be the same person) to ask useful questions. And then there is some craft at the end about thinking through how to deal with the content that has been identified as potentially problematic.

This isn't magic. It's not a robot tutor in the sky. In fact, it's almost the antithesis. It's so sensible that it verges on boring. It's hygiene. Everybody who teaches with this kind of course design should regularly tune those courses in this way, as should everybody who builds courses that are designed this way. But that's like saying everybody should brush their teeth at least twice a day. It's not sexy.

Also, easy to understand and easy to do are two different things. Even assuming that your curricular materials are designed this way and that you have sufficient rights to modify them, different courses live in different platforms. While you don't need to get a lot of sophisticated data to do this analysis—just basic Google Analytics-style page usage and item-level assessment data—it will take a little bit of technical know-how, and the details will be different on each platform. Once you have the data, you will then need to be able to do a little statistical analysis. There isn't much math in this paper and what little there is isn't very complicated, but it is still math. Not everybody will feel comfortable with it.
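To make the mechanics concrete, here is a minimal sketch of the median-split classification the paper describes. The data and names are invented for illustration; this is not Lumen's actual implementation.

```python
from statistics import median

def rise_classify(resources):
    """Classify each resource into a RISE quadrant by median split.

    `resources` maps a resource name to (usage, grade), where usage is
    any engagement metric (e.g., pageviews) and grade is the mean score
    on the assessment items aligned to the same learning outcome.
    """
    usage_median = median(u for u, _ in resources.values())
    grade_median = median(g for _, g in resources.values())
    quadrants = {}
    for name, (usage, grade) in resources.items():
        use_label = "High Use" if usage >= usage_median else "Low Use"
        grade_label = "High Grades" if grade >= grade_median else "Low Grades"
        quadrants[name] = f"{use_label}, {grade_label}"
    return quadrants

# Illustrative data: four resources aligned to one learning outcome.
example = {
    "Reading 1": (120, 0.85),   # heavily used, strong scores
    "Video 2":   (30, 0.55),    # lightly used, weak scores
    "Sim 3":     (150, 0.50),   # heavily used, weak scores: worth a look
    "Quiz prep": (25, 0.90),    # lightly used, strong scores
}
print(rise_classify(example))
```

The "High Use, Low Grades" resources that fall out of a pass like this are the ones the framework flags for an instructional designer's attention.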

The typical way the sector has handled this problem has been for consumers to pressure vendors to add this capability as a feature to their products. But that process is slow and uncertain. Worse, each vendor will likely implement the feature slightly differently and non-transparently, which compounds the last point of friction: features like this require a little bit of literacy to use well. Everybody knows the mantra "correlation is not causation," but it is better thought of as the closest thing that Western scientific thinking can get to a Zen koan.1 If you think you've plumbed the depths of meaning of that phrase, then you probably haven't. If we want educators to understand both the value and the limitations of working with data, then they need to have absolute clarity and consistency regarding what those analytics widgets are telling them. Having ten widgets in different platforms telling them almost but not quite the same things in ways that are hard to differentiate will do more harm than good.

And this is where we fail.

While the world is off chasing robot tutors and self-driving cars, we are leaving many, many tools like RISE just lying on the floor, unused and largely unusable, for the simple reason that we have not taken the extra steps necessary to make them easy enough and intuitive enough for non-technical faculty to adopt. And by tools, I mean methods. This isn't about technology. It's about literacy. Why should we expect academics, of all people, to trust analytical methods that nobody has bothered to explain to them? They don't need to understand how to do the math, but they do need to understand what the math is doing. And they need to trust that somebody that they trust is verifying that the math is doing what they think it is doing. They need to know that peer review is at work, even if they are not active participants in it.

Making RISE shine

This is where CMU's contribution and EEP can help. LearnSphere is the particular portion of the CMU contribution into which RISE will be integrated. I use the word "portion" because LearnSphere itself is a composite project consisting of a few different components that CMU collectively describes as "a community data infrastructure to support learning improvement online." I might alternatively describe it as a cloud-based educational research collaboration platform. It is probably best known for its DataShop component, which is designed to share learning research data sets.

One of the more recent but extremely interesting additions to LearnSphere is called Tigris, which provides a separate research workflow layer. Suppose that you wanted to run a RISE analysis on your course data, in whatever platform it happens to be in. Lumen Learning is contributing the statistical programming package for RISE that will be imported into Tigris. If you happen to be statistically fluent, you can open up that package and inspect it. If you aren't technical, don't worry. You'll be able to grab the workflow using drag-and-drop, import your data, and see the results.

Again, this kind of contribution was possible before CMU decided to make its open source contribution and before EEP existed. They have been cloud hosting LearnSphere for collaborative research use for some time now.

But now they also have an ecosystem.

By contributing so much under open license, along with the major accompanying effort to make that contribution ready for public consumption, CMU is making a massive declaration to the world about their seriousness regarding research collaboration. It is a magnet. Now Lumen Learning's contribution isn't simply an isolated event. It is an early example with more to come. Expect more vendors to contribute algorithms and to announce data export compatibility. Expect universities to begin adopting LearnSphere, either via CMU's hosted instance or their own instance, made possible by the full stack being released under an open source license. This will start with the group that will gather at the EEP summit at CMU on May 6th and 7th, because one has to start somewhere. That is the pilot group. But it will grow. (And LearnSphere is only part of CMU's total contribution.)

With this kind of an ecosystem, we can create an environment in which practically useful innovations can spread much more quickly (and cheaply), and in which vendors, regardless of size or marketing budget, can be rewarded in the marketplace based on their willingness to make practical contributions of educational tools and methods that are useful to customers and non-customers alike. Lumen Learning has made a contribution with the RISE research. They now want to make a further contribution by making that research more practically useful. CMU's contributed infrastructure and the EEP network will give us an opportunity to reward that kind of behavior with credit and attention.

That is the kind of world I want to live in.

  1. Outside of quantum mechanics, at least.

The post Carnegie Mellon and Lumen Learning Announce EEP-Relevant Collaboration appeared first on e-Literate.

by Michael Feldstein at April 15, 2019 06:21 PM

April 12, 2019

Apereo Foundation

2019 ATLAS Winners Announced!


The Apereo Foundation is pleased to announce the winners of the Apereo Teaching and Learning Awards (ATLAS) for 2019.

by Michelle Hall at April 12, 2019 07:48 PM

April 03, 2019

Michael Feldstein

EEP, EDwhy, and Seeds

So the news broke today about the Empirical Educator Project's (EEP's) year two experimental design, which we're calling EDwhy. The "ED" stands for "Educational Design," so the full name means, basically, "Why is your educational design the way that it is?" It invites educators to interrogate their own designs and aspires to give them the tools to do so. Here is the press release.

We have some good coverage to start you off from Inside Higher Ed and EdSurge. At IHE, Lindsay McKenzie goes broad. She starts with some good shoe-leather work at Carnegie Mellon with some interviews. Pay close attention to the interview with Ken Koedinger, as he talks about (but does not name) a research finding called the doer effect, which I'm going to use as an example later in this blog post. She also provides a good refresher on the open source versus proprietary question that universities often face with substantial software intellectual property that they develop, and then touches lightly on EEP's role with the EDwhy announcement at the end (although with a clutch statement from Duke's Matthew Rascoff, who always seems to say the right thing with a lot of intellectual and moral clarity in very few words). If you're looking for a compact way into this story from the beginning, Lindsay's story is one good route in.

Meanwhile, Jeff Young at EdSurge has dug a little deeper into the significance behind the EDwhy idea and mechanics. I think the question that is on everyone's minds is, "OK, $100 million, lots of software, cool learning science-y things, but really, how is this going to be made useful?" Jeff begins to explore that question, and I'm going to take a deeper dive in this post. He also has some commentary from me about why we chose the name we did. You'll have to go read it on EdSurge to get those details, but I'll say this much here: On e-Literate, where one of our major roles is to critique hype and protect against the dangers of bad actors, we have an ethical obligation to throw some sharp elbows. With EEP, where we are not watching from the sidelines but actually entering the fray, we are mindful that our obligation shifts as our role shifts. We take the e-Literate lessons to heart while also attempting to be humble, both about the accomplishments of those before us and about how easy it is for us to fall into the same traps that very smart people before us have fallen victim to.

But I don't want to write about the naming decision too much here. Instead, I want to write about how we are going to attempt to live up to the humbling confidence that Carnegie Mellon expressed in us when they chose us as a partner in their grand project. Obviously, when they offered to make their enormous contribution through our fledgling organization, it both forced and empowered us to rethink how we would go about the project in Year 2. We had always planned to stop, evaluate, and iterate on the design after the first year, but this opportunity demanded a pretty dramatic rethink in approach which, to be honest, is still ongoing. We have an idea that I'm going to share with you now that I believe makes sense in concept but does not yet have a fine-grained implementation plan. We are working hard with our Carnegie Mellon friends to have a foundation in place by the time of the summit. We will also workshop the idea at the summit with the cohort to refine our approach. This is going to be a year-long project. So we expect to spend some time after the summit continuing to put pieces in place and fine-tuning as we go. At the end of the year, we will do a progress check, evaluate, and iterate.

The Hackathon

I am always mindful about appropriating terms from Silicon Valley culture, because I think that culture tends to be reflexively idealized. That said, there is a lot to like about the educational value of a hackathon. It is a social, time-bounded, self-organizing, problem-based learning exercise. A group of people get together to solve a defined problem over a period of time. That group is often cross-functional. It might include software engineers, user experience designers, end users, and so on. Hackathons have one tangible goal and several intangible ones. The tangible goal in the canonical case is a piece of software, but we can think of it more broadly as an artifact that has been tested and demonstrated to solve the problem set out at the beginning of the exercise. The intangible goals often include learning how to work in a cross-functional team, learning how to solve difficult problems with unexpected wrinkles, and learning particular craft-related skills necessary to solve the problem (e.g., programming tricks or software testing techniques).

This is a good model for the kind of culture building that EEP has always aspired to achieve and, I believe, it is what inspired Carnegie Mellon to see us as a good fit for their own ambitions. While I want to be clear that I do not speak for them, my understanding of their goals from our conversations thus far is that it would be a mistake to interpret their primary goal to be broader adoption of their software and other tools. Sure, they want to see that happen. But my read is that they see that as a second-order effect, or maybe as a means to an end. What I hear from them in our conversations is that they really want to make their approach to improving education broadly accessible and meaningfully useful. They call that approach "learning engineering," which they seem comfortable with me characterizing as one flavor or methodology within a broader developing family that we call "empirical education." The hackathon works to support this goal because it creates an environment in which people habitually self-organize in cross-functional groups to improve educational design in ways that empower greater student success. It brings together the right people around the right kinds of goals and conversations. If we can then empower them with the right tools and methods, we are on our way to promoting learning engineering. If we can achieve that, we can unlock the real power of the big release, which is to help democratize the science of education.

While I said I didn't want to dwell on our name choice here, it's probably worth spending a little time on the word "design" in the way we are using it in EDwhy. A number of different overlapping but distinct stakeholder groups in academia tend to compete for mindshare around this word—Design Thinking practitioners, Instructional Designers, Learning Designers, User Experience Designers, and others. Making sense of how these all connect yet are distinct from each other is non-obvious even before we get to culturally local differences in usage. To give one example, Herb Simon, in addition to being the father of Learning Engineering, is considered by some to be the grandfather of Design Thinking. These are two compatible but distinct and non-interchangeable disciplines. In most places outside of Carnegie Mellon, their practitioners tend to be either completely ignorant of each other or find themselves cast as rivals in educational solution design.

"Design" in the EDwhy context is a holistic and colloquial term meaning, simply, the way you decided to put something together. A cross-functional EDwhy hackathon team might include people with knowledge of Design Thinking, Instructional Design, Learning Design, User Experience Design, and/or Learning Engineering. Who is at the table will depend on the specific nature of the challenge being tackled and the kinds of expertise needed to take it on.

At any rate, as we started thinking about how to help our network digest Carnegie Mellon's $100 million contribution—never mind the sum of all possible contributions from all current and future EEP participants—we started thinking about both the digestive process and coming up with a form that is digestible. Verbs and nouns.

The hackathon is the verb. Theoretically, the hackathon is flexible enough to allow for projects of different sizes and ambitions, whether inter- or intra-institutional. We still very much want to encourage inter-institutional collaboration, but one lesson we learned last year is that inter-institutional collaboration is incredibly hard, even with a lot of work done by third parties to lower barriers. We have to build a gentle slope toward that level of collaboration. The hackathon is a form that lets people start small and grow in ambition. At some point, they will outgrow the form and need something more like a traditional project with more formal management structures.

We aspire to reach the point where we have that problem. For now, we are focused on culture-building, and we hypothesize that the hackathon is a good ritual for accomplishing that while also delivering immediate educational utility.

The Seeds

The hackathon idea is simple enough to grasp in the abstract. The hard part is putting it together with the right packages that help people identify and solve new problems using the contributions from Carnegie Mellon or other participants. For this, we've developed the concept of an EDwhy "seed." This is one of the pieces I will want to workshop with the EEP cohort, but there's enough here conceptually that the general idea should be clear.

We start with a general area of interest where some research has been done but where there are more questions to be answered. For example (and as I mentioned earlier), Ken Koedinger and his CMU colleagues have done some research into something called "the doer effect." It means pretty much what it sounds like. The researchers were able to demonstrate, using solid quantitative methods, that learning by doing is, for example, about six times more effective than learning by watching a video.

(Side note for all you liberal arts folks out there who are suspicious of this data stuff: This study more or less just made the case for constructivism. Using numbers and computers and statistics and stuff.)

That's an interesting finding, if not a shocking one, but it also highlights a lot that we don't know. For example, is doing always better than watching a video (or reading) for learning? Should we throw out all books and videos? If not, then how much watching or reading is good? In what order? Does the subject matter make a difference? The expertise of the learner? Other characteristics of the learner? Other characteristics of the overall course design? Or course goals?
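As a toy illustration of the shape of that original finding, one could regress assessment scores on counts of doing versus watching. The data below is fabricated, and ordinary least squares is not the method the CMU researchers actually used; this only sketches the kind of question being asked.

```python
import numpy as np

# Fabricated per-student course data: practice activities completed,
# videos watched, and score on the aligned assessment. The effect
# sizes are baked in for illustration.
rng = np.random.default_rng(0)
n = 200
doing = rng.integers(0, 40, n)       # practice activities completed
watching = rng.integers(0, 40, n)    # videos watched
score = 0.6 * doing + 0.1 * watching + rng.normal(0, 5, n)

# Fit score ~ doing + watching + intercept by least squares.
X = np.column_stack([doing, watching, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"doing coefficient:      {coef[0]:.2f}")
print(f"watching coefficient:   {coef[1]:.2f}")
print(f"ratio (doing/watching): {coef[0] / coef[1]:.1f}")
```

Even in a toy like this, the open questions from the paragraph above remain: a single coefficient says nothing about ordering, subject matter, or learner expertise.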

Let's make this more concrete. One of my favorite course designs is Habitable Worlds by ASU's Ariel Anbar. There is a lot of learning by doing in that problem-based course, but also liberal use of video. It would be interesting to do some testing and experimentation to find out how to make the most out of the doer effect and find the optimal balance of the course elements.

As it turns out, Carnegie Mellon's contributions include the software that was used to conduct the original doer effect research. (The IHE article mentions LearnSphere. Spend a little time exploring that site if you're curious.) That software includes a data repository with access to (appropriately anonymized) data that could be used to replicate the results (or try to run different analyses on the data), a visual workflow that makes the study easily repeatable with different data, and access to the underlying R packages (for those who can understand them) to make the research methods completely transparent. If you put together the original studies, the software, the workflows, the data to practice reproducing the results, the transparency of the methods, and wrap in some documentation, some training, and a number of suggested starter questions for investigation, you have a seed. A self-organizing community could take up that seed and develop a hackathon project. If there were also a community forum where the hackathon group could ask questions of statisticians, cognitive psychologists, and psychometricians, as well as some technical support folks, as well as share lessons learned with each other, then you could really have something.

I'm guessing the net result might turn out to be what we would call an "intermediate" seed. Not every team would have the capability to self-organize around something this complex. We'd like to develop beginner, intermediate, and advanced seeds, where beginner seeds are approachable by non-technical groups, intermediate seeds might require some technical skill and some knowledge of experimental design, and advanced seeds are really for folks who have some serious specialist expertise in their groups. I'll defer on the final difficulty ratings of each seed, including the one I just described, to the creators and the early adopters. One skill set we will be learning in the EDwhy experiment is how to package up a seed to make it accessible and useful to different sorts of audiences. Eventually, we may develop profiles of hackathon teams that are richer than just beginner/intermediate/advanced.

At any rate, our goal for the year is to prove out and refine the approach through some pilot seeds and hackathons. We don't imagine that we will be able to address the entire surface area of Carnegie Mellon's $100 million contribution in the one-year time frame, but we do aspire to prove out a novel and sustainable support and diffusion mechanism, not only for the software but for the methods and the culture. And during this time, we will also invite other EEP members to develop and contribute their own seeds, some of which will be less technical or will tackle entirely different types of educational problems than Carnegie Mellon's seeds will. This is a general mechanism we will be trying out. Interestingly, another arrow that CMU has in its quiver is the Open Learning Initiative (OLI) authoring and delivery platforms. So we may very well find that their contributions to seed development go well beyond the open source software code, which I think is the way people are naturally tending to think about the contribution at this early stage in the process.

Both learning and science—or any path to enlightenment, really—start with a simple admission: "There is so much that I don't know, and so much that I would like to understand better." Big announcements like this generally run against the grain of that admission. We have an ingrained cultural notion that, after spending $100 million, you are supposed to know all the answers. After spending seven years in graduate school, you are supposed to know all the answers. After getting all the press and all the buzz, you are supposed to know all the answers.

Nope. Sorry. It doesn't work that way.

There is so much that we don't know, and so much that we would like to understand better. If you keep repeating that mantra to yourself every time you hear something new about Carnegie Mellon's contribution or about EEP or the EDwhy initiative, each new piece of information will make a lot more sense to you.

The post EEP, EDwhy, and Seeds appeared first on e-Literate.

by Michael Feldstein at April 03, 2019 03:45 PM

April 02, 2019

Michael Feldstein

Christensen Scorecard: Data visualization of US postsecondary institution closures and mergers

In 2013, Harvard Business School professor Clayton Christensen made a bold prediction, based on his ubiquitous theory of disruptive innovation, that maybe half of all postsecondary institutions could close within 10-15 years.

(source, starting at 6:25)

The scary thing is that 15 years from now, maybe half of the universities will be in bankruptcy, including the state schools. But in the end, I'm excited to see that happen.

Christensen then doubled down on his predictions in 2017, humorously saying it might take nine years instead of ten.

(source, starting at 1:04:42)

Q. Do you still believe, as you've said before, that as many as half of colleges and universities will be bankrupt or closed within a decade?

A. Um, yes. [snip] Whether the providers get disrupted within a decade -- I might bet that it takes nine years rather than 10. Maybe I’m too scared about the Harvard Business School to be rational about it. But we should worry.

There have been plenty of articles written about these claims, but it has been frustrating that very few back up their analysis with data. One exception is Derek Newton's article critiquing the claims in Forbes, titled "No, Half Of All Colleges Will Not Go Bankrupt".

Look at the numbers. In the 2013-14 year, there were 3,122 four-year colleges according to the Department of Education. In 2017-18, the most recent data, there were 2,902 – a drop of about 7% over four years. That could be disruptive. But numerically, nearly all of the school closures since Christensen made his 2013 forecast were four-year, for-profit schools, which fell from 769 in 2013 to 499 in 2017 – a drop of 270. Of all the colleges, at all levels, that have closed since 2013, 95.5% of them were for-profit institutions.

Another exception is Michael Horn's explanation of the predictions (he co-authored the 2013 New York Times op-ed, titled "Innovation Imperative: Change Everything", that included the initial prediction). His 2018 post at the Christensen Institute, "Will half of all colleges really close in the next decade?", also sought to go back to the original, more nuanced claims of 25% closures and mergers.

Translation? Our predictions may be off, but they are directionally correct.

To that I emphasize one more piece of nuance. Ultimately we are really predicting a failure rate, made up of a combination of closures, mergers or acquisitions, and bankruptcies in which a college or university has the opportunity to restructure itself. Not all universities that “fail” will disappear. [snip]

From 2004–2014, “Closures among four-year public and private not-for-profit colleges averaged five per year from 2004-14, while mergers averaged two to three,” according to Moody’s. Moody’s predicted in 2015 that that closure rate—out of 2,300 institutions—would triple by 2017, and the merger rate would double.

Assuming that were true, and say that the rate held steady for 15 years, that would take out roughly 13% of existing higher education institutions right there.
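For what it's worth, the arithmetic behind that rough 13% figure checks out against the Moody's numbers quoted above:

```python
# Moody's baseline (2004-14): ~5 closures and ~2.5 mergers per year
# among four-year public and private nonprofit institutions.
closures_per_year = 5
mergers_per_year = 2.5

# Moody's 2015 prediction: closure rate triples, merger rate doubles.
predicted_rate = closures_per_year * 3 + mergers_per_year * 2  # 20 per year

# Hold that rate steady for 15 years, out of ~2,300 institutions.
institutions = 2300
share = predicted_rate * 15 / institutions
print(f"{share:.0%}")  # roughly 13%
```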

Thanks to our partners with our LMS Market Analysis service, LISTedTECH, we can now provide data visualizations to better evaluate the validity or likelihood of these claims. For the first time that I'm aware of, we have visualizations showing combined closures and mergers over time, broken down by sector and degree-type, and showing data 2-3 years in advance of IPEDS publications.

The LISTedTECH data shown below tracks known closures and mergers, which have then been checked against both IPEDS and Federal Student Aid data sets. There are translation issues in all three data sets, so the data will not match 100% - probably more at the 80 - 90% confidence level. The first view shows combined closures and mergers per year, broken out by control and whether they are classified as 2-year or 4-year degree-granting institutions.

Closed US higher ed schools over past decade

As Derek Newton and Michael Horn pointed out, the vast majority of closures were from the for-profit sectors. Part of the dynamic at play is that when a large for-profit chain meets its demise (e.g., Corinthian Colleges, ITT, Westwood Colleges) or has a massive downturn (e.g., University of Phoenix), literally dozens of individual institutions close, whereas when a small private nonprofit college in New England closes, it is one school. Add to that the massive drop in for-profit enrollments since 2012.

The public sector data in 2013 and 2014 is largely driven by reorganizations in the University System of Georgia.

Also note that the 2019 data only includes the first quarter.

If we want to track the Christensen (and Horn) predictions, however, we need to view this data as a running total.

Running total of closed and merged US higher ed institutions

Let's zoom out to capture the timeline of the most recent predictions, a decade from 2017, and let's add the rough levels indicated (using the bold row from this IPEDS table to define the number of institutions).

Running total of closed US institutions with trend lines

If you include all degree-granting institutions (i.e. for-profits as well as private nonprofits and publics), then the current trend lines suggest that the 50% closure prediction by 2027 is feasible. Note, however, that there are fewer than 1,000 for-profit institutions remaining as of Fall 2017 IPEDS data, so the rate of for-profit closures cannot continue for more than another 8-10 years (best case / worst case, take your pick).

There are quite a few stories recently about private nonprofit small-school closures, but the data thus far don't show a rapid acceleration of closures. Some perspective is useful here.

If you ignore the for-profit sectors, then the trend line for private nonprofit and public institution closures + mergers remains far below that needed to hit the 25% level described by Horn or the 50% level described by Christensen. None of this is to say that the trends moving forward will be linear, however. The rate of private nonprofit and public closures and mergers would need to at least triple to hit the more conservative level of 25% within a decade, a possibility that I would not reject out of hand. And it turns out that Moody's was wrong: the rate of closures and mergers in this group did not triple from 2015 to 2017. Nevertheless, the data could get worse.

We'll share more information on this new data, but hopefully these visualizations provide a better sense of the trends in college closures and mergers.

The post Christensen Scorecard: Data visualization of US postsecondary institution closures and mergers appeared first on e-Literate.

by Phil Hill at April 02, 2019 10:30 AM

April 01, 2019

Apereo Foundation

March 27, 2019

Apereo Foundation

2019 Apereo Fellows: call for nominations


The Apereo Fellows program seeks to foster community leadership and contributions by recognizing and supporting active contributors. This year five (5) Fellows will be chosen by the selection committee and each Fellow will receive a modest stipend.  

Nominations are due 9 April 2019.

by Michelle Hall at March 27, 2019 04:45 PM

March 08, 2019

Dr. Chuck

Tsugi Achieves LTI Advantage Certification

Tsugi is one of the first learning tools to achieve a brand-new certification for an interoperability standard called LTI Advantage, continuing the longtime leadership of open source learning projects in the Apereo Foundation in standards and interoperability. Tsugi is an application library that allows rapid development of standards-compliant learning applications.

Tsugi certification coupled with the recent Sakai certification completes an open source end-to-end solution for both the Platform and Tool versions of the LTI Advantage specification.  Open Source implementations allow proprietary vendors to examine source code and have an endpoint for regular interoperability testing.
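For readers unfamiliar with the underlying standard: an LTI 1.3 launch (the core of LTI Advantage) is a signed JWT whose payload carries claims defined by the IMS specification. The sketch below shows the general shape of such a payload; the claim URIs are from the spec, while the issuer, client ID, and other values are made-up examples.

```python
# Illustrative sketch of an LTI 1.3 launch JWT payload (the core of LTI
# Advantage). Claim URIs are defined by the IMS LTI 1.3 specification;
# the issuer, audience, and IDs below are made-up example values.
launch_claims = {
    "iss": "https://platform.example.edu",          # the launching platform
    "aud": "example-tool-client-id",                # the tool's client ID
    "sub": "user-1234",                             # the launching user
    "https://purl.imsglobal.org/spec/lti/claim/message_type": "LtiResourceLinkRequest",
    "https://purl.imsglobal.org/spec/lti/claim/version": "1.3.0",
    "https://purl.imsglobal.org/spec/lti/claim/deployment_id": "deployment-1",
    "https://purl.imsglobal.org/spec/lti/claim/resource_link": {"id": "link-42"},
}

# A tool validates the JWT signature (not shown here), then checks that the
# required launch claims are present before trusting the request:
required = [
    "https://purl.imsglobal.org/spec/lti/claim/message_type",
    "https://purl.imsglobal.org/spec/lti/claim/version",
    "https://purl.imsglobal.org/spec/lti/claim/deployment_id",
]
assert all(claim in launch_claims for claim in required)
```

Certification testing exercises exactly this kind of message exchange (plus the Advantage services for names/roles and grades), which is why having open source endpoints on both the Platform and Tool sides is useful to the whole marketplace.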

Apereo projects like Tsugi and Sakai benefit the entire marketplace whether or not a school adopts the software that is produced as part of Apereo. – Charles Severance, founder of the Sakai and Tsugi Projects

The Tsugi project provides a free test server that allows LMS platforms like Sakai, Blackboard, Canvas and Desire2Learn to do regular LTI Advantage interoperability testing against a scalable, production-grade, LTI Advantage compliant educational application store.

In addition to using LTI Advantage for integration into enterprise LMS systems, Tsugi tools also can be integrated into Google Classroom.

For more information, or to see how to use Tsugi to develop standards-compliant learning tools, visit the Tsugi project site.

by Charles Severance at March 08, 2019 02:41 PM

March 07, 2019

Adam Marshall

WebLearn User Group: Tues 12 March 14:00-16:00

Please join us at the next meeting of the WebLearn User Group:

Date: Tuesday 12 March 2019

Time: 2:00 – 4:00 pm, followed by refreshments

Venue: IT Services, 13 Banbury Rd

Come and meet with fellow WebLearn users and members of the Technology Enhanced Learning (TEL) team to give feedback and share ideas and practices.

Book now to secure your place.


  • Canvas@Oxford project team: Update on the Canvas rollout to Year 1 programmes of study
  • James Shaw, Bodleian Libraries: Copyright and the CLA: Preparing digital material for presentation in a VLE
  • Jon Mason, Medical Sciences: Interactive copyright picker (based on source and intended use)
  • TEL team: Design and content for WebLearn pages
  • Adam Marshall: WebLearn updates

Join the WebLearn User Group site for regular updates and access to audio recordings of previous presentations.

Dr Jill Fresen, Senior Learning Technologist, Technology-Enhanced Learning, IT Services, University of Oxford

by Adam Marshall at March 07, 2019 02:41 PM

February 24, 2019

Dr. Chuck

Abstract: Coursera Office Hours @ Penguicon – Python for Everybody

This is a meeting between students of Dr. Chuck’s Python for Everybody, Internet History, Technology, and Security, and Web Applications for Everybody (PHP / SQL) online Coursera courses and anyone else who is interested in MOOCs and the MOOC movement. You can see other meetings that Dr. Chuck has had with students on his website.

by Charles Severance at February 24, 2019 05:54 PM

February 13, 2019

Dr. Chuck

What Should I do After I Finish the Python for Everybody Specialization?

I got this message from a student:
I am currently a student in your Python for Everybody online Coursera specialization, and am about to complete the Capstone course. I would like to move my career into App Development / Software Engineering management, what is the next course that you would recommend that I take?
Here is my answer.

Congratulations on making it through the specialization. In terms of what to do next, a lot has to do with how confident you feel at this point. If you are still struggling with the programs in the specialization, then you might want to go back and take another “beginner class”. There are levels of beginner classes, and you might benefit from one that is more rigorous, like this one from UMich.

If you are confident in your programming skills, it depends on what you want to do. If building web apps is something you find interesting, these specializations will move you in that direction.

If you want to go into Data Mining, we have a specialization for that too.

You should have your programming skills well in place before you take the data science specialization – but it has a lot of good stuff and important job-ready skills.

by Charles Severance at February 13, 2019 01:51 AM

February 12, 2019


Peer Assessment – Reflect and Improve

Peer assessment or review can improve student learning, and there's a way to do it in a course site.


by Dave E. at February 12, 2019 04:17 PM

February 08, 2019

Adam Marshall

Copyright and Audio-Visual Material

I thought this copyright guidance from LearningOnScreen (The British Universities and Colleges Film and Video Council) may be of interest to some.

by Adam Marshall at February 08, 2019 04:42 PM

January 28, 2019

Adam Marshall

Free Accessibility Tool


I thought I’d pass along the following message from JISC.

It is important to ensure that the visual content of your website and learning resources has alternative text for those who either cannot see the visual content or struggle to interpret it.

However, how do you know what an appropriate description is? And some visual content is merely eye candy that is best hidden entirely from screenreader users, rather than wasting their time announcing something that is meaningless to the learning experience.

Making these different choices requires a certain degree of understanding, but the good news is that there are some excellent free training resources out there. A recent quote from a Vision Australia newsletter reminded me of the Poet Training Tool (which I’ve used, and which has nothing to do with poetry!).

Vision Australia’s partner Benetech has developed the Poet Training Tool, which provides best practice guidelines and exercises that will help you grow your skills in writing effective image descriptions, benefiting everyone who needs to access your digital documents, web pages and mobile apps.

This free resource is broken up into three helpful sections:

  1. Helps you determine when a description is actually needed.
  2. Provides guidelines on how to write an effective description (with examples).
  3. Lets you upload content and practice writing your own descriptions.

If you or your colleagues are going to be involved in revisiting digital images on your website or learning platform then I highly recommend using these resources.

Alistair McNaught
Subject specialist – accessibility

by Adam Marshall at January 28, 2019 03:21 PM

November 23, 2018

Matthew Buckett

Firewalling IPs on macOS

I needed to selectively block some IPs on macOS, and this is how I did it. First, create a new anchor for the rules to go in. The file to create is /etc/pf.anchors/org.user.block.out and it should contain:

table <blocked-hosts> persist
block in quick from <blocked-hosts>

Then edit: /etc/pf.conf and append the lines:

anchor "org.user.block.out"
load anchor "org.user.block.out" from "/etc/pf.anchors/org.user.block.out"

Then to reload the firewalling rules run:

$ sudo pfctl -f /etc/pf.conf

and if you haven't got pf enabled you also need to enable it with:

$ sudo pfctl -e

Then you can manage the blocked IPs with these commands:

# Block an IP (192.0.2.1 is an example address from the RFC 5737 documentation range)
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T add 192.0.2.1
# List the currently blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T show
# Remove a single IP
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T delete 192.0.2.1
# Remove all the blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T flush

by Matthew Buckett at November 23, 2018 12:41 PM

November 09, 2018

Apereo OAE

Apereo OAE Snowy Owl is now available!

The latest version of Apereo's Open Academic Environment (OAE) project has just been released! Version 15.0.0, codenamed Snowy Owl, includes some changes (mostly under the hood) to pave the way for what's to come. Read the full changelog on GitHub.


November 09, 2018 06:50 PM

September 27, 2018

Sakai Project

Sakai 12.4 maintenance is released!

Dear Community,

I'm pleased to announce on behalf of the worldwide community that Sakai 12.4 is released and available for downloading! 

Sakai 12.4 has 88 improvements including: 

  • 22 fixes in Assignments
  • 14 fixes in Gradebook
  • 9 fixes in Tests & Quizzes (Samigo)
  • 7 fixes in Lessons
  • 6 fixes in Roster
  • 5 fixes in Portal

For more information, visit 12.4 Fixes by Tool

by WHodges at September 27, 2018 06:11 PM

August 15, 2018

Sakai Project

Now Open! Call for Proposals for the Sakai Virtual Conference 2018

Sakai Project Logo

We are actively seeking presenters who are knowledgeable about teaching with Sakai. You don’t need to be a technical expert to share your experiences! Submit your proposal today! The deadline for submissions is September 21st, 2018.

Save the Date: The Sakai Virtual Conference will take place entirely online on Wednesday, November 7th.

by MHall at August 15, 2018 06:58 PM

August 13, 2018

Sakai Project

Sakai Community Survey - Number of Users at Your Institution

We would like your help in tallying up the total number of Sakai users worldwide.

by MHall at August 13, 2018 04:33 PM

July 04, 2018


F2F Course Site Content Import

If you're tasked with teaching an upcoming course that you've taught in the past with the University - there's no need to rebuild everything from scratch - unless you want to. Faculty teaching face to face (F2F) courses can benefit from the course content import process in Site Info. This process allows you to pull …

by Dave E. at July 04, 2018 06:56 PM

June 11, 2018

Apereo OAE

Strategic re-positioning: OAE in the world of NGDLE

The experience of the Open Academic Environment Project (OAE) forms a significant practical contribution to the emerging vision of the ‘Next Generation Digital Learning Environment’, or NGDLE. Specifically, OAE contributes core collaboration tools and services that can be used in the context of a class, of a formal or informal group outside a class, and indeed of such a group outside an institution. This set of tools and services leverages academic infrastructure, such as Access Management Federations, or widely used commercial infrastructure for authentication, open APIs for popular third-party software (e.g. video conference) and open standards such as LTI and xAPI.

Beyond the LMS/VLE

OAE is widely used by staff in French higher education in the context of research and other inter-institutional collaboration. The project is now examining future directions which bring OAE closer to students – and to learning. This is driven by a groundswell among learners. There is strong anecdotal evidence that students in France are chafing at the constraints of the LMS/VLE. They are beginning to use social media – not necessarily with adequate data or other safeguards – to overcome the perceived limitations of the LMS/VLE. The core functionality of OAE – people forming groups to collaborate around content – provides a means of circumventing the LMS’s limitations without selling one’s soul – or one’s data – to the social media giants. OAE embodies key capabilities supporting social and unstructured learning, and indeed could be adapted and configured as a ‘student owned environment’: a safe space for sharing and discussion of ideas leading to organic group activities. The desires and requirements of students have not featured strongly in NGDLE conversations to this point. The OAE project, beginning with work in France, will explore student discontent with the LMS, and seek to work together with LMS solution providers and software communities to provide a richer and more engaging experience for learners.

Integration points and data flows

OAE has three principal objectives in this area:

  1. OAE has a basic (uncertified) implementation of the IMS Global Learning Tools Interoperability specification. This will be enriched to further effect integration with the LMS/VLE where it is required. OAE will not assume such integration is required without evidence. It will not drive such integration on the basis of technical feasibility, but by needs expressed by learners and educators.
  2. Driven by the significant growth of usage of the Karuta ePortfolio software in France, OAE will explore how student-selected evidence of competency can easily be provided for Karuta, and what other connections might be required or desirable between the two systems.
  3. Given the growth of interest in learning analytics in France and globally, OAE will become an exemplary emitter of learning analytics data and will act wherever possible to analyse each new or old feature from a designed analytics perspective. Learning analytics data will flow from learning designs embedded in OAE, not simply be the accidental output that constitutes a technical log file.

OAE is continuing to develop and transform its sustainability model. The change is essentially from a model based primarily on financial contributions to a mixed, community-based model, where financial contributions are encouraged alongside individual, institutional and organisational volunteered contributions of code, documentation and other non-code artefacts. There are two preconditions for accomplishing this. The first, which applies specifically to code, is clearing a layer of technical debt in order to more easily encourage and facilitate contributions around modern software frameworks and tools. OAE is committed to paying down this debt and encouraging contributions from developers outside the project.

The second is both more complex and more straightforward; straightforward to describe, but complex to realise. Put simply, answers to questions around wasteful duplication of resources in deploying software in education have fallen out of balance with reality. The pendulum has swung from “local” through “cloud first” to “cloud only”. Innovation around learning, which by its very nature often begins locally, is often stifled by the industrial-style massification of ‘the hosted LMS’ which emphasises conformity with a single model. As a result of this strategy, institutions have switched from software development and maintenance to contract management. In many cases, this means that they have tended to swap creative, problem-solving capability for an administrative capability. It is almost as though e-learning has entered a “Fordist” phase, with only the green shoots of LTI enabled niche applications and individual institutional initiatives providing hope of a rather more postmodern – and flexible - future.

OAE retains its desire and ambition to provide a scalable solution that remains “cloud ready”. The project believes, however, that the future is federated. Patchworks of juridical and legal frameworks across national and regional boundaries alone – particularly around privacy - should drive a reconsideration of “cloud only” as a strategy for institutions with global appetites. Institutions with such appetites – and there are few now which do not have them – will distribute, federate and firewall systems to work around legislative roadblocks, bumps in the road, and brick walls. OAE will, then, begin to consider and work on inter-host federation of content and other services. This will, of necessity, begin small. It will, however, remain the principled grit in the strategic oyster. As more partners join the project, OAE will start designing a federation architectural layer that will lay the foundation to a scenario where OAE instances dynamically exchange data among themselves in a seamless and efficient way according to a variety of use cases.

ID 22-MAY-18 Amended 23-MAY-18

June 11, 2018 12:00 PM