Before I move on to my next case study in academic institutions moving toward operational excellence at supporting student success, I want to revisit a section toward the end of my last post on the California Community Colleges Online Education Initiative (OEI). I was looking at the alignment that has to be achieved at various levels in the academic organization in order to encourage all the stakeholders to embrace this collective mission, with all the changes to their day-to-day work and even professional identities it would entail. Much of the piece is about the work that had been done so far to get alignment at various levels within the administration. But toward the end of the piece, I speculated a bit on potential opportunities for fostering faculty alignment through a course peer review process using a common rubric:
[T]o me, one of the most interesting vectors for culture-building is the course exchange course quality rubric. Every course on the exchange has to be evaluated against a rubric of evidence-backed effective online teaching practices. As the pace at which exchange courses are developed increases, OEI will not be able to keep up with demand to evaluate these courses using central staff. So they are creating a peer reviewer mechanism in which faculty on the campuses are trained on the rubric and presumably compensated to review courses that are candidates for the exchange.
This opportunity fascinates me. We know that faculty who go through an expert-supported course redesign process often experience intellectually deep and emotionally moving shifts in their teaching strategies. Is the same true when faculty are trained reviewers of their colleagues' redesigned courses? What effect will simply exposing faculty to more and different course designs have? How will their role as reviewers and critiquers shape or enhance that effect? Can a continuously improved and updated rubric become a vector for sharing new research-supported processes across the system on an ongoing basis? Will the impact be broad and deep enough to foster new kinds of intra- and inter-campus faculty dialogs about the scholarship of teaching and learning (SoTL)? Will these cultural changes help to foster alignment around continuous operational improvement for enabling student success? This is the last mile problem of higher education. Operational excellence at student success cannot be achieved unless it is infused in the daily operations in individual classrooms. That requires affirmative faculty buy-in, support, training, and embedding in a culture that invites them into the larger conversation.
Unpacking this a bit, what does it really mean to build a culture of operational excellence in supporting student success? What kind of change would be necessary at the individual level to achieve change at the organizational level?
There is a useful concept in organizational psychology called "double-loop learning." I'll give a simple example from a non-academic organization first to make the concept clear. Suppose your company manufactures smartphones. You want supply to match demand as exactly as possible. If you manufacture too many phones, then you will sink expense into building units that will sit on the shelves and fairly quickly become obsolete. But if you manufacture too few, then you won't have phones to sell at the moments that people need to buy them, thus encouraging them to buy a different (more available) phone instead. In a single-loop model, you have one lever to pull, which is how many phones you produce at a given time. It's like a thermostat: If the room is too cold, then turn on the furnace. If the room is warm enough, then turn off the furnace. If there are not enough phones on the shelves, then turn up production. If there are too many phones on the shelves, then turn down the production line.
The problem is that there's a significant lag between when the order is given to produce more phones and when they arrive on the shelves. During that time period, demand can change. Maybe the new phones the company produced during a period of high demand actually land on the shelves at the beginning of a recession, or right after a competitor releases its hot new model. The single-loop, thermostat-like model doesn't work very well.
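To make the dynamics concrete, here is a minimal sketch of the thermostat-style rule facing a delivery lag. All of the numbers and names are invented for illustration; none of this comes from the OEI or Pearson material.

```python
# A hypothetical single-loop production controller. The only lever is how
# many phones to order, based on current shelf inventory -- but orders take
# `lag` periods to arrive, and the rule ignores what's already in transit.

def simulate_single_loop(demand, target_stock=100, lag=4):
    """Return the shelf inventory over time under a thermostat-style rule."""
    inventory = target_stock
    pipeline = [0] * lag          # orders in transit, oldest first
    history = []
    for d in demand:
        inventory += pipeline.pop(0)   # deliveries arrive after the lag
        sold = min(d, inventory)       # can't sell phones we don't have
        inventory -= sold
        # Thermostat rule: order whatever would restore the target stock,
        # ignoring everything already in the pipeline.
        pipeline.append(max(0, target_stock - inventory))
        history.append(inventory)
    return history

# Even with perfectly steady demand, the lag makes the shelf swing between
# near-empty and a glut, because each period's correction arrives too late.
print(simulate_single_loop([20] * 12))
```

The point of the toy model is that the failure is structural: no amount of fiddling with the one lever fixes it, because the lever itself operates inside an assumed process (the four-period lag) that the single loop never questions.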
Of course, the people who run the company are smart enough to know this, so they come up with all sorts of work-arounds. They build warehouses to hold excess phones near where they are built, since holding onto the phones that way is cheaper than shipping them halfway across the world and negotiating with the retail stores that are selling them and may want to ship excess inventory back. They build sophisticated forecasting models that account for factors such as the economy and competitor behavior, so the chances of them being badly wrong are reduced. These are all work-arounds to a fundamental problem regarding the costliness of being wrong in your demand forecasts. And this is exactly the way manufacturers of all kinds of complex items, including smartphones, used to operate in the old days.
But then somebody somewhere questioned a fundamental premise that drove so much effort and activity: Does it have to take so long from the time the company orders new products to be manufactured until those products reach the retail shelves? Maybe there's some part that often holds up the whole product; if we could only use a different part, or make the part ourselves, then we could get rid of a lot of the delays. Maybe the places where those component parts come from are farther away from our factory than they need to be; if we could just get them to move closer, then we could cut down on the lags. Maybe we have extra steps in our manufacturing process, or use outdated equipment; if we could only make some updates, then we can shorten the lag. And if we do all of these things, as well as more generally finding and making changes anyplace where the process bogs down, then maybe we don't have to put products on shelves at all. Maybe we can get the delay between order and manufacture short enough that we could start manufacturing the device when the consumer orders it and get it assembled and shipped fast enough that the consumer would tolerate the delay.
This is double-loop learning. Organizations not only use processes that allow them to make adjustments but also regularly examine the assumptions behind those processes that may be unnecessarily getting in the way of achieving organizational goals. We assume that we have to develop processes to mitigate bad product demand forecasts because we assume that those costs will be high because, in turn, we assume that manufacturing the product will take a long time once we decide to do it. But what if we're wrong?
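The two loops can be sketched as two separate processes: an inner loop that adjusts behavior within the existing process, and an outer loop that periodically re-examines the process parameters the inner loop takes for granted. Everything here is a hypothetical illustration; the function names and numbers are mine, not from any source in this post.

```python
# Hypothetical sketch of double-loop learning as two nested processes.

def inner_loop_order(inventory, target_stock=100):
    """Single loop: restore the target stock level within the current process."""
    return max(0, target_stock - inventory)

def outer_loop_review(lag, bottlenecks_removed):
    """Double loop: question the premise that the order-to-shelf lag is fixed.

    Each bottleneck removed (a slow supplier, an outdated machine, an
    unnecessary manufacturing step) shortens the delay the inner loop
    has to live with.
    """
    return max(0, lag - bottlenecks_removed)

lag = 6
for _ in range(3):       # periodic reviews of the operating assumptions
    lag = outer_loop_review(lag, bottlenecks_removed=2)
print(lag)               # as the lag shrinks, build-to-order becomes thinkable
```

The design point is that the outer loop does not issue orders at all; it changes the constants the inner loop treats as facts of life, which is exactly the move the single-loop organization never makes.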
Double-loop thinking is a reasonably simple concept to understand but very hard to execute well and consistently. In the smartphone manufacturer example, think about all the many kinds of assumptions in the way things had always been done that would have to be identified, questioned, and replaced with a better-designed alternative. Particularly in the early days, when there weren't models to copy or lessons learned elsewhere, no one person could see all the changes that would have to be made. There would be many people across the organization—in manufacturing, product design, contract negotiation, shipping, retail relations, and so on—who would each be able to spot an individual sub-optimization in her daily work experience. And then more people would have to be involved in designing a solution to each sub-optimization, including accounting for all the ripple effects across other aspects of the organization. It would be an all-hands-on-deck sort of affair. Everyone would be needed to find problems, identify potential solutions, check those solutions for side-effects, and then implement them well.
Double-loop learning in academia
Now think about a few of the many questions that are starting to be asked about the operating assumptions about the education-related processes of colleges and universities:
Why must students stay in a course for a set number of weeks, regardless of how quickly or slowly they are capable of learning the material?
Why are students only able to register for and start a course at most a couple of set times in the year?
Why are some very common teaching modalities based on the default assumption that all students learn in roughly the same way and encounter roughly the same rough spots?
Why do we define the minimum math literacy for a college degree as basic algebra rather than, say, statistics?
Why do we believe that professorial training requires at least five years of deep disciplinary education and at most one course in pedagogical education?
Why do faculty gain job security through research excellence far more than through teaching excellence?
Why do we assume that students know and understand everything they need to do from the moment they receive their college acceptance to the moment they arrive on campus for the start of their first semester of class?
Why do we assume we can know when individual students are in trouble and need help from the academic institutions when no employee of that institution sees the student for more than a few hours in the week—at most—and there is no good mechanism for sharing concerns and observations among the people who have contact with that student?
Think about the people who were in a position to spot each of these assumptions. Think about all the people required to design, troubleshoot, and implement alternatives that arise out of questioning the assumptions. If we want to reliably create student-ready colleges, then we need to be able to identify many unwarranted assumptions and design many alternative ways of doing things in ways that will deeply affect the ways in which academic institutions—and the people employed by them—work. To change everything, you need everyone. That specifically includes faculty.
A rubric as a vector for change
Way back in late 2013, I wrote about Pearson using a rubric to try to catalyze this sort of broad-based organizational shift in (critical) thinking:1
So if you're the CEO of a major textbook publisher and you want to unite the entire 45,000-employee company around a plan to transform the way the company does business, what do you do? Surprisingly, Pearson CEO John Fallon's answer was, "I'll create a rubric."
I'm not going to analyze Pearson's rubric in detail here.... I'll say this much about it: It's nothing special. It's not bad, but it's not genius either. There are plenty of flaws and limitations you could find if you worked at it and applied it broadly enough. There is no magic in it.
But here's the thing: There is never any magic in a rubric. The magic, when there is any, happens from the norming conversations that the rubric engenders. It happens when one colleague says to another, "What do you mean by 'quality of evidence'?" Or "I scored that course a 2 on effectiveness. Why did you think it was a 4?" To the degree that the Effectiveness Framework proves to have any magic for Pearson, it will be in the norming conversations that it engenders across the company. Like our hypothetical Berkeley president, Fallon is working with diverse groups within an institution that has a culture of independence and Balkanization. Some of this is for good reason; conversations about effectiveness in chemistry education should look very different from conversations about effectiveness in fine arts education. Some of the fractiousness is about lack of a common culture and language necessary to discuss what otherwise are common challenges. And some of it is just human territoriality and self-interest. The first two challenges might be addressed by having a deep and wide ongoing norming conversation about a rubric that is general enough to cover a wide range of disciplines and products but focused enough to provoke important discussions. The goal is for that conversation to become the basis for a new culture. The third challenge might be addressed by reinforcing that culture through your HR and other business practices.
Since I wrote that post, Pearson has developed a set of rubrics for evaluating whether a given product supports research-backed learning design principles. They have rolled those rubrics out to every product team and trained their product teams on how to use them. They have released them under a Creative Commons license. (For more on both the resource itself and Pearson's interest in working with academics to make them more useful to academia, see the talk given by Pearson's Global Head of Efficacy and Reach at last year's Empirical Educator Project summit.) So Pearson continues to use what is essentially an academic strategy, not that different from the one being rolled out by California OEI, to build a double-loop culture around designing educational content and software functionality that are more effective at impacting student outcomes.
The rubric development, training, and norming processes are necessary but not sufficient. As I suggest in that last sentence of the Pearson post quote, other organizational processes need to be put in place as well in order to get the desired effect. It would be easy to get faculty thinking that the new practices are baked into the rubric, and as long as everybody is aligned with them, you're good. The organization goes through the double-loop, but only until the norming process is complete. This is, in fact, what happens in many colleges and universities that adopt course quality rubrics. The institution has to mindfully employ the rubric updating, retraining, and renorming processes as methods for collaborative innovation. The rubric needs to be designed at a high enough level that it invites discussion and thought rather than rote implementation. The processes around it need to be collaborative rather than broadcast-only. And many other processes—like compensation for time invested or rewards for innovation, to take a couple of obvious examples—need to be created or modified to support and dovetail with the rubric processes.
There are enough organizations implementing course quality rubrics that we should be able to start gathering stories and effective practices for using them to foster continuous organizational improvement. If anyone has a good example, please let me know.
Pearson is a sponsor of e-Literate's Empirical Educator Project.
In mid-2012, in the midst of MOOC mania, I wrote a post noting that we should pay attention to future generations of the concept and that there were four barriers that the MOOC vendors would have to overcome to have any long-lasting impact.
Given this short timeline and the nature of investment-backed educational experiments, I think the real focus should be on whether and how MOOCs or successor models build on current scalability and openness while overcoming these four barriers.
Six years later, it is becoming increasingly clear that the next-generation model for MOOCs in higher education is to become a form of Online Program Management (OPM) provider, with a near-term focus on master's degrees. The OPM market has demonstrated revenue models (tuition revenue sharing mixed with fee-for-service), the end credential is the already-accepted degree, course completion rates are higher for paying and matriculated students, and degree programs have methods for student authentication. In other words, the MOOC-based OPM model is the next generation, designed to address these challenges.
The picture one gets is of a chaotic market that is not for the faint of heart, and one that will likely see further consolidations and category changes. 2U, for its part, has been successful partially due to a niche strategy where they go after elite master's programs and mostly avoid direct competition or engagement with the rest of the market. And recently we have started to see the MOOC providers become OPM providers - where the primary revenue for Coursera and FutureLearn is based on revenue sharing with online programs, albeit with lower sharing rates and with very different marketing approaches. In other words, there seem to be several efforts to enter the same OPM race while, if possible, avoiding the mainline rev-share OPM market.
Last week Julia Stiglitz from GSV Advisors, in their first podcast episode, interviewed Coursera CEO Jeff Maggioncalda, who joined the company in summer 2017. This interview gives the clearest view yet of Coursera's emerging business model, and by extension it helps explain the new subset of MOOC-based OPM that includes FutureLearn, edX, and Udacity as vendors.1 I think that the media narrative of tuition revenue-sharing vs. fee-for-service OPM models is overblown, especially since there is a spectrum in that respect more than a binary choice of OPM vendor types. What the MOOC-based OPM entry introduces is a more fundamental distinction in how traditional institutions develop online programs - namely, low-cost vs. full-cost online degrees.
The first note from the interview is that the Coursera of 2018 is not the Coursera of 2012. While Maggioncalda still shows aspects of that old-time MOOC belief system, his approaches are very much rooted in focusing Coursera on a solid business model. And the difference shows. The second note is that 2U's success in the OPM market and a successful IPO had a big influence on Coursera's shift. [Emphasis added in transcript]
Julia: You know when you first joined. You spent some time looking at Coursera's strategy, and really digging in and looking at the different businesses that Coursera had, and one of them that you were particularly attracted to, and you have put increased attention on here at Coursera is the online degrees business. What was it about online degrees that excited you?
Jeff: Yeah. This is sort of I think another good example of what good entrepreneurs have to do, is you have to have feedback loops; you need to get information from multiple sources to understand the nature of a problem so that you can come up with solution. The nature of an opportunity so you can develop a strategy to go after it. It's really actually pretty simple. I came in - you were on the team, too, we did a lot. We call them deep dives. We went all through the business model, and there's a great book Business Model Generation that really, to me, gives a nice framework for saying this is what a business model is. It is a target customer. It is a value proposition and offering that solves their needs. It's a set of channels of how you acquire those customers. It's a servicing models of how you service the models. Internally it's the key activities and resources you bring to bear on that. It's the partners that you work with. It's the financial revenues and costs, and is your competition. So and that's the framework. And we stepped through every one of those. I wrote 250 questions across that business model that we as an executive team went through. You know step by step by step, so that everybody learned the nature of our business. And what became very obvious is we had a few things that nobody else really has.
We had 36 million learners, at the time it was 25 million. 25 million learners from all around the world. That's a pretty big asset. We had university partners. Now there are competitors out there like LinkedIn Learning, previously Lynda, like PluralSight, like SkillSoft. You know there's YouTube, there's Khan Academy - there's a lot of content out there. You were one of the ones who told me in one of those early meetings, "Hey we're worried that content, generic content, might become a commodity." Well, we don't want to play a commodity game. So what is it about my partners that's super distinctive? Well our partners are universities, and they're not just the universities, they're the best universities in the world, and they're spread around the world. So you say, well I've got a resource that almost no one has, which is this network of universities. Right now they're publishing MOOCs, and there's something special about MOOCS, but MOOCs are a little more susceptible to that commoditization just as MOOCs. But what was not very susceptible to commoditization are degrees. So that's OK. We have an asset nobody else has, and what they do really well is degrees, and they still have market sizing. How big is the market for degrees? 1.5 trillion dollars. Okay, well that's a pretty big opportunity. And then you say, what's the likelihood that that industry could be transformed due to technology . . . You know, some industries it's easier to transform, others, it's harder. The provision of education is absolutely set up nicely to be enhanced, transformed by technology.
I think Uber and Lyft were really smart when they said "you know on-demand transportation, called a taxi, it's a big market, but it's a broken product. And if we just do some sort of digital view of this kind of redesign what on-demand transportation looks like, it'll be a much bigger market." I'm looking at degrees, I'm not saying it's broken altogether, but if you look at the student debt out there, you look at the lack of access, and you look at how inconvenient it is for people to have to stop their lives - especially for master's degrees - quit their job, move their family, pay hundreds of thousands of dollars, forfeit their income. That's a broken product. So I thought we got partners who are really good, and a massive economic opportunity, and a product that is just ready to be dramatically improved by technology, and so I thought this is pretty good. We should go after this. By the way we also had 2U trading at like a 12 times forward multiple. So clearly Wall Street loved the idea of online degrees, and 2U's been doing great. They're growing really rapidly, so there's a data point out that says, hey this company is doing really, really well; We should be able to do pretty well here, too.
Julia: Could you share a little bit about what this redesign looks like? Because the online degrees aren't new. You know 2U is doing them. And before 2U there were a whole set of online degree providers that were out there, so why is what Coursera is doing different?
Jeff: Yeah I think it's a few things. The number one, I would say, is quality. When I say quality, I mean the quality of the credential. So a lot of people have spent a lot of money on for-profit college degrees that just don't have very good credential value, they're not recognized in the job market. You pay a lot of money, you don't get much back for it. One of the reasons that people pay so much for the top universities is that those types of degrees means something in the job market. There have been a lot of online degrees out there, from universities, that charge a lot and don't get you very far. Our partners happen to be the best universities in the world, with the highest credential value in the world. When these degrees come online, and these degrees online are the same degrees as on campus, you're getting something as a credential that's extremely valuable. That should have a very high ROI. Because we're doing it online the cost is often less than half. So it's a top quality credential at half the price. Same credential you get on campus.
Different Assumptions on Tuition
There's a lot of useful insight in the full interview, but I'd like to call out the fundamental question that gets raised about online education with this market view. Should online degrees from traditional universities cost the same as face-to-face offerings, or should they cost significantly less?
For the full-service revenue-sharing segment of the OPM market, the core economics are built on the assumption of high revenue-share percentages and full-priced online degrees. 2U is probably the best-known and arguably the most successful OPM company, and like Coursera they target elite institutions as partners. On 2U's website under "Our Approach," they describe how their online programs typically charge the same as on-campus programs.
Most of the full-service revenue-sharing segment of the OPM market is similar in its view, whether from Pearson, Wiley, Academic Partnerships, or others - relying on tuition for online programs that is consistent with campus rates; where there are lower prices, they tend to be only marginally lower.2 The Coursera approach is in direct contrast with this view, based on the interview as well as several of their online degree programs. There are arguments for either approach. With full-cost tuition, the idea is that the online degree gives at least as much value to students as the face-to-face, or on-campus, degree, and therefore students will be willing to pay the same. With low-cost tuition, the idea is that while students get the same value, "because we're doing it online" the costs should necessarily be lower: online infrastructure and marginal costs are much lower than investment in physical facilities. The point here is that this is a fundamentally different set of assumptions.
For most, the initial appeal of the program was certainly the price tag. Illinois’ iMBA costs a fraction of a degree from an elite school, where the median cost is roughly $171,000 and can break the $200,000 mark at the far end of the scale. Illinois’ own residential two-year MBA costs more than $100,000. Arshad Saiyed, executive director of online programs at the Gies College, acknowledges that the low cost brought the program to many prospective students’ attention — but says the iMBA has kept students around through a combination of high-quality instruction and successful community building.
Different Assumptions on Student Recruiting
For OPM full-service vendors, the largest expense is typically marketing and sales - i.e., recruiting potential qualified students. The predominant approach to OPM student recruitment has been based on digital marketing - advertising and outreach on social media platforms, search engine placement, digital advertisements in articles. With the MOOC-based OPM subset of the market, there is now an alternative approach based on a multi-sided platform model. Coursera views their 36 million registered learners as an asset - a natural base of potential students for online degrees who can be reached without external advertising. In addition, the original aim and design of large-scale MOOCs is based on the ability to sign up new learners easily at low or no cost, with the opportunity to move these students into higher-cost credentials and degrees over time, not requiring full financial commitments from students up front. While a Coursera or FutureLearn might use digital marketing for recruitment, it is not their primary method.
Different Assumptions on Course Size
Related to the above assumptions, in 2U's case the class size is small - typically 10 to 20 students - leveraging the platform's design around small discussion groups and using both synchronous and asynchronous learning. This 2018 article about Washington University's two programs partnering with 2U partially describes this approach.
But what is it like for a student to pursue a graduate degree in law fully online? How could a pre-recorded lecture support the active teaching that's integral to the discipline? After all, watching a video isn't the same as participating in a conversation. To support such engagement, 2U created a new tool.
"Through building an online LLM [master's of law] program with Washington University in St. Louis, we learned how to design one of the most important tools we provide today: the bidirectional learning tool, or BLT," said Chip Paucek, co-founder and CEO of 2U. "Socratic-style teaching is fundamental to all law curriculum and coursework. As such, it was imperative for us to design a way to conduct Socratic-style group discussions for Wash U once we signed their online LLM program.
"What we didn't realize is that while we were developing a software tool to help solve the challenge of teaching the Socratic method online, we were simultaneously creating a way to facilitate discussion-based learning in an asynchronous environment that would eventually be used in all of our future partner programs."
The approach that 2U and Wash U Law conceived relies upon the ingenious integration of asynchronous and synchronous course components. Instead of lecturing from a podium, faculty address small groups of student actors. At key points, the instructor breaks the fourth wall and addresses the online student, who is prompted to answer without the benefit of knowing how his or her peers have responded. In other words, students can't piggyback like they might in an in-person class.
After responding, online students can review one another's answers. They might be prompted to answer follow-up questions, or they might be asked to come to the next live class prepared to defend whatever position they've chosen. The preparatory work that might otherwise happen during an in-person class is accomplished in advance through the pre-recorded sessions, enabling faculty to make better use of live, synchronous time.
I think about systems. As the system gets bigger, where would the bottlenecks emerge? My sense is that the bottlenecks will emerge in live sessions and in grading. That’s my guess. The grading, I’m actually not so concerned about because I think the ability to automate grading at scale will become pretty good. The live sessions get tricky. From a technology perspective, I’m not that worried about it. It’s the professor’s time and attention. My thought is it’s going to be a little bit like pyramid, where the number of hours that the main professor puts in won’t really change. If you think about how medical systems have worked, a doctor is in the system, but the number of minutes and hours that a doctor spends [with each patient] becomes an increasingly smaller portion of the total time [during which medical treatment is being delivered]. I think it will probably be somewhat similar for education. The size of the classes could be big, let’s say 10,000. But that will be broken into sections of say 50. And each of those sections has an expert who’s probably not the professor. Also, there will be a lot more collaborative learning among the peers in the class. If you think about it, a lot of learning does actually happen among the folks in a class. The expert just dispensing wisdom is not the way most learning happens. I call it “high engagement learning at scale.” A major piece of high engagement learning at scale is utilizing your classmates to provide a highly valuable learning experience.
Coursera is pursuing a path to enable high enrollments in low-cost programs, and they view their challenge as balancing scale and student engagement, with class sections of roughly 50 students.
Good Enough vs. Better Enough
In two posts recently, Michael described a battle in the digital curricular materials market. Focusing on Cengage Unlimited in the first one, he described this dynamic.
Make no mistake; this is a potential inflection point in the curricular materials market. There is a war raging between curricular materials that are "good enough," meaning that the lower price has a bigger impact on student outcomes than any differences in the quality of more expensive alternatives, versus "better enough," meaning both instructors and students believe the product makes a sufficient difference in student outcomes that the more expensive product is worth the premium. Cengage is betting the farm on "good enough" beating out "better enough" and, win or lose, their bet could cause tectonic shifts in how curricular materials are developed, purchased, and used. It will have implications for inclusive access, adaptive courseware, textbook companies, textbook authors, and the landscape of options available to students and teachers.
Elaborating in the second post:
The distinction I’m trying to make between two strategies is a little tricky. I’m not arguing that Cengage, for example, thinks that their products aren’t great or that they think all anybody needs is the cheapest PDF possible. And on the other hand, “better enough” no longer means better editing or better production values, which is the way that textbook publishers used to position themselves against OER (and still do sometimes, although that reflex is beginning to fade). Rather, it’s about improving student outcomes.
What we are seeing in the OPM market, with the introduction of MOOC-based degrees, is a new battle. MOOC providers and their partner institutions, represented by Coursera, are betting on "good enough," while 2U and its partners are betting on "better enough." As in the curricular materials market, the product is judged on student outcomes, which wraps in the value of the credential coming from the university along with the academic and administrative experience enabled by the company. Coursera obviously believes in the quality of its experience, and its partners have some programs that are not deeply discounted, but its market position is based on program price being the compelling feature for students, including free or low-cost on-ramps. 2U understands that students are seeking more cost-effective options, which was one driver behind creating the short-course segment with the acquisition of GetSmarter, but its market position is based on quality of experience and value of credential being the compelling features for students. The difference in approaches is stark and significant.
While there is likely room in the market for both approaches, the Coursera of 2018 (and not the Coursera of 2012) deserves careful observation to understand future trends with online degrees. Win or lose, their bet on low-cost online degrees will have big implications in the market.
Outside of its legacy Georgia Tech contract, Udacity has moved to the corporate professional development market, which is a different approach to the same problem.
Earlier this month Ben Thompson from Stratechery wrote a post, analyzing SAP's $8 billion acquisition of Qualtrics, that provides insight into the shift in the value proposition of the academic LMS. Framed along enterprise software lines, the SAP explanation shows the broader shift of enterprise software extending its view of the internal operations of an organization to also include a deeper view of the end users of an organization's offerings - students in the case of the LMS.
Thompson describes how SAP was founded in the 1970s and has a dominant position in Enterprise Resource Planning (ERP) systems that use central databases to provide customers with "a 'real-time' view of the state of their company" - essentially showing what the company is doing from an internal view. Customer Relationship Management (CRM) products emerged in the 1990s with the rise of ubiquitous PCs and the emerging Internet, tracking interactions with a company's customers across time and across multiple locations - essentially showing a view of who the customers are and their interactions. Thompson then describes the challenge that modern companies face.
Fast forward another 20 years and the world has dramatically shifted yet again: not only are computing devices and Internet access ubiquitous, but critically, that ubiquity is not confined to businesses: customers, the ultimate endpoint of any business, are today just as connected as the employees of any large enterprise.
This can be a rather frightening proposition for large businesses: look no further than social media, where seemingly every week some terrible story about a company with poor customer service goes viral; there are an untold number of similar sob stories shared instantly with friends and family.
This same trend applies in education, with students being just as connected as faculty and staff of a college or university.
There are millions of complaints every day about disappointing customer experiences. This is called the experience gap. Businesses used to have time to sort this out, but in today’s unforgiving world, the damage is immediate, disruption is imminent. This has shifted the challenge from running a business to guaranteeing great experiences for every single person.
Qualtrics provides a survey tool along with a sophisticated set of analytics and reporting tools based on this data - the key for SAP to understand consumer experiences. What is crucial, however, is not the standalone capabilities of Qualtrics, but the combination, again described by SAP's CEO [emphasis added]:
To win in the experience economy there are two pieces to the puzzle. SAP has the first one: operational data, or what we call O-data, from the systems that run companies. Our applications portfolio is end-to-end, from demand chain to supply chain. The second piece of the puzzle is owned by Qualtrics. Experience data, or, X-data. This is actual feedback in real-time from actual people. How they’re engaging with a company’s brand. Are they satisfied with the customer experience that was offered. Is the product doing what they expected? What do they feel about the direction of their employer?
Think of it this way: the O-data tells you what happened, the X-data tells you why it happened.
This view of enterprise software navigating the larger trends of ubiquitous technology and connectivity, leading from the what to who to why, provides clarity on many of the trends we see in the ed tech world.
In education, the Learning Management System (LMS) was originally and more accurately called a Course Management System, and it has historically been focused on the management of courses, primarily through announcements to class, rosters, grade book, distribution of syllabus and course content, and submission of student work. Consider this figure from the ECAR Study of Faculty and Information Technology, 2017 that mirrors several other studies in its results:
While the modern LMS has advanced in many ways - particularly around usability, interoperability, and system reliability - the common usage of this ERP-of-the-classroom has remained fairly steady. The dominant usage is managing the what of courses.
The LMS provides tools to manage communications - a view of the who of courses - through inbox, discussion boards, announcements, and various conferencing apps, but of these the dominant usage is announcements: one-way communication from faculty to students. The tools are there, but a holistic view of interactions with students is not the reality.
The shift in education from running a course to guaranteeing great experiences for students, to bastardize the SAP explanation, is much like the move towards experience management referred to in the Stratechery article. The movement is in its infancy, and it is likely to be measured in terms of decades, not years. Michael referred to this move in his most recent post.
If you're a regular e-Literate reader, you know we have a macro thesis that the higher education sector is in the early stages of an evolution from having a philosophical commitment to student success toward having an operational commitment to student success. In other words, colleges and universities are starting to approach student success systematically, not as the natural by-product of hiring good faculty but as something that every student-facing aspect of the institution needs to be optimized for.
When you talk about student success, and great experiences, you have to go well beyond the official production of course content and grades and rosters. It doesn't just matter what grades students get, it matters whether each student is learning, whether and when they get frustrated, and how often they're engaging in the class. This gets to learning analytics and formative assessments and opportunities for students to quickly get help.
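To make the what/why distinction concrete in a classroom setting, here is a toy sketch - all student IDs, grades, and survey responses are invented - that joins operational data (grades) with experience data (survey feedback) so that the what and the why of each student's performance sit side by side:

```shell
# O-data: what happened (student id, current grade) - hypothetical values
cat > o_data.csv <<'EOF'
s001,72
s002,91
s003,58
EOF

# X-data: why it happened (student id, survey response) - hypothetical values
cat > x_data.csv <<'EOF'
s001,frustrated by pacing
s002,engaged and on track
s003,could not get help quickly
EOF

# Merge the two views on student id (column 1 of both sorted files)
join -t, o_data.csv x_data.csv
```

The join produces one row per student combining grade and feedback, e.g. `s003,58,could not get help quickly` - a trivial version of what an LMS instrumented for experience data would surface automatically.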
None of this is new, per se, and we've even seen attempts at alternative learning platforms to address this richer ecosystem. Consider the learning platforms designed initially to support competency-based education (CBE), such as Motivis Learning (spun out of Southern New Hampshire University's College for America) or Sagence Learning (formerly FlatWorld Knowledge). These systems, often called Learning Resource Management (LRM) systems, are designed to "see a holistic view" of students and "track student engagement". They are designed to achieve the stated goals of SAP: combining operational data and tools along with experience data and tools.
We'll get into more detail in future posts, but the category often labeled as adaptive courseware platforms are another example of next-generation systems that are designed to capture both operational data and experience data. These systems blur the boundaries between content and platforms and have the advantage of combining the two into a common design, which should allow deeper instrumentation of student activity during the learning process.
These examples get to the common question of whether the LMS will still exist in 10 years. The original LMS concept was designed around a course, not the learner, and most usage is administrative in nature rather than centered on learning activities. Shouldn't next-generation systems like LRMs overtake the LMS market as these companies expand beyond just CBE programs (see this post for context)? Well, the data do not show signs of this movement; in fact, the LMS market has been consolidating around just four solutions for institutional adoption - Canvas, D2L, Blackboard, and Moodle.
In the meantime, most LMS vendors have been adding functionality that seeks to provide views of the student experience, whether through extension of their platforms or strategic integrations with third-party tools: learning analytics and reporting capabilities, mastery learning additions, and federated sharing of student activity data.
One reason for the persistence of the primary LMS is that the LRM and courseware markets are not the ERP market. There are no SAPs in these worlds that already have ubiquitous usage. According to the Stratechery article "SAP is at the center of 77% of transactions worldwide". The LRM typically starts out in a new CBE program with dozens, or maybe hundreds of students.
What is dominant in higher education circles? The LMS. It is one of the few ed tech solutions used in a majority of courses across online, blended, and face-to-face modalities. What the market appears to be doing is waiting for solutions that build on top of the LMS, or even extend the LMS itself, rather than replacing the LMS. And one of the main reasons is that the LMS has already been accepted as the enterprise system for academic usage, with operational data and tools managing the what of courses. It may be that over time alternative learning platform models will build up enough market share to become a credible threat to change the broader LMS market, but the signs so far are not encouraging for those vendors.
Qualtrics proved to be so valuable ($8 billion) because it could augment the ubiquitous SAP. SurveyMonkey, by contrast, went public as a standalone company and is worth far less ($1.8 billion, still a respectable number).
Looking into the future, the LMS will have to provide useful analytics on student outcomes, learning, and experiences along the way, shifting from mostly running a course to guaranteeing great experiences for students. Whether this happens within the LMS of the future or through third-party augmentations of the LMS, and whether with the current top four vendors or a different set, is not known. But the move to combine operational and experience data and tools is a trend we should expect to see over the next decade, both in ERP systems like SAP and in the academic LMS market.
This presentation will take a look at the “Python for Everybody” series of courses on the Coursera platform. This course has impacted over 1.3 million students over the last five years. We will look at the history and goals of the course and how the course works to create a learning community. We will show how the free open educational resources (OERs) and book associated with the course have been used by teachers, students, and courses around the world to form a network of educational activities centered around Python. We will also cover briefly the Tsugi (www.tsugi.org) software that is used to build the learning assessments and distribute the OER materials in a way that enables maximum reusability of the materials for other teachers.
As you have been hearing on the Tsugi developer list and in my public presentations, Tsugi is going to build a new approach for adding tools that uses web services rather than a shared database connection across libraries written in multiple languages.
This means that new environments will be easier to build and support over the long-term and Tsugi will be able to provide a tool environment that will meet and exceed the privacy requirements of GDPR and similar privacy oriented measures.
The following efforts that I have put together for Tsugi over the years are now deprecated:
I would also recommend that there be no further investment in Tsugi PHP tools that depend on the Silex framework as it has been deprecated and I don’t expect to upgrade it.
If you have built Tsugi PHP tools that currently work within Tsugi-PHP, they won’t break; what is there currently will be maintained, but some of the deprecated bits will be frozen going forward. If you are building a new tool before the new development model is complete, simply build it in generic (non-framework) PHP like the tools in https://github.com/tsugitools
In time I expect to build new sample code in Laravel, Python, and Node that makes use of the Tsugi APIs and services.
These deprecations allow me to “clear the deck” to focus on the next generation and make sure no one starts new work in a Tsugi environment that will not continue to be supported.
I needed to selectively block some IPs from macOS, and this is how I did it. First, create a new anchor for the rules to go in. The file to create is /etc/pf.anchors/org.user.block.out and it should contain:
table <blocked-hosts> persist
block in quick from <blocked-hosts>
Then edit: /etc/pf.conf and append the lines:
anchor "org.user.block.out"
load anchor "org.user.block.out" from "/etc/pf.anchors/org.user.block.out"
Then to reload the firewalling rules run:
$ sudo pfctl -f /etc/pf.conf
and if you haven't got pf enabled you also need to enable it with:
$ sudo pfctl -e
Then you can manage the blocked IPs with these commands:
# Block some IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T add 188.8.131.52 184.108.40.206
# Remove all the blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T flush
# Remove a single IP
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T delete 220.127.116.11
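Two more standard pfctl invocations are handy for checking the current state; they assume the anchor and table names used above:

```shell
# List the IPs currently in the blocked-hosts table
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T show

# Confirm pf is enabled and view rule statistics
$ sudo pfctl -s info
```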
We would like to announce a new university course timetabling competition. Building on the success of the earlier timetabling competitions, the International Timetabling Competition 2019 aims to motivate further research on complex university course timetabling problems coming from practice.
For the meeting taking place January 16-19, 2019 at ETH Zurich, the Opencast community is looking for contributions related to the use and management of video in academia. With a focus on Opencast, course capture, and video management, we are particularly keen to hear from educational technologists and designers, instructors, or service providers working with video to support teaching and learning.
The latest version of Apereo's Open Academic Environment (OAE) project has just been released! Version 15.0.0 is codenamed Snowy Owl and it includes some changes (mostly under the hood) in order to pave the way for what's to come. Read the full changelog on GitHub.
We are actively seeking presenters who are knowledgeable about teaching with Sakai. You don’t need to be a technical expert to share your experiences! Submit your proposal today! The deadline for submissions is September 21st, 2018.
Save the Date: The Sakai Virtual Conference will take place entirely online on Wednesday, November 7th.
If you’re tasked with teaching an upcoming course that you’ve taught in the past with the University – there’s no need to rebuild everything from scratch – unless you want to.
Faculty teaching face-to-face (F2F) courses can benefit from the course content import process in Site Info. This process allows you to pull in all of your assignments, syllabus, gradebook, handouts, and other files associated with the course, as used in a previous offering of the course.
To do this, you need to be an instructor in both course sites (the former and the upcoming). Go to the upcoming course site, and select Site Info>Import from Site:
Next, select the kind of import you wish to perform. I typically suggest the replacement option, “I would like to replace my data.” On the next screen, select which course you’d like to pull content in FROM - be careful here to make sure you select the SOURCE of the content you’ll import. Then click Continue.
On the next screen, select the tools/areas of content you wish to import. Keep in mind it’s always a good idea to import Resources, because Assignments, Quizzes, Lessons, or Announcements may link to files stored there, and for those links to work properly the corresponding resources must be imported as well.
Finally, complete the import process and watch for the email notifying you that the import is complete. You can find more information about the process here.
Want to watch the whole process in real time? Take a gander here:
The experience of the Open Academic Environment Project (OAE) forms a significant practical contribution to the emerging vision of the ‘Next Generation Digital Learning Environment’, or NGDLE. Specifically, OAE contributes core collaboration tools and services that can be used in the context of a class, of a formal or informal group outside a class, and indeed of such a group outside an institution. This set of tools and services leverages academic infrastructure, such as Access Management Federations, or widely used commercial infrastructure for authentication, open APIs for popular third-party software (e.g. video conference) and open standards such as LTI and xAPI.
Beyond the LMS/VLE
OAE is widely used by staff in French higher education in the context of research and other inter-institutional collaboration. The project is now examining future directions which bring OAE closer to students – and to learning. This is driven by a groundswell among learners. There is strong anecdotal evidence that students in France are chafing at the constraints of the LMS/VLE. They are beginning to use social media – not necessarily with adequate data or other safeguards – to overcome the perceived limitations of the LMS/VLE. The core functionality of OAE – people forming groups to collaborate around content – provides a means of circumventing the LMS’s limitations without selling one’s soul – or one’s data – to the social media giants. OAE embodies key capabilities supporting social and unstructured learning, and indeed could be adapted and configured as a ‘student owned environment’: a safe space for sharing and discussion of ideas leading to organic group activities. The desires and requirements of students have not featured strongly in NGDLE conversations to this point: The OAE project, beginning with work in France, will explore student discontent with the LMS, and seek to work together with LMS solution providers and software communities to provide a richer and more engaging experience for learners.
Integration points and data flows
OAE has three principal objectives in this area:
OAE has a basic (uncertified) implementation of the IMS Global Learning Tools Interoperability (LTI) specification. This will be enriched to deepen integration with the LMS/VLE where it is required. OAE will not assume such integration is required without evidence. It will not drive such integration on the basis of technical feasibility, but by needs expressed by learners and educators.
Driven by the significant growth of usage of the Karuta ePortfolio software in France, OAE will explore how student-selected evidence of competency can easily be provided for Karuta, and what other connections might be required or desirable between the two systems.
Given the growth of interest in learning analytics in France and globally, OAE will become an exemplary emitter of learning analytics data and will act wherever possible to analyse each feature, new or old, from a designed-analytics perspective. Learning analytics data will flow from learning designs embedded in OAE, rather than simply being the accidental output that constitutes a technical log file.
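To make "emitting learning analytics data" concrete, here is a hedged sketch of the kind of xAPI statement such instrumentation might produce when a student shares a document with a group. The verb, IDs, and LRS endpoint are illustrative inventions, not actual OAE output:

```shell
# Hypothetical xAPI statement: a student shares a content item with a group.
# All identifiers below are invented for illustration.
cat > statement.json <<'EOF'
{
  "actor": { "mbox": "mailto:student@example.edu", "name": "Example Student" },
  "verb": { "id": "http://activitystrea.ms/schema/1.0/share", "display": { "en-US": "shared" } },
  "object": { "id": "https://oae.example.org/content/abc123", "objectType": "Activity" }
}
EOF

# Check the statement is well-formed JSON before sending
python3 -m json.tool statement.json > /dev/null && echo "statement OK"

# Sending it to a Learning Record Store would look roughly like this
# (endpoint is hypothetical):
# curl -X POST "https://lrs.example.org/xAPI/statements" \
#   -H "Content-Type: application/json" \
#   -H "X-Experience-API-Version: 1.0.3" \
#   --data @statement.json
```

The actor/verb/object triple is the core of every xAPI statement; a designed-analytics approach means choosing those verbs and objects deliberately per feature rather than scraping them from logs after the fact.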
OAE is continuing to develop and transform its sustainability model. The change is essentially from a model based primarily on financially-based contributions to that of a mixed mode community-based model, where financial contributions are encouraged alongside individual, institutional and organisational volunteered contributions of code, documentation and other non-code artefacts. There are two preconditions for accomplishing this. The first, which applies specifically to code, is clearing a layer of technical debt in order to more easily encourage and facilitate contributions around modern software frameworks and tools. OAE is committed to paying down this debt and encouraging contributions from developers outside the project.
The second is both more complex and more straightforward; straightforward to describe, but complex to realise. Put simply, answers to questions around wasteful duplication of resources in deploying software in education have fallen out of balance with reality. The pendulum has swung from “local” through “cloud first” to “cloud only”. Innovation around learning, which by its very nature often begins locally, is often stifled by the industrial-style massification of ‘the hosted LMS’ which emphasises conformity with a single model. As a result of this strategy, institutions have switched from software development and maintenance to contract management. In many cases, this means that they have tended to swap creative, problem-solving capability for an administrative capability. It is almost as though e-learning has entered a “Fordist” phase, with only the green shoots of LTI enabled niche applications and individual institutional initiatives providing hope of a rather more postmodern – and flexible - future.
OAE retains its desire and ambition to provide a scalable solution that remains “cloud ready”. The project believes, however, that the future is federated. Patchworks of juridical and legal frameworks across national and regional boundaries alone – particularly around privacy - should drive a reconsideration of “cloud only” as a strategy for institutions with global appetites. Institutions with such appetites – and there are few now which do not have them – will distribute, federate and firewall systems to work around legislative roadblocks, bumps in the road, and brick walls. OAE will, then, begin to consider and work on inter-host federation of content and other services. This will, of necessity, begin small. It will, however, remain the principled grit in the strategic oyster. As more partners join the project, OAE will start designing a federation architectural layer that will lay the foundation to a scenario where OAE instances dynamically exchange data among themselves in a seamless and efficient way according to a variety of use cases.
While there are some improvements to accessibility and some ongoing tweaks to improve color contrast, the upgrade to Sakai will not affect the overall appearance much. For mobile users, however, course navigation will be much improved.
Mobile view (Sakai 11/Post-Upgrade):
More detail will be distributed in the coming weeks and in those following the upgrade.