Planet Sakai

June 12, 2019

Michael Feldstein

Instructure is not “the New Blackboard”

Yesterday, I wrote about my experiences at the recent IMS Learning Impact Leadership Institute. Today, I'm going to write about a sentence that I heard uttered several times while at that summit. One that I've been expecting to hear for nearly a year now.

"Instructure is the new Blackboard."

It's not the first time I have heard that sentence, but it has reached critical mass. I have known it was coming since last Instructurecon. I wrote a blog post specifically to prepare for this entirely predictable moment. It has finally arrived.

And now it is time to explain why nobody should ever say "X is the new Blackboard" about any company ever again.

Predicting the inevitable

I have often characterized Instructure's first decade of customer relations as "gravity-defying." Once or twice, I have had people challenge me on the blog about that characterization. "Why are you rooting for them to fail?" they would ask. But I wasn't. I was merely observing that gravity exists, and nobody can defy the laws of physics forever. What goes up eventually comes down. And in ed tech, any fall is a fall from grace. As a rule, educators are distrustful of ed tech companies, are really distrustful of large ones, and are bitterly resentful of companies that disappoint them. At some point, Instructure would have to slip from abnormally good and revert to the mean. And when that happened, there would be blowback.

It was clear that moment had arrived at Instructurecon 2018 because Instructure was no longer able to pull off the impossible. Josh Coates keynotes should have been impossible. Josh is a smart, interesting, thoughtful guy. He is not a good keynote speaker. He rambles. He careens. He talks about what he cares about, and what he thinks you should care about, but doesn't give a lot of thought to what you think you need to hear from him. And yet, somehow, his Instructurecon keynotes came off as charming and fascinating. Nobody cared that he said not one damned thing about the topics that any other LMS company CEO would have been shredded by customers for failing to cover. He was like some funhouse mirror version of Mr. Rogers.

Until 2018, when his keynote was a disaster. It wasn't just that the quirky charm failed to work this time. Josh offended multiple groups in the audience. What goes up must eventually come down.

Then there was Josh's fireside chat with Dan Goldsmith, then the newly announced President. It was obvious to Phil and me that Dan was being introduced to the customers because he would be CEO within a year. Gravity-defying Instructure would have somehow magically helped the audience understand that they were being introduced to the line of succession while being reassured that things were steady-as-she-goes. But that would have been a near-impossible feat to pull off, and the Instructure of 2018 walked on the earth like you and me. So the audience reaction was, basically, "Uh, he seems nice, but why do I need to hear about how he was an Uber driver for a while?"

There were also smaller signs, and other facts from which one could draw inferences. There was the small but noticeable reduction in spending on the conference. There was the increasing pressure from the stock market for Instructure to grow their sales of Bridge to corporations. The dominos had already started falling, and the pattern was set for the next ones to fall in a certain order:

  • Josh would leave soon. Other executives and senior managers would likely leave as well. Some would go because they had had a good run and were ready to move on. Others would go because Dan would want to put his own team in place.
  • Instructure was built around Josh, who is an idiosyncratic leader. It was also built to sell to higher education. To retool it to run well under Dan's leadership style and to sell into higher education, K12, and the corporate market, many things would have to change internally. People would move around. Some people would leave. Others would arrive. Processes would change.
  • All of this would be distracting to people who are trying to do their jobs. Things inevitably would fall through the cracks. Some of those things would be important to some customers. Those customers would notice.
  • All of this uncertainty would inevitably create some trepidation among the employees, even if the new management handles the situation beautifully. The fact is that when people are no longer sure what their job is or how they can be successful at it, which is inevitable in this kind of environment of change, they tend to keep their heads down until they figure it out. They may not challenge decisions that they think are on the wrong track.
  • Meanwhile, some of the new senior management, crucially including the CEO, were new to education and wouldn’t know where the landmines are. And there are many, many landmines. It wouldn't matter how smart the new people are. It wouldn’t matter how decent and kind they are. Since they wouldn’t know where the landmines are, and their people would be likely too nervous or distracted to warn them, then sooner or later they would step on one.

In March of this year, Dan Goldsmith said this:

What's even more interesting and compelling is that we can take that information, correlate it across all sorts of universities, curricula, etc, and we can start making recommendations and suggestions to the student or instructor in how they can be more successful. Watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that. When we drive student success, we impact things like retention, we impact the productivity of the teachers, and it's a huge opportunity. That's just one small example.

Our DIG initiative, it is first and foremost a platform for ML and AI, and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions. [snip]

We've gone through enough cycles thus far to have demonstrable results around improving outcomes with students and improving student success. [snip] I hope to have something at least in beta by the end of this year.

That quote is pulled from Phil's contemporaneous post on the statement, where he then goes on to reference the "robot tutor in the sky." But Dan probably wouldn't have gotten that reference, because he wasn't in the industry at the time that former Knewton CEO Jose Ferreira made it. As a result, his own statement, which was predictably explosive to Phil and me, probably seemed somewhere between anodyne and exciting to him.

So. You have an ed tech company that has spectacularly over-performed for a decade. Their performance slips, not to horror-show levels, but to levels where some customers are noticeably unhappy. The company leadership makes a tone-deaf statement or two about unreleased products that we really don't know much about.

And that is all it takes to become a fallen angel in higher education ed tech. There is likely no way that employees at Instructure who have only ever worked at that one ed tech company could have known that to be true in advance of having experienced it. There is likely no way that executives coming in from outside of ed tech could have known that to be true without having experienced it either. Because it doesn't make sense. But it is true. Instructure's brand was destined to crash hard precisely because it was so good. That's how it works in ed tech. Cynics are disappointed optimists, and we have a lot of those.

But why, specifically, "the new Blackboard?" It's not the first time I've heard that phrase used about a company. And really, it's unfair to both Instructure and Blackboard. In fact, when I wrote in my last post about how some companies that used to be barriers to interoperability work now are among its most important champions, I was specifically thinking of Blackboard. The complaints I've had about them in recent years have been related to (1) trying to spin their financial challenges and (2) struggling to execute well during an extraordinarily tough transition. In other words, totally normal company stuff. Today's Blackboard may not be perfect, but it is basically a decent company. In the moral sense.

This sector has a lingering revulsion for a version of a company that ceased to exist in 2012 at the latest, and yet that version continues to loom as a shadow over the entire vendor space, creating a sense of ever-present subconscious dread. It's like having a lifelong fear of clowns from something that happened at a circus when you were three years old but that you can no longer remember.

It is time to remember.

The personal as parable

As I described in a recent post, my public debates with Blackboard over their patent assertion are something of an origin story for e-Literate. There is a lot about the story that I'm going to tell now, some of it for the first time on the blog, that became personal because certain parties at Blackboard chose to make it personal. Throughout that period, and through my writing since, I have tried to keep e-Literate professional and focused only on details that are worth sharing insofar as they advance the public good. I have not always succeeded in that aspiration, but it is important to me to try.

Today I choose to share some actions that were taken against me because I think it is important to understand how truly bad actors behave. These are not the kinds of actions that either Instructure or today's Blackboard would take. If the sector is going to improve, then we need to get better at distinguishing between bad behavior, which can have a variety of causes and can be corrected through engagement, and truly bad actors, with whom there can be no negotiating. In my experience, truly bad actors are rare.

So I'm going to share some personal experiences later in this blog, but I'm going to try to keep this as minimally personal as I can. When possible, I'm going to avoid naming names, even though some of you will know who I'm talking about. I will share some details but not others. What I ask you to think about as you read my portion of the story is not what happened to me or who did what but how what happened then is qualitatively different from what is happening now.

The old Blackboard

The period of Blackboard's history that I am talking about runs from roughly 1999 to roughly 2012 (or 2009, depending on how you mark the end of the era). During this period, the company developed a carefully crafted and highly successful business strategy. First, they were pioneers in the software rental business. You didn't own Blackboard software, even if you ran it on your own servers. You paid an annual license fee. I can't say that Blackboard invented this strategy—I'm not sure who did; it might have been Oracle—but Blackboard certainly drove it deep into the education sector.

This could be a handsomely profitable business model, particularly if they could hold market share and maintain pricing power. Which brings us to the second leg of their strategy. Blackboard sought to dominate ed tech product categories by buying up every vendor in the category as soon as it reached significant market share. Here's how that looked in the LMS product category:

  • In 2000, they acquired MadDuck Technologies, which made Web Course in a Box
  • In 2002, it was George Washington University's Prometheus
  • In 2006, WebCT (which had spun out of the University of British Columbia but had been independent for a while)
  • In 2009, ANGEL Learning from IUPUI
  • In 2012, after reportedly failing to buy Moodle Pty, the company bought Moodlerooms and NetSpot, the biggest Moodle partners in the US and Australia respectively

The reason that Phil's famous LMS market share graphic is called the "squid graph" is because Blackboard formed the body by continuously gobbling up competitors as they formed.

In every case except Moodle, Blackboard would kill off the acquired platform after acquisition. They weren't really looking to acquire technology. To the contrary; they didn't want the expense of maintaining multiple platforms and showed almost no interest in any of the acquired technical innovation until after the ANGEL acquisition, when Ray Henderson started driving some of the product strategy for them. Rather, Blackboard was interested in acquiring customers. They knew that some of those customers would leave—in fact, some of those customers had previously left Blackboard for the very platform that was now being acquired—but that was OK. Because by keeping competition low and competitors under a certain size, Blackboard was really protecting its pricing power. LMS license fees were, not coincidentally, significantly more expensive during this period than they are today.

There was one company—Desire2Learn—that represented an increasing threat to Blackboard but would not sell. So Blackboard tried a different tactic, which we'll come to a little later in this narrative.

Blackboard tried a similar trick of domination through acquisition, somewhat less successfully, in the web conference space by simultaneously buying Wimba and Elluminate, which were two of the largest education-specific web conferencing platforms at the time. If there hadn't been an explosion of cheap and excellent generic web conferencing solutions soon afterward, it might have worked.

Blackboard did not really consider itself a software development company during this period and was not afraid to say so explicitly to customers. I was told this by a Blackboard representative, and I know of one ePortfolio company that was told the same thing. They started up specifically because Blackboard's response when they asked, as university customers, whether Blackboard would build an ePortfolio was, "We don't really develop software, but if you know of any good ePortfolio companies, we might consider acquiring one."

Blackboard did have an internal product development strategy of sorts, albeit an anemic one. Companies understand that it's easier (and cheaper) to sell a second product to an existing customer than a first product to a new customer. So they often develop a portfolio of products and services to "cross-sell" to those existing clients. In and of itself, there's absolutely nothing wrong with that. And like many companies, Blackboard had a formula for how many products they needed to cross-sell in order to hit their financial goals. Again, this is pretty standard stuff. The objectionable part was the way in which that formula drove the product road map.

The quintessential example of this was Blackboard Community. Keep in mind that the LMS originated when universities started taking generic groupware (like Lotus Notes, for example) and adding education-specific features like a grade book and a homework drop box. Blackboard's idea was to strip those education-specific features back out of the product and license it separately to use for clubs, committees, and so on. I'm sure it wasn't quite that simple from a development perspective, but it wasn't very far off. Take the product you've already sold to the customer, strip out some features, integrate the stripped-down version with the original version—badly—and sell it to the customer a second time.

Blackboard also had epically bad customer service. Far worse than any of the LMS vendors today. To be clear, there were always good people at the company (on the development teams, in customer service, and elsewhere) who worked desperately hard to serve their customers well. There are always good people at sufficiently large companies. But Blackboard's processes were not optimized for customer service, and it did not invest in customer service. One can only conclude that customer service was not a priority for executive management, whatever the line employees may have felt about it.

The patent suit

As I mentioned earlier, Desire2Learn was becoming a thorn in Blackboard's side. But Blackboard's management team was developing a legal strategy that they thought would complement their acquisition strategy, especially in cases where pesky entrepreneurs would not sell. They started filing for patents. Now, software patents are an unfortunate reality in our world. I don't like them, but since they exist, I understand why some companies feel the need to have them. That said, Blackboard's intentions were neither for defensive purposes nor for demonstrating durable value to investors. They intended to assert their patents against other companies.

In industries like pharmaceuticals or electronics, where innovation takes considerable investment up front but yields significant, long-term profits afterward, the economics can support patent assertion. There is enough money flowing in the system that there is at least a plausible argument that paying the inventor a licensing fee incentivizes investment in innovation. But education is not that sort of market, and the LMS product category in particular has thin margins. If new LMS vendors had to pay patent royalties, there likely wouldn't have been new LMS vendors.

Blackboard received a patent for LMS functionality, the precise definition of which I will get to momentarily. They immediately asserted that patent against Desire2Learn. They probably expected the company to fold and agree to either pay the royalty or sell. Companies usually don't fight patents. If Desire2Learn had folded, that would have given Blackboard's patent added legal weight. And Blackboard had filed other patents as well. There was every indication that they were attempting to create what is called a "patent thicket," effectively making it impossible to bring a new product to market without running into one or another of their patents. If they had succeeded, they would have owned the LMS market forever.

They would have killed the LMS market.

And what was Blackboard's first patent? What was their supposed innovation?

A system where a user could log into one course as an instructor and another as a student.

That's it.
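To underscore how thin that claim is: in modern terms, the "invention" amounts to scoping a user's role per course rather than per account. A hypothetical sketch (illustrative only, obviously not Blackboard's actual implementation; every name here is made up) captures the whole idea in a few lines:

```python
# Hypothetical sketch of the patented idea: a role is attached to a
# (user, course) pair instead of to the user account as a whole.
enrollments = {
    ("pat", "BIO-101"): "instructor",
    ("pat", "EDU-500"): "student",
}

def role_for(user: str, course: str) -> str:
    """Return the user's role in a given course, or "none" if not enrolled."""
    return enrollments.get((user, course), "none")
```

The same user ("pat") is an instructor in one course and a student in another; that per-course lookup is essentially the entire claimed scope.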


When I learned enough about how to read a patent to figure that out, I couldn't believe it. And this is where Blackboard started fighting with me. But it was all non-denial denials. There is a moment in the legal process of a patent fight where the court determines the scope of the patent. Before that, legally speaking, the patent is undefined. So when Blackboard pushed back against my posts, all they were really saying was that the court hadn't spoken yet.

When the two companies faced each other in court and argued for their definition of the scope of the patent, what did Blackboard argue was the scope of their patent?

A system where a user could log into one course as an instructor and another as a student.

Blackboard didn't like me writing stuff like that. They—where "they" means specific executives who I choose not to mention by name, rather than some hive mind of every human working at the company—did not like it when I called them out on it in advance. And they really did not like it when I pointed out afterward that they had been misleading at best in their previous statements about what they believed the scope of the patent to be.

What concerned me was that their repeatedly calling attention to my writing by arguing with me in public was irrational. I was relatively unknown until they started responding to me. This kind of regular unforced error was out of character. It was telling me...something. What was it telling me? The most logical explanation was that I had gotten under their skin. I had cause to suspect that they were the kind of people who did not have a high tolerance for being challenged. That could be dangerous.

As long as I was working at SUNY, I was protected. They may have been irrationally focused on me, but they weren't stupid. They were not about to attack a university employee. However, once I became an Oracle employee, I was concerned that things would get ugly.

I was right.

What ugly looks like

When I was offered the job at Oracle, I had a conversation with my prospective manager about the Blackboard situation. I told him that I thought the patent assertion was a threat to the health of the sector, that I did not intend to stop writing about it, and that it was possible that Blackboard would come after me once I was no longer working for a university. He replied that he respected my right to continue writing as long as I made clear on the blog that my opinions were my own—which I did, scrupulously—but that if the politics reached above a certain level in the organization, then his ability to protect me would have its limits. We agreed that it would be unfortunate if that were to happen, we each understood and respected the other's position, and we agreed to give it a go. Nothing ventured and all that.

It didn't take long. I was at a Blackboard reception at EDUCAUSE when one of the executives approached me and started a conversation about my posts. "You know, I wouldn't complain to Oracle about it. I would never do that. I respect your independence. But this isn't good for the relationship between our two companies."

That's a nice shiny new job you got there, kid. It would be a shame if anything were to, you know. Happen to it.

I kept writing.

Not many months after that, the same executive, in the presence of my manager, sat down next to my colleague and started complaining to her about me. Repeatedly. Incessantly. To the point where my manager had to physically interpose himself between the executive and my colleague in order to protect her from what he perceived to be harassment. At which point, the executive started complaining to my manager about me.

It had the opposite of the intended effect. My manager was very protective of his people.

I kept writing.

Not all of the writing was negative, by the way. For example, when Blackboard's Chief Legal Counsel showed up at a Sakai conference to debate the Software Freedom Law Center's Eben Moglen on the merits of the patent, I argued both that Blackboard's representative had been unfairly treated and that it was important to continue to try to work with the company constructively on the larger patent problem if at all possible.

Nevertheless, Blackboard continued what I can only describe as a widening and escalating campaign to convince my employer to either silence me or remove me. My employer was specifically told that I was unwelcome at Blackboard-hosted events. The message was clear: Feldstein is harming Oracle's relationship with Blackboard. And if that weren't clear enough, I started being approached by random Oracle employees. The conversation would go like this:

Do you know [Blackboard employee name redacted]?

Yeah, I know him. Why?

Well I don't, but he just came up to me at BbWorld and started complaining to me about how you're harming Blackboard's relationship with Oracle.

That same Blackboard employee accosted me at an IMS meeting, literally yelling at me, telling me that he had almost convinced his bosses to adopt the new version of the LIS standard we were developing—the one that was going to save universities time and money by getting rid of the need to manually monitor the integration between the registrar software and the LMS—but they killed it when they read my latest blog post.

A Blackboard executive all but confirmed this in a later meeting. He looked me in the eye, with my manager present, and asked, "Why should we adopt Oracle's standard?"

"Oracle's standard."

I kept writing.

Next, the Blackboard executive decided to go up a few levels in the food chain. He told my manager's manager's manager that he was having Blackboard customers coming into his office in response to my blog posts and asking why Oracle hates Blackboard. This intervention too had the opposite of the intended effect. My manager's manager's manager did not believe for one second that people were confusing my personal blog posts with Oracle's official position on Blackboard.

After all of that, and some more that I'm not going to write about here, Blackboard lost the patent suit. They took a $3.3 million write-down for it. But that's nothing compared to the actual loss, which the company is still paying today. If people are still using the sentence "X is the new Blackboard," do you think there is any way that Blackboard itself is not still paying for the damage done by management that left the company seven years ago? Many people in this sector still hate that company with a fiery passion, and some of them don't even know why anymore.

Now, ask yourself this: Does what I just described bear any relation to the behavior of any company that you know of in ed tech today? Instructure? Blackboard? Anyone? I can think of a few that I would characterize as on the spectrum of bad actors. All of them are in immature product categories, where there is less transparency, more hype, and therefore more room for con artists. Jose Ferreira from Knewton was a bad actor in that he harmed our ability to have a productive discussion about the utility of adaptive learning or machine learning through his unsubstantiated hype (and the fundraising he did off it). But he didn't do anything I'm aware of that rose to the level of anything like what I've described here. The robot tutor hype scam, and the new variation where vendors start claiming that all their competitors are robot tutor scam artists, are the main dangers at the moment. Anything AI-related still has some danger in it, as does the OPM space. But the bad actors I can think of are mostly little league compared to the Hall of Fame bad actors at old Blackboard.

Instructure is the new Instructure

Organizations change. Instructure changes. Blackboard changes. Your university changes. Your department changes. Change happens. Change is hard. Mistakes happen during the stress of transition. And what comes out the other side is not always predictable. But it often can be influenced.

When I wrote my post in the wake of Instructurecon 2018, I knew that it might not make a ton of sense in the moment to either customers or employees of the company. So a lot of it was written in a way that would hopefully be memorable...I don't know...maybe eleven months later, when the story had played out enough that we could have a real conversation about it.

Here's the important bit:

Instructure's unbelievably long age of innocence may finally be coming to an end. That doesn't mean that it is going to fail or to become the next ed tech company that everybody hates. It does mean that it is beginning to go through some changes, that some of those changes will be awkward and hard, and that the company will eventually grow up to become somewhat different than it has been. Not necessarily better or worse. But necessarily different.

So maybe you don't like some of the things that they've been doing (or not doing) lately. Now what? You could try engaging with them. OK, maybe you tried that and didn't get the results you wanted. Remember that extended metaphor about the awkward teenage years in my original post? I used to teach eighth graders, and I've raised kids of my own. One talk usually doesn't do it during the challenging periods. Not because they don't care about you, but because it's just really hard being a teenager. You're overloaded. Everything is changing at once, and you're just trying to get through the day. If a teenager responds badly in the moment, it doesn't mean that they're a bad person, or even that they're not listening. It usually means that they're dealing with more than just you.

A company isn't a teenager; it's a group of adults who you pay to do things for you. Nevertheless, it is also a group of humans who can experience change, individually and collectively, and who can have all the reactions that humans do to change and stress and all that stuff.

These organizational transitions don't finish up overnight. Dan's been CEO for less than a year now. Next month will be his first Instructurecon as CEO. He's still in the steep part of his learning curve. Will he be a good CEO? I don't know. I barely know the guy. You probably barely know the guy too. His employees are starting to get to know the guy by now. They're figuring it out.

Maybe you feel like you can't engage with Instructure because other people in your university "own" that relationship, and you're relatively powerless.

Well, that's a different sort of problem, isn't it? I've written before about how bad LMS vendor behavior and bad LMS product development are actively driven by bad university LMS procurement processes. These internal conversations are hard ones to have, and sometimes the people who see the problems are not in a position to force the conversation. But ultimately, the vendors have to respond to whatever sorts of interactions the universities invite them to have (or don't). It's worth taking some time to understand why the vendors are thinking and acting the way they are so that you can find some productive ways into the conversation.

And by the way, those vendors do read what you write. Heck, we live in an era where we pick the President of the United States on Twitter and Facebook. You think these companies don't read your posts? They damned well do. They may be constrained in how they respond, but they do pay attention. How do you think I do what I do? I'm just a dude with a blog. How did I get myself into all the trouble you just read about? By being a dude with a blog. Turns out that using your voice can be a powerful thing, particularly if you think carefully about who you want to hear you and how you want them to react. Yes, I do beat on vendors in public sometimes. But I always do it with a specific intention to make something happen. It may not be obvious in the moment, but it is always there. You can talk to these vendors and be heard, particularly if you have that intent and if they think that you are also listening.

If you want your vendors to be better, it's not that different from trying to get your kids to be better, or any humans with whom you want to have a genuine relationship. That was really the point of my original blog post. Talk to them, listen to them, engage with them. It doesn't mean you have to let them walk all over you, but it does mean you shouldn't assume you understand what they're thinking or that they are a force of nature that cannot be influenced. If you're reading e-Literate, then you're probably an educator of some sort. Be an educator. Use that.

Instructure is changing. I don't know what they're changing into yet. You don't know either. I would bet money that Instructure doesn't know yet. And this isn't really just about Instructure. They are the case study of the moment. The point is, vendors make their money by responding to the conditions created by the university ecosystem. That's you and your colleagues. If you want better vendors, then create the conditions under which they can succeed by behaving in the ways in which you would prefer them to behave. That's hard work, and it may involve some family therapy inside your home institution. But the alternative is living in perpetual fear of clowns. And that, my friends, is no way to live.

The post Instructure is not “the New Blackboard” appeared first on e-Literate.

by Michael Feldstein at June 12, 2019 08:27 PM

Apereo Foundation

June 11, 2019

Michael Feldstein

The IMS at an Inflection Point

A few weeks back, I had the pleasure of attending the IMS Learning Impact Leadership Institute (LILI). For those of you who aren't familiar with it, IMS is the major learning application technical interoperability organization for higher education and K12 (and is making some forays into the corporate training and development world as well). They're behind specifications like LIS, which lets your registrar software automagically populate your LMS course shell with students, and LTI, which lets you plug in many different learning applications. (I'll have a lot more to say about LTI later in this post.)
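To make that plumbing a little more concrete, here is a rough sketch of how an LTI 1.1 tool launch was authorized: the LMS sends a signed form POST to the tool, using an OAuth 1.0a HMAC-SHA1 signature over the launch parameters. This is a simplified illustration, not a certified implementation; the URL, key, secret, and launch values below are invented, and newer versions of LTI replace this scheme with OAuth 2.0 and JWTs.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_lti_launch(url, params, consumer_secret, http_method="POST"):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for an LTI 1.1 launch.

    `params` holds the launch fields plus the oauth_* bookkeeping fields,
    but NOT oauth_signature, which this function produces.
    """
    def enc(s):
        return quote(str(s), safe="")  # RFC 3986 percent-encoding

    # 1. Normalize parameters: percent-encode, sort, join as k=v with '&'
    pairs = sorted((enc(k), enc(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Signature base string: METHOD & encoded URL & encoded params
    base_string = "&".join([http_method.upper(), enc(url), enc(param_str)])
    # 3. Signing key: consumer secret + '&' + token secret (empty in LTI)
    key = enc(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Invented example values for illustration only
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-unit-3",
    "user_id": "student-42",
    "roles": "Learner",
    "oauth_consumer_key": "demo-key",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1560000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
signature = sign_lti_launch("https://tool.example.com/launch", launch, "demo-secret")
```

The tool on the receiving end recomputes the same signature with its copy of the shared secret and rejects the launch if the values don't match; that shared-secret handshake is what lets a registrar-fed LMS course shell hand a student off to a third-party tool without a separate login.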

While you may not pay much attention to them if you aren't a technical person, they have been and will continue to be vital to creating the kind of infrastructure necessary to support more and better teaching and learning affordances in our educational technology. As I'll describe in this post, I think the nature of that role is likely to evolve somewhat as the interoperability needs of the sector are beginning to evolve.

The IMS is very healthy

I'm happy to report that the IMS appears to be thriving by any obvious measure. The conference was well attended. It attracted a remarkably diverse group of people for an event hosted by an organization that could easily be perceived as techie-only. Furthermore, the attendees seemed very engaged and the discussions were lively.

On more objective measures, the organization's annual report bears out this impression of strong engagement. They have strong international representation across a range of organization types.

From the IMS Global 2018 Annual Report

Whether your measure is membership, product certifications, or financial health, the IMS is setting records.

From the IMS Global 2018 Annual Report

This state of affairs is even more remarkable given that, 13 years ago, there was some question as to whether the IMS was financially sustainable.

From the IMS Global 2018 Annual Report

If you look carefully at this graph, you'll see three distinct periods of improvement: 2005-2008, 2009-2013, and 2013-2018. Based on what I know about the state of the organization at the time, the first period can most plausibly be attributed to immediate changes implemented by Rob Abel, who took over the reins of the organization in February of 2006 and likely saved it from extinction. Likewise, the magnitude of growth in the second period is consistent with that of a healthy membership organization that has been put back on track.

But that third period is different. That's not normal growth. That's hockey stick growth.

I am not a San Franciscan. By and large, I do not believe in heroic entrepreneur geniuses who change the world through sheer force of will. Whenever I see that kind of an upward trend, I look for a systemic change that enabled a leader or organization—through insight, luck, or both—to catch an updraft.

There is no doubt in my mind that the IMS has capitalized on some major updrafts over the last decade. That is an observation, not a criticism. That said, the winds are changing, in part because the IMS has helped move the sector through an important period of evolution and is now helping to usher in the next one. That will raise some new challenges that the IMS is certainly healthy enough to take on but will likely require them to develop a few new tricks.

The world of 2005

In the first year of the chart above, when the IMS was in danger of dying, there was very little in the way of ed tech to interoperate. There were LMSs and registrar systems (a.k.a. SISs). Those were the two main systems that had to talk to each other. And they did, after a fashion. There was an IMS standard at the time, but it wasn't a very good one. The result was that, even with the standard, there was a person in each college or university IT department whose job it was to manage the integration process, keep it running, fix it when it broke, and so on. This was not an occasional tweak, but a continual effort that ran from the first day of class registration through the last day of add/drop. If you picture an old-timey railroad engineer shoveling coal into the engine to keep it running and checking the pressure gauge every ten minutes to make sure it didn't blow up, you wouldn't be too far off. As for reporting final grades from the LMS's electronic grade book automatically to the SIS's electronic final grade record, well, forget it.

If you ignore some of the older content-oriented specifications, like QTI for test questions and Common Cartridge for importing static course content, then that was pretty much it in terms of application-to-application interoperability. Once you were inside the LMS, it was basically a bare-bones box with not much you could add. Today, the IMS lists 276 officially certified products that one can plug into any LMS (or other LTI-compliant consumer), from Academic ASAP to Xinics Commons. I am certain that is a substantial undercount of the number of LTI-compatible applications, since not all compatible product makers get officially certified. In 2005, there were zero, because LTI didn't exist. There were LMS-specific extensions. Blackboard, for example, had Building Blocks. But with a few exceptions, most weren't very elaborate or interesting.

My personal experience at the time was working at SUNY Systems Administration and running a search committee for an LMS that could be centrally hosted—preferably on a single instance—and potentially support all 64 campuses. For those who aren't familiar with it, SUNY is a highly diverse system, with everything from rural (and urban) community colleges to R1s to everything in between, with some specialty schools thrown into the mix like the Fashion Institute of Technology, a medical school or two, an ophthalmology school, and so on. Both the pedagogical needs and the on-campus support capabilities across the system were (and presumably still are) incredibly diverse. There simply was not any existing LMS at the time, with or without proprietary extensions, that could meet such a diverse set of needs across the system. We saw no signs that this state of affairs was changing at a pace that was visible to the naked eye, and relatively few signs that it was even widely recognized as a problem.

To be honest, I came to the realization of the need fairly slowly myself, one conversation at a time. A couple of art history professors dragged me excitedly to Columbia University to see an open source image annotation tool, only to be disappointed when they discovered that the tool was developed to teach clinical histology, which uses image annotation to teach in an entirely different way than is typically employed in art history classes. An astronomy professor at a community college on the far tip of Long Island, where there was relatively little light pollution, wanted to give every astronomy student in SUNY remote access to his telescope if only we could figure out how to get it to talk to the LMS. Anyone who has either taught or been an instructional designer for a few wildly different subjects has a leg up on this insight (and I had done both), but even so, there are levels of understanding. The art history/histology thing definitely took me by surprise.

A colleague and I, in an effort to raise awareness about the problem, wrote an article about the need for "tinkerable" learning environments in eLearn Magazine. But there were very few models at the time, even in the consumer world. The first iPhone wasn't released until 2007. The first practically usable iPhone wasn't released until 2008. (And we now know that even Steve Jobs was secretly skeptical that apps on a phone were a good idea.) It is a sign of just how impoverished our world of examples was in January of 2006 that the best we could think of to show what a world of learning apps could be like was Google Maps:

There are several different ways that software can be designed for extensibility. One of the most common is for developers to provide a set of application programming interfaces, or APIs, which other developers can use to hook into their own software. For example, Blackboard provides a set of APIs for building extensions that they call "Building Blocks." The company lists about 70 such blocks that have been developed for Blackboard 6 over the several years that the product version has been in existence. That sounds like a lot, doesn't it? On the other hand, in the first five months after Google made the APIs available for Google Maps, at least ten times that many extensions were created for the new tool. Google doesn't formally track the number of extensions that people create using their APIs, but Mike Pegg, author of the Google Maps Mania weblog, estimates that 800-900 English-language extensions, or "mash-ups," with a "usable, polished Google Maps implementation" have been developed during that time—with a growth rate continuing at about 1,000 new applications being developed every six months. According to Pegg, "There are about five sites out there that facilitate users to create a map by taking out an account. These sites include—each of these sites probably has hundreds of maps for which just one key has been registered at Google." (Google requires people who are extending their application to register for free software "keys.") Perhaps for this reason, Chris DiBona, Google's own Open Source Program Manager, has heard estimates that are much higher. "I've seen speculation that there are hundreds or thousands," says DiBona, noting that estimates can vary widely depending on how you count.

Nevertheless, even the most conservative estimate of Google Maps mash-ups is higher than the total number of extensions that exist for any mainstream LMS by an order of magnitude.

There seemed little hope for this kind of growth any time in the foreseeable future. By early 2007, having failed to convince SUNY to use its institutional weight to push interoperability forward, I had a new job working at Oracle and was representing them on a specification development committee at the IMS. It was hard, which I didn't mind, but it was also depressing. There was little incentive for the small number of LMS and SIS vendors who dominated specification development at that time to do anything ambitious. To the contrary, the market was so anemic that the dominant vendors had every reason to maintain their dominance by resisting interoperability. Every step forward represented an internal battle within those companies between the obvious benefit of a competitive moat and the less obvious enlightened self-interest of doing something good for customers. This is simply not the kind of environment in which interoperability standards grow and thrive.

And yet, despite the fact that it certainly didn't feel like it, change was in the air.

Glaciers are slow, but they reshape the planet

For starters, there was the LMS, which was both a change agent in and of itself and an indicator of deeper changes in the institutions that were adopting them. EDUCAUSE data shows that the US LMS market became saturated some time roughly around 2003. At that time, Blackboard and WebCT had the major leads as #1 and #2, respectively. The dynamic for the next 10 years was a seesaw, with new competitors rising and Blackboard buying and killing them off as fast as it could. Take a look at the period between 2003 and 2013 in Phil's squid graph:1

It was absolutely vicious.

None of this would materially affect the standards making process inside the IMS until, first, Blackboard's practice of continually buying up market share eventually failed (thus allowing an actual market with actual market pressures to form) and, second, the management team that came up with this decidedly anti-competitive strategy left to spend more time with their respective families. (I'll have more to say about Heckle and Jeckle and their lasting impact on market perceptions in a future post.)

But the important dynamic during this period is that customers kept trying to leave Blackboard (even if they found themselves being reacquired shortly thereafter) and other companies kept trying to provide better alternatives. So even though we didn't have a functioning, competitive market that could incentivize interoperability, and even though it certainly didn't feel like we had one, some of the preconditions for one were being established.

Meanwhile, online education growth was being driven by no fewer than three different vectors. First, for-profit providers were hitting their stride. By 2005, the University of Phoenix alone was at over 400,000 enrollments. Second, public access-oriented institutions, many of which had been seeded a decade earlier with grants from the Sloan Foundation, were starting to show impressive growth as well. A couple were getting particular attention. UMUC, for example, may not have had over 400,000 online enrollments in 2005, but they had well over 40,000, which is enough to get the attention of anyone in charge of an access-oriented public university's budget. More quietly, many smaller schools were having online success that was proportional to their sizes and missions. For example, when I arrived at SUNY in 2005, they had a handful of community colleges that had self-sustaining online degree programs that supported both the missions and the budget of the campuses. Many more were offering individual courses and partial degrees in order to increase access for students. (Most of New York is rural, after all.)

The third driver of online education, which is more tightly intertwined with the first two than most people realize, is that Online Program Management companies (OPMs) were taking off. The early pioneers, like Deltak (now Wiley Education Services), Embanet, Compass Education (now both subsumed into Pearson), and Orbis (recently acquired by Grand Canyon University) had proved out the model. The second wave was coming. Academic Partnerships and 2Tor (now 2U) were both founded in 2008. Altius Education came in 2009. In 2010, Learning House (now also owned by Wiley) was founded.

Counting online enrollments is a notoriously slippery business, but this chart from the Babson survey is highly suggestive and accurate enough for our purpose:

If you're a campus leader and thirty percent of your students are taking at least one online class, that becomes hard for you to ignore. Uptime becomes far more important. Quality of user experience becomes far more important. Educational affordances become far more important. Obviously, thirty percent is an average, and one that is highly unevenly distributed across segments. But it's significant enough to be market-changing.

And the market did change. In a number of ways, the biggest one being that it became an actual, functioning market (or at least as close to one as we've gotten in this space).

When glaciers recede

Let's revisit that second growth period in the IMS graph—2008 to 2013—and talk about what was happening in the world during that period. For starters, online continued its rocket ride. The for-profits peaked in 2010 at roughly 2 million enrollments (before beginning their spectacular downward spiral shortly thereafter). Not-for-profits (and odd mostly-not-for-profit hybrids) ramped up the competition. ASU launched its first online 4-year degree in 2006. SNHU started a new online unit in 2009. WGU expanded into Indiana in 2010, which was the same year that Embanet merged with Compass Knowledge and was promptly bought by Pearson. (Wiley acquired Deltak two years later.)

Once again, the more online students you have, the less you are able to tolerate downtime, a poor user interface that drives down productivity, or generic course shells that make it hard to teach students what they need to learn in the ways in which they need to learn. Instructure was founded in 2008. They emphasized a few distinctions from their competitors out of the gate. The first was their native multitenant cloud architecture. Reduced downtime? Check. The second was a strong emphasis on usability. The big feature they touted, and their early runaway hit, was Speed Grader. Increased productivity? Check.

Instructure had found their updraft to give them their hockey stick growth.

But they also emphasized that they were going to be a learning platform. They weren't going to build out every tool imaginable. Instead, they were going to build a platform and encourage others to build the specialized tools that teachers and students need. And they would aggressively encourage the development and usage of standards to do so. On the one hand, this fit from a cultural perspective. Instructure was more like a Silicon Valley company than its competitors, and platforms were hot in the Valley. On the other hand, it was still a little weird for the education space. There still weren't good interoperability standards for what they wanted to do. There still hadn't been an explosion of good learning tools. This is one of those situations where it's hard to tell how much of their success was prescience and how much of it was luck that higher ed caught up with their cultural inclination at that exact moment.


The very same year that Brian Whitmer and Devlin Daley founded Instructure, Chuck Severance and Marc Alier were mentoring Jordi Piguillem on a Google Summer of Code project that would become the initial implementation of LTI. In 2010, the same year that Instructure scored its first major win with the Utah Education Network, IMS Global released the final specification for LTI v1.0. All this time that the market had felt like it had been standing still, it had actually been iterating. We just hadn't been experiencing the benefits of it. Chuck, who had been thinking about interoperability in part through his work on Sakai, had been tinkering. Students like Brian and Devlin, who had been frustrated with their LMS, had been tinkering. The IMS, which actually had a precursor specification before LTI, had been tinkering. While conditions hadn't become visible on the surface of the glacier, way down, a mile below, the topology of the land was changing.

Meanwhile in Arizona, in 2009, the very first ASU+GSV summit was held. I admit that I have had writer's block regarding this particular conference the last few years. It has gotten so big that it's hard to know how to think about it, much less how to sum it up. In 2009, it was an idea. What if a university and a company that facilitates start-ups (in multiple ways) got together to encourage ed tech companies to work more effectively with universities? That's my retrospective interpretation of the original vision. I wasn't at many of those early conferences and I certainly wasn't an insider. It was hard for me, with my particular background, to know what to make of it then and even harder now.

But something clicked for me this year when it turned out that IMS LILI was held at the same hotel that the ASU+GSV summit had been at a couple of months earlier. How does the IMS get to 523 product certifications and $8 million in the bank? A lot of things have to go right for that to happen, but for starters, there have to be 523 products to certify and lots of companies that can afford to pay certification fees. That economy simply did not exist in 2008. Without it, there would be no updraft to ride and consequently no hockey stick growth. ASU+GSV's phenomenal growth, and the ecosystem that it enabled, was another major factor influencing what I saw at IMS LILI this month.

There is a lot of chicken-and-egg here. LTI made a lot of this possible, and the success LTI (and IMS Global) have experienced would not have been possible without a lot of this. The harder you stare at the picture, the more complicated it looks. This is what "systems thinking" is all about. There isn't a linear cause-and-effect story. There are multiple interacting feedback loops. It’s a complex adaptive system, which means that it doesn’t respond in linear or predictable ways.

Update: I got a note from Rob Abel noting that a lot of the growth in the last leg came from an explosion of participation in the K12 space. That's good color and consistent with what I've seen in my last couple of LILI conference visits. It's also consistent with the rest of this analysis. K12 benefitted from all of the dynamics above—the maturation of the LMS market, the dynamics in higher education online that pushed toward SaaS and usability, the massive influx of venture funding, and so on. All of those developments, plus the work inside IMS, made the K12 growth possible, while the dynamics inside K12 added another feedback loop to this complex adaptive system.

But respond it finally did. We have some semblance of a functioning market, and with its rise, blockers preventing the formation of a vibrant interoperability standards ecosystem of the type we have today have largely fallen. Now we have to address the blockers of the formation of the vibrant interoperability ecosystem that we will need tomorrow. Because it will be qualitatively different. Tomorrow’s blockers are not market formation problems but rather collaboration methodology problems. They are about creating meaningful learning analytics, which will require solving some wicked problems that can only be tackled through close and well structured interdisciplinary work. That most definitely includes the standards design process itself.

After the glacier comes the flood

What I saw at the IMS LILI this year was, I think, a milestone. The end of an era. Market pressures now favor interoperability. The same companies that were the most resistant to developing and implementing useful interoperability standards in 2007 are among the most aggressive champions of interoperability today. This is not to say that foundational interoperability work is "over." Far from it. Rather, the conditions finally exist where it can move forward as it should, still hard but relatively unimpeded by the distortions of a dysfunctional market.

That said, the nature and challenges of interoperability our sector will be facing in the next decade are fundamentally different from the ones that we faced in the last one. Up until now, we have primarily been concerned with synchronizing administration-related bits across applications. Which people are in this class? Are they students or instructors? What grades did they get on which assignments? And how much does each assignment count toward the final course grade? These challenges are hard in all the ways that are familiar to anyone who works on any sort of generic data interoperability questions.

But the next decade is going to be about data interoperability as it pertains to insight. Data scientists think this is still familiar territory and are excited because it keeps them at the frontier of their own profession. But this will not be generic data science, for several reasons. (I will tell you right now that some of them disagree with me on this. Vehemently.) First, even the most richly instrumented fully online environments that we have today are highly data impoverished relative to what we need to make good inferences about teaching and learning. For heaven's sake, Amazon still recommends things that I have already bought. If I just bought a toaster oven last month, then how likely is it that I want to buy another one now? And I buy everything on Amazon. If they don't know enough to make good buying recommendations on consumer products, then there's no way that our learning environments are going to have enough data to make judgements that are orders of magnitude more sophisticated.

Well then, some answer, we'll just collect more data! More more more! We'll collect everything! If we collect every bit of data, then we can answer any question. (That is a pretty close paraphrase of what one of the IMS presenters said in one of the handful of learning analytics talks I went to.)

No. You won't collect "everything"—even if we ignore the obvious, glaring ethical questions—because you don't know what "everything" is. Computer folks, having finally freed themselves from the shackles of SQL queries and data marts, are understandably excited to apply that newfound freedom to the important problem space of learning. But it is not a good fit, because we don't have a good understanding of the basic cognitive processes involved in learning. As I wrote about (at length) in a previous post, we have to employ multiple cutting-edge machine learning techniques just to get glimpses of learning processes even when we are directly monitoring students' brain activity because these are extraordinarily complex processes with multiple hidden variables. Trying to tease out learning processes inside a student's head based on learning analytics from running machine learning algorithms on LMS data is a little like trying to monitor the digestive processes of a flatworm on the bottom of the Marianas Trench based on studying the wave patterns on the surface of the ocean. There are too many invisible mediating layers to just run a random forest algorithm on your data lake—it all sounds very organic, doesn't it?—and pop out new insights about how students learn.

That doesn't mean we should just throw up our hands, by any means. To the contrary, IMS Global has some extraordinarily good tools close at hand for tackling this problem. But it does mean that they are going to have to take some of the stakeholder engagement strategies they've been working at diligently to the next level, to the point where the standards-making process itself may evolve over time.

Theory-driven interoperability

There is an excellent data and processing resource that the learning analytics folks have yet to think deeply about how to leverage, as far as I can tell from the conference. The computational power is impressive (and impressively parallel). It is the collective intelligence of educators and learning scientists. Because there are too many confounds to make useful direct inferences from the data, educational inferencing needs to be theory-driven. You need to start with at least some idea of what might be going on inside the learner's head. One that can be either supported or disproven based on evidence. And you need to know what that evidence might look like. If you can spell all that out, then you can start doing interesting things with learning analytics, including machine learning. There is room for learning science, data science, and on-the-ground teaching expertise at the table. In fact, you need all those kinds of expertise. But the folks with those respective kinds of know-how need to be able to talk to each other and work together in the right ways, which is really hard.

The IMS has an outstanding foundation for this sort of work, because their Caliper specification turns out to provide the basis for a perfectly lovely lingua franca. To begin with, its fundamental structure is triples, which is the same basic idea as the original concept behind the semantic web. If you're not a computer person and this is starting to make your eyes glaze over, don't worry, because this is plain English. Three-word sentences, in fact. Noun, verb, direct object. Student takes test. Question assesses learning objective. Student highlights sentence. Sentence discusses Impressionism.

IMS Caliper expresses learning analytics in statements that can easily be translated into three-word plain-English sentences. These sentences can be strung together into coherent paragraphs. Notice, for example, how the last two example sentences are related. Three-word sentences in this format can be chained together to form longer thoughts. New thoughts. With this one, very simple grammatical structure, we have a language that is generative in the linguistic sense. As long as you have words to put into these grammatical placeholders, you can string thoughts together. Or "chain inferences," to sling the lingo. And it turns out, unsurprisingly, that Caliper has a mechanism for defining these words in ways that both humans and machines can understand them.
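To make the triple idea concrete, here is a minimal sketch of how (noun, verb, object) statements can be chained into longer inferences. This is not the actual Caliper JSON-LD event format, and all of the names in it are hypothetical; it only illustrates the grammatical structure the specification builds on.

```python
# Hypothetical sketch: triple-style statements, loosely inspired by the
# (actor, verb, object) shape of Caliper events. NOT the real Caliper
# format; all identifiers below are illustrative.

triples = [
    ("student_42", "highlights", "sentence_7"),
    ("sentence_7", "discusses", "Impressionism"),
    ("question_3", "assesses", "objective_art_1"),
]

def chain(triples, start):
    """Follow triples from a starting subject, linking each object to any
    triple where that object reappears as the subject. Chaining two
    three-word sentences this way yields a longer 'thought'."""
    facts = {s: (v, o) for s, v, o in triples}
    path = [start]
    while path[-1] in facts:
        _verb, obj = facts[path[-1]]
        path.append(obj)
    return path

# "Student highlights sentence" + "Sentence discusses Impressionism"
# chains into: student_42 -> sentence_7 -> Impressionism
print(chain(triples, "student_42"))
```

The point of the exercise is that once statements share a vocabulary, a machine can mechanically link them while a human can still read each link as a plain-English sentence.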

That has to be the bridge. Humans have to understand the utterances well enough to be able to express their theories on the front end and understand whatever the machine is telling them it may have learned on the back end. Machines have to understand them specifically enough to be able to parse the sentences in their own, literal, machine-y way. Theoretically, Caliper could be an ideal language to enable educators and computer scientists to discuss theories about how to better support students as well as how to test those theories together.

The challenge is that the IMS community, at least based on what I saw in the sessions I attended, is not using the specification as an interdisciplinary communication tool in this way yet. What I saw happening instead was a lot of very earnest data scientists pumping as much Caliper data as they can into their data lakes. They come to the conference, give a talk and, to their credit, shrug their shoulders and admit that they really don't know what to do with those data yet. But then they go home and build bigger pipes, because that's their job. That's what they do.

It's not their fault. I've been friends with some of these folks for a very long time indeed. There are good people here. But if you work in the IT department, and you're not a learning scientist or a classroom educator, and the faculty are somewhere between dismissive and disdainful of the idea of talking to you about working together to improve teaching and learning, then what can you do? You do what you know how to do and hope that things will change for the better over time.

It's not the IMS's fault either. The conference I attended was called the IMS Learning Impact Leadership Institute. That's not a new name. Caliper has a board that helps guide its direction. That board includes educators who are the kind of advocates that I would like to see on such a body. They are productive irritants in the best possible way. But that's not enough anymore. This is just a really hard problem. It's the challenge of the next decade. To meet it, we need to do more than just make sure the right people are in the room together. We need to develop new ways of working together. New roles, methodologies, ways of talking with each other, and ways of seeing the world.

I'm going to preview a bit of a post that I have in my queue for...I'm not sure when, but some time mentioning "learning engineering." This term has gotten a lot of buzz lately, along with some criticism. I'll be writing up my own take on it, but for now I'll say that one reason I think the term is gaining some currency is that it represents a set of skills for being a mediator in the kind of collaboration that I'm describing here.

As it turns out, it was coined by Nobel prize-winning polymath and Carnegie Mellon luminary Herb Simon, after whom Carnegie Mellon University's Simon Initiative was named. And, as it also turns out, the Simon Initiative hosted this year's EEP summit and made some news in the process by contributing $100 million worth of open source software that they use in their research and practice of...wait for it...learning engineering.

Here's a slide that they used in their talk explaining what the heck learning engineering is and what they are doing when they are doing it:

Copyright Carnegie Mellon University, CC-BY

(By the way, the videos of all talks from the summit will be posted online, as promised. Please be patient a little longer.)

This post has already run long, so rather than unpacking the slide, I'll leave you with a question or two. Think about this graphic as representing a data-informed continuous improvement methodology involving multiple people with multiple types of expertise. What would that methodology need to look like? Who would have to be at the table, what kinds of conversations would they have to have, and how would they have to work together?

I'm not suggesting that "learning engineering" is a magical conjuring phrase. But I am suggesting that we need new approaches, new competencies, and likely a new role or two if we are going to get to the next updraft.

  1. By the way, if you haven't subscribed to Phil's new blog yet, then you really, really should. Like, right now. I'll wait.

The post The IMS at an Inflection Point appeared first on e-Literate.

by Michael Feldstein at June 11, 2019 09:47 PM

May 29, 2019

Michael Feldstein

Building a Better Vendor Ecosystem

In my last post, I wrote at a high level about the new e-Literate:

Think of the organization as having two parts. The first part is dedicated to spreading understanding. e-Literate has been doing this from the very beginning. The Empirical Educator Project (EEP) can be thought of as a social, IRL extension of e-Literate. I think of this as "spreading the gospel," but the business-speak for this part would be "media and events." There will be the blog, the summits (which may evolve into a conference as interest grows), a podcast, and other logical extensions. I will have more details to share on this, including a way to pay for the expense of expanding this work that I think will actually do net good rather than harm, in tomorrow's post. One does not get rich from education media and events, but if they pay the mortgages of the people working on them, that's just fine.

The other half of the business is to help organizations implement the knowledge and effective practices that are brought forward in the first half. Everything published through e-Literate and EEP will be contributed to the commons via a Creative Commons or open source license. But sometimes organizations need help implementing these ideas, or they want context-specific coaching. So the rest of the business will provide a combination of workshops and consulting that help apply and extend just about anything covered in e-Literate or EEP. I do have significant ambitions to grow this part of the organization, largely because I think it's an important way to grow impact.

In today's post, I'm going to make good on the promise of describing my plan for paying for the "spreading the gospel" part of the mission. Here's the short version:

Vendors are always trying to get me to write good things about them. They know my word is valuable because people believe I will write good things about vendors if and only if I think they are true. By definition, I can't take their money to write good things about them. That would defeat the purpose. Which means it becomes ever harder to cover authentically good work as the demand for attention increases and I have no way to pay for the cost of the time to cover that news. In fact, in the old consulting model, a lot of good work that we knew about was hard to cover because the way we learned about it—namely, through our paid consulting with the vendor—created a conflict of interest.

But what if vendors paid not for good coverage but for help in becoming better collaborators and contributors, particularly with non-customers? And what if there were a kind of peer review mechanism to help vet their success at becoming better collaborators and contributors? And what if they got coverage for free based on that success?

That's the new model in a nutshell. Now I'll break down where it came from and how it works.

The past: An origin story

Back in the early days of this blog, when I was just some guy working at a university writing about stuff that was on my mind, I didn't think or write much about vendors or business issues. Sure, I was frustrated by the teaching tools they were providing, but I didn't think much about the organizations themselves, how they ran their businesses, or the dynamics of the markets.

Then Blackboard decided to sue Desire2Learn for patent infringement. I didn't know anything about software patents, either legally or economically, but this sounded at first blush like it might not be a good thing. So I started asking around. It turns out that when you work in an academic environment it isn't too hard to learn stuff about just about anything. Pretty soon I knew more than I ever thought I wanted to know about patent claim construction, patent thickets, different ways that patents can be challenged, and so on. The more I learned, the more concerned I grew. And the more concerned I grew, the more I wrote about it.

When I started writing about the topic, only a handful of people were reading my blog. But then a weird thing happened. Matt Small, Blackboard's Chief Legal Counsel, started arguing with me. In public. The next thing I knew, an Associated Press reporter was at my door with a photographer, and my picture was in USA Today. My readership skyrocketed. My public arguments with Matt Small continued literally for years, and with them, my readership continued to grow. Blackboard had made themselves the most hated company in ed tech and I became known as the little guy who took them on.

It started off as an accident, but once it had happened, I felt compelled to continue. Nobody else was doing that kind of accountability work in ed tech until Phil came along, and it was important work. Importantly, it sometimes got results. The bigger I got, the more likely companies were to behave better when we poked them.

That said, there are two aspects of this work that I don't like. The first is the collateral damage. A blog post is a blunt force instrument. Companies do dumb or harmful things for a lot of different reasons, and some educators tend to be very reductive about their vendors and the people working at them. Sometimes that damage is a necessary evil, but I've come to appreciate that a sharp-elbowed vendor accountability piece is not something to be written lightly. The potential for good has to outweigh the potential for harm, and that's often a tough calculation to make.

The second aspect I don't like about the "cop on the beat" role of e-Literate is the opportunity cost. Every post I write about something that somebody shouldn't have done is a missed opportunity to highlight something good that we should be doing more of.

Put these two problems together, add in the problem that none of the time on this work is paid for, and it feels like e-Literate could be doing a lot more than it has been to foster a new economy in ed tech. One where good behavior is rewarded. And the funny thing is, ed tech companies try to make contributions all the time. Their efforts are generally ignored, dismissed, or simply not noticed.

There are many reasons for this. One is that many of these companies are not skilled at making contributions in ways that are helpful to educators and likely to be noticed. Another is that many educators have trouble thinking about vendors as complex organizations that can do a mix of good things, bad things, and neutral things all at the same time for reasons other than some evil master plan. There are others I could list, but the net effect is that contribution and collaboration are not reliably effective methods for vendors to increase their brand value in this space. So they do less of it than they could and don't work as hard as they should at getting better at it.

Education companies should compete based on how much they contribute to education, particularly including how much they contribute to the public good. We haven't created a world in which it is possible for them to do so effectively. That strikes me as a problem worth tackling. e-Literate is trying to do something about it via EEP.

The present

I realize that many of you are still trying to wrap your heads around exactly what the Empirical Educator Project (EEP) is. That's OK. For the purpose of this post, you just need to understand this much:

  • It's a project where colleges and universities share what they've learned and collaborate on projects to help their students learn, succeed, and thrive.
  • Knowledge shared in EEP is intended to be contributed to the commons via some sort of open source license and to be made as practically accessible as possible to as many organizations as possible.
  • EEP is, in many ways, a real-world extension of e-Literate.

EEP is also vendor-sponsored. In fact, it has a very particular and carefully crafted approach to sponsorship that is designed to help create an economy that rewards vendor collaboration and contribution.

Here are the principles:

Sponsors are vetted, not once but continually

I personally hand-pick sponsors. I interview them before I accept their money. I have turned down sponsors, including high-dollar-value ones whose logos would have looked very good on the EEP web site. I have declined to invite sponsors back when their behavior has not lived up to expectations. The first step in establishing a new economy of collaboration and contribution is establishing a higher level of trust. I can only have EEP sponsors in the room who can demonstrate they are trustworthy.

Sponsors are participants

We refer to the vendors who support EEP as "sponsoring participants," and we only admit sponsors who have something to offer in addition to money. The corollary is that sponsoring participants are not allowed to send sales or marketing people to the summit. They have to send product designers, executives, or researchers. In other words, they have to send people who are in roles that enable them to collaborate. Ultimately, we want our sponsors to contribute something. That is what they are there for.

For each company, there is only one price

There is no gold, silver, or bronze sponsorship. Nobody pays for a lunch or a lanyard or a program ad. You are either in or you are out. The only price difference is based on size. We have three bands based on broad ranges of company valuations—small, medium, or large. The price is the price, and it covers a year of sponsorship.

There may be ancillaries in the future—the one in the works right now is a podcast—but the core will always be one price based on company size with no distinction in the value received in return for that price, because we do not want to privilege large companies over small ones.

Credit is proportional to contribution

Becoming an EEP sponsoring participant in and of itself gets the vendor a few things:

  • Their logo on the web site to show that they are participating in creating this new ecosystem
  • My word to EEP network participants that the vendor has been vetted for participation
  • An opportunity to sit side-by-side with the participants at the summit and seek collaboration opportunities throughout the year
  • Help from e-Literate in finding collaboration and contribution opportunities

It does not buy them a single positive blog post on e-Literate. (Nor does it buy them protection from e-Literate's watchdog function. The blog will continue to do what it does.)

If the participating sponsor makes a contribution under some form of open license, that will get coverage from e-Literate. I will be publishing several such posts in the coming weeks based on contributions from the recent summit. For the short term, I am vetting those contributions. As we grow, I intend to put in place a more robust review mechanism.

If the participating sponsor makes a non-proprietary contribution that is adopted by colleges and universities in the EEP network, that will get more attention on e-Literate because the adoption acts as a peer review mechanism. And if that adoption comes with some sort of impact measure, that will get the most attention of all.

Cost is proportional to value

The way we came up with the sponsorship costs is that, within each of the three company sponsorship size bands, we figured out roughly how much the marketing manager could sign off on without getting executive approval. Then we charged a little more than that. We are creating an environment in which vendors have an opportunity to prove that they are genuinely contributing to the betterment of education. They should be willing to invest in that future.

Some questions you may have

It's pretty abnormal to have such a long post on e-Literate about how we make our money, so you probably have some questions. I'll try to anticipate a few, but feel free to ask yours in the comments section below.

  • Are you selling out, Michael? My answer on this one is the same as always. I'm the wrong person to answer that question. You be the judge. All I ask is that you watch my actions going forward and judge me by what I do.
  • Is e-Literate going to be all about EEP all the time now? No. As a matter of fact, I have a couple of posts coming up in the immediate future that are exactly the kind of analysis I've been writing for a long time now. But my writing has always evolved based on whatever I've been working on and thinking about. You may not have noticed it, because I haven't always called attention to it. But it has. It's going to do so again.
  • Does this mean you will only write about companies that are EEP sponsors? No, but it does mean that I have a filter for my already limited writing time. I ignore about a dozen press releases every week. That was true before EEP, and it's still true now. I will write about good work wherever I see it. The question is really one of where I have time to focus my attention. Going forward, I am going to be focusing more of my attention on the work coming out of the EEP network, so I will be more likely to notice work being done there.
  • Do you really think you can change the vendor economy? Me by myself? Probably not. I have been surprised by how much I've been able to nudge it on occasion, but changing the whole darned thing is a pretty heavy lift. All of us together, on the other hand? Yeah, I kinda do think we can make it happen.
  • Can my company sponsor EEP? Maybe. Contact me and we'll talk.
  • Are you going to be writing about your business all the time now? Nah. Don't get too weirded out by all of this. I am more me than ever. I had to explain the changes and will return to the details from time to time when they are relevant, but e-Literate is still gonna e-Literate.

The post Building a Better Vendor Ecosystem appeared first on e-Literate.

by Michael Feldstein at May 29, 2019 04:09 PM

Dr. Chuck

What is all the Fuss About Python?

The Python programming language has been around for over 20 years, but these days it feels like an overnight sensation. Python has moved from being a fringe language for beginners, biologists, and natural language analysis to being the go-to language in nearly every domain of computing. While there is a lot of inertia in the choice of a programming language for a project, the adoption pattern of Python is quite different from "that cool new language that came out a few years ago". While most new programming languages are exciting for early adopters, by the time they are a few years old, many early adopters have moved on to the next big thing and the languages never find their way into the mainstream. Python seems different – Python seems to have a solid and continuously growing market share – and in particular Python seems to invade and take over application areas previously dominated by well-established technologies. We will look at some of the inherent aspects of Python that make it "sticky" – once you go Python you rarely go back or go anywhere else. This presentation will also look at the world's largest programming course (in Python) and how the course fits into the Python movement and how the course benefits from Apereo Open Source Software, Open Educational Resources, and Open Course Enrollment (a.k.a. the MOOC movement).

Abstract for Apereo 2019

by Charles Severance at May 29, 2019 11:55 AM

May 20, 2019

Apereo Foundation

New Directions for Apereo OAE


The OAE has been slowly getting back on its feet after a somewhat long period of consolidation, data migration, and redeployment. The 15.0 release "Snowy Owl" last November paved the way for a sequence of new developments and releases which we are now proud to announce to the Apereo community.

by Michelle Hall at May 20, 2019 03:59 PM

May 16, 2019

Dr. Chuck

New PMC Members for Sakai (2019)

We are pleased to announce that three new members are joining the Sakai Project Management Committee (PMC).

Matthew Hall – University of Virginia

Matthew is the Technical Lead for the University of Virginia’s learning management system team. He manages software development and system administration for UVACollab, UVA’s implementation of Sakai. His focus is on the tools, services, and integrations that make up UVA’s central online collaboration and learning environment. Matthew has been working with Sakai since 2011. He holds an MS in Computer Science, a BS in Computer Engineering, and a BS in Mathematics from the University of Virginia.

Miguel Pellicer – EDF – Entornos de Formación

Miguel is the CTO at EDF and an LMS market entrepreneur with a strong commitment to open source education technologies. He has been an active member of the Sakai community and the Spanish team since 2008, serving as security and internationalization lead. Miguel has led more than thirty Sakai deployments all over the world, including elite universities in Spain, the Netherlands, the UK, Colombia, Mexico, Chile, and the USA.

Joshua Wilson – Longsight Inc

Joshua is Longsight’s Chief Operating Officer.  He leads client relations, business operations, project management, and strategic planning. In addition, Josh chairs the Sakai Community’s Marketing Team. He has been a leader in academic technology for more than a decade, serving most recently as Associate CIO for Academic Technology at Brandeis University, where he directed the strategic and client-centered renewal of the University’s academic technology environment, including its open source LMS.  Josh has served for a decade on the management team for the nationwide MISO Survey, which measures the effectiveness of IT and libraries at more than 150 higher education institutions.

About the Sakai PMC

PMC membership is reflective of significant contributions to the community and a dedication to the shared goals of the Sakai community.

In terms of what PMC membership “means”, the PMC members are already active members in the various groups in the Sakai/Apereo community (QA, Core Team, Marketing, FARM, Accessibility, Teaching and Learning, etc.). Most of the decisions about Sakai are made in those groups without any need for the PMC to take a vote or render an opinion because we believe that those closest to the actual work should make decisions regarding the work whenever possible.

The PMC gets involved when there is a significant or broader issue or a decision needs to be made regarding Sakai’s direction or resource expenditure by Apereo on behalf of Sakai. The PMC does all of its work in full view of the community on a public list except for votes on new PMC members.  Everyone in the Sakai community is welcome to join, monitor, and participate in the PMC mailing list.

Please join me in thanking these new PMC members for their past, present, and future contributions to Sakai and welcoming them to the Sakai PMC.


P.S. Thanks to Matt Jones (PMC Treasurer) for running the PMC nomination and election process.

by Charles Severance at May 16, 2019 01:30 PM

May 07, 2019

Adam Marshall

Did you know that Sakai has a racing car?


As I’m sure most readers know, WebLearn is built upon the open source Sakai platform.

One of the software’s founders, Dr Charles Severance, has decided to initiate a guerrilla marketing campaign and have some fun by buying a cheap old car (a ‘lemon’), calling it the ‘Sakaicar’, sticking a pair of “Sakaiger” ears on it, and running it into the ground on the racing circuit!


by Adam Marshall at May 07, 2019 12:02 PM

April 29, 2019

Adam Marshall

WebLearn and Turnitin courses: Trinity term 2019

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the WebLearn Fundamentals course (3 hours) can be downloaded for self study. Places are limited and bookings are required. All courses are free of charge and are presented at IT Services, 13 Banbury Road.

Click on the links provided for further information and to book a place.

WebLearn 3-hour course:

WebLearn Bytes sessions:

Plagiarism awareness courses (Turnitin):

User Group meetings will run again in Michaelmas term

by Jill Fresen at April 29, 2019 04:10 PM

April 24, 2019

Dr. Chuck

Why do People Like Sakai, given the Market Share?

A Sakai user saw this report where Sakai was highly ranked against its market competitors and wondered “How could this be with Sakai at a 5% market share?”

Here is my answer.

There is a pretty simple explanation as to why Sakai polls well in some situations and yet there are a lot of folks who say “let’s go Canvas”.

It depends on who you ask.

I knew a school that did a year-long evaluation of Sakai, Canvas, Blackboard, and Desire2Learn. The faculty and students ranked them in exactly that order. The IT organization, which was already convinced it wanted Canvas, removed the Sakai data, published the report, and then chose Canvas based on the report that showed Canvas #1 – it was clearly the “overwhelming favorite” of the faculty and students.

In general, IT staff prefer Canvas because it means less responsibility for them. Canvas rarely listens to its end-users, and it throws good parties – there is a certain stability and simplicity in not being able to influence the direction of your commercial vendor. Just accept it and move on. Faculty and IT staff at Sakai schools can dream up ideas, and some of those ideas make it into the core product, often surprisingly rapidly. That is both a joy and a responsibility.

I would like to see a survey of a lot of schools (Sakai and others) where we ask the faculty and students how well they like their current LMS. I think that if you exclusively listened to the end-users’ voices, Sakai schools would “like their current LMS” more often than commercial LMS schools. (See the NYU data on this.) I expect this would even be true for end-users who had no idea of our wonderful community or our 100% commitment to open source.

If you, on the other hand, polled the IT folks at schools across the board and asked them, “Does your current LMS make your job easy or hard?”, Canvas would win as the LMS that makes IT folks’ jobs easiest – by far. I would say that Sakai schools that completely outsource hosting to Longsight, EDF, or OpenCollab would also get pretty high marks. The more a school is involved in the Sakai community, the more they are working to make Sakai better, the more some of the IT staff might want to switch to Canvas to “take a load off of themselves”. Sakai schools that are self-hosted and have senior (expensive) in-house Sakai developers are great for a few activist IT organizations – but too much to handle for IT organizations that can barely handle WiFi, the SIS, printers, and desktop support on their campus.

So you get this strange anomaly that does not correlate to market share. If you ask the end users at Sakai schools – they love Sakai. If you ask Canvas users at Canvas schools, they like Canvas and it kind of goes down from there.  And the graphs you cite reflect that.

And it is why Sakai continues to be so focused on meeting end-user requirements above our “corporate” profitability and market share. Our end-user satisfaction is high, our community is strong, our profitability is zero, and our market share is low. Separately, our impact on overall market innovation is *extremely high* through Sakai-led innovations like LTI and Common Cartridge. Our contribution, impact, and end-user satisfaction unfortunately do not correlate to rapidly growing market share because, after we meet end-user needs year after year with a best-of-breed 100% open source product, we don’t have any money to hire sales people to visit every university on the planet and buy free lunches for the IT staff.


P.S. We do have a pretty cool SakaiCar with ears – like which other LMS has a race car?

P.P.S. Instructure spent $135M last year on marketing and sales.  They took this money from the pockets of higher education and used it to convince more schools to give them more money.  (link)

by Charles Severance at April 24, 2019 12:57 PM

March 07, 2019

Adam Marshall

WebLearn User Group: Tues 12 March 14:00-16:00

Please join us at the next meeting of the WebLearn User Group:

Date: Tuesday 12 March 2019

Time: 2:00 – 4:00 pm, followed by refreshments

Venue: IT Services, 13 Banbury Rd

Come and meet with fellow WebLearn users and members of the Technology Enhanced Learning (TEL) team to give feedback and share ideas and practices.

Book now to secure your place.


  • Canvas@Oxford project team: Update on the Canvas rollout to Year 1 programmes of study
  • James Shaw, Bodleian Libraries: Copyright and the CLA: Preparing digital material for presentation in a VLE
  • Jon Mason, Medical Sciences: Interactive copyright picker (based on source and intended use)
  • TEL team: Design and content for WebLearn pages
  • Adam Marshall: WebLearn updates

Join the WebLearn User Group site for regular updates and access to audio recordings of previous presentations.

Dr Jill Fresen, Senior Learning Technologist, Technology-Enhanced Learning, IT Services, University of Oxford

by Adam Marshall at March 07, 2019 02:41 PM

February 12, 2019


Peer Assessment – Reflect and Improve

Peer assessment or review can improve student learning, and there's a way to do it in a course site.


by Dave E. at February 12, 2019 04:17 PM

November 23, 2018

Matthew Buckett

Firewalling IPs on macOS

I needed to selectively block some IPs from macOS and this is how I did it. First create a new anchor for the rules to go in. The file to create is /etc/pf.anchors/org.user.block.out and it should contain:

table <blocked-hosts> persist
block in quick from <blocked-hosts>

Then edit: /etc/pf.conf and append the lines:

anchor "org.user.block.out"
load anchor "org.user.block.out" from "/etc/pf.anchors/org.user.block.out"

Then to reload the firewalling rules run:

$ sudo pfctl -f /etc/pf.conf

and if you haven't got pf enabled you also need to enable it with:

$ sudo pfctl -e

Then you can manage the blocked IPs with these commands:

# Block some IPs (example addresses – substitute the hosts you want blocked)
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T add
# Show the currently blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T show
# Remove all the blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T flush
# Remove a single IP
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T delete
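One hedged extension, not part of the original setup: the anchor above only drops inbound packets from the listed hosts, so outbound connections to them will still succeed. If you want to block traffic in both directions, the anchor file could instead read:

```
table <blocked-hosts> persist
# drop packets arriving from the blocked hosts
block in quick from <blocked-hosts>
# also drop outbound connections to the blocked hosts
block out quick to <blocked-hosts>
```

After editing the anchor file, reload the rules with sudo pfctl -f /etc/pf.conf as above; the table management commands work unchanged.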

by Matthew Buckett ( at November 23, 2018 12:41 PM

November 09, 2018

Apereo OAE

Apereo OAE Snowy Owl is now available!

The latest version of Apereo's Open Academic Environment (OAE) project has just been released! Version 15.0.0 is codenamed Snowy Owl and it includes some changes (mostly under the hood) in order to pave the way for what's to come. Read the full changelog at GitHub.


November 09, 2018 06:50 PM

September 27, 2018

Sakai Project

Sakai 12.4 maintenance is released!

Dear Community,

I'm pleased to announce on behalf of the worldwide community that Sakai 12.4 is released and available for downloading! 

Sakai 12.4 has 88 improvements including: 

  • 22 fixes in Assignments
  • 14 fixes in Gradebook
  • 9 fixes in Tests & Quizzes (Samigo)
  • 7 fixes in Lessons
  • 6 fixes in Roster
  • 5 fixes in Portal

For more information, visit 12.4 Fixes by Tool

by WHodges at September 27, 2018 06:11 PM

August 15, 2018

Sakai Project

Now Open! Call for Proposals for the Sakai Virtual Conference 2018

Sakai Project Logo

We are actively seeking presenters who are knowledgeable about teaching with Sakai. You don’t need to be a technical expert to share your experiences! Submit your proposal today! The deadline for submissions is September 21st, 2018.

Save the Date: The Sakai Virtual Conference will take place entirely online on Wednesday, November 7th.

by MHall at August 15, 2018 06:58 PM

August 13, 2018

Sakai Project

Sakai Community Survey - Number of Users at Your Institution

We would like your help in tallying up the total number of Sakai users worldwide.

by MHall at August 13, 2018 04:33 PM

July 04, 2018


F2F Course Site Content Import

If you're tasked with teaching an upcoming course that you've taught in the past with the University - there's no need to rebuild everything from scratch - unless you want to. Faculty teaching face to face (F2F) courses can benefit from the course content import process in Site Info. This process allows you to pull …

by Dave E. at July 04, 2018 06:56 PM

June 11, 2018

Apereo OAE

Strategic re-positioning: OAE in the world of NGDLE

The experience of the Open Academic Environment Project (OAE) forms a significant practical contribution to the emerging vision of the ‘Next Generation Digital Learning Environment’, or NGDLE. Specifically, OAE contributes core collaboration tools and services that can be used in the context of a class, of a formal or informal group outside a class, and indeed of such a group outside an institution. This set of tools and services leverages academic infrastructure, such as Access Management Federations, or widely used commercial infrastructure for authentication, open APIs for popular third-party software (e.g. video conference) and open standards such as LTI and xAPI.

Beyond the LMS/VLE

OAE is widely used by staff in French higher education in the context of research and other inter-institutional collaboration. The project is now examining future directions which bring OAE closer to students – and to learning. This is driven by a groundswell among learners. There is strong anecdotal evidence that students in France are chafing at the constraints of the LMS/VLE. They are beginning to use social media – not necessarily with adequate data or other safeguards – to overcome the perceived limitations of the LMS/VLE. The core functionality of OAE – people forming groups to collaborate around content – provides a means of circumventing the LMS’s limitations without selling one’s soul – or one’s data – to the social media giants. OAE embodies key capabilities supporting social and unstructured learning, and indeed could be adapted and configured as a ‘student owned environment’: a safe space for sharing and discussion of ideas leading to organic group activities. The desires and requirements of students have not featured strongly in NGDLE conversations to this point: The OAE project, beginning with work in France, will explore student discontent with the LMS, and seek to work together with LMS solution providers and software communities to provide a richer and more engaging experience for learners.

Integration points and data flows

OAE has three principal objectives in this area:

  1. OAE has a basic (uncertified) implementation of the IMS Global Learning Tools Interoperability specification. This will be enriched to further effect integration with the LMS/VLE where it is required. OAE will not assume such integration is required without evidence. It will not drive such integration on the basis of technical feasibility, but by needs expressed by learners and educators.
  2. Driven by the significant growth of usage of the Karuta ePortfolio software in France, OAE will explore how student-selected evidence of competency can easily be provided for Karuta, and what other connections might be required or desirable between the two systems.
  3. Given the growth of interest in learning analytics in France and globally, OAE will become an exemplary emitter of learning analytics data and will act wherever possible to analyse each new or old feature from a designed analytics perspective. Learning analytics data will flow from learning designs embedded in OAE, not simply be the accidental output that constitutes a technical log file.

OAE is continuing to develop and transform its sustainability model. The change is essentially from a model based primarily on financially-based contributions to that of a mixed mode community-based model, where financial contributions are encouraged alongside individual, institutional and organisational volunteered contributions of code, documentation and other non-code artefacts. There are two preconditions for accomplishing this. The first, which applies specifically to code, is clearing a layer of technical debt in order to more easily encourage and facilitate contributions around modern software frameworks and tools. OAE is committed to paying down this debt and encouraging contributions from developers outside the project.

The second is both more complex and more straightforward: straightforward to describe, but complex to realise. Put simply, the sector’s answers to questions about wasteful duplication of resources in deploying educational software have fallen out of balance with reality. The pendulum has swung from “local” through “cloud first” to “cloud only”. Innovation around learning, which by its very nature often begins locally, is frequently stifled by the industrial-style massification of ‘the hosted LMS’, which emphasises conformity with a single model. As a result, institutions have switched from software development and maintenance to contract management; in many cases, they have swapped a creative, problem-solving capability for an administrative one. It is almost as though e-learning has entered a “Fordist” phase, with only the green shoots of LTI-enabled niche applications and individual institutional initiatives offering hope of a rather more postmodern – and flexible – future.

OAE retains its desire and ambition to provide a scalable solution that remains “cloud ready”. The project believes, however, that the future is federated. The patchwork of juridical and legal frameworks across national and regional boundaries – particularly around privacy – should alone drive a reconsideration of “cloud only” as a strategy for institutions with global appetites. Such institutions – and there are few now which do not have global appetites – will distribute, federate and firewall systems to work around legislative roadblocks, bumps in the road, and brick walls. OAE will therefore begin to consider and work on inter-host federation of content and other services. This will, of necessity, begin small; it will, however, remain the principled grit in the strategic oyster. As more partners join the project, OAE will begin designing a federation architectural layer that lays the foundation for OAE instances to exchange data among themselves dynamically, seamlessly and efficiently, according to a variety of use cases.

ID 22-MAY-18 Amended 23-MAY-18

June 11, 2018 12:00 PM