Planet Sakai

August 05, 2019

Apereo Foundation

openEQUELLA 2019.1 Released - Features Cloud Provider

openEQUELLA 2019.1 represents an important re-positioning of the platform, with the introduction of a Cloud Provider framework that enables new services to be dynamically added to openEQUELLA. The addition of Cloud Providers makes openEQUELLA more accessible to the open source developer community, decreases time-to-market for new features, and increases the ability to leverage new cloud technologies.

by Michelle Hall at August 05, 2019 08:06 PM

July 30, 2019

Dr. Chuck

LTI 2.0 Removed from Sakai-20 master and nightly

LTI 2.0 has been removed from Sakai in anticipation of Sakai-20 (https://jira.sakaiproject.org/browse/SAK-41789)

It is up in master on github and the Sakai-20 nightly server.

It took most of one day (last Thursday at LAMP Camp); the rest was testing and making sure.

I removed over 20% of the LTI code in Sakai (>7000 lines removed).

I updated all the QA test plans, wrote some new ones, and wrote some “how to” documentation.

https://github.com/sakaiproject/sakai/tree/master/basiclti/basiclti-docs/resources/docs
https://github.com/sakaiproject/sakai/blob/master/basiclti/README.md

With help from Andrea I went through all the QA tests to make sure they were up-to-date and to my surprise everything worked in my testing. I did find and fix two small bugs that had crept into the remaining LTI 1.x code – so that was nice. These fixes are already in master this morning as well.

While it has worked great in my testing, I want everyone to be vigilant and test LTI in Sakai as much as you can. We will definitely do a solid QA of LTI as part of Sakai-20 but if something feels weird let me know.

For the Tsugi folks using Java tsugi-util library from the Tsugi distribution, I will wait a few weeks and then port these changes to tsugi-util master:

https://github.com/tsugiproject/tsugi-util

It is mostly deletions. The only real change is that the ContentItem class in tsugi-util went from the org.sakaiproject.lti2 package to the org.sakaiproject.basicltii package. It never belonged in the lti2 package – but I built it while I was building lti2 so I put it there.

Reflection

It is kind of bittersweet in that it took me three years of almost 100% of my Sakai effort to develop the LTI 2.0 spec and build the Sakai implementation and less than six hours to remove it. But it is always good to remove complex and unused code from production software.

One of these mornings with a good cup of coffee and a little time, I will write a blog post about the lessons we can learn from the failure of the LTI 2.0 spec – but for now we are moving on to focus on LTI Advantage – as it should be.

by Charles Severance at July 30, 2019 10:58 AM

July 29, 2019

Apereo Foundation

July 27, 2019

Dr. Chuck

Sakai 19.2 Released

(This message came from Wilma Hodges – the Sakai Community Coordinator)

I’m pleased to announce that Sakai 19.2 is released and available for downloading [1]!

Sakai 19.2 has 125 improvements [2], including:

  • 23 fixes in Tests & Quizzes (Samigo)

  • 22 fixes in Gradebook

  • 17 fixes in Assignments

  • 16 fixes in Lessons

  • 12 fixes in Forums

  • 5 fixes in Rubrics

Other areas improved include:

  • A11y

  • Basic LTI

  • Calendar

  • Chat Room

  • Content Review

  • Dropbox

  • I18n

  • Messages

  • Preferences

  • Quartz Scheduler

  • Resources

  • Roster

  • Section Info

  • Sign Up

  • Site Info

  • Syllabus

  • Wiki

  • Worksite Setup

There were 3 security issues fixed in 19.2 (details will be sent to the Sakai Security Announcements list).

Please also note the upgrade information page [3] for important notes related to performing the upgrade. 2 Quartz jobs need to be run to complete the conversion steps for Sakai 12, including a new one for the Job Scheduler in 12.1.

[1] Download information- http://source.sakaiproject.org/release/19.2/

[2] 19.2 Fixes by Tool – https://confluence.sakaiproject.org/display/DOC/19.2+Fixes+by+tool

[3] Upgrade information – https://confluence.sakaiproject.org/display/DOC/Sakai+19+Upgrade+Information 

by Charles Severance at July 27, 2019 01:59 PM

July 18, 2019

Michael Feldstein

Pearson’s Born-Digital Move and Frequency of Updates

There's been a bit of an uproar over Pearson's announcement that they are switching entirely to a digital-first model and will be updating their editions more frequently as a result.1 The prevailing take in the media write-ups so far has been fear of increasing prices. I'm not doing as much of that sort of analysis as I used to. As usual, if you want a clear write-up of it, a good place to check is PhilonEdTech. I do have a somewhat different take on the economics than the current hand-wringing would suggest and will use a graph from Phil's post to make a brief point about it. The middle of this post will be about the difference between print and digital and how that drives different reasons for updating an "edition." And the last part will be about drawing larger lessons from the tendency in ed tech to tell stories that are more based on past traumas than analysis of current situations.

The horse is out of the barn on pricing

Phil's post contains this graph showing the publishers' competition with other sources for textbook rentals:

Graph of textbook rental distribution from PhilonEdTech, sourced from NACS

The textbook publishers have a lot to gain by controlling the distribution channel for their products. If they can cut out Amazon and Chegg, then they can get a larger percentage of each sale. So they definitely have a lot to gain economically from this.

But I don't get the sense that any of the major publishers believe they can raise prices again. They all read the OER faculty attitude surveys very carefully. The prevailing sense in the industry is that, in addition to the strong price sensitivity and ingenuity that has existed among students for some time, there is now increasing price awareness and sensitivity among academics that is not going away. I don't think this is about that.

In fact, while there are immediate economic reasons for publishers to make this move—if I remember correctly, McGraw Hill made the same move a while ago—there are also product-related reasons for doing so.

Analog vs digital updates

Many folks in the sector have a reflex reaction to the phrase "textbook edition" based on the old print tradition of updating a book every three years whether it needed updating or not. In some subjects, a three-year update is warranted. Programming languages change, for example. Linear algebra, on the other hand? Not so much. Textbook publishers gained a somewhat justified reputation for updating books every three years just to thwart the used book market. Which they then reinforced by raising prices every year, driving students to the used book market and giving publishers further incentives to update editions for purely economic reasons.

That said, in the digital world, there are legitimate reasons for product updates that don't exist with a print textbook. First, you can actually get good data about whether your content is working so that you can make non-arbitrary improvements. I'm not talking about the sort of fancy machine learning algorithms that seem to be making some folks nervous these days. I'm talking about basic psychometric assessment measures that have been used since before digital but are really hard to gather data on at scale for an analog textbook—how hard are my test questions, and are any of the distractors obviously wrong?—and page analytics at a level not much more sophisticated than what I use for this blog—is anybody even reading that lesson? Lumen Learning's RISE framework is a good, easy-to-understand example of the level of analysis that can be used to continuously improve content in a ho-hum, non-creepy, completely uncontroversial way.

(I stress this because there's been an elevated level of concern about intrusive analytics among a segment of the e-Literate readership and I want to be clear that one can do quite a bit without coming anywhere near the touchy areas.)
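To make that level of analysis concrete, here is a minimal sketch in Python, using made-up response data, of the two classical item statistics mentioned above: the share of students answering correctly (item difficulty) and a check for distractors that nobody chooses. The data and names are illustrative assumptions, not any particular product's code.

from collections import Counter

# (student, chosen_option) pairs for one multiple-choice question; "B" is the keyed answer
responses = [("s1", "B"), ("s2", "B"), ("s3", "C"), ("s4", "B"),
             ("s5", "A"), ("s6", "B"), ("s7", "C"), ("s8", "B")]
correct_option = "B"

counts = Counter(choice for _, choice in responses)
difficulty = counts[correct_option] / len(responses)   # classical item difficulty (proportion correct)
print(f"Proportion correct: {difficulty:.2f}")

for option in "ABCD":
    picked = counts.get(option, 0)
    note = " (never chosen - weak distractor?)" if picked == 0 and option != correct_option else ""
    print(f"Option {option}: {picked} responses{note}")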

At the same time, unlike paper books, digital products have functionality, and there is always a continuous list of features that students and teachers want or need. Just keeping up with accessibility requirements is a never-ending job. Then, instructors in different subjects want different quiz question types. Or the ability to author those question types. Or different configurations on the number of tries that students can have for the questions. Or the number of hints they can have. Or integration with some discipline-specific tool that they like to use. Or a tool that the students like to use. It goes on and on and on.

Are all of these updates good updates? That's an impossible generalization to make. People who have no use for a software product category in the first place generally tend to think that the updates to a pointless product are pointless. So if you're already cynical about courseware, you'll probably be cynical about courseware functionality updates. If you find courseware useful, then your attitude will be closer to that of any other software user, which is to say that you'll look at whether the particular update fits your need. Since functionality in a courseware platform is often differentially useful across disciplines, chances are that you will like some updates a lot and find others completely uninteresting (or even a step backwards for your particular needs). If you are a builder of digital platforms that need to support a wide range of academic disciplines, as Pearson now is, you have a lot of surface area that you have to cover. Hence the need for frequent updates.

Motivation

Ed tech news cycles often feel like fighting the last war to me. We're always building our Maginot Line. Some vendor does something, and everybody rushes to write the hand-wringing story about how this might be just like the last disaster. The last trauma. Nobody asks, "What's changed since then?"

Analogously to the default story trope about LMS vendors, the default story tropes about publishers all come from past traumas about pricing and don't take into account how the economics of the industry have changed completely in the last decade, or how going from print to digital changes what it even means to update an edition.

Nor do these trauma stories take into account what is changeable. They are typically written as if vendors are implacable forces of nature or all-powerful multi-national corporations. Pearson, one of the largest companies in the sector, is just 2.5% the size of Exxon Mobil. Even assuming the worst regarding the intent of the company management, the industry could not have maintained such high price points if faculty had simply started to select books based on price. Vendors respond to the signals customers send about what matters to them.

Faculty had the power to change the industry. Whether knowingly or not, they chose not to exercise it. They chose not to even ask about the price in the majority of cases for many, many years. This is not to let companies off the hook for their decisions but rather to say that when we choose to retell trauma stories without interrogating them, we may miss opportunities to choose to play roles other than victims. And if you are going to choose to have a relationship with a vendor, why wouldn't you choose for that relationship to be something other than victim?

Five years ago, I wrote a post about how people who complained bitterly about how unhappy they were with their LMS vendors tended to ignore the procurement practices of their colleagues and their institution that all but guaranteed their dissatisfaction. That post was called Dammit, the LMS. It received quite a bit of attention. Even people who hated it admitted to me that they thought it was correct.

Sadly, not much has changed since then.

  1. Full disclosure: Pearson is both a current sponsor of the Empirical Educator Project and a current consulting client.

The post Pearson’s Born-Digital Move and Frequency of Updates appeared first on e-Literate.

by Michael Feldstein at July 18, 2019 04:54 PM

July 12, 2019

Michael Feldstein

Instructure DIG and Student Early Warning Systems

EdSurge's Tony Wan is first out of the blocks with an Instructurecon coverage article this year. (Because of my recent change in professional focus, I will not be on the LMS conference circuit this year.) Tony broke some news in his interview with CEO Dan Goldsmith with this tidbit about the forthcoming DIG analytics product:

One example with DIG is around student success and student risk. We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom. Throughout that class, or even at the beginning, we can make recommendations to the teacher or student on things they can do to increase their chances of success.

Instructure CEO Dan Goldsmith

There isn't a whole lot of detail to go on here, so I don't want to speculate too much. But the phrase "even before they set foot in the classroom" is a clue as to what this might be. I suspect that the particular functionality he is talking about is what's known as a "student retention early warning system."

Or maybe not. Time will tell.

Either way, it provides me with the thin pretext I was looking for to write a post on student retention early warning systems. It seems like a good time to review the history, anatomy, and challenges of the product category, since I haven't written about them in quite a while and they've become something of a fixture. The product category is also a good case study in why a tool that could be tremendously useful in supporting the students who need help the most often fails to live up to either its educational or commercial potential.

The archetype: Purdue Course Signals

The first retention early warning system that I know of was Purdue Course Signals. It was an experiment undertaken by Purdue University to—you guessed it—increase student retention, particularly in the first year of college, when students tend to drop out most often. The leader of the project, John Campbell, and his fellow researchers Kim Arnold and Matthew Pistilli, looked at data from their Student Information System (SIS) as well as the LMS to see if they could predict and influence students. Their first goal was to prevent them from dropping courses, but they ultimately wanted to prevent those students from dropping out.

They looked at quite a few variables from both systems, but the main results they found are fairly intuitive. On the LMS side, the four biggest predictors they found for students staying in the class (or, conversely, for falling through the cracks) were

  1. Student logins (i.e., whether they are showing up for class)
  2. Student assignments (i.e., whether they are turning in their work)
  3. Student grades (i.e., whether their work is passing)
  4. Student discussion participation (i.e., whether they are participating in class)

All four of these variables were compared to the class average, because not all instructors were using the LMS in the same way. If, for example, the instructor wasn't conducting class discussions online, then the fact that a student wasn't posting on the discussion board wouldn't be a meaningful indicator.

These are basically four of the same very generic criteria that any instructor would look at to determine whether a student is starting to get in trouble. The system is just more objective and vigilant in applying these criteria than instructors can be at times, particularly in large classes (which is likely to be the norm for many first-year students). The sensitivity with which Course Signals would respond to those factors would be modified by what the system "knew" about the students from their longitudinal data—their prior course grades, their SAT or ACT scores, their biographical and demographic data, and so on. For example, the system would be less "concerned" about an honors student living on campus who doesn't log in for a week than about a student on academic probation who lives off-campus.

In the latter case, the data used by the system might not normally be accessible, or even legal, for the instructor to look at. For example, a disability could be a student retention risk factor for which there are laws governing the conditions under which faculty can be informed. Of course, instructors don't have to be informed in order for the early warning system to be influenced by the risk factor. One way to think about how this sensitive information could be handled is to treat it like a credit score. There is some composite score that informs the instructor that the student is at increased risk based on a variety of factors, some of which are private to the student. The people who are authorized to see the data can verify that the model works and that there is legitimate reason to be concerned about the student, but the people who are not authorized are only told that the student is considered at-risk.

Already, we are in a bit of an ethical rabbit hole here. Note that this is not caused by the technology. At least in my state, the great Commonwealth of Massachusetts, instructors are not permitted to ask students about their disabilities, even though that knowledge could be very helpful in teaching those students. (I should know whether that's a Federal law, but I don't.) Colleges and universities face complicated challenges today, in the analog world, with the tensions between their obligation to protect student privacy and their affirmative obligation to help the students based on what they know about what the students need. And this is exactly the way John Campbell characterized the problem when he talked about it. This is not a "Facebook" problem. It's a genuine educational ethical dilemma.

Some of you may remember some controversy around the Purdue research. The details matter here. Purdue's original study, which showed increased course completion and improved course grades, particularly for "C" and "D" students, was never questioned. It still stands. A subsequent study, which purported to show that student gains persisted in subsequent classes, was later called into question. You can read the details of that drama here. (e-Literate played a minor role in that drama by helping to amplify the voices of the people who caught the problem in the research.)

But if you remember the controversy, it's important to remember three things about it. First, the original research was never called into question. Second, the subsequent finding was not disproven; rather, the follow-up study produced a null result. We have proof neither for nor against the hypothesis that the Purdue system can produce longer-term effects. And finally, the biggest problem that the controversy exposed was with university IR departments releasing non-peer-reviewed research papers that staff researchers have no standing to defend on their own when the work is criticized. That's worth exploring further some other time, but for now, the point is that the process problem was the real story. The controversy didn't invalidate the fundamental idea behind the software.

Since then

Since then, we've seen lots of tinkering with the model on both the LMS and SIS sides of the equation. Predictive models have gotten better. Both Blackboard and D2L have some sort of retention early warning products, as do Hobsons, Civitas, EAB, and HelioCampus, among others. There were some early problems related to a generational shift in data analytics technologies; most LMSs and SISs were originally architected well before the era when systems were expected to provide the kind of high-volume transactional data flows needed to perform near-real-time early warning analytics. Those problems have increasingly been either ironed out or, at least, worked around. So in one sense, this is a relatively mature product category. We have a pretty good sense of what a solution looks like and there are a number of providers in the market right now with variations on the theme.

In a second sense, the product category hasn't fundamentally changed since Purdue created Course Signals over a decade ago. We've seen incremental improvements to the model, but no fundamental changes to it. Maybe that's because the Purdue folks pretty much nailed the basic model for a single institution on the first try. What's left are three challenges that share the common characteristic of becoming harder when converted from an experiment by a single university to a product model supported by a third-party company. At the same time, they fall on different places on the spectrum between being primarily human challenges and primarily technology challenges. The first, the aforementioned privacy dilemma, is mostly a human challenge. It's a university policy issue that can be supported by software affordances. The second, model tuning, is on the opposite end of the spectrum. It's all about the software. And the third, the last-mile problem of getting from good analytics to actual impact, is somewhere in the messy middle.

Three significant challenges

I've already spent some time on the student data privacy challenge specific to these systems, so I won't spend much more time on it here. The macro issue is that these systems sometimes rely on privacy-sensitive data to determine—with demonstrated accuracy—which students are most likely to need extra attention to make sure they don't fall through the cracks. This is an academic (and legal) problem that can only be resolved by academic (and legal) stakeholders. The role of the technologists is to make the effectiveness and the privacy consequences of various software settings both clear and clearly in the control of the appropriate stakeholders. In other words, the software should support and enable appropriate policy decisions rather than obscuring or impeding them. At Purdue, where Course Signals was not a product that was purchased but a research initiative that had active, high-level buy-in from academic leadership, these issues could be worked through. But for a company selling the product into as many universities as possible, each with differing levels of sophistication and policy-making capability in this area, the best the vendor can do is build a transparent product and try to educate its customers as best it can. You can lead a horse to water and all that.

On the other end of the human/technology spectrum, there is an open question about the degree to which these systems can be made accurate without individual hand tuning of the algorithms for each institution. Purdue was building a system for exactly one university, so it didn't face this problem. We don't have good public data on how well its commercial successors work out of the box. I am not a data scientist, but I have had this question raised by some of the folks who I trust the most in this field. If hand tuning is required, then each installation of the product would require a significant services component, which would raise the cost and make these systems less affordable to the access-oriented institutions that need them the most. This is not a settled question; the jury is still out. I would like to see more public proof points that have undergone some form of peer review.

And in the middle, there's the question of what to do with the predictions in order to produce positive results. Suppose you know which students are more likely to fail the course on Day 1. Suppose your confidence level is high. Maybe not Minority Report-level stuff—although, if I remember the movie correctly, they got a big case wrong, didn't they?—but pretty accurate. What then? At my recent IMS conference visit, I heard one panelist on learning analytics (depressingly) say, "We're getting really good at predicting which students are likely to fail, but we're not getting much better at preventing them from failing."

Purdue had both a specific theory of action for helping students and good connections among the various program offices that would need to execute that theory of action. Campbell et al. believed, based on prior academic research, that students who struggle academically in their first year of college are likely to be weak in a skill called "help-seeking behavior." Academically at-risk students often are not good at knowing when they need help and they are not good at knowing how to get it. Course Signals would send students carefully crafted and increasingly insistent emails urging them to go to the tutoring center, where staff would track which students actually came. The IR department would analyze the results. Over time, the academic IT department that owned the Course Signals system itself experimented with different email messages, in collaboration with IR, and figured out which ones were the most effective at motivating students to take action and seek help.

Notice two critical features of Purdue's method. First, they had a theory about student learning—in this case, learning about productive study behaviors—that could be supported or disproven by evidence. Second, they used data science to test a learning intervention that they believed would help students based on their theory of what is going on inside the students' heads. This is learning engineering. It also explains why the Purdue folks had reason to hypothesize that the effects of using Course Signals might persist with students after they stopped using the product. They believed that students might learn the skill from the product. The fact that the experimental design of their follow-up study was flawed doesn't mean that their hypothesis was a bad one.

When Blackboard built their first version of a retention early warning system—one, it should be noted, that is substantially different from their current product in a number of ways—they didn't choose Purdue's theory of change. Instead, they gave the risk information to the instructors and let them decide what to do with it. As have many other designers of these systems. While everybody that I know of copied Purdue's basic analytics design, nobody that I know—at least no commercial product developers that I know of—copied Purdue's decision to put so much emphasis on student empowerment first. Some of this has started to enter product design in more recent years now that "nudges" have made the leap from behavioral economics into consumer software design. (Fitbit, anyone?) But the faculty and administrators remain the primary personas in the design process for many of these products. (For non-software designers, a "persona" is an idealized person that you imagine that you're designing the software for.)

Why? Two reasons. First, students don't buy enterprise academic software. So however much the companies that design these products may genuinely want to serve students well, their relationship with them is inherently mediated. The second reason is the same as with the previous two challenges in scaling Purdue's solution. Individual institutions can do things that companies can't. Purdue was able to foster extensive coordination between academic IT, institutional research, and the tutoring center, even though those three organizations live on completely different branches of the organizational chart in pretty much every college and university that I know. An LMS vendor has no way of compelling such inter-departmental coordination in its customers. The best they can do is give information to a single stakeholder who is most likely to be in a position to take action and hope that person does something. In this case, the instructor.

One could imagine different kinds of vendor relationships with a service component—a consultancy or an OPM, for example—where this kind of coordination would be supported. One could also imagine colleges and universities reorganizing themselves and learning new skills to become better at the sort of cross-functional cooperation needed to serve students. If academia is going to survive and thrive in the changing environment it finds itself in, both of these possibilities will have to become far more common. The kinds of scaling problems I just described in retention early warning systems are far from unique to that category. Before higher education can develop and apply the new techniques and enabling technologies it needs to serve students more effectively with high ethical standards, we first need to cultivate an academic ecosystem that can make proper use of better tools.

Given a hammer, everything looks pretty frustrating if you don't have an opposable thumb.

The post Instructure DIG and Student Early Warning Systems appeared first on e-Literate.

by Michael Feldstein at July 12, 2019 02:08 PM

July 08, 2019

Dr. Chuck

Building Tsugi Learning Tools – The Experience

The Tsugi project (www.tsugi.org) is providing a software environment to enable a wide range of educational technology use cases. Tsugi was developed to simplify the development of educational tools and to allow those tools to be deployed in an “App Store” pattern using standards like IMS Learning Tools Interoperability, Common Cartridge, Deep Linking (Content Item), and LTI Advantage. This will be two presentations in one. One thread of the presentation will cover how Tsugi uses the latest standards to implement an LMS-agnostic Next Generation Digital Learning Environment (or is that Experience?). Throughout the presentation, those in the audience who want to experience building a tool will put up a server, create a simple Python-based learning tool, and integrate it into the Sakai nightly server. The attendees can be as active as they like.

Abstract for the 2019 LAMP camp
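For readers curious what "a simple Python-based learning tool" might look like, here is a minimal sketch (not Tsugi's actual API; the route and variable names are illustrative) of a tool that accepts an LTI 1.1 launch POST and greets the user. A real tool must verify the OAuth signature on the launch (for example with oauthlib) before trusting any of these parameters; that step is deliberately omitted here.

from flask import Flask, request

app = Flask(__name__)

@app.route("/launch", methods=["POST"])
def lti_launch():
    # Standard LTI 1.1 launch parameters posted by the LMS.
    # NOTE: verify the OAuth 1.0a signature before using these in a real tool.
    name = request.form.get("lis_person_name_full", "student")
    roles = request.form.get("roles", "Learner")
    course = request.form.get("context_title", "this course")
    return f"Hello {name} ({roles}), welcome from {course}!"

if __name__ == "__main__":
    app.run(port=5000)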

by Charles Severance at July 08, 2019 11:28 PM

July 03, 2019

Michael Feldstein

Learning Engineering: A Caliper Example

In my recent IMS update post, I wrote,

[T]he nature and challenges of interoperability our sector will be facing in the next decade are fundamentally different from the ones that we faced in the last one. Up until now, we have primarily been concerned with synchronizing administration-related bits across applications. Which people are in this class? Are they students or instructors? What grades did they get on which assignments? And how much does each assignment count toward the final course grade? These challenges are hard in all the ways that are familiar to anyone who works on any sort of generic data interoperability questions. 
But the next decade is going to be about data interoperability as it pertains to insight. Data scientists think this is still familiar territory and are excited because it keeps them at the frontier of their own profession. But this will not be generic data science, for several reasons.

I then asserted the following positions:

  • Because learning processes are not directly observable, blindly running machine learning algorithms against the click streams in our learning platforms will probably not teach us much about learning.
  • On the other hand, if our analytics are theory-driven, i.e., if we start with some empirically grounded hypotheses about learning processes and design our analytics to search for data that either support or disprove those hypotheses, then we might actually get somewhere.
  • Because learning analytics expressions written in the IMS Caliper standard can be readily translated into plain English, Caliper could form a basis for expressing educational hypotheses and translating them into interoperable tools for testing those hypotheses across the boundaries of tech tools and platforms.
  • The kind of Caliper-mediated conversation I imagined among learning scientists, practicing educators, data scientists, learning system designers, and others, is relevant to a term coined and still used heavily at Carnegie Mellon University—"learning engineering."

In this post, I'm going to explore the last two points in more detail.

What the heck is "learning engineering"?

The term "learning engineering" was first used by Nobel laureate and Carnegie Mellon University polymath Herbert Simon in 1966. It has been around for quite a while. But it is a term whose time has finally come and, as such, we are seeing the usual academic turf wars over its meaning and value. On the one hand, some folks love it, embrace it, and want to apply it liberally. IEEE has an entire group devoted to defining it. As is always the case, some of this sort of enthusiasm is thoughtful, and some of it is less so. At its worst, there is a tendency for people to get tangled up in the term because it provides a certain je ne sais quoi they've been yearning for to describe the aspects of their jobs that they really want to be doing as change agents rather than the mundane tasks that they keep being dragged back into doing, much like the way some folks are wrapping "innovation" and "design" around themselves like a warm blanket. It's perfectly understandable, and I think it attaches to something real in many cases, but it's hard to say exactly what that is. And, of course, where there are enthusiasts in academia, there are critics. Again, some thoughtful, while others...less so. (Note my comment in the thread on that particularly egregious column.)

If you want to get a clear sense of the range of possible meanings of "learning engineering" as used by people who actually think about it deeply, one good place to start would be Learning Engineering for Online Education: Theoretical Contexts and Design-Based Examples edited by Chris Dede, John Richards, and Bror Saxberg. (I am still working on getting half a day's worth of Carnegie Mellon University video presentations on their own learning engineering work ready for posting on the web. I promise it is coming.) There are a lot of great take-aways from that anthology, one of which is that even the people who think hard about the term and work together to put together something like a coherent tome on the subject don't fully agree on what the term means.

And that's really OK. Let's just set a few boundary conditions. On the one hand, learning engineering isn't an all-encompassing discipline and methodology that is going to make all previous roles, disciplines, and methodologies obsolete. If you are an instructional designer, or a learning designer, or a user experience designer; if you practice design thinking, or ADDIE; be not afraid. On the other hand, learning engineering is not creeping Stalinism either. Think about learning engineering, writ large, as applying data and cognitive sciences to help bring about desired learning outcomes, usually within the context of a team of colleagues with different skills all working together. That's still pretty vague, but it's specific enough for the current cultural moment.

Forget about your stereotypes of engineers and their practices. Do you believe there is a place for applied science in our efforts to improve the ways in which we design and deliver our courses, or try to understand and serve our students' needs and goals? If so, what would such an applied science look like? What would a person applying the science need to know? What would their role be? How would they work with other educators who have complementary expertise?

That is the possibility space that learning engineering inhabits.

Applied science as a design exercise

One of the reasons that people have trouble wrapping their heads around the notion of learning engineering is that it was conceived of by a very unusual mind. Some of the critiques I've seen online of the term position "learning engineering" in opposition to "learning design." But as Phil Long points out in his essay in the aforementioned anthology, Herb Simon both coined the term "learning engineering" and is essentially the grandfather of design thinking:

Design science was introduced by Buckminster Fuller in 1963, but it was Herbert Simon who is most closely associated with it and has established how we think of it today. "The Sciences of the Artificial" (Simon, 1967) distinguished the artificial, or practical sciences, from the natural sciences. Simon described design as an ill-structured problem, much like the learning environment, which involves man-made responses to the world. Design science is influenced by the limitations of human cognition unlike mathematical models. Human decision-making is further constrained by practical attributes of limited time and available information. This bounded rationality makes us prone to seek adequate as opposed to optimal solutions to problems. That is, we engage in satisficing not optimizing. Design is central to the artificial sciences: 'Everyone designs who devises courses of action aimed at changing existing situations into desired ones.' Natural sciences are concerned with understanding what is; design science instead asks about what should be. This distinction separates the study of the science of learning from the design of learning. Learning scientists are interested in how humans learn. Learning engineers are part of a team focused on how students ought to learn.

Phil Long, "The Role of the Learning Engineer"

Phil points out two important dichotomies in Simon's thinking. The first one: is vs. ought. Natural science is about what is, while design science is about what you would like to exist. What you want to bring into being. The second dichotomy is about well structured vs. poorly structured. For Simon, "design" is a set of activities one undertakes to solve a poorly structured problem. To need or want is human, and to be human is to be messy. Understanding a human need is about understanding a messy problem. Understanding how different humans with different backgrounds and different cognitive and non-cognitive abilities learn, given a wide range of contextual variables like the teaching strategies being employed, the personal relationships between students and teacher, what else is going on in the students' lives at the time, whether different students are coming to class well fed and well slept, and so on, is pretty much the definition of a poorly structured problem. So as far as Herb Simon is concerned, education is a design problem by definition, whether or not you choose to use the word "engineer."

In the next section of his article, Phil then makes a fascinating connection between the evolution of design thinking, which emerged out design science, and learning engineering. The key is in identifying the central social activity that defines design thinking:

Design thinking represents those processes that designers use to create new designs, possible approaches to problem solution spaces where none existed before. A problem-solving method has been derived from this and applied to human social interactions, iteratively taking the designer and/or co-design participants from inspiration to ideation and then to implementation. The designer and design team may have a mental model of the solution to a proposed problem, but it is essential to externalize this representation in terms of a sketch, a description of a learning design sequence, or by actual prototyping of the activities in which the learner is asked to engage. [Emphasis added.] All involved can see the attributes of the proposed design solution that were not apparent in the conceptualization of it. This process of externalizing and prototyping design solutions allows it to be situated in larger and different contexts, what Donald Schon called reframing the design, situating it in contexts other than originally considered.

Phil Long, "The Role of the Learning Engineer"

So the essential feature that Phil is calling out in design thinking is putting the idea out into the world so that everybody can see it, respond to it, and talk about it together. Now watch where he takes this:

As learning environments are intentionally designed in digital contexts, the opportunity to instrument the learning environment emerges. Learners benefit in terms of feedback or suggested possible actions. Evaluators can assess how the course performed on a number of dimensions. The faculty and others in the learning-design team can get data through the instrumented learning behaviors, which may provide insight into how the design is working, for whom it is working, and in what context.

Phil Long, "The Role of the Learning Engineer"

Rather than a sketch, a wireframe, or a prototype, a learning engineer makes the graph, the dashboard, or the visualization into the externalization. For Herb Simon, as for Phil Long, these design artifacts serve the same purpose. They're the same thing, basically.

If you're not a data person, this might be hard to grasp. (I'm not a data person. This is hard for me to grasp sometimes.) How can you take numbers in a table and turn them into a meaningful artifact that a group of people can look at together, discuss, make sense of, debate, and learn from? What might that even look like?

Well, it might look something like this, for example:

Higher ed LMS market share for US and Canada, January 2019 (Phil Hill's famous squid diagram)

Phil Hill has a graduate degree in engineering. Not learning engineering. Electrical. (Also, he's not a Stalinist.)

By the way, when we externalize and share data with a student about her learning processes in a form that is designed to provoke thought and discussion, we have a particular term of art for that in education. It's called "formative assessment." If we do it in a way such that the student always has access to such externalizations, which are continually updating based on the student's actions, we call that "continuous formative assessment." When executed well, there is evidence that it can be an effective educational practice.

Caliper statements as learning engineering artifacts

So here's where we've arrived at this point in the post:

  • Design is a process by which we tackle ill-defined problems of meeting human needs and wants, such as needing or wanting to learn something.
  • Engineering is a word that we're not going to worry about defining precisely for now, but it relates to applying science to a design problem, and therefore often involves measurement and numbers.
  • One important innovation in design methodology is the creation of external artifacts early in the design process so that various stakeholders with different sorts of experience and expertise can provide feedback in a social context. In other words, create something that makes the idea more "real" and therefore easier to discuss.
  • Learning engineering includes the skills of creation and manipulation of design artifacts that require more technical expertise, including expertise in data and software engineering.

The twist with Caliper is that, rather than using visualizations and dashboards as the externalization, we can use human language. This was the original idea behind the Semantic Web, which is still brilliant in concept, even if the original implementation was flawed. Let's review that basic idea as implemented in Caliper:

  • You can express statements about the world (or the world-wide web) in three-word sentences of the form [subject] [verb] [direct object], e.g., [student A] [correctly answers] [question 13].
  • Because English grammar works the way it does, you can string these sentences together to form inferences, e.g., [question 13] [tests knowledge of] [multiplying fractions]; therefore, [student A] [correctly answers] [a question about multiplying fractions].
  • We can define mandatory and optional details of every noun and verb, e.g., it might be mandatory to know that question 13 was a multiple choice question, but it might be optional to include the actual text of the question, the correct answer, and the distractors.

That's it. Three-word sentences, which work the way they do in English grammar, and definitions of the "words."
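To make the idea tangible, here is a rough sketch of how the first sentence above, [student A] [correctly answers] [question 13], might be serialized. The actor/action/object shape follows the general pattern of Caliper events, but the identifiers and optional fields below are illustrative assumptions, not a certified metric profile.

import json
from datetime import datetime, timezone

event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "type": "AssessmentItemEvent",
    "actor": {"id": "https://example.edu/users/student-a", "type": "Person"},
    "action": "Completed",
    "object": {
        "id": "https://example.edu/quizzes/fractions/items/13",
        "type": "AssessmentItem",
        "name": "Question 13",
        "isPartOf": {"id": "https://example.edu/quizzes/fractions", "type": "Assessment"},
    },
    # Optional detail: whether the generated response was correct (illustrative only)
    "generated": {"type": "Response", "extensions": {"isCorrect": True}},
    "eventTime": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(event, indent=2))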

A learning engineer could use Caliper paragraphs as a design artifact to facilitate conversations about refining the standard, the products involved, and the experimental design. I'll share a modified version of an example I recently shared with an IMS engineer to illustrate this same point.

Suppose you are interested in helping students become better at reflective writing. You want to do this by providing them with continuous formative assessment, i.e., in addition to the feedback that you give them as an instructor, you want to provide them an externalization of the language in their reflective writing assignments. You want to use textual analysis to help the students look at their own writing through a new lens, find the spots where they are really doing serious thought work, and also the spots where maybe they could think a little harder.

But you have to solve a few problems in order to give this affordance to your students. First, you have to develop the natural language analysis tool that can detect cues in the students' writing that indicate self-reflection (or not). That's hard enough, but the research is being conducted and progress is being made. The second problem is that you are designing a new experiment to test your latest iteration and need some sort of summative measure to test against. So maybe you design a randomized controlled trial where half the students in the class use the new feedback tool, half don't, and all get the same human-graded final reflective writing assignment. You compare the results.

This is an example of theory-driven learning analytics. Your theory is that student reflection improves when students become more aware of certain types of reflective language in their journaling. You think you can train a textual analysis algorithm to reliably distinguish—externalize—the kind of language that you want students to be more aware of in their writing and point it out to them. You want to test that by giving students such a tool and seeing if their reflective writing does, in fact, improve. Either students' reflective writing will improve under the test condition, which will provide supporting evidence for the theory, or it won't, which at the very least will not support the theory and might provide evidence that tends to disprove the theory, depending on the specifics. Data science and machine learning are being employed here, but more selectively than just shotgunning an algorithm at a data set and expecting it to come up with novel insights about the mysteries of human cognition.
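The summative comparison at the end of such an experiment is deliberately unfancy. Assuming the final reflective-writing assignments have been human-graded on a common rubric, the analysis might be no more than this sketch (all scores are made up):

from statistics import mean
from scipy.stats import ttest_ind

# Rubric scores on the final human-graded reflective writing assignment
with_tool = [78, 85, 81, 90, 74, 88, 83, 79]      # students who used the feedback tool
without_tool = [72, 80, 75, 83, 70, 77, 74, 76]   # control group

result = ttest_ind(with_tool, without_tool)
print(f"mean with tool: {mean(with_tool):.1f}, without: {mean(without_tool):.1f}")
print(f"p-value: {result.pvalue:.3f}")   # a small p-value supports the theory; a null result does not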

Constructing theory-driven learning analytics of the sort described here is challenging enough to do in a unified system that is designed for the experiment. But now we get to the problem for which we will need the help of IMS over the next decade, which is that the various activities we need to monitor for this work often happen in different applications. Each writing assignment is in response to a reading. So the first thing you might want to do, at least for the experiment if not in the production application, is to control for students who do the reading. If they aren't doing the reading, then their reflective writing on that reading isn't going to tell you much. Let's say the reading happens to take place in an ebook app. But their writing takes place in a separate notebook app. Maybe it's whatever notebook app they normally use—Evernote, One Note, etc. Ideally, you would want them to journal in whatever they normally use for that sort of activity. And if it's reflective writing for their own growth, it should be an app that they own and that will travel with them after they leave the class and the institution. On the other hand, the final writing assignment needs to be submittable, gradable, and maybe markable. So maybe it gets submitted through an LMS, or maybe through a specialized tool like Turnitin.
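Here is a sketch of what "controlling for students who do the reading" looks like once the data from the two applications can be joined on a shared student identifier (which, as discussed next, is itself part of the interoperability problem). All identifiers and records below are invented:

# Reading-completion records from the ebook application
completed_reading = {
    "studentA": {"text1"},
    "studentB": set(),
    "studentC": {"text1"},
}

# Reflection texts from the notebook application
reflections = {
    "studentA": "I struggled with the second argument because ...",
    "studentB": "Looks fine to me.",
    "studentC": "The author's claim surprised me, since ...",
}

# Only send reflections to textual analysis when the student completed the reading
for student, text in reflections.items():
    if "text1" in completed_reading.get(student, set()):
        print(f"{student}: analyse reflection ({len(text.split())} words)")
    else:
        print(f"{student}: skipped (no record of completing the reading)")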

This is an interoperability problem. But it's a special one, because the semantics have to be preserved through all of these connections in order for (a) the researchers to conduct the study, and then (b) the formative assessment tool to have real value to the students. The people who normally write Caliper metric profiles—the technical definitions of the nouns in Caliper—would have no idea about any of this on their own. Nor would the application developers. Both groups would need to have a conversation with the researchers in order to get the clarity they need in order to define the profiles for this purpose.

The language of Caliper could help with this if a person with the right role and expertise were facilitating the conversation. That person would start by eliciting a set of three-word sentences from the researchers. What do you need to know? The answers might include statements like the following:

  • Student A reads text 1
  • Student A writes text alpha
  • Text alpha is a learning reflection of text 1
  • Student A reads text 2
  • Text 2 is a learning reflection of texts 1 and 2
  • Etc.

The person asking the questions of the researcher and the feature designer—let's call that person the learning engineer—would then ask questions about the meanings and details of the words, such as the following:

  • In what system or systems is the reading activity happening?
  • Do you need to know if the student started the reading? Finished it? Anything finer grained than that?
  • What do you need to know about the student's writing in order to perform your textual analysis? What data and metadata do you need? And how long a writing sample do you need to elicit in order to perform the kind of textual analysis you intend and get worthwhile results back?
  • What do you mean when you say that text 2 is a reflection of both text 1 and 2, and how would you make that determination?

At some point, the data scientist and software systems engineers would join in the conversation and different concerns would start to come up, such as the following:

  • Right now, I have no way of associating Student A in the note-taking system with Student A in the reading system.
  • To do the analysis you want, you need the full text of the reflection. That's not currently in the spec, and it has performance implications. We should discuss this.
  • The student data privacy implications are very different for an IRB-approved research study, an individual student dashboard, and an instructor- or administrator-facing dashboard. Who owns these privacy concerns and how do we expect them to be handled?

Notice that the Caliper language has become the externalization that we manipulate socially in the design exercise. There are two aspects of Caliper that make this work: (1) the three-word sentences are linguistically generative, i.e., they can express new ideas that have never been expressed before, and (2) every human-readable expression directly maps to a machine-readable expression. These two properties together enable rich conversations among very different kinds of stakeholders to map out theory-driven analytics and the interoperability requirements that they entail.

This is the kind of conversation by which Caliper can evolve into a standard that leads to useful insights and tools for improving learning impact. And in the early days, it will likely happen one use case at a time. Over time, the working group would learn from having enough of these conversations that design patterns would emerge, both for writing new portions of the specification itself and for the process by which the specification is modified and extended.

Copyright Carnegie Mellon University, CC-BY

The post Learning Engineering: A Caliper Example appeared first on e-Literate.

by Michael Feldstein at July 03, 2019 07:35 PM

May 07, 2019

Adam Marshall

Did you know that Sakai has a racing car?

Sakaiger

As I’m sure most readers know, WebLearn is built upon the open source Sakai platform.

One of the software’s founders, Dr Charles Severance, has decided to initiate a guerrilla marketing campaign and have some fun by buying a cheap old car (a ‘lemon’), calling it the ‘Sakaicar’, sticking a pair of “Sakaiger” ears on it and running it into the ground on the racing circuit!

 

by Adam Marshall at May 07, 2019 12:02 PM

April 29, 2019

Adam Marshall

WebLearn and Turnitin courses: Trinity term 2019

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the WebLearn Fundamentals course (3 hours) can be downloaded for self study. Places are limited and bookings are required. All courses are free of charge and are presented at IT Services, 13 Banbury Road.

Click on the links provided for further information and to book a place.

WebLearn 3-hour course:

WebLearn Bytes sessions:

Plagiarism awareness courses (Turnitin):

User Group meetings will run again in Michaelmas term

by Jill Fresen at April 29, 2019 04:10 PM

March 07, 2019

Adam Marshall

WebLearn User Group: Tues 12 March 14:00-16:00

Please join us at the next meeting of the WebLearn User Group:

Date: Tuesday 12 March 2019

Time: 2:00 – 4:00 pm, followed by refreshments

Venue: IT Services, 13 Banbury Rd

Come and meet with fellow WebLearn users and members of the Technology Enhanced Learning (TEL) team to give feedback and share ideas and practices.

Book now to secure your place.

Programme:

  • Canvas@Oxford project team: Update on the Canvas rollout to Year 1 programmes of study
  • James Shaw, Bodleian Libraries: Copyright and the CLA: Preparing digital material for presentation in a VLE
  • Jon Mason, Medical Sciences: Interactive copyright picker (based on source and intended use)
  • TEL team: Design and content for WebLearn pages
  • Adam Marshall: WebLearn updates

Join the WebLearn User Group site: https://weblearn.ox.ac.uk/info/wlug for regular updates and access to audio recordings of previous presentations.

Dr Jill Fresen, Senior Learning Technologist, Technology-Enhanced Learning, IT Services, University of Oxford

by Adam Marshall at March 07, 2019 02:41 PM

February 12, 2019

Sakai@JU

Peer Assessment – Reflect and Improve

Peer assessment or review can improve student learning, and there's a way to do it in a course site.

by Dave E. at February 12, 2019 04:17 PM

November 23, 2018

Matthew Buckett

Firewalling IPs on macOS

I needed to selectively block some IPs from macOS and this is how I did it. First create a new anchor for the rules to go in. The file to create is /etc/pf.anchors/org.user.block.out and it should contain:

# Table holding the addresses to block; "persist" keeps the table even when no rules reference it
table <blocked-hosts> persist
# Drop incoming traffic from any address in the table; "quick" stops further rule evaluation
block in quick from <blocked-hosts>

Then edit: /etc/pf.conf and append the lines:

anchor "org.user.block.out"
load anchor "org.user.block.out" from "/etc/pf.anchors/org.user.block.out"

Then to reload the firewalling rules run:

$ sudo pfctl -f /etc/pf.conf

and if you haven't got pf enabled you also need to enable it with:

$ sudo pfctl -e

Then you can manage the blocked IPs with these commands:

# Block some IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T add 1.2.3.4 5.6.7.8
# Remove all the blocked IPs
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T flush
# Remove a single IP
$ sudo pfctl -a org.user.block.out -t blocked-hosts -T delete 1.2.3.4






by Matthew Buckett (noreply@blogger.com) at November 23, 2018 12:41 PM

November 09, 2018

Apereo OAE

Apereo OAE Snowy Owl is now available!

The latest version of Apereo's Open Academic Environment (OAE) project has just been released! Version 15.0.0 is codenamed Snowy Owl and it includes some changes (mostly under the hood) in order to pave the way for what's to come. Read the full changelog on GitHub.

November 09, 2018 06:50 PM

July 04, 2018

Sakai@JU

F2F Course Site Content Import

If you're tasked with teaching an upcoming course that you've taught in the past with the University - there's no need to rebuild everything from scratch - unless you want to. Faculty teaching face to face (F2F) courses can benefit from the course content import process in Site Info. This process allows you to pull …

by Dave E. at July 04, 2018 06:56 PM

June 11, 2018

Apereo OAE

Strategic re-positioning: OAE in the world of NGDLE

The experience of the Open Academic Environment Project (OAE) forms a significant practical contribution to the emerging vision of the ‘Next Generation Digital Learning Environment’, or NGDLE. Specifically, OAE contributes core collaboration tools and services that can be used in the context of a class, of a formal or informal group outside a class, and indeed of such a group outside an institution. This set of tools and services leverages academic infrastructure, such as Access Management Federations, or widely used commercial infrastructure for authentication, open APIs for popular third-party software (e.g. video conference) and open standards such as LTI and xAPI.

Beyond the LMS/VLE

OAE is widely used by staff in French higher education in the context of research and other inter-institutional collaboration. The project is now examining future directions which bring OAE closer to students – and to learning. This is driven by a groundswell among learners. There is strong anecdotal evidence that students in France are chafing at the constraints of the LMS/VLE. They are beginning to use social media – not necessarily with adequate data or other safeguards – to overcome the perceived limitations of the LMS/VLE. The core functionality of OAE – people forming groups to collaborate around content – provides a means of circumventing the LMS’s limitations without selling one’s soul – or one’s data – to the social media giants. OAE embodies key capabilities supporting social and unstructured learning, and indeed could be adapted and configured as a ‘student owned environment’: a safe space for sharing and discussion of ideas leading to organic group activities. The desires and requirements of students have not featured strongly in NGDLE conversations to this point: The OAE project, beginning with work in France, will explore student discontent with the LMS, and seek to work together with LMS solution providers and software communities to provide a richer and more engaging experience for learners.

Integration points and data flows

OAE has three principal objectives in this area:

  1. OAE has a basic (uncertified) implementation of the IMSGlobal Learning Tools Interoperability specification. This will be enriched to further effect integration with the LMS/VLE where it is required. OAE will not assume such integration is required without evidence. It will not drive such integration on the basis of technical feasibility, but by needs expressed by learners and educators.
  2. Driven by the significant growth of usage of the Karuta ePortfolio software in France, OAE will explore how student-selected evidence of competency can easily be provided for Karuta, and what other connections might be required or desirable between the two systems.
  3. Given the growth of interest in learning analytics in France and globally, OAE will become an exemplary emitter of learning analytics data and will act wherever possible to analyse each new or old feature from a designed analytics perspective. Learning analytics data will flow from learning designs embedded in OAE, not simply be the accidental output that constitutes a technical log file.

OAE is continuing to develop and transform its sustainability model. The change is essentially from a model based primarily on financially-based contributions to that of a mixed mode community-based model, where financial contributions are encouraged alongside individual, institutional and organisational volunteered contributions of code, documentation and other non-code artefacts. There are two preconditions for accomplishing this. The first, which applies specifically to code, is clearing a layer of technical debt in order to more easily encourage and facilitate contributions around modern software frameworks and tools. OAE is committed to paying down this debt and encouraging contributions from developers outside the project.

The second is both more complex and more straightforward; straightforward to describe, but complex to realise. Put simply, the debate about wasteful duplication of resources in deploying software in education has fallen out of balance with reality. The pendulum has swung from “local” through “cloud first” to “cloud only”. Innovation around learning, which by its very nature often begins locally, is often stifled by the industrial-style massification of ‘the hosted LMS’, which emphasises conformity with a single model. As a result of this strategy, institutions have switched from software development and maintenance to contract management. In many cases, this means that they have tended to swap creative, problem-solving capability for an administrative capability. It is almost as though e-learning has entered a “Fordist” phase, with only the green shoots of LTI-enabled niche applications and individual institutional initiatives providing hope of a rather more postmodern – and flexible – future.

OAE retains its desire and ambition to provide a scalable solution that remains “cloud ready”. The project believes, however, that the future is federated. Patchworks of juridical and legal frameworks across national and regional boundaries alone – particularly around privacy - should drive a reconsideration of “cloud only” as a strategy for institutions with global appetites. Institutions with such appetites – and there are few now which do not have them – will distribute, federate and firewall systems to work around legislative roadblocks, bumps in the road, and brick walls. OAE will, then, begin to consider and work on inter-host federation of content and other services. This will, of necessity, begin small. It will, however, remain the principled grit in the strategic oyster. As more partners join the project, OAE will start designing a federation architectural layer that will lay the foundation to a scenario where OAE instances dynamically exchange data among themselves in a seamless and efficient way according to a variety of use cases.

ID 22-MAY-18 Amended 23-MAY-18

June 11, 2018 12:00 PM

May 01, 2018

Sakai@JU

Will Sakai look different following the upgrade?

While there are some improvements to accessibility and some on-going tweaks to address color contrast issues, the upgrade to Sakai will not affect the overall appearance that much. For mobile users, the difference in course navigation will be much improved. (Screenshots in the original post compare the Desktop/Laptop view in Sakai 11 and following the upgrade, and the Mobile view in Sakai 11/post-upgrade.) More detail will be … Continue reading Will Sakai look different following the upgrade?

by Dave E. at May 01, 2018 07:53 PM

March 25, 2018

Aaron Zeckoski

Leading Softly

I'm coming back to blogging after a few years buried under project work, and I want to explore some lessons learned as a technology leader managing a department that is growing rapidly and going through significant changes. My department builds educational software products and has grown from a couple of employees and a dozen consultants to over 70 employees and 100 contractors/consultants in three years.
The saying goes, "What got you here won't get you there". This is especially true for leaders in technical fields like software engineering. It means you probably have the hard skills (coding, automation, design, coordination, etc.) and logical problem-solving ability that helped you succeed as an individual contributor. Now you are in leadership, and you are probably finding that those skills no longer solve the problems in front of you. Here are a few lessons of the softer-skilled sort that I learned last week.


1) Unmet expectations are the root cause of upset people
If you are dealing with friction with someone at work (or helping two people on your team work through their friction), your best option is to look for the unmet expectation. Maybe they expected to be treated with more respect, or that you would be on time for the meeting, or that something would be done more quickly. Try to determine what the unmet expectation was and help address it, and you will remove the source of the problem. This alone won't solve everything, but it will help resolve the issue.


2) If you impact someone else, then at least inform them, and ideally engage them
This is easiest to think about using some examples. Are you waiting on something from another person in order to get your job done, and it is late but you haven't heard anything? Do you depend on a process controlled by someone else to get your job done? Have you been pulled into a meeting beyond your control without knowing why? Do you get assigned to projects without having a say? All of these are examples of being impacted by the decisions of someone else. This is pretty common and probably pretty annoying for you (or whoever is on the receiving end). If you are the one causing the impact to someone else, try to always keep them informed. If there is flexibility, then engage them in the decision making about it (even if you only ask for their feedback). You would want this if you were in their place, so treat others as you want to be treated.


3) Good communication is the key to everything
I've come to realize that most relationship and work challenges are caused by poor communication. Did servers go down during a recent release because the database configuration was mismatched between prod and dev? Bad communication. Did a recent feature get built differently than customers wanted? Bad communication. Are users angry because a bug was released that the testers knew about? Bad communication. Was someone surprised by bad news that they should have been aware of? I think you get it... The simplest step to improving communication is to take the extra time to do it. It's not a magic bullet, but most poor communication happens because we didn't bother taking the extra time to communicate for understanding. Try asking people to echo things back when you talk to them this week, and do them the favor of doing the same. You won't regret spending some extra time on communication, but you will regret not doing it when things go wrong.

Also find this on Medium and LinkedIn

by Unknown (noreply@blogger.com) at March 25, 2018 11:47 PM