Planet Sakai

December 11, 2017

Apereo Foundation

Save the Date: Apereo Teaching and Learning Awards (ATLAS) 2018

This year, the award applicant selection process opens on January 16, 2018 and closes on February 26, 2018.

by Michelle Hall at December 11, 2017 08:05 PM

December 07, 2017

Dr. Chuck

Integrating Koseu / Tsugi into Google Classroom

TL;DR – The Demo

Tsugi and Google Classroom

For the past four years, I have been building software to implement my vision for the new technologies that will enable the Next Generation Digital Learning Environment/Ecosystem.

  • Tsugi is software infrastructure, APIs, and code libraries that allow interactive learning tools to be built, hosted, and integrated into Learning Management Systems like Sakai, Canvas, Blackboard, Moodle, Desire2Learn, edX, or Coursera, without requiring the programmer to read and understand the complex documentation that describes the low-level details of these integrations.

  • Koseu is an LMS/course platform that is aimed at supporting course content on the web. Koseu is, in a sense, a way for every teacher to build and publish a “MOOC of my Own” while at the same time making that learning content easily integrated into LMS systems. My Python for Everybody web site (www.py4e.com) is a good example of a well-developed Koseu-based web site.

Up to this point, Tsugi has focused on standards such as IMS Learning Tools Interoperability, IMS Common Cartridge, and IMS Content Item that are used to integrate content and tools into traditional LMS systems.

But increasingly, Google Classroom is being used in K12 and beyond as the “LMS” of choice, since so many organizations already use Google Suite for their single sign-on, document editing, forms, etc. It is a simple matter to just start using Google Classroom – and Google Classroom is very well connected to the rest of the Google Suite.

So we have added initial support for Google Classroom integration to Tsugi/Koseu. The Google Classroom API patterns are very different from those of IMS LTI and the Content Item Message. Google Classroom uses an OAuth 2.0 and single sign-on (SSO) pattern instead. This pattern requires more initial coordination, but it has some nice features that allow the end user to be involved in their own privacy decisions.

With this support, a Tsugi tool can send grades back to the learning system regardless of whether it was launched from an LMS using LTI or from Google Classroom, using the exact same lines of code and the exact same code libraries.
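
To give a sense of what the Google Classroom side of that grade flow looks like, here is a minimal, illustrative sketch in Python (Tsugi itself is PHP, so this is not Tsugi's actual code). It assumes the google-api-python-client library, OAuth 2.0 credentials that have already been granted the coursework scope, and hypothetical course, coursework, and submission identifiers.

# Illustrative sketch only -- not Tsugi's internal code.
# Assumes google-api-python-client and OAuth 2.0 credentials already granted
# the https://www.googleapis.com/auth/classroom.coursework.students scope.
from googleapiclient.discovery import build

def send_grade(creds, course_id, coursework_id, submission_id, score):
    """Write a grade back to Google Classroom for one student submission."""
    classroom = build("classroom", "v1", credentials=creds)
    classroom.courses().courseWork().studentSubmissions().patch(
        courseId=course_id,
        courseWorkId=coursework_id,
        id=submission_id,
        updateMask="assignedGrade,draftGrade",
        body={"assignedGrade": score, "draftGrade": score},
    ).execute()

# Hypothetical identifiers, for illustration only:
# send_grade(creds, course_id="123456", coursework_id="7890",
#            submission_id="Cg4Iabc", score=95)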

If we are truly going to make a Next Generation Digital Learning Ecosystem, systems like Tsugi and Koseu need to look beyond the traditional LMS market and to emerging platforms like Google Classroom.

by Charles Severance at December 07, 2017 02:57 PM

December 01, 2017

Adam Marshall

Using WebLearn to aid student induction

Lettitia Derrington (Department for Continuing Education) received a project grant to develop a WebLearn feature that could be made available across the University to support the online induction of postgraduate students. http://blogs.it.ox.ac.uk/adamweblearn/2015/11/university-teaching-awards-2015/

Lettitia built an ‘Induction Lessons Tool’, made entirely with the standard Lessons Tool in WebLearn. It was trialled within the CPD Centre, Department for Continuing Education in 2016-17 (with eight programme sites), offered to other teams within ContEd in 2017-18 (four additional sites) and is now in a format that could be rolled out to the wider University.

We asked Lettitia to describe how she customised WebLearn in this way to meet the needs of tutors and students in ContEd.

The aim was to create an induction area for postgraduate students as an alternative to, or to complement face-to-face induction activities. We wanted to provide a sense of progression through the various steps that students need to take and to make the information visually appealing and the content engaging. We also needed to ensure that we provided the essential information that departments have to provide for inductions as well as signposting useful information from the wider University.

I used the Lessons Tool to create a grid layout and to embed images and videos easily. It also allows me to embed generic files held on a central site, so that updating the information can be carried out in one location with the changes automatically updating all individual sites.

I hope that describing the different areas with a question (‘Have you…?’) emphasises the importance of the tasks, which is substantiated by quotes from previous students describing how important/useful each one is.

I used colour on the main boxes as a differentiating factor, and I have carried the colour coding through to the subpages. I don’t think that it is very obvious to the students, but the subpages are divided into white and coloured boxes. The white information contains the essential information that departments have to provide, whilst the coloured boxes contain additional information to enhance the student experience.

Feedback from students has been good. The Induction section has been very useful as a resource to help them to do everything they need to be ready. All in all, it has had an immensely positive impact in supporting the delivery and administration of ContEd courses.

by Adam Marshall at December 01, 2017 02:54 PM

November 29, 2017

Dr. Chuck

Net Neutrality as Applied to Stop Lights

I was asked for my comments on Net Neutrality by a reporter and so I wrote this.

In general, I think that it is difficult to predict exactly what bad things will happen and in what order. Once Net Neutrality is no longer an underlying value of the Internet, those who control critical core internet resources like fiber links and peering points will look for opportunities to hold traffic “hostage” to make more money.

I believe that in general, “small sites” may not really notice any change except for a general inflation in bandwidth prices as hosting providers like Amazon, Internet2 and DigitalOcean are held hostage. The wholesale cost of bandwidth could easily double in 2-3 years once “Net Discrimination” is officially legal. Large sources of bandwidth like Netflix, Hulu, etc are going to see “death by a thousand cuts” as every little router owner all around the world is going to want their cut. The cost for bandwidth is likely to go up by a factor of 2-3.

It is as if you are driving through a city and every stoplight has a toll booth where you need to stop and take out your credit card to pass. Assume a bunch of different companies you never heard of each “owned” one or two stoplights in downtown Mountain View. When they “bought” the stoplights there was a rule that you could not charge for a green light but instead you got a small fixed amount of money to run your stop lights to move traffic as efficiently as possible. Now with “Stoplight Discrimination” legalized, they can hold car manufacturers “hostage” for green lights. Tesla can pay them so their cars can send a signal to the stoplight so it immediately turns green whenever a Tesla is moving toward the stoplight.

The problem is that sooner or later all the major car manufacturers will have to pay each of the little stoplight companies the “stoplight toll” and then all traffic will again be treated equally and the companies will be getting very rich.

And the worst part of it is that there is little incentive to improve the roads or generally improve stoplight technology – because the frustration of the drivers and their complaints to car manufacturers leads to more revenue for the stoplight owners. And when things get really bad the stoplight companies can tell car companies they can purchase the “silver level” to once again get differential treatment for their cars. And then everyone buys the “silver” level so they introduce “gold level” traffic discrimination – and so on. Soon, “platinum”, “ruby”, “sapphire”, and “diamond” levels emerge – and then “double diamond”. The worse traffic gets snarled up because of the byzantine pay-for-play rules, the more money these stoplight companies make.

The revenue potential for doing absolutely nothing more than what you were already doing is immense. No wonder these companies love Net Discrimination.

And our costs of all these services will go up because the “stoplight tax” will ultimately just be passed to the consumers.

by Charles Severance at November 29, 2017 02:07 PM

November 21, 2017

Michael Feldstein

Fear and Loathing in the Moodle Community

Moodle News responded to our recent coverage regarding the platform’s declining market share by pushing back hard with an article that simultaneously insinuates bias on our part and attempts to use our numbers to draw the opposite conclusions from the ones that we have put forward. This presented us with something of a dilemma. As professional analysts, we try very hard to be undefensive when we are critiqued. If people think we are biased, they are entitled to their opinions. If people question our data or analysis, we try to respond only if we think their critique of us has merit. If we don’t think it does, we generally let our original work speak for itself unless we have a specific reason to do otherwise. So, from this perspective, our default editorial position is to let Moodle News have their say and leave it alone. I will add that our overall impression of Moodle News has been that it is generally a fair and thoughtful outlet. We have no particular desire to pick a fight with them.

On the other hand, we are not just analysts. One of the more common compliments we get from people who trust our work is that they believe it is animated by a concern for improving education. I take these comments to mean more than just that we “care” in some abstract sense. Rather, they seem to be saying that our choices of what stories we cover and how we cover them are animated by our desire for our analysis to be a tool for educators to help improve education. From that perspective, it’s hard for me personally to let the Moodle News story go unanswered. I care about what happens to Moodle, not so much because I care about Moodle in and of itself but because I care about the good that the platform and the community do for education across the globe.

Sometimes when we poke at a group because of a problem, it’s partly because we want to call their attention to it in the hopes that they fix it. If we poke harder, it may be because we’re not convinced that they’re paying attention to the dangers that we see. (See, for example, Phil’s recent Unizin coverage.)

Beyond the Moodle News piece and the occasional (but energetic) challenges we get from Moodle advocates when we present our numbers at conferences, the case for poking here is bolstered by Martin Dougiamas’ periodic public questioning of our analysis, including his comment on that aforementioned last post. The bulk of that comment was as follows:

I get that you are a fan of certain companies and that is fine, but I don’t understand the highly negative and uninformed spin from you lately. Look at the title of this article! These feel like intentional attacks, to be honest, and I have to wonder why.

The fact is your entire article here is based on the false supposition that our business model is a) static and b) based entirely on higher ed. However, we already have a number of new and exciting initiatives that you clearly don’t know about (MoodleCloud, MoodleNet, LearnMoodle, MoodleServices as well as new things not announced yet) that are supporting our current and future growth.

Sustainability of Open Source and all Open initiatives is something we care about deeply and is part of everything we do.

Again, readers have to come to their own conclusions regarding the quality of and motivations for our posts. From our perspective, we tend to write negative headlines when we see negative news. We tend to get more negative in our tone when we think the people we are trying to reach are not hearing us (or are not honest, though that is not an accusation that I am making here).

As somebody who has been very directly involved with and committed to open source projects myself, I have learned that the very passion which drives participation can also cause advocates to dismiss any bad news as “fake news.” This can be fatal to a project. Moodle has massive market share, a fresh infusion of cash, and plenty of talent. There is time to address any challenges that the project faces. But only if those challenges are faced.

I have decided to take one more run at this topic because I would like to see the Moodle community succeed, and in order to do that, I believe it will have to grapple with challenges that I don’t see evidence that it is fully grappling with yet.

The Passion Play of Open Source in Crisis

While the role I have chosen for myself in the ed tech ecosystem has required me to be ecumenical in recent years, I had previously been an active participant in two different open source LMS projects. The first was a system called dotLRN, which came into existence around 2000 and seems to have died around 2010. (The web site is still up, but nobody appears to be home.) From a functional perspective, dotLRN was fantastic. In retrospect, it was a decade ahead of its competition in a number of ways. But its technology stack was quirky. It was written in a programming language called TCL—which stands for “tool command language” but is referred to by its proponents as “tickle”—and ran on top of an early open source application server called AOLServer. I was told by people I trusted that these were perfectly valid technical choices that had distinct advantages over contemporary alternatives. As a newbie to software development and a non-engineer, I had no reason to doubt those assessments.

But while I didn’t know much about technology, I did know how to listen. Over time, it became clear to me that attracting new developers and winning the confidence of university IT departments would be hard. Nobody seemed to think that learning a programming language called “tickle” and an application server called “AOLServer” was a path to a career on the cutting edge. I grew more confident in that assessment as it became clear that adoption growth of the platform had hit a wall and it wasn’t even getting considered as an option in most cases. I raised this concern with the community, but many of the engineers continued to insist that the superiority of their choices would win in the end. And they could point to all the strengths of their platform relative to the competition as evidence for their case.

The problem came to a head one day for me when I made a discovery about one of those strengths. There was a situation—I’ve long since forgotten the details—where the dotLRN developers decided to turn one of the capabilities of the platform into a web service. I asked, “How hard was that to do?” The answer? “Trivial. It’s just configuration. We can turn pretty much any API call into a web service with the flip of a switch.”

I was stunned. This seemed like the way out! Let the dotLRN core continue to be developed by a limited number of specialized engineers, similarly to how web servers like Apache were being written in C by a small number of hard-core specialized developers. Develop a new front end using a more popular language, which at that time would have likely been PHP, Python, or Java. Use web services to communicate between the two. This was before REST and JSON had really hit the scene, so they would have been XML-based web services. But the point was, nobody would have to learn TCL. It would change everything, I thought.

The dotLRN developer community was less enthusiastic. They believed their development approach was technically superior. Why should they pander to the least common denominator by promoting an awful, inelegant language like PHP? Besides, they had a huge installation in Brazil. Things were looking up, they said.

It turns out that many open source communities, when faced with declining adoption that is forcing consideration of unpalatable choices, become some version of David Hasselhoff. “I may be the butt of jokes in the US, but I’m huge in Germany!” That is a bad place to be. Before you know it, the only things people remember you for are your deeply unfortunate “drunken stupor” YouTube video and your cameo in the SpongeBob movie.

Could a switch to web services have saved dotLRN? I don’t know. It would have been tough no matter what. But the point is that open source communities are often driven by the passion of the participants. While it was exactly that commitment to the mission that attracted me to dotLRN at a time when the proprietary alternatives made me very queasy, it was that very same emotion that blinded some of the most passionate and committed community members to problems that turned out to be existential.

When I left the dotLRN community, it was in the process of tearing itself apart. There was a lot of internal debate about whether anything needed to be done for the health of the project and, if so, what. One particularly bright and charismatic young leader in the community became offended that there was resistance to the direction that he (and his company) wanted to take dotLRN. So he left. (I hear he has since gone on to do great work under the auspices of the Mozilla community.) There was a sense among the remaining members that their suffering was externally inflicted. How could the world fail to appreciate the wonderful things that they had built and shared for everyone’s benefit? They were the victims, sacrificed by the unappreciative mob as thanks for their self-sacrificing effort. And now....

I have seen similar challenges, albeit to a much lesser degree, in the Sakai community. By and large, that community has a more realistic grasp of where their challenges are and what their niche is. They also appear to be fairly stable at the moment.

But passion can blind the Sakai community members from time to time too. (“We’re big in Spain!”) I can remember a time not too many years ago when people in that community—smart people whom I respect and whose livelihoods depended on the health of the platform—thought everything was just fine. One in particular told me that all Sakai needed was a refresh of the grade book and the test engine. Otherwise, everything was just great. (I’m not sure whether this was before or after Sakai finally fixed a fundamental problem where the platform broke the browser back button, but if it was after, it wasn’t long after.)

That guy is now the CTO for a company that is not Sakai-focused.

More recently, the last time I was at an Apereo conference, I attended a session whose abstract said the presenters would talk about how they welcome RFP processes and have kept Sakai competitive at their schools. I was curious. What followed was a litany of the presenters raising every imaginable obstacle to choosing Sakai over competitors, including many reasonable ones, and off-handedly dismissing them one by one. Perhaps inevitably, they came around to Phil’s squid diagram. They clearly didn’t know either that I was connected to the graph or that I had previously served on the Sakai Foundation Board of Directors. I didn’t say anything; I hadn’t gone to the session to ambush the poor guys. But others in the room knew who I was. All eyes turned to me, and several people pointed at me. The guy presenting in the moment rehearsed his arguments.

“That graph is based on old data, right?”

No, you just have an old version. The latest version is up on the blog, and it doesn’t look any better.

“But this is just a small sample.”

About 90% of US and Canadian institutions.

You get the idea.

No organization can remain healthy and effective if it doesn’t balance its passion with a healthy dose of skepticism and self-reflection.

Which brings us to the Moodle News piece.

Getting the Numbers Right

There appear to be three sources of misunderstanding in the article in question. The first is not understanding our point about the intersecting trends of the collapse of new implementations (meaning very few new schools choosing to adopt Moodle in a particular year) and an increase in decommissions (meaning more schools replacing Moodle with another LMS). In North American higher education, we are already seeing a decline in total Moodle institutions of 45 – 50 (depending on when you measure). This is roughly equal to the 1% rounding number that we mentioned in the post (i.e., 1% of the 4,500+ institutions in our North American dataset).

The Moodle News author is correct in noting that for the horse race view, Blackboard and Moodle are closer than ever when measured by the percentage of institutions using each as the primary system. However, this is mainly because Blackboard’s market share has collapsed much faster than Moodle’s. Arguing that this shows Moodle is strong is a little bit like arguing that MySpace is strong because it has surpassed Napster’s market share (though I will grant that a slow decline is better than a fast one).

The second problem is in not understanding the difference between new implementations and installed base. The chart I referenced in my last Moodle post measures the percentage of new implementations in each year going to each LMS – this is where (at the time of the article) 0% of 2017 new implementations went to Moodle, 55% went to Canvas, and 44% to D2L. (To be fair, we’ve detected a few new implementations since that last piece; we will include them in our next periodic update of the graphs.)

The author then asks:

All of which leaves us to wonder: How can an LMS have no growth while its close competitors show increases of 55% and 44% (for D2L, now with a 15% market share), and end the year with its market share unchanged?

Market share is based on installed base—the percentage of institutions having a particular LMS as their primary system as measured at a point in time. To get this number, we add new implementations, subtract decommissions, account for the changing denominator in the total number of institutions (there is a lot of consolidation and closing happening in the US), and round to the nearest whole percentage.

So the answer to the author’s question is that Moodle has lost 45 - 50 net institutions in North American higher education. In other words, roughly equal to the 1% rounding number.
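
To make that arithmetic concrete, here is a toy version in Python. The numbers are hypothetical (they are not e-Literate's dataset); the point is simply that a net loss of roughly 50 institutions is about one percentage point of a 4,500-institution denominator and can be hidden entirely by whole-number rounding.

def market_share_pct(institutions_on_lms, total_institutions):
    """Installed-base market share, rounded to a whole percentage point."""
    return round(100 * institutions_on_lms / total_institutions)

# Hypothetical numbers for illustration only.
start_of_year = market_share_pct(795, 4550)          # -> 17
# Five new implementations, 53 decommissions, and a slightly smaller
# denominator after campus consolidations and closings.
end_of_year = market_share_pct(795 + 5 - 53, 4500)   # -> 17
print(start_of_year, end_of_year)  # the ~50-institution net loss disappears in rounding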

The third problem is not understanding the difference between primary institutional adoptions, which is the measure that we use in our analysis, and the measure of the number of registered sites in Moodle.org’s stats. We don’t, for example, count the sites that may be self-run by an individual professor or department and that may be registered with Moodle.org. The latter may be important to the community from a mission perspective, but those adoptions do not impact the financial resources available for development of the platform. You can’t combine the two numbers.

Speaking of which, the situation looks even worse for Moodle the closer we get to counting things that directly translate into revenues. The numbers that we track in many of our charts and graphs are the numbers of institutions that adopt different platforms. But the number that really matters for the financial health of most of these platforms is the number of students, because institutions pay their vendors, including the Moodle Partners that bankroll Moodle HQ, by enrollment. For example, Southern New Hampshire University (which is moving to D2L) and Glendale Career College (which is on Moodle) each count as one institutional customer. But SNHU has over 100,000 students, while GCC has 300.

When measuring by numbers of students, Moodle is fourth in the US and Canada, behind Blackboard Learn, Canvas, and Brightspace—and these numbers do not include losses from the University of Minnesota and other schools that have chosen to migrate off of Moodle in the near future. Note the numbers in the right-most column of this chart:

If, as our data show, the Moodle market share in US and Canada is much smaller when measured by numbers of students versus institutions, even as the number of Moodle institutions is shrinking, that is a bad picture for Moodle HQ’s financial health. If you care about Moodle, then you should be worried about these numbers.
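
A similarly toy sketch shows why institution share and student share diverge so sharply. All of the numbers below are made up except for the rough SNHU (100,000 students) versus GCC (300 students) contrast quoted above.

# Made-up campuses: two hypothetical LMSs with equal institution counts but
# very different enrollments per campus.
campuses = [
    {"lms": "A", "students": 100_000},   # one very large online university
    {"lms": "B", "students": 300},       # one small career college
    {"lms": "A", "students": 20_000},
    {"lms": "B", "students": 4_000},
]

def share_by(weight):
    totals = {}
    for campus in campuses:
        totals[campus["lms"]] = totals.get(campus["lms"], 0) + weight(campus)
    grand_total = sum(totals.values())
    return {lms: round(100 * n / grand_total) for lms, n in totals.items()}

print(share_by(lambda c: 1))              # by institutions: {'A': 50, 'B': 50}
print(share_by(lambda c: c["students"]))  # by students:     {'A': 97, 'B': 3}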

A Healthy Skepticism of Skepticism

Is it possible that we at e-Literate are biased against Moodle? Of course it is. While we try our best to be objective, we are humans, and humans can be biased. The Moodle News author spends a fair bit of energy insinuating that we have a preference for other platforms. I’m not going to address that question, partly because I’m not the best judge of my own biases and partly because it’s not the question that Moodle advocates should be asking. Rather, the main question they should be worried about, and should be investigating with as much absence of bias as they can muster, is the following:

Is e-Literate’s analysis true?

Could we be wrong? Of course we could. I very much doubt that we are off by much in the US and Canada, where our data are the strongest. The margin of error increases as we get to parts of the world where our coverage is less complete (or, in some cases, non-existent). Those areas should be clear—and, in fact, are referenced in the Moodle News article—because we quantify the data fidelity in our regional analyses. If Moodle is huge in Germany, or China, or Myanmar, we might not be picking that up in our data.

There is one person who may well have better data than we have, and Moodle advocates have reason to believe that he is not biased against Moodle. His name is Martin Dougiamas. He argued in the comment quoted at the top of the post that we don’t really understand or have visibility into the business model of Moodle Pty. If that is true, then we would like to be enlightened. And the Moodle community should want that too, for peace of mind.

Moodle Pty could provide the community with two kinds of information that would help its advocates understand how much they should be worried (or not). First, Moodle Partners likely give Moodle Pty counts of customers by institution and numbers of students. (I don’t know this for certain, but I’m not sure how Moodle Pty could verify that they are being properly paid by the partner without this information.) The company could publish this information, aggregated by country so that individual customers and partners have some degree of anonymity while still giving the community a sense of the inputs that fund Moodle development. A second disclosure that Moodle Pty could offer is direct revenue numbers that are transparent enough for community members and other interested parties to independently verify the company’s financial health. While Moodle Pty is not legally obligated to provide any of these numbers, there are disclosure models in both non-profit foundations and for-profit companies that the company could choose to follow. At the moment, the status of Moodle Pty as a private corporation shields it from transparency requirements of either non-profit foundations or publicly traded corporations. But that doesn’t mean that the company, as the main engine of sustainability for a huge open source project that, despite current growth challenges, remains by far the world’s most widely adopted academic LMS, couldn’t or shouldn’t choose to be transparent about numbers that are critical to the project’s sustainability.

If Moodle News really wants to check our numbers, then they should be asking Moodle Pty for adoption numbers of Moodle Partner-supported installations by institution, headcount, and/or revenues. And their biggest concern, as Moodle advocates, should not be whether some US analysts are talking smack about their favorite platform. It should be about how healthy and sustainable that platform truly is.

The post Fear and Loathing in the Moodle Community appeared first on e-Literate.

by Michael Feldstein at November 21, 2017 09:17 PM

Unizin Updates: A change in direction and a likely change in culture

After the resignation of Unizin's CEO (Amin Qazi) and COO (Robin Littleworth) that we reported last week, we can confirm that the key issue was a change in direction for the consortium driven by the board of directors. Our information is based on on-the-record interviews with Qazi and Littleworth and additional interviews with Unizin staff, member institution staff, and outside sources. We believe this change in direction led to the resignations and will likely also lead to a change in emphasis on various Unizin initiatives.

To recap what happened last week and add some details, there were two back-to-back board meetings for Unizin and Kuali held in Austin, TX. These meetings were not emergency meetings and were scheduled a long time ago, based on Unizin's headquarters and Kuali's users conference being held in that city. In an interview and follow-up discussion over the past few days, Amin Qazi described how he had not expected to resign going into the week. But in a meeting last Monday with the executive committee of Unizin, the board described a change in direction that they wanted to make, focusing on investments in initiatives with shorter-term visibility instead of those with a longer-term payoff such as the Open edX and Google partnerships. Qazi said that he was not the right person to lead the company in that direction, and after this meeting he resigned.

I was told that on Monday night Rob Lowden, Associate Vice President of Enterprise Systems at Indiana University, was asked to fly down to Austin as a result of this resignation. At the Tuesday Unizin board meeting, the board approved his selection as interim executive director while it searches for a new CEO. Robin Littleworth said that he was told somewhat conflicting information in his own meeting with the board: that there was no change in direction.

Coming out of the board meeting, there was an all-hands meeting with Unizin staff, and board members told them of the changes. There was a question about rumors that Kuali.co might be acquiring Unizin, and the board members stated that this rumor was not true. Later in the meeting Littleworth gave an impassioned speech that the staff was the company, and that due to the changes and how they were handled the board had seriously harmed the company culture. He then announced his resignation. According to Littleworth, he hopes that his resignation and speech might alert the board that they didn't think the situation all the way through and that they should reconsider how to support the company moving forward. Rob Lowden, for his part, still has his full-time job at Indiana University, but he told staff during this meeting that he would be commuting weekly to Austin for the next several months during the transition.

I suspect that we'll need to analyze the change in direction in more depth as details come out, but I believe that this situation is not based on finances or problems getting member institutions to recommit; rather it is a matter of emphasis on shorter-term versus longer-term initiatives.

All Unizin member institutions that signed on in 2014 have re-signed to new three-year agreements, and according to Unizin Form 990 submissions, the consortium had $2 million in assets as of summer 2016 while running a surplus - meaning that this balance should be even higher today. Furthermore, Littleworth stated that the Unizin management team was "not given any indication from our Board, let alone anything in writing, that we were at all underperforming or not meeting expectations".

What Qazi and Littleworth were pushing for were initiatives that directly addressed member institution needs even though they may take time to develop. One example is the recently announced Open edX partnership. In an interview, Thomas Evans at The Ohio State University described that school's desire to explore micro-credentials and to figure out how they would fit into an overall OSU strategy. Despite OSU's partnership with Coursera, or actually because of it, the school did not want to figure this out with a platform company that would take a percentage of revenue. The Unizin / Open edX agreement is allowing OSU to pilot programs and figure out a strategy over the next few years.

What we are likely to see with the Unizin change in direction is a stronger emphasis on partnerships and developments focused on near-term positioning of the consortium, including the BNED LoudCloud analytics partnership.

The key intellectual property that Unizin has developed over the past few years is the Unizin Data Platform with its associated Unizin Common Data Model (UCDM). From a post on the UCDM:

The UCDM rules map student, course, instruction, and learning activity information together. They solve the problem of “connecting the dots” between all of the data sources to create a single view of the student in the context of learning. As the data flows in from the SIS, LMS, and learning tools, the rules are applied to each data element, like a puzzle piece, to make sure that it is oriented to contribute to the whole picture.

We at e-Literate have been critical of Unizin over the years for not having a clear value proposition. But from my conversations over the past two years with Unizin member institutions, the biggest value thus far from the consortium was this data platform and the hard work done to turn messy LMS and SIS data into usable formats. We have also heard from two outside sources recently that Unizin has had some real success using Engage to provide Inclusive Access digital content (course content available day one of term through institutional agreements) to several schools. And the OSU description of why they are using Open edX is compelling with its alignment with the stated Unizin mission.

We don't know all the details of the change in direction, but we believe this change is what triggered the management resignations last week. I will be quite interested to see if the changes affect the three initiatives mentioned above and pull the organization backwards in terms of creating value for its member institutions.

Given the change, however, I believe there will also be a corresponding change in company culture that is inevitable at Unizin. Qazi and Littleworth (I have had many more interactions with the former but believe both to have been aligned) had an open, transparent, collegial style. Rather than ever getting defensive from questions we have asked or posts we have written at e-Literate, the two departing Unizin executives went out of their way to listen to criticism, engage us in conversations, and not try and control messaging but favor transparency instead.

Qazi described how he was honored to have had the responsibility to guide Unizin through three hard years of launching the company, and he is proud of the team that Unizin has - they have a great deal of passion and dedication, and they have been asked to solve some very different problems from universities. Littleworth also expressed his primary pride in the Unizin staff and what they are accomplishing.

By way of contrast, we have found the Unizin board to be quite focused on controlling the message. The board specifically asked both Qazi and Littleworth not to talk to me, but given that they had no employment agreements controlling who they could talk to, both declined. I have asked to speak to board members for this story over the past few days with no response until they put out a press release today. The press release thanked Qazi for his service in a classy way and briefly noted Lowden's new role while not mentioning Littleworth. But it contained no information that I did not already have. After the press release came out I was invited, not by a board member but by a communications specialist, to submit questions for the board to address. I will do so for follow-up analysis.

There is little doubt in my mind that the new Unizin leadership will be much more tightly controlled by the Unizin board, and they will take on much more of the board's characteristics. This will likely lead to a change in company culture.

Where does this leave Unizin? The consortium has money and three-year agreements in place. But there is a lot of work to be done before the consortium can deliver the value justifying $250k - $427k per year membership fees. As Littleworth described, the company is still in its infancy but now is changing direction while missing its critical leadership.

While the following is not based on my interviews, I find the choice of Lowden as interim executive director to be quite interesting and a big part of the reason that I believe the 'change in direction' argument. If the board truly wanted to continue in the same direction despite Amin's resignation, why not promote Steve Scott (CTO) or Robin Littleworth (COO), at least during the transition? Remember that Littleworth did not resign until after Lowden was selected, and there was no discussion with the former COO about what to do next. Bringing in someone from outside so quickly seems significant. Furthermore, Lowden has a long history at Indiana University working for Unizin co-founder Brad Wheeler, and he was also involved early on with Sakai and then with the Kuali board - initiatives heavily influenced by Wheeler. Given his full-time job, the choice to use Lowden to replace the full-time executive team for the next several months will lead to a challenging situation at a crucial time, to say the least, even though Qazi is staying on board through December to help with the transition. Was this choice partially worked out in advance, or did the board really react to Qazi's resignation and find an interim replacement within 24 hours? What will Lowden be able to accomplish given his logistical challenges (Indianapolis vs. Austin, and having multiple jobs)? I will attempt to get answers from the board on these questions.

What I would watch over the next few months is whether Unizin loses additional staff due to the changes. And I would also watch the direction of the Unizin Data Platform in particular to understand the extent of changes to strategy.

The post Unizin Updates: A change in direction and a likely change in culture appeared first on e-Literate.

by Phil Hill at November 21, 2017 12:05 PM

November 20, 2017

Adam Marshall

Usage Statistics

Every year we traditionally present the number of unique logins in Week 1 of Trinity Term. This can be viewed as a rough measure of the growth of the service. As you can see, apart from one year, usage has increased steadily year on year.

by Adam Marshall at November 20, 2017 03:37 PM

November 17, 2017

Michael Feldstein

Big Changes at Unizin: CEO and COO resign after board meeting

Three and a half years after its formation, Unizin is facing its biggest challenge. Now that the consortium is dealing with contract renewals (membership based on three-year agreements), and now that it is a standalone organization and not wrapped under Internet2, Unizin will face the future without its top management.

There's a lot more here than just a change of one or two executives, and we plan to share more analysis next week here at e-Literate. We have also reached out to get comments from the various people involved. For now, however, here are the basics.

This week there were two board meetings in Austin, TX - one for Unizin and one for Kuali - due to the logistics of having several people serving on both boards. We have confirmed based on multiple sources that after a meeting with the Unizin executive committee but before the board meeting, CEO Amin Qazi turned in his resignation. One day later, after the board approved a new interim CEO, COO Robin Littleworth turned in his resignation.

The interim CEO is Rob Lowden, Associate VP of Enterprise Systems at Indiana University and a long-time active member of the Kuali community and, prior to that, the Sakai community (including board positions in those two open source organizations). To the best of my knowledge, Lowden will remain in his job at IU while at the same time running Unizin until the board selects new executives.

Expect more from us next week.

Update: Clarified timing of resignation.

The post Big Changes at Unizin: CEO and COO resign after board meeting appeared first on e-Literate.

by Phil Hill at November 17, 2017 06:58 PM

November 10, 2017

Dr. Chuck

An Accidental Internet Historian

This is a story that I tell people over and over when I meet them – so I figured I would make a blog post so I could have it written down somewhere.

I am an accidental Internet historian. It all started in 1995-1999, when I hosted a cable television program with my friend Rich Wiggins. It was a talk show about the Internet that ended up with three names over time as the cable TV companies bought one another over that period. Here is a YouTube playlist of those shows:

https://www.youtube.com/playlist?list=PLlRFEj9H3Oj5Dlcu6P92S5dpmb3ihr7QW

This led to me having lots of cool early Internet video, and for a number of years I wrote a column called Computing Conversations in IEEE Computer Magazine that paired a short print article with an associated video interview. Here is a YouTube playlist of that work:

https://www.youtube.com/playlist?list=PL4660FB7F523B1770

You can see my old material and new interviews since 2012 interwoven. I also attach a couple of articles to give you a sample. We even made an NPR-style audio interview for each of the columns:

https://www.youtube.com/watch?v=8jysBmy5Bec

Then in 2012, I re-did all this material in the form of a Coursera class titled Internet History, Technology, and Security – that is how Niel and I crossed paths. Here is the course and a Youtube channel of the lectures and media:

https://www.coursera.org/learn/insidetheinternet

https://www.youtube.com/playlist?list=PLlRFEj9H3Oj6-srSAgLb-ZGVNGlo3v14X

I even turned this all into a textbook on the basics of the Internet that I wrote for a Khan Academy high school course on TCP/IP. I finished the book but never built the Khan Academy course.

http://www.net-intro.com/

The latest activity was a live Teach Out – a one-week open learning activity that I did with Doug Van Houweling called “Internet and You”.

https://www.coursera.org/learn/teach-out-internet-and-you

This is a story that will keep going as long as I find new folks to interview and add to this collection – I hope folks enjoy this material.

by Charles Severance at November 10, 2017 04:28 PM

October 11, 2017

Adam Marshall

WebLearn Tests tool

A powerful way for your students to test their understanding…

WebLearn offers a tool called ‘Tests’, which allows tutors and lecturers to create pools of questions and to build assessments by selecting questions to present to students. Students can take the test in their own time and benefit from hints and feedback provided by the tutor when creating the questions.

A range of question types is available.

The process of creating a test involves the following steps:

  1. Create questions in a question pool – options are available to provide overall feedback or hints
  2. Build an assessment by selecting questions to present (either sequentially or randomly)
  3. Test drive the assessment (test) as a student
  4. Set options such as open and close dates, time limits, number of attempts etc.
  5. Publish the test for students to take

The Tests tool keeps track of all attempts and provides a report showing student names, start and finish times, and scores. The data can be exported to Excel for further analysis.
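
As a small example of that kind of follow-up analysis, here is a hedged Python/pandas sketch. The file name and column headings are hypothetical stand-ins; use whatever headings your exported report actually contains.

# Hypothetical export "scores.xlsx" with columns: Student, Start Time,
# Finish Time, Score (adjust the names to match your actual report).
import pandas as pd

report = pd.read_excel("scores.xlsx")
report["Minutes"] = (
    pd.to_datetime(report["Finish Time"]) - pd.to_datetime(report["Start Time"])
).dt.total_seconds() / 60

print(report["Score"].describe())                          # mean, spread, quartiles
print(report.nsmallest(5, "Score")[["Student", "Score"]])  # students to follow up with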


Contact us

If you would like to discuss possibilities for using the Tests tool in your courses, contact our team of learning technologists at weblearn@it.ox.ac.uk

by Jill Fresen at October 11, 2017 03:36 PM

Apereo OAE

Getting started with LibreOffice Online - a step-by-step guide from the OAE Project

As developers working on the Apereo Open Academic Environment, we are constantly looking for ways to make OAE better for our users in universities. One thing they often ask for is a more powerful word processor and a wider range of office tools. So we decided to take a look at LibreOffice Online, the new cloud version of the LibreOffice suite.

On paper, LibreOffice Online looks like the answer to all of our problems. It’s got the functionality, it's open source, it's under active development - plus it's backed by The Document Foundation, a well-established non-profit organisation.

However, it was pretty difficult to find any instructions on how to set up LibreOffice Online locally, or on how to integrate it with your own project. Much of the documentation that was available was focused on a commercial spin-off, Collabora Online, and there was little by way of instructions on how to build LibreOffice Online from source. We also couldn't find a community of people trying to do the same thing. (A notable exception to this is m-jowett who we found on GitHub).

Despite this, we decided to press on. It turned out to be even trickier than we expected, and so I decided to write up this post, partly to make it easier for others and partly in the hope that it might help get a bit more of a community going.

Most of the documentation recommends running LibreOffice Online (or LOO) using the official Docker container, found here. Since we recently introduced a dockerised development setup for OAE, this seems like a good fit. A downside to this is that you can’t tweak the compilation settings, and by default, LOO is limited to 20 connections and 10 documents.

While this limitation is fine for development, OAE deployments typically have tens or hundreds of thousands of users. We therefore decided to work on compiling LOO from source to see whether it would be possible to configure it in a way that allows it to support these kinds of numbers. As expected, this made the project substantially more challenging.

I’ve written down the steps to compile and install LOO in this way below. I’m writing this on Linux but they should work for OSX as well.

Installation steps

These installation steps rely heavily on this setup gist on GitHub by m-jowett, but have been updated for the latest version of LibreOffice Online. To install everything from source, you will need to have git and Node.js installed; if you don’t already have them, you can install both (plus npm, node package manager) with sudo apt-get install git nodejs npm. You need to symlink Node.js to /usr/bin/node with sudo ln -s /usr/bin/nodejs /usr/bin/node for the makefiles. You’ll also need to install several dependencies, so I recommend creating a new directory for this project to keep everything in one place. From your new directory, you can then clone the LOO repository from the read-only GitHub using git clone https://github.com/LibreOffice/online.git.

Next, you’ll need to install some dependencies. Let’s start with the C++ library POCO. POCO has dependencies of its own, which you can install using apt: sudo apt-get install openssl g++ libssl-dev. Then you can download the source code for POCO itself with wget https://pocoproject.org/releases/poco-1.7.9/poco-1.7.9-all.tar.gz. Uncompress the source files, and as root, run the following commands from your newly uncompressed POCO directory:

./configure --prefix=/opt/poco
make install

This installs POCO at /opt/poco.

Then we need to install the LibreOffice Core. Go back to the top level project directory and clone the core repository: git clone https://github.com/LibreOffice/core.git. Go into the new 'core' folder. Compiling the core from source requires some more dependencies from apt. Make sure the deb-src line in /etc/apt/sources.list is not commented out. The exact line will depend on your locale and distro, but for me it’s deb-src http://fi.archive.ubuntu.com/ubuntu/ xenial main restricted. Next, run the following commands:

sudo apt-get update
sudo apt-get build-dep libreoffice
sudo apt-get install libkrb5-dev

You can also now set the $MASTER environment variable, which will be used when configuring parts of LibreOffice Online:

export MASTER=$(pwd)

Then run ./autogen.sh to prepare for building the source. Finally, run make to build the LibreOffice Core. This will take a long time, so you might want to leave it running while you do something else.

After the core is built successfully, go back to your project root folder and switch to the LibreOffice Online folder, /online. I recommend checking out the latest release, which for me was 2.1.2-13: git checkout 2.1.2-13. We need to install yet more dependencies: sudo apt-get install -y libpng12-dev libcap-dev libtool m4 automake libcppunit-dev libcppunit-doc pkg-config, after which you should install jake using npm: npm install -g jake. We will also need a python library called polib. If you don’t have pip installed, first install it using sudo apt-get install python-pip, then install the polib library using pip install polib. We should also set some environment variables while here:

export SYSTEMPLATE=$(pwd)/systemplate
export ROOTFORJAILS=$(pwd)/jails

Run ./autogen.sh to create the configuration file, then run the configuration script with: 

./configure --enable-silent-rules --with-lokit-path=${MASTER}/include --with-lo-path=${MASTER}/instdir --enable-debug --with-poco-includes=/opt/poco/include --with-poco-libs=/opt/poco/lib --with-max-connections=100000 --with-max-documents=100000

Next, build the websocket server, loolwsd, using make. Create the caching directory in the default location with sudo mkdir -p /usr/local/var/cache/loolwsd, then change caching permissions with sudo chmod -R 777 /usr/local/var/cache/loolwsd. Test that you can run loolwsd with make run. Try accessing the admin panel at https://localhost:9980/loleaflet/dist/admin/admin.html. You can stop it with CTRL+C.

That, as they say, is it. You should now have a LibreOffice Online installation with maximum connections and maximum documents both set to 100000. You can adjust these numbers to your liking by changing the --with-max-connections and --with-max-documents flags when configuring loolwsd.

Final words

Overall, I found this whole experience a bit discouraging. There was a lot of painful trial and error. We are still hoping to use LibreOffice Online for OAE in the future, but I wish it was easier to use. We'll be posting a request in The Document Foundation's LibreOffice forum for a docker version without the user limits to be released in future.

If you're also thinking about using LOO, or are already, and would like to swap notes, we'd love to hear from you. There are a few options. You can contact us via our mailing list at oae@apereo.org or directly at oaeproject@gmail.com

October 11, 2017 11:00 AM

September 18, 2017

Sakai@JU

Online Video Tutorial Authoring – Quick Overview

As an instructional designer, a key component of my work is creating instructional videos. While many platforms, software packages and workflows exist, here's the workflow I use:

  1. Write the Script: This first step is critical, though to some it may seem rather artificial. Writing the script helps guide and direct the rest of the video development process. If the video is part of a larger series, inclusion of some ‘standard’ text at the beginning and end of the video helps keep things consistent. For example, in the tutorial videos created for our Online Instructor Certification Course, each script begins and ends with “This is a Johnson University Online tutorial.” Creating a script also helps ensure you include all the content you need to, rather than ad-libbing – only to realize later that you left something out. As the script is written, particular attention has to be paid to consistency of wording and verification of the steps suggested to the viewer – so they're easy to follow and replicate. Some of the script work also involves setting up the screens used – both as part of the development process and as part of making sure the script is accurate.

  2. Build the Visual Content: This next step could be wildly creative – but typically a standard format is chosen, especially if the video content will be included in a series or block of other videos. Often, a 16:9 aspect ratio is used for capturing content, as it can accommodate both text and image content more easily. Build the content using a set of tools you're familiar with. The video above was built using the following set of tools:
    • Microsoft Word (for writing the script)
    • Microsoft PowerPoint (for creating a standard look, and inclusion of visual and textual content – it provides a sort of stage for the visual content)
    • Google Chrome (for demonstrating specific steps – layered on top of Microsoft PowerPoint) – though any browser would work
    • Screencast-O-Matic (Pro version for recording all visual and audio content)
    • Good quality microphone such as this one
    • Evernote’s Skitch (for grabbing and annotating screenshots), though use of native screenshot functions and using PowerPoint to annotate is also OK
    • YouTube or Microsoft Stream (for creating auto-generated captions – if it’s difficult to keep to the original script)
    • Notepad, TextEdit or Adobe’s free Brackets for correcting/editing/fixing auto-generated caption files (VTT, SRT or SBV)
    • Warpwire to post/stream/share/place and track video content online.  Sakai is typically used as the CMS to embed the content and provide additional access controls and content organization
  3. Record the Audio: Screencast-O-Matic has a great workflow for creating video content, and it even provides a way to create scripts and captions. I tend to record the audio first, which in some cases may require 2 to 4 takes. Recording the audio first provides a workflow to create appropriate audio pauses and to use tangible inflection and enunciation of terms. For anyone who has created a ‘music video’ or set images to audio content, this will seem pretty doable.
  4. Sync Audio and Visual Content: So this is where the use of multiple tools really shines. Once the audio is recorded, Screencast-O-Matic makes it easy to re-record, retaining the audio portion and replacing just the visual portion of the project. Recording the visual content (PowerPoint and Chrome) is pretty much just listening to the audio and walking through the slides and steps using Chrome. Skitch or other screen capture software may have already been used to capture visual content I can bring attention to in the slides.
  5. Once the project is completed, Screencast-O-Matic provides a one-click upload to YouTube, or the project can be saved as an MP4 file, which can then be uploaded to Warpwire or Microsoft Stream.
  6. Once YouTube or Microsoft Stream has a viable caption file, it can be downloaded, corrected (as needed) and then paired back with any of the streaming platforms.
  7. Posting the video within the CMS is as easy as using the LTI plugin (via Warpwire) or using the embed code provided by any of the streaming platforms.

by Dave E. at September 18, 2017 04:03 PM

September 01, 2017

Sakai Project

Sakai Docs Ride Along

Sakai Docs ride along - Learn about creating Sakai Online Help documentation on September 8th at 10am Eastern

by MHall at September 01, 2017 05:38 PM

August 30, 2017

Sakai Project

Sakai get togethers - in person and online

Sakai is a virtual community and we often meet online through email, and in real time through the Apereo Slack channel and web conferences. We have so many meetings that we need a Sakai calendar to keep track of them all.

Read about our upcoming get togethers!

SakaiCamp
SakaiCamp Lite
Sakai VC
ELI

by NealC at August 30, 2017 06:37 PM

Sakai 12 branch created!

We are finally here! A big milestone has been reached with the branching of Sakai 12.0. What is a "branch"? A branch means we've taken a snapshot in time of Sakai and put it to the side so we can improve it, mostly through QA (quality assurance testing) and bug fixing, until we feel it is ready to release to the world and become a community-supported release. We have a stretch goal from this point of releasing before the end of this year, 2017.

Check out some of our new features.

by NealC at August 30, 2017 06:00 PM

July 18, 2017

Steve Swinsburg

An experiment with fitness trackers

I have had a fitness tracker of some description for many years. In fact I still have a stack of them. I used to think they were actually tracking stuff accurately. I compete with friends and we all have a good time. Lately though, I haven’t really seen the fitness benefits I would have expected from pushing myself to get higher and higher step counts. I am starting to think it is bullshit.

I’ve had the following:

  1. Fitbit Flex
  2. Samsung Gear Wear
  3. Fitbit Charge HR
  4. Xiaomi Mi Band
  5. Fitbit Alta
  6. Moto 360
  7. Phone in pocket, set up to send data to Google Fit.
  8. Garmin Forerunner 735XT (current)

Most days I would be getting 12K+ just by doing my daily activities (with a goal of 11K): getting ready for work and children ready for school (2.5K), taking the kids to school (1.2K), walking around work (3K), going for a walk at lunch (2K), picking up the kids and doing stuff around the house of an evening (3.5K) etc.

My routine hasn’t really changed for a while.

However, two weeks ago I bought the Garmin Forerunner 735XT, mainly because I was fed up with the lack of Android Wear watches in Australia as well as Fitbit’s lack of innovation. I love Android Wear and Google Fit and have many friends on Fitbit, but needed something to actually motivate me to exercise more.

The first thing I noticed is that my step count is far lower than with any of the above fitness trackers. Like seriously lower. We are talking at least 30% or more lower. As I write this I am sitting at ~8.5K steps for the day and I have done all of the above plus walked to the shops and back (normally netting me at least 1.5K) and have switched to a standing desk at work which is about 3 metres closer to the kitchen than my original desk. So negligible distance change. The other day I even played table tennis at work (you should see my workplace) and it didn’t seem to net me as many steps as I would have expected.

Last night I went for a 30 min walk and snatched another 2K, which is pretty accurate given the distance and my stride length. I think the Fitbit would have given me double that.

This is interesting.

Either the Garmin is under-reporting or the others are over-reporting. I suspect the latter. The Garmin tracker cost me close to $600 so I am a bit more confident of its abilities than the $15 Mi band.

So, tomorrow I am performing an experiment.

As soon as I wake up I will be wearing my Garmin watch, with the Fitbit Charge HR right next to it, and keeping my phone in my pocket at all times. Both the watch and the Fitbit will be set up for left-hand use. The next day, I will add more devices to the mix.

I expect the Fitbit to get me to at least 11K, Google Fit to be under that (9.5K) and the Garmin to be under that again (8K). I expect the Mi Band to report a lot more than the Fitbit.

The fitness tracker secret will be exposed!

by steveswinsburg at July 18, 2017 12:46 PM

June 16, 2017

Apereo OAE

OAE at Open Apereo 2017

The Open Apereo 2017 conference took place last week in Philadelphia and provided a great opportunity for the OAE Project team to meet and network for three whole days. The conference days were chock full of interesting presentations and workshops, with the major topic being the next generation digital learning environment (NGDLE). Malcolm Brown's keynote was a particularly interesting take on this topic, although at that point the OAE team was still reeling from having a picture from our Tsugi meeting come up during the welcome speech - a surprising start to the conference! We noted how the words 'app store' kept popping up in presentations and in conversations among attendees again and again - perhaps this is something we can work towards offering within OAE soon? Watch this space...

The team also met with people from many other Apereo projects and talked about current and future integration work with several project members, including Charles Severance from Tsugi, Opencast's Stephen Marquard and Jesus and Fred from Big Blue Button. There's some exciting work to be done in the next few weeks... While Quetzal was released only a few days before the conference, we are now teeming with new ideas for OAE 14!

After the conference events were over on Wednesday, we gathered together to have a stakeholders meeting where we discussed strategy, priorities and next steps. We hope to be delivering some great news very soon.

During the conference, the OAE team also helped attendees use the Open Apereo 2017 group hosted on *Unity, which supported online discussion of presentation topics. A lot of content was created during the conference days, so be sure to check it out if you're looking for slides and/or links to recorded videos. The group is public and can be accessed from here.

OAE team members who attended the conference were Miguel and Salla from *Unity and Mathilde, Frédéric and Alain from ESUP-Portail.

June 16, 2017 12:00 PM

June 01, 2017

Apereo OAE

Apereo OAE Quetzal is now available!

The Apereo Open Academic Environment (OAE) project is delighted to announce a new major release: OAE Quetzal, or OAE 13.

OAE Quetzal is an important release for the Open Academic Environment software and includes many new features and integration options that are moving OAE towards the next generation academic ecosystem for teaching and research.

Changelog

LTI integration

LTI, or Learning Tools Interoperability, is a specification that gives developers of learning applications a standard way of integrating with different platforms. With Quetzal, Apereo OAE becomes an LTI consumer. In other words, users (currently only those with admin rights) can now add standards-compliant LTI tools to their groups for other group members to use.

These could be tools for tests, a course chat, a grade book - or perhaps a virtual chemistry lab! The only limit is what tools are available, and the number of LTI-compatible tools is growing all the time.
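
For readers who have not seen LTI under the hood, the sketch below shows the kind of parameters an LTI 1.x consumer such as OAE sends to a tool when a group member launches it. It is an illustration only, with hypothetical values, and is not OAE's actual code; a real launch is an OAuth 1.0-signed HTTP POST to the tool's launch URL.

    // Illustration only: core fields of an LTI 1.x basic launch request.
    // All values below are hypothetical; a real consumer signs the payload
    // with OAuth 1.0 (HMAC-SHA1) using a key/secret shared with the tool.
    interface LtiLaunchParams {
      lti_message_type: 'basic-lti-launch-request';
      lti_version: 'LTI-1p0';
      resource_link_id: string;   // unique id for this placement of the tool
      user_id: string;            // opaque id of the launching user
      roles: string;              // e.g. 'Instructor' or 'Learner'
      context_id: string;         // the group the launch comes from
      oauth_consumer_key: string; // identifies the consumer to the tool
    }

    const exampleLaunch: LtiLaunchParams = {
      lti_message_type: 'basic-lti-launch-request',
      lti_version: 'LTI-1p0',
      resource_link_id: 'group-1234-tool-1',
      user_id: 'u:oae:abc123',
      roles: 'Learner',
      context_id: 'g:oae:research-group',
      oauth_consumer_key: 'my-consumer-key',
    };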

Video conferencing with Jitsi

Another important feature introduced to OAE in Quetzal is the ability to have face-to-face meetings using the embedded video conferencing tool, Jitsi. Jitsi is an open source project that allows users to talk to each other either one on one or in groups.

In OAE, it could have a number of uses - maybe a brainstorming session among members of a globally distributed research team, or holding office hours for students on a MOOC. Jitsi can be set up for all the tenancies under an OAE instance, or on a tenancy by tenancy basis.
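
As a rough idea of what embedding Jitsi in a web page involves, the sketch below uses the public Jitsi Meet external API. It is a generic illustration rather than OAE's actual integration code, and the domain, room name and element id are placeholders.

    // Generic sketch of embedding a Jitsi meeting via the Jitsi Meet
    // external API. Assumes https://meet.jit.si/external_api.js has been
    // loaded on the page, which defines the JitsiMeetExternalAPI global.
    declare const JitsiMeetExternalAPI: new (
      domain: string,
      options: {
        roomName: string;
        parentNode: HTMLElement;
        width?: string | number;
        height?: string | number;
      }
    ) => unknown;

    const container = document.getElementById('meeting-container');
    if (container) {
      // Placeholder domain and room name; a real deployment would point at
      // its own Jitsi instance and a per-group room.
      new JitsiMeetExternalAPI('meet.jit.si', {
        roomName: 'example-brainstorming-room',
        parentNode: container,
        width: '100%',
        height: 600,
      });
    }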

Password recovery

This feature has been widely requested by users: the ability to reset their password if they have forgotten it. Now a user in such a predicament can enter their username, and they will receive an email with a one-time link to reset their password. Many thanks to Steven Zhou for his work on this feature!
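
The one-time-link pattern behind this feature is simple: generate an unguessable token, store a hash of it with an expiry, and email the user a link containing the token. The sketch below is a generic Node.js-style illustration of that pattern, not OAE's actual implementation; the URL shape and the one-hour expiry are assumptions.

    // Generic sketch of issuing a one-time password-reset link (not OAE code).
    import * as crypto from 'crypto';

    function createResetLink(baseUrl: string, username: string) {
      // 32 random bytes gives an unguessable, single-use token.
      const token = crypto.randomBytes(32).toString('hex');
      // Persist only a hash of the token server-side, alongside an expiry time.
      const tokenHash = crypto.createHash('sha256').update(token).digest('hex');
      const expires = new Date(Date.now() + 60 * 60 * 1000); // valid for one hour
      // The reset email would contain this link (URL shape is hypothetical).
      const link = `${baseUrl}/resetpassword?user=${encodeURIComponent(username)}&token=${token}`;
      return { link, tokenHash, expires };
    }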

Dockerisation of the development environment

Many new developers have been intimidated by the setup required to get the Open Academic Environment up and running locally. For their benefit, we have now created a development environment based on Docker containers that allows newcomers to get up and running much more quickly.

We hope that this will attract new contributions and allow more people to get involved with OAE.

Try it out

OAE Quetzal can be experienced on the project's QA server at http://oae.oae-qa0.oaeproject.org. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 13.0.0 and can be downloaded from the following repositories:

Back-end: https://github.com/oaeproject/Hilary/tree/13.0.0
Front-end: https://github.com/oaeproject/3akai-ux/tree/13.0.0

Documentation on how to install the system can be found at https://github.com/oaeproject/Hilary/blob/13.0.0/README.md.

Instructions on how to upgrade an OAE installation from version 12 to version 13 can be found at https://github.com/oaeproject/Hilary/wiki/OAE-Upgrade-Guide.

The repository containing all deployment scripts can be found at https://github.com/oaeproject/puppet-hilary.

Get in touch

The project website can be found at http://www.oaeproject.org. The project blog will be updated with the latest project news from time to time, and can be found at http://www.oaeproject.org/blog.

The mailing list used for Apereo OAE is oae@apereo.org. You can subscribe to the mailing list at https://groups.google.com/a/apereo.org/d/forum/oae.

Bugs and other issues can be reported in our issue tracker at https://github.com/oaeproject/3akai-ux/issues.

June 01, 2017 05:00 PM