Planet Sakai

April 27, 2017

Michael Feldstein

Purdue University Deal To Acquire Kaplan University: Interview with Trace Urdan

The surprise news today is that Purdue University has agreed to acquire the academic operations of Kaplan University. As stated in the 8-K filing by Kaplan University’s owner Graham Holdings:

On April 27, 2017, Kaplan Higher Education LLC and Iowa College Acquisition, LLC (collectively, “Kaplan”), subsidiaries of Graham Holdings Company, entered into a Contribution and Transfer Agreement (“Transfer Agreement”) to contribute the institutional assets and operations of Kaplan University (“KU”) to a new, nonprofit, public-benefit corporation (“New University”) affiliated with Purdue University (“Purdue”) in exchange for a Transition and Operations Support Agreement (“TOSA”), pursuant to which, among other provisions, Kaplan will provide key non-academic operations support to New University for an initial term of 30 years with a buy-out option after six years.

Additional coverage of the deal at The Chronicle, Inside Higher Ed, The Wall Street Journal.

This is an unprecedented move, and to get some insight, I interviewed Trace Urdan, who has long covered higher education as an investment analyst and is one of the most knowledgeable observers of the for-profit sector. The following description is based mostly on this interview, paraphrasing Trace’s explanations and adding quotes in places.

While this acquisition was a surprise to most observers, Trace noted the following forces at play that come together to explain the deal.

  • Non-profit entities – both public institutions and private non-profit institutions – “wanting to get into the adult market and the online market”. This is the big push behind the Online Program Management (OPM) market, kick-starting these non-profits into online programs targeting adult education.
  • For-profit entities “feel like they are being burdened by being for-profit”. One part of this is the regulatory burden from the Department of Education and even accreditors. But there is also a marketplace burden as non-profits like Southern New Hampshire University keep growing enrollments while for-profits are dropping.
  • There is “investor enthusiasm for the services model” behind OPMs, “and this is a model that investors love – it gives you access to the growth in online education, affiliation with strong brands, and it’s more or less free from the regulatory hostility” of the for-profit sector.

All of these forces come together in this case with the potential for immediate results. Purdue University instantly becomes a big player in the adult education market, and Kaplan University converts the asset into a services provider moving past the for-profit sector burdens.

Trace described how the deal is more complicated than a straight acquisition and should be thought of as an OPM deal. Purdue University gets the academic operations of Kaplan University – the school, programs, curriculum, instructional staff, and the accreditation. Kaplan, Inc (owned by Graham Holdings) retains the same elements as an OPM provider using a revenue-sharing model – marketing & recruitment, enrollment management, curriculum development, online course design, student retention support, technology hosting, and student and faculty support. “One way to look at this is effectively [Kaplan] is going to be like an Embanet with one client.”

I asked Trace about the challenge of getting approval for this transaction from the accreditor (HLC is the accreditor for Kaplan U) and the Department of Education (ED). Trace described that the deal closing is contingent upon these approvals, as is typical, and the closest parallel is the Grand Canyon University (GCU) proposal last year, where “Grand Canyon wanted to spin out the academic university into a non-profit and hang on to the services piece as a for-profit entity and presumably grow that with other clients”. While we have not seen that actual agreement, the strong suspicion in GCU’s case was that the accreditor, also HLC, shot down the plans based on questions of whether the non-profit entity would have actually been independent of the remaining for-profit OPM entity.

There are some big differences between Grand Canyon University and this Purdue / Kaplan University deal, however. The biggest is that Graham Holdings retains much more financial risk than in GCU’s OPM move: there is a 30-year agreement, but Purdue has plenty of clauses that let it get out and change OPMs or take over operations after six years. Also, Purdue University gets to appoint the board for NewU and Kaplan does not. On the softer, or political, side, there is also a difference stemming from Purdue University’s reputation and name and from Indiana’s plans to make NewU a public institution (albeit with no state funding).

Regarding the 8-K filing, Trace stated “What I see in that agreement is wildly different concerns from the two parties. The concern of the Purdue side of the equation is all about the risk.” What happens if they don’t hit revenue or profit or enrollment targets, what happens if Kaplan cannot continue operations, etc. Kaplan’s focus is all about benefiting from the removal of the for-profit burdens, effectively saying “Hey, if we’re not a for-profit anymore, we can blow the roof off this thing.” How do we benefit if enrollment takes off, etc.? This makes the deal unprecedented.

It is worth noting that while Purdue and Kaplan claim to have done the analysis stating that this should all be kosher with accreditors and ED, these approvals have not been granted yet.

Trace indicated that the losers in this deal might include others in the OPM market, as there is a new player with 32,000+ students in their main account. The crowded market gets more crowded.

I want to thank Trace Urdan for his insights into the surprise news today. We’ll likely have additional analysis at e-Literate as we read and digest the news in more depth.

The post Purdue University Deal To Acquire Kaplan University: Interview with Trace Urdan appeared first on e-Literate.

by Phil Hill at April 27, 2017 07:25 PM

Recommended Reading: Is Your Edtech Product a Refrigerator or Washing Machine?

Don’t be put off by a title that reads like clickbait; this piece in EdSurge by Julia Freeland Fisher is the real deal. The column looks at the adoption rates of…well…fridges and washing machines.

Refrigerators were adopted much more quickly than washing machines.

Credit: Julia Freeland Fisher

The reason, she argues, is

Most households had electrical outlets that refrigerators could plug into directly, thus leaving iceboxes in the dust. But few homes had the pipes and drain lines required to install a washing machine.

In other words, homes at the time were never designed with washing machines in mind. As a result, to take advantage of the new technology households didn’t just have to shell out money; they had to hire a plumber to configure the pipes that would pump water into and drain water out of the new contraptions.

The analogy is that some ed tech innovations fit more readily into the ways that colleges and individual educators do things than others. This is a fundamental limiting factor on adoption, regardless of theoretical value or “efficacy”.

I have argued that the problem is even worse than that. We can’t really know the impact of an ed tech product or service outside the context of the institution, and many institutional factors are not as easily checked as “Is there a hot water line to the basement where I can hook up my washing machine?”

Anyway, Freeland Fisher’s piece is well written and drives the point home with clarity. Go read it.

The post Recommended Reading: Is Your Edtech Product a Refrigerator or Washing Machine? appeared first on e-Literate.

by Michael Feldstein at April 27, 2017 04:08 PM

Alex Balleste

Optimizing GC settings for Sakai

For the last few weeks I've been playing with Java Virtual Machine (JVM) optimization. I’ve always believed that JVM tuning is somehow mystical and dark magic. You can’t be completely sure whether what you change will enhance the current configuration or sink your servers. JVM parametrization is something I try to touch as little as I can, but this time I had to roll up my sleeves and do some experimentation on the production servers.

We recently upgraded our Sakai [1] platform to version 11.3. It was great because we got some interesting new features and an awesome new responsive interface. The version of Java was upgraded to 1.8 at the same time, so some of the parameters we had set up no longer worked.

Our first move was removing all non-working parameters and just defining the size of the Old Gen memory. We had the following configuration on our servers.

JAVA_OPTS="-server -d64 -Djava.awt.headless=true
-Xms6g -Xmx6g
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-Duser.language=ca
-Duser.region=ES -Dhttp.agent=Sakai
-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false
-Dsun.lang.ClassLoader.allowArraySyntax=true
-Dcom.sun.management.jmxremote"

We are running Sakai in a set of servers that have 8GB of RAM. So we defined 6GB for the CMS Old Generation Memory (Tenured). We obtained a distribution like this:

NON_HEAP: around 750MB committed.
HEAP:
  • Par Eden + Par Survivor (Young generation): approx. 300MB committed.
  • CMS Old Gen: approx. 5.5GB committed.

That configuration would leave around 1GB for the OS, so initially it seemed a good setup. As you can see, we had two extra parameters to handle the Java Garbage Collector processes:

-XX:+UseConcMarkSweepGC: Enables the use of the CMS (concurrent mark sweep) garbage collector for the old generation.

-XX:+CMSParallelRemarkEnabled: This option makes the CMS remark phase use multiple threads, which shortens that stop-the-world pause. It’s a good option if your server has many cores.

Out of Memory and JVM Pauses

This setup seemed to work for us. The Garbage Collector runs were fired automatically by the JVM when needed, but we quickly started to see some memory problems. Sometimes, when a server’s memory occupancy was high, it started to behave dramatically wrong, and a list of random errors appeared in the logs.

We use PSI Probe [2] to monitor JVM behavior. It collects information about threads, memory, CPU, data, etc. With it we found out that the Garbage Collector didn’t free the CMS Old Gen memory when the errors were produced. At that point we tried to restart Tomcat but couldn’t: we received a message saying the server could not be stopped because the heap was out of memory.

Capture from the memory use screen of PSI Probe

After analyzing the logs and seeing how random the failures were (they happened at different times and on different servers), we figured out that all the errors were produced because the servers were running out of memory. To monitor the GC process in more detail, we added the following lines to the JAVA_OPTS:

     -Xloggc:/home/sakai/logs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=2M

Those lines let us capture the GC operations in a log file in order to analyze them. To visualize the data we used GCViewer, an open source project from Tagtraum [3]. It shows a lot of information about memory and Garbage Collector processes extracted from the logs.

With this application we saw that when memory usage got near the limit, garbage collector pauses increased: there were a lot of them, some lasting up to 30 seconds. That, of course, caused the server to stop responding.
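To illustrate the kind of check GCViewer automates, here is a small sketch (not part of our actual tooling) that scans a gc.log produced by the flags above for long stop-the-world pauses. The sample lines are invented, but they follow the usual shape of `-XX:+PrintGCDetails` output, which appends a `real=... secs` wall-clock time to each collection:

```python
import re

# Matches the wall-clock pause time that -XX:+PrintGCDetails appends to
# each collection line, e.g. "... real=30.12 secs]"
PAUSE_RE = re.compile(r"real=(\d+\.\d+) secs")

def long_pauses(log_lines, threshold_secs=1.0):
    """Return the wall-clock pause times (in seconds) at or above the threshold."""
    pauses = []
    for line in log_lines:
        m = PAUSE_RE.search(line)
        if m:
            secs = float(m.group(1))
            if secs >= threshold_secs:
                pauses.append(secs)
    return pauses

# Invented sample lines in the style of a CMS gc.log
sample = [
    "2017-04-20T10:15:01.123+0200: [GC (Allocation Failure) 1024K->512K(6G), 0.0456 secs] [Times: user=0.10 sys=0.01, real=0.05 secs]",
    "2017-04-20T10:20:33.456+0200: [Full GC (Allocation Failure) 5600M->5400M(6G), 30.1234 secs] [Times: user=30.50 sys=0.20, real=30.12 secs]",
]
print(long_pauses(sample))  # only the 30.12 s Full GC exceeds the 1 s threshold
```

In our case a scan like this over the real logs showed the 30-second pauses clustering around moments of near-full occupancy.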

So our first thought was to prevent the heap from reaching the critical occupancy level that made the servers unstable. Our first attempt was to use two options to force the Garbage Collector to free memory:

      -Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000

These options use RMI to force a memory clean every hour, but that didn’t work for us; the CMS Old Gen kept increasing. So we tried other parameters, and this time it worked.

      -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75


-XX:+UseCMSInitiatingOccupancyOnly: Enables the use of the occupancy value as the only criterion for initiating the CMS collector.

-XX:CMSInitiatingOccupancyFraction: Sets the percentage of the old generation occupancy (0 to 100) at which to start a CMS collection cycle.
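As a quick sanity check on those numbers (a sketch; the 5.5GB figure is the committed CMS Old Gen we observed in PSI Probe), a fraction of 75 means a CMS cycle kicks in well before the old generation fills, leaving headroom for the application to keep allocating while the concurrent collection runs:

```python
old_gen_gb = 5.5   # committed CMS Old Gen observed in PSI Probe
fraction = 75      # -XX:CMSInitiatingOccupancyFraction=75

start_at_gb = old_gen_gb * fraction / 100   # occupancy at which a CMS cycle starts
headroom_gb = old_gen_gb - start_at_gb      # space left while the cycle runs
print(start_at_gb, headroom_gb)  # 4.125 1.375
```

So collections start at roughly 4.1GB of occupancy instead of waiting until the heap is nearly exhausted.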

We also tried setting one more parameter to change the remark policy of the garbage collector, following recommendations found in Anatoliy Sokolenko's blog post [4].

-XX:+CMSScavengeBeforeRemark: It enables scavenging attempts before the CMS remark step.

We have been running with these parameters for some days and the new behavior seems good. It prevents the servers from saturating, and the GC pauses incurred when a big CMS Old Gen clean is performed are very short: under a second.

Capture from GCViewer. It shows the time spent in a Full GC performed on the CMS Old Gen.

As you can observe, what I demonstrated by writing this post is that I’m not good at Java tuning. The intention of this post was to show a little of the process we followed. I’m looking for comments about wrong assumptions I made (I’m sure I made some) and better ways to do this, so I’ll really appreciate your comments and suggestions.

References:
[1] Sakai Project: https://sakaiproject.org/
[2] PSI Probe project: https://github.com/psi-probe/psi-probe
[3] Tagtraum, GCViewer web page: http://www.tagtraum.com/gcviewer.html
[4]  Anatoliy Sokolenko's blog post: http://blog.sokolenko.me/2014/11/javavm-options-production.html

Other sources I used in my research process:
http://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html


by Alex Ballesté (noreply@blogger.com) at April 27, 2017 08:54 AM

April 25, 2017

Michael Feldstein

Webinar Tomorrow: Shifts in Video and LMS Adoption: Impact on Student Outcomes

Phil and I will be doing a webinar along with Echo 360’s Fred Singer for Inside Higher Ed tomorrow. We’ll be talking about how to think about adopting these platforms in ways that create opportunities to encourage conversation within the campus community about pedagogy and improving student outcomes.

The webinar is tomorrow—Wednesday, April 26th—at 2 PM EST. You can sign up here.

The post Webinar Tomorrow: Shifts in Video and LMS Adoption: Impact on Student Outcomes appeared first on e-Literate.

by Michael Feldstein at April 25, 2017 08:54 PM

Adam Marshall

Best practice in designing WebLearn sites and pages

The WISE project

One of the outputs of the recent WebLearn Improved Student Experience (WISE) project is a WebLearn site offering advice and guidance on various aspects to consider in building WebLearn sites and pages. The WebLearn Best Practice site encapsulates our experience in supporting 19 departments in redesigning their WebLearn areas.

Another output of the WISE project was a set of four WebLearn site templates, using the ‘box’ design and layout: Departmental Site, Programme Site, Course Site and Tutor Site. The Best Practice site provides a link to the guide on using the site templates, which illustrates how to create a new site based on a template, and then to edit the components according to your needs.

The Best Practice site provides information about the Lessons tool (including various examples of Lessons pages), and considers the question: ‘new site’ or ‘new page’? There are hints and tips about page design and layout (with examples of ‘good’ and ‘bad’ practice), images and video, accessibility and copyright.

More information:

by Jill Fresen at April 25, 2017 03:20 PM

April 20, 2017

Adam Marshall

WebLearn and Turnitin Courses Trinity Term 2017

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the formal courses (3-hour sessions) can be downloaded for self study. Places are limited and bookings are required. All courses are free of charge.

Click on the links provided for further information and to book a place.

WebLearn 3-hour courses:

Byte-sized lunch time sessions:

These focus on particular tools with plenty of time for questions and discussion

Plagiarism awareness courses (Turnitin):

User Group meeting:

by Jill Fresen at April 20, 2017 02:43 PM

April 11, 2017

Apereo Foundation

April 10, 2017

Ian Boston

Metrics off Grid

I sail, as many of those who have met me know. I have had the same boat for the past 25 years and I am in the process of changing it. Seeing Isador eventually sold will be a sad day, as she predates even my wife: she was the site of our first date. My kids have grown up on her, strapped into car seats as babies, laughing at their parents soaked and at times terrified (the parents, that is, not the kids). See https://hallbergrassy38forsale.wordpress.com/ for info.

The replacement will be a new boat, something completely different: a Class 40 hull provisioned for cruising, a Pogo 1250. To get the most out of the new boat and to keep within budget (or to spend more on sails) I am installing the bulk of the electronics myself. In the spare moments I get away from work I have been building a system over the past few months. I want to get the same sort of detailed evidence that I get at work, where I would expect to record 1000s of metrics in real time from 1000s of systems. A sailing boat is just as real time, except it’s off grid. No significant internet, no cloud, only a limited bank of 12v batteries for power, and a finite supply of diesel to generate electricity; in reality no 240v, but plenty of wind and solar at times. That focuses the mind and forces the implementation to be efficient. The budget is measured in amps or mA, as it was for Apollo 13, but hopefully without the consequences.

Modern marine instruments communicate using a variant (NMEA2000) of the CAN Bus present in almost every car for the past 10 years, loved by car hackers. The marine variant adds some physical standards mostly aimed at easing amateur installation and waterproofing. The underlying protocol and electrical characteristics are the same as a standard CAN Bus. The NMEA2000 standard also adds message types, or PGNs, specific to the marine industry. The standard is private, available only to members, but open source projects like CanBoat on GitHub have reverse engineered most of the messages.

Electrically the CAN Bus is a twisted pair with a 120 ohm resistor at each end to make the pair appear like an infinite-length transmission line (i.e. no reflections). The marine versions of the resistors, or terminators, come with a marine price tag, even though they often have poor tolerances. Precision 120 ohm resistors are a few pence, and once soldered and encapsulated will exceed any IP rating that could be applied to the marine versions. The marine bus runs at 250kb/s, slower than many vehicle CAN Bus implementations. Manufacturers add their own variants, for instance Raymarine SeatalkNG, which adds proprietary plugs, wires and connectors.

My new instruments are Raymarine: a few heads and sensors, a chart plotter and an autopilot. They are basic with limited functionality. Had I added a few 0s to the instrument budget I would have gone for NKE or B&G, which have a “Race Processor” and sophisticated algorithms to improve the sensor quality, including wind corrections for heel and mast tip velocity; in the consumer versions, though, the corrections allowed are limited to a simple linear correction table. I would be really interested to see the code those extra 0s buy, assuming the money isn’t mostly spent on the carbon fibre box. This is where it gets interesting for me. In addition to the CanBoat project on GitHub there is an excellent NMEA2000 project targeting Arduino-style boards in C++, and SignalK, which runs on Linux. The NMEA2000 project running on an Arduino Due (ARM Cortex-M3 core processor) allows read/write interfacing to the CAN Bus, converting the CAN protocol into a serial protocol that SignalK running on a Pi Zero W can consume. SignalK acts as a conduit to an instance of InfluxDB, which captures boat metrics in a time series database. The metrics are then viewed in Grafana in a web browser on a tablet or mobile device.
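As a hedged sketch of the final hop in that pipeline (the measurement name, tags and field names below are invented for illustration, not taken from the actual SignalK or InfluxDB configuration), metrics reach InfluxDB as line-protocol strings of the form `measurement,tags fields timestamp`:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one InfluxDB line-protocol point: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical wind sample, as SignalK might hand it on
line = to_line_protocol(
    "wind",
    {"source": "nmea2000"},
    {"awa_deg": 42.5, "aws_kn": 18.2},
    1493310000000000000,
)
print(line)  # wind,source=nmea2000 awa_deg=42.5,aws_kn=18.2 1493310000000000000
```

Batching points like this before writing them is one of the obvious places to trade latency for power on a 12v budget.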

I mentioned power earlier. The setup runs on the Pi Zero W with a load average below 1, the board consuming about 200mA (5V). The Arduino Due consumes around 80mA@5V most of the time. There are probably some IO optimisations that can be performed in InfluxDB to minimize the power consumption further. Details of the setup are in https://github.com/ieb/nmea2000. An example dashboard showing apparent wind and boat speed from a test dataset, taken from the Pi Zero W, is below.

GrafanaAparentWind

Remember those expensive race processors? The marketing documentation talks of multiple ARM processors. The Pi Zero W has one ARM and the Arduino Due has another. Programmed in C++, the Arduino has ample spare cycles to do interesting things. On sailing boats, performance is predicted by the designer and refined through experience, presented in a polar performance plot.

Pogo1250Polar

A polar plot shows expected boat speed for varying true wind angles and speeds. It’s a 3D surface. Interpolating that surface of points using bilinear surface interpolation is relatively cheap on the ARM, giving me real-time target boat speed and real-time % performance at a rate well above 25Hz. Unfortunately the sensors do not produce data at 25Hz, but the electrical protocol is simple. Boat speed is presented as pulses at 5.5Hz per kn and wind speed as pulses at 1.045Hz per kn. Wind angle is a sine/cosine pair centered on 4V with a max of 6V and a min of 2V. Converting all that to AWA, AWS and STW is relatively simple. That data is uncorrected. Observing the Raymarine messages with simulated electrical data, I found its data is also uncorrected, as the autopilot outputs attitude information that is ignored by the wind device. I think I can do better. There are plenty of 9DOF sensors (see SparkFun) available speaking i2c that are easy to attach to an Arduino. Easy, because SparkFun/Adafruit and others have written C++ libraries. A 3D gyro and 3D accelerometer will allow me to correct the wind instrument for wind shear, heel and mast head velocity (the mast is 18m tall, and cup anemometers have non-linear errors with respect to angle of heel). There are several published papers detailing the nature of these errors. I should have enough spare cycles to do this at 25Hz, to clean the sensors and provide some reliable KPIs to hit while at sea.
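The decoding and interpolation described above can be sketched as follows. The pulse rates and the 4V-centred sine/cosine pair come from the text; the tiny polar table is an invented placeholder (a real Pogo 1250 polar has many more points), and the real implementation is C++ on the Arduino rather than Python:

```python
import math

# Sensor scalings quoted above: pulses per knot, and a sin/cos pair
# centred on 4 V with a 2 V swing for wind angle.
def stw_knots(pulse_hz):        # speed through water from paddlewheel pulses
    return pulse_hz / 5.5

def aws_knots(pulse_hz):        # apparent wind speed from anemometer pulses
    return pulse_hz / 1.045

def awa_degrees(sin_v, cos_v):  # apparent wind angle from the sin/cos voltages
    return math.degrees(math.atan2(sin_v - 4.0, cos_v - 4.0)) % 360

# A tiny, invented polar grid: rows are true wind speed (kn),
# columns are true wind angle (deg), values are target boat speed (kn).
TWS = [10, 20]
TWA = [60, 90]
TARGET = [[7.0, 8.0],
          [9.0, 10.0]]

def bilinear(tws, twa):
    """Bilinear interpolation of target boat speed inside the polar grid."""
    tx = (tws - TWS[0]) / (TWS[1] - TWS[0])   # position along the wind-speed axis
    ty = (twa - TWA[0]) / (TWA[1] - TWA[0])   # position along the wind-angle axis
    top = TARGET[0][0] * (1 - ty) + TARGET[0][1] * ty
    bot = TARGET[1][0] * (1 - ty) + TARGET[1][1] * ty
    return top * (1 - tx) + bot * tx

print(stw_knots(38.5))    # 38.5 Hz of paddlewheel pulses -> 7.0 kn
print(bilinear(15, 75))   # midpoint of the invented grid -> 8.5 kn target
```

Dividing measured boat speed by the interpolated target gives the real-time % performance figure.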

A longer-term project might be to teach a neural net to steer by letting it watch how I steer, once I have mastered that myself. Most owners complain their autopilots can’t steer as well as a human. Reinforcement learning in the style of AlphaGo could change that. I think I heard the Vendee Globe boats got special autopilot software for a fee.

All of this leaves me with more budget to spend on sails, hopefully not batteries. I will only have to try and remember not to hit anything while looking at the data.

 

by Ian at April 10, 2017 05:24 PM

April 08, 2017

Dr. Chuck

A better .htaccess for Silex/Symfony Applications

I am playing with Silex / Symfony for writing Tsugi applications and had to come up with a better .htaccess file because my old one did not route the “/” into the fallback resource correctly.

The problem is that “/” maps to an existing directory, which stops the final RewriteRule from firing: the RewriteCond that requires the request not to be an existing directory fails for the document root.

So we add two very explicit rewrite statements to handle the “standalone slash” and the “standalone slash followed by a query string”.

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^ - [E=protossl]
        RewriteCond %{HTTPS} on
        RewriteRule ^ - [E=protossl:s]

        # Root folder all alone
        RewriteRule "^/$" silex.php [L]
        # Root folder with GET parameters
        RewriteRule "^/?.*$" silex.php [L]

        RewriteRule "(^|/)\." - [F]
        RewriteCond %{REQUEST_FILENAME} !-f

        RewriteCond %{REQUEST_FILENAME} !-d

        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ silex.php [L]
    </IfModule>
    <IfModule !mod_rewrite.c>
        FallbackResource silex.php
    </IfModule>

I have a feeling it won’t work well for the FallbackResource case, as Apache will find the “.” folder and not “fall back”. So this kind of means I need mod_rewrite.

Alternatively just rename ‘silex.php’ to ‘index.php’ if I don’t want to have a separate index.php.

Sigh.

by Charles Severance at April 08, 2017 07:27 PM

April 04, 2017

Apereo Foundation

April 03, 2017

Adam Marshall

Activity Browser: We have lift-off!

We are excited to announce that the SHOAL project’s Activity Browser has been launched!  You can find Activity Browser here: https://weblearn.ox.ac.uk/activity-browser.   You can also access it from the left-hand menu of WebLearn’s Gateway home page, or from the Support page of the Digital Education website www.digitaleducation.ox.ac.uk.

If you’re curious about digital teaching tools, want to engage students in different ways both in and beyond the lecture theatre or tutorial, or want to satisfy student digital expectations, Activity Browser is for you!  It’s a searchable showcase of inspirational digital learning activities created within the university.  You can explore activities created by Oxford innovators, and see what digital tools they have chosen to tackle particular teaching challenges.  Each example includes suggestions for how to adopt and adapt the ideas and tools for your own teaching, whether for face-to-face learning in tutorials, classes or labs, or for online study, revision or assessment.

The SHOAL project was a proof-of-concept focussing on STEM subjects, but we’re aware of the innovative online teaching taking place in other subjects and we’re keen to add those resources to the collection.  We are currently looking into the easiest way for you to contribute your own online learning activities, and to grow the range of digital tools and applications in our showcase.  We will update the ‘Contribute’ page of Activity Browser in the next phase of the project.

The Browser interface will be improved when WebLearn is upgraded in Trinity.  We welcome feedback on this early version; please email shoal@maillist.ox.ac.uk.

 

 

by Adam Marshall at April 03, 2017 10:30 AM

March 29, 2017

Dr. Chuck

New Sakai Project Management Committee (PMC) Members

I am pleased to announce four new members elected to the Sakai Project Management Committee (PMC). Here are the new members and the nomination statements for each:

Wilma Hodges, Longsight
Wilma has been a long-time community supporter and Apereo Fellow. She leads the documentation effort, participates in the Apereo FARM effort, leads the Sakai Virtual Conference, and participates in the Sakai Marketing effort and other community activities.

Dede Hourican, Marist
A long-term community member, Dede has most recently focused on quality assurance, participating in the Sakai QA committee and bringing a team of Marist student employees to bear on the process, teaching them about open source software and global communities and building future Sakaigers.

Diego del Blanco, Unicon
Diego has been very active in the Sakai community for some time now and is an Apereo Fellow. He has been a regular attendee of the weekly calls, made significant contributions to Sakai 11, and put in some large features for 12.

Shawn Foster, Western
Shawn is highly active on community calls, provides code contributions, and is tightly connected with the very important usability and accessibility efforts within the Sakai community.

PMC membership is reflective of significant contributions to the community and a dedication to the shared goals of the Sakai community.

In terms of what PMC membership “means”, the PMC members are already active members in the various groups in the Sakai/Apereo community (QA, Core Team, Marketing, FARM, Accessibility, Teaching and Learning, etc.). Most of the decisions about Sakai are made in those groups without any need for the PMC to take a vote or render an opinion because we believe that those closest to the actual work should make decisions regarding the work whenever possible.

The PMC[1] gets involved when there is a significant or broader issue or a decision needs to be made regarding Sakai’s direction or resource expenditure by Apereo[2] on behalf of Sakai [3]. The PMC does all of its work in full view of the community on a public list[4] except for votes on new PMC members.

Please join me in thanking these new PMC members for their past, present, and future contributions to Sakai and welcoming them as the newest members of the Sakai PMC[5].

by Charles Severance at March 29, 2017 03:23 PM

March 20, 2017

Dr. Chuck

Should ICLAs be Required of Every Contributor?

Update: Title changed from “Committer” to “Contributor” based on a suggestion from Andrew Petro (see comments)

In Apereo/Sakai there is discussion of whether or not we need to doggedly require Individual Contributor License Agreements (ICLAs) from every person who sends in a simple GitHub PR. It is generally agreed that if someone will be making significant contributions we need an ICLA – but many (myself included) feel that an ICLA is not necessary for a simple submitted patch. The issue is that this leaves a grey area, and some folks stay a bit conservative on this.

Andrew Petro did some research on this and here are his notes. I keep them here for my own reference.

Here is the thread where we discussed this:
https://groups.google.com/a/apereo.org/forum/#!topic/licensing-discuss/c1puG3RKZcA

Since this post, CLAs have come up a few times on Apache legal-discuss@, including in July when I brought up Apereo’s desire for a canonical position.

In February 2017, “it is considered good practice to collect individual CLAs even if the contributors are not committers. Strictly speaking this is unnecessary”. That is, Committers and Projects via their PMCs may require CLAs of Contributors rather than just only of Committers, and it may be a good practice for them to do this under some circumstances, but Apache does not strictly require this. Also, this post again confirmed that while it is a good practice for Committers to secure Corporate Contributor License Agreements of their employers, this is a judgment call on the part of the Contributor.

In December 2016, “our IP provenance relies on both our license, our ICLA/CCLAs, and the fact that we have written policies that define who can be a committer and how PMCs can make releases. It’s usually good if a code author (or someone who could otherwise legally sign an ICLA in terms of granting us the right licensing rights to that code) actually submits the work to some Apache project before we put it in a release.” That is, it’s sufficient that an ICLA-signatory Committer actually merges the code into the canonical codebase.

In August 2016, “To avoid the risk associated with clever or large contributions, most PMCs request a formal ICLA to be filed.” Which is to say that some do not, and that therefore Apache does not require that projects do so; individual PMCs get to locally decide when to go beyond requiring ICLAs of Committers to require it of a Contributor in the context of a given Contribution.

In August 2016, on this very topic, “I don’t see that there’s a ‘canonical position’ that can exist.” and “Stating my understanding of the Apache policy – Apache requires ICLAs of its committers, uses ICLAs or a software license (https://www.apache.org/licenses/software-grant.txt) for exceptional contributions from contributors and generally relies on clause 5 of the Apache License 2.0 for other contributions from contributors.”

There have been opportunities for someone to argue that ICLAs are required of all Contributors, and that position has not been argued on legal-discuss@.

It also seems likely that this is as canonical a position as one can get from Apache on this matter.

by Charles Severance at March 20, 2017 03:30 PM

March 15, 2017

Apereo OAE

Building the next generation ecosystem

Academic life consists of two parts: teaching and research. Technologists in both of these worlds are struggling to bring into being a new ecosystem. The driving forces behind the desire for these two ecosystems are different, the way the ecosystems are thought about is different and the people doing the thinking are different. We can see this, for example, in the recent parallel consultations run by JISC in the UK on learning and research systems. But why, we might ask, is academia trying to create two distinct ecosystems? That isn't what we see Apple or Google or Facebook trying to do.

In teaching, the talk is of replacing the ubiquitous Learning Management System with an ecosystem. Prompted by the Gates Foundation, discussion of a Next Generation Digital Learning Environment has been going on for some years now, but progress is slow.

In research, two things are happening at once. There is a maelstrom of interconnected moves aimed at creating open access to scholarly publications. At the same time, research in most countries is becoming increasingly managed. This leads to enterprise systems that want to interact with the outside world. The open access movement is highly visible, but no one has a map that gets us to the final destination. The enterprise agenda succeeds according to its institutional limits, but shows little sign of spawning the kind of connections between institutions that reflect the global character of research.

You can find out more about OAE, its take on academic life and how you can participate by reading its new vision statement for 2017, Building the next generation ecosystem. It's all about what we call the New Academic Environment.

March 15, 2017 08:00 AM

March 01, 2017

Sakai@JU

AWS Reports Issue Resolved

According to Amazon’s Dashboard (screenshot below), the issue that affected some portions of access to course sites in Sakai has been resolved (5:08 EST).

[Screenshot: Amazon's status dashboard, February 28, 2017]

Faculty and students are encouraged to continue working in Sakai normally. If you experience any issues logging in, accessing course content, or submitting grades or assignments, please contact the HelpDesk. Students experiencing issues related to submitting assignments, discussions, tests or quizzes late should contact their course instructor for direction on how to proceed.

by Dave E. at March 01, 2017 01:25 AM

February 28, 2017

Sakai@JU

What does Amazon have to do with Sakai?

A lot of people seem to be asking this question.  Most students (and faculty) tend to think of Amazon as the online equivalent of Walmart (though Walmart has its own online presence) – as just a seller of retail items.  Amazon, however, is far more.

Amazon not only sells retail items (and space); it also provides internet services and hosting for thousands of companies, institutions and other entities.  This hosting provides easy, fast and often redundant access to content on a global scale through something called a content delivery network.  Through a complex set of algorithms, security measures and other layers, the paper you just submitted in your course ‘lives’ on an Amazon web server through their S3 platform (Simple Storage Service). This was most evident to me in my role with the university when I noticed images in courses ‘disappearing’.
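For the technically curious, the relationship between Sakai and S3 can be sketched as a tiny key/value object store: the application stores and retrieves content by key, and when the store is unreachable, the application itself keeps running while its content "disappears". This is only an illustrative toy in Python – the `ToyObjectStore` class and the keys shown are invented for this example, not part of Amazon's or Sakai's actual code:

```python
class ToyObjectStore:
    """A minimal key/value object store, standing in for something like S3."""

    def __init__(self):
        self._objects = {}
        self.available = True  # flip to False to simulate an outage

    def put(self, key, data):
        """Store bytes under a key, like uploading a file to a bucket."""
        if not self.available:
            raise ConnectionError("object store is unreachable")
        self._objects[key] = data

    def get(self, key):
        """Fetch bytes by key, like serving a file from a bucket."""
        if not self.available:
            raise ConnectionError("object store is unreachable")
        return self._objects[key]


store = ToyObjectStore()

# A student submits an essay; the LMS hands the file off to the object store.
store.put("courses/ENG101/essay1.docx", b"essay contents")

# Normal operation: the LMS fetches the file back by its key.
print(store.get("courses/ENG101/essay1.docx"))

# During an outage, the LMS is still up, but its content is not.
store.available = False
try:
    store.get("courses/ENG101/essay1.docx")
except ConnectionError as err:
    print("content unavailable:", err)
```

In other words, Sakai was up the whole time; it was the storage layer behind it that briefly couldn't serve files.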

Think of it this way. Let's say you're going to a friend's house for dinner – they're hosting you. They ask you to come over to see them. They even tell you that their niece, Nozama, is going to be bringing dessert in the form of those great scout cookies you enjoy so much. You arrive on time for dinner and everything seems to be going just fine until it's time for dessert. Sadly, your friend tells you, Nozama couldn't bring the cookies just yet, because her parents' car had trouble on the way over. For the moment, the cookies you love so much are missing in action.

In some ways you could see this as the host's problem, but really it's the supplier of the cookies who is having the problem.  For more on understanding the nuts and bolts of hosts, check out this explainer from CommonCraft.


by Dave E. at February 28, 2017 09:48 PM

Course Sites Access – Update

As of 4:20pm EST, the Amazon Web Services (AWS S3) issue continues to impact online, face-to-face and blended course sites in Sakai (http://sakai.johnsonu.edu | https://sakai.lampschools.org).

You can find out more about the AWS S3 issue here.

After further research, the issue not only affected image content in courses – it also affected students' ability to upload or access files in courses, including but not limited to access to course syllabi, files in course Resources, upload of assignments as attachments, entry of forum and blog posts, and submission of assignments. Other areas may have been affected as well.

What does Amazon have to do with Sakai anyways?

While it’s expected that the issue will be resolved soon, instructors are asked to use discretion when accepting assignments and other grade-impacting tasks that rely on electronic submission via Sakai. While not preferable, some instructors may decide to correspond with students via standard email about changes/adjustments to assignment submission processes due to the AWS issue, including extending the due or accept-until date(s). Instructors' ability to access student submissions, files and related gradable digital content is also an issue in some cases.

Students are encouraged to create and author content using an offline editor (such as Word or Pages) and save their work so they have a backup and can submit it later or by a different means.

Instructors and students can continue to check the JohnsonU_Online Twitter feed for continued updates on this issue. Additional status update information is available directly from Amazon here.

by Dave E. at February 28, 2017 09:25 PM

February 23, 2017

Apereo OAE

New release: Apereo OAE 12.5.0!

After a period of hibernation, the Apereo Open Academic Environment (OAE) project is happy to announce a new minor release: OAE 12.5.0.

This release comprises some new features, usability improvements and bug fixes.

Our next step will be to improve documentation and to make the project more approachable to newcomers, but the team is also continuing to modernise the code base, so fear not; the next major release is on the horizon!

Changelog

Administration improvements

Both tenant and global administrators can now use the administration interface to add a new logo for a tenancy.

Search improvements

Private tenancies will no longer have content from other tenancies showing up in their search results; similarly, their content and users will never be visible in the search results of other tenancies.

Linked content improvements

If a link is created pointing directly to a file, we will no longer trigger a download prompt when it's opened. Instead, OAE will attempt to embed the file where possible.

Activity feed and email bug fixes

A bug where some user thumbnails occasionally caused a 401 error in the activity feed was fixed, as was an error where some characters were rendering incorrectly in emails.

Group bug fixes

Deleted users will no longer have links to their homepages in groups. Additionally, a bug where a user's name would sometimes not be displayed correctly in a group's members list was fixed.

Try it out

OAE 12.5.0 can be experienced on the project's QA server at https://oae.oae-qa0.oaeproject.org. It is worth noting that this server is actively used for testing and may be wiped and redeployed every night.

The source code has been tagged with version number 12.5.0 and can be downloaded from the following repositories:

Back-end: https://github.com/oaeproject/Hilary/tree/12.5.0

Front-end: https://github.com/oaeproject/3akai-ux/tree/12.5.0

Documentation on how to install the system can be found at https://github.com/oaeproject/Hilary/blob/12.5.0/README.md.

The repository containing all deployment scripts can be found at https://github.com/oaeproject/puppet-hilary.

Get in touch

The project website can be found at http://www.oaeproject.org. The project blog will be updated with the latest project news from time to time, and can be found at http://www.oaeproject.org/blog.

The mailing list used for Apereo OAE is oae@apereo.org. You can subscribe to the mailing list at https://groups.google.com/a/apereo.org/d/forum/oae.

Bugs and other issues can be reported in our issue tracker at https://github.com/oaeproject/3akai-ux/issues.

February 23, 2017 01:00 PM

February 20, 2017

Sakai Project

Sakai Surveys Summaries

The 2016 surveys are complete and the results have been summarized. While we need to be cognizant that only a small portion of our community responded (about 14%), there is enough feedback to stimulate valuable community discussion. It may also be worth noting that over half of the respondents are self hosted or self hosted with commercial affiliate support.

by NealC at February 20, 2017 03:10 PM