Planet Sakai

October 18, 2017

Dr. Chuck

We want to feature YOU in our upcoming Internet and You Teach-Out!

We are just under two weeks out from The Internet and You! Teach-Out with Doug Van Houweling and me. I would like to encourage you to sign up for this event on Coursera, as we look at the past, present, and future of the Internet and how it influences society.

As we move closer to October 30, we’d love to hear your own thoughts and questions about the internet, so they can be addressed within the live sessions. Consider the following questions:

  • What questions do YOU have for Doug and me about the past, present, and future of the Internet?
  • Do you think the Internet is broken? If so, how should it be fixed?
  • When you think about the Internet, what puzzles you?
  • Do you ever worry that the Internet might be changing for the worse? Why? What concerns you?
  • As the Internet continues to become more embedded in our lives, is there anything about it that concerns you?

Once you sign up, there are two ways you can share your responses or questions with us:

  • We provide a form to share your YouTube video
  • We provide a phone number to leave a recorded voice response

We’re looking forward to seeing you on October 30 for the first live session!

by Charles Severance at October 18, 2017 05:25 PM

October 15, 2017

Dr. Chuck

Abstract: Building Reusable Learning Content and Tools

Increasingly, faculty need to build educational materials that can be reused and repurposed across a wide range of learning environments. In the “old days”, faculty would dust off last semester’s PowerPoint, fix a few typos, upload it to a campus learning management system like Sakai or Canvas, and walk into lecture. Today, our learning content includes course assignments, specialized software to help with assessments, video materials, formative assessments, supporting materials, and more. And for each course, on each LMS, you need to upload and format your content and place it into a “course shell”. At many campuses there may now be several learning platforms, plus one or more MOOC platforms like Coursera, and it is a lot of work to re-author your course content in three or four learning platforms. But with the widespread adoption of LMS interchange standards like IMS Learning Tools Interoperability, IMS Common Cartridge, and IMS Content Item, we should be able to author content once and easily integrate it into as many learning platforms as needed. This talk will explore research into developing easy-to-use learning object repositories and learning application stores that enable this “write once – use anywhere” model of learning content development.

by Charles Severance at October 15, 2017 01:49 AM

October 13, 2017

Michael Feldstein

WGU Is Not Off the Hook

In Phil’s first piece on the Department of Education’s Office of the Inspector General (OIG) finding that Western Governors University (WGU) should be considered a correspondence provider rather than a distance education provider, he wrote,

This audit is a travesty in my opinion. Even though it is likely to be rejected by the ED itself, it will have an impact, and the internal review of the audit will likely take years.

The problem, in a nutshell, is that the OIG decided that WGU’s unbundled instructor’s role, with multiple staff roles supporting students in a (largely) self-paced environment, does not count as “regular and substantive interaction between students and teachers,” which is a requirement for classification as a distance learning provider.

Phil believes that this assessment by the OIG was arbitrary and, based on my admittedly limited understanding of their assessment process, I tend to agree. But that doesn’t mean that the OIG is wrong. It means we don’t know whether the OIG is wrong. And the heart of the problem—the definition and test for “regular and substantive interaction between students and instructors”—is a real challenge. While I feel fairly confident that the OIG applied too narrow an interpretation of a standard that probably needs to be revised anyway, coming up with a better evidence-based standard is tough. And if we don’t have one, we can’t tell whether WGU’s programs should be considered equivalent to more traditional distance learning programs.

There are two positions that one could take in arguing against the OIG finding: (1) that it is possible to deliver the equal of a traditional education without regular and substantive interaction between students and teachers, or (2) that this interaction is necessary but we need a different, perhaps more flexible definition of it.

Let’s look at each of these in turn.

Position 1: The Standard is Unnecessary

The more radical of the two positions is that “regular and substantive interactions between students and teachers” is outdated in the sense that such interaction is not necessary for a quality distance education program. In this view, good design and good technology provide enough support for self-paced students. People who take this position tend to have a high opinion of the impact of technology, a low opinion of the impact of the average instructor, or both.

I’m not aware of any research that definitively settles this particular debate and would be surprised if there were any. In fact, I’m not sure it’s possible to produce such evidence in principle, because there are too many contextual factors to come up with just one answer. Some students in some programs studying some subjects to some level of achievement may do as well (or better) in a self-paced, largely self-guided competency-based program as they would in a traditional instructor-led setting. There would need to be an enormous amount of research, including some foundational research that we don’t have yet, to sort out all of the many “ifs” that determine the circumstances under which such a program would be equivalent in effectiveness.

I think it’s dangerous to assume that “regular and substantive interaction between students and instructors” is an obsolete standard, and I do think there are at least three strands of research with results that should give us pause about being too aggressive about taking human teachers out of the equation.

First, there’s Benjamin Bloom’s research on the Two Sigma Problem. Since I recently wrote about this in some detail as part of a longer post, I’ll give you the short version. Bloom found that by using tutors in a mastery learning context, he could achieve two standard deviations of improvement over standard instruction. One could argue that WGU’s model of self-paced learning with periodic assessments and support from course mentors attempts to imitate Bloom’s approach (although one would have to look closely to see the degree to which they are actually doing so). The relevant detail for our current purpose is that Bloom could never isolate exactly what it was about the tutors that delivered that second sigma. Without understanding the reasons why having a human tutor involved improves student outcomes by as much as a full course grade, it seems imprudent to assume it can be removed or replaced.

Second, there’s the research conducted by Gallup and Purdue University showing that college graduates were 1.7 times more likely to thrive in all five of Gallup’s measures of wellbeing—physical, financial, community, career, and social—if they agreed with the statement “My professors at [college] cared about me as a person.” They were 1.5 times more likely to thrive on those measures if they agreed with the statement “I had at least one professor at [College] who made me excited about learning.” Those are pretty compelling results, and it’s hard to see how one would replicate them without some form of regular and substantive interaction between students and instructors. For more on this study, see my post on it.

In a follow-up piece to that post I just referenced, I talked about the third strand of research from Vincent Tinto. He showed that students are more likely to persist at college if they feel a sense of belonging. “[S]tudents have to come to see themselves as a member of a community of other students, faculty and staff who value their membership.” Yet again, there is evidence of impact for a human factor that argues in favor of regular and substantive interaction between students and teachers.

To be clear, I’m not suggesting that one could not create an educational system that provides real value without such interaction. But it would probably be a different kind of education that provides different kinds and levels of value. The OIG is concerned with classification and equivalence: Is WGU providing educational value that’s similar enough to more traditional distance learning programs that it can be classified as the same type of degree? I don’t think we can let the university off the hook by dismissing the “regular and substantive interaction” requirement as obsolete.

Position 2: The Standard Needs Revision

The more conservative argument against the OIG’s evaluation of WGU’s courses is that we still need a standard for “regular and substantive interaction between students and teachers,” but that our interpretation of that standard should be more flexible than the one that the OIG applied. Ideally, there would be some sort of evidence-based test. Let’s see if we can imagine what such a test might look like, based on the three research strands I mentioned above.

It would be hard (and probably pointless) to try to replicate Bloom’s highly controlled laboratory experiments, which took place in a very different schooling context. But we might get something from the spirit of the experiment. Simply put, can we come up with some sort of rough measure of the impact of the instructors (or the various folks who individually or collectively fulfill the instructor’s function) on mastery of materials? Can we find evidence of impact? One place to start might be variance in student performance between instructors teaching the same material. If there is substantial variance that can be reliably attributed to instructors, then WGU could argue that their courses have enough student/instructor interaction to make a difference.

The Tinto and Gallup/Purdue research would be relatively straightforward to draw upon, since they both use student attitude surveys. But only relatively, because I haven’t seen studies applying any of these instruments specifically to distance learning programs. (If anybody knows of such research, please let me know.) One would need to establish a baseline. But that seems like a good idea anyway.

So there are probably a number of ways that the OIG could establish an empirical test to find evidence of student/instructor interaction that is regular and substantive enough to pass an equivalence threshold. It would probably be crude, but a crude test is better than no test at all, which appears to be what we have now.

I don’t know whether the OIG assessment of WGU’s courses was wrong. I feel fairly confident that it was made arbitrarily. But the fact that we have no reason to believe that it is right is not the same as saying we have reason to believe that it is wrong. What bears repeating is that, defined this way, the problem exists not only for WGU but for every assessment that the OIG makes. If the standard is completely subjective and therefore inherently arbitrary in its application, then it is meaningless.

The post WGU Is Not Off the Hook appeared first on e-Literate.

by Michael Feldstein at October 13, 2017 03:33 PM

October 11, 2017

Adam Marshall

WebLearn Tests tool

A powerful way for your students to test their understanding…

WebLearn offers a tool called ‘Tests’, which allows tutors and lecturers to create pools of questions and then build assessments by selecting questions from those pools to present to students. Students can take a test in their own time and benefit from hints and feedback provided by the tutor when creating the questions.

A variety of question types are available.

The process of creating a test involves the following steps:

  1. Create questions in a question pool – options are available to provide overall feedback or hints
  2. Build an assessment by selecting questions to present (either sequentially or randomly)
  3. Test drive the assessment (test) as a student
  4. Set options such as open and close dates, time limits, number of attempts etc.
  5. Publish the test for students to take

The Tests tool keeps track of all attempts and provides a report showing student names, start and finish times, and scores. The data can be exported to Excel for further analysis.

Further information:

Contact us

If you would like to discuss possibilities for using the Tests tool in your courses, contact our team of learning technologists at weblearn@it.ox.ac.uk.

by Jill Fresen at October 11, 2017 03:36 PM

Apereo OAE

Getting started with LibreOffice Online - a step-by-step guide from the OAE Project

As developers working on the Apereo Open Academic Environment, we are constantly looking for ways to make OAE better for our users in universities. One thing they often ask for is a more powerful word processor and a wider range of office tools. So we decided to take a look at LibreOffice Online, the new cloud version of the LibreOffice suite.

On paper, LibreOffice Online looks like the answer to all of our problems. It’s got the functionality, it's open source, it's under active development - plus it's backed by The Document Foundation, a well-established non-profit organisation.

However, it was pretty difficult to find any instructions on how to set up LibreOffice Online locally, or on how to integrate it with your own project. Much of the documentation that was available was focused on a commercial spin-off, Collabora Online, and there was little by way of instructions on how to build LibreOffice Online from source. We also couldn't find a community of people trying to do the same thing. (A notable exception to this is m-jowett who we found on GitHub).

Despite this, we decided to press on. It turned out to be even trickier than we expected, and so I decided to write up this post, partly to make it easier for others and partly in the hope that it might help get a bit more of a community going.

Most of the documentation recommends running LibreOffice Online (or LOO) using the official Docker container, found here. Since we recently introduced a dockerised development setup for OAE, this seems like a good fit. A downside to this is that you can’t tweak the compilation settings, and by default, LOO is limited to 20 connections and 10 documents.

While this limitation is fine for development, OAE deployments typically have tens or hundreds of thousands of users. We therefore decided to work on compiling LOO from source to see whether it would be possible to configure it in a way that allows it to support these kinds of numbers. As expected, this made the project substantially more challenging.

I’ve written down the steps to compile and install LOO in this way below. I’m writing this on Linux but they should work for OSX as well.

Installation steps

These installation steps rely heavily on this setup gist on GitHub by m-jowett, but have been updated for the latest version of LibreOffice Online. To install everything from source, you will need git and Node.js; if you don’t already have them, you can install both (plus npm, the Node package manager) with sudo apt-get install git nodejs npm. You will also need to symlink Node.js to /usr/bin/node with sudo ln -s /usr/bin/nodejs /usr/bin/node for the makefiles. Since several dependencies need to be installed, I recommend creating a new directory for this project to keep everything in one place. From your new directory, you can then clone the LOO repository from the read-only GitHub mirror using git clone https://github.com/LibreOffice/online.git.
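Gathered into one place, the setup above looks like this (a sketch, assuming a Debian/Ubuntu system; the project directory name loo-build is my own choice, not from the docs):

```shell
# Base tools; the makefiles expect "node" rather than Debian's "nodejs"
sudo apt-get install git nodejs npm
sudo ln -s /usr/bin/nodejs /usr/bin/node

# Keep everything for this project in one directory
mkdir loo-build && cd loo-build

# Clone the read-only mirror of LibreOffice Online
git clone https://github.com/LibreOffice/online.git
```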

Next, you’ll need to install some dependencies. Let’s start with the C++ library POCO. POCO has dependencies of its own, which you can install using apt: sudo apt-get install openssl g++ libssl-dev. Then download the source code for POCO itself with wget https://pocoproject.org/releases/poco-1.7.9/poco-1.7.9-all.tar.gz. Uncompress the source files and, as root, run the following commands from your newly uncompressed POCO directory:

./configure --prefix=/opt/poco
make install

This installs POCO at /opt/poco.

Then we need to install the LibreOffice Core. Go back to the top level project directory and clone the core repository: git clone https://github.com/LibreOffice/core.git. Go into the new 'core' folder. Compiling the core from source requires some more dependencies from apt. Make sure the deb-src line in /etc/apt/sources.list is not commented out. The exact line will depend on your locale and distro, but for me it’s deb-src http://fi.archive.ubuntu.com/ubuntu/ xenial main restricted. Next, run the following commands:

sudo apt-get update
sudo apt-get build-dep libreoffice
sudo apt-get install libkrb5-dev

You can also now set the $MASTER environment variable, which will be used when configuring parts of LibreOffice Online:

export MASTER=$(pwd)

Then run ./autogen.sh to prepare for building the source. Finally, run make to build the LibreOffice core. This will take a long time, so you might want to leave it running while you do something else.
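The core build sequence above, gathered into one script (a sketch; remember to un-comment the deb-src line in /etc/apt/sources.list first, and expect the final make to run for a long time):

```shell
# From the top-level project directory
git clone https://github.com/LibreOffice/core.git
cd core

# Build dependencies for LibreOffice itself (requires deb-src entries in sources.list)
sudo apt-get update
sudo apt-get build-dep libreoffice
sudo apt-get install libkrb5-dev

# LibreOffice Online's configure step refers back to this checkout
export MASTER=$(pwd)

./autogen.sh
make    # this is the long one
```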

After the core is built successfully, go back to your project root folder and switch into the LibreOffice Online folder, online/. I recommend checking out the latest release, which for me was 2.1.2-13: git checkout 2.1.2-13. We need to install yet more dependencies: sudo apt-get install -y libpng12-dev libcap-dev libtool m4 automake libcppunit-dev libcppunit-doc pkg-config, after which you should install jake using npm: npm install -g jake. We will also need a Python library called polib. If you don’t have pip installed, first install it using sudo apt-get install python-pip, then install the polib library using pip install polib. We should also set some environment variables while here:

export SYSTEMPLATE=$(pwd)/systemplate
export ROOTFORJAILS=$(pwd)/jails
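For reference, the dependency installation steps above, collected into one place (a sketch; the release tag 2.1.2-13 was simply the latest at the time of writing):

```shell
cd online
git checkout 2.1.2-13    # check out the latest release tag

# Build dependencies from apt
sudo apt-get install -y libpng12-dev libcap-dev libtool m4 automake \
    libcppunit-dev libcppunit-doc pkg-config

# jake via npm, polib via pip
npm install -g jake
sudo apt-get install python-pip
pip install polib
```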

Run ./autogen.sh to create the configuration file, then run the configuration script with: 

./configure --enable-silent-rules --with-lokit-path=${MASTER}/include --with-lo-path=${MASTER}/instdir --enable-debug --with-poco-includes=/opt/poco/include --with-poco-libs=/opt/poco/lib --with-max-connections=100000 --with-max-documents=100000

Next, build the websocket server, loolwsd, using make. Create the caching directory in the default location with sudo mkdir -p /usr/local/var/cache/loolwsd, then change caching permissions with sudo chmod -R 777 /usr/local/var/cache/loolwsd. Test that you can run loolwsd with make run. Try accessing the admin panel at https://localhost:9980/loleaflet/dist/admin/admin.html. You can stop it with CTRL+C.
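Laid out as one sequence, the remaining build-and-run steps look like this (a sketch; the cache path is the default location mentioned above):

```shell
make    # builds the loolwsd websocket server

# loolwsd expects this cache directory to exist and be writable
sudo mkdir -p /usr/local/var/cache/loolwsd
sudo chmod -R 777 /usr/local/var/cache/loolwsd

# Start loolwsd; admin panel at https://localhost:9980/loleaflet/dist/admin/admin.html
make run    # stop with CTRL+C
```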

That, as they say, is it. You should now have a LibreOffice Online installation with maximum connections and maximum documents both set to 100000. You can adjust these numbers to your liking by changing the --with-max-connections and --with-max-documents flags when configuring loolwsd.

Final words

Overall, I found the whole experience a bit discouraging: there was a lot of painful trial and error. We are still hoping to use LibreOffice Online for OAE in the future, but I wish it were easier to work with. We'll be posting a request in The Document Foundation's LibreOffice forum for a Docker version without the user limits to be released in the future.

If you're also thinking about using LOO, or are already using it, and would like to swap notes, we'd love to hear from you. You can contact us via our mailing list at oae@apereo.org or directly at oaeproject@gmail.com

October 11, 2017 11:00 AM

October 10, 2017

Michael Feldstein

Unizin Membership Now Set As Annual Fee Of Up To $427.5k

I've been meaning to provide an update on Unizin now that the consortium is three years old (started officially in July 2014). Thanks to public documents from the University of Minnesota, one of the 11 founding members, we now have additional clarity on the ongoing costs to remain a member of Unizin.

Membership Fees

For some background, Colorado State University staff back in April 2014 described the $1,050,000 initial fee in their meeting minutes for the University Technology Fee Advisory Board:

3. Will this decrease overall costs on our end through collaboration?
a. We are investing $1 million up front, but there is about a 7-year payback. We are investing in a $10 million product since the other 9 universities are putting their money in as we are too. This will absolutely decrease our costs.

One year later, when the Florida State University System joined Unizin as associate members, we noted this item from the University of Florida / Unizin Consortium Membership Agreement:

We noted at e-Literate in our article from 2015:

Does this mean that founding institutions that "invested" $1.050 million over three years will have to start paying annual fees of $100,000 starting in June 2017? That's my assumption, but I'm checking to see what this clause means and will share at e-Literate.

Update (7/17): I talked to Amin Qazi today (CEO of Unizin) who let me know that the annual membership fee for institutional members (currently the 11 schools paying $1.050 million) has not been determined yet.

Fast forward to 2017 and we have an answer. The University of Minnesota has to submit purchases over $1 million to its board of regents for consent, and at the July 2017 meeting the new Unizin membership fees were presented:

To Unizin, Ltd. for $1,282,500 for a three-year renewal of membership in the higher education consortium for the Office of Information Technology (OIT) for the period July 1, 2017, through June 30, 2020. The annual payment of membership fees will be covered from OIT’s central O&M funds. The FY18 budget includes planning and funding for this expense.

That equals $427,500 per year for the next three years for the 70,000+ enrollment university. What this now makes clear is that the up-front investment in Unizin was not a one-time fee broken up into three easy payments: Unizin membership carries an ongoing annual fee set in three-year periods.

I again asked Amin Qazi for clarification, including whether all Unizin members were now paying the higher fee ($427.5k vs. $350k for initial three years). Amin confirmed via email:

Unizin is a non-profit organization and seeks to cover its costs. We have found that our cost to provide our services and tools somewhat scale with the size of the institution. The Unizin Founding Member Fees have been adjusted after the initial three year period. So while larger institutions do pay more, smaller institutions pay less. We anticipate further adjustments as we grow and are able to recognize even greater economies of scale.

I would then assume that the University of Minnesota, along with the University of Michigan, Michigan State University, and Penn State University, is paying at the highest level (more than $350k), and that smaller schools like the University of Iowa and the University of Nebraska are paying less than $350k.

LMS Fees

The same University of Minnesota document also describes their costs for the Canvas1 LMS based on the Unizin agreement.

To Unizin, Ltd. for $5,023,000 for a purchase of Canvas Learning Management System (LMS) for the Office of Information Technology (OIT) for the period July 1, 2017 through June 30, 2022. [snip]

Unizen [sic], on behalf of its member institutions, conducted a competitive Request for Proposal followed by a detailed evaluation process. Through this process Canvas by Instructure was selected as a Learning Management System (LMS). The University then conducted a two year pilot of Canvas and a majority of the stakeholders preferred Canvas to the University's current LMS, Moodle. Most of the Big Ten schools have adopted or are adopting Canvas.

The University receives an additional 30% discount by purchasing Canvas through Unizen [sic] rather than purchasing directly through Infrastructure [sic] and 3% caps on annual increases, rather than 5%, has been negotiated.

This five-year deal comes out to $12 - $14 per student per year. The document does not specify what level of support they have chosen, although it describes a "dedicated test server".
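As a quick sanity check on that per-student estimate, using the figures quoted above ($5,023,000 over five years for roughly 70,000 students):

```shell
# Annual LMS cost: $5,023,000 spread over 5 years
annual=$((5023000 / 5))
echo "Annual cost: \$${annual}"    # → Annual cost: $1004600

# Per-student cost at roughly 70,000 enrolled students (integer division)
per_student=$((annual / 70000))
echo "Per student: ~\$${per_student}/year"    # → Per student: ~$14/year
```

With enrollment somewhat above 70,000, the per-student figure drifts down toward $12, which matches the $12 - $14 range.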

New Associate Members

In other news, Unizin announced in July that the University of Nebraska system has joined as associate members.

The Unizin Consortium is thrilled to welcome the full University of Nebraska system, bringing the total number of institutions in the consortium to 25. With the addition, the University of Nebraska at Kearney, University of Nebraska at Omaha, and University of Nebraska Medical Center join Unizin Founding Member the University of Nebraska Lincoln.

Note that associate members do not pay the same amount as full members. In Florida, the State University System deal costs each associate member $100k per year.

We'll likely give updates at e-Literate after the EDUCAUSE conference on how Unizin has evolved in terms of services and potential new members. But for now we at least have more clarity on the financial terms of the consortium.

  1. Disclosure: Instructure is a subscriber to our market analysis service.

The post Unizin Membership Now Set As Annual Fee Of Up To $427.5k appeared first on e-Literate.

by Phil Hill at October 10, 2017 10:52 PM

Dr. Chuck

Speedy Amazon EC2 Compile for Sakai

These are mostly my own notes from my attempt to find a quick developer / compile option for Sakai.

TL;DR – An EC2 c4.2xlarge with the right .bashrc settings is a very fast compile box for Sakai.

Methodology

Check out my Sakai scripts:

https://github.com/csev/sakai-scripts

Install all the prerequisites, and make sure to run the qmv.sh script once to warm up the Maven repo cache before doing any timing.

Base Line

My baseline is my own MacBook Pro with a quad-core i7 at 2.8 GHz, a 1TB SSD, and 16GB RAM. I played with the last line in qmv.sh to change the number of threads in use via the “-T” option.

mvn -T 4 -e -Dmaven.test.skip=true -Dmaven.tomcat.home=$tomcatdir $goals

Compile time:

1 thread - 4:08
2 threads - 2:47
4 threads - 1:52

Testing on Amazon

AWS – c4.2xlarge: 15 GB RAM, 8 CPUs, 31 “ECU units”, EBS storage

Setting MAVEN_OPTS and JAVA_OPTS to -Xms4096m -Xmx4096m

Compile time:

4 threads 1:25

Setting MAVEN_OPTS and JAVA_OPTS to -Xms8192m -Xmx8192m

Compile time:

6 threads 1:17
8 threads 1:18

For future testing we might look at less expensive per-hour instance types, but this is certainly fast enough for Sakai development at $0.40 per hour.
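For reference, the heap settings above can live in .bashrc roughly like this (a sketch; I'm assuming the conventional MAVEN_OPTS and JAVA_OPTS variable names, which the notes above abbreviate as "MAVEN and JAVA OPTS"):

```shell
# ~/.bashrc on the EC2 build box: give Maven and the JVM an 8 GB heap
export MAVEN_OPTS="-Xms8192m -Xmx8192m"
export JAVA_OPTS="-Xms8192m -Xmx8192m"
```

With those exported, the compile is just the qmv.sh script, and the thread count is controlled by the -T flag on the mvn line inside it.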

by Charles Severance at October 10, 2017 05:20 PM

October 09, 2017

Adam Marshall

WebLearn and Turnitin Courses and User Groups: Michaelmas Term 2017

IT Services offers a variety of taught courses to support the use of WebLearn and the plagiarism awareness software Turnitin. Course books for the formal courses (3-hour sessions) can be downloaded for self study. Places are limited and bookings are required. All courses are free of charge.

Click on the links provided for further information and to book a place.

WebLearn 3-hour courses:

Byte-sized lunch time sessions:

These focus on particular tools with plenty of time for questions and discussion

Plagiarism awareness courses (Turnitin):

User Group meetings:

by Jill Fresen at October 09, 2017 04:16 PM

October 05, 2017

Adam Marshall

New Features in WebLearn 11-ox7

A new version of WebLearn (version 11-ox7) was released on Tuesday 3 October 2017. The main ‘headline’ is that authentication has now switched to Shibboleth in anticipation of the removal of the WebAUTH service. Other improvements have been made to Reading Lists, the Peer Assessment process and Anonymous Submission sites.

Here is a list of the main improvements:

  • Change the authentication method from WebAuth to Shibboleth
  • The HTML WYSIWYG editor now has an “Insert HTML5 Media” toolbar button

  • One can now embed HTML pages containing Font Awesome icons within Lessons pages
  • On a public site in the Lessons tool, the resources folder listing is now correctly shown for non-logged in users
  • In Lessons, formatting improvements have been made to the calendar event pop-up and Forums and Announcement widgets change colour when the page’s colour scheme is modified
  • Reading Lists: links to journal articles now open in a new tab
  • NHS users can now post messages to the Email Archive
  • Additional ‘Joinable Sites’ options have been enabled – you can now specify whether Joinable Sites can be joined by any logged in user or just by users with an Oxford SSO account.

Anonymous Essay Submission & Assignments Tool

  • On Anonymous Submissions (AS) sites, it is no longer possible to allow students to see their TII reports – the option is greyed out
  • A “reveal assignment on this date” option has been added; this means an assignment can be set up but students will not see it in the list until the “visibility date” is reached
  • If peer assessment is already selected, then the configuration options are shown by default
  • The default end date of the Peer Assessment period is 7 days after the Accept Until date

Dynamic Lookup of Users

As part of the VLE review requirements-gathering exercise, it was suggested that there should be an ‘auto-complete’ feature when adding users to a site. We have now implemented this new feature and we would like a small number of volunteers to test-drive it. Please get in touch if you would like to help out.

by Adam Marshall at October 05, 2017 01:26 PM

October 03, 2017

Apereo Foundation

7 Things You Should Know About Open-Source Projects


Published in the Educause Review, August 31st, 2017
By: Ian Dolphin, Douglas Johnson, Laura Gekeler, and Patrick Masson

by Michelle Hall at October 03, 2017 06:24 PM

Open Source and the NGDLE


Published in the Educause Review, September 11, 2017
by David Ackerman and Ian Dolphin

by Michelle Hall at October 03, 2017 06:17 PM

October 02, 2017

Apereo Foundation

Michael Feldstein

University of Wisconsin System to Migrate From D2L Brightspace to Canvas LMS

In one of the most significant LMS selection projects of the past few years, the University of Wisconsin System (UWS) has chosen to migrate from D2L's Brightspace to Canvas as its centrally-supported Learning Management System (LMS).1 UW Madison already moved to Canvas as part of its Unizin membership, but now the rest of the 180,000-student, 26-campus system will also make the change.

The decision was first noted on the UWS procurement portal and in an investor analysis note from Raymond James. A representative from UWS confirmed the news and added that "Canvas has been issued the Notice of Intent to Award and a final contract is going to the UW System Board of Regents for formal approval in October".

The UWS project page describes the process leading up to the LMS selection, starting with needs analysis kickoff in 2015.

The Learning Environment Needs Analysis (LENA) project was undertaken as a continuation of a multi-year UW System effort to: 1) understand the current and future learning technology landscape, 2) uncover the wants and needs of UW System institutions with regard to academic technologies that support teaching and learning, and 3) identify gaps that exist in supporting teaching and learning through academic technology. The results of the LENA project were presented to the Learn@UW Executive Committee, along with a recommendation that the Committee charter the process for planning to move into a next generation learning environment for the UW System. The intention was that through this process, UW System would discover potential paths forward to support such an environment.

Last year UWS developed the request for proposal (RFP) requirements list, and the formal RFP was released in January of 2017.

Beyond the size of the system, the UWS decision is significant because UWS was the first major customer of Desire2Learn (as the company was known prior to 2014). Back in 2002/2003, most LMS decisions were framed as Blackboard vs. WebCT, and when UWS selected the little-known D2L, it sent shock waves through the market. The decision put D2L on the map as a true contender, and the company followed up with wins at the University of Iowa2, the Ohio State University, Minnesota State Colleges and Universities, and the University System of Georgia.

We have noted several times at e-Literate and as part of the e-Literate Big Picture: LMS market analysis service that D2L has an impressive record of client retention. The company has been a fierce competitor in keeping customers, as seen when the Colorado Community College System recently chose to remain on D2L Brightspace after their LMS selection process.3 This loss of UWS is the biggest setback for the company in terms of losing clients, and it is a major win for Instructure's Canvas system.

Expect more market news to come out in the next month following the WCET and EDUCAUSE conferences.

  1. Disclosure: UWS, UW Madison, Instructure, and D2L are all subscribers to our market analysis service, and we aided UWS in the needs analysis portion of this project.
  2. Disclosure: At a previous consulting company, I advised U Iowa on their LMS selection.
  3. Disclosure: CCCS is a subscriber to our market analysis service. I also advised CCCS when they originally chose D2L in 2008.

The post University of Wisconsin System to Migrate From D2L Brightspace to Canvas LMS appeared first on e-Literate.

by Phil Hill at October 02, 2017 05:47 PM

September 18, 2017

Sakai@JU

Online Video Tutorial Authoring – Quick Overview

As an instructional designer, a key component of my work is creating instructional videos. While many platforms, software packages, and workflows exist, here’s the workflow I use:

  1. Write the Script: This first step is critical, though to some it may seem rather artificial. Writing the script helps guide and direct the rest of the video development process. If the video is part of a larger series, including some ‘standard’ text at the beginning and end of the video helps keep things consistent. For example, in the tutorial videos created for our Online Instructor Certification Course, each script begins and ends with “This is a Johnson University Online tutorial.” Creating a script also helps ensure you include all the content you need, rather than ad-libbing, only to realize later that you left something out. As the script is written, pay particular attention to consistency of wording and verification of the steps suggested to the viewer, so they’re easy to follow and replicate. Some of the script work also involves setting up the screens used, both as part of the development process and as part of making sure the script is accurate.
  2. Build the Visual Content: This next step could be wildly creative, but typically a standard format is chosen, especially if the video content will be included in a series or block of other videos. A 16:9 aspect ratio is often used for capturing content, since it can include both text and image content more easily. Build the content using a set of tools you’re familiar with. The video above was built using the following set of tools:
    • Microsoft Word (for writing the script)
    • Microsoft PowerPoint (for creating a standard look and including visual and textual content; it provides a sort of stage for the visual content)
    • Google Chrome (for demonstrating specific steps, layered on top of Microsoft PowerPoint), though any browser would work
    • Screencast-O-Matic (Pro version, for recording all visual and audio content)
    • A good quality microphone such as this one
    • Evernote’s Skitch (for grabbing and annotating screenshots), though using native screenshot functions and annotating in PowerPoint is also OK
    • YouTube or Microsoft Stream (for creating auto-generated captions, if it’s difficult to keep to the original script)
    • Notepad, TextEdit, or Adobe’s free Brackets (for correcting/editing/fixing auto-generated VTT, SRT, or SBV captions)
    • Warpwire (to post/stream/share/place and track video content online; Sakai is typically used as the CMS to embed the content and provide additional access controls and content organization)
  3. Record the Audio: Screencast-O-Matic has a great workflow for creating video content, and it even provides a way to create scripts and captions. I tend to record the audio first, which in some cases may require two to four takes. Recording the audio first makes it easier to create appropriate audio pauses and to use tangible inflection and enunciation of terms. For anyone who has created a ‘music video’ or set images to audio content, this will seem pretty doable.
  4. Sync Audio and Visual Content: This is where the use of multiple tools really shines. Once the audio is recorded, Screencast-O-Matic makes it easy to re-record, retaining the audio portion and replacing just the visual portion of the project. Recording the visual content (PowerPoint and Chrome) is pretty much just listening to the audio and walking through the slides and steps using Chrome. Skitch or other screen capture software may have already been used to capture visual content I can bring attention to in the slides.
  5. Once the project is completed, Screencast-O-Matic provides a one-click upload to YouTube, or the project can be saved as an MP4 file and uploaded to Warpwire or Microsoft Stream.
  6. Once YouTube or Microsoft Stream has a viable caption file, it can be downloaded, corrected as needed, and then paired back with any of the streaming platforms.
  7. Posting the video within the CMS is as easy as using the LTI plugin (via Warpwire) or using the embed code provided by any of the streaming platforms.
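The caption-correction step above usually means fixing the auto-generated cue text in a plain text editor while leaving the timestamps intact. For reference, a minimal WebVTT file looks like this (the timings and the second cue are hypothetical, not taken from an actual tutorial; the first cue uses the standard opening line quoted earlier):

```
WEBVTT

00:00:00.500 --> 00:00:03.000
This is a Johnson University Online tutorial.

00:00:03.200 --> 00:00:07.000
In this tutorial we look at posting video content in Sakai.
```

SRT files are very similar, except each cue is preceded by a sequence number and timestamps use a comma as the decimal separator (00:00:00,500), so the same text-editor workflow applies to both formats.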

by Dave E. at September 18, 2017 04:03 PM

September 01, 2017

Sakai Project

Sakai Docs Ride Along

Sakai Docs ride along - Learn about creating Sakai Online Help documentation September 8th, 10am Eastern

by MHall at September 01, 2017 05:38 PM

August 30, 2017

Sakai Project

Sakai get togethers - in person and online

Group photo from Sakai Camp 2017 in Orlando

Sakai is a virtual community, and we often meet online through email and in real time through the Apereo Slack channel and web conferences. We have so many meetings that we need a Sakai calendar to keep track of them.

Read about our upcoming get togethers!

SakaiCamp
SakaiCamp Lite
Sakai VC
ELI

by NealC at August 30, 2017 06:37 PM

Sakai 12 branch created!

We are finally here! A big milestone has been reached with the branching of Sakai 12.0. What is a "branch"? A branch means we've taken a snapshot in time of Sakai and set it aside so we can improve it, mostly through QA (quality assurance testing) and bug fixing, until we feel it is ready to release to the world as a community supported release. We have a stretch goal of releasing before the end of this year, 2017.

Check out some of our new features.

by NealC at August 30, 2017 06:00 PM

July 18, 2017

Steve Swinsburg

An experiment with fitness trackers

I have had a fitness tracker of some description for many years. In fact, I still have a stack of them. I used to think they were actually tracking stuff accurately. I compete with friends and we all have a good time. Lately, though, I haven’t really seen the fitness benefits I would have expected from pushing myself to get higher and higher step counts. I am starting to think it is bullshit.

I’ve had the following:

  1. Fitbit Flex
  2. Samsung Gear Wear
  3. Fitbit Charge HR
  4. Xiaomi Mi Band
  5. Fitbit Alta
  6. Moto 360
  7. Phone in pocket, set up to send to Google Fit.
  8. Garmin ForeRunner 735XT (current)

Most days I would be getting 12K+ just by doing my daily activities (with a goal of 11K): getting ready for work and children ready for school (2.5K), taking the kids to school (1.2K), walking around work (3K), going for a walk at lunch (2K), picking up the kids and doing stuff around the house of an evening (3.5K) etc.

My routine hasn’t really changed for a while.

However, two weeks ago I bought the Garmin Forerunner 735XT, mainly because I was fed up with the lack of Android Wear watches in Australia as well as Fitbit’s lack of innovation. I love Android Wear and Google Fit and have many friends on Fitbit, but needed something to actually motivate me to exercise more.

The first thing I noticed is that my step count is far lower than with any of the above fitness trackers. Like seriously lower. We are talking at least 30% or more lower. As I write this I am sitting at ~8.5K steps for the day, and I have done all of the above plus walked to the shops and back (normally netting me at least 1.5K) and have switched to a standing desk at work which is about 3 metres closer to the kitchen than my original desk. So negligible distance change. The other day I even played table tennis at work (you should see my workplace) and it didn’t seem to net me as many steps as I would have expected.

Last night I went for a 30 min walk and snatched another 2K, which is pretty accurate given the distance and my stride length. I think the Fitbit would have given me double that.

This is interesting.

Either the Garmin is under-reporting or the others are over-reporting. I suspect the latter. The Garmin tracker cost me close to $600 so I am a bit more confident of its abilities than the $15 Mi band.

So, tomorrow I am performing an experiment.

As soon as I wake up I will be wearing my Garmin watch, with the Fitbit Charge HR right next to it, and keeping my phone in my pocket at all times. Both the watch and the Fitbit will be set up for left-hand use. The next day, I will add more devices to the mix.

I expect the Fitbit to get me to at least 11K, Google Fit to be under that (9.5K), and Garmin to be under that again (8K). I expect the Mi Band to be a lot more than the Fitbit.

The fitness tracker secret will be exposed!

by steveswinsburg at July 18, 2017 12:46 PM

June 16, 2017

Apereo OAE

OAE at Open Apereo 2017

The Open Apereo 2017 conference took place last week in Philadelphia, and it provided a great opportunity for the OAE Project team to meet and network for three whole days. The conference days were chock full of interesting presentations and workshops, with the major topic being the next generation digital learning environment (NGDLE). Malcolm Brown's keynote was a particularly interesting take on this topic, although at that point the OAE team was still reeling from having a picture from our Tsugi meeting come up during the welcome speech - that was a surprising start to the conference! We also noted how the words 'app store' kept popping up in presentations and in talks among the attendees - perhaps this is something we can work towards offering within OAE soon? Watch this space...

The team also met with people from many other Apereo projects and talked about current and future integration work with several project members, including Charles Severance from Tsugi, Opencast's Stephen Marquard and Jesus and Fred from Big Blue Button. There's some exciting work to be done in the next few weeks... While Quetzal was released only a few days before the conference, we are now teeming with new ideas for OAE 14!

After the conference events were over on Wednesday, we gathered together to have a stakeholders meeting where we discussed strategy, priorities and next steps. We hope to be delivering some great news very soon.

During the conference, the OAE team also provided assistance to attendees in using the Open Apereo 2017 group hosted on *Unity that supported the online discussion of presentation topics. A lot of content was created during the conference days so be sure to check it out if you're looking for slides and/or links to recorded videos. The group is public and can be accessed from here.

OAE team members who attended the conference were Miguel and Salla from *Unity and Mathilde, Frédéric and Alain from ESUP-Portail.

June 16, 2017 12:00 PM

June 01, 2017

Apereo OAE

Apereo OAE Quetzal is now available!

The Apereo Open Academic Environment (OAE) project is delighted to announce a new major release of the Apereo Open Academic Environment; OAE Quetzal or OAE 13.

OAE Quetzal is an important release for the Open Academic Environment software and includes many new features and integration options that are moving OAE towards the next generation academic ecosystem for teaching and research.

Changelog

LTI integration

LTI, or Learning Tools Interoperability, is a specification that gives developers of learning applications a standard way of integrating with different platforms. With Quetzal, Apereo OAE becomes an LTI consumer. In other words, users (currently only those with admin rights) can now add LTI-compatible tools to their groups for other group members to use.

These could be tools for tests, a course chat, a grade book - or perhaps a virtual chemistry lab! The only limit is what tools are available, and the number of LTI-compatible tools is growing all the time.
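OAE's actual consumer code lives in the Hilary code base, but the mechanics of an LTI 1.1 launch are straightforward to sketch: the consumer POSTs a form whose fields are signed with OAuth 1.0a (HMAC-SHA1) using a key and secret shared with the tool. Below is a minimal, stdlib-only Python sketch of that signing step; the tool URL, key, secret, and parameter values are all hypothetical, and a real deployment would use a vetted OAuth library rather than hand-rolled signing:

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def sign_lti_launch(url, params, consumer_key, shared_secret):
    """Attach an OAuth 1.0a HMAC-SHA1 signature to a set of LTI launch params."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded URL & encoded sorted k=v pairs
    pairs = sorted((quote(k, safe=""), quote(v, safe="")) for k, v in all_params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", quote(url, safe=""), quote(param_str, safe="")])
    # Signing key is the encoded shared secret; LTI launches have no token secret
    key = quote(shared_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params


# Hypothetical launch for illustration only
launch = sign_lti_launch(
    "https://tool.example.com/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "oae-group-42",
        "user_id": "jane",
        "roles": "Learner",
    },
    consumer_key="my-key",
    shared_secret="my-secret",
)
```

The resulting dictionary would be rendered as hidden form fields and auto-submitted to the tool's launch URL; the tool recomputes the signature with the same shared secret to verify that the request really came from the consumer.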

Video conferencing with Jitsi

Another important feature introduced to OAE in Quetzal is the ability to have face-to-face meetings using the embedded video conferencing tool, Jitsi. Jitsi is an open source project that allows users to talk to each other either one on one or in groups.

In OAE, it could have a number of uses - maybe a brainstorming session among members of a globally distributed research team, or holding office hours for students on a MOOC. Jitsi can be set up for all the tenancies under an OAE instance, or on a tenancy by tenancy basis.


Password recovery

This feature has been widely requested by users: the ability to reset their password if they have forgotten it. Now a user in such a predicament can enter their username, and they will receive an email with a one-time link to reset their password. Many thanks to Steven Zhou for his work on this feature!

Dockerisation of the development environment

Many new developers have been intimidated by the setup required to get Open Academic Environment up and running locally. For their benefit, we have now created a development environment using Docker containers that allows newcomers to get up and running much quicker.

We hope that this will attract new contributions and let more people get involved with OAE.

Try it out

OAE Quetzal can be experienced on the project's QA server at http://oae.oae-qa0.oaeproject.org. It is worth noting that this server is actively used for testing and will be wiped and redeployed every night.

The source code has been tagged with version number 13.0.0 and can be downloaded from the following repositories:

Back-end: https://github.com/oaeproject/Hilary/tree/13.0.0
Front-end: https://github.com/oaeproject/3akai-ux/tree/13.0.0

Documentation on how to install the system can be found at https://github.com/oaeproject/Hilary/blob/13.0.0/README.md.

Instructions on how to upgrade an OAE installation from version 12 to version 13 can be found at https://github.com/oaeproject/Hilary/wiki/OAE-Upgrade-Guide.

The repository containing all deployment scripts can be found at https://github.com/oaeproject/puppet-hilary.

Get in touch

The project website can be found at http://www.oaeproject.org. The project blog will be updated with the latest project news from time to time, and can be found at http://www.oaeproject.org/blog.

The mailing list used for Apereo OAE is oae@apereo.org. You can subscribe to the mailing list at https://groups.google.com/a/apereo.org/d/forum/oae.

Bugs and other issues can be reported in our issue tracker at https://github.com/oaeproject/3akai-ux/issues.

June 01, 2017 05:00 PM