Planet Sakai

February 09, 2016

Michael Feldstein

Making Lab Sections Interactive: More evidence on potential of course redesign

Two weeks ago Michael and I posted a third article on EdSurge describing an encouraging course redesign for STEM gateway courses.

In our e-Literate TV series on personalized learning, we heard several first-hand stories about the power of simple and timely feedback. As described in the New York Times, administrators at the University of California, Davis, became interested in redesigning introductory biology and chemistry courses, because most of the 45 percent of students who dropped out of STEM programs did so by the middle of their second year. These students are the ones who typically take large lecture courses.

The team involved in the course-redesign projects wanted students to both receive more individual attention and to take more responsibility for their learning. To accomplish these goals, the team employed personalized learning practices as a way of making room for more active learning in the classroom. Students used software-based homework to experience much of the content that had previously been delivered in lectures. Faculty redesigned their lecture periods to become interactive discussions.

The UC Davis team focused first on redesigning the lab sections to move away from content delivery (TAs lecturing) to interactive sessions where students came to class prepared and then engaged in the material through group discussions (read the full EdSurge article for more context). In the UC Davis case, this interactive approach was based on three feedback loops:

  • Immediate Feedback: The software provides tutoring and “immediate response to whether I push a button” as students work through problems, prior to class.
  • Targeted Lecture and Discussion: The basic analytics showing how students have done on the pre-lab questions allows the TA to target lecture and discussion in a more personal manner—based on what the specific students in that particular section need. “I see the questions that most of my class had a difficulty with, and then I cover that in the next discussion,” Fox says.
  • Guidance: The TA “would go over the answers in discussion.” This occurs both as she leads an interactive discussion with all students in the discussion section and as she provides individual guidance to students who need that help.

[Figure: Formative feedback loops at UC Davis]

The opportunity to make lab sections truly interactive, rather than one-way content delivery through lectures, is not unique to the UC Davis example. Shortly after publishing the article, I found another course redesign that plays on some of the same themes. This effort at Cal State Long Beach (CSULB) was described in a Press-Telegram article:

Sitting near a skeleton in a Cal State Long Beach classroom last week, Professor Kelly Young dissected a course redesign that transformed a class from a notorious stumbling block to a stepping stone toward graduation.

Young has reduced the number of students failing, withdrawing or performing below average in Bio 208: Human Anatomy from 50 percent to fewer than 20 percent in about four years, and poorly performing students have watched their grades climb, with continued improvement on the horizon.

That statistic is worth exploring, especially when considering that 500-600 students take this class each year at CSULB.

Thanks to the CSULB Course Redesign project, this Bio 208 effort has some very useful documentation available in the MERLOT repository. Like the UC Davis team, the CSULB team first redesigned the lab sections, “flipping them” to enable a more personalized approach within the small sections. Unlike UC Davis, CSULB centered the content on videos and podcasts.

While we have been working on refining the lecture over the past several years, the Redesign Project has allowed us to get serious about redesigning the laboratory (the source of low grades for most of the students). During the semester, students learn over 1,500 structures just in the laboratory portion of the course. Despite asking them to look at the material before class, students would routinely come to the laboratory session totally unprepared. Flipping the class was an enticing solution to increase preparedness, and therefore success.

[Figure: BIOL 208 “Bones of the Thorax” annotated video]

After trial and error over a few years, the team has created a series of “Anatomy on Demand” annotated videos. But as the team pointed out, the videos themselves are not the most important factor.

While the videos often get attention in a flipped classroom proposal, the true focus of our project is what we do with the newly-created class time in the laboratory provided by flipping the lectures. The most important aspect of this project is our new interactive laboratory sessions that serve to deepen understanding of the material. The idea is that a student will watch the relevant short videos (usually 5-7 per week) prior to coming to the laboratory, arrive prepared to their laboratory, take a short quiz that is reduced in rigor but assures readiness, and then spend at least two hours in the laboratory exploring the structures in detail at interactive small group stations.

The effect has been that students move beyond merely receiving introductions to the material and now participate in critical thinking in the lab.

This new method allows prepared students to deeply interact with the material, as opposed to merely being introduced to it. In previous years, we hoped to have students leave the laboratory with some rote memorization of the structures complete. In contrast, when students arrive with a basic understanding of the structures, we are able to use laboratory time to ask application and critical thinking questions.

After applying multiple redesign elements and interventions, the CSULB team started seeing impressive results, especially from Spring 2014 onward. They are tracking a reduction in the percentage of students receiving a D or F or withdrawing from almost 50% to approximately 20%.

[Figure: Bio 208 D/F/W rate over time, falling from ~50% to ~20%]

Both of these course redesigns were led by university faculty and staff, and both are showing impressive results, not just in grades but in deeper student learning. Kudos to both the UC Davis team and the CSULB team.

The post Making Lab Sections Interactive: More evidence on potential of course redesign appeared first on e-Literate.

by Phil Hill at February 09, 2016 01:51 AM

February 07, 2016

Michael Feldstein

College Scorecard: ED quietly adds in 700 missing colleges

It’s worth giving credit where credit is due: the US Department of Education (ED) has fixed a problem that Russ Poulin and I pointed out, in which roughly 700 colleges had been left out of the College Scorecard.

When the College Scorecard was announced, Russ noticed a handful of missing schools. When I did the whole data OCD thing, I discovered that more than 700 two-year institutions were missing, including nearly one in four community colleges. Eventually we published an article in the Washington Post describing this problem (and others).

The missing community colleges were excluded on purely statistical grounds. If a college granted more certificates (official awards of less than a degree) than degrees in a year, it was excluded as not a “primarily degree-granting” institution. We labeled this the “Brian Criterion” after the person who authored two discussion board posts explaining this undocumented filter.

This was a statistically motivated decision, since the mix of certificates and degrees affects graduation-rate calculations, but it leaves students wondering why so many colleges cannot be found. Consider Front Range Community College in Colorado, which granted 1,673 associate’s degrees in 2012-13. Because it also awarded 1,771 certificates, the Scorecard filtered it out of the consumer website.

Largely due to their community-serving mission, community colleges and other two-year institutions were primarily affected. By our calculations, approximately one in three two-year colleges were excluded (more than 700), including approximately one in four community colleges (more than 250).

It is ironic that the most-penalized institutions were community colleges and those innovating with interim certificates and stackable credentials in particular; indeed, the White House has been explicitly promoting both of these groups.

We never heard from ED officially, but backchannel communications from others suggested that fixes were being considered.

On Wednesday I got a message from the infamous Brian on a Stack Exchange thread letting me know that ED had changed their approach.

The Department recently added institutions to the consumer site such that institutions that predominantly award certificates (PREDDEG=1) are included IF the highest degree is at least an Associate’s (HIGHDEG>=2) AND the institution offers an associate’s or bachelor’s degree (CIPxxASSOC>0 OR CIPxxBACHL>0).

In English, this means that ED removed its artificial criterion and fixed the issue. Colleges that award degrees are no longer excluded from the College Scorecard just because they award even more certificates.
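To make the revised rule concrete, here is a minimal sketch in Python with pandas, assuming the public Scorecard data file has been downloaded locally (the file name below is illustrative; PREDDEG, HIGHDEG, and the CIPxxASSOC/CIPxxBACHL fields are the real columns named in the quote above):

import pandas as pd

# Load the public Scorecard data (file name is an assumption).
df = pd.read_csv("Most-Recent-Cohorts-Scorecard-Elements.csv", low_memory=False)

# Scorecard fields can arrive as text (e.g. "PrivacySuppressed"), so coerce
# to numbers before comparing.
assoc = df.filter(like="ASSOC").apply(pd.to_numeric, errors="coerce").gt(0).any(axis=1)
bachl = df.filter(like="BACHL").apply(pd.to_numeric, errors="coerce").gt(0).any(axis=1)
preddeg = pd.to_numeric(df["PREDDEG"], errors="coerce")
highdeg = pd.to_numeric(df["HIGHDEG"], errors="coerce")

# Keep everything that is not predominantly certificate-granting, plus
# certificate-heavy schools whose highest award is at least an associate's
# and which offer an associate's or bachelor's program in some CIP field.
included = (preddeg != 1) | ((highdeg >= 2) & (assoc | bachl))
print(included.sum(), "of", len(df), "institutions pass the revised filter")

Under this revised rule, a school like Front Range Community College is kept: its highest award is at least an associate’s degree, even though its certificates outnumber its degrees.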

[Figure: Poulin/Hill College Scorecard graphic, updated]

It was a little tricky verifying the fix, as they have also changed how the College Scorecard classifies schools. Previously the user could filter on associate’s programs, which surfaced institutions that predominantly award associate’s degrees; now the Scorecard shows all institutions that award associate’s degrees. So the checksum has to be done at a higher level. Lo and behold, the count of public institutions in the Scorecard approximately matches the count from IPEDS. I also spot-checked a dozen institutions that had previously been missing, and they are now in the Scorecard.

The other issues in the Washington Post article remain, but this headline problem has been fixed, albeit very quietly. I cannot find any announcement from ED, just this one line in their release notes:

Update national statistics to include certificate schools

So consider this blog post as the official ED press release, I guess. Thanks for fixing.

The post College Scorecard: ED quietly adds in 700 missing colleges appeared first on e-Literate.

by Phil Hill at February 07, 2016 01:23 AM

February 04, 2016

Adam Marshall

WebLearn upgraded to Version 2.10-ox9 on 26 January 2016

WebLearn was upgraded on 26th January 2016 to version 2.10-ox9. If you would like more details then please contact the Service Desk.

If you would like to suggest further improvements then please do so by contributing to the WebLearn User Voice feedback service.

Improvements

  • There is a new start-up / info page for the Lessons tool
  • A ‘quick link’ to the Apereo Open Academic Environment has been added
  • A new web service to get a list of “surveys awaiting completion” has been added
  • The “Insert original text” link in Forums now works correctly
  • A link to the new site is displayed when a site is duplicated
  • When an (external) user changes their email address they are also able to change their username
  • Web Content links are no longer duplicated when a site is created from a template
  • Researcher Training Tool (RTT)
    • The ‘Accept’ link in emails should now work correctly
    • 20 results per page are displayed during search
  • Improvements to the usability of the reading list tool (ORLiMS)
    • Reading Lists can now be copied within a site
    • A citation which is a link to a file in Resources is now automatically set to be “Electronic Citation”
    • One can now unselect “use this link as title”
    • Reading lists now have a fixed maximum width (so they display better)
    • One can now export a Reading List in RIS format
  • The syllabus tool now shows dates beyond 2015

by Adam Marshall at February 04, 2016 03:27 PM

ORLiMS – On-line Reading Lists Success Story

Thanks to Craig Finlay and Helen Worrell from the Bodleian Social Sciences Library for many of the words and figures in this post.

Developed by staff at the Bodleian SSL (Social Science Library) in collaboration with the WebLearn team, ORLiMS provides real-time information on the location and availability of the material you need. Every item is linked to the corresponding entry on SOLO, so you will be able to see:

  • The number of copies held in individual libraries
  • Whether they can be borrowed
  • Whether they are currently available
  • Whether they can be accessed electronically

If a resource can be accessed electronically, ORLiMS will provide a direct link. The aim is simple: you spend less time searching, more time studying.

Usage Summary

There have been 1,253 views of ORLiMS reading lists by 415 different student users from 16/10/15 to 29/01/16.

Our top user, with 50 visits, is a PPE undergrad!

Our busiest day so far was 18/01/16, when we had 55 visits by students.

Here is a graph showing access over a three-month period. The spike in Michaelmas term came towards the end of our promotional drive following the launch. The general trend early this term is on the up…

[Figure: ORLiMS access graph over three months]

Feedback

The Bodleian Social Sciences Library undertook an evaluation of ORLiMS at the start of 2016; the comments were generally very positive. Here’s what some staff members said:

Very good initiative … I think this is an excellent idea. Politics course leader

I have heard nothing but positive comments about the ORLiMS project from colleagues, and I believe that this will be a very significant improvement to our students’ experience here. Associate Professor of International Relations.

I think it’s a great resource and an advance in terms of making the readings more accessible to students and tutors alike, especially for those who are working remotely. I would support its usage on a longer-term and wider scale. Respondent 4454376551, Criminology.

by Adam Marshall at February 04, 2016 11:34 AM

February 03, 2016

Adam Marshall

Catching up with WebLearn

Members of WISE pilot groups met up in December 2015 at a WISE Champions community event at IT Services. It was an opportunity for participants in the WISE project to share ideas and see examples of newly designed WebLearn sites. The pictures below show the group meeting in a local coffee shop and also discussing a challenging psychological puzzle!

[Photos from the WISE Champions event]

Video

https://oxforduniversity.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=a97506de-6453-4f57-8221-9cb9574f2880

Included in the video:

  • Update on progress on the WISE project
  • Psychology: Xav describes the psychological theories behind designing webpages.
  • Spot the mistakes: Can you spot the mistakes or bad practice in this test WebLearn site by Xav and Steve?
  • Assignments – top tips
  • Assignments – peer review

Next event

Date for your diary: the next WISE Champions event is on February 18, 2016. You can register interest at:

https://courses.it.ox.ac.uk/detail/TOVW

PowerPoint slides

Slides are at https://weblearn.ox.ac.uk/portal/hierarchy/central/oucs/wise/wise_communi

by Stephen Burholt at February 03, 2016 05:12 PM

Ian Boston

AI in FM

Limited experience in either of these fields does not stop thought or research. At the risk of being corrected (from which I will learn), I’ll share those thoughts.

Early AI in FM was broadly expert systems, used to advise on hedging to minimise overnight risk or to identify certain trends in historical information. Like the early symbolic maths programs (1980s) that revolutionised the way theoretical problems could be solved (transformed) without error in a fraction of the time, early AI in FM put an expert, with a probability of correctness, on every desk. This is not the AI I am interested in. It is only artificial in the sense that it artificially encapsulates the knowledge of an expert. The intelligence is not artificially generated or acquired.

Machine learning covers many techniques. Supervised learning takes a set of inputs and allows the system to perform actions based on a set of policies to produce an output. Reinforcement learning (https://en.wikipedia.org/wiki/Reinforcement_learning) favors the more successful policies by reinforcing the action: good machine, bad machine. The assumption is that the environment is stochastic (https://en.wikipedia.org/wiki/Stochastic), i.e. unpredictable due to the influence of randomness.
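As a toy illustration (mine, not the author’s) of reinforcing the more successful action in a stochastic environment, here is a minimal Python sketch of an epsilon-greedy agent on a two-armed bandit:

import random

# Hidden success probability of each arm; the agent does not know these.
true_payoff = [0.4, 0.6]
value = [0.0, 0.0]   # the agent's running estimate of each arm's worth
counts = [0, 0]
epsilon = 0.1        # fraction of the time we explore at random

for step in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(2)          # explore
    else:
        arm = value.index(max(value))      # exploit the better estimate
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    counts[arm] += 1
    # Incremental mean: nudge the estimate toward the observed outcome,
    # reinforcing actions that pay off more often.
    value[arm] += (reward - value[arm]) / counts[arm]

print("learned values:", value, "pulls per arm:", counts)

The agent never learns the true payoffs exactly; it just keeps shifting weight toward whichever action the random feedback has rewarded, which is the essence of “good machine, bad machine”.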

Inputs and outputs are simple: they are a replay of historical prices. There is no guarantee that future prices will behave the same way as historical ones, but that is in the nature of a stochastic system. Reward is simple: profit or loss. What is not simple is the machine learning policies. AFAICT, machine learning, for a stochastic system with a large amount of randomness, can’t magic the policies out of thin air. Speech has rules, and image processing too; although there is randomness, policies can be defined.

At the purest level, excluding wrappers, financial markets are driven by millions of human brains attempting to make a profit out of buying and selling the same thing without adding any value to that thing. They are driven by emotion, fear and every aspect of human nature, rationalised by economics, risk, a desire to exploit every new opportunity, and a desire to be part of the crowd. Dominating means trading on infinitesimal margins, exploiting perfect arbitrage as the randomness exposes differences. That doesn’t mean the smaller trader can’t make money, as the smaller trader does not need to dominate, but it does mean that the larger a trader becomes, the more extreme the trades have to become to maintain the expected level of profits. I said excluding wrappers because wrappers do add value: they adjust the risk, for which the buyer pays a premium over the core assets. That premium allows the inventor of the wrapper to make a service profit in the belief that they can mitigate the risk. It is, when carefully chosen, a fair trade.

The key to machine learning is to find a successful set of policies: a model for success, or a model for the game. The game of Go has a simple model, the rules of the game, so it’s possible to have a policy of “try everything”. Go is a very large but ultimately bounded Markov Decision Process (MDP, https://en.wikipedia.org/wiki/Markov_decision_process). Try every move; by trying every move, every theoretical policy can be tested. With feedback and iteration, input patterns can be recognised and successful outcomes found. The number of combinations is enormous but finite: so large that classical methods are not feasible, yet not infinite, so reinforcement machine learning becomes viable.
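To make the “very large but finite” point concrete, here is a minimal sketch (my own, with made-up states and rewards) of value iteration over a toy finite MDP; the exhaustive sweep over every state and action is exactly what stops being feasible once the state space is effectively unbounded:

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"a": [(1.0, "s1", 0.0)], "b": [(1.0, "s2", 0.0)]},
    "s1": {"a": [(0.5, "s0", 1.0), (0.5, "s2", 0.0)]},
    "s2": {"a": [(1.0, "s2", 0.0)]},   # absorbing state
}
gamma = 0.9                            # discount factor
V = {s: 0.0 for s in transitions}      # value of each state, initially zero

# Each sweep visits every state and every action: feasible only because
# the MDP is finite and small.
for sweep in range(100):
    V = {
        s: max(
            sum(p * (r + gamma * V[nxt]) for p, nxt, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

print(V)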

The MDP governing financial markets may be near-infinite in size. While attempts to formalise it will appear successful, the events of 2007 showed us that whenever we believe we have found the finite boundaries of an MDP representing trade, one more state turns up beyond them and proves we have not. The nasty surprise is always just over the horizon.

by Ian at February 03, 2016 01:09 PM

February 02, 2016

Michael Feldstein

Empowering Students in Open Research

Phil and I will be writing a twice-monthly column for the Chronicle’s new Re:Learning section. In my inaugural column, “Muy Loco Parentis,” I write about how schools make data privacy decisions on behalf of students that the students wouldn’t make for themselves, and that may even be a net harm to the students. In contrast to the ways in which other campus policies have evolved, there is still very much a default paternalistic position regarding data.

But the one example that I didn’t cover in my piece happens to be the one that inspired it in the first place. A few months back at the OpenEd conference, I heard a presentation from CMU’s Norm Bier about the challenges of getting different schools to submit OLI student data to a common database for academic research. Basically, every school that wants to do this has to go through its own IRB process, and every IRB is different. Since the faculty using the OLI products usually aren’t engaged in the research themselves, it generally isn’t worth the hassle to go through this process, so the data doesn’t get submitted and the research doesn’t get done. Note that Pearson and McGraw Hill do not have this problem; if they want to look at student performance in a learning application across various schools, they can. Easily. Something is wrong with this picture. I proposed in Norm’s session that maybe students could be given an option to openly publish their data. Maybe that would get around the restrictions. David Wiley, who does a lot more academic research than I do, seemed to think this wasn’t a crazy idea, so I’ve been gnawing on the problem since then.

I have talked to a bunch of researchers about the idea. The first reaction is often skepticism. IRB is not so easy to circumvent (for good reason). What generally changed their minds was the following thought experiment:

  • Suppose that, in some educational software program, there was a button labeled “Export.” Students could click the button and export their data in some suitably anonymized format. (Yes, yes, it is impossible to fully de-identify data, but let’s posit “reasonably anonymized” as assessed by a community of data scientists.) Would giving students the option to export their data to any server of their choosing trigger the requirement for IRB review? [Answer: No.]
  • Suppose the export button offered a choice to export to CMU’s research server. Would giving students that option trigger the requirement for IRB review? [Answer: Probably not.]

There are two complicating shades of gray here. First, researchers worry about the data bias that comes from opt-in. And the further you lead students down the path toward encouraging them to share their data, such as making sharing the default, the more the uneasiness sets in. Second, and relatedly, there is the issue of informed consent. There was a general feeling that, even if you get around IRB review, there is still a strong ethical obligation to do more than pay lip service to informed consent: you need to really educate students about the potential consequences of sharing their data.

That’s all fair. I don’t claim that there is a silver bullet. But the thought experiment is revealing. Our intuitions, and therefore our policies, about student data privacy are strongly paternalistic in an academic context but shift pretty quickly once the institutional role fades and the student’s individual choice is foregrounded. I think this is an idea worth exploring further.

The post Empowering Students in Open Research appeared first on e-Literate.

by Michael Feldstein at February 02, 2016 06:51 PM

January 23, 2016

Dr. Chuck

Two Face-to-Face @Coursera Office Hours – Orlando Florida

I will be having two face-to-face @Coursera office hours in Orlando, Florida this week: one in Universal Studios near Harry Potter World and another at the hotel where I will be attending a meeting.

The first Orlando face-to-face office hours for my Internet History and Python for Everybody courses will be Sunday Jan 24 – 3:00PM – 4:00PM at Moe’s Tavern in Universal Studios.

https://www.universalorlando.com/Restaurants/Universal-Studios-Florida/Springfield-Dining.aspx

I wish we could have met at the Leaky Cauldron, but people tell me it is too crowded. However, the Leaky Cauldron is a five-minute walk away, so at the end of the office hours we can walk to Diagon Alley and take a video.

The second face-to-face office hours will be Tue Jan 26 – 6:00PM – 7:00PM at the Holiday Inn Express & Suites – Orlando International Drive, in the lobby / breakfast area.

7276 International Drive
Orlando, Florida 32819

http://www.ihg.com/holidayinnexpress/hotels/us/en/orlando/mcocd/hoteldetail

I hope to see you at one or the other of the office hours.

by Charles Severance at January 23, 2016 03:03 PM

January 12, 2016

Apereo Foundation

Now Open: Proposals for OpenApereo 2016


Early bird proposals are due January 22nd; the final proposal deadline is February 8th!

by MHall at January 12, 2016 12:46 AM

January 03, 2016

Dr. Chuck

Contributing to Python 3.0 for Informatics

After many years as a successful open Python 2 textbook, Python for Informatics is due for an update to Python 3. There will be a lot of work, since the Python 2 textbook and slides have been translated into so many languages and there are five courses on Coursera all built around the textbook.

Since there is so much work to do, I welcome any and all assistance in the conversion and review of the book. If I can get help in converting the core book, I will have time to add three new chapters that have been requested by the students (see the TODO list for details).

While several groups will likely convert translations of the book and/or slides to Python 3, let’s wait until the book is relatively solid to make sure that all of the variations of these materials stay well aligned.
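For a sense of the mechanical edits the conversion involves, here is a minimal sketch of a few typical Python 2 to Python 3 changes (generic examples, not excerpts from the book):

# Python 2: print "Hello", name        (print was a statement)
print("Hello", "world")                # Python 3: print is a function

# Python 2: name = raw_input("Enter your name: ")
name = input("Enter your name: ")      # Python 3: raw_input renamed to input

# Python 2: 9 / 2 == 4  (integer division)
print(9 / 2)     # 4.5 -- / is true division in Python 3
print(9 // 2)    # 4   -- use // for floor division

Each change is small on its own; the volume of work comes from applying them consistently across every chapter, slide deck, and code sample.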

Temporary Copyright

While I am in the process of drafting a book, I do not put it up with a Creative Commons license. I don’t want anyone grabbing the half-completed text and publishing it prematurely. Once the book is in reasonably good shape I switch the copyright to my normal CC license (CC-BY-NC for print books and CC-BY for all electronic copies). I expect the book to be ready to release in early 2016.

Contributing to the Book

The entire text of the book is in GitHub in this repository:

https://github.com/csev/pythonlearn

There are two basic ways to contribute to the book:

  • Create a GitHub account, then navigate to one of the files for the book in the repository like

    https://github.com/csev/pythonlearn/blob/master/book/02-variables.mkd
    Press the pencil icon to edit the text, and then when you “save” the text, it sends me a “pull request” where I can review your changes, approve them, and apply them to the master copy. Once you get going, it is really easy.
  • If you have more tech-skillz, you can “fork” the repository and send me pull requests the normal way. If you use this approach, please send pull requests quickly so we all stay synchronized. Don’t worry about trying to squeeze a bunch of work into a single commit (like many projects prefer). Lots of little commits avoid merge conflicts.

Make sure to take a look at the TODO list to figure out where you can help. We are only working in the book and code3 folders. We will not be converting the code folder, as that will be maintained as Python 2.

We have a build server that re-builds the entire book every hour at:

http://do1.dr-chuck.com/pythonlearn/

So you can see your contributions appearing in the final book within an hour of me approving your pull request. GitHub tracks your contribution and gives you credit for everything you do. Once the book is ready to publish, I will go through the GitHub history and add acknowledgements for all of the contributions to the text of the book.

If you send in a pull request and it seems like I am not moving quickly enough for you, simply send a tweet with the URL of the pull request and mention @drchuck. That will make sure I see it and get working on it.

Thanks in Advance

I appreciate your interest, support, and effort in helping make this open book a success for all these years.

I want to make sure to acknowledge the contributions from the authors of “Think Python: How to Think Like a Computer Scientist” by Allen B. Downey, Jeff Elkner and Chris Meyers. Their original groundbreaking work in building an open and remixable textbook in 2002 has made the current work possible.

by Charles Severance at January 03, 2016 07:13 PM

January 02, 2016

Dr. Chuck

Tsugi – Non-Upwards-Compatible Change – Using Composer

I made a non-upwards compatible change to Tsugi to move towards refactoring the current monolithic Tsugi code base into separate components. I already factored out the static content (tsugi-static) which is shared between the PHP and Java implementations.

Now I am moving the code that was formerly under lib/vendor into its own repository and have it included using Composer and Packagist.

When you do your next “git pull” or check out a fresh copy, you will need to change your config.php and replace one line:

Remove this line:

require_once($dirroot."/lib/vendor/Tsugi/Config/ConfigInfo.php");

Add this line:

require_once($dirroot."/vendor/autoload.php");

If your Tsugi code explicitly includes anything from lib/vendor/Tsugi (it should not), the fix is to simply remove those includes and let auto-class loading work its magic. If you include config.php, the auto class loader is in effect.

This means we are loading all the Tsugi classes using the autoloader pattern. Interestingly, this allows me to make use of the Tsugi utility code in projects that are not even using LTI.

At this point, I am including the composer.lock file and vendor folder in the Tsugi GitHub repo so that it is a one-stop check out that has the files in their new locations and uses autoloading.

In the long run, I will keep a one-stop-check-it-all-out repo, but eventually get to the point where a Tsugi tool can stand alone in its own repo and have all its dependencies fulfilled by Composer.

If you don’t fix the config.php as described above after your next git pull, you will see the following error:

Warning: require_once(/Applications/MAMP/htdocs/tsugi/lib/vendor/Tsugi/Config/ConfigInfo.php): failed to open stream: No such file or directory in /Applications/MAMP/htdocs/tsugi/config.php on line 15

Fatal error: require_once(): Failed opening required ‘/Applications/MAMP/htdocs/tsugi/lib/vendor/Tsugi/Config/ConfigInfo.php’ (include_path=’.:/Applications/MAMP/bin/php/php5.5.10/lib/php’) in /Applications/MAMP/htdocs/tsugi/config.php on line 15

Of course the fix is trivial.

by Charles Severance at January 02, 2016 09:55 PM

December 29, 2015

Sakai Project

NYU Steinhardt Uses Sakai for Professional Development


The Steinhardt School at NYU hosted its first Course Innovation Grant program supporting the development of technology-enhanced courses.

by MHall at December 29, 2015 07:09 PM

Sakai 10.6 Release


The Sakai Core Team is happy to announce the Sakai 10.6 maintenance release.

by MHall at December 29, 2015 07:07 PM

December 16, 2015

Apereo Foundation

December 15, 2015

Apereo OAE

Looking back at Diwali

Marist College’s week-long Diwali celebration concluded on November 13th with a wonderful closing reception attended by over 200 members of the Marist College community, along with their family and friends. Participants filled the room for a night of singing, dancing, Indian cuisine and a fashion show modeling traditional Indian apparel from various regions of the country.

"It was such a significant experience that brought a bit of India to campus for those who have traveled so far from their home ... even more so for those of us, like myself, who might never have the opportunity to visit," says Corri Nicoletti, Educational Technology Specialist Academic Technology & eLearning office at Marist College.

The closing reception was the perfect end to a five day exhibit to celebrate Diwali, the Hindu Festival of Lights. As Marist College boasts a large international population representing over 50 countries worldwide, this event was the perfect opportunity to share a significant cultural experience campus-wide. However, we decided to take it one step further. What if our families and friends could participate, no matter where they are? OAE served as the perfect platform for our students to share in their celebrations as well as to reconnect with those at home.

[Photo: Diwali closing reception]

As a result of using OAE, various family, friends, and other institutions were able to connect, regardless of their global location. Everyone was invited to post pictures of their celebrations wherever they were. Beginning October 14th, those involved in the event began posting images of the Rangoli workshop, followed by event preparations, the exhibit, and the final celebration. The OAE group and shared images for MyDiwali Celebration were visited numerous times throughout the month. Surprisingly, these visitors included global participants from as far away as England, Australia, South Africa, and more!

Using the combination of social media, active global participants, and the collaborative and interactive nature of OAE, we extended the week-long Diwali event beyond the grounds of Marist College. We were able to reach friends and family elsewhere by sharing a diverse, culture-rich experience with those around us ... even if they were halfway around the world!

It was evident just how much this meant to the students, who worked countless hours, day in and day out, to make it a success. Many of them were amazed at how much it felt like home.

December 15, 2015 07:21 PM

December 11, 2015

Apereo OAE

Re-intermediation

Since the Open Academic Environment's main cloud deployment, *Unity, rolled out to 20,000 universities and research institutions last month, one of the most common questions has been how so many people are able to use their campus credentials to sign in. I’m going to explain, but be warned: after that I’m going to say why I think this is the wrong question. The right question, I think, is, "Why did no one do this before?"

The Open Academic Environment software at the heart of *Unity integrates with most of the commonly used authentication strategies, including open standards such as Shibboleth. Using these different strategies we’ve been able to establish single sign on with almost half our 20,000 tenancies.

The benefits are real. You don’t have to remember another username and password; instead, you can sign into *Unity with your campus credentials. And so can the majority of your colleagues around the world. It’s one of the features that makes *Unity a uniquely suitable venue for all your research projects.

We’ve managed to hook up with so many universities partly by making bilateral arrangements, campus by campus. But we’ve also worked through the many access management federations to which we belong. These national federations act as brokers; on one side are the universities, on the other are service providers such as *Unity. The federations allow us to hook up with many institutions in one go, reducing the effort involved.

So, given that the username / password thing is one of the biggest barriers both to adoption and usage, why is it that none of our competitors have gone to the effort of integrating with institutions’ single sign on strategies? How is it that none of Facebook, Google, LinkedIn, Academia.edu or ResearchGate let you use your campus credentials to sign in?

To see why, compare our old friend email with one of the newer services offered by these companies, such as file sharing.

Email is a federated service based on open standards. Each university controls its own servers, data and users. Even if these days they may buy the service in from a cloud provider, the university retains control.

If you draw a diagram of the connections, it looks like this. Each individual dot connects to a university server, which connects to other servers, which connect to other individual dots. The connections between the users are mediated by their universities.

[Diagram: Email as a federated service]

File sharing via, say, ResearchGate, is different. There’s no open standard. The service is not federated but owned by one company. It controls the servers, data and users. The university is, literally, nowhere.

In this diagram, each individual dot simply connects to the ResearchGate servers in the middle.

[Diagram: ResearchGate as a centralised service]

What these Silicon Valley companies have done is to disintermediate the universities themselves.

Now, from the point of view of these companies, what happens if they integrate with an institution’s own single sign on system? They reintroduce the university into the diagram, and the university re-acquires control. It will examine your terms and conditions and veto things it doesn’t like. It will demand that the privacy of its users is protected. It will demand ownership of the content they create. And if it doesn’t get these things, it may switch off the single sign on and take its users elsewhere. Even worse, maybe 100 institutions might get together and move elsewhere all at the same time!

So that explains why I think the Silicon Valley companies don’t work with campus credentials. And it explains why universities should prefer services that do. It’s the difference between control that is centralised and control that is federated. Or, to put it another way, between colonisation and independence.

But, you may say, file sharing is different to email. File sharing can’t be provided in a federated way; it needs a centralised infrastructure. Indeed it does. But a centralised infrastructure does not have to mean centralised control. You can have centralised infrastructure in which each university owns and controls its own tenancy, its own users, its own data. This is *Unity, and the logo of the Open Academic Environment project shows you the kind of connections we have in mind.

[Image: Open Academic Environment logo]

In the centre is the central OAE infrastructure (known to you as *Unity). This is connected to the institutions (the small dots), which in turn are connected to the individual users (the big dots).

This is exactly the arrangement that is effected when we integrate with your university’s single sign on system, and it reflects our vision. Not the disintermediation of the universities but rather their re-intermediation, a step which means empowerment for the university and respect for the user.

You can find out more about *Unity and the issues raised here by downloading our briefing for university Chief Information Officers.

December 11, 2015 11:43 AM

November 15, 2015

Carl Hall

Requiring External Resources Before Attempting JUnit Tests

If you have an integration test that requires external resources to be available, like a local DynamoDB server, that test should be skipped rather than fail when the resources aren’t there. In JUnit, this can be accomplished by throwing an AssumptionViolatedException from an @BeforeClass method, or better yet, with reusable ClassRules. A ClassRule runs like […]

by thecarlhall at November 15, 2015 01:14 AM

November 14, 2015

Carl Hall

Integration Testing with DynamoDB Locally

One of the really nice things about using DynamoDB to back an application is the ability to write integration tests that have a good test server without trying to mimic DynamoDB yourself. DynamoDB_Local is available from AWS and is easily incorporated into a Maven build. Take a look through the documentation for running DynamoDB on […]

by thecarlhall at November 14, 2015 11:32 PM

November 12, 2015

Ian Boston

What to do when your ISP blocks VPN IKE packets on port 500

VPN IKE packets are the first phase of establishing a VPN. UDP versions of this packet go out on port 500. Some ISPs (e.g. PlusNet) block packets to routers on port 500, probably because they don’t want you to run a VPN endpoint on your home router. However, this also breaks a normal 500<->500 UDP IKE conversation.

Some routers rewrite the source port of the IKE packet so that they can support more than one VPN; the feature is often called an IPSec application gateway. The router keeps a list of the UDP port mappings keyed by the MAC address of the internal machine. So the first machine to send a VPN IKE packet will get 500<->500, the second 1500<->500, the third 2500<->500, and so on. If your ISP filters packets inbound to your router on UDP 500, the VPN on the first machine will always fail to work. You can trick your router into thinking your machine is the second (or later) machine by changing your MAC address before you send the first packet. On OS X:

To see the current MAC address use ifconfig, and take a note of it.

Then, on the interface you are using to connect to your network, run:

sudo ifconfig en1 ether 00:23:22:23:87:75

Then try to establish a VPN. This will fail, as your ISP will block the response to your port 500. Then reset your MAC address to its original value:

sudo ifconfig en1 ether 00:23:22:23:87:74

Now when you try to establish a VPN, your machine will send an IKE packet out on 500<->500. The router will rewrite that to 1500<->500, and the VPN server will respond 500<->1500, which will get rewritten back to 500<->500 with your machine’s IP address.
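Here is a minimal Python sketch (my own, not from the post) of the mapping behaviour described above, with the router’s table modelled as a simple dictionary keyed by MAC address:

mappings = {}  # router's table: internal MAC address -> external IKE source port

def external_port(mac):
    # The first MAC seen gets 500, the second 1500, the third 2500, and so on.
    if mac not in mappings:
        mappings[mac] = 500 + 1000 * len(mappings)
    return mappings[mac]

print(external_port("00:23:22:23:87:75"))  # spoofed MAC sends first IKE -> 500 (replies filtered by ISP)
print(external_port("00:23:22:23:87:74"))  # restored real MAC -> 1500 (replies get through)

The spoofed MAC burns the blocked 500<->500 slot, so the real machine lands on the 1500<->500 mapping that the ISP does not filter.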

How to debug

If you still have problems establishing a VPN, tcpdump will show you what is happening. You need to run tcpdump on the local machine and ideally on a network tap between the router and the modem. If you’re on fibre or cable, a hub can be used to establish a tap. If on ADSL, you will need something harder.

On your machine.

sudo tcpdump -i en1 port 500

On the network tap, assuming eth0 is unconfigured and tapping into the hub. This assumes that your connection to the ISP uses PPPoE; tcpdump will decode PPPoE session packets if you tell it to:

sudo tcpdump -i eth0 -n pppoes and port 500

If your router won’t support more than one IPSec session and uses port 500 externally, then you won’t be able to use UDP 500 IKE unless you can persuade your ISP to change their filtering config.

by Ian at November 12, 2015 03:16 PM