Thursday, September 27, 2012

How to load Zotero into a webpage with 3 lines of PHP

It took me a little while to figure out how to load a formatted bibliography from Zotero into a Drupal page, so I thought I'd share what I came up with.

If you publicly store your citations on zotero.org, you can also share them by means of Zotero's Read API. The documentation for the Read API may initially look a little intimidating to non-programmers, but if you take a look at the examples it provides, you'll see that just by adding together clearly defined 'parts' you can easily create a link to some or all of your own Zotero library.

Here's a bibliography of mine formatted using APA: https://api.zotero.org/users/5110/collections/VDAA6PSK/items?format=bib&style=apa

Looking at the API, I think the only limitation I need a workaround for is that you can only generate COinS if you are generating an Atom feed.

But I'm getting ahead of myself. My goal is to embed this bibliography into a Drupal page. This way I - and others, if we embed a shared bibliography - can use Zotero to handle citations and let Drupal handle the rest of the page that surrounds the bibliography.

And it can be done with three lines of PHP:
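The snippet itself didn't survive in this copy, but a minimal sketch of the idea, using PHP's built-in `file_get_contents` (this assumes `allow_url_fopen` is enabled in php.ini; on a Drupal site you would paste this into a page body with the PHP input format enabled), would be:

```php
<?php
// The Zotero Read API request shown above: one user's collection,
// returned as a bibliography formatted in APA style.
$url = 'https://api.zotero.org/users/5110/collections/VDAA6PSK/items?format=bib&style=apa';

// Fetch the formatted bibliography over HTTP.
$bibliography = file_get_contents($url);

// Print it into the body of the page.
echo $bibliography;
```

Because `format=bib` returns ready-made HTML, there is nothing to parse - the response can be echoed straight into the page.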

And those lines load the bibliography from that URL into the body of the page.

If there is an even easier or better way to do this, please let me know.

Monday, September 10, 2012

All watched over by robo-graders of loving grace

My last post was dedicated to the notion that the violent surge of interest in MOOCs is crashing upon us not because of lofty notions of accessibility or improved pedagogy but because of the promise of robo-grading. I've read a couple of posts since then that articulate similar discomfort, including this one called "The Robot Teachers."



And now I want to tell you about my favourite grading story and why it makes me uncomfortable.
Southwest Airlines interviews potential hires in groups, bringing other employees and even customers into the process. In this setting of other job seekers, company representatives, and valued customers, an applicant is asked to stand up and describe his or her most embarrassing moment. Most people are shocked to hear that this playful airline has such an aggressive selection technique, assuming Southwest is playing hardball to test the speaker's confidence. 
But it's not what it appears. When the person describing the most embarrassing moment is speaking, making himself or herself vulnerable to complete strangers who are competing for a job (yes, it's awkward), Southwest recruiters are watching the *other* applicants. Why? They're looking for clues to empathy, signals that an applicant feels bad for the storyteller. It turns out that empathy is the secret sauce to serving customers well at thirty thousand feet.

from: Uncommon Service: How to Win by Putting Customers at the Core of Your Business by Frances Frei and Anne Morriss

For some reason, I find this duplicity absolutely delicious.

In fact, it inspired me to write up an (unpublished) piece that suggested that this devious model could be the true future of testing and accreditation. Take this simple example: what if a student were required to submit a Google Document instead of a "paper"? A robo-grader could then run through the version control record of the document to see whether the student wrote the paper over a long or short period of time, determine how much of the material was cut and pasted in, what types of sources were used in the bibliography, and how much the paper was edited as a whole. The paper would be measured by process. Or take a more sci-fi example: imagine that there's a new guild on campus. To apply, all you have to do is allow your movements to be tracked and your activities to be continually monitored. Then, after a given period of time, you find out whether you will be admitted to the guild or not. And you will never know what criteria you were measured against. But, hey, achievement unlocked.
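To make the version-control idea concrete, here is a hypothetical sketch of such a "process grader". Everything in it is invented for illustration - the revision format, the function name, and the 500-character paste threshold are all assumptions, not any real grading system:

```php
<?php
// Hypothetical process metrics from a document's revision history.
// $revisions: list of ['time' => unix timestamp, 'text' => string],
// ordered oldest to newest.
function processMetrics(array $revisions): array {
    $first = $revisions[0];
    $last  = $revisions[count($revisions) - 1];

    // How long the paper was worked on, first save to last.
    $hoursWorked = ($last['time'] - $first['time']) / 3600;

    // Treat any single revision that adds more than 500 characters
    // at once as a likely paste (an arbitrary threshold).
    $pastedChars = 0;
    $prevLen = strlen($first['text']);
    for ($i = 1; $i < count($revisions); $i++) {
        $len = strlen($revisions[$i]['text']);
        $added = $len - $prevLen;
        if ($added > 500) {
            $pastedChars += $added;
        }
        $prevLen = $len;
    }

    $finalLen = strlen($last['text']);
    return [
        'hours_worked'    => $hoursWorked,
        'pasted_fraction' => $finalLen > 0 ? $pastedChars / $finalLen : 0,
    ];
}
```

Note that a grader like this never reads the argument of the paper at all - it only measures the shape of the work, which is exactly what makes it both interesting and unsettling.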

Because there is no feedback, no curriculum, no understanding of what challenge needs to be met, the "achievement unlocked" scenario is pure accreditation and zero teaching. The educational enterprise is designed for these two purposes - feeding the elephant and weighing the elephant - and those involved with the institution of education know that these two ends are all too frequently at odds with one another. Teaching to the test gets in the way of teaching for learning. Just ask the Finns. They do next to no accreditation testing, and this is one reason why the Finnish school system is now considered one of the best in the world.

[Author's note: I confess that I'm not well versed in the literature of education, and so the following reflections on testing are simplistic, mine alone, and, as such, probably questionable. That being said, I think I could pass "The Audrey Test" but don't think I would get an A+ in it. Ironic, n'est-ce pas?]

Thinking about the "achievement unlocked" scenario has made me reflect on one of the Big Questions in education: Why do we test? And, following from that, can anything good come from testing, ranking, and competition?

Good teaching includes appropriate and timely feedback that responds to a learner's needs. Testing is, arguably, just another opportunity for feedback - a chance to learn exactly how one measures up to expectations and to find out how sound one's understanding actually is. But we all know that testing doesn't feel like a great learning opportunity. It's generally a stressful, painful ordeal that doesn't bring out the best in most people. Kids can't learn under stress.

But I've done enough organized sports to realize that tournaments (aka tests) are not inherently a bad thing. Tournaments provide an opportunity for a person to "test themselves" and see whether they can rise to the occasion when the occasion demands it. Admittedly, not everyone thrives in a high-pressure situation. Also, there is the problem that in order to have a hero, you also have to have a goat. That's because most sports are zero-sum games. If I win, that means you lose.

This raises the question: is education a zero-sum game?

I seem to recall that when I was an undergrad, a friend of a friend was in a class that was graded on a strict curve: at the end of the course, of those who passed, the top 15% of the class would earn As, the next 35% would earn Bs, the next 25% would earn Cs, and the last 25% would earn Ds (although I'm not sure about the actual values). Now, if this was true ('from a friend of a friend' is the hallmark of an urban legend), this system would be considered unfair because you could have mastered almost all of the content asked of you and still be graded poorly, simply because you were out-performed by your cohort.

And yet, most instructors have set up their own testing and grading metrics to achieve the same ends. The real problem with the above "ranked marking" scenario is that it is too honest.

Also, it is demoralizing.

I know because I experienced the brutal effects of this scheme during my first year of undergrad. Since I was enrolled in first-year biology at McMaster University, my cohort was filled with students who were keen to do exceptionally well so they could be accepted to McMaster's medical school. And so I kept finding myself doing very poorly on tests even though I understood the Krebs Cycle but hadn't memorized the Latin name of the Chinese liver fluke. But it was necessary to separate me from the elite. And so, feeling gutted, I transferred out of biology - the reason I had gone into science in the first place - and into Geography and Environmental Science for my second year, where my grades promptly sky-rocketed. And I am telling you this, my friends, not because I am bitter, but because I am a living example of why students do not stay in STEM programs at university.

And so I would like to add this question to the Audrey Test: is your system designed so that everyone is able to learn, to succeed, and to be rewarded for doing so?




The more I think about robo-grading, the more concerned I become. If the criteria for robo-grading are transparent, then it is inevitable that they will be gamed - otherwise known as cheating. If the grading criteria are obscured, then the opportunity to learn through feedback will be diminished. No tool can serve two masters, and I'm afraid that robo-grading - just like the larger educational system in its present form - will ultimately serve accreditation.

Unless, of course, we design our systems to *not* be a zero-sum game.

I know it's unfashionable now to admit to watching such things, but I was very impressed by a TEDxTalk by game designer Amy Jo Kim. In this talk, she admits for the first time that she hates scoring systems. This became apparent to her at a young age, when the demands of having to play in music competitions almost destroyed her love of music. After experiencing the pleasure of playing bass in a band and the joys of collaborating in something bigger than one's self, she asked a very simple question: why do we make young musicians compete against each other when they could be playing with each other? Likewise, she suggests that our games could be much more than zero-sum - kill or be killed - mass competitions with the victors on the leaderboards. And, as a game designer, she has helped design some of the most popular non-zero-sum games ever, including The Sims and Rock Band.

I have two conclusions.

First, I wish my workplace were as concerned with empathy as a commercial airline.

And secondly, Amy Jo Kim needs to design a MOOC and save us all.