
A Reading of Nicholas Carr’s The Shallows

“The Internet’s import and influence can be judged only when viewed in the fuller context of intellectual history” (Carr, 2010), writes Nicholas Carr at the beginning of chapter seven of The Shallows, his book published in 2010. And contextualizing is precisely what the author does in this book, where he recounts the evolution of the different intellectual technologies that human inventiveness has come up with throughout our speciation and evolution.

Contextualization is a reasoning technique that considers something in relation to the situation in which it happens or exists. By situation we mean the space and time where said thing happens or exists, and contextualizing often implies delineating the historical events that led to the current state of affairs. That is precisely what Carr does, tracing the history of the intellectual tools that preceded the Internet. In any case, a contextualization always implies a broadening of the considered variables and tries to make sense of something by taking those variables into account.

Historical contextualization has, however, a drawback when applied to current events: it rarely gives accurate results for predicting future events, for the sole reason that distance is needed to fully appreciate the different facets of a given situation. Even though information may appear to be abundant, a crucial part of it is always lacking: a good historical distance between the research and the object of study, or between the author and the event he writes about. Nevertheless, this book does a very good job of contextualizing intellectual tools, summing up our current historical knowledge on the subject, and gives us a picture of the current state of neuroscience and neurophysiology, with a wealth of sources and citations to validate its points, building a steady (even if not impregnable) set of arguments.

His contextualization is not a neutral and innocent one. The author set out to write this book trying to prove a predefined idea: that the continual use of the Internet is changing our brains in such a way that we are losing our ability to focus on more intellectually demanding tasks such as the deep reading of books. His point is that the architecture of the Internet, or rather of the hyperlink-based World Wide Web, is an intellectual paradigm that never allows for concentration, calm and focus, since it is an “ecosystem of interruption technologies” (Carr, 2010). All its bells and whistles, such as the aforementioned hyperlinks, images, sounds, videos, menus, tabs, advertisements, popups and notifications for multiple services (Twitter, Facebook, Digg, email, etc.), are always demanding the fullest attention from our poor brains. As we grow used to inhabiting that kind of fast-paced, multitasking-heavy environment, our brains adapt to conform to it and its quick flood of information, allocating ever more of their resources to that adaptation and diminishing those originally used for other cognitive tasks that require more focus, such as learning and deep reading.

Contextualization, as mentioned earlier, requires a degree of distance. That distance is achieved in the historical sense, for the author does a good job of explaining the evolution of mind tools from writing to clocks and cartography, and on to the invention of the book, Gutenberg’s press and the electrical and electronic media. It falls short, however, in the assessment of the implications of the Internet for the changing of our brains, since the data on that subject is presently rather poor and contradictory, and his interpretation of the data that does exist is clearly biased to conform with his own preconceptions and prejudices. Jim Holt, in the London Review of Books, points out that “The only germane study that Carr is able to cite was undertaken in 2008 by Gary Small, a professor of psychiatry at UCLA” (Holt, 2011).

He starts his book with a prologue where he defines one of his pet themes: the technological drive of humankind and the risks it represents for its own integrity. He exemplifies it with reference to Marshall McLuhan’s work in media studies, where McLuhan stated his vision of the evolution of media and its influence on people’s perception of reality. It isn’t difficult, looking back, to agree with the passages from McLuhan’s Understanding Media that Carr mentions. Who could disagree that our focus on the content most of the time blinds us, as users, to the medium itself? And is it not true that every new medium cannibalizes its predecessors, ultimately relegating them to niches or curiosities of the past? But, in fact, do we as a species have any other way forward, considering that inventiveness and the use of technology play a major role in defining us as a species?

Past the prologue, the author continues by presenting himself as having had an “Analogue Youth and then (…) [a] Digital Adulthood” (Carr, 2010). He first used computers at college, and witnessed the digital transformation of American society, first in businesses and then in personal lives, being effectively part of it. He bought an early version of Apple’s Macintosh in the middle of the eighties, spending almost all of the couple’s savings, to his wife’s dismay, and used it for work and home tasks. Soon, the computer started to have an impact on him: “at first I had found it impossible to edit anything on-screen”, he declares. “At some point – and abruptly – my editing routine changed. I found I could no longer write or review anything on paper. I felt lost without the Delete key, the scrollbar, the cut and paste functions, the Undo command. I had to do all my editing on-screen.” (Carr, 2010) When some years later he bought a modem, new possibilities opened up for his digital life. There was still no World Wide Web, but his America Online (AOL) account allowed him a few weekly hours of text-based interaction with the world through email, newsgroups and chat rooms. When the World Wide Web made its debut in the early 90’s, he embraced it, and the story goes on through a more or less common experience we all had: increasing connection speeds, growing potential of the technologies, more storage, more and more diverse content, social networks, etc. My own experience, I could say, is not far from that, allowing for the differences related to age: I was a kid in the 80’s and had a ZX Spectrum at the end of the decade, my first “serious” computer was bought around 1995, and I only had access to the Internet in college in 1997. I had, however, some previous knowledge of it and, out of curiosity on the subject, learned to code HTML before even trying it live.

In this first chapter he uses the infamous HAL computer from Kubrick’s (and Arthur C. Clarke’s) 2001: A Space Odyssey to introduce his fear of a thinking-machine-laden, automation- and efficiency-focused world. I must say I’m a sci-fi fan myself, having read hundreds of books and short stories of the genre, so I felt immediately compelled to empathize with the author, and was unable to break the spell until late in the book (that is, until he really starts speaking his mind). His digital life, however, culminated in an epiphany when he suddenly realized he was unable to “pay attention to one thing for more than a couple of minutes.” (Carr, 2010) That realization was deepened by a feeling not of age-related degeneration of brain functions but of a craving of the brain for more and more information to feed its anxious neurons at an increasingly fast pace. Conversations with others reassured him that this was the “new normal” state of the brain nowadays, and he started wondering about this seemingly powerful influence the Internet apparently has on our brains.

The author’s argument about the influence the Internet is having on our brains calls for a scientific basis for the whole “brain change” subject, and that is exactly what he provides in the second chapter, informing the reader about the evolution of neuroscience from the end of the nineteenth century to our days, and about the discoveries that led to the change of paradigm from the idea of a brain immutable in adulthood to the ever-changing, malleable, plastic brain that growing evidence points to. It appears to be increasingly accepted that the brain is a bit more complicated than what Aristotle or Descartes (and the generality of humankind, for that matter) thought, as evidence for real changes in brain structure started being consistently collected by researchers. Today neuroplasticity is a common word for the malleable properties of neural connections, and research in the field is beginning to give us a clearer understanding of the electrochemical processes undergone by neurons in response to the information the nervous system collects from the exterior: creating new connections between different areas of the neural network, and strengthening, weakening, breaking, repurposing or taking over existing ones. The focus here, as is clear from the title of the chapter, is on the so-called vital paths. When a habit is set, that is, when a set of neural connections is created and reinforced, and that habit is then left behind for a time, weakening the neural path or even leading it to be repurposed for another function, the path quickly gets back to its original function if the habit is regained. That explains addictions, for example, but also gives an insight into the self-optimizing characteristics of the brain’s management system.

After this introduction to neuroscience, the author focuses on mind tools. He starts by mentioning how time-tracking and map-making were crucial points in humankind’s intellectual evolution, and how they changed the way we understand reality. Time-tracking gave us a new sense of time as an inexorable entity, measurable and irreversible. It was a kind of loss of innocence for humankind: never again would the human being be allowed to rely solely on his biological, nature-given clock. The unnaturalness of an artificial time started to embed itself in human minds, changing them forever. The same happened with the art of visualizing what is around us in a two-dimensional abstraction of space, ever more so as the technology evolved. We traded interaction with nature for the convenience of the map, and our minds, gaining in abstract thinking, lost much of their ability to survive without it, the old outdoorsman art. And then he turns to the implications of the greatest mind tool, the one that arguably had the biggest effects on our minds: writing.

We can consider writing a natural development in the process of civilization. As communities grew in size, some kind of strategy had to be developed to keep track of various management issues, and it naturally evolved from simple concrete images or tokens to increasingly abstract representations, until the ultimate abstraction represented by an alphabet of letters corresponding to vocal sounds. The implications of a writing system were immense, and represented a further alienation of humankind from the natural world. Reading and writing formed a new layer of abstraction for the human being, and our brains adapted to it with little training (a child, nowadays, requires less than one year to read and write with medium correctness). Reading and writing, although acquired through education, became so deeply incorporated into our brain processes that they seem natural to us. What, then, was the trade-off of adopting the technology? According to Socrates, the loss of memory, the death of the revered oral tradition of which he was the grand master. People would rely on written words and not on their own memory. Where do we see this kind of criticism nowadays?…

The focus of the fifth chapter is at once a contextualization of the birth of computing as a science and a return to the media theme raised in the prologue. Carr informs the reader about the beginnings of the great line of theorization behind modern computing with a reference to the brilliant work of Alan Turing and his visionary proposals and reflections, including his warnings against the risks of his own theoretical machine. His work is of incredible insight, revealing the genius of a man who could think ahead of his time. The author particularly highlights his warnings against the error of relying on machines, or rather on the all-purpose machine, to substitute for humans in tasks that require wisdom, and uses that to return to the media topic presented in the prologue, stating the differences between the traditional media and the Internet. Bi-directionality is obviously the most salient, and it gives the Internet a power and influence that no other medium has achieved. Another salient characteristic is the effective cannibalization of every other medium and intellectual tool: the Internet is concentrating and absorbing reading, writing, time-keeping, map-making, books, newspapers, magazines, music, and still and moving pictures. One ring to rule them all, towards the singularity?

Carr’s efforts to contextualize the history of the book, since that seems to be one of his main worries, started in the fourth chapter and are reinforced in the sixth with his view of the effects the current digital technologies are having on it. Digitization seems to be the way forward, but the author is worried about the loss of our concentration and deep-reading abilities, and doesn’t think the solutions that include hyperlinks are appropriate, presenting evidence of a poorer cognitive experience with texts filled with hyperlinks than with texts without them. His worries about the ability of electronic books, or eBooks, to be suitable substitutes for traditional printed books are, if anything, increased by products such as the Amazon Kindle platform, which offer an innovative experience of reading books, with social networking features and hyperlinks throughout the text. I must confess I totally agree with his point: after reading more than one hundred eBooks, I can safely say I don’t see any point in hyperlinks to external sources. They really clutter the text. Of course I love the ability to search, annotate, highlight, underline, change fonts, and increase and decrease their size, but those are (in my opinion) non-intrusive features that, if anything, add to a good cognitive experience.

His point in this chapter is to discuss the value of the literary, linear, focused mind against the new trend promoted by the digital culture. He argues that the linearization of the mind caused by the introduction of the written narrative, and its subsequent rapid growth driven by the invention of Gutenberg’s press, was crucial to the intellectual revolution of the Enlightenment and the advances that followed it. Reading used to be a form of “training” the memory, and was beneficial to humankind by providing a technique that promoted deep thinking and focus. However, many see that role of the book as outdated, “a brief ‘anomaly’ in our intellectual history” (Carr, 2010), reinforced by the idea that “our old literary habits ‘were just a side effect of living in an environment of impoverished access [to information]’” (Carr, 2010). Those who support this line of argument see in the kaleidoscopic richness of the innumerable interactions made possible by the Internet a fuller contribution to cognitive development. For them, the fast multitasking, the access to information and the ability to connect to knowledge sources in a non-linear, non-narrative way make for a far superior experience, and reading a book is a waste of time. Before Gutenberg’s printing press, reading books was reserved for an elite. Will that, once again, be the fate of the book? Are there neither alternatives nor in-between balances?

In The Juggler’s Mind the author focuses on the evidence for the influence of the Internet on reading and how it affects learning. Evidence seems to be mounting in support of a paradigm shift in the way people’s cognitive process of reading and understanding takes place. There is no wonder here: the nature of the Internet and its characteristics promote a fast-paced kind of reading that is often nothing more than a skimming or scanning of short pieces of information scattered across multiple pages. In a fast-moving world our brains are adapting to do more with the same resources, allocating more of their neural resources to quickly processing information in a non-linear way, making sense of what is collected from the multiple links. At the neural level the two ways of reading are, in fact, very different. If in reading a book the predominant neural paths are those related to concentration, understanding and linguistics, in online reading there is a great deal of decision-making involved, with the former relegated to a less prominent role. Knowledge is taken as an immediate need, but that immediacy tends to be ephemeral, since there is no deep knowledge. This argument is developed further in a later chapter.

The Church of Google is a chapter dedicated to what lies behind the philosophy and practices of the biggest “dotcom” ever created. Google is a knowledge-based company, but its philosophy rests on the corporate efficiency postulated by Frederick Taylor at the end of the nineteenth century, which led to an enormous improvement in manufacturing. Google started as a search engine developed to efficiently catalogue and rank the ever-growing number of web pages being created in the last years of the twentieth century. Efficiency was the keyword that originated Google, as well as the notion of knowledge as the availability and accessibility of information. The dream of pure knowledge, however, had to be balanced with capitalism, and Google was forced to adopt an economic model based on advertising to be a viable company. It worked so well that Google quickly became the biggest advertising company in the world, tying personalized ads to a continual stream of information gathered from its diverse set of free services, whose main purpose is precisely the gathering of personal information and serving as vehicles for ads.
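The link-based ranking idea behind Google’s original search engine can be sketched as a simple iterative computation over a graph of pages. The following toy example illustrates the principle only; the graph, damping factor and iteration count are illustrative assumptions, not Google’s actual algorithm or parameters:

```python
# A minimal sketch of link-based page ranking (power iteration).
# `links` maps each page to the list of pages it links to.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start uniform
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # A page's rank is the damped sum of the shares of rank
            # flowing in from every page that links to it.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

# Hypothetical three-page web: "c" is linked to by both "a" and "b",
# so it ends up with the highest rank.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

The intuition matches the chapter’s point: ranking is a purely mechanical, efficiency-driven computation over link structure, with no judgment about the meaning of the pages themselves.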

Google’s dream of efficiency didn’t stop with the cataloguing of the web. A plan was devised to scan every book ever published and make it available on the web… with Google as the sole copyright holder, even for works in the public domain. The process of scanning the books has been going on for more than a decade now, and was never an easy ride for the Mountain View, California company: lawsuits brought by authors, copyright holders and libraries from various countries were, and still are, a constant. Their plan, according to public documents, is to make the books available in a searchable format, complete or in excerpts, for selling, renting or embedding with advertisements. The author’s position is not against the digitization of books, which has obvious advantages, but Google’s seemingly patronizing and disdainful attitude towards intellectual property and the integrity of the book as a whole work does not earn his approval. The company’s algorithm-based search for total effectiveness and optimal use of resources is an upsetting one, and embodies the general trend towards a machine-driven, machine-dependent world, where the human brain is fed with scraps of information from multiple sources instead of meaningful curated content.

Contemporary neuroscience is advancing at a pace that would have been unthinkable a few decades ago. Recently, new experimental evidence has suggested that certain kinds of long-term memory are stored in the cortical region of the brain rather than in the hippocampus (Max-Planck-Gesellschaft, 2013), slightly dating the excellent information provided by the author in the ninth chapter of the book. Memory and its implications for cognition are the main theme of this chapter, and Carr elaborates on the accumulated knowledge about the workings of the neural network in the creation of short-term, working memory and long-term memory, and their implications for cognition and learning.

Studies suggest that working memory is a short-term, immediate memory used for the gathering and processing of information available in a short span of time. This immediate memory is responsible for the senses, for example, and its neural processes are different from those of long-term memory. Evidence suggests that moving a memory from short-term to long-term storage requires meaningful repetition of the task, the old-style memorization, and here lies one of the main arguments of the book: the Internet, with all its bells, whistles, fireworks, multitasking and fast-moving culture, is making users rely solely on short-term memory, with hardly any interaction being either meaningful or insistent enough to make it to long-term memory. Even if that decline of memorization can be accounted for by the extraordinary amount of information available online, the author is clearly a critic of the situation, and the title of the book derives exactly from this mental shallowness, which the continued use of the Internet tends to cause.

The deep reading of books, in contrast, offers just the right measure of meaningfulness and repetition required for long-term memories to consolidate, for the neural processes in question to properly take place. Carr refers to historical examples of the praise of memory, which has been continually degraded since the advent of the thinking machines, and finishes the chapter warning against the error of assuming everything is measurable and reducible to bits and bytes. Human culture is much more than what Google can digitize and divide into fast-to-read, easy-to-use excerpts.

HAL, the defective electronic brain in 2001: A Space Odyssey, makes its comeback in the last chapter of the book as the incarnation of the all-powerful, all-knowing thinking machine, product of a utopian or dystopian (depending on the reader’s personal opinion) society where humans outsource their intellectual powers to the high speed and immense memory storage of machines in the name of efficiency and stability. This scenario is becoming all too familiar, as the advances in artificial intelligence research and the increasing reliance on the Internet for every kind of human task combine to “make our lives easier”.

The search for an artificial intelligence as the holy grail of technology and human inventiveness poses a problem depicted in innumerable works of literature: what would the relationship between the human and the machine be if the latter were able to emulate or surpass the functions of the former? This question is becoming increasingly important due to developments in research, not only in technology but also in psychology. Long-standing evidence points to an eagerness of the human mind to unconsciously project its own emotions onto others in the communication process, even if those others are simple machines following a scripted set of rules, such as computer programs. Carr exemplifies this by recalling the events following the publication of the first program that could, apparently, hold a textual conversation with a human being by following a relatively simple set of instructions and rules. Developed by the MIT computer scientist Joseph Weizenbaum and presented in 1966, the program would give what seemed to be meaningful answers and pose questions based on an analysis of the input given by the users. That analysis was only syntactical, not semantic, but it was well composed enough to trick the human mind (provided it didn’t understand the programming behind it) into thinking the machine had a real personality and was interacting in a meaningful way. This tricking of the human mind results from the human capacity for empathy, the projection of human emotions onto others. The program, in short, caused a real stir in the American society of the time, and its large-scale application in the psychological field was even considered. The principles behind its programming also drew on Noam Chomsky’s generative grammar studies, and it represented a line of enquiry in artificial intelligence research, much to the displeasure of Weizenbaum who, like Turing before him, ended up warning against the risks of technology overuse.
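Weizenbaum’s program worked by purely syntactic pattern matching: it recognized keywords and reflected the user’s own words back in canned templates, with no understanding at all. A minimal sketch of that idea might look like the following; the patterns and responses here are invented for illustration and are not Weizenbaum’s original script:

```python
import re

# Toy rules: a regex pattern and a response template that reflects
# the matched fragment of the user's input back at them.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection based on the first matching pattern."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            # Echo the user's own words, stripped of trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # default when nothing matches

print(respond("I am tired of computers"))
# Why do you say you are tired of computers?
```

The point of the sketch is how little machinery is needed: the program never models meaning, yet the reflected phrasing is enough to invite the empathetic projection described above.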

The epilogue of the book offers a final thought on what is at stake in the event of an HAL-praising society: the increasing minimization of what is human, like our inventiveness and difference, in favor of increased efficiency. The human element washes away, replaced by machines and their sets of instructions which, even if programmed by humans, reflect a particular view of society and of humanity in the image of the machines themselves.

Carr is not, however, a Luddite. He doesn’t abhor technology, and the realization that led him to write this book wasn’t enough to make him cut his ties to it. He describes the process of writing it as a fight against the entrenched habits his fully connected life had created in his brain, and the move with his wife from the suburbs of Boston to the mountains of Colorado, to allow for concentration and focus, as a difficult but necessary one. He reduced his connections to a minimum for the time needed to write the book, but he confesses in the end that he is sliding back into his former connected life: so many are the advantages of the Internet, so many the possibilities it opens, how can anyone resist? Today his personal blog, Rough Type, features an average of two or three relatively lengthy posts every week.

How can we resist the appeal of our technology? There is no doubt that our tools have changed us since we first started using them; their evolution is part of our own evolution. Different tools have extended human capacities and molded our behaviors and thinking processes to extents we are only now beginning to understand, even if that realization dates from millennia ago. Our use of tools has always had the effect of alienating us from nature, creating layers of distance between the outside world and us. Our brains evolved in that direction, finding new ways to overcome our own weaknesses and do the most natural thing for a living form: survive in the wilderness of the outside world.

The role of technology is, in the end, the main theme of this book. The twists of evolution have brought us to the present times, where our tools seem to be so powerful and complex that they will apparently replace our brains in most of the intellectual tasks we have performed until now. Whether this is a good or a bad thing we don’t know; we can only speculate. What is undeniable is that the evolution of our brains, and of us as a species, doesn’t stop, and changes are happening every day.


Carr, N. (2010). The Shallows: How the Internet Is Changing the Way We Think, Read and Remember. Atlantic Books.

Holt, J. (2011). Smarter, Happier, More Productive. [Review of the book The Shallows: How the Internet Is Changing the Way We Think, Read and Remember.] London Review of Books, 33(5), 9-12.

Max-Planck-Gesellschaft (2013, August 27). Long-term memory stored in the cortex. ScienceDaily. Retrieved August 30, 2013, from /releases/2013/08/130827091629.htm



e-Learning ON

“turn and face the strain” sings David Bowie in his brilliant song Changes. And, indeed, that’s what I did. Changes were set upon me recently, and that’s why this blog hasn’t had the attention it deserves lately.

I got a new job in Paris, starting on the 1st of July, as a web development consultant working for a large client in the financial sector. The work is demanding, but fits well with my experience and knowledge, even though I come from an education background. I’m learning a lot from the experience, since it’s the first time I’m working for a large corporation and have the chance to liaise with people much more knowledgeable in the area than myself.

What is mind-boggling, really, is how difficult it turned out to be to find a place to live in Paris. The city is crowded, the real estate is really scarce and expensive, and even getting any response from people who advertise seems to be a matter of extreme luck.

Oh well, all will be solved in due time.

The Practical Use of OERs in Distance Education: Two Examples

For this assignment I decided to build upon the learning design exercise of a previous subject, a putative introductory module for a management course that can be found here. The first unit of the Management 101 module delved into the concept of management in its multiple dimensions, and so I had to follow the same logic to be coherent with the work already done.

The requirement this time was to use OERs and to plan activities and tasks with them, so I had to find something related to business and management at an introductory level that complied with criteria inherent to their nature and implicitly defined by the previous work.

I chose, then, a set of four different OERs, built in a similar way, guided by the same topic and sharing the same title, An Introduction to Business Cultures, hosted on the OpenLearn LabSpace, for the first activity (unit 2.1 of the Management 101 module), and adapted an OER built by the University of Bath (UK), named Leadership and the Organization, for unit 2.2 of the Management 101 module.



What criteria, then, did I follow to choose those specific OERs instead of others?

Apart from the obvious time constraints (the internet is too vast to check all the relevant resources) and coherence (I wanted to build upon the previous work, so I had to find something that would follow the same general guidelines), the criteria were:


Source: The sources for the two OERs are different. The first one is stored on the LabSpace, a public repository for both the OU and the general public, and was built by individuals in what seems like an assignment for a course. The second one is stored in the University of Bath’s OER repository and was built internally by their department for lifelong learning. The criterion here was to diversify the sources to see what added value they could provide for the course;


Validity and Verifiability: In both cases the content appears to be valid and correct, and bibliographic references are provided to support and verify their validity;


Time: The length of the activities was considered due to the nature of the intended activity and its target audience. The units had a length of two weeks, or 14 hours, each, so the OERs should not exceed 5 hours in order to leave room for the other proposed tasks;


Licensing: It was important not only that each OER had a CC license, but that the license allowed derivative works, and that is the case with both OERs;


Adaptability and accessibility: Both OERs are easily adaptable, although in different ways. In the first case there are images and text that can easily be reused with text- and image-processing software; in the second case there are learning objects built with Xerte, an open-source learning-object authoring tool based on the Adobe Flash framework, for which the authors provide the source files to be adapted. In this way both OERs comply with the adaptability criterion.
On the accessibility side, despite being built with a somewhat tricky technology in that regard, the second OER offers options such as different color themes (including high contrast) and changeable font sizes. The first OER has all the typical accessibility options provided by web browsers.

Brave New (Open) World: Some OER Examples

This post is intended to act as a support to the previous three posts in the “Brave New (Open) World” series (posts 1, 2 and 3) dedicated to Open Educational Resources. In the following lines I’ll be reviewing some examples of OERs which I consider of outstanding relevance. My intention, though, is not to rank them by degree of relevance, but simply to point out what makes them exceptional resources for educators and the educational community at large.




WikiEducator is an online community dedicated to the development, stewardship and continuous implementation of OERs, and is itself an invaluable OER for the global educational community. Backed by the Open Educational Resources Foundation, an independent non-profit based at Otago Polytechnic and financially supported by the Commonwealth of Learning, this is a very active community focused on the development of projects towards free and open educational content. They aim higher, however: the scope of their work includes the discussion of OER policies, implementation, best practices and funding, as well as learning design. In their own words, their objectives are (WikiEducator, 2013):

  • planning of education projects linked with the development of free content;
  • development of free content on WikiEducator for e-learning;
  • work on building open education resources (OERs) on how to create OERs;
  • networking on funding proposals developed as free content.


Architecturally this website is a wiki, which makes it content-focused and collaboration-driven. This is a true community of research and learning, and the projects are developed and maintained by the efforts of educators all over the world. Networking is assumed as an objective and an integral part of the whole community concept, making it conform to the nature of internet culture itself and a true example of the connectivist principles for education in the networked world stated by Siemens (2004): “The starting point of connectivism is the individual. Personal knowledge is comprised of a network, which feeds into organizations and institutions, which in turn feed back into the network, and then continue to provide learning to individual. This cycle of knowledge development (personal to network to organization) allows learners to remain current in their field through the connections they have formed.”



OER Commons

The OER Commons project was launched in 2007 by ISKME, the Institute for the Study of Knowledge Management in Education, a private non-profit US-based corporation, and grew to be one of the first results when googling OER, which gives it a high level of visibility. It is an impressive repository with over 30,000 freely accessible and usable resources for education and educational purposes, licensed (when not stated otherwise) under a Creative Commons Attribution-NonCommercial 3.0 license and divided into many categories, levels and ages. There we can find complete syllabi, unit plans, lesson plans, texts, multimedia, quizzes, and all kinds of activities and resources for learners and educators.

This repository is not an isolated effort. It is, in fact, part of a global strategy in Open Education by its supporting entity, ISKME, which provides services in the education market at a global scale, such as consulting and training for other institutions. Their financing model includes not only those services but also public and private donations and grants.

The items are organized in a comprehensive way, and the user is given the ability to tag the different resources and store them in a personal space for easy retrieval at a later time.

Despite the fact that this repository is aimed at “little OER” (Weller, 2010), resources produced at a small scale by individuals or groups/communities, we can also find “big OER” (Weller, 2010) catalogued in the website’s search engine, although they are not stored locally. This variety of resources probably explains part of its success.


OpenLearn & OpenLearn LabSpace

LabSpace is the public OER repository for The Open University’s OpenLearn project. It is noteworthy that OpenLearn began as a two-year project in 2006 but became a full-blown initiative after that. Although OpenLearn offers pre-packaged courses, activities and materials focused on the learner’s needs, those can be reused out of their original context. LabSpace is focused on educators and allows for the public sharing of resources, with online tools for mixing and re-authoring existing resources.

At this moment, though, LabSpace is undergoing a redesign and will only be fully functional again in the summer of 2013, under the name “OpenLearn Works”.

As noted in the second post of the series, when the focus was sustainability, this is a massive project in terms of both content and funds. Funding comes in part from the William and Flora Hewlett Foundation, but the lion’s share comes from the OU’s budget. The perceived benefits of the initiative are many, and they explain in part the continued success of the project:

  • “Enhancing the reputation of The OU. Providing OER is seen as innovative and altruistic, placing the provider with other high visibility providers with strong reputations such as MIT and Stanford. In the case of OpenLearn the external approval was also reflected in awards such as the IMS Global Platinum Award in 2007.
  • Extending the reach to new users and communities. Access to the OpenLearn content has been truly global with over 6 million unique visitors to date, the majority from outside the UK. Visitors have come from over 225 different countries/territories including from such places as the Vatican, Guinea-Bissau and the Marshall Islands.
  • Recruitment of students from those who come to see OpenLearn. OpenLearn offers a space where users can see the approach and structure of material before registering, or not, as they choose. A reasonable estimate of recruitment influenced by OpenLearn is the approximately 10,500 students since launch who have made use of OpenLearn before they register for a course at The OU in the same online session.
  • Supporting widening participation. A range of activities have been established linked to the free and open resources available on OpenLearn, for example to introduce groups of disadvantaged learners to the process of learning without the expense or delay in needing to waive fees or set up separate access to learning materials, and setting up special access to OpenLearn content for learners with restricted access in prisons.
  • Providing an experimental base of material for use within the university. The open material has provided a catalyst for reuse and sharing within the organization and has been used as the basis for experiments in semantic search, automated conversion of learning material to speech, and feeds of regular sections of content.
  • Accelerating uptake and use of new technologies. OpenLearn used technologies that were just starting to be rolled out for student use in the University, for example XML authoring and the Moodle learning environment. These needed to be developed more rapidly to meet OpenLearn timescales and were then released openly to the community as well as feeding back into The OU.
  • Acting as a catalyst for less formal collaborations and partnerships. OpenLearn provides a way to encourage joint activity with smaller organisations where previously these may only have been considered when external funding to support the activity was available.” (McAndrew & Lane, 2010)





McAndrew, P., & Lane, A. (2010). Newsletter – Issue 18 – The impact of OpenLearn: making The Open University more “Open”. Retrieved May 21, 2013, from

Siemens, G. (2004). Connectivism: A learning theory for the digital age. Retrieved from

Weller, M. (2010). Big and Little OER. In Open Ed 2010 Proceedings. Barcelona: UOC, OU, BYU. Retrieved from





Brave New (Open) World: Licensing of Open Educational Resources

This is the third and last post on Open Educational Resources. On the first and second the focus was, respectively, context and sustainability, and this one will be dedicated to licensing. Licensing is a cornerstone of OER and a deep knowledge of its mechanisms is essential for the success of any OER project.


Open Educational Resources, as noted in previous articles of this series, are difficult to define in just one sentence, since the concept is very general, with a lot of different sub-concepts and practices being part of it, still generating a lot of debate on how to define it properly. Hoyle’s (2009) characterization of “big OERs” and “little OERs” shows us how difficult it is to agree upon a common definition of the concept.

Notwithstanding that fact, what is certainly agreed upon is the “open” character of OER, and in practice that translates into two main aspects: the accessibility of the shared resources and their level of openness.

By accessibility we mean the use of open or de facto standards for the files we share, so that users of the shared resources can open them in open-source (preferably) or widely used applications and, when feasible, if sharing non-editable files, sharing their editable precursor or source files as well.

The level of openness brings up the question of the terms under which we want to share our own resources. Although it may seem pretty straightforward to license everything under an open license such as Creative Commons, some thought must be put into the process. What level of freedom should the consumer of the OER be given? Freedom to use is granted, but should he/she be able to modify it? To adapt it to his/her needs? To mix it with other OERs? And if so, should he/she be able to distribute it in a new format or repackage it? Is there a need for attribution? If so, and the original OER is repackaged, does the attribution requirement still hold? If it is distributed in a format that is not open (such as a non-editable PDF), should the consumer be allowed to alter it by any means? Some of these questions are addressed by Creative Commons (2006), which is widely thought of as very adequate for licensing content.

However, the resources that are part of the OER concept are not only learning contents, but also tools and implementation resources (OECD, 2007). Software can fall under the first two categories, since it can be both the content to be shared and the tool needed to interact with the content. Creative Commons doesn’t provide a good solution for the open licensing of software, due to its unique characteristics, and therefore another license should be applied. The Open Source Initiative (2013) maintains the list of licenses that completely adhere to the concept of open-source software.

Licensing must be a major concern of both the providers and the consumers of OER, because it protects intellectual property on the one hand and legitimizes use on the other, preventing abuses such as unauthorized for-profit exploitation.


Creative Commons. (2006). Choosing a license.

Downes, S. (2007). Models for Sustainable Open Educational Resources. Interdisciplinary Journal of E-Learning and Learning Objects, 3, 29-44. Retrieved from

Hoyle, M. (2009). OER And A Pedagogy Of Abundance

OECD (2007), Giving Knowledge for Free: the Emergence of Open Educational Resources


Open Source Initiative (2013) Open Source Licenses.

Brave New (Open) World: The Sustainability of an Open Educational Resources Ecosystem

This is the second post in a series about Open Educational Resources. In the first post the focus was on contextualizing the OER trend within the larger “openness” movement that has been characterizing the digital age; in this one I’ll consider how such projects can thrive if they are based on free resources, how they are sustainable, and what we mean by that.


Every enduring human activity needs some degree of sustainability, be it a biological demand, a socio-economic arrangement or something else. Indeed, the development and complexification of humankind (as well as of other organisms and systems) is based on their sustainability. Sustainability is, after all, the capacity to endure (Cobb, Halstead & Rowe, 1995).

After contextualizing Open Educational Resources in terms of their conceptual definition (which, as we noted, is far from unanimously agreed upon), we now intend to discuss their sustainability. There is little discussion about whether OER are useful or beneficial to the spreading of knowledge, based on the principle that knowledge is a common heritage of humankind and should be available to everyone at close to no cost; but we can’t be naïve and assume there are no costs or drawbacks simply because we don’t see them, or pretend not to. This particular field is, moreover, extremely permeable to other forces that come into play, such as politics, economics and finance, and all of those have varying degrees of influence on how, why and with whom knowledge is shared.

We are going to focus here on what Martin Weller (2010), citing Hoyle (2009), classifies as “big OERs”: large-scale OER projects whose quality is generally excellent, since they are backed by institutions able to control their development and quality standards, as opposed to “little OERs”, individual resources produced at small cost and whose quality is usually assessed by informal peer review or through use. Examples of the former would be MIT’s OpenCourseWare, the UKOU’s OpenLearn or the Stanford Encyclopedia of Philosophy; examples of the latter would be the individual resources we find in repositories such as OER Commons.

Downes (2007) elucidates by providing cost figures for OERs such as OpenLearn and the Stanford Encyclopedia of Philosophy: roughly $190,000 USD per year for the latter and $3 million USD per course for the former, a whopping $600 million USD per year, 40% of the UK Open University’s yearly budget. These figures are, obviously, from before 2007, and we’d expect them today to be, if anything, higher. And where do those costs arise? In the Stanford Encyclopedia’s case, “the bulk of the costs are in staffing ($154,300 USD) with contract programming; travel and expenses; computer services and overhead taking up the rest” (Downes, 2007), while in OpenLearn’s case the development and maintenance costs are included in the $3 million USD per course mentioned earlier.

“It becomes clear that by ‘sustainable’ we cannot mean ‘cost-free’, and indeed, we may be forced to agree with Walker (2005) that the production of OERs may entail a large scale investment. Rather, with Walker, we note that by ‘sustainable’ we must mean “…’has long-term viability for all concerned’ – meet provider objectives for scale, quality, production costs, margin and return on investment”. This is significant: for after all the consumer of resources obtains the resource for free, then the provision of the resource must be sustainable (whatever that means) from the provider perspective, no matter what the benefits to the consumer.”(Downes, 2007)

Even though OER adoption is increasing, the decision to move forward cannot be based solely on an a priori perception of OER costs. Free or cheap as the resources may be, there are other associated costs that mustn’t be overlooked, such as implementation, staff training, hardware and support. We can draw a parallel with the open-source software world, which is why it is necessary to consider a “total cost of ownership” for OER adoption. The good news: that same parallel is encouraging, as the successful adoption of Linux in the enterprise server world shows.

The benefits of OER are not the same for every institution, be it a producer, a consumer, or both. Sustainability must also be understood as the attainment of the strategic objectives of each particular institution, which will necessarily differ from one institution to another. “Thus ‘sustainable’ in this instance may mean not merely financially cheaper, but capable of promoting wider objectives” (Downes, 2007). On this same topic, David Wiley (2006) states:

“open educational resources projects must find two unique types of sustainability. First they must find a way to sustain the production and sharing of open educational resources. Second, and of equal importance, they must find a way to sustain the use and reuse of their open educational resources by end users (whether teachers or learners)”.

Another approach to sustainability that shouldn’t be ignored is its non-economic dimension. We cannot measure the sustainability of OER just in terms of net gain or loss, but have to consider its social and philosophical implications. Institutions may be driven not by profit but instead by their social responsibilities and missions (and many are), and for those the sole purpose of contributing to the development of humankind by sharing knowledge may be their measure of sustainability.

Every OER project, whatever its size, objectives and sponsors, is naturally unique; thus the different dimensions of its reality, and the models adopted for its running and completion, are equally unique and particular to it. Models of funding, technical approaches, content and staffing are diverse, depending on the varied types of funders (governments, NGOs, pan-governmental programs, etc.), technical requirements, content to be produced or shared, organizational aspects, and so on.


Cobb, C., Halstead, T., & Rowe, J. (1995). The genuine progress indicator. San Francisco, CA: Redefining Progress. Retrieved from

Downes, S. (2007). Models for Sustainable Open Educational Resources. Interdisciplinary Journal of E-Learning and Learning Objects, 3, 29-44. Retrieved from

Hoyle, M. (2009). OER And A Pedagogy Of Abundance

Yuan, Li; Macneill, Sheila; & Kraan, Wilbert (2008). Open Educational Resources – Opportunities and Challenges for Higher Education.

Walker, E. (2005). A reality check for open education. Utah: Open Education Conference. [slides]

Weller, M. (2010). Big and Little OER. In Open Ed 2010 Proceedings. Barcelona: UOC, OU, BYU.

Wiley, D. (2006) On the Sustainability of Open Educational Resource Initiatives in Higher Education,

Brave New (Open) World: Contextualizing Open Educational Resources

This is the first of a series of posts in which I’m going to analyse Open Educational Resources as one of the fundamental changes to the education landscape in the digital age we live in. In this first post I’m going to focus on the context of OERs and why they are here.

Open Educational Resources (OER) are a dream come true for educators all around the world. Who can deny that having plenty of educational resources at hand that can be used as-is, repurposed and redistributed at no charge is a good thing? Who has never felt the need for more resources than those provided in textbooks? For that matter, who among us hasn’t had to build a syllabus from scratch, gathering resources anywhere, even daring to cross the shady borders of fair use of copyrighted materials, or plainly breaking the law?

What are we talking about when referring to OER? The terms Educational and Resources are self-explanatory enough, but what do we mean by Open? Let’s take a deeper look at that.

The term “Open” in this context is related to a movement for “openness” that has been pervading various fields of knowledge and society, such as finance (openness in finance), governance (open government), education and research (open education, open educational resources, massive open online courses, opencourseware, open textbooks, open access) and technology (open-source software, open hardware, open networking, open standards). It is important to note, though, the different ways in which the word is used in the different fields, sometimes meaning “transparency”, “free”, “collaborative”, “accessible” or any combination of those. This “openness” movement seems to be related to the advent of the digital age, the network society, where the balance between the speed and cost of communicating and transmitting knowledge, along with a growing culture of empowerment of individuals in terms of content creation and consumption, is very different from the previous “analog” age, allowing for an effective sharing of knowledge with mutual benefits. Moreover, the “openness” movement is conceptually related to a philosophy that sees knowledge as a social good belonging to everyone, and hence something that should be shared as broadly as possible to close the developmental gap at a global scale.

The multiplicity of meanings of the word “open” makes it difficult to agree upon a definition of OER, fueling the debate among academics. According to Yuan, Macneill & Kraan (2008), “the term was first introduced in a conference hosted by UNESCO in 2000 and was promoted in the context of providing free access to educational resources on a global basis”, but they do not give any precise and authoritative definition of OER. The same authors mention as well that the most widely accepted definition of OER (a working definition, that is) is the OECD’s (OECD, 2007):

“digitised materials offered freely and openly for educators, students and self-learners to use and re-use for teaching, and research”

This is a somewhat limited definition of OER, because it appears to restrict itself to content that can be shared through electronic means. That content can be varied, and includes not only full courses, courseware, content modules, learning objects, collections and journals but also lesson plans, syllabi and other educational content. Yet this is only one of the three areas of “resources” stated by the OECD (2007), the learning content. The other areas are tools, i.e. software to support the development, use, reuse and delivery of learning content, including the search and organization of content, content and learning management systems, content development tools and online communities; and implementation resources: intellectual property licenses to promote the open publishing of materials, design principles of best practice and the localization of content (OECD, 2007). Downes (2007) gives us a wider view of the OER concept by reminding us of the greater scope of educational resources usually needed to fulfill education beyond mere “digitized materials”, citing a list from a UNESCO report:

Visiting lecturers and experts

Twinning arrangements providing for the international exchange of students and academic staff

Imported courseware in a variety of media

Externally developed sponsored programs

Inter-institutional programmes developed collaboratively

Information resources of the Internet (UNESCO, 2002)

The debate on the nature of OER, and on what should or should not be considered part of it, is not ending soon. Opinions vary, but the core concept of openness is always there, and it has meant an inexorable change for higher education since its appearance.


Downes, S. (2007). Models for Sustainable Open Educational Resources. Interdisciplinary Journal of E-Learning and Learning Objects, 3, 29-44. Retrieved from

Gurell, S. (author) & Wiley, D. (editor) (2008). OER Handbook for Educators 1.0.

OECD (2007), Giving Knowledge for Free: the Emergence of Open Educational Resources,

UNESCO (2002). Free access to 2,000 MIT courses online: A huge opportunity for universities in poor countries. Paris,

Yuan, Li; Macneill, Sheila; & Kraan, Wilbert (2008). Open Educational Resources – Opportunities and Challenges for Higher Education.


Open Educational Resources: Cases from Latin America and Europe in Higher Education

Inamorato, A., Cobo, C., & Costa, C. (2012). Open Educational Resources: Cases from Latin America and Europe in Higher Education. Niterói. Universidade Federal Fluminense Publishing. Retrieved from

This study, part of the OportUnidad Project sponsored by the European Commission under the EuropeAid ALFA III programme, which aims to promote the adoption of open educational practices in higher education in Latin America, describes a series of case studies of universities in Europe and Latin America and their experiences in implementing those practices, contextualizing OERs in a real-world environment. The study’s recency (it was published in December 2012) gives it special relevance, since it offers a perspective very close in time to the practical implementation of Open Access and OERs, reporting on their results and experiences. Beyond the increase in knowledge sharing as one of the great advantages of that implementation, the conclusion notes the diversity of practices, processes and technological choices in implementing OERs, making for different and unique experiences at each institution.