Access2OER:Case Studies version 1

From Bjoern Hassler's website

The original OERWIKI seems to be offline (December 2012). The Access2OER discussion pages are preserved here for reference! The final report in PDF is available here: Access2OER_Report.


1 Case Studies

In requesting stories and solutions, some contributors provided quite extensive descriptions of their projects, and we have collected them in this chapter.

2 The 'Connectivism and Connective Knowledge' Solution (Stephen Downes)

Stephen Downes kindly wrote a comprehensive recap of the project, which we include at this point. This contribution fits into the present report particularly well, as it uses our earlier classification of access issues to draw out various elements of this particular solution.

2.1 The context

'Connectivism and Connective Knowledge' (CCK08) was a course run by George Siemens and Stephen Downes in the fall of 2008. It was offered through the University of Manitoba as a credit course, but the course was also offered for free to any person interested. It came to be called a MOOC - a Massive Open Online Course.

Participants. George Siemens and Stephen Downes acted as instructors. Logistical internet support was offered by the University of Manitoba, by Dave Cormier, and by Stephen Downes. Overall, 24 students registered and paid fees to the University of Manitoba. Another 2,200 people signed up for the course as non-paying participants. All aspects of the course were offered to both paying and non-paying participants, with the exception that paying participants submitted assignments for grading and received course credit.

Participants registered from around the world, with an emphasis on the English-speaking and Spanish-speaking world. The course was offered in English; Spanish-speaking participants translated key materials for their own use. The course attracted a wide range of participants, from college and university students to researchers, professors and corporate practitioners.

Solution. The course was designed to operate in a distributed environment and did not centralize on a single platform or technology. With the assistance of University staff and Dave Cormier, George Siemens and Stephen Downes set up the following course components:

  • a wiki, in which the course outline and major links were provided
  • a blog, in which course announcements and updates were made
  • a Moodle installation, in which threaded discussions were held
  • an Elluminate environment, in which synchronous discussions were held
  • an aggregator and newsletter, in which student contributions were collected and distributed (a minimal sketch of such an aggregator follows this list)
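
To illustrate the aggregator-and-newsletter component, here is a minimal sketch in Python, assuming a hand-maintained list of participant feed URLs and the third-party feedparser library. It is illustrative only - not the software actually used for CCK08.

  # A minimal sketch of a feed aggregator compiling a daily digest.
  # Illustrative only -- not the software actually used for CCK08.
  # Assumes the third-party 'feedparser' library (pip install feedparser).
  import calendar
  import time
  import feedparser

  FEEDS = [  # hypothetical participant blogs
      "http://participant-one.example/feed",
      "http://participant-two.example/feed",
  ]

  def entries_from_last_day(url, now=None):
      """Return (title, link) pairs published in the last 24 hours."""
      now = now or time.time()
      recent = []
      for entry in feedparser.parse(url).entries:
          published = entry.get("published_parsed")  # UTC struct_time
          if published and now - calendar.timegm(published) < 86400:
              recent.append((entry.get("title", "(untitled)"),
                             entry.get("link", "")))
      return recent

  def build_newsletter():
      lines = ["Daily digest", "============"]
      for url in FEEDS:
          for title, link in entries_from_last_day(url):
              lines.append("* %s\n  %s" % (title, link))
      return "\n".join(lines)

  if __name__ == "__main__":
      print(build_newsletter())  # in practice: mail this to subscribers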

The instructors encouraged students to create their own course components, which would be linked together with the course structure. Students contributed, among other things:

  • three separate Second Life communities, two of which were in Spanish
  • 170 individual blogs, on platforms ranging from Blogger to edublogs to WordPress and more
  • numerous concept maps and other diagrams
  • Wordle summaries
  • a Google group, including a separate group for registered participants

2.2 Key Barriers

The key barriers from our earlier chapter on "Issues and classification" are used to draw out various aspects of the course.

Access in terms of awareness. (Lack of awareness is a barrier to OER.) Given that the course attracted 2,200 people, the lack of awareness must have been addressed in some fashion. However, the course had not been widely advertised; it had only been posted in George Siemens' and Stephen Downes' newsletters, which in turn are leading sources of information for a community that would be interested in the course.

Access in terms of local policy / attitude. (Do attitudes or policies pose barriers to using OER?) One of the major attractors was that the course was offered by the University of Manitoba. It was necessary to convince the university to offer an open course, which George Siemens managed by adding the enrollment component. In one sense, the paying students funded the non-paying students; in another sense, offering the course as an open course created sufficient marketing to attract the paying students. The University was satisfied with this result and will be employing the same model again.

Access in terms of languages. (How well does the user speak the language of the OER?) There was no multilingual access. However, because the instructors encouraged participants to create their own resources, they created the conditions which enabled a large self-managed Spanish-language component of the course.

Access in terms of relevance. (Is the OER relevant to the user?) The design of the course - as a distributed connectivist-model course - created a structure in which the course contents formed a cluster of resources around a subject area, rather than a linear set of materials that all students must follow. Because participants were creating their own materials, in addition to the resources found and created by George Siemens and Stephen Downes, it became apparent in the first week that no participant could read or view all the materials. The instructors made it very clear that the expectation was that participants should sample the materials, selecting only those they found interesting and relevant, thereby creating a personal perspective on the materials that would inform their discussions.

Access in terms of licensing. (Is the licensing suitable / CC?) All course contents and recordings were licensed as Creative Commons Attribution-NonCommercial-ShareAlike.

Access in terms of file formats and disability. (Are the file formats accessible?) The instructors did not try to provision access in all formats; rather, they employed a wide variety of formats for different materials and encouraged mash-ups, translations, and other adaptations.

Access in terms of infrastructure. (Lack of power/computers makes access hard.) The course experienced a full range of issues. Basic course material was provided in HTML and plain text; however, various course components required more bandwidth. The use of UStream proved unworkable, as the bandwidth requirements were too great even for the instructors. Skype worked well for planning and recording, but not for instructing. Elluminate was effective with limited bandwidth, but had limits on the number of seats that could be offered (it was capped at 200, though to be fair, Elluminate said they would extend this as needed). MP3 recordings of all audio were made available for download. Second Life was accessible only to those with sufficient bandwidth and a suitable platform. Essentially, the structure of the course provided a wide range of access types, making it possible for people with limited infrastructure to participate, while still employing more intensive applications.

Access in terms of discovery. (If the OER is hidden, not searchable, not indexed, it's hard to find.) Though search was provided, the major resource related to discovery had nothing to do with search. The provision of a daily newsletter aggregating and distributing course content proved to be a vital link for participants. A steady enrollment of 1,870 persisted through the duration of the course. In evaluations and feedback, participants said that the newsletter was their lifeline. A full set of archives was provided, allowing people to explore the material chronologically and to make up days they may have missed.

Access in terms of ability and skills. (Does the end user have the right skills to access?) One of the things noticed was that, by combining participants from a wide range of skill sets, people were able to - and did - help each other out. This ranged from people answering questions and providing examples in the discussion areas, to people commenting on and supporting each other's blogs, to those with more skills setting up resources and facilities, such as the translations and Second Life discussion areas.

2.3 Scalability and transferability

How might this solution "scale"? The connectivist model employed in this course might offer a unique approach to the problem of scalability. The instructors could not provision everything that was needed for 2,200 students, nor did they try to. Rather, they created conditions, and encouragements, under which participants would provide additional resources for themselves. The role of the instructors and facilitators is essential in this model, but this role is not to provide solutions but rather to establish a basic structure.

Regarding marking and recognition, the course offered an insight that may prove useful in the future. While 24 students were graded by the University of Manitoba, the instructors did receive (and grant) a request from a student in another country to be assessed and graded by their own institution. All assignment descriptions were displayed as part of the open course, and the assessment metric was also distributed, so other institutions could know everything needed in order to provide evaluation and feedback.

What questions should we be asking about this solution that will add to our understanding of enabling access to knowledge and learning resources? The main questions are in the area of applicability: would this model work in other areas? Would it work in other communities?

In addition, Stephen Downes is exploring the question of whether this approach can be supported with technology designed specifically for this model, for example, the creation of serialized feeds to automatically create and conduct cohorts through the course material.

Implications and adoption: What are the implications of this solution for OER and enabling access to knowledge and learning? The course - which came to be known simply as CCK08 - was a landmark in open access because, while providing the formal requirements of open learning - course structure and content, recognition, assessment and credentials - it nonetheless operated on a very different model from other OER initiatives. Materials for the course were not 'produced' in the traditional sense - rather, the instructors created a framework, populated that framework with open materials already extant on the web, added some commentary and videos of their own, conducted open online sessions and recordings, and created the infrastructure for wide student participation.

3 The RECOUP manual

This is a description of the "RECOUP manual", http://manual.recoup.educ.cam.ac.uk/, provided by the present author.

3.1 About RECOUP and the manual

RECOUP itself is the "Research Consortium on Educational Outcomes and Poverty", based at the Faculty of Education, University of Cambridge, UK. The research undertaken by RECOUP examines the impact of education on the lives and livelihoods of people in developing countries, particularly those living in poorer areas and from poorer households. Its purpose is to generate new knowledge that will improve education and poverty reduction strategies in developing countries, through an enhanced recognition of education's actual and potential role.

RECOUP is a research partnership involving institutions of the South and the North. The partnership brought together people from varied disciplines, and it became crucial to foster a shared understanding, not only of how to do research, but also of what is meant by research itself.

The "RECOUP manual" itself is an outcome of this partnership: initially, a manual was developed to support workshops that were organised in India, Kenya, Ghana and Pakistan, and then again in India. It then became apparent that the manual would be very useful for spinning out further workshops and further training in the required research skills, and so the lead authors (Nidhi Singal and Roger Jeffrey) decided to turn the manual into an open educational resource: http://manual.recoup.educ.cam.ac.uk/.

Nidhi and Roger write about the manual: "The spirit of dialogue, experimentation and a belief in the value of qualitative research that we developed during the process of refining the manual underpins our desire to share this work. We do not believe the process is over now that the manual is on the web: we hope everyone who reads and uses this material will tell us how it went, and engage with us and other users to adapt and improve it."

3.2 How does the story relate to the access barriers previously discussed?

  • The RECOUP materials are accessible in terms of licensing (CC).
  • They are accessible in terms of language and culture (and, in some ways, specifically developed to bridge and connect research cultures).
  • They are accessible in terms of relevance: The content is highly relevant to the participants in the research consortium.
  • They are accessible in terms of skills: At least within the consortium, the materials become part of the training materials, and appropriate training can be provided.

There are two further aspects I would like to draw out:

Firstly, the process that led to the OER is (in my view) exemplary in terms of OER in a developmental context. The OER was created because there was an identified need to provide training. The process itself involved good communication and North-South partnership, leading to a resource that is appropriate and suitable for the intended areas. The researchers themselves decided that the best impact would be achieved by opening up the resource and making it as widely available as possible.

Secondly, there is a small addendum to the story concerning formats and infrastructure: the RECOUP site uses MediaWiki (like WikiEducator and Wikipedia), and as such, it offers the same access features, including PDF printing (as mentioned earlier in the discussion by Wayne). Also, all additional documents are available for download, bundled as zip files.

However, when looking at low-bandwidth accessibility guidelines, the standard wiki design ("MonoBook") turns out to be quite large (~130 kB), and the authors wanted to be as low-bandwidth accessible as possible. So they also produced a "low-bandwidth version" of the manual: http://www.ict4e.net/mirror_recoup/index.php?page=Main_Page

You might want to browse both the original site and the low-bandwidth version: you'll see that the low-bandwidth site is faster, even on a good connection. On a slow connection, however, you'll really see the difference. (Or you might like to try this on your mobile phone: even with Opera Mini, the low-bandwidth site is faster.) The same type of technology applies to any MediaWiki site, such as WikiEducator (http://www.wikieducator.org) or Wikipedia.

As a final twist: the computer hosting the low-bandwidth version doesn't need to have a special relation to the site itself: it can be located anywhere, for instance, on the local area network of a university. Pages that have been accessed once remain available, even if the internet connection temporarily drops. (Of course, pages only stay up to date while there is an internet connection, but as soon as connectivity is restored, pages will update again.)

The technology is quite basic, but it would be quite feasible to develop it a little further, so that you always have a version of WikiEducator, Wikipedia, Medpedia, ... running at your institution/school/university. This would be always available, irrespective of whether your internet connection is working or not.
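
As a concrete illustration of this "fetch once, keep a copy" behaviour, here is a minimal sketch of such a caching mirror in Python. It is not the software behind the RECOUP mirror; the upstream URL is simply the example from above, and the design is deliberately simplistic.

  # Minimal sketch of a "cache and keep" mirror (illustrative only;
  # not the actual RECOUP mirror software). Pages fetched once are
  # stored on disk; if the upstream site becomes unreachable, the
  # stored copy is served instead.
  import hashlib
  import os
  import urllib.request
  from http.server import BaseHTTPRequestHandler, HTTPServer

  UPSTREAM = "http://manual.recoup.educ.cam.ac.uk"  # site being mirrored
  CACHE_DIR = "cache"

  class MirrorHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          cache_file = os.path.join(
              CACHE_DIR, hashlib.sha1(self.path.encode()).hexdigest())
          try:
              # Try the live site first, refreshing the cached copy.
              with urllib.request.urlopen(UPSTREAM + self.path,
                                          timeout=10) as response:
                  body = response.read()
              with open(cache_file, "wb") as f:
                  f.write(body)
          except OSError:
              # Offline or upstream down: fall back to the cached copy.
              if not os.path.exists(cache_file):
                  self.send_error(504, "Not cached, upstream unreachable")
                  return
              with open(cache_file, "rb") as f:
                  body = f.read()
          self.send_response(200)
          self.send_header("Content-Type", "text/html; charset=utf-8")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      os.makedirs(CACHE_DIR, exist_ok=True)
      # Browse via http://localhost:8080/ on the local network.
      HTTPServer(("", 8080), MirrorHandler).serve_forever()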

4 Aptivate: The human side to Bandwidth

This case study is contributed by Alan Jackson from Aptivate, an NGO focussing on the global use of ICTs, and particularly on bandwidth and power issues. It responds in part to an earlier comment regarding "...challenges of electricity, connectivity, teachers training, maintenance" and puts forward a few ideas.

On the one hand, we must recognise that there are global disparities in access and these need to be addressed. But at the same time, we should not ignore how we can make better use of what we have available right now.

4.1 Some thoughts about Bandwidth

There is a human side to bandwidth. Bandwidth must be viewed as a shared, and often scarce, resource. If you cannot immediately increase your bandwidth, you can think about how it is used, how it is shared, who uses it and what it is used for. Through effective bandwidth management and optimisation (BMO), the use of existing connections can be vastly improved.

The last ATICS report found that the majority of African universities did not have an effective "Acceptable Use Policy" (AUP). An AUP is an important part of a bandwidth management policy. For instance, is it acceptable within a university for students to download copyrighted music for non-educational purposes while others are unable to download research papers because they are competing for the connection? Users must realise how their actions online affect the access of their colleagues.

It is useful to think of effective bandwidth management as requiring three main elements, which we call the BMO triangle: Policy, Monitoring and Tools. You can read more about this in the Creative Commons book "How To Accelerate Your Internet", where the authors discuss all aspects of the BMO triangle, describing various tools, techniques and approaches; see http://bwmo.net/index.html.

It is useful to use a Content Delivery Chain model as a framework for thinking about bandwidth issues. We refer to this as a chain because success is dependent on the weakest link. It's a simple idea and looks like this:

Content -> Connection -> Local Network -> User

Arguably, it is a mistake to concentrate solely on the Connection and not spend equal effort considering the other links in the chain.

Content. Content providers have their role to play. They must ensure that their content is usable over existing connections. Aptivate has written web design guidelines that describe techniques for optimising on-line content, which we hope are useful to creators of content. These guidelines are available at http://www.aptivate.org/webguidelines/Home.html.

Local Network. BMO, which we already mentioned, is something that needs to be done at the local network level.

Users. Users are also critical. User behaviour is the largest factor determining the effectiveness of any Internet connection.

As an example, using web-based email, like Hotmail or Yahoo, can add a massive overhead to the size of an email, sometimes multiplying its size by a factor of 100 or more. A university in the UK might typically have email take less than 5% of its Internet bandwidth. However, an institution that relies solely on web-based email - and there are many - can see 25% or more of its bandwidth taken by email.

Users can empower themselves by using bandwidth-optimising tools. For instance, Aptivate hosts a free web-based service called Loband that reformats any web page into a text-only form that radically reduces its size. Loband is available at http://www.loband.org.
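
As a rough illustration of the kind of reformatting such a service performs, the sketch below reduces a page to its text and link targets using only the Python standard library; the real Loband service is, of course, considerably more sophisticated.

  # Rough sketch of Loband-style reformatting: reduce a page to plain
  # text plus its link targets. Illustrative only; the real service
  # handles forms, images, encodings etc. far more carefully.
  from html.parser import HTMLParser
  import urllib.request

  class TextOnly(HTMLParser):
      SKIP = {"script", "style"}  # content never shown to the reader

      def __init__(self):
          super().__init__()
          self.skipping = 0
          self.parts = []

      def handle_starttag(self, tag, attrs):
          if tag in self.SKIP:
              self.skipping += 1
          elif tag == "a":
              self.parts.append("[%s] " % dict(attrs).get("href", ""))

      def handle_endtag(self, tag):
          if tag in self.SKIP and self.skipping:
              self.skipping -= 1

      def handle_data(self, data):
          if not self.skipping and data.strip():
              self.parts.append(data.strip() + " ")

  def text_only_version(url):
      with urllib.request.urlopen(url) as response:
          html = response.read().decode("utf-8", errors="replace")
      parser = TextOnly()
      parser.feed(html)
      return "".join(parser.parts)

  if __name__ == "__main__":
      print(text_only_version("http://www.aptivate.org/"))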

Adobe also offers a service for doing something similar with PDF files. For OER, we may want to think about transcoding services for other types of media (video, audio, composite learning objects, etc.), which many OCW/OER providers are offering.

Finally, as food for thought and an example of effective bandwidth use, we may download the entire works of Shakespeare as compressed text, which is 2 MB, from this URL: http://www.it.usyd.edu.au/~matty/Shakespeare/shakespeare.tar.gz.

For the same bandwidth, we can only download six average web pages (see http://www.websiteoptimization.com/speed/tweak/average-web-page/ for web page size information).

(Just in case you cannot open a .tar.gz file, see http://www.brouhaha.com/~eric/tgz.html .)
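
(Alternatively, a few lines of Python's standard library will unpack the archive; this assumes the file has been saved as shakespeare.tar.gz in the current directory:)

  # Unpack a .tar.gz archive with the Python standard library,
  # in case no desktop archive tool is to hand.
  import tarfile

  with tarfile.open("shakespeare.tar.gz", "r:gz") as archive:
      archive.extractall("shakespeare")  # extracts into ./shakespeare/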

5 The Global Grid for Learning

This case study was provided by Theo Lynn from Dublin City University. Dublin City University is a partner on the Global Grid for Learning initiative with Cambridge University Press, Arizona State University, CARET (University of Cambridge) and Obeikan Research and Development. The Global Grid for Learning initiative (GGfL) is attempting to address many of the issues mentioned already by building a digital content pipeline to connect a billion digital resources to education over the next 10 years.

Regarding an earlier comment, Theo notes that the "travel well" concept is a tough nut to crack. One of the ways GGfL is dealing with it is to atomise content into learning assets and structured learning objects instead of large aggregate units. The more atomised or granular the resource, the better it seems to travel, e.g. images, audio, video, animations. However, GGfL also recognises the need to provide scaffolding to allow users to shape this content to suit their local needs.

There are three challenges that GGfL is encountering:

  1. We need to strike a balance between commercial content, free but not open content, and open content, and provide the system, repository and enabling workflow process to distribute this in a device-agnostic, bandwidth-conscious (BMO) way. Ideally content needs to be local; unfortunately neither content nor metadata is crosswalked for local context. Similarly, there are issues about cultural appropriateness. To get to a billion resources, we assume 80-90% will be free or open resources.
  2. Many countries worldwide want content but have no way of surfacing it. As such, we need to provide a hosted discovery, exchange and delivery system. Similarly, we need to convince commercial publishers to price at a micro-object level and to index such pricing to the economic capacity of the target country.
  3. Even when content and systems are provided, teachers and learners often lack the capacity to teach using the content or the systems, or to teach learners how to use the content and the systems to learn. Relating to point 1 above, the capacity to develop local content is also limited.

The GGfL project solution, if it is a solution, is to provide a central content repository and federated brokerage system, with common file and metadata standards, transcoding tools, etc., for commercial, free and open sources. To deal with free, as opposed to open, content, GGfL has had to cater for two options for contributors: its own licence or Creative Commons. To date, the focus is on attracting commercial publishers, as they will be the hardest to get on board for competitive reasons. So far, GGfL has over 4 million resources. GGfL has built a web service that plugs in to common platforms. Secondly, GGfL is nearly finished building a free centrally hosted portal which has search, discovery, Google Apps for Education and some additional community features. GGfL hopes to extend this to include an LMS over time; however, this has additional cost ramifications. And finally, GGfL is putting together a pro bono training programme (on searching, evaluating, downloading, modifying, describing and exposing content) and a twinning project to get educators to work together to build one piece of content. The site for free and open resources will be free, and GGfL hopes to make a hosted portal with LMS available to schools and colleges in the developing world by funding the project through commercial licences to schools and education systems in the "developed world", and through licences twinning institutions in the developed world with those in the developing world.

GGfL has developed a wide variety of tools to cater for crosswalking to curriculum standards, editing for cultural appropriateness and exchanging content. In some ways, the project may have bitten off a lot by starting in the US and the Arab world nearly simultaneously. However, if GGfL can crack these two regions, the rest may be easier.

6 Ending the Internet Obsession: identifying hybrid information delivery solutions to serve the poor

This case study was contributed by Cliff Missen, the director of the WiderNet Project, a non-profit service group based out of the School of Library and Information Science at the University of Iowa. WiderNet has provided IT training to over 4,000 people in sub-Saharan Africa since 2000. Its volunteers have refurbished over 1,200 computers for partners and have put in over 10,000 hours developing the off-line eGranary Digital Library. The eGranary Digital Library, now installed at 275 locations worldwide, contains over 10 million digital resources that have been copied, with permission, from the Web so that the collection can be made freely available to patrons over local area networks without Internet connectivity.

Cliff's interest is in developing multi-tiered, hybrid solutions to deliver information to the poorest people on the planet. Understanding that what we sometimes call a "digital divide" is really a pernicious economic divide, WiderNet seeks low-cost, high-impact solutions that are locally affordable and sustainable.

Over the last ten years, we have seen hundreds of demonstration projects that deliver a handful of computers and a smattering of Internet connectivity to a handful of people. This is nice, but having seen computers sent to Mars and Internet connectivity delivered to remote sites like the Amazon and Antarctica, we knew all of this was possible. Now we need to stretch ourselves to scale computer access, information access, and IT skills to the billions of people -- health care practitioners, students, policy makers, entrepreneurs -- in the majority of the world.

One of the lessons learned over the years is that there is no one "user" and no single solution. In some places, for some people, electricity is adequate. For others, even for different economic classes in the same location, electricity is highly problematic. For some, Internet connectivity is available but expensive and slow. For others, adequate Internet connectivity is simply impossible without spending millions on infrastructure. (And more often than not, if such a sum were actually available, a community would likely choose to spend the money on health-giving or income-generating investments.) For some organizations that have trained and talented technologists with ongoing salary support, open source software makes sense. For others, off-the-shelf solutions make them productive faster, using common tools with which the broader community is familiar.

It can be aggravating to see how some initiatives, like One Laptop Per Child, purposely conflate access to a computer with access to the Internet. In many communities, the cost of ADEQUATE Internet connectivity is more expensive, per person, than the computers themselves.

Finally, Cliff remains firmly convinced that the best "bang for the buck" for most communities is to build local communication and information networks. External information has its value, but, as the GSM revolution has shown us, most communication needs are local. Building the capacity for sharing locally-generated and locally-stored information is critical. For most institutions in the West, local network traffic is 7-9 times that of Internet traffic -- and, since they own the networks, they don't pay obscene "rent" to Internet bandwidth providers to access the bulk of their information.

Cliff has come to see that the pursuit of Internet-centric solutions automatically marginalizes those who cannot afford it. Our challenge is to develop a hybrid suite of on-line and off-line solutions that meet a wide range of information needs and environmental constraints.

Last month WiderNet installed a computer lab and eGranary Digital Library at the Dalai Lama's schools in Dharamsala, India. The teachers had been struggling with a slow, unreliable shared 1 Mbit Internet connection for more than a year. They were experiencing a common disconnect: all of the external "experts" were telling them that a connection to the Internet was a panacea for their information access and teacher training needs, yet they mostly experienced frustration and limitations with their tiny but expensive Internet connection being shared over a wireless network. Once we set up a 100 Mbit switch and an eGranary Digital Library in their 12-computer lab, they had over 1,200 Mbit of aggregate bandwidth to access millions of documents in the blink of an eye. After opening hundreds of pages within minutes, the teachers told us, "Ah ha! Now we get it! This is what the experts were talking about!"

The ever-growing eGranary Digital Library includes dozens of OER sites and demonstrates how effectively off-line information stores can serve poorly connected communities.

Lately WiderNet has been developing a "Community Information Platform" (sponsored by Intel) which makes it very easy for subscribers to set up and edit an unlimited number of local Web sites, add local content, create Moodle courses, develop Drupal sites, and implement a host of Web 2.0 applications on their eGranary.

Finally, where few people paying for Internet bandwidth in developing countries would think of sharing it freely with their neighbors, several eGranary subscribers share theirs with everyone within reach of their wired and wireless networks. You may like to look at the concept of "eGranary Knowledgespheres" on YouTube: http://www.youtube.com/watch?v=WdOIJWkDaNw

Combining some of the ideas presented so far in this forum, there is the potential for OER, off-line information storage, bandwidth optimization, and asynchronous information updates to create an inexpensive and powerful information delivery platform for a wide variety of institutions in underserved areas all around the world - a sort of "Internet extender." Then mix in solar-powered systems, refurbished computers, low-powered laptops, handheld devices, kiosks, and community centers to scale up the access.

The technical solutions are at hand. Now it's a matter of finding the right mix to serve people appropriately.

7 PMP/PSP (Moyomola Bolarin)

At eLearning Africa 2007 in Nairobi, Kenya, and at a Commonwealth of Learning (COL) workshop also in Nairobi, Moyomola Bolarin demonstrated to the mobile learning group how a child's toy, a Black Hawk portable media player, was used as a platform to make a learning resource accessible to the father where there was no access to a computer. The amount of feedback received thereafter from the African participants was impressive.

Moyomola Bolarin works as a Multimedia/Training Material Specialist at the International Center for Agricultural Research in the Dry Areas (ICARDA), Aleppo, Syria. He conducted an ICT resources pilot survey involving about 120 trainees from about 21 countries - ranging from Afghanistan, Ethiopia, Eritrea, Iraq, Tunisia, Turkey and Tajikistan to Sudan - within the Central and West Asia and North Africa region. The survey analysis revealed that many respondents who do not have access to a computer, or who use a shared computer in the workplace, own mobile phones capable of recording and playing back audio and video, and/or iPods; some had bought PlayStation Portable (PSP) or other portable media player (PMP) devices for their children to play games.

That prompted Moyomola to start converting some of the learning resources initially developed for online/CD-ROM delivery into formats that can be played back on PMPs, PSPs and mobile phones. Thereafter, Moyomola developed a keen interest in portable mobile learning devices, to the point that in any marketplace, airport duty-free shop, supermarket or mall, he would look at the different kinds of portable media players available, their features, and the associated content media for the different players. The disappointing aspect, as a learning specialist and instructional designer, is that there are many different kinds of these devices but little or no structured learning content for them. What you get are games, Hollywood movies, and music.

Then comes the question: why is the educational sector not embracing, adopting and adapting learning content for these technologies as much as the music, movie and gaming industries have? PMP features that make the device a valuable learning tool include:

  • Capability to record and play back MP3 audio and MP4 video.
  • Reading e-books formatted as txt or as digital photo series.
  • 1 GB of internal storage or memory.
  • Memory card slot for additional memory, up to a maximum of 2 GB.
  • AV in and out, which makes it possible to hook it up to TV sets.
  • Capability to record TV programs directly through its AV in.
  • 3-megapixel still camera.
  • Audio and video recording of live events in real time.
  • USB connection to computers.
  • Rechargeable battery: 3-4 hours of playback.

It has been used to play back reformatted audio, video and e-books in both txt format and digital picture series. Looking ahead, one could imagine a situation where instructional videos on any subject, downloaded from open educational resource repositories on the internet, can be viewed over and over again on a PMP/PSP - useful for children and adults alike. Instructors can prepare content in audio or video and provide course guidelines and assignments or discussion points as txt e-books.

Some of these devices cost as little as US$25. Moyomola has personally used his child's own device to record educational TV programs for him when away from the house, which the child can play back later, so he never misses his favorite educational programs. Moyomola has also downloaded online learning resources from the internet and converted them to a format that can be played back on the device. This is the Black Hawk PMP which Moyomola's child actually bought himself with savings from his pocket money.

On several occasions, the parent has used the device to help the child fill gaps from his math class. Whenever the parent noticed that the child had a problem in any area of mathematics or another subject, they would go to YouTube, search for free instructional videos in that area, download them and convert them to PMP format, which the child can then play back as often as he wants. And there are hundreds of gigabytes of instructional videos on YouTube today that can be used to enhance what the child learns in class.
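
The download-and-convert step can be scripted. The sketch below batch-converts videos for a small-screen player, assuming the freely available ffmpeg tool is installed; the 320x240 resolution and the bitrates are illustrative values, not taken from any particular device's specification.

  # Sketch of batch-converting downloaded videos for a small-screen
  # portable media player. Assumes the external 'ffmpeg' tool is
  # installed; resolution and bitrates are illustrative only.
  import pathlib
  import subprocess

  def convert_for_pmp(source, dest_dir="pmp"):
      out = pathlib.Path(dest_dir) / (pathlib.Path(source).stem + ".mp4")
      out.parent.mkdir(exist_ok=True)
      subprocess.run([
          "ffmpeg", "-i", str(source),
          "-vf", "scale=320:240",  # shrink to a typical PMP screen
          "-b:v", "384k",          # modest video bitrate
          "-b:a", "64k",           # modest audio bitrate
          str(out),
      ], check=True)
      return out

  if __name__ == "__main__":
      # Convert every video in a local 'downloads' folder.
      for video in pathlib.Path("downloads").glob("*.mp4"):
          convert_for_pmp(video)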

Well, we still need internet for that, you may say. Yes, but when we come to realize that technology is not an end in itself but a means to an end, we will begin to look into all options and choose the best technology for a given situation.

Effective technology-supported learning is not a matter of using the most advanced technologies. What is important is the ability of the trainer to combine and use the technologies that are available, affordable and comfortable to the learners, with minimal operational cost, in a way that will facilitate and bring about meaningful learning in a given situation.

For example, to develop a workable hybrid technology-based learning strategy for remote community schools that have no ready access to electricity, the internet or computer resources, the use of mobile and handheld digital devices rechargeable by solar energy may be a practical solution.

In many countries there is an internet connection, at least in the country's capital and/or major cities, and all countries have an education management system. Where these conditions hold, Cliff's type of "Community Information Platform" can be combined with hand-held digital player devices. There could be one education/information resource centre in the capital, and/or one in each of the major cities with ready access to the internet, where OER can be downloaded, re-formatted for other digital devices, and made available to users through community learning resource centres, a "Community Information Platform", or even networked schools.

UNESCO might consider initiating a study into the effectiveness of “one PMP or PSP per child”, with a solar energy system that can charge about 50 PMPs centrally at once.