Welcome. We are Aarón Alzola Romero and Elton Barker, from the Open University's Department of Classical Studies. This blog is part of a broader research project exploring the uses (and abuses) of mobile learning in the Arts. Our aim is to examine mobile learning applications; to assess their strengths and weaknesses in terms of user interaction, contribution to learning outcomes, cost and popularity; to identify areas of opportunity and challenges for their future implementation; and to evaluate the impact that mobile learning solutions have on the delivery of Arts courses.

Sunday 10 June 2012

Not just a pretty face

A couple of years ago we were dazzled by the release of image recognition tools such as Google Goggles, which, for the first time, allowed regular users to perform instant image-based web searches by using the camera in their hand-held devices. The technology was touted as a Swiss Army knife of visual recognition (applicable to anything from bar codes to restaurant menus in foreign languages).

A bit of road testing soon made it clear that it wasn't great at distinguishing, say, a dalmatian from a poodle, or a Wedgwood plate from a Clarice Cliff. However, it was very good at doing one thing in particular -- identifying individual artworks and providing their title and artist. This made it quite a useful app to have in those annoying situations when, flicking through a magazine, you come across a famous painting whose artist you can't quite remember. You simply point your camera at the picture, tap on the "Go" button, and hey presto: title, date and artist.
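
For the technically curious, here is a rough idea of the kind of matching that goes on under the bonnet. This is emphatically not Google's actual pipeline (which is proprietary), just a minimal sketch using the open-source OpenCV library: a query photo is compared against a small reference set of known paintings, and the best feature match wins. The file names and thresholds are ours, purely for illustration.

import cv2

# Hypothetical reference set: labels and local image files (illustrative).
REFERENCE_PAINTINGS = {
    "Girl with a Pearl Earring (Vermeer, c. 1665)": "vermeer_girl.jpg",
    "The Scream (Munch, 1893)": "munch_scream.jpg",
}

def identify_painting(query_path, min_good_matches=40):
    """Return the best-matching painting label, or None if nothing matches."""
    orb = cv2.ORB_create()  # fast binary local-feature detector
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, query_desc = orb.detectAndCompute(query, None)

    best_label, best_score = None, 0
    for label, path in REFERENCE_PAINTINGS.items():
        ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        if query_desc is None or ref_desc is None:
            continue
        # Count close descriptor matches as a crude similarity score.
        matches = matcher.match(query_desc, ref_desc)
        score = sum(1 for m in matches if m.distance < 50)
        if score > best_score:
            best_label, best_score = label, score

    return best_label if best_score >= min_good_matches else None

print(identify_painting("magazine_snapshot.jpg"))

A production system would, of course, index millions of images with far more sophisticated retrieval techniques, but the basic idea -- extract local features, find the reference image they agree with -- is the same.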

Museums like the New York Met and the Getty started a series of collaborations with Google, providing metadata for thousands of paintings. Although the app was certainly very handy (and a great talking point in the pub), its pedagogical value was still somewhat limited during those early stages of development -- the best one could hope for was a couple of basic facts about the painting and a list of Google search results (some more relevant than others).

Google Goggles at work (Image © Google).

However, that is changing fast. Educational researchers and IT developers are creating new ways of digging through the data, contextualising, mashing up, inter-linking and reshaping results. It is these developments that can turn visual search technology from a gimmicky app into a powerful educational and research tool.
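
As a taste of what that "mashing up" might look like in practice, the snippet below (our own illustration, not any particular product's code) takes a painting title of the kind a visual search might return and combines it with a hypothetical museum metadata record and a contextual summary pulled from Wikipedia's public REST API:

import requests

def learning_card(title, museum_record):
    """Mash up a recognition result with museum metadata and a Wikipedia summary."""
    # Wikipedia's public page-summary endpoint; the title is illustrative.
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + title.replace(" ", "_")
    summary = requests.get(url, timeout=10).json().get("extract", "")
    return {
        "title": title,
        "context": summary,
        "museum": museum_record,  # e.g. collection, medium, accession details
    }

# Hypothetical metadata record of the kind a museum partner might supply.
card = learning_card(
    "The Starry Night",
    {"collection": "MoMA", "medium": "oil on canvas", "date": "1889"},
)
print(card["context"][:200])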

The University of California, Riverside, for example, is developing an ambitious facial recognition project designed to identify individual historical figures portrayed in paintings. The principle is similar to Facebook's infamous facial recognition / photo tagging technology (without the Big Brother implications).
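
To give a flavour of the first step in such a pipeline -- and, again, this is our sketch rather than UCR's code -- standard computer-vision libraries can already locate faces in a digitised portrait, producing crops that a recognition model could then try to match across paintings:

import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("group_portrait.jpg")  # illustrative file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, width, height).
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for i, (x, y, w, h) in enumerate(faces):
    # Save each face crop; these would feed the actual recognition model.
    cv2.imwrite("face_%d.png" % i, image[y:y + h, x:x + w])

print("Found %d face(s) in the portrait." % len(faces))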

Applied to historical portraits via the Google Goggles infrastructure, this UCR tool is expected to provide answers to questions such as: Who is this person? Where did s/he live? At what stage of his/her life was this portrait made? What was happening in the world at that time? What is his/her facial expression? What other portraits exist of this individual? Who else is in the painting with him/her? What webs of relations can we tease out based on people's associations in different portraits?
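
Several of those questions are, at heart, linked-data queries. By way of illustration (using today's public Wikidata service, not anything UCR has built), here is how one might ask "what other paintings depict this person?" once the sitter has been identified:

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# P31 = instance of, Q3305213 = painting, P180 = depicts, P170 = creator.
# Q9682 (Elizabeth II) is just an example sitter.
QUERY = """
SELECT ?paintingLabel ?creatorLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 ;
            wdt:P180 wd:Q9682 ;
            wdt:P170 ?creator .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "mobile-learning-demo/0.1 (example)"},
)
for row in response.json()["results"]["bindings"]:
    print(row["paintingLabel"]["value"], "by", row["creatorLabel"]["value"])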

UCR's project is still at a very early stage of development and there are plenty of obstacles to overcome before the tool is sufficiently stable and useful. However, it is an encouraging example of the kind of research that is helping us make the crucial transition from visual recognition to visual data mining (and ultimately visual data analysis) in mobile learning.
