I attended the British Computer Society’s Visions of Computer Science conference, which took place in London earlier this month. The idea behind the event was to provide inspiration for the future development of computer science as a discipline and, as such, it offered a unique chance to feel the pulse of current computer science research agendas. A wide range of topical issues was covered, and there were some great keynote speakers, including Turing Award winners Robin Milner and Tony Hoare and Internet pioneer Vint Cerf.
So what pointers can be picked up for the long-term direction of ICT in tertiary education?
• The Internet is starting to show its age, and much work is going on behind the scenes to improve its capacity, security and overall architecture in order to cope with developments such as the rapidly increasing number of mobile devices being connected. Efforts are underway to map out what the next-generation Internet may look like; the sobriquet ‘Web Engineering 2.0’ has been coined for some of this work.
• The arrival of multi-core processors in standard desktop PCs means that making software run in parallel has become a very high priority. For many years attempts have been made to find ways to ‘parallelise’ code easily and efficiently so that it works across multiple processors. This work continues, but there is a debate between those who favour a few complex CPU cores (the present situation) and those who argue for many simple CPU cores.
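As a loose illustration of what ‘parallelising’ code means in practice, here is a minimal sketch (my own, not from the conference) using Python’s standard multiprocessing module to spread a CPU-bound task, counting primes by trial division, across all available cores:

```python
# A minimal sketch of parallelising a CPU-bound task across cores.
# The task (counting primes by trial division) is deliberately slow
# so that splitting it over processors actually pays off.
from multiprocessing import Pool, cpu_count

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range [0, 100_000) into independent chunks,
    # then hand one chunk to each worker process.
    chunks = [(i, i + 12_500) for i in range(0, 100_000, 12_500)]
    with Pool(processes=cpu_count()) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

The example works only because the chunks are independent; much of the research debate described above is about what to do when they are not.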
• Computer science needs to get to grips with ubiquitous computing. In a few years computers will be leaving our desks and merging into our physical environments, cars and even clothing (as TechWatch has reported). There may well be millions of these computing devices spread across our urban environments, which will obviously include schools and colleges. Understanding how these devices will interact with each other and with us (in essence, how they will ‘behave’) is one of the big research questions at the moment. They will create an enormous information space, and Robin Milner argues that understanding this space is likely to be the greatest challenge for computer science in the 21st century (for more on this, see the 2006 TechWatch report Will we cope with the invisible computer).
• Mobile phones are starting to incorporate various sensors, e.g. location sensing (GPS) and awareness of the user’s status (walking, running, sitting, etc.). In the near future these devices will form networks with other devices and exchange information via the Internet, forming a global mobile sensor network. There are potentially many applications (and social implications) for this in the education arena.
• A lot of work is going on in human facial recognition and expression/emotion detection which, in the long run, will feed into the kinds of human-computer interfaces that we will be using at home and in educational settings.
• Research is being undertaken into what’s called the outlier detection problem: the process of automatically detecting anomalies or unusual events in the massive streams of real-time data that scientific experiments can now generate. This work has obvious implications for the research community as enormous data sets become more common through the work of the e-science community.
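To give a flavour of the idea (a toy sketch of my own, far simpler than the research described above), one basic approach to outlier detection in a stream is to keep a rolling window of recent readings and flag any value that falls far outside the recent mean:

```python
# A toy streaming outlier detector: keep a rolling window of recent
# values and flag any reading more than `threshold` standard
# deviations away from the window's mean.
from collections import deque
from statistics import mean, stdev

def detect_outliers(stream, window=50, threshold=3.0):
    """Yield (index, value) for readings far outside the recent window."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) >= 10:  # wait until we have a minimal baseline
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) > threshold * sigma:
                yield i, x
        recent.append(x)

# Example: a steady signal with one anomalous spike at index 30.
readings = [10.0 + 0.1 * (i % 5) for i in range(60)]
readings[30] = 99.0
print(list(detect_outliers(readings)))  # [(30, 99.0)]
```

Real e-science workloads add the hard parts this sketch ignores: data rates too high to inspect every point, drifting baselines, and anomalies that are only unusual in combination.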
• Understanding how the Web works at large scale, for example when social networks have millions of users and billions of interactions, requires a multi-disciplinary approach. Foundational work in this area is being driven by Tim Berners-Lee, Nigel Shadbolt and colleagues at Southampton University under the title of Web Science.