Doing Enterprise Architecture

Things have been very busy here at TechWatch. For the last year we’ve been ‘embedded’ in a JISC project to pilot the use of Enterprise Architecture (EA) at three universities: Liverpool John Moores (LJMU), Cardiff University and King’s College London. The hard work is just starting to bear fruit, and today we’re launching the first of two reports on EA called Doing Enterprise Architecture: enabling the agile institution.

For anyone who hasn’t come across EA before, we’ve described it (in our ‘Introduction to EA’ chapter) as:

‘a high-level, strategic technique designed to help senior managers achieve business and organisational change. It provides a way of describing and aligning the functional aspects of an organisation, its people, activities, tools, resources and data/information, so that they work more effectively together to achieve its business goals. EA is also about achieving desired future change through design. It holds that by understanding existing information assets, business processes, organisational structures, information and application infrastructure (the ‘as is’ state) it is possible to ‘do something different’, something new and innovative (the ‘to be’ state)’.

For TechWatch, this is the first in what will hopefully become a new series of what we call Early Adopter Studies. Where our existing remit is to anticipate and speculate about new technology, the EAS will provide information about early-stage futures work that JISC has funded. The idea is to take work that might sound a bit far-fetched or not do-able and show how people are actually starting to think about ways of doing it.

This is not straightforward. EA is a really, really big thing to undertake. There was an article in the BCS magazine this month basically saying it is too big to ever be implemented successfully, and yet there are supposedly lots of private companies who have been implementing it (successfully) for the last 15 years. However, it’s something that universities just can’t ignore. There are a lot of policy drivers pushing this – national governments, for example, saying the public sector has to start getting its act together – but of course, just because there are policy drivers doesn’t mean that it’s necessarily any more achievable.

The TechWatch EAS is just an introduction to some of the key concepts and a set of detailed case studies that describe what the participating universities actually did during the course of the pilot project and how useful they found it. There’s a second report due in June that will follow a more standard TechWatch format in that it will provide analysis and a synthesis of some of the main findings. If you can wait until then, of course.

The International Standard Text Code

A couple of weeks ago I spotted an article about the ISTC, a new publishing standard. What was particularly interesting was that it seems to have potential for much wider application: Ted Nelson’s ‘transclusion’ came to mind, and an ISTC could be used to identify and make micro-payments for texts that you might want to transclude; it could even be useful for realising aspects of the Semantic Web. However, this is big-picture thinking, so in order to understand the basics we asked Richard Gartner to explain it to us.

Introduction to the ISTC
by Richard Gartner

A new standard has recently been approved by the International Organization for Standardization (ISO) to enable texts to be uniquely identified. The International Standard Text Code (ISTC) is a sixteen-digit hexadecimal number which applies to a text itself rather than the forms in which it is published or disseminated. It looks like this:

0A9-2002-12B4A105-7

whose parts (separated here by hyphens) represent the registration agency, year, a code for the text itself and a check digit (a single digit calculated from the others in the number to allow the detection of any errors in typing) respectively.
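
To make the structure concrete, here is a minimal sketch (in Python; it isn’t taken from the standard itself) of how the hyphenated form might be split into its four parts and checked for basic well-formedness. The check digit calculation is defined by the standard and isn’t reproduced here.

```python
import re

# Hyphenated ISTC form: registration agency (3 hex digits), year (4 digits),
# text code (8 hex digits) and check digit (1 hex digit): sixteen digits in all.
ISTC_PATTERN = re.compile(
    r"^(?P<agency>[0-9A-F]{3})-"
    r"(?P<year>[0-9]{4})-"
    r"(?P<text_code>[0-9A-F]{8})-"
    r"(?P<check>[0-9A-F])$"
)

def parse_istc(istc: str) -> dict:
    """Split a hyphenated ISTC into its named parts, or raise ValueError."""
    match = ISTC_PATTERN.match(istc.strip().upper())
    if not match:
        raise ValueError(f"Not a well-formed hyphenated ISTC: {istc!r}")
    return match.groupdict()

print(parse_istc("0A9-2002-12B4A105-7"))
# {'agency': '0A9', 'year': '2002', 'text_code': '12B4A105', 'check': '7'}
```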

The ISTC is designed to perform a similar function to the well-established ISBN (International Standard Book Number), but will apply at the level of a text itself, a higher level of description than that of the publication (which the ISBN is designed to identify): it would, for instance, be assigned to the text of a work such as Bleak House itself rather than its different published editions. Where a text may have multiple ISBNs, for example where it is published in multiple editions, or in both hardback and paperback, it will only have a single ISTC to identify its textual content and allow these different editions to be linked together.

The ISTC is designed to operate within the FRBR (Functional Requirements for Bibliographic Records) framework. FRBR is a model for bibliographic information which sets out a number of levels of bibliographic description, from the most abstract (known as the work) down to an individual physical item. The ISTC is designed to identify what FRBR terms the expression of a work: the specific form it takes when it is realised in some way, such as a given edition of a text, or a version in a given language. It therefore operates one level higher than the ISBN, which identifies the physical forms that a work takes when it is published or otherwise made concrete in some way (called the manifestation in the FRBR scheme).
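
As a rough illustration of where the ISTC sits in this hierarchy, the sketch below (a simplification, with invented ISBNs and record structures) models a work, one expression of it identified by an ISTC, and two manifestations each carrying an ISBN.

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    """A published form of a text (FRBR manifestation), identified by an ISBN."""
    isbn: str
    description: str

@dataclass
class Expression:
    """A realisation of a work, e.g. the text itself (FRBR expression), identified by an ISTC."""
    istc: str
    title: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    """The abstract work in FRBR terms, e.g. Bleak House as a creation."""
    title: str
    expressions: list = field(default_factory=list)

# One text, many published forms: a single ISTC links the ISBNs together.
bleak_house_text = Expression(
    istc="0A9-2002-12B4A105-7",  # the illustrative number from above
    title="Bleak House (English text)",
    manifestations=[
        Manifestation("978-0-00-000000-1", "hardback edition"),   # invented ISBNs
        Manifestation("978-0-00-000000-2", "paperback edition"),
    ],
)
bleak_house = Work(title="Bleak House", expressions=[bleak_house_text])

# All editions of the same text can be traced through its single ISTC.
editions = [m.isbn
            for e in bleak_house.expressions if e.istc == "0A9-2002-12B4A105-7"
            for m in e.manifestations]
print(editions)
```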

Within the education sector, the ISTC will clearly have an important role within libraries, and also in repositories or any other area where textual objects need to be unambiguously identified. Its prime function will be to allow the identification of an item as a textual intellectual entity more precisely than is possible at present. For the library user, it will allow more precise bibliographic searches by, for example, distinguishing texts with similar titles from each other. It should also improve recall by letting users find the same text published under different titles: the many titles under which Jonathan Swift’s Gulliver’s Travels has appeared, for instance, could be linked together neatly by an ISTC, allowing the user to trace all other editions of the text from a single record containing this number.

It certainly seems that the ISTC will become a key part of a bibliographic record, particularly once the new RDA (Resource Description and Access) cataloguing rules standard, which is based on FRBR, is fully adopted.

Horizon Report 2009

The sixth edition of the Horizon Report, a US-based collaboration between the EDUCAUSE Learning Initiative and the New Media Consortium, has been published. The report is a summary of the latest thinking on emerging technologies and practices that will affect education over the next few years and is a useful read for all those involved in technology in the UK’s HE/FE sector.

The report features six emerging technologies and places them along three adoption horizons: within a year, two to three years, and four to five years. These timescales are slightly shorter than JISC TechWatch’s usual remit of five to ten years, but many of the issues discussed in the report have strong resonances with our existing and planned work. The report also discusses what it terms ‘critical challenges’ facing education over the next few years.

The six emerging technologies are summarised below, together with a note on work in these areas that TechWatch has already been involved with:

Mobiles (within a year)

Mobile technologies have featured in many previous Horizon reports, and their reappearance emphasises how rapidly this area is evolving, both technically (through new devices like the iPhone) and in the huge uptake by students. The report suggests that providing content and delivering teaching and learning via these new devices is one of the critical challenges for HE/FE.

TechWatch produced a report on mobile devices and PDAs in 2005.

Cloud Computing (within a year)

The report notes the rapid rise of large-scale data farm facilities and the attendant increase in the use of remote storage and Web 2.0-related applications and services. It argues that this is causing a shift in the way educators think about how they use software and data, freeing them from the existing paradigm of the one-per-desk PC and licensed, shrink-wrapped software.

TechWatch is in the process of preparing a report that looks at the environmental impacts of data centres and will be producing a report on cloud computing later in 2009.

Geo-Everything (2 – 3 years)

There is increasing interest in the provision and use of location-based information. This is being driven by the growing number of gadgets that automatically provide some form of location information (e.g. GPS built into mobile phones) and also by growing use of geo-related data, visualisation and mapping systems. The provision of automatically generated location data is having an impact on research and data acquisition in the sciences, social observation studies, medicine and other areas.
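
As a small, generic illustration of what working with automatically generated location data involves (this isn’t drawn from the report), the sketch below computes the great-circle distance between two GPS fixes using the haversine formula, the sort of basic operation that location-aware applications and mapping mash-ups build on.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two latitude/longitude fixes."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Two approximate GPS readings (roughly Cardiff and Liverpool city centres).
print(round(haversine_km(51.4816, -3.1791, 53.4084, -2.9916), 1), "km")
```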

TechWatch is in the process of commissioning a report on GeoWeb.

The Personal Web (2 – 3 years)

The report notes the increasing propensity for users to organise and aggregate their Web-related content in their own, personal ways using a growing range of Web 2.0-style widgets and services. iGoogle, tagging, micro-blogging (Twitter etc.), group wikis, data mash-ups and social network aggregation all come under this broad area of development. The authors use the term ‘personal web’ to describe this emerging phenomenon and note the emergence of “highly personalized windows to the networked world” (p. 19). The report argues that the online tools that provide for this are also ideal for research and learning.
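
A toy sketch (invented sample data, not from the report) of the kind of aggregation the ‘personal web’ involves: items from several sources are merged into one personalised view and filtered by the user’s own tags.

```python
from datetime import datetime

# Items as they might arrive from different services (invented sample data).
feeds = {
    "micro-blog": [{"text": "Conference keynote starting", "tags": {"events"},
                    "when": datetime(2009, 2, 3, 9, 0)}],
    "bookmarks":  [{"text": "Paper on semantic search", "tags": {"research", "semweb"},
                    "when": datetime(2009, 2, 2, 17, 30)}],
    "group wiki": [{"text": "Project page updated", "tags": {"research"},
                    "when": datetime(2009, 2, 3, 11, 15)}],
}

def personal_view(sources: dict, wanted_tags: set) -> list:
    """Merge all sources into one stream, keep items matching the user's tags, newest first."""
    merged = [dict(item, source=name) for name, items in sources.items() for item in items]
    filtered = [item for item in merged if item["tags"] & wanted_tags]
    return sorted(filtered, key=lambda item: item["when"], reverse=True)

for item in personal_view(feeds, {"research"}):
    print(item["when"], item["source"], "-", item["text"])
```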

Semantic-Aware Applications (4-5 years)

The Semantic Web, in which some element of meaning (semantics) is applied to Web-based data, has been discussed and researched for a number of years. The Horizon report argues that we are beginning to see the emergence of applications that make use of semantic data (are semantically aware) and that this will be a growing area in the next few years. Perhaps somewhat controversially, the report argues that this capacity will be provided in what it calls a top-down manner, via natural language processing of existing content, rather than through a new layer of semantic metadata.
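
As a very rough illustration of the ‘top-down’ idea (deriving structure from existing text rather than relying on added metadata), the toy sketch below pulls simple subject/relation/object statements out of plain sentences with a pattern match. Real semantically aware applications use far more sophisticated natural language processing; this only shows the general shape of the approach.

```python
import re

# Deliberately naive patterns: "<Subject> is a/an <object>" and "<Subject> teaches <object>".
PATTERNS = [
    (re.compile(r"^(?P<subj>[A-Z][\w ]+?) is an? (?P<obj>[\w ]+)\.$"), "is_a"),
    (re.compile(r"^(?P<subj>[A-Z][\w ]+?) teaches (?P<obj>[\w ]+)\.$"), "teaches"),
]

def extract_triples(text: str) -> list:
    """Return (subject, relation, object) triples found in the text."""
    triples = []
    for sentence in text.split("\n"):
        for pattern, relation in PATTERNS:
            match = pattern.match(sentence.strip())
            if match:
                triples.append((match.group("subj"), relation, match.group("obj")))
                break
    return triples

sample = "Bleak House is a novel.\nDr Smith teaches computer science."
print(extract_triples(sample))
# [('Bleak House', 'is_a', 'novel'), ('Dr Smith', 'teaches', 'computer science')]
```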

TechWatch reported on the Semantic Web in 2005.

Smart Objects (4-5 years)

The report defines a smart object as any physical object that carries a unique identifier and can track information about itself. Such objects can communicate with each other and with the wider Internet, using technologies such as RFID and wireless networking. Although there are few applications for education as yet (apart from the use of RFID in libraries), the authors predict that such applications will emerge within five years.
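
A minimal sketch (invented tag identifiers and locations) of the basic idea: each physical object carries a unique identifier, such as an RFID tag, and information about the object accumulates against that identifier every time it is scanned.

```python
from datetime import datetime

class SmartObjectRegistry:
    """Tracks information about physical objects keyed by their unique identifiers (e.g. RFID tags)."""

    def __init__(self):
        self._objects = {}   # tag id -> description
        self._events = []    # (timestamp, tag id, location) scan events

    def register(self, tag_id: str, description: str) -> None:
        self._objects[tag_id] = description

    def record_scan(self, tag_id: str, location: str) -> None:
        """Record that a tagged object was seen by a reader at a particular location."""
        if tag_id not in self._objects:
            raise KeyError(f"Unknown tag: {tag_id}")
        self._events.append((datetime.now(), tag_id, location))

    def history(self, tag_id: str) -> list:
        return [(when, where) for when, tag, where in self._events if tag == tag_id]

# Example use: an RFID-tagged library book (invented tag id).
registry = SmartObjectRegistry()
registry.register("E200-3412-0000-0001", "Bleak House, library copy 3")
registry.record_scan("E200-3412-0000-0001", "issue desk")
registry.record_scan("E200-3412-0000-0001", "returns bin")
print(registry.history("E200-3412-0000-0001"))
```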

TechWatch reported on RFID in 2006.

Critical Challenges

It is always important to place technology development in its wider context, and the Horizon report outlines some of the broader challenges that the authors believe face education. These are:

  • a growing need for formal instruction in new media skills such as information and visual literacy
  • a need to radically update existing learning materials, some of which are decades old and do not reflect new ways of learning and interacting with information
  • a mismatch between the way academics, researchers and students are measured and rewarded and emerging practices in scholarship, innovation and dynamic information flows
  • a growing expectation that education will make use of mobile technologies and provide ‘anywhere, anytime’ learning.

TechTracking: Location-based Experiences

Yesterday’s Guardian featured 100 top sites for the year ahead, and it was interesting to note how many of these were connected to location-based services. Those in HE who are interested in the educational issues surrounding these technologies might like to look at the TechWatch report on Future Location-based Experiences, published in January 2005.

The report was written by Steve Benford, currently Professor of Collaborative Computing at the University of Nottingham. Steve is one of the early pioneers of location-based media and is a recipient of the 2003 Prix Ars Electronica Golden Nica for Interactive Art. He was also nominated for a BAFTA in Interactive Arts and Technical and Social Innovation (2005) in conjunction with Blast Theory, a London-based group of interactive media artists.

As well as the location-based experiences report, TechWatch is currently commissioning a report provisionally titled Data mash-ups and the future of mapping (see the previous blog item for more information). This will take some of the issues around the use of geospatial data within the Web 2.0 context, raised in the 2007 report on Web 2.0, and explore them in more detail.

ICT 2008 future directions

ICT 2008 is a big conference and there are a great many parallel sessions covering a veritable smorgasbord of technology areas. Taking a helicopter view of the proceedings gives a pretty good idea of the likely direction of travel for EU-funded ICT developments over the coming decade.

I’ve collated a list of all the work packages that were discussed and attempted a crude form of categorisation. Six major themes emerge, although these may not correspond to the EU’s categorisation:

  • artificial intelligence: includes robotics and cognition, language processing
  • green ICT: includes energy efficiency, smartgrids, climate change modelling
  • next generation Internet: includes 3D Internet, new protocols and moving to services rather than devices
  • strategic application areas: includes learning, supporting creativity, digital libraries, health
  • security, privacy and trust in the digital world
  • novel computer architectures and electronics: includes nanotechnologies, bio-inspired computation, photonics and quantum computers.

The details of these areas, and their relevance to and impact on HE/FE, may well be the subject of future TSW reports over the next few years.

Semantic robots

Will we start to see robots in our classrooms and university research labs within a decade or so? According to one of the sessions at ICT 2008, robotic applications are starting to move beyond their traditional use in high-end automobile manufacturing (remember the Picasso car advert?). There is a push to put robots like these into smaller companies, and a lot of work is going on for applications in service industries such as cleaning. There is also thought to be considerable potential in healthcare, even for the care of the elderly in their own homes.

Again, though, the question was raised as to whether Europe could compete. The Japanese and Koreans are strong on robotic development and the USA is ploughing in $10 million per annum (on top of various defence-related projects that are kept secret). The three Framework 7 (FP7) calls for research proposals in this area will help, and there is a strong commitment to integrate robotic work with that of the Semantic Web to deliver knowledge-based robotics. These are the types of robots that may end up in the classroom.

Some of the research questions being posed, though, may remind older readers of the work of science fiction writer Isaac Asimov. These include:

  • How autonomous and proactive should a robot be allowed to be?
  • How can robots recognise and deal with critical situations and safety problems?
  • What level of cognitive skills should be built in?

These are fascinating questions and it may not be too long before researchers (and the general public) have to have a serious debate about these issues. If you are interested in further details then have a look at www.cognitivesystems.eu

ICT 2008: Panel debate

At the end of day one there was a panel debate on the successes and failures of EU-funded research. It was generally agreed that the EU research environment is fragmented – there is no equivalent of MIT, no EU-wide centres of excellence. Along with that came the statement, from Martin Sadler of Hewlett Packard, that “we should recognise talent, we let down our young researchers. MIT, for example, has regular competitions for young scientists and technologists – it really puts them on a pedestal”.

Sadler also noted that whilst Europe produces top quality graduates, these graduates have poor business acumen compared to their American counterparts. He also noted the lack of women choosing technology-related careers, something that was backed up by Wendy Hall from Southampton University.

One of the initiatives aimed at trying to combat some of these concerns is the European ‘blue card’ for talented foreign students. This would make it easy for them to study and take up research-related jobs within the EU.

ICT 2008: plenary report

The opening plenary debate of the ICT 2008 conference focused on trends and directions for the ICT agenda over the next ten years. It was chaired by Viviane Reding, the EU commissioner responsible for ICT, and involved a panel including Ben Verwaayen, CEO of Alcatel-Lucent; Harold Goddijn, the founder of TomTom; and Esko Aho, the former Prime Minister of Finland. The panel agreed that there are three major societal challenges for European ICT research and development:

  • the green agenda and tackling climate change
  • raising productivity and improving skills
  • building the knowledge society

Despite the economic downturn, it was agreed that Europe should continue to support work that helped with these challenges and not be diverted from the existing roadmap.

Ben Verwaayen argued that the Web and its evolution into a tool for “massive” collaboration and creativity would be profoundly important for these three areas. He predicted: “a whole new era of collaboration with a new eco-system developing”.

There are potential downsides. One is that the economic downturn will turn people against the idea of investing in research and blue skies thinking. The second is that unless we are careful we will not create ICT systems that are robust, secure and accessible anywhere and by anyone. An important part of this is trust. The financial collapse over the summer has shown the vulnerability of complex systems that are not well understood. The panel were in agreement that the complex systems that are being developed in advanced Web and software systems must not be allowed to fail in a similar way. How this would be brought about was a key part of the discussions.

In particular, the consensus was that governments will not be building the next generation Internet and ICT systems themselves through a top-down process of ‘grand visions’. Instead, technology will evolve organically and be built by independent researchers, large companies and SMEs. What the EU and other governments and agencies need to do, therefore, is give "content", or concrete form, to the understanding and meaning of trust. Ben Verwaayen said that there will need to be some element of control and regulation. Aho agreed that we need trust in our ICT systems and that there can’t be grand designs, but argued strongly for systematic approaches and the importance of architecture. There is a huge amount of digitisation taking place in many areas of life but, he argued, we need some kind of overarching architecture process to make it all fit together seamlessly. This fits strategically with JISC’s work on architecture at the enterprise and inter-institutional level and could be seen as EU-level confirmation of its efforts.

Service-Oriented Architecture and the future of the CMS

In a previous blog item I talked about the future of the institutional CMS and why TechWatch wasn’t going to commission an update to its 2001 report. You may remember that I focused on two of the main concepts from the CMS report: processes rather than products, and blurring the boundaries between systems. At the time I said that the future of the CMS is actually caught up in technological reinterpretations of these concepts, so I thought I should explain a little bit about what I meant by that.

In order to do that I need to take you back to TechWatch’s 2001 CMS report. In it, Paul Browning and Mike Lowndes, the report’s authors, list some of the processes that a CMS should facilitate (page 3). These include:

    “Engendering the re-use of information by allowing the ready integration of data from diverse sources”
    “Permitting the efficient re-purposing of information”
    “Allowing information maintenance to become devolved but at the same time preserving central control”

In fact, as they later acknowledge, these processes/benefits are not exclusive to CMSs and they go on to say: “The emergence of ‘portal frameworks’ (open source or otherwise) has done much to highlight the overlap and convergence of document management systems, knowledge management systems… There is a pressing need, in our view, for institutions to think holistically (reinforced by their work on information strategies) and to invest in and develop open and extensible information systems” (p.12).

This is the crux of the matter. What they are saying is that it’s the processes that are important, not the software applications per se. We need to shift the emphasis from thinking about kit to thinking about what it is we need to do and how that fits into the bigger institutional picture.

As always, of course, this isn’t straightforward. One approach that’s being road-tested is Service-Oriented Architecture (‘uppercase SOA’), but this has not been without its detractors. The heart of that debate is for another day, but to start the ball rolling you should have a look at a case study that the e-Framework programme has just published; it will hopefully give you a feel for how the big-picture concerns raised by Paul and Mike are tackled through this particular technique.
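
To give a flavour of the ‘processes rather than products’ idea in a service-oriented style, here is a deliberately simplified sketch (it isn’t drawn from the e-Framework case study): a content item is held once behind a small service interface and then re-used and re-purposed by two independent consumers, rather than being locked inside a single monolithic CMS.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """Structured content held once, independent of any particular presentation."""
    identifier: str
    title: str
    body: str

class ContentService:
    """A small service interface: consumers see operations, not the storage behind them."""

    def __init__(self):
        self._store = {}  # identifier -> ContentItem (could equally be a database or a remote service)

    def publish(self, item: ContentItem) -> None:
        self._store[item.identifier] = item

    def get(self, identifier: str) -> ContentItem:
        return self._store[identifier]

# Two independent consumers re-purpose the same content in different ways.
def render_web_page(service: ContentService, identifier: str) -> str:
    item = service.get(identifier)
    return f"<h1>{item.title}</h1><p>{item.body}</p>"

def render_feed_entry(service: ContentService, identifier: str) -> dict:
    item = service.get(identifier)
    return {"id": item.identifier, "title": item.title, "summary": item.body[:80]}

service = ContentService()
service.publish(ContentItem("news-001", "Pilot project update", "The EA pilot is starting to bear fruit."))
print(render_web_page(service, "news-001"))
print(render_feed_entry(service, "news-001"))
```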

Visions of computer science’s future

I attended the British Computer Society’s Visions of Computer Science conference, which took place in London earlier this month. The idea behind the event was to provide inspiration for the future development of computer science as a discipline and, as such, it offered a unique chance to feel the pulse of current computer science research agendas. There was a wide range of topical issues and some great keynote speakers, including Turing Award winners such as Robin Milner and Tony Hoare, and Internet pioneer Vint Cerf.

So what pointers can be picked up for the long-term direction of ICT in tertiary education?

• The Internet is starting to show its age and much work is going on behind the scenes to improve capacity, security and its overall architecture in order to cope with developments such as the rapidly increasing number of mobile devices that are being connected. Efforts are underway to map out what the next generation Internet may look like. The sobriquet ‘Web Engineering 2.0’ has been coined for some of this work.

• The introduction of multi-core processors into standard desktop PCs means that making software work in parallel has become a very high priority. For many years attempts have been made to solve the problem of how to easily and efficiently ‘parallelise’ code to work on multiple processors. This is continuing, but there is a debate between those who favour using a few complex CPU cores (the present situation) and those who argue for the use of many simple CPU cores. (A minimal sketch of spreading work across cores appears after this list.)

• Computer science needs to ‘get a grip’ on ubiquitous computing. In a few years computers will be leaving our desks and merging into our physical environments, cars and even clothing (as TechWatch has reported). There may well be millions of these computing devices spread out across our urban environments, and this will obviously include schools and colleges. Understanding how these devices will all interact with each other and with us (in essence, how they will ‘behave’) is one of the big research questions at the moment. They will create an enormous information space, and Robin Milner argues that understanding this space is likely to be the greatest challenge for computer science in the 21st century (for more on this see the 2006 TechWatch report Will we cope with the invisible computer).

• Mobile phones are starting to incorporate various forms of sensor, e.g. location sensing (GPS) and awareness of the user’s status (walking, running, sitting etc.). In the near future these devices will form networks with other devices and exchange information via the Internet, creating a global mobile sensor network. There are potentially many applications (and social implications) for this in the education arena.

• There is a lot of work going on in human facial recognition and expression/emotion detection which, in the long run, will feed into the kinds of human-computer interfaces we will be using at home and in educational settings.

• Research is being undertaken into what’s called the outlier detection problem: the process of automatically detecting anomalies or unusual events in the massive streams of real-time data that scientific experiments can now generate. This work will have obvious implications for the research community as enormous data sets become more common through e-science. (A toy streaming outlier check appears after this list.)

• Understanding how the Web is working at the large scale, for example when social networks have millions of users and billions of interactions, requires a multi-disciplinary approach. Foundation work in this area is being driven by Tim Berners-Lee, Nigel Shadbolt and colleagues at Southampton University under the title of Web Science.
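
Two of the points above lend themselves to small illustrations. First, a minimal sketch of the parallelisation problem mentioned in the multi-core item: a CPU-bound task is spread across cores using Python’s standard library (the task and the numbers are invented for illustration).

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """A deliberately CPU-bound task: count the primes below a limit by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]
    # Each chunk of work runs in its own process, so it can occupy its own CPU core.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(dict(zip(limits, results)))
```

Second, a toy version of the outlier detection problem: each new reading from a stream is compared with the running mean and standard deviation of what has been seen so far and flagged if it lies too many standard deviations away. The threshold and the data are invented, and real e-science pipelines use far more sophisticated methods.

```python
import math

class StreamingOutlierDetector:
    """Flags readings more than `threshold` standard deviations from the running mean."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared differences from the mean (Welford's method)

    def update(self, value: float) -> bool:
        """Return True if the value looks like an outlier, then fold it into the running statistics."""
        is_outlier = False
        if self.count >= 2:
            std = math.sqrt(self.m2 / (self.count - 1))
            if std > 0 and abs(value - self.mean) > self.threshold * std:
                is_outlier = True
        # Welford's online update of the mean and variance.
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)
        return is_outlier

detector = StreamingOutlierDetector(threshold=3.0)
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0, 10.0]  # an invented sensor stream
for reading in readings:
    if detector.update(reading):
        print("outlier:", reading)
```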