A lot of what I write in the next few posts will be principally descriptive, but I hope it will still be of some interest. There's been so much to take in at each session that serious reflection will take a little while longer. I hope I'm not misrepresenting any of the speakers' arguments or the projects they discussed - if any speakers are reading this, please do let me know if I have!
Wednesday was preconference day. A range of events was organised to coincide with the start of the conference, held in the same venues but not part of the conference proper. I didn't register for any of the talks on Wednesday, but I did go along to the vendor showcase. This provided opportunities to find out more about the latest product developments relating to library collections, including resource discovery systems and collection management services, and to hear from journal publishers, e-book vendors and database suppliers. I was particularly struck by the move into e-book publishing by producers of other types of e-resources: Project MUSE and JSTOR, for example, both have e-book collections due to launch next year.
Thursday saw the start of the main conference (at 7am, an early start at least partly mitigated by the residual effects of coming to terms with the GMT-EST time difference). The morning plenary sessions began with a presentation by Michael Keller of Stanford University about linked data. He described the opportunities offered by semantic web developments to get away from the silo approach to information storage that has been a feature of many library systems. Linking open data about authors, papers, quotations and citations in a single place assists disambiguation. An example of how this can work can be seen in Freebase, to which Stanford's libraries have contributed information, on topic pages such as this one about Vincent van Gogh. The report on which this presentation was based, drawing on a workshop held at Stanford in the summer, can be found at http://www.clir.org/pubs/archives/linked-data-survey.
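To illustrate the disambiguation idea (my own sketch, not something shown in the talk): in linked data, two catalogue records that spell an author's name differently can each point at a single shared URI, so software consuming the data can tell they describe the same person. The example below uses the Python rdflib library; all of the URIs and record names in it are illustrative rather than real identifiers.

```python
# A minimal sketch of linked-data disambiguation using rdflib.
# All URIs here are made up for illustration; only the FOAF and OWL
# vocabularies are real.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, OWL

g = Graph()
cat_a = Namespace("http://example.org/catalogueA/")
cat_b = Namespace("http://example.org/catalogueB/")
hub = URIRef("http://example.org/entities/vincent_van_gogh")  # shared identifier

# Two libraries describe the same painter under different local records
# and name forms...
g.add((cat_a.person42, FOAF.name, Literal("Vincent van Gogh")))
g.add((cat_b.auth981, FOAF.name, Literal("Gogh, Vincent van, 1853-1890")))

# ...but both link their record to one shared URI, so a consumer of the
# data can see that the two records refer to a single person.
g.add((cat_a.person42, OWL.sameAs, hub))
g.add((cat_b.auth981, OWL.sameAs, hub))

print(g.serialize(format="turtle"))
```

The point of the shared URI is that the linking happens once, in the open, rather than each system maintaining its own authority file in a silo.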
The second presentation, by MacKenzie Smith of MIT, stayed with the theme of data. She discussed the importance of data sharing for research and some of the current difficulties in encouraging it. Interestingly, she described a key feature of data suitable for sharing as being prohibitively expensive to collect or reproduce again. This might seem to exclude more types of data than would be the case under more inclusive definitions, and seems to me to place a greater emphasis on what not to collect - in a way that may be familiar to librarians who face difficult decisions about withdrawing or deselecting materials, or who specify exclusions in collection policy documents. The presentation argued that librarians and publishers need to work together to be part of the data creation process - with publishers utilising existing peer-review structures to ensure quality control, and librarians both promoting data management tools and taking on a longer-term archiving role to preserve access to data.
Mark Dimunation of the Library of Congress gave a presentation about "hidden collections" - special collections not included in routine library processes, or caught up in cataloguing backlogs. Although the topic has been discussed for at least a decade, progress has been limited, though it has included the establishment of a US hidden collections register. This talk picked up on some of the topics raised in the first session, especially the limitations of a silo approach to collections. He argued persuasively that new formats will continue to challenge librarians for as long as we persist in separating them out. A Library of Congress report about bibliographic control was mentioned - I think this refers to this 2007 report.
The final Thursday morning session featured Robert Darnton of Harvard University Library, Rachel Frick of the Digital Library Federation and Sanford Thatcher of Pennsylvania State University Press talking about a recently announced initiative to develop a Digital Public Library of America, with the aim of launching it in 2013. The DPLA aims to carry digitised content from books and AV materials in the public domain, while advocating appropriate legal methods - such as Extended Collective Licenses - to facilitate the inclusion of much more recent content (following the example of JSTOR's moving-wall approach to negotiating digitisation arrangements with journal publishers). The idea seems ambitious (especially in difficult economic times, as it envisages grassroots political organising to raise funds), but its scope was well summed up by Darnton as a version of Google Books in which interest in the public good prevails over commercial solutions. This is definitely a project to watch - it has already announced a collaboration with Europeana, and its content will be available internationally as well as in the US. The Digital Library Federation also features news about this project.