Wednesday, September 24, 2014

Week 5 Reading: Uniquely Identify This

So, metadata.  It's quite a buzzword here at the iSchool and, until this week's reading, I had only a vague notion of what people meant by "data about data."  Thankfully, this set of articles really helped.

From its humble roots in geospatial data management systems, metadata has taken the information world by storm.  Unlike earlier categorization and organization schemes, which existed mainly to support storage and retrieval, metadata describes not only the content of an object but also its behavior.  That is, well-developed metadata is more than an isolated description of contents and provenance; it documents an object's use, the history of that use, its storage and management across changing media landscapes, and its relationship to the contents and uses of other data across diverse fields.

Different types of metadata can be used to track and organize information at many levels, too.  A librarian creating a finding aid will work with descriptive metadata, while an archivist documenting a recent conservation project will manipulate preservation metadata.  Meanwhile, the internet generates its own metadata at an alarming rate.  Each of these situations calls for its own classification and encoding systems, which makes it increasingly difficult to achieve the underlying goal of generating metadata in the first place: to create a richer and more accurate body of information, in its complex context, as it interacts with the changing landscape of knowledge.
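To make that distinction concrete for myself, here's a toy sketch (the field names and the record itself are my own invention, not any real schema): the same object can carry different kinds of metadata for different jobs.

```python
# Toy illustration: one object, two kinds of metadata.
# Field names here are invented for the example, not a real standard.

book = {
    "descriptive": {          # what a librarian's finding aid cares about
        "title": "On the Origin of Species",
        "creator": "Charles Darwin",
        "date": "1859",
    },
    "preservation": {         # what an archivist's conservation log cares about
        "condition": "foxing on endpapers",
        "treatment": "deacidification",
        "treatment_date": "2014-09-12",
    },
}

# Different tasks pull different slices of the same record.
print(book["descriptive"]["title"])       # the finding aid's view
print(book["preservation"]["treatment"])  # the conservation log's view
```

The point is just that "metadata" isn't one thing: the librarian and the archivist are reading different slices of the same record.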

To add to all of this, technology changes at such a rate that networked information needs to migrate, so metadata "has to exist independently of the system that is currently being used to store and retrieve them" (Gilliland, 2008).  Meeting that requirement demands a high level of technical expertise, and it has fed the rising sense of panic I've been reading about in my other classes.
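Here's a minimal sketch of what "exists independently of the system" might look like in practice (the record is invented): keep the metadata in a plain, system-neutral text format so it can survive a migration when the catalog software changes.

```python
import json

# An invented record, stored in a system-neutral format (JSON) rather
# than in some catalog application's internal database format.
record = {
    "title": "A History of Famous Hospitals",
    "creator": "Anonymous",
    "date": "1895",
}

# Serialize to plain text that any future system can parse...
serialized = json.dumps(record, indent=2)

# ...and any later system can round-trip it without knowing anything
# about the software that originally wrote it.
restored = json.loads(serialized)
assert restored == record
```

The format doesn't have to be JSON (libraries mostly use XML-based standards); the idea is just that the metadata outlives whatever software currently houses it.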

Different fields of study value different kinds of information, however, and there is no consistent way to track their contents across disciplines.  Enter Dublin Core!

Have you ever read The Hitchhiker's Guide to the Galaxy?  The Dublin Core Metadata Initiative (DCMI) seems to be trying to build a Babel Fish: in Hitchhiker's Guide, that's a little fish you put in your ear that instantly translates any language you hear, so you can understand anyone in their first language and be universally understood.  Working within the Resource Description Framework (RDF), Dublin Core identifies the specific markup language in use and "speaks" in that language.  That is, it pulls up the context-correct dictionary for the data in question, points to a specific definition, and then uses it in the query; this is what Eric J. Miller calls a "modular semantic [vocabulary]".  For instance, if you wanted to know about famous hospitals in the 1800s, the DCMI would do the heavy lifting of specifying field-specific classification schemes, and you would get results from systems that use LC, DDC, Medical Subject Headings, and maybe AAT, too.  So, generally, the goal of DCMI is to act as a translator for well-established data models in order to allow for a more flexible interdisciplinary discovery system.  Inter! Opera! Bility!
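To see why that translator idea works, here's a hand-rolled sketch of a Dublin Core-style record.  The dc: element names (title, creator, date, subject) are real Dublin Core terms, but the record, the subject values, and the dictionary layout are my own invention for illustration: the same resource is tagged with terms from more than one scheme, and each term says which dictionary it came from.

```python
# A sketch of a Dublin Core-style record.  The dc: element names are
# genuine Dublin Core terms; everything else (the record, the subject
# values) is invented for this example.

record = {
    "dc:title": "A History of Famous Hospitals",
    "dc:creator": "Anonymous",
    "dc:date": "1895",
    "dc:subject": [
        # Each subject term carries a label for the scheme it came from,
        # so a search system knows which "dictionary" to consult.
        {"scheme": "MeSH", "term": "Hospitals -- history"},
        {"scheme": "DDC",  "term": "362.11"},
    ],
}

# A cross-disciplinary search only has to speak "dc:subject"; the scheme
# label on each term tells it which vocabulary defines that term.
for subject in record["dc:subject"]:
    print(subject["scheme"], "->", subject["term"])
```

That's the Babel Fish move: the common layer is the small set of Dublin Core elements, and the heavy, field-specific vocabularies plug in underneath them.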

