With all these machines sending data along these complex layers of networks, something must act as a traffic light to control the flow of information. This is the job of the router. Thanks, little guys. Once a router forwards our data along the next leg of its path, this global (and sometimes low-orbit) network relies on vast amounts of physical cable (fiber optics: a series of cables, not a series of tubes) to get the data where it's going. The companies that own these "backbone" cables all work together to maintain the fiber optic system, since mutual destruction is certainly assured if part of it fails. So our data, having been released properly by a friendly neighborhood router, has left its little LAN home and is traveling along the fiber optic spine that stretches across the whole world. But it gets a little more complex.
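If you want to actually watch those friendly neighborhood routers hand our data along, here's a minimal sketch, assuming a Unix-ish machine with the standard traceroute utility installed (Windows folks would substitute tracert); each numbered hop in its output is one router passing the packet down the line:

    import subprocess

    # List the routers our packets pass through on the way to a host.
    # Assumes the standard Unix `traceroute` tool is on this machine.
    subprocess.run(["traceroute", "www.pitt.edu"])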
The internet, though seemingly a wild splatter of information smeared across the globe, is actually organized according to Internet Protocol (IP). Every computer connected to the internet has an IP address, a series of four numbers from 0 to 255 (octets) that uniquely identifies a location of origin...at least until we surpass 4.3 billion addresses, which we're already bumping against, hence the slow rollout of the far roomier IPv6. Since IP addresses are meant to be read by a machine, they look a little strange to human eyes, so we assign human-readable names to them through the Domain Name System (DNS), familiar to us as, for instance, www.pitt.edu. Note that there are three components to that name: "www" (the host), "pitt" (the domain), and "edu" (the top-level domain); tack a scheme like http:// on the front and you've got a Uniform Resource Locator (URL). The components form a hierarchy, read right to left: a DNS resolver asks the root servers who handles "edu," the "edu" servers who handles "pitt," and pitt's own name servers where "www" lives. (Good old 404, by the way, is a different beast entirely: it means the server was found just fine, but the page you asked for wasn't.)
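Here's a minimal sketch of that name-to-number lookup using Python's standard socket module (the exact address printed will vary depending on where and when you run it):

    import socket

    # Ask DNS to translate a human-friendly name into an IP address.
    ip = socket.gethostbyname("www.pitt.edu")
    print("www.pitt.edu lives at", ip)

    # An IPv4 address is four octets: numbers 0-255 separated by dots.
    print("Its four octets:", ip.split("."))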
We owe a lot to servers, the physical machines on which much of the information on the internet is stored, and which provide us (clients) with access to the material we're looking for (www.pitt.edu). They have static IP addresses; the networks know exactly where they are at all times. "My" public IP, by contrast, changes as I connect to the internet through different wifi networks. My machine does still have an IP address, but it's a private one handed out by the local router; the public-facing address belongs to the modem, which translates between the two (a trick called NAT, network address translation).
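To peek at the private side of that arrangement, here's a minimal sketch; the 8.8.8.8 address is Google's public DNS server, used here purely as a routing target, and because this is a UDP socket no data is actually sent:

    import socket

    # Opening a UDP socket toward a public host reveals which local
    # (private) address the operating system would use to reach it.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    print("My private IP, courtesy of the router:", s.getsockname()[0])
    s.close()

Compare that printout with what a "what is my IP" website reports and you'll see the modem's public address instead of your machine's private one.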
Meanwhile, the above-mentioned complexity of connections results in a user experience that is delightfully easy. Information can be linked together like never before, and libraries are therefore having to deal with heightened expectations for user ease. Our largely proprietary integrated library systems did not anticipate this change in the nature of information consumption, and they are scrambling to readjust and remain relevant. Too often this has meant bolting new services onto existing ones without integrating their workflows. For instance, in my experience with the III product Millennium, searching the circulation catalogue for an eBook would not produce any results. Instead, we'd have to go into a separate OverDrive OPAC designed specifically to search our electronic resource collection. This was clunky and confusing to patrons, who then did not use it, keeping circulation of our investment in electronic material low. Not cost effective, and offensive to the ROI gods.
What we did, therefore, was switch to III's newer Sierra platform (built on open source components, though not itself open source), which has electronic material integrated into both its staff workflows and its sleek, redesigned OPAC. This was an expensive move to be sure, but the ease of use, expanded search capabilities (narrow results down by material type, for instance, or subject heading, year published, author, etc.), and clean visual design are much more attentive to modern user expectations; the result is greater search flexibility and user satisfaction. I think the hand-wringing in "Dismantling the ILS" is uncalled for, actually. Or, perhaps, the Carnegie Library of Pittsburgh should stand as a positive model of rebuilding an ILS.
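To make that faceted narrowing concrete, here's a toy sketch; the records and field names are hypothetical, and this resembles nothing of Sierra's actual internals:

    # Toy catalogue records (hypothetical data, for illustration only).
    records = [
        {"title": "Moby-Dick", "type": "book", "year": 1851},
        {"title": "Moby-Dick", "type": "ebook", "year": 2010},
        {"title": "Jaws", "type": "dvd", "year": 2005},
    ]

    def facet_search(records, **facets):
        """Return only the records matching every requested facet."""
        return [r for r in records
                if all(r.get(k) == v for k, v in facets.items())]

    # One search narrows print and electronic material alike --
    # no separate OverDrive OPAC required.
    print(facet_search(records, type="ebook"))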
I denounce the hand-wringing because instead of staring at our own navels as we get washed out on a tide of change, we should be looking ahead to the horizon (nice little nautical metaphor there) where Google is. Their mission to help develop a healthy, educated user base by delivering information to users for free sounds awfully familiar, doesn't it? Except they seem to be taking our customers. Excuse me, patrons. Perhaps we need to look at their business model, which, first of all, fosters an informal and collaborative culture of highly trained staff. Many of their successful applications (Gmail, Docs, Blogger, Maps, etc.) were developed by allowing their engineers to pursue projects that interested them. There is also a fabulous list of failed Google applications (Google Answers, which was mentioned in the video, HA!, Buzz, Notebook, etc.). That allowance for failure is refreshing.
Sometimes I get the feeling that librarians are possessive of the knowledge we purport to keep safe, and that can develop into an elitist attitude that may stifle innovation when we need it most. That doesn't mean I want advertisements in libraries (although that'd be an interesting method of sustainable funding), but I do want us to be more amenable to the Google way of doing things. Adventurous and world-dominating! Gone are the days of austere buns and shushing in the stacks! The idea that a library can organize its information in isolation has gone the way of the card catalogue. We're not special anymore, Google's made sure of that; we're just the most awesome, amazing specialists. We need to weigh anchor (to continue our ocean metaphor) and catch up if we can.