Friday, November 21, 2014
Okay, two things, both reading-related.
I'm living on the wild side here, so things may go horribly wrong. That is, more grades were posted on Blackboard recently, and it appears that I've fulfilled 20/20 reading blog points. Is that just a minimum, or is that portion of the course complete? While I will certainly continue to do the reading, is it all right not to post a blog about it? This is crunch time, and I would greatly value the extra time to put toward other projects. That's why, though I completed the readings, I didn't post about the Web 2.0 content (wild side!).
That said, I would like to be absolutely sure about the reading for the remainder of the course. I've completed the Web 2.0 reading (for Nov 25). Am I reading Blackboard correctly that there will be no reading for our December 2 meeting, the one after the Thanksgiving break? And then the final set of readings will concern Web Security and the Cloud, correct?
I think I could have stated that a little more clearly, but I hope you get the point. If not, don't hesitate to contact me. Thank you so much!
Friday, November 14, 2014
Week 10 Muddiest Point
To be perfectly honest, I would just like to thank Dr. Oh for going back over the CSS slides with examples. It was incredibly helpful and gave me more confidence to tackle my own style sheet, which had been haunting me for the past few weeks.
If last week's Muddiest Point was "Everything! AAaaaHhHh!", then this week there isn't one because the re-do of the lecture was so helpful. Thanks for all the examples, which were illuminating in a way that a simple explanation of the principles can't possibly be!
Thanks again!
Week 11 Reading: Blast from the Past
The reading for this week took us on a grand tour of last decade's thinking about how radically changing technology influences scholarly communication, as well as a short explanation of how search engines work (hint: it's not magic and/or wizards, it just seems that way).
We'll begin with the Paepcke, Garcia-Molina, and Wesley piece, cleverly titled "Dewey Meets Turing." They sketch a brief history of the uneasy relationship between librarians and computer scientists in developing what we now all take for granted: the digital library. Apparently the librarians were frustrated that the computer scientists weren't as thorough in organizing the digitized content (what about preservation, after all?!), and the computer scientists saw the librarians as stodgy traditionalists who slowed down development with their endless, boring organization. While this low-level eye-rolling was happening, the World Wide Web blew the roof off of everyone's plans. Instead of crafting beautiful but closed digital systems for libraries, everyone quickly realized that the public nature of the Web was the future, and the future was messy. At the time this article was written (2003), Open Access wasn't as prominent an idea as it is today, but it addresses the concerns the article raises. In fact, I imagine it was concerns like this (in a kind of high-pitched "what do we do what do we do what do we do?!" mentality) that drove the growth of OA technologies and mindsets. My favorite point from this article is that change comes very slowly in the LIS field, driven by librarians "spending years arguing over structures." Get it together, everyone, or the train will leave without us.
Still more than a decade in the past, though more progressive in their thinking, ARL laid out an argument for digital repositories that has mostly come to fruition here in the second decade of the 21st century. Institutional repositories are necessary for long-term digital preservation of scholarly material. The digital migration is a healthy and empowering movement, but preservation at the institutional level is necessary for knowledge maintenance. Moreover, building a strong institutional repository can reflect well on the institution's prestige; it's something to be proud of. This paper presages the development of green Open Access: a digital repository at the institutional level that collects, organizes, preserves, and distributes scholarly material beyond just articles accepted and published in journals. Instead, it allows access to a greater body of work, such as data sets, algorithms, theses and dissertations, and other knowledge objects outside the traditional purview of peer review, organized in such a way as to enable new forms of discovery and connection in a networked environment. The article warns against absolutely requiring scholars to self-archive their material, although this seems to be a painless and productive practice where it happens today. "Open Access is gonna be great, you guys!" seems to be the theme of the article.
Moving on to the Hawking article about the structure of web search engines. He describes the rules web crawlers ("bots" designed to index web content; like...all of it, or most of it, tip of the hat to the black and white hats) live by: be fast, be polite, only look at what the queue tells you to, avoid multiple copies of the same material at different URLs, never stop, and stay strong against spam. Algorithms then index this content and make it all searchable, no mean feat, as the amount of information on the available Web is mind-bendingly huge. Indexing algorithms create cross-searchable tables based on searchable descriptors and then rank results with respect to popularity (how many times a thing's been clicked). Really slick algorithms that seem to infer meaning (done through skipping, early termination, "clever assignment of document numbers," and caching) get famous, like Google's. It's fast, simple, and flexible.
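Since the "cross-searchable tables" idea (the inverted index) was the part that finally clicked for me, here's a toy sketch of it in Python. This is just my own illustration of the core idea, nothing like a real engine, which layers on the ranking, skipping, and caching tricks Hawking describes:

```python
# Toy inverted index: map each word to the set of documents containing it.
docs = {
    "doc1": "librarians argue over structures",
    "doc2": "computer scientists build search engines",
    "doc3": "search engines index the web for librarians",
}

index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Return the documents containing every word in the query."""
    results = [index.get(word, set()) for word in query.split()]
    return set.intersection(*results) if results else set()

print(search("search engines"))    # {'doc2', 'doc3'}
print(search("librarians search")) # {'doc3'}
```

The crawler fills a giant version of the docs dictionary, the indexer builds the table once, and every query becomes a cheap lookup-and-intersect instead of a scan of the whole Web.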
The final article was about the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), a protocol that is much touted in the other articles as well. It allows for interdisciplinary, interoperable searching of diverse types of content that find themselves suddenly close together in a digital world. Though a wide variety of organizational systems previously existed across disciplines, the exacting use of XML, Dublin Core, and other useful metadata structures makes digital scholarly content interoperable across them. The OAI protocol gives different institutional repositories a way to communicate with one another to create larger collections freely accessible to anyone with an internet connection. In addition, as is so important with regard to Open Access, metadata must be in place to track the provenance of a knowledge object. Knowledge for the people. Right on, OAI Protocol for Metadata Harvesting, right on.
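To make this concrete for myself: a harvest is just an HTTP request with a "verb" parameter, and the answer comes back as XML. Here's a sketch of what I mean, with a made-up repository address and record but (as far as I can tell) the real OAI and Dublin Core namespaces:

```xml
<!-- Request (hypothetical endpoint):
     http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc -->
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header>
        <identifier>oai:example.org:1234</identifier>
        <datestamp>2005-06-01</datestamp>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Sample Record</dc:title>
          <dc:creator>Somebody, A.</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
```

Any harvester that speaks the protocol can sweep up those Dublin Core fields from any repository that exposes them, which is the whole "larger collections" trick.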
Of course, this article came from 2005, a simpler time. As we approach XML and metadata schemes in this course, it seems to me that these protocols don't simplify anything; instead, they manage to keep things organized until they change. Again. Which isn't a bad thing, of course, and is in fact baseline necessary. The tone in 2005, however, was one of simplification. Moving toward a controlled and universal vocabulary for organizing and providing Open Access is more of a pipe dream; the best we can manage so far is pointing toward a language, and then using it. We've come a long way since 2005, but still no wizards. Dang it.
Friday, November 7, 2014
Week 9 Muddiest Point
Greetings!
Now that my head has stopped spinning from the barrage of information in the CSS lecture (rereading the slides is helpful, but sometimes I feel like we rush through things too quickly in class and I don't retain any of it; relying on the slides makes me a bit uncomfortable), I was wondering more about creating a navigation bar, as hinted at in the description for Assignment 5. Is that where the selectors controlled by "#" (for ids) and "." (for classes) come in? Because you can slip the selector into the HTML only where you need it? Or am I way off?
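To show what I mean, here's my current guess at how those selectors would work for a nav bar. This is just my own sketch, not anything from the slides, so please correct me:

```css
/* "." makes a class selector: every element with class="nav" gets these rules. */
.nav {
  list-style: none;
  background: #333333;
}

/* "#" makes an id selector: only the single element with id="home-link". */
#home-link {
  font-weight: bold;
}
```

And then in the HTML I'd only slip in the hooks where I need them, something like <ul class="nav"><li><a id="home-link" href="index.html">Home</a></li></ul>. Is that the idea?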
Also, speaking of selectors: what has been most useful for me in understanding these concepts hasn't been the slides that describe the elements in loving and painstaking detail. Those are necessary, for sure, but I think we could benefit from more examples. I couldn't understand how universal attributes could be useful until I saw an example, which I felt we rushed through.
Thanks! See you next week!
Week 10 Reading: What the What?
I have to be perfectly honest here and say that all this XML is quite confusing to me, and this whole thing is going to read like one giant Muddy Point.
XML, as opposed to HTML, is less about defining the structure of an individual document and more about a document's ability to connect to others. In addition, XML does not rely on a standardized set of terms. This allows increased flexibility in determining how different parts of a document relate to other parts, and therefore supports a more explicit, dynamic set of semantics. Unlike HTML, there are no predefined tags, but the language used can refer back to a namespace that serves as a reference point. Using a namespace, as opposed to a fixed set of tags, allows for greater interoperability between readable documents. This interoperability makes for an easier time when it comes to exchanging information across formats (maybe? Unsure).
An XML document is composed of entities with elements that have attributes. This concept is familiar. How they are created and manipulated is a little more confusing.
In the introductory statement of a piece of XML (the infuriatingly misspelled prolog), you can introduce the type of "grammar" you are going to use; you make up the tags out of your own, reasonably rational imagination! Having defined the grammar, you can fill in the syntax with elements (like BIB or BOOK) and refine those elements with attributes (BOOK gets attributes like AUTHOR and TITLE). This involves creating a document type definition (DTD). There are very many rules about how to organize the document, most of which boggled my mind. The DTD, as an ancillary, external document, reminds me a little of how CSS relates to HTML, but, again, I'm probably way off on that because they serve different purposes. The downfall of a DTD is that you have to do it yourself. Maybe that's not a downfall, though, as it provides firm control over the specific document you're creating. However, because XML is based on exchange and connection, a tag you've created may mean something within your particular DTD but mean something else to the entity that's reading the code. Enter the namespace, which essentially defines the vocabulary your XML grammar will be working with, so the computer on the receiving end of the document can use the namespace as a reference point, or dictionary.
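To check my own understanding, I tried writing a tiny document out of the BIB and BOOK pieces from the reading. Almost certainly oversimplified, but it helped me see where each part goes:

```xml
<?xml version="1.0"?>
<!-- The prolog above, then an internal DTD declaring my made-up grammar. -->
<!DOCTYPE BIB [
  <!ELEMENT BIB (BOOK+)>
  <!ELEMENT BOOK (#PCDATA)>
  <!ATTLIST BOOK AUTHOR CDATA #REQUIRED
                 TITLE  CDATA #REQUIRED>
]>
<BIB>
  <BOOK AUTHOR="Melville" TITLE="Moby-Dick">A whale of a tale.</BOOK>
</BIB>
```

The namespace alternative, as far as I can tell, would drop the DTD and instead stamp the tags with a shared vocabulary, something like <bib:BOOK xmlns:bib="http://example.org/bib">, where the URI (made up here) is the reference point the receiving computer looks to.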
The rules for linking (XLink, XPointer, and XPath) are a tightly wound bundle of what seem to me to be sub-languages. You assign addresses to the locations of the objects you want linked together using the XLink namespace, and that's where I get completely, utterly lost. Where does the XPointer point? I know it uses XPath somehow. Even the helpful, explain-like-I'm-five W3C tutorials couldn't get me to understand it.
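The one scrap I think I do understand is the XPath part, which describes a path through the document tree. Against my toy BIB example above, I believe an expression would look like this (my best guess, so grain of salt):

```
/BIB/BOOK[@AUTHOR="Melville"]
```

If I'm reading the tutorials right, that selects every BOOK under the root BIB whose AUTHOR attribute is "Melville"; XPointer then builds on expressions like this to point at a spot inside another document, which is exactly where I lose the thread.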
What I do understand, however, is that the flexibility and hierarchical structure of XML documents are good for storage in databases. This makes them pretty important. I very much look forward to next week's lecture for a little clarity.