Okay, two things, both reading-related.
I'm living on the wild side here, so things may go horribly wrong. That is, more grades were posted on BlackBoard recently, and it appears that I've fulfilled 20/20 reading blog points. Is that just a minimum, or is that portion of the course complete? While I will certainly continue to do the reading, is it all right not to post a blog about it? This is crunch time, and I would greatly value the extra time to put toward other projects. That's why, though I completed the readings, I didn't post about the Web 2.0 content (wild side!).
That said, I would like to make absolutely sure about the reading for the remainder of the course. I've completed the web 2.0 reading (for Nov 25). Am I reading BlackBoard correctly that there will not be reading for our December 2 meeting, the one after the Thanksgiving break? And then the final set of reading will concern Web Security and the Cloud, correct?
I think I could have stated that a little more clearly, but I hope you get the point. If not, don't hesitate to contact me. Thank you so much!
Friday, November 21, 2014
Friday, November 14, 2014
Week 10 Muddiest Point
To be perfectly honest, I would just like to thank Dr. Oh for reiterating the CSS slides with examples. It was incredibly helpful, and gave me more confidence to tackle my own style sheet, which had been haunting me for the past few weeks.
If last week's Muddiest Point was "Everything! AAaaaHhHh!", then this week there isn't one because the re-do of the lecture was so helpful. Thanks for all the examples, which were illuminating in a way that a simple explanation of the principles can't possibly be!
Thanks again!
Week 11 Reading: Blast from the Past
The reading for this week took us on a grand tour of last decade's thinking about how radically changing technology influences scholarly education, as well as a short explanation of how search engines work (hint: it's not magic and/or wizards, it just seems that way).
We'll begin with the Paepcke, Garcia-Molina, and Wesley piece, cleverly titled Dewey Meets Turing. They sketch a brief history of the uneasy relationship between librarians and computer scientists in developing what we now all take for granted: the digital library. Apparently the librarians were frustrated that the computer scientists weren't as thorough in organizing the digitized content (what about preservation, after all?!), and the computer scientists saw the librarians as stodgy traditionalists who slowed down development with their endless, boring organization. While this low-level eye-rolling was happening, the World Wide Web blew the roof off of everyone's plans. Instead of crafting beautiful but closed digital systems for libraries, everyone quickly realized that the public nature of the Web was the future, and the future was messy. At the time this article was written (2003), Open Access wasn't as prominent an idea as it is today, and it addresses the concerns raised by this article. In fact, I imagine it was concerns like this (in a kind of high-pitched "what do we do what do we do what do we do?!" mentality) that drove the growth of OA technologies and mindsets. My favorite point from this article is that change comes very slowly in the LIS field, driven by librarians "spending years arguing over structures". Get it together, everyone, or the train will leave without us.
Still more than a decade in the past, though more progressive in its thinking, ARL laid out an argument for digital repositories that has come mostly to fruition here in the second decade of the 21st century. Institutional repositories are necessary for long-term digital preservation of scholarly material. The digital migration is a healthy and empowering movement, but preservation at the institutional level is necessary for knowledge maintenance. Moreover, building a strong institutional repository can reflect strongly on the institution's prestige; it's something to be proud of. This paper presages the development of green Open Access: a digital repository at the institutional level that collects, organizes, preserves, and distributes scholarly material beyond just articles accepted and published in journals. Instead, it allows access to a greater body of work, such as data sets, algorithms, theses and dissertations, and other knowledge objects outside the traditional purview of peer review, organized in such a way as to enable new forms of discovery and connection in a networked environment. The article warns against absolutely requiring scholars to self-archive their material, although this seems to be a painless and productive practice where it happens today. "Open Access is gonna be great, you guys!" seems to be the theme of the article.
Moving on to the Hawking article about the structure of web search engines. He describes the marching orders of web crawlers ("bots" designed to index web content. Like…all of it. Or most of it; tip of the hat to the black and white hats): be fast, be polite, only look at what the queue tells you to, avoid multiple copies of the same material at different URLs, never stop, and stay strong against spam. Algorithms index this content and make it all searchable, no mean feat, as the amount of information on the available Web is mind-bendingly huge. Indexing algorithms create cross-searchable tables based on searchable descriptors, and then rank results with respect to popularity (how many times a thing's been clicked). Really slick algorithms that seem to infer meaning (done through skipping, early termination, "clever assignment of document numbers", and caching) get famous, like Google's. It's fast, simple, and flexible.
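The "be polite" rule, as I understand it, is mostly negotiated through a site's robots.txt file, which tells crawlers where they may and may not go. A sketch of what one might look like (the paths and bot name here are made up):

```text
# Hypothetical robots.txt: directives a polite crawler checks before fetching pages
User-agent: *
Disallow: /private/
Crawl-delay: 10      # seconds between requests; nonstandard, but widely honored

# A specific bot can be shut out entirely
User-agent: BadBot
Disallow: /
```

A well-behaved crawler fetches this file first and skips anything disallowed; spammy or malicious bots (the black hats) simply ignore it.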
The final article was about the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), a protocol that is much touted in the other articles as well. It allows for interdisciplinary, interoperable searching of diverse types of content that find themselves suddenly close together in a digital world. Though there previously existed a wide variety of organizational systems across disciplines, through the exacting use of XML, Dublin Core, and other useful metadata structures, digital scholarly content can now be described in a shared, machine-readable way. The OAI protocol gives different institutional repositories a way to communicate with one another to create larger collections freely accessible to anyone with an internet connection. In addition, as is so important with regard to Open Access, metadata must be in place to track the provenance of a knowledge object. Knowledge for the people. Right on, OAI Protocol for Metadata Harvesting, right on. Of course, this article came from 2005, a simpler time. As we approach XML and metadata schemes in this course, it seems to me that these protocols don't simplify anything; instead, they manage to keep things organized until they change. Again. Which isn't a bad thing, of course, and is in fact baseline necessary. The tone in 2005, however, seems to be that of simplification. Moving toward a controlled and universal vocabulary for organizing and providing Open Access is more of a pipe dream; the best we can manage so far is pointing toward a language, and then using it. We've come a long way since 2005, but still no wizards. Dang it.
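To make the harvesting idea concrete: a repository exposes its records at a base URL, and a harvester asks for them with simple query-string "verbs". The repository address and record identifier below are invented; the verb and metadataPrefix are real parts of the protocol:

```text
http://repository.example.org/oai?verb=GetRecord&identifier=oai:example.org:1234&metadataPrefix=oai_dc
```

The response wraps a Dublin Core description of the item in XML, roughly like this (a minimal sketch, with made-up values):

```xml
<metadata>
  <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>A Sample Thesis</dc:title>
    <dc:creator>Doe, Jane</dc:creator>
    <dc:date>2005</dc:date>
  </oai_dc:dc>
</metadata>
```

Because every compliant repository can answer the same verbs with the same minimal Dublin Core fields, a harvester can stitch many collections into one searchable whole.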
Friday, November 7, 2014
Week 9 Muddiest Point
Greetings!
Now that my head has stopped spinning from the barrage of information from the CSS lecture (rereading the slides is helpful, but sometimes I feel like we rush through things too quickly in class, and I don't retain any of it. Relying on the slides makes me a bit uncomfortable), I was wondering more about creating a navigation bar, as is hinted at in the description for Assignment 5. Is that where creating universal attributes controlled by "#" and "." comes in? Because you can slip the selector into the HTML only where you need to? Or am I way off?
Also, in mentioning the selectors, what has been most useful for me in understanding these concepts hasn't been the slides that give, in loving and painstaking detail, the descriptions of the elements. Those are necessary, for sure, but I feel like we could benefit from more examples. I couldn't understand how universal attributes could be useful until I saw an example, which I feel like we rushed through.
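Since an example is exactly what I was asking for, here's my own attempt at one: "#" selects the single element with a matching id, while "." selects every element carrying a matching class. The names "navbar" and "nav-item" are my own invention, not anything from the slides:

```html
<head>
  <style type="text/css">
    #navbar   { list-style: none; }                 /* "#" targets the one element with id="navbar" */
    .nav-item { display: inline; padding: 0 10px; } /* "." targets every element with class="nav-item" */
  </style>
</head>
<body>
  <ul id="navbar">
    <li class="nav-item"><a href="index.html">Home</a></li>
    <li class="nav-item"><a href="about.html">About</a></li>
  </ul>
</body>
```

So yes, I think the idea is that you "slip the selector into the HTML only where you need to" by adding the id or class attribute to just those elements.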
Thanks! See you next week!
Week 10 Reading: What the What?
I have to be perfectly honest here and say that all this XML is quite confusing to me, and this whole thing is going to read like one giant Muddy Point.
XML, as opposed to HTML, is less about defining the individual structure of a document and more about a document's ability to connect to others. In addition, XML does not rely on a standardized set of terms. This leads to increased flexibility in determining how different parts of a document relate to other parts, and therefore to inherently more explicit, dynamic semantics. As opposed to HTML, there are no predefined tags, but the language used can refer back to a namespace that serves as a reference point. Using a namespace, as opposed to a fixed set of tags, allows for greater interoperability between readable documents. This interoperability makes for an easier time when it comes to exchanging information across formats (maybe? Unsure).
An XML document is composed of entities with elements that have attributes. This concept is familiar. How they are created and manipulated is a little more confusing.
In the introductory statement of a piece of XML (the infuriatingly misspelled prolog), you can introduce the type of "grammar" you are going to use; you make up the tags out of your own, reasonably rational imagination! Having defined the grammar, you can fill in the syntax with elements (like BIB or BOOK), and refine those elements with attributes (BOOK gets attributes like AUTHOR and TITLE). This involves creating a document type definition (DTD). There are very many rules about how to go about organizing the document, most of which boggled my mind. The DTD, as an ancillary, external document, reminds me a little of how CSS relates to HTML, but, again, I'm probably way off on that because they serve different purposes. The downfall of a DTD is that you have to do it yourself. Maybe that's not a downfall, though, as it provides firm control over the specific document you're creating. However, because XML is based on exchange and connection, a tag you've created may mean something within your particular DTD but may mean something else to the entity that's reading the code. Enter the namespace, which essentially defines the vocabulary your XML grammar will be working with, so the computer on the receiving end of the document can use the namespace as a reference point, or dictionary.
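Trying to make this concrete for myself, here's a tiny made-up grammar using the BIB/BOOK example, with the DTD written inline in the prolog rather than as a separate file (the author and title are invented):

```xml
<?xml version="1.0"?>
<!-- The DTD defines our made-up grammar: a BIB holds one or more BOOKs,
     and each BOOK must carry AUTHOR and TITLE attributes -->
<!DOCTYPE BIB [
  <!ELEMENT BIB (BOOK+)>
  <!ELEMENT BOOK EMPTY>
  <!ATTLIST BOOK
    AUTHOR CDATA #REQUIRED
    TITLE  CDATA #REQUIRED>
]>
<BIB>
  <BOOK AUTHOR="Jane Doe" TITLE="An Imaginary Book"/>
</BIB>
```

A validating parser can check the document body against those rules, which is (I think) the "firm control" part.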
The rules for linking (XLink, XPointer, and XPath) are a tightly wound bundle of what seem to me to be sub-languages. You assign addresses to the locations of the objects you want linked together using the XLink namespace, and that's where I get completely, utterly lost. Where does the XPointer point? I know it uses XPath somehow. Even the helpful, Explain-Like-I'm-Five W3C tutorials couldn't get me to understand it.
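Here is my tentative understanding, sketched with a made-up document: XPath names a path through the element tree, and XPointer embeds that path in a URL fragment so a link can point inside the document rather than just at it.

```xml
<!-- books.xml, an invented example document -->
<BIB>
  <BOOK><TITLE>First Book</TITLE></BOOK>
  <BOOK><TITLE>Second Book</TITLE></BOOK>
</BIB>
```

The XPath expression `/BIB/BOOK[2]/TITLE` walks from the root to the second BOOK's TITLE element, and an XPointer link wraps that expression into an address:

```text
books.xml#xpointer(/BIB/BOOK[2]/TITLE)
```

So, if I've got this right, the XPointer points at whatever element the XPath inside it selects.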
What I do understand, however, is that the flexibility and hierarchical structure of XML documents are good for storage in databases. This makes them pretty important. I very much look forward to next week's lecture for a little clarity.
Friday, October 31, 2014
Muddiest Point: Week 8 (10/31/14)
So, this week's lecture was a lot, I feel like. Lots of slides, a lot of concepts, and a lot of fine-toothed work in Lab. I wish we'd had a better explanation of, and more experience with, FileZilla.
I am also struggling with the concept of absolute vs. relative linking. Is that why, in FileZilla's remote window, inside the public folder, there is a file titled ".."? So it makes it easier to build linkable web pages for the My 2600 project? I would like a little more clarification on it.
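Writing out what I think the distinction is, with an invented site layout (a public folder containing index.html and a pages subfolder): an absolute link spells out the full address, while a relative link is resolved against the current page's folder, and ".." climbs up one level.

```html
<!-- Absolute link: complete URL; works from anywhere, but breaks if the site moves -->
<a href="http://www.example.edu/~me/pages/about.html">About</a>

<!-- Relative links: resolved against the folder of the page they appear in -->
<a href="pages/about.html">About</a>  <!-- from index.html, down into the pages folder -->
<a href="../index.html">Home</a>      <!-- from about.html, ".." climbs to the parent folder -->
```

If that's right, then the ".." entry in FileZilla is just the same parent-folder shortcut, which would indeed make relative links between my project pages easier to reason about.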
As always, thanks!
Week 9 Reading: Get Hexadecimal
Mary is trying to understand!
As with the previous week's "readings", I had a lot of fun playing around with W3C's tutorials and extremely helpful examples. The cascading style sheet is a more efficient way of formatting the visual elements in an HTML document than using HTML itself to dictate the style. The value of the cascading style sheet is that the properties you manipulate are "inherited" throughout the document, so making a single change to the CSS changes everything controlled by the element you changed. That saves a lot of time if, for instance, the company you work for rebrands itself and wants blue to be the dominant color instead of, say, green. The designer only has to make the necessary adjustments on the style sheet, and the adjustment cascades through all the elements controlled by the original code. There are pre-formatted CSSs out there, but I understand that it's important to know how to write it (and HTML, of course) by hand. It gives me more power over the design of web-based elements I will (hopefully!) be designing as I move back into the workforce.
A piece of CSS that describes a change in style (a "rule") can be split into several parts. The selector defines which element of a document will be modified; this can be a header, a paragraph, etc. The declaration does just that: declares what qualities to display in relation to the selector. The declaration has two parts, a property and a value, which point specifically to discrete variables that affect the appearance of the selector. In other words, if I want to change the appearance of my first header (the selector), my declaration would set its color (the property) to, say, green (the value). These rules are governed by discrete semantics, which makes it seem pretty simple.
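Spelled out as code, one complete rule with that anatomy looks like this (the particular colors and sizes are just my choices):

```css
/* One complete rule: selector { property: value; } */
h1 {
  color: green;        /* a declaration: property "color", value "green" */
  text-align: center;  /* a second declaration in the same rule */
}
```

Every h1 in the document then picks up both declarations at once, which is where the time savings come from.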
But it's not!
Formatting a coherent CSS requires a lot of abstract design before writing the style sheet. You (well, I) have to hold a design in your (my) mind the whole time we're dealing with brackets and curly brackets and hex codes and alignment buffers. I can start to understand what people mean when they call a piece of code "beautiful".
Moving on, though.
The brilliant (but conceptually difficult) thing about CSS is that you can design the entire document in one go. That is, using the specified language provided by, say, W3C's cheat sheets, you can create an entire, consistent thematic design for a website without having to go in and manually change each and every element by hand. If you want your first three heading levels to be bright pink and in 36 pt Comic Sans, you can specify that in the CSS by simply listing h1, h2, and h3 as the selectors in your rule. No one but no one likes Comic Sans, though, so I don't advise it. Alternately, you can control the design elements of individual headers as their own selectors, and can nest commands like HTML. This is where I see it becoming really difficult to track changes across a long, plain-text document.
So, you've manipulated your selectors, declarations, values, and properties into a document that you're rather proud of. What next? You can stick your CSS right into the head of the HTML document and upload it to a browser. Apparently, however, some (but not all) browsers won't know how to read the kind of CSS you've used, so it's important to tell the browser which language you're using. If some jerk browser is giving your CSS trouble, you can apparently wrap the rules in HTML's "comment" markers so that older browsers skip them instead of rendering them as text.
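Putting those pieces together, here's a sketch of the pink-Comic-Sans example embedded in the document head; the hex code is my guess at "bright pink", and the type attribute is the bit that tells the browser which stylesheet language it's reading:

```html
<head>
  <style type="text/css">
    /* One rule covers all three heading levels at once */
    h1, h2, h3 {
      color: #FF69B4;                         /* a bright pink, by hex code */
      font-family: "Comic Sans MS", cursive;  /* please don't actually do this */
      font-size: 36pt;
    }
  </style>
</head>
```

Listing the selectors with commas is what applies the single rule to all three heading levels; separate h1, h2, and h3 rules would let each be styled on its own.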
Also, I've gone and hit "Preview", and it turns out that all this CSS I've been trying to do hasn't affected the blog post. I tried to do the "Inspect Element" option, but while I can change the text color, it doesn't seem to stick. Can you see what I was trying to do here? I put some CSS into the HTML head, but I suppose the Blogger code trumps mine. After much tinkering around and swearing under my breath, it worked when I plugged it into the W3C "Try It!" window, which was comforting. I wanted a pinkish background with lovely yellow letters in the headers. All I got was centered headers. Better than nothing, I suppose. I can see how designing with CSS, combined with proper HTML, can be simultaneously satisfying and infuriating.