Drucker & Digital Humanities (Day 24)

Johanna Drucker explains in her article, From Digital Humanities to Speculative Computing, that “part of the excitement” about digital humanities was “learning new languages through which to rethink our habits of work.” She describes how the impulse to challenge the “cultural authority of computational methods” came out of a period in which the power of digital technology was a source of infatuation. Digital humanists were interested in analogies involving “the intellectual power of information structures and processes” and in how those structures and processes connect with each other.

In the humanities, we are able to compose data into websites and social media, use it as a source for other pages, and apply our findings to design new information architecture. “The systematical analysis of texts, creation of structured data, and design of information architecture are the basic elements of digital humanities.” This means that we can take a data set and add it to a database, or create a webpage dedicated to our experiment. We can make graphs, charts, interactive timelines, etc. to present our data based on that analysis.

In an expansion of that piece, Speculative Computing: Aesthetic Provocations in Humanities Computing, Drucker discusses the digital humanities with regard to analysis and computing. She explains that “‘Digital’ humanities are distinguished by the use of computational methods… but they do also make frequent use of visual means of information display (tables, graphs, and other forms of data presentation) that have become common in desktop and Web environments.” In other words, although digital humanities is known for computation, it can use other forms of media to represent its content. But one challenge in using these visual forms, Drucker explains, is “to meet requirements that humanistic thought conform to the logical systemacity” required by computational methods. The second challenge is to overcome humanists’ passive or hostile resistance to “visual forms of knowledge production.”

The resistance arises because, for most humanists, the “idea that visual representation has the capacity to serve as a primary tool of knowledge production is an almost foreign notion.” Because they also see digital objects as immaterial, it is difficult to convince them that digital media representations can be very useful for understanding content. “Speculative” computing emphasizes visual means of interpretation for digital humanities. Throughout the article, Drucker stresses the need for precise attention to detail and for well-organized data; with both, the resulting digital artifacts will be accurate and will support the data they represent.


Ridolfo and DeVoss (Day 23)

The article, Composing for Recomposition: Rhetorical Velocity and Delivery, introduces a new conceptual consideration called “rhetorical velocity.” It is described as “a conscious rhetorical concern for distance, travel, speed, and time, pertaining specifically to theorizing instances of strategic appropriation by a third party.” Velocity here, as in physics, deals with speed and momentum, only applied to rhetoric and written words. The concept can be used to design texts for third-party recomposition and to “wrestle with some of the issues particular to digital delivery.”

It’s interesting, because rhetorical velocity applies to many digital composing literacies. The authors give the example of a press release, where the content is geared toward both oral delivery and writing, so the writers can strategize about how it might be recomposed. The idea is to take a piece of media and reconstruct it into something that is easy to discuss, find, and argue for, where “ideas change shape, gather speed, and are elsewhere delivered.”

Gochenour, Taylor, Jones, and Writing Wakan (Day 21-22)

We live in a time when technology is changing and evolving constantly, but there was a time when networks, like walls, were a new thing. According to an excerpt from Mark Taylor’s The Moment of Complexity, 1989 marks the transition into network culture. This happened because industrial organizations had begun to change subtly, changes “brought on by new information and communications technologies.”

Taylor asks us to decipher what makes up grids and networks by questioning them directly: “What is its function? What is its structure? What is a grid/network today? What is the architecture of grids/networks today?” By doing this we are able to appreciate their complexity, which is “both a marginal and eminent phenomenon” that is never “fixed or secure,” giving rise to chaos theory.

Willem Pieterson’s article, “The Philosophy of Networks,” talks about loops within Wikipedia. According to Pieterson, if you open a random article on the site and click the first link that isn’t in italics or parentheses, you’ll most likely “end up alternating between the Wikipedia pages for Philosophy and Reality.” It’s really strange, but I’ve tried it, starting at one of my favorite movies and clicking the first link. Sure enough, it ended on the topic of philosophy, which it kept returning to even when I clicked a different link each time. Wikipedia explains that “as you repeat the process, Wikipedia’s conventions funnel you towards a similar set of more general articles until you finally reach the smallest set with the most general topic of all.”
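If you’d like to try the experiment without all the clicking, here is a rough Python sketch of it. It assumes the requests and beautifulsoup4 libraries are installed, simply follows the first in-article wiki link it finds, and skips the italics/parentheses rule, so its path may differ a bit from doing it by hand.

```python
# A rough sketch of the "click the first link" experiment described above.
import requests
from bs4 import BeautifulSoup

def first_link(title):
    """Return the title of the first article linked from the given page."""
    html = requests.get(f"https://en.wikipedia.org/wiki/{title}").text
    body = BeautifulSoup(html, "html.parser").find(id="mw-content-text")
    for a in body.find_all("a", href=True):
        href = a["href"]
        # Keep only ordinary article links, skipping File:, Help:, etc.
        if href.startswith("/wiki/") and ":" not in href:
            return href.split("/wiki/")[1]
    return None

page = "Star_Wars"   # start anywhere you like
for _ in range(30):  # cap the walk so it can't loop forever
    print(page)
    if page == "Philosophy":
        break
    page = first_link(page)
    if page is None:  # no usable link found; stop
        break
```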

So, since the Philosophy and Reality pages are the most recurring, does that show their importance in the link network? Pieterson concludes that these patterns “do not arise by accident”; the networks have developed “clear and explainable patterns.” The articles were written by people, some of whom may have put more or less detail into a topic depending on their own interests (Star Wars fighter ships, for example, could be described in extreme detail or not touched on at all, depending on what a writer decides to include). But the networks “can be shaped,” which Pieterson concludes is the most important message to take from the experiment.

Phillip H. Gochenour, the author of the article Nodalism, explains this further by writing that “we can see the formation of a basic assumption later underlying the nodalistic trope: that thought is the result of interaction between units or nodes, in this case neurons or areas of the brain, and that it is reduced to the structural components.” Since the links within Wikipedia pages are nodes within the network, they function much like our brains do: clicking on another node changes the subject of discussion, just as the neurons in our brain change our thoughts or impulses.

These nodes are also present in Professor Grant’s research paper, Writing Wakan. Native Americans’ beliefs and ways of life are influenced by traditions, stories, and gods that manipulate the environment. Nodes make up these beliefs and traditions, and they loop around and contribute to an understanding of and respect for the universe.


Clement, Blatt, Hoffman & Waisanen R&DH (Day 20)

Tanya Clement’s article, Text Analysis, Data Mining, and Visualizations in Literary Scholarship, made me think of my own reading: when I have a bunch of different novels to read for different classes on the same day, I tend to skim the texts. In doing so, I don’t have the opportunity to decipher the small interlocking details that make up the plot, but I am able to ascertain the overall story. Close reading, by contrast, analyzes the smaller data in the text, such as the meaning of specific sentences or words in context. Clement describes an experiment in which the words in a text are explored and counted in order to find connections between the characters and their most commonly used words or phrases. Trends in word frequencies can also be noted and graphed, although Clement says this only “provides us with a simplified view of the text” (paragraph 15).

Ben Blatt also explores the idea of counting the words of a text to make connections within it in his article, “A Textual Analysis of the Hunger Games.” Trying to determine why a reader might “take a shine to one series and not the other,” Blatt compares the words that make up the works of Suzanne Collins (The Hunger Games), Stephenie Meyer (Twilight), and J.K. Rowling (Harry Potter). What he ends up producing are multiple charts of the most distinctive words, divided into adjectives and adverbs, plus the most common sentences used by each author. The result was pretty cool, with the words ending up representing each series pretty well. Blatt concludes from his findings that “textual analysis has its limitations, of course, but word counting can illuminate the tendencies of writers in a way that word reading may not.”
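To get a feel for what this kind of counting looks like in practice, here is a toy Python sketch. The two mini-“texts” are invented, not passages from the actual novels; the idea is simply to count how often each author uses each word and compare relative frequencies to surface the most distinctive ones, which is the spirit of Blatt’s charts rather than his actual method.

```python
# A toy version of distinctive-word counting with made-up mini-texts.
from collections import Counter

author_a = "she said softly and smiled softly then walked away quickly".split()
author_b = "he shouted loudly and ran quickly then quickly turned back".split()

counts_a = Counter(author_a)
counts_b = Counter(author_b)

def distinctive(counts, other, total, other_total):
    """Rank words by how much more often one author uses them than the other."""
    scores = {}
    for word, n in counts.items():
        rate = n / total
        other_rate = other.get(word, 0) / other_total
        scores[word] = rate - other_rate
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]

print("Most distinctive for A:", distinctive(counts_a, counts_b, len(author_a), len(author_b)))
print("Most distinctive for B:", distinctive(counts_b, counts_a, len(author_b), len(author_a)))
```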

Long Tail and Topic Modeling (Day 19)

Since we are surrounded by social media, such as Facebook, Twitter, and Instagram, it’s easy for us to keep track of what’s going on in other people’s lives. That’s awesome for most people, including myself; I’m able to keep track of what big things have been going on with my family, even though they might be hours or states away. Sometimes it can be bad, when you’ve just broken up with someone and can’t escape seeing their activity on social media, like this visual shows (check it out, it’s pretty funny!), but for the most part, it keeps us up to date and connected with other people.

Moving to a different subject, we explore topic modeling and how topic-modeling algorithms can be used to “summarize, visualize, explore, and theorize about a corpus,” according to David M. Blei. In his article, Blei takes the topics discovered by the algorithm, or “groups of terms that tend to occur together” because of patterns found in the corpus, and analyzes them. Once a model and an archive are ready to be fed to the algorithm, it can estimate the hidden structure imagined to exist in the texts or other media. Those estimates can be used to confirm theories, while also “forming new theories, and using the discovered structure as a lens for exploration.” This relates to digital humanities because it is all about creating archives and creating connections. Data is constantly being reworked and placed into different experiments in the hope that new patterns will emerge and new connections will be found.
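For a concrete, if tiny, picture of what a topic-modeling algorithm does, here is a minimal sketch using scikit-learn’s LDA implementation. This is not Blei’s own code, and the four-sentence “corpus” is made up purely for illustration; it just shows the algorithm discovering groups of terms that tend to occur together.

```python
# A minimal topic-modeling sketch on an invented four-document corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the river flooded the valley and the farms",
    "the senate passed the bill after a long debate",
    "rain and river levels worried the farmers",
    "voters watched the debate about the new bill",
]

# Turn the documents into word counts, then fit a 2-topic LDA model.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the terms that "tend to occur together" in each discovered topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```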


Grasping Rhetoric and Composition by its Long Tail: What Graphs Can Tell Us about the Field’s Changing Shape (Day 18)

In the article, Derek Mueller presents graphs as one technique for visually representing data. Because graphs are able to “change the scale of detail,” they can “help us engage with patterns of disciplinary activity that would otherwise be difficult to discern” (197).

Personally, graphs are incredibly helpful when I’m trying to read through data sets. The numbers tend to blur together, but when they’re modeled in a graph, the organized visual pattern clears up that blur. And when you’re building your own data collection, graphs are perfect for organizing it and predicting new trends.


When using graphs to find patterns across a number of articles, Mueller notes that the article is the default scale of the traditional scholarly journal, and that journals already include “numerous features designed to help readers access small-scale units” of an article, such as its issue, title, and author, without having to read it more thoroughly first. “A simple table of contents, for example, supports a glancing sort of distant reading at one scale, and article abstracts allow for distant reading at a scale only slightly closer to the stuff of the article than the title and author listing” (198).

Zooming in, I think it’s cool that both close reading and distant reading can be applied to graphs as well, since I had only ever thought of close and distant reading as something you do with a book. I’d never considered that the whole of a journal or database, or even a graph, can support both kinds of reading.

“To clearly and responsibly engage with this complicated, shifting expanse, we need the full spectrum of data, not only the list of the most frequently appearing names. The full distribution is required if we are to examine the relationship between what has happened at the head of the distribution and what has happened furthest from it, in the long tail” (215). By looking at the long tail and comparing it with the highest point of the graph over time, Mueller is able to gain “new insights, new provocations, and new questions: what has changed, over time, in the relationship between the head of the curve and the long tail?” (215).
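To picture what “the head of the curve and the long tail” look like, here is a quick matplotlib sketch. The citation counts are entirely made up, not Mueller’s data; the point is only the shape of the distribution, with a few names cited constantly and a long tail of names cited once or twice.

```python
# A sketch of a long-tail distribution using hypothetical citation counts.
import matplotlib.pyplot as plt

# Made-up frequency counts, sorted from most-cited to least-cited name.
counts = sorted([87, 54, 41, 33, 25, 19, 14, 11, 9, 7, 6, 5, 4, 3, 3, 2, 2,
                 2, 1, 1, 1, 1, 1, 1, 1], reverse=True)

plt.bar(range(len(counts)), counts)
plt.xlabel("Names, ranked by frequency")
plt.ylabel("Number of citations")
plt.title("Head of the curve vs. the long tail (hypothetical data)")
plt.show()
```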


Twitterbot (Day 16 & 17)

I’d never heard of a Twitterbot before I read the articles on them, but let me just say, those things are pretty cool. If you don’t have time to continuously update your Twitter with a new post whenever you’d like, you can use Twitterbot to post for you.

According to the article The Intermittent, “Twitterbot is a big Japanese site” that “lets you queue up to 700 tweets to post either in turn or at random, and lets you set the interval between about 30 minutes and 24 hours.” For the social networking buff, this is a neat tool to use. If you can’t post tweets yourself, you can type them into Twitterbot, set a timing interval, and let Twitterbot post for you. This gives your profile the illusion of steady updating, when it’s actually Twitterbot, set up in advance, doing the posting while you’re away from the site.

Using a service like Twitterbot doesn’t require any coding, but making a creative bot of your own does. Darius Kazemi, in his article “How to Make a Twitter Bot,” states that “if you want to make a creative, interesting bot, you need to understand computer programming,” and his site includes links that walk through the basic coding needed to create a functioning Twitter bot.
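For the curious, here is roughly what the “queue up tweets and post them on an interval” idea looks like in code. This is a minimal sketch assuming the tweepy library; the credentials and the queued tweets are placeholders you would supply yourself, and a real bot would usually run on a proper scheduler (and mind the rate-limit rule in the list below) rather than an endless sleep loop.

```python
# A minimal queued-posting sketch using tweepy; credentials are placeholders.
import time
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_KEY",
    consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN",
    access_token_secret="YOUR_TOKEN_SECRET",
)

queued_tweets = [
    "First queued post.",
    "Second queued post.",
    "Third queued post.",
]

INTERVAL_SECONDS = 60 * 60  # post once an hour, well inside rate limits

for text in queued_tweets:
    client.create_tweet(text=text)  # publish the next tweet in the queue
    time.sleep(INTERVAL_SECONDS)    # wait before posting the next one
```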

The four basic rules Kazemi lists in Basic Twitter Bot Etiquette are:
1) Don’t mention people who haven’t opted in (@)
2) Don’t follow Twitter users who haven’t opted in
3) Don’t use a pre-existing hashtag (#)
4) Don’t go over your rate limits

These rules keep your bot from doing things like auto-following a bunch of people at once or butting in whenever people use particular keywords or hashtags. Once you’ve followed the steps and your Twitter bot is created, you should have a properly functioning application that will systematically post for you!


Strange Attractors (Day 15)

The comic book Strange Attractors, by Charles Soule and Greg Scott, is based on the idea that minuscule, carefully manipulated events could trigger massive change in society. And let me tell you: it. Was. Awesome.

With beautiful art and a fast-paced plotline, it felt more like watching a movie, or an alternate-reality episode of Sherlock, than reading. The story is set in New York City because, just like a computer, the city is a complex interlocking system in which every small detail depends on every other, making it the perfect place to base the comic’s cause-and-effect plot. Throughout the story, our protagonist Heller Wilson is continually shown by Professor Spencer Brown how one seemingly unimportant action can set off a chain reaction with a hugely negative, positive, or stabilizing effect, and how the effects of many such small actions can alter the whole city. Armed with this new information, Heller has to save the city from destroying itself.

The story has a really cool butterfly-effect vibe that is made mathematical through data. Heller builds a database for Spencer that combines local news and statistics to predict trends in upcoming events around the city. From there the data is turned into a model, with blue representing stability and red representing chaos. Each time a pocket of chaos is counteracted, the city becomes more stable. The trick, though, is that each counteraction has to be precise and perfectly timed, exactly as the trend predicts.

The city is like a giant machine, and the system Heller and Spencer invented is the network that runs it.

“Computer Programming as a Literacy” & “The Relevance of Algorithms” (Day 14)

In one of my first few blog posts, I talked about how enormously smart technology is, and how it can predict things we’d like from what we’ve already looked at. But how does it predict them? If you look at any trendsetting fashion magazine out there, the “what’s hot / what’s not” categories let us readers know what’s in and what’s out for the season. Incredibly, technology can do the same thing. It can predict trends, pick out things that have been widely received and well liked, and incorporate those trends into personalized indexes of what we’d likely want to view.

But how does it do it? Is it magic? Are websites just taking lucky guesses at videos and media we could be interested in?

No, although it would be cool if wizards were the ones manipulating the data: the answer is math. Algorithms, to be exact. Tarleton Gillespie, the author of the article “The Relevance of Algorithms,” defines algorithms as “encoded procedures for transforming input data into a desired output, based on specified calculations.” They become an index of possible choices through patterns of inclusion, or the ability to predict and incorporate trends. When a user starts using a site like YouTube, for example, each video they watch builds up an index of related content that becomes more and more tailored to them. The more videos you watch, the more closely the related videos adhere to your interests, because the algorithm can pick up on consistent trends.
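Just to make the idea concrete, here is a toy sketch in Python. It is nothing like YouTube’s real recommendation system, whose details the article doesn’t spell out; it only illustrates the general pattern Gillespie describes, scoring hypothetical candidate videos by how many tags they share with a made-up watch history, so the more you watch of one kind of thing, the more of it gets recommended.

```python
# A toy "watch history shapes recommendations" sketch with invented data.
from collections import Counter

watch_history = {
    "lofi study mix": ["music", "lofi", "study"],
    "guitar tutorial": ["music", "guitar", "tutorial"],
}

candidates = {
    "jazz guitar solo": ["music", "guitar", "jazz"],
    "cooking pasta": ["cooking", "food"],
    "piano practice tips": ["music", "tutorial", "piano"],
}

# Count how often each tag appears in the watch history.
tag_weights = Counter(tag for tags in watch_history.values() for tag in tags)

# Score each candidate by the total weight of the tags it shares with the history.
scores = {
    title: sum(tag_weights[tag] for tag in tags)
    for title, tags in candidates.items()
}

for title, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>2}  {title}")
```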

Big Data vs. Small Data (Day 13)

If there is a micro side to things, there is also always a macro, and data is no exception to that rule. Jockers puts it this way: “Just as we would not expect an economist to generate sound theories about the economy by studying one or two consumers or one or two businesses, we should not expect sound theories about literature… or about literary history, to be generated out of a study of a few books, even if those books are claimed to be exemplary or representative.”

Flanders explains this by saying: “I think the classic case for the “micro” approach says, in effect, that we can’t trust big data because it’s fundamentally careless from a data capture standpoint.” In other words, big data can often have many errors, as well as a lack of metadata.

Reading, too, can be subject to micro and macro analysis. When reading a novel or poetry, or even watching a movie, one must look not only at the characters’ decisions and traits and the specific moments within the plot that affected the storyline, but also at things like the year the work was published, where, and by whom. Without those bigger “macro” bits of information, we miss the historical context of the work, the background of the author, and the general audience it was intended for.