Technoromanticism » Amanda Visconti (English 738T, http://mith.umd.edu/eng738T)

Book Hacking Primer (Amanda Visconti, 17 May 2012)

As part of my final project, I've created a website unbibliography, http://digitalliterature.net/bookhacking, providing a survey of readings on the idea of hacking the book: rewiring, reconsidering, and rebelling against the conventions of the traditional print codex, beginning with William Blake’s masterful Romantic productions. The readings cover the ways in which Blake hacked the book, how formats such as the Total Work of Art and artists’ books have further deformed the standard print tome, and how digital editions—particularly those electronically remediating Blake’s hacked books—themselves function as explosions of the conventions of the book. The readings pay particular attention to the visual design of books and online editions, treating graphical decisions as critical features of these texts and creating a catalog of opportunities and techniques for hacking the book.

Supplemental readings on Agrippa and Digital Forensics (Amanda Visconti, 29 April 2012)

More resources on Agrippa:

  1. Kirschenbaum, Matthew G., with Doug Reside and Alan Liu. “No Round Trip: Two New Primary Sources for Agrippa.” (From the Agrippa Files site on the syllabus, but added subsequent to the other research work on the site)
  2. Traub, Courtney. “An Interview with Kevin Begos, Jr.” The Oxonian Review 19.1 (23 April 2012).
  3. Jones, Steven E. “Agrippa, the Eversion of Cyberspace, and Games.” Blog post response to the Traub-Begos interview that suggests thinking about Agrippa against ARGs and other transmedia work.

Digital Forensics and Literary Study

Matt Kirschenbaum’s recent Chronicle article on the importance of digital forensics to literary study (which looks like it’s now behind a paywall, but MITH might have a paper copy in the couch area)

Matt’s Mechanisms: New Media and the Forensic Imagination: Chapter 5 in particular focuses on Agrippa, but the whole book is a great read if you’re interested in new media. Outlining two paths for thinking about new media objects–forensic materiality and formal materiality–the book suggests “forensic imagination” as a path to thinking critically about new media (e.g. considering wear, trauma, time) as textual objects with particular histories and physicalities.

Forensic materiality examines each constituent part of a new media object as ultimately unique (e.g. because of varied manufacturing and care conditions, my Tetris NES cartridge is on some level not a perfect double of yours–just as with early printed editions, multiple “copies” are really separate objects, each worthy of individual study because of its inconsistencies).

Formal materiality concerns itself with symbols and symbol manipulation rather than matter, bits (without material dimensions, just on/off switches) rather than atoms (with their microscopic but real material dimensions). Kirschenbaum gives the example of shifting ways of interfacing with a digital object–with an image file, for instance, we often end up privileging the “view image” function over other functions that can also be studied, such as those that look at the image file’s metadata or header.
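To make the “view image” versus file-internals distinction concrete, here's a minimal Python sketch of my own (the filename is hypothetical, and this is an illustration rather than anything from Mechanisms) that ignores the rendered picture entirely and reads only the file's opening bytes, the kind of formal layer Kirschenbaum points to:

# A minimal sketch: inspect an image file's raw header bytes instead of
# "viewing" it. Magic-number signatures for a few common formats.
MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"\xff\xd8\xff": "JPEG",
    b"GIF87a": "GIF (87a)",
    b"GIF89a": "GIF (89a)",
}

def identify_format(path):
    """Return the image format suggested by the file's opening bytes, if any."""
    with open(path, "rb") as f:
        header = f.read(8)  # eight bytes cover all the signatures above
    for signature, name in MAGIC_NUMBERS.items():
        if header.startswith(signature):
            return name
    return "unknown"

# identify_format("creature_plate.png")  # hypothetical filename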

Cultural Memory

A fantastic article on how we manufacture memory as a culture: it looks at both how we mark things we want to remember in ways we assume the future will still understand (e.g. monuments for fallen soldiers and victims) and how we might warn future generations away from danger (e.g. how to mark a nuclear waste site to protect those who can no longer read our current written language). Some food for thought on how we imagine permanence and importance with respect to the materials and ways of inscribing we use:

Kenneth E. Foote (1990). “To remember and forget: archives, memory, and culture.” American Archivist 53/3 (Summer): pp. 378-392.

Team MARKUP Documentation (Amanda Visconti, 23 April 2012)

I created some webpages with the documentation used by Team MARKUP: http://amandavisconti.github.com/markup-pedagogy/. The content represents almost everything we worked from during the encoding phase of our project, except some administrivia and links/images representing copyrighted content (sorry, no manuscript screenshots!).

“How Can You Love a Work If You Don’t Know It?”: Six Lessons from Team MARKUP (Amanda Visconti, 19 April 2012)

Encode all the things... or not. Remixed from an image by Allie Brosh of Hyperbole and a Half (hyperboleandahalf.blogspot.com).

Update 4/24/2012: Oh, neat! This post got the DH Now Editor’s Choice on Tuesday, April 24th, 2012.

Team MARKUP evolved as a group project in Neil Fraistat’s Technoromanticism graduate seminar (English 738T) during the Spring 2012 term at the University of Maryland; our team was augmented by several students in the sister course taught by Andrew Stauffer at the University of Virginia. The project involved using git and GitHub to manage a collaborative encoding project, practicing TEI and the use of the Oxygen XML editor for markup and validation, and encoding and quality-control checking nearly 100 pages of Mary Shelley’s Frankenstein manuscript for the Shelley-Godwin Archive (each UMD student encoded ten pages, while the UVa students divided a ten-page chunk among themselves).
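For anyone who hasn't seen TEI before, here's a rough sense of what that kind of markup looks like and how you can poke at it programmatically. The snippet below is an invented, simplified example (no namespaces, not actual Shelley-Godwin Archive encoding) embedded in a small Python sketch:

import xml.etree.ElementTree as ET

# An invented, simplified scrap of TEI-style markup (not real Shelley-Godwin
# Archive encoding) recording one authorial deletion and one addition.
sample = """
<line>I beheld the <del rend="strikethrough">wretch</del>
<add hand="MWS">miserable monster</add> whom I had created.</line>
"""

root = ET.fromstring(sample)
additions = root.findall(".//add")
deletions = root.findall(".//del")
print(len(additions), "addition(s) and", len(deletions), "deletion(s)")
print("added text:", [a.text for a in additions])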

Team MARKUP is currently writing a group blog post on the process, so I’ll use this post to concentrate on some specifics of the experience and link to the group post when it’s published.

Screenshot: TEI encoding of the Frankenstein manuscript in the Oxygen XML editor ("The Creature speaks").

Six takeaways from the Team MARKUP project:

  1. Affective editing is effective editing? One of my favorite quotations–so beloved that it shapes my professional work and has been reused shamelessly on my Ph.D. exams list, a Society for Textual Scholarship panel abstract, and at least one paper–is Gary Taylor’s reasoning on the meaningfulness of editing:

    “How can you love a work, if you don’t know it? How can you know it, if you can’t get near it? How can you get near it, without editors?”*.

    Encoding my editorial decisions with TEI pushed me a step closer to the text than my previous non-encoded editorial experience, something I didn’t know was possible. My ten pages happened to be the first pages of the Creature’s monologue; hearing the voice of the Creature by seeing its true creator’s (Mary Shelley’s) handwriting gave me shivers–meaningful shivers accompanied by a greater understanding of important aspects of Shelley’s writing, such as the large editorial impact made by her husband Percy and the differing ways she crossed out or emphasized changes to her draft. Moving between the manuscript images and the TEI encoding–so similar to my other work as a web designer and developer–also emphasized the differences between my generation’s writing process and the work that went into inscribing, organizing, and editing a book without the aid of a mechanical or digital device.

  2. Project management. Because we didn’t know what to expect from the project until we were in the thick of encoding–would everyone be able to correctly encode ten full pages? how would we control quality across our work? what would our finished pages look like in terms of encoding depth?–we spent most of the project functioning as a large team, which was sometimes as unwieldy as our large GoogleDoc (trying to find a time when eight busy graduate students can meet outside of class is difficult!) and sometimes made sense (I was one of the few people on our team comfortable with GitHub and encoding at the start of the project, so I helped with a lot of one-on-one Skype, in-person, and email sessions early on). If I were doing the project over, I would hold a single Bootcamp day where we all installed git, pushed to GitHub, and encoded one page of manuscript up on the projector screen, and then delegate my role as team organizer by dividing us into three subgroups. I would also ask people to agree ahead of time to be available for specific in-person meeting times, rather than trying to schedule these only one or two weeks beforehand. I do think things worked out pretty well as they did, largely because we had such a great team. Having the GoogleDoc (discussed more below) as a central point for tech how-tos, advice, and questions was also a good choice, though in a larger project I’d probably explore a multi-page option such as a wiki so that information was a) easier to navigate and b) easily made public at the end of our project.
  3. Changing schemas and encoding as interpretive. Encoders who started their work early realized that their efforts had good and bad results: because the schema saw frequent updates during our work, those who finished fast needed to repeatedly update their encoding (e.g. a major change was removing the use of <mod type>s). Of course it was frustrating to need to update work we thought was finished–but this was also a great lesson about working on a real digital edition. Not only did the schema changes get across that the schema was a dynamic response to the evolving methodology of the archive, they also prepared us for work as encoders outside of a classroom assignment. Finally, seeing the schema as a dynamic entity up for discussion emphasized that even among more seasoned encoders, there are many ways to encode the same issue: encoding, as with all editing, is ultimately interpretive.
  4. Encode all the things! Or not. Depth of encoding was a difficult issue to understand early on; once we’d encoded a few pages, I began to have a better sense of what required encoding and what aspects of the manuscript images I could ignore. Initially, I was driven to encode everything, to model what I saw as thoroughly as possible: sums in the margins, different types of overstrikes, and analytical bibliography aspects such as smudges and burns and creases. What helped me begin to judge what to encode was understanding what was useful for Team MARKUP to encode (the basics that would apply to future encoding work: page structure and additions and deletions), what was useful for more advanced encoders to tackle (sitting in on the SGA staff meetings, I knew that some of our work would be subject to find-and-replace by people more experienced with Percy and Mary’s handwriting styles), and what our final audience would do with our XML (e.g. smudges and burns weren’t important, but Percy’s doodles could indicate an editorial state of mind useful to the literary scholar).
  5. Editorial pedagogy. Working on Team MARKUP not only improved my markup skills, it also gave me more experience with teaching various skills related to editions. As I mentioned above, acting as organizer and de facto tech person for the team gave me a chance to write up some documentation on using GitHub and Oxygen for encoding work. I’m developing this content for this set of GitHub Pages to help other new encoders work with the Shelley-Godwin Archive and other encoding projects. Happily, I was already scheduled to talk about editorial pedagogy at two conferences right after this seminar ends; the Team MARKUP experience will definitely become part of my talks during a panel I organized on embedding editorial pedagogy in editions (Society for Textual Scholarship conference) and a talk on my Choose-Your-Own-Edition editorial pedagogy + games prototype at the Digital Humanities Summer Institute colloquium in Victoria.
  6. Ideas for future encoding work. I’ve started to think about ways to encode Frankenstein more deeply; this thinking has taken the form of considering tags that would let me ask questions about the thematics of the manuscript using Python or TextVoyeur (aka Voyant); I’m also interested in markup that deals with the analytical bibliography aspects of the text, but need to spend more time with the rest of the manuscript images before I think about those. So far, I’ve come up with five new thematic tagging areas I might explore:
  • Attitudes toward monstrosity: A tag that would identify the constellation of related words (monster, monstrous, monstrosity), any mentions of mythical supernatural creatures, metaphorical references to monstrosity (e.g. “his vampiric behavior sucks the energy out of you”), and reactions/attitudes toward the monstrous (with attributes differentiating positive, negative, and neutral responses to confronting monstrosity). I could then track these variables as they appear across the novel and look for patterns (e.g. do we see fewer metaphorical references to monstrosity once a “real” monster is more prevalent in the plot?).
  • Thinking about doodles: We’re currently marking marginalia doodles with <figure> and a <desc> tag describing the drawing. In our section of the manuscript, many (all?) of these doodles are Percy Shelley’s; I’d like to expand this tag to let me identify and sort these doodles by variables such as complexity (how much thought went into them rather than into editing the adjacent text?), sense (do they illustrate the adjacent text?), and commentary (as an extension of sense tagging, does a doodle seem ironically comic given the seriousness or tragedy of the adjacent text?). For someone like me, new to studying Percy’s editorial role, such tagging would help me understand both his editing process and his attitude toward Mary’s writing (reverent? patronizing? distracted? meditative?).
  • Names, dates, places: These tags would let us create an animated timeline of the novel that shows major characters as they move across a map.
  • Anatomy, whole and in part: To quote from an idea raised in an earlier post of mine, I’d add tags that allowed “tracking the incidence of references to different body parts–face, arms, eyes–throughout Frankenstein, and trying to make sense of how these different terms were distributed throughout the novel. In a book concerned with the manufacture of bodies, would a distant reading show us that the placement of references to parts of the body reflected any deeper meanings, e.g. might we see more references to certain areas of the body grouped in areas of the novel with corresponding emphases on display, observation, and action? A correlation in the frequency and placement of anatomical terms with Frankenstein’s narrative structure felt unlikely (so unlikely that I haven’t run my test yet, and I’m not saving the idea for a paper!), but if it had been lurking in Shelley’s writing choices, TextVoyeur would have made such a pattern more visible.”
  • Narrative frames: Tags that identified both the specifics of a current frame (who is the speaker, who is their audience, where are they, how removed in time are they from the events they narrate?) and that frame’s relationship to other frames in the novel (should we be thinking of these words as both narrated by Walton and edited by Victor?) would help create a visualization of the novel’s structure.

I expect that playing around with such tags and a distant reading tool would yield even better thinking about encoding methodology than the structural encoding I’ve been working on so far, as the decisions on when to use these tags would be so much more subjective.
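As a rough illustration of how thematic tags like these could feed a distant reading, here's a Python sketch that tallies hypothetical <seg type="..."> elements across a folder of page files; the element name, type values, and filenames are all invented for the example rather than taken from the S-GA schema:

import glob
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical setup: each encoded page saved as page_001.xml, page_002.xml, ...
# with thematic passages wrapped in <seg type="..."> using invented values such
# as "monstrosity" or "body-part". None of this is real S-GA markup.
counts = Counter()
for path in sorted(glob.glob("page_*.xml")):
    root = ET.parse(path).getroot()
    for seg in root.iter("seg"):
        counts[seg.get("type", "untyped")] += 1

for seg_type, total in counts.most_common():
    print(seg_type, total)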

* From “The Renaissance and the End of Editing”, in Palimpsest: Textual Theory and the Humanities, ed. George Bornstein and Ralph G. Williams (1993), 121-50.

Team 2: Caleb Williams + The Matrix (Amanda Visconti, 13 April 2012)

Seven thematic connections between Caleb Williams and The Matrix, identified by me, Allison Wyss, and Phil Stewart:

1. It takes some uncanniness around a cultural ideal for the interpellated to finally recognize their interpellation. The Matrix: Neo has a strong lesson in the shallowness of visual facades when the perfect “Lady in Red” turns into an Agent; Caleb Williams: Falkland, a chivalric, moral, and intellectual exemplar to Laura, Collins, and almost everyone else in the book, must be revealed as a murderer for Caleb to begin to recognize “things as they are”. That’s right: Falkland is the Lady in Red.

2. As with Burke, it’s possible to recognize the illusions around us yet continue to embrace them. The Matrix: Cypher is part of the rebellion, yet willingly returns to the ignorance of enjoying a good steak; Caleb Williams: Gines experiences the democracy of the robber band, yet is content to return to the flip-side world of social inequality through the robber-snatching trade in order to survive comfortably and with emotional satisfaction.

3. You can be given the necessary knowledge to break free (or at least recognize) the ideological system surrounding you, but that doesn’t mean you’ll be ready or able to use that information. The Matrix: Neo interprets the Oracle’s words to mean he is not the One and that there thus may be no hope of breaking the A.I.’s dominance; Caleb Williams: after telling Mr. Falkland’s narrative at the beginning of the novel, Collins and Caleb are at the same level of information concerning Falkland (e.g. both able to observe him when he judges the trial that upsets him so), yet only Caleb puts recent and past events together to realize Falkland’s guilt.

4. Sometimes, a certain innocence within the system that ensnares you allows you to temporarily triumph over it. The Matrix: the young child who is able to free his mind can bend spoons (yet ultimately–at least in the first movie–he is a failed Potential, not the One who can entirely break free of the Matrix); Caleb Williams: Emily’s good nature allows her to positively interpret Tyrrel’s tyrannies for some time, rendering her experience of the world temporarily more rosy.

5. Ideologies function by oppressing and keeping ignorant a large under-class for the benefit of their masters. The Matrix‘s fields of plugged-in battery-people and the nine-million-odd poorer classes of Caleb Williams‘ time are in the same subjected position, with masters who fear their revolt and require their submission to keep things running well for the privileged group.

6. Consumable media objects (in each time period, objects that would fall into the “new media” category of literature) have a power far beyond their size to unfold new virtual worlds. The Matrix: the computer disks that run the non-Matrix training programs; Caleb Williams: Gines’ ballad pamphlet, the hypothetical excuse in Falkland’s trunk.

7. Ideologies place undue stress on small wrongs in order to distract their subjects from the big con of their illusions. The Matrix: The A.I. finds that humans experience a simulation that contains wrongs and sorrows as more “real” (and thus distracting from the Matrix’s unreality) than a Paradise; Caleb Williams: the government focuses on incarcerating those marked by reputation as criminals, and thus the populace is hungry for stories like that of Kit Williams but ignores the larger social evils around them.

Caleb Williams: Red Pill, Blue Pill? (Amanda Visconti, 12 April 2012)

A Beat Take on Blake (Amanda Visconti, 7 April 2012)

The Allen Ginsberg Project blog has started posting transcripts of the beat poet’s lectures on Blake’s Book of Urizen from a 1978 seminar on Blake. I’ve found the second and third lectures to be the most worth reading:

(found via “The Cynic Sang” Blake Archive blog, which has posts on both Blake and the technical work of the Archive.)

Useful prosthetics, pretty metaphors? (and more on DH tools) (Amanda Visconti, 23 March 2012)

“Metaphors will be called home for good. There will be no more likeness, only identity.”

Shelley Jackson, Patchwork Girl

Some interrelated thoughts on cyborgs/metaphors/prosthetics. Shelley Jackson’s Patchwork Girl quotes Shakespeare’s Sonnet 130 (“my mistress’ eyes are nothing like the sun”), bringing into a work already quite aware of the mimicries between body and text the idea of blason, the style of poetry that praises the loved one by piecing out individual parts of her anatomy through metaphor (“she goes on”). Ever since I encountered the etching above, with its parodic response to such blason conceits as eyes like suns darting rays, cheeks like roses, and teeth like pearls, I’ve been unable to read that form of poetry as intended (i.e. describing a harmonious whole); the etching questions whether we can fashion the ideal from constituent ideals. Victor Frankenstein describes his Creature as an almost-functional blason figure (“I had selected his features as beautiful”), but precedes this claim by admitting another qualifier on his choices for materials: “His limbs were in proportion”. As with the etching, the Creature’s monstrosity comes partly from the failure of these parts, beautiful and proportionate as they may be, to coexist.

I’ve been thinking about extending these questions of the harmony and juxtaposition of parts of a whole (text/body) to prosthetics, whether these prosthetics are more metaphorical (e.g. prosthetics of memory) or physical additions like our cyborg mobile devices. When my group was developing a Cyborg’s Definition of “Women”, we identified “that species” as a group that faced extinction after failing to make use of certain prosthetics/tools; for Wollstonecraft, the tool in question was education. Success through the use of prosthetics was a mark of cyborghood.

With the addition of prosthetics, we’re facing (as with blason) the juxtaposition of disparate parts–except in this case, the metaphors by which we’re extending our bodies aren’t pulling us apart into unbalanced monsters. Certainly they can go either way, but I’m seeing a pattern where metaphors applied onto figures can create monsters like the one in the etching, and metaphors growing out of or chosen by a figure have greater harmony and utility. Perhaps prosthetics are a way of marking these piece-making bodily metaphors not as even more-idealized (and thus less utilizable?) objects, but as tools defined by their individual uses and qualities? I’d be interested in listing and comparing the Creature’s bodily parts with the Patchwork Girl’s; given their gender difference, it’s interesting to see the Creature’s parts as typical of blason inutility (lustrous black hair!) while the Patchwork Girl’s parts are defined (sometimes indirectly via anecdote) by their abilities to dance, dissemble, act.

Read on for more on distant reading…

DH Tools. I’d intended to write my next blog post as a follow-up on my discussion of DH tools, using a few of these tools to ask questions about Frankenstein while pointing out the limits and specifics of what the digital tools’ answers actually say. I didn’t get around to that… but I thought I’d share some tips for distant reading work I’ve used with my English 295 students:

  1. Look for outliers. Is there anything in the visualization that doesn’t look the way you expected? Or, if everything looks the way you expected, what does that say about the text?
  2. Can you imagine a visualization of the text that you’d like to make, but can’t find an appropriate tool to do so? Describe this imagined tool and what you would expect to discover about your text with it. Why do you think such a tool doesn’t exist yet? What would a computer need to be able to do–and if computers would need to do something “more human” than they can now, can you think of a way to train a computer to achieve that? (Think about topic modeling and sentiment analysis.)
  3. It’s okay to ask questions with no previous expectations, questions based on hunches of what you might see, or questions where there’s only a tiny possibility of an interesting result but you want to check for it anyway. When I was thinking about demoing how to work with the TextVoyeur tool, for example, I was planning on tracking the incidence of references to different body parts–face, arms, eyes–throughout Frankenstein, and trying to make sense of how these different terms were distributed throughout the novel. In a book concerned with the manufacture of bodies, would a distant reading show us that the placement of references to parts of the body reflected any deeper meanings, e.g. might we see more references to certain areas of the body grouped in areas of the novel with corresponding emphases on display, observation, and action? A correlation in the frequency and placement of anatomical terms with Frankenstein’s narrative structure felt unlikely (so unlikely that I haven’t run my test yet, and I’m not saving the idea for a paper!), but if it had been lurking in Shelley’s writing choices, TextVoyeur would have made such a pattern more visible.
  4. Think carefully about what a visualization means. For example, I wanted to make a visualization of Charles Dickens’ David Copperfield; the protagonist is given a name change about halfway through the novel, and I wanted to track what other changes co-occurred with this name change and see whether there was a pattern in the characters who used the new name over those who stuck with the old name. This problem is a great candidate for a graph showing name frequency (“David”, the old name, versus “Trotwood”, the new name). Using the TextVoyeur tool, I was able to quickly create graphs of when the two names occurred through the novel:
    (Note that TextVoyeur lets you overlay multiple word frequency graphs, something I didn’t realize a year ago when I made these images. I’d have run a new graph for this post, but both instances of TextVoyeur/Voyant have been non-functional for the past two days, so be aware that the y-axes are slightly different in the two graphs, and that TextVoyeur is a fantastic tool but sometimes unavailable when you’re hoping to use it.) There are issues, of course, with just accepting a visualization made by dropping the text into a distant reading tool. “David” was both the protagonist’s name and the name of his father; some characters used nicknames for David instead of his given name, etc.: these issues meant that I needed to be careful about what I could claim when reading a visualization of the protagonist’s naming. If I were marking up a transcription of David Copperfield for use in a project concerned with questions of naming and appellation, I’d want to consider tags that let me search for and count names by their speaker, meaning (is a diminutive used lovingly or condescendingly?), and other nuances. I’d also want to read the data I’m focusing on against other, similar data; for example, do other names (e.g. Betsy, Agnes) also occur less frequently in the second half of the book, perhaps because of changes in the monologue style or the physical location of the protagonist? A distant reading visualization should always be accompanied by a careful description of what it does and doesn’t show.
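When TextVoyeur is down, a rough version of this kind of name-frequency graph can be cobbled together in Python. The sketch below assumes a plain-text David Copperfield saved under a hypothetical filename and deliberately ignores the nickname and speaker problems discussed above:

import re

# Hypothetical filename; a Project Gutenberg plain-text copy would work here.
with open("david_copperfield.txt", encoding="utf-8") as f:
    words = re.findall(r"[A-Za-z']+", f.read())

SEGMENTS = 20  # divide the novel into twenty equal chunks, like a coarse trend line
size = max(1, len(words) // SEGMENTS)
chunks = [words[i:i + size] for i in range(0, len(words), size)]

for name in ("David", "Trotwood"):
    counts = [chunk.count(name) for chunk in chunks]
    print(name, counts)  # crude per-segment counts; plot them with matplotlib if you like
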
Dissect and Rebuild: Reimagining Frankenstein as E-Lit (Amanda Visconti, 1 March 2012)

Photo of Mac Classic wearing "Mac cozy" and running Deena Larsen's Marble Springs.

For our group teaching tomorrow, Kristin Gray, Kathryn Skutlin, and I will begin class by demoing various forms of e-lit, followed by an e-lit exercise where you’ll re-imagine a pivotal scene of Frankenstein through the possibilities of e-lit (we’ll pass out handouts in class, but if you want a digital copy you can download this or see the assignment on my personal blog).

E-lit mentioned in class:

  1. Both Michael Joyce’s afternoon and Deena Larsen’s Marble Springs can be purchased from Eastgate Systems. Or… make an appointment with MITH to read these and more e-lit on the original hardware, or visit the Deena Larsen Collection site to read more about Larsen’s work or watch a short video demo of Marble Springs.
  2. Larsen’s “Fun da mentals: Rhetorical Devices for Electronic Literature” is a fantastic site teaching basic approaches to writing e-lit.
  3. Caitlin Fisher’s These Waves of Girls is a 2001 Flash-based work.
  4. The Urban 30 is an example of a “fictional blog” based on WordPress (just like this site–well, the WordPress part!); in this case, multiple writers use the blog community to write as fictional characters. Urban 30 is particularly interesting because it tells a superhero story, a genre that was born and lived for a long time solely in comic books.
  5. The 21 Steps is a story told through Google Maps. Notice how the platform reinforces how important location is to the story.
  6. “Haircut” uses YouTube to create a choose-your-own-adventure video. If you’re curious how to do this, check out this tutorial on creating annotated YouTube videos.
  7. Stories created using text messages and Twitter have taken off; “mobile phone novels” are especially popular in Japan, where this article claims they’ve “become so successful that they accounted for half of the ten best-selling novels in 2007.” This short article gives a sense of the kinds of stories people write via Twitter.
  8. In addition to individual-authored Twitter stories, large groups of strangers have used this platform for communal writing. The LA Flood Project was an event that encouraged Twitter users to tweet (with an #laflood hashtag) as if they were experiencing an apocalyptic flood in L.A. This page gives the brief timeline participants were supposed to follow; you can search Twitter for #laflood to see the story unfold, though it was most exciting in real-time (the latest tweets are just people rehashing the week-long event).
  9. And finally, the Electronic Literature Organization (ELO) hosts the Electronic Literature Collection 1 and Collection 2, which display a wide variety of approaches to electronic writing.
Digitally Dissecting the Anatomy of “Frankenstein”: Part One (Amanda Visconti, 24 February 2012)

A frequency chart of the terms "human" and "monster" in Frankenstein.

A two-part blog post: the first post will cover grabbing and analyzing Twitter and other textual data and working with them in Wordle and TextVoyeur, and the second will use these tools to consider the function of body parts in Mary Shelley’s Frankenstein.


Get your data!
Text? If you’re producing a project you want other people to see, you’d want to locate–or scan/key-in yourself–a reliable edition of your text. For the purposes of this course, since I’m just asking a quick question for my own purposes, I’ll use the dubious (what edition? what errors?) Project Gutenberg etext I grabbed from this page. Don’t forget to remove the extra licensing information from the beginning and end!
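
That licensing boilerplate can also be stripped with a few lines of Python; this sketch assumes the etext uses the usual "*** START OF ..." and "*** END OF ..." marker lines (the exact wording varies between Gutenberg files, so check your copy) and a hypothetical filename:

def strip_gutenberg_boilerplate(text):
    """Keep only the lines between the '*** START OF' and '*** END OF' markers.
    Marker wording varies between etexts, so treat this as a rough heuristic."""
    lines = text.splitlines()
    start = next((i for i, line in enumerate(lines) if "*** START OF" in line), None)
    end = next((i for i, line in enumerate(lines) if "*** END OF" in line), None)
    if start is not None and end is not None and start < end:
        lines = lines[start + 1:end]
    return "\n".join(lines)

# Hypothetical filename for the etext grabbed above.
with open("frankenstein.txt", encoding="utf-8") as f:
    clean_text = strip_gutenberg_boilerplate(f.read())
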
Twitter? Finding old tweets from your individual account might not be difficult (especially if you don’t tweet hourly), but Twitter only saves hashtag searches for around ten days (there are some third-party sites such as Topsy that may have older tweets, but I’ve found these to be unreliable). The best policy is to start archiving once you know you’ve got a hashtag you’re interested in.
1. There are a bunch of ways to archive tweets, but I think the easiest is to set up an RSS feed through something like Google Reader. You can get the feed URL for any Twitter search by replacing “hashtag” in the following string with the search term of your choice (e.g. technoro):

https://search.twitter.com/search.atom?q=%23hashtag

Once you set up your feed reader as subscribed to this URL, you’ll have a feed that updates with all new tweets using the hashtag. You can export these at any time you’d like to work with them in a visualization tool; place any feeds you want to export into a folder (visit Google Reader’s settings > Folders), then enter the following URL into your address bar (replacing “folder” with your folder name):

https://www.google.com/reader/public/subscriptions/user/-/label/folder

This will bring you to an XML file of your feed that you can save to your computer and edit (a short parsing sketch follows these steps).
2. Too much work? You can use a service like SearchHash, which will let you input a hashtag and download a CSV file (spreadsheet); this might be easier to work with if you’re unfamiliar with RSS feeds and/or XML, but you can only trust such services to cover about the last ten days of tweets.
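Once you have the exported XML from step 1, a short Python script can pull the tweet text out of the Atom entries into a plain-text corpus; this sketch assumes a saved file using the standard Atom namespace and a hypothetical filename, and it would likely need adjusting for whatever your reader actually exports:

import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # the standard Atom namespace

# Hypothetical filename for the feed XML exported from your reader.
root = ET.parse("laflood_tweets.xml").getroot()
tweets = [entry.findtext(ATOM + "title", default="")
          for entry in root.iter(ATOM + "entry")]

with open("laflood_corpus.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(tweets))  # one tweet per line, ready for Wordle or TextVoyeur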

Get out your tools!
1. Wordle is one of the fastest and easiest tools for checking out a text: you paste in your text or a link to a webpage, and it produces a word frequency cloud (the frequency with which a word appears in your text corresponds to how large the word appears in the cloud). Wordle lets you do a few simple things via the drop-down menu on the top of the visualization:

  • remove stop-words (stop-words are words that appear frequently in texts but usually have little content associated with them–think things like articles and prepositions. If you’ve ever tried to make a word frequency cloud and seen some huge “THE” and “AN” type words, you need to filter your text with a stop-word list),
  • change the look (color, font, orientation of text), and
  • reduce the number of words shown (Wordle only shows the top x words appearing in a text).

Wordle is a simple way to get a look at the words being used in a text; you can get a quick sense of diction, preoccupations, and patterns. However, it doesn’t let you make any sort of strong argument beyond statements about what words are frequent; with text analysis, you always want to be able to “drill down” from your distant reading to the individual words or phrases or moments that make up the macro view you’re seeing, and Wordle doesn’t let you do that.
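Under the hood, what Wordle computes is just a stop-word-filtered frequency count, which you can reproduce in a few lines of Python if you ever want the raw numbers behind the cloud (the stop-word list here is a tiny illustrative stub, not a real one):

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "that", "it", "was"}  # stub list

def top_words(text, n=25):
    """Return the n most frequent non-stop-words, Wordle-style."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS).most_common(n)

# e.g. top_words(open("frankenstein.txt", encoding="utf-8").read())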

2. Luckily, there are free, web-based tools that let you go beyond Wordle’s abilities fairly easily. TextVoyeur* (aka Voyant) is really meant for comparing documents among a large corpus of texts, but you can use it to look at a few or even a single text. Voyeur maintains a great tutorial here that I recommend you visit to understand where different features are on the page, but here’s an overview of things you might want to do with it:

  • A word frequency cloud (like Wordle), but with better stop-words. This cloud should appear in the upper-left corner; each pane’s settings appear when you click the small gear icon in its upper-right corner, and clicking the gear in the cloud pane lets you turn on a stop-word list of your choice.
  • A list of words in frequency order (click “words in the entire corpus” in the bar at the bottom-left; again, you can filter out stop-words). You can search in this pane for interesting words (e.g. “monster”); then, check the box next to the word, and in the pane that appears use the heart icon to add the word to the favorites list. You can add several terms to your favorites this way (e.g. monster, human, angel), then compare these favorites in the “word trends” pane, which will chart the frequency of these words’ appearances throughout your text.
  • Drill down. “Keywords in context” lets you see where a given word appears in the novel. “Collocates” are words that tend to appear near other specific words. Collocation can help you understand a text’s rhetoric; is the word “monster” often near the word “abnormal” or “misunderstood”? TextVoyeur lets you set how near a given search term you want to look for collocates (e.g. one word on either side of your search term? fifteen words?). If you’re interested in a word with multiple meanings or that appears within larger words (e.g. the word count for “inhuman” may include the count for “human”; you might want to see whether “Frankenstein” is being used to refer to Victor or another family member), you might want to drill down into these examples and see how many of the examples feeding into the count actually support your argument.
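Collocation is also easy to approximate by hand if you want to double-check what a tool is showing you; this Python sketch counts the words that fall within a fixed window of a search term (the window size and the example filename are placeholders):

import re
from collections import Counter

def collocates(text, term, window=5):
    """Count the words appearing within `window` words of `term`, on either side."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter()
    for i, word in enumerate(words):
        if word == term:
            hits.update(words[max(0, i - window):i])   # words before the hit
            hits.update(words[i + 1:i + 1 + window])   # words after the hit
    return hits

# e.g. collocates(open("frankenstein.txt", encoding="utf-8").read(), "monster").most_common(15)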

3. The internet is full of free tools for working with texts, many with more specific foci (e.g. tools that attempt to determine the gender of a text’s author). Two places to start finding more tools:

I’ll try to publish the second part of this blog post later this week, where I’ll tackle a question about Frankenstein using some of these tools and also address some of these tools’ shortcomings (i.e. things you can’t say when pointing at these visualizations).

*Note that TextVoyeur was experiencing some interface issues today (2/24), which meant that we didn’t demo it at the DH Bootcamp. If you’re having trouble using this tool, those issues might not have been solved yet.
