Faculty Fellows – Maryland Institute for Technology in the Humanities
https://mith.umd.edu

Hester Baer named MITH Fellow (25 September 2014)
https://mith.umd.edu/hester-baer-named-mith-fellow/

MITH is pleased to announce that Hester Baer, Vambery Distinguished Professor of Comparative Studies for the 2014-15 academic year, has also been named a MITH Fellow for the same period.  During her fellowship year, Hester will be working on her project, Digital Feminisms: Transnational Activism in German Protest Cultures.

Hester Baer is Associate Professor of German at the University of Maryland, where she also serves as a core faculty member in the Film Studies program. Baer’s research interests focus on gender and sexuality in film and media, historical and contemporary feminisms, and German literature and culture in the 21st century. She is the author of Dismantling the Dream Factory: Gender, German Cinema, and the Postwar Quest for a New Film Language (2009); the guest editor of a special issue of the journal Studies in 20th & 21st Century Literature entitled “Contemporary Women’s Writing and the Return of Feminism in Germany” (2011); and the co-editor, with Alexandra Merley Hill, of the volume German Women’s Writing in the 21st Century (forthcoming in 2014). She is currently working on a new monograph, German Cinema in the Age of Neoliberalism, which rethinks the history of German cinema from 1980 to 2010. Since 2012, Baer has served as President of the Coalition of Women in German.

Digital Feminisms: Transnational Activism in German Protest Cultures examines the reconfigurations of feminist activism in the context of rapid technological change, analyzing how the increased use of digital media has altered, influenced, and shaped feminist politics in the twenty-first century. Addressing the role of digital media in the transnational flow of feminist ideas, politics, and protesters, the project focuses on Germany in order to examine the way transnational feminist activism intersects with the national configuration of feminist political work, and how feminist activism may in turn transform emergent digital cultures. Bringing together scholars from the US, Canada, Germany, and the UK, this transdisciplinary, collaborative project engages digital media not only as its scholarly focus but also as a key component of the project’s methodology. Combining digital humanities paradigms with a conventional academic publishing project, Digital Feminisms seeks to develop a new research model that reaches multiple constituencies, while also reflecting critically on the subject of transnational feminist activism and digital culture through its presentation formats.

MITH looks forward to opportunities for conversation and exchange with Hester over the coming academic year. Please join us in congratulating her on the award!

Hester’s MITH Staff Page

New Version of Digital Mishnah Demo (25 February 2013)
https://mith.umd.edu/new-version-of-digital-mishnah-demo/

We have released a new version of the demo. Much of the change is in styling and branding, but there are also new texts, new views, and a new naming convention.

New texts. Gradually, I am replacing the sample files, which covered only Bava Metsi’a ch. 2, with transcriptions covering all of tractate Neziqin (the Bavot). Currently, this applies to the Maimonides autograph, Paris BNF Héb. 328-329, and the Naples editio princeps (with the marginalia from the copy in the National Library of Israel). Work is ongoing on other witnesses. Some new Genizah fragments have been added, and in the next release I hope to be able to show samples of virtually joined manuscripts that can be broken out into their individual fragments.

New views. Users can now browse through documents page by page or column by column, and they can see witnesses chunked by chapter in a compact view.

New naming convention. Sigla for the manuscripts will now be based on the recent Thesaurus of Talmudic Manuscripts; sigla for print editions will be based on serial numbers in a similar format. We are experimenting with a convention for sigla that is slightly more informative, so that it will be possible to tell whether a given witness includes the Mishnah alone or with a commentary in Hebrew or Arabic, and perhaps other data such as region and date of hand. (This last will require expert typing of the manuscripts.)

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on February 23, 2013.

Asking Questions of Lots of Text with Weka (18 December 2012)
https://mith.umd.edu/asking-questions-of-lots-of-text-with-weka/

Adrian Hamins-Puertolas and Adam Elrafei are students in Team POLITIC, an undergraduate research team in the University of Maryland’s Gemstone research-focused honors college, mentored by MITH Faculty Fellow Peter Mallios.

Our undergraduate research team uses newly developed technology to understand and quantify how American audiences received Russian authors in the early 1920s. One of the tools we’re exploring is Weka, a collection of machine-learning algorithms that can be used to mine datasets. MITH has helped us design and construct our database, which contains thousands of articles about Russian authors featured in American literary magazines of the 1920s. Each article in the database is associated with values indicating the frequency of words in its text, so we can trace how often a single word (a unigram) like “revolution” appears throughout our articles, or how often two words appear next to each other (a bigram), such as “Russian revolution”. Both features give us ways to describe and quantify word frequency and proximity across the dataset.
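
As a concrete (and hypothetical) illustration of these frequency features, the sketch below counts unigrams and bigrams using naive whitespace tokenization; the class and method names are invented, and the real database was built with more careful preprocessing:

```java
import java.util.HashMap;
import java.util.Map;

public class NgramCounts {
    // Count n-grams (n = 1 for unigrams, n = 2 for bigrams) in a text.
    public static Map<String, Integer> count(String text, int n) {
        String[] tokens = text.toLowerCase().split("\\s+");
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + n <= tokens.length; i++) {
            StringBuilder gram = new StringBuilder(tokens[i]);
            for (int j = 1; j < n; j++) {
                gram.append(' ').append(tokens[i + j]);
            }
            counts.merge(gram.toString(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String article = "the russian revolution shook the russian novel";
        System.out.println(count(article, 1).get("russian"));            // 2
        System.out.println(count(article, 2).get("russian revolution")); // 1
    }
}
```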

MITH’s Travis Brown demonstrated how we could use Weka to train a machine-learning classifier that assigns labels to articles in the dataset that we had never read. To test this, we created a smaller training dataset of just 150 articles, a number small enough that we could actually read the entire texts and describe each one manually by answering questions ranging from “Is a given literary author a subject of debate in this article?” to “Is radical politics an issue in the article?”. Given these measures, Weka can classify every other article in our dataset with some degree of accuracy.

Weka provided us with a decision tree that classifies answers to the question “Is literary style and artistry an issue in this article?” correctly for approximately 67% of our training set. This success rate should improve as we add new measures for classifying and quantifying the text. One direction is to use MALLET—“an integrated collection of Java code useful for statistical natural language processing [and] document classification”—to create topics: groups of words that MALLET finds to be significantly thematically related. Topic modeling is fascinating because a preliminary examination of generated topics has already provided us with a variety of distinct themes and vocabularies appearing in our dataset, ranging from religion to specific Russian authors. We’re in the process of running Weka’s classification system on the generated topics that include religious language in order to answer another of our questions: “Is religion an issue in the article?”
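
For readers curious what topic modeling looks like in code, here is a rough sketch modeled on MALLET’s developer documentation. The input file name, the number of topics, and the iteration count are placeholder assumptions, not settings from our project:

```java
import cc.mallet.pipe.*;
import cc.mallet.pipe.iterator.CsvIterator;
import cc.mallet.topics.ParallelTopicModel;
import cc.mallet.types.InstanceList;

import java.io.FileReader;
import java.util.ArrayList;
import java.util.regex.Pattern;

public class TopicSketch {
    public static void main(String[] args) throws Exception {
        // Standard MALLET import pipeline: lowercase, tokenize, remove
        // stopwords, and map tokens to feature indices.
        ArrayList<Pipe> pipes = new ArrayList<>();
        pipes.add(new CharSequenceLowercase());
        pipes.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}+")));
        pipes.add(new TokenSequenceRemoveStopwords());
        pipes.add(new TokenSequence2FeatureSequence());

        InstanceList instances = new InstanceList(new SerialPipes(pipes));
        // One article per line: name <tab> label <tab> text ("articles.txt" is a placeholder).
        instances.addThruPipe(new CsvIterator(new FileReader("articles.txt"),
                Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"), 3, 2, 1));

        // 20 topics is an arbitrary starting point; 1.0 and 0.01 are
        // MALLET's conventional alpha/beta priors.
        ParallelTopicModel model = new ParallelTopicModel(20, 1.0, 0.01);
        model.addInstances(instances);
        model.setNumThreads(2);
        model.setNumIterations(1000);
        model.estimate();

        // Print the top words per topic to eyeball themes (religion,
        // particular authors, and so on).
        System.out.println(model.displayTopWords(10, false));
    }
}
```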

Our current Weka experiments, using a smaller training set of 46 articles, have already yielded promising results. For example, when using the J48 decision tree algorithm on our textual data filtered into unigrams, Weka correctly classifies 76% of our documents when answering the “Is politics an issue?” question. If we filter our data into both unigrams and bigrams, the correct classification rate decreases to 67%. However, if we filter our data into unigrams and apply a stemmer (which reduces words to their root forms, ignoring prefixes and suffixes), our correct classification rate increases to 77%.
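
A minimal sketch of this kind of run against Weka’s Java API follows. The ARFF file name and its layout (one string attribute holding the article text, plus our yes/no class attribute) are assumptions for illustration; raising setNGramMaxSize to 2 would reproduce the unigram-plus-bigram variant described above:

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.core.stemmers.SnowballStemmer;
import weka.core.tokenizers.NGramTokenizer;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class PoliticsClassifier {
    public static void main(String[] args) throws Exception {
        // "training.arff" is a placeholder: one string attribute per article
        // plus the hand-annotated yes/no answer as the class attribute.
        Instances raw = new DataSource("training.arff").getDataSet();
        raw.setClassIndex(raw.numAttributes() - 1);

        // Unigram features; raising the max size to 2 adds bigrams.
        NGramTokenizer tokenizer = new NGramTokenizer();
        tokenizer.setNGramMinSize(1);
        tokenizer.setNGramMaxSize(1);

        StringToWordVector filter = new StringToWordVector();
        filter.setTokenizer(tokenizer);
        filter.setStemmer(new SnowballStemmer()); // root forms, per the stemming experiment
        filter.setLowerCaseTokens(true);
        filter.setInputFormat(raw);
        Instances vectors = Filter.useFilter(raw, filter);

        // Ten-fold cross-validation of a J48 decision tree.
        J48 tree = new J48();
        Evaluation eval = new Evaluation(vectors);
        eval.crossValidateModel(tree, vectors, 10, new Random(1));
        System.out.printf("Correctly classified: %.1f%%%n", eval.pctCorrect());
    }
}
```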

We are looking forward to expanding our experiments to apply to an even larger subset of our data, as we continue to learn more about natural language processing tools in the coming weeks.

Hayim Lapin: “A Digital Edition of a Classical Hebrew Text: The Digital Mishnah Project” (28 November 2012)
https://mith.umd.edu/dialogues/hayim-lapin-a-digital-edition-of-a-classical-hebrew-text-the-digital-mishnah-project/

How does one create a digital edition of a classical text, and what do we learn from it?

The Mishnah is in many ways a foundational text for contemporary Jews, and continues to be part of the curriculum of formal and informal study. A legal compendium from about 200 CE, the Mishnah is also significant for understanding late-second-century Jews in Palestine. Yet no critical edition of the text exists. The Digital Mishnah Project aims to produce a born-digital edition that will take into account the full array of manuscript and other evidence, and automate the process of comparing variant readings and assessing the relationships between manuscripts.

Conceived as a tool rather than an edition, the Project will certainly make it easier for those who wish to track the text back to its earliest form to do so. However, it is becoming increasingly clear that it also has a contribution to make to the study of medieval book culture. As a tool, its design should draw on, but also further develop, the resources available for other such editions, in any language.

The presentation will feature a demo, followed by a discussion of some preliminary observations.

Answering the Mail: Digital Mishnah Project Update (15 November 2012)
https://mith.umd.edu/answering-the-mail-digital-mishnah-project-update/

I had promised to respond to comments on the Digital Mishnah demo, so, at long last, here goes.

  1. Request for greater highlighting of collation options (Tim Finney). In fact, CollateX has several alignment methods built into its libraries that can be utilized. This is outside what I feel comfortable talking about (I don’t really read Java … yet), but there is no reason we can’t allow users to select methods and see what yields the best results.
  2. Don’t build unnecessary mechanisms (Desmond Schmidt). Well taken. As a non-programmer, I’m not always the best judge of what is difficult or simple to build. The point though was to allow manual error-correction of the alignment by adding or deleting cells in a table row. As for the order of witnesses, my own sense is that it is extremely useful for visually examining groupings of manuscripts.
  3. Apparatus unnecessary (Desmond Schmidt), or unwieldy (Daniel Stoekl, Naftali Cohn). Well, Stoekl, a potential user, suggests that the print-type apparatus is useful. It is a way of compactly summarizing data. My include-everything model is in fact unwieldy, and the suggestion to leave out readings that are identical with the base text would simplify the situation. Just how text families can be generated and then used in the apparatus is a discussion for a later day, but it is definitely a desideratum.
  4. Additional textual detail; handling absence of evidence (Daniel Stoekl, Naftali Cohn). These are important points. For collation, I made the decision to present a simplified text, but obviously this will have to be made more complex. I don’t think additional tagging is necessary in most cases; different processing is. For additions and corrections in a second hand, we effectively generate an additional witness, but ignore the readings of that secondary witness except where they differ from the primary witness. For dealing with highly lacunose texts, the method will be to have a reference text that includes individual addressing for each word in the Mishnah; the tagging in the lacunose text then aligns its text and lacunae with the reference text. At a minimum, this allows us to identify “gaps” to be ignored and “gaps” to be processed (see the sketch after this list). A reference text of the Bavot exists, and I am working on extending it further, but we are still working on the pointing mechanism.
  5. Search functionality (Naftali Cohn). Yes, but what? Ironically, I can envision complex searches (a particular abbreviation in texts in Sephardic hands) more easily than simple searches. What should a search for “Rabbi Meir” or “Prohibited” return?
  6. Other matters (Naftali Cohn). My December and January task is to start working on page-by-page and chapter-by-chapter views, especially now that my text sample includes extended runs of text. I’d also like to be able to generate apparatus or alignments for a whole chapter.
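
To make the gap-handling in point 4 concrete, here is a hypothetical sketch; the word addresses and the data layout are invented for illustration and are not the project’s actual encoding:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GapSketch {
    enum Status { READING, LACUNA }

    public static void main(String[] args) {
        // Every word in the reference text has an individual address.
        String[] reference = {"BM.2.1.w1", "BM.2.1.w2", "BM.2.1.w3", "BM.2.1.w4"};

        // A witness records a reading or an explicit lacuna at each address;
        // here w2 is lost to physical damage and w4 is simply absent.
        Map<String, Status> witness = new LinkedHashMap<>();
        witness.put("BM.2.1.w1", Status.READING);
        witness.put("BM.2.1.w2", Status.LACUNA);
        witness.put("BM.2.1.w3", Status.READING);

        for (String address : reference) {
            Status s = witness.get(address);
            if (s == Status.LACUNA) {
                System.out.println(address + ": gap to be ignored (physical lacuna)");
            } else if (s == null) {
                System.out.println(address + ": gap to be processed (possible omission)");
            } else {
                System.out.println(address + ": reading present");
            }
        }
    }
}
```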

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on November 13th, 2012.

An Undergraduate View of Data Mining with WEKA (5 November 2012)
https://mith.umd.edu/undergradweka/

Manpreet Khural is an undergraduate member of the Gemstone POLITIC undergraduate research team, led by MITH Faculty Fellow Peter Mallios.

As we, Team POLITIC of Gemstone, make progress in using data mining tools such as Weka, it becomes more evident that such a technological approach provides a goldmine of new information that would otherwise be impossible to obtain. We have been working to train Weka to answer a set of questions in which we are interested. To do so, we must first provide it with data from which it can learn, which requires manually annotating article documents. It is in doing this that we see the potential of data mining technology.

That potential lies in the absence of human learning biases. In order to provide Weka with the most accurate learning data set, we have made strict guidelines for how we answer the questions. Even with these guidelines, it is apparent that without strenuous personal effort, our answers will always carry certain biases. Human opinion is a transient quantity, which makes it difficult to apply a scientific approach to the analysis of texts. We build associations every single day, making it impossible to maintain a constant mindset and realistically answer these questions without error.

Data mining, on the other hand, has a much more objective learning process. It makes connections solely on the basis of the patterns that the data sets contain. These patterns offer an entirely new insight into texts because they are based on the use of language (what is on the page) rather than on the ideas a reader infers from prior personal associations. Even though the training process can be lengthy, the applications for data mining seem endless; without such technology, we would have to read and annotate every bit of text in our data set ourselves. We foresee data mining as a way of gathering information on any topic for which a sufficient amount of text is available. For example, national defense agencies could use it to answer queries about changes in sentiment on whatever topics interest them. We believe that data mining will revolutionize many industries that aim to understand changes in public sentiment.

Drowning in Texts (24 October 2012)
https://mith.umd.edu/drowning-in-texts/

The comments on the Digital Mishnah demo deserve a full response (although the short response is: thank you and, in almost all cases, I agree). However, for this post I want to report on progress in getting and identifying texts for the extended demo. We have made the decision to build out from the sample chapter in Bava Metsi’a to all of tractate Neziqin (the “Bavot”), giving us a base text of 30 chapters and some 13,000-14,000 words to work with.

Michael Krupp has generously provided transcriptions of four orders for three manuscripts (Kaufmann, Parma de Rossi 138, and Cambridge Add. 470.1). The first is now available in an electronic version that is far better than what was available to Krupp when the transcriptions were made. The Cambridge transcription is presumably based on Lowe’s nineteenth-century edition of the manuscript, and Cambridge University Library recently reported that the manuscript itself would be available online. (At least, that’s what the Genizah Unit said on Facebook on July 4.) So there is room for improving the texts, and resources with which to do so. This should make it possible to provide substantial blocks of text rather quickly. The problem is actually finding the time to encode the texts …

Meanwhile, with the participation of the Lieberman Institute, under the direction of Shamma Friedman and with the aid of Leor Jacoby, I am gradually filling out the corpus of texts available. I say gradually not because the Institute’s transcribers work slowly, but because our agreement is for them to provide transcriptions while I see to the conversion to XML.

Those in the “biz” know that Yad Izhak Ben-Zvi and the Friedberg Genizah Project recently published a three-volume Thesaurus of Talmudic Manuscripts, edited by Sussman. Its detailed information on joins makes it easier to prioritize fragments to transcribe. (It also leaves me feeling “scooped,” since my discoveries of joins were in most cases, possibly in all, anticipated by the Thesaurus, which was not yet available when I started working on this project.) On the basis of that catalog, the number of distinct shelfmarks for witnesses (once we include all the fragments of joined manuscripts where one or more fragments have text in the Bavot) runs to 200.

So, aside from wondering about next steps on the application that will drive the edition, I am drowning in texts. Happily, but drowning nonetheless.

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on October 20th, 2012.

Digital Mishnah: Live Demo (4 September 2012)
https://mith.umd.edu/digital-mishnah-live-demo/

I am pleased to say that, with a lot of work on a lot of people’s part, there is now a live demo of the Digital Mishnah Project. The demo is just that: a demonstration of possible functionalities. This post will outline some features that were always meant to be temporary, some newly planned or desired features, and then invite comments.

What will be changed

  • The selection of witnesses. Entering numerals is unwieldy. Ideally, users should be able to slide text “icons” around (as one does with a pivot table in Excel, for instance).
  • Output in browse functions. A single chapter was used for the demo version. Future versions will allow users to select specific chapters and/or specific manuscript pages and progress by page or chapter. Metadata should perhaps be hideable.
  • Output in collate functions. The demo groups output together; these are actually alternative functions.

Additional basic functionalities

  • Ability to download or print results.
  • Ability to compare longer texts (whole chapters).
  • Improved collation, and/or the ability to select alternative collation methods.

Desiderata

  • Statistical tools, such as multi-dimensional scaling and clustering, to group manuscripts and display results
  • Since there will inevitably be errors in collation, ability to correct alignment and re-run various operations
  • Dynamic synoptic view, in which two or more witnesses can be viewed in parallel columns, with the ability to highlight textual differences or other features.

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on August 30, 2012.

Digital Mishnah: Summer Update (24 July 2012)
https://mith.umd.edu/digital-mishnah-summer-update/

In addition to getting the demo ready to go live–it’s ready to go!–this summer’s agenda has been to add texts and reference material.

We now have two sets of reference data ready to implement. The heavy lifting for this was done by Atara Siegel, an undergraduate at Stern College, who worked for me for several weeks this summer. Atara prepared the lists and, for the newly expanded sample text (tractates Bava Qamma, Bava Metsi’a, and Bava Batra), also linked the relevant words in the reference text to the names list.

  • Personal Names. This list is based on the list of Tannaim in the Mishnah in Albeck, Mavo la-mishnah, cross-referenced with the relevant names from Stemberger-Strack, Introduction to the Talmud and Midrash.
  • Place Names. This list is based on three sources: B-Z Segal, Ha-geografya ba-mishnah, conveniently digitized here, cross-referenced with Tsafrir, et al., Tabula Imperii Romani: Iudaea-Palaestina, and G. Reeg, Die Ortsnamen Israels nach der rabbinischen Literatur. (Note: Map references are given according to the Survey of Israel coordinates; we will have to find alternatives for non-Palestine sites.)

In addition, we continue to add to the corpus of texts. The last of the planned witnesses for Bava Metsi’a chapter 2 (my initial sample text) will be done by the end of the summer, thanks to Bruce Roth, a graduate student at the Baltimore Hebrew Institute at Towson University, while student transcribers at Catholic University are preparing Genizah fragments. Working with the Lieberman Institute in Israel, I am preparing to add a number of witnesses covering all three Bavot. We are starting with the Maimonides autograph and the Paris manuscript (Bibliothèque nationale de France, Héb. 328-329).

I keep holding out hope that the state of the Naples first edition is good enough that one should be able to OCR the text, but my experiments thus far have been disappointing.

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on July 10th, 2012.

Almost Ready for Prime Time (25 May 2012)
https://mith.umd.edu/almost-ready-for-prime-time/

We now have two versions of the demo up and ready to run. Both allow a user to pull data from the witness files containing manuscript transcriptions, select texts to compare, run the texts through a version of CollateX, and then present the results as an alignment table (a “synopsis” or “partitur” in some text-critical dialects) and as a text with apparatus.
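
As a rough illustration of the collation step, the sketch below posts two toy witnesses to a CollateX web service using its documented JSON input format. The localhost URL (including the port) and the witness texts are placeholder assumptions that depend on how the service is deployed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CollateSketch {
    public static void main(String[] args) throws Exception {
        // Two toy witnesses in CollateX's JSON input format.
        String body = """
            {"witnesses": [
              {"id": "A", "content": "the quick brown fox"},
              {"id": "B", "content": "the brown fox"}
            ]}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7369/collate")) // assumed local deployment
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // a JSON alignment table
    }
}
```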

The second of these is still buggy (and the cause both of a couple of late nights and of the lateness of this post, for which I apologize heartily to the nice people at MITH), but it does a couple of additional things:

  • Prioritization. While the ability to generate all sorts of different apparatus is a desideratum, at present what we can do is choose the order in which results are presented, and, in the case of presenting a text with apparatus, the first text chosen becomes the base text for comparison.
  • Tokenizing. I am now able to tokenize in two steps: first with “rich” tokens that retain data about the individual words (e.g., abbreviations, which should be compared on the basis of their expanded text rather than of the abbreviation as written) as well as other data in the text (page breaks, etc.). From there we create “regularized” tokens; for now I have regularized the tokens by removing all yods and waws. Additional candidates include prepositions that are sometimes but not always attached in medieval Mishnah manuscripts (shel, e.g.), final aleph/heh, and final nun/mem. “Simple” tokens are passed to CollateX (or we allow CollateX to process “rich” tokens), and the resulting collation output is merged with the rich tokens. (A minimal sketch of this regularization step follows this list.)
  • Presentation. Because the “rich” tokens retain information about the witness, it is possible to generate a “text with apparatus” in which the base text is presented with formatting and contextual information that may be useful to the reader. (Disclaimer: here is a big bug: the XSLT that joins the two lists of tokens inserts the non-words (page breaks, etc.) in a position that is offset by one location. Any suggestions?)
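
Here is the promised minimal sketch of the regularization step, written in Java rather than the project’s actual XSLT; the token type and its field names are invented for illustration:

```java
public class TokenSketch {
    // A "rich" token keeps the word as written plus contextual data; the
    // regularized form used for comparison is derived from it.
    record RichToken(String written, String expanded, boolean abbreviation) {
        // Abbreviations are compared on their expanded text.
        String comparisonForm() {
            return abbreviation ? expanded : written;
        }

        // Strip every yod (U+05D9) and waw (U+05D5) to level plene spellings.
        String regularized() {
            return comparisonForm().replace("\u05D9", "").replace("\u05D5", "");
        }
    }

    public static void main(String[] args) {
        RichToken token = new RichToken("אומר", "אומר", false);
        System.out.println(token.regularized()); // prints אמר
    }
}
```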

Next up: modifying the demo to present multi-column synopses, and linking in Talmudic and Commentary citations.

Hayim Lapin is Robert H. Smith Professor of Jewish Studies and Professor in the Department of History at the University of Maryland. He currently is completing a faculty fellowship at MITH. This post originally appeared at Digital Mishnah on May 24th, 2012.
