TEI-C Elections 2020


Introduction

In 2020, TEI Members will hold an election to fill 5 open positions on the TEI Technical Council (4 for 3-year terms, 1 for a 2-year term) and 3 on the TEI Board of Directors (2 for 3-year terms, 1 for a 2-year term). We are also electing 1 new member to the TAPAS advisory board.

The following persons have been nominated and have agreed to stand as candidates for election to the TEI Technical Council, the TEI Board, and the TAPAS Advisory Board. They have all supplied a statement covering two aspects:
  1. a candidate statement in which they discuss their reasons for wishing to serve on the Board, the TAPAS Advisory Board, or the Technical Council and what their particular goals would be.
  2. a biographical description focusing on their education, training, research, etc., relevant to the TEI.

A Note on Voting

Voting will be conducted via the OpaVote website, which uses OpenSTV, a widely used open-source Single Transferable Vote (STV) program, for tabulation.
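
For readers unfamiliar with STV counting, the sketch below illustrates the general idea: ballots rank candidates, candidates reaching a quota are elected, their surplus votes are transferred fractionally, and the lowest-ranked candidate is excluded when no one reaches the quota. This is a simplified illustration only, not OpenSTV itself, which implements several STV variants with more elaborate surplus-transfer and tie-breaking rules; the function name stv_count is our own.

```python
# Simplified STV count (Gregory-style fractional surplus transfer).
# Illustrative only: OpenSTV/OpaVote offer several counting variants and
# more careful tie-breaking rules than this sketch.
from fractions import Fraction

def stv_count(ballots, seats):
    """ballots: list of candidate-name lists in order of preference."""
    candidates = sorted({c for b in ballots for c in b})
    # Each ballot carries a weight; surpluses are transferred by scaling weights.
    weighted = [[list(b), Fraction(1)] for b in ballots]
    quota = len(ballots) // (seats + 1) + 1        # Droop quota
    elected, excluded = [], set()

    def hopefuls():
        return [c for c in candidates if c not in elected and c not in excluded]

    while len(elected) < seats and hopefuls():
        if len(hopefuls()) + len(elected) <= seats:
            elected.extend(hopefuls())             # remaining candidates fill the seats
            break
        # Tally current first preferences among continuing candidates.
        tallies = {c: Fraction(0) for c in hopefuls()}
        for prefs, weight in weighted:
            top = next((c for c in prefs if c in tallies), None)
            if top is not None:
                tallies[top] += weight
        reached = [c for c in tallies if tallies[c] >= quota]
        if reached:
            winner = max(reached, key=lambda c: tallies[c])
            elected.append(winner)
            surplus = tallies[winner] - quota
            keep = surplus / tallies[winner]       # fraction of each ballot to pass on
            for ballot in weighted:                # scale ballots counting for the winner
                top = next((c for c in ballot[0] if c in tallies), None)
                if top == winner:
                    ballot[1] *= keep
        else:
            # Nobody reaches the quota: exclude the lowest-ranked candidate.
            excluded.add(min(tallies, key=lambda c: (tallies[c], c)))
    return elected

# Example: two seats, five ballots -> ['A', 'C']
print(stv_count([["A", "B"], ["A", "C"], ["B", "A"], ["C", "B"], ["C", "A"]], seats=2))
```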

TEI Member voters, identified by email address, will receive a URL at which to cast their ballots. When the election closes, all voters who cast a ballot will be sent an email with a link to the results, from which it is also possible to download the final ballots for verification. Individual members may vote in the TEI Technical Council election. The nominated representatives of member institutions may vote in both the TEI Board and TEI Technical Council elections.

Voting will open in a few days.

Voting closes on November 16, 2020, at 23:59 Hawaii-Aleutian Standard Time (HAST), chosen because it offers the latest global midnight.

Candidate Statements: TEI Technical Council

Syd Bauman

Statement of purpose:

Syd would like to see progress in several areas:
  1. “User-oriented” efforts, e.g., creating documentation, recommendations, and customizations for particular constituencies or user groups; improving the look-and-feel (and flexibility) of custom documentation; and creating or commissioning reference implementations.
  2. Expanding the scope of the Guidelines, e.g., to include greater support for legal documents, a method for encoding acrostics, and perhaps a module addressing social media.
  3. Technical improvements to the Guidelines, e.g., further automated constraint checking, improvements to the ODD language and the stylesheets that process it, changes in TEI pointers to better align TEI with the existing W3C XPointer framework, and improvements to the automated deprecation system.

Biography:

Syd came to the TEI through an interest in markup and markup languages. He became interested in SGML just prior to its publication in 1986, but did not start engaging with a real markup language until late 1990. At that time he was already working at the Brown University Women Writers Project, where his first major task was to convert WWP legacy data to be in line with the newly published TEI P1. He still works at the WWP as a Senior XML Programmer/Analyst, and ever since that first challenge he’s been thinking of ways to improve the TEI. From 2001 to 2007 Syd served the TEI as the North American Editor, and since 2013 he has served on the Technical Council; thus he is familiar with the workings of the Council. He has been very active in the TEI community as a frequent presenter on TEI topics at conferences; by consulting closely with nearly half a dozen TEI projects and providing occasional assistance to another dozen or so; as a member of several SIGs and editor of the Library SIG’s _Best Practices for TEI in Libraries_; as the chair of the Council’s Stylesheets task force; and, of course, through teaching numerous TEI workshops and seminars. Syd has an AB in political science from Brown University, and has worked as a systems programmer and a freelance computer typesetter. He has often taught TEI workshops and seminars, and consults for a variety of humanities computing projects. He has been an Emergency Medical Technician since 1983.

Helena Bermúdez Sabel

Statement of purpose:

I am honored to have been nominated to stand for election to the TEI Technical Council. The TEI has always been a powerful resource for modeling my work across a wide range of research interests. From codicology, paleography, poetry, and diachronic linguistics to linked data and semantic analysis, I have delved into the TEI and gained a great level of familiarity with the Guidelines. I am currently interested in further employing this annotation standard to model less common applications, such as graph-based annotations of complex linguistic features like syntactic analysis and semantic ambiguity. With this goal in mind, I am currently involved in the development of automatic converters from the most common formats used by natural language processing tools to TEI, and from TEI to RDF serialization formats. I believe that these activities can promote fruitful dialogue between different research communities.

I am also a member of the Text Encoding Initiative Working Group “Internationalization (I18n)” because it embodies one of my main interests: Boosting the use of TEI outside the English-speaking community by increasing the availability of multilingual introductory materials and training, together with the dissemination of TEI projects in languages other than English.

Biography:

I am a postdoctoral researcher at the University of Lausanne (Switzerland) working on the SNF-funded project “A world of possibilities. Modal pathways over an extra-long period of time: the diachrony of modality in the Latin language” (http://woposs.unil). In this project, I am responsible for the technical aspects of the annotation workflow, including the automation of corpus pre- and post-processing. The pipeline makes ample use of XML technologies and includes TEI output among its results.

I hold a PhD in Medieval Studies from the Universidade de Santiago de Compostela (2019). My doctoral research involved the development of a digital edition model in TEI that enables the quantitative study of linguistic variation through the automatic comparison of witnesses. An implementation of the model was illustrated with a Galician-Portuguese secular poetry corpus (http://www.gl-pt.obdurodon).

Before moving to Lausanne, I worked at the Laboratorio de Innovación en Humanidades Digitales (Madrid, Spain). There, I was a researcher in an ERC-funded project focused on enabling the interoperability of poetic resources of European traditions via linked open data.

I also have solid experience in Digital Humanities teaching. Besides the specialized workshops in digital philology that I have taught at different institutions, I am a regular instructor in the Master in Digital Humanities at the Universidad Nacional de Educación a Distancia (UNED, Spain), in which, among other subjects, I teach a course focused on XSLT.

I am proficient in XML technologies and linked data. I am interested in data modeling and the formalization of annotation schemes, and I am thus very keen to take a more active role in the TEI community.

Hugh Cayless

Statement of purpose:

If re-elected to Council, I will push forward on work to shore up some of the TEI’s older pieces of infrastructure. Great improvements have been made already, but some of the remaining tasks include upgrading the Stylesheets to XSLT 3.0 and modernizing the web display of the Guidelines. I will also continue working on improving the infrastructure to support internationalizing and translating the TEI’s documentation and on making the TEI more approachable to newcomers. Finally, I hope to continue work on the TEI’s support for digital critical editions and standoff annotation. It has been a joy and honor to serve on the Council with thoughtful and insightful colleagues and I look forward to supporting them and the TEI in the future, whether I am re-elected or not.

Biography:

Hugh Cayless is a Senior Digital Humanities Developer at Duke University Libraries. He is Treasurer of the Text Encoding Initiative Consortium and he has served on the TEI Technical Council since 2012. Hugh has worked on Digital Humanities projects with a focus on ancient studies since the late 1990s and holds a Ph.D. in Classics and an M.S. in Information Science from UNC Chapel Hill. He is proficient in several programming languages and database systems. His current research focuses on digital critical editions and on improving the accessibility and usability of TEI.

Nicholas Cole

Statement of purpose:

I was elected last year to complete the final year of a term which had been vacated, and I have spent most of this year serving my 'apprenticeship' on the TEI Council, learning how the various parts of the project fit together and how to make a difference. The TEI is a venerable project, and learning its workings has at times been daunting. I would now like to be elected to serve a full term so that I can put this knowledge to use for the community.

I have a strong interest in the design of the TEI guidelines, and have been especially interested in the work to implement a TEI bridge to the world of the Web Annotation Data Model (WADM). I understand that some of the most crucial work of the Council, though, is to improve the consistency and clarity of the guidelines.

In my academic work, I run the Quill Project at the University of Oxford, which produces digital editions of negotiated texts and the discussions that produce them. This category of text includes many written constitutions, pieces of legislation, and international treaties. The project hosts a rapidly growing digital archive -- much of which concerns manuscripts literally edited with scraps of paper and glue, and which can therefore be complicated to transcribe accurately.

I have an interest in the very technical aspects of representing humanities data. Due to the algorithms used for processing, the Quill Project's core data model was not initially able to handle structured documents of any kind. While we can now handle some limited rich-text formats, XML remains a difficult problem. An active area of current research is therefore to find a way to implement Operational Transformation techniques reliably on XML/TEI documents that describe the work of a formal negotiation, either directly or using bidirectional translation into intermediate document formats. I hope that this work will result not just in improvements for us, but in the ability to write more general tools for editing and processing TEI, especially in multi-user environments.

More generally, our project puts a strong emphasis on the ability to annotate data, and the production of highly accurate transcriptions.

I am interested in the creation of workflows that speed documentary editing processes and in the automatic conversion of TEI to and from other formats for the purpose of editing and analysis. I am interested in extending the TEI specification to capture the features of the specific kinds of material that we encounter during our work on negotiations and legislative histories.

I am well aware of the amount of work needed from members of the Technical Council in order to ensure that TEI remains a vibrant community and that the standard develops sensibly to allow for new or better applications. If elected, I commit to being an active member of the Council throughout my term.

Biography:

I studied Ancient and Modern History at University College, Oxford, where I also completed an MPhil in Greek/Roman History and a Doctorate focused on American Political Thought. I have held several research and teaching posts. I am currently the director of the Quill Project, at Pembroke College, Oxford, and collaborate with a number of institutions in the United States, including a deep partnership with an open-enrollment university. My current role sees me involved in data-model design, visualization, user-interface design, and software engineering, alongside the traditional tasks of a historian.

Janelle Jenstad

Statement of purpose:

The TEI community offers one of the finest models of international, interdisciplinary cooperation. I am eager to give back to a community that has profoundly shaped my work, enthusiastically responded to the challenges of the texts that interest me and my team, and provided rich opportunities for so many scholars and students. I am committed to devising creative, collaborative solutions that serve both the local challenges of individual projects and the broader community. To the Council, I will bring my lengthy experience in writing clear, user-friendly documentation. Having been blessed with extraordinary developer colleagues, I have not needed to develop the full suite of technical expertise that others might bring to the Council. But I have learned to communicate scholarly imperatives to developers and, in turn, to convey developers’ counsel to scholars. If the work of the Council tends in these directions, I do have a particular interest in mobilizing TEI-encoded data and texts for linked data applications, in integrating GIS and TEI, in revising the “Performance Texts” chapter, and in rethinking the TEI’s metadata model.

Biography:

I’m an Associate Professor of English at the University of Victoria, where I specialize in digital textual editing, book history, project design and documentation, and the digital geohumanities. Learning TEI in 2005 (from Syd Bauman and Julia Flanders) was a transformative, career-changing experience. With subsequent coaching and chivvying from my collaborator and colleague Martin Holmes, I have built two major TEI projects, The Map of Early Modern London (https://mapoflondon.uvic.ca) and now Linked Early Modern Drama Online (https://lemdo.uvic.ca); co-organized TEI 2017 in Victoria; and (with Kathryn Tomasek) co-edited JTEI 12. I now regularly teach XML for Professional Communicators, incorporate TEI into my textual studies and bibliography classes, and supervise projects/theses with an encoding focus. MoEML is known for its encoding partnerships and its training program, which has introduced hundreds of students to TEI; it is also known for its Praxis documentation and contributions to the TEI guidelines. I am a contributor to “The Endings Project,” which is working on technical and project-management strategies for bringing DH editing projects to a sustainable, archivable end product, as well as to “The LINCS Project” (Linked Infrastructure for Networked Cultural Scholarship). For more information on my other scholarly activities, see (https://janellejenstad.com/).

David Maus

Statement of purpose:

My personal experience with the TEI guidelines comes from processing TEI-encoded documents and from helping scholars better understand the technologies involved.

I am interested in the technical aspects of maintaining the TEI guidelines and stylesheets, and overarching aspects of text modelling. This also includes the active use of TEI encoded documents in the context of other technologies like the International Image Interoperability Framework™ (IIIF) and Optical Character Recognition (OCR).

During my time on the TEI Technical Council I would like to help move the infrastructure for publishing and testing the TEI guidelines forward, and to revisit, refactor, and renew it where necessary.

Biography:

I am head of research and development at the Carl von Ossietzky State and University Library Hamburg. From 2010 to 2019 I worked at the Herzog August Library Wolfenbüttel where I supported a variety of Digital Humanities projects including TEI-encoded digital editions.

Since 2016 I have been active in the wider XML and markup community. I participated in the XProc 3.0 community group, and I am also the author of SchXslt [ʃˈɛksl̩t], a modern XSLT-based implementation of the Schematron validation language. My recent TEI-related activities include a workshop on Schematron and Schematron QuickFix at the 2019 members' meeting in Graz.

My home institution is increasing its engagement with the text-based Digital Humanities and supports my nomination for election to the TEI Technical Council.

Ariane Pinche

Statement of purpose:

In my opinion, the TEI guidelines are an essential concern for any project. Since I began working with digital editions 7 years ago, I have attached crucial importance to the documentation of TEI markup, specifically in ODD, in order to ensure the sustainability of the information encoded in XML files. I believe that it is extremely important to document a project's data well, so that it can be understood and reused.

As an assiduous user of the TEI guidelines in my own research and as a native French speaker, I would like to support the board in its desire to internationalise and clarify the guidelines. Furthermore, as a TEI XML teacher, I am often confronted with questions from French-speaking and novice students about navigating and understanding the guidelines. I therefore encounter simple but varied TEI encoding issues daily, and I hope to be able to share this novice-user perspective on problems that are difficult for experienced users to see. After several years of experience with students engaged directly in digital editing, I have a good idea of which questions can arise with new elements.

In my role as an editor and data producer, I’m also interested in the citability of XML data and in new standards such as DTS and its integration into TEI. I would like to share this experience too.

Moreover, I think it is time for me to give back to the community. TEI has been central to my introduction to digital humanities and has helped strengthen my understanding of scholarly editions in a broader sense, not just as applied to digital editions.

Biography:

As a PhD student in medieval literature, I currently work on a TEI edition of a collection of saints’ Lives. I use TEI to structure my text, to add codicological information, to establish a critical apparatus, and to enrich my text with linguistic information through semi-automatic annotation. I’m also interested in graphic variation and, through my training as a classicist and my work on the Hyperdonat project (http://hyperdonat.tge-adonis.fr), in complex manuscript traditions, critical apparatuses, and their visualisation.

I currently teach XML TEI and XSLT, as well as Old French, at the École nationale des chartes. I have also organised DH events (e.g., https://jihn.hypotheses.org) and taught XML TEI and XSLT workshops (e.g., https://cosme.hypotheses.org/1117).

I’m invested in the digital humanities field through my participation in the Cosme consortium’s working group on lemmatisation, my participation in a couple of TEI conferences, and, most recently, DH 2019, where I received the Fortier Prize with my two colleagues Jean-Baptiste Camps and Thibault Clérice for the presentation “Stylometry for Noisy Medieval Data: Evaluating Paul Meyer's Hagiographic Hypothesis” (https://dev.clariah.nl/files/dh2019/boa/0755.html).

Raffaele Viglianti

Statement of purpose:

During my tenure on the TEI council, my approach has focused on finding practical solutions and seeking a middle ground in the council’s discussions. My primary goals are moving the TEI Guidelines and schema forward effectively and reducing the barriers both to achieving proficiency in TEI and to creating rich scholarly resources with it. Recently I have focused on coalescing minimal computing approaches with TEI’s rich tradition of encoding and publishing. While this most visibly impacts how I teach and use TEI, it also informs my work as a council member. I intend to focus, for example, on making the TEI accessible and usable globally by reducing barriers to adoption through multilingual resources and low-entry technical requirements. Likewise, I will seek to work with underrepresented communities, prioritizing issues that challenge the cultural biases embedded in the TEI’s guidelines and modeling. Finally, since my early involvement with the TEI, I have been pursuing an agenda that highlights the merits of ODD customizations, and I have redesigned and re-implemented the customization tool Roma, which I intend to continue perfecting and maintaining.

Biography:

I am a Research Programmer at the Maryland Institute for Technology in the Humanities (MITH) at the University of Maryland, and I have served three terms on the TEI council. I have been actively involved in the TEI since 2008 as the convenor of the now dormant Music SIG, which resulted in the introduction of the notatedMusic element to the standard and a number of customizations for encoding TEI with music notation. I often teach TEI as part of undergraduate and postgraduate courses and workshops. I have recently become the Technical Editor of the journal Scholarly Editing, for which we are planning a new issue in the next year, including TEI-powered “micro” editions using minimal computing technologies.

I also dedicate time to developing tools that lower barriers to using the TEI. I was awarded the first Rahtz Prize for TEI Ingenuity (2017) for CoreBuilder, a graphical tool for encoding stand-off markup.

With Hugh Cayless, I maintain CETEIcean, a JavaScript library to render TEI in the browser without the need for XSLT. I have also developed a replacement for Roma, which I am now perfecting and extending further.

At MITH, I work on research and development of TEI projects, including the Shelley-Godwin Archive, one of the first projects to use the TEI’s new vocabulary for the transcription of primary sources. I hold a PhD in Digital Musicology and I worked on scholarly digital editions of music, with a focus on romantic opera. As a result of this research, I have established strong ties with the Music Encoding Initiative (MEI) community, within which I have promoted and established the use of ODD for MEI’s specification and guidelines.

Candidate Statements: TEI Board of Directors

Diane Jakacki

Statement of purpose:

It is a great honour to stand for election to the Board of Directors of the Text Encoding Initiative Consortium. While my formal participation in the TEI has not been at the level I would wish in the past few years (too much time spent as ADHO Conference chair(s), which will happily soon be at an end!), I am steeped in the principles and ethics of the TEI in my own research and publication, in the work I support for others, and especially in my teaching. If elected, I will endeavour to support the TEI community in all ways, but particularly as follows:

My current research and teaching portfolio is centered on TEI encoding in a variety of ways. I believe that the activities in which I am engaged, and the degree to which I am engaged with them, would be of value as a member of the TEI Board. Equally, if I were to be elected to the Board, that stronger understanding of and more formal commitment to the community would strengthen the ways in which I collaborate in research and pedagogy. Thank you for your consideration.

Biography:

Diane K. Jakacki, Ph.D. (University of Waterloo, CA). Digital Scholarship Coordinator and Affiliated Faculty in Comparative and Digital Humanities, Bucknell University (Pennsylvania, US). Principal Investigator of REED London Online and the Andrew W. Mellon-funded Liberal Arts Based Digital Editions Publication Cooperative project. Researcher/collaborator on the CFI-funded LINCS project (Susan Brown, PI), focusing on early modern London place in TEI-RDF contexts. Chair, ADHO Conference Coordinating Committee. Editor, "Henry VIII or All is True" and "Tarlton's Jests" for Linked Early Modern Drama Online; co-editor (with Brian Croxall), Debates in DH Pedagogy (under contract with U Minnesota Press). Co-instructor (with Laura Mandell) of the Programming 4 Humanists TEI-centric "Digital Editions Start to Finish" course. Research specializations and interests: pre-modern English/London cultural and performance history; digital humanities and DH pedagogy; particular focus on place in/and text, and how we reveal spatial meaning through close dialogic readings of textual and visual documents.

Ken Penner

Statement of purpose:

One of the developments that has excited me most in recent years is the push to make our data FAIR: Findable, Accessible, Interoperable, and Reusable. My own dream is to make the texts of ancient manuscripts, specifically, FAIR as efficiently as possible.

These principles are not entirely new to the New Media age. Before the internet, we managed to make our data FAIR to some extent. In past centuries, it used to be that to make texts findable, they had to be listed in library card catalogs, bookseller catalogs, or published indexes. It used to be that to make a text accessible, it had to be printed and distributed to bookstores and libraries. It used to be that interoperability was something only a human could do, by producing a derivative work such as a concordance or lexicon. It used to be that reproduction was so time-consuming it would take years of someone’s life; it would often be better for a person to publish another text than to reproduce an existing one. In other words, it used to be that scholars working on texts did so through huge investments of labour and other resources.

But in the age of New Media, the possibilities are blown open. Libraries holding ancient manuscripts are increasingly photographing them and posting the images online. But even that only takes us so far. What I want to do is help develop digital tools for two purposes: (1) to reduce the tedium and duplication of effort needed to make ancient texts available to learn from, even in their conventional print forms, and (2) to make these scholarly outputs available as input data for new research, in ways impossible for conventional print output. My thinking is shaped by the theoretical discussions of digital scholarly editions by Gabler (2010), developed further by Peter Robinson (Robinson 2005, 2010b, 2010a, 2013, 2014, 2016a, 2016b, 2017). More recently, many of the issues (both theoretical and practical) regarding digital editions have been discussed in a helpful collection of articles (Pierazzo and Driscoll 2016).

I am now organizing a group of designers, text digitizers, and programmers to develop a free, unified set of tools for digital work on ancient texts. This comprehensive online collaborative platform is the Toolkit for Humanities Research and Editing Ancient Documents. The acronym THREAD is intended to evoke both connectedness (with texts as woven fabrics and this toolkit sewing them together) and sequence (with the workflow stringing one stage to the next). Although many tools already exist to produce and publish digital editions of texts, as well as to mark them up so they can be used as the basis for further research, the scholars seeking to produce these digital texts are often unaware of the available tools. Furthermore, such tools tend to be difficult to use together because they were developed for a specific project and therefore lack the flexibility or generality to be used for other texts. The result is that we have been working in isolated “silos,” wastefully using our limited resources to repeat much work that has already been done. The end goal is a comprehensive platform for digital humanities textual scholarship in general. The platform will consist of a set of independent but interoperable software modules, each of which performs a specific task. Standardization of output at each stage will be the key to interoperability and to tools that can be applied to any text. Each module can be used by itself, with the ability to import and export XML that conforms to the TEI standard. Each module will also be accessible through a standard web services API, allowing the modules to work together as a single software suite. In some cases, a module will simply be a “wrapper” around existing tools, allowing them to talk to one another.

Biography:

My first venture into digitizing ancient texts was the Online Critical Pseudepigrapha, which Ian Scott, David Miller, and I started in 2002 as graduate students at McMaster University. Ian and I remain co-directors of this project; it is publicly available at http://pseudepigrapha.org. The first few texts we put online were initially hand-keyed from printed editions in ancient Greek, not from manuscripts. The two largest makers of biblical software and data sets for biblical studies (Logos and BibleWorks) then approached the OCP to license the Greek texts of the Pseudepigrapha. This was accomplished in short order because our data was in a standard XML format that allowed repurposing.

In 2005 I began working on a commentary on Greek Isaiah as represented in the Codex Sinaiticus. While the diplomatic transcription made available by the British Library in 2008 (British Library et al. 2008) was a tremendous help, its spelling, punctuation, and paragraphing were those of the ancient scribes. To make a scholarly edition, I worked to normalize the spelling, accents, and punctuation to follow modern editorial conventions. I found that some of these tasks could be automated to some extent.

Once my normalized transcription was complete, I had the necessary data to compare the textual readings of the three oldest major codices because these were already available in normalized digital form. Besides Sinaiticus, the codices are Alexandrinus (Ottley 1904) and Vaticanus (Swete 1901). From this digital comparison, I created an apparatus of textual variants, which I then included in my commentary’s transcription of the text of Isaiah. For Codex Sinaiticus, I also included the corrections made by later scribes who worked on the manuscript.

I had previously worked with variant readings in the OCP project. When typing the printed editions into the computer, we marked up the textual footnotes at the relevant points in the main text. We then added some code so that the OCP was able to generate, in real time, a custom textual apparatus relative to a base text of the user’s choice.

More recently, in 2016, I produced an eclectic edition of the Biblical Dead Sea Scrolls, representing the oldest attested readings in the Qumran biblical scrolls (Penner and Meyer 2016). I did the programming to choose the best reading and calculate the state of preservation of each letter, based on existing manuscript transcriptions.

Bible software was already able to search the original-language biblical texts by lemma and inflection (for example, finding instances where the verb “to be” appears in a plural form: “were”, “are”, “be”, “being”).

When the OCP provided our transcriptions of the pseudepigrapha to a commercial Bible software company, the company contacted me to add lexical and morphological tags to the Greek words of these texts. There was already an online automated parser into which we could feed the inflected Greek words to identify the possible lexical forms and morphology of each word. The output from this parser provided a list of possibilities that a human then needed to curate, choosing the correct lexical form and morphology for each word in its context. This was my first exposure to computer-assisted markup, in which a tedious task that had previously required exclusively human skill could be completed at the same or better quality in far less time. The result is that we can search the Pseudepigrapha or the historian Josephus’s writings for all instances of a particular dictionary form in any inflection.

It also means that, given a Greek dictionary indexed by headword, the dictionary entry for any Greek word is accessible no matter the inflection. Logos Bible Software sought scholars to produce an interlinear version of the Septuagint, in which each Greek word in the text would show beneath it not only its lexical form and parsing but also a contextually appropriate English gloss and an indication of how those glosses should be sequenced to make sense in English. I contributed this interlinear data for the first 39 chapters of Isaiah, using computer tools provided by Logos.

So when Logos planned to produce a translation of the Hebrew Bible in which the Hebrew words were explicitly linked to their corresponding English words, I again contributed the translation of Isaiah. And when I was part of the editorial team for Lexham’s translation of the Greek Septuagint, I used computer tools to rearrange the English contextual glosses from the Septuagint Interlinear mentioned above into a first rough draft of a translation of the Major Prophets and Wisdom Literature. This originally digital-only translation is now published in print as the Lexham English Septuagint. My commentary on Greek Isaiah is in press with Brill.

Gimena del Rio Riande

Statement of purpose:

I believe standards constitute the foundation of any sustainable global community. I’m committed to making the TEI more open and accessible to Spanish speakers through different projects that focus on translating and re-contextualizing TEI materials into Spanish, and on adapting the principles of digital scholarly editions. Being part of the TEI-C board for one term gave me the opportunity to start this conversation about a more open, diverse, and global TEI “inside” the TEI-C. We are now planning the next steps for the newly formed TEI Internationalization Working Group, in which I plan to involve the Spanish-speaking community, and I have also fostered best Open Access practices for the Journal of the TEI. I have chosen to stand for the Board for a second term, as I would like to continue working on these projects inside the TEI-C.

Biography:

I am an Associate Researcher at the Instituto de Investigaciones Bibliográficas y Crítica Textual (IIBICRIT-CONICET, Argentina). I coordinate a DH lab (HD CAICYT Lab, CONICET) and, among other programs, the Master in Digital Humanities (UNED, Spain) and the course “Digital Publishing with Minimal Computing” with Raff Viglianti (Global Classrooms Program, 2020-2023). I serve as one of the DOAJ (Directory of Open Access Journals) Ambassadors for Latin America and as director of the Revista de Humanidades Digitales, among other roles. I am the president of the Asociación Argentina de Humanidades Digitales (AAHD).

Candidate Statements: TAPAS Advisory Board

Laura Estill

Statement of purpose:

As a literary scholar with a focus on the early modern period, for years I did not include in my teaching the digital humanities that have been so central to my research. I have slowly been able to integrate more digital humanities, including TEI, into my teaching. The extensive scholarship on editing, TEI, digital pedagogies, and, of course, the areas/subjects of a given class or research project can be daunting, to say the least. TAPAS can support learning across these areas, or, to play on its name, can encourage small tastes of multiple cuisines. I’d like to continue and extend TAPAS’s engagement with existing scholarship and pedagogical initiatives.

I am particularly interested in promoting the use of the “TAPAS Classroom” and, as Flanders et al. (2019) recommend, in thinking about how TAPAS can help instructors use TEI to meet a variety of course goals (https://journals.openedition.org/jtei/2144). As a TAPAS Advisory Board member, I would hope to continue to build a welcoming and inclusive community of educators and scholars; I would, likewise, uphold TAPAS’s commitment to open publication and open pedagogy.

Biography:

I am a Tier 2 Canada Research Chair in Digital Humanities and Associate Professor of English at St. Francis Xavier University in Nova Scotia, Canada. My research explores the reception history of drama by Shakespeare and his contemporaries from their initial circulation in print, manuscript, and on stage to how we mediate and understand these texts and performances online today. I am an Associate Director of the Digital Humanities Summer Institute at the University of Victoria, where I enjoy both taking and teaching classes (I have co-taught, for instance, “TEI Fun(damentals)” and “TEI for manuscript description and transcription”). I am a member-at-large of the Canadian Society for Digital Humanities executive. My published journal articles in Digital Humanities Quarterly, Digital Literary Studies, and Digital Studies/Champ Numérique explore digital pedagogy, marginalia and the TEI, and reception of early women's writing. My two co-edited collections are Early Modern Studies after the Digital Turn (2016) and Early British Drama in Manuscript (2019); I am a co-editor of Early Modern Digital Review (https://emdr.itercommunity.org). Based on the research for my monograph I am currently working on a TEI-based project, DEx: A Database of Dramatic Extracts (https://dex.itercommunity.org).