8 Transcriptions of Speech

The module described in this chapter is intended for use with a wide variety of transcribed spoken material. It should be stressed, however, that the present proposals are not intended to support unmodified every variety of research undertaken upon spoken material now or in the future; some discourse analysts, some phonologists, and doubtless others may wish to extend the scheme presented here to express more precisely the set of distinctions they wish to draw in their transcriptions. Speech regarded as a purely acoustic phenomenon may well require different methods from those outlined here, as may speech regarded solely as a process of social interaction.

This chapter begins with a discussion of some of the problems commonly encountered in transcribing spoken language (section 8.1 General Considerations and Overview). Section 8.2 Documenting the Source of Transcribed Speech documents some additional TEI header elements which may be used to document the recording or other source from which transcribed text is taken. Section 8.3 Elements Unique to Spoken Texts describes the basic structural elements provided by this module. Finally, section 8.4 Elements Defined Elsewhere of this chapter reviews further problems specific to the encoding of spoken language, demonstrating how mechanisms and elements discussed elsewhere in these Guidelines may be applied to them.

8.1 General Considerations and Overview

There is great variation in the ways different researchers have chosen to represent speech using the written medium.31 This reflects the special difficulties which apply to the encoding or transcription of speech. Speech varies according to a large number of dimensions, many of which have no counterpart in writing (for example, tempo, loudness, pitch, etc.). The audibility of speech recorded in natural communication situations is often less than perfect, affecting the accuracy of the transcription. Spoken material may be transcribed in the course of linguistic, acoustic, anthropological, psychological, ethnographic, journalistic, or many other types of research. Even in the same field, the interests and theoretical perspectives of different transcribers may lead them to prefer different levels of detail in the transcript and different styles of visual display. The production and comprehension of speech are intimately bound up with the situation in which speech occurs, far more so than is the case for written texts. A speech transcript must therefore include some contextual features; determining which are relevant is not always simple. Moreover, the ethical problems in recording and making public what was produced in a private setting and intended for a limited audience are more frequently encountered in dealing with spoken texts than with written ones.

Speech also poses difficult structural problems. Unlike a written text, a speech event takes place in time. Its beginning and end may be hard to determine and its internal composition difficult to define. Most researchers agree that the utterances or turns of individual speakers form an important structural component in most kinds of speech, but these are rarely as well-behaved (in the structural sense) as paragraphs or other analogous units in written texts: speakers frequently interrupt each other, use gestures as well as words, leave remarks unfinished and so on. Speech itself, though it may be represented as words, frequently contains items such as vocalized pauses which, although only semi-lexical, have immense importance in the analysis of spoken text. Even non-vocal elements such as gestures may be regarded as forming a component of spoken text for some analytic purposes. Below the level of the individual utterance, speech may be segmented into units defined by phonological, prosodic, or syntactic phenomena; no clear agreement exists, however, even as to appropriate names for such segments.

Spoken texts transcribed according to the guidelines presented here are organized as follows. The overall structure of a TEI spoken text is identical to that of any other TEI text: the TEI element for a spoken text contains a teiHeader element, followed by a text element. Even texts primarily composed of transcribed speech may also include conventional front and back matter, and may even be organized into divisions like printed texts.

We may say, therefore, that these Guidelines regard transcribed speech as being composed of arbitrary high-level units called texts. A spoken text might typically be a conversation between a small number of people, a lecture, a broadcast TV item, or a similar event. Each such unit has associated with it a teiHeader providing detailed contextual information such as the source of the transcript, the identity of the participants, whether the speech is scripted or spontaneous, the physical and social setting in which the discourse takes place and a range of other aspects. Details of the header in general are provided in chapter 2 The TEI Header; the particular elements it provides for use with spoken texts are described below (8.2 Documenting the Source of Transcribed Speech). Details concerning additional elements which may be used for the documentation of participant and contextual information are given in 15.2 Contextual Information.

Defining the bounds of a spoken text is frequently a matter of arbitrary convention or convenience. In public or semi-public contexts, a text may be regarded as synonymous with, for example, a lecture, a broadcast item, a meeting, etc. In informal or private contexts, a text may be simply a conversation involving a specific group of participants. Alternatively, researchers may elect to define spoken texts solely in terms of their duration in time or length in words. By default, these Guidelines assume of a text only that:

  • it is internally cohesive,
  • it is describable by a single header, and
  • it represents a single stretch of time with no significant discontinuities.

Deviation from these assumptions may be specified (for example, the org attribute on the text element may take the value compos to specify that the components of the text are discrete) but is not recommended.

Within a text it may be necessary to identify subdivisions of various kinds, if only for convenience of handling. The neutral div element discussed in section 4.1 Divisions of the Body is recommended for this purpose. It may be found useful also for representing subdivisions relating to discourse structure, speech act theory, transactional analysis, etc., provided only that these divisions are hierarchically well-behaved. Where they are not, as is often the case, the mechanisms discussed in chapters 16 Linking, Segmentation, and Alignment and 20 Non-hierarchical Structures may be used.

A spoken text may contain any of the following components:

  • utterances
  • pauses
  • vocalized but non-lexical phenomena such as coughs
  • kinesic (non-verbal, non-lexical) phenomena such as gestures
  • entirely non-linguistic incidents occurring during and possibly influencing the course of speech
  • writing, regarded as a special class of incident in that it can be transcribed, for example captions or overheads displayed during a lecture
  • shifts or changes in vocal quality

Elements to represent all of these features of spoken language are discussed in section 8.3 Elements Unique to Spoken Texts below.

An utterance (tagged u) may contain lexical items interspersed with pauses and non-lexical vocal sounds; during an utterance, non-linguistic incidents may occur and written materials may be presented. The u element can thus contain any of the other elements listed, interspersed with a transcription of the lexical items of the utterance; the other elements may all appear between utterances or next to each other, but except for writing they do not contain any other elements nor any data.

A spoken text itself may be without substructure, that is, it may consist simply of units such as utterances or pauses, not grouped together in any way, or it may be subdivided. If the notion of what constitutes a ‘text’ in spoken discourse is inevitably rather an arbitrary one, the notion of formal subdivisions within such a ‘text’ may appear even more debatable. Nevertheless, such divisions may be useful for such types of discourse as debates, broadcasts, etc., where structural subdivisions can easily be identified, or more generally wherever it is desired to aggregate utterances or other parts of a transcript into units smaller than a complete ‘text’. Examples might include ‘conversations’ or ‘discourse fragments’, or more narrowly, ‘that part of the conversation where topic x was discussed’, provided only that the set of all such divisions is coextensive with the text.

Each such division of a spoken text should be represented by the numbered or unnumbered div elements defined in chapter 4 Default Text Structure. For some detailed kinds of analysis a hierarchy of such divisions may be found useful; nested div elements may be used for this purpose, as in the following example showing how a collection made up of transcribed ‘sound bites’ taken from speeches given by a politician on different occasions might be encoded. Each extract is regarded as a distinct div, nested within a single composite div as follows:
<div type="soundbites"
 subtype="conservative" org="composite">

 <div sample="medial"/>
 <div sample="medial"/>
 <div sample="initial"/>
</div>

As a member of the class att.declaring, the div element may also carry a decls attribute, for use where the divisions of a text do not all share the same set of the contextual declarations specified in the TEI header. (See further section 15.3 Associating Contextual Information with a Text).

8.2 Documenting the Source of Transcribed Speech

Where a computer file is derived from a spoken text rather than a written one, it will usually be desirable to record additional information about the recording or broadcast which constitutes its source. Several additional elements are provided for this purpose within the source description component of the TEI header:

  • scriptStmt (script statement) contains a citation giving details of the script used for a spoken text. [The term ‘script’ is understood broadly here as any text prepared in advance of its delivery (a political speech, sermon, interview, address, lecture, broadcast, etc.).]
  • recordingStmt (recording statement) describes a set of recordings used as the source of a transcription of speech.
  • recording (recording event) provides details of an audio or video recording event used as the source of a transcription of speech, whether a direct recording or a public broadcast.
    type: the kind of recording.

As a member of the att.duration class, the recording element inherits the following attribute:

  • att.duration.w3c provides attributes for recording normalized temporal durations.
    dur (duration) indicates the length of this element in time.

Note that detailed information about the participants or setting of an interview or other transcript of spoken language should be recorded in the appropriate division of the profile description, discussed in chapter 15 Language Corpora, rather than as part of the source description. The source description is used to hold information only about the source from which the transcribed speech was taken, for example, any script being read and any technical details of how the recording was produced. If the source was a previously-created transcript, it should be treated in the same way as any other source text.

The scriptStmt element should be used where it is known that one or more of the participants in a spoken text is speaking from a previously prepared script. The script itself should be documented in the same way as any other written text, using one of the three citation tags mentioned above. Utterances or groups of utterances may be linked to the script concerned by means of the decls attribute, described in section 15.3 Associating Contextual Information with a Text.
<sourceDesc>
 <scriptStmt xml:id="CNN12">
  <bibl>
   <author>CNN Network News</author>
   <title>News headlines</title>
   <date when="1991-06-12">12 Jun 91</date>
  </bibl>
 </scriptStmt>
</sourceDesc>

The recordingStmt is used to group together information relating to the recordings from which the spoken text was transcribed. The element may contain either a prose description or, more helpfully, one or more recording elements, each corresponding with a particular recording. The linkage between utterances or groups of utterances and the relevant recording statement is made by means of the decls attribute, described in section 15.3 Associating Contextual Information with a Text.

The recording element should be used to provide a description of how and by whom a recording was made. This information may be provided in the form of a prose description, within which such items as statements of responsibility, names, places, and dates may be identified using the appropriate phrase-level tags. Alternatively, a selection of elements from the model.recordingPart class may be provided. This element class makes available the following elements:

  • date (date) contains a date in any format.
  • time (time) contains a phrase defining a time of day in any format.
  • respStmt (statement of responsibility) supplies a statement of responsibility for the intellectual content of a text, edition, recording, or series, where the specialized elements for authors, editors, etc. do not suffice or do not apply.
  • equipment (equipment) provides technical details of the equipment and media used for an audio or video recording used as the source of a transcription of speech.
  • broadcast (broadcast) describes a broadcast used as the source of a transcription of speech.
Specialized collections may wish to add further sub-elements to these major components. These elements should be used only for information relating to the recording process itself; information about the setting or participants (for example) is recorded elsewhere: see sections 15.2.3 The Setting Description and 15.2.2 The Participant Description.
<recordingStmt>
 <recording type="video">
  <p>U-matic recording made by college audio-visual department staff,
     available as PAL-standard VHS transfer or sound-only cassette</p>
 </recording>
</recordingStmt>
<recordingStmt>
 <recording type="audio" dur="PT30M">
  <respStmt>
   <resp>Location recording by</resp>
   <name>Sound Services Ltd.</name>
  </respStmt>
  <equipment>
   <p>Multiple close microphones mixed down to stereo Digital
       Audio Tape, standard play, 44.1 KHz sampling frequency</p>
  </equipment>
  <date>12 Jan 1987</date>
 </recording>
</recordingStmt>
<recordingStmt>
 <recording type="audio" dur="PT15M"
  xml:id="rec-3001">

  <date>14 Feb 2001</date>
 </recording>
 <recording type="audio" dur="PT15M"
  xml:id="rec-3002">

  <date>17 Feb 2001</date>
 </recording>
 <recording type="audio" dur="PT15M"
  xml:id="rec-3003">

  <date>22 Feb 2001</date>
 </recording>
</recordingStmt>
When a recording has been made from a public broadcast, details of the broadcast itself should be supplied within the recording element, as a nested broadcast element. A broadcast is closely analogous to a publication, and the broadcast element should therefore contain one of the bibliographic citation elements bibl, biblStruct, or biblFull. The broadcasting agency responsible for a broadcast is regarded as its author, while other participants (for example interviewers, interviewees, script writers, directors, producers, etc.) should be specified using the respStmt or editor element with an appropriate resp (see further section 3.11 Bibliographic Citations and References).
<recording type="audio" dur="PT10M">
 <equipment>
  <p>Recorded from FM Radio to digital tape</p>
 </equipment>
 <broadcast>
  <bibl>
   <title>Interview on foreign policy</title>
   <author>BBC Radio 5</author>
   <respStmt>
    <resp>interviewer</resp>
    <name>Robin Day</name>
   </respStmt>
   <respStmt>
    <resp>interviewee</resp>
    <name>Margaret Thatcher</name>
   </respStmt>
   <series>
    <title>The World Tonight</title>
   </series>
   <note>First broadcast on <date when="1989-11-27">27 Nov 1989</date>
   </note>
  </bibl>
 </broadcast>
</recording>
When a broadcast contains several distinct recordings (for example a compilation), additional recording elements may be further nested within the broadcast element.
<recording dur="PT100M">
 <broadcast>
  <recording/>
 </broadcast>
</recording>

8.3 Elements Unique to Spoken Texts

The following elements characterize spoken texts, transcribed according to these Guidelines:

  • u (utterance) a stretch of speech usually preceded and followed by silence or by a change of speaker.
  • pause/ (pause) a pause either between or within utterances.
  • vocal (vocal) any vocalized but not necessarily lexical phenomenon, for example voiced pauses, non-lexical backchannels, etc.
  • kinesic (kinesic) any communicative phenomenon, not necessarily vocalized, for example a gesture or frown.
  • incident (incident) any phenomenon or occurrence, not necessarily vocalized or communicative, for example incidental noises or other events affecting communication.
  • writing (writing) a passage of written text revealed to participants in the course of the spoken text being transcribed.
  • shift/ (shift) marks the point at which some paralinguistic feature of a series of utterances by any one speaker changes.

The u element may appear directly within a spoken text, and may contain any of the others; the others may also appear directly (for example, a vocal may appear between two utterances) but cannot contain a u element. In terms of the basic TEI model, therefore, we regard the u element as analogous to a paragraph, and the others as analogous to ‘phrase’ elements, but with the important difference that they can exist either as siblings or as children of utterances. The class model.divPart.spoken provides the u element; the class model.global.spoken provides the six other elements listed above.
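These containment rules lend themselves to mechanical checking. The following Python sketch (not part of the Guidelines; the transcript fragment is invented for illustration) uses only the standard library to confirm that u elements are never nested, and to distinguish elements occurring within utterances from those occurring between them:

```python
import xml.etree.ElementTree as ET

# A minimal, invented transcript fragment.
fragment = """<body>
  <u who="#a">so <pause/> anyway <vocal><desc>laughs</desc></vocal></u>
  <incident><desc>door slams</desc></incident>
  <u who="#b">right</u>
</body>"""

body = ET.fromstring(fragment)

# u may contain the other spoken elements, but never another u.
for u in body.iter("u"):
    assert not u.findall(".//u"), "nested u elements are not permitted"

# Elements occurring inside utterances vs. as siblings of utterances.
inside = [child.tag for u in body.findall("u") for child in u]
between = [el.tag for el in body if el.tag != "u"]
print(inside)   # ['pause', 'vocal']
print(between)  # ['incident']
```

A real validator would of course rely on the TEI schema itself; this sketch merely illustrates the structural relationship described above.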

As members of the att.ascribed class, all of these elements share the following attribute:

  • att.ascribed provides attributes for elements representing speech or action that can be ascribed to a particular individual.
    who indicates the person, or group of people, to whom the element content is ascribed.

As members of the att.typed, att.timed and att.duration classes, all of these elements except shift share the following attribute:

  • att.typed provides attributes which can be used to classify or subclassify elements in any way.
    type characterizes the element using any convenient classification scheme or typology.
    subtype (subtype) provides a sub-categorization of the element, if needed.
  • att.timed provides attributes common to those elements which have a duration in time, expressed either absolutely or by reference to an alignment map.
    start indicates the location within a temporal alignment at which this element begins.
    end indicates the location within a temporal alignment at which this element ends.
  • att.duration.w3c provides attributes for recording normalized temporal durations.
    dur (duration) indicates the length of this element in time.

Each of these elements is further discussed and specified in sections 8.3.1 Utterances to 8.3.4 Writing.

We can show the relationship between four of these constituents of speech using the features eventive, communicative, anthropophonic (for sounds produced by the human vocal apparatus), and lexical:

             eventive   communicative   anthropophonic   lexical
incident        +             -                -            -
kinesic         +             +                -            -
vocal           +             +                +            -
utterance       +             +                +            +

The differences are not always clear-cut. Among incidents might be included actions like slamming the door, which can certainly be communicative. Vocals include coughing and sneezing, which are usually involuntary noises. Equally, the distinction between utterances and vocals is not always clear, although for many analytic purposes it will be convenient to regard them as distinct. Individual scholars may differ in the way borderlines are drawn and should declare their definitions in the editorialDecl element of the header (see 2.3.3 The Editorial Practices Declaration).
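The matrix above can be expressed as a small lookup structure. The following Python sketch (illustrative only; the function and variable names are our own, not defined by the Guidelines) returns the element name matching a given combination of features:

```python
# Feature matrix from the table above, expressed as a mapping from
# element name to (eventive, communicative, anthropophonic, lexical).
MATRIX = {
    "incident":  (True, False, False, False),
    "kinesic":   (True, True,  False, False),
    "vocal":     (True, True,  True,  False),
    "utterance": (True, True,  True,  True),
}

def classify(eventive, communicative, anthropophonic, lexical):
    """Return the element name matching a feature combination, or None."""
    wanted = (eventive, communicative, anthropophonic, lexical)
    for name, features in MATRIX.items():
        if features == wanted:
            return name
    return None

print(classify(True, True, True, False))   # 'vocal'
print(classify(True, False, False, False)) # 'incident'
```

As the surrounding discussion notes, real borderline cases (a communicative door slam, an involuntary cough) will not always fit the matrix cleanly; the encoder's own conventions should be documented in the header.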

The following short extract exemplifies several of these elements. It is recoded from a text originally transcribed in the CHILDES format.32 Each utterance is encoded using a u element (see section 8.3.1 Utterances). The speakers are defined using the listPerson element discussed in 13.3.2 The Person Element and each is given a unique identifier also used to identify their speech. Pauses marked by the transcriber are indicated using the pause element (see section 8.3.2 Pausing). Non-verbal vocal effects such as the child's meowing are indicated either with orthographic transcriptions or with the vocal element, and entirely non-linguistic but significant incidents such as the sound of the toy cat are represented by the incident elements (see section 8.3.3 Vocal, Kinesic, Incident).
<u who="#mar">you
never <pause/> take this cat for show and tell
<pause/> meow meow</u>
<u who="#ros">yeah well I dont want to</u>
<incident>
 <desc>toy cat has bell in tail which continues to make a tinkling sound</desc>
</incident>
<vocal who="#mar">
 <desc>meows</desc>
</vocal>
<u who="#ros">because it is so old</u>
<u who="#mar">how <choice>
  <orig>bout</orig>
  <reg>about</reg>
 </choice>
 <emph>your</emph> cat <pause/>yours is <emph>new</emph>
 <kinesic>
  <desc>shows Father the cat</desc>
 </kinesic>
</u>
<u trans="pause" who="#fat">thats <pause/> darling</u>
<u who="#mar">
 <seg>no <emph>mine</emph> isnt old</seg>
 <seg>mine is just um a little dirty</seg>
</u>
<!-- ... -->
<listPerson>
 <person xml:id="mar">
<!-- ... -->
 </person>
 <person xml:id="ros">
<!-- ... -->
 </person>
 <person xml:id="fat">
<!-- ... -->
 </person>
</listPerson>

This example also uses some elements common to all TEI texts, notably the reg tag for editorial regularization. Unusually stressed syllables have been encoded with the emph element. The seg element has also been used to segment the last utterance. Further discussion of all of such options is provided in section 8.4 Elements Defined Elsewhere.

Contextual information is of particular importance in spoken texts, and should be provided by the TEI header of a text. In general, all of the information in a header is understood to be relevant to the whole of the associated text. As a member of the att.declaring class, however, the u element may specify a different context by means of the decls attribute (see further section 15.3 Associating Contextual Information with a Text).

8.3.1 Utterances

Each distinct utterance in a spoken text is represented by a u element, described as follows:

  • u (utterance) a stretch of speech usually preceded and followed by silence or by a change of speaker.
    trans (transition) indicates the nature of the transition between this utterance and the previous one.

Use of the who attribute to associate the utterance with a particular speaker is recommended but not required. Its use implies as a further requirement that all speakers be identified by a person or personGrp element in the TEI header (see section 15.2.2 The Participant Description), but it may also point to another external source of information about the speaker. Where utterances or other parts of the transcription cannot be attributed with confidence to any particular participant or group of participants, the encoder may choose to create personGrp elements with xml:id attributes such as various or unknown, and perhaps give the root listPerson element an xml:id value of all, then point to those as appropriate using who.
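The pointing convention can be illustrated with a short processing sketch in Python (illustrative only; the speaker identifiers echo the earlier breakfast-table example). Note that xml:id belongs to the XML namespace, which the standard library exposes under its full namespace URI:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<TEI>
  <listPerson>
    <person xml:id="mar"/>
    <person xml:id="ros"/>
  </listPerson>
  <u who="#mar">you never take this cat for show and tell</u>
  <u who="#ros">yeah well I dont want to</u>
</TEI>""")

# xml:id is reported by the parser under the XML namespace URI.
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Index every person by its xml:id, then resolve each utterance's who value.
people = {p.get(XML_ID): p for p in doc.iter("person")}
speakers = [u.get("who").lstrip("#") for u in doc.iter("u")]
assert all(s in people for s in speakers)
print(speakers)  # ['mar', 'ros']
```

The same resolution logic would apply to who values on vocal, kinesic, and the other spoken elements.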

The trans attribute is provided as a means of characterizing the transition from one utterance to the next at a simpler level of detail than that provided by the temporal alignment mechanism discussed in section 16.5 Synchronization. The value specified applies to the transition from the preceding utterance into the utterance bearing the attribute. For example:33
<u xml:id="ts_a1" who="#a">Have you heard the</u>
<u xml:id="ts_b1" trans="latching" who="#b">the election results? yes</u>
<u xml:id="ts_a2" trans="pause" who="#a">it's a disaster</u>
<u xml:id="ts_b2" trans="overlap" who="#b">it's a miracle</u>
In this example, utterance ts_b1 latches on to utterance ts_a1, while there is a marked pause between ts_b1 and ts_a2. ts_b2 and ts_a2 overlap, but by an unspecified amount. For ways of providing a more precise indication of the degree of overlap, see section 8.4.2 Synchronization and Overlap.

An utterance may contain either running text, or text within which other basic structural elements are nested. Where such nesting occurs, the who attribute is considered to be inherited for the elements pause, vocal, shift and kinesic; that is, a pause or shift (etc.) within an utterance is regarded as being produced by that speaker only, while a pause between utterances applies to all speakers.

Occasionally, an utterance may seem to contain other utterances, for example where speakers interrupt themselves, or where a second speaker produces a ‘back-channel’ response while the first is still speaking. The present version of these Guidelines does not support nesting of one u element within another. The transcriber must therefore decide whether such interruptions constitute a change of utterance, or whether other elements may be used. In the case of self-interruption, the shift element may be used to show that the speaker has changed the quality of their speech:
<u who="#a">Listen to this <shift new="reading"/>The government is
confident, he said, that the current economic problems will be
completely overcome by June<shift new="normal"/> what nonsense</u>
Alternatively the incident element described in section 8.3.3 Vocal, Kinesic, Incident might be used, without transcribing the read material:
<u who="#a">Listen to this
<incident>
  <desc>reads aloud from newspaper</desc>
 </incident> what
nonsense</u>
Often, back-channelling is only semi-lexicalized and may therefore be represented using the vocal element:
<u who="#a">So what could I have done <vocal who="#b">
  <desc>tut-tutting</desc>
 </vocal> about it anyway?</u>
Where this is not possible, it is simplest to regard the back-channel as a distinct utterance.

8.3.2 Pausing

Speakers differ very much in their rhythm and in particular in the amount of time they leave between words. The following element is provided to mark occasions where the transcriber judges that speech has been paused, irrespective of the actual amount of silence:
  • pause/ (pause) a pause either between or within utterances.
A pause contained by an utterance applies to the speaker of that utterance. A pause between utterances applies to all speakers. The type attribute may be used to categorize the pause, for example as short, medium, or long; alternatively the attribute dur may be used to indicate its length more exactly, as in the following example:
<u>Okay <pause dur="PT2M"/>U-m<pause dur="PT75S"/>the scene opens up
<pause dur="PT50S"/> with <pause dur="PT20S"/> um <pause dur="PT145S"/> you see
a tree okay?</u>
If detailed synchronization of pausing with other vocal phenomena is required, the alignment mechanism defined at section 16.5 Synchronization and discussed informally below should be used. Note that the trans attribute mentioned in the previous section may also be used to characterize the degree of pausing between (but not within) utterances.
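Values of dur follow the W3C (ISO 8601) duration syntax. As an illustrative sketch (not part of the Guidelines), the following Python function converts the time-only subset of that syntax, as used in the example above, into seconds for quantitative analysis:

```python
import re

def iso_duration_to_seconds(dur):
    """Convert a time-only ISO 8601 duration (e.g. 'PT75S', 'PT2M')
    to seconds. Only the PT...H...M...S subset is handled here;
    date components (years, months, days) are deliberately ignored."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", dur)
    if not m:
        raise ValueError(f"unsupported duration: {dur!r}")
    hours, minutes, seconds = (float(g) if g else 0.0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

print(iso_duration_to_seconds("PT75S"))  # 75.0
print(iso_duration_to_seconds("PT2M"))   # 120.0
```

With such a helper, the total pause time within an utterance can be obtained by summing the dur values of its pause children.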

8.3.3 Vocal, Kinesic, Incident

The presence of non-transcribed semi-lexical or non-lexical phenomena either between or within utterances may be indicated with the following three elements.

  • vocal (vocal) any vocalized but not necessarily lexical phenomenon, for example voiced pauses, non-lexical backchannels, etc.
  • kinesic (kinesic) any communicative phenomenon, not necessarily vocalized, for example a gesture or frown.
  • incident (incident) any phenomenon or occurrence, not necessarily vocalized or communicative, for example incidental noises or other events affecting communication.

The who attribute should be used to specify the person or group responsible for a vocal, kinesic, or incident which is contained within an utterance, if this differs from that of the enclosing utterance. The attribute must be supplied for a vocal, kinesic, or incident which is not contained within an utterance.

The iterated attribute may be used to indicate that the vocal, kinesic, or incident is repeated, for example laughter as opposed to laugh. These should both be distinguished from laughing, where what is being encoded is a shift in voice quality. For this last case, the shift element discussed in section 8.3.6 Shifts should be used.

A child desc element may be used to supply a conventional representation for the phenomenon, for example:

non-lexical
burp, click, cough, exhale, giggle, gulp, inhale, laugh, sneeze, sniff, snort, sob, swallow, throat, yawn
semi-lexical
ah, aha, aw, eh, ehm, er, erm, hmm, huh, mm, mmhm, oh, ooh, oops, phew, tsk, uh, uh-huh, uh-uh, um, urgh, yup

Researchers may prefer to regard some semi-lexical phenomena as ‘words’ within the bounds of the u element. See further the discussion at section 8.4.3 Regularization of Word Forms below. As for all basic categories, the definition should be made clear in the encodingDesc element of the TEI header.

Some typical examples follow:
<u who="#jan">This is just delicious</u>
<incident>
 <desc>telephone rings</desc>
</incident>
<u who="#ann">I'll get it</u>
<u who="#tom">I used to <vocal>
  <desc>cough</desc>
 </vocal> smoke a lot</u>
<u who="#bob">
 <vocal>
  <desc>sniffs</desc>
 </vocal>He thinks he's tough
</u>
<vocal who="#ann">
 <desc>snorts</desc>
</vocal>
<!-- ... -->
<listPerson>
 <person xml:id="ann">
<!-- ... -->
 </person>
 <person xml:id="bob">
<!-- ... -->
 </person>
 <person xml:id="jan">
<!-- ... -->
 </person>
 <person xml:id="kim">
<!-- ... -->
 </person>
 <person xml:id="tom">
<!-- ... -->
 </person>
</listPerson>
Note that Ann's snorting could equally well be encoded as follows:
<u who="#ann">
 <vocal>
  <desc>snorts</desc>
 </vocal>
</u>

The extent to which encoding of incidents or kinesics is included in a transcription will depend entirely on the purpose for which the transcription was made. As elsewhere, this will depend on the particular research agenda and the extent to which their presence is felt to be significant for the interpretation of spoken interactions.

8.3.4 Writing

Written text may also be encountered when speech is transcribed, for example in a television broadcast or cinema performance, or where one participant shows written text to another. The writing element may be used to distinguish such written elements from the spoken text in which they are embedded.
  • writing (writing) a passage of written text revealed to participants in the course of the spoken text being transcribed.
    gradual indicates whether the writing is revealed all at once or gradually.
  • att.source provides attributes for pointing to the source of a bibliographic reference.
For example, if speaker A in the breakfast table conversation in section 8.3.1 Utterances above had simply shown the newspaper passage to her interlocutor instead of reading it, the interaction might have been encoded as follows:
<u who="#a">look at this</u>
<writing who="#a" type="newspaper"
 gradual="false">
Government claims economic problems
<soCalled>over by June</soCalled>
</writing>
<u who="#a">what nonsense!</u>
If the source of the writing being displayed is known, bibliographic information about it may be stored in a listBibl within the sourceDesc element of the TEI header, and then pointed to using the source attribute. For example, in the following imaginary example, a lecturer displays two different versions of the same passage of text:
<sourceDesc>
<!-- ...-->
 <bibl xml:id="FOL1">Shakespeare First Folio text</bibl>
 <bibl xml:id="FOL2">Shakespeare Second Folio text</bibl>
<!-- ...-->
</sourceDesc>
<!-- ...-->
<u>.... now compare the punctuation of lines 12 and 14 in these two
versions of page 42...
<writing source="#FOL1">....</writing>
 <writing source="#FOL2">....</writing>
</u>

8.3.5 Temporal Information

As noted above, utterances, vocals, pauses, kinesics, incidents, and writing elements all inherit attributes providing information about their position in time from the classes att.timed and att.duration. These attributes can be used to link parts of the transcription very exactly with points on a timeline, or simply to indicate their duration. Note that if start and end point to when elements whose temporal distance from each other is specified in a timeline, then dur is ignored.
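For example, a vocal whose length has been measured but which need not be anchored to a timeline can simply carry a dur value, as in this sketch (the two-second duration is purely illustrative):

```xml
<u who="#tom">I used to <vocal dur="PT2S">
  <desc>cough</desc>
 </vocal> smoke a lot</u>
```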

The anchor element (see 16.4 Correspondence and Alignment) may be used as an alternative means of aligning the start and end of timed elements, and is required when the temporal alignment involves points within an element.

For further discussion of temporal alignment and synchronization see 8.4.2 Synchronization and Overlap below.

8.3.6 Shifts

A common requirement in transcribing spoken language is to mark positions at which a variety of prosodic features change. Many paralinguistic features (pitch, prominence, loudness, etc.) characterize stretches of speech which are not co-extensive with utterances or any of the other units discussed so far. One simple method of encoding such stretches is to mark their boundaries. The empty element shift is provided for this purpose.
  • shift/ (shift) marks the point at which some paralinguistic feature of a series of utterances by any one speaker changes.
    feature a paralinguistic feature.
    new specifies the new state of the paralinguistic feature in question.
A shift element may appear within an utterance or a segment to mark a significant change in the particular feature defined by its attributes, which is then understood to apply to all subsequent utterances for the same speaker, unless changed by a new shift for the same feature in the same speaker. Intervening utterances by other speakers do not normally carry the same feature. For example:
<u>
 <shift feature="loud" new="f"/>Elizabeth
</u>
<u>Yes</u>
<u>
 <shift feature="loud" new="normal"/>Come and try this <pause/>
 <shift feature="loud" new="ff"/>come on
</u>
In this example, the word Elizabeth is spoken loudly, the words Yes and Come and try this with normal volume, and the words come on very loudly.

The values proposed here for the feature attribute are based on those used by the Survey of English Usage (see further Boase 1990); this list may be revised or supplemented using the methods outlined in section 23.3 Personalization and Customization.

The new attribute specifies the new state of the feature following the shift. If this attribute has the special value normal, the implication is that the feature concerned ceases to be remarkable at this point.

A list of suggested values for each of the features proposed follows:

  • tempo
    a
    allegro (fast)
    aa
    very fast
    acc
    accelerando (getting faster)
    l
    lento (slow)
    ll
    very slow
    rall
    rallentando (getting slower)
  • loud (for loudness):
    f
    forte (loud)
    ff
    very loud
    cresc
    crescendo (getting louder)
    p
    piano (soft)
    pp
    very soft
    dimin
    diminuendo (getting softer)
  • pitch (for pitch range):
    high
    high pitch-range
    low
    low pitch-range
    wide
    wide pitch-range
    narrow
    narrow pitch-range
    asc
    ascending
    desc
    descending
    monot
    monotonous
    scand
    scandent, each succeeding syllable higher than the last, generally ending in a falling tone
  • tension:
    sl
    slurred
    lax
    lax, a little slurred
    ten
    tense
    pr
    very precise
    st
    staccato, every stressed syllable being doubly stressed
    leg
    legato, every syllable receiving more or less equal stress
  • rhythm:
    rh
    beatable rhythm
    arrh
    arrhythmic, particularly halting
    spr
    spiky rising, with markedly higher unstressed syllables
    spf
    spiky falling, with markedly lower unstressed syllables
    glr
    glissando rising, like spiky rising but the unstressed syllables, usually several, also rise in pitch relative to each other
    glf
    glissando falling, like spiky falling but with the unstressed syllables also falling in pitch relative to each other
  • voice (for voice quality):
    whisp
    whisper
    breath
    breathy
    husk
    husky
    creak
    creaky
    fals
    falsetto
    reson
    resonant
    giggle
    unvoiced laugh or giggle
    laugh
    voiced laugh
    trem
    tremulous
    sob
    sobbing
    yawn
    yawning
    sigh
    sighing

A full definition of the sense of the values provided for each feature should be provided in the encoding description section of the text header (see section 2.3 The Encoding Description).
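Such a definition might take the form of simple prose within the editorial declaration, along the following lines (the wording is purely illustrative):

```xml
<encodingDesc>
 <editorialDecl>
  <p>The shift element is used with feature="loud" only; the
     values f, ff, cresc, p, pp, and dimin are used as defined
     by the Survey of English Usage conventions.</p>
 </editorialDecl>
</encodingDesc>
```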

8.4 Elements Defined Elsewhere

This section describes the following features characteristic of spoken texts for which elements are defined elsewhere in these Guidelines:

  • segmentation below the utterance level
  • synchronization and overlap
  • regularization of orthography

The elements discussed here are not provided by the module for spoken texts. Some of them are included in the core module and others are contained in the modules for linking and for analysis respectively. The selection of modules and their combination to define a TEI schema is discussed in section 1.2 Defining a TEI Schema.

8.4.1 Segmentation

For some analytic purposes it may be desirable to subdivide the divisions of a spoken text into units smaller than the individual utterance or turn. Segmentation may be performed for a number of different purposes and in terms of a variety of speech phenomena. Common examples include units defined both prosodically (by intonation, pausing, etc.) and syntactically (clauses, phrases, etc.). The term macrosyntagm has been used by a number of researchers to define units peculiar to speech transcripts.34

These Guidelines propose that such analyses be performed in terms of neutrally-named segments, represented by the seg element, which is discussed more fully in section 16.3 Blocks, Segments, and Anchors. This element may take a type attribute to specify the kind of segmentation applicable to a particular segment, if more than one is possible in a text. A full definition of the segmentation scheme or schemes used should be provided in the segmentation element of the editorialDecl element in the TEI header (see 2.3.3 The Editorial Practices Declaration).
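Such a declaration might, for example, read as follows (the wording is illustrative only):

```xml
<editorialDecl>
 <segmentation>
  <p>Utterances are divided into seg elements representing tone
     units; a new segment begins after each major intonation
     boundary.</p>
 </segmentation>
</editorialDecl>
```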

In the first example below, an utterance has been segmented according to a notion of syntactic completeness not necessarily marked by the speech, although in this case a pause has been recorded between the two sentence-like units. In the second, the segments are defined prosodically (an acute accent has been used to mark the position immediately following the syllable bearing the primary accent or stress), and may be thought of as ‘tone units’.
<u>
 <seg>we went to the pub yesterday</seg>
 <pause/>
 <seg>there was no one there</seg>
</u>
<u>
 <seg>although its an old ide´a</seg>
 <seg>it hasnt been on the mar´ket very long</seg>
</u>
In either case, the segmentation element in the header of the text should specify the principles adopted to define the segments marked in this way.

When utterances are segmented end-to-end in the same way as the s-units in written texts, the s element discussed in chapter 17 Simple Analytic Mechanisms may be used, either as an alternative or in addition to the more general purpose seg element. The s element is available without formality in all texts, but does not allow segments to nest within each other.
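Under this approach, the first example above might equally be encoded as follows (a sketch, assuming the same syntactically defined segmentation):

```xml
<u>
 <s>we went to the pub yesterday</s>
 <pause/>
 <s>there was no one there</s>
</u>
```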

Where segments of different kinds are to be distinguished within the same stretch of speech, the type attribute may be used, as in the following example:
<u who="#T1">
 <seg type="C">I think </seg>
 <seg type="C">this chap was writing </seg>
 <seg type="C">and he <del type="repeated">said hello</del> said </seg>
 <seg type="M">hello </seg>
 <seg type="C">and he said </seg>
 <seg type="C">I'm going to a gate
   at twenty past seven </seg>
 <seg type="C">he said </seg>
 <seg type="M">ok </seg>
 <seg type="M">right away </seg>
 <seg type="C">and so <gap extent="1 syll"/> on they went </seg>
 <seg type="C">and they were <gap extent="3 sylls"/>
   writing there </seg>
</u>
In this example, recoded from a corpus of language-impaired speech prepared by Fletcher and Garman, the speaker's utterance has been fully segmented into clausal (type="C") or minor (type="M") units.
For some features, it may be more appropriate or convenient to introduce a new element in a custom namespace:
<u who="#T1">
<!-- ... -->
 <seg type="C">and he said </seg>
 <seg type="C">I'm going to a
 <ext:paraphasia>gate</ext:paraphasia>
   at twenty past seven </seg>
<!-- ... -->
</u>
Here, <ext:paraphasia> has been used to define a particular characteristic of this corpus for which no element exists in the TEI scheme. See further chapter 23.3 Personalization and Customization for a discussion of the way in which this kind of user-defined extension of the TEI scheme may be performed and chapter 1 The TEI Infrastructure for the mechanisms on which it depends.

This example also uses the core elements gap and del to mark editorial decisions concerning matter completely omitted from the transcript (because of inaudibility), and words which have been transcribed but which the transcriber wishes to exclude from the segment because they are repeated, respectively. See section 3.4 Simple Editorial Changes for a discussion of these and related elements.

It is often the case that the desired segmentation does not respect utterance boundaries; for example, syntactic units may cross utterance boundaries. For a detailed discussion of this problem, and the various methods proposed by these Guidelines for handling it, see chapter 20 Non-hierarchical Structures. Methods discussed there include these:

  • ‘milestone’ tags may be used; the special-purpose shift tag discussed in section 8.3.6 Shifts is an extension of this method
  • where several discontinuous segments are to be grouped together to form a syntactic unit (e.g. a phrasal verb with interposed complement), the join element may be used
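For instance, the two parts of a phrasal verb separated by its complement might be grouped with a join element along the following lines (identifiers hypothetical):

```xml
<u who="#a">
 <seg xml:id="pv1">put</seg> your coat
 <seg xml:id="pv2">on</seg>
</u>
<join target="#pv1 #pv2" result="seg"/>
```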

8.4.2 Synchronization and Overlap

A major difference between spoken and written texts is the importance of the temporal dimension to the former. As a very simple example, consider the following, first as it might be represented in a playscript:
 Jane: Have you read Vanity Fair?
 Stig: Yes
 Lou:  (nods vigorously)
To encode this, we first define the participants:
<listPerson>
 <person xml:id="stig">
<!-- ... -->
 </person>
 <person xml:id="lou">
<!-- ... -->
 </person>
 <person xml:id="jane">
<!-- ... -->
 </person>
</listPerson>
Let us assume that Stig and Lou respond to Jane's question before she has finished asking it—a fairly normal situation in spontaneous speech. The simplest way of representing this overlap would be to use the trans attribute previously discussed:
<u who="#jane">have you read Vanity Fair</u>
<u trans="overlap" who="#stig">yes</u>
However, this does not indicate the extent to which Stig's utterance overlaps Jane's, nor does it show that there are in fact three things which are synchronous: the end of Jane's utterance, Stig's whole utterance, and Lou's kinesic. To overcome these problems, more sophisticated techniques, employing the mechanisms for pointing and alignment discussed in detail in section 16.5 Synchronization, are needed. If the module for linking has been enabled (as described in section 8.4.1 Segmentation above), one way to represent the simple example above would be as follows:
<u xml:id="utt1" who="#jane">have you read Vanity <anchor synch="#utt2 #k1" xml:id="a1"/> Fair</u>
<u xml:id="utt2" who="#stig">yes</u>
<kinesic xml:id="k1" who="#lou" iterated="true">

 <desc>nods head vertically</desc>
</kinesic>

For a full discussion of this and related mechanisms, section 16.5.2 Placing Synchronous Events in Time should be consulted. The rest of the present section, which should be read in conjunction with that more detailed discussion, presents a number of ways in which these mechanisms may be applied to the specific problem of representing temporal alignment, synchrony, or overlap in transcribing spoken texts.

In the simple example above, the first utterance (that with identifier utt1) contains an anchor element, the function of which is simply to mark a point within it. The synch attribute associated with this anchor point specifies the identifiers of the other two elements which are to be synchronized with it: specifically, the second utterance (utt2) and the kinesic (k1). Note that one of these elements has content and the other is empty.

This example demonstrates only a way of indicating a point within one utterance at which it can be synchronized with another utterance and a kinesic. For more complex kinds of alignment, involving possibly multiple synchronization points, an additional element is provided, known as a timeline. This consists of a series of when elements, each representing a point in time, and bearing attributes which indicate its exact temporal position relative to other elements in the same timeline, in addition to the sequencing implied by its position within it.

For example:
<timeline unit="s" origin="#TS-P1">
 <when xml:id="TS-P1"
  absolute="12:20:01+01:00"/>

 <when xml:id="TS-P2" interval="4.5" since="#TS-P1"/>

 <when xml:id="TS-P6"/>
 <when xml:id="TS-P3" interval="1.5" since="#TS-P6"/>

</timeline>
This timeline represents four points in time, named TS-P1, TS-P2, TS-P6, and TS-P3 (as with all attributes named xml:id in the TEI scheme, the names must be unique within the document but have no other significance). TS-P1 is located absolutely, at 12:20:01 (+01:00). TS-P2 is 4.5 seconds later than TS-P1 (i.e. at 12:20:05.5). TS-P6 is at some unspecified time later than TS-P2 and previous to TS-P3 (this is implied by its position within the timeline, as no attribute values have been specified for it). The fourth point, TS-P3, is 1.5 seconds later than TS-P6.

One or more such timelines may be specified within a spoken text, to suit the encoder's convenience. If more than one is supplied, the origin attribute may be used on each to specify which other timeline element it follows. The unit attribute indicates the units used for timings given on when elements contained by the alignment map. Alternatively, to avoid the need to specify times explicitly, the interval attribute may be used to indicate that all the when elements in a time line are a fixed distance apart.
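For instance, a timeline whose points are known to occur at regular 30-second intervals might be sketched as follows (identifiers and timings hypothetical):

```xml
<timeline unit="s" interval="30" origin="#TS-Q1">
 <when xml:id="TS-Q1" absolute="12:20:01"/>
 <when xml:id="TS-Q2"/>
 <when xml:id="TS-Q3"/>
</timeline>
```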

Three methods are available for aligning points or elements within a spoken text with the points in time defined by the timeline:

  • The elements to be synchronized may specify the identifier of a when element as the value of one of the start, end, or synch attributes
  • The when element may specify the identifiers of all the elements to be synchronized with it using the synch attribute
  • A free-standing link element may be used to associate the when element and the elements synchronized with it by specifying their identifiers as values for its target attribute.
For example, using the timeline given above:
<u xml:id="TS-U1" start="#TS-P2" end="#TS-P3">
This is my <anchor synch="#TS-P6" xml:id="TS-P6A"/> turn</u>
The start of utterance TS-U1 is aligned with TS-P2 and its end with TS-P3. The transition between the words my and turn occurs at point TS-P6A, which is synchronous with point TS-P6 on the timeline.
The synchronization represented by the preceding examples could equally well be represented as follows:
<timeline origin="#ts-p1" unit="s">
 <when xml:id="ts-p1"
  absolute="12:20:01+01:00"/>

 <when synch="#ts-u1" xml:id="ts-p2" interval="4.5" since="#ts-p1"/>

 <when synch="#ts-x1" xml:id="ts-p6"/>
 <when synch="#ts-u1" xml:id="ts-p3" interval="1.5" since="#ts-p6"/>

</timeline>
<u xml:id="ts-u1">This is my <anchor xml:id="ts-x1"/> turn</u>
Here, the whole of the object with identifier ts-u1 (the utterance) has been aligned with two different points, ts-p2 and ts-p3. This is interpreted to mean that the utterance spans at least those two points.
Finally, a linkGrp may be used as an alternative to the synch attribute:
<timeline origin="#TS-p1" unit="s">
 <when xml:id="TS-p1" absolute="12:20:01"/>
 <when xml:id="TS-p2" interval="4.5" since="#TS-p1"/>

 <when xml:id="TS-p6"/>
 <when xml:id="TS-p3" interval="1.5" since="#TS-p6"/>

</timeline>
<u xml:id="TS-u1">
 <anchor xml:id="TS-u1start"/>
This is my <anchor xml:id="TS-x1"/> turn
<anchor xml:id="TS-u1end"/>
</u>
<linkGrp type="synchronous">
 <link target="#TS-u1start #TS-p1"/>
 <link target="#TS-u1end #TS-p2"/>
 <link target="#TS-x1 #TS-p6"/>
</linkGrp>
As a further example of the three possibilities, consider the following dialogue, represented first as it might appear in a conventional playscript:
 Tom: I used to smoke - -
 Bob: (interrupting) You used to smoke?
 Tom: (at the same time) a lot more than this. But I never
      inhaled the smoke
A commonly used convention might be to transcribe such a passage as follows:
 (1) I used to smoke [ a lot more than this ]
 (2)                 [ you used to smoke ]
 (1) but I never inhaled the smoke
Such conventions have the drawback that they are hard to generalize or to extend beyond the very simple case presented here. Their reliance on the accidentals of physical layout may also make them difficult to transport and to process computationally. These Guidelines recommend the following mechanisms to encode this.
Where the whole of one or another utterance is to be synchronized, the start and end attributes may be used:
<u who="#tom">I used to smoke <anchor xml:id="TS-p10"/> a lot more than this
<anchor xml:id="TS-p20"/>but I never inhaled the smoke</u>
<u start="#TS-p10" end="#TS-p20" who="#bob">You used to smoke</u>
Note that the second utterance above could equally well be encoded as follows with exactly the same effect:
<u who="#bob">
 <anchor synch="#TS-p10"/>You used to smoke<anchor synch="#TS-p20"/>
</u>
If synchronization with specific timing information is required, a timeline must be included:
<timeline origin="#TS-t01" unit="s">
 <when xml:id="TS-t01" absolute="15:33:01Z"/>
 <when xml:id="TS-t02" interval="2.5" since="#TS-t01"/>

</timeline>
<u who="#tom">I used to smoke
<anchor synch="#TS-t01"/>a lot more than this
<anchor synch="#TS-t02"/>but I never inhaled the smoke</u>
<u who="#bob">
 <anchor synch="#TS-t01"/>You used to smoke<anchor synch="#TS-t02"/>
</u>
(Note that if only the ordering or sequencing of utterances is needed, the specific timing information shown here in unit, absolute, and interval need not be provided.)
As above, since the whole of Bob's utterance is to be aligned, the start and end attributes may be used as an alternative to the second pair of anchor elements:
<u start="#TS-t01" end="#TS-t02" who="#bob">You used to smoke</u>
An alternative approach is to mark the synchronization by pointing from the timeline to the text:
<timeline origin="#TS-T01">
 <when synch="#TS-nm1 #bob-u2"
  xml:id="TS-T01"/>

 <when synch="#TS-nm2 #bob-u2"
  xml:id="TS-T02"/>

</timeline>
<u who="#tom">I used to smoke
<anchor xml:id="TS-nm1"/>a lot more than this
<anchor xml:id="TS-nm2"/>but I never inhaled the smoke</u>
<u xml:id="bob-u2" who="#bob">You used to smoke</u>
To avoid deciding whether to point from the timeline to the text or vice versa, a linkGrp may be used:
<body>
 <timeline origin="#T001">
  <when xml:id="T001"/>
  <when xml:id="T002"/>
 </timeline>
 <u who="#tom">I used to smoke
 <anchor xml:id="NM01"/>a lot more than this
 <anchor xml:id="NM02"/>but I never inhaled the smoke</u>
 <u xml:id="bob-U2" who="#bob">You used to smoke</u>
 <linkGrp type="synchronize">
  <link target="#T001 #NM01 #bob-U2"/>
  <link target="#T002 #NM02 #bob-U2"/>
 </linkGrp>
</body>

Note that in each case, although Bob's utterance follows Tom's sequentially in the text, it is aligned temporally with its middle, without any need to disrupt the normal syntax of the text.

As a final example, consider the following exchange, first as it might be represented using a musical-score-like notation, in which points of synchronization are represented by vertical alignment of the text:
 Stig: This is |my  |turn
 Jane:         |Balderdash
 Lou :         |No, |it's mine
All three speakers are simultaneous at the words my, Balderdash, and No; speakers Stig and Lou are simultaneous at the words turn and it's. This could be encoded as follows, using pointers from the alignment map into the text:
<timeline origin="#TSp1">
 <when synch="#TSa1 #TSb1 #TSc1"
  xml:id="TSp1"/>

 <when synch="#TSa2 #TSc2" xml:id="TSp2"/>
</timeline>
<!-- ... -->
<u who="#stig">this is <anchor xml:id="TSa1"/> my <anchor xml:id="TSa2"/> turn</u>
<u who="#jane" xml:id="TSb1">balderdash</u>
<u who="#lou" xml:id="TSc1"> no <anchor xml:id="TSc2"/> it's mine</u>

8.4.3 Regularization of Word Forms

When speech is transcribed using ordinary orthographic notation, as is customary, some compromise must be made between the sounds produced and conventional orthography. Particularly when dealing with informal, dialectal, or other varieties of language, the transcriber will frequently have to decide whether a particular sound is to be treated as a distinct vocabulary item or not. For example, while in a given project kinda may not be worth distinguishing as a vocabulary item from kind of, isn't may clearly be worth distinguishing from is not; for some purposes, the regional variant isnae might also be worth distinguishing in the same way.

One rule of thumb might be to allow such variation only where a generally accepted orthographic form exists, for example, in published dictionaries of the language register being encoded; this has the disadvantage that such dictionaries may not exist. Another is to maintain a controlled (but extensible) set of normalized forms for all such words; this has the advantage of enforcing some degree of consistency among different transcribers. Occasionally, as for example when transcribing abbreviations or acronyms, it may be felt necessary to depart from conventional spelling to distinguish between cases where the abbreviation is spelled out letter by letter (e.g. B B C or V A T) and where it is pronounced as a single word (VAT or RADA). Similar considerations might apply to pronunciation of foreign words (e.g. Monsewer vs. Monsieur).

In general, use of punctuation, capitalization, etc., in spoken transcripts should be carefully controlled. It is important to distinguish the transcriber's intuition as to what the punctuation should be from the marking of prosodic features such as pausing, intonation, etc.

Whatever practice is adopted, it is essential that it be clearly and fully documented in the editorial declarations section of the header. It may also be found helpful to include normalized forms of non-conventional spellings within the text, using the elements for simple editorial changes described in section 3.4 Simple Editorial Changes (see further section 8.4.5 Speech Management).
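For example, a non-standard form such as kinda might be retained alongside its regularized equivalent using the core choice element with orig and reg:

```xml
<u who="#a">I <choice>
  <orig>kinda</orig>
  <reg>kind of</reg>
 </choice> like it</u>
```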

8.4.4 Prosody

In the absence of conventional punctuation, the marking of prosodic features assumes paramount importance, since these structure and organize the spoken message. Indeed, such prosodic features as points of primary or secondary stress may be represented by specialized punctuation marks, or other characters such as those provided by the Unicode Spacing Modifier Letters block. Pauses have already been dealt with in section 8.3.2 Pausing; while tone units (or intonational phrases) can be indicated by the segmentation tag discussed in section 8.4.1 Segmentation. The shift element discussed in section 8.3.6 Shifts may also be used to encode some prosodic features, for example where all that is required is the ability to record shifts in voice quality.

In a more detailed phonological transcript, it is common practice to include a number of conventional signs to mark prosodic features of the surrounding or (more usually) preceding speech. Such signs may be used to record, for example, particular intonation patterns, truncation, vowel quality (long or short) etc. These signs may be preserved in a transcript either by using conventional punctuation or by marking their presence by g elements. Where a transcript includes many phonetic or phonemic aspects, it will generally be more convenient to use the appropriate Unicode characters (see further chapters vi. Languages and Character Sets and 5 Characters, Glyphs, and Writing Modes). For representation of phonemic information, the use of the International Phonetic Alphabet, which can be represented in Unicode characters, is recommended.

In the following example, special characters have been defined as follows within the encodingDesc of the TEI header:
<charDecl>
 <char xml:id="lf">
  <desc>low fall intonation</desc>
 </char>
 <char xml:id="lr">
  <desc>low rise intonation</desc>
 </char>
 <char xml:id="fr">
  <desc>fall rise intonation</desc>
 </char>
 <char xml:id="rf">
  <desc>rise fall intonation</desc>
 </char>
 <char xml:id="long">
  <desc>lengthened syllable</desc>
 </char>
 <char xml:id="short">
  <desc>shortened syllable</desc>
 </char>
</charDecl>
These declarations might additionally provide information about how the characters concerned should be rendered, their equivalent IPA form, etc. In the transcript itself references to them can then be included as follows:
<div n="Lod E-03" type="exchange">
 <note>C is with a friend</note>
 <u who="#cwn">
  <unclear>Excuse me<g ref="#lf"/>
  </unclear>
  <pause/> You dont have some
   aesthetic<g ref="#short"/>
  <pause/>
  <unclear>specially on early</unclear>
   aesthetics terminology <g ref="#lr"/>
 </u>
 <u who="#aj"> No<g ref="#lf"/>
  <pause/>No<g ref="#lf"/>
  <gap extent="2 beats"/> I'm
   afraid<g ref="#lf"/>
 </u>
 <u trans="latching" who="#cwn"> No<g ref="#lr"/>
  <unclear>Well</unclear> thanks<g ref="#lr"/>
  <pause/> Oh<g ref="#short"/>
  <unclear>you couldnt<g ref="#short"/> can we</unclear> kind of<g ref="#long"/>
  <pause/>I mean ask you to order it for us<g ref="#long"/>
  <g ref="#fr"/>
 </u>
 <u trans="latching" who="#aj"> Yes<g ref="#fr"/> if you know the title<g ref="#lf"/> Yeah<g ref="#lf"/>
 </u>
 <u who="#cwn">
  <gap extent="4 beats"/>
 </u>
 <u who="#aj"> Yes thats fine. <unclear>just as soon as it comes in we'll send
     you a postcard<g ref="#lf"/>
  </unclear>
 </u>
 <listPerson>
  <person xml:id="cwn">
   <p>Customer WN</p>
  </person>
  <person xml:id="aj">
   <p>Assistant K</p>
  </person>
 </listPerson>
</div>

This example, which is taken from a corpus of bookshop service encounters, also demonstrates the use of the unclear and gap elements discussed in section 3.4 Simple Editorial Changes. Where words are so unclear that only their extent can be recorded, the empty gap element may be used; where the encoder can identify the words but wishes to record a degree of uncertainty about their accuracy, the unclear element may be used. More flexible and detailed methods of indicating uncertainty are discussed in chapter 21 Certainty, Precision, and Responsibility.

For more detailed work, involving a detailed phonological transcript including representation of stress and pitch patterns, it is probably best to maintain the prosodic description in parallel with the conventional written transcript, rather than attempt to embed detailed prosodic information within it. The two parallel streams may be aligned with each other and with other streams, for example an acoustic encoding, using the general alignment mechanisms discussed in section 8.3.6 Shifts.

8.4.5 Speech Management

Phenomena of speech management include disfluencies such as filled and unfilled pauses, interrupted or repeated words, corrections, and reformulations as well as interactional devices asking for or providing feedback. Depending on the importance attached to such features, transcribers may choose to adopt conventionalized representations for them (as discussed in section 8.4.3 Regularization of Word Forms above), or to transcribe them using IPA or some other transcription system. To simplify analysis of the lexical features of a speech transcript, it may be felt useful to ‘tidy away’ many of these disfluencies. Where this policy has been adopted, these Guidelines recommend the use of the tags for simple editorial intervention discussed in section 3.4 Simple Editorial Changes, to make explicit the extent of regularization or normalization performed by the transcriber.

For example, false starts, repetition, and truncated words might all be included within a transcript, but marked as editorially deleted, in the following way:
<u>
 <del type="truncation">s</del>see
<del type="repetition">you you</del> you know
<del type="falseStart">it's</del> he's crazy
</u>
As previously noted, the gap element may be used to mark points within a transcript where words have been omitted, for example because they are inaudible, as in the following example in which 5 seconds of speech is drowned out by an external event:
<gap reason="passing-truck" quantity="5" unit="s"/>
The unclear element may be used to mark words which have been included although the transcriber is unsure of their accuracy:
<u>...and then <unclear reason="passing-truck">marbled queen</unclear>
</u>
Where a transcriber is believed to have incorrectly identified a word, the elements corr or sic embedded within a choice element may be used to indicate both the original and a corrected form of it:
<choice>
 <corr>SCSI</corr>
 <sic>skuzzy</sic>
</choice>
These elements are further discussed in section 3.4.1 Apparent Errors.
Finally phenomena such as code-switching, where a speaker switches from one language to another, may easily be represented in a transcript by using the foreign element provided by the core tagset:
<u who="#P1">I proposed that <foreign xml:lang="de"> wir können
 <pause dur="PT1S"/> vielleicht </foreign> go to warsaw
and <emph>vienna</emph>
</u>

8.4.6 Analytic Coding

The recommendations made here only concern the establishment of a basic text. Where a more sophisticated analysis is needed, more sophisticated methods of markup will also be appropriate, for example, using stand-off markup to indicate multiple segmentation of the stream of discourse, or complex alignment of several segments within it. Where additional annotations (sometimes called ‘codes’ or ‘tags’) are used to represent such features as linguistic word class (noun, verb, etc.), type of speech act (imperative, concessive, etc.), or information status (theme/rheme, given/new, active/semi-active/new), etc., a selection from the general purpose analytic tools discussed in chapters 16 Linking, Segmentation, and Alignment, 17 Simple Analytic Mechanisms, and 18 Feature Structures may be used to advantage.

8.5 Module for Transcribed Speech

The module described in this chapter makes available the following components:

Module spoken: Transcribed Speech

The selection and combination of modules to form a TEI schema is described in 1.2 Defining a TEI Schema.

Notes
31
For a discussion of several of these see Edwards and Lampert (eds.) (1993); Johansson (1994); and Johansson et al. (1991).
32
The original is a conversation between two children and their parents, recorded in 1987, and discussed in MacWhinney (1988)
33
For the most part, the examples in this chapter use no sentence punctuation except to mark the rising intonation often found in interrogative statements; for further discussion, see section 8.4.3 Regularization of Word Forms.
34
The term was apparently first proposed by Loman and Jørgensen (1971), where it is defined as follows: ‘A text can be analysed as a sequence of segments which are internally connected by a network of syntactic relations and externally delimited by the absence of such relations with respect to neighbouring segments. Such a segment is a syntactic unit called a macrosyntagm’ (trans. S. Johansson).





TEI Guidelines Version 2.9.1. Last updated on 15th October 2015, revision 46ac023. This page generated on 2015-10-15T20:09:00Z.