Last spring, during a writing-about-popular-culture class on Star Trek, my students
taught me what at first seemed an insignificant piece of trivia. As many
fans of the series know, Majel Barrett Roddenberry is an important part
of the Star Trek universe. Not only was she wife and advisor of series
creator Gene Roddenberry, but she also starred as a member of the cast
in Star Trek: The Original Series (1966), later continuing this
tradition with a new character in frequent guest appearances for Star
Trek: The Next Generation. When our class read an article of hers describing
Star Trek's humanistic vision, my students quickly pointed out a lesser-known fact about her role in Star Trek: Majel had been the voice of
the ship's computer throughout all four Star Trek series.
While this piece of trivia may seem meaningless to most, for me it recalls my
earliest awareness of speech recognition technology. As an adolescent during
the '70s watching reruns of Star Trek: The Original Series,
I always marveled when the computer "listened" to the instructions of the
crew and responded back with Majel's voice. Meanwhile, contemporaneous with Star Trek, the Stanley Kubrick and Arthur C. Clarke masterpiece 2001: A Space Odyssey (1968) featured a dramatic juxtaposition of music, silence, and spoken communication between man and machine-based intelligence.
With the year 2001 finally upon us, Clarke's artificial intelligence HAL, along with
a manned mission to Mars and cold fusion, seems a little less the dream
of science fiction, although not yet science fact. Research in computer
artificial intelligence has yet to construct a reasoning machine, but today's
computer scientists would say that the computer "communicates" with the
Star Trek crew using speech synthesis, natural language processing, and
speech recognition. In fact, long before either computer scientists or
science fiction writers envisioned computers that could hear and speak,
Alexander Graham Bell expressed the desire for "a device that would make
speech visible" for the hearing impaired (Klevans 3). As was his wish,
at the research institute which bears his name, Bell Laboratories, in 1952
Davis, Biddulph, and Balashek constructed the first "voice input system"
which recognized "ten digits spoken by a single user with 100% accuracy"
(Fournier 2). Further advances from that original success in voice recognition
have led to AT&T's well-known collect calling services. During a collect
call, a mechanized voice asks the recipient of the call to answer "yes"
or "no" as to whether or not to accept charges. It takes little to extrapolate
from this one example of human/machine voice communication to conceive
of a near future where speech recognition as a means of textual production becomes not merely probable but commonplace.
Of course, the example above is more correctly described as voice recognition and not true continuous speech recognition -- the computer only has to differentiate responses within a vocabulary of two words. With continuous speech recognition, the computer must be able to recognize vocabularies of thousands, or hundreds of thousands, of words, discerning where one word ends and the next begins even though pronunciation varies from speaker to speaker and from utterance to utterance. Hence, it's no wonder that the early so-called speech
recognition programs available in the 1980s seemed limited at best. The user needed extensive training with the program before initial use and had to cope with limited vocabularies far smaller than those necessary for normal spoken conversation. Because of the difficulty in recognizing word boundaries, the user was also required to insert a tenth-of-a-second pause between each word, breaking the flow of ordinary conversation.
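To make the contrast concrete, here is a minimal sketch, in Python, of how a discrete recognizer might handle a two-word vocabulary like the collect-call "yes"/"no": each pause-delimited utterance is reduced to a feature sequence and matched against one stored template per word using dynamic time warping. The feature values and templates below are invented for illustration and describe no commercial system; real recognizers use multidimensional spectral features and statistical models, and continuous recognition must additionally find word boundaries inside an unbroken stream.

```python
# A minimal sketch of isolated-word ("discrete") recognition: compare an
# utterance's feature sequence against one stored template per word and
# pick the cheapest alignment. Illustrative only; not any vendor's method.

def dtw_distance(a, b):
    """Dynamic-time-warping alignment cost between two feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # utterance frame repeats
                                 cost[i][j - 1],      # template frame repeats
                                 cost[i - 1][j - 1])  # both advance
    return cost[n][m]

# Invented one-dimensional "loudness contour" templates, one per word;
# a real recognizer would store multidimensional spectral features.
TEMPLATES = {
    "yes": [0.2, 0.9, 0.7, 0.3],  # rises, sustains, falls
    "no":  [0.8, 0.6, 0.2],       # starts strong, falls away
}

def recognize(utterance):
    """Return the vocabulary word whose template aligns most cheaply."""
    return min(TEMPLATES, key=lambda w: dtw_distance(utterance, TEMPLATES[w]))

print(recognize([0.1, 0.8, 0.8, 0.4, 0.2]))  # prints "yes"
```

With two templates and clean pauses around each utterance, this brute-force matching is trivial; scale the vocabulary to tens of thousands of words with no pauses at all, and the need for the far more elaborate machinery of continuous recognition becomes obvious.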
A Look to the Past: An Aid to Writers with Disabilities
Given the expense of the software, with large development costs passed on to the consumer, as well as the price of early PCs, the economics of using a speech recognition system made it a viable technology for only those with the greatest need: those who would not be able to produce printed text without it. Speaker-independent voice recognition software with speaker-dependent customization and larger vocabularies ran upwards of $5,000 (Chamberlain
45-46). (1) For example, even in 1993, Dragon Dictate 2.0 had a preexisting vocabulary of more than 25,000 words and a user-definable vocabulary of 5,000 words for a price of $4,995, excluding the 386/SX or DX IBM-compatible PC necessary to run the software (Chamberlain 46). Additionally, the software
limitations of these earlier systems made them a bad alternative to keyboarding
for most individuals, since dictation speeds per minute and accuracy ratings
were well below those of the average typist. Dragon Systems claimed "average
dictation speeds of 30 to 40 words a minute" with accuracy "in the low
90's in percentage terms" for the Dragon Dictate 2.0 (Chamberlain 83).
As an expensive and inefficient mode of text production, speech recognition was deemed practical, desirable, and beneficial only by those forced to look for alternative means of textual production beyond keyboarding or handwriting. In the 1990s, research at Florida State University generated
two dissertations which posit the benefits of speech recognition for those
with disabilities or impairments. The first and most informative, Michael
A. Chamberlain's Computerized Voice Recognition Systems and Their Application
to the Mobility Impaired (1993), describes the evolution of speech
recognition technology, explores the commercially available products at
the time of the study, and conducts a case study of eighteen mobility-impaired individuals who lack the physical capability to use the typical computer input interface, i.e., the mouse and keyboard.
Of these three areas, the most significant conclusions arise from his case
studies of individuals with repetitive stress injuries, multiple sclerosis,
muscular dystrophy, fibromyalgia, and paralysis: "In 16 of 18 case studies,
subjects said that computerized speech recognition had made a real difference
to the quality of their lives, and the two others said, 'it hasn't yet,
but it probably will'" (Chamberlain 201). Additionally, all members of
the case study "reported that using computerized voice recognition systems
had increased the amount of communication 'messages' they sent" (Chamberlain
202). Chamberlain rightly concludes from his data that mobility impaired
individuals using speech recognition have become more active participants
in society through their ability to communicate via email coupled with
the opportunity to return to the work place or college with the ability
to produce text. Psychologically, for the individuals in the study, speech recognition has led to richer, fuller lives by reducing feelings of inadequacy.
The second dissertation from Florida State University, Sheryl Lee Day's Computerized
Voice Recognition System Effects on Writing Skills of Community College
Students with Learning Disabilities (1995), also provides evidence
of the benefits of using speech recognition technology. Students with learning
disabilities often have trouble getting language onto paper because of
problems such as "spelling, grammar, punctuation, organization, and coherency"
(Day 4). However, speech recognition would appear to circumvent many of
these difficulties. Using holistic ratings and examination of text features
within the writings of four community college students, Day found:
dictation as a method of text production resulted in two of the three students
writing essays judged to be a better quality than essays written with a
word processor when rated holistically by two independent raters
. . . data indicate computerized voice recognition can assist some students
with learning disabilities to compensate for written language difficulties.
Considered together, Day's and Chamberlain's studies illustrate that making
speech recognition technology available in schools, universities, and the
workplace could help individuals who feel incapable or disadvantaged in
such environments become better able to participate in writing activities
which most people take for granted. In an era in which we question the
effects of computer technology on literacy, it is promising to find a positive
benefit: speech recognition may offer increased access to members of society
previously marginalized by limited written communication skills.
The Future Becomes the Present
Looking back over the incredible technological advances during the digital
revolution of the last twenty-five years, most of us can see, with or without
the help of science fiction visionaries such as Roddenberry and Clarke,
that software and hardware design should soon make improved versions of
speech recognition economically feasible and consequently more accessible
to larger portions of society, rather than only a forced-choice option for those without other means of creating text. One can easily imagine
the following scenario in the not too distant future:
After dinner, Megan prepares to write a short report due the next day for her
5th grade social studies assignment. After spending about an hour on the
computer, she walks away with her report in hand, having discovered the
information she needed fairly easily on the Internet. However, Megan differs
from today's students in that her hands never touched the keyboard or mouse
in preparing her text; the computer was controlled and her text was created
with the sound of her voice.
What many people are only just starting to realize is that today, with a moderately
priced computer (under $1000) and less than $100 in software, the creation
of text through voice is now a reality; Megan's brief narration above no
longer belongs to the realm of science fiction, nor is it wishful thinking.
Word processing with speech recognition capabilities is within financial
reach of the average American family, for in 1997 Dragon Systems released
the first continuous speech recognition program capable of dictation speeds
of upwards of 160 words per minute. (2)
Admittedly, most of us who are already comfortable with the keyboard may never switch
over to speech recognition as a method of generating text. I personally
have found the act of composing via speech disconcerting, although this
seems a reasonable response given the hundreds of hours of composing by handwriting and typing which have conditioned me to write without speech. Generations in our culture like my own, already educated under the influence of pre-digital print literacy, may never switch over to speech recognition as a means of
textual production, unless forced to by development of chronic disabilities
such as carpal tunnel syndrome or arthritis. Then speech recognition may
bring a welcome relief from the pain of typing and using the mouse.
Still, we must remember that digital literacy offers new standards, and it may
only be a matter of time before speech generated text becomes an indication
of high digital literacy. New generations of students already proficient
in uses of technology (which many of us had to acquire as adults, either
willingly or reluctantly, but generally with more difficulty than if assimilation
of the new technology had occurred earlier in our lives) will not be as
predisposed to keyboarding. As Walter Ong explains, "oral speech is fully
natural to human beings" whereas "the process of putting spoken language
into writing is governed by consciously contrived, articulable rules" (82).
For them, moving the fingers across the keys may seem extremely unnatural
and time consuming compared to speaking to the computer. Children learn
to speak not long after they can walk; typing is an artificial form of
communication created by man, and one which they may decide that they can
live without. All of us who consider ourselves accomplished typists, these days otherwise known as keyboarders, remember the painful and frustrating hours required before we reached satisfactory levels of ability.
Ruthann Dirks and Marvin J. Dirks's study "Introducing Business Communication
Students to Automated Speech Recognition: Comparing Performance Using Voice
Input and Keyboarding" (1997) questions whether educators should continue teaching keyboarding; it indicates that speech recognition may be easier and faster to learn. After completing a 1½-hour tutorial using DragonDictate, 20% of the students achieved higher speeds with the dictation software than through typing (Dirks 154). Significantly, DragonDictate is an older, discrete-speech version with much less accuracy and
speed than current systems; 20% may be an extremely conservative figure
for the results which might exist were the study conducted with current
software and hardware.
It is not hard to imagine that school children might choose composing via voice
over keyboarding. Writing is an alien, mechanical means of recording language,
a medium requiring a conscious physical exertion during composing in the
early stages of acquisition. Speaking to the screen should seem more natural
to almost anyone without the investment of training and practice in handwriting
and keyboarding. While I must admit that composing through speech seems
somewhat awkward since I have internalized the rules and effort required
to make writing automatic, I still continue to work with the software on
occasion, intrigued by the idea that I might one day have the ability to
speak at over 100 words per minute and have text appear on the screen.
For future generations, though, this decision won't be a matter of choosing
whether to acquire speech recognition skills after learning to type:
as former keyboarding teacher Patrick J. Highland suggests, keyboarding
may become a more specialized skill, "used more for editing and less for
initial input of a document" (Highland 32).
Whether compositionists choose to explore this new method of textual production
or not, computer manufacturers will continue to develop it for the marketplace.
Software developers foresee a future in which more user-friendly computers
will be controlled only by speech. As well as producing text, newer speech
recognition programs already allow the user to control most Windows applications,
including Internet browsers and email. Meanwhile, word processors and other
common applications no longer benefit from increases in processing power.
Because hardware manufacturers depend upon advances in software design to
fuel the need for better hardware, cpu-intensive technologies such as speech
recognition and natural language processing continue to strengthen the
symbiotic relationship between software development and hardware advance.
I predict that it won't be long before speech recognition software comes
free of charge on every Dell, Gateway, or Compaq sold in an effort to inculcate
a new consumer base who will be partially or completely dependent on speech
recognition for textual generation. Fully recognizing the demographics
of their market and realizing that potential users need to be trained young,
Dragon Systems, the leading manufacturer of speech recognition technology
over the past 20 years, includes speech models specifically tailored for
children nine years old and up as well as adults within their latest versions.
While the parents may never use it, their children will delight in the
magical product which allows them to create text on the screen without
keyboarding. These same children will bring assignments home from school and find writing through typing tedious, while speaking their essays onto the screen will be an ease and a joy.
Implications for Composition: Revisioning Speech and Writing
As composition specialists highly trained in the art and skills of writing,
most of us are content in our ability to generate text through keyboarding
and may even feel threatened by the introduction of oral discourse into
the composing process. Nevertheless, in a recent article in Kairos,
Stanley Harrison points out that we must become involved in the identity
construction of the "SoundWriter cyborg," a "machine/organism system" featuring
"a speaking human," "a microphone," "a voice-to-text computer program,"
and a "text-to-voice program" (2). While acknowledging the "incredible
and, heretofore, unexpressed potential" of the SoundWriter, Harrison warns
that "corporate, popular, and educational communities have brought a consistent
and narrow set of concerns to bear upon our understanding of the ASR/human
interface" (2-3). Yet, compositionists, "who have, heretofore, been silent
on the matter of SoundWriting," must become involved in this identity discussion.
While I am less concerned by the fearful tone that Harrison's essay raises as to the hegemonic domination of this new cyborg construction, I do believe
that compositionists need to begin considering the influences that this
method of textual production (3) will have upon
our students and the way in which it requires us to revision composition.
Whether we see the digital and electronic age as a move toward Ong's secondary
orality or not, we need only recall the impact of word processing on the
writing process to glimpse the magnitude of the impact which speech recognition
as a means of textual production may have upon both the field of composition
and definitions of literacy, now and in the future. To fully understand
the implications and potentials of speech recognition, we need to move
beyond the threat that this technology poses for the status quo of print
literacy and the historical primacy of writing over speech that Jacques
Derrida has sketched out in Of Grammatology and grant textual generation
through speech an equal consideration in composition theory. As we
create syllabi for writing classes, read composition journals discussing
old and new theories, and devote hundreds of hours each year to commenting
on and evaluating student papers, we should acknowledge our
natural biases resulting from our constant involvement with and love for writing, and instead see each medium as different but equivalent.
I would suggest attempting to mediate these boundaries imposed by our predilections
for one medium over another by looking to the work of linguistic studies
of the 1980s as a lens for reexamining conceptions of speech and writing.
For example, in response to naive comments about speech as inferior to
writing, M. A. K. Halliday draws upon his experience with spoken discourse
in pointing out that some speech uses extremely complex sentence structures
(58). Expanding on these initial observations, Halliday demonstrates
that written text has greater lexical density--the "proportion of content
words to total discourse"--and thus we might incorrectly assume that "writing
is more complex, since presumably lexical density is a form of complexity"
(62). Yet, under certain circumstances, such as spontaneous spoken discourse,
speech demonstrates greater "grammatical intricacy" than writing (65-66).
This complexity depends upon the type of discourse, just as with writing,
and therefore we should "question the assumption that written language
is syntactically more complex" (66).
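Halliday's lexical density measure is simple enough to sketch in a few lines of Python. The function-word list below is a tiny invented stand-in for a full inventory, so the resulting figures only illustrate the written/spoken contrast:

```python
# A rough sketch of Halliday's lexical density: the proportion of content
# words to total words. The stopword set is a small invented stand-in.

FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "if", "of", "to", "in", "on",
    "is", "are", "was", "were", "it", "that", "this", "we", "you", "i",
    "they", "us", "well", "no",
}

def lexical_density(text):
    """Fraction of words in text that are content (non-function) words."""
    words = [w.strip(".,;:!?\"'").lower() for w in text.split()]
    words = [w for w in words if w]
    content = [w for w in words if w not in FUNCTION_WORDS]
    return len(content) / len(words) if words else 0.0

written = "The committee's rejection of the proposal surprised all observers."
spoken = "Well, you know, they said no to it, and that surprised us."
print(round(lexical_density(written), 2))  # higher: mostly content words
print(round(lexical_density(spoken), 2))   # lower: many function words
```

On such a measure the written sentence scores far denser than its conversational paraphrase, even though, as Halliday argues, the spoken version may weave its clauses together with greater grammatical intricacy.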
While continuing to establish the equivalency of speech and writing, other linguistic
studies explore the differences between the two. Deborah Tannen points
out a "recurrent hypothesis" explaining the relationship, or rather differences,
between writing and speech (3). Whereas "spoken discourse is highly context bound," "writing is decontextualized" (3). Tannen explains
that while this hypothesis is informative, conclusions here are generated
from the differences in "the genres selected for analysis--casual conversation
on one hand and expository prose on the other" (3). Similarly, Halliday agrees, pointing out that previous assumptions of speech as more flawed, more simplistic, may be based upon inappropriate models where speech is consciously planned: listening as he speaks "to check the outcome . . . naturally" makes the speaker "lose [his] way: to hesitate, back up, cross out, and stumble over words" (68). On the other
hand, in spontaneous discourse such as conversation, "the clause complexes
tend to flow smoothly without you falling down or changing direction in
the middle" (Halliday 68). Martin Nystrand writes that it is a fallacy
to distinguish "written and spoken language in terms of the autonomy of
the text. . . . cohesion results not when language is written but rather
when language is put to particular uses, especially those uses which bridge
discrepancies in writer-reader knowledge, as in expert-layman communication.
Language is not composed because it is internally cohesive" (201).
He concludes that "it is clearly a mistake to associate the spontaneity
of casual talk with fragmented expression, and equally wrong to confuse
elaborateness of text with fullness of meaning" (211).
Moreover, linguists have redefined the differences between writing and speech through
a clearer understanding of cohesion. Horowitz and Samuels explain that
"cohesion is expressed through dexis (referring to items outside of a discourse
or text) but also through prosodic cues (the pitch, stress, and pauses
expressed in language) and paralinguistic devises (such as facial expressions,
lifts of the eyebrow, smiles or frowns, or body language such as pointing
or distancing oneself from a listener)" (7). Tannen agrees with this description of spoken discourse while defining writing as having "lexicalization
and complex syntactic structures" which "make connectives explicit" with
"subordination and other foregrounding or backgrounding devices" establishing
"relationships between propositions" (3). One would then conclude that
"oral strategies . . . make maximal use of context by which maximal meaning
and connective tissue are implied rather than stated, and "literate strategies"
are "those by which maximal background information and connective tissue
are made explicit" (3).
Speaking to Invention: Freespeaking
Once we can see speech as an equivalent, although different, discourse -- and
in doing so, recognize the close parallels between speech and writing --
it becomes possible to imagine more uses for speech recognition in composition;
for example, its potential application to invention. In one of the few
composition studies that links oral discourse to effective composition,
Christian Koch argues that common invention strategies such as "start now
and keep going; write in chunks; start anywhere; don't worry about language
and mechanics while you write; write more about your focus" are no more
than "written versions of behavior typical of speech" (64). Koch sees these
oral strategies as "content generators," while also noting that spoken discourse is a "powerful stimulus" to the production of text, and because
"we cannot revise what we have said," we must "go on producing words until
we feel that we have expressed the idea, or solved the problem, that we
had in mind" (68).
Koch's work is important for more than just breaking down the boundaries between speech and writing, or understanding how oral strategies are implicit in traditional and current invention strategies. These thoughts
also can help us reread current pedagogical theories about invention, readings
that suggest positive implications for using speech recognition for content
generation during early drafting. In summarizing Ken Macrorie's feelings,
Erika Lindemann explains that freewriting "produces honest writing, writing
that is free from phoniness or pretension. The writer must write fast enough
to use 'his own natural language without thinking of his expression'" (Lindemann
58). Assuming that natural language occurs in thought at the same speed as spoken conversation, neither handwriting nor typing is fluent enough
to produce this honest flow of writing. Additionally, because the writer
is engaged in a text production process when typing or handwriting, cognitive
processes may reimage thought into an appropriate form for writing, even
when following Peter Elbow's advice to "simply write" and to never stop
to think about it (qtd. in Lindemann 58). Writing without thinking is contrary
to every other form of writing that we do. During composing, we think about what we are going to say, and watch to see if the sentences are formed correctly,
spelled correctly, logically sensible, and well-organized. When freewriting,
we must consciously control these urges to compose text. In spoken conversation,
most of us do not compose as we speak with the same cognitive processes--I,
for one, have often been guilty of speaking without thinking. Speech recognition frees the user to speak as in conversation, realizing the spirit of Elbow's theories of freewriting in a way never possible before.
Let me suggest, then, that freespeaking -- applying the concepts of freewriting
to the generation of text with speech recognition -- may offer increased
potential over freewriting in content generation during writing. Consider
how freespeaking would seem to better satisfy some of these tenets
of freewriting expressed above and in Pat Belanoff, Peter Elbow, and Sheryl
Fontaine's definition in their introduction to Nothing Begins with N.
As they explain, in spontaneous spoken discourse we do not often "think
about spelling, grammar, and mechanics" (xii); additionally, because this
is a natural way of talking, a writer new to freespeaking might
acquire this method more easily, whereas the freewriter may be rebelling
against her/his sense of writing as a polished, constructed piece of text.
And freespeaking may decrease the tendency of freewriting not to "make sense or be understandable" (xii), since this quality of freewriting
may occur as the writer attempts to transfer thought into text using the
mechanical means of handwriting or typing; some of the confusion in the
freewritten product may be more of a bottleneck problem, a lack of bandwidth
for this transfer.
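A back-of-the-envelope calculation makes the bandwidth metaphor concrete. The dictation and typing rates below are the figures cited elsewhere in this essay; the handwriting rate is an assumed typical value, not drawn from the sources above:

```python
# Rough words-per-minute rates: the dictation and typing figures are
# those cited in this essay (160 wpm continuous dictation, 30-40 wpm
# typing); the handwriting rate is an assumed typical value.
RATES_WPM = {"handwriting": 20, "typing": 40, "continuous dictation": 160}

MINUTES = 10  # length of a typical freewriting session
for mode, wpm in RATES_WPM.items():
    print(f"{mode:>20}: about {wpm * MINUTES} words in {MINUTES} minutes")
```

If thought really does run at conversational speed, handwriting and typing discard most of that flow in transit, while dictation narrows the gap considerably.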
Other studies of freewriting suggest additional potential benefits for freespeaking over freewriting. Fontaine states that even when encouraged
to be exploratory and ignore the rules of written discourse, students
remain "close to common convention" (10). She further explains that this
may be in part because we have succeeded so well at teaching students discursive
structures that "they cannot write without using them to organize their
thoughts" (11). I can imagine that using freespeaking may give students
that extra freedom to generate content, a liberation from the restrictions
imposed by years of structural approaches to composition based upon print
through freespeaking may also arise through the rhythms present in the
free flow of thought which are often carried over into spontaneous spoken
discourse. Ken Macrorie writes that he often hears and reads of "an emphasis
on the need for student writers to pick up the rhythms of ordinary speech
in their writing. . . . that powerful rhythms can't be planned for sentences but usually arise from meanings that count for the writer" (186). Are these
rhythms transcribed onto the page as if by magic, or is there another way?
Macrorie assumes that "rhythms, metaphors, analogies, powerful sound effects,
brilliant connections between ideas and objects: all these at their best
are given by the unconscious more often than they are planned and forecast
and contrived" (186). Given the supposition that such prose cannot be taught
as if by skill, even student writers, rarely the masters of written discourse,
might produce these qualities more often when composing initial drafts
through speech, their normal and more familiar medium of communication.
in "The Shifting Relationship between Speech and Writing" when discussing
nine reasons "why writing needs to be like speech," Elbow admits that "speech"
has "a magic that writing lacks--call it presence, voice, or pneuma--but
the truth is that we tend to experience meaning somehow more in
spoken words than written ones" (298). While some of the magic of speech
may be tied to paralinguistic communication and through a speaker's attention
to the perceived audience, it's quite possible that some "magical" meaning
making may occur during the intensity of spoken discourse. And, like Macrorie,
Elbow feels "the best writing has voice: the life and rhythms of speech.
Unless we actively train our students to speak onto paper, they
will write the kind of dead, limp, nominalized prose we hate--or say we
hate" (291). Notice here that the italicized words are Elbow's notations,
not my own, emphasizing the importance of spoken discourse in composition.
Also, Elbow points out the common assumption that "the function of writing
is to record what we have already decided--not to figure out whether
we believe. . . . If we were speaking, we would be more likely to speak
the train of thought as it comes to mind even though we're not sure of
our final opinions" (287). Later, he compares "spontaneous writing" to "freewriting,"
noting that they are both writing "in which we put down whatever words
come to mind--without regard to the conventions of forming words and without
regard to the quality of the writing" (290).
These considerations may enlighten pedagogical practice as well, suggesting
that applications of speech recognition as invention in the classroom may
benefit developmental writers. While studying the prewriting techniques
of 11th grade students, Carol Pope and Doris L. Prater found that students
preferred freewriting over "outlining, looping, clustering, or cubing,"
although "advanced and average groups gave freewriting a high ranking,
[while] basic students gave only an average ranking to this strategy" (67).
I would conclude that this is most likely because the mechanics of writing
inhibited the process--thus the reason that the basic students "ranked
talking as their highest rated strategy" (Pope and Prater 67). Also, Pope
and Prater point out a need to "convince basic students of the value and
worth of" prewriting strategies, since they used these heuristic tools
less than the average and advanced groups (70). Isn't it possible that, since the other tools involve writing, with which basic writers struggle, more basic writers find less advantage in using them, that they doubt the benefits of invention as we define it? Freewriting requires tearing down the inhibitions, letting oneself go free to generate the content within the mind. How can unskilled writers to whom the tool is not second nature reach a level of comfort where they are not struggling with the mechanics? Might the tendency of basic writers to create simple sentence structures result from difficulties with the mechanics of creating text, and not from an inability to form more complex syntactical language constructions in their heads?
Halliday's observations of complex sentences in spoken discourse might
lead us to conclude this. The process of transferring thought to writing,
the necessity of slowing down, might cause students to have to condense
thoughts into generalizations, or even lose their ideas altogether. This
is why we advise them to freewrite and worry only about generating content
in early drafting. For some students, the mechanics of typing or handwriting
may be the only stumbling block; on the other hand, as Reece and Cummings point out, when using computer "speech-based composition" the writer can compose at the rate of speech instead of the rate of handwriting or typing (264). Thus,
students composing orally may end up with greater detail, greater sentence
complexity, and more text, a theory supported by Reece and Cummings's work.
Their study concludes that speech-based composition by young students using a listening word processor (4) (which allowed them to see the results on the screen as they spoke) produces better writing than either standard dictation or normal writing (380).
Even given doubts that freespeaking may be more beneficial to textual production
than freewriting, the possibility exists that speech recognition may have
increased benefits for generating a stronger voice, better and more content,
and a superior writing style. As Halliday concludes, "speech and writing
will appear, then, as different ways of meaning: speech as spun out, flowing,
choreographic, oriented toward events (doing, happening, sensing, saying,
being), processlike, intricate, with meanings related serially; writing
as dense, structured, crystalline, oriented towards things (entities, objectified
processes), product like, tight, with meanings as related components" (80).
At the very least, speech recognition may invest the text with desirable
elements of orality which writing sometimes has difficulty emulating, a
minimal justification for compositionists to begin thinking about this
new method of producing text.
Ultimately, the choice of whether or not to use the new technology is not ours to make.
Considering how quickly students have assimilated the tools of electronic
discourse during the past decade, we don't have to look to science fiction
writers such as Roddenberry and Clarke to imagine a ubiquitous use of speech
recognition by the middle of this century. Without consciously or unconsciously presuming the superiority of the written text, compositionists must see
spoken discourse as a rich source for generating content during invention
work while concurrently expanding our comprehension of the composing process.
If nothing else, this essay should indicate a need for research in understanding
spoken discourse composition, both to prepare us to better understand freespeaking
and the newer generations using speech recognition. Such students will
have closer ties with orality in their sense of what writing is and where
it comes from. Their shift in perspective will become our new paradigm
for composition: writing as a process will become discourse as a process,
and compositionists will become comfortable teaching and discussing the
development and evolution of oral, written, and hybrid texts.
Assuming that speech recognition will enable students to generate better
content during invention via freespeaking, if we begin now, we can
concentrate our efforts on other compositional problems, for, as Rob Enderle,
director of desktop and mobile technology at Giga Information Group, positively forecasts, "In five years[,] you'll look back and wonder why you ever used
a keyboard to type" (Randall 2).