- When my
department offered me a second section of the course I was to
teach, I was delighted. When the secretary warned me that, unlike
my first section, this addition would not be in a computer classroom,
I thought nothing of it. Only later, when I began designing
my syllabi, did I realize that I might be facing some dilemmas.
How was I going to teach the same content in such dissimilar
environments? Would my students get the same education and skills
if I were not extremely careful in setting up the courses?
- I confronted
these concerns when, in Spring 2001, I taught two sections of
the same advanced composition course, one meeting early in the
morning in a traditional classroom (TC) and the other meeting
in the mid-afternoon in a computer-assisted (CA) environment.
The populations were approximately 96% and 86% female respectively,
perhaps due to the course topic: The Rhetoric of Fairy Tales.
To enroll in this class, which required at least 16 pages of
written work, students had to have taken or received credit
for a basic rhetoric and composition course. Pupils from a diverse
mix of majors took my class as an elective to fulfill graduation
requirements.
- I should
note that, when I set up the courses, I had no intention of
using them as research subjects. Thus I write this essay in
hindsight, wishing I had gathered more "data" over the semester,
and I ask myself several questions. First, what did I do differently
in terms of constructing the two classes? Second, did the computer
environment affect the quality of students' experience as evidenced
by course-instructor evaluations? Finally, did final course
grades vary significantly between the sections? Here I present
a narrative both ethnographic and statistical to determine differences,
if any, between a traditional and a computer-assisted composition
class.
Class Construction
- In an
effort to simplify my own life as well as to give pupils roughly
equivalent experiences, no matter which section they took, I
decided to use the same course packet and content. Differences
stemmed from the ways in which I delivered that content. For
example, pages on the TC Web site were almost identical, or at least comparable in wording, to those on the CA Web site. Each included a policy statement, links to
Internet resources, the common syllabus, assignments, and grading
criteria. In-class exercises and guidelines for assignments
such as presentations, quizzes, topic proposals, and papers
often took dissimilar forms due to the method of information
delivery.
- All students
received much of their information, whether from me or their
peers, in the form of hard-copy handouts. Instead of using transparencies on an overhead projector, however, the CA class created Microsoft PowerPoint presentations and viewed them via an InFocus LCD (Liquid Crystal Display) projector. This hardware
allows an individual to project a digital image from a computer
monitor onto a wall screen for the entire room to see. I set
aside part of one CA class meeting to go over the PowerPoint
program (for example, the use of backgrounds and how to insert
text and graphics). The PowerPoint medium was appropriate, especially
during the course's final unit on visual rhetoric, since it
allowed for easy display of color images found on the Web.
- I always gave quizzes in the traditional classroom on paper, whereas in the CA section quizzes were sometimes on paper and sometimes electronic: in the latter case, students typed their answers to the quiz prompts into an email message to me. When giving an "electronic" quiz, I did allow students slightly more time than their pen-and-paper counterparts had, since several had expressed outright terror that their slow keyboarding would destroy their grades.
- In terms of essay assignments, I handed out hard copies to TC students so we could go through the details together; I asked CA students, however, to log onto a classroom computer and follow along on the appropriate page of the course Web site. Finally, students in the CA section had the option of
turning in their essays in hard copy or electronically, the
latter by bringing a file copy of the paper to class on disk
and transferring it to a local server. As a matter of fact,
I asked that they turn in at least one of the three essays electronically,
to have the experience of doing so, but I did not make it an
absolute requirement.
- That, in summary, was how I constructed the courses. To investigate further the impact of traditional vs. computer-aided classrooms, I decided to look first at the course evaluations, for signs that students were more (or less) satisfied with one environment than with the other.
Course-Instructor Evaluations
- The University
of Texas at Austin offers a variety of anonymous, Scantron questionnaires.
The
expanded form (E), which was used by students in both my
sections, consists of 17 statements about the instructor and
the course, asking participants to rate their responses on a
Likert scale of (1) strongly disagree, (2) disagree, (3) neutral,
(4) agree, or (5) strongly agree. Averages are then generated
according to the response's weight on the 5-point scale. An
additional two questions ask for overall course and instructor
ratings respectively, and the final three items cover workload,
approximate overall grade-point average, and the student's estimate
of his/her grade for the course.
- On the
day of my course evaluations, several students in each section
were absent. Furthermore, not all of those present elected to
fill out the questionnaire, and some students replied to some
but not every item. In the TC class (total enrollment 24), 18
of the 19 students present returned evaluations, while in the
CA class (total enrollment 21), 16 of 17 of those present participated
in the evaluation. The table below gives results for 19 of the course-instructor survey items: the means, standard deviations (sd), and calculated t-values, which indicate whether the difference between section means is statistically significant, that is, attributable to something other than chance.
Table 1
Course evaluation means and t-values

| Evaluation category | TC mean (sd) | CA mean (sd) | t-value |
| --- | --- | --- | --- |
| Course well-organized | 4.7 (0.57) | 4.6 (0.50) | 0.53 |
| Communicated effectively | 4.7 (0.49) | 4.6 (0.81) | 0.42 |
| Showed interest in progress | 4.5 (0.86) | 4.6 (0.50) | -0.41 |
| Assignments returned promptly | 4.8 (0.38) | 4.8 (0.45) | 0.00 |
| Freedom of expression | 4.7 (0.46) | 4.5 (0.92) | -0.74ᵃ |
| Assignments clearly stated | 4.6 (0.51) | 4.3 (0.87) | 1.17 |
| Instructor well-prepared | 4.7 (0.46) | 4.4 (0.63) | 1.52 |
| Instructor had thorough knowledge | 4.9 (0.32) | 4.8 (0.45) | 0.72 |
| Genuinely interested in teaching course | 4.8 (0.38) | 4.8 (0.45) | 0.00 |
| Availability outside class | 4.6 (0.50) | 4.4 (0.63) | 0.99 |
| Student performance evaluated fairly | 4.1 (0.83) | 4.2 (0.75) | -0.36 |
| Adequate instructions for assignments | 4.6 (0.50) | 4.4 (0.72) | 0.90 |
| Course made educationally valuable | 4.3 (0.67) | 4.4 (0.81) | -0.38 |
| Increased student knowledge | 4.6 (0.62) | 4.4 (0.81) | 0.77ᵃ |
| Intellectually stimulating | 4.3 (0.91) | 4.2 (0.94) | 0.30ᵃ |
| Assignments usually worthwhile | 4.2 (0.88) | 4.1 (1.02) | 0.29ᵃ |
| Course of value to date | 4.3 (0.83) | 4.0 (0.97) | 0.93 |
| Overall instructor rating | 4.3 (0.67) | 4.3 (0.77) | 0.00 |
| Overall course rating | 4.0 (0.77) | 3.9 (0.89) | 0.34 |

ᵃ df = 31 instead of 32 (18-1 plus 16-1) because a respondent elected not to answer the prompt.
- The table above indicates a definite trend: in 13 categories, the TC section means were 0.1 to 0.3 points higher than the corresponding CA averages. To determine significance, however, I had to take into account the degrees of freedom (df), that is, the number of respondents in the two sections combined minus two, along with the t-values for this data set. I checked these numbers against a standard statistical reference table and found that, at a confidence level of 95% (that is, accepting that 5 times out of 100 a difference in means this large would arise by chance), the differences between my computer-aided and traditional class evaluations showed no statistical significance. In other words, results like mine would be expected more than 5 times out of every 100 such samples, and the fractional differences in response means cannot be attributed to the difference in classroom setting.
Grading
- Student evaluations of my course (and of me) showed no statistically significant variation, but what about my evaluations of their work? The table below gives the mean grades, out of 100, for the three major papers and for the course as a whole, by section.
Table 2
Mean grades across sections

| Section | Paper 1 | Paper 2 | Paper 3 | Course Grade |
| --- | --- | --- | --- | --- |
| TC | 79.2 | 85.1 | 85.2 | 84.1 |
| CA | 72.3 | 83.4 | 77.6 | 78.9 |
The means for TC students ranged from 1.7 to 7.6 points higher than those for the CA section, with the largest margins on the first and third papers. The mean final course grade worked out to a "B" for the TC section and a "C+" for the CA section. Just as their evaluation means were slightly higher, the TC pupils earned higher grades across the board. When I calculated significance following the same procedure outlined above for the course evaluations, I again found no statistically significant variation; the differences in mean grades could be due to chance.
- Yet when I looked more closely at the breakdown of final grades, I discovered that 37.5% of TC students earned an "A" whereas only 9.5% of the CA class made that grade, a difference too large to dismiss as mathematical chance. I used the same grading rubric for all pupils, although admittedly assigning marks can be a subjective enterprise. In retrospect I wonder whether I should have employed modified grading criteria in the computer-assisted classroom, since these students faced another hurdle in completing assignments: familiarizing themselves with the electronic formats. Instructors who use the Learning Record Online (LRO), a Web-based, developmental portfolio model, frequently include in their grading schemes a category on improvement in technology use, for example. At the time, because I went over computer and Internet use step by step, because I marked assignments on content more than anything else, and because I was unsure of how to modify evaluation standards, I held both classes to the same criteria.
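- The paragraph above does not name a particular test for that comparison, so the sketch below shows one common option, a pooled two-proportion z-test, applied to the counts implied by the reported percentages and enrollments (9 "A"s among the 24 TC students, 2 among the 21 CA students). Those counts are inferred from the percentages rather than pulled directly from my grade book, and with numbers this small an exact test such as scipy.stats.fisher_exact would be a more conservative alternative.

```python
# A rough, assumption-laden check of the "A"-rate gap using a pooled
# two-proportion z-test. Counts are inferred from the reported 37.5% and
# 9.5% figures and the section enrollments of 24 and 21.
from math import sqrt
from scipy.stats import norm

a_tc, n_tc = 9, 24    # "A" grades and enrollment, TC section
a_ca, n_ca = 2, 21    # "A" grades and enrollment, CA section

p_tc, p_ca = a_tc / n_tc, a_ca / n_ca
p_pool = (a_tc + a_ca) / (n_tc + n_ca)                    # pooled "A" rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_tc + 1 / n_ca))  # standard error of the difference
z = (p_tc - p_ca) / se
p_value = 2 * norm.sf(abs(z))                             # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")  # a p-value below 0.05 would support the reading above
```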
- Especially since a "B" differs greatly from a "C," both in students' minds and in my grading criteria, I have to wonder about the differences in the grades students received. A first impulse might be to attribute the (non-significantly) higher evaluation means to the (non-significantly) higher grades, but a recent literature survey by Aleamoni argues that grades correlate weakly, if at all, with course evaluations (158). What about the learning environment? Might the CA students have earned slightly lower marks, and found it harder to make an "A," because of the computer-assisted classroom?
- To answer
this question, at least in part, I turned again to the official
university course-instructor survey. Because it asks students
to locate their approximate overall GPAs within specified ranges,
such as 3.00 to 3.49 or 3.50 to 4.00, it is possible to compare
performance in my course with broader academic achievement.
In the TC section, 76% of the respondents reported a GPA of
3.00 and above, and 74% of the entire class earned grades of
"B" or better in my course. Among the CA pupils, 56% of the
survey participants reported 3.00 and higher as their cumulative
GPA, and 57% of the students received at least a "B" in my class.
In other words, students earned grades for my course that correspond
to their general performance at the university. A computer-assisted
environment, at least as I used it, gave pupils neither a discernible
advantage nor a disadvantage. As an interesting side note, the
university-wide results for the expanded form in Spring 2001
show 74% of respondents claiming an overall GPA of 3.00 or higher.
This figure suggests that, for whatever reason, the TC population
may have been more representative of the entire student body
than the CA class.
Conclusions
- I taught
composition exclusively in computer-aided classrooms for three
semesters before Spring 2001 and had loved the multimedia capabilities
afforded by such environments. When I taught TC and CA sections
at the same time, however, I found myself examining my computer-assisted
tactics (and student responses to them) much more critically.
- In terms of the different delivery of course content and assignments, I had mixed feelings about using PowerPoint, email quizzes, and online submission of topic proposals. Learning to design presentations in PowerPoint, a skill increasingly in demand by employers in many fields, had value for students and will likely continue to have it. As a means of conveying information, however, PowerPoint seemed to fall a little flat. I observed TC students furiously copying down information when overhead transparencies went up, but CA pupils watched the same content more passively in PowerPoint presentations, which were perhaps more distracting with their colors and sounds. It might have been different note-taking habits that I saw in action, but then again, it might have been the technology.
- On the
plus side of email quizzes, I had a tidy record of all responses
in my email inbox. On the downside, it seemed that the format,
an onscreen window that vanishes once the user hits "send,"
introduced a sense of informality and caused the quiz to be
taken less seriously by some students. I found this true also
of topic proposals, which CA students posted to a threaded Web discussion forum. Even though I told the CA students that they "might consider" typing up their proposals in a word-processing application, checking spelling and grammar, and then cutting and pasting the text into our Web forum (especially since their brief paragraphs would be accessible to me, to their peers, and to anyone with a modem and browser), they tended to submit writing full of typos and errors in spelling and mechanics.
Despite my opinion of the quality of work on the discussion
forum, CA students pointed to the topic proposal system as something
they liked best. They could see everyone else's ideas, comment on them, and enjoy the convenience of submitting an assignment at any time before the due date, as well as my quick response, usually within a few hours. The
online format certainly promoted peer review and exchange of
ideas, both in and out of class, to a greater extent than the
class listserv, which was shared among all 45 pupils. Although I asked students to post short response papers to the mailing list, to read the replies from both sections, and to test out their ideas there, I saw little evidence of cross-section interaction.
- One might
be tempted to look at computers as tools that distract from
the business of the writing process, and certainly I still wonder whether students exposed to similar content through disparate methods of delivery (overhead transparencies vs. PowerPoint) learned the material to different degrees. Many
of my students seemed more comfortable in the overhead/blackboard/pen-and-paper
environment, with hard-copy assignments. Most members of the
CA section had never before taken a computer-aided course, and
at the first meeting many actually thought they were in the
wrong place, since they had signed up for an "English" class,
not a computer science lab. Although CA students may have had initial misgivings about the classroom set-up and did much more of their coursework electronically, there were few statistically significant differences between their course evaluations and course grades and those of their TC counterparts. The single exception, the proportion of "A"s awarded in each section, might have been explained by a finer breakdown of overall student GPA (ideally an exact figure from each respondent), but these data were not available for comparison. On the whole, my findings about this small sample are in keeping with other studies of technology-enhanced courses, such as the one by Spooner et al., who found no overall differences in evaluations of the same courses taught in traditional and distance-learning formats.
Works Cited