My Dearest Benson,
I have been perusing English Journal articles from 1964, and I must admit I have been feeling some ennui, as most of what I’ve been finding about new media in that year has been rather humdrum; I required a third cup of coffee just to keep at my reading! I was about to knock off for the day, when lo and behold I discovered an article that startled me like no other: “The Computer and Character Analysis” by Allan Ellis, Assistant Professor of Education at Harvard University! You can no doubt imagine my complete surprise and sheer delight at this unexpected archival find!
In this remarkable text, Ellis demonstrated a computer-aided textual analysis of the canonical high school novel, Huck Finn, in an attempt to convince English teachers “that a computer, often thought of as an instrument to launch missiles or to solve equations, is capable of reading the text of a novel and of finding evidence that can help us to understand a theme or a character or an author” (p. 527). Ellis argued compellingly that English teachers should not dismiss the computer as simply a numeric device useful only for math instruction, but instead should work to explore how the computer’s superior counting abilities could be put to use for literary analysis.
In addition to touting the ability of computers to quickly calculate word frequencies in a novel, Ellis demonstrated an early form of digital sentiment analysis in which the computer program searched for recurring clusters of words that reflected particular emotions. This sentiment analysis of Huck Finn’s dialogue enabled Ellis to offer empirical support for interpretive claims such as, “every time Huck is confronted with some societal convention or other--religion, school, superstition, family, manners, or even the rules of his friends' club--he is distressed or anxious” (p. 527). I was absolutely astounded that Ellis’ computer-based textual analysis was in some ways more complex in its calculations than our own work with Voyant more than fifty years later. Consider me humbled!
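(Being the incorrigible tinkerer that I am, I could not resist sketching out Ellis’ general idea for myself. What follows is a minimal sketch in Python, entirely my own toy reconstruction and not Ellis’ 1964 program: the “distress” word list and the sample lines of dialogue are merely illustrative, meant only to show how one might tally word frequencies and hits from a hand-built emotion cluster in a character’s speech.)

    from collections import Counter
    import re

    # An invented "distress" cluster; Ellis' actual word lists are not reproduced here.
    DISTRESS_WORDS = {"trouble", "troubled", "afraid", "scared", "lonesome", "sorry"}

    def analyze_dialogue(lines):
        # Tally every word the character speaks, then pull out hits from the emotion cluster.
        words = [w for line in lines for w in re.findall(r"[a-z']+", line.lower())]
        frequencies = Counter(words)
        distress_hits = {w: frequencies[w] for w in DISTRESS_WORDS if frequencies[w]}
        return frequencies.most_common(10), distress_hits

    # Two illustrative lines of Huck's dialogue (not a systematic sample).
    huck_lines = ["I felt so lonesome I most wished I was dead.",
                  "I was scared, and troubled about the widow's ways."]
    print(analyze_dialogue(huck_lines))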
And yet--I also noted how Ellis at times overstated the computer’s power as a tool for literary analysis, even going so far as to say that the computer was “totally objective and unconcerned with proving anybody's point” (p. 527). Although I appreciate how computers can help us challenge our preconceptions about texts, I cannot help but notice how Ellis’ program was designed to look for particular emotions and themes while ignoring others. I am thinking, for example, of how Ellis’ computer-based analysis seemed not to notice instances of racism in the novel -- alas, it appears that the computer did not help Ellis analyze the text beyond the unmarked lens of whiteness.
Another thing that I found odd about this article was that there was very little mention of pedagogy--a true rarity in our beloved English Journal. I expect that part of the reason the classroom was not mentioned was that computers were scarce, large, and expensive in this period. Harvard professors may have had computer access, but computers were not present in everyday classrooms--though clearly Ellis and the editors of English Journal hoped they would be soon!
The process of preparing literary texts for computer-based analysis was laborious as well: Ellis’ experiment required that every instance of Huck Finn’s dialogue “was punched onto IBM cards” (p. 524). The use of the passive voice here fascinates me, making me wonder: who exactly did this punching? (Likely underpaid and unrecognized women, I’m afraid).
Needless to say, I’ll be eager to hear what you think of our new friend, Ellis, and to learn whether you find any other computer articles when you turn to reading 1965.
I hope you also are experiencing some respite from this dreadful summer heat this week, as we have been here in Cincinnati… Despite hailing from swamp-drenched Florida, I find the dastardly warmth of summer increasingly hard to bear -- my enfeebled constitution only grows more delicate as I enter middle age. When the mercury rises, a proper Campari Spritz truly is the only suitable cure. May we enjoy such refreshments and convivial conversation when we next meet!
Most Sincerely Yours,
Encl: Ellis, Allan B. (1964). The computer and character analysis. The English Journal. 53(7), 522-527.
Jason, you old rascal!
How nice it was to receive your letter. In particular, I must thank you for the enclosed article, which has greatly sparked my interest in how this newly emerging computer technology was imagined for the teaching of English--I will return the favor in kind with this, my reply.
Although I saw no sign of our new friend Ellis, I did locate an article from 1965 that I’m sure you will find fascinating. It comes to us courtesy of Richard A. Stowe (an English teacher) and Andrew J. Maggio (a Coordinator of Instructional Materials), both of whom worked for Niles Township High Schools in Skokie, Illinois (a very different institutional setting than Harvard, to be sure!). In this piece, they described a scene not of punch cards being fed into an imposing mainframe, but rather of a classroom virtually overflowing with media content, all in support of student learning.
Stowe and Maggio went on to describe a system made up of a variety of technologies cobbled together: photographic camera, projector, audio recorder, an Edex system (a button-based mechanical input system of the time), and the computer. Together, this amalgamation of components was touted as the next step in teaching English. In essence, the authors described a scenario wherein the teacher had assembled a presentation about linguistics. The presentation was made up of visuals (text prompts and main topics, pictures of a family and a radio announcer), sounds (voiceover narration, sounds of a baby babbling, background music), and multiple-choice questions for students to answer by pressing the corresponding button on their Edex input device. These answers were recorded by a computer, which calculated individual student scores as well as class-based statistics.
Stowe and Maggio continued to tout their vision as adaptable for schools of more modest means--one did not need to have the latest, most expensive equipment to compile engaging audio-visual lessons. More common technologies--having a student advance a slide projector with appropriate photographs while the teacher reads from a prepared script, for instance--would achieve a similar effect. And the topics one might cover, obviously, could run the gamut of the English curriculum: vocabulary, figures of speech, historical matters, individual poems or stories. They even delighted in imagining the creative possibilities that teachers might explore in order to make the world of Literature come alive for their students: “The slides can, of course, be as artistic as the interest and skill of their creator can make them; full advantage may be taken of color and form. On the tape many sounds and effects may be introduced: appropriate music, background effects, students' voices, radio, television or live performances, phonograph records, and other taped materials” (p. 413).
To hear them explain it, Stowe and Maggio weren’t recklessly cobbling together electronic doohickeys in haphazard fashion. For one, they argued that a teacher’s pedagogical discipline would sharpen by incorporating these sorts of presentations in the classroom, as teachers would have to think more deliberately about the selection and organization of media content. They also claimed that their methods were tied to current learning theories of the day: “Research has established that two media can communicate more information with greater effect than one medium alone. Moreover, pupils learn and retain more material by sight than by hearing” (p. 411). I do note, however, that the gentlemen failed to cite the research to which they alluded.
There are things about this described system that give me pause. The image of learning depicted herein seems rather static: students sitting passively through a teacher-designed presentation up until the moment they are called to demonstrate their rote recall by answering a series of multiple-choice questions. Why not instead have the students themselves participate in the making of these presentation materials, I ask myself. Stowe and Maggio argued that this model was much more engaging and interactive than the lecture-based methods that came before: “Such items keep all of the students ‘on their toes’, especially if they know that their answers are being recorded. The presentation is no longer one which the pupils can view lethargically. They have become respondents, participants in their own education. Attention and thinking are demanded in much greater degree than in the customary situation in which a few students dominate the discussion while the rest sit passively” (p. 412). I take their point only so far, the surveillant undertones notwithstanding.
At any rate, although the system described here consisted largely of analog technologies in conversation with a digital computer, it represented a thrilling vision of what might be possible in the future as computing technologies improve (Have you heard, by chance, of Intel founder Gordon Moore’s prediction that computer circuits will double every year in their complexity? He wrote a paper on the matter in 1965, and people have even gone so far as to call this prediction “Moore’s Law”--I have a feeling that we will see major changes as we keep reading through the archive).
On a personal note, reading this history has reminded me of all the ways our daily lives have been changed by modern computers, not the least of which are more expedient modes of communicating. While I regard our present correspondence with great affection, waiting for letters to circulate through our noble postal system brings with it a quiet tedium I would not mind letting go of.
Enclosed you will find a clipping of this most remarkable article, torn excitedly from the very pages of EJ; you really must read it at once!
I wholeheartedly endorse your plan for a Campari Spritz to allay the doldrums of summer (although I fancy myself an Old Fashioned man--I trust you have a suitable bourbon to volunteer to the cause?). I do plan on traveling soon to the Queen City, and would love to catch up with you.
Yours, as ever,
Encl: Stowe, Richard A. and Andrew J. Maggio. (1965). Language and poetry in sight and sound. The English Journal. 54(5), 410-413.
Dearest Benson, you dashing rogue--
I was incredibly pleased to receive this exciting missive! It appears we have new computer friends, Stowe and Maggio!
I must confess I had been worried that the computer would never make it beyond the walls of Harvard in the 60s, but in the article you so thoughtfully enclosed we see a computer put to some good use in a high school setting. Of course, it is fascinating that the computer in this scenario was really only used for the rote task of calculating scores. All the multimedia tools and even the “Edex” student response machine -- what today we might so indecorously call a “clicker” -- were analog. Still, it was so exciting to see these teachers combining digital and analog media in creative ways and looking forward to a future in which the computer could do so much more than just crunch the numbers. And yet--it certainly was revealing that most of the creative work here was done by the teachers, not the students, alas.
Enough about our dear friends Stowe and Maggio -- I have found an exciting new chum to join our banter: Arthur Daigon, Professor of Education at the University of Connecticut. In “Computer Grading of English Composition” (1966), Daigon reported on a study in which a computer was trained to evaluate high school writing in ways that correlated with the ratings of experienced English teachers. Although Daigon acknowledged that human graders could offer valuable feedback about writers’ ideas in ways that computers could not, he also noted that human English teachers had difficulty finding the time to give this kind of intensive feedback: “bearing a 130-150 typical student load for most teachers, leaves the instructor a choice between less than minimal
writing thoroughly graded and a reasonable amount of writing perfunctorily graded” (p. 47). If the computer could be taught to efficiently handle some aspects of writing evaluation and feedback, then teachers would be freed to do more “individualized teaching” (p. 52) about more complex writing concerns.
Daigon offered copious detail about how the computer could be programmed to grade. In addition to being trained to look for common misspellings, grammatical “errors,” and “solecisms,” the computer also could evaluate the frequency of uncommon words and complex sentence structures--both of which were seen as signs of superior style by experienced English teachers. As an example of the kind of feedback a computer could be trained to give, Daigon shared this response: “John, [we are told that using first names softens criticism] please correct the following misspellings: believe, recieve. Note the ie, ei problem. You overuse the words interesting, good, nice; then was repeated six times. Check trite expressions. All of your sentences are of the subject-verb variety and all are declarative. Reconstruct” (p. 50). We can see here that Daigon’s computer was indeed quite precise and efficient at giving feedback about spelling and style, though I noted that this feedback gave us no sense of the topic or argument of John’s paper.
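(Forgive a small indulgence: I found myself wondering what a few of Daigon’s checks might look like written out today, so I dashed off the sketch below. It is my own crude reconstruction in Python, not Daigon’s program--the misspelling list, the overuse threshold, and the stand-in test for “all declarative sentences” are assumptions of mine, offered only to show how such feedback can be produced by simple counting and pattern-matching.)

    import re
    from collections import Counter

    MISSPELLINGS = {"recieve": "receive", "beleive": "believe"}  # illustrative list only
    OVERUSE_THRESHOLD = 5  # an arbitrary cutoff of my own choosing

    def style_feedback(text):
        feedback = []
        words = re.findall(r"[a-z']+", text.lower())
        # Flag any word found on the fixed misspelling list.
        for wrong, right in MISSPELLINGS.items():
            if wrong in words:
                feedback.append(f"Please correct the following misspelling: {right}.")
        # Flag words repeated past a threshold (Daigon's "you overuse the words...").
        for word, count in Counter(words).items():
            if count >= OVERUSE_THRESHOLD and len(word) > 3:
                feedback.append(f"You overuse the word '{word}'; it appears {count} times.")
        # A very rough stand-in for "all of your sentences are declarative":
        # no question marks or exclamation points anywhere in the paper.
        if "?" not in text and "!" not in text:
            feedback.append("All of your sentences are declarative. Reconstruct.")
        return feedback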
As our friend Ellis showed us, computers are devilishly good at counting words! It makes sense that a computer would be better and faster than a human at pointing out when writers are overusing certain common words -- one of my pet peeves as you know. And, computers love binaries: one versus zero, right versus wrong. So, it makes sense that the computer grading programs enacted a current-traditional writing pedagogy that
focused on the correction of “error” first and foremost. If Daigon’s computer ratings correlated with those of experienced English teachers, I take that as a sign that those teachers were emphasizing grammar and style above all else in their evaluations--enacting a writing pedagogy that was current then but has rightly fallen out of favor today.
Of course, Daigon admitted that it was a problem that the computer could not respond to the content of a paper as a human would -- noting that the computer “cannot honestly say ‘I disagree with you.’ ‘I was moved.’ ‘Your topic was sensitively handled.’ ‘I am reminded of a personal experience’” (p. 50). I appreciated Daigon’s recognition that humans still have powers that machines do not, but then I was taken aback when he argued that “one could [also] say that the failing in a human contending with student composition is that he is not enough like a machine, which can accurately and consistently respond to discernible elements of style without tiring perceptibly” (p. 50). Oh brave new world! The computers are coming for our jobs!
For Daigon, the computer was not only faster than human readers, but it was also more objective. Daigon pointed out that teachers often let biases toward particular students influence their grading, but a computer would not mark down a student “for appearing in class needing a haircut and a bath during the greater part of the school year, looking at the teacher in a surly manner, or not being one of the ‘nice’ conventional students in the class.” (p. 52). When I read this, I found myself shaken to the core -- perhaps computers could be the solution to persistent problems of classist, racist, and sexist bias in education?
After I picked myself up off the floor, however, I realized that Daigon was overstating the case for the computer’s objectivity. After all, Daigon’s computer was programmed to penalize students for using supposedly non-standard grammatical constructions and words--the computer was in fact a more exacting enforcer of linguistic classism and racism than the human teacher. While a subjective human reader might overlook the use of a so-called “solecism” in a paper that otherwise made a strong argument, Daigon’s computer had no choice but to enforce the classist and racist rules it had been programmed to apply. The computer was programmed by humans, after all, and so it was no surprise that it reflected the biases of those teachers who built it! (I dare say that this remains true of automated essay grading programs today.)
I have enclosed Daigon’s article and I am eager to hear your thoughts. I worry that perhaps I am being too hard on dear old Daigon? (As critical as I may be of Daigon’s biased attempt to eradicate solecisms, I must admit that if I could write a computer program that would eradicate use of the dreadful word, “impactful,” from student writing, I most certainly would do so post-haste!)
It would be jolly good of you to come visit me here in the Queen City. I am certain my feline companion, General Tso, would be delighted to see you -- as would yours truly. Although I find it unseemly for a gentleman to drink an Old Fashioned in the heat of summer, I have all the requisite materials on hand should you desire. I might even join you in a mint julep, should I be forced to uncork my bourbon in these dreadful summer months.
Please do write again soon -- your letters are always such a joy to read. Although I miss the speed of email to be sure, there is nothing like the feeling of receiving an honest-to-goodness letter in my postal box.
Your obliged and affectionate friend,
Encl: Daigon, Arthur. (1966). Computer grading of English composition. The English Journal. 55(1), 46-52.
My most esteemed colleague Jason,
I was astounded by your latest English Journal find, which I
read with great relish. Your comments on this Daigon fellow have me
thinking.... While computer essay grading was, and remains,
problematic, I believe there is still real value to the kinds of
observations that computers are able to offer student writers. For
example, I was recently involved in the design and implementation of a
writing-based MOOC (Massive Open Online Course). As part of that
course, we developed a peer review system that combined actual
substantive human responses with computer-generated data analytics
such as words-per-sentence, a Flesch-Kincaid readability score, and
similar metrics. But the goal of our system wasn’t to grade the
papers; it was to provide another level of information, usually hidden
from view, for writers to consider as they developed their craft (for
example, if one’s average sentence length is much shorter than the
rest of the group’s, this might be an intentional and rhetorically
effective stylistic choice, but it might also be an unintentional
tendency in one’s writing, in which case making the student aware of
this could lead to deliberate efforts to address the issue). If I were
to tease out a difference between our approach and the one outlined by
Daigon, I would suggest that his seems informed by values such as
efficiency and authority based on supposed objectivity, while our
approach was driven by a pedagogical impulse to share information with
learners that they could use in a more
critical fashion. But, of course, I am inclined to see
my own contribution in a more favorable light.
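(Should you wish to see the arithmetic behind the metrics I mentioned above, here is a minimal sketch of words-per-sentence and a Flesch-Kincaid grade estimate. I hasten to add that this is only a toy illustration of the general idea, not the actual code from our course, and the syllable counter is a rough heuristic at best.)

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def surface_analytics(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        words_per_sentence = len(words) / max(1, len(sentences))
        syllables_per_word = syllables / max(1, len(words))
        # Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
        grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
        return {"words_per_sentence": round(words_per_sentence, 1),
                "flesch_kincaid_grade": round(grade, 1)}

    print(surface_analytics("We write not for machines, but for people."))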
On to the matter at hand--I have located another article, a copy of
which I have, per our custom, enclosed with this letter. If you liked
the “sky is the limit” vision of the future depicted by Stowe and
Maggio, let me introduce you to the energizing words of Joseph
Auciello. His 1968 article “On Using Computers in English” offered an
arresting scene of what he called a “computer-assisted instruction
system” (p. 650). Students were seated in front of individual stations
consisting of screens, keyboards, and other peripheral devices, and as
they read through an assigned novel, their reading experience was
supplemented by video of the novel’s setting, music that evoked the
mood of specific scenes, suggestions for further reading, and
questions designed for exploring the novel’s persisting themes (p.
650). It was a scene that seemed to bear some resemblance to Vannevar
Bush’s (1945) Memex machine concept, itself a mechanical “hypertext”
navigation device that allowed users to zoom through reams of
microfilm, drawing associative connections among separate texts. One
regret I have in reading this depiction is that the students were not
more engaged in creating their own texts using different types of
media (it seemed as if answering questions via keyboard was their
primary form of interaction). Still, given the time, this was
undoubtedly a very forward-thinking article.
Auciello situated his vision within the theories of learning popular in his day, actually citing his influences, unlike Stowe and Maggio: “Thus, the ideas of Skinner (immediate correction and reinforcement), McLuhan (involvement through use of the senses), and Bruner (incremental understanding of concepts) are incorporated into programs challenging the abilities and meeting the educational needs of students” (p. 650). Also, one particularly impressive aspect of this article was his embrace of interdisciplinary study (the very concept itself seems closely aligned with his proto-hypertextual classroom scene, in that it is largely about drawing conceptual links between seemingly disparate areas of study). I quote one of his examples here, at length, because it is so rich in detail:
Another program could be made up by members of the art, music, English, and history departments showing how each of the arts expresses the spirit of a historical period. Thus in his electronic carrel the student could see and hear how ideas on the dignity of man and his democratic freedom found expression in the art of Delacroix, the music of Chopin, and the poetry of Wordsworth. The advantages of using a computer in a core curriculum lie in its ability to communicate and interplay a variety of media and to retrieve supplemental information almost instantly from a huge microfilm and tape library which would be electronically connected to an international "information loop." (pp. 650-651)
Viva l’information loop! I must confess that I find
myself excited by this imagined account, despite its narrow
Eurocentric scope. What we are seeing in this description is the
convergence of two big movements still in their nascency: growing
interest in computer-assisted instruction and the rising call for
interdisciplinarity in secondary education. That both of these would
develop during the 1960s isn’t entirely a surprise--as education
historians of this period have pointed out, interest in
interdisciplinarity in high school curricula, which began in earnest
in the 1960s, was a result of several interrelated forces, such as
government policies and the emergence of global markets, which were
increasingly supported by computer technology and mass media (Lenoir
and Hasni, 2016, pp. 2354-2355).
Auciello addressed head-on one of the most persistent criticisms of
the computer: that its overuse risked dehumanizing its users. As
Auciello wrote,
“Critics of computer-assisted education say that with the lack of human contact there's no ‘real’ education, that students will be mass-produced with ‘factory-installed’ minds, and that the creative student will be overlooked. There is room for disagreement. The computer, like the movable-type press, is only a teaching aid. And though the computer is a tool whose use will be determined ultimately by us educators, it is so flexible and capable we can literally change the tool to do the job” (p. 651).
While I appreciate the fervor of his defense, I am
nevertheless mindful that there is danger in framing any technology as
a flexible, neutral tool--I am reminded of Cynthia and Richard
Selfe’s foundational article “The Politics of the Interface”
(1994), which critiqued the insidious ways in which software design
could quietly convey ideological values of the designers, often
reinforcing inequalities of class, race, and gender.
You must convey my deepest regards to the General. You no doubt
remember my too-soon-departed feline companion, The Right Good and
Honorable Colonel Elijah T(odd) Pepper, III -- I suspect that had
these two “military men” met in the flesh, they would have become fast
friends! I must confess, a mint julep does sound good, and is more
seasonally appropriate (not to mention, the Colonel would have
approved). Despite my grumblings about the snail’s pace of the postal
system, I do look forward to your next letter.
Steadfastly yours,
Encl: Auciello, Joseph. (1968). On using computers in English. The English Journal. 57(5), 650-651.
Dearest Ben (if I may be permitted such
informalities),
Your epistle has delighted me as always. It hadn’t occurred to me to
consider how strong a connection there was between the 1960s
instructional technology craze and the recent excitement around MOOCs.
Although I remain a MOOC skeptic, I grant that Ohio State’s approach
to computer-assisted feedback is much more critical and rhetorical
than what we saw with dear old Daigon. I’m tickled with the idea of
the computer as not an objective authority, but rather as just one
more peer reviewer with a special set of skills (and limitations).
After reading the Auciello article you so thoughtfully enclosed, I
found myself wanting to understand more about the context of
computer-assisted instruction in the 1960s beyond our beloved EJ. An
extensive search of the “card catalog” turned up an exciting 1967
monograph by Edmund J. Farrell, English, Education, and the
Electronic Revolution, that focused primarily (though not
exclusively) on projecting how the emerging computer revolution would
transform the teaching of English.
Surprisingly, Farrell dated the rise of computer-assisted instruction
all the way back to 1962, when both IBM and System Development
Corporation came out with systems in which a central computer was able
to deliver instruction to a large number of students at their own terminals (pp. 20-21). The work of System Development Corporation was
reminiscent of our friends Stowe and Maggio, as it entailed “a digital
computer to control and select material presented and to analyze
[student] responses, a magnetic tape storage unit, a typewriter for
printing out data analysis, a slide projector and
screen for presenting education materials, and individual desks with
keyboards for the students’ responses” (Halacy, 1964, p. 207 qtd in
Farrell, 1967, p. 24). Amazingly, the computer in this case was
serving not only as grader, but also as a kind of teacher--both providing
instruction and evaluating student work.
Before you get too scared that the computers are coming for our jobs,
though, it is important to note that Farrell still saw a role for
human instructors in computer-based classrooms: “Freed by the
computer from dull drill, the teacher of English could spend more time
in tutorials and small group discussions, engaging students in a
dialogue about human experiences and values as they are
transmitted through literature” (p. 68). I was relieved to see that
Farrell’s classroom of the future still made room for human dialogue
about complex texts, yet I found it curious that Farrell focused
almost wholly on the human power of reading and discussion while
remaining silent about how writing would fit into the computer
classroom. I found myself worrying that perhaps Farrell assumed that
student writing could be taught and assessed solely through the “dull
drill” of the computer -- a frightening prospect indeed.
As I kept reading Farrell, though, I started to see that he ultimately
outlined a critical humanistic vision of computer instruction that
remains relevant today. Resisting the tendency to see computers solely
as the domain of math and science experts, Farrell insisted that
“the electronic revolution must have the guidance of humanists if it is to be that which it can be, the instrumentality for
releasing the creative potential of each individual, rather than an
ingenious means for degrading human life” (p. 68). Here, Farrell helps
me realize that although humanities and the computer may often seem to
be in tension, it is precisely this tension that makes their
combination so necessary. We need humanists to stay involved in
instructional technology conversations so that we can help ensure that
computers are used in more humane, creative, and equitable ways.
On a personal note, I must say that your reference to Colonel Pepper
brought back some fond memories -- ’tis truly such a pity
that he and the General never met. Then again, if they had joined
forces (and gained opposable thumbs), we would have been done for, I am
afraid!
I have enclosed some photocopied passages from Mr. Farrell’s monograph
for you to peruse. I hope I have included enough tantalizing bits to
inspire you to procure your own copy.
As I write, I sit here in the sweltering heat, fanning myself with
your last letter. It has become frayed, I am sorry to report. I
apologize for treating your treasured missive so rudely. I must buy a
proper fan...
Yours ever, truly,
Encl: Excerpts, Farrell, Edmund J. (1967). English, education, and
the electronic revolution. Champaign, IL: National Council of
Teachers of English.
My most precious P-Funk,
I am beyond excited. The Farrell excerpts you sent along in your last
letter were very illuminating. I am, in fact, very much in concert
with the spirit of his message--we in English studies have much to
offer the computing world in terms of the ethical, critically mindful
uses to which we put these technologies. I definitely plan to procure
a copy of this book soon.
Who would have expected it?! Our good friend Arthur Daigon has made a
return to the archives--and this article, “Pictures, Punchcards, and
Poetry” (1969), is even more remarkable than his last. And, in a vein
similar to Farrell, he made a full-throated defense of why we in
English studies should use computers in the pursuit of the noble goals
of humanistic learning. Let me quote him here:
A hammer can be a murder weapon or it can help build houses. The computer can be used to involve students in significant experiences or it can be the electronic equivalent of Warriner's Handbook. Unfortunately too many in the humanities have rejected the computer out of hand and are, in effect, leaving an open field to the true-or-falsers, the yes or noers, the multiple choicers, and the fill-in-the-blankers. (p. 1035)
Daigon urged those in English to resist the tendency to
hide out in their ivory towers, seeking shelter from the onslaught of
criticism from the computer science community, lest they get caught up
in the “murderous cross-fire [...] between the McLuhanists and the
literary academicians” (p. 1034). The choice, Daigon argued, should
not be a binary one of poetry or computers. Rather, poetry mattered,
by Jove, and computers could be effective tools for helping students
to learn about it.
Daigon set the scene of humanistic computer instruction for his
readers by describing in rich detail a tutor interacting with a
student studying a collection of poems. Well, this tutor was
apparently worth his or her wage ten times over, providing the student
with a veritable cornucopia of resources: relevant background
information about these poems; a collection of images, sounds, and
video content to help deepen the context of the poems; a list of the
more challenging vocabulary from the collection, along with
definitions; meaningful, thought-provoking prompts that explore the
poems’ themes, asking the student to connect them to his or her lived
experience as well as to previously explored literature.
But this was no ordinary, run-of-the-mill tutor. Daigon teed up a
clever ruse for readers: “Yes, the tutor, the literary
conversationalist, the presenter of poems, questions,
comments, and assignments, the catalyst of poetic evocation is--God
forgive us all--a computer-controlled multi-media device, several
models of which are currently in production” (pp. 1034-1035). Daigon
went on to describe the cutting-edge state of technology available in
his day, an array of display screens, audio tape players, electric
typewriters, light pens, all of which could be controlled by an
integrated computer to direct student interaction (p. 1035). The way
Daigon then imagined using this technology for teaching poetry was, in
my opinion, a fairly robust pedagogical approach, consisting of
questions designed to promote open-ended exploration, close reading
skills, connections to lived experience, and critical thinking about
the ethical or moral questions raised by the work. Alas, as with many
of these early articles, the student was not composing with multiple
media forms in this scenario (likely owing to the time and cost
constraints of the day), but it was a fascinating and persuasive sketch
that aligned computer and multi-media methodologies with the
humanistic pursuit of literary study.
You must read this article at once! I am, of course, enclosing it (I
must apologize, though, for the haphazardly torn pages; in my zeal, I
fear I was careless in removing them. We are both rather clumsy paper
handlers, it would seem!)
On another note, I confess that the idea of a feline-led
rebellion against humanity quite intrigues me. I imagine
a world organized and run by cats would be an incredible sight.
Eagerly awaiting your next response, I remain,
Encl: Daigon, Arthur. (1969). Pictures, punchcards, and poetry. The
English Journal. 58(7), 1033-1037.
Ben, you sly dog --
Who would have thought that we would see the return of good old
Daigon?! I read the enclosed article with much interest -- despite its
tarnished state. And, I must say I like Daigon’s arguments much more
this time around. I was pleased to see him growing more critical of the
ways in which computer programs often reduce learning to simple yes /
no and true / false questions -- I love his futurist vision of a
multimedia computer program that would help students engage
open-ended, “thought-provoking” questions about the texts and contexts
of poems.
And yet… does it strike you as a little disconcerting that we as
computers and writing scholars find ourselves cheering most when
Farrell, Daigon and the like make powerful arguments for how the
humanistic value of literary study can transform computers? Have we
crossed sides in the literature vs. composition divide or maybe just
returned temporarily to our roots as idealistic literature majors?
Perhaps, perhaps … (it would not be the first time I’ve been accused
of such) ... but I expect the reason we find ourselves connecting with
these literary arguments so much is that we have yet to see an article
that actually imagines the computer as a tool for student writing. In
this period, it was at the nexus of literature and computing where the
most radical imagineering was happening!
When it comes to writing, however, it appears that English teachers
imagined computers solely as grading tools, as Daigon did in his first
article. And, I’m afraid I must report that this trend continued into
the 1970s (see enclosed). The first article that rang in
the illustrious decade of our collective births was “Essay Grading by
Computer: A Laboratory Phenomenon?” (1971) by Henry Slotnick of the
National Assessment of Educational Progress (Ann Arbor) and John
Knapp of the University of Illinois. I must admit that I wanted
to dislike this article, and yet in the end I found it to be a
remarkably thoughtful and balanced review of the potentials and
pitfalls of computer-based grading at the time.
Reporting on their own research on computer grading and reviewing
other studies (including Daigon’s), Slotnick and Knapp argued that
computer grading was not yet ready for widespread use, and then they
went on to elucidate a variety of technological and curricular changes
that might make it possible for computer grading to be used in a
limited way in college writing classes in the near future. Although
Slotnick and Knapp found in their own studies (and in the literature
they reviewed) that it was possible to program a computer that would
arrive at similar evaluative judgements as human readers, they
importantly pointed out that the computer had not actually learned to
read papers like a human would. Whereas humans tend to evaluate
elements of student writing in holistic, intuitive ways, “computers
look for approximate qualities of writing--qualities which are
logically related to the intrinsic qualities sought by human judges
but are stated with the finite clarity required to make them
programmable” (p. 76). One example of this is that the computer could be trained to recognize linguistic features of “subordination of
thought” (p. 75) and that these might correlate with
the complexity of the argument. But, we must recognize
that human readers bring a great deal more contextual knowledge
to evaluating any student argument than merely looking for patterns of
subordination in sentences.
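(To make this difference concrete, I sketched out what one such “approximate quality” might look like. This is my own illustration, not Slotnick and Knapp’s system: it simply counts common subordinating conjunctions per sentence as a programmable stand-in for the “subordination of thought” that a human reader senses holistically.)

    import re

    # A hand-picked set of subordinating conjunctions; merely illustrative.
    SUBORDINATORS = {"although", "because", "since", "unless", "until",
                     "whereas", "while", "if", "when"}

    def subordination_rate(text):
        # Subordinating conjunctions per sentence: a crude, programmable proxy
        # for the complexity of thought a human judge evaluates intuitively.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-z']+", text.lower())
        hits = sum(1 for w in words if w in SUBORDINATORS)
        return hits / max(1, len(sentences))

    print(subordination_rate("Although the computer counts well, it cannot read as we do."))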
And, of course, today’s computer grading programs are not much better.
As Les Perelman has shown, it is possible to submit an
incomprehensible nonsense essay to a modern automated essay grader and
have it receive a high score because it meets required word counts,
uses high-level vocabulary, and is relatively free of nonstandard
grammatical constructions (Anson & Perelman, 2017; Perelman, 2014;
Kolowich, 2014). At a moment in which educational stakeholders are
increasingly looking to computers to evaluate standardized tests of
student writing (Smith, 2018), it is crucial that we stay alert to
what is lost when we reduce writing evaluation to simply a matter of
counting words and applying simple rules.
In addition to pointing out the perennial problem that human readers
and computer readers rely on differing evaluative logics, Slotnick and
Knapp also highlighted a somewhat simpler technological problem that
needed to be solved--the problem of how to make the process of
preparing papers for evaluation more efficient. In particular,
Slotnick and Knapp looked forward to the development of an
“accurate character reader (a machine which can read typed or printed
material and put it on magnetic tape or punch cards)” (p. 80), which
would allow computers to quickly read student writing that had been
composed with the more ubiquitous technology of the typewriter.
Once the character reader problem was solved, Slotnick and Knapp suggested that computer graders might be integrated into composition classes as an ancillary tool for providing feedback on first drafts. In this model, the student would submit the first draft of a paper to the computer, and the computer would provide feedback following programmed rules specifically aligned with the particular assignments. Both student and teacher would receive a printout that:
would suggest to each student ways in which he might improve the paper before he submitted it to you [the teacher]. It might also tell him how the paper compared to his earlier writings; it might indicate areas in which he was improving (according to standards you have set) and where more work needs to be done. It would definitely tell him that if he is confused, or thinks the comments are inappropriate, that he should see you, the final judge. You would also receive a copy of the printout. And while you may be impressed with the computer's functioning, you may feel that it is inadequate or otherwise overly responsive in certain areas. You would report your criticisms to the person responsible for the programming, and/or inform your students to ignore certain things the computer had told them. In short, your awareness of the computer's functioning would be used to improve its (the computer's) utility. (p. 87)
Finally! A vision of computer feedback that does not totally scare me -- indeed, I am left intrigued. I appreciate here how the computer was not positioned as a replacement for teacher-student dialogue, but rather as a possible instigator of further human conversations about writing and revision. I also think it crucial that Slotnick and Knapp realized
that one-size-fits-all response programs were untenable
and (as a result) writing teachers needed to play a role in the
ongoing development and refinement of the response programs they would
use for particular assignments.
Reading Slotnick and Knapp has inspired me to consider making more use
of machine-based response in my classes. I can see how asking students
to have their work read by a range of computer protocols could help us
open up some exciting conversations about the potential benefits and
limitations of using computer feedback to spur revision. Even
something as simple as making a wordle of an essay (showing its most frequent
words) can be a way to help students focus and re-see their
work. I still am not comfortable with the idea of the computer
as a sole evaluator, but I am willing to allow computers in as another
possible audience that students must consider when they are writing
for the web -- no doubt, understanding the logic of the search engine or the
social media algorithm is a crucial part of audience analysis for
digital writing (Gallagher, 2017). Of course, I am wandering pretty
far from Slotnick and Knapp in my musings here, but I do so since I
find their broader model of the computer as an additional reader in
the classroom (alongside students and teachers) to be a compelling
one.
I suppose it is time for me to step down off my soapbox -- you know
well how conversations about automated essay scoring can get me hot
under the collar!
On another note, I must admit that I too would welcome
the ascendance of our cat overlords -- they could not do any worse
than our current leaders, that is certain! I, however, am not willing
to submit essays to be graded by cats -- I am afraid they would too
easily be bribed with tuna to award unmerited scores.
I am very curious to see what new computer articles we may find as we
keep reading the 1970s EJ. The decade has been off to a great start
with Slotnick and Knapp...
Please do write soon!
Your steadfast colleague and faithful friend,
My dear and easily riled compatriot,
You must have been quite exercised, as you apparently forgot to
include the article of which you made so much mention. I trust that
your precis of it was, as it usually is, more than adequate, so all is
forgiven… this time. But I admit that I share your agitation when it
comes to the topic of computerized grading. Beyond the looming
existential threat it poses for us, it does not strike me as effective
pedagogy for the reasons you mention. I was happy, as you were, to see
that Slotnick and Knapp took a measured approach to the topic and even
acknowledged that there remains value in having teachers’ input on
student writing. Permit me an aside, though--I bristle at their
characterization of the teacher as a “human judge.” It is somewhat
reductive, and erases one of the most fundamental reasons for writing:
to connect with other human beings. Ultimately, do we write to be
evaluated? I submit to you that we do not! We write first and foremost
not for machines, but for people!
That said, I did find their application, as well as your subsequent
musings, genuinely innovative. These examples show how the computer
can be used, not to off-load the important work of teacher response,
but to expand the types of feedback we are able to offer student
writers. These sorts of tools can help create imaginative openings for
students (and teachers alike) to envision how they approach the craft
of writing--as
we both well know, writing and teaching writing are both
incredibly difficult tasks, so anything that allows us to approach them
with “new eyes,” as it were, is a welcome gift.
As I return to the EJ archives, I must note that as of this
writing, I have reviewed up until 1974, and have seen no mention of
computers. That’s a curious drought, I must say, but I expect it to
pick up again, so I will resume my reading in the morning.
But on to the more important topic of cats and grading: I must confess
that I would have no scruples when it comes to passing along a tin or
two of Chicken of the Sea in order to secure excellent marks. To my
mind, it is a victimless crime.
Looking forward to hearing your next spate of wise words,
PS--I’ve held this letter in the event I did discover a noteworthy
article to share. I have made it through 1976, and still nothing! I am
bewildered, I must admit.
PPS--Egads! What hath G-d wrought? I’m now in 1979, and there is nary
a computer-based article to be found. What is to be made of this Lost
Decade?