EMPIRICAL SUPPORT FOR PERSON-BASED RESPONSE
This chapter details the results of a descriptive quantitative analysis conducted at West Texas A&M University that suggests empirical support for person-based response. In addition, it examines related issues and potentially confounding variables. Finally, it points out the possible weaknesses of the study and calls for further research.
My specific research questions in this study are preceded by two general ones that relate to the positioning of my research as presented at the close of chapter three. First, I seek to find out whether these results confirm the findings of Connors and Lunsford as reported in their landmark 1993 study; second, I respond to the call for examination of student perceptions of comments presented in the May 1997 issue of College Composition and Communication. I want to know how students themselves interpret teacher comments and, most important, whether they are motivated by them.
My specific research questions begin with the four tenets of person-based response and can be stated as follows:
My other research questions have to do with issues related to person-based response and with potentially confounding variables, and can be stated as follows:
To answer these questions, I surveyed student-writers at West Texas A&M University, a 6,000-student state university located in Canyon, Texas. In November of 1996, I asked the Director of Composition to distribute questionnaires to teachers in all sections of English 101 (the first semester of freshman English). Teachers were given a letter asking for their assistance and giving instructions for conducting the survey in their classes (see Appendices A and B). Each questionnaire included a student consent form (see Appendix C) briefly explaining the purpose of the survey and advising students of their right to decline participation in the project without penalty. Both students and teachers were assured anonymity in the recording and reporting of survey results. No names appeared either on the individual questionnaires or on the class folders containing all forms for individual sections. Teachers were allowed to distribute and explain the surveys, but students collected and returned them to the English office.
The questionnaire operationalized the research questions into statements with which students could indicate their degree of agreement or disagreement on a five-point Likert scale (see Appendix D, especially statements 1-34). As stated in chapter five, most of these statements were developed in earlier pilot surveys; the major exceptions are the statements dealing with student views of teacher competency ("ethos," see statements 28-34), which are taken from Feldman's article, "The Superior College Teacher from the Students' View." To increase statistical reliability, two or more statements dealt with each variable (see Table 6.1). The advantage of having two or more statements that measure the same variable is to ensure that students understand the measurement questions: if the survey is high in reliability, a student who disagrees with one statement should also disagree with a similarly worded one. Some propositions were stated negatively (6, 9, 10, 11, 12, 14, 15, 18, 20, 25, 26, 28, 32) to discourage thoughtless or hurried responses, and these were subsequently recoded for analysis. The final nine questions asked for general demographic information and posed two open-ended statements for the students to complete in their own words.
|Table 6.1:||Identity of variables by statement number|
|Variable||Statement numbers||Variable||Statement numbers|
|Clarity||1, 6, 9, 15, 17, 23||Confidence||5, 13, 20, 25|
|Reader||2, 10, 18, 24, 27||Motive||7, 14, 22|
|Praise||3, 11||Helpful||8, 16, 21, 26|
|Voice||4, 12, 19||Teacher||28, 29, 30, 31, 32, 33, 34|
To compute the variable values, I added all the similar statements: for instance, clarity = 1 + 6 + 9 + 15 + 17 + 23.
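The recoding-and-summing step described above can be sketched as follows. This is my own illustration rather than part of the study; the item numbers come from Table 6.1, and the sample responses are hypothetical.

```python
# Sketch of the scoring step: reverse-code negatively worded items on the
# 5-point scale (1 <-> 5, 2 <-> 4), then sum the items for each variable.
# Item numbers are from Table 6.1; only two variables are shown here.

NEGATIVE_ITEMS = {6, 9, 10, 11, 12, 14, 15, 18, 20, 25, 26, 28, 32}

VARIABLES = {
    "clarity": [1, 6, 9, 15, 17, 23],
    "praise": [3, 11],
    # ... remaining variables as listed in Table 6.1
}

def recode(item, response):
    """Reverse a 1-5 Likert response if the item was negatively worded."""
    return 6 - response if item in NEGATIVE_ITEMS else response

def score(responses):
    """Sum the recoded responses per variable (e.g. clarity = 1+6+9+15+17+23)."""
    return {name: sum(recode(i, responses[i]) for i in items)
            for name, items in VARIABLES.items()}
```

For one hypothetical respondent who answered 5 to every positively worded clarity item and 1 to every negatively worded one, `score` returns the maximum clarity value of 30.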
After collecting the questionnaires and entering the data, I first ran frequency analyses on all the questions. Then I recoded the negatively phrased statements and combined frequencies of related questions to compute variable values. Next I conducted reliability tests to see whether students answered in a consistent manner. I then ran a general correlation command for all the variables to see how each covaried with the others. Most important, I wanted to know whether the independent variables (clarity, teacher as genuine reader, use of praise, encouragement of student voice/style, ability to inspire confidence, and perceptions of teacher competence) covaried with the dependent one, motivational value. (Student perception of the usefulness of comments was also a dependent variable, but because no research supports its having a bearing on student performance, I do not include it in the multiple regression test.) All of the independent variables were subsequently combined in a multiple regression to test their collective covariance with student motivation. Finally, I ran t-tests to see whether certain groups of students responded in significantly different ways (by gender, first language, and grade expectations).
Of the 25 packets given to teachers, 17 were returned (a 68% return rate) with 303 student questionnaires. Approximately 90% of the respondents were first-year students, and they were almost equally divided between males and females (50.2% and 49.8%). Nearly 18% spoke a first language other than English. Almost 70% said they were presently making either an "A" or a "B" in the class, and 91.7% expected one of these grades by semester's end. Students responded to related statements in a highly consistent manner: the overall reliability of the Likert-type responses on this instrument, as measured by Cronbach's Alpha, was .94. Reliability rates for the individual variables were lower (clarity, .70; reader, .69; praise, .63; voice, .71; confidence, .65; motive, .79; helpful, .68; teacher, .82); for an explanation of these lower values, see the discussion section.
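Cronbach's Alpha, the reliability coefficient reported above, can be computed directly from the item responses. The following is a minimal illustrative sketch based on the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores); it is not the software used in the study.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's Alpha for a set of items, where items[i] holds every
    respondent's answer to item i (all respondents answered all items)."""
    k = len(items)                                  # number of items
    totals = [sum(col) for col in zip(*items)]      # each respondent's total score
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Two perfectly parallel items yield an alpha of 1.0; uncorrelated items drive the coefficient toward zero.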
In marked contrast to the Connors and Lunsford findings, students in this study perceived teacher comments in an overwhelmingly positive way (see Table 6.2 for a summary; Appendix E for complete data). Seventy-one percent of students either agreed strongly or agreed with the statement, "My teacher's comments on my essays have helped me improve my writing in this class." Seventy-two percent said they thought the comments would help strengthen their writing in other classes, and 78% thought they would help the quality of their writing in the future. Only 5.9% agreed strongly or agreed with the statement, "My teacher's comments on my essays, as a whole, probably hurt more than they helped my writing."
Other variables received similar responses. Following are some important highlights:
|Table 6.2.||Positive student perceptions per variable|
|Variable type||Variable||Positive student responses|
These percentages combine "agree" and "agree strongly" responses (after recoding for negatively worded statements) and are averaged over all questions pertaining to each variable.
The only variable that did not receive a majority of positive responses was motivation; at 46%, the figure suggests that students viewed their teachers' comments as neither particularly motivating nor unmotivating. By itself, however, this statistic is misleading because of what subsequent correlations revealed (for a fuller explanation, see the discussion section).
Most students completed the open-ended statements, and their words supported the percentages mentioned above. Students' most enthusiastic and positive responses concerned those teachers who praised them for their successes. "She doesn't focus solely on the bad qualities. She also focuses on what I do well," said one student. Said another, "It encourages me to write and I am excited to get my next essay assignment. My high school teachers made me feel embarrassed to even write a letter because my writing was so bad." Students' suggestions for teacher improvement dealt mostly with comments that were unclear or too brief. "I guess maybe giving an example would help me out a little more. I need all the help I can get," said one student. "Comments are abbreviated and short and don't help in how to correct errors," said another. Student comments showed that they appreciated teachers who appeared as genuine readers: "She makes me feel that she really read the essay, instead of just skimming for mistakes . . . . She doesn't analyze everything. She actually reads the paper as a reader." Similarly, students praised teachers who encouraged their own voice ("She told me to stick to the way I write. I have my own style.") and condemned teachers who did not ("Our teacher wants us to write like she writes, and wants only to read what she wants to hear. Her views are very different than those of us in the class"). Finally, the open-ended responses revealed that students appreciated comments from teachers who cared, who believed in them, and who treated them with respect. "[She] makes me feel like she cares how I do and progress in this class," wrote one. "She writes my name (ex. Good job, my name) which makes it seem personal," said another. "She makes me believe I can do better," said a third. "I feel like a person, not just a pupil," added a fourth (for more examples of open-ended responses, both positive and negative, see Appendix F).
All of the variables covaried with each other (when one increased, the others did too). Of particular note were the high correlations between each of the independent variables and the dependent variable, motivation (see Table 6.3, third row from bottom). Also noteworthy was the result of a multiple regression test, which measured the collective covariance of all of the above-mentioned independent variables with the dependent variable motivation (R = .798, R² = .636, F = 85.679, p < .001). These correlations are markedly high (see the discussion section) and carry a high confidence level.
|Table 6.3:||Correlation of commenting strategy variables|
For every case, p < .001.
|Clarity =||Teachers' comments are clear||Confidence =||Teachers' comments inspire student confidence|
|Voice =||Teachers encourage student voice and style||Motive =||Students are motivated by teacher comments|
|Reader =||Teachers present themselves as real readers||Praise =||Teachers' comments praise student successes|
|Help =||Students view teacher comments as helpful||Teach =||Students view teachers as competent|
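The pairwise correlations in Table 6.3 are product-moment coefficients. As a generic illustration of how such a coefficient is computed (a sketch only, not the study's actual statistical package):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance term
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # spread of y
    return cov / (sx * sy)
```

A coefficient near .7, like several in Table 6.3, indicates that high scores on one variable strongly accompany high scores on the other; 1.0 would indicate a perfect positive relationship.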
The demographic material also yielded some useful findings. Males and females showed little difference in the way they perceived teacher comments; however, the means for students whose first language was not English were consistently higher than those for native English speakers. Of these differences, only the view of teacher competence reached statistical significance (F=3.083, t=-2.030, df=301, p=.043). Grades and grade expectations also played a role in student perceptions. Students currently making "A"s and "B"s and those who expected to make "A"s and "B"s rated their teachers higher in every category. They also saw themselves as more helped and motivated by them (see Tables 6.4 and 6.5). Finally, another piece of data showed an interesting trend. While the majority of students found comments neither too brief nor too long, the minority opinion offered a useful insight: only 10 students (3.3%) felt the comments were too lengthy, but a notable 90 students (29.7%) said they were too brief.
|Table 6.4:||Present grade in class|
|Variable||"A" and "B" mean||"C" and below mean||F value||Degrees of freedom||Significance|
On this scale, lower mean values correspond to higher (more favorable) ratings.
|Table 6.5:||Grade now expected in class|
|Variable||"A" and "B" mean||"C" and below mean||F value||Degrees of freedom||Significance|
On this scale, lower mean values correspond to higher (more favorable) ratings.
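The group comparisons in Tables 6.4 and 6.5 rest on independent-samples t-tests. A minimal sketch of the pooled-variance form of the statistic follows; it is illustrative only (the study's values were produced by statistical software), and the groups in the usage note are invented.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(group_a, group_b):
    """Independent-samples t statistic (equal-variance form) and its
    degrees of freedom, for comparing two groups' mean ratings."""
    na, nb = len(group_a), len(group_b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

Identical groups yield t = 0; the larger the absolute value of t relative to its degrees of freedom, the smaller the significance (p) value reported in the tables.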
The students in this study confirm the efficacy of person-based response. Each of the four strategies correlates positively with students' sense of motivation (teacher as genuine reader, .67; teacher use of praise, .63; teacher encouragement of student voice, .69; teacher ability to inspire student confidence, .69). The strength of the correlations shown in Table 6.3 can be put in perspective by noting Frey, Botan, Friedman, and Kreps's Investigating Communication, which calls correlations under .20 slight and negligible; those between .20 and .40, low but definite; those between .40 and .70, moderate and substantial; and those from .70 to .90, high, evidencing a marked relationship (305). By this definition, all of this study's correlations are in the moderate and substantial range, with the multiple regression in the high range.
Unlike the Connors and Lunsford study, these student responses reflect a more positive state of the profession. These teachers seem to be doing as many things right as those in the Connors and Lunsford study were doing wrong. There are several possible explanations for this.
Regardless of the explanation, the way to know if the WTAMU case is typical will be to conduct multiple tests using the same instrument in other college settings.
The multiple regression (correlation) in this study also suggests an important implication: the more of these commenting strategies a teacher employs, the better, a principle supported by the students' views of the length of responses. Where teacher comments are concerned, students seem to feel that more is better, especially when the increase in quantity is accompanied by a similar increase in quality. However, at least two cautions need to be sounded. One, it is important to point out that covariance does not prove causality. Just because these variables are found together does not mean one causes the other; they could each be caused by a third, unseen variable. Thus, to find out whether strategies like teacher use of praise actually cause student motivation and, subsequently, better writing, one might test the hypothesis with a true experimental design, something that has already been done with some success (Dragga). Two, this study shows that there are confounding variables that can significantly affect student perceptions of teacher comments. In the first place, the correlations between students' views of their teachers and their perceptions of the usefulness and motivational value of these teachers' comments are substantial (helpful, .60; motivational, .62). Thus, student views of comments seem inseparably tied to student views of teachers. If teachers are hard to understand when lecturing, do not know their subject well, or are unprepared and unenthusiastic in teaching, it is unlikely that even the most effective commenting strategies will help. A second confounding variable involves the grades that students both receive and expect to receive on their papers. Students who make and/or expect grades of "C" or below consistently rate teacher comments as less helpful and motivational. There may be a couple of explanations for this finding. One, students who score lower may actually receive less praise on their papers.
This seems to be what Connors and Lunsford found: some teachers seem unable to find anything good in a bad paper. On the other hand, it is possible that students who make lower grades simply "hear" less praise from their teachers. Students may look first for a paper's grade, and if it is low, they may be oblivious to, or discount, any accompanying praise. In either case, teachers will have to work harder when responding to marginal students. At the least, doing so will mean writing more comments; it may take twice as many positive comments to instill confidence in a struggling writer as in a more successful one. And the problem may require whole new approaches, such as Sam Dragga's Praiseworthy Grading (in which teachers comment only on student successes in their essays and place criticism and evaluation on a separate sheet of paper) or various methods of portfolio grading (in which grades are not given at all until semester's end, and then by a panel of evaluators).
Weaknesses of the Study and the Need for Further Research
This study did have some problems. For instance, a 68% return rate would be high for a mailed survey, but for a canvass-type distribution it is low and threatens the validity of the research. I do not know whether the teachers who participated were representative of those who did not. In the future, it will be helpful to secure the cooperation of all teachers in advance. For now, however, it helps to know that these data do not contradict trends in my pilot studies and thus are probably valid. Also, although the overall reliability rate was a high .94 ("Researchers generally accept as reliable any measurement technique with a coefficient of .80 or greater" [Frey, Botan, Friedman, and Kreps 120]), the individual variables fell below that standard. This finding can be explained by the small number of statements per variable: reliability coefficients depend on large numbers of items even when responses are consistent. Finally, one of the most important variables, student motivation, received positive responses from only 46% of students. Yet motivation showed a high correlation with all the other variables. This finding suggests that, while students did not say they were particularly motivated by teacher comments, they were; they just were not aware of it, something borne out by their response to the negatively phrased motivation statement. Sixty-nine percent of students either disagreed strongly or disagreed with the statement, "My teacher's comments on my essay made me not want to write the next essay."
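The claim that reliability depends on the number of items can be made concrete with the Spearman-Brown prophecy formula, which predicts how a reliability coefficient rises as a scale is lengthened. The formula is standard psychometrics; it was not part of the study, and the numbers below are purely illustrative.

```python
def spearman_brown(r, n):
    """Predicted reliability of a scale lengthened by factor n,
    given its current reliability r (Spearman-Brown prophecy formula)."""
    return n * r / (1 + (n - 1) * r)
```

For example, tripling the length of a two-item variable with an observed reliability of .63 (such as praise above) would be predicted to push its coefficient past the conventional .80 threshold.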
Thus, this quantitative project is suggestive but not conclusive. Other studies using the same survey instrument need to be conducted in a variety of university and college settings. In addition, to gain a greater degree of validity (construct validity, Frey et al. 123), the strategies of person-based response need to be linked empirically to a time-tested theory. In Chapter VIII, I suggest doing so with the Daly and Miller Writing Apprehension Scale. If the statements measuring person-based strategies in my survey can show statistically significant correlation with writer attitudes as determined by the Daly and Miller scale, my own instrument will gain credibility.