
Learner Self-Assessment Ratings

Cashin (1995) shows that learners' ratings can correlate well with external measures of their learning and with the instructor's self-ratings. Student ratings are 1) statistically reliable (they have internal stability and are consistent over time), 2) more statistically reliable than colleague ratings, and 3) not easily or automatically manipulated by grades (L'Hommedieu, Menges, & Brinko, 1990; d'Apollonia & Abrami, 1997; Ory, Braskamp, & Pieper, 1980; Centra, 1993).
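As a concrete illustration of what "internal stability" means (this sketch and its data are not from Cashin's paper), the internal consistency of a multi-item rating form is commonly quantified with Cronbach's alpha, which compares the variance of individual items to the variance of the total score:

```python
# Illustration only: Cronbach's alpha for a hypothetical rating form.
# Rows = students, columns = items on a 1-5 rating scale (made-up data).
ratings = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items on the form
    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(ratings), 2))  # ≈ 0.92 for this sample
```

Values near 1.0 indicate that the items move together, i.e., the form measures one underlying trait consistently; values around 0.7 or higher are conventionally treated as acceptable reliability.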

Imagine: your learners may be better able to rate you than your fellow instructors are (Dancer & Dancer, 1992)!

Donnelly and Woolliscroft (1989) reviewed student ratings using 12 descriptive items over a one-year period and concluded that the learners' evaluations were reliable and that their judgments were sophisticated and well thought out.

Also, intellectually challenging classes average higher ratings than easier courses with light workloads (Cohen, 1981). This closely relates to Snow's (1998) research, in which a learner's future potential depends upon his or her current cognitive state, and we can increase that potential by raising the standards (aptitude-treatment interaction). The learners know when they are being challenged, and they appreciate it!

What Should We Ask?

Part of the problem is that we do not always know what to ask. "Result" questions, those that ask how much the learners learned, produce the most reliable ratings (Abrami, 1989; Cashin & Downey, 1992).

Do not ask questions about teaching methods: one might get high marks on "how much the learners learned" (which tend to be valid) and low marks on "how well the course was planned and organized" (which tend not to be valid). Even if these process questions were valid, they would not tell us anything that the "result" questions cannot.

Note that the word "rating," not "evaluation," is used above. A rating implies a source of data, while an evaluation implies that we have an answer. That is, the learners provide us information, and we then combine it with other sources of information to arrive at a total evaluation. Learners are not always on target: their ratings can provide valuable information, but they cannot tell evaluators everything needed to make a valid assessment of the training.

Perhaps the most unreliable question is, "How much did you enjoy the class?" Learners generally enjoy courses that are the most intellectually challenging and meaningful, yet they will also report enjoying a class that contributes little to their learning. Nevertheless, when the same learners are asked to assess their learning, rate the instructor and/or course, or assess its intellectual contributions, they are, as a whole, able to distinguish fluff from substance (Kaplan, 1974; Naftulin & Ware, 1973).

Learner Bias

Prior learner interest in a subject does influence student ratings of effectiveness (Marsh & Dunkin, 1992). For example, a trainer taking a train-the-trainer class will probably give a higher rating than if she were taking a class in which she had no real interest.

Also, learners do not simply give higher ratings to classes in which they receive the highest grades (Howard & Maxwell, 1980). Again, the highest marks often go to the most challenging courses. However, a learner's ratings tend to be slightly higher if he or she expects to receive a higher grade; the research suggests that the difference is due to the learner being highly motivated, learning more, and thus expecting a higher grade (Howard & Maxwell, 1982).

Immediate Feedback

To collect immediate feedback, end the session five minutes early and ask: 1) What major conclusion did you draw from today's session? 2) What major questions remain in your mind? (The Searle Center for Teaching Excellence, Northwestern University).

Two questions can provide learners with valuable insight into the feedback process: 1) What have I learned? and 2) What do I need to learn now? In addition, they provide instructors with feedback, such as revealing whether the learners are drawing conclusions quite different from the ones intended. This allows you to adjust the next session in response to the patterns that emerge, or to adjust the way you train.

Also, learners who have no previous experience give the most inconsistent feedback, partly because they have nothing on which to base their initial feedback. By using the two questions above across multiple training sessions, you can scaffold their feedback so that they improve upon it (the same principle as scaffolding instruction).

Self-Assessment

Traditional testing methods do not fit well with such goals as lifelong learning, reflective thinking, being critical, the capacity to evaluate oneself, and problem-solving (Dochy & Moerkerke, 1997). For these, self-assessment plays an important role. Self-assessment refers to the involvement of learners in making judgments about their own learning, particularly about their achievements and the outcomes of their learning (Boud & Falchikov, 1989). It increases the role of learners as active participants in their own learning (Boud, 1995), and is mostly used for formative assessment in order to foster reflection on one's own learning processes and results.

Overall, it can be concluded that research reports positive findings concerning the use of self-assessment in educational practice. Students who engage in self-assessment tend to score most highly on tests. Self-assessment, used in most cases to promote the learning of skills and abilities, leads to more reflection on one's own work, a higher standard of outcomes, responsibility for one's own learning and increasing understanding of problem-solving. The accuracy of the self-assessment improves over time. This accuracy is enhanced when teachers give feedback on students' self-assessment. - Dochy, Segers, & Sluijsmans, 1999

Boud (1992, 1995) developed a self-assessment schedule in order to provide a comprehensive and analytical record of learning in situations in which students had substantial responsibility for what they did. The main guidance was a handout that suggested the headings a student might use—goals, criteria, evidence, judgments, and further action.

Weaker learners often overrate themselves. Adams and King (1995) identified a three-step framework to help develop self-assessment skill: 1) learners work on understanding the assessment process, such as discussing good and bad characteristics of sample work, discussing what was required in an assessment, and critically reviewing the literature; 2) learners work to identify important criteria for assessment; 3) learners work towards playing an active part in identifying and agreeing on assessment criteria, and towards being able to assess their peers and themselves competently.

Another assessment framework examines the various dimensions of assessment (Garfield, 1994).

Trainer Bias

Negative attitudes toward student ratings are especially resistant to change, and it seems that faculty and administrators support their belief in student-rating myths with personal and anecdotal evidence, which [for them] outweighs empirically based research evidence. - Cohen, as cited in Cashin & Downey, 1992


The research on student SETEs [Student Evaluations of Teacher Effectiveness] has provided strong support for their reliability, and there has been little dispute about it. - Hobson & Talbot, 2001

Learners' ratings will serve their purpose if 1) you learn something new from them, 2) you value the information, 3) you understand how to make improvements, and 4) you are motivated to make those improvements (Centra, 1993).

Next Steps

Affective Behaviors

The Mayor of Bogota

Self Confidence


Changing Behaviors


References

Abrami, P. C. (1989). How Should We Use Student Ratings to Evaluate Teaching? Research in Higher Education, vol. 30, 221-227.

Adams, C. & King, K. (1995). Towards a framework for student self-assessment. Innovations in Education and Training International, vol. 32, pp. 336-343.

Boud, D. (1992). The use of self-assessment schedules in negotiated learning. Studies in Higher Education, vol. 17, pp. 185-200.

Boud D. (1995). Enhancing Learning through Self-assessment. London and Philadelphia: Kogan Page.

Boud, D. & Falchikov, N. (1989). Quantitative studies of self-assessment in higher education: a critical analysis of findings. Higher Education, vol. 18, pp. 529-549.

Cashin, W. E. (1995). Student Ratings of Teaching: The Research Revisited. IDEA Paper No. 32, September. Center for Faculty Evaluation and Development, Kansas State University, Manhattan, KS.

Cashin, W. E. & Downey, R. G. (1992). Using Global Student Ratings for Summative Evaluation. Journal of Educational Psychology, vol. 84, 563-572.

Centra, J. A. (1993). Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco: Jossey-Bass.

Cohen, P. A. (1981). Student Ratings of Instruction and Student Achievement: A Meta-analysis of Multisection Validity Studies. Review of Educational Research, vol. 51, Fall, 281-309.

Cohen, P. (1980). Effectiveness of Student-Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings. Research in Higher Education, vol. 13, 321-341.

Dancer, W. T. & Dancer, J. (1992). Peer rating in higher education. Journal of Education for Business, vol. 67, pp. 306-309.

d'Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, vol. 52, 1198-1208, p.1202.

Dochy, F. & Moerkerke, G. (1997). The present, the past and the future of achievement testing and performance assessment. International Journal of Educational Research, vol. 27, pp. 415-432.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The Use of Self-, Peer and Co-Assessment in Higher Education: A Review. Studies in Higher Education, November, 24(3), p.331.

Donnelly, M. & Woolliscroft, J. (1989). Evaluation of Clinical Instructors by Third-Year Medical Students. Academic Medicine, vol. 64, 159-164.

Garfield, J. (1994). Beyond Testing and Grading: Using Assessment To Improve Student Learning. Journal of Statistics Education, vol.2, no.1.

Howard, G. & Maxwell, S. (1980). Correlation Between Student Satisfaction and Grades: A Case of Mistaken Causation. Journal of Educational Psychology, vol. 72, December, 810-820.

Howard, G. & Maxwell, S. (1982). Do Grades Contaminate Student Evaluations of Instruction? Research in Higher Education, vol. 16, 175-188.

Hobson, S. & Talbot, D. (2001). Understanding Student Evaluations. College Teaching, vol. 49(1), January, p. 26.

Kaplan, R. (1974). Reflections on the Doctor Fox Paradigm. Journal of Medical Education, vol. 49, March, 310-312.

L'Hommedieu, R., Menges, R., & Brinko, K. (1990). Methodological Explanations for the Modest Effects of Feedback from Student Ratings. Journal of Educational Psychology, vol. 82(2).

Marsh, H. W. & Dunkin, M. (1992). Students' Evaluations of University Teaching: A Multidimensional Perspective. In J. C. Smart (Ed.), Higher Education: Handbook of Theory and Research, vol. 8, pp. 143-233. New York: Agathon.

Naftulin, D. & Ware, J. (1973). The Dr. Fox Lecture: A Paradigm of Educational Seduction. Journal of Medical Education, vol. 48, July, 630-635.

Snow, R. (1998). Abilities and Aptitudes as Achievements in Learning Situations. In Human Cognitive Abilities in Theory and Practice. McArdle, J. (Editor) & Woodcock, R. (Editor). Hillsdale, NJ: Erlbaum.

Ory, J. C., Braskamp, L., & Pieper, D. M. (1980). Congruency of Student Evaluative Information Collected by Three Methods. Journal of Educational Psychology, vol. 72, 181-185.