Current interest in enhancing student learning in engineering is widespread. Because the curriculum has a major effect on what students learn, the design and implementation of curricular programs are a high priority for innovative engineering colleges. The Foundation Coalition (FC) incorporates several strategies to (a) reform engineering curricula, (b) increase student performance, and (c) evaluate reform with appropriate, authentic assessment. This document provides case studies of diverse institutions in the FC and showcases examples of how assessment and evaluation data have been used to facilitate curricular decision making.
What is assessment and evaluation?
Assessment is defined as the data-gathering strategies, analyses, and reporting processes that provide information that can be used to determine whether intended outcomes are being achieved.[1] Evaluation uses assessment information to support decisions on maintaining, changing, or discarding instructional or programmatic practices.[2] Together, these strategies can inform:
- The nature and extent of learning,
- Curricular decision making,
- The correspondence between learning and the aims and objectives of teaching, and
- The relationship between learning and the environments in which learning takes place.[3]
Why should you care about assessment?
Assessment of student learning serves several purposes. Studies of student learning can be used to communicate achievement of specified outcomes (for example, those required by EC 2000 Criterion 3), to provide feedback on learning to both the student and the teacher, to motivate students, and to reinforce classroom strategies that work well while targeting those that warrant further investigation.
In addition to monitoring student learning, assessment can be used to examine program efficacy. Such assessment can gauge the degree of success of a program after its completion, or it can run throughout a program to foster continuous improvement. Programmatic assessment can be used to manage projects and communicate project outcomes, to evaluate the effectiveness of institutional programs, and to set the direction of future efforts to improve the program over time.
How do you get data?
Quantitative studies yield numerical data that give a broad view of program impact. Data collection may involve pretests and posttests on course material, surveys, observations, or analysis of institutional data such as grades, enrollment trends, retention, and graduation rates. Quantitative data provide useful summaries of what is happening in a program and can reveal patterns, anomalies, and relationships; however, they do not necessarily explain why those patterns occur. Qualitative studies accommodate individual subjectivity and detail, and thus delve deeper into the social context behind student performance, attitudes, and behaviors. The study of social change frequently relies on qualitative research because of its focus on social context and patterns. Qualitative research aims to define meanings and actions in particular contexts, to show how those meanings and actions are organized, and to interpret patterns in light of broader social contexts and similar settings. In qualitative studies, researchers observe participants, or interact and talk with them about their perceptions, through individual interviews, focus groups, and document collection.
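To make the pretest/posttest approach concrete, the short sketch below computes the normalized gain, <g> = (%post - %pre) / (100 - %pre), popularized by the Hake study listed under Resources; it expresses how much of the available room for improvement a class actually realized. The function names and sample scores are hypothetical illustrations, not data from any FC study, and the sketch assumes scores are percentages on a 0-100 scale.

    # Minimal sketch of pretest/posttest analysis using the normalized
    # gain <g> from Hake (1998). Sample data below are hypothetical.

    def normalized_gain(pre, post):
        """Fraction of the available improvement actually realized."""
        if pre >= 100.0:
            raise ValueError("A perfect pretest score leaves no room for gain.")
        return (post - pre) / (100.0 - pre)

    def mean(scores):
        """Arithmetic mean of a list of scores."""
        return sum(scores) / len(scores)

    # Hypothetical matched pre/post percentages for five students.
    pretest = [35.0, 50.0, 42.0, 60.0, 28.0]
    posttest = [70.0, 75.0, 65.0, 85.0, 55.0]

    # Hake's class-level <g> uses the class averages, not per-student gains.
    g = normalized_gain(mean(pretest), mean(posttest))
    print(f"Class-average normalized gain <g> = {g:.2f}")

In Hake's survey, <g> below 0.3 was typical of traditionally taught courses, while interactive-engagement courses clustered between 0.3 and 0.7, so even this simple statistic can help distinguish instructional approaches.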
Resources
1. Gagné, R.M., L.J. Briggs, and W.W. Wager, 1998. Principles of Instructional Design. Orlando, FL: Holt, Rinehart and Winston, Inc.
2. Hanson, G., and B. Price, 1992. Academic Program Review. In M.A. Whitley, J.D. Porter, and R.H. Fenske (eds.), The Primer for Institutional Research. Tallahassee, FL: Association for Institutional Research.
3. Satterly, D., 1989. Assessment in Schools. Oxford, UK: Basil Blackwell Ltd.
4. Hake, R.R., 1998. Interactive-engagement vs. traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 66(1): 64-74.
© 2001 Foundation Coalition. All rights reserved.