Analysis is the systematic examination and evaluation of the findings or data obtained through assessment: the process of deriving meaningful, useful information about student learning from assessment results. Analyzing data should be a group effort, involving all program faculty and, when appropriate, faculty from outside the program.
Look for trends or patterns of evidence. Common patterns to consider:
- Patterns of Consistency: this type of pattern emerges from studying data gathered on the same outcome over a period of time, whether from semester to semester or from year to year.
- Patterns of Consensus: this involves disaggregating the data to determine whether all populations are achieving the expected level of performance. Aggregate data (e.g., an average score reported on an outcome measure) may hide the fact that a certain population of students is NOT achieving the expected level of performance. Data may be broken down by gender, first-generation status, non-traditional status, ethnic background, enrollment in traditional versus online classes, enrollment in day versus night classes, etc.
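The disaggregation point above can be sketched in code. This is a minimal illustration with hypothetical scores and a hypothetical subgroup variable (delivery mode); it shows how an acceptable-looking aggregate average can mask a subgroup performing below target.

```python
from statistics import mean

# Hypothetical outcome scores (0-100), each tagged with a subgroup
# (here, delivery mode; it could be any population of interest).
scores = [
    {"mode": "traditional", "score": 85},
    {"mode": "traditional", "score": 90},
    {"mode": "traditional", "score": 88},
    {"mode": "online", "score": 62},
    {"mode": "online", "score": 70},
]

target = 75  # hypothetical expected level of performance

# The aggregate average looks acceptable...
overall = mean(s["score"] for s in scores)
print(f"overall average: {overall:.1f}")  # 79.0, above target

# ...but disaggregating by subgroup tells a different story.
by_mode: dict[str, list[int]] = {}
for s in scores:
    by_mode.setdefault(s["mode"], []).append(s["score"])

for mode, vals in sorted(by_mode.items()):
    avg = mean(vals)
    flag = "below target" if avg < target else "meets target"
    print(f"{mode}: {avg:.1f} ({flag})")
```

Here the online subgroup averages well below the target even though the overall mean clears it, which is exactly the situation a pattern-of-consensus analysis is meant to surface.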
Other questions to ask or situations to consider as data are analyzed include:
- Do you need to disaggregate the data to analyze results for particular variables, such as method of instruction, day/evening section, campus, or adjunct versus full-time faculty?
- Is the "N" in the data set reasonable? Have proper sampling procedures been used?
- Do the data represent an acceptable level of achievement? For instance, if the data indicate that 80% of students performed at the expected level, what happened to the other 20%? Is it acceptable that 20% of students did not meet the minimum standard?
- Whether a target was achieved or not, were there areas defined within the tool in which students consistently demonstrated deficiencies? Likewise, were there areas in which students’ performance exceeded expectations?
- Did the assessment tool work? Was it appropriate? Did the tool validate student learning of a particular outcome?
- Did the tool satisfactorily distinguish various levels of achievement?
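Two of the questions above, whether the achievement rate is acceptable and whether the tool distinguishes levels of achievement, can be checked with a simple tally. This sketch uses hypothetical rubric levels; the level names and counts are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical rubric levels assigned by an assessment tool.
levels = ["exceeds", "meets", "meets", "meets", "approaching",
          "meets", "exceeds", "approaching", "meets", "below"]

dist = Counter(levels)
n = len(levels)

# How many students met or exceeded the expected level?
met = dist["meets"] + dist["exceeds"]
print(f"met or exceeded: {met}/{n} ({met / n:.0%})")

# A tool that lumps nearly everyone into a single level may not be
# distinguishing achievement; inspect the spread across levels.
for level in ("exceeds", "meets", "approaching", "below"):
    print(f"{level}: {dist[level]}")
```

If the resulting distribution piles up in one category, that is a signal the tool may not satisfactorily distinguish levels of achievement; and the students in the "approaching" and "below" tallies are the ones behind the question "what happened to the other 20%?".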