Pre-test / Post-test Evaluation of Learning
One common strategy to measure (assess) student learning in a course is to administer a pre-test / post-test assignment. At or near the beginning of instruction, a pre-test is given to the class to determine the class’s preexisting knowledge of the content area. Later on – at or near the end of instruction – the same assessment is given in an attempt to demonstrate measurable gains in student knowledge. Some useful suggestions concerning the implementation of such an assessment strategy can be found in this guide from the International Training and Education Center for Health.
Although this assessment strategy is popular, if you read much of the literature concerning pre- and post-test analysis, you will discover that it is a bit controversial. Its biggest limitation seems to be that the validity of any inferences you can make is rather low. Because there is rarely a control group (the whole class generally takes the pre- and post-tests), there is no reliable comparison to be made. Even if a statistically significant difference in scores is detected, we cannot be certain that the instruction provided in class actually caused that difference. Another weakness is that students do not always try their best on the assignments, so their scores may not truly reflect the state of their understanding.
I use pre-tests and post-tests in my courses (as one means of assessing student learning). The validity problem cannot be helped; since this is only one measure of student learning, however, I’m not greatly bothered by it. In practice, the gains seen in my pre-test / post-test scores correlate very well (r > 0.7) with exam scores. I try to mitigate the student effort complication by awarding bonus points. My students can earn five bonus points each for simply taking the pre-test and post-test, which encourages participation. They may earn an additional five points each for scoring at or above 70% on the assignments, which encourages them to actually try their best.
Analyzing and summarizing the results of these tests can be time-consuming. To simplify the process, I have created a spreadsheet that does it for me automatically. You can find a copy here or on the Resources page. You simply enter some basic course information and copy/paste the matched pre- and post-test scores in. A succinct summary of the test is generated along with a nifty graph. Simply save that one-page report as a PDF and you have documentation of student learning that can be entered into TracDat. Give it a try and let me know how you like it. Cheers until next time.
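If you prefer to script this kind of analysis rather than use a spreadsheet, here is a minimal Python sketch of one reasonable summary. The scores are invented for illustration, and the choice of a paired t statistic and Hake’s normalized gain is my assumption about a sensible summary, not a description of the workbook’s internals.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical matched pre/post scores (percent), one pair per student
pre  = [40, 55, 35, 60, 45, 50, 30, 65]
post = [70, 80, 60, 85, 75, 78, 55, 90]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

mean_gain = mean(diffs)
# Paired t statistic: mean difference divided by its standard error
t_stat = mean_gain / (stdev(diffs) / sqrt(n))

# Hake's normalized gain: fraction of the possible improvement achieved
norm_gain = (mean(post) - mean(pre)) / (100 - mean(pre))

print(f"Mean gain: {mean_gain:.1f} points")
print(f"Paired t statistic (df = {n - 1}): {t_stat:.2f}")
print(f"Normalized gain: {norm_gain:.2f}")
```

The paired design matters here: because the same students take both tests, each student serves as their own comparison, which partially offsets the lack of a control group.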
Mapping Course Outcomes
Yesterday, I wrote a bit about mapping course assessments to specific outcomes. Now I would like to show you a couple of ways that I try to do this in my own classes. I’ll tell you right up front that I do not think there is any one best way of doing this. Furthermore, these examples are not necessarily what I would consider best practices – I just want to give you some concrete examples to consider.
The first way that I map assessments to outcomes is by explicitly making the connections in my course syllabus. Below is one of my course outcomes for a content area in a Medical Microbiology course that I teach.
A. Microbial Diversity – Give examples of and compare and contrast different types of microbes (including viruses, bacteria, fungi, and protozoa) as well as identify various structures and define their functions.
Assessed via the homework (1-6), laboratory quizzes (1-4), lecture exam questions (especially exam 1), the laboratory practical, and the comprehensive final exam.
I also give feedback to the students concerning their performance on these graded assignments and break down their scores by learning outcome. So, for instance, following a lecture exam they would be able to determine their current state of understanding with regard to what I call learning outcome A (microbial diversity).
A second – more intuitive – way I try to communicate the relationship between assessment and outcome is through a graphical syllabus. I got this idea from reading The Graphic Syllabus and the Outcomes Map by Linda Nilson. This is a way of showing a great deal of information about the structure of a class in the form of a picture. I have linked to an older version of a graphic syllabus from one of my courses here. As I said before, these are just some examples of how I have tried to explicitly tie my assessments to the course outcomes. What about you? What have you tried in your courses? I am always eager to learn from others, so please feel free to comment; I would like to build some community here.
Assessment and the Syllabus
Hello again! I have taken a brief hiatus to enjoy the last glorious bits of summer. Now I, like you, must refocus my energies upon my upcoming classes.
As the new semester rapidly approaches, many of us will be preparing and updating our syllabi. This is an excellent time to consider describing your course assessments as teaching and learning tools. I have recently reread an excellent book by Kathleen Gabriel – Teaching Unprepared Students: Strategies for Promoting Success and Retention in Higher Education. Don’t let that title put you off; I am not trying to denigrate the quality of our students. I think that this book has many excellent suggestions concerning ways to clearly organize course materials and communicate our expectations to students of all abilities. The third chapter deals with the first week of class and setting the tone for the upcoming term. A syllabus with clear and measurable outcomes and explicit descriptions of the assessment rationale plays an essential role in this process. Having reviewed many syllabi, I can say that most of our courses have pretty good outcomes. However, the course assessments (be they tests, quizzes, or assignments) are rarely mapped directly back to those outcomes. One practice that I would like to encourage this year is to begin explicitly linking at least some of our assigned coursework to its corresponding outcomes.
Course assessments can play three complementary roles in our classes: formative feedback to the students, intermediate evaluation of current student skills, and summative evaluation of course goals. My mission is not to add more work to your load to accomplish these ends. Rather, I want to help you hit the assessment trifecta wherein one task simultaneously plays all three roles (at the track, you are on your own). I hope that as we share our assessment successes, near misses, and insights this year, we will all benefit – not by adding more to our already full teaching loads! Instead, I want to promote a cycle of instructional refinements that will lead to more efficient teaching, improved student learning, and rational program development. Let the adventure begin…
How can we create valid exams?
Yesterday, we briefly introduced two common assessment terms – integrative learning and authentic assessment. Today, I’m going to focus on another common term: validity. We write and give exams in our classes in an attempt to assess our students’ current knowledge and/or skill levels. The degree to which our tests actually measure our students’ learning is referred to as validity. There are many different aspects to consider when discussing validity. Today, I just want to briefly talk about content validity. Content validity refers to the extent to which our assessments measure the course materials that were presented in class and at appropriate cognitive levels. One of the best ways that I have found to improve content validity in my courses is the use of assessment blueprints.
An assessment blueprint is simply a two-dimensional matrix (learning outcomes × cognitive levels) that describes the content of the learning module covered by a particular assessment. To make your life easier, I have made an automated blueprint as a Microsoft Excel workbook. In my courses, I typically have four to six learning outcomes per assessment and like to provide prompts that use at least four different cognitive levels. Before creating an assessment, I decide what I think would represent a good balance of content across the various learning outcomes and cognitive levels and enter those numbers as my target values. I then fill in the matrix with point values until I reach the total score value for the exam. As you fill in the matrix, the running totals for each learning outcome and cognitive level are updated. Finally, I tweak my point values until the actual balance is close to the target values for each level. Based upon this matrix, I then sit down and construct the exam. This process helps to increase the likelihood that we will actually measure the things that we value with our assessment (content validity).
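If you would like to prototype the blueprint logic outside of Excel, the core of it is just row and column totals compared against targets. In this Python sketch, the outcome names, Bloom levels, targets, and point values are all hypothetical placeholders for a 100-point exam – substitute your own.

```python
# Outcomes (rows) and cognitive levels (columns); all names and numbers
# below are illustrative, not drawn from any particular course.
outcomes = ["A. Diversity", "B. Structure", "C. Metabolism", "D. Genetics"]
levels = ["Remember", "Understand", "Apply", "Analyze"]

target_rows = [30, 25, 25, 20]  # desired points per learning outcome
target_cols = [25, 35, 25, 15]  # desired points per cognitive level

# Points assigned to each (outcome, level) cell of a 100-point exam
points = [
    [10, 12, 5, 3],  # A. Diversity
    [8,  9,  6, 2],  # B. Structure
    [5,  8,  8, 4],  # C. Metabolism
    [2,  6,  6, 6],  # D. Genetics
]

row_totals = [sum(row) for row in points]        # running total per outcome
col_totals = [sum(col) for col in zip(*points)]  # running total per level

for name, actual, target in zip(outcomes, row_totals, target_rows):
    print(f"{name}: {actual} pts (target {target})")
for name, actual, target in zip(levels, col_totals, target_cols):
    print(f"{name}: {actual} pts (target {target})")
print(f"Exam total: {sum(row_totals)}")
```

In practice you would iterate on the `points` matrix – just as I tweak cells in the workbook – until both sets of totals sit close to their targets.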
You might want to begin using the assessment blueprint with an old exam first (that is what I did). Simply dig out one of your old exams. Try to break down the content covered by that exam into a few discrete learning outcomes. Then try to categorize each question of the exam by learning outcome and the level of Bloom’s taxonomy that would be necessary for students to answer it. As you move through the exam, question-by-question, tally the points into the assessment blueprint matrix. By the end of this exercise, you will have completely mapped out your assessment. You may be surprised (or dismayed – as I was) to see a strong imbalance in the construction of the assessment. I found my initial exams to be much too heavy in low-level cognitive outcomes (remembering predominated).
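The question-by-question tally can be sketched in a few lines as well. The exam below is entirely invented for illustration; the point is how quickly the summary exposes a remembering-heavy exam like the one I described.

```python
from collections import Counter

# Hypothetical tagging of an old exam: (outcome, Bloom level, points) per question
questions = [
    ("A", "Remember", 4), ("A", "Remember", 4), ("B", "Remember", 4),
    ("A", "Understand", 3), ("B", "Remember", 5), ("C", "Remember", 5),
    ("C", "Apply", 5), ("B", "Understand", 4), ("A", "Remember", 6),
]

points_by_level = Counter()
points_by_outcome = Counter()
for outcome, level, pts in questions:
    points_by_level[level] += pts
    points_by_outcome[outcome] += pts

total = sum(pts for _, _, pts in questions)
for level, pts in points_by_level.most_common():
    print(f"{level}: {pts} pts ({100 * pts / total:.0f}%)")
```

Here “Remember” soaks up 70% of the points – exactly the kind of imbalance the blueprint exercise is meant to reveal.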
I hope that you find this resource useful in your classes. Please feel free to post comments or email me with suggestions or questions. Thanks!
What is meant by “authentic assessment” and “integrative learning”?
Wow! Where to start? There is so much to talk about with regard to assessment. I guess I’ll begin by being a bit philosophical. There will be many future opportunities for more pragmatic discussions. I believe that assessment is not just another bit of paperwork added onto the instructional load in higher education. It is an integral and important part of the process of teaching. As both instructors and learners:
- We should model the things that we value.
- We should measure the things that are modeled.
- We should analyze the data that we have measured.
- We should modify our processes based upon our findings.
That is my philosophy of assessment in a nutshell. Now there are a lot of popular phrases (buzzwords) that appear in publications concerning assessment. Two that are important to the mission of Ferris State – as I understand it – are integrative learning and authentic assessment. As we continue to develop our assessment tools and reporting structures, it will be imperative to keep these concepts in mind and intentionally build upon them.
Integrative learning simply refers to structured experiences that emphasize or require interdisciplinary approaches to problem-solving. I think that Ferris really excels at this; we have a large number of very “practical” programs that are integrative by their very nature. The challenge for all of us is to really embrace this vision (this is the essence of the vision statement for our institution) in all of our programs and courses. In many cases, this can be folded into the learning outcomes as “critical thinking”. If so, our assessment instruments for this learning outcome should then attempt to describe how successfully our students can synthesize information from across different disciplines. This is an important theme and I plan to revisit it many times in future posts.
The term authentic assessment really bothers me. It seems to suggest that any other type of assessment is unimportant or even invalid. That is not the case at all. There are several different types of things that we try to teach and assess in higher education:
- Facts – a body of knowledge that can be measured with traditional assessments
- Skills – a set of abilities that can be measured with “authentic” assessments
- Attitudes – a set of beliefs that can be measured with surveys
Each of these is an important component of a liberal education and each needs to be assessed in our courses and programs. One website with excellent ideas about these types of assessments is The Authentic Assessment Toolbox page of Jon Mueller. This site has a wealth of ideas concerning the nature and importance of “authentic assessment” and many suggestions for how to go about implementing it in your courses. I have also put a link to this site on our “Websites” page. Next time I’ll provide a more practical suggestion for you to consider implementing in the upcoming semester. See you then.
Welcome to our new assessment blog!
I have established this blog to help support the assessment of integrative learning at Ferris State University. The goals of this project include:
- Defining and improving the assessment of student learning in the academic programs at Ferris State.
- Identifying, describing, and linking to information that has been recently published in books, journals, and on the web.
- Creating a dynamic environment where interested people from across our institution (and the entire world) can interact and share their thoughts about the assessment of student learning.
I hope that you will consider following this blog and feel free to chime in with your thoughts and opinions. I look forward to enhancing our students’ learning together.