Based on the problem description, certain assumptions must be made. First, no assessment was done prior to the design and implementation of the course. Second, any work done has not followed established or accepted evaluation procedures, which renders any current knowledge of student performance, in reference to the effectiveness of the course, unreliable at best.
Going forward from this point, I will answer the last question first.
How might one use the evaluation results?
Evaluation results can be used by the two decision making entities, the college and the company, to plot the future of this package of courseware.
The hoped-for benefits of evaluation are explicitly stated. MCCD would like its implementation of the financial literacy program validated. It would like to know whether continuing the program is warranted and whether the results of the course are worth the expense. The course designer, Cashflow Technologies, would like its courseware validated in order to garner the credibility to market the program on a wider scale, with increased distribution and sales as the ultimate goal. Are these realistic goals for an evaluation? Yes and no.
What factors ensure evaluation will be successful?
The design of the evaluation and the quality of the research will be major factors in determining whether it provides the validation the school seeks and the credibility the company needs to market its courseware widely.
Objectivity is another critical factor in the success of the evaluation. A rubber-stamp evaluation by a friendly, in-house evaluator would not provide the assurance the company needs to authorize the expense of a wider marketing campaign for the courseware; nor would such a tainted evaluation provide the college with acceptable data to answer any critics of the program, should any emerge.
With BSU students conducting an evaluation that follows best-practice standards, both the school and the company will get data that is objective, and therefore of significant value.
What are the benefits and limitations of an evaluation?
Evaluation can provide some support for the effectiveness of the courseware. It can answer the question 'Is it worth it?' Evaluation can determine whether students learn from the courseware and, if they do, how much. Can the evaluation tell the company that its courseware is ready to be marketed on a wider scale?
First, no faculty training has occurred, so evaluating the effectiveness of faculty training or seminars by using the performance of current and future students is possible and of some value. Since no training or seminars have yet been attended, the most rigorous and valid method of evaluating these materials is a post-test-only control group study. This type of evaluation of student performance data could provide insight into the value of faculty training and/or attendance at training seminars. The limit of the evaluation in this case is obvious: only the effect of faculty training is examined, not the overall value of the course. Such information might be inferred from the evaluation of the training program, but it would not be based upon approved methodology.
Second, the limits are in part based on the fact that the course is already in place, despite the absence of proper pre-deployment due diligence. The limited evaluation budget will put a serious, scientifically rigorous study out of reach. However, because student performance data exists from previous semesters, an interrupted time series evaluation might provide the most accurate and useful basis for decision making by both the company and the school.
One limitation is that, since the present courseware is the only available method of teaching the subject of financial literacy, there is no control group against which to compare this method of teaching. Evaluators and decision makers are therefore limited to a 'well, it's better than nothing' answer to the question 'Does this course work?'
The teachers to this point have been entirely untrained. Data collected previously on student performance is therefore of limited value in comparison to data collected after teacher training occurs. An evaluation of the faculty courseware and training seminars could be done and might be useful in reference to the effectiveness of the seminars and faculty training, but it may not provide either the company or the college with the validation and credibility both decision-making bodies desire.
Another limitation, perhaps the most alarming, though strictly speaking it may be incorrect to categorize it as a limit at all: the evaluation might, and in fact is likely to, given the absence of good practices in the initial implementation of this course, produce an unintended outcome. If the program does not prove to be effective and cost efficient, both entities will be subject to criticism for deploying it without an established, research-guided approach to designing the course in the first place.
So rather than providing a springboard for national distribution, the evaluation might doom the financial literacy program to the scrap heap. Rather than validating the college's decision to implement the program, an evaluation that does not support the program's value or effectiveness might call into question the judgment, methodology, and fiscal prudence of the college's decision makers.