Friday, September 30, 2011

Using student course evaluations to design faculty development workshops.

INTRODUCTION

Teaching consumes fifty percent or more of a professor's time (Bowen and Schuster, 1986), yet professors are tenured, promoted and evaluated more on the basis of their research and scholarly activities than on their teaching. It may be too much to say that institutions of higher learning "have paid lip service" to the importance of teaching, or that "Policies, procedures and criteria for the evaluation and promoting of faculty in higher education contribute to the marginalization of teaching" (Davidovitch and Soen, 2006, p. 351). It is curious, however, that the activity that consumes so much time, and is seen by many outside the academy as the overarching objective of a college or university (namely, to educate students), is often of lesser importance when evaluating faculty performance.

This may be due, at least in part, to the reward structure outside of colleges and universities. As Kai Peters (2005, p. 150) wrote in a letter to the editor of the Harvard Business Review:

    Business schools, through their accreditation systems, are driven to adhere to a common academic model that heavily emphasizes the number of articles their faculty members publish in first-tier journals rather than the impact the research might have on practitioners. Opting out of this system carries high penalties for those institutions--possible loss of credentials, of degree awarding powers, of access to government funding.

It may also be because research and scholarly activity is easier to evaluate than is teaching. Most institutions count journal articles, consider the quality of the journals (often using published rankings), how often articles are cited, how many conference presentations are made, how many funding grants have been applied for and received, and so on. This is not all that difficult, either conceptually or in practice.

Assessment of a professor's teaching effectiveness requires, as Graeme Decarie (2005) stated, "some standard measure of what students know before the course and what they know after." It may be too much to say, as Decarie then opined, "No one has the faintest idea how to do that." We do know how to do it: have some idea what is to be accomplished in the class beforehand, administer a pre-test, administer a post-test, and compare the results. There may be professors, schools, colleges or universities that do something like this, but outcomes-based measures are certainly not the standard procedure for evaluating a professor's teaching effectiveness. And even this would be more involved than the current standard procedure for evaluating scholarly activities.

Instead, the current standard procedure at most institutions is to rely on one form or another of end-of-course student evaluation as an indicator of faculty teaching performance. As Seldin (1993) opined, "student ratings have become the most widely used--and, in many cases, the only--source of information on teaching effectiveness" (see, also, Wilson 1998 for a similar observation). And student evaluations are not outcomes-based measures; they are largely satisfaction surveys.(1)

Using student course evaluations as input into personnel decisions about who to hire, hire back, tenure, and promote is controversial.(2)
The purpose of the present paper is not to further contribute to the large literature regarding the validity and reliability (or lack thereof) of student evaluations, but to suggest that since we do administer them, and since there is zero likelihood that we will stop administering them, department chairs, program directors, deans and those responsible for faculty development programs should use the information collected for formative purposes. The student voice, while impacted by any number of variables, does say something about the instruction students have received, and it ought not be ignored. While we should not mistake student course evaluations for an assessment of teaching effectiveness, we should fully appreciate that satisfied students may learn more, but they certainly evaluate professors higher and likely have a higher opinion of the program, the school, the college or the university. In this age of external and public ranking of institutions, this should matter a great deal, and not only to faculty but to department chairs, program directors, deans, university provosts and presidents.

FORMATIVE USE OF STUDENT EVALUATIONS

While most of the literature on student course evaluations focuses on their summative use, Centra (1993, Ch. 4) does discuss their formative use. His focus is on how individual faculty members, striving to improve their own classroom instruction, can use the information provided by student evaluations. Centra emphasizes, however, that a professor may glean something from course evaluations, believe the information credible, and be motivated to use the information, yet not know how to make the changes called for by students.

There is evidence that those faculty who receive help make more progress than those who go it alone (Cohen 1980; Cohen and McKeachie 1980; Williams and Ceci 1997). But even here the evidence is ambiguous. For example, Davidovitch and Soen (2006) evaluated their institution's attempt to promote quality instruction, as measured by student evaluations, by investigating a range of variables for their impact on student evaluation scores. One relationship they were interested in was that between end-of-course student evaluation scores and faculty participation in teaching workshops, something that had only recently been introduced at their institution.

They found, over a five-semester period, that there was significant improvement in student evaluation scores. They also found no correlation between participation in teaching workshops and scores on the student evaluations of teaching. In short, improvements in teaching "were not related to instructors' participation in teaching workshops" (p. 373).

Davidovitch and Soen discussed several possible reasons for these surprising and certainly disappointing findings. One possible reason not discussed was that the topics for the teaching workshops were unrelated to what students were being asked to evaluate on their teacher and course evaluations.

HOW WORKSHOP TOPICS ARE SELECTED

Like many colleges and universities, my institution conducts faculty teaching workshops. I asked one of the organizers in charge of a recent round of workshops how the themes or topics for workshops are chosen. I was told they "ask faculty what they want," that they "monitor IT help desk calls to identify problem areas," and that they "pay attention to 'hot topics' (for example, a current hot topic is digital copyright)." They also "sometimes have focus groups" with students.
Each of these approaches will probably produce a workshop that is interesting and informative. But will they improve student opinion of, and satisfaction with, their classes? Not necessarily, and only accidentally if the workshops are unrelated to what students are being asked to evaluate. Conducting focus groups with students is an appropriate strategy, but why collect new and original data from students when virtually every institution already and regularly surveys students about how professors perform, how well they perform, and what students like and dislike about their classes? The data are already collected; department chairs, deans, and those charged with faculty development activities should use them. Unfortunately, current practice at far too many institutions is to collect the data, calculate summary statistics, and provide these summary statistics, and sometimes the raw data and the written comments, to the faculty member, who is then left to do with them as he or she sees fit.

STUDENT EVALUATION FORMS

Most student evaluation forms ask students to numerically rate a list of 15, 20, sometimes 30 classroom teaching performance traits. Some items are fairly specific (Instructor puts outline of lecture on board); others are more general (Class sessions are well planned). Student evaluation forms almost always include a general or overall evaluation of the instructor and/or of the course, and they almost always provide space for the student to write comments about the course and the way it was taught.

If instructors look at their course evaluations at all, they often turn to the overall evaluation items first and then to the written comments. Faculty look at the written comments for anecdotal insights and, as often as not, for confirmation of their own great performance. What they less carefully consider are the multiple individual items rated by students. Looking at 15, 20 or 30 items, rated by 20, 60 or more students, to ascertain how students rated various aspects of a professor and his or her course is much more difficult and time consuming than scanning the written responses for a quick sense impression.

The obverse is true when a department, school, college or division within a university is looking at several thousand evaluations for several hundred courses. Reading, coding, and making sense of the written comments would be a daunting task; statistically analyzing a series of rating scales is much easier.

STATISTICALLY ANALYZING COURSE EVALUATIONS

The statistical analyses of student course evaluations that I have seen are limited to the calculation of the number and proportion of responses in each response category for each item on the form and the calculation of the average response for each item. These are presented to the instructor, sometimes accompanied by the same calculations for the department or for the school. Occasionally they are even accompanied by results from peer schools, if the evaluation forms are administered and analyzed by an outside vendor.

A recent analysis I received for a course I taught at another university during summer 2008 will serve as an illustration (see Table 1, below). Had I been a regular member of the faculty, I would have also received a summary average representing my own history of ratings for each of the thirteen items on their form, a similar average for the school in which the course was taught, and a similar average for the division of the university within which the school was housed.
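A tabulation like the one in Table 1 is easy to produce in-house once the raw forms are in a data file. The following is a minimal sketch, not the vendor's actual procedure: the file name, the item_1 ... item_13 column names, and the 1-5 coding of responses are assumptions made for illustration.

```python
import pandas as pd

# Assumed layout: one row per returned evaluation form, one column per rated
# item, with responses coded 1 (Strongly Disagree / Very Poor) through
# 5 (Strongly Agree / Excellent).
evals = pd.read_csv("course_evaluations.csv")            # hypothetical file
item_cols = [c for c in evals.columns if c.startswith("item_")]

# Number of responses and average response per item (the right-hand
# columns of a report like Table 1).
summary = pd.DataFrame({
    "n": evals[item_cols].count(),
    "mean": evals[item_cols].mean().round(2),
})

# Count of each response category (1-5) per item, plus percentages
# (the left-hand columns of a report like Table 1).
counts = evals[item_cols].apply(
    lambda col: col.value_counts().reindex(range(1, 6), fill_value=0)
)
percents = counts.div(counts.sum(axis=0), axis=1).mul(100).round(2)

print(summary)
print(counts)
print(percents)
```

Pointed at an institution's own data, the same few lines also produce the department- or school-level comparisons that sometimes accompany these reports.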
Presumably one can lookat one's performance on any one item and compare it with theperformance of others or even with one's own historicalperformance. Do you do better than others? Do you do worse? Are yougetting better? Are you getting worse? How this information can be usedfor self-improvement is not obviously clear. As Centra pointed out,faculty members often do not know how to make the changes called for bythe students? Presently far too many institutions use such simple data analysisof student course evaluations, and often considering only the overallevaluation score(s), as an indication of teaching performance and asinput into personnel decisions. This paper suggests thatadministrations--department chairs, program administrators, deans--canuse the information already collected, by way of student courseevaluations, to help plan and design faculty development activities andworkshops that will actually help improve scores on student courseevaluations. A more sophisticated analysis of the data is necessary,however. USING FACTOR ANALYSIS Factor analysis is well suited for exploring the interrelatednessbetween multiple questions asked on a typical course evaluationinstrument. By applying an advanced form of correlation analysis to theresponses received, a list of 15, 20 or 30 items can be reduced to justa few characteristics that students might, themselves, have difficultyidentifying. The adage in correlation analysis is that correlation does notimply causation. This helps to conceptualize what is at work in factoranalysis. Correlation does not imply causation because a third variablemay be the unmeasured (or latent) cause of the observed fluctuation andvariation in the two measured variables. Factor analysis is a way toidentify that third, unmeasured variable (or factor). As an analytical technique, factor analysis relies on overlappingcorrelations, searching for patterns of co-variation among thevariables. If an instrument has eleven questions, and the responses tofive of them co-vary together, the idea is that they each measure thesame underlying construct, or "factor." If the other sixco-vary together, they are measuring another underlying construct. Thus,eleven "variables" are reduced to two "factors."Examining the items that co-vary together, that "load" on a"factor," for what they have in common provides anunderstanding of the underlying construct. When applied to 15, 20 or 30variables, the process "reduces" the many to a few. The endresult is easier interpretation and action. It must always to be remembered that factor analysis is anexploratory tool. Further, it works only on the questions that haveactually been asked. If critical questions are not on the courseevaluation form, or if the wrong questions have been asked, factoranalysis cannot identify characteristics that would have been identifiedif a different set of questions had been asked. Based on the actualquestions asked of students, it identifies what sub-groups of questionsare tied together, and, in the minds of the students, what ties themtogether. The problem at hand is to analyze student course evaluations suchthat the student voice is heard and faculty development workshops can beplanned that actually address student issues and, thereby, help facultyimprove their student evaluation scores. If students are metaphoricallyscreaming answers to 15, 20 or 30 different questions, it will be hardfor a faculty development office to hear what they are saying. 
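The eleven-items-to-two-factors logic can be made concrete with a small simulation. The sketch below uses entirely made-up data: two latent constructs drive eleven simulated ratings (five from one construct, six from the other), and an eigen-analysis of the resulting correlation matrix recovers exactly two large components whose loadings split the items back into the two original groups.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_students = 1000

# Two hypothetical latent constructs behind the ratings.
construct_a = rng.normal(size=n_students)
construct_b = rng.normal(size=n_students)

# Eleven observed items: items 1-5 driven mostly by construct A,
# items 6-11 mostly by construct B, each with its own noise.
items = np.column_stack(
    [construct_a + 0.6 * rng.normal(size=n_students) for _ in range(5)]
    + [construct_b + 0.6 * rng.normal(size=n_students) for _ in range(6)]
)

corr = np.corrcoef(items, rowvar=False)           # 11 x 11 correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(corr)  # returned in ascending order
eigenvalues, eigenvectors = eigenvalues[::-1], eigenvectors[:, ::-1]

print(np.round(eigenvalues, 2))                   # two values well above 1, nine small

# Loadings on the two retained components: rows 1-5 load heavily on one
# column, rows 6-11 on the other (signs are arbitrary).
loadings = eigenvectors[:, :2] * np.sqrt(eigenvalues[:2])
print(np.round(loadings, 2))
```

With real evaluation data the groupings are, of course, not known in advance; they are what the analysis is meant to reveal.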
The problem at hand is to analyze student course evaluations such that the student voice is heard and faculty development workshops can be planned that actually address student issues and, thereby, help faculty improve their student evaluation scores. If students are metaphorically shouting answers to 15, 20 or 30 different questions, it will be hard for a faculty development office to hear what they are saying. If students will slow down and consolidate their thoughts into fewer "factors," it will be easier for a faculty development office to understand. That, in essence, is what applying factor analysis to student course evaluations attempts to do, after the fact.

THE ANALYSIS

For the present analysis and illustration, course evaluation data from my School of Business Administration was used. At the time of this study our course evaluation instrument was administered as a pencil-and-paper questionnaire, with students recording their replies on a Scantron form. It consisted of eighteen ungrouped statements (see Table 2, below). Although the instrument is now administered online, it consists of the same eighteen ungrouped statements. Using a 5-point scale anchored with Strongly Disagree (1) and Strongly Agree (5), students indicate the extent to which they agree or disagree with each statement. These eighteen items are followed by two general overall evaluation questions. The first is an overall evaluation of the instructor; the second an overall evaluation of the course. The overall ratings use a 5-point ordinal scale (Excellent, Good, Satisfactory, Poor, and Very Poor) to record the student response. Because each of these five response categories is presented in association with a number (Excellent = 5, etc.), they are treated by my institution as interval measures.

The initial data set consisted of two years of course evaluations. There were 701 classes and 20,877 evaluation forms, from both undergraduate and graduate programs and from all departments. Although many faculty teach in both programs, only undergraduate evaluations were included in the analysis because the overall evaluation scores differ markedly between undergraduate and graduate classes. In addition, removed from the data set were all independent study classes, all classes with fewer than 10 students, and all classes in which fewer than half of the enrolled students completed a course evaluation form.

Since the problem at hand is one of using student course evaluations to aid in designing faculty development workshops, it was further decided to focus on those sections which students indicated were most in need of help. Quartile scores for each of the two overall ratings were calculated, and only those courses that were in the fourth (lowest) quartile on both the overall evaluation of the instructor and the overall evaluation of the course were selected for analysis. These are the instructors and courses that students evaluated lowest and, presumably, the instructors and courses most in need of help (from the students' point of view). The final data set includes 3,146 evaluations, representing 103 sections. Because listwise deletion of cases was employed in the analysis, the final sample size was 3,017 student evaluations. The mean response to each of the eighteen variables is presented in Table 3, below.

Because the intent of the analysis is to reduce the set of measured variables (the 18 items on the course evaluation form) to a smaller set of underlying dimensions for the sake of parsimony and conceptual simplicity, Principal Components Analysis (PCA) was used to extract the factors. Because it is believed the resulting factors will be independent, and because the desire is to produce a solution in which measured variables substantially load on only one factor rather than on several factors, varimax rotation was employed.

In the final solution, discussed below, five factors were kept. This number was arrived at through an iterative process.
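Before turning to that iterative process: the screening and extraction just described can be sketched in a few dozen lines. This is a hedged reconstruction, not the code actually used for the study. The file name, the column names (section_id, program, enrollment, forms_returned, overall_instructor, overall_course, ITEM_1 ... ITEM_18), and the use of scikit-learn's PCA with a hand-rolled varimax rotation are all assumptions; SPSS, SAS, R or the factor_analyzer package would serve equally well.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        if s.sum() < objective * (1 + tol):
            break
        objective = s.sum()
    return loadings @ rotation

# Assumed layout: one row per returned form, with section-level metadata and
# the eighteen item responses coded 1-5.
forms = pd.read_csv("evaluation_forms.csv")                  # hypothetical file
items = [f"ITEM_{i}" for i in range(1, 19)]

# Screening described in the text: undergraduate sections only, at least ten
# students enrolled, and at least half of the enrolled students responding.
forms = forms[(forms["program"] == "UG")
              & (forms["enrollment"] >= 10)
              & (forms["forms_returned"] / forms["enrollment"] >= 0.5)]

# Keep only sections in the lowest quartile on BOTH overall ratings.
by_section = forms.groupby("section_id")[["overall_instructor", "overall_course"]].mean()
cutoffs = by_section.quantile(0.25)
worst = by_section[(by_section["overall_instructor"] <= cutoffs["overall_instructor"])
                   & (by_section["overall_course"] <= cutoffs["overall_course"])].index
sample = forms[forms["section_id"].isin(worst)].dropna(subset=items)  # listwise deletion

# Principal components on the standardized items, then varimax rotation.
z = (sample[items] - sample[items].mean()) / sample[items].std()
pca = PCA(n_components=5).fit(z)
print(np.round(pca.explained_variance_, 2))   # eigenvalues; Kaiser's criterion keeps those >= 1.0

raw_loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = pd.DataFrame(varimax(raw_loadings), index=items,
                       columns=[f"Component {i}" for i in range(1, 6)])
print(rotated.round(3))                       # compare with Table 4
```

Changing n_components and re-running the last block is all the "iteration" requires; the factors are then judged on interpretability and on their stability from one solution to the next.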
The initial analysis applied Kaiser's criterion that only factors with an eigenvalue of 1.0 or more be retained. This initial solution retained two factors, one of which can only be described as a global factor: eleven of the eighteen items substantially load on it (.500 or greater). This factor was very difficult to interpret and did not provide much guidance for the practical problem at hand: developing faculty development workshops that address the issues in the minds of the students.

Subsequent iterations increased the number of factors to be extracted and rotated. In this iterative process an eye was kept on the stability of the factors with each iteration. The 3-factor solution split the largest factor of the 2-factor solution into two separate factors; the smaller of the two original factors remained stable. The 4-factor iteration removed two variables from the untouched smaller factor of the original 2-factor solution, producing a fourth factor. In all subsequent iterations this two-variable factor remained stable. The 5-factor iteration segregated two variables from one of the two factors generated in the 3-factor solution, creating a second two-variable factor; in all subsequent iterations this two-variable factor also remained stable. The 6-factor and 7-factor solutions each extracted one additional variable from the previous solution, creating two additional one-variable factors.

The 5-factor solution was settled on for the present purposes. The "themes" or "factors" in the minds of the students that emerged follow:

* Whether or not the professor is stimulating, interesting, and thought provoking. (Communication Skills)

* Whether or not the course goals and the basis for determining grades are clear and followed. (Course Organization)

* Whether or not the actual workload and grading were fair and appropriate. (Evaluation)

* Whether or not the instructor was caring and respectful. (Personality)

* Whether or not the texts, readings and assignments contributed to student understanding. (Assignments)

The final rotated solution is presented in Table 4, below.

At this point, the issue facing those responsible for faculty development workshops is this: for which of these five factors should a workshop be developed? The answer lies in the evaluation scores given by students to each of the five factors. A simple averaging of the evaluation scores in Table 3 for each item in each factor is presented in Table 5, below. Students are clear. Faculty most need to make their courses stimulating, interesting and thought provoking. Following that are issues involving the selection and use of texts, readings and other assignments.

Of course, the preceding is based on the actual items contained on an actual course evaluation form. Ask different questions and a different analysis will result.
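The priority ordering in Table 5 can be reproduced directly from the published numbers. The short sketch below simply averages the Table 3 item means within the factor memberships shown in Table 4; it assumes nothing beyond those two tables.

```python
import pandas as pd

# Item means from Table 3 (fourth-quartile undergraduate sections, N = 3,017).
item_means = {
    1: 4.03, 2: 3.91, 3: 4.05, 4: 4.09, 5: 3.36, 6: 3.97, 7: 3.13, 8: 3.63,
    9: 3.63, 10: 3.78, 11: 4.01, 12: 3.85, 13: 3.94, 14: 3.92, 15: 4.11,
    16: 3.22, 17: 3.46, 18: 3.47,
}

# Factor memberships from the rotated solution in Table 4.
factors = {
    "Factor 1: Communication Skills":               [5, 7, 16, 17, 18],
    "Factor 2: Course Organization":                [1, 2, 3, 4, 6],
    "Factor 3: Evaluation of Students":             [10, 11, 12, 13],
    "Factor 4: Instructor Personality":             [14, 15],
    "Factor 5: Selection of Texts and Assignments": [8, 9],
}

factor_scores = pd.Series({
    name: sum(item_means[i] for i in members) / len(members)
    for name, members in factors.items()
}).sort_values()

# Lowest-scoring factor first: the strongest candidate for a workshop topic.
print(factor_scores.round(2))
```

Sorted this way, communication skills (3.33) and the selection of texts and assignments (3.63) surface as the first workshop candidates, which is exactly the reading given in Table 5.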
CONCLUSION

Information obtained from course evaluations is almost universally used for personnel decisions: who to hire, promote, tenure and reward with a pay raise. The information ought to be used, as well or instead, to help faculty improve their course evaluation scores. If the objective is to improve student satisfaction as measured by course evaluation instruments, then department chairs, program directors, deans, and those responsible for faculty development would be wise to skip "hot button issues" like digital copyright, as important as they may be, and focus, instead, on what students are telling them in their end-of-term course evaluations. Since the data are collected, they ought to be used for formative purposes as well as for summative purposes. They should be used, that is, to improve student satisfaction. The faculty member benefits, the program benefits, and the college or university benefits.

In the present example, students are saying that faculty should focus on fundamentals, with communication skills at the top. It might be desirable, before proceeding, to further investigate, by way of focus groups with students, what it is about classroom communication skills that is lacking and what it is about the texts, the readings, and the assignments they find disagreeable. But at least then the focus groups with students will be targeted and not simply a fishing expedition.

This much having been accomplished, the next step is clearly to provide faculty with the opportunity to attend a targeted faculty development workshop or series of workshops and then to monitor future student course evaluations to determine whether the workshops have the desired impact and outcome. What little there is in the literature suggests, as indicated above, that those faculty who receive help make more progress than those who go it alone. A particularly interesting case is that reported by Williams and Ceci (1997).

Ceci, a seasoned and respected psychologist, was invited by his university's faculty development program to participate in a teaching effectiveness workshop. He used this opportunity to conduct a naturalistic experiment to "test" whether or not oral presentation skills, alone, can make a difference. He taught a class in the fall, participated in the workshop conducted by a media consultant over the winter break, and then taught the same class the following spring. He used the same syllabus, presented the same lectures (independent observers watched videotaped sessions from the two semesters and confirmed the content was the same), followed the same schedule at the same time, used the same book, and gave the same assignments and the same exams. All that changed from the fall semester to the spring semester was the manner in which he presented the material in class: greater pitch variability in his voice, more hand gestures, and so on. His course evaluation scores improved on every aspect of the student evaluation form, including items such as instructor's knowledge, organization, accessibility, the quality of the textbook, and fairness in grading.

REFERENCES

Bowen, H. R. and J. H. Schuster (1986). American Professors: A National Resource Imperiled. New York: Oxford University Press.

Centra, J. A. (1993). Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco: Jossey-Bass Publishers.

Clayson, D. E. and M. J. Sheffet (2006). Personality and the Student Evaluation of Teaching. Journal of Marketing Education 28(2), 149-160.

Cohen, P. A. (1980). Using Student Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings. Research in Higher Education 13, 321-341.

Cohen, P. A. and W. J. McKeachie (1980). The Role of Colleagues in the Evaluation of College Teaching. Improving College and University Teaching 28(4), 147-154.

Davidovitch, N. and D. Soen (2006). Using Students' Assessments to Improve Instructors' Quality of Teaching. Journal of Further and Higher Education 30(4), 351-376.

Decarie, G. (2005). AT ISSUE: Course evaluation is 'a good idea gone terribly bad'. Concordia's Thursday Report 30(3). Retrieved on October 8, 2008 from http://ctr.concordia.ca/2005-06/oct_13/04/

Gray, M. and B. R. Bergmann (2003). Student Teaching Evaluations: Inaccurate, Demeaning, Misused. Academe Online (Sept/Oct). Retrieved on September 28, 2008 from http://www.aaup.org/AAUP/pubsres/academe/2003/SO/Feat/gray.htm
McLaughlin, F. S. and H. L. Bates (2004). Using the Delphi Method in Student Evaluations of Faculty. Academy of Educational Leadership Journal 8(2), 29-43.

Merritt, D. (2007). Bias, the Brain, and Student Evaluations of Teaching. St. John's Law Review 82, 235-287.

Peters, K. (2005). How Business Schools Lost Their Way. Harvard Business Review 83(9), 97-104.

Seldin, P. (1993). The Use and Abuse of Student Ratings of Professors. The Chronicle of Higher Education 39(46), A40.

Williams, W. M. and S. J. Ceci (1997). How'm I doing? Change 29(5), 13-24.

Wilson, R. (1998). New Research Casts Doubt on Value of Student Evaluations of Professors. The Chronicle of Higher Education 44(19), A12-A14.

Raymond Benton, Jr., Loyola University Chicago

ENDNOTES

(1) Instructional effectiveness is about more than just measuring student satisfaction. As Merritt states, "At a very minimum thoughtful evaluation of teaching requires time and attention" and "takes more time than traditional student evaluations" (2007, pp. 281, 283). McLaughlin and Bates (2004) discuss an approach for obtaining reflective and deliberative input from students via the Delphi method, and Merritt (2007, pp. 281-286) describes a Small-Group Instructional Diagnosis scheme.

(2) Research into and debate about the validity, reliability, and utility of student course evaluations blossomed soon after the practice of using them for administrative decisions began. The literature on the adequacies and inadequacies of student course evaluations is now voluminous. Extensive reviews can be found in each of the following: Deborah J. Merritt (2007), "Bias, the Brain, and Student Evaluations of Teaching," St. John's Law Review 82: 235-287, provides an informative discussion of much of it, as well as extensive references. Dennis E. Clayson and Mary Jane Sheffet (2006), "Personality and the Student Evaluation of Teaching," Journal of Marketing Education 28(2): 149-160, covers much of the same territory and also offers extensive references. Additional discussion and references can be found in Philip C. Abrami, Les Leventhal and Raymond P. Perry (1982), "Educational Seduction," Review of Educational Research 52(3): 446-464; Peter Seldin (1993), "The Use and Abuse of Student Ratings of Professors," The Chronicle of Higher Education 39(46), 21 July, p. A40; Mary Gray and Barbara R. Bergmann (2003), "Student Teaching Evaluations: Inaccurate, Demeaning, Misused," Academe Online, September-October, http://www.aaup.org/AAUP/pubsres/academe/2003/SO/Feat/gray.htm; Charles R. Emery, Tracy R. Kramer and Robert G. Tian (2003), "Return to Academic Standards: A Critique of Student Evaluations of Teaching Effectiveness," Quality Assurance in Education 11(1): 37-46; Nitza Davidovitch and Dan Soen (2006), "Using Students' Assessments to Improve Instructors' Quality of Teaching," Journal of Further and Higher Education 30(4): 351-376; and Robin Wilson (1998), "New Research Casts Doubt on Value of Student Evaluations of Professors," The Chronicle of Higher Education 44(19): A12-A14.

Table 1: Instructor Score Analysis (selected items; 17 responses per item)

1. Instructional methods enhanced my analytical problem solving skills
   Strongly Disagree: 0   Disagree: 1 (5.88%)   Neutral: 5 (29.41%)
   Agree: 9 (52.94%)   Strongly Agree: 2 (11.76%)
   Average response: 3.71

2. The instructional methods enhanced my critical thinking skills
   Strongly Disagree: 0   Disagree: 0   Neutral: 2 (11.76%)
   Agree: 10 (58.82%)   Strongly Agree: 5 (29.41%)
   Average response: 4.18

7. Instructor's effectiveness in conducting the class
   Very Poor: 0   Poor: 0   Neutral: 4 (23.53%)
   Very Good: 9 (52.94%)   Excellent: 4 (23.53%)
   Average response: 4.00

10. Instructor's knowledge of material and subject
   Very Poor: 0   Poor: 0   Neutral: 1 (5.88%)
   Very Good: 9 (52.94%)   Excellent: 7 (41.18%)
   Average response: 4.35

11. Rate the degree to which the course met your expectations
   Very Poor: 1 (5.88%)   Poor: 1 (5.88%)   Neutral: 2 (11.76%)
   Very Good: 7 (41.18%)   Excellent: 6 (35.29%)
   Average response: 3.94

Table 2: The course evaluation instrument*

Items 1-18 are rated on a five-point scale with 1 = Strongly Disagree and 5 = Strongly Agree.

1. The goals of the course were clearly expressed at the beginning of the term.
2. What was actually taught was consistent with the goals of the course.
3. The course syllabus clearly explained the basis for determining grades.
4. The instructor followed the stated basis for determining grades.
5. The instructor communicated in a clear, effective way.
6. The instructor was organized and prepared for class.
7. The instructor presented the material in an interesting, thought-provoking way.
8. The text and/or assigned readings contributed to my understanding of the subject.
9. Other assignments (papers, projects, homework, etc.) contributed to my understanding of the subject.
10. I received useful and timely feedback on my performance.
11. The amount of work demanded for this course was appropriate and reasonable.
12. The instructor used appropriate methods to evaluate my performance.
13. The instructor was fair in grading my performance.
14. The instructor was sensitive to students' varying backgrounds and academic preparations.
15. The instructor was caring and respectful of students.
16. The course stimulated my interest in the subject area.
17. The course helped me to develop intellectual skills, such as critical thinking or problem solving.
18. I have achieved my education goals for this course.

Items 19-20 are rated on the following scale: 5 = Excellent, 4 = Good, 3 = Satisfactory, 2 = Poor, 1 = Very Poor.

19. Overall rating of instructor.
20. Overall rating of course.

* The first 20 items are followed by two additional overall ratings, one for library resources and one for computer resources. These are then followed by standard census items. There are an additional four questions pertinent only to laboratory and clinical courses. Questions 21-31 are not relevant to this analysis, so their exact wording and response structure is omitted.

Table 3: Descriptive Statistics (analysis N = 3,017 for every item)

ITEM 1   Goals of course were clearly expressed: mean 4.03, s.d. 1.018
ITEM 2   Material taught was consistent w/ goals: mean 3.91, s.d. 1.067
ITEM 3   Syllabus clearly explained basis for determining grades: mean 4.05, s.d. 1.084
ITEM 4   Followed stated basis for determining grades: mean 4.09, s.d. 1.036
ITEM 5   Instructor communicated in a clear, effective way: mean 3.36, s.d. 1.291
ITEM 6   Instructor was organized and prepared for class: mean 3.97, s.d. 1.112
ITEM 7   Material presented interestingly and thought-provokingly: mean 3.13, s.d. 1.332
ITEM 8   Text or readings contributed to my understanding: mean 3.63, s.d. 1.245
ITEM 9   Other assignments (papers, projects, homework) contributed: mean 3.63, s.d. 1.210
ITEM 10  Student received useful and timely feedback: mean 3.78, s.d. 1.171
ITEM 11  Amount of work was appropriate and reasonable: mean 4.01, s.d. 1.039
ITEM 12  Instructor used appropriate methods for evaluation: mean 3.85, s.d. 1.142
ITEM 13  Instructor was fair in grading performance: mean 3.94, s.d. 1.114
ITEM 14  Instructor was sensitive to students' varying backgrounds: mean 3.92, s.d. 1.169
ITEM 15  Instructor was caring and respectful of students: mean 4.11, s.d. 1.114
ITEM 16  Course stimulated interest in the subject matter: mean 3.22, s.d. 1.359
ITEM 17  Helped develop intellectual skills: mean 3.46, s.d. 1.252
ITEM 18  Student achieved educational goals: mean 3.47, s.d. 1.258

Table 4: Rotated Component Matrix (loadings on components 1 through 5, in order)

ITEM 16  Course stimulated interest in the subject matter: .836, .171, .229, .162, .201
ITEM 7   Material presented interestingly and thought-provokingly: .775, .284, .093, .251, .217
ITEM 17  Helped develop intellectual skills: .772, .210, .316, .114, .265
ITEM 18  Student achieved educational goals: .719, .250, .388, .184, .198
ITEM 5   Instructor communicated in a clear, effective way: .624, .503, .131, .333, .172
ITEM 1   Goals of course were clearly expressed: .302, .740, .246, .189, .161
ITEM 3   Syllabus clearly explained basis for determining grades: .112, .732, .455, .076, .127
ITEM 2   Material taught was consistent w/ goals: .399, .712, .243, .186, .196
ITEM 6   Instructor was organized and prepared for class: .315, .691, .087, .299, .243
ITEM 4   Followed stated basis for determining grades: .130, .680, .512, .193, .124
ITEM 13  Instructor was fair in grading performance: .260, .334, .711, .325, .152
ITEM 12  Instructor used appropriate methods for evaluation: .325, .331, .705, .279, .192
ITEM 11  Amount of work was appropriate and reasonable: .283, .251, .601, .275, .261
ITEM 10  Student received useful and timely feedback: .302, .355, .507, .269, .240
ITEM 15  Instructor was caring and respectful of students: .239, .257, .312, .798, .108
ITEM 14  Instructor was sensitive to students' varying backgrounds: .278, .237, .346, .753, .143
ITEM 8   Text or readings contributed to my understanding: .289, .183, .173, .096, .838
ITEM 9   Other assignments (papers, projects, homework) also contributed: .344, .278, .284, .165, .680

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization. Rotation converged in 8 iterations.

Table 5: Averaged Scores for Items in Each Factor

Factor 1  Communication Skills                 3.33
Factor 5  Selection of Texts and Assignments   3.63
Factor 3  Evaluation of Students               3.90
Factor 2  Course Organization                  4.01
Factor 4  Instructor Personality               4.02
