President Turpin’s Communication of 1 March 2019 | Ronning Centre Event Cancellation Notice (21 March 2019)

I’ve been asked to provide the text of David Turpin’s communication as University of Alberta President (1 March 2019) to Deans, Chairs, and Directors. I gather some Chairs put it immediately into wider circulation while others did not, so the text is still not in everyone’s hands. I also provide the cancellation notice for the Ronning Centre “Coffee and Conversation” event that was not held as scheduled on 21 March 2019.


David Turpin Communication to Deans, Chairs, and Directors at the University of Alberta, 1 March 2019:



Turpin Communication Undermines Academic Freedom, Curtails Free Speech

Earlier this week, Alberta Politics blogger David Climenhaga published a blogpost in which he noted that a University of Alberta event at the Augustana campus had been cancelled as a result of a communication that UAlberta president David Turpin had circulated to “unit” heads (Deans of Faculties, Chairs of Departments, and Directors of Centres). The cancellation of the event shows that at least one faculty member at the University has been misled by the President’s communication into believing that the University may not host political conversations of any kind during the period leading up to the election. This is wrong. Whether the President intended it or not, his communication has resulted in academic freedom infringements with widespread implications for all Albertans.

The communication pretends simply to pass on information related to Alberta’s new Elections Act, which prohibits postsecondary institutions from taking any partisan position on political matters. The Elections Act also prohibits institutional funds from being used to donate to political campaigns.

The Elections Act protocols make sense. Universities should not, as institutions, take positions on any political campaign, and institutional funds should of course not be used to donate to political campaigns. That would be a clear abuse of public funds for special interests.

The problem with Turpin’s communication is that, while reminding “unit” heads of these protocols, it fails to speak positively about academic freedom protections and the rights of all members of the University community to free speech. The institution may not take political positions. The individuals who comprise it, however, are entirely free to do so. President Turpin’s communication should have expressly stated this. It should also have expressly stated that the communication was in no way intended to infringe upon the academic freedom rights and responsibilities of academic staff.

In failing to make these essential points, Turpin’s communication has sown widespread confusion across the University’s campuses, and created the climate in which it was possible for a faculty member, uncertain about its implications, to cancel an event for the Ronning Centre at the Augustana campus in which Alberta politics would be discussed.

No faculty member at the University of Alberta should feel the pressure, no matter how subtle, to curtail their academic activities as a result of President Turpin’s communication. And no academic discussion about Alberta politics should come to a halt at any University of Alberta campus because of the provisions of the Elections Act.

Bringing such an event to a halt is an offence against the academic freedom of the faculty member who had planned it. All faculty members at the University are free to engage with the election as they choose, both in the course of research and teaching activities and as citizens entitled, along with all others, to free speech.

When any faculty member feels pressure to curtail these activities, both the University community and Albertans more generally pay a very heavy price, one that involves losses to democracy.

The mission of a university is to produce and disseminate knowledge. To do that, its academic staff depend upon the rights and responsibilities of academic freedom, which require them to share with the public the results of their research within established protocols. Academic staff also have, as part and parcel of their academic freedom protections, rights of extramural and intramural critique — rights to criticize the institution, as well as rights to speak up publicly, in any forum of their choosing, on any issue of importance to them.

The premise of these rights is that democracy depends upon robust debating of all ideas — and our universities are crucial institutions for the most rigorous forms of democratic debate.

President Turpin’s communication is a problem because it creates a very narrow sense of what kind of discourse academic staff at the University may engage in during the election period. It declares:

While the university encourages individuals affiliated with the institution to be engaged in the political process and to vote in the upcoming election, the use of public university resources for political or campaign purposes is prohibited.

Nowhere does the communication positively assert academic freedom protections or free speech rights. The clause I’ve just quoted in fact implies that the University “encourages” only voting in the election and some other nebulous engagement in the political process. The communication does not characterize the University as a place of free speech on political matters. Nor does it clearly state that this free speech extends to all members of the University community, including non-academic staff and students. And nowhere does it state that the communication is not in any way intended to constrain the academic freedom protections of academic staff.

There are some very confusing discussions occurring right now amongst academic staff in which it is claimed that it is the Government, not the University, that is acting against academic freedom. Even if this is true (which I seriously doubt), it is the responsibility of the University to educate the Government.

It is also the responsibility of the University not to mislead the members of the community into believing that they cannot participate as fully as they wish in democratic conversations about the election, and influence its outcome. The institution may take no position. Every one of its members is entirely free to do so. It was a serious error of judgment for Turpin to permit a communication to be issued in his name that allowed for any misconstruction of these crucial points. And if the University’s Government Relations Office is encouraging faculty members to cancel their events or otherwise curtail their activities, it is actively infringing upon academic freedom protections.

When the free speech of academic staff is suppressed, we all lose, for we lose the expert opinions and expert shaping of discussions in which academics are trained.

President Turpin should immediately issue a clarifying statement to the entire university community so that there is absolutely no confusion about the free speech rights of all members of the community, and the special rights and responsibilities of the academic freedom protections of academic staff. This election needs to be decided as a result of the proliferation of free speech, and not even the slightest hint of the suppression of it — certainly not at any of Alberta’s postsecondary institutions.

 


Read this! Dutch Study on “Baseline Grant” System for Research Funding (Guest Post by Andrew Gow)

Dear colleagues,

I urge you to read this thought piece on our current research funding model. It presents a simple and cost-effective alternative to our current labour-intensive method of allocating funding. The potential results are surprising.

https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0183967#pone-0183967-t001

Just as our local FEC process grinds endlessly to produce average outcomes almost identical to the pay scales at York University, where salaries are based on seniority and progress through the ranks (with a very few salary outliers at UofA, namely the very productive and the very unproductive publishers), so too does the research granting culture of Canada and many other countries devote endless time and human resources to picking winners. As this article suggests, research money could be allocated quite fairly and with minimal effort by simply distributing it evenly across those eligible to apply for it, and allowing them to spend it on the existing approved items, accumulate it for one big project, aggregate it with their colleagues, or turn it back into the pot. While anonymous double-blind peer review might guarantee standards, it also licenses anonymous, unaccountable abuse, gate-keeping, factionalism, and innovation-blocking. Our candidacy, thesis defence, and hiring processes alone ought to be sufficient guarantee of scholarly credentials to be trusted with research funds.
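As a rough illustration of the arithmetic of such a baseline scheme, here is a minimal sketch; the envelope size, the number of eligible researchers, and the banking and pooling scenarios are all hypothetical figures chosen for illustration, not actual granting-council numbers.

```python
# Toy illustration of a "baseline grant" allocation: a fixed envelope divided
# evenly among eligible researchers, who may spend their share on approved
# items, bank it across years for a bigger project, or pool it with colleagues.
# All figures are hypothetical and chosen only to show the arithmetic.

envelope = 30_000_000    # hypothetical annual envelope for a discipline
eligible = 2_000         # hypothetical number of eligible researchers

share = envelope / eligible
print(f"Annual baseline share per researcher: ${share:,.0f}")

# Banking the share over several years for one larger project
years_banked = 4
print(f"Banked over {years_banked} years: ${share * years_banked:,.0f}")

# Pooling shares across a small team for a single year
team_size = 5
print(f"Pooled across a team of {team_size}: ${share * team_size:,.0f}")
```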

Best wishes,

Andrew Gow

History & Classics


Witchcraft, Oracles, and Magic among the Albertanae; or WrestleMania: University SmackDown (Guest Post by Kathleen Lowrey)

I have read the recent contributions of my colleagues Michelle Maroto and Carolyn Sale on the debate around the use of teaching evaluations at the University with considerable interest and appreciation. They take up a question that has generated wide interest and commentary in higher education: why is it that university administrators are so attached to measures of teaching quality that are demonstrably misleading? It is an important puzzle. Luckily, I’m an anthropologist, and I’m here to help.

This is a problem with which anthropologists have long grappled in their research. It is, essentially, the problem of magic. Why have so many people in so many places and at so many times seemed not to have realized that magical methods are not effective? Why don’t they apprehend that astrology is not predictive, that oracles are unreliable, that chicken entrails are pretty much undecipherable? All of these might have appeared to offer promising leads in a few trials, but with repeated use shouldn’t people have *noticed* that they don’t really produce the results claimed for them?

The answer is that these practices do work, and beautifully:  just not on the tasks to which they are ostensibly put. The difficulties they tackle and the solutions they produce are political, not technical or epistemological. They absolutely do produce effects in the world.  People who turn to such means again and again are not fools about causality. Not in the least!

Any anthropologist worth his or her salt has seen this happen in the field (or at least read about it in graduate school): social alliances are made and broken via the deployment and interpretation of magical phenomena. What does it mean that the crops are failing, people are getting sick, and the weather is behaving oddly? Whose fault is it and what auguries can different factions invoke to build their coalitions? Magical undertakings are modes of persuasion and — to use an idiom apropos to the case at hand — team-building exercises.

My non-anthropological colleagues are understandably perplexed and frustrated that reiterating rigorous empirical results about bias in teaching evaluations leaves the University of Alberta administration totally unmoved. Do faculty skeptics really have to explain about the fallibility of chicken entrails *again*? Shouldn’t it be obvious? Administrators say they want good information about teaching quality. USRIs cannot be relied upon to provide it.

Dear colleagues: look behind another door. Yes, you are right, teaching evaluations don’t do what it says on the box, so to speak. So what is it that they do accomplish that makes them so valuable to administration, so resilient in the face of failure, such perennial objects of ritual obeisance? They are instruments of political alliance-making. That’s the voodoo that they do so well.

[Image: Ric Flair]

Witness the last occasion on which they were challenged in a ceremonial clash from which they emerged, as they always must, victorious. What tournament of symbols was enacted there? Caped and costumed as the heel, the proffies (BOOOOOOOOO!): a role taken at the most recent GFC, for the extra delectation of the crowd, by a lady proffie (double BOOOOOOOOO!). Lady profs are the Ric Flairs of the academic game, everybody knows that. Come on. 

 

But wait, guys, wait! It’s gonna be okay!

[Image: Hulk Hogan]

HULK UNIVERSITY ADMIN HOGAN!!!!!  THAT DUDE IS AWESOME!

My metaphors have crept a bit (blame Roland Barthes!), but look: the utility of teaching evaluations is in figuring faculty as the heel for the purposes of shared governance.  That’s why it doesn’t matter very much if teaching evaluations are any good at evaluating teaching. That’s not what they are for. The contemporary administratively-dominated university hasn’t come about because university administrators are fools who don’t understand basic statistics. It’s come about because they are, among other things, masterful politicians.

Conflict over USRIs creates a dramatic narrative that can be framed as one in which professors wish to escape accountability. Students, who know what it is to be judged by professors, are concerned to have means to judge professors in their turn. Administration appears to step in as a powerful ally, assisting students in holding the responsible line.

Students are quite right to want the quality of the teaching they receive rigorously evaluated and they are quite right to insist their input and oversight be prioritized in that evaluation. Faculty were themselves students and know the various horrors of incompetent, tyrannical, or unjust teaching. We are on students’ side about this. Students and faculty working together could design brilliant systems of evaluation that would be superior in fairness and ameliorative actionability to the ones that are utilized at present.

But by splitting students from faculty on this highly visible issue — an issue that is understandably a top priority for students — university administrations win tremendous powers.  They are thereby able to frame faculty as the “heels” for the purposes of many other governance conflicts to which administration, students, and faculty are party. By showily seeming to ally themselves with students on the “goodness” (fuzzily defined) of USRIs, they distract from the many, many other teaching issues that affect students and about which students are never consulted at all.

A critically important example, on which I will end, is the constantly increasing reliance upon contract academic staff in undergraduate teaching. This has dramatic consequences for students, and yet is an issue upon which they are hardly invited to think, let alone vote. Students often do not know, and are not encouraged to understand, which of their professors are tenured and tenurable and which are contingent. The quality of the classroom performance could be stellar or abysmal from either, but when students need things like recommendation letters they may not realize that a letter from a full-time faculty member may be weighted differently by evaluation committees than one from contract faculty. Nor may they realize that if they need a letter after the lapse of a few years, a temporary academic might be impossible to track down, where a permanent one is usually attached to the same old familiar department (or, if they have moved on to a tenured position at another university, easily traceable by that department). Students, who in my experience are earnestly concerned about fairness, may not realize that a harsh evaluation of an instructor can lead to a collegial discussion about teaching methods for a full-time faculty member but to the loss of livelihood for a contingent one. By the same token, effusive praise can lead to a raise for a full-time faculty member while producing nothing at all for a contingent instructor who is the first to be cut when cuts come.

Faculty would love to talk about these things with students. When it comes to university governance, our interests are much more often allied than opposed. But although faculty collectively spend vastly more time with students than do administrators, we rarely use the main context in which we come to know and care about students to reach out to them about how the university is run. It would be awkward within the teaching relationship, which we certainly take to be as precious as students do. Where we do meet and where we could and should build alliances, in forums for shared governance, a bit of black magic is performed to divide us. It’s an old trick, and an effective one.


“Whatsoever things are true”? GFC votes to allow continuing use of possibly discriminatory USRIs

At its meeting Monday afternoon the University of Alberta’s General Faculties Council (GFC) voted to endorse a set of recommendations from the Committee on the Learning Environment (CLE) that allows the University’s Universal Student Ratings of Instruction (USRIs) to continue to be used in employment decisions at the University — this despite the fact that a growing body of research shows that student evaluations of this kind discriminate against women.

GFC took this decision in regard to a CLE report, already discussed on this blog here, by me, and here, by Michelle Maroto (Sociology), that is strikingly dismissive of the research suggesting the kinds and degree of discrimination at work in student evaluations of teaching. 

President David Turpin framed the discussion with opening remarks about GFC’s “legislated responsibilities” under the Post-Secondary Learning Act, and the agreement with the Association of Academic Staff (AASUA) that evaluation of teaching should be multi-faceted. He described the situation as “extraordinarily complicated.” I did not hear a word from him that indicated any concern with the possibility that the USRIs are in any way discriminatory.

Proponents of CLE’s recommendations followed the lead of the President. They spoke of the need for a multi-faceted approach to the evaluation of teaching without taking up the question of what it means for any instrument in that multifaceted approach to be possibly discriminatory. They also insisted that the committee’s work should continue, implying that if GFC disallowed the use of USRIs for “summative” purposes — that is, the use of USRI results in hiring, promotion, tenure, and salary decisions for the academic staff — that somehow this would bring all of the work of CLE to a halt. The chair of CLE contributed to this view when she pleaded with GFC not to leave the committee “spinning its wheels.” There was only one glancing reference to the research showing that student evaluations of teaching discriminate against women, this from a Dean, who made a feint of acknowledging the “scholarship on bias” but then declared that this “scholarship” should not affect the University’s use of USRIs, as “no tool is perfect.”

Almost all those speaking in favour of the CLE recommendations were senior white men. One of these declared his “need” for the USRIs so he can have feedback from his students. Such a comment patently ignores the fact that the mechanism I was urging, a new, sophisticated evaluation to be used for formative feedback only, would secure feedback from students. That’s what it does! Sadly, speakers preferred to deal in red herrings rather than deal with the facts and implications of the research pointing to the discriminatory aspects of student evaluations. How can the members of an institution dedicated to teaching and research not take seriously research that has such profound implications for our evaluation of how we teach?

It has been clear to me for years that GFC cannot pretend on any reasonable basis to be a proper venue of shared governance, equivalent to healthy Faculty Senates. It rarely discusses or votes on anything important, and even where it does so the decisions are not the decisions of faculty, for Vice-Presidents, Deans, and students greatly outnumber faculty on GFC. Monday’s meeting suggests, however, that GFC is much worse off than I realized. It had an opportunity to engage in decision-making of widespread importance to the University community, and its decision-making was not conscientious.

The University’s motto is Quaecumque vera, whatsoever things are true. GFC needs to take its decisions on behalf of the University not in relation to what is administratively easy because it upholds the status quo, but in relation to what is morally right, and what is morally and factually true. The University should do nothing other than scrupulously eschew all forms of discrimination. If it cannot do so where research points to a problem with one of the practices upon which it depends, how can we not wonder what other forms of discrimination are informing the work of the University?

 


Comments on the Recent GFC USRIs Report (Guest Post by Michelle Maroto, Sociology)

Student evaluations and how to use them have been common topics for discussion here and at other universities for many years. As a result, I wasn’t surprised to see yet another report and set of summary recommendations from the UofA on this issue. I was surprised, however, by some of the conclusions presented in relation to the research conducted in this report. In several cases, the conclusions do not fit with the actual available evidence, which, I would argue, is quite problematic.  

Because I don’t have time to comment on all of the research presented in this report, I’m just going to discuss some of the research on gender and race bias in evaluations, since this area connects to some of my own research on discrimination more broadly. Here, I was very surprised to see the authors come to the conclusion that “The literature in this category is extensive and conflicted” (p. 4). Is the evidence really that conflicted? Even the list of literature included in Appendix A shows a large and persistent gender bias across studies. It’s especially apparent in studies with a strong methodological framework and in more recent research on the topic, some of which was not actually included in the report.

For evidence of no bias, the report authors rely on a 2012 meta-analysis by Wright and Jenkins-Guarieri. On page 4 of the Summary Report they state that “Wright and Jenkins-Guarieri (2012) conducted a meta-analysis of 193 studies and concluded that student evaluations appear to be free from gender bias.” However, if you read beyond the abstract, you will see that the article is a meta-analysis of meta-analyses, and that it includes only one meta-analysis related to gender, a 1993 study covering 28 studies (not the 193 cited in the report).

Wright and Jenkins-Guarieri (2012) call into question their own findings regarding gender bias in evaluations. They note the following at the end of the article:

Lastly, it appears that interactions between instructor and student gender do not impact SET ratings significantly, according to one meta-analysis. This indicates that instructors should consider neither their gender nor that of their students when receiving and interpreting SET results. The results also suggest that administrators do not need to consider instructors’ gender when assigning instructors to various classes. However, the meta-analysis largely included studies from the 1970s, and more current research is needed before making conclusive statements based on gender (p. 693). 

One area specifically that warrants further investigation is the biasing variable of gender. In examining the only meta-analysis focusing on gender as the primary biasing variable on SETs, Feldman (1993) concluded that gender had little effect on SETs, but more current research may suggest otherwise (Bachen, McLoughlin, and Garcia 1999; Centra and Gaubatz 2000); it is also worth noting that the majority of studies included in the meta-analysis were conducted in the 1970s (p. 695).

This is the primary study that the report authors use to conclude that “the literature in this category is extensive and conflicted” (p. 4). Limitations were also present in the other two studies (Centra and Gaubatz 2000; Smith et al. 2007) that the report authors list as showing no gender bias. For instance, Centra and Gaubatz (2000) were primarily interested in the gender of students, not the gender of instructors.

In contrast, the authors list seven studies that indicate some form of gender bias. These even include studies that apply an experimental and audit methodology (MacNell, Driscoll, and Hunt 2015), which tend to present the best evidence in relation to discrimination (Pager and Shepherd 2008; National Research Council 2004). Below, I have also listed some additional references that further investigate these issues, which the report authors seem to have missed. Here, I would like to draw special attention to a forthcoming article by Mengel, Sauermann, and Zölitz (2017), which relies on a quasi-experimental dataset of 19,952 student evaluations from the Netherlands. Stark and Freishtat (2014) also have a detailed review of SET evidence that seems to be missing from this report.

In addition to the limitations in the review of gender bias in evaluations, I was also concerned about the lack of a discussion of potential racial bias in student evaluations. I know that research on this topic is very limited, but we need to consider race as well. A few studies (Subtirelu 2015; Wagner, Rieger, and Voorvelt 2016) do show some evidence of bias. This should be noted. 

If the university insists on using USRIs to determine pay and promotion decisions for faculty, we must address these issues. If bias is present, as it appears to be in the research, this will limit opportunities for women and racial minority tenure-track faculty members. The potential consequences for contract instructors are much worse. In this case, evaluations can influence whether or not they will have a job the following semester. Finally, gender bias does not just affect women. Gendered expectations for male faculty can also influence student perceptions and evaluations (Sprague and Massoni 2005). 

There is no doubt that we need more research on this issue (and many others). I absolutely agree with the report authors on this point. Many of the studies are small and connected to a single course or department. These could definitely be expanded. Let’s put together a larger one here. Let’s talk about analyzing USRI data at the University of Alberta. TSQS provided some information about the overall median scores for men and women on one question. Let’s take a look at each question. Let’s break down the results by course year, course size, and department. Let’s also look at scores over time. I would also love to see an analysis of written comments. How might these differ by gender, age, and race? Analyses of comments on RateMyProfessor.com (yes, not a random sample, but still informative) indicate that disparities are likely (see http://benschmidt.org/profGender/# for an interactive look at terms used to describe male and female faculty). We love to talk about analyzing “big data.” TSQS has a ton of it. Let’s analyze it and connect it to other data. 
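As a minimal sketch of the kind of breakdown proposed here, suppose we had an extract of USRI records with columns for question, score, instructor gender, department, and course size. The file name and column names below are assumptions for illustration only, not the actual TSQS schema.

```python
# Minimal sketch of the proposed breakdowns on a hypothetical USRI extract.
# The file name and column names are illustrative assumptions, not the real
# TSQS data structure.
import pandas as pd

usri = pd.read_csv("usri_extract.csv")  # hypothetical extract

# Median score on each question, split by instructor gender
by_question = (
    usri.groupby(["question", "instructor_gender"])["score"]
        .median()
        .unstack("instructor_gender")
)

# The same comparison within department and course-size bands, where gaps
# hidden by campus-wide medians may become visible
usri["size_band"] = pd.cut(
    usri["course_size"], bins=[0, 25, 100, 10_000],
    labels=["small", "medium", "large"]
)
by_context = (
    usri.groupby(["department", "size_band", "instructor_gender"],
                 observed=True)["score"]
        .agg(["median", "mean", "count"])
)

print(by_question)
print(by_context)
```

Even tabulations this simple would show where a single campus-wide median hides gaps that only appear within particular departments, course levels, or class sizes.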

At a university, it is imperative that we use research to inform policy. It is even more important that we use good research to inform our policies. If we are going to spend the time and money to conduct such studies, we need to make sure that the research done is high quality and useful. 

 

References 

Bianchini, S., Lissoni, F., & Pezzoni, M. (2013). Instructor characteristics and students’ evaluation of teaching effectiveness: Evidence from an Italian engineering school. European Journal of Engineering Education, 38(1), 38-57.

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 10.

Braga, M., Paccagnella, M., & Pellizzari, M. (2014). Evaluating students’ evaluations of professors. Economics of Education Review, 41, 71-88.

Guarino, C. M., & Borden, V. M. H. (2017). Faculty service loads and gender: Are women taking care of the academic family? Research in Higher Education, 58(6), 672-692.

Handley, I. M., Brown, E. R., Moss-Racusin, C. A., & Smith, J. L. (2015). Quality of evidence revealing subtle gender biases in science is in the eye of the beholder. Proceedings of the National Academy of Sciences, 112(43), 13201-13206.

Loes, C. N., Salisbury, M. H., & Pascarella, E. T. (2015). Student perceptions of effective instruction and the development of critical thinking: A replication and extension. Higher Education, 69(5), 823-838.

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.

Mengel, F., Sauermann, J., & Zölitz, U. (2017). Gender bias in teaching evaluations. IZA Discussion Paper No. 11000. Available at SSRN: https://ssrn.com/abstract=3037907

Miles, P., & House, D. (2015). The tail wagging the dog: An overdue examination of student teaching evaluations. International Journal of Higher Education, 4(2), 116-126.

National Research Council. (2004). Measuring racial discrimination. National Academies Press.

O’Meara, K. A. (2016). Whose problem is it? Gender differences in faculty thinking about campus service. Teachers College Record, 118(8), 1-38.

Ortega-Liston, R., & Rodriguez Soto, I. (2014). Challenges, choices, and decisions of women in higher education: A discourse on the future of Hispanic, Black, and Asian members of the professoriate. Journal of Hispanic Higher Education, 13(4), 285-302.

Pager, D., & Shepherd, H. (2008). The sociology of discrimination: Racial discrimination in employment, housing, credit, and consumer markets. Annual Review of Sociology, 34, 181-209.

Sprague, J., & Massoni, K. (2005). Student evaluations and gendered expectations: What we can’t count can hurt us. Sex Roles, 53(11-12), 779-793.

Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research, 9.

Subtirelu, N. C. (2015). “She does have an accent but…”: Race and language ideology in students’ evaluations of mathematics instructors on RateMyProfessors.com. Language in Society, 44(1), 35-62.

Wagner, N., Rieger, M., & Voorvelt, K. (2016). Gender, ethnicity and teaching evaluations: Evidence from mixed teaching teams. Economics of Education Review, 54, 79-94.

 


GFC and the USRIs: How Should Teaching Evaluations at the University of Alberta Be Used?

This is a brief post meant to stimulate conversation about a question faculty and other instructors across campus should be asking: in a decade in which studies indicate that student evaluations of teaching involve bias against certain groups of instructors, how should our teaching evaluations at the University of Alberta, the USRIs (Universal Student Ratings of Instruction), be used?

The issue was on the floor Monday at the first 2017-18 meeting of the General Faculties Council (GFC) in relation to a report tabled by the Committee on the Learning Environment (CLE) responding to a 30 May 2016 motion of GFC. Problem is, neither the formal paperwork for the motion nor the report itself correctly cited the motion. As a result of an amendment I moved at that May 30th meeting in 2016, the motion included a crucial word that should have been a touchstone for the committee’s work. The word omitted from the formal paperwork and every mention of the motion in the report is in blue:

THAT the General Faculties Council, on the recommendation of the GFC Executive Committee, request that the GFC Committee on the Learning Environment report by 30 April 2017, on research into the use of student rating mechanisms of instruction in university courses. This will be informed by a critical review of the University of Alberta’s existing Universal Student Ratings of Instruction (USRIs) and their use for assessment and evaluation of teaching as well as a broad review of possible methods of multifaceted assessment and evaluation of teaching. The ultimate objective will be to satisfy the Institutional Strategic Plan: For the Public Good strategy to: Provide robust supports, tools, and training to develop and assess teaching quality, using qualitative and quantitative criteria that are fair, equitable, non-discriminatory and meaningful across disciplines. CARRIED

The paragraph on page 4 of the CLE report dealing with studies indicating that student evaluations involve a gender bias reads as follows:

● Gender: The literature in this category is extensive and conflicted. Numerous articles in this subcategory report gender differences or no differences in student evaluations of teaching. For example, Boring, Ottoboni, and Stark (2016) concluded that student ratings are “biased against female instructors by an amount that is large and statistically significant.” On the other hand, Wright and Jenkins-Guarieri (2012) conducted a meta-analysis of 193 studies and concluded that student evaluations appear to be free from gender bias. The University of Alberta TSQS conducted descriptive analyses and the results showed there is no apparent difference between scores for males (N = 18576, Mdn = 4.53) and females (N = 13679, Mdn = 4.57) for statement 211 (“overall the instructor was excellent”).

In my remarks at GFC I noted that it should concern us that this paragraph is cursory in its treatment of the possibility that student evaluations of teaching involve gender discrimination. I also find the description of the “literature” as “extensive and conflicted” odd given that the table in the report clearly shows that studies indicating gender bias are far more numerous than those that do not. But how about that last sentence?! What do you make of that?

I didn’t mention the sentence in my prepared remarks, partly because I aimed to keep those remarks to no more than two minutes. Over a hundred people sit on GFC. On such a weighty matter I assumed that many colleagues would want to speak. Instead, the presentation team from CLE defended their position by citing that sentence. I cannot for the life of me see how that sentence shows anything. To say that the median scores are the same for men and women tells us nothing about how the scores are achieved. It surely obscures much. Right? But I’m just a Shakespearean . . . .
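To make the worry concrete, here is a minimal sketch with simulated numbers (not UAlberta data): two sets of ratings that land on essentially the same median even though one of them contains a substantial block of very low scores.

```python
# Simulated illustration: nearly identical medians can coexist with very
# different distributions of scores. These numbers are invented to show why a
# bare comparison of medians says little about how the scores are arrived at.
import numpy as np

rng = np.random.default_rng(0)

# Group A: ratings tightly clustered near the top of a 1-to-5 scale
group_a = np.clip(rng.normal(loc=4.55, scale=0.2, size=10_000), 1, 5)

# Group B: a similar high cluster, plus a sizeable tail of very low ratings
group_b = np.concatenate([
    np.clip(rng.normal(loc=4.60, scale=0.2, size=8_500), 1, 5),
    np.clip(rng.normal(loc=2.0, scale=0.5, size=1_500), 1, 5),
])

for name, scores in [("A", group_a), ("B", group_b)]:
    print(name,
          "median:", round(float(np.median(scores)), 2),
          "mean:", round(float(np.mean(scores)), 2),
          "share below 3.0:", round(float((scores < 3.0).mean()), 3))
```

With numbers like these, the two medians sit within a few hundredths of each other while one group absorbs roughly fifteen per cent of ratings below 3.0, which is exactly the kind of pattern a bare median comparison cannot rule out.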

GFC, by the way, was being asked to endorse a set of recommendations in which our USRIs would continue to be used for summative purposes — that is, for merit, tenure, and promotion decisions. I stated that we need to take seriously the statement on page 10 of the report that indicates the reservations of some chairs: “Some department chairs expressed concerns around biases, validity, and the potential for misinterpretation of USRI results for summative purposes of promotion and tenure decisions.”

This, in my view, is exactly the issue. Instructors at the University of Alberta need to receive formal feedback from their students about their courses. Formative feedback on their teaching from their students is important. And we could seek that feedback by more sophisticated means than we currently do. But with a growing body of research indicating that student evaluations of teaching involve bias — the most significant studies are about bias in the assessment of instructors who are women — it would not be responsible for GFC to continue to endorse the use of USRIs for “summative purposes.” I cited the conclusion of the Boring, Ottoboni, and Stark study to this effect. Its final statement reads as follows:

[T]he onus should be on universities that rely on SET [student evaluations of teaching] for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, underrepresented minorities, or other protected groups. . . . Absent such specific evidence, SET should not be used for personnel decisions.

[my emphases]

The issue has now entered the international mainstream in the form of an article in last week’s Economist, which you can read here. The Economist discusses a study published last fall by another team of researchers, Mengel, Sauermann, and Zölitz. The Mengel, Sauermann, and Zölitz study is not mentioned in the CLE report. In the Economist it is discussed under the category “Academic Sexism.”

At GFC, President Turpin invited someone to move a postponement of consideration of the CLE’s recommendations. The matter will presumably return at the next meeting of GFC, scheduled for 30 October 2017. Can I hear from you before then? Especially about that darn sentence: The University of Alberta TSQS conducted descriptive analyses and the results showed there is no apparent difference between scores for males (N = 18576, Mdn = 4.53) and females (N = 13679, Mdn = 4.57) for statement 211 (“overall the instructor was excellent”). I have heard some very scathing things about this statement from people whose disciplines make their critique significant, but I’d like to hear more.

Shall I also ask Boring (Institut d’études politiques de Paris), Ottoboni (Berkeley), Stark (Berkeley), Mengel (University of Essex), Sauermann (Stockholm University), and Zölitz (Institute on Behaviour and Inequality, Bonn, Germany), what they make of it? ; )

Oh, and for now you can read CLE’s report in full here.

 

 

 
