Witchcraft, Oracles, and Magic among the Albertanae; or WrestleMania: University SmackDown (Guest Post by Kathleen Lowrey)

I have read the recent contributions of my colleagues Michelle Maroto and Carolyn Sale on the debate around the use of teaching evaluations at the University with considerable interest and appreciation. They take up a question that has generated wide interest and commentary in higher education: why is it that university administrators are so attached to measures of teaching quality that are demonstrably misleading? It is an important puzzle. Luckily, I’m an anthropologist, and I’m here to help.

This is a problem with which anthropologists have long grappled in their research. It is, essentially, the problem of magic. Why have so many people in so many places and at so many times seemed not to have realized that magical methods are not effective? Why don’t they apprehend that astrology is not predictive, that oracles are unreliable, that chicken entrails are pretty much undecipherable? All of these might have appeared to offer promising leads in a few trials, but with repeated use shouldn’t people have *noticed* that they don’t really produce the results claimed for them?

The answer is that these practices do work, and beautifully: just not on the tasks to which they are ostensibly put. The difficulties they tackle and the solutions they produce are political, not technical or epistemological. They absolutely do produce effects in the world. People who turn to such means again and again are not fools about causality. Not in the least!

Any anthropologist worth his or her salt has seen this happen in the field (or at least read about it in graduate school): social alliances are made and broken via the deployment and interpretation of magical phenomena. What does it mean that the crops are failing, people are getting sick, and the weather is behaving oddly? Whose fault is it and what auguries can different factions invoke to build their coalitions? Magical undertakings are modes of persuasion and — to use an idiom apropos to the case at hand — team-building exercises.

My non-anthropological colleagues are understandably perplexed and frustrated that reiterating rigorous empirical results about bias in teaching evaluations leaves the University of Alberta administration totally unmoved. Do faculty skeptics really have to explain about the fallibility of chicken entrails *again*? Shouldn’t it be obvious? Administrators say they want good information about teaching quality. USRIs cannot be relied upon to provide it.

Dear colleagues: look behind another door. Yes, you are right, teaching evaluations don’t do what it says on the box, so to speak. So what is it that they do accomplish that makes them so valuable to administration, so resilient in the face of failure, such perennial objects of ritual obeisance? They are instruments of political alliance-making. That’s the voodoo that they do so well.

[Image: Ric Flair]

Witness the last occasion on which they were challenged in a ceremonial clash from which they emerged, as they always must, victorious. What tournament of symbols was enacted there? Caped and costumed as the heel, the proffies (BOOOOOOOOO!): a role taken at the most recent GFC, for the extra delectation of the crowd, by a lady proffie (double BOOOOOOOOO!). Lady profs are the Ric Flairs of the academic game, everybody knows that. Come on. 

 

But wait, guys, wait! It’s gonna be okay!

[Image: Hulk Hogan]

HULK UNIVERSITY ADMIN HOGAN!!!!!  THAT DUDE IS AWESOME!

My metaphors have crept a bit (blame Roland Barthes!), but look: the utility of teaching evaluations is in figuring faculty as the heel for the purposes of shared governance. That’s why it doesn’t matter very much if teaching evaluations are any good at evaluating teaching. That’s not what they are for. The contemporary administratively dominated university hasn’t come about because university administrators are fools who don’t understand basic statistics. It’s come about because they are, among other things, masterful politicians.

Conflict over USRIs creates a dramatic narrative that can be framed as one in which professors wish to escape accountability. Students, who know what it is to be judged by professors, are concerned to have means to judge professors in their turn. Administration appears to step in as a powerful ally, assisting students in holding the responsible line.

Students are quite right to want the quality of the teaching they receive rigorously evaluated and they are quite right to insist their input and oversight be prioritized in that evaluation. Faculty were themselves students and know the various horrors of incompetent, tyrannical, or unjust teaching. We are on students’ side about this. Students and faculty working together could design brilliant systems of evaluation that would be superior in fairness and ameliorative actionability to the ones that are utilized at present.

But by splitting students from faculty on this highly visible issue — an issue that is understandably a top priority for students — university administrations win tremendous powers. They are thereby able to frame faculty as the “heels” for the purposes of many other governance conflicts to which administration, students, and faculty are party. By showily seeming to ally themselves with students on the “goodness” (fuzzily defined) of USRIs, they distract from the many, many other teaching issues that affect students and about which students are never consulted at all.

A critically important example, on which I will end, is the constantly increasing reliance upon contract academic staff in undergraduate teaching. This has dramatic consequences for students, and yet it is an issue upon which they are hardly invited to think, let alone vote. Students often do not know, and are not encouraged to understand, which of their professors are tenured or tenurable and which are contingent. The quality of the classroom performance could be stellar or abysmal from either. But when students need things like recommendation letters, they may not realize that a letter from a full-time faculty member may be weighted differently by evaluation committees than one from contract faculty. Nor may they realize that if they need a letter after the lapse of a few years, a temporary academic might be impossible to track down, whereas a permanent one is usually attached to the same old familiar department (or, if they have moved on to a tenured position at another university, easily traceable through that department). Students, who in my experience are earnestly concerned about fairness, may not realize that a harsh evaluation of an instructor can lead to a collegial discussion about teaching methods for a full-time faculty member but to the loss of livelihood for a contingent one. By the same token, effusive praise can lead to a raise for a full-time faculty member while producing nothing at all for a contingent instructor who is the first to be cut when cuts come.

Faculty would love to talk about these things with students. When it comes to university governance, our interests are much more often allied than opposed. But although faculty collectively spend vastly more time with students than do administrators, we rarely use the main context in which we come to know and care about students to reach out to them about how the university is run. It would be awkward within the teaching relationship, which we certainly take to be as precious as students do. Where we do meet and where we could and should build alliances, in forums for shared governance, a bit of black magic is performed to divide us. It’s an old trick, and an effective one.


“Whatsoever things are true”? GFC votes to allow continuing use of possibly discriminatory USRIs

At its meeting Monday afternoon the University of Alberta’s General Faculties Council (GFC) voted to endorse a set of recommendations from the Committee on the Learning Environment (CLE) that allows the University’s Universal Student Ratings of Instruction (USRIs) to continue to be used in employment decisions at the University — this despite the fact that a growing body of research shows that student evaluations of this kind discriminate against women.

GFC took this decision in regard to a CLE report (already discussed on this blog here, by me, and here, by Michelle Maroto of Sociology) that is strikingly dismissive of the research suggesting the kinds and degree of discrimination at work in student evaluations of teaching.

President David Turpin framed the discussion with opening remarks about GFC’s “legislated responsibilities” under the Post-Secondary Learning Act and the agreement with the Association of Academic Staff (AASUA) that evaluation of teaching should be multi-faceted. He declared the situation “extraordinarily complicated.” I did not hear a word from him that indicated any concern with the possibility that the USRIs are in any way discriminatory.

Proponents of CLE’s recommendations followed the lead of the President. They spoke of the need for a multi-faceted approach to the evaluation of teaching without taking up the question of what it means for any instrument in that multifaceted approach to be possibly discriminatory. They also insisted that the committee’s work should continue, implying that if GFC disallowed the use of USRIs for “summative” purposes — that is, the use of USRI results in hiring, promotion, tenure, and salary decisions for the academic staff — that somehow this would bring all of the work of CLE to a halt. The chair of CLE contributed to this view when she pleaded with GFC not to leave the committee “spinning its wheels.” There was only one glancing reference to the research showing that student evaluations of teaching discriminate against women, this from a Dean, who made a feint of acknowledging the “scholarship on bias” but then declared that this “scholarship” should not affect the University’s use of USRIs, as “no tool is perfect.”

Almost all those speaking in favour of the CLE recommendations were senior white men. One of these declared his “need” for the USRIs so he can have feedback from his students. Such a comment patently ignores the fact that the mechanism I was urging, a new, sophisticated evaluation instrument to be used for formative feedback only, would secure feedback from students. That’s what it does! Sadly, speakers preferred to deal in red herrings rather than deal with the facts and implications of the research pointing to the discriminatory aspects of student evaluations. How can the members of an institution dedicated to teaching and research not take seriously research that has such profound implications for our evaluation of how we teach?

It has been clear to me for years that GFC cannot pretend on any reasonable basis to be a proper venue of shared governance, equivalent to healthy Faculty Senates. It rarely discusses or votes on anything important, and even where it does so the decisions are not the decisions of faculty, for Vice-Presidents, Deans, and students greatly outnumber faculty on GFC. Monday’s meeting suggests, however, that GFC is much worse off than I realized. It had an opportunity to engage in decision-making of widespread importance to the University community, and its decision-making was not conscientious.

The University’s motto is Quaecumque vera, whatsoever things are true. GFC needs to take its decisions on behalf of the University not in relation to what is administratively easy because it upholds the status quo, but in relation to what is morally right and what is morally and factually true. The University should do nothing other than scrupulously eschew all forms of discrimination. If it cannot do so where research points towards a problem with one of the practices upon which it depends, how can we not wonder what other forms of discrimination are informing the work of the University?

 


Comments on the Recent GFC USRIs Report (Guest Post by Michelle Maroto, Sociology)

Student evaluations and how to use them have been common topics for discussion here and at other universities for many years. As a result, I wasn’t surprised to see yet another report and set of summary recommendations from the UofA on this issue. I was surprised, however, by some of the conclusions presented in relation to the research conducted in this report. In several cases, the conclusions do not fit with the actual available evidence, which, I would argue, is quite problematic.  

Because I don’t have time to comment on all of the research presented in this report, I’m just going to discuss some of the research on gender and race bias in evaluations, since this area connects to some of my own research on discrimination more broadly. Here, I was very surprised to see the authors come to the conclusion that “The literature in this category is extensive and conflicted” (p. 4). Is the evidence really that conflicted? Even the list of literature included in Appendix A shows a large and persistent gender bias across studies. It’s especially apparent in studies with a strong methodological framework and in more recent research on the topic, some of which was not actually included in the report.

For evidence of no bias, the report authors rely on a 2012 meta-analysis by Wright and Jenkins-Guarieri. In discussing gender bias in evaluations, the report states on page 4 that “Wright and Jenkins-Guarieri (2012) conducted a meta-analysis of 193 studies and concluded that student evaluations appear to be free from gender bias.” However, if you read beyond the abstract, you will see that the article is in fact a meta-analysis of meta-analyses, and that it includes only one meta-analysis related to gender: a 1993 analysis covering 28 studies (not the 193 cited in the report).

Wright and Jenkins-Guarieri (2012) call into question their own findings regarding gender bias in evaluations. They note the following at the end of the article:

Lastly, it appears that interactions between instructor and student gender do not impact SET ratings significantly, according to one meta-analysis. This indicates that instructors should consider neither their gender nor that of their students when receiving and interpreting SET results. The results also suggest that administrators do not need to consider instructors’ gender when assigning instructors to various classes. However, the meta-analysis largely included studies from the 1970s, and more current research is needed before making conclusive statements based on gender (p. 693). 

One area specifically that warrants further investigation is the biasing variable of gender. In examining the only meta-analysis focusing on gender as the primary biasing variable on SETs, Feldman (1993) concluded that gender had little effect on SETs, but more current research may suggest otherwise (Bachen, McLoughlin, and Garcia 1999; Centra and Gaubatz 2000); it is also worth noting that the majority of studies included in the meta-analysis were conducted in the 1970s (p. 695).

This is the primary study that the report authors use to conclude that “the literature in this category is extensive and conflicted” (p. 4). Limitations were also present in the other two studies (Centra and Gaubatz 2000; Smith et al. 2007) that the report authors list as showing no gender bias. For instance, Centra and Gaubatz (2000) were primarily interested in the gender of students, not the gender of instructors.

In contrast, the authors list seven studies that indicate some form of gender bias. These even include studies that apply experimental and audit methodologies (MacNell, Driscoll, and Hunt 2015), which tend to provide the best evidence in relation to discrimination (Pager and Shepherd 2008; National Research Council 2004). Below, I have also listed some additional references that further investigate these issues, which the report authors seem to have missed. Here, I would like to draw special attention to a forthcoming article by Mengel, Sauermann, and Zölitz (2017), which relies on a quasi-experimental dataset of 19,952 student evaluations from the Netherlands. Stark and Freishtat (2014) also offer a detailed review of the SET evidence that is missing from this report.

In addition to the limitations in the review of gender bias in evaluations, I was also concerned about the lack of a discussion of potential racial bias in student evaluations. I know that research on this topic is very limited, but we need to consider race as well. A few studies (Subtirelu 2015; Wagner, Rieger, and Voorvelt 2016) do show some evidence of bias. This should be noted. 

If the university insists on using USRIs to determine pay and promotion decisions for faculty, we must address these issues. If bias is present, as it appears to be in the research, this will limit opportunities for women and racial minority tenure-track faculty members. The potential consequences for contract instructors are much worse. In this case, evaluations can influence whether or not they will have a job the following semester. Finally, gender bias does not just affect women. Gendered expectations for male faculty can also influence student perceptions and evaluations (Sprague and Massoni 2005). 

There is no doubt that we need more research on this issue (and many others). I absolutely agree with the report authors on this point. Many of the studies are small and connected to a single course or department. These could definitely be expanded. Let’s put together a larger one here. Let’s talk about analyzing USRI data at the University of Alberta. TSQS provided some information about the overall median scores for men and women on one question. Let’s take a look at each question. Let’s break down the results by course year, course size, and department. Let’s also look at scores over time. I would also love to see an analysis of written comments. How might these differ by gender, age, and race? Analyses of comments on RateMyProfessor.com (yes, not a random sample, but still informative) indicate that disparities are likely (see http://benschmidt.org/profGender/# for an interactive look at terms used to describe male and female faculty). We love to talk about analyzing “big data.” TSQS has a ton of it. Let’s analyze it and connect it to other data. 
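To make that proposal concrete, here is a minimal sketch in Python (with pandas) of the kind of breakdown described above. It assumes a hypothetical USRI export with one row per evaluation; the file name and column names (question_id, instructor_gender, department, year, score, comment) are placeholders for illustration, not the actual TSQS schema:

```python
# Minimal sketch of the breakdowns proposed above, on a hypothetical
# USRI export with one row per completed evaluation.
import pandas as pd

usri = pd.read_csv("usri_export.csv")  # hypothetical file and schema

# Median score per question, split by instructor gender and department.
by_group = (
    usri.groupby(["question_id", "instructor_gender", "department"])["score"]
        .agg(["median", "mean", "count"])
)
print(by_group.head())

# Scores over time: does any gender gap widen or narrow by year?
trend = (
    usri.groupby(["year", "instructor_gender"])["score"]
        .median()
        .unstack("instructor_gender")
)
print(trend)

# Crude first look at written comments: how often do a few descriptors
# appear, by instructor gender?
for word in ["brilliant", "boring", "organized"]:
    hit = usri["comment"].str.contains(word, case=False, na=False)
    print(word, usri.assign(hit=hit)
                    .groupby("instructor_gender")["hit"].mean().to_dict())
```

A real analysis of written comments would, of course, call for proper text-analysis methods and appropriate ethics review; the sketch is only meant to show that these questions are straightforward to pose once the data are in hand.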

At a university, it is imperative that we use research to inform policy. It is even more important that we use good research to inform our policies. If we are going to spend the time and money to conduct such studies, we need to make sure that the research done is high quality and useful. 

 

References 

Bianchini, S., Lissoni, F., & Pezzoni, M. (2013). Instructor characteristics and students’ evaluation of teaching effectiveness: Evidence from an Italian engineering school. European Journal of Engineering Education, 38(1), 38-57.

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 10.

Braga, M., Paccagnella, M., & Pellizzari, M. (2014). Evaluating students’ evaluations of professors. Economics of Education Review, 41, 71-88.

Guarino, C. M., & Borden, V. M. H. (2017). Faculty service loads and gender: Are women taking care of the academic family? Research in Higher Education, 58(6), 672-692.

Handley, I. M., Brown, E. R., Moss-Racusin, C. A., & Smith, J. L. (2015). Quality of evidence revealing subtle gender biases in science is in the eye of the beholder. Proceedings of the National Academy of Sciences, 112(43), 13201-13206.

Loes, C. N., Salisbury, M. H., & Pascarella, E. T. (2015). Student perceptions of effective instruction and the development of critical thinking: A replication and extension. Higher Education, 69(5), 823-838.

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.

Mengel, F., Sauermann, J., & Zölitz, U. (2017). Gender bias in teaching evaluations. IZA Discussion Paper No. 11000. Available at SSRN: https://ssrn.com/abstract=3037907

Miles, P., & House, D. (2015). The tail wagging the dog: An overdue examination of student teaching evaluations. International Journal of Higher Education, 4(2), 116-126.

National Research Council. (2004). Measuring racial discrimination. National Academies Press.

O’Meara, K. A. (2016). Whose problem is it? Gender differences in faculty thinking about campus service. Teachers College Record, 118(8), 1-38.

Ortega-Liston, R., & Rodriguez Soto, I. (2014). Challenges, choices, and decisions of women in higher education: A discourse on the future of Hispanic, Black, and Asian members of the professoriate. Journal of Hispanic Higher Education, 13(4), 285-302.

Pager, D., & Shepherd, H. (2008). The sociology of discrimination: Racial discrimination in employment, housing, credit, and consumer markets. Annual Review of Sociology, 34, 181-209.

Sprague, J., & Massoni, K. (2005). Student evaluations and gendered expectations: What we can’t count can hurt us. Sex Roles, 53(11-12), 779-793.

Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research, 9.

Subtirelu, N. C. (2015). “She does have an accent but…”: Race and language ideology in students’ evaluations of mathematics instructors on RateMyProfessors.com. Language in Society, 44(1), 35-62.

Wagner, N., Rieger, M., & Voorvelt, K. (2016). Gender, ethnicity and teaching evaluations: Evidence from mixed teaching teams. Economics of Education Review, 54, 79-94.

 


GFC and the USRIs: How Should Teaching Evaluations at the University of Alberta Be Used?

This is a brief post meant to stimulate conversation about a question faculty and other instructors across campus should be asking: in a decade in which studies indicate that student evaluations of teaching involve bias against certain groups of instructors, how should our teaching evaluations at the University of Alberta, the USRIs (Universal Student Ratings of Instruction), be used?

The issue was on the floor Monday at the first 2017-18 meeting of the General Faculties Council (GFC) in relation to a report tabled by the Committee on the Learning Environment (CLE) responding to a 30 May 2016 motion of GFC. The problem is that neither the formal paperwork for the motion nor the report itself correctly cited the motion. As a result of an amendment I moved at that 30 May 2016 meeting, the motion included a crucial word that should have been a touchstone for the committee’s work: “non-discriminatory.” That word was omitted from the formal paperwork and from every mention of the motion in the report:

THAT the General Faculties Council, on the recommendation of the GFC Executive Committee, request that the GFC Committee on the Learning Environment report by 30 April 2017, on research into the use of student rating mechanisms of instruction in university courses. This will be informed by a critical review of the University of Alberta’s existing Universal Student Ratings of Instruction (USRIs) and their use for assessment and evaluation of teaching as well as a broad review of possible methods of multifaceted assessment and evaluation of teaching. The ultimate objective will be to satisfy the Institutional Strategic Plan: For the Public Good strategy to: Provide robust supports, tools, and training to develop and assess teaching quality, using qualitative and quantitative criteria that are fair, equitable, non-discriminatory and meaningful across disciplines. CARRIED

The paragraph on page 4 of the CLE report dealing with studies indicating that student evaluations involve a gender bias reads as follows:

● Gender: The literature in this category is extensive and conflicted. Numerous articles in this subcategory report gender differences or no differences in student evaluations of teaching. For example, Boring, Ottoboni, and Stark (2016) concluded that student ratings are “biased against female instructors by an amount that is large and statistically significant.” On the other hand, Wright and Jenkins-Guarieri (2012) conducted a meta-analysis of 193 studies and concluded that student evaluations appear to be free from gender bias. The University of Alberta TSQS conducted descriptive analyses and the results showed there is no apparent difference between scores for males (N = 18,576, Mdn = 4.53) and females (N = 13,679, Mdn = 4.57) for statement 211 (“overall the instructor was excellent”).

In my remarks at GFC I noted that it should concern us that this paragraph is cursory in its treatment of the possibility that student evaluations of teaching involve gender discrimination. I also find the description of the “literature” as “extensive and conflicted” odd, given that the table in the report clearly shows that studies indicating gender bias far outnumber those that do not. But how about that last sentence?! What do you make of that?

I didn’t mention the sentence in my prepared remarks, partly because I aimed to keep those remarks to no more than two minutes. Over a hundred people sit on GFC. On such a weighty matter I assumed that many colleagues would want to speak. Instead, the presentation team from CLE defended their position by citing that sentence. I cannot for the life of me see how that sentence shows anything. To say that the median scores are the same for men and women tells us nothing about how the scores are achieved. It surely obscures much. Right? But I’m just a Shakespearean . . . .
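The point is easy to demonstrate with simulated numbers. Here is a minimal sketch in Python; the data are invented for illustration (they are not the TSQS figures), but they show how two groups of ratings can produce nearly identical medians even when one group attracts harsh ratings far more often:

```python
# Hypothetical illustration: two rating distributions with nearly
# identical medians can differ sharply in how often low scores occur.
import numpy as np

rng = np.random.default_rng(0)

# Group A: ratings cluster tightly around 4.5 on a 1-5 scale.
group_a = np.clip(rng.normal(4.5, 0.3, 10_000), 1, 5)

# Group B: the same general centre, but 15% of ratings are harsh.
typical = np.clip(rng.normal(4.6, 0.3, 8_500), 1, 5)
harsh = rng.uniform(1.0, 2.5, 1_500)
group_b = np.concatenate([typical, harsh])

for name, scores in (("A", group_a), ("B", group_b)):
    print(f"group {name}: median = {np.median(scores):.2f}, "
          f"share below 3 = {np.mean(scores < 3):.1%}")
```

A comparison of medians on a single statement, in other words, can report “no apparent difference” while the underlying distributions tell very different stories.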

GFC, by the way, was being asked to endorse a set of recommendations in which our USRIs would continue to be used for summative purposes — that is, for merit, tenure, and promotion decisions. I stated that we need to take seriously the statement on page 10 of the report that indicates the reservations of some chairs: “Some department chairs expressed concerns around biases, validity, and the potential for misinterpretation of USRI results for summative purposes of promotion and tenure decisions.”

This, in my view, is exactly the issue. Instructors at the University of Alberta need to receive formal feedback from their students about their courses. Formative feedback on their teaching from their students is important. And we could seek that feedback by more sophisticated means than we currently do. But with a growing body of research indicating that student evaluations of teaching involve bias — the most significant studies are about bias in the assessment of instructors who are women — it would not be responsible for GFC to continue to endorse the use of USRIs for “summative purposes.” I cited the conclusion of the Boring, Ottoboni, and Stark study to this effect. Its final statement reads as follows:

[T]he onus should be on universities that rely on SET [student evaluations of teaching] for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, underrepresented minorities, or other protected groups. . . . Absent such specific evidence, SET should not be used for personnel decisions.

                                                                                    [my emphases]

The issue has now entered the international mainstream in the form of an article in last week’s Economist, which you can read here. The Economist discusses a study published last fall by another team of researchers, Mengel, Sauermann, and Zölitz. The Mengel, Sauermann, and Zölitz study is not mentioned in the CLE report. In the Economist it is discussed under the category “Academic Sexism.”

At GFC, President Turpin invited someone to move a postponement of consideration of the CLE’s recommendations. The matter will presumably return at the next meeting of GFC, scheduled for 30 October 2017. Can I hear from you before then? Especially about that darn sentence: The University of Alberta TSQS conducted descriptive analyses and the results showed there is no apparent difference between scores for males (N = 18,576, Mdn = 4.53) and females (N = 13,679, Mdn = 4.57) for statement 211 (“overall the instructor was excellent”). I have heard some very scathing things about this statement from people whose disciplines make their critique significant, but I’d like to hear more.

Shall I also ask Boring (Institut d’études politiques de Paris), Ottoboni (Berkeley), Stark (Berkeley), Mengel (University of Essex), Sauermann (Stockholm University), and Zölitz (Institute on Behaviour and Inequality, Bonn, Germany), what they make of it? ; )

Oh, and for now you can read CLE’s report in full here.

 

 

 


MLCS is going generic; or, Why French will no longer be an exception (Guest Post by Sathya Rao, Modern Languages & Cultural Studies)

Modelled after the modern languages departments at the universities of Exeter (UK), Sheffield (UK), and Saint Louis (US), the controversial Modern Languages and Cultural Studies (MLCS) major proposal – which passed last May 25 at Arts Faculty Council – is the first of its kind in Canada and therefore deserves special attention. Under the guise of saving the last of the few small programs that survived the budgetary cuts of recent years, the MLCS major proposal completely transformed our program, making it an anomaly in the Canadian academic landscape. While all sizeable modern languages departments throughout the country (e.g., UofT, UofC, and UBC) have maintained a strong French program in order to capitalize on the opportunities arising from Canada’s bilingual context, the new MLCS major no longer gives preferential treatment to French. In fact, French and Spanish – which together account for more than half of all majors in MLCS – are aligned with German (15% of majors enrolment), which sets the new lowest common denominator for the whole program in terms of course requirements. In practice, this means that students majoring in French (and Spanish) will be required to take fewer courses in the target language (even though there are more than enough faculty members to teach these) along with more courses in English. Compared to other Canadian research universities, MLCS students majoring in “French” will have one of the lowest course loads in the target language (i.e., a minimum of 24 credits at the senior level, compared to 30 or more on average) and one of the highest – if not the highest – course loads in English (i.e., 9 credits).

Paradoxically, the reduction in the number of target-language courses comes with a new 6-credit requirement in study abroad OR “language immersive” Community Service-Learning (CSL). Beyond the fact that a 20-hour CSL placement in Edmonton could hardly qualify as “immersive,” compelling students to engage in CSL as a low-cost alternative to studying abroad is pedagogically questionable. Given that, in our experience, few MLCS students choose to engage in study abroad or in CSL (the same goes for instructors), elevating these to the level of program requirements will not only be constraining for students (especially those working full-time, who cannot afford the extra 20 hours per semester that CSL requires) but also very risky for an already under-enrolled program like MLCS. Not to mention the extra pressure on instructors, who will feel compelled to include a CSL component in their courses in order to accommodate students looking to get their 6 credits. As exhilarating as the idea of CSL may be, those of us who have been practically engaged in it are well aware that the success of a CSL placement should not be taken for granted. Compelling students to engage in CSL instead of convincing them to do so might not be the best way to ensure that their experience is successful.

Instead of a language-specific major, students will receive a generic major in “MLCS,” which is likely to penalize them (as some students pointed out in a survey on the new major) when competing on the job market with peers holding a good old major in French or Spanish from other Canadian institutions (including Campus Saint-Jean). The promoters of the MLCS major suggested that students could submit their transcripts along with their CVs. Yet no market analysis was conducted to determine whether non-academic employers would actually consider going over the additional material, or whether students would be comfortable disclosing their transcripts. In fact, nowadays most employers use automated systems to scan resumés for degree titles, and it is rare for transcripts ever to be considered in job applications.

The levelling down of our two most enrolled programs (and, incidentally, the slow agony of one of our most reputed programs, namely Comparative Literature) was presented as the necessary price to pay to save small programs — indeed, as a necessary sacrifice. But how exactly will these programs be “saved”? Instead of a full-fledged BA (which can no longer be provided given the insufficient enrolment), small programs such as Scandinavian, Ukrainian, and Russian will be granted the opportunity to offer 4 courses (12 credits) in the target language along with 4 content-specific courses (12 credits) in English as part of the “Cross-Cultural Studies” route. For “homeless,” or rather “major-less,” programs as dynamic as ours, this is certainly a better prospect than losing the little that was left. Yet it is hard to understand why rescuing these majors from extinction would entail harming bigger ones and potentially putting the whole department at risk and in a state of crisis. Hence it is no wonder that the whole discussion about the new MLCS major turned into a tragedy. It is important to keep in mind that French, Spanish, and Comparative Literature together furnish 75% of majors in MLCS. It is also unclear how these newly saved small programs will grow strong enough to make up for any anticipated loss in French, Spanish, and Comparative Literature. The biggest winner in all this is German, whose enrolment has been steadily declining over the past five years, from 27 majors (in 2011) to 19 (in 2016), and yet which now sets the standards for French and Spanish in terms of course requirements. By having its majors pooled into the new common MLCS major while preserving its status as a “big” language alongside French and Spanish, German deftly escapes the potential threat of program suspension. Finally, little consideration was given to the moderate satisfaction rate – 3.9 out of 5 – expressed by the 520 students who were surveyed about the new major. That rate drops to 3.5 out of 5 for the 62 students enrolled in MLCS programs (e.g., majors and certificates).

In any case, one could observe that there was something truly heroic and yet anomalous not only in the goal the MLCS major proposal set for itself, but also in the support it received from the Faculty of Arts (given the lack of data and contextual information provided to Arts Faculty Council for its decision-making, and the fact that the proposal was twice turned down by Academic Affairs and returned with substantial modifications). As a desperate attempt to save some endangered programs, the proposal gave rise to a “generic” model in which all languages (French is no exception) are reduced to their lowest common denominator of requirements. For the same cost as a “brand name,” or traditional, language program, a generic program will offer more courses in English, an affordable local “immersive” experience (as an alternative to more expensive travel-abroad experience), fewer courses in the target language, overall decreased proficiency in the target language (but increased experiential learning for more students and improved knowledge in the generic field of “cultural studies”), and a degree in a potentially greater number of languages than any traditional model could provide. While this new model has some merits of its own (e.g., a simplified administrative structure and an emphasis on experiential learning), it remains to be seen how it will be welcomed in an environment quite unlike the UK or the US: an environment where bilingualism prevails (and within a province with a strong francophone community and a lasting network of francophone schools and immersion programs); where combined degrees have proven unsuccessful (as evidenced by the closure in 2013 of most of our combined majors, except French and Spanish); where proficiency standards and professional expectations for French (and Spanish) speakers are higher (especially for students envisioning a career in the booming sector of bilingual education); where there is already a francophone campus offering a full-fledged French major; where experiential learning in many programs is either optional (through programs such as the Co-Curricular Record) or mandatory through curriculum-embedded travel-abroad opportunities (as at Augustana, which, as far as we know, was unfortunately not part of the discussion, just as Campus Saint-Jean was not); where most employers are not familiar with the concept of “cultural studies”; and where students are already busy engaging in experiential learning of their own to pay their tuition fees.


In the Journal today: Kathleen Lowrey (Anthropology) on “flexibility,” “experiential learning” and other curricular matters

In the Edmonton Journal today:

Kathleen Lowrey (Anthropology) contributes to the debate on what the Social Studies curriculum for K-12 and our own should (and should not) involve. Or, as she puts it, “And now, a withering view of flexibility from one of our resident schoolmarms”:

http://edmontonjournal.com/opinion/columnists/opinion-old-school-learning-provides-firmness-in-a-disrupting-world

 

 


Reflexions on the Arts Faculty Council discussion of the MLCS BA proposal (Guest Post by Marisa Bortolussi)

In response to the request for a summary of what happened at the Thursday May 25 meeting of the Arts Faculty Council regarding the BA proposal for Modern Languages and Cultural Studies (MLCS), here is a summary, with a few post-meeting reflexions.

First of all, the proposal passed. The result of the vote was 49 for, 37 against, with 7 abstentions.

AFC Council members were placed in a very difficult situation at that meeting. It was pointed out to them that the department of MLCS was divided over the new MLCS BA proposal, with half of the faculty members in favour of it and the other half against. This meant, effectively, that Council members were being asked to side with one half of the department against the other. How does one go about making such a decision? Did Council members have the thorough knowledge and understanding of the proposal required to decide which half was right and which half was wrong? No, because understanding the proposal would have required being walked through every detail of the document, hearing all the pros and cons, and reflecting on all that information. But that is not the role of Arts Faculty Council. That is the role of the Arts Academic Affairs Committee (AAC), a duly elected committee whose job it is to vet proposals before they reach Arts Faculty Council. And the AAC members had indeed devoted much time and effort to scrutinizing the proposal; after much discussion, they arrived at the conclusion that the proposal did not cut it. Under normal circumstances, the proposal would then not have been forwarded to Arts Faculty Council. Dean Cormack admitted at the meeting that she had overruled the AAC’s decision. This complicated the Council members’ decision-making, as they now had to choose between, on the one hand, honouring the judgement of the AAC, whose members they elected, and, on the other, siding with the Dean. Without even knowing what the concerns of the AAC were, the majority of AFC members chose to side with the Dean and the advocates of the proposal.

This decision is very troubling for several reasons. The first is that it deals a blow to a fundamental principle that I thought we all endorsed—collegial governance. We have these committees in place to ensure the right of academics to participate in the decision-making process. They serve in part as a system of checks and balances to limit the powers of administrators. When we side with administrators, rejecting the hard work and professional decisions of our own elected colleagues, do we not undermine the legitimacy of the decision-making process? Do we not open up the door for administrative micromanaging of our programs, and potential abuses of power? 

Now some may argue that, although they didn’t know the reasons the proposal had not been approved by the AAC, they believed the arguments raised by the supporters. This, too, is problematic, because for every argument made by the supporters, the opponents countered with facts that negated those claims. Here are some examples:

  1. A necessary step in this process was the surveying of students. The ‘yea’ side maintained that students were enthusiastic about the proposal. The ‘nay’ side claimed that there is no reliable evidence to support that claim. As proof, it was explained that the method used to survey students was not objective, that the satisfaction rate was low, and that the comments made by students actually showed they did not understand the proposal. The method used consisted of a few advocates of the proposal visiting a few undergraduate classes, giving a rah-rah pep talk, and then asking students questions such as “what do you like about this proposal?” Some students heard the talk more than once, and it is possible that they answered the questions more than once, so we don’t know how many different students were surveyed. Of those students, only 62 or so (again, we don’t know the exact number) were registered in a Minor, Major, or Honors program in MLCS. The satisfaction rate was 3.5/5. Not overwhelmingly positive. More telling are the comments that were made. The two main aspects of the proposal students liked were “access to study abroad” and the flexibility to study more than one language. Since MLCS students have always studied abroad, and have always had the option of studying more than one language, the responses indicate that students did not understand the proposal. This is not surprising given that they had not been asked to read and study it carefully. It is important to point out here that there are objective methods of surveying groups of people. If MLCS does not have questionnaire-design specialists among its faculty, there are resources on campus to help with that task. (We availed ourselves of those resources for our last unit review.) Conclusion: these facts prove the “student satisfaction” narrative wrong.
  2. Advocates of the proposal argued that for 4 years there had been ample consultation and discussion. Opponents countered with several facts: i) that in 2015 members of the Curriculum Committee were not permitted to discuss the proposal because it had allegedly been discussed enough the previous year; ii) members of the department were denied the right to submit alternate proposals. One person pointed out that she had sent the Curriculum Committee a proposal; it was never acknowledged. Others received e-mails from the Chair clearly stating that no other proposals would be discussed; iii) the proposal was never discussed at the level of our areas (e.g. Spanish, French, etc.), where much of our business is conducted; iv) discussions at departmental council were short, given the time constraints; v) committee membership was cherry-picked. The Chair of the Curriculum Committee stated that members of the hardest-hit program under this proposal, Comparative Literature, had been invited to serve on the committee, but declined. Members of the Comparative Literature program gave testimony that they had never been invited. Conclusion: these facts seriously undermine the “ample discussion and consultation” narrative. 

One of the most important issues, the potential consequences of this proposal, received only minor attention (more on that in a separate blog post to come). A member of the French area explained how the proposal weakens the French program to the point where it will no longer be competitive with other strong French programs in the country, because it lowers the standards and requires students to take more courses in English. Some perceptive students had mentioned in their survey answers that the new program would entail decreased language proficiency. They are absolutely right. One does not have to be a linguist to know that one does not gain proficiency in a foreign language by taking courses in English.

It was also pointed out that the proposal pits some areas of the department against others; it saves Scandinavian by weakening the academic integrity of French and virtually destroying Comparative Literature. Someone obviously decided that Scandinavian is more important than Comparative Literature. But members of the opposing side pointed out that it was not necessary to save some programs at the expense of others, and that it was entirely possible to craft a proposal that saved endangered programs while maintaining the strength of stronger programs like French and Spanish. The opponents of the program asked for the opportunity to produce such a proposal, an opportunity they had always been denied. The majority voted not to give the department that chance. Conclusion: the vote of the majority displays a lack of concern for the very negative consequences of this proposal.

On Thursday, Council members who were not in a position to judge for themselves the merits or flaws of the proposal decided to disregard the decision of the Arts Academic Affairs Committee and side with one half of the department of MLCS. What a truly sad sign of our times that facts do not guide decision-making. The vote suggests that the ‘yeas’ won and the ‘nays’ lost. But it’s not that simple. The ‘yea’ vote condemns a department to continued internal conflict, paves the way for the implementation of a very flawed program that will have very dire consequences, and deals a slap in the face to the commitment to facts as well as to collegial governance. We have heard so many complaints about the erosion of collegial governance and the rise of a corporate culture in universities. But we can hardly complain when, through our actions, we become complicit. The ‘nays,’ armed with facts, argued with courage and integrity: the courage and integrity to stand up to the administration and denounce a flawed process and product.

I am truly proud of being part of the “losing” side.

 
