
Being a Good University in Times of Dataism

Published on Nov 17, 2022

We live in a time of unprecedented quantification. Advances in digital technology allow individuals to almost effortlessly track the number of steps they take each day, the number of hours they sleep each night, and the number of people who have viewed and liked the holiday selfies on their social media profiles. Universities, individual scholars, and academic gatekeepers such as scientific journals likewise increasingly quantify behavior and performance by collecting and displaying data. Metrics show how often a given scientific article has been downloaded, shared, and cited. Students are asked to quantitatively evaluate their lecturers’ teaching performance. Universities track how many international students each teaching program attracts. This widespread belief that everything can or even must be reduced to (“objective”) data has been termed dataism (Rasch, 2020).

Once such data is available, it is only a small step to organizing it in the form of rankings. Which lecturer received the highest teaching evaluation scores? Who attracted the most external funding? Which university department has the highest average h-index? Which articles are cited most, and who publishes most often in the journals with the highest impact factors? The presence and availability of rankings inevitably leads to competition, particularly in an environment in which permanent positions and opportunities for promotion are scarce. At the same time, the ranking of quantified performance – when viewed through the lens of competition – allows universities to provide their students and employees with awards and prizes: there will be a teacher of the year award, an outstanding student award, or a prize for the most talented young researcher.
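As an aside for readers less familiar with bibliometrics: the h-index mentioned above is easily computed and, precisely because it reduces a body of work to a single number, easily averaged and ranked. The following minimal sketch (in Python, with entirely hypothetical researchers and citation counts) illustrates the computation; note how a single highly cited paper barely moves the metric.

```python
# Minimal illustration of the h-index; all researcher names and citation
# counts below are hypothetical.

def h_index(citations: list[int]) -> int:
    """A scholar has index h if h of their papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the h-th most-cited paper still has >= h citations
        else:
            break
    return h

# Hypothetical citation counts per paper for three members of a department.
department = {
    "researcher_a": [25, 8, 5, 3, 3, 0],   # h = 3
    "researcher_b": [90, 2, 1],            # h = 2: one hit paper barely moves h
    "researcher_c": [6, 6, 6, 6, 6, 6],    # h = 6
}

average_h = sum(h_index(c) for c in department.values()) / len(department)
print(f"average departmental h-index: {average_h:.1f}")  # 3.7
```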

The omnipresence and assumed importance of data, rankings, competition, awards, and prizes may lead students and young academics to believe they are unavoidably taking part in a rat race. This perception is reinforced by trends in society as a whole, where intrinsically non-competitive activities such as dancing, making music, and even baking cakes or playing with Lego bricks are presented in quantifiable and competitive formats in television shows, while social media platforms continuously quantify and display posts and profiles in terms of views, likes, followers, and shares. It is perhaps not surprising that in this overall societal and academic climate at least half of all students and young academics report suffering from mental health issues such as stress, unhealthy pressure to perform, anxiety, and sadness (Levecque et al., 2017; RIVM, 2021). The multifaceted relation that can be assumed between the omnipresence and perceived importance of numeric data on the one hand and downstream mental health consequences in students and academic staff on the other is represented visually in what I coin here the Twenty-First Century Rat Race Model (Figure 1).

Figure 1

The Twenty-First Century Rat Race Model. The omnipresence of data and their use for rankings and competition purposes may lead to mental health issues down the line, in turn leading to reduced quality of teaching, research, and public outreach.

Now, what does a good university look like in this current age of quantification and dataism? If we assume the Twenty-First Century Rat Race Model to be correct, universities – by the data they collect and display, the rankings they adhere to and promote, and the competition, prizes, and awards they initiate – may influence the mental health of their students and staff. In the remainder of this essay, I will argue that they should therefore base their policy and practices on the outcomes of scientific research.

Data, Numbers, and Quantification

Many domains of scientific inquiry – from experimental psychology to econometrics, from artificial intelligence to anthropology – rely on the collection, analysis, and availability of data. However, not all data necessarily deserves to be collected. A case in point, where the mere act of data collection has scientifically demonstrated adverse effects, is the so-called student evaluation of teaching. In many universities, it is common practice for students to quantitatively evaluate a course they took, and its teacher, via an anonymous survey distributed at the end of the course. A recent systematic literature review on the value and validity of such quantitative student evaluations of courses and teachers indicates that these evaluations are consistently biased. Female teachers, academics of color, and lecturers who teach in their non-native language on average receive lower scores and more abusive (e.g., racist and/or sexist) comments from students, irrespective of their actual performance, compared to their male, white, and native-speaking colleagues (Heffernan, 2022a). Such abusive comments and flawed ratings not only impact academics’ career progression, as teaching evaluations are often taken into account in decisions on academic promotions, but also have negative effects on teachers’ mental health (Heffernan, 2022b). As such, the case of student evaluations confirms the possible relation between the availability of certain types of data and its downstream mental health consequences (Figure 1).

Student evaluations are not only biased but also unreliable, as teachers may strategically steer them in a desired direction. Results from a randomized controlled trial indicate that when teachers provide their students with chocolate cookies during a class, student evaluations of both the teacher and the course material are significantly higher compared to the same class taken by a matched control group of students who did not receive any cookies (Hessler et al., 2018). Perhaps most strikingly, it turns out there is no relation whatsoever between student evaluations and how much students actually learned (Uttl et al., 2017). It is therefore surprising that universities actively facilitate quantitative student evaluations of teaching and take such intrinsically flawed measures into account when deciding on academic promotions and hiring. Indeed, “no university [...] can declare to be a gender equal employer or have an interest in growing a safe, inclusive and diverse workforce if they continue using [quantitative student evaluations of teaching] to evaluate course and teacher quality” (Heffernan, 2022a, 152).

Unfortunately, the example of student evaluations is no exception to the university policy and practice of ignoring scientific evidence. For instance, we know that grading written exams anonymously (i.e., without knowing which student provided the answers one is grading) reduces unfair biases in assessment and is easily implemented (Malouff et al., 2013). Nevertheless, it remains common practice for students to be required to write their name – rather than, for instance, only their student number – on their answer sheets.

Rankings and Competition

Data can often be ranked, and rankings have entered our lives to the extent that we no longer seem to question their presence, their premises, and the consequences they have for our mental health (Brankovic, 2022). We look for rankings when booking a hotel, picking a restaurant, or buying a new laptop online. Within this broader societal frame, it is not surprising that universities find themselves ranked as well. Tilburg University, for instance, is ranked number 37 in the world for Business and Economics according to the Times Higher Education World University Ranking (Times Higher Education, 2022).

A high position on an international university ranking may become important when potential students – who by their choice of where to study provide universities with financial resources – use such rankings to decide where to study. Not surprisingly, then, universities have started appointing rankings officers who dedicate their time to finding ways to improve their university’s ranking position, and whole rankings management departments have even started to appear within universities (Chun & Sauder, 2022). Indeed, securing a high position in international university rankings is a form of academic capitalism that has become part of the present-day university’s business model (Groen, 2020; Van Houtum & Van Uden, 2022).

Similar to student evaluations of teaching, however, university rankings have been shown to be intrinsically flawed (e.g., Gadd, 2021; Vernon et al., 2018). These rankings rely heavily on how often scholars at a university are cited, while we know that the number of times a work is cited does not necessarily reflect its scientific quality (Selten et al., 2020). For instance, works published in languages other than English typically get cited less than output of similar quality published in English, disadvantaging academic domains (e.g., the Humanities) and universities (e.g., in the Global South) that have a tradition of publishing in languages other than English (Van Leeuwen et al., 2004; Van Raan, 2005). Furthermore, we know that citation counts are biased in that articles with a male first author typically receive more citations than articles with a female first author (Larivière et al., 2013), while male academics also cite their own work 56% more often than female academics do (King et al., 2017).

Besides citations, international university rankings typically rely on vague and biased variables such as a university’s reputation (Selten et al., 2020). Indeed, when reputation becomes more important than actual academic quality in securing a position on a ranking, universities may be tempted to devote staggering amounts of tax money to marketing and communication strategies rather than spending it on improving the quality of research and education itself. The focus on rankings that universities display in their external communication – for instance, in self-congratulatory messages on platforms such as Twitter and LinkedIn and on dedicated web pages – contributes to an overall climate in which the university as employer reinforces the notion and perception of being in constant competition. This is unlikely to have a positive impact on the mental health of students and staff.

Again, the example of university rankings is no exception to academic practices ignoring scientific evidence. Grant proposals submitted to acquire funding for academic research are likewise typically ranked by funding agencies when deciding which proposals will receive funding, based on reviewers’ evaluations and evaluation scores that have been found to be highly subjective and unreliable (Pier et al., 2018).

Prizes and Awards

If a university’s reputation becomes more important than the actual quality of its scholarship and education, the university needs to be visible within society in the most positive way, online and offline. Besides communicating improvements in the university’s position on an international ranking of its liking, prizes and awards given to academics are the perfect excuse to send out a positive news message and strengthen the university’s reputation. Indeed, one sometimes gets the impression that universities decorate some of their own staff members with medals and awards just to be able to communicate to the external world that they house prize-winning employees.

Intuitively, academic prizes and awards seem a positive feature of the system – they may be conceived of and framed as a well-deserved acknowledgment of the laureate’s contribution to research or education, providing the recipient with a motivational impulse to continue their undoubtedly groundbreaking work. What is often not considered, however, is that the mere presence of prizes and awards in the system reinforces the perception that academics are in competition with one another. Is it worth awarding one or a small group of individuals a prize, if that comes at the expense of reinforcing an overall competitive climate in which there are a couple of winners (of a prize, award, funding) and a large majority of nonwinners? In fact, in line with the Twenty-First Century Rat Race Model introduced in this essay, the intuitively positive gesture of awarding a certain prize may in practice actually contribute to establishing or maintaining an overall climate of competition that induces stress and other commonly and increasingly observed mental and psychosomatic health issues in (young) academics.

Finally, perhaps no longer surprisingly at this stage, decisions on who will receive a certain prize or award are typically biased, in that “adjudication committees, ranking advisories and the leadership of top-ranked institutions form an echo chamber that conflates academic excellence with being white, male, wealthy, and famous” (Stack, 2020, 4). Despite any good intentions, award committees commonly select prize-winners who have a background similar to their own (Stack, 2020). In line with the well-known Matthew effect, awards and prizes may subsequently reduce the visibility of academics, and of their work, outside this privileged group (Merton, 1968), thereby also unjustly influencing who will receive any future award or scientific funding (Bol et al., 2018).
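To see how such a feedback loop could work in principle, consider the toy simulation below. This is a deliberately simplified sketch, not the model used by Bol et al. (2018); all probabilities and parameter values are arbitrary assumptions. Two equally talented academics differ only in whether an early award slightly raises their visibility; over a career, that small initial difference compounds.

```python
# Toy simulation of cumulative advantage (the Matthew effect). All parameter
# values are arbitrary and for illustration only.
import random

def career_awards(early_award: bool, rounds: int = 20,
                  base_p: float = 0.2, boost: float = 0.05) -> int:
    """Count awards over a career; every win raises future win odds by `boost`."""
    p_win = base_p + (boost if early_award else 0.0)  # equal talent, unequal luck
    awards = 0
    for _ in range(rounds):
        if random.random() < p_win:
            awards += 1
            p_win = min(p_win + boost, 0.9)  # winning begets visibility begets winning
    return awards

random.seed(42)
trials = 10_000
with_early = sum(career_awards(True) for _ in range(trials)) / trials
without = sum(career_awards(False) for _ in range(trials)) / trials
print(f"mean awards with early award: {with_early:.1f}; without: {without:.1f}")
```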

Conclusion

So how to be a good university in times of dataism and quantification? For one, universities may be expected to base their policies on scientific facts. If quantitative student evaluations of teaching are evidently biased and invalid, universities should develop alternative measures of assuring high-quality education. If international university rankings are intrinsically flawed and unscientific, universities should have the courage to actively oppose rather than reinforce them. If mental health problems in young academics and students are at an unprecedentedly high level, universities should seriously analyze the broader societal climate they are part of and consider setting a different example. When universities continue to ignore the scientific evidence around these matters, how can they expect society at large to take other scientific findings seriously?

If the analysis presented in this essay is correct, turning the university into a healthier environment for students and staff requires a substantial change in culture and policy. Rather than focusing on the treatment of symptoms (e.g., stress, anxiety) via psychological support (e.g., mindfulness training), the overall academic and societal climate in which these symptoms surface requires in-depth analysis and opposition. It is not unlikely that the present, relatively young generation of academics, who grew up in the current societal climate of dataism and competition and often climbed the academic ladder at the expense of their own health, will need to be given the power, the freedom, and the means to lead this change for it to be effective.

References

Bol, Thijs, Mathijs de Vaan, and Arnout van de Rijt. 2018. “The Matthew Effect in Science Funding”. Proceedings of the National Academy of Sciences 115 (19): 4887–90. https://doi.org/10.1073/pnas.1719557115.

Brankovic, Jelena. 2022. “Why Rankings Appear Natural (But Aren’t)”. Business & Society 61 (4): 801–6. https://doi.org/10.1177/00076503211015638.

Gadd, Elizabeth. 2021. “Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up”. Frontiers in Research Metrics and Analytics 6. https://www.frontiersin.org/articles/10.3389/frma.2021.680023.

Groen, Tessa. 2020. “World University Rankings: spel van marktgestuurde universiteiten” [World University Rankings: A Game of Market-Driven Universities]. Accessed August 10, 2022. https://www.apache.be/gastbijdragen/2020/10/02/world-university-rankings-een-spel-van-de-marktgestuurde-universiteit.

Heffernan, Troy. 2022a. “Sexism, Racism, Prejudice, and Bias: A Literature Review and Synthesis of Research Surrounding Student Evaluations of Courses and Teaching”. Assessment & Evaluation in Higher Education 47 (1): 144–54. https://doi.org/10.1080/02602938.2021.1888075.

———. 2022b. “Abusive Comments in Student Evaluations of Courses and Teaching: The Attacks Women and Marginalised Academics Endure”. Higher Education, Advanced Online Publication. https://doi.org/10.1007/s10734-022-00831-x.

Hessler, Michael, Daniel M Pöpping, Hanna Hollstein, Hendrik Ohlenburg, Philip H Arnemann, Christina Massoth, Laura M Seidel, Alexander Zarbock, and Manuel Wenk. 2018. “Availability of Cookies during an Academic Course Session Affects Evaluation of Teaching”. Medical Education 52 (10): 1064–72. https://doi.org/10.1111/medu.13627.

Houtum, Henk van, and Annelies van Uden. 2022. “The Autoimmunity of the Modern University: How Its Managerialism Is Self-Harming What It Claims to Protect”. Organization 29 (1): 197–208. https://doi.org/10.1177/1350508420975347.

King, Molly M., Carl T. Bergstrom, Shelley J. Correll, Jennifer Jacquet, and Jevin D. West. 2017. “Men Set Their Own Cites High: Gender and Self-Citation across Fields and over Time”. Socius 3 (January): 2378023117738903. https://doi.org/10.1177/2378023117738903.

Larivière, Vincent, Chaoqun Ni, Yves Gingras, Blaise Cronin, and Cassidy R. Sugimoto. 2013. “Bibliometrics: Global Gender Disparities in Science”. Nature 504 (7479): 211–13. https://doi.org/10.1038/504211a.

Leeuwen, Thed van, Henk Moed, Robert Tijssen, Martijn Visser, and Anthony van Raan. 2004. “Language Biases in the Coverage of the Science Citation Index and Its Consequences for International Comparisons of National Research Performance”. Scientometrics 51 (1): 335–46. https://doi.org/10.1023/a:1010549719484.

Levecque, Katia, Frederik Anseel, Alain De Beuckelaer, Johan Van der Heyden, and Lydia Gisle. 2017. “Work Organization and Mental Health Problems in PhD Students”. Research Policy 46 (4): 868–79. https://doi.org/10.1016/j.respol.2017.02.008.

Malouff, John M., Ashley J. Emmerton, and Nicola S. Schutte. 2013. “The Risk of a Halo Bias as a Reason to Keep Students Anonymous During Grading”. Teaching of Psychology 40 (3): 233–37. https://doi.org/10.1177/0098628313487425.

Merton, Robert K. 1968. “The Matthew Effect in Science”. Science 159 (3810): 56–63. https://doi.org/10.1126/science.159.3810.56.

Pier, Elizabeth L., Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford, and Molly Carnes. 2018. “Low Agreement among Reviewers Evaluating the Same NIH Grant Applications”. Proceedings of the National Academy of Sciences 115 (12): 2952–57. https://doi.org/10.1073/pnas.1714379115.

Raan, Anthony F. J. van. 2005. “Fatal Attraction: Conceptual and Methodological Problems in the Ranking of Universities by Bibliometric Methods”. Scientometrics 62 (1): 133–43. https://doi.org/10.1007/s11192-005-0008-6.

Rasch, Miriam. 2020. Frictie. Ethiek in tijden van dataïsme [Friction: Ethics in Times of Dataism]. De Bezige Bij.

RIVM. 2021. “Monitor Mentale Gezondheid en Middelengebruik Studenten Hoger Onderwijs. Deelrapport I. Mentale Gezondheid” [Monitor on Mental Health and Substance Use among Students in Higher Education. Part I: Mental Health]. Accessed August 10, 2022. https://www.rivm.nl/publicaties/monitor-mentale-gezondheid-en-middelengebruik-studenten-deelrapportI#abstract_en.

Stack, Michelle. 2020. “Academic Stars and University Rankings in Higher Education: Impacts on Policy and Practice”. Policy Reviews in Higher Education 4 (1): 4–24. https://doi.org/10.1080/23322969.2019.1667859.

Times Higher Education. 2022. “Tilburg University”. Accessed August 10, 2022. https://www.timeshighereducation.com/world-university-rankings/tilburg-university.

Uttl, Bob, Carmela A. White, and Daniela Wong Gonzalez. 2017. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related”. Studies in Educational Evaluation 54 (September): 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007.

Vernon, Marlo M., E. Andrew Balas, and Shaher Momani. 2018. “Are University Rankings Useful to Improve Research? A Systematic Review”. PLOS ONE 13 (3): e0193762. https://doi.org/10.1371/journal.pone.0193762.
