Word Count: 1197
Slovenian cultural philosopher Slavoj Žižek once explained to a general audience:
“As important as providing answers is…[philosophy] can ask the right questions. There are not only wrong answers, but there are also wrong questions. Questions which deal with a real problem but the way they are formulated, they obfuscate, mystify and confuse the problem [sic].”1
Here, Žižek draws from the long-honoured tradition of the Socratic method. As the stories go, Socrates would accost the knowledgeable men of ancient Athens and question their expertise. The result was always the same: The artists knew little about beauty, the generals knew little about courage, and the leaders demonstrated an insufficient understanding of justice. Why? Because we all harbour unexamined beliefs, and those presuppositions affect our worldview and, subsequently, how we think, act, and shape what we presume is possible. However, the right questions reveal our judgements as limited. Through this process, we can begin to unpack why and how we come to these wrong answers and seek better ones—but first, ask the right questions.
Fast-forward to the end of 2022. Artists, politicians, academics, and everyone on Reddit had contracted fevered anxieties over OpenAI’s ChatGPT and Midjourney’s image generator. It is not a new subject, but one that surges back into popularity as new problems arise, and one expected to do so more often as machine learning is further developed and deployed. It is a complex phenomenon with far-reaching material and social dimensions that we have yet to comprehend fully, adding to our collective anxieties. It is as though we are the Mouse in The Sorcerer’s Apprentice, wearing our wizard hat. We may automate brooms to fetch us water, but what will happen when our machines carry out their directives too well, or at catastrophic cost? We may even discover, as did the Apprentice, that we cannot prevent them from executing the tasks we gave them.
The most common question one is likely to encounter is, “Should we ban AI from academia, art galleries, or other specific places or fields of discourse?” While this question has its moderate applications (limiting AI art in galleries, for instance, ensures that human art is celebrated), the proposal to ban AI from academia is a wrong question, one that only mystifies the challenges ahead.
Let us grant, for argument’s sake, that AI should be banned in schools. What does this mean? On one interpretation, a ban might broadly cover all instances and uses of machine learning in academic writing and research. On a narrower interpretation, it might target only the most uninspired academic frauds, those so brazen as to copy-paste complete exposition and argumentation verbatim.
In the narrower sense, some promise is offered by anti-plagiarism AIs that check texts for signs of having been generated. In one survey, preliminary studies testing one hundred false positives (texts that human readers had failed to identify as generated) showed that detection models could isolate a series of common patterns of speech indicative of generated text.2 However, using AI to detect AI plagiarism is not guaranteed to remain effective for long. Machine learning is highly iterative, and the mistakes a model makes today are likely to be absent tomorrow, which means we will need increasingly complex checks and balances to catch ever more clever forms of academic dishonesty.
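To make the idea of “common patterns of speech” concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy example, not the method described in the cited survey: it scores a passage on two crude statistical signals sometimes associated with generated prose, low lexical diversity and repeated word trigrams. Trained detectors learn far subtler features, but the sketch also shows why such signals are brittle: a model that stops repeating itself tomorrow defeats a rule written today.

```python
# Toy illustration only: a naive "pattern" detector that scores text on two
# crude statistical signals (low lexical diversity, reused word trigrams).
# Real detectors discussed in the survey use trained language models, not rules.
from collections import Counter


def repetition_score(text: str) -> float:
    """Return a 0..1 score; higher means more repetitive, 'template-like' prose."""
    words = text.lower().split()
    if len(words) < 3:
        return 0.0
    # Signal 1: lexical diversity (unique words / total words), inverted.
    diversity = len(set(words)) / len(words)
    # Signal 2: fraction of word trigrams that occur more than once.
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1) / len(trigrams)
    return 0.5 * (1.0 - diversity) + 0.5 * repeated


if __name__ == "__main__":
    sample = "the results show that the results show that the results are clear"
    print(f"score = {repetition_score(sample):.2f}")  # closer to 1.0 = more suspicious
```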
If our ban is instead targeted in the broader sense, preventing all machine learning from participating in writing and research, I hate to inform you, but the cat is out of the bag. Machine learning and neural networks are already indispensable tools across the sciences and social sciences. So even if we grant that an AI ban is the correct course of action, it seems increasingly difficult (perhaps impossible) to enforce in the narrow sense and, given the current state of computer science and research, misguided in the broad sense.
“Should we ban AI from academia?” also treads dangerously close to a Luddite view of technology. “Luddite” generally describes a person who favours technological regress; historically, however, the English Luddites disavowed and destroyed machinery during the early years of the Industrial Revolution out of the reactionary conviction that integrating machines would make their labour obsolete, an anxiety we still possess.3
The historical Luddites also offer an analogy for how the wrong types of questions obfuscate a problem. Their conclusion was a simple one: destroy the machines they perceived as threats to their livelihood. What they could not articulate was that the tension was not man versus machine but between those who sell their labour and those who buy it. It was against the backdrop of industrialization that the asymmetric power dynamic between labour and ownership became demystified, allowing the observations of Adam Smith and Karl Marx to take shape. The takeaway is that because the machines were not banned, we were afforded a clearer picture of labour, ownership, and production, and the same logic applies to machine learning.
Many important questions are waiting for us, but we tend to see them only when they are staring us in the face. How can we manage even that while we defiantly close our eyes? Thankfully, not all fields have suffered from this reaction, and as a result, they produce better questions.
We recognize that machine learning reproduces human bias and can even amplify it4 (a dynamic sketched after this paragraph), which raises the question of whether our unintentional biases can ever be removed from the data sets on which research increasingly relies. Similarly, the object-detection systems used in automated cars recognize pedestrians with certain skin tones less reliably5, another case of biased data whose consequences are realized only once vehicles are on the road. This raises a further critical question: who is ethically responsible for autonomous machines? Finally, as it stands, the proprietary ownership of these technologies by mega-corporations like Apple and Google leads us to question the nature of knowledge and its ownership. If Midjourney is an aggregate of our collective artistry and ChatGPT a summarization of our collected works of knowledge, is it right for them to be owned for profit? All of these questions arise because machine learning has been adopted into sophisticated societal roles, not despite it.
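The amplification claim can feel abstract, so here is a minimal feedback-loop sketch, again a toy Python example of my own and not the model from the cited study: both groups are assumed equally clickable, yet because the system allocates most of its exposure to whichever group its data already favours, and users can only click on what they are shown, a 52/48 imbalance in the data snowballs within a few iterations.

```python
# Toy illustration only (not from the cited study): a ranking system gives most
# of its exposure to whichever group its data currently favours, so a tiny
# initial imbalance grows with every round of interaction.

clicks = {"group_a": 52, "group_b": 48}  # hypothetical, nearly balanced history

for step in range(1, 11):
    share_a = clicks["group_a"] / (clicks["group_a"] + clicks["group_b"])
    # Winner-take-most exposure: 80 of 100 new slots go to the leading group,
    # even though both groups are assumed equally likely to be clicked when shown.
    exposure_a = 80 if share_a >= 0.5 else 20
    clicks["group_a"] += exposure_a
    clicks["group_b"] += 100 - exposure_a
    print(f"step {step:2d}: group_a share of the data = {share_a:.3f}")
```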
While this essay criticizes AI bans as technological regression and for the intellectual deficits they create, I close by remarking that this is not an argument for the laissez-faire adoption of AI and machine learning in academia. We ought to curb academic dishonesty at every turn, and ChatGPT offers the dishonest a new avenue of play. In addition, AI hallucinations are akin to being lied to by a machine and must be scrutinized meticulously to prevent them from becoming institutionalized as knowledge. The problem, however, is that these issues existed before generative text, and a ban on AI will not solve them.
There has always been a market for plagiarism, and scholarly research has always been discredited when new information comes to light. We must take proactive positions regarding our future alongside machine learning. Failure to do so may mean we miss the novel and crucial questions that will shape the consequences of AI’s role in society.
Bibliography
- Big Think (Freethink Media), “Slavoj Žižek – The Purpose of Philosophy is to Ask the Right Questions,” 2017, video, https://bigthink.com/videos/the-purpose-of-philosophy-is-to-ask-the-right-questions/.
- Jawahar, Ganesh, Muhammad Abdul-Mageed, and Laks V. S. Lakshmanan. “Automatic Detection of Machine Generated Text: A Critical Survey.” International Conference on Computational Linguistics (2020). https://arxiv.org/pdf/2011.01314.pdf.
- Donnelly, F. K. “Luddites Past and Present.” Labour / Le Travail 18 (1986): 217–21. https://doi.org/10.2307/25142685.
- Sun, W., O. Nasraoui, and P. Shafto. “Evolution and Impact of Bias in Human and Machine Learning Algorithm Interaction.” PLoS ONE 15, no. 8 (2020): e0235502. https://doi.org/10.1371/journal.pone.0235502.
- Wilson, Benjamin, Judy Hoffman, and Jamie H. Morgenstern. “Predictive Inequity in Object Detection.” arXiv abs/1902.11097 (2019). https://arxiv.org/pdf/1902.11097.pdf.