New paper highlights dangerous misconceptions of AI

September 2023 edition
CDU AI expert Dr Stefan Popenici explores misconceptions about Artificial Intelligence, and how they harm education, in a new research paper.

Artificial Intelligence (AI) is discriminatory, susceptible to racial and sexist bias, and its improper use is sending education into a global crisis, a leading Charles Darwin University (CDU) expert warns in a new research paper.

‘The critique of AI as a foundation for judicious use in higher education’ urges society to look beyond the hype of AI and analyse the risks associated with adopting the technology in education after AI ubiquitously “invaded and colonised public imaginations across the world” in late 2022 and early 2023.

In the paper, author and CDU AI expert Dr Stefan Popenici discusses the two most dangerous myths about AI in education: the belief that AI is objective, factual and unbiased, when it is in fact tied to specific values, beliefs and biases; and the belief that AI doesn't discriminate, when it is inherently discriminatory, also pointing to the lack of gender diversity in the growing field.

“If we think about how technology actually operates, we realise that there is not one point in the history of humanity when technology is not directly related to specific cultures and values, beliefs and biases, religious beliefs or gender stances,” Dr Popenici said.

“There is consistent research and books that are providing examples of AI algorithms that discriminate, grotesquely amplify injustice and inequality, targeting and victimising the most vulnerable and exposing us all to unseen mechanisms of decision where we have no transparency and possibility of recourse.”  

Dr Popenici examines how the discrepancy between the priorities of higher education and “Big Tech” – the most dominant companies in the information technology industry – is growing, with a striking and perilous absence of critical thinking about automation in education, especially in the case of AI. This lack of critical concern about AI in education affects how students’ data is used, and impacts their privacy and their ability to think critically and creatively.

“Big Tech is driven by the aims of profits and power, control and financial gain. Institutions of education and teachers have very different aims: the advancement of knowledge and to nurture educated, responsible, and active citizens that are able to live a balanced life and bring a positive contribution to their societies,” Dr Popenici said.

“It is deceiving to say, dangerous to believe, that artificial intelligence is... intelligent. There is no creativity, no critical thinking, no depth or wisdom in what generative AI gives users after a prompt.

“Intelligence, as a human trait, is a term that describes a very different set of skills and abilities, much more complex and harder to separate, label, measure and manipulate than any computing system associated with the marketing label of AI.

“If universities and educators want to remain relevant in the future and have a real chance to reach the aims of education, it is important to consider the ethical and intellectual implications of AI.”

‘The critique of AI as a foundation for judicious use in higher education’ was published in the Journal of Applied Learning & Teaching.

Dr Popenici is a leading expert on AI in education who last year published the book Artificial Intelligence and Learning Futures: Critical Narratives of Technology and Imagination in Higher Education.
