This is a guest post by Dr Stephen Anning, Visiting Researcher in the Department of Web Science at the University of Southampton and online tutor for the MA in Artificial Intelligence.
This blog post shares insights from a student webinar discussion in the ‘Introduction to AI’ module of the University of Southampton’s online MA in Artificial Intelligence.
The course is designed to open the world of artificial intelligence to non-STEM students, showing that you don’t need advanced coding skills or a deep love of maths to engage meaningfully with AI. If you’re curious about how AI is shaping society but haven’t written a line of code, this course aims to make the field accessible, critical and relevant.
Artificial intelligence vs artificially generated intelligence
The webinar discussion was framed around a key distinction from the module reading: the difference between artificial intelligence and artificially generated intelligence. Artificial intelligence refers to the philosophical idea that machines might one day exhibit human-like cognition.
Artificially generated intelligence, by contrast, describes the mathematical and statistical techniques that allow machines to process data at scale. These systems identify patterns and generate insights that can support and augment human decision-making. Understanding this distinction is central to the course’s aims and learning outcomes.
The Turing test and the origins of AI
Ground zero for artificial intelligence, and for this course, is the Turing test. A significant challenge with the Turing test is that it inherently frames the relationship between humans and machines as a zero-sum competition, where success is measured by a machine’s ability to deceive, imitate or displace the human.
This competitive narrative has been a staple of science fiction for decades, often depicting a binary struggle for dominance. Increasingly, this narrative has also bled into the corporate world through bold promises to replace entire workforces with automated systems.
The ‘Introduction to AI’ module aims to critically examine the technical and social feasibility of such claims, questioning whether “replacing” people is a realistic or even desirable goal.
From competition to collaboration: the idea of augmented intelligence
Ultimately, our aim is to move beyond the adversarial constraints of the Turing test toward a collaborative paradigm. Rather than viewing AI as a rival, we want to explore a world of augmented intelligence, where the computational power of machines and the unique cognitive and emotional strengths of humans work in tandem to achieve what neither could alone.
Why defining intelligence in AI is a philosophical question
A major theme that surfaced concerned the fundamental concepts of AI, especially the contested definitions of “artificial” and “intelligence”. Several students seemed surprised by how philosophical the field is.
In fact, the foundations of AI are inseparable from philosophical inquiry.
As we discussed, the type of AI imagined by Alan Turing—the idea of machines thinking—belongs largely to the realm of artificial intelligence and has fascinated philosophers for decades.
This philosophical inquiry concerns artificial general intelligence (AGI), robot consciousness, and debates over whether machines can ever replicate the richness of human cognition, emotion or experience.
The Turing test was originally called the Imitation Game, yet does the imitation of human intelligence equate to human consciousness? You decide, because the answer very much depends on what you believe about humanity.
Artificially generated intelligence and the reality of AI today
In contrast, what is impacting society right now is not human-like intelligence in machines.
What we have instead is artificially generated intelligence: machines that compute at scale by identifying patterns in vast datasets and produce outputs that imitate intelligence without any real understanding in a philosophical sense.
As several students insightfully noted, this output is essentially statistical pattern matching: a powerful form of automation, prediction and optimisation, but not cognition. Any sense of understanding resides in the human interpretation of the outputs. This distinction is vital to maintaining clarity amid the hype that surrounds contemporary AI.
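To make the idea concrete, here is a minimal sketch of statistical pattern matching: a toy bigram model that predicts the next word purely from counted word frequencies. The corpus and function names are invented for illustration; real language models are vastly larger, but the underlying principle is the same—the system counts patterns, it does not understand them.

```python
from collections import Counter, defaultdict

# A hypothetical toy corpus standing in for a "vast dataset".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: pure frequency statistics, no meaning.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("sat"))  # "on" -- looks sensible, but it is only counting
print(predict_next("on"))   # "the" -- the only word ever seen after "on"
```

The output appears intelligent because the corpus is patterned, not because the program grasps what a cat or a mat is; any understanding is supplied by the reader.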
Real-world applications of AI across industries
The discussion also addressed the social and professional relevance of AI.
Students raised real-world cases from engineering, medicine, mental health, cybersecurity, defence and education. These examples illustrate how artificially generated intelligence is embedded into everyday work: detecting faults in electrical transformers, supporting clinicians with documentation, identifying disease early or helping young people accelerate learning.
However, as several students rightly emphasised, such benefits depend on appropriate governance, ethical oversight and well-defined problem framing. As noted in the discussion, productivity gains are only meaningful if they do not create wider societal costs.
Human direction in the relationship between humans and machines
AI only solves the right problems if humans set the right direction. This principle aligns closely with our commitment to responsible, human-centred AI and with the module’s learning outcomes.
Student discussions around cognitive decline, emotional reliance on chatbots, misinformation and algorithmic bias demonstrate an emerging awareness of the societal risks. We recognised that artificially generated intelligence relies heavily on the data it is fed and therefore risks amplifying groupthink, discrimination or harmful norms if not carefully managed.
Many students also recognised that AI systems increasingly learn from AI-generated material—what we called AI slop—raising concerns about systems effectively “eating their own tail”. The potential societal costs are high.
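A toy sketch can illustrate this feedback loop. Assuming nothing about real training pipelines, it treats the “model” as nothing more than the empirical distribution of its training data, then retrains each generation on the previous generation’s own samples; over repeated rounds, rare items tend to disappear from the distribution, which is the statistical mechanism behind systems “eating their own tail”.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical training data: a distribution over three phrases,
# with "snowy" as the rare minority item.
corpus = ["sunny"] * 6 + ["rainy"] * 3 + ["snowy"] * 1

for generation in range(1, 6):
    # Each generation is "trained" only on samples drawn from the
    # previous generation's output -- no fresh human-made data arrives.
    corpus = random.choices(corpus, k=len(corpus))
    print(generation, sorted(set(corpus)))
```

Once a rare item fails to be sampled in some generation, it can never return, so diversity can only shrink—a simplified picture of why training on AI-generated material risks narrowing what such systems can produce.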
Ethical challenges in artificial intelligence
Another major theme linked to ethics: who defines what is right and wrong? Several students questioned the cultural, political and value-laden assumptions encoded in AI systems.
The discussion noted the influence of Western models, the possibility of future geopolitical divergence and the difficulty of creating “unbiased” systems when even the notion of truth is contested.
This awareness will be essential when applying ethical frameworks to AI applications. Ultimately, we must decide which societal values should govern the use of AI.
Why human-centred AI matters
Finally, the webinar touched on essential human dimensions: trust, emotion, relationships, identity and social norms. Machines may compute intelligence, but it is humans who interpret, negotiate and act upon it. This is why human-centricity is a core principle of responsible AI and a guiding value for this module.
Find out more about the MA in Artificial Intelligence
Artificial intelligence is reshaping decision-making across industries, from healthcare and finance to agriculture and education. The University of Southampton’s online MA in Artificial Intelligence is a conversion course designed for professionals who want to understand and apply AI responsibly. You don’t need prior AI experience or coding knowledge. The course provides a foundation in AI principles, machine learning and the social impact of intelligent systems:
Explore the course