Will AI replace us? Dame Wendy Hall has a more hopeful answer

 

At a time when artificial intelligence is moving from research labs into everyday life, one of the world’s leading thinkers is calling for a broader, more inclusive approach to shaping its future.

Speaking at the University of Southampton’s AI: Friend or Foe? alumni event, Professor Dame Wendy Hall delivered a clear message: AI must not be left solely to the engineers and big technology companies who build it.

“AI is too profound a breakthrough not to use it,” she said. “I genuinely think that the good use of AI will help us have better quality lives.”

The event, hosted by BBC Technology Editor and fellow Southampton alumna Zoe Kleinman, drew global interest. Dame Wendy explored AI’s potential to improve lives, the risks it presents, and how Southampton is helping people across all disciplines gain the tools to engage with this technology meaningfully.

Dame Wendy Hall

“History tells us that new breakthroughs in technology lead to more jobs, not less.” 

  • Professor Dame Wendy Hall, DBE, Regius Professor of Computer Science at the University of Southampton

AI should be seen as a teammate, not a threat

Dame Wendy, a trailblazer in computer science, urged people to see AI not as a rival but as a collaborator.

“[AI will take the drudge out of life]... but it doesn’t mean that the people that do that job... are going to disappear overnight... they'll be able to do other more advanced things,” she explained, later going on to add “I think of AI as part of the team.”  

Dame Wendy and Kleinman explored examples ranging from healthcare, where AI is already speeding up diagnoses and supporting radiologists, to education and scientific research, where vast amounts of data are now being analysed with machine learning in ways that humans could never achieve alone.

The University of Southampton's leading role in shaping the AI future

Dame Wendy’s remarks also celebrated the university she has called home for over 40 years. 

She opened the conversation by reflecting on her deep connection to Southampton, “I’m a proud alumni of Southampton and I’m proud to have worked at Southampton. I was tempted away a few times, but I’ve always stayed at Southampton because it’s always been the right place for me to work from.” 

Later, she proudly noted the university’s pioneering work in AI, “Southampton is one of the top multi-agent systems research groups in the world... We’ve been doing multi-agent systems for twenty-odd years.” 

That heritage now underpins a forward-thinking approach to AI education. Southampton’s suite of online postgraduate conversion courses is designed for students from non-technical backgrounds. These courses are ideal for professionals in law, public service, media, healthcare, arts, or business who want to understand and apply AI in their own field.

Southampton Online currently offers three specialist pathways:

Dame Wendy explained that, “This MA is online so anyone can do it, and people from any discipline can do it. I’m very excited about that.”

The university’s online programmes also specifically address the need for ethical, social and policy-based perspectives on AI, something Dame Wendy has championed at the highest levels, including through her role on the United Nations’ High-Level Advisory Body on Artificial Intelligence.

Optimism with a note of caution

While Dame Wendy and Kleinman spoke with optimism about AI’s potential to transform healthcare, education, and global development, they were equally candid about the serious challenges it poses. From environmental costs to misinformation and deepfakes, both emphasised the need for strong oversight and accountability.

Earlier in the conversation, Dame Wendy pointed to the historical impact of technology on employment, noting, “History tells us that new breakthroughs in technology lead to more jobs, not less.” 

Later, reflecting on the scale and inevitability of AI’s rise, she added, “The genie is out of the bottle — we’ve got to cope with it anyway.” 

The conversation as a whole suggested a broad vision: an AI-literate society in which technologists, ethicists, creatives, educators, and policymakers work together to shape systems that benefit humanity. 

Whether AI proves a friend or foe will likely depend not on the machines themselves, but on the people shaping their use.

Watch the full video below:

Read the video transcript

Good evening and welcome to our event this evening. Thank you so much for joining us. I know that there are people from all around the world here tonight, Southampton alumni, supporters, staff, students and members of the local community, and I also know the weather is glorious, it's even glorious here in Glasgow, which never happens, so thank you ever so much for taking the time to come and join us this evening.

I should say that more people signed up to come and see Professor Dame Wendy Hall and I than signed up to see Jon Sopel, who is a fellow Southampton alumnus and not at all a rival of mine, so I'm very pleased about that too. Thank you.

My name's Zoe Kleinman, and let me introduce myself briefly. I'm the technology editor of the BBC and I'm also a Southampton alumna myself. A hundred years ago, I graduated from the university. I'm absolutely delighted to be here this evening with the brilliant Professor Dame Wendy Hall. She's one of the world's leading experts in artificial intelligence, and she and I talk often about various news lines to do with AI, often from a cruise ship, I think, Wendy, but, you know, wherever you need to do your thinking, that's fine.

Over the next hour or so, we're going to dive into the fascinating world of AI, the challenges that it represents, the promise that it brings, and more importantly, how it's going to shape our future. And I think that is so important because it really is going to affect everybody's lives, if it isn't already. I've been covering tech for many years here at the BBC, and I have to say that the rise of AI has been one of the most exciting and fast-moving developments that I've witnessed. And I think it certainly will be, from my point of view, the story of my career and indeed of my lifetime.

It's something that comes with a lot of potential problems and also a lot of potential benefits if we get it right. And the question of how we get it right is something that enormously divides opinion. There are lots of different thoughts about which way we should be approaching this rapidly evolving and very smart tech. And the person we should all be listening to, of course, is Professor Dame Wendy Hall, who has all the answers. Let me tell you quickly how we're gonna do this. We are gonna chat for the next half an hour or so.

I've got lots of questions, and I know that the audience is gonna have lots of questions as well so, we're going to get to as many questions as we can in the last half an hour of the session.

I feel like Professor Dame Wendy Hall doesn't really need much introduction. She is a pioneer of computer science and one of the most influential figures in the world of artificial intelligence and web science. She's Regius Professor of Computer Science and Director of the Web Science Institute at the University of Southampton, where she's worked for more than forty years, you must like it there, in addition to being a proud member of the alumni community just like me. In 2023, Dame Wendy was appointed to the United Nations High-Level Advisory Body on Artificial Intelligence. Her service and dedication to the field have earned her numerous accolades, including a damehood for her services to science and technology. Dame Wendy is also particularly passionate about promoting the role of women in technology and has done that consistently throughout her career. She's been an inspiration to lots of women, including me.

Dame Wendy, a warm welcome to you. I'm so glad that you could join us. You're hot on the heels of returning from the USA, as am I. I hope the jet lag was kinder to you than it was to me, but I think we're back in the same time zone now. So let's get started. I think we should begin by saying what we mean by artificial intelligence when we use that term AI. There's a tendency to focus on generative AI, isn't there? Things like ChatGPT, image generators, video generators, and that's because it's the thing most people see, it's the most visible, but it's only a small part of the pie, isn't it?

It is. Let me just start by saying, lovely to see you again, Zoe. I always love it when you WhatsApp me when I'm on a cruise: so what do you think of this story? And I'm also, yes, a proud alumni of Southampton, and I'm very proud to have worked here. I was tempted away a few times, but I've always stayed at Southampton because it's always been the right place for me to work from, and I'm very proud of that. And please, you can call me Wendy. Well, now you've said it, I will. Yes, please do.

So, when the media talk about AI, it's often very unclear what they're really talking about. And most people at the moment are talking about what generative AI does, and we can come back to that.

I usually go back to saying that, in fact, it all started seventy-five years ago. We're celebrating that later this year; we're running an event about Alan Turing and his Turing test. He wrote the paper seventy-five years ago asking the question: could machines think? This was before we had any machines, really, to speak of. And then the meme started. The term AI was coined in the sixties in the States, and it largely came out of philosophy. There were no computers then that you could do anything with, they were very embryonic, and there was no data to do the sorts of things we do today. So they were really thinking about: could you build a machine that thinks like a human being? Could we mimic the human brain? It was very hard to do with the computers of the time, as I say, and people started to build decision-making expert systems.

But the other thing: I started as a mathematician, and when I came back to Southampton as a computer scientist in nineteen eighty-four, there was a lot of AI work already happening here. There was some expert system work, but we had, and still have, a very well-known electronics department, now Electronics and Computer Science, but that's another story. And the work then was all about this: if a machine was going to be able to think, it needed to understand the world it was in. It needed sensory perception. It needed to be able to see, to hear, to read natural language and speak natural language. So it was text processing, image processing, video processing. That was where the world was.

Actually, when we look at an image, in an instant we can work out what it's about. It takes an awfully long time to train a machine to do that; we still haven't quite got there. But in the last forty years, that's something that has come on in leaps and bounds as the computers have got more powerful. That was how AI was thought of then. And then we had expert systems, and then we moved into this world of neural networks. And I remember, in my early days in Southampton, there were people in our department working on the idea of neural networks, which is what Geoffrey Hinton famously just got the Nobel Prize for, right?

Let's pause for a quick glossary moment and explain what a neural network is. Well, I was just going to do that, Zoe. To help people understand: we have neurons in the brain, and they send signals to each other. That's how one bit of the brain communicates with another, through the neurons. And the idea was: can we mimic that in a machine? And of course, the more neurons you have (we make jokes about this, "they haven't got any neurons"), the more clever you're going to be, or the more you're going to be able to interpret information. The networks get explosively more complicated the more neurons you have and the more layers you have in the model.

So, that was Geoffrey Hinton's idea. He had to leave the UK because everyone said he was mad; he couldn't get funding here to do it. That's when he went to America and then to Canada. We still famously think of him as a Brit. He was trained at Edinburgh. When did he go? In the late seventies? It was a long time ago when these ideas began to emerge.

And at Southampton, I won't reel off the names, but I can remember the people here who were doing that work. It was a real struggle to get the funding, because the government at the time said, we're not going to fund this stuff, it's not going anywhere. Now here we are, forty years later, maybe thirty years later when this work began to mature, and it's gone into the commercial world as what we call large language models, generative AI, foundation models.

They're all different words for the same thing, which is that sort of network with the predictive modelling that they use. And if you want to know what predictive modelling is, it's what you have on your phone: it predicts the next word. That's effectively what generative AI does, in very, very clever ways, using these neural networks. Machine learning, a phrase I should have used earlier, is all about how we train machines to understand the world. It's kind of advanced pattern spotting, isn't it? Yeah, you could say it like that.
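The phone-keyboard analogy can be made concrete with a deliberately tiny sketch: a next-word predictor built from nothing more than word-pair counts. This is only an illustration of the *idea* of predictive modelling; real generative models use deep neural networks trained on vast corpora, not simple counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the whole of the Internet".
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A phone keyboard does something only slightly fancier than this; a large language model replaces the count table with a neural network that can generalise to word sequences it has never seen.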

And while ChatGPT has a lot to answer for, I think, in terms of shaping the way in which people view AI, it's really one application, and there are some that are arguably more exciting. What, for you, is the most exciting application for AI at the moment?

Actually, in terms of applications, let me just answer on what's exciting at the moment, because, in a way, neural networks and generative AI are a form of machine learning. And again, universities like Southampton have been doing this and teaching it for a long time, as have all the big universities in the UK with big AI research groups. The key thing that makes the difference today, why we can actually deliver results as clever people push the frontiers of the science forward, is the huge compute power we can draw on, and we could talk about the energy supplies for that down the line, and the huge amounts of data. And the data is there because we have the Internet.

I mean, before I became an AI policy geek and started advising governments in that area, I was most well known for the work I've done on the Internet and the World Wide Web. And it's that availability of the Internet, through the World Wide Web, that has enabled us to share data and build up these huge amounts of data.

And these new models, the generative AI models, are trained basically on the whole of the Internet, which is a mind-boggling idea. And that's also part of their flaws, this is the friend-or-foe thing, okay? We can come back to that. And it's problematic: there are all sorts of problems involved in scraping the Internet every day, and we'll come back to that maybe.

But you asked about applications, and I am an optimist, although I think we have to be very careful about how we approach AI and how we mitigate the risks, because there are risks. But I am a huge optimist, and AI is going to lead to profound breakthroughs in science, because we have the data. You hear it from the scientists, the biologists, the chemists, the physicists, the medics: they have access to huge data, and AI can analyse that data in ways we couldn't begin to do. So I like to think of it as teamwork. Working with AI, scientists will make huge breakthroughs.

The obvious one to talk about, because I think it's the one people can grasp most easily, is health: both in terms of new drugs, new cures for treatment, personalised cures, and the ability AI has to read scans faster and more accurately than human beings. It doesn't mean the end of the job of the radiologist. What they do will change, but we still need skilled people in that sort of area. You used the phrase, when we were talking before, about how AI will take the drudge out of life. And it will, but it doesn't mean that the people who do that job, as, say, the radiologists do today, are going to disappear overnight, because they'll be able to do other, more advanced things and work with the doctors more, to be able to pinpoint better what treatments patients need.

Obviously, though, that phrase, Wendy, is something that Microsoft says, that AI will take the drudgery out of work, whether that's viewing hundreds of mammogram scans every day and worrying about missing something, or the boring office admin that we all hate but have to do. But my point is that for lots of people, for loads of people, the drudgery is the work. And if they're not doing that, what are they gonna be doing? Well, history tells us that new breakthroughs in technology lead to more jobs, not less, overall.

Now, there are always short-term winners and losers, yes, and this is partly what government policies have to do: get us over the tipping points when jobs start to go before new ones are created. I always tell the story of my father, who would have been over a hundred if he were alive; he was born in nineteen nineteen. He was an accountant after he did his service in the war, where he was a prisoner of war in Germany for five years. And by the end of his career in the nineteen eighties, he was a finance director for a relatively large engineering company, and they did everything by hand. Everything: ledgers, copying. He could divide pounds, shillings and pence by pounds, shillings and pence in his head. They had to do all that long division. Remember long division? We don't have to worry about that these days. I do.

Well, my dad was very upset that I never knew how to use a slide rule. He thought it was a terrible failure in my education. Was that because you went to a girls' school? No, I just wasn't taught. I didn't learn to use a slide rule either, and I was at an all-girls school, so it was all log tables. And anyway, today's generation won't understand what we're talking about with slide rules and log tables. I don't really either. I don't think it's held me back.

It was the way you had to do arithmetic, because there weren't calculators and there weren't computers. So the jobs that my father had, teams of people doing all that manual drudge work, copying from one ledger to another, like the old monks used to in the books before printing, with all the mistakes that could creep in: those jobs have all gone.

But there are more jobs in the finance industry than you could ever have imagined these days. And the banks, unfortunately I do miss having a bank on the high street, but we're moving away to back-office, digital work. I used to work in a bank as a summer job, and we used to file cheques in a filing cabinet. I remember it very well. It was really dull. But nowadays the banks are still employing lots of people; they're just not doing those tasks. Everything's about processing, and what we can do now is just amazing. And of course, sometimes the machines do get it wrong. I say the machines get it wrong: the financial crash in two thousand and eight was largely caused, my understanding is, by people doing things using computers where we didn't really understand what they were doing.

And anyway, that's another whole story. So you sound like you've subscribed to the argument that it will bring more jobs? Yeah, and jobs will come and go. I also like to make the point, and I'm old enough to remember when calculators were introduced into classrooms and university lecture theatres, that people said, over my dead body will my students use a calculator.

But the thing about calculators was: if you put the right data in, you got the right answer out. That's how it works, and it's still the case. It's not true with AI. We all know part of the problem, even with good old-fashioned AI. Take face recognition, which was in the papers only today: it's creeping into police forces. We all criticise China for using it, but it's creeping into our world, increasingly without, I don't think, the right checks and balances, because it makes mistakes. You'll get false positives.

Someone will be picked out because the system thinks they committed a crime, and they've been picked up wrongly. And generative AI is even worse in terms of telling you the wrong answer. Yeah, absolutely. And that kind of brings us on to: what do we do about that? We know that AI is biased. We know that it makes stuff up. We know that it's not reliable. What's the answer? There is no one answer. I always come back to this: it has to be teamwork. Which is why I don't believe we're suddenly going to have wholesale losses of jobs, because I don't think any CEO or head of a government department is going to hand over their work to AI that can make mistakes and give you the wrong answers and come out with the wrong decisions.

Now, human beings make mistakes. And so, actually, it's quite logical. If you ask the original Turing question, can machines think like humans? Well, if they are going to think like humans, they're going to make mistakes just like we do, because this stuff is not just numerical data, right? It's about making sense of facts. And so you've got to learn what in the AI world you can trust, and where in the AI world mistakes can come in. You don't want something where the human being has got to check every single thing, but we have to work out systems where, well, I think of AI as part of the team, actually.

That's what we've got to think about. It's like a colleague. It's like a colleague, exactly. I was at the Paris AI Action Summit back in February. As was I. Yes. What a strange event that was. It felt like there was so much tension and disagreement and jostling for power, but it also felt a bit to me like AI was entering its "drill, baby, drill" phase, if you like. And I came away thinking: has safety fallen out of fashion? I mean, it used to be called the AI Safety Summit, and it isn't any more. What's going on with that agenda? Well, there are lots of layers to the question of what happened in Paris and why the UK didn't sign up to the declaration. We could come back to that if you want; it's very political. Yes, it blamed national security, didn't it? Well, this is the point. The UK has changed the name of its AI Safety Institute to the AI Security Institute, which is a very subtle but politically important change, I think, because at the Paris summit there was so much hype in this area, so much lip service to things like ethics and principles and values, that people had no idea how they were going to implement any systems.

What does it mean to say that your AI system is ethical or that it follows these moral values? How are we going to police that? Think about how we try to do it with society; why would machines be any different? We have to make sure that as we build them and build them into our systems, human systems as well as machine-to-machine systems, they are as safe as they can be. And the security piece: I think there is a huge dash of a security issue here, which is largely aligned to where cybercrime comes in. What we don't want is bad actors using AI to cause real harm, to bring our infrastructure down. So the UK was right, then, to be cautious? Well, you see, I think the UK government was very brave in not signing.

There was also the Trump element; you have to remember that, because he'd only been president for a month when that summit happened. And you were reporting on it: JD Vance stood up and told everybody, particularly the Europeans, that the Americans weren't going to have their tanks on our lawn, as the old expression goes. Trump has said very clearly he doesn't want any regulation of AI coming over from anywhere outside America. Which for the AI industry is great, right? Well, for the big tech companies, yes, potentially. And it's worrying that he doesn't think there's a need for any sort of regulation. Anything goes, and that's really worrying, I think.

And that goes against all the advice that people like you have been giving for the last however many years, doesn't it? Well, I think it's the same with any new technology: you've really got to make sure it's not doing harm. And you see, he doesn't even talk about self-regulation. Biden's executive order told the companies: you've got to self-regulate, you've made promises, you'll check your systems before you release them. Imagine saying to the tobacco industry, just self-regulate: you sell tobacco to who you want, but self-regulate the issue around cancer. That's a really big problem. But Trump, what's the word, rescinded that executive order. So he's not even telling the companies they've got to self-regulate. And I just find that so scary.

And it's all this MAGA business, Make America Great Again. It's all about winning the AI race at almost all costs. Yeah. But it's such a powerful and lucrative race; nobody's going to slow down, are they? I would challenge that a bit, but you go on. I want to talk to you a little bit, because we are already running out of time, unbelievably. Oh, no. Really? I want to talk to you about the singularity, Wendy, which is that moment where AI surpasses human intelligence, which the tech leaders tell us is coming at us fairly fast. Is it, do you think, and should we be worried about that? Well, there was a time when the web felt like this. I often go back to the analogies with how the web grew, because there were dangers we didn't know about down the line with an open web, right?

We're living with them now, all the issues of social media. With AI, we can see more where the dangers and the risks might be. And we can see now that we can build systems that are intelligent, clever. Cleverer than us? What does that mean? The original idea of AI was machines that could think like human beings. The tech companies like to say they've already reached that point, so they've moved the goalposts a bit. I don't hear the term singularity used so much, like one moment; it won't be like that at all. But they use the term AGI, artificial general intelligence, because at the moment AI tends to be used in niche areas like health, education, defence, right? The AGI piece is where the machines, the AI systems, can think across different sectors like we can.

We can think across different areas, translate our skills. Multitasking. And actually, the key thing is thinking for themselves, doing things without being instructed by a human being. We've seen glimpses of that, haven't we? I think it's slow. For people wondering what I mean, think about 2001: A Space Odyssey, and HAL telling Dave, "I can't do that, Dave," when Dave wants HAL to open the doors. That's where you've got a problem: when the AI does its own thing and stops the human beings doing things. And then you're into the whole world of, can you let an AI kill human beings? You get into the world of AI and weaponry. And in terms of creativity, this is the whole, we were going to talk about copyright, weren't we?

Can AI be creative? But I think at the moment we aren't at that point, and we have a chance to actually get some governance in, like we have governance for nuclear weapons. Right. Now, unfortunately, that came in after the terrible events in Japan, where people learned what the effect of a nuclear bomb could be. And Stuart Russell, famously, the ethicist, he's a Brit who works out in California, he did the BBC Reith Lectures, a very good set of lectures, and he says, oh, I've forgotten what he says, I've forgotten where I was going with that one. Can I come back to it? Yeah, that's fine. Don't worry, we'll move on. I want to talk to you about something that haunts me a bit, actually.

I've recently finished reading Genesis, a book written by Eric Schmidt and Henry Kissinger. They talk about what they consider to be an inevitable scenario in which we decide whether or not to hand over control to some AI system. And they say that, based on human history, which is what they're trained on, AI is likely either to decide that war is a terrible thing and there'll never be another war, because they won't let it happen, or to look at human history and say, actually, it's the only way we ever get anything done, and there will always be war. What's your view on that? Well, it's like the old joke, which says you give climate change, or global warming as we used to call it, as a problem to AI to solve, and the obvious thing to do is to wipe out all the human beings, right? You reach net zero immediately. That's the same thing the Genesis book says. That was so interesting, because Kissinger died while they were writing that book, didn't he? Yeah. And this is all about control. I remember the Stuart Russell thing now: he's always said we should not allow AI systems to make a decision about killing human beings. Right? But you can see that's already creeping in, because of the drones and things.

You know, who's commanding those drones to do what they do?

And defence is quite notably omitted from a lot of these pledges and commitments, isn't it?

Well, there are people looking at it, but the problem is that it overlaps so much with national security and the intelligence agencies. Our United Nations report, for example, does not cover AI and weaponry.

Was that an active decision?

Yes, because the military arena is already discussing these things, the ethics and the moral values we should adopt in this area. But there's always technical creep.

And there are always the rogue actors, the James Bond villains as I think of them, whether they're heads of state or heads of crime organisations, who use the technology to make money, to win wars, to get power and control. That's always a danger, and it's what we've got to try to mitigate against. We also have to check, as the AI Security Institute, as it's now called, is doing: what are the foundation models coming up with? Can you detect whether any of them are actually thinking for themselves and creating new chemical or biological weapons, or drugs that would harm us? We have got to worry about that, just as we do in the aircraft industry and all the transport industries, where we've learned over time how to keep the world safe. We need to learn the same lessons and have global agreements. Think of the aircraft industry: when there's a plane crash, everyone works out what caused it and how to stop it happening again.

I'm going to go to our audience questions in a moment, because we have so many.

But before I do, one last quick question from me, because I want to end on a positive note. If we do get things right, what does our AI-led future look like, in your opinion?

Well, do you remember that car crash of an interview? I shouldn't say this, we're being recorded, aren't we? That interview between Rishi Sunak and Elon Musk.

Yes. It was an interesting evening. It's not fair to call it a car crash, really, but I thought it was awkward.

And Elon Musk said this AI is going to be absolutely amazing, nobody will need to work.

Yes, I remember he talked about universal high income, didn't he? And I thought, that sounds amazing.

Well, it's fine, though I immediately think back to Thomas More and Utopia. But what does it mean to say no one is going to work? The idea is that the AI will be so good it will keep us healthy, give us food and water, and we won't need to go out to work; everything will be managed for us. And what are we going to do? But I am an optimist, and the genie is out of the bottle, so we have to cope with it anyway. I genuinely think that the good use of AI will help us lead better quality lives. The United Nations report was very much about applying this across the world, not just to the rich West or China but to the global south. We talk about that a lot in the report: helping to raise the quality of people's lives around the world.

Right. I'm going to quick-fire through as many of these as we can. Are you ready? Quite a few people are concerned about the impact on the climate, something I've reported on as well. Is AI likely to lead to a climate crisis, someone asks, due to its significant use of water and electricity?

Well, I don't know so much about the water piece...

The data centres, I think, isn't it?

Yes, it's the data centres, but it's definitely the electricity, and how that electricity is generated. Ever the optimist, though, I'd point to the research going on into how you could actually turn all that around and use the heat generated by data centres to supply your local area.

I should say, by the way, if you want to put your questions into Slido, please do it now.

So yes, of course we have to be cognisant of that, but I do think we could make it a virtuous circle. To use the AI we're going to need the data centres, and they generate a lot of heat, so let's use that heat to supply the local or regional area. There's work going on around this in India, I know, and I was also so impressed when I went to Iceland, where they pay next to nothing for electricity because so much of it comes from geothermal energy.

They do have volcanoes erupting all over the place, but I think we can be optimistic about turning that whole debate round and using the data centres to our advantage.

Okay, next question. How can we ensure AI develops in a way that enhances human decision-making rather than replacing it?

Well, I think I've talked about that a bit. I think of it as augmentation, definitely. As I said, you have the AI in the team, and when you need an answer, you ask the AI. We will of course anthropomorphise them, give them names and personalities; the sci-fi films always suggest that once you give them personality, you don't want to kill them off. The film I saw recently was Companion, which was a really interesting take on AI companions: robots with very advanced AI in them that looked like human beings. The ending is quite scary, as these sci-fi films often are. But I genuinely think we will come to see them as part of the team.

So you'll ask a question, just as people use ChatGPT or Gemini today to find things out. And they will help us do things; there's nothing wrong with that, and we're encouraging our students here at Southampton to use AI to help them write their essays.

Oh, you are? That's good, because the initial gut reaction was 'we're going to find a tool that will identify your AI-written work.'

That was the initial gut reaction here too. But under the leadership of Kate Borthwick at Southampton, we're now quite advanced in thinking about how to advise staff and students on using it.

The problem, of course, is an economic one: we can't afford to pay for everyone to use these tools, so you've got inequalities, just as when computers emerged and universities couldn't pay for every student to have one. But inequalities aside, students have got to learn to check the work, because the AI is not foolproof. It doesn't always come up with the right answer; it makes things up. The same applies to staff doing the assessment: if you're reading an essay that a student has asked ChatGPT to help with, you've got to read it just as carefully as if it had been written by a human who could have got things wrong.

I think you can spot it as well, can't you, because it's too verbose?

Potentially. And we will develop AI assessment tools as well.

That plays into a question from Susan Sharkey, who asks: can you envisage a time when AI is routinely used in education and becomes a teacher's friend by taking on tasks like marking?

Yes, absolutely. Bring it on. But we have to be very careful about how we do it; I don't want to be flippant about this, and you can't just say 'do whatever you like with it.' Again, though, I remember when computers came out. Marking essays is hard, and marking computer programs is also very difficult when you've got hundreds of students in a class. There are a lot of tools out there now to help you check code, and I'm sure we will develop tools that help with assessment too. I'm being very positive here, but there's no point telling people not to use it; it's too profound a breakthrough not to use.

Another quick question here about education. If AI is going to change our lives, how quickly should we be teaching AI in all university curricula to prepare students?

So we are beginning to do this. In fact, our competitors down the road at Solent, if there's anyone from Solent here, have already launched a module to give any student an introduction to AI. We here at Southampton have plans to do something similar, because I'm directing something we call AI at Southampton, which looks at the AI research going on right across the university, not just in computer science, and at teaching about AI, as distinct from using AI in teaching, across the university.

So we have AI at Southampton, and we're developing something for any student to be able to take. And my favourite thing is this: we have very good teaching about AI in computer science, including master's programmes in AI and machine learning, but we're also just launching an MA in AI at Southampton, run through the social sciences. It's for people who haven't got a technical or mathematical background. They're not going to become machine learning programmers, but they might have done a degree in law or history or politics. What did you do, Zoe?

English literature.

Well, this sounds right up your street: English literature, anything really. We did the same thing with the web. We taught people how to work in the web world, whatever discipline they came from, to understand it and its ecosystem, and we're now applying that experience to AI. This MA is going to be online, so anyone, from any discipline, can do it. I'm very excited about that, I have to say.

It starts in September.

Good news. Which field, asks Nofal and Nam, I hope I've got that right, is going to be the one where AI will absolutely fail? Who beats AI?

What, philosophy? I don't really know how to answer that one. Who beats AI, in terms of fields or disciplines? Why don't you answer that one, Zoe?

I don't think it's a question of who beats whom. If we think of it like that, we're on a bad path. I hear the view that human values will become even more valuable than they are now, that people will really appreciate human personality and interpretation more than they do today.

Yes. And we have people here thinking about exactly these things. A postdoc I work closely with is looking at how we actually imbue, if that's the right word, AI systems with moral values. Can we do that? It's very Asimov again. And it's already becoming quite hard to distinguish between the two, which is the Turing test thing again.

You know, can you tell the difference between a machine and a human being? My view of the Turing test was always that Turing overestimated the intelligence of human beings, because it's actually quite easy to fool people.

I think in the age of disinformation, we see that writ large.

It's very easy to fool people, even people who think a lot about things. The scammers get through if you're busy and someone tells you something that sounds plausible. And we get things wrong ourselves, right? So if you're running a Turing test and the human gets it wrong while the machine gets it right, what does that tell you? It's not as simple as Turing made it sound.

I want to summarise a few questions that have come in along the same theme, which is concern that AI will affect our own abilities. The more AI does for us, will it affect our ability to carry out mathematics, to write code, to think? You and I were talking earlier about how social media rewires your brain. Is there a concern that AI will change the way we use our own brains?

It will, just as calculators and computers have with maths. I don't know if kids in school still learn long division, fractions and decimals, but I assume they do at some point; they just don't have to do it on a daily basis. And I think the ability to help people craft and design an essay, a piece of writing... I have no idea whether novelists will use it, though I suspect they will. One thing it will certainly change is what we do when we've got something big to read: we summarise it.

So the work of writing reports, analysing them and presenting them to the board, or whoever needs to know: AI is going to transform that type of work, and I think it will be an advantage if we use it well, not just to save money because you might need fewer people doing summaries, but so that we can do other things. So, what was the question? Will it change the way we use our own brains? Well, I've heard both sides of the debate about whether it will make us lazy. But I also genuinely believe we are never going to get perfect AI. I've seen papers that argue, theoretically, that generative AI has to hallucinate because of the way it's designed and trained: it works from existing knowledge, what it has seen on the Internet, and when it hasn't got the answer, it makes things up. And it doesn't know whether what it's making up is right or wrong.

It doesn't have an understanding of right or wrong, truth or untruth. These systems just don't, at the moment. And we are flawed too, in the way we think about things and the way we come up with answers. Some people make things up because they don't like to admit they don't know something, or they simply tell you something incorrect. So how we think we can create machines that do this better than us, I don't know.

That's an interesting counter-argument, isn't it?

Sometimes people say to me about the use of AI in law, 'oh my god, what if it makes a mistake and sends somebody to prison who's innocent?' And I say, well, it's not as if that has never happened before, is it?

Exactly. Again, I like to think of it as teamwork: augmenting what we do to improve decision-making and help us be more creative. So I'm sorry, I'm very positive about it, but there are huge...

That's important.

Why am I saying sorry? There are big caveats; it doesn't mean it's all safe. We have to work on the safety aspects, for sure.

Dave Johnston asks: can AI understand altruism?

Well, can it understand anything? The AI we have at the moment doesn't understand anything. It's generative AI, and generative AI at the moment doesn't reason. We're now moving into the world of reasoning, into what they call agentic computing.

Southampton, by the way, has one of the top multi-agent systems research groups in the world, and not a lot of people know that. We've been doing multi-agent systems for twenty-odd years. One of the founders of the field, Nick Jennings, was a professor here, the first Regius Professor actually, and he's now Vice-Chancellor of Loughborough. The idea now is to use multi-agent systems: the agents do a bit of the reasoning and then talk to the large language models to get the information. But what do you do about the fact that the information might be wrong?

This world gets very complicated very quickly, and if it's a life-or-death decision... I wouldn't want to get in a plane that had any of these systems anywhere near the decision-making, for example.

I think that's a really powerful point, isn't it? It feeds into a question from somebody called simply S, who asks: what if it's used to make decisions about medical treatment?

Well, it already is. But again, I keep using this word 'team', because I don't think that, at the moment, we should ever allow an AI to make a decision over treatment. Down the line, they might do triage.

You could argue that they could do the triage. But if it's actually diagnosing what's wrong, prescribing and operating... look at how surgeons are using robotics at the moment: there are still human hands on those robots.

And there will be for a while, you think?

Yes. Let's keep some human hands on this stuff.

We're going through this quite fast, I'm afraid. This is exhausting for you. Aoife Whitford says she'd like to hear more about your take on agentic AI and whether it's going to roll out.

Did they just ask that, or is it coming after I mentioned it?

It came in in the last few minutes.

Well, as I said, Southampton is one of the top places in the world for agents.

Do you think agentic AI is coming?

Yes, definitely.

And just briefly describe what it is, for people who don't know.

An agent is just what you think of as a human agent: you ask it to book your travel and it books your flight for you, and you can move that idea into the digital world. Then agents can talk to other agents. I could book a flight through one, then ask about a hotel, and it talks to a hotel agent that has more information about that. That's what the agent world is.

Agents have always been part of artificial intelligence. They have capabilities to reason about the topic they are programmed for, like flights or hotels or whatever. The new idea is that they will do the reasoning and talk to the LLMs, the large language models, to get the information they need to help with that reasoning. But there's the problem that the LLMs won't always give them the right answer, and who's checking that? It all gets more complicated once you bring agents in. But I do think that is the way we're going, and the companies are doing a lot more work on reasoning and on LLMs generally.
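The loop Dame Wendy describes here, an agent doing the task-specific reasoning, asking an LLM for facts, and then checking the reply before acting ("who's checking that?"), can be sketched in a few lines of Python. This is purely illustrative: the names are invented, and the LLM is a stub rather than any real model or API.

```python
from typing import Optional


def stub_llm(prompt: str) -> str:
    """Stand-in for a large language model: returns canned, possibly wrong text."""
    canned = {
        "flights London->Paris": "BA304 departs 09:15, fare 120",
        "hotels in Paris": "Hotel Imaginaire, 95 per night",
    }
    return canned.get(prompt, "I don't know")


class BookingAgent:
    """A hypothetical travel agent that reasons about one topic (flights)."""

    def __init__(self, llm):
        self.llm = llm  # the model is injected, so it can be swapped or mocked

    def find_flight(self, route: str) -> Optional[str]:
        answer = self.llm(f"flights {route}")
        # Verification step: never act on the LLM's reply blindly.
        if "departs" not in answer:
            return None  # escalate to a human instead of trusting bad data
        return answer


agent = BookingAgent(stub_llm)
print(agent.find_flight("London->Paris"))    # → BA304 departs 09:15, fare 120
print(agent.find_flight("London->Atlantis")) # → None (reply failed the check)
```

The key design point is the check between the LLM's answer and the agent's action: the agent, not the language model, decides whether the information is usable, which is exactly the gap Dame Wendy flags.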

But this is about decision-making. It isn't about sentience; these systems have no understanding of what they're doing, at the moment.

A question here that's very popular, actually: which set of AI safety standards, frameworks or ethical principles do you think is currently the most suitable?

My gosh, there must be about eleven thousand, right? Well, there are a lot of them, and there isn't one answer to that. The Europeans would love us to say their act, the EU AI Act, but the problem with that is that it's very heavy-handed. Nobody had a good word to say about it in Paris, did they?

No. Well, except the French, probably, and the European Commission would say they like it.

People argue, and I would too, that it stifles innovation, because you're so worried about the rules and regulations that you can't do anything. And of course the Americans are not going to adopt it. Trump has said, and I think this is going to be the case, that they're just not going to play in Europe because they won't be bound by its rules and regulations. Now the UK, because of Brexit, has the ability to sidestep that.

So there is a position for us here?

Yes, and I think people look to us as a soft power in this area. We have a lot of skill in regulation and governance. It went a bit under the radar, but the white paper the previous government released on an innovative approach to regulation was actually quite good. The approach uses our existing agencies, Ofcom and the other regulators that deal with this type of thing, as they are. And the new idea this government is talking about is coordinating that: if you feel you've been discriminated against by AI, you go to the Equalities Commission, just as you would in any other situation.

But you need a coordinating body so that they're joined up in the way they work, and I think that's a good model, actually. So I back the UK on this.

It does feel as if everyone said, 'please regulate the AI sector, we all want to be regulated, we don't want to go down the same road as social media.' And then the EU did it, and everyone went, 'not like that, not like that.'

Well, I'll tell you who does it best: China.

China. The massive player we haven't talked about. The very clever title of this conversation is 'friend or foe': is China a friend or foe when it comes to AI?

Very early on with the Internet, in 1997 when the web came to China, they immediately saw that it was a way for people to communicate and use machines to get information, yes, but also a way to create revolution, because people could talk about things. And the big thing in China is that you have always got to stop the revolution, because you don't have elections. So they brought in, basically, privacy laws and all sorts of things businesses can't do, but the key thing is that the Chinese government can look at anything. That's the difference between them and us. I always used to say that if you don't mind losing freedom of speech, China's a good place to be in this Internet world; actually, we're all beginning to realise we're losing freedom of speech too. But in terms of the AI world, China is just as worried about the risks of AI as we are.

They're playing their part; they're active in the UN bodies, and they've brought in a number of laws that really make life safer in China than we have. I'm just writing a paper on it.

There's a sentence I don't think I hear very often. Thank you. We're going to have to leave it there, Wendy, I'm afraid. I have loved this conversation; I could talk to you all evening. Thank you so much for your time tonight, and thank you to everyone who's been watching.

We asked the question: AI, friend or foe? And what I've taken from you is not only a sense of optimism, but a sense that AI is actually a bit of both, and that we should consider it a companion, a colleague, something that sits alongside us, not something that either can or will replace us. That's my summary of our conversation. Thank you, everyone. Thank you so much for joining us today.

On behalf of the University of Southampton, thank you, Zoe.

Oh, thank you very much.

Before you go, do please complete our short poll; it wouldn't be a webinar without a poll at the end, would it? It should be coming up on your screen any second now. It takes a couple of minutes and really helps us to put on events like this in the future. If you want to watch the talk again, or recommend it to people who couldn't make it tonight, and you absolutely should, we'll be sharing the footage in the next couple of days, so please keep an eye on your inbox for details of how to access it. You'll also find details of how to join the mailing list in that email, so the university can keep you updated on future events. But for now, that's it from me and from Professor Dame Wendy Hall. Thank you so much for joining us. We can relax now; this can become a gin and tonic.

Interested in building your own understanding of AI and applying it in your field?

Find out more about Southampton Online’s MA Artificial Intelligence courses, including specialist pathways in Criminal Justice Systems and Digital Transformation. These fully online programmes are designed to help learners from all backgrounds shape the future of AI:

Explore Artificial Intelligence courses