
Is AI over-hyped?
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/666
We explore whether AI is overhyped or dangerously underhyped, examining the disconnect between those creating AI technology and those selling it without adequately addressing trustworthy AI concerns.
TAKEAWAYS
• The Microsoft AI Tour event demonstrated excellent technical content with a strong focus on trustworthy AI
• There's a dangerous disconnect between people who make AI technology and those who sell it regarding responsible AI implementation
• Trustworthy AI doesn't mean stopping innovation but preventing potential calamities
• The scale of AI's impact may be drastically underestimated, similar to our inability to truly comprehend "65 million years since dinosaurs"
• AI enables processing information at unprecedented scale, creating extraordinary risks in surveillance and human rights contexts
• Corporate discussions about completely replacing customer service departments with AI raise serious socioeconomic concerns
• Shadow AI applications being developed without proper governance represent significant risks
• Containing AI's risks while harnessing its benefits requires education, curiosity, and political wisdom
Get educated and don't rely on echo chambers or news articles - read in-depth material from experts to form your own opinions about AI's trajectory and implications.
BOOK RECOMMENDATIONS:
👉The Coming Wave by Mustafa Suleyman
👉Origin by Dan Brown
This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.
DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.
Early bird tickets are on sale now, and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff.
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff
Accelerate your Microsoft career with the 90 Day Mentoring Challenge
We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.
Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!
If you want to get in touch with me, you can message me here on Linkedin.
Thanks for listening 🚀 - Mark Smith
00:27 - Welcome to the Ecosystem Show
00:59 - 2025: A Year of Acceleration
02:43 - Microsoft's AI Tour Experience
03:27 - The Dangerous Disconnect in AI
05:43 - Trustworthy AI vs Innovation
09:45 - Is AI Underhyped?
13:13 - The Human Cost of AI Adoption
18:23 - The Endgame of AI Development
23:30 - Book Recommendations and AI Containment
29:28 - Shadow AI and Ungoverned Tools
31:00 - AI's Implications for Global Security
32:59 - Getting Educated on AI Risks
Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Welcome back, everybody. It's been a while since we recorded an episode. Life's been busy, people have been dying, and life has got interesting for all of us. I just find 2025 an incredibly exciting time. My wife and I were reflecting overnight that we almost feel like we've already done a year's worth of work with the speed at which this year has accelerated away. How's everyone going, actually, before we crank in and I all of a sudden get very on-topic focused? How's everyone doing?
Andrew Welch : I mean, Ana and I live in Valencia now, the city of 300 days of sunshine. Seven of the rainy days occurred last week, so that was kind of terrible, but it's beautiful again here now. So yeah, we're happy.
Chris Huntingford : Yeah, all good man. It's been a learning experience over the past few weeks, but it's been good, man. I hit up the AI Tour stuff in London. That was really cool, and many things were learned.
Andrew Welch : What was the AI Tour? I don't think we've chatted about it.
Chris Huntingford : With a wait list of thousands.
Mark Smith: Wow.
Chris Huntingford : So, yeah, and I will tell you, they pulled out all the stops, man. I'll send you the GitHub repository. What was presented that day was nothing short of excellent. They made a point of talking about trustworthy AI, a huge point, which I loved. Johnson was on stage and he did a great job. The CEO in the UK, Darren, he was on stage, same thing, deeply technical. So the UK now have a deeply technical CEO, and the guy's demoing agents to partners. I say this without any sort of prejudice: I was really impressed with what they did. It was actually good. And Microsoft showed up in force, like force. They were everywhere, talking to people. There's an entry door, and I stood at that door for three and a half hours. So many people there. It was absolutely excellent.
Andrew Welch : I hope that all countries get that exact same experience. One of the phenomena that I have witnessed, and I'm not calling out any specific firm here because I think this is endemic across technology vendors, system integrators, partners, everybody, and I think this will tee Mark up really nicely for the show's topic today, is that there's an increasingly dangerous disconnect between the people who make AI technology and the people who sell AI technology around the topic of trustworthy AI.
Andrew Welch : I think it's because the people who make the tech, the people who are deep in the weeds of the technology like the four of us, we see where this is going, and I think we see the danger of what we're playing with in a lot of cases. But those who sell the technology, what they see is that they need to get the technology into people's hands, and they don't want trustworthy AI and responsible AI to become an obstacle to doing the deal. I think that is endemic across every technology company that I see. So everyone should feel indicted by this, but no one should feel singled out by this statement of mine.
Mark Smith: Yes, thank you, Donald Trump. Because he's the same thing, right? Let's forget about trustworthy, let's just innovate, innovate, innovate. Let's get going. Ana?
Ana Welch : And we are innovating. We are innovating constantly. Just the fact that you take note of what you're doing, and you are aware of what data sources you are using and how you should be testing and measuring, and making sure what risk you're exposing yourself to, that doesn't mean that you're not innovating. It just means that you are aware of what you're actually doing.
Ana Welch : I was talking to a customer today and he was like, oh, I've been experimenting with agents and I've created this agent to do XYZ. And I was like, okay, that's really great. How did you test it? He was like, oh, yeah, it turns out it gives bad answers a lot of the time. I'm like, okay, but you need to test it. And he's like, no, no, no, because I turned off the generative AI feature. And I'm like, then how are you using it? So you see, out of the fear of doing something wrong, people turn off the very features that would help them create something very useful.
Mark Smith: Trustworthy doesn't mean stopping innovation, but it does mean preventing calamities, potentially. It's doing it under a very well thought out architectural way of thinking about things that doesn't just barrel down an innovation path without balancing it with human rights and safety. It's not that trustworthy AI is opposed to AI; it's absolutely about innovating in a way that ultimately protects humanity and the future of the world. And I know that sounds like, wow, that's over the top, and even as it comes out of my mouth I know it sounds like that. But I posed, just before we got on the air, an interesting question, and that is: is AI underhyped? Because there are a lot of people out there who would say, oh, this is just a hype cycle that we're in, it's overhyped, it's nuts.
Mark Smith: And now I'm seeing recurring evidence that it's potentially underhyped, that we are not actually realizing that what we think is a wave of technological transformation is actually the start of a massive tsunami. We think we're on a wave. If you've come from the Power Platform, you know, two wave cycles a year. No, I think we're on a tsunami, and if people to some degree are not aware of it, like all tsunamis, there's damage.
Andrew Welch : I think that part of this comes from the human, and this is a trait that is in some way baked into all of us. We struggle to comprehend things that are so huge. We can read that the dinosaurs went extinct 65 million years ago, but we lack the ability to really understand how long 65 million years is.
Andrew Welch : Ana's and my daughter, Alexandra, is obsessed with Bluey, the Australian cartoon, and we were watching an episode with her the other day in which the mom asks Bingo, Bluey's little sister, or, as our daughter thinks her name is, Fingo with an F: you don't know how much a minute is, do you? And Bingo has no idea how long a minute is. What I'm getting at here is that, from the perspective of the individual human, the individual user, or even one organization, we look at what AI can do and we say, okay, big deal, maybe it saves me a few minutes a day helping me to process my emails or whatever. And that, I think, is truly the experience that many, many people have with AI today.
Andrew Welch : But what they don't get is how that experience that you are having in the micro can grow into a macro phenomenon. A good example of this is a totalitarian state, or a nation, a place that restricts human rights, that surveils you. I'm going to use a word I don't use on the podcast very often, I don't think since the very first episode, and Chris loves this: I don't think that we can overstate the danger of this ability of AI to process information at a scale that human beings never could.
Mark Smith: Sorting through data in nefarious ways, and I think that this is extraordinarily dangerous. So, just to back up, here's what blows my mind: there's so much in what you just said there. The reference to the dinosaurs is critically important as to what's possible. At the moment we're at an inflection point, or a coming inflection point, of synthetic biology, robotics and AI, and how they will interweave with each other and what is possible. So right now, you've heard of CRISPR, gene splicing, right? Mustafa Suleyman, who is the CEO of Microsoft AI and the co-founder of DeepMind, was at a prestigious university, the inference was that it was in the UK, where a biology professor showed the advancements in gene splicing, and that the technology has now come down to a piece of lab equipment that costs around 10,000 quid.
Andrew Welch : That's bucks for you Americans in the audience.
Mark Smith: So that device can sit in somebody's garage. They can go online, without even having a biology degree, and find enough through all the data online about gene splicing. And he said, for the first time in history, one person could synthesize a pathogen that would wipe out a billion people. And he goes, the problem is, when I say that to you, your mind goes, nah, you're a nutter, you're crazy, don't be silly. $10,000 for the piece of equipment that can now synthetically generate a pathogen like we've never seen, with online research, in an individual's hand, that could wipe out a billion people.
Mark Smith: And this dude, and this is why I think it's important, he's the CEO of Microsoft AI. This is no schmuck. It blew my mind, and this is why I think that AI is potentially underhyped, because he goes on: that's an example of an individual actor and what is possible. He said, now you go to a nation state, and the state of the world at the moment, and what happens when nation states decide, you know what, we're going to take advantage of this, to our benefit, against another nation state? And I'm a very positive, non-fearful type of guy.
Mark Smith: I think Chris has gone too fearful lately in his cautiousness about AI. But, and by the way, it's not going to stop me still going at 100 miles an hour in my education and learning on the subject, I think some of the stuff we're finding in our work now around trustworthy AI, and recently really getting into the human rights element of what we're doing, makes it critically important that people get educated and not go la, la, la, you know, hear no, see no, speak no evil. Because a tsunami doesn't care what you think.
Andrew Welch : Well, right, and just to put a little pin in Chris. Remember, Chris Huntingford is the only one, I think, on this show who has really come face-to-face with the Armageddon that would occur if AI reprogrammed the world's tools and turned them against us. One of the funniest rants Chris has ever gone on.
Chris Huntingford : Okay, so I'm coming at it from talking directly to big customer organizations. That's where I'm coming at it from, not from any other place, and from working with people who don't care. Not in my company, but other folks in the community have said to me, oh, so this thing isn't real, we don't really give a damn, whatever. And my mind is blown by it. Somebody said to me the other day, this EU AI Act thing is just going to blow over; where are the people that have been in trouble from it? I'm like, my guy. It just bothers me, and the more people do that, the more aggro I get about it, because I care and I feel a responsibility, and that's really important, right?
Chris Huntingford : And Mark, on your comment that this is underhyped: it is grossly underhyped. In a customer meeting today I asked, do you all have a relatively strange fear about AI? And they said, yeah, some people do. I'm like, that's a very good thing, because it shows that you're respecting it a little bit. It's not something to be trifled with.
Chris Huntingford : And my worry is the propensity, again, I use my phrase, turning walking sticks into weapons. The propensity for something to go wrong is not like the propensity for something to go wrong with regular software; this could go very badly wrong. The other thing that I really struggle with is the socio-economic impact on people. The more something affects people, that's when I really, really get passionate about it, and I can see people being affected by this. Do you know there are discussions in big companies about wiping out entire workforces of people with this stuff? Literally saying, we will fire our customer service departments. I can tell you, because these conversations have been had in front of me. And I'm like, what happened to protecting people? So it's scary.
Mark Smith: That's what happened: profits.
Ana Welch : Exactly, because I'm also seeing examples where people are now generating content or doing their research in maybe an hour or two, stuff that still takes their colleagues the whole work week. And these individuals who are able to use AI to do their job in this way, on the one hand they're like, well, I'm going to relax now for a bit, and on the other hand they're seeing how their job is going away. But I guess what everybody's missing here is the fact that that particular job is potentially going away, but there are things that we don't know yet that are still there for all of us, you know, new things, new innovation.
Chris Huntingford : That scares me, though. So I was with Andrew Brownlee this evening, just had a couple of chats, and I love that dude because he's a recruiter. He said to me, man, what am I going to do in three years? Homie, you're lucky because you're in the tech space; you can get skilled up. But he said something to me that stuck. He's like, what do you think the end game is here? I'm like, oh no, I don't know how to answer that question. Because the end game with data is, get your data. The end game with security is, secure your landscape. The end game with AI is, I don't know, pumpkins. It could be anything.
Ana Welch : We don't know. No, I think the end game with AI is like the end game with everything. Why do people even do all of this? They do it because they want to be happy, because they want to live longer, because they want to be healthier. I don't know. That's the point. That's the end game, isn't it? For me, that's the ultimate end game.
Mark Smith: I don't know, achieving immortality. And it sounds silly, but...
Andrew Welch : Mark wants to live forever. He wants to have a 300-hour workweek with the help of AI.
Andrew Welch : First of all, I would add Ana's answer to your list of aspirations.
Andrew Welch : Right, that the endgame for many, and I don't actually like the word endgame, it's the why. For many it is curiosity. It is President Kennedy's moon speech at Rice University in 1962, in which he says, we choose to go to the moon not because it is easy but because it is hard, because that goal will serve to organize and measure the best of our energies and skills. I think that's a motivation. To some extent it is the same thing that led humans to board rickety wooden ships and cross the ocean, or to strap themselves to a firecracker and shoot themselves into space. I do think there's an exploration and a curiosity element here, but I wouldn't call it the end game. To me there is no end game. We're not pursuing an end game. This is just another thread in the fabric that continues to be pulled until there is no more.
Mark Smith: I guess the end, but it's not a game, right?
Andrew Welch : Like there is no end game.
Mark Smith: So perhaps it's a theater. All the world's a stage, maybe not a game.
Ana Welch : I also think that you could look at it from a positive perspective as well. So let's just say, because we believe in the EU AI Act, and we see many organizations following it, and we see nations having their own version of the EU AI Act. Not all of the nations; some have just stumbled upon truly autonomous agents with absolutely no testing, and there's craziness out there as well. I don't know how that's going to pan out, but let's just take the positive scenarios and think about us again. People are into, and we've been talking about, vitamins and biohacking and stuff like that. Imagine being able to have AI just plan that, and evolve a plan with you so that it always fits the purpose for you, to make you healthier, more energetic, to give you better sleep, to just make you happier in general.
Ana Welch : My sister-in-law is a doctor, an ICU doctor and an anesthesiologist. She was saying that out of curiosity she went and asked ChatGPT some very specific questions. She swears a lot of that detail has been lost on her nurses, for example. Imagine all of that information being refreshed in your mind constantly. Imagine the rate of error that's being avoided because of that. This could be really, really good as well.
Mark Smith: I totally agree. If people want some reading material, check out a book written in 2013 by Dan Brown, the same dude that wrote The Da Vinci Code. He wrote a book called Inferno. I never read novels, I only read Dan Brown novels; otherwise it's always business-type books that I read.
Mark Smith: That book shows how a virus could spread, and in this case you're thinking it's a virus to wipe out the world. Sorry, plot spoiler, but the virus that was released basically makes 50% of the world infertile, to control population growth. And the thing is, there are different ways that people can think about this, and of course the conclusion of the book is that the virus is already released, and the technique used was very cunning. Then he released a book in 2017 called Origin, and that book, man, where we are in AI now, that book is all about AI. It was released in 2017 and it will melt your face. He's a brilliant author in that he brings reality, but he ties it in with enough fiction. And it's all set in Spain, guys. Valencia? Oh no.
Mark Smith: Is it set in Valencia? No, it's actually set just north of you. Well, it's all over Spain, because it runs with the King of Spain. But why I loved it: what's the main city in Catalonia, the city above you? I went there for my honeymoon. Barcelona, right, it's set there. And the crazy thing is, because I went there for my honeymoon, I went to the location of the supercomputer in the book. This book will melt your face. When I keep thinking back to this book that I read in 2017, I'm like, how much the world has become the reality of that fiction. It blows my mind, and it'll just open you up to the potential of what we're in at the moment.
Andrew Welch : So, at the risk of having introduced the topic of is AI overhyped or underhyped, and then getting to the end of the show having just rolled off on various tangents, I want to bring us back to this question. It is true, I do have a lot of conversations with folks who, I think, believe that AI is overhyped, and increasingly my LinkedIn feed is polluted with people, you know who you are, who just pooh-pooh every advancement and capability, and think that this is a bubble. And it may be a bubble for other reasons. It may be a bubble in that much cheaper LLMs are just as effective running on hardware that is not made by NVIDIA; there may be some big names that fall, and by the way, I don't know, we'll see, time will tell. But what I don't think is a bubble is AI as a broad technology category. I don't believe that it is overhyped. I think that, if anything, it is underhyped, and it is underhyped in some really fundamentally dangerous ways.
Mark Smith: Yeah, and remember the word containment. What blows my mind about this book, and by the way, the book from Suleyman is called The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma. And for me, knowing Microsoft, I was like, I'm surprised they let this guy publish this, because it is such a stabilizing, grounding book. His first concern is, can we contain it? He thinks that maybe not, but we need to try everything we can. That's the risk profile. And I tell you, I'm only one and a half chapters in and it's blowing my mind. Once again, I'm not talking about somebody that's just theorizing. This is one of the founders; DeepMind was considered for a very long time the leading company in AI, hence why Google purchased it. This guy is, as I say, no schmuck. He knows his stuff, and he has seen things probably way ahead of what we're seeing.
Chris Huntingford : Dude, I a thousand percent agree. Just listening to the way people talk within the community and in some of the places I've been to, it's crazy how people think that things just come and go. And the containment thing scares me as well, because this is going to be like spell check on steroids; it'll just land up everywhere. It's going to be beach sand, it's just going to be in everything.
Chris Huntingford : I think Microsoft do a great job of doing things in a trustworthy manner. I've looked at the tools; I'm working in Azure, I'm working with the incremental AI pieces in Copilot, I'm working through the Power Platform, and I do think they do a wonderful job. I'll give you an example: the jailbreaks I could do in Copilot two months ago, I can't do now. The exact same prompts, I can't do. And I know how to lead an LLM; I know exactly what to do. I'm starting to see, even in the content management side of things inside Azure, that the things they're doing in there are excellent. But that's Microsoft, right? I believe they've got a very strong message around trustworthy computing from people like Pablo Cortez. But that's fine.
Chris Huntingford : But what about these other companies that are building tools that are ungoverned, like Grok and shit like that? I'm not concerned about the likes of Microsoft and Google. I'm concerned about the things that happen outside of that, like the shadow AI that is appearing all over the place without that governance. I think when we're talking about things like responsible AI, trustworthy computing, trustworthy AI, people think we're referring to the big vendors, which we are to an extent, but it's wider than that, way, way wider. And that's the part that terrifies me, because I'm doing a lot of research into it and I'm seeing a lot of it, and it's bonkers, man, absolutely bonkers. We are, to an extent, in third-order ignorance, where we don't know what we don't know. We're in the world of looking at the big vendors and thinking that is what AI is, but actually that's just a portion of it.
Mark Smith: Yeah, I do want to balance this. I'm not a naysayer at all for AI, and I'm definitely not a Luddite. And there are a lot of people in that Luddite category, which is risky as well, right? Because we don't want people coming out with pitchforks and whatnot and creating disruption globally. Who knows?
Mark Smith: But I'm excited about the concept, or, dare I say, the theory at the moment, of what an agent could be. I'm finding more and more use cases for agents in my life. I want a health agent that's just monitoring my vitals and tweaking, tweaking, tweaking. There are so many use cases for agents; I just don't think they're there yet. But boy, am I excited about the possibility. But I'm saying this is a scales thing, right? We've got to balance the forward momentum, the innovation, with the moment where we go, shit, did we just create something that's going to be damaging to us all?
Chris Huntingford : You're right, man. But the other thing I just want to bring up real quick is that there's a big difference between an agent and sparkling automation. We're in the sparkling automation era; all we're doing is attaching pieces of AI into layers of automation. And I think, 100%, we don't truly understand the use cases for agentified AI ourselves. I don't think we truly understand, in our brains, what this is going to do.
Andrew Welch : To me, when I think about the things that concern me the most in AI, where I'm putting my brain power right now is not an agent or what an agent can do. I'm thinking about what, as I said at the very beginning of the show, AI's capacity for, say, surveillance of populations can do.
Andrew Welch : Something that hasn't come up on this episode that maybe we should talk about is the implications of AI for global security or national security.
Andrew Welch : AI's ability to process satellite imagery, or to outperform a nation's rival in a cyber attack situation. There are lots of stories here, but the story that really keeps me up at night is AI's ability to process and then act on information at a scale as hard to comprehend as the 65 million years since the dinosaurs died. And just as we don't really understand the use cases and what agents will be able to do in a pretty short amount of time, I think we are very, very close to a very dangerous situation in terms of law enforcement, in terms of human rights, in terms of global security. Not to end us on a down note; I have lots of optimism about this technology, but I do think we need to be sober in the way that we think about it, and in our tragic imagination of it.
Ana Welch : what should we do? Like to wrap this up, what should we do to, to, to contain this, to make it better, to, like you're saying, to understand what the risks are? Is there anything that we can do or get?
Mark Smith: Get educated broadly, right? I think that now more than ever, individuals need to go into a repeatable learning cycle. I really am taking this mindset of "I know nothing, teach me everything" about whatever I'm focusing on, and I'm really in this curiosity mode. Get curious. If you're not curious, work out what will make you curious. And don't get into echo chambers of "oh, my friend said this" or "I saw this in a news article." That's not education; that's just being in the news cycle. Go read people who actually know these topics. Read a proper book. Don't just read a post summarizing a consolidated concept. Go a bit deeper, really own your own thought process around this; don't just borrow somebody else's.
Ana Welch : Yeah, totally agree with you. So get educated with as much as you can. Don't only believe the bad news. I know the human psyche is more drawn to catastrophes and things like that. Let's look into the detail, and then tell us your opinion. Drop us a message. We're very interested in feedback. Have a good night.
Andrew Welch : I have one final one Don't vote for lunatics.
Ana Welch : And don't vote for lunatics. Very important. Do not vote for crazies. Ciao everyone, bye everyone, ciao, ciao.
Mark Smith: Bye. Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.

Chris Huntingford
Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.

William Dorrington
William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.

Andrew Welch
Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.

Ana Welch
Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million-dollar wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the "Ecosystems" podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London, where she led teams driving technical transformation and navigating regulatory challenges across the affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.