Autonomous Agents: Marketing Hype vs. Technical Reality

Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM 

FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/676  

Andrew Welch, Chris Huntingford, and William Dorrington explore the gap between marketed AI capabilities and technical reality, highlighting how "autonomous agents" often lack true orchestration and memory needed for genuine autonomy.

TAKEAWAYS
• Home gym innovation with resistance training that uses AI to adjust to your goals and fatigue levels
• Analysis of the US AI policy focusing on "US-made AI" compared to more detailed EU and UK frameworks
• The increasing fracturing of the global technology landscape with nations prioritizing domestic innovation
• Calling out marketing hype around "autonomous agents" that are merely deterministic automation with fancy names
• The critical need for memory capabilities in AI to enable true contextual understanding
• How delegation skills will become essential as we move toward more capable AI systems
• Creative workflows using Copilot for structuring ideas while preserving unique voice

This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.

Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff 
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge 

We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.

Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:27 - Welcome to the Ecosystem Show

01:05 - Home Gym Innovation and Speedience

04:15 - US AI Policy and National Interests

11:04 - AI Advancements in China vs US

18:16 - Calling Bullshit on Autonomous Agents

28:27 - The Future of AI Delegation

33:33 - Building Better with Copilot

Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Welcome back, everybody. I'm excited to be on the show. It's like sparrow my time, meaning it's very early in the morning with all the time zone changes, but I'm excited to see Mr Dorrington is in the house, one of my favorite, favorite, favorite favorite people in the world and Will. I am keen to hear the update of my new gym equipment that I'll be buying later in the year to follow in the steps of your new gym equipment. What are you up to?


William Dorrington:
 So on that alone, yeah, I've been using Speedience and, honestly, it's phenomenal. It's the first time I've seen a bit of innovation in a home gym for a long time. We'll be sure to drop the link to the YouTube video there. We don't get endorsed by them, I should add, but feel free for them to reach out to us.


Mark Smith:
 But what are you finding?


William Dorrington:
 The resistance training works well. Like, you know, it's phenomenal, it really is, and it just feels like weights. I mean, Chris came over and did the neighborly thing and helped me lift it up, and had a go on it, and it was fantastic. It's really good. Chris, do you have any thoughts on that, with you being the gym king that you are?


Chris Huntingford:
 Mate, honestly, I thought it was going to be sketch. I always felt like resistance training is a bit lame because it's kind of cheating, like, I pick up heavy things and put them down again. But after doing this I was like, actually, this is very freaking cool. I like it. And the thing that I love about it is that you can set it to auto-adjust as you go. So you know, when you're busting out that 100 kilogram bench press that Andrew does regularly, and you realize you can't get the last rep in... I don't even know.


Andrew Welch :
 I don't really even have a firm grasp of how much 100 kilograms weighs.


William Dorrington:
 So I just that's how strong he is.


Chris Huntingford:
 It means nothing to him. Andrew, do you remember those times I used to run and jump and cuddle you? Imagine I did that to you. Yes, ooh. Well, that's not good. Anyway, to answer it, Mark: short term, it works.

Mark Smith: Nice.


Andrew Welch :
 Well, to quote the great Forrest Gump, I have been spending my time exercising my arms. Oh, wow, wow.


William Dorrington:
 What is great, to bring it back to that, is that although it's resistance training, all it does is mimic the weight, right? It goes up to 100 kilos, and obviously it knows the amount of resistance that equates to a kilogram. So you have a home gym that doesn't weigh a ton, that you don't have to have a load of plates or weights with, and that goes up to 100 kilograms. You have a ring that goes on your hand that you can adjust, so if you're drop setting or up setting, you can literally just adjust the weight as you go. Any workout you can do at a gym is there. I mean, it is a little bit expensive, but for actual innovation, and it does a lot of AI coaching, adjusts to your goals, what you're up to, your fatigue, et cetera, it's just next level. It's really good.


Mark Smith:
 You sold me on AI.


Chris Huntingford:
 It is cool. The thing is, though, honestly there's nothing like a good thumbnail.


Mark Smith:
 What I found last year, in hindsight, was that the Black Friday sale is just ridiculous and it's their only sale of the year. So I've already measured up for where it's going, I'm ready for the Black Friday sale, and then I'll be on it.


William Dorrington:
 My biggest advice is go harass them, honestly. Their sales team want to sell, and if you said, look, if you offer it to me for the Black Friday price, I'll buy it today, they'll give you a link that gets attributed to you. I'll tell you another thing about AI, though, and a nice little segue: have you seen what the US has published recently around driving AI usage within federal agencies? In the last two days, right. Will has seen it in the last four minutes, since we discussed it just before the show.


Andrew Welch :
 Have?


William Dorrington:
 you seen it, I was trying to get us back on track.


Mark Smith:
 Nice, nice, I have seen it. I read it as bedtime reading last night. It came out on the 7th, which was the 8th our time, and it's the 10th today, so it's in the last 48 hours. It's very interesting because, for all the public bluster, it's actually got a bit of good guidance in there, I feel, around responsible AI, and a lot of "benefiting America, benefiting America, benefiting America" all the way through. I always find when countries do that it reminds me of a dog pissing on a street light: this is my territory. It's like when I went to Belfast in Ireland. The British have their bunting flags up everywhere, and I don't see it anywhere in Britain but in Belfast. And I said to the cab driver, man, it's like the Brits are here pissing on every flag post to make sure you're really clear it's British. Well, he was massively offended.


Chris Huntingford:
 He said well, I'm British.


Mark Smith: What do you mean? I'm not Irish. I'm like, mate, you sound Irish to me, and wow, did he reach back and grab me.


Andrew Welch :
 Mark, I remember you telling the story. That was an absolutely stupid thing to say to the cab driver. There's no other way around it.


Mark Smith:
 Yes, yes, I know, but you know I'm a little... oh, you're irritating.


Andrew Welch :
 Color me shocked. This is my shocked face here.


Mark Smith:
 Exactly. So, you know, thoughts on what the US has said it's doing?


Andrew Welch :
 Yeah, so it's really interesting. You know, first of all, the fact that responsible AI even landed in this thing as a phrase is the first indication we have that Elon Musk obviously didn't read it. I joke, I joke, I joke.


Chris Huntingford:
 I'm sure he's very responsible. You're going to land yourself in border control, my friend.


Andrew Welch :
 I know, I know. No, but it's actually quite interesting. So I'm reading off of the fact sheet they did, and I'm sure we can put this thing in the show notes. There's some funny things in here, right, like there's a line in the first paragraph.


Andrew Welch :
 These policies fundamentally shift perspectives and direction from the prior administration, focusing now on utilizing emerging technologies to modernize the federal government. The executive branch is shifting to a forward-leaning, pro-innovation and pro-competition mindset, rather than pursuing the risk-averse approach of the previous administration. So I mean, it's on brand, right? Like, we had a whole preamble that had to trash the Biden administration, which did just make me laugh. Like, can you just publish a document and get to the point, or do we have to see this play out again? But neither here nor there. I'm looking here.


Andrew Welch :
 So a couple of things that I thought were really interesting stood out to me. First of all, big section in big bold letters, all caps: promoting rapid and responsible AI adoption. It's almost like someone like the Ecosystems podcast wrote that heading. And it goes on to talk about how chief AI officers within federal agencies are tasked with promoting agency-wide AI innovation and adoption for lower-risk AI, mitigating risks for higher-impact AI, and advising on agency AI investments and spending. Which made me laugh. It was like, oh my God, they read our white paper about incremental and differential AI and how to balance your portfolio of risk.


William Dorrington:
 That's amazing. Do you not find it interesting, though, that if you look through it, we had a bingo card of all the things you'd hope to see, and they've got the words there? The bit that I find missing, and I did get called out for only getting it about three minutes ago, but from the quick skim read I've had, is that I can't see anything that goes: right, here's the criteria for classifying high impact and what it means to you. Here's actually an ethical framework. Here's how you can do the risk classification. Here's how you map, manage and measure risk. It's just: we're going to do this, we're going to get someone who's actually responsible, now go off and figure it out.


Andrew Welch :
 Unless... can we drop a link to Cloud Lighthouse and the center for the AI in the chat? Almost like I know someone who has these things.


William Dorrington:
 But the key thing there is it's all the right words, but the context behind it, it's like a GPT wrote it, right?


Chris Huntingford:
 It's like, you know, I've got three of these open right here. So I've got this one, I've got the 50-point plan by the UK, and I've got the newest one by the EU, called the AI Continent Action Plan. So I've got three of them open, right. When you go into the UK 50-point plan, it brings it all down to things like infrastructure. It's really detailed, and actually even the 50 points drill into layers and layers and layers of content, right, and that's the UK one. By the way, when you go into the new AI Continent Action Plan, just go and look at the Q&A. I'll stick the links in the chat.


Chris Huntingford:
 They break it down so beautifully, like computing infrastructure, data, skills development, but they bring out the human, the human-esque part of it. Whereas when I look at this, I'm like, how many GPTs did you run this through to make these words, right? And without being mean...


William Dorrington:
 It's the meat that I'm missing. But look at the political narrative as well: encourages private sector-led AI innovation, promotes US-made AI. It's going back to that sort of charge again.


Andrew Welch :
 Yeah, I also do think that there's a bit of, you know, some cultural context necessary here, in that, just anecdotally speaking, Europeans love detail in a way that Americans really hate, right? So I think part of that is simply, if you were to give this topic to American government writers and then give it to European government writers, the Europeans will produce 10 times more words every time. And I go back to that book.


Andrew Welch :
 I go back to that book, The Culture Map, right, that we've talked about many times on this show, about how Americans are light on details, and this is like, just go do it. So there's a little bit of cultural context there. And I cannot believe that you guys have just backed me into defending the Trump administration on something. That's the podcast, that's it.


William Dorrington:
 You can't fault it, though, even with that in mind, the low detail. I do respect that, and let's face it, we're much more risk averse in the EU and UK. Think of GDPR and how we got that in, and how, as soon as AI came along... yeah, yeah.


Mark Smith:
 I agree, stronger focus on citizen protection.


William Dorrington:
 Yeah, I was trying to be nice by saying conservative. But it's still that focus on: they want to drive innovation, but only as long as, and I don't mean this as a political point, it's a curious point, only as long as it's domestically driven AI innovation. Obviously, and let's face it, the US are smashing the AI game, they've got some of the best models, et cetera. But it's interesting that their focus is on US-made rather than going, let's see how we can innovate and collaborate more openly than that.


Mark Smith:
 I don't know that the US does, though. Everybody tells us that the US does, and this is why I love the book you recommended, Will, The Coming Wave by Mustafa Suleyman, because he clearly has a wake-up call in there about how advanced China is. It's a great book. He refers to the US having had their Sputnik moment: when Sputnik was launched, it kicked the space race off globally, and America was like, oh my, the Russians are beating us. And he said China had their Sputnik moment when he was at Google and AlphaGo, for the second time, beat the reigning world champion in China. And the whole Chinese government system, legal system, et cetera, said that was their Sputnik moment. And I think they're potentially way beyond even what they're letting out into market with their advancements in the space.


Andrew Welch :
 You know, one of the more recent, really high-profile bits of this was when DeepSeek hit the market and everyone freaked out and saw what this thing could do with relatively less computing power than some of the other models out in the market. So I think it is becoming increasingly clear that the mission by the US, and the countries that joined with the US on this, to kind of choke China's capacity to build artificial intelligence has not gone well, right?


William Dorrington:
 It's becoming increasingly clear that China is very, very much in the game. Look at the benchmarking and look at the parameter sizes, and, you know, I absolutely agree. I think the US and China are still at the forefront there. There is a conversation to have as well, which goes back to that sort of Coming Wave type approach, which is, with the political environment we've got going on at the moment and now the race on AI: we know that if AI gets out of hand, and it becomes a "should we do it" versus "we have to do it because otherwise they're going to beat us to it", that creates a huge risk. And with tensions at the moment, it's a very interesting place to be. Would you have believed 10 years ago the world would be so much more fractured?


Mark Smith:
 I feel that, like, you know, I always thought we were going to a global world view and everything, and I think what's happened this year is that nationalism and protecting my nation state has become so much more the focus. I listened to a press release this week from the, the CEO, I don't know what he's called, the president or prime minister of Singapore.


William Dorrington:
 Let's go with CEO.


Mark Smith:
 I mean, his speech is so sobering. He believes we're going to a much more fractured world than ever before with what's happening, and that these could be dark times that are looming. And I'm like, wow, I'm hearing this thread a lot more. Mo, the guy that wrote, he wrote a book on AI, he was the CEO of Google's moonshot programs, and the first book I read from him was Solve for Happy, I've talked about this before. But he wrote a book on AI, and his latest podcast talks about us going into a 10-year dark period with where AI is going. And I'm always now in this position: I'm so excited about what AI is doing, and there's these people that are way more knowledgeable than me in the space starting to talk cautionary tales, fractured nation states. And what's coming through quite strongly is that it's not just the risk of individual bad actors, it's nation-state bad actors coming much more to protect their national interests.


Andrew Welch :
 I think we could teach a semester, or rather an entire PhD program, on this topic. But let's separate the AI implications from what's happening kind of fundamentally under the covers, at a more base level. And it is absolutely true that, across the world, the political mood, and the parties and candidates that are sweeping to power, or at least coming within spitting distance of power, support increasingly nationalist, isolationist tendencies, and I think there's a lot of reasons for that.


Andrew Welch :
 This is something that is very, very close to sort of my first interest, before I broadened my horizons into tech. So, guys, stop me before we go too far. But yeah, even setting aside AI...


Andrew Welch :
 This is a serious problem, and I think it's probably a problem that we could have seen coming. I think that folks in the technology world, or in the global business world, or kind of the globalized world, really did not do a good job at all, and this is an understatement, of bringing their fellow citizens along for the ride. That is absolutely true. But yeah, I do think that when you then add into that the dangers that are inherent, we talked about this on a recent episode, in AI's ability to process and to interpret and to understand an ungodly amount of data in a way that human beings can't imagine... The world is going to be very different for at least a few years, and I don't think it's all good. In fact, much of it is not good.


Mark Smith:
 You know, you mentioned its ability to process large amounts of data. We've heard a lot about agents and how they are going to do a whole bunch of stuff for us, and I hate the term at the moment. Or rather the overuse: I don't hate the term, I hate the overuse of the words "autonomous agents". I've never seen one.


Chris Huntingford:
 It's because they don't exist. That's because it's Power Automate. That's all it is. It's Power Automate.


Mark Smith:
 I think, as Donna put it, it's sparkling automation. What did she call it? Like champagne: if it's not from the autonomous region, it's just sparkling automation. Yeah. No, it's dumb.


Chris Huntingford:
 So first of all, I just want to tell you guys something wild, right. Just back on Andrew's point: you know China is really big on WeChat, right? Yeah, huge. Now you have a thing called pay-by-palm in China, so you don't carry your phone around, you carry no cards around with you, and you pay with your palm. Now, that's interesting, because it makes things really weird in the fact that if I am going to pay for something, I don't have to carry stuff around. But the concept of bad actors in that scenario makes it way more difficult, because now you have a thing attached to your body that people want, right. So that is AI.


Mark Smith:
 I've always had that.


Chris Huntingford:
 Oh, wow.


William Dorrington:
 Mark's face lights up, and I knew it, he's just so happy.


Chris Huntingford:
 He's just so happy, but isn't that scary, um anyway. So that's one thing.


Andrew Welch :
 The other thing is... it's the bald part. For those who are listening to this without the video feed, I am burying my face, my head in my hands.


Chris Huntingford:
 I think it's the greatest thing. You've got to line him up, though. The other thing that's interesting, so, this agents thing: I'm going to call bullshit, and I'm going to say it right now. I think that there are a lot of people that are like, oh, we're making agents in this technology and we're chaining them. I put out a post recently and I'm like, okay, show me. Show me how you're doing this. I want to see how all of you genius people are doing this. And I got maybe five responses, and none of them were accurate. And I'll tell you why: because, number one, an agent requires orchestration and memory.


Chris Huntingford:
 Okay, now here's the thing, I'm not even going to start there. I'm going to start with autonomous agents. So we have a thing in Copilot Studio called an autonomous agent. Okay. Up until extremely recently, in fact the 2nd of April, we only got agent flows, which is basically Power Automate in Copilot Studio. And then we discovered we had a thing called a trigger, where you can invoke an agent. Now, tell me this: how is an agent autonomous if you don't have a mechanism of firing it? Well, you do now, with the triggers. Okay, but it's just Flow. It's been Power Automate for years. So I started digging into a bunch of other areas. Now, not AutoGen, because I'm not a coding expert, right, so I'm happy to be kept honest here, but I'm working on a project right now where we are using Semantic Kernel.


Chris Huntingford:
 In Azure AI Foundry, we're generating agents using Semantic Kernel. We are putting the correct descriptions in. We have found no way to orchestrate and plan them, because the planner functionality isn't there, that we've found, so I'm happy to be told it is. And if somebody can show me, and I would love to be shown, so if somebody can sit me down and physically show me that there is a way to have five agents that you build with deeply descriptive information about the agent and what it does, and you can physically fire off those agents based on an orchestration engine and a planning engine, I want to see it. Because all I can see now is deterministic chaining between agents.


Mark Smith:
 Yeah.


Chris Huntingford:
 That's all I can see. I cannot see anything that's intelligent. So I'm calling bullshit.
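
To make the distinction concrete, here is a minimal, purely illustrative Python sketch (not Copilot Studio, Semantic Kernel, or AutoGen code; every name in it is made up for the example). The first function is the deterministic chaining Chris describes: the agents fire in a hard-coded order no matter what the goal says. The second hands the routing to a planner, which in a real system would be an LLM reasoning over the agents' descriptions, the orchestration and planning piece he says he cannot find.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    description: str           # deeply descriptive information about what the agent does
    run: Callable[[str], str]   # the agent's work, stubbed out for this sketch

research = Agent("research", "finds and summarises source material", lambda t: f"notes on: {t}")
drafting = Agent("drafting", "turns notes into a first draft", lambda t: f"draft built from: {t}")
review   = Agent("review",   "checks a draft for tone and accuracy", lambda t: f"reviewed: {t}")

AGENTS = [research, drafting, review]

# 1) Deterministic chaining: the developer hard-codes the order. Whatever the
#    goal is, the same agents fire in the same sequence, like a flow with steps.
def deterministic_chain(goal: str) -> str:
    return review.run(drafting.run(research.run(goal)))

# 2) Orchestration: a planner decides at run time which agents to invoke and in
#    what order. Here the planner is a crude keyword match over the descriptions;
#    in a real system it would be an LLM reasoning over them and the goal.
def plan(goal: str, agents: list[Agent]) -> list[Agent]:
    words = set(goal.lower().split())
    return [a for a in agents if words & set(a.description.lower().split())]

def orchestrate(goal: str, agents: list[Agent]) -> str:
    result = goal
    for agent in plan(goal, agents):
        result = agent.run(result)
    return result

print(deterministic_chain("AI policy briefing"))
print(orchestrate("summarise the source material and review the draft", AGENTS))
```

The chained version behaves identically for every goal; the orchestrated version's route depends on the goal and the agent descriptions, which is the difference between a flow with steps and something you could reasonably call autonomous.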


Andrew Welch :
 Right. So where I see this, right, is in general-purpose AI. So Copilot or ChatGPT, or a general-purpose AI development tool, something that is meant to bring AI development to laypeople, to folks who are not data scientists, who are not engineers specializing in the field. I think that a lot of this right now can seem and feel very underwhelming. The real capacity and the real capability that I think is very impressive in the world of AI right now is in very high-end, purpose-built solutions. So, excuse me, I do think that, to some extent, the rush to get consumer-grade agents, autonomous agents, into market is, in a way, hurting the story, right, because coming to market they're very underwhelming. They're very underwhelming compared to what I think people expect, and they are underselling to the average observer the capacity and the capability that is out there beyond the reach of the casual observer.


Chris Huntingford:
 Dude, in breaking that down, what I'm seeing is extremely watered down. Yes. And in order to achieve true agentification, you have to write reams and reams and reams of code, really, and I've seen it. I've seen the introductory course to agents, I've seen what it can do, and Microsoft have done a good job of that. But AI introductions, man, it's hardcore. Like, we got some great code snippets from it, by the way, and I recommend looking at it. But if you are inexperienced in writing code and you don't know how to build Azure Function Apps to drive action, you don't have an agent, sorry. You have, at best, a thing that does retrieval-augmented generation, and maybe, at best, yeah.


Andrew Welch :
 So, in defense of all of this, I will say that two years ago, or even 18 months ago, the idea of real consumer-grade RAG seemed very far off. Now, I think we all looked at it and said, okay, it's not actually as far off as it seems. So, in the grand scope of IT time, the fact that we've gone from where we were 18 months ago on consumer-grade RAG to, especially, the ability of laypeople, folks who are not experts in this, to generate fairly decent AI scenarios...


Chris Huntingford:
 Dude, it's not. It's a chatbot on data, mate. You can do this with search. This is not rocket science. This is applying search to a chatbot and whacking some data in there, right? The fact that it's generative is fine, but this is not rocket science. What I think is rocket science is what my post said: I had a picture of three agents, and I said, how do I link them, how do I make this truly autonomous, and how do I create this orchestration engine? And this is it, because not many people could tell me. There are three posts on there, two of which are Copilot Studio, and, by the way, Copilot Studio is only autonomous now to an extent. There is no orchestration. There is not. It is deterministic, folks. If people say it's non-deterministic, it's not. It's deterministic.
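
For anyone who wants to see what "applying search to a chatbot and whacking some data in there" looks like, here is a toy sketch of that retrieval-augmented generation pattern, under obvious simplifying assumptions: the retrieval is naive keyword overlap and call_llm is a stub, where a real system would use embeddings or a search index and an actual model endpoint.

```python
# A toy RAG loop: retrieve the most relevant chunks, pack them into the prompt,
# and ask the model to answer from that context. Everything here is illustrative.

DOCS = [
    "Copilot Studio added agent flows on the 2nd of April.",
    "Agent flows are essentially Power Automate running inside Copilot Studio.",
    "A trigger lets an agent be invoked when an event fires.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Naive relevance score: how many words the question shares with each chunk.
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for whatever model endpoint you actually use.
    return f"[model answer grounded in]\n{prompt}"

def ask(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(ask("What are agent flows in Copilot Studio?"))
```

Generative or not, the control flow is fixed: retrieve, stuff the prompt, answer. There is no planning or orchestration anywhere in the loop, which is the point being made.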


Andrew Welch :
 Well, this is the Downer Jam podcast. No, screw that. This is the Downer Jam episode.


Mark Smith:
 I think we're going to get to a switching moment when, all of a sudden, we'll get that aha. I think what's happened is that, like anything, marketing needs to own phrases, needs to own words, et cetera, and so they go hard on this, even though the actual reality of what they're saying doesn't exist. I've found it very interesting: in a podcast that he did in January this year, he talked about something that Microsoft, he feels, will solve this year, which is memory. AI, to get to its next level, needs to have memory. It needs to be able to go...


Mark Smith:
 you know what, I remember this happening, I remember that happening, and therefore that means something different. And I'm not seeing any of the big LLMs in market, the commercial, off-the-shelf, over-the-counter LLMs, none of them are really nailing memory yet, but I think that when they do... Over the counter, like pills, the paracetamol of the agentic race.


Mark Smith:
 Yeah, OTC. I used to work in the pharmaceutical game for a while, and OTC drugs are the ones you didn't need a prescription for, over the counter. So yeah, I think memory is going to solve a lot of that. But at the moment, if I've got to spend more time telling it, fire triggers and do this, these are if-then, statement-based automations.
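
As a rough illustration of the memory gap Mark is describing, here is a hypothetical sketch of the simplest version: facts persisted between sessions and injected back into the prompt so the model can act on "I remember this happening". The JSON file and call_llm are placeholders; real products would use databases, vector stores, or whatever memory features the large LLM vendors eventually ship.

```python
# Illustrative only: a tiny persistent memory layer for a chat loop.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # stand-in for a real memory store

def recall() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    facts = recall()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[response informed by]\n{prompt}"

def chat(message: str) -> str:
    # Everything remembered so far rides along with each new message,
    # which is what lets the model say "I remember that happening".
    memories = "\n".join(f"- {fact}" for fact in recall())
    prompt = f"Things you remember about this user:\n{memories}\n\nUser: {message}"
    return call_llm(prompt)

remember("Mark has already measured up for the Black Friday gym equipment sale.")
print(chat("What should I focus on this morning?"))
```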


William Dorrington:
 It's deterministic. If we look at this from a business point of view, with our Microsoft hat on, if we were on the board we'd go, right, this is our first stepping stone into some intent that we have. So I always say it's not agents, it's agentic-light; it's agents with extreme seat belts, because you can't even get to the point of agents just yet. You know, there is...


William Dorrington:
 I like that, extreme seat belts. It's good, I'm going to use it. Yeah, but it's true, because it's not truly autonomous. Okay, it still triggers, it still flows, but what it can do is look at intent, right, so it has an element of it, and you can see where the route is going. And a bit like all Microsoft tools nowadays: they used to polish it up back in the day and then get it out; now it's more, let's get it out and we'll improve it as we go. So, you know, you can see where they're trying to head to. But I absolutely agree with you all, which is surprising, isn't it? But yeah, it's not quite there yet, Andrew.


Andrew Welch :
 Just a little bit off topic. I was laughing Will when you used the phrase extreme seatbelts, and it reminded me of one of my sort of favorite humorous political moments from another time, before Donald Trump had ever been elected president, and there was a guy in the US, a politician who I respect and admire, a guy named John Kasich, and he was running for president and in a debate, right, he said something to the effect of if I'm elected president, you should go out and buy yourself a seatbelt, because we're going to be moving so fast it's going to make your head spin, or something like that. And it left all of these commentators in the news the next day being like where do you buy a seatbelt?


Andrew Welch :
 Like, where do you go if you just want the seatbelt, I'm sorry, so a little bit of a rat hole, but it made me giggle.


William Dorrington:
 The reality is, though, that right now, if we did have truly autonomous, agentic AI, a lot of us would be struggling for adoption, and it's good that we have time to breathe and get responsible AI frameworks in place, and the technology maturity and the mindset maturity.


Mark Smith:
 The biggest skill, I feel, that people are going to need to learn, and I haven't seen a course on it and I'd love to find one, is the ability to truly delegate. Yes, and I don't think people are ready for what delegation means. If you've never been in a managerial position: delegation is not just go do something. It's putting the parameters around what my expectation is of an outcome, what my timeframes are. There's a whole bunch of stuff that goes around good delegation, delegation that gets meaningful results. It's almost like in Scrum: what's the definition of done? When you come back to me and say you did what you said you were going to do, does it meet this bar at a minimum, or did you go beyond, type thing. And I think, when we go to this autonomous, which I think is definitely on the near horizon, it's really close, we're going to have to get really good individually at delegating and knowing how to put all the nuance around delegating.


Chris Huntingford:
 So I want to tell you something, man. Maybe we should wrap with this. I was chatting to a very good friend at Microsoft the other day, because I was whinging about exactly this, right, because I do get mad and I do get angry. And the reason I get angry is because I have to spend hours and hours and hours, literally nearly two weeks on this, trying to figure out where this invisible thing is. And she said, would you prefer that we were behind or way ahead in our messaging? And I said, you know what, I'd prefer you were way ahead. And I agree with that, because I think she's right.


William Dorrington:
 Love it.


Chris Huntingford:
 It signals intent. Yes, it does, it does signal intent, so it gives us something to go for. And if I hadn't been in the position where I was so bummed out about trying to figure out how these things connected... I do have a message behind it right now, and I do know how they work, right, but I spent so much time researching. And actually, I'm grateful to Microsoft for being ahead and having that ahead message, and I think it is important.


Mark Smith:
 I think you're on point, and what I'm going to say is not an intentional flex, but it's going to flex. I'm 50% through writing a book on Copilot adoption for Microsoft Press, yes, and it's forced me to use Copilot much more than I ever have. My default tool has always been ChatGPT; I'm on the $200 a month plan. In fact, I just saw that Anthropic has brought out a $200 a month plan overnight as well. I use Perplexity, I use Anthropic, I use, um, what's Elon's one? Yeah. But...


Mark Smith:
 What I have found is most of these models are verbose in their responses, right? They are just so much fluff and, you know, self-flagellation type stuff in their prose. What's that?


Andrew Welch :
 The ultimate mansplainer.


Mark Smith:
 Yeah, yeah, mansplainer. What I have found is Copilot is really now becoming my favorite tool, because the more I use it inside my organization's data, it is actually getting smarter, yeah, it is, and it is really concise and it is not full of fluff.


Mark Smith:
 One of my kind of secret hacks at the moment is that if I want to write something, I open up Teams, I set a meeting with myself, I put on transcription, I turn off the camera, and I will talk to it for 30 minutes. Like, I'll just riff all the ideas that are going on around whatever topic I'm focused on, and then I close it down. I give it a couple of minutes, and I go back, and here's the transcription, but next to it is a summary of my thoughts. And then I go into prompting against it, and I'll go, listen, don't take out any of my storytelling, any of my voice, anything that's me, but can you structure my thoughts? Because they were really haphazard. And what comes out is fucking amazing.


Andrew Welch :
 I mean, let's just stop for a minute to observe that a 30-minute meeting in which you're the only one doing all the talking sounds like a typical meeting for Will.


Chris Huntingford:
 Okay, you fools. Big love.


Andrew Welch :
 I'm sorry. I'm sorry, I couldn't... All right, are we done? Yeah, we're done.


Mark Smith:
 This is awesome, guys. I love it, will. I'm so pleased to see you back online. It's you know.


William Dorrington:
 I've missed you guys, I have. This is beautiful. We missed your musk.


Mark Smith:
 I can't wait. You know May is coming at us like a freight train and we get to hang out again.


Andrew Welch :
 Yes, we do. DynamicsMinds.


Mark Smith:
 I love it. See you guys soon. Ciao, ciao, peace Later, guys. Bye guys. Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.


Chris Huntingford

Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.


William Dorrington

William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.


Andrew Welch

Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.


Ana Welch

Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and, most recently, Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.