
AI's Transformative Power
Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/659
What if AI could transform the way we navigate our professional and personal lives? This episode addresses the pressing challenges of balancing AI innovation with the regulatory frameworks emerging worldwide. We explore the differences between the EU and U.S. approaches to AI regulation, the importance of human rights considerations, and the responsibility organizations have in navigating ethical implications.
TAKEAWAYS
• Highlighting advancements in AI tools
• Discussion of Grok 3 and its capabilities
• Exploring the EU AI Act versus U.S. regulatory approaches
• Complexity of navigating international AI regulations
• The risk of human rights violations with AI algorithms
• Emphasizing educational needs for organizations
• Importance of a responsible culture in AI implementation
This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.
DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.
Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff
Accelerate your Microsoft career with the 90 Day Mentoring Challenge
We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.
Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days. Get started today!
If you want to get in touch with me, you can message me here on Linkedin.
Thanks for listening 🚀 - Mark Smith
Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts, and let's grow together. Now let's dive in. It's showtime. Welcome back to the Ecosystem Show. We're here with Ana and Andrew and myself. Will is, unfortunately... did he say he's sick?
Andrew Welch : I think he said he's called in sick. They're both sick, Chris and Will. I got a text from Chris and he said, "I have not been this sick in 10 years." Years!
Mark Smith: So they're not joining us today. But we find ourselves in a space and time where AI is just front and foremost in everything that I seem to be doing these days, and it seems to have transitioned super quickly. All of us really come from a business applications mind space for the last 20-odd years, and now we're seeing this massive pivot into artificial intelligence, and really from a practical application point of view, right, not just theory. I don't know if you guys have tried anything new lately, but I have had the pleasure of using Grok 3, Elon Musk's AI release this last week. Isn't he busy? The deep research functionality on it, I definitely think, outperforms ChatGPT, and I'm on the $200-a-month plan. The thoroughness of it was mind-blowing to me. So I gave it a topic that I knew stuff about and I couldn't believe how good the responses were. And I'm not talking about Q&A; I'm talking about a full conversation with it over an hour. Unbelievable. And I hadn't really touched Grok up to that point.
Mark Smith: I'd only cursorily, over the last year or so, jumped into the previous versions; I wasn't really into it. I'm like, you know, could you use it as a serious business tool? But, boy, honestly, amazingly powerful. I've got the app on my phone now, so generally when I'm watching something on TV or something, I'll go and spend some time prompting and stuff, and yeah.
Mark Smith: I'm blown away.
Andrew Welch : This is and, by the way, Mark, I just want to tell you that I'm offended. I actually, 20 years ago, was an infrastructure guy, not a biz apps guy. But nonetheless, see, this is actually where the most readily applicable use of AI in my daily life comes in, right, which is refereeing random trivia. If you are not using your AI of choice to referee a debate that you might be having with your wife or with your father about something that doesn't really matter in the end, then you are missing out on AI's true calling.
Ana Welch : It's very irritating. It's very irritating. I can also vouch for Andrew's background in, you know, hands-on engineering. He did fix our internet earlier in the week. Like, he knew what to plug in where, and, you know, I was impressed. I have to say, he was on the floor. We had to move the couch. Yeah, it's pretty cool.
Andrew Welch : I ripped a fiber connection out of the wall and rewired it.
Mark Smith: Exactly, yeah, that's what you did.
Mark Smith: But if you're in IT in the last 20 years and dare I say mine's around 30 years you know it started with networking, right. It started with, in my case, baud modems, and, you know, my entire network infrastructure I've configured and run myself. It's pretty high spec, what you'd call a prosumer network, I suppose, with multiple WAN connections and whatnot. But internetworking is one of the first courses I ever taught, maybe 30 years ago, which was basically how packets break down and travel across the network, and the OSI RM, the Open Systems Interconnection Reference Model, and how data moves down the layers to ultimately get to your hardwired network card, or NIC, and travel.
Andrew Welch : So yeah, this was my biggest finding from fixing the internet in our new apartment here in Valencia. So I got my first job in IT 24 years ago. I celebrate this anniversary every January, so I just celebrated my 24th year. My number one takeaway is that I am way less flexible than I was 24 years ago. I laid on the floor with my head stuck up somewhere for about 15 minutes the other day, and I had to take a sit-down afterwards.
Ana Welch : It's going to get better, though, because now we are exercising. I mean, we're exercising, we're walking for a good hour every day. We live in the old city now. So yeah, that's wonderful.
Andrew Welch : I'm going to bring this back to the thing, Mark, that you asked about a few minutes ago. We talked about Grok, and then we went down a tangent of our prior lives as network engineers, but the thing that's been really top of mind for me the last little while is the choppiness of the regulatory and legal landscape surrounding artificial intelligence. I think this is something that has not gotten a lot of play, has not gotten a lot of mindshare, because we've been really focused on the capabilities of AI or, maybe secondarily, on the engineering around AI. And this is going to be a fun episode for me as a guy who did my college major in political science, not actually in something technical. But there's been a couple of events that I think have shone a light on how complex and how uneven that regulatory landscape is. The first, in my mind, is the European Union's AI Act. The second, though, would be the presidential transition in the United States. And I'm not going to talk politics here, that's not my point, but the vice president, JD Vance, was in Europe and, as I think has been pretty widely reported, gave Europe quite a scolding in his view on various things. But, you know, listen, today we have this regulatory landscape where the EU is coming down quite hard on regulating and trying to make artificial intelligence safer and more reliable and more trustworthy, legislatively and from a regulatory perspective. The United States is moving in the opposite direction. Donald Trump rescinded Joe Biden's executive order pertaining to AI safety and responsible AI.
Andrew Welch : You have the Chinese, who are training models we had DeepSeek that hit the scene in the last month but training models that have baked into their knowledge, I would say, the Chinese geopolitical view of the world.
Andrew Welch : So, famously, folks asking DeepSeek for information about Taiwan get a very, very different answer than what you would get from a Western large language model.
Andrew Welch : You have the Brits, who are looking at their own version of an AI act, which I suspect will come down somewhere in between the American wild, wild west that, it seems, Donald Trump and Elon Musk envision, and the EU regulatory landscape. And then, really interestingly, you have the US states, many of whom are taking on their own perspective here. California is obviously going to be a huge mover here, because they're home to so many tech companies and are themselves, I think, the seventh largest economy in the world. So if you're an organization that operates across borders, you have a mind-bogglingly complex task ahead of you just to understand and navigate how to be in compliance with what you're doing with AI from a regulatory perspective, but also how to deal with the multinational citizens who work inside of your company and may be subject to a particular regulation even if they're not living in their home country. So that's my prompt for the next 20 minutes or so. I think this is really underplayed.
Ana Welch : Underplayed by who?
Mark Smith: Yeah.
Andrew Welch : Underplayed by almost everybody I speak with about anything related to AI. Other than, to be honest, the wonky folks: maybe you're a lawyer who's building, or part of, a practice at your firm focused on AI, or you're a lawmaker or a policymaker. But outside of that kind of legal and policy world, and maybe the in-house counsel at some of the larger tech companies, I think people have no idea really what they are navigating here. No idea.
Ana Welch : Yeah, so last time we were talking about this as well, and how many people believe that this doesn't apply to them, or how much percent extra, you know, funding, budget, and work it takes in order to publish an AI product if it is to uphold all of these, you know, all of these regulations. I'm not sure that anybody has calculated how much it will cost long term. Because how much does it cost once you've got the framework, the tooling, the, I don't know, the terms of reference, and you actually know what you're doing, what you're testing, what you're governing for? Because in his speech, Elon Musk uh, not Elon Musk, jeez, Vance scolded Europe not just for the EU AI Act; he was upset with GDPR as well.
Andrew Welch : GDPR and NATO spending and you know.
Ana Welch : And everything. So GDPR is another one that, you know, people understood, and they have created frameworks, and now we just kind of get on with our lives. But, fun fact, nobody's suing us for losing their data or using their data improperly. It's a good thing, right?
Mark Smith: Do you think Vance is unaware that at least 12 states in the US are implementing their own AI acts that align pretty closely with the EU AI Act?
Ana Welch : I don't think he's unaware. He's a clever guy.
Mark Smith: I know they rescinded Biden's executive order.
Andrew Welch : JD Vance. Think what you want of him and his politics. Ana is right, he is a clever guy.
Ana Welch : He also mentioned a few things in his speech again, without getting political about how, you know, America is at the forefront of these developments, and for the use of their own chips as well. Now, Biden was actually the one who said we are no longer buying Chinese microchips, we're going to make our own. And that was months and months and months ago, and the legislation worked. So it's not a current-administration thing, even though it was definitely a political speech. And I do believe that he knows and understands what his states are doing, but he plays for the people who listen to him, you know, who feel empowered by that talk, and who don't realize that in many states in the United States, you can actually go and sue somebody yourself. You don't even need a lawyer.
Andrew Welch : Well, and I think that part of this so my thinking on this has evolved over time, right. I had a thesis, if you go back maybe six weeks, two months ago, a thesis that I haven't abandoned. This idea that what we're going to have is Europe being kind of the slow lane, and I mean slower but safer, right. So AI was going to pretty fundamentally advance at a slower pace in Europe but also be categorically safer to use, both for an individual and for an organization. And then America was going to be the fast lane, right, where AI leapt ahead because it was far less regulated, at least by the federal or the national government, but was also much less safe to use.
Andrew Welch : And my amendment to that thesis, right, is that and this is where I come to this conclusion about first-order and second-order regulation of AI. So there's a couple of ways to do it. The best way to think of this is that you can regulate the development of AI itself, which Europe has clearly said we're going to do, and America and again, I'm not talking about state governments, I'm talking about the national government has clearly said we're going to take a much lighter approach here. But what gets lost in that discussion about the regulation of AI, right, is that even if AI itself is unregulated, what AI does inside of a financial services institution, like inside of a bank, or inside of a law firm, or inside of a hospital, right?
Andrew Welch : Any of these highly regulated organizations what AI does there is still very much regulated, right? So, like, if AI inadvertently spills mass amounts of patient data from a hospital that is using AI to process patient data, right? Yeah, okay, the vendor who created the AI may not have the same kind of legal ramifications in the US as they would in Europe, but that hospital still has the legal ramifications in place for misappropriating patient health data, which is, by law, confidential and protected. So organizations need to think long and hard about this. I think it is a false choice to say either we're going to highly regulate AI, or we're not going to highly regulate AI and therefore you can do whatever the hell you want with it, because you are still regulated by the laws that pertain to your industry's behavior.
Mark Smith: Yeah, totally, totally agreed. I just think that in a lot of companies right now, it's your technologists that are at the forefront of technology generally, and all the technologists I know of are not thinking about the legal ramifications. "I'm not committing a crime, I'm not going out of my way to do something illegal", you know, that I know of. They're not thinking like that these days. The developers, they'll think about security. They'll think about, you know, things like encryption, and not doing stupid stuff in the code that's going to expose data. But they're not thinking, hang on a second, is this a human rights violation? You know, that's not happening at the moment. A technologist doesn't think like that, and I think potentially that's going to have to change. It's going to be part of the education program any company is going to run.
Andrew Welch : Hey, if you're using AI, you need to be educated on the implications of it. Well, and in this conversation we've used phrases like legal and regulatory, right, but Mark, you hit on a really important dimension here, which is the human rights dimension.
Andrew Welch : Okay, so obviously different nations in the world have different stances on various human rights issues. The United Nations has the Office of the High Commissioner for Human Rights, which is the global body that addresses, looks out for, and promotes human rights around the world. But there is a huge human rights component to whether AI inadvertently violates various human rights, or puts an organization in a position of inadvertently violating human rights. And, you know, there are certainly ethical implications and there are certainly moral implications there. But then there are enormous reputational implications too. Right? No tech vendor wants to be the one whether it's your tech vendor of choice or not whose AI somehow facilitates sex trafficking, or whose AI somehow facilitates political violence. You don't want to be that.
Mark Smith: But there's a layer below that, right? And I was reading this article which had a very interesting look at algorithms. Algorithmic impact assessment is going to be part of what every company needs to be looking at, based on the algorithms they use. And one of the examples was: what happens if you're a job listing site, so you're advertising job roles, but your algorithm makes sure the ad is only displayed to certain people that your algorithm picks let's say, middle-aged white guys with X experience? You're violating the human rights of the other people that you're not displaying that ad to, but "nobody gets hurt", type thing. Right? Nobody hears about it, nobody sees it, so nobody knows. Human trafficking, sex trafficking, all that kind of stuff is very confronting, but I'm talking about that next tier down, where, who knows, it's not doing deliberate harm, but it's definitely a violation of human rights if you're going, hey, I choose that.
Ana Welch : You don't fit the demographic that I believe should apply for this role, so therefore I'm not going to show you the ad, as an example. Or the platform itself can actually do it, because if the platform itself only learns from the data that it had and we know that, historically, you know, you will have, I don't know, fewer women at an executive level or on boards, less diversity in, I don't know, high-powered positions...
Andrew Welch : America doesn't care about that anymore.
Ana Welch : Yeah, I understand, but fewer men being nurses, et cetera, et cetera. So, in essence, even the platform itself may choose to target your ad to, you know, the audience it believes will be most successful, and then the question still applies: who's responsible? Is it the platform? Is it you, who made the algorithm? Can anybody prove that you've tested it this way?
Mark Smith: So I highly recommend people go and read Yuval Noah Harari's book Nexus, which he published just last year. In it he calls out, for example, Facebook, in detail, around the part they played in genocide that has happened around the world. And there's no way he would publish the word Facebook and that whole account so clearly in the book without opening himself up to potential legal liability if it weren't true.
Mark Smith: And what he showed is that their algorithms incited hate speech, and their defense was, well, that's people's choice. It's people's choice that they put this hate speech up; we're just providing the platform. The old telco model, right? We can't be responsible for people downloading objectionable material, we're just the infrastructure. But what he showed is that the actual algorithms had a target of: you need to sell as many ads as possible, and therefore dwell time is critically important. So the algorithm noticed that hate speech et cetera got more eyeballs, and what it did was say, hey, that improves my algorithm's goal of getting ad time. And so something like 70 percent of all hate speech displayed was algorithmically recommended. It wasn't Andrew recommending to Ana to watch the video.
Andrew Welch : I want to be very clear that I do not recommend hate speech.
Mark Smith: As in, it resulted in massacres of people. That was the outcome of this, you know. And so it's a good example, though, of how everyone the tech platforms, the people owning the LLMs, then at the next layer down, which affects a lot of Microsoft partners, right, the implementation layer, and then, of course, the end customer or the end users everybody has this kind of level of obligation and responsibility.
Mark Smith: And in how AI is applied as we move forward. And I think some of those arguments of the past, like "oh, we didn't know" when you clearly did, because you're making so much money off it, um, dare we say, blood money... The thing is, I think the reason we're seeing over about 12 states in the US already putting their own AI acts into effect and why has it all of a sudden become coordinated and come about?
Mark Smith: Because this has been going on for really the last 10 years, pre-generative AI, this concern. Like, I remember in 2015, 2016, I was doing this whole piece of work with Microsoft in federal and state governments around law enforcement, for what was called situational awareness.
Mark Smith: So you have a big football match on, you have surveillance cameras operating. I mean, one of the cases was New York City, right? They've got sensors all over New York City that can detect radioactive substances. Then cameras can lock onto a number plate, and they can play back, for the last eight hours, ten hours, whatever, everywhere that number plate appeared on any of their surveillance cameras. And then you could take that to another level and do facial recognition and go, hey, let's put people at the crime scene based on, you know, that type of data. And of course, there was a lot of concern, particularly around the facial recognition piece, around identifying and therefore drawing conclusions algorithmically. And that's why I think a lot of these acts have all of a sudden appeared. It's not because of Gen AI; it's actually been in flight.
Ana Welch : For, you know, the machine learning models and things like that around algorithmic selection. It's also... so this will help prevent AI models from making mistakes, but it doesn't guarantee that AI models and AI products don't make mistakes. So these things could still happen; the facial recognition can still be wrong or biased. But what it does mean is that you show you've done your very best to make the algorithm not biased. So, in your example earlier with Facebook, it means that Facebook would have had a big, massive set of documentation showing that they did their best, you know, to prevent hate speech, which obviously they didn't, right? But this is what these AI acts mean. They're not guaranteeing that you'll never make an error within your programming or your product, but they do require that you do your very best not to, and that you immediately have a set of mitigation activities so that the second you realize something went wrong, you can fix it.
Ana Welch : So a lot of people's opinion and I feel like JD Vance was going that way as well when he was saying, we choose innovation; you know, the American people choose innovation and not fear or safety. So he started with fear, but then he went on and on about how safety is important here by saying that, oh, actually, if there is a problem, we can fix it. Well, fun fact, you cannot fix it unless you've got that very thorough testing done already, and a series of mitigation plans and clear rollback plans. This is what the EU AI Act is. Everybody talks about it as if it's this very complicated set of laws that nobody can understand or follow, but the reality is it's a set of actions that you have to follow. You need to read the thing and then you have to put some work into it, but it's not necessarily rocket science, dare I say.
Mark Smith: Yeah. I want to discuss something else that's come up for me this week. I was interviewing with a large telecommunications company and we were discussing AI adoption. You can see here the technology adoption cycle; it has been around for a while. If we drill into it, you've got the innovators, who make up 2.5%, early adopters at 13.5%, then you've got your early majority, late majority, and laggards. And the discussion was: we know what's happening all the way through to here, but what happens to the laggards in an organization? And do you know what the leadership of this organization's opinion was of the laggards?
Ana Welch : I'm so curious.
Mark Smith: They won't have a role in our company. In other words, they're building it into their culture: you need to learn about AI, you need to be adopting it. And they weren't using this particular graph I'm showing you here, but I'm referencing the laggards. They were saying the people that are choosing not to get on board and not to go down this journey...
Mark Smith: They've made it quite clear to their staff at this early stage that there's probably not going to be a long-term role for them inside the organization. Pretty phenomenal, right? Like, that kind of very clear mandate was coming from the SLT inside the organization: you need to find out where you're going to land on this bell curve, but if you're in this tail end, you should probably either be looking for another role, or we can't see you having a role inside this organization long term. And, of course, the big focus is on skill development: we're going to put lots of skill training programs on, and you need to be adopting right through to this. I just thought it's an interesting stance that perhaps companies are starting to take now. And the other interesting thing was that this company has already received benefits from AI that have enabled them to stop hiring certain roles inside the organization. In other words, AI is filling the gaps of certain roles and reducing their need to hire more.
Andrew Welch : Yeah, I think that's very interesting. And, you know, certainly I don't want to say certainly my instinct says that if I were running a big company, that's the attitude that I would take. And, listen, from the work that we do, we see a lot of organizations that are themselves the organization itself is a laggard.
Ana Welch : But they work with us because they don't want to be one. That's why they work with us.
Andrew Welch : Right. But I mean, we see some phenomenally dysfunctional, technologically dysfunctional organizations. Again, you know, to Ana's point, that's why we work with them. But it did make me laugh when I saw that. A story from years ago.
Andrew Welch : I worked in an organization I'm sure I've told this on a podcast, right where one of my colleagues had retired from a role in the US government and taken a job in the private sector, which often happens. And he asked me one day, he said, can you come look at this with me? And I said yes. I came over to his office and he said, you know, Andrew, I notice that you send emails to more than one person at a time. At first I didn't realize what he was asking me, right? Like, this was so mind-blowing to me, and I was sort of like, well, yes, what do you mean? His name was Wayne, as I recall. Yes, Wayne, what do you...? And I'm trying to feel him out here, I'm trying to get to what his problem is.
Andrew Welch : And he says, well, how do you send the same email to more than one person? And I said, well, how do you do it now? And he shows me. He says, well, I type the email, and then I put someone's address in and I send it. And then I go into my sent folder and I copy the text of the email, and I create a new email and I paste it, and I put the next person's address in. And if he needed to email the same thing to five people, he did this five times. So I explained to him how the CC field worked and the BCC field was mind-blowing and I also explained to him that you could just put a little semicolon or a comma, or I forget what we were using in that version of Outlook all those years ago, right, that you could actually send it to more than one person in the To line. And every time I hear stories like this, I just sort of laugh and I think of Wayne. Bless him.
Mark Smith: Well, it looks like we're done, and Ana has up and left us for some reason. But anyhow, thanks for joining us again for the show. Remember, if you look in the show notes now, you can leave us a voicemail. So if you want to get featured on a future episode, click on that link in the show notes. You can leave an audio voicemail, and we'll then splice it into our edit.
Mark Smith: If you've got a question you want us to address on the show, something you'd like our input on, or an idea you want us to explore, we'd love to hear from you. Leave us a voicemail and it'd be great to connect. Otherwise, ciao for now. Thanks everyone. Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember, your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.

Chris Huntingford
Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.

William Dorrington
William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.

Andrew Welch
Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.

Ana Welch
Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and—most recently—Fabric, which have resulted in multi-million wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London, where she led teams driving technical transformation and navigating regulatory challenges across affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.