Exploring Trust in AI and Embracing Spanish Culture

Ana Welch
Andrew Welch
Chris Huntingford
William Dorrington

Send me a Text Message here

FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/646   

What challenges await when you pack up your life and settle in a vibrant new city like Valencia, Spain? Our own Ana shares her personal adventure of embracing the local culture and the quirks of the Spanish education system, offering a glimpse into the exhilarating yet challenging landscape of 2025. Alongside this, Mark unpacks the burgeoning opportunities within the AI landscape, emphasizing a pivotal shift towards practical applications that promise to redefine the way organizations function. As we journey through this transformative year, we explore the concept of "trustworthy AI"—a vision that extends beyond mere compliance to encompass reliability and safety, inviting you to imagine a digital future where innovation and trust walk hand in hand. 
 
Is AI truly a threat to our jobs, or could it be a catalyst for new opportunities? In this episode, we challenge the fear surrounding AI's potential to replace human roles, underscoring the irreplaceable value of critical thinking and creativity—skills only humans possess. Drawing parallels to the early days of the iPhone, we explore how AI's form and utility are still evolving, with trust in its systems being paramount. We unravel the intricacies of ensuring accuracy in AI outcomes by prioritizing relevant data and processes that correct errors. The conversation paints a hopeful picture of a future where AI models continue to improve through better data management, reminding us that while AI continues to evolve, so too will the opportunities for human growth and innovation.

• Exploring the pivotal concept of trustworthy AI 
• Importance of data quality in AI implementations 
• Evolving user experiences and the diminishing importance of UI 
• The necessity of filtering data sets for accurate insight 
• Human roles in AI: critical thinking and creativity emphasized 
• Anticipating future job roles in an AI-dominated landscape 
• Navigating upcoming AI regulations and their implications

This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.

Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff 
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge 

We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.

Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days. Get started today!

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

Chapters

00:14 - Innovation and Change in 2025

15:45 - Building Trust in AI Models

26:12 - Trustworthy AI and Future Skills

Transcript

Mark Smith: Welcome to the Ecosystem Show. We're thrilled to have you with us here. We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate. We don't expect you to agree with everything. Challenge us, share your thoughts and let's grow together. Now let's dive in. It's showtime. Welcome back, it's 2025, and it's game on for us at Cloud Lighthouse and on the Ecosystem Show. You've only got Cloud Lighthouse staff today, but of course, our friends Will Dorrington and Chris Huntingford will be back with us real soon. I feel like we're already halfway through the year. There's just been so much going on in the first two weeks. I feel like I'm absolutely smashed. My task list is a million miles long, and it is exciting. It's exciting times.

Andrew Welch: At Microsoft, they already are halfway through the year. Indeed they are, right? The FY. Totally on point.

Mark Smith: What are the big changes that are happening in your guys' lives, Ana?

Ana Welch: Happy New Year, first of all, everybody. I hope everybody has an exceptional year. We are excited about a ton of things, changes in our personal lives and in technology. The biggest thing in our personal life is, no, I am not pregnant, but we're moving to Spain.

Andrew Welch: For anyone who tunes into this show, because that's what you want to know.

Ana Welch: There's a big change, but I am not pregnant. We're moving to Spain, though. We're gonna have probably sangria every day. We're gonna go to the beach a lot. We're gonna try and tune into the Spanish culture. We're gonna learn Spanish, we're gonna put our child into a Spanish school, and let's hope we survive it.

Mark Smith: We're already having some difficulties with, you know, some things. I love it. And Valencia is the location of choice. What a beautiful city.

Ana Welch: Gorgeous city. Oh, the difficulties are the fact that, you know, everybody is so happy here and relaxed and doesn't understand why kids finishing school at 4.30 is a problem. So that's the thing that we're going to have difficulties with, yeah.

Mark Smith: I want to see you guys having the siestas. Right, you're going to have to have your siestas. That's very Spanish, right? And it's a whole thing.

Ana Welch: You're not allowed to go visit between 1 and 3 because it's siesta time.

Andrew Welch: It's so amazing, it's beautiful. We eat better, I feel like we look younger. Not yet for me, I'm still working on it, but Ana's making good progress. So we have Ana's mother, who is in town here, helping us with our daughter Alexandra. Today's activity for Ana and I was going to visit potential nurseries, or guarderías, so that we can enroll Alexandra, and we turned up to one of them today and the woman there told us that they close at 4.30. And I said, you close at what? 4.30 is when the Americans are waking up, and in my business, 4.30 Spain time is like the worst time of the day for school to be over. So we're hoping to get at least something that goes until five. Six would be ideal.

Mark Smith: Yeah, yeah, we'll see.

Andrew Welch: For everyone who's been following our lives.

Mark Smith: Nice, nice.

Ana Welch: How about you? What's changing for you, Mark?

Mark Smith: Oh mate, hard work, hard work. I don't know, I just feel like 2025 is an explosion of opportunity. You know, I feel like we're really getting down to, in the AI space particularly, a much higher degree of practicality. Right? Organizations have seen the hype, they've seen the marketing, and now we're moving into: right, let's bed this down. How do we create frameworks to operate inside our organizations? You know, we can't just have people randomly building bits and pieces. It really needs to be a key, cohesive pattern of behaviors that is going to drive success. And so it seems that on every project I'm on at the moment, this is becoming a much more prevalent thing in the thinking, particularly of executives in organizations. You know, they want to use the sound principles of the past and really apply them to maximize the innovation they can do with AI.

Andrew Welch: Yeah, the innovation they can do on or with AI. I think that 2025, for me and, I think, for a lot of people, even if they don't realize it yet, is the year of trustworthy AI. Though, really interestingly to me, I still see many, many organizations that are not taking trustworthy AI, or what they think of as responsible AI, seriously. Still a lot of development teams that are trying to deploy solutions, scenarios, use cases, workloads, whatever you want to call it, without having accounted for how to make that AI trustworthy and reliable, safe, et cetera. But, and maybe we can debate or talk this through on this episode, I think that 2025, among many things, really must be the year of trustworthy AI.

Mark Smith: Yeah, and I like that concept. Trustworthy AI is not about compliance. That's a dimension of it, right, but it's around creating AI that the people who use it can literally trust, that everything has been thought about to make this as safe, usable, reliable and accurate as possible, and an enabler, right, for the organization. It's not just about making sure that it is, you know, not hallucinating off the Richter scale, not doing unethical things. That's a component of it, or a dimension of it, but it's so much more. It's everything from your governance layer, your strategy, right on through to your AI ops model for the organization.

Ana Welch: Yeah, I think that's so true, and people are taking it seriously, like you guys said, and I would say not just executives but also, you know, professionals of any sort. I was chatting with Andrew today because I had a catch-up with one of my friends. She is head of UX at a big, global company, and she was saying: I've stopped hiring people who come to me with this huge, perfect expertise in, like, Figma. I'm a Figma god, I'm gonna, you know, create the best design for you. She's like, I don't care. We are not pixel pushers anymore.

Ana Welch: And these were her words. She's like, I want somebody who understands that we, as user experience people, need to create a reality where people get their results grounded in data without a UI. We need to be the first ones to eliminate the app and the screen. And I thought that was, you know, revolutionary for somebody who leads teams of designers. You know, to say: do you know what reality right now is? That we need to trust our data first and foremost. That's what it is.

Andrew Welch: I mean, Ana, you and I only hang out with professional and expert people who are into making bold statements, like the head of UX who wants to get rid of UI, or our friend Mark here, who famously got up at a Dynamics conference and said at the beginning of his session that D365 is dead. Microsoft folks in the audience, D365 is not dead. Mark is just a sayer of incendiary things. But, uh, yes, right, Ana and I keep a tight circle.

Mark Smith: Check out the sound bites there. Pixel pusher, right, not pencil pusher. Pixel pusher, I love that. Yeah, and there's a new thing out there, right: pixel pushers are dead.

Mark Smith: No, no, no, we're not going to add that one. But here's the thing, I just want to riff on what Ana said, because I was having a chat with a partner in Australia yesterday, and he was like: so is Dynamics going away? Is F&O going away? Is Power Platform going away? Is Power Automate going away?

Andrew Welch: And I said, well, I want to be clear. No, it is. I'm going to get a call from Microsoft when this goes out.

Mark Smith: So here's the thing I said: you've got to look at it differently. All these tools are building blocks that facilitate data movement and business rules and logic within an organization. Traditionally, the only way we've interfaced with those, or most of the way, is through a form over that data set. We update the field, we get a report out, we do something with that information. I believe in the next five years, that concept of us updating, and let's just say we take a contact record to keep it real simple, that contact record could be enriched all the time, based on an agent that operates on it, that's always searching out: do I have the most accurate data? You get a new email from that contact and they've changed their phone number. Well, it detects that change and says: oh hello, is this an update to the phone number field? And it updates it for you. You don't have to go, oh heck, I'm going to go copy and paste that. And so you think of that: all those little data touch points in any engagement in an organization are constantly enriching your data set, not because you typed it in or copied and pasted it in, but because it is detecting and picking that up and enriching it. So, all of a sudden, the concept of a menu: do we need a menu anymore in applications? The concept of navigating to something? The data exists. We've got APIs into it.

Mark Smith: Get me the answer I'm looking for. Or, hey, update this record: Tom, he's informed us he's actually Thomas. Well, I'm just going to say to my agent: can you update Tom's record to Thomas? He prefers that. I don't have to know where that field was in the system. It's potentially just going to go and update it. So I think these tools, and Power Automate particularly is one I've thought about a lot, are going to become massively AI-infused. But do we think in five years' time we're going to have consultants with the word Power Automate in their title? I'm a Power Automate architect, or creator? I don't know. I think AI will be doing so much more of it by that time.
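The enrichment loop Mark sketches here, an agent watching inbound email and patching the contact record, can be illustrated with a toy example. Everything in it, the in-memory contact store, the regex extraction, and the field names, is an assumption for illustration; a real agent would sit on a CRM API and likely use a language model rather than a regex:

```python
import re

# Toy contact store standing in for a CRM record (purely hypothetical).
contacts = {"tom@example.com": {"name": "Tom", "phone": "+44 20 7946 0000"}}

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def enrich_from_email(sender: str, body: str) -> dict:
    """Detect a changed phone number in an incoming email and patch the
    contact record, instead of waiting for someone to copy and paste it."""
    record = contacts.get(sender)
    if record is None:
        return {}
    match = PHONE_RE.search(body)
    if match and match.group().strip() != record["phone"]:
        old = record["phone"]
        record["phone"] = match.group().strip()
        return {"field": "phone", "old": old, "new": record["phone"]}
    return {}

change = enrich_from_email(
    "tom@example.com",
    "Hi team, note my new number: +44 20 7946 0958\n-- Tom",
)
print(change)  # the detected field-level change, ready for an audit trail
```

The point is not the regex; it's that every inbound touch point can propose a field-level update that the system applies (or queues for review) without a form ever being opened.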

Andrew Welch: We have a special guest on the podcast.

Ana Welch: I think that I really want to introduce you to my friend, Mark. Honestly, that's exactly what she was saying. She was not saying we won't have UI for anything. She was just saying that some of the things are just so complex to implement but so easy for AI to extract, manipulate and use in our day-to-day lives, that that efficiency can be done without the UI. That's all she was saying.

Mark Smith: So spot on. Hello, little one! I think there's this concept that we will have output devices, or surfaces. And what I mean by that: it could be a digital screen on a wall, it could be your iPad, a phone, a computer monitor, a TV screen, a projector, whatever. And we all have the concept of going and asking a question just with our voice, not through a keyboard, and saying: can you display it over there? And you can just point to wherever it was, and it would have the context and the connectivity, et cetera, and bring up the data set that you want without the chrome, right? It's just like: I need this information, I want to drill down into that concept deck, let's drill through it. And it would be a very fluid type of experience, I think, around how you interact with data if you no longer need chrome in place. And what I mean by that is the UI and UX around what we do when we do navigation. That all potentially goes away.

Ana Welch: But what comes with it, though, is that flexibility. That flexibility for you to change your way of working, or for you to get to the results quicker. And this is where the really deep thinkers are going to prevail right now. So the concept of trustworthy AI actually drills into those deep thinkers. Did I really think about what the intended purpose of my application, of my agent, of my Copilot, you know, bot is? Could I do anything else with it? Did I state that out loud? You know, those types of things are going to make sure that our information is accurate. And, of course, the grounding in good data will always be a thing.

Andrew Welch: Yeah, so, and, by the way, thank you guys for humoring the arrival of Alexandra back here.

Andrew Welch: She's on her way to bed, it's quite late for her, but yeah. So I think that we've been talking for a year at least, right, about this: the emerging but still very much to be determined form factor, or patterns of user interaction, that AI will take on. The comparison that I've made is to the iPhone. When the iPhone first came out, people spent several years just trying to cram desktop apps onto a smaller screen, and it was several years on before application designers really figured out what that form factor was best for and what the appropriate form factor was. So I think that that's part of it. But I also think, and this will become more important the more that things change, right, the more that gets moved, the more important trust becomes.

Andrew Welch: So I've been all around the world in the last year talking with big groups, doing workshops, helping them set in motion their AI, data and broader cloud technology strategy, and one of the things that I hear again and again is that people and organizations around the world are asking, in one form or another: can we trust artificial intelligence? And sometimes what we're talking about is a question of: do people trust, and can they believe, that the results that artificial intelligence produces are accurate? Can they trust that the data is of a good quality and that there's integrity to that data? Sometimes it's, you know, can they trust the model? I'm going to have to come back to this because I'm being asked to kiss a boo-boo, so soliloquy on pause. I'll leave it with you guys for just a moment here.

Mark Smith: Okay, so I want to pick up on something Andrew just said there, because trust is often inherently connected to data and to data correctness. Now, if you take that concept, and this is something that's blown my mind as a thought experiment: if you look at a medium-sized business, I think the data shows that a medium-sized organization has around 10 million artifacts. So we're talking about documents, PowerPoints, Excel spreadsheets.

Andrew Welch: I'm shocked, it's that few.

Mark Smith: Stuff, right? Yeah, let's just use 10 million as the basis of our thought experiment. Now, in that you will have a massive amount of duplication. Someone took that Excel spreadsheet and they copied it a few times. The problem is that, by human nature, we make errors. As humans, we forget the formula.

Mark Smith: We made a mistake in the formula. We didn't paste the correct numbers into the correct columns in the PowerPoint presentation. We misquoted somebody, and we're like, oh yeah, we might come back and change it. But what happens is that you amplify that across, you know, 500 or 5,000 employees. The data set that we're pointing AI to train on is full of human incremental error at a massive scale across, let's say, those 10 million artifacts. And then we point AI at it as a grounding for whatever we're building with AI, and we go: oh, it hallucinates. Perhaps it's not hallucinating as much as we think.

Mark Smith: Perhaps it's actually just repeating the error and the problem in the data that is inherently built in. So now take this idea. Rather than training our model for your organization on all our organization's data, you go: hang on, what is the workload that I want to address? And let's extract the data sets that just address this workload. Let's say we get down to 5,000 artifacts from that, right? So now we've taken the data on a journey. We've got 5,000 artifacts, and then, when you drill into that, you get rid of things like the duplicates, and you start putting procedures in place to identify error in data, and you might come down to, let's say, 500 artifacts, and that's what you should then be using with a RAG process to build out your workload for the organization.

Mark Smith: And of course, the accuracy will go through the roof. You're not trying to train your AI on the universe, you're training it on the actual core thing you want your particular AI to do, which will then have a much greater degree of accuracy. Then you've got an ongoing process of making sure that the data, the 500 artifacts, arbitrary number, is being updated and enriched at all times, to be accurate and in lockstep with where your organization is for that particular workload, use case or solution.
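Mark's funnel, from the full artifact estate down to a deduplicated, workload-relevant grounding set, can be sketched roughly like this. The corpus, the keyword-based relevance test, and the thresholds are all illustrative assumptions, not a prescribed pipeline; a production system would use embeddings or metadata for relevance:

```python
import hashlib

def dedupe(artifacts):
    """Drop byte-identical copies (the spreadsheet someone copied five times)."""
    seen, unique = set(), []
    for a in artifacts:
        digest = hashlib.sha256(a["text"].encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(a)
    return unique

def relevant_to_workload(artifacts, keywords):
    """Crude relevance filter: keep artifacts that mention the workload's terms."""
    return [a for a in artifacts
            if any(k.lower() in a["text"].lower() for k in keywords)]

# Illustrative corpus standing in for the "10 million artifacts".
corpus = [
    {"id": 1, "text": "Q3 refund policy for enterprise customers"},
    {"id": 2, "text": "Q3 refund policy for enterprise customers"},  # duplicate copy
    {"id": 3, "text": "Office party photos 2019"},
    {"id": 4, "text": "Refund processing runbook, updated January"},
]

# Universe -> deduplicated -> workload-relevant grounding set for RAG.
grounding_set = relevant_to_workload(dedupe(corpus), keywords=["refund"])
print([a["id"] for a in grounding_set])  # [1, 4]
```

The interesting work in practice is the middle stage Mark alludes to, the "procedures to identify error in data", which this sketch leaves out; but even dedupe-plus-scope is enough to shrink the grounding set dramatically before any retrieval happens.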

Ana Welch: And those things change as well. Sorry. Because you can come back to it and you're like: wait, I don't care about X anymore as much. I actually care about Y more right now. So I want my model to enrich Y more right now, and it can totally do that. And with the right monitoring and observability in place, you can provide an exact history of what has happened to your data. This can be a massive opportunity for people to not only get trustworthy results out into the world, but to also demonstrate their thought process. Because humans are not faultless. We are creatures who make a lot of mistakes, and before, those mistakes could be obscured because nobody really knew who changed what. And now they're really out in the open.

Andrew Welch: And that's okay, as long as you demonstrate that you've tried to correct your mistake. Yeah, well, and I think that, and I'm sure I've said this on a previous episode, but to reiterate: just like Power BI, and kind of traditional reporting, right, once upon a time showed organizations how bad their data was. The data had always been bad. The data had always been wrong.

Andrew Welch: You just couldn't see it, so you didn't know that it was wrong. Right, and now that phenomenon is magnified, or multiplied, by error-prone human information. Future models will be trained on the error-riddled content that previous AI has created, and I think that this is going to be a huge problem. We may, in a few years, look back on this and apply the old adage that was used about bankruptcy, right? Like, how do you go bankrupt? Very slowly at first, and then all at once. Where this problem mounts and builds over time, and then all of a sudden we realize: oh my God, we have multiplied and cascaded the effects of human error over a prolonged period.

Ana Welch: Yeah, that's so true. I remember, it was at the beginning of my career, somebody gave me a big job. To be fair, it was a big job. I was making CRM systems for a call center that had like 400 employees. All of those people were working with tools that I had made, and they needed to have data integration and stuff like that, and everything was based on an on-prem server that sometimes failed.

Ana Welch: So, you know, I wasn't clever enough, and I don't think that there was a clear procedure for retrying those messages anyway, so we would lose a lot of data. Therefore, every morning there used to be a team who would compare the number of calls that were recorded with the number of calls that actually happened, and then they would identify the ones that were not recorded in the system, and they would do them manually. Can you imagine how many calls they still missed? Let's get real, it was tens of thousands of calls every day, right? And then they would just make up the coding for the rest. So an AI model over solutions like that, which I'm pretty sure still exist today, would really call for a reality check in this organization on how important it is to have those 10 million assets really filtered down to the 500 that you can use. Yeah.
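The missing "clear procedure of retrying those messages" Ana describes is nowadays a standard pattern: retry with exponential backoff, plus a dead-letter store for messages that still fail, so nothing is silently dropped. A minimal sketch; the flaky server, the delays, and the attempt limit are illustrative assumptions:

```python
import time

dead_letters = []  # messages that exhausted their retries, for manual follow-up

def send_with_retry(message, deliver, max_attempts=4, base_delay=0.01):
    """Retry a failed delivery with exponential backoff instead of losing data."""
    for attempt in range(max_attempts):
        try:
            deliver(message)
            return True
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    dead_letters.append(message)  # never silently dropped
    return False

# A stand-in for the on-prem server that "sometimes failed":
outcomes = iter([ConnectionError, ConnectionError, None])  # fails twice, then works

def flaky_server(message):
    outcome = next(outcomes)
    if outcome is not None:
        raise outcome

ok = send_with_retry({"call_id": 101}, flaky_server)
print(ok, dead_letters)  # True [] -- delivered on the third attempt
```

With this in place, the morning reconciliation team only has to look at `dead_letters`, not guess how many calls vanished.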

Mark Smith: Yeah.

Andrew Welch: Yeah, well, and I think that, to bring us back to the trustworthy AI topic, the final thing that I was going to say before I had to go kiss a boo-boo was this. When you talk to organizations, and to users and colleagues within these organizations, about what their concerns are, where their mind is when they think about AI, it's everything from: can I trust the results that AI is producing? to: can I trust that AI will be safely used? And that category of trust ranges everywhere from: is AI going to take my job? to: is AI going to grow a personality and malevolently conquer the world by, you know, turning a society of toasters against us? I'm looking at you, Chris Huntingford. I think the angry robotic toasters was a Chris-ism. But, you know, anyway. So I think that when you boil all of this down, the idea of AI trust, of trustworthy AI, is so top of mind for people, even if they are not at a point where they can articulate it.

Andrew Welch: There's no brand for trustworthy AI, certainly.

Andrew Welch: So, you know, I've been rolling this idea for an article for my newsletter around in my head, something to the effect of: trustworthy AI is way more than red teaming, right?

Andrew Welch: Because I do think that for wonks and for, you know, folks who are deeply enmeshed in the technology, there's an instinct to say: oh, trustworthy AI is about testing, or trustworthy AI is about the pillars of responsible AI, right, so, you know, things like reliability, safety, privacy, security. But to me, trustworthy AI is about whether or not AI can be trusted at scale across an organization or across a society. So my working definition, and I'm going to read this, is that trustworthy AI is about building a digital ecosystem that is strategic, responsible, safe, reliable and scalable. So I think that in the months ahead, we need to really expand the idea of what trustworthy AI actually is, far beyond the idea that trustworthy AI is about buying down risk, or being compliant, or being in line with regulatory concerns or with legal strictures. I think that it's a very wide world that goes far, far beyond that.

Mark Smith: I find it interesting. Last year, Ana and I did an episode on trust, right, trust between us and Microsoft. I love that episode. And one of the things that I think about, because I get questions on it, is: what's the role of people in the future? What is inherently human? And I think even this concept of trust is going to become much more looked at. And not how do you display trust, but how are you trustworthy as an individual, as a person, as a contributor inside your organization? It's not about how you portray that you're trustworthy. It's about: are you trustworthy? That's kind of the first of three traits that I see, and I just find it interesting. We look at it as trustworthy AI, but it really is an extension of our humanity and the trustworthy nature of what we should be.

Mark Smith: Two other skills that I see are going to be critically important for us to develop as people moving forward. One is critical thinking, at a level we've never done before. I think we're going to need to get really good, because to look at the outputs that come from AI, and from other people, we're going to have to be much more able to apply, you know, a human gut check: what is this? Is this legit? Because in the world of AI, it's going to be easier than ever to manipulate that, and I think humans really need to develop that skill.

Mark Smith: And then the other one is creativity. I think we're going into a world where we're going to have a tool set which will allow us to paint anything we want on the canvas, and therefore it's really going to be on us to go: what is the art of what we could do here? How extensive, how expansive, what are the possibilities? Because it's really going to come down to our ability to, you know, work with AI to really create the future. There is something I wrote, almost two years ago now.

Ana Welch: Before you find that, I am totally there with you, Mark, and I feel like the amount of information you see, and how much critical thinking you can apply to it, will be really, really important. For example, I've just posted a little picture on our chat that says: privacy report, in the last seven days my browser has prevented 22 trackers from profiling me. So I'm wondering how many trackers it didn't stop from profiling me, you know? So, yeah, the moment you receive that information: how much of it is there because I've been profiled and somebody wants to influence me?

Ana Welch: And critical thinking, and all of that creativity, is really hard work. So that's why, when people are saying, oh, AI is going to take my job, it's because they see that you can, you know, use a tool to create a little website, for example, and it's not half bad, it's a good starting point. That's how people start to believe that AI is going to take their job, because we really have to do that hard work of thinking deeply.

Mark Smith: Did you know that six out of ten roles that we have today, and I'm talking about the segment that we operate within, consulting, software development, that type of thing, high-paying, high-yield roles, didn't even exist in the 1940s?

Andrew Welch: I'm surprised it's not higher.

Mark Smith: That is because there have been traditional roles, like, let's say, waitressing in a restaurant. There's the hospitality industry, there's accommodation. Or you could be waitering. Waitering, yeah. They all still exist. And so in my mind I'm actually processing: as a waiter, is that, you know, waitress? I was trying to... I think it's serving, Mark. Yes, server. That's an American way of looking at it. I like it.

Andrew Welch: We're an evolved culture. What can I say?

Mark Smith: So what I'm saying is that there's this whole area of, I think, new jobs that are going to come out. Like, you know, when I went to school I never trained for what I do now. What I do now didn't even exist. And I think that we're going to go once again into this whole era of a whole new range of jobs becoming available. And what humans are very good at is the ability to adapt and change and evolve with what's new. I don't think there should be any fear that we're going to do ourselves out of things to do, and I think that's where it comes back to creativity. You know, I can't wait to see what I'm doing in five years. I'm excited about it.

Andrew Welch: I just hope that I live long enough to get to see someone have the job of starship captain, but I think I've got a few hundred years for that.

Mark Smith: That's if you're coming from a paradigm of.

Andrew Welch: I'm coming from a paradigm of Star Trek, Mark.

Mark Smith: Yeah, yeah, but it's not if you're coming from a paradigm that within the next 30 years you're not going to resolve every medical issue that we have and therefore give us the ability to extend well beyond 100 years of life or 150, but in a very healthy state, not in a decrepit state.

Ana Welch: But that's such a reality, Mark. We just read some documentation today, I think, on trustworthy AI and why it's so important. There's so much AI solving types of cancer more quickly and figuring out solutions for various allergies and critical medical conditions, so governments are starting to see that you cannot stop AI. You need to make it trustworthy, because this is what's going to enable us to live past 100, unless we get hit by a car, of course.

Mark Smith: Yeah, the bus effect still applies, right, yeah, absolutely, or will it, or will it?

Ana Welch: Who knows how advanced medicine is going to get, can it?

Mark Smith: reminicularize ourselves. Now we are in Star Trek land.

Andrew Welch: Yeah, you know, Anna just touched on something, though, that I think is going to be an interesting thing to watch here over the next little while, and that is the responses of governments around the world and how governments around the world deal with AI in their own um, in their own territory, in their own jurisdiction. And you know, we're we're already seeing a pretty mixed set of, uh, a pretty mixed set of of of responses, right? So, to give you an example, um, I you know I don't hold me to this, it could be 33 days, but something within the same month right, the EU AI act, um, which is going to significant impose some significant impose some significant responsible AI responsibilities on companies. That's going to come online.

Mark Smith: Enforceable in August this year. Enforceable in August.

Andrew Welch: Enforceable in August. Okay, so not the same month, but the final version of it, I think, is due in about a month's time, something like that, and it becomes enforceable later this year.

Andrew Welch: At the same time, Donald Trump, who will probably be the US president by the time this episode is released, has announced that he will be appointing as his AI and crypto czar a Silicon Valley investor whose goal is to deregulate AI so that American companies can innovate faster. So we'll see exactly how this goes, and we'll also see... You're talking about.

Mark Smith: Sacks, right.

Andrew Welch: I don't recall the fellow's name.

Mark Smith: Yeah, Sacks. He was actually one of the creators of Yammer back in the day, that's one of his big ones. Oh, interesting, okay. You know, it sold to Microsoft. I mean, he's had a lot of investments and things. But you're talking about Sacks.

Andrew Welch: Yeah, yeah. David maybe rings a bell, but anyway, don't hold us to that.

Andrew Welch: I just don't recall the fellow's name. But the speculation, and my hunch here, is that we may very well see a kind of multi-lane highway for AI emerging. Take just America and Europe: in Europe AI is a lot safer but the pace of innovation is a lot slower, while in America AI is actually a lot more dangerous but the pace of innovation is a lot faster. And I think navigating that regulatory environment is going to be hard, particularly given some of the extraterritorial provisions of the EU AI Act, but also given that you've got a lot of global companies. I mean, you don't have to be big to be a global company, right? Cloud Lighthouse serves clients in 11 countries. So you've got a lot of companies in the mix here that are global and are going to be subject to a wildly different and, especially in the early days, extraordinarily difficult to navigate regulatory landscape. You think GDPR was a challenge? You ain't seen nothing yet. That's going to be really interesting to see unfold.

Mark Smith: And with that we'll end today's episode. Thank you so much for joining us. We'll be back regularly from this point forward, and we're looking forward to an exciting year. If you've got suggestions or things you'd like us to discuss on the show, make sure you reach out. If you go to microsoftinnovationpodcast.com, our new URL, you'll see a microphone in the bottom right-hand corner that allows you to send us a voicemail. If you click on it and allow your browser to use the microphone, you can record a message, and we can then play it on air in a future episode, letting you participate and perhaps ask your question yourself on a future show. With that, good luck, thank you, and enjoy the rest of your week.

Ana Welch: Thanks all, bye guys.

Mark Smith: Thanks for tuning into the Ecosystem Show. We hope you found today's discussion insightful and thought-provoking, and maybe you had a laugh or two. Remember, your feedback and challenges help us all grow, so don't hesitate to share your perspective. Stay connected with us for more innovative ideas and strategies to enhance your software estate. Until next time, keep pushing the boundaries and creating value. See you on the next episode.


Chris Huntingford Profile Photo

Chris Huntingford

Chris Huntingford is a geek and is proud to admit it! He is also a rather large, talkative South African who plays the drums, wears horrendous Hawaiian shirts, and has an affinity for engaging in as many social gatherings as humanly possible because, well… Chris wants to experience as much as possible and connect with as many different people as he can! He is, unapologetically, himself! His zest for interaction and collaboration has led to a fixation on community and an understanding that ANYTHING can be achieved by bringing people together in the right environment.

William Dorrington Profile Photo

William Dorrington

William Dorrington is the Chief Technology Officer at Kerv Digital. He has been part of the Power Platform community since the platform's release and has evangelized it ever since – through doing this he has also earned the title of Microsoft MVP.

Andrew Welch Profile Photo

Andrew Welch

Andrew Welch is a Microsoft MVP for Business Applications serving as Vice President and Director, Cloud Application Platform practice at HSO. His technical focus is on cloud technology in large global organizations and on adoption, management, governance, and scaled development with Power Platform. He’s the published author of the novel “Field Blends” and the forthcoming novel “Flickan”, co-author of the “Power Platform Adoption Framework”, and writer on topics such as “Power Platform in a Modern Data Platform Architecture”.

Ana Welch Profile Photo

Ana Welch

Partner CTO and Senior Cloud Architect with Microsoft, Ana Demeny guides partners in creating their digital and app innovation, data, AI, and automation practices. In this role, she has built technical capabilities around Azure, Power Platform, Dynamics 365, and, most recently, Fabric, which have resulted in multi-million-dollar wins for partners in new practice areas. She applies this experience as a frequent speaker at technical conferences across Europe and the United States and as a collaborator with other cloud technology leaders on market-making topics such as enterprise architecture for cloud ecosystems, strategies to integrate business applications and the Azure data platform, and future-ready AI strategies. Most recently, she launched the “Ecosystems” podcast alongside Will Dorrington (CTO @ Kerv Digital), Andrew Welch (CTO @ HSO), Chris Huntingford (Low Code Lead @ ANS), and Mark Smith (Cloud Strategist @ IBM). Before joining Microsoft, she served as the Engineering Lead for strategic programs at Vanquis Bank in London, where she led teams driving technical transformation and navigating regulatory challenges across the affordability, loans, and open banking domains. Her prior experience includes service as a senior technical consultant and engineer at Hitachi, FelineSoft, and Ipsos, among others.