Accelerate your career with the 90 Day Mentoring Challenge → Learn More
Exploring Trust in AI and Embracing Spanish Culture
Exploring Trust in AI and Embracing Spanish Culture
Send me a Text Message here FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/646 What challenges await when you pack up your life…
Choose your favorite podcast player

Exploring Trust in AI and Embracing Spanish Culture

Exploring Trust in AI and Embracing Spanish Culture

Send me a Text Message here

FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/646   

What challenges await when you pack up your life and settle in a vibrant new city like Valencia, Spain? Our own Ana shares her personal adventure of embracing the local culture and the quirks of the Spanish education system, offering a glimpse into the exhilarating yet challenging landscape of 2025. Alongside this, Mark unpacks the burgeoning opportunities within the AI landscape, emphasizing a pivotal shift towards practical applications that promise to redefine the way organizations function. As we journey through this transformative year, we explore the concept of "trustworthy AI"—a vision that extends beyond mere compliance to encompass reliability and safety, inviting you to imagine a digital future where innovation and trust walk hand in hand. 
 
Is AI truly a threat to our jobs, or could it be a catalyst for new opportunities? In this episode, we challenge the fear surrounding AI's potential to replace human roles, underscoring the irreplaceable value of critical thinking and creativity—skills only humans possess. Drawing parallels to the early days of the iPhone, we explore how AI's form and utility are still evolving, with trust in its systems being paramount. We unravel the intricacies of ensuring accuracy in AI outcomes by prioritizing relevant data and processes that correct errors. The conversation paints a hopeful picture of a future where AI models continue to improve through better data management, reminding us that while AI continues to evolve, so too will the opportunities for human growth and innovation.

• Exploring the pivotal concept of trustworthy AI 
• Importance of data quality in AI implementations 
• Evolving user experiences and the reducing importance of UI 
• The necessity of filtering data sets for accurate insight 
• Human roles in AI: critical thinking and creativity emphasized 
• Anticipating future job roles in an AI-dominated landscape 
• Navigating upcoming AI regulations and their implications

This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.

Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff 
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge 

We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.

Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

Chapters

00:14 - Innovation and Change in 2025

15:45 - Building Trust in AI Models

26:12 - Trustworthy AI and Future Skills

Transcript
WEBVTT

00:00:01.080 --> 00:00:02.527
Welcome to the Ecosystem Show.

00:00:02.527 --> 00:00:05.161
We're thrilled to have you with us here.

00:00:05.161 --> 00:00:11.150
We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate.

00:00:11.150 --> 00:00:13.507
We don't expect you to agree with everything.

00:00:13.507 --> 00:00:17.471
Challenge us, share your thoughts and let's grow together.

00:00:17.471 --> 00:00:20.548
Now let's dive in it's showtime.

00:00:20.548 --> 00:00:27.765
Welcome back, it's 2025, and it's game on for us at Cloud Lighthouse and on the ecosystem show.

00:00:27.765 --> 00:00:36.853
You've only got Cloud Lighthouse staff today, but of course, our friends Will Dorrington and Chris Huntingford will be back with us real soon.

00:00:36.853 --> 00:00:39.387
I feel like we're already halfway through the year.

00:00:39.387 --> 00:00:43.832
There's just been so much going on in the first two weeks.

00:00:43.832 --> 00:00:45.786
I feel like I'm absolutely smashed.

00:00:45.786 --> 00:00:50.487
I feel like my task list is a million miles long and it is exciting.

00:00:50.487 --> 00:00:51.625
It's exciting times.

00:00:52.299 --> 00:00:59.067
At Microsoft, they already are halfway through the year, I mean, you know Indeed they are right, the FY totally on point.

00:00:59.679 --> 00:01:02.750
What are the big changes that are happening in your guys' lives, Ana?

00:01:03.540 --> 00:01:04.929
Happy New Year first of all.

00:01:04.929 --> 00:01:05.834
Everybody.

00:01:05.834 --> 00:01:06.859
Happy New Year first of all, everybody.

00:01:06.859 --> 00:01:07.921
Happy New Year.

00:01:07.921 --> 00:01:11.888
I hope everybody has an exceptional year.

00:01:11.888 --> 00:01:18.307
We are excited about a ton of things changes in our personal lives and in technology.

00:01:18.307 --> 00:01:24.528
Biggest things in our personal life is no, I am not pregnant, but we're moving to Spain.

00:01:25.240 --> 00:01:34.144
personal life is no, I am not pregnant, but we're moving to spain for anyone who tunes into this show, because that's what you want to know there's a big.

00:01:34.284 --> 00:01:36.769
There's a big change, but I am not pregnant.

00:01:36.769 --> 00:01:41.745
We're moving to spain, though we're gonna have probably sangria every day.

00:01:41.745 --> 00:01:43.968
We're gonna go to the beach a lot.

00:01:43.968 --> 00:01:48.983
We're gonna try and tune into the spanish culture.

00:01:48.983 --> 00:01:55.403
We're gonna learn spanish, we're gonna put our child into a spanish school and let's hope we we survive it.

00:01:55.444 --> 00:02:05.573
We're already having some difficulties with you know some things I love it, and and valencia is the location of choice.

00:02:05.573 --> 00:02:07.531
What a beautiful city.

00:02:08.135 --> 00:02:08.780
Gorgeous city.

00:02:08.780 --> 00:02:22.575
Oh, the difficulties are the fact that you know everybody is so happy here and relaxed and understand why kids finish school at 4.30.

00:02:22.575 --> 00:02:28.370
So that's the thing that we're going to have difficulties with, yeah.

00:02:30.092 --> 00:02:32.045
I want to see you guys having the siestas.

00:02:32.045 --> 00:02:33.995
Right, you're going to have to have your siestas.

00:02:33.995 --> 00:02:39.323
That's very Spanish, right, right and it's a whole thing.

00:02:39.364 --> 00:02:44.046
You're not allowed to go visit between 1 and 3 because it's siesta time.

00:02:44.207 --> 00:02:56.344
It's so amazing, it's beautiful, we like, we eat better, I feel like we look younger not yet for me, I'm still working on it, but but but anna's making anna's making good progress.

00:02:56.344 --> 00:03:04.587
Yeah, we went today to so we have, we have anna's mother, um, who is in town here, uh, helping us.

00:03:04.587 --> 00:03:07.974
Our daughter Alexandra, and Anna and I have been.

00:03:07.974 --> 00:03:23.663
Today's activity was going to visit potential nurseries or garderas so that we can enroll Alexandra, and we turned up to one of them today and she told us the woman there said that they close at 4.30.

00:03:23.663 --> 00:03:24.645
And I you close it.

00:03:24.645 --> 00:03:38.983
What huh, like 4 30 is when the americans are waking up and in my business, 4 30, 4 30, 4 30 spain time is like the worst time of the day for school to to be over.

00:03:38.983 --> 00:03:45.831
So we're, we're hoping to uh, get at least something that goes until five Six would be ideal.

00:03:47.033 --> 00:03:49.075
Yeah, yeah, we'll see.

00:03:49.340 --> 00:03:51.185
For everyone who's been following our lives.

00:03:53.090 --> 00:03:54.072
Nice, nice.

00:03:55.481 --> 00:03:56.145
How about you?

00:03:56.145 --> 00:03:57.885
What's changing for you, Mark?

00:03:58.800 --> 00:04:01.025
Oh mate, Hard work, hard work.

00:04:01.025 --> 00:04:03.021
I don't know, it's just like.

00:04:03.021 --> 00:04:07.980
I just feel like 2025 is an explosion of opportunity there's.

00:04:07.980 --> 00:04:16.951
You know, I feel like we're really getting down to, in the AI space particularly, a much higher degree of practicality.

00:04:16.951 --> 00:04:18.545
Right Organizations.

00:04:18.545 --> 00:04:24.310
They've seen the hype, they've seen the marketing, and now we're moving into right let's bed this down.

00:04:24.310 --> 00:04:28.906
Marketing, and now we're moving into right the let's bed this down.

00:04:28.906 --> 00:04:30.851
How do we create, uh, frameworks to operate inside our organizations?

00:04:30.851 --> 00:04:34.062
You know we can't just have people just randomly building bits and pieces.

00:04:34.062 --> 00:04:51.269
It really needs to be a key, cohesive pattern of behaviors that is going to drive success, and so it seems that every project I'm on at the moment, this is becoming a much more prevalent thing in the thinking, particularly of executives in organizations.

00:04:51.269 --> 00:05:03.372
You know they want to use the sound principles of the past and really apply them to maximize the innovation they can do with AI.

00:05:05.839 --> 00:05:07.129
Yeah, the innovation they can do on or with AI.

00:05:07.129 --> 00:05:33.855
Yeah, yeah, I think that 2025 for me and I think, for a lot of people, even if they don't realize it yet, is the year of trustworthy AI, though, really interestingly to me, I still see many, many, many organizations that are not taking trustworthy AI, or what they, I think, think of as responsible AI, seriously.

00:05:33.855 --> 00:05:48.963
Still a lot of development teams that are trying to deploy solutions, scenarios, use cases, workloads, whatever you want to call it, without having accounted for how to make that AI trustworthy and reliable, safe, etc.

00:05:48.963 --> 00:06:01.994
But I think and maybe we can debate or can talk this through on this episode but I think that 2025, among many things really must be the year of trustworthy AI.

00:06:02.500 --> 00:06:05.668
Yeah, and I like that concept, that concept, trustworthy.

00:06:05.668 --> 00:06:32.687
Ai is not about compliance, so, as an, it's a, it's a dimension of it, right, but it's around um creating ai that, literally, the people that use it can trust that everything has been thought about to make this safe, usable, reliable, accurate as possible and an enabler right for the organization.

00:06:32.687 --> 00:06:44.329
It's not about just making sure that it is uh, you know, not hallucinating off the richter scale, not not doing unethical things.

00:06:44.329 --> 00:06:49.016
That's a component of it or a dimension of it, but it's so much more.

00:06:49.016 --> 00:06:58.533
It's everything from your governance layer, your strategy, right on through to your AI ops model for the organization.

00:06:59.903 --> 00:07:12.836
Yeah, I think that's so true and people are taking it seriously, like you guys said, and I would say, not just executives, but also, you know, professionals of any sort.

00:07:12.836 --> 00:07:18.492
I was chatting with Andrew today because I had a catch up with one of my friends.

00:07:18.492 --> 00:07:40.165
She is head of UX at a big company, a global company X as a big company, a global company, and she was saying I've stopped hiring people who come to me with this huge, perfect expertise in, like Figma, I'm a Figma god.

00:07:40.165 --> 00:07:44.130
I'm gonna, you know, create the best design for you.

00:07:44.130 --> 00:07:49.904
She's like I don't care, like we are not pixel pushers anymore.

00:07:51.187 --> 00:07:53.312
Uh, and these were her words.

00:07:53.312 --> 00:08:10.435
She's like I want somebody who understands that we, as user experience people, need to create a reality where people get their results grounded in data without a UI.

00:08:10.435 --> 00:08:24.069
We need to be the first ones to eliminate the app and the screen, and I thought that was, you know, that's revolutionary for somebody who leads teams of designers.

00:08:24.069 --> 00:08:29.103
You know to say do you know what reality right now is?

00:08:29.103 --> 00:08:32.311
That we need to trust our data first and foremost?

00:08:32.311 --> 00:08:33.734
That's what it is.

00:08:34.400 --> 00:08:59.091
I mean, anna, we, we, you and I only hang out with you, and I only hang out with professional and expert people who are into making bold statements, like the head of UX, who wants to get rid of UI, or our friend Mark here, who famously got up at a Dynamics conference and said at the beginning of his session that D365 is dead.

00:08:59.091 --> 00:09:04.171
Microsoft folks in the audience, d365 is not dead.

00:09:04.171 --> 00:09:07.389
Mark is just a sayer of incendiary things.

00:09:07.389 --> 00:09:12.870
But uh, yes, right, anna and I have a tight check.

00:09:12.951 --> 00:09:14.354
Check out the sound bites there.

00:09:14.354 --> 00:09:18.287
Pixel pusher right, not pencil pusher.

00:09:18.287 --> 00:09:20.149
Pixel pusher I love that.

00:09:20.149 --> 00:09:25.548
Yeah, and there's a new there's a new thing out there right pixel pushes are dead.

00:09:25.948 --> 00:09:47.788
Um no no, no, no, we're not going to add that one but here's the thing I I just want to riff on on what anna said then, because I was I was having a chat with a partner in australia yesterday and, um, he was like so is dynamics going away, is the fno going away, is powerics going away, is F&O going away, is Power Platform going away?

00:09:47.788 --> 00:09:49.385
Is Power Automate going away?

00:09:49.426 --> 00:09:51.485
And I said, well, I want to be clear.

00:09:51.485 --> 00:09:52.248
No, it is.

00:09:52.248 --> 00:09:55.929
I'm going to get a call from Microsoft when this goes.

00:09:56.741 --> 00:10:00.443
So here's the thing I said you've got to look at it differently.

00:10:00.443 --> 00:10:12.269
I said all these tools are building blocks that facilitate data movement and business rules and logic within an organization.

00:10:12.269 --> 00:10:20.591
Traditionally, the only way we've interfaced with those, or most of the way, is through a form over that data set.

00:10:20.591 --> 00:10:25.769
We update the field, we get a report out, we do something with that information.

00:10:25.769 --> 00:10:47.102
I said I believe in the next five years, that concept of us updating and let's just say we take a contact record to keep it real simple, that contact record could be enriched all the time, based on an agent that operates on it, that's always searching out, is do I have the most accurate data?

00:10:47.102 --> 00:10:51.344
You get a new email from that contact and they've changed their phone number.

00:10:51.344 --> 00:10:56.096
Well, it detects that change and says oh hello, is this an update to the phone number field?

00:10:56.096 --> 00:10:57.278
And it it updates it for you.

00:10:57.278 --> 00:11:00.519
You don't have to go oh heck, I'm going to go copy and paste that.

00:11:00.519 --> 00:11:02.000
And so you think of that.

00:11:02.000 --> 00:11:17.613
All those little data touch points in any engagement in an organization is constantly enriching your data set, not because you typed it in or you copied and pasted it in, but because it is picking and detecting that up and enriching it.

00:11:17.613 --> 00:11:25.330
So therefore, all of a sudden, the concept of a menu do we need a menu anymore in applications?

00:11:25.330 --> 00:11:29.104
And because, all of a sudden, the concept of navigating to something.

00:11:29.104 --> 00:11:30.506
The data exists.

00:11:30.506 --> 00:11:32.260
We've got APIs into it.

00:11:33.275 --> 00:11:34.923
Get me the answer I'm looking for.

00:11:34.923 --> 00:11:37.240
Or, hey, update this record, tom.

00:11:37.240 --> 00:11:39.043
He's informed us.

00:11:39.043 --> 00:11:41.937
He's actually Thomas.

00:11:41.937 --> 00:11:45.649
Well, I'm just going to say hey to my agent.

00:11:45.649 --> 00:11:47.817
Can you update Thomas' record to Thomas?

00:11:47.817 --> 00:11:48.721
He prefers that.

00:11:48.721 --> 00:11:51.932
I don't have to know where that field was in the system.

00:11:51.932 --> 00:11:53.476
It's potentially just going to go and update it.

00:11:53.476 --> 00:11:57.947
So I think these tools like Power Automate particularly is one I've thought about a lot.

00:11:57.947 --> 00:12:02.197
I think it's going to become massively AI infused.

00:12:02.197 --> 00:12:08.957
But do we think in five years time we're going to have consultants that in their title is the word Power Automate?

00:12:08.957 --> 00:12:14.048
I'm a Power Automate architect or creator, I don't know.

00:12:14.048 --> 00:12:18.225
I think AI will be doing so much more of it at that time.

00:12:18.815 --> 00:12:20.282
We have a special guest on the podcast.

00:12:21.456 --> 00:12:23.981
I think that I really want to introduce you to my friend Mark.

00:12:23.981 --> 00:12:26.875
Honestly, that's exactly what she was saying.

00:12:26.875 --> 00:12:29.619
She was not saying we won't have UI for any.

00:12:29.619 --> 00:12:43.302
She was just saying that some of the things are just so complex to implement but so easy for ai to extract, manipulate and use in our day-to-day lives.

00:12:43.302 --> 00:12:50.115
That is just um, that that efficiency can be done without the ui.

00:12:50.115 --> 00:12:51.599
That that's all she was saying.

00:12:51.639 --> 00:13:03.042
So spot on, hello little I think there's this concept, that that is, that we will have output devices or surfaces and what I mean by that.

00:13:03.042 --> 00:13:13.878
It could be a, a digital screen on a wall, it could be your ipad or whatever that is, it could be a phone device, it could be a computer monitor, tv screen, a projector, whatever.

00:13:13.878 --> 00:13:22.100
And we all have the concept of going asking a question just with our voice, not through keyboard, and say can you display it over there?

00:13:22.100 --> 00:13:31.263
And you can just point to wherever it was and it would have the context and the connectivity, et cetera, and bring up the data set that you want without the chroma right.

00:13:31.263 --> 00:13:34.124
It's just like I need this information.

00:13:34.124 --> 00:13:52.958
I want to drill down into that concept deck and let's drill through it, and it would be a very fluid type of experience, I think, around how you interact with data if you no longer need a chroma in pace, and what I mean by that is the UI, ux around what we do when we do navigation.

00:13:52.958 --> 00:13:55.206
That all goes away potentially.

00:13:56.155 --> 00:13:59.546
But what comes with it, though, is that flexibility.

00:13:59.546 --> 00:14:13.519
That flexibility for you to change your way of working or for you to get to the results quicker, and this is where the really deep thinkers are going to prevail, you know right now.

00:14:13.519 --> 00:14:21.078
So the concept of trustworthy AI actually drills into those deep thinkers.

00:14:21.078 --> 00:14:31.941
Did I really think about what the intended purpose of my application, of my agent, of my co-pilot, you know bot is?

00:14:31.941 --> 00:14:35.005
Could I do anything else with it?

00:14:35.005 --> 00:14:37.369
Did I state that out loud?

00:14:37.369 --> 00:14:46.120
You know what those types of things are going to make sure that our information is accurate and, of course, the grounding in good data.

00:14:46.120 --> 00:14:48.321
It will always be a thing.

00:14:49.243 --> 00:14:58.365
Yeah, so, and, by the way, thank you guys for humoring the arrival of Alexandra back here.

00:15:00.596 --> 00:15:05.618
She's on her way to bed it's quite late for her, but yeah.

00:15:05.618 --> 00:15:27.548
So I think that we've been talking for a year at least right about this the emerging but still very much to be determined form factor or patterns of user interaction that AI will take on Once we get past our you know, the comparison that I've made is about the iPhone.

00:15:27.548 --> 00:15:27.735
Right?

00:15:27.735 --> 00:15:47.063
When the iPhone first came out, people spent several years just trying to cram desktop apps onto a smaller screen, and it was several years on before application designers were they really figured out what that form factor was best for and what the appropriate form factor was right.

00:15:47.063 --> 00:15:49.763
So I think that that's part of it.

00:15:49.763 --> 00:16:03.634
But I also think that and this will become more important the more that things change right, the more that gets moved, the more important trust becomes right.

00:16:05.576 --> 00:16:31.067
So I've been all around the world in the last year talking with big groups, doing workshops, helping them set in motion their AI, data and broader cloud technology strategy, and one of the things that I hear again and again right is that people and organizations around the world are asking, in one form or another can we trust artificial intelligence?

00:16:31.067 --> 00:16:31.754
Right?

00:16:31.754 --> 00:16:36.682
And sometimes what we're talking about is, you know, is a question of.

00:16:36.682 --> 00:16:44.936
Can we do people trust and can they believe that the results that artificial intelligence produces are accurate, right?

00:16:44.936 --> 00:17:00.301
Can they trust that the data is of a good quality and that there's integrity to that data, to that data Sometimes it's, you know can they trust that the model?

00:17:00.301 --> 00:17:06.479
I'm going to have to come back to this because I'm being asked to kiss a boo-boo, so soliloquy on pause, I'll leave it with you guys for just a moment here.

00:17:06.839 --> 00:17:07.422
Okay, so.

00:17:07.422 --> 00:17:17.328
So I want to pick up on something Andrew just said there, because trust is often inherently connected to data and to the data correctness.

00:17:17.328 --> 00:17:33.805
Now, if you take that concept and this is something I've been it's blown my mind as a thought experiment If you look at a medium-sized business, I think the data shows that a medium-sized organization has around 10 million artifacts.

00:17:33.805 --> 00:17:35.538
So we're talking about documents.

00:17:35.538 --> 00:17:35.779
Powerpoints.

00:17:35.779 --> 00:17:36.041
I'm shocked.

00:17:36.041 --> 00:17:36.644
It's that few artifacts, right?

00:17:36.644 --> 00:17:38.193
So we're talking about documents, PowerPoints, Excel spreadsheets.

00:17:38.394 --> 00:17:39.679
I'm shocked, it's that few.

00:17:40.099 --> 00:17:40.982
Stuff right?

00:17:40.982 --> 00:17:50.567
Yeah, yeah, let's just use 10 million as a full part of our thought experiment Now, in that you will have a massive amount of duplication.

00:17:50.567 --> 00:17:53.443
Someone took that Excel spreadsheet and they copied it a few times.

00:17:53.443 --> 00:17:57.537
The problem is is that by human nature, we make errors.

00:17:57.537 --> 00:18:00.663
As humans, we forget the formula.

00:18:00.682 --> 00:18:02.145
We made a mistake in the formula.

00:18:02.145 --> 00:18:08.866
We we didn't paste in the correct numbers into the correct columns in the powerpoint presentation.

00:18:08.866 --> 00:18:15.487
We misquoted somebody in the um in and we're like, oh yeah, we might come back and change it.

00:18:15.487 --> 00:18:21.884
But what happens is that you amplify that across you know 500 or 5 000 employees.

00:18:21.884 --> 00:18:42.922
The data set that we're pointing ai to train on is full of human incremental error at a massive scale across, let's say, those 10 million artifacts and then we point ai at it as a grounding for whatever we're building with AI and we go, oh, it hallucinates.

00:18:42.922 --> 00:18:46.164
Perhaps it's not hallucinating as much as we think.

00:18:46.654 --> 00:18:53.501
Perhaps it's actually just repeating the error and the problem in the data that is inherently built in.

00:18:53.501 --> 00:19:06.404
So now take this idea we take that 10 million and rather than training our model for your organization on all our organization data, you go hang on, what is the workload that I want to address?

00:19:06.404 --> 00:19:11.375
And let's extract the data sets that just address this workload.

00:19:11.375 --> 00:19:16.565
Let's say we get down to 5,000 artifacts from that, from that right.

00:19:16.565 --> 00:19:40.779
So now we've taken the data on a journey, we've got 5 000 artifacts and then when you drill into that, you get rid of things like the duplicates and you start looking at putting procedures in place to identify error in data and you might come down to, let's say, 500 artifacts and that's what you should then be using with a rag process to build out your workload for the organization.

00:19:41.040 --> 00:19:43.586
And of course, the accuracy will go through the booth.

00:19:43.586 --> 00:19:54.221
You're not trying to train your ai on the universe, you're training it on the actual core thing you want your particular ai to do, which will then have a much greater degree of accuracy.

00:19:54.221 --> 00:20:10.883
Then you've got an ongoing process of making sure that data that the 500 artifacts arbitrary number is being updated and enriched at all times to be accurate and lockstep with where your organization is for that particular workload.

00:20:10.883 --> 00:20:12.066
Use case solution.

00:20:12.815 --> 00:20:14.342
And those things change as well.

00:20:14.342 --> 00:20:22.824
Sorry, because you can come back to it and you're like wait, I don't care about X anymore as much.

00:20:22.824 --> 00:20:25.915
I actually care about Y more right now.

00:20:25.915 --> 00:21:02.663
So I want my model to enrich Y more right now, and it can totally do that and with the right monitoring and observability in place like you can provide an exact history of what has happened to your data Like this can be a massive opportunity for people to not only get trustworthy results out into the world, but to also demonstrate their thought process, because humans are not faultless.

00:21:02.663 --> 00:21:11.616
We are creatures who make a lot of mistakes and before those mistakes could be obscured because nobody really knew who changed what.

00:21:11.616 --> 00:21:14.318
And now they're really like out in the open.

00:21:14.380 --> 00:21:42.064
and that's okay, as long as you demonstrate that you've tried to correct your mistake, you know yeah, well, and and I think, I think that so, just like and I'm sure I've said this on a previous episode, but to to reiterate um, just like power bi, once upon a time um showed and and Power BI kind of traditional reporting right showed organizations how bad their data was.

00:21:42.064 --> 00:21:42.806
Right.

00:21:42.806 --> 00:21:44.520
The data had always been bad.

00:21:44.520 --> 00:21:46.141
The data had always been wrong.

00:21:46.515 --> 00:21:49.609
You just couldn't see it, so you didn't know that it was wrong.

00:21:49.609 --> 00:22:24.067
Right, and now that phenomenon is to say that it's magnified, or you know, to say that it's magnified or to say that it's multiplied byprone human information.

00:22:24.067 --> 00:22:35.011
Future models will be trained on the error-riddled content that previous AI has created, and I think that this is a.

00:22:35.011 --> 00:22:38.554
You know, this is going to be a huge problem.

00:22:38.554 --> 00:22:49.946
That may we may, in a few years, look back on this, you know, and apply the old adage that it was used in bankruptcy right, Like, how do you go bankrupt very slowly at first and then all at once?

00:22:49.946 --> 00:22:54.162
Right, where this problem mounts and builds over time?

00:22:54.162 --> 00:23:02.984
Right, and then all of a sudden we realize, oh my god, we have multiplied and cascaded the effects of human error over a prolonged period.

00:23:03.886 --> 00:23:05.756
Yeah, that's so true.

00:23:05.756 --> 00:23:19.088
Like I remember, I was working at it was at the at the beginning of my career, um, and somebody gave me a big job to.

00:23:19.088 --> 00:23:20.430
To be fair, it was a big job.

00:23:20.430 --> 00:23:28.125
I was making, uh, crm systems for a call center who had like 400 employees.

00:23:28.125 --> 00:23:43.005
All of those people were working with, with tools that I have made and they needed to have like data integration and and and stuff like that, and everything was based on like an on-prem server that sometimes failed.

00:23:43.185 --> 00:23:54.402
So, um, you know, I wasn't clever enough and I don't think that there was a clear procedure of retrying those messages anyway, so we would lose a lot of data.

00:23:54.402 --> 00:24:16.201
Therefore, every morning, there used to be a team who would count the number of calls that were recorded, with the number of calls that actually happened, and then they would identify the ones that were not, you know, recorded in the system and they would do them manually.

00:24:16.201 --> 00:24:20.125
Can you imagine how many calls they still missed?

00:24:20.125 --> 00:24:27.186
Like, let's get real, it was tens of thousands of calls every day, right?

00:24:27.186 --> 00:24:30.664
And then they would just make up the coding for the rest.

00:24:30.664 --> 00:24:55.192
So an AI module over solutions like that that I'm pretty sure still exists today would really cause for a reality check in this organization on how important it is to have those 10 million assets really filtered down to your 500 that you can use.

00:24:55.192 --> 00:24:55.333
Yeah.

00:24:56.634 --> 00:24:56.874
Yeah.

00:24:58.536 --> 00:25:31.959
Yeah Well, and I think that, to bring us back to the trustworthy AI topic, and the final thing that I was going to say before I had to go kiss a boo-boo, was that I think that you also get, when you have this conversation and when you talk, when you talk to organizations and to users and to colleagues within these organizations about this, about what their concerns are, where their mind is when they think about AI.

00:25:31.959 --> 00:25:37.058
One of the you know it's it's everything from can I trust the results that AI is producing to.

00:25:37.058 --> 00:25:42.487
You know, can I trust that AI will be safely used?

00:25:42.487 --> 00:25:42.787
Right?

00:25:42.787 --> 00:25:50.838
And that category of trust ranges from everything from is AI going to take my job to.

00:25:50.838 --> 00:26:04.509
Is AI going to grow a personality and malevolently conquer the world by, you know, turning a society of toasters against us, right?

00:26:04.509 --> 00:26:05.855
I'm looking at you, chris Huntingford.

00:26:05.855 --> 00:26:12.298
I think that the angry robotic toasters was a criticism, but you know anyway.

00:26:12.298 --> 00:26:25.057
So I think that when you boil all of this down, the idea of AI trust, of trustworthy AI, is so top of mind for people, even if they are not to a point where they can articulate it right.

00:26:25.077 --> 00:26:28.586
There's no brand for trustworthy AI, certainly.

00:26:29.229 --> 00:26:41.383
So when you know, I've been rolling this idea for an article around in my, for my newsletter, around in my head, something to the effect of trustworthy AI is way more than red teaming, right?

00:26:41.682 --> 00:26:59.984
Because I do think that for wonks and for, you know, folks who are deeply enmeshed in the technology, there's an instinct to say, oh, trustworthy AI is about testing, or trustworthy AI is about the pillars of responsible AI, right, so you know things like reliability, safety, privacy, security.

00:26:59.984 --> 00:27:11.965
But to me, trustworthy AI is about whether or not AI can be trusted at scale across an organization or across a society.

00:27:11.965 --> 00:27:23.326
So my working definition and I'm going to read this is that trustworthy AI is about building a digital ecosystem that is strategic, responsible, safe, reliable and scalable.

00:27:23.326 --> 00:27:46.441
So I think that in the months ahead, we need to really expand the idea of what trustworthy AI actually is, far, far beyond the idea that trustworthy AI is about buying down risk or being compliant or being in line with regulatory concerns or with with with legal strictures.

00:27:46.441 --> 00:27:51.883
I think that it's a very wide world that goes far, far beyond that I find it interesting.

00:27:51.923 --> 00:28:13.915
Last year, anna and I did an episode on trust, right, trust between I love that episode and microsoft, you know, and and yeah, and one of the things that I I think about, because I get questions on, is what's the role of people in in the future, right, what is inherently human?

00:28:13.915 --> 00:28:21.722
And I think even this concept of trust is going to become a much more looked at.

00:28:21.722 --> 00:28:31.152
And not how do you display trust, but how are you trustworthy as an individual, as a person, as a contributor inside your organization?

00:28:31.152 --> 00:28:36.000
It's not about how you portray that you're trustworthy, it's about are you trustworthy?

00:28:36.000 --> 00:28:42.127
And I think that you know there's the kind of three traits that I see and I just find it interesting.

00:28:42.127 --> 00:28:49.755
You know there's the kind of three traits that I see and I just find it interesting, and we look at it as trustworthy AI, but it really is an extension of our humanity and the trustworthy nature of what we should be.

00:28:50.575 --> 00:28:57.182
Two other skills that I see is going to be critically important for us moving forward to develop as people.

00:28:57.182 --> 00:29:04.123
One is critical thinking, like at a level we've never done before.

00:29:04.123 --> 00:29:10.980
I think we're going to need to get really good because for us to look at the outputs that come from AI and from other people.

00:29:10.980 --> 00:29:18.603
We're going to have to be much more able to really you know whether it's a human gut check what is it?

00:29:18.603 --> 00:29:21.722
Is this legit, like?

00:29:21.722 --> 00:29:30.454
I think we need to, because in the world of ai, it's going to be easier than ever to manipulate that and I think that humans that really need to develop it.

00:29:30.516 --> 00:29:32.604
And then the other one is creativity.

00:29:32.604 --> 00:29:43.750
I think we're going into a world where we're going to have a tool set which will allow us to paint anything we want on the canvas and therefore it's going to really be on us to go.

00:29:43.750 --> 00:29:47.163
What is the art of what we could do here?

00:29:47.163 --> 00:29:51.434
How extensive, expansive, what are the possibilities?

00:29:51.434 --> 00:30:04.406
Because it's really going to come down to our ability to, you know, work with ai to really create the future there there is something I, something I wrote almost two years ago now.

00:30:04.487 --> 00:30:26.558
If I can find this, yeah, before you, you, you find that I, I I totally am there with you, mark, and I feel like there's just like the amount of information you see and how much critical thinking you can apply to it will be really, really important.

00:30:26.558 --> 00:30:35.959
Like, for example, I can tell you I've just posted this little picture on our chat that says privacy report.

00:30:35.959 --> 00:30:49.871
In the last seven days, my browser has prevented 22 trackers from profiling me, so I'm wondering how many trackers it didn't stop from profiling me, you know?

00:30:49.871 --> 00:30:57.076
So, yeah, the moment you receive that information, how much is it?

00:30:57.076 --> 00:31:00.299
Because I've been profiled and somebody wants to influence me.

00:31:00.339 --> 00:31:07.847
I've been profiled and somebody wants to influence me, and in critical thinking and all of that creativity is really hard work.

00:31:07.847 --> 00:31:25.780
So that's why, when people I do feel that when people are saying, oh, ai is going to take my job, it's because they see, you can, you know, you can use a tool to create a little website, for example, and it's not half bad, it's a good starting point.

00:31:25.780 --> 00:31:36.827
So that's how people start to believe that AI is going to take their job, because we really have to do that hard work of thinking deeply.

00:31:37.576 --> 00:31:55.288
Did you know that six out of ten roles that are and I'm talking about this is a segment that we operate within and and consulting software development that that type of thing six out of 10 roles didn't even exist in the 1940s, that we have today that are high paying, high yield roles?

00:31:55.976 --> 00:31:56.757
I think we're going to.

00:31:56.757 --> 00:32:02.625
I'm surprised it's not higher yeah, well, why it's?

00:32:02.684 --> 00:32:10.845
that is because there's been traditional roles like, let's say, um, you know, wait, waitressing in a restaurant.

00:32:10.845 --> 00:32:22.727
There's, there's the hospitality industries, there's accommodation, or you could be waitering waitering, yeah, uh, all still exist and so in my mind I'm actually processing as a waiter.

00:32:22.727 --> 00:32:33.619
Is that you know waitress, like I was like trying to I think it's serving mark yes, serve stuff in america that's a american way of looking at it.

00:32:33.619 --> 00:32:34.804
I like it we're.

00:32:34.865 --> 00:32:35.807
We're an evolved culture.

00:32:35.807 --> 00:32:36.451
What can I say?

00:32:36.751 --> 00:32:37.575
so what I'm saying?

00:32:37.575 --> 00:32:44.106
What I'm saying is that there's this whole area of, I think, new jobs.

00:32:44.106 --> 00:32:53.982
They're going to come out and that, like you know, when I went to school I never trained for what I do now, right, didn't exist, even what I do now.

00:32:53.982 --> 00:33:02.849
And I think that we're going to go once again to this whole era of a whole new range of jobs becoming available.

00:33:02.849 --> 00:33:23.846
And as long as we which humans are very good at is the ability to adapt, adapt and change and evolve with what's new, I don't think there should be any fear that we're going to do ourselves out of things to do, and I think that's where it comes back to creativity.

00:33:23.846 --> 00:33:24.696
I think there will be.

00:33:24.696 --> 00:33:27.324
You know, I can't wait to see what I'm doing in five years.

00:33:27.324 --> 00:33:28.346
I'm excited about it.

00:33:29.875 --> 00:33:41.327
I just hope that I live long enough to get to see someone have the job of starship captain, but I think I've got a few hundred years for that.

00:33:42.474 --> 00:33:44.603
That's if you're coming from a paradigm of.

00:33:45.256 --> 00:33:47.462
I'm coming from a paradigm of Star Trek, Mark.

00:33:47.923 --> 00:34:04.983
Yeah, yeah, but it's not if you're coming from a paradigm that within the next 30 years you're not going to resolve every medical issue that we have and therefore give us the ability to extend well beyond 100 years of life or 150, but in a very healthy state, not in a decrepit state.

00:34:06.999 --> 00:34:08.603
But that's such a reality, mark.

00:34:08.603 --> 00:34:17.914
We just read some documentation, I think, today, on trustworthy AI and why it's so important.

00:34:17.914 --> 00:34:49.570
Because there's so much AI solving types of cancer quicker and figuring out solutions for various allergies and for critical medical conditions so governments are starting to see that you cannot stop AI, so you need to make it trustworthy, because this is what's going to enable us to live past 100, unless we get hit by a car, of course.

00:34:50.010 --> 00:34:56.842
Yeah, the bus effect still applies, right, yeah, absolutely, or will it, or will it?

00:34:57.996 --> 00:35:01.275
Who knows how advanced medicine is going to get, can it?

00:35:01.295 --> 00:35:02.822
reminicularize ourselves.

00:35:02.822 --> 00:35:04.900
Now we are in Star Trek land.

00:35:05.260 --> 00:35:33.679
Yeah, you know, anna just touched on something, though, that I think is going to be an interesting thing to watch here over the next little while, and that is the responses of governments around the world and how governments around the world deal with AI in their own um, in their own territory, in their own jurisdiction.

00:35:33.679 --> 00:35:44.467
And you know, we're we're already seeing a pretty mixed set of, uh, a pretty mixed set of of of responses, right?

00:35:44.467 --> 00:36:07.592
So, to give you an example, um, I you know I don't hold me to this, it could be 33 days, but something within the same month right, the EU AI act, um, which is going to significant impose some significant impose some significant responsible AI responsibilities on companies.

00:36:07.592 --> 00:36:10.123
That's going to come online.

00:36:10.775 --> 00:36:12.181
Enforceable in August this year.

00:36:12.181 --> 00:36:13.418
Enforceable in August.

00:36:13.900 --> 00:36:14.822
Enforceable in August.

00:36:14.822 --> 00:36:27.005
Okay Okay, so not the same month, but it's being the final version of it I think is due in about a month's time, Something like that it becomes enforceable later this year.

00:36:27.326 --> 00:36:53.112
At the same time, donald Trump the soon-to-be probably by the time this episode is released, the US president, of course has announced that he will be appointing as his AI and crypto czar a Silicon Valley investor whose goal it is to deregulate AI so that American companies can innovate faster.

00:36:53.112 --> 00:36:58.057
So we'll see exactly how this goes and we'll also see You're talking about.

00:36:58.097 --> 00:36:58.740
Sachs right.

00:36:59.054 --> 00:36:59.920
I don't know the fellow's.

00:36:59.920 --> 00:37:01.380
I don't recall the fellow's name.

00:37:01.760 --> 00:37:08.867
Yeah, sachs, he actually was one of the creators of Yammer back in the day, that's one of his big, oh interesting Okay.

00:37:08.867 --> 00:37:10.039
You know sold to.

00:37:10.039 --> 00:37:10.722
Microsoft.

00:37:10.722 --> 00:37:12.907
I mean, he's had a lot of investments and stuff.

00:37:12.907 --> 00:37:13.914
But you're talking about Sachs.

00:37:13.914 --> 00:37:16.719
Yeah, I don't know his name.

00:37:16.778 --> 00:37:21.266
Yeah, yeah, I, david maybe rings a bell, but anyway, don't hold us to that.

00:37:21.467 --> 00:37:22.789
I just don't recall the fellow's name.

00:37:22.789 --> 00:37:50.655
But the speculation and my hunch here is that we may very well see kind of a multi-lane highway for AI emerging right, where we'll just take America and Europe, where AI is a lot safer in Europe but the pace of innovation is a lot slower, and then in America AI is actually a lot more dangerous but the pace of innovation is a lot faster.

00:37:50.655 --> 00:38:15.233
And I think that navigating that regulatory environment, particularly given some of the extraterritorial provisions of the EU AI Act, but also given the fact that one you've got a lot of global companies I mean, you don't have to be big to be a global company, right Like Cloud Lighthouse serves clients in 11 countries, right?

00:38:15.233 --> 00:38:30.407
So you've got a lot of companies that are in the mix here, that are global and are going to be subject to a wildly different and an extraordinarily difficult to navigate, especially in the early days regulatory landscape.

00:38:30.407 --> 00:38:34.925
You think that GDPR was a challenge.

00:38:34.925 --> 00:38:36.340
You ain't seen nothing yet.

00:38:36.340 --> 00:38:39.003
That's going to be really interesting to see unfold.

00:38:39.795 --> 00:38:41.663
And with that we'll end today's episode.

00:38:41.663 --> 00:38:43.001
Thank you so much for joining us.

00:38:43.001 --> 00:38:45.403
We'll be back regularly from this point forward.

00:38:45.403 --> 00:38:47.682
We're looking forward to an exciting year.

00:38:47.682 --> 00:38:52.900
If you've got suggestions or things that you'd like us to discuss on the show, make sure you reach out.

00:38:52.900 --> 00:39:09.123
If you go to the innovation the microsoftinnovationpodcastcom new URL, you will see in the bottom right-hand corner there is a microphone and that allows you to send us a voicemail.

00:39:09.123 --> 00:39:14.706
If you click on that, you can approve your browser to use the microphone.

00:39:14.706 --> 00:39:16.922
You can record a voicemail.

00:39:16.922 --> 00:39:26.440
We can then play that on air as a future episode and allow you to participate, perhaps ask your question yourself and be on a show in the future.

00:39:26.440 --> 00:39:30.369
With that, good luck, thank you, and enjoy the rest of your week.

00:39:30.954 --> 00:39:31.918
Thanks all, bye guys.

00:39:32.719 --> 00:39:34.583
Thanks for tuning into the Ecosystem Show.

00:39:34.583 --> 00:39:40.670
We hope you found today's discussion insightful and thought provoking, and maybe you had a laugh or two.

00:39:40.670 --> 00:39:46.659
Remember your feedback and challenges help us all grow, so don't hesitate to share your perspective.

00:39:46.659 --> 00:39:52.764
Stay connected with us for more innovative ideas and strategies to enhance your software estate.

00:39:52.764 --> 00:39:56.800
Until next time, keep pushing the boundaries and creating value.

00:39:56.800 --> 00:39:58.364
See you on the next episode.