Transcript
WEBVTT
00:00:01.080 --> 00:00:02.685
Welcome to the Ecosystem Show.
00:00:02.685 --> 00:00:05.160
We're thrilled to have you with us here.
00:00:05.160 --> 00:00:11.153
We challenge traditional mindsets and explore innovative approaches to maximizing the value of your software estate.
00:00:11.153 --> 00:00:13.523
We don't expect you to agree with everything.
00:00:13.523 --> 00:00:17.472
Challenge us, share your thoughts and let's grow together.
00:00:17.472 --> 00:00:20.548
Now let's dive in. It's showtime.
00:00:20.548 --> 00:00:22.946
Welcome back to the Ecosystem Show.
00:00:22.946 --> 00:00:27.269
We're here with Anna and Andrew and myself.
00:00:27.269 --> 00:00:29.405
Will is, unfortunately.
00:00:29.405 --> 00:00:30.528
Did he say he's sick?
00:00:31.000 --> 00:00:32.165
I think he said he called in sick.
00:00:32.165 --> 00:00:33.405
They're both sick.
00:00:33.405 --> 00:00:35.125
Chris and Will are both.
00:00:35.125 --> 00:00:41.773
I got a text from Chris and he said I have not been this sick in 10 years, Years.
00:00:43.582 --> 00:01:34.608
So they're not joining us today. But we find that we're in a space and time where AI is just front and center in everything that I seem to be doing these days, and it seems to have transitioned super quickly. All of us are really coming from a business applications mind space for the last 20-odd years, and then we're seeing this massive pivot into artificial intelligence, and really from a practical application point of view, right, not just theory. And I don't know if you guys have tried anything new lately, but I have had the pleasure of using Grok 3, Elon Musk's AI release this last week. Isn't he busy?
00:01:34.608 --> 00:01:47.290
The deep research functionality on it, I definitely think, outperforms ChatGPT and I'm on the $200 a month plan.
00:01:47.290 --> 00:01:53.091
The thoroughness of it was mind-blowing to me.
00:01:53.091 --> 00:02:00.299
So I gave it a topic that I knew stuff about and I couldn't believe how good the responses were.
00:02:00.299 --> 00:02:10.127
And I'm not talking about just Q&A; a full conversation with it over an hour, and unbelievable. And I hadn't really touched Grok up to that point.
00:02:10.259 --> 00:02:15.250
I'd only cursorily jumped into the previous versions over the last year or so; I wasn't really into it.
00:02:15.250 --> 00:02:18.689
I'm like you know, could you use it as a serious business tool?
00:02:18.689 --> 00:02:22.591
But, boy, honestly, amazingly powerful.
00:02:22.591 --> 00:02:28.724
I've got the app on my phone now, so that's where I generally, you know, I'm watching something on TV or something.
00:02:28.724 --> 00:02:34.710
I'll go and spend some time while watching, I'll do some prompting and stuff and yeah.
00:02:34.930 --> 00:02:35.550
I'm blown away.
00:02:36.330 --> 00:02:41.175
This is, and, by the way, Mark, I just want to tell you that I'm offended.
00:02:41.175 --> 00:02:47.443
I actually, 20 years ago, was an infrastructure guy, not a biz apps guy, but anyway nonetheless.
00:02:47.443 --> 00:02:59.925
But see, this is actually where the most readily applicable use of AI in my daily life comes in right, which is refereeing random trivia, all right, like that is.
00:02:59.925 --> 00:03:16.117
If you are not using your AI of choice to referee a debate that you might be having with your wife or with your father about something that doesn't really matter in the end, then you are missing out on AI's true calling.
00:03:16.905 --> 00:03:17.819
It's very irritating.
00:03:17.819 --> 00:03:19.145
It's very irritating.
00:03:19.145 --> 00:03:25.800
I can also vouch for Andrew's background in, you know, hands-on engineering.
00:03:25.800 --> 00:03:35.966
He did fix our internet earlier in the week, like he knew what to plug in where, and you know I was impressed.
00:03:35.966 --> 00:03:38.008
I have to say he was on the floor.
00:03:38.008 --> 00:03:39.325
We had to move the couch.
00:03:39.325 --> 00:03:41.926
Yeah, it's pretty cool.
00:03:42.240 --> 00:03:45.911
I ripped a fiber connection out of the wall and rewired it.
00:03:46.221 --> 00:03:47.687
Exactly, yeah, that's what you did.
00:03:48.540 --> 00:04:06.719
But if you've been in IT for the last 20 years, and dare I say mine's around 30 years, you know it started with networking, right. It started with, in my case, baud modems, and you know, my entire network infrastructure I've configured and I run myself.
00:04:06.719 --> 00:04:08.844
Um, it's pretty high spec.
00:04:08.844 --> 00:04:15.846
It's what you'd call a prosumer network, I suppose, with multiple WAN connections and whatnot.
00:04:15.846 --> 00:04:37.050
But, like, internetworking is one of the first courses I ever taught, maybe 30 years ago, which was basically how packets break down and travel across the network, and the OSI RM, the Open Systems Interconnection Reference Model, and how data moves down the layers to ultimately get to your hardwired network card, or NIC, and travel.
00:04:37.110 --> 00:04:47.093
So yeah, so this was my biggest finding from fixing the internet in our new apartment here in Valencia, um.
00:04:47.093 --> 00:04:53.028
My number one finding was, so, I got my first job in IT 24 years ago.
00:04:53.028 --> 00:04:57.622
I celebrate this anniversary every January, so I just celebrated my 24th year.
00:04:57.622 --> 00:05:03.648
My number one takeaway is that I am way less flexible than I was 24 years ago.
00:05:03.648 --> 00:05:17.245
I lay on the floor with my head stuck up somewhere for about 15 minutes the other day, and I had to take a proper sit-down afterwards, like he needed.
00:05:19.182 --> 00:05:23.091
It's going to get better, though, because now we are exercising.
00:05:23.091 --> 00:05:33.754
I mean, we're exercising, we're walking for a good hour every day. We live in the old city now, so yeah, and that's wonderful. Yeah, so I'm gonna...
00:05:33.934 --> 00:06:38.752
I'm gonna bring this back to the thing, Mark, you asked a few minutes ago. We talked about Grok, and then we went down a tangent of our prior lives as network engineers. But the thing that's been really top of mind for me the last little while here is the choppiness of the regulatory and the legal landscape surrounding artificial intelligence. And I think that this is something that has not gotten a lot of play, has not gotten a lot of mindshare, because we've been really focused on the capabilities of AI or, maybe secondarily, on the engineering around AI. And this is going to be a fun episode for me as a guy who did my college major in political science, not actually in something technical. But there's been a couple of events that I think have shone a light on how complex and how uneven that regulatory landscape is.
00:06:38.752 --> 00:06:41.598
So the first is, in my mind, the EU AI Act, the European Union AI Act.
00:06:41.598 --> 00:07:07.629
The second, though, would be the presidential transition in the United States. And I'm not going to talk politics here, that's not my point, but the vice president, JD Vance, was in Europe and, as I think has been pretty widely reported, gave Europe quite a scolding, in his view, on various things.
00:07:07.629 --> 00:07:26.120
But you know, listen, today we have this regulatory landscape where the EU is coming down quite hard on, you know, regulating and trying to make artificial intelligence safer and more reliable and more trustworthy legislatively and from a regulatory perspective.
00:07:26.120 --> 00:07:29.125
The United States is moving in the opposite direction.
00:07:29.125 --> 00:07:39.629
Donald Trump rescinded Joe Biden's executive order pertaining to AI safety and responsible AI.
00:07:42.394 --> 00:07:57.896
You have the Chinese, who are training models, we had DeepSeek hit the scene in the last month, but training models that have baked into their knowledge, I would say, the Chinese geopolitical view of the world.
00:07:58.240 --> 00:08:08.464
So, famously, folks asking DeepSeek for information about Taiwan get a very, very different answer than what you would get from a Western large language model.
00:08:08.925 --> 00:08:26.913
You have the Brits, who are looking at their own version of an AI act, which I suspect will come down in between the American Wild West that, it seems, Donald Trump and Elon Musk envision and the EU regulatory landscape.
00:08:26.913 --> 00:08:34.504
And then, really interestingly, you have the US states, many of whom are taking on their own perspective here.
00:08:34.504 --> 00:08:43.030
California is obviously going to be a huge mover here, because they're home to so many tech companies and are themselves, I think, the seventh-largest economy in the world.
00:08:43.030 --> 00:09:14.043
So if you're an organization that operates across borders, you have a mind-bogglingly complex task ahead of you to just understand and navigate how to be in compliance with what you're doing with AI from a regulatory perspective, but also how to deal with the multinational citizens who work inside of your company and may be subject to a particular regulation, even if they're not living in their home country.
00:09:14.043 --> 00:09:17.274
So that's my prompt for the next 20 minutes or so.
00:09:17.274 --> 00:09:19.860
I think this is really underplayed.
00:09:20.821 --> 00:09:21.662
You underplayed.
00:09:21.662 --> 00:09:22.182
By who?
00:09:22.883 --> 00:09:23.543
Yeah.
00:09:24.364 --> 00:09:59.081
Underplayed by almost everybody I speak with about anything related to AI, other than, to be honest, the wonky folks: maybe you're a lawyer who's building, or part of, a practice at your firm focused on AI, or you're a lawmaker or a policy maker. But outside of that kind of legal and policy world, and maybe the in-house counsel at some of the larger tech companies, I think people have no idea really what they are navigating here. No idea.
00:10:00.322 --> 00:10:04.708
Yeah, so last time we were talking about this as well.
00:10:04.708 --> 00:10:09.395
And how many people believe that this doesn't apply to them?
00:10:09.395 --> 00:10:28.821
Percent extra.
00:10:28.821 --> 00:10:32.908
You know, funding, budget, work in order to, you know, publish an AI product if it is to uphold all of these?
00:10:32.908 --> 00:10:35.033
You know all of these regulations.
00:10:35.033 --> 00:10:46.490
I'm not sure that anybody calculated how much it will cost long term, because how much does it cost once you've got the framework, the tooling, the um?
00:10:46.490 --> 00:11:05.851
I know the terms of reference and you actually know what you're doing, what you're testing, what you're governing for. Because in his speech, Elon Musk, uh, not Elon Musk, Jesus, Vance, um, scolded Europe not just for the EU AI Act, but he was upset with GDPR as well.
00:11:07.041 --> 00:11:10.740
GDPR and NATO spending and you know.
00:11:11.140 --> 00:11:11.922
And everything.
00:11:11.922 --> 00:11:14.868
So GDPR is another one that has been.
00:11:14.868 --> 00:11:23.111
You know, people understood it and have created frameworks and you know, and now we just kind of get on with our lives.
00:11:23.111 --> 00:11:29.405
But, fun fact, nobody's suing us for losing their data or using their data improperly.
00:11:29.405 --> 00:11:30.870
It's a good thing, right.
00:11:31.580 --> 00:11:44.129
Do you think Vance is unaware that at least 12 states in the US are implementing their own AI acts that align pretty closely with the EU AI Act?
00:11:45.581 --> 00:11:46.764
I think that.
00:11:46.764 --> 00:11:49.904
I don't think he's unaware, he's a clever guy.
00:11:50.519 --> 00:11:53.149
I know they rescinded Biden's executive order.
00:11:53.679 --> 00:11:54.221
JD Vance.
00:11:54.221 --> 00:11:56.207
Think what you want of him and his politics.
00:11:56.207 --> 00:11:58.746
Anna is right, he is a clever guy.
00:11:59.399 --> 00:12:16.081
He also mentioned a few things in his speech, again without getting political, about how, you know, America is at the forefront of, you know, these developments, for the use of their own chips as well.
00:12:16.081 --> 00:12:25.027
Now, biden was actually the one who said we are no longer buying Chinese microchips, we're going to make our own.
00:12:25.027 --> 00:12:29.245
And that was months and months and months ago and the legislation worked.
00:12:29.245 --> 00:12:44.370
So it's not a current administration thing, even though it was definitely a political speech, and I do believe that he knows and understands what his states are doing.
00:12:44.370 --> 00:13:02.326
But he plays for the people who listen to him, you know, who feel empowered by that talk and who don't realize that in many states in the United States, you can actually go and sue somebody yourself.
00:13:02.326 --> 00:13:03.740
You don't even need a lawyer.
00:13:04.735 --> 00:13:12.195
Well, and I think that's part of this. So, my thinking on this has evolved over time, right.
00:13:12.195 --> 00:13:37.274
So I had a thesis, if you go back maybe six weeks, two months ago, right. I had this thesis that I haven't abandoned, right, but this idea that what we're going to have is Europe being kind of the slow lane, and I mean slower but safer, right.
00:13:37.274 --> 00:13:48.187
So AI was going to pretty fundamentally advance at a slower pace in Europe but also be categorically safer to use, right, both for an individual and for an organization.
00:13:48.187 --> 00:13:52.918
And then you were going to have, on the other side...
00:13:52.918 --> 00:14:02.682
You know, America was going to be the fast lane, right, where AI leapt ahead because it was far less regulated, at least by the federal or the national government, but was also much less safe to use.
00:14:03.302 --> 00:14:16.899
And my amendment to that thesis, right, is, and this is where I come in with this, I come to this conclusion about first-order and second-order regulation of AI.
00:14:16.899 --> 00:14:18.583
So there's a couple of ways to do it.
00:14:18.583 --> 00:14:36.850
The best way to think of this is that you can regulate the development of AI itself, which Europe has clearly said we're going to do, and America, and again, I'm not talking about state governments, I'm talking about the national government, has clearly said we're going to take a much lighter approach here.
00:14:36.850 --> 00:14:55.259
But what gets lost in that discussion about the regulation of AI, right, is that, even if AI itself is unregulated, what AI does inside of a financial services institution, like inside of a bank, or inside of a law firm, or inside of a hospital, right?
00:14:55.318 --> 00:15:01.779
Any of these highly regulated organizations what AI does is still very much regulated, right?
00:15:01.779 --> 00:15:14.386
So, like if AI inadvertently spills mass amounts of patient data from a hospital that is using AI to process patient data, right?
00:15:14.386 --> 00:15:34.169
Yeah, okay, the vendor who created the AI may not have the same kind of legal ramifications in the US as they would in Europe, but that hospital still has the legal ramifications in place for misappropriating patient health data, which is, by law, confidential and protected.
00:15:34.169 --> 00:15:38.927
So organizations need to think long and hard about this.
00:15:38.927 --> 00:15:47.282
I think it is a false choice to say either we're going to highly regulate AI or we're not going to highly regulate AI.
00:15:47.282 --> 00:15:57.931
And, if we're not going to highly regulate AI, that you can do whatever the hell you want with AI. Because you are still regulated by the laws that pertain to your industry's behavior.
00:15:58.815 --> 00:16:00.140
Yeah, totally, totally agreed.
00:16:00.140 --> 00:16:07.168
I just think that in a lot of companies right now, right, it's your technologists that are at the forefront of technology generally.
00:16:07.168 --> 00:16:22.907
And the last thing is, all the technologists I know of are not thinking about legal ramifications. Like, I'm not committing a crime, I'm not going out of my way to do something illegal, you know, that I know of.
00:16:22.907 --> 00:16:27.222
They're not thinking like that, you know. These days...
00:16:27.222 --> 00:16:29.508
The developers, they'll think about security.
00:16:34.256 --> 00:16:35.139
They'll think about, you know, is there...
00:16:35.139 --> 00:16:39.837
You know, things like encryption, and not doing stupid stuff in the code that's going to expose data. But they're not thinking about...
00:16:39.837 --> 00:16:43.971
Hang on a second, is this a human rights violation?
00:16:43.971 --> 00:16:45.134
Is this going to?
00:16:45.134 --> 00:16:47.562
You know that's not happening at the moment.
00:16:47.562 --> 00:16:54.264
They're not thinking... A technologist doesn't think like that, and I think potentially that's going to have to change.
00:16:54.264 --> 00:16:58.138
It's going to be part of the, hey, education program any company is going to run.
00:16:58.138 --> 00:16:58.659
Is that?
00:16:58.700 --> 00:17:18.328
Hey, if you're using AI, you need to be educated on the implications of it. Well, and we've used, in this conversation, we've used phrases like legal and regulatory, right. But there's, Mark, you hit on a really important dimension here, which is the human rights dimension.
00:17:18.549 --> 00:17:25.186
Okay so, and obviously different nations in the world have different stances on various human rights issues.
00:17:25.186 --> 00:17:39.482
The United Nations has the Office of the High Commissioner for Human Rights, which is the global body that addresses, looks out for, and promotes human rights around the world.
00:17:39.482 --> 00:18:05.542
But there is a huge human rights component to whether AI inadvertently violates various human rights or puts an organization in a position of inadvertently violating human rights. And, you know, there are certainly ethical implications and there are certainly moral implications there.
00:18:05.542 --> 00:18:09.766
But then there's enormous reputational implications there.
00:18:09.766 --> 00:18:28.805
Right, no tech vendor wants to be the tech vendor, whether it's your tech vendor of choice or not but no tech vendor wants to be the one whose AI somehow facilitates sex trafficking or whose AI somehow facilitates political violence.
00:18:28.805 --> 00:18:30.876
Right, you don't want to be that.
00:18:31.804 --> 00:18:43.163
But there's a layer below that, right. And I was reading this article, which had a very interesting look at algorithms, right.
00:18:43.163 --> 00:18:50.560
Algorithmic impact assessment is going to be part of what every company needs to be looking at, based on the algorithms they use.
00:18:50.560 --> 00:18:53.184
And one of the things was, what happens...
00:18:53.184 --> 00:19:18.266
If you're a job listing site, so you're advertising job roles, but your algorithm makes sure the ad is only displayed to certain people that your algorithm thinks fit, let's say, middle-aged white guys with X experience, then you're violating the human rights of the other people that you're not displaying that ad to. But it's a nobody-gets-hurt type thing.
00:19:18.266 --> 00:19:24.890
Right, nobody hears about it, nobody sees it, so, like, nobody knows. Human trafficking, sex trafficking, all that kind of stuff is...
00:19:24.890 --> 00:19:39.851
Very, like, confronting. But I'm talking about that next tier down, where there's, who knows, you know, like, it's not doing deliberate harm, but it's definitely a violation of human rights.
00:19:39.851 --> 00:19:42.941
If you're going, hey, I choose that.
00:19:43.000 --> 00:20:14.290
You don't fit the demo that I believe should apply for this role, so therefore I'm not going to show you the ad, as an example. Or the platform itself can actually do it, because if the platform itself only learns from the data that it had, and we know that historically, you know, you will have, I don't know, fewer women on an executive team or board, less diversity in, I don't know, high-power positions, less...
00:20:14.836 --> 00:20:16.220
America doesn't care about that anymore.
00:20:16.942 --> 00:20:21.555
Yeah, I understand, but fewer men being nurses, et cetera, et cetera.
00:20:21.555 --> 00:20:40.184
So, in essence, even the platform itself may choose to target your ad to, you know, the audience that it believes it will be successful with, and then the question still applies: who's responsible?
00:20:40.184 --> 00:20:41.186
Is it the platform?
00:20:41.186 --> 00:20:42.998
Is it you, who made the algorithm?
00:20:42.998 --> 00:20:46.686
Can anybody prove that you've tested it this way?
00:20:47.414 --> 00:21:21.962
So I highly recommend people go and read Yuval Noah Harari's book Nexus, which he just published last year. In that, he calls out, for example, Facebook, in detail, around the part they played in genocide that has happened around the world. And, like, there's no way he would publish the word Facebook and that whole account without opening himself up to potential legal liability, right, for publishing that so clearly in the book.
00:21:22.115 --> 00:21:31.842
And what he showed is that their algorithms incited hate speech and their defense was well.
00:21:31.842 --> 00:21:38.063
Well, that's people's choice. Like, it's people's choice that they, you know, put this hate speech up.
00:21:38.063 --> 00:21:40.736
We're just providing the platform, the old telco model, right.
00:21:40.736 --> 00:21:46.136
We can't be responsible for people downloading objectionable material, we're just the infrastructure.
00:21:46.136 --> 00:21:55.778
But what it showed is that the actual algorithms had a target of, you need to sell as many ads as possible, and therefore dwell time is critically important.
00:21:55.778 --> 00:22:02.618
So the algorithm noticed that the hate speech, et cetera, got more eyeballs.
00:22:02.618 --> 00:22:15.190
So what it did is it said, hey, that improves my algorithm's goal of getting ad time. And so, therefore, something like 70 percent of all displayed hate speech was algorithmically recommended.
00:22:15.190 --> 00:22:18.525
It wasn't Andrew recommending to Anna to watch the video.
00:22:19.154 --> 00:22:22.045
I want to be very clear that I do not recommend hate speech.
00:22:24.259 --> 00:22:27.227
As in it resulted in massacres of people.
00:22:27.227 --> 00:22:30.984
That was the outcome of this.
00:22:30.984 --> 00:22:35.037
You know that.
00:22:36.099 --> 00:22:36.982
It's a good example, though, of, really, one:
00:22:36.982 --> 00:22:38.788
The tech platforms, the people owning the LLMs.
00:22:38.788 --> 00:22:48.460
Then, at the next layer down, which affects a lot of Microsoft partners, right, which are the implementation layer. And then, of course, you've got the end customer or the end users.
00:22:48.460 --> 00:22:53.208
Everybody has this kind of level of obligation and responsibility.
00:22:53.347 --> 00:23:20.683
And in how AI is applied as we move forward. And I think that some of those arguments of the past, like, oh, we didn't know, when you clearly did, because you're making so much money off it, um, dare we say, blood money. The thing is, I think the reason, like, we're seeing that over about 12 states in the US are already putting their own AI acts into effect, and why has it all of a sudden coordinated and come about?
00:23:20.703 --> 00:23:42.392
Because this has been going on for really the last 10 years, pre-generative AI, this concern around... Like, I remember in 2015, 2016, I was doing this whole piece of work with Microsoft in federal and state governments around law enforcement, for what was called situational awareness.
00:23:43.474 --> 00:23:58.547
So you have a big football match on, you have surveillance cameras operating, and, I mean, one of the cases was New York City, right? They've got sensors all over New York City that can detect radioactive substances.
00:23:58.547 --> 00:24:09.068
Then cameras can clock onto a number plate, and they can play back for the last eight hours, ten hours, whatever, everywhere that number plate appeared on any of their surveillance cameras.
00:24:09.068 --> 00:24:19.480
And then you could take that to another level and do facial recognition and go, hey, let's put people at the crime scene, like, based on, you know, that type of data.
00:24:19.480 --> 00:24:30.909
And of course, there were a lot of concerns, particularly around the facial recognition piece, around identifying and therefore drawing conclusions algorithmically.
00:24:30.909 --> 00:24:33.722
And that's why I think a lot of these acts have all of a sudden appeared.
00:24:33.722 --> 00:24:37.903
It's not because of Gen AI, it's just that it's actually been in flight.
00:24:38.003 --> 00:25:01.420
For, you know, if you just go back to the machine learning models and things like that around algorithmic selection. It's also... So, this will help prevent AI models from making mistakes, but it doesn't mean that it will guarantee that AI models and AI products don't make mistakes.
00:25:01.420 --> 00:25:08.134
So these things could still happen, like the facial recognition can still be wrong or biased.
00:25:08.134 --> 00:25:15.785
But what it does mean is that you show that you've done your very best to make the algorithm not biased.
00:25:15.785 --> 00:25:31.267
So in your example earlier with Facebook, it means that Facebook would have had a big, massive set of documentation showing that they did their best, you know, to prevent hate speech, which obviously they didn't right.
00:25:31.267 --> 00:25:36.641
But this is what these AI acts mean.
00:25:36.641 --> 00:25:58.625
They're not guaranteeing the fact that you'll never make an error within your programming or your product, but it does guarantee that you are doing your very best not to and that you immediately have a set of mitigation activities so that the second you realize that something went wrong, you can fix it.
00:25:59.246 --> 00:26:09.709
So a lot of people's opinion and I feel like JD Vance was going that way as well when he was saying we choose innovation.
00:26:09.709 --> 00:26:14.675
You know the American people choose innovation and not fear or safety.
00:26:14.675 --> 00:26:28.814
So he started with fear, but then he went on and on about how safety is important here by saying that, oh, actually, if there is a problem, we can fix it.
00:26:28.814 --> 00:26:41.133
Well, fun fact, you cannot fix it unless you've got that very thorough testing done already and a series of mitigation plans, clear rollback plans.
00:26:41.133 --> 00:26:44.143
This is what the EU AI Act is.
00:26:44.143 --> 00:26:58.160
Everybody talks about it as if it's like this very complicated set of laws that nobody can understand or follow, but the reality is it's a set of actions that you have to follow.
00:26:58.160 --> 00:27:06.909
You need to read the thing and then you have to put some work into it, but it's not necessarily rocket science, dare I say.
00:27:07.671 --> 00:27:13.570
Yeah, I want to discuss just something else that's come up for me.
00:27:13.570 --> 00:27:24.205
This week I was interviewing with a large telecommunication company and we were discussing AI adoption, and you can see here, their technology...
00:27:24.205 --> 00:27:26.310
adoption cycle, right, has been around for a while.
00:27:26.310 --> 00:27:32.958
If we drill into it here, you've got the innovators that make up 2.5%, early adopters 13.5%.
00:27:32.958 --> 00:27:46.140
Then you've got your early majority, late majority and laggards. And the discussion was, we know what's happening all the way through to here, but what happens to the laggards, right, in an organization?
00:27:46.140 --> 00:27:50.184
And do you know what the leadership of this organization's opinion was of the laggards?
00:27:53.458 --> 00:27:53.779
I'm so curious.
00:27:53.779 --> 00:27:58.875
They won't have a role in our company.
00:27:58.875 --> 00:28:03.642
In other words, they're making it so it's built into their culture.
00:28:03.642 --> 00:28:11.213
You need to learn about AI, you need to be adopting it, and if you don't feel that this is for you... And they weren't using this particular graph I'm showing you here, but I'm referencing the laggards.
00:28:14.759 --> 00:28:17.423
They were saying the people that are choosing not to get on board and not to go down this journey...
00:28:18.240 --> 00:28:25.946
They've made it quite clear to their staff at this early stage that there's probably not going to be a long-term role for them inside the organization.
00:28:25.946 --> 00:28:40.251
Pretty phenomenal, right? Like, that kind of very clear mandate was coming from the SLT inside the organization: you need to find out where you're going to land on this bell curve.
00:28:40.251 --> 00:28:45.310
But if you're in this tail end, you should probably either be looking for another role, or...
00:28:45.310 --> 00:28:48.964
We can't see you having a role inside this organization long term.
00:28:48.964 --> 00:28:52.271
And, of course, the big focus is on skill development.
00:28:52.271 --> 00:29:20.145
We're going to put lots of skills training programs on, and you need to be adopting right through to this. Yeah, and I just thought it's an interesting stance that perhaps companies are starting to take now. And the other interesting thing was, this company has already received benefits from AI that have enabled them to stop hiring certain roles inside the organization.
00:29:20.145 --> 00:29:30.445
In other words, they are in a position where AI is filling the gaps of certain roles and reducing their need to hire more.
00:29:31.949 --> 00:29:34.252
Yeah, I think, I think that's very interesting.
00:29:34.252 --> 00:29:45.866
And, you know, certainly, I mean, I don't want to say certainly, but my instinct says that if I were running a big company, that's the attitude that I would take.
00:29:45.866 --> 00:29:54.674
And, you know, listen, from the work that we do, we see a lot of organizations that are themselves...
00:29:54.674 --> 00:29:57.306
The organization itself is a laggard.
00:30:01.742 --> 00:30:05.271
But they work with us because they don't want to be one.
00:30:05.271 --> 00:30:06.944
That's why they work with us.
00:30:07.647 --> 00:30:07.828
Right.
00:30:07.828 --> 00:30:13.364
But I mean we see some phenomenally dysfunctional, technologically dysfunctional organizations.
00:30:13.364 --> 00:30:17.026
Again, you know, to Anna's point, that's why we work with them.
00:30:17.026 --> 00:30:22.750
But it did make me laugh when I saw that. A story from years ago:
00:30:22.911 --> 00:30:38.864
I worked in an organization, I'm sure I've told this on a podcast, right, where one of my colleagues had retired from a role in the US government and had taken a job in the private sector, which often happens. And he asked me one day.
00:30:38.864 --> 00:30:41.550
He said can you come look at this with me?
00:30:41.550 --> 00:30:43.826
And I said yes.
00:30:43.826 --> 00:30:54.873
I came over into his office and he said you know, andrew, I notice that you send emails to more than one person at a time.
00:30:54.873 --> 00:31:05.328
At first I didn't realize what he was asking me, right? Like, this scene was so mind-blowing to me, and I was sort of like, well, yes, what do you mean?
00:31:05.328 --> 00:31:07.432
His name was Wayne, as I recall.
00:31:07.432 --> 00:31:10.345
Yes, Wayne, what do you... like...?
00:31:10.345 --> 00:31:13.601
And I'm trying to feel him out here, I'm trying to get to what his problem is.
00:31:13.641 --> 00:31:18.893
And he says well, how do you send the same email to more than one person?
00:31:18.893 --> 00:31:22.250
And I said well, how do you do it now?
00:31:22.250 --> 00:31:23.213
And he shows me.
00:31:23.213 --> 00:31:36.393
He says, well, I type the email and then I put someone's address in and I send it, and then I go into my sent folder and I copy the text of the email and I create a new email and I paste it and I put the next person's address in.
00:31:36.393 --> 00:31:41.369
And if he needed to email the same thing to five people, he did this five times.
00:31:41.369 --> 00:32:04.153
So I explained to him how the CC field worked, and the BCC field was mind-blowing, and I also explained to him that you could just put a little semicolon or a comma, or, I forget what we were using in that version of Outlook all those years ago, right, but that you could actually send it to more than one person in the To line.
00:32:04.153 --> 00:32:11.012
And every time I hear stories like this, I just sort of laugh and I think of Wayne. Bless him.
00:32:12.923 --> 00:32:18.814
Well, it looks like we're done, and Anna's up and left us for some reason.
00:32:18.814 --> 00:32:23.671
But anyhow, thanks for joining us again for this, for the show.
00:32:23.671 --> 00:32:28.069
Remember, if you look in the show notes now, you can leave us a voicemail.
00:32:28.069 --> 00:32:32.412
So if you want to get featured on a future episode, click on that link in the show notes.
00:32:32.412 --> 00:32:34.026
You can leave an audio voicemail.
00:32:34.026 --> 00:32:35.866
We'll then splice it into our edit.
00:32:36.319 --> 00:32:45.950
If you've got a question you want us to address on the show, something that you feel you would like our input on, we'd love to hear from you, or an idea that you want us to explore.
00:32:45.950 --> 00:32:50.123
Yeah, leave us a voicemail and it'd be great to connect.
00:32:50.123 --> 00:32:51.344
Otherwise, ciao for now.