Breaking AI to Build Trust: A Conversation with a Microsoft Red Team Engineer
The player is loading ...
Breaking AI to Build Trust: A Conversation with a Microsoft Red Team Engineer

Breaking AI to Build Trust
Joris de Gruyter

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM 

FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/672

Joris de Gruyter dives deep into the world of AI security with Microsoft's Senior Offensive Security Engineer from the AI Red Team who shares insights into how they test and break AI systems to ensure safety and trustworthiness.

TAKEAWAYS
• Microsoft requires all AI features to be thoroughly documented and approved by a central board
• The AI Red Team tests products adversarially and as regular users to identify vulnerabilities
• Red teaming originated in military exercises during the Cold War before being adapted for software security
• The team tests for jailbreaks, harmful content generation, data exfiltration, and bias
• Team members come from diverse backgrounds including PhDs in machine learning, traditional security, and military experience
• New AI modalities like audio, images, and video each present unique security challenges
• Mental health support is prioritized since team members regularly encounter disturbing content
• Working exclusively with failure modes creates a healthy skepticism about AI capabilities
• Hands-on experimentation is recommended for anyone wanting to develop AI skills
• Curating your own information sources rather than relying on algorithms helps discover new knowledge

Check out the Microsoft co-pilot and other AI tools to start experimenting and finding practical ways they can help in your daily work.

This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.

Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff 
https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge 

We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.

Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

00:00 - Introduction to AI Security Red Team

04:28 - Red Team Origins and Functions

08:03 - Testing Security and Ethics in AI

13:53 - The Evolving Challenge of AI Jailbreaking

19:25 - Mental Health and Philosophical Perspectives

23:40 - Learning Curves and Future Developments

26:20 - Advice for Skill Development in AI

Mark Smith: 
Mark Smith:
 
Joris de Gruyter:
 
So at Microsoft, any feature that involves AI has to be heavily documented and submitted to a central board, which is a huge thing. For the size of the company and the breadth of the portfolio of products that we have, this is huge. Everything has to be centrally approved and so as part of that, I mean there's security reviews, privacy reviews, all sorts of things that have to happen, and then it gets to a central board here in Seattle and so, based on what they see, they can decide that hey, maybe this looks like a high risk usage, or they feel like maybe the team doesn't quite understand the potential issues, or maybe it's just something super interesting. They basically kick that to us, to the AI Red team. So we get involved.

Joris de Gruyter: We test the product adversarially, but also as a regular user. Sometimes, as you've probably seen in the news as well, sometimes a regular user asks a very benign question and gets some really interesting non-intended responses. So we test for all of these things and I guess it kind of depends on what the product is or does exactly, but this can go anywhere from testing a model itself. So the Microsoft like the small language models, for example the Phi series. We usually get our hands on those before they go out. So we test sort of more in a broad scale, like how susceptible is it to jailbreaks, things like that?

Joris de Gruyter: Or we dive straight into products and see, hey, can we get it to generate bad images, or can we get it to exfiltrate data from your power, platform, environment or whatever it may be, and we usually have just a couple of weeks to do that. So, depending on what exactly is happening, this could be anywhere from two weeks to four weeks of us and it's interesting not just because one you get to see sort of tip of the spear type of things constantly have to adapt, learn new things. Sometimes, a lot of times, we get to see products that we've never used before. So you sort of have to dive in and figure out. So it's been a super, super interesting thing.

Joris de Gruyter: So it's a mix of sort of traditional security things. Sometimes it's a mix of responsible AI, like okay, is this product not going to be too biased against or for something, whatnot? Or is it like potential for harmful content? Like in what way is this being used? Is it, you know, just generating images or children's stories, which you know there's a lot of these that are immediate red flags, when you just hear that.

Mark Smith: But yeah, so it's very broad but super, super interesting. A lot of people listening won't know what red teaming is or what the concept of what a red team is, as opposed to, in the industry, a blue team. And can you just give us a bit of origin about this? Is not a new concept. Where it's from? It didn't come from neo choosing the red or blue pill in the matrix. That's not where it's from. It didn't come from neo choosing the red or blue pill in the matrix. That's not where it originated. Can you give us a bit of a backstory and explain?

Joris de Gruyter: also, doesn't come from halo red or blue team. Yeah, yeah, it's no. I think this originated in the military, I believe during the cold war or something, where basically the military was trying to do these exercises. Where they were, they would pretend to be a communist invasion force of some sort and try to break into things or attack things and do these simulations.

Joris de Gruyter: So this is carried over into software security, where typically red teams are teams within or outside a company sometimes that essentially try to break your own product, try to break in and see how far they can get, break your own product, try to break in and see how far they can get. And then the blue team is typically sort of a defensive team that looks into how can we you know what's going on with the product? Do we need to shore up the fences, do we need to look at something? And then, technically, I guess there's something called a purple team as well, where the red team and blue team sort of work together in the same room, where the red team's like oh, I think I can get in, and then a blue, oh, let me come check what's going on.

Joris de Gruyter: yeah, in a way that's more kind of what we do on the ai red team, I guess, is, you know, we collaborate, we sort of get on a phone call, try to understand the product and then we go into our thing and we have constant communication like on at least on a weekly basis. We're like, hey, here's the things we've been seeing. You guys should look at this. Yeah, so the term red teaming is really very security focused thing, but it came from the military originally.

Mark Smith: Another term I often see in conjunction with red teaming is chaos engineering. Is that something that you touch into as well, and can you explain what your definition of chaos engineering is?

Joris de Gruyter: Well, I'll fully admit that I'm still learning a lot about the very traditional security stuff as well. But yeah, chaos engineering is very much on the security focus side and, like I said, what we do is very broad. So on our team it's a very diverse team in the sense that we have folks on the team are like PhDs in machine learning and we have traditional security experts who know all about that sort of thing like chaos engineering, whatever traditional security experts who know all about that sort of thing like chaos engineering, whatever. They have folks like myself who have a product development background. So you know, it's very, very broad. We have folks that are involved with the military. So we don't directly do advisory but we do attend a lot of. We have some folks that attend a lot of sort of military and policy type of conferences directly on our team, but us at microsoft are involved at that level. So it's it's super, super broad.

Mark Smith: so, yeah, yeah, chaos engineering, but then also like policy and it's like all over the place, just as we hit christmas, I saw a whole bunch of stuff not from microsoft come out around uh, jailbreaking llms and I saw some stuff where, like if you construct your sentence in camel casing and that was the one that stood out to me because I'd seen the term in a developer, you know, in my background, and I was like I would assume and obviously in a call like this we can't talk about the actual techniques and stuff because we're trying to prevent to be used for harm, right?

Joris de Gruyter: I mean a lot of it. There's plenty to be found. A quick Google search away, yeah exactly, exactly.

Mark Smith: But is it a case of you just following patterns and procedures to do this testing, following patterns and procedures to do this testing, or do you find yourself in a situation where you're like I'm going to try, like is there always that you're learning new ways to break?

Mark Smith: Because it's interesting, I just went through a process yesterday of implementing Passkey in my M365 tenant right for my organization and I was just like I watched this video and this person was like well, haven't we just got MFA? And then before before mfa, we had sent a text message and I'm like, and he was like because hackers are always learning new techniques, we always have to learn or create new security measures on the fly. Is the speed of things happening that you're coming up with, hey, based on, based on these things happening last week? I'm going to try this way because you've had a new mindset change or something like that, or a new and, of course, with a diverse team that you have, you're a mounting pot of. One idea is going to spark another idea. How fluid is it and how structured is it?

Joris de Gruyter: I feel like it sort of comes in waves a little bit right, especially at the company. So you know, we do sometimes get involved with OpenAI as well when they release something new. Obviously we're a big partner, so we usually get our hands on that as well and that's usually sort of a spark, right. Let's say, you know, when the O1 stuff came out with the chain of thought reasoning for us, I was like, okay, we need to come up with something new, like how are we going to play with this? How are we going to test this? And then that happens. And then there's usually a bit of a lag and then you start seeing products at Microsoft adopting it or figuring out ways. So we usually have like a little bit of warning that like something new is coming out, because that's sort of different things, right. So one is maybe somebody comes up with a new way of using the existing LLMs in a novel way, like some product innovation, like oh, we're going to do this, and then obviously we'll have to figure out how can we like there's an assumption that we can always jailbreak a model, but what does that mean in the sense of how it's used in the product? Maybe it's not even useful at all that we can show. There's some novelty there based on the product, but then there's also novelty based on just the new ai stuff that's coming out.

Joris de Gruyter: So you know, we had, I think in the fall we were looking into a lot of the audio stuff. Right, that's coming out, sort of the advanced voice stuff. I mean, that's completely new. We need to come up. Okay, can we still jailbreak? Do we just say that the jailbreak out loud, or how does that work? But then you come up with new things like okay, well, same with text actually is like how does it react to different languages or different accents? And that could be a thing with harmful content. But it could also be to your point like in text you're doing capitalization or you're like using emojis to sort of maybe get past certain filters. What does that mean when we're doing audio? And then you got the same with images, right, and now we're starting to see video chat, right, we are basically talking and it can see you, and so there's novelty in all these things.

Joris de Gruyter: And in fact, recently on the team we've sort of separated out because we've been hiring a lot of folks who've been growing just to keep up, and part of. We have now a separate team that can take because, like I said, we usually only have a few weeks to work on these things because, of course, they're on a deadline. They want to release it. So we now have a team that's dedicated to doing sort of longer term things, where maybe we look at a product and we say, hey, this is something novel and we feel pretty good about what we've tested, and then the board can decide to approve it or not. But maybe there's something we want to go further, we want to play more with this, just to learn more or to see more. And then so we now have well, not separate, I mean, we're still sort of together but group of folks that are sort of taking this and doing more research longer term and actually also trying to stay ahead of it, where I think, video right now we're struggling with.

Joris de Gruyter: you know, with text you can generate a lot of text and you can you know, we have our open source tool called a pirate that we can use to send thousands and thousands of messages really quickly to an endpoint to see all the stuff that we get back. How does that go with video, like, are we uploading gigabytes? Can we generate videos? Yeah, can we use something like sorat generate bad videos to send? I mean, there's all sorts of things. So for sure, it's a constantly moving thing and it's a good mix of tried and true stuff, but also being creative on the spot based on what we're doing you can deny or confirm this.

Mark Smith: Up to you. It's an edgy question. Are you playing with o3 yet?

Joris de Gruyter: I would not be able to tell you if I was for sure. But yeah, we do get these Usually. I mean, obviously it's all very tight-lipped. There's tons and tons of NDAs.

Mark Smith: In December they had said that they were getting all their security piece. It wouldn't be released straight away because it was going through that, and they were even inviting, if you're a security researcher.

Joris de Gruyter: I'll say, you know, I obviously follow that news a lot and I'll ping my manager to be like, hey, I saw this thing. And then, even if they know, they'll be like well, if we do, I'll tell you when we do, you know, kind of thing. So I'll get it like last minute, like I think when we got 01, I think that was literally like three or four days before the announcement was yeah.

Joris de Gruyter: Wow, that's not long. I mean, I think there is some effort to do to get them to give it to us a bit earlier, because that's like work over the weekend type of stress, which we'd rather not do too often.

Mark Smith: How much is your view of the world changing? And you know, because this is on the bleeding edge of so much change like we've never seen before. And you were saying, you know, with the image prompting and stuff you could get into some grimy places right with what you're doing. And you know, I've talked with police photographers that you know photograph crime scenes and stuff and they said, listen, this is not something you do a career on. You've got a few years and then your mind is just like I'm done, like I don't want to see any more of humanity this way. And so how much of I know philosophy and the way you look at the world and the, even the separation from sci-fi slash modern day marketing, slash reality. How does that all gel with you? And I'm not saying how does that gel with Microsoft, I'm asking from your perspective and what you can say, how do things sit?

Joris de Gruyter: So I'll touch on the mental health aspect. I mean, that was something that came up even in my interview with the team before I joined. It's like we do deal with the stuff and something to be very much aware of, so there's a high focus on that. I mean, it's part of the job. I wouldn't say that it's something we deal with all the time, but we do deal with it on a regular basis. The good thing is we have great support, so all levels of management, we have a lot of extra resources we can tap into if we need to for mental health support. We do also basically try to make sure we have a very open culture with the team. So it's very clear that everybody has their breaking point or there are things that they just can't deal with, and we're very open with that. We're like, if I have to deal with this or with that, I'm just not going to do it. You have the right for refusal, basically.

Mark Smith: Yeah, yeah.

Joris de Gruyter: Which is great. Obviously, that goes into multiple directions. There's certain content is straight up illegal, so we deal with a lot of the attorneys as well. If we do have to go down that direction, we have to sort of make sure there's certain things we literally cannot test, not just because we wouldn't want to see it, but also because it's downright illegal. And then there's other things that you know we're in the US, there's export controls, there's all sorts of things. So anything to do with like chemical weapons or whatever. We sometimes have to test for those things. We have folks that are specialized in that, but that has to be handled with extra security, like those things cannot be shared with our remote employees. So there's a lot of different levels. But mental health is something we're highly focused on and it's great to see that we're trying to be very open with each other good culture. So so far that's been great. I forgot what the second part of your question was Philosophy view of the world.

Joris de Gruyter: Oh yes, sci-fi marketing reality. Our cvp reminded me the other day that you know, we have to be mindful of our skepticism because basically we deal but nothing but failure modes yeah right.

Joris de Gruyter: So, although I'm still optimistic, I love the prospect of ai, what it can do. You know, I see it do nothing but fail basically on a daily basis, right? So I do have to sometimes take a step back and be like, okay, it's not all bad. I do feel, obviously, there's a lot of marketing, a lot of hype going on. I think there's tons of useful stuff out there as well. I personally do think that I'm still waiting for a killer app. To be honest, I think there's a lot of cool stuff, a lot of useful stuff, but I feel like there's more there. I feel like, even with the technology that exists today, I feel like there could be more interesting things being done that I haven't seen yet. I think a lot of everybody's sort of going for the quick wins. I think Everybody's trying to stay ahead of the game. So things like, you know, summarizing and generating texts and all these things Sure Search yeah, all cool, but I'm still waiting to see something that's like oh my God, this is fantastic yeah.

Mark Smith: For me, it's my digital twin. That's what I'm after. It thinks like me, it acts, behaves like me and therefore it can take a bunch of stuff off, but it'll act as though I am absolutely doing it.

Joris de Gruyter: Yeah, and we'll see. I mean this new sort of agent stuff that's coming out. I mean, some of it is fairly, some of it is just repackaging of what we had last year.

Mark Smith: Repackaging of then statements.

Joris de Gruyter: Yeah, I mean, there's always a level of marketing at every company and you know we're not immune to that either. But there's definitely actual new stuff that's being built at this agent and I'm excited to see what people are going to do with that. I think it does have a positive impact on like accuracy and things like that. It actually I've seen so far positive impact on security as well, but I've also seen negative impact on security where people go overboard without thinking and then you know you have some of these agent systems are basically multiple LLMs in the background talking to each other. You know, can we convince one LLM to poison all the other ones? You know stuff like that we're looking into.

Joris de Gruyter: So it's good and bad, but I think that to me it's an interesting concept, but I feel a lot of it is still very much sort of research and I'm just waiting to see the killer app. I mean, don't get me wrong, a lot of this is cool, right, and a lot of people are saying, well, the goalpost is being moved and in a way, that's true, right. If you would have shown you Chad GPT just five or 10 years ago, you'd be, like wow, that's crazy, that's amazing.

Joris de Gruyter: But then the question today is like, yeah, we're kind of used to it and now we're seeing it's not always accurate or it's still amazing. Technology right, but the productivity part I think everybody's still kind of figuring out for themselves. Where can I use it to make myself more productive? But I think the killer app is what I'm looking forward to seeing.

Mark Smith: Yeah, I read earlier this week that when the US first put man on the moon, that rocket mission was the trajectory, et cetera, of getting there was all. They failed all the way to the moon. Right, and that skepticism, right. It requires the failure to stay on track. It requires the rocket to go off track, to go, hey, you're off track, I need to bring you back on track. But it's about going, hey, we're failing, we're failing, we're failing rather than hey, actually we're making these incremental gains and we're moving forward.

Mark Smith: And I see the same thing about marketing. I can go, oh, that's marketing. And then, but I'm like, yeah, but it's sparking ideas. It's sparking ideas of the future. And you know, organizations move so slowly that I can understand the need to put the marketing down, because in a lot of organizations it'll be three years before they do something about it. But you have to get those tracks laid so that that's starting to become part of the conversation that's had in these organizations, even though, you know, when I think of agentic ai, it's a copy of me, probably five or so years away from that happening.

Joris de Gruyter: You know, that's what I want right now, but I'm very much wanting the future now, but not everybody does so yeah, and I think, like I said, it's a lot of it is research and there's tons of cool research. We circulate a lot of papers amongst our team where people find things online and we sort of discuss it. A lot of cool things coming up. But, to your point, by the time it gets to a product team and then by the time that the product team actually understands what the research is telling them or how to even apply it, it all takes time, right. So I know sometimes it feels that we're in this mode right now, we're just throwing spaghetti at the wall and to a degree there's some of that, for sure.

Joris de Gruyter: But I think what you're saying is correct. It's like also just product engineers trying to like figure out what are the limits of what it can do, what are the limits of when it's useful, right, because there's some things like, oh, this seems like cool, save me time, and they're like it's just annoying, it doesn't save me any time, it's just annoying. And yeah, it's a whole new world also on user research and user experience research, as far as how do we make it useful, how does it not get it like, remember, clippy, right, I think that's the perfect example, where you don't want that thing, keep popping up saying, hey, you need some help with that.

Joris de Gruyter: Yeah, in some contexts it's good, but in some it's get annoying quickly. And so figuring those things out, I think at all levels of product development, needs to be figured out. Not just engineering, but user research to your point. Marketing, yeah, but the competition's fierce, so everybody's yeah full steam ahead.

Mark Smith: What do you know now that you didn't know two years ago?

Joris de Gruyter: Wow, that's a loaded question. Of course that you can share. Well, I mean, I joined this team in April, right of 2024. So I feel like I've learned a ton at a personal level, just the skills, but also just about AI and LLMs, and what do I know? Now I don't know. I honestly I don't know how to answer that question.

Joris de Gruyter: I think, from a skills perspective, lots, lots I mean just from simple things like having to deal with Python, which wasn't really dealing with much before, all the way down to machine learning, things that I had no idea about. Yeah, it's a lot.

Mark Smith: It's cool Hopes and dreams for 2025?.

Joris de Gruyter: Hopes and dreams. I mean, to be fair, when I made this jump to the AI Red team was a bit of a gamble, right. I mean, I've been in biz apps for my whole career pretty much, which is, dare I say it, like more than 20 years at this point. So you're basically saying you know what I'm going to do? Something completely different. Part of it, of course, was like okay, there's tons of investment right now, but still you just don't know right. It's a very different thing than doing product development, but so far it's been absolutely fantastic.

Joris de Gruyter: I love for the previous point how much I've been having to learn and step up and do all those things and just getting to see. I mean, I love technology, I know you do too, so it's just cool to see all the things that people are working on, and I like the diversity of the work, diversity of the team. Like you said, we just sometimes have to put our heads together and then somebody comes out of left field with something you've never heard of before and it's like you know, we have ex-military folks that sometimes say things like wait what?

Joris de Gruyter: And then yeah, we should try that, you know. So that's super cool. So the hopes and dreams, really. At this point I'm still new to this team, but I hope the trajectory continues of what it's been so far. For sure, yeah, that's my big thing, just keep learning.

Mark Smith: Now I'm going to ask for some advice.

Mark Smith: One of the things that I talked to you just before we went on air is about FOBU, which is posted on LinkedIn today the fear of being obsolete.

Mark Smith: And my question is and of course, the concept behind it here is that people worried about how do I'm upskill so that I don't become obsolete in whatever career they're in. We're not just talking about tech, we're talking about anything. If folks are wanting to develop their skills and you've said you've been drinking from the fire hose in the last part year as your skills are developed, if you're in this space or you know, let's say, take me, technology is something that's really, you know, core in my life. What go-do things do you think people should consider as part of their training schedule over the next I'm only asking in the next three to six months, because I just feel everything's evolving so quickly. But if you were saying right now, let's say here I am a buddy and I say to you hey, what do you recommend I go? Go study this, go study that, go check that out what would your advice to me be?

Joris de Gruyter: That's a good question. I think of myself a bit like a generalist. I love to know a lot about different things. I'm typically not like a deep expert in any of them, but I can hold a conversation kind of thing. But also I am a hands-on guy so that to me I feel like, especially with this is something that I would encourage anyone. Like just play with it, try it out, even if it's doesn't seem immediately useful.

Joris de Gruyter: I know my wife. She's not as technical as I am at all. She tries to avoid technology if she can, in the sense that she doesn't want to deal with it or learn it if it's too difficult. But she just got into it herself. Obviously, english is our second language for us, so for her it's been super helpful to help write things in her job. So once she discovered that, she just doesn't want to give it up anymore. But it's one of those things where I had to show her a few times like, hey, here's what you could do. Or she'd come to me. It's like, hey, can you help me with this? Like just let's pull up Copilot and try it out, see if it. And so that's one of the things just playing with it and trying it out, and, of course, you may not even have the access to do it, but there's tons of free stuff out there as well. So I feel like hands on is a good thing. You start feeling out what can I do, what can't I do. It's like Google search, right. When we first started doing searches it was like pure keywords based, and then, as Google you know, we now call it Google foo is like somebody who's very good at doing searches the same kind of thing. You sort of have to figure out how to use it, and I feel it's the same with this getting hands-on.

Joris de Gruyter: And then, for the rest, for me, I love reading. So maybe I'm the only user out there that still uses RSS feeds, but I don't like algorithmic curation too much. So I really do not like timelines on Facebook or whatever it is where stuff gets pushed to you. Based on what you saw yesterday, I was like I don't want to see the same thing, I want to see something new. So I love RSS feeds where I subscribe to technical websites.

Joris de Gruyter: Ars Technica is one of my favorites, but there's a ton of them and I basically I scroll through my RSS feed, like my dad used to read the paper. It's just like scanning the headlines, like, oh, this looks interesting, I'm going to read that. I do a ton of that and I like to curate for myself, not just have something fed to me through a YouTube algorithm or whatever, because I want to discover things that I don't know yet, like and sometimes you don't know what you don't know, right, yeah, and so finding some general websites or podcasts or whatever to follow will spark ideas or things like I didn't know this existed or I've never heard of this. Let me look that up. That's how you learn right, and then trying it hands-on to me is super useful. What's your feed reader of choice? I use Feedly, which I think is now owned by Google.

Mark Smith: Yep, yep, yep. And it came from one of the other best products. I remember when it transitioned through.

Mark Smith: Interesting you say that it's made me go, oh my gosh, because I've just decided. And it's what is it? Generally? The ninth for me, eighth for you, I suppose.

Mark Smith: I've decided to leave all social media this year after being heavily in it last year, like right. One of my challenges I set myself was to do 100 tiktok videos and 100 days of posting a tiktok. So I'm very heavily. I was pretty much posting seven days a week on LinkedIn, blah, blah, blah. I've decided and I've removed all the apps. They're gone. Well, I've just said, oh, you should look at this Instagram reel. I said don't have an app anymore. Come and show me on your phone Right, and because I'm sick of the algorithmic right summer, like when, the minute you said that it was like a light bulb for me, I'm like that's it, that's the right definition of it. And I used to love rss, like six, four, four, maybe four years ago. I feedly, I had it. I'd go in and, as you say, read the scan headlights and you've just shown me, of course, how to bypass algorithm. I'll hit you up for your OPML file.

Joris de Gruyter: Yeah, I mean, I went through mine actually and I got rid of a bunch of dynamic stuff that I'm like, okay not that I want to lose all links there, I love that community but I skimmed through when I got rid of a bunch of stuff and I added a bunch of like security things and AI things. And again you just read things that you're reading and you're like I have no idea what they're talking about, but you just immerse yourself. Right, it's an immersion in a way, and you can pick and choose things that an algorithm would never suggest you read that article. No, because by definition, you don't know what it is and the algorithm knows that you've never looked at it before. So that's is a big thing, yeah.

Mark Smith: I love it. This has been a massively awesome conversation. Thank you so much for taking your time. I look forward to the next one.

Joris de Gruyter: Yeah, absolutely.

Mark Smith: Thanks for having me. Hey, thanks for listening. I'm your host, mark Smith, otherwise known as the NZ365 guy. Is there a guest you would like to see on the show from Microsoft? Please message me on LinkedIn and I'll see what I can do. Final question for you how will you create with Copilot today, ka kite.

 

Joris de Gruyter Profile Photo

Joris de Gruyter

Joris de Gruyter began his career as a Microsoft Dynamics technical consultant and developer in 2002. He is now a senior program manager at Microsoft, focusing on the open-source PowerFx functional language in Power Apps and the Power Platform. Over the years, he has worked on implementations and ISV products in Europe and the USA, taking on roles as a developer, architect, and engineering manager. At Microsoft, he has served as a solution architect, software engineer, and program manager. Joris was recognized as a Microsoft MVP in Business Applications in 2012, 2013, and 2014. He is a Microsoft Certified Trainer Alumni, public speaker, and blogger with a strong passion for Microsoft technologies. He has a particular interest in developer tools and processes, DevOps, and Dev ALM. Additionally, he is a hobbyist game developer.