Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith

Video interview
October 2024

IN BRIEF

  • “When we're talking about cyber-attacks, data breaches, intellectual property theft, whichever way you want to look at it, ultimately it'll come down to one thing, which is effective data governance.”
  • “A couple of months ago Microsoft actually issued some research in conjunction with OpenAI, as we're talking about ChatGPT. What we identified, if we took some of the larger nation state actors, they're using these models to do reconnaissance so that they're learning about their targets, and they're also using those large language models to refine their attacks.”
  • “We are over five years since GDPR came into being, and other regulations around the world are all coming up to varying degrees. Has it made any difference? I'm not sure. Arguably, I think it's going to have to get much worse before it gets better.”

In this VISION by Protiviti interview, Protiviti’s Roland Carandang, Managing Director in the London office and one of the firm’s global leaders for innovation, security and privacy, sits down with Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor and author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. The two discuss Microsoft’s data governance strategies in the face of elevated risk, the impact of AI and emerging technology, and what steps business leaders should be taking to build out a strategic security plan.

In this interview:

1:04 – What are the biggest threats to privacy?

2:58 – How AI changes the game: pros and cons

7:00 – Microsoft’s role in protecting customers’ privacy

10:18 – Thinking like a cyber criminal

15:35 – Will it get worse before it gets better?


Transcript


Joe Kornik: Welcome to the VISION by Protiviti interview. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today we're exploring the future of privacy, and we welcome in Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor, and author of “Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats.” Today, she'll be speaking with my Protiviti colleague Roland Carandang, Managing Director in the London office and one of our global leaders for innovation, security, and privacy. Roland, I'll turn it over to you to begin.

Roland Carandang: Thanks so much Joe. Sarah, welcome. Congratulations on the publication of your latest book and thank you so much for being with us today.

Sarah Armstrong-Smith: It's great to be here. Thank you.

Carandang: I'm going to dive in with a very big question just to start things off. What do you see as the biggest threats to data privacy right now and what are some things that executives and boards should be focused on?

Armstrong-Smith: Yes. Well, I think I'm going to go for the easy option to start with: being a Chief Security Advisor at Microsoft, it has to be the scope and scale of cyber-attacks. They're at a range that we have never seen before, just in terms of the ferocity of those different types of threat actors. What are they doing? What are they after in particular? When we talk about cyber-attacks, we've then got to think about what those threat actors are after. In essence, they're asking, how do I monetize my return on investment? Some of those are financially motivated actors, some might be espionage or nation state actors, some are activists, but ultimately, it's all about data, and that's something we've really got to be cognizant about. So whenever we've had a cyber-attack, we then have to think about the data breaches and what that means for the individuals who may be impacted by that cyber-attack as well.

Then we have questions that no doubt have to be answered, maybe that’s through regulators, our own business, our customers, partners, with regards to what data, how much data, and what's the impact of that. If I took all of that combined, when we're talking about cyber attacks, data breaches, intellectual property theft, whichever way you want to look at it, ultimately it'll come down to one thing, which is effective data governance. I would really say, what data, where is it, what is the value of that data, and what are my expectations, not just from regulators but consumers and employees as well, about how I should be protecting that data no matter what is on the horizon?

Carandang: On VISION by Protiviti we often talk about AI, and I know that's something that's on your mind. Ultimately, what impact do you think AI will have on data privacy and data security? Is there anything that business leaders should be doing to prepare for that now?

Armstrong-Smith: Well, I think with any technology there are always pros and cons. So let's start with the pros. Ultimately, think about the ability of AI and machine learning to provide really deep insights across large data sets. One of the biggest challenges that a lot of companies have, reflecting on where we started, is where's my data, how much data do I have, how much data exposure do I have? It's getting those really deep insights, but also thinking about how I can use that data to drive innovation.

There's no doubt, when we're thinking about AI, about just the scale of innovation that we've seen over the last couple of years. We're seeing tremendous work with regard to breakthroughs in science, medicine, and technology. So there's absolutely no doubt that there are some huge positive impacts for a lot of companies.

Now, I'll go to the cons, so kind of the reverse of that, in particular when we think about generative AI. That's only been around in the last couple of years, probably made famous by ChatGPT, though there are multiple other AI models. Then we've got to think about how that was actually trained and where that data came from. Some of the data, let's say, might have been scraped off the Internet. It could have been taken from social media. There are multiple places this data has come from, and that has raised a lot of questions again about what data, where did that data come from, do I have any say in that data in terms of consent, legitimate interest and all of these types of things. Again, I can reflect back to the first question with regard to the cyber attackers and how they are thinking about amplifying their cyber-attacks with some of these large language models. Again, I think from a nation state perspective, these are highly resourced, highly motivated threat actors.

Now, a couple of months ago Microsoft actually issued some research in conjunction with OpenAI, as we're talking about ChatGPT. What we identified, if we took some of the larger nation state actors, is that they're using these models to do reconnaissance, so they're learning about their targets, and they're also using those large language models to refine their attacks. So this is just a caveat that the AI itself is not doing anything bad. It's not a naughty AI. It's still a tool in the threat actor's kit bag. When we're talking about phishing, ransomware, malware, whatever the case may be, the AI is just another tool, if you think about it that way. When I think about AI, I know there are a lot of companies that are spinning up R&D centers and innovation teams, thinking about the art of the possible. Maybe they are building their own models or they're buying them, whatever the case. There are some really fundamental things, as we're talking about privacy in particular, and that's responsible and ethical AI. It's really having a deep appreciation for those security and privacy implications, the potential detriment of some of those large language models and how they're being utilized, but also continuing to think about privacy-enhancing technology. So having encryption, how we're thinking about managing the data or the data when it's exfiltrated… none of those things change just because we have some new technologies, right? We can't lose sight of the fundamentals, the foundation layers if you like, of security and privacy in particular.

Carandang: That's super interesting, Sarah. Microsoft clearly has a big role to play—it sounds like such an understatement—in AI, but it also has lots of customers, and lots of customer data. Since you mentioned it, can you tell us a bit more about your role at Microsoft and how a company—you mentioned large data sets—how a company like that deals with protecting its customer data? How do you spend your days, and perhaps some of your nights as well?

Armstrong-Smith: Can I say, it's never a dull day, let's say, being at a big tech company. If I had to talk about my role first and foremost, in essence, my role is to liaise with our largest enterprise customers across Europe. I work multi-country and multi-sector, and it's really at that C-suite level. I can be talking to CISOs, CIOs, CTOs. It's really understanding those biggest challenges. Some of that we've already touched on. We've talked about cybersecurity, cyber-attacks, how they're evolving. We've talked about evolving technology, particularly when it comes to AI, responsible AI and all of these things, but it all fundamentally comes down to data and really understanding the value and the proposition of all of this big tech together.

Now, if we look at the cloud in its most simplistic form, irrespective of the size of the enterprises that we're talking about (although I operate at that enterprise level, there are obviously lots of different small enterprises and consumers who are utilizing the cloud), I would say the real value comes down to the shared responsibility model first and foremost. If you thought about having your own data center or your own services, you're responsible for everything. You're responsible for the building, the infrastructure, the networks, all of the data, all of these things. The big difference when you move to the cloud, and some of that comes down to the type of cloud or SaaS services or whatever the case may be, is the shared responsibility model, which just means the cloud platform itself is the accountability of the cloud service provider. So in essence that infrastructure—patching, backups, recovery—won’t completely go away, but it's one of those things that you don't necessarily have to think about.

The other part of that shared responsibility model: if you think about all of the different companies across the globe, some of those are highly regulated entities, and those regulations are going to differ depending on what country or even what region they're in. So, for customers to be able to adopt the cloud, Microsoft also has to have a very comprehensive compliance portfolio. Whether we're talking about GDPR or various different standards like NIST, for example, the underpinning platform first and foremost has to have all of those controls in place for you to take advantage of. There's a huge advantage right out of the box, I'd say, in terms of the inbuilt capability that's already there by standard and by default. The challenge, however, is you have to take advantage of it. It still means you're accountable for who's accessing that data and what data you put into the cloud.

Carandang: We mentioned your new book in the introduction, Understand the Cyber Attacker Mindset, which dives right into global cybercrime. You've engaged with actual cyber criminals. What are some of the key takeaways you learned from engaging with those cyber criminals that you could share with the audience here?

Armstrong-Smith: I think what's interesting to me, and why I wrote it, is to really focus on the human part of security. When we think about security, a lot of people think we're here to protect data, to protect technology and servers in the cloud and all of these things, but actually, the data only has a value when we understand the repercussions of some of that data being in the wrong hands and how it could be misused or abused in various different ways. As we talked about at the beginning, there are a million and one ways in which I could potentially attack you, but there's only a finite number of reasons why I would want to, and why I'm motivated enough to want to do it.

So I looked at the different types of threat actors. As I said, we've got some that are financially motivated, we've got activists, nation state actors, and we've got malicious insiders as well. Then it's the same data but in different hands: what is the impact of that? Then it's being able to work backwards and say, “OK, well, if someone was trying to sell this data, if someone was trying to use this data for espionage, if someone was trying to use it for other nefarious purposes, what do I need to do to protect it in all of those different hands, in essence?” That's really important, to understand the human motivation behind it and why they are willing to go to that extra degree to get their hands on that data.

So I think about it very simplistically, no matter what size organization we're talking about, from the little ones up to the big enterprises, and I try to keep it quite simple. Our strategy in essence comes down to protecting the access in and the exit out. The access in is identity. As we're talking about privacy, it's identity in all its guises: identity as a human and identity of things, so laptops and devices and various things like that. In essence, from the threat actor's perspective, I have to find a way into your network. I don't particularly care how I get in, whether I'm trying those phishing emails, going directly to the source, or I've found a vulnerability in your network. I will find any which way into that network. The exit out then really comes down to the data. What is it I'm trying to exfiltrate out of your company that's giving me that value in particular?

Carandang: Thank you, Sarah. That's fantastic. You mentioned scale earlier. With the amount of data, and attacks on data, growing exponentially day by day, I do wonder if it's time for some bold paradigm shifts. Do you see any of these shifts on the horizon? For example, can you imagine consumers starting to pay small fees for otherwise free services, so companies won't need to sell that data to third parties?

Armstrong-Smith: I think we're going to see that a little bit. I think people are starting to pay for subscription services where it's a highly tailored service. They don't get adverts, or the adverts they do get are more tailored. We are starting to see people who want an enriched service. But I think the challenge we have as well is that a lot of this technology, particularly when we’re talking about social media, has been around for a very long time and it's been free for a very long time. Even when we know it is free, you've heard the comment that you are, in essence, the commodity: there's data, there's profiling that's being sold to varying degrees across different companies depending on how you're interacting with some of their services.

I think the interesting thing is, even when we've spoken about the size of some of the cyber-attacks, the size of some of the data breaches, the fact that we've had these regulations, the fact that we've had record-breaking fines as a result of misuse or abuse of data and the selling of data in various different ways, has it actually stopped people from using it? I would argue not. Maybe there's a handful of people who are a bit mindful of it. I think you'll get pockets of people who want a better service, and you could sell it as a better, enriched service in some way; maybe you'll have those kinds of people who might want to do that, but overall, I can't see it happening to a large extent.

Carandang: Got it. Thank you, Sarah. We've covered a lot today. I wanted to ask your overall feelings on maybe the next five years or so. Take us out to 2030. Tell us what you see. Are we in a better place? How well will we have done with this endeavor?

Armstrong-Smith: I think it's interesting, isn't it? We talked about GDPR and how long that's been around. We are over five years since GDPR came into being, and other regulations around the world are all coming up to varying degrees. Has it made any difference? I'm not sure. Arguably, I think it's going to have to get much worse before it gets better, but I do think there is some positive coming as well. I would frame that with where we started, when we're talking about cybersecurity and what's the game changer.

I think what we have seen is this willingness for more collaboration, across big tech but also across multiple different countries and jurisdictions. Particularly when we think about different actors moving data around, money laundering, people hiding in plain sight, it's really hard to bring a lot of these people to justice. Therefore, what we have seen in the last couple of years, as I said, is that willingness to collaborate, the willingness to share intelligence, and to really think about some of these core principles we've been talking about, coming back to those foundational levels. How do we have security and privacy by design, by default and as standard, so that nobody questions all of these things that have to be added on? Are you doing it for the right reasons? It just is.

So, as I said, there's going to be a lot more work. It's not going to be easy. I have a tiny bit of optimism that we can tip the balance, but I just want to be realistic at the same time, not underestimating how much work is involved.

Carandang: That’s brilliant, Sarah. Thank you so much for your time and insight today. You've been very generous. Thank you for the great work you're doing more generally, and congratulations again on your book. Joe, back to you.

Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Roland and Sarah, I'm Joe Kornik. We'll see you next time.


Sarah Armstrong-Smith is Microsoft’s chief security advisor for EMEA and an independent board advisor on cybersecurity strategies. Sarah has led a long and impactful career guiding businesses through digital attacks and specializing in disaster recovery and crisis management. Sarah is the author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. Prior to Microsoft, she was Group Head for Business Resilience & Crisis Management at The London Stock Exchange and Head of Continuity & Resilience, Enterprise & Cyber Security at Fujitsu.

Sarah Armstrong-Smith
Chief Security Advisor, Microsoft

Roland Carandang is a Managing Director in Protiviti’s London office and one of the firm’s global leaders for innovation, security and privacy. He leads a world-class consulting team focused on modernizing and protecting businesses where he helps clients understand, implement and operate technology-based capabilities and takes pride in helping clients navigate an increasingly complex world. He collaborates across the Protiviti and Robert Half enterprise to ensure we are solving the right problems in the right way.

Roland Carandang
Managing Director, Protiviti