Protiviti's Scott Laliberte: Regulation of AI and emerging tech should not stifle innovation

Video interview
June 2024

IN BRIEF

  • “[Executives] need guidance, they want the protection, but they want the ability to be able to continue to innovate and innovate quickly, and that’s going to require a real balancing act to make sure that the government’s providing oversight, they’re providing guidance, they’re providing regulations, but they’re not stifling innovation.”
  • “I think that the government is really going to say, ‘We’ve set out the tenets that we want you to abide by. If you’re going to violate those tenets, we’re going to hammer you with existing laws that are on the books,’ and that will be the way that they regulate. They’ll regulate through enforcement actions rather than necessarily legislation.”
  • “Humans have always been the weakest link in the security chain and now AI is going to allow for really sophisticated attacks that you can’t expect any human to be able to decipher or pick out that it’s a fake. So, we got to get really creative and collaborate because the group mind working together and collaborating on how to defend is going to be much stronger than any one individual or any one company.”

ABOUT

Scott Laliberte
Global Leader, Emerging Technology Group
Protiviti

Scott Laliberte is Managing Director and Global Leader of Protiviti’s Emerging Technology Group. Scott and his team enable clients to leverage emerging technologies and methodologies to innovate, helping organizations transform and succeed by focusing on business value and managing risk. His team specializes in many technological areas, including artificial intelligence and machine learning, Internet of Things, cloud, blockchain, and quantum computing. Scott is a published author, accomplished speaker, and quoted subject-matter expert in the area of information systems security. He has been quoted as a security expert in Compliance Week, Computerworld, Financial Times, Securities Industry News and The Wall Street Journal. Before joining Protiviti, Scott was an Information Systems Security Officer for the United States Coast Guard.

Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with Scott Laliberte, Managing Director and Global Leader of Protiviti’s Emerging Technology Group. Scott and his team enable clients to innovate by leveraging emerging technologies, such as AI, machine learning and IoT, among others. In this Q&A, Scott discusses those emerging technologies and if, how, why and when the government should regulate them, and he offers his feedback on a few emerging tech data points from the Protiviti-Oxford global survey on the future of government.

In this interview:

1:03 – Emerging tech findings from the Protiviti-Oxford survey

3:30 – Regulation of AI and lessons from privacy regulation

6:15 – The foundations of AI governance

9:58 – The role of the private sector

12:18 – The 3-5 year outlook


TRANSCRIPT

Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of government, and I’m happy to be joined by my Protiviti colleague, Scott Laliberte, managing director and global leader of Protiviti’s Emerging Technology Group. Scott and his team enable clients to innovate by leveraging emerging technologies such as AI, machine learning and IoT, among others. Scott, thanks so much for joining me today.

Scott Laliberte: Thanks for having me.

Kornik: Scott, as you know, we’ve recently published our Future of Government survey that we do with the University of Oxford, and we found out some interesting things regarding AI and emerging technologies. For starters, more than three-quarters of global business leaders expect emerging technologies such as IoT and AI to substantially transform the delivery of public sector services. Does that finding surprise you at all?

Laliberte: No, that doesn’t surprise me, Joe. I think so many people are encouraged and really looking forward to the advances and the efficiencies that they’ll gain with AI and IoT. AI is enabling businesses and people to do so many things right now, like more efficient predictive analytics, and when you combine that with the power of IoT and some of the data we can get from IoT sensors, we’re going to be able to do some really amazing things with improving services, preventative maintenance and predictive analytics, all the things that will make services in our lives a lot easier.

Kornik: Right. Scott, in the survey, when we asked executives what role, if any, government should play in regulating emerging technologies such as AI and deepfakes, technologies that, as we put in our survey, can disrupt democracies, 82% of business leaders believe government has a role and 53% said that role should be substantial. What do you make of those numbers?

Laliberte: Well, I think not only are folks encouraged by this technology and the breakthroughs that it is creating for us, they’re also scared of the consequences, because those consequences and bad outcomes can be pretty significant. Things like deepfakes, and the ability of AI to make attacks quicker and more effective, are all things that they’re worried about. You combine that with the fact that it’s so new and there’s a lack of standards and regulations and all those types of things, and people are pretty nervous and they’re looking for guidance. On the flipside of that, I think they don’t want to be overregulated as well. They need that guidance, they want the protection, but they want the ability to be able to continue to innovate and innovate quickly, and that’s going to require a real balancing act to make sure that the government’s providing oversight, they’re providing guidance, they’re providing regulations, but they’re not stifling innovation by putting so much on there that companies aren’t going to be able to take advantage of the enhancements and gains that they can get with this technology.

Kornik: Right. Scott, we’ve already seen some government involvement in regulating emerging technologies with two recent SEC enforcements for AI washing, the deceptive marketing tactic which exaggerates the use of AI. Do you think we’ll see more in the future, and if so, what will that look like?

Laliberte: Yes, I think we’re going to see a lot more in the future. It’s really interesting, history tends to repeat itself. When I saw the White House executive order on AI issued back in December — fourth quarter of 2023, it was — it reminded me of the early 2000s when a similar White House executive order came out on privacy. If you remember, in the early 2000s, states were just starting to come up with some of their privacy regulations. California had come out with one. Massachusetts had come out with one. Europe had put out some of their legislation and regulations, but the U.S., federally, didn’t have any guidance or regulations in this area. The White House put out a similar executive order, and then right after that, we saw enforcement actions by the FTC, DOJ and SEC for privacy-related issues, but they related back to laws that already existed, the Computer Fraud and Abuse Act, consumer protection statutes, and things of that nature.

You’re seeing that now as well, these enforcement actions for AI washing. They weren’t brought under AI-specific laws, right; they were brought under existing laws covering consumer fraud and abuse and those types of things. I think we’re going to see the same thing happen here. That executive order basically laid out a bunch of aspirational things, but it set some very common-sense things as well. It said you have to be transparent with what you’re doing with AI. You have to protect customer data. You have to make sure you don’t have bias worked into any of the decisions that you’re making. It had a whole bunch of other things in there, but those were really your core tenets. I think what we’re going to see is if you violate those core tenets, the government’s going to come down on you pretty hard to make an example out of you. That’s what we saw in the early 2000s. I think we’re going to see that again. So, that’s the warning shot and a notice that people need to be taking and really embracing: they’ve got to start putting AI governance initiatives in place, make sure they don’t violate any of those core tenets, and put their companies in a good position to leverage AI, but in a responsible and ethical manner.

Kornik: Scott, from where you sit, what does effective AI governance and compliance look like? How much regulation is appropriate? How much is too much? What should companies be doing right now to prepare for when all this rolls into shape?

Laliberte: Yes. Well, I think what companies need to be doing is really laying down the foundation for AI governance. They need a steering committee. They need a multidisciplinary group of people that can really be tackling this from multiple directions. It’s not just going to be compliance. It’s not just going to be information security. It’s not just going to be legal. It’s going to be all of those groups working together with the business to really set the foundation of how they responsibly use AI to enhance the business.

That’s going to be things like making sure you’ve got the right policies and procedures in place, and that may sound like a daunting task, but when we’ve helped companies map their policies and procedures to some of the AI standards, like the NIST AI Risk Management Framework, you’ll find that 85%, 90% of the standards are already covered by existing policy statements or controls that they have today. They might just need to be reminded that they have those types of things in place and that they apply to AI as well as other technology they already have, but the majority of that is going to be in place. And then making sure that you’re educating your people on how to use it, how to develop with it in a responsible way. If you’re going to be doing risky transactions, making sure that you’ve got ways of ensuring transparency, ensuring there’s no bias, or that you have a human in the loop to make sure that it’s not making decisions based upon incorrect information.

So, laying that foundation is going to be very important, and it’s also going to be important that it continues to evolve because this area is evolving so quickly. So, that’s really on the company side. On the regulatory side of this, the balance is, how do you regulate this, given that it’s evolving so quickly, and not stifle the innovation? Right. So, when you look at Europe, who took a bit of a heavier hand in putting out some very specific guidance, with consequences that will happen if you violate those things, the risk there is that it’s going to slow AI innovation in Europe. I think the United States doesn’t necessarily want to do that because we’re not just competing in the United States, we’re not just competing with Europe, we’re competing with the rest of the globe and many jurisdictions that are not going to put any type of regulations on this whatsoever. So, I think that the federal government is really going to be looking to say, “We’ve set out the tenets that we want you to abide by. Frankly, if you’re going to violate those tenets, we’re going to hammer you with existing laws that are on the books,” and that will be the way that they regulate. They’ll regulate through enforcement actions rather than necessarily legislation. What’s going to complicate that also is that the states are going to continue to put forth their own legislation as well. So, it’s going to be like it was with privacy, where you have a ton of different state laws that you’re trying to navigate, no real federal legislation, and you’re looking at precedent from enforcement actions to try to sense what’s the right direction to go in.

Kornik: Right. You touched on this a bit earlier but let’s talk about the private sector for a minute if we could. They are obviously way out in front on this. Do you see an opportunity for them to sort of lead on regulation or perhaps be integral in working with governments to align strategically to sort of make sure that we get all this right?  

Laliberte: Yes. That’s a tough question. [Laughter] We’ve seen some of the big players already trying to set standards and put out frameworks and things like that: you’ve got Microsoft’s Responsible AI, and Google and AWS — all the big players are putting forth their frameworks and their guidance, and there are many common elements. As you look across those, they align with a lot of the NIST standards and the ISO standards and things like that. I believe we will continue to see large leading organizations like that putting forth guidance, templates, all those things that we can take advantage of, because they want consumers to use the technology, right, because it’s in their best interest for those things to be used aggressively and responsibly. They’ve always been in collaboration with the government, and I think we’ll continue to see that.

I think when you look at the bad things that could happen with AI, I look at it and compare it to, say, ransomware. Right. Ransomware is still a devastating attack vector that we deal with today. When it first came out, people didn’t know what to do. The government worked in collaboration with the private sector and put forth guidance, and it wasn’t necessarily regulation. They didn’t regulate ransomware, but they worked on hardening critical infrastructure and applying lessons learned and guidance from the commercial sector to put forth a really good defensive strategy that could be employed not only by the government but by the private sector as well. I think we’ll see a parallel here, and that’s how we’ll attack this now with AI and other emerging technologies as they start to emerge.

Kornik: Thanks, Scott. Finally, if I were to ask you to sort of look out three to five years or even to the end of the decade, how optimistic are you about all this emerging tech and its role to be a force for good in the world rather than the alternative?

Laliberte: This is a double-edged sword. I am really optimistic about the gains that we will see in society, in business, and in government services with AI and IoT, but especially AI and generative AI. We’re already seeing those gains. You look out three to five years, I mean your imagination is probably the only thing that will limit you in what we’re going to be able to provide and see as services and enhancements.

The other side here, right, the naysayer side of me, or the critical thinker in me, also sees the extreme negatives that could happen, and we have to be prepared for that. We’ve seen that over the years. I’ve been doing this for 30 years and it’s always the same: a new technology comes out that has great promise, and it also could be used for really bad things. Security professionals, technology professionals such as myself and others, we need to be thinking about how the bad guys are going to try and use this against us. It’s that cat-and-mouse game, a game of chess of move and countermove, where you’re trying to predict three moves ahead so that you can stay ahead of the bad guys and the cybercriminals out there. It is going to be a challenge. We already see accelerated attacks. Things like deepfakes are really scary when you think about how we defend against that. Humans have always been the weakest link in the security chain and now AI is going to allow for really sophisticated attacks that you can’t expect any human to be able to decipher or pick out that it’s a fake. So, we got to get really creative and think outside of the box and collaborate, because the group mind working together and collaborating on how to defend this stuff is going to be much stronger than anything any one individual or any one company is going to be able to do. We’ve seen that collaboration in the past. We’ve seen the ISACs and we’ve seen the government and private sector collaboration, and that is going to be more important than ever as we move into these new waters and new territories that we have to work to combat against.

Kornik: Well, Scott, let’s hope we get this right, huh?

Laliberte: Let’s hope. We will. We have. We always have figured it out. There’ll be some pain along the way. There’ll be some very difficult lessons learned, but with each one of those we take those lessons learned and we apply them to the future to get better and stronger, and we’ll continue to succeed.

Kornik: Thanks, Scott. Appreciate your time today. I really enjoyed our conversation.

Laliberte: Me, too. Thank you very much.

Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Scott Laliberte, I’m Joe Kornik. We’ll see you next time.
