Panel discussion: Protiviti hosts Forum on the Future of Money and Privacy

Panel discussion
December 2024

IN BRIEF

  • “You really have to drive that value proposition with your boards and with your senior leadership teams to help them to understand how these strategic initiatives and how furthering the privacy posture of an organization can really make a differentiation when it comes to sales.”
  • “Everything that we see in the news [with regard to AI] is doing one of two things; it’s either scaring people, so they're afraid to engage with the technology, or they're saying it's not a big deal, it’s just an incremental build, and I would say I'm more aligned to the ‘it’s an incremental increase in risk to all of the different control functions.’”
  • “The EU is known to be very regulatory-heavy. You have GDPR, you have the AI Act, you have the Digital Services Act, the Digital Markets Act. I honestly can't track. So, I'm waiting for the day when a major multinational company will just say, ‘We're over this and we're pulling out.’”

In this VISION by Protiviti podcast, we present a panel discussion hosted by Protiviti Senior Managing Director Tom Moore. The discussion was held in New York in November as part of VISION by Protiviti’s Forum on the Future of Money and Privacy with Protiviti partners, The Women’s Bond Club and Société Générale, the host of the live event. Tom leads a lively discussion among panelists Heather Federman, Head of Privacy and Product Counsel at Signifyd; Stephanie Schmidt, Global Chief Privacy Officer and Head of Data Compliance (AI and Cyber) at Prudential Financial; and David Gotard, Chief Information Security Officer at Société Générale.


Transcript

Joe Kornik: Welcome to the VISION by Protiviti podcast. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we present a panel discussion hosted by Protiviti’s Tom Moore. The discussion was held in New York City last month as part of our VISION by Protiviti Forum on the Future of Money and Privacy, with Protiviti’s partners, the Women’s Bond Club and Société Générale, the host of the live event. Here’s Tom, kicking off the panel discussion.

Tom Moore: I’m Tom Moore, I’m a Senior Managing Director at Protiviti. I’ve been with the firm just under a year. Prior to that, I served AT&T for 33 years in a diversified career, the last five of which I was the Chief Privacy Officer. AT&T at that time was very diverse and had TV, entertainment, gaming, you name it, in addition to what is now just the mobile and internet company. I say “just,” it's a Fortune 10 company. I had a great career there, but now I am serving clients across the spectrum as a member of the security and privacy practice with a focus on privacy.

With that, I'm going to ask each of our panelists to introduce themselves. Heather?

Heather Federman: Hello. I'm Heather Federman. I am the Head of Privacy and Product Counsel at Signifyd. Signifyd is a vendor that basically helps companies with their fraud protection. Our customers are the merchants, but we work closely with the financial institutions as well to help authorize more legitimate transactions and weed out the bad transactions. So, we sit in that little zone in between, and it's an uncomfortable but interesting place, I'll say, between the merchants and the banks. Prior to Signifyd, I was at a different company called BigID, which deals with data management, data governance, and data privacy for various enterprises. I've also been on privacy teams at Macy's and American Express. I started my career on the policy side of privacy, so it's always interesting to see what's happening regulatory-wise, and I'm excited to be here today. Thank you.

Moore: Stephanie?

Stephanie Schmidt: Awesome. Good evening, I should say. I’m Stephanie Schmidt. I am the Global Chief Privacy Officer for Prudential Financial. I am also the Head of our Data Compliance organization, which includes building out compliance for cyber and also AI, so it's been an interesting year, as you can imagine. Prudential is a global company with 40,000 employees, helping bring financial wellness across the industry. I’ve been in a number of, I'll call them control partner, check-the-box sort of roles. I am a recovering auditor, as you can imagine, and I've also worked in operational risk and compliance. I'm very excited to be here. Thanks, Tom.

Moore: Thanks, Stephanie. David?

David Gotard: Hi, good evening. I’m David Gotard and I am the Chief Information Security Officer for Société Générale for the Americas. For those who are unfamiliar, we’re a global investment bank with retail and wholesale banking activities. I've been involved in financial services for the better part of my career. I’ve worked at the big-name banks you can probably think of, mostly on the IT side, and then decided that trying to protect data and systems was the direction I was interested in going, so I found myself in this space. So, I’m happy to be here.

Moore: All right. You can see we've got a tremendous blend of experience here and I’m looking forward to this. We're going to talk about AI, we’re going to talk about regulation, maybe peel back a little bit what it looks like in the C-suite talking about privacy and security, but let's ease into the topic. Panelists, and I'll start with you, Heather, what generally should financial services companies be thinking about in terms of their privacy program?

Federman: For me, I always like to go back to the Fair Information Practice Principles. These were created several decades ago, and they've been codified in various ways through laws, other principle sets, and company practices, but essentially they list out the fundamentals of privacy: transparency, individual participation, accountability, security; and a lot of the regulations have adopted them as well.

So, to me it's a very principles-based approach, and each company is going to be very different on what's important and how they apply it. So, there is no bulletproof strategy for any one financial institution or company; again, it comes down to what your risk profile is and how you're applying these principles.

Moore: Stephanie, would you like to add?

Schmidt: We typically think about privacy risk through three lenses: certainly where your organization is, a data-driven perspective, and then the regulatory landscape that you have to engage with. So, as you can imagine, across the areas that I support today, AI and privacy have this really interesting intersection where they're competing for things like consent and transparency and upping the game, and then we also think about how aware consumers are. It was really interesting to see the statistics that were just put on the board, but all of this is wrapped around how you operationalize your privacy programs.

So, the controls that have to be in place to support how your company views those three lenses are really important, because they need to be just as far ahead when you think about the digital landscape and data holistically. Gone are the days of the manual inventories and things like that. We really need to be thinking about how we automate privacy controls, similar to how the businesses are doing business with AI. Not looking to put ourselves out of a job, obviously, but that's the goal: to minimize how many manual efforts it takes to comply with the varying privacy compliance requirements.

Moore: Excellent. David, you come at it from a little different perspective, from the security side rather than the privacy side. Give us your viewpoint of what financial institutions, given you're a part of one, should be doing to protect the data from the security perspective.

Gotard: Sure, yes. In the information security space, there’s a similar principle that we apply. It’s called the CIA triad: confidentiality, integrity, and availability are really at the center of what the cybersecurity program is intended to protect. So, working in partnership with the data privacy function to ensure that we can provide that type of CIA coverage is very, very important. We have very similar interests in terms of identifying the data that needs to be protected, ensuring that its integrity is preserved, and ensuring that its availability and confidentiality are also taken into account.

Moore: My best friend at AT&T was the chief security officer. We spoke regularly. The two topics are intertwined, and that's why we're here today.

Schmidt: We're best friends already. [Laughter]

Moore: All right. We have already mentioned regulation, and that's important for financial institutions, which are obviously heavily regulated. Heather, I'm going to start with you. There's been a flurry of privacy laws. GDPR, which many of you have heard of, took effect in 2018, followed by the law in California, the CCPA. Now we're up to 18 to 20 states with privacy laws. How do companies keep up with that? What kind of tools should they have in place to prepare themselves for that changing environment?

Federman: Well, they could start with hiring Protiviti or a good outside counsel, [Laughter] but in all seriousness, I think for each company, again it's understanding what the particular risk profile is for your sector, your company, but then also understanding what is the risk within each region or how those laws are actually enforced. In some places you might have really active regulators and they could be poking around a lot. There are other places where they might enforce one or two really big cases a year, because that's just the only budget they have.

I think, Stephanie, you can probably speak a bit more to this, but among the privacy regulations, at least in America, the California one is the only one that touches financial data, and it does so in a weird, kludgy way; financial data is basically exempt from all the other state privacy laws that are coming out. So again, that goes back to understanding what the risk profile is for all these regions and determining how you are going to apply the various standards across these regulations.

Moore: To that point, Heather, I saw the CFPB came out, I think it was just a week ago, with a report criticizing the states that have passed privacy legislation for exempting financial data. Stephanie, is that the right way for the CFPB to go about this?

Schmidt: My personal opinion, [Laughter] it does make it really hard to think about what your external posture is going to be for your company, right?

I think what we find is that if you look at the data that you hold as a company, very often companies overlook their employee data. So, I would definitely say, go back and look at that, because when you combine globally where you have employees based with where you engage with consumers, or prospects, or customers, that creates a road map for you. And I love the principles-based approach that you talked about. That is what I would call baseline foundational: “What are we going to do about privacy?”

So, going back to that original piece I was talking about with those three lenses, companies have to decide, “What is our external posture going to be?” Even though we don't have to honor individual rights in the U.S. or in other jurisdictions, is that the right solution for our customers or for our employees? Is that who we want to be as a company? Is that going to be the right thing to do?

So, you really have to drive that value proposition with your boards and with your senior leadership teams to help them to understand how these strategic initiatives and how furthering the privacy posture of an organization can really make a differentiation when it comes to sales. Maybe you get that additional client because they understand how important privacy is to you, or because you’ve offered their customers choices about how they're going to engage with you as a company. So, I do think it creates a very unique opportunity for companies now.

Moore: A customer-centric approach to privacy versus a compliance-based one. I love it.

Schmidt: Absolutely, yes.

Moore: Stephanie, we’ll stay with you for a minute. We just had an election in the U.S. and obviously a new administration coming in January, changes to the Senate composition as well. Do you see anything happening in terms of the momentum around privacy law in the next few years?

Schmidt: That's a loaded question. [Laughter]

Moore: It is.

Schmidt: Personally, again, it's going to be really interesting. I think we're going to see a lot more states driving change, and I would tell you from my seat, even though we have a principles-based approach, I'm looking at the operational complexities as they relate to how the laws require us to deploy privacy compliance, things like opt-outs for sensitive personal information and how they handle that. Is it opt-in by design or opt-out by design? Do I have to go in and say, “Yes, you can use my sensitive personal information,” or are you just going to use it and not tell me about it? Then overlay artificial intelligence regulations, where you may need to collect consent to use artificial intelligence, or tell people that you're using it.

So, it does create this really complex landscape on how you actually operationalize those privacy controls. So, definitely an opportunity to step back and say, what's going to be our high watermark and how do we go about execution, and then what's the value proposition both externally and internally to your company.

Federman: Just to follow up on that, though: do you decide to do opt-ins for one state versus opt-outs for another state, or just take the strictest-standard approach, or only honor employee requests in California because no other state law requires it? Again, it's a determination that each institution needs to make on its own, but it's part of that thinking.

Moore: David, the privacy world is not the only one that has seen an onslaught of regulation and laws. Security has as well, especially around notification requirements. Tell us a little bit about how financial services companies should harden themselves against regulation, or just comply with it.

Gotard: Yes, I think our landscape is similar to privacy in that there are a myriad of regulations that are enacted. They differ by different jurisdictions or just within the Americas here, operating within the United States or within a particular state within the U.S. versus our teams in Canada, our business operations there, and in South America. It's a different situation everywhere you turn, but what we've seen over the last 18 months, two years, is an ever-increasing focus by regulators on the implementation of existing regulations, as well as increasing the expectations.

You mentioned the SEC and the requirement to report incidents, quite a controversial element of the regulations as well. If you had a material cybersecurity incident, you need to disclose it to the public so that they know, as an investing public, that you had this breach. But the firms are saying, “Hold on. If I tell what's going on to that level of detail, that is just going to open us up to more attackers coming in.” So, you find this balance that's trying to be struck between transparency to investors, for example, and trying to provide safety, from a cyber perspective, to the systems that they're relying on for managing their financial services.

Schmidt: If I can add to that, it's who do you tell first, because all the regulators, if you operate globally across all the jurisdictions, want to know within a certain period of time, and what do you tell, and how do you tell them? There's a consistency factor that comes into play as well, and who makes the decision to notify? From the standpoint of how you operationalize incident response, it's incredibly important to make sure that you understand who has that decision-making authority, who drafts the language, whether you're talking to the lawyers, and whether you're being consistent and logical in your explanation of why you notified this regulator before that regulator, because they will ask. Absolutely.

Gotard: You need a global strategy if you're operating in that type of landscape.

Schmidt: Yes.

Moore: From both Heather and David, and from Stephanie, we heard about decision points. Do you apply one approach universally? That might be costly, since you may be extending rights to consumers who aren't entitled to them by statute, but a single approach is also a little easier to operationalize. It's a tough decision for enterprises, because with one-size-fits-all you're necessarily subject to the least common, or worst common, denominator of international law. But it's a great point.

We’ve talked about AI a couple of times. We're going to spend some time on this, and if there are questions from the audience, we’ll put them in the slide there and we'll get to them later. David, I'm going to stay with you for just a second. Artificial intelligence is becoming increasingly critical to all of our operations and can help, but as we saw from the survey data, there's either hubris or magical thinking about what it can really do and the harm that it might cause. Is artificial intelligence a help in the security world? Give us your perspective.

Gotard: That's a great question. Yes, this seems to get a lot of attention these days. I think like every new technology that gets introduced, whether it was our cell phones, or the internet, or video conferencing, for example, depending on someone's motivations, it can be used to advance things or become a weapon against an institution or a government, for example, and I view artificial intelligence as just another evolution of that type of game. So, that arms race of how do you leverage the tool for your own purposes, and how do you protect yourself against misuse and abuse of the technology is at the forefront of everyone's mind with artificial intelligence.

You mentioned earlier, I'm sorry, Joe mentioned this earlier, how quickly the threat actors can move relative to regulators and even institutions, right? They are not hampered by the type of constraints that we have. They are very nimble in how they operate. So, I do expect, and we already see, the use of artificial intelligence as a weapon against institutions, to exploit vulnerabilities, to gain footholds through advanced social engineering attacks. There have been some that hit the newspapers that have been quite shocking in terms of how effective they were, even against people who were aware that social engineering attacks could be perpetrated this way and still fell victim. Then there's the use of it internally, both as a counterweapon to the threat actors and as a business-enabling tool. That's where we're going to see the next phases of this.

Moore: Stephanie, AI, net good or net bad?

Schmidt: I think it depends on the day. [Laughter] I will say that everything that we see in the news is doing one of two things; it’s either scaring people, so they're afraid to engage with the technology, or they're saying it's not a big deal, it’s just an incremental build, and I would say I'm more aligned to the ‘it’s an incremental increase in risk to all of the different control functions.’

If you think about how you engage with third parties, if you think about information security and cyber and privacy, we have seen, and I think will continue to see, privacy subject-matter experts across the industry dealing with an increase in just the volume of use cases coming through. So, as you think about what it takes to operationalize and assess privacy risk in the AI capabilities that your company is investing in, that's driving a significant increase in the amount of time required from your privacy teams. Think about it even from the security perspective: all of the control partners now need to review whatever that is before it can be deployed. So, things like data flows and inventories are more and more important.

So, I go back to my original point of, you have to automate your privacy controls and your security controls to keep up with the evolving technologies so that your control partners can actually step back and advise directionally and strategically on where the organization needs to go. That would be my point.

Moore: I think everybody in financial services understands the idea of risk and risk analysis. You mentioned assessments. Tell everybody a little bit about what privacy folks kind of do behind the scenes in a privacy impact assessment.

Schmidt: Sure. If you think about the data life cycle, sort of collection through disposal, there are a lot of integration points that we have to review, and there are regulatory drivers for why we complete privacy impact assessments as well. But at the end of the day, you really have to understand how you're operationalizing it, whether it's a third-party relationship, a new technology, or an AI capability; whatever that is, you really have to understand how it's going to impact your business.

So, looking at the data flows specifically, even if you think about AI, you have to look at how you collected that data. Did you purchase the data? Did someone give you consent to use that data? Is it a secondary use of that data? What’s going into the model? Is the model being trained using that data, and then on the back end, is that data then collecting new data from customers like in a chatbot situation? It is a full lifecycle of review that has to happen, and those privacy impact assessments help to assess the level of risk and determine what controls need to be in place.

So, the automation of those privacy controls helps offset the, I'll call it the manual effort around those impact assessments, but it will never fully eliminate, for example, a lawyer looking at privacy notices to determine if we've told somebody how we're going to use their data and whether that use of AI is now included in that notice, or if we're collecting the right consent, if we have contractual agreements in place, or if we're relying on terms and conditions. All of that really matters now more than ever, with the introduction of generative AI and artificial intelligence more broadly.

So, I think that's where companies are struggling to say, what is the incremental risk that this presents to my organization based upon how we want to use AI, and then ultimately, are we willing to accept that risk, or what level of control partner deployment do we need to put against that risk?

Moore: Heather, I hear a frequent question from clients and others talking about how can AI be used internally to help us comply with all these laws and regulations? Is there some way that it can be deployed to assist in the compliance effort?

Federman: Well, I'm sure you're going to find a lot of vendors trying to sell you on their AI solutions for compliance and privacy and security. That's already starting to happen, and it's definitely going to explode in the next year because AI is the big buzzword. But just to add: AI is an umbrella term. My company has actually been using machine learning technology for the last decade. Machine learning falls under the umbrella of AI, but it's generative AI and large language models, which have exploded in the last year, that are creating a lot of this hype today.

So, to start with, it's important to understand what type of AI is actually in play and what we're trying to help with or moderate here. I think there are some really interesting solutions out there; maybe even from a research perspective it can be helpful, although one of the risks with AI, or generative AI, is making sure that the research is real and not just hallucinations, as we've seen a few times. So, it's really understanding what is being sold to you, and that goes for any sort of solution.

I would also add as a side note: there was a talk that Ben Affleck was part of the other day where he was talking about AI for entertainment purposes and how AI is never really going to replace movies or the artist. I really liked some of his comparisons and his way of thinking about the risks for his industry. So, I would recommend that as a good two-and-a-half minutes of your time and a way to think about AI: there are pros and there are cons, and it can be used, I think, for the three of us, but we'll probably be asking a lot of questions when we're assessing the [Audio Gap] actually doing, again, to do a little bit more research when you're hearing the buzzwords around AI.

Moore: Excellent. Let's move on to the C-suite for a second. I believe members of this audience are C-suite members already, or are certainly aspiring to be, and may be interested in what happens in the C-suite around discussions about privacy and security, maybe even at the board level.

Stephanie, let me start with you. I presume you've presented to your C-suite, your CEO, maybe even the board. Can you tell us about that experience? What data did you present? What questions did you get asked? Pull back the curtain a little bit.

Schmidt: Yes. I would say generally, in my experience, you have to answer the “So what?” question. Most boards or most senior leadership teams really want to understand, at the end of the day, what is the impact to the business, to the revenue, to the customers? So, jump to that point and work backwards to build out that value proposition that we talked about before. Is it going to enhance the brand? Is it going to enhance trust with customers or employees? What is the story or narrative that you're able to draw the thread through so that they can follow how and why you're building out your program the way that you are?

Trust me when I say it's easier said than done. Even with the years of experience this panel has, we still struggle, because there are seat changes, expense pressures, things like that. So, you're constantly, just like in any other role, having to retrofit your perspective and how you think about maturing your program based on the environment around you.

Moore: David, any differences in the security world with respect to how you talk to your C-suite?

Gotard: More similarities than differences, for sure. The “So what” factor, whether it's business impact, regulatory impact, or customer trust impact, that's really what they want to know at the end of the day, whether we're talking about a cybersecurity risk or a data privacy risk, if I can speak on your behalf. And trying to translate the very complex, in my case sometimes very technical, elements of a cybersecurity risk into something that a senior manager at the organization or a board member can relate to is an enormous challenge.

It's something that we struggle with all the time, given the changes that go on in the environment, but also within management and even among board members. Even who it is that presented before you matters: What stage did they set with that audience beforehand? Did the audience respond to that well? Do they look for changes? Trying to connect with members of the board on the side and understand what they see working, where they would like more information, and where they'd like less detail is a way to shape that message in a manner that resonates with them.

Moore: Heather, any experience with that?

Federman: I would just add to what David was saying: understand what those C-suite or board members really want and how they process information. Assuming that you are a board member, or in the C-suite, or on your way there, my expectation is that you're not going to want to know all the details of how many PIAs we filled out or contracts we've reviewed; instead, you'll ask what are the key things you should know in order to make the right decisions. That's typically how I like to frame those conversations. Unless we're asked those questions, we'll be prepared with the details, but typically I would expect that you would want more higher-level, strategic thinking around these things.

Moore: Yes, that's exactly right. In my experience, I've also seen C-suite and board typically ask, what are others doing? Our competitors, companies of similar size, what's in their tool set? What are they doing? What do we need to be on alert for? That's a tough one because, as our panelists know, there's not a lot of great benchmarking out there about size of organizations, and structure, and activity levels. Stephanie, I see that may resonate with you.

Schmidt: Yes. The benchmarking is key, but don't just do it in your industry. Do it across organizations of your size with your global footprint. Then the other piece that I would add is, align with your strategic partners. They should not be hearing a very different message from your auditors, or from your risk partners, or your compliance partners, or your legal partners, when asked about the maturity of your programs.

So, regardless of what seat you sit in, if the head of our records organization goes in and talks about the same things that I am, the capability to de-identify data, automated data discovery, data governance holistically, the need for all of that, it's just going to further my cause. So, getting together and drawing that thread again through those control partners who are like-minded and can carry your message for you is really critically important.

Moore: A few times we've heard the words “customer trust.” Stephanie, we talked earlier about a customer-centric approach to privacy versus a compliance-based, legalese one. Look, consumers are becoming more and more aware of their privacy. I venture to say each and every one of you cares a lot more about it now than you did maybe just a few years ago. Survey data says that as well. Heather, let me go to you first. Is there a way for financial services companies to empower their consumers to take control of their data and help them through that process?

Federman: It's perhaps thinking about privacy and security choices the way you think about the other choices you're giving them: you're making it a more seamless user experience. We don't want to have to go through five, six, seven different settings to opt out of this data usage or whatever it is. We want it to be easy.

We want the controls to work the same way you would make the settings easy for everything else, and some platforms are really great at doing this and some are really terrible at it. So, this might be more on the product management side of the business, but it's definitely a really key area, and it's something that regulators are also paying attention to, because they look for things like dark patterns, where you make it harder for those choices and those opt-outs to occur.

One example that actually did occur in the wild, or in public, recently: PayPal came out, I think at the end of October, and basically announced that they are updating their privacy notice to allow user data, for any of you who use PayPal, to be shared with merchants in order to do more inferences and personalization and things like that. But, “Hey, you have at least a month,” because I think the change actually occurs at the end of this month, and we're in November now. You can go into your settings, and they made it relatively easy to go in and say, “No, I don't want to share my user data with merchants.”

So even though there was some media about this, “Why are they sharing data in the first place?”, I actually thought it was pretty great of PayPal to say, “We're making this change, we have the right to do so, but we are also giving our user base, our consumers, the right to opt out, and we're giving you a month's notice to make that choice,” and, again, to make it relatively easy. That's one example of a great way to think about these things when it comes to the choices and decisions you'll want to make for your company and your business in the future.

Moore: Stephanie, thoughts about empowering consumers?

Schmidt: I’d put that at the bottom of my to-do list, but now it's back on top. [Laughter] I think the biggest piece there, and you're right about the complexity: simple is usually better, more is not always more, and you tend to get lost in the choices if you're not really able to articulate the drivers, or the why, of a particular activity.

So, to your point about PayPal, I think the biggest piece there is, were they able to articulate a value proposition for why they're doing what they're doing? What are you as a consumer going to get out of this sharing of your data with merchants? Are you going to be open to more opportunities, or perhaps coupons or discounts from those vendors who you typically engage with?

To me, you might look back and go, “You know what? I'm always dealing with this particular shoe company, and so I absolutely want to get discounts and deals from them. So, I'm going to share my data with them,” but another consumer may step back and go, “No.” So we have to look at the creep factor if they don't want to share that data, right? It's a technical term, the creep factor; it's used a lot in privacy, but it's true.

So, you go back to simpler is better, do your notices clearly articulate it, and then when you do change practices, are you able to articulate the value proposition to the customer or the employee, or whoever that is, that's impacted by that?

Moore: So that we have time for Q&A from the audience, I'm just going to ask each panelist one more question. Put your future-looking hat on, and David, I’m going to start with you. It's 2030. What does the landscape of security look like: the law, regulation, consumer awareness? You saw the survey data earlier. Do you think the execs have it right that everything's going to be fine and we're going to take care of it? What are your predictions?

Gotard: I think we have some challenges ahead, for sure. As technology advances and we become more interconnected with our digital data and our commerce, the headwinds that we face to secure things only keep growing. So, it's incumbent on all of us, as executives and as consumers, to face that head-on, be aware of what's going on, and try our best to navigate the landscape that is going to come with artificial intelligence and other technologies on the horizon. If I look to 2030, it might be quantum computing, which is basically a transition that we’ll have to make to ensure that we can maintain the confidentiality of all of our data, our sensitive personal information as well as the other information that we use. That is definitely something that I think is going to hit us in our lifetime.

Moore: Excellent. Stephanie, your predictions for the future of privacy law and consumer expectations?

Schmidt: I know. I think we in the U.S. have historically been a bit behind in terms of how we think about protecting our privacy. We typically connect it with our financial accounts, and we wouldn't necessarily connect it with the 23andMe survey that we did online. So, I do think we have a bit of catching up to do, but I think it’s happening very quickly.

I know, myself, I have a stack of breach notifications at any given point, and it's scary. So, I do think it goes back to how we, within the compliance and privacy roles, start to better automate that. You've got to do it by design. Your controls and how you operationalize your programs have to keep up with the technologies and with the volume of data that you process. To me, that's the biggest thing that we're focused on in thinking about data governance and management more broadly.

Moore: Heather?

Federman: I have two. One is, and I don't like this prediction, but I unfortunately think that there's going to be a major cyber attack on some form of infrastructure like our water supply, or electricity grid, or something like that, so I'm hoping the security folks working there are paying attention. So, David, if you can [Laughter] talk to your friends over there.

Gotard: We're on it. [Laughter]

Federman: The other one I'd say is more of a legal one. [Audio Gap] The EU is known to be very regulatory-heavy. You have GDPR, you have the AI Act, you have the Digital Services Act, the Digital Markets Act. I honestly can't track. So, I'm waiting for the day when a major multinational company will just say, “We're over this and we're pulling out.” I mean, it's what, 2024, so we've got six years, and I've already heard some rumblings that this might happen, at least with certain companies, and I've tried to poke a few friends at various tech companies about it. But I'm waiting for that day, because I think at some point you have to say, “Enough is enough. We're tired of these fines. We're tired of having to create a whole different architecture and system for one region. Let's just get out of here.”

Moore: I actually subscribe to that. I believe you’re right there. I'll answer my own question: I think you'll hear the term “data minimization” a lot more than you're hearing it today. In the privacy and security world, minimization means collecting only what you absolutely need, as promised to customers, to fulfill the service, not collecting a vast amount of other data because you might use it in the future, or because it's nice to have, or because it creates an opportunity to monetize something, somewhere, somehow. Minimization is going to be enacted in law. Minimization is already a focus of the FTC, for those of you who have heard of that organization, and I think other regulatory bodies across the globe will be pushing hard on data minimization.

There's a business case for it in the corporate sector as well. Look, data is costly: cost to store, cost to transport, cost to move. It is an exposure. Companies that have been breached with data they're not even using, data that's 10, 20 years old, have just increased the blast zone for the bad actors and the potential for fines. Then there's effectiveness: for organizations that have data in repositories all over the place, it's hard for the analytics folks to find the right database at the right time, the one source of truth.

So, I think minimization is something that companies will want to pay attention to, building it in by design and ensuring that they're getting ahead of the regulatory environment. We talked about consumer expectations; I think consumer expectations around minimization are going to be there as well. I will, as Stephanie said, willingly give you my information in return for value, but I'm not going to give you a bunch of stuff that you don’t even know what you're going to do with right now. That's my prediction for the future.

Kornik: Thank you for listening to the VISION by Protiviti podcast. Please rate and subscribe wherever you listen to podcasts, and be sure to visit vision.protiviti.com to view all of our latest content. Until next time, I'm Joe Kornik.


As the Chief Information Security Officer (CISO) for Société Générale in the Americas, David Gotard is responsible for managing SG’s regional information security and cybersecurity compliance program. David has strong technical expertise, an extensive background in financial services, and significant experience in information security. Most recently he served as Head of both Equity Derivatives and Commodities Technology for Société Générale in the Americas. Previously, David held senior IT Management positions at AllianceBernstein, Bear Stearns, and JPMorgan Chase.

David Gotard
CISO, Société Générale

Heather Federman is the Head of Privacy & Product Counsel at Signifyd, a leading e-commerce fraud and abuse protection platform. In this role, Heather leads the development and oversight of Signifyd’s privacy program, compliance initiatives and AI governance. Prior to joining Signifyd, she served as Chief Privacy Officer at BigID, an enterprise data discovery and intelligence platform and was also Director of Privacy & Data Risk at Macy's Inc., where she was responsible for managing privacy policies, programs, communications, and training.

Heather Federman
Head of Privacy, Signifyd

Stephanie Schmidt is the Global Chief Privacy Officer and Head of Data Compliance (AI and Cyber) at Prudential Financial. In her role, Stephanie provides strategic guidance around the governance and application of privacy risk management strategies for Prudential’s global operations. Previously, Stephanie held various positions across other control partner disciplines in internal audit, risk management, and financial management.

Stephanie Schmidt
Global CPO, Prudential Financial

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

Tom Moore
Senior Managing Director, Protiviti