Samsung Chief Design Officer Mauro Porcini: Human-centric design ignites user experiences
In this VISION by Protiviti interview, Mauro Porcini, President and Chief Design Officer at Samsung, sits down with Protiviti’s Alex Weishaupl, Managing Director, Digital Experience. Porcini says human centricity is the key to unlocking innovation and purpose, and that even though AI will surely disrupt design, and will almost certainly reduce the number of workers, it could be for the best. “Ultimately, AI can do the most human thing of all: give us back the happiness we’ve lost over time,” he says.
In this interview:
1:17 – Building brand identity today
6:10 – Culture, values, and the customer promise
11:20 – Failures or experiments? Reframing innovation
12:45 – Balancing efficiency with empathy
18:38 – Advice for staying relevant
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of customer experience, and we’re thrilled to be joined by Mauro Porcini, President and Chief Design Officer at Samsung, where he oversees a global team of 1,500 designers. Mauro hosts the “In Your Shoes with Mauro Porcini” podcast and has been a presenter and judge on the TV shows “New York by Design” and “America by Design,” airing on CBS and Amazon Prime. His most recent book is “The Human Side of Innovation: The Power of People in Love with People,” published in 2022. Prior to joining Samsung, Mauro was Chief Design Officer at PepsiCo and 3M. Today, Mauro will be speaking with my Protiviti colleague, Managing Director Alex Weishaupl. Alex, I’ll turn it over to you to begin.
Alex Weishaupl: Thanks, Joe and hi, Mauro. Thank you so much for being here today.
Mauro Porcini: It’s really a pleasure. Thanks for having me.
Weishaupl: Absolutely. So, one of the things that digital, in particular, has done over the last decade or two is really proliferate the number of touchpoints and surfaces people interact with regularly. So, I’m going to start with a softball question. As people interact with brands across an increasing number of touchpoints, how are you seeing the role of brand identity, user experience and, ultimately, product innovation evolving? Or, maybe more specifically, what role do you see design playing in that emerging future?
Porcini: Well, it’s a very interesting question at this specific moment in time. The role of design today is more important than ever, for a variety of different reasons. First of all, the way we build brands today is so different from just 10 or 20 years ago. You mentioned it earlier. We’re moving from a world where brands communicated top down, in one direction, with specific messages to people who passively received that information, to a world where instead there is a dialogue. So, brands are moving from being actors in the conversation most of the time to becoming just a topic of the conversation happening among people. Because of this, they’re also moving from the ability to buy the right to talk to people to the need to earn the right to be talked about. So, from buying, where the more money you have, the more presence you can buy in the media, to earning, where eventually you don’t even need huge amounts of money or big budgets to be relevant in the conversation among people. You need meaningful content.
So, this is changing the way you interact with people, and therefore the way you build experiences — and design has a very, very important role in this — needs to change. It’s becoming more and more relevant. For instance, you go to a store. In the past, you were just going there for a financial transaction: to acquire, to buy a product, a brand, a service. Today, you can build those brands in those stores. If people are excited about the experience they’re living, they can take out their cell phones, take a picture, take a video, share it with the rest of the world and become your ambassadors. It’s user-generated content. In a world where communication moves at the speed of light on social media, you need to be very impactful in delivering the message, and we learn in semiotics that the message itself is not enough to define the meaning of your communication, or what you’re trying to say with your products and your brand. There is another element, called “the code” in semiotics, that is the visual element, and it is super important. So, you’re scrolling through images and different brands and products and services on your digital platforms. If you have the right aesthetic, the right identity, the right visual language, you’re going to grab the attention of these people, you’re going to be impactful, you’re going to be meaningful. If you don’t, even if you have the right message, the right proposition, the right service, you’re going to be ignored by people.
Then last but not least, you need to keep innovating. The world is moving at the speed of light, so you need to keep creating something meaningful for these people. And so the strongest competitive advantage you can build for your company — I’d say the only one, because it should be the primary competitive advantage — is human centricity: care, love for people. I call these innovators, the real innovators, people in love with people. That’s the driver. Everything else follows that kind of drive, and it’s not something you create through a project. It’s something you need to drive through the entire culture of the organization.
Design is a community that embodies that idea of caring for people, love for people. This is what they teach in school: to care for people. In business school, they teach you to grow a business, a brand. You can be an amazing business leader if you are able to grow the business and the brand. In design school, they teach you to care, and then they tell you, “By the way, you also need to make money for your company.” So, you need to understand the other variables, the three lenses of design thinking: the business world and the technology and manufacturing world, on top of the first one, which is the world of people. Today, we need that kind of culture in these companies. It’s design culture, but it’s a culture of human centricity that needs to be spread to every single function, vertically from the CEO to the entry-level employee, and transversely, across every capability of the company.
Weishaupl: It’s funny, I recently returned to your book, “The Human Side of Innovation,” and that notion of people in love with people. My read of the theme was that impactful innovation really comes from people who care deeply, who design with human needs at the forefront. How do you help an organization’s culture and values align better with a brand’s external customer promise? And, as part of that, what kinds of challenges have you seen where there’s a disconnect between the two, where that obsession just isn’t fully there?
Porcini: Look, I think it depends on the industry. In some industries, this is pretty obvious. The alignment is there to see. In the fashion industry, for example, you see the employees of fashion and luxury brands embodying exactly what the consumers, the customers, are looking for. You see that also in sports, if you work at Nike or Adidas, and even at companies that don’t belong to the apparel world, like PepsiCo with the Gatorade brand, a brand really focused on the world of sports and that kind of customer. In those worlds, the people who work in those companies reflect the values, the behaviors, even the look and feel of those customers and consumers.
In many other industries, they don’t, and they don’t need to, but they need something very important, and this is true for both kinds of industries: they need to care. They need to care for real. Once again, we call it “human centricity,” “people in love with people.” What does it mean? Look at the customers in front of you as you would look at your daughter or son, your children, your parents, your friends, people you care about. If you do that, the first thing you try to do is put yourself in their shoes. We call it “empathy,” right? From the Greek “empathos,” you put yourself inside the pathos, the soul, of these people. So, it starts with this care and that ability to really understand deeply what drives these individuals. So, you need to build that kind of culture inside the organization.
Now, what are the biggest challenges? Often, companies talk about human centricity. Sometimes they use the word “consumer centricity,” as an example, and they confuse these approaches with what the consumer insights function does. So, they start to collect a lot of data. Now we live in the world of data. We really have the ability to dissect the customer base with a granularity, a precision, a focus that we never had before. That’s great. But a lot of people think that consumer centricity, human centricity, is just knowledge, insights, data about the people we serve, and that’s the problem. It’s not. It’s the beginning. You need that, but then you need to care. You may have a lot of information about your customers, but you’re like, “You know what? I don’t care. Our brand is so strong. It’s very profitable. We are the leader in the market.” This is the biggest mistake, and you make it often when you are successful: you are paralyzed by the success, and you stop observing and caring. That observation and care push you to innovate even when you are successful, especially when you are successful, because you eventually have more resources to do it with and you have less pressure. So, you need to care.
Now, even that is not enough. You may understand what you need to do. You may care about this, but then, as a leader in the organization, you may not have the courage to act on it, because innovating, changing, especially if things are working well, is risky. So, you need that courage as well. Then finally, let’s say you have it all. You have the courage, you act, and then you screw up. You need two things. First, try not to screw up, so you need knowledge, skills. You need leaders with the specific kinds of skills that make the difference. Then the third thing is, if you do screw up — and you will screw up sooner or later; this is statistically certain, we will make mistakes in the future — well, I learned that in design school, in the study of ergonomics, where when you design the cockpit of a plane or a train, you take into account that there will be a human mistake for sure. There is no doubt. It’s going to happen sooner or later, so you have a series of backup systems to manage all of this. So, companies should understand that if you invest in change, transformation and innovation, you’re going to screw up in multiple instances, and therefore you need a culture that protects the ability to make mistakes, that doesn’t crucify the people making these mistakes, so that they learn from those mistakes. All of this creates the right culture, one that is really human centric and really in sync with the values of the customers and consumers that we serve every day.
Weishaupl: I love this idea of failure, of really figuring out how to fail gracefully, both at the organizational and at the individual level, because you’re absolutely right: that risk is going to bring failure. It’s almost a given after a certain point in time. But how you recover, again, both as an organization and as a set of individuals who work together, is absolutely massive.
Porcini: And you learn from it. I mean, failure has been celebrated on many platforms. I remember multiple articles in Harvard Business Review, to mention just one of many. Everybody understands by now that you need to fail here and there to succeed. I come from a science company. Many years ago, I used to work at 3M. Scientists know very well that to arrive at one innovation, one patent, they need to do thousands of experiments. What scientists call “experiments,” the business world calls “failures.” You need a company with the right culture, the right financial algorithm, the right organization to embrace experimentation. By the way, if we start to call them “experiments” instead of “failures” and we manage them in the right way internally, but also externally, when we face customers, shareholders and the media, then we’ll be able to build a culture that fosters innovation in a much more powerful and effective way.
Weishaupl: That’s fantastic advice. I do want to shift to one area that you started to bring up: data. One thing I’m curious about is, as data, AI, automation, personalization and ever more data-driven capabilities become embedded, both in business operations and in customer interactions, how do you balance the efficiency and scale those technologies can provide with, to your point, that ongoing need for empathy and for human connection? How do you get those working together and not in tension?
Porcini: Well, first of all, you used the keyword, the most important one: “balance,” right? Let’s start by remembering that these are just tools, so the key priority is that these tools are designed to serve us as humans, and they need to be used by humans. Whoever designs those tools, whoever is in charge of those tools, needs to keep that in mind all the time — human centricity, how to frame those tools that are becoming more powerful than ever today, through purpose, through high tech, through care for the people that they serve, how we design everything, to make sure that they stay at the service of humans.
I would divide the world in front of us in the coming decades, especially when talking about AI and new technologies, into two phases. One is the transition phase, and the other is where we really land, with these technologies in full maturity. In the transition, the role of humans in managing them is huge, because you need critical thinking, you need the ability to interpret information, to build a prompt, to manage all these technologies in ways that are meaningful and relevant to all of us, to companies, to customers, to people out there in general. So, this balance is key. What we need to understand, though, in any profession we have, is that our roles will evolve. The way I’m designing today is going to be different in 10 years. I remember, I studied design in the ’90s, when computers and software were arriving. A lot of people were like, “Oh, my God. I’m going to lose my job.” Well, you do lose your job if you don’t upgrade your skills. Typographers who worked with ink, or people who made mockups in a certain way: yes, that job is gone. But you, the person doing that job, can learn the new tools and be as relevant as ever.
So, the first thing is: embrace new technologies. As soon as AI, and new technologies in general, started to show up, instead of rejecting them, I pushed my entire organization to be pioneers in the use of AI. Now, it’s not easy. We love to stay in our comfort zone. Newness, innovation, different things scare us, so a lot of people won’t do it. So, if you’re the CEO of a company, make sure that you build that kind of culture, and make sure you find the right leaders who embody that idea of experimenting with new things so that the entire organization can follow.
Now, this is true until AI — AI, robotics, a series of technologies — reaches a level where it can completely replace humans in their jobs. Whether it can replace humans in society is another issue, one we probably don’t want to get into today. Now, we shouldn’t be scared about this. It’s a dream. I mean, we invented tools in prehistoric times because we needed them to be more efficient, more effective, to protect ourselves and serve our needs in the best possible way. Imagine a future where we have machines that do our jobs. That’s amazing. Now, the innovation project becomes a different one, though. We need to start thinking today: what is the society of the future going to be? If we’re going to lose many, many jobs around the world, people will be free to invest their time in something even more meaningful to them, something that drives their happiness. At the end of my book, I talk about three dimensions that drive happiness. One is investing in yourself, who you are, your identity. For sure, your job is important, but what can you do beyond your job to define yourself, so that if you lose your job, you don’t lose yourself? The second is investing in the people close to you: your family, your loved ones, your close community, including, by the way, your close colleagues at work. Freeing up time, you will have more time for your family, for your friends, for all of this. The third dimension is doing something bigger than you, something that you can use to defeat death, to become immortal. Again, if you don’t have to work from morning to night every day, you will free up time for that. So, for our society, that’s great. We’re going to see the indexes of happiness in society rising. Actually, countries with the highest levels of business effectiveness often have very, very low indexes of happiness. So, AI can do the most human thing of all: give us back part of the happiness that we lost over time. But to do that, the innovation project is to understand what the society of the future is going to be when there are fewer jobs. This is an innovation project for governments, for companies, and for any individual who wants to have a voice, a perspective, a point of view and share it with the world. We talk about social media a lot today. We have a platform. Everybody has a platform. So let’s start to think collectively about how to redesign a society where machines will be at our service, and they will drive happiness.
Weishaupl: I look forward very much to that future. I guess to bring this full circle, one last question for you. If you could give one piece of advice to a business leader, what would you tell them to prioritize or cultivate in their teams to have the best chance of staying relevant and successful and ultimately innovative in the marketplace of 2025 and beyond?
Porcini: Well, first of all, it would be: focus on your teams. You mentioned it; that’s the frame of your question, but many business leaders don’t. I mean, yes, they have HR, they have values, they have leadership attributes, but they focus on other things. I think the most important thing to focus on is people. And not just their technical skills; the soft skills today are more important than ever. In a world where it’s so difficult to be competitive, where you need extreme efficiency and effectiveness, you need to really understand in depth the characteristics of these individuals so that you can nurture them in the best possible way.
In my book, I talk about 24 different characteristics. We don’t have time today to go through all of them, but I will mention a few. Some are more obvious in the world of entrepreneurship and innovation. For instance, the ability to dream. We’re all born with that ability. As kids, we dream, we fantasize, and then society tells us at a certain point that dreaming is not okay, that it’s childish, actually. But you keep dreaming. You go to school, you get out of school, you go to these companies, and you think that you can change the company, change the brand, change the industry, until sooner or later a manager, or multiple managers, comes to you and says, “Stop, stop. That’s so naïve. Actually, it’s even arrogant. Why do you think you can change anything? Nobody was able to do it before.” So, we start to think that dreaming is wrong, that it is childish and naïve. Very few people maintain and preserve that ability to dream. They are the real innovators.
Now, dreaming is not enough. You need to dream, but you need, in parallel, to be able to execute, to make things happen, and you need to do it fast. These three dimensions, dreaming, acting and doing it fast by prototyping, failing, learning and prototyping again, are not common in organizations, for a variety of reasons that I’m sure the people listening to us today are very familiar with. Again, when you talk about innovation and entrepreneurship, these values are pretty much obvious.
There are others that people don’t talk about as much — kindness, optimism, curiosity. These three in particular have been really, really important in my professional journey. Curiosity pushes you to learn, to see any interaction as a potential opportunity to grow. Curious people love to read, to travel, to interact with people, especially with strangers, actually, especially with people who are different from them. Curious people love diversity by definition, because they know that in diverse people, in people different from them, because they have different political views, different colors of skin, different religious views, in that difference there is the precious gift of knowledge. There is a perspective that is different from yours. My perspective, combined with your perspective, is going to generate a third, original, innovative, new perspective that didn’t exist before, and that’s the real value.
Then there’s optimism. If you try to change a company, a brand, society, the world, you’re going to face roadblocks and difficulties all the time. You need people who have that kind of optimism inside, but you can also amplify it. When you are in a very difficult moment, look back, and enjoy, and celebrate, and appreciate the progress, everything you came from, even the mistakes, because you learn so much from them. That will also give you the awareness that the difficult moment you are in is actually going to be a moment of growth. You’re going to learn from it, and you’re going to project yourself toward the future.
Now, the second thing you need is the dream, because that dream will give you the excitement to go on and the resilience to overcome all kinds of roadblocks. Finally, kindness. We are often told the opposite: that kindness is a weakness, and that you need to be tough and mean, a little bit of a jerk, to succeed. That’s so wrong. Maybe it worked better 10 or 20 years ago, but in today’s society, where you need to be hyper efficient, where competition is extreme, you need every part of the organization to work in total sync, in full effectiveness, to build the company of the future and the solutions for your customers of the future. Kind people drive trust. Often, we talk about trust in companies, but trust comes from people who really care, who are nice to each other, who are kind to each other, who love each other. So, kindness is the foundation of trust and is, at the end of the day, the foundation of a new form of productivity.
When we talk about productivity in these companies, we talk about laying off people, cutting resources, M&A and a variety of different investments. We rarely talk about investing in productivity by amplifying the level of kindness and trust in the organization. For me and for our design organizations over the years, across multiple companies, this has been a really incredible driver of growth and efficiency.
Weishaupl: Mauro, thank you so much. I could talk for hours but thank you so much for your time today. This has been really, really enlightening for me, and I hope our viewers enjoyed this conversation as well. I’m going to hand it back to you, Joe.
Kornik: Thanks, Alex and Mauro, and thank you for watching the VISION by Protiviti interview. On behalf of Alex and Mauro, I’m Joe Kornik. We’ll see you next time.
Mauro Porcini is President & Chief Design Officer at Samsung, where he oversees a global staff of 1,500 designers. He is the host of his own successful video podcast, “In Your Shoes with Mauro Porcini,” and since 2020, he has been a presenter and judge on the TV shows New York by Design and America by Design, airing on CBS and Amazon Prime. He is the author of The Human Side of Innovation: The Power of People in Love with People. Prior to joining Samsung, Porcini was the Chief Design Officer of PepsiCo and 3M.

Alex Weishaupl is a Managing Director, Protiviti Digital – Creative and UX Design. He is a digital design executive with a deep history of helping clients envision, build and evolve customer experiences that enable their organizations to find and deliver on their vision and purpose, and to build rich connections with their audiences, both external and internal.

Global market trends and consumer expectations with The Economist’s Barsali Bhattacharyya
In this VISION by Protiviti interview, The Economist Intelligence Unit’s Barsali Bhattacharyya talks global market trends and customer expectations with Protiviti Managing Director Bryan Comite, Customer Experience Strategy lead, Protiviti Digital. As CX moves further into digital domains, Bhattacharyya says we may eventually end up through the looking glass, where the future reflects the past. “Businesses that are able to merge automation with human overview will be the winners,” she says.
In this interview:
1:09 – Consumer markets outlook
5:46 – The trends: experiences and value
8:21 – AI’s impact on customers
10:54 – Emerging expectations
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the customer experience. I’m thrilled to be joined by Barsali Bhattacharyya, Deputy Director for The Economist Intelligence Unit and Global Lead for the Consumer and Retail Sectors. Barsali is one of The Economist’s leading voices on geopolitical and macroeconomic trends and how they will impact consumers and businesses. Today, she’ll be speaking with my colleague Bryan Comite, Managing Director and Leader of Customer Experience Strategy for Protiviti Digital. Bryan, I’ll turn it over to you to begin.
Bryan Comite: Thanks very much, Joe. Barsali, thank you so much for joining us today.
Barsali Bhattacharyya: Thank you very much for having me. I’m glad to be here.
Comite: I’m very much looking forward to the conversation. Maybe we’ll start with a first question about, based on the current economic situation, how are things looking for consumers in 2025?
Bhattacharyya: Thank you. That’s a really good question. I’m going to answer that by taking a step back to 2024. At The Economist Intelligence Unit, the EIU, we have a practice where every year, in the second half of the year, we analyze the outlook for the coming year for the countries and industries we cover. So last year, in 2024, when we conducted this exercise, our 2025 forecast for retail sales and consumer spending appeared broadly positive. The US had experienced quite strong growth in 2024, and we anticipated that some of that momentum would continue into 2025.
In Europe, on the other hand, we expected a recovery after a slow 2024. China, in fact, was the only major market where the outlook for 2025 seemed somewhat sluggish. We did, however, at the time identify a potential change of administration in the US as a key risk to this outlook. About six or seven months later, I think those risks have materialized, with US President Trump announcing some sweeping policy changes, primarily marked by tariffs against key US trading partners. That has created a certain level of uncertainty while we wait and watch how things are going to unfold.
Comite: So, if you think about that then, looking beyond the US, how will other markets be affected by these events?
Bhattacharyya: Given the state of uncertainty, one way of thinking about which other economies are going to be affected, and to what extent, is to look at which countries are quite export-dependent and, in particular, which have a high level of trade dependency with the US, which export a lot to the US, because they will obviously be quite affected by the tariffs. Again, I think Asia stands out in that regard. Over the past couple of years, with some slowdown in Western markets, Asia has broadly been a standout market for many global consumer companies, one that has offered a lot of opportunities. There’s a whole bunch of reasons for that: an emerging middle class and, most notably, a lot of economic growth over the past few years, which, interestingly, has mostly been driven by growth in the manufacturing sector. So countries like China, Vietnam, actually even Japan, which have quite sizable manufacturing sectors, are some of the countries that are going to be affected.
Looking across the rest of Asia, India, of course, stands out as another economy where there’s already been a lot of momentum: an increasing consumer base, a lot of middle-income consumers who are looking to spend, looking to have a better lifestyle, looking to travel. Lots of opportunities there. So far, in comparison to many of its Asian peers, India is slightly better positioned in the trade war. It’s one of the countries that might benefit from the China Plus One strategy that a lot of businesses already have in place. Those would be some of the markets to watch.
The Middle East as a region — countries like Saudi Arabia and the UAE, which are big and rapidly developing consumer markets — I think those also offer a lot of opportunities. With Europe, again, the picture is going to be quite mixed, where a lot of tourism-dependent economies might do better than export-dependent economies like Germany.
Comite: Thank you so much for that detail and perspective. It’s very interesting to see how the regions are responding and reacting and what some of those outlooks look like. I’d love to shift gears a little bit with you. With that in context, what are the kinds of experiences that consumers are most valuing right now?
Bhattacharyya: There’s going to be a lot of caution among consumers, but we think there will be people looking to treat themselves. Consumers are going to look for opportunities to treat themselves from time to time, and I think those are the kinds of opportunities businesses need to look for and make the most of. Again, the basics — if you think about what consumers are going to be spending on — are probably going to be fine. They have to spend on essentials, but they are going to be looking for more value. That’s probably something for businesses to keep in mind: sure, prices are probably going to go up for some products, but how do they stand apart from their competitors? There are probably going to be price wars among businesses as they try to retain their consumer bases. How can they position themselves differently from their competitors and make sure that consumers see value in them?
Comite: Barsali, do you also see, based on the connection back to the conversation around the macroeconomic drivers, that there will be regional differences or trends in the behaviors of the consumers that you’re seeing based on your research?
Bhattacharyya: Yes. I think that’s right. Looking at what’s happening around the world, we are probably heading toward a situation where goods might be priced very differently across markets. Companies are probably going to adopt different pricing strategies. That means the economic impacts on consumers are going to be different; the price rises and inflation trends they see are going to be quite different. I think there are also going to be factors to watch around tourism, where we are going to see a lot of spending. Again, there will be variations in the regional trends we see. For example, we are starting to see a slowdown in tourist arrivals to the US in response to geopolitical events. Since the pandemic, we have seen an increase in tourist arrivals in the Middle East, in countries like the UAE. Those kinds of trends are going to continue.
Comite: Absolutely, Barsali. Super helpful to think about all of these interconnected drivers that are impacting customer experiences. I’d love to get your perspective on the role of emerging technologies, artificial intelligence or AI among them, and the pace of change. How do you see that impacting the customer experience over the next two to three years?
Bhattacharyya: Right. That’s a very important question, and I think it’s especially relevant for the consumer and retail sector, because these companies have been quite at the forefront of using new technologies. For quite a while, we have seen consumer and retail businesses using AI and ML technologies at the back end, in their supply chains, ensuring that those supply chains are more efficient. They have been using them to manage inventories and to generate consumer analytics and insights. There’s a lot of use in the back office, for consumer analysis and for predictive analytics. Online shopping, obviously, has been using them for predictive analysis for quite some time.
I think what’s really interesting is that, with the rise of generative AI technologies over the past few years, a lot of the use cases have probably shifted away from the back office toward more consumer-facing experiences and use cases. Probably the biggest use case of generative AI is in marketing and advertising. So, it’s very much what consumers are seeing, what they are reading and viewing to make their purchasing decisions.
One other use case that really stands out is personalization. Again, there are lots of opportunities there for businesses, especially digital retail businesses, when they are trying to offer a more curated, more personalized, more convenient customer experience. There are also a couple of really interesting emerging use cases: if you look across industry surveys, companies say they are beginning to invest a lot in customer analysis and segmentation and in digital assistants and profilers. Those are two areas where there’s a lot of trial, a lot of initial implementation happening across retail and consumer businesses.
Comite: Let’s look out a little further. Let’s think about 2030. Any bold predictions about where the evolution of expectations around customer experience are going? Any ideas about where this is headed?
Bhattacharyya: Given the uncertainties we are living amidst, it’s quite difficult to think even one year ahead. But if you think about what consumers would expect, there are probably two things, one probably a little less surprising than the other. The less surprising thing is that, if businesses are using so many new technologies, and we know some of the concerns around AI technologies have been around data privacy, accuracy and hallucinations, I think consumers will expect a lot more progress on those fronts. They will expect businesses to be really transparent about what’s happening with their data and how they’re using it. They will also expect a certain level of accuracy in how businesses are using these technologies. So, if I have a shopping assistant suggesting what kind of TV I should buy, I would expect that suggestion to be absolutely accurate, something I can trust without having to double-check. Again, transparency and accuracy are things consumers are going to be expecting.
The more surprising thing might be the human element, right? While businesses might be using a lot of technologies, it’s probably going to be important to make sure that there is a level of human overview. If a lot of my shopping experience happens through automated processes, through bots and virtual assistants, then if something goes wrong, as a consumer I might expect quick redress. It might be that businesses that manage to correctly merge human overview with the adoption of new technologies stand out as the winners. If something goes wrong, the ease with which, and the speed at which, a consumer is able to find a solution, or maybe speak to a human at the company’s end, might become quite important.
Comite: Barsali, thank you so much for taking the time to share your experience, your expertise looking at the markets, both in the US, globally, and thinking about the future of customer experience. I just want to say thank you again for all of those insights.
Bhattacharyya: Thanks, Bryan. It has been a pleasure talking to you.
Comite: Joe, I’m going to turn it back to you.
Kornik: Thanks, Bryan. Thanks, Barsali. Thank you for watching the VISION by Protiviti interview. On behalf of Bryan and Barsali, I’m Joe Kornik. We’ll see you next time.
Barsali Bhattacharyya is a deputy director for the Economist Intelligence Unit and lead analyst for the global consumer and retail sectors. She is one of The Economist’s leading voices on global trends affecting businesses, including geopolitical and macroeconomic shifts and their implications for consumers and businesses. Leveraging the EIU’s quantitative and qualitative forecasts, she helps global business leaders identify opportunities and challenges relevant to their sectors.

Bryan Comite is a Managing Director and leads Customer Experience Strategy within Protiviti Digital. With over 20 years of experience, he regularly partners with clients to solve complex challenges and connect to value across the end-to-end customer lifecycle. Bryan’s expertise includes developing customer journeys and experience strategy, designing and implementing voice-of-the-customer programs and platforms, planning and executing strategic initiatives, and conducting quantitative and qualitative market-entry evaluations.

Former Procter & Gamble global CMO Jim Stengel: A hyperfocus on customers builds trust in tough times
In this VISION by Protiviti interview, Jim Stengel, former Global Marketing Officer at Procter & Gamble and host of The CMO Podcast with Jim Stengel, sits down with Protiviti’s Jen Friese, Managing Director and Global Lead of Digital Solutions, to talk about how a hyperfocus on consumers is the best way to build brand trust and customer loyalty in these tough times. Despite all the current uncertainties, “it’s a great time to be a marketer,” Stengel says.
In this interview:
1:06 – Changing customer expectations
6:01 – Marketers thriving in austerity mode
11:38 – How to spark innovation
14:27 – What to expect of the next 5 years
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the customer experience, and I’m happy to be joined by Jim Stengel, former Global Marketing Officer at Procter & Gamble, where he spent 25 years leading the effort to reinvigorate the consumer giant. He is a globally recognized speaker and author, host of The CMO Podcast with Jim Stengel, and president and CEO of the Jim Stengel Group. Today, he’ll be speaking with my Protiviti colleague, Managing Director Jen Friese, Global Digital Marketing Solutions Lead for Protiviti. Jen, I’ll turn it over to you to begin.
Jen Friese: Thanks, Joe. Hello, Jim. It’s great to be here today.
Jim Stengel: Hi, Jen. We’re talking about brands and marketing and competition and customers. My favorite topics. I’m so happy to be here.
Friese: I love it. Jim, you’ve been at this for a long time, more than 40 years.
Stengel: Yes.
Friese: I’m sure you would agree that this is one of the most disruptive times for marketers. With the rapid growth of AI, technology and innovation, we are seeing changing expectations and unpredictable behavior from consumers. What is your perspective on the state of the industry?
Stengel: Yes, it’s a big question. We could probably spend the entire show on that. I want to reflect on a little bit of a personal story before we start, Jen. Last week, I was in Chicago teaching in the Kellogg School Executive Education CMO Program, which is a residential program for about 22 CMOs. So, an intense immersion into what’s going on with them. Later that week, we had the annual Kellogg School Marketing Summit, where we invite all sorts of thought leaders, academics and practitioners for a day and a half. So, a good immersion into what’s on people’s minds. I mean, I talk to a CMO every week on my show, but I have to say that, with all the stuff going on in the world, the mood was very positive. I reflected on that before our conversation, thinking, “Why would that be?” I think it is related to the chaotic world we’re in right now. What happens with people when things are changing a lot, when trust is eroding in big institutions, when prices are going up, when there’s unpredictability at large? I think you look for something that you can trust, something that might be familiar. That’s a huge role for brands today.
So, I think this era of building brands and organizations that people admire, want to be a part of, and feel like they share their values with, is actually more important now than ever because of the macroeconomic world we’re in. What happens when prices go up and people start to get a little bit nervous? They go to a brand they can trust because they don’t want to take the chance. What makes for a great brand? Part of it is pricing flexibility, which is really, really important. Anyway, the mood, at least from my immersion last week, was very, very positive.
One theme with everyone was the importance of relationships: with each other as employees, with our customers and with everyone who is a stakeholder for your brand. It’s interesting: when there is deep trust, creativity can happen, innovation can happen, and brand loyalty can happen. That’s the environment.
Beyond that, what can successful brands do in the situation we’re in? The number one principle — I spent 25 years at Procter & Gamble, and Procter & Gamble is very good at being consumer-centric — is that you’ve got to be in touch with customers today, because things are changing so freaking fast. So, the organizations that are in touch and agile enough to respond to changing needs, desires and wants are the ones that will win. One story told at this conference last week was about the Chili’s brand, the casual dining chain. Their stock price in the last year is up two and a half times. In the last three years, it’s up five times. What’s the key to it? They’re listening to customers and acting on it.
One simple thing — I won’t talk all day about this, but one simple thing — is that they noticed on social media that a lot of people were driving through fast-food chains and holding their receipts up. So, Chili’s said, “Aha! We’re going to make them our competitive foil. We’re going to change our marketing message to say, ‘We’re about the same price as fast food, but when you come to Chili’s you get a friendly server. You get chips. You get a $6.00 margarita, and you get a great burger and fries for about $12.00 or $13.00.’” And their business took off. In touch with customers, hearing what they’re saying, acting on it, and the business results follow. So, that’s a big win.
Beyond that, of course, it’s all about building trust, building advocacy, focus on the customer experience, which brings everyone together in a company, and the last one is, never lose sight that we’re in the trust and attention game. If you’re building trust and getting attention you have a higher likelihood of building a great brand.
Friese: You mentioned many things that are impacting the industry. We know that marketing spend has dropped to 7.7% of overall company revenue, the lowest in three years. Knowing that marketing budgets are tight and customer experience is so important, how can marketers better plan to hit their performance KPIs while also building their brand? What’s your advice for how marketers can thrive in this austerity mode?
Stengel: Yes. Let me take the first one, on brand and performance marketing. That’s maybe the hottest debate we have in the industry. I teach a program at the Cannes festival every year, and we always survey the people coming into the program about the hot issues on their minds. For the last few years, brand versus performance marketing has been one of the top three. So, it’s a really, really big issue, and I ask a lot of guests on my show how they’re thinking about it. There’s one really fundamental thing that we shouldn’t just breeze by: if you do have two separate organizations internally, one that runs brand and one that runs performance, put them together. Physically maybe, or metaphorically, but they should be helping each other. They should be working off the same song sheet in terms of the kind of brand we’re trying to build. They should be held accountable for both. They should have the same KPIs. If you do that, the whole will be a lot greater than the sum of the parts.
That’s a really powerful one. We can act on that. It’s not something we have to invent. It’s proven in companies that are doing it. I had the CMO of Autodesk on my show. They put those departments together and they’re crushing it. That’s the first one.
The second one is that all of us, whether we’re B2B or B2C, have to challenge the assumptions that great brand campaigns can’t build performance and that great performance campaigns can’t build the brand. If you challenge those two assumptions and hold whoever is running those elements of your marketing in-house to those standards, great things happen.
One brand that I admire a lot these days is Duolingo, the language learning brand, the education brand. It’s super hot. It’s in the culture. It’s growing like crazy, with a great team. They simply believed that their brand communication could build demand, and they have the data to prove it. So, when they do all the snarky things they do with the owl, which is building their brand, it’s also bringing in new users and having lapsed users come back to the brand. So, it’s about putting them together. It’s about changing expectations. That’s how I like to think about brand and performance marketing. If we’re siloed in thinking about those things, then your odds of building a great brand with great metrics, on both performance and brand, are less optimal.
In terms of how to think about marketing today and the environment we’re in, with signs of recession and certainly signs of inflation: I’ve been through a lot of these. Maybe not as extreme as what we’re going through now with the tariffs and so on, but I’ve been through lots of ups and downs on many brands in my career. The one very empowering thing we can all do is pull our organizations together and say that value, which is always important, is going to be more important than ever. So, what are some ideas that we can come to market with that really hit value with our consumers?
I can tell you a bit of a dated story, but still a very relevant one. I was the Global Head of Pampers back at P&G before I became Global Marketing Officer, and before that I was the Western European Head of Pampers; it’s P&G’s biggest brand. We had trouble cracking Pampers in developing markets. A lot of parents didn’t use any diapers. It was a cultural thing. We did some pretty serious research showing that when a baby wore a diaper at night, they slept more deeply and slept better, and sleep is very important for developing babies. The cost of one diaper in developing markets was about 30 cents. So, we had a brand campaign which said, “If your baby sleeps better, they will develop better. We have research on that. If they wear Pampers, they sleep better. It costs about 30 cents.” What happened is, the brand exploded because we reframed the value. So, it’s all about reframing value.
In fact, in the Chili’s story I told a minute ago, they reframed their value: “We’re about the same price as fast food, and you can have a great experience.” So, tap into your organization’s imagination to come together on some new ideas, or some ideas from your past that you want to bring forward again, to reinforce the value you have. By the way, if you don’t have a strong value, then work on that.
Friese: Right, there’s a problem. [Laughter]
Stengel: I think this is a great time for marketers, honestly.
Friese: Yes.
Stengel: It really is.
Friese: No, I couldn’t agree more. We know it has been a while since you published your book “Unleashing the Innovators,” where you explored how legacy companies can renew themselves by acquiring new technologies, creating new business lines, sparking innovation and learning from failures. How have these lessons changed?
Stengel: Yes. That book was a great project to work on. It came out in 2018 or so, several years ago. The reason I wrote it, which is interesting, Jen, was that at that time there were lots of big companies experimenting with startups. It was in the press a lot. My book agent said to me, “What are people using as guidelines? Is there a playbook for these collaborations?” There wasn’t. I thought it was a really interesting topic, so I started interviewing. I did a big quantitative study about what’s working in partnerships. I worked with a data group within the Ogilvy group, Ogilvy Red. We interviewed lots of people on site, at startups and big companies.
That mindset of going outside your company, talking to others who may have ideas that complement your core skills, is always powerful. Certainly, in this era of AI we should be amped up on that, because we don’t have all the answers. There are lots of different companies doing interesting things with AI. We can talk about that in a minute. There are lots of companies who bring interesting capabilities with new platforms.
So, the first step, and this is certainly a lesson in my book: the data showed us that companies that had a mindset of building successful external partnerships were three times more likely to win versus their competition. Part of that, of course, is your whole mindset: that you collaborate, you’re looking for new ideas, you don’t get stuck in a silo, you have expansive, growth-oriented thinking. So, that one is evergreen.
It’s funny, the title “Unleashing the Innovators” was inspired by a quote from someone at Toyota who said, “We let the outsiders in and it unleashed our innovators inside.” To me, that’s a timeless thought, and I think the danger in many, many companies is that we get too closed off, too siloed, too much internal thinking, too much internal politics. Those that keep their heads up and keep looking outside for the right partners have a much higher likelihood of growing their business.
Friese: Finally, to close out, if I could ask you to look out a few years and think about the end of the decade, what will be different than today? Where do you see the state of marketing and consumer expectations in 2030?
Stengel: Yes, it’s a good one and a tough one. The way I like to think about the future, at least on this kind of horizon, like five years or so — and I used to practice this at P&G and I’ve certainly done this with other clients I work with — is that you just never know when something is going to fly in, but if you double down on the things that may be emerging, that are not theoretical but are actually happening, I think we have enough to work on. Think about number one here: we have a generational thing happening. We have Gen Z coming into the consumer space, into the workforce. We have Gen Alpha not far behind them. What are they like? What are they valuing? How are they behaving? That’s going to change a bit, but the fundamentals probably won’t.
Here’s one interesting stat: 53% of Gen Z self-identify as neurodiverse. Think about that. What does that mean in terms of retail, in terms of digital communication, in terms of product and service innovation? That’s what I mean. If you’re customer-centric and you’re looking at emerging customers, you’d better understand them, because that’s going to impact marketing. So, that’s number one. That’s right on our doorstep. Pivoting a bit more to the next generation of employees and customers is really, really powerful.
AI is here. This isn’t theoretical. We’re all working with it. It’s changing our lives already. Within your organization, are you taking your strategies and thinking about how AI can help supercharge them? I can tell you, the ones I talk to are doing stuff, but I don’t think they’re thinking deeply enough about it. AI — we all know it, everyone says it — is here and it is revolutionary, and it will be even more revolutionary in five years, and we’ll still be learning about it. So, strategy first, and then how AI can help you achieve that strategy faster, more creatively, more inexpensively. At this meeting I was at last week in Chicago, a senior executive from Coca-Cola talked about how AI is helping them totally reframe their global agency strategy: how they work together, their collaboration, how they do creative work, how they do media planning, in a totally unprecedented way. That’s an example of a company thinking really deeply about how this technology, which is nascent but here, can help with its business goals. So, that’s the second one I would focus on.
The other thing that’s here is more nationalism. Maybe some of the tariffs will go away, but we have already started a conversation about being less interdependent and more self-sufficient. What are the implications of that for marketing? Certainly big ones for the supply chain, but also for how we think about brands, how we work together globally, how we build global brands. I think we’ll have a bit of a different model for that in the next five years. Maybe related to all of these things, I do feel the number of people we will need to work on our businesses is going to be smaller because of AI and because of more focus and simplification. That’s another one we have to deal with, with care, with planning, with strategy. There aren’t many people I talk to who feel like AI is going to help them increase staffing.
Friese: Right. [Laughter] Yes.
Stengel: It’s going to change staffing a bit, and it’s going to be different, which we have to approach carefully because it has a lot to do with morale and creativity and productivity. The next one: I think marketing will be even more measurable in five years. We’ve made a lot of progress, but I think we are going to understand brand and performance, and the interconnectedness between the two, even better in five years, which will help us be sharper.
The last one, which is fun: think about the role of sports in our lives, in our communication and in culture, even now versus five years ago. That’s not slowing down. Obviously, the larger point is, how do I make my company and my brand be in the culture so that people care and we gain attention? Sports is a big way to do that and will be a bigger way to do that.
I don’t have a crystal ball. I do know that there will be a brand in 2030 that we don’t know about now that is making a difference in our lives, so just be ready for that. Agility, obviously, is important. Look at the stuff that’s here and double down on it, because all of these things are going to change things in two, three, four or five years, some of them very profoundly. Just be sure that you have strategies to deal with the stuff that is already rising. The stuff that we don’t know is there, we can’t do anything about; be ready to think about it when it comes on the scene, but think deeply about the stuff that’s here.
Friese: That’s great. Thank you so much, Jim, for all of your insights and taking the time with us today.
Stengel: Thank you, Jen.
Friese: Joe, I’ll go ahead and hand it back to you.
Kornik: Thanks, Jen, and thanks, Jim, for that great conversation. Thank you for watching the VISION by Protiviti interview. On behalf of Jen and Jim, I’m Joe Kornik. We’ll see you next time.
Jim Stengel is the former Global Marketing Officer of Procter & Gamble, where he spent more than 25 years, and oversaw an $8 billion advertising budget and had organizational responsibility for nearly 7,000 people. Currently, he is President & CEO of The Jim Stengel Company where he serves as an advisor to several global companies. He is a renowned speaker on marketing, brand and customer experience. His latest book is Unleashing the Innovators: How Mature Companies Find New Life With Startups. Stengel also is the host of the award-winning CMO podcast with Jim Stengel.

Jen Friese is a Managing Director at Protiviti and leads Digital Experience & Platforms. She is a creative, results-focused leader with experience in devising and executing digital business and marketing strategies that build brands and drive growth. Her work includes leading digital and customer transformation projects, creative and product development, content strategy management and execution, employee experience strategy and implementation and digital media strategy and buying.

Hyper-personalization supercharges CX in the age of AI
Like 260 million of you, I have a Netflix account. I like to tell people of a certain age that mine began back in the ’90s when Netflix would mail you DVDs, and you would mail them back. At least that was the plan.
A few years ago, I found one of those old Netflix envelopes in a box I never fully unpacked when we moved into our house back in 2001. Inside, there were three DVDs: Shrek, Ocean’s Eleven and Memento. (Do I have good taste in movies or what?)
The discovery provided everyone with a funny story and a few laughs, and I figured the Netflix police had long since given up on trying to track me down.
I bring up Netflix 1.0 not to revisit the nostalgia of the ’90s — at that point in time, no one, maybe not even Netflix, was dreaming of streaming. Rather, I bring it up to talk about how a huge part of its customer experience today is hyper-focused on hyper-personalization. And Netflix, it turns out, is exceptionally good at it.
A certain place and time
Most people are aware Netflix will make content recommendations based on a user’s ratings, search history and previous selections. In fact, Netflix says more than three-quarters of everything subscribers watch on the streaming service comes from its algorithm. But did you also know that those recommendations change based on where you are, when you are watching and on which device? I didn’t.
But after I read about its AI-powered recommendation algorithm that predicts not only what I want to watch, but where and when, I started paying closer attention to what Netflix recommended to me. Here’s what I noticed:
On Saturday mornings, I will often be on Netflix searching for shorter content, maybe a sit-com, lighthearted series, or perhaps a music documentary to watch on my phone while I pedal away on an exercise bike for an hour or so.
On Saturday nights, I will often be on Netflix searching for longer content, maybe a movie, drama series or perhaps a true-crime documentary to watch on my living room smart TV with my wife for a few hours or so.
Netflix knows the difference. It has hyper-personalized my viewing experience. Netflix uses temporal factors and predictive modeling, and it collects data about my viewing behavior and habits, even highlighting content with certain character arcs, storylines or settings. Give it data and it gives you what you want, even if you don’t know it yet.
Netflix understands the many moods of me better than I do. It “understands” the differences between Saturday Morning Joe and Saturday Night Joe. Could Grumpy Joe, Happy Joe or Anxious Joe be far behind? The possibilities are mind-boggling.
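To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of context-aware re-ranking. It is a toy illustration, not Netflix’s actual system; the titles, genres and weights are all invented. It simply scores the same catalog differently for a morning phone session versus an evening TV session.

    # Toy context-aware re-ranker; illustrative only, not Netflix's system.
    from dataclasses import dataclass

    @dataclass
    class Title:
        name: str
        runtime_min: int   # runtime in minutes
        genre: str

    @dataclass
    class Context:
        hour: int          # local hour of day, 0-23
        device: str        # "phone" or "tv"

    def context_score(title: Title, ctx: Context) -> float:
        """Favor short, light content on a morning phone session and
        long-form content on an evening TV session."""
        score = 0.0
        morning = 6 <= ctx.hour < 12
        if ctx.device == "phone" and morning:
            score += 2.0 if title.runtime_min <= 45 else -1.0
            if title.genre in ("sitcom", "music documentary"):
                score += 1.0
        elif ctx.device == "tv" and not morning:
            score += 2.0 if title.runtime_min >= 90 else -1.0
            if title.genre in ("drama", "true-crime"):
                score += 1.0
        return score

    catalog = [
        Title("Lighthearted sitcom", 25, "sitcom"),
        Title("Music documentary", 80, "music documentary"),
        Title("True-crime series", 120, "true-crime"),
    ]

    # Saturday morning on a phone vs. Saturday night on the TV:
    for ctx in (Context(hour=9, device="phone"), Context(hour=21, device="tv")):
        ranked = sorted(catalog, key=lambda t: context_score(t, ctx), reverse=True)
        print(ctx.device, "at", ctx.hour, "->", [t.name for t in ranked])

Run it and the sitcom tops the morning list while the true-crime series tops the evening one. A production recommender would learn those weights from viewing data rather than hard-coding them, but the re-ranking principle is the same.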
Supercharging CX
I am not sure why I find this so surprising. Personalization has become ubiquitous; we barely even notice when Amazon’s algorithms offer products based on previous purchases, or we see Disney ads after talking about kids and needing a vacation.
It makes perfect sense: hyper-personalization improves customer satisfaction by making product and user discovery effortless, intuitive and engaging. And brands understand every interaction is an opportunity to fulfill an unmet customer need or build upon a trusted relationship. But my realization about Netflix feels different; it feels like a big step in supercharging CX in the age of AI algorithms. Considering the pace of change, one can only imagine where all this hyper-personalization and predictive modeling will end up, and the ultimate impact on the customer experience.
With all that in mind, VISION by Protiviti explores The Customer Experience, examining how companies can create and deliver exceptional experiences that offer excitement, build trust and loyalty with customers, and unlock revenue growth.
In July, we will publish our Protiviti-Oxford global Executive Outlook on the Customer Experience, which will probe companies’ CX strengths and weaknesses, resources and readiness, and measure the impact of AI and other emerging technologies.
‘A hyperfocus on customers builds trust’
As part of our editorial exploration, VISION by Protiviti reached out to experts to discuss the future of the customer experience in this rapidly changing business environment. Despite all the current uncertainties, “it’s a great time to be a marketer,” says Jim Stengel, former Global Marketing Officer at Procter & Gamble and host of The CMO podcast with Jim Stengel. Stengel sat down with Jen Friese, Protiviti Managing Director and Global Lead of Digital Solutions, to talk about how having a hyperfocus on consumers is the best way to build brand trust and customer loyalty in these tough times.
‘AI will surely disrupt design’
Meanwhile, Mauro Porcini, President and Chief Design Officer at Samsung, says human centricity is the key to unlocking innovation and purpose, and even though AI will surely disrupt design by eliminating, transforming and creating jobs, it will be for the best. AI’s ability to decrease or even do away with meaningless work and menial tasks will free us up for more quality time for the things we enjoy. “It’s a dream,” Porcini says. “Ultimately, AI can do the most human thing of all, give us back the happiness we’ve lost over time.” Porcini sat down with Alex Weishaupl, Protiviti Managing Director, Digital Experience, to discuss human-centric design and the future of customer experience.
‘Merge automation with human overview’
We also feature Barsali Bhattacharyya with The Economist Intelligence Unit who talks global market trends and customer expectations with Protiviti Managing Director Bryan Comite. As CX moves further into digital domains, Bhattacharyya says we may eventually end up through the looking glass where the future reflects the past.
An automated consumer experience via bots and virtual assistants may be fine… until something goes wrong. “Businesses that are able to merge automation with human overview will be the winners,” she says. The ease and speed at which a consumer can reach a human anywhere along the CX journey will be crucial to future success.
While more humanity sounds great, I don’t see us going back to the days of dropping DVDs into a mailbox. Would you believe Netflix subscribers were still able to do that just two years ago? Netflix finally ended the practice in late 2023 and said subscribers could keep any DVDs they still had. Whew! Forgetful Joe is relieved.
Anyone still have a DVD player?
BYU professor on keys to sustainability success: Get beyond the eighth-quarter mentality
In this VISION by Protiviti Interview, Protiviti Managing Director Steve Wang sits down with Paul Godfrey, the William and Roceil Low Professor of Business Strategy at the BYU Marriott School of Business. Godfrey’s latest book, Clean: Lessons from Ecolab’s Century of Positive Impact, lays out a path for companies to improve their social, environmental and business performance. Here, Wang and Godfrey discuss the strategy of sustainability and why companies can be committed to people, the planet and profit.
In this interview:
1:03 – The state of sustainability today
3:11 – Long-term trends
5:37 – What is a culture of sustainability?
8:07 – Creating multi-stakeholder value
11:04 – Navigating the regulatory field
BYU professor on keys to sustainability success: Get beyond the eighth-quarter mentality
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re joined by Paul Godfrey, the William and Roceil Low Professor of Business Strategy at the BYU Marriott School of Business. His latest book is “Clean: Lessons from Ecolab's Century of Positive Impact” and it lays out a path for companies to improve their social, environmental, and business performance. Today, Paul will be speaking with my Protiviti colleague, Managing Director Steve Wang. Steve, I’ll turn it over to you to begin.
Steve Wang: Thanks, Joe, and thanks, Paul, for joining us today.
Paul Godfrey: Happy to be here, Steve. Wonderful. Thanks.
Wang: So, Paul, we live in an incredibly fast-moving world, and specific to our topic today, the world of sustainability continues to change each day. We read about severe weather, climate change, the changing landscape of social issues, emerging regulations, and the list just goes on. Let’s talk about the present first. What is the current state of sustainability, both here in the United States and in the rest of the world?
Godfrey: I think I’ll make two comments. One is, reports of the demise of sustainability are vastly overstated. Something like 90% or 95% of the Fortune 500 companies are solidly producing sustainability reports; they’re engaged in sustainability reporting. So, at least at that level, where we’re keeping track of what we’re doing, it’s deeply embedded in a lot of organizations. In terms of the uncertainty that we’re seeing in the world, if you look at the popular media, we’re seeing this retreat from DE&I. We’re seeing what might be a retreat from sustainability, with the current administration wanting to reverse some policies that have worked very well in the past. What I recall is a great quote from Warren Buffett, who said, “When the tide goes out, you’ll find out who’s swimming with no pants.” Whenever there’s a reversal in the stock market, you’ll find out who are just the looky-loos, who are the pretenders, and who are the real players, and I think what we’re finding out now is who’s swimming with no pants: the companies whose commitment to sustainability was popular but skin deep, not very concerted. Those are the companies with no pants, and they’re retreating from the beaches right now. I think there are a number of organizations out there that are deeply committed to sustainability, not just for its social value but for its business value, and those folks are going to continue to push the agenda forward, learn more best practices and continue to evolve in a very, very uncertain world.
Wang: Thanks for that. How about the future, let’s look out maybe 10 years or so?
Godfrey: Where are we going to be in 2035? Well, let’s look at some long-term trends that you noted in your introduction. I have a client in Arkansas, and I did some research for him about tornadoes. If you look over a 40-year time horizon, the number of tornadoes in the state of Arkansas continues to increase. The severity of tornadoes on average tends to go down, but that doesn’t mean there aren’t awful tornadoes; it just means there are lots of little ones. So, in terms of climate, weather uncertainty, hurricanes, gale-force winds, these types of destructive events, we’re seeing more and more of those. That’s a secular trend. If you go back to 1980, you can see that trend.
If we look at some of the social issues that we’re facing, changes in our education system, the introduction of new technology such as AI, the social scene is going to continue to be uncertain. We’ve still got human trafficking problems going on all around the world. We’re now reconfiguring supply chains. So, thinking about sustainability and how to build it for the long term is just not going to go away. I think the trend—and again, these are the folks who have been swimming with pants the whole time—is to tie sustainability to business outcomes. They’re looking at supply chain resilience not only from “Hey, let’s really help our partners in the Philippines,” but “Let’s be ready and aware when there are floods and massive storms in the Philippines so that our supply chain doesn’t get disrupted,” so that we’ve actually reduced operating risk.
So, why do we pursue sustainability issues that are customer-focused? Why do we create sustainable products that drive revenue? Because it reduces customer churn risk. Because it not only grows revenue but allows us to carve out an island of stability amid a sea of uncertainty. So, I think in 2035, companies that are committed to all three prongs, people, planet and profit, are going to continue to thrive. If you have no pants on, you’re just doing it because it’s popular.
Wang: Thanks for sharing that. How about culture? How important is fostering a culture of sustainability within a modern organization? You have a bunch of different internal and external stakeholders that might have different expectations, so managing a culture can be extremely challenging. What strategies can leaders use to embed sustainability into their organizational culture?
Godfrey: Well, I think the first thing is to realize sustainability is a journey, like almost anything else. Traditional risk management is a journey. Risks are constantly changing, the world is updating, best practices are updating. So, realize it’s a journey and that it’s got to be deeply embedded in your culture, because if it’s a one-off commitment, if it’s a speech by the CEO with no follow-up, the journey isn’t going very far. However, when it gets embedded—so when I think about culture, I’m thinking about two things. One is the formal rewards that a culture reinforces: what do we pay people to do? One of the best-practice companies that I know of actually ties sustainability outcomes at the business unit level to managerial compensation. Part of their bonus is tied to hitting emissions goals, water-use goals and supply chain goals around sustainability. So, you’ve got these formal things.
Then you’ve got these informal things. Who are the heroes? What are the stories we tell about our organization? Do we tell stories about wasting, about being profligate in the pursuit of customer value, or do we reward stories like, “Hey, we were kind to our stakeholders. We actually gave our customers a price break because our costs went down”? It’s what we reward, both formally and informally, that embeds sustainability in a culture. Again, it’s not going to happen tomorrow, but it won’t happen at all unless you start tomorrow on some of these longer-term initiatives.
Wang: That’s really good advice for the business leaders out there. Right now, within our organizations, our sustainability leaders are dealing with very complex issues and changing, ever-increasing expectations. In some of your publications, you reference a tool called the Sustainability Canvas. Can you tell us a little bit about what that is, and how the Sustainability Canvas can be adapted to fit the unique needs of modern organizations, especially for leaders in rapidly changing industries?
Godfrey: The Sustainability Canvas basically builds from the premise that your company, or any company, competes in private economic markets. On the supply side, we have to buy materials and hire employees. On the demand side, we have to create and sell products and services for our customers. We also compete in the public square. On one hand, we all have formal regulatory requirements that we have to meet: SEC regulations, EPA regulations, OSHA regulations. These are formal regulatory bodies we have to comply with. But then we’ve also got our communities: where we live, where our employees live, where our customers live, the schools, the music scene, the art scene, all the things we build communities around. So, we’ve got these four areas: costs, customers, compliance and communities. The Sustainability Canvas is a way that companies can think about, one, how we are active in each of those four areas and, two, how we are creating value for our multiple stakeholders.
So, for example, how do shareholders win when we are focused on supply chain efficiency? Well, look, every gallon of fuel wasted is cost that went out the door that didn’t have to go out the door. Every gallon of water that we needlessly waste without recycling costs us money. So, in the cost and efficiency focus, shareholders win because costs go down and operating risks tend to go down. How do employees win? Hey, look, we all like to work for winning companies. We all like to work for companies that show they care about more than just the bottom line. If reducing energy use actually improves the bottom line, employees want to be involved in that kind of stuff. Think about customers: again, when we develop products, shareholders love new revenue, and customers love that their needs are actually met by a product that we produce. The same goes for compliance, where you’ve got activist risk: people who are looking to take potshots at your company because they don’t understand the full picture. The more transparently you comply and report what you’re doing, the less ammunition those folks have to make your life miserable.
Wang: Let’s talk about everyone’s favorite topic and one that you just specifically referenced, the legal side of things. How should modern organizations navigate the evolving regulatory landscape related to sustainability?
Godfrey: So, in my field, strategy, there’s a guru, a guy with guru status, named Michael Porter, and he wrote a book about competing internationally. His recommendation for companies was counterintuitive. He said you want to find and compete in the toughest regulatory market you can, because if you can compete in the hardest regulatory market, it’s easy to dial back your requirements for other markets that have less regulatory risk. So, for example, the EU is now at a moment of truth around sustainability, where they’re figuring out that a lot of the stuff they’re asking companies to do might be needlessly adding costs to companies and their operations. But the EU has the strictest sustainability standards, so look at your operations and say, “Hey, if we moved to Frankfurt, what would we have to change to be able to comply with EU regulations?” If the answer is nothing, then you’re in a great position. If not, start to make those changes so that you’re prepared to compete in the toughest markets in the world, because then, when you get regulatory relief, that’s a blessing; it’s not just saving you investments that you needed to make. Does that make sense?
Wang: Yes. It does make a lot of sense. This is where transparency from the business leaders to their employees is vital for organizations.
Godfrey: Absolutely. If folks in the C-suite are playing whack-a-mole on sustainability, people on the frontline figure that out pretty quickly, in spite of whatever gets said or whatever emails get sent out, so that cultural consistency from top to bottom is absolutely vital.
Wang: Do you have any suggestions, advice or final thoughts for our audience? What can we do as individuals, or even as a society, to help our organizations and communities do what’s right?
Godfrey: I think there are two things. Number one, you’ve got to get beyond an eighth-quarter mentality. You’ve got to be able to think, “Okay, when I retire, what’s the world going to look like? What’s weather in Florida going to look like? What’s fire risk in California going to look like 10 or 20 years from now? How do we need to respond today to prepare ourselves for that world?” So, one is, you’ve got to think much longer than eight quarters. The other one comes back to culture: you have to be who you are. Don’t try to be somebody you’re not. Don’t try to be a company that buys into sustainability when it’s popular and then abandons it when it’s not, because then you’ll find yourself swimming with no pants, and that’s not a fun place to be.
Wang: Well, thanks, Paul, for all your insights, your inspiration, and as you mentioned today, just your overall mission to help others thrive.
Godfrey: Thank you. It was a great conversation. A really great conversation.
Wang: Thanks for that. So, with that, Joe, I turn it back over to you.
Kornik: Thanks, Steve and Paul, and thank you for watching the VISION by Protiviti interview. On behalf of Steve and Paul, I’m Joe Kornik. We’ll see you next time.
Paul Godfrey, the William and Roceil Low Professor of Business Strategy at the BYU Marriott School of Business, received the school’s Outstanding Faculty Member award in 2022. His latest book, Clean: Lessons from Ecolab’s Century of Positive Impact, lays out a path for every company to improve its social, environmental and business performance. The book has been recognized with a Nautilus Book Award and an Axiom Award.

Steve Wang is a Protiviti Managing Director with over two decades of experience in internal audit and sustainability reporting across different industries. Prior to joining Protiviti, Steve worked with two public accounting firms: Deloitte & Touche and Arthur Andersen. Prior to obtaining his Bachelor of Science in Finance from the University of Illinois, he had also worked three years in the retail industry.

Australian Privacy Commissioner Carly Kind breaks down new rules in Australia's Privacy Act
In this VISION by Protiviti Interview, Protiviti Director Hanneke Catts sits down with Carly Kind, Privacy Commissioner for The Office of the Australian Information Commissioner (OAIC), to discuss recent updates to the Australian Privacy Act. The OAIC is an independent national regulator for privacy and freedom of information; its responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice.
In this interview:
1:05 – The Australia Privacy Act: An overview
4:40 – Opportunities to enhance privacy protections
7:57 – Concerns for businesses
10:03 – Penalties and impacts
13:51 – Implications of AI
17:23 – The future of privacy in Australia
Australian Privacy Commissioner Carly Kind breaks down new rules in Australia’s Privacy Act
Joe Kornik: Welcome to the VISION by Protiviti podcast. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m thrilled to welcome Carly Kind, Privacy Commissioner for the Office of the Australian Information Commissioner, an independent national regulator for privacy, freedom of information and government information policy. The OAIC’s responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice. I’m happy to turn over the interviewing duties today to my colleague, Protiviti Director Hanneke Catts. Hanneke, I’ll turn it over to you to begin.
Hanneke Catts: Thanks, Joe. Carly, thank you so much for joining us today. It’s an absolute pleasure to be speaking with you.
Carly Kind: Thank you, Hanneke, and thank you so much for having me.
Catts: To begin the interview, I wanted to start with the Australian Privacy Act, which has now been in place since 1988, with several updates focused on increased protection of personal information. That includes the most recent tranche of changes to the act, covering the protection of children’s information online, improved regulator powers and information sharing following a data breach. Carly, can you please provide some context on the nature of the changes and how they came about, as well as their importance to Australian businesses?
Kind: Absolutely. Reform of the Privacy Act has been an ongoing agenda for this government and previous governments for some time. In fact, it was initiated by the Australian Law Reform Commission more than a decade ago. There’s a pretty widespread understanding that the act does need some updates. Those updates have been articulated in a very long report called the Privacy Act Review, and the government has accepted in principle many of the recommendations from that review and is now bringing forth those changes in tranches. What we saw last year with the privacy and other legislation amendment act was Tranche 1 of a range of Privacy Act reforms. The key component of Tranche 1 was, as you said, Hanneke, first and foremost, the introduction of a mandate for our office to develop a Children’s Online Privacy Code, which will essentially particularize the requirements of the Privacy Act for services likely to be accessed by children. That code development process will take about two years by the time we consult with children, parents, teachers and industry actors, meaning the services that will be regulated under the code, and then put the code out for consultation.
The other key elements of Tranche 1 included a statutory tort of privacy, which is a pretty novel approach to developing privacy law in a common law jurisdiction. It also included criminal offences around doxing, that is, the malicious disclosure of personal information, and then, as you also alluded to, Hanneke, it includes some new enforcement powers and regulatory powers for the OAIC as well.
Then, the final change, small but really important for businesses, is a requirement that they articulate in their privacy policy when they’re using automated decision-making systems. That requirement won’t come into effect for two years, and on paper it’s a small change, but it is an important one for regulated entities to think about; it’s probably the most substantial change in terms of obligations on entities at this stage. The other changes from Tranche 1 are more about the powers and responsibilities of my office.
Tranche 2, should it proceed (and the government is now beginning to think about how to take it forward), will contain more of the substantive changes to the legislation: potentially new thresholds for data collection and processing, specific changes around advertising, for example, and a range of other tweaks to the existing law.
Catts: That’s really great background and context. Building on that, and you alluded to the upcoming Tranche 2 changes, what do you see as the next big opportunities to enhance privacy guidance for organizations, to give Australians more confidence that their information is protected?
Kind: I would say there are two parts to that, Hanneke. On the one hand, an overhaul of the Privacy Act, should the government proceed with it in 2025, would definitely be a big moment of uplift for Australian regulated entities. That may look like a range of different things depending on how the government chooses to take that project forward; for example, the inclusion of small businesses is something that is on the table currently. Small businesses are exempted from the application of the Privacy Act.
Other big changes might include the introduction of a fair and reasonable test for processing personal information and, as I said, potentially restrictions on targeted advertising and updated definitions around things like consent and personal information.
If the government proceeds with that project and it passes through Parliament, then that will require really a pretty broad update of compliance work, although, I would say that most entities are already on that journey given that Privacy Act reform has been anticipated for some years. We’ve seen quite high levels of engagement across the regulated sector at this stage, that is, those entities already regulated by the Privacy Act, to really get ahead of the legislation and put in place updated good governance practices, particularly in response to some of the major data breaches in Australia in the last few years and just generally the expectations of the Australian public, which are very consistent with stronger privacy protection. But there is another part, as I said, that really is contingent on Privacy Act reform going ahead. We don’t know at this stage whether that will be the case or what that would look like.
In parallel, our office is working really hard to put some more meat on the bones of the requirements of the Privacy Act as it stands. It’s a principles-based framework, and it rests heavily on concepts that haven’t benefitted from very much judicial interpretation. Things like reasonableness, fairness and lawfulness are already built into the Privacy Act as it is, and equally, definitions of consent, personal information, et cetera. These haven’t really been stress-tested in the courts at this stage, and so our office is really focused at the moment on how we can use enforcement actions to advance jurisprudence in the courts around Privacy Act interpretation, and therefore give more specific guidance back to entities about exactly what the law requires.
That’s something we can take into our own hands because, obviously, we’re not legislators, we’re regulators. So, stronger, more robust enforcement of the Privacy Act, with a view to advancing judicial interpretation and therefore ultimately being able to provide more guidance and education back to entities, is really the priority for the OAIC in 2025.
Catts: Are there any key takeaways Australia should be adopting from other jurisdictions for our privacy obligations? What’s your advice to Australian businesses as they consider their privacy obligations?
Kind: Most Australian companies, or many at least, are engaged in the global economy, and so many of them will already be complying with other jurisdictions’ privacy requirements. On the one hand, Australia’s privacy framework is a little outdated compared to some other jurisdictions, and therefore those Australian companies that are already trying to meet the level of the GDPR, the European framework, will be in a good position to ensure privacy compliance no matter what happens with Privacy Act reform, as the GDPR is quite a robust, and at the least more recent, legal framework than the Privacy Act.
Having said that, the Privacy Act does involve some unique approaches that aren’t really evident in other jurisdictions. It’s not, for example, heavily contingent on consent the way the GDPR is. That may be cast as a weakness of the Australian regime; I see it actually as a positive, because many of us know that consent isn’t really a very effective means of protecting individuals anymore. People just click “Yes” to terms and conditions that are 40 or 50 pages long without really reading them. The Australian framework requires concepts like necessity and fairness to be built in right from the start, and that means you can’t just consent to waive your rights, and equally, entities can’t just get consent to cover a range of other uses. I do think the Australian framework is quite unique. It requires particular analysis and thought to ensure compliance, but I equally think that where entities are broadly displaying an interest in good governance practices, they’re going to be in a really good position to comply with the Privacy Act generally.
Catts: Great. Some really great insights and food for thought there too, Carly. The new privacy penalties that were introduced a couple of years ago significantly increased penalties for repeated or serious privacy breaches, along with the OAIC’s new powers and the penalty fees from the recent changes you were mentioning before. Have there been any notable impacts from the penalty increase?
Kind: The litigation we currently have on foot in court against both Medibank and ACL is under the old regime, not the new enforcement penalty regime that came into effect in 2022. We’re still at the tail end of the previous approach in our enforcement matters, but likely any new enforcement matters, and particularly civil penalty proceedings, will be under that new regime.
I think the more notable change you allude to, Hanneke, is the introduction of different tiers of penalties in 2024. Basically, it preserves the serious-privacy-interference tier and adds a tier for interference with privacy, removing “serious” from the second tier, and then it also gives us the lowest tier, which is the ability to issue infringement notices for technical contraventions of the Privacy Act. These are things that relate to the privacy policy of an entity, for example. That’s a really interesting and exciting development because it enables us to take a more robust approach to enforcement of technical infringements. It’s a lower-cost and simpler procedure by which, after issuing a notice, we can issue an infringement fine of up to about $60,000. That should act as a really good incentive for entities to tighten up their compliance with those technical requirements, including on privacy policies, and it will also allow us to take action where there’s persistent or quite egregious non-compliance in that space. We wouldn’t be using these powers arbitrarily to go after entities that have made good-faith efforts to comply with the law, but we would be using them where we see really consistent or egregious harms or where vulnerable people are implicated.
Catts: Yes. In regard to the different tiering, and I know you said it’s still being worked through, can you give us some insight into what type of breach would constitute a mid-tier penalty versus a low-level administrative breach?
Kind: Mid-tier penalties are applicable to things like the collection principle, APP 3: whether it’s necessary for an entity to collect personal information in the first place and whether they do that by fair and lawful means. We’ve issued a range of determinations recently that look at scraping of personal information, which we say may not reach the threshold of fair and lawful depending on all the circumstances. That could be a space in which we potentially use this power. Another APP to which it clearly applies is APP 6, which is around use, secondary use and disclosure of personal information: when an entity has collected data for one reason and then decides to use it for another, or passes it on to a third party in a way that wasn’t anticipated when it first collected that data. That is another space in which a contravention may give rise to enforcement proceedings under the new provisions.
Catts: Thank you for those clarifications. Turning our attention now to artificial intelligence, what do you see as the privacy implications of the dramatic rise of artificial intelligence broadly, and generative AI more specifically?
Kind: Yes, and it’s hard to disaggregate the two sometimes, isn’t it? Obviously, we’re very preoccupied by generative AI at this current moment in the last year or so, but of course, artificial intelligence in various forms, particularly machine learning, has been around for a long time and much of it doesn’t have privacy implications, for example, where it’s not using personal information at all. That kind of AI, as I said, is well-established in some sectors, for example, in supply chain logistics, and wouldn’t fall within the category of AI that does raise privacy concerns.
Generative AI does raise particularly novel and challenging concerns, and I would group those issues, maybe crudely, into an input and an output bucket. The input bucket relates to the scraping of personal information, or the use of personal information collected for one purpose, to train an AI model. I think this does raise potential concerns when it comes to the Privacy Act, which has a range of thresholds that have to be met. One is, of course, that the collection is fair and lawful, and another is that if you’re using data you already hold for a secondary purpose, you either need consent for that or need to be able to establish that it was within the reasonable expectations of the individual. I think there are some challenges there when it comes to the use of personal information to train generative AI models, and it’s something my office is looking at at the moment.
There’s a big question about whether an individual’s data may be misused, or fall out of their control, by being used to train AI.
Then, at the output end, we see a range of privacy issues as well. We see, potentially, inaccurate data being disclosed through models, and we see security risks that may implicate privacy through generative AI models, so potential risks around technical vulnerabilities that could be occasioned through AI models. We’ve issued guidance on both of those issues: one on the input question, on how you can develop and train an AI model consistently with the Privacy Act, and then on the output question, focused on commercially available AI products and how entities are using those. One thing to really draw out from that guidance is that when a business is using a commercially available model, there is a big difference between running the tool on premises and using cloud-based infrastructure. If it’s the latter, then there’s a separate range of concerns, which relate to the disclosure of personal information to other entities, particularly those overseas.
We are urging caution when it comes to the use of commercially available AI models with personal information. Again, that’s the line in the sand for me: if you’re not using personal information, for example customer information, then it’s a different set of considerations; you might want to think about other things like accuracy or hallucinations, et cetera. But if you’re using personal information in the context of these models, particularly cloud-based models, then you do need to think about things like the disclosure requirements.
Catts: Carly, finally, looking forward to the end of the decade, what’s your vision for the future of privacy in Australia?
Kind: What a fun question. [Laughter] I’ve got big hopes. Look, I think Privacy Act reform would be great. There are some wonderful proposals in the Privacy Act Review, and it would be really great to level up the Privacy Act. But notwithstanding those reforms, I think there’s a lot we could be doing. Our office has really only shifted toward more of an enforcement posture in the last couple of years, and I see a lot of scope for building that up. Again, not with a view to being punitive to entities, but with a view to really establishing how the Privacy Act applies and achieving general and specific deterrence. I think we could also continue to build the privacy community in Australia. There’s a really strong privacy professionals’ community, and I’d love to see that continue to grow.
One of the upsides of the data breaches of the last few years (and there are very few upsides, given their harmful impacts) is that privacy is starting to be on the agenda at board meetings and with the C-suite. We could continue to enhance that, particularly through this more robust enforcement posture, so that CEOs, general counsels and others really see this as a risk issue that needs to be managed proactively.
AI is obviously going to be a big game changer in the next few years, and we need to look at how to approach that in Australia. I would like to see some effort in the regulatory space to articulate particular rules around AI, but I’d also like us to think about how to use innovative methods to regulate. We haven’t really dipped our toe in the water of regulatory sandboxes in Australia, for example, whereas other jurisdictions are doing this. I’d love that to be something the OAIC can take onboard. Likewise, being able to provide innovation advice to entities that are starting to think about how to use personal information in the development of products and services, so they can come to the regulator before they do that to get advice on how the law applies. Again, this is something I’d be stealing from other jurisdictions, perhaps better-resourced regulators that have the ability to provide innovation hotlines, et cetera, but that would be a great space for us to lean into, the way I see it.
Catts: Carly, that’s fantastic. Thank you very much for speaking with us today. We really appreciate all of your time and your insights.
Kind: Thank you for having me, Hanneke. I appreciate it.
Catts: Thanks, Carly. With that, we’ll hand back over to Joe.
Kornik: Thanks, Carly. Thank you for watching the VISION by Protiviti interview. On behalf of Hanneke and Carly, I’m Joe Kornik. We’ll see you next time.
Carly Kind is the Privacy Commissioner for The Office of the Australian Information Commissioner, an independent national regulator for privacy and freedom of information and government information policy. The OAIC’s responsibilities include conducting investigations, reviewing decisions, handling complaints, and providing guidance and advice. Previously, Kind was Director of the Ada Lovelace Institute, an independent research institute and deliberative body with a mission to ensure data and AI work for people and society.

Hanneke Catts is a Sydney-based Protiviti director with over 15 years’ experience focusing on technology consulting, including privacy, technology risk, project management and assurance, IT controls and security compliance, enterprise risk management, and internal audit and regulatory compliance. Catts has worked with many organisations in Sydney and London with large and complex IT environments in the financial services, technology, government, health and manufacturing industries, and with smaller organisations with specific IT needs.

Robert Half execs: Our focus on data security and privacy creates competitive advantage
In this VISION by Protiviti interview, Joe Emerson, Protiviti Managing Director in the Security & Privacy practice, sits down with three Robert Half executives: Chris Hoffmann, Senior Vice President & Global Privacy Officer; Emebet Chesley, Vice President of Global Privacy; and Clint Maples, Chief Information Security Officer, to discuss the future of data security and privacy. Protiviti is a wholly owned subsidiary of Menlo Park, California-based Robert Half, the world’s largest talent solutions firm specializing in connecting highly skilled job seekers with companies.
Joe Emerson: Welcome, and thanks for doing this. There’s no question that privacy and data protection will continue to be major issues for the next few years. What are your biggest challenges? Where are you focusing your efforts?
Chris Hoffmann: There are difficulties, but we also see opportunities. As a global company with offices in more than 20 countries, we recognize that privacy and data protection are important considerations for creating a competitive advantage for us by building the trust of our stakeholders, including our employees, clients, and candidates. To that end, we put a lot of emphasis on both and face challenges that touch on both.
Emebet Chesley: From a privacy perspective, I think the answer is twofold: First, the ever-increasing array of privacy laws. It feels like each day there is a new privacy law adopted or modified. Each time this happens, we must analyze the law and its impact on our business and our processes. Second, the regulatory framework is not written with the goal of facilitating business and the transfer of data. This requires businesses to adapt their processes to meet the varying regulatory frameworks. These laws are not business-friendly, so modifying your processes can have a detrimental impact on your business.
Clint Maples: Well, from a security perspective, I think it is the evolution of the threat actor. Threat actors spend all day, every day trying to find a weakness or vulnerability in your environment. One mistake or one bad click can create material, expensive, time-consuming incidents that can have negative brand and financial consequences. Our employees are our first line of defense, and one mistake is all it takes to create a potential incident.
Emerson: The role of the chief privacy officer is in flux. How do you see the CPO role evolving? What do you see as your primary role within the organization? And do you anticipate any changes of responsibilities in the future?
Hoffmann: The CPO role seems to be evolving into more of a front-line role, akin to a chief trust officer role. The CPO used to function behind the scenes, with little direct impact on the business. Now, with the ubiquitous nature of privacy laws and their impact on the business, and the introduction of AI and its direct impact on business operations, the CPO must be aware of all processes within the business. Frankly, implementing the notion of privacy and security by design requires the privacy and security roles to be at the table for all conversations regarding new or modified processes for the collection, use or storage of data, especially personally identifiable information (PII).
Emerson: Thanks. Let’s talk about AI. We’re already seeing AI — the development, use, etc. — have an impact on data and privacy. Do you have any major ethical concerns that are often overlooked or not considered closely enough?
Chesley: Robert Half has been using AI for quite some time already, and we have adopted processes designed to limit our use appropriately. As the access to and use of AI have increased, we, as an enterprise, have embraced it, with the goal of increased efficiency and better solutions for our stakeholders. We also recognize the possibility for intended or unintended misuse of AI. As a result, we have created an enterprise-wide AI Steering Committee made up of senior executives, whose purpose is to monitor evolving technologies and standards relating to artificial intelligence, and to develop and maintain an artificial intelligence governance program consistent with the enterprise’s AI policy.
Emerson: Barely a week goes by without hearing about a significant breach, often from repeat offenders. Are we becoming desensitized to these breaches? And if so, what do you foresee as the biggest danger or concern of that occurring? Are boards and the C-suite taking this seriously enough?
Maples: It's a real concern that we're becoming desensitized to breaches. We see the headlines, but the impact on individual companies—beyond a temporary stock dip or reputational hit—isn't always lasting. The biggest danger of this desensitization is complacency. If breaches become “business as usual,” investment in proactive privacy and security measures may stagnate.
Are boards and the C-suite taking it seriously enough? It's a mixed bag. Some are, especially in highly regulated industries or after experiencing a major incident. Many others are still treating privacy and security as a compliance checkbox rather than a strategic imperative. We are fortunate that our CEO and board have made it very clear that protecting the security, confidentiality, and integrity of the data we collect is paramount to our business and a prerequisite to our success.
The catalyst for change? Unfortunately, it might take something more than just breaches. Think sustained regulatory fines that significantly impact the bottom line, major class-action lawsuits with hefty payouts, or a truly catastrophic breach that causes irreversible damage. Consumer pressure, expressed through boycotts or demands for greater transparency, could also be a powerful driver.
Emerson: Looking at the international landscape of privacy regulations, they all follow a similar premise but have their own unique nuances. The General Data Protection Regulation (GDPR) often is portrayed as the beacon on the hill, though the enforcement actions from the European data protection authorities have been limited to date. Who do you think is getting it "the most right" in balancing regulation and enforcement and should the U.S. use them as the model to put comprehensive privacy regulation in place?
Chesley: I wouldn't say any one jurisdiction has it perfectly “right,” but some are doing interesting things. For example, GDPR’s emphasis on and creation of a fundamental right to privacy has created the foundation for other jurisdictions to implement similar legislation, selecting those laws that best fit their needs and concerns. Creating a balance between the need for the free flow of data while protecting and acknowledging the individual’s right to privacy is difficult since the two notions are often at cross-purposes.
The U.S. should absolutely look to these models, but it should not copy any one blindly. A successful U.S. framework needs to find a balance. It must be strong enough to protect consumer rights and drive real change, but also pragmatic enough to be workable for businesses of all sizes. Strong federal preemption, clear definitions, and reasonable enforcement mechanisms are key.
Emerson: Looking to the future—privacy's five-year plan—where do you think the U.S. will be on its journey in 2030?
Hoffmann: By 2030, I hope we will have a comprehensive federal privacy law in place in the U.S. I doubt it will be perfect, but I expect we’ll have one. The patchwork of state laws with their different rules and requirements is becoming untenable, and the pressure from international partners—and from the business community itself, seeking clarity—likely will force action.
Beyond legislation, I expect to see a few things: One is a greater focus on data minimization and purpose limitation. Laws will require the “collect everything” mentality to be replaced by a more thoughtful approach to data processing. Also, I see increased consumer awareness and agency. Individuals will have better tools and understanding to control their data, though whether they use them effectively is another question. Finally, I think there will be more AI-driven privacy tools, both for compliance and for individual control. Overall, a fragmented landscape will coalesce into a more uniform approach. At least that’s the hope.
Board directors and business leaders need to stay hyper-informed in a rapidly evolving landscape. There are many proposals on the table in terms of legislative initiatives, but no comprehensive federal regulation in the U.S. yet, let alone a global set of standards.
Emerson: Finally, let’s stay in 2030. By then, I think we’ll be seeing significant impacts from new emerging technologies—namely, quantum, spatial and biometric computing—that could impact privacy in ways we have not even realized yet. How do you see those technologies impacting privacy?
Maples: Well, quantum, spatial and biometric computing present enormous privacy challenges. I’ll start with quantum: The most immediate threat is to encryption. Quantum computers could break many of the encryption algorithms we rely on today, rendering vast amounts of sensitive data vulnerable. We need to prioritize the development and deployment of post-quantum cryptography now.
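One widely recommended way to prepare for that migration is crypto-agility: putting key agreement behind an interface so a post-quantum algorithm can be swapped in without rewriting callers. The sketch below is a hypothetical Python illustration, not anyone’s production code; the KeyAgreement interface and class names are invented, and it assumes the open-source “cryptography” package. A post-quantum KEM such as ML-KEM uses encapsulate/decapsulate rather than a Diffie-Hellman-style exchange, so the abstraction would need to adapt, but the principle, that callers never hard-code the algorithm, is the same.

    # Hypothetical crypto-agility sketch: callers depend on an interface,
    # not a specific algorithm, which eases a future post-quantum swap.
    from abc import ABC, abstractmethod
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
        X25519PublicKey,
    )

    class KeyAgreement(ABC):
        @abstractmethod
        def public_bytes(self) -> bytes: ...
        @abstractmethod
        def shared_secret(self, peer_public: bytes) -> bytes: ...

    class X25519Agreement(KeyAgreement):
        """Today's classical elliptic-curve key agreement."""
        def __init__(self) -> None:
            self._private = X25519PrivateKey.generate()

        def public_bytes(self) -> bytes:
            return self._private.public_key().public_bytes(
                serialization.Encoding.Raw, serialization.PublicFormat.Raw
            )

        def shared_secret(self, peer_public: bytes) -> bytes:
            return self._private.exchange(
                X25519PublicKey.from_public_bytes(peer_public)
            )

    # A post-quantum implementation would be added here later and chosen
    # by configuration; no caller code below would need to change.
    alice, bob = X25519Agreement(), X25519Agreement()
    assert alice.shared_secret(bob.public_bytes()) == bob.shared_secret(alice.public_bytes())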
As far as spatial computing, these technologies collect incredibly detailed data about our physical spaces, movements and even our emotional responses. The potential for surveillance and manipulation is significant. We need to establish clear rules about what data can be collected, how it can be used, and who has access to it. Consent mechanisms need to be completely rethought in this context.
When it comes to biometric computing, the widespread use of biometric data—fingerprints, facial recognition, voiceprints, etc.—creates a honeypot for attackers and raises serious concerns about bias, discrimination, and government overreach. We need strict limitations on the collection and use of biometric data, particularly by law enforcement, and strong protections against misuse.
The key with all these technologies is to get ahead of the curve. We can’t wait until they’re widely deployed and the privacy risks are exposed. We need to be proactive in developing ethical guidelines, technical safeguards and legal frameworks now to ensure that privacy is built in by design, not bolted on as an afterthought.
Emebet Chesley is vice president of global privacy at Robert Half. In her role, she leverages her strong legal background and expertise to strengthen Robert Half’s global information privacy initiatives, leading multiple teams throughout North America, South America and the Asia-Pacific region. Chesley has held multiple positions throughout her 18 years at Robert Half, most recently as senior director of the legal practice for client engagements at Protiviti, a Robert Half subsidiary.

Clint Maples is chief information security officer at Robert Half. In his role, Clint successfully identifies security risks while overseeing an information security program that protects data privacy, meets compliance requirements and ensures the protection of proprietary information. Additionally, he is President and Board Chairman of the Information Security Leadership Foundation, a community of information security executives focused on the education, mentorship and development of future security leaders.

Chris Hoffmann is a senior vice president and the global privacy officer at Robert Half. In this role, he supports Robert Half and Protiviti and is responsible for managing an international team of legal, business, privacy and security professionals, including overseeing multiple legal teams and the company’s global data privacy and IT security efforts and initiatives. Hoffmann has more than 30 years of experience, with a focus on compliance, policy, privacy, security, technology and complex commercial transactions.

Joe Emerson is a Managing Director and leader in Protiviti’s Data Protection and Privacy practice, where he works to strategize, develop and deliver complex privacy and compliance solutions for some of the world’s largest and most innovative companies. His career has included serving as an independent assessor pursuant to FTC Consent Orders, acting as a HIPAA Compliance Officer and Privacy Officer for major corporations and government agencies, managing privacy regulation readiness and performing compliance assessments.

AI and teen privacy panel discussion with Future of Privacy Forum leaders
In this VISION by Protiviti podcast, Protiviti Senior Managing Director Tom Moore leads a discussion on the impact of AI and the critical need for children and teen privacy with key members of the Future of Privacy Forum, a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Tom welcomes Anne Flanagan, Vice President of Artificial Intelligence for the Forum, and Bailey Sanchez, Senior Counsel with the Future of Privacy Forum’s U.S. Legislation Team. The panel was recorded as part of VISION by Protiviti’s recent webinar “Building trust through transparency: Exploring the future of data privacy.”
In this discussion:
1:15 – Future of Privacy Forum: mission and purpose
4:05 – AI risks and harms
8:55 – Youth and teen privacy concerns
14:09 – Regulatory frameworks
22:54 – Three- to five-year outlook on privacy and AI regulation
Joe Kornik: Welcome to the VISION by Protiviti podcast. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. This special edition podcast highlights a panel discussion hosted by Protiviti Senior Managing Director Tom Moore. The panel was recorded as part of VISION by Protiviti's recent webinar, Building Trust through Transparency: Exploring the Future of Data Privacy. Tom leads a discussion about the impact of AI and the critical need for children and teen privacy with key members of the Future of Privacy Forum, a global nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Tom welcomes Anne Flanagan, Vice President of Artificial Intelligence for the Forum, and Bailey Sanchez, Senior Counsel of the Forum’s U.S. Legislation Team. Tom, I’ll turn it over to you to begin.
Tom Moore: Great. Thanks, Joe. Anne and Bailey, thank you very much for the opportunity to speak with you today. You're both deep subject-matter experts representing a fantastic organization, the Future of Privacy Forum. We're thrilled to have you today, so welcome. I'm going to start just with a general question about FPF. Can you tell me about the mission of FPF, what role it plays in thought leadership around the privacy space? Anne, why don’t you go first and then Bailey, I'll let you chime in.
Anne Flanagan: Tom, it’s such a pleasure for us to be here today, and great that Bailey is joining as well. The Future of Privacy Forum—I know Joe introduced us briefly earlier on, and indeed we may have some Future of Privacy Forum members on the webinar today—is a membership-funded organization, a combination of membership and some grants. We sit in the nonprofit space between the public sector and the private sector. We primarily help senior privacy, data and AI executives, and folks who work in the policy and regulatory space, understand what's happening around the world of privacy as the concepts evolve. We are technology-optimistic but, obviously, very pro-privacy. We're headquartered in Washington, DC; I myself am based on the West Coast in San Francisco, but we also have a presence in the EU and Asia-Pacific, and folks who work in Africa and Latin America as well.
So, we really are, as you can see, right around the world in our presence, and the word “forum” is definitely not accidental. We act as a convener for folks to have these difficult conversations about privacy, particularly as technology evolves ever faster and data needs are front and center for most companies in this day and age. I lead our work on artificial intelligence, and even though FPF had worked on artificial intelligence for seven or eight years, we launched a center for AI earlier this year to consolidate that work and to tackle some bigger AI projects. I'm really pleased to announce that we have a major piece of work launching before the end of the year that folks on this call may be interested in, so we can come back to that later. But we're really looking at how executives are assessing risk around AI right now, which I think is top of mind for a lot of folks. Bailey, I'll hand over to you.
Bailey Sanchez: Thank you, Anne. So, at FPF, we look at privacy and AI from a law, technology and policy perspective. On the U.S. legislation team, I look closely at what the law says and where the emerging trends are, and we do comparative analysis of different legal regimes. One report that is pretty relevant for this group is one we just published on 2024 state AI legislation trends. I also have a lot of expertise in the youth privacy and safety space, which is why I'm joining today's conversation.
Moore: Great. Well, again, thank you both for joining us. Anne, let's start with you. Artificial intelligence is your area of expertise. Can it potentially compromise an individual's right to privacy? Can you give us some examples of the harms and risks that accompany artificial intelligence?
Flanagan: I love these questions because AI is top of mind for absolutely everybody. Folks are talking about it around the dinner table and around the C-Suite table, and people are using it in their day-to-day jobs right now. It has gone very, very mainstream. Those of you in the privacy and data community have probably been talking about it for years, if not using it for years; AI is not necessarily anything new. But about two years ago, this thing came along called ChatGPT and revolutionized and democratized access to AI in a way we had never seen before. As a consumer-facing technology, it unleashed an absolutely exponential boom, and as a result we start to see pressures in the market and pressures inside organizations around using AI.
I think anytime you end up with a new technology—or effectively a new technology—where there's a lot of pressure to use it, deploy it and develop it, the data behind it can obviously create risk. And I think that's really what you're getting at: what is the intersection between AI and privacy, as we sit here at the end of the year, and how does that change the dynamic?
When we go back to basics and look at what it means for a technology to create privacy risk, it comes down to two main things, Tom. One is, where is the data coming from that backs that technology? When you look at something like an LLM, you're talking about the training data. Where did that data come from? Is it information that was scraped off the web? Is it information that's been collected from apps on your phone? Is it a form that you signed somewhere? Is there personal data in the mix? There could be proprietary information in the mix as well—that's a separate concern, since we're focused on privacy today. Going back to where the data came from and the hygiene around that data, that's one area where things can go really wrong really quickly. The old “garbage in, garbage out” line is very, very real when it comes to an LLM, because you're constantly iterating and constantly building on what was there before.
So, when it comes to developing and building models, or deploying an AI system in an environment where you're inputting data into it, it's really important to have that hygiene around protecting the data on the input side. You could have potential privacy implications there.
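As one concrete illustration of input-side hygiene, a team might screen obvious personal identifiers out of text before it is logged or fed into a model pipeline. The Python sketch below is a minimal, hypothetical example using simple regex patterns; real programs typically layer dedicated PII-detection tooling and human review on top of anything this basic.

```python
# Minimal input-side hygiene sketch: redact obvious personal identifiers
# before text enters an LLM pipeline. Hypothetical and intentionally
# simplistic -- these pattern lists are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}


def redact(text: str) -> tuple[str, dict[str, int]]:
    """Replace matches with typed placeholders; return text and match counts."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[{label}]", text)
        if n:
            counts[label] = n
    return text, counts


if __name__ == "__main__":
    raw = "Contact Jane at jane.doe@example.com or 415-555-0199, SSN 123-45-6789."
    clean, found = redact(raw)
    print(clean)   # Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
    print(found)   # {'EMAIL': 1, 'SSN': 1, 'PHONE': 1}
```

Returning the match counts alongside the cleaned text gives the pipeline an audit trail: a spike in redactions on a given data source is itself a hygiene signal.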
The other area, which is maybe the more obvious one and really where consumers might actually see harm, is the output side of things. You may have some very serious situations—consequential decision-making, for example. You could be applying for a mortgage, and maybe your bank is deploying an AI system to make a decision about your creditworthiness. If it has information that is incorrect or biased, or if the model is not developed in a way that takes fairness in its output into account, you could end up with outcomes that are very consequential for your life and that really stem from a violation of your privacy or from data that's not quite accurate. That's where we start to see the rubber hit the road.
In terms of general output, data breaches were already mentioned on the call today. To build and deploy AI models, you're often looking at huge swathes of data. We've heard for years this idea that more data is always better, and the consequences of a data breach at an organization that is developing or deploying AI may be—not necessarily, but may be—graver than at an organization whose data use is more minimal. So it really goes back to basics: the data hygiene and the normal risks companies look at when it comes to privacy still apply. AI just amplifies and increases that risk.
And then the last thing here is that there's a literacy gap right now because AI is developing so fast. I don't just mean a literacy gap in terms of how the technology actually works, but in what the technology means for your business, your customers and the folks whose personal data might be in the mix—where the PII actually comes into play. There often just isn't a lot of time to think about these problems because there are other concerns around the business. So the speed of deployment is a really big barrier, and there's a catch-up period for that literacy gap. Organizations like Protiviti and the Future of Privacy Forum try to help educate in that space.
Moore: Excellent. Thank you. Bailey, turning to you. Obviously, we just talked about AI, but there are other innovations out there as well—quantum computing, AR, etcetera. How are these influencing the landscape of teen and youth privacy? Is it all harms, or is there also potential to enhance privacy with these tools?
Sanchez: Sure. There are certainly harms to consider. One harm that's very top of mind right now for kids and teens specifically is synthetic content, and using generative AI to create it. It's Election Day, and there's been a lot of focus on how generative AI will impact elections, but it's important to remember that there's a whole spectrum of harms with AI and the other emerging technologies you just mentioned. It's not that the harms are different for children; they're usually just exacerbated—things like kids using generative AI to bully their peers, or kids and teens using generative AI to create CSAM. A lot of the stories we hear about that online are often perpetrated by other students rather than by a shadowy bad actor.
But there are also opportunities with AI and other emerging technologies. Something we talk about a lot is cyber hygiene—making sure you have your passwords in order—and the range of internet-facilitated scams. I think there's actually an opportunity to use AI to help vet malicious content, keeping in mind that kids and teens are particularly vulnerable groups there.
Then also, AI can have a lot of benefit in the school context. Predictive AI has been used in schools for a long time. There are harms we hear about, like AI being used to make decisions about college applications. There was a really bad story a couple of years ago out of Florida, where early warning systems were predicting how likely a student was to become a criminal. But on the flip side, the technology can be used to help students with homework. There's an interesting Google notebook tool where you can upload your notes or documents and it creates a podcast for students. So there are opportunities as much as there are risks. Another harm to consider is that kids cannot always vet an AI tool—but, as Anne just said, there's a digital literacy gap for adults as well. We tend to think of kids as a very separate and distinct group, but a lot of the time it's the same or similar harms, and we just need to amplify whatever tools or safeguards we put in place.
Moore: Well, Bailey, let's stick with that topic for just a second and talk about what proactive steps individuals, schools, families and policymakers can take to help young people avoid these threats and use these tools for good.
Sanchez: I think a really basic one is just to learn and understand the technology. We call kids a vulnerable group, but they're pretty savvy. Kids are going to be bringing a lot of tools from home into the classroom, so there is an obligation for us as adults to be up to speed on those tools, too. Focusing on the highest-risk types of processing is really important from the company and government perspective. AI is used for a whole range of things—Spotify uses AI to make song recommendations, which carries a much lower risk of harm than AI being used to make a decision about a student's educational outcome—so pinpoint what types of risk you are trying to solve for.
Then, specific to the education and student context, I've been seeing an uptick of companies wanting to deploy their products in the education space. They might think, “I've created this for consumer-facing or B2B use—what about B-to-school?” But it's important to keep in mind that there are special considerations with schools and student data. You need to tread cautiously in those spaces and make sure you have all of your compliance boxes ticked.
Another immediate thing to keep in mind: there's a whole discussion about age assurance. Should we restrict kids from certain segments of the internet? Do we need to design things that are child-friendly? I don't think that policy debate has an answer quite yet, but in the meantime, companies can make sure they have a process in place for handling kids' data if it makes its way to them. A lot of companies might be B2B and not intended for kids, and they might not be doing proactive age verification because they just don't anticipate a lot of kids coming their way. If kids' data makes its way into your processing, make sure you have a plan for what you're going to do with it.
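To make that concrete, a minimal version of such a process might simply flag records that appear to belong to minors and route them to restricted handling (parental-consent checks, deletion queues) rather than the default pipeline. The Python sketch below is a hypothetical illustration; the 13- and 18-year thresholds echo common legal lines such as COPPA's under-13 rule, but the field names and handling tiers are assumptions, not a compliance recipe.

```python
# Hypothetical intake triage for possibly-underage records: route anything
# that looks like a minor's data away from default processing. The tiers,
# thresholds and field names are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date


@dataclass
class IntakeRecord:
    record_id: str
    birth_date: date | None  # None when age was never collected


def age_on(birth_date: date, today: date) -> int:
    years = today.year - birth_date.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def triage(record: IntakeRecord, today: date | None = None) -> str:
    today = today or date.today()
    if record.birth_date is None:
        return "review"            # unknown age: hold for manual review
    age = age_on(record.birth_date, today)
    if age < 13:
        return "restricted-child"  # e.g., parental-consent check or deletion queue
    if age < 18:
        return "restricted-teen"   # heightened safeguards
    return "default"


if __name__ == "__main__":
    print(triage(IntakeRecord("a1", date(2015, 6, 1))))  # restricted-child
    print(triage(IntakeRecord("a2", None)))              # review
```

The useful property is that "we don't know the age" is an explicit state with its own handling path, rather than silently falling through to default processing.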
Moore: So, Bailey, we've talked about government regulation somewhat. What legal frameworks exist, and how should policy evolve over time to continue to safeguard the privacy rights of our young people?
Sanchez: Yes. As I've mentioned, the Future of Privacy Forum recently published a 2024 state AI trends report. One of the more significant state bills was the Colorado AI Act. It establishes broad consumer rights and business obligations, but it is focused only on discrimination and on systems that are a substantial factor in consequential decisions—the things we've been talking about, like health, employment and housing. That's not necessarily a bad thing; maybe we don't need very specific AI regulation for every single type of AI out there—the Spotify recommendations again. So, a trend we're seeing in the U.S. is a big focus on those consequential decision-making AI systems rather than general-purpose AI.
Some other steps that can be taken are targeted rulemaking, focused on the specific segments of risk we're trying to pinpoint. But it's important to keep in mind that privacy rules—particularly strict data minimization and limits on secondary use—could have a negative impact on training safe and fair AI systems, which rely on representative training data sets. So there's a tradeoff to consider between very strong privacy safeguards and leaving room for innovation.
Moore: So, Bailey, you mentioned Colorado, among other states. Do you see regulation of artificial intelligence, especially with respect to youth and teen privacy, occurring at the state level in the U.S., or do you foresee anything happening at the federal level?
Sanchez: That is a good question. Kids' privacy and online safety has been a very big topic for policymakers globally. I know you mentioned some skepticism about federal privacy legislation, but if anything were to pass on privacy or AI at the federal level, I think kids' privacy is one of the areas most ripe for it. Keep in mind, though, that when it comes to kids' privacy and safety, lawmakers approach it from a lot of different angles: the data risks that Anne highlighted, content moderation, free speech, safety, and the rights of the kids themselves. So predicting what might happen federally is very tough. At the state level, a lot of the bills I've seen have been focused on requiring specific opt-ins for training with kids' data, or on banning kids from addictive feeds. Those are very, very concrete, whereas the rest of the AI policy conversation is focused on that broader set of issues.
Moore: Let's zoom out to AI in general. Do you think the legal frameworks in place today are adequate to address AI threats and harms, or how do you see them evolving to better protect individual privacy?
Flanagan: This is a great question, and one that's very close to our hearts at the Future of Privacy Forum. There's obviously a lot of activity happening in the United States right now—we see a lot of AI bills at the state level—but given that this is a global webinar, it's helpful to zoom out and look at the general state of play, because we have that precedent of privacy and data protection regulation right around the world, which serves as a core building block when it comes to tackling some of the issues around AI. We already spoke about data, and in the EU, for example, the GDPR has been there since 2018. We're starting to see more and more enforcement, more and more cases involving AI, where the GDPR is being used as the tool to course-correct harms. Quick reminder: the GDPR is use-case agnostic and technology-neutral. It certainly did not foresee generative AI, but it should be future-proof enough to be used in that context. There's a big conversation happening in Brussels right now as to whether it needs to be opened up or modified in any way, shape or form.
We're starting to see a lot more enforcement on AI, in addition to automated decision-making, where we've seen enforcement for quite a while. Then you have the EU AI Act in Europe, which entered into force in the middle of August and will phase in over about 24 months. What we're going to see is a staggered approach: if you operate in an area categorized as high risk—education or employment, to name two examples—your obligations tighten over time. But it's really built around product liability. It's not built around the rights of people, and it doesn't have a civil rights component like we see in laws in the United States, for example.
The long and the short of it is that, given how influential the GDPR has been around the world—to a degree in the United States, but mostly outside it—there is a baseline of privacy protection in place in most countries. It's certainly not adequate to address all of the harms and correct all of the problems with respect to AI, but it goes a really long way, and I don't think anyone can turn around and say they have nothing to go on. There's certainly something there already.
If you look at what's happening in the Asia-Pacific region, it's very interesting. You see governments like Singapore's, with its model AI governance framework—a softer type of law that falls short of regulation but advises companies to create risk frameworks around how they use AI. It's really similar to what you see in the United States for public-sector use of AI, particularly around procurement: the NIST AI Risk Management Framework. Again, it's a softer piece of work, shy of regulation, but the tools are really there—making sure you know what data you have, mapping it, doing risk analysis, and actually dedicating time, attention and focus so that folks in the organization address the risks surrounding AI. There's a lot of best practice there.
We're starting to see some of the building blocks of the NIST RMF reflected in state-level legislation around AI, along with ideas about ensuring consistency with privacy laws in the United States—a bit more polish and a bit more sophistication. We still, of course, have a patchwork of laws in the U.S., and it can create a lot of confusion. One of the things the Future of Privacy Forum talks about a lot is that a federal privacy law would not solve all of these problems, but it certainly would create a more cohesive and harmonized framework across the United States and improve the state of play with respect to these open questions and inconsistencies. That's good for business, it's good for people, and it would bring about a minimum level of safety around this topic.
Then, looking at the rest of the AI regulatory landscape, you see those two big areas—the data and the potential risk—and the risk basis of the EU AI Act, with different levels of risk attached to different use cases. Long story short, Tom: we've moved from a world where the existing regulation touching AI is principles-based, built around the person and relatively technology-neutral, as you see in privacy laws, to one where we're starting to see more focus on the use case. Of course, those use cases will continue to evolve, and as Bailey mentioned earlier, when it comes to AI harms, certain activities are going to intrinsically carry a lot more risk than others.
Moore: Yes. All right. Well, I think we have time for one last question for both of you. Make a bold prediction three to five years out. What may surprise us about youth and teen privacy, or AI—something people may not be thinking of? What might you expect to see in the future that others who aren't studying this as deeply as you are may miss?
Sanchez: I can go first. In the kids' privacy and safety space, a lot of laws have passed at the state level, and a lot of them have resulted in litigation that is making its way through the courts right now. There's an age verification law that's going to be heard at the Supreme Court this term, one at the Ninth Circuit, and a bunch of district court cases. These are important to pay attention to because they're answering a lot of interesting questions about the future of internet regulation—getting back to whether you can age-gate your service or have to make it age-appropriate for everyone. Another interesting aspect is legally required disclaimers, which I think will be very relevant to the discussion around AI transparency. It will take three to five years to get those answers, so that's my bold prediction: in five years, we'll have a lot more clarity on what the legal framework in the U.S. will look like around privacy and AI.
Moore: That's a great call, I agree. Anne, anything from you, any bold predictions?
Flanagan: I love this crystal ball question. Five years ago, we couldn't have predicted generative AI, so I'm going to start with that: I think the technology will surprise us, and the consequences will be twofold. First, I think we're going to see more and more enforcement of existing regulations, because we're going to see harms that weren't necessarily anticipated, and regulators will use the tools already in their toolbox to address them. Second, as new technologies evolve, I think we're going to see some of the principles we've accepted stretched to the limit. I'll give you a perfect example. There's an outstanding, almost philosophical, question right now: can an LLM actually contain personal data? It's trained on personal data, and personal data can come out the other side, but does the model itself contain personal data? What are the implications for other technologies and similar scenarios? Regulators disagree on this right now—it's come up in California and in Hamburg in Germany, and the European Data Protection Board is weighing what it thinks and has asked for comments from various stakeholders. So some of the things we've taken for granted, we're going to have to think a little harder about and get a little more sophisticated on—but I think we'll have a lot of surprises.
I will leave folks with one last message: no matter what happens with the technology, how it's stretched and what enforcement we see, getting the basics right is really half of the battle. By that I mean the data hygiene piece—having the time, attention and systems set up internally. That goes a really long way toward preventing any harms that might emanate from the use of AI.
Moore: Great, thank you both for that answer, as well as all the others. You articulated the point I made earlier: organizations that value customer trust and want to earn it and keep it need to continue to focus on this particular area, look out for the future, stay close to it, and have leadership that represents the voice of the customer. It's a really important issue. Thank you both. This was tremendous.
Kornik: Thanks, Tom, and thanks, Anne and Bailey, for that session. The insights and the conversation were fantastic. Thank you for listening to the VISION by Protiviti podcast. Please be sure to rate and subscribe wherever you listen to podcasts. Be sure to visit the VISION site at vision.protiviti.com for all the latest content about privacy and data protection. On behalf of Tom, Anne and Bailey, I'm Joe Kornik. We'll see you next time.
Anne J. Flanagan is Vice President for Artificial Intelligence at the Future of Privacy Forum where she leads a portfolio of projects exploring the data flows driving algorithmic and AI products and services, their opportunities and risks, and the ethical and responsible development of this technology. An international policy expert in data and AI, Anne is an economist and strategic technology governance and business leader with experience on five continents. Anne spent over a decade in the Irish government and EU institutions, including developing Ireland’s technical policy positions and diplomatic strategy in relation to EU legislation on telecoms, digital infrastructure and data.

Bailey Sanchez is Senior Counsel with the Future of Privacy Forum’s U.S. Legislation Team where she leads the team’s work analyzing legislative proposals that impact children's and teens’ online privacy and safety. Bailey seeks to understand legislative and regulatory trends at the intersection of youth and technology and provide resources and expertise to stakeholders navigating the youth privacy landscape. Prior to joining FPF, Bailey was a legal extern at the International Association of Privacy Professionals.

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.

Panel discussion: Protiviti hosts Forum on the Future of Money and Privacy
In this VISION by Protiviti podcast, we present a panel discussion hosted by Protiviti Senior Managing Director Tom Moore. The discussion was held in New York in November as part of VISION by Protiviti’s Forum on the Future of Money and Privacy with Protiviti partners the Women’s Bond Club and Société Générale, the host of the live event. Tom leads a lively discussion among panelists Heather Federman, Head of Privacy and Product Counsel at Signifyd; Stephanie Schmidt, Global Chief Privacy Officer and Head of Data Compliance (AI and Cyber) at Prudential Financial; and David Gotard, Chief Information Security Officer at Société Générale.
In this interview:
3:40 – Privacy priorities: a financial company perspective
7:10 – How to navigate regulatory complexity
10:40 – Is the momentum around privacy changing in the U.S.?
15:02 – AI, security and privacy: a scrutinizing look
22:55 – Privacy discussions at the C-suite and board level
27:55 – Consumer trust and empowerment
31:49 – Security and privacy in 2030
Joe Kornik: Welcome to the VISION by Protiviti podcast. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive board rooms worldwide. Today, we present a panel discussion hosted by Protiviti’s Tom Moore. The discussion was held in New York City last month as part of our VISION by Protiviti Forum on the Future of Money and Privacy, with Protiviti’s partners, the Women’s Bond Club and Société Générale, the host of the live event. Here’s Tom, kicking off the panel discussion.
Tom Moore: I’m Tom Moore, a Senior Managing Director at Protiviti. I’ve been with the firm just under a year. Prior to that, I served AT&T for 33 years in a diversified career, the last five of those as Chief Privacy Officer. AT&T at that time was very diverse and had TV, entertainment, gaming, you name it, in addition to what is now just the mobile and internet company. I say “just”—it's a Fortune 10 company. I had a great career there, but now I am serving clients across the spectrum as a member of the security and privacy practice, with a focus on privacy.
With that, I'm going to ask each of our panelists to introduce themselves. Heather?
Heather Federman: Hello. I'm Heather Federman. I am the Head of Privacy and Product Counsel at Signifyd. Signifyd is a vendor that helps companies with fraud protection. Our customers are the merchants, but we work closely with the financial institutions as well to help authorize more legitimate transactions and weed out the bad ones. So we sit in that little zone in between the merchants and the banks—an uncomfortable but interesting place, I'll say. Prior to Signifyd, I was at a company called BigID, which deals with data management, data governance and data privacy for enterprises. I've also been on privacy teams at Macy's and American Express. I started my career on the policy side of privacy, so it's always interesting to see what's happening regulatory-wise, and I'm excited to be here today. Thank you.
Moore: Stephanie?
Stephanie Schmidt: Awesome. Good evening, I should say. I’m Stephanie Schmidt. I am the Global Chief Privacy Officer for Prudential Financial. I am also the Head of our Data Compliance organization, which includes building out compliance for cyber and for AI, so it's been an interesting year, as you can imagine. Prudential is a global company with 40,000 employees, helping bring financial wellness across the industry. I’ve been in a number of what I'll call control-partner, check-the-box sorts of roles. I am a recovering auditor, as you can imagine, and have also worked in operational risk and compliance. I'm very excited to be here. Thanks, Tom.
Moore: Thanks, Stephanie. David?
David Gotard: Hi, good evening. I’m David Gotard, and I am the Chief Information Security Officer for Société Générale for the Americas. For those who are unfamiliar, we’re a global investment bank with retail and wholesale banking activities. I've been involved in financial services for the better part of my career. I’ve worked at the big-name banks you can probably think of, mostly on the IT side, and then decided that trying to protect data and systems was the direction I was interested in, so I found myself in this space. I’m happy to be here.
Moore: All right. You can see we've got a tremendous blend of experience here and I’m looking forward to this. We're going to talk about AI, we’re going to talk about regulation, maybe peel back a little bit what it looks like in the C-suite talking about privacy and security, but let's ease into the topic. Panelists, and I'll start with you, Heather, what generally should financial services companies be thinking about in terms of their privacy program?
Federman: For me, I always like to go back to the fair information practice principles. These were created several decades ago and have been codified in various ways through laws, other principle sets and company practices, but essentially they list out the fundamentals of privacy—transparency, individual participation, accountability, security—and a lot of the regulations have adopted them as well.
So, to me it's a very principles-based approach, and each company is going to be very different in what's important and how they apply it. There is no bulletproof strategy for any one financial institution or company; it comes down to your risk profile and how you apply these principles to it.
Moore: Stephanie, would you like to add?
Schmidt: We typically think about privacy risk through three lenses: where your organization is, a data-driven perspective, and the regulatory landscape you have to engage with. As you can imagine, across the areas I support today, AI and privacy have this really interesting intersection where they're competing on things like consent and transparency and upping the game, and then we also think about how aware consumers are. It was really interesting to see the statistics that were just put on the board. But all of this is wrapped around how you operationalize your privacy programs.
So, the controls that have to be in place to support how your company views those three lenses are really important, and they need to be just as far ahead when you think about the digital landscape and data holistically. Gone are the days of manual inventories and the like. We need to be thinking about how to automate privacy controls, similar to how the businesses are doing business with AI. Not looking to put ourselves out of a job, obviously, but the goal is to minimize how much manual effort it takes to comply with the varying privacy compliance requirements.
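As a flavor of what “automating privacy controls” can mean at the simplest level, the hypothetical Python sketch below scans tabular records and builds a small inventory of which fields appear to hold personal data. Production tools (data discovery platforms, catalog scanners) do far more; the shape of the output—a living inventory instead of a manual spreadsheet—is the point.

```python
# Hypothetical automated-inventory sketch: sample records, flag fields that
# look like personal data, and emit an inventory. Patterns and categories
# are illustrative assumptions, not a production ruleset.
import re
from collections import defaultdict

FIELD_RULES = [
    ("email", re.compile(r"@[\w-]+\.\w")),
    ("phone", re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b")),
    ("national_id", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
]


def build_inventory(records: list[dict[str, str]]) -> dict[str, set[str]]:
    """Map each field name to the set of personal-data categories detected."""
    inventory: dict[str, set[str]] = defaultdict(set)
    for record in records:
        for field, value in record.items():
            for category, pattern in FIELD_RULES:
                if pattern.search(value):
                    inventory[field].add(category)
    return dict(inventory)


if __name__ == "__main__":
    sample = [
        {"contact": "a@b.com", "note": "called 212-555-0100", "region": "NA"},
        {"contact": "c@d.org", "note": "follow up Friday", "region": "EU"},
    ]
    print(build_inventory(sample))
    # {'contact': {'email'}, 'note': {'phone'}}
```

Run on a schedule against new data stores, even something this crude turns "do we know where our PII is?" from a quarterly survey into a continuously refreshed answer.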
Moore: Excellent. David, you come at it from a little different perspective—the security side rather than the privacy side. Give us your viewpoint on what financial institutions, given you're part of one, should be doing to protect data from the security perspective.
Gotard: Sure, yes. In the information security space, there’s a similar principle we apply called the CIA triad: confidentiality, integrity and availability. That's really at the center of what a cybersecurity program is intended to protect. So, working in partnership with the data privacy effort to ensure we can provide that type of CIA coverage for personal data is very, very important. We have very similar interests in identifying the data that needs to be protected, ensuring its integrity is preserved, and safeguarding its availability and confidentiality.
Moore: My best friend at AT&T was the chief security officer. We spoke regularly. The two topics are intertwined, and that's why we're here today.
Schmidt: We're best friends already. [Laughter]
Moore: All right. We have already mentioned regulation, and that's an important part of financial institutions, which are obviously heavily regulated. Heather, I'm going to start with you. There's been a flurry of privacy laws: GDPR, which many of you have heard of, came about in 2018, followed by the law in California, the CCPA. Now we're up to privacy laws in nearly 20 states. How do companies keep up with that? What kinds of tools should they have in place to prepare for that changing environment?
Federman: Well, they could start with hiring Protiviti or good outside counsel. [Laughter] But in all seriousness, for each company it's about understanding the particular risk profile for your sector and your company, and also understanding the risk within each region and how those laws are actually enforced. In some places you might have really active regulators poking around a lot. In other places they might bring one or two really big cases a year, because that's the only budget they have.
I think, Stephanie, you can probably speak more to this, but with some of the privacy regulations, at least in America, California's is the only one that touches financial data—and in a weird, kludgy way—and financial data is basically exempt from all the other state privacy laws that are coming out. So again, it goes back to understanding the risk profile across these regions and determining how you're going to apply the various standards across these regulations.
Moore: To that point, Heather, I saw the CFPB came out, I think it was just a week ago, with a report criticizing the states that have passed privacy legislation for exempting financial data. Stephanie, is that the right way for the CFPB to go about this?
Schmidt: My personal opinion, [Laughter] it does make it really hard to think about what your external posture is going to be for your company, right?
I think what we find is that if you look at the data you hold as a company, very often companies overlook their employee data. So I would definitely say go back and look at that, because when you combine where your employees are based globally with where you engage with consumers, prospects or customers, that creates a road map. And I love the principles-based approach you talked about—that's what I would call the baseline foundation of “What are we going to do about privacy?”
So, going back to the three lenses I was talking about, companies have to decide, “What is our external posture going to be?” Even though we don't have to honor individual rights in the U.S. or in other jurisdictions, is that the right solution for our customers or for our employees? Is that who we want to be as a company? Is that the right thing to do?
So, you really have to drive that value proposition with your boards and your senior leadership teams, to help them understand how these strategic initiatives and how furthering the privacy posture of an organization can really make a difference when it comes to sales. Maybe you win that additional client because they understand how important privacy is to you, or because you’ve offered their customers choices about how they're going to engage with you as a company. So I do think it creates a very unique opportunity for companies now.
Moore: A customer-centric approach to privacy versus a compliance-based one. I love it.
Schmidt: Absolutely, yes.
Moore: Stephanie, we’ll stay with you for a minute. We just had an election in the U.S. and obviously a new administration coming in January, changes to the Senate composition as well. Do you see anything happening in terms of the momentum around privacy law in the next few years?
Schmidt: That's a loaded question. [Laughter]
Moore: It is.
Schmidt: Personally—again, it's going to be really interesting. I think we're going to see a lot more states driving change, and from my seat, even though we have a principles-based approach, I'm looking at the operational complexities of how they require us to deploy privacy compliance—things like opt-outs for sensitive personal information. Is it opt-in by design or opt-out by design? Do I have to go in and say, “Yes, you can use my sensitive personal information,” or are you just going to use it and not tell me about it? And then overlay artificial intelligence regulations, where you may need to collect consent to use AI, or tell people that you're using it.
So, it does create this really complex landscape for how you actually operationalize those privacy controls. It's definitely an opportunity to step back and ask: what's going to be our high watermark, how do we go about execution, and what's the value proposition, both externally and internally, for the company?
Federman: Just to follow up on that, though: do you decide to do opt-ins for one state versus opt-outs for another, or just take the strictest-standard approach? Do you honor employee requests only in California because no other state law requires it? Again, it's a determination each institution needs to make on its own, but it's part of that thinking.
Moore: David, the privacy world is not the only one that has seen an onslaught of regulation and laws. Security has as well, especially around notification requirements. Tell us a little bit about how financial services companies should prepare for these regulations, or simply comply with them.
Gotard: Yes, I think our landscape is similar to privacy in that there is a myriad of regulations enacted, differing by jurisdiction—even just within the Americas: operating within the United States, or within a particular state, versus our teams and business operations in Canada and South America. It's a different situation everywhere you turn. But what we've seen over the last 18 months to two years is an ever-increasing focus by regulators on the implementation of existing regulations, as well as rising expectations.
You mentioned the SEC and the requirement to report incidents—quite a controversial element of the regulations as well. If you have a material cybersecurity incident, you need to disclose it so the investing public knows you had the breach. But firms are saying, “Hold on. If I disclose what's going on at that level of detail, it's just going to open us up to more attackers coming in.” So you find this balance being struck between transparency to investors, on the one hand, and providing safety, from a cyber perspective, for the systems they rely on for their financial services.
Schmidt: If I can add to that: it's who do you tell first—because if you operate globally, all the regulators across all the jurisdictions want to know within a certain period of time—and what do you tell them, and how? There's a consistency factor that comes into play, and who makes the decision to notify? From an incident-response standpoint, it's incredibly important to understand who has that decision-making authority, who drafts the language, whether you're talking to the lawyers, and whether you're consistent and logical in explaining why you notified this regulator before that regulator asks. Absolutely.
Gotard: You need a global strategy if you're operating in that type of landscape.
Schmidt: Yes.
Moore: From Heather, David and Stephanie, we heard about decision points. Do you apply one approach universally? That might be costly—you might be extending rights to consumers who aren't entitled to them by statute—but a single approach can be easier to operationalize. It's a tough decision for enterprises, because with one-size-fits-all you're subjecting yourself to the strictest common denominator of international law. But it's a great point.
We’ve talked about AI a couple of times. We're going to spend some time on this, and if there are questions from the audience, put them in the slide there and we'll get to them later. David, I'm going to stay with you for just a second. Artificial intelligence is becoming increasingly critical to all of our operations and can help, but as we saw from the survey data, there's either hubris or magical thinking about what it can really do and the harm it might cause. Give us your perspective: is artificial intelligence a help in the security world?
Gotard: That's a great question. Yes, this seems to get a lot of attention these days. Like every new technology that gets introduced—cell phones, the internet, video conferencing—depending on someone's motivations, it can be used to advance things or become a weapon against an institution or a government, and I view artificial intelligence as just another evolution of that game. The arms race of leveraging the tool for your own purposes while protecting yourself against misuse and abuse of the technology is at the forefront of everyone's mind with artificial intelligence.
You mentioned earlier—I'm sorry, Joe mentioned this earlier—how quickly threat actors can move relative to regulators and even institutions. They are not hampered by the constraints we have; they are very nimble in how they operate. So I do expect, and we already see, the use of artificial intelligence as a weapon against institutions—to exploit vulnerabilities and to gain footholds through advanced social engineering attacks. Some that have hit the newspapers have been quite shocking in how effective they were, even against people who knew social engineering attacks could be perpetrated this way and still fell victim. Then there's the use of it internally, both as a counterweapon against the threat actors and as a business-enabling tool. That's where the next phases of this are going.
Moore: Stephanie, AI, net good or net bad?
Schmidt: I think it depends on the day. [Laughter] Everything we see in the news is doing one of two things: either scaring people, so they're afraid to engage with the technology, or saying it's not a big deal, just an incremental build. I align more with its being an incremental increase in risk to all of the different control functions.
Think about how you engage with third parties, and about information security, cyber and privacy: across the industry, privacy subject-matter experts are seeing an increase in the sheer volume of use cases coming through. As you think about what it takes to operationalize and assess privacy risk in the AI capabilities your company is investing in, that drives a significant increase in the time required from your privacy teams. The same goes for the security perspective: all of the control partners now need to review whatever that use case is before it can be deployed. So things like data flows and inventories are more and more important.
So, I go back to my original point: you have to automate your privacy controls and your security controls to keep up with the evolving technologies, so that your control partners can step back and advise directionally and strategically on where the organization needs to go. That would be my point.
Moore: I think everybody in financial services understands risk and risk analysis. You mentioned assessments. Tell everybody a little bit about what privacy folks do behind the scenes in a privacy impact assessment.
Schmidt: Sure. If you think about the data lifecycle—collection through disposal—there are a lot of integration points we have to review, and there are regulatory drivers for why we complete privacy impact assessments as well. But at the end of the day, whether it's a third-party relationship, a new technology or an AI capability, you have to understand how it's going to impact your business.
Looking at the data flows specifically—even for AI—you have to look at how you collected the data. Did you purchase it? Did someone give you consent to use it? Is it a secondary use? What's going into the model? Is the model being trained on that data? And on the back end, is the system collecting new data from customers, as in a chatbot situation? It's a full lifecycle review, and those privacy impact assessments help assess the level of risk and determine what controls need to be in place.
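Those lifecycle questions translate naturally into a structured checklist. Below is a hypothetical Python sketch of a tiny PIA record that captures them and derives a rough risk tier; the fields and scoring are illustrative assumptions only, since real assessments weigh far more factors and always involve legal review.

```python
# Hypothetical privacy-impact-assessment record mirroring the lifecycle
# questions above. Fields, weights and tiers are illustrative only.
from dataclasses import dataclass


@dataclass
class PiaRecord:
    use_case: str
    data_purchased: bool        # Did we buy the data?
    consent_obtained: bool      # Did the individual consent to this use?
    secondary_use: bool         # Is this beyond the original purpose?
    used_for_training: bool     # Does the data feed model training?
    collects_new_data: bool     # Does the system gather new data (e.g., chatbot)?

    def risk_tier(self) -> str:
        score = 0
        score += 2 if not self.consent_obtained else 0
        score += 2 if self.secondary_use else 0
        score += 1 if self.data_purchased else 0
        score += 1 if self.used_for_training else 0
        score += 1 if self.collects_new_data else 0
        if score >= 5:
            return "high"    # e.g., full legal review before deployment
        if score >= 2:
            return "medium"  # e.g., targeted controls and sign-off
        return "low"


if __name__ == "__main__":
    chatbot = PiaRecord("support chatbot", data_purchased=False,
                        consent_obtained=True, secondary_use=False,
                        used_for_training=True, collects_new_data=True)
    print(chatbot.use_case, "->", chatbot.risk_tier())  # support chatbot -> medium
```

Encoding the questions this way is what makes the triage automatable: low-scoring use cases can flow straight through, while high scores are routed to the humans.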
The automation of those privacy controls helps offset the manual effort around those impact assessments, but it will never fully eliminate it. For example, a lawyer still has to look at privacy notices to determine whether we've told people how we're going to use their data, whether the AI use is included in the notice, whether we're collecting the right consent, and whether we have contractual agreements in place or are relying on terms and conditions. All of that matters now more than ever, with the introduction of generative AI and artificial intelligence more broadly.
So, I think that's where companies are struggling to say, what is the incremental risk that this presents to my organization based upon how we want to use AI, and then ultimately, are we willing to accept that risk, or what level of control partner deployment do we need to put against that risk?
Moore: Heather, I hear a frequent question from clients and others: how can AI be used internally to help us comply with all these laws and regulations? Is there some way it can be deployed to assist in the compliance effort?
Federman: Well, I'm sure you're going to find a lot of vendors trying to sell you on their AI solutions for compliance, privacy and security. That’s already starting to happen, and it's definitely going to explode in the next year because AI is the big buzzword. But to add: AI is an umbrella term. My company has actually been using machine learning technology for the last decade. Machine learning falls under the umbrella of AI, but it's generative AI and large language models, which exploded in the last year, that are creating a lot of the hype today.
So, to start, it's important to understand what type of AI is actually in play and what we're trying to help with or moderate. There are some really interesting solutions out there—even just from a research perspective AI can be helpful—although one of the risks with generative AI is making sure that research is real and not just hallucinations, as we've seen a few times. So it's really about understanding what's being sold to you, and that goes for any solution.
I would also add, as a side note: Ben Affleck took part in a talk the other day about AI for entertainment purposes, saying AI is never really going to replace movies or the artist. I really liked some of his comparisons and the way he thinks about the risks for his industry, so I'd recommend it as a good two and a half minutes of your time, and as a way to think about AI's pros and cons. It can be used—I think, for the three of us, we'll probably be asking a lot of questions when we're assessing the [Audio Gap] actually doing—again, do a little bit more research when you hear the buzzwords around AI.
Moore: Excellent. Let's move on to the C-suite for a second. I believe members of this audience are C-suite members already, or are certainly aspiring to be, and may be interested in what happens in the C-suite around discussions about privacy and security, maybe even at the board level.
Stephanie, let me start with you. I presume you've presented to your C-suite, your CEO, maybe even the board. Can you tell us about that experience? What data did you present? What questions did you get asked? Pull back the curtain a little bit.
Schmidt: Yes. I would say generally, in my experience, you have to answer the “So what?” question. Most boards and senior leadership teams want to understand, at the end of the day, the impact on the business, on revenue, on customers. So jump to that point and work backwards to build out the value proposition we talked about before. Is it going to enhance the brand? Is it going to enhance trust with customers or employees? What is the story or narrative you can draw the thread through, so they can follow how and why you're building out your program the way you are?
Trust me when I say it's easier said than done. Even with the number of years of experience this panel has, we still struggle, because there are seat changes, expense pressures, things like that. So, just like in any other role, you're constantly having to retrofit your perspective and how you think about maturing your program based on the environment around you.
Moore: David, any differences in the security world with respect to how you talk to your C-suite?
Gotard: More similarities than differences, for sure. The “So what?” factor, whether it's business impact, regulatory impact, or customer trust impact, that's really what they want to know at the end of the day, whether they're talking about a cybersecurity risk or a data privacy risk, if I can speak on your behalf. Trying to translate the very complex, and in my case sometimes very technical, elements of a cybersecurity risk into something that a senior manager at the organization or a board member can relate to is an enormous challenge.
It's something we struggle with all the time, given the changes that go on in the environment, but also within management and even among board members. Who presented before you, and what stage did they set with that audience? Did the audience respond well? Are they looking for changes? Connecting with members of the board on the side, to understand what they see working, where they would like more information, and where they would like less detail, is a way to shape that message in a manner that resonates with them.
Moore: Heather, any experience with that?
Federman: I would just add to what David was saying: it's understanding what those C-suite or board members really want and how they process information. Assuming you are a board member or in the C-suite, or on your way there, my expectation is that you're not going to want to know all the details of how many PIAs we filled out or contracts we've reviewed; you're really going to ask what the key things are that you should know in order to make the right decisions. That's typically how I like to frame those conversations. Unless we're asked those questions, we'll be prepared with the details, but typically I would expect that you want higher-level, strategic thinking around these things.
Moore: Yes, that's exactly right. In my experience, I've also seen the C-suite and board typically ask: what are others doing? Our competitors, companies of similar size, what's in their tool set? What are they doing? What do we need to be on alert for? That's a tough one because, as our panelists know, there's not a lot of great benchmarking out there about the size, structure, and activity levels of these organizations. Stephanie, I see that may resonate with you.
Schmidt: Yes. The benchmarking is key, but don't just do it within your industry. Do it across industries, for organizations of your size with your global footprint. The other piece I would add is: align with your strategic partners. Leadership should not be hearing a very different message from your auditors, or your risk partners, or your compliance partners, or your legal partners, when asked about the maturity of your programs.
So, regardless of what seat you sit in, if the head of our records organization goes in and talks about the same things I do, the capability to de-identify data, automated data discovery, data governance holistically, the need for all of that, it's just going to further my cause. Getting together and drawing that thread through those control partners who are like-minded and can carry your message for you is really critically important.
Moore: A few times we've heard the words “customer trust.” Stephanie, we talked earlier about a customer-centric approach to privacy versus a compliance, legalese-based one. Look, consumers are becoming more and more aware of their privacy. I venture to say each and every one of you cares a lot more about it now than you did just a few years ago; survey data says that as well. Heather, let me go to you first. Is there a way for financial services companies to empower their consumers to take control of their data, and to help them through that process?
Federman: It's thinking about privacy and security choices the same way you think about the other choices you're giving consumers: you're making it a more seamless user experience. We don't want to have to go through five, six, seven different settings to opt out of a data use, or whatever it is. We want it to be easy.
We want those controls to be as easy as the settings for everything else, and some platforms are really great at doing this and some are really terrible at it. This might sit more on the product management side of the business, but it's definitely a really key area, and it's something regulators are also paying attention to, because they look for things like dark patterns, where you make it harder for those choices and those opt-outs to occur.
One example that occurred in the wild recently: PayPal came out, I think at the end of October, and announced that they are updating their privacy notice to allow user data, for any of you who use PayPal, to be shared with merchants in order to do more inference, personalization opportunities, and things like that. But, hey, you have at least a month, because I think the change actually occurs at the end of this month, we're in November now, and you can go into your settings. They made it relatively easy to go in and say, “No, I don't want to share my user data with merchants.”
So even though there was some media coverage asking, “Why are they sharing data in the first place?”, I actually thought it was pretty great of PayPal to say, “We're making this change, we have the right to do so, but we are also giving our user base, our consumers, the right to opt out, and we're giving you a month's notice to make that choice,” and, again, to make it relatively easy. That's one example of a great way to think about these choices, and about the decisions your company and your business will want to make in the future.
Moore: Stephanie, thoughts about empowering consumers?
Schmidt: I'd put that at the bottom of my to-do list, but now it's back on top. [Laughter] I think the biggest piece there, and you're right about the complexity, is that simple is usually better; more is not always more. You tend to get lost in the choices if you're not really able to articulate the drivers, the why behind a particular activity.
So, to your point about PayPal, I think the biggest piece is: were they able to articulate a value proposition for why they're doing what they're doing? What are you as a consumer going to get out of this sharing of your data with merchants? Are you going to be open to more opportunities, or perhaps coupons or discounts from the vendors you typically engage with?
For me, you might look at that and go, “You know what? I'm always dealing with this particular shoe company, and I absolutely want discounts and deals from them, so I'm going to share my data with them.” But another consumer may step back and go, “No,” because they don't want to share that data. That's the creep factor, right? It's a technical term, the creep factor; it's used a lot in privacy, but it's true.
So, you go back to simpler is better: do your notices clearly articulate your practices, and when you do change those practices, are you able to articulate the value proposition to the customer, or the employee, or whoever is impacted by the change?
Moore: So that we have time for Q&A from the audience, I'm just going to ask each panelist one more question: put your future-looking hat on. David, I'm going to start with you. It's 2030. What does the security landscape look like, in terms of law, regulation, consumer awareness? You saw the survey data earlier. Do you think the execs have it right, that everything's going to be fine and we're going to take care of it? What are your predictions?
Gotard: I think we have some challenges ahead, for sure. As technology advances and we become more interconnected through our digital data and our commerce, the headwinds we face to secure things only keep growing. So, it's incumbent on all of us, as executives and as consumers, to face that head on: be aware of what's going on, and try your best to navigate the landscape that's coming with artificial intelligence and, I think, other technologies on the horizon. If I look to 2030, it might be quantum computing, which is a transition we'll have to make to ensure we can maintain the confidentiality of all of our data, our sensitive personal information as well as the other information we use. That is definitely something I think is going to hit us in our lifetime.
Moore: Excellent. Stephanie, your predictions for the future: privacy law, consumer expectations?
Schmidt: I know. I think we in the U.S. have historically been a bit behind in how we think about protecting our privacy. We typically connect it with our financial accounts, and we wouldn't necessarily connect it with the 23andMe survey we did online. So, I do think we have a bit of catching up to do, but I think it's happening very quickly.
Myself, I have a stack of breach notifications at any given point, and it's scary. So, going back to how we, within compliance and privacy, start to better automate that: you've got to do it by design. Your controls and how you operationalize your programs have to keep up with the technologies and with the volume of data you process. To me, that's the biggest thing we're focused on in thinking about data governance and management more broadly.
Moore: Heather?
Federman: I have two. One, and I don't like this prediction, but I unfortunately think there's going to be a major cyberattack on some form of infrastructure, like our water supply or electricity grid, so I'm hoping the security folks working there are paying attention. David, if you can [Laughter] talk to your friends over there.
Gotard: We're on it. [Laughter]
Federman: The other one I'd say is more of a legal one. [Audio Gap] The EU is known to be very regulatory-heavy. You have GDPR, you have the AI Act, you have the Digital Services Act, the Digital Markets Act; I honestly can't keep track. So, I'm waiting for the day when a major multinational company will just say, “We're over this, and we're pulling out.” I mean, it's 2024, so we've got six years, and I've already heard some rumblings that this might happen, at least with certain companies; I've tried to poke a few friends at various tech companies about it. I'm waiting for that day, because I think at some point you have to say, “Enough is enough. We're tired of these fines. We're tired of having to create a whole different architecture and system for one region. Let's just get out of here.”
Moore: I actually subscribe to that; I believe you're right there. I'll answer my own question. I think you'll hear the term “data minimization” a lot more than you're hearing it today. In the privacy and security world, minimization means collecting only what you absolutely need, and what you've promised customers, to fulfill the service, not collecting a vast amount of other data because you might use it in the future, or because it's nice to have, or because it creates an opportunity to monetize something, somewhere, somehow. Minimization is going to be enacted in law. It is already a focus of the FTC, for those of you who have heard of that organization, and I think other regulatory bodies across the globe will be pushing hard on data minimization.
There's a business case for it in the corporate sector as well. Look, data is costly: cost to store, cost to transport, cost to move. It is an exposure. Companies that are breached holding data they're not even using, data that's 10 or 20 years old, have just increased the blast zone for bad actors and the potential for fines. Then there's effectiveness: when organizations have data in repositories all over the place, it's hard for the analytics folks to find the right database at the right time, the one source of truth.
So, I think minimization is something companies will want to pay attention to, building it in by design and making sure they're getting ahead of the regulatory environment. We talked about consumer expectations; I think consumer expectations around minimization are going to be there as well. As Stephanie said, I will willingly give you my information in return for value, but I'm not going to give you a bunch of stuff that you don't even know what you're going to do with right now. That's my prediction for the future.
Kornik: Thank you for listening to the VISION by Protiviti podcast. Please rate and subscribe wherever you listen to podcasts, and be sure to visit vision.protiviti.com to view all of our latest content. Until next time, I'm Joe Kornik.
As the Chief Information Security Officer (CISO) for Société Générale in the Americas, David Gotard is responsible for managing SG’s regional information security and cybersecurity compliance program. David has strong technical expertise, an extensive background in financial services, and significant experience in information security. Most recently he served as Head of both Equity Derivatives and Commodities Technology for Société Générale in the Americas. Previously, David held senior IT Management positions at AllianceBernstein, Bear Stearns, and JPMorgan Chase.

Heather Federman is the Head of Privacy & Product Counsel at Signifyd, a leading e-commerce fraud and abuse protection platform. In this role, Heather leads the development and oversight of Signifyd’s privacy program, compliance initiatives and AI governance. Prior to joining Signifyd, she served as Chief Privacy Officer at BigID, an enterprise data discovery and intelligence platform and was also Director of Privacy & Data Risk at Macy's Inc., where she was responsible for managing privacy policies, programs, communications, and training.

Stephanie Schmidt is the Global Chief Privacy Officer and Head of Data Compliance (AI and Cyber) at Prudential Financial. In her role, Stephanie provides strategic guidance around the governance and application of privacy risk management strategies for Prudential’s global operations. Previously, Stephanie held various positions across other control partner disciplines in internal audit, risk management, and financial management.

Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.
