NY Comptroller: If COVID can’t kill a city, can it make it stronger?
Thomas DiNapoli is the 54th Comptroller of New York, a cabinet officer of the state of New York and head of the New York state government's Department of Audit and Control. As Comptroller, DiNapoli is the State’s chief fiscal officer ensuring that state and local governments use taxpayer money effectively and efficiently to promote the common good. Employing more than 2,700 people, the office’s responsibilities include serving as sole trustee of the $254.8 billion New York State Common Retirement Fund, one of the largest institutional investors in the world; administering the New York State and Local Retirement System for more than one million public employees and more than 3,000 employers; administering the State’s approximately $16.7 billion payroll and overseeing the fiscal affairs of local governments, including New York City. In 1972, DiNapoli became the first 18-year-old in New York state to hold public office when he was elected a trustee on the Mineola Board of Education. In 2007, DiNapoli was elected State Comptroller. He was re-elected Comptroller by New York’s voters in 2010, 2014 and 2018. Joe Kornik, VISION by Protiviti’s Editor-in-Chief, sat down with DiNapoli in May to discuss New York City’s future.
Kornik: I’d like to start talking about how COVID-19—and the economic crisis it’s caused—has the potential to alter a city’s finances for a long time. Now that we’re nearing the end, how’d we do?
DiNapoli: Well, I certainly think compared to where we were a year ago, we've done much better than any of us could have imagined at the time. When you think of the depths of the economic fallout from COVID and the severe job loss, it was devastating from an economic point of view. And New York City was the first and the hardest hit of the U.S. metropolitan areas. We experienced a severe spike in unemployment and a severe drop in sales tax revenue, and I think everybody was expecting the worst. So here we are about halfway through 2021 and we’ve seen the picture improve in terms of unemployment and sales tax revenue, but we’re certainly not back to pre-pandemic levels. The big game changer for the city was the support that came from the federal government and the American Rescue Plan Act of 2021. The change in the presidency, the change in the Congress and certainly Chuck Schumer as Senate Majority Leader were all big factors helping lead the city through the crisis: We’re actually on target to end the year with a surplus. That doesn't mean there still aren’t major concerns, but it’s a much better picture from where we thought we’d be a year ago.
Kornik: Honestly, that’s more optimistic than I expected. It seems like there are so many headwinds in terms of lost tax revenue, unemployment, real estate and other factors to consider.
DiNapoli: You know the employment numbers are still going to be off and revenue numbers are going to be off, and the property tax loss is significant—the city's projecting the highest drop in property tax collections in its history. And we’re concerned that may continue well into the future. In terms of real estate, that depends a lot on how business moves forward with bringing people back to the office. There's still a lot of uncertainty, but one of the bright spots has been the resilience of financial services. When the markets tanked in March of 2020, everybody thought Wall Street was going to tank, too. But it didn’t; bonuses were up, and that has helped maintain an important part of the city's revenue. So, that’s been a big key to financial stability. I’m optimistic. I was in Manhattan recently and there's more street traffic than I've seen in many months, and people seem to be returning to work and the office. And maybe we’re starting to get some day-trippers? I don't think we’re getting very much overseas tourism yet, but we’re all watching tourism because it’s so vital to the city’s overall economy. But even as Broadway starts to reopen and restaurants continue to come back with the help of federal support, the pace of the recovery is so important to the future of the city’s finances. So, we’re keeping a close eye on all of this. We’ve done a series of reports on the retail sector, the restaurant sector, the hospitality sector, the tourism sector and the forecasts are still way off. But if this recovery is slower than anticipated, we could be dealing with a lot of tough choices sooner rather than later.
Kornik: I know some of the biggest challenges are imminent, but if you were to focus a little farther out—maybe even something the next Comptroller will have on his or her plate a decade from now—what comes to mind?
DiNapoli: Well, first I would point out that I still have many years to go to beat Arthur Levitt’s run of 24 years as New York Comptroller. A decade from now, I could still be Comptroller… now, I’m not announcing anything, I assure you. But if we’re looking a decade out, one of the key dynamics is: will New York City still be a place that attracts young, talented people—in the arts, or technology or the financial sector? Even pre-pandemic, there was always a concern about the out-migration of established upper-income New Yorkers, but I think we probably need to focus more on the migration of some of those younger, talented people who are on the verge of launching their careers and perhaps settling down and raising a family in the city. Because of this pandemic, we might have lost some of them. So, if we want New York to continue to be a vibrant, wonderful place 10 years from now, we’ve got to make sure we’re focusing on that next generation. That really speaks to some of those factors I was talking about earlier, safety and employment. Businesses will need to adapt to a new reality, even if that means a hybrid model of remote and in-person work—they need to be mindful of how younger people want to work. I do think if we address some of those broader issues, and if we focus on the next generation and make sure we’re not losing them, the city has the potential to be stronger than ever in 2030. Look, New York has come through many crises over the years, including a pandemic, by the way. And history says we always end up better, not worse.
Kornik: Do you suspect that will happen again?
DiNapoli: Right after 9/11, there was nothing going on downtown. Now, lower Manhattan is humming in terms of business activity, but it's also become a residential community. Much more so than it ever was pre-9/11. It’s better than it was. And I think when we look back on this time a decade from now, there will be lessons learned and things about New York City that are better than they were pre-COVID. I'm very positive about what New York will be 10 years from now. And while it’s always difficult to look that far out, our history as a city says, almost without fail, that we’re better than we were the decade before. So, I have every reason to think that we’ll look back on this time as a big turning point to a better New York City.
Joe Kornik is Director of Brand Publishing and Editor-in-Chief of VISION by Protiviti, a content resource focused on the future of global megatrends and how they’ll impact business, industries, communities and people in 2030 and beyond. Joe is an experienced editor, writer, moderator, speaker and brand builder. Prior to leading VISION by Protiviti, Joe was the Publisher and Editor-in-Chief of Consulting magazine. Previously, he was chief editor of several professional services publications at Bloomberg BNA, the Nielsen Company and Reed Elsevier. He holds a degree in Journalism/English from James Madison University.
Future of Privacy Forum CEO Jules Polonetsky on “exciting but risky” road ahead
In this VISION by Protiviti interview, Protiviti senior managing director Tom Moore sits down with Jules Polonetsky, CEO of the Future of Privacy Forum, a global non-profit organization that serves as a catalyst for privacy leadership, to discuss how business leaders can navigate a tricky road ahead for data security and privacy. For 15 years, Polonetsky and the FPF have helped advance principled data practices, assisted in the drafting of data protection legislation and presented expert testimony before legislatures around the world.
In this interview:
1:15 – Why the Future of Privacy Forum?
2:50 – What should business leaders focus on in the next five years?
7:02 – How is the head of privacy role evolving?
12:58 – GDPR and the fragmented state of U.S. regulation
14:00 – Looking ahead to 2030
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m excited to welcome Jules Polonetsky to the program. For 15 years, Jules has been CEO of the Future of Privacy Forum, a global non-profit that serves as a catalyst for privacy leadership, where Jules has helped advance principled data practices, assisted in the drafting of data protection legislation, and presented expert testimony before legislatures around the world. He is an adjunct faculty member for the AI Law and Policy Seminar at William & Mary Law School. Jules will be speaking with my Protiviti colleague, Senior Managing Director Tom Moore. Tom, I’ll turn it over to you to begin.
Tom Moore: Great. Thank you, Joe. I couldn’t think of anybody better to talk about the future of privacy than Jules Polonetsky. Jules, I’m so happy you’re joining us today. Thank you.
Jules Polonetsky: Delighted.
Moore: If you don’t mind, tell us a little bit more about the Future of Privacy Forum, its history, what you’re working on today?
Polonetsky: We’ve been around for about 15 years, and our members are, generally, the chief privacy officers at 200-plus organizations—the people who are really trying to grapple with the fact that the organizations they lead are driving the AI agenda, whether it’s big tech companies or startups, or banking, or car companies, right? Everybody is challenged by the fact that the pace of how data is being used is accelerating, and the norms of what’s right, what’s legal, what’s creepy, what’s innovative and exciting to users are rapidly developing, and developing everywhere in the world. So we work with those folks to create best practices and standards, and try to support reasonable legal rules and structures around it.
We do that as well with the leading policymakers because it turns out they’re busy. They’ve got a big agenda. They’re trying to deal with wars around the world, the economy, all the challenges that legislators and government leaders grapple with, and they want and need support. They want to know which country is doing it right. What can we learn? Where are their mistakes? How does this technology work? So we try to work as the pragmatic voice, optimistic about tech and data, but quite aware that things can and will go wrong if you don’t have clear guidelines, clear landmarks for how organizations can be responsible as they roll out new data-powered products.
Moore: Excellent, and congratulations on 15 years of the Future of Privacy Forum. Let’s talk about it. You obviously have your finger on the pulse of the world of privacy, so what do you think are some of the biggest issues over the next five years? If you’re a business leader, a leader of an enterprise, a regulator, what should you be thinking about? What should you be focused on to get prepared for the next five years?
Polonetsky: You know, the easy answer is to immediately talk about AI, but before we go to AI I think it makes sense for us to pause for a second and recognize that it’s only in the last few years that you’ve been able to assume that almost everybody, in at least any decent, advantaged, progressing economy, has a mobile phone, probably has a smartphone, probably has apps, probably is connected to people via some sort of social media or WhatsApp group or the like. The world has started hurtling forward, part of it a COVID world, where suddenly we all got comfortable doing things over video conference. We became a small world where people are connected, which means that the good and the bad things that happen around the world immediately reverberate. It means the bad actors can do their work from every part of the world and can develop sophisticated, complicated organizations and have sort of teams and levels of different delegated services that they can use as they deal with organizations.
So we’ve moved to this super connected, super immediate, sort of 24/7 world where users can create a giant alarm, sometimes correctly, sometimes incorrectly, when they think that your organization is doing the wrong thing, and it immediately is driven into the media because the media seem to spend a good chunk of their day following what happens on social media.
Those changes are only going to accelerate, but we’re also seeing the backlash, right? People who are just feeling burnt out because they were locked up at home during COVID, and they didn’t get to go out, and now they’re still gaming all day and all night, and they’re still connected. All the business tools are pinging them, not just on email, but on Slack, on Teams, and all these tools. Being ready and thoughtful and structured enough to navigate this incredibly frothy, turbulent world—and then let’s talk about AI, where suddenly the investments are moving so quickly that the policy concerns are being left temporarily by the wayside, right? Who would have imagined that we’re rolling out products and we say, “Well, actually, they don’t work a lot of the time, but when they do, they do these incredible, really cool things, except they can’t be fully reliable, and yet we’re relying on them for incredibly important processes like interacting with our customers.”
So, for a long time, the problems of our current generative AI tools were well-known, and you had leading companies saying, “Not yet. We don’t know the answers yet to how we’re going to put out stuff that isn’t reliable but can do super cool things, but actually also might be discriminatory, right?” For better or worse, the dam burst and everyone, from the most conservative organization to the wildest startup, is rolling out stuff that comes with lots of risks.
So that’s the world we live in. Chief privacy officers and legal and compliance folks suddenly need to go from a careful, measured world where they do assessments, and they consult, and they discuss, and they give advice and the business accepts the advice, to a place where people are rolling things out that are purchased from vendors who’ve purchased from vendors and putting it out in the market. So we are in an exciting, risky moment—exciting because really cool things are happening, but I don’t know that we’ve ever seen as much risk or drama. And guess what? The media are super interested because it’s about AI. So it can be the silliest flap and suddenly it’s front-page news.
Moore: You mentioned chief privacy officers, heads of legal, heads of compliance. They’re at the forefront of all this. The roles continue to evolve with AI and other technologies. Tell me about what you see as the primary role of the head of privacy within a large organization.
Polonetsky: You know, I see two trends. This is really a role that’s in flux. There is one trend—maybe it’s a negative trend or maybe it’s just the way of the world as laws and policies become established. When I first became a chief privacy officer many, many years ago, it was a novel title, and it wasn’t the highly regulated companies that had the most senior executives in these roles. The banks had regulation and structures and lines of defense and had dealt with it for years. HIPAA, the health privacy and portability law, was in place, and organizations had structures around that. It was the startups, the internet companies, the ad tech companies who didn’t have detailed legislation, at least not in the U.S., but who were running into all of these explosions of concern, or the data companies who were suddenly able to do so much more than just send you targeted mail, who needed senior executives navigating the nuances of, “What do the consumers really want and what is civil society saying? They’re making a fuss about this. And what about regulators who want to support the internet and want to support these new business models, and who are very excited to come up with new laws and rules? And what about our customers who need to understand what we’re doing with their data in ways that we’ve never used their data before?”
Here now, we’re in a world that’s become far more regulated. We’ve got all these state laws in the U.S. now. We’ve got AI laws. We have privacy laws. We have global data protection regulation not just in Europe, which has been a leader and has been mature, but almost every other jurisdiction. We’ve got a team in Africa. The countries across Africa are rolling out data protection regulation. South America, the big economies, India, right? The most giant economies, China, all have new data protection regulation and, now, new AI regulation. So for some companies they’ve said, “We don’t need the drama. We know how to do compliance. We worry about all kinds of compliance issues.” Some companies are rolling these roles into compliance and perhaps eliminating this sort of executive type role. Other companies are going in exactly the other direction. They’re looking at the challenges of AI, which are not only about privacy, but start with, in many cases, personal information that’s collected and used and already regulated by data protection law. Even automated decisioning is already regulated by data protection law. So, some companies are recognizing that here’s this incredibly strategic area, who is going to help us shape what are very nuanced decisions about not only how to deal with complying with laws— “Hey, we’re now going to use video and improve it and your face is involved, and our customer’s data is involved, and we’re going to read their confidential information to create better tools that serve them. But, boy, they better trust us and trust the output.”
We see multiple layers of regulation, for instance, in Europe, where not only do we have privacy law, not only do we have AI law, but we have new kinds of competition laws. New laws that force you to provide data to your competitors. New laws that force you to provide data for researchers. So, we see a number of other companies saying, “Digital governance has become really complicated and we need somebody or some team managing the envelope of restrictions that exist around how we use data.”
So we’re at an inflection point. We may, over time, see some of this absorbed into the legal and compliance structure of the organization, but I think we’re also seeing a whole new breed of folks who are stepping up from data protection to a broader scope, whether it’s AI, whether it’s perhaps digital governance, perhaps it might be ethics. That’s where it’s going.
Moore: Excellent. So speaking of that broader scope, talk to the privacy community, the privacy leader, chief privacy officer, or other title. What do they need to do to prepare themselves for this environment to grow into those broader responsibilities?
Polonetsky: I love telling some of my colleagues and friends in data protection that they spend too much time on data protection. By that, I mean there is so much. I mean you can’t stop. There’s a new law. There’s a new regulation, and California keeps rolling out new changes. The Europeans keep interpreting and reinterpreting. So you can really spend all your time keeping up with the incredible rush of details. But the reality is, guys, people, gentlemen, ladies, all of you, you know how to do that. There might be a nuance, there might be an item to deal with, but you know how to read legislation. You know how to do compliance. What’s changing super-fast is the way your business, the way your sector, is using data. Things that were norms are now changing. Things that the platforms are doing for their business that affect your business are changing. Spend more time, please, legal, compliance, ethics, privacy people, being gurus of how data is being used, because that’s going to help you ask the smart question. You ask your legal assessment question, you’re going to get your legal assessment answer. Understanding who your partner is, what their business goals are and how they’re really planning to use data gives you the opportunity to ask much more probing questions that answer what you need to know.
Moore: Earlier, Jules, you mentioned Europeans, the GDPR. They’ve obviously invested quite a bit in legislating, regulating, enforcing the data protection for European citizens. Are they striking the right balance? Related question, what lessons can the U.S. learn should we ever get to national privacy law in the U.S.?
Polonetsky: GDPR, I think, is a very thoughtful document. The European legal process is a challenging process. It’s not one country. It’s a union. My hope is that we will move in the U.S. to regulate quickly around AI and data protection. Even if it’s not perfect, I think businesses need the certainty. They need a level playing field, and then they’ll compete. If anything ended up being too restricted, then we can go back and debate it. Right now, I think we’re suffering from a gap, tools being rolled out, and the law is sort of catching up in a way that may end up being quite challenging.
Moore: So let me put you on the spot, turn that hope into a prediction. By 2030, do we have a U.S. national privacy law or do we still have the state patchwork, federal agencies regulating, state agencies regulating?
Polonetsky: By 2030, I think the answer is easily yes. By next year, the answer is, “That’s going to be hard to say.” You know, it took the Europeans seven years to build out GDPR. And mostly, 70-80% of GDPR was already in the UK’s data protection law and German data protection law. They didn’t start with a blank slate. We’re talking about regulating a huge chunk of the U.S. economy. That’s complicated. It ought to take a while. I think Congress is in this period where they’re struggling through understanding the complexity of what it takes. So, you know what? Although I’d like them to do it now so that the states don’t all go do disparate things, it’s going to take them some time. They should take the time, but they need to do a bit better job of really getting thoughtful and smart, and there are hard issues that need to be debated by critics, and business, and researchers and so forth.
Moore: So Jules, on a couple occasions today, you’ve expressed optimism or hope. Let’s go the other route for just a second. What if we don’t get this right? What if national law, thoughtful and smart, doesn’t come into play by 2030? What could be the consequences of not getting this right?
Polonetsky: I don’t think we have a choice not to get this right. The not getting this right, perhaps, is doing it very piecemeal. My home state of Maryland has passed a very strict state privacy law that doesn’t have any greater flexibility for research. Could they really have intended to make things very, very complicated and hard in the home of the National Institutes of Health and leading universities and so forth? Could they have intended to do that? So, I think we could have inadvertent, complicated mistakes, complications of multi-state compliance that cost money and cost time and probably don’t add any value.
So I think we move slowly and haphazardly if the world is state laws, if the world is regulation by crisis and pushback. We end up not being trusted to use the most robust forms of data that we actually do need. We need data about sensitive populations to identify where discrimination may be taking place, where people are not getting access to health facilities. So if state laws make me worry about collecting any sensitive data, which many of them do with minimization or opt-in requirements, then it’s too risky. I don’t collect that location data, and that’s fine. We’ll protect some people who won’t get targeted by ads or who won’t have sensitive locations exposed, but we then won’t have the data that the CDC needs to understand how a pandemic spreads. We won’t have the information needed to know how students travel to school, or traffic information. So we’ll end up in a world where we progress, but with drama, with regulation by Twitter and media headline and class action litigation.
We need the certainty of a level playing field, as imperfect as laws will always be, so that we can actually move forward rapidly, particularly around AI where there are huge debates. We need to decide, is it okay to suck up all the data from the public internet? Well, you know what? Maybe it’s public data, but maybe we didn’t actually intend this when we hammered out the IP rules and the copyright rules, and maybe we want to think about what the right balance is. If not, it’s the courts that are going to decide it. Let’s decide it with good, thoughtful public policy.
Moore: Jules, this has been fantastic. You shared an incredible amount of information, breadth of both concern but also optimism. I’m thrilled that you joined us today. Thank you for your time and hope to see you again soon.
Polonetsky: I am indeed optimistic despite, I think, all the drama. Exciting things are happening with data. We just need to get the guardrails that can help us drive quickly, safely.
Moore: Great, thank you. Back to you, Joe.
Kornik: Thanks, Tom. And thanks, Jules. And thank you for watching the VISION by Protiviti interview. On behalf of Tom and Jules, I’m Joe Kornik. We’ll see you next time.
Jules Polonetsky has served for 15 years as CEO of the Future of Privacy Forum, a global non-profit organization advancing principled data practices in support of emerging technologies. Jules has led the development of numerous codes of conduct and best practices, assisted in the drafting of data protection legislation and presented expert testimony before agencies and legislatures around the world. Jules is an adjunct faculty member for the AI Law & Policy Seminar at William & Mary Law School. Jules has worked on consumer protection issues for 30 years, as chief privacy officer at AOL and at DoubleClick, as a Consumer Affairs Commissioner for New York City, and as an elected New York state legislator.
Tom Moore is a senior managing director in Protiviti’s Data Privacy practice. Previously, Tom served as chief privacy officer at AT&T, directly responsible for all privacy programs, policies, strategy, and compliance with regulations at the state, national and international levels. Tom joined AT&T in 1990. Tom also serves on the board for the Future of Privacy Forum and the Community Foundation of the Lowcountry. He was formerly a member of the Executive Committee of the Board of Directors of the AT&T Performing Arts Center in Dallas.
Data and privacy: Exploring the pros and cons of doing business in a digital world
These days, data breaches happen so often that they feel like they are just the cost of doing business in a digital world. The worst ones involve credit card payment data, which could result in fraudulent charges to your account. Caught early enough, this will not impact your credit rating, and your bank will issue you a new card number. Because this happens with such regularity, I keep a list of websites and passwords handy so that I can easily change all my credit card automatic payment info.
In July, I received a letter saying that Ticketmaster, more specifically its parent company Live Nation Entertainment, had suffered a breach and my personal data had been compromised. Ticketmaster, which sold more than 620 million tickets in 35 countries in 2023, sent that same letter to some 560 million members (roughly 7% of the Earth’s population). Maybe you got one, too.
Exposing the personal data of half a billion people to malicious hackers is astounding news, but my first reaction wasn’t “wow” but “meh.” I’ve been breached before and I will, undoubtedly, be breached again, so I initiated the routine damage control sequence.
The latest, but not the worst
The Ticketmaster breach is just the latest, and not nearly the worst. That distinction belongs to CAM4, which exposed more than 10 billion records in 2020; Yahoo in 2017 with 3 billion; and Aadhaar and Alibaba, which exposed more than a billion users each in 2018 and 2022, respectively. And household names like LinkedIn (2021) and Facebook (2019) have also had bigger breaches.
Thankfully, Ticketmaster says the most crucial information—such as U.S. social security numbers, which are required for users who want to sell their tickets on the site—was not compromised. But phone numbers, e-mail addresses, home addresses and encrypted credit card payment data were—a hacker’s paradise. (Ticketmaster did offer free credit and identity report monitoring, which I gladly accepted.)
Thankfully, nothing bad has come of it for me… at least not yet. But who knows who has access to my personal data on the dark web? And what can I—and 560 million others—do about it? The truth is, absolutely nothing.
And, perhaps foolishly, I have resold tickets on Ticketmaster, so my social security number is currently sitting in a Ticketmaster database—secured for now. Should I be worried? My bank has it. My tax software has it. And probably a few other for-profit businesses I’ve forgotten about have it too. It’s funny how we rationalize where danger to our privacy and most sensitive data lies and where it doesn’t. And how nonchalant we’ve become about the possibility, or probability, of it being exposed.
Big data means big worries
It’s been five years since Forbes declared data privacy would be the biggest issue facing businesses and consumers over the next decade. That was in 2019, before the pandemic accelerated our mass digitization. In many ways, that prediction has come to fruition. Fast forward to more recent Forbes findings that indicate 86% of Americans are more concerned about their privacy and data security than the state of the U.S. economy, and two-thirds either don't know or are misinformed about how their data is being used, and who has access to it.
A Pew Research Center survey of U.S. adults found 81% were concerned about the data companies collect about them and 71% were concerned about the data the government collects about them. Globally, the numbers are similar: A 2023 IAPP survey found that 68% of respondents are very concerned about their privacy online.
Meanwhile, in Protiviti’s Executive Perspectives on Top Risks 2024 and 2034 survey, cyber threats are increasingly on the minds of global executives, moving from the 15th ranked risk in 2023 all the way to the third ranked risk for 2024. And when we asked them to identify risks a decade from now, cyber threats climbed to the top as the biggest risk anticipated in 2034.
The challenges are complex: AI and other emerging technologies will impact data security and privacy in ways we’re not entirely sure of just yet; and shifting state, national and global regulation complicate data policy and governance. Executives are aware of the problems, and probably many of the solutions, but implementing them in a measured way in an ever-evolving digital data and privacy landscape is incredibly difficult.
Exploring the future of privacy
That’s why VISION by Protiviti is embarking on a months-long journey to explore the future of privacy. Organizations are experiencing unprecedented change, and the regulations that govern how personal information from consumers and clients is collected, used, stored and archived are evolving.
In addition, the roles of the chief privacy officer (CPO), as well as the chief information security officer (CISO) and chief technology officer (CTO), are evolving day by day to match the external pressures of maintaining data privacy. Too many data breaches also have eroded customer trust, and consumers—undoubtedly growing tired of the “we regret to inform you…” letters—are demanding more say in the management of their data.
To take a 360-view of the topic, VISION by Protiviti’s Future of Privacy content includes interviews with experts and leaders in the data privacy and protection space, including:
- Jules Polonetsky, CEO of the Future of Privacy Forum, speaking with Protiviti’s Tom Moore about navigating the road ahead, the AI opportunities that will emerge and why we absolutely cannot get this wrong
- Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for EMEA, sitting down with Protiviti’s Roland Carandang to discuss what steps business leaders should be taking to build out a strategic data security plan
- The Economist’s Dexter Thillien discussing how privacy is in peril in the digital economy, and ways the private sector will play a significant role in the future of data privacy
- Sue Bergamo, executive advisor, author and former CISO, highlighting what boards are getting wrong about data protection and privacy
- Mauro Guillén, futurist and vice dean of the Wharton School at the University of Pennsylvania, writing about the effect of AI on the availability and use of personal data
- Protiviti senior managing director Tom Moore’s take on the evolving role of the chief privacy officer and its uncertain future.
In addition, VISION by Protiviti will be publishing its own research on the topic in collaboration with the University of Oxford. Look for our Global Executive Outlook on the Future of Privacy, 2030 at the end of October. We’ll be taking a closer look at the survey findings in a Protiviti webinar on November 5, 2024. And VISION by Protiviti will be hosting two privacy-focused live events in New York in mid-November. Stay tuned for details.
And while I’m in New York, maybe I’ll take in a Broadway show or a concert. And yes, I will probably buy those tickets through Ticketmaster.
Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith
In this VISION by Protiviti interview, Protiviti’s Roland Carandang, Managing Director in the London office and one of the firm’s global leaders for innovation, security and privacy, sits down with Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor and author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. The two discuss Microsoft’s data governance strategies in the face of elevated risk, the impact of AI and emerging technology and what steps business leaders should be taking to build out a strategic security plan.
In this interview:
1:04 – What are the biggest threats to privacy?
2:58 – How AI changes the game: pros and cons
7:00 – Microsoft’s role in protecting customers’ privacy
10:18 – Thinking like a cyber criminal
15:35 – Will it get worse before it gets better?
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today we’re exploring the future of privacy, and we welcome in Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor, and author of “Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats.” Today, she’ll be speaking with my Protiviti colleague Roland Carandang, Managing Director in the London office and one of our global leaders for innovation, security, and privacy. Roland, I’ll turn it over to you to begin.
Roland Carandang: Thanks so much Joe. Sarah, welcome. Congratulations on the publication of your latest book and thank you so much for being with us today.
Smith: It’s great to be here. Thank you.
Carandang: I'm going to dive in with a very big question just to start things off. What do you see as the biggest threats to data privacy right now and what are some things that executives and boards should be focused on?
Smith: Yes. Well, I think I’m going to go for the easy option to start with: being a Chief Security Advisor at Microsoft, it has to be just the scope and scale of cyber attacks. They’re at a scale we have never seen before, just in terms of the ferocity of those different types of threat actors. What are they doing? What are they after in particular? When we talk about cyber attacks, we’ve got to think about what those threat actors are after. In essence, they’re asking: how do I monetize my return on investment? Some of those are financially motivated actors, some might be espionage or nation state actors, some are activists, but ultimately, it’s all about data, and that’s something we’ve really got to be cognizant about. So whenever we’ve had a cyber attack, we then have to think about the data breaches and what that means for the impact on those individuals who may be affected by that cyber attack as well.
Then we have questions that no doubt have to be answered, maybe that’s through regulators, our own business, our customers, partners, with regards to what data, how much data, and what's the impact of that. If I took all of that combined, when we're talking about cyber attacks, data breaches, intellectual property theft, whichever way you want to look at it, ultimately it'll come down to one thing, which is effective data governance. I would really say, what data, where is it, what is the value of that data, and what are my expectations, not just from regulators but consumers and employees as well, about how I should be protecting that data no matter what is on the horizon?
Carandang: On VISION by Protiviti we often talk about AI, and I know that’s something that’s on your mind. Ultimately, what impact do you think AI will have on data privacy and data security? Is there anything that business leaders should be doing to prepare for that now?
Smith: Well, I think with any technology there are always pros and cons. So we start with the pros. Think about the ability of AI and machine learning to provide really deep insights across large data sets. One of the biggest challenges that a lot of companies have, reflecting on where we started, is: where’s my data, how much data do I have, how much data exposure do I have? It’s about getting those real deep insights but also thinking about how I can use that data to drive innovation.
There’s no doubt about the scale of innovation in AI that we’ve seen over the last couple of years. We’re seeing tremendous work with regards to breakthroughs in science, medicine and technology. So there’s absolutely no doubt that there are some huge positive impacts for a lot of companies.
Now, I go to the cons, so kind of the reverse of that. In particular, think about generative AI, which has only been around in the last couple of years and was probably made famous by ChatGPT, though there are multiple other AI models. We’ve got to think about how each model was actually trained and where that data came from. Some of the data, let’s say, might have been scraped off the Internet. It could have been taken from social media. There are multiple sources this data has come from, and that raises a lot of questions again about what data, where did that data come from, and do I have any say in that data in terms of consent, legitimate interest and all of these types of things. And, if I can reflect back to the first question, the cyber attackers are thinking about amplifying their cyber attacks with some of these large language models—again, from a nation state perspective, highly resourced, highly motivated threat actors.
Now, a couple of months ago Microsoft actually issued some research in conjunction with OpenAI, as we’re talking about ChatGPT. What we identified is that some of the larger nation state actors are using these models to do reconnaissance, so that they’re learning about their targets, and they’re also using those large language models to refine their attacks. So this is just a caveat: the AI itself is not doing anything bad. It’s not a naughty AI. It’s still a tool in the threat actor’s kit bag. When we’re talking about phishing, ransomware, malware, whatever the case may be, the AI is just another tool, if you think about it that way. Now, I know there are a lot of companies that are spinning up R&D centers and innovation teams, thinking about the art of the possible. Maybe they are building their own models or they’re buying them, whatever the case. There are some really fundamental things here, as we’re talking about privacy in particular: responsible and ethical AI. It’s really having a deep appreciation for those security and privacy implications and the potential detriments of some of those large language models and how they’re being utilized, but also thinking about privacy-enhancing technology. So having encryption, thinking about how we manage the data even when it’s exfiltrated… none of those things change just because we have some new technologies, right? We can’t lose sight of the fundamentals, the foundation layers if you like, of security and privacy in particular.
Carandang: That’s super interesting, Sarah. Microsoft clearly has a big role to play in AI—it sounds like such an understatement—but it also has lots of customers and customer data. Since you mentioned it, can you tell us a bit more about your role at Microsoft and how a company with such large data sets deals with protecting its customer data? How do you spend your days, and perhaps some of your nights as well?
Smith: Can I say, it’s never a dull day being at a big tech company. If I talk about my role first and foremost, in essence, my role is to liaise with our largest enterprise customers across Europe. I work multi-country and multi-sector, and it’s really at that C-suite level. I can be talking to CISOs, CIOs, CTOs. It’s really about understanding those biggest challenges. Some of that we’ve already touched on. We’ve talked about cyber security, cyber attacks, how they’re evolving. We’ve talked about evolving technology, particularly when it comes to AI, responsible AI and all of these things, but it all fundamentally comes down to data and really understanding the value and the proposition of all of this big tech together.
Now, look at the cloud in its most simplistic form, irrespective of the size of the enterprises that we’re talking about. Although I operate at the large-enterprise level, there are obviously lots of different small enterprises and consumers who are utilizing the cloud. I would say the real value comes down to the shared responsibility model first and foremost. If you have your own data center or your own services, you’re responsible for everything. You’re responsible for the building, the infrastructure, the networks, all of the data, and all of these things. The big difference when you move to the cloud—and some of that comes down to the type of cloud or SaaS services or whatever the case may be—is the shared responsibility model, which just means the cloud platform itself is the accountability of the cloud service provider. So in essence that infrastructure work—patching, backups, recovery—won’t completely go away, but it’s one of those things that you don’t necessarily have to think about.
The other part of that shared responsibility model: if you think about all of the different companies across the globe, some of those are highly regulated entities, and those regulations are going to differ depending on what country they’re in or even what region they’re in. For customers to be able to adopt the cloud, Microsoft also has to have a very comprehensive compliance portfolio. Whether we’re talking about GDPR or various different standards like NIST, for example, the underpinning platform first and foremost has to have all of those controls in place that you can take advantage of. There’s a huge advantage right out of the box, I’d say, in terms of the inbuilt capability that’s already there by standard and by default. The challenge, however, is you have to take advantage of it. You’re still accountable for who’s accessing that data and what data you put into the cloud.
Carandang: You mentioned your new book in the introduction, Understand the Cyber Attacker Mindset. It dives right into global cyber crime, and you’ve engaged with actual cyber criminals. What are some of the key takeaways from those engagements that you could share with the audience here?
Smith: What’s interesting to me, and why I wrote it, is the focus on the human part of security. When we think about security, a lot of people think we’re here to protect data and technology and servers in the cloud and all of these things, but actually, the data only has a value when we understand the repercussions of some of that data being in the wrong hands and how it could be misused or abused in various different ways. As we talked about at the beginning, there are a million and one ways in which I could potentially attack you, but there’s only a finite set of reasons why I would want to, and why I’m motivated enough to want to do it.

So I looked at the different types of threat actors. As I said, we’ve got some that are financially motivated, we’ve got activists, nation state actors, and we’ve got malicious insiders as well. It’s the same data, but in different hands, what is the impact of that? Then it’s being able to work backwards and say, “OK, well, if someone was trying to sell this data, if someone was trying to use this data for espionage, if someone was trying to use it for other nefarious purposes, what do I need to do to protect it in all of those different hands?” That’s really important: to understand the human motivation behind it and why they are willing to go to that extra degree to get their hands on that data.

I think about it very simplistically, no matter what size organization we’re talking about, from the little ones up to the big enterprises. Our strategy in essence comes down to protecting the access in and the exit out. The access in is identity. As we’re talking about privacy, it’s identity in all its guises—identity as a human and identity of things, so laptops and devices and various things like that. In essence, from the threat actor’s perspective, I have to find a way into your network. I don’t particularly care how I get in, whether I’m trying those phishing emails, going directly to the source, or I’ve found a vulnerability in your network. I will find any which way into that network. The exit out really then comes down to the data. What is it I’m trying to exfiltrate out of your company that’s giving me that value in particular?
Carandang: Thank you, Sarah. That’s fantastic. You mentioned scale earlier. With the number of attacks on data growing exponentially day by day, I do wonder if it’s time for some bold paradigm shifts. Do you see any of these shifts on the horizon? For example, can you imagine a future where consumers pay small fees for otherwise free services, so companies won’t need to sell their data to third parties?
Smith: I think we’re going to see that a little bit. People are starting to pay for subscription services where it’s a highly tailored service. They don’t get adverts, or the adverts they do get are more tailored. We are starting to see people who want an enriched service. But I think the challenge we have as well is that a lot of this technology, particularly when we’re talking about social media, has been around for a very long time and it’s been free for a very long time. We know that when it is free—you’ve heard the comments—you are, in essence, the commodity: there’s data, there’s profiling being sold to varying degrees across different companies depending on how you’re interacting with some of their services.
The interesting thing is, even with the size of some of the cyber attacks and data breaches we’ve spoken about, the regulations we’ve had, the record-breaking fines as a result of misuse or abuse and selling of data—has any of it actually stopped people from using these services? I would argue not. Maybe there’s a handful of people who are a bit mindful of it. I think you’ll get pockets of people who want a better service, and you could sell it as a better, enriched service in some way—maybe you’ll have those kinds of people who might want to pay for that—but overall, I can’t see it happening to a large extent.
Carandang: Got it. Thank you, Sarah. So we’ve covered a lot today. I wanted to ask your overall feelings on maybe the next five years or so. Take us out to 2030. Tell us what you see. Are we in a better place? How well will we have done with this endeavor?
Smith: It’s interesting, isn’t it? We talked about GDPR and how long that’s been around. We are over five years since GDPR came into being, and other regulations around the world are all coming up to varying degrees. Has it made any difference? I’m not sure. Arguably, I think it’s going to have to get much worse before it gets better, but I do think there is some positive coming as well. I would frame that with where we started, when we’re talking about cybersecurity and what’s the game changer. What we have seen is a willingness for more collaboration across big tech and across multiple different countries and jurisdictions. Particularly when we think about different threat actors moving data and money around—there’s money laundering, people hiding in plain sight—it’s really hard to bring a lot of these people to justice. So what we have seen in the last couple of years is a willingness to collaborate, a willingness to share intelligence, and to really think about some of these core principles we’ve been talking about, coming back to those foundational levels. How do we have security and privacy by design, by default and as standard, so that nobody questions all of these things that have to be added on? Are you doing it for the right reasons? It just is. So, as I said, there’s going to be a lot more work. It’s not going to be easy. I have a tiny bit of optimism that we can tip the balance, but I just want to be realistic at the same time and not underestimate how much work is involved.
Carandang: That’s brilliant, Sarah. Thank you so much for your time and insight today. You've been very generous. Thank you for the great work you're doing more generally, and congratulations again on your book. Joe, back to you.
Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Roland and Sarah, I'm Joe Kornik. We'll see you next time.
Sarah Armstrong-Smith is Microsoft’s chief security advisor for EMEA and an independent board advisor on cybersecurity strategies. Sarah has led a long and impactful career guiding businesses through digital attacks and specializing in disaster recovery and crisis management. Sarah is the author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. Prior to Microsoft, she was Group Head for Business Resilience & Crisis Management at The London Stock Exchange and Head of Continuity & Resilience, Enterprise & Cyber Security at Fujitsu.
Roland Carandang is a Managing Director in Protiviti’s London office and one of the firm’s global leaders for innovation, security and privacy. He leads a world-class consulting team focused on modernizing and protecting businesses where he helps clients understand, implement and operate technology-based capabilities and takes pride in helping clients navigate an increasingly complex world. He collaborates across the Protiviti and Robert Half enterprise to ensure we are solving the right problems in the right way.
From bureaucratic performance to the common good: The challenge of Public Value in Italy
The soccer field fable: A lesson in misaligned priorities
Once upon a time, there was a mayor of a town with some 5,000 residents who prided himself on making the most of public funding. One key goal of his administration was to build five soccer fields in five years, ultimately to increase sports participation and improve the health of the town’s citizens. That well-meaning mayor, thanks to a well-functioning organization and efficient and motivated employees, was able to fulfill the political goal. As a result, the 5,000 residents, who happened to average 80 years of age, had five beautiful new soccer fields, delivered on time and on budget. Unfortunately, running and playing soccer was not the way the elderly residents of the town were looking to get their exercise. From their perspective, the soccer fields were a well-intentioned but ultimately faulty endeavor.
Breaking free from empty indicators
For decades, both global and Italian bureaucracies have dragged businesses and citizens through a complex labyrinth of public projects focused on quantitative outputs—fixated on how efficiently public funding was utilized (the input) and on how much was accomplished and how quickly (the output), while paying too little attention to what public benefit was actually delivered (the impact: social and health benefits, in the case of the soccer fields).
These indicators for success (‘done/not done’ and ‘how much was done and in how much time’) have given rise to a new kind of bureaucracy where “performance for performance’s sake” is the norm and where the true impacts on citizen well-being are often overlooked or, at best, incidental. In fact, research published in 2020 and 2021 in the Italian journals RIREA and Management Control shows that just 13% of the 3,798 indicators used by the Italian ministries were impact indicators. Cases of co-planning and co-reporting of impacts were exceedingly rare; research published in 2020 in the Italian journal Azienda Pubblica shows a "heat map" with few cases of co-planning between ministries on the same topic.
In this bleak scenario, among the small body of existing research on the impacts created, we can cite CERVAP’s research on the Public Value created by the 14 Italian metropolitan cities. The study ranked the Public Value created by those cities through a Public Value Index ranging from 0 to 100.
Milan (between 68 and 70 on the Public Value Index) and Bologna (between 66 and 68) were the cities that generated the most well-being, with Milan's leadership being focused on economic impact, while Bologna’s leadership was keyed into social impacts.
Unfortunately, the southern cities didn’t fare as well, highlighting that Public Value creation in the region is still a cultural and civic battle. These cities include Catania (between 30 and 32 on the index) and Naples, Palermo, Reggio Calabria and Messina, all between 32 and 39.
This context was also enabled by disjointed planning tools and programmatic fragmentation. In Italy, before 2021, public administrations (PAs) typically operated in silos with as many as ten different planning instruments per administration, resulting in overlapping projects and redundant efforts.
It wasn’t until 2022 that Italian PAs began adopting integrated planning methods. For example, the Integrated Plan of Activities and Organization (PIAO) was created with an infusion of funds from the European Union as part of a public administration reform spurred by post-COVID-19 recovery efforts. With it came the first legislative definition of Public Value: the multidimensional (social, economic, environmental) level of well-being created by a public administration in relation to its citizens and businesses.
As a single planning tool, PIAO aligns resources with performance management and risk mitigation, but more importantly, it creates measurable Public Value by focusing on the comprehensive impact of public projects from the public’s perspective, rather than isolated outputs from the PA’s perspective.
The Public Value pyramid: A new framework for success
But the PIAO also raises practical questions where the proverbial rubber meets the road: How do PAs systematically and consistently combine resourcing, risk management, performance and other administrative factors to achieve a measurable impact on well-being? The methodological framework of the “Public Value Pyramid” answers these questions. It integrates the various administrative functions—from resource management at its base up through performance and risk management—allowing policymakers and managers to govern the enabling, protection and creation of Public Value holistically.
The Pyramid operates on a principle of progressive value generation and measurement, beginning at the base level and moving upwards through intermediate programming levels that either create or protect Public Value.
- The base of the pyramid addresses how to enable Public Value. Public Value creation and protection are enabled by planning actions that are preparatory and functional to improving the quantity and quality of the PA’s diverse types of resources.
- The intermediate levels of the pyramid address the issues of how to create Public Value and how to protect Public Value. The intermediate levels should be planned and measured in an integrated way so that the specific performance objectives, such as a funding call for businesses, are protected with specific risk measures.
- At the top of the pyramid, we find impacts and Public Value, which serve as the horizon of the entire programmatic architecture and address the questions of “what and how many impacts?” and, finally, “how much Public Value?” More precisely, at the top level we find the analytical (one-dimensional) impacts, the average external impact and, ultimately, the average value across impacts, performance and health (a minimal sketch of this aggregation appears below).
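To make that aggregation concrete, here is a minimal, purely illustrative sketch in Python. The scores are invented and the simple unweighted averages are our assumption; the framework names the quantities (analytical impacts, average external impact, and the average across impacts, performance and health) but does not prescribe this exact computation.

```python
# Illustrative only: invented scores, assumed unweighted averages.
from statistics import mean

# Analytical (one-dimensional) impact scores, normalized to a 0-100 scale.
impacts = {"social": 62.0, "economic": 70.0, "environmental": 55.0}

# The "average external impact" at the top of the pyramid.
average_external_impact = mean(impacts.values())

performance = 74.0  # assumed aggregate score for the intermediate levels
health = 68.0       # assumed administrative-health score for the base

# A Public Value Index as the average across impacts, performance and health.
public_value_index = mean([average_external_impact, performance, health])
print(f"Public Value Index: {public_value_index:.1f}")  # -> 68.1
```

On this toy scale, a score in the high 60s is roughly where CERVAP’s index placed the best-performing cities such as Milan and Bologna.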
The pyramid also emphasizes the crucial role of public managers, whose individual performance is measured by their contribution to organizational success and risk management. This methodological framework enables PAs to plan by aligning administrative health, risk reduction and performance improvement, promoting a holistic Public Value aimed at enhancing citizen well-being.
Engaging the next generation for the “Public Value generation”
The soccer field fable told at the opening of this article warns us against the risk of self-referentiality in defining Public Value. Public Value should be observed through the eyes of citizens and businesses and communicated in their own words; most importantly, it should be enabled, protected and co-created with them.
The concept of Public Value must be extended to distinct categories of stakeholders, and it must preserve the possibility of improving the well-being of future generations. It is therefore important to engage with the new generations to create awareness and proper appreciation of Public Value. This is a particularly vital move as Italy prepares for nearly a third of its civil service workforce to retire by 2032. Compounding this demographic problem is the fact that young people overwhelmingly gravitate to the private sector as they enter the workforce.
Research conducted at Italian universities is trying to understand what would motivate young people to enter the public sector. When asked: “What would incentivize you to go and work in PA?” university students ranked “contribution to the creation of Public Value,” “clear career prospects” and “higher salaries” as their top three choices. Clearly, young people are looking for meaning in the work they will do and the sense of the common good that is embedded in the concept of Public Value. This is great news! Public Value is key to building a better future in Italy, as well as other countries around the world.
Every country walks at the speed of its public administrations. To encourage PAs to walk faster, we need to attract the best resources Italy has—young people—and actively involve them in innovative and shared Public Value projects.
Embracing Public Value isn’t merely about adopting new methodologies; it’s about changing perspectives—viewing policies through citizens’ eyes and measuring success not just by efficiency or output but by tangible improvements in people’s lives. As other countries observe Italy’s journey from bureaucratic chaos to a systematic approach highlighting Public Value, they too might find inspiration to pursue similar paths for building better futures for their citizens.
Former CISO on what boards are getting wrong about data protection and privacy
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with Sue Bergamo. Bergamo is an executive advisor, former CIO, CISO, and Global Technology Strategist for Microsoft. She sits on several boards, is host of the Short Takes podcast and author of So You Want to Be a CISO: A Practical Guide to Becoming a Successful Cybersecurity Leader. Here, Bergamo discusses recent SEC rulings and their impacts on the current and future state of the CISO role, how the C-suite and boards view data governance and privacy, and what steps they should be taking right now to build customer trust.
In this interview:
0:57 – The CISO role in a state of flux
4:20 – The effect of the SEC’s cyber disclosure rule
7:39 – Is there a playbook for privacy?
10:20 – Will companies get it right for their customers?
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m thrilled to welcome Sue Bergamo to the program. Sue is an executive advisor, former CIO, CISO, and global technology strategist from Microsoft. She sits on several boards, is host of the Short Takes podcast, and author of “So, You Want to be a CISO: A practical guide to becoming a successful cybersecurity leader.” Sue, thank you so much for joining me today.
Sue Bergamo: Thank you for having me. It’s a pleasure to be here.
Kornik: First off, Sue, let’s talk about the state of the CISO. As you point out in your book, which I mentioned in the intro, “So, You Want to be a CISO,” the position is really in a state of flux right now. So, talk to me a little bit about where the CISO is right now and how it’s changing, and if you think it will continue to be a critical part of the executive team going forward.
Bergamo: I like to use the term evolution because we’re in a position that I hope will evolve to a better state in the future. Just like the CIO role about 20 years ago, it has to go through some ebbs and flows. The CIO role finally came out of the tunnel in a much better spot: Everyone became very much aware of what the CIO needed and wanted to do, which was really around the back-office applications and infrastructure that run our corporations.
The CISO role is going through that evolution, and unfortunately, right now, it’s in a really ugly spot. I’m hopeful it will come out a little bit better. What’s going on in the industry is the SEC’s cyber disclosure rule that came into effect late last year, which basically said the CISO does not need to report to the board, but the board and the executive team need to be aware of cyber incidents. What ended up happening—and I can elaborate on the two CISOs who were charged with felonies for material breaches in the past—is that, in my opinion, based on what I see and what I know, executive teams decided CISOs weren’t really needed. A lot of CISOs said, “We’re not going to put up with these personal liabilities,” so a lot of us left our positions. And then a whole bunch of us lost our jobs because the SEC’s cyber disclosure rule talked about awareness: It didn’t put the CISO on the board, but it talked about awareness of incidents.
So, what has transpired—and I don’t mean this with any disrespect to SecOps managers—is that SecOps (security operations) managers, inexperienced from a CISO perspective, are being put into the role of head of security. Sometimes the title is CISO, but mostly head of security, because they deal with incident response. Now, the dirty little secret in most organizations is that when an incident occurs, the SecOps manager has a major role in defending against the breach, but they’re really there to tell the CISO where the threat is coming from. They are not there to lead the band; they’re only there for a very specific focus. So, I see this convergence of inexperienced people and cyber criminals, and we’ll see what the future brings, but I do hope that when this evolution comes to fruition, the CISO will be put in a much better position, in a much better light, with the executive team.
Kornik: You mentioned those SEC decisions and regulations. I don’t know if you want to expand on that at all or talk more specifically about where CISOs find themselves between a rock and a hard place right now.
Bergamo: Yes. There are really three types of CISOs. There’s the very inexperienced one who’s just coming into the role, not really sure what they’re doing. Again, it’s not a dig; they have to learn, and they’re going to learn the hard way. There’s the middle-of-the-road, as I call it: more experienced than the inexperienced ones, but still trying to find their spot in the position. Then there are the experts, who are exiting. A lot of CISOs on the inexperienced and middle-of-the-road side believe that our jobs are really about the technology, and that is so far from the truth. The experienced ones know that we follow something called the triad: confidentiality, integrity and availability. We accomplish the triad through people, process and technology. People, obviously, are employees; process is security frameworks and controls; and then there’s the tech. Once you get the tech up and running and optimized for efficiency so it’s giving you the data you need to defend your company, the tech is the easy part. It changes all the time, but it’s the easy part. It’s the compliance frameworks that take the majority of our time, and if you ask any experienced CISO, they’ll tell you: once the tech is installed and optimized, we spend the majority of our time on compliance and data privacy. The newbies, as I refer to them, sometimes need us to explain this and why compliance and data privacy are so important.
So, it’s a little bit of a mess out there right now, and then you throw in the personal liability. Let me expand on that for a moment. We had two well-known CISOs at two very public companies—I won’t mention their names—charged with felonies by the SEC, which led to the cybersecurity disclosure rule being implemented after the first case; the second fell under that disclosure rule. That sent shockwaves, not just waves but shockwaves, through the CISO industry, and we’re just sitting here saying to ourselves, “Holy cow.” A lot of us don’t have a lot of support, because everybody thinks cyber is our problem and not theirs. It takes a village to defend a company against cyber attackers. Now we’re being held personally responsible, with felony charges and potential jail time, so we’re all saying to ourselves, “I don’t think so,” which is why there’s a huge exodus from the role.
Kornik: Right. So, let’s talk a little bit more about the strategic role of the CISO or where that falls in the organization. Let’s talk specifically about data governance and protecting privacy. How did the companies that do it best do it best? In your experience, do they have chief privacy officers or chief data officers? Is there a playbook that business leaders should be following to really make sure that they’re getting this right?
Bergamo: I wish there were a playbook, but there isn’t. I think that’s half the battle, because everyone has a cellphone or a computer, and everyone feels they know technology and they know data. But this is a very specialized field. The CIO role, and tech and security generally, is a very specialized field. I’ve been fortunate enough to have held both roles, and yes, everyone always has an opinion on how we can do our jobs better, but this is our craft, and we have all kinds of different education and certifications. There’s no one thing that anyone can point to and no one game plan. But good C-level tech and security executives are well-rounded. We study. We research. We get involved, and we understand how to protect data. Now that AI is coming out, we have a whole new set of technologies that we need to fold into our programs. So, it’s about staying involved and understanding what we need to do to protect the data.
Kornik: When you’re in those conversations with the C-Suite, the boards, and the business leaders, do you think they understand the importance, not just the compliance and the governance aspect of this, but maybe the business importance of data privacy and what that means ultimately to building customer trust in the business and the bottom line?
Bergamo: I do think everyone understands that data matters and that data is important to running the business. Every business needs information to make good decisions and to process customer requests, B2B requests, employee requests. It’s all data driven. Is it given enough limelight? That depends on the size of the company. I do think executives and boards understand the importance of data and how to use it, but where they fall short is in investing in strategy, in securing the information, and in giving the technology and engineering teams what they need to make sure that data is sound.
Kornik: Right, and that’s an interesting perspective I would say from the company side. How confident are you that we’ll get this right for the customer, the client, the consumer? Are you optimistic that they’ll be better off over the next several years?
Bergamo: I’m always optimistic. The sun’s always shining in my world, right? Data is the stronghold of every company, from the most minute piece of information all the way to executive reporting. Everybody’s processing information. So, with some of the technologies that are coming out, either through public cloud vendors or through artificial intelligence, I think the data, and the ability to gather data, is just going to get better in the future.
Kornik: Well, Sue, you said you’re an optimist, so I’m going to leave you with this final question, where I ask you to look out a few years, maybe to the end of the decade, let’s say 2030. Where do you think we’ll be in terms of data privacy? Do you think 2030 is a better place than where we are currently?
Bergamo: Well, we can only get better with time, right? Kind of like a fine wine. So, I’m optimistic that material breaches will continue to happen fast and furiously and finally, our business brothers and sisters will wake up and say, “Oh, I need to be responsible for security too. I need to be responsible to help the CISO or the CIO, or whoever, with my data problems. Maybe I should get more involved.” So, I am optimistic that eventually the tables will turn. I think it’s going to take a little bit more time but 2030, sure, I’ll go with that.
Kornik: Great. Well, thanks so much for the time today, Sue, and the insights. I really enjoyed our conversation.
Bergamo: Thank you, Joe. I appreciate you having me.
Kornik: And thank you for watching the VISION by Protiviti interview. On behalf of Sue Bergamo, I’m Joe Kornik. We’ll see you next time.
Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
The Economist’s Dexter Thillien: Privacy in peril amid digital data explosion
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. He is the lead author of numerous reports on AI, cybersecurity, data privacy, technology and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. Here, he discusses how privacy is in peril in the digital economy, the impact of emerging technologies on data protection, regulation vs. innovation, and how the private sector will play a significant role in data privacy in the future.
In this interview:
1:11 – Biggest privacy issues for consumers and companies
3:18 – Emerging tech’s effects on privacy
5:42 – What type of regulation is needed?
7:49 – Who’s taking this seriously?
11:01 – Privacy in 2030
Joe Kornik: Welcome to the VISION by Protiviti Interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m happy to be joined by The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. Dexter is the lead author of numerous reports on AI, cyber security, data privacy, technology, and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. We spoke to Dexter last year about privacy in the metaverse and he has been kind enough to come back. Dexter, thank you so much for joining me today.
Dexter Thillien: Great to be here.
Kornik: Dexter, in a digital economy, I don’t think there’s any question that data privacy is now, and will continue to be, one of the biggest issues facing consumers and companies for the rest of this decade and probably much further into the future. What do you see as the biggest threats in terms of data privacy for both consumers and companies?
Thillien: Yes, thank you for the question. First of all, you’re right to differentiate between companies and consumers, because I think they will have different issues to deal with. Companies are holding more and more personal data as part of their business processes, and the key is to make sure that data is secured. That means putting the right governance system in place internally, so that, for instance, only the right people can access sensitive data, and making sure that the company’s own data, and any data from suppliers or consumers, is being handled properly. That’s going to become more and more of an issue because there’s going to be more and more data to deal with. For some companies, it might even become a competitive advantage; we’ve seen Apple trying to do that with its privacy policy compared to its competitors.
For consumers, it’s a different range of issues. It’s important to keep data safe and secure, but that is becoming increasingly difficult because we’re giving away much more personal data at any one time. Giving away personal data is no longer just about filling in a form; it happens any time we see or do something online, and any time we’re on the move, because most of us carry a smartphone, and many of the apps on it collect a huge amount of data. We may become a bit blasé about all the data we’re giving away, but it’s also very difficult to use the internet while trying to minimize the amount of data we give away. Meta, as an example, has tried to build a more private platform by charging users, making privacy a premium feature, but so far this has been refused by the European Commission in the European Union. The issue is that advertising remains the cornerstone of Meta’s business, which means it is free as long as we give away much of our personal data. With pictures and videos on top of text, and facial recognition entering the fray, we’re starting to give away data that is even more unique and much more difficult to replace if it is ever hacked.
Kornik: I’ve been reading a lot of The EIU’s position papers on AI and really all emerging technologies, which include quantum, spatial and biometric computing, and how those will ultimately impact data and privacy. How do you see AI and those other emerging technologies impacting privacy going forward?
Thillien: I think they will all have an impact. Starting with artificial intelligence: AI is all about data, so much so that some are arguing we may run out of web data as early as 2026, on top of all the personal data we have already given away. One of the major issues with artificial intelligence is the output of a model: it may give away some personal data as part of an answer, because that personal data was part of its input, and sometimes it’s very, very difficult to understand why it does that. We have seen some cases in Europe where this has happened, and privacy regulators are keeping track. There is also a consent issue, which is why Meta says it will not release its most advanced Llama model in Europe: the company is not entirely sure it can comply with the GDPR when using pictures, videos and other content beyond text.
In terms of quantum computing, we saw in August the National Institute of Standards and Technology (NIST) in the U.S. release its first post-quantum encryption standards, over fears that a quantum computer might break the current encryption standards sooner rather than later. It’s still not very clear when that will happen, and we at The EIU do not think it’s going to happen any time soon, at least in the short to medium term, because many technical hurdles remain. But it’s better to be safe, especially as some of the data encrypted today will still be very valuable when and if a quantum computer becomes operational.
When we’re talking about biometric data and biometric computing, it raises the question of what type of data we might be sharing. While it is possible to change an email address or even your financial details, it is impossible to change your fingerprint or your DNA. We may not always be able to identify exactly what we are sharing, but it is something to consider if we want to make sure that data does not fall into the wrong hands.
Kornik: Right. Thanks, Dexter. You mentioned GDPR, so let me follow up on that. Globally, Europe has invested significantly in data protection rules with GDPR, and Japan has had very strict privacy laws in place as well. Meanwhile, India and China, not so much. The U.S. is somewhere in the middle but has no federal regulation; a lot of the states have taken the lead on that front. Where is the sweet spot? Who is getting this right? Does too much regulation stifle innovation, or does not enough create chaos? Where do you stand on regulation?
Thillien: I think finding the sweet spot between regulation and innovation is what every policymaker and every regulator is trying to do, and it’s an issue for all tech regulation, not just data privacy. Sometimes regulation becomes more of a box-ticking exercise. We have seen that with cookies in Europe, where it has had no real impact on privacy because, for instance, active consent is rarely fully given. I do think we need some level of regulation, because without it any protection will be lacking, and there need to be independent rules put in place.
I think there are two main things to consider when we’re talking about regulation. The first is fragmentation. Many businesses are global in nature, whereas regulation very often is not. That means a company must decide what to do: follow each jurisdiction’s rules, accept some overlap, or go with the strictest rule and have only one set of rules to comply with globally. Some companies have already done that with the GDPR.
The second, and probably the most important, is enforcement. Rules are very nice, but without the right enforcement they can be a bit pointless. We’ve seen, again with the GDPR, that it takes quite a long time before any rulings or judgments come, because it can be very tough for regulators to make the case. It’s very important to know at what level you can enforce before you start thinking about regulation.
Kornik: Barely a week goes by without hearing about another significant breach, right? I wonder if consumers, specifically, are becoming desensitized to these breaches. Do we suffer from low expectations when it comes to our own privacy?
Thillien: I don’t know if we’re desensitized, but I think the impact is not always visible to the average user. We often hear about an attack where millions, if not billions, of entries have been hacked, but the impact of that attack is very difficult to gauge, because in most cases it means we’re going to receive a bit more spam email. It becomes much more personal when there are financial repercussions to an attack, when we can be scammed, or when our payment details become available and people buy things online with our money. I also want to make the point that companies can try to do as much as they can, and many do, but the attacker, in this case the hacker, is much more favored than the defender: the attacker needs to get it right once, while the defender has to get it right all the time. And as we spend more and more time online, the attack surface is only increasing, which means those breaches will keep happening.
Kornik: Right. We’ve seen big companies, very big tech companies even, playing fast and loose with data and privacy. Even children’s privacy has been in the news recently. We were just talking about regulations, so I’m curious: Can we trust the private sector to do what’s right in terms of privacy? Are boards and the C-suite taking this issue seriously enough, do you think?
Thillien: Some are, but I still don’t think self-regulation is the answer. While I mentioned the GDPR might not be as well enforced as it should be, it still offers an EU citizen much more protection than many other jurisdictions across the world. You mentioned that the U.S. still doesn’t have federal rules; it is trying to remedy that in terms of children’s privacy, but that needs to get through Congress, which is very difficult as well. The U.S. also has a much bigger third-party market, where data you might have given happily to a provider or a retailer is then sold on to a third party without your knowing about it, and perhaps you wouldn’t want that particular third-party company to have access to your data. Companies do have to take it seriously, because it can damage their reputation if they are hacked and it is proven they hadn’t done as much as they could have. With the greater penetration of technology in the workplace and the move toward digital information, protecting data has also become fundamental to business continuity; the CrowdStrike incident in July 2024 showed how reliant we’ve become on digital technology and how important it is to protect it. Whether privacy can become a competitive advantage is very difficult to say, because privacy is one of those areas where doing things right, so that nothing happens, has a limited visible impact, but not doing things right when something does happen can have a major negative impact. So the upside is not very apparent, but you still need to do as much as possible to make sure that nothing actually happens.
Kornik: Dexter, I appreciate your time today. I really enjoyed our conversation. I just have one more question and it’s forward looking. I’m wondering if you could take me out to the end of the decade, let’s say 2030, and tell me what you see around data and privacy. I’m wondering how we’ll view privacy in 2030.
Thillien: I think for me it’s an evolving concept, because the online world has become so prevalent, but the right to privacy is also a fundamental human right, whether online or offline. This is Article 12 of the Universal Declaration of Human Rights, which I’m going to quote: “No one should be subjected to arbitrary interference with their privacy, family, home or correspondence, nor to attacks upon their honor or reputation. Everyone has the right to the protection of the law against such interference or attacks.” I think this holds both in the online world and in the offline world. I’ll give you a personal example, maybe. I graduated from school in the very late 20th century, and I don’t think I used the internet at all for any of my coursework during high school. When you consider that the iPhone launched in 2007, 17 years ago, and Facebook in 2004, 20 years ago, it shows that many younger people are now what we call fully digital native and may have a different perspective. What I find interesting are some stories I’ve seen over the last few months and years of kids telling their parents not to upload pictures of them online. It made me think about a concept I might call the online privacy native: a younger generation that is less keen to share publicly than the previous generation. I think we’ll have to wait and see what actually happens going forward.
Kornik: Yes, that’s interesting. I hadn’t really thought about that, but you’re right. That generation did seem more conscious—conscientious about sharing photos.
Thillien: I think they might have a different perspective when it comes to their online persona, their offline self, and what they share online. So it might not be a whole new vision of privacy for the entire generation, but in terms of what they do online, because they are fully digital native and online a lot of the time, it could well be different. Everybody is going to have a smartphone; that’s not going to change. We’re still going to use the internet, and we’re still going to share some data. But the younger generation, which has only known that world, will probably have a different perspective on what they’re willing to share, especially what they’re not willing to share, and on what they should get in return for sharing. I think it’s very early days, and we’ll have to wait and see.
Kornik: Right. Very interesting. Thanks, Dexter, for that perspective and your insights. I really enjoyed our conversation today.
Thillien: Thank you very much for having me.
Kornik: Thank you for joining the VISION by Protiviti interview. On behalf of Dexter Thillien, I’m Joe Kornik. We’ll see you next time.
Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
Data by default: How AI radically changes the data privacy landscape
“Data is a precious thing and will last longer than the systems themselves.”
- Tim Berners-Lee, widely credited as the inventor of the World Wide Web
It’s been almost twenty years since Berners-Lee uttered those words and they are truer, and perhaps even a little more ominous, today than they were then. The advent of artificial intelligence (AI) makes data even more valuable, and thus raises the issue of data privacy and data ownership to a new level of importance, complexity and controversy.
AI can be fed people’s private data from different sources — not just online and offline databases but also content that users upload to the Web, sensor data from the Internet of Things, and even the digital footprint users leave behind as they use their digital devices. Protecting the rights that individuals have as to what information is collected, where it is stored, who can use it, and for what purpose has always been difficult. And in the future, AI will make that exceedingly complex.
Data and digital footprints
We entered the era of “data collection by default” some time ago, argue Stanford University’s Jennifer King and Caroline Meinhardt in a recent comprehensive analysis. There are two potential ways to address this issue. The mostly unworkable one at this point is to move from an opt-out to an opt-in system. The issue with that is not just regulating and ensuring compliance, but also what to do about information collected in the past. And companies can still encourage users to opt in through special offers and other incentives, and then use the data for purposes that were not anticipated.
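To see how much rides on the default, consider this minimal, hypothetical Python sketch; the record fields and the two regimes are assumptions for illustration, not any statute’s terminology.

```python
# Hypothetical consent record; field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    analytics_tracking: bool
    third_party_sharing: bool

def new_user_opt_out_regime(user_id: str) -> ConsentRecord:
    # Opt-out: collection is ON by default; the user must act to disable it.
    return ConsentRecord(user_id, analytics_tracking=True, third_party_sharing=True)

def new_user_opt_in_regime(user_id: str) -> ConsentRecord:
    # Opt-in: nothing is collected until the user explicitly agrees.
    return ConsentRecord(user_id, analytics_tracking=False, third_party_sharing=False)
```

The only difference between the two regimes is the default, which is precisely why the switch is so consequential: most users never change defaults, and data already collected under the old default remains an open question.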
A second solution is to develop applications that prevent third parties from collecting activity data in the first place, such as the opt-out option offered when we download an app to our smartphones. But this applies only to activity data, not to the data the user supplies while searching or transacting once the app is installed.
Moreover, efforts at controlling information at the point of collection are undermined by Web crawlers and Web scrapers, which can automatically locate, classify, download and extract vast amounts of data, images and other types of material from the internet writ large. In principle, they can only access public, readily viewable material. But in practice, Web crawlers can jump over paywalls by disguising themselves as users and can use pirated content that has been stored somewhere other than the original location.
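In principle, a well-behaved crawler checks a site’s robots.txt before fetching anything, and Python’s standard library can express that check in a few lines. The sketch below is illustrative (the URL and user-agent string are placeholders), and nothing in it constrains a crawler that chooses to ignore the rules, which is exactly the problem described above.

```python
# A minimal polite-crawler check using only Python's standard library.
# The URL and user-agent string are placeholders for illustration.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawling rules

# A compliant crawler fetches a page only if robots.txt allows it.
if rp.can_fetch("ExampleBot/1.0", "https://example.com/articles/page1"):
    print("Allowed to crawl")
else:
    print("Disallowed by robots.txt")
```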
In addition, data often get misplaced, breached, leaked or otherwise mishandled, making it an easy target for AI Web crawlers. Thus, the issue goes well beyond the traditional approaches of offering assurances about confidentiality or non-disclosure and establishing opt-in or opt-out mechanisms.
The AI data supply chain
Given the difficulties involved in addressing privacy issues at the point of data collection, options at a later stage of the data supply chain must be considered. The broadest measure would be to ask companies to disclose basic information about the data they feed into AI, indicating the sources, scope and scale. This enables, for example, checking if there is copyright infringement.
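No regulation currently requires such a disclosure, but a sketch shows how lightweight it could be. The manifest below is hypothetical: the schema, field names and figures are invented for illustration only.

```python
# Hypothetical training-data disclosure manifest; all values are invented.
import json

training_data_manifest = {
    "model": "example-model-v1",
    "sources": [
        {"name": "public web crawl", "records": 1_200_000_000, "license": "mixed/unknown"},
        {"name": "licensed news archive", "records": 45_000_000, "license": "commercial"},
    ],
    "collection_window": "2019-01 to 2024-06",
    "copyright_review": "pending",
}
print(json.dumps(training_data_manifest, indent=2))
```

Even a disclosure this thin would let outsiders ask the copyright question raised above: which sources were licensed, and which were simply scraped?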
However, no such requirement or regulation exists, and very few companies do it voluntarily. Some companies are now offering users an opt-out option so that their data and images are not used for AI training. Amazon’s AWS, Google’s Gemini, and OpenAI offer such options, but they are often cumbersome to activate and not totally fool-proof.
The supply chain ends with outputs, which in the case of AI include applications and predictions. Individuals need to be protected if their data are unwittingly disclosed at that point. At the societal level, the thorny problem with using training data from the web is that, even if all permissions and legal requirements were met, there is the issue of “bias in, bias out”: data on the web are not representative of society or of the world. Some users, companies and countries are more prone to uploading material or leaving behind a digital footprint, and that biased body of data then becomes the raw material for AI applications.
AI never forgets…
The truth is that individuals, unfortunately, have very few options at their disposal to prevent the misuse or unauthorized use of their data. It is also exceedingly difficult to compel companies to delete data at the user’s request, even when that is mandated by law. More alarmingly, there is no good way to make an AI application “forget” or “unlearn” what it has unlawfully learned. And the more time passes without corrective action, the harder and costlier these instances become.
As is often the case with emerging technologies, regulation is lagging. And, not surprisingly, there is much debate as to the amount of regulatory oversight that is necessary, warranted or desirable. Adding to the complexity, digital data are global while regulation is local.
According to the United Nations, 137 of 194 countries have passed data protection and privacy legislation with varying levels of safeguards. The Web is a global medium, but it is subject to a mosaic of regulations at the supranational (the European Union), national and subnational levels (state by state, as in the United States). Most importantly, regulations aimed at the Web or AI sometimes collide with those in other areas, like national security. The European Union has complained about American intelligence agencies’ use of the private data of EU citizens and residents without their approval, an issue complicated by the fact that large and small American digital platforms routinely send user data to the U.S. The U.S.-EU Data Privacy Framework, signed in July 2023, regulates the circumstances under which the U.S. can gather information and how European citizens can appeal.
Unintended consequences
From the standpoint of the companies managing digital platforms, the regulatory context could not be more complex. They need to comply with regulations implemented by the country where they are based but also by the laws of the countries in which they collect data from their users. In addition, cross-border data flows may also be regulated: This represents a major obstacle for new startups aiming at international growth, while offering a built-in advantage to more established companies that have the resources to either comply or to deal with the potential litigation if they do not comply.
The future of personal data protection and privacy remains uncertain. And yet, companies need to make operational decisions today that may be legally questionable in the future. Companies, especially those engaged in large-scale AI efforts, will continue to amass data and to use it to advance their goals, even at the risk of being found non-compliant.
Another unintended consequence stems from applying new regulations to both tech companies whose core business involves data collection and manipulation, especially those engaged in AI, and those in other industries, which gather and process data in support of selling other products or services. On the one hand, the concern is that complying with regulations designed to prevent the worst potential harms might constrain the ability of such companies to compete. But on the other hand, many companies whose core business is not AI are also developing, or at least using, AI applications. Thus, the default for politicians and regulators is to make all companies comply.
Requirements, standards and scorecards
It is not clear yet if different jurisdictions around the world will treat all companies the same way, or have less onerous requirements for small firms, and for data that are not deemed “sensitive” (sensitive data includes but is not limited to financial, health, biometric and genetic information), as in the proposed American Privacy Rights Act of 2024.
Board directors and business leaders need to stay hyper-informed in a rapidly evolving landscape. There are many proposals on the table in terms of legislative initiatives, but no comprehensive federal regulation in the U.S. yet, let alone a global set of standards other than the decades-old principles of information minimization and information specificity.
Eventually, companies will be asked to create data privacy scorecards, so they should keep track of and meticulously document all practices and procedures. In the meantime, they need to exercise sound privacy practices to avoid bad publicity, public-relations problems and a loss of customer trust over data mismanagement and hacking.
CPO or no? Protiviti’s Tom Moore on the evolution of the privacy role and its uncertain future
The Information Age created an information explosion that shows no signs of slowing down. In fact, the increasing availability of information, including digital data, is speeding up exponentially. With the flood of all that new information come concerns for the C-suite about how to manage it safely and securely.
While some companies expanded the role of their existing technology leaders to deal with this challenge, others opted to expand their executive teams, and the role of the chief privacy officer (CPO) was born. According to the International Association of Privacy Professionals (IAPP), Jennifer Barrett Glasgow of Acxiom Corporation was the first CPO; she began overseeing privacy at Acxiom in 1991. Her job description then was, for sure, radically different from the role of today’s CPO, which continues to evolve as quickly as new information becomes available and privacy laws and regulations proliferate. Many privacy leaders have taken on new titles and enhanced responsibilities in the areas of AI, trust and ethics, and data governance.
Regulation and legislation
Back in 1991, very few privacy regulations existed globally. Since then, in the U.S., states have stepped up to fill the regulatory void left by the federal government: More than two-thirds have addressed privacy regulation, with 18 states having laws on the books, eight with active bills pending and 10 with bills working their way through their legislatures. Meanwhile, in April 2024, U.S. lawmakers announced the American Privacy Rights Act, bipartisan draft legislation that seeks to create a national standard for data privacy and security, addressing the unregulated sale of online data and aiming to ensure individuals’ right to control their personal information. Although it has little chance of passing before the presidential election in November, lawmakers are optimistic it could serve as a framework for legislation in 2025.
Globally, there’s also been a dramatic increase in the number of privacy regulations. The General Data Protection Regulation (GDPR) from the European Union, in force since 2018, created one of the largest shifts in how information is managed within organizations. Every year, more countries, including Japan, Singapore and South Korea, have introduced new privacy regulations. According to the IAPP, as of March 2024, 70% of nations and 79% of the world’s population are covered by some form of data privacy law.
As privacy regulations continue to expand rapidly, business leaders continue to question who owns the responsibility for ensuring their organizations’ data practices are compliant and who should be responsible for meeting any new compliance requirements. Legal? Technology? Compliance? There may not be one correct answer, and truth be told, privacy is a shared responsibility across the organization.
Is the CPO role in decline?
In an IAPP survey of privacy professionals conducted last year, 78% said their organizations’ most senior privacy leader was in the five highest levels of the organization, while 21% were in the two highest levels. The data also showed that most of those surveyed reported to either the General Counsel (32%), the chief compliance officer (16%), or directly to the CEO (15%).
The annual survey’s biggest one-year shift shows a decline in direct reporting to the CEO and a rise in reporting to the chief compliance officer (CCO). One possibility: This shift may illustrate a decline in the stature of the role for the CPO in organizations and may signal that privacy, like many other regulations, requires an integrated approach. Real-life indicators also point to the decreasing importance of the CPO. Anecdotally, there are plenty of instances when a CPO leaves the organization or the position is eliminated in a restructuring, and it is not filled.
This is exactly what happened earlier this year when Google eliminated its CPO role in a corporate restructuring and opted not to fill it. There are other examples in large organizations where the CPO role either remains vacant or isn’t even on the org chart any longer. Is this because of a lack of expertise in the field, an inadequate internal bench, or a reprioritization of efforts and focus within the enterprise?
Or is it, as is the case at Google, that the varied responsibilities for data privacy have outgrown the role of a single CPO? Whatever the answer, it’s safe to say that when Google, a company estimated to hold between 10 and 15 exabytes of data—or the storage power of about 30 million PCs—makes a potentially game-changing decision regarding privacy, it’s probably a good idea for the rest of us to take note.
Another reason the CPO role may be in decline lies in the lack of measurable KPIs, which makes benchmarking difficult for privacy professionals. The status quo is that information and data should be protected, so unless a breach occurs, a regulatory investigation is launched or a fine is levied, some companies may have a hard time evidencing that the CPO role has had a significant, direct impact on customer sentiment, the business and its bottom line.
Of course, good CPOs will serve to preserve the “status quo” every day and in this sense may even be victims of their own success. And if the responsibility for privacy is, ultimately, being dispersed throughout multiple roles within the organization, pitfalls could begin to emerge. For instance, a team that is already resource-constrained could end up with increased privacy responsibilities, potentially, and inadvertently, losing its focus on privacy—a risky proposition.
What are the risks of losing focus?
The risk of a diminished CPO role is losing a dedicated function and leader hyper-focused on privacy. When teams pick up privacy as a second or third priority, important tasks and obligations can get missed. Regulations may not be reviewed fully, legislative efforts are not monitored for anticipated changes, and dealing with enforcement becomes even more challenging. This, of course, has a direct impact on operations and customer perception.
Privacy should not be a reactive function. Customers want to collaborate with companies that they trust and protecting an individual’s privacy leads to trust. Additionally, fines levied against companies found mishandling a customer’s data can have a significant economic and reputational impact on the business. Though COVID-19 may have slowed global regulators from enforcing regulations, they are now making up for lost time with increased legislative authority and automated tools. And the repercussions for noncompliance are making headlines with fines and consent decrees.
It’s also important to consider the effect on the career paths and overall morale of the privacy team. When the CPO is deprioritized or pushed down the org chart, it becomes more difficult to attract top talent, and when the privacy pipeline dries up, it’s tough to turn on again. Moreover, eliminating the role altogether leads privacy team members within the organization to seek other disciplines or external opportunities to advance their careers.
Not prioritizing the CPO also leads to many management conundrums. Without a CPO, where does the privacy direction originate? Who will listen to the voice of the customer on privacy concerns and respond in a consistent, centralized manner? How does the organization create internal privacy awareness? The reality is that when the CPO is displaced or deprioritized within the organization, so is privacy itself. With the ever-changing and expanding legislative landscape and the sheer amount of data at our disposal, one would expect the role’s strategic importance to become more ingrained and elevated within organizations in the coming years.
Building customer trust
Those organizations that do employ and value the CPO role should expect continued cross-collaboration across the entire enterprise. Much as with the expansion and awareness of internal audit and compliance functions following new regulations, privacy awareness needs to be well communicated and understood across the entire organization. Activities like completing the Privacy Impact Assessments or Data Protection Impact Assessments required under GDPR and some U.S. state laws can only happen if the CPO and privacy team are well versed in the legislation; a minimal sketch of the bookkeeping behind such an assessment follows below.
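As a purely illustrative aid, here is a hedged Python sketch of what a single assessment record might track; the fields are our assumptions, not a template prescribed by GDPR or any U.S. state law.

```python
# Hypothetical impact-assessment record; fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AssessmentRecord:
    processing_activity: str
    data_categories: List[str]                 # what personal data is involved
    lawful_basis: str                          # e.g., "consent"
    risks_identified: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)
    approved_by: Optional[str] = None          # e.g., the CPO signs off

dpia = AssessmentRecord(
    processing_activity="New recommendation feature",
    data_categories=["browsing history", "purchase history"],
    lawful_basis="consent",
    risks_identified=["profiling of minors"],
    mitigations=["age gating", "data minimization"],
)
```

The point is not the data structure but the discipline: each new product or feature gets a record, identified risks get mitigations, and someone accountable signs off.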
The CPO needs to have a stake in the product change management and lifecycle process and work closely with the data governance teams to understand what data is collected, how it’s processed and how it’s protected. The CPO today has numerous vectors of responsibility, including state, federal and global law enforcement; leadership and board attention; internal business models, products and services; technology advancements; customer expectations; and competitor brand and product positioning. Though privacy can be a shared responsibility across the organization, the CPO needs to be the focal point across the enterprise and be accountable for building customer trust through the company’s data protection and privacy practices.
Whether your organization has a chief privacy officer, is looking to hire one, or has opted to split the role across several functions of the business, the one thing that remains certain is data privacy is not optional. More than ever, customers are demanding accountability from organizations about how their data is used, processed, shared and stored. It’s imperative that organizations invest in building a privacy program run by strong leaders who can navigate an evolving data privacy landscape. The risk of not doing so is eroding the company brand and losing customer trust.