Data and privacy: Exploring the pros and cons of doing business in a digital world
These days, data breaches happen so often that they feel like they are just the cost of doing business in a digital world. The worst ones involve credit card payment data, which could result in fraudulent charges to your account. Caught early enough, this will not impact your credit rating, and your bank will issue you a new card number. Because this happens with such regularity, I keep a list of websites and passwords handy so that I can easily change all my credit card automatic payment info.
In July, I received a letter saying that Ticketmaster, more specifically its parent company Live Nation Entertainment, had suffered a breach and my personal data had been compromised. Ticketmaster, which sold more than 620 million tickets in 35 countries in 2023, sent that same letter to some 560 million members (roughly 7% of the Earth’s population). Maybe you got one, too.
Exposing the personal data of half a billion people to malicious hackers is astounding news, but my first reaction wasn’t “wow” but “meh.” I’ve been breached before and I will, undoubtedly, be breached again, so I initiated the routine damage control sequence.
The latest, but not the worst
The Ticketmaster breach is just the latest, and not nearly the worst. That distinction belongs to CAM4, which exposed more than 10 billion records in 2020; Yahoo in 2017 with 3 billion; and Aadhaar and Alibaba, which exposed more than a billion users each in 2018 and 2022. And household names like LinkedIn (2021) and Facebook (2019) have also had bigger breaches.
Thankfully, Ticketmaster says the most crucial information—such as U.S. Social Security numbers, which are required for users who want to sell their tickets on the site—was not compromised. But phone numbers, e-mail addresses, home addresses and encrypted credit card payment data were: a hacker’s paradise. (Ticketmaster did offer free credit and identity report monitoring, which I gladly accepted.)
Thankfully, nothing bad has come of it for me… at least not yet. But who knows who has access to my personal data on the dark web? And what can I—and 560 million others—do about it? The truth is, absolutely nothing.
And, perhaps foolishly, I have resold tickets on Ticketmaster, so my social security number is currently sitting in a Ticketmaster database—secured for now. Should I be worried? My bank has it. My tax software has it. And probably a few other for-profit businesses I’ve forgotten about have it too. It’s funny how we rationalize where danger to our privacy and most sensitive data lies and where it doesn’t. And how nonchalant we’ve become about the possibility, or probability, of it being exposed.
Big data means big worries
It’s been five years since Forbes declared data privacy would be the biggest issue facing businesses and consumers over the next decade. That was in 2019, before the pandemic accelerated our mass digitization. In many ways, that prediction has come to fruition. Fast forward to more recent Forbes findings that indicate 86% of Americans are more concerned about their privacy and data security than the state of the U.S. economy, and two-thirds either don't know or are misinformed about how their data is being used, and who has access to it.
A Pew Research Center survey of U.S. adults found 81% were concerned about the data companies collect about them and 71% were concerned about the data the government collects about them. Globally, the numbers are similar: A 2023 IAPP survey found that 68% of respondents say they are very concerned about their privacy online.
Meanwhile, in Protiviti’s Executive Perspectives on Top Risks 2024 and 2034 survey, cyber threats are increasingly on the minds of global executives, moving from the 15th ranked risk in 2023 all the way to the third ranked risk for 2024. And when we asked them to identify risks a decade from now, cyber threats climbed to the top as the biggest risk anticipated in 2034.
The challenges are complex: AI and other emerging technologies will impact data security and privacy in ways we’re not entirely sure of just yet; and shifting state, national and global regulation complicate data policy and governance. Executives are aware of the problems, and probably many of the solutions, but implementing them in a measured way in an ever-evolving digital data and privacy landscape is incredibly difficult.
Exploring the future of privacy
That’s why VISION by Protiviti is embarking on a months-long journey to explore the future of privacy. Organizations are experiencing unprecedented change, and the regulations that govern how personal information from consumers and clients is collected, used, stored and archived are evolving.
In addition, the roles of the chief privacy officer (CPO), as well as the chief information security officer (CISO) and chief technology officer (CTO), are evolving day by day to match the external pressures of maintaining data privacy. Too many data breaches also have eroded customer trust, and consumers—undoubtedly growing tired of the “we regret to inform you…” letters—are demanding more say in the management of their data.
To take a 360-degree view of the topic, VISION by Protiviti’s Future of Privacy content includes interviews with experts and leaders in the data privacy and protection space, including:
- Jules Polonetsky, CEO of the Future of Privacy Forum, speaking with Protiviti’s Tom Moore about navigating the road ahead, the AI opportunities that will emerge and why we absolutely cannot get this wrong
- Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for EMEA, sitting down with Protiviti’s Roland Carandang to discuss what steps business leaders should be taking to build out a strategic data security plan
- The Economist’s Dexter Thillien discussing how privacy is in peril in the digital economy, and ways the private sector will play a significant role in the future of data privacy
- Sue Bergamo, executive advisor, author and former CISO, highlighting what boards are getting wrong about data protection and privacy
- Mauro Guillén, futurist and vice dean of the Wharton School at the University of Pennsylvania, writing about the effect of AI on the availability and use of personal data
- Protiviti senior managing director Tom Moore’s take on the evolving role of the chief privacy officer and its uncertain future
In addition, VISION by Protiviti will be publishing its own research on the topic in collaboration with the University of Oxford. Look for our Global Executive Outlook on the Future of Privacy, 2030 at the end of October. We’ll be taking a closer look at the survey findings in a Protiviti webinar on November 5, 2024. And VISION by Protiviti will be hosting two privacy-focused live events in New York in mid-November. Stay tuned for details.
And while I’m in New York, maybe I’ll take in a Broadway show or a concert. And yes, I will probably buy those tickets through Ticketmaster.
Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith
In this VISION by Protiviti interview, Protiviti’s Roland Carandang, Managing Director in the London office and one of the firm’s global leaders for innovation, security and privacy, sits down with Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor for Europe, Middle East and Africa, independent board advisor and author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. The two discuss Microsoft’s data governance strategies in the face of elevated risk, the impact of AI and emerging technology, and what steps business leaders should be taking to build out a strategic security plan.
In this interview:
1:04 – What are the biggest threats to privacy?
2:58 – How AI changes the game: pros and cons
7:00 – Microsoft’s role in protecting customers’ privacy
10:18 – Thinking like a cyber criminal
15:35 – Will it get worse before it gets better?
Protecting data and minimizing threats with Microsoft’s Sarah Armstrong-Smith
Joe Kornik: Welcome to the VISION by Protiviti interview. I'm Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today we're exploring the future of privacy, and we welcome in Sarah Armstrong-Smith, Microsoft Chief Security Advisor for Europe, Middle East and Africa, independent board advisor, and author of “Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats.” Today, she'll be speaking with my Protiviti colleague Roland Carandang, Managing Director in the London office and one of our global leaders for innovation, security, and privacy. Roland, I'll turn it over to you to begin.
Roland Carandang: Thanks so much Joe. Sarah, welcome. Congratulations on the publication of your latest book and thank you so much for being with us today.
Sarah Armstrong-Smith: It's great to be here. Thank you.
Carandang: I'm going to dive in with a very big question just to start things off. What do you see as the biggest threats to data privacy right now and what are some things that executives and boards should be focused on?
Armstrong-Smith: Yes. Well, I think I'm going to go for the easy option to start with. Being a Chief Security Advisor at Microsoft, it has to be just the scope and scale of cyber-attacks. They're at a range that we have never seen before, in terms of the ferocity of the different types of threat actors. What are they doing? What are they after in particular? When we talk about cyber attacks, we then have to think about what those threat actors are after. In essence, they're asking: how do I monetize my return on investment? Some of those are financially motivated actors, some might be espionage or nation-state actors, some are activists, but ultimately, it's all about data, and that's something we've really got to be cognizant about. So whenever we've had a cyber-attack, we then have to think about the data breach and what it means for those individuals who may be impacted by that cyber attack as well.
Then we have questions that no doubt have to be answered—maybe through regulators, our own business, our customers, our partners—with regard to what data, how much data, and what's the impact of that. If I take all of that combined—cyber attacks, data breaches, intellectual property theft, whichever way you want to look at it—ultimately it comes down to one thing, which is effective data governance. I would really ask: what data, where is it, what is the value of that data, and what are my expectations, not just from regulators but consumers and employees as well, about how I should be protecting that data no matter what is on the horizon?
Carandang: On VISION by Protiviti we often talk about AI, and I know that's something that's on your mind. Ultimately, what impact do you think AI will have on data privacy and data security? Is there anything that business leaders should be doing to prepare for that now?
Armstrong-Smith: Well, I think with any technology there are always pros and cons. So we start with the pros. Think about the ability of AI and machine learning to provide really deep insights across large data sets. One of the biggest challenges that a lot of companies have, reflecting on where we started, is: where's my data, how much data, how much data exposure do I have? It's about getting those really deep insights, but also thinking about how I can use that data to drive innovation.
There's no doubt, when we're thinking about AI, about the scale of innovation that we've seen over the last couple of years. We're seeing tremendous work with regards to breakthroughs in science, medicine and technology. So there's absolutely no doubt that there are some huge positive impacts for a lot of companies.
Now, I go to the cons, so kind of the reverse of that. In particular, think about Gen AI, which has only been around in the last couple of years, probably made famous by ChatGPT, though there are multiple other AI models. We've got to think about how that was actually trained and where that data came from. Some of the data, let's say, might have been scraped off the Internet. It could have been taken from social media. There are multiple ways in which this data has been gathered, and it's been raising a lot of questions again about what data, where did that data come from, do I have any say in that data in terms of consent, legitimate interest and all of these types of things. Again, I can reflect back to the first question with regards to the cyber attackers and how they are thinking about amplifying their cyber-attacks with some of these large language models—again, from a nation-state perspective, highly resourced, highly motivated threat actors.
Now, a couple of months ago Microsoft actually issued some research in conjunction with OpenAI, as we're talking about ChatGPT. What we identified is that some of the larger nation-state actors are using these models to do reconnaissance, so they're learning about their targets, and they're also using those large language models to refine their attacks. So this is just a caveat: the AI itself is not doing anything bad. It's not a naughty AI. It's still a tool in the threat actor's kit bag. Whether we're talking about phishing, ransomware or malware, the AI is just another tool, if you think about it that way. Now, I know there are a lot of companies that are spinning up R&D centers and innovation teams, thinking about the art of the possible. Maybe they are building their own models or buying them, whatever the case. There are some really fundamental things as we're talking about privacy in particular: responsible and ethical AI. It's really having a deep appreciation for those security and privacy implications, and the detriment of some of those large language models and how they're being utilized, but also keeping privacy-enhancing technologies in mind—encryption, and how we're thinking about managing the data, even when it's exfiltrated. None of those things change just because we have some new technologies, right? We can't lose sight of the fundamentals, the foundation layers if you like, of security and privacy in particular.
Carandang: That's super interesting, Sarah. Microsoft clearly has a big role to play—it sounds like such an understatement—in AI but it also has just lots of customers as well and customer data. Since you mentioned it, can you just tell us a bit more about your role at Microsoft and how a company—you mentioned large data sets, and how a company like that deals with protecting its customer data. How do you spend your days and perhaps some of your nights as well?
Armstrong-Smith: Can I say, it's never a dull day, let's say, at a big tech company. To talk about my role first and foremost: in essence, my role is to liaise with our largest enterprise customers across Europe. I work multi-country and multi-sector, and it's really at that C-suite level. I can be talking to CISOs, CIOs, CTOs. It's really understanding their biggest challenges. Some of that we've already touched on. We've talked about cyber security and how cyber-attacks are evolving. We've talked about evolving technology, particularly when it comes to AI, responsible AI and all of these things, but it all fundamentally comes down to data and really understanding the value and the proposition of all of this big tech together.
Now, look at the cloud in its most simplistic form, irrespective of the size of the enterprises that we're talking about. Although I work at that level, there are obviously lots of different small enterprises and consumers utilizing the cloud. I would say the real value comes down to the shared responsibility model first and foremost. If you have your own data center or your own services, you're responsible for everything: the building, the infrastructure, the networks, all of the data, all of these things. The big difference when you move to the cloud—and some of that comes down to the type of cloud or SaaS services, whatever the case may be—is the shared responsibility model, which means the platform, the cloud platform itself, is the accountability of the cloud service provider. So in essence that infrastructure work—patching, backups, recovery—doesn't completely go away, but it's one of those things that you don't necessarily have to think about.
The other part of that shared responsibility model: if you think about all of the different companies across the globe, some of those are highly regulated entities, and those regulations are going to differ depending on what country or even what region they're in. For customers to be able to adopt the cloud, Microsoft also has to have a very comprehensive compliance portfolio. Whether we're talking about GDPR or various different standards like NIST, for example, the underpinning platform first and foremost has to have all of those controls in place for you to take advantage of. There's a huge advantage right out of the box, I'd say, in terms of the inbuilt capability that's already there by standard and by default. The challenge, however, is that you have to take advantage of it. You're still accountable for who's accessing that data and what data you put into the cloud.
Carandang: We mentioned your new book in the introduction, Understand the Cyber Attacker Mindset, which dives right into global cyber crime. You've engaged with actual cyber criminals. What are some of the key takeaways from those engagements that you could share with the audience here?
Armstrong-Smith: What's interesting to me, and why I wrote it, is the focus on the human part of security. When we think about security, a lot of people think we're here to protect data and technology and servers in the cloud and all of these things, but actually, the data only has a value when we understand the repercussions of some of that data being in the wrong hands and how it could be misused or abused in various different ways. As we talked about at the beginning, there are a million and one ways in which I could potentially attack you, but there's only a finite set of reasons why I would want to, and why I'm motivated enough to want to do it. So I looked at the different types of threat actors. As I said, we've got some that are financially motivated, we've got activists, nation-state actors, and we've got malicious insiders as well. It's the same data, but in different hands, what is the impact of it? Then it's being able to work backwards and say, “OK, well, if someone was trying to sell this data, if someone was trying to use this data for espionage, if someone was trying to use it for other nefarious purposes, what do I need to do to protect it in all of those different hands?” That's really important: to understand the human motivation behind it and why they are willing to go to that extra degree to get their hands on that data. I think about it very simplistically, no matter what size of organization we're talking about, from the little ones up to the big enterprises, and I try to keep it quite simple. Our strategy in essence comes down to protecting the access in and the exit out. The access in is identity. As we're talking about privacy, it's identity in all its guises: the identity of humans and the identity of things—laptops, devices and various things like that. From the threat actor's perspective, I have to find a way into your network. I don't particularly care how I get in, whether I'm trying phishing emails, going directly to the source, or exploiting a vulnerability I've found in your network. I will find any which way into that network. The exit out then comes down to the data: what is it I'm trying to exfiltrate out of your company that's giving me that value in particular?
Carandang: Thank you, Sarah. That's fantastic. You mentioned scale earlier. With the amount of data, and attacks on data, growing exponentially day by day, I do wonder if it's time for some bold paradigm shifts. Do you see any of these shifts on the horizon? For example, can you imagine consumers starting to pay small fees for otherwise free services, so companies won't need to sell their data to third parties?
Armstrong-Smith: I think we're going to see that a little bit. People are starting to pay for subscription services where it's a highly tailored service: they don't get adverts, or the adverts they do get are more tailored. We are starting to see people who want an enriched service. But I think the challenge we have as well is that a lot of this technology, particularly when we're talking about social media, has been around for a very long time and it's been free for a very long time. Even when it is free—you've heard the comment that you are, in essence, the commodity—there's data, there's profiling, being sold to varying degrees across different companies, depending on how you're interacting with some of their services.
I think the interesting thing is, even when we've spoken about the size of some of the cyber attacks and data breaches, the fact that we've had these regulations, the fact that we've had record-breaking fines as a result of misuse or abuse of data and the selling of data in various different ways—has it actually stopped people from using these services? I would argue not. Maybe there's a handful of people who are a bit mindful of it. I think you'll get pockets of people who want a better service, and you could sell it as a better, enriched service in some way; maybe you'll have those kinds of people who might want to do that, but overall, I can't see it happening to a large extent.
Carandang: Got it. Thank you, Sarah. We've covered a lot today, so I wanted to ask your overall feelings on the next five years or so. Take us out to 2030. Tell us what you see. Are we in a better place? How well will we have done with this endeavor?
Armstrong-Smith: It's interesting, isn't it? We talked about GDPR and how long that's been around—we are more than five years since GDPR came into being, and other regulations around the world are all coming up to varying degrees. Has it made any difference? I'm not sure. Arguably, I think it's going to have to get much worse before it gets better, but I do think there is some positive coming as well. I would frame that with where we started, when we're talking about cybersecurity and what's the game changer. What we have seen is a willingness for more collaboration across big tech, but also across multiple different countries and jurisdictions. Particularly when we think about different actors moving data around, money laundering, people hiding in plain sight, it's really hard to bring a lot of these people to justice. What we have seen in the last couple of years is that willingness to collaborate, the willingness to share intelligence, and to really think about some of those core principles we've been talking about, coming back to those foundational levels: how do we have security and privacy by design, by default and as standard, so that nobody questions them as things that have to be added on? Are you doing it for the right reasons? It just is. So, as I said, there's going to be a lot more work. It's not going to be easy. I have a tiny bit of optimism that we can tip the balance, but I want to be realistic at the same time and not underestimate how much work is involved.
Carandang: That’s brilliant, Sarah. Thank you so much for your time and insight today. You've been very generous. Thank you for the great work you're doing more generally, and congratulations again on your book. Joe, back to you.
Kornik: Thank you for watching the VISION by Protiviti interview. On behalf of Roland and Sarah, I'm Joe Kornik. We'll see you next time.
Sarah Armstrong-Smith is Microsoft’s chief security advisor for EMEA and an independent board advisor on cybersecurity strategies. Sarah has led a long and impactful career guiding businesses through digital attacks and specializing in disaster recovery and crisis management. Sarah is the author of Understand the Cyber Attacker Mindset: Build a Strategic Security Programme to Counteract Threats. Prior to Microsoft, she was Group Head for Business Resilience & Crisis Management at The London Stock Exchange and Head of Continuity & Resilience, Enterprise & Cyber Security at Fujitsu.

Roland Carandang is a Managing Director in Protiviti’s London office and one of the firm’s global leaders for innovation, security and privacy. He leads a world-class consulting team focused on modernizing and protecting businesses where he helps clients understand, implement and operate technology-based capabilities and takes pride in helping clients navigate an increasingly complex world. He collaborates across the Protiviti and Robert Half enterprise to ensure we are solving the right problems in the right way.

Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
From bureaucratic performance to the common good: The challenge of Public Value in Italy
The soccer field fable: A lesson in misaligned priorities
Once upon a time, there was a mayor of a town of some 5,000 residents who prided himself on making the most of public funding. One key goal of his administration was to build five soccer fields in five years, with the ultimate goal of increasing the level of sports participation and the health of the town’s citizens. That well-meaning mayor, thanks to a well-functioning organization and efficient, motivated employees, was able to fulfill the political goal. As a result, the 5,000 residents, who happened to average 80 years of age, had five beautiful new soccer fields, delivered on time and on budget. Unfortunately, running and playing soccer was not the way the elderly residents of the town were looking to get their exercise. From their perspective, the soccer fields were a well-intentioned but ultimately faulty endeavor.
Breaking free from empty indicators
For decades, both global and Italian bureaucracies have dragged businesses and citizens through a complex labyrinth of public projects focused on quantitative measures—how efficiently public funding was utilized (the input) and how much was accomplished and how quickly (the output)—rather than on what public benefit was delivered (the social and health benefit, in the case of the soccer fields).
These indicators for success (‘done/not done’ and ‘how much was done and in how much time’) have given rise to a new kind of bureaucracy where “performance for performance’s sake” is the norm and where the true impacts on citizen well-being are often overlooked or, at best, incidental. In fact, research published in 2020 and 2021 in the Italian journals RIREA and Management Control shows that just 13% of the 3,798 indicators used by the Italian ministries were impact indicators. Exceedingly rare were the cases of co-planning and co-reporting of impacts; research published in 2020 in the Italian journal Azienda Pubblica shows a "heat map" with few cases of co-planning between ministries on the same topic.
Against this bleak backdrop, one of the few existing studies of the impacts actually created is CERVAP’s research on the Public Value generated by the 14 Italian metropolitan cities. The study ranked the Public Value created by those cities through a Public Value Index ranging from 0 to 100.
Milan (between 68 and 70 on the Public Value Index) and Bologna (between 66 and 68) were the cities that generated the most well-being, with Milan's leadership being focused on economic impact, while Bologna’s leadership was keyed into social impacts.
Unfortunately, the southern cities didn’t fare so well, highlighting the fact that Public Value creation in the region is still a cultural and civic battle. These cities include Catania (between 30 and 32 on the index) and Naples, Palermo, Reggio Calabria and Messina, all between 32 and 39.
This context was also enabled by disjointed planning tools and programmatic fragmentation. In Italy, before 2021, public administrations (PAs) typically operated in silos with as many as ten different planning instruments per administration, resulting in overlapping projects and redundant efforts.
It wasn't until 2022 that Italian PAs began adopting integrated planning methods. For example, the Integrated Plan of Activities and Organization (PIAO) was created with an infusion of funds from the European Union as part of a public administration reform spurred by post-COVID-19 recovery efforts. With it came the first legislative definition of Public Value: the multidimensional (social, economic, environmental) level of well-being created by a public administration in relation to its citizens and businesses.
As a single planning tool, PIAO aligns resources with performance management and risk mitigation, but more importantly, it creates measurable Public Value by focusing on the comprehensive impact of public projects from the public’s perspective, rather than isolated outputs from the PA’s perspective.
The Public Value pyramid: A new framework for success
But the PIAO also raises some practical questions where the proverbial rubber meets the road: how do PAs systematically and consistently combine resourcing, risk management, performance and other administrative factors to achieve a measurable impact on well-being? The methodological framework of the "Public Value Pyramid" answers these questions. It integrates various administrative functions—from resource management at its base up through performance and risk management—allowing policymakers and managers to govern the enabling, protecting and creating of Public Value holistically.
The Pyramid operates on a principle of progressive value generation and measurement, beginning at the base level and moving upwards through intermediate programming levels that either create or protect Public Value.
- The base of the pyramid addresses how to enable Public Value. Public Value creation and protection are enabled by planning actions that are preparatory and functional to improving the quantity and quality of diverse types of PA resources.
- The intermediate levels of the pyramid address the issues of how to create Public Value and how to protect Public Value. The intermediate levels should be planned and measured in an integrated way so that the specific performance objectives, such as a funding call for businesses, are protected with specific risk measures.
- At the top of the pyramid, we find impacts and Public Value, which serve as the horizon of the entire programmatic architecture and address the questions of “what and how many impacts?” and, finally, “how much Public Value?” More precisely, at the top level of the pyramid we find the analytical, or one-dimensional, impacts; the average external impact; and, ultimately, the average value between impacts, performance and health, as sketched below.
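To make that top-level aggregation concrete, here is a minimal sketch in Python of how one-dimensional impact scores might roll up into a composite Public Value index on a 0–100 scale. The dimension names, the scores and the simple unweighted averaging are illustrative assumptions for this article, not CERVAP’s actual methodology:

```python
from statistics import mean

# Hypothetical 0-100 impact scores for one public administration.
# The dimensions and values are illustrative, not CERVAP's real inputs.
impacts = {"social": 62, "economic": 71, "environmental": 58}

# Average external impact across the one-dimensional impacts.
average_external_impact = mean(impacts.values())

performance = 74  # e.g., objectives delivered on time and on budget
health = 66       # e.g., organizational and financial health of the PA

# Top of the pyramid: the average value between impacts,
# performance and health, read here as a simple mean.
public_value_index = mean([average_external_impact, performance, health])

print(f"Average external impact: {average_external_impact:.1f}")
print(f"Public Value Index: {public_value_index:.1f}")
```

A real index would weight the dimensions and normalize raw indicators before averaging; the sketch only shows the pyramid’s bottom-up flow from one-dimensional impacts to a single Public Value figure.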
The pyramid also emphasizes the crucial role of public managers, whose individual performances are measured based on their contribution to organizational success and risk management. This methodological framework enables PAs to plan by aligning administrative health, risk reduction and performance improvements, promoting holistic Public Value aimed at enhancing citizen well-being.
Engaging the next generation for the “Public Value generation”
The soccer field fable told at the opening of the article warns us against the risk of self-referentiality in defining what Public Value is. Public Value should be observed through the eyes of citizens and businesses; it should be communicated in their own words; but most importantly, it should be enabled, protected and co-created with them.
The concept of Public Value must be extended to distinct categories of stakeholders, and it must preserve the possibility of improving the well-being of future generations. It is therefore important to engage with the new generations to create awareness and proper appreciation of Public Value. This is a particularly vital move as Italy prepares for nearly a third of its civil service workforce to retire by 2032. Compounding this demographic problem is the fact that young people overwhelmingly gravitate to the private sector as they enter the workforce.
Research conducted at Italian universities is trying to understand what would motivate young people to enter the public sector. When asked: “What would incentivize you to go and work in PA?” university students ranked “contribution to the creation of Public Value,” “clear career prospects” and “higher salaries” as their top three choices. Clearly, young people are looking for meaning in the work they will do and the sense of the common good that is embedded in the concept of Public Value. This is great news! Public Value is key to building a better future in Italy, as well as other countries around the world.
Every country walks at the speed of its public administrations. To encourage PAs to walk faster, we need to attract the best resources Italy has—young people—and actively involve them in innovative and shared Public Value projects.
Embracing Public Value isn’t merely about adopting new methodologies; it’s about changing perspectives—viewing policies through citizens’ eyes and measuring success not just by efficiency or output but by tangible improvements in people’s lives. As other countries observe Italy’s journey from bureaucratic chaos to a systematic approach highlighting Public Value, they too might find inspiration to pursue similar paths for building better futures for their citizens.
Former CISO on what boards are getting wrong about data protection and privacy
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with Sue Bergamo. Bergamo is an executive advisor, former CIO, CISO, and Global Technology Strategist for Microsoft. She sits on several boards, is host of the Short Takes podcast and author of So You Want to Be a CISO: A Practical Guide to Becoming a Successful Cybersecurity Leader. Here, Bergamo discusses recent SEC rulings and their impacts on the current and future state of the CISO role, how the C-suite and boards view data governance and privacy, and what steps they should be taking right now to build customer trust.
In this interview:
0:57 – The CISO role in a state of flux
4:20 – The effect of the SEC’s cyber disclosure rule
7:39 – Is there a playbook for privacy?
10:20 – Will companies get it right for their customers?
Former CISO on what boards are getting wrong about data protection and privacy
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m thrilled to welcome Sue Bergamo to the program. Sue is an executive advisor; former CIO, CISO, and global technology strategist for Microsoft. She sits on several boards, is host of the Short Takes podcast, and author of “So You Want to Be a CISO: A Practical Guide to Becoming a Successful Cybersecurity Leader.” Sue, thank you so much for joining me today.
Sue Bergamo: Thank you for having me. It’s a pleasure to be here.
Kornik: First off, Sue, let’s talk about the state of the CISO. As you point out in your book, which I mentioned in the intro, “So You Want to Be a CISO,” the position is really in a state of flux right now. So, talk to me a little bit about where the CISO is right now and how it’s changing, and whether you think it will continue to be a critical part of the executive team going forward.
Bergamo: I like to use the term evolution, because we’re in a position that I hope will evolve to a better state in the future. Just like the CIO role about 20 years ago, it has to go through some ebbs and flows before it finally comes out the end of the tunnel in a much better spot. Everyone eventually became very much aware of what the CIO needed and wanted to do, which was really around the back-office applications and infrastructure that run our corporations.
The CISO role is going through that evolution and unfortunately, right now, it’s in a really ugly spot. I’m hopeful that it will come out a little bit better. What’s going on in the industry is the SEC’s cyber disclosure rule that came into effect late last year, which basically said the CISO does not need to report to the board, but the board and the executive team need to be aware of cyber incidents. What ended up happening with that—and I can elaborate on the two CISOs who were charged with felonies for material breaches that happened in the past—is that, in my opinion, based on what I see and what I know, executive teams decided that CISOs weren’t really needed. A lot of the CISOs said, “We’re not going to put up with these personal liabilities.” So, a lot of us left our positions, and then there were a whole bunch of us who lost their jobs because the SEC’s cyber disclosure rule talked about awareness. It didn’t put the CISO on the board, but it talked about awareness of incidents.
So, what has transpired—and I don’t mean this with any disrespect to SecOps managers—is that SecOps managers, the security operations people, who are inexperienced from a CISO perspective, are being put into the role of head of security. Sometimes CISO, but mostly head of security, because they deal with incident response. Now, the dirty little secret in most organizations is that when an incident occurs, the SecOps manager has a major role in defending against the breach, but they’re really there to tell the CISO where the threat is coming from. They are not there to lead the band. They’re only there for a very specific focus. So, I see this convergence of inexperienced people and cyber criminals, and we’ll see what the future brings, but I do hope that when this evolution comes to fruition the CISO will be put into a much better position, a much better light, with the executive team.
Kornik: You mentioned those SEC decisions and regulations. I don’t know if you want to expand on that at all or talk more specifically about where CISOs find themselves between a rock and a hard place right now.
Bergamo: Yes. There are really three types of CISOs. There’s the very inexperienced one who’s just coming into the role, not really sure what they’re doing. Again, it’s not a dig. They have to learn, and they’re going to learn the hard way. There’s the middle-of-the-road, as I call it. They’re more experienced than the inexperienced ones, but they’re still trying to find their spot in the position. Then there are the experts, who are exiting. A lot of CISOs on the inexperienced and middle-of-the-road side believe that our jobs are really about the technology, and that is so far from the truth. The experienced ones know that we follow something called the triad: confidentiality, integrity, and availability. We accomplish the triad through people, process, and technology. People obviously are employees, process is security frameworks and controls, and then the tech. Once you get the tech up and running and optimized for efficiency, so it’s giving you the data that you need in order to defend your company, the tech is the easy part. It changes all the time, but that’s the easy part. It’s the compliance frameworks that take the majority of our time, and if you ask any experienced CISO, they’ll tell you: once the tech is installed and optimized, we spend the majority of our time on compliance and data privacy. The newbies, as I refer to them, sometimes we have to explain this to them and explain why compliance and data privacy are so important.
So, it’s a little bit of a mess out there right now, and then you throw in the personal liability. Let me just expand upon that for a moment. We had two well-known CISOs at two very public companies—I won’t mention their names—charged with felonies through the SEC, which led to the cybersecurity disclosure rule being implemented after the first one. The second one fell under that disclosure rule. That sent shockwaves—not just waves, but shockwaves—through the CISO industry, and we’re just sitting here saying to ourselves, “Holy cow.” A lot of us don’t have a lot of support, because everybody thinks cyber is our problem and not theirs. It takes a village to defend a company against cyber attackers. Now we’re being held personally responsible, with felony charges and potential jail time, so we’re all saying to ourselves, “I don’t think so,” which is why there’s a huge influx of us getting out of the role.
Kornik: Right. So, let’s talk a little bit more about the strategic role of the CISO and where that falls in the organization. Let’s talk specifically about data governance and protecting privacy. How do the companies that do it best manage it? In your experience, do they have chief privacy officers or chief data officers? Is there a playbook that business leaders should be following to really make sure that they’re getting this right?
Bergamo: I wish there was a playbook, but there isn’t. I think that’s half of the battle, because everyone has a cellphone or a computer, and everyone feels that they know technology and they know data. This is a very specialized field. The CIO—I’ll just say tech and security—it’s a very specialized field. I’ve been fortunate enough to have both roles, and yes, everyone always has an opinion on how we can do our jobs better, but this is our craft, and we have all kinds of different education and certifications. There’s no one thing that anyone can point to and no one game plan. But good C-level tech and security executives are well-rounded. We study. We research. We get involved, and we understand how to protect data. Now that AI is coming out, we have a whole new set of technologies that we need to encompass into our programs. So, it’s about staying involved and understanding what we need to do to protect the data.
Kornik: When you’re in those conversations with the C-Suite, the boards, and the business leaders, do you think they understand the importance, not just the compliance and the governance aspect of this, but maybe the business importance of data privacy and what that means ultimately to building customer trust in the business and the bottom line?
Bergamo: I do think that everyone understands that data matters and that data is important to running the business. I mean, every business needs information in order to make good decisions and to process customer requests, B2B requests, employee requests. It’s all data driven and so is it given enough limelight? It depends on the size of the company. I do think that the executives and the boards understand the importance of data and how to use it, but I think where they fall short is really in the investments of strategizing and securing the information and giving even the technology or the engineering teams what they need to make sure that that data is sound.
Kornik: Right, and that’s an interesting perspective I would say from the company side. How confident are you that we’ll get this right for the customer, the client, the consumer? Are you optimistic that they’ll be better off over the next several years?
Bergamo: I’m always optimistic. The sun’s always shining in my world, right? Data is the stronghold of every company, from managing the most minute piece of information all the way to executive reporting. Everybody’s processing information. So, I think with some of the technologies that are coming out, either through public cloud vendors or through artificial intelligence, the data and the ability to gather data are just going to be better in the future.
Kornik: Well, Sue, you said you’re an optimist. So, I’m going to leave you with this final question where I ask you to look out a few years. Maybe the end of the decade, let’s say 2030. Where do you think will be in terms of privacy, data privacy? Do you think 2030 is a better place than where we are currently?
Bergamo: Well, we can only get better with time, right? Kind of like a fine wine. So, I’m optimistic that material breaches will continue to happen fast and furiously and finally, our business brothers and sisters will wake up and say, “Oh, I need to be responsible for security too. I need to be responsible to help the CISO or the CIO, or whoever, with my data problems. Maybe I should get more involved.” So, I am optimistic that eventually the tables will turn. I think it’s going to take a little bit more time but 2030, sure, I’ll go with that.
Kornik: Great. Well, thanks so much for the time today, Sue, and the insights. I really enjoyed our conversation.
Bergamo: Thank you, Joe. I appreciate you having me.
Kornik: And thank you for watching the VISION by Protiviti interview. On behalf of Sue Bergamo, I’m Joe Kornik. We’ll see you next time.
Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.
The Economist’s Dexter Thillien: Privacy in peril amid digital data explosion
In this VISION by Protiviti interview, Joe Kornik, Editor-in-Chief of VISION by Protiviti, sits down with The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. Dexter is the lead author of numerous reports on AI, cybersecurity, data privacy, technology and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. Here, he discusses how privacy is in peril in the digital economy, the impact of emerging technologies on data protection, regulation vs. innovation, and how the private sector will play a significant role in data privacy in the future.
In this interview:
1:11 – Biggest privacy issues for consumers and companies
3:18 – Emerging tech’s effects on privacy
5:42 – What type of regulation is needed?
7:49 – Who’s taking this seriously?
11:01 – Privacy in 2030
The Economist’s Dexter Thillien: Privacy in peril amid digital data explosion
Joe Kornik: Welcome to the VISION by Protiviti Interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of privacy, and I’m happy to be joined by The Economist’s Dexter Thillien. Dexter is the lead analyst for technology and data at The Economist Intelligence Unit, the research arm of The Economist. Dexter is the lead author of numerous reports on AI, cyber security, data privacy, technology, and regulation, as well as a frequent speaker on the intersection of the digital economy and global business. We spoke to Dexter last year about privacy in the metaverse and he has been kind enough to come back. Dexter, thank you so much for joining me today.
Dexter Thillien: Great to be here.
Kornik: Dexter, in a digital economy I don’t think there’s any question that data privacy is now and probably will continue to be one of the biggest issues facing consumers and companies, the rest of this decade and probably much further into the future, actually. What do you see as the biggest threats in terms of data privacy for both customers and companies?
Thillien: Yes, thank you for the question. First of all, you’re right to differentiate between companies and consumers, because I think they will have different issues to deal with. Companies are holding more and more personal data as part of their business processes, and the key is to make sure that the data is secured. That means putting the right governance system in place internally so that, for instance, only the right people can access sensitive data. It also means making sure that the company’s own data, and any data from suppliers or consumers, is being dealt with properly. That’s going to become more and more of an issue because there’s going to be more and more data to deal with. For some companies, it might even become a competitive advantage. We’ve seen Apple trying to do that in terms of its privacy policy compared to its competitors.
For consumers, it’s a different question, a different range of issues. For consumers, it’s important to keep data safe and secure, but it is also becoming increasingly difficult, because we’re giving away much more personal data at any one time. Giving away personal data is no longer just about filling in a form; it happens any time we see something online or do something online, and also any time we’re on the move, because most of us have a smartphone, and many of the apps on our smartphones also collect a huge amount of data. We may become a bit blasé about all the data we’re giving away, but it’s also very difficult to operate and use the internet if we’re trying to minimize the amount of data we give away. Meta, as an example, tried to build a more private platform by charging users and making privacy a premium feature, but so far this has been refused by the European Commission in the European Union. The issue is that advertising remains the cornerstone of Meta’s business, which means that it is free as long as we give away much of our personal data. With the addition of pictures and videos on top of text, and also facial recognition entering the fray, we’re starting to give away data which is even more unique and much more difficult to replace if it is ever hacked.
Kornik: I’ve been reading a lot of The EIU’s position papers on AI and really all emerging technologies, which include quantum, spatial and biometric computing, and how those will ultimately impact data and privacy. How do you see AI and those other emerging technologies impacting privacy going forward?
Thillien: I think they will all have an impact. Starting with artificial intelligence: artificial intelligence is all about data, to the point that some are arguing we may run out of web data as early as 2026, on top of so much of the personal data we have already given away. One of the major issues with artificial intelligence is the output of a model, as it may give away some personal data as part of an answer, because that personal data is part of the input. Sometimes it’s very, very difficult to understand why it does that. We have seen some cases in Europe where this has happened, and privacy regulators are keeping track. There is also a consent issue, which is why Meta says it will not release its most advanced Llama model in Europe: the company is not entirely sure it can comply with the GDPR in terms of using pictures, videos and content other than text.
In terms of quantum computing, we saw in August the National Institute of Standards and Technology in the U.S. releasing its first post-quantum encryption standards, over fears that a quantum computer might break the current encryption standards sooner rather than later. It’s still not very clear when that will happen, and we at The EIU do not think it’s going to happen any time soon, at least in the short to medium term, because many, many technical hurdles remain. But it’s better to be safe, especially as some of the data encrypted today will still be very, very valuable when and if a quantum computer becomes operational.
When we’re talking about biometric data and biometric computing, it raises the question of what type of data we might be sharing. While it is possible to change an email address or even your financial or other details, it is impossible to change your fingerprint or your DNA. We need to be mindful of what we share and make sure that this kind of data does not fall into the wrong hands.
Kornik: Right. Thanks, Dexter. You mentioned GDPR. So, let me just follow up on that. Globally, Europe has invested significantly in data protection rules with GDPR. Japan has had very strict privacy laws in place as well. Meanwhile, India and China not so much. U.S. is somewhere in the middle, but has no federal regulation. A lot of the states have sort of taken the lead on that front. Where is the sweet spot? Who is getting this right? Does too much regulation stifle innovation or does not enough create chaos? Where do you stand on regulation?
Thillien: I think finding the sweet spot between regulation and innovation is what every policymaker, every regulator, is trying to do. It’s a problem, or an issue, for all tech regulation, not just data privacy. Sometimes regulation becomes more of a box-ticking exercise, and we have seen that with cookies in Europe: it has had no real impact on privacy because, for instance, active consent will not be fully given. I do think we need some level of regulation, because without it any protection will be lacking, and there need to be independent rules put in place.
I think for me there are two main things to consider when we’re talking about regulation. The first is fragmentation. Many, many businesses are global in nature, whereas regulation very often is not. This means making a decision as to what to do: whether to follow each jurisdiction where compliance is required, accepting some overlap, or to go with the strictest rule and have only one set of rules to comply with globally. Some companies have already done that in terms of the GDPR.
The second, and probably the most important one, is enforcement. Rules are very nice, but without the right enforcement they can be a bit pointless. We’ve seen, again with the GDPR, that it can take quite a long time before any rulings or judgments, because it can be very, very tough for regulators to make the case. It’s very important to know at what level you can enforce before you start thinking of regulation.
Kornik: Barely a week goes by without hearing about another significant breach, right? I just wonder if consumers specifically are becoming desensitized to these breaches. Do we suffer from low expectations when it comes to our own privacy?
Thillien: I don’t know if we are desensitized, but the impact is not always visible to the average user. We often hear about an attack where millions, if not billions, of records have been hacked, but the impact of that attack is very difficult to gauge because, in most cases, it just means we’re going to receive a bit more spam email. It becomes much more personal when an attack has financial repercussions, meaning we can be scammed, or our payment details become available and people buy things online with our money. I also want to make the point that companies can try to do as much as they can, and many, many do, but the attacker, in this case the hacker, is much more favored than the defender: the attacker needs to get it right only once, while the defender has to get it right all the time. Now, as we’re spending more and more time online, the attack surface is only increasing, which means those breaches will keep happening.
Kornik: Right. We’ve seen big companies, I mean very big tech companies, playing sort of fast and loose with data and privacy. Even children’s privacy, we’ve heard, has been in the news recently. We were talking about regulations, so I’m just curious: Can we trust the private sector to do what’s right in terms of privacy? Are boards and the C-suite taking this issue seriously enough, do you think?
Thillien: Some are, but I still don’t think that self-regulation is the answer. While I mentioned the GDPR might not be as well enforced as it should be, it still offers EU citizens much more protection than many other jurisdictions across the world. You mentioned that the U.S. still doesn’t have federal rules; it is trying to remedy that where children are concerned, but the legislation needs to get through Congress, which is very difficult as well. The U.S. also has a much bigger, what I would call, third-party market, where the data you might have happily given to a provider or a retailer is then sold on to a third party without your knowledge, and perhaps you wouldn’t want that particular third-party company to have access to your data. Companies do have to take privacy seriously, because it can damage their reputation if they are hacked and it is proven they haven’t done as much as they could have. With the greater penetration of technology in the workplace and the move toward digital information, good data protection can also become a phenomenal advantage for business continuity. The CrowdStrike incident in July 2024 showed how reliant we’ve become on digital technology and how important it is to protect those systems. Whether privacy can become a competitive advantage is very, very difficult to say, because privacy is one of those areas where doing things right, so that nothing happens, has a limited visible impact, while not doing things right when something does happen can have a major negative impact. So, the upside is not very apparent, but you do need to do as much as possible to make sure that nothing actually happens.
Kornik: Dexter, I appreciate your time today. I really enjoyed our conversation. I just have one more question, and it’s forward-looking. I’m wondering if you could take me out to the end of the decade, let’s say 2030, and tell me what you see around data and privacy. How will we view privacy in 2030?
Thillien: I think, for me, it’s an evolving concept because the online world has become so prevalent, but the right to privacy is also a fundamental human right, whether online or offline. It is part of Article 12 of the Universal Declaration of Human Rights, which I’m going to quote: “No one should be subjected to arbitrary interference with their privacy, family, home or correspondence, nor to attacks upon their honor or reputation. Everybody has the right to the protection of the law against such interference or attacks.” I think this is the case in both the online world and the offline world. I’ll give you a personal example. I graduated from school in the very late 20th century, and I don’t think I used the internet at all for any of my coursework during high school. When you consider that the iPhone launched in 2007, 17 years ago, and Facebook in 2004, 20 years ago, it shows that many, many younger people are now what we call fully digital natives and are going to have maybe a different perspective. What I find interesting are some stories I’ve seen over the last few months and years of kids, of younger people, telling their parents not to upload pictures of them online. It made me think about the concept of what I might call the online privacy native, where maybe the younger generation is less keen to share publicly than the previous one. I think we’ll have to wait and see what actually happens going forward.
Kornik: Yes, that’s interesting. I hadn’t really thought about that, but you’re right. That generation does seem more conscientious about sharing photos.
Thillien: I think they might have a different perspective when it comes to their online persona versus their offline self, and what they share online. It may not change their generation’s vision of privacy more broadly, but it changes what they do online, because they are fully digital natives and they are online a lot of the time. Everybody is going to have a smartphone; that’s not going to change. We’re still going to use the internet, and we’re still going to share some data. But the younger generation, which has only known that world, will probably have a different perspective on what they’re willing to share, especially on what they’re not willing to share and on what they might get in return for sharing. I think it’s very early days and we’ll have to wait and see.
Kornik: Right. Very interesting. Thanks, Dexter, for that perspective and your insights. I really enjoyed our conversation today.
Thillien: Thank you very much for having me.
Kornik: Thank you for joining the VISION by Protiviti interview. On behalf of Dexter Thillien, I’m Joe Kornik. We’ll see you next time.
Data by default: How AI radically changes the data privacy landscape
“Data is a precious thing and will last longer than the systems themselves.”
- Tim Berners-Lee, widely credited as the inventor of the World Wide Web
It’s been almost twenty years since Berners-Lee uttered those words and they are truer, and perhaps even a little more ominous, today than they were then. The advent of artificial intelligence (AI) makes data even more valuable, and thus raises the issue of data privacy and data ownership to a new level of importance, complexity and controversy.
AI can be fed people’s private data from different sources — not just online and offline databases but also content that users upload to the Web, sensor data from the Internet of Things, and even the digital footprint users leave behind as they use their digital devices. Protecting the rights that individuals have as to what information is collected, where it is stored, who can use it, and for what purpose has always been difficult. And in the future, AI will make that exceedingly complex.
Data and digital footprints
We entered the era of “data collection by default” some time ago, argue Stanford University’s Jennifer King and Caroline Meinhardt in a recent comprehensive analysis. There are two potential ways to address this issue. The mostly unworkable one at this point is to move from an opt-out to an opt-in system. The issue with that is not just regulating and ensuring compliance, but also what to do about information collected in the past. And companies can still encourage users to opt in through special offers and other incentives, and then use the data for purposes that were not anticipated.
A second solution is to develop applications that prevent third parties from collecting activity data in the first place, such as enabling the opt-out option when we download an app to our smartphones. But this only applies to activity data, not to the data the user supplies while searching or transacting once the app is installed.
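To make the opt-in mechanics concrete, here is a minimal sketch of “no collection without consent” gating; the names in it (ConsentStore, record_event, the “analytics” purpose) are hypothetical, chosen only to illustrate the default-off behavior an opt-in system requires.

```python
# Minimal sketch of opt-in gating: activity data is dropped unless the
# user has explicitly consented to that purpose. All names hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConsentStore:
    flags: dict = field(default_factory=dict)

    def has_opted_in(self, user_id: str, purpose: str) -> bool:
        # Opt-in model: the absence of a flag means "no".
        return self.flags.get((user_id, purpose), False)

    def grant(self, user_id: str, purpose: str) -> None:
        self.flags[(user_id, purpose)] = True

consent = ConsentStore()

def record_event(user_id: str, event: dict) -> None:
    if not consent.has_opted_in(user_id, "analytics"):
        return  # default is to drop the event, not collect it
    print("collected:", event)

record_event("u1", {"page": "/home"})  # dropped: no consent yet
consent.grant("u1", "analytics")
record_event("u1", {"page": "/home"})  # collected after explicit opt-in
```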
Moreover, efforts at controlling information at the point of collection are undermined by Web crawlers and Web scrapers, which can automatically locate, classify, download and extract vast amounts of data, images and other types of material from the internet writ large. In principle, they can only access public, readily viewable material. But in practice, Web crawlers can jump over paywalls by disguising themselves as users and can use pirated content that has been stored somewhere other than the original location.
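The voluntary nature of these controls is easy to see in code. The sketch below, using only Python’s standard library, checks a site’s robots.txt the way a well-behaved crawler would; nothing in the protocol forces a scraper to run this check or to identify itself honestly, which is precisely the loophole described above. The crawler name and URLs are placeholders.

```python
# How a polite crawler consults robots.txt (Python standard library only).
# Compliance is advisory: a scraper can skip this check entirely or send
# a browser-like User-Agent string instead of a bot name.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawling rules

url = "https://example.com/archive/private-page.html"
# A well-behaved bot asks before fetching...
print(rp.can_fetch("ExampleResearchBot", url))
# ...but the answer only binds crawlers that choose to respect it.
```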
In addition, data often get misplaced, breached, leaked or otherwise mishandled, making them an easy target for AI Web crawlers. Thus, the issue goes well beyond the traditional approaches of offering assurances about confidentiality or non-disclosure and establishing opt-in or opt-out mechanisms.
The AI data supply chain
Given the difficulties involved in addressing privacy issues at the point of data collection, options at a later stage of the data supply chain must be considered. The broadest measure would be to ask companies to disclose basic information about the data they feed into AI, indicating the sources, scope and scale. This enables, for example, checking if there is copyright infringement.
efforts at controlling information at the point of collection are undermined by Web crawlers and Web scrapers, which can automatically locate, classify, download and extract vast amounts of data, images and other types of material from the internet writ large.
However, no such requirement or regulation exists, and very few companies do it voluntarily. Some companies are now offering users an opt-out option so that their data and images are not used for AI training. Amazon’s AWS, Google’s Gemini, and OpenAI offer such options, but they are often cumbersome to activate and not totally fool-proof.
The supply chain ends with outputs, which in the case of AI include applications and predictions. Individuals need to be protected if their data are unwittingly disclosed at that point. At the societal level, the thorny problem with using training data from the web is that, even if all permissions and legal requirements were met, there is the issue of “bias in, bias out,” in the sense that data on the web are not representative of society or of the world. Some users, companies and countries are more prone to uploading material or leaving behind a digital footprint. Such a biased body of data then becomes the raw material for AI applications.
AI never forgets…
The truth is that individuals, unfortunately, have very few options at their disposal to prevent the misuse or unauthorized use of their data. It is also exceedingly difficult to compel companies to delete data at the user’s request, even when that is mandated by law. More alarmingly, there is no good way of making an AI application “forget” or “unlearn” what it has unlawfully learned. And the more time passes without corrective action, the harder and costlier these problems become.
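Researchers are exploring “machine unlearning” to close exactly this gap. One proposed direction, sharded training (for example, the SISA approach of Bourtoule et al.), bounds the cost of forgetting: the training data is partitioned, one sub-model is trained per shard, and deleting a record forces a retrain of only the shard that contained it. The toy sketch below illustrates the idea with a stand-in “model” (a per-shard average); every name in it is illustrative, not a real unlearning API.

```python
# Toy sketch of shard-based unlearning in the spirit of SISA: deleting a
# record triggers retraining of exactly one shard, not the whole model.
SHARDS = 4

def shard_of(record_id: int) -> int:
    return record_id % SHARDS

def train_shard(values: list) -> float:
    # Stand-in for real training: a per-shard mean, for illustration only.
    return sum(values) / len(values) if values else 0.0

data = {i: float(i) for i in range(12)}  # record_id -> training value
shards = {s: [v for i, v in data.items() if shard_of(i) == s]
          for s in range(SHARDS)}
models = {s: train_shard(vals) for s, vals in shards.items()}

def forget(record_id: int) -> None:
    s = shard_of(record_id)
    del data[record_id]  # remove the record from the training set
    shards[s] = [v for i, v in data.items() if shard_of(i) == s]
    models[s] = train_shard(shards[s])  # retrain only the affected shard

forget(7)  # only shard 3 is retrained; the other shards are untouched
print(models)
```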
As is often the case with emerging technologies, regulation is lagging. And, not surprisingly, there is much debate as to the amount of regulatory oversight that is necessary, warranted or desirable. Adding to the complexity, digital data are global while regulation is local.
According to the United Nations, 137 out of 194 countries have passed data protection and privacy legislation with various levels of safeguards. The Web is a global medium, but it is subject to a mosaic of regulations at the supranational (the European Union), national and subnational levels (i.e., state by state, as in the United States). Most importantly, regulations aimed at the Web or AI sometimes collide with those in other areas, like national security. The European Union has complained about American intelligence agencies’ use of private data of EU citizens and residents without their approval. This issue is complicated by the fact that large and small American digital platforms routinely send user data to the U.S. The U.S.-EU Data Privacy Framework, signed in July 2023, regulates the circumstances under which the U.S. can gather information and how European citizens can appeal.
Unintended consequences
From the standpoint of the companies managing digital platforms, the regulatory context could not be more complex. They need to comply with regulations implemented by the country where they are based but also by the laws of the countries in which they collect data from their users. In addition, cross-border data flows may also be regulated: This represents a major obstacle for new startups aiming at international growth, while offering a built-in advantage to more established companies that have the resources to either comply or to deal with the potential litigation if they do not comply.
The truth is that individuals, unfortunately, have very few options at their disposal to prevent the misuse or unauthorized use of their data. It is also exceedingly difficult to compel companies to delete data at the user’s request even if it is mandated by law.
The future of personal data protection and privacy remains uncertain. And yet, companies need to make operational decisions today that may be legally questionable in the future. Companies, especially those engaged in large-scale AI efforts, will continue to amass data and to use it to advance their goals, even at the risk of being found non-compliant.
Another unintended consequence stems from applying new regulations to both tech companies whose core business involves data collection and manipulation, especially those engaged in AI, and those in other industries, which gather and process data in support of selling other products or services. On the one hand, the concern is that complying with regulations designed to prevent the worst potential harms might constrain the ability of such companies to compete. But on the other hand, many companies whose core business is not AI are also developing, or at least using, AI applications. Thus, the default for politicians and regulators is to make all companies comply.
Requirements, standards and scorecards
It is not clear yet if different jurisdictions around the world will treat all companies the same way, or have less onerous requirements for small firms, and for data that are not deemed “sensitive” (sensitive data includes but is not limited to financial, health, biometric and genetic information), as in the proposed American Privacy Rights Act of 2024.
Board directors and business leaders need to stay hyper-informed in a rapidly evolving landscape. There are many proposals on the table in terms of legislative initiatives, but no comprehensive federal regulation in the U.S. yet, let alone a global set of standards other than the decades-old principles of information minimization and information specificity.
Eventually, companies will be asked to create data privacy scorecards, so they should keep track of and meticulously document all practices and procedures. In the meantime, they need to exercise sound privacy practices to avoid bad publicity, public-relations problems and a loss of customer trust over data mismanagement and hacking.
Board directors and business leaders need to stay hyper-informed in a rapidly evolving landscape. There are many proposals on the table in terms of legislative initiatives, but no comprehensive federal regulation in the U.S. yet, let alone a global set of standards.
CPO or no? Protiviti’s Tom Moore on the evolution of the privacy role and its uncertain future
The Information Age created an information explosion that shows no signs of slowing down. In fact, the growth of information, including digital data, is accelerating exponentially. With that flood of new information come concerns for the C-suite about how to manage it safely and securely.
While some companies expanded the role of their existing technology leaders to deal with this challenge, others opted to expand their executive teams, and the role of the chief privacy officer (CPO) was born. According to the International Association of Privacy Professionals (IAPP), Jennifer Barrett Glasgow of Acxiom Corporation was the first CPO; she began her oversight of privacy at Acxiom in 1991. Barrett Glasgow’s job description then was, for sure, radically different from the role of today’s CPO, which continues to evolve as quickly as new information becomes available and privacy laws and regulations proliferate. Many privacy leaders in organizations have taken on new titles and enhanced responsibilities in the areas of AI, trust and ethics, and data governance.
Regulation and legislation
Back in 1991, very few privacy regulations existed globally. Since then, in the U.S., states have stepped up to fill a regulatory void left by the federal government. More than two-thirds have addressed privacy regulation: 18 states have laws on the books, eight have active bills pending, and 10 have bills working their way through their respective legislatures. Meanwhile, in April 2024, U.S. lawmakers announced the American Privacy Rights Act, bipartisan draft legislation that seeks to create a national standard for data privacy and security, addressing the unregulated sale of online data and aiming to ensure individuals’ right to control their personal information. Although it has little chance of passing before the presidential election in November, lawmakers are optimistic it could serve as a framework for legislation in 2025.
Globally, there has also been a dramatic increase in the number of privacy regulations. The General Data Protection Regulation (GDPR) from the European Union, in force since 2018, created one of the largest shifts in how information is managed within organizations. Every year, more countries, including Japan, Singapore and South Korea, introduce new privacy regulations. According to the IAPP, as of March 2024, 70% of nations and 79% of the world’s population are covered by some form of data privacy law.
As privacy regulations continue to expand rapidly, business leaders continue to question who owns the responsibility for ensuring their organizations’ data practices are compliant and who should be responsible for meeting any new compliance requirements. Legal? Technology? Compliance? There may not be one correct answer, and truth be told, privacy is a shared responsibility across the organization.
70%↑
as of March 2024, 70% of nations and 79% of the world’s population are covered by some form of data privacy law.
Is the CPO role in decline?
In an IAPP survey of privacy professionals conducted last year, 78% said their organizations’ most senior privacy leader was in the five highest levels of the organization, while 21% were in the two highest levels. The data also showed that most of those surveyed reported to either the General Counsel (32%), the chief compliance officer (16%), or directly to the CEO (15%).
The annual survey’s biggest one-year shift shows a decline in direct reporting to the CEO and a rise in reporting to the chief compliance officer (CCO). One possibility: This shift may illustrate a decline in the stature of the role for the CPO in organizations and may signal that privacy, like many other regulations, requires an integrated approach. Real-life indicators also point to the decreasing importance of the CPO. Anecdotally, there are plenty of instances when a CPO leaves the organization or the position is eliminated in a restructuring, and it is not filled.
This is exactly what happened earlier this year when Google eliminated its CPO role in a corporate restructuring and opted not to fill it. There are other examples in large organizations where the CPO role either remains vacant or isn’t even on the org chart any longer. Is this because of a lack of expertise in the field, an inadequate internal bench, or a reprioritization of efforts and focus within the enterprise?
Or is it, as is the case at Google, that the varied responsibilities for data privacy have outgrown the role of a single CPO? Whatever the answer, it’s safe to say that when Google, a company estimated to hold between 10 and 15 exabytes of data—or the storage power of about 30 million PCs—makes a potentially game-changing decision regarding privacy, it’s probably a good idea for the rest of us to take note.
Another possible reason the CPO role is in decline may lie in the lack of measurable KPIs, which makes benchmarking difficult for privacy professionals. The status quo is that information and data should be protected, so unless an information breach occurs, a regulatory investigation is launched or a fine is levied, some companies may have a hard time evidencing that the CPO role has had a significant and direct impact on customer sentiment, the business and its bottom line.
Of course, good CPOs will serve to preserve the “status quo” every day and in this sense may even be victims of their own success. And if the responsibility for privacy is, ultimately, being dispersed throughout multiple roles within the organization, pitfalls could begin to emerge. For instance, a team that is already resource-constrained could end up with increased privacy responsibilities, potentially, and inadvertently, losing its focus on privacy—a risky proposition.
What are the risks of losing focus?
The risk of a diminished CPO role is losing a dedicated function and leader hyper-focused on privacy. When teams pick up privacy as a second or third priority, important tasks and obligations can get missed. Regulations may not be reviewed fully, legislative efforts are not monitored for anticipated changes, and dealing with enforcement becomes even more challenging. This, of course, has a direct impact on operations and customer perception.
78%
of privacy professionals said their organization's most senior privacy leader was in the five highest levels of the organization, while 21% were in the two highest levels.
Privacy should not be a reactive function. Customers want to collaborate with companies that they trust and protecting an individual’s privacy leads to trust. Additionally, fines levied against companies found mishandling a customer’s data can have a significant economic and reputational impact on the business. Though COVID-19 may have slowed global regulators from enforcing regulations, they are now making up for lost time with increased legislative authority and automated tools. And the repercussions for noncompliance are making headlines with fines and consent decrees.
It’s also important to consider the effect on the career paths and overall morale of the privacy team. When the CPO is deprioritized or pushed down the org chart, it becomes more difficult to attract top talent, and when the privacy pipeline dries up, it’s tough to turn on again. Moreover, eliminating the role altogether leads privacy team members within the organization to seek other disciplines or external opportunities to advance their careers.
Not prioritizing the CPO also creates management conundrums. Without a CPO, where does the privacy direction originate? Who will listen to the voice of the customer on privacy concerns and respond in a consistent, centralized manner? How does the organization create internal privacy awareness? The reality is that when the CPO is displaced or deprioritized within the organization, so is privacy itself. With the ever-changing and expanding legislative landscape and the sheer amount of data at our disposal, one would expect the role’s strategic importance to be apparent and to become more ingrained and elevated within organizations in the coming years.
Building customer trust
Those organizations that do employ and value the CPO role should expect continued collaboration across the entire enterprise. Much as awareness of internal audit and compliance functions expanded following new regulations, privacy awareness also needs to be well communicated and understood across the entire organization. Initiating activities like completing the Privacy Impact Assessments or Data Protection Impact Assessments required under GDPR and some U.S. state laws can only happen if the CPO and privacy team are well versed in the legislation.
The CPO needs to have a stake in the product change management and lifecycle process and work closely with the data governance teams to understand what data is collected, how it’s processed and how it’s protected. The CPO today has numerous vectors of responsibility, including state, federal and global law enforcement; leadership and board attention; internal business models, products and services; technology advancements; customer expectations; and competitor brand and product positioning. Though privacy can be a shared responsibility across the organization, the CPO needs to be the focal point across the enterprise and be accountable for building customer trust through the company’s data protection and privacy practices.
Whether your organization has a chief privacy officer, is looking to hire one, or has opted to split the role across several functions of the business, the one thing that remains certain is data privacy is not optional. More than ever, customers are demanding accountability from organizations about how their data is used, processed, shared and stored. It’s imperative that organizations invest in building a privacy program run by strong leaders who can navigate an evolving data privacy landscape. The risk of not doing so is eroding the company brand and losing customer trust.
Without a CPO, where does the privacy direction originate? Who will listen to the voice of the customer for privacy concerns and respond in a consistent, centralized manner?
NSW Health Pathology CEO: Patient empowerment, AI, Big Data are trends to watch in healthcare
In this VISION by Protiviti podcast, we welcome Vanessa Janissen, CEO of New South Wales Health Pathology, where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, NSW Health Pathology performs over 100,000 clinical and scientific investigations each day across 60 laboratories and more than 150 collection centers. Janissen is interviewed by Ruby Chen, a Director with Protiviti Australia.
In this interview:
1:05 – The digital agenda: Balancing patient privacy with data security
5:01 – The challenges of an aging demographic
8:05 – Strategies for employee wellbeing
13:10 – Pathology’s role in cancer care
18:38 – Emerging trends: Aging, AI, data analytics
Joe Kornik: Welcome to the VISION by Protiviti podcast. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, our global content resource examining big themes that will impact the C-Suite and executive boardrooms worldwide. Today we’re exploring the future of government, and we welcome Vanessa Janissen, CEO of New South Wales Health Pathology, where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, New South Wales Health Pathology performs over 100,000 clinical and scientific investigations each day, across 60 laboratories and 150-plus collection centers.
Sitting down with Janissen today is my colleague, Ruby Chen, a Director with Protiviti Australia. Ruby, I’ll turn it over to you to begin.
Ruby Chen: Great. Thank you so much for the introduction, Joe. Today, we’re so privileged to have Vanessa from New South Wales Health Pathology here on our podcast. Thank you so much for joining us today, Vanessa.
Vanessa Janissen: Thanks for having me, Ruby.
Chen: Let’s start off with a series of questions. The first one is around impacts on the community. As the CEO of New South Wales Health Pathology, I’m sure there are many things that get you excited about the organization, one of which I believe is the major in-flight digital transformation agenda. The organization is looking to deliver a single digital patient record management system so that healthcare teams can access patient data anywhere in New South Wales. This will allow laboratories to connect and communicate for the first time in New South Wales’s history.
With New South Wales Health Pathology being the largest public pathology provider, you are custodians of a large store of patient data. So, many people may be wondering, how do you then balance patient privacy, data security, and IT resilience controls whilst ensuring comprehensive access to patient information for healthcare teams?
Janissen: It’s a great question, Ruby. It’s an exciting time in digital healthcare. It’s transforming the way healthcare is delivered globally, and fundamentally shifting how we consume health services as individuals. As you said, at New South Wales Health Pathology we’re working to transform our services and models of care to take advantage of new digital technology. We’re delivering patient services where they’re needed and connecting our 65 laboratories and 5,000 medical and scientific experts to deliver care in new ways that we haven’t been able to before.
At the moment, we’re a very geography-based and constrained service. What I mean by that is pathology really relies on either having the expertise at each of our 65 locations, or transporting specimens and samples across sites. The physicality of that way of working has some real logistical limitations, particularly if you live in regional and rural communities where you don’t always have on-site access to that expertise, and you may have to wait longer for important clinical answers.
So, investing in digital is a real game changer for us in that space. Our investment in a single laboratory system will connect and integrate the operations of our 65 laboratories. Add to that the investment in technology that digitizes specimens and samples at the point of collection but allows them to be reported on by specialist expertise across the state, and it will really mean we can leverage our enormous capability in New South Wales to provide better answers, and therefore better patient care and, hopefully, better outcomes for patients.
In terms of the risks relating to data: undoubtedly, as we become more digitally enabled and process reliant, and collect, as you said, a massive amount of data, we have to be aware of our risks and our duty to protect our patients’ information. This is not a new problem or a new accountability for us; we’ve always had important and sensitive information that we need to protect. But certainly the scale and the impact of data breaches, data loss and system outages is fundamentally different now.
I don’t think that means we shy away from that direction, because the upside of accessibility and integration and better care is phenomenal. What we have to do is apply our clinical and corporate governance expertise to that new environment: making sure we have the right protections and controls in place, testing their efficacy, and ensuring our ability to respond and recover.
Chen: Perfect, yes. Thank you so much for sharing that. Particularly across different industries in recent years, having sufficient corporate governance and oversight around cybersecurity has become such an important topic. So, our next question is around overall challenges, particularly with changes in demographics.
According to the Australian Bureau of Statistics, Australia is facing an aging population due to increasing life expectancy and declining fertility rates. It’s estimated that by 2026, which is not long from now, more than 22% of Australians will be over 65 years of age. Would you mind sharing what strategies are being put in place to ensure pathology services and diagnostics services are tailored for this growing demographic?
Janissen: Certainly, as we have a larger older population in the community, New South Wales Health Pathology will need to scale the capacity of our services in hospitals to meet that additional demand and complexity of care. We’ll do that in a number of ways: utilizing automation, robotics and digital tools, as we’ve discussed, and I think AI has an important part to play in ensuring we can continue to provide those levels of service to the community. But we also know that people fundamentally don’t like being in hospitals and will prefer to receive care in other settings, and certainly our vulnerable, elderly patients often have poorer outcomes in hospitals, with increased rates of delirium and falls.
So, we need to think about what care looks like in people’s homes, whether that’s the traditional family home or a residential aged care home. We’ve seen the advent of virtual hospitals, and I think that’s part of the answer. Virtual hospitals came to the fore through necessity during COVID and lockdowns, but they’re now becoming more mainstream, with people able to receive treatment in their home while being monitored virtually from remote locations.
I think pathology is an integral part of that virtual hospital strategy, not just in the way we participate in collecting samples for diagnostic purposes to help treat patients, but also in how we might use technology like point-of-care devices, which are a sort of lab in your hand when we’re out in people’s homes, and in how we help monitor patients remotely. We know that the functionality of wearables and biometric devices will certainly help keep people safe in those locations. So, we’re really looking at our investment and how we partner with the local health districts and the hospitals to enable that virtual setting.
Chen: Yes, that actually sounds really interesting and it gives me a lot of hope for my future when I get to that age group.
Janissen: Absolutely. We’ve got to get it right before we get there, Ruby. [Laughter]
Chen: Exactly. Thank you for that. Now moving on to the next question, this is around employee well-being, which again I think is quite a topical area, particularly in the last few years post-COVID. It has been relatively tough for some of our healthcare workers across the state.
I understand that New South Wales Health Pathology is investing in new technologies such as AI and robotics to reduce the manual burden on pathologists and scientists, referring back to what you had mentioned earlier but for slightly different uses. You have also launched a revised people strategy to ensure staff well-being continues to be looked after. Would you be able to shed more light on these initiatives and others to improve employee well-being?
Janissen: Our team worked incredibly hard during COVID to protect the community. We did over 15 million tests in the peak two years, an enormous amount of work, working in ways we’ve never had to work before. With the ambitious change agenda that I mentioned in your first question, it’s really important that we think about how we invest in our people alongside that process.
People are at the heart of what we do, and in every discussion, survey and interaction it’s really clear to me how proud we are to serve the New South Wales community, and that people gain a tremendous amount of professional and personal satisfaction from coming to work and being part of a team that has such a positive impact. However, feedback also tells me that fatigue is real and prevalent post-COVID, and that challenging daily realities are affecting people’s experiences, leaving them feeling tired, and sometimes undervalued and overwhelmed, when change is happening.
That’s why we’ve invested a lot of time in developing our people strategy. It’s our map for achieving the culture we all seek, investing in the things our team told us were really important. We spent six months consulting with our staff about what makes them feel welcomed and valued, and that’s the core of it. Our vision is that every person who works for New South Wales Health Pathology feels they belong, in a place where everyone feels safe, valued and supported to do their roles and be the best that they can be.
To achieve that, our people strategy really focuses on six priority outcomes: that our teams feel safe and valued; that our leaders are considerate, accessible and authentic; that people’s work is meaningful and attractive; that we grow, develop, and help people achieve their goals; that we can help them balance work and other important aspects of their life; and we represent the diverse communities that we serve.
We’re really going to focus heavily on the first two, people feeling safe and valued and our leaders consistently supporting them; they’re the top two priorities for our investment, because that’s what our team told us was most important to them. The sorts of things we’ll be working on include managing sustainable workloads, making sure people have access to the right level of training, having good rostering practices, and introducing new technology that frees people up to do things more easily.
We’ve commenced a range of psychological safety, well-being and trauma-informed practice development for our people leaders and their teams. We’re building on the success of our recruitment model and investing in other initiatives to make it quicker and easier for people to join our organization. We’re also recognizing the important relationship people have with their managers, investing in a whole range of leadership training programs for our clinical leaders, our senior operations team and our medical frontline leaders. And of course, it’s important that we advocate for modern awards that recognize the value of what people do in their jobs.
Chen: That sounds like a very structured approach to the people strategy and something that I probably don’t hear as often.
Janissen: Yes. I mean, you’ve got to have a plan and you’ve got to have a focus on people. People are the backbone of our organizations, and when there’s so much going on in their personal lives, in their work environments, and with technology coming at them, really focusing on how we support people is absolutely critical.
Chen: All right, awesome. Let’s move on to the next question then. This is a little bit more forward-looking.
There has been investment in cancer care through precision oncology technology, and in training and retaining anatomical pathologists, in order to provide personalized cancer care based on specific biomarkers and the patient’s tumor and genetic makeup. Where do you see New South Wales Health Pathology progressing in the cancer care field in the next few years?
Janissen: It’s a really important area for us to focus on. Diagnosis and treatment of cancer is a core part of our role at New South Wales Health Pathology. Every year, we average about 450,000 tests that look for and diagnose what type of cancer people might have, and unfortunately, it’s a growing area of demand. Last year, we saw about an 11% increase in that area, not only in the number of tests; what we’re also seeing is a shift in complexity. I suspect that’s partly changing demographics as we get older, but also potentially a result of delayed screenings and limited access to care during COVID, which is really sad.
The good news is that we continue to see better health outcomes for patients when fast and accurate diagnosis and treatment pathways can be enacted. This includes the advantages that come from precision medicine, using genomics to really understand the cancer types people have, and the ability to offer personalized care, where we deploy targeted therapies for a very specific cancer. New South Wales Health is leading the way in developing both of those fields, and we’re really proud that Richard Scolyer, the Australian of the Year, is actually an anatomical pathologist in our organization, and he’s a perfect example of those innovations. He’s generously sharing his lived experience of diagnosis and treatment of brain cancer, and the great outcomes as a result of that.
What does that mean we need to invest in? We’re going to focus on our cancer genomics capability, using new genome-sequencing technology and translating research into new testing protocols to be more accurate in our diagnoses. We’re expanding our team’s capability, investing in genetic pathologists and the registrars we train, developing a very specific scientific workforce in this area and, in particular, investing in bioinformaticians, because genomics produces a massive amount of data, and it’s the interpretation of that data that we need to be really good at. We’ll also continue to partner with world-leading researchers and institutes that advance knowledge of what good cancer care and good cancer diagnosis can look like.
Chen: Yes. I was actually just about to ask you about whether or not you have partnerships and alliances with other overseas institutions. I guess in some other jurisdictions, the cancer rates are also relatively high if not higher than what we see in New South Wales. Would you mind sharing a little bit about the types of research that you’ve been doing jointly with some of the world leading institutions?
Janissen: We partner specifically with our hospitals and local health districts, which quite often have onsite research institutes and connections into international research institutes. That local partnership is critically important because you’re focusing on local need and demand in that area, but also on expertise; that multidisciplinary research support is really critical. At New South Wales Health Pathology, we can contribute capability: we have experts in areas like anatomical pathology, and we also operate a biobank, which collects and stores specimens that go into research and clinical trials, to really support those researchers on the ground in making those advancements.
Chen: That sounds amazing. I really look forward to seeing more research outcomes and benefits for people who are going through a cancer diagnosis and treatment. I think it sounds quite promising. I’m hoping there will be a breakthrough collectively as a lot of the leading organizations come together. Okay, great.
The last question we have for you, Vanessa, is a little visionary. We’ve talked about emerging trends, from enhancing digital experiences, to increasing controls around IT security, Australia’s aging population, improving employee well-being, and the previous question around the key cancer initiatives. Could you share your thoughts on any other emerging trends that New South Wales Health Pathology will need to grapple with in the next five to ten years, and how else do you see the organization shaping up for the future?
Janissen: I think there are three things on my mind about the future. The first is the ongoing need to empower our patients in their care. That’s what we hear from patients; they want to have more control and choice, and to activate preventive care earlier in their lives. The growing capability around point-of-care testing and biometric devices allows people to gain access to their information early, to help them manage their diseases and change their lifestyles to support better outcomes, and that is something we really need to support. I think New South Wales Health Pathology has a really good role to play there, because we do have the expertise in how to interpret data and results. So, I’m thinking about how we connect that expertise to patients when they’re in control of their care, in whatever setting that might be.
The second area that I think is an important emerging trend, and I say emerging but it’s here, it’s real, and it’s growing really rapidly, is how we use AI and deep learning. We’re already seeing the advantages of that in image analysis and large-dataset interpretation, such as in whole-genome sequencing, and it’s really helping to augment our expertise, not replace it. So, how do we maximize that support? How do we ensure our teams have the right tools and are applying them in a safe way to do their jobs, but also use them to free people up for the parts of their job that matter most, which is the human-to-human contact?
The third area, which is again rapidly emerging, is big data analytics: being able to look at trends through big data and population insights. Through that, we can work on predicting care needs, really seeing how patients’ health is performing, where we’re starting to see deterioration in their health, and how we can translate that into care needs, models of care and research requirements.
So, I think those are a couple of the important trends we’re thinking about, but inevitably, what matters most is how we support our people to have the capability and the time to engage, collaborate and partner around those emerging trends, to identify the incredible opportunities and bring them into reality for the communities we serve.
Chen: Yes. I think you’ve raised some very important areas, and I guess the common theme is around data and information in general. From gaining access to data early, to AI, big data analytics and deep learning, they all fit within that IT and information bucket, right? So that’s really interesting to hear.
Janissen: Yes, and that really reshapes, in some ways, what our workforce needs are as well. So, we’re thinking about how that plays into our people strategy in terms of the new jobs that will be coming along, and how we attract and retain expertise that we really haven’t had to have in our workforce before. It’s a really exciting time.
Chen: Really, really interesting. Thank you so much, Vanessa. I’ve learned so much over the past half an hour or so about New South Wales Health Pathology and about pathology in general. That’s been super helpful and very insightful. I’d just like to take the opportunity to thank you for your time on this podcast.
Janissen: Thanks very much for having me, Ruby. It’s been a great conversation.
Chen: Yes, thank you. Okay, great. Well, Joe, we’ll hand it back over to you.
Kornik: Thanks, Ruby, and thanks, Vanessa, and thank you for listening to the VISION by Protiviti Podcast. Please rate and subscribe wherever you listen to podcasts, and be sure to visit vision.protiviti.com to view all of our latest content. Until next time, I’m Joe Kornik.
Vanessa Janissen is the CEO of NSW Health Pathology where she leads Australia’s largest public pathology and forensic science service. Employing more than 5,000 people, NSW Health Pathology performs over 100,000 clinical and scientific investigations each day across 60 laboratories and 150-plus collection centres. She’s spent over 25 years in healthcare, both in public and private settings, with a deep commitment to serving the community and the strategic pursuit of better outcomes for people. She is also passionate about growing and developing future leaders, particularly championing and supporting women in leadership positions. Previously, Vanessa held a number of leadership positions at Calvary Healthcare, most recently as the National Director, Strategy and Service Development, leading their strategic growth across hospital, aged care, community and virtual care services.

Ruby Chen is a Protiviti director with over 12 years of experience in the financial services industry, for 10 of which she worked within the Big Four banks before transitioning into consulting. She has a broad range of experience providing advisory services and secondments across all three lines of defense.

Anglicare Sydney CEO on AI, housing and critical issues facing governments
In this VISION by Protiviti interview, Simon Miller, CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families and individuals in need, from food and housing to mental health and family care, sits down with Protiviti’s Leslie Howatt, a managing director and the firm’s Technology Consulting solution lead in Australia, to discuss Miller’s work as the CEO of an NGO, AI use cases in government, the future of public housing, and how government can more effectively work with the social sector to deliver outcomes.
In this interview:
2:43 – The five biggest challenges facing governments
4:55 – Using AI to deal with policy challenges
8:28 – Interfacing with the social sector on housing, aging, food and mental health
14:30 – A more community-minded future
Joe Kornik: Welcome to the VISION by Protiviti interview. I’m Joe Kornik, Editor-in-Chief of VISION by Protiviti, a global content resource examining big themes that will impact the C-suite and executive boardrooms worldwide. Today, we’re exploring the future of government, and I’m thrilled to welcome Simon Miller, CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families, and individuals in need, from food and housing to mental health and family care. I’m happy to turn over the interviewing today to my Protiviti colleague, Leslie Howatt, managing director and the firm’s Technology Consulting solution lead in Australia. Leslie, thanks so much for joining me today.
Leslie Howatt: Thanks, Joe. Great to be here, and I’m delighted to have you here with me today, Simon. Thank you so much for joining us.
Simon Miller: My pleasure to be here, Leslie. Thank you.
Howatt: You have such a unique background: CEO of an NGO, former management consultant for 15 years, and more than a decade in the New South Wales government. How do those experiences come together to shape the work you currently do as CEO of a not-for-profit providing aged care, retirement living, social housing and community services to over a hundred thousand people across Greater Sydney and the Illawarra?
Miller: It’s interesting because I’ve had the privilege of working across all three sectors: the public sector, the private sector and now the for-purpose sector. I think what that does is give me the capacity to view problems from different perspectives and to understand how different stakeholders think about things. As CEO of a not-for-profit, so much of what we do involves working with other sectors, so being able to speak their language, understand the challenges they’re working on, and bring together a whole range of skill sets, around public policy and advocacy, commercial analysis and technology, is a really useful perspective that I’m able to contribute to the sector and, hopefully, to its development, to the things we’re trying to do as an organization and, indeed, as a society here in Australia.
Howatt: So, given that, what do you see as the biggest challenges facing governments, both in Australia and beyond, right now?
Miller: Yes. Look, I think there are five really big themes that governments are grappling with. The first one is clearly the cost of living and sticky inflation. Governments and central banks are trying to manage economic growth alongside price pressures, and they’re struggling with that; it’s something they haven’t really had to do since the 1970s, and this is different from the 1970s as well, so it’s almost a unique challenge. What is that leading to? The second theme, what I call the rise of disillusionment. There’s disillusionment among the population; you see it showing up in populism and extremism, and you see it in the challenges from social media. The third is the energy transition. There has never been a global crisis quite like climate change, and the energy transition feeds into all of these: it feeds into a hopelessness that leads to populism, and into cost-of-living pressures through the rise of renewables. It’s a really serious issue that governments are trying to grapple with; they know they have to do it, but there’s a lot of pushback from populists because it’s expensive. The fourth is housing: the availability of housing and the cost of housing. You’re seeing rents at unaffordable levels across at least most of the Western world, and dislocations in the housing market in places like China, so there’s a significant challenge around housing right now, and not just in Western democracies; it’s right across the globe. The last one is AI and technology, and governments working out, “What do we do with artificial intelligence? How do we manage it? How do we use it?”
Howatt: So, let’s pick up on that last point. In your consulting days, you built and led an artificial intelligence and advanced analytics business across Asia-Pacific, growing it from scratch to hundreds of data scientists and engineers. How do you think government can use AI to deal with pressing policy challenges?
Miller: Look, I’m genuinely quite excited about AI and what it can be used for in public policy. There’s an enormous number of use cases, so let me give you an example. I heard recently about people starting to use large language models to do real-time coaching in call centers. You can have the AI model listening to the call and providing prompts, so it’s not just, “We’re going to record this call for later”; we can actually give agents real-time feedback on how they’re doing and real-time access to information, which is going to improve the experience. I think there are opportunities for automated approvals, automated regulatory approvals, for instance planning approvals, and interpretation of laws and policies is a really interesting area. I was talking to a large Asia-Pacific bank just a week ago, and they said they’re now running an AI across all of their policies and all of their terms and conditions, which means 90% of customer queries to bankers can now be answered live, without people needing to go back and ask experts. AI can do that and, again, speed things up and make them more efficient. Governments can do the same thing with laws and policies.
Airlines and logistics companies have been using AI to help with optimization, scheduling and disruption management, and governments are now starting to think about how to use that in, say, public transport networks or road networks, which improves the experience and lives of citizens. At my organization, we are using AI to automate recruitment, in terms of resume screening and asynchronous video interviews. Government hires thousands of people, so the capacity to find the right people, and find them fast, takes a lot of the pressure and friction out of the recruitment process and, again, improves public policy.
I think the use cases are absolutely endless. The things that will limit it, really, are imagination and risk appetite. Government can be quite risk-averse, and needs to be, because it’s balancing quite difficult public conversations and managing its interaction with the population. So, on one hand, risk aversion and imagination; on the other hand, willingness to invest. I think the public investment required will be substantial, but particularly as government revenues fall and we face the demographic pressure of an aging population, the payoffs for government from the application of AI will be enormous, across almost any area of public policy you can imagine.
Howatt: Let’s talk a little bit about your current role as the CEO of Anglicare. What are some of the big issues that you’re dealing with in that sector and how can government more effectively work with the social sector to deliver better outcomes?
Miller: There are really four things that I’m particularly concerned about and that we’re working on. The first is aging and ageism. Our population is aging significantly across most of the Western world, indeed across most of the world, and there’s a need for government to help foster a healthier approach to aging. There is some research out of Yale that suggests the way you think about aging really significantly impacts how healthy you are as you get older; you can have more years of being well as a result of thinking more positively about aging. So I think there’s a role for not-for-profits and governments to really think about how we age well as a society. Let’s not just make aging a clinical thing, something that is about a loss of capacity, but ask how we celebrate aging, how we embrace it, how we help well-being be a feature of how we all grow older. That’s one of the things we’re trying to grapple with in building communities for older people that are healthy, thriving and flourishing, that are not just about medical care but about community, hospitality and social connection.
The second thing that we really focus on is the housing crisis. Anglicare is a significant housing provider and it’s an area we focus our growth in, because we see this as really being the great social challenge of our time. Without good housing, you lose social cohesion, and I think that’s an enormous challenge to our society. So that’s an area that requires government investment, government regulation, government support. It’s not something that simply happens on its own; if it were something the private sector would just do by itself, we wouldn’t need to worry about it, but it’s clearly a huge problem.
The third is food insecurity. Cost of living and food insecurity are a huge challenge across the Western world. Certainly in Australia, where I am, food insecurity is an enormous challenge for us: how do we work on distribution, how do we work on subsidizing food, how do we work on building the capacity required?
I think the last one that we’re significantly working on is mental health: the increase in the number of mental health challenges in the community, particularly for both young people and older people. The group in society that has the highest rate of suicide is men over 85. It’s not the largest number, but it’s the highest rate, and so mental health challenges for both teenagers and seniors are an area where we are partnering with government to really invest in counselling, in psychology, in support. These are big challenges that we need to face into as an organization, in partnership with government.
Howatt: That’s quite amazing. That’s a stat I had never heard before, but having an aged father, I can kind of understand how that might come about. So, for my last couple of questions, I want you to look out five years or so, maybe to 2030. What do you envision for the future of housing, specifically social housing, and community services to support Australia’s aging population?
Miller: One of the biggest challenges with housing is around supply. We just need more housing. But of course, it’s not just a case of building more, because land is expensive, regulatory approvals are expensive, construction is expensive, and people, developers, even Anglicare, need to make a return on the money invested in building housing. So one of the things I see in the next decade, five to 10 years, is government stripping away some of those regulatory costs, making it really easy for people to build if they follow a certain design standard. I see technology playing a significant role in things like prefabrication, automation and offsite manufacturing, and in reducing the cost of [Unintelligible] architecture, of project management, of engineering. I think AI, in particular, can play a role in those spaces. Then, of course, there’s the lifecycle cost of housing around energy, heating and cooling management, and there’s an estimate that, at least in high-rise buildings, those lifecycle costs can be half the net present cost of the building. So they’re really, really significant, particularly for people on low incomes.
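To see why lifecycle costs can rival the build cost in net present terms, here is a back-of-envelope sketch; every figure in it is hypothetical, chosen only to illustrate the arithmetic, not drawn from Anglicare or industry data.

# Back-of-envelope: discounted lifecycle operating costs vs. upfront build cost.
# All numbers are hypothetical, for illustration only.

def npv(annual_cost: float, rate: float, years: int) -> float:
    """Net present value of a constant annual cost stream."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

construction_cost = 400_000   # assumed per-dwelling build cost
annual_operating = 22_000     # assumed energy, heating, cooling, maintenance
lifecycle = npv(annual_operating, rate=0.05, years=50)

total = construction_cost + lifecycle
print(f"Lifecycle NPV: {lifecycle:,.0f}")                     # about 402,000
print(f"Share of net present cost: {lifecycle / total:.0%}")  # about 50%

With these assumed numbers, 50 years of operating costs discounted at 5% roughly match the build cost, which is the order of magnitude the estimate describes.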
So, I see a really quite exciting world where, in the next five years, government regulation, technology and really smart design can dramatically lower some of those costs and still enable developers to make good returns, and so you get a win-win. Developers still make the money they need to be able to put their money at risk, but people on lower incomes, on minimum wages, on income support are actually able to afford housing because we’ve changed the way we deliver it. So that’s something I see being quite exciting over the next five years, and I’m hopeful that organizations like mine can play a role in saying, “You know, we’re going to take a risk. We’re going to do some things a little bit differently than we’ve traditionally done, because we can,” and sort of show the way. I’d like us to be the innovative organization. These things are going to really transform the way we do housing, I think.
Howatt: Amazing. Finally, then, the same question but about the future of government in the delivery of services. Based on everything we’ve talked about today, how optimistic are you that we’ll get this right and what do you see for 2030?
Miller: Look, I’m ultimately an optimist, and the reason I’m an optimist in terms of public policy and getting this right is what I see when I look at millennials and Gen Z and the generations coming after them: they are much more publicly minded, and I think things like climate change have led them to take a different, more positive, more community-minded approach. By 2030, millennials and Gen Z are going to be a really significant part of the voting population, so they’re going to demand it from government, and governments tend to respond to voters, one way or another. So I’m actually really optimistic that their attitude will translate into government action.
Howatt: Thanks, Simon. We really appreciate you taking the time to speak with us today and for your amazing insights. That was so helpful; I learned a lot and, hopefully, others did too.
Miller: Fantastic. Thank you.
Howatt: Now, back to you, Joe.
Kornik: Thanks, Leslie. Thank you for joining the VISION by Protiviti interview today. On behalf of Leslie and Simon, I’m Joe Kornik. I’ll see you next time.
Simon Miller is the CEO of Anglicare Sydney, a nonprofit organization that offers services for seniors, families and individuals in need, from food and housing to mental health and family care. Prior to joining Anglicare, Simon was a Managing Director and Senior Partner at The Boston Consulting Group with over 14 years’ experience in advising the boards, CEOs and executives of Australia’s top companies on strategy, growth, digital transformation, advanced analytics, and mergers and acquisitions. In previous roles he was First Assistant Secretary at the Department of Prime Minister and Cabinet, Deputy Director-General at the Department of Water and Energy, Policy Director to the Premier of New South Wales and Chief of Staff to the Treasurer of New South Wales.

Leslie Howatt is a managing director and Protiviti’s technology consulting solution lead in Australia. She specializes in digital and technology strategy as well as transformational change, and boasts over 25 years’ experience across consulting, industry, and government sectors. She has extensive experience designing and delivering large-scale change initiatives across organizations in Australia, New Zealand, Asia and North America. Leslie's industry experience spans financial services, transport & aviation, energy & utilities, consumer products & retail, and telecommunications. She is a champion for diversity and inclusion.

Did you enjoy this content? For more like this, subscribe to the VISION by Protiviti newsletter.