
CISO Series Digest #1: AI and Enterprise Risk Management

Abhi Puranam

Read highlights from Chris Hodson and Dan Walsh's discussion of AI and Enterprise Risk Management


In the debut of the CISO series, Chris Hodson, CSO at Cyberhaven, and Dan Walsh, CISO at VillageMD, discussed the present and future of Enterprise Risk Management given the sudden and rapid rise of AI apps such as ChatGPT. Read below for the highlights of their discussion, and be sure to check out the full session here if you’re interested!

Partnering with the business on AI: Understand the use cases and classify the data

Chris starts the discussion by asking Dan how he's approaching conversations about AI risk at VillageMD. Dan's key to a productive discussion? Understanding the use cases, personas, and data types involved:

"Are we using it to write a marketing email or are we using it to write a patient letter? Right? Two different things. One has PHI in it, one probably doesn't have a ton of confidential information in it. What are we entering into it in order to get what we need out of it? Is it simply asking a question, or are we entering source code? Are we entering confidential data? And so I think kind of understanding the use cases and the personas of the folks that are using it is really key."

– Dan Walsh, CISO, VillageMD

Dan believes the role of a CISO is that of a risk advisor, not a business decision-maker. To properly understand the risks, a CISO has to understand the business objectives employees are trying to accomplish using generative AI applications. This means identifying the teams and job functions involved, as well as what type of data is being entered into these tools.

The Future of AI Governance - “It’s gonna get crazier and harder”

The conversation turns to how security teams and governments are going to tackle the security challenges posed by AI. Chris notes that many corporations, like Walmart and Amazon, have already blocked the use of these applications by their employees. Both Chris and Dan agree that the structure of security teams may need to evolve to accommodate AI threats and that governments will need to catch up from a regulation perspective. Dan says:

"We’re speeding up, right? It's gonna get crazier and harder. Maybe the CISO function as it relates to that splits off. Right now we've got functions for application security and GRC. Now we've maybe got machine learning and AI security team, I don't know. Or we have a massive series of events that the regulation around it and the governance around it is sped up, right? Almost like a GDPR type of legislation around it."

– Dan Walsh, CISO, VillageMD

On the topic of legislation, Dan speculates whether leading AI companies will follow the path of healthcare or of financial services – either waiting for governments to intervene or proactively developing their own governance to avoid external scrutiny:

"They can take the path that the healthcare companies took or the path that the credit card companies took. The path the healthcare companies took in the US was they waited for Congress to basically demand they do something when they, when they put through HIPAA, the, the credit card companies looked at that in the nineties and they're like, Ooh, we don't want to go through that. And they came up with the PCI standard, which is a self-governing to say like, look, you can't use credit cards, you know, unless you follow these standards."

– Dan Walsh, CISO, VillageMD

Chris speculates on the future need for ML security architects and pen-testing strategies at companies that are developing their own large language models:

"Are we gonna have ML security architects? Is that how you think we're gonna tackle your business stakeholders, your CTO, saying ‘hey, we're building this large language model for X.’ Because while there isn't regulation and governance, that's the other side of it, there isn't a pen-testing strategy for any of this stuff, right? So I think we are really gonna struggle with internal due diligence and threat modeling quite frankly on these things for a few years to come."

– Chris Hodson, CSO, Cyberhaven

He also highlights the supply chain security challenges that will be introduced by using APIs for these models or partnering with vendors that utilize them. The uncertainties around the usage and privacy of third-party models lead Dan back to the need for proper data classification and business understanding in order to gauge the risk of AI application usage:

"We're thinking about data classification and we're thinking about the types of information that we're comfortable seeing in there. So I think it's more internal work than it is trusting a third-party privacy policy."

– Dan Walsh, CISO, VillageMD

The potential of AI to improve cybersecurity

Ending the conversation on a positive note, Chris and Dan share their optimism about AI's potential to impact the world for the better. While Dan speaks about the potential benefits to healthcare and life-saving diagnoses, Chris is optimistic that technological advances will help cybersecurity teams deal with the industry's talent shortage:

"Directly from security space, I'm seeing it as well, like, you know, the research that I've done, around kind of the efficacy and proficiency of threat detection. There’s never enough people and resources in the secops community. There’s a real opportunity, you know, to improve incident triage and uncovering hidden TTPs."

– Chris Hodson, CSO, Cyberhaven

Wrapping Up

If you enjoyed this recap, be sure to check out the full session here. Chris and Dan also touch on the societal impact of AI, how AI shapes their approach to interviewing technical security candidates, and the threat of new malware created with generative AI.

Also, be sure to register for the second installment of the CISO series here, so you don’t miss the next one!