Blog

The GenAI Train has Left the Station: It’s Time to Secure the Tracks

Posted by: Dan Anconina
May 24, 2023
Getting your Trinity Audio player ready...

The adoption of generative AI in business introduces significant security and privacy risks. The ability to create convincing fake content and deep fakes opens the door for fraud, misinformation, and identity theft. Malicious actors can exploit these technologies to deceive individuals or compromise sensitive data. To mitigate these risks, businesses must implement stringent security measures, including access controls, encryption, and ongoing monitoring. Ethical considerations are crucial to maintain privacy and trust in AI-powered solutions.

I could not have said it better myself. Yet the fact remains that I did not. The text above was not written by me, but rather by ChatGPT. Would anyone have known this, had I not revealed it? It’s possible, but far from sure.

This is just a small example of the risks associated with integrating Generative AI (GenAI) and other AI-powered platforms into organizations. With the AI train already barreling down the tracks towards day-to-day indispensability, now is the time for CISOs to pause, regroup and define limits for its responsible use.

Defining the Security and Privacy Risks

Before we dive into the policies, processes and best practices that can help mitigate the risks from GenAI, let’s take a quick look at the risks themselves. GenAI poses cross-organizational risks to:

  • Compliance – Incorrect GenAI usage can create liabilities vis-à-vis relevant laws, regulations, and industry standards.
  • Privacy and confidentiality – GenAI technologies may inadvertently expose or disclose sensitive user information.
  • Intellectual Property – Content generated by GenAI may infringe on copyrights, trademarks, or other intellectual property rights.
  • Data security – GenAI platforms and services may have their own security vulnerabilities, putting enterprise data at risk.
  • Reputation and brand – GenAI can produce incorrect, inappropriate, or misleading information, potentially harming business operations and reputation.

Additionally, the integration of AI-powered technology has already given rise to a new type of cyberattack – prompt injection. As the name suggests, prompt injection attacks involve threat actors inserting malicious prompts or requests into interactive AI-power systems (like ChatGPT). These prompts are designed to deceive or manipulate users, resulting in unintended output or even disclosure of sensitive data.

The Principles of Responsible GenAI Use

Responsible generative AI use entails prioritizing ethical considerations, transparency, accountability, and human-centricity to ensure technology benefits society while mitigating potential risks and biases.

Once again, I agree with ChatGPT on this one.

Enterprises are already using GenAI for various use cases, including research and development, code quality improvement, data analysis, customer service, content creation, personalization, automation, and innovation.

CISOs and enterprise security stakeholders need to develop policies that ensure responsible implementation of GenAI technology, focusing on acceptable use, implementation guidelines, and risk management. Managing these risks starts with user education – creating a culture of responsible AI use through training and awareness programs. Security stakeholders also need to ensure that regular and thorough impact assessments are conducted to identify potential risks and biases, alongside regular audits and continuous monitoring.

AI Policies in Principle and Practice

How do these high-level principles translate into actionable policies and processes? Here are ten concrete (and anonymized) examples from customers and colleagues: 

1. PRINCIPLE: Develop a comprehensive AI risk management framework 

PRACTICE: An organization using AI-powered customer support chatbots created a risk management framework that assesses potential risks originating with the chatbots including data breaches, privacy concerns, and biased responses. For each of these risks, they established mitigation strategies.

2. PRINCIPLE: Implement strong access control and authentication 

PRACTICE: A healthcare company using AI for patient data analysis restricted access to the AI system to specific roles and implemented multi-factor authentication to ensure only authorized personnel could interact with sensitive patient data.

3. PRINCIPLE: Regularly update and patch AI systems 

PRACTICE: A bank using AI for fraud detection ensures that its AI system receives regular updates and patches, addressing any vulnerabilities and improving the system’s overall security.

4. PRINCIPLE: Enforce data privacy and protection regulations 

PRACTICE: An e-commerce company using AI for personalized marketing complies with GDPR by implementing data minimization, ensuring user consent, and providing users with the right to access, correct, or delete their personal data.

5. PRINCIPLE: Conduct employee training and awareness programs 

PRACTICE: A software company using AI for code analysis and optimization trains its employees on secure coding practices and the potential risks associated with AI-generated code, such as intellectual property theft or insecure code generation.

6. PRINCIPLE: Monitor AI systems for security incidents and anomalies 

PRACTICE: A cybersecurity firm using AI for threat analysis monitors the AI system’s behavior for any anomalies or unexpected outputs that could indicate a security incident or a potential vulnerability in the system.

7. PRINCIPLE: Establish an incident response plan 

PRACTICE: A financial institution using AI for credit risk assessment developed an incident response plan that outlines the steps to take in case of a data breach involving the AI system, including containment, investigation, and communication to relevant stakeholders.

8. PRINCIPLE: Collaborate with industry peers and experts 

PRACTICE: CISOs from different organizations in the energy sector formed a working group to discuss and share best practices for securing AI systems used in critical infrastructure monitoring and control.

9. PRINCIPLE: Engage with AI vendors and suppliers 

PRACTICE: A manufacturing company using AI for quality control works closely with its AI vendor to ensure that the AI system meets required security standards and includes security-related clauses in the contract, such as regular security audits and prompt vulnerability disclosure.

10. PRINCIPLE: Regularly review and update AI policies and procedures 

PRACTICE: A transportation company using AI for route optimization reviews and updates its AI security policies and procedures as new risks emerge, such as new attack vectors targeting AI systems or changes in regulatory requirements.

The Bottom Line

These recommendations and examples demonstrate how organizations can apply best practices to address security and privacy risks associated with AI systems across various use cases. Harnessing AI at scale for productivity and innovation is an achievable goal. By taking the initiative in the short term, CISOs can head off potential AI-powered headaches (in the best case) and liabilities (in the worst case). 


Dan Anconina

CISO & Head of Cyber Security 

Find and fix the exposures that put your critical assets at risk with ultra-efficient remediation.

See what attackers see, so you can stop them from doing what attackers do.