The CISO's Survival Guide to Generative AI Adoption in 2025
September 16, 2025
This is not a hypothetical scenario. In 2023, Samsung employees inadvertently uploaded some of the company’s source code to ChatGPT. In 2024, several banks reported that confidential customer data had quietly ended up in the training datasets of large language models (LLMs).
- 91% of security teams already use GenAI tools at work.
- 34% of organizations still lack a formal acceptable use policy.
- 73% of CISOs admit they are unprepared for AI-driven threats.
The GenAI Boom: Fuel and Fire
- Finance: Banks are embedding LLMs into fraud detection, but adversaries are also using AI to generate deepfake CEO voices for wire fraud.
- Healthcare: Hospitals use GenAI to summarize patient notes, yet misconfigured APIs have exposed PHI in multiple breach incidents.
- Software Development: AI-generated code accelerates shipping cycles but introduces supply chain vulnerabilities, as seen when GitHub Copilot-generated snippets included insecure functions.
The 2025 CISO Dilemma: Secure AI or Get Left Behind
CISOs must answer a pressing question: “How do we secure AI without slowing down innovation?” Attackers are not waiting for that answer:
- Phishing 2.0: Realistic, contextually aware phishing campaigns generated in seconds.
- Malware creation: GenAI tools used to build self-compiling, polymorphic malware that evades traditional antivirus.
- Data poisoning attacks: The insertion of malicious inputs into training data to affect AI outcomes.
At the same time, CISOs are expected to:
- Secure AI deployments without creating friction for business users.
- Translate security risks to business risks that the board can understand.
- Ensure compliance with GDPR, the EU AI Act, the CRA, and emerging AI security regulations.
Six Steps CISOs Should Take to Safely Enable GenAI in 2025
Step 1 - Visibility
As a CISO, you should know which GenAI applications are being used across the company. This list should not be limited to apps accessed through SSO; any access should be captured in the inventory.
There are a few ways to do this. Talk to your SSE vendor and find out how much traffic they actually analyze. Are they configured with a split tunnel that skips a large share of Internet traffic? If so, you may need another solution that runs on the endpoint. Otherwise, check whether their logs are enough to build this list. Kitecyber provides this inventory while skipping 0% of Internet traffic.
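To make the idea concrete, here is a minimal sketch of how such an inventory could be built from exported gateway or DNS logs. The CSV column names and the domain list are illustrative assumptions, not part of any specific product:

```python
# Minimal sketch: build a GenAI app inventory from exported web-gateway/DNS logs.
# Assumptions (illustrative only): logs are CSV with "timestamp,user,domain"
# columns, and the domain list below is a small sample, not an exhaustive catalog.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Microsoft Copilot",
    "perplexity.ai": "Perplexity",
}

def build_inventory(log_path: str) -> dict:
    """Return {app_name: set_of_users} observed in the traffic log."""
    inventory = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            # Match exact domains or subdomains of known GenAI services.
            for known, app in GENAI_DOMAINS.items():
                if domain == known or domain.endswith("." + known):
                    inventory[app].add(row["user"])
    return inventory

if __name__ == "__main__":
    for app, users in build_inventory("gateway_logs.csv").items():
        print(f"{app}: {len(users)} distinct users")
```

The key design point is coverage: if the logs feeding this script skip a portion of Internet traffic, the resulting inventory will be incomplete.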
Step 2 - Access Control: Sanctioned vs Unsanctioned Gen AI Apps
Once you have the inventory, mark the approved apps as sanctioned and the rest as unsanctioned, so that access to the unsanctioned ones can actually be blocked (see the sketch below).
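A minimal sketch of that sanctioned/unsanctioned split, assuming a simple allow list; the app names are placeholders:

```python
# Minimal sketch: classify discovered GenAI apps as sanctioned or unsanctioned
# and derive an access decision. The app names are illustrative placeholders.
SANCTIONED = {"Microsoft Copilot", "ChatGPT Enterprise"}

def access_decision(app_name: str) -> str:
    """Return 'allow' for sanctioned apps, 'block' for everything else."""
    return "allow" if app_name in SANCTIONED else "block"

# Example: feed the Step 1 inventory through the policy.
for app in ["Microsoft Copilot", "Claude", "Perplexity"]:
    print(f"{app}: {access_decision(app)}")
```

In practice the block decision has to be enforced somewhere, on the endpoint or in the network path, which is exactly where the next steps come in.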
Step 3 - Discover Sensitive Data
Endpoint DLP: use these solutions to classify files and discover sensitive data across your endpoint devices.
DSPM: The data can also be in cloud drives or other cloud storage platforms. You can use a DSPM solution to discover the data in cloud drives or cloud storage.
Using a combination of DLP and DSPM solutions, you should be able to build a complete view of sensitive data across devices, cloud storage, and drives. Note that most DSPM solutions are not inline: they can discover data, but they are not in the network path. Similarly, many endpoint DLP solutions are not inline with network traffic either. A simple discovery sketch follows.
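The sketch below shows the basic shape of endpoint-side discovery: walk a directory tree and flag files matching sensitive-data patterns. The patterns and path are illustrative assumptions; real endpoint DLP and DSPM products use much richer classifiers (exact-data match, document fingerprints, ML models):

```python
# Minimal sketch of endpoint-side sensitive-data discovery: walk a directory
# and flag files containing patterns that look like card numbers, SSNs, or keys.
import os
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
}

def scan_tree(root: str):
    """Yield (path, label) for every file matching a sensitive-data pattern."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read(1_000_000)  # cap read size per file
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    yield path, label

if __name__ == "__main__":
    for path, label in scan_tree("/home"):
        print(f"{label}: {path}")
```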
Step 4 - Track data movement and stop leakage
Network based: Network-based approaches such as SSE or secure web gateway solutions require tunneling all Internet traffic through them. That traffic must be decrypted, inspected, and re-encrypted before being forwarded. This hairpinning adds latency and bandwidth limitations, and it does not work for many applications that are end-to-end encrypted; examples we have heard include Zoom, Teams, and other meeting applications. Many sites also end up being bypassed entirely because of the impact on latency or performance.
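By contrast, an endpoint-side check can inspect a payload locally before it leaves the device, without hairpinning traffic through a decrypting proxy. Here is a minimal sketch under that assumption; the function name, domain list, and patterns are illustrative, not a specific agent's API:

```python
# Minimal sketch of an endpoint-side egress check: before a payload goes to a
# GenAI destination, inspect it locally for sensitive content. All names and
# patterns here are illustrative assumptions.
import re

GENAI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # US SSN
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),   # private keys
]

def allow_upload(destination_host: str, payload: str) -> bool:
    """Return False if sensitive content is headed to a GenAI destination."""
    if destination_host.lower() not in GENAI_DOMAINS:
        return True
    return not any(p.search(payload) for p in SENSITIVE)

# Example decisions an endpoint agent could make before releasing a request.
print(allow_upload("chatgpt.com", "summarize our Q3 roadmap"))          # True
print(allow_upload("chatgpt.com", "-----BEGIN PRIVATE KEY----- ..."))   # False
```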
Step 5 - Auditability and Reporting
GenAI apps are evolving at a fast pace, and policy configurations sometimes cannot keep up with what is out there. In many cases it is also not possible to apply very strict controls, because doing so would hurt employee productivity.
The best answer in that case is a product that provides complete auditability of access to all of these apps. Auditability acts as a deterrent against insider threats, and it gives your customers confidence that you take security seriously and have the tooling to run incident analysis if something does happen.
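As a rough illustration of what "auditability" means in practice, the sketch below emits one structured JSON record per GenAI interaction to an append-only log. The field names are illustrative assumptions, not a specific vendor's schema:

```python
# Minimal sketch: write one structured audit record per GenAI access event,
# suitable for later incident analysis. Field names are illustrative only.
import json
from datetime import datetime, timezone

def audit_event(user: str, app: str, action: str, data_labels: list[str]) -> str:
    """Serialize one GenAI access event as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "app": app,
        "action": action,            # e.g. "prompt", "file_upload", "blocked"
        "data_labels": data_labels,  # classifications detected in the content
    }
    return json.dumps(record)

with open("genai_audit.log", "a") as log:
    log.write(audit_event("a.kumar", "ChatGPT", "file_upload", ["source_code"]) + "\n")
```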
Final Word: CISOs as AI Enablers
In 2025, GenAI adoption is no longer optional; it is an existential necessity. The same is true of securing it.
The CISOs who succeed will not simply draw security lines in the sand. They will enable GenAI adoption in a way where productivity does not come at the cost of security.
If your team is looking to adopt GenAI securely without sacrificing privacy or performance, solutions like Kitecyber can help by integrating endpoint DLP, network DLP (data loss prevention), and AI-informed cyber guidance that acts like a security expert sitting at the elbow of every employee.

Ajay Gulati
Ajay Gulati is a passionate entrepreneur focused on bringing innovative products to market that solve real-world problems with high impact. He is highly skilled in building and leading effective software development teams, driving success through strong leadership and technical expertise. With deep knowledge across multiple domains, including virtualization, networking, storage, cloud environments, and on-premises systems, he excels in product development and troubleshooting. His experience spans global development environments, working across multiple geographies. As the co-founder of Kitecyber, he is dedicated to advancing AI-driven security solutions.