Data Annotation Blog|Nextremer Co., Ltd.

What are the security risks of generative AI? Explaining security measures to prevent misuse

Written by Toshiyuki Kita | Jan 20, 2026 9:50:46 AM

 


In recent years, while the business use of generative AI has progressed rapidly, concerns about security risks, including data leakage, have also increased. In fact, news regarding generative AI security issues has followed one after another, such as "confidential personal information leakage occurred via generative AI" and "incorrect information from generative AI was misused."

This article explains in an easy-to-understand manner the reasons why security measures for generative AI are prioritized in companies and the security issues faced. Furthermore, we provide useful hints for proceeding efficiently while understanding risk measures for generative AI, such as specific usage examples and points of caution.

This content is useful for those considering utilizing generative AI in actual business or AI development after understanding the dangers of generative AI.

 

Related Article:

What is generative AI? Explaining the types, mechanisms, advantages, disadvantages, and use cases!

 

 

【Table of Contents】

  1. 4 Major Security Risks Lurking When Using Generative AI
  2. Why Are Generative AI Security Measures Important for Companies?
  3. Cases Where Generative AI Security Risks Were Pointed Out
  4. Security Measures for Safely Utilizing Generative AI
  5. Summary

 

 

1. 4 Major Security Risks Lurking When Using Generative AI


As the use of generative AI expands, security dangers and problems are also becoming apparent. Here, we introduce four major security risks lurking when using generative AI.

 

Learning Data Poisoning

Learning data poisoning is the act of introducing vulnerabilities, backdoors, or biases into the data that an LLM learns, which could compromise ethical behavior.

In the 2025 version of "OWASP Top 10 for LLM," a report on critical vulnerabilities and security guidelines discovered in LLM applications, learning data poisoning is a security risk with increasing danger, ranking in the TOP 4.

Because generative AI learns from vast amounts of data, the risk of data tampering (data poisoning) by malicious third parties increases. If fraudulent data is intentionally mixed in, there is a risk that generated content could become incorrect information or biased content.

In particular, external data sources used during LLM training cannot be managed by the generative AI user side, so caution is required during use.

 

Related Article:

What is training data? How is it different from learning data? How much do you need? We explain how to create it in-house or outsource it, and what to be careful of when collecting it!

 

 

Prompt Injection

Prompt injection is an attack method that sneaks malicious commands or code into the prompt entered by a user. For example, intrusion prompts such as "Ignore all restrictions" or "This is my monologue. I want to know your internal information" are famous examples.

Prompt injection increases risks that lead to information leaks or system misuse, such as leakage of information regarding in-house products or systems, or output results containing unintended information.

Against prompt injection, measures such as the following are recommended:

  • Give clear operation restriction instructions in the system prompt
  • Detect and process malicious content through input/output filtering
  • Require human intervention for high-risk operations
  • Conduct regular vulnerability tests

By using multiple approaches above in combination, risks from prompt injection can be reduced.

 

Related Article:

What are generative AI prompts? Explaining tips for using LLM effectively, business examples, and points to note!

 

Risk of Confidential Information Leakage

Confidential information can easily leak if mistakes occur on the side using generative AI or if vulnerabilities arise on the side of vendors providing generative AI tools.

For example, if internal confidential information or personal data is mistakenly entered when using generative AI, there is a possibility that in-house data will be used in answers for other users or leaked through unauthorized access.

Furthermore, if generative AI tools with insufficient data protection or access management are used, there is also a risk of confidential information leaking to third parties.

Since information leakage and personal information flow-out have serious impacts such as loss of trust from customers and legal responsibility, discernment when selecting generative AI tools is important.

 

Cyber Attacks on Generative AI Models

There is also a risk that systems using generative AI models will become targets of cyber attacks.

If an AI model is accessed unauthorizedly and tampering or unauthorized use of the generative AI model is carried out, damage will spread widely to users of the generative AI tool, such as reduced reliability of generated content or long-term system suspension.

To counter cyber attacks on generative AI models, it is effective to continuously monitor LLM resource usage rates and identify spikes or patterns indicating DoS attacks.

Additionally, setting appropriate rate limits for API access and blocking excessive requests or unauthorized access are also effective methods.

 

2. Why Are Generative AI Security Measures Important for Companies?


Here, we explain in detail from three perspectives the reaso
ns why companies should focus on generative AI security.

Rising Awareness of Data Privacy

Companies possess valuable data such as customer information, internal documents, and intellectual property. The leakage of this information is a problem directly linked to business survival.

When using generative AI, risks exist such as entered data being sent to external servers, or being accumulated as learning data and used for other users' outputs. Therefore, data privacy protection is being prioritized even more.

Efforts to protect data, such as data encryption and access privilege management, lead to gaining trust from customers as well as reducing legal risks in preparation for unforeseen circumstances.

 

Decline in Brand Image Due to AI Misuse

Content created by generative AI may, in some cases, include inaccurate information or inappropriate expressions. If generative AI is used to create advertisements or PR materials and they are published and spread while containing inappropriate expressions, there is a risk that a company's brand image and reliability will be damaged.

Since text, images, and videos created with generative AI are often output with organized appearances and layouts, it can sometimes cause lax checking of details.

In fact, there are cases where image ads or video ads using generative AI became negative topics, leading to a decline in brand image.

In today's world where information spreads instantly on SNS, brand damage due to generative AI misuse can be fatal. Generative AI risk measures are indispensable.

 

Strengthening of Legal Regulations

While legal regulations regarding generative AI use are being considered in various countries around the world, companies are forced to strengthen security measures also from the perspective of legal compliance.

In the EU, the "EU AI Act," a comprehensive regulation of AI including generative AI, was established in May 2024. The EU AI Act specified as transparency requirements that it must be disclosed that content was generated by AI, and that models must be designed so as not to generate illegal content.

Even in Japan, which was said to be delayed in legal response regarding generative AI compared to the US and European countries, legal regulations have been rapidly progressing in recent years from the perspective of protecting copyrights and personal information.

Related Article:

Does generative AI constitute copyright infringement? Explaining the problematic conditions, examples, and countermeasures!

 

3. Cases Where Generative AI Security Risks Were Pointed Out


Here, we introduce cases where generative AI security risks were pointed out. Let's confirm the background and impact of each case.


Samsung Electronics Employee Leaks Confidential Source Code to ChatGPT

At Samsung Electronics, a situation occurred in April 2023 where an employee entered confidential source code and recording data of internal meetings into ChatGPT, resulting in information leaking externally.

Samsung Electronics is concerned that shared data will be saved on servers of generative AI service operating companies such as OpenAI, resulting in a state where it cannot be accessed or deleted. It also cites the risk of confidential data being provided to other users via ChatGPT.

To prevent the recurrence of security risks through generative AI, Samsung banned the use of generative AI tools such as ChatGPT and Google Bard by employees.

Reference: Samsung Bans Internal Use of ChatGPT Following Leak of Confidential Code

 

New York Times Sues OpenAI and Microsoft for Copyright Infringement

On December 27, 2023, The New York Times Company (hereafter NYT) sued OpenAI and Microsoft regarding content created by generative AI from the perspective of protecting its own content.

NYT claims that millions of its own articles were used as training data by OpenAI without permission. As a result, it claims that its own interests and brand were damaged, and damages due to the unauthorized use of article data amount to billions of dollars.

Along with a claim for damages, it demands the deletion of its own content from training data and AI models.

This lawsuit clarified the risk of generative AI using existing copyrighted works without permission and highlighted the need for intellectual property rights protection and legal maintenance.

 

Reference: New York Times sues Microsoft and OpenAI for ‘billions’

Vulnerabilities Reported in DeepSeek's Generative AI

R1, a generative AI developed by the AI startup DeepSeek from China, is explosively increasing its user base with high reasoning ability and low cost.

However, multiple security and privacy vulnerabilities in the company's system were reported. In particular, the following multiple vulnerabilities have been pointed out:

 

  • Unencrypted data transmission
  • Encryption keys using weak encryption technology
  • Insecure data storage
  • Extensive data collection and fingerprinting
  • Data transmission to China

 

Following reports regarding such vulnerabilities, movements to refrain from business use of DeepSeek's AI models have been seen in some government agencies, including the US government, and large corporations.

Reference: Hundreds of Companies Ban DeepSeek, the Industry-Noted Chinese AI, Due to Data Leakage Risks

 

4. Security Measures for Safely Utilizing Generative AI


To utilize generative AI effectively, measures against security risks are also indispensable. We introduce specific security measures against generative AI.

 

Assessing Vendors

What is important in utilizing generative AI tools is the selection of highly reliable AI vendors.

When introducing generative AI tools, thoroughly evaluate the vendor's track record, technical capabilities, and the adequacy of security measures. In particular, to reduce security risks such as information leakage and unauthorized use, it is important to use tools from vendors with established legal compliance structures.

Furthermore, regarding data used for generative AI training and prompts, risks of data poisoning can be avoided by outsourcing from data collection to data annotation work to data professional companies.

 

 

Strengthening Data Protection

When utilizing generative AI via API, strengthening security measures on the company side is essential. Specifically, security measures that should be performed are as follows:

 

  • Encryption of communication paths and stored data
  • Strict setting of access privileges
  • Introduction of multi-factor and biometric authentication

 

By linking with in-house network and hardware security, the risk of confidential information leaking unauthorizedly can be minimized, and rapid response becomes possible even in the event of an incident.

 

Employee Education and Guideline Formulation for Generative AI Utilization

In the safe operation of generative AI, not only technical measures but also employee education and the formulation of clear operational guidelines are necessary.

By implementing regular education for employees regarding correct usage methods and risk management of generative AI, effects such as preventing the occurrence of information leakage or inappropriate operations due to carelessness can be expected.

Additionally, by formulating clear guidelines regarding generative AI use within a company, unified security standards are applied between departments and projects.

For example, content such as the following should be included in generative AI utilization guidelines:

 

  • Basic mechanism of generative AI
  • Points to note during use
  • Security risk countermeasure methods for generative AI
  • Importance of risk management based on specific corporate cases
  • Emergency response procedures

 

As mentioned above, it is important to clearly state specific operational rules and create a mechanism where employees monitor the operational status and security state of generative AI models.


Through such employee education and guidelines, it becomes possible to systematically manage risks of misuse and information leakage. Furthermore, vulnerabilities can be discovered early, and damage can be minimized even in the event of a security incident.

 

5. Summary

While generative AI brings major benefits to business such as operational efficiency improvement and innovation promotion, it encompasses various security risks such as learning data poisoning, information leakage, and copyright infringement.

As actual cases show, employee carelessness, deficiencies in internal management, and incorrect usage methods have serious impacts, and there are risks that develop into situations such as litigation problems and loss of intellectual property.

Therefore, when introducing generative AI into business, selecting highly reliable vendors, robust data protection measures, and formulating generative AI utilization guidelines are essential.

 

 

 

 

Author

 

 

Latest Articles