
DeepSeek’s Appeal

DeepSeek’s biggest draw is its cost-efficient, open-weight model, which significantly reduces AI deployment expenses. Unlike proprietary AI models, DeepSeek allows businesses to train and fine-tune models at a much lower cost. The company reportedly trained its model for just under $6 million, a stark contrast to the billions spent by U.S. tech giants.

This affordability has made DeepSeek attractive for certain enterprise use cases, particularly coding assistants, internal knowledge exploration, and processing sensitive data in air-gapped environments (where no data leaves company premises). Startups and academic institutions are also leveraging the technology due to its accessibility and flexibility.



            The Security Risks of DeepSeek

While DeepSeek presents many of the same security risks as other generative AI models, such as bias, hallucinations, and potential copyright infringement, it also introduces unique concerns tied to its Chinese data centers. A few prominent risks include:

Data transparency and compliance risks: DeepSeek does not disclose its training data, making it difficult to assess potential biases or data rights issues. U.S. and European enterprises must weigh regulatory compliance risks, such as GDPR restrictions on cross-border data transfers, when handling sensitive data with an AI model hosted in China.

Code and security vulnerabilities: All AI-generated code must be tested for security, suitability, and IP protection, ensuring it does not include open-source components with restrictive licenses that could compromise proprietary rights (a minimal sketch of such an automated check follows this list). For DeepSeek specifically, the undisclosed training data raises concerns that the model may reproduce coding patterns with inherent security vulnerabilities.

Data storage and government access risks: DeepSeek stores user data in China, raising concerns about government intervention. Chinese data center providers must comply with local laws, which could allow authorities to access stored information. This concern has fueled calls for tighter AI regulations and influenced the decision to ban DeepSeek from U.S. government devices.
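To make the code-review point above concrete, the short Python sketch below shows the kind of automated gate a team might run over AI-generated snippets before accepting them: it flags dynamic-execution calls and imports that deserve a manual look. The pattern lists and function name are illustrative assumptions; a production pipeline would rely on dedicated static-analysis, dependency, and license-scanning tools rather than a script like this.

import ast

# Builtins whose presence in generated code warrants rejection or manual review.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
# Imports that should trigger a closer look before the snippet is accepted.
RISKY_MODULES = {"pickle", "subprocess", "os", "ctypes"}

def review_generated_code(source: str) -> list[str]:
    """Return human-readable findings for a snippet of AI-generated Python code."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag direct calls to dynamic-execution builtins such as eval()/exec().
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Flag imports of modules that require a security review.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            module = getattr(node, "module", None)
            names = [alias.name for alias in node.names]
            for name in ([module] if module else []) + names:
                if name.split(".")[0] in RISKY_MODULES:
                    findings.append(f"line {node.lineno}: import of {name}")
    return findings

if __name__ == "__main__":
    snippet = "import subprocess\nresult = eval(user_input)\n"
    for finding in review_generated_code(snippet):
        print("FLAG:", finding)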



            Mitigating Risks: AI Governance and Compliance Best Practices

Enterprises looking to integrate DeepSeek, or any AI model, must implement a responsible AI framework to ensure security, privacy, and compliance. Elements of this framework include:

Self-hosting AI models: Running DeepSeek in a closed environment to prevent external data access.

Using synthetic data or strong anonymization protocols: Deploying advanced synthetic data generation tools to simulate real datasets, or applying strong anonymization so that sensitive information is never exposed to the model (see the redaction sketch after this list).

Implementing strict AI governance: Establishing internal policies to oversee AI model use, security testing, and compliance with international regulations.
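To illustrate the self-hosting and anonymization practices above, the Python sketch below redacts obvious personal data from a prompt before sending it to a model hosted inside the company network. It assumes an OpenAI-compatible chat-completions endpoint of the kind exposed by common self-hosting stacks; the endpoint URL, model name, and regex patterns are illustrative assumptions, and a production deployment would use dedicated PII-detection and data-loss-prevention tooling rather than ad-hoc regular expressions.

import re
import requests

# Simple placeholder patterns for common PII; real deployments would use a
# dedicated PII-detection service instead of hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of simple PII patterns with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ask_local_model(prompt: str) -> str:
    """Send a redacted prompt to a self-hosted, OpenAI-compatible endpoint."""
    response = requests.post(
        "http://localhost:8000/v1/chat/completions",  # assumed internal endpoint
        json={
            "model": "deepseek-r1",  # assumed name of the locally hosted model
            "messages": [{"role": "user", "content": redact(prompt)}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The prompt never leaves the premises, and PII is masked before inference.
    print(redact("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."))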





