USD Generative AI IT Security Policy

Tags: security, AI

Issue/Question

What is USD's IT Security Policy on Generative AI?

Environment

  • AI
  • Artificial intelligence
  • Machine Learning
  • Microsoft Copilot
  • CIS

Cause

Protect USD data while allowing for the improved outcomes AI provides

Resolution

USD Generative AI IT Security Policy

Policy Statement:

Generative AI is transforming problem-solving across many sectors, enhancing outcomes in business, education, and research. This policy is designed to uphold the confidentiality, integrity, and availability of USD data used by and generated with generative AI across these domains.

While embracing innovation and improved outcomes, we will also uphold our commitment to security and regulatory compliance, ensuring adherence to industry standards and legal obligations.

Compliance and Enforcement:

This policy applies to all employees, contractors, and third-party vendors who have access to generative AI within our organization. It encompasses the handling, processing, and storage of USD data used by and generated with generative AI, regardless of the platform or environment in which it is deployed. Compliance with this policy is mandatory for everyone using generative AI at the University.

Note: All portions of the policy implementation below must be in place before a generative AI system can be considered for use with non-public USD data. At this time, the only solution that meets this standard is Microsoft Copilot, which is currently in a review and testing phase. We encourage the use of and experimentation with generative AI in general; however, the use of non-public USD data with these tools is prohibited.
 

Policy Implementation:

  1. Inventory and Control of Hardware Assets (CIS Control 1):

  • Identify and maintain an inventory of hardware assets used for AI development, including servers and workstations hosting generative AI.
  • Implement device health attestation to ensure that only trusted devices can access generative AI resources.
  2. Inventory and Control of Software Assets (CIS Control 2):

  • Maintain an inventory of software assets, including generative AI, associated tools, and user accounts, ensuring timely updates and patches to mitigate vulnerabilities (a minimal inventory sketch follows this item).
  • Utilize endpoint protection solutions to continuously monitor and protect generative AI-enabled devices from threats.
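
The following is a minimal sketch of how an inventory record for generative AI hardware and software assets could be kept and queried for stale patch levels. The record fields and the 30-day patch window are illustrative assumptions, not USD standards; a production inventory would live in the organization's asset management tooling rather than custom code.

from dataclasses import dataclass
from datetime import date

@dataclass
class AIAssetRecord:
    """One entry in a hypothetical inventory of assets used for generative AI."""
    asset_id: str
    asset_type: str          # e.g., "server", "workstation", "software", "service account"
    owner: str
    hosts_generative_ai: bool
    health_attested: bool    # device health attestation passed (hardware assets)
    last_patched: date

INVENTORY: list[AIAssetRecord] = []

def register(record: AIAssetRecord) -> None:
    """Add an asset to the inventory."""
    INVENTORY.append(record)

def needs_patching(as_of: date, max_age_days: int = 30) -> list[AIAssetRecord]:
    """Flag inventoried assets whose last patch is older than the allowed window."""
    return [r for r in INVENTORY if (as_of - r.last_patched).days > max_age_days]
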
  3. Data Protection (CIS Control 3):

  • Data Classification and Labeling:
    • Utilize data protection solutions to classify and label sensitive data used by generative AI, ensuring appropriate access controls and data handling.
    • Implement a data classification policy that defines categories of sensitive data and specifies how each category should be handled and protected.
    • Apply labels to generative AI-generated content based on its sensitivity level, facilitating better data management and protection (a combined sketch of these data protection controls follows this item).
  • Data Loss Prevention (DLP):
    • Deploy DLP policies to prevent the unauthorized disclosure of sensitive data used by generative AI.
    • Configure DLP policies to monitor and enforce controls on generative AI-generated content, ensuring that sensitive information is not inadvertently shared or leaked.
    • Configure DNS security-based DLP to provide defense in depth.
  • Data Access Controls:
    • Enforce strict access controls to prevent unauthorized access to generative AI-generated content or any training data.
    • Utilize identity and access management solutions to manage user access and permissions, ensuring that only authorized individuals can access sensitive data.
    • Implement role-based access controls (RBAC) to grant permissions based on job roles and responsibilities, limiting access to generative AI data to only those who require it for their tasks.
  • Data Privacy Compliance:
    • Ensure compliance with data privacy regulations when processing and storing generative AI-generated content and any training data.
    • Conduct regular audits and assessments to verify compliance with data privacy requirements and address any gaps or deficiencies identified.
    • Provide training and awareness programs to employees and contractors on their responsibilities regarding data privacy and the protection of generative AI-generated data.
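
The following is a minimal sketch of the classification, labeling, DLP, and role-based access bullets above, assuming a hypothetical three-level label scheme and illustrative regular expressions. A real deployment would rely on the organization's approved data protection and DLP tooling rather than ad hoc code.

import re
from dataclasses import dataclass

# Hypothetical sensitivity labels, ordered from least to most sensitive.
LABELS = ["Public", "Internal", "Restricted"]

# Illustrative DLP patterns only; production DLP uses approved tooling.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Hypothetical role-to-label mapping for role-based access control (RBAC).
ROLE_MAX_LABEL = {
    "student_worker": "Public",
    "staff": "Internal",
    "data_steward": "Restricted",
}

@dataclass
class LabeledContent:
    text: str
    label: str          # sensitivity label applied to the AI-generated content
    dlp_findings: list  # names of DLP patterns that matched

def classify_and_label(text: str) -> LabeledContent:
    """Scan AI-generated text and assign a sensitivity label."""
    findings = [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]
    # Any DLP hit escalates the label to the most restrictive category.
    label = "Restricted" if findings else "Internal"
    return LabeledContent(text=text, label=label, dlp_findings=findings)

def may_access(role: str, content: LabeledContent) -> bool:
    """RBAC check: a role may only access content at or below its maximum label."""
    max_label = ROLE_MAX_LABEL.get(role, "Public")
    return LABELS.index(content.label) <= LABELS.index(max_label)

def release(content: LabeledContent, role: str) -> str:
    """Block release if DLP findings exist or the requester's role is insufficient."""
    if content.dlp_findings:
        return f"BLOCKED by DLP: {', '.join(content.dlp_findings)}"
    if not may_access(role, content):
        return f"DENIED: role '{role}' may not view '{content.label}' content"
    return content.text

if __name__ == "__main__":
    draft = classify_and_label("Summary of enrollment trends for spring term.")
    print(draft.label)                       # Internal (no DLP findings)
    print(release(draft, "staff"))           # released
    print(release(draft, "student_worker"))  # denied by RBAC
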
  4. Secure Configuration of Enterprise Assets and Software (CIS Control 4):

    • Configure generative AI and related software according to security best practices to reduce the attack surface and mitigate potential security risks.
    • Utilize security assessment tools to assess and improve the security posture of generative AI-enabled devices and configurations.
    • Follow recommended approaches for the secure deployment of generative AI, including access control, encryption, and continuous monitoring.
  5. Account Management (CIS Control 5):

    • Enforce strong authentication mechanisms, including multi-factor authentication (MFA), for access to generative AI resources, and enforce least privilege access controls.
    • Implement access policies that grant access to generative AI based on user identity, device health, and location (a minimal sketch follows this item).
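
The following is a minimal, vendor-neutral sketch of an access decision that combines MFA status, device health, location, and least-privilege roles. The signal names and allowed roles are illustrative assumptions; in practice these checks are enforced by the institution's identity and access management platform, not custom code.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool       # did the user complete multi-factor authentication?
    device_healthy: bool   # e.g., attested, patched, and managed device
    location: str          # coarse network location, e.g. "campus", "vpn", "remote"
    roles: tuple           # roles granted to the user

# Hypothetical least-privilege rule: only these roles may use the
# generative AI service with non-public data.
ALLOWED_ROLES = {"ai_pilot_user", "data_steward"}

def evaluate(request: AccessRequest) -> bool:
    """Allow access only when MFA, device health, location, and role checks all pass."""
    if not request.mfa_passed:
        return False
    if not request.device_healthy:
        return False
    if request.location not in {"campus", "vpn"}:
        return False
    return bool(ALLOWED_ROLES.intersection(request.roles))

if __name__ == "__main__":
    req = AccessRequest(user="jdoe", mfa_passed=True, device_healthy=True,
                        location="vpn", roles=("ai_pilot_user",))
    print(evaluate(req))  # True
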
  6. Data Recovery (CIS Control 11):

    • Implement data backup and recovery mechanisms to ensure the availability and integrity of generative AI-generated content.
    • Leverage backup and disaster recovery solutions to implement robust data protection for generative AI-enabled environments (a minimal backup sketch follows this item).
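
One way to read the backup bullets above, sketched under the assumption that generative AI-generated content is stored as ordinary files: each file is copied into a timestamped backup folder and a SHA-256 manifest is written so integrity can be verified at restore time. Enterprise backup tooling would normally handle this.

import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file for later integrity checks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up(source_dir: Path, backup_root: Path) -> Path:
    """Copy every file under source_dir into a timestamped backup folder
    and write a manifest of hashes alongside it."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = backup_root / stamp
    target.mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for item in source_dir.rglob("*"):
        if item.is_file():
            destination = target / item.relative_to(source_dir)
            destination.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, destination)
            manifest_lines.append(f"{sha256_of(destination)}  {destination.relative_to(target)}")
    (target / "MANIFEST.sha256").write_text("\n".join(manifest_lines))
    return target
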
  7. Secure Communication and Network Protection (CIS Control 13):

    • Encrypt communication channels used by generative AI to protect sensitive information transmitted over the network, and implement network segmentation and firewall rules.
    • Utilize network security solutions to monitor and protect network traffic in generative AI-enabled environments, and detect and respond to network-based threats (a minimal sketch of TLS enforcement follows this item).
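
The following is a minimal sketch of the encrypted-channel requirement above, using Python's standard ssl and urllib modules. The endpoint URL is a placeholder assumption; the sketch simply refuses plain HTTP and requires certificate verification with TLS 1.2 or later.

import ssl
import urllib.request
from urllib.parse import urlparse

# Hypothetical endpoint; substitute the approved service's HTTPS URL.
AI_ENDPOINT = "https://ai.example.edu/api/generate"

def tls_context() -> ssl.SSLContext:
    """Build a TLS context that verifies certificates and requires TLS 1.2+."""
    context = ssl.create_default_context()          # verifies hostname and chain
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def call_ai_service(url: str, payload: bytes) -> bytes:
    """Send a request to the generative AI service over an encrypted channel only."""
    if urlparse(url).scheme != "https":
        raise ValueError("Refusing to send data over an unencrypted channel")
    request = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, context=tls_context(), timeout=30) as response:
        return response.read()
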
  8. Security Awareness and Training (CIS Control 14):

    • Provide comprehensive security awareness and training programs to employees and contractors involved in AI development, emphasizing the security risks associated with generative AI.
    • Leverage security training and awareness resources to educate users on the latest threats and best practices for secure usage of generative AI.
  9. Incident Response (CIS Control 17):

    • Develop and maintain an incident response plan specific to generative AI-related security incidents, including procedures for detecting, reporting, and responding to incidents.
    • Utilize security incident orchestration and automation tools to enable rapid detection and remediation of security incidents involving generative AI.
  10. Penetration Testing and Red Team Exercises (CIS Control 18):

    • Conduct regular penetration testing and red team exercises to identify and address vulnerabilities in generative AI and associated systems.
    • Utilize threat intelligence to proactively identify and mitigate potential threats targeting generative AI-enabled environments.


Conclusion: By mapping key points from AI security best practices to the relevant CIS Critical Security Controls, our organization aligns its security practices with industry standards, mitigating risk and strengthening the overall security posture of our AI implementations.
 

Details

Article ID: 9023
Created
Thu 4/4/24 6:36 PM
Modified
Thu 4/25/24 10:48 AM
KCS Article Status
Validated

Related Articles (3)

What are the data classification categories or types?
Establish USD Microsoft Copilot IT Security Policy

Related Services / Offerings (1)

Microsoft Copilot generative AI service offerings available at USD.