USD Generative AI IT Security Policy
Policy Statement:
This policy is designed to uphold the integrity, confidentiality, and availability of data entered into and generated by generative AI across diverse use cases. Generative AI revolutionizes problem-solving across various sectors, enhancing outcomes in business, education, and research.
While we embrace innovation and improved outcomes, we remain committed to security and regulatory compliance, ensuring adherence to industry standards and legal obligations.
Compliance and Enforcement:
This policy applies to all employees, contractors, and third-party vendors who have access to generative AI within our organization. It encompasses the handling, processing, and storage of USD data used by and generated with generative AI, regardless of the platform or environment in which it is deployed. Compliance with this policy is mandatory for everyone using generative AI at the University.
Note: All portions of the policy implementation below must be in place for a generative AI system to be considered an approved AI technology. At this time, the only solution that meets this standard is Microsoft Copilot. Submit a Software Request to have your use case reviewed before using any other AI technology.
We encourage the use of and experimentation with generative AI; however, use of restricted USD data without prior authorization is prohibited.
Policy Implementation:
- Inventory and Control of Hardware Assets (CIS Control 1):
- Identify and maintain an inventory of hardware assets used for AI development, including servers and workstations hosting generative AI.
- Implement device health attestation to ensure that only trusted devices can access generative AI resources.
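For illustration only, the following Python sketch shows the kind of device-trust check this control calls for: a device reaches generative AI resources only if it appears in the hardware inventory and reports a healthy attestation. The inventory entries, field names, and device identifiers are hypothetical, not a prescribed implementation.

    def device_is_trusted(device_id, inventory):
        """Return True only if the device is inventoried and reports a healthy attestation."""
        record = inventory.get(device_id)
        if record is None:
            return False  # unknown hardware is never granted access
        return record.get("attestation") == "healthy"

    if __name__ == "__main__":
        # Hypothetical inventory entries; a real deployment would pull these from the asset database.
        inventory = {
            "WKS-0042": {"owner": "AI Lab", "attestation": "healthy"},
            "SRV-0108": {"owner": "Research Computing", "attestation": "unhealthy"},
        }
        for device in ("WKS-0042", "SRV-0108", "WKS-9999"):
            status = "allowed" if device_is_trusted(device, inventory) else "denied"
            print(f"{device}: access {status}")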
- Inventory and Control of Software Assets (CIS Control 2):
- Maintain an inventory of software assets, including generative AI, associated tools, and user accounts, and ensure timely updates and patches to mitigate vulnerabilities.
- Utilize endpoint protection solutions to continuously monitor and protect generative AI-enabled devices from threats.
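As a minimal sketch of this control, the example below checks installed AI software against an approved inventory with minimum patched versions. The tool names and version numbers are hypothetical placeholders, not the University's actual inventory.

    # Hypothetical approved-software inventory: tool name -> minimum approved version.
    APPROVED_AI_SOFTWARE = {
        "copilot-desktop": (1, 24, 0),
    }

    def parse_version(text):
        """Convert a dotted version string such as '1.23.5' into a comparable tuple."""
        return tuple(int(part) for part in text.split("."))

    def review_installation(name, version):
        """Flag software that is not inventoried or is missing required updates."""
        minimum = APPROVED_AI_SOFTWARE.get(name)
        if minimum is None:
            return f"{name} {version}: not in the software inventory; submit a Software Request."
        if parse_version(version) < minimum:
            return f"{name} {version}: below the minimum approved version; patching required."
        return f"{name} {version}: approved and up to date."

    if __name__ == "__main__":
        for name, version in [("copilot-desktop", "1.23.5"), ("copilot-desktop", "1.25.1"), ("chat-helper", "0.9.0")]:
            print(review_installation(name, version))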
- Data Protection (CIS Control 3):
- Data Classification and Labeling:
- Utilize data protection solutions to classify and label sensitive data used by generative AI, ensuring appropriate access controls and data handling.
- Implement a data classification policy that defines categories of sensitive data and specifies how each category should be handled and protected.
- Apply labels to generative AI-generated content based on its sensitivity level, facilitating better data management and protection.
- Data Loss Prevention (DLP):
- Deploy DLP policies to prevent the unauthorized disclosure of sensitive data used by generative AI (see the prompt-screening sketch at the end of this control).
- Deploy browser-based plugins to govern sensitive data use in AI systems. Microsoft Edge, Google Chrome, and Mozilla Firefox are the only browsers authorized for interaction with AI technologies.
- Configure DLP policies to monitor and enforce controls on generative AI-generated content, ensuring that sensitive information is not inadvertently shared or leaked.
- Configure DNS security-based DLP to provide defense in depth.
- Data Access Controls:
- Enforce strict access controls to prevent unauthorized access to generative AI-generated content or any training data.
- Utilize identity and access management solutions to manage user access and permissions, ensuring that only authorized individuals can access sensitive data.
- Implement role-based access controls (RBAC) to grant permissions based on job roles and responsibilities, limiting access to generative AI data to only those who require it for their tasks.
- Data Privacy Compliance:
- Ensure compliance with data privacy regulations when processing and storing generative AI-generated content and any training data.
- Conduct regular audits and assessments to verify compliance with data privacy requirements and address any gaps or deficiencies identified.
- Provide training and awareness programs to employees and contractors on their responsibilities regarding data privacy and the protection of generative AI-generated data.
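The prompt-screening sketch referenced under Data Loss Prevention above is shown here. It scans text bound for a generative AI tool against a few illustrative patterns (a U.S. Social Security number format and a hypothetical "USD-RESTRICTED" label) and blocks submission on a match; production DLP relies on the detection rules and classification labels configured in the data protection platform, not on this list.

    import re

    # Illustrative patterns only; actual DLP policies use the classification labels
    # and detection rules defined by the data protection platform.
    SENSITIVE_PATTERNS = {
        "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "Restricted label": re.compile(r"\bUSD[- ]RESTRICTED\b", re.IGNORECASE),
    }

    def screen_prompt(prompt):
        """Return the list of sensitive-data findings in text bound for a generative AI tool."""
        return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

    if __name__ == "__main__":
        prompt = "Summarize this record for student 123-45-6789 (USD-RESTRICTED)."
        findings = screen_prompt(prompt)
        if findings:
            print("Blocked before submission:", ", ".join(findings))
        else:
            print("No restricted data detected; prompt may be submitted.")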
- Secure Configuration of Enterprise Assets and Software (CIS Control 4):
- Configure generative AI and related software according to security best practices to reduce the attack surface and mitigate potential security risks.
- Utilize security assessment tools to assess and improve the security posture of generative AI-enabled devices and configurations.
- Follow recommended approaches for the secure deployment of generative AI, including access control, encryption, and continuous monitoring.
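As a minimal sketch under assumed setting names, the check below compares a generative AI deployment's configuration to a secure baseline and reports deviations. The baseline keys (encryption_at_rest, tls_required, and so on) are hypothetical; actual baselines follow vendor hardening guidance and security best practices.

    # Hypothetical secure-configuration baseline for a generative AI deployment.
    SECURE_BASELINE = {
        "encryption_at_rest": True,
        "tls_required": True,
        "public_network_access": False,
        "audit_logging": True,
    }

    def configuration_findings(config):
        """Compare a deployment's settings to the baseline and list deviations."""
        findings = []
        for setting, required in SECURE_BASELINE.items():
            actual = config.get(setting)
            if actual != required:
                findings.append(f"{setting}: expected {required}, found {actual}")
        return findings

    if __name__ == "__main__":
        deployment = {
            "encryption_at_rest": True,
            "tls_required": True,
            "public_network_access": True,   # deviation: exposed to the public internet
            "audit_logging": None,           # deviation: logging not configured
        }
        for finding in configuration_findings(deployment) or ["No deviations from the secure baseline."]:
            print(finding)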
- Account Management (CIS Control 5):
- Enforce strong authentication mechanisms, including multi-factor authentication (MFA), for access to generative AI resources, and enforce least privilege access controls.
- Implement access control policies to control access to generative AI based on user identity, device health, and location.
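For illustration, the sketch below combines the two requirements above: access is granted only when the user has completed multi-factor authentication and the requested permission falls within the user's role. The role names and permissions are hypothetical examples of least-privilege RBAC, not the University's actual role model.

    # Hypothetical role-to-permission mapping following least privilege.
    ROLE_PERMISSIONS = {
        "ai-researcher": {"submit_prompts", "view_own_outputs"},
        "ai-administrator": {"submit_prompts", "view_own_outputs", "manage_models", "view_audit_logs"},
    }

    def authorize(user, permission):
        """Grant access only to MFA-verified users whose role includes the requested permission."""
        if not user.get("mfa_verified"):
            return False  # strong authentication is a precondition for any access
        return permission in ROLE_PERMISSIONS.get(user.get("role"), set())

    if __name__ == "__main__":
        researcher = {"name": "torero01", "role": "ai-researcher", "mfa_verified": True}
        print(authorize(researcher, "submit_prompts"))   # True
        print(authorize(researcher, "view_audit_logs"))  # False: outside the researcher role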
- Data Recovery (CIS Control 11):
- Implement data backup and recovery mechanisms to ensure the availability and integrity of generative AI-generated content.
- Leverage backup and disaster recovery solutions to implement robust data protection for generative AI-enabled environments.
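A minimal sketch of backup with integrity verification appears below: generative AI output is copied to a backup location, its SHA-256 digest is recorded, and the digest is rechecked before any restore. File and directory names are placeholders; production backups use the enterprise backup and disaster recovery platform.

    import hashlib
    import shutil
    from pathlib import Path

    def backup_with_checksum(source, backup_dir):
        """Copy a file of generative AI output to the backup location and record its SHA-256 digest."""
        source = Path(source)
        backup_dir = Path(backup_dir)
        backup_dir.mkdir(parents=True, exist_ok=True)
        destination = backup_dir / source.name
        shutil.copy2(source, destination)
        digest = hashlib.sha256(destination.read_bytes()).hexdigest()
        Path(str(destination) + ".sha256").write_text(digest)
        return destination

    def verify_backup(backup_file):
        """Confirm that a backed-up file still matches its recorded digest before restoring it."""
        backup_file = Path(backup_file)
        recorded = Path(str(backup_file) + ".sha256").read_text().strip()
        return hashlib.sha256(backup_file.read_bytes()).hexdigest() == recorded

    if __name__ == "__main__":
        sample = Path("generated_report.txt")
        sample.write_text("Example generative AI output.")
        copy = backup_with_checksum(sample, "backups")
        print(copy, "verified:", verify_backup(copy))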
- Secure Communication and Network Protection (CIS Control 13):
- Encrypt communication channels used by generative AI to protect sensitive information transmitted over the network, and implement network segmentation and firewall rules.
- Utilize network security solutions to monitor and protect network traffic in generative AI-enabled environments, and detect and respond to network-based threats.
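For illustration, the sketch below opens a channel that refuses unencrypted or unverified connections: certificates are validated against trusted authorities, hostnames are checked, and legacy protocol versions are rejected. The endpoint name ai.example.edu is a hypothetical placeholder for an approved generative AI service.

    import socket
    import ssl

    def open_verified_tls_channel(host, port=443):
        """Open a TLS connection that enforces certificate validation and hostname checking."""
        context = ssl.create_default_context()             # validates certificates against trusted CAs
        context.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse legacy protocol versions
        sock = socket.create_connection((host, port), timeout=10)
        return context.wrap_socket(sock, server_hostname=host)

    if __name__ == "__main__":
        # "ai.example.edu" is a placeholder; substitute the approved service's hostname.
        with open_verified_tls_channel("ai.example.edu") as channel:
            print("Negotiated protocol:", channel.version())
            print("Cipher suite:", channel.cipher())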
- Security Awareness and Training (CIS Control 14):
- Provide comprehensive security awareness and training programs to employees and contractors involved in AI development, emphasizing the security risks associated with generative AI.
- Leverage security training and awareness resources to educate users on the latest threats and best practices for secure usage of generative AI.
- Incident Response (CIS Control 17):
- Develop and maintain an incident response plan specific to generative AI-related security incidents, including procedures for detecting, reporting, and responding to incidents.
- Utilize security incident orchestration and automation tools to enable rapid detection and remediation of security incidents involving generative AI.
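As a minimal sketch of the reporting step, the example below captures a structured record for a generative AI-related incident and appends it to a local log. Field names, severity levels, and the log location are hypothetical; the actual plan defines the reporting channel and the orchestration tooling that is triggered.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIIncident:
        """Minimal record for a generative AI-related security incident."""
        summary: str
        severity: str      # e.g. "low", "moderate", "high"
        system: str        # affected generative AI system
        detected_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
        actions: list = field(default_factory=list)

    def report(incident):
        """Append the incident to a local log; a real plan would also notify the response team."""
        with open("ai_incident_log.txt", "a") as log:
            log.write(f"{incident}\n")

    if __name__ == "__main__":
        incident = AIIncident(
            summary="Restricted data pasted into an unapproved chatbot",
            severity="high",
            system="unapproved third-party chatbot",
        )
        incident.actions.append("Account suspended pending review")
        report(incident)
        print("Incident recorded:", incident.summary)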
- Penetration Testing and Red Team Exercises (CIS Control 18):
- Conduct regular penetration testing and red team exercises to identify and address vulnerabilities in generative AI and associated systems.
- Utilize threat intelligence to proactively identify and mitigate potential threats targeting generative AI-enabled environments.
Conclusion: By mapping key points from AI security best practices to the relevant CIS Critical Security Controls, our organization aligns its security practices with industry standards, mitigating risks and enhancing the overall security posture of our AI implementation.