Public Consultation on Suggested Guidelines for AI Security

Artificial Intelligence (AI) is widely recognised for its ability to enhance efficiency and drive innovation across various sectors. However, inherent security threats can hinder this potential. AI systems are exposed to cybersecurity risks, including adversarial attacks in which malicious actors deliberately attempt to mislead the AI. These risks can compound existing weaknesses in business systems, leading to data leaks, breaches, and unfair or harmful outcomes. To fully realise AI’s benefits while mitigating these risks, organisations must prioritise its security.

To address security concerns, the Cyber Security Agency of Singapore (CSA) has developed the Guidelines on Securing AI Systems to help system owners protect AI throughout its lifecycle. These guidelines raise awareness of potential threats to AI behaviour and system security, offering principles and best practices for implementing security controls. Additionally, the CSA is working with AI and cybersecurity experts to create a Companion Guide that will provide practical measures and insights from both industry and academia to support the main guidelines.

CSA is also holding a public consultation on the Guidelines and the Companion Guide, which closes on September 15, 2024.

The Guidelines highlight the cybersecurity risks related to AI, covering both classical threats and Adversarial Machine Learning (ML). Adversarial ML attacks attempt to manipulate an AI model into producing inaccurate or harmful results.

Classical risks include supply chain attacks, unauthorised access, and disruptions to cloud services, data centre operations, or other digital infrastructure (e.g., Denial of Service attacks). Adversarial ML risks involve data poisoning, evasion attacks that mislead trained models, and extraction attacks that aim to steal sensitive data or the model itself.
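
To make the evasion-attack threat concrete, below is a minimal sketch using the Fast Gradient Sign Method (FGSM). The toy PyTorch classifier and random input are hypothetical stand-ins; the Guidelines themselves do not prescribe any particular attack technique or framework.

```python
# Minimal FGSM evasion-attack sketch (illustrative only; the toy model
# and random input are hypothetical stand-ins, not from the Guidelines).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

# Forward pass and loss with respect to the correct label.
loss = nn.CrossEntropyLoss()(model(x), true_label)
loss.backward()

# FGSM: nudge every pixel in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

A small, near-imperceptible perturbation like this can flip a model’s prediction, which is why the Guidelines treat evasion attacks as a distinct risk category.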

Risk Assessment

The Guidelines suggest that organisations begin securing AI systems with a risk assessment. This helps identify potential risks and priorities, leading to suitable risk management strategies. They recommend a four-step process for managing risks (a simple illustration follows the list):

  • Step 1: Conduct a risk assessment that focuses on the security risks to AI systems.
  • Step 2: Prioritise areas to address based on risk level, impact, and available resources.
  • Step 3: Identify and take the necessary actions to secure the AI system.
  • Step 4: Evaluate any remaining risks and decide whether to mitigate or accept them.
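
As a rough illustration of how the four steps might be operationalised, the sketch below scores hypothetical risks in a simple risk register. The risk items, the likelihood/impact scales, and the tolerance threshold are assumptions for illustration, not values prescribed by the Guidelines; likelihood × impact is simply a common scoring heuristic.

```python
# Illustrative risk-register sketch for the four-step process above.
# The risk items, scores, and threshold are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Step 1: record the security risks identified for the AI system.
risks = [
    Risk("training-data poisoning", likelihood=3, impact=5),
    Risk("model extraction via API", likelihood=2, impact=4),
    Risk("compromised third-party model", likelihood=2, impact=5),
]

# Step 2: prioritise by risk level.
risks.sort(key=lambda r: r.score, reverse=True)

# Steps 3-4: treat risks above a tolerance threshold; accept the rest.
THRESHOLD = 10
for r in risks:
    action = "mitigate" if r.score >= THRESHOLD else "accept (document residual risk)"
    print(f"{r.name}: score={r.score} -> {action}")
```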

Guidelines for Securing AI Systems

These Guidelines apply to all five stages of an AI system’s lifecycle. System owners should treat them as essential considerations for the secure implementation of AI.

Stage 1: Planning and Design
• Increase awareness and understanding of security risks.
• Perform security risk assessments.

Stage 2: Development
• Ensure the security of the supply chain (see the checksum sketch after this list).
• Evaluate the security benefits and trade-offs when choosing the right model.
• Identify, monitor, and safeguard AI-related assets.
• Protect the AI development environment.
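
One concrete supply-chain control, sketched below under the assumption that model weights are fetched from an external source, is to verify each downloaded artifact against a pinned cryptographic digest before loading it. The file name and digest here are hypothetical placeholders.

```python
# Supply-chain check sketch: verify a downloaded model artifact against
# a pinned SHA-256 digest before loading it. The file name and digest
# are hypothetical placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

artifact = Path("model.bin")  # hypothetical downloaded model weights
if sha256_of(artifact) != PINNED_SHA256:
    raise RuntimeError("model artifact failed integrity check; do not load")
print("model artifact integrity verified")
```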

Stage 3: Deployment
• Secure the infrastructure and environment for deploying AI systems.
• Set up incident management procedures (a minimal incident-record sketch follows this list).
• Ensure the responsible release of AI systems.
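
A minimal starting point for incident management is a structured, append-only incident log, as in the sketch below. The field names, severity levels, and JSON-lines file are illustrative assumptions, not requirements from the Guidelines.

```python
# Minimal AI incident-record sketch, assuming a JSON-lines log file
# named "ai_incidents.jsonl". Field names and severity levels are
# illustrative assumptions only.
import json
import time

def record_incident(severity: str, summary: str, affected_system: str) -> None:
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "severity": severity,  # e.g. "low" / "medium" / "high"
        "summary": summary,
        "affected_system": affected_system,
    }
    with open("ai_incidents.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_incident("high", "suspected model extraction via rapid API queries",
                "customer-support chatbot")
```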

Stage 4: Operations and Maintenance
• Monitor the inputs to the AI system (see the monitoring sketch after this list).
• Track the outputs and behaviour of the AI system.
• Implement a secure-by-design approach for updates and ongoing learning.
• Establish a vulnerability disclosure process.
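
The input and output monitoring items above can be approximated with a thin wrapper around the model call, as in the sketch below. The value bounds, the predict() stub, and the logging setup are illustrative assumptions standing in for a real model and its expected input distribution.

```python
# Sketch of input/output monitoring around a model call. The bounds,
# logger configuration, and predict() stub are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

def predict(features: list[float]) -> float:
    return sum(features) / len(features)  # stand-in for the real model

def monitored_predict(features: list[float]) -> float:
    # Input check: reject values outside the range seen in training.
    if not all(0.0 <= v <= 1.0 for v in features):
        log.warning("rejected out-of-range input: %s", features)
        raise ValueError("input outside expected range")
    output = predict(features)
    # Output check: flag predictions outside plausible bounds for review.
    if not 0.0 <= output <= 1.0:
        log.warning("anomalous output %s for input %s", output, features)
    return output

print(monitored_predict([0.2, 0.8, 0.5]))
```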

Stage 5: End of Life
• Ensure the proper disposal of data and models (as sketched below).
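
A best-effort sketch of model disposal is shown below. Note that overwriting a file in place is not guaranteed to erase data on SSDs or copy-on-write filesystems; encrypting artifacts at rest and destroying the key is generally a stronger control. The file name is a hypothetical placeholder.

```python
# End-of-life sketch: best-effort secure deletion of a model artifact.
# Caveat: on SSDs and copy-on-write filesystems, overwriting in place
# does not guarantee erasure; key destruction under at-rest encryption
# is a stronger control. The file path is a hypothetical placeholder.
import os
from pathlib import Path

def best_effort_wipe(path: Path) -> None:
    size = path.stat().st_size
    with path.open("r+b") as f:
        f.write(os.urandom(size))  # overwrite contents with random bytes
        f.flush()
        os.fsync(f.fileno())
    path.unlink()  # then remove the file itself

artifact = Path("retired_model.bin")  # hypothetical retired model file
if artifact.exists():
    best_effort_wipe(artifact)
```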

CSA’s Guidelines give businesses a structured approach to strengthening AI security and encourage a proactive stance on potential threats. The ongoing public consultation allows stakeholders to contribute their insights, ensuring that diverse perspectives shape the final security measures. As the field of AI security continues to evolve, businesses must remain vigilant and adapt their strategies to counter emerging threats, which makes continuous improvement in safeguarding AI systems a necessity.

By establishing the Guidelines on Securing AI Systems, CSA not only raises awareness of potential vulnerabilities but also encourages system owners to adopt best practices for risk management throughout the AI lifecycle.

Source: Cyber Security Agency of Singapore (CSA).
