ISO/PAS 8800:2024 is a Publicly Available Specification that addresses the safety of artificial intelligence (AI) and machine learning (ML) systems in road vehicles. It extends established automotive safety standards, such as ISO 26262 (functional safety) and ISO 21448 (Safety of the Intended Functionality, SOTIF), to the distinctive challenges and risks that AI technologies introduce. Compliance with ISO/PAS 8800 requires organizations to adopt a structured, lifecycle-based approach to the development, deployment, and monitoring of AI systems in safety-critical automotive applications.

Integration with Existing Standards

ISO/PAS 8800 is designed to complement, not replace, existing automotive safety standards:

| Standard  | Focus Area                                   | Relationship to ISO/PAS 8800                                      |
|-----------|----------------------------------------------|-------------------------------------------------------------------|
| ISO 26262 | Functional safety of E/E systems             | ISO/PAS 8800 extends its methods to AI/ML components               |
| ISO 21448 | Safety of the Intended Functionality (SOTIF) | ISO/PAS 8800 addresses AI-specific insufficiencies and risks       |
| ISO 9001  | Quality management systems                   | Forms the organizational foundation for AI safety management       |


Best Practices for Successful Compliance

  • Early Risk Identification: Address potential AI-specific hazards during concept and design phases.

  • Traceability: Maintain clear links between safety requirements, AI components, and test results.

  • Continuous Improvement: Use field data and post-market monitoring to refine and update AI models and safety measures.

  • Training and Awareness: Ensure staff are trained on the requirements and intent of ISO/PAS 8800 and related standards.

By following the structured steps outlined below, organizations can systematically address the unique safety challenges posed by AI in automotive systems, build robust assurance arguments, and demonstrate compliance with ISO/PAS 8800.
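The traceability practice above can be made machine-checkable with even a very simple mapping from safety requirements to implementing components and verifying tests. The sketch below assumes a flat dictionary representation; every requirement, component, and test identifier is invented for illustration:

```python
# Minimal traceability sketch: each safety requirement maps to the AI
# components that implement it and the test cases that verify it.
# All identifiers below are illustrative, not taken from ISO/PAS 8800.
trace = {
    "SR-001 pedestrian detection recall >= 0.99": {
        "components": ["detector_model_v3"],
        "tests": ["TC-101", "TC-102"],
    },
    "SR-002 fail-safe behaviour on low confidence": {
        "components": ["confidence_gate"],
        "tests": [],  # a traceability gap an audit should surface
    },
}

def untraced_requirements(trace):
    """Return safety requirements that lack a verifying test case."""
    return [req for req, links in trace.items() if not links["tests"]]

print(untraced_requirements(trace))
```

In practice such links would live in a requirements-management tool; the point is that gaps in the requirement-to-test chain can be detected automatically rather than during an audit.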

Key Steps for Organizational Compliance

  • Establish an AI Safety Management Framework

  • Define AI System Scope and Safety Requirements

  • Data Management and Quality Assurance

  • AI System Design and Architecture

  • Verification and Validation (V&V)

  • Safety Analysis and Risk Management

  • Lifecycle Management and Post-Market Measures

  • Documentation and Assurance Argumentation

  • Establish an AI Safety Management Framework

    · Define roles, responsibilities, and processes for AI safety management.

    · Integrate AI safety into the organization’s quality management system (QMS), building on standards like ISO 9001 and IATF 16949.

    · Ensure cross-functional collaboration between AI specialists, safety engineers, and domain experts.

  • Define AI System Scope and Safety Requirements

    · Clearly identify and document the AI elements within the vehicle system (e.g., trained models, decision modules).

    · Specify a complete set of AI safety requirements, ensuring traceability to system-level safety goals and regulatory needs.

    · Refine requirements iteratively based on development insights and verification results.

  • Data Management and Quality Assurance

    · Establish rigorous data management practices to ensure data quality, completeness, and representativeness for AI training and validation.

    · Document dataset creation, validation, and limitations, including known insufficiencies or gaps in coverage.
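A representativeness check of the kind described above can be sketched as a comparison of dataset labels against the declared operational conditions. The condition names and the minimum-sample threshold are illustrative assumptions:

```python
# Sketch of a dataset coverage check against declared operational conditions.
# Condition names and the threshold are illustrative, not prescribed values.
from collections import Counter

REQUIRED_CONDITIONS = {"day", "night", "rain", "fog"}
MIN_SAMPLES_PER_CONDITION = 2  # a real threshold would come from the safety case

def coverage_gaps(sample_conditions):
    """Report conditions that are missing or under-represented in the data."""
    counts = Counter(sample_conditions)
    return {
        cond: counts.get(cond, 0)
        for cond in REQUIRED_CONDITIONS
        if counts.get(cond, 0) < MIN_SAMPLES_PER_CONDITION
    }

labels = ["day", "day", "night", "night", "rain"]
print(coverage_gaps(labels))  # fog is missing, rain is under-represented
```

Documenting the output of such checks is one way to record the "known insufficiencies or gaps in coverage" that the standard asks for.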

  • AI System Design and Architecture

    · Select and justify the use of specific AI technologies and architectures with respect to safety requirements.

    · Allocate safety requirements to specific AI components, and define architectural and development measures to mitigate risks.

    · Address differences between development and operational environments, adapting measures as necessary.
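One common architectural measure for allocating safety requirements to an ML component is to wrap it in a confidence gate that degrades to a conservative behaviour. The threshold and all names below are illustrative assumptions, not measures prescribed by ISO/PAS 8800:

```python
# Sketch of an architectural mitigation: a confidence gate around the ML
# component. The threshold would in practice be derived from the safety
# requirements allocated to this component; 0.9 is an invented placeholder.
CONFIDENCE_THRESHOLD = 0.9

def gated_decision(model_output):
    """Accept the model's decision only above the confidence threshold."""
    label, confidence = model_output
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Below threshold, hand over to a non-ML fallback (e.g. conservative
    # braking profile or driver takeover request).
    return "REQUEST_FALLBACK"

print(gated_decision(("pedestrian", 0.97)))  # pedestrian
print(gated_decision(("clear_road", 0.55)))  # REQUEST_FALLBACK
```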

  • Verification and Validation (V&V)

    · Implement comprehensive verification and validation processes for both AI components and integrated systems.

    · Develop test cases using best practices, including requirements analysis, boundary value analysis, and error guessing.

    · Ensure each test case has clear pass/fail criteria and covers the specified input space.
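Boundary value analysis with explicit pass/fail criteria can be sketched as follows. The braking rule and its 2-second threshold are a toy stand-in for a real system requirement:

```python
# Sketch of boundary-value test cases with explicit pass/fail criteria.
# The rule under test is a toy AEB-style requirement: request braking
# when time-to-collision falls below 2 s. All values are illustrative.

def brake_request(distance_m, speed_mps):
    """Request braking when time-to-collision < 2 s."""
    if speed_mps <= 0:
        return False
    return distance_m / speed_mps < 2.0

# Each test case states its inputs and an explicit expected result,
# so the pass/fail criterion is unambiguous.
test_cases = [
    {"distance_m": 19.9, "speed_mps": 10.0, "expect": True},   # just inside the boundary
    {"distance_m": 20.0, "speed_mps": 10.0, "expect": False},  # exactly on the boundary
    {"distance_m": 20.1, "speed_mps": 10.0, "expect": False},  # just outside the boundary
]

for case in test_cases:
    result = brake_request(case["distance_m"], case["speed_mps"])
    verdict = "PASS" if result == case["expect"] else "FAIL"
    print(verdict, case)
```

For an ML component the same structure applies, but "expected result" is typically a statistical criterion (e.g. a minimum recall over a defined input region) rather than a single deterministic value.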

  • Safety Analysis and Risk Management

    · Conduct systematic safety analyses to identify and address potential hazards, including those unique to AI (e.g., bias, performance limitations, misclassification).

    · Apply cause-and-effect modeling to trace root causes of functional insufficiencies and define mitigation strategies.
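One concrete AI-specific analysis mentioned above, misclassification, can be quantified per class and compared against a hazard-derived limit. The class names and data here are invented for illustration:

```python
# Sketch of a per-class misclassification check: the false-negative rate
# for a safety-relevant class (missed pedestrians) versus a limit that
# would, in practice, be derived from the hazard analysis. Data is invented.

def false_negative_rate(truths, predictions, target_class):
    """FN rate for one class: missed detections / actual occurrences."""
    actual = [p for t, p in zip(truths, predictions) if t == target_class]
    if not actual:
        return 0.0
    misses = sum(1 for p in actual if p != target_class)
    return misses / len(actual)

truths      = ["ped", "ped", "ped", "ped", "car"]
predictions = ["ped", "ped", "ped", "car", "car"]

fnr = false_negative_rate(truths, predictions, "ped")
print(f"pedestrian FN rate: {fnr:.2f}")  # 1 miss out of 4 -> 0.25
```

Tracing a high false-negative rate back to its cause (e.g. under-represented conditions in the training data) is exactly the kind of cause-and-effect modelling the step above describes.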

  • Lifecycle Management and Post-Market Measures

    · Manage the AI safety lifecycle from concept through deployment, including maintenance and field monitoring.

    · Implement processes for monitoring AI system performance in the field, detecting emerging risks, and updating models as necessary.
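Field monitoring for emerging risks often starts with distribution-drift detection on a signal such as model confidence. The sketch below uses a simple mean-shift test with an invented threshold; it is one possible monitoring heuristic, not a method prescribed by the standard:

```python
# Sketch of post-market monitoring: flag drift in field data relative to a
# training-time baseline. The z-score test and 3-sigma limit are illustrative
# assumptions; real deployments would use more robust statistics.
from statistics import mean, stdev

def drifted(baseline, field, z_limit=3.0):
    """Flag drift when the field mean leaves the baseline's z_limit band."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return bool(field) and mean(field) != mu
    return abs(mean(field) - mu) / sigma > z_limit

baseline_confidences = [0.91, 0.93, 0.92, 0.94, 0.90]
field_confidences    = [0.70, 0.72, 0.69, 0.71, 0.68]

print(drifted(baseline_confidences, field_confidences))  # True -> investigate
```

A drift flag would feed the update process described above: investigate the cause, retrain or recalibrate if needed, and re-run the affected verification activities before redeploying.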

  • Documentation and Assurance Argumentation

    · Produce comprehensive documentation for all phases: requirements, design, data, V&V, and safety analyses.

    · Develop a safety assurance argument demonstrating the absence of unreasonable risk due to AI system behavior.
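An assurance argument is often structured as a tree of claims supported by evidence (in the style of goal-structuring notation), which makes gaps mechanically detectable. The claims and evidence identifiers below are invented for illustration:

```python
# Sketch of a machine-checkable assurance-argument skeleton, loosely in the
# style of goal-structuring notation. All claims and evidence IDs are invented.
argument = {
    "claim": "Residual risk from AI system behaviour is not unreasonable",
    "subclaims": [
        {"claim": "Training data covers the ODD", "evidence": ["DOC-DATA-01"]},
        {"claim": "V&V results meet pass criteria", "evidence": ["RPT-VV-07"]},
        {"claim": "Field monitoring is in place", "evidence": []},
    ],
}

def unsupported_claims(node):
    """Collect leaf claims that have neither evidence nor sub-claims."""
    gaps = []
    for sub in node.get("subclaims", []):
        if not sub.get("evidence") and not sub.get("subclaims"):
            gaps.append(sub["claim"])
        gaps.extend(unsupported_claims(sub))
    return gaps

print(unsupported_claims(argument))  # ['Field monitoring is in place']
```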
