AIRO (AI Risk Ontology) is an ontology for expressing the risks of AI systems based on the requirements of the proposed EU AI Act and the ISO 31000 series of standards. AIRO assists stakeholders in determining "high-risk" AI systems, maintaining and documenting risk information, performing impact assessments, and achieving conformity with AI regulations.

Introduction

The AI Act aims to avoid the harmful impacts of AI on critical areas such as health, safety, and fundamental rights by laying down obligations proportionate to the type and severity of risk posed by the system. It distinguishes specific areas, and applications of AI within them, that constitute "high-risk" and carry additional obligations (Art. 6), requiring providers of high-risk AI systems to identify and document the risks associated with their systems at all stages of development and deployment (Art. 9).

Existing risk management practices consist of maintaining, querying, and sharing risk-related information for compliance checking, demonstrating accountability, and building trust. Maintaining information about the risks of AI systems is a complex task given the rapid pace at which the field progresses, as well as the complexities of its lifecycle and data governance processes, where several entities are involved and need to share information for risk assessments. Investigations based on this information are consequently difficult to perform, which makes auditing and assessing compliance a challenge for organisations and authorities. To address some of these issues, the AI Act relies on the creation of standards that alleviate some of the compliance-related obligations and tasks (Art. 40).

We propose an ontology-based approach to the information required to be maintained and used for compliance and conformance with the AI Act, utilising open data specifications for documenting risks and performing AI risk assessment activities. Such data specifications use interoperable machine-readable formats to enable automation in information management, querying, and verification for self-assessments and third-party conformity assessments. They also enable automated tools for supporting AI risk management that can both import and export information to be shared with stakeholders, such as AI users, providers, and authorities.

AIRO is an ontology for expressing the risk of harm associated with AI systems, based on the proposed EU AI Act and key standards in the ISO 31000 series, including ISO 31000:2018 Risk management – Guidelines and ISO 31073:2022 Risk management — Vocabulary. AIRO assists with expressing the risks of AI systems as per the requirements of the AI Act in a machine-readable, formal, and interoperable manner through the use of semantic web technologies.

Requirements

The purpose of AIRO is to express AI risks: to enable organisations to represent their AI systems and the associated risks, to determine whether their AI systems are "high-risk" as per Annex III of the AI Act, and to maintain and document the risk information needed to demonstrate conformity with the Act.

We analysed the requirements of the AI Act, in particular the list of high-risk systems in Annex III, and identified the specific concepts whose combinations determine whether an AI system is considered high-risk. These are listed in Table 1 in the form of competency questions, concepts, and their relations with the AI system.



Table 1: Questions necessary to determine whether an AI system is high-risk according to Annex III
ID Competency Question Concept Relation
1 In which domain is the AI system used? Domain isAppliedWithinDomain
2 What is the purpose of the AI system? Purpose hasPurpose
3 What is the capability of the AI system? AICapability hasCapability
4 Who is the user of the AI system? AIUser isUsedBy
5 Who is the AI subject? AISubject hasAISubject
Figure 1: Concepts required for determining high-risk AI applications as per Annex III

To specify the conditions under which the use of an AI system is classified into the high-risk category, we determined the values of the identified concepts by answering the five questions for each clause in Annex III. The combinations of values, which can be treated as rules for identifying high-risk uses, are represented for Annex III's high-risk applications in Figure 2.
If an AI system meets at least one of these conditions, it is considered high-risk unless (i) its provider demonstrates that "the output of the system is purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights" (Art. 6(3)), or (ii) it is put into service by a small-scale provider in the public or private sector for their own use to assess creditworthiness, determine credit scores, or to carry out risk assessment and pricing for health and life insurance (Annex III, pt. 5(a) and 5(b)).
Figure 2: Describing Annex III high-risk conditions using the 5 concepts
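
To illustrate, the following Turtle snippet is a minimal sketch of how these five concepts can be asserted for a hypothetical exam-proctoring system used in education; all ex: instance IRIs are illustrative and not part of AIRO.

@prefix airo: <https://w3id.org/airo#> .
@prefix ex:   <http://example.com/ns#> .

# Hypothetical system; all ex: IRIs are introduced for this example only
ex:ExamProctoringSystem a airo:AISystem ;
    airo:isAppliedWithinDomain ex:Education ;
    airo:hasPurpose ex:AssessingStudents ;
    airo:hasCapability ex:EmotionRecognition ;
    airo:isUsedBy ex:University ;
    airo:hasAISubject ex:Student .

ex:Education a airo:Domain .
ex:AssessingStudents a airo:Purpose .
ex:EmotionRecognition a airo:AICapability .
ex:University a airo:AIUser .
ex:Student a airo:AISubject .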

Overview

Core Concepts and Relations

AIRO’s core concepts and relations are illustrated in Figure 3. The upper half shows the main concepts required for describing an AI System (green boxes), and the lower half represents key concepts for expressing Risk (purple boxes). The relation hasRisk links these two halves by connecting risk to either an AI system or a component of the system.

Figure 3: AIRO core concepts and relations
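
As a sketch of how the two halves connect, the following Turtle (again with illustrative ex: IRIs) links a risk of a hypothetical AI system to its risk source, consequence, impact, affected area, and affected stakeholder.

@prefix airo: <https://w3id.org/airo#> .
@prefix ex:   <http://example.com/ns#> .

# hasRisk connects the AI system half to the risk half
ex:ExamProctoringSystem a airo:AISystem ;
    airo:hasRisk ex:MisidentificationRisk .

ex:BiasedTrainingData a airo:RiskSource ;
    airo:isRiskSourceFor ex:MisidentificationRisk .

ex:MisidentificationRisk a airo:Risk ;
    airo:hasConsequence ex:StudentWronglyFlagged .

ex:StudentWronglyFlagged a airo:Consequence ;
    airo:hasImpact ex:UnfairExamOutcome .

ex:UnfairExamOutcome a airo:Impact ;
    airo:hasImpactOnArea ex:FundamentalRights ;
    airo:hasImpactOnStakeholder ex:Student .

ex:FundamentalRights a airo:AreaOfImpact .
ex:Student a airo:AffectedStakeholder .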

Namespace declarations

Table 3: Namespaces used in the document
airo    <https://w3id.org/airo#>
owl     <http://www.w3.org/2002/07/owl#>
rdf     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
terms   <http://purl.org/dc/terms/>
xml     <http://www.w3.org/XML/1998/namespace>
xsd     <http://www.w3.org/2001/XMLSchema#>
rdfs    <http://www.w3.org/2000/01/rdf-schema#>
prov    <http://www.w3.org/ns/prov#>
dc      <http://purl.org/dc/elements/1.1/>
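
In Turtle, these namespaces can be declared as prefix directives, for example:

@prefix airo:  <https://w3id.org/airo#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix prov:  <http://www.w3.org/ns/prov#> .
@prefix dc:    <http://purl.org/dc/elements/1.1/> .
@prefix terms: <http://purl.org/dc/terms/> .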

Classes

AI System

IRI https://w3id.org/airo#AISystem
Term AISystem
Label AI System
Definition An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy [NIST]
Source AI Act proposal
SubClassOf prov:Entity

Domain

IRI https://w3id.org/airo#Domain
Term Domain
Label Domain
Definition Refers to domain, sector, or industry

Purpose

IRI https://w3id.org/airo#Purpose
Term Purpose
Label Purpose
Definition Refers to the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation. [AI Act, Art. 3(12)]
Source AI Act proposal

AI Capability

IRI https://w3id.org/airo#AICapability
Term AICapability
Label AI Capability
Definition The capability of an AI system that enables realisation of the system's purposes

AI Technique

IRI https://w3id.org/airo#AITechnique
Term AITechnique
Label AI Technique
Definition Approach or technique used in development of an AI system

AI Lifecycle Phase

IRI https://w3id.org/airo#AILifecyclePhase
Term AILifecyclePhase
Label AI Lifecycle Phase
Definition A phase of the AI lifecycle, indicating the evolution of the system from conception through retirement

AI Component

IRI https://w3id.org/airo#AIComponent
Term AIComponent
Label AI Component
Definition Component (element) of an AI system

Output

IRI https://w3id.org/airo#Output
Term Output
Label Output
Definition Output of an AI system

Event

IRI https://w3id.org/airo#Event
Term Event
Label Event
Definition Occurrence or change of a particular set of circumstances
Source ISO 31000, 3.5

Risk Source

IRI https://w3id.org/airo#RiskSource
Term RiskSource
Label Risk Source
Definition An element that has the potential to give rise to a risk
SubClassOf airo:Event
Source ISO 31000, 3.4

Risk

IRI https://w3id.org/airo#Risk
Term Risk
Label Risk
Definition Risk of harm associated with an AI system
SubClassOf airo:Event

Consequence

IRI https://w3id.org/airo#Consequence
Term Consequence
Label Consequence
Definition Outcome of an event affecting objectives
SubClassOf airo:Event
Source ISO 31000, 3.6

Impact

IRI https://w3id.org/airo#Impact
Term Impact
Label Impact
Definition Outcome of a consequence on persons, groups, facilities, environment, etc.
SubClassOf airo:Consequence

Area Of Impact

IRI https://w3id.org/airo#AreaOfImpact
Term AreaOfImpact
Label Area Of Impact
Definition Areas that can be affected by an AI system

Control

IRI https://w3id.org/airo#Control
Term Control
Label Control
Definition A measure that maintains and/or modifies risk
Source ISO 31000, 3.8

Document

IRI https://w3id.org/airo#Document
Term Document
Label Document
Definition A piece of written, printed, or electronic matter that provides information or evidence [from Oxford Languages dictionary]
SubClassOf prov:Entity

Standard

IRI https://w3id.org/airo#Standard
Term Standard
Label Standard
Definition A resource, established by consensus and approved by a recognized body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context [ISO/IEC TR 29110-1:2016(en), 3.59]
SubClassOf prov:Entity

Stakeholder

IRI https://w3id.org/airo#Stakeholder
Term Stakeholder
Label Stakeholder
Definition Represents any individual, group or organization that can affect, be affected by or perceive itself to be affected by a decision or activity [ISO/IEC TR 29110-1:2016(en), 3.59]
SubClassOf prov:Entity

AI Provider

IRI https://w3id.org/airo#AIProvider
Term AI Provider
Label AI Provider
Definition A natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge [AI Act, Common position, Art.3(2)]
SubClassOf airo:AIOperator

AI User

IRI https://w3id.org/airo#AIUser
Term AIUser
Label AI User
Definition Any natural or legal person under whose authority the system is used [AI Act, Common position, Art.3(4)]
SubClassOf airo:AIOperator

AI Subject

IRI https://w3id.org/airo#AISubject
Term AISubject
Label AI Subject
Definition An entity that is subjected to the use of AI
SubClassOf airo:Stakeholder

Affected Stakeholder

IRI https://w3id.org/airo#AffectedStakeholder
Term AffectedStakeholder
Label Affected Stakeholder
Definition An entity that is affected by AI
SubClassOf airo:Stakeholder

AI Operator

IRI https://w3id.org/airo#AIOperator
Term AIOperator
Label AI Operator
Definition The provider, the product manufacturer, the user, the authorised representative, the importer or the distributor [AI Act, Common position, Art. 3(8)]
SubClassOf airo:Stakeholder

Version

IRI https://w3id.org/airo#Version
Term Version
Label Version
Definition A unique number or name that is assigned to a unique state of an AI system

Characteristic

IRI https://w3id.org/airo#Characteristic
Term Characteristic
Label Characteristic
Definition

Likelihood

IRI https://w3id.org/airo#Likelihood
Term Likelihood
Label Likelihood
Definition Chance of an event happening
Source ISO 31000, 3.7

Severity

IRI https://w3id.org/airo#Severity
Term Severity
Label Severity
Definition Indicates the level of severity of an event, reflecting the level of potential harm

Properties

is applied within domain

IRI https://w3id.org/airo#isAppliedWithinDomain
Term isAppliedWithinDomain
Label is applied within domain
Definition Specifies the domain an AI system is used within
Domain airo:AISystem
Range airo:Domain

has purpose

IRI https://w3id.org/airo#hasPurpose
Term hasPurpose
Label has purpose
Definition Indicates the intended purpose of an AI system
Domain airo:AISystem
Range airo:Purpose

has capability

IRI https://w3id.org/airo#hasCapability
Term hasCapability
Label has capability
Definition Specifies capabilities implemented within an AI system to materialise its purposes
Domain airo:AISystem
Range airo:AICapability

uses technique

IRI https://w3id.org/airo#usesTechnique
Term usesTechnique
Label uses technique
Definition Indicates the AI techniques used in an AI system
Domain airo:AISystem
Range airo:AITechnique

produces output

IRI https://w3id.org/airo#producesOutput
Term producesOutput
Label produces output
Definition Specifies an output generated by an AI system
Domain airo:AISystem
Range airo:Output

has component

IRI https://w3id.org/airo#hasComponent
Term hasComponent
Label has component
Definition Indicates components of an AI system
Domain airo:AISystem
Range airo:AIComponent

has risk

IRI https://w3id.org/airo#hasRisk
Term hasRisk
Label has risk
Definition Indicates risks associated with an AI system, an AI component, etc.
Domain
Range airo:Risk

is risk source for

IRI https://w3id.org/airo#isRiskSourceFor
Term isRiskSourceFor
Label is risk source for
Definition Specifies risks caused by materialisation of a risk source
Domain airo:RiskSource
Range airo:Risk

has consequence

IRI https://w3id.org/airo#hasConsequence
Term hasConsequence
Label has consequence
Definition Specifies consequences caused by materialisation of a risk
Domain airo:Risk
Range airo:Consequence

has impact

IRI https://w3id.org/airo#hasImpact
Term hasImpact
Label has impact
Definition Specifies impacts caused by materialisation of a consequence
Domain airo:Consequence
Range airo:Impact

has impact on area

IRI https://w3id.org/airo#hasImpactOnArea
Term hasImpactOnArea
Label has impact on area
Definition Specifies the area that is affected by an AI impact
Domain airo:Impact
Range airo:AreaOfImpact

has impact on stakeholder

IRI https://w3id.org/airo#hasImpactOnStakeholder
Term hasImpactOnStakeholder
Label has impact on stakeholder
Definition Specifies stakeholders that are affected by an AI impact
Domain airo:Impact
Range airo:AffectedStakeholder

modifies event

IRI https://w3id.org/airo#modifiesEvent
Term modifiesEvent
Label modifies event
Definition Indicates the control used for modification of an event
Domain airo:Control
Range airo:Event

detects event

IRI https://w3id.org/airo#detectsEvent
Term detectsEvent
Label detects event
Definition Indicates the control used for detecting an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

eliminates event

IRI https://w3id.org/airo#eliminatesEvent
Term eliminatesEvent
Label eliminates event
Definition Indicates the control used for eliminating an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

mitigates event

IRI https://w3id.org/airo#mitigatesEvent
Term mitigatesEvent
Label mitigates event
Definition Indicates the control used for mitigating an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

is followed by control

IRI https://w3id.org/airo#isFollowedByControl
Term isFollowedByControl
Label is followed by control
Definition Specifies the order of controls
Domain airo:Control
Range airo:Control

is part of control

IRI https://w3id.org/airo#isPartOfControl
Term isPartOfControl
Label is part of control
Definition Specifies composition of controls
Domain airo:Control
Range airo:Control
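
The following Turtle sketch (with illustrative ex: instances) shows how these properties can describe a control that mitigates a risk, is composed of another control, and is followed by a further control, alongside a control that detects a risk source.

@prefix airo: <https://w3id.org/airo#> .
@prefix ex:   <http://example.com/ns#> .

ex:HumanOversight a airo:Control ;
    airo:mitigatesEvent ex:MisidentificationRisk ;
    airo:isFollowedByControl ex:IncidentReporting .

# A sub-control forming part of the human oversight measure
ex:ManualReviewOfFlaggedCases a airo:Control ;
    airo:isPartOfControl ex:HumanOversight .

# A control that detects the risk source rather than modifying the risk
ex:BiasDetectionTest a airo:Control ;
    airo:detectsEvent ex:BiasedTrainingData .

ex:IncidentReporting a airo:Control .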

has documentation

IRI https://w3id.org/airo#hasDocumentation
Term hasDocumentation
Label has documentation
Definition Indicates documents related to an entity, e.g. AI system
Range airo:Document

conforms to standard

IRI https://w3id.org/airo#conformsToStandard
Term conformsToStandard
Label conforms to standard
Definition Indicates standards that an entity, e.g. an AI system, conforms to
Range airo:Standard

has stakeholder

IRI https://w3id.org/airo#hasStakeholder
Term hasStakeholder
Label has stakeholder
Definition Indicates stakeholders of an AI system
Domain airo:AISystem
Range airo:Stakeholder

is provided by

IRI https://w3id.org/airo#isProvidedBy
Term isProvidedBy
Label is provided by
Definition Indicates provider of an AI system
Domain airo:AISystem
Range airo:AIProvider

is used by

IRI https://w3id.org/airo#isUsedBy
Term isUsedBy
Label is used by
Definition Indicates user of an AI system
Domain airo:AISystem
Range airo:AIUser

has AI subject

IRI https://w3id.org/airo#hasAISubject
Term hasAISubject
Label has AI subject
Definition Indicates subject of an AI system
Domain airo:AISystem
Range airo:AISubject

affects

IRI https://w3id.org/airo#affects
Term affects
Label affects
Definition Indicates the stakeholders affected by the AI system
Domain airo:AISystem
Range airo:AffectedStakeholder

has version

IRI https://w3id.org/airo#hasVersion
Term hasVersion
Label has version
Definition Indicates the version of an AI system
Domain airo:AISystem
Range airo:Version

has severity

IRI https://w3id.org/airo#hasSeverity
Term hasSeverity
Label has severity
Definition Indicates the severity of a consequence or an impact
Domain airo:Consequence or airo:Impact
Range airo:Severity

has likelihood

IRI https://w3id.org/airo#hasLikelihood
Term hasLikelihood
Label has likelihood
Definition Indicates the probability of occurrence of an event
Domain airo:Event
Range airo:Likelihood
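
For example, severity and likelihood can be attached to the risks and impacts described earlier (illustrative ex: IRIs):

@prefix airo: <https://w3id.org/airo#> .
@prefix ex:   <http://example.com/ns#> .

ex:MisidentificationRisk a airo:Risk ;
    airo:hasLikelihood ex:LowLikelihood .

ex:UnfairExamOutcome a airo:Impact ;
    airo:hasSeverity ex:HighSeverity .

ex:LowLikelihood a airo:Likelihood .
ex:HighSeverity a airo:Severity .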

has lifecycle phase

IRI https://w3id.org/airo#hasLifecyclePhase
Term hasLifecyclePhase
Label has lifecycle phase
Definition Indicates the AI system's lifecycle phase
Domain airo:AISystem
Range airo:AILifecyclePhase

AIRO Usage and Application

Use-cases

To be published.

Identification of High-risk AI Systems

To assist with determining whether a system would be considered a high-risk AI system under the AI Act, the concepts presented in Table 1 need to be retrieved for each use-case and compared against the specific criteria described in Annex III.

For demonstration, we first utilise a SPARQL query, depicted below, to list the concepts necessary to determine whether the system is high-risk.


PREFIX airo: <https://w3id.org/airo#>
SELECT ?system ?domain ?purpose ?capability ?user ?subject 
WHERE {
      ?system a airo:AISystem ;
              airo:isAppliedWithinDomain ?domain ;
              airo:hasPurpose ?purpose ;
              airo:hasCapability ?capability ;
              airo:isUsedBy ?user ;
              airo:hasAISubject ?subject . }
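
The retrieved values can then be compared against the conditions in Figure 2. As a sketch, an ASK query such as the following (where the ex: IRIs stand in for whatever vocabulary a use-case adopts for domains and purposes) tests one such condition, namely an AI system applied in education for the purpose of assessing students:

PREFIX airo: <https://w3id.org/airo#>
PREFIX ex:   <http://example.com/ns#>
ASK {
      ?system a airo:AISystem ;
              airo:isAppliedWithinDomain ex:Education ;
              airo:hasPurpose ex:AssessingStudents .
}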


References

1. Artificial Intelligence Act: Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELLAR:e0649735-a372-11eb-9585-01aa75ed71a1, (2021).
2. ISO 31000 Risk management — Guidelines, (2018).
3. ISO/IEC DIS 22989(en) Information technology — Artificial intelligence — Artificial intelligence concepts and terminology, https://www.iso.org/obp/ui/#iso:std:iso-iec:22989:dis:ed-1:v1:en, (2022).

Acknowledgments

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497, as part of the ADAPT SFI Centre for Digital Media Technology. The ADAPT Centre is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106_P2.