This Practice Exam is for the Certified in the Governance of AI Systems credential, which is awarded to candidates who exhibit a comprehensive understanding of enabling effective governance for artificial intelligence systems. This includes achieving ethical behaviour, responsible stewardship, and successful performance, all aligned with stakeholder expectations. Candidates must also demonstrate competence in adapting or enhancing the established components of the AI governance system to ensure it remains effective, efficient, and suitable for the unique and dynamic nature of the artificial intelligence system and its context.
The governance of artificial intelligence systems is defined as "the system by which the current and future use of artificial intelligence (AI) is directed and controlled. This involves evaluating and directing the use of artificial intelligence systems to support the organisation’s purposes and monitoring this use to achieve its strategic plans. It includes the organisational structures, principles, values, strategy, policies, processes, and controls for using AI within the organisation."
Certified in the Governance of Artificial Intelligence Systems (CGAIS)
This credential is granted to candidates who demonstrate an understanding of the organisational implications of using artificial intelligence systems. They possess the expertise to direct and control artificial intelligence system usage within their organisations. Candidates are expected to comprehend the accountability requirements of the EU AI Act, the General Data Protection Regulation, relevant ISO standards, authoritative guidance, and best practices from institutions like the OECD.
The Certified in the Governance of AI Systems exam consists of 100 questions covering 13 job practice domains, testing the candidate's knowledge and ability in the real-life job practices of expert professionals. The exam comprises a similar number of questions from each domain. A passing score requires answering at least 70% of the questions correctly.
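As a quick illustration of the arithmetic above, the following sketch distributes 100 questions as evenly as possible across 13 domains and computes the 70% pass threshold. It is illustrative only and is not part of the official exam specification.

```python
# Illustrative sketch (not the official exam blueprint): spreading
# 100 questions roughly evenly across 13 domains, and computing the
# number of correct answers needed to reach the 70% passing score.

TOTAL_QUESTIONS = 100
DOMAINS = 13
PASS_RATE = 0.70

def questions_per_domain(total: int, domains: int) -> list[int]:
    """Spread `total` questions as evenly as possible across `domains`."""
    base, remainder = divmod(total, domains)
    # The first `remainder` domains receive one extra question.
    return [base + 1 if i < remainder else base for i in range(domains)]

counts = questions_per_domain(TOTAL_QUESTIONS, DOMAINS)
pass_threshold = int(TOTAL_QUESTIONS * PASS_RATE)

print(counts)          # nine domains with 8 questions, four with 7
print(pass_threshold)  # 70
```

With 100 questions over 13 domains this yields counts of 7 or 8 per domain, consistent with "a similar number of questions from each domain."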
JOB PRACTICE DOMAINS
Domain 1: Governance, value generation, and ethical outcomes from AI systems
Description: The governing body should approve artificial intelligence system value generation objectives (i.e., purpose) that support the organisation's purpose in line with the organisation's values and the natural environment, social and economic context within which it operates.
Its value generation model should clarify:
• what value the organisation intends the artificial intelligence system to generate (define).
• how the artificial intelligence system should generate that value (create).
• how the generation of value will be assured (deliver).
• how the value generated by the artificial intelligence system is to be retained and distributed (sustain).

Tasks:
• Optimise value creation for stakeholders from investments in artificial intelligence systems and supporting assets.
• Set parameters for the ethical intentions of the artificial intelligence system towards the natural environment and the social and economic context within which it operates.
• Ensure value-generation objectives can fulfil organisational purposes, can be delivered, and will remain viable over time.
• Evaluate the benefits realised from AI-enabled investments, processes, services, and systems.
• Review whether the deployment of the artificial intelligence system remains consistent with the organisation's strategic intent, purpose, and values.
Domain 2: Components of AI Governance, Risk management, and Compliance frameworks
Description: The governing body should establish an artificial intelligence governance framework, encompassing risk management, compliance management, and post-market monitoring systems. This ensures that artificial intelligence systems perform effectively, adhere to responsible stewardship, and uphold ethical behaviour.
Often the components for the governance of artificial intelligence systems already exist:
• Adapt or improve existing components so that the governance of the artificial intelligence system remains effective, efficient, and appropriate for its unique and dynamic nature and context.
• Use a conceptual governance framework to identify, plan, organise, and direct the adaptation or improvement of the components necessary for the governance of artificial intelligence systems.

Tasks:
• Establish the AI governance framework.
• Establish an accountability framework.
• Ensure that delegates are empowered to create management policies consistent with the governance policies, and to propose changes to those governance policies.
• Approve an AI governance charter, and clarify delegations within the organisation, including in relation to the strategy process.
• Appoint an artificial intelligence system oversight body.
• Approve the AI development framework and process model.
• Establish a risk management system, and set expectations for internal controls, compliance, risk management, and risk-taking.
• Establish a compliance management system.
• Establish a post-market monitoring system.
• Direct the implementation of a system of internal control.
• Establish governance policies and ensure that these clarify the governing body’s intentions and expectations with respect to the organisational purpose, organisational values, and the organisation’s value generation objectives.
• Evaluate the results of artificial intelligence system tests, validation, compliance, conformance, and performance audits.
Domain 3: AI governance scope: purpose, type, functionality, benefits, risks, role
Description: The governing body should ensure that the artificial intelligence system's purpose is clearly defined and aligned with organisational purposes. The system's purpose should define its intentions towards the natural environment, society, and the organisation’s stakeholders.
• Ensure that the artificial intelligence system's purpose and organisational values are defined, communicated, and embedded.
• Create clarity for the artificial intelligence system’s stakeholders on the system’s intentions, behaviours, and activities.
• Provide a framework within which artificial intelligence systems are created and executed in a focused manner, avoiding unnecessary distractions.
• Identify the issues and stakeholders the artificial intelligence system is to address, and the negative impacts that are to be avoided.
• Ensure that the artificial intelligence system achieves its intended purposes, and does so in a manner that demonstrates the defined organisational values.
• Provide a basis on which stakeholders can assess the artificial intelligence system’s outcomes and the achievement of stated objectives.
Domain 4: Principles for the governance of artificial intelligence systems
Description: The governing body should ensure that the artificial intelligence system's principles are clearly defined and aligned with those of the organisation.
• Oversee the artificial intelligence system’s performance to ensure that it meets the governing body’s intentions for, and expectations of, the artificial intelligence system, its ethical behaviour, and its compliance obligations.
• Require those to whom responsibilities have been delegated to provide timely and accurate reports on all material aspects of the management of the artificial intelligence system.
• Ensure that an internal control system is implemented, including a risk management system, a compliance management system, a post-market monitoring system, and a system of financial controls.
• Oversee the corrective action taken.
• Obtain assurance of the accuracy of reports and evidence received and the effectiveness of the internal control system.
• Evaluate the effectiveness of governance policies in guiding the development and deployment of the artificial intelligence system.
• Obtain risk information regarding the assessment and treatment of the artificial intelligence system's key threats and opportunities in consideration of the organisational risk framework.
• Ensure that the artificial intelligence system’s stakeholders are identified, prioritised, appropriately engaged, consulted, and their expectations understood.
• Ensure ethical and effective leadership throughout the artificial intelligence system's life cycle.
• Set expectations for the artificial intelligence system using robust decision-making processes.
• Direct how the artificial intelligence system is to behave in a manner consistent with the defined organisational values.
• Direct organisational alignment through the integration of artificial intelligence systems.
• Direct management to act in good faith and in the best interests of the organisation and its stakeholders.
• Hold people accountable along the AI value chain.
• Hold people accountable for non-compliance with business, regulatory, and ethical requirements.
Domain 5: Organisational values that impact the deployment of artificial intelligence systems
Description: The governing body should ensure that the organisation's values are clearly defined. The governing body shall demonstrate its accountability for the artificial intelligence system's lawfulness, trustworthiness, fairness, integrity, effectiveness, efficiency, resilience, explainability, and acceptable use to the stakeholders and hold to account the operators to whom it has delegated. It shall establish an accountability framework for developers, providers, and deployers and ensure all regulatory compliance and data breach reporting obligations are fulfilled.
• Oversee the organisational values and governance policies adopted to guide the development and use of artificial intelligence systems, associated culture, and ethical behaviour.
• When defining organisational values, ensure that all relevant stakeholders are engaged.
• Clearly express what ethical behaviour is expected as a result of the organisational values.
• Understand the consequences of unethical behaviour.
• Demonstrate a commitment to organisational values.
• Demonstrate accountability.
• Hold processors along the AI value chain accountable.
Domain 6: Strategic importance and AI strategy development
Description: The governing body shall direct and engage with the artificial intelligence system strategy, following the value generation model, to achieve the artificial intelligence system purpose, fulfil its regulatory compliance obligations, and enable data subject rights.
• Provide strategic direction and set the strategic outcomes expected from the artificial intelligence system.
• Implement privacy by design and default.
• Approve time scales for the strategic outcomes of the artificial intelligence system's strategic plan.
• Approve choices made between open-source and proprietary artificial intelligence models.
• Approve choices made between decentralised products and centralised AI platforms.
• Ensure the artificial intelligence system's strategy considers the interdependence between the natural environment and the social and economic context.
• Ensure the artificial intelligence system strategy includes developing the capacities and competencies that will be required during the artificial intelligence system's life cycle.
• Ensure that the selection of AI models and acquisition of training data is in line with the organisation's principles and values.
Domain 7: Artificial intelligence system and data sourcing strategies and suppliers in the AI value chain
Description: The governing body should ensure AI acquisitions are made for valid reasons, based on appropriate and ongoing analysis, with clear and transparent decision-making.
• Evaluate options for providing artificial intelligence systems to realise approved proposals, balancing risks and value for money of proposed investments.
• Direct that AI assets (systems and infrastructure) be acquired appropriately, including the preparation of suitable technical documentation and instructions for use, while ensuring that required capabilities are provided.
• Direct that supply arrangements (including both internal and external supply arrangements) support the organisation's objectives, intended purposes, and stakeholder expectations.
• Monitor AI investments to ensure that they provide the required capabilities.
• Monitor the extent to which the organisation and its suppliers maintain a shared understanding of the organisation's intent in making any artificial intelligence system acquisition.
• Acquisition, design, development, and deployment.
• Human resource competency development.
• Capacity planning.
• Test and training data.
• Management of relationships, contracted services, and data sharing.
Domain 8: Legal and regulatory compliance obligations for artificial intelligence systems
Description: The governing body should regularly evaluate the extent to which the artificial intelligence system satisfies its obligations (regulatory, legislative, common law, contractual), internal policies, standards, professional guidelines, and public policy statements.
• Establish regular and routine mechanisms for ensuring that the use of AI complies with relevant obligations (regulatory, legislative, common law, contractual), standards, and guidelines.
• Regularly evaluate the organisation’s internal conformance to its system for the governance of artificial intelligence systems.
• Direct that policies are established and enforced to enable the organisation to meet its internal obligations in its use of artificial intelligence systems.
• Direct that all actions relating to AI be ethical.
• Monitor artificial intelligence system compliance and conformance through appropriate reporting and audit practices, ensuring that reviews are timely and comprehensive, and suitable for evaluating the extent to which the organisation and its stakeholders are satisfied.
• Monitor artificial intelligence system activities, including the disposal of assets and data, to ensure that environmental, privacy, strategic knowledge management, preservation of organisational memory, and other relevant obligations are met.
Domain 9: Key artificial intelligence system management, process, and internal control domains
Description: The governing body shall oversee the artificial intelligence system’s performance (concerning the people, process, technology, and data) to ensure that it meets the governing body’s intentions and expectations of the artificial intelligence system, its ethical behaviour, and its compliance obligations, so that the artificial intelligence system's purpose and strategic outcomes are achieved in the intended and required manner.
• Acknowledge that data serves as a critical asset for artificial intelligence systems.
• Understand that effective decision-making relies on quality data.
• Validate that data processing adheres to legal and ethical standards.
• Prioritise responsible and effective data utilisation.
• Advocate for ethical and responsible handling of data.
• Recognise that different data classes and processing methods carry varying levels of risk.
• Provide clear guidance on managing risks associated with data.
• Implement controls to address these risks throughout the artificial intelligence system’s life cycle.
• Oversee the design, development, deployment, and operation of artificial intelligence systems.
• Oversee the design, development, deployment, and operation of risk management, compliance management, and post-market monitoring systems.
• Ensure robust data processing, incident handling, and system malfunction control mechanisms are in place.
• Limit the artificial intelligence system's collection, sharing, aggregation, retention, and further processing of personal data to what is necessary to fulfil the legitimate identified purpose(s), and ensure personal data is not collected and processed indiscriminately.
• Address potential inaccuracies through rigorous data input, processing, and output controls, striving for reliable and precise AI outcomes.
• Monitor the artificial intelligence system's internal controls, assurance, and transparency processes.
• Monitor the behaviour and responses of the system, and fine-tune the behaviour to produce accurate responses.
• Implement processes for change control and configuration management.
Domain 10: Governance of risk in deployed artificial intelligence systems
Description: The governing body should ensure the effect of uncertainty on the artificial intelligence system's purpose and associated strategic outcomes is analysed, measured, and evaluated.
• Set the tone for managing artificial intelligence system risk to achieve its intended purposes and strategic outcomes.
• Determine the nature and extent of the risks that the artificial intelligence system shall accept in achieving its intended purposes.
• Determine how to oversee the appropriate management of the artificial intelligence system's risk.
• Position risk as a key consideration in the setting of governance policies.
• Ensure that the appetite and tolerance for the artificial intelligence system’s risk are understood, articulated, and communicated.
• Set risk criteria and associated limits for the artificial intelligence system.
• Ensure management assesses, treats, monitors, and reviews risk in line with the established organisational risk framework.
• Ensure that risk management is integrated into all artificial intelligence system activities.
• Recognise that data is a valuable resource for artificial intelligence systems, that different classes of data and types of processing bring different levels of risk, and that the governing body should understand these risks and direct management on how to manage them.
• Consider the impact of, changes to, and dependencies on the external (AI value chain) and internal context of the artificial intelligence system.
• Ensure effective data analytics are employed to correctly assess artificial intelligence system risk and risk interactions.
• Monitor any mitigation that is necessary to ensure that management does not exceed the risk appetite for the artificial intelligence system.
• Ensure that the governing body is adequately and proactively informed of new and emerging artificial intelligence system risks.
• Ensure substantive risks, limits, and associated expectations are disclosed to relevant stakeholders.
• Identify the risks from reasonably foreseeable misuse of the artificial intelligence systems.
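Setting risk criteria and associated limits, and then monitoring that the approved risk appetite is not exceeded, can be pictured as a simple threshold check. The following sketch is a hypothetical illustration; the indicator names and limit values are assumptions for demonstration, not values prescribed by this domain or any regulation.

```python
# Hypothetical sketch: comparing measured AI-system risk indicators
# against limits approved by the governing body (the risk appetite).
# Indicator names and thresholds are illustrative assumptions only.

RISK_LIMITS = {
    "false_positive_rate": 0.05,        # example limit set by the governing body
    "data_drift_score": 0.30,
    "serious_incidents_per_month": 0,
}

def breaches(measured: dict[str, float], limits: dict[str, float]) -> list[str]:
    """Return the names of indicators whose measured values exceed their limits."""
    return [name for name, limit in limits.items()
            if measured.get(name, 0.0) > limit]

measured = {
    "false_positive_rate": 0.08,        # exceeds the 0.05 limit
    "data_drift_score": 0.12,
    "serious_incidents_per_month": 0,
}

print(breaches(measured, RISK_LIMITS))  # ['false_positive_rate']
```

Any non-empty result would feed the escalation and disclosure tasks above: the governing body is informed, mitigation is monitored, and substantive risks are disclosed to relevant stakeholders.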
Domain 11: Social Responsibility and stakeholder engagement
Description: The governing body shall ensure that artificial intelligence system-related decisions and activities are transparent and aligned with broader societal expectations.
• Identify all relevant artificial intelligence system stakeholders within and outside the enterprise.
• Establish and maintain positive relationships with the stakeholders in the artificial intelligence system.
• Ensure that the expectations of stakeholders are clearly understood.
• Continually engage relevant stakeholders through an engagement process.
• Implement strategies and techniques to avoid unfair bias, discrimination, and exclusion in artificial intelligence system use.
• Ensure the artificial intelligence system performs in a socially responsible way by operating within the parameters of acceptable behaviour.
• Disallow actions that, while legally or locally permissible, are not in line with what broader stakeholders and society expect of the organisation concerning human rights, inclusion and diversity, unfair bias, the natural environment, and democracy.
• Ensure user-centric and human-rights-based approaches are applied to artificial intelligence system design, development, and deployment.
• Measure performance against objectives related to socially responsible behaviour.
• Report the artificial intelligence system’s social responsibility objectives clearly and transparently so that stakeholders can understand these objectives.
Domain 12: AI Viability and Performance Over Time
Description: The governing body should ensure that the artificial intelligence system remains viable (with respect to broader social, economic, and environmental goals) and performs as expected over time, without compromising the ability of current and future stakeholders to meet their needs.
• Define acceptable performance metrics, and measure and evaluate results for:
  • achievement of its intended purposes
  • influence on the organisation’s functioning
  • inter-relationship with external systems on which it depends
  • use over time (post-market monitoring)
  • performance
  • financial performance
  • ethical behaviour
  • compliance with obligations
  • viability over time (protects and restores those systems on which it depends).
• Report on the evaluated performance metrics.
• Ensure the artificial intelligence system is protected and restorable.
• Measure the impact on climatic stability, the level of biodiversity, and social equality.
Domain 13: Post-market monitoring, conformance, and reporting to the authorities
Description: The governing body should log the artificial intelligence system's post-market operation, and monitor its conformance, instances of misuse, serious incidents and malfunctions, and when necessary, report non-conformance to the competent authority.
• Implement logging of all post-market processing activities.
• Detect environmental changes impacting the risk profile and current risk appetite.
• Perform system functional and configuration changes when required.
• Ensure change control.
• Detect and respond to artificial intelligence system incidents and malfunctions.
• Correct artificial intelligence system processing errors.
• Report serious incidents and non-conformance to competent authorities.
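The logging and escalation tasks above can be sketched as a minimal post-market monitoring record with a rule for flagging reportable events. This is an illustrative assumption of one possible structure; the field names and the escalation rule are hypothetical and are not taken from the text of any regulation.

```python
# Hypothetical sketch of a post-market monitoring log record, with a
# simple rule for flagging entries that would be escalated to a
# competent authority. Field names and the escalation rule are
# illustrative assumptions only.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PostMarketEvent:
    timestamp: datetime
    system_id: str
    category: str      # e.g. "processing", "misuse", "malfunction"
    serious: bool      # serious incidents trigger authority reporting
    description: str

def requires_authority_report(event: PostMarketEvent) -> bool:
    """In this sketch, serious incidents and malfunctions are reportable."""
    return event.serious or event.category == "malfunction"

event = PostMarketEvent(
    timestamp=datetime.now(timezone.utc),
    system_id="ai-screening-v2",              # hypothetical system identifier
    category="malfunction",
    serious=False,
    description="Model returned empty output for a fraction of requests.",
)
print(requires_authority_report(event))  # True
```

In practice, the governing body would define which categories and severity levels are reportable based on the applicable regulatory regime, rather than the hard-coded rule shown here.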