Draft artificial intelligence responsible use principles
Artificial intelligence (AI) technologies can create efficiencies and improve how government delivers services to people in British Columbia. AI also has limitations, risks and potential for unintended consequences. The BC Public Service aims to take advantage of this potential while addressing and mitigating any negative impacts.
Overview
The benefit of an AI tool depends on its careful and responsible use. While we are optimistic about the potential of AI, we recognize that emerging technologies can raise important challenges that must be addressed with clarity and intention.
Our updated guiding AI principles describe the B.C. government’s commitment to:
- Developing and using AI systems responsibly and transparently
- Governing with clear values and ethics
- Safeguarding privacy, fairness and human rights
- Ensuring security
The AI principles establish a clear vision for how to approach AI responsibly. They are closely aligned with the AI principles of the governments of Ontario and Canada to support collaboration and harmonization. In addition, these principles are informed by other jurisdictions, such as New Zealand, the United States and the European Union.
We shared an earlier draft of these principles for review from October 2023 to February 2024. We are grateful for the feedback we received and have revised the principles to make them clearer and simpler. Thank you to everyone who contributed suggestions for the guiding principles and the responsible use of AI policy framework. As AI tools continue to evolve, we will update our policy framework to ensure they are used appropriately.
Please consider these AI principles as a set rather than individually, as there is purposeful overlap among them. They are meant to complement our Digital Principles and Digital Code of Practice.
Principle 1: Transparency
Provide clear information on why, how and when an AI system is used. Include the purpose, benefits, and a description of the system.
Have a robust and transparent review mechanism in place, so anyone significantly impacted by an AI system can ask questions about it.
Why this is important:
Being transparent about the use of AI systems and establishing a review process fosters trust, supports a good understanding of the system, and allows scrutiny. It helps ensure we use AI systems effectively, use data safely and legally, and can easily justify AI-informed outcomes.
Transparency looks like:
- Providing meaningful explanations about AI systems in clear, simple, easy-to-understand language
- Explaining how data is used to support decision-making
- Establishing review processes to enable correction of data
- Developing a process for questioning and seeking reviews of decisions made or assisted by an AI system
Principle 2: Accountability
Establish clear rules and responsibilities for creating, using, and managing AI systems so they work properly, are unbiased, and have proper oversight.
Why this is important:
Effective governance, consistent monitoring, and thorough oversight are key to achieving system objectives and building trust.
Accountability looks like:
- Establishing clear roles and responsibilities for AI systems’ outcomes and impacts
- Monitoring and evaluating AI systems continuously to detect and address issues
- Having human involvement and oversight when using AI systems
Principle 3: Public benefit
Use an AI system only when it is the most appropriate solution for a problem and delivers the best outcomes for the public, taking into account those who may be affected by it.
Why this is important:
AI systems can’t think or feel like humans. They should be used to help people and build trust. This means making sure AI is designed to be responsible, respect rights and improve outcomes.
Public benefit looks like:
- Delivering clear benefits or insights for the public or government
- Considering alternative solutions before deciding AI is the most effective
- Aligning use of AI systems with specific plans and goals
- Considering the institutional and public benefits
- Prioritizing the needs of individuals and communities, including Indigenous peoples
Principle 4: Fairness
Design, use and evaluate AI systems to ensure fairness and equity so that no individuals or communities face discrimination or harm.
Why this is important:
When government prioritizes equity, ethics and fairness throughout the development and maintenance of an AI system, it mitigates the risk of biased outcomes and protects human rights.
Fairness looks like:
- Recognizing bias in data inputs and outputs, particularly when using data collected in an environment where there may have been systemic discrimination
- Reviewing and assessing the AI system and its outputs throughout its lifecycle
Principle 5: Reliability
Monitor AI systems so they continue to meet their intended purpose and produce accurate outputs.
Why this is important:
An AI system can learn and change over time, altering its performance. For example, unwanted patterns in a system’s training data may become amplified and change how the system operates.
Proactive human oversight allows us to identify and address issues in the AI system as needed. This prevents the system from producing results that deviate from its original design and intended purpose.
Reliability looks like:
- Using high-quality data for input
- Monitoring and assessing outputs for accuracy and ensuring unwanted biases or patterns have not inadvertently crept in over time
- Identifying and addressing problems as they emerge after deployment
- Improving the system continuously
Principle 6: Safety
Put safeguards in place to protect data and AI systems. Manage risks through ongoing review, testing and monitoring.
Continue risk management processes throughout the AI system lifecycle, including decommissioning.
Why this is important:
A data breach could cause harm, erode public trust in an AI system, and compromise government infrastructure.
Safety looks like:
- Applying privacy and security safeguards set out in policy and legislation
- Analyzing risks related to the AI system
- Understanding the risks, including community harms, and identifying the best way to manage them
- Using AI systems without compromising data, government information and infrastructure