How to use Copilot Chat effectively
Copilot Chat is a generative artificial intelligence (gen AI) chat tool similar to ChatGPT, Claude and Google Gemini. It can help summarize documents, analyze data and create graphics.
How it can help you work
Copilot Chat can be used for various business purposes, helping with tasks like drafting emails or conducting research and analysis. It can read and respond in multiple languages, making it accessible to a diverse range of users. It’s designed to assist employees with:
- Answering general knowledge questions
- Providing summaries and explanations
- Assisting with research and information gathering
Simply type your question or request into the chat interface and Copilot Chat will return helpful information or guidance. There are resources and courses available to learn how to optimize results with effective prompts, such as those offered by Apolitical. You can also ask Copilot Chat to suggest its own useful prompts.
Using AI responsibly
The BC Public Service is committed to responsible AI use, ensuring transparency, security and fairness in how we use this technology.
As AI becomes more advanced and widely used, there is a greater risk that it may perpetuate inequality or be misused, even unintentionally.
The public sector has broad scope and reach in serving the public, and we are obliged to uphold high standards of conduct by using AI responsibly.
Before using AI tools, you should be familiar with the relevant policy and guidance.
Verify enterprise data protection with the green shield

Copilot Chat has been evaluated as safe to use with government information if enterprise data protection is enabled. As long as you’ve logged into Copilot Chat with your IDIR and the shield icon is visible, the information you add is protected.
Considerations for responsible use
Bias and fairness
Gen AI models are trained on large amounts of data drawn from various sources, including the internet. If a model has flaws or is trained on biased data, it can lead to unfair decisions, produce biased outcomes or perpetuate stereotypes. This risk is especially acute with historical data, which may be influenced by systemic racism and discrimination.
The BC Public Service needs to be aware of biases in the data used to develop AI models and tools. This includes monitoring and testing to make sure these models and tools don't unfairly discriminate against groups of people or reinforce existing inequities.
- Do write your prompts clearly and without bias to avoid generating biased or harmful outputs
- Do check that the output generated is fair, inclusive and free from harmful stereotypes
- Do consider how the output might impact individuals and communities, including Indigenous peoples
- Do use AI systems in a way that respects rights, improves outcomes and builds trust with those affected
- Don’t use AI tools without considering whether they are the best solution to the problem
- Don’t ignore the potential negative effects AI-generated outputs might have on specific communities
- Don’t use data that may reinforce systemic discrimination or unfairness
- Don’t assume that AI outputs are neutral or unbiased without verification
Transparency and accountability
Organizations using AI systems must take responsibility and be accountable for the actions, outputs and decisions made by those systems. They should track and maintain an inventory of which AI systems they are using and how.
We must maintain accountability, be transparent and be able to explain how AI is used in the BC Public Service.
- Do indicate on the final product that gen AI was used in its creation
- Do explain the purpose of using gen AI and how it contributed to the result
- Do take responsibility for the outputs generated
- Do review all AI-generated content to ensure it is accurate, fair and unbiased
- Don’t claim AI-generated content as entirely your own work
- Don’t use gen AI without understanding how it processes and uses data
- Don’t assume AI-generated content is correct without reviewing it
- Don’t rely entirely on AI without verifying using human judgment or oversight
Privacy, security and governance
Any processing of personal data or confidential information needs to be governed and protected. Only use confidential or personal information if you are logged into Copilot Chat and see the shield icon. Do not put confidential or personal information into other AI tools.
Be mindful of what data you are putting into AI systems and where that data goes. AI tools are often cloud-based or built on externally created resources, code or models, and the information you put into publicly available tools is often collected and used by the company that created the tool.
- Do protect data and systems by following privacy and security rules
- Do opt out of training and logging features where possible
- Don’t put personal, sensitive or protected information into unsecure AI tools
- Don’t use AI tools without considering potential risks to data security and public trust
Data sources and copyright
You must be mindful of the provenance (source) of your data. Data provenance concerns the origins, ownership, collection and reliability of source data.
Be aware that AI datasets often pull from vast and opaque sources, including personal information or copyrighted works, and that many tools source data from the wider internet, which may not be directly applicable to B.C. Most gen AI training data likely contains copyrighted information.
- Do use gen AI tools with high-quality, reliable data to produce accurate outputs
Accuracy and human oversight
Gen AI is a tool to augment your existing knowledge and skills. It should be used as an aid or support tool rather than a replacement for your ideas and skills.
Remember that AI systems aren’t perfect. Gen AI may create content that includes incomplete or false information, sometimes called hallucinations. Always fact-check the output.
Hallucinations can seem plausible and be difficult to detect. Users need to think critically, be aware of potential biased outputs and hallucinations, and be responsible for checking and verifying outputs.
Be aware that gen AI may give subtly different answers when the same prompt is repeated.
- Do regularly monitor AI outputs to ensure they stay accurate and free of unwanted biases
- Don’t depend solely on AI tools without human oversight to verify results and ensure reliability
Using AI-generated content for a public audience
Guidance detailing the use of gen AI to make content for a public audience is being developed. Exercise your best judgement when using Copilot Chat to assist with creating:
- Email or written correspondence with clients and members of the public
- Service information on gov.bc.ca or other public websites
- Marketing or promotional materials, including social media
- Images or graphics
Government Communications & Public Engagement (GCPE) retains approval authority over any material prepared for public consumption, as detailed in CPPM Policy Chapter 22. Employees should note during the approval process when materials were generated using AI.
Help and training
We currently have 4 courses available:
- Understand How AI Impacts You and Your Government
- AI Fundamentals for Public Servants: Opportunities, Risks and Strategies
- Responsible AI for Public Professionals: Using Generative AI at Work
- Responsible AI for Public Professionals: Building AI That Works — Implementation Strategies for Public Service
More courses are being evaluated and will be added over time.
Join the Artificial Intelligence Microsoft Teams channel to ask questions, share ideas and help shape AI guidance for the B.C. government.