REGULATORY

Quebec Law 25 and AI Systems: The 12-Point Compliance Checklist

Quebec Law 25 now imposes precise obligations on organisations handling personal information, including when an AI system makes, supports, or influences a decision affecting an individual.

Agentica Risk Team, AI Risk Practice
8 min read

For boards and compliance teams, the question is no longer whether AI is regulated; it is how to prove every deployment meets a documented framework. Quebec Law 25 (the modernised Act respecting the protection of personal information in the private sector) is the current reference point for any organisation handling Quebec residents’ data, and federal Bill C-27 / AIDA is tracking close behind for high-impact AI systems. This note summarises the twelve points that come up most often in our client engagements.

Appoint a responsible person

Law 25 requires a designated privacy officer whose identity is made public. For organisations deploying AI, this person must understand both the legal obligations and the technical implications of automated decisions. In practice, we rarely see one individual covering both — a legal/technical pairing is usually more realistic.

Maintain an AI systems register

Before writing any policy, you need to know what is already running in production. The register should list every AI system, its internal owner, the data it consumes, the decisions it influences, and the underlying model vendor. Real mapping is the foundation the other eleven points rest on.
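
To make this concrete, here is a minimal sketch in Python of what one register entry could look like. Every field name and value is illustrative, not a prescribed schema; adapt it to your own inventory.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the AI systems register (illustrative fields only)."""
    system_name: str                 # e.g. "claims-triage"
    internal_owner: str              # a named individual, not a team alias
    model_vendor: str                # underlying foundation model provider
    data_sources: list[str]          # categories of personal information consumed
    decisions_influenced: list[str]  # decisions the system makes or supports
    last_pia_date: date | None = None      # links the register to the PIA obligation
    review_dates: list[date] = field(default_factory=list)  # periodic review trail

# Hypothetical entry for illustration
register = [
    AISystemRecord(
        system_name="claims-triage",
        internal_owner="J. Tremblay",
        model_vendor="ExampleVendor",
        data_sources=["claims history", "contact details"],
        decisions_influenced=["claim prioritisation"],
    ),
]

The useful property of a structured record is that each entry names a person rather than a team and links forward to the PIA and review obligations covered below.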

Conduct privacy impact assessments

Every high-impact use case requires a privacy impact assessment (PIA). The Commission d’accès à l’information expects to see documented analysis, not a general declaration. PIAs must be revised whenever the system changes materially — a new model, a new data source, a new purpose.

Govern third-party vendors

Foundation models usually come from vendors outside Quebec. Contracts must specify the vendor’s obligations regarding personal information processing, incident notification, and data retention. Standard vendor clauses are rarely sufficient; a negotiated addendum is almost always required.

Define retention and destruction

How long do you keep training data? Inference logs? Historical model versions? Law 25 requires an explicit, documented policy. Without one, the default is “keep forever” — which expands exposure with every incident.
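
One way to keep the policy enforceable rather than aspirational is to express retention periods as data that systems can check. A minimal sketch; the categories and durations below are placeholder assumptions, not legal guidance.

from datetime import datetime, timedelta, timezone

# Placeholder retention periods per data category. The actual durations
# must come from your documented policy, not from this sketch.
RETENTION = {
    "training_data": timedelta(days=3 * 365),
    "inference_logs": timedelta(days=365),
    "model_versions": timedelta(days=2 * 365),
}

def is_expired(category: str, created_at: datetime) -> bool:
    """True once a record has outlived its documented retention period."""
    return datetime.now(timezone.utc) - created_at > RETENTION[category]

Anything this check flags should feed a destruction job that leaves its own audit trail.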

Internal governance covers half the work. Transparency and control cover the other half — and that is where most organisations fail.

Inform affected individuals

When a significant decision is made or supported by an automated system, the affected person has the right to be informed. This information must be clear, not buried in a forty-page privacy policy. Drafting these notices is a precision exercise: too vague and you fall short of the law; too technical and nobody understands it.

Enable human intervention

The affected person can request that the decision be reviewed by a human. That request must be operationally actionable, not just accepted in theory. It implies a receipt channel, a processing window, and a documented review path.
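
In practice, “operationally actionable” means the request becomes a tracked item with a deadline and an assigned reviewer. A minimal sketch; the thirty-day window and the field names are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=30)  # assumed processing window; set per policy

@dataclass
class HumanReviewRequest:
    """A person's request that an automated decision be reviewed by a human."""
    decision_id: str              # links back to the decision log
    received_at: datetime         # timezone-aware receipt timestamp
    assigned_reviewer: str | None = None
    outcome: str | None = None    # e.g. "upheld" or "overturned", recorded in writing

    @property
    def due_by(self) -> datetime:
        return self.received_at + REVIEW_WINDOW

    def is_overdue(self) -> bool:
        return self.outcome is None and datetime.now(timezone.utc) > self.due_by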

Provide a complaint mechanism

Internal complaints must be received, documented, and handled through a defined channel. The mechanism must be distinct from general customer service, because privacy complaints carry a specific regulatory status, and regulators will ask to see the complaints register during an inspection.

Plan incident response

The notification window for an incident involving a risk of serious injury is short. A pre-established chain of responsibility, a notification template, and an escalation procedure tested at least once a year are the only reliable way to meet it. Improvising during a crisis is the fastest way to multiply exposure.

Log decisions for reconstruction

Logging must be sufficient to reconstruct a specific decision after the fact. This includes model inputs, the exact version used, the parameters applied, and the output. Without this traceability, a contested decision cannot be meaningfully reviewed, and the organisation is left on the defensive against any complaint.
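
As an illustration, a reconstruction-grade log entry groups the inputs, the exact model version, the parameters, and the output under a single decision identifier. The structure and the integrity hash below are our assumptions, not a mandated format.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(decision_id: str, model_version: str,
                 inputs: dict, parameters: dict, output: dict) -> dict:
    """Build one append-only log entry sufficient to replay a decision."""
    entry = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # exact version, never "latest"
        "parameters": parameters,        # temperature, thresholds, etc.
        "inputs": inputs,                # what the model actually received
        "output": output,                # what it returned
    }
    # An integrity hash helps show the entry was not altered after the fact.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry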

Review models periodically

Models drift. Data changes. Use cases evolve. Periodic review — at minimum annually, ideally quarterly for high-impact systems — is essential. The review must produce a written record, archived in the register.
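
For the drift half of that review, one simple and widely used check is the population stability index (PSI) between the bucketed score distribution at the last documented review and the current one. The implementation and the thresholds in the comments are assumptions to calibrate per system.

import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two bucketed distributions given as proportions summing to 1.
    Common rule of thumb (an assumption, not a legal threshold): below 0.1
    is stable; above 0.25 signals drift worth a written finding."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # floor to avoid log(0)
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi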

Report to the board

The twelfth point is often neglected: AI risks must appear in board reports, with indicators that are comprehensible to non-technical directors. Without that reporting line, the other eleven points become paperwork disconnected from strategy.

What separates organisations that succeed

The organisations that pass these assessments share a common trait: they stopped treating AI compliance as a one-off project and started treating it as a continuous programme, with a named owner, a review schedule, and indicators presented to the board. The audit is no longer an event — it is a state that is maintained.

Our practical recommendation: start by mapping AI systems already in production before writing new policies. An abstract policy does not survive an audit. Real mapping, combined with a risk register, lets you prioritise fixes where exposure is most material.

Each of the twelve points deserves a detailed analysis specific to your sector and to the maturity of your systems. The note above stays general and is not a substitute for a tailored assessment.