I had a quick read-through of the Canadian Government’s CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems. Here are some quick notes on it.
Much of the document is about the how, the implementation: how to use AI in an equitable way for people with disabilities. I was pleasantly surprised, though, by the bits that did not treat AI and its use as a foregone conclusion, as an uncomplicated good.
One theme that ran through the document for me was: allow opting out of AI, without penalty. With two sub-themes:
- Have a clear plan for human options that are equally available and usable;
- Stop using AI if it causes harm or is no longer fit for purpose.
Another theme for me was: monitor for harm and unfair treatment, on an ongoing basis. With a few sub-themes:
- Understand risks and impacts, including harms that build up over time;
- Allow anonymous feedback;
- Respond to feedback with a specific plan for fixing the problem;
- Allow decisions to be challenged.
As someone who tends to be critical of any new tech, and likes to be clear on the pros and (especially) the cons, I thought this was great.