Understanding Canada’s Algorithmic Impact Assessment Tool

A must for doing business with the Federal Government

Mathieu Lemay · Jun 10

(Note: I am not affiliated with the Canadian federal government.

Our company is a qualified AI vendor, but my goal for this article is simply to show how to interpret and execute what I believe will be a mandatory requirement for any vendor project moving forward.

)

With the recent advent of the Canadian government’s new pre-qualified AI suppliers list and its Directive on Automated Decision-Making, considerable effort has been put into spurring innovation within guardrails, in anticipation of a future chock-full of government innovation.

The cornerstone of this approach is a (somewhat) centralized framework that allows technical and non-technical people to have a transparent discussion about the sustainability and long-term impacts of deployed solutions, and their role within their respective organizations.

While the efforts of both the leadership and execution teams are nothing short of applause-worthy, it is possible to extend the tool’s coverage to help reduce long-term exposure for highly technical projects.

With certain modifications (below), I recommend you use this tool in your own projects to help ensure commercial and production readiness, government involvement or not.

The Background

There is already a thorough article explaining the rationale behind the need for the Algorithmic Impact Assessment, so here are the key takeaways:

- Back in December 2018, the Treasury Board Secretariat updated its mandate on automated decision-making.
- The tool was developed to help organizations “better understand and mitigate the risks associated with Automated Decision-Making (ADM) Systems by providing the appropriate governance, oversight and reporting, and audit requirements.” (From the Supergovernance article.)
- There is an unspoken expectation that, from February 2019 onward, any government entity is to use this tool (or a version of it) to supervise, control, and mitigate potential issues with the deployment of an ADM system.

The website of the ADM directive as it came out in February 2019.

Hold up — what does the Treasury Board Secretariat do?

For those outside of Canada (and even those within Canada who aren’t directly involved with the Federal Government), the Treasury Board Secretariat is a bit of a mythical creature.

The TBS website.

Although its primary mandate is to “provide advice and make recommendations to the Treasury Board committee of ministers on how the government spends money on programs and services” (from the website), it is different from the UK’s Cabinet Office and the Office of American Innovation.

It lists the following obligations:

- Transparency of government spending and operations
- Policies, standards, directives and guidelines
- Open government
- Innovation in the public service
- Values and ethics of the public service
- Professional development in the public service

That’s different from your standard administrative department.

They are experimenting and looking at new ways of serving Canadians by accelerating and streamlining government functions.

Now, without getting into the politics involved, you can see why they would be well suited to decree best practices for using AI within government.

The Tool — Insurance as a Checklist

The Algorithmic Impact Assessment tool is a scorecard intended to bring attention to design and deployment decisions that might have been overlooked.

It asks a lot of questions pertaining to the why, what, and how a system will be built in order to avoid pitfalls and issues (and a potential black eye on the government, always a risk during election season).

The tool is available interactively on github.io and has already had some massive rework since its inception.

It is a definite step up from the earlier Excel-based scorecard (available from the original AIA site).

The newer version of the tool is more entangled with existing and future government policies, so for clarity let’s explore the earlier version.

The clumsy-but-specific spreadsheet-based system.

The main impact assessment comprises 4 major sections:

1. The business case. Are you trying to speed things up? Are you trying to clear a backlog of activities? Are you trying to modernize your organization?
2. The system overview. What’s the main technological foundation? Image recognition, text analysis, or process and workflow automation?
3. The decision oversight. Is it related to health, economic interests, social assistance, access and mobility, or permits and licenses?
4. The data source(s). Is it from multiple sources? Does it rely on personal (potentially identifiable) information? What’s the security classification, and who controls it?

These checks avoid the obvious “I think it’s cool” issue, wasting money and all.

It also incidentally sets the tone for overall measures of success, often linked to the department’s key performance indicators.
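To see how a scorecard like this works mechanically, here is a minimal sketch: every answer carries a weight, and the total maps to an impact band. The section names, questions, weights, and thresholds below are my own illustrative assumptions, not the AIA’s actual schema or scoring rules.

```python
# Hypothetical questionnaire structure -- the real AIA JSON differs, but the
# scoring idea is the same: weighted answers summed into an impact level.
questionnaire = {
    "business_case": [
        {"q": "Are you trying to clear a backlog of activities?", "answer": "yes", "weight": 1},
    ],
    "decision_oversight": [
        {"q": "Is the decision related to health?", "answer": "yes", "weight": 3},
    ],
    "data_sources": [
        {"q": "Does it rely on personally identifiable information?", "answer": "yes", "weight": 2},
        {"q": "Is the data drawn from multiple sources?", "answer": "no", "weight": 1},
    ],
}

def impact_score(sections):
    """Sum the weights of every 'yes' answer across all sections."""
    return sum(
        item["weight"]
        for items in sections.values()
        for item in items
        if item["answer"] == "yes"
    )

def impact_level(score):
    """Map a raw score to a coarse impact band (thresholds are illustrative)."""
    if score <= 2:
        return "Level I (little to no impact)"
    if score <= 5:
        return "Level II (moderate impact)"
    return "Level III+ (high impact)"

score = impact_score(questionnaire)
print(score, impact_level(score))  # 6 Level III+ (high impact)
```

The point of the exercise isn’t the arithmetic; it’s that once every answer has a weight, the conversation about a deployment becomes auditable rather than anecdotal.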

(For the full list of questions for the new AIA, you can refer to the raw JSON of the questionnaire in the github repo.)

The Good, the Bad and the Ugly

Most of this article has been a continuous slow nod toward the work performed, like someone giving Michelangelo a thumbs-up to keep going.

There are a few points that require more attention, listed below.

The Good — Best Practices Front and Center

A lot of the questions cover exactly what this tool is supposed to cover.

It inquires about the key functionalities of the tentative deployment so that checks and balances are in place.

Here are a few:

- “Will you maintain a log detailing all of the changes made to the model and the system?”
- “Will the system be able to produce reasons for its decisions or recommendations when required?”
- “Will there be a process in place to grant, monitor, and revoke access permission to the system? (Yes/No)”
- “Does the system enable human override of system decisions? Is there a process in place to log the instances when overrides were performed?”

Wonderful, that’s exactly what it needs to ask.
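To make those requirements concrete, here is a minimal sketch of what satisfying the decision-logging and human-override questions could look like in practice. The class names, fields, and the reviewer address are my own hypothetical choices, not any government reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Decision:
    """One automated decision, with the reasons behind it."""
    subject: str
    outcome: str
    reasons: List[str]                  # plain-language reasons, producible on request
    overridden_by: Optional[str] = None # set when a human overrides the system
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class DecisionLog:
    """Append-only log of decisions and human overrides."""
    def __init__(self):
        self._entries: List[Decision] = []

    def record(self, decision: Decision) -> None:
        self._entries.append(decision)

    def override(self, index: int, reviewer: str, new_outcome: str) -> None:
        # Append the override as a new entry instead of rewriting history,
        # so every override instance stays logged.
        original = self._entries[index]
        self._entries.append(Decision(
            subject=original.subject,
            outcome=new_outcome,
            reasons=[f"human override of entry {index}"],
            overridden_by=reviewer,
        ))

# Hypothetical usage: an automated denial, later overridden by a case officer.
log = DecisionLog()
log.record(Decision("permit-1234", "denied", ["missing proof of residency"]))
log.override(0, reviewer="case.officer@example.gc.ca", new_outcome="approved")
```

The append-only design matters: it is what lets the system answer “produce reasons for its decisions” and “log the instances when overrides were performed” at the same time.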

But there’s still a lot of stuff to fix.

The Bad — Lack of Quantification or Precision

Since we’re working with a scorecard, not forcing a binary answer would be much more helpful than just filling in the blanks. Here are some examples:

- “Are the stakes of the decisions very high? (Yes/No)”
- “The impacts that the decision will have on the economic interests of individuals will likely be: (Little to no impact/Moderate impact/High impact/Very high impact)”
- “Have you assigned accountability in your institution for the design, development, maintenance, and improvement of the system? (Yes/No)”

Now, it is possible that “stakes”, “risk”, and “accountability” are quantified and defined on a per-department basis, but I would like to see a standard definition (like ISO’s impact-versus-probability calculus).

Here’s another surprising question that left me slightly baffled: “Do you have documented processes in place to test datasets against biases and other unexpected outcomes? This could include experience in applying frameworks, methods, guidelines or other assessment tools.”

That is what this tool is intended to do.

I accept that its lofty catch-all nature doesn’t allow specifics for every possible case, but a “this is how you know you’re on the right track” reference should at least be mentioned.

The Ugly — Politicization

This is one aspect of the present version of the tool that surprised me compared to the older releases.

It’s intended to help departments offer better services to present and soon-to-be citizens, not help a political party get or stay in power by not stirring the pot.

Here’s the question: “Is the project within an area of intense public scrutiny (e.g. because of privacy concerns) and/or frequent litigation?”

Although on its face it seems like a proper question to ask, the historical background for such a question does have a lot of political overtones.

(For those curious, the Auditor General called the federal employees’ payroll revamp an ‘incomprehensible failure’. Worth reading with a bag of popcorn.)

If a particular area is subject to a lot of public scrutiny and frequent litigation, it could be precisely because it needs to be updated and innovated upon.

I sincerely hope that I’m reading this wrong.

I wouldn’t want the ability to innovate within government to be slowed down by political turmoil.

Future-proofing Your Engagement

Without taking anything away from the tremendous effort performed so far in establishing a framework that is both thorough and comprehensive, I recommend modifying the checklist with the following additions.


1. Simple Changes to Questions

Moving a lot of yes/no questions to “show me” documents makes for a more thorough examination of the solution.

For example:

- Change “Will you be developing a process to document how data quality issues were resolved during the design process?” to “How do you plan to document and communicate data quality issues that were identified?”
- “Have you assigned accountability in your institution for the design, development, maintenance, and improvement of the system?” should become “Who is the responsible party for the design, development, maintenance, repair, refactoring, and improvement of the system?”
- “Do you have documented processes in place to test datasets against biases and other unexpected outcomes?” can be “What are all the documented processes in place to test datasets against biases and other unexpected outcomes?”

2. Long-term Ownership and Change Management

Who will own the algorithm? The tool has a simple accountability question now.

However, my good friend Jen said that the best processes and technologies are the ones that are managed like employees: they fall under someone’s supervision and have clear measures of success.

Therefore: Who will maintain it?
