OpenAI, Deceptive Technology, and Model Risk Management

I’ve already written briefly about the concerns and damage caused by deepfakes.

A lot of money and resources are being put into detecting them, with some expressing concerns about how deepfakes and similar technologies could be used to disrupt the 2020 elections (along with broader implications).

Still from video by Deeptrace. (Source: IEEE Spectrum)

Indeed, OpenAI referenced these technologies in their reasoning for not providing a full release of GPT-2: “We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate):”

- Generate misleading news articles
- Impersonate others online
- Automate the production of abusive or faked content to post on social media

Going back to Zhang’s suggestion of “a small delay between paper publication and code release” for deceptive technologies, many researchers working to detect deepfakes have already spoken to the issue of sharing their findings.

Computer scientist Siwei Lyu published an article in August 2018 explaining how his team had achieved a detection rate of over 95 percent by analyzing blinking in deepfake videos.
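To make the blinking cue concrete, here is a minimal sketch of that style of screening, not Lyu’s actual method: it assumes a face-landmark detector has already produced six points around each eye for every frame, computes the standard eye aspect ratio, and flags clips whose subjects blink far less often than people normally do. The function names and thresholds are illustrative choices of mine.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark points around one eye, dlib-style ordering."""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance, upper to lower lid
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal eye width
    return (a + b) / (2.0 * c)

def blinks_per_minute(ear_series, fps, closed_threshold=0.2):
    """Count open-to-closed transitions of the eye aspect ratio over a clip."""
    closed = np.asarray(ear_series) < closed_threshold
    blinks = int(np.sum(closed[1:] & ~closed[:-1]))
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_series, fps, min_rate=5.0):
    """People typically blink roughly 15-20 times a minute; far fewer is a red flag."""
    return blinks_per_minute(ear_series, fps) < min_rate
```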

But a follow-up piece revealed that only “a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.”

Hany Farid, a professor of computer science at the University of California, uses forensic technology to detect deepfakes.

He explains how combatting them has become harder due to machine learning and why he doesn’t share new breakthroughs:

“All the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign. Once I spill on the research, all it takes is one asshole to add it to their system.”
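As a rough illustration of the heartbeat cue Farid mentions (a toy sketch of mine, not his forensic pipeline), one could test whether the average green-channel brightness of the face region carries periodic energy in the human heart-rate band; the names, frequency range, and threshold below are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def pulse_band_fraction(green_trace, fps):
    """green_trace: mean green intensity of the face region, one value per frame."""
    signal = np.asarray(green_trace, dtype=float)
    signal = signal - signal.mean()                      # remove the DC offset
    freqs, power = welch(signal, fs=fps, nperseg=min(256, len(signal)))
    in_band = (freqs >= 0.7) & (freqs <= 4.0)            # ~42-240 beats per minute
    total = power.sum()
    return float(power[in_band].sum() / total) if total > 0 else 0.0

def lacks_pulse_signal(green_trace, fps, min_fraction=0.3):
    """Flag clips whose face region shows no plausible pulse-like periodicity."""
    return pulse_band_fraction(green_trace, fps) < min_fraction
```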

While some people are calling for technologies that can generate fake media to be made open source, those working to detect fakes benefit in the so-called AI arms race by keeping their innovations under wraps.

Verification and Surveillance

(Photo by Chris Ried on Unsplash.)

Rather than focusing on detecting fake media, some are looking at verifying real media instead.

Techno-sociologist Zeynep Tufekci suggests that verification could come by way of spoof-proof metadata in cameras or blockchain databases.

These solutions may help verify images or videos, but verification of text could prove more difficult.
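To give a flavor of how hash-based verification might work (a minimal sketch under my own assumptions, not Tufekci’s proposal or any real camera’s scheme), a device could sign a digest of each file at capture time and publish the signed record, so that anyone can later check whether the bytes have changed. The key handling here is deliberately simplified.

```python
import hashlib
import hmac

# Hypothetical device key; a real camera would keep this in secure hardware.
DEVICE_KEY = b"example-device-key"

def register_capture(media_bytes: bytes) -> dict:
    """At capture time: record a digest of the file plus a keyed signature of that digest."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_capture(media_bytes: bytes, record: dict) -> bool:
    """Later: the file checks out only if it still matches the signed record."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

photo = b"...raw image bytes..."
record = register_capture(photo)      # stored in metadata or an append-only database
print(verify_capture(photo, record))  # True; any edit to the bytes flips this to False
```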

Tufekci also reminds us:

“An effective identification system, however, carries with it a worrisome truth: Every verification method carries the threat of surveillance.”

While she goes on to say there are ways to mitigate this concern, it’s not one to be taken lightly.

Digital Defense Playbook

One week before OpenAI’s release of GPT-2, there was another release that I cared more about, but that got nowhere near as much attention.

Our Data Bodies (ODB) released the Digital Defense Playbook: Community Power Tools for Reclaiming Data.

The workbook is described as “a set of tried-and-tested tools for diagnosing, dealing with, and healing the injustices of pervasive and punitive data collection and data-driven systems.”

The release goes on to say that “ODB hopes the Playbook will energize community involvement in tackling surveillance, profiling, and privacy problems rooted in social injustice.”

ODB is doing the work of trying to deal with the aftermath of some of the issues named earlier, including bias encoded into AI (both intentionally and unintentionally) and the fallout from unintended consequences.

Tawana Petty during the Data for Black Lives II closing panel. (Source: Data for Black Lives Facebook)

During the closing panel of the Data for Black Lives II conference at MIT Media Lab, organizer and ODB team member Tawana Petty informed the audience:

“Y’all getting the Digital Defense Playbook, but we didn’t tell you all their strategies and we never will. Because we want our community members to continue to survive and to thrive. So you’ll get some stuff, but the stuff that’s keeping them alive, we keepin’ to ourselves.”

While the group created this resource in order to share knowledge, limiting what they share is just as important to their project.

It also recalls Farid’s tactic of explaining some aspects of his research but keeping others private to maintain an advantage.

Model Risk Management

Last week, after I had already started trying to piece this article together, I attended a talk by Sri Krishnamurthy, founder of QuantUniversity.com, entitled “Model Governance in the age of Data Science and AI.” He talked about the challenges surrounding reproducibility in code, as well as the importance of interpretability and transparency in codebases.

I was especially interested to learn more about Model Risk Management as defined by the Federal Reserve’s SR 11–7.

The document defines “model risk” as “the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.” Although the document is clearly geared towards financial institutions in its examples of adverse consequences (financial loss or “damage to a banking organization’s reputation”), it does provide guidelines for mitigating risk that could help inform practices elsewhere.

If we shift our thinking about adverse consequences toward the impact on people instead of the impact on business, we might start heading in the right direction.

(I can already hear the laughter at this line. This article could pivot in a whole other direction at this point, but I’ll keep moving forward.)

So where does this leave us?

In yet another piece covering OpenAI’s release of GPT-2, technology writer Aaron Mak restates the issue I was thinking through above:

“Machine learning practitioners have not yet established many widely accepted frameworks for considering the ethical implications of creating and releasing A.I.-enabled technologies.”

However, he continues:

“If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may also be a losing battle. Even if there is a consensus around the ethics of disseminating certain algorithms, it might not be enough to stop people who disagree.”

While an agreement or widely adopted set of guidelines could help address unintended consequences and other issues discussed here, Mak is right in that it wouldn’t stop actual malicious actors.

Personally, I am glad OpenAI chose to bring this conversation into the public, and I find that more important than the model they did or didn’t release.

While I tend to land on the side of making things open source and sharing information, for the sake of the democratization of technology, transparency, and accountability, I also understand why people would be wary of releasing technology and information that could be used in ways they didn’t intend.

(And I worry about other aspects of open source, but that’s for another time.)

Thank you for coming on this journey with me.

I’m still working to develop my understanding of these issues and clarify where I stand.

I would love to hear your thoughts, questions, or feedback on any of this.
