Bias in APAS

Ian Kelling iank at fsf.org
Tue Jul 12 05:48:41 UTC 2022


"Bradley M. Kuhn" <bkuhn at sfconservancy.org> writes:

> I agree with the Committee's conclusion that the AI assistance is unlikely to
> introduce inherently *more* bias than the humans that wrote the original
> software did,

I also disagree on this point. Someone reverse engineered the profanity
filter in Copilot, and I found that it has a bias toward the cultural
opinions of the filter's authors.

Note that this filtering happens outside the model, on its output, but
two points on that: first, the model's training input very likely went
through its own filter, which carried bias from its authors. Second,
GitHub keeps the model secret on its servers, so functionally there is
little difference between it applying a post-model filter on the server
and having that filter embedded in the model itself.

Beyond that, the choice to train only on software hosted on GitHub means
the input carries the biases of what GitHub allows. Officially, some of
that bias is spelled out here:
https://docs.github.com/en/site-policy/acceptable-use-policies/github-acceptable-use-policies,
so, for example: "We do not allow content or activity on GitHub that: is
unlawful or promotes unlawful activities;" Code contains data. I would
bet Copilot will complete a list of US states, but it might not complete
a list of states where abortion is legal, at least at some point in the
future, because lawyers who have led the drafting of successful
anti-abortion legislation are giving interviews saying they think it is
a good idea to pass laws targeting companies that provide that kind of
information, and GitHub's terms of service already go along with that
kind of law.


-- 
Ian Kelling | Senior Systems Administrator, Free Software Foundation
GPG Key: B125 F60B 7B28 7FF6 A2B7  DF8F 170A F0E2 9542 95DF
https://fsf.org | https://gnu.org
