Bias in APAS
Bradley M. Kuhn
bkuhn at sfconservancy.org
Fri Jul 1 12:24:15 UTC 2022
[ Duplicate sent to list, my previous accidentally had X-No-Archive set and I
wanted this to land in the archives. ]
Ian, thanks for splitting threads out into topics.
The minutes from the 2022-06-22 meeting read:
> > As a final matter for the meeting, we discussed what one committee member
> > dubbed the “creepiness factor” of AI-assistive systems. We've found there to
> > be creepy and systemic bias issues in, for example, AI systems that assist
> > with hiring, or those that decide what alleged criminals receive bail. We
> > considered: do these kinds of problems exist with APAS's?
Ian Kelling wrote:
> I'm surprised by the lack of imagination following this.
I think you are mistaking my lack of good minute-taking skills for a lack of
imagination. 😛
> The answer is clearly yes. An APAS is meant to be a general purpose
> programming tool, so it can be used to create a program which "assists with
> hiring, or those that decide what alleged criminals receive bail",
The question we were considering is whether the model itself exacerbates
problems of bias *beyond* what you would find in the alternative: a human
author writing the software completely unassisted by AI.
I agree with the Committee's conclusion that AI assistance is unlikely to
introduce inherently *more* bias than the humans who wrote the original
software did, or than the new human operating the other end of the system
would.
If you disagree, we'd be interested to hear your arguments, and I can brief
the Committee at the next meeting.
--
Bradley M. Kuhn - he/him
Policy Fellow & Hacker-in-Residence at Software Freedom Conservancy
========================================================================
Become a Conservancy Sustainer today: https://sfconservancy.org/sustainer