Bias in APAS
Ian Kelling
iank at fsf.org
Fri Jul 1 02:26:42 UTC 2022
"Bradley M. Kuhn" <bkuhn at sfconservancy.org> writes:
> As a final matter for the meeting, we discussed what one committee member
> dubbed the “creepiness factor” of AI-assistive systems. We've found there to
> be creepy and systemic bias issues in, for example, AI systems that assist
> with hiring, or those that decide what alleged criminals receive bail. We
> considered: do these kinds of problems exist with APAS's?
I'm surprised by the lack of imagination following this. The answer is
clearly yes. An APAS is meant to be a general-purpose programming tool,
so it can be used to create a program that "assists with hiring" or one
that "decide[s] what alleged criminals receive bail", and to replicate
the biases of existing programs that do those things. It could even
suggest that the programmer create such an AI system around a public
data set. Also, I think any program aimed at judging hiring or deciding
bail would very easily be biased even without AI.
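To make that last point concrete, here is a minimal sketch of a purely
rule-based bail recommender (all names and values are hypothetical,
invented for illustration, not drawn from any real system). There is no
machine learning anywhere in it, yet the hand-written rule keyed on ZIP
code, a well-known proxy for race and income, biases the outcome:

    # Hypothetical rule-based "risk score" -- no machine learning involved.
    # The ZIP-code rule acts as a proxy for race/income, so the program is
    # biased even though nothing was trained on any data set.
    HIGH_RISK_ZIPS = {"60624", "48205"}  # made-up values for illustration

    def bail_recommendation(age: int, prior_arrests: int, zip_code: str) -> str:
        score = prior_arrests * 2
        if age < 25:
            score += 1
        if zip_code in HIGH_RISK_ZIPS:  # the biased, hand-coded rule
            score += 3
        return "deny bail" if score >= 4 else "grant bail"

    # Two defendants identical except for their address get different outcomes:
    print(bail_recommendation(30, 1, "60624"))  # deny bail
    print(bail_recommendation(30, 1, "94110"))  # grant bail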
--
Ian Kelling | Senior Systems Administrator, Free Software Foundation
GPG Key: B125 F60B 7B28 7FF6 A2B7 DF8F 170A F0E2 9542 95DF
https://fsf.org | https://gnu.org