Bias in APAS

Ian Kelling iank at fsf.org
Tue Jul 12 04:41:30 UTC 2022


"Bradley M. Kuhn" <bkuhn at sfconservancy.org> writes:

> [ Duplicate sent to list, my previous accidentally had X-No-Archive set and I
>   wanted this to land in the archives. ]
>
> Ian, thanks for splitting threads out into topics.
>
> The minutes from the 2022-06-22 meeting read:
>> > As a final matter for the meeting, we discussed what one committee member
>> > dubbed the “creepiness factor” of AI-assistive systems.  We've found there to
>> > be creepy and systemic bias issues in, for example, AI systems that assist
>> > with hiring, or those that decide what alleged criminals receive bail.  We
>> > considered: do these kinds of problems exist with APAS's?
>
> Ian Kelling wrote:
>> I'm surprised by the lack of imagination following this.
>
> I think you are mistaking my lack of good minutes-taking skills as lack of
> imagination. 😛
>
>> The answer is clearly yes. An APAS is meant to be a general purpose
>> programming tool, so it can be used to create a program which "assists with
>> hiring, or those that decide what alleged criminals receive bail",
>
> The question we were considering is whether the model itself exacerbated
> problems of bias *beyond* what you would find in the alternative …  due to a
> human author writing the software completely unassisted by AI.
>
> I agree with the Committee's conclusion that the AI assistance is unlikely to
> introduce inherently *more* bias than the humans that wrote the original
> software did, or that the new human using the other end of the system would.
>
> If you disagree, we'd be interested to know your arguments and I can brief
> the Committee at the next meeting.

I do disagree with the "new human using the other end of the system
would" part. For AI systems that regurgitate some combination of their
training set, like Copilot, the suggestions by their nature include the
biases of that training set. To say that people using the system would
not have any more bias in their code due to Copilot seems to rest on
some faulty assumptions. For example: that those people would already
create code with at least as much bias as the code of the past, as if
humans can't advance. I don't believe that. Or that they would
generally recognize the bias in the suggestions. I don't believe that.
Or that the Copilot code does not contain significant biases. I don't
believe that. I think it would be quite easy to find Copilot
suggestions that contain harmful bias.
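
To make concrete the kind of suggestion I mean, here is a hypothetical
sketch (invented for illustration, not an actual Copilot output) of the
sort of completion a model trained on historical hiring code could
plausibly produce:

    # Hypothetical illustration only -- not an actual Copilot suggestion.
    # Imagine a completion model, trained on historical hiring code,
    # asked to fill in the body of this function.

    def score_applicant(applicant: dict) -> float:
        """Return a hiring score; higher means more likely to advance."""
        score = 2.0 * applicant.get("years_experience", 0)
        # The kind of biased pattern that exists in real historical code
        # and that a model regurgitating its training set can reproduce:
        if applicant.get("gender") == "male":
            score += 5.0  # direct use of a protected attribute
        if applicant.get("zip_code") in {"00001", "00002"}:
            score -= 3.0  # placeholder codes standing in for a redlining-style proxy
        return score

    if __name__ == "__main__":
        # Two applicants identical except for the biased features.
        a = {"years_experience": 5, "gender": "male", "zip_code": "99999"}
        b = {"years_experience": 5, "gender": "female", "zip_code": "00001"}
        print(score_applicant(a), score_applicant(b))  # 15.0 vs. 7.0

Nothing in that sketch is exotic; patterns like those exist in real
code, which is exactly what these models were trained on.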

