That’s a pretty uncharitable interpretation of my post. You shared an example of a single mitigation that you personally find ludicrous (without explaining why). And then I’m supposed to throw up my hands and go “I guess it’s pointless to try to be less racist”?
Tay might still be around if Microsoft had given some thought to potential issues before release. I’d prefer not to have this awesome technology tainted out of the gate as a tool for racists and pornographers. They’ll get their hands on it eventually, but it’d be nice if they don’t get all the up-front press.
That is the only mitigation used in DALL-E 2, which until recently was the only publicly available text-to-image model.
> I’d prefer not to have this awesome technology tainted out of the gate as a tool for racists and pornographers
Why is it your business what people do with the model? If people want to be racist, they can already do so; they don't need a shitty model that doesn't work half as well as paying some guy in the third world $2/h to shitpost online. And I don't see the problem with pornography.
Bias amplification is a real issue. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...