Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, and humans have helped train them, then they may be able to self-correct for some of these biases. Remarkably, all we have to do is ask. That’s…