AI image generators can amplify biased stereotypes in their output. There have been attempts to quash the problem through manual fine-tuning (which can have unintended consequences, such as generating diverse but historically inaccurate images) and by increasing the amount of training data. “People often claim that scale cancels out noise,” says cognitive scientist Abeba Birhane. “In fact, the good and the bad don’t balance out.” The most important step toward understanding how these biases arise, and how to avoid them, is transparency, researchers say. “If a lot of the data sets are not open source, we don’t even know what problems exist,” says Birhane.