Tool Combats Copyright Infringement in AI Image Generators by ‘Poisoning’ Training Data
Text-to-image generators, such as Midjourney or DALL-E, are trained on large scraped datasets that can include copyrighted images, which has led to copyright infringement accusations against the companies behind them. To combat unauthorized image scraping, researchers have developed a tool called “Nightshade,” which subtly alters an image’s pixels in ways that disrupt computer-vision models while leaving the image visually unchanged to humans.
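The core constraint described above, changes small enough to be invisible to people yet present in the data a model learns from, can be sketched in a few lines. Note this is a generic illustration of bounded pixel perturbation, not Nightshade’s actual algorithm, which computes optimized (not random) perturbations targeting specific concepts:

```python
import numpy as np

def add_bounded_perturbation(image, epsilon=2, seed=0):
    """Add a small, bounded random perturbation to an 8-bit image.

    Illustrative only: each pixel shifts by at most +/-epsilon intensity
    levels, so the image looks unchanged to a human viewer. Nightshade
    itself optimizes its perturbations to mislead model training; this
    sketch only demonstrates the "small change, same appearance" idea.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Work in a wider dtype to avoid uint8 overflow, then clip back.
    poisoned = np.clip(image.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    return poisoned

# Example: perturb a synthetic 64x64 RGB image.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = add_bounded_perturbation(image)
max_change = int(np.abs(poisoned.astype(int) - image.astype(int)).max())
print(max_change)  # no pixel moved by more than epsilon levels
```

A perturbation this small is imperceptible on screen, yet if many such images enter a training set, the statistical signal they carry can accumulate and skew what the model learns.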
Introducing these “poisoned” images into training data can cause AI models to generate unpredictable, unintended results, with the corruption spreading to related prompt keywords. While Nightshade aims to protect copyright, concerns remain about potential misuse. Proposed safeguards include scrutinizing the provenance of input data, ensemble modeling, audits, and addressing the larger question of technological governance around data poisoning and other adversarial attacks on AI systems.