Estimated reading time: 1 minute
OpenAI keeps deleting data that could allegedly prove the AI company violated copyright law by training ChatGPT on authors’ works. The deletions appear to be largely unintentional, but the sloppy practice is dragging out early court battles that could determine whether AI training is fair use.
Most recently, The New York Times accused OpenAI of unintentionally erasing programs and search results that the newspaper believed could serve as evidence of copyright abuse.
The NYT apparently spent more than 150 hours extracting training data, while following a model inspection protocol that OpenAI set up precisely to avoid conducting potentially damning searches of its own database. This process began in October, but by mid-November, the NYT discovered that some of the data gathered had been erased due to what OpenAI called a “glitch.”