
Ethical AI Models: A Must for IT Teams to Adopt Now

Summary

– Some US states’ laws on receiving stolen property could apply to enterprises using LLMs, since those enterprises often know or suspect that the training data violates copyright.
– Companies may struggle to prove they didn’t suspect the data was stolen if legal issues arise in the future.
– Derivative risks exist, such as LLMs using scraped proprietary data, potentially leading to lawsuits over profits gained from that data.
– Jason Andersen warns enterprises to consider legal exposure as open-source model costs drop, given the high-stakes regulatory environment.
– Unauthorized use of proprietary data in LLMs could result in costly litigation for companies benefiting from the derived insights.

The growing adoption of AI models brings significant legal and ethical concerns that IT teams can no longer ignore. Many enterprises rely on large language models (LLMs) trained on questionable data sources, turning a blind eye to potential copyright violations. This posture mirrors the legal concept of receiving stolen property, under which even a reasonable suspicion that goods were stolen can create liability. If legal challenges arise in the coming years, companies may struggle to prove they acted in good faith when using these models.

Beyond direct copyright issues, there’s the risk of derivative legal exposure. Imagine a scenario where proprietary research on geothermal energy extraction is scraped without consent and integrated into an AI model. If a corporation like ExxonMobil licenses that model and unknowingly leverages the stolen data to generate billions in profits, the original creator could pursue legal action to recover a share of those profits. The financial and reputational fallout could be devastating.

Jason Andersen, a VP and principal analyst at Moor Insights & Strategy, highlights the urgency of addressing these risks. He notes that as open-source AI models become cheaper to train and fine-tune, businesses must proactively safeguard themselves. Regulatory scrutiny is intensifying, and failing to implement ethical AI practices could leave organizations vulnerable to costly litigation and reputational damage.

The message is clear: IT leaders must prioritize transparency in AI training data and ensure compliance with intellectual property laws. Ignoring these concerns now could lead to severe consequences down the line.

(Source: COMPUTERWORLD)

Topics

– Legal risks of using LLMs (95%)
– Copyright violations in AI training data (90%)
– Receiving stolen property laws (85%)
– Derivative legal exposure (80%)
– Proprietary data misuse (75%)
– Litigation risks for enterprises (70%)
– Ethical concerns in AI adoption (65%)
– Regulatory scrutiny of AI (60%)
– Open-source AI model risks (55%)
– Reputational damage from AI misuse (50%)