Google Supports EU AI Code Despite Innovation Concerns

Summary
– Google will adopt the EU’s voluntary AI code of practice despite concerns it may slow Europe’s AI development and deployment.
– The code is a voluntary framework to help AI developers align with the AI Act’s requirements.
– The AI Act’s rules for high-risk general-purpose AI models take effect on August 2.
– Major tech firms like OpenAI, Google, Meta, and Anthropic are expected to be subject to these rules.
– Affected companies have a two-year period to achieve full compliance with the AI Act.

Google has announced its commitment to follow the European Union’s voluntary guidelines for artificial intelligence development, even as the company raises concerns about potential impacts on innovation. The tech giant acknowledges the importance of responsible AI practices but warns that strict regulations could slow Europe’s ability to compete in the fast-evolving AI sector.
The EU’s voluntary code of practice serves as a roadmap for AI developers, helping them align with the broader AI Act, whose obligations for general-purpose AI models take effect on August 2. Under those rules, models deemed high-risk will face stricter oversight. Major players such as Google, OpenAI, Meta, and Anthropic will need to adapt, with a two-year transition period to reach full compliance.
While Google supports the initiative, its stance reflects a broader industry tension between fostering innovation and ensuring ethical AI deployment. The company’s participation signals a willingness to collaborate with regulators, though it remains cautious about how rigid policies might affect technological advancement. The EU’s approach aims to balance safety with competitiveness, but the long-term impact on Europe’s AI ecosystem remains uncertain.
(Source: COMPUTERWORLD)