
Build Trustworthy AI Agents for Your Business

Summary

– AI agents are becoming widespread in business, with companies either adopting off-the-shelf tools or developing in-house systems using large language models.
– Building trustworthy AI agents requires rigorous measurement of success through public benchmarks, internal automated evaluations, and final human expert assessments.
– Effective agent development depends on close collaboration between technical teams and designers to create a shared interface and understanding for human-AI interaction.
– Agents should be designed to leverage existing, proven software capabilities as tools, rather than being treated as omniscient systems.
– Companies like Thomson Reuters are engaging in industry alliances and academic partnerships to address the high accuracy and trust requirements needed for professional domains.

Businesses are increasingly integrating AI agents into their operations, whether through customized internal tools or commercial software powered by large language models. For professionals aiming to implement these systems effectively, adopting a strategic approach is essential. Joel Hron, Chief Technology Officer at Thomson Reuters Labs, offers valuable insights from his work deploying generative AI and agentic technologies at the information services corporation. He emphasizes that building reliable AI requires focused effort in several critical areas.

A primary step is establishing clear success metrics. Hron stresses that teams must define what excellence looks like for their specific applications. This involves more than simple benchmarks; it requires creating detailed internal evaluations that assess the quality and relevance of an agent’s output. While automated testing accelerates development by allowing rapid iteration, human expert review remains indispensable before any solution is launched. This combination of automated and human assessment helps ensure that final products are both robust and trustworthy.
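The automated portion of this evaluation loop can be sketched in a few lines. The harness below is illustrative only, not Thomson Reuters' actual tooling: `EvalCase`, `run_eval`, and the phrase-matching scoring rule are hypothetical stand-ins for a richer internal evaluation that would likely use graded rubrics or model-based judges.

```python
# Minimal sketch of an automated evaluation harness for an AI agent.
# All names here (EvalCase, run_eval, stub_agent) are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    required_phrases: list[str]  # facts a correct answer must contain

def run_eval(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score the agent: fraction of cases whose output contains every required phrase."""
    passed = 0
    for case in cases:
        answer = agent(case.prompt).lower()
        if all(p.lower() in answer for p in case.required_phrases):
            passed += 1
    return passed / len(cases)

# A stub standing in for a real LLM-backed agent call.
def stub_agent(prompt: str) -> str:
    return "The limitation period in this jurisdiction is six years."

cases = [EvalCase("How long is the limitation period?", ["six years"])]
score = run_eval(stub_agent, cases)
print(f"pass rate: {score:.0%}")
```

A harness like this enables the rapid iteration Hron describes; the human expert review step then audits the cases the automated check cannot judge, such as tone, legal nuance, or subtle factual errors.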

Fostering collaboration between teams is another vital component. Hron notes that effective agentic systems function as partners to human users, necessitating a shared interface and common language. To achieve this, designers and data scientists must work closely together, bridging the gap between technical functionality and user experience. Regular, integrated collaboration allows for a natural exchange of ideas, leading to more intuitive and effective AI tools that users can understand and rely upon.

It is also crucial to recognize the current limitations of AI models. Agents are not all-knowing; their capabilities are enhanced when connected to established, proven software tools. Hron’s strategy involves deconstructing existing professional applications into discrete functions that agents can utilize. This approach extends an AI’s practical utility far more than relying on a model’s native knowledge alone. Teams should re-examine their systems to determine how interfaces and workflows can be adapted not just for human operators, but for effective human-agent collaboration.
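One common way to realize this "existing software as tools" pattern is a simple tool registry the agent runtime can dispatch into. The sketch below is an assumption about how such a registry might look, not a description of Thomson Reuters' systems; the tool names (`search_case_law`, `check_citation`) and the `dispatch` helper are invented for illustration.

```python
# Sketch of exposing existing, proven application functions as agent tools.
# The registry, decorator, and tool functions are all hypothetical examples.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register an existing function so an agent can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_case_law(query: str) -> str:
    # In practice this would call a battle-tested search backend,
    # not the model's native knowledge.
    return f"Top results for: {query}"

@tool
def check_citation(citation: str) -> str:
    return f"Citation {citation} verified against the document store."

def dispatch(tool_name: str, **kwargs) -> str:
    """The agent runtime routes a model-selected tool call to real software."""
    if tool_name not in TOOLS:
        return f"Unknown tool: {tool_name}"
    return TOOLS[tool_name](**kwargs)

print(dispatch("search_case_law", query="limitation period"))
```

The design point matches Hron's: each tool is a discrete, already-trusted function carved out of an existing application, so the agent's reliability rests on proven software rather than on the model alone.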

Finally, looking beyond internal development to industry and academic partnerships can drive significant advances. Thomson Reuters co-founded the Trust in AI Alliance, a forum where leading AI researchers from major technology firms discuss engineering trustworthy systems. The company has also established a dedicated research lab with Imperial College London. These initiatives focus on the last increments of performance, moving from good to exceptional accuracy, which are often what define professional-grade, reliable AI. For businesses in fields like law and compliance, that marginal gain is where competitive advantage is ultimately secured.

(Source: Variety)
