Artemis Seaford & Ion Stoica on AI’s Ethical Crisis at Sessions

Summary
– Generative AI’s rapid advancement raises urgent ethical concerns about deception and safety as tools become widely accessible.
– Artemis Seaford and Ion Stoica will discuss AI ethics at TechCrunch Sessions: AI, focusing on risks, interventions, and responsible scaling.
– Seaford brings expertise in AI safety, media authenticity, and risk management from roles at ElevenLabs, OpenAI, and Meta.
– Stoica offers a systems-level perspective on AI ethics, drawing from his work on open-source projects and leadership at Databricks.
– The event features insights from top AI experts, networking opportunities, and discounted tickets for attendees.

The rapid advancement of generative AI has pushed ethical concerns from hypothetical debates to urgent realities. As these tools become more accessible and convincing, the risks of misuse, from deepfakes to misinformation, demand immediate attention. At TechCrunch Sessions: AI, industry leaders Artemis Seaford and Ion Stoica will tackle these pressing issues head-on, offering insights into how we can harness AI’s power responsibly.
Seaford, Head of AI Safety at ElevenLabs, brings a perspective shaped by her earlier work at OpenAI and Meta and her background in global risk management. Her expertise lies in preventing AI-driven abuse while ensuring media authenticity. Attendees can expect a no-nonsense breakdown of emerging threats, from evolving deepfake technologies to practical solutions that actually work.
Ion Stoica, co-founder of Databricks and a professor at UC Berkeley, approaches the challenge from an infrastructure standpoint. His contributions to open-source projects like Spark and Ray laid the groundwork for today’s AI advancements. Drawing on firsthand experience scaling AI responsibly, Stoica will highlight where current systems fall short and how safety can be built into core architectures from the start.
Their discussion will delve into ethical blind spots in AI development, exploring the roles of industry, academia, and regulation in shaping a safer future. This isn’t just theoretical—it’s a roadmap for ensuring AI remains a force for good.
(Source: TechCrunch)