SHARON AI and VAST Data Launch InsightEngine in Australia

Summary
– SHARON AI has partnered with VAST Data to provide scalable AI inference solutions for enterprise and government customers.
– The VAST InsightEngine enables real-time ingestion and processing of diverse data types for low-latency AI workflows with built-in security and governance.
– This collaboration helps organizations transition from AI experimentation to production with sovereign, enterprise-grade workflows across sectors like finance and public safety.
– The infrastructure is operational in a Melbourne data center, supporting local and secure AI workloads for Australian institutions.
– University of New South Wales researchers are using this platform to advance AI reasoning capabilities through model optimization and specialized applications like weather forecasting.
A major new partnership in Australia’s technology sector is set to transform how enterprises and government bodies deploy artificial intelligence at scale. SHARON AI, the country’s premier Neocloud provider, has joined forces with VAST Data, specialists in AI operating systems, to deliver powerful inference capabilities tailored for large-scale organizational needs.
The collaboration introduces the VAST InsightEngine, a comprehensive system designed for end-to-end data handling. It supports continuous, real-time ingestion of structured, unstructured, and streaming information. The platform delivers low-latency, massively parallel vector and hybrid search, essential for powering Retrieval-Augmented Generation (RAG) and agentic workflows. Integrated directly within the VAST AI OS, it benefits from unified governance, stringent security protocols, and full data lineage tracking, ensuring policy-based access controls, encryption, and comprehensive auditability for every query processed.
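To make the retrieval pattern concrete, the sketch below shows a RAG query in miniature: embed the question, find the closest ingested records in a vector index, and assemble a grounded prompt for a language model. The sample documents, the embed() helper, and the in-memory index are illustrative placeholders only, not the InsightEngine API.

```python
# Illustrative sketch only: a retrieval-augmented generation (RAG) query in
# miniature. The embedder, documents, and index are placeholders, not the
# VAST InsightEngine API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedder; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vector = rng.standard_normal(384)
    return vector / np.linalg.norm(vector)

# Ingested records and their precomputed embeddings (the "vector index").
documents = [
    "Q3 revenue grew 12% year on year.",
    "The incident response runbook was updated in March.",
]
index = np.stack([embed(doc) for doc in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = index @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How did revenue change last quarter?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this grounded prompt would then be sent to a language model
```

In a production deployment of this pattern, the index, embeddings, and generation step would all be managed by the platform rather than written as application code.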
Ofir Zan, AI Solutions & Enterprise Lead at VAST Data, emphasized the strategic importance of this development. He noted that as AI systems advance, the capacity to perform secure, real-time reasoning across vast datasets will become a cornerstone of enterprise intelligence. The combined solution from SHARON AI and VAST orchestrates event triggers and functions linked to scalable data pipelines. This enables complex, multi-step retrieval and reasoning workflows, all operating within a sovereign and secure environment.
The partnership allows organizations to move from experimental AI projects to full-scale production using repeatable, enterprise-grade workflows. In the financial services sector, where high throughput and minimal latency are non-negotiable, the platform supports RAG applications at any scale, using a large native vector index to search billions of embedded records while enforcing fine-grained permission controls.
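The permission model can be pictured the same way: the candidate set is restricted to records the caller is entitled to see before any similarity ranking takes place. The record schema, ACL groups, and search() function below are assumptions made for illustration.

```python
# Illustrative sketch only: permission filtering applied before vector search,
# so a query can only match records the caller is allowed to see.
import numpy as np

records = [
    {"text": "Retail client portfolio summary", "acl": {"retail-analysts"}},
    {"text": "Institutional trading desk limits", "acl": {"institutional-risk"}},
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedder, as in the earlier sketch."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vector = rng.standard_normal(128)
    return vector / np.linalg.norm(vector)

def search(query: str, user_groups: set[str], k: int = 5) -> list[str]:
    # Keep only records whose ACL overlaps the caller's groups...
    allowed = [r for r in records if r["acl"] & user_groups]
    if not allowed:
        return []
    # ...then rank just that permitted subset by cosine similarity.
    vectors = np.stack([embed(r["text"]) for r in allowed])
    scores = vectors @ embed(query)
    return [allowed[i]["text"] for i in np.argsort(scores)[::-1][:k]]

print(search("exposure limits for the trading desk", {"institutional-risk"}))
```

At the scale of billions of records, this filtering would be pushed down into the vector index itself rather than applied in application code as it is here.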
For public safety initiatives and smart city projects, the ability to ingest and process enormous volumes of video and metadata in real time offers significant advantages. It reduces operational expenditures, enhances situational awareness, and accelerates incident response times, all while ensuring sensitive data remains within national borders.
Wolf Schubert, CEO of SHARON AI, highlighted the foundational impact of this partnership. By merging SHARON AI’s sovereign GPU cloud infrastructure with the VAST InsightEngine, enterprises and government institutions gain the ability to run advanced AI workloads locally, securely, and without performance compromises. The recent activation of their supercluster at NEXTDC’s Tier IV M3 data centre in Melbourne underscores a firm commitment to providing Australia with sovereign, high-performance AI infrastructure.
Initial workloads are already operational, with researchers from the University of New South Wales collaborating on the SHARON AI cloud platform. Their work focuses on advancing reasoning-centric AI research across various fields. PhD candidates are leveraging these resources to achieve several key objectives.
They are working to enhance reasoning capabilities within smaller language models. This involves structured reasoning techniques, auto-formalization processes, and innovative expert-aware post-tuning applied to Mixture-of-Experts architectures.
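For readers unfamiliar with the term, a Mixture-of-Experts layer routes each token to a small subset of specialist sub-networks rather than through one monolithic block; expert-aware post-tuning targets exactly that routing structure. The sketch below shows the structure in miniature, with the sizes, top-k routing, and TinyMoE class chosen purely for illustration rather than drawn from the researchers' actual models.

```python
# Illustrative sketch only: a minimal Mixture-of-Experts (MoE) layer with
# top-k routing. Sizes and routing details are assumptions, not the
# researchers' actual architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); each token is sent to its top-k experts only.
        gate_logits = self.router(x)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)      # 8 tokens of width 64
print(TinyMoE()(tokens).shape)   # torch.Size([8, 64])
```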
Additionally, researchers are conducting parallel fine-tuning and evaluation of state-of-the-art large language models, including Falcon, Llama, Qwen, and DeepSeek. These models are being tested on tasks such as question answering, with specific applications in mathematical and spatio-temporal reasoning domains.
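An evaluation of this kind can be outlined in a few lines with the Hugging Face transformers library: generate an answer for each question and check it against a reference. The small Qwen checkpoint and the two-item toy set below are stand-ins for the much larger models and benchmarks involved in the actual research.

```python
# Illustrative sketch only: a tiny question-answering evaluation loop using
# Hugging Face transformers. The checkpoint and toy dataset are stand-ins.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small stand-in; any causal LM works
)

qa_set = [
    {"question": "What is 17 + 26?", "answer": "43"},
    {"question": "Which planet is known as the Red Planet?", "answer": "Mars"},
]

correct = 0
for item in qa_set:
    prompt = f"Answer briefly.\nQ: {item['question']}\nA:"
    output = generator(prompt, max_new_tokens=32)[0]["generated_text"]
    completion = output[len(prompt):]            # keep only the model's answer
    correct += item["answer"].lower() in completion.lower()

print(f"accuracy: {correct}/{len(qa_set)}")
```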
Another significant project involves accelerating global weather forecasting. The team is training high-resolution, data-driven models on extensive ERA5 datasets to achieve faster and more accurate predictions.
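In outline, such a setup pairs gridded reanalysis data with a neural network trained to map the current atmospheric state to the next one. The file path, the t2m (2-metre temperature) variable, and the toy convolutional model below are placeholders; the research models are far larger and train on far more ERA5 data.

```python
# Illustrative sketch only: the shape of a data-driven forecaster on ERA5-style
# gridded data. The file path, variable name, and tiny CNN are placeholders.
import torch
import torch.nn as nn
import xarray as xr

# ERA5 reanalysis is distributed as NetCDF/GRIB; "era5_t2m.nc" stands in for a
# downloaded subset holding 2-metre temperature on a (time, lat, lon) grid.
ds = xr.open_dataset("era5_t2m.nc")
t2m = torch.tensor(ds["t2m"].values, dtype=torch.float32)   # (time, lat, lon)
t2m = (t2m - t2m.mean()) / t2m.std()                        # normalise

# Learn to predict the field at time t+1 from the field at time t.
inputs, targets = t2m[:-1].unsqueeze(1), t2m[1:].unsqueeze(1)

model = nn.Sequential(                      # toy convolutional forecaster
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10):                      # a few steps, for illustration
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"step {step}: mse {loss.item():.4f}")
```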
Collectively, this research explores how specialized techniques like post-tuning, fine-tuning, and GPU-accelerated model architectures can boost AI’s reasoning performance, scalability, and domain-specific applicability. The efforts by UNSW researchers are establishing a foundation for developing smaller, more efficient, and highly capable reasoning models with potential applications across scientific research, forecasting, and advanced AI evaluation.
(Source: ITWire Australia)
