
XCENA’s MX1 Memory: Thousands of RISC-V Cores, CXL 3.2 & SSD Tiering

▼ Summary

– XCENA unveiled its MX1 computational memory product at FMS 2025, featuring thousands of RISC-V cores for near-data processing.
– The MX1 reduces CPU-memory overhead by integrating compute directly next to DRAM using PCIe Gen6 and CXL 3.2 standards.
– It supports petabyte-scale SSD-backed memory expansion and is designed for workloads like AI, analytics, and vector databases.
– The product roadmap includes the MX1P model in late 2025 and the MX1S in 2026, both leveraging CXL 3.2 for enhanced bandwidth.
– MX1 won the “Most Innovative Memory Technology” award at FMS 2025, and XCENA offers a software development kit for evaluation and deployment.

At the recent FMS 2025 conference, South Korean innovator XCENA unveiled its groundbreaking MX1 computational memory platform, a solution poised to redefine server architecture through near-data processing and massive scalability. Engineered around PCIe Gen6 and the advanced Compute Express Link 3.2 standard, the MX1 places compute resources directly alongside DRAM, dramatically cutting down the latency and energy costs tied to shuttling data between processors and memory.

This design incorporates thousands of custom RISC-V cores, purpose-built to tackle demanding workloads like vector database operations, real-time analytics, and memory-intensive queries. By situating processing power within the memory subsystem, XCENA’s approach minimizes data movement overhead, a critical advantage for data-heavy applications in AI and large-scale enterprise environments.
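The data-movement advantage of near-data processing can be illustrated with a toy model. The sketch below is purely conceptual (the function names, byte counts, and query are illustrative assumptions, not XCENA's SDK or architecture): it compares how many bytes must cross the CPU-memory link when a filter-and-sum query runs on the host versus next to the memory, where only the final scalar needs to travel.

```python
def host_side_query(rows, threshold, row_bytes=64):
    # Host-side execution: every row must cross the CPU-memory link
    # before the CPU can inspect it.
    bytes_moved = len(rows) * row_bytes
    result = sum(r for r in rows if r > threshold)
    return result, bytes_moved

def near_data_query(rows, threshold, row_bytes=64, result_bytes=8):
    # Near-data execution: memory-side cores scan the rows locally,
    # so only the scalar result crosses the link.
    bytes_moved = result_bytes
    result = sum(r for r in rows if r > threshold)
    return result, bytes_moved

rows = list(range(1000))
r_host, moved_host = host_side_query(rows, 900)
r_ndp, moved_ndp = near_data_query(rows, 900)
assert r_host == r_ndp            # same answer either way
print(moved_host, moved_ndp)      # 64000 8
```

The answers are identical, but the host-side path moves four orders of magnitude more data across the interconnect, which is the overhead near-data designs like the MX1 aim to eliminate.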

The MX1 also introduces a novel tiered memory architecture that blends DRAM with SSD-backed expansion, supporting capacities reaching into the petabyte range. The system doesn't just scale; it also integrates compression and enhanced reliability mechanisms, making it suitable for the most demanding data centers.
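The general idea of DRAM-plus-SSD tiering can be sketched as a small, fast hot tier backed by a large cold tier, with least-recently-used data demoted and promoted on access. The class below is a minimal toy model of that pattern (the `TieredMemory` name, LRU policy, and page-level granularity are assumptions for illustration, not details of the MX1):

```python
from collections import OrderedDict

class TieredMemory:
    """Toy two-tier store: a capacity-limited 'DRAM' tier backed by
    an effectively unbounded 'SSD' tier, with LRU demotion."""

    def __init__(self, dram_capacity):
        self.dram = OrderedDict()   # hot tier, ordered by recency
        self.ssd = {}               # cold tier
        self.capacity = dram_capacity

    def write(self, addr, value):
        self.dram[addr] = value
        self.dram.move_to_end(addr)           # mark most recently used
        if len(self.dram) > self.capacity:
            cold_addr, cold_val = self.dram.popitem(last=False)
            self.ssd[cold_addr] = cold_val    # demote LRU entry to SSD

    def read(self, addr):
        if addr in self.dram:
            self.dram.move_to_end(addr)       # refresh recency
            return self.dram[addr]
        value = self.ssd.pop(addr)            # promote on access
        self.write(addr, value)
        return value

mem = TieredMemory(dram_capacity=2)
mem.write(0, "a"); mem.write(1, "b"); mem.write(2, "c")
print(0 in mem.ssd)   # True: addr 0 was LRU, demoted to the SSD tier
print(mem.read(0))    # "a": promoted back into DRAM on access
```

In a real device this movement happens in hardware at memory speed and is invisible to software, which simply sees one large, CXL-attached address space.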

XCENA has outlined a clear product rollout strategy, with the MX1P model scheduled for release later this year. Working samples are expected to reach select partners by October. A more advanced variant, the MX1S, is planned for 2026 and will feature dual PCIe Gen6 x8 interfaces along with expanded capabilities. Both versions leverage the improved bandwidth and functional flexibility of CXL 3.2, ensuring compatibility with next-generation infrastructure.

Industry recognition came swiftly for the MX1, which received the “Most Innovative Memory Technology” award at FMS 2025. This marks the second consecutive year XCENA has been honored at the event, following its “Most Innovative Startup” win in 2024. According to Jay Kramer, Chair of the Awards Program, computational memory like the MX1 represents a pivotal architectural shift, accelerating performance and efficiency for data-centric applications by drastically reducing unnecessary data transit.

To encourage adoption and experimentation, XCENA is providing a comprehensive software development kit complete with drivers, runtime libraries, and diagnostic tools. This SDK is designed for seamless integration into standard development environments, allowing engineers to test and deploy the MX1 across a variety of use cases, from AI inference and machine learning to high-speed in-memory analytics.

The introduction of the MX1 signals a meaningful step forward in memory technology, combining high core density, intelligent tiering, and emerging interconnect standards to meet the growing demands of modern computational workloads.

(Source: techradar)
