Phison’s aiDAPTIV+ Boosts PC AI Speed 10X, Expands Model Size 3X

▼ Summary
– At CES 2026, Phison demonstrated its aiDAPTIV+ technology, which accelerates AI inference on consumer PCs by up to ten times by using NAND flash as a managed memory tier alongside DRAM.
– The technology solves a memory bottleneck by writing overflow tokens from a GPU’s cache to flash for reuse, reducing recomputation and improving efficiency, especially for long-context AI tasks.
– It allows PCs with entry-level or integrated GPUs to run much larger AI models than their DRAM would normally permit, such as a 120-billion-parameter model on a laptop with only 32GB of memory.
– Phison’s internal testing shows that aiDAPTIV+ accelerates inference and reduces power consumption, and that it is particularly beneficial for large Mixture of Experts models and agentic AI workloads.
– The stack pairs an AI-aware SSD built on Phison’s controller with dedicated firmware and software, giving PC makers and partners a straightforward implementation path and potentially increasing Phison’s controller usage and revenue.

Phison’s aiDAPTIV+ technology is poised to dramatically reshape the AI capabilities of consumer PCs, bringing high-performance inference and training to systems with limited memory. Originally unveiled as an enterprise-focused proof-of-concept in 2024, this hardware and software suite has evolved into a key enabler for client-side artificial intelligence. By CES 2026, the platform demonstrated it could accelerate AI inference speeds by up to ten times while allowing models three times larger to run on standard hardware, a breakthrough for developers and businesses without access to expensive, specialized infrastructure.
The core innovation addresses a fundamental bottleneck in AI processing. When a graphics processor runs an AI model, it uses a key-value cache in its memory to store temporary data, or “tokens,” needed for generating responses. If a conversation or task is long and complex, this cache can fill up, forcing the system to discard older tokens. When those tokens are needed again, the GPU must waste time and power recalculating them from scratch, slowing everything down. Phison’s solution elegantly sidesteps this problem. The aiDAPTIV+ stack uses the system’s high-speed NAND flash storage, managed by a specialized Phison SSD controller and firmware, as an intelligent extension of the GPU’s memory. Tokens evicted from the fast cache are written to the SSD and can be retrieved almost instantly when required, eliminating redundant computations.
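The spillover idea described above can be sketched in a few lines of Python. This is an illustrative two-tier cache, not Phison's actual firmware: the small "fast" tier stands in for GPU memory and the "slow" dictionary stands in for the flash tier, so evicted tokens are spilled rather than discarded and never need recomputing. The `TieredKVCache` class and its capacity parameter are hypothetical names for illustration only.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier KV cache: entries evicted from the small fast
    tier (standing in for GPU memory) spill to a slow tier (standing in for
    an aiDAPTIV+-style flash extension) instead of being discarded."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # LRU-ordered "GPU cache"
        self.slow = {}              # larger, slower "SSD" tier
        self.recomputations = 0     # tokens rebuilt from scratch

    def put(self, token_id, kv):
        self.fast[token_id] = kv
        self.fast.move_to_end(token_id)
        while len(self.fast) > self.fast_capacity:
            evicted_id, evicted_kv = self.fast.popitem(last=False)
            self.slow[evicted_id] = evicted_kv   # spill, don't drop

    def get(self, token_id, recompute):
        if token_id in self.fast:                # hot hit
            self.fast.move_to_end(token_id)
            return self.fast[token_id]
        if token_id in self.slow:                # fetched back from "flash"
            kv = self.slow.pop(token_id)
        else:                                    # without spillover: redo work
            self.recomputations += 1
            kv = recompute(token_id)
        self.put(token_id, kv)
        return kv
```

With spillover in place, re-requesting a token that fell out of the fast tier is a retrieval rather than a recomputation, which is exactly the redundant work the article says aiDAPTIV+ eliminates.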
This approach fundamentally changes what is possible on a typical desktop or laptop. Systems with entry-level or integrated graphics and modest DRAM can now handle vastly larger AI models. For instance, Phison showcased an Acer laptop with only 32GB of system memory successfully running a massive 120-billion-parameter model. Using conventional methods, that same model would demand approximately 96GB of DRAM, putting it out of reach for most consumer machines. The efficiency gains are most pronounced with complex workloads like Mixture of Experts models and agentic AI, where context length and model size are critical.
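The gap between 32GB of installed memory and the ~96GB a 120-billion-parameter model would conventionally demand can be made concrete with back-of-the-envelope arithmetic: weight footprint is roughly parameters × bits-per-parameter ÷ 8, plus runtime overhead for the KV cache and activations. The 1.2× overhead factor below is an assumption for illustration; real figures depend heavily on the quantization scheme and runtime.

```python
def model_memory_gb(params_billion, bits_per_param, overhead=1.2):
    """Crude estimate of model memory footprint in GB.

    overhead is an assumed multiplier for KV cache, activations,
    and runtime buffers on top of the raw weights."""
    weights_gb = params_billion * bits_per_param / 8  # billions of params -> GB
    return weights_gb * overhead

# A 120B-parameter model at a few common weight precisions (illustrative):
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{model_memory_gb(120, bits):.0f} GB")
```

Even aggressive 4-bit quantization leaves the model far larger than a typical consumer DRAM budget, which is why tiering overflow to flash, rather than simply adding DRAM, is the lever aiDAPTIV+ pulls.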
The practical benefits extend beyond raw capability. By streamlining how data moves between the processor and storage, aiDAPTIV+ also significantly improves the Time to First Token, the delay before an AI begins its response, and reduces overall power consumption. This is particularly valuable for battery-powered notebooks, making sustained, powerful AI assistance a realistic feature for mobile professionals.
Adoption is being driven by a straightforward implementation path for manufacturers. The technology centers on an AI-aware SSD powered by Phison’s controller, coupled with dedicated firmware and software. This has attracted a growing ecosystem of partners, including major brands like Acer, Asus, Corsair, MSI, and even Nvidia, who featured systems with aiDAPTIV+ at CES. For these companies, integrating the technology into premium models aimed at developers, creators, and power users represents a compelling differentiator. For Phison, it secures the use of their controllers and creates a new software licensing revenue stream.
Ultimately, Phison’s aiDAPTIV+ is democratizing access to advanced AI. It transforms standard PCs into capable platforms for local model inference and light training, unlocking new applications for small businesses and individual innovators who previously found the cost of entry prohibitive.
(Source: Tom’s Hardware)