
New Camera Captures Everything in Perfect Focus

Originally published on: December 29, 2025
Summary

– Researchers at Carnegie Mellon University have developed a breakthrough lens technology that can bring an entire scene into sharp focus at once, unlike traditional lenses.
– This new “spatially-varying autofocus” system uses a computational lens to give each pixel its own adjustable lens, allowing different depths to be focused simultaneously.
– The system combines a Lohmann lens with a phase-only spatial light modulator and uses two autofocus methods (CDAF and PDAF) to achieve this effect.
– While not yet available commercially, the technology could fundamentally change how cameras see the world.
– Potential future applications extend beyond photography to include microscopes, VR headsets, and autonomous vehicles for improved clarity.

A revolutionary new lens system developed by researchers at Carnegie Mellon University promises to shatter a fundamental limitation of photography: the inability to keep an entire scene in perfect focus at once. This breakthrough technology, known as spatially-varying autofocus, could fundamentally alter how cameras capture images, moving beyond the single focal plane constraint that has defined optics for centuries. By granting each pixel its own adjustable focus, the system captures finer details across the entire image, regardless of an object’s distance from the camera.

Historically, camera lenses, much like the human eye, can focus sharply on only one plane at a time. Everything in front of or behind that plane appears blurred, an effect photographers often exploit to create artistic depth. To render an entire scene clearly, they typically must combine multiple images focused at different distances. This new computational approach eliminates that need. As CMU associate professor Matthew O’Toole explains, the system mixes technologies that “let the camera decide which parts of the image should be sharp, essentially giving each pixel its own tiny, adjustable lens.”
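To make the conventional workaround concrete, here is a minimal focus-stacking sketch in Python with NumPy (this illustrates the classic multi-shot technique, not the CMU system; all function names are illustrative): given several frames focused at different distances, it keeps, for each pixel, the frame whose neighbourhood is locally sharpest.

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian; large magnitude where the image has sharp detail."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def box_blur(img, r=2):
    """Simple (2r+1) x (2r+1) box filter built from wrapped shifts."""
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / (2 * r + 1) ** 2

def focus_stack(frames):
    """All-in-focus composite: for each pixel, copy the value from the
    frame whose neighbourhood has the highest Laplacian energy."""
    stack = np.stack(frames)                                  # (n, h, w)
    sharpness = np.stack([box_blur(laplacian(f) ** 2) for f in frames])
    best = sharpness.argmax(axis=0)                           # per-pixel winner
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

A fused result like this needs several exposures and a static scene; the point of the CMU system is to achieve the same all-in-focus effect in a single capture.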

The core of the innovation is a “computational lens” that merges a specialized Lohmann lens with a phase-only spatial light modulator. The Lohmann lens consists of two curved, cubic lenses that slide against each other to tune focus, while the modulator precisely controls how light bends at each individual pixel. This combination allows the system to focus at multiple depths simultaneously. It also employs two sophisticated autofocus methods working in concert: Contrast-Detection Autofocus (CDAF), which divides the image into regions to independently maximize sharpness, and Phase-Detection Autofocus (PDAF), which detects focus status and determines the direction for adjustment.
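The contrast-detection half of that loop is straightforward to illustrate. The sketch below is an illustration of the general CDAF idea, not the CMU implementation: assuming a hypothetical `render(setting)` function that returns the image produced at a given focus setting, it divides the frame into regions and, per region, keeps whichever setting maximizes local contrast.

```python
import numpy as np

def contrast(region):
    """Contrast metric: variance of intensities (high when in focus)."""
    return float(region.var())

def per_region_cdaf(render, settings, shape, tile=8):
    """Divide the frame into tiles; for each tile, sweep every candidate
    focus setting and keep the one that maximizes contrast. Returns the
    composited image and the per-tile focus map."""
    imgs = {s: render(s) for s in settings}        # one frame per setting
    h, w = shape
    out = np.zeros(shape)
    focus_map = np.zeros((h // tile, w // tile))
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            best = max(settings,
                       key=lambda s: contrast(imgs[s][i:i + tile, j:j + tile]))
            focus_map[i // tile, j // tile] = best
            out[i:i + tile, j:j + tile] = imgs[best][i:i + tile, j:j + tile]
    return out, focus_map
```

PDAF works differently: rather than sweeping settings and scoring contrast, it compares two partial views of the incoming light to determine whether focus is ahead of or behind the subject, and in which direction to adjust.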

Left: a conventional photo taken with a regular lens, where only objects at a single focal plane appear sharp. Right: an all-in-focus photo captured through spatially-varying autofocus.

The potential implications are vast. CMU professor Aswin Sankaranarayanan notes the system “could fundamentally change how cameras see the world.” While not yet available in any commercial camera and likely years from consumer markets, the technology’s applications extend far beyond traditional photography. Researchers suggest it could dramatically improve the efficiency of microscopes, create more lifelike depth perception in virtual reality headsets, and give autonomous vehicles an unprecedentedly clear view of their surroundings. This represents a significant leap from capturing a moment to documenting an entire visual field in sharp detail.

(Source: The Verge)
