Pretty soon, the current iteration of digital cameras and phones will join film cameras in the junk shop of history. That’s because pixels are dead, or soon will be: A rapidly advancing technology called the single-photon avalanche diode (SPAD) sensor is poised to replace current CMOS sensors over the next decade and revolutionize imaging one light particle at a time.
Currently, these SPAD sensors are almost too powerful: They collect a massive amount of visual data—so much that it is difficult for the processors in our phones or cars to render them into practical images or videos. That’s where Ubicept, a startup company founded by researchers from the University of Wisconsin-Madison and MIT, comes in.
Ubicept is a computational imaging company developing algorithms that can quickly process SPAD data to make the sensors useful in real time; it also uses the extra information from photons to aid in 3D imaging and other sensing applications.
Andreas Velten
The company, founded in 2021, is based on the work of Andreas Velten, an associate professor of electrical and computer engineering and of biostatistics and medical informatics at UW-Madison. Sebastian Bauer, a former postdoctoral scholar in Velten’s lab, is co-founder and CEO; Tristan Swedish, an MIT PhD, is co-founder and CTO. Velten, along with Mohit Gupta, associate professor of computer sciences at UW-Madison, and Ramesh Raskar of the MIT Media Lab, are founding advisors.
Ubicept currently employs about 10 people in the Madison area, including many UW-Madison graduates, and also has an office in Boston. It has received significant investment, including a $1 million prize for winning the TitletownTech StartUp draft competition in spring 2025, backed by the Green Bay Packers and Microsoft.
We asked Velten about the future of digital imaging, and how Ubicept will make it a reality.
Why are SPADs better than the current generation of cameras?
“The way pixel-based cameras work is that they collect the light from many photons, or light particles, and then average it together. The information from the individual photons is then lost. But SPADs collect information from individual photons, and that gives them a lot of properties that are very desirable for a camera. They don’t oversaturate. They work in very low light and essentially have infinite dynamic range. They also have very high time resolution and don’t have a problem with motion blur. They have access to much more of the information encoded in the light than a regular camera does. You can just build a better camera as a result.
“I’m convinced, and we’re betting on this, that these SPAD sensors have the potential to replace most cameras we use today. They can also be made in the same facilities as the CMOS sensors used in regular iPhone cameras. In the next five years, you could just switch all the cameras in the world to SPADs by retooling the machines, and they wouldn’t be that much more expensive.”
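The saturation advantage Velten describes can be sketched in a few lines of code. This is a toy model, not an actual sensor interface: the full-well limit of 255 and the assumption of zero dead time are illustrative simplifications.

```python
FULL_WELL = 255  # illustrative full-well capacity of a conventional pixel

def conventional_pixel(photons):
    # A conventional CMOS pixel accumulates charge over the exposure and
    # clips at its full-well capacity, losing all detail above that point.
    return min(photons, FULL_WELL)

def spad_pixel(photons):
    # A SPAD registers each photon as a discrete detection event, so
    # (ignoring dead time) the count never clips: bright scenes stay
    # distinguishable, which is why dynamic range is effectively unbounded.
    return photons

bright_scene = 10_000  # photons arriving at one pixel during the exposure
print(conventional_pixel(bright_scene))  # 255: saturated, detail lost
print(spad_pixel(bright_scene))          # 10000: full signal preserved
```

In this simplified picture, two scenes of 10,000 and 50,000 photons look identical to the conventional pixel (both read 255) but remain distinct to the SPAD.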
How does your tech overcome the problem with SPADs?
“This work is about the combination of SPAD hardware design and machine vision processing. These two areas are usually kept separate. Research and training, but also structures within corporations, are set up in a way that make it difficult to find people with the knowledge and the resources to really combine them. That is why we are in a situation now where we have companies that make very advanced SPADs, and others that develop very sophisticated machine vision, but they are combined in very inefficient ways that eliminate much of the advantage SPADs could bring. Ubicept is a computational imaging lab that is designing machine vision tools to take full advantage of SPADs.
“A SPAD can collect massive amounts of information, producing 100 gigabytes of data per second or more. We work on strategies to make the data processing energy-efficient and lightweight. We have developed a system called FLARE, or the Flexible Light Acquisition and Representation Engine, that manages the massive data streams from SPADs using encoding schemes to reduce the data load while preserving enough information for image reconstruction.
“So, if you have a stationary system that does license plate reading, for example, it’s probably fine to use a fairly decent, energy-hungry embedded graphics processing unit to process the data. But once you think about doing that on a cell phone, it gets increasingly more challenging.”
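One way to picture the data-reduction problem Velten describes: a SPAD array emits what amounts to a binary detection frame at very high rates, and summing those 1-bit frames into coarser time bins preserves intensity while slashing the data volume. FLARE’s actual encoding schemes are not described here; the function below is only a hypothetical sketch of the simplest such strategy, temporal binning.

```python
def bin_photon_frames(binary_frames, bin_size):
    """Sum consecutive 1-bit SPAD frames into multi-bit count frames.

    Each input frame is a list of 0/1 detections, one entry per pixel.
    Binning by `bin_size` cuts the frame rate (and data load) by that
    factor while keeping per-pixel photon counts for reconstruction.
    """
    binned = []
    for start in range(0, len(binary_frames), bin_size):
        chunk = binary_frames[start:start + bin_size]
        # Element-wise sum across the chunk: per-pixel photon counts.
        binned.append([sum(pixel) for pixel in zip(*chunk)])
    return binned

# Eight binary frames from a 4-pixel sensor, binned by 4 into 2 frames.
frames = [[1, 0, 0, 1], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 0, 1],
          [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1], [0, 1, 1, 0]]
print(bin_photon_frames(frames, 4))
# -> [[2, 2, 0, 3], [2, 2, 4, 2]]
```

Real systems face the harder trade-off the interview points to: the more aggressively you bin or encode, the less of the SPADs’ timing information survives for tasks like deblurring or 3D sensing.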
Is Ubicept moving the technology forward?
“I think we’re making excellent progress and I’m really proud of our former students who work there. When we started the company, we thought the algorithm we had would be nice for initial applications and demos, but that we were going to need new innovations down the road. And we are now at the point where we have new methods that look like they are going to do the trick, and that is due to some really brilliant work by UW-Madison graduates.
“I think we are part of an avalanche or wave where the world’s 40 billion cameras are going to transition from standard, analog schemas to SPADs. I think we are coming along at the right moment, and I think we’re actually in really good shape.”
What part does UW-Madison play in this story?
“This is a very good example where you can say that clearly the company wouldn’t be around without the people that were trained here and the research done here. You would not have found this concentration of specific talent for SPAD processing in other places.”
Caption for top image: Ubicept’s solution can capture much more detail in an image taken at night. On the left is an image taken on the Las Vegas strip with a high-end automotive camera, while the image on the right leverages Ubicept’s hardware and software to reduce blur and noise and deliver a clear image. Image courtesy of Ubicept.