Those crazy kids at Bell Labs have built a camera that uses no lens and a single-pixel sensor.
The design uses a grid of small apertures, each of which passes light from a different part of the scene to the sensor and can be opened and closed independently. This is similar to spectroscopy, where a variable slit lets the light from a distant object be split into its component spectrum for analysis.
Using a technique called ‘compressive sensing’, the sensor makes a series of measurements with different combinations of open apertures and uses that data to reconstruct the scene in front of the camera. Because there’s no lens doing any focusing, the resulting image has infinite depth of field, rather like a pinhole camera.
Each aperture in the LCD array is individually addressable, so it can be opened to let light pass through or closed to block it. An important aspect of this kind of imaging is that the pattern of open and closed apertures must be random.
The process of creating an image is straightforward. It begins with the sensor recording the light from the scene that has passed through a random pattern of open apertures in the LCD panel. It then records the light through a different random pattern, then another, and so on.
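If you like to tinker, here’s a minimal sketch of that measurement step in Python, assuming a toy scene flattened to one dimension and random on/off masks standing in for the LCD array. Every variable name here is mine, not Bell Labs’:

import numpy as np

rng = np.random.default_rng(0)

N = 32 * 32            # number of LCD apertures (a 32x32 grid, flattened to 1-D)
M = N // 4             # take only a quarter as many snapshots as apertures

# Toy scene: mostly dark sky with a dozen bright points (stars).
scene = np.zeros(N)
scene[rng.choice(N, size=12, replace=False)] = rng.uniform(0.5, 1.0, size=12)

# Each snapshot opens a random pattern of apertures; the single-pixel
# sensor records the total light that gets through (a dot product).
masks = rng.integers(0, 2, size=(M, N)).astype(float)
readings = masks @ scene          # M sensor readings, one per mask pattern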
Although each pattern is random, the snapshots are correlated because they all record the same scene, just in different ways. The more snapshots taken, the better the image. But it is possible to create a pretty good image using just a tiny fraction of the data a conventional image would require. The images above were made using only a quarter of the data a normal camera would capture.
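To make the reconstruction concrete, here’s one textbook way to recover the scene from the readings above: iterative soft-thresholding (ISTA), a standard compressive-sensing solver. This is my own illustration, assuming a sparse scene (a few bright stars on dark sky); the actual algorithm in the paper may well differ. It continues from the measurement sketch:

scale = np.linalg.norm(masks, 2)          # largest singular value of the masks
A, y = masks / scale, readings / scale    # normalize so a unit step size is safe

lam = 0.1 * np.max(np.abs(A.T @ y))       # a tenth of the value that zeroes everything
x = np.zeros(N)
for _ in range(500):
    x = x - A.T @ (A @ x - y)                          # gradient step on 0.5*||Ax - y||^2
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # soft-threshold (the L1 prox)

print("bright points recovered:", np.count_nonzero(x > 0.1 * x.max()))

The soft-threshold is what exploits sparsity: it pushes most of the reconstruction to exactly zero, which is why far fewer measurements than pixels can still give a usable image.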
The lensless camera requires only a small amount of data to create an image. And because there isn’t a lens, the image doesn’t suffer from chromatic aberration, vignetting, shifting, focus errors, or the other mechanical and optical flaws associated with lenses. The image is always in focus, and the resolution can be adjusted by changing the size and number of apertures in front of the sensor, or sensors. Using two or more sensors behind the same aperture array makes it possible to create two different images at the same time.
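The multi-sensor trick falls out of the same math: each sensor sees the scene from a slightly different vantage point, but they all share the one sequence of mask patterns, so each sensor’s readings can be reconstructed independently. Continuing the toy sketch (again, my illustration, not the paper’s setup):

# A second sensor behind the same LCD sees the scene from a slightly
# different angle; np.roll is a crude stand-in for that shifted view.
scene_b = np.roll(scene, 3)
readings_b = masks @ scene_b      # same mask sequence, second sensor's readings
# Feeding readings_b through the ISTA loop above yields the second image.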
You can even detect and image other wavelengths of the electromagnetic spectrum, including infrared and ultraviolet.
So you might be wondering why I am talking about this in an astronomy post. Well, I take, on average, between 10 and 20 images of each deep-sky object during a session. Wouldn’t it be great to take all of them at once? Not only that, but I could increase the number of exposures and process all of the images together! Using just a simple 640×480 VGA image sensor, you could take 307,200 images at once (640 × 480 = 307,200 pixels, each acting as its own single-pixel camera)! The applications for NASA, the ESA and other science organizations are unlimited. You could easily use this technology to search for exoplanets, image asteroids, comets, planets…whatever.
But enough about them. As for me, I could finally take a really good image of Jupiter or Saturn! Oh, the possibilities. Hopefully, someone will actually develop this tech into a camera that I can slap on the end of my telescope.
If you want to read all the gory details, you can download the research paper here.
– Ex astris, scientia –
I am an avid amateur astronomer and intellectual property attorney in Pasadena, California. As a former Chief Petty Officer in the U.S. Navy, I am a proud member of the Armed Service Committee of the Los Angeles County Bar Association, working to aid all active-duty service members and veterans in our communities. Connect with me on Google+
Norman