HOW can image sensors - the most complicated and expensive part of a digital camera - be made cheaper and less complex? Easy: take the lid off a memory chip and use that instead.
As simple as it sounds, that pretty much sums up a device being developed by a team led by Edoardo Charbon, an engineer at the Swiss Federal Institute of Technology in Lausanne (EPFL).
Gigavision
In a paper presented at an imaging conference in Kyoto, Japan, this week, the team say that their so-called "gigavision" sensor will pave the way for cellphones and other inexpensive gadgets that take richer, more pleasing pictures than today's devices. Crucially, Charbon says the device performs better in both very bright light and dim light - conditions which regular digital cameras struggle to cope with.
An Established Principle
While Charbon's idea is new and has a patent pending, the principle behind it is not. It has long been known that memory chips are extremely sensitive to light: remove their black plastic packages to let in light, and the onrush of photons energises electrons, creating a current in each memory cell that overwhelms the tiny stored charge that might have represented digital information. "Light simply destroys the information," says Martin Vetterli, a member of the EPFL team.
A similar effect occurs aboard spacecraft: when energetic cosmic rays hit a cell in an unprotected memory chip they can "flip" the state of the cell, corrupting the data stored in the chip.
What Charbon and his team have found is that when they carefully focus light arriving on an exposed memory chip, the charge stored in every cell corresponds to whether that cell is in a light or dark area. The chip is in effect storing a digital image.
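In effect, each exposed memory cell acts as a one-bit light sensor: enough light flips it, too little leaves it alone. A minimal sketch of that idea, with made-up intensity values and an assumed flipping threshold (nothing here comes from the team's actual chip):

```python
# Hypothetical sketch: each memory cell stores a single bit, set when
# enough light falls on it to overwhelm the stored charge. A focused
# scene therefore becomes a thresholded, binary image.

light = [
    [0.9, 0.8, 0.1],
    [0.7, 0.2, 0.0],
    [0.3, 0.1, 0.0],
]  # illustrative relative light intensity per cell

THRESHOLD = 0.5  # assumed: the light level needed to flip a cell

binary_image = [[1 if x > THRESHOLD else 0 for x in row] for row in light]
```

The bright top-left corner of the scene comes out as set cells, the dark bottom-right as unset ones.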
To what effect?
All very clever, you might say, but why would anyone want to do that? The answer is that the two types of sensor chips used in today's digital cameras store the brightness of each pixel as an analogue signal. To translate this into a form that can be stored digitally, they need complex, bulky, noise-inducing circuitry.
CCD Sensors
The charge-coupled device (CCD) sensors used in early cameras and camcorders, and the cheaper, more modern complementary metal oxide semiconductor (CMOS) type, both operate on a similar principle. On each, the area that forms an individual pixel can be thought of as a small charge-containing "bucket". The size of the charge contained in one of these buckets depends on the amount of light falling on it.
Analogue to Digital
In a CCD, the contents of each bucket of charge are "poured" into the bucket next door, and then the next until the signal reaches the edge of the chip. There, an analogue-to-digital converter (ADC) typically assigns it an 8-bit greyscale value, ranging from 0 to 255. In a CMOS sensor, the charge is converted to a voltage local to each pixel before being shunted off to an ADC at the edge of the chip - where it too is assigned a greyscale value between 0 and 255 (see diagram).
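The quantisation step at the edge of the chip can be sketched as follows, assuming a charge level normalised to the range 0.0 to 1.0. This is an illustration of 8-bit conversion in general, not the circuitry of any particular camera:

```python
def adc_8bit(charge: float) -> int:
    """Map an analogue charge level (0.0-1.0) to an 8-bit greyscale value."""
    charge = min(max(charge, 0.0), 1.0)  # clamp to the valid range
    return round(charge * 255)
```

An empty bucket maps to 0, a full one to 255, and everything in between to one of the 254 intermediate grey levels.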
Digital from the Start
A memory chip needs none of this conversion circuitry, as it creates digital data directly. As a result, says Vetterli, memory cells will always be 100 times smaller than CMOS sensor cells; it is bound to be that way because of the sheer number of signal-conditioning transistors the CMOS sensor needs around each pixel. "Our technology will always be two orders of magnitude smaller," he says.
So for every pixel on one of today's sensors, the memory-based sensor could have 100 pixels. A chip the size of a 10-megapixel camera sensor would therefore have a billion sensing cells if implemented in memory technology - hence the name gigavision.
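One way such a chip could recover ordinary grey levels is by counting set bits: if each conventional pixel is replaced by, say, a 10-by-10 patch of one-bit cells, the fraction of flipped cells in a patch approximates the brightness there. This is a hedged sketch of that counting idea, not the EPFL team's published reconstruction method:

```python
def patch_to_grey(patch):
    """Count the lit one-bit cells in a patch and scale to a 0-255 grey level."""
    lit = sum(sum(row) for row in patch)      # how many cells flipped
    total = len(patch) * len(patch[0])        # cells in the patch
    return round(255 * lit / total)
```

A fully lit 10-by-10 patch maps to 255, a dark one to 0, and a half-lit one to a mid grey, so the binary chip behaves like a coarser greyscale sensor.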