digital camera sensors
digital sensors
digital sensors consist of:
a sensor chip containing light-sensitive cells called photosites, each of which is covered by a microlens that focuses the light into the photosite cell
in general, the larger the photosite, the more efficient it is and thus the greater the dynamic range it can capture & the less noise it generates at higher ISO values (see below).
the size of the photosite also determines the circle of confusion, which in turn is a component of the resulting depth of field (DOF) - see the sketch below.
in general, with most manufacturers targeting a resolution of about 10mp irrespective of sensor size, the smaller the sensor, the smaller the photosite.
cameras can reduce noise at high ISO but usually at the expense of sharpness.
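as a rough illustration of how the circle of confusion feeds into depth of field, here is a minimal Python sketch using the standard thin-lens approximations; the CoC values (0.030mm for full frame, half that for a 2x crop) and the same-framing, same-f-number comparison are illustrative assumptions rather than figures from this page:
<code python>
import math

def depth_of_field_mm(focal_mm, f_number, subject_dist_mm, coc_mm):
    # standard thin-lens approximations: hyperfocal distance, then near/far limits
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_dist_mm / (hyperfocal + (subject_dist_mm - focal_mm))
    if subject_dist_mm >= hyperfocal:
        return near, math.inf
    far = hyperfocal * subject_dist_mm / (hyperfocal - (subject_dist_mm - focal_mm))
    return near, far

# same framing of a subject 3m away: 50mm f/2.8 on full frame vs 25mm f/2.8 on a 2x crop;
# the CoC is scaled with the crop factor (assumed values, for illustration only)
for label, focal, coc in [("full frame, 50mm f/2.8", 50, 0.030),
                          ("2x crop,   25mm f/2.8", 25, 0.015)]:
    near, far = depth_of_field_mm(focal, 2.8, 3000, coc)
    print(f"{label}: sharp from {near:.0f}mm to {far:.0f}mm (~{far - near:.0f}mm DOF)")
</code>
with these assumptions the 2x crop format ends up with roughly twice the depth of field for the same framing - the effect described for smaller sensors below.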
there is no perfect sensor size, all sensor sizes are a photographic compromise:
larger sensors:
allow more pixels and thus potential resolution for the same size photosites
allow larger photosites and thus more dynamic range, less noise at high ISO, and a larger circle of confusion & thus less depth of field, and less limitation of resolution due to diffraction at smaller f stops (see the diffraction sketch after this list).
if photosites are above 8 micron then high quality legacy 35mm lenses are likely to be adequately matched in resolution terms.
BUT if photosites are less than 7-8 micron, then the lenses may be the limiting factor in resolution (especially zoom lenses), and dedicated lenses designed for digital may be required to get the most out of the sensor (hence Olympus ZD lenses).
smaller sensors:
tend to have more depth of field (ie. more objects appear sharp)
very handy in macrophotography and self-portraits (hence point and shoot cameras are often used this way)
useful in telephotos with wide apertures to achieve both adequate shutter speed and depth of field
better for beginners or the casual photographer who is not aware of the need to select the best subject to accurately focus on, and is happy for the camera to do their thinking.
allow smaller lenses for a given telephoto reach
allows smaller & lighter cameras
if using smaller photosites, allows use of weaker anti-aliasing filters as sensor resolution approaches optical resolution.
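to put the diffraction point above in perspective, here is a small Python sketch comparing the Airy disk diameter (~ 2.44 x wavelength x f-number) with the photosite pitch; the 2-pixel threshold and the example pitches are rules of thumb and assumptions, not figures from this page:
<code python>
# rough sketch: once the Airy disk is noticeably larger than the photosite
# pitch, stopping down further costs resolution to diffraction
WAVELENGTH_UM = 0.55  # green light, in microns

def airy_disk_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

pitches_um = {"2.0um compact": 2.0, "6.4um APS-C": 6.4, "8.2um full frame": 8.2}
for label, pitch in pitches_um.items():
    for n in (2.8, 4, 5.6, 8, 11, 16, 22):
        if airy_disk_diameter_um(n) > 2 * pitch:  # assumed 2-pixel rule of thumb
            print(f"{label}: diffraction starts to dominate by about f/{n}")
            break
</code>
on these assumptions a ~2 micron compact runs into diffraction by about f/4, whereas an 8.2 micron full frame sensor is comfortable until around f/16.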
personally, I think there are good reasons to have different cameras with different sensor sizes that complement each other:
small sensor point and shoot carry anywhere camera
5.5x area point and shoots and Canon G9
the 2x crop provides a bigger differential in its advantages from a 1.3x or full frame camera than a 1.6x crop camera would (if you are thinking of a 1.6x crop, why not get a 1.3x crop or full frame camera)
full frame sensor (ie. 35mm size sensor or perhaps a 1.3x crop sensor)
medium format sensor for the studio professionals
best image quality but big, heavy, only 1fps, very large files so often need to be tethered to a computer
before light hits the digital sensor it first passes through various layers - typically an infrared-blocking filter, an anti-aliasing (low pass) filter, the colour filter array (eg. Bayer) and the microlenses.
some sensors are mounted in a frame that can be moved to counteract camera movement - sensor-based image stabilisation
some sensors can output a live image to the camera's LCD to give live preview
sensor dynamic range
this is an important specification of a sensor as it determines its ability to capture detail in dark as well as light areas and, in addition, the degree of noise produced.
in general, the larger the size of the photosites on the sensor, the greater their dynamic range, hence for a given megapixel count, the larger the sensor, the greater the dynamic range potentially available & the lower the noise at high ISO.
dynamic range is the log of (the largest value possible / the smallest value possible) and this value is multiplied by 20 to give a dynamic range in dB.
for film, dynamic range in dB = 20 x (maximum film density - minimum film density), since density is already a log value
the smallest possible value in digital systems is 1, while the maximum value is 2^(bit depth)
dynamic range of sensor in dB = 20 x log (maximum CCD well capacity / total sensor noise (rms))
the camera then uses an A/D converter of matching dynamic range to convert the analog signals into digital values; the dynamic range of an A/D converter in dB = 20 x log (2^(bit depth of converter)), thus an 8 bit A/D = 48dB, a 10 bit A/D = 60dB, a 12 bit A/D = 72dB, and a 14 bit A/D = 84dB (see the sketch at the end of this section).
a 10 bit A/D (eg. most non-SLR prosumer cameras in 2005) gives a photographic dynamic range similar to transparency film of about 5-6 stops and will currently display obvious noise when output is amplified to ISO values > 200.
digital images are most commonly stored as 8 bit per channel, but using camera RAW files one can save them in a 16 bit per channel format (eg. as TIFFs), although only the maximum A/D bit depth of the camera will actually be used.
film scanners:
to put the above in context, the higher the dynamic range of a film scanner CCD, the more information you can get from the darkest areas of the slide, so high dynamic range is a good and desirable property of a scanner. This dynamic range value is usually given as its Dmax (which needs to be multiplied by 20 to get a dB value).
a scanner with a Dmax of 2.0 (ie. 40dB) will be adequate for most negatives
slide film however may require a Dmax of 3.5 or for Velvia, even 4.0 to get the deepest blacks.
no matter what the A/D bit depth is in a scanner, the limiting factor will be the lower dynamic range of either the scanner sensor (usually 58-72dB currently) or the A/D converter
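the dB arithmetic above is easy to sanity-check; a minimal Python sketch (the well capacity and noise figures used are made-up illustrative values):
<code python>
import math

def adc_dynamic_range_db(bit_depth):
    # dB = 20 x log10(2^bit depth)
    return 20 * math.log10(2 ** bit_depth)

def sensor_dynamic_range_db(full_well_electrons, noise_electrons_rms):
    # dB = 20 x log10(maximum well capacity / total sensor noise (rms))
    return 20 * math.log10(full_well_electrons / noise_electrons_rms)

def scanner_dynamic_range_db(dmax):
    # Dmax is already a log10 density, so dB = 20 x Dmax
    return 20 * dmax

for bits in (8, 10, 12, 14):
    print(f"{bits} bit A/D ~ {adc_dynamic_range_db(bits):.0f} dB")
# hypothetical sensor: 40,000 electron well, 15 electron rms noise
print(f"example sensor ~ {sensor_dynamic_range_db(40000, 15):.0f} dB")
print(f"scanner with Dmax 3.5 ~ {scanner_dynamic_range_db(3.5):.0f} dB")
</code>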
sensor sizes
although for the same technology, the smaller the photosite pitch, the less the dynamic range, the lower the quantum efficiency and the more the noise, as technology improves smaller pitches can be created with better than expected noise, BUT there are laws of physics that limit this (eg. the wavelength of visible light is up to about 0.75 microns) and thus at current technology in 2006:
sensors with photosites down to about 6.8 microns produce the highest quality, with little if any image quality degradation over ones with larger photosites
sensors with pixels between about 5 microns and 6.8 microns are capable of excellent image quality, but are being pushed close to the limits of current technology when used at higher ISO settings, and suffer in this regard when compared with sensors with a larger pixel pitch.
physically larger sensors will always have advantages over smaller ones. This means that (other factors aside) image quality from medium format will be higher than full-frame 35mm, which will be better than APS size, which will have an edge over 4/3.
on the other hand small sensors have their own good points:
more depth of field - very useful in macrophotography, casual & travel photography, etc.
allows smaller cameras and lenses - more portable and less intrusive.
it is unlikely that you will be excited by the difference in output or resolution seen unless you are doubling the number of pixels on a sensor - just going from 8 mpixels to 11mpixels won't give you a substantial difference in print quality.
the smaller the sensor, the deeper the depth of field, which means easier point-and-shoot and macro-photography but may make it difficult to emphasise the subject by blurring foreground & background for subjects at a portrait distance.
the smaller the sensor, the higher the quality lens required to achieve a given resolution, thus the need for specially designed lenses for digital cameras
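one way to see why small photosites demand better lenses is to work out the sensor's theoretical Nyquist limit from its pixel pitch (roughly 1 / (2 x pitch) line pairs per mm); a short Python sketch, ignoring the Bayer array and anti-aliasing filter which lower the real-world figure:
<code python>
def nyquist_lp_per_mm(pixel_pitch_um):
    # 1 line pair needs at least 2 pixels, so limit ~ 1 / (2 x pitch in mm)
    return 1.0 / (2.0 * pixel_pitch_um / 1000.0)

for label, pitch in [("8.2um (eg. Canon 5D)", 8.2),
                     ("6.4um (eg. Canon 350D)", 6.4),
                     ("4.7um (eg. Olympus E-410)", 4.7),
                     ("2.0um compact", 2.0)]:
    print(f"{label}: sensor Nyquist limit ~ {nyquist_lp_per_mm(pitch):.0f} lp/mm")
</code>
many legacy 35mm lenses only resolve somewhere around 60-80 lp/mm in practice, so pitches much below 7-8 microns start to outresolve them, as noted above.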
consumer cameras:
1/3.6" style = 3x4mm
1/3.2" style = 3.4×4.5mm
1/3" style = 3.6×4.8mm
1/2.7" style = 4×5.4mm
1/2.5" style = 4.29×5.76mm
5 megapixel (2004 eg. Panasonic FZ20)
at 1.76 µm pixel pitch then 8 megapixel (2007 eg Canon S5 IS, Panasonic FZ18)
1/2.35" style
1/2.3" style = 4.7×6.3mm?
1/2" style = 4.8×6.4mm
1/1.8" style = 5.32 x 7.18mm
at 2.8 µm pixel pitch then 5.2 megapixel (Olympus C5050, Canon Powershot S500)
at 2.2 µm pixel pitch then 8.3 megapixel (2004, eg. Panasonic FZ30)
at 1.9 µm pixel pitch then 10 megapixel (2006: Canon G7 3648×2736)
1/1.75" style
1/1.7" style = 5.7×7.6mm
1/1.6" style =
2/3" style = 6.6 x 8.8mm
digital SLR cameras:
4/3 style = 13.5 x 18.0mm
specially designed for digital SLR cameras
⇒ smaller, lighter lenses for the same field of view compared with the larger digital SLR CMOS sensors below
35mm film lens crop value = 2.0
1.33 aspect ratio
at 6.8 µm pixel pitch then 5.6 megapixel (eg Olympus E-1)
at 5.7 µm pixel pitch then 7.5 megapixel (eg Olympus E-330 2006)
at 5.4 µm pixel pitch then 8 megapixel (eg Olympus E-300/500)
at 4.7 µm pixel pitch then 10 megapixel (eg Olympus E-410/510/E3 2007)
Foveon x3 CMOS style = 13.8 x 20.7mm
Nikon DX series = 15.6 x 23.7mm
35mm lenses used on this have a 1.5x effective focal length
1.52 aspect ratio
Nikon D1x has 5.93 x 11.8 µm pixel pitch with twice as many horizontal as vertical ⇒ 5.33 megapixels
Nikon D200 (2005) has 10.2 mpixels
Nikon D2x (2005) has 12.2 mpixels
at 5.5 µm pixel pitch then 12.3 megapixel (eg. Nikon D300 2007)?
Canon EOS APS-C style = 15.1 x 22.7mm
⇒ allows use of 35mm film lens with 1.6x field crop and 3:2 aspect as for 35mm film.
pixel density is 2.56x that of an equivalent megapixel full frame sensor, but as it uses only the central region of a 35mm film lens, the impact on telephoto lens resolution requirements is not as great as it first seems, since the poorer resolution at the edges is not used.
at 6.4 µm pixel pitch then 8 megapixel (eg. Canon 350D; sensor actually 22.2mm x 14.8mm)
at 5.7 µm pixel pitch then 10mp Canon 40D 2007 (3888×2592)
Fujifilm super CCD = 15.5 x 23.0mm
Canon 1D APS-H = 19.1 x 28.7mm
full frame CMOS = 24x36mm
⇒ same size as 35mm film & thus allows use of 35mm lens at same field of view
at 8.8 µm pixel pitch then 11.4 megapixel (eg. Canon EOS 1Ds)
at 8.5 µm pixel pitch then 12.1 megapixel (eg. Nikon D3 2007)
at 8.2 µm pixel pitch then 12.7 megapixel (eg. Canon EOS 5D)
at 7.2 µm pixel pitch then 16.7 megapixel (eg. Canon EOS 1Ds MII)
at 6.4 µm pixel pitch then 21.1 megapixel (eg. Canon EOS 1Ds MIII 2007)
at 5.9 µm pixel pitch then 24.8 megapixel (eg. Sony chip 2008, 6096×4056)
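the megapixel figures in the lists above follow directly from the sensor dimensions and the pixel pitch; a quick Python check (nominal dimensions, so small differences are just rounding of the pitch):
<code python>
def megapixels(width_mm, height_mm, pitch_um):
    # megapixels ~ (width / pitch) x (height / pitch)
    pitch_mm = pitch_um / 1000.0
    return (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6

print(f"36x24mm at 8.2um ~ {megapixels(36, 24, 8.2):.1f} mp")  # ~12.9, cf. the 12.7mp Canon 5D
print(f"36x24mm at 6.4um ~ {megapixels(36, 24, 6.4):.1f} mp")  # ~21.1, cf. the Canon 1Ds Mark III
</code>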
medium format sensors:
37x37mm sensor:
48x36mm sensor:
49x36mm sensor:
astronomy sensors:
sensor resolution
number of pixels
to see a significant improvement in prints due to the number of pixels alone, you need to at least double the number of pixels, and even then the lens must be capable of resolving this resolution, focus must be perfect, there must be no subject or camera movement, and dynamic range (ie. digital noise) must not be significantly worse.
in addition, subject matter can impact on how much one can enlarge an image - portraits tend to enlarge better than landscapes - hence part of the reason why many use panorama stitches for landscapes to give sufficient detail in enlargements as well as breadth of image.
so let's see the effect of going from a 10mp camera with well matched lenses to a 21mp camera with well matched lenses:
the Canon 1D Mark III at 10mp creates images 3888×2592 pixels and thus, without software interpolation, can produce native prints at 250dpi up to 15.6" x 10.4"
the Canon 1Ds Mark III at 21mp creates images 5616×3744 pixels and thus, without software interpolation, can produce native prints at 250dpi up to 22.5" x 15"
so the BEST the 1Ds Mark III can do under ideal conditions such as matched lenses, etc. is 44% bigger prints in linear dimensions (see the sketch below)
BUT in reality, few full format 35mm lenses are capable of providing sufficient resolution to realise this improvement wide open and thus you may just be getting larger files to handle and store with not much more actual detail extractable from them.
Furthermore, one can usually use software interpolation to take a 10mp image to a 20"x30" print or even up to a 30"x45" - most users will not need to produce a print bigger than this.
Lastly, increasing the number of pixels on a same size sensor means the photosites must become smaller, giving lower dynamic range and higher digital noise, so more pixels is not necessarily better!
it is said that current Canon L series lenses are only just adequate for the 16.7mp Canon 1Ds Mark II camera but better lenses will be needed to make the most of a 21mp sensor of the same size.
it would thus be reasonable to assume that, in resolution terms, lenses not specifically designed to match digital sensors are likely to be the resolution bottleneck if sensor resolution exceeds 135 pixels per mm.
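the native print sizes quoted above are just pixels divided by dpi; a small Python sketch of that arithmetic:
<code python>
def native_print_size_inches(px_wide, px_high, dpi=250):
    # no software interpolation: inches = pixels / dpi
    return px_wide / dpi, px_high / dpi

for label, w, h in [("Canon 1D Mark III (10mp)", 3888, 2592),
                    ("Canon 1Ds Mark III (21mp)", 5616, 3744)]:
    wi, hi = native_print_size_inches(w, h)
    print(f'{label}: {wi:.1f}" x {hi:.1f}" at 250dpi')

print(f"linear gain from 10mp to 21mp: {5616 / 3888:.2f}x (~44% bigger prints)")
</code>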
pixels per mm for various sensors
lower sensor resolutions which match legacy 35mm or medium format lenses nicely:
22 mp Hasselblad H3D-22 49×36.7mm = 111 pixels per mm
21.5mp Mamiya 645ZD 36mmx48mm = 111
12.7mp Canon 5D = 121
12.1mp Nikon D3 = 118
10mp Canon 1D Mark III = 135 - perhaps a perfect compromise when using 35mm full frame lenses although 1.3x crop is an issue for wide angle work.
high sensor resolution means lenses designed for 35mm cameras are unlikely to have sufficient resolution to match the sensor, especially zoom lenses or lenses used wide open; hence, unless specially designed high resolution digital lenses are used, the increase in pixels just creates larger image files with no further detail:
39mp Hasselblad H3D-39 49×36.7mm = 147 pixels per mm ⇒ hence the new range of digital lenses
21.1mp Canon 1Ds Mark III full frame = 156 ⇒ hence mark II versions of L lenses
16.7mp Canon 1Ds Mark II = 139
12.9mp Fuji S5 Pro = 189
12.3mp Nikon D300 = 182
12.2mp Nikon D2Xs = 181
10mp Canon APS-C = 175
10mp Nikon DX = 163
10mp Leica M8 = 146
10mp Olympus = 203 ⇒ thus specially designed high resolution lens to match this sensor.
7.4mp Olympus E330 = 174
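the pixels per mm figures above are simply the horizontal pixel count divided by the sensor width; for example (commonly quoted pixel counts and widths, so expect small rounding differences):
<code python>
def pixels_per_mm(px_wide, sensor_width_mm):
    return px_wide / sensor_width_mm

print(f"Canon 1Ds Mark III: {pixels_per_mm(5616, 36.0):.0f} px/mm")  # ~156
print(f"Canon 5D:           {pixels_per_mm(4368, 36.0):.0f} px/mm")  # ~121
print(f"Olympus 10mp 4/3:   {pixels_per_mm(3648, 18.0):.0f} px/mm")  # ~203
</code>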
NB. if the camera is not on a tripod with the mirror locked up, then camera shake is much, much more likely to be the bottleneck - this is why image stabilisers can be so important.
image aspect ratio
printing image aspect ratios
computer screen aspect ratios
film and digital sensor aspect ratios
1:1 aspect ratio = 1.0 (ie. square):
negates the need to rotate camera from landscape to portrait, which is a nuisance when on tripods
great for showing things but not as good as a panoramic rectangular ratio for telling stories
6×7 aspect ratio = 1.17:
4×5 and 8×10 aspect ratio = 1.25
4:3 aspect ratio = 1.33
pros:
great for displaying images on computers as most screen displays are 1.33 ratios
great for printing to 6"x8", 12"x16", 18"x24"
less cropping than the 35mm aspect ratio for printing to 8"x10" or 10"x12" (see the sketch at the end of this section)
more width for over-lapping when doing panorama stitches in portrait orientation (a preferred orientation for most who want high resolution panoramas)
3:2 aspect ratio = 1.5
16:9 aspect ratio = 1.78:
an often preferred ratio for telling stories as it is wider than the 3:2 which in turn is wider than 4:3
the current standard wide screen TV format
eg. 1280×720 movies
eg. Panasonic LX-2 and LX-3 compact digital cameras and the Panasonic GH series have native 16:9 option
BUT most computer monitors will display it in letterbox style and there are no standard print formats for this ratio
6×12 medium format film = 2.0:
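the cropping comparison mentioned in the 4:3 pros above is simple arithmetic - the fraction of the long side lost when printing to a squarer paper ratio is 1 - (paper ratio / image ratio); a quick Python sketch:
<code python>
def crop_loss(image_ratio, paper_ratio):
    # fraction of the image's long dimension that must be cropped away
    return 1.0 - paper_ratio / image_ratio

paper = 10 / 8  # 8"x10" print, ratio 1.25
for label, ratio in [("4:3", 4 / 3), ("3:2", 3 / 2), ("16:9", 16 / 9)]:
    print(f'{label} printed to 8"x10": ~{crop_loss(ratio, paper):.0%} of the width cropped')
</code>
so a 4:3 image loses only about 6% of its width going to 8"x10", while 3:2 loses about 17% and 16:9 about 30%.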
telecentricity
Electronic sensors have a thick cover of filters that deviate the incident light if it arrives at acute angles. Thus when the telecentricity coefficient (see below) is low, one can expect vignetting to increase unless special sensor designs are used to mitigate it (such as the angled microlenses on the new digital Leica M with its 1.33 crop, but this meant they had to do away with an IR filter).
For this reason Olympus and partners developed the Four Thirds mount to minimise this problem - the other advantage of this design is that the short lens to flange distance means almost any legacy lens from another manufacturer can be adapted and still focus at infinity.
But in reality, it seems that with standard sensor designs, a telecentricity coefficient of 1 is probably adequate as evidenced by the fact that the Canon APS-C is not significantly better than the Canon 35mm digital in terms of vignetting.
Mount and format | Lens to flange distance, A (mm) | Sensor diagonal, B (mm) | Telecentricity coefficient, C = A/B |
Nikon F FX | 46.50 | 43.3 | 1.07 |
Nikon F DX | 46.50 | 28.4 | 1.64 |
Canon EOS 35mm | 44.00 | 43.3 | 1.02 |
Canon APS-C | 44.00 | 27.0 | 1.63 |
Contax N/35mm | 48.00 | 43.3 | 1.11 |
Pentax K/35mm | 45.46 | 43.3 | 1.05 |
Pentax K/APS-C | 45.46 | 28.3 | 1.61 |
Minolta AF/35mm | 44.50 | 43.3 | 1.03 |
Minolta AF/APS | 44.50 | 28.4 | 1.57 |
Olympus OM/35mm | 46.00 | 43.3 | 1.06 |
Olympus Four Thirds dSLR system | 38.67 | 22.5 | 1.72 |
Leica R/35mm | 47.00 | 43.3 | 1.09 |
Leica R/1.37 crop | 47.00 | 31.7 | 1.48 |
Leica M/35mm | 27.95 | 43.3 | 0.65 |
Leica M/1.33 crop | 27.95 | 32.4 | 0.86 |
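the telecentricity coefficient in the table is simply the lens to flange distance divided by the sensor diagonal (C = A/B); a trivial Python check of a couple of rows:
<code python>
import math

def telecentricity(flange_mm, sensor_w_mm, sensor_h_mm):
    # C = A / B where B is the sensor diagonal
    return flange_mm / math.hypot(sensor_w_mm, sensor_h_mm)

print(f"Olympus Four Thirds:  {telecentricity(38.67, 18.0, 13.5):.2f}")  # ~1.72
print(f"Canon EOS full frame: {telecentricity(44.00, 36.0, 24.0):.2f}")  # ~1.02
</code>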
sampling, aliasing, anti-aliasing and the Nyquist theorem
whenever one converts a signal to a digital form, a process of sampling is used to read parts of the analog signal at regular time or space intervals; all other parts of the analog signal are discarded (eg. anything occurring between the times when the signal is read or, as in camera sensors, light that misses the photosite).
in audio, we sample the sound at regular time intervals to capture instantaneous values, the higher the sampling rate, the closer the digital signal is to the original analog signal.
in imaging, we sample light using sensors arranged in an array; the spacing between these sensors determines the spatial sampling frequency in cycles per millimetre.
the famous Nyquist-Shannon sampling theorem says that we need to sample at a rate of at least twice the highest frequency contained in the analog signal in order to perfectly reconstruct the original signal from the series of sampled values alone.
thus if we sample at a rate of R per second (the sampling rate), we will completely capture any signal containing frequencies up to R/2 per second (the Nyquist frequency for the system). Unfortunately, if the signal contains frequencies above this, they are not just discarded but create an artefact by becoming part of the sampled values at a frequency the same amount below the Nyquist frequency as the original was above it - this maverick component of the sample is called aliasing distortion and the phenomenon is called aliasing (see the numeric sketch at the end of this section).
to prevent such aliasing distortion, we use an anti-aliasing low pass filter which attempts to remove all parts of the signal which are at or above the Nyquist frequency before we sample the signal.
in digital photography, the anti-aliasing filter needs to work best at frequencies at or just above the Nyquist frequency, as frequencies much higher than that tend to be removed by the optical system's aberrations (incl. diffraction limits) and inexact focusing. In addition, as sensors are not perfect, they can usually only resolve up to 70-80% of the Nyquist frequency, and thus the anti-aliasing filter may need to block frequencies in this range as well to minimise digital sampling artefacts.
thus in digital cameras, the anti-aliasing “filter” is actually provided by a combination of factors:
lens blur
image-shifting anti-aliasing filter - usually with a cos(pi * x) response
integration over the area of the sensor pixels
lenslets, if the sensor is so equipped, with their sin(pi*x)/(pi*x) response
it is thus possible that for optimum anti-aliasing, the degree of anti-aliasing needed depends to some extent on the lens system used, and thus the importance of matching the lens to the camera, which may explain the often sub-optimal results digital SLRs have when used with lenses designed for film cameras.
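the frequency-folding behaviour described above is easy to demonstrate numerically with a 1-D signal; a small Python/numpy sketch, using 100 samples per second (Nyquist = 50) and a 60 cycle per second tone, which shows up in the sampled data at 40 - the same amount below Nyquist as the original was above it (in a camera the same folding happens with spatial frequencies in line pairs per mm):
<code python>
import numpy as np

rate = 100                            # samples per second
t = np.arange(0, 1, 1 / rate)         # one second of sample instants
signal = np.sin(2 * np.pi * 60 * t)   # 60 Hz tone, above the 50 Hz Nyquist frequency

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
print(f"Nyquist frequency: {rate / 2:.0f} Hz")
print(f"strongest component in the sampled data: {freqs[np.argmax(spectrum)]:.0f} Hz")  # 40 Hz alias
</code>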