low light

Discussion in 'Photography' started by ipy2006, Mar 7, 2007.

  1. Lionel Guest

    Oops. The above is obviously wrong, please ignore it. (I was confusing
    different algorithms.)
     
    Lionel, Mar 19, 2007

  2. Hey Scott,
    Where is your test with the picket fence? As I recall,
    all attempts to downsample without artifacts pretty much failed.
    Quite interesting.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 19, 2007

  3. Your math doesn't add up. If the FZ50 gets 4800 electrons at
    ISO 100, then at ISO 1600 the most it will record is
    4800/16 = 300. With 3.3 electron read noise, that is only a dynamic
    range of 91. VERY poor. But I digress. Your 3 binned pixels
    would then have a max signal of 900 electrons and read noise
    of 5.8 electrons and a dynamic range of only 155.

    The Canon 1D Mark II at ISO 1600 records ~3,300 electrons
    with 3.9 electron read noise and has a dynamic range
    of 850, or 9.7 stops.
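
    (As a quick check of the arithmetic, here is a minimal Python sketch,
    taking dynamic range as maximum signal divided by read noise, with stops
    as log2 of that ratio, and using the electron counts quoted above.)

        import math

        def dynamic_range(max_signal_e, read_noise_e):
            """Dynamic range as max signal over read noise, and the same in stops."""
            dr = max_signal_e / read_noise_e
            return dr, math.log2(dr)

        # FZ50-style pixel at ISO 1600: 4800/16 = 300 electrons, 3.3 e- read noise
        print(dynamic_range(300, 3.3))                     # ~(91, 6.5 stops)

        # Binning 3 such pixels: signal adds linearly, read noise adds in quadrature
        print(dynamic_range(3 * 300, 3.3 * math.sqrt(3)))  # ~(157, 7.3 stops)

        # Canon 1D Mark II at ISO 1600: ~3,300 electrons, 3.9 e- read noise
        print(dynamic_range(3300, 3.9))                    # ~(846, 9.7 stops)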

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 19, 2007
  4. There is a simple reason for this "real-world fact."
    The 1D Mark II is a CMOS sensor; CMOS sensors have lower fill
    factors than CCDs. The FZ50 is a CCD, and CCDs generally have
    larger fill factors. You are comparing apples and
    oranges. The on-pixel support electronics are why
    there are no small-pixel CMOS sensors: once
    pixel size drops below about 4 microns, the active area
    drops too much. CCDs encounter similar problems around
    2 microns, but due to the inactive area between pixels.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 19, 2007
  5. Scott W Guest

    Here is the test image.
    http://www.pbase.com/konascott/image/69543104/original

    What looks like a fence is a test pattern that is just past Nyquist
    when downsampled to 25%.
    What I find is that mostly we try to avoid frequencies that are
    twice the Nyquist limit, as these
    are the ones that make strong moiré patterns. Frequencies that are
    just past Nyquist create much
    more subtle artifacts and in a normal photo are not all that visible;
    the test pattern, however, does
    show the artifacts pretty strongly with any of the downsample methods
    that people put forth.

    In a perfect world we would not have any information past Nyquist, but
    given that we are often left with a limited number of samples, like
    what a computer screen can display, we are forced to push things a bit
    if we want the photo to look at all sharp.
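
    As a rough numerical illustration of the effect Scott describes, here is
    a short Python sketch (the sizes and the 55-cycle sinusoid are arbitrary
    stand-ins, not his actual test pattern): a frequency just past the
    post-downsample Nyquist limit folds back to a nearby lower frequency when
    the samples are naively decimated.

        import numpy as np

        n = 400
        x = np.arange(n)
        # Stand-in for the "picket fence": 55 cycles across the frame. After 4:1
        # decimation there are 100 samples, so Nyquist is 50 cycles -- 55 is just past it.
        signal = np.sin(2 * np.pi * 55 * x / n)

        decimated = signal[::4]          # naive downsample, no low-pass filtering

        # 55 cycles is above the new Nyquist of 50, so it folds back to 100 - 55 = 45
        # cycles: a slightly lower frequency that was never in the original scene.
        spectrum = np.abs(np.fft.rfft(decimated))
        print("dominant frequency after decimation:", spectrum[1:].argmax() + 1, "cycles")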

    Now if I could have a 20 inch monitor with something like 3000x2000
    pixels life would be a lot easier.

    Scott
     
    Scott W, Mar 19, 2007
  6. acl Guest

    But it's not so simple. Imagine using a square cutoff (a step) in
    frequency space to remove all frequencies above Nyquist. We'd get
    ringing artifacts even though they are not actually caused by the
    downsampling itself (but by the low-pass filter). We need a smooth
    rolloff. In fact, the product of the extent of the rolloff in
    frequency space and the extent of the artifacts in real space should
    be a constant, I think, so it's a tradeoff. Of course it depends on
    the constant; if it's 10^-10, who cares. I don't know what it is.

    Also, if simply removing all such frequencies (above half the
    sampling) in any way was sufficient to avoid artifacts, binning 2x2
    (i.e. just adding the 4 pixels together) would result in zero
    artifacts. I think the point is to avoid creating artifacts by the
    process of removing the high frequencies itself.
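
    Here is a small sketch of that tradeoff in Python, on a 1-D step edge
    with an arbitrarily chosen cutoff: a hard brick-wall cutoff leaves
    classic Gibbs ringing, while a raised-cosine rolloff over an octave
    leaves far less, at the cost of a softer edge.

        import numpy as np

        n = 256
        edge = np.zeros(n)
        edge[n // 2:] = 1.0                     # a step edge (dark-to-bright boundary)

        spectrum = np.fft.rfft(edge)
        freqs = np.arange(spectrum.size)
        cutoff = 32                             # arbitrary cutoff for the illustration

        # Hard "brick wall" cutoff: keep everything up to the cutoff, zero the rest.
        hard = np.fft.irfft(spectrum * (freqs <= cutoff), n)

        # Smooth raised-cosine rolloff between cutoff and 2*cutoff.
        t = np.clip((2 * cutoff - freqs) / cutoff, 0.0, 1.0)
        soft = np.fft.irfft(spectrum * 0.5 * (1 - np.cos(np.pi * t)), n)

        # Undershoot next to the edge as a crude measure of ringing.
        print("hard cutoff undershoot  :", hard.min())   # ~ -0.09 (classic ~9% Gibbs)
        print("smooth rolloff undershoot:", soft.min())  # much closer to zero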
     
    acl, Mar 19, 2007
  7. Paul Furman Guest

    Thanks for the reply... a few comments below.
    OK, so a 1D Mark II can boost ISO 19.5x without increasing read noise. I
    see unity on your chart at 1300 (close enough). Still, it seems like the
    read noise would be trivial compared to the basic noise at ISO 1300.
    The read noise (rounding errors) is going to be the difference between
    30 & 31 (on a scale from -30 to 4096), and intuitively I'd guess ISO 1600
    noise is more like 30 & 300; isn't that roughly in the ballpark? Maybe it
    does make more of a difference in the shadows because of the linear
    issue & applying a curve when setting a 'normal' gamma. When I look at
    noise I see clear reds, blues & greens in what ought to be greys, and that
    looks like a lot more than a few bits to me.
    Oh yeah, now I recall, it's heat-generated noise... background heat
    producing apparent detail, which can be reduced with dark-frame subtraction.

    OK this makes sense. So including the negative shadow noise would give
    blacker blacks, even though it is just random cloudiness.

    Only 30-50% sounds like tons of room for improvement.

    The pixel pitch is a nice clue but ultimately it's not reliable data and
    not really meaningful, except to show how neatly each camera has packed
    its sensors. Those charts seem to me like they would be easier to read
    as a simple stack or bar chart.

    Well default settings don't really matter.

    I tried RSE and was not pleased at all, I like CS3/CS much better. It
    may be that RSE just allowed more extreme adjustments so I was more
    likely to create unnatural looking conversions.

    Thanks again for taking the time.
     
    Paul Furman, Mar 19, 2007
  8. John Sheehy Guest

    Read noise is not rounding errors. The blackframe read noise in Canons
    is mostly real, analog noise picked up somewhere between amplification at
    the sensor wells and digitization.

    I'm not sure if Roger is suggesting that the read noise is just something
    that happens in general before digitization or is part of the
    quantization itself, but it is most certainly *NOT* the quantization. It
    is analog noise, digitized.

    The idea that current blackframe read noises are a hard mathematical
    result of quantization is nonsense. In the absence of any analog noises,
    the quantization only makes noises of less than 0.3 ADUs (and 0.3 is a
    worst-case scenario, and requires a complex signal to appear all over the
    image).

    Do you remember my images of the dock pilings, shot with the same
    absolute exposure at ISOs 100 and 1600 from a couple of years ago?

    I quantized the ISO 1600 image to the same level as the ISO 100, and its
    noise did not increase visibly at all. I had to subtract one from the
    other and multiply the result greatly to even see the difference! The
    ISO 100, however, looked quite noisy compared to either the quantized or
    unquantized ISO 1600. Conclusion: bit depth and quantization are *NOT*
    the limiters of shadow quality; analog read noise is.
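
    As a hedged illustration of that sort of test, here is a sketch on
    synthetic data (the mean and noise level are made-up figures in the
    spirit of the thread, not values from the actual files): when the analog
    noise is already comparable to the coarser step, truncating four bits
    adds almost nothing.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000

        # Synthetic shadow patch at "ISO 1600": mean 120 ADU with ~15 ADU of
        # photon + read noise (illustrative figures only), already digitized.
        iso1600 = np.round(120 + rng.normal(0, 15, n))

        # Re-quantize to the 16x coarser step an ISO 100 capture effectively has
        # relative to ISO 1600 (i.e. zero the four least significant bits).
        coarse = (iso1600.astype(np.int64) // 16) * 16

        print("ISO 1600 noise              :", iso1600.std())   # ~15.0 ADU
        print("same data, 4 bits truncated :", coarse.std())     # ~15.7 ADU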

    Another point in this regard is that the DR of the 1DmkIII is exactly the
    same as the 1DmkII; if the standard deviation of the blackframe were
    somehow correlated to the least significant bits, you would expect the
    values to remain fairly constant with the 2 extra bits (in native ADUs),
    but they do not - they quadruple, meaning that they have *NOTHING*
    whatsoever to do with quantization, and everything to do with analog read
    noise.

    --
     
    John Sheehy, Mar 19, 2007
  9. Paul Furman Guest

    Maybe a semantics problem? Are you two talking about the same thing?
    Urgh, what is 'blackframe'?

    I think analog noise is what he calls plain old noise??? Rounding errors
    were an issue for him in explaining why the high-bit ImagePlus raw
    converter produced cleaner images, though I'm not convinced rounding
    errors are significant.

    ADU = Analog to Digital Unit?
    electrons, photons, bits???

    Yes, that was the result of getting the detail into the higher part of
    the counts so that when the gamma curve is applied, it doesn't get
    trashed: more detail in the highlights than the shadows due to linear
    conversion to normal gamma. After A/D conversion, in the raw conversion
    step. Roger's argument is to add more bits to the raw conversion and get
    more detail in the shadows that way.
    I'm not sure what you mean by 'quantized'. Is that the application of
    normal gamma curves during raw conversion? Sorry if I'm not using the
    right terms.
    You lost me here. Standard deviation refers to noise level deviating
    from what it should be? I don't even really know what standard deviation
    is, honestly.
     
    Paul Furman, Mar 19, 2007
  10. John Sheehy Guest

    This I know.
    I am not "comparing" in the context you suggest. I am simply trying to
    demonstrate the fact that small pixels are not necessarily the bad thing
    they are made out to be by big pixel fanatics. Maybe you're not
    concerned, but I get very concerned about false information circulating
    as fact, or half-truths taken out of context like an evangelist quoting
    scripture for his own gain. There is a growing cult of people who
    believe that small pixels cannot give good image quality, and your work
    is the most often-quoted Bible.
    You don't need all of the amplification levels, though. If the pixel
    pitch halves to 4 microns, you can eliminate the ISO 100- and ISO 200-related
    circuit components. When you go smaller yet, there may be no more
    benefit in Canon's current technology at all. What if you could read 2-micron
    pixels with 4800 photons each, with a single amplification and only 1.5
    electrons of read noise? What would be the point in having bigger pixels,
    especially if you had the option of the firmware downsampling or binning
    for you, if you didn't want all that data?

    My main concern is that companies don't want to be bothered with higher
    pixel densities in DSLRs, and big-pixel fanaticism is exactly what they
    want people to believe, so that they don't have to move in the right
    direction for maximum IQ, or niche products. AFAIAC, there are huge gaps
    in current offerings. Where is the camera that takes EOS lenses that has
    a small sensor like the one in the FZ50? Imagine an FZ50 sensor
    capturing the focal plane of a 500mm or 600mm f/4L IS! Imagine a more
    professional version with lower read noise. No bokeh-destroying TCs
    necessary; you can leave them home and get as much or better detail, with
    better bokeh.

    --
     
    John Sheehy, Mar 19, 2007
  11. John Sheehy Guest

    I don't think so, based on Roger's hope that the extra 2 bits in the
    mkIII will increase DR (lower read noise), which they fail to do.
    That's an "exposure" that really has no exposure: going through the motions
    of an exposure with the lens cap on. Most if not all digital cameras
    have pixels that are covered, and therefore capture "blackframe pixels"
    in every exposure.
    They aren't all that significant in capture, not with current read noise
    levels. They are a little more significant in conversion and PP, though,
    AFAICT.
    There is no direct relationship between ADUs and photons or electrons.
    They can be expressed as a ratio to each other, but the ratio is
    arbitrary and varies from system to system.
    Not exactly. It's not about the higher parts of the counts, per se.
    It's about signal-to-noise ratios. Michael Reichmann's explanation of
    "exposing to the right" introduced that vocabulary of more levels to the
    right, but in real-world cameras, the levels aren't nearly as important as
    the S/N ratios, which increase as you expose to the right with a given
    camera and ISO. When you compare one ISO against another, or against
    another camera, then exposing to the right in one is not necessarily
    better than exposing to the left in another.
    I have nothing against precision; more precision is always better, even
    if by just a tiny amount. In my own hand-conversions, I try to use the
    full range of precision available to me; I promote RAW files to 16-bit,
    multiplying the values by 16, before doing any white balancing, or
    demosaicing, and I even downsample in this bloated precision before
    clipping the blackpoint.

    My point in playing down bit depth in this thread is that it is not the
    main source of shadow noise in current cameras; analog read noise is the
    main source. Roger's opinion on this is incorrect, IMO, and I have
    proven it by quantizing RAW data myself. Quantization does not rear its
    ugly head, in clear visibility, until you quantize so far that the
    standard deviation is a bit below 1 ADU. IOW, you can turn the last four
    least significant bits of any ISO 1600 frame from a current Canon into zeros,
    and gain only a tad of noise, and still be quite a bit cleaner than ISO
    100 under-exposed by 4 stops, even though they are both quantized exactly
    the same.

    We need less analog read noise, much more than we need >12-bit depths.
    Quantization is just the act of converting analog data to digitized
    integers. If there is no added noise in the process, then any analog
    range of values equivalent to one ADU will wind up with that single ADU
    value. For systems where absolute values matter, this means errors over
    any one ADU range, like -0.999 to 0, or -0.5 to +0.499, or 0 to +0.999;
    never +/- 1 as Roger suggests in other posts. In systems like RAW data,
    where the blackpoint is movable and arbitrary, there is no point in
    viewing the errors as anything but +/- 0.5. Of course, the analog part
    of the read noise can make it wider than that, (and always does in
    current consumer products), but that's the most that quantization in and
    of itself will do.
    It's what you get when you take a number of samples, subtract each one
    separately from the average of them all, square each result, average them
    all together, and take the square root of the new average.
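
    In code, that recipe is just the (population) standard deviation; a toy
    example with arbitrary sample values, plus the textbook figure for an
    error spread uniformly over one ADU:

        import math

        def std_dev(samples):
            """Subtract the mean from each sample, square, average, square-root."""
            mean = sum(samples) / len(samples)
            return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

        print(std_dev([98, 101, 103, 97, 100, 102]))   # ~2.1

        # An error spread uniformly over one ADU has a standard deviation of
        # 1/sqrt(12) ~= 0.29 ADU, in line with the ~0.3 ADU quantization figure
        # mentioned earlier in the thread.
        print(1 / math.sqrt(12))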

    There is no way that you can tell what value a pixel is supposed to be,
    so the individual deviation of a particular pixel is generally unknown.
    If, however, you photograph something like a Color Checker card, out of
    focus, in even lighting, then the average value within a square can be
    treated as the fixed value from which every sample in that square is
    deviating. In a real camera, light is not even and may
    increase the standard deviation for non-noise reasons. So, you can
    subtract one RAW image from another, properly registered, and measure the
    standard deviation a little more accurately, as you're only measuring
    what changes between frames. This ignores fixed-pattern noises, of
    course, that repeat from frame to frame. We know that adding noise to
    noise of equal intensity multiplies it by the square root of two, so you
    have to divide your standard deviations by the square root of two to get
    the single-frame deviations of non-repeating noises with the subtractive
    methods.
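
    Here is a sketch of that two-frame method on synthetic frames (the fixed
    pattern and noise levels are invented for the illustration): the fixed
    pattern cancels in the difference, and dividing by the square root of
    two recovers the single-frame random noise.

        import numpy as np

        rng = np.random.default_rng(1)
        shape = (500, 500)

        fixed_pattern = rng.normal(0, 3.0, shape)                  # repeats every frame
        frame_a = 100 + fixed_pattern + rng.normal(0, 5.0, shape)  # random noise sigma = 5
        frame_b = 100 + fixed_pattern + rng.normal(0, 5.0, shape)

        diff = frame_a - frame_b                                   # fixed pattern cancels

        print("single frame std (random + fixed) :", frame_a.std())            # ~5.8
        print("difference std / sqrt(2) (random) :", diff.std() / np.sqrt(2))  # ~5.0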

    With a blackframe, you know exactly what the signal is supposed to be;
    nothing.

    --
     
    John Sheehy, Mar 20, 2007
  12. David J. Littleboy, Mar 20, 2007
  13. Gee, some simple research would prove you are wrong.
    Try reading http://en.wikipedia.org/wiki/Analog-to-digital_converter
    which is a pretty good write-up.
    For example, note the statement:
    "Commercial converters usually have ±0.5 to ±1.5 LSB error in their output."
    (section on commercial analog-to-digital converters).

    Let's look at some noise in ADUs from a wide range of cameras:

    Camera Read Noise in ADU (or DN, or LSBs)
    ISO: 50 100 200 400 800 1600
    Canon 1DMII 1.2 1.3 1.4 1.7 2.5 4.8
    Canon 5D 1.8 1.8 1.9 2.1 2.6 7.4
    Canon 10D 1.4 2.0 3.9 6.4 13.
    Nikon D50 1.8 4.0
    Nikon D200 1.3 2.0 3.8 7.4 15.
    Canon 20D 2.0 2.2 2.4 3.2 4.5
    Canon S60 2.5
    Canon S70 2.0 3.4 6.3 17.

    The best noise is 1.2 ADU and the average of the lowest
    7 values (ISO 50 or 100) is 1.7 ADU.

    Now let's look at a real device, e.g. a 14-bit converter from
    Analog Devices:
    http://www.analog.com/en/prod/0,2877,760%5F788%5FAD7952,00.html
    Note it says: ±0.3 LSB typical, ±1 LSB maximum.
    Error depends on the speed of conversion. This device does at
    most 1 million samples per second. Canon's 1D Mark II
    must do 100 million samples per second.

    Here is Analog Devices' summary of 14 and 16 bit A/Ds working at
    ~ 100 megasamples/sec:
    http://www.analog.com/IST/SelectionTable/?selection_table_id=204
    Notice that the SNR for 14-bit converters ranges from 71.9 to
    77.6 dB, and the lower SNR is for lower-power devices (those that
    would more likely be used in cameras). That's less than 12 perfect
    bits equivalent. Also notice that no 16-bit converter reaches an
    SNR above 80 dB (barely over 13 perfect bits equivalent).

    12-bit converters do a little better,
    http://www.analog.com/IST/SelectionTable/?selection_table_id=197
    with SNR at 62 to 71 dB. 62 dB is less than 11 bits, consistent
    with the noise observed in the cameras listed above.
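
    For reference, the usual conversion from ADC SNR in dB to effective bits
    is ENOB = (SNR - 1.76) / 6.02; applied to the figures above:

        def effective_bits(snr_db):
            """Effective number of bits: ENOB = (SNR - 1.76) / 6.02."""
            return (snr_db - 1.76) / 6.02

        for snr_db in (62.0, 71.9, 77.6, 80.0):
            print(snr_db, "dB ->", round(effective_bits(snr_db), 1), "effective bits")
        # 62 dB   -> 10.0 bits     71.9 dB -> 11.7 bits
        # 77.6 dB -> 12.6 bits     80 dB   -> 13.0 bits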

    Explore other converters:
    http://www.analog.com/en/subCat/0,2879,760%5F788%5F0%5F%5F0%5F,00.html

    Busted!

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 20, 2007
  14. Lionel Guest

    Bit depth of the ADC & 'shadow noise' are totally unrelated to each
    other, except as economic engineering constraints. For a given
    combination of sensor & (analog) conditioning circuit, the kinds of
    noise sources we've been discussing have already been managed to the
    best of the ability of the people responsible for the analog part of
    the design.
    Once the characteristics of the analog signal are known, an engineer
    selects an ADC based on the voltage range of the analog input signal
    (the output of the photodiode sense-amps, in this context), & then
    calculates the maximum useful bit-depth at which the voltage
    represented by one LSB (e.g. a 12-bit ADC measuring a 1V signal has an
    LSB value of ~244uV (microvolts)) will be consistently detectable
    above the noise-floor of the input signal. (NB: for the sake of
    simplicity, I'm ignoring dithering & other sampling esoterica.) While
    one can convert to as many bits as one is willing to pay for, the
    (very expensive) extra bits will, for all practical purposes, be
    random numbers.

    (The ADC /can/ add conversion artifacts to the sampled data, but
    they're not noise, are very well defined, & a good design will keep
    them out of the digital output.)
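
    A worked version of that LSB arithmetic (the 1 V full scale is just the
    illustrative figure from the paragraph above):

        def lsb_volts(full_scale_volts, bits):
            """Voltage represented by one LSB of an ideal ADC."""
            return full_scale_volts / (2 ** bits)

        print(lsb_volts(1.0, 12) * 1e6, "uV")   # ~244 uV, as quoted above
        print(lsb_volts(1.0, 14) * 1e6, "uV")   # ~61 uV for a 14-bit ADC
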
    That's incorrect. It isn't possible to make a blanket claim that any
    particular type of noise is the "main source" of noise in images from
    digital cameras in general, because noise in images is totally
    dependent on the design of the camera, & can vary not only from brand
    to brand, & camera to camera, but between different modes or settings
    on the same camera, & even at different ambient temperatures.
    That statement makes no sense whatsoever. There is no way in the world
    that you have actually 'quantised' image data from a real camera
    yourself, unless you have actually built your own analog to digital
    conversion system & wired it up to the image sensor of the camera.
    Presumably, you're talking about processing RAW files, which have
    already been processed by the sense amps, quantised by the ADC in the
    camera, then twiddled by the firmware. If so, that is totally
    irrelevant to the topic of small photodiodes vs large photodiodes, &
    tells you nothing whatever about the noise levels on either.
     
    Lionel, Mar 20, 2007
  15. Paul Furman Guest

    Is there a way to show the 'main' sensor noise in the same units
    compared to this read noise? I think I understand that an ADU is the
    smallest unit of info that can be read, right? And these ADUs are
    essentially rounding errors, not random noise? If so I would expect them
    to follow a more consistent increase like:

    1.2 2.4 4.8 9.6 19.2 38.4
     
    Paul Furman, Mar 20, 2007
  16. John Sheehy Guest

    Gee, maybe you should read what I actually write. I never said that the
    low ISO read noise has nothing to do with the ADC. I said it wasn't the
    *bit depth* that causes the noise, in reply to the idea that the noise
    was a mathematical artifact of quantization. It is only reasonable that
    in the design of ADCs, bit depths far beyond the analog noise floor are not
    worthwhile, in general, so the typical ADC is only designed for a
    worthwhile bit depth, which puts them all in a close range.

    If you google my posts in other forums, you will see that I have
    concluded that the flat rate of read noise at all Canon DSLR ISOs
    probably has something to do with the last stages, including the ADC. I
    have not concluded, in a long time, however, that it is because of the
    bit depth of the capture. It is easy to quantize data further, and see
    at what point on the quantization curve you are. The fact is, you have
    to quantize ISO 100 by about two bits, and ISO 1600 by about 3 bits,
    before you see more noise, due to the quantization.
    If you had paid any attention to what I wrote, you would have seen that I
    wrote "If there is no added noise ...". IOW, I was clearly and
    deliberately taking the mathematical aspect of quantization into
    isolation. I mentioned also in some other spot that it was not 100%
    clear if you were talking about the mathematical act of quantization, or
    the total effect of the ADC, including the noise it introduces. The fact
    is, you used the exact term "+/- 1", which doesn't look like a real noise
    figure, but a mathematical, theoretical one. For me to lean towards the
    interpretation that you meant that the +/- 1 was a mathematical error was
    only logical. Once again, your language leaves a lot of mystery.

    In the past 24 hours, I have had three people on DPReview quote your work
    to me, to prove that the 14 bits in the mkIII will automatically increase
    DR by 2 stops, because current cameras are limited by 12-bit capture.
    Had you made it clear that it isn't the bit-depth itself, but the noise
    inherent in real-world ADCs, people might be drawing more accurate
    conclusions. There is only going to be 2 more stops of DR if the
    blackframe noise drops to 1.3 14-bit ADUs (0.325 12-bit ADUs). The
    Imaging Resource mkIII had ISO read noises of 4.88 14-bit ADUs and
    greater (I get 7.91 in one file; this may have some kind of electrical
    interference; I have to look closer for patterns).
    Those 10D figures are way off. They are 1.9, 2.8, 4.9, 9.0, and 18.0.
    Perhaps your figures were taken from a blackpointed RAW blackframe.

    The 5D figure is very high for ISO 1600, also. The 5D ISO 1600
    blackframes I have here are all 4.6.
    I don't recall seeing values this low at the low ISOs in the Nikon RAW
    files I had. These are probably taken literally from the RAW blackframe,
    so they are automatically reduced to about 60% of what they'd be if they
    weren't black-clipped, like the Canons.
    You should pay more attention. I never said no noise came from the ADC
    stage or unit; I said the *bit depth* was not the problem.

    Let me state my viewpoint with a very clear example; if you quantize a
    12-bit Canon DSLR ISO 100 to 11 bits, it will lose little DR, much closer
    to 0 stops than 1 stop.

    If the 1DmkIII actually had noise of 1.3 14-bit ADUs, and you quantized
    that RAW data to 11 bits, it would still have more DR at the pixel level
    than a 12-bit RAW from existing 12-bit Canons.



    --
     
    John Sheehy, Mar 20, 2007
  17. acl Guest

    Exactly: For the D200 at least, if you measure from areas where the
    average signal isn't zero, you see clearly that a measurement from a
    blackframe gives too low a stdev, consistent with clipping. This
    occurs with dcraw and other software which uses it to read the data,
    but it doesn't look like dcraw itself subtracts an offset or anything
    (but I may have missed it).
     
    acl, Mar 20, 2007
  18. Correcting the half-truths is getting tiring.

    Here is a demo: See figure 9 at:
    http://www.clarkvision.com/photoinfo/night.and.low.light.photography

    Here is the original raw data converted linearly in IP, scaled by 128:
    http://www.clarkvision.com/photoinf...nightscene_linear_JZ3F7340_times128-876px.jpg

    Now here is the same data with the bottom 4 bits truncated:
    http://www.clarkvision.com/photoinf...linear_JZ3F7340-lose-4bits_times128-876px.jpg

    You lose quite a bit in my opinion.
    It would be a disaster in astrophotography.
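
    For anyone who wants to try the same comparison on their own raw data,
    here is a rough sketch of the two steps; the raw-loading part is left out
    because it depends on your converter, and the stand-in array is random
    shadow-level data rather than the night scene.

        import numpy as np

        def stretch_for_display(raw_linear, scale=128):
            """Multiply linear raw values so deep shadows become visible."""
            return np.clip(raw_linear.astype(np.int64) * scale, 0, 65535).astype(np.uint16)

        def truncate_bits(raw_linear, bits=4):
            """Zero the lowest `bits` bits, i.e. throw away that much shadow precision."""
            return (raw_linear.astype(np.int64) >> bits) << bits

        # `raw` stands in for a 2-D array of linear raw shadow values.
        raw = np.random.default_rng(2).integers(0, 40, size=(6, 6))

        print(np.unique(stretch_for_display(raw)).size, "distinct shadow levels")
        print(np.unique(stretch_for_display(truncate_bits(raw))).size, "after losing 4 bits")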

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 21, 2007
  19. Yeah, and some the other way. I always discussed quantization
    in terms of the ADC. ADCs are not perfect.

    Fine, now I hope we are on the same page.
    Great, we agree, sort of! From the data I see, I conclude
    most of the noise at the low ISOs is due to the ADC.
    A better ADC will improve the noise at low ISO. That comes with
    more bits (higher bit converters).
    This does not make sense.
    I was only talking about the ADC. That is all that matters in the
    quantization step. IT IS ALL ABOUT ADC PERFORMANCE.
    12-bit ADCs do not give perfect 12-bit quantization.
    That is your jump to conclusions. If you read what I actually wrote...
    e.g. see the caption to Figure 4 at:
    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary
    which I wrote before the 1D mark II was announced:
    Figure 4. Dynamic range of sensors. Many sensors are limited to
    just under 12 photographic stops by the camera's 12-bit analog-to-digital
    (A/D) converter. Look for future DSLRs to use 14 or 16 bit A/Ds.

    There won't be 2 stops of improvement with a 14-bit ADC if the
    Analog Devices ADCs are indicative of the ADC used by Canon.
    Canon claims 1 stop of shadow improvement.
    Oh, so you've tested my Canon 10D? I didn't see you in my house.
    My numbers are correct for my camera.
    Perhaps there is some variation in cameras, or you are testing
    at different temperatures. See reference 13 on my
    digital.sensor.performance.summary web page for the 5D data.
    Well, perhaps you could examine the real data, e.g.:
    http://www.clarkvision.com/imagedetail/evaluation-nikon-d200

    I don't just do a dark frame measurement; I analyze the
    noise and response over the entire range of the sensor and model
    the results. See Figure 1 on the above web page. You'll see the
    largest deviation from the model is less than 10%, and I have
    light levels down to DN 16 (out of 4095). Where is your data
    that proves this is wrong?
    You've been arguing that a 14-bit ADC would not help the low
    ISO performance. Canon and I claim otherwise. Canon has stated
    improved shadow noise with their 14-bit converter in the 1DIII.
    Current data indicate that at low ISO, cameras (Nikon and Canon) are
    limited by their 12-bit ADCs.
    This does not make sense.

    I think this thread has gone on long enough. Let's simply wait
    a few months until 1DIIIs are in the hands of competent testers
    who publish real evaluations of read noise and full-well
    capacities. I predict the read noise in the 1DIII at low ISO
    will improve by about a factor of 2 over the 1DII simply from
    typical ADC specifications.

    Oh, and one other prediction: we'll see more images being limited
    by Photoshop's 15-bit math.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 21, 2007
  20. Yes, the ISO 1600 values are pretty close to the true read noise
    of the sensor. So for read noise in ADUs at ISO 100 divide the
    ISO 1600 values by 16. For example, the 1DMII with 4.8 ADUs at
    ISO 1600 should be about 4.8/16 = 0.3 ADU at ISO 100. That is why
    a converter with more bits should improve the low ISO shadow detail.
    Yes, ADU. I don't know where this ADU term came from. In the
    terrestrial and planetary sciences, we use DNs, and so do the
    engineers I've worked with on spacecraft sensors.
    DN = data number.
    The noise in ADUs (DNs) comes from errors introduced by 1) sensor noise
    + 2) analog gain amplifier noise + 3) A/D converter noise and converter
    errors.
    It's not a straight-line increase because one of those three dominates at
    one end of the ISO range and another dominates at the other end.
    #1 and #2 are strongly coupled. #1 + #2 dominate at high ISO, and
    #3 dominates at low ISO in the above sensors. We are all
    hoping that #3 will be less in the new Canon 1DMIII with the
    14-bit converter. And Canon says that is the case.
    I hope they are right.
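
    Here is a small sketch of that three-part model, with independent noise
    sources added in quadrature; the sensor/amplifier and ADC figures are
    invented, chosen only to show the shape: a flat, ADC-limited floor at
    low ISO, then a rise roughly proportional to ISO once the
    sensor/amplifier term dominates.

        import math

        def read_noise_adu(iso, sensor_amp_noise_e=4.0,
                           gain_e_per_adu_iso100=12.0, adc_noise_adu=1.2):
            """Quadrature sum of a fixed-in-electrons sensor/amplifier term
            (which grows in ADU as ISO gain rises) and a fixed ADC term in ADU."""
            gain = gain_e_per_adu_iso100 * 100.0 / iso   # electrons per ADU at this ISO
            return math.hypot(sensor_amp_noise_e / gain, adc_noise_adu)

        for iso in (100, 200, 400, 800, 1600):
            print(iso, round(read_noise_adu(iso), 2))
        # ~1.25, 1.37, 1.79, 2.92, 5.47 ADU: at ISO 1600 the total is essentially
        # the sensor/amplifier term alone, which is why the high-ISO ADU values
        # track the true sensor read noise.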

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 21, 2007
