Read noise: get it 30x down?

Discussion in 'Digital Cameras' started by Ilya Zakharevich, Sep 30, 2007.

  1. I looked through papers on sensor read noise, and it looks like all
    the major contributors to it (kT and 1/f) are random. AND, reading is
    non-destructive.

    And I cannot reconcile these two statements. It LOOKS LIKE that if
    one reads the sensel voltage 100 times and averages the values, the
    noise should go down 10 times. So: does it?
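
    A minimal sketch of that reasoning, assuming the read noise on
    repeated non-destructive reads is independent and Gaussian (the
    numbers are just the ones used in this thread, not from any
    datasheet):

        import numpy as np

        rng = np.random.default_rng(0)
        read_noise_e = 4.0    # assumed per-read noise, electrons RMS
        n_reads = 100         # non-destructive reads of the same sensel
        n_sensels = 100_000   # number of sensels simulated

        # each read = true value (0 here) + independent Gaussian read noise
        reads = rng.normal(0.0, read_noise_e, size=(n_sensels, n_reads))
        averaged = reads.mean(axis=1)

        print("single-read noise:", reads[:, 0].std())  # ~4.0 e
        print("100-read average :", averaged.std())     # ~0.4 e, 10x lower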

    One example: the new Sony APS CMOS sensor has 4000 ADCs; this is about
    1000x more than in the previous generation of sensors. In particular (at
    least if one ignores power consumption issues), they could spend 1000x
    the time to read each sensel.

    Assume they read each sensel 100 times, and assume they get the same
    read noise as Canon did in its CMOS sensors (about 4 electrons). Then
    the noise goes down to 0.4 electrons.

    Sounds too good to be true... Puzzled,
    Ilya
     
    Ilya Zakharevich, Sep 30, 2007
    #1

  2. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Ilya Zakharevich
    It was a temporary shutdown of mind. :-( ;-)

    The fact that the noise is "colored" (not white) implies that the
    measurements made at different times are not independent. Thus the
    averaged noise won't scale down as 1/sqrt(N).

    Now the REAL question becomes: what is the LOWER cut-off frequency of
    the 1/f noise? ("Pure" 1/f noise is physically impossible, since it
    carries infinite power; so the density cannot keep increasing that
    quickly at very low frequencies. My question is about this frequency.
    In other words: the actual low-frequency power of the actual noise
    coincides with the power of 1/f noise filtered above some frequency;
    what is that frequency?)

    If, for example, we know that this frequency is about 20 kHz, then
    spending more than 50 usec reading a pixel would significantly
    decrease the noise, while spending 5 usec vs 50 usec would not change
    the noise very much.
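
    A rough numerical illustration of why the correlation matters, under
    the simplifying assumption that the read noise is a stationary time
    series and the 1/f component can be synthesized by spectral shaping
    (all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2**20

        def pink(n):
            """White Gaussian noise shaped to a ~1/f power spectrum."""
            spec = np.fft.rfft(rng.normal(size=n))
            f = np.fft.rfftfreq(n)
            f[0] = f[1]               # avoid division by zero at DC
            spec /= np.sqrt(f)        # amplitude ~ 1/sqrt(f) => power ~ 1/f
            x = np.fft.irfft(spec, n)
            return x / x.std()        # normalize to unit RMS

        white = rng.normal(size=n)
        colored = pink(n)

        for N in (10, 100, 1000):
            w = white[: n // N * N].reshape(-1, N).mean(axis=1).std()
            p = colored[: n // N * N].reshape(-1, N).mean(axis=1).std()
            print(f"N={N:5d}  white: {w:.3f}   1/f: {p:.3f}")

    The white-noise column drops as 1/sqrt(N); the 1/f column does not.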

    Thanks,
    Ilya
     
    Ilya Zakharevich, Sep 30, 2007
    #2

  3. [A complimentary Cc of this posting was sent to
    Scott W
    ??? Roger has his blind spots, but on this topic he is quite kosher.
    He would not say anything as wrong as this.

    Just look at his tables: quantization dominates in the low-ISO region;
    read noise dominates in the high-ISO region.

    And adding another bit or two to the quantizer is not a big deal.
    Decreasing readout noise may be quite tricky...

    A readout noise of 4 electrons matters only in the individual cells
    where the exposure is below about 25 electrons. So a full well of
    50 Ke is not relevant.

    Where low readout noise is very important is in the situations where
    people long for the cameras of yesterday, those with 6 MP. Given low
    enough readout noise, you can combine the pixels of a 24 MP camera so
    that it performs as a 6 MP one: lower noise, and lower resolution.
    (And you can do it selectively - in the low zones only.)
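
    A toy illustration of this binning argument, assuming independent
    per-pixel read noise (the shadow exposure and the 2e read noise
    figure are made up for the example):

        import numpy as np

        rng = np.random.default_rng(0)
        signal_e = 10.0     # mean photoelectrons per small pixel (deep shadow)
        read_e = 2.0        # assumed per-pixel read noise, electrons RMS
        h, w = 2000, 3000   # frame shape is illustrative only

        frame = rng.poisson(signal_e, (h, w)) + rng.normal(0, read_e, (h, w))

        # 2x2 binning in software: sum groups of four pixels
        binned = frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

        snr_small = signal_e / np.sqrt(signal_e + read_e**2)
        snr_binned = 4 * signal_e / np.sqrt(4 * signal_e + 4 * read_e**2)
        print("per-pixel SNR :", snr_small)   # ~2.7
        print("2x2-binned SNR:", snr_binned)  # ~5.3, since read noise adds
                                              # in quadrature across pixels
        print("measured binned noise:", binned.std())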

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Sep 30, 2007
    #3
  4. [A complimentary Cc of this posting was sent to
    Doug McDonald
    Thanks; looks like Google may be enough for me to self-educate. [The
    problem of having no mental model of 1/f noise is that I have no clue
    what happens when you switch between sources...]

    Are there cases where astronomers do this? Two years ago the best
    reference I had seen was for 2e noise. (Starting from about 0.2e,
    the noise becomes just a ditherer for the ADC. Seems like a long way
    to go.)

    Yours,
    Ilya
     
    Ilya Zakharevich, Sep 30, 2007
    #4
  5. Ilya Zakharevich

    John Sheehy Guest

    Then the problem becomes, "how do you do ISO 100 on the same camera?".

    Maybe we'll see dedicated low-light bodies in the future?

    I believe that shot noise, in isolation, without any read noise, is much
    less distracting than (current levels of) read noise at very low photon
    counts. I've simulated pure shot noise from well-exposed ISO 100 images,
    and the shot noise looks quite nice compared to the garbage we call read
    noise and total blackframe noise. Read noises always have a chance of
    being patterned; shot noise cannot be, not with any reasonable chance.
    Shot noise looks like a texture in the subject; read noise looks like a
    film of streaks, smears and fireworks superimposed upon the image.

    --
     
    John Sheehy, Oct 1, 2007
    #5
  6. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    Reset noise should be random. I doubt there is a contemporary camera
    which does not subtract reset noise. It is at a level of about 50e
    with contemporary dSLR sensors; we would see it if it contributed.

    Not quite correct. 4e read noise contributes significantly starting
    with exposures of about 25e. And with 4 Ke overloading the ADC, this
    is about 7.5 stops below the maximum, i.e., zone 1.5. Quite an
    important zone with a lot of subjects...
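
    The stop arithmetic behind that claim, spelled out (taking the 4 Ke
    clipping level assumed above):

        from math import log2

        full_well_e = 4000   # electrons at ADC clipping, as assumed above
        exposure_e = 25      # exposure where 4e read noise starts to dominate
        print(log2(full_well_e / exposure_e))   # ~7.3 stops below clipping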

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Oct 1, 2007
    #6
  7. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    A/D converters are quite sufficient now - when used in high-ISO mode,
    where the read noise matters more than at low ISO. So it is the
    "amplifier" read noise which must be improved first.

    Handling fixed-pattern noise MAY help in some situations; but as far
    as I know, read noise is dominated by the random part.

    Note that one may be mixing two different terms; thermal noise is, in
    some situations, a synonym of kT noise.

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Oct 1, 2007
    #7
  8. Ilya Zakharevich

    John Sheehy Guest

    There are some cases, and it is certainly true with long exposures, but
    there are non-single-pixel noises that do not repeat in successive frames
    in many cameras. Most often, it is an offset in the black point that
    occurs for only a fraction of a pixel line. Offsets that affect the
    entire line could be removed with data obtained from unexposed pixel
    borders (Canon RAWs maintain black pixels in the RAW file from the top
    and left edges of the image), but artifacts that occur mid-line cannot
    be positively identified.




    --
     
    John Sheehy, Oct 1, 2007
    #8
  9. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    If "it is random when examining one pixel again and again", then, in
    the context of this thread, you can remove it by (multi)sampling.
    When removed, the variations do not matter much, right? ;-)
    Thanks, looks like it contains a lot of datapoints I was looking for.
    No, they are not "needed"; they are "wanted" - if possible. But if
    you can't get more, such a difference is noticable. And S/N=5 is not
    "completely unusable"; as far as I'm concerned, it would be about S/N<3.

    Yours,
    Ilya
     
    Ilya Zakharevich, Oct 2, 2007
    #9
  10. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    So you confirm that the S/N is quite sufficient in high-ISO mode...

    This is not a fixed-pattern noise: a fixed pattern would survive to
    Figure 13, and this does not. Might it have been 1/f noise...?

    That does not make any sense, since kT noise is a part of read noise...

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Oct 2, 2007
    #10
  11. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Floyd L. Davidson

    Actually, the only piece which could have been useful for my purposes
    is the "CCD Amplifier Noise Spectrum" figure, and the caption on the
    vertical axis is not readable (nanovolts per WHAT?).

    If one wants to be more precise, this would mean incorporating an ADC
    with performance comparable to at least an ideal 10-bit converter (in
    practice, to achieve such performance, one would use at least a 12-bit
    converter). The noise of an n-bit converter is about 1/2^(n+2) of full
    scale.
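
    Where the 1/2^(n+2) figure comes from, under the standard
    uniform-quantization-error model (an approximation, not a
    measurement):

        import numpy as np

        n_bits = 12
        lsb = 1.0 / 2**n_bits            # step size as a fraction of full scale
        rms_theory = lsb / np.sqrt(12)   # RMS of error uniform on +-lsb/2
        print(rms_theory, 1.0 / 2**(n_bits + 2))   # 7.05e-05 vs 6.1e-05

        # quick Monte-Carlo check of the same thing
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 1_000_000)
        err = np.round(x / lsb) * lsb - x
        print(err.std())                 # matches lsb/sqrt(12)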

    Yours,
    Ilya
     
    Ilya Zakharevich, Oct 2, 2007
    #11
  12. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Floyd L. Davidson
    First, IMO using dB in the video context is very misleading (they have
    an "extra" factor of 2).

    Second, an ideal 12-bit ADC would give noise of about 1/2^14; so,
    counting 6 dB per factor of 2 (instead of 3, as one should...), this is
    about 84 dB. Your other numbers are WAY off.

    Correct numbers show that things are much worse than this: 12-bit
    converters work approximately like ideal 10-bit converters.

    You are forgetting the very low QE of the current (well, 2-year-old)
    generation of sensors (about 0.1). So with non-RGB-Bayer sensors, the
    numbers could be quite significant.

    Decreasing the read noise well below 3e has, IMO, only one use: you
    can bin the pixels in postprocessing; so you can trade resolution vs
    noise very late in the pipeline (e.g., before printing), do it
    adaptively, and apply intelligent algorithms for choosing the
    tradeoffs.

    Increasing QE gives a much more direct advantage. And anyway, I do not
    think that the numbers you cite are attainable without a significant
    decrease in resolution.

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Oct 2, 2007
    #12
  13. Ilya Zakharevich

    John Sheehy Guest

    The wells are capable of higher DR, but the readout circuitry can not
    deliver. You seem to believe that Canon DSLRs have a single on-photosite
    amplification, and that differences in blackframe noise at different ISOs
    for short exposures are due to the ADC noise. I really don't see any
    evidence to support that. I see it in many other brands' DSLRs; I do not
    see it in Canon. The evidence suggests that *at* the photosite, there are
    different read noises in electrons at different ISOs:

    1) If you try to account for blackframe noise at various ISOs by the
    square root of the sum of a fixed photosite read noise squared and a
    scaled ADC noise squared (both in electrons; see the sketch after this
    list), you can find values that satisfy both ISO 100 and 1600, but they
    won't satisfy ISO 400 at all.

    2) On the Canon 5D and 1D* cameras, ISOs 100, 125, and 160 have almost
    exactly the same blackframe noise in electrons. ISOs 200, 250, and 320
    have the same blackframe noise in electrons, but less than the former
    group, etc. There are no gaps or spikes in the histograms of the extra
    ISOs, compared to the main ones, so these ISOs are clearly done by a
    second level of amplification, which suggests that each group of three
    ISOs shares one amplification level at the photosites; if all the ISOs
    were achieved by second-stage amplification, there would be a smooth
    relationship between blackframe noise and ISO.
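
    A sketch of the two-stage model being tested in point 1 above (the
    function and all the numbers are hypothetical, just to show the form
    of the fit, not Canon's actual figures):

        import numpy as np

        def total_read_noise_e(iso, pre_e, adc_dn, gain_e_per_dn_at_iso100):
            """Fixed photosite read noise plus ADC noise referred back
            through an ISO-dependent gain (hypothetical model)."""
            gain = gain_e_per_dn_at_iso100 * 100.0 / iso  # e- per DN at this ISO
            return np.sqrt(pre_e**2 + (adc_dn * gain)**2)

        # try one (made-up) parameter pair against three ISOs
        for iso in (100, 400, 1600):
            print(iso, total_read_noise_e(iso, pre_e=3.0, adc_dn=1.2,
                                          gain_e_per_dn_at_iso100=12.0))

    The point of the test is that no single (pre_e, adc_dn) pair
    reproduces the measured noise at all three ISOs.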


    IMO, the real problem is that the readout circuitry just isn't up to par
    for reading large photon counts, and that is one reason why I think smaller
    pixels are generally better for image-level blackframe noise.

    --
     
    John Sheehy, Oct 4, 2007
    #13
  14. I think both of you guys are missing the point of the Sony technology
    mentioned by the OP. That has circa 4000 ADCs - one or two per column.
    That completely bypasses the issue of achieving >80 dB at >50 MHz in
    <500 mW.

    To achieve the same system-level performance, those ADCs need to run
    only at a few thousandths of that speed (not exactly, but that would
    be the equivalent). This is a much easier task, since some very
    low-power ADC technologies (e.g. delta-sigma, dual slope etc.) are
    incapable of operating at high MHz speeds but routinely operate at
    tens of kHz.
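
    A back-of-the-envelope check of the per-ADC rate (the pixel count and
    frame rate here are assumptions for illustration, not a published
    specification):

        pixels = 12_000_000   # assumed sensor resolution
        fps = 5               # assumed continuous shooting rate
        adcs = 4000           # column-parallel ADCs, as mentioned above

        rate_per_adc = pixels * fps / adcs
        print(rate_per_adc, "conversions/second per ADC")   # 15000, ~15 kHz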

    This step has been taken in other types of image sensor (e.g. thermal
    imaging FPAs) for exactly the same reasons - cost, power, cost,
    bandwidth, cost, performance and cost - but it is the first time I have
    seen it used in commercial cameras. Basically it trades sensor yield
    and cost for system performance, complexity and cost.

    The next logical step, which I have only seen on R&D prototypes so far,
    is to have an ADC per pixel or per small group of pixels (e.g. 2x2,
    3x3, 4x4). That way the bandwidth of the ADC (and with it the read
    noise bandwidth) can be reduced to a few hertz or tens of hertz.
     
    Kennedy McEwen, Oct 5, 2007
    #14
  15. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Floyd L. Davidson
    I strongly object to using dB in this context. "Electrical dB" has an
    extra factor of 2; so your 73.07 dB should actually be 36.54 dB in the
    context of optical signals. This misuse (coming from handling audio
    signals, with a different semantics of "intensity") is too confusing.

    An ideal ADC (e.g., a 24-bit ADC rounded to the closest 10-bit number
    ;-) would have a MUCH larger dynamic range. E.g., the step of a 10-bit
    converter would, in this context (about a 16 Ke full well?), be about
    16 units. So the error is in the range -8..+8 units, uniformly
    distributed. The root mean square is 8/sqrt(3), about 4.6.

    This is noticeable with 4 units of read noise, but anything even
    slightly less than this will not be noticeable.

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #15
  16. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Floyd L. Davidson
    ??? 10 dB should denote a 10x change in intensity. The intensity of a
    video signal is the voltage (with an audio signal, the intensity is the
    square of the voltage). Thus a 100x dynamic range of a video signal
    SHOULD (according to the definition of the bel) be denoted as a 20 dB
    change.

    [Of course, things get very confusing when the same ADC is used for
    processing both video and audio signals... ;-[ ;-]
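
    The factor-of-two disagreement in one place (these are just the two
    textbook conventions, nothing camera-specific):

        from math import log10

        ratio = 100.0   # a 100x range in the measured quantity
        print(10 * log10(ratio))   # 20 dB if the quantity is a power/intensity
        print(20 * log10(ratio))   # 40 dB under the electrical convention,
                                   # which treats the quantity as a voltage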

    Yours,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #16
  17. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    You know quite well that the QE of the "sensor in the camera" is much
    closer to 10%. If you could separate the different components of the
    sensor (Bayer filter, AAF, IR filter), then the STANDALONE
    "photoelectric part" of the sensor would get a QE much higher than 10%
    (I suspect even well above 30%...).

    But since there is no possibility of such a separation, quoting these
    "unreal theoretical" numbers makes little sense.

    If this were applicable, it would appear on Figure 13. It does not,
    so what is observed is not a fixed-pattern noise.

    I wonder how much the noise depends on parameters other than the
    temperature. If it (essentially) depended only on temperature,
    putting a handful of temperature sensors along the sensor (and storing
    their readings in EXIF) would allow removal without an extra exposure
    (at least once the camera has a large enough database of dark current
    per pixel at different temperatures).

    Sure. AFAIU, kT noise depends too weakly on T to be measured this way.

    Yours,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #17
  18. [A complimentary Cc of this posting was sent to
    ejmartin
    In the context, what Roger said was indeed silly.

    However, at least in "a theoretical discussion": imagine a camera with
    each ADC duplicated (remember that Sony just multiplied them about
    1000-fold in their latest sensor; so it is not a silly assumption ;-).
    Run them in parallel, one at the highest amplification, the other (as
    done currently) at the variable amplification.

    Store both readings, and merge them in postprocessing. At the price of
    a few tenths of a mm^2 of die, and no ESSENTIAL redesign, you get the
    dynamic range as above.
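
    A rough sketch of that merge step (all names, gains and thresholds
    are hypothetical; this is just the post-processing idea, not any
    manufacturer's pipeline):

        import numpy as np

        def merge_dual_gain(dn_high, dn_low, gain_high_e_per_dn,
                            gain_low_e_per_dn, clip_dn=4095):
            """Combine a high-gain and a low-gain readout of the same sensel:
            use the low-noise high-gain channel wherever it is not clipped,
            and fall back to the low-gain channel in the highlights.
            Results are in electrons (hypothetical units)."""
            high_e = dn_high * gain_high_e_per_dn
            low_e = dn_low * gain_low_e_per_dn
            return np.where(dn_high < clip_dn, high_e, low_e)

        # toy usage with made-up gains
        dn_high = np.array([100, 4095, 2048])
        dn_low = np.array([6, 260, 128])
        print(merge_dual_gain(dn_high, dn_low, 0.25, 4.0))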

    Hope this helps,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #18
  19. [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    I do not think speed and power are THAT important. Current-generation
    sensors (or should it be "the sensor"? I do not know how many of
    these 10 MPx 125 f/s cameras are produced ;-) read about 1.25 GPx/sec
    at 14 bit... This is way above what consumer photo cameras produce...

    Some other restrictions of the current technology may apply
    (like a shortage of engineers with the knowledge ;-).

    BTW, did you get any experimental noise data for the A700 (and D3?)
    sensors, which started the discussion?

    Thanks,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #19
  20. [A complimentary Cc of this posting was NOT [per weedlist] sent to
    Kennedy McEwen
    I wonder how they compensate for the fixed-pattern gain differences
    between the different ADCs... On the other hand, most sensors were
    interleaved anyway, so they had similar issues. The engineers have had
    a lot of time to solve this...

    My reading of the 1/f-noise compensation is that it is easy (by
    interleaved reading of 0 and signal) to trade 2x in speed (for less
    than a 2x increase of noise) to change the 1/f spectrum into an almost
    white one.

    Now, considering things naively, you can decrease the speed 4x more,
    and push the noise spectrum below the original level (and still have
    no 1/f complications).

    Effectively, such an interleaved reading decreases the speed 8x,
    decreases the noise at every (temporal) frequency, AND removes the
    1/f phenomenon. After this, increasing the reading time N times
    decreases the noise sqrt(N) times. (Again, this uses quite naive
    models of noise; however, they fit all the literature I have found so
    far. Also, I did not take into account any power consumption effects.)

    Take the Sony sensor: it has 1000x the normal time to read a sensel;
    after the 8x overhead above, this gives N=125, so I would expect the
    noise to be (better than) 11 times lower than in a previous-generation
    sensor. Taking Canon's sensor, with its 4e read noise, as the
    comparison, this gives 0.36e noise. If this were true: do we really
    NEED more?
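
    A naive numerical version of that interleaved-read argument (the 1/f
    noise is synthesized the same way as in the sketch earlier in the
    thread, and the 2x speed cost of interleaving is modeled simply as
    differencing adjacent samples - only an approximation):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2**20

        # pink (1/f) read noise, unit RMS, synthesized by spectral shaping
        spec = np.fft.rfft(rng.normal(size=n))
        f = np.fft.rfftfreq(n)
        f[0] = f[1]
        x = np.fft.irfft(spec / np.sqrt(f), n)
        x /= x.std()

        # interleaved read: pair a "zero" read with a "signal" read and
        # subtract; differencing suppresses the slow 1/f drift at the
        # cost of 2x the reads
        d = x[1::2] - x[0::2]

        for N in (16, 128):
            m = d[: d.size // N * N].reshape(-1, N).mean(axis=1)
            print(f"average of {N} differenced reads: {m.std():.4f} "
                  f"(single difference: {d.std():.4f})")

    The averages of the differenced samples now shrink at least as fast
    as 1/sqrt(N), which is the scaling assumed above.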

    Still puzzled,
    Ilya
     
    Ilya Zakharevich, Dec 12, 2007
    #20