low light

Discussion in 'Photography' started by ipy2006, Mar 7, 2007.

  1. acl Guest


    If that's the case, wouldn't on-chip binning improve this problem (i.e.
    number 3) at low ISOs? By on-chip I mean reading off 4 (say) pixels at
    a time. I know that in CCDs this is not so hard to do for 4 pixels in
    a line, but I have no idea if it's equally easy for e.g. a 2x2 block; I
    also don't know if this kind of binning introduces other
    sources of noise (in which case on-chip binning wouldn't help beyond a
    point), or how it works for a CMOS sensor. Do you have any pointers to
    more information?
     
    acl, Mar 21, 2007

  2. John Sheehy Guest

    Dalsa claims to be doing this with a 28MP CCD; they actually read a 2x2
    block of pixels with the same CFA filter color, if I remember their
    whitepaper correctly. They claim the same read noise for the 4 binned
    pixels, in electrons, as a single pixel, giving a gain of a stop over 2x2
    binning in software or firmware.
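
    To make the claimed stop concrete: binning 2x2 in software sums four
    separately-read pixels, so the read noises add in quadrature to twice
    the single-read value, while on-chip binning reads the summed charge
    once. A minimal simulation of that difference, with assumed signal and
    read-noise numbers (not Dalsa's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal_e = 5.0      # mean photons per pixel, deep shadow (assumed)
    read_e   = 4.0      # read noise per read, in electrons (assumed)
    n        = 100_000  # trials

    pix = rng.poisson(signal_e, (n, 4)).astype(float)   # a 2x2 block

    # Software binning: each pixel is read separately, four read-noise draws.
    soft = (pix + rng.normal(0, read_e, (n, 4))).sum(axis=1)

    # On-chip binning: the charge is summed first, then read once.
    hard = pix.sum(axis=1) + rng.normal(0, read_e, n)

    print("software bin S/N:", soft.mean() / soft.std())
    print("on-chip  bin S/N:", hard.mean() / hard.std())
    # In the read-noise-limited regime the on-chip bin carries half the read
    # noise (one draw instead of sqrt(4) = 2 in quadrature): about a stop.
    ```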

    --
     
    John Sheehy, Mar 21, 2007

  3. acl Guest

    Great, thanks; I'll look for info on their website.
     
    acl, Mar 21, 2007
  4. John Sheehy Guest

    *You* do. I never have the blackpoint drift up like that when I
    truncate/quantize data; the effect is usually subtle. The overall
    intensity should remain almost the same. You are doing something wrong,
    I think.

    Part of the problem might be that you are using tools that hide what
    they're really doing from you. I see references to "linear conversions"
    in your texts. You should do all the steps yourself, under your control,
    so you know *exactly* what is happening to the data at every step of the
    way. IRIS, DCRAW with the "-D" parameter, and loading the RAW images
    from uncompressed DNGs are the only ways I know of to get the real RAW
    data. (MaximDL as well, I think.)
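
    For anyone who wants to repeat this today, a minimal sketch of one more
    route to the untouched values, using the rawpy library (a LibRaw
    wrapper not mentioned above; the file name is a placeholder):

    ```python
    import rawpy

    with rawpy.imread("IMG_0001.CR2") as raw:
        bayer = raw.raw_image_visible.copy()  # undemosaiced CFA values
        print("dtype:", bayer.dtype)          # typically uint16
        print("black levels:", raw.black_level_per_channel)
        print("white level:", raw.white_level)
    # No scaling, white balance, or demosaicing has been applied to `bayer`,
    # so means and standard deviations reflect the actual RAW data.
    ```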

    Note, I didn't say that an ISO 1600 suffers nothing at all from 8-bit
    quantization; I said that it is still better than ISO 100, pushed to the
    same EI.

    --
     
    John Sheehy, Mar 22, 2007
  5. acl Guest

    But if indeed the signal is clipped to what would have been zero had
    there been no noise, then this would start to be visible only when the
    signal and the standard deviation are roughly equal (since if the
    signal is higher, the noise doesn't reach zero so doesn't get
    clipped).

    This would be invisible on the scale of figure 1. But if I understand
    correctly, you obtained the values for the read noise by measuring the
    output s and the noise n and fitting the noise curve to
    n(s) = sqrt(f^2 + m), where m is the number of electrons and f the fixed
    noise [that is, you determine f from this]? In that case your value for
    f would indeed be the true read noise, because the deviation from
    the model caused by this clipping would be over a tiny range of values
    at the extreme left of your plot and wouldn't affect the fitting
    appreciably.

    Anyway, my D200 does clip the noise at zero (ie the stdev is
    abnormally low for very low signals). Not that this contradicts your
    results or has any practical significance (that I can tell).
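
    A synthetic sketch of the fit described above, showing that clipping at
    zero barely moves the fitted f when the fitted range starts well above
    the read noise (all numbers assumed):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    f_true = 10.0                          # read noise in electrons (assumed)

    def measured_noise(s, n=200_000):
        x = rng.poisson(s, n) + rng.normal(0, f_true, n)
        return np.clip(x, 0, None).std()   # the ADC cannot go below zero

    signals = np.geomspace(30, 20000, 25)  # the range a plot like figure 1 covers
    noise = np.array([measured_noise(s) for s in signals])

    model = lambda s, f: np.sqrt(f**2 + s)
    (f_fit,), _ = curve_fit(model, signals, noise, p0=[5.0])
    print(f"fitted read noise: {f_fit:.1f} e-")   # ~10: fit unaffected

    # The clipping only shows far below the fitted range:
    for s in (2.0, 5.0, 30.0):
        print(f"s = {s:5}: measured {measured_noise(s):5.2f}, "
              f"model {model(s, f_true):5.2f}")
    ```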
     
    acl, Mar 22, 2007
  6. John Sheehy Guest

    Not necessarily. Many cameras have fairly significant noise that is
    neither in the blackframe nor shot noise. There are basically three
    components I've seen: the fixed, blanket noise (blackframe noise), the shot
    noise, and noise that is directly proportional to signal. If the latter
    type is significant, the camera will fail to reach the maximum S/N dictated
    by the photon count. My XTi is certainly like this; the top 1.5 stops or
    so at ISO 100 have the same S/N, about 100:1. Failure to account for it
    leads to underestimating the photon count. When I measure shot noise in
    low-ISO highlights, I divide signal in DN by standard deviation in DN,
    over a completely OOF patch of a solid area with no texture or shadows
    (like the color checker squares), in a single color channel of the RAW
    data (treating green as two different, but similar, channels), and
    square the result of the division. I consider this number to be the
    *minimum* number of photons, not *the* number.

    There may be some other noise correlations as well, which I have not
    noticed yet (albeit low in intensity).
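
    A sketch of that measurement on synthetic, purely shot-noise-limited
    data (the gain and black level are assumptions for illustration):

    ```python
    import numpy as np

    def min_photons(patch_dn: np.ndarray, black_level: float) -> float:
        """(signal/stddev)^2 over a uniform patch of ONE raw color channel."""
        s = patch_dn.astype(float) - black_level  # signal above the black point
        return (s.mean() / s.std()) ** 2          # photon count if shot-limited

    # 40,000 electrons per pixel at an assumed gain of 4 e-/DN, black level 256:
    rng = np.random.default_rng(2)
    patch = rng.poisson(40_000, (64, 64)) / 4.0 + 256
    print(min_photons(patch, 256))                # ~40,000
    # Any extra signal-proportional noise inflates the stddev and lowers the
    # result, which is why it is a *minimum* photon count.
    ```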


    --
     
    John Sheehy, Mar 22, 2007
  7. John Sheehy Guest

    Here's the shadow area of a 1DmkIII ISO 100 RAW, at the original 14 bits,
    and at quantizations to 12, 11, and 10 bits:

    http://www.pbase.com/jps_photo/image/76001165

    The demosaicing is a bit rough; it's my own quick'n'dirty one, but it is
    applied homogeneously at all quantization levels, and I gave the three with
    extra quantization the same bit depth for demosaicing as the 14-bit version
    (they all have the same precision for processing). I gave them all a little
    USM (0.5px/120%), which emphasizes the noise a little. These are all pushed
    from ISO 100 to 3200; the full tonal range of these images is linear, and
    represents the lowest 256 photonic levels (1024 through 1279) of the 15,280
    usable levels of the ISO 100 RAW.
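
    For anyone who wants to reproduce the idea (the demosaic step is
    omitted), a sketch of the requantization and the window/push, using the
    black level of 1024 quoted above:

    ```python
    import numpy as np

    def requantize(raw, drop_bits):
        step = 1 << drop_bits
        return (raw + step // 2) // step * step   # round to nearest, not truncate

    def push_shadows(raw, black=1024, levels=256):
        win = np.clip(raw.astype(int) - black, 0, levels - 1)
        return (win * 255 // (levels - 1)).astype(np.uint8)  # linear push

    # raw14 stands in for the real undemosaiced data (e.g. from DCRAW -D):
    raw14 = np.random.default_rng(3).integers(1024, 1280, (100, 100), dtype=np.uint16)
    for bits in (14, 12, 11, 10):
        img = push_shadows(requantize(raw14, 14 - bits))
        print(bits, "bits -> distinct output levels:", np.unique(img).size)
    ```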

    --
     
    John Sheehy, Mar 22, 2007
  8. acl Guest

    What I mean is this. As you say on your webpage
    http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
    the read noise at ISO 100 corresponds to about 1 DN, or 10 electrons. So
    unless the signal itself is of the order of 10 electrons, almost no
    clipping will occur. In other words, we're talking about a deviation
    from your noise model only when you are at 1 DN or thereabouts,
    which basically means no deviation. This would be completely invisible
    on the graph and missed by any fitting procedure I know of (and
    rightly so).

    Another way to put it: this thing would occur when s\approx n, with s
    the number of electrons and n the "noise electrons". This could not
    possibly affect the fitting unless you only include a very small range
    of data, nor would it be visible unless you specifically looked for it
    (or noticed it by chance).
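
    To put numbers on "basically means no deviation", here is the clipped
    fraction under a Gaussian model with read noise f = 10 electrons
    (illustrative numbers):

    ```python
    from math import erf, sqrt

    def clipped_fraction(signal_e, read_e=10.0):
        sigma = sqrt(signal_e + read_e**2)   # shot + read noise, electrons
        z = signal_e / sigma
        return 0.5 * (1 - erf(z / sqrt(2)))  # P(x < 0) for a Gaussian

    for s in (0, 10, 30, 100):
        print(f"signal {s:>3} e-: clipped fraction {clipped_fraction(s):.2e}")
    # ~50% at s = 0, ~17% at s = 10, ~0.4% at s = 30, ~1e-12 at s = 100:
    # the effect is confined to signals of the order of the read noise.
    ```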

    Now it may be that what I saw in my blackframes is because of the way
    dcraw outputs "raw" data; maybe it subtracts an offset. I don't know,
    and this effect, whatever is causing it, is so inconsequential that I
    did not try to find out.

    But all this has made me doubt myself, so I'll take some blackframes
    and check again. I'll try to find and use a program not based on dcraw
    to read the raw files (if such a thing exists).
     
    acl, Mar 22, 2007
  9. You are limiting the signal-to-noise ratio you derive because of
    variations in the target you are imaging. E.g. the Macbeth color
    checker is made of paper, which has a fine texture. Illuminate
    the chart at a low angle and this will become more obvious.
    Those small variations in the target translate to small
    variations in intensity from pixel to pixel and result
    in your lower S/N. I initially tried to do this too in order
    to speed up testing, but hit this problem. I've encountered
    this problem at work in testing sensors too (it is more difficult
    when you are trying to evaluate sensors in flight on aircraft
    and spacecraft). I have found the only reliable way is
    the method used by the sensor manufacturers, which uses
    pairs of full-field illumination. That method also avoids
    scattered light, which can influence the lowest signal
    measurements and so impact correct dynamic range evaluations.
    Details are available on my website and references therein:

    Procedures for Evaluating Digital Camera
    Sensor Noise, Dynamic Range, and Full Well Capacities;
    Canon 1D Mark II Analysis
    http://www.clarkvision.com/imagedetail/evaluation-1d2

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 22, 2007
  10. I use the following noise model:

    N = (P + r^2 + t^2)^(0.5),

    where N = total noise in electrons, P = number of photons,
    r = read noise in electrons, and
    t = thermal noise in electrons (effectively zero for short exposures).
    Noise from a stream of photons, the light we all see and image
    with our cameras, is the square root of the number of photons,
    so that is why P in the equation above is not squared ((sqrt(P))^2 = P).

    I track the signal and noise as a function of intensity, and watch for
    deviations from the model. Deviations indicate other noise sources
    are present, or other issues in the testing or in the camera and its
    processing. At low signals, if the read noise were clipped significantly,
    it would become obvious in the data as a poor fit, showing
    a change in read noise as a function of intensity.
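
    As a worked example of the model at a few signal levels (the read noise
    value is illustrative, not a measurement):

    ```python
    import numpy as np

    def total_noise(P, r=4.0, t=0.0):
        """P: photons; r: read noise (e-); t: thermal noise (e-)."""
        return np.sqrt(P + r**2 + t**2)

    P = np.geomspace(1, 50_000, 6)
    for p, n in zip(P, total_noise(P)):
        print(f"P = {p:8.0f} e-  ->  N = {n:7.1f} e-,  S/N = {p/n:7.1f}")
    # At high P the curve approaches sqrt(P) (shot-noise limited); at low P
    # it flattens at r, so clipped read noise would show up as a droop below r.
    ```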

    Details are given here:
    Procedures for Evaluating Digital Camera
    Sensor Noise, Dynamic Range, and Full Well Capacities;
    Canon 1D Mark II Analysis
    http://www.clarkvision.com/imagedetail/evaluation-1d2

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 22, 2007
  11. acl Guest

    I've written papers on stochastic processes, and I know perfectly well
    what a standard deviation is; the point is that if this thing occurs,
    it is confined to extremely low signals. Maybe I should have replaced
    "when s=n" by "when the signal is of the order of the noise", to
    prevent this. Anyway, not much point in talking about this, as I think
    it's gotten to the point where everybody is talking past each other
    and we're just creating noise ourselves [which by now exceeds the
    signal, methinks :) ]. I'll take some blackframes tomorrow and check
    again.
    The D200 (and more expensive models) have an option to save
    uncompressed raw data. And yes, the resolution loss is indeed below
    the shot noise (using your measured values for the well depth).
    Although I guess it's now my turn to point out that this noise
    obviously isn't always sqrt(n) so shot noise can exceed the resolution
    limit (eg for a uniform subject it could be that you get zero photons
    in one pixel and 80000 in the other; not terribly likely, though), but
    never mind.

    But keep in mind that Nikons do process their "raw" data. I once wrote
    a short program to count the number of pixels above a given threshold
    in the data dumped by dcraw. I ran it on some blackframes. For a given
    threshold, the number of these pixels increases as the exposure time
    increases, up to an exposure time of 1s. At and above 1s, the number
    drops immediately to zero for thresholds of x and above (I don't
    remember what x was for ISO 800), except for a hot pixel which stays
    there. So obviously some filtering is done starting at 1s (maybe
    they're mapped, I don't know).

    It also looks to me (by eye) like more filtering is done at long
    exposure times, but I have not done any systematic testing. Maybe
    looking for correlations in the noise (in blackframes, for instance)
    will show something, but if I am going to get off my butt and do so
    much work I might as well do something publishable, so it won't be
    this :)

    Well, plus I am rubbish at programming and extremely lazy.
     
    acl, Mar 22, 2007
  12. Well, let's look at this another way. Go to:
    http://www.clarkvision.com/imagedetail/dynamicrange2

    4 bits is DN = 16 in the 0 to 4095 range. In a 16-bit
    data file, that would be 16*16 = 256.

    Now go to Figure 7 and draw a vertical line at 256 on the
    horizontal axis. Now note all the data below that line that
    you cut off. Now go to Figure 8b and draw a vertical line
    at 4 stops, and note all the data you cut off. Now go to
    Figure 9D and draw the vertical line at 256 and
    note all the data you cut off. (Note too how noisy the
    8-bit jpeg data are.)
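
    In stops, the cut works out as follows (assuming a 12-bit ADC with read
    noise near 1 DN):

    ```python
    from math import log2

    full_scale = 4096   # 12-bit ADC
    cut_dn = 16         # everything below 4 bits is discarded

    print("16-bit equivalent of DN 16:", cut_dn * 16)            # 256
    print("stops retained:", log2(full_scale / cut_dn))          # 8.0
    print("stops discarded of a ~12-stop range:", log2(cut_dn))  # 4.0
    ```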

    Pretty obvious.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 22, 2007
  13. Me too!
    This makes NO sense. As pixel size and active area drop,
    the unity gain ISO drops. You don't need ISOs above unity gain ISO,
    so it is the high ISOs that are not needed. The low ISOs give
    you the full well range of the sensor. Drop those low ISOs and
    you just lose dynamic range, which you've already reduced by
    using a smaller pixel.
    The problems with this scenario are multiple:
    1) reduced dynamic range;
    2) you want many more pixels, so the readout is slower and you lose
    frames per second, so you lose with fast action photography;
    3) you lose high ISO performance.
    Image quality is more than just megapixels. Signal-to-noise ratio
    is very important, and that is what you are sacrificing with
    smaller pixels. However, the one thing you have not considered
    that does change the equation is QE. If QE could be increased
    while maintaining full well with smaller pixels, then
    we would have a winner. See below.
    No, it would not be very good. See below.
    The factors in image quality include resolution and signal-to-noise ratio.
    Getting that wonderful quality with current QE and full wells gives
    a sweet spot of about 6 to 8 microns. And that sweet spot also corresponds
    to the sweet spot in 35mm camera lenses. WE ARE AT THE SWEET SPOT TODAY!

    If you changed DSLR sensors to 4 microns, then to give good image quality
    you would need to maintain full wells, increase QE by 3x (basically
    to the maximum: >90% QE), and improve all the lenses by about 2x in MTF
    response. While all of this might happen, and I hope it does,
    there are no indications of sensors that meet those criteria, and
    lens designs for that improved MTF would not be cheap.
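
    The 3x figure is essentially the pixel-area ratio; a quick check with
    assumed pitches and an assumed current QE:

    ```python
    pitch_now, pitch_small = 7.0, 4.0        # microns (assumed)
    area_ratio = (pitch_now / pitch_small) ** 2
    print(f"required QE increase: {area_ratio:.1f}x")   # ~3.1x

    qe_now = 0.30                            # assumed current QE
    print(f"implied QE: {qe_now * area_ratio:.0%}")     # ~92%, near the physical limit
    ```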

    It's nice to dream of the future, but don't forget we have wonderful
    performance right now. I imagine a 30D-class full frame sensor,
    about 22 megapixels, 5 frames per second.
    That should come out soon. ;-)

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 22, 2007
  14. Remember, a standard deviation of 1 means peak-to-peak variations of about
    4 DN. It is not that you simply get 1 and only 1 all the time.

    There is another issue with the Nikon raw data: it is not true raw, but
    decimated values. I think they did a good job in designing the
    decimation, as they kept it below the photon noise.
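
    The principle behind keeping the decimation below the photon noise can
    be sketched as follows (the general idea, not Nikon's actual table; the
    gain is assumed):

    ```python
    import numpy as np

    gain = 4.0                                 # electrons per DN (assumed)
    dn = np.array([10, 100, 400, 1000, 3000])  # raw levels
    S = dn * gain                              # signal in electrons
    shot = np.sqrt(S)                          # photon noise, electrons

    # Quantization error is step/sqrt(12); keep it under the shot noise:
    max_step_dn = shot * np.sqrt(12) / gain    # largest allowed step, in DN
    print(np.c_[dn, shot.round(1), max_step_dn.round(1)])
    # The allowed step grows as sqrt(signal), which is why a companded
    # (square-root-like) encoding can use far fewer distinct values.
    ```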

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 22, 2007
  15. John Sheehy Guest

    The Leica M8 does something similar, but a little different. It writes out
    8-bit gamma-adjusted RAWs as uncompressed DNG files. The RAW image is
    sitting neatly in the DNGs; any program that opens ".raw" files (the kind
    from before the era of digital cameras) can read them.

    --
     
    John Sheehy, Mar 22, 2007
  16. John Sheehy Guest

    No, that is not the problem. I am quite aware of texture; that is why I
    severely defocus the chart, and use diffuse light. I also window the
    visible luminance range to exaggerate contrast in the squares, so I can
    clearly see any dust or texture that might be present. I look for areas
    that vary only at high frequency due to noise, and create a rectangular
    selection (and try others) of sufficient size to get a good sample, but
    small enough that it is less likely to include a problem area.

    The results vary from camera to camera as well; my 20D and my FZ50 have no
    such limit to S/N, but the XTi does.

    --
     
    John Sheehy, Mar 23, 2007
  17. Lionel Guest

    Yes, after first seeing your site, I was interested in performing
    similar tests to yours, so I sat down & did some maths. I soon
    realised that it wasn't possible to get accurate data from jury-rigged
    setups using charts & the like. The obvious approach was to illuminate
    the sensor directly with a calibrated light source (which is
    something I can design myself), but I'd need lab grade optical
    equipment to get a flat, accurate illumination field on the sensor, &
    access to people with a lot more optical expertise than myself, & I no
    longer have access to an optics lab.
    It shows. (And there's nothing quite like trying to duplicate someone
    else's work to make you realise how hard it was to create in the first
    place. ;^)
    I'm sure I've said this before, but thank you for all the hard work
    you did to create a really useful photography resource.

    PS: I've stopped responding to John's posts on this topic, because the
    weird misconceptions he's expressing about data acquisition technology
    are getting so irritating that I feel more like flaming him than
    educating him.
     
    Lionel, Mar 23, 2007
  18. Perhaps you need to look at this issue a little closer. There are
    very difficult problems in getting uniformity better than ~1%.
    Here are some of the issues (a rough calculation of the falloff
    terms is sketched after this list):

    1) Even with diffuse light, it is very difficult to produce a uniform
    field of illumination better than a percent. Try computing
    light source distance and angles to different spots;
    1/r^2 has a big impact. Scrambling the light may help, but
    it also scrambles knowledge. For example, if one part of the
    diffuser has a fingerprint or slightly different reflectance
    for some reason, it produces a different field,
    and at the <1% level that becomes important. I have several diffuse
    illuminators and run tests for uniformity, and none pass the
    1% test in my lab.

    2) At the <1% level few targets are truly uniform. I have tested multiple
    surfaces in my lab for just this issue and most fail. There are
    large (many mm) variations in Macbeth targets at the ~1% level.
    Here, for example, is the Macbeth color chart:
    http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700.jpg
    Now here is the same chart with the top and bottom rows stretched
    panel by panel to show the variations:
    http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700-str1.jpg
    There are variations on the few-mm scale, small spots (those are
    not sensor dust spots; they are too sharply in focus), and there are
    gradients from one side of a patch to the other. The variations are
    typically a couple of percent (which in my opinion is actually
    very, very good for such a low-cost, mass-produced test target).

    3) The light field through the lens as projected onto the focal
    plane, even given a perfectly uniformly lit test target, is not uniform.
    You have a) the cosine of the off-axis angle changing the apparent size
    of the aperture, b) 1/r^2 changes from center to edge of the frame,
    c) variations in optical coatings and optical surfaces translating
    to small variations in the throughput of the system, and d) center
    optic rays pass through more glass than edge optic rays, and the
    percentage of light passing through different angles to the optical
    axis passes through different amounts of glass, thus has different
    absorption.
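
    (The sketch referenced above: a rough calculation of two of these
    falloff terms. Illumination of a flat target from a single point source
    falls off as about cos^3 of the off-axis angle, and natural lens
    vignetting at the focal plane goes roughly as cos^4.)

    ```python
    from math import cos, radians

    def falloff(theta_deg, power):
        return 1.0 - cos(radians(theta_deg)) ** power

    for theta in (2, 5, 8, 15):
        print(f"{theta:>2} deg off-axis: source falloff {falloff(theta, 3):.1%}, "
              f"lens falloff {falloff(theta, 4):.1%}")
    # Even 5 degrees off-axis already breaks a 1% uniformity budget, which
    # is why sensor manufacturers use flat-field pairs rather than charts.
    ```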

    All of these effects may be small in photographic terms (although light
    fall-off is commonly seen), but at the percent level, even few percent
    level, they become important. Some cameras collect enough photons
    that the noise from photons gives S/N > 200. With your methods
    you are likely limiting what you can achieve.

    Try replacing the Macbeth chart with several sheets of white paper.
    Take a picture and stretch it. Can you see any variation in level?
    If you can't see any variation, please tell us how you compensated
    for all the above effects, which would require a careful balance
    of increasing illumination off axis to counter the light fall-off
    of your lens, let alone the other non-symmetric effects.

    If you are testing sensors and want answers better than 10%, your
    method requires field illumination to be better than 10 times
    the photon noise, or 0.0005%. There is a reason why sensor
    manufacturers have adopted the methods in use today.
    Your method, even defocusing the target (which introduces other
    issues), probably can't even meet a 2% uniformity requirement.

    (Actually I tried this too, thinking I could speed up the
    testing. It became obvious in my first tests it didn't work.)

    (I've designed illumination sources for laboratory spectrometers
    for 25+ years, where requirements are quite tight.)

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 23, 2007
  19. John Sheehy Guest

    What misconceptions?

    Almost every reply you or Roger has made to me has ignored what I have
    actually written, and assumed something else entirely.

    Look at the post you just replied to; I made it quite clear in my post
    that Roger responded to, that the effect only happens with *ONE CAMERA*,
    yet Roger replied as if my technique were at fault, in some elementary
    way. He didn't pay attention, and *YOU* didn't pay attention, made
    obvious by your ganging up with him and failing to point out to him that
    it only happened with one camera.

    Did you even notice that fact? (That post wasn't the first time I said
    it was only one camera, either).

    Did you point out to Roger that when he wrote that ADCs have an error of
    +/- 1 DN, that because he gave no range of errors amongst ADCs, and
    equated 1 ADU with 1 DN, it would seem that he was writing about the
    rounding or truncation aspect of the quantization itself, but mistakenly
    doubled? Surely if he were talking about ADC noise not due directly to
    the mathematical part of quantization, he would have given the range of
    error for the best and worst, or typical, ADC, none of which would likely
    be exactly +/- 1.

    It was not my fault that I thought he was talking about the mathematical
    aspect; he, as usual, is sloppy with his language, and doesn't care that
    it leads to false conclusions. He is more interested in maintaining his
    existing statements than seeking and propagating truth.

    If anyone is weird here, it is you and Roger. You agree with and support
    each other when an "adversary" appears, no matter how lame your
    statements or criticisms.

    Where was Roger when you implied that microlenses can affect dynamic
    range, without qualifying that you meant mixing sensor well depths *and*
    microlenses? Or perhaps you didn't even have that in mind the first
    time; you came up with that exceptional, non-traditional situation to
    make yourself right, without giving me a chance to comment on such an
    unusual arrangement, to which I would have immediately said that
    different well depths and/or sensitivities would affect overall system
    DR. Your use of different well depths in the example brings things to
    another level of dishonesty on your part. That was nothing short of
    pathetic.

    --
     
    John Sheehy, Mar 24, 2007
  20. John Sheehy Guest

    You can't just divide by 16 to drop 4 LSBs; 0 through 15 become 0. You
    have to add 8 first, and then divide by 16 (integer division), then
    multiply by 16, and subtract the 8, to get something similar to what you
    would get if the ADC were actually doing the quantization. The ADC is
    working with analog noise that dithers the results; you lose that
    benefit when you quantize data that is already quantized. You won't
    notice the offset when the full range of DNs is high, but for one where a
    small range of DN is used for the full scene DR, it is essential. I am
    amazed that you didn't stop and say to yourself, "I must have done
    something wrong" when you saw your quantized image go dark. That's what
    I said to myself the first time I did it. I looked at the histograms,
    and saw the shift, and realized that an offset is needed unless the
    offset is a very small number relative to the full range of the scene.
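
    A minimal demonstration of the offset issue on synthetic shadow data
    (the rounded line is the add-8 recipe described above):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    shadow = rng.integers(0, 64, 100_000)     # deep-shadow DNs (assumed range)

    truncated = shadow // 16 * 16             # 0..15 -> 0: the image goes dark
    rounded = (shadow + 8) // 16 * 16         # round to nearest multiple of 16

    print("original mean: ", shadow.mean())     # ~31.5
    print("truncated mean:", truncated.mean())  # ~24: down half a step
    print("rounded mean:  ", rounded.mean())    # ~32: distribution stays put
    # The real ADC gets this for free: analog noise dithers the signal
    # across the quantization thresholds before any digital value exists.
    ```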

    In the case of the mkIII image at 14, 12, 11, and 10 bits in another
    post, I used PS' Levels, because it simplifies the process, by doing the
    necessary offset to keep the distribution of tones constant.


    --
     
    John Sheehy, Mar 24, 2007
