The Implications of the "Number of Bits" of DSLRs

Discussion in 'Digital SLR' started by RiceHigh, Dec 12, 2006.

  1. RiceHigh

    RiceHigh Guest


  2. You have a lot of misconceptions in your article, some
    propagated from Reichmann.

    The number of bits actually needed depends on the sensor,
    and the smaller sensors coming out today do not necessarily
    need more than 12 bits; some need even fewer.

    The digital transfer curve reduces the need for as many
    bits (it is a variable gamma encoding scheme).
    Regardless of some perceptions, more bits are NOT needed
    at the high end in the top cameras, as photon noise
    becomes greater than 1 digital number above a few dozen
    photons (which covers most of the data in most images).

    You have many incorrect statements, such as:

    "So, how important is this "number of bits" figure
    on the image quality and the tonal response
    (smoothness in transition, etc.)? It is very
    trivial that the more levels it records, the
    smoother the tonal quality and response it would be."
    -- No, photon noise limiting.

    "However, it is the main weakness of *any* digital
    camera that the number of tonal levels at the shadow
    areas are much less than those counted at the
    brighter parts, i.e. the highlights, owing to
    the primitive nature of CCD/CMOS imager"
    -- incorrect. The sensors are linear, and the
    tone curve compresses tonality in the high end, not the
    low end. See Figure 7 at:
    Dynamic Range and Transfer Functions of Digital Images
    and Comparison to Film
    http://www.clarkvision.com/imagedetail/dynamicrange2

    "As such, assuming *perfectly* ideal exposure which
    means that an evenly spread histogram is obtained,..."
    -- Incorrect. There are many correctly
    exposed scenes where histograms are not "perfectly spread."

    "Well, at this point, what I must emphasize (again,
    as always) is that an accurate metering and exposure
    system of the DSLR is of prime importance than anything
    else, given that the number of bits of the RAW file
    is the identical."
    -- doesn't make sense.

    "So, in the end, you need a high bit *output*
    device to output your high bit pictures, if any."
    -- Incorrect. You need good processing to
    compress the range for the output device.

    "Last but not least, if you still have some
    unresolved puzzles about the basic concept(s)
    in your mind after reading all these here..."
    -- This describes your article.

    The number of bits needed depends on the sensor performance.
    Much of what we are seeing today is marketing hype.

    Some references related to your article:

    Digital Camera Sensor Performance Summary
    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

    Digital Camera Raw Converter Shadow Detail and
    Image Editor Limitations:
    Factors in Getting Shadow Detail in Images
    http://www.clarkvision.com/imagedetail/raw.converter.shadow.detail

    Digital Camera Raw versus Jpeg Conversion Losses
    http://www.clarkvision.com/imagedetail/raw.versus.jpeg1

    Digital Imaging Information
    http://www.clarkvision.com/imagedetail

    Roger
     
    Roger N. Clark (change username to rnclark), Dec 12, 2006
    #2

  3. RiceHigh

    RiceHigh Guest

    Roger N. Clark (change username to rnclark) wrote:
    First, thanks for your interest and reply.

    I've gone through a few of the articles you pointed to on your
    website. I still do not think that Norman Koren, Michael Reichmann
    or I are wrong. You are simply viewing the case from a different
    perspective than we are: the figures in your charts are the
    intensity out of my so-called "black box" (i.e., the whole camera,
    with the data numbers taken from the final image file), whereas Koren,
    Reichmann and I are all talking about lower-level stuff, namely the
    quantized digital levels directly out of the linear ADC which
    receives the raw CCD analog signals (intermediate data
    *within* the black box, which no end-user can extract).
    Yes, where "photon noise" (actually I would say it is simply signal
    noise) is concerned, that is true.
    I did not include the photosite size and area in the discussion, as
    this would involve another physical factor/parameter which varies
    with the size, technology and chip used. As a result, the case
    would become overcomplicated.

    Of course, you're right that if the noise level is larger than one
    quantization step, more bits are meaningless. That's trivial.
    But how can you know the "photon noise" has already reached its limit?
    Say, for a 12-bit ADC with an APS-C imager: do you refer to a 6MP, 8MP
    or a 10MP one, and which chip model do you refer to? Any datasheet for
    reference?
    As I said above, your figures are a different thing from the
    tables which Koren, Reichmann and I have presented.

    You have plotted intensity on a log scale against the output image
    file levels, which is NOT the same thing as what we three are talking
    about. The 0 to 16 bits of the TIFF file in your plotted chart (i.e.,
    the Y-axis) is just something like the 0 to 255 we are talking
    about in the histogram. Of course, they should be roughly linear
    (just because they are actually the SAME thing at a different scale,
    which can be inter-mapped linearly); otherwise, we could not see an
    image if this linear relationship did not hold!

    I fully second Reichmann and Koren's suggested theory and concept, just
    because the light energy is directly proportional to the charge
    a pixel can store ("photon energy" in your terms); the charge is then
    the source of the current/voltage from the CCD, which is linear in that
    "photon energy". But, as you have also plotted on a logarithmic
    scale, human eyes see steps in Zones, i.e. on a log scale. As such, the
    distribution of the LOWEST-LEVEL tonal levels will be condensed at the
    brightest side, i.e. the right side of the histogram (the larger
    values on the log scale), after remapping from linear light power onto
    a log scale.
    At the time of exposure, anything that is not recorded in the higher
    bits of the CCD signal levels and not converted by the ADC will have
    those bits wasted and lost!
    Do you mean it would be better to have an inaccurate light meter and
    exposure control system?? (Would that make even less sense?)
    I don't want to argue with you over the interpretation of the words in my
    article. The logic is very simple. If you do not have a higher-bit
    output device, then good compression into the "range" is crucial. But
    if we have a higher-bit output device to match, why is compression
    needed?

    E.g., if you have a high-resolution picture on a high-resolution
    monitor, things will look very good the first time. If you
    only have a lower-resolution monitor, then a good downsizing and
    re-sampling algorithm will be required.

    After all, I don't see why your point that "a good compression algorithm
    is crucial" makes my statement of "higher-bit output to match
    high-bit input/data" incorrect!
    I wish I could have written a better article with more accurate
    information. I would be grateful if you could respond to my questions
    above for further clarification and discussion.
    Partially agreed; some of it is marketing hype, as there are other
    limiting factors and technology barriers. But at least theoretically,
    more bits will be useful if the other factors are not
    dominant.
    Well, it seems that we are talking about things at different
    levels of the "black box" hierarchy.

    RiceHigh
    http://ricehigh.blogspot.com
     
    RiceHigh, Dec 12, 2006
    #3
  5. That is not correct. Most reviewers, including dpreview,
    luminous landscape, etc., use raw-converter-modified data.
    The data on my site ARE the linear data out of the
    ADC as recorded in the camera raw file (which is the only
    low-level data one can get access to). Most raw converters
    modify that data, including converting the Bayer
    data to RGB through interpolation. If you read my methodology,
    you will see that I extract the raw data from the raw
    file with no Bayer interpolation or variable gamma
    curve applied. Review sites like luminous landscape
    or dpreview do not do that. As a result, their reported
    noise is modified by which raw converter they use and the
    settings in that converter.
    It's nice to see you agree with this, because all digital
    camera noise is photon-noise dominated for any signal more than
    a few numbers up from zero on the 12-bit scale.
    Examples:

    High signals:
    Canon S70 P&S at ISO 100, linear 12-bit output:
    max signal (DN 4095) ~ 4220 photons, noise = 65 photons
    noise = 63 12-bit data numbers (DN).
    Convert this to 8 bit and the noise is 4 DN.
    The noise is photon noise limited.

    Canon 1D Mark II at ISO 100, linear 12-bit output:
    max signal (DN 4095) ~ 52,300 photons, noise = 229 photons
    noise = 13.8 12-bit data numbers (DN).
    Convert this to 8-bit and the noise = 0.86 DN.
    The noise is photon noise limited, and 12 bits
    oversamples the signal by a factor of 13.8,
    while 8 bits undersamples it.
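    The arithmetic in these examples can be checked with a short sketch (a
    minimal illustration using only the figures quoted above, and taking
    photon noise as the square root of the photon count):

```python
import math

def photon_noise_in_dn(max_photons, max_dn=4095, out_bits=12):
    """Photon (shot) noise at full signal, expressed in output data numbers."""
    gain = max_photons / max_dn              # photons per 12-bit DN
    noise_photons = math.sqrt(max_photons)   # shot noise = sqrt(signal)
    noise_dn_12bit = noise_photons / gain    # noise in 12-bit DN
    return noise_dn_12bit / 2 ** (12 - out_bits)  # rescale to out_bits

# Canon S70 at ISO 100: ~4220 photons at the maximum DN of 4095
print(round(photon_noise_in_dn(4220)))              # -> 63 DN in 12-bit
print(round(photon_noise_in_dn(4220, out_bits=8)))  # -> 4 DN in 8-bit
```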

    Now let's look at a shadow area 7 stops down:

               signal     photon     read     total noise  signal-   photon-
               at -7 EV   noise      noise    (12-bit      to-noise  noise
    Camera     (photons)  (photons)  (e-)     DN)          ratio     limited?
    -------------------------------------------------------------------------
    S70            33        5.7       3.4       6.5         4.9      yes
    1D II         409       20.2      16.6       1.6        15.7      yes

    The 12-bit linear DN on both cameras would be 32 out of
    4095, so quite low.
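    The signal-to-noise column can be reproduced with a short sketch (a
    minimal illustration; following the post, read noise in electrons is
    combined with photon noise in quadrature):

```python
import math

def shadow_snr(max_photons, read_noise_e, stops_down=7):
    """Signal-to-noise ratio in a shadow area N stops below full signal."""
    signal = max_photons / 2 ** stops_down   # photons in the shadow
    photon_noise = math.sqrt(signal)         # shot noise
    total_noise = math.sqrt(photon_noise ** 2 + read_noise_e ** 2)
    return signal / total_noise

print(round(shadow_snr(4220, 3.4), 1))    # S70: 4.9
print(round(shadow_snr(52300, 16.6), 1))  # 1D Mark II: ~15.6, near the 15.7 above
```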

    Several things should become obvious from the above examples:
    1) The noise from both cameras is photon-noise limited over
    the most important range of imaging data from the camera.
    2) The large-pixel camera is close to 12-bit limited
    at -7 stops and becomes 12-bit limited below -8 stops,
    but smaller-pixel cameras never become 12-bit limited.
    3) At the high end, 12 bits adequately cover the data,
    as the photon noise is MUCH LARGER than the A-D levels.

    Large-pixel cameras like the 1D Mark II could benefit
    from a 14-bit converter at the low end, if the low-level
    signal chain can be made low enough in noise.
    This is the EXACT opposite of the reason you are advocating.
    But you see, if you are advocating more bits, you
    must demonstrate the need for those bits. Photon noise
    limits that need. The variable gamma encoding further
    reduces it. That is why 8-bit images look as good as they
    do.
    And that is what is shown in Figure 6 at:
    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

    The unity gain shows the trade point for digitizing
    1 electron = 1 A/D quantization level. Note the minimum noise
    is on the order of 4 times higher than 1 electron on some cameras
    (most Canons) and around 8 electrons on others. Thus the trade
    point for noise is 4 to 8 times less than shown in Figure 6.
    (on P&S cameras that works out to ISOs 12 to 25!).

    I hope the above has demonstrated that. ALL modern digital
    cameras tested, from P&S to top DSLRs are photon noise
    limited over the vast majority of the image data.
    Only in very low light signals, e.g. astrophotos,
    does other (read and thermal) noise dominate.
    That's because they do not properly include photon noise,
    which is the dominant noise source in digital camera images.
    A plot in log space does not mean the data were log values.
    Neither does using 16-bit computer values. If you read the
    text you would have seen that the data were linear 12-bit
    data out of the camera, not processed with any gamma
    encoding, i.e. linear.
    The linear nature of the data from CCDs and CMOS is NOT
    Reichmann and Koren's suggested theory. It is a well-established,
    measured result. E.g., see:

    Procedures for Evaluating Digital Camera
    Sensor Noise, Dynamic Range, and Full Well Capacities;
    Canon 1D Mark II Analysis
    http://www.clarkvision.com/imagedetail/evaluation-1d2

    and for Nikon fans:
    The Nikon D200 Digital Camera:
    Sensor Noise, Dynamic Range, and Full Well Analysis
    http://www.clarkvision.com/imagedetail/evaluation-nikon-d200

    Again, plots in log space are for convenience in viewing
    and understanding the data, NOT because the data were logarithmic.
    And as I illustrated above, the finest DN at the upper level
    is irrelevant due to photon noise. It is actually the bottom
    end that is important for establishing the needed A-D size.
    As I showed above, the 1D Mark II has such low noise and
    such a large photon range that, darker than 8 stops down from the
    maximum signal at ISO 100, the camera noise becomes 12-bit A/D limited.
    No, you misunderstand histograms. You don't always want
    a big spread in the histogram over the whole range.
    For example: a black dog on a black carpet,
    or white dog in a snowstorm. In each case, the histogram
    is strongly to one side and may have no values toward
    the other side.
    "the number of bits of the RAW file is the identical."
    Identical to what? The sentence is poorly written.
    "the identical" does not make sense.
    There is NO output device that covers the full dynamic range
    of a good digital camera. If you want to show the dynamic
    range of the scene recorded, you must compress that range
    for ALL currently available output. And that doesn't even
    include the need to make the image have impact. Ever
    heard of dodge and burn? I suggest you get a copy of
    and read "The Print" by Ansel Adams.
    See above. It does not have the range of a digital camera.
    Irrelevant since higher bit output does not exist.
    Like photon noise! Noise from digital cameras is well
    understood (but apparently not by the typical reviewers):

    The noise model for digital cameras is:

    N = (P + r^2 + t^2)^(1/2), (eqn 1)

    Where N = total noise in electrons, P = number of photons,
    r = read noise in electrons, and t = thermal noise in
    electrons. Noise from a stream of photons, the light
    we all see and image with our cameras, is the square
    root of the number of photons, so that is why the P
    in equation 1 is not squared (sqrt(P*P) = P). The
    signal corresponding to equation 1 would simply be
    the number of photons, P, so the signal-to-noise
    ratio, SNR, in a pixel is:

    SNR = P/N = P/(P + r^2 + t^2)^(1/2). (eqn 2)

    The values of r are low, a few electrons, so signals above
    a few tens of photons are photon-noise limited. Thermal
    noise only comes into play in minutes-long exposures.
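    Equations 1 and 2 can be sketched directly (a minimal illustration;
    the few-electron read noise default is an assumed round number, not a
    measured value for any particular camera):

```python
import math

def total_noise(photons, read_noise=3.0, thermal_noise=0.0):
    """Eqn 1: N = sqrt(P + r^2 + t^2), everything in electrons."""
    return math.sqrt(photons + read_noise ** 2 + thermal_noise ** 2)

def snr(photons, read_noise=3.0, thermal_noise=0.0):
    """Eqn 2: SNR = P / N."""
    return photons / total_noise(photons, read_noise, thermal_noise)

# With a few-electron read noise, a signal of a few tens of photons
# is already close to the pure photon-noise limit sqrt(P):
print(round(snr(100), 2))   # -> 9.58, vs. sqrt(100) = 10 for photon noise alone
```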
    Perhaps. I'm talking about what the sensor delivers in
    electrons, and that is what is important for deciding
    how many bits you need.
    There is no more fundamental level than that, and that
    is the level where the decision on the number of bits
    needed must be determined.

    There is too much mis-information "out there."

    Roger
    Photos, digital info at: http://www.clarkvision.com
     
    Roger N. Clark (change username to rnclark), Dec 13, 2006
    #5
  6. RiceHigh

    RiceHigh Guest

    Roger N. Clark (change username to rnclark) wrote:
    I see; in general I second your methodology, as when we carry out
    analyses of the basic characteristics of a camera, the "raw" data
    should be as primitive as possible.

    However, I have some reservations on how "raw" the data contained in
    the raw files really are. For example, in the NuCore (Pentax K10D?) case I
    mentioned in my article:
    http://ricehigh.blogspot.com/2006/12/secrets-of-k10d-part-2-of-3-bridge-to-d.html

    It can be seen (from the technical info from NuCore) that the RAW file
    is obtained from the digital back end of the "Analog Image
    Processor" with an A-to-D converter. The data have actually received
    some kind of gamma correction as well as some analog signal
    re-conditioning already; e.g., it says it has "analog WB".
    Thanks for your clear explanations and your patience in pointing out this
    practical limitation to me. I shall write another blog entry
    pointing to your website and articles to clarify the case and provide
    additional reference information for my readers.

    After having gone through your website again in more detail, I must say
    your articles really provide very interesting, unique and useful
    stuff for anyone who is interested in the topic or wishes to go more
    in-depth, technically.
    Do you have access to the Canon datasheets for these figures, in
    particular the noise level in units of photons and the maximum signal
    photons for a photosite (for the 1D MkII, say, do you mean four R-G-G-B
    light sensors, or 52,300 photons for just one single color-filtered
    photo cell?)?

    Also, I cannot figure out how the noise for the 12-bit DN is
    calculated. My understanding is that the DN is directly linear in the
    photons; as such, 229 / 52300 * 4095 should be 17.9 instead of 13.8. Am
    I missing something here?
    The above table heading is a mess in my browser, so I cannot really
    figure out the details. I guess the headings are:
    0. Camera
    1. Read photons
    2. total noise
    3. signal noise DN
    4. Photon noise ratio
    5. ???
    6. Noise limited?

    Actually, I cannot follow the equations for deriving these values
    myself, but I get your conclusion that 12 bits is "noise limited".
    I would be grateful if you could elaborate and tell me again the
    actual headings of your table (and how the values are derived).
    Still, I cannot follow :-( Your reply post is simply a bit too
    difficult for me!
    Okay, I know that now; thanks for letting me know.
    Oh, what an eye opener!
    I see. I would like to know the bases of some of the basic
    formulas your calculations rest on, though, e.g., that the photon
    noise is the square root of the total photons projected on a photosite
    (cell), etc.
    I see again!
    I know that your plot shows unlogged figures on a log scale. So
    there is virtually no difference from plotting against log values (on
    the x-axis of your charts), as the 10, 100, 1,000 and 10,000 ticks are
    virtually the same as a linear 1, 2, 3, 4 scale. Am I correct here?
    There is still something I don't quite understand about
    your conclusion that the measured sensor DN is
    linear in EV levels (which is a logged space). Shouldn't the output DN
    (in raw, say) be linear in the input intensity (the light power of the
    scene) instead, as the basic characteristic of a CCD/CMOS imager,
    provided that the ADC is linear?

    If the intensity of received energy is linear in the ADC DN levels, how
    can the logged intensity also be linear in the ADC DN levels?
    Really interesting! I think I still need more time to digest your
    articles. But before that, I would be much obliged if you could
    enlighten me on the points above that I could not yet understand.
    Ditto for the above. I am looking forward to your explanations.
    No. I understand the histogram as a photographer. What I refer to is the
    general case. Do note also that a light meter renders everything as a
    mid-tone, whether highlights or shadows, and that's what I am talking
    about, assuming also that the scene is a good mixture of objects
    with different reflectances.

    For your example, the black dog and the white dog will both become grey
    dogs for a standard light meter calibrated for mid-greys.
    I mean that they have the same number of bits in the RAW files, e.g. 12-bit.
    How about CRT displays? Would these be better?
    Yes, but it is always desirable to have a higher-resolution device and
    a device with more bits to "catch up" with the input image data.
    Yes, and they already exist: HDMI devices support 12 bits.
    Thanks again, and thanks in advance for your further input.
    Are these simple laws of physics? Or only true for particular models of
    device?
    Oh, I see this (again) and I think I shall point this out on my pages
    to make people aware of this fact (I think it's easier for me to point
    to your website for further reading and as the source reference).
    I hope I am not one of them! :)

    Cheers,
    RiceHigh
    http://ricehigh.blogspot.com
     
    RiceHigh, Dec 18, 2006
    #6
  7. Regardless of how "raw" the data from the camera are, the best
    possible noise and basic performance of a sensor are determined
    by the number of photons collected and the low-level read noise.
    Whatever else the Pentax does, it can still be no better
    in dynamic range and noise than the physics of photon
    counting allows. Most likely, if other "things" are done to the
    signal, performance will be degraded and fall below
    that of similar-sized sensors. While I have not seen
    detailed data on the Pentax, reviews seem to indicate
    its performance is similar to other cameras with similar-sized
    sensors.

    Other tested cameras, like the Canon 20D and 1D Mark II and the
    Nikon D50 and D200, show data that are well modeled by photon
    noise, and the data match similar manufacturer-published
    sensor data. So until we see test data on the Pentax,
    we will not know whether something new is being done
    or it's just marketing speak.
    Thank you.
    Canon does not publish these data on their data sheets. But the methods
    for deriving the data are actually quite simple and can be used
    by anyone. Amateur astronomers often analyze these
    cameras, based on methods developed in electronics labs
    and by professional astronomers testing sensors at observatories
    and on spacecraft.

    The signal in photons is for each photosite and is independent
    of whether it is a red, green, or blue filtered pixel.
    For example, in the 1D Mark II, the 52,300 photons is the maximum
    signal from the camera at ISO 100.
    You need to use the gain factor to compute photons from DN,
    or vice versa. In investigating your question, I see I have
    a misprint on my web page. The 52,300 should be 53,300,
    and the gain is 13.02 photons/DN (53300/4095 = 13.02).
    So the noise is sqrt(53300) = 230.9,
    which = 230.9/13.02 = 17.7 DN. So you were computing
    correctly.
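    That correction can be checked numerically (a minimal sketch using the
    corrected figures above):

```python
import math

max_photons = 53300        # corrected full-well signal, 1D Mark II at ISO 100
max_dn = 4095              # top of the 12-bit A/D range
gain = max_photons / max_dn             # photons per DN
noise_photons = math.sqrt(max_photons)  # photon noise in photons
noise_dn = noise_photons / gain         # photon noise in 12-bit DN

print(round(gain, 2), round(noise_photons, 1), round(noise_dn, 1))
# -> 13.02 230.9 17.7
```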
    The table looks fine to me. Does your browser change
    the spacing? It is simple ascii.
    The columns are:

    0. camera
    1. number of photons 7 stops (128x) down from the maximum signal.
    2. The noise from column 1 (sqrt value in column 1) in photons.
    3. read noise in electrons
    4. total noise from the sensor in 12-bit camera DN (includes read
    and photon noise).
    5. The signal-to-noise ratio.
    6. is the camera noise photon noise limited? (yes or no)
    The equations and more explanations are in:
    http://www.clarkvision.com/imagedetail/evaluation-1d2
    Here is an analogy:
    when people estimate length of a bar, there is some
    noise or error in their estimate. What is the accuracy,
    and if you wanted to digitize the length as an integer,
    how fine would your digitization need to be?

    Let's pick 3 bars:
    A) 1 meter
    B) 1 cm
    C) 0.5 mm

    How accurately could one estimate the length of a bar?
    I suspect that if you asked a number of people
    (assuming they knew what a meter was) and showed them a bar,
    asking them to estimate its length, it might look something
    like this (these are guesses as to what it might be):

    A) 1000 mm +/- 100 mm
    B) 10 mm +/- 2 mm
    C) 0.5 mm +/- 0.25 mm

    So a linear digitization needs to be on the order of 0.25mm
    to describe what people can estimate. But that requirement
    is driven by the small end, not the large end. And that
    0.25mm digitization at the long lengths does not mean it
    is a meaningful digitization.

    Same with digital cameras and digitizing the photoelectrons.
    The requirements for the digitization are driven by the small
    signal, not the large signal.
    Yes, it is an interesting concept. I did not come up
    with unity gain ISO, some astronomers were doing it,
    but had not collected the data to show the trend.
    Look up Poisson counting statistics. Wikipedia has a good
    explanation, but the derivation gets heavy into the math.
    No; on a log axis, equal linear distances correspond to orders
    of magnitude, not equal steps (log(1)=0, log(10)=1, log(100)=2,
    in base-10 logs).
    For linear plots, see Figures 1 and 3 at:
    http://www.clarkvision.com/imagedetail/evaluation-1d2
    but you can't see what is going on at the low signal end.
    It is linear, and the linearity data are shown in
    Figures 1 and 3 in each of the full test of each sensor, e.g.:
    http://www.clarkvision.com/imagedetail/evaluation-1d2
    http://www.clarkvision.com/imagedetail/evaluation-nikon-d200

    The data are linear within measurement error up to the point
    of sensor saturation.
    Linear data plotted on a log-log plot is still linear.
    I hope the above example of people estimating the length of
    a bar helps.
    But if you PROPERLY expose the scene, the histogram will
    appear as I described. And if you didn't compensate and made
    gray images, the histogram would still be a narrow range, not spread
    from left to right as you describe. This might help you:
    http://www.luminous-landscape.com/tutorials/understanding-series/understanding-histograms.shtml
    Then perhaps you should reword the text to read something
    like:
    ... the number of bits of the RAW file is identical to ...

    No. Plasma displays come closest: some have a dynamic
    range of 3,000:1, but that is still less than good
    digital cameras. That 3000:1 only happens if the display
    is in a dark room; reflections off the screen too easily
    reduce that level.

    Higher intensity resolution does not translate into
    meeting the dynamic range of DSLRs. For example,
    I can digitize the length of a meter of string
    into 16 bits. That doesn't help if my string is
    100 meters and I only have 16 bits to work with:
    the other 99 meters do not get digitized.
    Number of bits does not equal range.
    Photon noise is basic math and physics. The read and thermal
    noise terms are models used in the sensor industry.
    We all make mistakes. I'll correct the one I mentioned
    above (53,300 versus 52,300).
    I did notice one other error in skimming some of your
    pages. You say the dynamic range of digital is less than
    that of film. That is incorrect. Even small P&S cameras
    have higher dynamic range than print or slide film.
    e.g., see:
    Dynamic Range and Transfer Functions of Digital Images
    and Comparison to Film
    http://www.clarkvision.com/imagedetail/dynamicrange2

    This is an urban myth due to people not understanding how to
    meter with digital.

    Roger
     
    Roger N. Clark (change username to rnclark), Dec 19, 2006
    #7
