To RAW or not to RAW?

Discussion in 'Digital SLR' started by M, Oct 4, 2005.

  1. M

    Jeremy Nixon Guest

    Well, then they wouldn't be "raw" any more, for one thing, which would kind
    of defeat the purpose of getting the data before any processing.
     
    Jeremy Nixon, Oct 8, 2005
    #61

  2. M

    JPS Guest

    I think he meant "why doesn't the analog-to-digital converter apply a
    curve before digitizing". I know that if you put a diode in series with
    a resistor, the voltage across the resistor will be a logarithm of the
    voltage applied to the two, but of course, lots of noise results.
    --
     
    JPS, Oct 8, 2005
    #62

  3. The data is lost at the ADC, and the ADC (and associated electronics)
    would have to be much more precise to capture the full shadow details -
    i.e. better than the 12-bits in common use. Once you've gone to more than
    12-bits, 16-bit linear is the obvious easy choice and simplifies the other
    processing. Doing a simple subtraction (e.g. for dark current) is easy in
    the linear domain, rather more complex in the gamma-corrected domain.
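
    (A minimal Python sketch of that point, assuming a hypothetical 12-bit
    linear scale and a plain 2.2 power-law gamma rather than any particular
    camera's pipeline: on linear data the dark correction is a single
    subtraction, while gamma-encoded data must be decoded, corrected, and
    re-encoded.)

    import numpy as np

    GAMMA = 2.2          # assumed power-law gamma, for illustration only
    MAX_12BIT = 4095.0   # full scale of a 12-bit linear sample

    def subtract_dark_linear(raw, dark):
        # Dark-current subtraction on linear data: a simple subtraction.
        return np.clip(raw - dark, 0, MAX_12BIT)

    def subtract_dark_gamma(encoded, dark):
        # Same correction on gamma-encoded data: decode, subtract, re-encode.
        linear = (encoded / MAX_12BIT) ** GAMMA * MAX_12BIT
        corrected = np.clip(linear - dark, 0, MAX_12BIT)
        return (corrected / MAX_12BIT) ** (1.0 / GAMMA) * MAX_12BIT

    raw = np.array([50.0, 400.0, 3000.0])   # hypothetical linear pixel values
    dark = 20.0                             # hypothetical dark-current offset
    encoded = (raw / MAX_12BIT) ** (1.0 / GAMMA) * MAX_12BIT

    print(subtract_dark_linear(raw, dark))     # [  30.  380. 2980.]
    print(subtract_dark_gamma(encoded, dark))  # same correction, three steps instead of one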

    David
     
    David J Taylor, Oct 8, 2005
    #63
  4. That's not possible. First, you *can't* get a JPEG file that
    was not transformed from 12 bit data in a camera that uses a 12
    bit encoder. The 12-bit "limit" you speak of *is* *the*
    *definition* *of* *what* *the* *camera* *produces*. That limit is not the file
    format, but the 12 bit A-D conversion hardware in the camera.

    Second, a 12 bit file has more levels with which to resolve any
    given value, meaning it can discriminate between values that the
    8 bit file cannot. Because the 8 bit file is non-linear it can
    provide an extended signal to noise ratio over a more restricted
    dynamic range, as one example, or do the opposite. But there is
    *always* a trade-off, and if any one parameter is improved some
    other parameter is necessarily limited.
    Which is to say you necessarily reduce the SNR.
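
    (A rough numerical illustration of that trade-off, assuming an idealised
    12-bit linear source and a simple 1/2.2 power curve for the 8-bit code,
    not any real camera's JPEG pipeline: the 8-bit code still reaches from
    deep shadow to full scale, but its steps, mapped back to linear units,
    are far coarser in the highlights.)

    import numpy as np

    GAMMA = 2.2           # illustrative transfer curve
    MAX12, MAX8 = 4095, 255

    def decode8(code8):
        # Map 8-bit gamma-encoded codes back onto the 12-bit linear scale.
        return (code8 / MAX8) ** GAMMA * MAX12

    steps = np.diff(decode8(np.arange(0, MAX8 + 1)))  # width of each 8-bit step in linear counts

    # The 8-bit code still spans from deep shadow to full scale (same dynamic range)...
    print("decoded range:", decode8(1), "to", decode8(255))
    # ...but near white each 8-bit step is about 35 linear counts wide: coarser steps, lower SNR.
    print("step near black:", steps[1], "step near white:", steps[-1])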

    The 12-bit representations that I've seen do not make full use
    of the 12-bit range. And further, what is actually used is
    *not* 12-bit linear, but rather a 12-bit compressed format. For
    example the Nikon D70 "RAW" files actually have only 683 values,
    not 4096. Despite using only 9.4 bits of resolution the dynamic
    range is the same.
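
    (Quick check of the figure, taking the 683-level count quoted above at
    face value:)

    import math

    levels = 683                 # distinct values quoted for the D70 compressed "RAW"
    print(math.log2(levels))     # ~9.42 -- the "9.4 bits of resolution"
    print(math.log2(4096))       # 12.0 -- a full 12-bit linear range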

    There is exactly *no* difference in the maximum dynamic range
    available in the JPEG file, simply because the limitation is the
    sensor's noise and dynamic range, not the 12 bit or 8 bit format.
    The 8 bit format can maintain the same dynamic range only by
    giving up SNR though.
    Why would manufacturers use 12 bit quantization if the sensors
    actually produce more data? In fact the dynamic range of the
    sensors runs right at 70 dB, and 12 bit (linear) encoders
    are used because they have a dynamic range of 72 dB. The
    encoder loses approximately 1/2 bit per sample to uncertainty
    (noise), and thus has a slightly lower useful dynamic range than
    72 dB... just about exactly 70 dB! A *very* nice match for CCD
    sensors.
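
    (The arithmetic behind those figures, using the usual 20*log10 of the
    level count and the roughly half-bit, ~3 dB, loss mentioned above:)

    import math

    def quantiser_dr_db(bits):
        # Ideal dynamic range of an n-bit linear quantiser: 20*log10(2**bits).
        return 20 * math.log10(2 ** bits)

    for bits in (12, 14, 16):
        ideal = quantiser_dr_db(bits)
        usable = ideal - 20 * math.log10(2 ** 0.5)   # ~1/2 bit (~3 dB) lost to uncertainty
        print(f"{bits}-bit: ideal {ideal:.1f} dB, usable ~{usable:.1f} dB")
    # 12-bit: ideal 72.2 dB, usable ~69.2 dB -- right at the ~70 dB of typical CCD sensors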

    The actual number of digital levels that are useful depends on
    the absolute noise level produced by the analog system plus the
    uncertainty of the encoder. Each digital step has to represent
    roughly 2.7 times that noise level for each stop to be
    resolvable with certainty. Another way of looking at it is to
    note that the dynamic range of a sensor is defined as the well
    size divided by the noise. Your 40,000 electrons works out to a
    convenient 4096 levels for a noise of 9.8 electrons... but that
    is a 1 to 1 correspondence, and it would be impossible to
    discriminate between levels. The 12 bit encoding is only a
    limitation *if* the camera noise is lower than about (9.8 /
    2.7), or 3.6 electrons.
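
    (The same example worked through in Python, using the 40,000-electron
    and 9.8-electron figures above:)

    well = 40_000      # full-well capacity in electrons (the example figure above)
    noise = 9.8        # noise in electrons (the example figure above)
    levels = 4096      # steps available from a 12-bit linear encoder

    print(well / noise)            # ~4082: about one noise unit per 12-bit step
    print(well / levels)           # ~9.8 electrons represented by each step
    print(well / levels / 2.7)     # ~3.6 electrons: only below this noise level
                                   # would the 12-bit encoding become the limitation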

    That might not exactly be a typical example though! I tried to
    find numbers for a few typical cameras but it seems that
    information is not commonly released. The only one I found was
    this:

    http://www.ayton.id.au/gary/photo/Dig_sensors.htm

    That URL discusses the Kodak KAF8300CE CCD used in the Olympus
    E300 SLR. With a well capacity of 25,000 electrons and noise at
    16 electrons, the dynamic range is 64 dB. Their comment,

    "thus the camera only needs a 12 bit A/D to maximally utilise
    this, any more would just add expense, power consumption &
    slow the processing down."

    All of which is to say that it is *not* the limitation of a 12
    bit linear representation that restricts the eventual dynamic
    range of the display, whether that is provided by an 8 bit JPEG
    file or some other representation.
    A 16 bit A-D conversion would have a dynamic range of 96 dB and
    provides a contrast ratio of 65535:1, which is far more than the
    sensor.
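
    (The quoted KAF-8300 numbers and a 16-bit converter, run through the
    same arithmetic:)

    import math

    def db(ratio):
        return 20 * math.log10(ratio)

    # Kodak KAF-8300 figures quoted above: 25,000 electron well, 16 electron noise.
    print(f"sensor: {db(25_000 / 16):.1f} dB")                   # ~63.9 dB, so 12 bits (72 dB) suffice
    print(f"16-bit A/D: {db(2 ** 16):.1f} dB, {2 ** 16 - 1}:1")  # ~96.3 dB, 65535:1 -- far beyond the sensor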

    Note too that 14 bit linear codecs are a very common technology,
    used in every modem since v.32 was introduced, and of course in
    digital telephone systems that use 8-bit PCM for transmission of
    voice over digital facilities. If there were some great
    advantage to using an encoder with a wider dynamic range, it
    would be easy to provide: 14 bit A/D converters have a dynamic
    range of 84 dB, which is greater than the sensors'. (In fact
    there are special cameras that use 14 bit quantizers, because
    they have cooled CCD sensors where the noise output is
    significantly improved.)
     
    Floyd Davidson, Oct 8, 2005
    #64
  5. Quantization introduces "quantization distortion". That is
    technically different than noise, though it is commonly called a
    "noise". The difference is a bit technical though. Distortion
    is a characteristic of a communications channel that can be
    determined from the signal at the input (i.e., the same input
    signal will always produce the same distortion, and hence a
    predictable output signal) and that is not true of noise. Noise
    is not related to the input, and cannot be known at the input.
    (Another example of a "noise" that is actually distortion is the
    production of harmonics due to non-linear transfer
    characteristics. Intermodulation products are also a distortion
    which typically causes something referred to as noise. Another
    distortion is "amplitude distortion" over a frequency range, but
    that is rarely thought of as "noise", though it is not really
    different than the others.)

    The significance is that via various means it is possible to
    compensate for distortion, at the input, while that is untrue of
    noise.
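
    (A small sketch of the distinction, using a coarse quantiser as the
    distorting channel and Gaussian noise as the noisy one, with purely
    illustrative values:)

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 2 * np.pi, 1000))

    def quantize(x, step=0.25):
        # A coarse uniform quantiser stands in for a distorting channel.
        return np.round(x / step) * step

    # Distortion: the same input always produces the same error, so it is
    # predictable (and in principle compensable) from the input alone.
    print(np.array_equal(quantize(signal) - signal, quantize(signal) - signal))   # True

    # Noise: each pass through the channel adds a different, unknowable error.
    print(np.array_equal(rng.normal(0, 0.05, signal.shape),
                         rng.normal(0, 0.05, signal.shape)))                      # False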

    The difference between noise and distortion is generally of no
    significance to users but for design engineers it is important
    to distinguish between them.
    Quantization error is typically modeled as additive white
    Gaussian noise (AWGN) if the channel is bandwidth limited, but
    that is only true when the input signal and the sampling clock
    are asynchronous. And yes it has a "value from minus half the
    quantizing step to plus half the quantizing step".
    Which makes it... distortion, just like harmonic distortion
    and intermod distortion.
    Pure quantization, according to what we both agree it does, does
    in fact make the transfer characteristic non-linear. I would
    invite you to do a web search on "quantization distortion", and
    read up on it.
    There is *no* quantization distortion in the analog-to-digital
    conversion alone. None. Quantization error occurs when the
    digital signal is converted back to analog. And in fact it is
    theoretically impossible to do an analog to digital to analog
    conversion and *not* introduce distortion.
    Fine. Most cameras are technically *not* using a 12 bit linear
    data format.
    Wrong. It is inherent that a 4-bit gamma-corrected coding can
    have the same dynamic range as the 8 bit or 12 bit format.
    Of course, one could be *really* pedantic and say that a 1-bit
    format offers the same dynamic range! Of course "dynamic range"
    has nothing to do with the ability to resolve intermediate levels
    between the limits of that range. An 8 bit format derived from
    12 bit data *cannot* have better resolution of intermediate levels
    than existed in the original 12 bit data. It can, however, have
    different error levels at any given point.
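
    (One way to put numbers on that, using a log-spaced code as a stand-in
    for any gamma-style non-linear encoding and an arbitrary 1-to-4000
    scene range:)

    import numpy as np

    lo, hi = 1.0, 4000.0   # hypothetical linear range the code must span (arbitrary units)

    def decoded_levels(bits):
        # A non-linear (log-spaced) code laid out so its end points span lo..hi exactly.
        codes = np.arange(0, 2 ** bits)
        return lo * (hi / lo) ** (codes / (2 ** bits - 1))

    for bits in (12, 8, 4, 1):
        vals = decoded_levels(bits)
        print(f"{bits:2d}-bit: {vals[0]:.0f}..{vals[-1]:.0f}, {len(vals)} levels")
    # Every one of these codes covers the same end-to-end ratio -- the dynamic
    # range -- but the number of intermediate levels it can resolve differs wildly.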
     
    Floyd Davidson, Oct 9, 2005
    #65
  6. M

    JPS Guest

    1-bit would only supply a threshold; no ratios.
    --
     
    JPS, Oct 9, 2005
    #66
  7. M

    JPS Guest

    Time to stop beating the strawman, don't you think?

    When are you going to admit that the original reason you gave for JPEGs
    having less dynamic range than the RAW data they come from is wrong?
    --
     
    JPS, Oct 9, 2005
    #67
  8. Potential means something is available but unused.

    If it is not available, it is not potential.
    There are an infinite number of 12 bit linear data formats. The
    analog size of the quantum represented by each digital symbol is
    infinitely variable.

    Regardless, the limitations of the 8 bit JPEG files produced
    from the data originally formatted in those "12 bit linear"
    formats are necessarily greater than the limitations of a
    genuine 12 bit linear formatted file. Of course, dynamic range
    is the same for both...
    That is not true generally, though certainly there might be some
    sensors in use that have greater dynamic range than a 12-bit A/D
    can handle. The limit with 12 bit A/D is 72 dB, and most CCD sensors
    used in typical DSLR cameras have a range between 60 and 70 dB.
    It *can't* be used, and therefore is not "potential".
    Only because the sensor doesn't produce the data.
    First, it would only take a 14 bit A/D encoder to provide 4
    times the range that a 12 bit encoder does. That technology is
    rather well known, and cheap too.

    The reason 14 (or 16) bit encoders are not used is that they require more
    overhead and provide no benefit. More power and more time, with no
    gain, is not productive.
     
    Floyd Davidson, Oct 9, 2005
    #68
  9. <more techno-babble snipped>

    As I suspected...another know-it-all who can recite technical facts and
    figures, yet can't apply it when you put a camera in his hands.

    I've run across this type with my other two interests...computers and
    music. I know people who can tell you how to engineer sound...they've
    got all the facts and figures, yet put them in front of a PA board and
    they don't know what to do.

    Have fun with your graphs.
     
    Randall Ainsworth, Oct 9, 2005
    #69
  10. M

    Jeremy Nixon Guest

    Okay, so I guess you're just arguing definitions of "potential", which
    really doesn't matter, so, okay.
     
    Jeremy Nixon, Oct 9, 2005
    #70
  11. M

    Andrew Haley Guest

    See Vanderkooy and Lipshitz, "Dither in Digital Audio" and "Resolution
    Below the Least Significant Bit in Digital Audio Systems with Dither",
    JAES 1984. These papers explain why quantization, properly done, adds
    no distortion, because the quantization error is entirely decorrelated
    from the input signal. Although they are about digital audio,
    these papers apply to all quantized systems.
    Those forms are nonlinear. Quantization, done correctly, is linear.
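
    (A quick numerical check of the Vanderkooy/Lipshitz result, using
    triangular-PDF dither of plus/minus one LSB and an arbitrary test signal
    deliberately smaller than half a quantiser step: without dither the
    quantisation error is perfectly correlated with the input; with dither
    the correlation essentially vanishes.)

    import numpy as np

    rng = np.random.default_rng(1)
    step = 0.1                                              # quantiser step (1 LSB), arbitrary
    x = 0.04 * np.sin(np.linspace(0, 40 * np.pi, 100_000))  # signal smaller than half an LSB

    def quantize(v):
        return np.round(v / step) * step

    # Undithered: everything rounds to zero, so the error is just -x,
    # perfectly correlated with the input (classic quantisation distortion).
    err_plain = quantize(x) - x
    print(np.corrcoef(x, err_plain)[0, 1])          # -1.0

    # Non-subtractive TPDF dither (sum of two uniforms, +/- 1 LSB peak) before quantising.
    dither = rng.uniform(-step / 2, step / 2, x.size) + rng.uniform(-step / 2, step / 2, x.size)
    err_dithered = quantize(x + dither) - x
    print(np.corrcoef(x, err_dithered)[0, 1])       # ~0: error decorrelated from the signal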

    Andrew.
     
    Andrew Haley, Oct 9, 2005
    #71
  12. M

    McLeod Guest


    Actually, that's the one thing that wasn't clear. The other person
    was talking about a jpeg from the raw file and you were arguing with
    him so it wasn't clear at all, even with your use of indefinite
    articles, that you were just talking about the difference between file
    types and not a jpeg created from the raw.

    If you want to argue, at least argue the specifics. All this
    weaseling around with definite and indefinite articles makes you look
    like a dick.
     
    McLeod, Oct 9, 2005
    #72
  13. I'm afraid that many of your points are either incorrect or not expressed
    in the signal-processing terms which are used by the engineering
    community.

    Quantisation error is called quantisation noise, not distortion, as it is
    not a non-linear process per se. If modelled as Gaussian white noise,
    then it is being incorrectly modelled, as it is /not/ Gaussian, but linear.

    I stand by my statements.

    David
     
    David J Taylor, Oct 9, 2005
    #73
  14. Floyd Davidson wrote:
    []
    I don't think that manufacturers are as concerned with image quality as
    with number of megapixels and full-frame versus APS. Yes, 14-bits would
    probably do nicely. I welcome improvements in both sensors and
    digitisation towards the wider system dynamic range goal.

    David
     
    David J Taylor, Oct 9, 2005
    #74
  15. Thanks for that, Andrew.

    David
     
    David J Taylor, Oct 9, 2005
    #75
  16. M

    JPS Guest

    Nonsense; I went back and forth with him a few times, and each time I
    tried to find another way to write it. He moved the goalposts from
    "Does the JPEG format limit the shadows" (actual answer is no; specific
    conversions do) to "does the extra potential get used", which is
    irrelevant, as it is only important that the DR is not limited by JPEG
    per se, which is what I replied to in the first place.
    To whom?

    --
     
    JPS, Oct 10, 2005
    #76
  17. Particularly when he posted specs for a specific RAW format that
    was not *the* specific generic one of a kind linear 12 bit encoding.
     
    Floyd Davidson, Oct 10, 2005
    #77
    Who said anything about "ratios"? I said the same dynamic
    range! That is to say, two thresholds.

    And I didn't say a word about noise... ;-)
     
    Floyd Davidson, Oct 10, 2005
    #78
  19. When are you going to go look at what the OP was asking about, and
    the statements to which I replied?

    You've wandered all over Hell's Half Acre trying to find enough
    weasel room to make up for the fact that you don't understand
    enough about digital signal processing to apply it to photographs.
     
    Floyd Davidson, Oct 10, 2005
    #79
  20. It is always interesting to watch people like you get in over
    their head, and see how they respond when it becomes obvious.

    All of the above about the difference between noise and
    distortion might be more obvious if you would take the time to
    read a bit of Claude Shannon's work, "A Mathematical Theory of
    Communication"

    http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html

    Of course, that's even worse than those horrible graphs, Randall;
    it's filled with equations that are used to generate graphs!

    As to whether it actually makes any difference, try finding
    equipment to measure any of the "noise" that I've said is
    actually distortion. All such equipment will be referred to as
    a "distortion" analyzer...

    Floyd L. Davidson http://www.apaflo.com/floyd_davidson
    Ukpeagvik (Barrow, Alaska)
     
    Floyd Davidson, Oct 10, 2005
    #80
