To RAW or not to RAW?

Discussion in 'Digital SLR' started by M, Oct 4, 2005.

  1. JPS Guest

    In message <>,
    What you originally wrote doesn't imply that, however. My point was
    that 12-bit linear is not better than 8-bit gamma-corrected in all
    ranges, so it is not accurate to say RAW is better than JPEG just
    because 12 bits are more than 8. In the deepest shadows, 8-bit
    2.2-gamma-corrected data has several times the precision of 12-bit
    linear data.
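
    A minimal Python sketch of that comparison (mine, not from the
    original post), assuming idealized encoders: linear code = 4095 * L,
    gamma code = 255 * L ** (1/2.2), for scene luminance L in [0, 1]. It
    counts the distinct code values available inside each stop below
    full scale:

        def encode(lum, max_code, gamma):
            # Integer code produced for luminance lum in [0, 1].
            return int(max_code * lum ** (1.0 / gamma))

        def levels_in_stop(stop, max_code, gamma):
            # Codes available for luminances in [2**-stop, 2**-(stop - 1)).
            hi = encode(2.0 ** (-(stop - 1)), max_code, gamma)
            lo = encode(2.0 ** (-stop), max_code, gamma)
            return hi - lo

        for stop in range(1, 13):
            lin = levels_in_stop(stop, 4095, 1.0)   # 12-bit linear
            gam = levels_in_stop(stop, 255, 2.2)    # 8-bit, gamma 2.2
            print(f"stop -{stop:2}: linear {lin:5}  gamma {gam:3}")

    By stop -12 the linear coding is down to a single level while the
    gamma coding still has two; deeper than that, linear has none at all.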

    --
     
    JPS, Oct 7, 2005
    #41

  2. Which part do you think doesn't imply that? I can't respond to shotgun
    comments that mean nothing.
    That isn't true. (If for no other reason than that the 8-bit data is
    derived from the 12-bit data, and *cannot* have a greater range
    than what it started with. Like I said, "potential" is worthless.)
    Can you provide data to demonstrate?

    I don't have complete data, but what I have says there are 203
    levels in the brightest 5 fstops. That leaves a total of 53
    levels at most for the rest of the image.

    With RAW, there are 3968 levels in the brightest 5 fstops. And
    64 in the 6th and 32 in the 7th... nearly twice as
    many as are available in an 8-bit JPEG.
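
    Where those counts come from, assuming ideal encoders (a sketch of
    mine, not Floyd's arithmetic): for 8-bit 2.2 gamma, luminance 1/32
    (5 stops down) encodes to 255 * (1/32) ** (1/2.2), about 52.8, so
    codes 53..255 (203 levels) cover the top 5 stops and codes 0..52
    (53 levels) cover everything darker; for 12-bit linear, the top 5
    stops span codes 128..4095:

        print(255 - int(255 * (1 / 32) ** (1 / 2.2)))  # -> 203
        print(int(255 * (1 / 32) ** (1 / 2.2)) + 1)    # -> 53
        print(4096 - 128, 128 - 64, 64 - 32)           # -> 3968 64 32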

    Something doesn't add up here for those "deepest shadows".
     
    Floyd Davidson, Oct 7, 2005
    #42

  3. JPS Guest

    In message <>,
    But you can post them. You originally implied that 12-bit is better
    than 8-bit, because 12 is bigger than 8. That is simply not true, and
    that is what I replied to.
    That's totally irrelevant, as I already qualified my statement by
    saying "POTENTIAL".
    That's irrelevant. I was addressing your logic, which implied that
    12-bit linear had more dynamic range for mathematical reasons.
    Not exactly; that depends totally on the camera. Most don't even have
    that many levels, *total*. The Canon 20D has only 3967 levels between
    blackpoint (128) and 4095, so the RAW value 4095 is actually a
    luminance of 3967, and dropping by stops, we get:

    Stops   RAW luminance   RAW value   8-bit 2.2 gamma

      0        3967.0         4095.0          --
     -1        1983.5         2111.5         255.0
     -2         991.8         1119.8         186.2
     -3         495.9          623.9         135.9
     -4         247.9          375.9          99.2
     -5         124.0          252.0          72.4
     -6          62.0          190.0          52.9
     -7          31.0          159.0          38.6
     -8          15.5          143.5          28.2
     -9           7.7          135.7          20.6
    -10           3.9          131.9          15.0
    -11           1.9          129.9          11.0
    -12           1.0          129.0           8.0
    -13           0.5          128.5           5.8
    -14           0.2          128.2           4.3
    -15           0.1          128.1           3.1
    -16           0.1          128.1           2.3

    This is typical for the green channel with most white balances; the red
    and blue channels typically use a smaller RAW range.
    The full RAW dynamic range is not used by default; what is mapped to 255
    in an 8-bit output is usually only about 2000 to 2200 in the RAW data
    (green channel; red and blue are usually less). Literal 8-bit 2.2-gamma
    data has about 4 to 12 levels to represent the range of 0 to 1 in the
    RAW data, depending on the camera, white balance, and color channel.
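
    A short Python sketch (mine, not from the post) that reproduces the
    table above from the stated 20D numbers: blackpoint 128, clipping at
    4095, and a RAW value of about 2111 mapping to 255 in 8-bit output:

        BLACK, CLIP = 128, 4095
        WHITE_LUM = 2111.5 - BLACK        # luminance that maps to 255

        lum = float(CLIP - BLACK)         # stop 0
        for stop in range(0, 17):
            raw_value = lum + BLACK
            g = 255 * (lum / WHITE_LUM) ** (1 / 2.2)
            g8 = f"{g:8.1f}" if g <= 255 else "      --"  # stop 0 clips
            print(f"{-stop:5} {lum:10.1f} {raw_value:10.1f} {g8}")
            lum /= 2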
    The reasons the 8-bit images are not posterized in the shadows are:

    1) Demosaicing creates intermediate values.

    2) White balance creates scaling in the three channels.

    3) JPEGs typically have the shadows darkened to hide noise. The noise
    is very bright if represented literally.
    --
     
    JPS, Oct 8, 2005
    #43
  4. Eugene Guest

    Excuse my ignorance, but how can the JPEG possibly have more precision
    in the shadows than the 12-bit RAW file? The JPEG file is created from
    the RAW data. It stands to reason then that the JPEG can only ever have
    at most as much information as the RAW file. Actually, logically it
    can't have even as much information, because it's only 8-bit.

    I understand that the processing algorithms may be able to make the JPEG
    appear better, but surely it cannot actually have information that
    wasn't already available in the RAW file.

    Isn't it kind of like suggesting a print or scan can have better detail
    in the shadows than the original negative?
     
    Eugene, Oct 8, 2005
    #44
  5. Jeremy Nixon Guest

    The 8-bit gamma-corrected JPEG has more precision in the shadows than the
    12-bit linear RAW file; that the information does not exist in the RAW
    file means the extra precision is not used, not that it doesn't exist.
    In other words, if the RAW file were, say, 16-bit, it could do better
    in the shadows than current 12-bit ones when converted to 8-bit with
    gamma correction.
     
    Jeremy Nixon, Oct 8, 2005
    #45
  6. JPS Guest

    In message <di75v0$jnt$>,
    I don't know; how can it? Who said that it was?
    No.

    It can't have more levels, but it can easily have more dynamic range
    than a 12-bit linear format.
    Not any that has to do with the original scene.
    It would be, if someone actually said it.
    --
     
    JPS, Oct 8, 2005
    #46
  7. Which also means that *no potential* for it exists.
    Actually, there are 12-bit linear formats that have more
    precision than JPEG. Whether any given camera uses one, though,
    is up to the design team. It stands to reason, for example,
    that if the sensors cannot produce valid data in that range,
    there is no point in encoding it. The data simply does not
    exist to begin with. It also does not exist in the 12 bit file,
    and it does not exist in the 8 bit file either. There is *no*
    potential for it to exist if the sensor does not produce it.

    Just claiming that there are bits available in the 8-bit format
    to encode such data means *nothing*. There are bits available
    in 12-bit formats too, but there are good reasons they
    are not used. And it stands to reason that if and when they are
    useful, the formats will be adjusted to do exactly that.
     
    Floyd Davidson, Oct 8, 2005
    #47
  8. "Actually, for display purposes, an 8-bit JPEG of highest
    quality has more potential dynamic range than a 12-bit linear
    RAW file."
    in <>
    Wed, 05 Oct 2005 21:49:56

    You've gone on to argue that is because there is supposedly more
    precision in the shadows.

    But you are going around in circles and moving the goalposts by
    trying hard to shift that from a JPEG file to an "8-bit gamma
    corrected" format; that is *not* where the contention was.

    I said the above statement was not true. Jeremy stated that it
    made no difference, because 8-bit vs. 12-bit range and lossy vs.
    lossless is not the reason people choose between RAW and JPEG,
    and I agreed with him. You then said that wasn't what I implied
    in saying your original statement wasn't true.

    You are arguing about apples and oranges, and shift to the other
    any time someone disagrees with you on the last statement you
    made about either of them.
    That is true, but it is also a useless factoid. The "dynamic
    range" can of course be expanded regardless of the number of
    bits in the original data. That is just turning up the gain at
    the speakers, and has no meaning at all. Signal-to-noise ratio
    and bandwidth are what determine how much information exists.
    Bit transformations can trade those parameters around, but they
    can't gain on any one of them without an equal loss on another.
    You can add information that a viewer cannot distinguish from
    the original information.

    That is why your statement about JPEG files is wrong and the
    almost identical statement about "8-bit 2.2 gamma" corrected data
    can be true.
    "Actually, for display purposes, an 8-bit JPEG of highest
    quality has more potential dynamic range than a 12-bit linear
    RAW file."
    in <>
    Wed, 05 Oct 2005 21:49:56

    You did.
     
    Floyd Davidson, Oct 8, 2005
    #48
  9. John said "potential dynamic range", not "precision".

    The range of gamma-corrected 8-bit data could be (1/255)^2.2, i.e.
    approx. 1/200,000. The range of 12-bit linear data is 1/4095.
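
    A quick check of those ratios in Python (a sketch, assuming ideal
    codings where the smallest nonzero 8-bit code decodes to (1/255)^2.2):

        print((1 / 255) ** 2.2)   # ~5.1e-06, i.e. roughly 1/200,000
        print(1 / 4095)           # ~2.4e-04, i.e. 1/4,095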

    David
     
    David J Taylor, Oct 8, 2005
    #49
  10. He specified a JPEG file. Different beast.

    The specified 12-bit file, if we ignore the "linear data"
    designation just as we are ignoring the JPEG designation, can
    have a significantly different "potential dynamic range" too.

    Regardless, the point is that "potential" which cannot be used...
    is *not* potential at all. The data in the 8-bit file (regardless of
    what format) is directly derived from a 12-bit format. You *cannot*
    add information to it with merely a bit encoding transformation.
    All that can do is juggle parameters, and gaining in one necessarily
    means losing in another.
     
    Floyd Davidson, Oct 8, 2005
    #50
  11. The JPEG files I've seen are typically gamma-corrected. Were a more
    precise measurement of the sensor output possible than the 12-bit limit,
    then you could actually achieve greater dynamic range in the
    gamma-corrected space than in the linear space, at the expense of
    precision in representing the highlights.

    If you only have 8 bits to represent the image, gamma-corrected data
    makes better use of the 8-bit range. The dynamic range of such data
    exceeds the dynamic range of 12-bit linear encoded data. For DSLRs, a
    12-bit representation does not quite do full justice to a sensor whose
    well size is of the order of 40,000 electrons. So by using a 12-bit
    linear representation to start with, it is actually that 12-bit
    quantisation which is limiting the dynamic range of the 8-bit
    gamma-corrected JPEG.
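
    Putting that comparison in stops (my sketch, assuming ideal codings
    and shot-noise-limited shadows at a ~40,000-electron full well):

        import math

        print(math.log2(4095))        # 12-bit linear: ~12.0 stops
        print(2.2 * math.log2(255))   # 8-bit, gamma 2.2: ~17.6 stops
        print(math.log2(40000))       # ~15.3 stops of sensor range

    On those assumptions the 12-bit linear step is the bottleneck, not
    the 8-bit gamma-corrected coding.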

    I guess if we had 16-bit linear, with a stop or two of headroom, that
    might make us all happier (including the memory card manufacturers!).

    David
     
    David J Taylor, Oct 8, 2005
    #51
  12. JPS Guest

    In message <>,
    I said clearly earlier that I was talking about the file format, and
    its potential. Are you one of those people who just can't be wrong?
    You implied a simple mathematical reason why 12-bit linear has more
    dynamic range than an 8-bit JPEG.
    No; they wouldn't be linear if they did. Linear has poor shadow
    precision.
    There is much more data in the sensor at lower ISOs than the 12-bit
    digitization can account for. ISO 100 dynamic range in most DSLRs is
    limited on the shadow end by the limits of 12-bit linear data. Even at
    higher ISOs, the noise is a little noisier than it should be because of
    quantization. Quantization makes the noise values swing more wildly
    than they would if they were left in an analog form, increasing the
    contrast of noise. Images look best when the inherent noise in the
    system is rendered precisely; not when it (and the signal) are
    quantized. Here is an example; both of these images have the exact same
    absolute exposure, taken in manual exposure mode with only the ISO
    setting changed between them. They are of a black camera strap on a
    black guitar bag. The ISO 1600 shot is pretty much what you could
    expect with 16-bit digitization at ISO 100, except that there is
    probably a tad more readout noise at ISO 1600, so it could be even
    better:

    http://www.pbase.com/jps_photo/image/40038800
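
    A small simulation of that quantization effect (my sketch, not from
    the post; the signal level and noise sigma are illustrative):

        import random

        random.seed(1)
        signal = 2.5   # a deep-shadow level, in 12-bit counts
        samples = [signal + random.gauss(0, 0.5) for _ in range(100000)]

        def spread(vals):
            mean = sum(vals) / len(vals)
            return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

        print(spread(samples))                      # analog noise: ~0.50
        print(spread([round(v) for v in samples]))  # quantized: ~0.58

    Quantization adds roughly step**2/12 of variance, so the digitized
    noise swings harder than the underlying analog noise.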
    Neither I nor anyone else here has suggested that you can get better
    shadow level precision than the original RAW data in a JPEG rendered
    from it, yet you are arguing against that strawman, apparently in an
    attempt to look correct. Just deal with it: your original reason was
    wrong. There is not less dynamic range in a typical JPEG because it has
    8 bits; the reason is that the shadows are not converted well because of
    noise reduction, and an S-shaped transfer curve that typically clips the
    highlights. The 8-bit JPEG file format is capable of more
    dynamic range than a 12-bit linear RAW file; that's a fact. If a 12-bit
    file contains more real-world dynamic range than an 8-bit JPEG can
    possibly have, it is not linear.

    The fact is, it is possible to render a final low-compression 8-bit JPEG
    with literal 2.2 gamma throughout its range from a RAW file that doesn't
    compromise the shadows or dynamic range.

    --
     
    JPS, Oct 8, 2005
    #52
  13. JPS Guest

    In message <>,
    Do you know the difference between the indefinite articles "a" and
    "an", and the definite article "the"? Replace my indefinite articles
    with your definite articles, and it will say what you think it says.
    Funny; that's exactly what I was thinking about you. I haven't made any
    goalpost adjustments, because I made it clear that I was talking about
    file formats and not *THE* jpeg from *THE* RAW.
    A low-compression 8-bit JPEG is not significantly different from any
    other gamma-adjusted 8-bit format.
    Not at all. I have always been talking about the file format, and as
    soon as you disagreed, I clarified further. I never said that "*THE*
    jpeg from *THE* RAW" has more dynamic range. You said the *REASON* an
    8-bit JPEG has less dynamic range is that 8 is less than 12. That is
    wrong, and that is why I am arguing with you. It is rendering decisions
    that drop dynamic range in a JPEG conversion, and it doesn't have to be
    done at all; if an 8-bit JPEG is intended to be biased more toward
    containing data than being nice eye-candy, it can contain *more* dynamic
    range than a single 12-bit RAW can. This can be realized practically by
    taking two RAW exposures several stops apart, and storing the merged
    data into an 8-bit JPEG with minimal compression.
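
    A sketch of that two-exposure idea (mine, not a description of any
    actual converter; the names and numbers are illustrative):

        STOPS_APART = 4
        CLIP = 4095

        def merge(bright_raw, dark_raw):
            # Combine two linear RAW samples of the same pixel; the dark
            # frame covers highlights the bright frame clipped.
            if bright_raw < CLIP:
                return float(bright_raw)
            return dark_raw * 2.0 ** STOPS_APART  # rescale to match

        def to_8bit(lum, white):
            # Gamma-2.2 encode the merged luminance into 0..255.
            return min(255, round(255 * (lum / white) ** (1 / 2.2)))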
    That doesn't alter dynamic range at all, in the context of which we are
    speaking. Brightness is not dynamic range. Dynamic range is a ratio of
    maximum to minimum useable signal, according to some standard of
    useability.
    12-bit linear data from cameras only comes to us that way; we can't do
    anything with it to increase the dynamic range of the real world data it
    represents; only how it is displayed.
    I have no idea what you're talking about here.
    No, it is not the capabilities of the JPEG, but rather rendering
    decisions that typically lose dynamic range in the JPEG conversion.

    Looking at it, I should have left off the "for display purposes,"; that
    was left from a different train of thought that I didn't edit away
    completely. Display is not the crux of the issue; data storage is, and
    a 12-bit file that linearly encodes real-world intensities cannot
    contain the dynamic range that an 8-bit file that gamma-2.2-encodes
    real-world intensities can.

    Again, the only reason I disagreed with you originally was because you
    implied that 12 is always better than 8 for dynamic range, because it
    is bigger. This is only guaranteed to be true if the real-world-to-data
    transfer curve is the same for both formats. In our case, the 12-bit
    data is a linear representation.
    --
     
    JPS, Oct 8, 2005
    #53
  14. JPS Guest

    In message <PgN1f.124450$>,
    Of course, the more times you argue back and forth with someone who
    doesn't get it the first time, the more likely you are to finally write
    something wrong, like my choice of the term "for display purposes", and
    that is what they'll latch onto, as if it were the only thing you ever
    wrote.
    --
     
    JPS, Oct 8, 2005
    #54
  15. JPS Guest

    In message <>,
    Included, actually. Other than the variable compression artifacts,
    there is no reason why a JPEG can't use uniform gamma throughout its
    range.
    Nonsense. An 8-bit JPEG from a 16-bit linear RAW can have more dynamic
    range than a 12-bit linear RAW can possibly contain, so your original
    implication that JPEG was the limitation is simply untrue. It is the
    RAW conversion to JPEG that usually loses practical dynamic range, not
    the JPEG format.
    Cut the crap; no one has ever implied that here. Stop beating your
    strawman to death.
    We were talking about dynamic range; not precision in the midtones and
    highlights.
    --
     
    JPS, Oct 8, 2005
    #55
  16. All of the above is talking about an apple: an 8 bit JPEG image
    derived from 12 bit linear data.
    You are talking about an orange, not the apple above.
    Nice discussion of oranges as a fruit, but the question was
    about apples.
    Quantization produces "distortion" (as opposed to "noise") which shows
    up as something everyone calls noise. The larger the quantization steps
    used, the "more wildly" the values swing from the introduced distortion.
    It increases the *amount* of the noise, by adding distortion that
    didn't previously exist.
    I'm not sure what you mean by that.
    I responded to what was asked, and the responses that resulted,
    all of which were about apples.

    *You* keep bringing up oranges.
    It was dead on right. And it had nothing to do with all this
    wonderful information you continue to provide about oranges.
    You have shown a modified 12-bit linear coding that may well
    have less dynamic range. But that is not inherent in 12-bit
    linear coding. Even with 12-bit linear coding it is possible to
    quantize the entire first f/stop as 1 level and fold it
    into the second level, just as your 8-bit data did in the graphs
    that you posted. That of course provides one more bit to deal
    with low light levels. Likewise it is not necessary to clip
    everything in the lower 128 levels off.

    What you have shown is that one particular 8-bit 2.2-gamma
    corrected encoding has more range than one particular abbreviated
    12-bit linear encoding. But neither defines the general classes
    you label them as above.
    It is also possible to generate a 12-bit linear encoded image
    with which you cannot do that.
     
    Floyd Davidson, Oct 8, 2005
    #56
  17. Floyd Davidson wrote:
    []
    No - quantisation introduces what is known as "quantisation noise" which
    is a uniformly distributed random variable ranging in value from minus
    half the quantising step to plus half the quantising step. It is
    completely mathematically describable. Other noise sources may include
    photon-limited noise and shot noise. Distortion implies alteration of
    signal values in such a way as to make the transfer characteristic
    non-linear. Pure quantisation does not do this.

    It is possible with a poorly designed or implemented analog to digital
    converter to introduce distortion, though.
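
    A quick numeric illustration (my sketch) that ideal quantisation
    error is uniform on plus/minus half a step, with variance step**2/12:

        import random

        random.seed(0)
        errors = []
        for _ in range(100000):
            x = random.uniform(0.0, 1000.0)   # "analog" input
            errors.append(round(x) - x)       # quantise with unit step

        print(min(errors), max(errors))       # close to -0.5 and +0.5
        print(sum(e * e for e in errors) / len(errors))  # ~1/12 = 0.0833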

    []
    If a 12-bit coding does not represent light uniformly, such that the
    light level required for a digital value of 4095 is 4095 times the
    light level required for a digital value of 1, then that coding is not
    linear.

    It /is/ inherent that a 12-bit linear coding will offer less dynamic range
    than an 8-bit gamma 2.2 corrected coding.

    David
     
    David J Taylor, Oct 8, 2005
    #57
  18. Jeremy Nixon Guest

    It doesn't mean that at all.
    Mathematically there is only one 12 bit linear representation. If they do
    something else with the values, then it's not linear anymore; it has been
    processed in some way after the sensor capture and is no longer what we're
    talking about.
    The limiting factor here is not the sensor; it is the 12-bit A/D
    conversion. There *is* more data at the sensor, data that is not being
    used in the digitization. The sensor is perfectly capable of producing
    the data; the 12-bit file is not capable of using it, but some of it
    could be used by an 8-bit gamma-corrected file.
    Of course it means something. It means that the potential exists but is
    not being used. The reason it is not being used is, simply, because the
    intermediate format, a 12-bit linear representation of the image, doesn't
    have the data.
    It's not the format that is the problem. The problem is not having a
    16-bit A/D converter that is suitable for this application.
     
    Jeremy Nixon, Oct 8, 2005
    #58
  19. So why do they not use gamma-corrected RAW files?
     
    Remco Raaphorst, Oct 8, 2005
    #59
  20. JPS Guest

    In message <[email protected]>,
    Noise? Expense?

    Linear data is easier to work with for the initial tasks of blackpoint
    biasing and white balancing. Luminance level 0 is not necessarily 0 in
    the RAW data, and a gamma-corrected digitization would not be able to
    take advantage of its better deep-shadow range, because those levels
    would be clipped below the blackpoint. Almost everything would probably
    have to be done differently.
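
    A minimal sketch of those initial steps on linear data (mine; the
    blackpoint is the 20D figure cited earlier, the WB gains are made up):

        BLACKPOINT = 128
        WB_GAIN = {"r": 2.0, "g": 1.0, "b": 1.5}

        def linearize(raw, channel):
            # Blackpoint-bias and white-balance one linear RAW sample.
            return max(0, raw - BLACKPOINT) * WB_GAIN[channel]

    On gamma-encoded data the same steps would first require decoding
    back to linear, since subtraction and scaling don't commute with the
    power curve.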
    --
     
    JPS, Oct 8, 2005
    #60
