To RAW or not to RAW?

Discussion in 'Digital SLR' started by M, Oct 4, 2005.

  1. Andrew Haley (Guest)

    And how on earth can you possibly know that? Is there some reason you
    need to be so gratuitously rude?
    It seems that we do not disagree about substantial issues at all. You
    admit that by correctly dithering the input signal the quantization
    process can be made fully linear, and thus there is no resulting
    distortion. This seems to be in stark contrast to your original
    comment, which suggested that distortion was an inevitable consequence
    of quantization.

    Our only argument is about whether "correctly done" always implies
    dither. I suspect Vanderkooy and Lipshitz would be on my side if
    asked that question, but it is only a matter of opinion, not of fact.
    Not at all. No-one claimed that it wasn't possible to cause
    distortion. It's always possible to screw up, after all.

    Andrew Haley, Oct 11, 2005

  2. Andrew Haley (Guest)

    CDs definitely do "properly done" quantization! I suspect that there
    is not one CD mastering house in existence that does not dither the
    conversion, and most digital audio workstations re-dither after
    processing too.

    Andrew Haley, Oct 11, 2005

  3. You need to re-read a couple of things. The above said "no distortion".
    Then you should go actually read the papers listed above.

    "*No* *distortion*" is not the result of dithering. Reduced or
    masked, yes, but not totally eliminated.

    Many other uses, such as the PSTN, don't do dithering at all.
    Floyd Davidson, Oct 11, 2005
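Both positions above can be made concrete with a minimal sketch, assuming a mid-tread uniform quantizer and the classic ±1 LSB triangular (TPDF) dither of the Vanderkooy/Lipshitz literature (details not taken from the thread): without dither the quantization error is a deterministic function of the input and repeats exactly with the signal (which is why one side calls it distortion), while dither decorrelates it into something noise-like.

```python
import math
import random

def quantize(x, q):
    """Mid-tread uniform quantizer with step size q."""
    return round(x / q) * q

def quant_error(signal, q, dither=None):
    """Error added by quantizing each sample, with optional additive
    (non-subtractive) dither applied before the quantizer."""
    errs = []
    for x in signal:
        d = dither() if dither else 0.0
        errs.append(quantize(x + d, q) - x)
    return errs

random.seed(42)
q = 1.0 / 256                    # one LSB of an 8-bit-style quantizer
period = 128
one_cycle = [1.5 * q * math.sin(2 * math.pi * n / period) for n in range(period)]
signal = one_cycle * 32          # a low-level tone, exactly periodic

# Classic +/- 1 LSB triangular (TPDF) dither
tpdf = lambda: (random.random() - random.random()) * q

e_plain = quant_error(signal, q)
e_dith = quant_error(signal, q, dither=tpdf)

# How much does the error change from one signal period to the next?
plain_drift = max(abs(e_plain[n] - e_plain[n + period])
                  for n in range(len(signal) - period))
dith_drift = max(abs(e_dith[n] - e_dith[n + period])
                 for n in range(len(signal) - period))

print(plain_drift)   # 0.0 -- undithered error repeats exactly with the signal
print(dith_drift)    # around one LSB -- dithered error is decorrelated
```

Note that the dither does not remove the error power (it slightly increases it); what changes is the error's correlation with the signal.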
  4. Wrong. Signal level does *not*, in any way, change amplitude distortion.
    Do you even know what "amplitude distortion" is, or what causes it? Think
    about reactance... as one example of a common cause!

    It is a linear distortion.
    A non-linear distortion.
    Another, despite what you have previously said, non-linear distortion.
    You've been spouting, and continue to spout, techno-babble: big
    words that you don't understand. The problem is that not only is
    half of what you say wrong, but you later double back and say
    exactly the opposite thing, because you don't know what the words
    actually mean.

    You don't know enough about signals and signal processing to
    discuss this. You probably can't explain even simple things
    like how an FM transmitted signal that is shifted 90 degrees out
    of phase with the information signal can easily be demodulated
    with a pseudo-synchronous phase-locked loop if you use gold
    extrusion plated bidirectional diodes in the phase detection
    clamp circuit.

    I proposed exactly that to another fellow a couple years ago who
    was also spouting off big words, and his only response was that
    the OP had answered the question correctly by describing the FM
    modulation concept and what makes it more noise resistant than
    AM, while by contrast, I had merely attempted to describe a
    particular type of FM receiver design while neglecting to answer
    the original question.

    It went over his head, and my bet is it's over yours too.
    Floyd Davidson, Oct 11, 2005
  5. My bad. You might have read them. You *obviously* didn't
    understand them. They specifically demonstrate that your
    arguments here are trivially ridiculous.

    You've provided a cite that logically demonstrates your point
    is invalid. Is it rude to suggest you didn't actually read
    the cite???

    That is what they describe. So if there is no such distortion,
    why have a process to remove it? What value could there be in
    replacing non-existent distortion with noise?

    No, that is *not* what it does. "Fully linear" is *not* the
    point, nor does that reduce distortion. And since when is there
    "no resulting distortion" in a system that reduces distortion?

    You aren't paying attention to what they described.
    It *is*. Even if it is then masked with noise by dithering.
    You are grossly mistaken. There are *many* uses of quantization
    where it would be abjectly ridiculous to use dithering.
    Let's put the text back in so that readers know what you are calling
    a "screw up":

    Wannamaker, R.A., S.P. Lipshitz and J. Vanderkooy, "Dithering to
    Eliminate Quantization Distortion," Proc. of the Annual Meeting
    of the Canadian Acoustical Assoc., Halifax, Nova Scotia
    (Oct. 1989).

    The point again is still the same, that quantization distortion
    exists, it is *not* technically just a noise rather than a
    distortion, and it is in fact an inevitable result of the
    quantization process.
    Floyd Davidson, Oct 11, 2005
  6. Floyd Davidson wrote:
    This is complete nonsense. I truly hope you don't seriously believe it.

    David J Taylor, Oct 11, 2005
  7. John, just to let you know that I won't be responding to any further
    comments from Floyd, as his last statement to me was complete nonsense
    (and I suspect he knows it), and I'm no longer prepared to carry on a
    reasoned argument in such circumstances.

    I am happy to continue to engage in a dialog with you and the others,
    should you wish.

    David J Taylor, Oct 11, 2005
  8. 14 bit codecs are common (and are even used in cameras already).
    There is no need to jump to 16 bit directly, with the
    significant increase in both data storage space and processing
    time that would entail.
    Floyd Davidson, Oct 11, 2005
  9. Nothing you quoted is anything that I wrote. You've deleted the
    appropriate attribution, which would indicate that the below
    quoted text was written by David J Taylor.
    Thank you for posting this! I spent a good bit of time using
    google but didn't come up with these two, very interesting,
    references.
    Here's another couple of URLs of interest:

    Both help to add perspective to what various specifications mean.

    "Digital cameras have different output formats and you can
    use those formats to save memory space in your camera, but
    that savings with jpeg compression comes at a price: image
    quality in the form of decreased signal-to-noise, and
    clipped highlights. But the loss becomes less with
    increasing ISO, and the loss is less on consumer cameras."

    So much for the supposed dynamic range of a JPEG format. If the
    gain is cranked up and the image is taken at ISO 800, there is little
    loss in the conversion to JPEG simply because the SNR is so poor that
    JPEG doesn't make it any worse!
    Well, maybe.

    It does specifically say:

    "Possible dynamic range of the sensor is theoretical and in
    practice is limited by the 12-bit (or 10-bit) analog to
    digital converters in these cameras."

    However, under table 5 it also says:

    "Signal-to-noise assumes photon noise limited. Read noise, and
    other factors can only degrade this number."

    Catching little comments like that, and understanding the
    significance of it, is important to applying the data given.
    What that means is that none of the SNR figures in that chart
    (or elsewhere in the article, from what I can determine) are
    comparable with the SNR values we discuss for quantizing the
    data or the range available with any given number of linear
    bits. Read noise, which is not included in the data, is exactly
    what determines whether any given sample size in the quantizing
    process differentiates each quantized level from another.

    Hence, while certainly a 10 bit quantizer is a limiting factor
    (with a 60 dB dynamic range) that isn't actually demonstrated by
    the data for 12 bit quantizers, nor can I find confirming
    observations elsewhere. It appears it *should* be the case that a
    12-bit quantizer is just barely limiting the sensor. That is
    true because if all else is equal the 12 bit quantizer has a
    dynamic range of 72 dB, while the best sensors (such as the
    KAI-11000) have a 74 dB range.

    Perhaps it is a good idea to put that difference into
    perspective *before* discussing it. It is an almost
    insignificant difference, which can be measured but which is
    *not* visible in the end results. That is true if for no other
    reason than we are talking about more than a 12 f/stop
    brightness range, which is probably beyond the ability of most
    equipment to display. Typical photography is within the range
    of 5 to at most 7 f/stops. Hence keep firmly in mind that this
    discussion is about something of almost no significance to a
    user of a DSLR (as opposed to the user of night vision goggles
    using the same sensor, where a 2 dB increase in low light
    sensitivity is an entirely different beast).

    The question is also does the camera actually have the ability
    to make use of that extra 2 dB of dynamic range, and that cannot
    be answered. The article points out several possible causes of
    error, and points where assumptions are made to allow the
    analysis. Since the point was not to analyze the need for
    moving from 12 bit to 13 or 14 bit ADCs (with dynamic ranges of
    72, 78 and 84 dB respectively), the assumptions used for the
    observations in that article may or may not make the data
    invalid for this discussion.

    It is clear that the camera manufacturers did not see the added
    overhead (both for speed and storage space) required for 13 or
    14 bit ADCs as a worthwhile trade for whatever benefit would
    come from the added dynamic range. (I.e., the time and storage
    difference is significant, and the dynamic range is not.)

    And in fact there may be enough other noise in the electronics
    or from the sensor to mask the "theoretical" values that the
    article is using. Hence the 2 dB increase might be only 1 dB,
    or none.

    I do expect that when a sensor is developed that has, for example,
    greater than 80 dB of dynamic range *and* we have other technology
    that can increase the processing speed of the camera and the storage
    ability (both of which are almost certain to advance right along with
    sensor technology), that we will indeed see 14 bit ADCs in typical
    top-of-the-line DSLRs. That may happen in the next generation, or
    the one following... For now that is restricted to special cameras
    but is not seen in consumer products.
    Floyd Davidson, Oct 11, 2005
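The dynamic-range figures quoted above (60, 72, 78 and 84 dB for 10 through 14 bits) all follow from the roughly 6 dB-per-bit rule for a linear quantizer. A quick sketch, using the ~74 dB KAI-11000 figure from the text:

```python
import math

def quantizer_dynamic_range_db(bits):
    """Ratio of full scale to one quantization step, in dB:
    20*log10(2**bits), i.e. about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (10, 12, 13, 14):
    print(bits, round(quantizer_dynamic_range_db(bits), 1))
# 10 -> 60.2, 12 -> 72.2, 13 -> 78.3, 14 -> 84.3 dB

# How many bits would a ~74 dB sensor (the KAI-11000 figure quoted
# above) need before the ADC stops being the limiting factor?
sensor_db = 74.0
bits_needed = math.ceil(sensor_db / (20 * math.log10(2)))
print(bits_needed)   # 13 -- so a 12-bit ADC just barely limits such a sensor
```

This is only the quantizer-side bound; as the post notes, read noise and other electronics noise can eat the nominal 2 dB difference entirely.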
  10. Another non-sequitur.

    The question is not and never has been whether people use the
    term "quantization noise". What I said, which that URL does not
    address in any way, and which you cannot provide a URL to differ
    with, is that you cannot find a cite which discusses the
    significance of calling it "distortion" as opposed to "noise"
    and sides with its being noise and *not* a distortion.
    First you said that it is linear and therefore not a
    distortion; now you say it does depend on signal level (it's
    non-linear), and therefore doesn't depend on bandwidth. That's
    contradictory.

    None of that has anything to do with what a "bandwidth limited"
    channel is in relationship to an additive white Gaussian noise
    model for quantization noise. It refers to a channel that has
    bandwidth filters to eliminate aliasing and in the process
    eliminates all of the components of the quantization distortion
    that are above that frequency.

    Essentially, it describes a typical "4 kHz" telephone voice
    channel transported over digital facilities.
    But whether it can be appropriately modeled as an AWGN does
    have to do with bandwidth. You did get the reason it exists
    Not precisely true. Note that you say "single dominant noise source", and
    I assume you specifically mean quantization distortion as the noise source.
    However, what Bissell and Chapman are referring to is indeed quantization
    distortion... as "a large number of random processes with identical
    distributions". Each sampling cycle is a random process in the sense that
    they mean, because the sampling clock is not correlated with the input
    signal in any way. That makes, usually, for a very random cumulative
    result.

    On the other hand... since the advent of digital carrier and
    switching systems for the PSTN, the test tones used to measure
    circuit parameters are no longer exactly divisible by 100 Hz,
    and the above is exactly the reason why. Since the tones are
    very stable (generated either digitally or by the exact same
    timebase that would also be generating the sampling clock),
    there are some very interesting results obtained if test tones
    are a submultiple of the 8 kHz sampling frequency. That is because
    it no longer amounts to "a large number of random processes".
    Hence it is non-linear. Reducing the signal level increases the
    percentage of noise.
    It is non-linear in exactly the opposite direction, where
    increasing the signal level increases the percentage of noise.
    Porcine soap water. Some forms of distortion actually are
    linear: amplitude distortion and phase distortion. The
    percentage of distortion remains constant with varying signal
    levels. Yet both are *clearly* distortion (have you ever heard
    anyone refer to amplitude distortion as a "noise"???).

    Linear vs. non-linear has *nothing* to do with defining
    distortion vs. noise, and as long as you insist there is a
    relationship you will *not* be able to understand what
    quantization distortion is.
    Who told you that? The signal level is 4095 times the step size
    plus the first step level (not size), which is not necessarily 0.

    (Did you even look at the values you posted for a digitized image
    that you said was a 12-bit linear encoding?)

    Regardless, that is just another non-sequitur. It has *nothing*
    to do with whether the distortion is linear or not. In fact, it
    increases with the reduction in signal level, and hence is
    non-linear. But again, you are the one who has claimed that
    only a non-linear error is distortion, not me. In fact that has
    *nothing* to do with defining distortion.

    Again, harmonic distortion is non-linear and amplitude
    distortion is linear. Yet we *never* hear of anyone referring
    to either of those as noise... (In fact, "amplitude noise"
    would almost certainly be recognized as something very distinct
    from amplitude distortion.)
    Yes, under the circumstances that I noted (which confused you).
    As noise can be used to define an even larger number of errors,
    how can this be a valid argument?
    All noise is a "linearity error". But you are still calling it
    linear in some articles and insisting it is non-linear in others.
    Floyd Davidson, Oct 11, 2005
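The PSTN test-tone point above can be illustrated with a sketch, assuming a plain 8-bit mid-tread quantizer over a ±1 range (the exact codec law doesn't matter for the point): a tone whose frequency exactly divides the 8 kHz sampling rate hits only a handful of sample phases, so its quantization error is a short, exactly repeating pattern rather than "a large number of random processes".

```python
import math

fs = 8000                       # PSTN sampling rate, samples per second
q = 2.0 / 2 ** 8                # step of an 8-bit quantizer spanning +/-1

def distinct_errors(f_tone, n_samples=8000):
    """Count distinct quantization-error values for a pure tone of
    f_tone Hz.  Integer phase arithmetic keeps repeated phases exact."""
    errs = set()
    for n in range(n_samples):
        x = math.sin(2 * math.pi * ((f_tone * n) % fs) / fs)
        e = round(x / q) * q - x
        errs.add(round(e, 12))
    return len(errs)

# 1000 Hz divides 8000 Hz evenly: only 8 sample phases ever occur, so
# the error is a short repeating pattern locked to the tone -- a
# distortion, not anything noise-like.
print(distinct_errors(1000))    # a handful of values (at most 8)

# 1004 Hz is not a submultiple: thousands of distinct phases occur and
# the error spreads out into the usual noise-like distribution.
print(distinct_errors(1004))
```

That repeating pattern is exactly why digital-era test tones are chosen not to be submultiples of the sampling frequency.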
  11. I'll be damned. You actually passed the test.

    It is complete nonsense. Just like most of what you've been posting.

    (As noted, the last guy I pulled that one on claimed it was
    a valid description of a demodulator... giggle snort.)
    Floyd Davidson, Oct 11, 2005
  12. I'm not sure why you would be so upset that someone tried to
    trap you with a little pseudo-babble, given that is exactly what
    you've been posting for quite some time. The only difference is
    that when I do it I do know that is what I posted... you apparently don't.

    Regardless, this discussion has long since passed the point of being useful.
    Floyd Davidson, Oct 11, 2005
  13. SNIP
    I, so far, fail to see the connection with (not-oversampled) digital
    imaging. In fact, digital images are typically spatial frequency
    limited by AA filters and lens characteristics. How do you suggest
    dithering to be applied to such image capture?

    Despite the "Acoustical" aspect (this group is about DSLRs) I tried to
    find the document and only found references to it. Do you have any
    links to the document you apparently use to "build your case". Also,
    in particular, additional references with respect to image capture
    could be helpful.

    Bart van der Wolf, Oct 12, 2005
  14. JPS (Guest)

    That works fine for filtered audio, where the sample rate is
    significantly higher than necessary for the waveform, but we generally
    see the influence of every sample in digital imaging, and it is
    perceived as noise. With audio, the filter returns the signal to
    approximately its analog source, because we don't hear all of the
    samples individually, like we see pixels.
    JPS, Oct 12, 2005
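The audio-versus-imaging difference can be sketched as follows, assuming a mid-tread quantizer and ±1 LSB TPDF dither: a steady level 0.3 LSB above zero is lost entirely without dither, while with dither a reconstruction filter (here the crudest one, a plain average) recovers it. Audio playback applies such filtering; an image presents every sample to the eye individually, so the dither is simply seen as noise.

```python
import random

random.seed(7)
level = 0.3                     # a detail 0.3 LSB above zero (units of 1 LSB)

def quantize(x):
    """Mid-tread quantizer with a step of exactly one LSB."""
    return round(x)

# Without dither every sample quantizes to the same code: the
# sub-LSB detail is gone, and no later filtering can recover it.
plain = [quantize(level) for _ in range(100_000)]

# With +/- 1 LSB TPDF dither the individual samples are noisy, but
# averaging (a crude low-pass reconstruction filter) recovers the
# sub-LSB level.
tpdf = lambda: random.random() - random.random()
dithered = [quantize(level + tpdf()) for _ in range(100_000)]
mean_dithered = sum(dithered) / len(dithered)

print(sum(plain))               # 0 -- the 0.3 LSB detail vanished
print(round(mean_dithered, 2))  # close to 0.3 -- recovered by averaging
```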
  15. JPS (Guest)

    It is not perceived as distortion, though. It looks more like noise,
    the more noise there is (if there is no noise, it may look like banding).
    You are good at regurgitating texts; now, try to understand things,
    especially in regard to practical matters.

    The experience of coarse quantization in an image is one of noise, not
    one of distortion, as it is usually thought of.

    You are an ivory tower asshole watching like a hawk for people to use
    practical terminology that betrays arcane and irrelevant definitions,
    and pose as their guardian. You serve no useful purpose; most of your
    ideas about linear RAW data are way off from reality. Two thresholds
    from one-bit imaging? That's ridiculous, and shows that you have
    absolutely zero personal visualization of digital imaging.
    JPS, Oct 12, 2005
  16. JPS (Guest)

    14 is probably next, but not because it is the ultimate; it will be for
    file storage (each extra bit is less compressible) and cost reasons.
    JPS, Oct 12, 2005
  17. JPS (Guest)

    So, Floyd; tell us all about the two thresholds in 1-bit capture. I'm
    sitting on the edge of my seat!
    JPS, Oct 12, 2005
  18. Okay, so you repeat what I said, and add "and cost reasons".

    Other than the cost of having more memory and faster cpu's,
    which would happen with or without a change to higher-bit ADCs,
    I don't see cost as a factor. The ADC itself isn't going to
    have a significant cost difference.

    The problem is cpu speed, available storage space, and the time
    it takes to process each image in the camera.
    Floyd Davidson, Oct 12, 2005
  19. Simple ignorance. It *is* perceived as distortion by people who
    know what the difference is. Systems design engineers, for example,
    *must* deal with the difference.
    If I say it correctly the first time, and you claim it isn't so,
    is there some value to saying it exactly the same way again?
    No... So I change the way I say it. If that doesn't work to
    help you understand, I go find a number of places where others,
    who have credibility that you cannot impugn, have defined it
    the same way.

    The fact that I can do it in that order should suggest to you
    that I *must* have understood it in the first place well enough
    to have stated it identically to an unimpeachable source. (And
    that I am well aware of how discussions on Usenet progress.)
    That is simply bullshit. As I've pointed out to you, it is a
    distortion. Distortion is a subset of the general term "noise",
    but is distinctly different than the specific term "noise". And
    that difference *does* mean that it can and is treated
    differently. Why else would dithering to mask the distortion
    with a noise be of such significance that someone here would
    (mistakenly) claim it is the one and only "proper" way to
    quantize a signal?

    As I noted originally, to users it makes no difference, but to
    design engineers it is very significant.
    So the only argument you have left is an attempt at gratuitous
    insults. You aren't even accurate at insulting people!

    I pointed out a *fact*, that had significance at the level of
    the existing discussion. The prime significance of that fact
    is that it is not well known or understood, but that it does
    affect understanding of the topic.

    More than one person went ballistic with the same kind of
    nonsense that you are posting. And all because I posted
    something that is difficult to understand and slightly more
    complex than they want to deal with, but which is also
    significant to the topic.
    Hilarious statement. 1 bit... *binary*. That is *two*
    thresholds by definition. (Now I should try to explain
    something like m-ary multilevel encoding to you! It trades
    bandwidth for signal to noise ratio. If you think binary is
    complex, try an 8 bit digit all in one pulse.)

    Note that none of this has to do with "imaging". And that is
    why you are so confused by statements like the above. It has to
    do with digital signal processing, whether that is of image
    data, voice data, or measurements of the ocean depth at the
    North Pole.

    Your lack of exposure to the details is not significant until
    you start mouthing off, particularly with gratuitous insults for
    those who are familiar with the details.
    Floyd Davidson, Oct 12, 2005
  20. Dithering was injected into this thread in an attempt to
    discount the statement I made that quantization error is a
    distortion. I don't know that it has any connection to digital
    imaging. It serves only to *prove* that quantization error is
    in fact a distortion (the opposite of what the person who
    injected it claimed his cite would support).
    When I gave that cite I pointed out that the *title* contains
    the significance. There is no need to discuss the details of
    their process, because it does not apply (as you note) to
    digital imaging. The point is *only* that the purpose of
    dithering is to "eliminate quantization distortion", which
    therefore obviously must exist. More detail would be superfluous.

    Not only am I not making a case for the use of dithering, I
    previously stated that those who claim it is the only "proper"
    way to quantize an analog signal are absolutely wrong.

    Please try to focus on what I have argued. It is a waste of
    time to ask me to defend what I have not proposed. On the other
    hand, if you simply want to wrestle more with these side issues,
    do ask but don't write your questions as accusations. And then
    don't accuse *me* of being off topic!
    Floyd Davidson, Oct 12, 2005
