To RAW or not to RAW?

Discussion in 'Digital SLR' started by M, Oct 4, 2005.

  1. Then not one of the uses of quantization for audio that I am familiar
    with (CDs, the Public Switched Telephone Network, etc.) is using
    "properly done" quantization? Or have you mistaken what the paper
    actually says?
    Quantization, as I've noted, adds *distortion* when the signal
    is converted back to analog. It may not be obvious, but if the
    signal is never converted back, there is no distortion.

    Do a google search on "quantization distortion" and read up on
    it.
     
    Floyd Davidson, Oct 10, 2005
    #81

  2. Neither of you have read those papers, and apparently have no
    idea what they say.

    They describe the use of dithering to *remove* quantization
    distortion, which is the error from quantization. The error
    directly correlates to the signal that is quantized, which is
    why it is properly called "distortion". By dithering the signal
    before quantization it is possible, as they demonstrated, to
    cause the quantization error to *not* be correlated to the
    signal. Thus, the dithering changes the resulting signal from
    signal + distortion to a signal + noise, and *that* noise is
    sometimes called "quantization noise" rather than quantization
    distortion. (Actually, it probably shouldn't be called
    quantization noise, but that is just about as good as anything
    since quantization is part of the process that causes it. It
    masks the quantization error (distortion) and raises the actual
    noise floor, but due to the particular spectral characteristic
    it is not as objectionable as quantization distortion. All of
    which is true for... audio, but perhaps not for quantizing
    brightness levels in image files.)

    Of course they do *not* differentiate that technique as
    "properly done" quantization. Most proper uses of quantization
    do not dither the signal...
    Dithering, according to the people who came up with the process,
    is a way to remove the quantization distortion that you both
    claim doesn't exist. (A small numeric sketch of this follows
    this post.)

    But I do appreciate you posting references to expert evidence
    that what I've said is in fact correct!

    Here's another paper which you will no doubt find fascinating,
    since you claim quantization distortion doesn't exist:

    Wannamaker, R.A., S.P. Lipshitz and J. Vanderkooy, "Dithering to
    Eliminate Quantization Distortion," Proc. of the Annual Meeting
    of the Canadian Acoustical Assoc., Halifax, Nova Scotia
    (Oct. 1989).




    Okay, you don't understand terms such as "bandwidth limited"?
    Sure, but that won't make you any more right about this than you are about
    any of the other weasel words you've tried.

    "If a particular transmitted signal always produces the
    same received signal, i.e., the received signal is a
    definite function of the transmitted signal, then the
    effect may be called distortion."
    Claude Shannon, "A Mathematical Theory of
    Communication", 1948, p19,


    "Eight bit audio has an additional problem: the quieter the
    sound, the louder the "noise" in the signal appears to be!
    This isn't actually noise, but quantization distortion. The
    quieter a sound gets, the fewer bits that are used to
    describe it; the fewer the bits, the higher the distortion."
    http://www.cybmotion.com/sharing/OnTheLevel.pdf
    Chris Meyer, CyberMotion; "On The Level", p4
     
    Floyd Davidson, Oct 10, 2005
    #82
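
    A minimal Python sketch of the dithering point above: undithered
    quantization error is a definite function of the input (Shannon's
    criterion for "distortion", quoted in this post), while TPDF-dithered
    error is not. The 8-bit step size and the sample value are
    illustrative assumptions, not taken from the thread.

        import numpy as np

        step = 2.0 / 256                 # assumed 8-bit quantizer on [-1, 1)
        quantize = lambda x: np.round(x / step) * step

        x = 0.3031                       # an arbitrary "analog" sample value
        # Same input always gives the same error -> distortion, per Shannon:
        print(quantize(x) - x, quantize(x) - x)

        # Standard TPDF dither: sum of two uniform variables, 2 LSB peak to
        # peak in total, added before quantization (as in the cited papers).
        rng = np.random.default_rng(0)
        tpdf = lambda: step * (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5))
        # Same input now gives a different error on each conversion -> noise:
        print(quantize(x + tpdf()) - x, quantize(x + tpdf()) - x)
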

  3. It's even more interesting to listen to know-it-alls pontificate about
    technical things, but put a camera in their hands and they're lost.
     
    Randall Ainsworth, Oct 10, 2005
    #83
  4. You would do well to stop memorizing technical gibberish and work on
    improving your photographic skills.

    Did you check your histograms for all of those shots out in the
    sunshine?
     
    Randall Ainsworth, Oct 10, 2005
    #84
  5. Floyd Davidson wrote:
    []
    "Distortion" is a misleading term here. Digitisation introduces an
    well-defined non-linearity into the transfer characteristic, as does
    harmonic distortion, but whereas a system may well be linear up to signal
    levels near clipping (in the case of harmonic distortion), with
    digitisation the non-linearity affects all signal level ranges, and is
    constant in nature as a one bit peak-to-peak uniformly distributed error,
    not correlated with the gross signal amplitude. Thus the error due to
    digitisation is better described as "noise", although that noise is
    correlated with the fine-grain detail of the signal. As you say, dither
    can reduce the correlation of the quantising noise with the fine-grain
    signal level. (A quick numeric check of these error statistics
    follows this post.)

    If you wish to define the digitisation process as creating "distortion",
    then the distortion is created when the signal is digitised, not when it
    is reconverted to analog, so your assertion that "if the signal is never
    converted back, there is no distortion" is incorrect.

    In a well-designed system, digitisation should not of itself cause any
    additional noise, but it seems from John's tests that the 12-bit
    digitisation used in certain cameras is not capable of capturing the full
    dynamic range of the sensor.

    David
     
    David J Taylor, Oct 10, 2005
    #85
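
    A quick numeric check of the error statistics David describes above
    (one step peak-to-peak, uniformly distributed). The 12-bit step and
    the test signal are assumptions for illustration only:

        import numpy as np

        rng = np.random.default_rng(1)
        step = 2.0 / 4096                     # assumed 12-bit quantiser
        signal = rng.uniform(-1, 1, 100_000)  # busy signal spanning many steps
        err = np.round(signal / step) * step - signal

        print(err.min() / step, err.max() / step)  # ~ -0.5 .. +0.5 (one step p-p)
        print(err.std() / step)                    # ~ 0.289 = 1/sqrt(12)
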
  6. Let's try this from the top, and use a few references so that you
    realize I'm not exactly the only one who uses those terms.

    "If a particular transmitted signal always produces the
    same received signal, i.e., the received signal is a
    definite function of the transmitted signal, then the
    effect may be called distortion."
    Claude Shannon, "A Mathematical Theory of
    Communication", 1948, p19,

    You thought *I* came up with that definition? Shannon may not
    have known much about digital signal-processing either, eh? But
    he did invent the idea...
    Wannamaker, R.A., S.P. Lipshitz and J. Vanderkooy, "Dithering to
    Eliminate Quantization Distortion," Proc. of the Annual Meeting
    of the Canadian Acoustical Assoc., Halifax, Nova Scotia
    (Oct. 1989).

    They seem to believe otherwise.
    Do you understand what "bandwidth limited" means, when
    referenced to a channel?

    (Just in case you don't have a good reference point for what
    that statement meant, it describes a typical 4 kHz wide
    telephone voice channel.)
    Do you know what Gaussian is? So every telephone systems
    engineer in the country has been doing this wrong since
    Shannon's paper in 1948?

    In fact, I think you probably should look up the engineering
    definitions of "linear" and "correlated" and see if you haven't
    confused them. "Linear" does not define any difference between
    what is distortion and what is noise. Correlation to the input
    signal does. In fact you are dead wrong about quantization
    error being linear, it is not. And if you want a linear
    distortion, read up on the effects of "robbed bit signaling" on
    PCM voice channels in the PSTN.
    Sure, but that won't make you right about this.

    "If a particular transmitted signal always produces the
    same received signal, i.e., the received signal is a
    definite function of the transmitted signal, then the
    effect may be called distortion."
    Claude Shannon, "A Mathematical Theory of
    Communication", 1948, p19,

    "Eight bit audio has an additional problem: the quieter the
    sound, the louder the "noise" in the signal appears to be!
    This isn't actually noise, but quantization distortion. The
    quieter a sound gets, the fewer bits that are used to
    describe it; the fewer the bits, the higher the distortion."
    http://www.cybmotion.com/sharing/OnTheLevel.pdf
    Chris Meyer, CyberMotion; "On The Level", p4

    Instead of standing by what you've said, why not do some research
    *before* you make more incorrect comments?
     
    Floyd Davidson, Oct 10, 2005
    #86
  7. Floyd Davidson wrote:
    []
    I have already explained why I believe the term "noise" is better.
    Yes. Quantisation error depends on the signal level, and not its
    bandwidth.
    Yes, I understand the Gaussian distribution. Quantisation noise does not
    have a Gaussian distribution, but a linear distribution.
    Actually, I spent several years researching digital signal processing, and
    many more years applying the well-known DSP techniques.

    I did not say that quantisation error was linear - but that it has a
    linear distribution (ranging from minus one half to plus one half of the
    quantising step).

    I have already explained why I think that the distortion caused by
    quantisation is best described as noise. Indeed, that is the term most
    practitioners would recognise.

    David
     
    David J Taylor, Oct 10, 2005
    #87
  8. Yes, it's a technical term. One that is leading you all over
    the territory.

    Why don't you do a little research, as suggested.
    Do you understand the relation between clipping and harmonic
    distortion? I'm not exactly sure what you meant by that
    statement, but it appears that you might not. If the "system"
    is linear, there is no harmonic distortion. If the system is
    clipping, it is not linear, and it *will* produce harmonic
    distortion.
    It does not affect all signal levels equally. (Why don't you do
    some research? Do you want me to post a few cites for that one too?)
    Cite, please. (You are confused. *Greatly* confused.)
    Now you are just babbling. By Claude Shannon's definition, it
    is distortion. By David Taylor's definition it isn't. Who we
    gonna believe?
    No sir. I've *never* said any such thing. The people who
    invented the technique don't say that either. What's the point
    of you folks citing references that you are unfamiliar with, and
    then one round later here you are contradicting exactly what the
    reference said?

    Quantization produces quantization distortion. Dithering
    reduces the quantizing distortion, by decorrelating the error
    from the signal, thus leaving only noise, not distortion.

    Remember this cite? Perhaps you didn't catch the irony in the
    title of this paper?

    Wannamaker, R.A., S.P. Lipshitz and J. Vanderkooy, "Dithering to
    Eliminate Quantization Distortion," Proc. of the Annual Meeting
    of the Canadian Acoustical Assoc., Halifax, Nova Scotia
    (Oct. 1989).
    Cite please! You are saying that apples are supposed to be equal to
    oranges, and I hate to tell you, but they are not.

    A digital signal *cannot* be identical to an analog signal, by definition.
    The two are unique, and *different* by definition.

    But an analog signal transported via a digital channel... outputs an
    analog signal. The difference between the input *analog* signal and the
    output *analog* signal is a combination of noise and distortion. (A
    short error-spectrum sketch follows this post.)

    No reconversion to analog? No distortion... just two related data sets.
    Cite please. You are babbling again.
     
    Floyd Davidson, Oct 10, 2005
    #88
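
    A small spectrum sketch of the round-trip point above: quantise a
    pure sine, reconstruct, and look at where the error energy sits. It
    lands on discrete harmonics of the input (distortion products), not
    a flat noise floor. The sample rate, tone, and 8-bit channel are
    illustrative assumptions:

        import numpy as np

        fs, f0 = 48000, 1000        # one second of data, so FFT bins are in Hz
        t = np.arange(fs) / fs
        x = 0.5 * np.sin(2 * np.pi * f0 * t)
        step = 2.0 / 256            # assumed 8-bit "digital channel"
        err = np.round(x / step) * step - x

        spec = np.abs(np.fft.rfft(err))
        strongest = np.argsort(spec)[-5:]   # five largest error components
        print(sorted(strongest))            # all multiples of 1000 Hz
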
  9. Nobody cares if you *believe* one term is better. The fact is
    that there is a standard definition, and you are using the wrong
    criteria to come to the wrong conclusions, partially because you
    don't understand or use proper terminology.

    And then you have the nerve to claim I'm either wrong or using
    improper terminology. Note that I've been citing multiple
    references, and you have yet to provide even a single reference
    that agrees with you instead of me.
    Ahhh... you don't have a clue as to what the term means.

    It merely means that the *channel* does not have unlimited
    bandwidth. For example, a typical voice channel used in the
    telephone industry is thought of as a 4 kHz wide channel. In
    fact it is bandwidth limited by filters at the transmit end, and
    is more like 3.75 kHz wide. Of course with a Nyquist limit of 4
    kHz, that is significant because aliasing (ouch, another linear
    distortion!) is filtered out and right along with it a great
    deal of the distortion products from quantization. The
    amplitude distortion (ouch, another linear distortion) that
    results is out of the range necessary for use by the human ear,
    but does affect V.34 and V.90 modems. The phase distortion
    produced (ouch again, yet *another* linear distortion) is of no
    consequence at all to human ears, but again it affects modems.

    Regardless, quantization error, as I previously noted, is
    typically modeled as an additive white Gaussian noise (AWGN).

    That is *not* to say that quantization error is Gaussian. "One
    reason for this is that a Gaussian distribution is approached
    when a large number of random processes with identical
    distributions combine to produce a cumulative effect -- even if
    the individual processes are not themselves Gaussian." Bissell
    and Chapman, "Digital Signal Transmission", 1992, Cambridge
    University Press, p. 68. (A quick simulation of this follows
    this post.)
    It is neither. Whatever gives you the idea that it is linear?
    For linear sampling, the error relative to the signal is necessarily
    significantly larger for small signals than it is for large signals.
    If you are fishing for insults, that line sure could drag up a
    few!
    Which of course means that it does *not* have a linear
    distribution, unless you adjust the sampling steps to match the
    signal level.
    I noted right at the start that many people use the term noise,
    and that it is technically incorrect. That is *obviously* true.
    Claiming it is "best described as noise" is simply ignorant. It
    may work to use that term, but it is far from "best".

    That is *not* to say that many rather good sources do not
    intermix them. But if you find a discussion of the technical
    difference between distortion and noise, *every* *single* *one*
    will point out that quantization causes distortion, not noise.
     
    Floyd Davidson, Oct 10, 2005
    #89
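
    A quick simulation of the Bissell and Chapman point quoted above:
    each individual quantization error is uniform, not Gaussian, yet the
    sum of many independent such errors approaches a Gaussian, which is
    why the AWGN model works. The counts are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        u = rng.uniform(-0.5, 0.5, (100_000, 12))  # 12 independent uniform errors

        kurt = lambda x: np.mean(((x - x.mean()) / x.std()) ** 4) - 3
        print(kurt(u[:, 0]))        # ~ -1.2 : a single uniform is not Gaussian
        print(kurt(u.sum(axis=1)))  # ~ 0    : the sum is close to Gaussian
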
  10. Floyd Davidson wrote:
    []
    The term quantisation noise is so widely used, but if you insist:

    http://www.digitalradiotech.co.uk/sampling.htm
    I will let others be the judge of that!
    Quantisation error has nothing to do with bandwidth. It exists because
    whilst an analog signal has a continuous range, the digitised signal is
    only represented by a finite set of values.
    This may be true where there are a large number of similar amplitude
    values, but it is not true where there is a single dominant noise source.

    []
    The amplitude distribution is linear (assuming an ideal ADC). The
    magnitude of the error is the same for both small and large signals. Of
    course, that means that if the signal amplitude is reduced, the amplitude
    of the noise will increase relative to the signal. Note that this is in
    complete contrast to harmonic distortion in analog systems, where the
    amplitude of the distortion reduces as a fraction of the signal amplitude
    when the signal level is lowered. This distinction is another reason
    practitioners prefer the term noise. (A short SNR-versus-level sketch
    follows this post.)

    []
    We have been talking about analog signals which are digitised to 12-bit
    accuracy. The signal level is some 4096 times the step size.

    Many people use the term noise because it is the most useful way to model
    the effects of quantisation within a system, as you have already quoted.
    As distortion can be used to describe so many different errors, I would
    use the term linearity error to describe quantisation error if you forced
    me to name an alternative.

    David
     
    David J Taylor, Oct 10, 2005
    #90
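
    A short sketch of the contrast David draws above: with a fixed
    quantiser the absolute error stays roughly constant, so the error
    relative to the signal grows as the level drops (the opposite of
    analog harmonic distortion). The bit depth and test levels are
    illustrative assumptions:

        import numpy as np

        step = 2.0 / 4096                   # assumed 12-bit quantiser
        t = np.arange(48000) / 48000

        def snr_db(amplitude):
            x = amplitude * np.sin(2 * np.pi * 997 * t)
            err = np.round(x / step) * step - x
            return 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

        for amp in (1.0, 0.1, 0.01):        # 0, -20, -40 dB of full scale
            print(amp, round(snr_db(amp), 1))   # SNR falls roughly dB-for-dB
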
  11. I have done some research on the topic being discussed here, and
    just discovering this thread, I'll point you to some of it:

    Digital Camera Raw versus Jpeg Conversion Losses
    http://www.clarkvision.com/imagedetail/raw.versus.jpeg1

    The above page analyzes the quantization losses from 12-bit raw
    to 8-bit jpeg, and includes Poisson noise from photon (electron)
    counting, for 3 types of cameras, from point and shoot to high end
    DSLR. (A toy shot-noise comparison follows this post.)

    Table 3 on this page:
    http://www.clarkvision.com/imagedetail/digital.signal.to.noise
    shows sensor dynamic range (full well electrons/read noise
    in electrons) and indeed some sensors (e.g. better DSLRs)
    do have more than 12 bits of dynamic range.

    Roger
     
    Roger N. Clark (change username to rnclark), Oct 10, 2005
    #91
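
    A toy comparison related to Roger's pages above: photon (Poisson)
    shot noise versus the quantisation step of a 12-bit ADC. The
    full-well figure is a round illustrative assumption, not Roger's
    measured camera data:

        import numpy as np

        full_well = 50_000                  # electrons at saturation (assumed)
        gain = full_well / 4095             # electrons per 12-bit count
        q_noise = gain / np.sqrt(12)        # RMS quantisation error, electrons

        for electrons in (50_000, 5_000, 500, 50):
            shot = np.sqrt(electrons)       # Poisson: sigma = sqrt(N)
            print(electrons, round(shot, 1), round(q_noise, 1))
        # Shot noise dominates the quantisation error except deep in shadow.
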
  12. 1-bit has only one threshold. Noise + signal may or may not register.
    I'm sure that you would misapply the concepts even if you did.

     
    JPS, Oct 10, 2005
    #92
  13. The entirety of a thread is not about the OP. The thread takes on a
    life of its own. I mentioned that the 8-bit 2.2-gamma data format has
    more possible dynamic range than a 12-bit linear RAW, and you argued
    with that. (A back-of-envelope check of that claim follows this post.)
    Nonsense. You argued with my statement about data format precision.
    Perhaps, the first time it wasn't 100% clear, but every single time I or
    someone else said that I was talking about the format, and not the data
    from a specific capture, you still kept on addressing the latter.

    CLUE: There isn't a single person who participated in this sub-thread
    that thinks that you can get extra dynamic range by converting to JPEG.
    Exactly who are you arguing with? IMO, you are just trying to look
    correct, and don't care what is really being discussed.
     
    JPS, Oct 10, 2005
    #93
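
    A back-of-envelope check of the data-format claim above, treating
    the smallest nonzero code in each format as the deepest addressable
    shadow (idealised pure power-law gamma, no blackpoint offset):

        import math

        linear_12bit = 4095          # max / smallest nonzero code, linear
        gamma_8bit = 255 ** 2.2      # (1/255)^2.2 is the darkest gamma level

        print(math.log2(linear_12bit))  # ~ 12.0 stops
        print(math.log2(gamma_8bit))    # ~ 17.6 stops
        # The 8-bit gamma format can *address* a wider range; whether real
        # shadow detail survives the conversion is the separate question
        # being argued in this sub-thread.
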
  14. It's probably true of any camera that has reasonable noise over more than
    a couple stops of ISO settings. How could one get a relatively clean
    ISO 1600 with 12 bits, and think that ISO 100 is fully representable
    with 12 bits?

     
    JPS, Oct 10, 2005
    #94
  15. It looks like English, but it is more of a puzzle than a statement.

    In any event, I haven't seen any RAW file formats so far that weren't
    linear. Some scale the numbers after digitizing, so you might get
    something like 101 102 104 105 107 108, etc., but they all translate well
    to 2.2 gamma after you subtract their blackpoint. I can "zoom" into any
    subset of the levels just above blackpoint and do the same. (A tiny
    conversion sketch follows this post.)
     
    JPS, Oct 10, 2005
    #95
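
    A tiny sketch of the conversion JPS describes: subtract the
    blackpoint from the scaled linear raw levels, normalise, then encode
    with 2.2 gamma. The blackpoint of 100 and white level of 4095 are
    assumptions chosen to fit the sample numbers in the post:

        raw = [101, 102, 104, 105, 107, 108]   # scaled linear levels (from post)
        blackpoint, white = 100, 4095

        encoded = [round(255 * ((v - blackpoint) / (white - blackpoint)) ** (1 / 2.2))
                   for v in raw]
        print(encoded)                         # small but distinct 8-bit values
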
  16. Here's what I believe: Technically, it is distortion, but since the
    distortion occurs at the single-sample level, and manifests itself
    in a manner similar to noise, it is practical to refer to its result as
    noise. Especially so when you consider the fact that it enhances
    existing noise.
     
    JPS, Oct 10, 2005
    #96
  17. []

    I am not defining whether or not the effects of quantisation are a
    distortion. In that the effects make the transfer characteristic
    non-linear at the fine-grained level, yes they are a distortion, but in a
    well designed practical system the effects should be dominated by the
    noise of the signal being digitised, and under those circumstances the
    effects of quantisation more resemble classic analog noise (an addition to
    the signal independent of signal level) than of classic analog distortion.
    In my experience, most practitioners of digital signal processing will
    therefore consider and refer to quantisation effects as noise rather than
    distortion.

    Consulting Google:
    "quantisation noise" - 13,800 hits
    "quantization noise" - 127,000 hits

    "quantisation distortion" - 308 hits
    "quantization distortion" - 10,400 hits

    (I do hope I get the spleeing right for that!)

    What is much more important than the descriptive name used is a clear
    understanding of the quantisation process, the errors it introduces into
    the signal, and how the effects due to such errors can be minimised
    or reduced.

    Cheers,
    David
     
    David J Taylor, Oct 10, 2005
    #97
  18. End of discussion then. It is, as I said at the start,
    technically distortion.
    That has about as much significance as this business of it
    supposedly being or not being linear, and whatever else it was
    that has been thrown out. None of those is of significance.

    There are three points of significance. A distortion depends on
    a correlation between the input signal and the error. A noise,
    in its broadest sense, is *any* change (and thus in that sense
    noise includes all distortions).

    The third point is that sometimes, such as when modeling
    quantization distortion as an additive white Gaussian noise,
    the correlation can under very special circumstances be ignored,
    and noise generalities which normally do not apply can be used.
     
    Floyd Davidson, Oct 11, 2005
    #98
  19. Ain't that the truth. Instead, you are babbling again:
    Since making "the transfer characterisitc non-linear at the
    fine-grained level" has no relationship to what is or is not
    distortion (try applying that to amplitude distortion, for
    example), that statement means nothing.
    Oh, if only that were true... just think of all the engineering to
    get around the effects of quantization noise that would not have been
    necessary! Mu-law vs. A-law wouldn't exist. Dithering would not be
    useful, etc. (A small mu-law sketch follows this post.)
    So tell me just what conclusions you would draw from the above
    numbers?

    I would say it means there are a lot more technically incorrect
    references than correct ones. As I noted, you *cannot* find even
    one discussion about which term is technically correct that does not
    say it is in fact distortion.
    And trying to apply pseudo-engineering to that, by suggesting
    that linear/non-linear, or "fine grained" vs. something else,
    settles it, is just babbling.

    The controversy in this thread is based on one simple fact:
    you don't know what "distortion" is relative to "noise", and
    refuse to make any effort at learning about it.

    None of your statements about what distinguishes quantization
    error as a distortion vs a noise can be successfully applied to
    amplitude distortion. Perhaps doing some research on the
    defining characteristics of amplitude distortion would help you
    understand distortion on a larger scale and enable applying that
    to quantization distortion.
     
    Floyd Davidson, Oct 11, 2005
    #99
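
    A small mu-law sketch for the companding point above: compressing
    before an 8-bit quantiser spends more steps on small signals, which
    roughly equalises the error relative to the level. mu = 255 is the
    standard value; the test levels are illustrative assumptions:

        import numpy as np

        mu, step = 255.0, 2.0 / 256                       # 8-bit quantiser
        compress = lambda x: np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
        expand = lambda y: np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu
        q = lambda x: np.round(x / step) * step

        t = np.arange(8000) / 8000
        snr = lambda x, e: 10 * np.log10(np.mean(x**2) / np.mean(e**2))

        for amp in (1.0, 0.1, 0.01):
            x = amp * np.sin(2 * np.pi * 997 * t)
            print(amp,
                  round(snr(x, q(x) - x), 1),                   # linear: falls fast
                  round(snr(x, expand(q(compress(x))) - x), 1)) # mu-law: ~flat
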
  20. I already said that one characteristic which distinguishes amplitude
    distortion from quantisation noise is that in general amplitude distortion
    reduces with reduced signal level (consider percentage harmonic distortion
    in well-designed audio amplifiers) whereas the percentage of quantisation
    noise in a signal will increase with reduced signal level.

    I have tried to provide some insight into how quantisation can be
    considered as both distortion and noise in practical systems, but that the
    majority of practitioners prefer the term "quantisation noise" rather than
    "quantisation distortion", as evidenced by the Google searches.

    David
     
    David J Taylor, Oct 11, 2005
