I Missed the Lecture on DV Compression-- Anyone Have the Cliff Notes?

Discussion in 'Amateur Video Production' started by eruga, Sep 14, 2003.

  1. eruga

    eruga Guest

    When defining the DV format as having a fixed 5:1 compression ratio, how
    does that apply to say, a DV AVI file residing on a computer's hard drive,
    that was captured via firewire from DV videotape, recorded and played back
    on a DV camcorder? Where does the 5:1 compression take place?

    The camera's CCDs certainly don't ignore 5 units of incoming light for
    every 1 it accepts. Does it occur when the tape records the CCDs' output? If
    that were the case, I'd expect you could adjust the compression rate by
    varying the tape speed. Yet the miniDV and DVCAM formats (which have
    different tape speeds) both yield DV 5:1 video. And lastly, it doesn't
    occur when the file is captured by the computer, which, as any posting
    newbie discovers, is a straight file transfer.

    I know enough not to confuse compression ratio with color sampling ratios
    (4:2:2, 4:1:1). I've been able to understand comparative file size and file
    transfer rate implications of uncompressed, DV and VBR compression schemes.
    But when trying to understand where the format-defining compression rate of
    DV occurs, and its particular image-to-file-size efficiencies and
    limitations, I'm at a loss. Any experts out there want to crack this nut
    (ouch)?
     
    eruga, Sep 14, 2003
    #1

  2. eruga

    Tony Guest

    I'm no expert, but in my reading on here and the web while learning about
    DV-camcorder to DivX AVI, to SVCD, and also to DVD, I have read somewhere
    that the file that is on the camera (DV-1 or DV-2) is compressed. How much
    I have no idea, but it must be done by accepting a full light source into
    the camera, and in the process of converting that to an AVI stream, the
    encoding must be done then.
     
    Tony, Sep 14, 2003
    #2

  3. Another non-pro here. My understanding is that the 5:1 is at the camera as
    it lays the info on the tape. It's all about tape width and speed, combined
    with head rotating speed - how much data can be placed on the size tape
    being used.

    Once on the tape, it's a transfer to the computer.

    PapaJohn
     
    PapaJohn \(MVP\), Sep 14, 2003
    #3
  4. eruga

    RGBaker Guest

    The compression happens when the signal is 'encoded' to DV, a step that
    takes place in the camera before anything is recorded. The signal that
    comes off the camera head doesn't yet have a format; running it through a
    DV encoding chip performs the chroma sampling, compression and
    digitization. Therefore all DV material, by
    definition, is compressed 5:1 (among other things) -- if it isn't 5:1
    compressed, it isn't DV. Note too that MPEG2 is actually very similar to
    DV, but that MPEG2 allows for varying rates of compression whereas DV is
    fixed, and MPEG2 allows for temporal compression (compression spread over
    time) whereas DV is only intraframe compression (compression within a frame,
    with no reference to adjoining frames).
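    The fixed-rate figure quoted here can be checked with back-of-the-envelope
    arithmetic. This is only a sketch, assuming NTSC DV's 720x480 luma raster,
    8-bit 4:1:1 sampling and 29.97 fps; real DV rate control works per
    macroblock and is more involved.

    ```python
    # Back-of-the-envelope: why ~5:1 compression of an NTSC 4:1:1 frame
    # lands near DV's nominal 25 Mbit/s video rate.

    LUMA_W, LUMA_H = 720, 480   # NTSC DV luma raster
    CHROMA_W = LUMA_W // 4      # 4:1:1 -> chroma at 1/4 horizontal resolution
    FPS = 30000 / 1001          # 29.97 frames/s

    # 8-bit samples: one luma plane plus two quarter-width chroma planes
    bytes_per_frame = LUMA_W * LUMA_H + 2 * (CHROMA_W * LUMA_H)

    uncompressed_mbps = bytes_per_frame * 8 * FPS / 1e6
    dv_mbps = uncompressed_mbps / 5   # fixed ~5:1 intraframe compression

    print(f"uncompressed: {uncompressed_mbps:.1f} Mbit/s")  # ~124.3
    print(f"after 5:1:    {dv_mbps:.1f} Mbit/s")            # ~24.9, near DV's 25 Mbit/s
    ```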

    HTH
    GB
     
    RGBaker, Sep 14, 2003
    #4
  5. eruga

    Mike Kujbida Guest


    I'm no expert either but the compression does happen during the initial
    recording process. That is, when it's put on tape. A good place to start
    looking for information is http://www.adamwilt.com/DV.html Another place
    that might be useful is http://www.snellwilcox.com/knowledgecenter/books/
    HTH.

    Mike
     
    Mike Kujbida, Sep 14, 2003
    #5
  6. The ~5:1 DV compression takes place in the camcorder or VCR.
    DV camcorders/VCRs have specialized microprocessor integrated
    circuits that are designed/optimized/hard-wired to do the DV
    compression/encoding. See Adam Wilt's excellent FAQ at...
    http://www.adamwilt.com/DV-FAQ-tech.html#DVformats

    Of course, there is a corresponding conversion that must
    be done when the tape is played back, commonly using the
    same IC circuitry "in reverse". This applies to the analog
    inputs/outputs.

    The digital (IEEE1394/FireWire/i.LINK, et al.) interface transmits
    essentially the same bitstream that is recorded on tape.

    Once the DV-standard info is stored in a computer file,
    there are software equivalents of the DV encoding/decoding
    IC firmware (known as "codecs" in the computer world.)
    The "camera" portion of digital camcorders is commonly
    completely "conventional" or "neutral". It outputs a signal
    that is not committed to any standard/format.

    In simplified terms, yes.
    DVCAM (and DVCpro25) use the same STANDARD as
    DV, but they spread it across more physical tape to make
    it more "rugged" (less susceptible to dropouts from storing
    so much data on so little tape area.) DVCpro50 indeed
    uses your idea of decreasing the compression and increasing
    tape speed (and it gets only 1/2 the "rated" capacity.)
    I believe 4:1:1 vs. 4:2:2 is the major difference between
    DVCpro25 (same as DV) and DVCpro50. But there is
    plenty of authoritative info out there on the WWW if you
    wish a less speculative opinion.

    Of course 4:2:2 and 4:1:1 are the sampling ratios for NTSC.
    PAL is 4:2:0 or something like that. My understanding is that
    the 4:1:1 ratio is what makes DV (or DVCAM/DVCpro25) less
    than optimal for chroma-keying type uses (and what makes
    DVCpro50 much better at that sort of thing.)
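    The sampling ratios mentioned above set how much chroma data each frame
    carries. A small sketch (assuming a 720x480 luma raster and the standard
    reading of the a:b:c notation as horizontal/vertical subsampling divisors)
    shows that 4:1:1 and 4:2:0 carry the same total chroma, just distributed
    differently, while 4:2:2 carries twice as much:

    ```python
    # Chroma sample counts per frame under the sampling schemes
    # discussed above, for a 720x480 luma raster.

    def samples_per_frame(luma_w, luma_h, h_div, v_div):
        """Return (luma, chroma) sample counts; h_div/v_div are the
        horizontal/vertical chroma subsampling divisors."""
        luma = luma_w * luma_h
        chroma = 2 * (luma_w // h_div) * (luma_h // v_div)  # Cb + Cr
        return luma, chroma

    for name, (h, v) in {"4:2:2": (2, 1), "4:1:1": (4, 1), "4:2:0": (2, 2)}.items():
        luma, chroma = samples_per_frame(720, 480, h, v)
        print(f"{name}: {chroma} chroma samples ({chroma / luma:.0%} of luma)")
    # 4:2:2 -> 345600 (100%); 4:1:1 -> 172800 (50%); 4:2:0 -> 172800 (50%)
    ```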

    The DV compression methodology (and even the NTSC
    encoding scheme) relies on tricking our eyes into accepting
    lower color resolution. Indeed our eyeballs work in this
    manner to some extent (rods vs. cones, etc.)

    Disclaimer: Some information here is simplified to meet the
    perceived level of the OP's questions.
     
    Richard Crowley, Sep 14, 2003
    #6
  7. eruga

    David McCall Guest

    David McCall, Sep 14, 2003
    #7
  8. eruga

    Rienk Guest

    The reason for this (as I always understood it) is that compression takes
    place mostly in the BLUE chroma information. For this reason it is advised
    to use a GREEN screen for chroma-keying. But my experience is that
    BetaSP is preferable to the DV formats for chroma-keying.
    Correct me if I'm wrong.

    Rienk
     
    Rienk, Sep 14, 2003
    #8
  9. eruga

    David McCall Guest


    Close, but not exactly. Both blue and red get beaten up pretty badly. The
    green information winds up mostly in the luminance part of the signal, and
    thus makes a better source for keying, especially with highly compressed
    formats, but this is also true for S-video, and even Betacam. The reasons
    to use blue are that it is a more pleasant environment for the actors and
    it is often easier to eliminate blue from the scene, because fleshtones
    tend to contain a lot of green and red.

    David
     
    David McCall, Sep 14, 2003
    #9
  10. eruga

    Tony Guest

    By the time the images hit the tape, it is already compressed. When you use firewire to get it to
    the computer, it is a simple transfer of information and no compression is used at that point.

    Tony
     
    Tony, Sep 15, 2003
    #10
  11. eruga

    eruga Guest

    Thanks for your thoughtful responses, and for folding in DV50pro, chroma
    sampling, the logic of chroma key colors, and links to more info.

    OP, Elliott
     
    eruga, Sep 15, 2003
    #11
  12. Close.
    Y = 59% Green, 30% Red, and 11% Blue
    I = -28% Green, 60% Red, and -32% Blue
    Q = -52% Green, 21% Red, and 31% Blue
    (- symbols represent phase/polarity of signal)
    Which continues to be true :)
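    Plugging those Y weights into a couple of lines shows why green keys more
    cleanly than blue or red: green carries most of the luminance, which
    survives compression better than the chroma channels. The coefficients
    below are the ones quoted above.

    ```python
    # NTSC luma from 0-1 RGB, using the Y weights quoted in this post.

    def luma(r, g, b):
        """Luminance contribution of an RGB triple (components in 0..1)."""
        return 0.30 * r + 0.59 * g + 0.11 * b

    print(luma(0, 1, 0))  # pure green: 0.59 of full luma
    print(luma(0, 0, 1))  # pure blue: only 0.11, so it lives mostly in chroma
    ```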
     
    Richard Crowley, Sep 15, 2003
    #12
