resolution versus video data rate

Discussion in 'Amateur Video Production' started by Brian, Mar 17, 2012.

  1. Brian

    Brian Guest

    I'd be interested in hearing other people's opinions on this.

    In order to get a video file down to a certain size you have two
    options: one is to reduce the video data rate, which results in more
    compression of the video, and the other is to reduce the resolution of
    the video.
    I did some experimenting and found I got better results from reducing the
    resolution of the video from 1920 x 1080 to 1280 x 720 while keeping the
    same video bit rate.
    This works well if the source video is of high quality.
    In some cases I need to reduce both the resolution and the video data rate,
    such as when sending videos to YouTube at 1280 x 720 at 2 Mbps.

    If the video is played back on a device that upscales it, that helps make
    up for the reduced resolution.
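    To put some rough numbers on that trade-off, here's a back-of-the-envelope
    sketch (the clip length, file size and audio bit rate below are just
    made-up examples, not anything I measured):

        # Rough bitrate arithmetic: file size barely cares about resolution,
        # so pick the bitrate to hit the target size, then decide how many
        # pixels have to share it.

        def video_mbps_for_size(target_mb, duration_s, audio_mbps=0.128):
            """Video bitrate (Mbps) that lands near target_mb megabytes."""
            return target_mb * 8 / duration_s - audio_mbps   # 1 MB ~ 8 megabits

        def bits_per_pixel(video_mbps, width, height, fps):
            """Lower numbers mean the encoder has to compress harder."""
            return video_mbps * 1_000_000 / (width * height * fps)

        mbps = video_mbps_for_size(150, 600)     # 10-minute clip in ~150 MB
        print(round(mbps, 2))                                   # ~1.87 Mbps
        print(round(bits_per_pixel(mbps, 1920, 1080, 25), 3))   # ~0.036
        print(round(bits_per_pixel(mbps, 1280, 720, 25), 3))    # ~0.081

    At the same bit rate the 720p frame gets more than twice as many bits per
    pixel, which is why the downscaled version can look less compressed.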
     
    Brian, Mar 17, 2012
    #1

  2. Frank

    Frank Guest

    Reducing the frame size, for example from 1920 by 1080 to 1280 by 720
    in this case, is a typical first step. The next step, depending upon the
    circumstances, is to reduce the frame rate.

    Sometimes, however, there are other steps that can be taken first.

    For example, if you're working with 8-bit-per-channel RGB video, try
    chroma decimation to 4:2:2 and see how that looks. Next, try 4:2:0.

    Each reduction of chroma information will result in a file size
    reduction.

    Of course, if you're starting out with a 4:2:0 (or 4:1:1) file,
    there's not much that you can do in terms of chroma decimation.

    And if you're starting out with high quality 10-bit or 12-bit footage,
    reduction to 8 bits would be the first step on the road to file size
    reduction.
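    To put rough numbers on those steps, here's a quick sketch of the
    uncompressed payload per frame for each combination (illustrative only;
    real codecs compress on top of this, and many systems pad 10-bit samples
    to 16 bits in storage):

        # Relative raw data per 1920x1080 frame: 4:4:4 stores chroma for every
        # pixel, 4:2:2 for every second pixel, 4:2:0 for every fourth pixel,
        # so the two chroma planes shrink accordingly.

        CHROMA_FRACTION = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}

        def bytes_per_frame(width, height, sampling, bit_depth):
            luma_samples = width * height
            chroma_samples = luma_samples * CHROMA_FRACTION[sampling]
            return (luma_samples + chroma_samples) * bit_depth / 8

        for sampling in ("4:4:4", "4:2:2", "4:2:0"):
            for bits in (10, 8):
                mb = bytes_per_frame(1920, 1080, sampling, bits) / 1e6
                print(f"{sampling} {bits}-bit: {mb:.1f} MB/frame")

    Going from 10-bit 4:4:4 down to 8-bit 4:2:0 cuts the raw payload roughly
    in half before the compressor even starts.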
     
    Frank, Mar 18, 2012
    #2

  3. Brian

    Brian Guest

    Thanks Frank.
    I don't know much about chroma decimation but it's good to learn something
    new.
     
    Brian, Mar 19, 2012
    #3
  4. I hate the word "decimation" (or "decimate"), since its
    primary definition refers to a terrible practice in the
    Roman military of drawing lots after poor performance,
    and then stoning to death the one in ten soldiers who
    drew the unfortunate lots. It was a stupid practice (in
    addition to being unnecessarily cruel) since it weakened
    the strength of the group by 1/10th arbitrarily,
    regardless of individual performance. May I suggest the
    word "reduction" instead?
    --DR
     
    David Ruether, Mar 19, 2012
    #4
  5. Frank

    Frank Guest

    You're very welcome, sir.

    I believe that you're currently working mostly with AVCHD, which,
    being 8-bit 4:2:0, is already at the low end, but perhaps someday
    you'll win the lottery or something along those lines and get even a
    prosumer product such as a Panasonic AG-HPX250 that shoots AVC-Intra
    100, which is 10-bit 4:2:2. Then, after editing is complete, you can
    consider reducing that footage from 10-bit to 8-bit and from 4:2:2 to
    4:2:0 for distribution of smaller files.

    And if anyone is upset by my use of the term "prosumer", please be
    aware that I'm simply echoing Panasonic's own nomenclature, namely,

    HDC-series = consumer
    AG-series = prosumer
    AJ-series = professional

    Have a great day!
     
    Frank, Mar 19, 2012
    #5
  6. Frank

    Frank Guest

    And just how many people besides you happen to know this important
    fact? :)
    It's still a semi-free country, so of course you can suggest it. :)
     
    Frank, Mar 19, 2012
    #6
  7. Smarty

    Smarty Guest

    Interesting observation! But "decimation" is a standard engineering term
    defining the specific digital processing method employed. I share the
    opinion that it is suggestive of other things, and is perhaps
    unfortunate in that regard. To replace it with some other far more vague
    and undefined non-technical description, however, hardly makes sense to me.

    Perhaps an easier way to accept the term Frank has correctly used is to
    view it as some number of bits being sacrificed, weakening the 'strength'
    of the original content. Although the method is neither stupid nor
    arbitrary, it essentially achieves downsampling by killing off some bits!
     
    Smarty, Mar 19, 2012
    #7
  8.  
    Gene E. Bloch, Mar 19, 2012
    #8
  9. Gavino

    Gavino Guest

    David is correct about the origin of "decimation", and strictly speaking it
    means reduction by 1/10th.

    But as I understand it, when used in video processing it means reducing the
    *frame rate* by some fraction by the simple method of discarding frames
    (eg 50fps to 25 fps by discarding every 2nd frame).

    Frank's original post referred to conversion from RGB to (YCrCb) 4:2:2 and
    4:2:0, which I would call chroma *subsampling*, not decimation.
    This is not the same as using fewer bits per individual pixel (which I would call
    'reduction'), but actually reduces the number of pixels for which chroma is
    stored (eg 4:2:0 has one chroma pixel for every four luma pixels).

    I'm happy to be corrected if "decimation" may be used as a general term
    encompassing all these things, but up till now I have only seen it applied in
    the context of frame rate reduction.
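    For what it's worth, frame-rate decimation in that narrow sense really is
    as simple as it sounds; a toy sketch with a stand-in list of decoded
    frames (not any particular tool's API):

        # 50 fps -> 25 fps by discarding every 2nd frame
        frames_50fps = list(range(100))     # stand-in for 100 decoded frames
        frames_25fps = frames_50fps[::2]    # keep frames 0, 2, 4, ...

        assert len(frames_25fps) == len(frames_50fps) // 2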
     
    Gavino, Mar 19, 2012
    #9
  10. Frank

    Frank Guest

    When referring to video, subsampling is the process of reducing the
    amount of chroma information.

    Subsampling is certainly the correct term to use, especially when
    starting with a 4:4:4 source and reducing the amount of color/colour
    information to 4:2:2 or 4:2:0. Indeed, it's the term that I would
    normally use and, in fact, normally do use, although I've also been
    known to say or write "chroma decimation", meaning a reduction in the
    amount of chroma information.

    Of course, what term would be used if going from a 4:2:2 source to a
    4:2:0 output, given that the 4:2:2 source is already subsampled
    (compared to a pure-as-a-virgin non-subsampled 4:4:4 source)?

    And if we're going from a 10-bit or higher source down to an 8-bit
    final output file for distribution purposes, that can be referred to
    as a reduction in quantization.

    When it comes to reducing (lowering) the frame rate, I simply refer to
    this as a "frame rate reduction". I don't think that I've ever used
    the term "decimate" in reference to a frame rate reduction, although I
    may have.

    I guess that I have a tendency to sometimes use the word "decimate" as
    if it were synonymous with "reduce", when it really isn't.

    The problem with consumer-level digital video recording formats is
    that, in terms of quantization and chroma subsampling, they're already
    in the lowest possible form to begin with. They're all 8-bit 4:1:1
    (NTSC DV) or 8-bit 4:2:0 (everything else, including PAL DV, HDV, and
    AVCHD), so there's nothing that can be done to further reduce the
    quantization or the chroma subsampling.

    So if we've got low-end footage that we wish to reduce in size for
    distribution purposes, about all that we can do, aside from increasing
    the compression ratio (which may result in unacceptable picture
    quality), is to manipulate the frame rate and/or frame size.

    If we have 720p50 footage, we could reduce it to 720p25.

    If we have 720p59.94 footage, we could reduce it to 720p29.97.

    If we have 720p60 footage, we could reduce it to 720p30.

    If we have 1080p50 footage, we could reduce it to 1080p25.

    If we have 1080p59.94 footage, we could reduce it to 1080p29.97.

    If we have 1080p60 footage, we could reduce it to 1080p30.

    If we have a frame size of 1280 by 720, we could reduce it to 640 by
    360 or even to 320 by 180.

    If we have a frame size of 1920 by 1080, we could reduce it to 960 by
    540 or even to 480 by 270.
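    If anyone wants a concrete starting point for the reductions above, here's
    a rough sketch of how they could be scripted around ffmpeg (the filenames
    and bit rates are just examples, and you'll need a build with libx264 and
    an AAC encoder; your editor or encoder of choice will do the same job):

        import subprocess

        def shrink(src, dst, width, height, fps, video_kbps):
            """Downscale and reduce the frame rate in one ffmpeg pass."""
            subprocess.run([
                "ffmpeg", "-i", src,
                "-vf", f"scale={width}:{height}",    # frame size reduction
                "-r", str(fps),                      # frame rate reduction
                "-c:v", "libx264", "-b:v", f"{video_kbps}k",
                "-c:a", "aac", "-b:a", "128k",
                dst,
            ], check=True)

        # e.g. 1080p50 master down to 720p25 at 2.5 Mbps
        shrink("master_1080p50.mov", "web_720p25.mp4", 1280, 720, 25, 2500)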

    And if we have 1080i footage, we can (and should) deinterlace it, as
    progressive footage is easier to compress than interlaced footage and
    besides, most people these days will be viewing our product on devices
    that utilize a natively progressive display technology (LCD, LCOS,
    OLED, plasma, etc.).

    And if we have 2:3 pulldown footage (23.976p over 59.94i) we can (and
    should) remove the pulldown (reverse telecine) by creating native
    23.976p footage prior to compression.
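    And a minimal sketch of the deinterlacing step using ffmpeg's yadif filter
    (pulldown removal is a separate job, best left to your editor or an
    inverse-telecine filter; this only covers straight 1080i to 1080p, and the
    filenames and bit rates are again just examples):

        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "master_1080i.mov",
            "-vf", "yadif",                   # deinterlace before compressing
            "-c:v", "libx264", "-b:v", "8000k",
            "-c:a", "aac", "-b:a", "128k",
            "progressive_1080p.mp4",
        ], check=True)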

    One other thing to consider, but only in rare circumstances because it
    will result in less sharp imagery, is to apply a softening filter, as
    soft images are easier to compress than sharp, highly detailed images,
    due to the reduction in high frequency content.

    And of course, we always want to use a high quality compressor, one
    that produces small file sizes while maintaining good visual/aural
    quality. At the present time, that usually means MPEG-4 Part 10 AVC /
    H.264 video encoding (along with MPEG-4 AAC audio encoding).

    I've recently been working with some HTML5 video, where it's necessary
    to produce not just the usual MPEG-4 (.mp4) file (with MPEG-4 Part 10
    AVC / H.264 video encoding and MPEG-4 AAC-LC audio encoding) but also
    a Google WebM (.webm) file and a Xiph.Org Foundation Ogg Video (.ogv)
    file.

    The Google WebM file uses the Google/On2 Technologies VP8 video codec
    along with the Xiph.Org Foundation Ogg Vorbis audio codec while the
    Xiph.Org Foundation Ogg Video file uses the Xiph.Org Foundation Ogg
    Theora video codec along with the Xiph.Org Foundation Ogg Vorbis audio
    codec. The Ogg Theora video codec, as we all know, is derived from the
    decade-old On2 VP3 video codec (and was developed a few blocks from
    where I live, which is unusual because Manhattan is not exactly known
    as a hotbed of new technology).
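    For the curious, the three HTML5 variants can be batch-produced from a
    single master with something along these lines (a sketch, assuming an
    ffmpeg build with libx264, libvpx, libtheora and libvorbis enabled; the
    source filename is hypothetical):

        import subprocess

        SOURCE = "master_720p25.mov"    # hypothetical edited master

        TARGETS = [
            ("video.mp4",  ["-c:v", "libx264",   "-c:a", "aac"]),        # MPEG-4 AVC / AAC
            ("video.webm", ["-c:v", "libvpx",    "-c:a", "libvorbis"]),  # WebM VP8 / Vorbis
            ("video.ogv",  ["-c:v", "libtheora", "-c:a", "libvorbis"]),  # Ogg Theora / Vorbis
        ]

        for outfile, codec_args in TARGETS:
            subprocess.run(
                ["ffmpeg", "-i", SOURCE, "-b:v", "2000k", "-b:a", "128k",
                 *codec_args, outfile],
                check=True,
            )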

    I must say, based upon my limited tests, that the good old MPEG-4
    AVC/AAC file looks the best of the three files. Although they may be
    less patent-encumbered, I don't think that the Google WebM format or
    the Xiph.Org Foundation Ogg Video format represent a technological
    advance.

    I long for the good old days (of two years ago) when I could just
    create a single MPEG-4 AVC/AAC file and serve it up through the
    browser using the Adobe Flash Player (9.0.124.0 or later, of course).

    Is everyone happy now? :)

    Next week we can discuss whether 3840 by 2160 or 4096 by 2160 is "True
    4K". ;-)
     
    Frank, Mar 20, 2012
    #10
  11. Yup! 8^)
    --DR
     
    David Ruether, Mar 20, 2012
    #11
  12. Steve King

    Steve King Guest

    Big Snip of very interesting stuff.

    Frank and Smarty, thanks for the excellent discussion. You've helped to
    redeem the newsgroup from blather once again.

    Steve King

     
    Steve King, Mar 20, 2012
    #12
  13. Brian

    Brian Guest

    If I come into a lot of money, the first thing I'd buy is a faster
    computer, then I'd consider another video camera.
     
    Brian, Mar 21, 2012
    #13
  14. Frank

    Frank Guest

    I agree 100 percent. That would be exactly the correct approach to take,
    Brian, and I do hope that you come into a lot of money. In fact, I
    hope that we all do!

    Regards,
     
    Frank, Mar 21, 2012
    #14
  15. At the low end, there are excellent camcorders at not
    very high prices (the Panasonic TM700/900 is one that
    I like - but it requires an "umphy" computer to edit
    easily its high-quality AVCHD output...), and a few
    medium-priced 3-chip Canon and Sony HDV camcorders that
    require very little in the way of computer "umphiness"
    to edit easily their very good output. I would (as I
    did) first get the camcorder, then see what is needed
    in the way of computer gear to edit practically what
    I was shooting. (Camcorders don't go out of date at
    anything like the rate that computers do...) BTW,
    these days you can often upgrade your existing desktop
    computer easily and relatively cheaply, if needed (I
    did this for about $1050 in new parts and I now have a
    4.4GHz i7 with 16 gigs of RAM and a whiz-bang 480-core
    video card - but as it turned out, the video card was
    not necessary for editing and I could have saved its
    $400 cost, although it did speed output rendering by a
    factor of about 5.5).
    --DR
     
    David Ruether, Mar 21, 2012
    #15
  16. Brian

    Brian Guest

    A friend of mine recently upgraded his computer and his video rendering
    went from 10 hours down to 2 hours.
    I recently upgraded my camera to a Sony CX700, which records in AVCHD,
    so I won't be upgrading my camera for a while.
    I might consider upgrading the graphics board, as that would be a cheaper
    way to try to render my videos at a faster speed. I'm currently using
    a Sapphire X1950 Pro graphics board with 256MB of memory; it's from the
    ATI Radeon line.
    Any suggestions on what graphics card to upgrade to?
     
    Brian, Mar 22, 2012
    #16
  17. That is about right for an nVidia GTX570-based video card.
    It turns out that the render speed appears to be mostly
    dependent on the video card's GPU characteristics and the
    ability of the software to use the GPU for rendering. For
    playing the timeline, though, the CPU appears to be much
    more important.
    BTW, for what it's worth, Camcorderinfo rates the performance
    of the Panasonic TM700 at 9.7 vs. the Sony CX700 at 8.1
    (both offer 60P video...). I still recommend HDV to people
    who don't want to move out to the "bleeding edge" with their
    computer gear for reasonably easy editing since any moderately
    OK computer can easily handle HDV - and there are still times
    I dearly wish I had stuck with HDV...;-)
    As it turns out (for one or two 1920x1200 monitors), 1 gig
    of video card RAM is sufficient (although I got 2.5 gigs on
    my card). I like the EVGA cards based on the nVidia GTX 570
    480-core chip (there are quite a few - I bought this one -

    http://www.evga.com/products/moreInfo.asp?pn=025-P3-1579-AR&family=GeForce
    500 Series Family&sw= ),
    but EVGA is now offering some new, faster cards. These vary
    in length (potentially VERY IMPORTANT, since some are LONG!),
    connectors, cooling schemes (the card can use 285 watts[!],
    so make sure your power supply and case cooling are up to it),
    power connections (mine uses two), warranty periods (the model
    numbers ending in "AR" can have lifetime warranties *IF* you
    go through the hoops to register the card with EVGA within
    30 days of purchase), and they use two slots (check to see
    if you have the right type, with an open pair to the right
    of the main slot [one "dead" to support the card, one more
    to keep a space for the cooling fan/s to operate efficiently]).
    The model number of the one I bought was the EVGA 025-P3-1579-AR.
    In a case with seven 6" fans in it, both the overclocked CPU (with
    a Coolermaster 212+ on it) and the video card run at about 50
    degrees C under maximum load (while rendering). *IF* your
    CPU is fast enough to preview your video smoothly from the
    timeline, then putting money into a video card upgrade may be
    the best next move...
    --DR
     
    David Ruether, Mar 22, 2012
    #17
  18. mike

    mike Guest

    mike, Mar 22, 2012
    #18
  19. Yup! "...but EVGA is now offering some new faster cards."
    Thet's th' prollem wi' a-gittin' th' lay-tist in kom-pyoo-tr
    geer - it ages REEL fast!!! (Unlike with cameras...;-)
    --DR
     
    David Ruether, Mar 22, 2012
    #19
  20. Brian

    Brian Guest

    I like the 3D feature, but you might need a 3D monitor, if there is such a
    thing. It would be nice if it could play 3D movies in 3D on the screen.
     
    Brian, Mar 23, 2012
    #20