Why is movement in videos blurred?

Discussion in 'Professional Video Production' started by Peter, Jul 11, 2011.

  1. Frank Guest

    That is correct, as the normal European television frame rate is 25
    frames per second, just as it's 29.97 frames per second in North
    America, Japan, and about a dozen or so other countries around the
    world.
    I think that film rate is almost universally 24 frames per second;
    it's not a U.S. thing.
    Since you didn't qualify your statement in any way (or maybe you did,
    since you said "Always"), I take it that you're saying that the many
    hundreds of millions of people all over the globe who watched (and
    continue to watch, in many cases) interlaced standard definition
    programming on their direct-view CRT-based televisions for fifty years
    were watching or doing something evil? I think not.

    I do think that it's a shame that the ATSC standard didn't allow for
    1080p59.94 and 1080p60 (although it does support 720p59.94 and
    720p60), especially in today's world of consumer-grade HDTVs, most of
    which are flat panels using either LCD or plasma technology and are
    therefore natively progressive. But bandwidth was an issue - plus
    many broadcasters were really in love with interlacing (sadly, many
    still are).

    But to call 50 years of television viewing "evil", or even just the
    technology involved "evil", is just going overboard, in my opinion.
    Direct-view CRTs, which themselves are neither natively interlaced nor
    natively progressive, do a great job of displaying interlaced content
    - and without first having to deinterlace the signal, which results in
    lost visual resolution.

    Remember, the two fields which comprise any given frame of video are
    not displayed to the viewer simultaneously. They are separated in
    time by one field period: the field rate is 59.94 fields per second
    in the U.S./Canada/Japan/Mexico/etc. and 50 fields per second in
    Europe and many other places, including China.

    It's only when you take two fields of video from a high lateral motion
    scene and slap them together and view them, as Panasonic is prone to
    doing in many of its sales brochures, that it looks "evil", but that's
    not how we all watched television all those years.
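
    A rough Python sketch of the effect, with made-up pixel values, shows
    why a woven still from a panning shot looks so ragged:

        # Sketch: weave two fields captured 1/50 s apart while a bright
        # bar pans to the right. All values are illustrative only.

        WIDTH = 16

        def field(bar_x):
            """One scanline of a field: a 4-pixel bar starting at bar_x."""
            return ["#" if bar_x <= x < bar_x + 4 else "." for x in range(WIDTH)]

        odd_lines  = field(2)   # field 1: bar at x=2 (lines 1, 3, 5, ...)
        even_lines = field(6)   # field 2, 1/50 s later: bar at x=6

        # "Slapping them together" into one still frame interleaves the
        # two bar positions line by line - the comb artifact:
        for y in range(6):
            line = odd_lines if y % 2 == 0 else even_lines
            print("".join(line))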

    But please don't misunderstand me: while I completely disagree with
    your blanket "Interlacing is evil. Always." statement, I do sincerely
    wish that interlacing would "just go away", but it seems like it will,
    unfortunately, still be with us for some time to come.

    For those many millions of people who refuse to buy a flat-panel
    television (which today would mean an HDTV) and continue to watch
    interlaced standard def material on their old direct-view CRTs, what
    can I say? They're the lucky ones, especially in cases where they will
    die before their telly does.

    Of course, some would argue that the really fortunate ones are those
    who don't even own a television. :)
    Right, and this is even more evident when using a low contrast ratio
    LCD panel. The rise and fall time from full on to full off is simply
    too long in duration to maintain sharp-looking imagery.
     
    Frank, Jul 13, 2011
    #21

  2. Bob Myers Guest

    No, what I meant was the standard originated in the US, and when film is
    used with US-rate video, of course we see the common 3:2 pulldown
    method. But when film is used in European video systems, we more often
    see simply a 4% speedup to 25 FPS, and people generally live with the
    resulting problems (e.g., audio moving up in pitch a bit and so forth).
    And it's not unheard of to actually shoot at 25 FPS if the film in
    question is headed strictly for European video distribution.
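
    For anyone who hasn't met it, 3:2 pulldown maps four film frames onto
    ten video fields; a minimal Python sketch (the frame labels are
    illustrative):

        # Sketch of 3:2 pulldown: 4 film frames -> 10 video fields, so
        # 24 film fps fills the (nominally) 60 field/s cadence, since
        # 24 * (10 / 4) = 60.

        film_frames = ["A", "B", "C", "D"]   # four consecutive film frames
        cadence     = [3, 2, 3, 2]           # fields held per film frame

        fields = []
        for frame, repeats in zip(film_frames, cadence):
            fields.extend([frame] * repeats)

        print(fields)  # ['A','A','A','B','B','C','C','C','D','D']
        # Paired into interlaced frames: AA AB BC CC DD. Two of the five
        # frames mix fields from different film frames, which is why
        # pulldown material judders and needs inverse telecine to edit.
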
    Nope, not what I meant at all.

    Interlacing is best seen as a cheap "compression" technique that was the
    best option for getting reasonably high-definition video into a channel
    just a few MHz wide, using the analog technology available in the 1940s
    and 50s. But I can still call it "evil," as in this day and age it's
    really just introducing more problems than it's worth. Obviously the
    VIEWERS aren't doing something evil when they watch an interlaced
    transmission - but I do think it's unfortunate that interlaced formats
    have survived as acceptable broadcast, etc., practice in the digital
    television age.
    Except at the time that standard was being put together, pro-scan
    cameras for 1080 were not feasible, nor was a compression method which
    would permit 1080p60 to be packed into the existing channels. Or at
    least those were the justifications given at the time. Personally, my
    preference would've been to limit the initial support to 1080p30 and
    720p60 (given that there's little actual difference between 720p and
    1080i in terms of delivered resolution), and then wait for the
    technology that would permit 1080p60 to be added to the mix, but then I
    didn't get to dictate the standard....;-)

    Bob M.
    Interlacing loses resolution to begin with; it's unavoidable, at least
    for moving objects, given that we're talking about two fields separated
    in time.
    Even if an LCD had zero response time, there would still be motion
    blurring issues with the technology (at least unless something else is
    done, like backlight modulation, etc.). The problem is that the LCD is
    a "write and hold" sort of display, while CRTs are actually displaying
    something (emitting a significant amount of light) for a small fraction
    of the total frame/field time. So perceptually, you get motion
    blurring. "MPRT" improvement generally results from something outside
    of the panel itself, not from improvements in the basic on-off or GtG
    response time specs.
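
    A back-of-envelope illustration of the write-and-hold point, in
    Python with purely assumed numbers:

        # Sketch: perceived blur on a sample-and-hold display is roughly
        # the distance the tracking eye moves while one frame stays lit.
        # All numbers below are assumptions for illustration.

        pan_speed_px_per_s = 960.0     # object crossing half a 1920 screen per second
        frame_hold_s       = 1 / 60    # full-frame hold time on a 60 Hz LCD
        crt_flash_s        = 0.001     # a CRT phosphor emits for ~1 ms per field

        lcd_blur_px = pan_speed_px_per_s * frame_hold_s   # ~16 px of smear
        crt_blur_px = pan_speed_px_per_s * crt_flash_s    # ~1 px of smear

        print(f"LCD hold blur: {lcd_blur_px:.1f} px; CRT impulse blur: {crt_blur_px:.1f} px")
        # Even with zero GtG response time, the hold-induced smear remains;
        # shortening the hold (backlight strobing/scanning) is what cuts MPRT.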

    Bob M.
     
    Bob Myers, Jul 13, 2011
    #22

  3. Peter Guest

    OK....

    I have finally connected the G10 to a high-end Sony 1080p LCD TV,
    via HDMI.

    The results are most interesting.

    The PC display is basically crap, even though this is a 3.2GHz
    dual-core PC running VLC with an Nvidia GeForce 8800GT video card. It
    simply can't keep up with any moving video.

    The TV shows virtually no motion blur on any of the 50i settings
    at/above XP+. On LP (the one which can record 25 hours to a 64GB SD
    card) one can see it, and then one can also see some display
    artefacts.

    There is however a massive amount of motion "effect" on 25P. It is so
    bad that I don't know why anybody would use 25P. The 50i setting is
    massively better.

    On 50i, the difference between XP+ and MXP is quite hard to spot. One
    can see it when filming a bookshelf full of books: with MXP the
    titles are easier to read.
     
    Peter, Jul 15, 2011
    #23
  4. I tol' yuh so (see my various posts in this thread), but some
    didn't believe it! 8^) Interlacing IS your friend, especially
    with motion - and with the TV's deinterlacing, you are going
    to get a better picture yet. As for LCD "smearing", I have not
    been able to detect it on any decent recent HDTV, and with
    slow frame-rate, the individual frames are ALL TOO OBVIOUS
    and the results look terrible to me (you not only reduce the
    amount of information on the screen per unit of time, but you
    add that unfortunate "flickering" effect - YUCK!). DOWN WITH
    24/25/30p! Up with 50i/60i/50p/60p! (It just plain looks
    better! ;-)
    --DR
     
    David Ruether, Jul 15, 2011
    #24
  5. Bob Myers Guest

    Still don't. Interlacing sucketh mightily. ;-)

    Deinterlacing (if done well) can help, but there are other ways, in this
    modern digital era, to have achieved high-resolution, high-frame-rate
    video without having resorted to an interlaced format and all the
    trouble that causes. Ah, well - too late to change it now. Just have
    to hope that it eventually dies a very well-deserved death...

    Bob M.
     
    Bob Myers, Jul 15, 2011
    #25
  6. Peter Guest

    It seems readily apparent to me that 24/25P has a lot of inevitable
    motion blur, 50i is obviously going to be better (because it is
    similar to 50P except every 2nd frame is kind-of interpolated), and
    50P would be better still.

    However I wonder if the high degree of motion blur I see on 25P is
    related to the shutter speed.
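
    A quick sanity check on that hunch, with illustrative numbers (a
    common default is a 1/50 s shutter at 25P):

        # Sketch: in-frame blur length is pan speed times shutter-open
        # time, so at the same shutter speed 25P and 50i blur each image
        # equally - 25P just updates half as often, which reads as extra
        # smear/judder. All numbers are illustrative assumptions.

        pan_speed_px_per_s = 1000.0   # horizontal pan speed across the frame

        for shutter_s in (1/25, 1/50, 1/100, 1/500):
            blur_px = pan_speed_px_per_s * shutter_s
            print(f"1/{round(1 / shutter_s)} s shutter -> {blur_px:.0f} px of blur per image")
        # A faster shutter trims the blur but makes the 25 Hz update rate
        # itself more visible as strobing, so there's a trade-off either way.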

    I am familiar with interlacing; in the 1980s I used to design graphics
    cards using the 9365/9367 graphics controllers
    http://www.datasheetarchive.com/EF93-datasheet.html
    http://en.wikipedia.org/wiki/Thomson_EF936x

    and used to design RGB colour monitors with a composite video input.
     
    Peter, Jul 16, 2011
    #26
  7. With the 50i material, the TV is likely deinterlacing it, converting
    it to an approximation of 50p... With a slower frame rate for "p"
    material, the TV can't do anything (so faster rate "i" material has
    the advantage here). Also, lowering the data rate significantly
    does tend to show more artifacts and lower detail, clearly evident
    in your samples on my computer system.
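
    The simplest such conversion is "bob" deinterlacing - each field
    becomes its own frame, with the missing lines interpolated. A bare
    Python sketch of the idea (real TVs use fancier motion-adaptive
    methods):

        # Sketch of "bob" deinterlacing: each 50i field becomes a 50p
        # frame; missing lines are filled by averaging the neighbours.

        def bob(field_lines, top_field=True):
            """Expand one field (a list of scanlines of pixel values)
            to a full frame by interpolating the absent lines."""
            frame = []
            for i, line in enumerate(field_lines):
                if top_field:
                    frame.append(line)  # real line
                    nxt = field_lines[min(i + 1, len(field_lines) - 1)]
                    frame.append([(a + b) // 2 for a, b in zip(line, nxt)])
                else:
                    prev = field_lines[max(i - 1, 0)]
                    frame.append([(a + b) // 2 for a, b in zip(prev, line)])
                    frame.append(line)  # real line
            return frame

        # Two 3-line fields, 1/50 s apart -> two 6-line progressive frames.
        top    = [[10, 10], [20, 20], [30, 30]]
        bottom = [[12, 12], [22, 22], [32, 32]]
        print(bob(top, True))
        print(bob(bottom, False))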

    You would also begin to see the effects of data rate more as
    you do recompressions during editing while adding filters, etc.
    ('course, for that reason, you would try to add all your image
    changes in one "go", but there may still be visible degradation,
    more with the lower data rate video than with higher...).

    'Course!;-) Let's go with 120fps high data rate NON-interlaced
    video! That WOULD look MUCH BETTER! 8^)
    Since I've been shooting and editing in "60"p 1920x1080 at
    28Mbps (which looks GREAT directly seen on a good HDTV), and
    since Frank brought the bad news that Blu-ray had no full
    resolution "60"p option, I tried the various remaining
    options. 1280x720/60p was clearly inferior, as was 1920/24p,
    leaving 1920/60i, which looks nearly as good as 60p (when
    deinterlaced by the TV), but some more artifacting is visible
    that I can live with - but, I WANT FULL 60P ON BLU-RAY,
    WAAAAH! 8^)
    More likely the frame rate. With slow frame rates, there
    must be a balance between seeing the individual frames when
    using a high shutter speed when shooting, and seeing (probably
    preferably) blurring instead. Also with all of this, the
    camera is likely to add blur with detailed subjects in motion
    to keep the data rate within the limits of the camera's
    compression.
    Ah-HAH! 8^)
    --DR
     
    David Ruether, Jul 16, 2011
    #27
  8. Peter Guest

    This raises an interesting point, regarding the data rate at which to
    habitually record.

    Obviously, with 32GB in the G10 camera, with relay ("spill over")
    enabled to continue onto a 64GB SD card, and perhaps a 2nd 64GB SD
    card which I haven't spent money on yet (Class 10 ones aren't cheap
    in that size), there is no reason to record in anything less than
    MXP.

    Except:

    1) I keep all videos forever (I have dozens of DV tapes in a safe,
    which I will capture into AVIs with the old cam before it goes on eBay),
    basically because I am sure my young boys will want to watch them in
    many years' time. But the video size in MXP will be huge... I know one
    can get 1TB+ hard drives but what about the backup? I have a 160GB DLT
    tape, which is the ideal way for offsite backups, but 1 hr of MXP will
    be how big? One DV tape should be about 10GB but this is a lot bigger.
    I will need to downconvert all the videos which are being kept for the
    future, maybe to MP4? (I use Handbrake for MP4 generation; it is very
    good).

    2) I do in-flight videos (small private aircraft) of e.g. flying over
    the Alps, and in some of these the camera will be fixed, looking
    forward, and be running unattended. A long flight could be 7hrs, so I
    need to pick a quality setting which can run for that long.

    I have to say I am amazed with this G10 camera. The quality, seen on
    the Sony TV, is every bit as good as I would expect to see from a
    studio produced job (obviously subject to proper lighting etc). And
    the low light performance is amazing too. The manual controls (e.g.
    shutter priority) make light work of weird effects which plague
    airborne recording (via the propeller).
     
    Peter, Jul 16, 2011
    #28
  9. Bob Myers Guest

    Whether or not "every other frame" could be considered as interpolated
    would depend entirely on the nature of the deinterlacing performed.
    Motion blur at 24/25P is definitely inevitable, esp. with some display
    technologies; however, consider the fact that movies are all at 24 FPS
    in the first place. No question, as well, that higher frame rates are
    better in this regard, again assuming that you aren't up against some
    display-technology-induced limitations already.

    The reason I don't care for interlacing has to do with the inevitable
    motion artifacts and loss of resolution that must accompany it.
    Interlacing had its place in the days of analog transmission formats and
    CRT displays, but these days there are better means of achieving the
    same goal - which is simply to pack a high-definition, high-frame-rate
    image stream into the available channel.
    Small world; except for the composite input (never had to provide one of
    those), me too! Also never used the 9365/67 controllers, but I do have
    fond (?) memories of the old Motorola 6845 timing controller. Ah, those
    were the days...geeze, what am I SAYING? ;-)

    Bob M.
     
    Bob Myers, Jul 16, 2011
    #29
  10. Frank Guest

    If I were you, I would make it an absolute habit to record *ONLY* in
    the 24 Mbps MXP mode.
    For me, the consideration would not be file size or even recording
    time per GB of storage media; it would be picture quality. On any
    given camcorder, I will record ONLY in the highest quality mode
    available.

    Why start out by purposely recording in a lower quality mode? To my
    (perhaps unique) way of thinking, that would make no sense whatsoever.

    The more you can make your footage look like it was shot with a Sony
    SRW-9000 (about USD $75,000 base price, plus options), the better.

    When you're starting out with a consumer-grade palmcorder with limited
    manual controls, a heavily lossy compressed video (and audio) codec,
    limited function DSP, a tiny sensor, and a lens that didn't exactly
    cost $15k or more, you've already got several major strikes against
    you.

    Why make matters worse by consciously, purposely, and deliberately
    choosing anything other than the highest quality available recording
    mode?
    And now you're equipped with a variety of DV tape ingest programs. :)
    If you record your AVCHD "footage" in the highest video quality 24
    Mbps mode (called MXP on your Canon LEGRIA HF G10) along with two
    channels of lossy compressed Dolby Digital AC-3 audio, the total
    data rate will approximately equal that of standard definition 576i50
    DV tape recordings or high definition 1080i50 HDV tape recordings. In
    other words, about 13 GB per hour.
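
    That figure is easy to verify from the nominal stream rates (actual
    files vary a little with muxing overhead):

        # Sketch: GB per hour from a nominal stream rate. AVCHD MXP and
        # DV land in the same ballpark; the rates below are nominal.

        def gb_per_hour(mbps):
            return mbps * 1e6 * 3600 / 8 / 1e9

        print(f"AVCHD MXP, 24 Mbps video + ~0.5 Mbps AC-3: {gb_per_hour(24.5):.1f} GB/h")
        print(f"DV, 25 Mbps video (~28.8 Mbps total stream): {gb_per_hour(28.8):.1f} GB/h")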

    This is not "huge". It's downright tiny. If you were recording
    non-compressed RGB at 1.5 Gbps (or stereoscopic 3D at 3 Gbps) well,
    that would be "huge". Even Sony's HDCAM SR format at 880 Mbps will
    generate some pretty large files, but DV, HDV, or AVCHD, no way.
    3 TB drives are currently available for under $200 here in the U.S.

    Of course, they're consumer class drives and not enterprise class
    drives, but the price is right. To avoid possible system
    incompatibility issues, however, it might be best to stick to 2 TB
    drives. Some systems will not properly support the full capacity of a
    3 TB drive unless you create multiple partitions.
    As mentioned above, it should be about 13 GB per hour for DV, HDV, or
    AVCHD.
    Do not downconvert (or transcode from one lossy compressed format to
    another lossy compressed format), please!

    Keep the camera-original DV and HDV tapes and the original AVCHD
    files. And if you also want to store the DV tape footage in the form
    of DV-AVI files, that's fine, too, as the audio and video data within
    the DV-AVI file will be an exact bit-for-bit copy of the data that
    resides on the DV tape, assuming no tape drop-out errors are
    encountered during the transfer process.
    Well, that is a limitation. Perhaps you could put your aircraft on
    auto-pilot for a minute and swap flash memory cards when needed?

    Even if I were a passenger, I wouldn't object too much to that! :)

    However, according to Canon, the 32 GB of built-in flash memory in
    your camcorder will hold 2 hours and 55 minutes (175 minutes) of
    footage in the MXP recording mode.

    Additionally, the camcorder has two flash memory card slots. Insert a
    pair of SanDisk Extreme 32 GB Class 10 SDHC cards (SanDisk part number
    SDSDX3-032G-A31) and you should be fine with regard to maximum
    recording time.

    http://pct1.sandisk.com/ProductPage.aspx?ID=7641

    In fact, that combination of built-in memory plus the two cards will
    give you about 525 minutes of total recording time in the MXP
    recording mode. That's 8 hours and 45 minutes. You'll probably run out
    of fuel before you run out of flash memory storage capacity. :)
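
    Those Canon numbers are easy to cross-check against the 24 Mbps
    rate (a rough calculation; formatting overhead is ignored):

        # Sketch: minutes of 24 Mbps MXP footage per GB of flash, using
        # decimal GB as flash makers do; real figures run slightly lower.

        gb_per_min = 24 * 1e6 * 60 / 8 / 1e9    # ~0.18 GB per minute

        for capacity_gb in (32, 96):   # internal only; internal + two 32 GB cards
            print(f"{capacity_gb} GB -> ~{capacity_gb / gb_per_min:.0f} min")
        # 32 GB -> ~178 min vs. Canon's quoted 175; 96 GB -> ~533 min
        # vs. the quoted 525 - the small gap is file-system overhead.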

    As it is, you'll probably want to use a high-capacity BP-827 battery
    and even that may not give you a full seven hours of operation, so you
    may have to change batteries in-flight.

    Unless you have a source of AC power and can run the camcorder off of
    mains power using the supplied adapter, I think that you may find that
    your maximum recording time limitation is battery-induced and not
    flash-memory-induced.
    But some of us just love those weird prop effects! :)
     
    Frank, Jul 17, 2011
    #30
  11. Peter Guest

    Many thanks for the advice, both of you :)

    I will stick to MXP. 13GB/hour is not a problem; I thought it would be
    several times that. I forgot that DV data is only lightly compressed,
    hence the 10GB/hour or so.

    Re running out of battery, actually powering the camera from the
    aircraft power is trivial; there is a cigar lighter socket and I
    already have a multi-output power supply which works off that and can
    drive all kinds of stuff like satellite phones etc.

    In the past, the only way to do long videos using the HC1E DV cam was
    to feed its composite output to a laptop for capture, and power it
    externally (which prevented it from auto power down after a few mins).
    With the G10 it will be possible to record everything in the camera.

    AFAIK, every PC graphics chipset still emulates, and is a superset
    of, the 6845.

    The Thomson-CSF 9367 was a great chip in its time. It did PAL video
    straight out, with all the interlace timing and sync pulses, front and
    back porches, the lot, and drew text and vectors (the latter via the
    Bresenham algorithm). I wrote (in Z80 assembler) a fairly complete graphics
    library for it which did circles, arcs, polygon fills, etc. Later I
    used an NEC chip (don't recall the name) which did circles natively
    using the Horn algorithm (but drew only octants so you had to do
    several calls to get a circle) and did polygon fills.
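
    For the curious, the octant trick works like this - a generic
    midpoint-circle sketch in Python, not the NEC chip's actual Horn
    variant:

        # Sketch: compute one eighth of a circle, then mirror it into
        # the other seven octants - the symmetry the chip made the
        # caller exploit. Generic midpoint algorithm, not Horn's.

        def circle_points(r):
            pts = set()
            x, y, d = 0, r, 1 - r
            while x <= y:                        # walk one octant only
                for sx, sy in ((x, y), (y, x)):  # mirror across y = x
                    for mx in (sx, -sx):
                        for my in (sy, -sy):
                            pts.add((mx, my))    # mirror into all quadrants
                if d < 0:
                    d += 2 * x + 3
                else:
                    d += 2 * (x - y) + 5
                    y -= 1
                x += 1
            return pts

        pts = circle_points(8)
        for row in range(-8, 9):
            print("".join("#" if (col, row) in pts else " " for col in range(-8, 9)))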

    Then the PC came along and killed that whole market, and everybody was
    doing graphics in software, until people started doing graphics cards
    with the clever functions implemented with massive processors.

    It was fun while it lasted :)
     
    Peter, Jul 17, 2011
    #31
