Image Deblurring using Inertial Measurement Sensors

Discussion in 'Canon' started by Skybuck Flying, Nov 8, 2012.

  1. Hello,

    The last few days I was wondering if something like this was possible. It
    seems Microsoft has already researched these kinds of ideas.

    Now I am wondering if these kinds of algorithms and hardware are already
    implemented in today's digital cameras? Any ideas?

    (I know my soon-to-arrive canon powershot sx50 has some sort of "image
    stabilization" but perhaps that uses different techniques...)

    News article:

    Microsoft Anti-Blur Algorithm Saves Photos From Your Shaky Hands

    Microsoft Article:

    We present a deblurring algorithm that uses a hardware attachment coupled
    with a natural image prior to deblur images from consumer cameras. Our
    approach uses a combination of inexpensive gyroscopes and accelerometers in
    an energy optimization framework to estimate a blur function from the camera’s
    acceleration and angular velocity during an exposure. We solve for the
    camera motion at a high sampling rate during an exposure and infer the
    latent image using a joint optimization. Our method is completely automatic,
    handles per-pixel, spatially-varying blur, and out-performs the current
    leading image-based methods. Our experiments show that it handles large
    kernels – up to at least 100 pixels, with a typical size of 30 pixels. We
    also present a method to perform “ground-truth” measurements of camera
    motion blur. We use this method to validate our hardware and deconvolution
    approach. To the best of our knowledge, this is the first work that uses 6
    DOF inertial sensors for dense, per-pixel spatially-varying image deblurring
    and the first work to gather dense ground-truth measurements for
    camera-shake blur.
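    The abstract's core idea is the standard blur model: the captured photo is the latent (sharp) image convolved with a blur kernel, plus noise. In the simplest special case, where the blur is uniform across the frame and the kernel is known, the latent image can be recovered with a Wiener filter. A minimal NumPy sketch of that special case (the paper itself estimates the kernel from the sensors and solves a harder, spatially-varying joint optimization; the `snr` regularizer here is an illustrative assumption, not a parameter from the paper):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=0.01):
    """Recover a latent image from a uniformly blurred one.

    Assumes circular convolution with a single known kernel; `snr`
    is a hypothetical noise-to-signal regularizer for illustration.
    """
    # Pad the kernel to the image size and centre it at the origin,
    # so its FFT lines up with the image's FFT.
    K = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    Kf = np.fft.fft2(K)
    Bf = np.fft.fft2(blurred)
    # Wiener filter: conj(K) / (|K|^2 + snr), applied in frequency space.
    Xf = np.conj(Kf) * Bf / (np.abs(Kf) ** 2 + snr)
    return np.real(np.fft.ifft2(Xf))
```

With no noise and a small `snr`, this inverts the blur almost exactly; real camera shake needs the spatially-varying machinery the abstract describes, since each pixel sees a different kernel.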

    Skybuck Flying, Nov 8, 2012

  2. Whiskers Guest


    As far as I know, current anti-shake methods rely on moving the image
    detector, or some part of the lens, or both. They do of course use
    'inertial detectors' of some sort.

    MS seem to be using the output from their inertial detectors to control a
    software approach instead. That avoids some moving parts, but probably
    uses quite a lot of electronic processing instead, so may not be practical
    with current electronics and batteries.

    I still like a good tripod, or at least something solid to lean on.
    Whiskers, Nov 8, 2012

  3. PeterN Guest

    according to the article, they use six sensors per pixel. Sounds like a
    lot of sensors.
    PeterN, Nov 9, 2012
  4. "PeterN" wrote in message

    according to the article, they use six sensors per pixel. Sounds like a
    lot of sensors.

    No, what they meant to write was that the blur algorithm is applied to each
    pixel, instead of something else like just certain areas or so.

    I think the 6 sensors are meant to read/detect the x, y, z movement of the
    camera, and also the roll around each axis.

    The "kernel" which is mentioned probably indicates it's some kind of
    parallel algorithm which processes 30 to 100 pixels per launch or block or
    something; that part isn't entirely clear to me, but I am guessing it's
    meant for a parallel chip, perhaps an nvidia/cuda-capable chip. Maybe a
    chip with 30 to 100 cuda cores or so.
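    The guess about the sensors is on the right track: 6 DOF means a 3-axis accelerometer plus a 3-axis gyroscope. A toy sketch of the underlying idea, integrating angular velocity over the exposure and projecting the rotation back into the image plane (small-angle approximation; ignores translation and rolling shutter, and all names are illustrative, not from the paper):

```python
import numpy as np

def pixel_blur_path(gyro_samples, dt, x, y, focal_length_px):
    """Trace where one pixel (at offset x, y from the image centre)
    drifts during the exposure, given 3-axis gyro readings in rad/s.

    Pitch/yaw shift the image roughly by focal length times angle;
    roll rotates it about the centre, so the path differs per pixel,
    which is why the resulting blur is spatially varying.
    """
    theta = np.zeros(3)          # accumulated rotation (small angles)
    path = [(x, y)]
    for omega in gyro_samples:   # one (wx, wy, wz) reading per step
        theta = theta + np.asarray(omega) * dt
        wx, wy, wz = theta
        dx = focal_length_px * wy - wz * y   # yaw shift + roll
        dy = -focal_length_px * wx + wz * x  # pitch shift + roll
        path.append((x + dx, y + dy))
    return path
```

Rasterizing such a path into a small image gives exactly the per-pixel blur kernel the abstract talks about, with pixels far from the centre getting longer, more curved kernels under roll.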

    Skybuck Flying, Nov 11, 2012
  5. Wrong.
    It's a convolution kernel containing 30 to 100 pixels.

    Maybe this makes it clearer for you:

    The operation can, but doesn't need to be parallelized.
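Concretely, a "kernel of 30 pixels" just means each output pixel is a weighted sum of about 30 input pixels; nothing about it requires a GPU. A minimal sketch in plain NumPy (a straight-line motion kernel for simplicity; camera shake produces curved ones):

```python
import numpy as np

def motion_kernel(length):
    """A horizontal motion-blur kernel: `length` pixels that all
    contribute equally, normalized so brightness is preserved."""
    return np.ones((1, length)) / length

def convolve2d_valid(image, kernel):
    """Direct 2-D convolution over the 'valid' region: each output
    pixel is a weighted sum of the pixels under the flipped kernel.
    Each output pixel is independent, so this parallelizes trivially,
    but nothing forces it to be parallel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out
```

Deblurring is the inverse problem: given the blurred image and an estimate of this kernel, solve for the sharp image that would produce it.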

    Wolfgang Weisselberg, Nov 11, 2012
