Filtering away the diffraction blurriness

Discussion in 'Digital Cameras' started by Alfred Molon, Sep 12, 2013.

  1. Alfred Molon

    Alfred Molon Guest

    Olympus claims the new E-M1 can remove diffraction blurriness in
    postprocessing, allowing the use of very small apertures for greater DOF.
    Is it really possible to remove diffraction blurriness via
    postprocessing?
     
    Alfred Molon, Sep 12, 2013
    #1

  2. Sandman

    Sandman Guest

    Why not? If the software has full knowledge of the lens, then it's
    just a mathematical operation. With 12 or 14 bits of data per point,
    it's just a matter of moving data, i.e. raising or lowering the
    radiometric value of every point.
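    The operation in question can be sketched as a forward model: the observed image is the true scene convolved with the lens's point spread function, and "removing" the blur means inverting that convolution. This is a toy illustration only - the 3x3 PSF here is made up, whereas a real lens PSF would have to be measured.

```python
import numpy as np
from scipy.signal import convolve2d

# A single bright point in an otherwise dark scene.
truth = np.zeros((9, 9))
truth[4, 4] = 1.0

# An illustrative (made-up) 3x3 point spread function; a real one is measured.
psf = np.array([[1.0, 2.0, 1.0],
                [2.0, 4.0, 2.0],
                [1.0, 2.0, 1.0]])
psf /= psf.sum()  # normalised so the blur conserves total light

# The observed image: the truth convolved with the PSF.
observed = convolve2d(truth, psf, mode="same")
print(observed[3:6, 3:6])  # the point's light, spread over its neighbours
```

    For a single point the observed neighbourhood is just the PSF itself, which is why knowing the PSF well matters so much for inverting the blur.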
     
    Sandman, Sep 12, 2013
    #2

  3. Martin Brown

    Martin Brown Guest

    The short answer is yes - provided that you have sufficient signal to
    noise and an extremely well characterised point spread function that is
    uniform across the entire scene. But this is more likely marketing BS.
    These conditions are usually met in astronomy for most practical
    purposes; it is comparatively rare for them to be valid in photography.

    The technique is called deconvolution, and under the most favourable
    conditions it can extract details roughly 3x finer than the normal
    linear-analysis diffraction limit. You pay for this by ending up with
    some artefacts in your image, and with resolution that depends on local
    signal to noise: bright highlights are much sharper, the midrange
    sharper, and dark noisy areas potentially more blurred. Cook it
    slightly too much and you get bullseye ringing around the highlights.
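    A minimal sketch of such a deconvolution, using the standard Richardson-Lucy iteration on a made-up two-point scene with a small Gaussian standing in for the lens PSF. This is purely illustrative - it is not Olympus's actual algorithm, and real data would add noise and an imperfectly known PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively reassign blurred light back toward its origin."""
    estimate = np.full_like(observed, 0.5)  # flat initial guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # avoid division by zero
        # Multiplicative update: positive estimates stay (essentially) positive.
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy scene: two bright points ("stars") on a dark background.
truth = np.zeros((32, 32))
truth[10, 10] = 1.0
truth[20, 22] = 1.0

# A Gaussian stand-in for the lens blur.
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)

# The restored image should be much closer to the truth than the blurred one.
print(np.mean((blurred - truth)**2), np.mean((restored - truth)**2))
```

    Note how well this toy case matches the astronomy situation described above: isolated bright points against a dark background, with the PSF known exactly.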
    As for it being "just a matter of moving data" - no, it isn't. The
    deconvolution problem is a classic ill-posed problem even in the
    simplest case of a uniform blur applied to a target image. There are
    many possible target images that, when blurred with your lens point
    spread function, will give your observed data to within the noise - the
    problem for a deconvolution code is to find one of them that is most
    nearly representative of the entire set.

    Typical choices include maximum entropy or maximum smoothness as a way
    to pick one decent representative from the set of possible images.
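    The "pick one decent representative" idea can be sketched with a Wiener-style regularised inverse: instead of dividing outright by the transfer function (which is near zero at frequencies the lens barely transmits), it shrinks those frequencies toward zero, i.e. it prefers the smoother candidate image. A toy 1-D example with assumed parameters, not any camera's actual pipeline:

```python
import numpy as np

def wiener_deconvolve(observed, psf, balance=1e-3):
    # Regularised inverse filter: where |H| is small, the gain
    # |H|^2 / (|H|^2 + balance) falls toward zero instead of exploding.
    n = observed.shape[0]
    H = np.fft.fft(psf, n)
    G = np.fft.fft(observed)
    F = np.conj(H) * G / (np.abs(H) ** 2 + balance)
    return np.fft.ifft(F).real

rng = np.random.default_rng(1)
n = 256
truth = np.zeros(n)
truth[100:120] = 1.0                      # a bright bar

psf = np.ones(9) / 9.0                    # box blur (circular, via FFT)
blurred = np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf, n)).real
noisy = blurred + rng.normal(0.0, 1e-3, n)

restored = wiener_deconvolve(noisy, psf)
print(np.mean((blurred - truth) ** 2), np.mean((restored - truth) ** 2))
```

    The `balance` term is the heuristic constraint in miniature: it encodes a preference among the many images consistent with the data.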

    However, real photographic images tend to have a point spread that
    varies with depth of field, and that severely complicates the
    mathematics. I don't see any way to get around this without the
    orthogonal shadow mask and multi-lens techniques used in the Lytro
    cameras.

    http://en.wikipedia.org/wiki/Lytro

    Any post processing that does this sharpening tends to be extremely
    computationally intensive - roughly 200-300x slower than common linear
    methods. It is worthwhile on images which are irreplaceable, but I
    can't see it ever becoming mainstream.
     
    Martin Brown, Sep 12, 2013
    #3
  4. Sandman

    Sandman Guest

    Martin Brown wrote:
    "No it isn't. The deconvolution problem is a classic ill-posed problem
    even in the simplest case of a uniform blur applied to a target image.
    There are many possible target images that when blurred with your lens
    point spread function will give your observed data to within the noise -
    the problem for a deconvolution code is to find one of them that is most
    nearly representative of the entire set."

    Which is infinitely easier when you know everything about the lens being
    used.

    It's still just a mathematical function, hard as it may be. I do
    agree that the claim is probably exaggerated for marketing purposes,
    but it's certainly possible, just hard.
     
    Sandman, Sep 12, 2013
    #4
  5. Eric Stevens

    Eric Stevens Guest

    Blurriness is not a matter which can be resolved on a point by point
    basis. It is a matter of micro-contrast, which can only be resolved if
    one fully knows the details of what gave rise to the blurriness in the
    first place.

    If it is 'merely' a matter of lens diffraction, the blurriness can be
    resolved by inverting the calculation of the image from the raw data,
    but this can only be done if the details of the lens are fully known.
    It is an almost impossible task if the blurriness is due to other
    factors which are only poorly known.
     
    Eric Stevens, Sep 12, 2013
    #5
  6. Martin Brown

    Martin Brown Guest

    Sandman wrote:
    "Which is infinitely easier when you know everything about the lens
    being used."

    In most places where the technique is used, the lens or telescope is
    world class and extremely carefully characterised. Even so, it is
    necessary to use additional constraints on feasible image
    reconstructions, the most powerful being positivity. The sky always
    has positive brightness, which turns out to be a powerful constraint -
    especially in astronomy, where most things are against a dark sky.

    It doesn't matter how well you know the lens point spread function:
    high frequency noise is a killer for all deconvolution methods.

    And no, in general it is impossible! Pretty much like dividing by
    zero - you can do it naively in algebra, but the answer will be
    gibberish.

    You can add an arbitrary amount of any unmeasured spatial frequency to
    the sharpened model image and still get something that, when blurred
    by the finite lens aperture, looks the same. You absolutely have to
    rely on additional heuristic constraints to get any solution at all.
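    The "dividing by zero" failure is easy to demonstrate on made-up data: a naive inverse filter works on noiseless synthetic data (because there are no exact zeros in this particular transfer function), but blows up as soon as a tiny amount of noise is added. A toy 1-D sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
truth = np.zeros(n)
truth[100:110] = 1.0                     # a bright bar in a dark scene

psf = np.ones(15) / 15.0                 # 15-pixel box blur
H = np.fft.fft(psf, n)                   # transfer function: near-zeros inside
blurred = np.fft.ifft(np.fft.fft(truth) * H).real

# Noiseless data: naive inverse filtering recovers the scene almost exactly.
exact = np.fft.ifft(np.fft.fft(blurred) / H).real

# Add a tiny amount of sensor noise and the same division explodes at the
# spatial frequencies the aperture barely transmits.
noise_std = 1e-3
noisy = blurred + rng.normal(0.0, noise_std, n)
naive = np.fft.ifft(np.fft.fft(noisy) / H).real

print(np.abs(exact - truth).max())               # tiny (float roundoff)
print(np.abs(naive - truth).max() / noise_std)   # noise amplified many-fold
```

    The reconstruction error dwarfs the injected noise, which is exactly why practical methods fall back on heuristic constraints rather than plain inversion.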
     
    Martin Brown, Sep 12, 2013
    #6
  7. Sandman

    Sandman Guest

    Martin Brown wrote:
    "Most places where the technique is used the lens or telescope being
    used is world class and extremely carefully characterised. Even so it
    is necessary to use additional constraints on feasible image
    reconstructions the most powerful being positivity. The sky always has
    positive brightness which turns out to be a powerful constraint.
    Especially so in astronomy where most things are against a dark sky."

    Yes, that's a bit easier in normal photography.

    The very reason to know the lens' PSF is to combat frequency noise.
    The more knowledge you have about the PSF, the easier it gets to
    calculate the diffraction. Unless you mean the noise introduced by
    the deconvolution method itself? Current restoration algorithms use
    logical assumptions about the object, such as smoothness or
    non-negative values, and use information about the noise process to
    avoid some of the noise-related artifacts.

    You will probably never *remove* diffraction entirely, but again - the
    more you know about the lens, the more you can correct for
    diffraction.
    People have calculated diffraction for decades. In a perfect lens with
    no aberration, the diffraction at the perfect focal point is both
    symmetrical and periodic in the lateral and axial planes, which makes
    the diffraction nonrandom. Even in a non-perfect lens the diffraction
    is certainly calculable: a mathematical model of the blurring process,
    based on the convolution of a point object and its PSF, can be used to
    deconvolve or reassign out-of-focus light back to its point of origin,
    and thus recompile a less blurry image. Not a *sharp* image, mind you,
    just one less blurred by diffraction.

    This is already widespread in 3D widefield imaging today.
     
    Sandman, Sep 13, 2013
    #7
  8. Alan Browne

    Alan Browne Guest

    Deconvolution should be able to do it.

    It requires accurate system modeling. I can't say how well Olympus can
    model all production lenses accurately enough to do this (and store
    the data in the lens - they do already store lens measurements in some
    lenses, though I'm not sure whether enough for this purpose), nor how
    much error in that model is tolerable while still yielding a "good
    enough to see" result.

    Controlled photos using the same lens on that and previous cameras will
    show if it does work.


    --
    "Political correctness is a doctrine, fostered by a delusional,
    illogical minority, and rapidly promoted by mainstream media,
    which holds forth the proposition that it is entirely possible
    to pick up a piece of shit by the clean end."
    -Unknown
     
    Alan Browne, Sep 13, 2013
    #8
  9. Blur can be removed but it has a very high cost in the signal to noise
    ratio. A typical digital camera doesn't allow for much sharpening
    before moving into algorithms that attempt to guess how to reconstruct
    an image.

    Lots of cheap cameras do this to simulate a sharp image from a cheap
    lens and a noisy sensor. The first artifacts noticed are missing or
    unnatural fine details. Concrete, grass, and leaves are usually
    damaged because their fine textures become indistinguishable from
    noise.
     
    Kevin McMurtrie, Sep 14, 2013
    #9
  10.  
    David Ruether, Sep 14, 2013
    #10
  11. Me

    Me Guest

    I don't think that, in the case of diffraction, deconvolution will be
    able to do it at all WRT "normal" photography.
    It might be useful for correcting aberrations where the point spread
    function is "skewed", but when /every/ point is not skewed but is
    blurred symmetrically in every direction by diffraction, and nothing
    but the radius of the Airy disk can be known (simply from the f-stop
    used), it shouldn't be any more "effective" than any other sharpening
    method. So increasing/decreasing the sharpening based on the f-stop
    used is possibly convenient for in-camera jpgs or pp, but not
    revolutionary in any way.
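    For reference, the one quantity that does follow from the f-stop alone is the Airy disk scale: the first-zero radius is approximately 1.22λN. The sketch below assumes 550 nm green light; the function name is just illustrative.

```python
# First-zero radius of the Airy disk: r = 1.22 * wavelength * N,
# where N is the f-number. Result is in micrometres on the sensor.
def airy_radius_um(f_number, wavelength_um=0.55):
    return 1.22 * wavelength_um * f_number

for stop in (2.8, 8.0, 22.0):
    print(f"f/{stop:g}: Airy radius {airy_radius_um(stop):.2f} um")
```

    At f/22 the radius is around 15 µm, several pixels wide on a Four Thirds sensor, which is why small apertures visibly soften the image in the first place.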
    But (IMO) Olympus, a once great company, has been full of shit for a
    decade or so, so a claim like this isn't a surprise. Nor would it be a
    surprise if they "cook" the pp into their raw files.
     
    Me, Sep 15, 2013
    #11
