Questions about isolating green channel in RAW data

Discussion in 'Digital Cameras' started by Paul Ciszek, Jun 3, 2013.

  1. Use the -d or -D options to /dcraw/, generate a PGM format
    file, and then convert it from a binary format to an ASCII
    format. For example:

    dcraw -D DSC_0000.NEF

    will produce a monochrome image, DSC_0000.PGM, that does
    not scale the RGB values (use of the -d option would
    scale the R and B values).

    ImageMagick's /convert/ tool can be used to convert that to an
    ASCII format:

    convert DSC_0000.PGM -compress none 0000.PGM

    The result is an ASCII text file that can be manipulated with
    text tools rather than requiring C or C++ programming ability.
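
    A minimal Python sketch of reading such a plain (P2) PGM back
    into a 2-D array; `read_plain_pgm` is an illustrative helper
    name, and the sketch assumes the whole file fits in memory:

```python
def read_plain_pgm(text):
    """Parse a plain (ASCII, P2) PGM into (width, height, maxval, rows)."""
    # Strip '#' comment remainders, then tokenize on whitespace.
    tokens = []
    for line in text.splitlines():
        line = line.split('#', 1)[0]
        tokens.extend(line.split())
    if tokens[0] != 'P2':
        raise ValueError('not a plain PGM')
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    values = [int(t) for t in tokens[4:4 + width * height]]
    # Slice the flat sample list into one list per image row.
    rows = [values[y * width:(y + 1) * width] for y in range(height)]
    return width, height, maxval, rows
```

    Once the samples are in row/column form, any per-site
    manipulation is straightforward.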

    Another technique would be to produce an interpolated image:

    dcraw -6 -W -g 1 1 DSC_0000.NEF

    will produce a 16-bit linear-encoded PPM file, which can also
    be converted to an ASCII format:

    convert DSC_0000.PPM -compress none 0000.PPM

    With the PGM format it would be necessary to determine
    the Bayer Color Filter pattern used by the particular
    camera to figure out how to remove/extract R, G, or B
    raw data. With the PPM format the raw data is
    interpolated, and only the format for the PPM data needs
    to be understood.
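
    For the PGM route, a minimal sketch of that extraction,
    assuming an RGGB layout (the real pattern varies by camera;
    `dcraw -i -v` reports it as the "Filter pattern"):

```python
def green_sites(rows, pattern='RGGB'):
    """Zero out non-green sites of a Bayer mosaic.

    `pattern` names the colours of the top-left 2x2 block: 'RGGB'
    means even rows go R G R G ... and odd rows go G B G B ...
    The site at (y, x) has colour pattern[2 * (y % 2) + (x % 2)].
    """
    return [[v if pattern[2 * (y % 2) + (x % 2)] == 'G' else 0
             for x, v in enumerate(row)]
            for y, row in enumerate(rows)]
```

    The same indexing scheme handles other 2x2 patterns (GRBG,
    BGGR, ...) by changing the `pattern` string.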
    Floyd L. Davidson, Jun 5, 2013

  2. Paul Ciszek

    Guest Guest

    that's calculating it, not faking it.
    however, the green & blue data for a red pixel can be calculated from
    neighboring pixels.
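
    The simplest such calculation (bilinear) is just the mean of
    the four measured green neighbours; a sketch, assuming an RGGB
    mosaic stored as a list of rows:

```python
def interp_green(mosaic, pattern='RGGB'):
    """Estimate green at every non-green Bayer site as the mean of its
    four green neighbours (bilinear demosaic, interior sites only).

    In 'RGGB' etc., the site at (y, x) has colour
    pattern[2 * (y % 2) + (x % 2)], and every R/B site has green
    neighbours directly above, below, left and right.
    """
    h, w = len(mosaic), len(mosaic[0])
    green = [row[:] for row in mosaic]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if pattern[2 * (y % 2) + (x % 2)] != 'G':
                green[y][x] = (mosaic[y][x - 1] + mosaic[y][x + 1] +
                               mosaic[y - 1][x] + mosaic[y + 1][x]) // 4
    return green
```

    On a smooth gradient this estimate is exact at interior sites;
    sharp edges are where it breaks down.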

    is it perfect? no, but nothing is.
    it's still not faked.

    and the edge cases can be discarded. you could also have aliasing
    errors, even on a monochrome sensor.
    there are always edge cases. nothing is perfect.
    how often does that happen in the real world?
    Guest, Jun 5, 2013

  3. Paul Ciszek

    Alan Browne Guest

    Here it means "not the truth".
    Proof that you do not get it. Filling the blue channel at a given point
    only requires information from the several pixels around it. You don't
    need (certainly don't WANT) millions of sampling points.
    Absolutely not. You do not have the truth about the R & B channels at
    the pixel where you filtered to get green only. The information (the
    truth) was left behind when the photo was taken.

    There is one camera that solves this problem, but it is limited in scope
    (must be on a tripod): the Hasselblad H3DII-39MS, which takes 3 full
    frame shots, one per colour, plus a 4th shot as a registration check. It
    then composes output RGB for each location with real, sampled,
    unfiltered colour.

    This is to get over color interpolation as well as softness introduced
    by the Bayer pattern.

    It is used to get very high quality photos of artwork, museum pieces and
    so on. And demonstrates (see the comparison photos on the Hasselblad
    site) that interpolated data contributes to loss of contrast.
    Alan Browne, Jun 5, 2013
  4. Paul Ciszek

    Guest Guest

    i'm not saying millions of samples for each pixel. i'm saying there are
    millions of pixels so there's plenty of data.

    typically any given pixel uses 9-25 sensels. it could be more but the
    benefit is not usually worth it.
    you don't need to sample the truth. it can be calculated and then
    compared to the original.

    different bayer algorithms have different error rates. this can and has
    been measured. the simple ones that just do a linear calculation have a
    higher error than the more sophisticated ones, which have lower errors.

    different algorithms have their strengths and weaknesses, and there are
    always edge cases.
    Guest, Jun 5, 2013
  5. Paul Ciszek

    Alan Browne Guest

    If that many - it could be quite a bit fewer. IAC the millions above
    was not relevant at all.
    What original? The original was filtered away in the camera. It is
    gone. Does not exist anymore. Never got to the sensor. Converted to
    heat and left to the general entropy of the universe.
    Regardless, they are not 'truth' and never will be.
    Alan Browne, Jun 6, 2013
  6. Paul Ciszek

    Guest Guest

    the subject you're photographing.
    however, the subject is still there. those who develop bayer algorithms
    measure both the subject and the result and try to get it as
    accurate as possible. they are doing an amazing job of it too.
    it's *very* close to the truth, indistinguishable in nearly all cases.
    Guest, Jun 6, 2013
  7. Paul Ciszek

    Alan Browne Guest

    We're talking about measurement and estimate variance so how do you do
    that in a quantifiable way?
    Close to the truth is not the truth.

    As I pointed out in another post, the only way to get that true reading
    is to use a camera capable of that, such as the Hasselblad H3DII-39MS.
    Alan Browne, Jun 6, 2013
  8. Paul Ciszek

    Guest Guest

    it's closer than film, and that didn't do any chroma interpolation.
    even that isn't the truth. nothing is perfect.
    Guest, Jun 6, 2013
  9. Paul Ciszek

    Martin Brown Guest

    How do you arrive at that bizarre claim? Fine-grain colour film like
    Kodachrome 25 could easily take on a modern CCD sensor and, unlike
    the Bayer-masked image, would sample all colours at all sites.
    But the point here is that there is a whole known class of images that
    Bayer cannot sensibly measure. They are rare in natural scenes but they
    are not negligible. You seem to think that demosaicing can do magic!

    It is always limited to work from the raw data that it has available and
    the sampling effects that go with it. The eye generally cannot tell the
    difference because the human eye puts a far greater weight on luminance
    resolution than on colour which is why chroma subsampling works so well.
    The limitations of the human eye are the crucial factor.

    Bayer demosaic gets away with an approximation that works in practice
    except for pathological targets and a handful of awkward natural images.
    Notably things like red flowers with black veins and tree branches
    silhouetted against clear blue sky. These would show a distinct
    difference at a pixel level if fully chroma sampled.
    Martin Brown, Jun 7, 2013
  10. Paul Ciszek

    Guest Guest

    digital has more accurate colour (lower delta-e) as well as higher
    resolution than film. that makes it closer to the truth than film could
    ever be.

    not that people want the truth. take velvia for example. or hdr.
    those are edge cases.

    if you like to shoot colour resolution charts, as the foveon fanbois
    do, then bayer is a bad choice. however, most people shoot real world
    scenes, so it's not an issue.
    in other words, it doesn't matter except in the lab and on very rare
    occasion in the real world.

    film isn't perfect either.
    no it wouldn't, and bayer captures more chroma than the human eye can
    resolve anyway.
    Guest, Jun 7, 2013
  11. The solution --- since you won't have longitudinal CA,
    everything being at 'infinity' --- you just need to map the
    red and blue channels linearly per distance from center to a
    new distance, which coincides with the green channel
    registering on these objects.
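
    A sketch of that remap for one channel, assuming
    nearest-neighbour lookup and a single linear scale factor per
    channel (real lateral-CA corrections usually fit a
    higher-order polynomial in the radius):

```python
def radial_scale(plane, scale):
    """Resample one colour channel by scaling distances from the centre.

    Each output site reads the input at the same direction from the
    image centre but at `scale` times the distance; a per-channel
    scale is the simplest model of lateral chromatic aberration.
    Nearest-neighbour lookup; sources outside the frame become 0.
    """
    h, w = len(plane), len(plane[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy = cy + (y - cy) * scale
            sx = cx + (x - cx) * scale
            iy, ix = int(round(sy)), int(round(sx))
            if 0 <= iy < h and 0 <= ix < w:
                out[y][x] = plane[iy][ix]
    return out
```

    The red and blue planes would each get their own fitted scale,
    chosen so their detail registers with the green plane.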
    But your colour filters are not narrow spectrum.
    So the sum of red + green + blue has twice as many pixels
    as just green ... so shouldn't you use their additional

    Look into dcraw.

    Look up demosaicking. There are a number of methods that
    work well or very well on non-synthetic images.
    Simple methods don't, more intelligent methods do; what Adobe
    uses, probably only Adobe knows.
    It depends on the algorithm. Some non-standard Bayer
    patterns may need a different algorithm.

    Convert to 16-bit TIFF (without doing any changes because your
    green filters are not delivering the same green as, say, sRGB
    or AdobeRGB), punch out the right 50% of the pixels, delete red
    and blue. Of course the green values actually measured are not
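
    One way to sketch that "punch out 50%" step: average the two
    measured greens of each 2x2 Bayer cell into a half-resolution
    plane, assuming an RGGB layout (some workflows keep the two
    green planes separate instead of averaging):

```python
def half_res_green(mosaic, pattern='RGGB'):
    """Average the two measured green sensels of every 2x2 Bayer cell,
    yielding a half-resolution green image with no interpolated values."""
    # Offsets of the green positions inside one 2x2 cell.
    offs = [(i // 2, i % 2) for i, c in enumerate(pattern) if c == 'G']
    return [[sum(mosaic[y + dy][x + dx] for dy, dx in offs) // len(offs)
             for x in range(0, len(mosaic[0]) - 1, 2)]
            for y in range(0, len(mosaic) - 1, 2)]
```

    Every value in the result comes from a green sensel that was
    actually sampled; nothing is interpolated.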

    See dcraw.

    Wolfgang Weisselberg, Jun 9, 2013