Questions about isolating green channel in RAW data

Discussion in 'Digital Cameras' started by Paul Ciszek, Jun 3, 2013.

  1. Paul Ciszek

    Paul Ciszek Guest

    I photographed Venus transiting the face of the sun last June with a
    Panasonic FZ35, and saved the RAW files. I would like to isolate the
    RAW green channel data from the red and blue, with all of the extra bits
    of dynamic range intact.

    First of all, my reason for doing this: I have two reasons for
    believing that the green channel data is sharper than the red or blue.
    The first is optical--I can tell that the image suffers from chromatic
    aberration, and I am working on the assumption that the middle of the
    spectrum (green) is better focussed than either of the extremes. The
    second has to do with the Bayer filter and the layout of the sensor--
    there are twice as many green pixels as red or blue.

    My first question: Do the RAW files record the red, green, and blue
    pixels as they exist on the sensor, or does some interpolation and
    fudging take place even before the data is saved? i.e., if the sensor
    of the camera is 4000x3000 pixels with the usual Bayer filter, will the
    RAW file contain 2000x1500 arrays of red and blue pixel data, and data
    from 6 million green pixels on a funny diagonal grid? I guess this
    question will be specific to Panasonic, and possibly even specific to
    the FZ35, but I would be interested in knowing what other cameras do
    also.

    Then, when you read the RAW file into a photo editor, the software--
    let's assume for now that it is going to be some flavor of Adobe-- goes
    and interpolates like mad so you have a 4000x3000 grid with red, green
    and blue data for every pixel, most of it "faked" to some extent. My
    next question is, do Adobe (or other) programs consult the red or blue
    pixel data when filling in the missing green pixel data? I suppose it
    is possible that the answer to this question may also depend on the
    model of the camera, but I hope not.

    I want to yank out the unadulterated green pixel data from my Panasonic
    RAW files before it gets contaminated with any of that fuzzy, sparsely
    sampled red or blue pixel data. Some day I will want to do that for my
    Olympus RAW files as well. Any advice on how to do this would be
    appreciated.
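
    (For concreteness, the extraction I have in mind would look
    something like this in numpy, assuming an RGGB filter layout; the
    FZ35's actual CFA order would need checking, and 'mosaic' here is
    a hypothetical array of the raw sensor values:)

        import numpy as np

        # Hypothetical: 'mosaic' holds one raw colour sample per
        # photosite, assuming RGGB order:
        #   R G R G ...
        #   G B G B ...
        mosaic = np.zeros((3000, 4000), dtype=np.uint16)  # placeholder

        red    = mosaic[0::2, 0::2]   # 1500 x 2000 red sites
        green1 = mosaic[0::2, 1::2]   # greens on the red rows
        green2 = mosaic[1::2, 0::2]   # greens on the blue rows
        blue   = mosaic[1::2, 1::2]   # 1500 x 2000 blue sites

        # green1 and green2 together hold all 6 million green samples
        # from the diagonal grid, with no red or blue mixed in.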
     
    Paul Ciszek, Jun 3, 2013
    #1

  2. Adobe published the full spec for their DNG file format. They also have
    free converters available that will convert Panasonic and Olympus raw
    formats to DNG. That would just leave it up to you to read the spec and
    extract just the green channel data. Definitely doable.
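
    Alternatively, a LibRaw wrapper such as rawpy can spare you the
    spec-reading and read the native files directly (a sketch; the
    filename is hypothetical, attribute names per rawpy's docs):

        import rawpy  # LibRaw wrapper

        raw = rawpy.imread("P1000123.RW2")   # hypothetical filename
        mosaic = raw.raw_image               # undemosaiced sensor values
        cfa = raw.raw_colors                 # 0=R, 1=G, 2=B, 3=G2 per site
        greens = mosaic[(cfa == 1) | (cfa == 3)]  # all green samples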
     
    Mark Storkamp, Jun 3, 2013
    #2

  3. Paul Ciszek

    Guest Guest

    Yes, the raw file records the pixels as they exist on the sensor;
    that's what raw means.
    No interpolation takes place before the data is saved, unless you
    use something like s-raw, which is a resized, smaller raw file,
    but it's not really raw.
    The interpolated data isn't faked.
    The demosaicing doesn't depend on the camera. Typically a 9-pixel
    grid is used for any given pixel; some converters use a 25-pixel
    grid for better results. It could be bigger still, but it quickly
    becomes a lot more work for very little benefit.
    dcraw can output the raw pixels; that's about the only thing it's
    useful for.
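
    For instance, a sketch of that route (dcraw flags per its man
    page; the filename is hypothetical, and any 16-bit TIFF reader
    will do in place of Pillow):

        # First dump the undemosaiced sensor values with dcraw:
        #   dcraw -D -4 -T P1000123.RW2
        # -D = document mode (no demosaic, no scaling), -4 = 16-bit
        # linear, -T = write a TIFF: one grey value per photosite.

        import numpy as np
        from PIL import Image  # Pillow

        mosaic = np.array(Image.open("P1000123.tiff"))
        print(mosaic.shape, mosaic.dtype)  # full sensor grid, 16-bit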
     
    Guest, Jun 3, 2013
    #3
  4. Paul Ciszek

    Martin Brown Guest

    I doubt if it will give you any more resolution than simply asking the
    reconstruction filter to generate a monochrome image from the raw file.

    It would be a different matter if you had been using an H-alpha filter
    and imaged the thing entirely in pure red light. Then there are some
    issues that have to be addressed with the logic of demosaicing.
    Generally yes, the raw file records the sensor pixels as-is, but
    there may be tweaks for known hot or dead pixels.
    What you have to do is convert the image into a monochrome image
    with every pixel saved irrespective of colour, and then AND it
    with a mask to isolate the colour component that you want.
    It really doesn't contaminate them enough to worry about. You are
    worrying about a non-problem.
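
    A sketch of that masking step (numpy, again assuming an RGGB
    layout; 'mono' is a hypothetical array of the per-photosite
    values):

        import numpy as np

        mono = np.zeros((3000, 4000), dtype=np.uint16)  # per-photosite

        # True only at the green sites of an RGGB Bayer pattern;
        # everything else gets zeroed out.
        green_mask = np.zeros(mono.shape, dtype=bool)
        green_mask[0::2, 1::2] = True   # greens on the red rows
        green_mask[1::2, 0::2] = True   # greens on the blue rows

        green_only = np.where(green_mask, mono, 0)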
     
    Martin Brown, Jun 3, 2013
    #4
  5. Paul Ciszek

    Alan Browne Guest


    1) Get the channels. (Will interpolate green, leaving out the R & B
    data)
    In CS5:
    Load the raw file into CS5 (via ACR). Leave the settings as is
    (or adjust to as neutral as you like).

    Open in Photoshop (what is on the screen is _not_ JPG - but it
    is 'interpolated').

    Open the "Channels" control panel.

    Turn off the red and blue channels.

    What you see is what the camera saw in the green pixels (and
    interpolated into the space where the R and B pixels were), but
    there is no R or B contamination.

    Saving that is another issue. Unfortunately, PS does colour
    separation in subtractive space (CMYK) and not additive (RGB).


    2) Filter
    As above up to open in PS.

    Then Image, Adjustments, Channel Mixer

    Select Preset "B&W Green filter".

    Renders as greytones - but that is all that was in the green channel.

    To be clear, none of the R or B channel info is left in the image.
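
    Outside Photoshop the same mix can be done in a few lines (a
    sketch; 'rgb' is a hypothetical demosaiced image array):

        import numpy as np

        rgb = np.zeros((3000, 4000, 3), dtype=np.float32)  # demosaiced

        # A channel mixer is a weighted sum of the three channels;
        # weights (0, 1, 0) keep green only, matching the intent of
        # the "B&W Green filter" preset above.
        weights = np.array([0.0, 1.0, 0.0], dtype=np.float32)
        grey = rgb @ weights   # (3000, 4000) greyscale from green alone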
     
    Alan Browne, Jun 3, 2013
    #5
  6. Paul Ciszek

    Alan Browne Guest

    Some cameras filter the data for noise before storing raw, but that
    still results in isolated RGB channels in the output raw file.
    Download dcraw.c; every model's sensor arrangement is described in
    detail there. You'll have to wade through the tables and some of the
    algorithms.

    Source code:
    http://www.cybercom.net/~dcoffin/dcraw/dcraw.c
    If you're handy with C, download dcraw.c and modify it to extract the raw
    as you need it. You may know someone who can do it for you.
     
    Alan Browne, Jun 3, 2013
    #6
  7. Paul Ciszek

    Paul Ciszek Guest

    Well, in the sense that each pixel out of those 12 million started
    out with data for only the red, the green, or the blue channel; but
    the moment you have imported your RAW image into Adobe, every pixel
    has data for all three channels, so 2/3 of the color information in
    the picture has to be created mathematically using data from other
    pixels.
    My question is, when filling in the green data for a pixel that does not
    have green data of its own, does the algorithm make use of any data from
    red or blue pixels (to follow the overall brightness level or something),
    or does it just use data from nearby green pixels?
     
    Paul Ciszek, Jun 4, 2013
    #7
  8. Paul Ciszek

    Paul Ciszek Guest

    If nothing else, it sure kills the chromatic aberration better than
    any of the tools available in Silkypix. In the full color image, the
    silhouette of Venus is fringed with red and blue; converting to
    monochrome would leave it fuzzy, while isolating the green channel
    leaves it sharp.
     
    Paul Ciszek, Jun 4, 2013
    #8
  9. Paul Ciszek

    Paul Ciszek Guest

    Sorry, what do CS5 and ACR mean in this context?
     
    Paul Ciszek, Jun 4, 2013
    #9
  10. Paul Ciszek

    Guest Guest

    the missing two components are precisely calculated from surrounding
    pixels.

    nothing about it is faked.
    it makes use of surrounding pixels, as i said.

    there are many ways to calculate it, from simple algorithms (not
    very good ones) to very sophisticated ones. that's why you get
    different results from different raw converters.
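
    the simplest version of that calculation is bilinear: at a red or
    blue site, the missing green value is the average of the four
    green neighbours. a sketch (numpy/scipy; real converters use far
    smarter, edge-aware schemes):

        import numpy as np
        from scipy.ndimage import convolve

        mosaic = np.zeros((3000, 4000), dtype=np.float32)  # raw values

        # Green samples of an RGGB pattern; zero at red/blue sites.
        green_mask = np.zeros(mosaic.shape, dtype=bool)
        green_mask[0::2, 1::2] = True
        green_mask[1::2, 0::2] = True
        g = np.where(green_mask, mosaic, 0.0)

        # At every red or blue site the four nearest neighbours are
        # all green, so this kernel averages exactly those four.
        kernel = np.array([[0, 1, 0],
                           [1, 0, 1],
                           [0, 1, 0]], dtype=np.float32) / 4.0
        g_filled = np.where(green_mask, g,
                            convolve(g, kernel, mode="mirror"))

    note that this naive scheme touches only green samples; the
    smarter gradient-corrected methods do consult the red and blue
    sites to pick an interpolation direction, which is the nuance
    behind the question above.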
     
    Guest, Jun 4, 2013
    #10
  11. Paul Ciszek

    Guest Guest

    adobe creative suite 5 (namely, photoshop, not the rest of the suite)
    adobe camera raw
     
    Guest, Jun 4, 2013
    #11
  12. Paul Ciszek

    Martin Brown Guest

    Depending on when you took the image of Venus, it is quite possible
    that most of the false colour you see is atmospheric dispersion of
    the spectrum. Although it is possible, and even likely, that you
    have an image where the red and blue images are slightly different
    sizes and displaced relative to the green one, I think it is
    unlikely that they are so far out of focus that it would affect
    things much.

    Split it to RGB and then adjust using one of the tools that allows
    matching the centroid and magnification. You would probably get better
    advice on sci.astro.amateur.

    There are several sorts of chromatic aberration, and to correct it
    you have to know which sort and use the right technique.
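
    A sketch of that alignment step (assuming a recent scikit-image;
    r, g and b are hypothetical channel planes, and this handles
    translation only - matching magnification needs a resampling step
    on top):

        import numpy as np
        from scipy.ndimage import shift
        from skimage.registration import phase_cross_correlation

        rng = np.random.default_rng(0)
        g = rng.random((512, 512))      # placeholder channel planes
        r = g.copy(); b = g.copy()

        # Estimate how far red and blue are displaced relative to
        # green, then shift them back into register.
        dr, _, _ = phase_cross_correlation(g, r, upsample_factor=10)
        db, _, _ = phase_cross_correlation(g, b, upsample_factor=10)
        r_aligned = shift(r, dr)
        b_aligned = shift(b, db)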
     
    Martin Brown, Jun 4, 2013
    #12
  13. Paul Ciszek

    Martin Brown Guest

    They are calculated according to a specific algorithm that makes certain
    usually valid assumptions about correlations in natural images. If the
    target is a pathological test case image then those assumptions are not
    met and the results can go haywire.

    The most fundamental assumption of Bayer demosaicing is that the green
    channel is a good proxy for luminous intensity and can be used to
    bootstrap the Y channel with relatively minor corrections from R & B.
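
    (As a rough illustration of that proxy, the classic Rec. 601 luma
    weighting, in which green carries the bulk of the signal:)

        # Rec. 601 luma: green dominates, which is why the green
        # channel approximates luminance well.
        def luma(r, g, b):
            return 0.299 * r + 0.587 * g + 0.114 * b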
    Depends what you mean by faked. The target image could be contrived
    to exactly match the Bayer mosaic pattern at a specific distance,
    and would then be indistinguishable from a white sheet of card.

    In practice you would get Moire fringes because the alignment would be
    critical. It isn't that uncommon to see such fringes in test images.
    It usually generates a pseudo image in Y,Cr,Cb space and there are quite
    a few choices for how to do it and minor improvements are still an area
    of active research. I don't entirely agree with everything in this paper
    but it shows the quirks of some of the most common methods.

    http://iplab.dmi.unict.it/download/...e-colors removal on the YCrCb color space.pdf

    The rules that you use to remove colour fringing affect the final image.
    The problem is that even the sophisticated ones can be wrong if you are
    unlucky in your choice of target and the key assumptions are not met.
     
    Martin Brown, Jun 4, 2013
    #13
  14. Paul Ciszek

    Alan Browne Guest

    Photoshop CS5 (CS3, 4 and 6 will work too).

    Adobe Camera Raw (ACR) is the first stage of import into PS.

    If you don't have them then use similar procedures in any high end photo
    editor.

    Or get the dcraw source code and modify it to do as you need.
    (Programming skills needed).
     
    Alan Browne, Jun 4, 2013
    #14
  15. Paul Ciszek

    Alan Browne Guest

    Interpolated data is "faked" because it is an _estimate_ using adjacent
    data and weightings. IOW it is not "precise" because the actual
    information at that photosite is unknown.
     
    Alan Browne, Jun 4, 2013
    #15
  16. Paul Ciszek

    Guest Guest

    true, and those test case images are the ones the foveon idiots fixate
    upon. nobody cares about colour resolution charts.
    originally; not so much now - all three pixel components contribute
    to the luminance.
    that's a theoretical edge case that isn't possible even if you tried.
    exactly. you could never align such an image so that it would match the
    pixel spacing.
     
    Guest, Jun 4, 2013
    #16
  17. Paul Ciszek

    Guest Guest

    it's not faked. it's calculated and it's very accurate. it's not
    perfect, but nothing is.
    the calculations are very precise. the error can be measured and it's
    *very* low.

    it's in no way 'guessed' or 'faked'.
     
    Guest, Jun 4, 2013
    #17
  18. Paul Ciszek

    Alan Browne Guest

    The calculation is an estimate.
    Precise calculations are meaningless in the absence of data. The
    estimate could be accurate to 1000 decimal places and that would not
    make it any more 'true'.

    Since it is an estimate of what would have been in that location had the
    information not been filtered out it remains an estimate no matter how
    low the error may be.

    The error cannot be measured since the data was thrown away.
     
    Alan Browne, Jun 5, 2013
    #18
  19. Paul Ciszek

    Guest Guest

    which means it's not faked.
    there's plenty of data. millions of sampling points.
    which means it's not faked.
    you have the source image and the output of the demosaic, so the error
    can be calculated.
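
    measuring it is straightforward when a fully sampled reference
    exists: mosaic a full-colour test image, run it back through the
    demosaic, and compare. a sketch ('demosaic' is a hypothetical
    converter under test):

        import numpy as np

        def demosaic_rmse(reference_rgb, demosaic):
            """RMS error of a demosaicer against a fully sampled
            HxWx3 reference; 'demosaic' is a hypothetical function
            mapping an HxW Bayer mosaic back to an HxWx3 image."""
            h, w, _ = reference_rgb.shape
            mosaic = np.zeros((h, w), dtype=reference_rgb.dtype)
            mosaic[0::2, 0::2] = reference_rgb[0::2, 0::2, 0]  # R
            mosaic[0::2, 1::2] = reference_rgb[0::2, 1::2, 1]  # G
            mosaic[1::2, 0::2] = reference_rgb[1::2, 0::2, 1]  # G
            mosaic[1::2, 1::2] = reference_rgb[1::2, 1::2, 2]  # B
            estimate = demosaic(mosaic)
            return np.sqrt(np.mean((estimate - reference_rgb) ** 2))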
     
    Guest, Jun 5, 2013
    #19
  20. Paul Ciszek

    Martin Brown Guest

    It wasn't actually measured. It is interpolated from the data that you
    do have. This is usually sufficient for most images but not always.
    But there is still no green or blue data where red was measured,
    and so on for all permutations of these colour exclusions.
    It is always an inferred value based on the data that you do have. It
    could still be wrong and would certainly *BE* wrong if the target was
    one of the pathological test cards so beloved of Foveon supporters.
    Only *iff* you actually have a source image that was fully sampled
    in the first place. That is how the algorithms are tuned against
    real-world images - but they can still struggle a bit with white
    picket fences at shallow angles to the sensor array. The Moire
    fringing in chroma is very hard to remove without losing some real
    image data too.

    If you use a Bayer sampled CCD sensor then you are making assumptions
    about the target image that are usually valid but there are situations
    where the Bayer demosaic cannot get the right answer. These situations
    are usually contrived but they do sometimes occur in real life.

    An example is imaging the sun in the pure red light of H-alpha 656nm
    which presents serious problems to a Bayer array demosaicer. Early Kodak
    ones would go completely haywire on this source material.
     
    Martin Brown, Jun 5, 2013
    #20
