Possible to extract high resolution b/w from a raw file?

Discussion in 'Digital Cameras' started by bob, May 10, 2011.

  1. bob

    bob Guest

    Is it possible to extract a b/w photo from a camera raw file that is
    higher resolution than the color version, since one color pixel is made up
    of four b/w pixels with color filters?
     
    bob, May 10, 2011
    #1

  2. bob

    Savageduck Guest

    With all things being equal (no change in dimensions), a RAW file
    converted to B&W will retain the same resolution as the original.
    Depending on the method/technique used to make the B&W conversion,
    there is the possibility that some psycho-optical illusion of induced
    sharpness might be perceived from the removal of interacting colors. The
    resolution remains the same, as does the pixel count.
    So to answer your question, no you will not be able to extract a B&W
    image from a RAW file that is higher resolution than the color version.

    < http://homepage.mac.com/lco/filechute/BigSur04_085BW-compw.jpg >
     
    Savageduck, May 10, 2011
    #2

  3. bob

    Mxsmanic Guest

    No. The limit of luminance resolution doesn't change. All you're doing with
    black and white is removing the color information, but no new information is
    added. You can get the same black-and-white resolution by simply removing the
    color from the image.

    If you could physically remove the filters from the photosites on the
    sensor, then you could get better luminance resolution, at the expense
    of eliminating all color resolution.
     
    Mxsmanic, May 10, 2011
    #3
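    The conversion described above — discarding color while keeping every pixel — is just a weighted sum of the three channels. A minimal sketch, assuming numpy and the common Rec. 601 luma weights (the exact weighting is a choice, not dictated by the sensor):

```python
import numpy as np

# A hypothetical 2x2 RGB image (values in 0..1).
rgb = np.array([[[0.9, 0.2, 0.1],
                 [0.1, 0.8, 0.2]],
                [[0.2, 0.3, 0.9],
                 [0.5, 0.5, 0.5]]])

# Rec. 601 luma weights; other weightings are equally valid.
weights = np.array([0.299, 0.587, 0.114])

luma = rgb @ weights  # weighted sum per pixel

print(luma.shape)  # (2, 2): same pixel dimensions as the input
```

    Note the output has exactly the pixel dimensions of the input — no luminance resolution is gained or lost by dropping color.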
  4. bob

    Bruce Guest


    I often wonder why no manufacturer offers a b/w digital SLR or digital
    rangefinder camera (yes, Leica Camera, that's you!). I think it would
    be a strong seller to a niche market.

    In the meantime, I am very satisfied with ADOX CM 20 film, which
    probably has about the best resolving power of any currently available
    photographic medium:

    http://www.adox.de/english/ADOX_Films/ADOX_Films.html
     
    Bruce, May 10, 2011
    #4
  5. It is not. It's made up of the pixel itself and then (with some
    intelligent processing) of the values of its neighbours with
    different colours.

    -Wolfgang
     
    Wolfgang Weisselberg, May 10, 2011
    #5
  6. bob

    Bowser Guest

    No, and I wouldn't want to. If I gave up the color info in exchange
    for resolution, I'd lose the ability to fully manipulate the image
    in PS, where I can control every channel. Dark skies, nice flesh tones,
    and the like are no problem, even when shot without filters.
     
    Bowser, May 10, 2011
    #6
  7. bob

    Paul Furman Guest

    You'd need a sensor with no Bayer color filter, and even then you'd
    need to remove the anti-aliasing filter and risk moiré patterns.
     
    Paul Furman, May 10, 2011
    #7
  8. bob

    ray Guest

    I'm not aware of any current software that does that. I'm not an expert
    either, but I do know a bit about digital signal processing. Seems to me
    that a different de-mosaicing algorithm would have the potential to do
    that.
     
    ray, May 10, 2011
    #8
  9. The AA filter would be just as useful (or not, depending
    on your opinion).

    The problem is that monochrome image sensors are not
    equally sensitive to all colors, and instead map
    different colors to different intensity levels.

    That means there has to be some form of a "color"
    filter. Without a Bayer filter to encode a broad range
    of color information, the camera would be limited to
    one type of "film". You could buy, for example, a
    camera that matched Kodak Tri-X or one that matched
    Ilford HP5; which is not nearly as nice as having a
    Bayer filter and being able to use the same camera to
    emulate virtually any monochrome film.

    To put it mildly, even for B&W the functionality of the
    Bayer pattern encoding is extremely useful.
     
    Floyd L. Davidson, May 10, 2011
    #9
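    The film emulation described above amounts to mixing the colour channels with different weights before discarding them. A minimal sketch, assuming numpy; the preset numbers are made up for illustration, not measured film response curves:

```python
import numpy as np

# Illustrative channel-mix presets (hypothetical values):
# real film emulation would use measured spectral responses.
PRESETS = {
    "neutral":      (0.33, 0.34, 0.33),
    "red_filtered": (0.70, 0.25, 0.05),   # darkens blue skies
}

def to_mono(rgb, preset="neutral"):
    """Collapse an RGB image to monochrome with a chosen channel mix."""
    return rgb @ np.array(PRESETS[preset])

sky = np.array([[[0.3, 0.5, 0.9]]])       # a bluish pixel
print(to_mono(sky, "neutral")[0, 0])       # mid grey
print(to_mono(sky, "red_filtered")[0, 0])  # noticeably darker
```

    The point is that one Bayer capture supports any such mix after the fact; a fixed monochrome filter would bake exactly one of them into the hardware.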
  10. bob

    shiva das Guest

    Phase One makes a monochrome back for medium format cameras, "the
    Achromatic+ digital back", 39MP, which does not have a color filter.

    "The Phase One Achromatic+ is available for the Mamiya 645 AFD
    (including the Phase One 645DF camera), Contax 645 and Hasselblad V
    interfaces.

    "Also available is the Phase One Achromatic+ for Hasselblad H1 and H2
    cameras.

    "The Achromatic+ can be ordered without an IR filter mounted
    permanently. There are multiple solutions available for working with
    interchangeable filters for such a solution."

    <http://www.phaseone.com/en/Digital-Backs/Achromatic/Achromatic-plus-Info.aspx>
     
    shiva das, May 10, 2011
    #10
  11. It is. And even your description below says that it is.
    The minimum number of sensor locations that could be
    used per pixel is 4, but in practice a matrix of at
    least 9 sensor locations (and maybe more than that) is
    used. They *all* contribute to the RGB values for a
    pixel produced by interpolation.

    It is grossly inaccurate to consider each sensor
    location as directly related to a given pixel location
    of the image. It just doesn't work that way.
     
    Floyd L. Davidson, May 10, 2011
    #11
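    The neighbourhood interpolation being argued about can be made concrete. A minimal sketch of the simplest (bilinear) case, assuming numpy and the standard RGGB layout; it shows that even this crude method draws each missing green value from several sensor sites:

```python
import numpy as np

# A 4x4 mosaic of raw sensor values, standard RGGB layout:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
raw = np.arange(16, dtype=float).reshape(4, 4)

def green_at(y, x):
    """Green value at pixel (y, x) by bilinear interpolation.

    On an RGGB mosaic, green sites are where (y + x) is odd; at
    red and blue sites the green value is the mean of the direct
    neighbours -- already more than one sensor site per pixel.
    """
    if (y + x) % 2 == 1:
        return raw[y, x]           # a measured green sample
    neighbours = [raw[y + dy, x + dx]
                  for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= y + dy < 4 and 0 <= x + dx < 4]
    return sum(neighbours) / len(neighbours)

print(green_at(1, 1))  # blue site: mean of its 4 green neighbours
```

    Real demosaicing uses larger windows and edge-aware weights, as discussed later in the thread; this is only the degenerate baseline.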
  12. Every camera that uses a Bayer filter to encode color
    information has the option of emulating any BW film.
    How nice!

    Take out the Bayer filter and it becomes necessary to
    replace it with some kind of a color mapping filter to
    produce exactly one specific tone mapping (say, for
    example, to map tones the same way that Kodak Tri-X
    does, or the same as Ilford HP5 does). Just one. How
    restrictive!
     
    Floyd L. Davidson, May 10, 2011
    #12
  13. Use dcraw with the -d option. It's not particularly useful though.
     
    Floyd L. Davidson, May 10, 2011
    #13
  14. bob

    Guest Guest

    kodak had a couple and they weren't strong sellers.

    it makes a lot more sense to use a standard sensor and convert to b/w
    when you want it, without giving up the ability to shoot colour when
    you don't. it's also substantially less expensive, since low volume
    sensors are not cheap.
     
    Guest, May 10, 2011
    #14
  15. bob

    Guest Guest

    it isn't, which you confirm.
    he gave no number, so it doesn't.
    actually, it's 5: the pixel itself plus the 4 direct neighbors (up,
    down, left, right). i don't know of anything that does that, since it
    looks like shit. normally 9 is considered the minimum.

    if you're thinking of a 2x2 block for a 4 pixel minimum, no. bayer does
    not work that way.
    true, which means that it doesn't use 4.

    typically it's 9 (good) or 25 (better) and occasionally even more but
    it begins to not be worth it at that point.
    actually, quite accurate.
     
    Guest, May 10, 2011
    #15
  16. It's 4, not 5. One single RGGB matrix is the minimum that will provide
    a full color encoding.
    You don't seem to understand how it works.
    Nobody said it does. What I said is that 4 is the minimum it *can* use.
    So that's exactly what I said, and you repeat it as if it had not been
    said...
    Ignorant, actually.
     
    Floyd L. Davidson, May 10, 2011
    #16
  17. bob

    Alfred Molon Guest

    Yes, a B&W sensor outperforms a colour sensor resolution-wise. Here
    is why:

    The colour resolution in a Bayer sensor is way lower than the luminance
    resolution. To avoid colour aliasing - or let's say to reduce it to an
    acceptable level - the AA filter needs a cutoff dimensioned somewhere
    between the lower colour resolution and the higher luminance
    resolution.

    In a B&W sensor instead the AA filter is dimensioned for the much higher
    luminance resolution. Higher cutoff point => more resolution.
     
    Alfred Molon, May 10, 2011
    #17
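    The cutoff argument above can be put in numbers. A minimal sketch: on a Bayer grid, red and blue are sampled at twice the pixel pitch, so their Nyquist limit is half that of the full luminance grid, and the AA filter cutoff has to land between the two:

```python
# Sampling intervals on a Bayer grid, in units of the pixel pitch.
pixel_pitch = 1.0      # every site contributes to luminance
red_blue_pitch = 2.0   # R and B are sampled every other pixel

# Nyquist limits in cycles per pixel pitch.
nyquist_luma = 1 / (2 * pixel_pitch)        # 0.5
nyquist_chroma = 1 / (2 * red_blue_pitch)   # 0.25

# The AA filter cutoff sits between these limits, so it is always
# below what a filterless B&W sensor of the same pitch could resolve.
print(nyquist_luma, nyquist_chroma)
```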
  18. bob

    Martin Brown Guest

    Actually he is right that a matrix of 4 cells is the bare minimum that
    can be used for Bayer demosaic although the results are not great.

    The pattern of 5 you describe fails completely for all red and blue
    sensor sites, which in the standard Bayer mosaic have only green direct
    neighbours. At least Floyd's method would allocate full RGB pixels to
    every location on the grid apart from at the very edges.

    RG
    GB

    Is the unit cell of the Bayer sensor grid.
    In a real sense it does sometimes although heuristics are used based on
    the green channel information to decide what weights to use. The default
    is 3x3 unless special conditions like sharp luminance edges are found.

    The detailed algorithms are patented but in rough form green channel is
    used to work out a crude green (proxy luminance) value for all the
    unsampled points and then a heuristic shader uses the red and blue
    pixels to fill in the gaps. Most digicams actually interpolate to a 2x1
    chroma subsampled image that will be JPEG encoded. There are only 2G 1B
    1R pixels per unit cell and it makes no sense to interpolate up to a
    full colour 4G 4B 4R then convert to 4Y 4Cr 4Cb and subsample when you
    can retain more accuracy and do it quicker from 4Y 2Cr 2Cb into JPEG.
    Typically it uses 9, and maybe a few from the next ring out, to try to
    work out if there is a sharp edge transition and choose the right tweak.
    No it isn't. Each pixel location in the final image is potentially
    related to all its neighbouring sensor sites as well as its own measured
    value. Measured values are not normally allowed to change in Bayer
    demosaicing but may be altered by any unsharp masking done later.

    The individual pixel tells you one colour channel at that point in the
    image. The green channel is fairly informative and is used to generate
    the first guess at luminance and then the red and blue are combined in.

    The answer for the OP is that it depends. If you know the precise
    blurring function of your monochrome image and it obeys some very strict
    criteria then scientific deconvolution codes can be used to get a
    roughly 3x increase in resolution in regions of high signal to noise at
    the expense of various artefacts. The HST myopia problem was worked
    around using these codes and they were used to diagnose the fault but it
    isn't quick and the results are not always pretty. Unsharp masking is by
    comparison quick, crude but moderately effective.

    Regards,
    Martin Brown
     
    Martin Brown, May 10, 2011
    #18
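    The "quick, crude but moderately effective" unsharp masking mentioned above is easy to sketch. A minimal 1-D version, assuming numpy and a 3-tap box blur; it steepens a soft edge by adding back the difference between the signal and a blurred copy (producing the characteristic over/undershoot):

```python
import numpy as np

def unsharp_mask(signal, amount=1.0):
    """Crude 1-D unsharp mask: add back the difference between
    the signal and a 3-tap box blur of it."""
    blurred = np.convolve(signal, np.ones(3) / 3, mode="same")
    return signal + amount * (signal - blurred)

# A soft edge: the mask steepens the transition,
# with slight undershoot below and overshoot above.
edge = np.array([0.0, 0.0, 0.25, 0.75, 1.0, 1.0])
print(unsharp_mask(edge))
```

    Unlike the deconvolution codes Martin mentions, this adds no real resolution; it only exaggerates existing local contrast.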
  19. bob

    Guest Guest

    only if you accept shitty results. 5 is the minimum if you want to
    maintain the actual resolution of the sensor, not cut it by 75%.
    i definitely understand how it works.
     
    Guest, May 10, 2011
    #19
  20. bob

    Guest Guest

    right, the results are awful and also very low resolution. no bayer
    camera uses 2x2 blocks. it's stupid and a straw man.
    i wouldn't say fail completely. green is the main component of
    luminance (and in the original bayer patent, only green was
    considered). the colour errors will be high but the eye isn't that
    sensitive to that.

    a realistic minimum is 9 pixels. yes you 'can' do it with less but
    nobody does.
    nobody uses 2x2, except in the minds of some foveon fanbois thinking
    that's how bayer works (it doesn't).
     
    Guest, May 10, 2011
    #20
