Now that we have high resolution, why isn't "binning" a standard option?

Discussion in 'Digital SLR' started by Mike Warren, Sep 15, 2007.

  1. Mike Warren

    Mike Warren Guest

    What's wrong with binning in PP? I don't see a real need for it in-camera.
    Mike Warren, Sep 15, 2007

  2. RichA

    RichA Guest

    The combining of pixels into superpixels prior to digitization: 4
    pixels of 7um become one of 14um. Low-light shooting becomes much
    easier as the effective ISO is raised. Not every situation calls for
    12 megapixels of resolution, but situations often call for low-light
    capability.
    RichA, Sep 15, 2007
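A minimal sketch of the 2x2 binning idea described above (illustrative only: the array values and the `bin2x2` helper are made up, and real sensor binning combines charge in analogue before readout, not digital values after it):

```python
import numpy as np

def bin2x2(img):
    """Sum non-overlapping 2x2 blocks: four 7um pixels -> one 14um superpixel."""
    h, w = img.shape
    return (img[0:h:2, 0:w:2] + img[1:h:2, 0:w:2] +
            img[0:h:2, 1:w:2] + img[1:h:2, 1:w:2])

sensor = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy 4x4 "frame"
binned = bin2x2(sensor)
print(binned.shape)  # (2, 2) -- a 12Mp frame shrinks to 3Mp the same way
```

Each output value carries four pixels' worth of signal, which is where the low-light gain comes from.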

  3. gpaleo

    gpaleo Guest

    Excellent question, and I presume quite easy to implement at the sensor
    level.
    gpaleo, Sep 15, 2007
  4. At the sensor level those 4 colour 7um pixels become one monochrome 14um
    pixel - and your image reduces to 3Mp.

    To do binning on a conventional Bayer array while retaining colour
    information means combining non-adjacent pixels. That means even more
    complex circuitry and tracking in an already dense pixel. That, in
    turn, means the signal losses may swamp any noise benefits.

    For example, any 4x4 array on a standard Bayer sensor has 4 red, 4 blue
    and 8 green pixels. Each colour pixel has pixels of the other colours
    between them, so adjacent pixels cannot be binned. However non-adjacent
    pixels could be combined to give 1 red & blue and 2 green superpixels in
    a 2x2 array, with each superpixel spanning a 3x3 area of the originals.
    So, if your original pixels are 7um then the superpixels are actually
    21um in size, but on a 14um pitch, i.e. the superpixels physically
    overlap. That may not be a bad thing, since spatial coherence between
    pixels is Foveon's main feature, but the resolution would be less than a
    conventional 3Mp sensor.
    Kennedy McEwen, Sep 15, 2007
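The non-adjacent combination Kennedy describes can be sketched with array slicing (a rough illustration assuming an RGGB layout with red at even rows and columns; a real sensor would have to do this in the readout electronics, which is exactly where the extra complexity he mentions lives):

```python
import numpy as np

def bin_bayer_red(raw):
    """Sum the four red samples of each 4x4 RGGB tile into one red superpixel.

    Red sits at even rows/columns, so the four samples are non-adjacent and
    the corner samples of each superpixel span a 3x3 pixel footprint, on a
    2-pixel (14um) output pitch.
    """
    r = raw[0::2, 0::2]  # every red sample (one per 2x2 quartet)
    return (r[0::2, 0::2] + r[0::2, 1::2] +
            r[1::2, 0::2] + r[1::2, 1::2])

raw = np.zeros((8, 8))
raw[0::2, 0::2] = 1.0       # light up every red sample
print(bin_bayer_red(raw))   # each superpixel sums four reds -> 4.0
```

The green and blue channels would be binned the same way from their own sample positions.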
  5. Paul Furman

    Paul Furman Guest

    Is binning any better than downsampling?
    Paul Furman, Sep 15, 2007
    In principle, yes, since you only have one read operation and therefore
    only one read-noise contribution, while the photon signal-to-noise
    improves as sqrt(n). The problem is whether that sqrt(n) can be achieved
    in practice without signal losses or additional noise. Downsampling, in
    contrast, carries n read-noise contributions.
    Kennedy McEwen, Sep 16, 2007
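Kennedy's point can be put in rough numbers (all values here are made up for illustration): binning n pixels before readout pays one read-noise penalty, while reading all n pixels and then downsampling pays n of them.

```python
import math

n, S, R = 4, 400.0, 10.0   # assumed: 4 pixels, 400 e- signal each, 10 e- read noise

signal = n * S             # total collected signal is the same either way
shot_var = n * S           # Poisson photon noise: variance equals the mean

snr_binned = signal / math.sqrt(shot_var + R**2)            # one read
snr_downsampled = signal / math.sqrt(shot_var + n * R**2)   # n reads

print(round(snr_binned, 1), round(snr_downsampled, 1))      # 38.8 35.8
```

With these assumed numbers the gap is modest because shot noise dominates; at lower signal levels, where read noise dominates, binning's single read matters much more.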
  7. wiyum

    wiyum Guest

    > At the sensor level those 4 colour 7um pixels become one monochrome
    > 14um pixel
    What would the sensitivity of that monochrome pixel be, though? Would
    it be the mean of the sensitivities of the four pixels? That's what
    I'm assuming. I'm just wondering how the difference between the
    density of the green pixel filtration (relatively minor) and the blue
    pixel filtration (around 3 stops) would change how the binning
    operates. You seem to know about these things.

    wiyum, Sep 17, 2007
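A back-of-envelope take on the question above, assuming the summed superpixel's sensitivity is simply the mean of the four filtered sensitivities (every transmission figure here is a guess for illustration, not measured filter data):

```python
import math

t_red = 0.4      # assume red filtration passes ~40% of white light
t_green = 0.8    # assume green filtration is relatively minor (~1/3 stop)
t_blue = 0.125   # "around 3 stops" of blue filtration = 1 / 2**3

# An RGGB quartet has one red, two green and one blue pixel
mean_sensitivity = (t_red + 2 * t_green + t_blue) / 4
stops_lost = -math.log2(mean_sensitivity)
print(round(mean_sensitivity, 3), round(stops_lost, 2))  # 0.531 0.91
```

Under these assumptions the dense blue filter drags the average down only modestly, because it contributes just one of the four summed pixels.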
