Olympus to implement in-camera automatic HDR imaging?

Discussion in 'Olympus' started by RichA, Jun 29, 2007.

  1. RichA

    RichA Guest


  2. gowanoh

    gowanoh Guest

    If I understand this, it is a way to read data off a single sensor
    twice during what constitutes one exposure (a short read and a long
    read), so that gain can be adjusted separately for dark and light
    areas and the two merged. This would not necessarily require merging
    two complete images, only the parts of the image that are out of
    exposure range, in order to increase the apparent latitude. The
    description mentions that if movement is detected, meaning a change
    in image data between the short and the long read, the long exposure
    is discarded.
    I have wondered since the beginning of digital time why such a system could
    not be implemented.
    Olympus is positioned to do this with its "live view" dual sensor dSLRs: why
    can't imaging data be incorporated from both sensors?
    Based on my limited technical expertise, this could be a more complex
    but technically better solution than Kodak's change to the Bayer
    filter, which would seem to lead to less imaging data coming off the
    sensor.
    gowanoh, Jun 29, 2007
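    The short/long merge with motion rejection described above can be
    sketched in a few lines of numpy. This is only an illustration of the
    idea, not Olympus's actual method; the function name, thresholds, and
    the assumption of values normalised to [0, 1] are all hypothetical.

    ```python
    import numpy as np

    def merge_exposures(short_img, long_img, gain,
                        sat_thresh=0.95, motion_thresh=0.05):
        """Merge one short and one long read of the same exposure.

        `gain` is the known exposure ratio (long / short). Pixels that
        saturate in the long read are filled from the scaled short read;
        pixels where the two reads disagree after scaling are treated as
        subject movement, and the long-read data is discarded there.
        """
        short_scaled = np.clip(short_img * gain, 0.0, 1.0)
        saturated = long_img >= sat_thresh
        motion = np.abs(long_img - short_scaled) > motion_thresh
        return np.where(saturated | motion, short_scaled, long_img)
    ```

    Only the out-of-range or moving pixels end up taking data from the
    short read, which matches the description of merging "only parts of
    the image that are out of exposure range".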

  3. gowanoh wrote:
    Optical alignment tolerances?

    David J Taylor, Jun 29, 2007
  4. Pete D

    Pete D Guest

    If the photos turn out like some of the totally unrealistic HDR results I
    have seen then no thanks. Mind you I guess Olympus has to do this to try and
    keep up with the APS-C sensored cameras.
    Pete D, Jun 30, 2007
  5. RichA

    RichA Guest

    I agree on both points. HDR images just look weird, much like those
    infinite-DOF images compiled from dozens of shots taken at different
    focus points.
    RichA, Jun 30, 2007
  6. Ray Macey

    Ray Macey Guest

    The second sensor that handles the live preview is a cheap, small
    thing, not a full-fledged sensor in its own right.

    Ray Macey, Jun 30, 2007
  7. Pete D

    Pete D Guest

    Yes maybe but it has to help the Oly 4/3rds, any improvement will be
    Pete D, Jun 30, 2007
  8. Wolfgang Weisselberg Guest

    For the same reason we prefer single-exposure cameras to
    scanning backends or "one chip, 3 filters in succession"
    methods, even though they can deliver superior results:

    Subject movement.

    Otherwise I would have an even better solution: a short(ish)
    exposure or two to find out the relative light levels, then
    extrapolate so that each sensor well ends up nearly, but never
    completely, full. Make a long(er) exposure, which is interrupted
    pixel by pixel as each pixel reaches its calculated "full"
    exposure. The per-pixel exposure time together with the (now
    exactly) measured fill level gives you the exact strength of the
    light, with practically zero noise in the dark areas, at the cost
    of a much longer exposure.
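    The arithmetic behind that per-pixel scheme is simple enough to
    sketch: plan each pixel's stop time from a short metering exposure,
    then recover the light strength as fill level divided by exposure
    time. A minimal numpy sketch, assuming linear sensor response and
    fill levels normalised to [0, 1]; all names and constants are
    illustrative, not from any real camera.

    ```python
    import numpy as np

    def plan_stop_times(pre_fill, t_pre, target=0.9, t_max=1.0):
        """From a short metering exposure, extrapolate when each pixel
        would reach `target` (nearly, but never completely, full)."""
        rate = pre_fill / t_pre                       # light per unit time
        return np.minimum(target / np.maximum(rate, 1e-12), t_max)

    def estimate_irradiance(fill, t_stop):
        """A pixel read out at its own stop time with a measured fill
        level gives the light strength as fill / time, so dark pixels
        simply integrate longer instead of drowning in read noise."""
        return fill / t_stop
    ```

    A bright pixel stops early and a dim one integrates ten times as
    long, yet both report their true relative light levels.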
    Because they are not aligned 100% and they do not cover the
    same field of view.

    You may also try to read up on 3-chip cameras. This technique is
    used in (more expensive) video cameras and has long been used in
    professional equipment, but not in still cameras. Guess there must
    be a reason, since the pay-off (much better colour resolution,
    better luminance resolution) would be good to have. IIRC they have
    been experimenting with these things ...
    Kodak is simply reducing the colour resolution for a higher
    luminance resolution and luminance sensitivity. I would have
    preferred a dedicated b/w camera (with full luminance resolution,
    and at least the same sensitivity), but ... who wants b/w these
    days?
    Wolfgang Weisselberg, Jun 30, 2007
