P&S vs DSLR - Does this argument make sense?

Discussion in 'UK Photography' started by aniramca, Jul 26, 2007.

  1. FWIW, the blue channel is often the noisiest on standard
    CCD and CMOS sensors as well. If you look at CCD data
    sheets you'll see that of the three channels, the blue channel
    invariably has the lowest output for a given light level.

    rafe b
    Raphael Bustin, Jul 30, 2007

  2. Profanely mundane? Heh. Never heard those two
    words arranged quite that way.

    rafe b
    Raphael Bustin, Jul 30, 2007

  3. aniramca

    Paul Furman Guest

    Yes, I think that was a typo. However, note that it has more to do with
    the size of the sensor than the size of the pixels. There is some
    difference with dynamic range in smaller pixels but not that much if you
    have more pixels in the same area.
    Paul Furman, Jul 30, 2007
  4. I'm not pro DSLR. I'm pro performance. The current crop
    of P&S cameras generally has low performance. Shutter
    lag in a P&S digital camera is, in general, worse than with
    P&S film cameras. Noise is higher than it needs to be.
    There is no technical reason why camera manufacturers
    can't make small, pocket-size cameras with virtually no
    shutter lag (e.g., use the same, or improved,
    AF mechanisms from film P&S cameras), with
    large sensors to give good signal-to-noise
    ratios comparable to many DSLRs, and for prices
    less than the current low-end DSLRs. But for some reason
    (fear of eroding the DSLR market?) they haven't yet
    done that.

    When they do, I'll buy one. From the many complaints we
    continuously see about P&S digitals in this newsgroup, I bet
    many other people will buy them too.
    Camera manufacturers, are you listening?

    That doesn't mean there isn't a market for the low-performance,
    lowest-cost digital P&S, but today there is a performance
    gap where there need not be one.

    Roger N. Clark (change username to rnclark), Jul 30, 2007
  5. Yes, I agree. But also, the Foveon sensor probably has lower
    well capacities in each channel, probably contributing to higher noise
    in all channels. The blue channel, if it has ~1/8 the well depth, would
    be additionally challenged, on top of the generally lower
    quantum efficiency of silicon sensors in the blue.
    It sure would be nice to see real sensor performance data
    (full well capacities and read noise for each channel) to
    really understand what is going on.

    Roger N. Clark (change username to rnclark), Jul 30, 2007
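    To make the "lower well capacity" point concrete, here is a minimal
    Python sketch of how full well and read noise set dynamic range. The
    40,000 e- and 5,000 e- full wells and the 5 e- read noise below are
    made-up placeholder numbers, since (as the post notes) the real
    per-channel figures are not published:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in photographic stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# A deep well vs. a shallow (~1/8 depth) one, same read noise:
print(round(dynamic_range_stops(40000, 5), 2))  # ~12.97 stops
print(round(dynamic_range_stops(5000, 5), 2))   # ~9.97 stops
```

    Roughly three stops of dynamic range are lost to the 1/8-depth well,
    which is why per-channel full-well data would be so informative.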
  6. aniramca

    Rich Guest

    But what evidence do we have that (for example) 4/3rds sensors with
    10 megapixels are less sensitive than 1.5x-crop sensors with larger pixels?
    Rich, Jul 30, 2007
  7. aniramca

    DHB Guest

    Thanks for the save, yes it was a typo as I did intend to type
    photons not photos.

    Several things actually need to be considered & "physical
    sensor size" & pixel or photo receptor site size "both" matter. In
    addition so does the size & type of the micro lenses used over each
    photo receptor site on the Sensor, be it CMOS or CCD.

    If, for example, you kept the sensor resolution fixed @ say
    5MP & used a 1/2.5" physical sized sensor, the size of the micro
    lenses could only be so large. If however you increased the physical
    sensor size to 1/1.8", you would have more area to use larger micro
    lenses, since we are still keeping the resolution fixed @ 5MP.

    If I recall correctly Fuji used both large & small micro
    lenses on the same sensor in an effort to increase dynamic range &
    overall sensitivity. This may in part be responsible for the need to
    increase the physical sensor size to 1/1.7" as they did on my Fuji
    (F11 @ 6.3MP).

    If they increased sensor size even further & kept the
    resolution @ the same 5MP, I think the photo receptor site micro
    lenses would become hard to manage, because I am certain that there
    are practical limits as to how large they can be before dynamic range
    becomes very narrow & other optical problems get out of control.

    Personally I suspect that the next sensor innovations may come
    when photo receptor sites become more like the eyes of nocturnal
    animals that have a light reflective coating behind their photo
    receptors to allow for more photons to hit the same area. The other
    possibility might be a multi layer sensor or a 3 sensor approach like
    that now being used in high end video cameras.

    Unless some new innovative approach is conceived by somebody
    thinking way "outside the box" of conventional known methods, we are
    likely to continue to see relatively minor variations of sensor
    optics, electronics, hardware & firmware tweaks.

    Fuji's new 1/1.6" @ 12MP sensor seems interesting but I can't
    help but think it's a bit too much even for them, but relative to the
    competition & what "most consumers" seem to want, it might just prove
    to be a very lucrative combination for them. Competition certainly is
    a good thing, let's hope we have much more!

    Respectfully, DHB

    "To announce that there must be no criticism of the President,
    or that we are to stand by the President, right or wrong,
    is not only unpatriotic and servile, but is morally treasonable
    to the American public."--Theodore Roosevelt, May 7, 1918
    DHB, Jul 30, 2007
  8. You can start by reading papers like this one:

    QE Reduction due to Pixel Vignetting in CMOS Image
    Sensors, by Peter B. Catrysse et al.,

    "It is well known that the QE of CMOS photodiodes
    decreases with technology scaling due to the reduction
    in junction depths and the increase in doping concentrations."

    Google searches turn up other relevant reading.

    Roger N. Clark (change username to rnclark), Jul 30, 2007
  9. There is a more fundamental physics reason than seems to be implied in this
    discussion. Photons are a finite resource. In a given exposure,
    there are X photons/square micron delivered to the focal plane
    of any camera system. By the definition of ISO, X works out
    to be such that, for a properly metered scene, a 20% diffuse reflectance
    spot will deliver about 3200 photons per square micron to the
    focal plane at ISO 200, over the green passband, regardless
    of exposure, f-stop, focal length, or sensor size.

    So regardless of improvements in sensor technology, larger
    pixels will always collect more photons. And it is the
    total of the photons counted that determines signal-to-noise ratio
    and dynamic range in the best situation (photon-noise-limited systems).

    Roger N. Clark (change username to rnclark), Jul 30, 2007
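    The photon-budget argument above can be put into numbers with a short
    Python sketch. The ~3200 photons/square micron figure is from the post;
    the 2 micron and 8 micron pixel pitches are my own illustrative choices
    for a typical compact and a typical DSLR:

```python
import math

# ~3200 photons per square micron reach the focal plane for a 20%
# diffuse-reflectance target at ISO 200 (green passband), per the post.
PHOTONS_PER_UM2 = 3200

def shot_noise_snr(pixel_pitch_um):
    """Photon-noise-limited SNR for a square pixel of the given pitch.

    Ignores QE, fill factor, and read noise -- the best case the post
    describes (a photon-noise-limited system).
    """
    photons = PHOTONS_PER_UM2 * pixel_pitch_um ** 2
    return math.sqrt(photons)  # shot noise: SNR = N / sqrt(N) = sqrt(N)

print(round(shot_noise_snr(2.0)))  # small compact pixel: ~113
print(round(shot_noise_snr(8.0)))  # large DSLR pixel: ~453
```

    A 4x larger pitch means 16x the photons and thus 4x the best-case SNR,
    independent of any sensor technology improvements.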
  10. aniramca

    Paul Furman Guest

    Larger sensor areas (regardless of pixel count) will do pretty well.
    Pixels too small for their sensor size make no sense: because large
    silicon chips are expensive, there is no point in putting small
    pixels spaced apart on a large sensor.
    Paul Furman, Jul 30, 2007
  11. aniramca

    ASAAR Guest

    That's what I meant. Not that you were unfairly pro-DSLR. I'm
    sure that it was you and several others that used to speak of
    "focusing on the eyes" (of birds and/or animals), and that would
    have been quite impossible with any P&S I've used. Perhaps some of
    the P&S models I've never handled do better, but I wouldn't want to
    bet on it. For that matter, in addition to the accuracy, the AF
    speed is also much better with DSLRs, which when mentioned agitates
    the anti-DSLR trolls into a frenzy, and also results in the
    spontaneous generation of new sock puppets, as anyone with their
    eyes open must have noticed.
    ASAAR, Jul 30, 2007
  12. aniramca

    Akiralx Guest

    As you probably know it's the mirror slapping up which makes the noise
    rather than the shutter. The volume varies between SLRs - dpreview.com
    often has sound recordings of the noise in its reviews.

    As far as I know the new and expensive Canon 1D Mark III has a whisper mode
    but I'd check the specs to confirm that.
    Akiralx, Jul 30, 2007
  13. aniramca

    DHB Guest

    You're now well past my understanding of digital sensor
    & optical knowledge; however, as an E.T. I offer this analogy:

    If I recall correctly, right now the best "solar cells" have
    about 35% efficiency in converting light into electricity.

    Photo diodes or photo transistors used in digital camera
    sensors are not 100% efficient either &, in a similar fashion, any
    increase in their efficiency might constitute a considerable
    improvement in either dynamic range &/or usable ISO.

    The same goes for the efficiency of "all" of the associated
    electronics both on & off the photographic sensor itself.

    Consider LEDs. When they 1st came out, they were not very
    efficient & were initially limited to red. Now LEDs have become much
    more efficient. Keep in mind that "many" principles in electronics are
    reversible. For example, a motor can be turned into a generator & the
    reverse is also true. That being said, LEDs also work as narrow band
    optical "sensors" & I often use them as dual function devices in
    certain applications.

    For those that don't believe this, take any LED & a volt meter
    out into the light & see how well it works as a light sensor. Not
    nearly as efficient as a photo diode or transistor but it does work
    well enough to be useful in some applications. The point here is to
    graphically illustrate that if progress can be made with LEDs, it's
    proof that further progress in efficiency may yet be made with photo
    diodes & transistors too.

    Who knows what light sensitive device may yet be developed to
    take the place of a photo diode or transistor? So the sensors of
    future cameras may be very different from what we can now conceive
    with the known & proven technologies of today.

    Yes I realize that the laws of physics are unlikely to change
    but there is much about quantum physics that we have yet to understand
    & @ some future point, it may play an active role in digital camera
    sensor technology.

    Respectfully, DHB

    "To announce that there must be no criticism of the President,
    or that we are to stand by the President, right or wrong,
    is not only unpatriotic and servile, but is morally treasonable
    to the American public."--Theodore Roosevelt, May 7, 1918
    DHB, Jul 30, 2007
  14. aniramca

    Scott W Guest

    There are two things that make the 35% efficiency of solar cells not
    apply to a camera's sensor. One is that the 35% is for the whole
    spectrum of sunlight, going into the infrared and ultraviolet, neither
    of which you want to capture with a camera's sensor. The other is that
    for a camera sensor a captured photon is a captured photon and we don't
    need to worry about what voltage it produces when captured. In a solar
    cell the working voltage needs to be low enough to allow capturing long
    wavelengths, which means much of the energy of the shorter wavelengths
    is lost. In a camera we are not after energy, just electrons.

    For sensors we talk about quantum efficiency, how many electrons do we
    get per photon. Within the visible area of light and given the color
    filter in front of the CCD/CMOS sensors the quantum efficiency is
    currently not all that bad.

    There are games that can be played with changing the filters to improve
    things. Kodak (if I am remembering right) is working on a sensor with
    half the sensors not having any color filters in front of them at all,
    and from their test images it looks like this may have some advantages.

    Scott W, Jul 30, 2007
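    To illustrate how quantum efficiency ties into the photon-counting
    discussion, here is a small Python sketch. The 30% and 50% QE values
    are just the consumer-sensor range quoted elsewhere in the thread, and
    the 10,000-photon count is arbitrary:

```python
import math

def electrons_collected(photons, qe):
    """Mean photoelectrons for a given photon count and quantum efficiency."""
    return photons * qe

lo = electrons_collected(10_000, 0.30)  # ~3000 electrons
hi = electrons_collected(10_000, 0.50)  # ~5000 electrons

# Photon-limited SNR scales with sqrt(electrons), so going from
# 30% to 50% QE buys only a modest SNR gain:
gain_stops = math.log2(math.sqrt(hi) / math.sqrt(lo))
print(round(gain_stops, 2))  # ~0.37 stop
```

    Improving QE helps, but SNR only grows as the square root of the
    electrons collected, which is why even large QE gains yield fairly
    modest improvements.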
  15. aniramca

    Ron Hunter Guest

    Ahhh, yes, that quantum sensor, used to take pictures today of
    tomorrow's car race. Get your bets down!
    Ron Hunter, Jul 30, 2007
  16. aniramca

    Ron Hunter Guest

    I think that is about 1/4 of the sensors, such that each color triad has
    a sensor element that just reads the illumination level. By blending
    that into the Bayer sensor's output, much improvement should be
    possible. I look forward to seeing how well that works.
    Ron Hunter, Jul 30, 2007
  17. aniramca

    Alfred Molon Guest

    I framed a clock with an Olympus 8080 P&S. The clock is a quartz clock
    with a clearly identifiable sound ("tac .. tac .. tac"), in sync with
    the movement of the second hand.

    The sound of the second hand was perfectly in sync with the video in the
    LCD screen (i.e. the second hand moving). If there is a delay, it must
    be minimal, perhaps below 0.1 seconds.

    Then I repeated the test with a Sony R1, another camera with live
    preview. Again no noticeable delay, even the vibrations of the second
    hand were visible.

    Lastly I tried out the Olympus mju 700 tiny compact of my wife. Again no
    noticeable delay.
    Alfred Molon, Jul 30, 2007
  18. aniramca

    D-Rexter Guest

    You will find that on nearly *all* P&S cameras today the display delay is
    directly proportional to the shutter-speed selected. The EVF/LCD display in all
    P&S cameras correctly depicts the shutter-speed in use. For example, if you
    wanted to do a motion-blur photo of a waterfall, rapids, or trickling water,
    you could dial in a slow shutter-speed and your EVF/LCD would accurately depict
    the exact motion-blur effect that you want to achieve, the moving water in your
    EVF/LCD blurred by the same amount as will appear in your final image. People
    who report any delay in P&S camera displays are unaware of this advanced
    ability of EVF-capable cameras--the EVF *perfectly* matching the image that you
    will get at whatever shutter-speeds and f/stops you select. The DOF is also
    automatically relayed to the EVF without having to press any awkward DOF-preview
    buttons that dim the image so much that it is useless, as happens on all
    SLR and d-SLR designs.

    They automatically or intentionally twist this shutter-speed-matching advantage
    of EVF/LCD displays into their last-century thinking that this must be some
    defect. Or more commonly the only time they've ever held a P&S camera is indoors
    where a slow shutter-speed is automatically chosen for them. They mistakenly (or
    purposely) assume that the EVF perfectly matching the chosen shutter-speed must
    be a defective 1/5th or 1/10th second display-lag. They never see that the
    EVF/LCD will keep in perfect sync under normal shooting circumstances and
    average shutter-speeds. I suspect this is why Roger N. Clark doesn't know any
    better and can't find one that works to his liking. He's never had enough
    experience with them to realize he doesn't know one thing about P&S cameras nor
    even how to use them correctly. His mind is still stuck in last-century's SLR
    methodology and hardware with its inherent drawbacks and limitations. His mind
    just can't make the leap to present-day technology with its many imaging and
    preview advantages.

    If you choose a shutter speed as fast as or faster than your EVF/LCD refresh
    rate, which on most P&S cameras is 60 to 160 fps, the most lag you will ever
    get is 1/60th or 1/160th of a second. If people who have only used SLRs and
    d-SLRs think that they are going to miss a photo due to a 1/60th of a second
    lag then they have some serious psychological issues, blatant agendas, or don't
    know a thing about human response times. Their own nerve reactions can't even
    compensate for speeds that fast. Even their eyes alone can't respond that fast.
    This is why video is often delivered at 30 fps; it is beyond the ability of the
    human nervous system to see individual frames.

    Ergo: any "display lag" that they constantly perceive on the EVF/LCD displays of
    P&S cameras is directly proportional to their "experience lag", "brain lag" or
    "intelligence lag". More often it is a combination of all three types of lag.
    D-Rexter, Jul 30, 2007
  19. aniramca

    Scott W Guest

    I found the link to it; it looks like 1/2 of the sensors have no filter
    in front of them.
    It looks like a pretty good idea. We will have to wait and see just how
    well it works in real life. Will it be as good as a standard Bayer
    pattern in bright light, for example?

    Scott W, Jul 31, 2007
  20. It seems you have missed my point. The number of photons/square
    micron in the focal plane is independent of the sensor. Nor does it
    include the IR filter, Bayer filter, or blur filter. It is the
    number of photons delivered to the focal plane by a lens.
    It has nothing to do with quantum efficiency. The point is that
    the photon density is a finite number. You can't increase it
    (again this is the definition of proper exposure). You can't
    create additional photons. The simple fact that this number is
    finite means that a larger bucket (larger photo sensor) collects
    more photons than a smaller one.

    Side note: some CCDs have 90% quantum efficiency (QE). New CCD and CMOS
    consumer digital cameras run 30 to 50% QE. Even if QE increased,
    larger photo sites will collect more photons. Also, elsewhere in this
    thread I posted links to an article that illustrates QE decreases
    with smaller pixel size.

    Second side note:
    I love your Roosevelt quote--I saved it a couple of years ago.

    Further reading:

    Digital Cameras: Does Pixel Size Matter?
    Factors in Choosing a Digital Camera

    Digital Cameras: Does Pixel Size Matter?
    Part 2: Example Images using Different Pixel Sizes


    Roger N. Clark (change username to rnclark), Jul 31, 2007
