Light fall off on dSLRs - an experiment

Discussion in 'Digital SLR' started by Kennedy McEwen, Mar 16, 2006.

  1. Photons do not interact in this case; for example, all the photons
    flying around in a lit room do not interact with each other.
    The only thing that has changed in the experiment is diffraction
    effects, but for the wide apertures we have been discussing,
    that is not relevant either.

    What is relevant, and has been pointed out, is scattered light
    affecting the results. Kennedy needs to control for
    that in his experiment. Without any pictures
    or diagrams of the experiment, it is hard to evaluate.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 17, 2006
    #81

  2. Yes, I agree. And depending on the lens design, it could
    make the problem better or worse.
    Probably.

    Roger
     
    Roger N. Clark (change username to rnclark), Mar 17, 2006
    #82

  3. Kennedy McEwen

    ehhackney Guest

    "Don't be an idiot: the sensor doesn't know (let alone care) where the

    photons are coming from."

    With a focal plane using microlenses, this is not true. In the focal
    plane itself (behind the microlens array), the active (light detecting)
    area is only 25 - 35% of the total pixel area. (The rest of the area
    is taken up with gates and electronics to get the photo-electrons out
    of the focal plane.) The microlens acts like a light funnel, catching
    the light from most of the pixel area and focusing it onto the active
    area, thereby increasing the focal plane's sensitivity.
    Microlenses aren't particularly good lenses, but they do focus the
    light into a blob smaller than the active area. At high incidence
    angles, not all of the light from the microlens hits the active area
    and there is an intensity fall-off. (This is in addition to the normal
    cos^4 effect.)

    An effect of microlens focal planes that I have SEEN was an on-axis
    reduction in expected sensitivity at low f-numbers. What happened here
    was that for low f-numbers (high cone angles), the light after the
    microlens focused IN FRONT of the active area, and by the time it
    reached the active area it had expanded, and the blur spot was larger
    than the active area, reducing the light gathered.

    The same thing occurred, more or less, with both Sony and Kodak focal
    planes, which were the only candidates we were looking at at that time.
    Note that both Sony and Kodak consider the details of their microlens
    focal planes proprietary and were no help in tracking down this
    problem. A lot of the focal plane data was "backed out." That said, I
    came up with a realistic, simple geometric model that matched the
    experimental data we were getting very well. As I remember, the effect
    was noticeable at around f/2.5 and faster. Since I don't know the
    particulars of Canon's focal planes, YMMV.
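
    For what it's worth, here is a minimal sketch of the sort of simple
    geometric model described above - a reconstruction under stated
    assumptions, not the model actually used; the defocus distance and
    active-area diameter are made-up illustrative numbers:

        # Toy geometric model of microlens defocus loss.  Assumes the blur
        # spot and active area are circular and concentric; defocus_um and
        # active_diam_um are illustrative values, not real sensor data.
        import math

        def captured_fraction(f_number, defocus_um, active_diam_um):
            # Marginal-ray half angle of the f-number cone.
            half_angle = math.atan(1.0 / (2.0 * f_number))
            # Blur spot diameter where the cone reaches the active area.
            blur_diam = 2.0 * defocus_um * math.tan(half_angle)
            if blur_diam <= active_diam_um:
                return 1.0          # everything lands on the active area
            return (active_diam_um / blur_diam) ** 2  # ratio of disc areas

        for f in (1.4, 2.0, 2.5, 2.8, 4.0):
            print(f"f/{f}: {captured_fraction(f, 10.0, 4.0):.2f}")

    With these particular made-up numbers the loss happens to set in just
    below f/2.5, but that is an artefact of the chosen values, not a
    statement about any real focal plane.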

    BTW: Do you guys talk this way to people in person when you disagree?

    Hack
    --//--
     
    ehhackney, Mar 17, 2006
    #83
  4. The shadow area of the image had a histogram mean of 0.46, which
    suggests that it is mainly noise and not scattered light. But I should
    have included this information in the original post. The whole point of
    shooting this in a darkroom was to avoid scattered light - whilst the
    walls aren't black, they are far enough from the light source that
    reflections were not an issue.
     
    Kennedy McEwen, Mar 17, 2006
    #84
  5. I very much like the idea of testability; the basis of science and all
    that.

    However, your theoretical limitation on the possible angle is based on
    the light entering through the lens mount. Actual light coming out of
    an actual lens exits the rear of the lens well inside the lens mount,
    so that's not the real constraint on the real light angles on the
    sensor. It may well be that real lenses don't have light exiting the
    rear at sharper angles than is possible through the lens mount, but I
    can't prove that they couldn't.
     
    David Dyer-Bennet, Mar 17, 2006
    #85
  6. It was clearly greater than 5%, anyway. Much greater.

    It's a valid question, but I'm pretty sure the answer is yes. I'm
    reasonably confident that two sources would add as expected from the
    individual results.

    Internal reflections, obviously. Again, I don't think they account
    for a big part of vignetting, though.

    I had one point to raise myself that I think the test misses. I'm
    still interested in understanding what it does and doesn't prove.
     
    David Dyer-Bennet, Mar 17, 2006
    #86
  7. I see no reason for the restriction on exit angle that you're
    assuming. What's your argument?
     
    David Dyer-Bennet, Mar 17, 2006
    #87
  8. I kind of thought it would - I was quite surprised at the results myself
    and plan to try a more comprehensive test in the next few days. A
    couple of intermediate angles and a few rotational points - both
    directions horizontally and vertically, as well as diagonals. I think I
    should also average a few shots at each position to reduce any shutter
    speed variation, although this should be less of a problem at longer
    exposure times.

    I did think of that initially, but I happened to have the white LED and
    DC current source readily at hand from some previous work. It was just
    sitting there as I read the other threads on the topic and I thought
    "Hang on, this is essentially all that complex mechanised kit at work
    does, but I only need a few data points to see how significant it is."
    ;-)

    However, after thinking about using colour LEDs I came to the conclusion
    that if any angular response variation also had a colour component to it,
    then the shading it causes would also have a colour cast. From the
    images I have seen of the problem, it doesn't appear to be coloured.

    Agreed - and I will do so for the additional tests.
     
    Kennedy McEwen, Mar 17, 2006
    #88
  9. Well the rear lens element can't go very far inside the mount because
    the mirror gets in the way - that is why all these super wide angle SLR
    lenses are retrofocus designs.
    Let's say it can be 4mm inside the mount flange, but the bayonet itself
    is 5mm or so thick, with clearance for lens movement inside that, and
    then the mount for the rear element. Also, if you look inside the SLR
    mount you will see that it is obscured at the bottom and top by the
    support for the electrical contacts and the prism mount respectively.
    Pretty soon you get down to a practical rear element limit of about
    30mm maximum diameter, no closer than 40mm from the focal plane. Not
    much scope for it to be any worse than the test limits, though I could
    extend the shadow to the opposite edge of the frame with a couple of
    extra degrees rotation, to take the test conditions well beyond the
    limits of a practical SLR lens.
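
    As a quick sanity check of those numbers (the 30mm/40mm limit is from
    above; the 24x36mm frame half-diagonal is my own addition):

        # Incidence angles at the corner of a full-frame sensor, for a rear
        # element no larger than 30mm diameter and no closer than 40mm.
        import math

        element_radius = 15.0                    # mm, half the 30mm limit
        distance       = 40.0                    # mm to the focal plane
        half_diagonal  = math.hypot(12.0, 18.0)  # ~21.6mm, corner of 24x36mm

        chief   = math.degrees(math.atan(half_diagonal / distance))
        extreme = math.degrees(math.atan((half_diagonal + element_radius)
                                         / distance))
        print(f"chief ray at corner:    {chief:.1f} deg")   # ~28.4
        print(f"steepest marginal ray:  {extreme:.1f} deg") # ~42.5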
     
    Kennedy McEwen, Mar 17, 2006
    #89
  10. Use an old manual-diaphragm lens, and the bulb setting for the shutter
    (controlling the exposure with the lens cap, the power supply to the
    light, or something). (Yeah, finding an old manual-diaphragm lens
    that fits the camera and is an extreme enough wide-angle to be a good
    test may not be feasible.)
     
    David Dyer-Bennet, Mar 17, 2006
    #90
  11. He specifically refers to exposure time unevenness across the frame --
    which clearly *is* relevant since what we'll be measuring in the end
    is exposure differences across the frame.

    You're right about the aperture of course, since we'll be making
    comparisons only between spots in the *same* frame, not between
    frames.
     
    David Dyer-Bennet, Mar 17, 2006
    #91
  12. You've answered *a* question; the practical effects *with that lens*.
    And if there are differences you don't know what caused them (which is
    irrelevant to just using the camera, of course, but many of us are
    curious beyond that).

    This test was designed specifically to see if angle of incidence makes
    a difference -- testing one factor independently.

    It looks to me like there's *something* funny going on since the
    measured difference is a lot less than simple cosine law would require.
     
    David Dyer-Bennet, Mar 17, 2006
    #92
  13. Doesn't matter, doesn't matter, doesn't matter, and I wouldn't, I'd
    measure directly from the processed film.

    The measurements being made are of *relative* degree of exposure
    within a single frame. The film stock is pretty uniform across the
    area of a single frame, and the processing will be pretty uniform
    across the area of a single frame, and that's all that matters.
     
    David Dyer-Bennet, Mar 17, 2006
    #93
  14. Kennedy McEwen

    Paul Furman Guest


    Ah, that's pretty close to the 5%. I drew a Nikon mount and also looked
    at some lenses which coincidentally achieve about the same angles. It's
    kind of sketchy but gives the general idea:
    <http://www.edgehill.net/1/?SC=go.php&DIR=Misc/photography/lens-angles>
     
    Paul Furman, Mar 17, 2006
    #94
  15. As I have posted on many occasions in the past few months, I use my old
    OM 18mm f/3.5 prime lens on the Canon 5D with an adapter - I have
    *never* seen light fall off in the corners that is any worse than I
    experienced on the OM-1, 2, 3, 4, 4Ti cameras I used this lens on in the
    past. Others have related similar experiences. Consequently a
    film/digital comparison is likely to be dominated by media response
    curve variations and other parameters. That is why a direct measurement
    of the only cited cause of this alleged difference is more meaningful.
     
    Kennedy McEwen, Mar 17, 2006
    #95
  16. Kennedy McEwen

    eawckyegcy Guest

    I repeated McEwen's experiment last night with a Canon 1DMkII and a
    PrincetonTec "Impact" LED flashlight (it has a very clean pattern).
    The flashlight has a 20mm exit aperture, and was placed 7000mm from the
    camera -- about f/350 equivalent. The camera itself was mounted on an
    Acratech ball, and carefully placed (albeit by eye) so that rotating
    the head horizontally kept the sensor "in place", more or less. The
    Acratech has 5 degree angle markers etched on it (used here for the
    first time!). Observations below are the raw data from a square patch
    of 128x128 pixels at the centre of the frame, collecting frames at 5
    degree increments:

              ====== Bayer Channel ======
    AOI      G0       B       R      G1
    ---  ------  ------  ------  ------
    -25   484.6   510.2   164.1   479.3
    -20   801.6   861.6   295.8   799.5
    -15   928.0   964.0   359.7   926.9
    -10   992.6  1008.8   396.1   990.8
    -05  1025.4  1031.6   415.7  1023.5
    +00  1032.9  1036.9   420.4  1031.0
    +05  1023.4  1029.3   415.1  1021.5
    +10   998.7  1011.4   400.4   996.8
    +15   948.4   974.8   371.8   946.3
    +20   850.8   897.3   320.3   848.8
    +25   539.8   570.1   189.8   543.0

    (I tried to format this; apologies if it failed). There are errors in
    the AOI (made by eyeball, with attention paid to parallax effects), and
    do not be misled by the number of digits in the sensor values: the
    usual +-sqrt(N) shot-noise uncertainty applies.

    I tried to get "AOI 0" to be square to the source, but again, this is
    all eyeball and fingers stuff. It looks like I missed by a few
    degrees. I also tried to level the head and such; again, eyeball and
    fingers.

    Nevertheless, as is clearly evident -- and in stark contrast to
    McEwen's results -- there is a very strong angular function at work.
    Playing around with gnuplot, cos(aoi)^3 is a semi-reasonable fit,
    though that's just dicking around with math -- no physical process is
    proposed. There are also slight differences between the channels.
    Dispersion effects?
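
    For anyone who wants to repeat the gnuplot exercise, here is a rough
    equivalent (a sketch: least-squares amplitudes for a few exponents
    against the G0 column above; curve-fitting only, no physics implied):

        # Fit A in  signal = A * cos(aoi)^n  for several exponents n,
        # using the G0 column of the table above.
        import math

        aoi = [-25, -20, -15, -10, -5, 0, 5, 10, 15, 20, 25]
        g0  = [484.6, 801.6, 928.0, 992.6, 1025.4, 1032.9,
               1023.4, 998.7, 948.4, 850.8, 539.8]

        for n in (1, 2, 3, 4):
            c = [math.cos(math.radians(a)) ** n for a in aoi]
            amp = (sum(ci * gi for ci, gi in zip(c, g0))
                   / sum(ci * ci for ci in c))
            rms = (sum((amp * ci - gi) ** 2
                       for ci, gi in zip(c, g0)) / len(g0)) ** 0.5
            print(f"n={n}: A={amp:.0f}, rms residual={rms:.1f}")

    Whatever exponent is chosen, the +-25 degree points are clear outliers
    relative to any low power of cos(aoi).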

    Conducting this experiment revealed, however, that the above data is
    probably a bit higher than it would otherwise be: there is a major
    problem with glare. The first times I ran it, I noticed a strong
    left/right gradient within images in the AOI > 0 cases, and even some
    streaks of flare. Looking "up LED" from the camera, I noticed a number
    of forward scattering objects that may have contributed to this. I
    covered these with large black blankets (don't ask). However, there are
    still significant glare problems in the final images used to produce
    the above table -- which is why I selected the centre of the image, as
    it is probably the area least affected by this problem. My guess is
    that this glare is from the camera's internal baffling; it's probably
    not as black as it should be, and it certainly wasn't designed to deal
    with this kind of use of the camera.

    So I think the test should be done with a collimated source, the
    easiest to obtain probably being a laser pointer. Unless someone beats
    me to it, I'll re-do the test next week with such a source.

    But looking at the above table, any falloff beyond cos(aoi) is
    basically negligible within +-10 degrees, and only about 20% at +-20
    degrees. With a ruler, a room light and a piece of paper (a poor man's
    optical bench), I tried to locate the exit pupil of my 17-35/2.8 lens
    at 17mm ... from this crude estimate, it looks like a pixel at the
    corner of the sensor is still inside the 20 degree cone. The same for
    my 20/2.8 and a 50/1.4, though the cones are in more favourable
    positions, of course.

    Finally: DUST! This is indeed an excellent dust-detector! But there
    is more: as one swings the sensor around, the dust "moves" because it
    is offset from the pixels themselves. I found an easy-to-recognize
    blob and observed its position as the sensor swung. With a shift of 16
    pixels per 5 degree step near AOI 0, we can estimate that the sensor
    itself is covered by about 16/tan(5) = 183 pixel-lengths of glass (AA
    filter, etc). At 8.2um per pixel, this comes to 1.5mm.
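
    The arithmetic of that estimate, spelled out (ignoring refraction in
    the glass, as the original estimate does):

        # Dust parallax: a blob shifting ~16 pixels per 5 degrees of tilt
        # sits about 16/tan(5 deg) pixel-lengths above the pixels.
        import math

        shift_px, tilt_deg, pitch_um = 16, 5, 8.2
        height_px = shift_px / math.tan(math.radians(tilt_deg))  # ~183
        print(f"{height_px:.0f} pixel-lengths = "
              f"{height_px * pitch_um / 1000:.1f} mm of glass")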
     
    eawckyegcy, Mar 17, 2006
    #96
  17. Why would I want to use a coherent light source? None of the light
    reaching the sensor in normal use is coherent, and using one, such as
    a laser, would only introduce speckle noise. The correct light source
    for this type of measurement is an incoherent source.

    You probably mean, as others have suggested, that I did not use a
    collimated source - which is something completely different. However,
    to those who are concerned about such matters, calculate how collimated
    the beam from a 5mm LED at 80cm distance is. Here's a hint - it is more
    collimated than light from the sun, and more than adequate for this
    purpose.
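
    The hint, worked through (0.53 degrees is the usual mean figure for
    the solar diameter):

        # Angular subtense of a 5mm LED at 80cm, versus the sun.
        import math

        led_deg = math.degrees(math.atan(5.0 / 800.0))  # ~0.36 deg
        sun_deg = 0.53                                   # mean solar diameter
        print(f"LED: {led_deg:.2f} deg; sun: {sun_deg:.2f} deg")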
    Do you have, or have you ever seen, a document specifying this parameter
    for the Canon 5D sensor? I doubt it, but you are welcome to provide
    references to it here and now.

    Yes. That is why I design optical imaging sensor systems for a living.
    Do you?
     
    Kennedy McEwen, Mar 17, 2006
    #97
  18. Yes, you are correct - it was late last night when I responded. There
    *should* be a cos(theta) reduction *if* the pixels were flat, without
    any microlenses at all. The microlenses, however, change this
    characteristic, since they present almost the same collection area
    independent of the incident angle. Think of a sphere - it has the same
    diameter from whatever angle you look at it. Obviously, being spherical
    segments rather than full spheres, the lenses will change their
    effective collection area at extreme angles, but this may be well beyond
    the angles available from practical camera lenses. So the actual area
    reduction at each pixel really depends on how completely spherical the
    microlenses are, and also on how much they obscure each other at high
    incidence angles.

    These microlenses are actually made by heating etch resist material
    deposited on the pixel until it melts and reflows into a spherical
    segment. We make thermal imaging detectors with a similar process to
    create indium bump bonds at each pixel for connecting each cadmium
    mercury telluride alloy sensel to its CMOS circuit in the readout
    matrix. Depending on the thickness of the deposited material, it is
    possible to create almost perfect spheres with a small flat base of much
    less than the radius of the sphere itself. So creating almost perfect
    truncated spheres for microlenses is quite possible, and that would
    reduce the sensitivity to the incident light compared to a flat surface.

    Also, if the microlenses were perfect immersion lenses, then there would
    be no deflection of the light away from the sensitive area at the centre
    of each hemisphere, even at extreme angles of incidence, which is the
    conventional argument for the angular response. With perfect immersion
    lenses the only loss in signal would be the obscuration by adjacent
    microlenses.

    Interestingly, if you model the microlenses as spheres with a diameter
    equal to the pixel pitch, the obscuration of one sphere by its adjacent
    sphere at an incident angle of 30deg works out at approximately 5.77%.
    This may be what Paul Furman was talking about when he discussed the
    Autocad modelling. (Is it, Paul?)

    In exact terms, it is 1/3 - sqrt(3)/(2*pi) - I can go into the geometry
    of this if you like, but it is reasonably simple: just the fraction of
    a circle's area intersected by an identical circle separated by
    2r*cos(30).
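
    A numerical check of that closed form, using the standard two-circle
    intersection formula (radii normalised to 1):

        # Fractional area of a unit circle overlapped by an identical
        # circle whose centre is 2*cos(30 deg) away -- the projected
        # separation of adjacent pixel-pitch spheres at 30 deg incidence.
        import math

        d = 2.0 * math.cos(math.radians(30))   # = sqrt(3)
        lens = 2.0 * math.acos(d / 2.0) - (d / 2.0) * math.sqrt(4.0 - d * d)
        print(f"numeric:     {lens / math.pi:.4%}")               # ~5.77%
        print(f"closed form: {1/3 - math.sqrt(3) / (2*math.pi):.4%}")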

    Obviously the lenses will not be complete spheres but truncated ones;
    however, the result remains the same even if they are only marginally
    larger than hemispheres, provided the radius is maintained above the
    surface at the point where the incident ray is tangential to the
    sphere - which only requires the microlens to be marginally greater
    than a hemisphere. Finally, creating such a microlens geometry requires
    that the lenses themselves be smaller than the actual pixel pitch,
    reducing the percentage loss significantly for even small reductions
    in spherical radius.

    I think this explains the almost null result - in fact, at *less* than
    5.77% it provides a very close match to the actual observed signal
    reduction of 2.68%!

    No, I didn't, but I plan to run a more comprehensive test now, which
    will use a longer separation between the LED and the focal plane,
    effectively lengthening the exposure time and hence improving accuracy
    and repeatability. The LED supply was a stabilised current source, so
    there should be no variation in its output.
     
    Kennedy McEwen, Mar 17, 2006
    #98
  19. Kennedy McEwen

    Jeremy Nixon Guest

    I certainly haven't. If you're suggesting that this particular sensor is
    different from the others, okay -- maybe it is, and Canon is keeping quiet
    about it. Why wouldn't they trumpet it from the rooftops, though?
     
    Jeremy Nixon, Mar 17, 2006
    #99
  20. Kennedy McEwen

    Jeremy Nixon Guest

    Everyone who says anything about it says that angle of incidence
    matters significantly.

    So they performed the test, discovered that angle of incidence doesn't
    matter at all, and then embarked on a vast conspiracy to convince the
    world that it does?

    There aren't that many companies making commercial sensors like this, so
    I guess the conspiracy wouldn't have to be all that vast. But why? Why
    mislead everyone about this single point? Why are you the first person
    to reveal the horrible truth? Are there now assassins after you, to
    silence you before you expose this plot?
     
    Jeremy Nixon, Mar 17, 2006
    #100
