Stopped down focus

Discussion in 'Digital SLR' started by AaronW, Jun 23, 2006.

  1. AaronW

    AaronW Guest

    Can any camera auto focus with the lens stopped down? I know it would
    be dimmer, but suppose there is more than enough light. Suppose the
    lens stopped down is sharper than wide open, would that make auto focus
    more accurate and faster?
     
    AaronW, Jun 23, 2006
    #1

  2. Toby

    Toby Guest

    It really doesn't matter how sharp the lens is, as the AF only looks for the
    maximum sharpness. AF searching often happens because it is trying to focus
    in an area where there is very little edge contrast. And having more depth
    of field with a stopped down lens would, I imagine, make finding maximum
    edge contrast more difficult, not to mention that AF works best with maximum
    light.

    Toby
     
    Toby, Jun 23, 2006
    #2

  3. Sheldon

    Sheldon Guest

    When you stop the lens down you increase depth of field. So, the fact that
    more of your photo is in focus at a smaller aperture would mean that an auto
    focus mechanism would see more in focus, therefore making it less accurate.
    Also, as the previous poster says, there is less light to focus with. When the photo
    is actually taken the lens will (should) automatically go to the proper
    aperture, or if you are using aperture priority it will go to whatever you
    have it set at while the shutter speed adjusts for the proper exposure.
     
    Sheldon, Jun 24, 2006
    #3
  4. ben brugman

    ben brugman Guest

    There are several ways to do autofocus in an
    SLR.

    1.
    Using different light paths in the lens: one light path
    goes through the left of the lens, one goes through the
    right of the lens.
    Stopping down would not influence the focusing, up to
    the point where the light paths are blocked, making focusing
    impossible.
    (There are cameras with two sets of light paths, one for
    large apertures and one for small apertures. The large set
    is better because of the larger angle between the two
    light paths, so stopping down would result in slower, less
    accurate focus.)

    2.
    Some focusing systems work with sensors placed at
    different depths and maximise contrast differences on
    the sensor. Using a smaller aperture would let less
    light reach the sensor, making it slower, and would make
    the contrast difference change less quickly, and therefore
    make focusing less accurate.

    3.
    Some systems hunt for the focus (probably not used
    in SLR systems). This hunting would be hindered if
    less light were let into the system.

    The only advantage of stopped-down AF would be that
    the focus shift caused by using different apertures would
    be counteracted. But focus shift is far smaller than the
    depth of field, so this shift is hardly a problem.

    So stopped-down AF systems are not preferable.
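    The contrast-seeking behaviour described in (2) and (3) can be sketched as a hill climb. This is a toy model, not any camera's actual firmware; the `contrast` function and all its parameters are invented for illustration:

```python
# Sketch (not any camera's real firmware) of contrast-detection AF:
# step the lens, measure a contrast score, and stop at the maximum.
# The "scene" here is a hypothetical blur model: contrast falls off
# with distance from the true focus position.

def contrast(lens_pos, true_focus=5.0, light=1.0):
    """Hypothetical contrast metric: peaks at the true focus position.

    `light` scales the signal; less light (smaller aperture) means a
    weaker peak and thus slower, less certain focusing.
    """
    return light / (1.0 + (lens_pos - true_focus) ** 2)

def hill_climb(start=0.0, step=0.5, light=1.0):
    """Hunt for maximum contrast by stepping while the score improves."""
    pos = start
    score = contrast(pos, light=light)
    while True:
        nxt = contrast(pos + step, light=light)
        if nxt <= score:          # passed the peak: reverse, halve the step
            step = -step / 2
            if abs(step) < 0.01:  # step small enough: call it focused
                return pos
            continue
        pos, score = pos + step, nxt

print(round(hill_climb(), 2))  # lands on the true focus at 5.0
```

    Note how the hunt needs a clearly changing score to know when to stop: a flatter contrast curve (more depth of field, less light) means more back-and-forth before the step shrinks below the threshold.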

    ben
     
    ben brugman, Jun 24, 2006
    #4
  5. Alan Browne

    Alan Browne Guest

    Were it possible then you would have less control over the placement of
    the beginning and end of the apparent in-focus area (the DOF) as the
    contrast would be inside the hysteresis of the focus phase sensor over a
    greater depth of focus.

    Put a little more simply, from shot to shot of the same subject, the
    plane of sharpest focus would not be at the same distance.

    Don't forget that AF is always less accurate than careful manual focus
    with an MF lens, due to the measurement-system tolerances in the AF lens.

    Further, as others have said, the AF would have additional trouble due
    to the reduced light level that the AF depends on.

    Cheers,
    Alan.
     
    Alan Browne, Jun 24, 2006
    #5
  6. DoN. Nichols

    DoN. Nichols Guest

    [ ... ]

    [ ... ]
    The problem may be with a lens on extension tubes or a bellows.
    Note that the focusing aids in viewfinders often consist of tiny prisms
    to route to the eye information from extreme angles (out near the edge
    of the diaphragm), to get a maximum baseline for detecting the
    difference between two adjacent image parts. When you stop down to
    an aperture below that angle, the prisms black out. And when close
    to the limit, your pupil position becomes quite critical, with one
    half or the other blacking out unless your eye is precisely centered.

    I'm not sure exactly how the autofocus mechanism is designed in
    these cameras, but I could certainly imagine it utilizing a similar
    mechanism with pattern recognition algorithms to detect breaks in lines.

    Back in the days of the Nikon F, focusing screens with prisms
    (both the split-image in the center, and the ones with a grid of tiny
    prisms surrounding the center) had several versions -- typically four,
    to allow you to select ones appropriate for the maximum aperture of the
    lens which you were currently using. The only ones which would work
    with all apertures were the plain ground glass, and the clear center
    spot with a fine crosshair. On that one, you needed the camera on a
    stable mount (tripod or equivalent), and you moved your eye from side to
    side, looking for relative motion between the crosshair and the part
    of the subject on which you were attempting to focus (parallax
    focusing). That one worked very well, if you had time to use it and a
    stationary subject and camera.

    So -- it *may* be that the autofocus will not work below a
    certain minimum aperture even with plenty of light. I believe that I
    remember reading a warning about this in my manual.

    Enjoy,
    DoN.
     
    DoN. Nichols, Jun 25, 2006
    #6
  7. ColinD

    ColinD Guest

    Some cameras use ultrasound or IR for focusing - rangefinding would be
    more accurate since they work like radar: time out and back equates to
    distance.

    Cameras which use sensors behind the lens require a
    reasonably wide aperture to work. Light levels aren't the problem; it's
    to do with the f/stop. Canon, and I think Nikon, need a maximum
    aperture of at least f/5.6 for focusing to work. Some lenses, when used
    with a 2x extender, won't focus because the effective maximum aperture
    is smaller than f/5.6, typically f/8 or so. All Canon lenses have a
    maximum aperture of at least f/5.6 at any point in the zoom range; but
    some third-party lenses open only to f/6.3 at the long end, and they
    won't focus properly.

    Colin D.
     
    ColinD, Jun 25, 2006
    #7
  8. Alan Browne

    Alan Browne Guest

    Ultrasound yes. IR no. You would need sub nanosecond timing for
    accurate enough focus using IR. (speed of light is about 1 foot per
    nanosecond).

    IR assist shines a pattern on the subject to get contrast lines for the
    AF to focus on. AF assist is sometimes body mounted and more often
    accessory flash mounted.
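    The "1 foot per nanosecond" point can be made concrete with a quick back-of-the-envelope calculation of the timing resolution a round-trip light measurement would demand:

```python
# Back-of-the-envelope check of the "1 foot per nanosecond" point:
# how fine must the timing be to resolve focus-relevant distances
# with a round-trip light measurement?

C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    """Time for light to reach a subject and return."""
    return 2.0 * distance_m / C

# A subject at 1 m vs one at 1.1 m: the timing difference the
# electronics would have to resolve.
dt = round_trip_time(1.1) - round_trip_time(1.0)
print(f"{round_trip_time(1.0) * 1e9:.2f} ns round trip at 1 m")
print(f"{dt * 1e9:.3f} ns difference for a 10 cm depth change")
```

    Sub-nanosecond discrimination is exactly the regime Alan is pointing at: distinguishing 1.0 m from 1.1 m means resolving about two thirds of a nanosecond.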

    Cheers,
    Alan
     
    Alan Browne, Jun 25, 2006
    #8
  9. ColinD

    ColinD Guest

    Yep, thanks.

    Colin D.
     
    ColinD, Jun 26, 2006
    #9
  10. Ben Brugman

    Ben Brugman Guest

    IR has been used very extensively in point-and-shoot cameras
    to determine the distance. This was done with an IR beam and
    a sensor.

    System 1 (oldest):
    The IR beam made a sweep and the sensor picked it up.
    Only a primitive sensor is needed for this.
    The sweep of the IR beam was linked to the focusing
    mechanism.

    System 2:
    A fixed IR beam, with a multiple-sensor setup.

    The beam and the sensor were placed some distance
    apart on the camera. The most-used positions were left
    and right above the lens.

    I think the first Pentax SLR system with AF worked this
    way. The system was integrated in the lens and not
    in the camera.

    ben
     
    Ben Brugman, Jun 26, 2006
    #10
  11. Alan Browne

    Alan Browne Guest

    Ben, Why did you snip the part where I said:

    "IR assist shines a pattern on the subject to get contrast lines for the
    AF to focus on. AF assist is sometimes body mounted and more often
    accessory flash mounted. "

    That is what IR AF assist does. It does NOT do ranging as Colin first
    implied (accidently or otherwise). That's what I was clearing up.
     
    Alan Browne, Jun 30, 2006
    #11
  12. Actually, you could also measure distance with IR easily:

    1) two sensors, one emits, the other detects; given the reception
    angle, you know the distance

    2) make the incoming reflected beam interfere with one coming directly
    from the IR source; you could then deduce the elapsed time, hence the
    distance (however, only modulo one wavelength, i.e. of the order of
    600 nm; that is, 10 m and 10 m plus any integer multiple of 600 nm
    would look the same to this system; so please ignore this method...).

    I have no clue if method 1 is actually used, though I suspect it's what
    was used in 35mm compacts; never having owned a film compact, however,
    I don't know.
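    Method 1 is plain triangulation, and the geometry is simple enough to sketch. The baseline and distances below are invented for illustration, not taken from any real camera:

```python
# A minimal sketch of method (1), triangulation: the emitter fires
# straight ahead, a sensor a baseline `b` away sees the reflected
# spot at an angle, and simple geometry gives the distance.

import math

def distance_from_angle(baseline_m, angle_rad):
    """Subject distance from the reception angle (measured from the
    sensor's straight-ahead direction): tan(angle) = baseline / distance."""
    return baseline_m / math.tan(angle_rad)

# A 4 cm baseline and a subject at 2 m:
b = 0.04
true_d = 2.0
angle = math.atan2(b, true_d)                   # angle the sensor would see
print(round(distance_from_angle(b, angle), 3))  # recovers 2.0 m
```

    Note the weakness of the method: the angle shrinks as 1/distance, so with a few centimetres of baseline the angular resolution needed for far subjects gets very fine, which is why such systems only quote short maximum ranges.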
     
    achilleaslazarides, Jun 30, 2006
    #12
  13. Ben Brugman has also written above that triangulation was indeed used
    with old compacts.
     
    achilleaslazarides, Jun 30, 2006
    #13
  14. I don't see the problem.

    Computers do a couple of GHz nowadays. So you can get a stable
    clock rate at these frequencies without trouble.

    1 Hz => cycle 1s
    1 kHz => cycle 1 millisecond
    1 MHz => cycle 1 microsecond
    1 GHz => cycle 1 nanosecond

    No problem even for consumer electronics.

    Even waaay back (1990 or earlier) my Dad had a video camera that
    would measure distances by IR light. No, no patterns there, and
    it didn't focus correctly when filming through a glass window.
    (OK, focussing requirements are lower for moving images on small
    chips with low resolution.)

    -Wolfgang
     
    Wolfgang Weisselberg, Jun 30, 2006
    #14
  15. But the return trip for an object 1m away would be less than 7ns, ie 7
    cycles. I suppose it is possible to emit a signal and measure the
    number of cycles until something is received, but I imagine that 15
    years ago this was not feasible. Also, I suppose you'd need to do some
    processing to work out if the thing you received really is the signal
    you wanted, and I don't know if this is possible in such short times
    (since you must be ready to restart if it's the wrong signal). And did
    compact cameras in 1990 have 1GHz oscillators in them? Again, I don't
    know; it seems hard to believe, but somehow they autofocused.

    Well, as I said, I have no clue, so please correct me if I am wrong.
     
    achilleaslazarides, Jun 30, 2006
    #15
  16. Sorry, having seen that with consumer VHS video cameras back
    then, I disagree.
    Send many signals and average over time (the focus motor has
    a finite speed, thus doing averaging itself), and use either a
    specific frequency or a specific pattern (patterns are no problem
    either; how do you think IR remote controls work? And they have
    been around a _long_ time).
    No problem. Throw away signals detected as spurious, continue
    sending pulses as before.
    Oscillators are not _that_ new an invention.
    Hey, the Mercury capsule worked. Without any computers.

    -Wolfgang
     
    Wolfgang Weisselberg, Jul 3, 2006
    #16
  17. But did they measure distance by measuring the time between emission
    and reception? I don't know, I am asking.
    What do you mean? If your distance measuring is done by detecting the
    time elapsed between emission and reception, then you have to emit,
    wait until you receive the signal, and then divide that time by the
    speed of light to get the distance, and only then activate the focus
    motor to focus properly. Furthermore, changing the focus has absolutely
    no effect on the signal received by the IR sensor; so I fail to see how
    the speed of the motor enters into the discussion. Basically, I don't
    understand what you're saying.

    As for IR remotes, I don't know how quickly the TV (say) reacts, but I
    suppose there is some time lapse between reception and reaction, since
    you have to react differently according to the signal so some
    processing is needed.

    What I was trying to say is this: You emit a signal, start a stopwatch
    and wait; when its reflection is detected, you stop the stopwatch, and
    may calculate the distance. However, to do that, you have to make sure
    that the received signal is the reflection of whatever you emitted, so
    you need to do some processing to check. If you process and find that
    it is, then all is fine; but if it's not, then you have to wait some
    more. The problem is that while you're processing the spurious signal,
    the real one might have arrived. What do you do? I don't know if you
    can process a signal while managing a queue of incoming candidates so
    quickly (of the order of nanoseconds).
    No, but I was questioning whether 1GHz oscillators were built into VHS
    cameras. As I said, I don't know, it's a question (although it sounds
    too high-frequency, to be honest)

    I think I am misunderstanding you. Could you please specify whether you
    are talking about a system that measures the elapsed time between
    emission and reception (as opposed to the angle of reception, a
    completely different method)? If so, could you explain how the speed of
    the focus motor has anything to do with the feasibility of doing this?

    Thanks.
     
    achilleaslazarides, Jul 4, 2006
    #17
  18. J. Clarke

    J. Clarke Guest

    Oscillators aren't, but inexpensive electronic devices that can operate at
    gigahertz frequencies are.

    If you know of any cameras that use the timing of a reflected light signal
    for distance measurement please name them.
     
    J. Clarke, Jul 4, 2006
    #18
  19. Yes. To the best of my knowledge and memory. They would be
    mis-focussing on windows, they were not hunting, the light
    was not visible.
    The maximum stated range was 10 meters. So the maximum time to
    wait would be 20m / c ~= 6.7 * 10^-8 seconds. If you allowed just
    1/1000 s before activating the focus, you'd be able to do more than
    10,000 measurements. (Not that you'd want to, it'd drain
    the NiCd battery (yep, that old), but you can cram in a few ...)
    The focus motor was not _very_ fast. So what would happen if
    you had the following pattern of distance results (probably
    a few dozen per second):

    4m focus to 4m
    4m still focussing
    4m still focussing
    ....
    .... arrived at focus
    ....
    4m
    4m still at focus
    0.3m start focussing motor, now at 4.0m
    0.3m still focussing (now at 3.95m)
    4m reverse focus motor (now at 3.85m) (inertial mass!)
    4m still reversing (now at 3.9m)
    4m arrived at focus
    ....


    The fact that the focus motor is slow DOES mean you average
    out spurious signals. Think about this sequence. The whole
    sequence probably takes half a second or less:

    4.0m at 4.0
    4.1m at 4.0 starting for 4.1
    3.9m at 4.05 starting for 3.9
    4.0m at 4.0 stop motor
    3.9m at 4.0 starting for 3.9
    4.1m at 3.95 starting for 4.1
    4.1m at 4.0
    4.0m at 4.1 starting for 4.0

    You see, even with a fairly fast motor and slow updates, the
    speed of the motor evens out a lot.
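    The motor-as-averager argument above can be sketched directly: if the lens position chases each reading at a limited speed, a single bogus reading barely moves it off target. The speed limit and update numbers here are made up for illustration:

```python
# A rough sketch of the "slow motor averages out spurious readings"
# argument: the lens position chases each reading at a limited speed,
# so one bogus 0.3 m reading barely moves it off the 4 m target.

def track(readings, start=4.0, max_step=0.05):
    """Move toward each reading, but at most `max_step` metres per update."""
    pos = start
    trace = []
    for target in readings:
        delta = target - pos
        pos += max(-max_step, min(max_step, delta))  # speed-limited move
        trace.append(round(pos, 2))
    return trace

# One spurious 0.3 m reading in a stream of 4 m readings:
print(track([4.0, 4.0, 0.3, 4.0, 4.0, 4.0]))
# [4.0, 4.0, 3.95, 4.0, 4.0, 4.0]
```

    The lens dips only to 3.95 m and recovers on the next good reading, matching the "now at 3.95m ... reverse focus motor" sequence in the post.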
    First patent for remote control: 1898, Nikola Tesla, US Patent 613,809.
    First remote-controlled model airplane: 1932.
    First remote-controlled SAM missile, "Wasserfall", WWII.
    First wireless TV remote control, "Flashmatic", 1955 (visible light).
    First ultrasound TV remote control, "Zenith Space Command",
    1956. (4 buttons, 4 frequencies, no batteries needed!, and
    6 extra tubes in the TV.)
    Many-button remotes: prototypes in 1977-78.
    IR remote controls from the early 1980s.
    Learning remote controls from the mid 1980s.

    Remember that the whole of unmanned space exploration (and a
    good part of the manned kind as well) is done by remote control.
    Of course some lapse occurs because you have to decode the
    whole signal, and probably wait for more pulses to come, but
    the pauses are not that long.
    Exactly. All you need is a 'counter' which can cope with a GHz
    pulse and some start, stop and readout electronics. If you want
    to do that digitally.

    You can probably get by with discharging a capacitor over a resistor
    while the light is going there and back again, and measuring the
    remaining voltage. And since the voltage drops fastest at first,
    you get increased accuracy with close targets. (Accumulate
    by discharging over multiple bounce cycles.)
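    The capacitor idea amounts to using the exponential discharge curve V(t) = V0 * exp(-t/RC) as an analog stopwatch. A sketch under invented component values (the 5 V supply and the 100 ns RC constant are assumptions, not from any real design):

```python
# Sketch of the analog variant: discharge a capacitor through a
# resistor while the pulse is in flight, then read the remaining
# voltage. V(t) = V0 * exp(-t / RC); component values are invented.

import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def residual_voltage(distance_m, v0=5.0, rc=1e-7):
    """Voltage left after discharging for the round-trip flight time."""
    t = 2.0 * distance_m / C_LIGHT
    return v0 * math.exp(-t / rc)

def distance_from_voltage(v, v0=5.0, rc=1e-7):
    """Invert the discharge curve to recover distance."""
    t = -rc * math.log(v / v0)
    return t * C_LIGHT / 2.0

v = residual_voltage(3.0)                 # subject at 3 m
print(round(distance_from_voltage(v), 3))  # recovers 3.0 m
```

    The steep initial slope of the exponential is what the post means by "increased accuracy with close targets": the same voltage-reading error corresponds to a smaller distance error early in the discharge.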
    Nope. It would be nice if you had. You can average over 10 or
    100 or so measurements. You can use a well-defined frequency.
    And you can say 'I don't care, it's consumer electronics anyway'.
    We are talking about times like .000000067 seconds per measurement
    for the light to travel. OK, take 10,000 times that time to
    handle the stuff. We are still talking about .00067 seconds. Negligible.
    About 1/1500 s lag time, even with generous handling time.
    You switch off or ignore the receiver once you get the first
    signal. After all, you _will_ often get scatter from the
    background, after the target has reflected.
    You are thinking digital. Try thinking analog.
    Elapsed time.
    As I said, if you get a few spurious data (we are talking about
    that VIDEOcamera), the focus speed can average them out.

    For a photo camera you'd probably average measurements electrically.

    -Wolfgang
     
    Wolfgang Weisselberg, Jul 5, 2006
    #19
  20. That proves only that it uses IR; the ranging could still be by
    triangulation, not by measuring the time for the signal to come back.
    But my point wasn't that the time interval is too long; it's that it's
    too short.
    OK I see what you mean: you measure many times, the errors have little
    effect. Well I don't think this is what cameras did/do. Feel free to
    disagree.
    What do these dates have to do with anything? And "not that long"?
    We're talking about reactions that must occur in nanoseconds.
    All you need is a gigahertz oscillator? OK, I never paid any attention
    to circuits etc, but it doesn't sound trivial to me. OK, maybe I am
    wrong, but I'd be rather surprised if it's easy to build an accurate
    gigahertz oscillator with cheap components. As I said, I may be wrong.
    Did you bother to read what I wrote? I am arguing exactly the opposite
    of what you're answering to.
    And what if the signal is spurious? You measure many times, would
    probably be your answer. OK. I don't think anything works this way.

    OK, I can see this will become long and tedious. I give up. Frankly, I
    don't know how it works, so maybe you're right. I don't think so, but
    maybe you are.
     
    achilleaslazarides, Jul 5, 2006
    #20
