Sony tells DSLR shooters they're idiots

Discussion in 'Digital Cameras' started by Alfred Molon, Nov 24, 2012.

  1.
    We have it on record that *you* think your eyes are reliable
    colour measurement instruments --- even when used in different
    circumstances and without being able to see both scenes at the
    same time --- whereas *I* claim they can easily be tricked /and/
    have given you enough pointers to research that on your own.

    Then we have it on record that you believe knowing that the eye
    can be tricked equals "you don't know what you are looking at".

    Maybe you /can/ bring proof that *your* eyes are good, absolute
    colourimeters, but I wouldn't bet a cent to an Earldom on it.


    Oh, BTW, let me congratulate you on embarking on your quest
    to ridicule me since I don't agree with you. But look up what
    Gandhi said about that.

    -Wolfgang
     
    Wolfgang Weisselberg, Dec 20, 2012

  2. Alfred Molon

    Eric Stevens Guest

    You are reading too much into what I wrote, unless you believe white
    walls are never found in the presence of snow.
    Well, doesn't it?
    This is not just a discussion of colour.
    Tsk, tsk. You are too sensitive. Maybe you have seen too many Santa
    Clauses in the mall?
     
    Eric Stevens, Dec 20, 2012

  3. Alfred Molon

    Eric Stevens Guest

    Yes, that sometimes happens, in which case I delete the D-Lighting. On
    other occasions it makes a noticeable difference to the image. It's
    very useful when it comes to helping sort out difficult exposure
    situations.
     
    Eric Stevens, Dec 20, 2012
  4. It was a generalisation based on the fact that in my sequence of
    camera purchases autoexposure has improved as the number of sensors
    has increased, and seeing others report similar experiences. Did the
    people you heard this from have personal experience, or were they
    reporting what they had heard?

    [snip]
    It *need* not, but in cases where the autoexposure is biased
    towards the in-focus areas it inevitably will be, as a natural
    consequence of the autofocus following the faces.
     
    Chris Malcolm, Dec 22, 2012
  5. They usually show you what you'll get. In certain circumstances they
    don't. You're right that learning what those circumstances are, and if
    an inquisitive person, why, is a good and useful idea. A brief glimpse
    of the taken shot flashed up on the LCD or EVF will immediately show
    you the difference between preview prediction and result.
    Of course. It would be nice if our cameras' auto functions were all
    infallibly perfect, but none of them are, and it's part of any
    inquisitive and careful photographer's work to find out when they
    can't be trusted.
    No in the sense that both photographer and camera are in taking still
    photographs mode. Yes in the sense that some of what the camera has to
    do to offer this kind of preview mode is similar to some aspects of
    video.
    True if "they" are lazy or learning impaired. Not true if they're
    inquisitive and capable. You seem to be criticising these kinds of
    camera features on the grounds that they permit morons to use cameras
    without understanding anything about photography. Why is that a
    problem? And why should that be of any interest to those of us who
    want to learn how our cameras work?
    So what? The preview mode can equally well be used as an aid to
    intelligent experiment by the curious.
    It takes a little skill to judge it in preview mode, true. I found it
    quicker to use it to home in on the right kind of shutter speed and then
    use chimping for the final refinement (if there was time) than use
    chimping all the way. That's arguably because I'm not a sports shooter
    and shoot this kind of ice rink shot about once every two years. It's
    also possible that all kinds of variable eye and brain physiology
    comes into this and that some people will find the preview facilities
    far more annoying than useful.
    Obviously for your kind of shooting that might be true. All I can say
    is that in my kind of shooting the crucial difference is almost always
    shown in the viewfinder. It only fails (for me) in unusual and
    predictable circumstances. What's more it's very easy to see if it's
    failed by setting the camera to replace the preview with a brief
    glimpse of the postview. That quick visual flick between pre and post
    makes even small differences stand out.
    A 10x loupe?? Why on earth would you ever need a loupe on a camera
    which can easily magnify any part of the image, including the preview
    image, up to pixel level? All a 10x loupe would show you is the pixels
    of the display, which is very far below the resolution of the image
    sensor.
    If you've got it. Which I haven't. My point is that I'm finding it a
    more rapid way of gaining that experience, while considerably
    improving the number of good shots I happen to take while doing the
    learning.
    As I've explained you don't need a loupe in a camera which can do the
    image magnification faster and better than any loupe.
    Which you don't need, whereas the camera has in effect a built in
    zoomable loupe. It's harder to follow action with that than simply
    looking at the straight image through the viewfinder, but it's
    easier to follow the action with it than with a 10x loupe.
    Apparently not, because I seem to have the first training mentioned
    above without having the last mentioned above. I do know about the
    reciprocal of the focal length for shutter speed, that you have to
    adapt that to digital sensor size and resolution, add in the image
    stabilising factor when appropriate, adapt it to the holding method
    employed (e.g. elbows on wall, monopod, tripod), factor in wind,
    factor in unusual rotational inertias (e.g. long reflective vs
    refractive lens), etc etc.

    In other words predicting steady hand holding speed in advance is an
    educated guess which often needs verification and adjustment in
    practice.
    Many action shooters employ the both eyes open method for following
    action. Lets them keep an eye on what's happening outside the scope of
    the viewfinder as well what's in it. The same technique can be used
    when panning to follow action while using the preview facility which,
    when it includes a slow shutter speed, produces a jerky lagging
    display. It does take practice, but it's possible and at least in my
    case useful.
    That is indeed much easier, but sometimes there isn't time to do that.
    I find it also helps in my case where I have quite a good idea, having
    learned my photography back in the old days before there was even
    autoexposure let alone autofocus.
    Perfectly true. These are all things you have to learn for each new
    camera, just as in the old days you had to learn about different films
    and developing techniques.
    Your arguments are much too black and white. That not all the
    knowledge carries over doesn't mean that none of it does: not all
    of it transfers, but a useful amount does.
    Because they're elementary and specifically designed to be camera
    independent. In fact angle of view is more independent and useful than
    "equivalent focal length" which IMHO is a silly fudge of an incomplete
    generalisation.
    No, you need to use your camera for that kind of shot more than two times
    a year. I use my camera more than once a week, but I only try that
    specific ice rink problem about once every other year. Two years ago I
    spent about fifteen minutes on it, during which time I learnt a
    lot. This year I spent only a few minutes on it. I was stopped by a
    security guard who was worried that I might have a perverted interest
    in photographing child skaters or be planning a terrorist attack.
    No idea what that argument means. I'm wondering whether you have any
    experience of what you're criticising. You're beginning to sound to me
    like someone arguing that zoom lenses lead to obesity, atrophy of the
    legs, and loss of the manual dexterity and balance required to change
    prime lenses while standing on a windswept rock in a river.
    It does you credit that you're worried about the educational state of
    lazy or stupid photographers and would rather the market insisted on
    supplying them with cameras they couldn't work without a proper
    scientific understanding of camera technology. Unfortunately the
    market is based on consumer choice. On the other hand it doesn't worry
    me at all that some of the features I like in my new camera could be
    abused by the plebs to take better photographs than their moral and
    educational state deserves.

    [snip]
    Because that learning process is less fun, and I'm easily bored. Plus
    preview lets me get a lot more fairly good shots while I'm doing the
    learning. Helps my motivation. I also suspect that learning which is
    more fun works faster and better. But I'm willing to accept that may
    be a personal idiosyncrasy.
     
    Chris Malcolm, Dec 23, 2012
  6. So you need the same time to assess the correctness of the
    preview mode as you need to check any other parameters.

    Unfortunately, aperture and exposure time are well behaved,
    but preview prediction is much less so. Thus you need to
    check the results more often and for a much longer time.
    Incorrect. There can be an inquisitive and careful photographer
    who stays with full manual for his work.

    Sure, the camera needs to read the sensor and display the
    image in real-time --- but an optical view finder does the same.

    A rare minority. Otherwise we would not need schools.
    You seem to be misunderstanding me. Morons use an
    all-auto-everything mode, not preview mode, not even scene
    modes.
    See above.

    So they are addicted. And helpless without their crutch.
    True.
    Psychoactive drugs can equally well be used as an aid to
    therapy, too.

    Unfortunately in the case of drugs, this turns out to be
    unlikely in most cases of drug use. I fear it'll be the same
    with preview mode: only a very few will use it "as an aid to
    intelligent experiment".

    I'm curious. My first guess would be 1/500 or 1/1000 on a
    moderate tele to freeze action. What was your first guess
    after preview mode and your final shutter speed?
    I would find it plainly impossible to see the difference between
    tack sharp and mostly sharp in preview. It would only tell
    me major blur or no blur --- not only because the EVF doesn't
    have that much resolution, but because I can't look that fast.

    Same with DOF, but I can get an idea of DOF by stopping down
    the lens.

    I can't really judge sharpness without magnification, much
    less in a viewfinder whose pixel count is only a third of
    its dot count. If you can, then your display size won't be
    much larger than the pixel count in the viewfinder. Just as one example. Tack sharp
    or slightly blurred (but visible in high resolution 20x30cm
    prints) is another.
    And doesn't show the stuff you can't see due to shortness of
    time (given a non-static subject) and low resolution.

    That isn't called a loupe?
    That's part of the allure. You need no experience! Just change
    the settings until it looks (sorta) OK. You need not learn
    a thing. And thus you gain the ability to sort-of get what
    you want, without needing to note the numbers for exposure
    time and aperture.

    If you don't want more than sorta-WYSIWYG, preview mode only,
    then that's perfectly fine with me, use preview mode, stay
    with preview mode.

    But you cannot transfer what you learn there to other modes,
    not without deliberately and consciously working on it.
    In other modes you can't get away ignoring the numbers and
    their meanings --- you learn by default.

    See above.
    maybe even a 1x, 5x, 10x loupe? :)
    Yep, try that sometime. Take a flock of birds flying
    overhead, track one bird with a long lens and see in the
    viewfinder if his eye is tack sharp.

    Or try a skater. Try to frame your shot so that just his head
    is on it (you're going for the facial expression) --- and his
    eyes shall be sharp. Assume he's not just doing repeated,
    predictable circles or ovals ... see if you can zoom in,
    track his face, check the sharpness ... all in preview mode
    with varying distances to the camera.
    Ah, you use static subjects and a tripod.
    Yep. So how many bodies and lenses do you regularly use?
    Really? I try a new technique, I see what comes out, I
    adjust as necessary to my goal, I remember what works and
    what not for next time.
    Action shooters generally do not employ preview mode, though:
    they don't want the additional *variable* lag between photons
    hitting the sensor and dots lighting up on the EVF.
    Yep. But only for that reason and only as a background task.
    It's a work around for what is a non-problem with optical
    viewfinders.

    Then there isn't time to play with preview modes either.

    You're saying you're still surprised by aperture or exposure
    time settings?

    But you didn't need to relearn all of exposure time and
    aperture. Which is what the preview mode supplies. Preview
    doesn't do newer generations of sensors or digital darkroom.
    Yet with aperture and exposure time almost all carries over,
    even switching sensor sizes.

    Which makes them a *good* idea.
    So you'd write an angle of view on a lens --- which is then
    attached to a 35mm-sized sensor, a 1.6x crop sensor, a 2x crop
    MFT and maybe even to a 2.7x '1"' sensor. For which sensor
    would you write the angle of view?

    So focal length and thus equivalent focal length.
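    For what it's worth, the angle of view follows directly from focal
    length and sensor width, which is exactly why the same lens gives a
    different angle on each of those sensors. A minimal sketch (the
    function name is illustrative):

    ```python
    import math

    def angle_of_view(focal_length_mm, sensor_width_mm):
        """Horizontal angle of view in degrees for a rectilinear lens."""
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    # The same 50mm lens on different sensors:
    print(round(angle_of_view(50, 36.0), 1))   # full frame, 36mm wide: ~39.6 deg
    print(round(angle_of_view(50, 22.5), 1))   # 1.6x crop, 22.5mm wide: ~25.4 deg
    ```
    
    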

    And next year you'll be arrested for carrying a camera.

    Basically: If you need to chimp for a rather long time to
    find the right settings, you don't know your camera well.
    If you don't need to chimp for a long time, I call your
    ""sufficiently much faster that in five minutes shooting you
    can come away with many more good shots of a much greater
    ^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
    variety than without preview" bull.
    ^^^^^^^
    You begin to sound like someone who thinks photography *can't*
    be done without preview modes, unless one chimps for many
    minutes. At least not if you're not a super-photographer.

    Where did I require that one enter Maxwell's equations into
    the camera before the shutter works? Or alternatively, the
    theory of charge transport in semi-conductors? Or maybe how
    to design and make a processor for the camera?
    Really? So where are the cameras many ask for?

    Isn't it that companies decide what they will make, based on
    what *they* think makes them the most money, launch it where
    *they* think it'll make them the most money and price it as
    *they* think it'll make them the most money?

    I'm not worried about "the plebs". I'm worried about people
    who could be more but will be held back by such things.
    I see. You want the easiest way, not the fastest or the
    best way.

    -Wolfgang
     
    Wolfgang Weisselberg, Dec 24, 2012
  7.
    Any reason why you wouldn't decide how much highlights to blow
    and expose accordingly, pushing shadows in post (and
    adjusting the curves) as needed?

    -Wolfgang
     
    Wolfgang Weisselberg, Dec 24, 2012
  8. In the same time the world population has also increased, as
    have the national debt of the US. Based on that fact higher
    national debt and more people improve autoexposure. :)
    I /think/ it was some white paper from camera manufacturers.

    I'm afraid that while your claim is plausible, it doesn't mean
    it's true. One *easy* counter example: AF face recognition on &
    integral exposure metering set.

    -Wolfgang
     
    Wolfgang Weisselberg, Dec 24, 2012
  9.
    Oh, sure, there are. So, what did the camera make from that
    scene?

    Say, how do you know you saw white walls in the presence of
    snow, or can't your eyes be tricked? Probably that was a
    yellow wall and smog-stained snow you say ...
    Well, either colour measuring is necessary, or it's not. In
    the latter case "It must have had color sensitivity of some
    ^^^^
    kind" is proven wrong --- which was my point, in the former
    case it *is* a discussion of colour and whether your eyes can
    report them correctly.
    .... for your ploys to work.
    Are they spitting images of you, or where's the connection?

    -Wolfgang
     
    Wolfgang Weisselberg, Dec 25, 2012
  10. [big snip]
    Since that is equally true of your claim it seems that the sensible
    thing to do is that we agree to differ.
     
    Chris Malcolm, Dec 28, 2012
  11. Not necessarily. For example if the EVF (or LCD panel) is set to show
    the resulting image after the shot there is an instant flick between
    pre and post views which highlights small differences rapidly &
    effectively. Astronomers use that same process to draw the eye to very
    small changes which are otherwise imperceptible.
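    The blink-comparator trick works because only the changed pixels
    draw the eye; frame differencing is the computational version of the
    same idea. A toy sketch of the principle (pixel lists stand in for
    real aligned images; names are illustrative):

    ```python
    def difference_mask(frame_a, frame_b, threshold=8):
        """Flag pixels whose brightness changed by more than `threshold`.

        Blink comparison relies on the same fact: flicking rapidly
        between two aligned frames makes exactly these pixels pop out.
        """
        return [abs(a - b) > threshold for a, b in zip(frame_a, frame_b)]

    # One pixel brightening between two otherwise identical frames:
    print(difference_mask([10, 10, 200], [10, 30, 200]))  # [False, True, False]
    ```
    
    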
    I don't find that to be the case.

    [snip]
    True, but the more restricted is the range of inquisitive
    investigation the less it deserves the title.

    [snip]
    Not the same. An optical viewfinder shows the image the lens is
    presenting to the image sensor. An EVF shows you what the sensor makes
    of it. An EVF in preview mode shows you in addition what effect your
    selected jpeg processing options etc. have on the image.

    [snip]
    You may be right. But I don't choose my photographic equipment or
    develop my techniques with a view to their use in educating the
    unwilling or incurious. Nor do I or my doctor choose my drugs on the
    basis of how addicts abuse them.

    [With respect to finding the right shutter speed to blur moving skaters
    while keeping stationary skaters sharp]
    The point was not to freeze action but to blur it! The shutter speeds
    needed were generally in the 1/10th to 1/100th sec range. Preview mode
    usually let me get it right, and when it didn't it wasn't more than a
    stop out.
    Which I expect is why the facility to magnify the preview image up to
    image sensor pixel level was provided.
    The preview magnification of the sensor image is done by computer not
    lenses, so the resolution of the EVF doesn't put any limits on the
    magnification.
    A natural limit which applies to any technology which relies on you
    seeing what's happening.
    A nice feature of EVFs is that stopping down the lens doesn't dim the
    view unless you want it to. Very handy when doing long tripod
    exposures in churches or carefully selecting DoF in interior shots lit
    by strobes.

    [snip]
    You seem to think the magnification is done by optically magnifying
    the view of the EVF screen. It's not. It's done the same way as your
    computer can zoom into pixel level detail on your image editor, even
    if it's a 50MP image and your computer monitor has only 1MP.

    [snip]
    No. A loupe is a lens. Do you describe zooming in and panning around
    when inspecting a large image on your computer monitor as "using a
    loupe"? I suggest you look up "loupe" in a dictionary.

    [snip]
    But underneath the image are displayed the shutter speed, aperture,
    ISO, plus a lot more which you can choose whether or not to display.
    That facilitates the learning if learning is what you want to do. You
    might be right that it also helps you to avoid learning if that's what
    you want to do.

    It doesn't bother me if the preview mode of my camera might help the
    lazy and ignorant to take photographs without learning what's going
    on. I've managed to avoid having to teach people who didn't want to
    learn all my life and I don't intend to start now.

    [snip]
    So what? I'm talking about using a specific camera feature and method
    to help solve a particular photographic problem -- selective speed
    blurring of moving skaters.

    You're quite right that it would be silly to use that method for all
    sorts of other photographic problems. I've tried getting sharp
    photographs of birds in flight with a 500mm lens and I use quite
    different methods, including the use of an adapted gunsight in
    combination with an AF which has been specifically calibrated for that
    purpose.

    [snip]
    For the blurring of moving skaters kind of problem I prefer a monopod
    plus image stabilisation.
    Depending on how you define "regularly", anywhere between five and ten
    lenses. I'm not counting lenses I use less than once a year. I usually
    carry at least three. I only use my backup camera when I want to
    reduce lens changes. It doesn't do anything better than my best camera
    so it never goes out alone. That varies between more than once a week
    to less than once a month so I wouldn't call my use of two bodies
    regular.
    Sounds like you're blessed with a much better memory than I've ever
    had. I sometimes solve a problem on the run in a busy shoot and by the
    time I get to reviewing the results on the computer I've forgotten how
    I did it. But of course not entirely, because next time I hit the problem
    I'll solve it faster and remember it better.
    Of course. In that case you'd either avoid using the laggier kinds of
    EVF processing or avoid preview mode altogether. The fastest and most
    difficult kind of action shooting I do is birds in flight with a 500mm
    lens and for that I don't use the camera's viewfinder at all. I use an
    adapted gunsight which lets me use both eyes on the whole scene.

    [snip]
    Not my experience. I find I still have time to use preview when
    there's no time to shoot and review. It's better at that than I
    expected before trying it.
    Surprised is too strong a word. But where there are conflicting
    demands and aesthetic trade offs involved I like to experiment with
    different compromises. Anything which shortens the time between
    experiment and result is useful, especially when the shortening steps
    over the important boundary between experiment and check into
    interactive process control.
    It does do newer sensors because it's tied to the sensor. And it does
    do some digital darkroom techniques -- those which have become part of
    the rapid jpeg processing repertoire of the camera such as colour
    balancing and some kinds of tone mapping.

    But if you're referring to learning and experience which is specific
    to certain makes and models of camera you're right. That lack of
    generality is also true of autofocus. That doesn't stop autofocus
    being very useful, nor does it stop it being useful to learn exactly
    how a specific kind of autofocus technology works and where and why it
    fails.
    A surprising amount of what I learned in my film shooting days has
    turned out to have been oversimplified and overgeneralised. That's
    mainly been due to increasing sensor resolution revealing unsuspected
    problems in earlier cruder generalisations. Rather like the way
    improved detail resolution in scientific measuring instruments reveals
    the simplifications and overgeneralisations in earlier mathematical
    models.

    [snip]
    None, for exactly the same reasons I wouldn't write "equivalent focal
    length" on a lens either.
    I doubt it. In the UK over the last few years photographers and
    lawyers have succeeded in getting the law clarified and better
    guidelines issued from the government to police and security
    personnel. That's led to improved relations between photographers,
    police, and security guards. I used to get harassed often enough that
    I carried a copy of the relevant legislation in my gear bag. I don't
    now because there's much less harassment.

    The ice rink incident above was unexpected and untypical. One of our
    group of photographers emailed the security company a copy of the
    legislation and guidelines for police and security. The reply suggested
    an improved attitude would be forthcoming. Later experiences by other
    photographers suggest it has.
    I don't. It's a recent acquisition. It takes me at least six months to
    get to know a camera well. But on the other hand the exposure
    parameters I'm playing with are in this case largely camera
    independent.
    Which I don't.
    Experiment trumps speculation. My experiment, your speculation.
    When you look up "loupe" in your dictionary look up "eristic argument"
    as well :)
    In the shops.

    [snip]
    Not what I said nor meant. I said I wanted the most fun while
    learning. By that I meant playful skill acquisition and problem
    solving. That's always been the best and fastest way to learn for me.
    A friend and cognitive psychologist tells me that's not just a
    personal idiosyncrasy of mine.
     
    Chris Malcolm, Dec 30, 2012
  12. Except that we have no reason to expect that world population and
    national debt affects autofocus, whereas the number of autofocus
    sensors was increased in order to improve autofocus. That doesn't
    prove it, but it makes the causal relation more likely.

    [snip]
    That's not a counter example to my claim. It's a counter example to a
    general claim I quite specifically did not make. Read my claim again.

    Note too BTW that in some recent DSLRs you can no longer set purely
    integral metering. The handbook may suggest you can, but in practice
    there's a bias towards any focus sensors which are in use. IIRC this
    has been discussed here in recent months -- unexpectedly large shifts
    in exposure when slight changes in composition move a bright area off
    the sensor focus area. Same goes for the RAW files of some cameras
    which turn out to have been unavoidably slightly cooked.
     
    Chris Malcolm, Dec 30, 2012
  13. Oh, the higher the population density, the higher the chance
    that any random focus will hit some person ...
    .... but what does that have to do with auto*exposure*?

    "e.g. if face recognition is turned in the menu, the
    camera will expose for the faces it finds, otherwise not."
    ^^^^

    Sounds like a definite claim to me.
    What part of that means face recognition plays any role in
    metering in that case?
    That would indicate a strong bias. Or a user error.
    You mean like Nikon's long exposure median filter? (Google
    for "Nikon mode 3") That's not even nearly recent.

    -Wolfgang
     
    Wolfgang Weisselberg, Jan 5, 2013
  14. Sorry, brain fart, kept saying autofocus when I meant autoexposure.
    It is a definite claim. I didn't say it wasn't. I said it wasn't a
    general claim. Which it isn't. The point is that a general claim can
    be disproved by a single counter example. Your counter example
    countered a general claim I did not make, not the specific and
    definite claim I did.
    Because, as was mentioned in earlier posts, face recognition being
    switched on will cause preferential selection of the focus sensors
    which best cover the face. So if the camera has an inherent and
    ineradicable bias in exposure towards the selected focus areas (as
    some do) this will become a face-biased exposure.
    The posters reporting these effects have claimed an unexpectedly
    strong bias. Other have suggested user error. That's always possible,
    but having seen it myself and carefully tested it to verify that it's
    the camera doing it even when unbiased full frame exposure metering
    has been selected I don't doubt that some cameras do behave like that.
     
    Chris Malcolm, Jan 7, 2013
  15. ["Followup-To:" header set to rec.photo.digital.]
    Blink comparators switch many times between 2 images, not
    just once --- for a very good reason. And even then you need
    perfect alignment *and* no blanking. Oh, and does your EVF
    or LCD show 10x magnification of the whole image at once?

    Why should someone who investigates what's effective in
    his photography and doesn't need automatic functions for
    his work investigate something so unimportant to his work?
    Do photographers routinely learn electronics and program
    firmware and study algorithms for deinterlacing and the
    theoretical groundwork for image manipulation, lest they be
    called not inquisitive enough?

    An EVF in preview mode *sometimes* shows you an image
    *downsampled to 0.3-0.4 MPix* of what the sensor and JPEG engine
    made of the light that arrived through the lens *some time ago*
    on an *uncorrected* monitor that's usually much too bright in
    low light situations.

    OTOH your doctor doesn't try out a dozen different drug
    combinations on you until he hits one that sort of works,
    at least usually. He does chimp, though.
    i.e. just handholdable.
    Whatever floats your boat ... you still needed to chimp.
    Which again --- with roller skates --- means you need to have
    your subject pass the part of whatever 0.3 MPix you're currently
    seeing at pixel level and judge in 1/60s (or whatever refresh
    rate your EVF uses) if the subject was tack sharp or only mostly
    sharp --- or reliably follow the subject at that magnification
    without camera shake or blur from the camera movement. At, say
    150mm (35mm equivalent) and 15 MPix that would mean steadying an
    effective 1060mm (and that's only ~7x). That's the same as if
    your skaters filled the frame at 21mm for distance and speed and
    you'd followed someone's belly button at 150mm at that range.
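    The ~7x and ~1060mm figures above fall straight out of the pixel
    counts: showing a 15 MPix frame 1:1 on a 0.3 MPix EVF is a linear
    magnification of sqrt(15/0.3) ≈ 7.07. As a quick check (function
    names are illustrative):

    ```python
    import math

    def pixel_level_magnification(sensor_mpix, evf_mpix):
        """Linear magnification when the EVF shows the sensor image 1:1 per pixel."""
        return math.sqrt(sensor_mpix / evf_mpix)

    def effective_focal_length_mm(fl_equiv_mm, sensor_mpix, evf_mpix):
        """35mm-equivalent focal length you must hold steady at that zoom."""
        return fl_equiv_mm * pixel_level_magnification(sensor_mpix, evf_mpix)

    print(round(pixel_level_magnification(15, 0.3), 2))    # ~7.07x
    print(round(effective_focal_length_mm(150, 15, 0.3)))  # ~1061mm
    ```
    
    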

    You'd need to *record* the preview image and then zoom in and
    watch at your leisure, taking a second or 3. Which is
    chimping and works even better with the shutter button.

    You either get the whole frame and low resolution, or a tiny
    shred of the whole frame, through which even a stationary
    object will jump around unless you're well braced and have
    steady hands or a tripod. (I can do that. Sort of. Sitting.
    Bracing my heavy 70-200mm lens on my legs and fixating the
    camera with my hands. Which won't work at all with an EVF or
    moving subjects I have to track.)

    It's kinda hard to see 'is tack sharp' or not under these
    circumstances.

    Which is easily circumvented by softly squeezing the shutter
    button and chimping.

    Yes, *please* show me how you evaluate the light, the JPEG
    settings and the DoF in a dark interior shot lit by strobes
    .... in preview mode!

    I KNOW. Still, I can't judge the sharpness of a 24 or 36
    MPix image on a 0.33 MPix (1 MDot) monitor. I can usually
    see if such an image is *quite* unsharp on a large 27"
    monitor (when the image occupies >3 MPix (9 Mdot)). I
    *still* need magnification to judge critical sharpness
    there.
    http://www.gregorybraun.com/Loupe.html
    http://www.artissoftware.com/screentools/loupe.html
    http://www.markus-bader.de/MB-Ruler/help/loupe.htm

    Maybe your dictionary is out of date?

    And on web pages there are ad banners at the top and on each
    side. They're habitually ignored.

    On Windows, there are many cases where you need to confirm a
    detrimental action. But as such pop ups are used for
    everything, people are conditioned to ignore the warning that
    they destroy their installation and click OK.

    Just having the numbers doesn't mean that they're being seen.
    In your skater example, did you look at the numbers, adjust
    them and then check the effect, or did you turn the dials
    till the effect was sorta what you wanted and then looked at
    the numbers? (Or did you ignore the numbers completely?)

    That's not the kind of person to use preview mode, they use
    full auto everything mode, not even scene modes.

    You're right, preview mode is just applicable to moving
    skaters. :) Phew! Now I'm relieved!

    Do you know your regular lenses well?

    You don't have EXIF in your files? There's everything in
    there that you can influence by using the preview mode ...
    You're not using chemical sensors, are you? :)

    Yet wouldn't it be perfect if you used preview mode to have
    the body tack sharp but the tips of the wings blurred to
    show the dynamic movement?

    That would mean lots of experience with preview, but little
    experience with exposure times and apertures.
    The best shortcut would be knowing pretty well which
    combinations work. It's sorta like phase AF and contrast AF:
    phase AF knows the direction and (quite exactly) the amount
    of travel, contrast AF is an iterative control process ...
    guess which one is still faster.
    Exactly.
    You learn --- and then have to throw away most of it when you
    change cameras.
    AF is fully automatic.
    Full auto mode is fully automatic.
    Preview is fully manual. And slower than full auto.

    Naah. You've increased the enlargement (you print larger or
    look at 100% with higher resolutions), you needed to factor
    that in even back when.

    What has changed is that you are more variable in your ways.
    But luckily all you need is a simple correction factor.

    Just as switching from Deutschmark to Euro.
    Thus the focal length is most useful, and from there you get
    trivially to equivalent focal length, but not to angle of view.

    All it takes is just one terrorist that also used a camera once.


    And exposure --- unlike preview mode --- carries over well.
    So did you do the experiment, shooting a second five minutes
    without preview mode, and come away with far fewer good shots
    and a much smaller variety? I doubt that!

    So, let's stay with logic: you'd need to produce many more good
    shots, and of a much greater variety, in the time you'd
    otherwise spend chimping. I dunno about you, but I can't do many
    *good* shots in 20 seconds. Much less in a very great variety.

    Yep, it's really a good term for your "cameras they couldn't
    work without a proper scientific understanding of camera
    technology." Thank you.

    So where's the affordable compact camera with a really large
    sensor, 8 or less huge MPix, a good *fast* lens, an optical
    view finder ...

    Yep, the easiest way.
    "best" and "fastest" are hard to judge, since you cannot
    compare well yourself.

    Funniest/most entertaining way, that's easy to find out.

    -Wolfgang
     
    Wolfgang Weisselberg, Jan 9, 2013
  16. *Any* claim can be disproven by a single counter example that
    fits.
    Assume face recognition (for AF, since there is no AE face
    recognition switch) is turned on in the menu. Your claim:
    THEN the camera WILL expose for the faces it finds.

    My counter example: ... even when set to integral (or centre
    spot, assuming the face is off center, for that matter)?

    Easy to test:
    - Centre spot: put a bright lamp in the center spot and a well
    underexposed face to the side, set AE to centre spot, let the
    AF capture the face, see what you get.
    - Integral: darkish room, brightly lit face in small part of the
    frame and to the side. Set AE to centre weighted integral,
    let AF capture the face, see what you get.

    If the face is well exposed and the room/lamp not, you've got
    a broken camera, IMHO, otherwise your camera doesn't expose
    for the face even though your claim says it does.

    Not having your camera, I find it a bit hard to test your
    camera's behaviour on that point.
    That's assuming a lot ('if the camera has') and is quite some
    backpedalling from your former 'will'. Agreed, IF the camera
    does not honor spot or centre AE modes and IF the camera does
    weight the focus point highly and IF you cannot disable that,
    then the camera will expose at least somewhat for the face.

    That's probably the reason some people do call matrix metering
    unreliable and "rolling dice". But they did that even with
    cameras that lack the sensors and algorithms to face detect
    and some cameras also show that behaviour when in fact no face
    is within the frame.

    Thus: no proof that face detection plays a role.

    -Wolfgang
     
    Wolfgang Weisselberg, Jan 10, 2013
  17. For their purposes, which involve identification of very small changes
    of detail. For the purpose I'm describing the instant switch between
    the two images works well to draw the eye and attention to
    differences. For static subjects alignment is perfect. For moving
    skaters it's good enough to be useful. The blanking is very brief, I
    guess about the same as an OVF DSLR mirror blackout, and while I agree it
    would be better with no blanking, there is despite the blank a useful
    comparative effect.
    No, neither does my computer monitor nor do the optical viewfinders
    in my DSLRs and SLR. Despite that handicap I find them useful. I agree
    that if they were big enough to show the whole image at pixel level
    resolution they'd be even better.

    [snip]
    From a purposeful point of view of course they don't. The point is
    that a purposeful point of view is based on assumptions which while
    usually true sometimes turn out to be false. Especially when there are
    changes in the technology. I'm sure you're aware of how many important
    changes in science occurred when changing technology introduced higher
    data resolution and new possibilities, and how often the consequent
    new ideas and new methods required an unusually open-minded individual
    who noticed something intriguing which all the other experts had
    either been blind to or "knew" was of no significance.
    Excellent job of describing an EVF in such a way as to make it sound
    worse than useless. Shame you couldn't find more credible numbers.

    [snip]
    There are bigger EVFs than that.
    If we're talking preview mode then the refresh rate is the chosen
    shutter speed plus a small constant.
    You're absolutely right that's what you'd have to do if you were
    trying to judge tack sharpness in preview mode while photographing
    moving skaters. I've already explained to you at least twice why I
    wasn't trying to do anything as silly as that.

    Do you find this poor memory a handicap in everyday life, or does it
    only happen when you're arguing with people in newsgroups?

    [snip]
    Are you deliberately trying to be stupid? Or does it come quite
    naturally to you to have forgotten that *I*'ve already pointed out to
    *you* that evaluating strobe lighting was one of the things EVF
    preview can't possibly do?

    [snip]
    That's possible, but pretty unlikely in this case. You're certainly
    not going to convince me that it is by citing product advertisements
    which only use the word in a specifically qualified phrase. That you
    even bother doing that strongly suggests that you know even less about
    dictionaries than you know about EVFs.
    There you go again. I'm explaining to you what *I* do with certain
    bits of photographic technology and why. I don't give a damn what
    other people do with it. I want to know those numbers, and they're
    conveniently displayed alongside the image. Good. I don't give a hoot
    that people who don't want to know the numbers will ignore them.
    So what? I make no claim that my camera or my methods are good
    educational tools for forcing the lazy and ignorant to learn. I don't
    even give a damn if they're worse than useless at forcing education on
    the unwilling. The educational fate of the incurious is not something
    that influences my choices of camera technology.
    So what? When I let my camera choose aperture, shutter speed, or ISO,
    I want to know what it's chosen. I don't give a damn if other people
    ignore the information.
    Haven't I already explained that? I did both of the first two in that
    order.

    [snip]
    Yes. How about you? How many bodies and lenses do you regularly use?
    How well do you know your lenses?
    You really think that? You mean all your lenses are fully automated?
    There's an unexpected surprise! Plus even with fully automated lenses
    all that's recorded is what's settable. That may be only part of
    what's necessary to understanding how the problem was solved.

    [snip]
    I don't see the point. If the technology somehow offered that then
    human eyes and hands are way too slow to be able to use it.
    You must be making some seriously false assumptions then. My
    experience with preview is a few months of elapsed time and a few
    hundred photographs.

    Whereas I spent fifteen years and thousands of photographs with fully
    manual cameras before I started using cameras with either autofocus or
    autoexposure. I still regularly use at least two fully manual lenses,
    plus all but one of my flashguns are only manual, and I still usually
    have an exposure meter in my gear bag. So my experience of exposure times
    and apertures hugely outweighs my experience of preview mode, and is
    still being refreshed and improved by being in regular use.
    Of course. Iterative approximation is always greatly helped by being
    able to start with a good estimate.
    As usual it's the one which is least accurate. I'm pleased to see that
    my latest camera has two phase based autofocus speeds. The slower one
    is a bit more reliably accurate despite being faster than the single
    AF speed of my previous camera.

    I'm disappointed that it doesn't offer three AF speeds, where the
    third and slowest would be a final tuning of the phase based autofocus
    by contrast based. A deliberate slight undershoot of the phase-based
    AF would solve the contrast based focus direction problem.

    [snip]
    Not my experience at all. I'm beginning to wonder just how much you've
    learned about these newfangled technologies you don't like. The
    biggest technology shift I've ever made was the shift from film to
    digital. But even then I didn't have to throw most of what I'd learned
    away. Even some film darkroom technique knowledge carried over into
    the digital darkroom of computer post processing.
    If you're suggesting that it works so well that you don't need to
    understand how it works in order to understand how and why it fails
    then you're a much less sophisticated photographer than I took you
    for. There's a reason why the top end cameras with the best AF also
    have the best aids to rapid and accurate manual focusing, plus the
    ability to do lens-specific microfocus adjustments.

    There are also degrees of "fully automatic".

    My new camera has a nice AF mode which drops into manual focus mode
    once AF has locked. That allows me to check and if necessary adjust
    focus manually without having to press any buttons. That's also an
    inherent feature of some of the latest in-lens focus drives, but this
    camera AF mode applies to all AF lenses of any vintage. So even though
    the new camera has got the best AF I've used so far, I'm using manual
    focus more often because it's now easier and faster than before.
    Depends what you mean by full. On my previous camera full auto meant
    the camera chose aperture, shutter, and ISO. When using full auto the
    user could select various modes, such as sports, which went for action
    freezing shutter speeds, or landscape, which went for lowest
    ISOs, etc. Whereas my new camera has two auto modes, the fuller of
    which selects the mode based on looking at the image. e.g. it will
    select sports mode if there are large fast moving things in view.
    There wouldn't be much point to auto modes which were slower than
    manual. The point about manual adjustment is being able to make more
    sophisticated and accurate choices than auto is capable of. The point
    of an optional preview mode is that in some circumstances it lets you
    make your manual choices more easily and faster, which can bring the
    extra sophistication of manual into faster kinds of photography.
    You're ignoring the fact that increased resolution often splits apart
    things which were previously confounded because they couldn't be
    distinguished. For example with film cameras I never noticed the
    differences between mirror shake and shutter shake in image
    blurring. I put it all down to mirror. Whereas the increased detail
    resolution which digital photography gave me allowed me to see the
    differences for the first time. (I know some people measured shutter
    shake back in film days. But I couldn't see it in my photographs and
    considered it to be trivially insignificant.)
    Luckily it's a lot more complex than that. I say luckily, because if
    it was just correction factors I'd find it much less interesting.
    I don't see why calculating equivalent focal length is more trivial
    than calculating view angle. Nor do I see why it's more useful to me
    (obviously YMMV). For example in off-site planning of shots of
    building exteriors and interiors I've much more often decided I wanted
    a specific view angle, and then had to calculate what focal length I'd
    need, than the reverse.

    [snip]
    No, I didn't do that experiment. I've already described the experiment
    I did. The results were qualitatively similar to the results a number
    of others have reported when doing their own comparative assessments.

    [snip]
    And plenty of people want Porsche performance at Volkswagen Beetle
    prices. I wonder why the car makers aren't making that obvious best
    seller? Must be some kind of conspiracy against the customer...
     
    Chris Malcolm, Jan 11, 2013
  18. 20 MPix on a 0.3 MPix screen (1 MDot): each *single* screen
    pixel covers about 66 pixels of the original. Rather large
    changes in the original are very small changes of detail there.
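    A back-of-the-envelope sketch of that ratio (the 20 MPix and
    0.3 MPix figures are the ones from the post; "MDot" counting
    the three sub-dots per LCD pixel is the usual convention):

```python
# How many sensor pixels collapse into one screen pixel when a
# 20 MPix image is fitted whole onto a ~0.3 MPix (1 MDot) rear LCD.
sensor_pix = 20.0e6   # 20 MPix capture
screen_pix = 0.3e6    # 0.3 MPix screen (1 MDot = 3 sub-dots per pixel)

ratio = sensor_pix / screen_pix
print(f"{ratio:.0f} sensor pixels per screen pixel")  # roughly 66-67
```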
    If the change is well visible and comparatively large.
    But for static subjects you have all the time in the world
    to try settings and chimp and don't need a preview mode.
    Which means you'll only perceive large changes ...
    Neither of them are 0.3 MPix only ... and in the case of a
    computer monitor it's extremely hard to zoom in and view
    small details, don't you think so?
    Optical view finders come closest to that.
    And what "changes in science" would be analogous to the changes
    pertaining to fully manual operation, and wouldn't your answer
    be akin to telling all photographers to better learn optics and
    electronics and quantum physics, since they are all relevant
    to digital photography, and to learn computer science and
    algorithms, since they directly pertain to the digital dark room?
    Well, I can't help it if they are nearly worse than useless.
    If you'd manage to reduce the drawbacks *a lot* it may become
    viable. Battery eating, but viable.
    Tell me, how many dots do your EVFs have ... then we have a
    credible number!
    Really? How many dots is yours?
    Nope. The EVF can't update at infinite speeds. The sensor
    cannot be read at infinite speeds. Even if the light is only
    captured for a 1/4000s, that doesn't mean the refresh rate
    is in the 1000's per second.

    Same with your LCD screen. Your game may do 300 fps, but
    your screen does 60Hz ...

    So you never would want a moving skater tack sharp, you say?
    It's more likely my writing skill, since you obviously can
    divine everything someone else writes.

    No, just reading what you write.
    So you do DoF in interior shots by pushing the ISO to so high
    the noise precludes you from seeing really sharp --- huh, wait
    a moment, you're right! It doesn't matter on VGA resolution!
    (Not that you can judge DoF very well there ...)

    Obviously you know everything and need to tell others they
    don't know anything.

    *I* can do without preview mode, make of that what you like.
    Oh well, it only takes enough people who don't give a hoot
    for what other people do ...

    I don't care about forcing anyone. I care about people being
    sucked into dead ends.
    Do you have fun misrepresenting my position? If I wanted to
    force education I'd be on the barricades against full auto
    modes ...
    The idea that all valuable people are naturally curious and
    there's no need for directed learning is proven by the fact
    that most countries have stopped having schools.

    Straw man: you don't let the camera choose with preview mode.
    Not as many lenses as you use, but I think I know my lenses
    fairly well.
    So how come your preview mode shows the aperture on your
    manual lenses?
    And what does preview mode do better for directing/adjusting/
    adding light? Or for correcting the composition? Or for
    using a lens hood? Or using makeup on the models?

    I can think of several easy ways to allow that for people who
    can see moving skaters blurred with preview mode, apart from
    simply finding out the right combinations.

    Which means --- since you are a curious person and make every
    shot count --- you have lots of experience.
    But you still don't hit the right combinations from experience,
    thus you need preview mode to be fast.

    Getting a good estimate in this case means remembering what
    worked and what didn't.

    Go read
    http://www.lensrentals.com/blog/2012/08/autofocus-reality-part-3b-canon-cameras
    and find out that, and why, that isn't true in all cases
    anymore ...
    Use an extender and it'll be slower.

    So you say preview mode carries over?
    I'm beginning to wonder if I've stopped being able to write
    English. "these newfangled technologies", indeed, probably
    everything that's been invented since just after 2982 BCE,
    when one reads you.

    As I said, it seems I'm unable to get you to understand what I am
    saying. Let's try again:
    - AF is a fully automatic system, therefore it has a great
    excuse why it's camera exclusive and needs to be learned for
    full usage. Additionally, it's now much faster than almost
    all human photographers.
    - Preview mode is of very little use on fully automatic
    settings. Now, if exposure time and aperture behaves
    differently between cameras, say one camera being slow and
    the other having a wider aperture ....
    So which top end cameras have split prisms and microprism
    rings in their focusing screens?
    Just like, say, they have manual mode?
    Your old camera was ancient? One-shot AF is a really old hat.

    Well, go look the word up in a dictionary.
    Scene modes are not full auto modes, as the scene type is
    selected by the user.

    That is a rather limited view.
    Wrong. The point is making a choice that isn't the most common
    choice given all the camera knows about you and this specific
    situation --- which, short of mind reading, is the best bet
    the camera can make.
    The choices are not easier. Choose a change, twirl a knob.
    There you are, made your choice.
    Assuming you can see the change on your low resolution EVF or
    screen ... and you cannot see the change without preview mode.
    What part of "increased the enlargement" didn't you get, and
    how come even on film people used tripods with large format,
    where the resolution is higher?
    Duh. How come you can see the difference? You enlarge
    more. You could have seen it with film, if your film had had
    high enough resolution to support that kind of enlargement.

    And if you only enlarge as much as you can sensibly with
    film ... can you still see the difference?

    The thing that's complex is how to achieve the tighter
    tolerances required for the enlargements.

    Quick: Crop factor 1.5. Focal lengths are 34mm, 62mm, 93mm.
    What are the equivalent focal lengths?
    What are the view angles?

    Which one did you get faster?
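    For anyone checking the quiz: here's a small sketch of both
    computations. The diagonal angle of view and the 36x24 mm
    full-frame reference are my assumptions, not from the post.

```python
import math

crop = 1.5
# Sensor diagonal for crop factor 1.5, relative to 36x24 mm full frame.
diag_mm = math.hypot(36.0, 24.0) / crop   # about 28.8 mm

for f in (34, 62, 93):
    eq_f = f * crop                                        # equivalent focal length
    aov = math.degrees(2 * math.atan(diag_mm / (2 * f)))   # diagonal angle of view
    print(f"{f} mm -> {eq_f:.1f} mm equiv., {aov:.1f} deg diagonal")
```

    The equivalents (51, 93, 139.5 mm) are one multiplication each;
    the angles (about 46, 26 and 18 degrees) need an arctangent,
    which rather supports the point about which one comes faster.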
    OK, that is a special case.

    Yep, you did. It in no way supports your conclusion.
    You didn't do a comparative assessment.

    Well, where are the cameras for Porsche prices?

    Blah blah.

    Let's ask another one: Where is the fully programmable DSLR?

    -Wolfgang
     
    Wolfgang Weisselberg, Jan 16, 2013