Re: 25 Reasons to Avoid the SD-10 (was 15 Reasons to Avoid the SD-10)

Discussion in 'Digital Cameras' started by Paul Howland, May 9, 2004.

  1. Paul Howland

    Big Bill Guest

    I thought you said you did translation for a living.
    Do I have the wrong person?
    If this is you, aren't you supposed to *understand* what you
    translate?

    We are NOT speaking of a tube of water, or cars, or anything but the
    X3 Foveon sensor.

    But, to answer your question, the thermometers would be sampling
    *different* water, which could indeed be at different temps at
    different depths.
    With the X3, the sensors are sampling the *same* light at 3 levels,
    minus filtering in the silicon.
    There is no attempt to differentiate any special characteristics of
    the light *except* the color.

    Bill Funk
    Change "g" to "a"
     
    Big Bill, Jun 15, 2004

  2. Huh? What do you mean by this?

    Different wavelengths of light penetrate to different depths, but all
    the light reaching the 3 levels of photosensors comes from the same
    position in the image plane, and that in turn comes from the same
    objects in the original photographic subject. Thus, all levels of
    sensors in one location, no matter how many they are, form a single
    image pixel.
    Everyone in this discussion has always agreed that the three
    measurements taken at one photosite are spatially separated in silicon
    depth. We will get nowhere as long as you insist on arguing about what
    everyone already agrees is true.
    Because that's the essence of the definition of pixel! The meaning is
    simple: if it comes from a spatially-separated place in the 2D image
    plane, then it's part of a different pixel. If it differs in colour, or
    depth, or *anything* else, but comes from the same 2D location in the
    image, then it's part of the same pixel.
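
    To make that concrete (a quick sketch in Python/numpy; the dimensions
    are just illustrative, not a claim about any particular camera): an
    image stores its colour components along a third axis, and the pixel
    count is still just width times height.

        import numpy as np

        h, w = 1512, 2268                   # illustrative dimensions only
        img = np.zeros((h, w, 3))           # three colour components per location
        print(h * w)                        # 3429216 addressable pixels
        print(img[100, 200])                # one pixel: an [R, G, B] triple
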
    It is counted - it just doesn't get to be named a pixel. Every molecule
    in your body is doing something or other, but the individual molecules
    are not each named Laurence.
    That's nonsense. All the measured data is used. If you start parroting
    George like that, your credibility drops as low as George's.
    Isn't it rather stupid to use a description of the Bayer process that
    you know isn't true? Particularly when it's the same rubbish that
    George writes? It doesn't do any good to say "don't get on my case"
    about it - if you say stupid things, you look stupid regardless of
    whether I comment or not.
    One photosensor is *not* the smallest addressable element of an image.
    It's merely 1/3 of the structures that capture the data that forms a
    pixel of the image.
    Not by any of the existing definitions. You're discarding them and
    creating your own incompatible definition. That is what I do not
    accept.
    No they aren't. They are not spatially distinct in the 2D image plane.
    By your argument, each separate colour component (red, green, and blue)
    at every location should itself be called a pixel. But they are not:
    red, green, and blue together form one pixel, not three.
    I did not mention mosaic filter sensors at all in that paragraph. Where
    do you get this from? The Bayer array is one of the few examples of a
    sensor that does *not* make 3 measurements for each output pixel,
    instead calculating colour using clever signal processing. Still, it's
    clear that the number of spatial measurement locations equals the number
    of pixels for that sensor.
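
    For what it's worth, here is a minimal sketch of that signal processing
    (plain bilinear interpolation with an assumed RGGB layout; real cameras
    use far cleverer demosaicing than this):

        import numpy as np
        from scipy.ndimage import convolve

        def bilinear_demosaic(raw):
            # raw: 2D Bayer mosaic (RGGB layout assumed); returns an HxWx3 RGB array
            h, w = raw.shape
            y, x = np.mgrid[0:h, 0:w]
            masks = {'r': (y % 2 == 0) & (x % 2 == 0),
                     'g': (y % 2) != (x % 2),
                     'b': (y % 2 == 1) & (x % 2 == 1)}
            k = np.ones((3, 3))
            planes = []
            for c in ('r', 'g', 'b'):
                m = masks[c].astype(float)
                # average the known samples of this colour around each location
                planes.append(convolve(raw * m, k, mode='mirror') /
                              convolve(m, k, mode='mirror'))
            return np.dstack(planes)

    Each output pixel here still corresponds to exactly one measured
    photosite; two of its three colour components are estimated from the
    neighbours.
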

    Dave
     
    Dave Martindale, Jun 16, 2004

  3. Paul Howland

    JPS Guest

    You can't be serious. In the tub, you are measuring the temperature at
    different depths. In the Foveon sensor, what you are measuring is not
    color *at* different depths; you're measuring color in a 2D grid *by*
    the depth at which different wavelengths are absorbed. The structure of the sensor uses
    depth to measure levels of three wide color bands that are all focused
    or unfocused together at the surface of the sensor.
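
    If it helps, here is a toy model of exactly that (a sketch; the
    absorption coefficients and layer depths below are rough values I am
    assuming for illustration, not Foveon's actual numbers):

        import math

        alpha = {'450nm': 4.0, '550nm': 1.0, '650nm': 0.33}   # 1/micron, assumed
        layers = [(0.0, 0.4), (0.4, 1.6), (1.6, 5.0)]         # depths in microns, assumed

        for wl, a in alpha.items():
            # Beer-Lambert: fraction of the light absorbed between depths z0 and z1
            fracs = [math.exp(-a * z0) - math.exp(-a * z1) for z0, z1 in layers]
            print(wl, [round(f, 2) for f in fracs])

    Short wavelengths end up mostly in the top layer and long wavelengths
    mostly in the bottom one, so the depth sorts the light into three broad,
    overlapping bands - but every one of those numbers belongs to the same
    2D location in the image.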
    --
     
    JPS, Jun 16, 2004
  4. Paul Howland

    JPS Guest

    What a joke - you just want to bend terminology to award the
    Sigma/Foveon cameras a semantic crown they do not deserve.

    You are enamored with the camera's output and/or theory, and want to
    give it an award - more pixels than it actually has.
    --
     
    JPS, Jun 16, 2004
  5. Paul Howland

    JPS Guest

    None of the definitions imply 3D positioning of elements.

    Even if they did, the Foveon does not measure the color present *at*
    different depths; it measures the light intensity in a 2D plane, and
    only uses depth as a discriminator to divide the light into 3 wide color
    bands. The depth is most definitely *not* part of the image.
    --
     
    JPS, Jun 16, 2004
  6. Paul Howland

    JPS Guest

    Huh?

    There is no way to distinguish "focused" vs "non-focused" at different
    depths in the Foveon sensor. Light has either entered the stack, or it
    has not. All focusing occurs at and before the surface.
    --
     
    JPS, Jun 16, 2004
  7. Paul Howland

    Lionel Guest

    <grin> I'm imagining 3.4 million tiny gnomes in every SD9, each equipped
    with a microscopic slide-rule & a bucket. ;)
     
    Lionel, Jun 16, 2004
  8. Paul Howland

    Paul Howland Guest

    Agreed - but the key word here is *potential*. Foveon doesn't do this,
    so it's rather pointless even mentioning it. In any event, you couldn't
    extract any useful depth information as the difference in focal position
    is *far* too small! It must be fractions of a millimetre, whereas the
    depth information we'd be interested in is of the order of metres. And,
    unless we had a coherent light source, you couldn't use phase
    differences either. So, I stand by my comment - the Foveon has no 3D
    application whatsoever.
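
    To put a rough number on it (a back-of-the-envelope thin-lens check;
    the 50 mm focal length and the ~0.005 mm sensor stack depth are just
    assumptions for illustration):

        f = 50.0                          # assumed focal length, mm

        def image_dist(u):                # thin lens: 1/f = 1/u + 1/v
            return f * u / (u - f)

        shift = image_dist(2000.0) - image_dist(3000.0)   # subject at 2 m vs 3 m
        print(round(shift, 2))   # ~0.43 mm of focal-plane shift
        print(0.005 / shift)     # the whole sensor stack spans only ~1% of that

    Moving the subject a full metre shifts the focal plane by less than half
    a millimetre, and the photodiode stack is a tiny fraction of even that.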
     
    Paul Howland, Jun 16, 2004
  9. No they don't. They measure sort of red, sort of green and sort of blue;
    however, the wavelengths sensed at each sensor are hardly perfect and
    sometimes overlap, mainly due to inconsistencies in the doping process
    and in the silicon manufacturing process. The consequence is that the
    Foveon data has to be interpolated at every site. One only needs to
    look at the unprocessed raw data from a Sigma camera: it has muted
    colours, mostly in the brown/orange and gray-green spectra.
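
    A minimal sketch of the kind of per-site processing involved (the
    matrix below is made up purely for illustration; it is not Foveon's
    calibration data): the three raw layer values at every photosite get
    mixed through a 3x3 matrix to produce display RGB, which is exactly why
    the untouched raw output looks so muted.

        import numpy as np

        M = np.array([[ 1.8, -0.6, -0.2],      # illustrative matrix only
                      [-0.4,  1.7, -0.3],
                      [-0.1, -0.7,  1.8]])

        raw_site = np.array([0.42, 0.35, 0.28])   # top/middle/bottom layer values
        rgb = np.clip(M @ raw_site, 0.0, 1.0)     # applied at every photosite
        print(rgb)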

    Your BS about quantum mechanical accuracy is simply that: BS.

    GK
     
    grant kinsley, Jun 16, 2004
  10. Hell, every single pixel is interpolated. Have you ever seen the
    unprocessed output from a Foveon sensor? It doesn't resemble real-world
    colours at all; they are muted and tend towards gray-green,
    brown and orange.

    GK
     
    grant kinsley, Jun 16, 2004
  11. it's still a million, no matter how you try to redefine it.

    GK
     
    grant kinsley, Jun 16, 2004
  12. Paul Howland

    Lionel Guest

    Kibo informs me that (Laurence Matson) stated that:
    "Smallest addressable point on a digital image", to be precise. I notice
    that you keep on misquoting this. I take it that English wasn't one of
    your best subjects at school?
    That's because you seem to be exceptionally stupid.

    FYI, the 3-dimensional equivalent of the term 'pixel' is the term
    'voxel', something I've pointed out to you several times already. A
    pixel is, by definition, 2 dimensional - a fact that's also been pointed
    out to you quite a few times.
    "Smallest *addressable point*", Laurence, which has also been pointed
    out to you any number of times. The R, G & B layers in the Foveon sensor
    are not "addressable points".
     
    Lionel, Jun 16, 2004
  13. Paul Howland

    Lionel Guest

    Kibo informs me that (Laurence Matson) stated that:
    Laurence, these facts have been explained to you patiently & clearly by
    myself & numerous other people in this group. You have clearly gone way
    past ignorance, & have descended into pure trolling. Accordingly, I'm
    not going to waste any more of my time explaining the basics of digital
    imaging to you, when there are beginners asking questions here who're
    actually interested in learning something, rather than shilling for
    their patron.
     
    Lionel, Jun 16, 2004
  14. SNIP
    I was just humoring you - unless you mean that wasn't what you hoped to
    provoke with your condescending tone? In that case I'm sorry, but I do
    wonder what else you are trying to achieve with it?

    Bart

    P.S. It won't hurt to snip remarks or questions you choose not to react to;
    it'll save bandwidth.
     
    Bart van der Wolf, Jun 16, 2004
  15. SNIP
    How hard is it to accept that in the context we're dealing with here, an
    image/picture is the result of a rectilinear projection in a 2-dimensional
    plane?

    Bart
     
    Bart van der Wolf, Jun 16, 2004
  16. Unlikely, because e.g. X-ray tomography was a common technology even before
    digital imaging existed. It was made a lot easier to perform with the advent
    of Computed Tomography, but that all deals with voxels (three-dimensional
    pixels).

    Given the angle of projection, the refractive index of silicon, and the
    close proximity of the spectral sampling bands, it is irrelevant for the
    subject at hand: pixels in a 2-dimensional plane.
    The same applies to projections on photosensitive film or paper.
    That's not correct. Locations in the projection direction are not spatially
    discrete in the image plane.

    SNIP
    They're called (photo)sensors, and have been since their invention.
    There's no need to redefine a nonexistent thing. Pictures are 2-dimensional;
    picture elements are therefore also 2-dimensional. How hard is that to
    accept?

    Bart
     
    Bart van der Wolf, Jun 16, 2004
  17. Laurence Matson wrote:
    "We" absolutely do not need to redefine pixels. But thanks for offering
    a choice.
     
    John McWilliams, Jun 16, 2004
  18. Paul Howland

    Big Bill Guest

    You seem to be confusing "data" with "image".
    The data that come from the X3's sensors are not an image; they are
    interpreted, or processed (dare I say 'interpolated'?) into an image.
    As such, since the data from, say, the bottom sensor in any given
    sensel location on an X3 sensor is not a part of an image (because the
    image has not, at this point, been processed), it can't be a pixel.

    You like analogies; that data is analogous to an egg in a cake mix
    before it's baked into a cake. The egg isn't part of the cake, because
    the ingredients haven't been processed into a cake yet.
    Once the mix has been baked into a cake, the egg ceases to exist as an
    egg, and is now part of the cake.
    In like manner, that data from the 'red' sensor isn't a pixel, it's
    part of the data that goes into making a pixel. Once the data from the
    three sensors is processed into a pixel, that 'red' data ceases to
    exist on its own.

    Bill Funk
    Change "g" to "a"
     
    Big Bill, Jun 16, 2004
  19. Paul Howland

    Big Bill Guest

    You are confusing the image with the acquisition of the data that is
    processed to form the image.
    The individual sensors are not an image; the data they collect is not
    an image. It is *processed* into an image.
    No, we just need to understand the processes involved, and to
    understand the current definitions of words.
    You say you understand that the definitions of 'pixel' refer to
    *images*, yet you still continue in your belief that the sensors comprise
    an image; they don't.

    Bill Funk
    Change "g" to "a"
     
    Big Bill, Jun 16, 2004
  20. You are probably correct - it cannot in practice be used.

    But let us stretch our minds a bit - just for the sake of
    argument. Let us assume that the three layers are at
    a more significant distance. Then, different things
    in the scene will be in focus at different layers. Whether
    they are in focus or not depends on their distance. So,
    the contrast can be used to measure distance. In fact, this
    is (more or less) how many autofocus mechanisms work.

    In this case the distance difference is too small to be
    useful - probably. But there is a potential.
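
    For what it is worth, a minimal sketch of such a contrast measure (a
    plain sum of squared differences between neighbouring pixels; real
    contrast-detect autofocus is more involved than this):

        import numpy as np

        def contrast_score(img):
            # sum of squared differences between neighbouring pixels
            dy = np.diff(img, axis=0)
            dx = np.diff(img, axis=1)
            return float((dy ** 2).sum() + (dx ** 2).sum())

        # whichever candidate image (lens position, or layer in this
        # thought experiment) scores highest is closest to being in focus:
        # best = max(candidate_images, key=contrast_score)

    Here candidate_images is just a placeholder name for whatever set of
    differently-focused images you would compare.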


    /Roland
     
    Roland Karlsson, Jun 16, 2004
