Which free software could acquire 48 bits color depth pictures from a scanner ?

Discussion in 'Scanners' started by Guilbert STABILO, Nov 16, 2008.

  1. Guilbert STABILO

    Steve Guest

    Well, it wasn't quite a year later but it was later. I still have a
    magazine where they announced a 68881. It was Feb 1982. The IBM PC
    was August 1981.

    Steve, Nov 18, 2008

  2. Guilbert STABILO

    Ray Fischer Guest

    I still have an Apple //e that does IEEE floating point using its
    8-bit processor. What can be done in silicon can be done the same in
    software, just slower. In fact all the major chip makers use
    emulators that simulate the working of the chip in software in order
    to make sure that it works correctly.
    Ray Fischer, Nov 18, 2008

  3. Guilbert STABILO

    Ray Fischer Guest

    What kind of engineer?
    Been there - done that. In addition I've got about six years' worth
    of university education and decades of experience as an actual
    software engineer.
    You don't need to know ANYTHING about how computers do math in order
    to write code.
    Ray Fischer, Nov 18, 2008
  4. Guilbert STABILO

    Ray Fischer Guest

    IEEE floating point defines a standard implementation.
    If it's IEEE floating point then the rounding is done a standard way.
    Most hardware these days sticks pretty close to the standard.
    Ray Fischer, Nov 18, 2008
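The point about IEEE floating point defining a standard rounding can be made concrete with a short sketch (Python used here purely as illustration, not from the thread): under the default round-to-nearest-even mode, `0.1 + 0.2` must produce one specific binary64 bit pattern on every conforming machine, and that pattern is not the one `0.3` rounds to.

```python
import struct

# IEEE 754 binary64: the bit pattern of an arithmetic result is fully
# specified by the standard under the default round-to-nearest-even mode,
# so 0.1 + 0.2 yields the same bits on every conforming implementation.
bits = struct.unpack("<Q", struct.pack("<d", 0.1 + 0.2))[0]
print(hex(bits))          # 0x3fd3333333333334, mandated by the standard
print(0.1 + 0.2 == 0.3)   # False: both sides round, and the roundings differ
```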
  5. Guilbert STABILO

    Ray Fischer Guest

    Because you don't know what you're writing about. The fact that it may
    be an 8-bit processor makes no difference at all. ALL floating point
    math is subject to accumulated errors.
    Ray Fischer, Nov 18, 2008
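The accumulated-error point holds regardless of word size, as a minimal sketch shows (illustrative, not from the thread): each addition rounds to the nearest representable double, and the tiny per-operation errors compound.

```python
# Accumulated rounding error: 0.1 has no exact binary representation,
# so each addition rounds, and the residues add up.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)        # False
print(abs(total - 1.0))    # small but nonzero residue
```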
  6. Guilbert STABILO

    Ray Fischer Guest

    Now try and catch up to the 21st century.

    A "math processor" is only some silicon to do the same calculations
    that can be done in software except faster. Your insistence on
    treating it like some special component is ... outdated.
    Ancient history.
    Ray Fischer, Nov 18, 2008
  7. Guilbert STABILO

    Ray Fischer Guest

    Given your limited understanding you're hardly in any position to judge.
    Don't start lying about what I write. I did NOT imply anything of the
    sort. It is all YOUR limited understanding.
    Tell us which is faster: 8-bit, 16-bit, 32-bit, or 64-bit integer math?
    LOL! You're stuck on the data representation and completely ignoring
    the algorithm.
    And how would you know?

    By the way: "Scaled integer"? No such thing. Unless you meant fixed point.
    Stupid asshole.
    Ray Fischer, Nov 18, 2008
  8. Guilbert STABILO

    Eric Stevens Guest

    I don't know about the Apple //e but are you really trying to say that
    all the software floating point emulators had the same accuracy as an
    equivalent hardware APU?

    Eric Stevens
    Eric Stevens, Nov 18, 2008
  9. Guilbert STABILO

    Eric Stevens Guest

    Mechanical. I've been using Finite Element calculations since the year
    dot and accumulated errors have always been a primary concern.
    You do when the programs are large and complex.

    Eric Stevens
    Eric Stevens, Nov 18, 2008
  10. Guilbert STABILO

    Eric Stevens Guest

    I've been involved in this since before the IEEE standard and in any
    case sticking 'pretty close' is not good enough.

    Eric Stevens
    Eric Stevens, Nov 18, 2008
  11. Guilbert STABILO

    Eric Stevens Guest

    ... and it's always the same amount of error?

    Eric Stevens
    Eric Stevens, Nov 18, 2008
  12. Guilbert STABILO

    Eric Stevens Guest

    It's still a maths coprocessor, even when you build it into the same
    silicon as the CPU.

    Eric Stevens
    Eric Stevens, Nov 18, 2008
  13. Guilbert STABILO

    J. Clarke Guest

    If I understand what he was saying correctly then he's talking about
    using 8-bit floating point instructions to construct an 80-bit
    floating point routine. Seems to me like doing a tonsillectomy
    through the rectum but if all you've got is a hammer . . .

    I do wonder what machine he has encountered though that actually _has_
    8-bit floating point instructions.
    J. Clarke, Nov 18, 2008
  14. Guilbert STABILO

    J. Clarke Guest

    Uh, this is a photography newsgroup. 32 bits that represent 8 bits of
    red, 8 bits of blue, 8 bits of green, and 8 bits of black is a bit
    different than 32 bits that represent 32 bits of luminance. You
    really do need to know what each of those 32 bits represents before
    you start doing calculations on them, at least if the purpose of the
    calculations is to support the image editing you do. If you're talking
    about an encryption algorithm or lossless compression then the data
    representation doesn't much matter except to the extent that you might
    be able to exploit the structure.
    "Scaled integer" seems to be the new geekspeak for scientific
    Actually neither one of you is coming across as the shiniest apple in
    the bushel.
    J. Clarke, Nov 18, 2008
  15. Guilbert STABILO

    Steve Guest

    And yet different machines give different results even with the same
    code compiled on the same version of a compiler. Hell, I've even seen
    different machines give different results even with the same
    executable code. It's rarer than the first case, which happens a lot.

    Steve, Nov 18, 2008
  16. Guilbert STABILO

    Steve Guest

    I'm in a perfect position to understand just from the little you've
    written here in this thread. You're a hack.
    I don't have to lie about anything. It's all in the record.
    The fact that you would even ask that question without specifying what
    is doing the math proves my point that you just don't know what you're
    talking about. Any one of them could be faster, slower or the same on
    a particular machine. And I'll bet you don't even understand why
    that's the case.
    Nope. I'm not ignoring anything. But you are. You have no clue that
    what an algorithm does and is capable of doing in the real world
    depends on the data representation. I'll bet you've never had to code
    the same complex algorithms on both a floating point and a fixed point
    only machine and compare the result to make sure it's doing the same
    thing within a specified error tolerance.
    I'm not going to brag about my qualifications like you did. Suffice
    it to say that I know enough to know you don't know very much.
    Nope. Just because you don't know what it is doesn't mean it doesn't
    exist. I feel sorry for you so I'll tell you. It's doing math on a
    machine that does not do floating point arithmetic but where you have
    to represent data larger than the max integer it can handle. So what
    you do is have the base integer and a scale factor which tells you
    where the binary point is to represent the number. When you do
    operations, it's up to the programmer to correctly scale the result.
    It's like a manual floating point implementation. It's fixed point
    arithmetic but where the location of the binary point is allowed to
    float and must be properly kept track of.
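The scheme described above can be sketched in a few lines (Python here, purely illustrative; the pair layout and helper names are this sketch's invention): each value is a `(mantissa, scale)` pair meaning `mantissa * 2**scale`, and the programmer carries the scale through every operation.

```python
# Minimal sketch of "scaled integer" arithmetic: a value is a pair
# (mantissa, scale) representing mantissa * 2**scale, and the programmer
# tracks the binary point explicitly through each operation.

def sc_mul(a, b):
    """Multiply two scaled integers; the scale factors simply add."""
    (ma, sa), (mb, sb) = a, b
    return (ma * mb, sa + sb)

def sc_to_float(v):
    """Decode a scaled integer for display only."""
    m, s = v
    return m * 2.0 ** s

x = (3, -1)   # 1.5 stored as 3 * 2**-1
y = (9, -2)   # 2.25 stored as 9 * 2**-2
z = sc_mul(x, y)
print(z, sc_to_float(z))   # (27, -3) -> 3.375, i.e. 1.5 * 2.25
```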

    When I was writing complex equations in pipelined APU microcode where
    the APU was a fixed point only machine, you had to worry about stuff
    like that. And before you say that's ancient history, this particular
    machine is still in use and still being supported.

    I'll bet you've never had to write microcode for a pipelined
    arithmetic processor. Lots of things to worry about. Like arranging
    the order of instructions and folding up your code so that each stage
    of a multi-stage pipeline is kept busy doing something during each
    clock cycle for max efficiency. Like knowing when the results of a
    calculation are available for use. Like realizing that when you do a
    compare and branch on the result, the branch won't take effect until a
    few clock cycles later. So the couple of instructions after the
    compare are executed no matter whether the compare was true or false.
    Like having separate initialization code to "prime the pipeline".
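The delayed-branch behaviour described above can be modelled with a toy sketch (Python; the instruction names and the two-slot delay are hypothetical, not a real ISA): the instructions immediately after a branch execute whether or not the branch is taken.

```python
# Toy model of a delayed branch: the DELAY_SLOTS instructions after a
# branch still execute before the jump takes effect, as on a pipelined
# machine where the branch resolves a few cycles late.
DELAY_SLOTS = 2

def run(program):
    trace, pc = [], 0
    pending = None   # (countdown, target) for an in-flight branch
    while pc < len(program):
        op = program[pc]
        trace.append(op[0])
        pc += 1
        if op[0] == "branch":
            pending = (DELAY_SLOTS, op[1])
        elif pending is not None:
            count, target = pending
            count -= 1
            if count == 0:
                pc, pending = target, None   # branch finally takes effect
            else:
                pending = (count, target)
    return trace

prog = [("branch", 4), ("slot1",), ("slot2",), ("skipped",), ("done",)]
print(run(prog))   # slot1 and slot2 execute anyway; "skipped" does not
```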
    Ah, we finally see your true colors. You should crawl back under your
    rock and only talk about things which you know about if you don't want
    to be made to look like a fool.

    Steve, Nov 18, 2008
  17. Guilbert STABILO

    Steve Guest

    Not exactly. Before you can code an algorithm, you have to know what
    the format of the data is. Just saying it's "32 bits" isn't enough,
    because you have to write different code depending on whether that 32
    bit data is signed integer, unsigned integer, floating point, etc. The
    only people who can say it's just 32 bit data and not worry about what
    the data represents are people who are only concerned with the size
    and not with doing any math operations on it. Our software hack
    friend Mr Fischer apparently doesn't realize that because to him "If
    there's a 32-bit channel then the math is 32 bits." and "32 bits is
    still 32 bits whether it's floating point or fixed point or integer."

    Apparently he doesn't realize that telling a software engineer "it's
    32 bits so the math is 32 bits" is meaningless if you actually have to
    do something with that data other than store or ship it.
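The representation point can be made concrete with a short sketch (Python's `struct` used for illustration; the constant is arbitrary, chosen because it happens to be the binary32 encoding of -π): the same 32-bit pattern decodes to three different numbers depending on the declared type.

```python
import struct

# One 32-bit pattern, three meanings: unsigned integer, signed integer,
# and IEEE 754 single-precision float. "It's 32 bits" says nothing about
# which math is valid on it.
raw = struct.pack("<I", 0xC0490FDB)
as_uint  = struct.unpack("<I", raw)[0]
as_int   = struct.unpack("<i", raw)[0]
as_float = struct.unpack("<f", raw)[0]
print(as_uint, as_int, as_float)   # three very different values
```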
    Sort of but not really because scientific notation allows for a
    floating point significand, or mantissa. If you constrained the
    mantissa to integers of a certain size and the exponent is a power of
    2, not 10, then that would be scaled integer.

    It's fixed point math except the binary point is allowed to float and
    the programmer has to keep track of where it is after every operation
    by keeping a scale factor with the data and properly setting the scale
    factor of the result based on the input scale factors. You have to
    have code to look for overflows and underflows and adjust the scale
    factors accordingly so that you're using all the available bits most
    efficiently.

    Or, if you know a priori what the range of values you'll be getting is,
    you can just assign a fixed scale to each stage of the calculations
    and not worry about checking for over/under flow and adjusting the
    scale factor. But you still have to keep track of where the binary
    point is so that at the end of the chain of calculations, you know
    what your resulting data represents.
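The fixed-scale variant described above is what is often written as a Q-format; a minimal sketch (Python, with Q16.16 as an illustrative choice of scale) shows why no runtime scale tracking is needed: every value carries 16 fraction bits by convention, and only multiplication needs an explicit rescale.

```python
# Fixed a-priori scaling (Q16.16 chosen for illustration): every value
# has 16 fractional bits by convention, so the binary point's position
# is known at each stage without storing a scale factor at runtime.
FRAC = 16

def to_q(x):
    return int(round(x * (1 << FRAC)))

def from_q(q):
    return q / (1 << FRAC)

def q_mul(a, b):
    # The raw product has 32 fraction bits; shift to restore 16.
    return (a * b) >> FRAC

a, b = to_q(1.5), to_q(2.25)
print(from_q(q_mul(a, b)))   # 3.375
```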

    Steve, Nov 18, 2008
  18. Guilbert STABILO

    Jürgen Exner Guest

    That _VERY MUCH_ depends on what code you write and the application.
    Actually no, it's not a matter of size or complexity.
    But you do need a good understanding whenever numerics are involved,
    beginning with simple matters like finance and bookkeeping.
    After all, it's not the missing million that is driving the accountant up
    the walls (that one is easy to find and correct) but the missing penny
    caused by accumulated rounding errors because the programmer had no clue
    about computer numerics.

    Jürgen Exner, Nov 18, 2008
  19. Guilbert STABILO

    Chris Malcolm Guest
    There's no particular reason why they shouldn't be as much more
    accurate as you like, just as there's no reason why someone with
    pencil and paper should stop calculating a square root at five, ten,
    fifty, or five hundred decimal places, and no reason except a careless
    error why there should be any inaccuracy in the calculation.
    Chris Malcolm, Nov 18, 2008
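The point that software arithmetic can be as accurate as you care to make it is easy to demonstrate (Python's decimal module used here as one illustration): precision is a parameter, not a property of the silicon.

```python
from decimal import Decimal, getcontext

# Software arithmetic carries as many digits as you ask for; here the
# square root of 2 is computed to 50 significant digits.
getcontext().prec = 50
root = Decimal(2).sqrt()
print(root)
```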
  20. Guilbert STABILO

    Chris Malcolm Guest

    That kind of approach would be open to error in the implementation if
    you weren't rather careful and knowledgeable. Better to go back to
    first principles and re-implement 80-bit FP from basic integer
    arithmetic where you can more easily be certain about the limits of
    precision and the behaviour of errors and approximations.
    Chris Malcolm, Nov 18, 2008
