64 bits

Discussion in 'Darkroom Developing and Printing' started by Witheld, Sep 3, 2003.

  1. Witheld

    Witheld Guest

    http://home.netscape.com/apple.adp
     
    Witheld, Sep 3, 2003
    #1

  2. And what limits are we bumping into that this will overcome? My Pentium III
    has no trouble addressing a whole gigabyte of memory...
     
    Michael A. Covington, Sep 4, 2003
    #2

  3. Witheld

    Witheld Guest

    8 gigs versus 4
     
    Witheld, Sep 4, 2003
    #3
  4. Jorge Omar

    Jorge Omar Guest

    If you have to manipulate more than 2 GBytes of data at once, or use
    gigantic (for home users) disks, then it will be an improvement.
    Otherwise, the advantages will only be for corporate databases.

    So, do not jump on this bandwagon at the start - it may simply not be
    cost effective.

    Jorge
     
    Jorge Omar, Sep 4, 2003
    #4
  5. Have we really bumped into the need to address more than 4 GB of RAM yet? I
    know we will eventually... maybe we have, now that people want to edit
    entire movies without swapping to disk.
     
    Michael A. Covington, Sep 4, 2003
    #5
  6. John

    John Guest

    In contrast, AMD's Opteron processors bring the benefit of an integrated
    memory controller built right onto each CPU. I like the Mac but sure wish
    they'd get a clue.

    Regards

    John S. Douglas, Photographer
    http://www.darkroompro.com
     
    John, Sep 4, 2003
    #6
  7. John

    John Guest

    And image scanning. I needed to scan in some 5x7's for the town the other
    day and found that my piddly 768 MB of DDR SDRAM only held 10 of them at
    600 DPI as TIFFs.
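
    For anyone checking that math, here's a quick sketch in C (it assumes
    48-bit color and uncompressed TIFFs, which the post doesn't say but
    which matches the numbers):

      #include <stdio.h>

      int main(void)
      {
          /* one 5x7 inch print scanned at 600 DPI */
          long w = 5 * 600, h = 7 * 600;          /* 3000 x 4200 pixels */
          /* assuming 48-bit color: 3 channels x 2 bytes = 6 bytes/pixel */
          long per_scan = w * h * 6;              /* uncompressed payload */
          printf("per scan: %.1f MB\n", per_scan / 1e6);
          printf("10 scans: %.0f MB\n", 10.0 * per_scan / 1e6);
          return 0;
      }

    That comes to about 75.6 MB per scan, so ten of them are roughly 756 MB -
    right at the edge of 768 MB once the OS takes its share.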


    Regards,

    John - Photographer & Webmaster - http://www.darkroompro.com
    A summation of American society after 9/11:
    Never have so many known so much and yet done so little.
     
    John, Sep 4, 2003
    #7
  8. Mike Marty

    Mike Marty Guest

    Desktops purchased today often have 1 GB of memory. Memory capacity
    roughly follows Moore's law -- doubling every 18 months.

    Thus in 3 years, our 32-bit machines of today will not be able to address
    the > 4 GB needed by new applications.
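
    Working that extrapolation out (a sketch; the 1 GB starting point and the
    18-month doubling are the assumptions above):

      #include <stdio.h>

      int main(void)
      {
          double gb = 1.0;                    /* today's typical desktop */
          for (int months = 0; months <= 72; months += 18) {
              printf("+%2d months: %3.0f GB\n", months, gb);
              gb *= 2.0;                      /* one doubling per 18 months */
          }
          return 0;
      }

    At +36 months the line reads 4 GB -- exactly the 32-bit ceiling.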

    64-bit is the future...even for Grandma.
     
    Mike Marty, Sep 4, 2003
    #8
  9. I think the advantage of longer word sizes is not so much the increased
    accuracy (in this case, dynamic range, color depth, freedom from
    contouring, or whatever you choose to call it) as the greater addressing
    capability.

    If people are scanning at 8000 pixels per inch with 48-bit color
    (3x16-bit), you are already gobbling up a lotta bytes. (I admit I would
    not do that to set up web site pages, but I might for really high-quality
    graphic arts work. And I am not imagining this. You can buy scanners like
    that right now.) Lessee here:

    8000x8000x6 bytes per square inch: 384 MegaBytes per square inch.
    8x10 inch image: 80 square inches.
    That is 30 GBytes already.
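
    And a 32-bit process couldn't even address that image, never mind
    anything else. A sketch (the scan figures are the ones above):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          /* 8x10 inches at 8000 DPI, 6 bytes per pixel (48-bit color) */
          uint64_t pixels = (8000ULL * 8) * (8000ULL * 10);
          uint64_t bytes  = pixels * 6;                  /* ~30.7 GB */
          uint64_t limit  = (uint64_t)UINT32_MAX + 1;    /* 4 GB space */
          printf("image: %.1f GB   32-bit limit: %.1f GB\n",
                 bytes / 1e9, limit / 1e9);
          printf("fits in a 32-bit address space? %s\n",
                 bytes <= limit ? "yes" : "no");
          return 0;
      }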
     
    Jean-David Beyer, Sep 4, 2003
    #9
  10. Mike Marty

    Mike Marty Guest

    Yes, more bits isn't always better. It's also harder to achieve fast
    clock rates with 64 bits due to the increased width of everything from
    register files to pipeline bypass circuits.

    If your workloads use small data sets and can't benefit from vector-like
    instructions, such as MMX, then 32-bit will be faster.

    W.r.t. VLIW, I think you mean that the compiler had a hard time finding
    enough instruction-level parallelism to fill all the slots in the
    instruction word. Itanium (Itanic) of course uses a variation of VLIW
    called EPIC. Although similar to VLIW, it differs in that the scheduling
    is not _entirely_ static.

    Fast instruction decoding does nothing if you are always stalled! I don't
    know the specifics of the G5, but I'm sure its issue window is larger,
    allowing higher ILP. And I'm sure its larger caches account for a big
    increase in performance.

    Itanium 2 currently has the lead in SpecFP due mainly to its massive
    on-chip L3 cache, not because of a great microarchitecture...
     
    Mike Marty, Sep 4, 2003
    #10
  11. Mike King

    Mike King Guest

    Sorry, but why did you need to have all those 5x7's in memory at the same
    time?
     
    Mike King, Sep 4, 2003
    #11
  12. The machines of today can address > 4 GB if you will settle for a 4 GB
    limit for each process. My machine currently has 109 processes in the
    process table, but it is averaging about 4 that are ready to run, i.e.,
    not waiting for disk I/O, user input, etc. This box will take only
    4 GBytes of RAM, and there is only 512 MBytes actually in it. Funny. My
    first hard drive was smaller than that (40 MBytes).

    The motherboard for the machine I am putting together will take
    16 GBytes, although I am going to start with 2 GBytes. Now, the Xeon
    chips have only 32-bit addresses, so no process will get over 4 GBytes,
    but the OS can give that much to each of 3 processes (reserving a bit for
    itself, I/O buffers, etc.) if it feels like it. So if you tend to run
    more than one process at a time, as I do, you could use all that memory.
    The basic Xeon and its E7501 chipset can deal with up to 64 GBytes, IIRC.
    Now I am not sure if that would make sense or not. Pretty soon you will
    want individual processes to use more than 4 GBytes each. But not for a
    while for me.
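
    The distinction in a sketch (the 64 GByte figure assumes 36-bit physical
    addressing, which is where that chipset number would come from):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          printf("this build's pointers: %zu bits\n", sizeof(void *) * 8);
          /* a 32-bit pointer caps ONE process at a 4 GB virtual space */
          uint64_t per_process = 1ULL << 32;    /*  4 GB virtual limit */
          uint64_t physical    = 1ULL << 36;    /* 64 GB, 36-bit physical */
          /* the OS can hand separate 4 GB spaces to different processes */
          printf("full 4 GB spaces the chipset could back at once: %llu\n",
                 (unsigned long long)(physical / per_process));
          return 0;
      }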
     
    Jean-David Beyer, Sep 4, 2003
    #12
  13. Yes, it is mainly because of its microarchitecture; the cache helps, but
    not that much.
     
    David Svensson, Sep 5, 2003
    #13
  14. John

    John Guest

    Speed. I had to get all 40 done ASAP for a project that they threw at me
    out of the blue.

    Regards,

    John - Photographer & Webmaster - http://www.darkroompro.com
     
    John, Sep 5, 2003
    #14
  15. Mike Marty

    Mike Marty Guest

    And what is your reasoning? I'm not saying you're wrong, I just want to
    know why.

    SpecFP workloads exhibit the following characteristics (see the sketch
    below):

    -- Lots of predictable loops (branch prediction lets the processor
    speculate correctly)
    -- Lots of regular data access (data prefetching can be effective)
    -- Dependences across loop iterations (register renaming in out-of-order
    processors, such as the Pentium 4, can alleviate this; Itanium 2 relies
    on the compiler)
    -- others I can't think of off the top of my head
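
    To make the first two points concrete, a sketch of a SpecFP-flavored
    kernel (hypothetical, not lifted from any benchmark):

      #include <stddef.h>

      /* The loop branch is perfectly predictable (taken n-1 times, then
         not taken), and the accesses march sequentially through memory,
         so a hardware prefetcher can stream both arrays in ahead of use. */
      void daxpy(double *y, const double *x, double a, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              y[i] += a * x[i];
      }

    Contrast that with pointer-chasing SpecINT code, where neither the branch
    pattern nor the next address is known in advance.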

    A speculative OOO processor, like the P4, already takes good advantage of
    my three points. It's got good branch prediction and can speculate
    correctly, has good prefetching and non-blocking cache access, and has
    register renaming (and loads of physical registers the programmer doesn't
    see in the ISA) to eliminate name dependences.

    My point is that EPIC doesn't do much, in the microarchitecture, to
    improve the throughput of instruction execution for SpecFP above and
    beyond what is already done. It wins because the on-chip cache is so huge
    that the data sets fit.

    EPIC allows the compiler to expose more ILP in irregular workloads, where
    today's superscalar OOO processors struggle because of the limited size
    of the instruction window. Until their SpecINT scores top everything
    else, I'm skeptical.
     
    Mike Marty, Sep 5, 2003
    #15