64 bit is a lot!

When people talk about porting their applications to 64 bit, I sometimes hear them wonder how long it will be until they have to port everything to 128 bit – after all, the switches from 8 to 16 bit (e.g. CP/M to DOS), 16 to 32 bit (DOS/Windows 3 to Windows 95/NT) and 32 to 64 bit have all happened in the last 25 years.

But all these switches, even after Moore-compensation, don’t push the limit in a linear, but in an exponential way: 64 bit extends addressable memory by a factor of 4 billion. A database holding 2^64 bytes can store a 1 Megapixel JPEG of every square meter of the earth’s surface.
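
A quick back-of-the-envelope check of that claim, as a sketch in Python (the surface area figure is my rounded value, not from the post):

```python
# How many bytes of a 2^64-byte database fall on each square meter
# of the earth's surface (~510 million km^2, land and sea)?
earth_surface_m2 = 510e12
total_bytes = 2**64

bytes_per_m2 = total_bytes / earth_surface_m2
print(f"{bytes_per_m2 / 1024:.0f} KiB per square meter")  # -> 35 KiB
```

That works out to about 35 KiB per square meter, enough for a heavily compressed 1 megapixel JPEG of each one.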

AMD and Intel understood that no CPU in the next few decades would need that much RAM (!), and therefore decided, completely transparently to user space, to implement only 48 bit addressing for now, saving 2 extra levels of page tables and thus keeping TLB complexity lower. Under the same JPEG analogy, those 2^48 bytes cover an area about the size of New Jersey.
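
The limit does show up in one way at the instruction set level: addresses must be in “canonical” form, i.e. bits 63..47 must all be copies of bit 47, or the CPU faults. Here is a minimal sketch of that check (my own illustration, not vendor code; the function name is made up):

```python
def is_canonical_48bit(vaddr: int) -> bool:
    """With 48 bit virtual addresses, bits 63..47 of a 64-bit pointer
    must all be copies of bit 47 (the 'canonical form')."""
    top_bits = vaddr >> 47  # the 17 bits that must agree
    return top_bits == 0 or top_bits == (1 << 17) - 1

assert is_canonical_48bit(0x0000_7FFF_FFFF_FFFF)      # top of the lower half
assert is_canonical_48bit(0xFFFF_8000_0000_0000)      # bottom of the upper half
assert not is_canonical_48bit(0x0001_0000_0000_0000)  # non-canonical
```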

19 thoughts on “64 bit is a lot!”

  1. The problem with these kinds of analogies is that people wouldn’t be using measures such as “1 megapixel image of a square metre of the earth”. Imagine people working with plain text documents about 2 decades ago. I bet you they’d be aghast at how a modern MS Word document that contains 1000 words takes up over 30 kilobytes of space (it varies with the version of Word, and for PDF files it could be even more).

    For the people 2 decades ago, 1000 words would take up only a few kilobytes of space as plain ASCII text.

    So who’s to know that a “simple” 1 megapixel image won’t take up over 100 gigabytes of space in some future encoding of the information? Then the memory requirements could very well push the 128 bit barrier, simply because there is no way to predict how future information systems will encode their data.

  2. The switch “from 8 to 16 bit” was not the same thing: the 8080 still had a 16-bit address space.

  3. @JL: couldn’t agree more: who says we won’t be making high-res 3D movies of our holidays and returning with 12 hours @ 2TB/hour? Another way of putting it: my first computer had 4 KB of RAM (a VIC-20); my current one has 4 GB of RAM. That’s a million-fold increase in 25 years. 32-to-64 bit is a 4-billion-fold increase, but 32-to-48 is only a 65,536-fold increase – see the quick check below.
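
    A quick check of those factors, as a sketch:

    ```python
    # Growth factors implied by each jump.
    print((4 * 2**30) // (4 * 2**10))  # 4 KB -> 4 GB of RAM: 1,048,576 (a million-fold)
    print(2**(64 - 32))                # 32 -> 64 bit: 4,294,967,296 (~4 billion-fold)
    print(2**(48 - 32))                # 32 -> 48 bit: 65,536-fold
    ```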

  4. Switches to 32 bits, to 60 bits, to 36 bits, etc., occurred more than 40 years ago.

    For data sizes, applications already needed more than 32 bits over 40 years ago.

    For address sizes, I think that 48 bits will be enough for the next decade. This is a personal prediction, nowhere near the reliability of the above historical facts.

  5. @Azeem Jiva: Fixed, thanks. Stupid iPhone text correction. 😉

    @InvincibleChunk: You mean 8086, but you’re basically right. Although, a “far” pointer on an 8086 would be 32 (or at least 20) bits, if your code could cope with that.

  6. Hello,

    This has nothing to do with your post. I just discovered your website with StumbleUpon and added it to my Google Reader. I’ve been through several of your posts ~ nearly everything in fact ~ and I must say that what you write is really interesting!

    Thanks so much for your content.

    Have a nice day!

    PS: if you could post more often, that would be great 🙂

  7. InvisibleChunk meant 8080. The 8086 had a 20-bit address space. The 8080 had a 16-bit address space. In both cases, the address space was too wide for a single register.

  8. Don’t confuse address space with the width of registers or ALU paths: most 8-bit CPUs (Z80/8080, 6502, 6809, etc.) had a 16-bit address space, while 16-bit CPUs usually had a 20- or 24-bit address space (8086: 20 bits; 68000 and 80286: 24 bits).

  9. Except you need really large address spaces to use statistical memory protection so that you don’t have to pay overhead for crossing protection domains.

    I don’t believe that address space randomization is useful, but you could also make that argument, if you thought it was.

    It would also be handy if MMU designers would support walling off physical memory until the machine is reset again. Maybe it is time to bring back Harvard architecture?

  10. A useful note in the Intel case: IA-32’s Physical Address Extension (PAE) already allows 36-bit physical addresses, so going from 36 to 48 bits increases physically addressable memory by only a factor of 2^12 = 4096.

    “But all these switches, even after Moore-compensation, don’t push the limit in a linear, but in an exponential way…”

    All computer ‘stuff’ increases in an exponential way – address space, register size, instruction buffer size, and storage space. That is the whole point of Moore’s Law: the transistor count of a CPU doubles every 18-24 months. So the jump to 128 bit addresses for CPUs will occur around 2016-2020. That is assuming that we are able to continue past the point at which a single molecule contains a bit of data, which should be around 2013-2016. This barrier will probably be broken via 3D chips, or via multi-chip processor systems. The multi-chip CPU method is already in mass production. The 3D method needs improved (built into the chip) cooling solutions, possibly water-cooled. Using light-based CPUs (instead of the present-day voltage/current-based ones) may also speed this up.

    The popular 16 bit processors came out around 1980-1985, and widespread use of 64 bit processors arrived around 2007. So that is two jumps, 16 -> 32 and 32 -> 64, in a little over 20 years.

    The first 64 bit processor came out around 1991, but that is not the relevant date IMHO. The time people started using them in large numbers is more significant, and that is much closer to now, almost 2 decades later.

    When I was in college in the early 1980s, a professor asked a room full of Computer Science juniors and seniors where they thought the data register size would stop for microprocessors (in powers of two). I was the only one in my class to say 128 bits; the rest all said 32 or 64 bits. I now think my choice was too small. I based my response on the largest mainframe register size I was aware of at the time, which was the IBM 370 at 80 bits.

  12. @Dave: Exponential growth of memory corresponds to linear growth of address bits. But the jumps 16->32->64 are exponential in the address bit world. Adding 32 bits means doubling the address space 32 times. According to Moore’s law (extended), that’s 48 to 64 years.

    If 64 bits do run out (some time after the year 2050), adding another 64 bits will last until ~2150.
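
    In numbers, as a sketch (assuming one doubling every 18 to 24 months):

    ```python
    # 32 extra address bits = 32 more doublings of the address space.
    doublings = 64 - 32
    for months in (18, 24):  # Moore's-law-style doubling periods
        print(f"{months} months/doubling -> {doublings * months / 12:.0f} years")
    # 18 months/doubling -> 48 years
    # 24 months/doubling -> 64 years
    ```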

    About the “80 bit” System 370: these 80 bit registers are floating point registers. They are needed for numeric precision, not for addressing large amounts of memory, which is what we are talking about here.

  13. I agree with JL.

    Information and bandwidth requirements have changed a lot. 20 years ago we used to store 8 bit color images and think they were the most advanced thing.

    But that has changed dramatically. Now many people have 14 megapixel cameras.

    20 years ago you could fill a CD-ROM with (say) 100,000 still images. Now, to store 100,000 images at 14 megapixel resolution, you would need a Blu-ray disc.

    Never rely on this type of analogy, because you don’t know how users’ perspectives might change in the near future.

    I can imagine a single still 3D holographic image taking around 2 GB in the future. 15 years from now, we may need to store 3D movies in WHUXGA resolution, taking 4 TB each.

    The future is very uncertain.

