What does it mean for an OS to be 64-bit?


What does it mean, from a programmer's standpoint, to make an operating system 64-bit? I get that more memory is accessible, but what in the OS allows for that? Doesn't a simple recompile do it, using some kind of compiler switch?


x86_64 is the architecture. Binaries produced for x86_64 (64-bit) aren’t compatible with x86 (32-bit), just like binaries produced for ARM aren’t compatible with x86 or x86_64.

You can’t produce an x86_64 Linux kernel (or any other OS) binary and run it on a 32-bit system.
You can, however, produce a 32-bit binary and run it on a 64-bit system. (The amd64 architecture provides a 32-bit compatibility mode without a large performance impact, which is why everyone gravitated to it over ia64.) Intel offered an emulated 32-bit compatibility mode on ia64 via ia32, but it was slower.

Now, some other operating systems (Linux, FreeBSD, Windows, etc.) offer 32-bit binary support within their 64-bit operating system by adding compatibility layers that translate the 32-bit system calls and address-space expectations of those binaries onto the 64-bit kernel. Haiku has some early work on such a layer, but it’s incomplete.

tl;dr: it’s complex. http://www.differencebetween.net/technology/difference-between-ia-64-and-amd64/ 64-bit gives you access to more than 4GB of RAM (the native limitation of 32-bit architectures without PAE/NX).

EDIT: Clarified PAE/NX mention per @cocobean, updated my poor understanding of IA32 via @tialaramex 's comments below.


Umm… that is if the 32-bit CPU doesn’t support that NX flag.

It usually comes down to a few things circling around virtual memory usage, processes, and data types.

In normal practice:

  • A 64-bit OS can give a process >= 4GB of virtual address space.
  • A 32-bit OS gives a process < 4GB of virtual address space.

NOTE: a 32-bit OS can access 128GB+ of physical memory with extensions (Y2001 era)…

64-bit data types:

  • int64: is it really 64-bit, or pseudo-64-bit (>32-bit, <=64-bit)?
  • long double floats
  • unsigned long long integers
    Implementations have varied from the specs at different points.


Umm… that is if the 32-bit CPU doesn’t support that NX flag.

“64-bit gives you access to more than 4GB of ram (the native limitation of 32-bit architectures)”

I chose ‘native’ there carefully, leaving room for extensions like PAE/NX which make accessing > 4GB possible (though limited per process as you pointed out) :wink:


This is completely wrong. IA32 is Intel’s preferred nomenclature for the legacy “x86” 32-bit architecture. For application software it was not necessary to recompile anything “for ia32”, since that’s what your programs already were, and your existing 32-bit programs ran on Itanium CPUs via hardware emulation.

However, the Itanium architecture was so different that this emulation was markedly slower than the raw performance of the CPU. An expensive Itanium CPU would make your 32-bit code run far slower than a much cheaper i686 CPU. In the last few IA64 CPUs there is no hardware emulation and only a software emulation is provided, but there is still no need to recompile applications.

Because AMD’s x86-64 is much more similar to x86, the performance is similar: an Opteron (an early AMD 64-bit CPU) performs about as well as Intel’s 32-bit CPUs on 32-bit code, yet it has a clear path to 64-bit code. That’s why it was more successful in a market where many customers had the ambition to reach 64-bit but not the immediate capability to stop running 32-bit code.


Ah, ok. That sounds right. I was definitely off here.

I remembered there was some reason IA64 was so unpopular vs amd64; for some reason I thought it was instruction-set compatibility related… but now that you mention the IA32 emulation was so much slower, that sounds a lot more correct.


From an application developer’s point of view it doesn’t make that much difference. The thing that developers must look out for, especially when converting 32-bit code to 64-bit, is that the sizes of integers and pointers are different. This can, for example, cause alignment problems with structs, or bite in any code that makes assumptions about the sizes of integers and pointers. Limits such as LONG_MAX will also be different.


From a programmer’s point of view, the switch from x86 to x86_64 heavily depends on what programming language you use. For high-level languages (Java, Pascal/Delphi, Basic, Python, Ruby) there is virtually no difference; for middle-level languages (C/C++) the difference is mostly in type and register sizes; for low-level languages (assembly) it is a completely different experience.

Haiku (the OS and most applications) is written mostly in C++. Hence, for a Haiku programmer, the switch means taking care with integer overflows and the pointer address space. The code of drivers, however, may bring more difficulties in porting.


There are no extra difficulties for drivers. It’s still C++ and you face exactly the same problems.
If anything, 32-bit drivers will be more complex, because you end up dealing with both 64-bit addresses (the hardware is effectively 64-bit these days no matter what) and 32-bit addresses (the software really wants 32-bit pointers in virtual memory).

But in Haiku we have an abstraction layer over that: you just use phys_addr_t for physical addresses and can write code regardless of pointer size.


You aren’t using registers directly in C or C++; the issue is that the sizes of certain types are different: pointers and integers. This might be because machine register sizes are different, but C and C++ do not expose the user to that. The basic issue is that sizeof(long) and sizeof(void *) differ between 64- and 32-bit machines (and other types may have different sizes too). (The only place a C/C++ user is exposed to the concept of registers is the register keyword, but that is related to the allocation of variables, not their size.)

The difference in the sizes of types has a knock-on effect on the memory layout of structures, because compilers may choose to align struct fields to their natural boundaries. For example, on 32-bit, sizeof(struct { int x; int *y; }) is 8, with x at byte offset 0 and y at offset 4, and no padding between the fields. But on 64-bit the size will likely be 16, not 12 as you might expect, with 4 bytes of padding after x and y living at offset 8, even though sizeof(int) is still 4. This is because the compiler aligns the pointer field to a multiple of 8 bytes.

In most cases application developers probably will not care that a struct is laid out differently, but in certain scenarios it is vital: for example if you are reading or writing data from/to disk in a certain file format with structs as your memory representation, or if you are exchanging data over a network where a certain protocol layout is assumed.

On 64-bit, your long integers are less likely to overflow, because they are much bigger. I’m not sure what you mean about pointer address space. As an application developer you don’t generally choose your memory layout or have any control over the address space, except using e.g. mmap, but this has implementation-defined behaviour, and in most cases trying to map something at a specific address is probably a bad design choice.