64-bit for Haiku R2

Ok, I was surprised to see that some users thought Haiku would come with 64-bit support for R1. I for one was aware that it would be 32-bit only, and I'm not sure where people got the impression otherwise.

For R1, 32-bit makes perfect sense, and I believe it is the best choice.

Now, after some thought, I think that R2 should be 64-bit only, and I'll quickly point out why.

I truly believe it is a bad idea to try to do both 32-bit and 64-bit versions at the same time, because you end up splitting resources. It is better to work on either 32-bit or 64-bit, but not both at the same point in time. If you work on just one version, it can become better than two versions done at once (one good release versus two so-so ones).

  1. By the time R2 starts, there should be lots of 64-bit computer users. By the time R2 is finished, 64-bit could have 70% or greater market share.

  2. 32-bit is more efficient and uses less memory on systems with under 4GB of RAM. Still, consider that Haiku uses only about 128MB of RAM, and going 64-bit will increase memory usage by roughly 30-40%. To make the arithmetic easy, call it 50%: 128MB × 50% = 64MB. So those with less than 4GB of RAM would lose about 64MB (next to nothing), while those with 4GB+ would gain access to the memory above the roughly 3.3GB barrier.

  3. There would still be many 32-bit programs out there, and they would run fine on a 64-bit OS (with the same RAM usage as under a 32-bit OS). It could also run 64-bit programs, if any are created.

  4. It is better to go 64-bit with R2 than to leave it until later, when more and more code has been added to Haiku, making it harder to do for R3. 64-bit may not be needed for R2, but it is better to do it sooner, when it is easier, rather than later, when more work will be required.

  5. I don't see any program or game making use of the extra RAM on Haiku - to be fair, I believe 2 to 3GB would be plenty of RAM to handle everything. But those with 4GB+ could use the additional RAM to multitask several complex programs. Or maybe an automatic cache system could be created, sized based on installed RAM: more RAM = bigger caches.

The only downside is that those with 32-bit systems would have to upgrade to a 64-bit system or stick with R1. Haiku R1 could also receive a few improvements and bug fixes to become 1.x - just enough to keep 32-bit users happy for a while longer, but nothing major enough to take focus away from working on 64-bit R2.

Just something to think about for now until R1 is released, after which we can see how many people use 64-bit computers and discuss it further.


I agree wholeheartedly. I know that R1 is only meant to be a replacement for BeOS R5. Also, in accordance with the license agreement, there is no reason why Haiku couldn't fork! (Though maybe that is not an ideal solution.)

I’m confused: what exactly would that accomplish?

I think there’s a critical misunderstanding here - the developers are not anti-64-bit, they’re simply preoccupied with other, more important tasks.

If there was anyone willing to develop a 64-bit Haiku port, I am 100% positive that their patches would be accepted to the official repo (assuming they didn’t also break the existing functionality).

Forking for the sake of forking is… pointless.

Let me rephrase my thoughts a little; I have a bad tendency not to make myself understood.

If the developers go 64-bit only with R2, where does that leave the 32-bit userbase of R1?

We could either develop jointly, or we could fork Haiku, spinning off another project just for 32-bit - almost like the Debian/Ubuntu teams do. (Not exactly the same, but the general idea is.)

Does it make any more sense to anybody?

I agree with these arguments. Besides, considering R2 is so far off in the future, I see no reason to lose sleep over it now.

One thing I wonder is how embedded solutions are affected by this. Or are those applications so different that they demand a port of the system anyway?
I still dream of a Haiku set-top box as a media station for (satellite) TV, DVD/Blu-ray, music… :slight_smile:


Deutsche Haiku News @ http://haiku-gazette.blogspot.com

I think it’s a bad idea to drop 32-bit support; your 70% figure is presumably nothing more than a guess. My guess would be that it will be a long time before 64-bit reaches that kind of penetration, though that too is based on no statistics (I tried 10 minutes of Googling with no joy).

64-bit is a lot of work, but most of the code should just be a case of recompiling.

The only arguments against keeping a 32-bit version are user confusion when downloading software and the fact that developers would have to compile a couple of versions. I don’t know if it’s possible to cross-compile in either direction, but if so, it wouldn’t be much of a hassle for most developers. Preventing user confusion will be a job for whatever BeBits-style download portals spring up.

Ok, just additional info.

Predicting the future is impossible; a prediction is just a best guess based on the most probable outcome. You have to make assumptions, which may change over time, to base a prediction on. The longer the time frame, the greater the chance of being off. The short-term future (1-12 months) is easier to predict because not much changes in a short period of time. For instance, if I say the Haiku alpha is coming out in a few months' time, then I'd be right. If I say R1 is out in 3 years' time, I may or may not be right - it could come out in 1-5 years, or even longer. This is why long-term predictions always have to be re-examined, revised accordingly, and taken with a grain of salt.

FACTS: A) Today, and for about the last 4 years, everyone has been buying Intel or AMD 64-bit systems - few, if any, 32-bit systems are sold these days. B) The number of 64-bit computers is increasing every month. C) People are installing more and more RAM (3GB is standard on new systems these days).

64-bit has been out for almost 5 years now on the AMD side. Give it another 5 years and many people should have upgraded by then. Whether it's 50, 60, 70, or 80%, who knows for sure, but it should be a fair number of people. I'm guessing most people upgrade their systems within a 10-year period - I change CPUs every 6 to 10 years myself.

When the time comes, the developers will have the choice of:
A) 32-bit B) 32 & 64-bit or C) 64-bit

They will have to decide for themselves. I would prefer A or C, because it avoids splitting resources; it is better to focus on one version than two. There is a good chance they'll do either B or C, because more users will push for 64-bit and a stronger case can be made for it down the road, while 32-bit slowly phases out of existence. If they have lots of developers to spare, they'd do B; if not, they'd look at the ratio of 32-bit to 64-bit users plus installed system RAM and choose A or C.

R2 is still some ways off, so this won't really matter until R1 is completed and R2 development begins. Around that time, or a little later, is when they'll have to make their decision.

PS: On Windows, if you install 4GB of RAM then, depending on the devices you've installed, you'll have 2.75 to 3.5GB of RAM available. Most will have around 3.2GB, but some will have more or less; I've seen people post about this. (It's a good idea to turn off things you don't use - COM, LPT, FireWire, floppy, etc. - in the BIOS, since I believe that helps maximize available RAM.)

[quote]When the time comes, the developers will have the choice of:
A) 32-bit B) 32 & 64-bit or C) 64-bit

They will have to decide themselves. I would prefer A OR C because it avoids splitting resources. [/quote]

I’m still not following here…

Why exactly does this split resources? I think developers are perfectly capable of writing code that is both 32-bit and 64-bit friendly.

In fact, most of the userland stuff probably won’t even care I would guess.

[quote]I’m still not following here…

Why exactly does this split resources? I think developers are perfectly capable of writing code that is both 32-bit and 64-bit friendly.[/quote]

Well, why don’t they do 32-bit & 64-bit for R1 then if it’s so easy? :slight_smile:

It was Stephan, when asked about GCC4 making 64-bit easier to do, who said "Unfortunately, no. 64 bit will be a lot of work."

I take what Stephan said to be right and accurate because he’s one of the Haiku developers. I wouldn’t want to say why it would be lots of work because I may explain it incorrectly. A programmer would have to explain this in further detail.

If doing 64-bit were easy, I'm sure they would have done it for R1. If it's a lot of work for R1, then by simple reasoning it'll be a lot of work for R2 or R3 - and more code will have been added by R2 and R3, adding to the work required.

That means it'll be less efficient to do both at once, because you'll use up time and resources split between two projects instead of one. If they have lots of developers, then they can do both; otherwise they'll just have to pick one and work on that. This is why they haven't said whether R2 will have a 64-bit version - they first need to figure out what is achievable and best to do.

[quote]Well, why don’t they do 32-bit & 64-bit for R1 then if it’s so easy? :slight_smile:

It was Stephan, when asked about GCC4 making 64-bit easier to do, who said "Unfortunately, no. 64 bit will be a lot of work."[/quote]

Because getting there is a LOT OF WORK!.. but that doesn’t mean that maintaining it afterward is…

Same with any port - PPC, m68k, etc - once the code is ported, maintaining it is just a matter of keeping the subsequent code changes portable.

FWIW, I would guess most of the work is kernel-side (including drivers and such) at this point, and as you may have noticed, the number of kernel devs is pretty slim while the amount of work left is probably largely kernel-based.

Why would you think that eliminating 32-bit after R1 and going 64-bit would be any less work than maintaining both simultaneously after R1?

IMO, all this discussion is just mental masturbation until Haiku is “stable”… why speculate about the difficulty of this until it's a reality?

Not all of the work is kernel-based, but by and large this is correct. The reason it's a lot of work is that, ignoring the VM and a few other low-level bits which do need more significant changes to deal with differences in memory layout, the lion's share of the 64-bit work actually involves auditing code to make sure it's 64-bit clean. The main problem is code that lazily assumes the size of a pointer or integer is always 4 bytes, which obviously fails in a 64-bit environment.

However, making code handle both correctly is relatively trivial and can be done as part of the cleanup. At that point it's easy to keep it buildable as both 32-bit and 64-bit, as can be observed in various other OSes that have been doing it for years. The problem is simply the amount of work needed to audit the whole codebase. So no, it's not a matter of splitting resources at all.

Yes, I did speculate about 64-bit, but it opened up a discussion which helped me (and I'm certain others) gain a better understanding. Readers hopefully learned or gained something from this thread.

You’re right. I was only thinking of getting there, not maintaining it afterwards, which would be easier to do.

Once both versions are completed then maintaining them should be simple afterwards. Just getting there is the hard part. Got it.

I can’t say whether those coders were lazy. Maybe they didn’t take 64-bit into account when coding? It could be they thought the OS would stay 32-bit; 64-bit only became a big thing recently.

You actually are splitting resources to get to 64-bit, because now you have coders auditing code when they could have spent that time adding features, bug fixes, updates, and drivers to the 32-bit OS to get R2 out sooner. If you spend an hour watching TV, you could have spent that hour elsewhere: hanging out with friends, going to a soccer game, and so on. Likewise, an hour a developer spends auditing code for 64-bit compliance is an hour not spent finishing a 32-bit R2 quicker. That is the point I was trying to make.

Thanks guys for the information. Later,

Ok, so now I think we’re on a similar page.

What you’re saying then is that 32-bit is the only reasonable option in order to prevent ‘wasted time’ on a 64-bit OS.

I, on the other hand, think that cleaning up the codebase to be 64-bit clean would help make the code more bulletproof and portable to other platforms.

In the end, it’s not about what you or I think is the best approach, but rather what the developers slinging the code feel is in their best interest. Thus, as I insinuated, this discussion is not much more than talk, and perhaps a bit of enlightenment for those who don’t necessarily know any better.

I was thinking: when it does eventually come to making a 64-bit Haiku, wouldn’t it just start with a modified and recompiled 64-bit kernel, and then run most everything else in 32-bit?

More or less,

The kernel needs a “thunk” layer to run most 32-bit system calls, since inevitably some parameters are incompatible between the 64-bit and 32-bit environments. A few calls need to be explicitly aware of the 32-bit userspace (e.g. those related to the VM), and some types of call, like ioctls, require support deeper in the system. With the kernel (including drivers) ported to 64-bit, and the thunk layer implemented, the 32-bit userspace software would then run again much as it did before.

How about UMPCs and such? Will they have 64-bit processors? Some versions of the Intel Atom have 64-bit support, but not all of them.

Developing only a 64-bit version and not a 32-bit one would close off a big market where lots of RAM and huge amounts of resources don't exist.

We are seeing a lot of old technologies shrinking and making their way into small devices, and if we still have our 32-bit version we won't need to port anything, except changing the Deskbar app to be easier to navigate in a small window (like screens of 4" to 7").

I believe that within 5 years an x86 processor will be fitted into a handheld, if nothing else to show that it can be done.

I want 8-bit haiku :smiley:

We have progressed three years since this thread started in 2008.
It is becoming harder to buy components that are 32-bit, and a 64-bit quad core is just a few tens of euros more expensive. So let's say that before R1 finishes and R2 is on its way, the vision expressed in this thread is becoming the path for Haiku to walk.

The thing that is now becoming a problem is that Haiku will see an alpha 4, 5, 6, etc., because it is hard to go into beta, let alone declare Haiku ready to use as a production environment.

GCC is evolving - it is at version 4.6 now - while Haiku still relies on gcc2 for binary compatibility. (That, by the way, was the original motivation to recreate the BeOS experience, which I like(d) very much.)
Components can do much more than they could in the days gcc2 ruled. Possibly using them causes Haiku to go into KDL, where a Haiku based on a newer GCC would handle them correctly.

It is hard to watch this, because personally I would like to see Haiku on both platforms. The fact is there is a very small developer base, too small to do both. Stretching the current line of alpha releases is tempting in order to get maximum BeOS resemblance, but in the long term it is deadly for Haiku.

But in all honesty, I am impressed by what Haiku can do now, and with the coming results of GSoC, the leap towards beta and R1 could become reality sooner than 2013.

In my opinion this should happen as soon as possible, because declaring R1 production-ready would attract new potential developers ready to pull it up to 64-bit (those who were sad to find out it didn't work on their 64-bit platform :wink: ).

Hope this brings this dated issue back into the reality of today.

Sigh… it really is pointless speculation however.

For R1: only x86 support on gcc2.x, with gcc4.x installed alongside.

For R2, the plan as I understand it is to drop gcc2.x support, thus losing binary compatibility but maintaining most source-level compatibility.

Maintenance will shift from maintaining multiple compilers on a single architecture to one compiler on multiple architectures. I don't see what the big deal is there, and quite frankly it's nobody's business but the developers'; after all, they have their fair share of common sense and experience besides.

There is also a chance that we won't lose any backward compatibility with R2, and will even add more compilers (think LLVM, PCC for C code, and EKOPath). This sort of stuff just happens whenever anybody interested wants to see it happen, and quite frankly it doesn't really affect usability at all…

You could even argue that gcc2.x is more stable than 4.x, as it is older and has had far longer for its bugs to be caught.

There are use cases for large amounts of RAM on the desktop. Web browsers almost always take full advantage of a couple of GB, caching pages you might want to see again. Media editing applications sometimes need copious amounts of RAM, especially high-quality audio editing. The need is there; it just isn't pressing.

I think this is good: 32-bit ISO images for R1, and maybe only 64-bit for R2, but!

Haiku can officially support only 64-bit for R2, but please do not remove 32-bit support from the source. It can remain an option for people who know how to compile Haiku to make 32-bit images themselves; maybe those 32-bit images just wouldn't be officially supported.