I have an integrated Radeon HD 4200 graphics chip, but under Haiku this card only does VESA modes… I want to buy a cheap NVIDIA card for the PCI Express slot for Haiku. Which NVIDIA card is best for Haiku? I want 1440x900 resolution, optionally with 3D support in Haiku. Thx. lukas.
Haiku supports Nvidia cards up to the Geforce 7950 with the nvidia driver. Newer cards than that will use the VESA driver. Look for an upper-end Geforce 7xxx or 6xxx graphics card.
All video cards use software OpenGL for 3D right now. So, 3D will be slow no matter which video card you get until Haiku gets hardware OpenGL (3D drivers).
Changes/additions to Haiku’s nvidia driver can be seen here (in HTML):
1440x900 is listed in there and supported by nvidia driver.
I, too, am looking for a graphics card, preferably a PCI card, that is guaranteed to work with Haiku. I have tried an Nvidia Geforce 8400gs and a Radeon HD 7450, but neither is supported. I have a PCIe adapter which I’d rather not use.
Any suggestions and where to buy in Canada or the U.S.?
I used to recommend NCIX.com but they’ve gone bankrupt.
Second or third time that has happened: find a good PC component supplier, and they vanish after a few years. This time it was due to too many expensive retail stores rather than expanding online; the previous time, the owner of PCCyber grabbed the cash, bought an expensive car, and vanished to China; the time before that, the joint owners of Clones Society had a falling-out. Might have to go with Staples or BestBuy now.
Helpful. Yes. It’s a start. Now I go through them and look for a seller.
I just found out about the bankruptcy a few hours ago. I’ve dealt with them several times. I’m sorry to see them go.
I find it somewhat comical that you say 3D will be slow in Haiku. I just ran GLTeapot on a cheap dual-core Lenovo laptop (bought at Best Buy, to get used to Windows 8), and Haiku x86 was blasting one core to almost 100% and that teapot was spinning at 400 fps! I kid you not. Oddly enough, this was in VESA mode; in native graphics mode it was limited to about 60 fps. Why, I cannot fathom. Shouldn’t it be the other way around?
That is the frame rate you get for sending the commands to the driver. If the driver’s frame rate is capped at 60 fps, it simply drops all the rendered frames in between, so your 400 fps are not representative. With my purely threaded rendering engine, for example, I can get several hundred fps on a project with little per-frame scripting while limiting actual rendering to 10 fps (useless, but possible to do).
What is the purpose of limiting the frame rate (or dropping frames or whatever)? Is it purely to give the CPU less work, thus it can do other things with the remaining cycles or whatever?
And why does 32-bit Haiku blast one core (of my dual core Intel Lenovo laptop) to near 100%, doing 400fps, while 64-bit Haiku (also in VESA mode) on my quad-core AMD A6 Acer laptop, only hits around 300fps and all four cores are bouncing around? Why is GLTeapot operating differently on the two versions of Haiku, when both are in VESA graphics mode?
If your monitor runs at, say, 60 Hz, then rendering at, for example, 120 Hz is useless: half the frames stress the GPU but are never visible on the monitor. Limiting the rendering to the refresh rate of the display stresses the GPU only as much as needed. This improves performance, reduces heat dissipation, and also reduces power consumption.
Now, why the core usage is different I don’t know. OpenGL itself is single-threaded. It knows the concept of sharing contexts across threads, but I would recommend that nobody in their right mind venture down that road: getting the locking right is the application’s responsibility, and it’s just not worth the trouble (side note: it’s the same problem as with the multi-threaded X11 Window System). So it’s the driver that is multi-process aware (this command-buffer concept).
As mentioned, I don’t know why the 64-bit version behaves differently. What I could imagine, though, is that multiple threads are set up in a worker-thread arrangement. In such a scheme, one thread processes a frame update. When you call SwapBuffers, OpenGL does an implicit Flush, which tells the thread to finish the command buffer and transmit it to the card to render. I imagine the next free worker thread is then activated to process the next frame’s content (up to the next SwapBuffers). As mentioned, this is just a guess out of the blue.
Different GCC, different Mesa, different renderer (gcc2: swrast, gcc5: llvmpipe).
I found a radeon_hd listed here:
It is supposed to support Radeon HD 7xxx, which I thought would include my Radeon HD 7450.
How do I install the radeon_hd driver?
The radeon_hd driver is included in the Haiku image. And it should support the HD 7450, see here. Maybe your device ID differs?
Thank you, humdinger.
You must be right. There is something unusual about my card.
‘listdev’ in Terminal will show you the ID.
llvmpipe… forgot about that one. Yeah, this one is possibly implementing a scheme like I outlined above.
listdev or lspci (Linux) gives me:
device Display controller (VGA compatible controller, VGA controller) [3|0|0]
vendor 1002: Advanced Micro Devices, Inc. [AMD/ATI]
device 677b: Caicos PRO [Radeon HD 7450]
ID 677b is filed under “Radeon HD 7400” in the radeon_hd driver, but that doesn’t matter really, just what’s reported.
So, you have that Radeon card, but the driver isn’t used?
You can check with “listimage | grep radeon” to see if the driver is loaded.
listimage | grep radeon
971 0x811ae000 0x811b5000 0 0 /boot/system/add-ons/kernel/drivers/dev/graphics/radeon_hd
995 0x00bd9000 0x00bf6000 0 0 /boot/system/add-ons/accelerants/radeon_hd.accelerant
TEAM 699 (/bin/grep --color=auto radeon):