Please Send as Many Driver Requests as Possible to Nvidia

That is the funniest, most horrifying thing I’ve read today. Nvidia’s performance on Linux is so bad that AMD GPUs that are slower on paper often beat them, and 2D performance is much better as well. AMD’s drivers have been VASTLY better on Linux than Nvidia’s for at least 3-4 GPU generations. The reason FGLRX stopped getting updates is that everybody hated it… the open-source driver had already been better than it ever was for half a decade.

I will grant you that on Windows… the story is mostly reversed, but that’s basically irrelevant to this discussion, since Haiku will never get a proprietary Nvidia driver.

That is not my experience, nor that of most people who game. Why do most people who game or run simulations on Linux use Nvidia hardware with the Nvidia proprietary drivers? I am speaking from my experience of a few years ago, unless the situation is changing now. I know that on Linux with X-Plane, at least, most users still find the Nvidia proprietary driver better, but that might change soon when X-Plane moves to Vulkan.

Because they haven’t actually compared how much better the out-of-the-box experience is with AMD.

The situation started changing about 10 years ago… 5 years ago it had already reached the point where an AMD GPU that had been out for 6 months to a year would work better than an Nvidia GPU. At this point, AMD GPUs get full out-of-the-box support after about 3 months, and in general they perform better than Nvidia GPUs in the same price range.

The fact that you even mention the old proprietary AMD FGLRX driver means you are WAY out of date on your info. Today the proprietary AMD driver on Linux even uses the exact same kernel driver; only the userspace part differs between the open and closed drivers. At this point the Linux AMD driver is in many ways better than the Windows driver: for instance, once it was working, most people on Linux did not experience the issues that plagued the Radeon 5700 drivers on Windows for a few months.

Just because Nvidia hardware is fast on Windows doesn’t mean it is the right tool for the job anywhere else, and in 99% of cases it is the wrong tool.

Well, at the moment, at least with X-Plane 11 (and that is mostly what I use on Linux), the Nvidia drivers are better. That is about the only 3D-accelerated app I use on Linux besides the occasional Second Life session. I did retry an AMD video card a few years ago (I cannot remember exactly when). It was quite good hardware-wise, but the proprietary driver was terrible and the open-source one worse. I must have tried that around 2013-14 or thereabouts. I immediately went back to an Nvidia GTX 680. The X-Plane 11 developers still recommend Nvidia over AMD, but as I’ve said, that might change with Vulkan.
Maybe the AMD open-source driver would be easier to port and have more potential on Haiku than on Linux, with no Xorg.

The Nvidia recommendation is only valid on Windows… it’s dead wrong on Linux and Mac these days and has been for years.

Also, a GTX 680 is pretty old at this point and on a very dated Kepler architecture… a current high-midrange card like the Radeon RX 5700 XT is at least 200% faster and has 4x the VRAM, as well as many more advanced features.

“We absolutely do not support the fully open source drivers for AMD and NVIDIA.” They claim it will work on a piece-of-crap Intel GPU on the open drivers, but then backpedal and claim it won’t work on the open drivers for AMD… what a bunch of liars. I bet they tried to run it on Nouveau, the reverse-engineered driver for Nvidia, and it ran like absolute garbage.

You should also know that X-Plane runs terribly on all modern video cards, so you should not use it as a benchmark… it’s frankly embarrassing how badly their renderer runs on modern hardware, so it is no surprise that they are porting to Vulkan. Hopefully they learn to write a faster renderer instead of making terrible assumptions about what the hardware can and should be doing. It seems they are already going down the wrong road, though: reading their blog, they say an 8GB GPU works well but that they need to texture-swap on 4GB GPUs… except that is totally wrong. Just about every modern game out there can texture-swap fairly performantly on low-end GPUs, and you certainly don’t need to keep all textures in VRAM in a flight sim!.. except, of course, in their engine.

I realize that X-Plane just about needs a supercomputer to run well. When I said I changed my video card from AMD to a GTX 680, I think I now remember the AMD video card I swapped it from: it was an RX 470. I was running X-Plane 10. With the same settings, the FPS was hardly any better than on my previous card, an AMD HD 7970. With the GTX 680 it went much better. I was using these GPUs with an i7 4770K CPU, at about the end of 2014 and beginning of 2015, when X-Plane 10 was the latest version. Flight simulators tend to be very demanding on computer resources, not just X-Plane. There is a user-provided set of benchmarks, from X-Plane’s built-in benchmark feature, that compares hardware:
https://docs.google.com/spreadsheets/d/15rNa7_naUM8a4YzzrzwrxvWSl4X8SHesshZGm7H54AI/pubhtml#

RX 470 … is a much slower card than a GTX 680 and has a newer architecture. The 680 is more what the old-style rendering in X-Plane is designed to run on. So the newer RX 470 ends up gimped in multiple ways.

Just be aware that when something specific works well on a particular hardware combination, that doesn’t mean everything will. And the same goes for me… I have to remember that sometimes there is software that just runs badly on new hardware.

And frankly, it isn’t that the flight sim is demanding… it’s just coded badly and only seems demanding; it completely ignores many optimisations that are blatantly obvious. They do ridiculous things like trying to ensure exact rendering per frame, when 99.99%-correct rendering would be 10x faster, and avoiding lazy-loading textures so they stutter instead… again, ridiculous. Say your 680 could render it at 45 FPS perfectly: wouldn’t 100+ FPS with the occasional single-frame texture pop-in be better? Even on old consoles they do pretty advanced culling, and new consoles will be reloading textures faster than you can pan your camera.
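The lazy-loading idea above can be sketched in a few lines. This is a hypothetical illustration, not X-Plane’s actual code: the texture names and the load step are made up, and a real engine would stream from disk into VRAM. The key point is that the render thread never blocks; it gets a placeholder immediately and the full texture once a background load finishes.

```python
import threading

# Hypothetical lazy texture loading sketch. The frame never stalls:
# a low-res placeholder is returned at once, and the full-res texture
# is swapped in after a background load completes.
PLACEHOLDER = "64x64-placeholder"

class TextureCache:
    def __init__(self):
        self._textures = {}          # name -> placeholder or full texture
        self._lock = threading.Lock()

    def _load_full(self, name):
        # Stand-in for a slow disk read / VRAM upload, off the render thread.
        full = f"full-res:{name}"
        with self._lock:
            self._textures[name] = full

    def get(self, name):
        """Return the full texture if ready, otherwise a placeholder
        while a background load is kicked off. Never blocks the frame."""
        with self._lock:
            if name in self._textures:
                return self._textures[name]
            self._textures[name] = PLACEHOLDER   # mark as in-flight
        threading.Thread(target=self._load_full, args=(name,)).start()
        return PLACEHOLDER

cache = TextureCache()
first = cache.get("runway")   # placeholder on the first frame, no stall
```

The first frame shows a single frame of pop-in; a frame or two later the same `get()` call returns the full-resolution texture, which is the trade-off argued for above.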

If you check through your linked spreadsheet you can find several instances of Linux + AMD cards… and if you search for the same card on Windows, the proprietary Windows driver is running half as fast. This is because their proprietary OpenGL driver is known to be lackluster, while the open driver has continued to improve drastically; recently it even gained GL threading optimizations that boost performance 10-40% depending on the situation, and that’s on top of already outperforming the Windows OpenGL driver!
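For reference, Mesa’s GL threading can be toggled per application with the `mesa_glthread` environment variable (a real Mesa feature; distributions also ship per-app profiles in drirc). A minimal sketch of launching an app with it enabled; the binary path below is a placeholder, so the launch itself is left commented out:

```python
import os

# Enable Mesa's GL-thread optimization for one app via the mesa_glthread
# environment variable. The binary path is a placeholder (assumption),
# so the actual launch is commented out.
env = dict(os.environ, mesa_glthread="true")
# import subprocess
# subprocess.run(["/path/to/X-Plane-x86_64"], env=env)
print(env["mesa_glthread"])
```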

I don’t really expect Nvidia to make drivers for Haiku; they can’t even make ones for Linux that work… What I’d want is proper specs published, and not just a few tables, so people can write drivers for their own hardware.

The RX 470 is quite fast from a hardware point of view. In most areas it is equivalent to a GTX 680, and it is better in memory performance. Here is a hardware comparison:
https://www.game-debate.com/gpu/index.php?gid=3598&gid2=576&compare=radeon-rx-470-4gb-vs-geforce-gtx-680

The only type of gaming I’m interested in is simulators, specifically flight sims. Flight sims have to render a much greater geographic area than most other games. This makes greater demands on the CPU and on transferring textures from RAM to VRAM, which is done inefficiently in X-Plane. I agree with CB88 on those points about not needing 100% accuracy in the textures sent to VRAM, and about lazy loading and culling. At the moment, and for quite some time, I have been reliant on Nvidia and their drivers for these reasons. But why does my present GTX 1070 (8GB VRAM) and AMD Ryzen 3800X, like my earlier Nvidia GPU/CPU combinations, cope better than AMD GPUs in similar CPU combinations, despite the inefficiencies of X-Plane? Is X-Plane better optimised for Nvidia? At least in the near future they may make X-Plane run well on AMD GPUs with Vulkan. This debating of the merits of AMD vs. Nvidia is tiring, though.

Nvidia is not that super bad, though; they even have drivers for FreeBSD that are pretty awesome. OK, those do not yet include Vulkan. But still, they have drivers for FreeBSD.

But being the first to create 3D drivers for Haiku would maybe be a bit unusual. How could they know how it should be done?

Note that if you want AMD or Intel to provide Haiku drivers, you could ask them as well.

I know you’re not an Nvidia fan, but I for one will only get Nvidia cards, since CUDA is the de facto standard for machine learning and data science. OpenCL sucks. And I do run Haiku on metal (my custom-built desktop and my System76 Oryx Pro). Just my 2¢.

You can’t be a de facto standard at all… if you are proprietary. A de facto standard is an informal standard that is generally adopted… the fact is that GPU manufacturers other than Nvidia cannot adopt CUDA at all; thus it isn’t a standard at all.

I get your point, but the developers of libraries like Keras, PyTorch, and TensorFlow all use CUDA. Add-ons for non-GPU libraries, like sklearn-cuda for sklearn, use CUDA. And part of the reason is that Nvidia has dumped a lot of money into cuDNN, their library for deep learning. So yeah, if you want GPU-enabled machine learning or data science, you use Nvidia hardware.

And if you want to do computer vision, OpenCV uses… CUDA!

Nvidia has invested a lot of money into free libraries for a variety of domains to get people to use their hardware, and it has worked. So yes, Nvidia & CUDA is a standard.
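As an illustration of how deeply CUDA is baked into these libraries: typical PyTorch code just asks for `"cuda"` and falls back to the CPU when no Nvidia GPU is present. A minimal sketch, guarded so it runs even on a machine without PyTorch installed:

```python
# Sketch of the usual device-selection idiom in CUDA-backed ML libraries.
# The import is guarded so the snippet degrades gracefully when
# PyTorch (or a CUDA device) is absent.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(f"training would run on: {device}")
```

The point being argued above is exactly this: the fast path in mainstream ML tooling is spelled "cuda", which in practice means Nvidia hardware.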

Tell me, how many AMD GPUs does Amazon Web Services host on EC2?

Look… all I am hearing is proprietary-ecosystem speak… there is no way forward with that. People buying into CUDA sold themselves out; that’s all there is to it.

Sure, there’s no way out for them. But the way forward is sticking with Nvidia and enjoying the hardware, software, and support they provide. We’re waayy off topic, but getting Nvidia and others to support and invest in Haiku is a good thing and benefits all of us.

You are still stuck in the broken mindset… while AMD is working on providing alternative implementations of all the tools you have mentioned, you continue to push the goal you want further out by supporting the company that caused the whole problem.

AMD isn’t doing anything except making cheap gaming GPUs and playing catch-up. They don’t even have a data-center GPU offering to rival Nvidia’s Tesla line. Their problem is that they are trying to take on Nvidia (GPUs) and Intel (CPUs) at the same time while maintaining thin profit margins. Intel also makes a neural network library. AMD just isn’t a thought leader in any market segment, and I don’t understand your fascination with them.

Anyway, it seems I’ve fed you too much. I’m done; we’re off topic and not going to agree.