Please Make as Many Requests as Possible to Nvidia for a Driver

The only type of gaming I’m interested in is simulators, specifically flight sims. Flight sims have to render a much greater geographic area than most other games, which puts heavier demands on the CPU and on transferring textures from RAM to VRAM. X-Plane does this inefficiently. I agree with cb88’s points about not needing 100% accuracy in the textures sent to VRAM and about culling lazy textures. For these reasons I have been reliant on Nvidia and their drivers for quite some time, but why do my present GTX 1070 (8 GB VRAM) and AMD Ryzen 3800X, and my earlier Nvidia GPU and CPU combinations, cope better than AMD GPUs in similar CPU combinations, despite the inefficiencies of X-Plane? Is X-Plane better optimised for Nvidia? At least in the near future they may make X-Plane run well on AMD GPUs with Vulkan. This debating of the merits of AMD vs. Nvidia is tiring, though.

NVIDIA is not that bad, though; they even have drivers for FreeBSD that are pretty awesome. OK, they do not yet include Vulkan, but still, they have drivers for FreeBSD.

But being the first to create 3D drivers for Haiku would maybe be a bit unusual. How could they know how it should be done?

Note that if you want AMD or Intel to provide Haiku drivers, you could ask them as well.

I know you’re not an Nvidia fan, but I for one will only get Nvidia cards, since CUDA is the de facto standard for machine learning and data science. OpenCL sucks. And I do run Haiku on metal (my custom-built desktop and my System76 Oryx Pro). Just my 2¢.

You can’t be a de facto standard at all… if you are proprietary. A de facto standard is an informal standard that is generally adopted… the fact is, GPU manufacturers other than Nvidia cannot adopt CUDA at all… thus it isn’t a standard at all.

I get your point, but the developers of libraries like Keras, PyTorch, and TensorFlow all use CUDA. Add-ons for non-GPU libraries, like sklearn-cuda for sklearn, use CUDA. And part of the reason is that Nvidia has dumped a lot of money into cuDNN, their library for deep learning. So yeah, if you want GPU-enabled machine learning and data science, you use Nvidia hardware.
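
Just to make that concrete, this is roughly what “using CUDA” looks like from the Python side these days (a minimal sketch, assuming a CUDA-enabled PyTorch build; the layer and batch sizes are made up for illustration):

```python
# Minimal sketch: PyTorch dispatches to CUDA (cuDNN/cuBLAS) under the hood.
# Assumes a CUDA-enabled PyTorch build; falls back to the CPU otherwise.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).to(device)                                # weights moved to the GPU

x = torch.randn(64, 1024, device=device)    # a fake input batch
y = model(x)                                # runs on the GPU if one is present
print(y.shape, y.device)
```

Nothing in that snippet mentions OpenCL, which is exactly my point.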

And if you want to do computer vision, OpenCV uses… CUDA!

Nvidia has invested a lot of money into free libraries for a variety of domains to get people to use their hardware, and it has worked. So yes, Nvidia & CUDA is a standard.

Tell me, how many AMD GPUs does Amazon Web Services host on EC2?

Look… All I am hearing is proprietary-ecosystem speak… there is no way forward with that. People buying into CUDA sold themselves out; that’s all there is to it.

Sure, there’s no way out for them. But the way forward is sticking with Nvidia and enjoying the hardware, software, and support they provide. We’re waayy off topic, but getting Nvidia and others to support and invest in Haiku is a good thing and benefits all of us.

You are still stuck in the broken mindset… while AMD is working on providing alternative implementations of all the tools you have mentioned, you continue to push the goal you want further out by supporting the company that caused the whole problem.

AMD isn’t doing anything except making cheap gaming GPUs and playing catch-up. They don’t even have a data center GPU offering to rival Nvidia’s Tesla line. Their problem is they are trying to take on Nvidia (GPUs) and Intel (CPUs) while maintaining thin profit margins. Intel also makes a neural network library. AMD just isn’t a thought leader in any market segment, and I don’t understand your fascination with them.

Anyways, it seems I’ve fed you too much. I’m done; we’re off topic and not going to agree.

This is a philosophical question: should I support a closed technology for momentary gain, or should I support the alternative, which at least lets open tech propagate and catch up?
Do I care about the future and about others, or not? What is morally best for everyone?

Hmm. AMD’s CDNA technology? AMD’s Radeon Instinct MI-series, Radeon SSG and Radeon Pro WX 9100? Vulkan? OpenCL?!? Even the AMD Radeon RX 580?

Play fair… :grin:

AMD is sticking to GPUs for gamers and creatives. That’s nice and all, but only part of the picture. Google, Facebook, Amazon, etc. are using Nvidia data center GPUs because those are the only high-end GPUs on the market. In an earlier post I mentioned AWS EC2: all of the GPU instance types have been Nvidia Tesla cards. Nvidia also has CUDA, and supports OpenCL on their devices. I’ve been working with GPU-accelerated machine learning for over a decade. When we started, CUDA was in its infancy and everyone had to write code in C/C++. Fast-forward to today, and most coding is done in Python, which has wrappers already written for the raw CUDA code, mostly written by Nvidia themselves (ML, BLAS, etc.).
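
As an illustration of those wrappers (a minimal sketch, assuming CuPy is installed on a machine with a CUDA GPU; the array sizes are arbitrary):

```python
# Minimal sketch: CuPy gives NumPy-style Python code that calls straight into
# Nvidia's CUDA libraries (cuBLAS, cuRAND, ...) with no C/C++ in sight.
# Assumes CuPy is installed and a CUDA-capable GPU is present.
import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)   # allocated in GPU memory
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b                     # matrix multiply dispatched to cuBLAS
print(float(c.sum()))         # reduce on the GPU, copy the scalar back to the host
```

That is the kind of code people write today instead of the hand-rolled C/C++ kernels we started with.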

In the beginning, some colleagues were keen to use OpenCL to avoid vendor lock-in. But today, they come to me asking how to migrate their OpenCL code to CUDA, because that’s where the innovation and ingenuity are. Check out Nvidia’s conference, GTC. Lots of big names have been there to announce products, services, and research, all made possible because of Nvidia. It’s an entire ecosystem. To write the whole thing off because it’s one company is to close the door on a lot of great ideas, brilliant people, and an existing code base that would cost millions to rewrite.

CUDA supports Fortran, which many consider dead; OpenCL doesn’t. But consider weather models. The various agencies and companies that produce forecasts and hurricane trajectories all use Fortran, and that’s why Nvidia supports Fortran for CUDA. It would cost millions to rewrite, retest, and validate all that weather modeling code, and NO ONE is going to do it. This was made possible by Nvidia’s success, which in turn fed further success.

In principle, it’s ideal to support open technologies, open platforms, open standards, open models, etc. In practice, however, it doesn’t always work out that way. In reality, in this case, for high-performance computing, Intel, Nvidia, and Mellanox lead the way, because that’s where the money is.

From the Top 500 Supercomputers list:

Summit and Sierra remain in the top two spots. Both are IBM-built supercomputers employing Power9 CPUs and NVIDIA Tesla V100 GPUs.

AMD could only dream of such things. A standard/technology holds value when people are all-in to support it, and that ecosystem grows.

Tell that to the two fastest upcoming supercomputers, either one of which will have more PF/s than the entire rest of the supercomputers combined. AMD is currently doing for their GPUs what they did for their CPUs 2-3 years ago; don’t get stuck on the trailing edge.

Frontier and El Capitan are both to be AMD machines.

Also, they aren’t rewriting the code; they are recompiling it with AMD’s CUDA translation and porting tools… granted, most software should just target OpenCL if the developers had any sense. See here: https://github.com/ROCm-Developer-Tools/HIP
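
The same idea shows up one level higher in the stack, too: as far as I know, the ROCm builds of PyTorch keep the familiar torch.cuda API, so existing “CUDA” Python scripts mostly run unchanged on AMD hardware (a minimal sketch, assuming a recent ROCm build of PyTorch is installed):

```python
# Minimal sketch: on a ROCm (HIP) build of PyTorch, the torch.cuda namespace
# is kept for compatibility, so CUDA-targeting Python code runs on AMD GPUs.
# Assumes a recent ROCm build of PyTorch; falls back to the CPU otherwise.
import torch

if torch.version.hip is not None:           # set only on ROCm/HIP builds
    print("Running on a ROCm build:", torch.version.hip)

device = "cuda" if torch.cuda.is_available() else "cpu"   # "cuda" maps to the AMD GPU on ROCm
x = torch.randn(1024, 1024, device=device)
print((x @ x).norm())
```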

Also, there is a good chance Intel will fail to deliver Aurora on time… or as specced. From what I understand, Intel is scrambling to launch those processors on TSMC’s 7nm process, as their own fabs are continually failing to deliver anything successful past 14nm.

Even after Intel’s Aurora comes online at around 0.75-1 EF/s… AMD will still have more exaflops online with its two top supercomputers than all the supercomputers ranked below them combined, as they’ll have 3.5 exaflops delivered by 2023 and Intel has not won any new contracts.

Dude, do you not even bother to read the news before posting these kinds of claims?

The ROCm stack that @cb88 linked to uses LLVM, just like the CUDA compiler, and explicitly markets itself as being “language-agnostic.” I think their docs even mention Fortran. So you’re behind the times again here.

I guess all those benchmarks showing the latest AMD CPUs beating the latest Intel CPUs in all metrics (including power usage) are just made up, then?

Not to mention that every N months, a new Intel CPU vulnerability is announced, and AMD is discovered to have done things correctly and is thus immune. Pretty sure that makes them a “thought leader” (whatever that means) on security, which is pretty high on everyone’s list these days.

  • Dude, I don’t read all the news. I don’t usually check AnandTech, but I read the article. I stand corrected.
  • The ROCm compiler looks nice, and is certainly a welcome addition. It’s also much newer than what I was talking about. I’m sure I’m not the only one who doesn’t do an exhaustive search of the Internet every time I post something.
  • Sure, Intel has some vulnerabilities. But Intel isn’t alone in this. Why bring this up?

I was simply being supportive of Nvidia and of the idea of a commercial company supporting smaller open source projects, and I get shit on for it.

Nobody’s asking you to read “all” the news – but before you write multiple paragraphs of FUD about how NVIDIA is and always will be better than AMD, please do some basic fact-checking to make sure your claims are still accurate, huh?

One of the researchers has a quoted tweet at the bottom of that article specifically stating that the attacks leak only “a few bits” of metadata, and that Intel Meltdown and Zombieload are far worse as they leak “tons of actual data.”

These (and some Spectre variants) are inherent flaws (in some sense) in speculative execution in general; Intel’s bugs are quite literally “oh hey, skipping permissions checks is a 3% performance boost, let’s do it!” – i.e. Intel is massively incompetent; AMD isn’t. That’s a very, very large quantitative, not qualitative, difference.

NVIDIA has a long-standing policy of being unhelpful, unsupportive, and occasionally outright hostile to open source. AMD has a long(ish)-standing policy of having their entire Linux 3D stack, and lots of other critical bits of infrastructure, be open source.

Also, multiple paragraphs of FUD about how NVIDIA is greater than AMD are fanboy shilling, not “supporting.” There are a number of ways that you could have written your posts to say essentially the same thing (i.e. “NVIDIA are technically better than AMD, and AMD is far behind and won’t catch up”) without the FUD. You would still have been technically wrong, but that’s a lesser problem than being rude and annoying.

I can confirm. I have a Quadro board at home and a cluster with Tesla boards at work. It’s like working on the Computer of the Enterprise…

…but this doesn’t mean that the average user of Haiku, which after all is aimed at personal computers, would benefit :slight_smile:

AMD’s high-end hardware (i.e. Vega FE, Radeon VII) is quite nice also… just a bit slower than Nvidia’s, although the tide is turning. And some of AMD’s recent patents indicate they’ve put serious effort into surpassing Nvidia (many of the patents are related to improving memory performance).