Interesting subject I think… I remember how super it was to get nVidia 3D working around 2005… Remembering that: I know how much fun you must be having at this time
Are you depending on BIOS setup in the intel_extreme driver? The nice part of radeon_hd is that it can run when the BIOS is totally unaware of the GPU, for example on RISC-V boards.
Yes, because I want to do as little as is needed for a functional driver. Looking at drm, it’s no problem to totally init the card ourselves in theory, but then again, for Intel embedded graphics we’ll never need this on RISC-V, for instance.
The same dependency (partly) exists for nvidia BIOS setup. On certain configs I can already totally init these cards though (did some massive reverse engineering to get that going back in the day). Looking at drm: all can be set up.
That brings me to my doubts: it’s a lot of work to totally set up our own drivers, while drm has very, very complete drivers these days for a lot of brands. Doing the same work is a bit insane if you ask me, though I do understand the kernel/userland discussion about this all…
EDIT: So as you can imagine, I’ll probably stick some nvidia cards in a RISC-V system if I can get my hands on one
EDIT2: the same applies to Matrox… these can already be nicely coldstarted as well, VGA only, and of course the G550 PCI-e (the newest supported) is rather old…
True, but we have the same problem (and there are bug reports about it) with people using machines running coreboot, which by default doesn’t include a VGA or VESA BIOS.
Maybe not the most important use case to cover, but it seems people can always come up with a way to hit these edge cases
Radeon_hd hardware makes things easier by having AtomBIOS, which is bytecode for a simple virtual machine. The driver implements that virtual machine to run the code stored in the card ROM in a cross-platform way. That makes it a good choice for non-x86 hardware.
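For illustration, here is a minimal sketch of what such an interpreter loop could look like; the opcode set below is completely made up (the real AtomBIOS tables are far more involved), and write32()/read32()/snooze() stand in for the driver’s MMIO and timing helpers:

```cpp
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Hypothetical opcodes, for illustration only.
enum Opcode : uint8_t { OP_WRITE_REG, OP_OR_REG, OP_DELAY, OP_END };

extern void write32(uint32_t reg, uint32_t value);
extern uint32_t read32(uint32_t reg);
extern void snooze(uint64_t microseconds);

// Executes a register init script from the card ROM. Because the script is
// interpreted, the same ROM works unchanged on x86, RISC-V, etc.
void
RunInitScript(const uint8_t* script)
{
	size_t pc = 0;
	while (true) {
		switch (script[pc]) {
			case OP_WRITE_REG:
			{
				uint32_t reg, value;
				memcpy(&reg, script + pc + 1, 4);
				memcpy(&value, script + pc + 5, 4);
				write32(reg, value);
				pc += 9;
				break;
			}
			case OP_OR_REG:
			{
				uint32_t reg, mask;
				memcpy(&reg, script + pc + 1, 4);
				memcpy(&mask, script + pc + 5, 4);
				write32(reg, read32(reg) | mask);
				pc += 9;
				break;
			}
			case OP_DELAY:
				snooze(script[pc + 1] * 1000);
				pc += 2;
				break;
			case OP_END:
			default:
				return;
		}
	}
}
```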
It may also be useful to note that our VESA driver already does something similar (using x86emu, which we borrowed from X.org). But currently it relies on the native BIOS of the machine (in addition to the VGA/VESA BIOS of the card). It could be modified to have its own BIOS image (from SeaBIOS, I guess) for machines that don’t have a native one.
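Roughly, the structure is something like this. Every name below is a hypothetical stand-in (the real driver drives x86emu directly), but the ROM layout facts are standard: an option ROM carries a 55AAh signature, a size byte, and an init entry point at offset 3, and video BIOSes are traditionally mapped at segment C000h:

```cpp
#include <stddef.h>
#include <stdint.h>

// Hypothetical emulator interface standing in for the real x86emu calls.
extern void emu_map_memory(uint32_t guestAddress, const void* data,
	size_t size);
extern void emu_call_far(uint16_t segment, uint16_t offset);

void
RunOptionRom(const uint8_t* rom, size_t romSize)
{
	// Map the card ROM where real-mode code expects to find it, then run
	// its init entry point on the emulated CPU. Port and memory accesses
	// are trapped by the emulator and forwarded to the real device.
	emu_map_memory(0xc0000, rom, romSize);
	emu_call_far(0xc000, 0x0003);
}
```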
Does anybody know how to get the current mode timing and clock parameters in Linux (DRM/KMS)?
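(For what it’s worth, drmModeGetCrtc() in libdrm appears to expose exactly this; a small sketch, link with -ldrm:)

```cpp
#include <fcntl.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int
main()
{
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0)
		return 1;

	drmModeRes* resources = drmModeGetResources(fd);
	for (int i = 0; i < resources->count_crtcs; i++) {
		drmModeCrtc* crtc = drmModeGetCrtc(fd, resources->crtcs[i]);
		if (crtc != NULL && crtc->mode_valid) {
			// The full timings of the mode currently being scanned out:
			// pixel clock (kHz), active size, totals and refresh rate.
			const drmModeModeInfo& m = crtc->mode;
			printf("%ux%u, clock %u kHz, htotal %u, vtotal %u, %u Hz\n",
				m.hdisplay, m.vdisplay, m.clock, m.htotal, m.vtotal,
				m.vrefresh);
		}
		drmModeFreeCrtc(crtc);
	}
	drmModeFreeResources(resources);
	return 0;
}
```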
Another use case would be suspend/sleep mode support.
Of course other brands use this type of code in the BIOS as well. Nvidia has a lot of it, for instance, though I would call it more of a script binary language. Exactly this is what I partly reverse engineered all those years ago…
Edit: so our Nvidia driver also implements such a machine (you can enable it via the driver settings file; I called it ‘coldstart’)
Technically, Intel Gen 3 GPUs.
GPU gen 3 or CPU gen 3 I wonder?
Intel Gen 3 GPUs. These older GPUs still run most legacy 3D software as of today.
There is also a problem if the Intel GPU is not selected as the default GPU in the BIOS. In that case it does not work in Haiku, but works in Linux.
I found that it is possible to use the existing mode set by the BIOS instead of setting it every time on driver start. At least this works for Radeon; I am not sure about Intel. The framebuffer pointer can be changed later without touching the mode settings, or the mode can also be used as-is.
This can greatly reduce the black screen problem after the rocket icon. It is also possible to add a driver setting that forbids the driver from setting a mode.
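A minimal sketch of that idea, assuming a placeholder register offset and MMIO helper (not the real Radeon definitions):

```cpp
#include <stdint.h>

extern void write32(uint32_t reg, uint32_t value); // driver MMIO helper

// Hypothetical placeholder for the CRTC's primary surface (scanout)
// address register; the real offset differs per GPU generation.
static const uint32_t kPrimarySurfaceAddress = 0x0000;

void
AdoptFirmwareMode(uint32_t newFrameBufferPhysical)
{
	// Leave every timing/PLL register alone: the display is already live
	// with the mode the BIOS set. Repointing the scanout address is enough
	// to show our own framebuffer without a visible mode switch.
	write32(kPrimarySurfaceAddress, newFrameBufferPhysical);
}
```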
The Intel driver used to do this, but the code was removed.
Indeed it would be nice not to have a few seconds of blackness between the splash screen and the desktop.
What was the problem?
There wasn’t really a problem. It was removed “temporarily” to be sure the modesetting code in the driver gets tested at boot. Otherwise the driver appeared to work fine, but was in fact relying on vesa to set up everything.
Now that the driver is more advanced, maybe it can be restored?
Hi @PulkoMandy, @X512
I have a few remarks I want to make here:
-
@X512, when I described (in a message somewhere above) how you could locate which registers were ‘destroying the mode’, I implicitly stated that indeed you could simply not program anything at all, and you would have a desktop. This goes for most cards/brands (on x86) these days, but it isn’t always so ‘per se’. You’ll find a lot of hardware out there which still has these VGA compatibility mode(s) in it, which it defaults to when first started. These old modes are often not compatible with something we could display a desktop on. That being said, these days this will indeed mostly work.
-
@PulkoMandy, personally I never even had the thought that we’d want to not set a mode upon starting up a primary accelerant. I am not aware at all that there was explicit code in the intel_extreme driver meant to do exactly this. On top of that, as you already stated, it’s not very handy to do that, since we’d never, or at least much more slowly, become aware of trouble in the driver. From the bug reports I am handling so far, I see a lot of users who more or less say: well, native is working, you can close the ticket. In my opinion just running vesa (or gop) mode would be fine for those users; the native driver, and all of the work needed to build it, is then just a waste of time.
-
I can imagine that for some reason, for some people, it’s actually important that they don’t experience a single mode switch taking place on system startup (I personally don’t share this, though). If we want to support this, keep in mind that the drivers should be kept small, and common things (hardware-independent, done by all gfx drivers) should not be in the driver at all.
So thinking about this, I think app_server should send a flag to the accelerant when init accelerant is called, instructing it to -not- init the card’s hardware. app_server should do this because it should know the state the system is in at this point. So in turn it must already be told at boot time that a compatible active modesetting command was done outside the accelerant, namely by gop or vesa. Which one did it is not important, as long as app_server knows which resolution it was, which color depth (for 8 bit: which palette), and at what refresh rate. Comparing that to the desktop mode it wants to set, it can decide to let the accelerant, at init time, report that sufficient init was already done. If however the refresh, hres or vres differ, the accelerant may init and set the mode.
If app_server sends that do-not-init flag, the accelerant may assume the screen is up, retrace is happening, etc.: so no hang situation will occur when e.g. waiting for retrace during some function inside or outside the accelerant. (We had this problem with intel_extreme iirc.)
I think this is the way I would implement something like this if it were my OS
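To make this concrete, here is a purely hypothetical sketch of such a handshake; neither the extra arguments nor the helpers exist in the current accelerant API, it is just one possible shape for the flag described above:

```cpp
#include <Accelerant.h>

extern status_t adopt_current_state(int device);  // hypothetical helper
extern status_t full_init_and_modeset(int device,
	const display_mode* mode);  // hypothetical helper

// app_server would pass the mode the firmware (vesa/gop) already set, or
// NULL when the hardware state is unknown.
status_t
InitAccelerantWithFlag(int device, const display_mode* firmwareMode,
	const display_mode* desktopMode)
{
	if (firmwareMode != NULL
		&& firmwareMode->timing.h_display == desktopMode->timing.h_display
		&& firmwareMode->timing.v_display == desktopMode->timing.v_display
		&& firmwareMode->timing.pixel_clock
			== desktopMode->timing.pixel_clock
		&& firmwareMode->space == desktopMode->space) {
		// Resolution, refresh and color depth already match: the screen is
		// up and retrace is running, so skip the hardware init entirely.
		return adopt_current_state(device);
	}
	// Otherwise, perform the full init and modeset as usual.
	return full_init_and_modeset(device, desktopMode);
}
```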
When looking at app_server and a mode that would already be up, it’s important btw to acknowledge that the boot icons screen is often -not- in the same mode as the desktop. Also remember that a lot of cards do not offer VESA modes native to the attached screen’s resolution, and remember that VESA no longer exists in practice later on (‘just’ gop, or maybe even nothing at all, depending on the architecture for example, or if a card is not the primary system card).
When running on non-primary cards (or other archs), be aware that the AtomBIOS interpreter, or the one in nvidia for instance, will probably init the card to an old-fashioned VGA-style mode. On top of that, we’d have to call a vesa or gop modeset routine via the card’s BIOS in order to get to a situation where an initial init and modesetting sequence is not needed by the accelerant.
I guess that’s what I have for now. So: just my two cents, and just my opinion…
BTW EDIT: be aware that our gfx drivers will probably always be in a kind of ‘under construction’ state. It’s not wise to assume they’re finished at some point, I think, unless maybe we’d just use drm drivers
Here is how this piece of code worked:
- Read the current video mode from the hardware
- If the mode is the same as what is being set, don’t do the mode switching
What was changed: instead of reading the current mode from the hardware, the driver now remembers which mode it has set itself. In most cases this is the same, except on the first modeset from the driver: before, it read the mode from hardware and could know that VESA had already set the same mode (or that another one was set). Now, the first modeset always sets a mode (and properly initializes everything, which is good).
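In rough code terms, the difference is something like this (helper names are hypothetical, not the actual intel_extreme functions):

```cpp
#include <string.h>
#include <Accelerant.h>

extern status_t program_pipe_timings(const display_mode* mode); // hypothetical

static display_mode sCurrentMode; // last mode this driver set itself

status_t
SetDisplayMode(const display_mode* target)
{
	// Old behaviour: read the live mode back from the hardware and skip the
	// modeset if it matched, so a mode already set by VESA was kept as-is.
	// New behaviour: only compare against what we programmed ourselves, so
	// the first call always performs a full, fully-exercised modeset.
	if (memcmp(&sCurrentMode, target, sizeof(display_mode)) == 0)
		return B_OK;

	status_t result = program_pipe_timings(target);
	if (result == B_OK)
		sCurrentMode = *target;
	return result;
}
```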
The way this was solved in Linux is with KMS: moving a lot more of the drivers into the kernel, so that they can set a native mode before the display server is started. But I would prefer to avoid that (less code in the kernel is always better). The immediate switch from bootsplash to desktop is nice to have, but not required. And for now, a properly working driver that can initialize everything is more important. When the driver is fully working, we can see about adding some “tricks” like this. But maybe we are not quite there yet. There are more important things to work on first (multi-display support, maybe?)
Well, if we extend the driver to support more things, users will notice again what’s missing. Multiple monitors, hardware cursors, video overlays, maybe 3d acceleration, …
For now there just isn’t a lot of difference between native and VESA drivers. Maybe wait_for_retrace (and that should be possible with VESA as well) and brightness setting on some machines (but that could be done with ACPI instead).
Ok, then I guess it was removed before I looked at it.
Anyway, personally I wouldn’t want to implement this ‘readout’ of the currently set mode, as it just seems even more work to me. Write-only support is enough to get the driver going well enough, so I’d much rather skip that piece and replace it with a trick like the above-mentioned one (the flag), as that’s totally driver-independent and doesn’t require extra hardware-dependent work at all.
In the past, btw, I never implemented this and it never, ever posed a single problem. Even back then, when I had many, many more users simultaneously responding to problems, testing, etc.: at that time this was all done via my own site and email, and I kept track of it all ‘by hand’… Then again, I put in 60+ hours of work for years, week after week. (not going to do that again)
I think that the video driver can read the video mode on load and report it as the current mode. The driver client can then set another mode if it is not satisfied.
Radeon GPUs have a somewhat different mechanism for vsync. When you set the framebuffer address, it is not applied immediately; instead the hardware waits for vblank and then applies the address, so it will be presented at the next frame. The hardware triggers an interrupt called the “page flip interrupt” when the previous framebuffer can be recycled. If the framebuffer address is not changed, no interrupt is produced. I implemented this mechanism in RadeonGfx and exposed it with the VideoStreams VideoConsumer interface.
A vsync interrupt is also available, but it is less convenient for flipping buffers.
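For illustration, here is a hedged sketch of how a driver could expose that interrupt on Haiku; the register plumbing and names are placeholders, not the actual RadeonGfx internals:

```cpp
#include <KernelExport.h>
#include <OS.h>

extern void write_scanout_address(uint32 address); // hypothetical MMIO helper

static sem_id sPageFlipSem; // created with create_sem(0, "page flip")

// Interrupt handler: the hardware latched the new address at vblank, so
// the previous framebuffer can now be recycled.
static int32
PageFlipInterruptHandler(void* /*data*/)
{
	release_sem_etc(sPageFlipSem, 1, B_DO_NOT_RESCHEDULE);
	return B_HANDLED_INTERRUPT;
}

// Producer side: queue the next frame, then wait for the flip to complete.
status_t
PresentFrame(uint32 frameBufferAddress)
{
	write_scanout_address(frameBufferAddress); // latches at the next vblank
	return acquire_sem(sPageFlipSem);          // unblocks after the flip IRQ
}
```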
Yes, this is also possible on Intel devices. I think both interrupts are available at least on modern hardware.