Improving the Intel Extreme Driver (Was: Graphics on Dell laptop in Vesa mode only)

There wasn't really a problem. It was removed "temporarily" to make sure the modesetting code in the driver was tested at boot. Otherwise the driver appeared to work fine, but was in fact relying on vesa to set up everything.

Now that the driver is more advanced, maybe it can be restored?

6 Likes

Hi @PulkoMandy , @X512

I have a few remarks I want to make here:

  • @X512 when I described, in a message somewhere above, how you could locate which registers were 'destroying the mode', I implicitly stated that indeed you could simply not program anything at all and still have a desktop. This goes for most cards/brands (on x86) these days, but it isn't always so 'per se'. You'll find a lot of hardware out there which still has VGA compatibility mode(s), which it defaults to when first started. These old modes are often not suitable for displaying a desktop on. That being said, these days this will indeed mostly work.

  • @PulkoMandy , personally I never even considered that we'd want to not set a mode upon starting up a primary accelerant. I was not aware at all that there was explicit code in the intel_extreme driver meant to do exactly this. On top of that, as you already stated, it's not very handy, since we'd never become aware of trouble in the driver, or only much more slowly. From the bug reports I am handling so far, I see a lot of users who more or less say: well, native is working, you can close the ticket. In my opinion, for those users just running in vesa (or gop) mode would be fine, and the native driver, with all the work needed to build it, would be a waste of time.

  • I can imagine that for some reason, for some people, it's actually important that they don't experience a single mode switch taking place on system startup (I personally don't share this though). If we want to support this, keep in mind that the drivers should be kept small, and common things (hardware-independent, done by all gfx drivers) should not be in the driver at all.

So, thinking about this, I think app_server should send a flag to the accelerant when init_accelerant is called, instructing it to -not- init the card's hardware. app_server should do this because it knows the state the system is in at that point. In turn, it must be told at boot time that a compatible, active mode-setting command was already executed outside the accelerant, namely by gop or vesa. Which one did it is not important, as long as app_server knows the resolution, the color depth (for 8-bit: which palette), and the refresh rate. Comparing that to the desktop mode it wants to set, it can decide to let the accelerant, at init time, conclude that sufficient init was already done. If, however, the refresh rate, hres, or vres differ, the accelerant may init the hardware and set the mode.
If app_server sends that do-not-init flag, the accelerant may assume the screen is up, retraces are happening, etc., so no hang situation will occur when e.g. waiting for a retrace during some function inside or outside the accelerant. (We had this problem with intel_extreme, iirc.)
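The decision described above could be sketched roughly as follows. This is a hypothetical illustration, not existing Haiku code: the struct, the `B_SKIP_HARDWARE_INIT` flag, and `InitFlagsFor()` are all invented names. The idea is only that app_server compares the mode left active by the boot loader (vesa or gop) against the desktop mode it wants, and asks the accelerant to skip hardware init only on an exact match.

```cpp
#include <cstdint>

// Hypothetical sketch of the proposal; none of these names exist in
// Haiku today. app_server compares the mode already set at boot time
// (by vesa or gop) against the desktop mode it wants, and only asks
// the accelerant to skip hardware init when they match exactly.

struct boot_mode {
	uint16_t h_res;
	uint16_t v_res;
	uint8_t  bits_per_pixel;
	uint16_t refresh_hz;
};

// Flag value is an assumption, mirroring Haiku's B_* constant style.
enum { B_SKIP_HARDWARE_INIT = 0x01 };

// Decide which flags app_server would pass to init_accelerant():
// skip init only when the boot loader already set the exact mode wanted.
uint32_t InitFlagsFor(const boot_mode& bootMode, const boot_mode& desktopMode)
{
	bool sameMode = bootMode.h_res == desktopMode.h_res
		&& bootMode.v_res == desktopMode.v_res
		&& bootMode.bits_per_pixel == desktopMode.bits_per_pixel
		&& bootMode.refresh_hz == desktopMode.refresh_hz;
	return sameMode ? B_SKIP_HARDWARE_INIT : 0;
}
```

In a real implementation the comparison would also have to cover the 8-bit palette case mentioned above; it is left out here for brevity.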

I think this is the way I would implement something like this if it were my OS :wink:

When looking at app_server and a mode that would already be up, it's important btw to acknowledge that the boot icons screen is often -not- in the same mode as the desktop. Also remember that a lot of cards do not offer VESA modes native to the attached screen's resolution, and that vesa no longer exists in practice later on ('just' gop, or maybe even nothing at all, depending for example on the architecture, or on whether a card is the primary system card).

When running on non-primary cards (or other archs), be aware that the atom bios interpreter, or the one in nvidia for instance, will probably init the card to an old-fashioned VGA-style mode. On top of that, we'd have to call a vesa or gop modeset routine via the card's bios in order to get to a situation where an initial init and modesetting sequence is not needed by the accelerant.

I guess that's what I have for now. So: just my two cents, and just my opinion…

BTW EDIT: be aware that our (gfx) drivers will probably always be in a kind of 'under construction' state. I think it's not wise to assume they'll be finished at some point, unless maybe we'd just use drm drivers :wink:

4 Likes

Here is how this piece of code worked:

  • Read the current video mode from the hardware
  • If the mode is the same as what is being set, don’t do the mode switching

What was changed: now, instead of reading the current mode from the hardware, the driver remembers which mode it has set itself. In most cases this is the same, except on the first modeset from the driver: before, it read the mode from hardware and could detect that VESA had already set the same mode (or a different one). Now, the first modeset always sets a mode (and properly initializes everything, which is good).
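The "remember which mode was set" side of this can be sketched as below. This is an illustrative model with hypothetical names, not the actual intel_extreme code: the driver caches the last mode it programmed itself and skips the switch only on a repeat request, so the very first call always performs a full modeset (and full init), even if VESA left the same mode active.

```cpp
#include <cstdint>
#include <cstring>

// Illustrative sketch (hypothetical names, not the real intel_extreme
// code): remember the last mode the driver programmed itself, and skip
// the switch only when asked for that same mode again. On the first
// call nothing is cached, so a full modeset always happens.

struct display_mode {
	uint16_t h_display;
	uint16_t v_display;
	uint32_t pixel_clock;  // kHz
};

class ModeSetter {
public:
	// Returns true if a real modeset was performed.
	bool SetDisplayMode(const display_mode& mode)
	{
		if (fHaveLastMode
			&& memcmp(&fLastSetMode, &mode, sizeof(mode)) == 0)
			return false;  // same mode we set before: skip the switch

		// ... program PLLs, pipes and planes here ...
		fLastSetMode = mode;
		fHaveLastMode = true;
		return true;
	}

private:
	display_mode fLastSetMode{};
	bool fHaveLastMode = false;
};
```

The removed variant differed only in where the comparison baseline came from: it read the currently programmed mode back from the hardware registers instead of from a cached copy.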

The way this was solved in Linux is with KMS: moving a lot more of the driver into the kernel side, so that they can set a native mode before the display server is started. But I would prefer to avoid that (less code in the kernel is always better). The immediate switch from bootsplash to desktop is nice to have, but not required. And for now, a properly working driver that can initialize everything is more important. When the driver is fully working, we can see about adding some “tricks” like this. But maybe we are not quite there yet. There are more important things to work on first (multi display support maybe?)

Well, if we extend the driver to support more things, users will notice again what’s missing. Multiple monitors, hardware cursors, video overlays, maybe 3d acceleration, …

For now there just isn’t a lot of difference between native and VESA drivers. Maybe wait_for_retrace (and that should be possible with VESA as well) and brightness setting on some machines (but that could be done with ACPI instead).

1 Like

Ok, then I guess it was removed before I looked at it.

Anyway, personally I wouldn't want to implement this 'readout' of the currently set mode, as it just seems like even more work to me. Write-only support is enough to get the driver going well, so I'd much rather skip that piece and replace it with a trick like the above-mentioned one (the flag), as that's totally driver-independent and doesn't require extra hardware-dependent work at all.

In the past, btw, I never implemented this and it never, ever posed a single problem. Even back then, when I had many, many more users simultaneously responding to problems, testing, etc.: at that time this was all done via my own site and email, and I kept track of it all 'by hand'… Then again, I put in 60+ hours of work for years, week after week. (not going to do that again :wink: )

5 Likes

I think the video driver can read the video mode on load and report it as the current mode. The driver's client can set another mode if it is not satisfied.

1 Like

Radeon GPUs have a somewhat different mechanism for vsync. When you set the framebuffer address, it is not applied immediately; instead, the hardware waits for vblank and then applies the address, so it will be presented at the next frame. The hardware triggers an interrupt called the "page flip interrupt" when the previous framebuffer can be recycled. If the framebuffer address is not changed, no interrupt is produced. I implemented this mechanism in RadeonGfx and exposed it with the VideoStreams VideoConsumer interface.

A vsync interrupt is also available, but it is less convenient for flipping buffers.
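The latch-at-vblank behavior described above can be modeled in a few lines. This is a simplified simulation with invented names (not RadeonGfx or VideoStreams code): writing a new framebuffer address only latches it, the hardware applies it at the next vblank, and the "page flip" event then hands back the previously scanned-out buffer for recycling; if the address never changed, no event is produced.

```cpp
#include <cstdint>

// Simplified model (hypothetical names) of the Radeon page-flip scheme:
// a new framebuffer address is only latched; it takes effect at the
// next vblank, at which point the buffer that was being scanned out
// becomes recyclable (this is when the page-flip interrupt would fire).

class FlipEngine {
public:
	explicit FlipEngine(uint64_t initialAddress)
		: fScanoutAddress(initialAddress), fPendingAddress(initialAddress) {}

	// Driver side: latch a new address; takes effect at the next vblank.
	void SetFramebuffer(uint64_t address) { fPendingAddress = address; }

	// Called at each vblank. Returns the address of the buffer that can
	// now be recycled, or 0 if the address was unchanged (no interrupt).
	uint64_t OnVBlank()
	{
		if (fPendingAddress == fScanoutAddress)
			return 0;  // nothing latched: no page-flip interrupt
		uint64_t recyclable = fScanoutAddress;
		fScanoutAddress = fPendingAddress;
		return recyclable;  // page-flip interrupt delivers this buffer
	}

	uint64_t ScanoutAddress() const { return fScanoutAddress; }

private:
	uint64_t fScanoutAddress;
	uint64_t fPendingAddress;
};
```

This also shows why the page-flip interrupt is more convenient than a plain vsync interrupt for buffer recycling: the event itself tells you exactly which buffer is free again, instead of firing every frame regardless.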

4 Likes

Yes, this is also possible on Intel devices. I think both interrupts are available at least on modern hardware.

1 Like

Well, that's cleaner design-wise, I guess. For the not-so-complete intel driver I would not recommend it, though.

Still, if someone would like to do it fully, it's nice to have, I guess. That probably means you'd have to check out all hardware versions, all possible pipes, scalers, and cross-connections, and come up with the right readout. And, while at it, once all this has become fully clear, it will also largely be possible to actively program all that same stuff, which will be needed anyway if this driver is to support multiple independent heads. Currently it has clone mode at best, since in that case the exact readout is less important…

For me this is not something I am going to put effort into for the extreme driver; my goal is/was to get as many systems as possible to at least a single screen that can set modes, compared to what it did before. Looks like this attempt is indeed succeeding. If someone else wants to jump in, that's perfectly OK, I'll just rethink what I want to do next then.

That would be a luxury I guess… :smile:

8 Likes

Hi KapiX, where is this change made? I would like to attempt this test. Thank you.

If all is right, it's in the normal driver, including even many more fixes and extensions in the meantime…

2 Likes

@x512, recently I saw (I think) that the intel hardware has this option as well (page flip int), in case you’re interested :wink:

It also needs some API to set a new framebuffer VRAM address. And a VRAM memory manager.

1 Like

@rudolfc Thank you so much for your work on this front.

My Lenovo Yoga 2 Pro 13'' graphics card is now detected as Intel Haswell mobile, and I think the monitor as well (I am writing from my Thinkpad now). All available resolutions are displayed correctly and I have a brightness bar! I am currently using it at 1600 x 900, half of its native 3200 x 1800. The screen really looks amazing. This is a machine that has some motherboard heat issues that prevent the CPU from reaching high frequencies. But, as I said in an older thread, it's pretty much working with lightweight Haiku. I need to try playing 1080p video to encounter the problem…
Need to update our hardware database with the info…

Due to the above problem, my main machine is a Thinkpad T450s.
Its graphics card is an HD Graphics 5500, but the framebuffer driver is used with Haiku (EFI boot), albeit in the correct resolution: 1920x1080.
Is it supposed to work? If yes, is there a difference for the driver between legacy and EFI boot?
BTW, my Yoga 2 above boots in legacy mode. But the Thinkpad cannot; because of its 12 GB of memory it boots only in EFI mode (I think I created a ticket for that in the past).

Please let me know if you need more info or if I should create any ticket.
Thanks again :slight_smile:

BTW, I forgot, hrev55945 64bit for both machines.

5 Likes

Hi fkap, so on the Lenovo, does GLTeapot spin at 60Hz as well? (checking whether interrupts for blanking work there)

For the other system: what’s the card’s ID? if it’s not in the driver you could create a ticket for it indeed.

The difference between EFI boot and legacy boot on some systems is that during boot, EDID info might not be fetched in EFI mode (depending on the specific BIOS implementation, so it depends on the specific system setup). Since for some screen types (DP) the driver might still rely on the EDID from boot, this would indeed mean there's a difference in how the driver behaves between the two boot types.

On most laptops I guess it will be OK anyway, since there the panel info is fetched from the ROM (or via ACPI, using Intel's OpRegion function) and the driver will use that instead.

Yes it does.

So, HD Graphics 5500, id: 0x1616

Would Intel Extreme be a first good target for hardware graphics acceleration?

The first target is / will be picked by the one who starts to hack it.

1 Like

Too bad I’m not one of those really smart coders who can hack at such a problem.

I saw KabyLake mentioned in a few posts. Has there been more progress on it?
That ended up being why I had to boot in VESA mode… I have KabyLake and a 4K monitor, and it was attempting 4K native resolution but resulted in a black screen (ctrl+alt+shift+esc worked great to move down to a working resolution). Changing the Screen resolution to 1920x1068x32 60Hz works fine, so I don't need to use VESA in that case. Thanks for all the hard work!