Improving the Intel Extreme Driver (Was: Graphics on Dell laptop in Vesa mode only)

Ah, I see. Thanks a lot!
So when you look at the Linux Intel driver sources, you look at the X.Org driver. Interesting-looking docs; I’ll have a look to see if I can find my way around these places, specifically to look at the VGA analog port stuff.

BTW I have the impression that the DRI stuff is optional (3D/accel specifics), so (for nvidia) I’m only looking at DRM: everything I need seems to be there, as far as I can currently see. It would be nice if I could plug that (nvidia) code into a Haiku accelerant driver as ‘engine’, while using the Haiku kernel driver. No idea if that would work though…

But I find that an interesting experiment, if I had the time for it. Which I probably don’t, however :roll_eyes:

In modern-day Linux, there is KMS (kernel modesetting), which means the modesetting code is on the kernel side. And in typical Linux fashion, it starts with this 20K-line file: https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/i915/display/intel_display.c

For VGA you probably also need this: https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/i915/display/intel_crt.c (lots of repetitive code to do the enabling sequence for various Intel chip generations)

I don’t know the architecture of the NVidia driver, but I expect it will be similar in terms of having the modesetting code on the kernel side. In Linux this allows using high-resolution modes for the console, and maybe the boot splash screen. I find doing this much work kernel-side a high price to pay for these two features.

Thanks again :slight_smile:
Indeed, I also wouldn’t want that on the kernel side. The same applies to the nouveau driver. So again, I have no idea if I could (largely) use that kernel-side code in user space, but that would be my goal.

Just now I was browsing through the DRM code for Intel again. In my case it’s Ivy Bridge, where the i915 driver does a manual FDI train, and they state that for newer steppings autotrain could be used instead (todo).
In our driver it’s just the other way around, which means pre-B0 stepping Ivy Bridge won’t work in our case anyway. (We do it ‘less universally’, so to speak.)

That could already be the problem I am facing, I take it. I don’t know yet if our driver in fact executes the training code at all. Do I need to change the code somewhere to enable it? Or can I, for example, just start fiddling with manual training?

Thanks for any additional pointers you might have…

It appears to be disabled currently: https://git.haiku-os.org/haiku/tree/src/add-ons/accelerants/intel_extreme/Ports.cpp#n317

You can check in the same file: the modesetting sequence is different for each type of port (the Analog one linked above is for VGA, but there are slightly different sequences for LVDS, DisplayPort, eDP, and HDMI).

After enabling this you should get some traces in the log. On my machines (one SandyBridge and one Haswell), the FDI training with our existing code always fails, so I disabled it. Now I can at least get the default video mode to work (since for this one, the FDI training done by VESA would already be correct), but this should be fixed properly.

Ah yes, got it, thank you!

I see the traces and I’m poking around while reading docs. Interesting. I do have one remark BTW: the 1920x1080 modeline in common code seems rather high concerning Htotal. This increases the pixelclock from the needed approx. 148 MHz to close to 180 MHz, thereby almost getting blocked by my Iiyama flat panel monitor. Is there a specific reason why this (in my opinion) ‘insanely’ high Htotal is set?

My screen likes the one in my drivers (and this seems like a very reasonable setup to me):

{ { 148500, 1920, 2008, 2052, 2200, 1080, 1084, 1089, 1125, POSITIVE_SYNC},
	B_CMAP8, 1920, 1080, 0, 0, MODE_FLAGS}, /* Vesa_Monitor_@60Hz_(1920X1080) */

At the moment this difference in modeline is responsible for my screen’s refresh dropping from 60 to 52 Hz when the intel_extreme driver kicks in (of course that’s a fault in the driver, but this is what led me to this common code initially).
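
For reference, the refresh rate follows directly from the modeline as refresh = pixel clock / (Htotal * Vtotal). A quick standalone check (a sketch, not driver code):

// the VESA mode above: 148500 kHz / (2200 * 1125) = 60.0 Hz
// keeping 60 Hz with an Htotal near 2666 needs 2666 * 1125 * 60 ~= 180 MHz,
// which is where that high pixelclock comes from
static float refresh_rate(unsigned pixelClockkHz, unsigned hTotal, unsigned vTotal)
{
	return pixelClockkHz * 1000.0f / (hTotal * vTotal);
}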

I don’t know where the modelines in the common code come from. Possibly collected from random places on the Internet, or copied from suggested modes in some display EDID data?

Can’t we compute the timing using the GTF Formula, instead of hardcoding specific modelines?

It’s handy that it’s in common code: it saves every driver from supplying its own (as I still do).
It’s here:
https://git.haiku-os.org/haiku/tree/src/add-ons/accelerants/common/create_display_modes.cpp

There are indeed different approaches to come up with these modelines. It would also be handy if a user could supply a specific one, as in rare cases that might still be needed.

I always strive to have VESA-like modes in the drivers, assuming the card BIOSes strive for that as well when setting most VESA modes. During the boot sequence such a mode is used (from the BIOS), and it’s replaced by the one our accelerants come up with, so staying close to it has its advantages.

The one I mentioned is not a very ‘standard’ one I suspect, but more one that was needed in a corner case somewhere. (One reason for a greater blanking time would be the sync circuitry in CRT screens, or for instance if you run routines during blanking to e.g. prevent video tearing, though that would be the vertical blanking period.)

I guess it would be nice if we used a formula (there is more than one in use, I think), and also had the VESA-like ‘fixed’ modelines. If that were switchable somehow, that would be cool I think.

The ‘native’ mode relayed by EDID is added to the modelist in the drivers, I think, and that should probably stay like that: the native mode is likely the highest possible mode of a monitor, and therefore sometimes has extra restrictions, like very short timing pulses, to keep within max frequency specs. Though in my opinion that means the monitor designers had better spent a tiny bit more money on more capable components, so a relatively ‘standard’ modeline could have been used.

I’m looking at this code and I see that it will, in fact, create the mode using the GTF (compute_display_timing) if nothing matching is found in the “well known” modeline list. So this modeline for 1920x1080 can be replaced by a better one, or even removed completely, and we would still be fine.

The comment on it does not really say where it comes from, either.
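
For reference, the GTF blanking computation is roughly the following. This is a simplified sketch using the default GTF parameters (no margins, no interlace); the function and constant names are mine, and it is not the actual compute_display_timing() code:

#include <math.h>

struct timing {
	float pixelClockMHz;
	int hDisplay, hSyncStart, hSyncEnd, hTotal;
	int vDisplay, vSyncStart, vSyncEnd, vTotal;
};

static timing gtf_timing(int hPixels, int vLines, float refreshHz)
{
	const float kMinVSyncBackPorch = 550.0f;	// microseconds
	const int kVSyncWidth = 3;	// lines
	const int kMinPorch = 1;	// lines
	const float kCPrime = 30.0f;	// blanking duty cycle offset (%)
	const float kMPrime = 300.0f;	// blanking duty cycle gradient (%/kHz)

	// estimate the horizontal period (in microseconds) and derive
	// the vertical total from it
	float hPeriodEstimate = (1e6f / refreshHz - kMinVSyncBackPorch)
		/ (vLines + kMinPorch);
	int vSyncAndBackPorch = (int)roundf(kMinVSyncBackPorch / hPeriodEstimate);
	int vTotal = vLines + vSyncAndBackPorch + kMinPorch;

	// refine the horizontal period using the actual vertical total
	float hPeriod = 1e6f / (refreshHz * vTotal);

	// ideal blanking duty cycle (%); horizontal blanking is rounded
	// to twice the 8-pixel character cell
	float dutyCycle = kCPrime - kMPrime * hPeriod / 1000.0f;
	int hBlank = (int)roundf(hPixels * dutyCycle / (100.0f - dutyCycle)
		/ 16.0f) * 16;

	timing t;
	t.hDisplay = hPixels;
	t.hTotal = hPixels + hBlank;
	int hSyncWidth = (int)roundf(t.hTotal * 8.0f / 100.0f / 8.0f) * 8;
	t.hSyncEnd = hPixels + hBlank / 2;	// sync pulse ends mid-blanking
	t.hSyncStart = t.hSyncEnd - hSyncWidth;
	t.vDisplay = vLines;
	t.vSyncStart = vLines + kMinPorch;
	t.vSyncEnd = t.vSyncStart + kVSyncWidth;
	t.vTotal = vTotal;
	t.pixelClockMHz = t.hTotal / hPeriod;	// pixels per microsecond = MHz
	return t;
}

For 1920x1080 at 60 Hz this yields roughly 172.8 MHz with Htotal 2576 and Vtotal 1118, so a formula-derived mode blanks considerably more than the 148.5 MHz mode quoted earlier; that may well be where high Htotals in mode lists come from.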

Nice reading. If testing some code/drivers is needed, I have:

Kaby Lake (not supported in our code)
Skylake
Ivy Bridge
IronLake
Intel GMA 3150 (Atom N4xx)

Ah nice, I did not see that yet. I’ll try to get a bit more confirmation of the mode I am using from the net, so to speak, and if it’s OK and you’re OK with it, I can replace it in the common code.

I’ll have a look at the formula in there as well and see how that compares, more or less, to the fixed modes. If it can be activated/used separately from the fixed lines, I might even use it in the nvidia driver, along with a driver setting to block the fixed modes. Provided I find the time, that is. For now I am focused on the intel_extreme driver, modesetting/FDI wise. I am rebooting numerous times, like in the old days :wink:

Hi, while double-checking this driver’s functionality, I now have the display PLL going. Intel is very specific about this: you may -not- program the PLL while it’s active. This means a few things:

In Pipes.cpp, routine ConfigureClocks:

  • Directly at the beginning of this routine (after establishing the registers to access) I disable the PLL explicitly, by writing PLL control with its own contents anded with ~DISPLAY_PLL_ENABLED. Just to be safe, I added a 150uS spin directly afterwards.
  • When programming this register, a comment states that the PLL is set back ‘under VGA_CONTROL’.
    Well, that’s not the goal here. The goal is programming the PLL while -keeping- it disabled. Hence this is not correct: instead of anding with the inverse of PLL_NO_VGA_CONTROL, it should and with the inverse of DISPLAY_PLL_ENABLED.
  • then I readback this register (already in the code)
  • then spin these 150uS (already in the code).
  • The next line should program the PLL control again, exactly the same as before, but this time with the ENABLE bit set (see the sketch below this list).
  • readback (already in the code).

Done: the PLL is up and running, and I can set refresh rates now. When switching resolutions I see the correct mode and refresh rate (of course I am tricking somewhat to be able to see this… :wink: )
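
Put together, the sequence described above would look roughly like this. A minimal sketch only: pllControl and newPllValue are placeholder names, while DISPLAY_PLL_ENABLED, read32()/write32() and spin() are as referred to above:

uint32 control = read32(pllControl);

// 1. make sure the PLL is off before touching its configuration
write32(pllControl, control & ~DISPLAY_PLL_ENABLED);
spin(150);

// 2. program the new dividers while -keeping- the PLL disabled
write32(pllControl, newPllValue & ~DISPLAY_PLL_ENABLED);
read32(pllControl);	// posting read
spin(150);

// 3. write the exact same value again, now with the enable bit set
write32(pllControl, newPllValue | DISPLAY_PLL_ENABLED);
read32(pllControl);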

Please note: the sync between the CPU/northbridge and the southbridge is not OK yet. I am now fiddling with that (I have a lot of changes in the code purely for testing purposes that might influence this; I’ll get back to you on this asap).

Anyhow, with the above knowledge you might think of other places in the driver where the same setup should apply. Other PLLs, for instance, are likely to have the same ‘safety’ feature…

Please note: tested on Ivy Bridge, which is correctly shown in the Screen preferences panel, along with the monitor specs.
The ‘back under VGA control’ item -might- be needed on other/older cards, as a secondary function might be that the PLL shuts off as well on those cards. I would advise double-checking the Linux sources.

Will you update Git, or shall I make these kinds of small modifications myself?
Thanks!

It’s easier for me if you can provide the changes in patch form.

The explanation makes sense to me so if you want to push it directly, I’m fine. Or you can push it to Gerrit and I’ll review it there. Or just attach the patch here. Otherwise I can try to redo the change here but it’s more work for me.

The “VGA control” bit, I think, allows bypassing the PLL and using a fixed standard VGA dot clock (only possible for low resolutions). It can be useful if you need “something” driving the display while you reprogram the PLL; maybe you can manage to continue sending hsync and vsync. I know that on laptops with LVDS panels, things don’t look very nice if you leave the display completely uncontrolled (it gradually fades to white).

Exactly, hence: disabling the PLL as a side effect.
OK, I’ll experiment a bit more before I do that then, and probably post a diff. Is there a ticket where I can add this diff, for instance? I have no experience with Gerrit use yet, so that’s not so easy for me for now :wink:

I’m not sure it even disables the PLL itself. I think what happens is the PLL is running (well, if you enable it), but its output is not connected to the video generation clock input.
And indeed changing the PLL settings while it is running is probably a bad idea. Better to stop it, and restart it once all parameters are set.

You can push changes to the special branch “refs/for/master”:

# Make some changes here
git commit
git push origin HEAD:refs/for/master # each commit in the local branch that is not in upstream master branch is uploaded as a gerrit change request

Later on it is possible to push again to the same special branch to update/replace the changes.

There are many tickets where this can help, but I don’t know if there is one specifically about this:
https://dev.haiku-os.org/query?status=assigned&status=in-progress&status=new&status=reopened&component=^Drivers%2FGraphics%2Fintel_extreme&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority

I’ll see what route I’ll take then; maybe push it directly after all.
About the PLL:
So indeed, I am also not sure. It’s always best policy to take the safest route here. So I’ll simply test whether I can and-out both flags: if that works, that’s the route I’d like to take.
It might well be that on some hardware one flag does the trick, and on other hardware the other one. Could be a typical example of the quirks out there in hardware land, so to speak. So if the fixed-VGA flag is in use in the Linux code, we should keep it as well, to prevent other hardware from possibly no longer working…

Thanks for the pointers… going to reboot… again :slight_smile:

Still poking around for more fixes before committing.
BTW, there’s a problem in Haiku with the revert to 800x600 mode via the keyboard shortcut: if you do that, the system first tries to set an ‘undefined’ mode (something like 4096x4096 in resolution), which (of course) fails in the Intel driver (cannot assign memory aperture).

Oh wait, the sanitize step makes that from:
KERN: intel_extreme: Initial mode: Hd 29286 Hs 29029 He 25973 Ht 25454 Vd 14969 Vs 9481 Ve 30060 Vt 11552
KERN: intel_extreme: Sanitized: Hd 4096 Hs 8160 He 8192 Ht 8192 Vd 4096 Vs 8190 Ve 8192 Vt 8192

Normally you don’t really see this, as after this failed attempt a second attempt is done automatically, with the correct mode…

Anyhow, this got me on the wrong track somewhere concerning the driver itself :wink:

Yes, I don’t know where this comes from. Is it Haiku asking the driver “figure out a default mode by yourself”?

It happens sometimes when a new display is added, too (the video mode preferences are stored for each display, identified by some info from DDC/EDID). Probably this does not help with solving problems in the driver…

I would think this is not working as it should. ‘Think of a mode yourself’ sounds much too fuzzy to me (apart from retrieving the preferred display mode and setting that, which this driver does not fetch to replace the default common one for the screen’s native mode; that would be better, since it prevents some screens from not working in their ‘native’ mode when only the pixel resolution is actually used, but not the preferred timing).

BTW, looking at (some of) the other modes in common code, and also in the nVidia driver, shows that rather high Htotals are common, so I now assume that the 1920x1080 mode is perfectly OK. Only, this mode should not be offered by the driver, since it’s the native mode of my screen, and in that case its native timing would best be returned in place of the default VESA timing.

Anyhow, I have made progress in the Intel driver again: now I can set refresh rates while keeping the screen content exactly OK. Before (with the PLL fix) only the resolution and refresh rate reported by the screen were correct, but its content was distorted or partly gone (timing issue between CRTC and port/transcoder).

I have added programming of the CPU M/N/TU timing for both ‘data’ and ‘link’. This was missing in the driver.

  • The data M/N should be set according to the reference clock (270000 kHz), the number of active lanes (that’s already calculated in FlexibleDisplayInterface.cpp) and the pixelclock, but -not- the colordepth(!): the BIOS shows that this has no influence on the programmed values.
  • The link M/N values are based purely on the pixelclock and the reference clock (270000 kHz). Program link N as the last item, since this unlocks/triggers all the other registers as well, making the update atomic (on the next VBlank). (See the sketch after the update below.)

Notes:

  • Up to now I still have the training code disabled (I call the routine, but currently just for the M/N timings).
  • In Pipes.cpp, routine ‘Enable’ does -NOT- touch INTEL_DISPLAY_PIPE_CONTROL: if you do that, the screen’s output is gone and stays gone (blank but ON screen, with correct timing though). I am guessing this may only be touched if the FDI training code is also enabled.

I’ll keep digging for the next item: being able to change resolution on the fly as well, while keeping the display OK. I am still missing some registers; from the looks of it, the link is stable now even at other resolutions (a guess based on the change in visible behaviour), but there is no sync (Hsync). Looks like I need to program somewhere extra how many pixels should be displayed/fetched per line, at least. I’ll keep you posted here.

Update:

  • Come to think of it, I think the nvidia driver possibly does not replace that native timing either, since apparently I just copied my specific modeline into that driver: best would be to double-check/modify that there as well, in that case.
  • Ah, yes, the M/N -data- link: for colordepth you do need to correct, but with a fixed value of 3 (bytes per pixel). This number stays the same for 16- and 8-bit depths, looking at the programming done by the BIOS. Indeed, I can set different depths without distortions. It being a fixed 3 sounds logical as well, as this is ‘raw’ data: for 8-bit depth, for instance, the color lookup was already done and the ‘real’ color is being transferred across this interface. If we would use more than 8 bits per color per pixel, that will be another story.
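
To make the M/N arithmetic above concrete, a sketch (my function and variable names; the plain GCD reduction at the end is an assumption, as the hardware may want a specific scaling to fit its register fields, so double-check against the docs):

#include <numeric>
#include <stdint.h>

static void
compute_m_n(uint32_t pixelClockkHz, uint32_t laneCount,
	uint64_t& dataM, uint64_t& dataN, uint64_t& linkM, uint64_t& linkN)
{
	const uint32_t kLinkClockkHz = 270000;	// FDI/DP reference clock
	const uint32_t kBytesPerPixel = 3;	// fixed, also for 8/16 bit modes

	// data M/N: payload bytes versus raw link bandwidth
	dataM = (uint64_t)pixelClockkHz * kBytesPerPixel;
	dataN = (uint64_t)kLinkClockkHz * laneCount;

	// link M/N: plain pixel clock versus link reference clock
	linkM = pixelClockkHz;
	linkN = kLinkClockkHz;

	// reduce both ratios so they fit the M/N register fields
	uint64_t divisor = std::gcd(dataM, dataN);
	dataM /= divisor;
	dataN /= divisor;
	divisor = std::gcd(linkM, linkN);
	linkM /= divisor;
	linkN /= divisor;
}

As described above, the write order matters: program link N last, since that write latches all the M/N registers atomically on the next vertical blank.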

Great work and progress, Rudolf, thanks! Looking forward to a better video driver!