Re: [patch] Make MMCONFIG space (extended PCI config space) a driver opt-in issue

Arjan van de Ven wrote:
3) mmconfig might or might not be enabled, depending on which driver
is loaded, whether it called an API or not.

	Even LESS testing by hw vendors than #2.  Maybe even "never"

	Inconsistent (config access depends on device)

the "depends on device" is even true for Linux today. For us today, MMCONFIG isn't always used, it's used on a per device basis already; except that the per-device is both defined by the bios and our quirks....
(the mmconfig code already falls back to conf1 cycles in various cases)
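
To make that concrete, here is a minimal sketch of the kind of dispatch described above. The names (mmcfg_covers_bus, conf1_read, mmcfg_read, pci_cfg_read) are illustrative, not the actual arch/x86 symbols:

#include <stdint.h>

/* Illustrative prototypes -- not the actual arch/x86 symbols. */
int mmcfg_covers_bus(unsigned int seg, unsigned int bus);	/* from BIOS tables */
int conf1_read(unsigned int bus, unsigned int devfn,
	       int reg, int len, uint32_t *val);		/* 0xCF8/0xCFC cycles */
int mmcfg_read(unsigned int seg, unsigned int bus,
	       unsigned int devfn, int reg, int len, uint32_t *val);

int pci_cfg_read(unsigned int seg, unsigned int bus, unsigned int devfn,
		 int reg, int len, uint32_t *val)
{
	/* Extended space (reg >= 256) is only reachable via MMCONFIG;
	 * everything else, and any bus not covered by an MMCONFIG
	 * region, falls back to conf1 cycles. */
	if (reg < 256 || !mmcfg_covers_bus(seg, bus))
		return conf1_read(bus, devfn, reg, len, val);

	return mmcfg_read(seg, bus, devfn, reg, len, val);
}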

So I'm not entirely buying your argument. In fact I'm not buying your "mixed is not tested at all" argument; while the statement may be true, it's no different from Linux today... which is already mixed. Just differently mixed, I suppose.

/If/ mixed use is truly well tested, then that eliminates one of my two objections.

But I don't see that in the code now... I see we choose [get_virt() in mmconfig_{32,64}.c] strictly on a per-bus basis, based on the allowed bus list provided by the BIOS.

In other words, if the BIOS says "use this segment/bus range for mmconfig", the code does that. There is no mixing of accesses on a per-device basis AFAICS in the current code. (feel free to prove me wrong! :)) It looks like per-bus/domain to me, which is a more reasonable and expected arrangement than per-device mmconfig|typeN conf accesses.

At boot time, we use type1 accesses until a "flag day", at which time an entire bus is switched to mmconfig all at once. After that point there is no mixing of accesses on that bus -- nor were there mixed accesses on that bus /before/ that point.
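
A sketch of that per-bus decision, assuming the usual MCFG-style (segment, start bus, end bus) ranges; the structure and names are illustrative, and this is one possible shape of the mmcfg_covers_bus() helper assumed in the sketch above:

#include <stdint.h>

/* One entry per BIOS-advertised MMCONFIG range (ACPI MCFG-style). */
struct mmcfg_region {
	uint16_t segment;
	uint8_t  start_bus;
	uint8_t  end_bus;
};

static struct mmcfg_region regions[8];	/* filled in from the BIOS tables */
static int nr_regions;

/* A bus uses mmconfig iff it falls inside an advertised range;
 * there is no finer-grained, per-device decision here. */
int mmcfg_covers_bus(unsigned int seg, unsigned int bus)
{
	int i;

	for (i = 0; i < nr_regions; i++)
		if (regions[i].segment == seg &&
		    bus >= regions[i].start_bus &&
		    bus <= regions[i].end_bus)
			return 1;
	return 0;
}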

And -- please forgive me, I mean no offense here -- we have to make sure whatever rule we come up with makes AMD and other non-Intel folks happy. As with PCI MSI, mmconfig (+ its Linux support) has a history of being written first for Intel, working great on select Intel platforms, and initially not working so great for other vendors even when the technology is supposedly compatible. So having AMD sign off on such an approach would greatly increase the comfort level.



The second objection concerned the inconsistencies introduced to userland (and the kernel?), whereby the existing states:

	* 256b config space
	* on a per-bus basis, ext cfg space is available
	  if device provides && mmconfig active

were joined by the additional state

	* on a per-device basis, ext cfg space is available
	  if device provides && driver opts in && mmconfig active

which has the potential to confuse code that makes assumptions on a whole-system basis (mmconfig or not) and a per-bus basis (ext cfg space capable or not).
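
To illustrate the third state, here is a hypothetical sketch -- the ext_cfg_enabled flag and ext_cfg_accessible() helper do not exist anywhere, they just show the extra per-device question the opt-in scheme introduces:

int mmcfg_covers_bus(unsigned int seg, unsigned int bus);	/* as sketched above */

struct pci_dev_sketch {
	unsigned int bus;
	unsigned int ext_cfg_enabled : 1;	/* hypothetical: set by the
						   proposed opt-in call */
};

/* With the opt-in patch, "is extended cfg space reachable?" can no
 * longer be answered from the bus alone -- driver state matters too. */
int ext_cfg_accessible(const struct pci_dev_sketch *dev, unsigned int seg)
{
	return mmcfg_covers_bus(seg, dev->bus) && dev->ext_cfg_enabled;
}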

So, if my two worries here are needless, then those objections are eliminated.




Finally, I wonder if anything can be done about the diagnostic angle: it is not __only__ the driver that may want extended config space access.

It is quite reasonable and logical for an admin or developer to want to _view_ the entire extended cfg space (with lspci -vvvxxx) of a device, even if the device's driver has not called pci_enable_ext_cfg_space(); indeed, even if a driver is not loaded at all.

How to address that need?
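
For reference, userland tools like lspci reach config space through the per-device sysfs config file, and how much they can read (256 vs. 4096 bytes) is how they learn whether extended space is visible. A minimal sketch (the device address is arbitrary):

#include <stdio.h>

int main(void)
{
	/* lspci does the equivalent of this for every device. */
	FILE *f = fopen("/sys/bus/pci/devices/0000:00:00.0/config", "rb");
	unsigned char buf[4096];
	size_t n;

	if (!f)
		return 1;
	n = fread(buf, 1, sizeof(buf), f);
	fclose(f);

	/* The file exposes 256 bytes on a conventional device, 4096
	 * when extended config space is reachable. */
	printf("readable config space: %zu bytes (%s)\n", n,
	       n > 256 ? "extended space visible" : "standard space only");
	return 0;
}

Under a driver opt-in scheme, what that read returns could change depending on whether a driver happens to be loaded and opted in -- which is exactly the diagnostic problem above.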

	Jeff


