Subject : Re: MSI interrupts
From : mitchalsup (at) *nospam* aol.com (MitchAlsup1)
Newsgroups : comp.arch
Date : 14. Mar 2025, 00:52:10
Organisation : Rocksolid Light
Message-ID : <53b8227eba214e0340cad309241af7b5@www.novabbs.org>
References : 1 2 3 4 5 6 7
User-Agent : Rocksolid Light
On Thu, 13 Mar 2025 23:34:22 +0000, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
On Thu, 13 Mar 2025 21:14:08 +0000, Scott Lurndal wrote:
>
mitchalsup@aol.com (MitchAlsup1) writes:
On Thu, 13 Mar 2025 18:34:32 +0000, Scott Lurndal wrote:
>
>
Most modern devices advertise the MSI-X capability instead.
>
And why not:: it's just a few more flip-flops and almost no more
sequencing logic.
>
I've seen several devices with more than 200 MSI-X vectors;
that's 96 bits per vector to store a full 64-bit address and
32-bit data payload.
>
At this point, with 200+ entries, flip-flops are not recommended;
instead these would be placed in a RAM of some sort. Since RAMs
come in 1 KB and 2 KB quanta, we use 1 of 2K and 1 of 1K for the
256 said message containers, with 1-cycle access (after you get
to that corner of the chip).
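A quick check of that sizing, counting only the address and data
bits as above:

    256 containers * 96 bits = 24,576 bits = 3,072 bytes
                             = one 2 KB RAM + one 1 KB RAM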
>
That is 200 _per function_. Consider a physical function that
supports the SRIOV capability and configures 2048 virtual
functions. So that's 2049 * 200 MSI-X vectors just for one
device. It's not unusual. These vectors are, of course,
stored on the device itself. Mostly in RAMs.
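Same arithmetic at that scale, still counting only address + data
(12 bytes per vector):

    2049 functions * 200 vectors * 12 bytes ~= 4.9 MB of table storage

which is far past what anyone would keep in flip-flops, hence the
on-device RAMs.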
In My 66000, said device can place those value-holding containers
in actual DRAM should it want to punt the storage. This adds latency
but reduces the on-die storage required. At that point, the device
has an unlimited number of "special things".
>
Keep in mind that a guest operating system may be writing
the 64-bit address field in a virtual function assigned to
it with guest virtual addresses, so the inbound path for
MSI-X needs to traverse the IOMMU before hitting the
interrupt controller logic.
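A minimal sketch of that inbound path, assuming a toy IOMMU and a
fixed window for the interrupt-controller MMIO; all names, addresses,
and the trivial identity translate are invented for illustration:

    /* Sketch only: an inbound MSI-X write carries a guest-programmed
     * address, so it is translated like any other DMA write before
     * the result can be recognized as an interrupt message. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define INTCTL_BASE 0xFEE00000ull   /* assumed message window   */
    #define INTCTL_SIZE 0x00100000ull

    /* toy IOMMU: identity map, never faults (a real one walks tables
       selected by the requester id of the function) */
    static bool iommu_translate(uint16_t rid, uint64_t guest,
                                uint64_t *sys)
    {
        (void)rid;
        *sys = guest;
        return true;
    }

    static void inbound_msix_write(uint16_t rid, uint64_t guest_addr,
                                   uint32_t data)
    {
        uint64_t sys;
        if (!iommu_translate(rid, guest_addr, &sys))
            return;                              /* IOMMU fault      */
        if (sys - INTCTL_BASE < INTCTL_SIZE)
            printf("interrupt message, data=0x%08x\n", data);
        else
            printf("plain posted write to 0x%llx\n",
                   (unsigned long long)sys);
    }

    int main(void)
    {
        inbound_msix_write(0x0100, 0xFEE01000ull, 42u);
        return 0;
    }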
>
An important concept that is hard to find in the PCIe specifications.
>
It's more of a system-level thing than a PCI thing, so I
wouldn't expect to find it in the PCI specification. System-level
standards (like ARM's SBSA, the Server Base System Architecture)
cover these types of topics.
>
>
The guest OS also specifies
the data payload, which may be an IRQ number on an Intel
system, or a virtual IRQ number translated by the IOMMU
or Interrupt controller into a physical IRQ number (allowing
multiple guest OS to use the same IRQs they would use on real
hardware).
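A sketch of that data-side remapping, with an invented table indexed
by (guest, guest IRQ); the only point is that two guests can hand the
device the same IRQ number and still land on different physical IRQs:

    /* Sketch only: the hypervisor programs the remap table when it
       hands the virtual function to a guest; shape and names are
       invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define GUESTS 4
    #define GIRQS  8

    static uint32_t remap[GUESTS][GIRQS];

    /* msix_data is whatever the guest wrote: here, the IRQ number
       it would have used on bare hardware */
    static uint32_t physical_irq(uint32_t guest, uint32_t msix_data)
    {
        return remap[guest % GUESTS][msix_data % GIRQS];
    }

    int main(void)
    {
        remap[1][3] = 77;   /* guest 1, IRQ 3 -> physical IRQ 77     */
        remap[2][3] = 91;   /* guest 2 reuses IRQ 3 without clashing */
        printf("%u %u\n", physical_irq(1, 3), physical_irq(2, 3));
        return 0;
    }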
>
Just note that even when GuestOS[k].IRQ[j] uses the same bit patterns
at the device, the messages get delivered to their own unique
GuestOS through its interrupt table.
>
But even when GuestOS[k] and GuestOS[j] use the same 96-bit MSI-X
pattern, each GuestOS can interpret its message the way it wants,
sort its drivers in the way it wants, and operate with blissful
ignorance of GuestOS[other].
>
Linux, for example, expects the data payload of an MSI-X message to
be an unformatted 32-bit integer. Limiting that or defining a hard
format is a non-starter, as the kernel software folks will just
point and laugh. They really don't like accommodating new
architectures that play fast and loose with the standard behavior.
DAMHIKT.
Not at all (laughing Linux people); I make no specification to
the system (at the architectural level) of how the 32-bit
value is interpreted, nor which range of addresses in MMI/O
space maps to interrupt tables. Those are all system choices --
possibly dynamic (ASLR and all that, or moving once an hour...)
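By way of illustration only (this is not the My 66000 definition --
just the point that the window is a pair of values the system owns
and can move whenever it likes):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* invented registers: where the interrupt tables currently live */
    static uint64_t int_table_base  = 0x0000004000000000ull;
    static uint64_t int_table_limit = 0x0000004000100000ull;

    static bool hits_interrupt_table(uint64_t pa)
    {
        return pa >= int_table_base && pa < int_table_limit;
    }

    int main(void)
    {
        printf("%d\n", hits_interrupt_table(0x4000000800ull)); /* 1 */
        /* system moves the window (the "once an hour" case)       */
        int_table_base  = 0x0000008000000000ull;
        int_table_limit = 0x0000008000100000ull;
        printf("%d\n", hits_interrupt_table(0x4000000800ull)); /* 0 */
        return 0;
    }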