mitchalsup@aol.com (MitchAlsup1) writes:
On Fri, 7 Feb 2025 13:57:51 +0000, Scott Lurndal wrote:
>mitchalsup@aol.com (MitchAlsup1) writes:
>On Thu, 6 Feb 2025 20:06:31 +0000, Stephen Fuld wrote:
>On 2/6/2025 10:51 AM, EricP wrote:
>MitchAlsup1 wrote:
>On Thu, 6 Feb 2025 16:41:45 +0000, EricP wrote:
Not sure how this would work with device IO and DMA.
-------------------
Say a secure kernel owns a disk drive with secrets that even the HV
is not authorized to see (so HV operators don't need Top Secret
clearance).
The Hypervisor has to grant a hardware device DMA access to a memory
frame that the HV itself has no access to. How does one block the HV
from setting the IOMMU to DMA the device's secrets into its own memory?
>
Hmmm... something like: once a secure HV passes a physical frame address
to a secure kernel, it cannot take it back; it can only ask that
kernel for it back. Which means that the HV loses control of any
core or IOMMU PTEs that map that frame until it is handed back.
>
That would seem to imply that once an HV gives memory to a secure
guest kernel, it can only page that guest with its permission.
Hmmm...
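To make that concrete, here is a minimal sketch (with invented names,
not any real hypervisor's API) of how an HV might track frames it has
donated to a secure guest, refusing to map or remap them anywhere until
the guest explicitly hands them back:

/* Hypothetical sketch: HV-side tracking of frames donated to a secure
 * guest.  Once donated, the HV may not revoke the frame; it can only
 * request its return and wait for the guest to release it. */
#include <stdbool.h>
#include <stdint.h>

enum frame_state { FRAME_HV_OWNED, FRAME_DONATED, FRAME_RETURN_REQUESTED };

struct frame {
    uint64_t         pa;     /* physical frame address */
    enum frame_state state;
};

/* HV gives the frame to the secure guest: after this point the HV must
 * not touch core or IOMMU PTEs that map it. */
static bool hv_donate_frame(struct frame *f)
{
    if (f->state != FRAME_HV_OWNED)
        return false;
    f->state = FRAME_DONATED;
    return true;
}

/* The HV cannot take the frame back; it can only ask for it. */
static void hv_request_return(struct frame *f)
{
    if (f->state == FRAME_DONATED)
        f->state = FRAME_RETURN_REQUESTED;
}

/* Any attempt to (re)map a donated frame -- into the HV's own space or
 * into an IOMMU table -- is rejected until the guest returns it. */
static bool hv_may_map_frame(const struct frame *f)
{
    return f->state == FRAME_HV_OWNED;
}

/* Secure guest (via a trusted call) releases the frame back to the HV. */
static void guest_release_frame(struct frame *f)
{
    f->state = FRAME_HV_OWNED;
}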
I am a little confused here. When you talk about IOMMU addresses, are
you talking about memory addresses or disk addresses?
The I/O MMU does not see the device commands containing the sector on
the disk to be accessed. Mostly, CPUs write directly to the CRs
of the device to start a command; that traffic bypasses the I/O MMU
as raw data.
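As an illustration of that point, a driver kicks off a command by
storing directly into memory-mapped control registers; nothing on that
outbound path consults the IOMMU. A generic sketch with made-up
register offsets, not any particular controller:

/* Generic sketch of a CPU starting a device command by writing its
 * memory-mapped control registers (register layout is invented). */
#include <stdint.h>

#define REG_CMD_ADDR_LO  0x00   /* DMA buffer address, low 32 bits  */
#define REG_CMD_ADDR_HI  0x04   /* DMA buffer address, high 32 bits */
#define REG_CMD_LBA      0x08   /* starting sector (LBA)            */
#define REG_CMD_START    0x0C   /* write 1 to launch the command    */

static inline void mmio_write32(volatile uint8_t *base, uint32_t off,
                                uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;
}

void start_read(volatile uint8_t *cr_base, uint64_t dma_addr, uint32_t lba)
{
    /* The sector number travels as raw command data; only the DMA
     * address the device later emits on the bus goes through the IOMMU. */
    mmio_write32(cr_base, REG_CMD_ADDR_LO, (uint32_t)dma_addr);
    mmio_write32(cr_base, REG_CMD_ADDR_HI, (uint32_t)(dma_addr >> 32));
    mmio_write32(cr_base, REG_CMD_LBA, lba);
    mmio_write32(cr_base, REG_CMD_START, 1);
}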
That is indeed the case. The IOMMU is on the inbound path
from the PCIe controller to the internal bus/mesh structure.
>
Note that there is a translation on the outbound path from
the host address space to the PCIe memory space - this is
often 1:1, but need not be so. This translation happens
in the PCIe controller when creating a TLP that contains
an address before sending the TLP to the endpoint.
Is there any reason this cannot happen in the core MMU ??

How do you map the translation table to the device?

HostBridge has a configuration register that points at
the translation tables.

Why would you wish to have the CPU translating I/O virtual
addresses?

This is pure mischaracterization on your part. You always ...
This still leaves the door open for a parity error to ...

The IOMMU tables are per device, and they
can be configured to map the minimum amount of the address
space (even updated per-I/O if desired) required to support
the completion of an inbound DMA from the device.
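To give a sense of what "updated per-I/O" means in practice, here is a
sketch in the style of the Linux kernel DMA API, where the IOMMU
mapping for a buffer is created just before one device DMA and torn
down right after (illustrative only; error paths trimmed):

/* Sketch in the style of the Linux DMA API: the IOMMU mapping for the
 * buffer exists only for the duration of this one I/O. */
#include <linux/dma-mapping.h>
#include <linux/errno.h>

int do_one_read(struct device *dev, void *buf, size_t len)
{
    /* Create an IOMMU (or bounce-buffer) mapping covering only 'buf'. */
    dma_addr_t bus_addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, bus_addr))
        return -ENOMEM;

    /* Hand 'bus_addr' to the device and wait for the DMA to complete
     * (device-specific details omitted). */

    /* Tear the mapping down; the device can no longer reach 'buf'. */
    dma_unmap_single(dev, bus_addr, len, DMA_FROM_DEVICE);
    return 0;
}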
If the I/O MMU does not participate in interrupts, page faults, ...
Guest OS uses a virtual device address given to it by the HV.
HV sets up the 2nd nesting of translation to translate this
to "what HostBridge needs" to route commands to device control
registers. The handoff can be done by spoofing config space
or by having HV simply hand Guest OS a list of devices it can
discover/configure/use.
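A minimal sketch of that second nesting, with invented structures
(nothing here is My 66000 or any real HV's table format): the guest
writes to what it believes is the device CR page, and a stage-2 entry
installed by the HV steers that access to the address the HostBridge
actually decodes.

/* Hypothetical two-stage translation for guest access to device CRs.
 * Stage 1 (guest): guest virtual  -> guest "device" physical
 * Stage 2 (HV)   : guest physical -> host address the HostBridge routes
 * All names and formats are invented for illustration. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12

struct stage2_entry {
    uint64_t host_pa;   /* where the HostBridge really decodes the CR page */
    bool     valid;
    bool     is_mmio;   /* route to a device, not DRAM */
};

/* One entry per guest physical page (tiny toy table). */
static struct stage2_entry s2_table[1024];

/* HV: map the guest's notion of the device CR page onto the real one. */
void hv_map_device_page(uint64_t guest_pa, uint64_t host_cr_pa)
{
    struct stage2_entry *e = &s2_table[guest_pa >> PAGE_SHIFT];
    e->host_pa = host_cr_pa;
    e->is_mmio = true;
    e->valid   = true;
}

/* Hardware walk (modeled in C): guest physical -> routed host address. */
bool stage2_translate(uint64_t guest_pa, uint64_t *host_pa)
{
    const struct stage2_entry *e = &s2_table[guest_pa >> PAGE_SHIFT];
    if (!e->valid)
        return false;               /* would fault to the HV */
    *host_pa = e->host_pa | (guest_pa & ((1u << PAGE_SHIFT) - 1));
    return true;
}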
The IOMMU is only involved in DMA transactions _initiated_ by
the device, not by the CPUs. They're two completely different
concepts.
Take an AHCI controller, for example, where the only device
BAR is 32-bits; if a host wants to map the AHCI controller
at a 64-bit address, the controller needs to map that 64-bit
address window into a 32-bit 3DW TLP to be sent to the endpoint
function.
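To illustrate the constraint: PCIe memory TLPs carry either a 32-bit
address (3DW header) or a 64-bit address (4DW header), and a function
sitting behind a 32-bit BAR expects the former. A rough sketch of the
decision (simplified; no real controller's header layout):

/* Simplified view of picking a PCIe memory-request header format.
 * Real TLP headers carry more fields; this only shows the address part. */
#include <stdint.h>
#include <stdbool.h>

struct mem_tlp {
    bool     four_dw;   /* true: 64-bit address (4DW), false: 32-bit (3DW) */
    uint64_t addr;      /* address placed in the header */
};

/* Build the address portion of a memory TLP for 'pcie_addr'. */
struct mem_tlp make_mem_tlp(uint64_t pcie_addr)
{
    struct mem_tlp t;
    /* Per PCIe rules, addresses below 4GB must use the 3DW format. */
    t.four_dw = (pcie_addr >> 32) != 0;
    t.addr    = pcie_addr;
    return t;
}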
This is one of the reasons My 66000 architecture has a unique
MMI/O address space--you can set up a 32-bit BAR to put a
page of control registers in 32-bit address space without
conflict. {{If I understand correctly}} Core MMU, then,
translates normal device virtual control register addresses
such that the request is routed to where the device is looking
{{which has 32 high order bits zero.}}
Most systems have DRAM located at physical address zero, and
a 4GB DRAM is pretty small these days.
So you either need
to make a hole in the DRAM or provide a mapping mechanism to
map a 64-bit address into a 32-bit BAR when sending TLPs
to the AHCI controller.

A device is configured by setting BAR[s] to an addressable
location; the 32-bit BAR simply maps into MMI/O: 0x00000000-0xFFFFFFFF,
and DRAM: 0x0000000000000000 is not the same address as
MMI/O: 0x0000000000000000.
Systems that aren't intel compatible will designate a range
of the 64-bit physical address space (near the top) and will
map regions in that range to the 32-bit BAR via translation
registers in the PCIe controller.

You are using an aperture to place said MMI/O region.
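Either way -- aperture or translation registers -- the effect is the
same rebase. A sketch of one outbound window (register names invented;
real PCIe controllers differ in detail): a 64-bit host window near the
top of the physical address space is rebased into the device's 32-bit
BAR range before the TLP is formed.

/* Toy model of one outbound translation window in a PCIe controller:
 * host addresses in [base, base+size) are rebased to 'target' before
 * the TLP is formed.  Register names are invented for illustration. */
#include <stdint.h>
#include <stdbool.h>

struct ob_window {
    uint64_t base;     /* host physical base (e.g. near the top of PA space) */
    uint64_t size;     /* window size */
    uint64_t target;   /* PCIe address it maps to (fits the 32-bit BAR) */
};

/* Translate a host physical address to the PCIe address put in the TLP. */
bool ob_translate(const struct ob_window *w, uint64_t host_pa,
                  uint64_t *pcie_addr)
{
    if (host_pa < w->base || host_pa >= w->base + w->size)
        return false;                  /* not claimed by this window */
    *pcie_addr = w->target + (host_pa - w->base);
    return true;
}

/* Example: a 64KB window at 0xFFFFFFFF_F0000000 targeting a 32-bit
 * AHCI BAR at 0xFEB00000 turns host 0xFFFFFFFF_F0000100 into
 * PCIe 0x00000000_FEB00100, which fits a 3DW TLP. */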
>It is a SINGLE address system, it happens to have 66-bits of
>address space.
On the other hand--it would take a very big system indeed to
overflow the 32-bit MMI/O space, although ECAM can access
42-bit device CR MMI/O space.
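For comparison, standard PCIe ECAM derives a config-space address for a
function's registers from its geographic coordinates; a generic sketch
of that decode follows (the 42-bit figure above is specific to the
poster's design, not to this formula):

/* Standard PCIe ECAM address computation: each function gets a 4KB
 * config-space page addressed by (bus, device, function). */
#include <stdint.h>

uint64_t ecam_address(uint64_t ecam_base, unsigned bus, unsigned dev,
                      unsigned fn, unsigned reg_offset)
{
    return ecam_base
         | ((uint64_t)bus << 20)     /* 8 bits of bus      */
         | ((uint64_t)dev << 15)     /* 5 bits of device   */
         | ((uint64_t)fn  << 12)     /* 3 bits of function */
         | (reg_offset & 0xFFF);     /* 4KB of registers   */
}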
Leaving aside the small size of the legacy Intel I/O space
(16-bit addresses), history seems to have favored single
address space systems, so I suspect such a MMI/O space will
not be favored by many.