Newsgroup: comp.arch
MitchAlsup1 wrote:
For some reason this made me think about getting a blue screen of death
due to too much non-paged memory being used by too many concurrent
overlapped IO's on Windows.

EricP wrote:
I don't understand your question.
MitchAlsup1 wrote:
Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
BGB wrote:
Also I may need to rework how page-in/page-out is handled (and/or how
IO is handled in general), since if a page swap needs to happen while
IO is already in progress (such as a page miss in the system-call
process), at present the OS is dead in the water (one can't access the
SDcard in the middle of a different access to the SDcard).
MitchAlsup1 wrote:
Having a HyperVisor helps a lot here, with the HV taking the page
faults of the OS page fault handler.

Seems like adding another layer couldn't help with this, unless it also
abstracts away the SDcard interface.
MitchAlsup1 wrote:
With a HV, the GuestOS does not "do" IO; it paravirtualizes it via the
HV.

Scott Lurndal wrote:
Actually, that's not completely accurate. With PCI Express SR-IOV, an
I/O MMU, and hardware I/O virtualization, the guest accesses the I/O
device hardware directly and initiates DMA transactions to or from the
guest OS directly. With the PCIe PRI (Page Request Interface), the
guest DMA target pages don't need to be pinned by the hypervisor; the
I/O MMU will interrupt the hypervisor to make the page present and pin
it, and the hardware will then do the DMA.
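A minimal sketch of the fault-and-pin sequence Scott describes, as a toy
Python model (the class names, the boolean PRI response, and the whole
software framing are illustrative assumptions, not a real IOMMU API):

```python
# Toy model of the PCIe PRI ("fault-and-pin") DMA flow.
# Real hardware does this via IOMMU page requests; names here are invented.

class Hypervisor:
    def __init__(self):
        self.present = set()   # guest-physical pages currently in memory
        self.pinned = set()    # pages pinned for DMA

    def handle_page_request(self, gpa):
        """IOMMU interrupts the HV: make the page present, then pin it."""
        if gpa not in self.present:
            self.present.add(gpa)      # page-in from backing store
        self.pinned.add(gpa)
        return True                    # PRI "success" response to the device

class Device:
    def __init__(self, hv):
        self.hv = hv

    def dma_write(self, gpa):
        # DMA proceeds only once the target page is present and pinned.
        if gpa not in self.hv.pinned:
            ok = self.hv.handle_page_request(gpa)  # IOMMU -> HV round trip
            if not ok:
                return False
        return True                    # hardware performs the transfer

hv = Hypervisor()
dev = Device(hv)
assert dev.dma_write(0x1000)           # faults, pins, then transfers
assert 0x1000 in hv.pinned
```

The point of the sketch is only the ordering: the guest never pre-pins;
the device's access itself triggers the present-and-pin work in the HV.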
MitchAlsup1 wrote:
This was something I was not aware of, but probably should have
anticipated.

The GuestOS initiates an I/O request (command) using a virtual
function. Rather than going through a bunch of activities to verify
that the user owns the page and that it is present, the GuestOS just
launches the request, and the I/O device then page faults and pins the
required page (if it is not already so)--much like the page fault
volcano when a new process begins running: page faulting in .text, the
stack, and data pages as they get touched.
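The page-fault-volcano behavior can be sketched as a toy model, assuming
only that each first touch of a page faults it in (all names here are
illustrative):

```python
# Toy model of demand paging: the first touch of each page faults it in.
class AddressSpace:
    def __init__(self):
        self.present = set()
        self.faults = 0

    def touch(self, page):
        if page not in self.present:
            self.faults += 1           # page fault: load from backing store
            self.present.add(page)

# A new process touching .text, stack, and data pages for the first time:
aspace = AddressSpace()
for page in ["text0", "text1", "stack0", "data0", "text0"]:
    aspace.touch(page)
assert aspace.faults == 4   # only first touches fault; the repeat doesn't
```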
This way, the GuestOS simply considers all pages in its "portfolio" to
be present in memory, and the HV does the heavy lifting and page
virtualization.

I guess I should have anticipated this. Sorry !!

EricP wrote:
The reason OS's pin the pages before the IO starts is so there is no
latency reading in from a device, which then has to buffer the input.
An HDD seek avg about 9 ms, add 3 ms for the page fault code.
A 100 Mb/s Ethernet can receive 10 MB/s, or 10 kB/ms = 120 kB in 12 ms.

What would likely happen is the Ethernet card buffer would fill up,
then it starts tossing packets, while it waits for HV to page fault
the receive buffer in from its page file. Later when the guest OS
buffer has faulted in and the card's buffer is emptied, the network
software will eventually NAK all the tossed packets and they get
resent.

So there is a stutter every time the HV recycles that guest OS memory
that requires retransmissions to fix. And this is basically using the
sender's memory to buffer the transmission while the HV page faults.

Note there are devices, like A-to-D converters, which cannot fix the
tossed data by asking for a retransmission. Or devices like tape drives
which can rewind and reread, but are verrry slow about it.

I would want an option in this SR-IOV mechanism for the guest app to
tell the guest OS to tell the HV to pin the buffer before starting IO.
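EricP's arithmetic, and the effect of such a pre-pin option, can be
checked with a small sketch (the 10 kB/ms and 12 ms figures are the
post's own roundings; the 32 kB NIC buffer is an invented example
value, and `backlog_kb` is a hypothetical helper, not any real API):

```python
# Back-of-envelope check of the buffer-overrun arithmetic above.
RATE_KB_PER_MS = 10          # 100 Mb/s ~= 10 MB/s = 10 kB/ms (post's rounding)
STALL_MS = 9 + 3             # HDD seek + page-fault code

def backlog_kb(stall_ms, prepinned):
    """kB the NIC must absorb while the receive buffer pages in.

    If the guest pre-pinned the buffer there is no stall at all;
    otherwise data keeps arriving for the whole stall.
    """
    if prepinned:
        return 0
    return RATE_KB_PER_MS * stall_ms

arrived = backlog_kb(STALL_MS, prepinned=False)
assert arrived == 120                    # the 120 kB figure from the post
tossed = max(0, arrived - 32)            # assuming a 32 kB NIC buffer
assert tossed == 88                      # kB of packets NAKed and resent later
assert backlog_kb(STALL_MS, prepinned=True) == 0
```

This is why the stall shows up as retransmissions rather than data loss
for TCP-like traffic, but is unfixable for sources like A-to-D converters.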
MitchAlsup1 wrote:
So, what happens if the GuestOS thinks the user file is located on a
local SATA drive, but it is really across some network ?? This works
when devices are not virtualized, since the request is routed to a
different system where the file is local, accessed, and the data
returned over the network.

Does this mean the application has lost a level of indirection in order
to have become virtualized ?????
EricP wrote:
My comment was about the consequences of not pinning buffer pages
before starting an I/O. If those pages were for a mapped file stored
on a network device, it won't be any different.