Anton Ertl wrote:
> EricP <ThatWouldBeTelling@thevillage.com> writes:
>> That's difficult with a circular buffer for the instruction queue/rob
>> as you can't edit the order.
>
> What's wrong with performing an asynchronous interrupt at the ROB
> level rather than inserting it at the decoder? Just stop committing at
> some point, record this as the interrupt return address and start
> decoding the interrupt code.

While that is one way of doing it and/or conceptualizing it, it is
worse than a pipeline drain because you toss work you have already
invested in: fetch, decode, rename, schedule, and possibly execute.
And you still have to refill the pipeline with the handler.
> Ok, it's more efficient to insert an interrupt call into the
> instruction stream in the decoder: all the in-flight instructions will
> be completed instead of going to waste. However, the interrupt will
> usually be serviced later, and, as you point out, if the instruction
> stream is redirected for some other reason, you may have to replay the
> interrupt.
The way I saw it, the core continues to execute its current stream
while it prefetches the handler prologue into the L1 I$, then loads its
fetch buffer. At that point fetch injects a special INT_START uOp into
the instruction stream and switches to the handler. The INT_START uOp
travels down the pipeline right behind the tail of the original stream.

If none of the flow-disrupting events occur to the original stream,
the handler just tucks in behind it. When INT_START hits retire, the
core sends the commit signal to the interrupt controller to confirm
the hand-off.
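A minimal Python sketch of that hand-off, with all names other than
INT_START invented for illustration (this is not any real core's
logic). The only point it tries to capture is that delivery is
confirmed to the controller only when INT_START retires:

```python
from collections import deque

INT_START = "INT_START"

class InterruptController:
    """Holds one interrupt tentatively until the core commits the hand-off."""
    def __init__(self):
        self.pending = None      # interrupt held in a tentative state
        self.delivered = []

    def raise_irq(self, vector):
        self.pending = vector    # tentative until the core commits

    def commit(self):
        # Core confirms the hand-off when INT_START retires.
        self.delivered.append(self.pending)
        self.pending = None

class Core:
    def __init__(self, ctrl):
        self.ctrl = ctrl
        self.pipe = deque()      # in-flight uOps, oldest on the left

    def fetch(self, uop):
        self.pipe.append(uop)

    def take_interrupt(self):
        # Fetch switches to the handler; INT_START rides right behind
        # the tail of the original stream.
        self.fetch(INT_START)
        self.fetch("handler_insn0")

    def retire_one(self):
        uop = self.pipe.popleft()
        if uop == INT_START:
            self.ctrl.commit()   # delivery confirmed only at retire
        return uop

ctrl = InterruptController()
core = Core(ctrl)
for i in range(3):
    core.fetch(f"insn{i}")       # original stream keeps executing
ctrl.raise_irq(42)
core.take_interrupt()

retired = [core.retire_one() for _ in range(len(core.pipe))]
```

The older instructions all retire normally before INT_START does, so
none of the already-invested work is thrown away.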
> The interrupt handler should start executing when its first instruction
The interrupt handler should start executing at the same time as it
would otherwise. What changes is that the interrupt is retained in the
controller in a tentative state for longer, while the handler is
fetched and the current stream continues executing. So the window in
which an interrupt hand-off can be disrupted and rejected is larger.
But interrupts are asynchronous and there is no guaranteed delivery
latency.
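That tentative state can be sketched in a few lines of Python (all
names here are invented for illustration): if the original stream is
redirected before INT_START retires, the squash rejects the hand-off,
and the interrupt simply stays pending in the controller for a later
delivery attempt, with nothing to roll back.

```python
class Controller:
    """Holds one interrupt in a tentative state until the core commits."""
    def __init__(self):
        self.pending = None
        self.delivered = []

    def raise_irq(self, vector):
        self.pending = vector

    def commit(self):
        self.delivered.append(self.pending)
        self.pending = None

def drain(pipe, ctrl, flush_before_retire):
    # A redirect (mispredict, exception) squashes the pipe, INT_START
    # included: the hand-off is rejected and nothing needs undoing.
    if flush_before_retire:
        pipe.clear()
        return "rejected"
    while pipe:
        if pipe.pop(0) == "INT_START":
            ctrl.commit()        # retire of INT_START confirms delivery
    return "committed"

ctrl = Controller()
ctrl.raise_irq(7)
first = drain(["insn0", "INT_START"], ctrl, flush_before_retire=True)
second = drain(["insn0", "INT_START"], ctrl, flush_before_retire=False)
```

After the first, rejected attempt the interrupt is still pending and
deliverable; only the second attempt, where INT_START reaches retire,
moves it to the delivered list.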