Re: MSI interrupts

Sujet : Re: MSI interrupts
De : cross (at) *nospam* spitfire.i.gajendra.net (Dan Cross)
Groupes : comp.arch
Date : 28. Mar 2025, 02:01:59
Organisation : PANIX Public Access Internet and UNIX, NYC
Message-ID : <vs4se6$hvb$1@reader1.panix.com>
References : 1 2 3 4
User-Agent : trn 4.0-test77 (Sep 1, 2010)
In article <601781e2c91d42a73526562b419fdf02@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Thu, 27 Mar 2025 17:19:21 +0000, Dan Cross wrote:
In article <7a093bbb356e3bda3782c15ca27e98a7@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Wed, 26 Mar 2025 4:08:57 +0000, Dan Cross wrote:
[snip]
1. `from` may point to a cache line
2. `to` may point to a different cache line.
3. Does your architecture use a stack?
>
No
>
You make multiple references to a stack just below.  I
understand stack references are excluded from the atomic event,
but I asked whether the architecture uses a stack.  Does it, or
doesn't it?
>
   What sort of alignment criteria do you impose?
>
On the participating data, none. On the cache line--cache line.
>
I meant for data on the stack.  The point is moot, though, since
as you said stack data does not participate in the atomic event.
>
Stack does not NECESSARILY participate. However, if you perform
an esmLOCKload( SP[23] ) then that line on the stack is participating.

Very good.

   At first blush, it seems to me that
   the pointers `from_next`, `from_prev`, and `to_next` could be
   on the stack and if so, will be on at least one cache line,
   and possibly 2, if the call frame for `MoveElement` spans
   more than one.
>
The illustrated code is using 6 participating cache lines.
Where local variables are kept (stack, registers) does not
count against the tally of participating cache lines.
>
Huh.
>
There are 6 esmLOCKxxxxx() so there are 6 participating lines--
and these are absolutely independent of where the variables
are located. The variables carry information but do not
participate; they carry data between operations on the
participating lines.

That was clearly in the code.  That was not what the "Huh" was
in response to.

      How would this handle something like an MCS lock, where
the lock node may be on a stack, though accessible globally in
some virtual address space?
>
As illustrated above, esmLOCKload( SP[23] ) will cause a line on
the stack to participate in the event.

Ok, good to know.

----------------
But later you wrote,
>
So, if you want the property whereby the lock disappears on any
control transfer out of the event {exception, interrupt, SVC, SVR, ...};
then you want to use my ATOMIC stuff; otherwise, you can use the
normal ATOMIC primitives everyone and his brother provide.
>
Well, what precisely do you mean here when you say, "if you want
the property whereby the lock disappears on any control transfer
out of the event"?
>
If you want that property--you use the tools at hand.
If you don't, just use them as primitive generators.
>
I wasn't clear enough here.  I'm asking what, exactly, you mean
by this _property_,
>
The property that if an interrupt (or exception or SVC or SVR)
prevents executing the event to its conclusion, HW makes the
event look like it never started. So that when/if control returns
we have another chance to execute the event as-if ATOMICally.

Ok.  So let's be crystal clear here: after the event concludes
successfully, a subsequent exception, interrupt, or whatever,
will not magically roll back any of the values set and made
visible system-wide as a result of the successful conclusion of
the atomic event.  Correct?  That's what I've been asking.

                    not what you mean when you write that one
can use these atomic events if one wants the property.  That is,
I'm asking you to precisely define what it means for a lock to
"disappear".
>
It seems clear enough _now_ that once the event concludes
successfully the lock value is set to whatever it was set to in
during the event, but that wasn't clear to me earlier and I
wanted confirmation.
>
Not "the lock value" but "all participating values" are set
and become visible system-wide in a single instant.

Sure.  But there was only one participating value in the
spinlock example.  ;-}

No 3rd
party sees some set and others remain unset. Everybody sees
all of them change value between this-clock and its successor.

Sure.  You've said this all along, and I've never disputed it.

In particular, it seems odd to me that one would bother with a
lock of some kind during an event if it didn't remain set after
the event.
>
Consider CompareTripleSwapFour() as an atomic primitive. How
would you program this such that nobody nowhere could ever
tell that the four updated locations changed in any order
other than simultaneously ??? Without using a LOCK to guard the
event ??? And without demanding that all four updates are
to the same cache line ???
>
{{Nobody nowhere included ATOMIC-blind DMA requests}}

The fundamental issue here is that, when I refer to a "lock" I
have been talking about a software mutual exclusion primitive,
and you are referring to something else entirely.

I would have thought my meaning was clear from context;
evidently not.

            If you do all of your critical section stuff inside
of the event, and setting the lock in the event is not visible
until the event concludes, why bother?  If on the other hand you
use the event to set the lock, why bother doing additional work
inside the event itself?  But in this case I definitely don't
want something else to come along and just unset the lock on its
own, higher priority or not.
>
It is clear you do not understand the trouble HW takes to implement
even DCADS where between any 2 clocks, one (or more) SNOOPs can
take the lines you are DCADSing. It is this property of cache
coherence that gets in the way of multi-operation ATOMIC events.

As above, when I refer to "setting the lock" I am referring to
acquiring a mutex; a software primitive.  I am not referring to
what the hardware does internally.  Consider what I wrote above,
the section that you quoted, in that context.

In 2004 Fred Weber came to me and asked: "Why can't we (AMD) give
Microsoft the DCADS they want?" I dug into it, and ASF was my
solution in the x86 ISA; ESM is my solution in a RISC-based ISA. It
removes the need to add ATOMIC primitives to the ISA over generations.
>
It also provides the ability to do other things--which apparently
you will not use--sort of like complaining that your new car has
auto-park a feature you will never use.

"It is clear you do not understand the trouble SW takes to
implement even simple things that are beyond the capability of
hardware."

(Back at you.  I think I've been pretty polite here; I'm getting
tired of the barbs.)

What I _hope_ you mean is that if you transfer "out of the
event" before the `esmLOCKstore` then anything that's been done
since the "event" started is rolled back, but NOT if control
transfer happens _after_ the `esmLOCKstore`.  If THAT were the
case, then this entire mechanism is useless.
>
One can use an esmLOCKstore( *SP, 0 ) to abandon an event. HW
detects that *SP is not participating in the event, and uses
the lock bit in the instruction to know it is time to leave
the event in the failed manner. No ST is performed to the
non-participating cache line. {{I guess I could wrap the
abandonment into a macro ...}}
>
Again, not what I was asking.  I'm trying to understand what you
meant when you talked about disappearing locks.
>
If the event cannot be completed due to an interrupt or exception or
SVC or SVR, the HW backs up the event such that it looks like
it never started. So when control returns, the ATOMIC event runs
in its entirety ATOMICally !! So, the illusion of ATOMICITY is
preserved.

Sigh.  You've said this many times over now.  I get it.  I'm not
disputing it, nor do I believe I ever have.

Again, when I said "lock" earlier, I am referring to a mutex
from the perspective of software.  That is, the value of the
structure member that I called "lock" in the version of the
`Spinlock` struct that I posted however-many responses ago, and
that I keep referring to in example code.  I am not asking about
what the hardware does internally.  I'm not talking about locked
bus cycles or whatever.  I'm not referring to cache line address
monitors or any of these other things that you keep referring
to.  I'm talking about the value of the structure member called
`lock` in the `Spinlock` C structure I posted earlier.
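For concreteness, the struct in question was something like this.  The
earlier post isn't quoted here, so this is a reconstruction; treat the
exact field layout as illustrative, not a verbatim repost:

```c
#include <stdint.h>

/* Reconstruction of the Spinlock struct referred to throughout;
   the only thing that matters here is the member named `lock`,
   the software-visible value I keep asking about. */
typedef struct Spinlock Spinlock;
struct Spinlock {
	uint32_t lock;	/* 0 = free, nonzero = held */
};
```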

What I have been asking, over and over again, is whether that
_value_ "disappears" _after_ an atomic event successfully
concludes.  That is, the value that was written into the memory,
not whatever internal marker you use in hardware to monitor and
guard those stores until the atomic event completes or is
otherwise exited, aborted, cancelled, interrupted, or however
you prefer to refer to it not completing successfully.  That is
a totally separate thing and I do not presume to speak of how it's
done, though in fairness to you, you've made it reasonably clear.

Does that clarify what I have been asking about?  I honestly
thought this would be obvious from context.

Again, I think you meant that, if an atomic event that sets a
lock variable fails,
>
The event has NO lock variable--it has up to 8 cache line address
monitors.

The structure member is literally called `lock` and the context
is software spinlocks.  Note, spinlocks because they're simple
and easy to discuss, not because they're great mutex primitives.

As frustrating as this evidently is to you, it strikes me that
it actually reinforces my point: hardware has no insight into
what the software is _doing_ at a higher semantic level.

                     whether it got far enough along to set the
variable inside the event or not is not observable if the event
fails.
>
Non-participating cache lines can be used to leak information
out of the event (like for debug purposes). Participating data
does not leak.
>
        But if the event succeeds, then the variable is set and
that's that.  IF that's the case, then great!  But if not, then
disappearing locks are bad.
>
It is NOT a FRIGGEN LOCK--

I am talking about the spinlock.  That is a software object, and
in that context, it is very much called a "lock".

I understand when you say that it is not a lock, it is not a
lock in the sense relevant to your world in hardware, but that's
not what _I'm_ talking about.

I'm trying to square what you meant when you talked about
"disappearing locks" with what happens to those software objects
_after_ an atomic event modifying them successfully completes.

Evidently you were talking about something else entirely, and
that's fine.  I gather it has nothing to do with _my_ notion of
locks after they've been successfully acquired; I was just
trying to confirm that.

it is a set of address monitors which
are used to determine the success of the event -- the monitors
remain invisible to SW just like miss buffers remain invisible
to SW. The monitors are not preservable across privilege changes.

See above.  Tell me what you want me to call it and I'll use
that terminology instead.

                             This is especially murky to me
because there seem to be bits of OS primitives, like threads and
priority, that are known directly to the hardware.
>
Just its instantaneous privilege level--which is congruent to
its register file, Root Pointer, ASID, and operating modes.

What is "it" here?

------------------
>
Ok, so perhaps this is a reasonable implementation of spinlock
acquire, as I wrote to Stefan yesterday:
>
void
spinlock_acquire(volatile Spinlock *lk)
{
    while (esmLOCKload(&lk->lock) != 0)
        cpu_spin_hint();
    esmLOCKstore(&lk->lock, 1);
}
>
Yes?  If this is preempted, or another CPU wins the race on the
lock, the hardware will back this up and start over in the while
loop, right?
>
Yes.
------------
Remember only the lines participating in the event count. For
example, say one had sprintf() inside the event. None of the
lines touched by sprintf() are participating in the event,
and are not counted against the 8 cache lines available.
>
This doesn't change the point, though.  Anything that touches
more than 8 cache lines is not suitable for an atomic event.
>
I see what you are saying, but the event is allowed to have an
unbounded number of touches to non-participating lines.

I don't see how that's helpful if I need to touch data in more
than 8 cache lines atomically.
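For what it's worth, the obvious fallback when a critical section
touches more than 8 lines is to use the atomic machinery only to
acquire a software lock, and do the big update under it.  A sketch,
substituting standard C11 atomics for the esm intrinsics (which I
obviously can't run here); names are mine:

```c
#include <stdatomic.h>

/* Stand-in spinlock built on C11 atomics; on the esm hardware the
   acquire loop would be the esmLOCKload/esmLOCKstore pair shown
   earlier in this thread. */
typedef struct Spinlock { atomic_int lock; } Spinlock;

static void
spinlock_acquire(Spinlock *lk)
{
	/* spin until we atomically change 0 -> 1 */
	while (atomic_exchange_explicit(&lk->lock, 1,
	    memory_order_acquire) != 0)
		;
}

static void
spinlock_release(Spinlock *lk)
{
	atomic_store_explicit(&lk->lock, 0, memory_order_release);
}

/* 128 longs span well over 8 cache lines on any common geometry. */
enum { NCOUNTER = 128 };

static void
bump_all(Spinlock *lk, long counters[NCOUNTER])
{
	spinlock_acquire(lk);
	for (int i = 0; i < NCOUNTER; i++)	/* arbitrarily large section */
		counters[i]++;
	spinlock_release(lk);
}
```

The point being: the event guards only the lock word; everything else
is ordinary data-based mutual exclusion that the hardware never sees.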

Insertion into a doubly linked list is already up to 6 lines,
>
I moved a doubly linked element from a CDS and inserted it
somewhere else doubly-linked in same CDS in 6 cache lines.
You need to refine how you count.

<Tongue-in-cheek mode on>
Ah, yes.  The program that would dump core if you tried to move
from the head of a list or the rear, and that couldn't insert at
the head pointer, was, in fact, called `MoveElement`.  My
mistake, even if it was clear that that's what I was talking
about, given that you quoted where I referred to it in the
context of work stealing later on.  ;-P

BOOLEAN InsertElement( Element *el, Element *to )
{
    tn = esmLOCKload( to->next );
    esmLOCKprefetch( el );
    esmLOCKprefetch( tn );
    if( !esmINTERFERENCE() )
    {
                  el->next = tn;
                  el->prev = to;
                  to->next = el;
    esmLOCKstore( tn->prev,  el );
                  return TRUE;
    }
    return FALSE;
}

If you're going to be pedantic when I slip up and call "move"
"insert", then let's be pedantic.

This code won't compile, because of the obvious error of not
defining `tn`.  Even if you fix that, it's not correct if you
try to insert at the end of the list.  It's not clear how you
would insert at the head of a list, or into an empty list, but
that depends on how you represent the list itself.  I.e., the
head may just be a pointer to the first element, or it might be
an otherwise unused node; something like this:

typedef struct List List;
struct List {
    Element head;
};

where the first element in the list is always `head.next`.

It would help if you used standard types, like `bool`, which has
been standard for more than 25 years now, since it was
introduced in C99 as `_Bool`, with accompanying typedefs and
macros in `stdbool.h` for `bool`, `true`, and `false`.

And doesn't this presume that `el->next`, `el->prev` and `el`
are all on the same cache line?  Similarly with `tn` and
`tn->prev`?

You need to refine how you write programs.  ;-}

Here's a better version, that assumes the list representation
above.

#include <stddef.h>
#include <stdbool.h>

typedef struct Element Element;
struct Element {
    Element *next;
    Element *prev;
};

typedef struct List List;
struct List {
    Element head;
};

Element *
ListBegin(List *list)
{
    return list->head.next;
}

bool
InsertElement(Element *el, Element *to)
{
    Element *tn = esmLOCKload(&to->next);
    esmLOCKprefetch(&el->next);
    esmLOCKprefetch(&el->prev);
    if (tn != NULL)
        esmLOCKprefetch(&tn->prev);

    if (!esmINTERFERENCE()) {
        el->next = tn;
        el->prev = to;
        if (tn != NULL) {
            to->next = el;
            esmLOCKstore(&tn->prev, el);
        } else {
            /* Inserting at the tail: no successor to point back
               at us, so the terminating store is the link from `to`. */
            esmLOCKstore(&to->next, el);
        }
        return true;
    }
    return false;
}

Inserting at the head, or if the list is empty, are both easy
now: `InsertElement(&list.head, el)`.  But note, again, that the
first element of the list is really `list.head.next`, hence the
`ListBegin` function.

Here I count up to three, but possibly up to 4, cache lines.
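As an aside, the count depends on where things fall relative to line
boundaries.  A trivial helper (hypothetical, nothing to do with the
esm API, and assuming 64-byte lines, which is typical but not
universal) makes the check concrete:

```c
#include <stdint.h>

/* Hypothetical helper: do two addresses fall on the same cache line?
   Assumes 64-byte lines. */
#define CACHE_LINE 64u

static int
same_cache_line(const void *a, const void *b)
{
	return ((uintptr_t)a / CACHE_LINE) == ((uintptr_t)b / CACHE_LINE);
}
```

So `el`'s two pointer fields count as one line unless `el` itself
straddles a boundary, in which case they count as two.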

Note that if you did not represent the list this way, then in
order to support generalized insertions anywhere in the list,
you'd either need a circular representation (which is annoying
to use in practice), or you'd most likely have to support
separate head and tail pointers and two operations, one to
insert before, and another after, the `to` pointer.  But then
you'd have to be prepared to update either the head or tail
pointers (or both, if the list is empty); either way it's
potentially two more cache lines to account for, though if the
list is empty you don't have to touch `tn->prev`.
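For completeness, the circular representation I'm waving at looks
something like this (plain pointer stores shown; the esm wrappers
would go in as before).  The sentinel means `to->next` is never NULL,
so one insert routine covers the head, the tail, and the empty list:

```c
#include <stdbool.h>

typedef struct Element Element;
struct Element {
	Element *next;
	Element *prev;
};

/* Circular list with a sentinel: head.next is the first element,
   head.prev the last, and an empty list points at itself. */
static void
ListInit(Element *head)
{
	head->next = head;
	head->prev = head;
}

/* Insert el after to; to may be the sentinel (insert at head). */
static void
InsertAfter(Element *el, Element *to)
{
	Element *tn = to->next;	/* never NULL in the circular form */
	el->next = tn;
	el->prev = to;
	to->next = el;
	tn->prev = el;
}

static bool
ListEmpty(const Element *head)
{
	return head->next == head;
}
```

The price is that every traversal has to test against the sentinel
rather than against NULL, which is the "annoying in practice" part.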

recall that my original hypothetical also involved popping off a
queue, which adds another (the queue head).  If I throw in
incrementing a couple of counters, I can't do it.  And this seems
like a perfectly reasonable real-world scenario; consider a work
stealing thread scheduler that takes a task from one core to
another and keeps some stats on how things move.
>
Or maybe even just swapping places between two elements in a
linked list.  For example:
>
The example has 3 structures each requiring 3 participating lines.
{themself, the struct their next pointer points at and the struct
their prev pointer points at}
>
3×3 = 9 and there is no getting around it.

Yup.  So what does the hardware do if I try it?

-------------------------
>
But there isn't a lot of publicly available documentation for
any of this, at least none that I could find, and from the way
it's been described thus far it seems like there are a number of
edge cases that it is not clear how one is supposed to address.
So I want to understand the boundaries of where these hardware
facilities are applicable, given architectural limitations.  I
think this thread has shown that those boundaries exist.  The
universe of interesting and useful software is big, and someone,
somewhere, is going to trip over those limitations sooner or
later.  Once we accept that, we must ask, "Then what?"  It seems
obvious that programmers are going to have to find techniques to
work around those limitations, and that the mechanisms chosen
are unknowable to hardware.
>
Fair enough.
>
But now we're back in the same boat of software having to deal
with all the thorny problems that these mechanisms were designed
to address in the first place, like priority inversion: take the
spinlock from earlier, the hardware has no semantic knowledge
that the lock is used as a mutex, and can't know; it's just some
bit in memory somewhere that was set during an atomic event.
>
It is not a FRIGGEN lock, it is a set of address monitors which
guard the instructions in the event.

Now I have to wonder if you're deliberately ignoring the context.

I called it a "lock" in the same sentence I said it was a
"spinlock".  Are you seriously going to tell me that you could
not figure out that the "lock" I was referring to was the
spinlock?  You know that that's a software thing; you yourself
said that.

Note the words, "it's just some bit in memory that _was set
during an atomic event._"  Is it not clear that the event is
over?  It's done.  It was successful.  All of the address
monitors guarding instructions in the event are no longer
relevant; they've been retired or whatever.

Now, some (software) thread holds a (software) (spin)lock.
I've tried over and over to make this clear, yet you insist on
going back to talking about what happens _during_ an atomic
event.  I'm talking about _after_ the event completed
_successfully_.

A Lock is a SW concept
which requires a value change to announce to others that "you
got it".

That's exactly what `spinlock_acquire` from above _does_, isn't
it?

You cannot seriously expect me to accept that you did not
understand this.

Maybe ESM helps in a race when trying to acquire the spinlock,
but once it's acquired, it's acquired, and because the spinlock
is not a primitive understood by the hardware, the hardware has
no way of incorporating the spinlock's semantics into its view
of priority, threading, etc.
>
Consider a 1024 core system and the time-quantum goes off and
every core wants to rotate the thread it is running; putting
it at the back of the execution queue and taking off the one on
the front to run. Nick McLaren timed a big SUN server (100 core)
on this and it could take up to 6 seconds !! for something that
should take a few milliseconds. This is because of the BigO( n^3 )
nature of bus traffic wrt SNOOPing on that system.

This was solved with per-core run queues.

You're senior enough that I would think it clear that spinlocks
are just a convenient illustrative example here because they're
so simple, but their pitfalls as core counts increase are well
studied and understood.  In the real world, mutex structures
like MCS locks or CLH locks scale approximately linearly with
core-count under contention.  I posted a reference to this
earlier, but here it is again.  You may have a hard time
accepting it because they call the things that they refer to,
"locks".  ;-}
https://pdos.csail.mit.edu/papers/linux:lock.pdf
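Since MCS locks keep coming up: here's a garden-variety MCS lock over
C11 atomics, sketched from the published algorithm (the names are
mine, not from any particular implementation).  Note that the queue
node can live on the acquiring thread's stack, which is exactly the
case I asked about earlier:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct MCSNode MCSNode;
struct MCSNode {
	_Atomic(MCSNode *) next;
	atomic_bool locked;
};

typedef struct MCSLock {
	_Atomic(MCSNode *) tail;	/* NULL when the lock is free */
} MCSLock;

static void
mcs_acquire(MCSLock *lk, MCSNode *me)
{
	atomic_store(&me->next, NULL);
	atomic_store(&me->locked, true);
	/* Swing the tail to ourselves; whoever was there is our
	   predecessor and will hand the lock to us. */
	MCSNode *prev = atomic_exchange(&lk->tail, me);
	if (prev != NULL) {
		atomic_store(&prev->next, me);
		while (atomic_load(&me->locked))
			;	/* spin on our own (often stack) line */
	}
}

static void
mcs_release(MCSLock *lk, MCSNode *me)
{
	MCSNode *succ = atomic_load(&me->next);
	if (succ == NULL) {
		/* No known successor; if the tail is still us, free it. */
		MCSNode *expected = me;
		if (atomic_compare_exchange_strong(&lk->tail, &expected, NULL))
			return;
		/* Someone swung the tail but hasn't linked in yet. */
		while ((succ = atomic_load(&me->next)) == NULL)
			;
	}
	atomic_store(&succ->locked, false);	/* hand off */
}
```

Each waiter spins on its own node, so under contention the coherence
traffic stays local, which is why these scale where naive spinlocks
don't.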

One can use ESM to convert that particular case to BigO( 3 )
or in the general case of random insert and removal times:
BigO( ln( n ) ).

Modern scheduling algorithms already run in amortized constant
time on conventional hardware.
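E.g., with per-core run queues, rotating at quantum expiry is a couple
of pointer updates with no cross-core snooping at all.  A toy sketch
(names illustrative, not from any real scheduler):

```c
#include <stddef.h>

/* Toy per-core run queue: moving the front task to the back and
   taking the next one is O(1), touching only core-local memory. */
typedef struct Task Task;
struct Task {
	Task *next;
	int id;
};

typedef struct RunQueue {
	Task *head;	/* next task to run */
	Task *tail;	/* back of the queue */
} RunQueue;

static void
rq_push(RunQueue *rq, Task *t)
{
	t->next = NULL;
	if (rq->tail == NULL)
		rq->head = t;
	else
		rq->tail->next = t;
	rq->tail = t;
}

/* Rotate: move the front task to the back, return the new front. */
static Task *
rq_rotate(RunQueue *rq)
{
	Task *t = rq->head;
	if (t == NULL || t->next == NULL)
		return t;	/* zero or one task: nothing to do */
	rq->head = t->next;
	rq_push(rq, t);
	return rq->head;
}
```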

So while an atomic event might help me in the implementation of
a generic mutex of some kind, so that I can protect critical
sections that can't be handled with the atomic event framework
directly because of their size, that's the extent of the help it
can give me, because once the mutex is acquired the hardware has
no insight into the fact that it's a mutex.
>
Exactly:: once you convert address-based mutual exclusion to
data-based mutual exclusion, HW no longer sees it at all.

That's what I've been saying all along.

                                             So if some higher
priority thing subsequently comes along and preempts the thread
holding that lock and tries to take that lock itself, then the
hardware can't help me out anymore, and I'm back to a
priority-inversion deadlock that has to be dealt with by
software.
>
HW can only "take" address based locking, not data-based locking.

Precisely.  So it can't help me with priority inversion
generally.

The point is, even if the hardware gives me a cool
mechanism for helping prevent priority inversion problems,
programmers are inevitably going to have to handle those
situations themselves anyway.
>
Never promised you a rose garden.

It's ok; I've already been through Parris Island.  And OCS at
Quantico, AFG deployment, la la la.  But that's neither here nor
there.  Point is, I don't need your roses, and I probably don't
want your threads or anything more advanced than things that let
me build my own synchronization primitives, threads, priority
abstractions, and so on.

This is the sort of thing I'm trying to communicate, it's really
not me sitting here tossing out barbs like, "oh yeah?!  Well,
how about THIS one, smart guy?"
>
Look at it this way:: over the decade from 2000-2010 x86 added
a new ATOMIC instruction every iteration.
>
Using ASF or ESM, my architecture would never have to add one.
SW has the tools to build what it wants/needs.

Ok, now we're getting somewhere useful.  That's handy.

Look, why don't you put the documentation of this stuff
somewhere publicly accessible?  It'd probably save people like
me a lot of time, frustration, and dad-level insults.  Probably
save you some frustration as well.

- Dan C.

