In article <4603ec2d5082f16ab0588b4b9d6f96c7@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Thu, 20 Mar 2025 20:25:59 +0000, Dan Cross wrote:
In article <fe9715fa347144df1e584463375107cf@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Thu, 20 Mar 2025 12:44:08 +0000, Dan Cross wrote:
In article <af2d54a7c6c694bf18bcca6e6876a72b@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Wed, 19 Mar 2025 14:03:56 +0000, Dan Cross wrote:
In article <36b8c18d145cdcd673713b7074cce6c3@www.novabbs.org>,
MitchAlsup1 <mitchalsup@aol.com> wrote:
I want to address the elephant in the room::
>
Why disable interrupts AT ALL !!
>
So that you can have some mechanism for building critical
sections that span multiple instructions, where just having
access to atomics and spin locks isn't enough.
>
You can do this with priority, too.
>
In the limit, disabling interrupts is just setting the priority
mask to infinity, so in some sense this is de facto true. But I
don't see how having a prioritization scheme is all that useful.
Consider that when you're starting the system up, you need to
do things like set up an interrupt handler vector, and so on;
>
The Interrupt vector is completely software. HW delivers the MSI-X
message and the service this message is for, and SW decides
what to do with this message to that service provider. No HW vectoring,
but HW does deliver all the bits needed to guide SW to its goal.
>
Maybe I'm missing the context here, and apologies if I am. But
which interrupt disable are we talking about here, and where? I
>
We are talking about the core's Interrupt Disablement and nothing
of the gates farther out in the system that control interrupt
deliveries (or not).
Ok, that's what I thought.
am coming at this from the perspective of systems software
running on some CPU core somewhere; if I want that software to
enter a critical section that cannot be interrupted, then it
seems to me that MSI-X delivery is just a detail at that point.
I might want to prevent IPIs, or delivery of local interrupts
attached directly to the CPU (or something like an IOAPIC or
GIC distributor).
>
All of those flow through the same mechanism; even a locally
generated timer interrupt.
Right. So whether those interrupts come from MSI-X or anything
else is mostly irrelevant.
[snip/]
Sometimes you really don't want to be interrupted.
>
And sometimes you don't want to be interrupted unless the
"house is on fire"; I cannot see a time when "the house is
on fire" that you would not want to take the interrupt.
>
Is there one ?!?
>
Consider a thread that takes a spinlock; suppose some
high-priority interrupt comes in while the thread is holding
that lock. In response to the interrupt, software decides to
suspend the thread and switch to some other thread; that thread
wants to lock the spin lock that the now-descheduled thread is
holding: a classic deadlock scenario.
>
If we can s/taken/attempts/ in the first line of that paragraph::
If it has merely attempted, then it's a completely different
scenario. Something that is more transactional in nature is no
longer just a simple CAS-based spinlock. It may be _better_,
but it is definitely different.
My architecture has a mechanism to perform ATOMIC stuff over multiple
instruction time frames, with the property that when a higher priority
thread interferes with a lower priority thread, the HPT wins
and the LPT fails its ATOMIC event. It is for exactly this reason
that I drag priority through the memory hierarchy--so that real-time
things remain closer to real-time (no or limited priority inversions).
Without being able to see this in practice, it's difficult to
speculate as to how well it will actually work in real-world
scenarios. What is the scope of what's covered by this atomic
thing?
Consider something as simple as popping an item off of the front
of a queue and inserting it into an ordered singly-linked list:
In this case, I'm probably going to want to take a lock on the
queue, then lock the list, then pop the first element off of the
queue by taking a pointer to the head. Then I'll set the head
to the next element of that item, then walk the list, finding
the place to insert (make sure I'm keeping track of the "prev"
pointer somehow), then do the dance to insert the item: set the
item's next to the thing after where I'm inserting, set the
prev's next to the item if it's not nil, or set the list head
to the item. Then I unlock the list and then the queue (suppose
for some reason that it's important that I hold the lock on the
queue until the element is fully added to the list).
This isn't complicated, and for a relatively small list it's not
expensive. Depending on the circumstances, I'd feel comfortable
doing this with only spinlocks protecting both the queue and
the list. But it's more than a two or three instructions, and
doesn't feel like something that would fit well into the "may
fail atomic" model.
Mandating that typical SW must use this mechanism will not fly,
at least in the first several iterations of an OS for the system.
So the std. methods must remain available.
>
AND to a large extent this sub-thread is attempting to get a
usefully large number of them annotated.
It sounds like the whole model is fairly different, and would
require software writers to consider interrupts occurring in
places they wouldn't have to in a more conventional design. I
suspect you'd have a fairly serious porting issue with existing
systems, but maybe I misunderstand.
A valid response here might be, "don't context switch from the
interrupt handler; use a DPC instead". That may be valid, but
it puts a constraint on the software designer that may be onerous
in practice: suppose the interrupt happens in the scheduler,
while examining a run queue or something. A DPC object must be
available, etc.
>
This seems to be onerous on SW mainly because of too many unknowns
to track down and deal with.
Yes.
Further, software must now consider the complexity of
potentially interruptable critical sections. From the
standpoint of reasoning about already-complex concurrency issues
it's simpler to be able to assert that (almost) all interrupt
delivery can be cheaply disabled entirely, save for very
special, specific, scenarios like NMIs. Potentially switching
away from a thread holding a spinlock sort of defeats the
purpose of a spinlock in the first place, which is a mutex
primitive designed to avoid the overhead of switching.
>
Agreed.
The idea of priority as described above seems to prize latency
above other considerations, but it's not clear to me that that
is the right tradeoff. Exactly what problem is being solved?
- Dan C.