In article <t3CFP.529522$f81.217510@fx48.iad>,
EricP <ThatWouldBeTelling@thevillage.com> wrote:
Dan Cross wrote:
In article <8uzEP.115550$Xq5f.66883@fx38.iad>,
EricP <ThatWouldBeTelling@thevillage.com> wrote:
Dan Cross wrote:
In article <kVgEP.1277108$_N6e.605199@fx17.iad>,
EricP <ThatWouldBeTelling@thevillage.com> wrote:
Dan Cross wrote:
[snip]
Consider a thread that takes a spinlock; suppose some
high-priority interrupt comes in while the thread is holding
that lock. In response to the interrupt, software decides to
suspend the thread and switch some other thread; that thread
wants to lock the spin lock that the now-descheduled thread is
holding: a classic deadlock scenario.
Terminology: mutexes coordinate mutual exclusion between threads,
spinlocks coordinate mutual exclusion between cpu cores.
Windows "critical sections" are mutexes with a fast path.
A spin lock is simply any lock where you spin trying to acquire
the lock, as opposed to a blocking synchronization protocol.
Here I'm using the terminology of Herlihy and Shavit [Her08].
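To make that concrete, here is a minimal sketch in C11 of what "spin
trying to acquire the lock" means; the names are mine, not from any
of the systems under discussion:

    #include <stdatomic.h>

    typedef struct { atomic_flag f; } spinlock_t;
    /* initialize with: spinlock_t l = { ATOMIC_FLAG_INIT }; */

    static void spin_lock(spinlock_t *l) {
        /* Loop (spin) until the test-and-set observes the lock free. */
        while (atomic_flag_test_and_set_explicit(&l->f, memory_order_acquire))
            ;  /* busy-wait: no blocking, no scheduler involvement */
    }

    static void spin_unlock(spinlock_t *l) {
        atomic_flag_clear_explicit(&l->f, memory_order_release);
    }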
I'm using terminology in common use on multiprocessor systems like
VMS since 1980's and later WinNT.
>
Each OS data structure has different reentrancy requirements.
- Ones accessed by threads are guarded by mutexes.
Example on Windows would be the page tables as paging is done by threads.
- Ones accessed by OS software interrupt level are guarded by spinlocks.
Example on Windows is the Scheduler tables.
- Ones accessed by HW interrupts are guarded by interrupt spinlocks.
Example on Windows is device interrupt service routines.
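As a hedged illustration of that three-level scheme, using documented
NT kernel interfaces (the Mutex, Lock, and Interrupt objects here are
hypothetical driver state, assumed already initialized):

    KIRQL oldIrql;

    /* Thread-level data: block on a kernel mutex at PASSIVE_LEVEL. */
    KeWaitForSingleObject(&Mutex, Executive, KernelMode, FALSE, NULL);
    /* ... */
    KeReleaseMutex(&Mutex, FALSE);

    /* Scheduler-level data: spinlock raises IRQL to DISPATCH_LEVEL. */
    KeAcquireSpinLock(&Lock, &oldIrql);
    /* ... */
    KeReleaseSpinLock(&Lock, oldIrql);

    /* Device data shared with an ISR: interrupt spinlock raises IRQL
       to the device's DIRQL. */
    oldIrql = KeAcquireInterruptSpinLock(Interrupt);
    /* ... */
    KeReleaseInterruptSpinLock(Interrupt, oldIrql);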
I'm not sure what you're saying here, or rather, how it is
relevant.
Your earlier statement was that "mutexes coordinate mutual
exclusion between threads, spinlocks coordinate mutual exclusion
between cpu cores." But this is not the case.
Taking VMS as an example, from [Gol94], sec 9.5 (page 255):
|A thread of execution running on one processor acquires a
|spinlock to serialize access to the data with threads of
|execution running on other processors. Before acquiring
|the spinlock, the thread of execution raises IPL to block
|accesses by other threads of execution running on the same
|processor. The IPL value is determined by the spinlock being
|locked.
Goldberg and Saravanan seems quite explicit that threads can
acquire and hold spinlocks on OpenVMS, which seems to be
something that you were objecting to?
>
It took me a while to find a copy of that book online.
That 1994 book relates to VMS before they added kernel threads to the OS.
Before you said you were using terminology commonly used on
systems "since [the] 1980s". The systems you cited were VMS and
Windows. I quoted references about both systems from the mid
1990s and mid 2000s, respectively. If that terminology were
common in the 1980s, as you suggested, surely it would be in a
book from 1994? Regardless, see below.
At that point VMS scheduled processes, an execution context plus virtual
space, like most *nix (except I think Solaris which had kernel threads).
>
It says on pg 3: "The term thread of execution as used in this book
refers to a computational entity, an agent of execution."
This refers generally to the various code sequences from different contexts
that may be intertwined on a processors core for interrupts, exceptions,
Dpc (called driver forks in VMS), process AST's (software interrupts),
and applications.
>
In a later book on VMS after they added kernel scheduled threads they
consistently use the two terms "thread of execution" and "kernel thread"
to distinguish the two meanings.
Are you referring to the 1997 revision of Goldenberg et al,
that covers VMS 7.0? [Gol97] In that context, VMS "kernel
threads" refer to threads of execution within a program that
are managed and scheduled by the VMS executive (aka kernel).
This is described on page 151, section 5.1:
|A thread of execution is a sequential flow of instruction
|execution. A traditional program is single-threaded,
|consisting of one main thread of execution. In contrast, a
|multithreaded program divides its work among multiple threads
|that can run concurrently. OpenVMS supplies DECthreads, a
|run-time library, to facilitate the creation, synchronization,
|and deletion of multiple user-mode threads.
|
|OpenVMS Alpha Version 7.0 supports multiple execution contexts
|within a process: each execution context has its own hardware
|context and stacks, and can execute independently of the others;
|all execution contexts within a process share the same virtual
|address space. The term _kernel thread_ refers to one of
|those execution contexts and the thread of execution running
|within it. This volume uses the term _multithreaded_ to refer
|to a process with multiple kernel threads.
|
|The kernel thread is now the basis for OpenVMS scheduling.
|[...]
It continues on page 152:
|The term _kernel thread_ is a somewhat confusing name, in that
|instructions executing in a kernel thread can run in any of the
|four access modes, not only kernel mode. The term arose to
|distinguish the current multithreaded model, supported within
|the OpenVMS executive (or kernel), from the previous model,
|supported only by DECthreads.
Sadly, the later books on OpenVMS internals are more focused on
specific subsystems (scheduling and process management; memory
management) than the earlier volumes.  Details of the
implementation of synchronization primitives are almost entirely
missing.
There is discussion about how those primitives are _used_,
however. [Gol03] mentions spinlocks, such as the SCHED
spinlock, in the context of memory management data structures.
E.g., on page 51:
|The swappability of the PHD [Process Header] results in several
|different methods for synchronizing access to fields within it.
|Because a PHD can be inswapped to a different balance set slot
|than it last occupied, accesses to a PHD that use its system
|space address must be synchronized against swapper
|interference. Accesses from a kernel thread to its own PHD can
|be made with the SCHED spinlock held to block any rescheduling
|and possible swapping of the process. Holding the MMG spinlock
|is another way to block swapping.
Notably, the language here is again in terms of (software)
threads. It seems clear enough that in the VMS context,
spinlocks are conceptually held by threads, not cores.
I find the phrase "thread of execution" ambiguous and try not to use it.
After the term thread had been pretty much adopted to mean kernel thread
as an OS scheduled execution context, I would have preferred if they used
something like "strand of execution" to avoid the overloading.
I disagree with that characterization. "Thread" generically
means what they said it means above: a sequential flow of
instruction execution. Generically, on most systems, "threads"
are just register state and a stack in some address
space.
Fibers/Strands/etc tend to refer to lightweight cooperatively
scheduled concurrency structures.
Anyway, I don't think that much changes the point I was making.
I don't see any evidence of that terminology in common use, in any case.
Turning to Windows, [Rus12] says about spin locks on page 179:
|Testing and acquiring the lock in one instruction prevents a
|second thread from grabbing the lock between the time the first
|thread tests the variable and the time it acquires the lock.
Note that, again, the language here is thread-centric.
>
He says on pg 12: "A thread is the entity within a process that
Windows schedules for execution." meaning "kernel thread".
>
He also uses the term thread loosely as you can see on pg 13 when he says:
"Fibers allow an application to schedule its own �threads� of execution"
I don't see how that changes the point.
He also says pg 87 (to paraphrase) that scheduling priority is an
attribute of a thread whereas IRQL is an attribute of a processor.
Which is the distinction I am trying to make: when you raise IRQL
it stops being a thread executing and starts being a processor core.
The exact paragraph I suspect you are referring to is this:
|IRQL priority levels have a completely different meaning than
|thread-scheduling priorities (which are described in chapter
|5). A scheduling priority is an attribute of a thread, whereas
|an IRQL is an attribute of an interrupt source, such as a
|keyboard or a mouse. In addition, each processor has an IRQL
|setting that changes as operating system code executes.
Here, they simply discuss the difference between the IRQL and
thread priorities.
If one looks past software and looks at the hardware for a
moment, we know that (for instance) the x86 LAPIC has a Task
Priority Register (TPR) that can be used to set which asynchronous
interrupts the current task is willing to accept, where the priority
of such interrupts is a function of the vector number.  Well, it's a
little more complex than that in that x86_64 buckets vectors
into 16 "classes" and the threshold for interrupt acceptance is
actually based on the 4-bit "task-priority class" subfield of
that register, but that's the gist of it. But that's obviously
different than, say, the OS's notion of a thread (or, if you
prefer, some schedulable execution context)'s priority, which is
an abstraction it creates and maintains for its own consumption.
Clearly, the IRQL is an abstraction of that hardware concept.
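To make the hardware side concrete, a sketch for x86_64 (kernel
context assumed; the helper names are mine, not from any OS):

    /* The 4-bit task-priority class lives in bits 7:4 of the TPR;
       long mode aliases it to CR8 as a plain 4-bit value. */
    static inline void set_tpr_class(unsigned long cls) {
        asm volatile("mov %0, %%cr8" :: "r"(cls) : "memory");
    }

    /* A vector's priority class is its upper four bits; the LAPIC
       delivers an interrupt only while its class exceeds the
       current TPR task-priority class. */
    static inline unsigned vector_class(unsigned vector) {
        return vector >> 4;
    }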
Anyway, it seems clear that the above quoted text is merely
explaining that these are not the same thing, and that readers
should take care not to confuse them. I don't see how that has
any bearing at all on your assertion that spinlocks are
conceptualized in terms of cores and not (software) threads in
Windows, let alone VMS.
Indeed, taking in the larger context in this section, I see no
evidence that Russinovich et al (multiple authors,
not just one) talk about anything other than (kernel-mode)
threads holding locks. From pages 86 and 87,
|Interrupts are serviced in priority order, and a
|higher-priority interrupt preempts the servicing of a
|lower-priority interrupt. When a high-priority interrupt
|occurs, the processor saves the interrupted thread's state and
|invokes the trap dispatchers associated with the interrupt.
|The trap dispatcher raises the IRQL and calls the interrupt's
|service routine. After the service routine executes, the
|interrupt dispatcher lowers the processor's IRQL to where it
|was before the interrupt occurred and then loads the saved
|machine state. The interrupted thread resumes executing where
|it left off.
This just discusses how _interrupts_ are processed. Note that
on page 86, they write:
|The kernel represents IRQLs internally as a number from 0
|through 31 on x86 and from 0 to 15 on x64 and IA64, with higher
|numbers representing higher-priority interrupts.
Note how these numbers correspond to the hardware. E.g., in the
case of x86_64, 0-15 are the priority thresholds representable
in the 4-bit "task-priority class" field in the TPR, settable by
software via a move to CR8.
But continuing on after the paragraph you cited on page 87, they
write:
|Each processor's IRQL setting determines which interrupts that
|processor can receive. IRQLs are also used to synchronize
|access to kernel-mode data structures. (You'll find out more
|about synchronization later in this chapter.) As a kernel-mode
|thread runs, it raises or lowers the processor's IRQL either
|directly by calling _KeRaiseIrql_ and _KeLowerIrql_ or, more
|commonly, indirectly via class to functions that acquire kernel
|synchronization objects.
In other words, kernel-mode threads raise and lower the IRQL as
they interact with kernel synchronization objects, such as
spinlocks.
Continuing on page 88, one reads:
|A kernel-mode thread raises and lowers the IRQL of the
|processor on which it's running, depending on what it's trying
|to do.
Again, note that the language describing the actors in these
scenarios is centered on software abstractions. It is
kernel-mode _threads_ that are acquiring and holding spinlocks,
changing the IRQL, etc. Not cores.
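A hedged sketch of that sequence, in the spirit of the quoted text
(this is not the actual Windows source; KeRaiseIrql and KeLowerIrql
are the documented routines, while LockWord and the loop are my
illustration of what acquiring a spinlock amounts to):

    KIRQL oldIrql;

    /* The *thread* raises the processor's IRQL to DISPATCH_LEVEL,
       blocking preemption on this core, then spins for the lock. */
    KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);
    while (InterlockedExchange(&LockWord, 1) != 0)
        ;  /* a thread on another core holds the lock */

    /* ... critical section ... */

    InterlockedExchange(&LockWord, 0);
    KeLowerIrql(oldIrql);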
By the way, regarding terminology, on page 177 [Rus12] also
says,
|Sections of code that access a nonshareable resource are called
|_critical_ sections. To ensure correct code, only one thread
|at a time can execute in a critical section.
This is the generic definition of a critical section and note
that this is the context in which we are using the term
throughout this thread.
>
Yes, I was trying to note the ambiguous terminology so we could avoid it.
>
That page is referring to kernel threads as scheduled entities
because he says "if the operating system performed a context
switch to the second thread...". In a Windows context the term
"critical section" means a specific Win32 user mode mechanism
used to synchronize access for scheduled threads.
I don't think so. I mentioned the mechanism that windows calls
a "critical section" later, but we can see that that is not the
meaning here by looking at the surrounding context. For
instance, on page 176:
|The concept of _mutual exclusion_ is a crucial one in operating
|systems development. It refers to the guarantee that one, and
|only one, thread can access a particular resource at a time.
Given the reference to "operating system development", it seems
clear this is referring to a "thread" in a generic sense.
Note that the sentence you quoted above is somewhat misleading
out of context. It is part of a larger paragraph explaining an
illustrative example about the consequences of failing to
maintain proper mutual exclusion between threads of execution generally.
This paragraph, and that sentence fragment in particular, are
explaining how the specific example, which shows a conflict
between code running on two separate processors can also affect
a single processor. The exact text is,
|Because the second thread obtained the value of the queue tail
|pointer before the first thread finished updating it, the
|second thread inserted its data into the same location that the
|first thread used, overwriting data and leaving one queue
|location empty. *Even though Figure 3-24 illustrates what
|could happen on a multiprocessor system, the same error could
|occur on a single-processor system if the operating system
|performed a context switch to the second thread before the
|first thread updated the queue tail pointer*
(Emphasis mine.)
Mentioning thread context switching here doesn't seem particularly
indicative of user-mode versus kernel-mode threads; it's common
to talk about thread context switches in the context of a kernel.
For example, Unix `swtch` has been switching between thread
contexts since the early 1970s (Unix has had a multithreaded
kernel since the earliest editions, where processes were
conceptualized as either running in user-mode, or "in the
kernel", as part of a thread maintained by the kernel for each
process; just to be extra confusing, these are often called
"kernel threads", meaning threads internal to the kernel. A
"context switch" between user processes thus always involved a
trap so that the process was running in the kernel, then a
thread switch from one process's kernel thread to that of
another, followed by an eventual return to userspace).
Anyway, continuing with Russinovich et al, later on page 177
they write,
|The issue of mutual exclusion, although important for all
|operating systems, is especially important (and intricate) for
|a _tightly coupled, symmetric multiprocessing_ (SMP) operating
|system such as Windows, in which the same system code runs
|simultaneously on more than one processor, sharing certain data
|structures stored in global memory. In Windows, it is the
|kernel's job to provide mechanisms that system code can use to
|prevent two threads from modifying the same structure at the
|same time. The kernel provides mutual-exclusion primitives
|that it and the rest of the executive use to synchronize their
|access to global data structures.
Taken together, this all makes it clear that, in this section,
these authors are talking about these concepts generally as a
preface to discussing them within the context of Windows
specifically.
As opposed to books on multiprocessors, which often use the term
"critical section" in a generic way to mean a shared section of memory
accessed by multiple processors, guarded by some form of mutual exclusion,
which could apply to shared memory guarded by OS spinlocks
but equally to shared application memory with its own synchronization.
In this context, "critical section" usually refers to a sequence
of instructions that are modifying some shared resource, but
must do so atomically.
What you referred to earlier as a "critical section" in Windows
is a _userspace_ synchronization primitive with a kernel assist
and mostly unrelated to concerns around interruptibility. From
the description on page 201 of [Rus12], they seem similar to
futexes on Linux, though somewhat more limited: Russinovich et
al states that they cannot be used between processes, while a
Linux futex can if it's in a region of memory shared between
cooperating processes.
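For reference, the Win32 primitive in question is used like this
(documented API; the shared counter is just an example of protected
state):

    CRITICAL_SECTION cs;
    InitializeCriticalSection(&cs);

    EnterCriticalSection(&cs);   /* fast path is a user-mode atomic;
                                    a kernel wait happens only under
                                    contention, much like a futex */
    shared_counter++;            /* hypothetical protected data */
    LeaveCriticalSection(&cs);

    DeleteCriticalSection(&cs);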
>
Yes.
>
Traditional Unix "sleep" and "wakeup" are an example of a
blocking protocol, where a thread may "sleep" on some "channel",
yielding the locking thread while the lock cannot be acquired,
presumably scheduling something else until the thread is later
marked runnable by virtue of something else calling "wakeup" on
the sleep channel.
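The classic pattern, sketched roughly as it appears in early Unix
kernels (from memory; "sleep" and "wakeup" here are the in-kernel
primitives, not the C library sleep(3)):

    while (bp->b_flags & B_BUSY) {
        bp->b_flags |= B_WANTED;
        sleep((caddr_t)bp, PRIBIO);   /* block on channel bp */
    }
    bp->b_flags |= B_BUSY;            /* we hold the buffer now */

    /* ... use the buffer ... */

    bp->b_flags &= ~B_BUSY;
    if (bp->b_flags & B_WANTED)
        wakeup((caddr_t)bp);          /* mark sleepers runnable */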
>
But note that I am deliberately simplifying in order to
construct a particular scenario in which a light-weight
synchronization primitive is being used for mutual exclusion
between concurrent entities, hence mentioning spinlocks
specifically.
>
Suggesting that spin locks are only applicable to
multiprocessing scenarios is flatly incorrect.
I didn't.
Your exact words were, "spinlocks coordinate mutual exclusion
between cpu cores."  See the text quoted above.  This is
incorrect in that it ignores that threads are, in context, the
software primitive for the computational entities that are doing
the acquiring, and while the "between cpu cores" part is the
usual case, it is not the only case.
>
I believe I'm being consistent and precise.
However, when a *core's* interrupt level gets raised >= Dispatch level,
either by an interrupt of a lower-priority strand of execution or by a
kernel thread making a SysCall, it shuts off the scheduler. At that point
the core is no longer executing as a kernel thread - it is executing as
a processor core.
A processor core is always a processor core, full stop. A core
does not "execute _as_ a kernel thread"; a "kernel thread" is a
software abstraction, something the core has no conception of;
the core just executes instructions, handles interrupts (modulo
the state that it's put into by software) and exceptions, and
does other processor things.
In the context of Windows, a "kernel thread" is an entity that
is managed and scheduled by the operating system. The operating
system manages scheduling threads onto cores, switching between
them, etc, but the processor itself is no more aware of that
than an electrical power plant is aware of whether it is powering
a toaster versus a dishwasher.
Software mutual exclusion primitives like spinlocks, are by
definition the purview of software; as such, the core has no
insight into them. Systems software elevates interrupt priority
levels (or disables interrupts completely) when holding
spinlocks as a necessary part of the correctness of a software locking
protocol, in order to ensure correct serialization of access
between multiple software agents. But in particular, changing
interrupt priorities when holding a spinlock is actually done to
ensure correctness of the protocol on a _single_ processor, not
_between_ processors.
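Linux spells this pattern out directly in its spinlock API; note that
the interrupt disable is local to the acquiring CPU, which is exactly
the single-processor correctness concern:

    unsigned long flags;

    spin_lock_irqsave(&lock, flags);     /* disable local interrupts,
                                            then take the lock */
    /* ... critical section ... */
    spin_unlock_irqrestore(&lock, flags);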
If it then acquires an OS spinlock, it is synchronizing between cores.
Not true at all. This would preclude using spinlocks on a
uniprocessor system, which is an unreasonable constraint:
consider an operating system distributed as a single image that
should run correctly on both uni- and multi-processor systems.
Recall the reference to [Vah96] earlier, that goes into this.
Here it is again:
|On a uniprocessor, if a thread tries to acquire a spin lock
|that is already held, it will loop forever. Multiprocessor
|algorithms, however, must operate correctly regardless of the
|number of processors, which means that they should handle the
|uniprocessor case as well. This requires strict adherence to
|the rule that threads not relinquish control of the CPU while
|holding a spin lock.
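A simplified sketch of how a single-image kernel satisfies that rule,
modeled loosely on Linux's configuration split (the macro bodies here
are illustrative, not the real definitions):

    #ifdef CONFIG_SMP
    # define spin_lock(l)    do { preempt_disable(); arch_spin_lock(l); } while (0)
    # define spin_unlock(l)  do { arch_spin_unlock(l); preempt_enable(); } while (0)
    #else
    /* Uniprocessor: nothing to spin against; the lock degenerates to
       "do not relinquish the CPU", i.e., disable preemption. */
    # define spin_lock(l)    preempt_disable()
    # define spin_unlock(l)  preempt_enable()
    #endif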
I am saying the OS by design guards *ITS* different data structures
with different mechanisms, and those mechanisms must only be used in
a very controlled manner. And if you want to do something that requires
access to one of *ITS* data structures then you must obey *ITS* rules.
This is axiomatic.
But I fail to see how it's relevant to the topic at hand. In
context we're discussing scenarios where a system may want to
disable interrupts entirely, and whether a particular aspect of
a hardware design is sufficient to implement critical sections
generally.
>
You seemed to be switching between talking about general spinlocks as
"any code that spin-waits" whereas I am just referring to OS spinlocks
executing at a specific raised IRQL, guarding shared OS data structures.
Going back to the original context, I wrote:
Consider a thread that takes a spinlock; suppose some
high-priority interrupt comes in while the thread is holding
that lock. In response to the interrupt, software decides to
suspend the thread and switch some other thread; that thread
wants to lock the spin lock that the now-descheduled thread is
holding: a classic deadlock scenario.
To which you responded:
Terminology: mutexes coordinate mutual exclusion between threads,
spinlocks coordinate mutual exclusion between cpu cores.
Windows "critical sections" are mutexes with a fast path.
Which strikes me as pretty non-standard: everything cited so
far pretty much acknowledges that spinlocks coordinate between
threads, independent of the specific meaning of "thread" in a
given context.  Indeed, Vahalia explicitly addresses
the uniprocessor case for spinlocks.
Beyond that, I still fail to see how that's relevant to the
original context: scenarios under which one may want to disable
interrupts entirely.
In Windows and VMS, and from what I understand of *nix, spinlocks guard
most of the core OS data structures and are acquired at the software
interrupt level. If you are not at that level then the spinlock will not
provide the necessary synchronization for access to that data.
Precisely.
>
Yeah. We are synchronized!
Yay!
Further, software must now consider the complexity of
potentially interruptable critical sections. From the
standpoint of reasoning about already-complex concurrency issues
it's simpler to be able to assert that (almost) all interrupt
delivery can be cheaply disabled entirely, save for very
special, specific, scenarios like NMIs. Potentially switching
away from a thread holding a spinlock sort of defeats the
purpose of a spinlock in the first place, which is a mutex
primitive designed to avoid the overhead of switching.
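Spelled out as a timeline on a single core (a sketch of the scenario
quoted at the top of this exchange):

    /* T1: spin_lock(&L);      T1 now holds L
     * -- interrupt arrives; the kernel switches to T2 --
     * T2: spin_lock(&L);      spins forever: T1 is descheduled and
     *                         can never run to release L.  Deadlock.
     */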
(It occurs to me that perhaps this is why you are going on about
how VMS and Windows use spin locks, and how they interact with
interrupt priority levels.)
>
I find that because WinNT was a clean-sheet design, much of its terminology
for things like threads, processes, mutexes, Dpc's, is less ambiguous,
particularly when talking about interrupts and scheduling.
>
Whereas with VMS and *nix that terminology can get messy because they
started out with "process" as the unit of scheduled execution and later
glued threads on, and then Unix glued threads on as a new flavor of forking
that doesn't really fork, and Linux with its interrupt handling history of
bottom halves vs tasklets vs SoftIrqs.
Ok.
This is why I mentioned the terminology thing: threads do not hold
spinlocks, they hold mutexes.
See above.  Threads can certainly "hold" a spin lock, as they
can hold any kind of lock.  To quote from sec 7.6.1 of [Vah96],
page 202; the passage is reproduced again below.
When a particular OS defined spinlock is used to guard a particular OS
defined data structure, exactly WHEN the OS allows this decides how
the OS avoids all the nasty interrupt pitfalls you identified.
How is this relevant to what you wrote earlier and my response,
quoted just above? My statement was that threads can hold spin
locks. This is obviously true.
>
If you mean a kernel thread doing its own spin-wait, sure. But that is a
"wait" the OS knows nothing about. As far as the OS is concerned that
thread is just executing its application.
>
Whereas an OS spinlock requires IRQL >= Dispatch, at which point it
is no longer executing as a kernel thread, it is a core.
Please provide a citation to something that uses these terms in
this sense.
I see no evidence that this terminology is common anywhere, but
have cited plenty of evidence using the terminology that threads
can hold spinlocks.
|On a uniprocessor, if a thread tries to acquire a spin lock
|that is already held, it will loop forever. Multiprocessor
|algorithms, however, must operate correctly regardless of the
|number of processors, which means that they should handle the
|uniprocessor case as well. This requires strict adherence to
|the rule that threads not relinquish control of the CPU while
|holding a spin lock.
Yes, which is what I was saying.
I am describing HOW that is accomplished inside an OS.
Why? First, it's well-known, and second, that's not relevant to
the discussion at hand.
But it also doesn't seem like that's what you're saying; you
asserted that a thread (a software thread in this context) does
not "hold" a spin lock; that's incorrect.
>
Answered above.
Not really. You made an assertion, but that assertion was not
backed up with any evidence, and moreover, disconfirming
evidence has been provided. Threads can clearly hold spinlocks;
suggesting otherwise is simply incorrect.  If you still disagree,
post evidence.
The way you prevent a thread from relinquishing control while holding a
spinlock is to first switch hats from thread level to software interrupt
level, which blocks the scheduler because it also runs at SWIL,
then acquire the spinlock.
>
Raising the interrupt priority level from passive (aka thread) level
to SWI level (called dispatch level on Windows) blocks the thread switch.
This ensures non-reentrant algorithms are serialized.
The core may then acquire spinlocks guarding data structures.
Just so. Earlier you said that threads cannot hold spin locks;
in the text quoted just above, you said that they do.
As for this specific scenario of specific interrupt priority
levels, I would go further and say that is just a detail. In
the simplest systems, i.e., the kind we might reach for as an
expository example when discussing these kinds of issues, one
just disables interrupts completely when holding a spinlock.
>
As I think you noted elsewhere, disabling interrupts is the same as
raising the interrupt level to maximum.
>
The general intent is to not do that as that limits concurrency
and causes higher interrupt latencies.
Did you read what you responded to?
I like to think of it as all having to do with hats.
The cpu is wearing one of three hats: a thread hat when it pretends to be
a time shared execution machine; a core hat when it acts as a single
execution machine running non-reentrant things like a scheduler which
creates the fiction of threads and processes (multiple virtual spaces);
and an interrupt hat when executing nestable reentrant interrupts.
>
The interrupt mechanism coordinates exactly when and how we switch hats.
I suppose you are lumping system calls into the overall bucket
of "interrupts" in this model, regardless of whether one might
use a dedicated supervisor call instruction (e.g., something
like x86 SYSENTER or SYSCALL or ARM SVC or RISC-V's ECALL)?
Syscall switches a thread from user mode to supervisor mode.
Your statement was that "the interrupt mechanism coordinates
exactly when and how we switch hats". But that's incomplete; if
a system call blocks, for example, the typical behavior is to
yield to the scheduler to pick something else to do. Clearly,
that is another way that coordinates this business of changing
one's hat.
>
Yes, I believe that is how Linux worked originally.
That just moves the point where it changes privilege mode and raises the
priority level that disables the scheduler to the OS SysCall entry point.
System call entry does not "disable the scheduler" in any
meaningful way. In fact, it's quite common to invoke the
scheduler as a consequence of user-initiated system calls;
most Unix and Linux system calls are synchronous.
They later changed to what I described, which seems to be how many
other OSes also work, because it allows more kernel mode concurrency.
Consider a single-threaded process that issues a synchronous
`read` call against a blocking socket descriptor referring to a
connected TCP stream. Assuming no data is currently immediately
available for consumption, the system call will block; the
calling task will be marked as blocked waiting on data to arrive
on the socket and the scheduler will be invoked on that core.
No interrupts involved.
This has nothing to do with "kernel-mode concurrency."
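A minimal illustration (assume fd is a descriptor for a connected,
blocking TCP socket):

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);
    /* With no data buffered, read() blocks: the kernel marks this
       task as waiting on the socket and invokes the scheduler on
       this core.  No interrupt is needed to reach the scheduler. */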
Or, for that matter, consider `vfork`, which always executes the
scheduler. Exercise left to the reader.
The hat analogy feels strained to me, and too colloquial to be
useful.
I offered it because it answers your prior questions about how
an OS avoids the kinds of deadlocks and race conditions you described.
I'm afraid you answered a question that I did not ask, then.
I am discussing the specifics of this architectural proposal,
and specifically two aspects of it: one is addressing the
proposer's questions about scenarios where one might want to
disable interrupts entirely.
Another, which has emerged somewhat more recently in the thread,
is this mechanism which seems to be a semi-transactional
elaboration of LL-SC primitives.
Any questions I did ask were Socratic in nature, and in the
context of these architectural discussions. I did not ask any
questions about basic design or implementation of
synchronization primitives.
>
You claimed I was wrong when I said spinlocks synchronize between cores.
So I have repeatedly shown why I am correct.
Well, you have asserted the same thing several times, despite
being presented with disconfirming evidence. But you have not
shown any evidence supporting your assertions or why you think
you are correct.
- Dan C.
References:
[Gol94] Ruth E. Goldenberg and Saro Saravanan. 1994. _OpenVMS
AXP Internals and Data Structures_ (Vers. 1.5). Digital Press,
Newton, MA.
[Gol97] Ruth E. Goldenberg, Denise E. Dumas and Saro Saravanan.
1997. _OpenVMS Alpha Internals: Scheduling and Process Control_
(Vers. 7.0). Digital Press, Newton, MA.
[Gol03] Ruth E. Goldenberg. 2003. _OpenVMS Alpha Internals and
Data Structures: Memory Management_. Digital Press, Boston, MA.

[Her08] Maurice Herlihy and Nir Shavit. 2008. _The Art of
Multiprocessor Programming_. Morgan Kaufmann, Burlington, MA.
[Rus12] Mark Russinovich, David A. Solomon and Alex Ionescu.
2012. _Windows Internals: Part 1_. Microsoft Press, Redmond, WA.
[Vah96] Uresh Vahalia. 1996. _Unix Internals: The New
Frontiers_. Prentice Hall, Upper Saddle River, NJ.