On 05/01/2024 11:48 PM, Lawrence D'Oliveiro wrote:
On Wed, 1 May 2024 20:09:48 -0700, Ross Finlayson wrote:
>
>> So, the idea of the re-routine, is a sort of co-routine. That is, it
>> fits the definition of being a co-routine, though as with that, when
>> its asynchronous filling of the memo of its operation is unfulfilled,
>> it quits by throwing an exception, then is as expected to be called
>> again, when its filling of the memo is fulfilled, thus that it
>> returns.
>
> The normal, non-comedy way of handling this is to have the task await
> something variously called a “future” or “promise”: when that object
> is marked as completed, then the task is automatically woken again to
> fulfil its purpose.
>
Thanks for writing, warm regards.
Aye, it's typical that people think that "await" will make for blocking
a thread on an "async" future, because that's the language construct
they've heard about, and it's what people make of things like the
"threading building blocks" or "machines": in their synchrony,
abstractly, they're complex machines.
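For contrast, the future/promise awaiting described above can be
sketched like this (my own illustration, not from the post): the
"await" suspends the coroutine without blocking any thread, and
completing the future is what wakes the task again.

```python
# Sketch of the future/promise approach: the awaiting task suspends
# (does not block a thread) and is woken when the future completes.
import asyncio

async def consumer(fut: asyncio.Future) -> str:
    # Suspends here; the event loop resumes this coroutine when the
    # future's result is set.
    value = await fut
    return f"got {value}"

async def main() -> str:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    task = asyncio.create_task(consumer(fut))
    await asyncio.sleep(0)        # let the consumer start and suspend
    fut.set_result("answer")      # completing the future wakes the task
    return await task

result = asyncio.run(main())
print(result)  # → got answer
```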
(These days, a lot of what could have been done in the MMX registers
for SIMD, those being integer vectors, has instead gone to employ the
same unit as the XMM registers of SSE and then SSE2, into floating
point or some fixed point vectors; what were often matrix
multiplications for affine geometry in screen coordinates are now lots
of arithmetic coding or Hopfield nets. "Threading Building Blocks" was
a library that Intel released with language bindings to the intrinsics
of synchronization primitives and other threading building blocks for
complex synchrony. These days something like the UEFI BIOS has an
interface where people are actually supposed to write to the timing,
mostly with regards to real-time DRAM refresh, then the fast side and
the slow side of the bus; what people get out of that is just plain
SMBIOS and ACPI and some UEFI functions, all mostly booted up in an
EFI BIOS, often in Forth: the totally ubiquitous 64-bit setup on all
PCs everywhere, with PCI and PCIe, and some other very usual adapters
like the bog-standard 802.x and WiFi, and some blinking lights.)
It's all about _timing_, if you get my drift, and it's all provided
smoothly above that, as the firmware is just another protocol.
"Re-Routines": it's a great idea, given that, in languages without
language features for asynchrony or, for that matter, threads,
cooperative multithreading or multitasking is still a great thing.
When there's only one thread of control, a scheduler can still
round-robin these queues of non-blocking tasks, each of them
non-blocking courtesy of being re-routines, or non-blocking thanks to
select/epoll/kqueue or non-blocking I/O; it's pretty much the same
pattern.
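As I read the pattern, it can be sketched minimally like so (all the
names here are my own for illustration, not from the actual server):
the re-routine quits by throwing when its memo is unfulfilled, the
single-threaded scheduler round-robins it back onto the queue, and on
the re-call, with the memo filled, it returns.

```python
# Minimal sketch: a "re-routine" raises when its memo is unfulfilled,
# and simply returns when called again after the memo is filled.
from collections import deque

class Unfulfilled(Exception):
    """Raised when a needed result is not yet in the memo."""

def re_routine(memo: dict) -> str:
    # Plain flow-of-control: read the needed result, or quit by throwing.
    if "reply" not in memo:
        raise Unfulfilled("reply")
    return "handled: " + memo["reply"]

def round_robin(tasks: deque, fulfill) -> list:
    """One thread of control: re-queue tasks until their memos fill."""
    done = []
    while tasks:
        memo, fn = tasks.popleft()
        try:
            done.append(fn(memo))
        except Unfulfilled as need:
            fulfill(memo, str(need))   # stands in for I/O completing
            tasks.append((memo, fn))   # come back around, call again
    return done

# Simulate asynchronous fulfillment: the first pass raises, the memo
# gets filled, and the second pass returns.
results = round_robin(deque([({}, re_routine)]),
                      lambda memo, key: memo.setdefault(key, "200 OK"))
print(results)  # → ['handled: 200 OK']
```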
So I've been thinking about this a while; in about 2016 I was like
"you know, some day Usenet will need a fresh ecosystem of servers",
and so over on sci.math I started tapping away at "Meta: a usenet
server just for sci.math", came up with this idea of re-routines, and
implemented most of it.
If there's one great thing about a re-routine: it's really easy to mock.
(Yeah, I know, I've heard that some people actually do not even
perceive that which is not "literal", which is to say, the
figurative, ....)
The re-routine is exactly the same, and is a model of the definition
of synchrony by the usual flow-of-control; that's almost exactly what
"defined behavior of synchrony" is: the definition of state according
to the guarantees of flow-of-control, in the syntax, in all the
languages that have procedural flow-of-control.
So, it's really easy to mock.
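A sketch of why that makes mocking easy (names are hypothetical, my
own): because the body is plain procedural flow-of-control, a test
just hands it a trivial synchronous stand-in for the asynchronous
dependency, with no scheduler, futures, or callbacks anywhere.

```python
# The routine's body reads like ordinary synchronous code; the
# dependency is injected, so a dict-backed mock suffices in tests.
def list_group(fetch_group) -> str:
    group = fetch_group("sci.math")          # injected dependency
    return f"211 {group['count']} {group['name']}"

def mock_fetch(name: str) -> dict:
    # Synchronous in-memory mock; the "211" response line here is just
    # illustrative of an NNTP-style reply.
    return {"name": name, "count": 3}

print(list_group(mock_fetch))  # → 211 3 sci.math
```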
Then, it's sort of an abstraction of what the languages also usually
do: the program stack and the call stack. I.e., the memo, where
"memoization" is a very usual term in optimization and is to be
distinguished from "cache": the memo has a holder for each of the
results still being used in a re-routine, and is a model of the call
stack with regards to "no callbacks, callbacks everywhere; no futures,
futures everywhere", as it's a great model of implicits.
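A minimal rendition of that memo (my own sketch, assuming nothing
beyond what's described above): each intermediate result of one
invocation gets a holder, so on each re-call the completed steps are
read back from the memo rather than redone, which is how it models the
call stack rather than a cache.

```python
class Unfulfilled(Exception):
    pass

calls = {"step_a": 0}

def step_a() -> str:
    calls["step_a"] += 1     # count executions to show re-calls skip it
    return "A"

def pipeline(memo: dict) -> str:
    # Each intermediate result has its own holder in the memo.
    if "a" not in memo:
        memo["a"] = step_a()
    if "b" not in memo:      # this one is filled asynchronously
        raise Unfulfilled("b")
    return memo["a"] + memo["b"]

memo: dict = {}
try:
    pipeline(memo)           # first call: quits on the unfulfilled "b"
except Unfulfilled:
    memo["b"] = "B"          # asynchronous filling of the memo
result = pipeline(memo)      # re-call: "a" is read back, not recomputed
print(result, calls["step_a"])  # → AB 1
```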
One of the things I would gripe about these days is that people don't
program to PIMPL, which is an abstraction, "pointer-to-implementation",
what in Java is called "extracting interfaces". There's just connected
a giant adapter with a thousand methods, when almost always the
use-case is like "I push to the queue" or "I pop from the queue", and
it's, you know, not so much that it's easier to mock when the surface
is minimal, as that it's much easier.
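A sketch of that extracted-interface idea (my example): the consuming
code declares only the push/pop surface it actually uses, so the mock
is a few lines instead of a thousand-method adapter.

```python
from typing import Protocol

class Queue(Protocol):
    """The minimal extracted interface: just the use-case's surface."""
    def push(self, item: str) -> None: ...
    def pop(self) -> str: ...

def relay(q: Queue) -> str:
    # Programmed against the interface, not a giant concrete adapter.
    q.push("article")
    return q.pop()

class InMemoryQueue:
    """Trivial mock satisfying the interface structurally."""
    def __init__(self) -> None:
        self.items: list = []
    def push(self, item: str) -> None:
        self.items.append(item)
    def pop(self) -> str:
        return self.items.pop(0)

print(relay(InMemoryQueue()))  # → article
```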
So, here, re-routines are easier to mock in a sense, but it's
especially easier to implement usual synchronous modules of them, when
the idea is "actually I'd like to run this machine in-memory and
synchronously before introducing asynchrony and the distributed".
Especially the idea of "re-using the same code for the synchronous
edition and a later asynchronous edition" is mostly served by that: by
the very nature of declaring and initializing, of returning and
holding and passing and accessing usable objects, you define the
dependencies of synchrony; that's sort of what there is to it.
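One way to picture that re-use (a sketch with hypothetical names): the
routine's body never changes between editions; the only difference is
whether its memo fills immediately or arrives later, in which case the
identical body is simply called again.

```python
class Unfulfilled(Exception):
    pass

def routine(memo: dict, get) -> str:
    # The same body serves both editions.
    if "x" not in memo:
        got = get("x")
        if got is None:
            raise Unfulfilled("x")   # asynchronous edition: come back
        memo["x"] = got
    return "value=" + memo["x"]

# Synchronous edition: the getter fulfills immediately; one call returns.
sync_result = routine({}, lambda key: "42")

# Asynchronous edition: the first call quits, the memo's dependency is
# filled out-of-band, and the identical body is called again.
memo: dict = {}
pending: dict = {}
def async_get(key):
    return pending.get(key)          # nothing there on the first pass
try:
    routine(memo, async_get)
except Unfulfilled:
    pending["x"] = "42"              # fulfillment arrives later
async_result = routine(memo, async_get)

print(sync_result == async_result == "value=42")  # → True
```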
So, it's a great idea, and I've been tapping away at it in the design
of servers for usual protocols on "Meta: a usenet server just for
sci.math".
I imagine it's a very old idea, just sort of modeling the call stack
first-class in a routine, as a model of cooperative multithreading; if
it's really a joke, then there are only a dozen jokes in the world
already, constantly wrapped as new, and maybe it's just too good to
tell.