Re: Meta: a usenet server just for sci.math

Subject: Re: Meta: a usenet server just for sci.math
From: ross.a.finlayson (at) *nospam* gmail.com (Ross Finlayson)
Groups: sci.math news.software.nntp comp.programming.threads
Date: 22 Apr 2024, 18:06:02
Message-ID: <TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0
On 04/20/2024 11:24 AM, Ross Finlayson wrote:
>
>
Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols
>
NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP
>
Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.
>
>
The re-routine idea is this: each kind of method is memoizable,
and it memoizes, by object identity as the key, for the method, across
all its callers. It looks like so.
>
interface Reroutine1 {

    default Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(r2);

        return result(r2, r3);
    }
}
>
>
The idea is that when the executor is submitted a re-routine and
runs it in a thread, it puts the re-routine in a ThreadLocal, so that
when a re-routine it calls returns null, having started an asynchronous
computation for the input, then when that computation completes, it
submits the re-routine to the executor again.
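As a minimal sketch of that executor arrangement (names like ReLauncher and CURRENT are illustrative, not from any library):

```java
// Hypothetical sketch: an executor wrapper that installs the running
// re-routine in a ThreadLocal, so a callee can find its caller and
// resubmit it later; a quit (NullPointerException) is absorbed.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReLauncher {
    // the re-routine currently running on this thread, visible to its callees
    static final ThreadLocal<Runnable> CURRENT = new ThreadLocal<>();
    static final ExecutorService POOL = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());

    static Future<?> submit(Runnable reroutine) {
        return POOL.submit(() -> {
            CURRENT.set(reroutine);      // install for the duration of the run
            try {
                reroutine.run();         // may quit early via NullPointerException
            } catch (NullPointerException pending) {
                // expected: an input wasn't satisfied; a callback resubmits us
            } finally {
                CURRENT.remove();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        // a callee sees its caller through the ThreadLocal
        submit(() -> {
            if (ReLauncher.CURRENT.get() == null) throw new AssertionError();
        }).get();
        // a quit (NullPointerException) is absorbed, not propagated
        submit(() -> { throw new NullPointerException(); }).get();
        POOL.shutdown();
        System.out.println("ok");
    }
}
```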
>
Then rr1 runs through again, retrieving r2, which is memoized, and
invokes rr3, which throws after queuing to memoize and
resubmit rr1. When that calls back to resubmit rr1, then rr1
runs to completion, signaling the original invoker.
>
Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.
>
>
Result1 rr1(String a1) {
   // if a1 is in the memo, return for it
   // else queue for it and carry on
}
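A hedged sketch of that memo check, assuming an identity-keyed memo and a stand-in for the queued computation (all names illustrative):

```java
// Minimal sketch of "if memoized, return it; else queue and carry on",
// with the memo keyed by object identity (==), per the re-routine rules.
import java.util.IdentityHashMap;
import java.util.Map;

public class MemoStep {
    static final Map<String, Object> MEMO = new IdentityHashMap<>();

    static Object rr(String a1) {
        Object r = MEMO.get(a1);    // hit only on the exact same (interned) object
        if (r != null) return r;    // already computed: return immediately
        queueAsync(a1);             // start the "asynchronous" computation
        return null;                // caller will throw NPE on use, and quit
    }

    static void queueAsync(String a1) {
        // stand-in for queued work; here we just fill the memo synchronously
        MEMO.put(a1, "result:" + a1);
    }

    public static void main(String[] args) {
        String a1 = "input".intern();
        Object first = rr(a1);      // first pass: queues, returns null
        Object second = rr(a1);     // re-launch: memo is filled
        if (first != null || !"result:input".equals(second))
            throw new AssertionError("memo semantics violated");
        System.out.println("ok");
    }
}
```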
>
>
What is a re-routine?
>
     It's a pattern for cooperative multithreading.
>
     It's sort of a functional approach to functions and flow.
>
     It has a declarative syntax in the language with usual
flow-of-control.
>
So, it's cooperative multithreading so it yields?
>
     No, it just quits, and expects to be called back.
>
So, if it quits, how does it complete?
>
     The entry point to re-routine provides a callback.
>
     Re-routines only return results to other re-routines;
     that's the default callback.  Otherwise they just call back.
>
So, it just quits?
>
     If a re-routine gets called with a null, it throws.
>
     If a re-routine gets a null back from a call, it just continues.
>
     If a re-routine completes, it callbacks.
>
So, can a re-routine call any regular code?
>
     Yeah, there are some issues, though.
>
So, it's got callbacks everywhere?
>
     Well, it's just got callbacks implicitly everywhere.
>
So, how does it work?
>
     Well, you build a re-routine with an input and a callback,
     you call it, then when it completes, it calls the callback.
>
     Then, re-routines call other re-routines with the argument,
     and the callback's in a ThreadLocal, and the re-routine memoizes
     all of its return values according to the object identity of the
inputs,
     then when a re-routine completes, it calls again with another
ThreadLocal
     indicating to delete the memos, following the exact same
flow-of-control
     only deleting the memos going along, until it results all the memos in
     the re-routines for the interned or ref-counted input are deleted,
     then the state of the re-routine is de-allocated.
>
So, it's sort of like a monad and all in pure and idempotent functions?
>
     Yeah, it's sort of like a monad and all in pure and idempotent
functions.
>
So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?
>
     Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.
>
     Also it sort of doesn't have primitive types, Strings must always
be interned, all objects must have a distinct identity w.r.t. ==, and
null is never an argument or return value.
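For instance, the interning constraint comes down to this:

```java
// Why interning matters here: the memo uses "==", so two equal-but-distinct
// Strings would miss the memo; interned Strings are one object.
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("key");   // a distinct object
        String b = new String("key");   // another distinct object
        if (a == b) throw new AssertionError();                   // equal, not identical
        if (a.intern() != b.intern()) throw new AssertionError(); // interned: identical
        System.out.println("ok");
    }
}
```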
>
So, what does it look like?
>
interface Reroutine1 {

    default Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(r2);

        return result(r2, r3);
    }
}
>
So, I expect that to return "result(r2, r3)".
>
     Well, that's synchronous, and maybe blocking.  The idea is that it
calls rr2 with a1, and rr2 constructs with the callback of rr1 and its
own callback, and a1, and makes a memo for a1, and invokes whatever is
its implementation, and returns null; then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.
>
So, ..., that's cooperative multithreading?
>
     Well you see what happens is that rr2 invoked another re-routine or
end routine, and at some point it will get called back, and that will
happen over and over again until rr2 has an r2, then rr2 will memoize
(a1, r2), and then it will callback rr1.
>
     Then rr1, having quit, runs again; this time it gets r2 from the
(a1, r2) memo in the monad it's building, then it passes a non-null r2
to rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.
>
So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?
>
     That's the idea, that re-routines are responsible to build the
monad and call-back.
>
So, can I just implement rr2 and rr3 as synchronous and blocking?
>
     Sure, they're interfaces, their implementation is separate.  If
they don't know re-routine semantics then they're just synchronous and
blocking.  They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.
>
So, it's sort of an approach as a monadic pure idempotency?
>
     Well, yeah, you can call it that.
>
So, what's the point of all this?
>
     Well, the idea is that there are 10,000 connections, and any time
one of them demultiplexes off the connection an input command message,
then it builds one of these with the response input to the demultiplexer
on its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself.  Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.
>
     The point is that there are only as many Threads as cores, so the
goal is that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.
>
So, won't this run through each of these re-routines umpteen times?
>
     Yeah, you figure that the runtime of the re-routine is on the order
of n^2 in the number of statements in the re-routine.
>
So, isn't that terrible?
>
     Well, it doesn't block.
>
So, it sounds like a big mess.
>
     Yeah, it could be.  That's why the way to avoid blocking and
callback semantics is to make monadic idempotency semantics, so that the
re-routines are just written in normal synchronous flow-of-control, and
their well-defined behavior is exactly according to flow-of-control,
including exception-handling.
>
     There's that and there's basically it only needs one Thread, so,
less Thread x stack size, for a deep enough thread call-stack.  Then the
idea is about one Thread per core, figuring for the thread to always be
running and never be blocking.
>
So, it's just normal flow-of-control.
>
     Well yeah, you expect to write the routine in normal
flow-of-control, and to test it with synchronous and in-memory editions
that just run through synchronously, and that if you don't much care if
it blocks, then it's the same code and has no semantics about the
asynchronous or callbacks actually in it.  It just returns when it's done.
>
>
So what's the requirements of one of these again?
>
     Well, the idea is, that, for a given instance of a re-routine, it's
an Object, that implements an interface, and it has arguments, and it
has a return value.  The expectation is that the re-routine gets called
with the same arguments, and must return the same return value.  This
way later calls to re-routines can match the same expectation, same/same.
>
     Also, if it gets different arguments, by Object identity or
primitive value, the re-routine must return a different return value,
those being same/same.
>
     The re-routine memoizes its arguments by its argument list, Object
or primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.
>
So, how is this cooperative multithreading unobtrusively in
flow-of-control again?
>
Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls back rr1.  As
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit.  When rr4's callback completes, then it will
call back rr1, which will finally complete, and then call back whatever
called rr1.  Then rr1 runs itself through one more time to
delete or decrement all its memos.
>
interface Reroutine1 {

    default Result1 rr1(String a1) {

        Result2 r2 = reroutine2.rr2(a1);

        Result3 r3 = reroutine3.rr3(a1);

        Result4 r4 = reroutine4.rr4(a1, r2, r3);

        return Result1.r4(a1, r4);
    }
}
>
The idea is that it doesn't block when it launches rr2 and rr3, until
such time as it just quits when it tries to invoke rr4 and gets a
resulting NullPointerException; then eventually rr4 will complete and be
memoized and call back rr1, then rr1 will be called back and then
complete, then run itself through to delete or decrement the ref-count
of all its memoized fragmented monad, respectively.
>
Thusly it's cooperative multithreading by never blocking and always just
launching callbacks.
>
There's this System.identityHashCode() method, and then there's a notion
of Object pools and interning Objects; this way it's about numeric
identity instead of value identity, so that when making memos it's
always "==", and for a HashMap with System.identityHashCode() instead of
ever calling equals(), when calling equals() is more expensive than
calling ==, and the same/same memoization is about Object numeric
identity or the primitive scalar value, those being same/same.
>
https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
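A small demonstration of the difference, a HashMap keyed by equals() versus an IdentityHashMap keyed by == (which uses System.identityHashCode internally):

```java
// Two equal-but-distinct keys: an equals()-keyed map collapses them to one
// entry, an identity-keyed map keeps them distinct.
import java.util.HashMap;
import java.util.IdentityHashMap;

public class IdentityMemo {
    public static void main(String[] args) {
        String a = new String("x"), b = new String("x");
        HashMap<String, Integer> byValue = new HashMap<>();
        IdentityHashMap<String, Integer> byIdentity = new IdentityHashMap<>();
        byValue.put(a, 1);    byValue.put(b, 2);
        byIdentity.put(a, 1); byIdentity.put(b, 2);
        if (byValue.size() != 1) throw new AssertionError();    // equals()-keyed: collide
        if (byIdentity.size() != 2) throw new AssertionError(); // ==-keyed: distinct
        System.out.println("ok");
    }
}
```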
>
>
So, you figure to return Objects to these connections by their session
and connection and mux/demux in these callbacks and then write those out?
>
Well, the idea is to make it so that according to the protocol, the
back-end sort of knows what makes a handle to a datum of the sort, given
the protocol, and the callback is just
these handles, about what goes in the outer callbacks or outside the
re-routine, those can be different/same.  Then the single writer thread
servicing the network I/O just wants to transfer those handles, or, as
necessary through the compression and encryption codecs, then write
those out, well making use of the java.nio for scatter/gather and vector
I/O in the non-blocking and asynchronous I/O as much as possible.
>
>
So, that seems a lot of effort just to pass the handles, ....
>
Well, I don't want to write any code except normal flow-of-control.
>
So, this same/same bit seems onerous, as long as different/same has a
ref-count and thus the memo-ized monad-fragment is maintained when all
sorts of requests fetch the same thing.
>
Yeah, maybe you're right.  There's much to be gained by re-using monadic
pure idempotent functions yet only invoking them once.  That gets into
value equality besides numeric equality, though, with regards to going
into re-routines and interning all Objects by value, so that inside and
through it's all "==" and System.identityHashCode, the memos, then about
the ref-counting in the memos.
>
>
So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
>
Yeah, it's a thing.
>
So, I think this needs a much cleaner and well-defined definition, to
fully explore its meaning.
>
Yeah, I suppose.  There's something to be said for reading it again.
>
>
>
>
>
>
ReRoutines: monadic functional non-blocking asynchrony in the language
Implementing a sort of Internet protocol server, it sort of has three or
four kinds of machines.
flow-machine: select/epoll hardware driven I/O events
protocol-establishment: setting up and changing protocol (commands,
encryption/compression)
protocol-coding: block coding in encryption/compression and wire/object
commands/results
routine: inside the objects of the commands of the protocol,
commands/results
Then, it often looks sort of like
flow <-> protocol <-> routine <-> protocol <-> flow
On either outer side of the flow is a connection, it's a socket or the
receipt or sending of a datagram, according to the network interface and
select/epoll.
The establishment of a protocol looks like
connection/configuration/commencement/conclusion, or setup/teardown.
Protocols get involved in renegotiation within a protocol, and for
example upgrade among protocols. Then the protocol is set up and established.
The idea is that a protocol's coding is in three parts for
coding/decoding, compression/decompression, and (en)cryption/decryption,
or as it gets set up.
flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
Whenever data arrives, the idea goes, the flow is interpreted
according to the protocol, resulting in commands; then the routine derives
results from the commands, as by issuing others, in their protocols, to
the backend flow. Then, the results get sent back out through the
protocol, to the frontend, the clients of what the server serves in the
protocol.
The idea is that there are about 10,000 connections at a time, or more
or less.
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
flow <-> protocol <-> routine <-> protocol <-> flow
...
Then, the routine in the middle, has that there's one processor, and on
the processor are a number of cores, each one independent. Then, the
operating system establishes that each of the cores, has any number of
threads-of-control or threads, and each thread has the state of where it
is in the callstack of routines, and the threads are preempted so that
multithreading, that a core runs multiple threads, gives each thread
some running from the entry to the exit of the thread, in any given
interval of time. Each thread-of-control is thusly independent, while it
must synchronize with any other thread-of-control, to establish common
or mutual state, and threads establish taking turns by mutual exclusion,
called "mutex".
Into and out of the protocol, coding, is either a byte-sequence or
block, or otherwise the flow is a byte-sequence, that being serial,
however the protocol multiplexes and demultiplexes messages, the
commands and their results, to and from the flow.
Then the idea is that what arrives to/from the routine, is objects in
the protocol, or handles to the transport of byte sequences, in the
protocol, to the flow.
A usual idea is that there's a thread that services the flow, where, how
it works is that a thread blocks waiting for there to be any I/O,
input/output, reading input from the flow, and writing output to the
flow. So, mostly the thread that blocks has that there's one thread that
blocks on input, and when there's any input, then it reads or transfers
the bytes from the input, into buffers. That's its only job, and only
one thread can block on a given select/epoll selector, which is any
given number of ports, the connections, the idea being that it just
blocks until select returns for its keys of interest, it services each
of the I/O's by copying from the network interface's buffers into the
program's buffers, then other threads do the rest.
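A sketch of that one blocking selector thread, as a loopback demo in java.nio (the hand-off to the other threads is elided; in the real flow-machine the buffers would go to the decoding stages):

```java
// One thread blocks on select(); it only accepts connections and copies
// bytes into the program's buffers. Loopback demo: a client writes a few
// bytes and the selector thread reads them.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // a client connects and writes a few bytes
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()));
        client.write(ByteBuffer.wrap("ping".getBytes()));

        // the I/O thread's loop: block on select, service each ready key
        ByteBuffer buf = ByteBuffer.allocate(64);
        int got = 0;
        while (got == 0) {
            selector.select();                  // the only blocking call
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    got = ((SocketChannel) key.channel()).read(buf);
                }
            }
            selector.selectedKeys().clear();
        }
        client.close(); server.close(); selector.close();
        if (got <= 0) throw new AssertionError();
        System.out.println("read " + got + " bytes");
    }
}
```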
So, if a thread results waiting at all for any other action to complete
or be ready, it's said to "block". While a thread is blocked, the CPU or
core just skips it in scheduling the preemptive multithreading, yet it
still takes some memory and other resources and is in the scheduler of
the threads.
The idea that the I/O thread, ever blocks, is that it's a feature of
select/epoll that hardware results waking it up, with the idea that
that's the only thread that ever blocks.
So, for the other threads, in the decryption/decompression/decoding and
coding/compression/cryption, the idea is that a thread, runs through
those, then returns what it's doing, and joins back to a limited pool of
threads, with a usual idea of there being 1 core : 1 thread, so that
multithreading is sort of simplified, because as far as the system
process is concerned, it has a given number of cores and the system
preemptively multithreads it, and as far as the virtual machine is
concerned, it has a given number of cores and the virtual machine
preemptively multithreads its threads, about the thread-of-control, in
the flow-of-control, of the thing.
A usual way that the routine multiplexes and demultiplexes objects in the
protocol from a flow's input back to a flow's output, has that the
thread-per-connection model has that a single thread carries out the
entire task through the backend flow, blocking along the way, until it
results joining after writing back out to its connection. Yet, that has
a thread per each connection, and threads use scheduling and heap
resources. So, here thread-per-connection is being avoided.
Then, a usual idea of the tasks, is that as I/O is received and flows
into the decryption/decompression/decoding, then what's decoded, results
the specification of a task, the command, and the connection, where to
return its result. The specification is a data structure, so it's an
object or Object, then. This is added to a queue of tasks, where
"buffers" represent the ephemeral storage of content in transport the
byte-sequences, while, the queue is as usually a first-in/first-out
(FIFO) queue also, of tasks.
Then, the idea is that each of the cores consumes task specifications
from the task queue, performs them according to the task specification,
then the results are written out, as coded/compressed/crypted, in the
protocol.
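A minimal sketch of that arrangement with java.util.concurrent, a FIFO queue of task specifications consumed by a pool of about one thread per core (the coding/crypting stages are stand-ins):

```java
// Tasks are queued FIFO and consumed by a fixed pool of worker threads,
// one per core; each worker runs a task to completion and takes the next.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskQueueSketch {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            // each submitted Runnable stands in for decode -> routine -> encode
            pool.submit(done::incrementAndGet);
        }
        pool.shutdown();
        if (!pool.awaitTermination(10, TimeUnit.SECONDS)) throw new AssertionError();
        if (done.get() != 100) throw new AssertionError();
        System.out.println("100 tasks done on " + cores + " worker threads");
    }
}
```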
So, to avoid the threads blocking at all, introduces the idea of
"asynchrony" or callbacks, where the idea is that the "blocking" and
"synchronous" has that anywhere in the threads' thread-of-control
flow-of-control, according to the program or the routine, it is current
and synchronous, the value that it has, then with regards to what it
returns or writes, as the result. So, "asynchrony" is the idea that
there's established a callback, or a place to pause and continue, then a
specification of the task in the protocol is put to an event queue and
executed, or from servicing the O/I's of the backend flow, that what
results from that, has the context of the callback and returns/writes to
the relevant connection, its result.
I -> flow -> protocol -> routine -> protocol -> flow -> O -v
O <- flow <- protocol <- routine <- protocol <- flow <- I <-
The idea of non-blocking then, is that a routine either provides a
result immediately available, and is non-blocking, or, queues a task
what results a callback that provides the result eventually, and is
non-blocking, and never invokes any other routine that blocks, so is
non-blocking.
This way a thread, executing tasks, always runs through a task, and thus
services the task queue or TQ, so that the cores' threads are always
running and never blocking. (Besides the I/O and O/I threads which block
when there's no traffic, and usually would be constantly woken up and
not waiting blocked.) This way, the TQ threads, only block when there's
nothing in the TQ, or are just deconstructed, and reconstructed, in a
"pool" of threads, the TQ's executor pool.
Enter the ReRoutine
The idea of a ReRoutine, a re-routine, is that it is a usual procedural
implementation as if it were synchronous, and agnostic of callbacks.
It is named after "routine" and "co-routine". It is a sort of co-routine
that builds a monad and is aware its originating caller, re-caller, and
callback, or, its re-routine caller, re-caller, and callback.
The idea is that there are callbacks implicitly at each method boundary,
and that nulls are reserved values to indicate the result or lack
thereof of re-routines, so that the code has neither callbacks nor any
nulls.
The originating caller has that the TQ, has a task specification, the
session+attachment of the client in the protocol where to write the
output, and the command, then the state of the monad of the task, that
lives on the heap with the task specification and task object. The TQ
consumers or executors or the executor, when a thread picks up the task,
it picks up or builds ("originates") the monad state, which is the
partial state of the re-routine and a memo of the partial state of the
re-routine, and installs this in the thread local storage or
ThreadLocal, for the duration of the invocation of the re-routine. Then
the thread enters the re-routine, which proceeds until it would block,
where instead it queues a command/task with a callback to re-call it, to
re-launch it, and throws a NullPointerException and quits/returns.
This happens recursively and iteratively in the re-routine implemented
as re-routines, each re-routine updates the partial state of the monad,
then that as a re-routine completes, it re-launches the calling
re-routine, until the original re-routine completes, and it calls the
original callback with the result.
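Here's a hedged, self-contained sketch of that quit-and-re-launch cycle, with illustrative names (rr1, rr2, and a single-thread executor standing in for the TQ):

```java
// rr1's first pass: rr2 is pending (null), so rr1 quits via NPE; rr2's
// async work fills the memo and re-launches rr1, which then completes.
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReRoutineDemo {
    static final Map<String, String> MEMO = new IdentityHashMap<>();
    static final ExecutorService EXEC = Executors.newSingleThreadExecutor();
    static final CompletableFuture<String> RESULT = new CompletableFuture<>();
    static final String A1 = "input".intern();

    // rr2: returns the memo if present; else starts async work and returns null
    static String rr2(String a1) {
        String r2 = MEMO.get(a1);
        if (r2 != null) return r2;
        EXEC.submit(() -> {                      // the "asynchronous" completion
            MEMO.put(a1, ("r2:" + a1).intern());
            EXEC.submit(ReRoutineDemo::rr1);     // callback: re-launch the caller
        });
        return null;
    }

    // rr1: plain flow-of-control; quits via NPE while r2 is pending
    static void rr1() {
        try {
            String r2 = rr2(A1);
            RESULT.complete("result:" + r2.length()); // NPE here on first pass
        } catch (NullPointerException pending) {
            // expected quit; the callback above will re-launch rr1
        }
    }

    public static void main(String[] args) throws Exception {
        EXEC.submit(ReRoutineDemo::rr1);
        String out = RESULT.get(10, TimeUnit.SECONDS);
        EXEC.shutdown();
        if (!"result:8".equals(out)) throw new AssertionError(out);
        System.out.println(out);
    }
}
```

The single-thread executor makes the ordering obvious: quit, fill memo, re-launch, complete; with a per-core pool the same flow holds, just interleaved.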
This way the re-routine's method body, is written as plain declarative
procedural code, the flow-of-control, is exactly as if it were
synchronous code, and flow-of-control is exactly as if written in the
language with no callbacks and never nulls, and exception-handling as
exactly defined by the language.
As the re-routine accumulates the partial results, they live on the
heap, in the monad, as a member of the originating task's object the
task in the task queue. This is always added back to the queue as one of
the pending results of a re-routine, so it stays referenced as an object
on the heap, then that as it is completed and the original re-routine
returns, then it's no longer referenced and the garbage-collector can
reclaim it from the heap or the allocator can delete it.
Well, for the re-routine, I sort of figure there's a Callstack and a
Callback type
class Callstack {
Stack<Callback> callstack;
}
interface Callback {
void callback() throws Exception;
}
and then a placeholder sort of type for Callflush
class Callflush {
Callstack callstack;
}
with the idea that the presence in ThreadLocals is to be sorted out,
about a kind of ThreadLocal static pretty much.
With not returning null, and for memoizing call-graph dependencies,
there's basically a need for an "unvoid" type.
class unvoid {
}
Then it's sort of figured that there's an interface with some defaults,
with the idea that some boilerplate gets involved in the Memoization.
interface Caller {}
interface Callee {}
interface Callmemo {
void memoize(Caller caller, Object[] args);
void flush(Caller caller);
}
Then it seems that the Callstack should instead be of a Callgraph, and
then what's maintained from call to call is a Callpath, and then what's
memoized is all kept with the Callgraph, then with regards to objects on
the heap and their distinctness, only being reachable from the
Callgraph, leaving less work for the garbage collector, to maintain the
heap.
The interning semantics would still be on the class level, or for
constructor semantics, as with regards to either interning Objects for
uniqueness, or that otherwise they'd be memoized, with the key being the
Callpath, and the initial arguments into the Callgraph.
Then the idea seems that the ThreaderCaller, establishes the Callgraph
with respect to the Callgraph of an object, installing it on the thread,
otherwise attached to the Callgraph, with regards to the ReRoutine.
About the ReRoutine, it's starting to come together as an idea, what is
the apparatus for invoking re-routines, that they build the monad of the
IOE's (inputs, outputs, exceptions) of the re-routines in their
call-graph, in terms of ThreadLocals of some ThreadLocals that callers
of the re-routines, maintain, with idea of the memoized monad along the
way, and each original re-routine.
class IOE <O, E> {
Object[] input;
Object output;
Exception exception;
}
So the idea is that there are some ThreadLocal's in a static ThreadGlobal
public class ThreadGlobals {
public static ThreadLocal<MonadMemo> monadMemo;
}
where callers or originators or ReRoutines, keep a map of the Runnables
or Callables they have, to the MonadMemo's,
class Originator {
Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
}
then when it's about to invoke a Runnable, if it's a ReRoutine, then it
either retrieves the MonadMemo or makes a new one, and sets it on the
ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
Then a MonadMemo, pretty simply, is a List of IOE's: when the
ReRoutine runs through the callgraph, the callstack is indicated by a
tree of integers, the stack path in the ReRoutine, so that any
ReRoutine that calls ReRoutines A/B/C points to an IOE that it finds in
the thing; then its default behavior is to return its memoized value,
and otherwise to make the callback that fills its memo and re-invokes,
all the way back to the Original routine, or just its own entry point.
This is basically that the Originator, when the ReRoutine quits out,
sort of has that any ReRoutine it originates, also gets filled up by the
Originator.
So, then the Originator sort of has a map to a ReRoutine, then for any
Path, the Monad, so that when it sets the ThreadLocal with the
MonadMemo, it also sets the Path for the callee, launches it again when
its callback returned to set its memo and relaunch it, then back up the
path stack to the original re-routine.
One of the issues here is "automatic parallelization". What I mean by
that is that the re-routine just goes along, and when it gets nulls
meaning "pending" it just continues along, then expects
NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
relaunched when its input is satisfied.
This way then when routines serially don't depend on each others'
outputs, then they all get launched apiece, parallelizing.
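A small sketch of that: rr1 runs past the pending nulls, so both launches are in flight before it quits (names illustrative, the queuing reduced to a list):

```java
// rr2 and rr3 don't depend on each other, so rr1 reaches and "launches"
// both before the combining step (rr4) touches a null and quits.
import java.util.ArrayList;
import java.util.List;

public class AutoParallel {
    static final List<String> LAUNCHED = new ArrayList<>();

    static String rr2(String a1) { LAUNCHED.add("rr2"); return null; } // pending
    static String rr3(String a1) { LAUNCHED.add("rr3"); return null; } // pending

    public static void main(String[] args) {
        boolean quit = false;
        try {
            String r2 = rr2("a1");
            String r3 = rr3("a1");    // still reached: r2 hasn't been used yet
            int r4 = r2.length() + r3.length(); // rr4 needs both: NPE, quit
            System.out.println(r4);
        } catch (NullPointerException pending) {
            quit = true;              // quit; both launches are in flight
        }
        if (!quit || LAUNCHED.size() != 2) throw new AssertionError();
        System.out.println("launched before quitting: " + LAUNCHED);
    }
}
```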
Then, I wonder about usual library code, basically about Collections and
Streams, and the usual sorts of routines that are applied to the
arguments, and how to basically establish that the rule of re-routine
code is that anything that gets a null must throw a
NullPointerException, so the re-routine will quit until the arguments
are satisfied, the inputs to library code. Then with the Memo being
stored in the MonadMemo, it's figured that will work out regardless the
Objects' or primitives' value, with regards to Collections and Stream
code and after usual flow-of-control in Iterables for the for loops, or
whatever other application library code, that they will be run each time
the re-routine passes their section with satisfied arguments, then as
with regards to, that the Memo is just whatever serial order the
re-routine passes, not needing to lookup by Object identity which is
otherwise part of an interning pattern.
Map<String, String> rr1(String s1) {
List<String> l1 = rr2.get(s1);
Map<String, String> m1 = new LinkedHashMap<>();
l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
return m1;
}
See what I figure is that the order of the invocations to rr3.get() is
serial, so it really only needs to memoize its OE, Output|Exception,
then about that putting null values in the Map, and having to check the
values in the Map for null values, and otherwise to make it so that the
semantics of null and NullPointerException, result that satisfying
inputs result calls, and unsatisfying inputs result quits, figuring
those unsatisfying inputs are results of unsatisfied outputs, that will
be satisfied when the callee gets populated its memo and makes the callback.
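A sketch of that null convention with library code: a null value in the Map means "pending", and touching it is the quit.

```java
// A null stored by a pending rr3.get(s) must throw when used, so the
// re-routine quits and gets re-launched once the memo is filled.
import java.util.LinkedHashMap;
import java.util.Map;

public class NullConvention {
    public static void main(String[] args) {
        Map<String, String> m1 = new LinkedHashMap<>();
        m1.put("a", "ra");
        m1.put("b", null);                  // pending result from rr3
        boolean quit = false;
        try {
            for (Map.Entry<String, String> e : m1.entrySet()) {
                e.getValue().length();      // touching the unsatisfied value throws
            }
        } catch (NullPointerException pending) {
            quit = true;                    // the quit; a re-launch comes later
        }
        if (!quit) throw new AssertionError();
        System.out.println("quit on pending input");
    }
}
```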
If the order of invocations is out-of-order, gets again into whether the
Object/primitive by value needs to be the same each time, IOE, about the
library code in Collections, Streams, parallelStream, and Iterables, and
basically otherwise that any kind of library code, should throw
NullPointerException if it gets an "unexpected" null or what doesn't
fulfill it.
The idea though that rr3 will get invoked say 1000 times with the rr2's
result, those each make their call, then re-launch 1000 times, has that
it's figured that the Executor, or Originator, when it looks up and
loads the "ReRoutineMapKey", is to have the count of those and whether
the count is fulfilled, then to no-op later re-launches of the
call-backs, after all the results are populated in the partial monad memo.
Then, there's perhaps instead as that each re-routine just checks its
input or checks its return value for nulls, those being unsatisfied.
(The exception handling thoroughly or what happens when rr3 throws and
this kind of thing is involved thoroughly in library code.)
The idea is it remains correct if the worst thing nulls do is throw
NullPointerException, because that's just a usual quit and means another
re-launch is coming up, and it automatically queues for
asynchronous parallel invocation each of the derivations, while
never blocking.
It's figured that re-routines check their inputs for nulls, and throw
quit, and check their inputs for library container types, and checking
any member of a library container collection for null, to throw quit,
and then it will result that the automatic asynchronous parallelization
proceeds, while the re-routines are never blocking, there's only as much
memory on the heap of the monad as would be in the lifetime of the
original re-routine, and whatever re-calls or re-launches of the
re-routine established local state in local variables and library code,
would come in and out of scope according to plain stack unwinding.
Then there's still the perceived deficiency that the re-routine's method
body will be run many times, yet it's only run as many times as it
results in throwing-quit, when it reaches where an argument to the
re-routine or a result value isn't yet satisfied, yet is pending.
It would re-run the library code any number of times, until it results
all non-nulls, then the resulting satisfied argument to the following
re-routines, would be memo-ized in the monad, and the return value of
the re-routine thus returning immediately its value on the partial monad.
This way each re-call of the re-routine, mostly encounters its own monad
results in constant time, and throws-quit or gets thrown-quit only when
it would be unsatisfying, with the expectation that whatever
throws-quit, either NullPointerException or extending
NullPointerException, will have a pending callback, that will queue on a
TQ, the task specification to re-launch and re-enter the original or
derived, re-routine.
The idea is sort of that it's sort of, Java with non-blocking I/O and
ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
and thread local storage, then for the abstract or interface of the
re-routines, how it works out that it's a usual sort of model of
co-operative multithreading, the re-routine, the routine "in the language".
Then it's great that the routine can be stubbed or implemented agnostic
of asynchrony, and declared in the language with standard libraries,
basically using the semantics of exception handling and convention of
re-launching callbacks to implement thread-of-control flow-of-control,
that can be implemented in the synchronous and blocking for unit tests
and modules of the routine, making a great abstraction of flow-of-control.
Basically anything that _does_ block then makes for having its own
thread, whose only job is to block and when it unblocks, throw-toss the
re-launch toward the origin of the re-routine, and consume the next
blocking-task off the TQ. Yet, the re-routines and their servicing the
TQ only need one thread and never block. (And scale in core count and
automatically parallelize asynchronous requests according to satisfied
inputs.)
Mostly the idea of the re-routine is "in the language, it's just plain,
ordinary, synchronous routine".
