Subject: Schachner, Joseph was the Big Moron [September 2021 16:30] (Was: Which Python System is affected?)
From: janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups: comp.lang.python
Date: 24 Jun 2025, 07:53:40
Other headers
Message-ID : <103di1k$16u1u$1@solani.org>
References : 1 2 3 4 5 6
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.21
Hi,
Everybody who puts me personally on CC: and
posts from python-list@python.org: please note,
I cannot respond on python-list@python.org.
Somebody blocked me on python-list@python.org.
If you want a discussion, post on comp.lang.python.
And stop spamming me with your CC:.
Bye
P.S.: BTW, I got blocked after this moron wrote
this nonsense. It is complete nonsense, now
that everybody is talking about AsyncAPI, and
since Dogelog Player evolved into async simply
through its 2nd target, JavaScript. What company was he
working for? A loser company, Teledyne?
------------------- begin moron ---------------------
Opinion: Anyone who is counting on Python for truly
fast compute speed is probably using Python for the
wrong purpose. Here, we use Python to control Test
Equipment, to set up the equipment and ask for a
measurement, get it, and proceed to the next measurement;
and at the end produce a nice formatted report. If we
wrote the test script in C or Rust or whatever it
could not run substantially faster because it is
communicating with the test equipment, setting it up
and waiting for responses, and that is where the vast
majority of the time goes. Especially if the measurement
result requires averaging it can take a while. In my
opinion this is an ideal use for Python, not just
because the speed of Python is not important, but also
because we can easily find people who know Python, who
like coding in Python, and will join the company
to program in Python ... and stay with us.
--- Joseph S.
Teledyne Confidential; Commercially Sensitive Business Data
https://mail.python.org/archives/list/python-list@python.org/thread/RWEKXFW4WED7KNI67QBMDTC32EAEU3ZT/
------------------- end moron -----------------------
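For what it's worth, the workflow he describes is just a control loop
around a slow instrument. A minimal sketch, with a stubbed instrument
class and made-up SCPI-style commands (a real setup would go through
something like pyvisa):

import time

class FakeInstrument:
    """Stand-in for a real instrument session; the class and the
    commands below are hypothetical, for illustration only."""
    def write(self, cmd):
        pass  # setup commands cost next to nothing on the Python side

    def query(self, cmd):
        time.sleep(0.1)  # the instrument, not Python, is the bottleneck
        return "42.0"

inst = FakeInstrument()
inst.write("CONF:VOLT:DC")  # made-up SCPI-style setup command
readings = [float(inst.query("READ?")) for _ in range(10)]
print(f"average of {len(readings)} readings: {sum(readings)/len(readings):.3f}")

The sleep stands in for the instrument doing the measurement, which is
exactly his point: interpreter speed hardly shows up in such a loop.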
Mild Shock wrote:
Hi,
I tested this one:
Python 3.11.11 (0253c85bf5f8, Feb 26 2025, 10:43:25)
[PyPy 7.3.19 with MSC v.1941 64 bit (AMD64)] on win32
I didn't test this one yet, because it is usually slower:
Python 3.14.0b2 (tags/v3.14.0b2:12d3f88, May 26 2025, 13:55:44)
[MSC v.1943 64 bit (AMD64)] on win32
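For the record, this is how I double-check which interpreter a timing
was actually taken on; plain stdlib, nothing specific to the benchmark:

import platform
import sys

# Distinguishes e.g. "PyPy 7.3.19" from "CPython 3.14.0b2" at runtime.
print(platform.python_implementation(), platform.python_version())
print(sys.version)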
Bye
Mild Shock wrote:
Hi,
I have some data on what the async detour usually
costs. I just compared with another Java Prolog
that doesn't do the thread thingy.
Reported measurement with the async Java Prolog:

JDK 24: 50 ms (using Threads, not yet VirtualThreads)

New additional measurement with an alternative Java Prolog:

JDK 24: 30 ms (no Threads)
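To make the "async detour" concrete on the Python side, a minimal
sketch; the no-op work() is a stand-in for the real per-item work, so
only the dispatch overhead differs between the two runs:

import asyncio
import time

def work():
    pass  # stand-in for the real per-item work

async def awork():
    work()

def plain_run(n):
    t0 = time.perf_counter()
    for _ in range(n):
        work()
    return time.perf_counter() - t0

def async_run(n):
    async def main():
        for _ in range(n):
            await awork()
    t0 = time.perf_counter()
    asyncio.run(main())
    return time.perf_counter() - t0

n = 100_000
print(f"plain calls : {plain_run(n) * 1000:.1f} ms")
print(f"async detour: {async_run(n) * 1000:.1f} ms")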
But already the using-Threads version is quite optimized:
it basically reuses its own thread and uses a mutex
somewhere, so it doesn't really create a new secondary
thread unless a new task is spawned. Creating a 2nd thread
is silly if tasks have their own thread. This is the
main potential of virtual threads in upcoming Java:
just run tasks inside virtual threads.
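In Python terms the same trade-off looks roughly like this; a sketch
only, with a made-up no-op task, comparing a fresh thread per task
against one reused worker thread:

import threading
import time
from concurrent.futures import ThreadPoolExecutor

def task():
    pass  # stand-in for the spawned goal

N = 2_000

# A fresh thread per task: what the optimized version avoids.
t0 = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=task)
    t.start()
    t.join()
print(f"new thread per task : {(time.perf_counter() - t0) * 1000:.1f} ms")

# One reused worker thread, fed tasks one by one.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=1) as pool:
    for _ in range(N):
        pool.submit(task).result()
print(f"reused worker thread: {(time.perf_counter() - t0) * 1000:.1f} ms")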
Bye

P.S.: But I should measure with more files, since
the 50 ms and 30 ms are quite small. Also I am using a
warm run, so the files and their meta information are already
cached in operating system memory. I am trying to only
measure the async overhead, but maybe Python doesn't trust
the operating system memory and calls some disk
sync somewhere. I don't know. I don't open and close the
files, and don't call any disk syncing. I only read
stats to get the mtime and do some comparisons.
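The benchmark body boils down to roughly the following; the file set
and the .out targets are made up here, only the stat/mtime comparison
matters:

import os
import time
from pathlib import Path

def is_newer(src, dst):
    # Compare mtimes via stat only: no open/close, no explicit disk sync.
    try:
        return os.stat(src).st_mtime > os.stat(dst).st_mtime
    except FileNotFoundError:
        return True

files = list(Path(".").glob("*.py"))  # made-up file set for the sketch
t0 = time.perf_counter()
stale = sum(is_newer(f, f.with_suffix(".out")) for f in files)
elapsed = (time.perf_counter() - t0) * 1000
print(f"{len(files)} stat comparisons, {stale} out of date, in {elapsed:.2f} ms")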