paavo512 <
paavo@osa.pri.ee> wrote or quoted:
|Anyway, multithreading performance is a non-issue for Python so far as
|the Python interpreter runs in a single-threaded regime anyway, under a
|global GIL lock. They are planning to get rid of GIL, but this work is
|still in development AFAIK. I'm sure it will take years to stabilize the
|whole Python zoo without GIL.
The GIL only prevents multiple threads from interpreting Python
bytecode simultaneously; while a thread is blocked waiting on
input (a socket, say), the GIL is released, so such waits can
overlap and other threads can keep running on other cores.
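A quick way to see this is to let several threads block at once; here
time.sleep stands in for a blocking socket read, since both release
the GIL while waiting. This is just an illustrative sketch, not code
from the discussion:

```python
import threading
import time

def wait_for_input(seconds: float) -> None:
    # time.sleep, like a blocking socket.recv, releases the GIL,
    # so the other threads keep running while this one waits.
    time.sleep(seconds)

start = time.perf_counter()
threads = [threading.Thread(target=wait_for_input, args=(0.2,))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Ten 0.2 s waits overlap: total wall time stays near 0.2 s, not 2 s.
print(f"{elapsed:.2f}")
```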
With asyncio, however, a single thread can easily "wait in
parallel" for thousands of sockets, and there are fewer
opportunities for errors than with multithreading.
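A minimal sketch of that pattern, with asyncio.sleep standing in for
awaiting data on a socket (the coroutine names are my own, not from
any particular library):

```python
import asyncio
import time

async def handle_connection(i: int) -> int:
    # Stand-in for awaiting a socket; while one coroutine waits,
    # the single event-loop thread serves all the others.
    await asyncio.sleep(0.1)
    return i

async def main() -> list:
    # "Wait in parallel" for a thousand connections in one thread.
    return await asyncio.gather(
        *(handle_connection(i) for i in range(1000)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
# A thousand 0.1 s waits complete in roughly 0.1 s of wall time.
print(len(results), f"{elapsed:.2f}")
```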
Additionally, there are libraries like numpy that use true
multithreading internally to distribute computational tasks
across multiple cores. By using such libraries, you can take
advantage of that. (Not to mention the AI libraries that have their
work done in highly parallel fashion by graphics cards.)
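For example, a matrix product in numpy is handed off to the BLAS
library numpy was built against, which may spread the work over
several cores outside the GIL (whether it actually does depends on
the particular build):

```python
import numpy as np

# The @ operator delegates to the underlying BLAS (e.g. OpenBLAS
# or MKL); such libraries typically run large products on several
# cores, outside the GIL.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = a @ b
print(c.shape)
```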
If you want real threads, Cython can sometimes help: code
compiled with Cython can release the GIL in sections that do
not touch Python objects.
Other languages like JavaScript seem to have an advantage here
because they have no GIL, but JavaScript, for example, only
avoids one because it always runs in a single thread anyway.
And in the languages that do offer threads without a GIL, you
quickly realize that writing correct non-trivial programs with
parallel processing is error-prone.
Often in Python you can use "ThreadPoolExecutor" to start
multiple threads. If the GIL then becomes a problem (which is
not the case if you're waiting on I/O), you can easily swap it
out for "ProcessPoolExecutor": Then processes are used instead
of threads, and there is no GIL for those.
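The swap really is a one-line change, because both executors share
the concurrent.futures interface. A sketch with a placeholder task:

```python
from concurrent.futures import ThreadPoolExecutor
# For CPU-bound work, swap in the process-based variant instead:
# from concurrent.futures import ProcessPoolExecutor
# (ProcessPoolExecutor needs a top-level task function and, on some
# platforms, an `if __name__ == "__main__":` guard.)

def task(n: int) -> int:
    # Placeholder workload; a real CPU-bound task is where the
    # process pool pays off, since each process has its own GIL.
    return n * n

# Replacing ThreadPoolExecutor with ProcessPoolExecutor below is
# the only change needed; the map/submit API is identical.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(10)))
print(results)
```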
If four cores are available, dividing compute-intensive tasks
across a "ProcessPoolExecutor" can realistically yield a speedup
factor of roughly two to four.
With the Celery library, tasks can be distributed across multiple
processes that can also run on different computers. See, for
example, "Parallel Programming with Python" by Jan Palach.