Subject : Re: Somebody Got It The Wrong Way Round ...
From : c186282 (at) *nospam* nnada.net (c186282)
Groups : comp.programming
Date : 18. Jul 2025, 07:14:05
Message-ID : <dYCdnUSAi5uwe-T1nZ2dnZfqn_idnZ2d@giganews.com>
References : 1
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 7/16/25 7:21 PM, Lawrence D'Oliveiro wrote:
> From
> <https://www.infoworld.com/article/4018856/4-tips-for-getting-started-with-free-threaded-python.html>:
>
>     As an example, if you have a job that writes a lot of files,
>     having each job in its own thread is less effective if each job
>     also writes the file. This is because writing files is an
>     inherently serial operation. A better approach would be to divide
>     jobs across threads and use one thread for writing to disk. As
>     each job finishes, it sends work to the disk-writing job. This
>     way, jobs don’t block each other and aren’t themselves blocked by
>     file writing.
>
> Actually, blocking system calls (whether for I/O or something else)
> only block the current thread. So having each thread do its own I/O
> should be faster than funnelling it all through one bottleneck thread.
SEEMS true ... but not ALWAYS.
Software design has to be customized for the
exact job in mind.
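
For concreteness, the single-writer pattern the quoted article describes
is just a producer/consumer queue: workers hand finished results to one
thread that owns all disk writes. A minimal sketch in Python (the thread
count, file names, and None sentinel are illustrative assumptions, not
anything from the article):

import threading
import queue

results = queue.Queue()          # finished work waiting to be written

def worker(job_id):
    # Stand-in for the CPU-bound part of a job.
    data = f"result of job {job_id}\n"
    # Hand the result to the writer instead of touching the disk here.
    results.put((f"out_{job_id}.txt", data))

def writer():
    # The only thread that writes to disk.
    while True:
        item = results.get()
        if item is None:         # sentinel: all jobs are done
            return
        path, data = item
        with open(path, "w") as f:
            f.write(data)

writer_thread = threading.Thread(target=writer)
writer_thread.start()

workers = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()

results.put(None)                # tell the writer it can stop
writer_thread.join()

Whether this funnel actually beats letting each thread write its own
file depends on the disk and the workload, which is the "customize for
the exact job" point.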
If you don’t want your worker threads blocked waiting for I/O to
complete, then each worker context can be a pair of threads: one does
the CPU-intensive stuff, while the other handles the blocking I/O.
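
A minimal sketch of that pair-of-threads arrangement, assuming each pair
shares a private queue and the job count and file names are invented for
the example:

import threading
import queue

def make_worker_pair(job_id):
    handoff = queue.Queue()      # private channel between the two threads

    def cpu_side():
        for chunk in range(3):   # stand-in for CPU-intensive work
            data = f"job {job_id}, chunk {chunk}\n"
            handoff.put(data)    # hand off without waiting on the disk
        handoff.put(None)        # sentinel: done producing

    def io_side():
        with open(f"job_{job_id}.log", "w") as f:
            while True:
                data = handoff.get()
                if data is None:
                    break
                f.write(data)    # the blocking I/O happens only here

    return threading.Thread(target=cpu_side), threading.Thread(target=io_side)

pairs = [make_worker_pair(i) for i in range(4)]
for cpu_t, io_t in pairs:
    cpu_t.start()
    io_t.start()
for cpu_t, io_t in pairs:
    cpu_t.join()
    io_t.join()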