Subject : Re: GIL-Removal Project Takes Another Step (Posting On Python-List Prohibited)
From : no.email (at) *nospam* nospam.invalid (Paul Rubin)
Groups : comp.lang.python
Date : 20. Mar 2024, 01:51:54
Organisation : A noiseless patient Spider
Message-ID : <87r0g5ybbp.fsf@nightsong.com>
References : 1 2 3
User-Agent : Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
> So even a very simple, seemingly well-behaved Python script, if
> running for long enough, would consume more and more memory if it were
> not for reference-counting.
That is completely false. It's usual to set a GC to run every so-many
allocations.  GHC normally does a minor GC every 256 KB of allocation so
that the most recent stuff fits in the L2 CPU cache, speeding things up
a lot. Refcounting schemes are of course incapable of that optimization
because they don't relocate objects in memory.
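CPython's own cyclic collector works the same way, driven by allocation
counts rather than bytes.  A minimal sketch using the stdlib gc module
(the threshold values shown are just the usual defaults, nothing the
argument depends on):

    import gc

    # CPython triggers a generation-0 collection once the count of
    # container allocations minus deallocations exceeds the first
    # threshold -- typically (700, 10, 10).
    print(gc.get_threshold())

    # Force a collection by hand and see how many objects the
    # collector is currently tracking.
    gc.collect()
    print(len(gc.get_objects()))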
You can of course configure a GC to not run very often, in which case
the memory region can get large. That is an optimization you do
intentionally, to spend less CPU time doing GC, and of course you only
do that if you have the memory for it. I think you are imagining that
people always do that, but again remember MicroPython.
The allocation of a new method wrapper on every method call is of course
something that the interpreter could also be optimized to not do. The
Emacs Lisp interpreter does something like that for function args, IIRC.
They are passed on a permanent stack instead of in temporary cons cells.
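In CPython the "method wrapper" in question is the bound-method object
built by each attribute lookup.  A small sketch (my own illustration,
not anything from the quoted post) of both the per-access allocation
and the obvious workaround of caching the bound method:

    class Counter:
        def __init__(self):
            self.n = 0
        def bump(self):
            self.n += 1

    c = Counter()

    # Each attribute access builds a fresh bound-method object:
    print(c.bump is c.bump)    # False: two distinct wrapper objects

    # Hoisting the lookup avoids allocating a wrapper on every call:
    bump = c.bump
    for _ in range(1000):
        bump()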
Erlang on a midsized server can run millions of lightweight processes in
its VM, each with its own GC.  The minimum RAM size of an Erlang process
is around 2 KB, IIRC.  But I don't know if they get bigger than that
before the GC runs.