On 08/03/2024 14:07, David Brown wrote:

I believe the GC runs are done very regularly (if there is something in the clean-up list), so there is not much build-up and not much extra latency.

On 08/03/2024 13:41, Paavo Helde wrote:

Is that how CPython works? I can't quite see the point of saving up all the deallocations so that they are all done as a batch. It's extra overhead, and it will cause the latency spikes that were the problem here.

07.03.2024 17:36 David Brown wrote:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't rely on anything I write!) But the way it is used is still a type of garbage collection. When an object no longer has any "live" references, it is put in a list, and on the next GC run it is cleared up (and the asynchronous destructor, __del__, is called for the object).
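For what it's worth, here is a small experiment one can run under CPython to see when __del__ actually fires (the Node class and events list are just illustrative names). It suggests the list-and-batch behaviour described above applies to reference cycles, while acyclic objects are finalized as soon as their count hits zero:

```python
import gc

events = []  # records the order in which __del__ runs

class Node:
    def __init__(self, name):
        self.name = name
        self.other = None
    def __del__(self):
        events.append(self.name)

gc.disable()  # keep the cycle collector from running on its own

# Acyclic object: dropping the last reference drops the refcount to
# zero and __del__ runs immediately -- no waiting for a GC pass.
a = Node("acyclic")
del a
assert events == ["acyclic"]

# Two objects referencing each other form a cycle; their refcounts
# never reach zero, so only the cyclic collector can reclaim them.
b, c = Node("b"), Node("c")
b.other, c.other = c, b
del b, c
assert events == ["acyclic"]      # cycle still uncollected

gc.collect()                      # the cycle detector frees them now
assert sorted(events) == ["acyclic", "b", "c"]
gc.enable()
```

So both posters have a point: plain reference drops are handled synchronously, and the deferred batch clean-up kicks in only for cyclic garbage.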
In my own reference count scheme, when the count reaches zero, the memory is freed immediately.

That's synchronous deallocation. It's a perfectly good strategy, of course. There are pros and cons of both methods.
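A minimal sketch of such a synchronous scheme (hypothetical names throughout; a real implementation would work on raw memory, which is simulated here with a free list):

```python
free_blocks = []  # stands in for the allocator's free list

class RcBlock:
    """Hypothetical reference-counted block, not any real implementation."""
    def __init__(self, payload):
        self.refcount = 1
        self.payload = payload

def incref(block):
    block.refcount += 1

def decref(block):
    block.refcount -= 1
    if block.refcount == 0:
        # Synchronous deallocation: no deferred clean-up list --
        # the block is recycled the moment the count hits zero.
        block.payload = None
        free_blocks.append(block)

b = RcBlock("hello")
incref(b)               # count = 2
decref(b)               # count = 1, still live
assert not free_blocks
decref(b)               # count = 0 -> freed immediately
assert free_blocks and b.payload is None
```

The trade-off is visible in the last decref: the caller pays the deallocation cost right there, rather than deferring it to a later (and potentially latency-spiking) batch.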
I also tend to have most allocations be either 16 or 32 bytes, so reuse is easy. It is only individual data items (a long string or long array), which might have an arbitrary length, that need to be in contiguous memory.
In my programs, however, most strings average well below 16 characters, so they use a 16-byte allocation.
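A rough sketch of what such a two-size-class allocator might look like (this is a guess at the idea, not the poster's actual code; blocks are simulated with bytearrays):

```python
class TwoClassPool:
    """Round requests up to 16 or 32 bytes and serve them from
    per-size free lists, so a freed block is trivially reusable."""
    SIZES = (16, 32)

    def __init__(self):
        self.free = {size: [] for size in self.SIZES}

    def alloc(self, n):
        for size in self.SIZES:
            if n <= size:
                if self.free[size]:
                    return self.free[size].pop()  # reuse a freed block
                return bytearray(size)            # fresh fixed-size block
        # Arbitrary-length item (long string or array): a one-off
        # contiguous allocation outside the size classes.
        return bytearray(n)

    def dealloc(self, block):
        if len(block) in self.free:
            self.free[len(block)].append(block)

pool = TwoClassPool()
s = pool.alloc(11)      # a short string fits a 16-byte block
assert len(s) == 16
pool.dealloc(s)
t = pool.alloc(5)       # reuses the very same 16-byte block
assert t is s
```

With only two block sizes, fragmentation is a non-issue and "free" is a single list append, which fits the synchronous-deallocation scheme described above.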
I don't know the allocation pattern in that Discord app, but Michael S suggested there might not be lots of arbitrary-size objects.