Subject : Re: filling area by color atack safety
From : tr.17687 (at) *nospam* z991.linuxsc.com (Tim Rentsch)
Newsgroups : comp.lang.c
Date : 24 Mar 2024, 21:26:16
Organization : A noiseless patient Spider
Message-ID : <86frwfjs0n.fsf@linuxsc.com>
References : 1 2 3 4 5 6 7 8
User-Agent : Gnus/5.11 (Gnus v5.11) Emacs/22.4 (gnu/linux)
Michael S <already5chosen@yahoo.com> writes:

> On Wed, 20 Mar 2024 07:27:38 -0700
> Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
>
>> Michael S <already5chosen@yahoo.com> writes:
>>
>>> On Tue, 19 Mar 2024 11:57:53 +0000
>>> Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
>>>> [...]
>>>
>>> The nice thing about Tim's method is that we can expect that
>>> performance depends on the number of recolored pixels and almost
>>> nothing else.
>>
>> One aspect that I consider a significant plus is my code never
>> does poorly.  Certainly it isn't the fastest in all cases, but
>> it's never abysmally slow.
>
> To be fair, none of the presented algorithms is abysmally slow.
> When compared by number of visited points, they all appear to be
> within a factor of 2 or 2.5 of each other.

Certainly "abysmally slow" is subjective, but working in a large
pixel field, filling the whole field starting at the center,
Malcolm's code runs slower than my unoptimized code by a factor of
10 (and a tad slower than that compared to my optimized code).

> Some of them for some patterns could be 10-15 times slower than
> others, but it does not happen for all patterns, and when it
> happens it's because of a problematic implementation rather than
> because of differences in algorithms.

In the case of Malcolm's code I think it's the algorithm, because
it doesn't scale linearly. Malcolm's code runs faster than mine
for small colorings, but slows down dramatically as the image
being colored gets bigger.

> The big difference between algorithms is not speed, but the amount
> of auxiliary memory used in the worst case.  Your algorithm appears
> to be the best in that department, [...]

Yes, my unoptimized algorithm was designed to use as little
memory as possible.  The optimized version traded space for
speed: it runs a little bit faster but incurs a non-trivial cost
in terms of space used.  I think it's still not too bad: an upper
bound of a small multiple of N for an NxN pixel field.

> But even by that metric, the difference between different
> implementations of the same algorithm is often much bigger than
> the difference between algorithms.

If I am not mistaken the original naive recursive algorithm has a
space cost that is O( N**2 ) for an NxN pixel field. The big-O
difference swamps everything else, just like the big-O difference
in runtime does for that metric.

> For example, a solid 200x200 image with starting point in the corner
> [...]

On small pixel fields almost any algorithm is probably not too
bad.  These days any serious algorithm should scale well up
to at least 4K by 4K, and be tested up to at least 16K x 16K.
Tricks that make some things faster for small images sometimes
fall on their face when confronted with a larger image. My code
isn't likely to win many races on small images, but on large
images I expect it will always be competitive even if it doesn't
finish in first place.