Michael S <already5chosen@yahoo.com> writes:

> On Wed, 17 Apr 2024 10:47:25 -0700
> Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
>
>> Michael S <already5chosen@yahoo.com> writes:
>>
>> [...]
>>
>>> Finally found the time for speed measurements. [...]
>>
>> I got these. Thank you.
>>
>> The format used didn't make it easy to do any automated
>> processing. I was able to get around that, although it
>> would have been nicer if that had been easier.
>>
>> The results you got are radically different than my own,
>> to the point where I wonder if there is something else
>> going on.

> What are your absolute results?
> Are they much faster, much slower, or similar to mine?
> Also, it would help if you could find out the
> characteristics of your test hardware.

I think trying to look at those wouldn't tell me anything
helpful. Too many unknowns. And still no way to test or
measure any changes to the various algorithms.

Considering that, since I now have no way of doing any
useful measuring, it seems there is little point in any
further development or investigation on my part. It's
been fun, even if ultimately inconclusive.

> I am still interested in a combination of speed that does
> not suck with an O(N) worst-case memory footprint.
> I already have a couple of variants of the former,

Did you mean some algorithms whose worst-case memory
behavior is strictly less than O( total number of pixels )?
I think it would be helpful to adopt a standard terminology
where the pixel field is of size M x N; otherwise I'm not
sure what O(N) refers to.
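
Purely to make the counting concrete (the four-connected fill
below is an assumption for illustration, not code either of us
posted): with a field of M rows and N columns, a plain
queue-based fill enqueues each cell at most once, and when the
whole field is one color it enqueues all of them, so the queue
has to be sized for M*N entries. That is the kind of footprint
I would write as O(M*N) rather than O(N).

#include <stdlib.h>

typedef struct { int x, y; } Cell;

/* Recolor the four-connected region of 'from' cells containing
   (x0,y0) to 'to'.  The field is M rows by N columns, row-major.
   Returns 0 on success, -1 if the queue cannot be allocated. */
int
fill( unsigned char *field, int M, int N,
      int x0, int y0, unsigned char from, unsigned char to ){
    size_t head = 0, tail = 0;
    Cell *queue;

    if( from == to || field[ (size_t) y0 * N + x0 ] != from ) return 0;

    /* Each cell is enqueued at most once, and slots are never
       reused, so the queue is sized for the worst case: M*N. */
    queue = malloc( (size_t) M * N * sizeof *queue );
    if( !queue ) return -1;

    field[ (size_t) y0 * N + x0 ] = to;  /* mark before enqueueing */
    queue[ tail++ ] = (Cell){ x0, y0 };
    while( head < tail ){
        static const int dx[4] = { 1, -1, 0, 0 };
        static const int dy[4] = { 0, 0, 1, -1 };
        Cell c = queue[ head++ ];
        for( int k = 0;  k < 4;  k++ ){
            int x = c.x + dx[k], y = c.y + dy[k];
            if( 0 <= x && x < N && 0 <= y && y < M
                  && field[ (size_t) y * N + x ] == from ){
                field[ (size_t) y * N + x ] = to;
                queue[ tail++ ] = (Cell){ x, y };
            }
        }
    }
    free( queue );
    return 0;
}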

> but so far they are all unreasonably slow - ~5 times
> slower than the best.

I'm no longer working on the problem but I'm interested to
hear what you come up with.