Subject : Re: Avoid treating the stack as an array [Re: "Back & Forth" is back!]
From : no.email (at) *nospam* nospam.invalid (Paul Rubin)
Newsgroups : comp.lang.forth
Date : 28. Sep 2024, 21:49:46
Organization : A noiseless patient Spider
Message-ID : <87h69zcxlh.fsf@nightsong.com>
User-Agent : Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

dxf <dxforth@gmail.com> writes:
>> That 30% difference was because VFX doesn't attempt to optimize locals.
> What's the evidence? My observation is compilers do not generate
> native code independently of the language. Parameter passing
> strategies differ between C and Forth and this necessarily affects the
> code compilers lay down.

1) Comparisons between VFX and other compilers such as iForth; 2) the
observation that there is any difference at all between the generated
code for the two versions of EMITS under VFX.

This isn't a question of C vs Forth. It's two equivalent pieces of
Forth code being compiled by the same optimizing Forth compiler, with
one version resulting in worse code instead of identical code.
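
To make the comparison concrete: the definitions themselves aren't
requoted in this subthread, so the following is only an illustrative
sketch (assuming EMITS has the stack effect ( n c -- ), i.e. emit the
character c a total of n times), not the code from the earlier posts:

  \ Illustrative sketch only -- not the definitions discussed above.
  \ Stack-manipulation version:
  : EMITS  ( n c -- )  SWAP 0 ?DO  DUP EMIT  LOOP  DROP ;

  \ Equivalent behaviour written with Forth-2012 locals:
  : EMITS-L  ( n c -- )  {: n c :}  n 0 ?DO  c EMIT  LOOP ;

An optimizer that handled locals well would be expected to compile both
to essentially the same machine code; the complaint above is that the
locals version comes out worse.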

> For me it comes down to why I have chosen to use Forth. The
> philosophy of it appeals to me in a way other languages don't.

There's the question of which Forth - because Forth has essentially
split down two paths with rather incompatible motivations.

I gather that one path is industrial users who want there to be a
standard with well-supported commercial implementations, and who want to
run development projects with large teams of programmers (the Saudi
airport being the classic example).

I guess the other path is something like solo practitioners who don't
really care about standardization, perhaps because they just want the
most direct way to an end result. Philosophical appeal is another such
motivation. That's fine too, but partly a matter of personal taste.

What I'm unclear about is what the philosophical purist path has to say
about optimizing compilers. I think anyone wanting to reject locals for
reasons of code efficiency probably should be using a VFX-style
compiler. My own idea of purity says to use a simple interpreter and
accept the speed penalty, using CODE when needed.

FWIW, most of the code I write these days doesn't spend much time on
computation. It might spend 100ms retrieving something over the
network, and then 1ms computing. So if the computing part somehow sped
up by 1000x, I wouldn't notice or care about the difference.

FWIW 2, I suspect most computing cycles in the real world right now are
spent in GPU kernels or large parallel batch jobs, rather than in
ordinary single-CPU programs.