On 24/06/2024 14:09, Michael S wrote:
On Fri, 21 Jun 2024 22:47:46 +0100, bart <bc@freeuk.com> wrote:
On 21/06/2024 14:34, David Brown wrote:
On 21/06/2024 12:42, bart wrote:
On 21/06/2024 10:46, David Brown wrote:
I understand your viewpoint and motivation. But my own
experience is mostly different.
First, to get it out of the way, there's the speed of
compilation. While heavy optimisation (-O3) can take noticeably
longer, I never see -O0 as being in any noticeable way faster
for compilation than -O1 or even -O2.
Absolute time or relative?
Both.
For me, optimised options with gcc always take longer:
Of course. But I said it was not noticeable - it does not make
enough difference in speed for it to be worth choosing.
C:\c>tm gcc bignum.c -shared -s -obignum.dll      # from cold
TM: 3.85
Cold build times are irrelevant to development - when you are
working on a project, all the source files and all your compiler
files are in the PC's cache.
C:\c>tm gcc bignum.c -shared -s -obignum.dll
TM: 0.31
C:\c>tm gcc bignum.c -shared -s -obignum.dll -O2
TM: 0.83
C:\c>tm gcc bignum.c -shared -s -obignum.dll -O3
TM: 0.93
C:\c>dir bignum.dll
21/06/2024 11:14 35,840 bignum.dll
Any build time under a second is as good as instant.
I tested on a real project, not a single file. It has 158 C files
and about 220 header files. And I ran it on my old PC, without
any "tricks" that you dislike so much, doing full clean
re-builds. The files are actually all compiled twice, building
two variants of the binary.
>
With -O2, it took 34.3 seconds to build. With -O1, it took 33.4
seconds. With -O0, it took 30.8 seconds.
So that is about an 11% difference for full builds. In practice, of
course, full rebuilds are rarely needed, and most builds after
changes to the source are within a second or so.
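As a quick sanity check, the relative cost of -O2 over -O0 in those full-rebuild figures can be computed directly (an awk one-liner; the times are copied from the post above, nothing is rebuilt here):

```shell
# Relative slowdown of the full -O2 rebuild (34.3 s) vs -O0 (30.8 s).
awk 'BEGIN { o0 = 30.8; o2 = 34.3; printf "%.1f%%\n", (o2 - o0) / o0 * 100 }'
# prints 11.4%
```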
Then there's something very peculiar about your codebase.
To me it looks more likely that your codebase is very unusual rather
than David's
In order to get meaningful measurements I took an embedded project
that is significantly bigger than average by my standards. Here are
the times for a full parallel rebuild (make -j5) on a relatively old
computer (4-core Xeon E3-1271 v3).
Option    time(s)    -g time    text size
-O0       13.1       13.3       631648
-Os       13.6       14.1       424016
-O1       13.5       13.7       455728
-O2       14.0       14.1       450056
-O3       14.0       14.6       525380
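Dividing text size by build time turns the table into a rough code-generation throughput figure (pure arithmetic on the quoted numbers; -O0 and -O2 shown as examples):

```shell
# Throughput in KB of .text generated per second of build time,
# using the sizes and times from the table above.
awk 'BEGIN {
  printf "-O0: %.1f KB/s\n", 631648 / 13.1 / 1024
  printf "-O2: %.1f KB/s\n", 450056 / 14.0 / 1024
}'
```

-O0 comes out at roughly 47 KB/s, which matches the "about 50KB/second" figure quoted later in the thread.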
The difference in time between different -O settings in my
measurements is even smaller than reported by David Brown. That can
be attributed to an older compiler (gcc 4.1.2). Another difference is
that this compiler runs under Cygwin, which is significantly
slower than both native Linux and native Windows. That causes
relatively higher make overhead and a longer link.
I don't know why Cygwin would make much difference; the native code
is still running on the same processor.
However, is there any way of isolating the compilation time (turning
.c files into .o files) from 'make' and the linker? Failing
that, can you compile just one module in isolation (.c to .o) with
-O0 and -O2, or is that not possible?
Those throughputs don't look that impressive for a parallel build on
what sounds like a high-spec machine.
Your processor has a CPU-mark double that of mine, which has only two
cores, and is using one.
Building a 34-module project with .text size of 300KB, with either
gcc 10 or 14, using -O0, takes about 8 seconds, or 37KB/second.
Your figures show about 50KB/second.
You say you use gcc 4, but an
older gcc is more likely to be faster in compilation speed than a
newer one.
It does sound like something outside of gcc itself.
For the same project, on the same slow machine, Tiny C's throughput
is 1.3MB/second. My non-C compiler, on other projects, manages
5-10MB/second, still only counting .text segments. That is 100
times faster than your timings, for generating code that is as good
as gcc's -O0.
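That "100 times" figure is consistent with the earlier numbers: roughly 5 MB/s against roughly 50 KB/s (a quick check, using the lower end of the 5-10MB/second range):

```shell
# Ratio of 5 MB/s (= 5 * 1024 KB/s) to 50 KB/s.
awk 'BEGIN { printf "%.0fx\n", (5 * 1024) / 50 }'
# prints 102x
```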
So IT IS NOT WINDOWS ITSELF THAT IS SLOW.
If I had "native" tools then all the times would likely be shorter
by a few seconds, and the difference between -O0 and -O3 would be
closer to 10%.
So two people are now saying that all the many dozens of extra passes
and the extra analysis that gcc -O2/-O3 has to do, compared with the
basic front-end work that every toy compiler needs to do and does
quickly, only slow it down by 10%.
I really don't believe it. And you should understand that it doesn't
add up.