On Mon, 24 Jun 2024 15:00:26 +0100
bart <bc@freeuk.com> wrote:
> On 24/06/2024 14:09, Michael S wrote:
>> On Fri, 21 Jun 2024 22:47:46 +0100
>> bart <bc@freeuk.com> wrote:
>>> On 21/06/2024 14:34, David Brown wrote:
>>>> On 21/06/2024 12:42, bart wrote:
>>>>> On 21/06/2024 10:46, David Brown wrote:
>>>>>> I understand your viewpoint and motivation. But my own
>>>>>> experience is mostly different.
>>>>>>
>>>>>> First, to get it out of the way, there's the speed of
>>>>>> compilation. While heavy optimisation (-O3) can take
>>>>>> noticeably longer, I never see -O0 as being in any noticeable
>>>>>> way faster for compilation than -O1 or even -O2.
>>>>> Absolute time or relative?
>>>> Both.
>>>>> For me, optimised options with gcc always take longer:
>>>> Of course. But I said it was not noticeable - it does not make
>>>> enough difference in speed for it to be worth choosing.
>>>>
>>>>> C:\c>tm gcc bignum.c -shared -s -obignum.dll     # from cold
>>>>> TM: 3.85
>>>> Cold build times are irrelevant to development - when you are
>>>> working on a project, all the source files and all your compiler
>>>> files are in the PC's cache.
>>>>> C:\c>tm gcc bignum.c -shared -s -obignum.dll
>>>>> TM: 0.31
>>>>>
>>>>> C:\c>tm gcc bignum.c -shared -s -obignum.dll -O2
>>>>> TM: 0.83
>>>>>
>>>>> C:\c>tm gcc bignum.c -shared -s -obignum.dll -O3
>>>>> TM: 0.93
>>>>>
>>>>> C:\c>dir bignum.dll
>>>>> 21/06/2024  11:14            35,840 bignum.dll
>>>> Any build time under a second is as good as instant.
>>>>
>>>> I tested on a real project, not a single file. It has 158 C
>>>> files and about 220 header files. And I ran it on my old PC,
>>>> without any "tricks" that you dislike so much, doing full clean
>>>> re-builds. The files are actually all compiled twice, building
>>>> two variants of the binary.
>>>>
>>>> With -O2, it took 34.3 seconds to build. With -O1, it took 33.4
>>>> seconds. With -O0, it took 30.8 seconds.
>>>>
>>>> So that is a 15% difference for full builds. In practice, of
>>>> course, full rebuilds are rarely needed, and most builds after
>>>> changes to the source are within a second or so.
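
(For reference, a full clean-rebuild timing of that sort needs
nothing more than make and the shell's time builtin - a sketch,
assuming a GNU make project whose makefile picks the optimisation
level up from CFLAGS; only the flag values come from the figures
above:

  make clean
  time make CFLAGS=-O0
  make clean
  time make CFLAGS=-O2

Running the pair once per optimisation level, after a warm-up build
so everything is in the filesystem cache, gives comparable wall-clock
times.)
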
>>> Then there's something very peculiar about your codebase.
>> To me it looks more likely that your codebase is very unusual
>> rather than David's.
>>
>> In order to get meaningful measurements I took an embedded project
>> that is significantly bigger than average by my standards. Here
>> are the times of a full parallel rebuild (make -j5) on a
>> relatively old computer (4-core Xeon E3-1271 v3).
>> Option   time(s)   -g time(s)   text size
>> -O0      13.1      13.3          631648
>> -Os      13.6      14.1          424016
>> -O1      13.5      13.7          455728
>> -O2      14.0      14.1          450056
>> -O3      14.0      14.6          525380
>> The difference in time between the -O settings in my measurements
>> is even smaller than that reported by David Brown. That can be
>> attributed to the older compiler (gcc 4.1.2). Another difference
>> is that this compiler runs under cygwin, which is significantly
>> slower than both native Linux and native Windows. That causes
>> relatively higher make overhead and a longer link.
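
(A table like the one above can be collected with a small shell loop
- a sketch, assuming the makefile honours an OPT variable for the
optimisation level and that binutils' size utility is available; the
output file name image.elf is hypothetical:

  for opt in -O0 -Os -O1 -O2 -O3; do
      make clean >/dev/null
      echo "== $opt =="
      time make -j5 OPT="$opt" >/dev/null
      size image.elf    # first column of size output is text size
  done
)
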
> I don't know why Cygwin would make much difference; the native code
> is still running on the same processor.
I don't know the specific reasons. The bird's-eye perspective is that
cygwin tries to emulate Posix semantics on a platform that is not
Posix, and achieves that by using a few low-granularity semaphores in
user space, which seriously limits parallelism. Besides, there are
problems with the emulation of Posix I/O semantics that cause cygwin
file I/O to be 2-3 times slower than native Windows I/O. The latter
applies mostly to relatively small files, but, then again, a software
build mostly accesses small files.
As a matter of fact, the parallel speed-up I see on this project on
this quad-core machine is barely 2x. I would expect 3x or a little
more for the same project with native Windows tools.
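
(That speed-up figure is easy to measure directly - a sketch
comparing a serial build against the parallel one on the same tree:

  make clean && time make -j1    # serial baseline
  make clean && time make -j5    # parallel build

Dividing the first wall-clock time by the second gives the effective
speed-up; on a quad-core machine a result well below 4x suggests
serialisation somewhere, whether in the build itself or, as described
above, in cygwin's emulation layer.)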