> I don't know why Cygwin would make much difference; the native code is
> still running on the same processor.

Cygwin, especially older Cygwin, is very slow for all file access and all
process control, because it tries to emulate POSIX as closely as possible
on an OS that has only a fraction of the necessary features. gcc is not a
monolithic tool - it is a driver, and it controls multiple processes and
accesses a fairly large number of files. So Cygwin-based gcc builds will
spend a considerable amount of time on this sort of thing rather than on
actual processor-bound compiler work. I am confident that Michael would
find a mingw/mingw64-based build significantly faster, since that has a
far thinner (almost transparent) emulation layer. And it would be a good
deal faster again under Linux on the same hardware, as Linux has more
efficient file handling.
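A quick way to see that driver behaviour for yourself (an illustrative
command, not one from the thread - any source file will do):

  gcc -v -c bignum.c -O2

The -v switch makes the driver print each program it launches: cc1 for
the compilation proper and as for assembling, with collect2/ld added when
a full link is requested. Each of those is a separate process that Cygwin
has to wrap in its POSIX emulation.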
On 24/06/2024 14:09, Michael S wrote:
> On Fri, 21 Jun 2024 22:47:46 +0100, bart <bc@freeuk.com> wrote:
>> On 21/06/2024 14:34, David Brown wrote:
>>> On 21/06/2024 12:42, bart wrote:
>>>> On 21/06/2024 10:46, David Brown wrote:
I understand your viewpoint and motivation. But my own experience is
mostly different.

First, to get it out of the way, there's the speed of compilation. While
heavy optimisation (-O3) can take noticeably longer, I never see -O0 as
being in any noticeable way faster for compilation than -O1 or even -O2.
> Absolute time or relative?

Both.

> For me, optimised options with gcc always take longer:

Of course. But I said it was not noticeable - it does not make enough
difference in speed for it to be worth choosing.

>   C:\c>tm gcc bignum.c -shared -s -obignum.dll      # from cold
>   TM: 3.85

Cold build times are irrelevant to development - when you are working on
a project, all the source files and all your compiler files are in the
PC's cache.
>   C:\c>tm gcc bignum.c -shared -s -obignum.dll
>   TM: 0.31
>
>   C:\c>tm gcc bignum.c -shared -s -obignum.dll -O2
>   TM: 0.83
>
>   C:\c>tm gcc bignum.c -shared -s -obignum.dll -O3
>   TM: 0.93
>
>   C:\c>dir bignum.dll
>   21/06/2024  11:14        35,840 bignum.dll

Any build time under a second is as good as instant.
I tested on a real project, not a single file. It has 158 C files and
about 220 header files. And I ran it on my old PC, without any "tricks"
that you dislike so much, doing full clean re-builds. The files are
actually all compiled twice, building two variants of the binary.

With -O2, it took 34.3 seconds to build. With -O1, it took 33.4 seconds.
With -O0, it took 30.8 seconds.

So that is about a 10% difference for full builds. In practice, of
course, full rebuilds are rarely needed, and most builds after changes
to the source are within a second or so.
> Then there's something very peculiar about your codebase.

To me it looks more likely that your codebase is very unusual, rather
than David's.
In order to get meaningful measurements I took an embedded project that
is significantly bigger than average by my standards. Here are the times
for a full parallel rebuild (make -j5) on a relatively old computer
(4-core Xeon E3-1271 v3).

  Option   time (s)   time with -g (s)   text size (bytes)
  -O0      13.1       13.3               631648
  -Os      13.6       14.1               424016
  -O1      13.5       13.7               455728
  -O2      14.0       14.1               450056
  -O3      14.0       14.6               525380
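For anyone wanting to reproduce this kind of table, a minimal sketch
(illustrative only, assuming a makefile that takes its optimisation
flags from CFLAGS):

  for opt in -O0 -Os -O1 -O2 -O3; do
      make clean
      time make -j5 CFLAGS=$opt
  done

The text sizes can then be read off the finished binary with the
binutils "size" tool.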
The difference in time between the different -O settings in my
measurements is even smaller than reported by David Brown. That can be
attributed to the older compiler (gcc 4.1.2). Another difference is that
this compiler runs under Cygwin, which is significantly slower than both
native Linux and native Windows. That causes relatively higher make
overhead and a longer link. If I had "native" tools, then all the times
would likely be a few seconds shorter, and the difference between -O0
and -O3 would be close to 10%.

> However, is there any way of isolating the compilation time (turning
> .c files into .o files) from 'make' and the linker?

Why would anyone want to do that? At times, it can be useful to do
partial builds, but compilation alone is not particularly useful.
> Failing that, can you compile just one module in isolation (.c to .o)
> with -O0 and -O2, or is that not possible?
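A minimal sketch of doing exactly that, reusing the tm timer and
bignum.c from the transcript above (the -c switch stops after producing
the .o, so neither make nor the linker is involved):

  C:\c>tm gcc -c bignum.c -O0
  C:\c>tm gcc -c bignum.c -O2

Any fixed overhead from process creation and file access appears equally
in both runs, so the difference between the two times is down to the
extra optimisation work alone.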
> Those throughputs don't look that impressive for a parallel build on
> what sounds like a high-spec machine.

How can you possibly judge that when you have no idea how big the
project is?

> So two people are now saying that all the many dozens of extra passes
> and all the extra analysis that gcc -O2/-O3 has to do, compared with
> the basic front-end work that every toy compiler needs to do (and does
> quickly), only slow it down by 10%. I really don't believe it. And you
> should understand that it doesn't add up.

That's not what people have said.
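One concrete way to check where the time actually goes (a suggestion,
not something tried in the thread): gcc can report the time spent in
each of its internal passes.

  gcc -c bignum.c -O2 -ftime-report

Comparing that report against an -O0 run of the same file shows directly
how much of the total is front-end work that every compiler must do
(preprocessing, parsing) versus the optimisation passes that only run at
the higher -O levels.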