Subject: Re: question about linker
From: antispam (at) *nospam* fricas.org (Waldek Hebisch)
Newsgroups: comp.lang.c
Date: 05 Dec 2024, 04:11:23
Organization: To protect and to server
Message-ID: <vir5kp$3hjd9$1@paganini.bofh.team>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
User-Agent: tin/2.6.2-20221225 ("Pittyvaich") (Linux/6.1.0-9-amd64 (x86_64))
David Brown <david.brown@hesbynett.no> wrote:
> On 04/12/2024 16:09, Bart wrote:
>> On 04/12/2024 09:02, David Brown wrote:
>>> On 03/12/2024 19:42, Bart wrote:
>>>>
>> Yesterday you tried to give the misleading impression that compiling a
>> substantial 200Kloc project only took 1-3 seconds with gcc.
>
> No, I did not. I said my builds of that project typically take 1-3
> seconds. I believe I was quite clear on the matter.
Without the word "make" it was not clear if you meant a full build (say,
after a checkout from the repository). Frequently people talk about
re-making when they mean running make after a small edit, and reserve
"build" for a full build. So it was not clear if you were claiming to
have a compile farm with a few hundred cores (so that you can compile
all files in parallel).
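
To make the distinction concrete, the two cases look roughly like this
for a typical Makefile-based project (the URL and the job count below
are just placeholders):

  # incremental rebuild after a small edit: make recompiles only the
  # files whose sources changed, so it is normally fast
  make

  # full build, e.g. right after checkout: every file gets compiled
  git clone https://example.org/project.git
  cd project
  make          # sequential full build
  make -j8      # the same, but running 8 compile jobs in parallel
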
> If I do a full, clean re-compile of the code, it takes about 12 seconds
> or so. But only a fool would do that for their normal builds. Are you
> such a fool? I haven't suggested you are - it's up to you to say if
> that's how you normally build projects.
>
> If I do a full, clean re-compile /sequentially/, rather than with
> parallel jobs, it would be perhaps 160 seconds. But only a fool would
> do that.
Well, when I download a project from the internet, the first (and
frequently the only) compilation is a full build. And if the build
fails, IME it is much harder to find the problem in the log of a
parallel build. So I frequently run full builds sequentially. Of
course, I find something else to do while the computer is busy
(300 seconds of computer time spent on a full build is not worth an
extra 30 seconds of my time finding the trouble in a parallel log;
and for bigger things _both_ times grow, so the conclusion is the
same).
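
In practice that just means something like the following (the log file
names and the job count are arbitrary):

  # sequential full build, log captured to a file; error messages
  # appear directly after the command that produced them
  make -j1 2>&1 | tee build.log

  # parallel full build: much faster, but output from the different
  # compile jobs gets interleaved, so the log is harder to read
  make -j8 2>&1 | tee build-par.log

(Newer GNU make versions also have an --output-sync option that groups
the output per target, which helps somewhat.)
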
>> I gave some timings that showed gcc-O0 taking 50 times longer than tcc,
>> and 150 times longer with -O2.
>>
>> That is the real picture. Maybe your machine is faster than mine, but I
>> doubt it is 100 times faster. (If you don't like my benchmark, then
>> provide another in portable C.)
>>
>> All this just so you can crap all over the benefits of small, faster,
>> simpler tools.
> Your small, fast, simple tools are - as I have said countless times -
> utterly useless to me. Perhaps you find them useful, but I have never
> known any other C programmer who would choose such tools for anything
> but very niche use-cases.
>
> The real picture is that real developers can use real tools in ways that
> they find convenient. If you can't do that, it's your fault. (I don't
> even believe it is true that you can't do it - you actively /choose/ not
> to.)
>
> And since compile speed is a non-issue for C compilers under most
> circumstances, compiler size is /definitely/ a non-issue, and
> "simplicity" in this case is just another word for "lacking useful
> features", there are no benefits to your tools.
I somewhat disagree. You probably represent the opinion of the majority
of developers. But that attitude leads to uncontrolled, runaway
complexity and bloat. You clearly see the need for fast and reasonably
small code on your targets. But there are also machines like the
Raspberry Pi, where normal tools, including compilers, can be quite
helpful. Such machines may have rather tight "disc" space, and CPU use
corresponds to power consumption, which preferably should be low. So
there is some interest in, and benefit from, smaller, more efficient
tools.
OTOH, people do not want to drop all features. And concerning gcc,
AFAIK it is actually a compromise, for good reason. Some other projects
are slow and bloated apparently for no good reason. Some time ago I
found a text about the Netscape mail index file. The author (IIRC Jamie
Zawinski) explained how its features ensured small size and fast
loading. But in later development it was replaced by some generic
DB-like solution, leading to a huge slowdown and much higher space use
(apparently the new developers were not willing to spend a little time
to learn how the old code worked). And similar examples are quite
common.
And concerning compiler size, I do not know if GCC/clang developers
care. But clearly Debian developers care: they use shared libraries,
split debug info into separate packages, and do similar things to
reduce size.
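
The debug info split, for instance, is normally done with the usual
binutils tools, roughly like this (the file names are just
placeholders):

  # copy the debug information into a separate file
  objcopy --only-keep-debug prog prog.debug

  # strip it from the binary that actually gets installed
  strip --strip-debug --strip-unneeded prog

  # record a link so that gdb can still find the detached debug file
  objcopy --add-gnu-debuglink=prog.debug prog

A machine that never runs a debugger then only needs the small stripped
binary; the .debug file can sit in a separate package that most users
never install.
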
--
Waldek Hebisch