Subject: Re: question about linker
From: bc (at) *nospam* freeuk.com (Bart)
Newsgroups: comp.lang.c
Date: 05 Dec 2024, 20:27:36
Organization: A noiseless patient Spider
Message-ID: <visur8$1pj41$1@dont-email.me>
User-Agent: Mozilla Thunderbird
On 05/12/2024 18:30, Scott Lurndal wrote:
> Bart <bc@freeuk.com> writes:
>> On 05/12/2024 15:46, David Brown wrote:
>>> On 05/12/2024 04:11, Waldek Hebisch wrote:
>>> Few users are bothered about $0.10 worth of disk space.)
>>
>> And again you fail to get the point. Disk space could be free, but that
>> doesn't mean a 1GB or 10GB executable is desirable. It would just be
>> slow and cumbersome.
>
> Given that all modern systems load text and data pages from the executable
> dynamically on the first reference[*], I don't see how executable
> size (much of which is debug and linker metadata such as
> symbol tables) matters at all. Modern SSDs can move data at
> better than 1GB/sec, so 'slow' isn't correct either.
So what the hell IS slowing down your build then?
Converting C code to native code can be done at 10 MB/sec per core. Is your final binary 36 GB?
The whole matter is confusing to me. David Brown says that build speed is no problem at all, as it's only a second or two. You post figures of 13 MINUTES, which really means an hour's work on one core.
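To spell out the arithmetic behind that 36 GB question (assuming your 13 minutes is the wall-clock time of a parallel build spread over four or five cores):

   13 min x ~4.6 cores    ~= 3600 s of single-core compilation
   3600 s x 10 MB/sec     ~= 36 GB of code through the compiler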
Meanwhile DB later admits to real clean-build times of nearly 3 minutes on one core.
So what I see is that compile times could be much better, but no one wants to admit it. Instead people mitigate those times by using parallel builds, or by being cagey about how much they care about compile times, effectively brushing the matter under the carpet. I suspect that few here want to admit that a fast, streamlined compiler might actually be useful!
BTW you can run a ten-times-faster compiler AND use parallel builds; have you thought of that?
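As a concrete illustration (assuming a conventional makefile that honours $(CC); the -j count is just an example):

   make -j8 CC=tcc     # parallel build, with tcc doing the compiling
                       # (assumes the makefile's compile rules use $(CC))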
And if tcc can't manage it, is there nobody capable of making gcc etc. faster, or is it just too big a project to streamline?