Subject : Re: Baby X is born again
From : bc (at) *nospam* freeuk.com (bart)
Newsgroups : comp.lang.c
Date : 14. Jun 2024, 19:24:04
Organisation : A noiseless patient Spider
Message-ID : <v4i1s4$30goi$1@dont-email.me>
User-Agent : Mozilla Thunderbird
On 14/06/2024 17:43, David Brown wrote:
> On 13/06/2024 16:43, Michael S wrote:
>> Somewhat more than a second on less modern hardware. Enough for me to
>> feel that compilation is not instant.
>> But 1 MB is just an arbitrary number. For 20 MB everybody would feel
>> the difference. And for 50 MB few people would not want it to be much
>> faster.
>
> But what would be the point of trying to embed such files in the first
> place? There are much better ways of packing large files.
I remember complaining that some tool installations were bloated at 100MB, 500MB, 1000MB or beyond, and your attitude was "So what, since there is now almost unlimited storage."
But now of course, it's "Why would someone ever want to do X with such a large file!" Suddenly large files are undesirable when it suits you.
> You can always increase sizes for things until you get problems or
> annoying slowdowns, but that does not mean that will happen in
> practical situations.
> And even if you /did/ want to embed a 20 MB file, and even if that took
> 20 seconds, so what? Unless you have a masochistic build setup, such as
> refusing to use "make" or insisting that everything goes in one C file
> that is re-compiled all the time, that 20 second compile is a one-off
> time cost on the rare occasion when you change the big binary file.
> Now, I am quite happy to agree that faster is better, all other things
> being equal. And convenience and simplicity is better. Once the
> compilers I use support #embed, if I need to embed a file and I don't
> need anything more than an array initialisation, I'll use #embed. Until
> then, 5 seconds writing an "xxd -i" line in a makefile and a 20 second
> compile (if it took that long) beats 5 minutes writing a Python script
> to generate string literals even if the compile is now 2 seconds.
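(For reference, the "xxd -i" route described above just generates a plain
C source fragment. A rough sketch, assuming a hypothetical file called
data.bin and xxd's default naming:

    /* produced by a one-line makefile rule such as
       "xxd -i data.bin > data_bin.h" */
    unsigned char data_bin[] = {
      0x1f, 0x8b, 0x08, 0x00   /* ... one array element per byte ... */
    };
    unsigned int data_bin_len = 20971520;   /* 20 MB */

The program then #includes that header and uses the array like any other
object.)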
That's a really bad attitude. It partly explains why such things as #embed take so long to get added.
I've heard lots of horror stories elsewhere about projects taking minutes, tens of minutes or even hours to build.
How much of that is due to attitudes like yours? You've managed to find ways of working around speed problems, by throwing hardware resources at it (fast processors, loads of memory, multiple cores, SSD, RAM-disk), or using ingenuity in *avoiding* having to compile stuff as much as possible. Or maybe the programs you build aren't that big.
But that is not how you fix such problems. Potential bottlenecks should be identified and investigated.
/Could/ it be faster? /Could/ it use less memory? /Could/ a simple language extension help out?
I can understand you having little interest in it because you just use the tools that are available and can't do much about it, but it should be somebody's job to keep on top of this stuff.
> Until
> then, 5 seconds writing an "xxd -i" line in a makefile and a 20 second
> compile (if it took that long) beats 5 minutes writing a Python script
> to generate string literals even if the compile is now 2 seconds.
So now you need 'xxd'. And 'Python'. And 'make'. When it could all be done effortlessly, more easily and 100 times faster within the language without all that mucking about.
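(A minimal sketch of the in-language equivalent, using the C23 #embed
directive and again assuming a hypothetical data.bin:

    static const unsigned char data[] = {
    #embed "data.bin"
    };

The preprocessor expands the directive into the comma-separated byte
values of the file, so no external tool, generated header or extra build
step is needed.)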
> Unless you have a masochistic build setup, such as
> refusing to use "make" or insisting that everything goes in one C file
> that is re-compiled all the time,
When you write such tools, you don't know what people are going to do with them, how much they will push their limits. And you can't really dictate how they develop or build their software.