On 21/11/2024 13:00, David Brown wrote:
> On 20/11/2024 21:17, Bart wrote:
>> For the routine ones that I do 100s of times a day, where test runs are generally very short, I don't want to hang about waiting for a compiler that is taking 30 times longer than necessary for no good reason.
>
> Your development process sounds bad in so many ways it is hard to know where to start. I think perhaps the foundation is that you taught yourself a bit of programming in the 1970's,
1970s builds, especially on mainframes, were dominated by link times. You also had to keep an eye on resources (eg. allocated CPU time), as they were limited on time-shared systems.
Above all, you could only do active work from a terminal that you first had to book, for one-hour slots.
I'm surprised you think that my tools and working practices have any connection with the above.
I've also eliminated linkers; you apparently still use them.
> As I said, no one is ever going to care if a compilation takes 1 second or 0.1 seconds.
And yet, considerable effort IS placed on getting development tools to run fast:
* Presumably, optimisation is applied to the compiler itself to make it run faster than it otherwise would. But why bother if the difference is only a second or so?
* Tools can now do builds in parallel across multiple cores. Again, why? So that 1 second becomes 20 lots of 50ms? Or would that 1 second really have been 20 seconds without that feature?
* People are developing new kinds of linkers (I think there was 'gold', and now something else) which are touted as being several times faster than traditional ones.
* All sorts of make and other build files are used to define dependency graphs between program modules. Why? Presumably to minimise the time spent recompiling (see the sketch below).
* There are various JIT compilation schemes where a rough version of an application can get up and running quickly, with 'hot' functions compiled and optimised on demand. Again, why?
If people really don't care about compilation speed, why this vast effort?
Making development tools faster is an active field, and everyone benefits, including you; but when I do it, it's a pointless waste of time?
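To illustrate the dependency-graph point from the list above (the three-file project is made up, and I'm assuming a conventional Makefile with one rule per object file; output paraphrased from memory):

c:\proj>make
  (first build: compiles a.c, b.c and c.c, then links prog)
c:\proj>make
  make: 'prog' is up to date.
  (nothing recompiled)
  ... edit b.c only ...
c:\proj>make
  (recompiles b.c and relinks prog; a.c and c.c are left alone)

That is the whole point of the dependency graph: touch one module out of twenty and you pay for one compile, not twenty.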
> As I said, no one is ever going to care if a compilation takes 1 second
> or 0.1 seconds.
Have you asked? You must use interactive tools like shells; I guess you wouldn't want a pointless one-second delay after each command, when you KNOW the command doesn't warrant such a delay.
That would surely slow you down if you're used to fluently firing off a rapid sequence of commands.
The problem is that you don't view use of a compiler as just another interactive command.
> As I said, no one is ever going to care if a compilation takes 1 second
> or 0.1 seconds.
Here's an actual use-case: I have a transpiler that produces a single-file C output of 40K lines. Tiny C can build it in 0.2 seconds; gcc -O0 takes 2.2 seconds. However, there's no point in using gcc at -O0, as the generated code is as poor as Tiny C's, so I might as well use that.
But if I want faster code, gcc -O2 takes 11 seconds.
For lots of routine builds used for testing, passing the intermediate C through gcc -O2 makes no sense at all. It is just a waste of time, destroys my train of thought, and is very frustrating.
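(If anyone wants to reproduce that kind of comparison, something like the following does it on a Unix-style shell; the file name is hypothetical, and the numbers above are from my machine, so yours will differ:

$ time tcc -o prog big.c
$ time gcc -O0 -o prog big.c
$ time gcc -O2 -o prog big.c
)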
However, if you ran the world, then tools like gcc and its ilk would be the only choice!
> So your advice is that developers should be stuck
I'm saying that most developers don't write their own tools. They will use off-the-shelf language implementations. If those happen to be slow, then there's little they can do except work within those limitations.
Or just twiddle their thumbs.
> Which do you think an employer (or amateur programmer) would prefer?
> a) A compiler that runs in 0.1 seconds with little static checking
> b) A compiler that runs in 10 seconds but spots errors saving 6 hours debugging time
You can have both. You can run a slow compiler that might pick up those errors.
But sometimes you make a trivial mod (eg. change a prompt); do you REALLY need that deep analysis all over again? Do you still need it fully optimised?
If your answer is YES to both then there's little point in further discussion.
> I might spend an hour or two writing code (including planning, organising, reading references, etc.) and then 5 seconds building it. Then there might be anything from a few minutes to a few hours testing or debugging.
Up to a few hours testing and debugging without needing to rebuild? The last time I had to do that, it was a program written on punched cards that was submitted as an overnight job. You could compile it only once a day.
And you're accusing ME of being stuck in the 70s!
> But using a good compiler saves a substantial amount of developer time
A better language too.
> <snip the rest to save time>
So you snipped my comments about fast bytecode compilers which do zero analysis being perfectly acceptable for scripting languages.
And my remark about my language edging towards behaving as a scripting language.
I can see why you wouldn't want to respond to that.
BTW I'm doing the same with C; given this program:
int main(void) {
    int a;
    int* p = 0;    // null pointer
    a = *p;        // dereference it: undefined behaviour, typically a crash
}
Here's what happens with my C compiler when told to interpret it:
c:\cx>cc -i c
Compiling c.c to c.(int)
Error: Null ptr access
Here's what happens with gcc:
c:\cx>gcc c.c
c:\cx>a
<crashes>
Is there some option to insert such a check with gcc? I've no idea; most people don't.
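(Apparently gcc does have an undefined-behaviour sanitiser, -fsanitize=undefined, which inserts run-time checks of this kind on platforms where it's supported; I haven't verified the exact output, but I gather it's something like:

c:\cx>gcc -fsanitize=undefined c.c
c:\cx>a
c.c:4: runtime error: load of null pointer of type 'int'

Even so, the check is off by default, hidden behind an option most people will never reach for.)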