Subject: Re: Python recompile
From: antispam (at) *nospam* fricas.org (Waldek Hebisch)
Newsgroups: comp.lang.c
Date: 18. Mar 2025, 17:27:00
Organization: To protect and to server
Message-ID: <vrc6si$1jquk$1@paganini.bofh.team>
User-Agent: tin/2.6.2-20221225 ("Pittyvaich") (Linux/6.1.0-9-amd64 (x86_64))
bart <bc@freeuk.com> wrote:
> On 18/03/2025 09:53, Muttley@DastardlyHQ.org wrote:
>> On Mon, 17 Mar 2025 17:10:59 +0000
>> bart <bc@freeuk.com> wibbled:
>>> On 17/03/2025 16:32, Muttley@DastardlyHQ.org wrote:
>>>> On Mon, 17 Mar 2025 14:25:46 +0000
>>>> bart <bc@freeuk.com> wibbled:
>>>>> On 17/03/2025 12:07, Muttley@DastardlyHQ.org wrote:
>>>>> Anything C could do
>>>>
>>>> so long as you don't include all the standard C libraries
>>>> in "anything".
>>>
>>> Another mysterious remark. You seem to consider it your job to put down
>>> anything I do or say!
>>>
>>> So, what do the standard C libraries have to do with anything here?
>>
>> They're generally the interface to the OS on *nix. No idea about windows.
>>
>>> I think you can assume that the tool I used was up to the job
>>
>> I'm assuming nothing since all we have is your word for it.
>>
>> So presumably your amazing build system checks the current module build dates
>> and doesn't rebuild stuff that it doesn't have to?
>
> Why would it matter? I can compile code at one million lines every two
> seconds, and my largest project is 50K lines - do the math.
>
>> You'll have to excuse me if I take that figure with a large packet of salt
>> unless the code does nothing particularly complicated.
>
> If you don't believe my figures, try Tiny C on actual C programs.
> Tiny C is single pass, mine does multiple passes so is a little slower.
> What the code does is not that relevant:
>
>   c:\cx\big>tim tcc fann4.c
>   Time: 0.855
>
>   c:\cx\big>dir fann4.exe
>   18/03/2025 10:44 10,491,904 fann4.exe
>
> So tcc can generate 12MB per second in this case, for a test file of
> nearly 1M lines.
>
> What you should find harder to believe is this figure:
>
>   c:\cx\big>tim gcc fann4.c
>   Time: 50.571 (44.2 on subsequent build)
>
>   c:\cx\big>dir a.exe
>   18/03/2025 10:51 9,873,707 a.exe
>
> Since it can only manage 0.2MB per second for the same quality of code.
> How about making such compilers faster first before resorting to
> makefile tricks?
>
> Here is my C compiler on the same task:
>
>   c:\cx\big>tim bcc fann4
>   Time: 1.624
>
>   c:\cx\big>dir fann4.exe
>   18/03/2025 10:55 6,842,368 fann4.exe
>
> Throughput is only 4MB/second, but it is generating a smaller executable.
>
>>> I find it astonishing that even with machines at least a thousand times
>>> faster than I've used in the past, you have to resort to tricks to avoid
>>> compilation.
>>
>> Why not?
>
> You're missing the point. I mentioned a throughput of 500Klps above;
> divide that by 1000, and it means a machine from 40 years ago was able to
> build programs at 500 lines per second, which seems plausible.
>
> So what do you think is a more realistic figure for today's machines:
> 20Klps for an unoptimised build? (The gcc test managed 22Klps.) That
> would mean a compilation speed of 20 lines per second on an early 80s
> PC, which is ludicrous.
Actually 20 lines per second would be not bad. Early Turbo
Pascal was considered very fast and IIRC did 8000 lines per minute,
that is about 130 lines per second.

Modern machines are more than 1000 times faster than early PCs, probably
closer to 10000 times. If you believe in Dhrystones, a slow RISC-V
board does 1820.7 DMIPS, a slow Atom 2952 DMIPS, and a Zen at about
3 GHz 30501 DMIPS. A VAX (the 1 DMIPS reference machine) is quite
a bit faster than early PCs.

OTOH, compilers are a bad case for modern machines. gcc folks
observed that gcc execution time correlates better with
databases than with compute-intensive programs. In particular,
there are a lot of cache misses, which are costly on modern
machines.
> Something is badly wrong.
People want compilers to do more work. The idea is to write
a simple program without doing various speed-enhancing tricks
and still get good execution time. Look at the functions below.
'aref' guarantees that all accesses are in bounds. But at
-O2 gcc's compiled code for 'my_sum' is the same as code with
no bounds checking. Simply, the compiler can prove that all array
accesses are in bounds, so it can safely remove the checking
code. How good is the code from your compiler (assuming that the
source code is checking all array accesses)?
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Called on any out-of-bounds access. */
void
my_error(void) {
    fprintf(stderr, "my_error called\n");
    exit(EXIT_FAILURE);
}

/* Array that carries its own length (flexible array member). */
typedef struct {size_t size; int a[];} my_arr;

/* Bounds-checked element access. */
int
aref(my_arr * t, size_t i) {
    if (i < t->size) {
        return t->a[i];
    } else {
        my_error();
        return 0;
    }
}

/* Sums the array through aref; at -O2 gcc can prove that i < n
   implies i < t->size, so the per-element check disappears. */
long
my_sum(my_arr * t) {
    size_t n = t->size;
    size_t i;
    long sum = 0;
    for(i = 0; i < n; i++) {
        sum += aref(t, i);
    }
    return sum;
}
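
If you want to try this yourself, a minimal driver like the one below
(my sketch, not part of the discussion above; the array size 1000 is
arbitrary) can be appended to the code. Build with "gcc -O2" and look
at the assembly from "gcc -O2 -S": the loop in my_sum should contain
no call to my_error.

/* Hypothetical test driver: allocates a my_arr with 1000 elements,
   fills it, and prints the sum (0 + 1 + ... + 999 = 499500). */
int
main(void) {
    size_t n = 1000;
    size_t i;
    my_arr * t = malloc(sizeof(my_arr) + n * sizeof(int));
    if (t == NULL) {
        return EXIT_FAILURE;
    }
    t->size = n;
    for(i = 0; i < n; i++) {
        t->a[i] = (int)i;
    }
    printf("%ld\n", my_sum(t));
    free(t);
    return EXIT_SUCCESS;
}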
-- 
Waldek Hebisch