On 19/11/2024 22:40, Waldek Hebisch wrote:
Bart <bc@freeuk.com> wrote:
It is related: both gcc and LLVM are doing analyses that in the
past were deemed impractically expensive (both in time and in space).
Those analyses work now thanks to smart algorithms that
significantly reduced resource usage. I know that you consider
this too expensive.
How long would LLVM take to compile itself on one core? (Here I'm not
even sure what LLVM is; if you download the binary, it's about 2.5GB,
but a typical LLVM compiler might be 100+ MB. But I guess it will be a
while in either case.)
I have a product now that is like a mini-LLVM backend. It can build into
a standalone library of under 0.2MB, which can directly produce EXEs, or
it can interpret. Building that product from scratch takes 60ms.
That is my kind of product.
What's the context of this 0.1 seconds? Do you consider it long or short?
Context is interactive response. It means "pretty fast for interactive
use".
It's less than the time to press and release the Enter key.
My tools can generally build my apps from scratch in 0.1 seconds; big
compilers tend to take a lot longer. Only Tiny C is in that ballpark.
So I'm failing to see your point here. Maybe you picked up that 0.1
seconds from an earlier post of mine and are suggesting I ought to be
able to do a lot more analysis within that time?
This 0.1s is an old thing. My point is that if you are compiling a
simple change, then you should be able to do more in this time. In
normal development, source files bigger than 10,000 lines are relatively
rare, so once you get into the range of 50,000-100,000 lines per second,
making the compiler faster is of marginal utility.
I *AM* doing more in that time! It just happens to be stuff you appear
to have no interest in:
* I write whole-program compilers: you always process all source files
of an application. The faster the compiler, the larger the application
this remains practical for.
* That means no headaches with dependencies (it goes hand in hand with a
decent module scheme)
* I can change one tiny corner of the program, say add an /optional/
argument to a function, which requires compiling all call-sites across
the program, and the next compilation will take care of everything (see
the C-flavoured sketch after this list)
* If I were to do more with optimisation (there is lots that can be done
without getting into the heavy stuff), it automatically applies to the
whole program
* I can choose to run applications from source code, without generating
discrete binary files, just like a scripting language
* I can choose (with my new backend) to interpret programs in this
static language. (Interpretation gives better debugging opportunities)
* I don't need to faff around with object files or linkers
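
To make the optional-argument point concrete, here is a hedged,
C-flavoured sketch (C itself has no optional arguments, so the change
is simulated by adding a plain parameter; all names are invented for
illustration). One signature change touches every call site, which is
only painless when the whole program is recompiled anyway:

    /* util.h -- shared header; 'factor' is the newly added parameter */
    int scale(int x, int factor);

    /* util.c */
    int scale(int x, int factor) { return x * factor; }

    /* main.c -- every file that calls scale() must be updated */
    #include <stdio.h>
    #include "util.h"

    int main(void) {
        /* with separate compilation you must track down each affected
           .c file and rebuild it; a whole-program compiler rebuilds
           everything, so no stale call site can survive */
        printf("%d\n", scale(21, 2));
        return 0;
    }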
Module-based independent compilation and having to link 'object files'
is stone-age stuff.
We clearly differ on the question of what is routine. Creating a usable
executable is a rare task; once an executable is created it can be used
for a long time. OTOH development is routine, and for that one wants
to know if a change is correct.
I take it then that you have some other way of doing test runs of a
program without creating an executable?
It's difficult to tell from your comments.
Already a simple thing would be an improvement: make the compiler aware
of the error routine (if you do not have one, add it), so that when you
signal an error the compiler will know that there is no need for a
normal return value.
OK, but what does that buy me? Saving a few bytes for a return
instruction in a function? My largest program, which is 0.4MB, already
only occupies 0.005% of the machine's 8GB.
Which is not going to be part of a routine build.
In a sense a build is not routine. A build is done for two purposes:
- to install a working system from sources, which includes
documentation
- to check that the build works properly after changes; this should
also check the documentation build.
Normal development goes without rebuilding the system.
We must be talking at cross-purposes then.
Either you're developing using interpreted code, or you must have some
means of converting source code to native code, but for some reason you
don't use 'compile' or 'build' to describe that process.
Or maybe your REPL/incremental process can run for days doing
incremental changes without doing a full compile.
It seems quite mysterious.
I might run my compiler hundreds of times a day (at 0.1 seconds a time,
600 builds would occupy one whole minute in the day!). I often do it for
frivolous purposes, such as trying to get some output lined up just
right. Or just to make sure something has been recompiled, since it's so
quick that it's hard to tell.
I know. But this is not what I do. A build produces multiple
artifacts: some of them executable, some loadable code (but _not_
in a form recognized by the operating system), and some essentially
non-executable (like documentation).
So, 'build' means something different to you. I use 'build' just as a
change from writing 'compile'.
This sounds like a REPL system. There, each line is a new part of the
program which is processed, executed and discarded.
First, I am writing about two different systems. Both have a REPL.
Lines typed at the REPL are "discarded", but their effects may last
a long time.
My last big app used a compiled core but most user-facing functionality
was done using an add-on script language. This meant I could develop
such modules from within a working application, which provided a rich,
persistent environment.
Changes to the core program required a rebuild and a restart.
However the whole thing was an application, not a language.
What happens if you change the type of a global; are you saying that
none of the program's code needs revising?
In a typed system there are no global "library" variables; all data
is encapsulated in modules and normally accessed in an abstract way,
by calling appropriate functions. So, in "clean" code you
can recompile a single module and the whole system works.
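
A hedged C sketch of that encapsulation style (invented names): the
header exposes only an opaque type plus accessor functions, so the
module's internals can change and only counter.c needs recompiling; no
caller ever sees the layout:

    /* counter.h -- all that clients ever see */
    #ifndef COUNTER_H
    #define COUNTER_H
    typedef struct Counter Counter;   /* opaque: layout hidden */
    Counter *counter_new(void);
    void counter_incr(Counter *c);
    int counter_value(const Counter *c);
    #endif

    /* counter.c -- internals free to change without touching callers */
    #include <stdlib.h>
    #include "counter.h"

    struct Counter { int value; };    /* clients never see this */

    Counter *counter_new(void) { return calloc(1, sizeof(Counter)); }
    void counter_incr(Counter *c) { c->value++; }
    int counter_value(const Counter *c) { return c->value; }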
I used module-at-a-time compilation until 10-12 years ago. The module
scheme had to be upgraded at the same time, but it took several goes to
get it right.
Now I wouldn't go back. Who cares about compiling a single module that
may or may not affect a bunch of others? Just compile the lot!
If a project's scale becomes too big, then it should be split into
independent program units, for example a core EXE file and a bunch of
DLLs; that's the new granularity. Or a lot of functionality can be
off-loaded to scripts, as I used to do.
(My scripting language code still needs bytecode compilation, and I also
use whole-program units there, but the bytecode compiler goes up to 2M
lines per second.)