> On 7/4/2024 8:05 PM, Lawrence D'Oliveiro wrote:
>> That would just be rearranging the deck chairs I think.
>> It’s called “Rust”.
>
> If anything, I suspect it may make sense to go a different direction:
> not to a bigger language, but to a more narrowly defined language.
> Basically, to try to distill what C does well, keeping its core
> essence intact.
> *1: While not exactly that rare, and can be useful, it is debatable
> if they add enough to really justify their complexity and relative
> semantic fragility.

C only has 1D arrays, but array elements can themselves be array types.
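
For example (a minimal sketch, nothing here beyond standard C):

  #include <stdio.h>

  int main(void) {
      int m[3][4];               /* an array of 3 elements, each an int[4] */

      m[2][1] = 42;              /* regular two-level indexing */
      printf("%d\n", m[2][1]);   /* prints 42 */
      return 0;
  }
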
> If using pointers, one almost invariably needs to fall back to doing
> "arr[y*N+x]".

I've never had to do that, even if N was a variable. If using Iliffe
vectors then C allows you to do regular indexing even with runtime
bounds.
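
For reference, the fallback described in the quote looks roughly like
this (a minimal sketch with runtime bounds and made-up names; error
handling kept to a minimum):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      int w = 5, h = 3;                    /* bounds known only at runtime */
      int *arr = malloc((size_t)h * w * sizeof *arr);
      if (!arr) return 1;

      arr[2 * w + 4] = 42;                 /* manual arr[y*N+x] arithmetic */
      printf("%d\n", arr[2 * w + 4]);

      free(arr);
      return 0;
  }
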
> Note that multidimensional indexing via multiple levels of pointer
> indirection would not be affected by this.

These are Iliffe vectors.
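
A minimal sketch of that, again with runtime bounds but with regular
arr[y][x] indexing (allocation checks mostly omitted):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      int w = 5, h = 3;                /* bounds known only at runtime */

      /* Iliffe vector: a vector of row pointers, one per row */
      int **arr = malloc(h * sizeof *arr);
      if (!arr) return 1;
      for (int y = 0; y < h; y++)
          arr[y] = calloc(w, sizeof **arr);

      arr[2][4] = 42;                  /* no manual y*w+x arithmetic */
      printf("%d\n", arr[2][4]);

      for (int y = 0; y < h; y++)
          free(arr[y]);
      free(arr);
      return 0;
  }
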
> Similarly, structs may not be declared at the point of use, but only
> as types.

This is how it works in my language; they must be a named user-type.
It's crazy how C allows them just anywhere, even in useless
declarations like this:
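
(An illustration of the sort of thing meant, not necessarily the
original example: struct types that C happily lets you define at the
point of use, where they are next to useless.)

  #include <stdio.h>

  /* a struct type defined inside a parameter list: the anonymous type
     is visible only within this one declaration, so the function can
     hardly be called (compilers typically warn about exactly this) */
  void takes_point(struct { int x, y; } p);

  int main(void) {
      /* a brand-new struct type defined just to take its size */
      printf("%zu\n", sizeof(struct { int x, y; }));
      return 0;
  }
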
> Though, would want to do a few things differently from my current IR
> to be able to reduce memory footprint; as my current IR was designed
> in such a way that it is effectively necessary to load in the whole
> program before code-generation can be done. Ideally one would want
> the compiler to be able to read in the IR as needed and discard the
> parts it is already done with (but, existing efforts to redesign the
> IR stage here have tended to fizzle; would effectively need to
> redesign the compiler backend to be able to make effective use of
> it).

It's not an issue at all. My main compiler is a whole-program one; the
entire source code for my biggest program occupies 1/8000th of the
memory of my PC. That would be like building an 8-byte source file on
my first computer.
> For a new compiler, could make sense to try to "do it right this
> time" (though, not too much of an issue if running the compiler on a
> modern PC, as they are "not exactly RAM constrained" in this area; so
> reading in and decoding the IR and symbol tables for an entire
> executable image at the same time, is not too much of an issue).
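
The "read in the IR as needed and discard the parts it is already done
with" structure mentioned above would give the backend a loop shaped
roughly like the sketch below; everything in it is a made-up
placeholder, not the actual IR format or API being discussed:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* placeholder "IR": one function per input line, just so the sketch
     is self-contained; a real reader would decode a binary IR image */
  typedef struct IrFunc { char name[128]; } IrFunc;

  static IrFunc *ir_read_next_function(FILE *in) {
      IrFunc *f = malloc(sizeof *f);
      if (!f || !fgets(f->name, sizeof f->name, in)) {
          free(f);
          return NULL;          /* end of IR image (or allocation failure) */
      }
      f->name[strcspn(f->name, "\n")] = '\0';
      return f;
  }

  static void codegen_emit_function(FILE *out, const IrFunc *f) {
      fprintf(out, "; code for %s would be emitted here\n", f->name);
  }

  static void ir_free_function(IrFunc *f) { free(f); }

  int main(void) {
      IrFunc *f;
      /* read one function's IR, generate code for it, then discard
         that IR before moving on, instead of loading everything first */
      while ((f = ir_read_next_function(stdin)) != NULL) {
          codegen_emit_function(stdout, f);
          ir_free_function(f);
      }
      return 0;
  }
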
> If pulled off well, such a module system could be both faster and
> require less memory use in the compiler if compared with headers.

Headers are a bottleneck in C. 50 modules all including the same huge
header (say the 1000 #includes involved with #include <gtk/gtk.h>,
which is for GTK2) would involve repeating all that work 50 times.
> Even with unity builds, build times can still get annoying for bigger
> programs. And here, I am talking like 20 seconds to rebuild a 250kLOC
> program.

In my language, a 250Kloc app would take about half a second to build
into an executable.
> Granted, even if parsing is fast, this still leaves the challenge of
> fast/efficient machine-code generation.

Unoptimised code in my case (for the way I write code, for my
language, and for my apps) is about 50% slower than when passed
through C and gcc -O3.