> I understand your viewpoint and motivation. But my own experience is mostly different.

Or maybe I have a really big code base. (My last project at work used a distributed compilation system across all our workstations.)
> First, to get it out of the way, there's the speed of compilation. While heavy optimisation (-O3) can take noticeably longer, I never see -O0 compile noticeably faster than -O1 or even -O2. (I'm implicitly using gcc options here, but this applies to most serious compilers I have used.) Frankly, if your individual C compiles during development are taking too long, you are doing something wrong. Maybe your files are far too big, or you are trying to do too much in one part - split the code into manageable sections, and possibly into libraries, and it will be far easier to understand, write and test. Maybe you are not using appropriate build tools. Maybe your host computer is long outdated or grossly underpowered.
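For what it's worth, I'll time a few of my larger single files at each level and see whether it's the optimisation level or just the sheer amount of code (the file name below is a placeholder):

    time gcc -O0 -c big_module.c
    time gcc -O1 -c big_module.c
    time gcc -O2 -c big_module.c
    time gcc -O3 -c big_module.c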
> There are exceptions. Clearly some languages - like C++ - are more demanding of compilers than others. And if you are using whole-program or link-time optimisation, compilation and build time is more of an issue - but of course these only make sense with strong optimisation.

In recent years most of my code has been C++.
> Secondly, there is static error analysis. While it is possible to do this using additional tools, your first friend is your compiler and its warnings. (Even with additional tools, you'll want compiler warnings enabled.) You always want to find your errors as early as possible - from your editor/IDE, your compiler, your linker, your additional linters, your automatic tests, your manual tests, your beta tests, your end users' complaints. The earlier in this chain you find an issue, the faster, easier and cheaper it is to fix. And compilers do a better job at static error checking with strong optimisations enabled, because they do more code analysis.

Agreed. The cost of maintenance is often overlooked, and correct code is more useful than faster code.
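That matches my reading of the gcc docs for -Wmaybe-uninitialized - as far as I can tell it depends on data-flow analysis that only runs when the optimisers do, so something like this gets flagged at -O1 but may slip through quietly at -O0 (exact behaviour varies by gcc version):

    /* maybe_uninit.c - y is never set when x <= 0 */
    int f(int x)
    {
        int y;
        if (x > 0)
            y = 2 * x;
        return y;   /* gcc -Wall -O1 -c maybe_uninit.c warns here;
                       at -O0 the warning may not appear */
    }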
> Thirdly, optimisation allows you to write your code with more focus on clarity, flexibility and maintainability, relying on the compiler for the donkey work of efficiency details. If you want efficient results (and that doesn't always matter - but if it doesn't, then C is probably not the best choice of language in the first place) and you also want to write good quality source code, optimisation is a must.
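That's how I read it too - write the obvious version and let the optimiser do the donkey work. If I'm not mistaken, gcc at -O2 turns both of these into much the same machine code, so the pointer-juggling version buys nothing but obscurity:

    /* Clear version - what I'd write, leaving the details to the
       optimiser: */
    long sum(const int *a, int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    /* Hand-"optimised" version - harder to read, and with optimisation
       enabled it generally ends up as the same machine code anyway: */
    long sum2(const int *a, int n)
    {
        long total = 0;
        const int *end = a + n;
        while (a != end)
            total += *a++;
        return total;
    }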
> Now to your point about debugging. It is not uncommon for me to use debuggers, including single-stepping, breakpoints, monitoring variables, modifying data via the debugger, and so on. It is common practice in embedded development. I also regularly examine the generated assembly, and debug at that level. If I am doing a lot of debugging on a section of code, I generally use -O1 rather than -O0 - precisely because it is far /easier/ to understand the generated code. At -O0 it is typically hard to see what is going on in the assembly, because it is swamped by stack accesses and code that would be far simpler when optimised. (That goes back to the focus on source code clarity and flexibility rather than micro-managing for run-time efficiency without optimisation.)

I'll play with the settings.
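A simple way to compare, I think, is to dump the assembly for a toy function at each level - -S is the gcc flag for that, and the exact output will of course depend on gcc version and target:

    /* add.c */
    int add(int a, int b)
    {
        return a + b;
    }

    /* gcc -O0 -S add.c : the arguments get stored to the stack frame
       and reloaded around the add - lots of noise for one operation.
       gcc -O1 -S add.c : typically collapses to a single add (or lea)
       and a return. */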
> Some specific optimisation options can make a big difference to debugging, and can be worth disabling, such as with "-fno-inline" or "-fno-toplevel-reorder", and heavily optimised code can be hard to follow in a debugger. But disabling optimisation entirely can often, IME, make things harder.
> Temporarily changing optimisation flags for all or part of the code while chasing particular bugs is a useful tool, however.
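Useful to know. If I've read the gcc documentation correctly, the optimize function attribute is one way to do the per-function version - drop just the suspect function to -O0 while the rest of the file stays optimised (a gcc extension, and the docs say it is for debugging rather than production use):

    /* Build the file at -O2 as usual, but leave this one function
       unoptimised while chasing the bug (gcc-specific attribute): */
    __attribute__((optimize("O0")))
    void suspect_function(void)
    {
        /* ... code under investigation ... */
    }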