On 9/13/2024 10:30 AM, David Brown wrote:
On 12/09/2024 23:14, BGB wrote:
On 9/12/2024 9:18 AM, David Brown wrote:
On 11/09/2024 20:51, BGB wrote:
On 9/11/2024 5:38 AM, Anton Ertl wrote:
Josh Vanderhoof <x@y.z> writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>
<snip lots>
> Though, generally takes a few years before new features become usable.
> Like, it is only in recent years that it has become "safe" to use most parts of C99.

Nonsense. Most of the commonly used parts of C99 have been "safe" to use for 20 years. There were a few bits that MSVC did not implement until relatively recently, but I think even they have caught up now.

> Until VS2013, the most one could really use was:
>   // comments
>   long long
> Otherwise, it was basically C90.
> 'stdint.h'? Nope.
> Ability to declare variables wherever? Nope.
> ...
> After this, it was piecewise.
> Though, IIRC, still no VLAs or similar.

That I believe.
>> There are only two serious, general purpose C compilers in mainstream use - gcc and clang, and both support almost all of C23 now. But it will take a while for the more niche tools, such as some embedded compilers, to catch up.

> FWIW:
> I had been adding parts of newer standards in my case, but it is more hit/miss (more adding parts as they seem relevant).

Clearly your own compiler will only support the bits of C that you implement. But I am not sure that it counts as a "serious, general purpose C compiler in mainstream use" - no offence implied!

<stdbit.h> is, however, in the standard library rather than the compiler, and they can be a bit slow to catch up.
>>> Whether or not the target/compiler allows misaligned memory access;
>>> If set, one may use misaligned access.

> Say, you are using a target where you can't use GCC or similar.

Which target would that be? Excluding personal projects, some very niche devices, and long-outdated small CISC chips, there really aren't many devices that don't have a GCC and clang port. Of course there /are/ processors that gcc does not support, but almost nobody writes code that has to be portable to such devices.

Why would you need that? Any decent compiler will know what is allowed for the target (perhaps partly on the basis of compiler flags), and will generate the best allowed code for accesses like foo3() above.
> Imagine you have compilers that are smart enough to turn "memcpy()" into a load and store, but not smart enough to optimize away the memory accesses, or fully optimize away the wrapper functions...

Why would I do that? If I want to have efficient object code, I use a good compiler. Under what realistic circumstances would you need to have highly efficient results but be unable to use a good optimising compiler? Compilers have been inlining code for 30 years at least (that's when I first saw it) - this is not something new and rare.

> Say:
>   BJX2, haven't ported GCC as it looks like a pain;
>   also, GCC is big and slow to recompile.
>   6502 and 65C816, because these are old and probably not worth the effort from GCC's POV.
>   Various other obscure/niche targets.
> Say, SH-5, which never saw a production run (it was a 64-bit successor to the SH-4), but seemingly around the time Hitachi spun out Renesas, the SH-5 essentially got canned. And it apparently wasn't worth it for GCC to maintain a target for which there were no actual chips (comparably, the SH-2 and SH-4 lived on a lot longer due to having niche uses).

It would be quite ridiculous to limit the way you write code because of possible limitations for non-existent compilers for target devices that have never been made.

> So, for best results, the best case option is to use a pointer cast and dereference.

I still cannot see any situation where it would be relevant. If I need to read 4 bytes of memory from an address, and don't know if the address is uint32_t aligned or not, I would use memcpy(). The compiler would know if unaligned 32-bit reads are supported or not for the target, or if it is faster to use them or use byte reads. That's the compiler's job - I'm the programmer, not the micro-manager.

> I can think of a few.
> For some cases, one may also need to know whether or not they can access the pointers in a misaligned way (and whether doing so would be better or worse than something like "memcpy()").

Again, I cannot see a /real/ situation where that would be relevant.

> Most often though, it is in things like data compression/decompression code, where there is often a lot of priority on "gotta go fast".
> This is why the latter have an "f" extension (for "fast").
> There is a difference here between "_memlzcpy()" and "_memlzcpyf()" in that:
>   the former will always copy an exact number of bytes;
>   the latter may write 16-32 bytes over the limit.

It may do /what/ ? That is a scary function!

I can accept that there are cases (such as you describe below) where this might be useful, but I would not be identifying it just with an "f".
> There are cases where it may be desirable to have the function write past the end in the name of speed, and others where this would not be acceptable.
> Hence why there are 2 functions.
> The main intended use-case for _memlzcpyf() being used for match-copying in something like my LZ4 decoder, where one may pad the decode buffer by an extra 32 bytes.
> Also, my RP2 decoder works in a similar way.

Of course it makes sense to do that, on targets where an alignment of 1 is safe and efficient.
> Possible:
>   __MINALIGN_type__   // minimum allowed alignment for type

_Alignof(type) has been around since C11.

> _Alignof tells the native alignment, not the minimum.

It is the same thing.

> Not necessarily, it wouldn't make sense for _Alignof to return 1 for all the basic integer types.
> But, for "minimum alignment" it may make sense to return 1 for anything that can be accessed unaligned.

Again, I see no use for this. For what purpose?

> The point of __MINALIGN_type__ would be:
> Where, _Alignof(int32_t) will give 4, but __MINALIGN_INT32__ would give 1 if the target supports misaligned pointers.
The alignment of types in C is given by _Alignof. Hardware may support unaligned accesses - C does not. (By that, I mean that unaligned accesses are UB.)

> If the compiler defines it, and it is defined as 1, then this allows the compiler to be able to tell the program that it is safe to use this type in an unaligned way.
> This also applies to targets where some types are unaligned but others are not:
> Say, if all integer types 64 bits or less are unaligned, but 128-bit types are not.

For what purpose? And why do you want to worry about totally hypothetical systems?
> Most of this is being compiled by BGBCC for a 50 MHz CPU.
> So, the CPU is slow and the compiler doesn't generate particularly efficient code unless one writes it in a way it can use effectively.
> Which often means trying to write C like it was assembler and manually organizing statements to try to minimize value dependencies (often caching any values in variables, and using lots of variables).
> In this case, the equivalent of "-fwrapv -fno-strict-aliasing" is the default semantics.
> Generally, MSVC also responds well to a similar coding style as used for BGBCC (or, as it more happened, the coding styles that gave good results in MSVC also tended to work well in BGBCC).

Note that MSVC most certainly does /not/ work like "gcc -fwrapv" - signed integer overflow is UB in MSVC, and it generates code that assumes it never happens. There is an obscure officially undocumented (or documented unofficially, if you prefer) flag to turn off such optimisations.