On 12/09/2024 23:14, BGB wrote:
> On 9/12/2024 9:18 AM, David Brown wrote:
>> On 11/09/2024 20:51, BGB wrote:
>>> On 9/11/2024 5:38 AM, Anton Ertl wrote:
>>>> Josh Vanderhoof <x@y.z> writes:
>>>>> anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
<snip lots>

Until VS2013, the most one could really use was:
>
Good idea.

> Would be nice, say, if there were semi-standard compiler macros for various things:
Ask, and you shall receive! (Well, sometimes you might receive.)
> Endianness (macros exist, typically compiler specific);
And, apparently GCC and Clang can't agree on which strategy to use.
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
...
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
...
#else
...
#endif
>
Works in gcc, clang and MSVC.
>
Technically now also in BGBCC, since I have just recently added it.
> Most of the commonly used parts of C99 have been "safe" to use for 20 years. There were a few bits that MSVC did not implement until relatively recently, but I think even MSVC has caught up now.
And C23 has the <stdbit.h> header with many convenient little "bit and byte" utilities, including endian detection:
>
#include <stdbit.h>
#if __STDC_ENDIAN_NATIVE__ == __STDC_ENDIAN_LITTLE__
...
#elif __STDC_ENDIAN_NATIVE__ == __STDC_ENDIAN_BIG__
...
#else
...
#endif
>
This is good at least.
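For toolchains that provide neither set of macros, a runtime check can stand in. This is just a minimal sketch, and `is_little_endian` is a made-up name:

```c
#include <stdint.h>
#include <string.h>

/* Runtime fallback: inspect the lowest-addressed byte of a known value.
   Works anywhere, at the cost of not being usable in #if directives. */
static int is_little_endian(void)
{
    uint32_t x = 1;
    unsigned char b;
    memcpy(&b, &x, 1);      /* read the first byte in memory */
    return b == 1;          /* 1 on little-endian, 0 on big-endian */
}
```

Any halfway decent optimiser folds this to a constant, so it costs nothing at runtime; it just cannot select code at preprocessing time the way the macros can.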
>
Though, it generally takes a few years before new features become usable.
Like, it is only in recent years that it has become "safe" to use most parts of C99.
>
There are only two serious, general purpose C compilers in mainstream use - gcc and clang, and both support almost all of C23 now. But it will take a while for the more niche tools, such as some embedded compilers, to catch up.

FWIW:
<stdbit.h> is, however, in the standard library rather than the compiler, and they can be a bit slow to catch up.
Say, you are using a target where you can't use GCC or similar.

Why would I do that? If I want to have efficient object code, I use a good compiler. Under what realistic circumstances would you need to have highly efficient results but be unable to use a good optimising compiler? Compilers have been inlining code for 30 years at least (that's when I first saw it) - this is not something new and rare.

> Whether or not the target/compiler allows misaligned memory access;
If set, one may use misaligned access.
Why would you need that? Any decent compiler will know what is allowed for the target (perhaps partly on the basis of compiler flags), and will generate the best allowed code for accesses like foo3() above.
>
Imagine you have compilers that are smart enough to turn "memcpy()" into a load and store, but not smart enough to optimize away the memory accesses, or fully optimize away the wrapper functions...
>
I can think of a few.

So, the best option is to use a pointer cast and dereference.

Again, I cannot see a /real/ situation where that would be relevant.
>
For some cases, one may also need to know whether or not they can access the pointers in a misaligned way (and whether doing so would be better or worse than something like "memcpy()").
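The memcpy() idiom in question looks like this; a minimal sketch, with `load_u32` as a made-up helper name:

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit value (host byte order) from a possibly misaligned
   address.  On targets that permit misaligned access, compilers lower
   this memcpy to a single load; elsewhere they emit byte loads, so the
   same source stays correct either way. */
static uint32_t load_u32(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```

This is the portable counterpart to the pointer-cast-and-dereference approach: no undefined behaviour, and no need for a misalignment feature macro, provided the compiler optimises the copy away.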
>
It is part of my C library, but also used for LZ decompression, which is used quite extensively.

If this is something for your library for your compiler, then of course you are free to do anything you want here - standard library code does not need to be portable, but is free to use any kind of compiler "magic" it likes. (For example, gcc has lots of builtins and extensions that are not targeted at normal code, but are targeted specifically at library writers.)

> Whether or not memory uses a single address space;
If set, all pointer comparisons are allowed.
Pointer comparisons are always allowed for equality tests if they are pointers to objects of compatible types. (Function pointers may only be compared for equality, never ordered.)
>
For other relational tests, the pointers must point to sub-objects of the same aggregate object. (That means they can't be null pointers, misaligned pointers, invalid pointers or pointers going nowhere.) This is independent of how the address space(s) are organised on the target machine.
>
What you /can/ do, on pretty much any implementation with a single linear address space, is convert pointers to uintptr_t and then compare them. There may be some targets for which there is no uintptr_t, or where the mapping from pointer to integer does not match with the address, but that would be very unusual.
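That uintptr_t idiom can be sketched as follows, under the stated assumption of a single linear address space whose integer mapping preserves address order; `ptr_in_range` is a made-up helper name:

```c
#include <stdint.h>

/* Ordered comparison of arbitrary pointers via uintptr_t.  The C
   relational operators on the raw pointers would be UB unless both
   point into the same object; the integer comparison is always defined,
   though what it means depends on the implementation. */
static int ptr_in_range(const void *p, const void *lo, const void *hi)
{
    uintptr_t up  = (uintptr_t)p;
    uintptr_t ulo = (uintptr_t)lo;
    uintptr_t uhi = (uintptr_t)hi;
    return up >= ulo && up < uhi;
}
```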
>
I can't think when you would need to do such comparisons, however, other than to implement memmove - and library functions can use any kind of implementation-specific feature they like.
>
Yeah.
>
My "_memlzcpy()" functions do a lot of relative comparisons (more than needed for memmove):
dst <= src:       memmove
(dst-src) >= sz:  memcpy
(dst-src) >= 32:  can copy with 32B blocks
(dst-src) >= 16:  can copy with 16B blocks
(dst-src) >=  8:  can copy with 8B blocks
1/2/4:            generate a full-block fill pattern
3/5/6/7:          partial fill pattern (16B block with irregular step)
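The essence of that distance dispatch can be sketched as below. `lz_match_copy` is a made-up name, and this simplified version only distinguishes the overlapping and non-overlapping cases; the real "_memlzcpy()" presumably handles the small distances with wide-block fill patterns as listed above:

```c
#include <stddef.h>
#include <string.h>

/* LZ match copy: src points back into already-written output, so
   dst > src and the regions may overlap.  If the match distance is at
   least the length, a plain memcpy is safe; otherwise a byte-at-a-time
   copy naturally replicates the repeating pattern (e.g. distance 1
   produces a run fill). */
static void lz_match_copy(unsigned char *dst, const unsigned char *src, size_t n)
{
    size_t dist = (size_t)(dst - src);
    if (dist >= n) {
        memcpy(dst, src, n);        /* no overlap within n bytes */
    } else {
        while (n--)
            *dst++ = *src++;        /* overlapping: pattern replication */
    }
}
```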
>
There is a difference here between "_memlzcpy()" and "_memlzcpyf()" in that:
the former will always copy an exact number of bytes;
the latter may write 16-32 bytes past the limit.
This is why the latter has an 'f' extension (for "fast").

It may do /what/ ? That is a scary function!
It is the same thing.

Not necessarily; it wouldn't make sense for _Alignof to return 1 for all the basic integer types. But, for "minimum alignment", it may make sense to return 1 for anything that can be accessed unaligned.
Possible:
__MINALIGN_type__ //minimum allowed alignment for type
_Alignof(type) has been around since C11.
>
_Alignof tells the native alignment, not the minimum.
> The alignment of types in C is given by _Alignof. Hardware may support unaligned accesses - C does not. (By that, I mean that unaligned accesses are UB.)

The point of __MINALIGN_type__ would be:
Where, _Alignof(int32_t) will give 4, but __MINALIGN_INT32__ would give 1 if the target supports misaligned pointers.
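Code could use such a macro with a conservative fallback. Note that `__MINALIGN_INT32__` here is the hypothetical macro proposed in this thread, not something any real compiler defines:

```c
#include <stdint.h>
#include <stdalign.h>

/* Hypothetical: prefer the (made-up) __MINALIGN_INT32__ macro if the
   compiler provides it; otherwise fall back to the native alignment,
   which is always a safe assumption. */
#ifdef __MINALIGN_INT32__
#  define INT32_MINALIGN __MINALIGN_INT32__
#else
#  define INT32_MINALIGN alignof(int32_t)   /* conservative fallback */
#endif
```

On a compiler without the extension this collapses to the ordinary C11 answer (typically 4 for int32_t); on one that defines it, the code could opt into misaligned-pointer paths.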
>
Most of this is being compiled by BGBCC for a 50 MHz CPU.

> It may look simpler in the code to do this kind of thing, but it is not /necessary/ and it is not safe unless you are writing non-portable code and are sure it will only be used on a compiler that supports it. Thus the Linux kernel requires "-fno-strict-aliasing", because some of the Linux kernel authors write crap C code. (Or, to be a bit fairer, some of the code in the Linux kernel is very old and comes from a time when writing things correctly while generating efficient results would need more effort.)
Maybe also alias pointer control:
__POINTER_ALIAS__
__POINTER_ALIAS_CONSERVATIVE__
__POINTER_ALIAS_STRICT__
>
Where, pointer alias can be declared, and:
If conservative, then conservative semantics are being used.
Pointers may be freely cast without concern for pointer aliasing.
Compiler will assume that "non restrict" pointer stores may alias.
If strict, the compiler is using TBAA semantics.
Compiler may assume that aliasing is based on pointer types.
>
Faffing around with pointer types - breaking the "effective type" rules - has been a bad idea and risky behaviour since C was standardised. You never need to do it. (I accept, however, that on some weaker or older compilers "doing the right thing" can be noticeably less efficient than writing bad code.) Just get a half-decent compiler and use memcpy(). For any situation where you might think casting pointer types would be a good idea, your sizes are small and known at compile time, so they are easy for the compiler to optimise.
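For the classic case - reading the bit pattern of a float - the memcpy() alternative to a pointer cast looks like this. `float_bits` is a made-up helper name, and the expected values assume IEEE 754 single precision with a 4-byte float:

```c
#include <stdint.h>
#include <string.h>

/* Well-defined type punning: no effective-type violation, and modern
   compilers compile the fixed-size memcpy down to a plain register
   move, so it costs nothing compared to the UB pointer cast. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* assumes sizeof(float) == 4 */
    return u;
}
```

The cast version, `*(uint32_t *)&f`, breaks the effective-type rules and can miscompile under "-fstrict-aliasing"; this version cannot.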
>
It depends.
>
In some things, like my ELF and PE/COFF program loaders, the code can get particularly nasty in these areas...
And as a general rule, if you feel you really want to break the rules of C and still get something useful out at the end, use "volatile" liberally.>
>
I have used "volatile" here to good effect.