On 14/09/2024 08:34, BGB wrote:
Go and try to write C with variables not declared at the start of a block in VS2008 or similar and see how far you get...
On 9/13/2024 10:30 AM, David Brown wrote:
On 12/09/2024 23:14, BGB wrote:
On 9/12/2024 9:18 AM, David Brown wrote:
On 11/09/2024 20:51, BGB wrote:
On 9/11/2024 5:38 AM, Anton Ertl wrote:
Josh Vanderhoof <x@y.z> writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>
<snip lots>
> Though, generally takes a few years before new features become usable.
> Like, it is only in recent years that it has become "safe" to use most parts of C99.
Nonsense. Most of the commonly used parts of C99 have been "safe" to use for 20 years. There were a few bits that MSVC did not implement until relatively recently, but I think even they have caught up now.
>
Until VS2013, the most one could really use was:
// comments
long long
Otherwise, it was basically C90.
'stdint.h'? Nope.
Ability to declare variables wherever? Nope.
...
MS basically gave up on C and concentrated on C++ (then later C# and other languages). Their C compiler gained the parts of C99 that were in common with C++ - and anyway, most people (that I have heard of) using MSVC for C programming actually use the C++ compiler but stick approximately to a C subset. And this has been the case for a /long/ time - long before 2013.
> It compiles a lot of the code I throw at it, though occasionally steps on holes or bugs.
That I believe.
After this, it was piecewise.
Though, IIRC, still no VLAs or similar.
>
> There are only two serious, general purpose C compilers in mainstream use - gcc and clang, and both support almost all of C23 now. But it will take a while for the more niche tools, such as some embedded compilers, to catch up.
Clearly your own compiler will only support the bits of C that you implement. But I am not sure that it counts as a "serious, general purpose C compiler in mainstream use" - no offence implied!
>
<stdbit.h> is, however, in the standard library rather than the compiler, and they can be a bit slow to catch up.
>
FWIW:
I had been adding parts of newer standards in my case, but it is more hit or miss (mostly adding parts as they seem relevant).
>
It also depends on what one considers optimizing.
Which target would that be? Excluding personal projects, some very niche devices, and long-outdated small CISC chips, there really aren't many devices that don't have a GCC and clang port. Of course there /are/ processors that gcc does not support, but almost nobody writes code that has to be portable to such devices.
> Whether or not the target/compiler allows misaligned memory access; if set, one may use misaligned access.
Why would you need that? Any decent compiler will know what is allowed for the target (perhaps partly on the basis of compiler flags), and will generate the best allowed code for accesses like foo3() above.
>
Imagine you have compilers that are smart enough to turn "memcpy()" into a load and store, but not smart enough to optimize away the memory accesses, or fully optimize away the wrapper functions...
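The kind of wrapper in question can be sketched like this (my own illustration; the function names are made up):

```c
#include <stdint.h>
#include <string.h>

/* Read/write a 32-bit value (native byte order) at a possibly
 * unaligned address. The memcpy is defined behavior for any alignment;
 * a good compiler inlines it to a single load/store on targets that
 * allow misaligned access, while a naive one may emit a real call. */
static uint32_t get_u32(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

static void set_u32(void *p, uint32_t v)
{
    memcpy(p, &v, sizeof v);
}
```

With gcc or clang at -O1 and above, these typically compile to a single move instruction on x86-64 or AArch64; the complaint above is about compilers that leave the call (or the wrapper itself) in place.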
>
Why would I do that? If I want to have efficient object code, I use a good compiler. Under what realistic circumstances would you need to have highly efficient results but be unable to use a good optimising compiler? Compilers have been inlining code for 30 years at least (that's when I first saw it) - this is not something new and rare.
>
Say, you are using a target where you can't use GCC or similar.
And as for optimising compilers, I used at least two different optimising compilers in the mid nineties that inlined code automatically, before using gcc. (I can't remember if they inlined memcpy - it was a long time ago!). Optimising compilers are not a new concept, and are not limited to gcc and clang.
It would be quite ridiculous to limit the way you write code because of possible limitations for non-existent compilers for target devices that have never been made.
Hitachi did release an ISA spec for SH-5 at least (and it might have worked OK, if Renesas had pushed "upwards" rather than focusing almost exclusively on the small embedded / microcontroller space).
Say:
BJX2, haven't ported GCC as it looks like a pain;
Also GCC is big and slow to recompile.
>
6502 and 65C816, because these are old and probably not worth the effort from GCC's POV.
>
Various other obscure/niche targets.
>
>
Say, SH-5, which never saw a production run (it was a 64-bit successor to SH-4), but seemingly around the time Hitachi spun out Renesas, the SH-5 essentially got canned. And, it apparently wasn't worth it for GCC to maintain a target for which there were no actual chips (by comparison, the SH-2 and SH-4 lived on a lot longer due to having niche uses).
>
> So, for best results, the best option is to use a pointer cast and dereference.
I still cannot see any situation where it would be relevant. If I need to read 4 bytes of memory from an address, and don't know if the address is uint32_t aligned or not, I would use memcpy(). The compiler would know if unaligned 32-bit reads are supported or not for the target, or if it is faster to use them or use byte reads. That's the compiler's job - I'm the programmer, not the micro-manager.
If the compiler is naive (wrt inline memcpy):
For some cases, one may also need to know whether or not they can access the pointers in a misaligned way (and whether doing so would be better or worse than something like "memcpy()").
>
Again, I cannot see a /real/ situation where that would be relevant.
>
I can think of a few.
>
Most often though it is in things like data compression/decompression code, where there is often a lot of priority on "gotta go fast".
>
And if I know that for a particular target there are particular instructions that could be more efficient but are unknown to the compiler (perhaps there are odd SIMD instructions), and it is worth the effort to use them, then I would be writing that code for the specific target. That's target-specific conditional compilation, and I still have no need to know if the target can access misaligned data.
> There is a difference here between "_memlzcpy()" and "_memlzcpyf()" in that:
> the former will always copy an exact number of bytes;
> the latter may write 16-32 bytes over the limit.
It may do /what/? That is a scary function!
This is why the latter has an 'f' suffix (for "fast").
I can accept that there are cases (such as you describe below) where this might be useful, but I would not be identifying it just with an "f".
OK.
>
Tradition dictates that struct members are padded so as to be aligned to their native alignment (usually equal to the size of the base type), unless the struct is 'packed'.
Of course it makes sense to do that, on targets where an alignment of 1 is safe and efficient.
There are cases where it may be desirable to have the function write past the end in the name of speed, and others where this would not be acceptable.
>
Hence why there are 2 functions.
>
>
The main intended use-case for _memlzcpyf() is for match-copying in something like my LZ4 decoder, where one may pad the decode buffer by an extra 32 bytes.
>
Also my RP2 decoder works in a similar way.
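As a rough sketch of what such a "fast" copy does (my own reconstruction, not BGB's actual code - this version copies in 8-byte chunks, so it can write up to 7 bytes past 'len', which is why the decode buffer needs padding):

```c
#include <stddef.h>
#include <string.h>

/* LZ-style match copy: replicate 'len' bytes from an earlier position
 * in the output buffer. Copies in 8-byte chunks and may overshoot the
 * end by up to 7 bytes -- only safe into a padded buffer. Assumes the
 * match distance (dst - src) is at least 8; a real decoder would
 * special-case shorter distances. */
static void lz_copy_fast(unsigned char *dst, const unsigned char *src,
                         size_t len)
{
    size_t i = 0;
    do {
        memcpy(dst + i, src + i, 8);
        i += 8;
    } while (i < len);
}
```

The point of the trade-off: an exact-length byte loop needs a branch per byte, while the chunked version does one test per 8 bytes, at the cost of trampling the padding region.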
>
>
Possible:
__MINALIGN_type__ //minimum allowed alignment for type
_Alignof(type) has been around since C11.
>
_Alignof tells the native alignment, not the minimum.
It is the same thing.
>
Not necessarily, it wouldn't make sense for _Alignof to return 1 for all the basic integer types.
But, for "minimum alignment" it may make sense to return 1 for anything that can be accessed unaligned.
Again, I see no use for this.
For what purpose?
Probably for unaligned derefs on targets where "memcpy()" is a less desirable option (say, if it takes several additional CPU instructions).
The main alternatives:
Where, _Alignof(int32_t) will give 4, but __MINALIGN_INT32__ would give 1 if the target supports misaligned pointers.
>
The alignment of types in C is given by _Alignof. Hardware may support unaligned accesses - C does not. (By that, I mean that unaligned accesses are UB.)
>
The point of __MINALIGN_type__ would be:
If the compiler defines it, and it is defined as 1, then this allows the compiler to be able to tell the program that it is safe to use this type in an unaligned way.
>
This also applies to targets where some types may be accessed unaligned but others may not:
Say, if all integer types 64 bits or less are unaligned, but 128-bit types are not.
For what purpose? And why do you want to worry about totally hypothetical systems?
Note that a lot of what I am describing here is true of BJX2.
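How the proposed macro might be used, as a sketch: __MINALIGN_INT32__ is hypothetical (it is the proposal above, not an existing predefine), and the fallback definition here is my own:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical: a compiler that permits misaligned int32 access would
 * predefine __MINALIGN_INT32__ as 1; on others we fall back to the
 * conservative natural alignment. */
#ifndef __MINALIGN_INT32__
#define __MINALIGN_INT32__ 4
#endif

static int32_t load_i32(const void *p)
{
#if __MINALIGN_INT32__ == 1
    return *(const int32_t *)p;   /* direct deref is safe on this target */
#else
    int32_t v;
    memcpy(&v, p, sizeof v);      /* portable fallback */
    return v;
#endif
}
```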
>
> I haven't seen any issues with MSVC and this sort of code usually works as expected...
Note that MSVC most certainly does /not/ work like "gcc -fwrapv" - signed integer overflow is UB in MSVC, and it generates code that assumes it never happens. There is an obscure officially undocumented (or documented unofficially, if you prefer) flag to turn off such optimisations.
Most of this is being compiled by BGBCC for a 50 MHz CPU.
>
So, the CPU is slow and the compiler doesn't generate particularly efficient code unless one writes it in a way it can use effectively.
>
Which often means trying to write C as if it were assembler, manually organizing statements to try to minimize value dependencies (often caching values in variables, and using lots of variables).
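A small sketch of that style (my own example, not code from the post): the loads are hoisted into locals and split across independent accumulators, so a simple compiler does not serialize every iteration through one register:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum an array whose length is a multiple of 4, caching each load in
 * a local and using four independent accumulators so there is no
 * per-iteration dependency chain through a single sum variable. */
static int64_t sum4(const int32_t *arr, size_t n)
{
    int64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i;
    for (i = 0; i < n; i += 4) {
        int32_t v0 = arr[i + 0];   /* independent loads... */
        int32_t v1 = arr[i + 1];
        int32_t v2 = arr[i + 2];
        int32_t v3 = arr[i + 3];
        s0 += v0; s1 += v1;        /* ...feeding independent adds */
        s2 += v2; s3 += v3;
    }
    return s0 + s1 + s2 + s3;
}
```

A smart compiler does this transformation itself; the point of the style is that a simple in-order target with a naive compiler benefits from having it written out.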
>
>
In this case, the equivalent of "-fwrapv -fno-strict-aliasing" is the default semantics.
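Where code also has to run under compilers that do treat signed overflow as UB, the wrapping behavior that "-fwrapv" provides can be obtained portably by routing the arithmetic through unsigned - a sketch (my own; the conversion back to int32_t is, by the letter of the standard, implementation-defined, but it wraps on all mainstream compilers):

```c
#include <stdint.h>

/* Wrapping 32-bit add: unsigned arithmetic is defined to wrap mod 2^32,
 * so this yields what 'a + b' gives under -fwrapv (or under wrapping
 * default semantics) without invoking signed-overflow UB. */
static int32_t wrap_add32(int32_t a, int32_t b)
{
    return (int32_t)((uint32_t)a + (uint32_t)b);
}
```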
>
Generally, MSVC also responds well to a similar coding style as used for BGBCC (or, as it more happened, the coding styles that gave good results in MSVC also tended to work well in BGBCC).
>
Last I read about it, they had no plans to do any type-based alias analysis, but nor did they rule out the possibility in the future.