Subject : Re: "A diagram of C23 basic types"
From : janis_papanagnou+ng (at) *nospam* hotmail.com (Janis Papanagnou)
Newsgroups : comp.lang.c
Date : 07. Apr 2025, 20:49:02
Organisation : A noiseless patient Spider
Message-ID : <vt1a7f$i5jd$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0
On 07.04.2025 19:30, candycanearter07 wrote:
> Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
>> On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
>>>
>>> Here, tell me at a glance the magnitude of this number:
>>>
>>> 10000000000
>>
>> #define THOUSAND 1000
>> #define MILLION (THOUSAND * THOUSAND)
>> #define BILLION (THOUSAND * MILLION)
>>
>> uint64 num = 10 * BILLION;
>>
>> Much easier to figure out, don’t you think?
> Yes, where appropriate that's fine.
> But that pattern doesn't work for numbers like 299792458 [m/s]
> (i.e. in the general case, as opposed to primitive scalars).
> And it's also not good for international languages (different
> from US English and the like), where "billion" means something
> else (namely 10^12, not 10^9), so its semantics isn't
> unambiguously clear in the first place.
> And sometimes you have large numeric literals and don't want
> to add such CPP ballast just for readability; especially if
> there is (or would be) a standard number grouping for literals
> available.
> So it's generally a gain to have a grouping syntax available.
> I used to do a bit of code for a codebase that did that with SECONDS and
> MINUTES since (almost) every "time" variable was in milliseconds, and it
> was very nice. That is just my subjective opinion, though. :P
That actually depends on what you do. Milliseconds were (for our
applications) often either not a good enough resolution or, on a
larger scale, unnecessary or reducing the available range.
Quasi "norming" an integral value to represent milliseconds I
consider especially bad, though not as bad as units of 0.01 s
(which I think I have met in JavaScript). I also seem to recall
that MS-DOS had such arbitrary sub-second units, but I'm not quite
sure about that any more.
A better unit is, IMO, a resolution of one second (which at least
is a basic physical unit), with a separate integer for sub-seconds.
(An older Unix I used supported the not uncommon nanoseconds
attribute, but only milli- and microseconds were used; the rest
was 0.)
Or have an abstraction layer that hides all implementation details,
so you don't have to care any more about the internals of such
"time types".
> it was more like
>
> #define SECONDS *10
> #define MINUTES SECONDS*60
> #define HOURS MINUTES*60
>
> , though. Probably would be more notably annoying to debug in weird
> cases if the whole language/codebase wasn't borked spaghetti :D
Janis