Subject: Re: The integral type 'byte' (was Re: Suggested method for returning a string from a C program?)
From: david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups: comp.lang.c
Date: 26. Mar 2025, 11:10:54
Organisation : A noiseless patient Spider
Message-ID : <vs0jrf$1hb4h$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 25/03/2025 19:18, Janis Papanagnou wrote:
> On 25.03.2025 10:38, David Brown wrote:
>>
>> Personally, I think [...]
>
> (I'll skip most of that in your post.)
>
>> Thus pretty much any programmer in the last 50 years sees "byte" as
>> synonymous with 8-bit octet, including C programmers,
> Be careful if you are not speaking for yourself, and especially if
> you extrapolate to such a lengthy period of time.
>
> 50 years ago was 1975 (and about the time I wrote my first programs).
> And it was even some years later that I programmed on CDC 175 or 176,
> a machine with a word length of 60 bit, 6 bit characters and Pascal's
> 'text' data type was a 'packed array [1..10] of character'. (Just to
> give an example.) Computer scientists generally had a much broader
> view back these days.
>
> If you'd have said 40 years ago, about the time when MS DOS systems
> got popular, I would have agreed about the prevalent opinion. OTOH,
> with all this populism a lot of quality degradation entered the IT
> scenery (at least, as far as my observation goes); things were not
> taken as accurately as would have been appropriate.
OK, let's say 40 years ago. But even by 1975 it was clear that 8-bit groupings were dominant, and that other sizes would only see use in niche devices or for compatibility with existing older designs. Basically, the future of programming belonged to microprocessors, and they used 8-bit bytes (except for the smallest embedded devices, with their 4-bit nibbles).
>> and for the last
>> 30 years or so it has been the ISO standard definition of the term.
>
> I suppose you meant the "ISO _C_ standard definition"?
No, I meant ISO standards: ISO/IEC 2382, IEC 60027, ISO/IEC 80000. These are the standards that define terms and units for science, mathematics and engineering - including computing. They are how we know that when you buy 1 GiB of RAM, you get 1024^3 bytes, with each byte consisting of 8 bits. They are also why the ISO standards for other programming languages don't have to define what they mean by "byte" - it's only languages that deviate from the ISO definitions that need to be explicit, because they use the term in a different way from its commonly accepted meaning. (To be clear here - the ISO C90 standard predates these ISO quantities standards.)
> I'm asking because I was in my post already referring to international
> standards (ISO, CCITT/ITU-T, etc.) that have defined 'octet' for the
> purpose of unambiguously identifying an 8 bit entity. The 'octet' went
> into the ASN.1 protocol standard notation (that you will now also find
> in IETF's RFC standards).
Yes, "octet" (or "octad") was used in some contexts before "byte" was standardised at 8 bits, and it was standardised in communications and networking documents before "byte" was. It is still used in such contexts, due to historical momentum - no one is going to revise all the RFCs to change the wording.
But the fact that "octet" was standardised as a term for 8 bits before "byte" was does not change the fact that "byte" too was standardised as 8 bits - in common computing usage by at least 40 years ago (though I still think 50 years is reasonable), and in official international standards by at least 30 years ago.