Michael S <already5chosen@yahoo.com> writes:

> On Tue, 25 Mar 2025 05:02:45 -0700
> Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
>
>> Michael S <already5chosen@yahoo.com> writes:
>>
>>> On Tue, 25 Mar 2025 08:39:04 -0000 (UTC)
>>> Kaz Kylheku <643-408-1753@kylheku.com> wrote:
>>>
>>>> On 2025-03-25, Janis Papanagnou <janis_papanagnou+ng@hotmail.com>
>>>> wrote:
>>>>
>>>>> On 25.03.2025 05:56, Tim Rentsch wrote:
>>>>>
>>>>>> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>>>>>>
>>>>>>> [...]
>>>>>>>
>>>>>>> When I started with "C" or C++ there were not only 8-bit
>>>>>>> multiples defined for the integral types; [...]
>>>>>>
>>>>>> In C the correct phrase is integer types, not integral types.
>>>>>
>>>>> My apologies if I'm using language independent terms. I'm
>>>>> confident, though, that most people (obviously you as well)
>>>>> understood the term.
>>>>
>>>> You are 100% correct. You made it clear that you're referring
>>>> to a time /when you started with C/. I remember from past
>>>> discussions that this was sufficiently long ago that it was ISO
>>>> C90 or ANSI C, if not earlier.
>>>>
>>>> In ISO 9899:1990, we have this:
>>>>
>>>> 6.1.2.5 Types
>>>>
>>>> [...]
>>>>
"The type char, the signed and unsigned integer types. and the
enumerated types are collectively called integral types."
^^^^^^^^
>
The integral types were renamed between C90 and C99. However,
"integral types" remains part of C history. C90 is a still valid,
historic and historically significant dialect of C.
>
Even today, it is misleading to say that "integral types"
is an incorrect way to talk about C. It's a terminology that
has been formally superseded since C90. However, it is a term
used in computer science and mathematics, and fine for informal
discussions that don't revolve around language-lawyering.
>
The word has two pronunciations in English. When the emphasis is
on the first syllalble: IN-tgrl, it is a noun which refers to the
opposite of a calculus derivative. The integral of x^2 from 0 to
1, etc. in-TE-gral is an adjective, which is is a common
words---it's an integral part of everyday English, meaning
indivisible from. In math and CS it is used for indicating that
some quantity is in Z.
>>>
>>> Wouldn't the term 'whole numbers' be preferred in everyday English?
>>
>> "Whole numbers" are all non-negative.
>>
>> "Integers" include values less than zero.
>
> Sounds like English everyday use differs from two other languages that
> I know relatively well in both of which "whole" numbers include
> negatives.

My native language is US English. I have taken classes
in two other languages, but don't know either well
enough to say how "whole number" is understood in them.

When I was in grade school, I was taught that "natural
numbers" are numbers like 1, 2, 3, ... Shortly after, and
probably the same day, I was taught that "whole numbers"
are the natural numbers plus zero. Someone who is a
contemporary of mine told me recently that there was a
mnemonic: "whole" numbers include zero because of the O
(letter oh) in "whole".

This question came up recently with another friend, who was
in grade school (in the US) at roughly the same time that
I was. Talking with him, I learned that more recent usage,
at least some more recent usage, uses "whole numbers" for
integers starting at 1, and "natural numbers" for integers
starting at 0.

I have never heard anyone who grew up speaking US English
use "whole number" to include the possibility of negative
numbers. In fact I don't remember anyone using "whole number"
to include negative numbers, no matter what their native
language is (or native languages are). It occurs to me
that there is someone I could ask, if I knew how to get
in touch, and get a reliable answer - the most polyglot
person I have ever met (more than two dozen languages).
But I can't do that anytime soon.

I should note that "whole numbers" was taught as a compound
noun, in the same way that "natural numbers" was taught as
a compound noun, and sometimes the phrase "counting numbers"
was used as a compound noun, meaning the numbers used to
count, starting at one. Note also that these compound nouns
are always used in the plural: "natural numbers", "whole
numbers", and "counting numbers". The leading word is not
being used as an adjective, but just as a way of indicating
which set a number belongs to.

Returning to the original question, the point is that,
when considered as adjectives, "integer" and "integral"
mean very different things. The function sin is a
real-valued function, so although sin(pi) yields an
integral value (it equals zero), it does not yield
an integer value. "Integer" used as an adjective is about what
/kind/ of number is being considered, whereas "integral"
used as an adjective is about what /value/ is produced.
An _integer_ expression always yields an _integral_ value,
but an expression that produces an _integral_ value is not
necessarily an _integer_ expression - it could be a real
expression, or even a complex expression, that just happens
to yield a value that is equal to an integer.
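
To put that distinction in C terms, a rough sketch (purely
illustrative, nothing here is taken from the standard): the first
expression below is an integer expression, the second is a double
expression whose value merely happens to be integral.

#include <stdio.h>
#include <math.h>

int main(void)
{
    int    i = 3 * 4;      /* integer expression: integer kind of value   */
    double d = 6.0 / 2.0;  /* double expression: real kind of value, but
                              the value 3.0 happens to be integral        */

    printf("i = %d\n", i);                     /* prints 12  */
    printf("d = %.1f\n", d);                   /* prints 3.0 */
    printf("d is integral? %s\n",
           floor(d) == d ? "yes" : "no");      /* prints yes */
    return 0;
}

Both i and d end up holding whole-number values, but only the first
comes from an integer expression; the second is a real expression
whose value happens to equal an integer.
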
All of the foregoing represents my best understanding of how
these words are used, in learned mathematical discourse.
And I believe the same understanding underlies the decision
to change "integral" in C90 to "integer" in C99.