Subject : Re: Operator precedence
From : janis_papanagnou+ng (at) *nospam* hotmail.com (Janis Papanagnou)
Newsgroups : comp.lang.awk
Date : 31. May 2024, 16:46:30
Organization : A noiseless patient Spider
Message-ID : <v3crcn$29n7m$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0
On 30.05.2024 10:17, Axel Reichert wrote:
> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
>> Are you saying that the order
>>   unary minus
>>   exponentiation
>>   binary minus
>> is somehow "wrong"?
> It certainly feels like this to me. When thinking about the "why", I
> could see two "arguments":
>
> 1. Unary and binary minus, while not identical operators, at least
> visually are identical. Hence it reduces the cognitive load on the
> reader of source code, who does not need to remember that unary minus
> and binary minus are "on opposite sides of the 'exponentiation fence'".
I sort this "argument" (quotes borrowed from your text) into the same
category as "[old] mathematical convention". IMO it doesn't withstand
even the slightest analysis. To explain, let me go back to "Computer
Science 101" as taught in the 1980's (and probably even before)...
You model expressions with Kantorovic trees (using the homonym '-' here)

      a   b        c
       \ /         |
       '-'        '-'
but what I depicted are two different operations, as can plainly be
seen. Their evaluation in a stack automaton happens as

  push(a); push(b); subtract()    and    push(c); negate()

respectively. If there were no distinction and we had, say, a single
'minus()' operation, there would be no indication of how to reduce that
part of the expression. The glyph alone tells you nothing in an area
(mathematics, but also CS) that is by now heavily overloaded with
symbols.
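The two reductions can be sketched as a toy stack machine. A minimal
illustration in Python (the names push/subtract/negate follow the text
above; the operand values are made up):

```python
# Toy stack machine distinguishing binary subtract() from unary negate().
stack = []

def push(x):
    stack.append(x)

def subtract():
    # binary '-': pops TWO operands, pushes their difference
    b = stack.pop()
    a = stack.pop()
    stack.append(a - b)

def negate():
    # unary '-': pops ONE operand, pushes its negation
    stack.append(-stack.pop())

# a - b  (binary minus, with a=10, b=4)
push(10); push(4); subtract()
# -c     (unary minus, with c=3)
push(3); negate()

print(stack)  # -> [6, -3]
```

A single overloaded 'minus()' could not know whether to pop one operand
or two; the distinction has to come from somewhere other than the glyph.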
In CS the situation is even more complex, depending on the types
involved. '**' is a good example here as well; what 'a**b' means
depends on the operand types; try (for example, in Awk)

  BEGIN {
      c = 2 ; d = 2.0 ; e = 2.0000000001 ; f = -2.3
      print f^c, f^d, f^e
  }
There's not one '**' function but actually two under one name

  exp: numeric x int  -> numeric
  exp: numeric x real -> real
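The same type-dependence shows up in other languages' standard
libraries. A small Python sketch using math.pow (which, like Awk's '^',
ultimately rests on C's pow(3)) illustrates it; the values mirror the
Awk snippet above. Note that Awk itself typically prints nan for the
last case instead of raising an error:

```python
import math

# Integer-valued exponent on a negative base: well defined in the reals.
print(math.pow(-2.3, 2))    # approximately 5.29

# Non-integer exponent on a negative base: no real-valued result.
try:
    math.pow(-2.3, 2.0000000001)
except ValueError as err:
    print("domain error:", err)
```

So under one name ('^', '**', pow) hide operations with genuinely
different domains, exactly as the two signatures above suggest.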
> 2. Mathematical programming is algorithms (often beautifully typeset
> with (La)TeX) transformed into more mundane ASCII-text representations
> (which I see as a technical limitation). Using different operator
> precedence depending on the visual representation is certainly confusing
> if not dangerous (error-prone coding). Also, I do not see the benefit of
> deviating from the mathematical convention.
In the first part you seem to just repeat your "argument" 1. from
above. Deviating from conventions makes sense where appropriate. (This
had been discussed with examples, and I think it's worth pondering
about.) I'm glad that the "Burning Witches" convention has been
superseded, and that "Newton" got fixed by Relativity and Quantum
Mechanics. :-)
Janis
> With the rather new trend to use UTF characters in source code (lambda
> and other Greek letters, arrows) the visual distinction between printed
> math and written source gets smaller, so it makes less and less sense to
> use different conventions for the two.
>
> Best regards
>
> Axel