antispam@fricas.org (Waldek Hebisch) writes:
> Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
>> If yes, a few years down the road your prospective customers have to
>> decide whether to go for your newfangled architecture or one of the
>> established ones. They learn that a number of programs work
>> everywhere else, but not on your architecture. How many of them will
>> be placated by your reasoning that these programs are not strictly
>> conforming standard programs? How many will be alarmed by your
>> admission that you find it ok that such programs don't work on your
>> architecture? After all, hardly any program is a strictly conforming
>> standard program.
>
> Such things happened many times in the past. AFAIK the standard
> setup on a VAX was that accessing data at address 0 gave you 0.
> A lot of VAX programs needed fixes to run on different machines.
That case is interesting. It's certainly a benefit to programmers if
most uses of NULL produce a SIGSEGV, but for existing programs a
mapping that makes page 0 accessible is an advantage. So how did we
get from there to where we are now?
First, my guess is that the VAX is only called out because it was so
popular, and it was one of the first Unix machines where doing it
differently was possible. I am sure that earlier Unix targets without
virtual memory used memory starting with address 1 because they would
otherwise have wasted precious memory.
Anyway, once we had virtual memory, whether to use the start of the
address space is not an issue of the ABI (which is hard to change),
but could be determined by programmers on linking. I guess that at
first they used explicit options for making the first page
inaccessible, and these options soon became the defaults. By the time
I started with Unix in the later 1980s, that battle was over; I
certainly never experienced it as an issue, and only read about it in
papers on VAXocentrism.
> I remember an issue with writing to strings: early C compilers
> put literal strings in writable memory and programs assumed that
> they could change strings.
gcc definitely had an option for that. Again not an ABI issue, but
one that can be controlled by programmers on compilation.
> C 'errno' was made more abstract due
> to multithreading; it broke some programs.
That's pretty similar to an ABI issue (not sure if errno is in the
ABIs or not). And the really perverse thing is that raw Unix and
Linux system calls have been thread-safe from the start. It's only
the limitations of the C language in early times (no struct returns,
bringing us back to the topic of the thread) that gave us the errno
variable in the C wrappers of these system calls that turned out not
to be thread-safe and led to problems later.
> Concerning varargs,
> PowerPC and later AMD64 used calling conventions incompatible
> with popular expectations.
I did not experience calling convention problems on PowerPC in my
software, so apparently it was compatible with my expectations.
Still, Power(PC) is very niche. I recently talked to someone who
worked a lot on Power while he was at IBM (now he no longer works for
IBM); I asked him why people are buying Power, and he said something
along the lines that IBM is satisfying a base of established
customers. Maybe Power would be more popular if it had had a calling
convention compatible with popular expectations, but probably not.
As for AMD64, whatever popular expectations it may have been
incompatible with (again, I experienced no problems), the user could
fall back to the IA-32 calling convention (i.e., compile the program
as a 32-bit program, or just run the existing 32-bit binary),
providing an easy workaround for ABI problems for existing, working
programs.
> Concerning customers, they will tolerate a lot of things, as long
> as there are benefits (faster
Didn't work out for Alpha.
> or cheaper machines,
People are abandoning PCs in favour of Raspis? Does not look that way
to me.
> better security,
Oh, really? Which machine became a success because of better security?
> etc.) and fixes require a reasonable amount of work.
Many customers expect a machine that's compatible with their legacy
software, and are not willing (or at all able) to "fix" it. Many even
require machines that are officially supported by the software vendor.
And for a software vendor, the need for one fix is probably a sign
that the platform is not as compatible as they would like, and that
qualifying that platform requires more work, and they will charge that
work to the platform's customers.
- anton
-- 'Anyone trying for "industrial quality" ISA should avoid undefined behavior.' Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>