Subject : Instruction Tracing
From : ldo (at) *nospam* nz.invalid (Lawrence D'Oliveiro)
Groups : comp.arch
Date : 10 Aug 2024, 07:20:51
Organisation : A noiseless patient Spider
Message-ID : <v970s3$flpo$1@dont-email.me>
User-Agent : Pan/0.159 (Vovchansk; )
In the early days of the spread of RISC (i.e. the 1980s), much was made of
the analysis of the dynamic execution profiles of actual compiled programs
to see what machine instructions they most frequently used. This then
became the rationale for optimizing the most common instructions, and even
for omitting those that were rarely used.
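To make the idea concrete, here is a minimal sketch of the sort of tally
those trace studies produced. It is mine, not from any particular paper,
and the trace format (one executed instruction per line, mnemonic as the
first token) is assumed:

    /* Count dynamic opcode frequencies in an instruction trace read
       from stdin. Hypothetical trace format: one executed instruction
       per line, mnemonic as the first whitespace-separated token. */
    #include <stdio.h>
    #include <string.h>

    struct tally { char mnemonic[16]; unsigned long count; };

    int main(void)
      {
        static struct tally ops[256];   /* zero-initialized counts */
        int nops = 0;
        char line[128], mnem[16];

        while (fgets(line, sizeof line, stdin) != NULL)
          {
            if (sscanf(line, "%15s", mnem) != 1)
                continue;               /* skip blank lines */
            int i;
            for (i = 0; i < nops; i++)
                if (strcmp(ops[i].mnemonic, mnem) == 0)
                    break;
            if (i == nops)              /* first time this opcode seen */
              {
                if (nops == 256)
                    continue;           /* table full; ignore the rest */
                strcpy(ops[nops].mnemonic, mnem);
                nops++;
              }
            ops[i].count++;
          }
        for (int j = 0; j < nops; j++)
            printf("%-10s %10lu\n", ops[j].mnemonic, ops[j].count);
        return 0;
      }

Sort the output by count, and the sort of result those studies reported
falls out: loads, stores, branches and simple ALU ops dominate, with
multiply and divide far down the list.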
One thing these instruction traces would frequently report is that integer
multiply and divide instructions were not so common, and so could be
omitted and emulated in software, with minimal impact on overall
performance. We saw this design decision taken in early versions of
Sun’s SPARC, for example, and in IBM’s ROMP as used in the RT PC.
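For a feel of what “emulated in software” meant, here is a rough sketch of
the shift-and-add loop at the heart of a typical software multiply routine.
The real ones were in assembler (early SPARC provided a multiply-step
instruction, MULScc, to accelerate exactly this loop); the C below is just
the idea, and the function name is made up:

    #include <stdint.h>

    /* Software unsigned multiply by shift-and-add: one iteration per
       bit of the multiplier, adding the shifted multiplicand whenever
       the corresponding multiplier bit is set. Returns the low 32
       bits of a * b, as a 32-bit hardware multiply would. */
    uint32_t soft_umul(uint32_t a, uint32_t b)
      {
        uint32_t product = 0;
        while (b != 0)
          {
            if (b & 1)
                product += a;   /* this bit contributes a shifted copy */
            a <<= 1;            /* shift multiplicand up ... */
            b >>= 1;            /* ... and multiplier down */
          }
        return product;
      }

Up to 32 iterations of shifts and adds per multiply: cheap enough if
multiplies really are rare, painful in code that multiplies a lot.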
Later, it seems, the CPU designers realized that instruction traces were
not the final word on performance measurements, and started to include
hardware integer multiply and divide instructions.
(ROMP was also one of those RISC architectures that had delayed branches,
along with MIPS, HP-PA and, I think, SPARC as well.)
I have heard it said that the RT PC was a poor advertisement for the
benefits of RISC, and the joke was made that “RT” stood for “Reduced
Technology”.
Later, of course, IBM more than made good this deficiency with its second
take on RISC, in the form of the POWER architecture, which is still a
performance leader to this day.