On Tue, 13 Aug 2024 05:46:23 -0500, Altered Beast
<j63480576@gmail.com> wrote:
Spalls Hurgenson wrote:
On Mon, 12 Aug 2024 12:19:58 -0500, Altered Beast
<j63480576@gmail.com> wrote:
Kyonshi wrote:
On 8/4/2024 6:09 PM, Dimensional Traveler wrote:
On 8/3/2024 10:38 PM, Mark P. Nelson wrote:
Look, the whole point of the *personal* computer was that you didn't
have to rent time from IBM to figure out your profit/loss balance.

Ever since then, every computer company has been trying desperately
to revive the "You only rent it" model to bolster their bottom line,
no matter their public face on the question.

We're getting closer and closer to no longer having personal
computers which we own and can configure/control as we wish, but
rather Microsoft or Banana computers for which we pay a regular fee.

Pfui!

It's not just computers.

Well, by now lots of things have more computing power than was used
to get man to the moon. Cars, for example.
What units are computing power measured in?
Here's a layman's answer. I'm sure experts in the field will take
issue with some of my descriptions, but I think it's a good enough
overall introduction.
FLOPS and IPS are the units I've typically seen used. The former
- Floating Point Operations Per Second - measures how fast the
computer can do arithmetic calculations, which is a 'real-world'
example of what PCs do. After all, in the end everything we ask our
computers to do revolves around math, so knowing how fast it can run
a calculation is a reasonable basis for comparing computers.
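(For the curious, here's a toy C sketch of what a FLOPS benchmark
boils down to. This is my own example, not a real benchmark like
LINPACK; treat whatever number it prints as ballpark at best.)

  /* Toy FLOPS micro-benchmark. Real benchmarks control for
     caches, pipelining, and compiler optimizations; this one
     just times a multiply-add loop. The volatile read keeps
     the compiler from optimizing the loop away. */
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      const long N = 100000000L;        /* 100 million iterations */
      volatile double x = 1.000000001;
      double sum = 0.0;

      clock_t start = clock();
      for (long i = 0; i < N; i++)
          sum += x * x;                 /* 2 FLOPs: multiply + add */
      clock_t end = clock();

      double secs = (double)(end - start) / CLOCKS_PER_SEC;
      printf("%.0f MFLOPS (sum=%f)\n", (2.0 * N / secs) / 1e6, sum);
      return 0;
  }

The whole trick is counting operations: two floating-point operations
per loop iteration, divided by elapsed time.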
IPS - Instructions Per Second - counts how many machine instructions
the CPU can execute each second. However, because of differences in
CPUs, IPS doesn't map directly to useful work; a calculation that
takes one type of CPU three instructions may take a different
architecture five instructions, and a third architecture might need
twelve.
FLOPS is more useful for comparing actual performance between
different computers (e.g., your phone versus your home PC versus an
F-35 fighter jet). IPS is really only useful for comparing between
similar architectures (e.g., an Intel 13900 and an Intel 13700). There
are also different ways of measuring a CPU's performance, which can
give different results depending on which method you use.
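To put toy numbers on that (purely hypothetical, just to show the
arithmetic): say a floating-point multiply takes 3 instructions on
CPU A and 5 instructions on CPU B.

  CPU A: 100 million IPS / 3 instructions = ~33 million multiplies/sec
  CPU B: 120 million IPS / 5 instructions = ~24 million multiplies/sec

CPU B 'wins' on raw IPS but gets less actual work done, which is why
IPS comparisons only make sense between chips sharing an architecture.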
Precision (how many bits each floating-point number carries) also
affects the results of FLOPS benchmarking; some computers only have
16-bit precision, others go up to 64-bit. Many early computers also
lacked dedicated hardware for floating-point calculations, and so had
to 'brute-force' the math at a significant hit to performance. Others
were specialized for floating-point performance at a cost to the
'regular' integer arithmetic used for a lot of user operations. And
- especially with older computers - architectures were so radically
different that comparisons are almost impossible.
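(To see the precision effect concretely, here's another toy C sketch
- the same multiply-add loop at 64-bit and 32-bit precision. My own
example again; the gap between the two numbers varies wildly with
compiler, flags, and CPU, and on some machines there may be none.)

  /* Times the same 2-FLOP loop at two precisions. On hardware
     that is faster at 32-bit floats (e.g., wider SIMD lanes),
     the float loop reports a higher MFLOPS figure - which is
     why a FLOPS number means little without its precision. */
  #include <stdio.h>
  #include <time.h>

  #define N 100000000L   /* 100 million iterations */

  static double bench_double(void)
  {
      volatile double x = 1.000000001;  /* volatile defeats folding */
      double sum = 0.0;
      clock_t t0 = clock();
      for (long i = 0; i < N; i++)
          sum += x * x;                 /* 2 FLOPs per iteration */
      double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
      (void)sum;
      return (2.0 * N / secs) / 1e6;    /* MFLOPS */
  }

  static double bench_float(void)
  {
      volatile float x = 1.0001f;
      float sum = 0.0f;
      clock_t t0 = clock();
      for (long i = 0; i < N; i++)
          sum += x * x;
      double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
      (void)sum;
      return (2.0 * N / secs) / 1e6;
  }

  int main(void)
  {
      printf("64-bit: %.0f MFLOPS\n", bench_double());
      printf("32-bit: %.0f MFLOPS\n", bench_float());
      return 0;
  }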
I had heard of FLOPS via MATLAB, which reports the number of FLOPs
used by each instruction. I hadn't realized that the S in FLOPS was
for seconds, which was why it didn't make sense to me. I'm really
surprised at the numbers, especially for Cray. 2 FLOPS sounds kind of
primitive. MATLAB had operations on the order of a teraflop (10^12
floating-point operations) for one instruction. Of course, these
could take a few minutes to execute.
Those numbers - from Wikipedia, so take them for what you will - are
floating point operations per CYCLE (so FLOPC, I guess?). You need to
multiply that by the clock speed (and the number of processors) to get
the nice big number everyone expects. 160 MFLOPS (80 MHz, two
floating-point results per cycle) is the figure more typically quoted
for a CRAY-1.
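To make the conversion explicit (back-of-the-envelope, using the
figures above):

  peak FLOPS = FLOPs per cycle x clock rate x processor count
  CRAY-1:  2 x 80,000,000 Hz x 1 = 160,000,000 FLOPS = 160 MFLOPS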
The 'per cycle' number is arguably a more useful reference, since
actual throughput varies with how fast you can clock the chip. But it
makes some CPUs look a lot slower than you'd expect, I know.
That's why I added an alternate table of comparison. (Also because the
only reference I found on Apollo 11's moon-landing computers was in
overall FLOPS performance, not per cycle, and I really wanted to show
where it lay in comparison to more modern hardware. We've come a long
way, baby!). But that table gives you a more 'real world' example of
the performance difference between different machines.
(Also, if the 486 performance looks a bit skewed, that's because the
first chart shows the base 486 performance using just the CPU, whereas
the second uses the dedicated floating-point hardware built into the
DX line of processors*)
But really, all these numbers mean very little to the end-user. A lot
of the day-to-day stuff we ask our computers to do rarely touches
floating-point math, and RAM and storage performance often have a
more immediate impact than finding a chip with the best FLOPS or MIPS
numbers. Unfortunately, measuring THAT makes for even more difficult
comparisons between computers.
Ultimately, the best performance test is, "Does it run Doom?" If the
answer is affirmative, then you've got enough performance to do most
of what you'd need a computer to do. ;-)
* and yes, I know the 486SX chips actually had built-in-but-disabled
FPUs too ;-)