According to Joe Pfeiffer <pfeiffer@cs.nmsu.edu>:
>... Once you had a channel, I/O buffering made sense, have the channel
>read or write one area while you're working on the other.
>
>The day the CPU became faster than a teletype (or any other IO device
>you care to name) interrupts became useful. Get an interrupt saying the
>teletype is ready, send a character, go back to work, repeat.
That's certainly the model that DEC used in the PDP-1 and their other
minis. Lightweight interrupts and simple device controllers worked
for them. But the tradeoffs can be a lot more complicated.
Let us turn back to the late, not very lamented IBM 1130 mini. It
usually came with an 1132 printer, which printed about 100 lines per
minute. A drum rotated behind the paper with 48 rows of characters,
each row being all the same character; in front of the paper were the
ribbon and a row of solenoid-driven hammers.
When the 1130 wanted to print a line, it started the printer, which
would then tell it what the upcoming character row on the drum was.
The computer then had less than 10 ms to scan the line of characters
to be printed and store, in fixed locations in low memory, a bitmap
saying which solenoids to fire; the printer fetched that bitmap using
DMA. Repeat until all of the characters had been printed, then tell
the printer to advance the paper.
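To make the timing concrete, here is a minimal C sketch of that
per-row scan. The names, the line width, and the bitmap layout are my
inventions; the real code was 1130 assembler storing into
hardware-fixed low-memory words.

    #define LINE_WIDTH 120                    /* 1132 print positions */

    static char line_buf[LINE_WIDTH];         /* line waiting to print */
    static unsigned char hammer_map[LINE_WIDTH / 8];  /* fetched by DMA */

    /* Called on each drum-row interrupt; drum_char is the character the
       printer says is about to pass under the hammers. */
    void drum_row_interrupt(char drum_char)
    {
        for (int i = 0; i < LINE_WIDTH / 8; i++)
            hammer_map[i] = 0;                /* clear the previous row */

        /* Less than 10 ms to finish this scan before the row goes by. */
        for (int pos = 0; pos < LINE_WIDTH; pos++)
            if (line_buf[pos] == drum_char)
                hammer_map[pos / 8] |= (unsigned char)(1u << (pos % 8));
    }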
Given the modest speed of the 1130, while it was printing a line it
couldn't do anything else. But it was even worse than that. There were
two models of 1130, fast and slow, with the difference being a delay
circuit. The slow model couldn't produce the bitmaps fast enough, so
there was a "print mode" that disabled the delay circuit while it was
printing. As you might expect, students quickly figured out how to put
their 1130s into print mode all the time.
The printer interrupted after a paper move was complete, giving the
computer some chance to compute the next line to print in the
meantime. For a skip to the top of the next page or other paper
motion, the CPU told the printer to start moving the paper and which
row of the carriage control tape (look it up) to watch for a hole in.
When the hole came around, the printer interrupted the CPU, which then
told the printer to stop the paper.
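In the same hedged C, with every helper name invented (the real
machine issued XIO instructions and fielded a fixed interrupt level),
the shape of that protocol was:

    enum carriage_cmd { START_CARRIAGE, STOP_CARRIAGE };

    extern void printer_cmd(enum carriage_cmd cmd, int row); /* device I/O */
    extern void wait_for_carriage_interrupt(void);  /* block until the IRQ */

    void skip_to_row(int row)
    {
        printer_cmd(START_CARRIAGE, row); /* paper starts moving          */
        wait_for_carriage_interrupt();    /* hole in row N has come round */
        printer_cmd(STOP_CARRIAGE, 0);    /* stop the paper at that line  */
    }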
The other printer was the 1403, which came in 300 and 600 LPM models.
Its print mechanism was broadly similar, a horizontal chain of
characters spinning behind the paper, but that made hammer management
harder, since which character was at which position changed every
character time. But that wasn't the CPU's problem. The 1403 used its
own unique character code, probably related to the layout of the print
chain, so the CPU translated the line into printer code, stored the
result in a buffer, and then sent a command telling the printer to
print the buffer. The printer printed, then interrupted, at which
point the CPU told it either to space one line or to skip to row N in
the carriage control tape, the printer again interrupting when done.
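Sketched the same way, the CPU's whole per-line job on the 1403 was
one translation pass and two short commands; all the names below are
invented for illustration, and everything else was the controller's
problem.

    #define LINE_WIDTH 132                      /* 1403 print positions */

    extern const unsigned char to_chain_code[256]; /* host code -> chain code */
    extern void send_print_command(const unsigned char *buf);
    extern void send_space_command(int skip_row);  /* 0 = space one line */
    extern void wait_for_printer_interrupt(void);

    void print_line(const unsigned char *line, int skip_row)
    {
        static unsigned char buf[LINE_WIDTH];

        for (int i = 0; i < LINE_WIDTH; i++)
            buf[i] = to_chain_code[line[i]];    /* translate to printer code */

        send_print_command(buf);                /* controller does the rest  */
        wait_for_printer_interrupt();           /* line is on the paper      */

        send_space_command(skip_row);           /* or skip to row skip_row   */
        wait_for_printer_interrupt();           /* paper motion complete     */
    }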
By putting most of the logic into the printer controller, the 1403 was
not just faster; it also took only a small fraction of the CPU's time,
so the whole system could do more work while still keeping the printer
printing.
The point of this long anecdote is that an interrupt when the CPU is a
little faster than the device is not the whole story. At least in that
era, you wanted to offload as much work as possible so the CPU could
keep the device going, balancing the speed of the CPU against the
speed of the devices.
As a final note, when you look at the 400-page patent on the 709's
channel, keep in mind that its logic was built entirely out of vacuum
tubes and was not a lot less complex than the computer to which it was
attached. A basic 709 rented for $10K/mo (about $100K now) and each
channel was $3600/mo ($37K now). But the speed improvement was worth
it.
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly