John Levine <johnl@taugh.com> writes:
> My question is where did CKD come from? I looked at the 1964 IBM Systems
> Journal issue that describes the design of S/360, which has an article
> describing the way the channels work, but nothing about the disks.
> Wikipedia has a long list of CKD drives and controllers but no hint of
> whose idea it was.
>
> One clear mistake is that they underestimated how quickly main memories
> would grow, which meant that you could cache keys in memory so most of
> what would otherwise be found by a channel ISAM search was in the cached
> keys and you could usually directly read the correct record on the disk.
> Similarly, if you have plenty of memory to buffer multiple disk blocks,
> splitting logical records of whatever size across fixed-size blocks is
> no big deal.
a big part of CKD was a trade-off of I/O capacity against (then scarce)
real memory: search & multi-track search CCWs have the I/O channel program
find the record you wanted out on the device, rather than keeping that
location information in memory ... at least by the mid-70s, the trade-off
was inverting.
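to make the inversion concrete, a toy sketch (nothing to do with any real
access-method code; the names, record layout, and index are invented) of the
two ways of finding a keyed record ... either the device does the key
compares, monopolizing device/controller/channel for the duration, or the
keys sit in a memory-resident index and the record is read directly:

  # option 1: CKD-style, ship the wanted key to the device and let the
  # channel program's SEARCH CCWs compare it against every record's key
  # field, revolution after revolution, until one matches.  cheap in
  # memory, expensive in device/controller/channel busy time.
  def find_by_device_search(disk_tracks, wanted_key):
      for track in disk_tracks:                 # multi-track search
          for key, data in track:               # one compare per record
              if key == wanted_key:
                  return data
      return None

  # option 2: keep the keys cached in memory (Levine's point above) and go
  # straight to the right block, one lookup in memory plus one read on disk.
  def find_by_memory_index(key_index, read_block, wanted_key):
      block_addr = key_index.get(wanted_key)    # in-memory key lookup
      return read_block(block_addr) if block_addr is not None else None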
program libraries were typically partitioned datasets (PDS) with the
directory at the front. a channel program would do a multi-track search of
the PDS directory looking for the program's directory entry and read it ...
then a second channel program would move the arm to the location given in
that entry and read the program. for each search compare, the search CCW
would refetch the match argument from processor memory ... and for the
duration of the search, the device, controller, and channel were all locked.
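as a rough sketch (python used only because it's compact; the CCW mnemonics
and the search/TIC-back/read pattern are the real thing, while the tuple
encoding, member name, and the "equal" search condition are simplifications),
the two channel programs look something like:

  # 1) find the member's entry in the PDS directory
  directory_lookup = [
      ("SEEK",            "cyl/head where the PDS directory starts"),
      ("SEARCH KEY EQUAL (multi-track)",
                          "member name, refetched from processor memory "
                          "on every compare"),
      ("TIC *-8",         "loop back to the SEARCH until a key matches"),
      ("READ DATA",       "the matching directory entry (has the member's "
                          "location)"),
  ]

  # 2) use the location from that entry to load the member itself
  load_member = [
      ("SEEK",            "cyl/head taken from the directory entry"),
      ("SEARCH ID EQUAL", "record id of the first text record"),
      ("TIC *-8",         "loop back to the SEARCH"),
      ("READ DATA",       "program text"),
  ]

  for ccw, what in directory_lookup + load_member:
      print(f"{ccw:34} {what}")

  # the whole time the SEARCH/TIC loop in (1) is spinning through the
  # directory, the device, the controller, and the channel are all busy.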
the architecture became heavily ingrained into the batch operating
systems. around 1980, I offered to provide FBA (fixed-block architecture)
support to them. I was told that even if I provided it fully integrated and
tested, I still needed a
couple hundred million in incremental sales to cover education and
documentation for the changes ... and since they were already selling
every disk made ... it would just change from CKD to FBA (with no
incremental sales) ... also I couldn't use (FBA) life-time savings in
the business case.
late 70s, I had been brought into a large datacenter for a major national
grocery store chain ... they had multiple systems in a loosely-coupled,
shared-DASD configuration (stores grouped into multiple geographical
regions, each region mapped to a different system). They were having
horrendous performance problems and most of the corporate specialists had
already been brought through before I was called in.
They had a classroom with tables covered with large paper piles of
performance activity data from all the systems. After about 30 minutes I
noticed that during the worst performance periods, the aggregate I/O to one
particular disk (summed across all the systems) peaked at around 6-7/sec
(3330, 19 tracks/cyl, RPS) and asked what that disk was.
It turned out to be the shared disk (for all systems) that contained all
the store applications ... and it was basically capped at doing about two
program loads/sec for the hundreds of stores across the country. It had a
3-cylinder PDS directory and averaged a 1.5-cylinder multi-track search per
application load ... i.e. a full-cylinder multi-track search of 19 tracks
at 60 revs/sec (.317 secs) followed by an average half-cylinder search of
9.5 tracks (.16 secs) ... during which time the disk was locked out for all
systems, as was the controller (and therefore all drives on that
controller). Once the PDS directory entry was found & read, its contents
could be used to move the arm for reading/loading the program.
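to check the arithmetic (back-of-envelope in python; the 60 revs/sec and 19
tracks/cyl come from the 3330 numbers above, the rest is just division):

  revs_per_sec   = 60      # 3330 rotation, 3600 rpm
  tracks_per_cyl = 19

  full_cyl = tracks_per_cyl / revs_per_sec          # 19 revs  ~ .317 sec
  half_cyl = (tracks_per_cyl / 2) / revs_per_sec    # 9.5 revs ~ .158 sec
  per_load = full_cyl + half_cyl                    # 1.5-cyl search ~ .48 sec

  print(f"directory search per program load: {per_load:.3f} sec")
  print(f"max program loads/sec from the search alone: {1/per_load:.1f}")
  # ~2 loads/sec before even counting the seek and the read of the program
  # itself, consistent with the observed 6-7 aggregate physical I/Os/sec if
  # each load is a handful of physical I/Os.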
Somewhat ingrained in the specialists' minds was that a typical 3330 would
do 30-40 I/Os per sec ... but the individual system activity reports only
showed the I/O activity counts for that specific system (with no data about
aggregate disk I/Os across all the systems or average elapsed
queued/waiting time).
I had also been pontificating that between the 60s and early 80s, relative
disk system throughput had declined by an order of magnitude (disks got 3-5
times faster while systems got 40-50 times faster). A disk division
executive assigned the division performance group to refute the claim ...
after a couple of weeks they came back and said that basically I was
slightly understating the problem (this then got respun into customer
presentations on how to configure disks for improved system throughput).
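(the "order of magnitude" is just the ratio of the two improvement ranges:
systems 40-50 times faster vs disks 3-5 times faster, i.e. 40/5 = 8x at the
low end to 50/3 ~ 17x at the high end, so disks fell behind the rest of the
system by roughly 8x-17x.)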
-- virtualization experience starting Jan1968, online at home since Mar1970