CA1174373A - Channel adapter for virtual storage system - Google Patents
Channel adapter for virtual storage system
- Publication number
- CA1174373A
- Authority
- CA
- Canada
- Prior art keywords
- data
- host
- storage system
- processor means
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Abstract of the Disclosure
A channel adapter is disclosed which serves to interface between a host computer and a virtual storage system of the type in which data is stored in locations chosen by intelligence in the virtual storage system and in blocks of a size chosen for convenience for the memory system. The channel adapter comprises first relatively intelligent processor means for controlling the flow of data, second relatively high speed sequencer means slaved to the first processor for actual handling of the host interface, and third hardware logic for actual handling of the data. Means for data compression and decompression are provided in the data flow path prior to division of the data into block sizes convenient for storage on the long-term storage media.
Description
CHANNEL ADAPTER FOR VIRTUAL STORAGE SYSTEM
Field of the Invention .
This invention relates to apparatus for coupling data processing equipment such as a host computer to a virtual storage system for the storage of large quantities of digital data exclusively under the control of the virtual storage system. More particularly, the invention relates to apparatus for performing data formatting and compression operations and for interfacing the host computer with the processor of the virtual storage system.
Background of the Invention
For many years it has been an aim of the computer and data processing industry generally to provide increasingly less expensive and faster access time memory for the storage of large quantities of digital data. Typically, large computer systems have a hierarchy of memory in which data is stored.
The choice is made based on the frequency of use of a given data file; a compromise is struck between access time and the expense of the storage means used. Thus, for example, a large computer system may have its least frequently used data files stored on reels of magnetic tape, which require substantial access time before they can be physically brought to a given tape drive, mounted and advanced to the point on the tape at which the information sought is stored. The next step in the hierarchy is typically magnetic disk storage, of which there are several types. The user-mountable disk drive, as does tape, requires an operator to fetch the physical disk file and mount it on the drive, but access time is comparatively short. A still faster form of disk storage is so-called "fixed head disk", in which scheme the disks are permanently mounted. Other forms of data storage devices include solid state sequential access and random access memory means. These are generally considered too expensive for long-term storage, and are only used to contain a data set actually being used. Typically in the prior art, a host computer, when requested by a user program to access a given data file, converts the file name to the physical location of the file on a given reel of tape or disk drive, and outputs an instruction to, e.g., mount a given reel of tape on a tape drive, access a particular portion of a disk, or the like. The selection of the type of storage and where on the given storage device a certain user file was to be stored is usually made by the computer system user or operator; to a lesser degree the assignment process can
be carried out by the host computer. Accordingly, the data storage devices themselves are not intelligent and only respond to the commands given them by the host. In Canadian Patent No.
1,153,126, which issued to White on 30 August, 1983, an improvement on this system was made with the invention of a virtual storage system which comprised means for accepting data a host intended for storage on tape and converting the host's commands, e.g., "Mount Tape", to commands suitable for the storage of the data on disk drives. The virtual storage system thus mimicked a tape drive to the host. In this way, much faster access times were provided to the host without requiring modification of the host operating systems.
A further development of this concept is disclosed and claimed in co-pending Canadian application Serial No. 402,477, filed on 7 May, 1982 (Attorney's Docket No. STC-140). There the host operating systems are modified such that the host has little
or no control over the eventual storage of the data but merely writes it to a virtual storage system. The virtual storage system then determines where and on what sort of storage device, typically a disk, the data is to be stored; the host is entirely relieved of this function, thus improving its useful processing capability. The utilization of storage is greatly improved by use in connection with the virtual storage system.
The actual configuration of the virtual storage system disclosed in co-pending application Serial No.
402,477 includes what in other circumstances might be considered a computer itself as the heart of the system; in a preferred embodiment, a Magnuson M80 computer is used. This computer decides where and on what type of data storage device the given data is to be stored. Its main memory is used as a "cache"
within which data is reformatted from the user's chosen record size to a record size, referred to as a "page", convenient for
storage on the long-term storage device chosen. For example, in a presently preferred embodiment the most common mode of storage is on a disk drive, and the page size is selected to be equal to one disk track, which greatly simplifies addressing and formatting considerations. Accordingly, as data is received from the host record by record it is written into a "frame" of storage locations within the solid state memory of the Magnuson CPU. When such a frame has been filled with a "page" of data, the page is written to a "disk frame" of the same size for permanent storage. This frame-by-frame organization is important to the efficient use of the disk storage, and as explained in the co-pending application referred to above yields substantial improvements in the operability
of the computer system as a whole.
The present invention relates to a hardware means used to interface a virtual storage system such as that referred to above with one or more host computers. It comprises itself a processor for performing the pagination of the data as received from the host and additionally comprises means for compression of the data so as to remove redundant bytes, thus rendering less storage necessary. This compression is performed, of course, prior to the pagination of the user records into appropriate sizes for storage on back-end storage devices.
Objects of the Invention It is therefore an object of the invention to provide improved means for storage of digital data.
A further object of the invention is to provide means for interfacing a host computer of a first type with a processor of a differing type comprised in a virtual storage system.
Still a further object of the invention is to provide apparatus for performing data compression and pagination upon data received from such a host prior to writing it to memory in a virtual processor in block sizes convenient for storage on a storage device selected for final storage of the data.
Yet a further object of the invention is to provide such means for interconnecting a plurality of hosts with such a virtual storage system whereby utilization of the virtual storage system can be more completely effected.
Summary of the Invention
The present invention fulfills the above-mentioned needs of the art and objects of the invention by its provision of a channel adapter for a virtual storage system which comprises a microprocessor operating a slave or subprocessor for the control of flow of data through the system. The microprocessor is a relatively low-speed, highly intelligent device, while the subprocessor is a relatively less intelligent but much higher speed processor. The microprocessor and subprocessor each control hardware logic. While the microprocessor controls assignment of storage location, pagination of the data and the actual path along which the data flows through the channel adapter of the invention, the subprocessor or "sequencer" controls switching of signal lines and monitors the hardware logic which actually transfers the individual data bytes.
Accordingly, the present invention is directed to a data processing and storage system comprising a host computer and a virtual storage system. The host computer comprises an arithmetic and logic unit for performing data processing operations. The virtual storage system comprises long-term storage media and means for assigning locations for storage of data on the long-term storage media. The means for assigning storage locations comprises first processor means for determining where on associated long-term storage media selected blocks of data are to be stored, and channel adapter means comprising second processor means connected between the host computer and the first processor means for dividing data supplied by the host for storage into blocks of size convenient for storage on the long-term storage media, and for supplying the data to the first processor means in the convenient block sizes.
The present invention is also directed to a virtual storage system for the storage of data on long-term storage media, the data being received from a source in an arbitrary format. The virtual storage system comprises first processor means for determining where and on what types of associated long-term storage media the data is to be stored, second processor means for dividing the data into block sizes convenient for storage on the long-term storage media, and third processor means for receiving data from the source and controlling the flow of the data through the system in response to the division by the second processor means.
This invention is further directed to a channel adapter for interposition between a virtual storage system and a host computer for dividing data received from the host user in user-selected format and delivering it to the virtual storage system in a format selected for convenience of storage within the virtual storage system. The channel adapter comprises first and second processor means, the first processor means for responding to host commands and to system commands and for controlling operation of the second processor means, the second processor means directing flow of the data through the channel adapter in response to commands from the first processor means, wherein the first processor means is relatively intelligent and the second processor means is relatively high speed.
Brief Description of the Drawings
The invention will be better understood if reference is made to the accompanying drawings, in which:
Fig. 1 shows an overall view of a data processing and storage system including a virtual storage system according to the invention;
Fig. 2 shows a block diagram of the virtual storage system processing apparatus including the channel adapter of the invention;
Fig. 3 shows a detailed block diagram of the channel adapter according to the invention;
Fig. 4 shows the main data path through the channel adapter of the invention in as much of Fig. 3 as is necessary to show the various steps in the path;
Fig. 5 shows the flow of control signals between the virtual storage system data bus and the microprocessor which is the heart of the channel adapter of the invention;
Fig. 6 shows the flow of control signals back and forth between the microprocessor and the sequencer;
Fig. 7 shows the flow of control signals between the host and the microprocessor;
Fig. 8 shows the flow of control signals between the channel and the sequencer; and
Fig. 9 shows a block diagram of hardware which could be used to perform data compression in the channel adapter according to the invention.
Description of the Preferred Embodiments
As discussed above, the present invention relates to a virtual storage system in which a channel adapter performs interface functions between one or more host computers and the virtual storage system; the virtual storage system operates to decide, independently of the host computer(s), where and on what sort of data storage device a given user record should be stored.
The overall configuration of such a system together with a host and showing the channel adapters which are an important aspect of the present invention is shown in Fig. 1.
A host processor 30 is shown together with various conventional inputs and outputs indicated generally by cards, printed paper output and a console. According to the virtual storage system concept the host 30 is connected by interface cables 32 to at least two channel adapters 40 which in turn are connected to the virtual storage system 20. Additional channel adapters 34 may be provided for connection to additional host computers as indicated. In this way, a single virtual storage system 20 can service a plurality of hosts through at least a like plurality of channel adapters. As will be explained in further detail below, in general at least two channel adapters are provided per host so that alternate paths for data transfer exist. The virtual storage system 20 is also supplied with a console 42 for operator interaction and is connected by interface cable 36 to mass storage indicated generally at 38. This would typically comprise both disk and tape drives as well as perhaps other forms of memory devices.
It will be appreciated that in general the prior art practice when allocating space on a magnetic storage device such as at 38 in Fig. 1 to a given file record has been to allow the user to select a maximum size for each data file and to allocate that amount of space to the file permanently. Except in the relatively unusual circumstance where a file is of fixed size, therefore, the file will be underutilized, that is, not completely filled with actual data, as a user will inevitably set up too large a file rather than take the chance of running out of space.
Within the file the user specifies a convenient record length, and likewise selects where and on what sort of data storage device, e.g., disk or tape, his file is to be stored when it is not being processed. When it is desired to run a job using a particular user file, the user specifies its location and specifies what portions of the file are to be transferred from the storage device to the host computer for computation. After the computation is completed, the file is returned to the same spot on the long-term storage media.
According to the present invention, the virtual storage system instead of the user defines where and on what type of data storage device a given user file is to be stored.
The user need therefore only specify the file name and the virtual storage system accesses its memory to determine where the data is stored and to cause the portions sought to be read into the host.
Performance of the storage space allocation function external to the host and without operator intervention allows several important advantages to be achieved. The most important feature is that the file size need not be defined by the user; only that storage space actually needed for the storage of data need be allocated to a given file. This allows much better utilization of available storage space. A second advantage is that the virtual storage system is enabled to reformat the data into block sizes convenient for storage purposes, not determined by the user's purposes. Thus, for example, a user file is divided into "pages" of data which are of convenient size for storage on the storage device selected by the virtual storage system as the eventual storage location. For example, in a presently preferred embodiment, most of the storage takes place on disk memory. Hence, the pages of data are set equal in size to one disk track length.
This enables addressing considerations to be greatly simplified and leads to better utilization of the space on the disk. This "pagination" of the data is performed in the virtual storage system by providing a cache solid state memory in the virtual storage system in which the pages of data are assembled prior to being written to long-term storage media. In the presently preferred embodiment, the virtual storage system comprises a Magnuson M80 computer; the main memory of this computer or "virtual processor" becomes the cache memory. Thus, for example, when the host is outputting data it is written into the memory of the virtual processor, which is divided into data storage areas equal in size to the pages of data desired, called "frames". When a given frame of data is filled, the processor outputs the data from the cache to a selected track on a disk drive.
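The record-to-frame accumulation just described can be sketched in software terms. The class and callback names below are illustrative rather than taken from the patent, and a tiny frame size stands in for a real track length:

```python
class FrameCache:
    """Toy model of the cache: host records accumulate in a track-sized
    "frame"; each filled frame (a "page") is flushed to long-term storage."""

    def __init__(self, frame_size, flush):
        self.frame_size = frame_size
        self.flush = flush          # stands in for the write to a disk frame
        self.frame = bytearray()

    def write_record(self, record):
        """Accept one host record, flushing whenever the frame fills."""
        for byte in record:
            self.frame.append(byte)
            if len(self.frame) == self.frame_size:
                self.flush(bytes(self.frame))
                self.frame.clear()
```

With a frame size of 4, writing the record `b"abcdefghij"` flushes two pages (`b"abcd"` and `b"efgh"`) and leaves `b"ij"` in the frame awaiting further host data, mirroring the frame-by-frame organization described above.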
As noted above in connection with Fig. 1, channel adapters are used to form the interface between the host and the virtual storage system according to the invention. It is in these channel adapters that the pagination function is performed, that is, reformatting of the user data files into page-length sub-blocks for storage convenient to the storage device, rather than to the user as in the prior art. Additionally, it is desirable that data compression be performed on the data. This may comprise removal of redundancies from the data and their replacement by a compressed character indicating the number of times a given byte is repeated. Clearly, it is desirable that such data compression be performed prior to pagination of the data; this is likewise performed in the channel adapter.
Fig. 2 shows a functional block diagram of a channel adapter 60 and virtual storage system central processor (VCP) 64, both of which are connected to a virtual storage system (VSS) bus 62. A second channel adapter 60 is partially shown, indicating that plural channel adapters may be connected to a single bus 62 operating in conjunction with a single virtual storage system central processor 64. As noted, each of the channel adapters 60 is connected to a host computer through a host interface unit 74 and to the VSS system bus 62 through a system bus interface 76. Within each channel adapter 60 resides a channel adapter processor 78, a host data handler and an operation tag handler. The operation tag handler applies instructions from the host to the processor 78, which controls the flow of data through the channel adapter 60 and to the system bus interface 76. As will be made clear below, the channel adapter processor comprises an intelligent microprocessor such as, for example, a Zilog Z8000, which has the capacity for relatively complicated decision making, but which is too slow of operation to perform, e.g., signal switching to the host computer interface. Accordingly, a second processor, resident in the operation tag handler
in the drawing of Fig. 2, of limited intelligence but of far higher speed is slaved to the Z8000 processor and is that which actually controls the protocol to the host interface. The actual transfer of data from the host interface to the system bus interface is performed with hardware. Data compression occurs on this path. The data, having been compressed, is then passed via bus 62 to the virtual storage system indicated generally at 64. As noted, the virtual storage system (VSS) is built generally within a Magnuson M80 computer. The memory 68 indicated is the Magnuson M80's memory, in which the addresses of the various data files stored on back-end storage such as disks, tape, and the like are located. The assignments are made by the VSS central processor 70. The memory 68 may additionally comprise a cache memory for temporarily holding one or more pages of data for a short time while, for example, a given back-end data storage device is made ready for receipt. Alternatively, the cache memory may be used to hold all of a given page of data from a back-end storage device prior to transfer over the system bus and through the channel adapter back to the host. The memory 68 also stores the software program which is executed on the VCP 64. Finally, the VCP 64 comprises channels 72 for connection to the various back-end storage devices and to an operator console.
Fig. 3 shows a more detailed block diagram of the channel adapter 60 of Fig. 2. Its chief component is a Zilog or AMD
Z8000 microprocessor 80, which may be dualed for error detection purposes by a second Z8000 81 connected in parallel. Slaved to the Z8000 microprocessor through two 8-bit control registers 84 and 85 is a channel sequencer unit 86, which in a preferred embodiment is an Intel 8X300 microcontroller chip, additionally controlled by a 2K by 18 bit PROM to assure appropriate sequencing
and control of operations. As noted, the Z8000 80 controls the flow of data but the channel sequencer 86 actually handles the gating of control registers and signal lines and the monitoring of the hardware data transfer by virtue of its far higher speed.
Thus, while the channel sequencer 86 can be said to be slaved to the processor 80, the channel sequencer 86 provides the essential capability of high speed data handling, which capability is important to the operation of the channel adapter.
The drawings subsequent to Fig. 3 show the paths taken by data and by control signals between the major components of the channel adapter shown in Fig. 3. Each of Figs. 4 through 8 shows a primary one-directional data path in full and the inverse return path in dotted lines. The major components through which each path passes are also shown. It is to be understood that these are portions of Fig. 3, the remainder not being drawn in, for clarity's sake. If simultaneous reference is made to these Figures and to Fig. 3, it will be seen how the overall channel adapter functions are performed. Thus, for example, Fig. 4 shows the main data connection between the channel cable, connecting the channel adapter to the host computer at the left side of the diagram, and the system bus connecting the channel adapter and the VCP. This would be the path taken by data when written by a host computer to the virtual storage system according to the invention. The data is received from the channel bus by a receiver 90, is passed through a 2 to 1 multiplexer (a simple switching unit)
92, and a first in/first out buffer 94. The data then passes through data compression logic 96, operating in a compress mode under the control of a PROM 98, and into a second first in/first out buffer 100. A detailed hardware diagram of the logic which may be used to perform data compression will be discussed later in connection with Fig. 9. The data thus compressed is passed to an output buffer 102 and through an output register 104 to a transceiver 106. In the output buffer 102 the 9-bit data words passed from the FIFO buffer (that is, 8 bits plus a parity bit) may be assembled into a 64 data bit, 8 parity bit word for convenience in transmission over the bus. When received in the cache memory of the VSS processor (Fig. 2) the actual pagination operation, that is, the accumulation of a page of data of a size convenient for long-term storage, may be performed. The pagination occurs by virtue of the Z8000 specifying the memory area into which data is written; see Fig. 5 below.
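The assembly of eight parity-tagged bytes into a 64-data-bit, 8-parity-bit bus word can be sketched as follows. The even-parity convention and the most-significant-first bit ordering are assumptions; the text does not specify either:

```python
def parity_bit(byte):
    # Even parity assumed: the stored bit is 1 when the byte has an
    # odd number of one bits, so each 9-bit group has even weight.
    return bin(byte).count("1") & 1

def pack_bus_word(data_bytes):
    """Pack eight bytes into a (64-bit data word, 8-bit parity field)
    pair, first byte in the most significant position (assumed ordering)."""
    assert len(data_bytes) == 8
    word = 0
    parity = 0
    for b in data_bytes:
        word = (word << 8) | b
        parity = (parity << 1) | parity_bit(b)
    return word, parity
```

For instance, eight bytes of `0x01` pack into the data word `0x0101010101010101` with all eight parity bits set under this convention.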
If reference is made simultaneously to Fig. 3 and the dotted line path of Fig. 4, the data path which is used when a read operation is being performed will be observed. In order that the sequence of the first in/first out registers 94 and 100 and the data compression/decompression logic 96 can be the same, the data is passed through the transceiver 106, through a data input buffer 108, through the multiplex unit 92, the first in/
first out registers 94 and 100, the data compression/decompression logic 96, and back out through a driver 110 to the channel bus. These chief data flow paths of Fig. 4 may be sequenced and controlled as noted above by the microcontroller or channel sequencer 86 as directed by the Z8000 80. The control signals are passed to the channel sequencer 86 from the microprocessor 80 through control register 84, thus isolating the Z8000 bus 122 from the 8X300 bus 120. The straight-through path shown in Fig. 4 may be referred to as direct memory access (DMA).
A dynamic program memory and a PROM 118 are connected to the address bus 112 and to the Z8000 bus 122 to provide the Z8000 with the inputs required for its operation. Similarly,
control signals are passed through a control data register 124 which supplies instructions through a system control bus interface 126 to the system control bus 128. The Z8000 also passes control signals to a DMA counter 130 via the data bus 122, and receives control data from the system as required from a control data register 132. The Z8000 utilizes various registers, tags, opcodes and the like to assist in the operation of the channel sequencer, and operates various diagnostic and error control circuits not shown. These expedients will be readily apparent to those skilled in the art, as will be suitable locations of parity check operators for error detection and the like.
If reference is made to the remaining data flow and control paths shown in Figs. 5 through 8, the operation of the channel adapter of the invention will be made more completely clear. Thus, for example, in Fig. 5, the path of control flow from the Z8000 to the system data bus is shown in solid, while the flow of control signals back from the system bus to the Z8000 is shown in dotted lines. For example, the Z8000 might pass a header consisting of 24 bytes before each page of data is transmitted to the host. Each block within each page might be preceded by an 8-byte header. This path is thus that through which the VSS and the channel adapter communicate regarding the sequencing of the movement of data therebetween.
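Under the header scheme just suggested (a 24-byte page header and an 8-byte header per block, both given only as examples in the text), the bytes carried for one page could be reckoned as:

```python
PAGE_HEADER_BYTES = 24   # example figure from the text, not a fixed format
BLOCK_HEADER_BYTES = 8   # likewise an example figure

def transmitted_size(block_sizes):
    """Total bytes for one page: the page header plus each block
    together with its own block header."""
    return PAGE_HEADER_BYTES + sum(BLOCK_HEADER_BYTES + n for n in block_sizes)
```

A page holding two blocks of 1000 and 500 bytes would thus carry 24 + 1008 + 508 = 1540 bytes under these assumed figures.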
In Fig. 6, the flow of control signals from the processor to the channel sequencer is shown in solid, while the flow of responses is shown in dotted. Similarly, in Fig. 7, the flow of control signals from the Z8000 to the channel bus interface for transmission to the host is shown in solid, while the flow of control signals from the host to the Z8000 is shown in dotted lines. Such signals could be, for example, concerning details relevant to the operation of the channel adapter, such as the field size of the data to be transferred from the host to the channel adapter for compression and eventual transmission to the virtual storage system for long-term storage.
In Fig. 8, the flow of signals from the channel sequencer directly to the channel bus is shown. This path would not ordinarily be used, communications from the host generally being made through the Z8000, not through the slaved channel sequencer, but this path could be used to provide a "busy" signal to the host when, for example, the channel adapter was already busy transmitting or assembling data. The host might then try a second channel adapter if one were connected thereto for such purposes. The reverse path, shown dotted, from the channel bus directly to the channel sequencer would be used only for diagnostic purposes.
Fig. 9 shows one possible hardware embodiment whereby data could be compressed. A byte, typically 8 bits wide, is read into a first register 100 and is compared in a comparator 104, shown as an AND gate, with the contents of a second register 102.
If the two are alike, the comparator 104 outputs a high signal to the set input of a flip-flop 106. If this was the first high input to the set input of the flip-flop, it causes a counter 108 to be reset to one. Simultaneously, an escape character generator 110 outputs a predetermined character onto an output line 112.
On the other hand, if the flip-flop 106 had already been set, then the only consequence of the comparator's 104 putting out a high signal would be incrementing counter 108. The contents of registers 100 and 102 are also compared in a NAND gate 114.
Thus, after a series of identical bytes have been compared in comparator 104, and have been counted in counter 108, if a byte is fed to register 100 which is not the same as that stored in 102, the flip-flop 106 would then be reset; the output from the flip-flop 106 would cause the counter 108 to output its contents onto the output line 112, while the contents of register 102, the repeated byte, would also be output. The escape character together with the number of repetitions and an example of the repeated byte would then become that which was stored in place of the identical bytes. It will be appreciated that this process is only useful where the number of repeated bytes is at least equal to the number of bytes necessary to indicate a compression has been performed, in this case four. This could simply be implemented by adding additional registers to those shown and by providing multiple inputs to AND and NAND gates 104 and 114.
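In software terms, the comparator/counter scheme of Fig. 9 amounts to run-length encoding. The sketch below works over a buffer rather than byte by byte as the hardware does; the escape value is hypothetical (the patent does not specify it), the minimum run of four reflects the break-even point noted above, and input containing literal escape bytes would need further escaping that the patent leaves open:

```python
ESCAPE = 0x1B   # hypothetical escape character; the actual value is not given
MIN_RUN = 4     # a run must at least repay the escape/count/byte overhead

def compress(data):
    """Replace runs of identical bytes with (ESCAPE, count, byte) triples;
    shorter runs are copied through unchanged."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        # Measure the run, capping the count at what one byte can hold.
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        run = j - i
        if run >= MIN_RUN:
            out += bytes([ESCAPE, run, data[i]])
        else:
            out += data[i:j]
        i = j
    return bytes(out)
```

For example, `b"AAAAAABC"` compresses to the escape byte, a count of 6, and one `A`, followed by the literal bytes `BC`; data with no runs of four or more passes through unchanged.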
As discussed above, the overall function of the virtual storage system of the invention is to divide user data into frames which are conveniently sized for storage on back-end storage devices such as disk drives. Within this context, the channel adapter serves as the interface between the Virtual Storage System Central Processor (VCP) and the host software. The channel adapter attaches to the host with a standard channel interface and is connected to the VCP; in the preferred embodiment, to an extension of the Magnuson M80/40 system bus. Functionally, the channel adapter comprises five parts: 1) the channel interface and channel sequencer (the 8X300 unit discussed above); 2) data compression and decompression logic means; 3) the channel adapter processor and its memory (the Z8000 unit); 4) direct memory access logic; and 5) the system bus interface. The primary channel adapter function is to accept variable length blocks of data from the host using standard channel protocol, compress the
data if possible, group these blocks into frames and pass control of the filled frames, now called pages, to the virtual control processor by means of messages. Thus, a frame is a track-size space in the system memory allocated by the software contained in the virtual control processor. Control of frames is originally passed to the channel adapter via messages. The virtual control processor then controls movement of filled frames, i.e., pages, to back-end disk storage, thus freeing the frame for reuse. In a presently preferred embodiment the frame size is 19,069 bytes.
The inverse operation is likewise performed on a read operation.
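On the read path, the compressed stream must also be expanded again. A matching expansion for the compression sketch above might look like the following (the escape value is the same hypothetical one; literal escape bytes in the data are not handled, a detail the patent leaves open):

```python
ESCAPE = 0x1B   # hypothetical escape character, matching the compression sketch

def decompress(data):
    """Expand (ESCAPE, count, byte) triples back into runs of bytes;
    everything else is copied through literally."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == ESCAPE and i + 2 < len(data):
            out += bytes([data[i + 2]]) * data[i + 1]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

Feeding back the triple produced in the compression example, followed by the literal `BC`, recovers the original `b"AAAAAABC"`.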
The secondary function of the channel adapter is to pass messages between the host and the virtual control processor software.
This is performed similarly except that messages are not grouped and naturally are not stored in disk drives.
As discussed above, in the virtual storage system of the invention the concept of a virtual volume, or a data file of unspecified length, is of importance. Such a virtual volume consists of at least one page of data. When such a volume is "opened", access to it is controlled by a "stream block" created by the virtual control processor and accessed by the channel adapters. Any host connected to the virtual storage system of the invention can thus access any volume, and indeed multiple hosts can simultaneously access the same file--a possibility not available in the prior art.
To write data to an open volume, the host issues a "SET ID" command to a channel adapter. The channel adapter hardware logic detects the selection and sets a bit in a register monitored by the 8X300 channel sequencer. The sequencer controls the channel tags and stores the channel address and command in registers available to the Z8000 processor and sends an interrupt signal to the Z8000. The Z8000 responds to the interrupt by sending an order to the 8X300 which in turn responds giving the reason for the interrupt. The Z8000 then orders the 8X300 to present the initial status and to transfer bytes of data from the channel to the channel address register and the command register of the Z8000. These four bytes comprise the stream block I.D., which the Z8000 then transforms and treats as the system memory address of the stream block. The Z8000 writes the stream block address and other control data to the registers controlling the system bus interface logic. The stream block is then fetched from the system memory using the Magnuson M80/40 bus protocol and placed in the Z8000 memory. The Z8000 then writes ending status in the channel status register and orders the 8X300 to present ending status directly to the host. The system is then ready to receive data from a host.
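The transformation of the four I.D. bytes into a stream block address is not spelled out above. Assuming, purely for illustration, a big-endian interpretation of the four bytes, the step the Z8000 performs might be modeled as:

```python
def stream_block_address(id_bytes):
    """Treat the four "SET ID" bytes received from the channel as a
    stream block I.D. and transform it into a system memory address.

    The actual transformation used by the Z8000 is not specified in the
    description; a plain big-endian interpretation is assumed here.
    """
    assert len(id_bytes) == 4, "the stream block I.D. is four bytes"
    return int.from_bytes(id_bytes, "big")
```

The resulting address is what the Z8000 would write to the registers controlling the system bus interface logic before fetching the stream block.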
The host then chains a write command, which repeats the "SET ID" action as above, through initial status. The Z8000 then sets up the data transfer parameters such as writing and compressing in the channel interface logic, and orders the 8X300 to start data transfer. The interface hardware now controls byte serial data transfer from the channel into a first in/first out buffer 94 preceding the data compression logic. This is driven until both the buffers 94 and 100 are full. Meanwhile, the Z8000 causes direct memory access (DMA) transfer to begin.
The DMA logic then assembles byte-serial data from the buffer into 8 byte sets, acquires the system bus and sends data to the system memory. The operation of the direct memory access logic includes automatic address incrementing and byte counting, thus releasing the Z8000 from these tasks and enabling them to be performed far faster. Eventually the data has entirely been transferred to the system whereupon the 8X300 again interrupts the Z8000, which in turn updates the stream blocks to show the current state, adds the header data to the stored block, and orders the 8X300 to again indicate to the host that the operation is completed. When sufficient host data has been received to fill a frame in system memory, control of that frame (page) is passed to the VCP.
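The DMA behavior just described, assembling byte-serial data into 8 byte sets with automatic address incrementing and byte counting, can be modeled in software as follows. This is a sketch of hardware behavior; all names are hypothetical, and the memory is modeled as a dictionary of 64-bit word slots.

```python
def dma_transfer(fifo, memory, start_addr):
    """Assemble byte-serial data from the compression FIFO into 8-byte
    sets and store them at successive system memory word addresses.

    Address incrementing and byte counting are done here rather than by
    the Z8000, mirroring the hardware DMA logic described above.
    """
    addr, count = start_addr, 0
    word = bytearray()
    for byte in fifo:                 # byte-serial arrival from buffer 100
        word.append(byte)
        count += 1                    # automatic byte counting
        if len(word) == 8:            # one 64-bit set for the system bus
            memory[addr] = bytes(word)
            addr += 1                 # automatic address incrementing
            word = bytearray()
    if word:
        memory[addr] = bytes(word)    # short final set at end of data
    return count
```

Relieving the Z8000 of the per-byte increment and count is precisely what lets the transfer run at bus speed rather than processor speed.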
As noted, data is transferred framewise. The Z8000 operations are accordingly set up to fill a frame. If a frame fills before the channel signals the end of the data, the block is continued on to another frame. Since the next frame is generally not contiguous to the first in system memory, a second direct memory access set up may be required. This is essentially an independent operation as above. Those skilled in the art will appreciate that the system bus interface handles data transfer to the system memory both from the data compression buffer under direct memory control and directly from the Z8000, i.e., data transfer operations and control operations. These may be interleaved as required. The read operation is similar to the write operation discussed above; in particular, the direction of travel through the data compress/decompress logic in a presently preferred embodiment is common to both modes for simplicity of design, except that the data transfer proceeds in the opposite direction as indicated in connection with Fig. 4.
Those skilled in the art will recognize numerous other aspects of the invention without specific discussion thereof.
For example, diagnostics and error correction logic may be incorporated at numerous points throughout the channel adapter discussed above for convenience of service and for reliability in operations. Too, those skilled in the art will recognize that while the channel adapter is designed to interface a virtual storage system comprising a Magnuson M80 computer as the virtual storage system processor with an IBM 370 CPU (in particular, one having the "MVS" operating system, as discussed in the patent and application referred to above), the channel adapter design could readily be modified in order to operate with other virtual control processors and with other host computers.
Therefore, the above description of the invention should be construed as exemplary only and not as a limitation on its scope, which is more properly defined by the following claims.
Field of the Invention
This invention relates to apparatus for coupling data processing equipment such as a host computer to a virtual storage system for the storage of large quantities of digital data exclusively under the control of the virtual storage system. More particularly, the invention relates to apparatus for performing data formatting and compression operations and for interfacing the host computer with the processor of the virtual storage system.
Background of the Invention
For many years it has been an aim of the computer and data processing industry generally to provide increasingly less expensive and faster access time memory for the storage of large quantities of digital data. Typically large computer systems have a hierarchy of memory in which data is stored.
The choice is made based on the frequency of use of a given data file; a compromise is struck between access time and the expense of the storage means used. Thus, for example, a large computer system may have its least frequently used data files stored on reels of magnetic tape which require substantial access time before they can be physically brought to a given tape drive, mounted and advanced to the point on the tape at which the information sought is stored. The next step in the hierarchy is typically magnetic disk storage, of which there are several types. The user-mountable disk drive, like tape, requires an operator to fetch the physical disk file and mount it on the drive, but access time is comparatively short. Still a faster form of disk storage is so-called "fixed head disk", in which scheme the disks are permanently mounted. Other forms of data storage devices include solid state sequential access and random access memory means. These are generally considered too expensive for long-term storage, and are only used to contain a data set actually being used. Typically in the prior art, a host computer when requested by a user program to access a given data file converts the file name to the physical location of the file on a given reel of tape or disk drive, and outputs an instruction to, e.g., mount a given reel of tape on a tape drive, access a particular portion of a disk, or the like. The selection of the type of storage and where on the given storage device a certain user file was to be stored is usually made by the computer system user or operator; to a lesser degree the assignment process can be carried out by the host computer. Accordingly, the data storage devices themselves are not intelligent and only respond to the commands given them by the host. In Canadian Patent No. 1,153,126, which issued to White on 30 August, 1983, an improvement on this system was made with the invention of a virtual storage system which comprised means for accepting data a host intended for storage on tape and converting the host's commands, e.g., "Mount Tape", to commands suitable for the storage of the data on disk drives. The virtual storage system thus mimicked a tape drive to the host. In this way, much faster access times were provided to the host without requiring modification of the host operating systems.
A further development of this concept is disclosed and claimed in co-pending Canadian application Serial No. 402,477, filed on 7 May, 1982 (Attorney's Docket No. STC-140). There the host operating systems are modified such that the host has little or no control over the eventual storage of the data but merely writes it to a virtual storage system. The virtual storage system then determines where and on what sort of storage device, typically a disk, the data is to be stored; the host is entirely relieved of this function, thus improving its useful processing capability. The utilization of storage is greatly improved by use in connection with the virtual storage system.
The actual configuration of the virtual storage system disclosed in co-pending application Serial No.
402,477 includes what in other circumstances might be considered a computer itself as the heart of the system; in a preferred embodiment, a Magnuson M80 computer is used. This computer decides where and on what type of data storage device the given data is to be stored. Its main memory is used as a "cache" within which data is reformatted from the user's chosen record size to a record size, referred to as a "page", convenient for storage on the long-term storage device chosen. For example, in a presently preferred embodiment the most common mode of storage is on a disk drive, and the page size is selected to be equal to one disk track, which greatly simplifies addressing and formatting considerations. Accordingly, as data is received from the host record by record it is written into a "frame" of storage locations within the solid state memory of the Magnuson CPU. When such a frame has been filled with a "page" of data, the page is written to a "disk frame" of the same size for permanent storage. This frame-by-frame organization is important to the efficient use of the disk storage, and as explained in the co-pending application referred to above yields substantial improvements in the operability of the computer system as a whole.
The present invention relates to a hardware means used to interface the virtual storage system such as that referred to above with one or more host computers. It itself comprises a processor for performing the pagination of the data as received from the host and additionally comprises means for compression of the data so as to remove redundant bytes, thus rendering less storage necessary. This compression is performed, of course, prior to the pagination of the user records into appropriate sizes for storage on back-end storage devices.
Objects of the Invention It is therefore an object of the invention to provide improved means for storage of digital data.
A further object of the invention is to provide means for interfacing a host computer of a first type with a processor of a differing type comprised in a virtual storage system.
Still a further object of the invention is to provide apparatus for performing data compression and pagination upon data received from such a host prior to writing it to memory in a virtual processor in block sizes convenient for storage on a storage device selected for final storage of the data.
Yet a further object of the invention is to provide such means for interconnecting a plurality of hosts with such a virtual storage system whereby utilization of the virtual storage system can be more completely effected.
Summary of the Invention
The present invention fulfills the above-mentioned needs of the art and objects of the invention by its provision of a channel adapter for a virtual storage system which comprises a microprocessor operating a slave or subprocessor for the control of flow of data through the system. The microprocessor is a relatively low-speed, highly intelligent device, while the subprocessor is a relatively less intelligent but much higher speed processor. The microprocessor and subprocessor each control hardware logic. While the microprocessor controls assignment of storage location, pagination of the data and the actual path along which the data flows through the channel adapter of the invention, the subprocessor or "sequencer" controls switching of signal lines and monitors the hardware logic which actually transfers the individual data bytes.
Accordingly, the present invention is directed to a data processing and storage system, comprising a host computer and a virtual storage system. The host computer comprises an arithmetic and logic unit for performing data processing operations. The virtual storage system comprises long-term storage media and means for assigning locations for storage of data on the long-term storage media. The means for assigning storage locations comprises first processor means for determining where on associated long-term storage media selected blocks of data are to be stored, and channel adapter means comprising second processor means connected between the host computer and the first processor means for dividing data supplied by the host for storage into blocks of size convenient for storage on the long-term storage media, and for supplying the data to the first processor means in the convenient block sizes.
The present invention is also directed to a virtual storage system for the storage of data on long-term storage media, the data being received from a source in an arbitrary format. The virtual storage system comprises first processor means for determining where and on what types of associated long-term storage media the data is to be stored, second processor means for dividing the data into block sizes convenient for storage on the long-term storage media, and third processor means for receiving data from the source and controlling the flow of the data through the system in response to the division by the second processor means.
This invention is further directed to a channel adapter for interposition between a virtual storage system and a host computer for dividing data received from the host user in user selected format and delivering it to the virtual storage system in a format selected for convenience of storage within the virtual storage system. The channel adapter comprises first and second processor means, the first processor means for responding to host commands and to system commands and for controlling operation of the second processor means, the second processor means directing flow of the data through the channel adapter in response to commands from the first processor means, wherein the first processor means is relatively intelligent and the second processor means is relatively high speed.
Brief Description of the Drawings
The invention will be better understood if reference is made to the accompanying drawings, in which:
Fig. 1 shows an overall view of a data processing and storage system including a virtual storage system according to the invention;
Fig. 2 shows a block diagram of the virtual storage system processing apparatus including the channel adapter of the invention;
Fig. 3 shows a detailed block diagram of the channel adapter according to the invention;
Fig. 4 shows the main data path through the channel adapter of the invention in as much of Fig. 3 as is necessary to show the various steps in the path;
Fig. 5 shows the flow of control signals between the virtual storage system data bus and the microprocessor which is the heart of the channel adapter of the invention;
Fig. 6 shows the flow of control signals back and forth between the microprocessor and the sequencer;
Fig. 7 shows the flow of control signals between the host and the microprocessor;
Fig. 8 shows the flow of control signals between the channel and the sequencer; and Fig. 9 shows a block diagram of hardware which could be used to perform data compression in the channel adapter according to the invention.
Description of the Preferred Embodiments
As discussed above, the present invention relates to a virtual storage system in which a channel adapter performs interface functions between one or more host computers and the virtual storage system: the virtual storage system operates to decide independently of the host computer(s) where and on what sort of data storage device a given user record should be stored.
The overall configuration of such a system together with a host and showing the channel adapters which are an important aspect of the present invention is shown in Fig. 1.
A host processor 30 is shown together with various conventional inputs and outputs indicated generally by cards, printed paper output and a console. According to the virtual storage system concept the host 30 is connected by interface cables 32 to at least two channel adapters 40 which in turn are connected to the virtual storage system 20. Additional channel adapters 34 may be provided for connection to additional host computers as indicated. In this way, a single virtual storage system 20 can service a plurality of hosts through at least a like plurality of channel adapters. As will be explained in further detail below, in general at least two channel adapters are provided per host so that alternate paths for data transfer exist. The virtual storage system 20 is also supplied with a console 42 for operator interaction and is connected by interface cable 36 to mass storage indicated generally at 38. This would typically comprise both disk and tape drives as well as perhaps other forms of memory devices.
It will be appreciated that in general the prior art practice when allocating space on a magnetic storage device such as at 38 in Fig. 1 to a given file record has been to allow the user to select a maximum size for each data file and to allocate that amount of space to the file permanently. Except in the relatively unusual circumstance where a file is of fixed size, therefore, the file will be underutilized, that is, not completely filled with actual data, as a user will inevitably set up too large a file rather than take the chance of running out of space.
Within the file the user specifies a convenient record length, and likewise selects where and on what sort of data storage device, e.g., disk or tape, his file is to be stored when it is not being processed. When it is desired to run a job using a particular user file the user specifies its location and specifies what portions of the file are to be transferred from the storage device to the host computer for computation. After the computation is completed, the file is returned to the same spot on the long-term storage media.
According to the present invention, the virtual storage system instead of the user defines where and on what type of data storage device a given user file is to be stored.
The user need therefore only specify the file name and the virtual storage system accesses its memory to determine where the data is stored and to cause the portions sought to be read into the host.
Performance of the storage space allocation function external to the host and without operator intervention allows several important advantages to be achieved. The most important feature is that the file size need not be defined by the user; only that storage space actually needed for the storage of data need be allocated to a given file. This allows much better utilization of available storage space. A second advantage is that the virtual storage system is enabled to reformat the data into block sizes convenient for storage purposes, not determined by the user's purposes. Thus, for example, a user file is divided into "pages" of data which are of convenient size for storage on the storage device selected by the virtual storage system as the eventual storage location. For example, in a presently preferred embodiment, most of the storage takes place on disk memory. Hence, the pages of data are set equal in size to one disk track length.
This enables addressing considerations to be greatly simplified and leads to better utilization of the space on the disk. This "pagination" of the data is performed in the virtual storage system by providing a cache solid state memory in the virtual storage system in which the pages of data are assembled prior to being written to long-term storage media. In the presently preferred embodiment, the virtual storage system comprises a Magnuson M80 computer; the main memory of this computer or "virtual processor" becomes the cache memory. Thus, for example, when the host is outputting data it is written into the memory of the virtual processor, which is divided into data storage areas equal in size to the pages of data desired, called "frames". When a given frame of data is filled, the processor outputs the data from the cache to a selected track on a disk drive.
As noted above in connection with Fig. 1, channel adapters are used to form the interface between the host and the virtual storage system according to the invention. It is in these channel adapters that the pagination function is performed, that is, reformatting of the user data files into page length sub-blocks for storage convenient to the storage device, rather than to the user as in the prior art. Additionally, it is desirable that data compression be performed on the data. This may comprise removal of redundancies from the data and their replacement by a compressed character indicating the number of times a given byte is repeated. Clearly, it is desirable that such data compression be performed prior to pagination of the data; this is likewise performed in the channel adapter.
Fig. 2 shows a functional block diagram of a channel adapter 60 and virtual storage system central processor (VCP) 64, both of which are connected to a virtual storage system (VSS) bus 62. A second channel adapter 60 is partially shown, indicating that plural channel adapters may be connected to a single bus 62 operating in conjunction with a single virtual storage system central processor 64. As noted, each of the channel adapters 60 is connected to a host computer through a host interface unit 74 and to the VSS system bus 62 through a system bus interface 76. Within each channel adapter 60 resides a channel adapter processor 78, a host data handler and an operation tag handler. The operation tag handler applies instructions from the host to the processor 78 which controls the flow of data through the channel adapter 60 and to the system bus interface 76. As will be made clear below, the channel adapter processor comprises an intelligent microprocessor such as, for example, a Zilog Z8000, which has the capacity for relatively complicated decision making, but which is too slow in operation to perform, e.g., signal switching to the host computer interface. Accordingly, a second processor, resident in the operation tag handler
in the drawing of Fig. 2, of limited intelligence but of far higher speed, is slaved to the Z8000 processor and is that which actually controls the protocol to the host interface. The actual transfer of data from the host interface to the system bus interface is performed with hardware. Data compression occurs on this path. The data, having been compressed, is then passed via bus 62 to the virtual storage system indicated generally at 64. As noted, the virtual storage system (VSS) is built generally within a Magnuson M80 computer. The memory 68 indicated is the Magnuson M80's memory, in which the addresses of the various data files stored on back-end storage such as disks, tape, and the like are located. The assignments are made by the VSS central processor 70. The memory 68 may additionally comprise a cache memory for temporarily holding one or more pages of data for a short time, while, for example, a given back-end data storage device is made ready for receipt. Alternatively, the cache memory may be used to hold all of a given page of data from a back-end storage device prior to transfer over the system bus and through the channel adapter back to the host. The memory 68 also stores the software program which is executed on the VCP 64. Finally, the VCP 64 comprises channels 72 for connection to the various back-end storage devices and to an operator console.
Fig. 3 shows a more detailed block diagram of the channel adapter 60 of Fig. 2. Its chief component is a Zilog or AMD Z8000 microprocessor 80 which may be dualed for error detection purposes by a second Z8000 81 connected in parallel. Slaved to the Z8000 microprocessor through two 8 bit control registers 84 and 85 is a channel sequencer unit 86, which in a preferred embodiment is an Intel 8X300 microcontroller chip which is additionally controlled by a 2K by 18 bit PROM to assure appropriate sequencing
and control of operations. As noted, the Z8000 80 controls the flow of data but the channel sequencer 86 actually handles the gating of control registers and signal lines and the monitoring of the hardware data transfer by virtue of its far higher speed.
Thus, while the channel sequencer 86 can be said to be slaved to the processor 80, the channel sequencer 86 provides the essential capability of high speed data handling, which capability is important to the operation of the channel adapter.
The drawings subsequent to Fig. 3 show the paths taken by data and by control signals between the major components of the channel adapter shown in Fig. 3. Each of Figs. 4 through 8 shows a primary one-directional data path in full and the inverse return path in dotted lines. The major components through which each path passes are also shown. It is to be understood that these are portions of Fig. 3, the remainder not being drawn in, for clarity's sake. If simultaneous reference is made to these Figures and to Fig. 3, it will be seen how the overall channel adapter functions are performed. Thus, for example, Fig. 4 shows the main data connection between the channel cable, connecting the channel adapter to the host computer at the left side of the diagram, and the system bus connecting the channel adapter and the VCP. This would be the path taken by data when written by a host computer to the virtual storage system according to the invention. The data is received from the channel bus by a receiver 90, is passed through a 2 to 1 multiplexer (a simple switching unit)
92, and a first in/first out buffer 94. The data then passes through data compression logic 96, operating in a compress mode, under the control of a PROM 98, and into a second first in/first out buffer 100. A detailed hardware diagram of the logic which may be used to perform data compression will be discussed later in connection with Fig. 9. The data thus compressed is passed to an output buffer 102 and through an output register 104 to a transceiver 106. In the output buffer 102 the 9-bit data words passed from the FIFO buffer (that is, 8 bits plus a parity bit) may be assembled to a 64 data bit, 8 parity bit word for convenience in transmission over the bus. When received in the cache memory of the VSS processor (Fig. 2) the actual pagination operation, that is, the accumulation of a page of data of a size convenient for long-term storage, may be performed. The pagination occurs by virtue of the Z8000 specifying the memory area into which data is written; see Fig. 5 below.
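The assembly performed in output buffer 102 can be illustrated as follows. Odd parity and the exact bit layout are assumptions, since the description fixes only the word sizes (eight 9-bit FIFO words packed into a 64 data bit, 8 parity bit bus word); names are illustrative.

```python
def parity_bit(byte, odd=True):
    """Compute one parity bit for a data byte (odd parity assumed here)."""
    ones = bin(byte).count("1")
    return (ones + 1) % 2 if odd else ones % 2

def assemble_bus_word(data8):
    """Pack eight 9-bit FIFO words (8 data bits + 1 parity bit each)
    into a 64 data bit, 8 parity bit word for the system bus, as in
    output buffer 102. The layout on the M80/40 bus is an assumption.
    """
    assert len(data8) == 8
    parity = 0
    for i, b in enumerate(data8):
        parity |= parity_bit(b) << i   # one parity bit per data byte
    return bytes(data8), parity        # 64 data bits + 8 parity bits
```

Packing eight bytes per bus word matches the 8 byte sets assembled by the DMA logic for transmission to the system memory.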
If reference is made simultaneously to Fig. 3 and the dotted line path of Fig. 4, the data path which is used when a read operation is being performed will be observed. In order that the sequence of the first in/first out registers 94 and 100 and the data compression/decompression logic 96 can be the same, the data is passed through the transceiver 106, through a data input buffer 108, through the multiplex unit 92, the first in/first out registers 94 and 100, the data compression/decompression logic 96 and back out through a driver 110 to the channel bus. These chief data flow paths of Fig. 4 may be sequenced and controlled as noted above by the microcontroller or channel sequencer 86 as directed by the Z8000 80. The control signals are passed to the channel sequencer 86 from the microprocessor 80 through control register 84 thus isolating the Z8000 bus 122 from 8X300 bus 120. The straight-through path shown in Fig. 4 may be referred to as direct memory access (DMA).
A dynamic program memory 116 and PROM 118 are connected to the address bus 112 and to the Z8000 bus 122 to provide the Z8000 with the inputs required for its operation. Similarly,
control signals are passed through a control data register 124 which supplies instructions through a system control bus interface 126 to the system control bus 128. The Z8000 also passes control signals to a DMA counter 130 via the data bus 122, and receives control data from the system as required from a control data register 132. The Z8000 utilizes various registers, tags, opcodes and the like to assist in the operation of the channel sequencer, and operates various diagnostic and error control circuits not shown. These expedients will be readily apparent to those skilled in the art, as will be suitable locations of parity check operators for error detection and the like.
If reference is made to the remaining data flow and control paths shown in Figs. 5 through 8, the operation of the channel adapter of the invention will be made more completely clear. Thus, for example, in Fig. 5, the path of control flow from the Z8000 to the system data bus is shown in solid, while the flow of control signals back from the system bus to the Z8000 is shown in dotted lines. For example, the Z8000 might pass a header consisting of 24 bytes before each page of data is transmitted to the host. Each block within each page might be preceded by an 8-byte header. This path is thus that through which the VSS and the channel adapter communicate regarding the sequencing of the movement of data therebetween.
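The header framing mentioned in that example might be sketched as follows. Only the 24-byte page header and 8-byte block header lengths come from the description; every field layout below is hypothetical, chosen simply to make the sketch concrete.

```python
PAGE_HEADER_LEN = 24   # bytes preceding each page, per the example above
BLOCK_HEADER_LEN = 8   # bytes preceding each block within a page

def frame_page(page_id, blocks):
    """Prefix a page and each of its blocks with fixed-length headers.

    Field contents (block index and length, zero-padded page I.D.) are
    illustrative; only the header lengths come from the description.
    """
    body = bytearray()
    for n, block in enumerate(blocks):
        hdr = n.to_bytes(4, "big") + len(block).to_bytes(4, "big")
        assert len(hdr) == BLOCK_HEADER_LEN
        body += hdr + block
    page_hdr = page_id.to_bytes(4, "big").ljust(PAGE_HEADER_LEN, b"\0")
    return bytes(page_hdr) + bytes(body)
```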
In Fig. 6, the flow of control signals from the processor to the channel sequencer is shown in solid, while the flow of responses is shown in dotted. Similarly, in Fig. 7, the flow of control signals from the Z8000 to the channel bus interface for transmission to the host is shown in solid, while the flow of control signals from the host to the Z8000 is shown in dotted lines. Such signals could be, for example, concerning details relevant to the operation of the channel adapter, such as the field size of the data to be transferred from the host to the channel adapter for compression and eventual transmission to the virtual storage system for long-term storage.
In Fig. 8, the flow of signals from the channel sequencer directly to the channel bus is shown. This path would not ordinarily be used, communications from the host generally being made through the Z8000, not through the slaved channel sequencer, but this path could be used to provide a "busy" signal to the host when, for example, the channel adapter was already busy transmitting or assembling data. The host might then try a second channel adapter if one were connected thereto for such purposes. The reverse path shown dotted from the channel bus to the channel sequencer directly would be used only for diagnostic purposes.
Fig. 9 shows one possible hardware embodiment whereby data could be compressed. A byte typically 8 bits wide is read into a first register 100 and is compared in a comparator 104 shown as an AND gate with the contents of a second register 102.
If the two are alike, the comparator 104 outputs a high signal to the set input of a flip-flop 106. If this was the first high input to the set input of the flip-flop, it causes a counter 108 to be reset to one. Simultaneously, an escape character generator 110 outputs a predetermined character onto an output line 112.
On the other hand, if the flip-flop 106 had already been set, then the only consequence of the comparator's 104 putting out a high signal would be incrementing counter 108. The contents of registers 100 and 102 are also compared in a NAND gate 114.
Thus, after a series of identical bytes have been compared in comparator 104, and have been counted in counter 108, if a byte is fed to register 100 which is not the same as that stored in 102, the flip-flop 106 would then be reset; the output from the flip-flop 106 would cause the counter 108 to output its contents onto the output line 112, while the contents of register 102, the repeated byte, would also be output. The escape character together with the number of repetitions and an example of the repeated byte would then become that which was stored in place of the identical bytes. It will be appreciated that this process is only useful where the number of repeated bytes is at least equal to the number of bytes necessary to indicate a compression has been performed, in this case four. This could simply be implemented by adding additional registers to those shown and by providing multiple inputs to AND and NAND gates 104 and 114.
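The run-length scheme of Fig. 9 can be modeled in software as follows. The escape value, the marker layout (escape, count, sample byte), and the threshold of four are assumptions drawn from the description; a real design would also have to escape literal occurrences of the escape character in the data, which this sketch does not handle.

```python
ESC = 0xFF  # hypothetical escape character; generator 110 emits a fixed code

def compress(data, esc=ESC):
    """Run-length compress in the style of Fig. 9: a run of four or more
    identical bytes becomes (escape, repetition count, sample byte).
    Shorter runs (and runs too long for a one-byte count) pass literally.
    """
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                          # extend the run of identical bytes
        run = j - i
        if 4 <= run < 256:                  # worth compressing per the text
            out += bytes([esc, run, data[i]])
        else:
            out += data[i:j]                # literal copy
        i = j
    return bytes(out)

def decompress(data, esc=ESC):
    """Invert compress(); performed on the read path through logic 96.
    Assumes the escape value never occurs as a literal data byte.
    """
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == esc:
            out += bytes([data[i + 2]]) * data[i + 1]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

Passing the data through the same logic in both directions, as the common compress/decompress path of Fig. 4 does, corresponds to these two functions being exact inverses of each other.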
As discussed above, the overall function of the virtual storage system of the invention is to divide user data into frames which are conveniently sized for storage on back-end storage devices such as disk drives. Within this context, the channel adapter serves as the interface between the Virtual Storage System Central Processor (VCP) and the host software. The channel adapter attaches to the host with a standard channel interface and is connected to the VCP; in the preferred embodiment, to an extension of the Magnuson M80/40 system bus. Functionally, the channel adapter comprises five parts: 1) the channel interface and channel sequencer (the 8X300 unit discussed above); 2) data compression and decompression logic means; 3) the channel adapter processor and its memory (the Z8000 unit); 4) direct memory access logic; and 5) the system bus interface. The primary channel adapter function is to accept variable length blocks of data from the host using standard channel protocol, compress the
data if possible, group these blocks into frames and pass control of the filled frames, now called pages, to the virtual control processor by means of messages. Thus, a frame is a track-size space in the system memory allocated by the software contained in the virtual control processor. Control of frames is originally passed to the channel adapter via messages. The virtual control processor then controls movement of filled frames, i.e., pages, to back-end disk storage, thus freeing the frame for reuse. In a presently preferred embodiment the frame size is 19,069 bytes.
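The framing step described above can be sketched as splitting a (possibly compressed) byte stream into fixed, track-sized pieces. The function name is illustrative, and the 19,069-byte size follows the presently preferred embodiment stated above:

```python
FRAME_SIZE = 19_069  # one disk track in the presently preferred embodiment

def frame_stream(data: bytes, frame_size: int = FRAME_SIZE) -> list:
    """Split a byte stream into track-sized frames.

    A block that overflows one frame is continued into the next frame,
    which in general is not contiguous with the first in system memory;
    a final partial frame leaves room for further host blocks.
    """
    return [data[i:i + frame_size] for i in range(0, len(data), frame_size)]
```

Control of each filled frame (page) would then be passed to the virtual control processor by message, as the text describes.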
The inverse operation is likewise performed on a read operation.
The secondary function of the channel adapter is to pass messages between the host and the virtual control processor software.
This is performed similarly except that messages are not grouped and naturally are not stored in disk drives.
As discussed above, in the virtual storage system of the invention the concept of a virtual volume or a data file of unspecified length is of importance. Such a virtual volume consists of at least one page of data. When such a volume is "opened", access to it is controlled by a "stream block" created by the virtual control processor and accessed by the channel adapters. Any host connected to the virtual storage system of the invention can thus access any volume, and indeed multiple hosts can simultaneously access the same file--a possibility not available in the prior art.
To write data to an open volume, the host issues a "SET
ID" command to a channel adapter. The channel adapter hardware logic detects the selection and sets a bit in a register monitored by the 8X300 channel sequencer. The sequencer controls the channel tags and stores the channel address and command in registers available to the Z8000 processor and sends an interrupt signal to the Z8000. The Z8000 responds to the interrupt by sending an order to the 8X300 which in turn responds giving the reason for the interrupt. The Z8000 then orders the 8X300 to present the initial status and to transfer bytes of data from the channel to the channel address register and the command
register of the Z8000. These four bytes comprise the stream block I.D., which the Z8000 then transforms and treats as the system memory address of the stream block. The Z8000 writes the stream block address and other control data to the registers controlling the system bus interface logic. The stream block is then fetched from the system memory using the Magnuson M80/40 bus protocol and placed in the Z8000 memory. The Z8000 then writes ending status in the channel status register and orders the 8X300 to present ending status directly to the host. The system is then ready to receive data from a host.
The host then chains a write command, which repeats the "SET ID" action as above, through initial status. The Z8000 then sets up the data transfer parameters such as writing and compressing in the channel interface logic, and orders the 8X300 to start data transfer. The interface hardware now controls byte serial data transfer from the channel into a first in/first out buffer 94 preceding the data compression logic. This is driven until both the buffers 94 and 100 are full. Meanwhile, the Z8000 causes direct memory access (DMA) transfer to begin.
The DMA logic then assembles byte-serial data from the buffer into 8 byte sets, acquires the system bus and sends data to the system memory. The operation of the direct memory access logic includes automatic address incrementing and byte counting, thus releasing the Z8000 from these tasks and enabling them to be performed far faster. Eventually the data has entirely been transferred to the system whereupon the 8X300 again interrupts the Z8000, which in turn updates the stream block to show the current state, adds the header data to the stored block, and orders the 8X300 to again indicate to the host that the operation is completed. When sufficient host data has been received to fill a frame in system memory, control of that frame (page) is passed to the VCP.
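The DMA assembly step just described--gathering byte-serial FIFO output into 8-byte sets with automatic address incrementing and byte counting--might be modeled as follows. The memory dictionary and function name are illustrative conveniences, not part of the hardware:

```python
def dma_transfer(fifo_bytes: bytes, memory: dict, start_addr: int) -> int:
    """Model of the DMA logic: assemble byte-serial data into 8-byte sets
    and store them at auto-incrementing addresses, freeing the Z8000 from
    per-byte addressing and counting. Returns the byte count."""
    addr = start_addr
    word = bytearray()
    count = 0
    for b in fifo_bytes:
        word.append(b)
        count += 1
        if len(word) == 8:
            memory[addr] = bytes(word)  # one 8-byte system-bus write
            addr += 8                   # automatic address increment
            word = bytearray()
    if word:                            # flush a partial final set
        memory[addr] = bytes(word)
    return count                        # byte count maintained by the DMA logic
```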
As noted, data is transferred framewise. The Z8000 operations are accordingly set up to fill a frame. If a frame fills before the channel signals the end of the data, the block is continued on to another frame. Since the next frame is generally not contiguous to the first in system memory, a second direct memory access set up may be required. This is essentially an independent operation as above. Those skilled in the art will appreciate that the system bus interface handles data transfer to the system memory both from the data compression buffer under direct memory control and directly from the Z8000, i.e., data transfer operations and control operations. These may be interleaved as required. The read operation is similar to the write operation discussed above; in particular the direction of travel through the data compress/decompress logic in a presently preferred embodiment is common to both modes for simplicity of design, except the data transfer proceeds through in the opposite direction as indicated in connection with Fig. 4.
Those skilled in the art will recognize numerous other aspects of the invention without specific discussion thereof.
For example, diagnostics and error correction logic may be incorporated at numerous points throughout the channel adapter discussed above for convenience of service and for reliability in operations. Too, those skilled in the art will recognize that while the channel adapter is designed to interface a virtual storage system comprising a Magnuson M80 computer as the virtual storage system processor with an IBM 370 CPU (in particular, one having the "MVS" operating system, as discussed in the patent and application referred to above), the channel adapter design could readily be modified in order to operate with other virtual control processors and with other host computers.
Therefore, the above description of the invention should be construed as exemplary only and not as a limitation on its scope, which is more properly defined by the following claims.
Claims (10)
1. A data processing and storage system comprising:
a host computer comprising an arithmetic and logic unit for performing data processing operations; and a virtual storage system comprising long-term storage media and means for assigning locations for storage of data on said long-term storage media, said means for assigning storage locations comprising:
first processor means for determining where on associated long-term storage media selected blocks of data are to be stored and channel adapter means comprising second processor means connected between said host computer and said first processor means for dividing data supplied by said host for storage into blocks of size convenient for storage on said long term storage media, and for supplying said data to said first processor means in said convenient block sizes.
2. The apparatus of claim 1 wherein said second processor means comprises a relatively low-speed intelligent processor and a relatively high-speed channel sequencer unit slaved to said intelligent processor.
3. The apparatus of claim 1 wherein said channel adapter means is operable to compress said data by removal of redundancies prior to dividing it into said blocks of convenient size for storage on said long term storage media.
4. The apparatus of claim 1 wherein a plurality of host computers are connected to said virtual storage system, and said connections to said host computers are made through at least one of said channel adapter means.
5. A virtual storage system for the storage of data on long-term storage media, said data being received from a source in an arbitrary format, comprising:
first processor means for determining where and on what types of associated long-term storage media said data is to be stored;
second processor means for dividing said data into block sizes convenient for storage on said long-term storage media; and third processor means for receiving data from said source and controlling the flow of said data through said system in response to said division by said second processor means.
6. The virtual storage system of claim 5 additionally comprising means for performing data compression prior to division of said data into said block sizes convenient for storage upon said long-term storage media.
7. The virtual storage system of claim 5 wherein a plurality of said third processors are provided whereby a plurality of said host computers may utilize said virtual storage system.
8. The apparatus of claim 4 wherein said long-term storage media comprises magnetic disk drives and where said block size convenient for said long term storage media is one track on a disk.
9. A channel adapter for interposition between a virtual storage system and a host computer for dividing data received from said host user in user selected format and delivering it to said virtual storage system in a format selected for convenience of storage within said virtual storage system, comprising:
first and second processor means, said first processor means for responding to host commands and to system commands and for controlling operation of second processor means, said second processor means directing flow of said data through said channel adapter in response to commands from said first processor means, wherein said first processor means is relatively intelligent and said second processor means is relatively high speed.
10. The channel adapter of claim 9 additionally comprising means for data compression and decompression.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US26195081A | 1981-05-08 | 1981-05-08 | |
US261,950 | 1981-05-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA1174373A true CA1174373A (en) | 1984-09-11 |
Family
ID=22995574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA000402465A Expired CA1174373A (en) | 1981-05-08 | 1982-05-07 | Channel adapter for virtual storage system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPS57195376A (en) |
CA (1) | CA1174373A (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5118409A (en) * | 1974-08-07 | 1976-02-14 | Hitachi Ltd | |
JPS5322331A (en) * | 1976-08-13 | 1978-03-01 | Fujitsu Ltd | Dynamic address conversion s ystem |
JPS5398741A (en) * | 1977-02-08 | 1978-08-29 | Nec Corp | High level recording and processing system |
-
1982
- 1982-05-07 CA CA000402465A patent/CA1174373A/en not_active Expired
- 1982-05-08 JP JP57077391A patent/JPS57195376A/en active Granted
Also Published As
Publication number | Publication date |
---|---|
JPH0430057B2 (en) | 1992-05-20 |
JPS57195376A (en) | 1982-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6496791B1 (en) | Interfaces for an open systems server providing tape drive emulation | |
US4476526A (en) | Cache buffered memory subsystem | |
CA2069711C (en) | Multi-media signal processor computer system | |
KR101174997B1 (en) | Providing indirect data addressing in an input/output processing system where the indirect data addressing list is non-contiguous | |
US7228399B2 (en) | Control method for storage device controller system, and storage device controller system | |
USRE36989E (en) | Virtual storage system and method | |
US6115761A (en) | First-In-First-Out (FIFO) memories having dual descriptors and credit passing for efficient access in a multi-processor system environment | |
EP0811905A1 (en) | Storage control and computer system using the same | |
US6105076A (en) | Method, system, and program for performing data transfer operations on user data | |
US6571362B1 (en) | Method and system of reformatting data blocks for storage as larger size data blocks | |
CA1153126A (en) | Virtual storage system and method | |
CN116136748B (en) | High-bandwidth NVMe SSD read-write system and method based on FPGA | |
CA1174373A (en) | Channel adapter for virtual storage system | |
US6349348B1 (en) | Data transfer method and apparatus | |
US6810469B2 (en) | Storage system and method for data transfer between storage systems | |
US5809561A (en) | Method and apparatus for real memory page handling for cache optimization | |
US6134623A (en) | Method and system for taking advantage of a pre-stage of data between a host processor and a memory system | |
JPH0246967B2 (en) | ||
JPH0430056B2 (en) | ||
JP3456234B2 (en) | Target device | |
JPH01120655A (en) | Buffer control system | |
JPH05502315A (en) | Methods for transferring data between data storage subsystems and host data processing systems | |
JPH05181778A (en) | Input/output processor | |
JPH0272443A (en) | Data processor | |
JPH06149716A (en) | Disk data transfer system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MKEC | Expiry (correction) | ||
MKEX | Expiry |