US20040083245A1 - Real time backup system - Google Patents
- Publication number: US20040083245A1 (application US10/729,284)
- Authority: US (United States)
- Prior art keywords: server, source, file, target, replication
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2066—Optimisation of the communication load
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2082—Data synchronisation
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
- Y10S707/99955—Archiving or backup
Definitions
- the present invention relates generally to the field of data replication techniques for computer operating systems, and in particular, to an apparatus and method providing real-time back-up of data changes occurring in open or newly edited files.
- a network is a collection of computers connected to each other by various means, in order to share programs, data, and peripherals among computer users. Data on such systems should be periodically copied to a secondary “backup” media, for numerous reasons; including computer failure or power shortage that may damage or destroy some or all of the data stored on the system.
- the standard approach to backing up data is to perform “full backups” of files on the system on a periodic basis. This means copying the data stored on a given computer to a backup storage device.
- a backup storage device usually, but not always, supports removable high-capacity media (such as Digital Audio Tape or Streaming Tape).
- incremental backups are performed by copying only the files that have changed since the last backup (full or incremental) to a backup storage device. This reduces the amount of backup storage space required, as files that have not changed will not be copied on each incremental backup. Incremental backups also provide an up-to-date backup of the files, when used in conjunction with the full backup.
- One such approach is to perform “disk mirroring”, such as is available on Server Fault Tolerance (SFT) II from Novell.
- SFT Server Fault Tolerance
- a full backup of a disk is made to a second disk attached to the same central processing unit.
- This approach provides a “hot-backup” of the first disk, meaning that if a failure occurs on the first disk, processing can be switched to the second with little or no interruption of service.
- a disadvantage of this approach is that a separate hard disk is required for each disk to be backed up, doubling the disk requirements for a system.
- the secondary disk must be at least as large as the primary disk, and the disks must be configured with identical volume mapping. Any extra space on the secondary disk is unavailable. Also, in many cases errors that render the primary disk inoperable affect the mirrored disk as well.
- SFT III from Novell introduced the capability to mirror transactions across a network. All disk I/O and memory operations are forwarded from a file server to a target server, where they are performed in parallel on each server. This includes reads as well as writes. If a failure occurs on the source server, operation can be shifted to the target server. Both the source and target servers must be running Novell software in this backup configuration, and a proprietary high-speed link is recommended to connect the two servers. As NetWare is a multi-tasking environment, the target server can be used for other limited functions while mirroring is being performed. A disadvantage of this approach is that since all operations are mirrored to both servers, errors on the primary server are often mirrored to the secondary server. As with SFT II, local storage on both the source and target servers must be similarly configured.
- Standby Server by VINCA uses the network mirroring capability of NetWare, and provides a mechanism to quickly switch from the source server to the target server in the event of a failure.
- VINCA's Standby Server 32 with Autoswitch adds automatic switching between servers on failure, and allows the operator to take advantage of NetWare's 32-bit environment.
- Communication between the source and target servers is accomplished via a dedicated, proprietary interface. While the source and target server do not have to be identical, identical partitions are required on the local file system of each server.
- the purpose of this invention is to provide means for real-time, transaction-based replication of one or more source computers on a network to one or more target computers, which may or may not be running the same operating system software as the original source computer.
- This provides centralized backup facilities across an entire network, coordination of distributed processing, and migration of data to a new platform with minimal down-time. Only changed information is transmitted to the target server, minimizing the amount of network traffic associated with such a backup.
- a method of controlling flow between the source and target servers is provided to avoid loss of data and bottlenecks in the path between the servers.
- Means are provided to allow files currently open and in use by an application to be backed up in real-time.
- means are provided to replicate user configuration information (such as user accounts, file ownership, and trustee rights) to the target computer, so that users may login immediately and access data in the event of a failure on the source computer.
- a feature of the invention is the manner in which information on a computer system is replicated to a secondary storage media in real-time. Specifically, when a change is made to a file or configuration item on the primary (source) computer, those changes are immediately copied to a secondary (target) computer. This provides a real-time backup of all data on the source computer, so no data is lost in the event of a source computer failure. Only data that has been changed on the source computer is transmitted to the target computer for replication, versus transmitting the entire contents of the file. This reduces the amount of network traffic required to attain real-time replication.
- a further feature of this invention is the manner in which information on the source computer is replicated to the target computer regardless of which software application modifies the information. This includes applications running on the source computer, as well as applications running on other computers that have access to the data on the source computer via networking means.
- a further feature of this invention is the means in which several source computers can be replicated to the same target computer.
- the file system associated with each source computer can be replicated to a separate subdirectory on the target computer storage media, as specified by the operator when configuring the invention.
- Many servers can be replicated to a single target server.
- User configuration information from each source computer is replicated to the target computer, so that this information can be restored to the proper source computer in the event of a failure.
- a further feature of this invention is the means in which a single source computer can be replicated to several different target computers.
- Each replication packet is sent from the source computer to each of the target computers, as designated by user configuration of the invention.
- the result of this operation is that each target computer has a copy of the source files, updated in real-time.
- This feature allows for data processing to be distributed to different computers, by handling the coordination of changes between all targets.
- Another feature of this invention is that data can be replicated to a local file system on the source computer.
- This configuration is referred to as single-server mode, as only a single computer is required to perform replication.
- Replicated data is stored in a separate directory on the source computer, as specified by the operator. This mode is useful when resources are not available for a separate source and target computer, or when a network connection to a target server can not be made.
- Another feature of this invention is the means in which data can be replicated to target computer(s) running a different operating system than the source computer(s).
- the format of replication messages passed between the source and target computers is common for all operating systems. Independent means are provided to build such messages from operating-system-specific commands on the source computer, and to interpret these messages into operating-system-specific commands on the target computer. This feature allows data to be shared between applications running on different platforms.
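- As an illustration of such a platform-neutral message, the sketch below shows one way a replication command could be encoded. The field names, JSON wire format, and hex encoding of file data are assumptions made for illustration, not the message format disclosed by this patent.

```python
# Hypothetical sketch of a platform-neutral replication message.
import json
from dataclasses import dataclass, asdict

@dataclass
class ReplicationMessage:
    operation: str            # e.g. "write", "create_dir", "set_attr"
    path: str                 # file path relative to the replicated root
    offset: int = 0           # byte offset for write operations
    length: int = 0           # number of data bytes
    data: bytes = b""         # only the changed data, not the whole file
    attributes: dict = None   # only fields the target operating system needs

    def encode(self) -> bytes:
        """Serialize to a byte string suitable for a network packet."""
        body = asdict(self)
        body["data"] = self.data.hex()    # JSON cannot carry raw bytes
        return json.dumps(body).encode("utf-8")

    @staticmethod
    def decode(packet: bytes) -> "ReplicationMessage":
        body = json.loads(packet.decode("utf-8"))
        body["data"] = bytes.fromhex(body["data"])
        return ReplicationMessage(**body)

# Example: replicate a 40-byte write at offset 20 of DATA.DAT.
msg = ReplicationMessage("write", "DATA.DAT", offset=20, length=40, data=b"x" * 40)
assert ReplicationMessage.decode(msg.encode()).offset == 20
```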
- a further feature of this invention is the manner in which the operator may select a commit mode for replication actions.
- Commit mode refers to the conditions that must be met before a replication is considered to be successful, thereby allowing the original file operation to proceed.
- the target computer must return a successful status message to the source computer before the transaction is committed.
- In real mode, the transaction is committed as soon as the replication packet is transmitted from the source computer.
- In local mode, the transaction is committed as soon as the replication command is written to a local disk file.
- In remote mode, the file operation must be successful on both the source and target computers before the transaction is committed. If the operation fails on either computer, both operations are reversed to return each computer to its original state.
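- The sketch below illustrates how a source server might decide when a transaction is committed under the three modes just described. The callables (send_packet, append_to_log, apply_locally, apply_on_target) and the command.inverse() helper are hypothetical stand-ins for the transport, logging, and file-system layers, not interfaces defined by the patent.

```python
# Illustrative commit-mode logic for real, local, and remote modes.
from enum import Enum

class CommitMode(Enum):
    REAL = "real"      # committed once the packet has been transmitted
    LOCAL = "local"    # committed once the command is written to a local log
    REMOTE = "remote"  # committed only after both servers succeed

def replicate(command, mode, send_packet, append_to_log, apply_locally, apply_on_target):
    if mode is CommitMode.REAL:
        send_packet(command)              # transmit, then proceed immediately
        return apply_locally(command)
    if mode is CommitMode.LOCAL:
        append_to_log(command)            # committed once safely logged to disk
        return apply_locally(command)
    if mode is CommitMode.REMOTE:
        if not apply_on_target(command):  # two-phase: target must succeed first
            return False                  # neither computer changes
        if not apply_locally(command):
            # reverse the remote change so both computers return to their original state
            apply_on_target(command.inverse())
            return False
        return True
    raise ValueError(f"unknown commit mode: {mode}")
```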
- a further feature of this invention is the method in which flow of replication data between the source and target computers is controlled.
- Means are provided to control replication data flow by limiting the number of replication network packets that can be in transmission at any one time. Once this limit is reached, additional packets are placed on a packet queue until the number of outstanding packets falls below the prescribed level. Also, if there are not enough network resources to accommodate all of the outstanding packets in the queue, the commands are placed in a second internal queue in a compressed format. This format includes the file name, offset, and length of data to be changed, but not the actual data to be modified in the file. When network resources are again available to service these requests, the required data associated with each command is extracted from the file on the source server, and a network packet is built and placed on the packet queue.
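- A minimal sketch of this flow-control scheme follows, under assumed limits and callables: a cap on in-flight packets, a queue of fully built packets, and a second internal queue that holds only the file name, offset, and length of each change until the data can be re-read from the source file and sent.

```python
from collections import deque

MAX_IN_FLIGHT = 16        # assumed cap on outstanding network packets
MAX_QUEUED_PACKETS = 64   # assumed cap on fully built packets held in memory

class FlowController:
    def __init__(self, read_file_range, transmit):
        self.read_file_range = read_file_range   # (path, offset, length) -> bytes
        self.transmit = transmit                 # sends one packet on the network
        self.in_flight = 0
        self.packet_queue = deque()              # complete packets awaiting a slot
        self.stacked_up = deque()                # compressed commands: no file data

    def submit_write(self, path, offset, length, data):
        if self.in_flight < MAX_IN_FLIGHT:
            self._send((path, offset, length, data))
        elif len(self.packet_queue) < MAX_QUEUED_PACKETS:
            self.packet_queue.append((path, offset, length, data))
        else:
            # not enough resources for a full packet: remember only where the
            # change happened and re-read the data once a slot frees up
            self.stacked_up.append((path, offset, length))

    def acknowledge(self):
        """Called when the target confirms a packet, freeing a transmission slot."""
        self.in_flight -= 1
        if self.packet_queue:
            self._send(self.packet_queue.popleft())
        elif self.stacked_up:
            path, offset, length = self.stacked_up.popleft()
            data = self.read_file_range(path, offset, length)
            self._send((path, offset, length, data))

    def _send(self, packet):
        self.transmit(packet)
        self.in_flight += 1
```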
- Another feature of this invention is the manner in which multiple operations to the same file are handled on the internal queue described above.
- the condition described above, in which commands are placed on an internal queue because of a lack of network resources, is referred to as stacked-up mode.
- In stacked-up mode, several commands may be received in the queue that are associated with the same file. If the commands reference similar areas in the file, the commands will be merged into a single command denoting the union of the two areas. If the commands reference areas within the file that are sufficiently separated, the commands will not be merged in the queue. This technique reduces the number of replication packets required when network resources are again available, and reduces the size of the internal queue.
- Another feature of this invention is the means for the user to configure flow control rules, in order to maximize network efficiency based on the current hardware configuration.
- the operator can define whether packets are to be held in a queue on the source computer until certain conditions exist, or sent out immediately upon receipt.
- This feature can be used to optimize network performance and cost when using communication protocols such as ISDN or X.25, or when replication is done across a Wide Area Network (WAN).
- Another feature of this invention is that the operator may select individual files, subdirectories, directories, volumes, or file systems on the source computer to replicate. Means are provided for the user to select files to be replicated by file name, location, or type. A database of files to be replicated is maintained on the source computer. This feature allows the user to mirror only those files on a computer that are considered to be critical enough to require real-time replication.
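- By way of illustration, selection by name, location, or type could be expressed as in the sketch below; the pattern syntax and directory layout are assumptions, not the configuration format of the invention.

```python
# Build a replication set from selected directories and name/type patterns.
import fnmatch
import os

def build_replication_set(root, include_dirs=(), include_patterns=()):
    """Return files under `root` that fall in a selected directory or match
    a selected name/type pattern such as '*.db'."""
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        rel_dir = os.path.relpath(dirpath, root)
        in_selected_dir = any(
            rel_dir == d or rel_dir.startswith(d + os.sep) for d in include_dirs
        )
        for name in filenames:
            if in_selected_dir or any(fnmatch.fnmatch(name, p) for p in include_patterns):
                selected.append(os.path.join(dirpath, name))
    return selected

# Example: replicate everything under "accounts" plus any database file by type.
# replication_set = build_replication_set("/data", ["accounts"], ["*.db"])
```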
- Another feature of this invention is the manner in which source computer data is initially mirrored to the target computer.
- the user may initiate the mirroring process which copies all of the files on the source computer to the target computer.
- the location of replicated files on the target computer is specified by the operator during configuration.
- the mirroring process utilizes the flow control and compression techniques described above for normal replication operations. If replication is disabled at any time during operation, the operator may choose to remirror all data to the target server. This ensures that all files on the source and target computers are in sync after a disruption in service.
- Another feature of this invention is the manner in which data is restored from the target server to the source server in the event of a source server failure. Means are provided to copy all of the replicated data back to the source server using the mirroring technique described above. All user configuration information (including user accounts, file ownership, trustee rights) is also rebuilt on the source server, using the replicated target server information. Since all replicated data is stored on the target server in standard file format, it can be copied back to the source server at any time via user requests.
- Another feature of this invention is the ability for a user to login to the target server and access all replicated data in the event of a source server failure. Since all user configuration information (such as user accounts, file ownership, trustee rights) are replicated on the target computer, the user can login at any time with the same access rights as on the source computer.
- the target computer serves as a hot backup to the source computer, which reduces the amount of user downtime in the event of a computer failure.
- Another feature of this invention is the manner in which data can be copied to a backup storage media on the target computer, while users have the file open on the source computer. For each data replication packet received by the target computer, the associated file is opened, data is written, and the file is then closed, even if the file remains open on the source computer. The result of this sequence is that files are closed and available for backup using third-party backup utilities.
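- A minimal sketch of this open/write/close behaviour on the target, under assumed names: each replicated change is applied and the file is closed again immediately, even though the corresponding file may remain open on the source.

```python
import os

def apply_replicated_write(target_root, rel_path, offset, data):
    """Apply one replicated write on the target and leave the file closed."""
    path = os.path.join(target_root, rel_path)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:      # open only for this single packet
        f.seek(offset)
        f.write(data)
    # On return the file handle is closed, so a third-party backup utility
    # running on the target can read the file freely.
```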
- Another feature of this invention is the ability to store replicated data to a backup storage device (such as a streaming tape) from the target computer, providing a common backup server for one or more source computers. This feature also reduces the processor loading on the source computers, as the backup function is not performed locally.
- Another feature of this invention is the means in which replication commands can be held in memory or on disk while data on the target computer is accessed.
- An application may make a call via an Application Program Interface (API) to cause all replication commands to be placed in the source server internal queue, instead of being sent to the target computer for replication.
- Another call can be made to resume replication, causing all commands in the queue to be sent to the target computer in the order they were received.
- the queuing techniques described above are used to maintain this queue on the source server. This technique can be used by applications such as backup agents, which require a constant file image during processing.
- Another feature of this invention is the ability to replicate over a Wide Area Network (WAN), without any specialized or proprietary hardware.
- Existing WAN communication mechanisms can be used to transmit replication packets to target computers. This feature allows remote sites to maintain real-time updates on data files, and also provides a mechanism for effecting off-site backup storage of critical data.
- Another feature of this invention is the means by which to maintain copies of deleted files on the target computer, and to restore these files to the source server if requested by the user. Based on user configuration, copies of deleted files may be stored under unique names on the target server. Means are provided to display all such files to the user, and to allow the user to restore one or more of these files to a specific location on the source computer. This feature can be configured to maintain deleted files on the target computer until they are explicitly purged by the user, or after a certain period of inactivity.
- Another feature of this invention is the mechanism by which large files are mirrored to an existing directory on the target computer. If the specified file exists on both the source and target computers when mirroring is initiated, only those blocks that have changed shall be copied to the target computer. This feature is only used when the specified file is large enough such that the transmission cost of sending the entire file is greater than the cost of determining which blocks have changed between the files on each computer. This reduces the amount of network traffic required to bring source and target computers into sync, in the event replication is disabled for any period of time.
- Another feature of this invention is the means by which files that are inactive for a specified period of time can be archived to the target computer and deleted from the source computer, in order to conserve storage media.
- the user may configure the amount of inactivity required before the file is deleted from the source computer.
- Means are also provided to list all such files on the target server, and to allow the user to restore such files to the source computer if necessary.
- Another feature of this invention is the means by which replication transactions may be stored to a local storage media on the source computer, in the event that the source computer can not connect to the target computer. All transactions are stored locally using the internal queuing techniques described above. Once a connection is reestablished with the target computer, all stored transactions can be transmitted and executed in the order they were received.
- Another feature of this invention is the means by which replication data may be compressed prior to transmission, in order to reduce the amount of network traffic.
- This feature can be configured by the user to compress data being sent from the source computer, using a variety of standard compression algorithms. Compressed data is decompressed by the target computer, before the data is written to storage media.
- Another feature of this invention is the means by which replication data may be encrypted prior to transmission, in order to prevent replicated data from being intercepted and compromised.
- This feature can be configured by the user to encrypt data being sent from the source computer, using a variety of standard encryption algorithms. Encrypted data is authenticated by the target computer, before the data is written to storage media.
- Another feature of this invention is the manner in which all replication operations are done at the file system level, via operating system calls. Direct access to storage media on either the source or target computers is not required, thereby reducing the risk of introducing errors during low-level media access.
- FIG. 1 is a block diagram of a typical computer network configuration.
- FIG. 2 is a block diagram of the major components of a typical file server.
- FIG. 3 is a block diagram of a computer network system configured for server replication in accordance with the invention.
- FIG. 4 is a block diagram of a computer network in Many to One replication configuration.
- FIG. 5 is a block diagram of a computer network in One to Many replication configuration.
- FIG. 6 is a block diagram of a computer network in Single-Server replication configuration.
- FIG. 7 illustrates the software components of the invention.
- FIG. 8 illustrates the polling sequence for identifying source and target servers.
- FIG. 9 illustrates the sequence of operations for a server mirroring request.
- FIG. 10 illustrates the sequence of operations for a server restore request.
- FIG. 11 illustrates replication set selection.
- FIG. 12 illustrates the sequence of operations for requesting source and target server status.
- FIG. 13 is a block diagram of the source server software component.
- FIG. 14 is a flowchart representing the typical file modification process, without replication.
- FIG. 15 is a flowchart representing the operation of the File System Interface (FSI).
- FIG. 16 is a flowchart representing the operation of the Source Replication Manager (SRM).
- FIG. 17 illustrates local-mode operation, with logging to a local transaction file.
- FIG. 18 is a flowchart representing local-mode operation.
- FIG. 19 illustrates the process of committing local-mode transactions.
- FIG. 20 illustrates remote (two-phase) operation.
- FIG. 21 is a flowchart representing remote (two-phase) operation.
- FIG. 22 is a flowchart representing stacked-up mode, with logging to an internal queue.
- FIG. 23 ( a ) and ( b ) illustrate an example of stacked-up mode queues.
- FIG. 24 illustrates the process of servicing entries from the stacked-up mode internal queue.
- FIG. 25 illustrates the process of mirroring data from source to target computers.
- FIG. 26 illustrates the fast-mirroring process.
- FIG. 27 is a flowchart of the fast-mirroring process.
- FIG. 28 illustrates operation of the Source Communication Manager (SCM) software component.
- FIG. 29 illustrates the process of data compression/decompression and encryption/decryption on source and target computers.
- FIG. 30 is a block diagram of the target server software component.
- FIG. 31 illustrates operation of the Target Replication Manager (TRM).
- FIG. 32 is a flowchart representing the operation of the Target Replication Manager (TRM).
- FIG. 33 is a flowchart representing the process of calculating checksum values on the target server.
- FIG. 34 illustrates the process of restoring data from target to source computer.
- FIG. 1 represents a typical computer network configuration, consisting of file server [ 11 ] with local nonvolatile storage [ 12 ], one or more user workstations [ 10 ], and local area network (LAN) [ 13 ].
- the file server [ 11 ] and workstations [ 10 ] are not necessarily all the same type of computer, and may be running unique operating system software on each.
- a backup device [ 14 ], such as a tape drive, is also connected directly to file server [ 11 ].
- the major components of a file server [ 11 ] are shown in FIG. 2, and include central processing unit (“CPU”) [ 22 ], random access memory (RAM) [ 23 ], non-volatile data storage (such as a hard disk drive) [ 24 ], and a network interface card (NIC) [ 21 ].
- Typical operation of this sample computer network system is shown by the numbered arrows in FIG. 1.
- Workstations [ 10 ] send file modification requests ( 1 ) to the file server [ 11 ], which processes the request and stores any required changes to non-volatile storage media [ 12 ] connected thereto through operating system calls ( 2 ).
- the contents of hard disk [ 12 ] can be stored to back-up storage media [ 14 ] for backup purposes ( 4 ). If an error occurs on file server [ 11 ] which destroys some or all of the data on non-volatile storage media [ 12 ], the contents of the backup tape can be restored from backup storage media [ 14 ] to non-volatile storage media [ 12 ].
- FIG. 3 shows a typical computer network system configured for server replication, in accordance with the preferred embodiment of this invention.
- This configuration consists of source (or primary) server [ 31 ], a target (or secondary) server [ 33 ], one or more client workstations [ 30 ], and a local area network (“LAN”) [ 36 ] to connect servers and workstations.
- a backup device [ 35 ], such as a tape drive, is also connected directly to the target server [ 33 ]. All communication between workstations [ 30 ], source server [ 31 ], and target server [ 33 ] is done via LAN [ 36 ].
- the LAN utilizes standard networking mechanisms (e.g. ethernet, token-ring), and this configuration may be partitioned into separate network segments to improve performance.
- In step 1, the contents of hard disk [ 32 ] in the source server [ 31 ] are mirrored to hard disk [ 34 ] on target server [ 33 ], via network packets.
- Workstations [ 30 ] then send file modification requests ( 2 ) to source server [ 31 ].
- the source server [ 31 ] forwards these requests to target server [ 33 ] for replication ( 3 ).
- the target server [ 33 ] executes the file modification request ( 4 ) on its local hard disk [ 34 ], then returns a status message ( 5 ) to source server [ 31 ].
- the source server [ 31 ] then executes the file modification request on its local hard disk [ 32 ] ( 7 ), and returns a status message to the workstation [ 30 ]. It is an option for the contents of hard disk [ 34 ] to be forwarded and saved on tape [ 35 ]. The result of these operations is that hard disks [ 32 , 34 ] on source server [ 31 ] and target server [ 33 ] have current copies of the same files at all times. Other embodiments of this invention do not require target server [ 33 ] to execute the file modification request on its hard disk before source server [ 31 ] executes the file modification request on its disk drive [ 32 ].
- the example configuration shown in FIG. 3 is referred to as One to One mode, (i.e., a single source server [ 31 ] is replicated to a single target server [ 33 ]).
- Other configurations include Many to One, One to Many, and Single Server.
- In Many to One mode, several source servers [ 42 ] are replicated to a single target server [ 44 ], as shown in FIG. 4.
- In One to Many mode, a single source server [ 52 ] is replicated to several target servers [ 54 ], as shown in FIG. 5.
- In Single Server mode, source server [ 61 ] data is replicated to a local file system [ 63 ], as shown in FIG. 6. Once the data is mirrored, workstations [ 60 ] send file modification requests to the source/target server.
- local file systems [ 62 ] and [ 63 ] can reside on one or two non-volatile data storage devices. In the case of one storage device, the primary data and replicated data will be stored in different volumes of the same data storage device. Further, it is always an option to attach a backup storage device to the target server.
- the components of this invention include three independent applications: a workstation component [ 76 ], source server component [ 72 ], and target server component [ 74 ].
- FIG. 7 shows the relationship between these components and the hardware described in this example.
- the primary function of the workstation component [ 76 ] is to allow the users [ 77 ] to configure the replication process, and communicate this configuration to source server [ 71 ] and target servers [ 73 ].
- This component can be executed on any workstation [ 75 ] on the network [ 78 ]. While the workstation component [ 76 ] is required to configure and initiate replication between two or more servers, it is not required to execute during normal operation.
- user [ 88 ] may configure the following:
- the workstation [ 85 ] broadcasts a message ( 1 ) to each network node [ 80 ] to determine if the node is configured as a target server [ 83 ]. If node [ 83 ] is configured as a target server [ 81 ], a response ( 2 ) is sent to the requesting workstation [ 85 ] denoting that the specified node [ 83 ] is available.
- a list of all available target servers [ 87 ] is maintained ( 3 ) on the workstation [ 85 ], and is displayed to the user [ 88 ] for target server [ 83 ] selection.
- Once a target server [ 83 ] is selected by user [ 88 ], it is referred to as the current target server.
- the workstation [ 85 ] broadcasts a message ( 1 ) to each network node to determine if the node is configured as a source server [ 84 ]. If node [ 83 ] is configured as a source server [ 81 ], a response ( 2 ) is sent to the requesting workstation [ 85 ] denoting that the specified node [ 83 ] is available.
- a list of all available source servers [ 87 ] is maintained on the workstation [ 85 ], and is displayed to the user [ 88 ] for source server [ 84 ] selection.
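- A hypothetical sketch of this polling sequence, using a UDP broadcast, is shown below; the port number and message strings are assumptions made for illustration only.

```python
import socket

DISCOVERY_PORT = 9876   # assumed UDP port for discovery messages

def discover_servers(role: bytes, timeout: float = 2.0):
    """Broadcast a query such as b'TARGET?' or b'SOURCE?' and collect replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(role, ("<broadcast>", DISCOVERY_PORT))
    available = []
    try:
        while True:
            reply, (address, _port) = sock.recvfrom(1024)
            if reply == b"AVAILABLE":
                available.append(address)   # shown to the user for selection
    except socket.timeout:
        pass
    finally:
        sock.close()
    return available

# targets = discover_servers(b"TARGET?")   # candidate target servers
# sources = discover_servers(b"SOURCE?")   # candidate source servers
```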
- When a source server [ 84 ] is selected, user [ 88 ] must specify the location on the current target server [ 82 ] where the source server [ 84 ] data is to be replicated, in the form of a directory or subdirectory path name. The source server [ 84 ] is then connected to the current target server [ 82 ] via the network interface [ 89 ], and replication begins to the specified directory location on the current target server [ 82 ].
- Source Server Disconnect—User [ 88 ] selects specific source server [ 81 ] to disconnect from current target server [ 83 ].
- a list of source servers [ 87 ] connected to the current target server [ 83 ] is available for user selection. If a source server [ 81 ] is selected to be disconnected, a network message ( 1 ) is sent to the current target server [ 83 ] to perform the disconnect. Once disconnected, no further replication is done between specified source server [ 81 ] and current target server [ 83 ].
- Replication Mode—User [ 88 ] selects the replication mode for each source server [ 81 ].
- Replication mode refers to the method in which data is replicated to the current target server [ 83 ], and the level of error checking required before a transaction can be committed.
- Valid replication mode settings are real mode, local mode, and remote mode. Each mode is described in further detail under the source server component section.
- Replication Set—User [ 112 ] selects the volumes, directories, and files [ 117 ] to be replicated from a specific source server [ 110 ], referred to as the replication set [ 113 ].
- Replication set [ 113 ] selection is shown in FIG. 11.
- the user [ 112 ] may select the replication set [ 113 ] from the available volumes, directories, and files [ 116 ] on the source server file system [ 116 ].
- a copy of the replication set [ 113 ] is stored on the source server file system [ 116 ] in file format [ 115 ].
- the workstation [ 111 ] then transmits a network message [ 118 ] to the source server [ 110 ] denoting that the replication set file [ 115 ] is ready to be loaded.
- the source server [ 110 ] loads the replication set from file [ 115 ] to local memory [ 114 ].
- the replication set file [ 115 ] is copied from the source server [ 110 ] to the workstation [ 111 ], and is used as the default replication set [ 113 ].
- a further function of the workstation component [ 96 ] is to initiate mirroring and monitor replication status. From workstation interface [ 97 ], the user may initiate the following actions:
- Initiate Mirroring—When mirroring is requested for a specific source server [ 91 ], all of the volumes, directories, and files specified in replication set [ 98 ] are copied from the source server [ 91 ] to target server [ 93 ]. Initially, a mirroring request ( 1 ) is transmitted from workstation [ 95 ] to the target server [ 93 ], and is then forwarded to source server [ 91 ]. Once the mirroring request is forwarded to source server [ 91 ], a response message ( 3 ) is sent to workstation [ 95 ] to denote that mirroring is under way. The source server [ 91 ] then sends the necessary file information ( 4 ), as well as user account information (such as user name, file ownership, and file access permissions), to the specified target server [ 93 ].
- Initiate Restore—When a restore is requested for a specific source server [ 101 ], all replicated information (including files and user information) is copied ( 2 ) from the target server [ 103 ] to source server [ 101 ]. As depicted in FIG. 10, a restore request ( 1 ) is transmitted to the target server [ 103 ] from the workstation [ 105 ]. If the files to be copied already exist on the source server [ 101 ], they will be overwritten during the restore process.
- Target Server Statistics—Statistics on target server [ 123 ] operations can be displayed at the operator's request. Statistics include, but are not limited to, number of packets received, number of errors encountered, number of replication commands received per command type, number of bytes received, and number of bytes transmitted. These statistics are sent from the current target server [ 123 ] on a periodic basis, as described in the preceding paragraph.
- Source Server Statistics—Statistics on source server [ 121 ] operations may be displayed at the operator's request.
- Source server statistics are requested from the specified source server [ 121 ] via a network message ( 1 ) as they are needed, as shown in FIG. 12.
- Statistics include, but are not limited to, number of packets transmitted, number of errors encountered, number of replication commands transmitted per command type, number of packets in stacked-up mode, and number of bytes transmitted. These statistics are sent from current source server [ 121 ] on a periodic basis.
- the primary function of source server software component [ 131 ] is to intercept any file system commands [ 137 ] from the local operating system [ 136 ], and forward such commands [ 137 ] to the current target server [ 138 ] if necessary.
- a block diagram of the source server software components is shown in FIG. 13.
- the File System Interface (FSI) [ 132 ] monitors file system operations from the operating system [ 136 ] to determine when changes are being made.
- the Source Replication Manager (SRM) [ 133 ] determines whether file system changes should be replicated, builds the network packets [ 139 ] required to effect such replication, and controls the flow of these network packets [ 139 ] using queuing techniques.
- the Source Communications Manager (“SCM”) [ 134 ] sends and receives replication packets [ 139 ] to/from the target server [ 138 ].
- the source server software component [ 131 ] is loaded on each source server [ 130 ] at startup, and remains resident in memory until it is explicitly unloaded by the user, or the source computer [ 130 ] is powered off.
- a source server [ 130 ] must be connected to at least one target server [ 138 ] for replication to be performed. If a source server [ 130 ] is connected to multiple target servers [ 138 ], replication commands are transmitted to each target server [ 138 ] in the order they were connected.
- the function of File System Interface (“FSI”) [ 132 ] is to monitor operating system [ 136 ] commands to any file systems associated with the source server [ 130 ].
- the flowchart depicted in FIG. 14 shows the typical path of such a command, without replication.
- a file system command [ 140 ] is received from a workstation via network messages, and error checking is performed to make sure the command and its associated parameters are valid [ 141 ]. If the command or parameters are not valid, a failed status message is returned to the requesting workstation [ 142 ]. If the command and parameters are valid, the file system operation is performed [ 143 ], and an appropriate status message is returned to the requesting workstation [ 144 ].
- FIG. 15 shows the modified path of a file system command [ 150 ] according to this invention, with the addition of FSI [ 132 ].
- a file system command [ 150 ] is received from a workstation via network messages, and error checking is performed to make sure the command and its associated parameters are valid [ 151 ]. If the command or parameters are not valid, a failed status message is returned to the requesting workstation [ 152 ]. If the command and parameters are valid, FSI [ 132 ] checks the command type to see if it is a file modification request [ 153 ]. All file modification requests are forwarded to Source Replication Manager [ 133 ] for replication [ 154 ]. If the command is successfully replicated by Source Replication Manager [ 133 ], the original file system operation is performed [ 155 ] on source server [ 130 ]. If replication is not successful, a failed status message [ 156 ] is returned to the requesting workstation [ 157 ].
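- The decision path of FIG. 15 can be summarized in the short sketch below; the command object and the validate/replicate/perform callables are hypothetical stand-ins for the operating system and Source Replication Manager interfaces.

```python
MODIFICATION_COMMANDS = {"write", "create", "delete", "rename", "set_attr"}

def handle_file_system_command(command, validate, replicate, perform):
    """command: object with .type and .params; returns a status string."""
    if not validate(command):
        return "FAILED: invalid command or parameters"   # [152]
    if command.type in MODIFICATION_COMMANDS:            # [153]
        if not replicate(command):                       # [154]
            return "FAILED: replication unsuccessful"    # [156], [157]
    perform(command)                                     # [155]
    return "OK"
```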
- The primary function of Source Replication Manager [ 133 ] is to replicate specific file system commands as received from the FSI [ 132 ].
- FIG. 16 shows a high-level flowchart of this process.
- Source Replication Manager [ 133 ] first determines if the file referenced by the command is included in the current replication set [ 162 ]. If it is not, control is immediately returned to the FSI [ 160 ] with a status message indicating the original file system operation may be executed. If the file associated with the current file system operation is included in the replication set, the operation will be replicated.
- As depicted in FIG. 16, Source Replication Manager [ 133 ] then checks to see if network resources are available to send a replication packet [ 163 ] to target server [ 138 ].
- the Source Replication Manager [ 133 ] is limited to a specific portion of all available network resources on source server [ 130 ], to avoid locking out other network operations. If resources are not available, the file system operation is placed in stacked-up mode [ 164 ], which is described in later sections.
- Otherwise, Source Replication Manager [ 133 ] forms a replication packet for the modifications required.
- the replication packet includes the type of file system operation requested (e.g. write file, create directory, change file attribute), all of the associated parameters required to replicate the operation, and the file data associated with this request.
- the format of each replication packet is such that parameters required to replicate the transaction on any operating system are supported. Only those parameters required for the target operating system are populated on any given message.
- This packet is forwarded to Source Communications Manager.
- the Source Communications Manager returns a status message to Source Replication Manager [ 133 ] when replication packet has been received and executed on specified target servers [ 167 ].
- the source server [ 130 ] may operate under one of the following replication modes: real, local, or remote mode.
- the mode selected determines when a replication transaction is considered complete (or committed), allowing control to be returned to the FSI [ 132 ].
- In real mode, the transaction is considered complete when one of the following conditions is met: (a) when replication packet [ 166 ] is successfully forwarded to Source Communications Manager [ 134 ]; or (b) when the command is successfully placed in the stacked-up mode queue [ 164 ] (stacked-up mode only).
- the Source Replication Manager [ 133 ] does not wait for confirmation from Source Communications Manager [ 167 ] that the packet has actually been received or executed by target server [ 138 ] before completing the transaction.
- In local mode, the transaction information [ 176 ] to be stored in transaction log [ 174 ] includes the command type and parameters, as passed from the FSI [ 177 ].
- the file data associated with each transaction is not stored to hard disk [ 173 ] in this mode, in order to minimize disk space requirements on source server [ 172 ].
- When the transaction log [ 174 ] is later transmitted to target server [ 175 ] for execution, the data associated with each transaction [ 176 ] can be extracted from the local source file [ 173 ]. In this case, source server [ 172 ] does not complete transaction [ 176 ] until it is successfully written to the specified log file [ 174 ].
- FIG. 19 shows the manner in which local-mode transactions [ 196 ] in the log file [ 192 ] are serviced in the event of a retry.
- a transaction record [ 196 ] is extracted from log file [ 192 ] by the Source Replication Manager [ 191 ].
- The associated file data is extracted from the source server file system [ 194 ].
- a replication packet [ 193 ] is formed from the operation type and parameters [ 196 ] from the transaction log [ 192 ], and the file data from the file system [ 194 ]. This replication packet [ 193 ] is then forwarded to the Source Communications Manager [ 195 ] for transmission.
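- A sketch of this log-servicing step, under an assumed one-record-per-line log layout: only the command type and parameters are read from the transaction log, and the file data is pulled from the source file system at the moment each replication packet is built.

```python
import json

def service_transaction_log(log_path, source_root, forward_packet):
    """Replay logged commands, attaching current file data to each packet."""
    with open(log_path, "r", encoding="utf-8") as log:
        for line in log:
            # e.g. {"op": "write", "path": "DATA.DAT", "offset": 20, "length": 40}
            record = json.loads(line)
            data = b""
            if record["op"] == "write":
                with open(f"{source_root}/{record['path']}", "rb") as src:
                    src.seek(record["offset"])
                    data = src.read(record["length"])
            forward_packet({**record, "data": data})   # handed to the Source Communications Manager
```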
- Operation of remote mode is shown in FIG. 20.
- When the source server [ 202 ] receives a file modification request ( 1 ) from a workstation [ 201 ], it forwards the request ( 2 ) to the target server [ 205 ].
- the target server [ 205 ] replicates the transaction ( 3 ) to the target server file system [ 206 ], then returns a status message ( 4 ) to the source server [ 202 ] denoting whether the transaction ( 3 ) was successfully replicated. If the status message ( 4 ) returned by the target server [ 205 ] denotes that the transaction ( 2 ) was successfully replicated, the original file modification request ( 1 ) is committed ( 5 ) on the source server file system [ 203 ].
- replication transactions [ 226 ] are placed in an internal queue [ 224 ] on source server [ 222 ] if network resources are not available to transmit a replication packet to target server [ 225 ].
- This condition is known as stacked-up mode. It may be caused by the loss of the network connection [ 227 ] between source server [ 222 ] and target servers [ 225 ], or by heavy network traffic.
- In stacked-up mode, source server [ 222 ] stores the command type and all associated parameters [ 226 ] in internal queue [ 224 ] in the order in which they were received. The file data associated with the operation is not stored in this queue [ 224 ], as it can be extracted when the queue [ 224 ] is serviced.
- Source Replication Manager [ 133 ] attempts to merge transaction [ 226 ] with any other queue entry [ 226 ] that is associated with the same file. If two queue entries reference similar areas in the same file, they are candidates to be merged into a single entry; the new entry will reflect the combination of both operations. If the number of bytes separating the two entries is less than the maximum packet size (a system configuration item), these entries will be merged. If two entries reference significantly different areas in the same file, they will not be merged.
- Operation 1 [ 230 ] writes 40 bytes to the file DATA.DAT [ 231 ], starting at byte offset 20 .
- Operation 2 [ 232 ] writes 60 bytes to the same file [ 231 ], starting at byte offset 40. Since these two entries [ 230 , 232 ] reflect operations in overlapping areas of the file [ 233 ], they can be combined into a single entry denoted as Operation 1 on the merged queue [ 234 ].
- This new operation [ 234 ] writes 80 bytes to the file DATA.DAT [ 231 ], starting at byte offset 20 .
- For the two file operations shown in FIG. 23:
- Operation 1 [ 235 ] writes 40 bytes to the file DATA.DAT [ 236 ], starting at byte offset 20 .
- Operation 2 [ 237 ] writes 60 bytes to the same file [ 236 ], starting at byte offset 1,040. Since these two entries [ 235 , 237 ] reflect operations in distinct areas of the file [ 236 ], and the difference between the packet offsets is greater than the maximum packet size, they will not be merged.
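- The merge rule can be sketched as follows, reproducing the two worked examples above; the exact gap test against the maximum packet size is an assumption consistent with the description, and the queue layout is illustrative.

```python
MAX_PACKET_SIZE = 512   # assumed system configuration item

def merge_commands(queue):
    """queue: list of (path, offset, length) tuples in arrival order."""
    merged = []
    for path, offset, length in queue:
        for i, (mpath, moffset, mlength) in enumerate(merged):
            if mpath != path:
                continue
            gap = max(offset, moffset) - min(offset + length, moffset + mlength)
            if gap < MAX_PACKET_SIZE:   # overlapping or close enough to merge
                start = min(offset, moffset)
                end = max(offset + length, moffset + mlength)
                merged[i] = (path, start, end - start)
                break
        else:
            merged.append((path, offset, length))
    return merged

# First example: 40 bytes at offset 20 and 60 bytes at offset 40 merge into
# a single 80-byte write at offset 20.
assert merge_commands([("DATA.DAT", 20, 40), ("DATA.DAT", 40, 60)]) == [("DATA.DAT", 20, 80)]
# Second example: offsets 20 and 1,040 are too far apart, so both entries remain.
assert len(merge_commands([("DATA.DAT", 20, 40), ("DATA.DAT", 1040, 60)])) == 2
```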
- FIG. 24 shows the manner in which stacked-up mode queue entries [ 246 ] are serviced when network resources are available to transmit replication packets [ 243 ].
- a transaction record [ 246 ] is extracted from the stacked-up mode queue [ 242 ] by the Source Replication Manager [ 241 ].
- the associated file data [ 247 ] is extracted from the source server file system [ 244 ].
- a replication packet [ 243 ] is formed from the operation type and parameters [ 246 ] from the stacked-up mode queue [ 242 ], and the file data [ 247 ] from the file system [ 244 ].
- This replication packet [ 243 ] is then forwarded to the Source Communications Manager [ 245 ] for transmission.
- Another function of Source Replication Manager [ 241 ] is to perform mirroring of source server data [ 251 ] to target server [ 252 ], as shown in FIG. 25.
- Source Replication Manager [ 241 ] copies every volume, directory, and file listed in the replication set table [ 254 ] from source server [ 250 ] to target server [ 252 ].
- the Source Replication Manager extracts the data associated with each file [ 251 ], and builds a mirror packet [ 258 ] to be sent to target server [ 252 ]. If a file [ 255 ] is larger than the maximum packet size [ 257 ] on source server [ 250 ], it will be broken into smaller blocks [ 256 ] for network transmission.
- the queuing techniques described above for replication are used to control the flow of mirror packets ( 2 ) between source [ 250 ] and target [ 252 ] servers.
- the source server [ 250 ] may only send a limited number of mirror packets [ 258 ] at a time, in order to prevent locking out replication and other applications from network resources.
- the mirroring function is used to synchronize the contents of source [ 250 ] and target [ 252 ] servers. This is necessary when replication is first started, and again whenever replication is disabled while changes are being made to source server [ 250 ].
- a fast-mirror mechanism is provided to expedite mirroring in cases where the file to be copied [ 261 ] already exists [ 263 ] on target server [ 262 ]. This process is illustrated in FIG. 26.
- Source Replication Manager [ 241 ] logically breaks the file [ 261 ] into a number of blocks of a given size [ 264 ], and calculates a checksum for each block [ 267 ].
- the source server [ 260 ] then requests the same information for the existing file [ 263 ] on target server [ 262 ].
- the checksum of each block is compared [ 267 , 268 ], and only blocks that are different are transmitted to target server [ 262 ] via fast-mirror packets [ 266 ]. This significantly reduces the amount of network traffic required to effect mirroring, especially for larger files.
- a flowchart of the fast-mirroring process is shown in FIG. 27.
- An example of fast-mirroring is shown in FIG. 26.
- the file FAST.DAT [ 261 ] is 4096 bytes long, and is broken into 8 logical blocks of 512 bytes each [ 267 ].
- In this example, the checksums of two blocks differ between the source and target copies of the file. These two blocks [ 266 ] are transmitted to target server [ 262 ], where they will overwrite the existing blocks [ 263 ]. In this case, only 1024 bytes are copied, versus 4096 bytes if normal mirroring were performed.
- Block size denotes the size of each logical block within the file, and is inversely proportional to the number of blocks that make up the file. A smaller block size would require more checksums to be calculated, but the resolution of each block would be higher. A small block size is optimal if changes are isolated to a small portion of a file, and if network resources are limited. A larger block size would require fewer checksums, with lower block resolution. If changes are spread throughout a file or computing resources are limited, a larger block size should be used.
- Minimum file size denotes the minimum size a file must be to be considered for fast-mirroring. Because of the computing and network resources required to calculate, transmit, and compare checksum values, this technique may only be useful for larger files. Any files that are smaller than the user-configured value for minimum size are copied using the standard mirroring process described above.
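- A minimal sketch of the fast-mirror comparison follows, using assumed values for the block size and minimum file size and MD5 as a stand-in checksum; only blocks whose checksums differ from the target's are transmitted.

```python
import hashlib

BLOCK_SIZE = 512              # assumed block size, as in the FAST.DAT example
MIN_FAST_MIRROR_SIZE = 4096   # assumed minimum file size for fast-mirroring

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE):
    return [hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def fast_mirror(source_data: bytes, target_checksums, send_block):
    """Send only the blocks whose checksums differ from the target's."""
    if len(source_data) < MIN_FAST_MIRROR_SIZE:
        raise ValueError("file too small; use normal mirroring instead")
    for index, checksum in enumerate(block_checksums(source_data)):
        target_sum = target_checksums[index] if index < len(target_checksums) else None
        if checksum != target_sum:
            start = index * BLOCK_SIZE
            send_block(index, source_data[start:start + BLOCK_SIZE])

# With a 4096-byte file and two changed 512-byte blocks, only 1024 bytes
# travel over the network, matching the FAST.DAT example above.
```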
- A further function of Source Replication Manager [ 241 ] is to handle configuration and status request messages from the workstation component. The following messages are supported:
- Replication Set Modification—Replication set messages denote which directories, files, and subdirectories are to be replicated by the source server. As the messages are received, the replication manager maintains an internal table of all replication set entries. If a directory, subdirectory, or file is added to this list after replication has begun, it will be automatically mirrored to target server.
- Initiate Mirroring—Requests that a specific source server begin mirroring its selected directories, subdirectories, and files to target server. This message is forwarded to Source Replication Manager for processing.
- Source Server Statistics—Requests statistical information about a specified source server.
- the Source Communications Manager places the requested information in a network message, which is returned to workstation component.
- The primary function of Source Communications Manager [ 283 ] is to transmit replication packets [ 284 ] from source server [ 280 ] to one or more target servers [ 281 ].
- Source Communications Manager [ 283 ] first determines which target servers [ 281 ] are to receive this data [ 284 ].
- Server configuration is stored internally in a target server list [ 285 ] on source server [ 280 ].
- Source Communications Manager [ 283 ] then transmits the packet [ 284 ] to each configured target server [ 281 ], and places a copy of the packet on an internal “waiting for acknowledge” queue [ 286 ].
- the copy of the packet remains on this queue [ 286 ] until target server [ 281 ] responds that the packet [ 284 ] has been executed, or a time-out condition occurs. When one of these conditions is met, the status of the operation is returned to Source Replication Manager [ 282 ] and the packet [ 284 ] is removed from the queue [ 286 ].
- If a time-out occurs, Source Communications Manager [ 283 ] will attempt to resend the packet [ 284 ]. If target server [ 281 ] does not respond after a given number of retries, transaction [ 284 ] is removed from queue [ 286 ] and an error status message is returned to Source Replication Manager [ 282 ]. Whenever a packet [ 284 ] is removed from the "waiting for acknowledge" queue [ 286 ], Source Communications Manager [ 283 ] determines if there are any commands currently in stacked-up mode. If there are, the Source Replication Manager [ 282 ] is signaled to service the stacked-up mode queue with the available packet [ 284 ].
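- The "waiting for acknowledge" queue might be sketched as follows; the time-out value, retry limit, and packet-identifier scheme are assumptions for illustration.

```python
import time

ACK_TIMEOUT = 5.0   # seconds to wait before resending (assumed)
MAX_RETRIES = 3     # resend attempts before reporting an error (assumed)

class AckTracker:
    def __init__(self, transmit, report_status):
        self.transmit = transmit
        self.report_status = report_status   # callback to the Source Replication Manager
        self.waiting = {}                    # packet_id -> (packet, send_time, retries)

    def send(self, packet_id, packet):
        self.transmit(packet)
        self.waiting[packet_id] = (packet, time.monotonic(), 0)

    def on_acknowledge(self, packet_id):
        if packet_id in self.waiting:
            del self.waiting[packet_id]
            self.report_status(packet_id, "ok")

    def check_timeouts(self):
        now = time.monotonic()
        for packet_id, (packet, sent_at, retries) in list(self.waiting.items()):
            if now - sent_at < ACK_TIMEOUT:
                continue
            if retries >= MAX_RETRIES:
                del self.waiting[packet_id]
                self.report_status(packet_id, "error: target not responding")
            else:
                self.transmit(packet)        # resend and keep waiting
                self.waiting[packet_id] = (packet, now, retries + 1)
```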
- a further function of the Source Communications Manager [ 292 ] is to compress and/or encrypt replication data [ 298 ] before it is transmitted to the target server [ 294 ], using standard compression and encryption algorithms [ 293 ].
- This process is shown in FIG. 29.
- Data compression and encryption [ 293 ] are optional features that may be enabled from the user workstation [ 336 ].
- the packet data [ 298 ] is compressed and/or encrypted using standard methods [ 293 ].
- the compressed/encrypted packet [ 299 ] is then transmitted to the target server [ 294 ], where it is decompressed and/or decrypted [ 297 ] before it is replicated.
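- A sketch of the compress-then-transmit path and its inverse on the target is shown below, using zlib from the Python standard library; an encryption step (for example AES through a standard cryptographic library) would wrap the compressed bytes in the same way. The one-byte marker is an assumption for illustration.

```python
import zlib

def prepare_packet(data: bytes, compress: bool = True) -> bytes:
    """Source side: optionally compress packet data before transmission."""
    if compress:
        return b"C" + zlib.compress(data)   # one-byte marker: compressed
    return b"P" + data                      # plain

def receive_packet(packet: bytes) -> bytes:
    """Target side: undo whatever the source applied before replication."""
    marker, body = packet[:1], packet[1:]
    if marker == b"C":
        return zlib.decompress(body)
    return body

payload = b"changed file data" * 100
assert receive_packet(prepare_packet(payload)) == payload
```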
- The primary function of target server component [ 301 ] is to receive and execute replication packets [ 307 ] from one or more source servers [ 306 ].
- a block diagram of target server [ 300 ] software components is shown in FIG. 30.
- the Target Communications Manager (TCM) [ 302 ] receives replication packets [ 307 ] from source server [ 306 ], and sends status messages back to source server [ 306 ] for each replication packet [ 307 ].
- TRM Target Replication Manager
- the Target Replication Manager (TRM) [ 303 ] replicates the operation described in each packet [ 307 ] to the local storage media [ 305 ] on target server [ 300 ], and restores data [ 305 ] to source server [ 306 ] when necessary.
- the target server software component [ 301 ] is loaded on target server [ 300 ] at startup, and remains resident in memory until it is explicitly unloaded, or target computer [ 300 ] is powered off.
- a target server [ 300 ] must be connected to at least one source server [ 306 ] for replication to be performed. If a target server [ 300 ] is connected to multiple source servers [ 306 ], replication commands [ 307 ] may be forwarded from each.
- Target Communications Manager [ 302 ] The primary function of the Target Communications Manager [ 302 ] on the target server is to receive replication packets [ 307 ] from one or more source servers [ 306 ].
- When a replication packet [ 307 ] is received from a source server [ 306 ], the Target Communications Manager [ 302 ] forwards the packet [ 307 ] to the Target Replication Manager [ 303 ].
- When the Target Replication Manager [ 303 ] is finished processing the packet [ 307 ], the Target Communications Manager [ 302 ] sends a status message to the source server [ 306 ] in the form of another network packet.
- Another function of the Target Communications Manager [ 302 ] is to handle the following user requests from the workstation component:
- Source Server Connect requests that the target server establish a connection with the specified source server, and begin replication.
- The Target Communications Manager attempts to connect to the specified source server, and returns a status message to the workstation denoting the status of the connection.
- Source Server Disconnect requests that the target server drop a connection with the specified source server.
- The Target Communications Manager disconnects from the specified source server, and returns a status message to the workstation denoting the status of the connection.
- Initiate Restore requests that all replicated directories, subdirectories, and files for a specific source server be restored from the target server. This request is forwarded to the Target Replication Manager for processing.
- Target Server Statistics requests statistical information about the target server.
- The Target Communications Manager places the requested information in a network message, which is returned to the workstation component.
- A further function of the Target Communications Manager [ 295 ] is to decompress and/or decrypt replication data [ 299 ] that is transmitted from the source server [ 290 ], using standard decompression and decryption algorithms [ 297 ].
- This process is shown in FIG. 29.
- Data compression and encryption [ 297 ] are optional features that may be enabled from the user workstation [ 336 ].
- If these features are enabled, the packet data [ 299 ] is decompressed and/or decrypted using standard methods [ 297 ].
- The decompressed/decrypted packet [ 298 ] is then passed to the Target Replication Manager [ 296 ] for replication.
- Target Replication Manager [ 312 ] The primary function of Target Replication Manager [ 312 ] is to replicate the operation described in each packet [ 313 ] received by Target Communications Manager [ 311 ] to local storage media [ 315 ] on target server [ 310 ]. This process is shown in FIG. 31.
- The Target Replication Manager [ 312 ] parses each message to determine the type of command to be executed, the parameters required to execute the command, and the file data passed in the packet [ 313 ].
- The Target Replication Manager [ 312 ] then determines if the file [ 315 ] specified in the replication packet [ 313 ] is opened by another application [ 316 ] on the target server [ 310 ]. If it is, the operation is placed on the open-file queue [ 314 ] until the file becomes available.
- If the file [ 315 ] is available and is successfully opened, the specified file operation [ 313 ] is executed and the file [ 315 ] is then closed. By closing the file [ 315 ] immediately after the operation is completed, the file [ 315 ] is available for use by other applications [ 316 ] even if it remains open on the source server [ 316 ]. The status of the file operation [ 313 ] is then returned to the Target Communications Manager [ 311 ], where it is in turn sent to the source server [ 316 ] as a response.
- The Target Replication Manager [ 312 ] periodically checks to see if any operations are waiting to be executed in the open-file queue [ 314 ]. If the file [ 315 ] associated with one or more entries in this queue [ 314 ] has since been closed, any such operations [ 314 ] are executed in the order they were received. Once an entry is executed, it is removed from the open-file queue [ 314 ].
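The open-file queue behavior described above can be sketched as follows. The `is_locked` callable (standing in for the operating-system check of whether another application has the file open) and the write-only operation format are assumptions made for illustration, not the patent's interfaces.

```python
from collections import deque
import os

class OpenFileQueue:
    """Defers write operations against files that are in use on the target,
    and replays them in arrival order once those files become free."""

    def __init__(self, is_locked):
        self.is_locked = is_locked   # callable(path) -> bool; platform-specific in practice
        self.queue = deque()         # deferred (path, offset, data) operations

    def _write(self, path, offset, data):
        # Open, write at the requested offset, and close immediately, so the
        # file stays available to other applications and backup utilities.
        with open(path, "r+b" if os.path.exists(path) else "w+b") as f:
            f.seek(offset)
            f.write(data)

    def apply(self, path, offset, data):
        if self.is_locked(path):
            self.queue.append((path, offset, data))
            return "queued"
        self._write(path, offset, data)
        return "ok"

    def service(self):
        # Called periodically by the Target Replication Manager.
        still_waiting = deque()
        while self.queue:
            path, offset, data = self.queue.popleft()
            if self.is_locked(path):
                still_waiting.append((path, offset, data))
            else:
                self._write(path, offset, data)
        self.queue = still_waiting
```

In practice the status of each executed operation would also be reported back through the Target Communications Manager.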
- A further function of the Target Replication Manager [ 312 ] is to handle mirror packets [ 313 ] from the source server [ 316 ], via the Target Communications Manager [ 311 ].
- When a mirror packet [ 313 ] is received, the Target Replication Manager [ 312 ] determines if the associated file system item (user, directory, subdirectory, or file) [ 315 ] exists on the target server [ 310 ].
- The item [ 315 ] is created on the target server [ 310 ] if it does not exist.
- The data associated with the request is written to the file [ 315 ], at the offset specified in the mirror packet [ 313 ].
- The file [ 315 ] is then closed, so it may be accessed by other applications [ 316 ]. If the file [ 315 ] specified by a mirror request [ 313 ] already exists, its contents are overwritten by the new data [ 315 ].
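A minimal sketch of applying a single mirror packet on the target might look like the following; the `truncate_existing` flag is an assumption about how "contents are overwritten" is realized for a pre-existing file, and the path layout is illustrative.

```python
import os

def apply_mirror_packet(target_root, relative_path, offset, data, truncate_existing=False):
    """Create the file system item if it is missing, write the mirrored block
    at the requested offset, and close the file immediately afterwards."""
    path = os.path.join(target_root, relative_path)
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)  # create missing directories
    mode = "w+b" if (truncate_existing or not os.path.exists(path)) else "r+b"
    with open(path, mode) as f:
        f.seek(offset)
        f.write(data)
```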
- In order to support fast-mirroring of larger files that already exist on both the source [ 260 ] and target [ 262 ] servers, the Target Replication Manager [ 312 ] must also calculate checksum values [ 265 ] for files [ 263 ] as requested by the source server [ 260 ], as shown in FIG. 26. FIG. 33 illustrates how these checksum values [ 268 ] are calculated on the target server [ 262 ].
- The source server [ 260 ] sends a list of all candidate files for fast mirroring, and the block size to be used in calculating checksum values [ 330 ].
- The Target Replication Manager [ 312 ] searches the target server file system [ 336 ] for each file on this list [ 330 ] to see if it already exists [ 331 ]. If it does not exist on the target server [ 332 ], the specified file is dropped from the candidate list [ 330 ], and normal mirroring is performed for that file.
- If the file does exist, checksum values are calculated [ 333 ] using the specified block size [ 330 ]. These checksum values [ 333 ] are returned to the source server [ 335 ] in the form of a network message [ 334 ].
- The source server [ 335 ] is then responsible for comparing the checksum values [ 255 ] to those calculated on the source server [ 254 ], and sending only those blocks [ 256 ] which are different.
- A final message is sent to the source server [ 335 ] denoting that checksum calculation is complete [ 337 ].
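The fast-mirroring exchange lends itself to a small sketch: the target computes one checksum per block, and the source compares them against its own and sends only the blocks that differ. CRC32 is used purely for illustration; the patent does not specify a checksum algorithm, and the function names are assumptions.

```python
import zlib

def block_checksums(path, block_size):
    """Target side (FIG. 33): checksum each block of the existing file."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            sums.append(zlib.crc32(block))
    return sums

def changed_blocks(path, block_size, target_sums):
    """Source side: yield (offset, data) only for blocks that differ from the
    target's checksums, or that the target does not have at all."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            remote = target_sums[index] if index < len(target_sums) else None
            if remote is None or zlib.crc32(block) != remote:
                yield index * block_size, block
            index += 1
```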
- Another function of the Target Replication Manager [ 344 ] is to restore replicated data [ 345 ] to a specified source server [ 340 ], as shown in FIG. 34.
- The restore request is sent from the workstation [ 346 ] to the Target Communications Manager [ 342 ], and is then forwarded to the Target Replication Manager [ 344 ].
- The Target Replication Manager [ 344 ] uses the mirroring technique described in the source server [ 340 ] component section to effect such a restore, with the source [ 340 ] and target [ 341 ] servers reversed. Both user account information and replicated data [ 345 ] are mirrored from the target server [ 341 ] to the source server [ 340 ].
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
In a computer network system, a user-defined file modification request is communicated to a primary server, which communicates the request to a secondary server. The file modification request is saved in a non-volatile storage media associated with the primary server, and the file modification request is executed and saved in a non-volatile storage media associated with the secondary server.
Description
- This application is a continuation of U.S. Ser. No. 09/165,724 filed Oct. 2, 1998, which is a continuation of U.S. Ser. No. 08/543,266 filed Oct. 16, 1995.
- The present invention relates generally to the field of data replication techniques for computer operating systems, and in particular, to an apparatus and method providing real-time back-up of data changes occurring in open or newly edited files.
- A network is a collection of computers connected to each other by various means, in order to share programs, data, and peripherals among computer users. Data on such systems should be periodically copied to a secondary “backup” media, for numerous reasons, including computer failure or power shortage that may damage or destroy some or all of the data stored on the system.
- The standard approach to backing up data is to perform “full backups” of files on the system on a periodic basis. This means copying the data stored on a given computer to a backup storage device. A backup storage device usually, but not always, supports removable high-capacity media (such as Digital Audio Tape or Streaming Tape). Between full backups, incremental backups are performed by copying only the files that have changed since the last backup (full or incremental) to a backup storage device. This reduces the amount of backup storage space required, as files that have not changed will not be copied on each incremental backup. Incremental backups also provide an up-to-date backup of the files, when used in conjunction with the full backup. There are several commercial software products available to facilitate such backup operations, such as Cheyenne's ARCServe, Palindrome's Backup Director, Symantec's Norton Enterprise Backup, Legato's NetWorker for NetWare, and Arcada's Backup Exec for NetWare.
- The problem with this technique is that the data stored to the backup media is only valid at the exact time the backup is performed. Any changes made after one incremental backup, but before the next, would be lost if there was a failure on the file storage media associated with the computer. Moreover, since the backup process on a large system can take several hours or days to complete, files backed up to the beginning of a tape may have been modified by the time the backup completes.
- Another disadvantage of this approach is that with most systems, all files to be copied to backup storage media must be closed before a backup can be performed, which means that all network users must log off the system during the backup process. If files remain open during the backup process, the integrity of the backup data is jeopardized. On a network with hundreds or thousands of users, this can be a time-consuming process. In organizations that require full-time operation of a computer network, this approach is not feasible.
- To address the problem of backing up open files, techniques have been developed to ensure that no changes are made to a file while it is being backed up. One product that utilizes such an approach is the St. Bernard Open File Manager, licensed by Emerald Systems Corporation. While a file is being copied to backup storage media, the original contents of the data to be overwritten are stored in a “pre-image cache”, which is a disk file allocated specifically for this product. Reads from a backup program are redirected to the pre-image cache if the requested data has been overwritten. Otherwise, the backup read is directed to the original file on disk. Related files on a disk can be “grouped”, so that changes to all files in the group are cached using the technique described above, whenever any one file in the group is being backed up. One problem with this approach is that the resulting backup is still only valid until a change is made to any one of the files on the system.
- More recently, several approaches have been developed to backup the data on a computer system in real-time, meaning the data is backed up whenever it is changed. In such known methods, a full backup of the primary storage media is made to a backup media, then incremental backups of changed data are made whenever a change is made to the primary storage media. Since changes are written immediately to the backup media, the backup media always has an updated copy of the data on the primary media. A second hard disk (or other non-volatile storage media) that is comparable in size and configuration is required for this method.
- One such approach is to perform “disk mirroring”, such as is available on Server Fault Tolerance (SFT) II from Novell. In this approach, a full backup of a disk is made to a second disk attached to the same central processing unit. Whenever changes are made to the first disk, they are mirrored on the second disk. This approach provides a “hot-backup” of the first disk, meaning that if a failure occurs on the first disk, processing can be switched to the second with little or no interruption of service. A disadvantage of this approach is that a separate hard disk is required for each disk to be backed up, doubling the disk requirements for a system. The secondary disk must be at least as large as the primary disk, and the disks must be configured with identical volume mapping. Any extra space on the secondary disk is unavailable. Also, in many cases errors that render the primary disk inoperable affect the mirrored disk as well.
- SFT III from Novell introduced the capability to mirror transactions across a network. All disk I/O and memory operations are forwarded from a file server to a target server, where they are performed in parallel on each server. This includes reads as well as writes. If a failure occurs on the source server, operation can be shifted to the target server. Both the source and target servers must be running Novell software in this backup configuration, and a proprietary high-speed link is recommended to connect the two servers. As NetWare is a multi-tasking environment, the target server can be used for other limited functions while mirroring is being performed. A disadvantage of this approach is that since all operations are mirrored to both servers, errors on the primary server are often mirrored to the secondary server. As with SFTII, local storage on both the source and target servers must be similarly configured.
- Standby Server by VINCA uses the network mirroring capability of NetWare, and provides a mechanism to quickly switch from the source server to the target server in the event of a failure. VINCA's Standby Server 32 with Autoswitch adds automatic switching between servers on failure, and allows the operator to take advantage of NetWare's 32-bit environment. Communication between the source and target servers is accomplished via a dedicated, proprietary interface. While the source and target servers do not have to be identical, identical partitions are required on the local file system of each server.
- Most disaster recovery procedures require that a periodic backup of the system be stored “off-site”, at a location other than where the network is being operated. This protects the backup data in the event of a fire or other natural disaster at the primary operating location, in which all data and computing facilities are destroyed. Baseline and incremental techniques can be used to perform such a backup to removable media, as described above. A disadvantage of the “mirroring” approaches to real-time backup is that the target server or disk cannot be backed up reliably while mirroring is being performed. If a file is open on the target server or disk, as a result of a mirroring operation, it can not be backed up to a separate backup storage device. The result of this limitation is that all users have to be logged off of the system before such a backup can take place.
- These foregoing approaches introduce some degree of fault-tolerance to the computer system, since a failure on the primary storage media or computer can be tolerated by switching to the secondary storage media or computer. A disadvantage common to all of these techniques is that there is a one-to-one relationship between the primary and secondary storage media, thereby doubling the hardware resources required to implement mirroring. Even if only a small number of data files on a server are considered critical enough to require real-time replication, a separate, equivalent copy of the server or hard disk is still necessary. If critical files exist on several computers throughout the network, mirroring mechanisms must be maintained at each computer. None of these approaches provides a method for mirroring between multiple computers.
- In many network configurations, there are many different types of computers connected as workstations and file servers. In many cases, different operating systems are used on different nodes on the same network. Some examples are: Novell Netware (versions 3.x, 4.x); Windows NT; Unix (System V, BSD); and OS/2. When centralized backup of the various servers is required, files from each of the servers must be copied over the network to a centralized backup server, where they can be stored to a backup storage device. None of the existing real-time backup systems provide the capability to back up data between servers that are running different operating system software.
- The purpose of this invention is to provide means for real-time, transaction-based replication of one or more source computers on a network to one or more target computers, which may or may not be running the same operating system software as the original source computer. This provides centralized backup facilities across an entire network, coordination of distributed processing, and migration of data to a new platform with minimal down-time. Only changed information is transmitted to the target server, minimizing the amount of network traffic associated with such a backup. A method of controlling flow between the source and target servers is provided to avoid loss of data and bottlenecks in the path between the servers. Means are provided to allow files currently open and in use by an application to be backed up in real-time. Finally, means are provided to replicate user configuration information (such as user accounts, file ownership, and trustee rights) to the target computer, so that users may login immediately and access data in the event of a failure on the source computer.
- A feature of the invention is the manner in which information on a computer system is replicated to a secondary storage media in real-time. Specifically, when a change is made to a file or configuration item on the primary (source) computer, those changes are immediately copied to a secondary (target) computer. This provides a real-time backup of all data on the source computer, so no data is lost in the event of a source computer failure. Only data that has been changed on the source computer is transmitted to the target computer for replication, versus transmitting the entire contents of the file. This reduces the amount of network traffic required to attain real-time replication.
- A further feature of this invention is the manner in which information on the source computer is replicated to the target computer regardless of which software application modifies the information. This includes applications running on the source computer, as well as applications running on other computers that have access to the data on the source computer via networking means.
- A further feature of this invention is the means in which several source computers can be replicated to the same target computer. The file system associated with each source computer can be replicated to a separate subdirectory on the target computer storage media, as specified by the operator when configuring the invention. Many servers can be replicated to a single target server. User configuration information from each source computer is replicated to the target computer, so that this information can be restored to the proper source computer in the event of a failure.
- A further feature of this invention is the means in which a single source computer can be replicated to several different target computers. Each replication packet is sent from the source computer to each of the target computers, as designated by user configuration of the invention. The result of this operation is that each target computer has a copy of the source files, updated in real-time. This feature allows for data processing to be distributed to different computers, by handling the coordination of changes between all targets.
- Another feature of this invention is that data can be replicated to a local file system on the source computer. This configuration is referred to as single-server mode, as only a single computer is required to perform replication. Replicated data is stored in a separate directory on the source computer, as specified by the operator. This mode is useful when resources are not available for a separate source and target computer, or when a network connection to a target server can not be made.
- Another feature of this invention is the means in which data can be replicated to target computer(s) running a different operating system than the source computer(s). The format of replication messages passed between the source and target computers is common for all operating systems. Independent means are provided to build such messages from operating-system-specific commands on the source computer, and to interpret these messages into operating-system-specific commands on the target computer. This feature allows data to be shared between applications running on different platforms.
- A further feature of this invention is the manner in which the operator may select a commit mode for replication actions. Commit mode refers to the conditions that must be met before a replication is considered to be successful, thereby allowing the original file operation to proceed. By default, the target computer must return a successful status message to the source computer before the transaction is committed. In real mode, the transaction is committed as soon as the replication packet is transmitted from the source computer. In local mode, the transaction is committed as soon as the replication command is written to a local disk file. In remote mode, the file operation must be successful on both the source and target computers, before the transaction is committed. If the operation fails on either computer, both operations are reversed to return each computer to its original state.
- A further feature of this invention is the method in which flow of replication data between the source and target computers is controlled. Means are provided to control replication data flow by limiting the number of replication network packets that can be in transmission at any one time. Once this limit is reached, additional packets are placed on a packet queue until the number of outstanding packets falls below the prescribed level. Also, if there are not enough network resources to accommodate all of the outstanding packets in the queue, the commands are placed in a second internal queue in a compressed format. This format includes the file name, offset, and length of data to be changed, but not the actual data to be modified in the file. When network resources are again available to service these requests, the required data associated with each command is extracted from the file on the source server, and a network packet is built and placed on the packet queue.
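As a rough illustration of this flow-control scheme, the sketch below caps the number of in-flight packets and, past that limit, queues only the command metadata (file, offset, length), re-reading the data from the source file when a slot frees up. The limit, the `transport` object, and the tuple packet format are assumptions made for illustration.

```python
from collections import deque

MAX_IN_FLIGHT = 16   # illustrative limit on outstanding replication packets

class FlowController:
    def __init__(self, transport):
        self.transport = transport   # assumed object with a .send(packet) method
        self.in_flight = 0
        self.command_queue = deque() # "compressed" entries: (path, offset, length), no data

    def submit(self, path, offset, data):
        if self.in_flight < MAX_IN_FLIGHT:
            self.transport.send((path, offset, data))
            self.in_flight += 1
        else:
            # Not enough network resources: keep only the command description.
            self.command_queue.append((path, offset, len(data)))

    def on_acknowledge(self):
        # An outstanding packet completed; use the freed slot for queued work.
        self.in_flight -= 1
        if self.command_queue:
            path, offset, length = self.command_queue.popleft()
            with open(path, "rb") as f:   # re-extract the data from the source file
                f.seek(offset)
                data = f.read(length)
            self.transport.send((path, offset, data))
            self.in_flight += 1
```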
- Another feature of this invention is the manner in which multiple operations to the same file are handled on the internal queue described above. The condition described above, in which commands are placed on an internal queue because of a lack of network resources, is referred to as stacked-up mode. In stacked-up mode, several commands may be received in the queue that are associated with the same file. If the commands reference similar areas in the file, the commands will be merged into a single command denoting the union of the two areas. If the commands reference areas within the file that are sufficiently separated, the commands will not be merged in the queue. This technique reduces the number of replication packets required when network resources are again available, and reduces the size of the internal queue.
- Another feature of this invention is the means for the user to configure flow control rules, in order to maximize network efficiency based on the current hardware configuration. The operator can define whether packets are to be held in queue on the source computer until certain conditions exist, or to send out all packets immediately upon receiving them. This feature can be used to optimize network performance and cost when using communication protocols such as ISDN or X.25, or when replication is done across a Wide Area Network (WAN).
- Another feature of this invention is that the operator may select individual files, subdirectories, directories, volumes, or file systems on the source computer to replicate. Means are provided for the user to select files to be replicated by file name, location, or type. A database of files to be replicated is maintained on the source computer. This feature allows the user to mirror only those files on a computer that are considered to be critical enough to require real-time replication.
- Another feature of this invention is the manner in which source computer data is initially mirrored to the target computer. Once a source/target computer configuration is established, the user may initiate the mirroring process which copies all of the files on the source computer to the target computer. The location of replicated files on the target computer is specified by the operator during configuration. The mirroring process utilizes the flow control and compression techniques described above for normal replication operations. If replication is disabled at any time during operation, the operator may choose to remirror all data to the target server. This insures that all files on the source and target computers are in sync after a disruption in service.
- Another feature of this invention is the manner in which data is restored from the target server to the source server in the event of a source server failure. Means are provided to copy all of the replicated data back to the source server using the mirroring technique described above. All user configuration information (including user accounts, file ownership, trustee rights) is also rebuilt on the source server, using the replicated target server information. Since all replicated data is stored on the target server in standard file format, it can be copied back to the source server at any time via user requests.
- Another feature of this invention is the ability for a user to login to the target server and access all replicated data in the event of a source server failure. Since all user configuration information (such as user accounts, file ownership, trustee rights) are replicated on the target computer, the user can login at any time with the same access rights as on the source computer. The target computer serves as a hot backup to the source computer, which reduces the amount of user downtime in the event of a computer failure.
- Another feature of this invention is the manner in which data can be copied to a backup storage media on the target computer, while users have the file open on the source computer. For each data replication packet received by the target computer, the associated file is opened, data is written, and the file is then closed, even if the file remains open on the source computer. The result of this sequence is that files are closed and available for backup using third-party backup utilities.
- Another feature of this invention is the ability to store replicated data to a backup storage device (such as a streaming tape) from the target computer, providing a common backup server for one or more source computers. This feature also reduces the processor loading on the source computers, as the backup function is not performed locally.
- Another feature of this invention is the means in which replication commands can be held in memory or on disk while data on the target computer is accessed. An application may make a call via an Application Program Interface (API) to cause all replication commands to be placed in the source server internal queue, instead of being sent to the target computer for replication. Another call can be made to resume replication, causing all commands in the queue to be sent to the target computer in the order they were received. The queuing techniques described above are used to maintain this queue on the source server. This technique can be used by applications such as backup agents, which require a constant file image during processing.
- Another feature of this invention is the ability to replicate over a Wide Area Network (WAN), without any specialized or proprietary hardware. Existing WAN communication mechanisms can be used to transmit replication packets to target computers. This feature allows remote sites to maintain real-time updates on data files, and also provides a mechanism for effecting off-site backup storage of critical data.
- Another feature of this invention is the means by which to maintain copies of deleted files on the target computer, and to restore these files to the source server if requested by the user. Based on user configuration, copies of deleted files may be stored under unique names on the target server. Means are provided to display all such files to the user, and to allow the user to restore one or more of these files to a specific location on the source computer. This feature can be configured to maintain deleted files on the target computer until they are explicitly purged by the user, or after a certain period of inactivity.
- Another feature of this invention is the mechanism by which large files are mirrored to an existing directory on the target computer. If the specified file exists on both the source and target computers when mirroring is initiated, only those blocks that have changed shall be copied to the target computer. This feature is only used when the specified file is large enough such that the transmission cost of sending the entire file is greater than the cost of determining which blocks have changed between the files on each computer. This reduces the amount of network traffic required to bring source and target computers into sync, in the event replication is disabled for any period of time.
- Another feature of this invention is the means by which files that are inactive for a specified period of time can be archived to the target computer and deleted from the source computer, in order to conserve storage media. For each file or group of files to be archived, the user may configure the amount of inactivity required before the file is deleted from the source computer. Means are also provided to list all such files on the target server, and to allow the user to restore such files to the source computer if necessary.
- Another feature of this invention is the means by which replication transactions may be stored to a local storage media on the source computer, in the event that the source computer can not connect to the target computer. All transactions are stored locally using the internal queuing techniques described above. Once a connection is reestablished with the target computer, all stored transactions can be transmitted and executed in the order they were received.
- Another feature of this invention is the means by which replication data may be compressed prior to transmission, in order to reduce the amount of network traffic. This feature can be configured by the user to compress data being sent from the source computer, using a variety of standard compression algorithms. Compressed data is decompressed by the target computer, before the data is written to storage media.
- Another feature of this invention is the means by which replication data may be encrypted prior to transmission, in order to prevent replicated data from being intercepted and compromised. This feature can be configured by the user to encrypt data being sent from the source computer, using a variety of standard encryption algorithms. Encrypted data is authenticated by the target computer, before the data is written to storage media.
- Another feature of this invention is the manner in which all replication operations are done at the file system level, via operating system calls. Direct access to storage media on either the source or target computers is not required, thereby reducing the risk of introducing errors during low-level media access.
- FIG. 1 is a block diagram of a typical computer network configuration.
- FIG. 2 is a block diagram of the major components of a typical file server.
- FIG. 3 is a block diagram of a computer network system configured for server replication in accordance with the invention.
- FIG. 4 is a block diagram of a computer network in Many to One replication configuration.
- FIG. 5 is a block diagram of a computer network in One to Many replication configuration.
- FIG. 6 is a block diagram of a computer network in Single-Server replication configuration.
- FIG. 7 illustrates the software components of the invention.
- FIG. 8 illustrates the polling sequence for identifying source and target servers.
- FIG. 9 illustrates the sequence of operations for a server mirroring request.
- FIG. 10 illustrates the sequence of operations for a server restore request.
- FIG. 11 illustrates replication set selection.
- FIG. 12 illustrates the sequence of operations for requesting source and target server status.
- FIG. 13 is a block diagram of the source server software component.
- FIG. 14 is a flowchart representing the typical file modification process, without replication.
- FIG. 15 is a flowchart representing the operation of the File System Interface (FSI).
- FIG. 16 is a flowchart representing the operation of the Source Replication Manager (SRM).
- FIG. 17 illustrates local-mode operation, with logging to a local transaction file.
- FIG. 18 is a flowchart representing local-mode operation.
- FIG. 19 illustrates the process of committing local-mode transactions.
- FIG. 20 illustrates remote (two-phase) operation.
- FIG. 21 is a flowchart representing remote (two-phase) operation.
- FIG. 22 is a flowchart representing stacked-up mode, with logging to an internal queue.
- FIG. 23 (a) and (b) illustrate an example of stacked-up mode queues.
- FIG. 24 illustrates the process of servicing entries from the stacked-up mode internal queue.
- FIG. 25 illustrates the process of mirroring data from source to target computers.
- FIG. 26 illustrates the fast-mirroring process.
- FIG. 27 is a flowchart of the fast-mirroring process.
- FIG. 28 illustrates operation of the Source Communication Manager (SCM) software component.
- FIG. 29 illustrates the process of data compression/decompression and encryption/decryption on source and target computers.
- FIG. 30 is a block diagram of the target server software component.
- FIG. 31 illustrates operation of the Target Replication Manager (TRM).
- FIG. 32 is a flowchart representing the operation of the Target Replication Manager (TRM).
- FIG. 33 is a flowchart representing the process of calculating checksum values on the target server.
- FIG. 34 illustrates the process of restoring data from target to source computer.
- FIG. 1 represents a typical computer network configuration, consisting of file server [11] with local nonvolatile storage [12], one or more user workstations [10], and local area network (LAN) [13]. The file server [11] and workstations [10] are not necessarily all the same type of computer, and may be running unique operating system software on each. A backup device [14], such as a tape drive, is also connected directly to file server [11]. The major components of a file server [11] are shown in FIG. 2, and include central processing unit (“CPU”) [22], random access memory (RAM) [23], non-volatile data storage (such as a hard disk drive) [24], and a network interface card (NIC) [21].
- Typical operation of this sample computer network system is shown by the numbered arrows in FIG. 1. Workstations [10] send file modification requests (1) to the file server [11], which processes the request and stores any required changes to non-volatile storage media [12] connected thereto through operating system calls (2). At any given time, the contents of hard disk [12] can be stored to back-up storage media [14] for backup purposes (4). If an error occurs on file server [11] which destroys some or all of the data on non-volatile storage media [12], the contents of the backup tape can be restored from backup storage media [14] to non-volatile storage media [12].
- FIG. 3 shows a typical computer network system configured for server replication, in accordance with the preferred embodiment of this invention. This configuration consists of source (or primary) server [31], a target (or secondary) server [33], one or more client workstations [30], and a local area network (“LAN”) [36] to connect servers and workstations. A backup device [35], such as a tape drive, is also connected directly to the target server [33]. All communication between workstations [30], source server [31], and target server [33] is done via LAN [36]. One skilled in the art will appreciate that the LAN utilizes standard networking mechanisms (e.g. ethernet, token-ring), and this configuration may be partitioned into separate network segments to improve performance.
- The sequence of operations of the preferred embodiment is shown by the numbered arrows in FIG. 3. In step 1 (1), the contents of hard disk [32] in the source server [31] are mirrored to hard disk [34] on target server [33], via network packets. Workstations [30] then send file modification requests (2) to source server [31]. The source server [31] forwards these requests to target server [33] for replication (3). The target server [33] executes the file modification request (4) on its local hard disk [34], then returns a status message (5) to source server [31]. The source server [31] then executes the file modification request on its local hard disk [32] (7), and then returns a status message to the workstation [30]. It is an option for the contents of hard disk [34] to be forwarded and saved on tape [35]. The result of these operations is that hard disks [32,34] on source server [31] and target server [33] have current copies of the same files at all times. Other embodiments of this invention do not require target server [33] to execute the file modification request on hard disk before source server [31] executes the file modification request on its disk drive [32].
- The example configuration shown in FIG. 3 is referred to as One to One mode (i.e., a single source server [31] is replicated to a single target server [33]). Other configurations include Many to One, One to Many, and Single Server. In Many to One mode, several source servers [42] are replicated to single target server [44], as shown in FIG. 4. In One to Many mode, single source server [52] is replicated to several target servers [54], as shown in FIG. 5. In Single Server mode, source server [61] data is replicated to local file system [63], as shown in FIG. 6. Once the data is mirrored, workstations [60] send file modification requests to the source/target server. When the modification request is executed on local file system [63], the source/target server then executes the file modification request on local file system [62]. One skilled in the art will appreciate that local file systems [62] and [63] can be one or two non-volatile data storage devices. In the case of one storage device, the primary data and replicated data will be in different volumes of the same data storage device. Further, it is always an option to attach a backup storage device to the target server.
- The components of this invention include three independent applications: a workstation component [76], source server component [72], and target server component [74]. FIG. 7 shows the relationship between these components and the hardware described in this example. These components are described in detail in the following sections:
- Workstation Component [76]
- The primary function of the workstation component [76] is to allow the users [77] to configure the replication process, and communicate this configuration to source server [71] and target servers [73]. This component can be executed on any workstation [75] on the network [78]. While the workstation component [76] is required to configure and initiate replication between two or more servers, it is not required to execute during normal operation.
- As depicted in FIG. 8, from workstation [85], user [88] may configure the following:
- Target Server(s)—User [88] selects one or more target servers [83] where replicated information will be stored. At startup, the workstation [85] broadcasts a message (1) to each network node [80] to determine if the node is configured as a target server [83]. If node [83] is configured as a target server [81], a response (2) is sent to the requesting workstation [85] denoting that the specified node [83] is available. A list of all available target servers [87] is maintained (3) on the workstation [85], and is displayed to the user [88] for target server [83] selection. When target server [83] is selected by user [88], it is referred to as current target server.
- Source Server(s)—User [88] selects one or more source servers [84] to be replicated. At startup, the workstation [85] broadcasts a message (1) to each network node to determine if the node is configured as a source server [84]. If node [83] is configured as a source server [81], a response (2) is sent to the requesting workstation [85] denoting that specified node [83] is available. A list of all available source servers [87] is maintained on the workstation [85], and is displayed to the user [88] for source server [84] selection. When a source server [84] is selected, user [88] must specify the location on the current target server [82] where the source server [84] data is to be replicated, in the form of a directory or subdirectory path name. The source server [84] is then connected to current target server [82] via the network interface [89], and replication begins to the specified directory location on current target server [82].
- Source Server Disconnect—User [88] selects specific source server [81] to disconnect from current target server [83]. A list of source servers [87] connected to the current target server [83] is available for user selection. If a source server [81] is selected to be disconnected, a network message (1) is sent to the current target server [83] to perform the disconnect. Once disconnected, no further replication is done between specified source server [81] and current target server [83].
- Replication Mode—User [88] selects replication mode for each source server [81]. Replication mode refers to the method in which data is replicated to the current target server [83], and the level of error checking required before a transaction can be committed. Valid replication mode settings are real mode, local mode, and remote mode. Each mode is described in further detail under the source server component section.
- Replication Set—User [112] selects the volumes, directories, and files [117] to be replicated from a specific source server [110], referred to as the replication set [113]. Replication set [113] selection is shown in FIG. 11. The user [112] may select the replication set [113] from the available volumes, directories, and files [116] on the source server file system [116]. When the user [112] is finished selecting the replication set [113] from a workstation [111], a copy of the replication set [113] is stored on the source server file system [116] in file format [115]. The workstation [111] then transmits a network message [118] to the source server [110] denoting that the replication set file [115] is ready to be loaded. The source server [110] loads the replication set from file [115] to local memory [114].
- As depicted in FIGS. 10 and 11, whenever a source server [101] is connected to a target server [103] according to this invention, the replication set file [115] is copied from the source server [110] to the workstation [111], and is used as the default replication set [113].
- As further depicted in FIG. 9, a further function of the workstation component [96] is to initiate mirroring and monitor replication status. From workstation interface [97], the user may initiate the following actions:
- Initiate Mirroring—When mirroring is requested for specific source server [91], all of the volumes, directories, and files specified in replication set [98] are copied from the source server [91] to target server [93]. Initially, a mirroring request (1) is transmitted from workstation [95] to the target server [93], and is then forwarded to source server [91]. Once the mirroring request is forwarded to source server [91], a response message (3) is sent to workstation [95] to denote that mirroring is under way. The source server [91] then sends the necessary file information (4), as well as user account information (such as user name, file ownership, and file access permissions) to specified target server [93].
- Initiate Restore—When a restore is requested for specific source server [101], all replicated information (including files and user information) is copied (2) from the target server [103] to source server [101]. As depicted in FIG. 10, a restore request (1) is transmitted to the target server [103] from the workstation [105]. If the files to be copied already exist on the source server [101], they will be overwritten during the restore process.
- Display Replication Traffic—As depicted in FIG. 12, the status of replication traffic between all connected source servers [121] and current target server [123] is displayed whenever the workstation component [126] is executing. The workstation component [126] requests (1) status regarding each source server [121] from the target server [123] on a periodic basis, as shown in FIG. 12. If any packets have been transmitted between specific source server [121] and current target server [123] since the last status request, a graphical indication of that traffic is displayed (2).
- Display Target Server Statistics—Statistics on target server [123] operations can be displayed at the operator's request. Statistics include, but are not limited to, number of packets received, number of errors encountered, number of replication commands received per command type, number of bytes received, and number of bytes transmitted. These statistics are sent from the current target server [123] on a periodic basis, as described in the preceding paragraph.
- Display Source Server Statistics—Statistics on source server [121] operations may be displayed at the operator's request. Source server statistics are requested from the specified source server [121] via a network message (1) as they are needed, as shown in FIG. 12. Statistics include, but are not limited to, number of packets transmitted, number of errors encountered, number of replication commands transmitted per command type, number of packets in stacked-up mode, and number of bytes transmitted. These statistics are sent from current source server [121] on a periodic basis.
- Source Server Component
- The primary function of source server software component [131] is to intercept any file system commands [137] from the local operating system [136], and forward such commands [137] to the current target server [138] if necessary. A block diagram of the source server software components is shown in FIG. 13. The File System Interface (FSI) [132] monitors file system operations from the operating system [136] to determine when changes are being made. The Source Replication Manager (SRM) [133] determines whether file system changes should be replicated, builds the network packets [139] required to effect such replication, and controls the flow of these network packets [139] using queuing techniques. The Source Communications Manager (“SCM”) [134] sends and receives replication packets [139] to/from the target server [138].
- The source server software component [131] is loaded on each source server [130] at startup, and remains resident in memory until it is explicitly unloaded by the user, or the source computer [130] is powered off. A source server [130] must be connected to at least one target server [138] for replication to be performed. If a source server [130] is connected to multiple target servers [138], replication commands are transmitted to each target server [138] in the order they were connected.
- The function of File System Interface (“FSI”) [132] is to monitor operating system [136] commands to any file systems associated with the source server [130]. The flowchart depicted in FIG. 14 shows the typical path of such a command, without replication. A file system command [140] is received from a workstation via network messages, and error checking is performed to make sure the command and its associated parameters are valid [141]. If the command or parameters are not valid, a failed status message is returned to the requesting workstation [142]. If the command and parameters are valid, the file system operation is performed [143], and an appropriate status message is returned to the requesting workstation [144].
- The flowchart in FIG. 15 shows the modified path of a file system command [150] according to this invention, with the addition of FSI [132]. A file system command [150] is received from a workstation via network messages, and error checking is performed to make sure the command and its associated parameters are valid [151]. If the command or parameters are not valid, a failed status message is returned to requesting workstation [152]. If the command and parameters are valid, FSI [132] checks the command type to see if it is a file modification request [153]. All file modification requests are forwarded to Source Replication Manager [133] for replication [154]. If the command is successfully replicated by Source Replication Manager [133], original file system operation is performed [155] on source server [130]. If replication is not successful, a failed status message [156] is returned to requesting workstation [157].
- Only operations that cause modifications to source server file system [155] are monitored via this process. Such operations include, but are not limited to: writing to a file, creating a file, deleting a file, renaming a file, creating a directory, deleting a directory, renaming a directory, changing file or directory attributes, and changing file ownership or permissions. Operations that do not modify source server file system [155], such as reading a file, are not monitored by this process.
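The interception rule can be summarized in a few lines: only commands that modify the file system are passed to the replication path. The command names and the `replicate`/`execute_locally` callables below are illustrative assumptions, not the patent's actual interfaces.

```python
# Commands that modify the source file system and are therefore monitored;
# read-only operations pass straight through to the operating system.
MODIFYING_COMMANDS = {
    "write_file", "create_file", "delete_file", "rename_file",
    "create_dir", "delete_dir", "rename_dir",
    "set_attributes", "set_ownership", "set_permissions",
}

def file_system_interface(command, params, replicate, execute_locally):
    """Sketch of the FSI decision in FIG. 15, after parameter validation."""
    if command not in MODIFYING_COMMANDS:
        return execute_locally(command, params)   # e.g. reads are not monitored
    if not replicate(command, params):
        return "replication_failed"               # the original operation is not performed
    return execute_locally(command, params)
```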
- The primary function of Source Replication Manager [133] is to replicate specific file system commands as received from the FSI [132]. FIG. 16 shows a high-level flowchart of this process. When a file system command [161] is received from the FSI [160], Source Replication Manager [132] first determines if the file referenced by the command is included in the current replication set [162]. If it is not, control is immediately returned to the FSI [160] with a status message indicating the original file system operation may be executed. If the file associated with the current file system operation is included in the replication set, the operation will be replicated. As depicted in FIGS. 13 and 16, Source Replication Manager [133] first checks to see if network resources are available to send a replication packet [163] to target server [138]. The Source Replication Manager [133] is limited to a specific portion of all available network resources on source server [130], to avoid locking out other network operations. If resources are not available, the file system operation is placed in stacked-up mode [164], which is described in later sections.
- If network resources are available, Source Replication Manager [133] forms a replication packet for the modifications required. The replication packet includes the type of file system operation requested (e.g. write file, create directory, change file attribute), all of the associated parameters required to replicate the operation, and the file data associated with this request. The format of each replication packet is such that parameters required to replicate the transaction on any operating system are supported. Only those parameters required for the target operating system are populated on any given message. This packet is forwarded to Source Communications Manager. The Source Communications Manager returns a status message to Source Replication Manager [133] when the replication packet has been received and executed on specified target servers [167].
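A sketch of the Source Replication Manager's decision path might look like this; the packet layout and the `network_available`, `send_packet`, and `stack_up` callables are assumptions standing in for the mechanisms described in the text.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReplicationPacket:
    # Operating-system-neutral layout: only the parameters the target
    # operating system needs are populated for any given message.
    operation: str                  # e.g. "write_file", "create_dir", "set_attributes"
    path: str
    params: dict = field(default_factory=dict)
    data: Optional[bytes] = None

def handle_fsi_command(operation, path, params, data, replication_set,
                       network_available, send_packet, stack_up):
    """Sketch of the flow in FIG. 16."""
    if not any(path.startswith(prefix) for prefix in replication_set):
        return "not_replicated"     # file is outside the current replication set
    if not network_available():
        # Store only the command description; the data is re-read later.
        stack_up(operation, path, params, len(data) if data else 0)
        return "stacked_up"
    send_packet(ReplicationPacket(operation, path, params, data))
    return "replicated"
```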
- The source server [130] may operate under one of the following replication modes: real, local, or remote mode. The mode selected determines when a replication transaction is considered complete (or committed), allowing control to be returned to the FSI [132]. In real mode (shown in FIG. 16), the transaction is considered complete when one of the following conditions is met: (a) when replication packet [166] is successfully forwarded to Source Communications Manager [134]; or (b) when the command is successfully placed in stacked-up mode queue [164] (stacked-up mode only). The Source Replication Manager [133] does not wait for confirmation from Source Communications Manager [167] that the packet has actually been received or executed by target server [138] before completing the transaction. Therefore, as depicted in FIG. 15, original file system operation [155] will always be executed on source server [130], even if replication failed. In local mode (shown in FIG. 17), each replication transaction [176] is also recorded in a transaction log file [174] on the local hard disk [173] of the source server [172] before the original file operation is allowed to proceed. If the status message returned (see FIG. 17) from the target server [175] indicates that the transaction [176] was replicated successfully, the transaction [176] is removed from the transaction log file [174]. If the status message indicates that the transaction [176] was not completed successfully on the target server [175], the source server [172] will attempt to resend the transaction [176] to the target server [175]. If the transaction [176] is still not completed successfully after a specific number of retries, this transaction [176] will be flagged as an error in the transaction log [174]. A flowchart of this process is shown in FIG. 18.
- As further depicted in FIG. 17, transaction information [176] to be stored in transaction log [174] includes the command type and parameters, as passed from the FSI [177]. The file data associated with each transaction is not stored to hard disk [173] in this mode, in order to minimize disk space requirements on source server [172]. When the transaction log [176] is later transmitted to target server [175] for execution, the data associated with each transaction [174] can be extracted from local source file [173]. In this case, source server [172] does not complete transaction [176] until it is successfully written to specified log file [174]. If the transaction [176] can not be written, an error message (5) is returned to calling workstation [171], and the original file system operation is aborted. While the operator delay may be increased because of the time required to write each transaction [176] to transaction log file [174], the user [171] is guaranteed that transactions [176] are recorded. A flowchart of the local mode is depicted in FIG. 18.
- FIG. 19 shows the manner in which local-mode transactions [196] in the log file [192] are serviced in the event of a retry. A transaction record [196] is extracted from log file [192] by the Source Replication Manager [191]. Using the parameters in the transaction record [196], associated file data [194] is extracted from the source server file system [194]. A replication packet [193] is formed from the operation type and parameters [196] from the transaction log [192], and the file data from the file system [194]. This replication packet [193] is then forwarded to the Source Communications Manager [195] for transmission.
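The local-mode transaction log can be sketched as follows: only command types and parameters are logged, file data is pulled from the local source file at send time, and entries are dropped on success or flagged after repeated failures. The JSON log format, retry limit, and method names are illustrative assumptions.

```python
import json

class TransactionLog:
    MAX_RETRIES = 3   # illustrative; the patent says "a specific number of retries"

    def __init__(self, log_path):
        self.log_path = log_path
        self.entries = {}            # transaction id -> logged command record

    def append(self, txn_id, operation, file_path, offset, length):
        # Only the command type and parameters are logged, not the file data.
        self.entries[txn_id] = {"op": operation, "file": file_path,
                                "offset": offset, "length": length,
                                "retries": 0, "error": False}
        self._flush()

    def build_packet(self, txn_id):
        # The data is extracted from the local source file when the entry is sent.
        rec = self.entries[txn_id]
        with open(rec["file"], "rb") as f:
            f.seek(rec["offset"])
            data = f.read(rec["length"])
        return rec["op"], rec["file"], rec["offset"], data

    def on_target_status(self, txn_id, ok):
        rec = self.entries[txn_id]
        if ok:
            del self.entries[txn_id]         # replicated successfully: drop the entry
        else:
            rec["retries"] += 1
            if rec["retries"] > self.MAX_RETRIES:
                rec["error"] = True          # flagged as an error in the log
        self._flush()

    def _flush(self):
        with open(self.log_path, "w") as f:
            json.dump(self.entries, f)
```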
- Operation of remote mode is shown in FIG. 20. When the source server [202] receives a file modification request (1) from a workstation [201], it forwards the request (2) to the target server [205]. The target server [205] replicates the transaction (3) to the target server file system [206], then returns a status message (4) to the source server [202] denoting whether the transaction (3) was successfully replicated. If the status message (4) returned by the target server [205] denotes that the transaction (2) was successfully replicated, the original file modification request (1) is committed (5) on the source server file system [203]. If the status message (4) returned by the target server [205] denotes that the transaction (2) was not replicated, the original file modification request (1) is aborted. In either case, a status message (6) is returned to the requesting workstation [201] by the operating system, denoting whether the original file modification request (1) was performed on the source server [202]. A flowchart of this process is shown in FIG. 21.
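The remote (two-phase) commit rule reduces to a few lines; the two callables below are assumptions standing in for the network round trip to the target and the local file system call.

```python
def remote_mode_commit(request, replicate_on_target, execute_locally):
    """Sketch of FIGS. 20-21: commit locally only after the target confirms."""
    if not replicate_on_target(request):
        return "aborted"          # target failed: the original request is not performed
    if not execute_locally(request):
        # Per the commit-mode description, the already-replicated operation
        # would be reversed here so both computers return to their original state.
        return "failed_locally"
    return "committed"
```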
- As noted previously and shown in FIG. 22, replication transactions [226] are placed in an internal queue [224] on source server [222] if network resources are not available to transmit a replication packet to target server [205]. This condition is known as stacked-up mode. It may be caused by the loss of the network connection [227] between source server [222] and target server [225], or by heavy network traffic. For each such transaction [226], source server [222] stores the command type and all associated parameters [226] in internal queue [224] in the order in which it was received. The file data associated with this operation is not stored in this queue [224], as it can be extracted when the queue [224] is serviced.
- Whenever a new transaction [226] is introduced to the stacked-up mode queue [224], Source Replication Manager [133] (see FIG. 13) attempts to merge transaction [226] with any other queue entry [226] that is associated with the same file. If two queue entries reference similar areas in the same file, they are candidates to be merged into a single entry. The new entry will reflect the combination of both operations. If two entries reference significantly different areas in the same file, they will not be merged. If the number of bytes separating the two entries is less than the maximum packet size (a system configuration item), these packets will be merged.
- As an example, consider the two file operations shown in FIG. 23 (a). Operation 1 [230] writes 40 bytes to the file DATA.DAT [231], starting at byte offset 20. Operation 2 [232] writes 60 bytes to the same file [231], starting at byte offset 40. Since these two entries [230,232] reflect operations in overlapping areas of the file [233], they can be combined into a single entry, denoted as Operation 1 on the merged queue [234]. This new operation [234] writes 80 bytes to the file DATA.DAT [231], starting at byte offset 20. Next, consider the two file operations shown in FIG. 23 (b), assuming a maximum packet size of 512 bytes. Operation 1 [235] writes 40 bytes to the file DATA.DAT [236], starting at byte offset 20. Operation 2 [237] writes 60 bytes to the same file [236], starting at byte offset 1,040. Since these two entries [235,237] reflect operations in distinct areas of the file [236], and the difference between the packet offsets is greater than the maximum packet size, they will not be merged.
- FIG. 24 shows the manner in which stacked-up mode queue entries [246] are serviced when network resources are available to transmit replication packets [243]. A transaction record [246] is extracted from the stacked-up mode queue [242] by the Source Replication Manager [241]. Using the parameters in the transaction record [242], the associated file data [247] is extracted from the source server file system [244]. A replication packet [243] is formed from the operation type and parameters [246] from the stacked-up mode queue [242], and the file data [247] from the file system [244]. This replication packet [243] is then forwarded to the Source Communications Manager [245] for transmission.
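- A minimal sketch of the merge rule described above, assuming queue entries are byte-range writes and that two entries merge when the number of bytes separating them is less than the maximum packet size; the helper name try_merge and the dictionary fields are illustrative, not part of the patent. The assertions replay the FIG. 23 values.

```python
# Sketch of the stacked-up-queue merge rule under the stated assumptions.
MAX_PACKET_SIZE = 512  # system configuration item (value assumed)

def try_merge(a: dict, b: dict) -> dict | None:
    """a/b look like {"path": "DATA.DAT", "offset": 20, "length": 40}.
    Returns the merged entry, or None if the entries must stay separate."""
    if a["path"] != b["path"]:
        return None
    first, second = sorted((a, b), key=lambda e: e["offset"])
    gap = second["offset"] - (first["offset"] + first["length"])
    if gap >= MAX_PACKET_SIZE:
        return None                                   # too far apart to merge
    end = max(first["offset"] + first["length"],
              second["offset"] + second["length"])
    return {"path": a["path"], "offset": first["offset"],
            "length": end - first["offset"]}

# FIG. 23(a): writes at offset 20 (40 bytes) and offset 40 (60 bytes) overlap,
# so they merge into one 80-byte write at offset 20.
assert try_merge({"path": "DATA.DAT", "offset": 20, "length": 40},
                 {"path": "DATA.DAT", "offset": 40, "length": 60}) == \
       {"path": "DATA.DAT", "offset": 20, "length": 80}
# FIG. 23(b): the writes at offsets 20 and 1,040 are separated by more than
# 512 bytes, so they are not merged.
assert try_merge({"path": "DATA.DAT", "offset": 20, "length": 40},
                 {"path": "DATA.DAT", "offset": 1040, "length": 60}) is None
```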
- Another function of Source Replication Manager [241] is to perform mirroring of source server data [251] to target server [252], as shown in FIG. 25. When mirroring is requested, Source Replication Manager [241] copies every volume, directory, and file listed in the replication set table [254] from source server [250] to target server [252]. The Source Replication Manager [251] extracts the data associated with each file [251], and builds a mirror packet [258] to be sent to target server [252]. If a file [255] is larger than the maximum packet size [257] on source server [250], it will be broken into smaller blocks [256] for network transmission. The queuing techniques described above for replication are used to control the flow of mirror packets (2) between source [250] and target [252] servers. The source server [250] may only send a limited number of mirror packets [258] at a time, in order to prevent locking out replication and other applications from network resources.
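- Breaking one replication-set file into mirror packets no larger than the maximum packet size can be sketched as follows; the generator interface and default packet size are assumptions. The flow-control limit on outstanding packets would cap how many of these chunks are in flight at once.

```python
# Sketch of splitting a file into mirror packets (FIG. 25).

def mirror_packets(path: str, max_packet_size: int = 512):
    """Yield (offset, data) mirror packets for one file in the replication set."""
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(max_packet_size)
            if not chunk:
                break
            yield offset, chunk
            offset += len(chunk)
```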
- As depicted in FIG. 25, the mirroring function is used to synchronize the contents of source [250] and target [252] servers. This is necessary when replication is first started, and again whenever replication is disabled while changes are being made to source server [250]. A fast-mirror mechanism is provided to expedite mirroring in cases where the file to be copied [261] already exists on target server [263]. This process is illustrated in FIG. 26. When fast-mirroring is used, Source Replication Manager [241] logically breaks the file [261] into a number of blocks of a given size [264], and calculates a checksum for each block [267]. The source server [260] then requests the same information for the existing file [263] on target server [262]. The checksum of each block is compared [267,268], and only blocks that are different are transmitted to target server [262] via fast-mirror packets [266]. This significantly reduces the amount of network traffic required to effect mirroring, especially for larger files. A flowchart of the fast-mirroring process is shown in FIG. 27.
- An example of fast-mirroring is shown in FIG. 26. The file FAST.DAT [261] is 4096 bytes long, and is broken into 8 logical blocks of 512 bytes each [267]. By comparing the checksum values for the file on the source [264] and target [265] servers, we see that only the second and fifth blocks have changed [267,268]. These two blocks [266] are transmitted to target server [262], where they overwrite the existing blocks [263]. In this case, only 1024 bytes are copied, versus 4096 bytes if normal mirroring were performed.
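- The fast-mirror comparison can be sketched as follows. The patent does not name a checksum algorithm, so CRC-32 from zlib is used here purely for illustration. For the FAST.DAT example, changed_blocks() would return indexes 1 and 4 (the second and fifth blocks), so only 1024 of the 4096 bytes are sent.

```python
# Sketch of the per-block checksum comparison used by fast mirroring (FIG. 26).
import zlib

def block_checksums(path: str, block_size: int = 512) -> list[int]:
    """Checksum of each logical block of the file, in order."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            sums.append(zlib.crc32(block))
    return sums

def changed_blocks(source_sums: list[int], target_sums: list[int]) -> list[int]:
    """Indexes of blocks whose checksums differ, or that exist only on the source."""
    return [i for i, s in enumerate(source_sums)
            if i >= len(target_sums) or s != target_sums[i]]
```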
- Several parameters that control fast-mirroring may be configured by the user, in order to optimize server performance. Configurable parameters include block size and minimum file size. Block size denotes the size of each logical block within the file, and is inversely proportional to the number of blocks that make up the file. A smaller block size requires more checksums to be calculated, but the resolution of each block is higher. A small block size is optimal if changes are isolated to a small portion of a file and network resources are limited. A larger block size requires fewer checksums, with lower block resolution. If changes are spread throughout a file or computing resources are limited, a larger block size should be used.
- Minimum file size denotes the minimum size a file must be to be considered for fast-mirroring. Because of the computing and network resources required to calculate, transmit, and compare checksum values, this technique may only be useful for larger files. Any files that are smaller than the user-configured value for minimum file size are copied using the standard mirroring process described above.
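- The block-size trade-off can be illustrated with simple arithmetic; the file size and candidate block sizes below are assumed example values, not figures from the patent.

```python
# Illustrative arithmetic: the number of checksums grows inversely with block
# size, while each changed block costs block_size bytes on the wire.
file_size = 4 * 1024 * 1024            # example: a 4 MiB file
for block_size in (512, 4096, 65536):
    blocks = -(-file_size // block_size)   # ceiling division
    print(f"block_size={block_size:>6}  checksums={blocks:>5}  "
          f"bytes_per_changed_block={block_size}")
```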
- A further function of Source Replication Manager [241] is to handle configuration and status request messages from the workstation component. The following messages are supported:
- Replication Set Modification—replication set messages denote which directories, files, and subdirectories are to be replicated by the source server. As the messages are received, the replication manager maintains an internal table of all replication set entries. If a directory, subdirectory, or file is added to this list after replication has begun, it will be automatically mirrored to target server.
- Initiate Mirroring—requests that a specific source server begin mirroring its selected directories, subdirectories, and files to target server. This message is forwarded to Source Replication Manager for processing.
- Source Server Statistics—requests statistical information about a specified source server. The Source Communications Manager places the requested information in a network message, which is returned to workstation component.
- As depicted in FIG. 28, the primary function of Source Communications Manager [283] is to transmit replication packets [284] from source server [280] to one or more target servers [281]. When a packet [284] is received from Source Replication Manager [282], Source Communications Manager [283] first determines which target servers [281] are to receive this data [284]. Server configuration is stored internally in a target server list [285] on source server [280]. Source Communications Manager [283] then transmits the packet [284] to each configured target server [281], and places a copy of the packet on an internal “waiting for acknowledge” queue [286]. The copy of the packet remains on this queue [286] until target server [281] responds that the packet [284] has been executed, or a time-out condition occurs. When one of these conditions is met, the status of the operation is returned to Source Replication Manager [282] and the packet [284] is removed from the queue [286].
- If a time-out occurs, meaning target server [281] has not responded within a given period of time, Source Communications Manager [283] will attempt to resend the packet [284]. If target server [281] does not respond after a given number of retries, transaction [284] is removed from queue [286] and an error status message is returned to Source Replication Manager [282]. Whenever a packet [284] is removed from the “waiting for acknowledge” queue [286], Source Communications Manager [283] determines if there are any commands currently in stacked-up mode. If there are, the Source Replication Manager [282] is signaled to service the stacked-up mode queue with the available packet [284].
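- The “waiting for acknowledge” queue with its time-out and resend behavior can be sketched as follows; the time-out value, resend limit, and callback names are assumptions, not taken from the patent.

```python
# Sketch of the "waiting for acknowledge" queue (FIG. 28) with time-outs and retries.
import time

ACK_TIMEOUT = 5.0      # seconds before a packet is considered timed out (assumed)
MAX_RESENDS = 3        # "given number of retries" (assumed)

waiting_for_ack: dict[int, dict] = {}   # packet id -> {"packet", "sent_at", "resends"}

def transmit(packet_id: int, packet: bytes, send) -> None:
    """Send a packet and remember it until the target acknowledges it."""
    send(packet)
    waiting_for_ack[packet_id] = {"packet": packet, "sent_at": time.time(), "resends": 0}

def on_ack(packet_id: int, notify_replication_manager) -> None:
    """Target acknowledged execution: drop the copy and report success."""
    waiting_for_ack.pop(packet_id, None)
    notify_replication_manager(packet_id, ok=True)   # may also trigger stacked-up servicing

def check_timeouts(send, notify_replication_manager) -> None:
    """Resend timed-out packets; after too many resends, report an error."""
    now = time.time()
    for packet_id, entry in list(waiting_for_ack.items()):
        if now - entry["sent_at"] < ACK_TIMEOUT:
            continue
        if entry["resends"] < MAX_RESENDS:
            entry["resends"] += 1
            entry["sent_at"] = now
            send(entry["packet"])                    # attempt to resend
        else:
            del waiting_for_ack[packet_id]
            notify_replication_manager(packet_id, ok=False)   # error status
```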
- A further function of the Source Communications Manager [292] is to compress and/or encrypt replication data [298] before it is transmitted to the target server [294], using standard compression and encryption algorithms [293]. This process is shown in FIG. 29. Data compression and encryption [293] are optional features that may be enabled from the user workstation [336]. When compression and/or encryption are enabled, and a packet [298] is received by the Source Communications Manager [292], the packet data [298] is compressed and/or encrypted using standard methods [293]. The compressed/encrypted packet [299] is then transmitted to the target server [294], where it is decompressed and/or decrypted [297] before it is replicated.
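- The optional compression step can be sketched as follows, using zlib as one example of a standard algorithm; encryption would be layered in the same place with a standard cipher library and is omitted here to avoid inventing an API.

```python
# Sketch of optional packet compression before transmission and decompression
# on receipt (FIG. 29). zlib is used purely as an illustrative standard codec.
import zlib

def prepare_packet(data: bytes, compress: bool = True) -> bytes:
    """Source side: compress the packet payload if the feature is enabled."""
    return zlib.compress(data) if compress else data

def unpack_packet(payload: bytes, compressed: bool = True) -> bytes:
    """Target side: restore the original payload before replication."""
    return zlib.decompress(payload) if compressed else payload
```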
- Target Server Component
- The primary function of target server component [301] is to receive and execute replication packets [307] from one or more source servers [306]. A block diagram of target server [300] software components is shown in FIG. 30. The Target Communications Manager (TCM) [302] receives replication packets [307] from source server [306], and sends status messages back to source server [306] for each replication packet [307]. The Target Replication Manager (TRM) [303] replicates the operation described in each packet [307] to the local storage media [305] on target server [300], and restores data [305] to source server [306] when necessary.
- The target server software component [301] is loaded on target server [300] at startup, and remains resident in memory until it is explicitly unloaded, or target computer [300] is powered off. A target server [300] must be connected to at least one source server [306] for replication to be performed. If a target server [300] is connected to multiple source servers [306], replication commands [307] may be forwarded from each.
- The primary function of Target Communications Manager [302] on target server is to receive replication packets [307] from one or more source servers [306]. When a replication packet [307] is received from a source server [306], Target Communications Manager [302] forwards packet [307] to Target Replication Manager [303]. When Target Replication Manager [303] is finished processing the packet [307], Target Communications Manager [302] sends a status message to source server [306] in the form of another network packet.
- Another function of Target Communications Manager is to handle the following user requests from workstation component:
- Source Server Connect—requests that target server establish a connection with specified source server, and begin replication. The Target Communications Manager attempts to connect to specified source server, and returns a status message to workstation denoting the status of the connection.
- Source Server Disconnect—requests that target server drop a connection with specified source server. The Target Communications Manager disconnects from specified source server, and returns a status message to workstation denoting the status of the connection.
- Initiate Restore—requests that all replicated directories, subdirectories, and files for a specific source server be restored from target server. This request is forwarded to Target Replication Manager for processing.
- Target Server Statistics—requests statistical information about target server. The communications manager places the requested information in a network message, which is returned to the workstation component.
- A further function of the Target Communications Manager [295] is to decompress and/or decrypt replication data [299] that is transmitted from the source server [290], using standard decompression and decryption algorithms [297]. This process is shown in FIG. 29. Data compression and encryption [297] are optional features that may be enabled from the user workstation [336]. When compression and/or encryption are enabled and a packet [299] is received by the Target Communications Manager [295], the packet data [299] is decompressed and/or decrypted using standard methods [297]. The decompressed/decrypted packet [298] is then passed to the Target Replication Manager [296] for replication.
- The primary function of Target Replication Manager [312] is to replicate the operation described in each packet [313] received by Target Communications Manager [311] to local storage media [315] on target server [310]. This process is shown in FIG. 31. The Target Replication Manager [312] parses each message to determine the type of command to be executed, the parameters required to execute the command, and the file data passed in packet [313]. Target Replication Manager [312] then determines if the file [315] specified in replication packet [313] is opened by another application [316] on target server [310]. If file [315] is in use by another application [316], the operation and all associated parameters and data are placed on an internal “open-file” queue [314]. A status message is returned to Target Communications Manager [311], denoting that replication operation [313] is pending due to an open file [315]. A flowchart of this process is shown in FIG. 32.
- If the associated file [315] is available and is successfully opened, specified file operation [313] is executed and file [315] is then closed. By closing file [315] immediately after the operation is completed, file [315] is available for use by other applications [316] even if it remains open on source server [316]. The status of file operation [313] is then returned to Target Communications Manager [311], where it is in turn sent to source server [316] as a response.
- The Target Replication Manager [312] periodically checks to see if any operations are waiting to be executed in open-file queue [314]. If file [315] associated with one or more entries in this queue [314] has since been closed, any such operations [314] are executed in the order they were received. Once an entry is executed, it is removed from open-file queue [314].
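- The open-file queue behavior on the target server can be sketched as follows, with a placeholder in-use test and apply callback; the data structures and names are illustrative only.

```python
# Sketch of the target-side open-file queue (FIGS. 31-32): operations against a
# file currently in use are deferred and replayed in arrival order once the
# file closes.
from collections import defaultdict, deque

open_file_queue: dict[str, deque] = defaultdict(deque)

def apply_or_defer(op: dict, file_in_use, apply_op) -> str:
    """op e.g. {"path": "DATA.DAT", "offset": 20, "data": b"..."}; returns a status."""
    if file_in_use(op["path"]):
        open_file_queue[op["path"]].append(op)     # defer until the file closes
        return "pending"
    apply_op(op)                                   # open, execute, close immediately
    return "ok"

def service_open_file_queue(file_in_use, apply_op) -> None:
    """Periodically replay deferred operations for files that have since closed."""
    for path, queue in list(open_file_queue.items()):
        while queue and not file_in_use(path):
            apply_op(queue.popleft())              # execute in the order received
        if not queue:
            del open_file_queue[path]
```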
- A further function of Target Replication Manager [312] is to handle mirror packets [313] from source server [316], via Target Communications Manager [311]. When a mirror packet [313] is received, Target Replication Manager [312] determines if associated file system item (user, directory, subdirectory, or file) [315] exists on target server [310]. The item [315] is created on target server [310] if it does not exist. In the case of a file request, the data associated with the request is written to file [315], at the offset specified in mirror packet [313]. The file [315] is then closed, so it may be accessed by other applications [316]. If file [315] specified by a mirror request [313] already exists, its contents are overwritten by new data [315].
- In order to support fast-mirroring of larger files that already exist on both source [260] and target [262] servers, Target Replication Manager [312] must also calculate checksum values [265] for files [263] as requested by source server [260], as shown in FIG. 26. FIG. 33 illustrates how these checksum values [268] are calculated on the target server [262]. When fast mirroring is selected, source server [260] sends a list of all candidate files for fast mirroring, and the block size to be used in calculating checksum values [330]. The Target Replication Manager [312] searches the target server file system [336] for each file on this list [330] to see if it already exists [331]. If it does not exist on the target server [332], the specified file is dropped from the candidate list [330], and normal mirroring is performed for that file.
- If a file in candidate list [330] does exist on target server file system [336], checksum values are calculated [333] using the specified block size [330]. These checksum values [333] are returned to source server [335] in the form of a network message [334]. The source server [335] is then responsible for comparing the checksum values [255] to those calculated on source server [254], and sending only those blocks [256] which are different. When all files on the candidate list [330] have been processed on the target server [262], a final message is sent to the source server [335] denoting that checksum calculation is complete [337].
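- The target-side half of fast mirroring can be sketched as follows; it reuses the block_checksums() helper from the earlier fast-mirroring sketch, and the function and parameter names are illustrative.

```python
# Sketch of the target side of fast mirroring (FIG. 33): for each candidate file
# that exists locally, return per-block checksums; otherwise drop it from the
# candidate list so it falls back to normal mirroring.
# Assumes block_checksums() from the earlier fast-mirroring sketch is in scope.
import os

def answer_fast_mirror_request(candidates: list[str], block_size: int, target_root: str):
    checksums = {}
    fall_back_to_full_mirror = []
    for name in candidates:
        path = os.path.join(target_root, name)
        if not os.path.exists(path):
            fall_back_to_full_mirror.append(name)   # dropped from the candidate list
        else:
            checksums[name] = block_checksums(path, block_size)
    return checksums, fall_back_to_full_mirror      # returned to the source server
```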
- Another function of Target Replication Manager [344] is to restore replicated data [345] to a specified source server [340], as shown in FIG. 34. The restore request is sent from workstation [346] to Target Communications Manager [342], and is then forwarded to Target Replication Manager [344]. The Target Replication Manager [344] uses the mirroring technique described in the source server [340] component section to effect such a restore, with the roles of source [340] and target [341] servers reversed. Both user account information and replicated data [345] are mirrored from target server [341] to source server [340].
- Although the foregoing invention has been described in some detail by way of illustration and example for purposes of clarity of understanding, it will be obvious that certain changes and modifications may be practiced within the scope of the appended claims.
Claims (2)
1. A real time backup system comprising:
at least one primary server having a non-volatile storage media where a file modification request for a file modification is saved;
at least one secondary server for executing said file modification request, each secondary server having a non-volatile storage media where said file modification is saved; and
a communication means for communicating said file modification request from said at least one primary server to said at least one secondary server.
2. A method for real time backup comprising the steps of:
saving a file modification request for a file modification on at least one non-volatile storage media associated with at least one primary server;
executing said file modification request on at least one secondary server, each secondary server having a non-volatile storage media;
saving said file modification on said non-volatile storage media associated with said at least one secondary server; and
communicating said file modification request from said at least one primary server to said at least one secondary server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/729,284 US20040083245A1 (en) | 1995-10-16 | 2003-12-04 | Real time backup system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/543,266 US5819020A (en) | 1995-10-16 | 1995-10-16 | Real time backup system |
US09/165,724 US5974563A (en) | 1995-10-16 | 1998-10-02 | Real time backup system |
US33677799A | 1999-06-21 | 1999-06-21 | |
US10/729,284 US20040083245A1 (en) | 1995-10-16 | 2003-12-04 | Real time backup system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US33677799A Continuation | 1995-10-16 | 1999-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040083245A1 true US20040083245A1 (en) | 2004-04-29 |
Family
ID=24167278
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/543,266 Expired - Lifetime US5819020A (en) | 1995-10-16 | 1995-10-16 | Real time backup system |
US09/165,724 Expired - Lifetime US5974563A (en) | 1995-10-16 | 1998-10-02 | Real time backup system |
US10/729,284 Abandoned US20040083245A1 (en) | 1995-10-16 | 2003-12-04 | Real time backup system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/543,266 Expired - Lifetime US5819020A (en) | 1995-10-16 | 1995-10-16 | Real time backup system |
US09/165,724 Expired - Lifetime US5974563A (en) | 1995-10-16 | 1998-10-02 | Real time backup system |
Country Status (1)
Country | Link |
---|---|
US (3) | US5819020A (en) |
Cited By (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030074535A1 (en) * | 2001-09-14 | 2003-04-17 | Eric Owhadi | Method of initiating a backup procedure |
US20030110237A1 (en) * | 2001-12-06 | 2003-06-12 | Hitachi, Ltd. | Methods of migrating data between storage apparatuses |
US20030152078A1 (en) * | 1998-08-07 | 2003-08-14 | Henderson Alex E. | Services processor having a packet editing unit |
US20040133610A1 (en) * | 2002-11-18 | 2004-07-08 | Sparta Systems, Inc. | Techniques for reconfiguring configurable systems |
US20040199609A1 (en) * | 2003-04-07 | 2004-10-07 | Microsoft Corporation | System and method for web server migration |
US20050010609A1 (en) * | 2003-06-12 | 2005-01-13 | International Business Machines Corporation | Migratable backup and restore |
US20050131993A1 (en) * | 2003-12-15 | 2005-06-16 | Fatula Joseph J.Jr. | Apparatus, system, and method for autonomic control of grid system resources |
US20050144283A1 (en) * | 2003-12-15 | 2005-06-30 | Fatula Joseph J.Jr. | Apparatus, system, and method for grid based data storage |
US20050160313A1 (en) * | 2003-12-30 | 2005-07-21 | Chih-Sung Wu | Real-time remote backup system and related method |
US20050165853A1 (en) * | 2004-01-22 | 2005-07-28 | Altiris, Inc. | Method and apparatus for localized protected imaging of a file system |
US20060075143A1 (en) * | 2004-09-30 | 2006-04-06 | Konica Minolta Business Technologies, Inc. | Administration system for administration target devices, data server and branch server for use in said system |
US20060101097A1 (en) * | 2004-11-05 | 2006-05-11 | Dima Barboi | Replicated data validation |
WO2006074869A1 (en) * | 2005-01-11 | 2006-07-20 | Rudolf Bayer | Data storage system and method for operation thereof |
US20060190502A1 (en) * | 2005-02-24 | 2006-08-24 | International Business Machines Corporation | Backing up at least one encrypted computer file |
EP1708094A1 (en) * | 2005-03-31 | 2006-10-04 | Ubs Ag | Computer network system for constructing, synchronizing and/or managing a second database from/with a first database, and methods therefore |
EP1708095A1 (en) * | 2005-03-31 | 2006-10-04 | Ubs Ag | Computer network system for constructing, synchronizing and/or managing a second database from/with a first database, and methods therefore |
US20060222161A1 (en) * | 2005-03-31 | 2006-10-05 | Marcel Bank | Computer network system for building, synchronising and/or operating a second database from/with a first database, and procedures for it |
US20060271601A1 (en) * | 2005-05-24 | 2006-11-30 | International Business Machines Corporation | System and method for peer-to-peer grid based autonomic and probabilistic on-demand backup and restore |
US7155458B1 (en) * | 2002-04-05 | 2006-12-26 | Network Appliance, Inc. | Mechanism for distributed atomic creation of client-private files |
US20070073791A1 (en) * | 2005-09-27 | 2007-03-29 | Computer Associates Think, Inc. | Centralized management of disparate multi-platform media |
US20070150522A1 (en) * | 2003-10-07 | 2007-06-28 | International Business Machines Corporation | Method, system, and program for processing a file request |
US20070214384A1 (en) * | 2006-03-07 | 2007-09-13 | Manabu Kitamura | Method for backing up data in a clustered file system |
US20070226538A1 (en) * | 2006-03-09 | 2007-09-27 | Samsung Electronics Co., Ltd. | Apparatus and method to manage computer system data in network |
US20070255758A1 (en) * | 2006-04-28 | 2007-11-01 | Ling Zheng | System and method for sampling based elimination of duplicate data |
US20070260696A1 (en) * | 2006-05-02 | 2007-11-08 | Mypoints.Com Inc. | System and method for providing three-way failover for a transactional database |
US20080005201A1 (en) * | 2006-06-29 | 2008-01-03 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US20080005141A1 (en) * | 2006-06-29 | 2008-01-03 | Ling Zheng | System and method for retrieving and using block fingerprints for data deduplication |
US20080184001A1 (en) * | 2007-01-30 | 2008-07-31 | Network Appliance, Inc. | Method and an apparatus to store data patterns |
US20080270490A1 (en) * | 2004-05-28 | 2008-10-30 | Moxite Gmbh | System and Method for Replication, Integration, Consolidation and Mobilisation of Data |
US20080301134A1 (en) * | 2007-05-31 | 2008-12-04 | Miller Steven C | System and method for accelerating anchor point detection |
US20080307102A1 (en) * | 2007-06-08 | 2008-12-11 | Galloway Curtis C | Techniques for communicating data between a host device and an intermittently attached mobile device |
US20090100241A1 (en) * | 2005-03-23 | 2009-04-16 | Steffen Werner | Method for Removing a Mass Storage System From a Computer Network, and Computer Program Product and Computer Network for Carrying our the Method |
US20090100236A1 (en) * | 2007-10-15 | 2009-04-16 | Ricardo Spencer Puig | Copying data onto a secondary storage device |
US20090150477A1 (en) * | 2007-12-07 | 2009-06-11 | Brocade Communications Systems, Inc. | Distributed file system optimization using native server functions |
US20100005125A1 (en) * | 1996-12-13 | 2010-01-07 | Visto Corporation | System and method for globally and securely accessing unified information in a computer network |
US20100049726A1 (en) * | 2008-08-19 | 2010-02-25 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US7747584B1 (en) | 2006-08-22 | 2010-06-29 | Netapp, Inc. | System and method for enabling de-duplication in a storage system architecture |
US20100165876A1 (en) * | 2008-12-30 | 2010-07-01 | Amit Shukla | Methods and apparatus for distributed dynamic network provisioning |
US20100174685A1 (en) * | 2000-03-01 | 2010-07-08 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US20110004586A1 (en) * | 2009-07-15 | 2011-01-06 | Lon Jones Cherryholmes | System, method, and computer program product for creating a virtual database |
US7890714B1 (en) * | 2007-09-28 | 2011-02-15 | Symantec Operating Corporation | Redirection of an ongoing backup |
US20110231366A1 (en) * | 2005-04-20 | 2011-09-22 | Axxana (Israel) Ltd | Remote data mirroring system |
US8032487B1 (en) * | 2003-10-29 | 2011-10-04 | At&T Intellectual Property I, L.P. | System and method for synchronizing data in a networked system |
US20110289270A1 (en) * | 2010-05-24 | 2011-11-24 | Bell Jr Robert H | System, method and computer program product for data transfer management |
US20120089566A1 (en) * | 2010-10-11 | 2012-04-12 | Sap Ag | Method for reorganizing or moving a database table |
US20120117025A1 (en) * | 2008-02-18 | 2012-05-10 | Microsoft Corporation | Synchronization of Replications for Different Computing Systems |
US20120215999A1 (en) * | 2009-08-11 | 2012-08-23 | International Business Machines Corporation | Synchronization of replicated sequential access storage components |
US8495019B2 (en) | 2011-03-08 | 2013-07-23 | Ca, Inc. | System and method for providing assured recovery and replication |
US8793226B1 (en) | 2007-08-28 | 2014-07-29 | Netapp, Inc. | System and method for estimating duplicate data |
WO2014170810A1 (en) * | 2013-04-14 | 2014-10-23 | Axxana (Israel) Ltd. | Synchronously mirroring very fast storage arrays |
US8891406B1 (en) | 2010-12-22 | 2014-11-18 | Juniper Networks, Inc. | Methods and apparatus for tunnel management within a data center |
US9032054B2 (en) | 2008-12-30 | 2015-05-12 | Juniper Networks, Inc. | Method and apparatus for determining a network topology during network provisioning |
US9195397B2 (en) | 2005-04-20 | 2015-11-24 | Axxana (Israel) Ltd. | Disaster-proof data recovery |
US9875042B1 (en) * | 2015-03-31 | 2018-01-23 | EMC IP Holding Company LLC | Asynchronous replication |
US10379958B2 (en) | 2015-06-03 | 2019-08-13 | Axxana (Israel) Ltd. | Fast archiving for database systems |
US10592326B2 (en) | 2017-03-08 | 2020-03-17 | Axxana (Israel) Ltd. | Method and apparatus for data loss assessment |
CN111382136A (en) * | 2018-12-29 | 2020-07-07 | 华为技术有限公司 | File system mirror image and file request method |
US10769028B2 (en) | 2013-10-16 | 2020-09-08 | Axxana (Israel) Ltd. | Zero-transaction-loss recovery for database systems |
Families Citing this family (582)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8352400B2 (en) | 1991-12-23 | 2013-01-08 | Hoffberg Steven M | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
US5835953A (en) * | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US5680640A (en) * | 1995-09-01 | 1997-10-21 | Emc Corporation | System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state |
GB9603582D0 (en) | 1996-02-20 | 1996-04-17 | Hewlett Packard Co | Method of accessing service resource items that are for use in a telecommunications system |
US7415466B2 (en) * | 1996-03-19 | 2008-08-19 | Oracle International Corporation | Parallel transaction recovery |
US6647510B1 (en) | 1996-03-19 | 2003-11-11 | Oracle International Corporation | Method and apparatus for making available data that was locked by a dead transaction before rolling back the entire dead transaction |
US7114049B2 (en) * | 1997-01-08 | 2006-09-26 | Hitachi, Ltd. | Adaptive remote copy in a heterogeneous environment |
US6802062B1 (en) * | 1997-04-01 | 2004-10-05 | Hitachi, Ltd. | System with virtual machine movable between virtual machine systems and control method |
US6366558B1 (en) * | 1997-05-02 | 2002-04-02 | Cisco Technology, Inc. | Method and apparatus for maintaining connection state between a connection manager and a failover device |
US7039008B1 (en) * | 1997-05-02 | 2006-05-02 | Cisco Technology, Inc. | Method and apparatus for maintaining connection state between a connection manager and a failover device |
US6065046A (en) * | 1997-07-29 | 2000-05-16 | Catharon Productions, Inc. | Computerized system and associated method of optimally controlled storage and transfer of computer programs on a computer network |
JP4077907B2 (en) * | 1997-08-04 | 2008-04-23 | 富士通株式会社 | Computer data backup device, data backup method, and computer-readable recording medium recording data backup program |
JP3901806B2 (en) * | 1997-09-25 | 2007-04-04 | 富士通株式会社 | Information management system and secondary server |
US5974574A (en) * | 1997-09-30 | 1999-10-26 | Tandem Computers Incorporated | Method of comparing replicated databases using checksum information |
US7581077B2 (en) | 1997-10-30 | 2009-08-25 | Commvault Systems, Inc. | Method and system for transferring data in a storage operation |
US7209972B1 (en) | 1997-10-30 | 2007-04-24 | Commvault Systems, Inc. | High speed data transfer mechanism |
US6418478B1 (en) * | 1997-10-30 | 2002-07-09 | Commvault Systems, Inc. | Pipelined high speed data transfer mechanism |
US6170055B1 (en) | 1997-11-03 | 2001-01-02 | Iomega Corporation | System for computer recovery using removable high capacity media |
US6199179B1 (en) | 1998-06-10 | 2001-03-06 | Compaq Computer Corporation | Method and apparatus for failure recovery in a multi-processor computer system |
US6260068B1 (en) * | 1998-06-10 | 2001-07-10 | Compaq Computer Corporation | Method and apparatus for migrating resources in a multi-processor computer system |
US6633916B2 (en) | 1998-06-10 | 2003-10-14 | Hewlett-Packard Development Company, L.P. | Method and apparatus for virtual resource handling in a multi-processor computer system |
US6647508B2 (en) | 1997-11-04 | 2003-11-11 | Hewlett-Packard Development Company, L.P. | Multiprocessor computer architecture with multiple operating system instances and software controlled resource allocation |
US6381682B2 (en) | 1998-06-10 | 2002-04-30 | Compaq Information Technologies Group, L.P. | Method and apparatus for dynamically sharing memory in a multiprocessor system |
US6542926B2 (en) | 1998-06-10 | 2003-04-01 | Compaq Information Technologies Group, L.P. | Software partitioned multi-processor system with flexible resource sharing levels |
US6332180B1 (en) | 1998-06-10 | 2001-12-18 | Compaq Information Technologies Group, L.P. | Method and apparatus for communication in a multi-processor computer system |
JPH11167510A (en) * | 1997-12-04 | 1999-06-22 | Hitachi Ltd | Replication method, replication tool, and replication server |
US6212531B1 (en) * | 1998-01-13 | 2001-04-03 | International Business Machines Corporation | Method for implementing point-in-time copy using a snapshot function |
US7047300B1 (en) | 1998-02-10 | 2006-05-16 | Sprint Communications Company L.P. | Survivable and scalable data system and method for computer networks |
US6317844B1 (en) * | 1998-03-10 | 2001-11-13 | Network Appliance, Inc. | File server storage arrangement |
US7277941B2 (en) | 1998-03-11 | 2007-10-02 | Commvault Systems, Inc. | System and method for providing encryption in a storage network by storing a secured encryption key with encrypted archive data in an archive storage device |
US7739381B2 (en) * | 1998-03-11 | 2010-06-15 | Commvault Systems, Inc. | System and method for providing encryption in storage operations in a storage network, such as for use by application service providers that provide data storage services |
US6631477B1 (en) * | 1998-03-13 | 2003-10-07 | Emc Corporation | Host system for mass storage business continuance volumes |
US6681327B1 (en) * | 1998-04-02 | 2004-01-20 | Intel Corporation | Method and system for managing secure client-server transactions |
US6289357B1 (en) * | 1998-04-24 | 2001-09-11 | Platinum Technology Ip, Inc. | Method of automatically synchronizing mirrored database objects |
US6976093B2 (en) * | 1998-05-29 | 2005-12-13 | Yahoo! Inc. | Web server content replication |
US6279011B1 (en) * | 1998-06-19 | 2001-08-21 | Network Appliance, Inc. | Backup and restore for heterogeneous file server environment |
US6154849A (en) * | 1998-06-30 | 2000-11-28 | Sun Microsystems, Inc. | Method and apparatus for resource dependency relaxation |
US6266781B1 (en) * | 1998-07-20 | 2001-07-24 | Academia Sinica | Method and apparatus for providing failure detection and recovery with predetermined replication style for distributed applications in a network |
JP4689137B2 (en) | 2001-08-08 | 2011-05-25 | 株式会社日立製作所 | Remote copy control method and storage system |
US8010627B1 (en) | 1998-09-25 | 2011-08-30 | Sprint Communications Company L.P. | Virtual content publishing system |
US6411991B1 (en) * | 1998-09-25 | 2002-06-25 | Sprint Communications Company L.P. | Geographic data replication system and method for a network |
US6269405B1 (en) * | 1998-10-19 | 2001-07-31 | International Business Machines Corporation | User account establishment and synchronization in heterogeneous networks |
US6269406B1 (en) * | 1998-10-19 | 2001-07-31 | International Business Machines Corporation | User group synchronization to manage capabilities in heterogeneous networks |
US6088721A (en) * | 1998-10-20 | 2000-07-11 | Lucent Technologies, Inc. | Efficient unified replication and caching protocol |
JP3578385B2 (en) * | 1998-10-22 | 2004-10-20 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Computer and replica identity maintaining method |
US6615244B1 (en) | 1998-11-28 | 2003-09-02 | Tara C Singhal | Internet based archive system for personal computers |
US6330669B1 (en) | 1998-11-30 | 2001-12-11 | Micron Technology, Inc. | OS multi boot integrator |
US6260140B1 (en) * | 1998-11-30 | 2001-07-10 | Micron Electronics, Inc. | Operating system multi boot integrator |
US6397125B1 (en) * | 1998-12-18 | 2002-05-28 | International Business Machines Corporation | Method of and apparatus for performing design synchronization in a computer system |
US7904187B2 (en) | 1999-02-01 | 2011-03-08 | Hoffberg Steven M | Internet appliance system and method |
US6397307B2 (en) * | 1999-02-23 | 2002-05-28 | Legato Systems, Inc. | Method and system for mirroring and archiving mass storage |
US8877916B2 (en) | 2000-04-26 | 2014-11-04 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US10329575B2 (en) | 1999-05-14 | 2019-06-25 | Ceres, Inc. | Regulatory sequence for plants |
US9029523B2 (en) * | 2000-04-26 | 2015-05-12 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US6625622B1 (en) * | 1999-05-14 | 2003-09-23 | Eisenworld, Inc. | Apparatus and method for transfering information between platforms |
US6938057B2 (en) * | 1999-05-21 | 2005-08-30 | International Business Machines Corporation | Method and apparatus for networked backup storage |
US7035880B1 (en) | 1999-07-14 | 2006-04-25 | Commvault Systems, Inc. | Modular backup and retrieval system used in conjunction with a storage area network |
US7389311B1 (en) | 1999-07-15 | 2008-06-17 | Commvault Systems, Inc. | Modular backup and retrieval system |
US7395282B1 (en) | 1999-07-15 | 2008-07-01 | Commvault Systems, Inc. | Hierarchical backup and retrieval system |
US7167962B2 (en) | 1999-08-19 | 2007-01-23 | Hitachi, Ltd. | Remote copy for a storage controller with reduced data size |
US7103647B2 (en) | 1999-08-23 | 2006-09-05 | Terraspring, Inc. | Symbolic definition of a computer system |
US6938058B2 (en) | 1999-08-23 | 2005-08-30 | Eisenworld, Inc. | Apparatus and method for transferring information between platforms |
US7703102B1 (en) | 1999-08-23 | 2010-04-20 | Oracle America, Inc. | Approach for allocating resources to an apparatus based on preemptable resource requirements |
US8019870B1 (en) | 1999-08-23 | 2011-09-13 | Oracle America, Inc. | Approach for allocating resources to an apparatus based on alternative resource requirements |
US8234650B1 (en) | 1999-08-23 | 2012-07-31 | Oracle America, Inc. | Approach for allocating resources to an apparatus |
US7463648B1 (en) | 1999-08-23 | 2008-12-09 | Sun Microsystems, Inc. | Approach for allocating resources to an apparatus based on optional resource requirements |
US8032634B1 (en) | 1999-08-23 | 2011-10-04 | Oracle America, Inc. | Approach for allocating resources to an apparatus based on resource requirements |
US6779016B1 (en) | 1999-08-23 | 2004-08-17 | Terraspring, Inc. | Extensible computing system |
US8179809B1 (en) | 1999-08-23 | 2012-05-15 | Oracle America, Inc. | Approach for allocating resources to an apparatus based on suspendable resource requirements |
US6795833B1 (en) * | 1999-09-22 | 2004-09-21 | Alsoft, Inc. | Method for allowing verification of alterations to the cataloging structure on a computer storage device |
US7444390B2 (en) * | 1999-10-20 | 2008-10-28 | Cdimensions, Inc. | Method and apparatus for providing a web-based active virtual file system |
US6421688B1 (en) | 1999-10-20 | 2002-07-16 | Parallel Computers Technology, Inc. | Method and apparatus for database fault tolerance with instant transaction replication using off-the-shelf database servers and low bandwidth networks |
US6609215B1 (en) * | 1999-10-21 | 2003-08-19 | International Business Machines Corporation | Method and system for implementing network filesystem-based customized computer system automated rebuild tool |
US6460055B1 (en) | 1999-12-16 | 2002-10-01 | Livevault Corporation | Systems and methods for backing up data files |
US6526418B1 (en) * | 1999-12-16 | 2003-02-25 | Livevault Corporation | Systems and methods for backing up data files |
US6625623B1 (en) * | 1999-12-16 | 2003-09-23 | Livevault Corporation | Systems and methods for backing up data files |
US6847984B1 (en) * | 1999-12-16 | 2005-01-25 | Livevault Corporation | Systems and methods for backing up data files |
US6779003B1 (en) | 1999-12-16 | 2004-08-17 | Livevault Corporation | Systems and methods for backing up data files |
US6658589B1 (en) * | 1999-12-20 | 2003-12-02 | Emc Corporation | System and method for backup a parallel server data storage system |
US6618822B1 (en) * | 2000-01-03 | 2003-09-09 | Oracle International Corporation | Method and mechanism for relational access of recovery logs in a database system |
US8612553B2 (en) * | 2000-01-14 | 2013-12-17 | Microsoft Corporation | Method and system for dynamically purposing a computing device |
US6694336B1 (en) | 2000-01-25 | 2004-02-17 | Fusionone, Inc. | Data transfer and synchronization system |
US8620286B2 (en) | 2004-02-27 | 2013-12-31 | Synchronoss Technologies, Inc. | Method and system for promoting and transferring licensed content and applications |
US6671757B1 (en) | 2000-01-26 | 2003-12-30 | Fusionone, Inc. | Data transfer and synchronization system |
US8156074B1 (en) | 2000-01-26 | 2012-04-10 | Synchronoss Technologies, Inc. | Data transfer and synchronization system |
US7505762B2 (en) | 2004-02-27 | 2009-03-17 | Fusionone, Inc. | Wireless telephone data backup system |
US7035878B1 (en) | 2000-01-25 | 2006-04-25 | Fusionone, Inc. | Base rolling engine for data transfer and synchronization system |
JP2003521067A (en) * | 2000-01-28 | 2003-07-08 | ウィリアムズ コミュニケーションズ, エルエルシー | System and method for rewriting a media resource request and / or response between an origin server and a client |
US7155481B2 (en) * | 2000-01-31 | 2006-12-26 | Commvault Systems, Inc. | Email attachment management in a computer system |
US6658436B2 (en) | 2000-01-31 | 2003-12-02 | Commvault Systems, Inc. | Logical view and access to data managed by a modular data and storage management system |
US7434219B2 (en) * | 2000-01-31 | 2008-10-07 | Commvault Systems, Inc. | Storage of application specific profiles correlating to document versions |
US7003641B2 (en) | 2000-01-31 | 2006-02-21 | Commvault Systems, Inc. | Logical view with granular access to exchange data managed by a modular data and storage management system |
US6714980B1 (en) * | 2000-02-11 | 2004-03-30 | Terraspring, Inc. | Backup and restore of data associated with a host in a dynamically changing virtual server farm without involvement of a server that uses an associated storage device |
US7093005B2 (en) * | 2000-02-11 | 2006-08-15 | Terraspring, Inc. | Graphical editor for defining and creating a computer system |
EP1143338B1 (en) * | 2000-03-10 | 2004-05-19 | Alcatel | Method and apparatus for backing up data |
US20090216641A1 (en) * | 2000-03-30 | 2009-08-27 | Niration Network Group, L.L.C. | Methods and Systems for Indexing Content |
US6856993B1 (en) | 2000-03-30 | 2005-02-15 | Microsoft Corporation | Transactional file system |
US6671801B1 (en) * | 2000-03-31 | 2003-12-30 | Intel Corporation | Replication of computer systems by duplicating the configuration of assets and the interconnections between the assets |
US6633977B1 (en) * | 2000-03-31 | 2003-10-14 | International Business Machines Corporation | System and method for computer system duplication |
US7062541B1 (en) * | 2000-04-27 | 2006-06-13 | International Business Machines Corporation | System and method for transferring related data objects in a distributed data storage environment |
US6711699B1 (en) | 2000-05-04 | 2004-03-23 | International Business Machines Corporation | Real time backup system for information based on a user's actions and gestures for computer users |
US6944651B2 (en) * | 2000-05-19 | 2005-09-13 | Fusionone, Inc. | Single click synchronization of data from a public information store to a private information store |
US6892221B2 (en) * | 2000-05-19 | 2005-05-10 | Centerbeam | Data backup |
US6769074B2 (en) * | 2000-05-25 | 2004-07-27 | Lumigent Technologies, Inc. | System and method for transaction-selective rollback reconstruction of database objects |
US7334098B1 (en) * | 2000-06-06 | 2008-02-19 | Quantum Corporation | Producing a mass storage backup using a log of write commands and time information |
WO2001098952A2 (en) * | 2000-06-20 | 2001-12-27 | Orbidex | System and method of storing data to a recording medium |
US6938039B1 (en) * | 2000-06-30 | 2005-08-30 | Emc Corporation | Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol |
US7895334B1 (en) | 2000-07-19 | 2011-02-22 | Fusionone, Inc. | Remote access communication architecture apparatus and method |
US8073954B1 (en) | 2000-07-19 | 2011-12-06 | Synchronoss Technologies, Inc. | Method and apparatus for a secure remote access system |
EP1187421A3 (en) * | 2000-08-17 | 2004-04-14 | FusionOne, Inc. | Base rolling engine for data transfer and synchronization system |
US6925476B1 (en) | 2000-08-17 | 2005-08-02 | Fusionone, Inc. | Updating application data including adding first change log to aggreagate change log comprising summary of changes |
JP3617437B2 (en) * | 2000-09-13 | 2005-02-02 | 日本電気株式会社 | Data copy method and program recording medium recording data copy program |
US6804819B1 (en) | 2000-09-18 | 2004-10-12 | Hewlett-Packard Development Company, L.P. | Method, system, and computer program product for a data propagation platform and applications of same |
US7386610B1 (en) * | 2000-09-18 | 2008-06-10 | Hewlett-Packard Development Company, L.P. | Internet protocol data mirroring |
US6708188B1 (en) * | 2000-09-19 | 2004-03-16 | Bocada, Inc. | Extensible method for obtaining an historical record of data backup activity (and errors) and converting same into a canonical format |
US6640217B1 (en) | 2000-09-19 | 2003-10-28 | Bocada, Inc, | Method for extracting and storing records of data backup activity from a plurality of backup devices |
US6631374B1 (en) | 2000-09-29 | 2003-10-07 | Oracle Corp. | System and method for providing fine-grained temporal database access |
US8650169B1 (en) | 2000-09-29 | 2014-02-11 | Oracle International Corporation | Method and mechanism for identifying transaction on a row of data |
DE10053016A1 (en) * | 2000-10-17 | 2002-04-25 | Libelle Informatik Gmbh | Device for data mirroring has mirror system with older data version, temporary memory for data related to actions leading from older version to current one, monitoring/changeover program |
ES2301499T3 (en) * | 2000-11-09 | 2008-07-01 | Accenture Llp | METHOD AND SYSTEM TO IMPROVE A COMMERCIAL TRANSITION DIRECTED THROUGH A COMMUNICATIONS NETWORK. |
US7587446B1 (en) | 2000-11-10 | 2009-09-08 | Fusionone, Inc. | Acquisition and synchronization of digital media to a personal information space |
US7818435B1 (en) | 2000-12-14 | 2010-10-19 | Fusionone, Inc. | Reverse proxy mechanism for retrieving electronic content associated with a local network |
US7136881B2 (en) * | 2000-12-15 | 2006-11-14 | International Business Machines Corporation | Method and system for processing directory events |
US6941490B2 (en) * | 2000-12-21 | 2005-09-06 | Emc Corporation | Dual channel restoration of data between primary and backup servers |
US6871271B2 (en) | 2000-12-21 | 2005-03-22 | Emc Corporation | Incrementally restoring a mass storage device to a prior state |
US7254606B2 (en) * | 2001-01-30 | 2007-08-07 | Canon Kabushiki Kaisha | Data management method using network |
US6606690B2 (en) | 2001-02-20 | 2003-08-12 | Hewlett-Packard Development Company, L.P. | System and method for accessing a storage area network as network attached storage |
EP1255198B1 (en) * | 2001-02-28 | 2006-11-29 | Hitachi, Ltd. | Storage apparatus system and method of data backup |
US7194590B2 (en) * | 2001-02-28 | 2007-03-20 | Hitachi, Ltd. | Three data center adaptive remote copy |
EP1388085B1 (en) * | 2001-03-15 | 2006-11-29 | The Board Of Governors For Higher Education State Of Rhode Island And Providence Plantations | Remote online information back-up system |
US8615566B1 (en) | 2001-03-23 | 2013-12-24 | Synchronoss Technologies, Inc. | Apparatus and method for operational support of remote network systems |
CA2437560A1 (en) * | 2001-03-26 | 2002-10-03 | Duaxes Corporation | Protocol duplexer and protocol duplexing method |
US6728849B2 (en) | 2001-12-14 | 2004-04-27 | Hitachi, Ltd. | Remote storage system and method |
US7398302B2 (en) * | 2001-03-30 | 2008-07-08 | Hitachi, Ltd. | Remote copy with path selection and prioritization |
US20040158687A1 (en) * | 2002-05-01 | 2004-08-12 | The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | Distributed raid and location independence caching system |
EP1390854A4 (en) * | 2001-05-01 | 2006-02-22 | Rhode Island Education | Distributed raid and location independence caching system |
US7143252B2 (en) * | 2001-05-10 | 2006-11-28 | Hitachi, Ltd. | Storage apparatus system and method of data backup |
US7213114B2 (en) | 2001-05-10 | 2007-05-01 | Hitachi, Ltd. | Remote copy for a storage controller in a heterogeneous environment |
US7080381B2 (en) | 2001-05-31 | 2006-07-18 | International Business Machines Corporation | Message bridging system and method for singular server to singular or multiple event reception engines |
US7774492B2 (en) * | 2001-07-26 | 2010-08-10 | Citrix Systems, Inc. | System, method and computer program product to maximize server throughput while avoiding server overload by controlling the rate of establishing server-side net work connections |
EP1595363B1 (en) * | 2001-08-15 | 2016-07-13 | The Board of Governors for Higher Education State of Rhode Island and Providence Plantations | Scsi-to-ip cache storage device and method |
US9659292B1 (en) * | 2001-08-30 | 2017-05-23 | EMC IP Holding Company LLC | Storage-based replication of e-commerce transactions in real time |
US7509356B2 (en) * | 2001-09-06 | 2009-03-24 | Iron Mountain Incorporated | Data backup |
US7346623B2 (en) | 2001-09-28 | 2008-03-18 | Commvault Systems, Inc. | System and method for generating and managing quick recovery volumes |
EP1442387A4 (en) * | 2001-09-28 | 2008-01-23 | Commvault Systems Inc | System and method for archiving objects in an information store |
US7589737B2 (en) * | 2001-10-31 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | System and method for communicating graphics image data over a communication network |
US20030101155A1 (en) * | 2001-11-23 | 2003-05-29 | Parag Gokhale | Method and system for scheduling media exports |
DE60239358D1 (en) * | 2001-11-23 | 2011-04-14 | Commvault Systems Inc | SELECTIVE DATA DISPLACEMENT SYSTEM AND METHOD |
US7603518B2 (en) | 2005-12-19 | 2009-10-13 | Commvault Systems, Inc. | System and method for improved media identification in a storage device |
US8346733B2 (en) | 2006-12-22 | 2013-01-01 | Commvault Systems, Inc. | Systems and methods of media management, such as management of media to and from a media storage library |
US7596586B2 (en) | 2003-04-03 | 2009-09-29 | Commvault Systems, Inc. | System and method for extended media retention |
US7584227B2 (en) * | 2005-12-19 | 2009-09-01 | Commvault Systems, Inc. | System and method for containerized data storage and tracking |
US7296125B2 (en) * | 2001-11-29 | 2007-11-13 | Emc Corporation | Preserving a snapshot of selected data of a mass storage system |
JP3722057B2 (en) * | 2001-11-30 | 2005-11-30 | ソニー株式会社 | Data recording / reproducing apparatus, data recording / reproducing method, and digital camera |
US20030217062A1 (en) | 2001-12-18 | 2003-11-20 | Shawn Thomas | Method and system for asset transition project management |
KR100359423B1 (en) * | 2002-01-04 | 2002-11-07 | Ncerti Co Ltd | Very high speed high capacity backup system and backup method thereof |
US6779093B1 (en) * | 2002-02-15 | 2004-08-17 | Veritas Operating Corporation | Control facility for processing in-band control messages during data replication |
US6938056B2 (en) * | 2002-02-22 | 2005-08-30 | International Business Machines Corporation | System and method for restoring a file system from backups in the presence of deletions |
US7503042B2 (en) * | 2002-03-08 | 2009-03-10 | Microsoft Corporation | Non-script based intelligent migration tool capable of migrating software selected by a user, including software for which said migration tool has had no previous knowledge or encounters |
US7133894B2 (en) * | 2002-03-12 | 2006-11-07 | International Business Machines Corporation | Method, apparatus, and program for synchronous remote builds |
US7143307B1 (en) * | 2002-03-15 | 2006-11-28 | Network Appliance, Inc. | Remote disaster recovery and data migration using virtual appliance migration |
US7165154B2 (en) * | 2002-03-18 | 2007-01-16 | Net Integration Technologies Inc. | System and method for data backup |
US6857053B2 (en) * | 2002-04-10 | 2005-02-15 | International Business Machines Corporation | Method, system, and program for backing up objects by creating groups of objects |
US7082446B1 (en) * | 2002-04-15 | 2006-07-25 | Steel Eye Technology, Inc. | Hybrid transaction/intent log for data replication |
US7043732B2 (en) * | 2002-04-29 | 2006-05-09 | Sun Microsystems, Inc. | Method and apparatus for managing remote data replication using CIM providers in a distributed computer system |
US20030208511A1 (en) * | 2002-05-02 | 2003-11-06 | Earl Leroy D. | Database replication system |
US7546364B2 (en) * | 2002-05-16 | 2009-06-09 | Emc Corporation | Replication of remote copy data for internet protocol (IP) transmission |
US6792518B2 (en) | 2002-08-06 | 2004-09-14 | Emc Corporation | Data storage system having mata bit maps for indicating whether data blocks are invalid in snapshot copies |
US6934822B2 (en) * | 2002-08-06 | 2005-08-23 | Emc Corporation | Organization of multiple snapshot copies in a data storage system |
US6957362B2 (en) * | 2002-08-06 | 2005-10-18 | Emc Corporation | Instantaneous restoration of a production copy from a snapshot copy in a data storage system |
US7953926B2 (en) * | 2002-08-15 | 2011-05-31 | Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | SCSI-to-IP cache storage device and method |
WO2004023317A1 (en) * | 2002-09-09 | 2004-03-18 | Commvault Systems, Inc. | Dynamic storage device pooling in a computer system |
US8370542B2 (en) | 2002-09-16 | 2013-02-05 | Commvault Systems, Inc. | Combined stream auxiliary copy system and method |
AU2003272457A1 (en) * | 2002-09-16 | 2004-04-30 | Commvault Systems, Inc. | System and method for blind media support |
EP1579331A4 (en) | 2002-10-07 | 2007-05-23 | Commvault Systems Inc | System and method for managing stored data |
US8443036B2 (en) * | 2002-11-18 | 2013-05-14 | Siebel Systems, Inc. | Exchanging project-related data in a client-server architecture |
US7188125B1 (en) * | 2002-12-19 | 2007-03-06 | Veritas Operating Corporation | Replication using a special off-host network device |
TW583538B (en) * | 2003-01-17 | 2004-04-11 | Yu-De Wu | Method of remote redundancy |
US7769722B1 (en) | 2006-12-08 | 2010-08-03 | Emc Corporation | Replication and restoration of multiple data storage object types in a data network |
WO2004090788A2 (en) | 2003-04-03 | 2004-10-21 | Commvault Systems, Inc. | System and method for dynamically performing storage operations in a computer network |
WO2004090676A2 (en) * | 2003-04-03 | 2004-10-21 | Commvault Systems, Inc. | Remote disaster data recovery system and method |
US7631351B2 (en) * | 2003-04-03 | 2009-12-08 | Commvault Systems, Inc. | System and method for performing storage operations through a firewall |
WO2004090872A2 (en) * | 2003-04-03 | 2004-10-21 | Commvault Systems, Inc. | Method and system for controlling a robotic arm in a storage device |
CN100353707C (en) * | 2003-04-09 | 2007-12-05 | 华为技术有限公司 | Equipment capable of making equipment logging interface implement dynamic arrangement and its method
US7275177B2 (en) * | 2003-06-25 | 2007-09-25 | Emc Corporation | Data recovery with internet protocol replication with or without full resync |
US7567991B2 (en) * | 2003-06-25 | 2009-07-28 | Emc Corporation | Replication of snapshot using a file system copy differential |
US7454569B2 (en) * | 2003-06-25 | 2008-11-18 | Commvault Systems, Inc. | Hierarchical system and method for performing storage operations in a computer network |
JP4124348B2 (en) | 2003-06-27 | 2008-07-23 | 株式会社日立製作所 | Storage system |
JP4374953B2 (en) * | 2003-09-09 | 2009-12-02 | 株式会社日立製作所 | Data processing system |
US7130975B2 (en) * | 2003-06-27 | 2006-10-31 | Hitachi, Ltd. | Data processing system |
JP2005309550A (en) | 2004-04-19 | 2005-11-04 | Hitachi Ltd | Remote copying method and system |
US7287086B2 (en) * | 2003-07-09 | 2007-10-23 | International Business Machines Corporation | Methods, systems and computer program products for controlling data transfer for data replication or backup based on system and/or network resource information
EP1652048A4 (en) | 2003-07-21 | 2009-04-15 | Fusionone, Inc. | Device message management system
US7251708B1 (en) | 2003-08-07 | 2007-07-31 | Crossroads Systems, Inc. | System and method for maintaining and reporting a log of multi-threaded backups |
US7447852B1 (en) | 2003-08-07 | 2008-11-04 | Crossroads Systems, Inc. | System and method for message and error reporting for multiple concurrent extended copy commands to a single destination device |
US7552294B1 (en) | 2003-08-07 | 2009-06-23 | Crossroads Systems, Inc. | System and method for processing multiple concurrent extended copy commands to a single destination device |
US7953819B2 (en) * | 2003-08-22 | 2011-05-31 | Emc Corporation | Multi-protocol sharable virtual storage objects |
US20130117881A1 (en) | 2003-10-14 | 2013-05-09 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US7402667B2 (en) * | 2003-10-14 | 2008-07-22 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US11634723B2 (en) | 2003-09-11 | 2023-04-25 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US20060021083A1 (en) * | 2004-04-01 | 2006-01-26 | Zhihong Cook | Promoter, promoter control elements, and combinations, and uses thereof |
US7219201B2 (en) | 2003-09-17 | 2007-05-15 | Hitachi, Ltd. | Remote storage disk control device and method for controlling the same |
US7035881B2 (en) * | 2003-09-23 | 2006-04-25 | Emc Corporation | Organization of read-write snapshot copies in a data storage system |
US11739340B2 (en) | 2003-09-23 | 2023-08-29 | Ceres, Inc. | Promoter, promoter control elements, and combinations, and uses thereof |
US7225208B2 (en) * | 2003-09-30 | 2007-05-29 | Iron Mountain Incorporated | Systems and methods for backing up data files |
US7447850B1 (en) | 2003-10-05 | 2008-11-04 | Quantum Corporation | Associating events with the state of a data set |
EP1680742A2 (en) | 2003-11-04 | 2006-07-19 | Constant Data, Inc. | Hybrid real-time data replication |
US7870354B2 (en) * | 2003-11-04 | 2011-01-11 | Bakbone Software, Inc. | Data replication from one-to-one or one-to-many heterogeneous devices |
US7634509B2 (en) | 2003-11-07 | 2009-12-15 | Fusionone, Inc. | Personal information space management system and method |
US7978716B2 (en) | 2003-11-24 | 2011-07-12 | Citrix Systems, Inc. | Systems and methods for providing a VPN solution |
US7613748B2 (en) | 2003-11-13 | 2009-11-03 | Commvault Systems, Inc. | Stored data reverification management system and method |
CA2546304A1 (en) | 2003-11-13 | 2005-05-26 | Commvault Systems, Inc. | System and method for performing an image level snapshot and for restoring partial volume data |
GB2424297B (en) * | 2003-11-13 | 2007-06-27 | Commvault Systems Inc | System and method for data storage and tracking |
WO2005050381A2 (en) | 2003-11-13 | 2005-06-02 | Commvault Systems, Inc. | Systems and methods for performing storage operations using network attached storage |
US7440982B2 (en) | 2003-11-13 | 2008-10-21 | Commvault Systems, Inc. | System and method for stored data archive verification |
WO2005065084A2 (en) | 2003-11-13 | 2005-07-21 | Commvault Systems, Inc. | System and method for providing encryption in pipelined storage operations in a storage network |
GB2423850B (en) | 2003-11-13 | 2009-05-20 | Commvault Systems Inc | System and method for performing integrated storage operations |
GB2408355B (en) * | 2003-11-18 | 2007-02-14 | IBM | A system for verifying a state of an environment
US8001085B1 (en) | 2003-11-25 | 2011-08-16 | Symantec Operating Corporation | Remote data access for local operations |
US7680933B2 (en) | 2003-12-15 | 2010-03-16 | International Business Machines Corporation | Apparatus, system, and method for on-demand control of grid system resources |
JP4412989B2 (en) | 2003-12-15 | 2010-02-10 | 株式会社日立製作所 | Data processing system having a plurality of storage systems |
US7139887B2 (en) * | 2003-12-31 | 2006-11-21 | Veritas Operating Corporation | Coordinated storage management operations in replication environment |
JP4477370B2 (en) * | 2004-01-30 | 2010-06-09 | 株式会社日立製作所 | Data processing system |
US7383463B2 (en) * | 2004-02-04 | 2008-06-03 | Emc Corporation | Internet protocol based disaster recovery of a server |
US7685384B2 (en) * | 2004-02-06 | 2010-03-23 | Globalscape, Inc. | System and method for replicating files in a computer network |
EP1564975B1 (en) * | 2004-02-12 | 2006-11-15 | Alcatel | Service request handling method and storage system |
US20070006335A1 (en) * | 2004-02-13 | 2007-01-04 | Zhihong Cook | Promoter, promoter control elements, and combinations, and uses thereof |
US7536593B2 (en) * | 2004-03-05 | 2009-05-19 | International Business Machines Corporation | Apparatus, system, and method for emergency backup |
US20050223277A1 (en) * | 2004-03-23 | 2005-10-06 | Eacceleration Corporation | Online storage system |
US7499953B2 (en) * | 2004-04-23 | 2009-03-03 | Oracle International Corporation | Online recovery of user tables using flashback table |
US7343459B2 (en) | 2004-04-30 | 2008-03-11 | Commvault Systems, Inc. | Systems and methods for detecting & mitigating storage risks |
US8266406B2 (en) | 2004-04-30 | 2012-09-11 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
US8108429B2 (en) * | 2004-05-07 | 2012-01-31 | Quest Software, Inc. | System for moving real-time data events across a plurality of devices in a network for simultaneous data protection, replication, and access services |
US7565661B2 (en) | 2004-05-10 | 2009-07-21 | Siew Yong Sim-Tang | Method and system for real-time event journaling to provide enterprise data services |
KR20070038462A (en) | 2004-05-12 | 2007-04-10 | 퓨전원 인코포레이티드 | Advanced contact identification system |
US9542076B1 (en) | 2004-05-12 | 2017-01-10 | Synchronoss Technologies, Inc. | System for and method of updating a personal profile |
US20050256859A1 (en) * | 2004-05-13 | 2005-11-17 | International Business Machines Corporation | System, application and method of providing application programs continued access to frozen file systems
US7913043B2 (en) | 2004-05-14 | 2011-03-22 | Bakbone Software, Inc. | Method for backup storage device selection |
US20060036908A1 (en) * | 2004-08-10 | 2006-02-16 | Fabrice Helliker | System for backup storage device selection |
US7680834B1 (en) | 2004-06-08 | 2010-03-16 | Bakbone Software, Inc. | Method and system for no downtime resynchronization for real-time, continuous data protection
US8739274B2 (en) | 2004-06-30 | 2014-05-27 | Citrix Systems, Inc. | Method and device for performing integrated caching in a data communication network |
US8495305B2 (en) | 2004-06-30 | 2013-07-23 | Citrix Systems, Inc. | Method and device for performing caching of dynamically generated objects in a data communication network |
US7757074B2 (en) | 2004-06-30 | 2010-07-13 | Citrix Application Networking, Llc | System and method for establishing a virtual private network |
US8363650B2 (en) | 2004-07-23 | 2013-01-29 | Citrix Systems, Inc. | Method and systems for routing packets from a gateway to an endpoint |
AU2005266943C1 (en) | 2004-07-23 | 2011-01-06 | Citrix Systems, Inc. | Systems and methods for optimizing communications between network nodes |
US20060026587A1 (en) * | 2004-07-28 | 2006-02-02 | Lemarroy Luis A | Systems and methods for operating system migration |
JP4519563B2 (en) * | 2004-08-04 | 2010-08-04 | 株式会社日立製作所 | Storage system and data processing system |
US7681105B1 (en) | 2004-08-09 | 2010-03-16 | Bakbone Software, Inc. | Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network |
US7681104B1 (en) * | 2004-08-09 | 2010-03-16 | Bakbone Software, Inc. | Method for erasure coding data across a plurality of data stores in a network |
US8224784B2 (en) * | 2004-08-13 | 2012-07-17 | Microsoft Corporation | Combined computer disaster recovery and migration tool for effective disaster recovery as well as the backup and migration of user- and system-specific information |
CA2576569A1 (en) * | 2004-08-13 | 2006-02-23 | Citrix Systems, Inc. | A method for maintaining transaction integrity across multiple remote access servers |
US7634685B2 (en) | 2004-08-13 | 2009-12-15 | Microsoft Corporation | Remote computer disaster recovery and migration tool for effective disaster recovery and migration scheme |
US7360113B2 (en) * | 2004-08-30 | 2008-04-15 | Mendocino Software, Inc. | Protocol for communicating data block copies in an error recovery environment |
US7664983B2 (en) | 2004-08-30 | 2010-02-16 | Symantec Corporation | Systems and methods for event driven recovery management |
US7366742B1 (en) * | 2004-09-10 | 2008-04-29 | Symantec Operating Corporation | System and method for distributed discovery and management of frozen images in a storage environment |
US7979404B2 (en) * | 2004-09-17 | 2011-07-12 | Quest Software, Inc. | Extracting data changes and storing data history to allow for instantaneous access to and reconstruction of any point-in-time data |
CN100362509C (en) * | 2004-10-11 | 2008-01-16 | 佛山市顺德区顺达电脑厂有限公司 | Method and device for selecting and transmitting objects of computer among computers
JP2006127028A (en) | 2004-10-27 | 2006-05-18 | Hitachi Ltd | Memory system and storage controller |
US7904913B2 (en) | 2004-11-02 | 2011-03-08 | Bakbone Software, Inc. | Management interface for a system that provides automated, real-time, continuous data protection |
CA2583912A1 (en) * | 2004-11-05 | 2006-05-18 | Commvault Systems, Inc. | System and method to support single instance storage operations |
US7490207B2 (en) * | 2004-11-08 | 2009-02-10 | Commvault Systems, Inc. | System and method for performing auxiliary storage operations
US8959299B2 (en) | 2004-11-15 | 2015-02-17 | Commvault Systems, Inc. | Using a snapshot as a data source |
US8775823B2 (en) * | 2006-12-29 | 2014-07-08 | Commvault Systems, Inc. | System and method for encrypting secondary copies of data |
US7461102B2 (en) * | 2004-12-09 | 2008-12-02 | International Business Machines Corporation | Method for performing scheduled backups of a backup node associated with a plurality of agent nodes |
US20060143412A1 (en) * | 2004-12-28 | 2006-06-29 | Philippe Armangau | Snapshot copy facility maintaining read performance and write performance |
US8954595B2 (en) | 2004-12-30 | 2015-02-10 | Citrix Systems, Inc. | Systems and methods for providing client-side accelerated access to remote applications via TCP buffering |
US8700695B2 (en) | 2004-12-30 | 2014-04-15 | Citrix Systems, Inc. | Systems and methods for providing client-side accelerated access to remote applications via TCP pooling |
US8549149B2 (en) | 2004-12-30 | 2013-10-01 | Citrix Systems, Inc. | Systems and methods for providing client-side accelerated access to remote applications via TCP multiplexing |
US7810089B2 (en) | 2004-12-30 | 2010-10-05 | Citrix Systems, Inc. | Systems and methods for automatic installation and execution of a client-side acceleration program |
US8706877B2 (en) | 2004-12-30 | 2014-04-22 | Citrix Systems, Inc. | Systems and methods for providing client-side dynamic redirection to bypass an intermediary |
US8255456B2 (en) | 2005-12-30 | 2012-08-28 | Citrix Systems, Inc. | System and method for performing flash caching of dynamically generated objects in a data communication network |
EP2739014B1 (en) | 2005-01-24 | 2018-08-01 | Citrix Systems, Inc. | Systems and methods for performing caching of dynamically generated objects in a network |
US7870416B2 (en) * | 2005-02-07 | 2011-01-11 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US7657780B2 (en) * | 2005-02-07 | 2010-02-02 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US7778976B2 (en) * | 2005-02-07 | 2010-08-17 | Mimosa, Inc. | Multi-dimensional surrogates for data management |
US8799206B2 (en) * | 2005-02-07 | 2014-08-05 | Mimosa Systems, Inc. | Dynamic bulk-to-brick transformation of data |
US20070143366A1 (en) * | 2005-02-07 | 2007-06-21 | D Souza Roy P | Retro-fitting synthetic full copies of data |
US8918366B2 (en) * | 2005-02-07 | 2014-12-23 | Mimosa Systems, Inc. | Synthetic full copies of data and dynamic bulk-to-brick transformation |
US8543542B2 (en) * | 2005-02-07 | 2013-09-24 | Mimosa Systems, Inc. | Synthetic full copies of data and dynamic bulk-to-brick transformation |
US8275749B2 (en) * | 2005-02-07 | 2012-09-25 | Mimosa Systems, Inc. | Enterprise server version migration through identity preservation |
US7917475B2 (en) * | 2005-02-07 | 2011-03-29 | Mimosa Systems, Inc. | Enterprise server version migration through identity preservation |
US8812433B2 (en) * | 2005-02-07 | 2014-08-19 | Mimosa Systems, Inc. | Dynamic bulk-to-brick transformation of data |
US8271436B2 (en) * | 2005-02-07 | 2012-09-18 | Mimosa Systems, Inc. | Retro-fitting synthetic full copies of data |
US8161318B2 (en) * | 2005-02-07 | 2012-04-17 | Mimosa Systems, Inc. | Enterprise service availability through identity preservation |
US20060183469A1 (en) * | 2005-02-16 | 2006-08-17 | Gadson Gregory P | Mobile communication device backup, disaster recovery, and migration scheme |
US7634758B2 (en) * | 2005-03-02 | 2009-12-15 | Computer Associates Think, Inc. | System and method for backing up open files of a source control management repository |
US7814260B2 (en) * | 2005-03-09 | 2010-10-12 | Bakbone Software, Inc. | Tape image on non-tape storage device |
US7546431B2 (en) * | 2005-03-21 | 2009-06-09 | Emc Corporation | Distributed open writable snapshot copy facility using file migration policies |
US8055724B2 (en) * | 2005-03-21 | 2011-11-08 | Emc Corporation | Selection of migration methods including partial read restore in distributed storage management |
US8112605B2 (en) * | 2005-05-02 | 2012-02-07 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
JP4708862B2 (en) * | 2005-05-26 | 2011-06-22 | キヤノン株式会社 | Optical scanning device and image forming apparatus using the same |
US7979650B2 (en) | 2005-06-13 | 2011-07-12 | Quest Software, Inc. | Discovering data storage for backup |
US7788521B1 (en) | 2005-07-20 | 2010-08-31 | Bakbone Software, Inc. | Method and system for virtual on-demand recovery for real-time, continuous data protection |
US7689602B1 (en) | 2005-07-20 | 2010-03-30 | Bakbone Software, Inc. | Method of creating hierarchical indices for a distributed object system |
US7602906B2 (en) * | 2005-08-25 | 2009-10-13 | Microsoft Corporation | Cipher for disk encryption |
US7613739B2 (en) * | 2005-11-17 | 2009-11-03 | Research In Motion Limited | Method and apparatus for synchronizing databases connected by wireless interface |
US8000683B2 (en) * | 2005-11-17 | 2011-08-16 | Research In Motion Limited | System and method for communication record logging |
EP1801710A1 (en) | 2005-11-17 | 2007-06-27 | Research In Motion Limited | Method and apparatus for synchronizing databases connected by wireless interface |
US7657550B2 (en) | 2005-11-28 | 2010-02-02 | Commvault Systems, Inc. | User interfaces and methods for managing data in a metabase |
US7822749B2 (en) | 2005-11-28 | 2010-10-26 | Commvault Systems, Inc. | Systems and methods for classifying and transferring information in a storage network |
US7765187B2 (en) | 2005-11-29 | 2010-07-27 | Emc Corporation | Replication of a consistency group of data storage objects from servers in a data network |
US20070198422A1 (en) * | 2005-12-19 | 2007-08-23 | Anand Prahlad | System and method for providing a flexible licensing system for digital content |
US20070166674A1 (en) * | 2005-12-19 | 2007-07-19 | Kochunni Jaidev O | Systems and methods for generating configuration metrics in a storage network |
US7617262B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Systems and methods for monitoring application data in a data replication system |
US20110010518A1 (en) | 2005-12-19 | 2011-01-13 | Srinivas Kavuri | Systems and Methods for Migrating Components in a Hierarchical Storage Network |
US7651593B2 (en) | 2005-12-19 | 2010-01-26 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7962709B2 (en) | 2005-12-19 | 2011-06-14 | Commvault Systems, Inc. | Network redirector systems and methods for performing data replication |
US20200257596A1 (en) | 2005-12-19 | 2020-08-13 | Commvault Systems, Inc. | Systems and methods of unified reconstruction in storage systems |
US7457790B2 (en) * | 2005-12-19 | 2008-11-25 | Commvault Systems, Inc. | Extensible configuration engine system and method |
US8655850B2 (en) | 2005-12-19 | 2014-02-18 | Commvault Systems, Inc. | Systems and methods for resynchronizing information |
US7636743B2 (en) | 2005-12-19 | 2009-12-22 | Commvault Systems, Inc. | Pathname translation in a data replication system |
US8930496B2 (en) | 2005-12-19 | 2015-01-06 | Commvault Systems, Inc. | Systems and methods of unified reconstruction in storage systems |
US7620710B2 (en) | 2005-12-19 | 2009-11-17 | Commvault Systems, Inc. | System and method for performing multi-path storage operations |
EP1974296B8 (en) | 2005-12-19 | 2016-09-21 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7543125B2 (en) | 2005-12-19 | 2009-06-02 | Commvault Systems, Inc. | System and method for performing time-flexible calendric storage operations |
US7606844B2 (en) | 2005-12-19 | 2009-10-20 | Commvault Systems, Inc. | System and method for performing replication copy storage operations |
US8572330B2 (en) | 2005-12-19 | 2013-10-29 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US7921184B2 (en) | 2005-12-30 | 2011-04-05 | Citrix Systems, Inc. | System and method for performing flash crowd caching of dynamically generated objects in a data communication network |
US8301839B2 (en) | 2005-12-30 | 2012-10-30 | Citrix Systems, Inc. | System and method for performing granular invalidation of cached dynamically generated objects in a data communication network |
US8170985B2 (en) * | 2006-01-31 | 2012-05-01 | Emc Corporation | Primary stub file retention and secondary retention coordination in a hierarchical storage system |
US7546432B2 (en) * | 2006-05-09 | 2009-06-09 | Emc Corporation | Pass-through write policies of files in distributed storage management |
US8726242B2 (en) | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
WO2008018969A1 (en) * | 2006-08-04 | 2008-02-14 | Parallel Computers Technology, Inc. | Apparatus and method of optimizing database clustering with zero transaction loss |
US20080066192A1 (en) * | 2006-09-07 | 2008-03-13 | International Business Machines Corporation | Keyless copy of encrypted data |
US20080201223A1 (en) * | 2006-09-19 | 2008-08-21 | Lutnick Howard W | Products and processes for providing information services |
US7539783B2 (en) | 2006-09-22 | 2009-05-26 | Commvault Systems, Inc. | Systems and methods of media management, such as management of media to and from a media storage library, including removable media |
US8655914B2 (en) | 2006-10-17 | 2014-02-18 | Commvault Systems, Inc. | System and method for storage operation access security |
US7882077B2 (en) | 2006-10-17 | 2011-02-01 | Commvault Systems, Inc. | Method and system for offline indexing of content and classifying stored data |
US7792789B2 (en) | 2006-10-17 | 2010-09-07 | Commvault Systems, Inc. | Method and system for collaborative searching |
US8238882B2 (en) | 2006-10-19 | 2012-08-07 | Research In Motion Limited | System and method for storage of electronic mail |
US8370442B2 (en) | 2008-08-29 | 2013-02-05 | Commvault Systems, Inc. | Method and system for leveraging identified changes to a mail server |
CA2705379C (en) | 2006-12-04 | 2016-08-30 | Commvault Systems, Inc. | Systems and methods for creating copies of data, such as archive copies |
US8706833B1 (en) | 2006-12-08 | 2014-04-22 | Emc Corporation | Data storage server having common replication architecture for multiple storage object types |
EP1933236A1 (en) * | 2006-12-12 | 2008-06-18 | Ixiar Technologies | Branch Office and remote server smart archiving based on mirroring and replication software |
US8677091B2 (en) | 2006-12-18 | 2014-03-18 | Commvault Systems, Inc. | Writing data and storage system specific metadata to network attached storage device |
US20080155205A1 (en) * | 2006-12-22 | 2008-06-26 | Parag Gokhale | Systems and methods of data storage management, such as dynamic data stream allocation |
US8312323B2 (en) | 2006-12-22 | 2012-11-13 | Commvault Systems, Inc. | Systems and methods for remote monitoring in a computer network and reporting a failed migration operation without accessing the data being moved |
US7734669B2 (en) | 2006-12-22 | 2010-06-08 | Commvault Systems, Inc. | Managing copies of data |
US8719809B2 (en) | 2006-12-22 | 2014-05-06 | Commvault Systems, Inc. | Point in time rollback and un-installation of software |
US7831766B2 (en) | 2006-12-22 | 2010-11-09 | Commvault Systems, Inc. | Systems and methods of data storage management, such as pre-allocation of storage space
US7840537B2 (en) | 2006-12-22 | 2010-11-23 | Commvault Systems, Inc. | System and method for storing redundant information |
US7831566B2 (en) | 2006-12-22 | 2010-11-09 | Commvault Systems, Inc. | Systems and methods of hierarchical storage management, such as global management of storage operations |
US20080228771A1 (en) | 2006-12-22 | 2008-09-18 | Commvault Systems, Inc. | Method and system for searching stored data |
US7769972B2 (en) * | 2007-01-18 | 2010-08-03 | Lsi Corporation | Storage system management based on a backup and recovery solution embedded in the storage system |
US8290808B2 (en) | 2007-03-09 | 2012-10-16 | Commvault Systems, Inc. | System and method for automating customer-validated statement of work for a data storage environment |
US8131723B2 (en) * | 2007-03-30 | 2012-03-06 | Quest Software, Inc. | Recovering a file system to any point-in-time in the past with guaranteed structure, content consistency and integrity |
US8364648B1 (en) | 2007-04-09 | 2013-01-29 | Quest Software, Inc. | Recovering a database to any point-in-time in the past with guaranteed data consistency |
US20080250085A1 (en) * | 2007-04-09 | 2008-10-09 | Microsoft Corporation | Backup system having preinstalled backup data |
CA2695470C (en) | 2007-08-28 | 2014-08-26 | Commvault Systems, Inc. | Power management of data processing resources, such as power adaptive management of data storage operations |
US8706976B2 (en) | 2007-08-30 | 2014-04-22 | Commvault Systems, Inc. | Parallel access virtual tape library and drives |
US8396838B2 (en) | 2007-10-17 | 2013-03-12 | Commvault Systems, Inc. | Legal compliance, electronic discovery and electronic document handling of online and offline copies of data |
US20090150533A1 (en) * | 2007-12-07 | 2009-06-11 | Brocade Communications Systems, Inc. | Detecting need to access metadata during directory operations |
US8181111B1 (en) | 2007-12-31 | 2012-05-15 | Synchronoss Technologies, Inc. | System and method for providing social context to digital activity |
CN101918921B (en) | 2008-01-27 | 2013-12-04 | 思杰系统有限公司 | Methods and systems for remoting three dimensional graphics |
US7836174B2 (en) | 2008-01-30 | 2010-11-16 | Commvault Systems, Inc. | Systems and methods for grid-based data scanning |
US8296301B2 (en) | 2008-01-30 | 2012-10-23 | Commvault Systems, Inc. | Systems and methods for probabilistic data classification |
US8001079B2 (en) * | 2008-02-29 | 2011-08-16 | Double-Take Software Inc. | System and method for system state replication |
US9946493B2 (en) * | 2008-04-04 | 2018-04-17 | International Business Machines Corporation | Coordinated remote and local machine configuration |
US8271612B2 (en) * | 2008-04-04 | 2012-09-18 | International Business Machines Corporation | On-demand virtual storage capacity |
US8055723B2 (en) * | 2008-04-04 | 2011-11-08 | International Business Machines Corporation | Virtual array site configuration |
US8769048B2 (en) | 2008-06-18 | 2014-07-01 | Commvault Systems, Inc. | Data protection scheduling, such as providing a flexible backup window in a data protection system |
US8352954B2 (en) | 2008-06-19 | 2013-01-08 | Commvault Systems, Inc. | Data storage resource allocation by employing dynamic methods and blacklisting resource request pools |
US9128883B2 (en) | 2008-06-19 | 2015-09-08 | Commvault Systems, Inc. | Data storage resource allocation by performing abbreviated resource checks based on relative chances of failure of the data storage resources to determine whether data storage requests would fail
US8484162B2 (en) | 2008-06-24 | 2013-07-09 | Commvault Systems, Inc. | De-duplication systems and methods for application-specific data |
US9098495B2 (en) | 2008-06-24 | 2015-08-04 | Commvault Systems, Inc. | Application-aware and remote single instance data management |
US8219524B2 (en) | 2008-06-24 | 2012-07-10 | Commvault Systems, Inc. | Application-aware and remote single instance data management |
US8335776B2 (en) | 2008-07-02 | 2012-12-18 | Commvault Systems, Inc. | Distributed indexing system for data storage |
US8166263B2 (en) | 2008-07-03 | 2012-04-24 | Commvault Systems, Inc. | Continuous data protection over intermittent connections, such as continuous data backup for laptops or wireless devices |
US8307177B2 (en) | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US8725688B2 (en) | 2008-09-05 | 2014-05-13 | Commvault Systems, Inc. | Image level copy or restore, such as image level restore without knowledge of data object metadata |
US20100070474A1 (en) | 2008-09-12 | 2010-03-18 | Lad Kamleshkumar K | Transferring or migrating portions of data objects, such as block-level data migration or chunk-based data migration |
US20100070466A1 (en) | 2008-09-15 | 2010-03-18 | Anand Prahlad | Data transfer techniques within data storage devices, such as network attached storage performing data migration |
US8311985B2 (en) * | 2008-09-16 | 2012-11-13 | Quest Software, Inc. | Remote backup and restore system and method |
US8452731B2 (en) | 2008-09-25 | 2013-05-28 | Quest Software, Inc. | Remote backup and restore |
US9015181B2 (en) | 2008-09-26 | 2015-04-21 | Commvault Systems, Inc. | Systems and methods for managing single instancing data |
AU2009296695B2 (en) | 2008-09-26 | 2013-08-01 | Commvault Systems, Inc. | Systems and methods for managing single instancing data |
US20090033669A1 (en) * | 2008-10-06 | 2009-02-05 | Hochmuth Roland M | System And Method For Communicating Graphics Image Data Over A Communication Network |
US9178842B2 (en) | 2008-11-05 | 2015-11-03 | Commvault Systems, Inc. | Systems and methods for monitoring messaging applications for compliance with a policy |
US8412677B2 (en) | 2008-11-26 | 2013-04-02 | Commvault Systems, Inc. | Systems and methods for byte-level or quasi byte-level single instancing |
US9495382B2 (en) | 2008-12-10 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods for performing discrete data replication |
US8204859B2 (en) | 2008-12-10 | 2012-06-19 | Commvault Systems, Inc. | Systems and methods for managing replicated database data |
US8434131B2 (en) | 2009-03-20 | 2013-04-30 | Commvault Systems, Inc. | Managing connections in a data storage system |
US8401996B2 (en) | 2009-03-30 | 2013-03-19 | Commvault Systems, Inc. | Storing a variable number of instances of data objects |
US8578120B2 (en) | 2009-05-22 | 2013-11-05 | Commvault Systems, Inc. | Block-level single instancing |
US8238538B2 (en) | 2009-05-28 | 2012-08-07 | Comcast Cable Communications, Llc | Stateful home phone service |
US8285681B2 (en) * | 2009-06-30 | 2012-10-09 | Commvault Systems, Inc. | Data object store and server for a cloud storage environment, including data deduplication and data management across multiple cloud storage sites |
US8930306B1 (en) | 2009-07-08 | 2015-01-06 | Commvault Systems, Inc. | Synchronized data deduplication |
JP2011048723A (en) * | 2009-08-28 | 2011-03-10 | Fuji Xerox Co Ltd | Program and apparatus for processing information |
US8578138B2 (en) | 2009-08-31 | 2013-11-05 | Intel Corporation | Enabling storage of active state in internal storage of processor rather than in SMRAM upon entry to system management mode |
US8706867B2 (en) | 2011-03-31 | 2014-04-22 | Commvault Systems, Inc. | Realtime streaming of multimedia content from secondary storage devices |
US8719767B2 (en) | 2011-03-31 | 2014-05-06 | Commvault Systems, Inc. | Utilizing snapshots to provide builds to developer computing devices |
US9092500B2 (en) | 2009-09-03 | 2015-07-28 | Commvault Systems, Inc. | Utilizing snapshots for access to databases and other applications |
US8255006B1 (en) | 2009-11-10 | 2012-08-28 | Fusionone, Inc. | Event dependent notification system and method |
WO2011082113A1 (en) | 2009-12-31 | 2011-07-07 | Commvault Systems, Inc. | Asynchronous methods of data classification using change journals and other data structures |
WO2011082132A1 (en) | 2009-12-31 | 2011-07-07 | Commvault Systems, Inc. | Systems and methods for analyzing snapshots |
EP2519872A4 (en) | 2009-12-31 | 2015-08-26 | Commvault Systems Inc | Systems and methods for performing data management operations using snapshots |
US8600935B1 (en) * | 2010-03-03 | 2013-12-03 | Symantec Corporation | Systems and methods for achieving file-level data-protection operations using block-level technologies |
US8504517B2 (en) | 2010-03-29 | 2013-08-06 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8504515B2 (en) | 2010-03-30 | 2013-08-06 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US8725698B2 (en) | 2010-03-30 | 2014-05-13 | Commvault Systems, Inc. | Stub file prioritization in a data replication system |
US8352422B2 (en) | 2010-03-30 | 2013-01-08 | Commvault Systems, Inc. | Data restore systems and methods in a replication environment |
US8589347B2 (en) | 2010-05-28 | 2013-11-19 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US11449394B2 (en) | 2010-06-04 | 2022-09-20 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US8935492B2 (en) | 2010-09-30 | 2015-01-13 | Commvault Systems, Inc. | Archiving data objects using secondary copies |
US9244779B2 (en) | 2010-09-30 | 2016-01-26 | Commvault Systems, Inc. | Data recovery operations, such as recovery from modified network data management protocol data |
US8620870B2 (en) | 2010-09-30 | 2013-12-31 | Commvault Systems, Inc. | Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system |
US8578109B2 (en) | 2010-09-30 | 2013-11-05 | Commvault Systems, Inc. | Systems and methods for retaining and using data block signatures in data protection operations |
US8364652B2 (en) | 2010-09-30 | 2013-01-29 | Commvault Systems, Inc. | Content aligned block-based deduplication |
US8943428B2 (en) | 2010-11-01 | 2015-01-27 | Synchronoss Technologies, Inc. | System for and method of field mapping |
US9020900B2 (en) | 2010-12-14 | 2015-04-28 | Commvault Systems, Inc. | Distributed deduplicated storage system |
US9104623B2 (en) | 2010-12-14 | 2015-08-11 | Commvault Systems, Inc. | Client-side repository in a networked deduplicated storage system |
US9021198B1 (en) | 2011-01-20 | 2015-04-28 | Commvault Systems, Inc. | System and method for sharing SAN storage |
US8719264B2 (en) | 2011-03-31 | 2014-05-06 | Commvault Systems, Inc. | Creating secondary copies of data based on searches for content |
US8849762B2 (en) | 2011-03-31 | 2014-09-30 | Commvault Systems, Inc. | Restoring computing environments, such as autorecovery of file systems at certain points in time |
EP2724264B1 (en) * | 2011-06-23 | 2020-12-16 | Red Hat, Inc. | Client-based data replication |
US8630983B2 (en) * | 2011-08-27 | 2014-01-14 | Accenture Global Services Limited | Backup of data across network of devices |
US9372827B2 (en) | 2011-09-30 | 2016-06-21 | Commvault Systems, Inc. | Migration of an existing computing system to new hardware |
US9116633B2 (en) | 2011-09-30 | 2015-08-25 | Commvault Systems, Inc. | Information management of virtual machines having mapped storage devices |
US9461881B2 (en) | 2011-09-30 | 2016-10-04 | Commvault Systems, Inc. | Migration of existing computing systems to cloud computing sites or virtual machines |
US9471578B2 (en) | 2012-03-07 | 2016-10-18 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9298715B2 (en) | 2012-03-07 | 2016-03-29 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9020890B2 (en) | 2012-03-30 | 2015-04-28 | Commvault Systems, Inc. | Smart archiving and data previewing for mobile devices |
US8950009B2 (en) | 2012-03-30 | 2015-02-03 | Commvault Systems, Inc. | Information management of data associated with multiple cloud services |
US9639297B2 (en) | 2012-03-30 | 2017-05-02 | Commvault Systems, Inc. | Shared network-available storage that permits concurrent data access
US9262496B2 (en) | 2012-03-30 | 2016-02-16 | Commvault Systems, Inc. | Unified access to personal data |
US9063938B2 (en) | 2012-03-30 | 2015-06-23 | Commvault Systems, Inc. | Search filtered file system using secondary storage, including multi-dimensional indexing and searching of archived files |
AU2013202553B2 (en) | 2012-03-30 | 2015-10-01 | Commvault Systems, Inc. | Information management of mobile device data |
US10157184B2 (en) | 2012-03-30 | 2018-12-18 | Commvault Systems, Inc. | Data previewing before recalling large data files |
US9342537B2 (en) | 2012-04-23 | 2016-05-17 | Commvault Systems, Inc. | Integrated snapshot interface for a data storage system |
US8892523B2 (en) | 2012-06-08 | 2014-11-18 | Commvault Systems, Inc. | Auto summarization of content |
US9218375B2 (en) | 2012-06-13 | 2015-12-22 | Commvault Systems, Inc. | Dedicated client-side signature generator in a networked storage system |
KR20140047230A (en) * | 2012-10-10 | 2014-04-22 | (주)티베로 | Method for optimizing distributed transaction in distributed system and distributed system with optimized distributed transaction |
US20140156745A1 (en) * | 2012-11-30 | 2014-06-05 | Facebook, Inc. | Distributing user information across replicated servers |
US20140181085A1 (en) | 2012-12-21 | 2014-06-26 | Commvault Systems, Inc. | Data storage system for analysis of data across heterogeneous information management systems |
US20140181044A1 (en) | 2012-12-21 | 2014-06-26 | Commvault Systems, Inc. | Systems and methods to identify uncharacterized and unprotected virtual machines |
US10379988B2 (en) | 2012-12-21 | 2019-08-13 | Commvault Systems, Inc. | Systems and methods for performance monitoring |
US9223597B2 (en) | 2012-12-21 | 2015-12-29 | Commvault Systems, Inc. | Archiving virtual machines in a data storage system |
US9069799B2 (en) | 2012-12-27 | 2015-06-30 | Commvault Systems, Inc. | Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system |
US9633216B2 (en) | 2012-12-27 | 2017-04-25 | Commvault Systems, Inc. | Application of information management policies based on operation with a geographic entity |
US9021452B2 (en) | 2012-12-27 | 2015-04-28 | Commvault Systems, Inc. | Automatic identification of storage requirements, such as for use in selling data storage management solutions |
US9378035B2 (en) | 2012-12-28 | 2016-06-28 | Commvault Systems, Inc. | Systems and methods for repurposing virtual machines |
US9633022B2 (en) | 2012-12-28 | 2017-04-25 | Commvault Systems, Inc. | Backup and restoration for a deduplicated file system |
US10346259B2 (en) | 2012-12-28 | 2019-07-09 | Commvault Systems, Inc. | Data recovery using a cloud-based remote data recovery center |
US8977587B2 (en) * | 2013-01-03 | 2015-03-10 | International Business Machines Corporation | Sampling transactions from multi-level log file records |
US20140196038A1 (en) | 2013-01-08 | 2014-07-10 | Commvault Systems, Inc. | Virtual machine management in a data storage system |
US20140201162A1 (en) | 2013-01-11 | 2014-07-17 | Commvault Systems, Inc. | Systems and methods to restore selected files from block-level backup for virtual machines |
US9886346B2 (en) | 2013-01-11 | 2018-02-06 | Commvault Systems, Inc. | Single snapshot for multiple agents |
US9633033B2 (en) | 2013-01-11 | 2017-04-25 | Commvault Systems, Inc. | High availability distributed deduplicated storage system |
US20140201140A1 (en) | 2013-01-11 | 2014-07-17 | Commvault Systems, Inc. | Data synchronization management |
US9286110B2 (en) | 2013-01-14 | 2016-03-15 | Commvault Systems, Inc. | Seamless virtual machine recall in a data storage system |
US9459968B2 (en) | 2013-03-11 | 2016-10-04 | Commvault Systems, Inc. | Single index to query multiple backup formats |
US9483655B2 (en) | 2013-03-12 | 2016-11-01 | Commvault Systems, Inc. | File backup with selective encryption |
US9939981B2 (en) | 2013-09-12 | 2018-04-10 | Commvault Systems, Inc. | File manager integration with virtualization in an information management system with an enhanced storage manager, including user control and storage management of virtual machines |
US10949382B2 (en) | 2014-01-15 | 2021-03-16 | Commvault Systems, Inc. | User-centric interfaces for information management systems |
US9753812B2 (en) | 2014-01-24 | 2017-09-05 | Commvault Systems, Inc. | Generating mapping information for single snapshot for multiple applications |
US9639426B2 (en) | 2014-01-24 | 2017-05-02 | Commvault Systems, Inc. | Single snapshot for multiple applications |
US9632874B2 (en) | 2014-01-24 | 2017-04-25 | Commvault Systems, Inc. | Database application backup in single snapshot for multiple applications |
US9495251B2 (en) | 2014-01-24 | 2016-11-15 | Commvault Systems, Inc. | Snapshot readiness checking and reporting |
US10324897B2 (en) | 2014-01-27 | 2019-06-18 | Commvault Systems, Inc. | Techniques for serving archived electronic mail |
US10169121B2 (en) | 2014-02-27 | 2019-01-01 | Commvault Systems, Inc. | Work flow management for an information management system |
US9648100B2 (en) | 2014-03-05 | 2017-05-09 | Commvault Systems, Inc. | Cross-system storage management for transferring data across autonomous information management systems |
US10380072B2 (en) | 2014-03-17 | 2019-08-13 | Commvault Systems, Inc. | Managing deletions from a deduplication database |
US9633056B2 (en) | 2014-03-17 | 2017-04-25 | Commvault Systems, Inc. | Maintaining a deduplication database |
US9563518B2 (en) | 2014-04-02 | 2017-02-07 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
US9823978B2 (en) | 2014-04-16 | 2017-11-21 | Commvault Systems, Inc. | User-level quota management of data objects stored in information management systems |
US9740574B2 (en) | 2014-05-09 | 2017-08-22 | Commvault Systems, Inc. | Load balancing across multiple data paths |
US10127119B1 (en) * | 2014-05-21 | 2018-11-13 | Veritas Technologies, LLC | Systems and methods for modifying track logs during restore processes |
US9848045B2 (en) | 2014-05-27 | 2017-12-19 | Commvault Systems, Inc. | Offline messaging between a repository storage operation cell and remote storage operation cells via an intermediary media agent |
US9760446B2 (en) | 2014-06-11 | 2017-09-12 | Micron Technology, Inc. | Conveying value of implementing an integrated data management and protection system |
US20160019317A1 (en) | 2014-07-16 | 2016-01-21 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US9852026B2 (en) | 2014-08-06 | 2017-12-26 | Commvault Systems, Inc. | Efficient application recovery in an information management system based on a pseudo-storage-device driver |
US11249858B2 (en) | 2014-08-06 | 2022-02-15 | Commvault Systems, Inc. | Point-in-time backups of a production application made accessible over fibre channel and/or ISCSI as data sources to a remote application by representing the backups as pseudo-disks operating apart from the production application and its host |
US10042716B2 (en) | 2014-09-03 | 2018-08-07 | Commvault Systems, Inc. | Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent |
US9774672B2 (en) | 2014-09-03 | 2017-09-26 | Commvault Systems, Inc. | Consolidated processing of storage-array commands by a snapshot-control media agent |
US9405928B2 (en) | 2014-09-17 | 2016-08-02 | Commvault Systems, Inc. | Deriving encryption rules based on file content |
US9710465B2 (en) | 2014-09-22 | 2017-07-18 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US9417968B2 (en) | 2014-09-22 | 2016-08-16 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US9436555B2 (en) | 2014-09-22 | 2016-09-06 | Commvault Systems, Inc. | Efficient live-mount of a backed up virtual machine in a storage management system |
US20160100004A1 (en) * | 2014-10-06 | 2016-04-07 | International Business Machines Corporation | Data replication across servers |
US9444811B2 (en) | 2014-10-21 | 2016-09-13 | Commvault Systems, Inc. | Using an enhanced data agent to restore backed up data across autonomous storage management systems |
US9575673B2 (en) | 2014-10-29 | 2017-02-21 | Commvault Systems, Inc. | Accessing a file system using tiered deduplication |
US10776209B2 (en) | 2014-11-10 | 2020-09-15 | Commvault Systems, Inc. | Cross-platform virtual machine backup and replication |
US9448731B2 (en) | 2014-11-14 | 2016-09-20 | Commvault Systems, Inc. | Unified snapshot storage management |
US9648105B2 (en) | 2014-11-14 | 2017-05-09 | Commvault Systems, Inc. | Unified snapshot storage management, using an enhanced storage manager and enhanced media agents |
US20160142485A1 (en) | 2014-11-19 | 2016-05-19 | Commvault Systems, Inc. | Migration to cloud storage from backup |
US9983936B2 (en) | 2014-11-20 | 2018-05-29 | Commvault Systems, Inc. | Virtual machine change block tracking |
US9904481B2 (en) | 2015-01-23 | 2018-02-27 | Commvault Systems, Inc. | Scalable auxiliary copy processing in a storage management system using media agent resources |
US9898213B2 (en) | 2015-01-23 | 2018-02-20 | Commvault Systems, Inc. | Scalable auxiliary copy processing using media agent resources |
US10313243B2 (en) | 2015-02-24 | 2019-06-04 | Commvault Systems, Inc. | Intelligent local management of data stream throttling in secondary-copy operations |
US10956299B2 (en) | 2015-02-27 | 2021-03-23 | Commvault Systems, Inc. | Diagnosing errors in data storage and archiving in a cloud or networking environment |
US9928144B2 (en) | 2015-03-30 | 2018-03-27 | Commvault Systems, Inc. | Storage management of data using an open-archive architecture, including streamlined access to primary data originally stored on network-attached storage and archived to secondary storage |
US10339106B2 (en) | 2015-04-09 | 2019-07-02 | Commvault Systems, Inc. | Highly reusable deduplication database after disaster recovery |
US10311150B2 (en) | 2015-04-10 | 2019-06-04 | Commvault Systems, Inc. | Using a Unix-based file system to manage and serve clones to windows-based computing clients |
US10324914B2 (en) | 2015-05-20 | 2019-06-18 | Commvault Systems, Inc. | Handling user queries against production and archive storage systems, such as for enterprise customers having large and/or numerous files
US20160350391A1 (en) | 2015-05-26 | 2016-12-01 | Commvault Systems, Inc. | Replication using deduplicated secondary copy data |
US9563514B2 (en) | 2015-06-19 | 2017-02-07 | Commvault Systems, Inc. | Assignment of proxies for virtual-machine secondary copy operations including streaming backup jobs |
US10084873B2 (en) | 2015-06-19 | 2018-09-25 | Commvault Systems, Inc. | Assignment of data agent proxies for executing virtual-machine secondary copy operations including streaming backup jobs |
US10275320B2 (en) | 2015-06-26 | 2019-04-30 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation |
US9846621B1 (en) * | 2015-06-26 | 2017-12-19 | Western Digital Technologies, Inc. | Disaster recovery—multiple restore options and automatic management of restored computing devices |
US9766825B2 (en) | 2015-07-22 | 2017-09-19 | Commvault Systems, Inc. | Browse and restore for block-level backups |
US10101913B2 (en) | 2015-09-02 | 2018-10-16 | Commvault Systems, Inc. | Migrating data to disk without interrupting running backup operations |
US10248494B2 (en) | 2015-10-29 | 2019-04-02 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US10592357B2 (en) | 2015-12-30 | 2020-03-17 | Commvault Systems, Inc. | Distributed file system in a distributed deduplication data storage system |
US10565067B2 (en) | 2016-03-09 | 2020-02-18 | Commvault Systems, Inc. | Virtual server cloud file system for virtual machine backup from cloud operations |
US10296368B2 (en) | 2016-03-09 | 2019-05-21 | Commvault Systems, Inc. | Hypervisor-independent block-level live browse for access to backed up virtual machine (VM) data and hypervisor-free file-level recovery (block-level pseudo-mount) |
US10503753B2 (en) | 2016-03-10 | 2019-12-10 | Commvault Systems, Inc. | Snapshot replication operations based on incremental block change tracking |
US10747630B2 (en) | 2016-09-30 | 2020-08-18 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US10540516B2 (en) | 2016-10-13 | 2020-01-21 | Commvault Systems, Inc. | Data protection within an unsecured storage environment |
US10152251B2 (en) | 2016-10-25 | 2018-12-11 | Commvault Systems, Inc. | Targeted backup of virtual machine |
US10162528B2 (en) | 2016-10-25 | 2018-12-25 | Commvault Systems, Inc. | Targeted snapshot based on virtual machine location |
US10389810B2 (en) | 2016-11-02 | 2019-08-20 | Commvault Systems, Inc. | Multi-threaded scanning of distributed file systems |
US10922189B2 (en) | 2016-11-02 | 2021-02-16 | Commvault Systems, Inc. | Historical network data-based scanning thread generation |
US10678758B2 (en) | 2016-11-21 | 2020-06-09 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and replication |
US10528015B2 (en) | 2016-12-15 | 2020-01-07 | Trane International Inc. | Building automation system controller with real time software configuration and database backup |
US10838821B2 (en) | 2017-02-08 | 2020-11-17 | Commvault Systems, Inc. | Migrating content and metadata from a backup system |
US10740193B2 (en) | 2017-02-27 | 2020-08-11 | Commvault Systems, Inc. | Hypervisor-independent reference copies of virtual machine payload data based on block-level pseudo-mount |
US10459666B2 (en) | 2017-03-03 | 2019-10-29 | Commvault Systems, Inc. | Using storage managers in respective data storage management systems for license distribution, compliance, and updates |
US10949308B2 (en) | 2017-03-15 | 2021-03-16 | Commvault Systems, Inc. | Application aware backup of virtual machines |
US11032350B2 (en) | 2017-03-15 | 2021-06-08 | Commvault Systems, Inc. | Remote commands framework to control clients |
US10474542B2 (en) | 2017-03-24 | 2019-11-12 | Commvault Systems, Inc. | Time-based virtual machine reversion |
CN111612468B (en) * | 2017-03-24 | 2024-03-19 | 创新先进技术有限公司 | Method and device for transmitting transaction information and identifying consensus |
US10891069B2 (en) | 2017-03-27 | 2021-01-12 | Commvault Systems, Inc. | Creating local copies of data stored in online data repositories |
US10776329B2 (en) | 2017-03-28 | 2020-09-15 | Commvault Systems, Inc. | Migration of a database management system to cloud storage |
US11108858B2 (en) | 2017-03-28 | 2021-08-31 | Commvault Systems, Inc. | Archiving mail servers via a simple mail transfer protocol (SMTP) server |
US11074140B2 (en) | 2017-03-29 | 2021-07-27 | Commvault Systems, Inc. | Live browsing of granular mailbox data |
US10387073B2 (en) | 2017-03-29 | 2019-08-20 | Commvault Systems, Inc. | External dynamic virtual machine synchronization |
US11074138B2 (en) | 2017-03-29 | 2021-07-27 | Commvault Systems, Inc. | Multi-streaming backup operations for mailboxes |
US11221939B2 (en) | 2017-03-31 | 2022-01-11 | Commvault Systems, Inc. | Managing data from internet of things devices in a vehicle |
US10552294B2 (en) | 2017-03-31 | 2020-02-04 | Commvault Systems, Inc. | Management of internet of things devices |
US11294786B2 (en) | 2017-03-31 | 2022-04-05 | Commvault Systems, Inc. | Management of internet of things devices |
US10853195B2 (en) | 2017-03-31 | 2020-12-01 | Commvault Systems, Inc. | Granular restoration of virtual machine application data |
US11010261B2 (en) | 2017-03-31 | 2021-05-18 | Commvault Systems, Inc. | Dynamically allocating streams during restoration of data |
GB2562106B (en) * | 2017-05-05 | 2020-01-22 | Canon Europa Nv | Resilient print job submission |
US10984041B2 (en) | 2017-05-11 | 2021-04-20 | Commvault Systems, Inc. | Natural language processing integrated with database and data storage management |
US10664352B2 (en) | 2017-06-14 | 2020-05-26 | Commvault Systems, Inc. | Live browsing of backed up data residing on cloned disks |
US10742735B2 (en) | 2017-12-12 | 2020-08-11 | Commvault Systems, Inc. | Enhanced network attached storage (NAS) services interfacing to cloud storage |
US10831591B2 (en) | 2018-01-11 | 2020-11-10 | Commvault Systems, Inc. | Remedial action based on maintaining process awareness in data storage management |
US10795927B2 (en) | 2018-02-05 | 2020-10-06 | Commvault Systems, Inc. | On-demand metadata extraction of clinical image data |
US10642886B2 (en) | 2018-02-14 | 2020-05-05 | Commvault Systems, Inc. | Targeted search of backup data using facial recognition |
US20190251204A1 (en) | 2018-02-14 | 2019-08-15 | Commvault Systems, Inc. | Targeted search of backup data using calendar event data |
US10740022B2 (en) | 2018-02-14 | 2020-08-11 | Commvault Systems, Inc. | Block-level live browsing and private writable backup copies using an ISCSI server |
CN108388487B (en) * | 2018-03-01 | 2020-09-04 | 上海达梦数据库有限公司 | Data loading method, device, equipment and storage medium |
US10877928B2 (en) | 2018-03-07 | 2020-12-29 | Commvault Systems, Inc. | Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations |
US10761942B2 (en) | 2018-03-12 | 2020-09-01 | Commvault Systems, Inc. | Recovery point objective (RPO) driven backup scheduling in a data storage management system using an enhanced data agent |
US10789387B2 (en) | 2018-03-13 | 2020-09-29 | Commvault Systems, Inc. | Graphical representation of an information management system |
CN110324375B (en) * | 2018-03-29 | 2020-12-04 | 华为技术有限公司 | Information backup method and related equipment |
US10891198B2 (en) | 2018-07-30 | 2021-01-12 | Commvault Systems, Inc. | Storing data to cloud libraries in cloud native formats |
US11159469B2 (en) | 2018-09-12 | 2021-10-26 | Commvault Systems, Inc. | Using machine learning to modify presentation of mailbox objects |
US11010258B2 (en) | 2018-11-27 | 2021-05-18 | Commvault Systems, Inc. | Generating backup copies through interoperability between components of a data storage management system and appliances for data storage and deduplication |
US11200124B2 (en) | 2018-12-06 | 2021-12-14 | Commvault Systems, Inc. | Assigning backup resources based on failover of partnered data storage servers in a data storage management system |
US10860443B2 (en) | 2018-12-10 | 2020-12-08 | Commvault Systems, Inc. | Evaluation and reporting of recovery readiness in a data storage management system |
US20200192572A1 (en) | 2018-12-14 | 2020-06-18 | Commvault Systems, Inc. | Disk usage growth prediction system |
US11698727B2 (en) | 2018-12-14 | 2023-07-11 | Commvault Systems, Inc. | Performing secondary copy operations based on deduplication performance |
US10996974B2 (en) | 2019-01-30 | 2021-05-04 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data, including management of cache storage for virtual machine data |
US10768971B2 (en) | 2019-01-30 | 2020-09-08 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US20200327017A1 (en) | 2019-04-10 | 2020-10-15 | Commvault Systems, Inc. | Restore using deduplicated secondary copy data |
US11366723B2 (en) | 2019-04-30 | 2022-06-21 | Commvault Systems, Inc. | Data storage management system for holistic protection and migration of serverless applications across multi-cloud computing environments |
US11463264B2 (en) | 2019-05-08 | 2022-10-04 | Commvault Systems, Inc. | Use of data block signatures for monitoring in an information management system |
US11461184B2 (en) | 2019-06-17 | 2022-10-04 | Commvault Systems, Inc. | Data storage management system for protecting cloud-based data including on-demand protection, recovery, and migration of databases-as-a-service and/or serverless database management systems |
US11308034B2 (en) | 2019-06-27 | 2022-04-19 | Commvault Systems, Inc. | Continuously run log backup with minimal configuration and resource usage from the source machine |
US11561866B2 (en) | 2019-07-10 | 2023-01-24 | Commvault Systems, Inc. | Preparing containerized applications for backup using a backup services container and a backup services container-orchestration pod |
US11042318B2 (en) | 2019-07-29 | 2021-06-22 | Commvault Systems, Inc. | Block-level data replication |
TWI726403B (en) | 2019-08-29 | 2021-05-01 | 智微科技股份有限公司 | Method for enhancing speed of incremental backup, bridge device, and storage system |
US11321354B2 (en) * | 2019-10-01 | 2022-05-03 | Huawei Technologies Co., Ltd. | System, computing node and method for processing write requests |
US11442896B2 (en) | 2019-12-04 | 2022-09-13 | Commvault Systems, Inc. | Systems and methods for optimizing restoration of deduplicated data stored in cloud-based storage resources |
US11467753B2 (en) | 2020-02-14 | 2022-10-11 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
US11321188B2 (en) | 2020-03-02 | 2022-05-03 | Commvault Systems, Inc. | Platform-agnostic containerized application data protection |
US11422900B2 (en) | 2020-03-02 | 2022-08-23 | Commvault Systems, Inc. | Platform-agnostic containerized application data protection |
US11442768B2 (en) | 2020-03-12 | 2022-09-13 | Commvault Systems, Inc. | Cross-hypervisor live recovery of virtual machines |
US11099956B1 (en) | 2020-03-26 | 2021-08-24 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11500669B2 (en) | 2020-05-15 | 2022-11-15 | Commvault Systems, Inc. | Live recovery of virtual machines in a public cloud computing environment |
US11687424B2 (en) | 2020-05-28 | 2023-06-27 | Commvault Systems, Inc. | Automated media agent state management |
US12130708B2 (en) | 2020-07-10 | 2024-10-29 | Commvault Systems, Inc. | Cloud-based air-gapped data storage management system |
US11494417B2 (en) | 2020-08-07 | 2022-11-08 | Commvault Systems, Inc. | Automated email classification in an information management system |
US11314687B2 (en) | 2020-09-24 | 2022-04-26 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US11656951B2 (en) | 2020-10-28 | 2023-05-23 | Commvault Systems, Inc. | Data loss vulnerability detection |
US11604706B2 (en) | 2021-02-02 | 2023-03-14 | Commvault Systems, Inc. | Back up and restore related data on different cloud storage tiers |
US12032855B2 (en) | 2021-08-06 | 2024-07-09 | Commvault Systems, Inc. | Using an application orchestrator computing environment for automatically scaled deployment of data protection resources needed for data in a production cluster distinct from the application orchestrator or in another application orchestrator computing environment |
US11593223B1 (en) | 2021-09-02 | 2023-02-28 | Commvault Systems, Inc. | Using resource pool administrative entities in a data storage management system to provide shared infrastructure to tenants |
US11809285B2 (en) | 2022-02-09 | 2023-11-07 | Commvault Systems, Inc. | Protecting a management database of a data storage management system to meet a recovery point objective (RPO) |
US12056018B2 (en) | 2022-06-17 | 2024-08-06 | Commvault Systems, Inc. | Systems and methods for enforcing a recovery point objective (RPO) for a production database without generating secondary copies of the production database |
US12135618B2 (en) | 2022-07-11 | 2024-11-05 | Commvault Systems, Inc. | Protecting configuration data in a clustered container system |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB8915875D0 (en) * | 1989-07-11 | 1989-08-31 | Intelligence Quotient United K | A method of operating a data processing system |
US5307481A (en) * | 1990-02-28 | 1994-04-26 | Hitachi, Ltd. | Highly reliable online system |
US5633999A (en) * | 1990-11-07 | 1997-05-27 | Nonstop Networks Limited | Workstation-implemented data storage re-routing for server fault-tolerance on computer networks |
US5263152A (en) * | 1991-04-01 | 1993-11-16 | Xerox Corporation | Process for replacing non-volatile memory in electronic printing systems |
US5241672A (en) * | 1991-04-01 | 1993-08-31 | Xerox Corporation | System using the storage level of file updates in nonvolatile memory to trigger saving of RAM to disk and using the file updates to reboot after crash |
US5611049A (en) * | 1992-06-03 | 1997-03-11 | Pitts; William M. | System for accessing distributed data cache channel at each network node to pass requests and data |
EP0593062A3 (en) * | 1992-10-16 | 1995-08-30 | Siemens Ind Automation Inc | Redundant networked database system |
GB2273180A (en) * | 1992-12-02 | 1994-06-08 | Ibm | Database backup and recovery. |
US5555371A (en) * | 1992-12-17 | 1996-09-10 | International Business Machines Corporation | Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage |
US5564011A (en) * | 1993-10-05 | 1996-10-08 | International Business Machines Corporation | System and method for maintaining file data access in case of dynamic critical sector failure |
EP0986007A3 (en) * | 1993-12-01 | 2001-11-07 | Marathon Technologies Corporation | Method of isolating I/O requests |
US5537533A (en) * | 1994-08-11 | 1996-07-16 | Miralink Corporation | System and method for remote mirroring of digital data from a primary network server to a remote network server |
US5634052A (en) * | 1994-10-24 | 1997-05-27 | International Business Machines Corporation | System for reducing storage requirements and transmission loads in a backup subsystem in client-server environment by transmitting only delta files from client to server |
US5513314A (en) * | 1995-01-27 | 1996-04-30 | Auspex Systems, Inc. | Fault tolerant NFS server system and mirroring protocol |
US5604862A (en) * | 1995-03-14 | 1997-02-18 | Network Integrity, Inc. | Continuously-snapshotted protection of computer files |
US5592611A (en) * | 1995-03-14 | 1997-01-07 | Network Integrity, Inc. | Stand-in computer server |
US5608865A (en) * | 1995-03-14 | 1997-03-04 | Network Integrity, Inc. | Stand-in computer file server providing fast recovery from computer file server failures
US5675723A (en) * | 1995-05-19 | 1997-10-07 | Compaq Computer Corporation | Multi-server fault tolerance using in-band signalling |
US5822512A (en) * | 1995-05-19 | 1998-10-13 | Compaq Computer Corporation | Switching control in a fault tolerant system
- 1995-10-16 US US08/543,266 patent/US5819020A/en not_active Expired - Lifetime
- 1998-10-02 US US09/165,724 patent/US5974563A/en not_active Expired - Lifetime
- 2003-12-04 US US10/729,284 patent/US20040083245A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5530848A (en) * | 1992-10-15 | 1996-06-25 | The Dow Chemical Company | System and method for implementing an interface between an external process and transaction processing system |
Cited By (105)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8812702B2 (en) * | 1996-12-13 | 2014-08-19 | Good Technology Corporation | System and method for globally and securely accessing unified information in a computer network |
US20100005125A1 (en) * | 1996-12-13 | 2010-01-07 | Visto Corporation | System and method for globally and securely accessing unified information in a computer network |
US7333484B2 (en) * | 1998-08-07 | 2008-02-19 | Intel Corporation | Services processor having a packet editing unit |
US20030152078A1 (en) * | 1998-08-07 | 2003-08-14 | Henderson Alex E. | Services processor having a packet editing unit |
US20100174685A1 (en) * | 2000-03-01 | 2010-07-08 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US8019731B2 (en) | 2000-03-01 | 2011-09-13 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US20110010345A1 (en) * | 2000-03-01 | 2011-01-13 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US8019730B2 (en) | 2000-03-01 | 2011-09-13 | Computer Associates Think, Inc. | Method and system for updating an archive of a computer file |
US20030074535A1 (en) * | 2001-09-14 | 2003-04-17 | Eric Owhadi | Method of initiating a backup procedure |
US20030110237A1 (en) * | 2001-12-06 | 2003-06-12 | Hitachi, Ltd. | Methods of migrating data between storage apparatuses |
US7155458B1 (en) * | 2002-04-05 | 2006-12-26 | Network Appliance, Inc. | Mechanism for distributed atomic creation of client-private files |
US7257705B2 (en) * | 2002-11-18 | 2007-08-14 | Sparta Systems, Inc. | Method for preserving changes made during a migration of a system's configuration to a second configuration |
US20040133610A1 (en) * | 2002-11-18 | 2004-07-08 | Sparta Systems, Inc. | Techniques for reconfiguring configurable systems |
US20040199609A1 (en) * | 2003-04-07 | 2004-10-07 | Microsoft Corporation | System and method for web server migration |
US7379996B2 (en) * | 2003-04-07 | 2008-05-27 | Microsoft Corporation | System and method for web server migration |
US20050010609A1 (en) * | 2003-06-12 | 2005-01-13 | International Business Machines Corporation | Migratable backup and restore |
US7797278B2 (en) * | 2003-06-12 | 2010-09-14 | Lenovo (Singapore) Pte. Ltd. | Migratable backup and restore |
US20070150522A1 (en) * | 2003-10-07 | 2007-06-28 | International Business Machines Corporation | Method, system, and program for processing a file request |
US7882065B2 (en) * | 2003-10-07 | 2011-02-01 | International Business Machines Corporation | Processing a request to update a file in a file system with update data |
US8032487B1 (en) * | 2003-10-29 | 2011-10-04 | At&T Intellectual Property I, L.P. | System and method for synchronizing data in a networked system |
KR100985443B1 (en) | 2003-12-15 | 2010-10-06 | 인터내셔널 비지네스 머신즈 코포레이션 | Apparatus, system, and method for grid based data storage |
US7698428B2 (en) * | 2003-12-15 | 2010-04-13 | International Business Machines Corporation | Apparatus, system, and method for grid based data storage |
US8332483B2 (en) | 2003-12-15 | 2012-12-11 | International Business Machines Corporation | Apparatus, system, and method for autonomic control of grid system resources |
US20050144283A1 (en) * | 2003-12-15 | 2005-06-30 | Fatula Joseph J.Jr. | Apparatus, system, and method for grid based data storage |
US20050131993A1 (en) * | 2003-12-15 | 2005-06-16 | Fatula Joseph J.Jr. | Apparatus, system, and method for autonomic control of grid system resources |
US7315959B2 (en) * | 2003-12-30 | 2008-01-01 | Icp Electronics Inc. | Real-time remote backup system and related method |
US20050160313A1 (en) * | 2003-12-30 | 2005-07-21 | Chih-Sung Wu | Real-time remote backup system and related method |
US8818950B2 (en) * | 2004-01-22 | 2014-08-26 | Symantec Corporation | Method and apparatus for localized protected imaging of a file system |
US20050165853A1 (en) * | 2004-01-22 | 2005-07-28 | Altiris, Inc. | Method and apparatus for localized protected imaging of a file system |
US8417741B2 (en) * | 2004-05-28 | 2013-04-09 | Moxite Gmbh | System and method for replication, integration, consolidation and mobilisation of data |
US20080270490A1 (en) * | 2004-05-28 | 2008-10-30 | Moxite Gmbh | System and Method for Replication, Integration, Consolidation and Mobilisation of Data |
US20060075143A1 (en) * | 2004-09-30 | 2006-04-06 | Konica Minolta Business Technologies, Inc. | Administration system for administration target devices, data server and branch server for use in said system |
US20060101097A1 (en) * | 2004-11-05 | 2006-05-11 | Dima Barboi | Replicated data validation |
US7840535B2 (en) | 2004-11-05 | 2010-11-23 | Computer Associates Think, Inc. | Replicated data validation |
WO2006074869A1 (en) * | 2005-01-11 | 2006-07-20 | Rudolf Bayer | Data storage system and method for operation thereof |
US20060190502A1 (en) * | 2005-02-24 | 2006-08-24 | International Business Machines Corporation | Backing up at least one encrypted computer file |
US7600133B2 (en) * | 2005-02-24 | 2009-10-06 | Lenovo Singapore Pte. Ltd | Backing up at least one encrypted computer file |
US20090100241A1 (en) * | 2005-03-23 | 2009-04-16 | Steffen Werner | Method for Removing a Mass Storage System From a Computer Network, and Computer Program Product and Computer Network for Carrying Out the Method
EP1708095A1 (en) * | 2005-03-31 | 2006-10-04 | Ubs Ag | Computer network system for constructing, synchronizing and/or managing a second database from/with a first database, and methods therefore |
EP1708094A1 (en) * | 2005-03-31 | 2006-10-04 | Ubs Ag | Computer network system for constructing, synchronizing and/or managing a second database from/with a first database, and methods therefore |
US20060222162A1 (en) * | 2005-03-31 | 2006-10-05 | Marcel Bank | Computer network system for building, synchronising and/or operating a second database from/with a first database, and procedures for it
WO2006103096A2 (en) * | 2005-03-31 | 2006-10-05 | Ubs Ag | Computer network system for establishing a second database from, synchronizing and/or operating it with a first database and corresponding procedure |
WO2006103098A2 (en) * | 2005-03-31 | 2006-10-05 | Ubs Ag | Computer network system for the establishment synchronisation and/or operation of a second databank from/with a first databank and procedure for the above |
US7526576B2 (en) | 2005-03-31 | 2009-04-28 | Ubs Ag | Computer network system using encapsulation to decompose work units for synchronizing and operating a second database from a first database |
US20060222161A1 (en) * | 2005-03-31 | 2006-10-05 | Marcel Bank | Computer network system for building, synchronising and/or operating a second database from/with a first database, and procedures for it |
WO2006103096A3 (en) * | 2005-03-31 | 2006-12-07 | Ubs Ag | Computer network system for establishing a second database from, synchronizing and/or operating it with a first database and corresponding procedure |
US20060222163A1 (en) * | 2005-03-31 | 2006-10-05 | Marcel Bank | Computer network system for building, synchronising and/or operating a second database from/with a first database, and procedures for it |
US7707177B2 (en) * | 2005-03-31 | 2010-04-27 | Ubs Ag | Computer network system for building, synchronising and/or operating a second database from/with a first database, and procedures for it |
US7577687B2 (en) | 2005-03-31 | 2009-08-18 | Ubs Ag | Systems and methods for synchronizing databases |
WO2006103098A3 (en) * | 2005-03-31 | 2006-12-21 | Ubs Ag | Computer network system for the establishment synchronisation and/or operation of a second databank from/with a first databank and procedure for the above |
US20110231366A1 (en) * | 2005-04-20 | 2011-09-22 | Axxana (Israel) Ltd | Remote data mirroring system |
US8914666B2 (en) | 2005-04-20 | 2014-12-16 | Axxana (Israel) Ltd. | Remote data mirroring system |
US9195397B2 (en) | 2005-04-20 | 2015-11-24 | Axxana (Israel) Ltd. | Disaster-proof data recovery |
US7584226B2 (en) * | 2005-05-24 | 2009-09-01 | International Business Machines Corporation | System and method for peer-to-peer grid based autonomic and probabilistic on-demand backup and restore |
US20060271601A1 (en) * | 2005-05-24 | 2006-11-30 | International Business Machines Corporation | System and method for peer-to-peer grid based autonomic and probabilistic on-demand backup and restore |
US20070073791A1 (en) * | 2005-09-27 | 2007-03-29 | Computer Associates Think, Inc. | Centralized management of disparate multi-platform media |
US20070214384A1 (en) * | 2006-03-07 | 2007-09-13 | Manabu Kitamura | Method for backing up data in a clustered file system |
US20070226538A1 (en) * | 2006-03-09 | 2007-09-27 | Samsung Electronics Co., Ltd. | Apparatus and method to manage computer system data in network |
US8165221B2 (en) | 2006-04-28 | 2012-04-24 | Netapp, Inc. | System and method for sampling based elimination of duplicate data |
US20070255758A1 (en) * | 2006-04-28 | 2007-11-01 | Ling Zheng | System and method for sampling based elimination of duplicate data |
US9344112B2 (en) | 2006-04-28 | 2016-05-17 | Ling Zheng | Sampling based elimination of duplicate data |
US20070260696A1 (en) * | 2006-05-02 | 2007-11-08 | Mypoints.Com Inc. | System and method for providing three-way failover for a transactional database |
US7613742B2 (en) * | 2006-05-02 | 2009-11-03 | Mypoints.Com Inc. | System and method for providing three-way failover for a transactional database |
US20080005141A1 (en) * | 2006-06-29 | 2008-01-03 | Ling Zheng | System and method for retrieving and using block fingerprints for data deduplication |
US8296260B2 (en) | 2006-06-29 | 2012-10-23 | Netapp, Inc. | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US7921077B2 (en) | 2006-06-29 | 2011-04-05 | Netapp, Inc. | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US20110035357A1 (en) * | 2006-06-29 | 2011-02-10 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US20080005201A1 (en) * | 2006-06-29 | 2008-01-03 | Daniel Ting | System and method for managing data deduplication of storage systems utilizing persistent consistency point images |
US8412682B2 (en) * | 2006-06-29 | 2013-04-02 | Netapp, Inc. | System and method for retrieving and using block fingerprints for data deduplication |
US7747584B1 (en) | 2006-08-22 | 2010-06-29 | Netapp, Inc. | System and method for enabling de-duplication in a storage system architecture |
US20080184001A1 (en) * | 2007-01-30 | 2008-07-31 | Network Appliance, Inc. | Method and an apparatus to store data patterns |
US7853750B2 (en) | 2007-01-30 | 2010-12-14 | Netapp, Inc. | Method and an apparatus to store data patterns |
US9069787B2 (en) | 2007-05-31 | 2015-06-30 | Netapp, Inc. | System and method for accelerating anchor point detection |
US8762345B2 (en) | 2007-05-31 | 2014-06-24 | Netapp, Inc. | System and method for accelerating anchor point detection |
US20080301134A1 (en) * | 2007-05-31 | 2008-12-04 | Miller Steven C | System and method for accelerating anchor point detection |
US20080307102A1 (en) * | 2007-06-08 | 2008-12-11 | Galloway Curtis C | Techniques for communicating data between a host device and an intermittently attached mobile device |
US8793226B1 (en) | 2007-08-28 | 2014-07-29 | Netapp, Inc. | System and method for estimating duplicate data |
US7890714B1 (en) * | 2007-09-28 | 2011-02-15 | Symantec Operating Corporation | Redirection of an ongoing backup |
US20090100236A1 (en) * | 2007-10-15 | 2009-04-16 | Ricardo Spencer Puig | Copying data onto a secondary storage device |
US20090150477A1 (en) * | 2007-12-07 | 2009-06-11 | Brocade Communications Systems, Inc. | Distributed file system optimization using native server functions |
US20120117025A1 (en) * | 2008-02-18 | 2012-05-10 | Microsoft Corporation | Synchronization of Replications for Different Computing Systems |
US8983904B2 (en) * | 2008-02-18 | 2015-03-17 | Microsoft Technology Licensing, Llc | Synchronization of replications for different computing systems |
US20100049726A1 (en) * | 2008-08-19 | 2010-02-25 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US8250043B2 (en) | 2008-08-19 | 2012-08-21 | Netapp, Inc. | System and method for compression of partially ordered data sets |
US9032054B2 (en) | 2008-12-30 | 2015-05-12 | Juniper Networks, Inc. | Method and apparatus for determining a network topology during network provisioning |
US8565118B2 (en) * | 2008-12-30 | 2013-10-22 | Juniper Networks, Inc. | Methods and apparatus for distributed dynamic network provisioning |
US20100165876A1 (en) * | 2008-12-30 | 2010-07-01 | Amit Shukla | Methods and apparatus for distributed dynamic network provisioning |
US20110004586A1 (en) * | 2009-07-15 | 2011-01-06 | Lon Jones Cherryholmes | System, method, and computer program product for creating a virtual database |
US10120767B2 (en) * | 2009-07-15 | 2018-11-06 | Idera, Inc. | System, method, and computer program product for creating a virtual database |
US20120215999A1 (en) * | 2009-08-11 | 2012-08-23 | International Business Machines Corporation | Synchronization of replicated sequential access storage components |
US8533412B2 (en) * | 2009-08-11 | 2013-09-10 | International Business Machines Corporation | Synchronization of replicated sequential access storage components |
US20110289270A1 (en) * | 2010-05-24 | 2011-11-24 | Bell Jr Robert H | System, method and computer program product for data transfer management |
US11163850B2 (en) | 2010-05-24 | 2021-11-02 | International Business Machines Corporation | System, method and computer program product for data transfer management |
US10635736B2 (en) | 2010-05-24 | 2020-04-28 | International Business Machines Corporation | System, method and computer program product for data transfer management |
US9881099B2 (en) * | 2010-05-24 | 2018-01-30 | International Business Machines Corporation | System, method and computer program product for data transfer management |
US8886596B2 (en) * | 2010-10-11 | 2014-11-11 | Sap Se | Method for reorganizing or moving a database table |
US20120089566A1 (en) * | 2010-10-11 | 2012-04-12 | Sap Ag | Method for reorganizing or moving a database table |
US8891406B1 (en) | 2010-12-22 | 2014-11-18 | Juniper Networks, Inc. | Methods and apparatus for tunnel management within a data center |
US8495019B2 (en) | 2011-03-08 | 2013-07-23 | Ca, Inc. | System and method for providing assured recovery and replication |
WO2014170810A1 (en) * | 2013-04-14 | 2014-10-23 | Axxana (Israel) Ltd. | Synchronously mirroring very fast storage arrays |
US10769028B2 (en) | 2013-10-16 | 2020-09-08 | Axxana (Israel) Ltd. | Zero-transaction-loss recovery for database systems |
US9875042B1 (en) * | 2015-03-31 | 2018-01-23 | EMC IP Holding Company LLC | Asynchronous replication |
US10379958B2 (en) | 2015-06-03 | 2019-08-13 | Axxana (Israel) Ltd. | Fast archiving for database systems |
US10592326B2 (en) | 2017-03-08 | 2020-03-17 | Axxana (Israel) Ltd. | Method and apparatus for data loss assessment |
CN111382136A (en) * | 2018-12-29 | 2020-07-07 | 华为技术有限公司 | File system mirror image and file request method |
Also Published As
Publication number | Publication date |
---|---|
US5974563A (en) | 1999-10-26 |
US5819020A (en) | 1998-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5974563A (en) | | Real time backup system
US8401999B2 (en) | | Data mirroring method
US8001079B2 (en) | | System and method for system state replication
US7565572B2 (en) | | Method for rolling back from snapshot with log
US7627776B2 (en) | | Data backup method
US6611923B1 (en) | | System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server
US7043665B2 (en) | | Method, system, and program for handling a failover to a remote storage location
US6671705B1 (en) | | Remote mirroring system, device, and method
US7383463B2 (en) | | Internet protocol based disaster recovery of a server
US5555371A (en) | | Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US8027952B2 (en) | | System and article of manufacture for mirroring data at storage locations
US5740433A (en) | | Remote duplicate database facility with improved throughput and fault tolerance
US8396830B2 (en) | | Data control method for duplicating data between computer systems
US7032089B1 (en) | | Replica synchronization using copy-on-read technique
US7181477B2 (en) | | Storage system and storage control method
US6446090B1 (en) | | Tracker sensing method for regulating synchronization of audit files between primary and secondary hosts
US5794252A (en) | | Remote duplicate database facility featuring safe master audit trail (safeMAT) checkpointing
US7051052B1 (en) | | Method for reading audit data from a remote mirrored disk for application to remote database backup copy
JP2003517651A (en) | | Highly available file server
US20070022319A1 (en) | | Maintaining and using information on updates to a data group after a logical copy is made of the data group
JPH10124419A (en) | | Software and data matching distribution method for client server system
US7210058B2 (en) | | Method for peer-to-peer system recovery
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NSI SOFTWARE, INC., NEW JERSEY Free format text: MERGER;ASSIGNOR:NETWORK SPECIALISTS, INCORPORATED;REEL/FRAME:014409/0161 Effective date: 20031016 |
|
AS | Assignment |
Owner name: DOUBLE-TAKE SOFTWARE, INC., MASSACHUSETTS Free format text: CHANGE OF NAME;ASSIGNOR:NSI SOFTWARE, INC.;REEL/FRAME:018590/0678 Effective date: 20060725 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |