US20070276989A1 - Predictive data-loader - Google Patents

Predictive data-loader

Info

Publication number
US20070276989A1
Authority
US
United States
Prior art keywords
data
segment
data object
data segment
predictable
Legal status
Abandoned
Application number
US11/802,223
Inventor
Amir Mosek
Amir Lehr
Yacov Duzly
Menahem Lasser
Current Assignee
Western Digital Israel Ltd
Original Assignee
SanDisk IL Ltd
Application filed by SanDisk IL Ltd filed Critical SanDisk IL Ltd
Priority to US11/802,223
Assigned to SANDISK IL LTD. Assignors: DUZLY, YACOV; LASSER, MENAHEM; LEHR, AMIR; MOSEK, AMIR
Publication of US20070276989A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10: Program control for peripheral devices
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862: Caches with prefetch
    • G06F12/0866: Caches for peripheral storage systems, e.g. disk cache
    • G06F2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/20: Employing a main memory using a specific memory technology
    • G06F2212/202: Non-volatile memory
    • G06F2212/2022: Flash memory

Definitions

  • the present invention relates to devices for improving data-retrieval times from a non-volatile storage device in which a cache memory is used for preloading a data segment before the data segment is loaded into a host system.
  • Retrieved data, i.e. data which is requested by the host system
  • ordered data, i.e. data that is arranged in a known and/or “predictable” sequence
  • random data, i.e. data that is not arranged in a known and/or “predictable” sequence
  • Data-retrieval operations from a storage device are divided into two main sub-operations:
  • an internal storage-device data-retrieval stage (hereinafter referred to as a “pre-loading stage”), which occurs upon a host-system request, involves the storage device's internal controller searching for the specific data, and preparing the data to be read by the host system. It is noted that the data is not delivered to the host system during the pre-loading stage.
  • a host-system data-retrieval stage (hereinafter referred to as a “loading stage”), which occurs when the pre-loading stage is completed, involves the storage device notifying the host system that the data is ready to be read by the host system.
  • Such notification can occur in two ways, such as by answering a host-system question as to whether the data is ready or not, or by invoking an interrupt to the host system, signaling that the data is ready.
  • proxy servers and cache storage systems known in the prior art do not solve the need described herein, as they are probabilistic and provide faster access based only on considerations that are external to the data itself (e.g. history of retrieval, availability of sectors, and a priori knowledge about future retrieval).
  • the prior art fails to improve the loading time of an arbitrarily-selected data file using any type of predictive approach.
  • data segment is used herein to refer to a unit of data that is written and/or read according to a given protocol as a single unit of data, such that a write-operation and/or read-operation typically writes and/or reads one segment (e.g. a sector in storage).
  • data object is used herein to refer to a collection of segments having a joint logical meaning (e.g. a file made of sectors).
  • storage device is used herein to refer to a device configured to be operationally connected to a host system, which activates, initializes, and manages the storage device. A storage device provides data to the host system (e.g. data input and read-operations) and/or retrieves data from the host system (e.g. data output and write-operations).
  • data sequence is used herein to refer to an index of logical segments of a data object in a storage device (e.g. bits, bytes, sectors, segments, units, and blocks), indicating the order of the data in the storage device, and enabling the prediction of the logical address of a next segment while using a current segment.
  • predictable sequence is used herein to refer to a set of data segments that are to be read in a specific order.
  • the specific order is provided to the storage device by the host system.
  • the specific order is derived by the storage device from information that is either in the data object, or is provided by the host system upon starting a read-operation.
  • data-retrieval time is used herein to refer to the time for performing a data-object loading process.
  • a process in which segment-by-segment preloading operations are performed during the loading makes the process faster than a similar process lacking such segment-by-segment preloading operations, thereby improving data-retrieval times.
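The time saving from such segment-by-segment preloading can be illustrated with a simple timing model. The numbers and function names below are illustrative assumptions, not taken from the patent:

```python
# Illustrative timing model: compare a load whose pre-loading stages are
# serialized with the host's processing against one in which the next
# segment is pre-loaded while the current segment is being processed.

def serial_time(n_segments, preload, process):
    # Host waits out every pre-loading stage before processing each segment.
    return n_segments * (preload + process)

def pipelined_time(n_segments, preload, process):
    # The first segment still pays the full preload latency; afterwards each
    # preload overlaps the processing of the previous segment.
    return preload + process + (n_segments - 1) * max(preload, process)

print(serial_time(100, 2.0, 3.0))     # 500.0
print(pipelined_time(100, 2.0, 3.0))  # 302.0
```

When the processing time dominates the preload latency, the preload cost disappears entirely for all but the first segment, which is exactly the streaming case the text singles out.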
  • contiguous data object is used herein to refer to a data object made of segments having contiguous addresses.
  • non-contiguous data object is used herein to refer to a data object made of segments having non-contiguous addresses.
  • diluted data object is used herein to refer to a data object having data segments that are excluded from being preloaded in a predictive manner according to the present invention.
  • diluted formula and diluted algorithm are used herein to refer to a formula and algorithm, respectively, which are used to predict the next data segment, from the data segments that are not excluded from a diluted data object, to be preloaded according to the present invention.
  • the present invention teaches devices for identifying sequences of non-contiguous data objects that are requested by a host system from a storage device, and using such sequences to prepare the next data segment for reading while the current segment is being processed. Such devices eliminate the latency of the pre-loading stage, making the next segment available to the host system on demand.
  • the present invention assumes that the next segment will actually be required by the host system; however, it is possible that the next pre-loaded segment will not be requested (e.g. if a host application aborts the loading process). In such a case, the time lost unnecessarily in performing the segment pre-loading is minimal.
  • while segments are requested in random order in some applications, making it difficult to predict the next segment to be requested, in other applications (e.g. playing music, displaying images, and running program-code sequences) there are sets of segments that are always retrieved in sequence (e.g. data objects). Since segments are typically stored in a non-contiguous manner, the retrieval of non-contiguous data objects involves some latency.
  • the controller of the storage device detects predictable sequences to be loaded, and pre-loads the next segment, based on information about the identity of the next segment.
  • the controller of the storage device determines if a data object to be retrieved should be retrieved as a predictable data object.
  • the embodiments below are classified in Table 1 according to the “next-segment determination” criteria of the entity that determines the next segment (i.e. host system or storage device), and of the time of the determination (i.e. upon a write-operation or upon a read-operation).
  • TABLE 1: Embodiments of the present invention according to “next-segment determination” criteria.
  • Next segment determined upon a write-operation [A]:
    • Next-segment address determined by host system [1]: (i) designate the data object as predictable; (ii) prescribe, for each segment, the following segment to be read.
    • Next-segment address determined by storage device [2]: (i) recognize a predictable data object by the filename; (ii) recognize a predictable data object by the file extension; (iii) recognize a predictable data object by the file content.
  • Next segment determined upon a read-operation [B]:
    • Next-segment address determined by host system [1]: (i) prescribe to the storage device, upon reading the data object, to read the data object sequentially; (ii) specify, upon reading a segment, the next segment to be read.
    • Next-segment address determined by storage device [2]: (i) determine the next segment to be read according to a formula provided by the host system upon reading the data object.
  • the host system informs the storage device, upon writing a data object, that the data object is to be handled as predictable, and the storage device uses the information to handle the data object as predictable upon retrieval (i.e. that the storage device will determine the address of the next segment to be read, and “cache” the segment (i.e. send the segment to cache memory) upon delivery of the current segment).
  • the host system prescribes to the storage device, upon reading a segment in the storage device, the identity of the segment that should follow the current segment upon reading.
  • the storage device follows the host-system instruction upon reading, and handles the data object as predictable.
  • the controller of the storage device, upon writing the data object, recognizes that the data object is predictable (based on data-object name, extension, or content) as follows:
  • the storage device identifies data-object type (predictable or non-predictable) by the data-object name (e.g. filename). For example, there are filenames that start with specific strings that indicate the type of file. According to the file type, the storage device recognizes if the file type follows a predictable sequence or not.
  • a prior-art example is JPEG files created by a digital camera, and stored with the filenames IMG0.JPG, IMG1.JPG, ..., IMGXXX.JPG.
  • the storage device identifies the data object type (predictable or non-predictable) by file extension.
  • the extension of a filename sometimes indicates the type of file.
  • the storage device recognizes if the file type follows a predictable sequence or not.
  • a prior-art example is a JPEG file with the extension JPG, or an MPEG file with the extension MPG.
  • the storage device identifies the data-object type (predictable or non-predictable) by a unique identification or “signature” included in the file content:
  • a prior-art example is a Windows® CE (or WinCE) image that includes a unique signature, “B000FF”, at a specific offset of the file (i.e. data object).
  • the storage device identifies the data as an executable OS image.
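The three recognition methods above (filename, extension, content signature) can be sketched as follows. The prefix list, the extension list, and the 64-byte signature window are assumptions made for illustration; only the camera filenames, the JPG/MPG extensions, and the WinCE “B000FF” signature come from the text:

```python
# Sketch of the three data-object recognition methods described above.

PREDICTABLE_NAME_PREFIXES = ("IMG",)       # e.g. camera files IMG0.JPG, IMG1.JPG, ...
PREDICTABLE_EXTENSIONS = (".JPG", ".MPG")  # e.g. JPEG and MPEG streams
WINCE_SIGNATURE = b"B000FF"                # signature inside a WinCE image

def is_predictable(filename: str, content: bytes = b"") -> bool:
    name = filename.upper()
    # (i) recognize by filename
    if name.startswith(PREDICTABLE_NAME_PREFIXES):
        return True
    # (ii) recognize by file extension
    if name.endswith(PREDICTABLE_EXTENSIONS):
        return True
    # (iii) recognize by file content: a known signature at a known offset
    # (scanning the first 64 bytes here is an assumption)
    return WINCE_SIGNATURE in content[:64]

print(is_predictable("IMG0042.JPG"))         # True
print(is_predictable("notes.txt"))           # False
print(is_predictable("nk.bin", b"B000FF\n")) # True
```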
  • the storage-device controller writes the data object in a way that will enable devices of the present invention to be operative upon reading.
  • the host system informs the storage device, upon reading a data object or a part of a data object, that the data object is to be read as a predictive data object, and the storage device uses the data sequence generated upon writing, to identify the next data segment and cache the next data segment upon delivery of the current segment.
  • a host application of the host system prescribes to the storage device, upon reading the data object, a diluted formula or a diluted algorithm that the storage device should use to determine the next data segment in a diluted data object, and the storage device caches and delivers the data segments according to such a formula.
  • An example of an application of such an embodiment is the retrieval of a large image. Such retrieval can be done either sequentially, or by sampling portions of the image (and filling in the missing portions later).
  • Another example of an application of such an embodiment is the retrieval of records from a large database following the completion of a search on the database. The search provides a list of pointers, and the retrieval needs a specified number of segments for each pointer.
  • the “formula” for the next segment is simply an instruction to “read the segment that follows the current segment.”
  • the application enables any data structure to be read as a contiguous data object.
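The diluted-formula idea can be sketched by modeling a formula as a callable from the current segment index to the next index to preload; the segments it skips are the “diluted” (excluded) ones. The function names and the every-nth sampling rule are hypothetical illustrations:

```python
# A "diluted formula" modeled as: current segment index -> next index.

def sequential(current):
    """The trivial formula: read the segment that follows the current one."""
    return current + 1

def every_nth(n):
    """Coarse pass over a large object: sample every n-th segment first."""
    def formula(current):
        return current + n
    return formula

def preload_order(formula, first, last):
    """Indices the controller would preload, in order, from first to last."""
    order, cur = [], first
    while cur <= last:
        order.append(cur)
        cur = formula(cur)
    return order

print(preload_order(sequential, 0, 4))     # [0, 1, 2, 3, 4]
print(preload_order(every_nth(4), 0, 10))  # [0, 4, 8]
```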
  • the host system, upon reading the data object, arbitrarily specifies, along with each read command, a specific identity of the next segment to be read, and the storage device then caches the next segment. In such a case, no formula or algorithm is needed.
  • When the storage-device controller writes a data object that is known to be predictable, the controller marks the data object as such in one of two alternative embodiments:
  • the controller maintains a data structure that maps the individual segments of the data object, and recognizes, upon retrieval, which is the next segment in the sequence.
  • a data structure can take the form of a pointer, an index, a formula, or any other indication that can predictably point to the next segment.
  • One way to maintain these indices is to use the virtual-to-physical “conversion” data, which reside in mapping tables in all flash-memory storage devices, to maintain the information that applies to each segment of storage. A bit can be added to the segment information to indicate if the relevant segment is predictable.
  • Such an approach to labeling data objects ensures lower storage-memory consumption and better retrieval-time performance compared to managing a separate index for the predictable attribute.
  • each segment of a predictable data object is associated, upon writing, with a pointer to the next segment in the sequence.
  • the controller reads this pointer, and prepares the next segment as described below.
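The per-segment pointer and the “predictable” bit piggybacked on the virtual-to-physical mapping might be represented as follows; the field and function names are assumptions for illustration:

```python
# Sketch of a virtual-to-physical mapping entry carrying the extra
# "predictable" bit and a pointer to the follower segment.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentEntry:
    physical_addr: int
    predictable: bool = False           # the added per-segment bit
    next_logical: Optional[int] = None  # pointer to the follower segment

# A predictable 3-segment data object stored non-contiguously.
mapping = {
    7:  SegmentEntry(physical_addr=0x1200, predictable=True, next_logical=31),
    31: SegmentEntry(physical_addr=0x0040, predictable=True, next_logical=12),
    12: SegmentEntry(physical_addr=0x2FC0, predictable=True),  # last segment
}

def follower(logical: int) -> Optional[int]:
    """Identity of the segment to preload after `logical`, if predictable."""
    entry = mapping.get(logical)
    if entry is not None and entry.predictable:
        return entry.next_logical
    return None

print(follower(7))   # 31
print(follower(12))  # None
```

Reusing the mapping table this way is what gives the lower memory cost mentioned above: the predictable attribute rides along with lookups the controller performs anyway.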
  • the identity of a “follower segment” is determined statistically by the controller.
  • the controller records the identity of the data segment that is requested following a given data segment.
  • the data segment is designated as the follower data segment of the current data segment, and is predictively cached each time the current data segment is requested. While there is no guarantee that the follower segment will always be the next data segment requested (as in program code, in which a routine can diverge at “if/then” branches, causing two or more alternative segment chains), there is an improvement in file-access time due to the statistical probability of the data segment being a “frequent follower.” This is in contrast to a simple history-of-retrieval approach that tracks the history of data-object retrieval. The present invention predicts the follower segment, not just the next data object.
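A minimal sketch of this statistical “frequent follower” bookkeeping, recording which segment is requested after each segment and predicting the most frequent one; the class and method names are hypothetical:

```python
# Track follower frequencies per segment and predict the most common one.

from collections import Counter, defaultdict

class FollowerPredictor:
    def __init__(self):
        self._counts = defaultdict(Counter)  # segment -> follower frequencies
        self._prev = None

    def on_request(self, segment):
        # Record the (previous segment -> this segment) transition.
        if self._prev is not None:
            self._counts[self._prev][segment] += 1
        self._prev = segment

    def predict(self, segment):
        # Most frequent follower seen so far, or None with no history.
        counts = self._counts.get(segment)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = FollowerPredictor()
for s in [1, 5, 9, 1, 5, 2, 1, 5, 9]:
    p.on_request(s)
print(p.predict(1))  # 5 (5 followed 1 in all three occurrences)
print(p.predict(5))  # 9 (9 followed 5 twice, 2 once)
```

Note how segment 5 illustrates the if/then divergence above: it is sometimes followed by 9 and sometimes by 2, and the predictor simply bets on the more frequent chain.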
  • An essential feature of the present invention is the autonomous execution of the preloading stage of the next data segment by the storage-device controller based on information known to the storage device (from any source) about the expected next request of the host system to retrieve a data segment.
  • the controller checks if the data segment belongs to a predictable data object. Such a check is performed in either of the two ways described above. If the data segment is found to belong to a predictable data object, then immediately after loading the data segment to the host system, the controller preloads the next data segment in the data object. When the host system requests the next data segment, the controller can deliver the data segment immediately.
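Putting the pieces together, the controller read flow just described can be sketched as follows; the storage dict, follower map, and `preloads` counter are stand-ins for illustration, not an actual controller interface:

```python
# Serve each read from cache when possible and, if the segment belongs to
# a predictable object, preload its follower right after delivery.

class Controller:
    def __init__(self, storage, followers):
        self.storage = storage      # logical address -> segment data
        self.followers = followers  # logical address -> follower address
        self.cache = {}
        self.preloads = 0           # pre-loading stages the host waited for

    def read(self, addr):
        if addr not in self.cache:
            # Cache miss: the host waits out this pre-loading stage.
            self.cache[addr] = self.storage[addr]
            self.preloads += 1
        data = self.cache.pop(addr)  # loading stage: deliver to the host
        nxt = self.followers.get(addr)
        if nxt is not None:
            # Autonomous preload of the predicted next segment.
            self.cache[nxt] = self.storage[nxt]
        return data

storage = {7: b"a", 31: b"b", 12: b"c"}
ctrl = Controller(storage, followers={7: 31, 31: 12})
print([ctrl.read(a) for a in (7, 31, 12)])  # [b'a', b'b', b'c']
print(ctrl.preloads)  # 1 (only the first segment incurred a visible wait)
```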
  • DMA (direct memory access)
  • a non-volatile storage device including: (a) a storage memory for storing data; (b) a cache memory for preloading the data upon a host-system request to read the data; and (c) a storage-device controller configured: (i) to determine that a plurality of data segments that constitute a non-contiguous data object, stored in the storage memory such that at least one data segment is non-contiguous to a preceding data segment in the data object, are in a predictable sequence; and (ii) to preload a non-contiguous next data segment in the predictable sequence into the cache memory after loading a current data segment into a host system from the cache memory, wherein the next data segment is preloaded prior to the host-system request to read the next data segment.
  • the controller is further configured: (iii) to store information about the predictable sequence upon writing of the data object into the storage memory.
  • the controller is further configured: (iii) to receive an indication, from the host system, that the data object is a predictable data object.
  • the indication is provided upon writing the data object into the storage memory, or upon reading the data object from the storage memory.
  • the controller is further configured: (iii) to determine the data object as a predictable data object by examination of properties of the data object.
  • the properties include at least one property from the group consisting of: a name of the data object, an extension of the data object, and a format of the data object.
  • an identity of the next data segment is determined from at least one item from the group consisting of: a data structure that is external to the data object, a parameter in the current data segment of the data object, a table for converting virtual addresses to physical addresses of the plurality of data segments, a pointer from the current data segment to the next data segment, a host-system designation of the current data segment, and a statistical frequency analysis of data segments that follow the current data segment in previous retrievals of the current data segment.
  • the host-system request includes an identity of the next data segment for preloading into the cache memory.
  • the controller is further configured: (iii) to select the next data segment for preloading into the cache memory, from the plurality of data segments, using a diluted algorithm or a diluted formula, prior to the host-system request to read the next data segment.
  • FIG. 1 is a simplified block diagram of a predictable-data-retrieval system, according to a preferred embodiment of the present invention
  • FIG. 2 is a simplified flowchart of the steps in a predictable-data-retrieval process, according to a preferred embodiment of the present invention
  • FIG. 3 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the data object is non-predictable (Flow A);
  • FIG. 4 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the first segment of a predictable data object is being read (Flow B);
  • FIG. 5 is a simplified flowchart of the predictable-data-retrieval process of the present invention when a predictable data object is being read (Flow C);
  • FIG. 6 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the last segment of a predictable data object is being read (Flow D);
  • FIG. 7 is a simplified schematic diagram depicting Flows A-D of the process in FIGS. 3-6 as a function of time, according to a preferred embodiment of the present invention.
  • the present invention relates to devices for improving data-retrieval times from a non-volatile storage device in which a cache memory is used for preloading a data segment before the data segment is loaded into a host system.
  • FIG. 1 is a simplified block diagram of a predictable-data-retrieval system, according to a preferred embodiment of the present invention.
  • a host system 10 (e.g. a personal computer or a mobile phone) is operationally connected to a storage device 12 (e.g. a USB flash drive (UFD) or a memory card).
  • Storage device 12 has a controller 14 and a non-volatile storage memory 16 .
  • Storage memory 16 contains data objects 17 that are made of data segments 18 .
  • Storage device 12 also has a cache memory 19 that is typically much faster and smaller than storage memory 16 .
  • Upon receiving a data-retrieval request from host system 10 , storage device 12 performs a “search” operation, which actually prepares data segments 18 for read-operations by host system 10 .
  • Since the search involves reading indices and/or mapping tables, determining the correct address of data segments 18 according to which data object 17 was read, and loading data segments 18 into cache memory 19 , the search takes time to perform. While the search is being performed, storage device 12 either notifies host system 10 that data object 17 is not ready yet (e.g. via a bit indicating the status as ready/busy), or responds with a failure to the read-operations associated with the data-retrieval request from host system 10 . When data object 17 is ready (i.e. storage device 12 found the relevant file, relevant track, or relevant flash-erasable unit), storage device 12 notifies host system 10 that data object 17 is ready to be read (via a ready/busy bit or by an interrupt). At this point, host system 10 , or another storage device, can read data object 17 .
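The ready/busy handshake described above can be sketched as a polling loop; an interrupt-driven variant would call back into the host instead of being polled. The names and the cycle model are illustrative:

```python
# A polling host waiting on a device's ready/busy status during its search.

import itertools

class Device:
    def __init__(self, search_cycles):
        self._busy_for = search_cycles  # cycles the internal "search" takes

    def ready(self):
        # Report busy until the search (indices, mapping tables) finishes.
        if self._busy_for > 0:
            self._busy_for -= 1
            return False
        return True

def host_read(device):
    # Poll the ready/busy status until the data is prepared.
    for polls in itertools.count(1):
        if device.ready():
            return polls  # number of status reads until ready

print(host_read(Device(search_cycles=3)))  # 4
```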
  • FIG. 2 is a simplified flowchart of the steps in a predictable-data-retrieval process, according to a preferred embodiment of the present invention.
  • FIG. 3 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the data object is non-predictable (Flow A).
  • Data segments 18 that are not recognized as part of a predictable data object 17 are not read and loaded into cache memory 19 prior to the request of host system 10 . If the data segment 18 is not in cache memory 19 already (Step 22 ), then controller 14 proceeds to load the requested data segment 18 into cache memory 19 (Step 24 ). Controller 14 then notifies host system 10 that data segment 18 is ready to be read, and automatically sends data segment 18 to host system 10 (Step 30 ). When host system 10 reads data segment 18 , controller 14 checks if the current requested data segment 18 is part of a predictable data object 17 (Step 32 ). If not, then controller 14 ends the read-operation (Step 38 ).
  • FIG. 4 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the first segment of a predictable data object is being read (Flow B).
  • controller 14 checks if the current data segment 18 has a “follower segment” (Step 28 ). If it is the last data segment 18 in the sequence of data object 17 , controller 14 proceeds to the end (Step 38 ). If the data segment 18 is not identified as the last data segment in the sequence of data object 17 (i.e. data segment 18 has a “follower segment”), controller 14 determines the next data segment 18 in the sequence of data object 17 (Step 34 ), loads the next data segment 18 into cache memory 19 (Step 36 ), and then proceeds to the end (Step 38 ).
  • FIG. 5 is a simplified flowchart of the predictable-data-retrieval process of the present invention when a predictable data object is being read (Flow C).
  • the data segment being retrieved is neither the first nor the last data segment in the data object.
  • Data segments 18 can be loaded into cache memory 19 if data segments 18 are recognized as part of a predictable data object 17 , and were read and loaded into cache memory 19 prior to the request from host system 10 . If the requested data segment 18 is already in cache memory 19 (Step 22 ), controller 14 notifies host system 10 that the data is ready to be read, and sends the cached data segment 18 to host system 10 which tries to read the relevant data segment 18 (Step 26 ).
  • When controller 14 determines that the current requested data segment 18 is part of a predictable data object 17 , it proceeds to check if the current data segment 18 has a “follower segment” (Step 28 ). If the current data segment 18 is not the last data segment 18 in the data object 17 , controller 14 determines the next data segment 18 in the sequence of data object 17 (Step 34 ), loads the next data segment 18 into cache memory 19 (Step 36 ), and then ends the read-operation (Step 38 ).
  • FIG. 6 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the last segment of a predictable data object is being read (Flow D). Continuing from Step 28 , if the current data segment 18 is the last data segment 18 in the sequence of data object 17 , then controller 14 ends the read-operation (Step 38 ).
  • FIG. 7 is a simplified schematic diagram depicting Flows A-D of the process in FIGS. 3-6 as a function of time, according to a preferred embodiment of the present invention.
  • FIG. 7 illustrates a scenario that involves all four “read-modes” described in FIGS. 3-6 .
  • FIG. 7 is divided into a sequence of read-operations 40 , labeled as the read-modes of Flows A-D from FIGS. 3-6 , of host system 10 .
  • Three data objects 42 , 44 , and 46 are being read in consecutive order. Data objects 42 and 46 are non-predictable, and data object 44 is predictable.
  • the last read-operation 40 of predictable data object 44 is indicated by a read-operation 48 , and is read using Flow D of FIG. 6 .


Abstract

The present invention discloses devices for improving data-retrieval times from a non-volatile storage device. A non-volatile storage device including: a storage memory for storing data; a cache memory for preloading the data upon a host-system request to read the data; and a storage-device controller configured: to determine that a plurality of data segments that constitute a non-contiguous data object, stored in the storage memory such that at least one data segment is non-contiguous to a preceding data segment in the data object, are in a predictable sequence; and to preload a non-contiguous next data segment in the predictable sequence into the cache memory after loading a current data segment into a host system from the cache memory, wherein the next data segment is preloaded prior to the host-system request to read the next data segment.

Description

    RELATED APPLICATIONS
  • This patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 60/803,371, filed May 29, 2006, which is hereby incorporated by reference in its entirety.
  • This patent application is related to U.S. patent application Ser. No. ______ of the same inventors, which is entitled “METHOD FOR PRELOADING DATA TO IMPROVE DATA-RETRIEVAL TIMES” and filed on the same day as the present application. This patent application, also claiming priority to U.S. Provisional Application No. 60/803,371, is incorporated in its entirety as if fully set forth herein.
  • FIELD AND BACKGROUND OF THE INVENTION
  • The present invention relates to devices for improving data-retrieval times from a non-volatile storage device in which a cache memory is used for preloading a data segment before the data segment is loaded into a host system. Retrieved data (i.e. data which is requested by the host system) can be divided into ordered data (i.e. data that is arranged in a known and/or “predictable” sequence) and random data. The present invention focuses on retrieving ordered data from storage devices.
  • Data-retrieval operations from a storage device (e.g. a magnetic-tape recorder) are divided into two main sub-operations:
  • (1) an internal storage-device data-retrieval stage (hereinafter referred to as a “pre-loading stage”), which occurs upon a host-system request, involves the storage device's internal controller searching for the specific data, and preparing the data to be read by the host system. It is noted that the data is not delivered to the host system during the pre-loading stage.
  • (2) a host-system data-retrieval stage (hereinafter referred to as a “loading stage”), which occurs when the pre-loading stage is completed, involves the storage device notifying the host system that the data is ready to be read by the host system. Such notification can occur in two ways, such as by answering a host-system question as to whether the data is ready or not, or by invoking an interrupt to the host system, signaling that the data is ready.
  • Clearly, such data-retrieval operations, typical of all storage devices in the prior art, have a built-in latency, which is the time needed for the first sub-operation. This latency does not disturb the host system, and is hardly noticed, if the time to process one segment of data is much longer than this latency. However, in some data-retrieval operations (especially in streaming processes such as loading JPEG and MPEG data, for example), the processing time is very short, and the latency of waiting for the storage device to complete the initial pre-loading stage becomes a problem. Moreover, in some applications, it is important that the data in storage be available on demand (e.g. in a system that utilizes a “demand-paging mechanism”).
  • It is important to note that proxy servers and cache storage systems known in the prior art do not solve the need described herein, as they are probabilistic and provide faster access based only on considerations that are external to the data itself (e.g. history of retrieval, availability of sectors, and a priori knowledge about future retrieval). The prior art fails to improve the loading time of an arbitrarily-selected data file using any type of predictive approach.
  • It would be desirable to have devices for predicting with a high probability of success which data segments will be subsequently loaded from a storage device. By applying such a prediction, and preparing the predicted data segment, such systems can save time and increase the efficiency of data retrieval.
  • It is noted that there are prior-art systems that cache a plurality of data segments in order to reduce the time of the reading process (e.g. a hard-disk drive that reads all the available sectors upon one revolution of the disk). Such prior art does not solve the need described above, as it applies only to contiguous data objects, and only to the number of sectors that can be read in one revolution of the disk.
  • SUMMARY OF THE INVENTION
  • It is the purpose of the present invention to provide devices for improving data-retrieval times from a non-volatile storage device in which a cache memory is used for preloading a data segment before the data segment is loaded into a host system.
  • For the purpose of clarity, several terms which follow are specifically defined for use herein. The term “data segment” is used herein to refer to a unit of data that is written and/or read according to a given protocol as a single unit of data, such that a write-operation and/or read-operation typically writes and/or reads one segment (e.g. a sector in storage). The term “data object” is used herein to refer to a collection of segments having a joint logical meaning (e.g. a file made of sectors). The term “storage device” is used herein to refer to a device configured to be operationally connected to a host system, which activates, initializes, and manages the storage device. A storage device provides data to the host system (e.g. data input and read-operations) and/or receives data from the host system (e.g. data output and write-operations).
  • The term “data sequence” is used herein to refer to an index of logical segments of a data object in a storage device (e.g. bits, bytes, sectors, segments, units, and blocks), indicating the order of the data in the storage device, and enabling the prediction of the logical address of a next segment while using a current segment. The term “predictable sequence” is used herein to refer to a set of data segments that are to be read in a specific order. In some embodiments of the present invention, the specific order is provided to the storage device by the host system. Alternatively, the specific order is derived by the storage device from information that is either in the data object, or is provided by the host system upon starting a read-operation.
  • The term “data-retrieval time” is used herein to refer to the time for performing a data-object loading process. In the present invention, a process in which segment-by-segment preloading operations are performed during the loading makes the process faster than a similar process lacking such segment-by-segment preloading operations, thereby improving data-retrieval times. The term “contiguous data object” is used herein to refer to a data object made of segments having contiguous addresses. The term “non-contiguous data object” is used herein to refer to a data object made of segments having non-contiguous addresses.
  • The term “diluted data object” is used herein to refer to a data object having data segments that are excluded from being preloaded in a predictive manner according to the present invention. The terms “diluted formula” and “diluted algorithm” are used herein to refer to a formula and algorithm, respectively, which are used to predict the next data segment, from the data segments that are not excluded from a diluted data object, to be preloaded according to the present invention.
  • The present invention teaches devices for identifying sequences of non-contiguous data objects that are requested by a host system from a storage device, and using such sequences to prepare the next data segment for reading while the current segment is being processed. Such devices eliminate the latency of the pre-loading stage, making the next segment available to the host system on demand.
  • A potentially redundant step exists if a pre-loaded segment is not requested by the host system. The present invention assumes that the next segment will actually be required by the host system; however, it is possible that the next pre-loaded segment will not be requested (e.g. if a host application aborts the loading process). In such a case, the time lost unnecessarily in performing the segment pre-loading is minimal.
  • It is the purpose of the present invention to teach faster transfer of non-contiguous data segments from a storage device to a host system (or to another storage device) by predicting the next segments to be needed by the host system before the segments are actually requested.
  • While the segments are requested in random order in some applications, making it difficult to predict the next segment to be requested, in other applications (e.g. playing music, displaying images, and running program-code sequences), there are sets of segments that are always retrieved in sequence (e.g. data objects). Since segments are typically stored in a non-contiguous manner, the retrieval of non-contiguous data objects involves some latency.
  • In the present invention, the controller of the storage device detects predictable sequences to be loaded, and pre-loads the next segment, based on information about the identity of the next segment. There are several alternative ways for the controller of the storage device to determine if a data object to be retrieved should be retrieved as a predictable data object. The embodiments below are classified in Table 1 according to the “next-segment determination” criteria of the entity that determines the next segment (i.e. host system or storage device), and of the time of the determination (i.e. upon a write-operation or upon a read-operation).
    TABLE 1
    Embodiments of the present invention according to “next-segment
    determination” criteria.
    [1A] Next segment address determined by the host system, upon a write-operation:
      (i) Designate the data object as predictable.
      (ii) Prescribe, for each segment, the following segment to be read.
    [2A] Next segment address determined by the storage device, upon a write-operation:
      (i) Recognize a predictable data object by the filename.
      (ii) Recognize a predictable data object by the file extension.
      (iii) Recognize a predictable data object by the file content.
    [1B] Next segment address determined by the host system, upon a read-operation:
      (i) Prescribe to the storage device, upon reading the data object, to read the data object sequentially.
      (ii) Specify, upon reading a segment, the next segment to be read.
    [2B] Next segment address determined by the storage device, upon a read-operation:
      (i) Determine the next segment to be read according to a formula, provided by the host system upon reading the data object.
  • In a preferred embodiment of the present invention (see Table 1, [1A](i-ii)), the host system informs the storage device, upon writing a data object, that the data object is to be handled as predictable, and the storage device uses this information to handle the data object as predictable upon retrieval (i.e. the storage device determines the address of the next segment to be read, and “caches” the segment (i.e. sends the segment to cache memory) upon delivery of the current segment).
  • In another preferred embodiment of the present invention (see Table 1, [1B](i)), the host system prescribes to the storage device, upon reading a segment in the storage device, the identity of the segment that should follow the current segment upon reading. The storage device follows the host-system instruction upon reading, and handles the data object as predictable.
  • In another preferred embodiment of the present invention (see Table 1, [2A](i-iii)), the controller of the storage device, upon writing the data object, recognizes that the data object is predictable (based on data-object name, extension, or content) as follows:
  • (i) Based on filename: The storage device identifies data-object type (predictable or non-predictable) by the data-object name (e.g. filename). For example, there are filenames that start with specific strings that indicate the type of file. According to the file type, the storage device recognizes if the file type follows a predictable sequence or not. A prior-art example is JPEG files created by a digital camera, and stored with the filenames IMG0.JPG, IMG1.JPG, . . . IMGXXX.JPG.
  • (ii) Based on file extension: The storage device identifies the data object type (predictable or non-predictable) by file extension. For example, the extension of a filename sometimes indicates the type of file. According to the file type, the storage device recognizes if the file type follows a predictable sequence or not. A prior-art example is a JPEG file with the extension JPG, or an MPEG file with the extension MPG.
  • (iii) Based on content format: The storage device identifies the data-object type (predictable or non-predictable) by a unique identification or “signature” included in the file content. A prior-art example is a Windows® CE (or WinCE) image that includes a unique signature, “B000FF”, at a specific offset in the file (i.e. data object). According to such a signature, at a specific offset inside the data object, the storage device identifies the data as an executable OS image. There are well-known industry formats that are identified according to signatures at a specific offset, together with a “checksum” (at a specific offset from the start of the data object).
  • In all of the above preferred embodiments, the storage-device controller writes the data object in a way that will enable devices of the present invention to be operative upon reading.
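  • By way of illustration only, the three write-time recognition rules of embodiment [2A] can be sketched as follows. The prefix, extension, and signature tables here are hypothetical stand-ins for whatever rules a given controller would actually ship with; they are not prescribed by the invention.

```python
# Sketch of write-time recognition of predictable data objects. The tables
# below are illustrative assumptions, not part of the patent's disclosure.
PREDICTABLE_PREFIXES = ("IMG",)            # e.g. camera files IMG0.JPG, IMG1.JPG, ...
PREDICTABLE_EXTENSIONS = (".JPG", ".MPG")  # e.g. JPEG and MPEG streams
CONTENT_SIGNATURES = [(0, b"B000FF")]      # e.g. a WinCE-style signature at a known offset

def is_predictable(filename: str, content: bytes) -> bool:
    """Classify a data object as predictable by name, extension, or content."""
    name = filename.upper()
    # (i) by filename prefix
    if name.startswith(PREDICTABLE_PREFIXES):
        return True
    # (ii) by file extension
    if name.endswith(PREDICTABLE_EXTENSIONS):
        return True
    # (iii) by a signature at a known offset inside the content
    for offset, sig in CONTENT_SIGNATURES:
        if content[offset:offset + len(sig)] == sig:
            return True
    return False
```

A controller applying such a check upon a write-operation would then mark the object as predictable for later retrieval.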
  • In another preferred embodiment of the present invention (see Table 1, [1B](ii)), the host system informs the storage device, upon reading a data object or a part of a data object, that the data object is to be read as a predictive data object, and the storage device uses the data sequence generated upon writing, to identify the next data segment and cache the next data segment upon delivery of the current segment.
  • In another embodiment of the present invention (see Table 1, [2B](i)), a host application of the host system prescribes to the storage device, upon reading the data object, a diluted formula or a diluted algorithm that the storage device should use to determine the next data segment in a diluted data object, and the storage device caches and delivers the data segments according to such a formula. An example of an application of such an embodiment is the retrieval of a large image. Such retrieval can be done either sequentially, or by sampling portions of the image (and filling in the missing portions later). Another example of an application of such an embodiment is the retrieval of records from a large database following the completion of a search on the database. The search provides a list of pointers, and the retrieval needs a specified number of segments for each pointer.
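  • A minimal sketch of a “diluted formula” of the kind described above, assuming a hypothetical stride rule (sampling every fourth segment of an image, with the skipped segments filled in later); the function names and the stride rule are illustrative only:

```python
# A host-supplied "diluted formula" maps the current segment index to the
# next one to preload, skipping the excluded segments of a diluted object.
def make_stride_formula(stride: int, total_segments: int):
    """Return a next-segment formula sampling every `stride`-th segment."""
    def next_segment(current: int):
        nxt = current + stride
        return nxt if nxt < total_segments else None  # None: sequence exhausted
    return next_segment

formula = make_stride_formula(stride=4, total_segments=16)
order = []
seg = 0
while seg is not None:
    order.append(seg)
    seg = formula(seg)
# `order` now holds a coarse sample of the object's segments
```

The storage device would cache and deliver segments in exactly this computed order.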
  • In another preferred embodiment of the present invention, the “formula” for the next segment is simply an instruction to “read the segment that follows the current segment.” In such a case, the application enables any data structure to be read as a contiguous data object.
  • In another preferred embodiment of the present invention, the host system, upon reading the data object, arbitrarily specifies along with each read command, a specific identity of the next segment to be read, and the storage device then caches the next segment. In such a case, no formula or algorithm is needed.
  • When the storage-device controller writes a data object that is known to be predictable, the controller marks the data object as such in one of two alternative embodiments:
  • (1) the controller maintains a data structure that maps the individual segments of the data object, and recognizes, upon retrieval, which is the next segment in the sequence. Such a data structure can take the form of a pointer, an index, a formula, or any other indication that can predictably point to the next segment. One way to maintain these indices is to use the virtual-to-physical “conversion” data, which reside in mapping tables in all flash-memory storage devices, to maintain the information that applies to each segment of storage. A bit can be added to the segment information to indicate if the relevant segment is predictable. Such an approach to labeling data objects ensures lower storage-memory consumption and better retrieval-time performance compared to managing a separate index for the predictable attribute.
  • (2) each segment of a predictable data object is associated, upon writing, with a pointer to the next segment in the sequence. Upon retrieval, the controller reads this pointer, and prepares the next segment as described below.
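  • The two marking schemes above can be sketched with a toy mapping table; the `SegmentEntry` structure and its field names are illustrative assumptions, not the on-device format:

```python
# Toy virtual-to-physical mapping table: each logical segment entry carries a
# physical address, a "predictable" bit (scheme 1), and a pointer to the next
# segment in the sequence (scheme 2).
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentEntry:
    physical_addr: int
    predictable: bool = False           # extra bit added to the mapping entry
    next_segment: Optional[int] = None  # pointer written alongside the segment

mapping = {}

def write_predictable_object(logical_addrs, physical_addrs):
    """Write a data object and chain each segment to its follower."""
    for i, (la, pa) in enumerate(zip(logical_addrs, physical_addrs)):
        follower = logical_addrs[i + 1] if i + 1 < len(logical_addrs) else None
        mapping[la] = SegmentEntry(pa, predictable=True, next_segment=follower)

# A non-contiguous object whose segments live at scattered logical addresses:
write_predictable_object([7, 2, 9], [100, 101, 102])
```

Upon retrieval of segment 7, the controller would follow `next_segment` to preload segment 2, and so on.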
  • In another preferred embodiment of the present invention, the identity of a “follower segment” is determined statistically by the controller. The controller records the identity of the data segment that is requested following a given data segment. Upon identification of such a data segment, the data segment is designated as the follower data segment of the current data segment, and is predictively cached each time the current data segment is requested. While there is no guarantee that the follower segment will always be the next data segment requested (as in program code, in which a routine can diverge at “if/then” branches, causing two or more alternative segment chains), there is an improvement in file-access time due to the statistical probability of the data segment being a “frequent follower.” This is in contrast to a simple history-of-retrieval approach that tracks only the history of data-object retrieval; the present invention predicts the follower segment, not just the next data object.
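  • A sketch of this statistical follower mechanism, assuming a simple in-memory frequency table (a real controller would bound the table's size and age out stale counts; those details are omitted here):

```python
# Learn "follower segments" by counting which segment is requested after
# each segment, and predict the most frequent follower seen so far.
from collections import Counter, defaultdict

class FollowerPredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)  # segment -> Counter of followers
        self.last_request = None

    def record(self, segment: int) -> None:
        """Note a request, crediting `segment` as follower of the previous one."""
        if self.last_request is not None:
            self.followers[self.last_request][segment] += 1
        self.last_request = segment

    def predict(self, segment: int):
        """Return the most frequent follower of `segment`, or None if no history."""
        counts = self.followers[segment]
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = FollowerPredictor()
for seg in [1, 5, 1, 5, 1, 8]:  # segment 5 follows segment 1 twice, segment 8 once
    p.record(seg)
```

Here `p.predict(1)` yields 5, the “frequent follower,” even though the chain once diverged to segment 8.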
  • An essential feature of the present invention is the autonomous execution of the preloading stage of the next data segment by the storage-device controller based on information known to the storage device (from any source) about the expected next request of the host system to retrieve a data segment.
  • When a data segment is requested by the host system (or by another storage device in a direct memory access (DMA) process), the controller checks if the data segment belongs to a predictable data object. Such a check is performed in either of the two ways described above. If the data segment is found to belong to a predictable data object, then immediately after loading the data segment to the host system, the controller preloads the next data segment in the data object. When the host system requests the next data segment, the controller can deliver the data segment immediately.
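  • The read path just described can be sketched as follows, with storage simulated by a dictionary and the predictable sequence by a one-step lookup table; all names and addresses are illustrative, not the device's actual layout:

```python
# Toy model of the preload-on-read path for a predictable, non-contiguous
# data object stored at scattered addresses.
storage = {0: b"seg0", 4: b"seg4", 9: b"seg9"}  # non-contiguous addresses
sequence = {0: 4, 4: 9, 9: None}                # predictable read order
cache = {}

def read_segment(addr: int) -> bytes:
    # Deliver from cache if the segment was preloaded earlier...
    data = cache.pop(addr, None)
    if data is None:
        # ...otherwise pay the slow "pre-loading stage" latency here.
        data = storage[addr]
    # Autonomously preload the follower in the predictable sequence.
    follower = sequence.get(addr)
    if follower is not None:
        cache[follower] = storage[follower]
    return data

first = read_segment(0)
# after serving segment 0, segment 4 already sits in the cache
```

When the host subsequently requests segment 4, it is delivered from the cache with no search latency, and segment 9 is preloaded in turn.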
  • Therefore, according to the present invention, there is provided for the first time a non-volatile storage device including: (a) a storage memory for storing data; (b) a cache memory for preloading the data upon a host-system request to read the data; and (c) a storage-device controller configured: (i) to determine that a plurality of data segments that constitute a non-contiguous data object, stored in the storage memory such that at least one data segment is non-contiguous to a preceding data segment in the data object, are in a predictable sequence; and (ii) to preload a non-contiguous next data segment in the predictable sequence into the cache memory after loading a current data segment into a host system from the cache memory, wherein the next data segment is preloaded prior to the host-system request to read the next data segment.
  • Preferably, the controller is further configured: (iii) to store information about the predictable sequence upon writing of the data object into the storage memory.
  • Preferably, the controller is further configured: (iii) to receive an indication, from the host system, that the data object is a predictable data object.
  • Most preferably, the indication is provided upon writing the data object into the storage memory, or upon reading the data object from the storage memory.
  • Preferably, the controller is further configured: (iii) to determine the data object as a predictable data object by examination of properties of the data object.
  • Most preferably, the properties include at least one property from the group consisting of: a name of the data object, an extension of the data object, and a format of the data object.
  • Preferably, an identity of the next data segment is determined from at least one item from the group consisting of: a data structure that is external to the data object, a parameter in the current data segment of the data object, a table for converting virtual addresses to physical addresses of the plurality of data segments, a pointer from the current data segment to the next data segment, a host-system designation of the current data segment, and a statistical frequency analysis of data segments that follow the current data segment in previous retrievals of the current data segment.
  • Preferably, the host-system request includes an identity of the next data segment for preloading into the cache memory.
  • Preferably, the controller is further configured: (iii) to select the next data segment for preloading into the cache memory, from the plurality of data segments, using a diluted algorithm or a diluted formula, prior to the host-system request to read the next data segment.
  • These and further embodiments will be apparent from the detailed description and examples that follow.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:
  • FIG. 1 is a simplified block diagram of a predictable-data-retrieval system, according to a preferred embodiment of the present invention;
  • FIG. 2 is a simplified flowchart of the steps in a predictable-data-retrieval process, according to a preferred embodiment of the present invention;
  • FIG. 3 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the data object is non-predictable (Flow A);
  • FIG. 4 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the first segment of a predictable data object is being read (Flow B);
  • FIG. 5 is a simplified flowchart of the predictable-data-retrieval process of the present invention when a predictable data object is being read (Flow C);
  • FIG. 6 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the last segment of a predictable data object is being read (Flow D);
  • FIG. 7 is a simplified schematic diagram depicting Flows A-D of the process in FIGS. 3-6 as a function of time, according to a preferred embodiment of the present invention.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The present invention relates to devices for improving data-retrieval times from a non-volatile storage device in which a cache memory is used for preloading a data segment before the data segment is loaded into a host system. The principles and operation for improving data-retrieval times from a non-volatile storage device, according to the present invention, may be better understood with reference to the accompanying description and the drawings.
  • Referring now to the drawings, FIG. 1 is a simplified block diagram of a predictable-data-retrieval system, according to a preferred embodiment of the present invention. A host system 10 (e.g. a personal computer or a mobile phone), is shown connected to a storage device 12 (e.g. a USB flash drive (UFD) or a memory card). Storage device 12 has a controller 14 and a non-volatile storage memory 16. Storage memory 16 contains data objects 17 that are made of data segments 18. Storage device 12 also has a cache memory 19 that is typically much faster and smaller than storage memory 16. Upon receiving a data-retrieval request from host system 10, storage device 12 performs a “search” operation, which actually prepares data segments 18 for read-operations on host system 10.
  • Since the search involves reading indices and/or mapping tables, determining the correct address of data segments 18 according to which data object 17 was read, and loading data segments 18 into cache memory 19, the search takes time to perform. While the search is being performed, storage device 12 either notifies host system 10 that data object 17 is not ready yet (e.g. via a bit indicating the status as ready/busy), or responds with a failure to the read-operations associated with the data-retrieval request from host system 10. When data object 17 is ready (i.e. storage device 12 found the relevant file, relevant track, or relevant flash-erasable unit), storage device 12 notifies host system 10 that data object 17 is ready to be read (via a ready/busy bit or by an interrupt). At this point, host system 10, or another storage device, can read data object 17.
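  • The ready/busy handshake can be modeled with a toy device, as below; the polling interface is an assumption made for illustration, since a real device would expose a status register or raise an interrupt rather than a method call:

```python
# Toy model of the ready/busy handshake: the host polls a status bit until
# the device's internal "search" completes, then issues the read.
class ToyDevice:
    def __init__(self, search_cycles: int, data: bytes):
        self._remaining = search_cycles  # cycles left in the internal search
        self._data = data

    def ready(self) -> bool:
        # Each poll models one elapsed cycle of the internal search.
        if self._remaining > 0:
            self._remaining -= 1
        return self._remaining == 0

    def read(self) -> bytes:
        if self._remaining > 0:
            raise RuntimeError("read before device ready")
        return self._data

dev = ToyDevice(search_cycles=3, data=b"object")
polls = 0
while not dev.ready():  # host-side busy-wait on the status bit
    polls += 1
result = dev.read()
```

The preloading scheme of the present invention aims to make this busy-wait disappear for predictable objects, since the next segment is already prepared when the host asks for it.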
  • FIG. 2 is a simplified flowchart of the steps in a predictable-data-retrieval process, according to a preferred embodiment of the present invention. Upon receiving a request for a data segment 18 from host system 10 (Step 20), controller 14 determines whether or not the requested data segment 18 is in cache memory 19 (Step 22).
  • FIG. 3 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the data object is non-predictable (Flow A). Data segments 18 that are not recognized as part of a predictable data object 17 are not read and loaded into cache memory 19 prior to the request of host system 10. If the data segment 18 is not in cache memory 19 already (Step 22), then controller 14 proceeds to load the requested data segment 18 into cache memory 19 (Step 24). Controller 14 then notifies host system 10 that data segment 18 is ready to be read, and automatically sends data segment 18 to host system 10 (Step 30). When host system 10 reads data segment 18, controller 14 checks if the currently-requested data segment 18 is part of a predictable data object 17 (Step 32). If not, then controller 14 ends the read-operation (Step 38).
  • FIG. 4 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the first segment of a predictable data object is being read (Flow B). Continuing from Step 32 in the case of a predictable data object 17, controller 14 checks whether the current data segment 18 has a “follower segment,” i.e. whether it is not the last data segment 18 in the sequence of data object 17 (Step 28). If it is the last data segment 18 in the sequence, controller 14 proceeds to the end (Step 38). Otherwise, controller 14 determines the next data segment 18 in the sequence of data object 17 (Step 34), loads the next data segment 18 into cache memory 19 (Step 36), and then proceeds to the end (Step 38).
  • FIG. 5 is a simplified flowchart of the predictable-data-retrieval process of the present invention when a predictable data object is being read (Flow C). In this case, the data segment being retrieved is neither the first nor the last data segment in the data object. Data segments 18 can already be in cache memory 19 if they are recognized as part of a predictable data object 17, and were read and loaded into cache memory 19 prior to the request from host system 10. If the requested data segment 18 is already in cache memory 19 (Step 22), controller 14 notifies host system 10 that the data is ready to be read, and sends the cached data segment 18 to host system 10, which reads the relevant data segment 18 (Step 26). If controller 14 identifies that the currently-requested data segment 18 is part of a predictable data object 17, controller 14 proceeds to check if the current data segment 18 has a “follower segment” (Step 28). If the current data segment 18 is not the last data segment 18 in data object 17, controller 14 determines the next data segment 18 in the sequence of data object 17 (Step 34), loads the next data segment 18 into cache memory 19 (Step 36), and then ends the read-operation (Step 38).
  • FIG. 6 is a simplified flowchart of the predictable-data-retrieval process of the present invention when the last segment of a predictable data object is being read (Flow D). Continuing from Step 28, if the current data segment 18 is the last data segment 18 in the sequence of data object 17, then controller 14 ends the read-operation (Step 38).
  • FIG. 7 is a simplified schematic diagram depicting Flows A-D of the process in FIGS. 3-6 as a function of time, according to a preferred embodiment of the present invention. FIG. 7 illustrates a scenario that involves all four “read-modes” described in FIGS. 3-6. FIG. 7 is divided into a sequence of read-operations 40, labeled as the read-modes of Flows A-D from FIGS. 3-6, of host system 10. Three data objects 42, 44, and 46 are being read in consecutive order. Data objects 42 and 46 are non-predictable, and data object 44 is predictable. By way of example, the last read-operation 40 of predictable data object 44 is indicated by a read-operation 48, and is read using Flow D of FIG. 6.
  • While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications, and other applications of the invention may be made.

Claims (10)

1. A non-volatile storage device comprising:
(a) a storage memory for storing data;
(b) a cache memory for preloading said data upon a host-system request to read said data; and
(c) a storage-device controller configured:
(i) to determine that a plurality of data segments that constitute a non-contiguous data object, stored in said storage memory such that at least one said data segment is non-contiguous to a preceding said data segment in said data object, are in a predictable sequence; and
(ii) to preload a non-contiguous next data segment in said predictable sequence into said cache memory after loading a current data segment into a host system from said cache memory, wherein said next data segment is preloaded prior to said host-system request to read said next data segment.
2. The device of claim 1, wherein said controller is further configured:
(iii) to store information about said predictable sequence upon writing of said data object into said storage memory.
3. The device of claim 1, wherein said controller is further configured:
(iii) to receive an indication, from said host system, that said data object is a predictable data object.
4. The device of claim 3, wherein said indication is provided upon writing said data object into said storage memory.
5. The device of claim 3, wherein said indication is provided upon reading said data object from said storage memory.
6. The device of claim 1, wherein said controller is further configured:
(iii) to determine said data object as a predictable data object by examination of properties of said data object.
7. The device of claim 6, wherein said properties include at least one property from the group consisting of: a name of said data object, an extension of said data object, and a format of said data object.
8. The device of claim 1, wherein an identity of said next data segment is determined from at least one item from the group consisting of: a data structure that is external to said data object, a parameter in said current data segment of said data object, a table for converting virtual addresses to physical addresses of said plurality of data segments, a pointer from said current data segment to said next data segment, a host-system designation of said current data segment, and a statistical frequency analysis of data segments that follow said current data segment in previous retrievals of said current data segment.
9. The device of claim 1, wherein said host-system request includes an identity of said next data segment for preloading into said cache memory.
10. The device of claim 1, wherein said controller is further configured:
(iii) to select said next data segment for preloading into said cache memory, from said plurality of data segments, using a diluted algorithm or a diluted formula, prior to said host-system request to read said next data segment.
US11/802,223 2006-05-29 2007-05-21 Predictive data-loader Abandoned US20070276989A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/802,223 US20070276989A1 (en) 2006-05-29 2007-05-21 Predictive data-loader

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US80337106P 2006-05-29 2006-05-29
US11/802,223 US20070276989A1 (en) 2006-05-29 2007-05-21 Predictive data-loader

Publications (1)

Publication Number Publication Date
US20070276989A1 true US20070276989A1 (en) 2007-11-29

Family

ID=38445643

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/802,224 Active 2028-03-21 US8051249B2 (en) 2006-05-29 2007-05-21 Method for preloading data to improve data-retrieval times
US11/802,223 Abandoned US20070276989A1 (en) 2006-05-29 2007-05-21 Predictive data-loader

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/802,224 Active 2028-03-21 US8051249B2 (en) 2006-05-29 2007-05-21 Method for preloading data to improve data-retrieval times

Country Status (5)

Country Link
US (2) US8051249B2 (en)
JP (1) JP2009539168A (en)
KR (1) KR101422557B1 (en)
TW (1) TWI349194B (en)
WO (1) WO2007138585A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100274962A1 (en) * 2009-04-26 2010-10-28 Sandisk Il Ltd. Method and apparatus for implementing a caching policy for non-volatile memory
US20140250086A1 (en) * 2013-03-03 2014-09-04 Barracuda Networks, Inc. WAN Gateway Optimization by Indicia Matching to Pre-cached Data Stream Apparatus, System, and Method of Operation
US8892638B2 (en) 2012-05-10 2014-11-18 Microsoft Corporation Predicting and retrieving data for preloading on client device
US20150134680A1 (en) * 2013-11-13 2015-05-14 Palo Alto Research Center Incorporated Method and apparatus for prefetching content in a data stream
US9099171B2 (en) 2012-06-08 2015-08-04 Hitachi, Ltd. Information processor
US9336214B2 (en) 2009-01-31 2016-05-10 Hewlett-Packard Development Company, L.P. File-name extension characters for file distribution

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4984666B2 (en) * 2006-06-12 2012-07-25 ソニー株式会社 Non-volatile memory
RU2006134919A (en) * 2006-10-03 2008-04-10 Спарсикс Корпорейшн (Us) MEMORY CONTROLLER FOR THE SYSTEM OF CALCULATION OF SPARED DATA AND METHOD FOR IT
JP5093757B2 (en) * 2008-02-19 2012-12-12 日本電気株式会社 Field priority terminal cache storage system, method and program thereof
JP2010125631A (en) * 2008-11-25 2010-06-10 Casio Electronics Co Ltd Data transfer device
JP5104828B2 (en) * 2009-08-24 2012-12-19 富士通株式会社 Object-based storage system, cache control device, and cache control method
US8560778B2 (en) * 2011-07-11 2013-10-15 Memory Technologies Llc Accessing data blocks with pre-fetch information
US10606754B2 (en) * 2012-04-16 2020-03-31 International Business Machines Corporation Loading a pre-fetch cache using a logical volume mapping
US9304711B2 (en) 2012-10-10 2016-04-05 Apple Inc. Latency reduction in read operations from data storage in a host device
US10127235B2 (en) * 2013-03-06 2018-11-13 Quest Software Inc. Storage system deduplication with service level agreements
US9565233B1 (en) * 2013-08-09 2017-02-07 Google Inc. Preloading content for requesting applications
US9542309B2 (en) 2013-08-21 2017-01-10 Sandisk Technologies Llc Relocating data based on matching address sequences
KR101485907B1 (en) 2014-01-03 2015-02-11 연세대학교 산학협력단 Apparatus and method for preload using probability model
US9529722B1 (en) 2014-07-31 2016-12-27 Sk Hynix Memory Solutions Inc. Prefetch with localities and performance monitoring
CN106155764A (en) 2015-04-23 2016-11-23 Alibaba Group Holding Ltd. Method and device for scheduling virtual machine input and output resources
CN106201839B (en) 2015-04-30 2020-02-14 阿里巴巴集团控股有限公司 Information loading method and device for business object
CN106209741B (en) 2015-05-06 2020-01-03 阿里巴巴集团控股有限公司 Virtual host, isolation method, resource access request processing method and device
CN106708819A (en) * 2015-07-17 2017-05-24 阿里巴巴集团控股有限公司 Data caching preheating method and device
CN106487708B (en) 2015-08-25 2020-03-13 阿里巴巴集团控股有限公司 Network access request control method and device
KR102450555B1 (en) 2015-11-09 2022-10-05 삼성전자주식회사 Storage device and operating method thereof
JP6100952B2 (en) * 2016-04-27 2017-03-22 株式会社日立製作所 Information processing device
US10776273B2 (en) * 2016-05-16 2020-09-15 SK Hynix Inc. Memory system having multiple cache pages and operating method thereof
CN109213692B (en) * 2017-07-06 2022-10-21 慧荣科技股份有限公司 Storage device management system and storage device management method
US11256618B2 (en) 2017-07-06 2022-02-22 Silicon Motion, Inc. Storage apparatus managing system comprising local and global registering regions for registering data and associated method
US10459844B2 (en) * 2017-12-21 2019-10-29 Western Digital Technologies, Inc. Managing flash memory read operations
CN111045732B (en) * 2019-12-05 2023-06-09 腾讯科技(深圳)有限公司 Data processing method, chip, device and storage medium
KR20220035568A (en) 2020-09-14 2022-03-22 에스케이하이닉스 주식회사 Memory system and operating method of memory system

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5410653A (en) * 1992-06-16 1995-04-25 International Business Machines Corporation Asynchronous read-ahead disk caching using multiple disk I/O processes and dynamically variable prefetch length
US5530821A (en) * 1991-08-02 1996-06-25 Canon Kabushiki Kaisha Method and apparatus including independent virtual address translation
US5941981A (en) * 1997-11-03 1999-08-24 Advanced Micro Devices, Inc. System for using a data history table to select among multiple data prefetch algorithms
US6003115A (en) * 1997-07-29 1999-12-14 Quarterdeck Corporation Method and apparatus for predictive loading of a cache
US6044439A (en) * 1997-10-27 2000-03-28 Acceleration Software International Corporation Heuristic method for preloading cache to enhance hit rate
US6134643A (en) * 1997-11-26 2000-10-17 Intel Corporation Method and apparatus for cache line prediction and prefetching using a prefetch controller and buffer and access history
US6463509B1 (en) * 1999-01-26 2002-10-08 Motive Power, Inc. Preloading data in a cache memory according to user-specified preload criteria
US6516389B1 (en) * 1999-12-28 2003-02-04 Kabushiki Kaisha Toshiba Disk control device
US20030105939A1 (en) * 2001-11-30 2003-06-05 Cooksey Robert N. Method and apparatus for next-line prefetching from a predicted memory address
US6625696B1 (en) * 2000-03-31 2003-09-23 Intel Corporation Method and apparatus to adaptively predict data quantities for caching
US6654867B2 (en) * 2001-05-22 2003-11-25 Hewlett-Packard Development Company, L.P. Method and system to pre-fetch compressed memory blocks using pointers
US20040088490A1 (en) * 2002-11-06 2004-05-06 Subir Ghosh Super predictive fetching system and method
US20040148593A1 (en) * 2003-01-29 2004-07-29 Jan Civlin Method and apparatus for prefetching memory pages during execution of a computer program
US20040168026A1 (en) * 2003-02-20 2004-08-26 Wu Chia Y. Write posting memory interface with block-based read-ahead mechanism
US20040205298A1 (en) * 2003-04-14 2004-10-14 Bearden Brian S. Method of adaptive read cache pre-fetching to increase host read throughput
US20040205301A1 (en) * 2003-04-14 2004-10-14 Renesas Technology Corp. Memory device
US6834325B1 (en) * 1999-07-16 2004-12-21 Storage Technology Corporation System and method for providing client-directed staging to improve non-sequential access performance in a caching disk storage system
US20050210198A1 (en) * 2004-03-22 2005-09-22 International Business Machines Corporation Method and apparatus for prefetching data from a data structure
US20060026386A1 (en) * 2004-07-30 2006-02-02 Microsoft Corporation System and method for improved prefetching
US20070136533A1 (en) * 2005-12-09 2007-06-14 Microsoft Corporation Pre-storage of data to pre-cached system memory
US20070143547A1 (en) * 2005-12-20 2007-06-21 Microsoft Corporation Predictive caching and lookup
US20070150647A1 (en) * 2005-12-27 2007-06-28 Samsung Electronics Co., Ltd. Storage apparatus using non-volatile memory as cache and method of operating the same
US20070198780A1 (en) * 2006-02-17 2007-08-23 Boyd Kenneth W Apparatus, system, and method for determining prefetch data
US20070204108A1 (en) * 2006-02-28 2007-08-30 Griswell John B Jr Method and system using stream prefetching history to improve data prefetching performance
US7313656B1 (en) * 2004-12-27 2007-12-25 Emc Corporation Pre-fetch prediction method for disk drives

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6418525B1 (en) 1999-01-29 2002-07-09 International Business Machines Corporation Method and apparatus for reducing latency in set-associative caches using set prediction
US6760818B2 (en) * 2002-05-01 2004-07-06 Koninklijke Philips Electronics N.V. Memory region based data pre-fetching

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9336214B2 (en) 2009-01-31 2016-05-10 Hewlett-Packard Development Company, L.P. File-name extension characters for file distribution
US20100274962A1 (en) * 2009-04-26 2010-10-28 Sandisk Il Ltd. Method and apparatus for implementing a caching policy for non-volatile memory
US8103822B2 (en) 2009-04-26 2012-01-24 Sandisk Il Ltd. Method and apparatus for implementing a caching policy for non-volatile memory
US8892638B2 (en) 2012-05-10 2014-11-18 Microsoft Corporation Predicting and retrieving data for preloading on client device
US9099171B2 (en) 2012-06-08 2015-08-04 Hitachi, Ltd. Information processor
US9268486B2 (en) 2012-06-08 2016-02-23 Hitachi, Ltd. Information processor
US20140250086A1 (en) * 2013-03-03 2014-09-04 Barracuda Networks, Inc. WAN Gateway Optimization by Indicia Matching to Pre-cached Data Stream Apparatus, System, and Method of Operation
US20150134680A1 (en) * 2013-11-13 2015-05-14 Palo Alto Research Center Incorporated Method and apparatus for prefetching content in a data stream
US10101801B2 (en) * 2013-11-13 2018-10-16 Cisco Technology, Inc. Method and apparatus for prefetching content in a data stream

Also Published As

Publication number Publication date
US20070276990A1 (en) 2007-11-29
KR20090026296A (en) 2009-03-12
US8051249B2 (en) 2011-11-01
WO2007138585A1 (en) 2007-12-06
JP2009539168A (en) 2009-11-12
TW200811654A (en) 2008-03-01
TWI349194B (en) 2011-09-21
KR101422557B1 (en) 2014-08-13

Similar Documents

Publication Publication Date Title
US8051249B2 (en) Method for preloading data to improve data-retrieval times
US8595451B2 (en) Managing a storage cache utilizing externally assigned cache priority tags
US10503423B1 (en) System and method for cache replacement using access-ordering lookahead approach
TWI233552B (en) A log-structured write cache for data storage devices and systems
US11347443B2 (en) Multi-tier storage using multiple file sets
US8838875B2 (en) Systems, methods and computer program products for operating a data processing system in which a file delete command is sent to an external storage device for invalidating data thereon
US20070005904A1 (en) Read ahead method for data retrieval and computer system
CN106445405B (en) Data access method and device for flash memory storage
US20160224588A1 (en) Data integrity and loss resistance in high performance and high capacity storage deduplication
CN108733306B (en) File merging method and device
US9727479B1 (en) Compressing portions of a buffer cache using an LRU queue
KR20090032821A (en) Method for prefetching of hard disk drive, recording medium and apparatus therefor
CN108628542B (en) File merging method and controller
CN103150395B (en) Directory path analysis method of solid state drive (SSD)-based file system
US11221999B2 (en) Database key compression
US20130346724A1 (en) Sequential block allocation in a memory
US20220129420A1 (en) Method for facilitating recovery from crash of solid-state storage device, method of data synchronization, computer system, and solid-state storage device
JP2019028954A (en) Storage control apparatus, program, and deduplication method
US11630595B2 (en) Methods and systems of efficiently storing data
CN113835614A (en) SSD intelligent caching method and system based on distributed file storage client
CN111610936B (en) Object storage platform, object aggregation method and device and server
US8151053B2 (en) Hierarchical storage control apparatus, hierarchical storage control system, hierarchical storage control method, and program for controlling storage apparatus having hierarchical structure
CN116414304B (en) Data storage device and storage control method based on log structured merging tree
US20130046736A1 (en) Recovering method and device for linux using fat file system
US20240020019A1 (en) Resumable transfer of virtual disks

Legal Events

Date Code Title Description
AS Assignment

Owner name: SANDISK IL LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOSEK, AMIR;LEHR, AMIR;DUZLY, YACOV;AND OTHERS;REEL/FRAME:019388/0171

Effective date: 20070516

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION