TW202401232A - Storage system and method of operating storage system


Info

Publication number
TW202401232A
TW202401232A (application TW112118965A)
Authority
TW
Taiwan
Prior art keywords
data
storage device
raid
buffer
circuit
Prior art date
Application number
TW112118965A
Other languages
Chinese (zh)
Inventor
张通
朴熙權
瑞克哈 皮茲馬尼
亮奭 奇
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of TW202401232A


Classifications

    • G06F3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G06F12/0828 Cache consistency protocols using directory methods with concurrent directory accessing, i.e. handling multiple concurrent coherency transactions
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F12/0815 Cache consistency protocols
    • G06F13/1668 Details of memory controller
    • G06F13/4221 Bus transfer protocol on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G06F3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G06F2212/262 Storage comprising a plurality of storage devices configured as RAID
    • G06F2212/621 Coherency control relating to peripheral accessing, e.g. from DMA or I/O device
    • G06F2212/7203 Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system is disclosed. A first storage device may support a cache coherent interconnect protocol, the cache coherent interconnect protocol including a block level protocol and a byte level protocol. A second storage device may also support the cache coherent interconnect protocol. A redundant array of independent disks (RAID) circuit may communicate with the first storage device and the second storage device. The RAID circuit may apply a RAID level to the first storage device and the second storage device. The RAID circuit may be configured to receive a request using the byte level protocol and to access data on the first storage device.

Description

Systems and methods for a redundant array of independent disks (RAID) using a RAID circuit with cache coherent interconnect storage devices

This disclosure relates generally to storage, and more specifically to supporting a redundant array of independent disks (RAID) using storage devices that support a cache coherent interconnect protocol.

A redundant array of independent disks (RAID) may present a set of two or more storage devices as a single storage device. A RAID configuration may support striping (using the storage space of two or more storage devices as though they were a single storage device), parity (providing a mechanism to double-check that data is correct), or both striping and parity. To obtain the benefits of RAID, however, data accesses should go through the RAID controller (implemented in hardware or software). Bypassing the RAID controller may result in inaccurate data or data corruption.

There remains a need for an improved way to access data stored in a RAID configuration.

Embodiments addressing the above problems include a system comprising: a first storage device supporting a cache coherent interconnect protocol, the cache coherent interconnect protocol including a block level protocol and a byte level protocol; a second storage device supporting the cache coherent interconnect protocol; and a redundant array of independent disks (RAID) circuit in communication with the first storage device and the second storage device, the RAID circuit applying a RAID level to the first storage device and the second storage device, the RAID circuit being configured to receive a request using the byte level protocol and to access data on the first storage device.

Embodiments addressing the above problems include a method comprising: receiving a load request at a redundant array of independent disks (RAID) circuit, the load request including a byte address; locating data in a buffer of the RAID circuit based at least in part on the byte address; and returning the data from the RAID circuit.

Embodiments addressing the above problems include a method comprising: receiving a store request at a redundant array of independent disks (RAID) circuit, the store request including a byte address and first data; updating second data in a buffer of the RAID circuit, based at least in part on the byte address and the first data, to generate updated second data; and returning a result from the RAID circuit.
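The two methods just described amount to a byte-addressed load/store interface on the RAID circuit. The following minimal Python sketch shows the shape of that interface; all class, method, and field names are illustrative rather than taken from the disclosure, and stripes are faked as in-memory byte arrays:

```python
class RAIDCircuit:
    """Hypothetical byte-addressed interface matching the two claimed methods."""

    def __init__(self, stripe_size: int = 4096):
        self.stripe_size = stripe_size
        self.buffer = {}  # stripe id -> bytearray (the RAID circuit's buffer)

    def _stripe(self, byte_address: int) -> bytearray:
        # Locate (or fault in) the stripe covering this byte address.
        sid = byte_address // self.stripe_size
        return self.buffer.setdefault(sid, bytearray(self.stripe_size))

    def load(self, byte_address: int, length: int = 1) -> bytes:
        """Locate data in the buffer using the byte address and return it."""
        off = byte_address % self.stripe_size
        return bytes(self._stripe(byte_address)[off:off + length])

    def store(self, byte_address: int, first_data: bytes) -> bool:
        """Update the buffered (second) data with the first data at the
        byte address, then return a result."""
        off = byte_address % self.stripe_size
        self._stripe(byte_address)[off:off + len(first_data)] = first_data
        return True
```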

Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth to enable a thorough understanding of the present disclosure. It should be understood, however, that persons having ordinary skill in the art may practice the present disclosure without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It should be understood that although the terms "first", "second", and so on may be used herein to describe various elements, these elements should not be limited by those terms. Those terms are used only to distinguish one element from another. For example, a first module could be termed a second module, and, similarly, a second module could be termed a first module, without departing from the scope of the present disclosure.

The terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in the description of the disclosure and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The components and features shown in the drawings are not necessarily drawn to scale.

Storage devices that support cache coherent interconnect protocols are becoming increasingly common. Such storage devices allow data to be accessed using different protocols with different levels of granularity. Data may be accessed in units of blocks, as with other storage devices, or in units of bytes, as with a memory device.

A redundant array of independent disks (RAID) enables two or more disks to appear as one larger disk. Different RAID levels may provide increased storage relative to the individual devices, provide redundancy to protect against data loss when a storage device fails, or provide both.

RAID technology may not support byte level access to data. If an application were to access data on a storage device using a byte level protocol, that access might bypass the RAID technology. Changing even one bit on one storage device in this manner might render the data in the RAID array unreadable, so that the data stored on the RAID array could become unavailable. For example, a parity check of the data in the array might no longer be able to detect and/or correct errors in the data. (While parity data may be able to recover the data when a single bit has changed, if enough bits change, error detection and correction may become impossible. And if no parity data is available, even a single bit error may corrupt the data.)
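To make the parity point concrete, the following sketch uses plain XOR parity over two data buffers (one possible parity scheme; the disclosure does not prescribe one). It shows both why a wholly lost device is recoverable and why a byte changed behind the controller's back produces a mismatch that can be detected but not localized:

```python
# Minimal XOR-parity illustration (RAID 5 style, one parity chunk).
d0 = bytes([1, 2, 3, 4])
d1 = bytes([9, 8, 7, 6])
parity = bytes(a ^ b for a, b in zip(d0, d1))

# Losing d0 entirely is recoverable from d1 and the parity:
recovered = bytes(a ^ b for a, b in zip(d1, parity))
assert recovered == d0

# But a write that bypasses the RAID controller leaves stale parity:
d0_corrupted = bytes([1, 2, 3, 5])           # one byte changed out of band
check = bytes(a ^ b for a, b in zip(d0_corrupted, d1))
assert check != parity                       # mismatch detected, but which
                                             # device is wrong is ambiguous
```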

Embodiments of the present disclosure use a RAID engine to address these problems. The RAID engine combines the address ranges of the individual storage devices so that they appear as a single address range. The RAID engine may include a buffer, which may be volatile storage, battery-backed volatile storage, or non-volatile storage. Upon receiving a load request for a particular address, the RAID engine may read the appropriate data from the individual storage devices in the RAID into the buffer and then return the data for that address. Upon receiving a store request for a particular address, the RAID engine may load the data from the storage devices into the buffer, apply the store request to the data in the buffer, and later perform writes to the individual storage devices to commit the update. In this manner the RAID engine may account for any parity, encryption, compression, and so on applied to the data, ensuring that the integrity of the RAID configuration is maintained. The RAID engine may be used in a similar manner to handle read requests and write requests that use the block level protocol.
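A minimal sketch of this buffered read-modify-write flow follows, assuming a RAID level 0 layout and byte arrays standing in for storage devices 120; the names and the layout details are illustrative only:

```python
class BufferedRAIDEngine:
    """Sketch of the buffered load/store flow, with bytearrays as devices."""

    def __init__(self, devices, stripe_size=4096):
        self.devices = devices            # list of bytearray "storage devices"
        self.stripe_size = stripe_size
        self.buffer = {}                  # stripe id -> bytearray
        self.dirty = set()

    def _fill(self, sid):
        if sid not in self.buffer:
            # Read the stripe's chunk from each device (RAID 0 layout here).
            chunk = self.stripe_size // len(self.devices)
            data = bytearray()
            for dev in self.devices:
                data += dev[sid * chunk:(sid + 1) * chunk]
            self.buffer[sid] = data
        return self.buffer[sid]

    def load(self, addr, length=1):
        stripe = self._fill(addr // self.stripe_size)
        off = addr % self.stripe_size
        return bytes(stripe[off:off + length])

    def store(self, addr, data):
        sid = addr // self.stripe_size
        stripe = self._fill(sid)              # read-modify-write in the buffer
        off = addr % self.stripe_size
        stripe[off:off + len(data)] = data
        self.dirty.add(sid)                   # host can continue immediately

    def flush(self):
        # Commit dirty stripes back to the member devices when convenient.
        chunk = self.stripe_size // len(self.devices)
        for sid in sorted(self.dirty):
            stripe = self.buffer[sid]
            for i, dev in enumerate(self.devices):
                dev[sid * chunk:(sid + 1) * chunk] = stripe[i * chunk:(i + 1) * chunk]
        self.dirty.clear()

devs = [bytearray(1 << 20) for _ in range(2)]     # two fake 1 MiB devices
eng = BufferedRAIDEngine(devs)
eng.store(5000, b"hello")                         # lands in the buffer
assert eng.load(5000, 5) == b"hello"
eng.flush()                                       # committed to the devices
```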

The RAID engine may support any desired RAID level.

In addition, by storing data in the buffer when data is written to the RAID storage devices, the RAID engine may be able to respond to the host more quickly and handle the writes to the individual storage devices later. The RAID engine may thus allow the host to continue with its processing and may carry out the actual writes to the storage devices at a convenient time (for example, when the load is low).

Another advantage of using the RAID engine is that, because the data may be distributed across multiple storage devices, store operations may be performed on the multiple storage devices in parallel.
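As an illustration of that parallelism, a flush of one stripe might submit each device's chunk concurrently. This sketch uses Python threads and file-like objects purely as stand-ins for the devices; none of the names come from the disclosure:

```python
import io
from concurrent.futures import ThreadPoolExecutor

def flush_stripe_parallel(devices, chunks):
    """Write each device's chunk of a stripe concurrently (sketch).
    `devices` are file-like objects; `chunks` hold the per-device data."""
    def write_one(dev, chunk):
        dev.write(chunk)

    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [pool.submit(write_one, d, c) for d, c in zip(devices, chunks)]
        for f in futures:
            f.result()    # propagate any write error

devs = [io.BytesIO() for _ in range(3)]
flush_stripe_parallel(devs, [b"a" * 4096, b"b" * 4096, b"p" * 4096])
```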

FIG. 1 shows a machine including cache coherent interconnect storage devices that may be configured in a redundant array of independent disks (RAID), according to embodiments of the disclosure. In FIG. 1, machine 105 (which may also be termed a host or a system) may include processor 110, memory 115, and storage devices 120-1 and 120-2 (which may be referred to collectively as storage devices 120). Processor 110 may be any variety of processor. (Processor 110, along with the other components discussed below, is shown outside the machine for ease of illustration: embodiments of the disclosure may include these components within the machine.) While FIG. 1 shows a single processor 110, machine 105 may include any number of processors, each of which may be a single core or multi-core processor, each of which may implement a Reduced Instruction Set Computer (RISC) architecture or a Complex Instruction Set Computer (CISC) architecture (among other possibilities), and which may be mixed in any desired combination.

Processor 110 may be coupled to memory 115. Memory 115 may be any variety of memory, such as flash memory, Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Persistent Random Access Memory, Ferroelectric Random Access Memory (FRAM), or Non-Volatile Random Access Memory (NVRAM), such as Magnetoresistive Random Access Memory (MRAM), among others. Memory 115 may be a volatile or non-volatile memory, as desired. Memory 115 may also be any desired combination of different memory types, and may be managed by memory controller 125. Memory 115 may be used to store data that may be termed "short-term": that is, data not expected to be stored for extended periods of time. Examples of short-term data may include temporary files, data being used locally by applications (which may have been copied from other storage locations), and the like.

In some embodiments of the disclosure, machine 105 may include a persistent memory device (not shown in FIG. 1). Such a persistent memory device may be used in place of, or in addition to, memory 115.

Processor 110 and memory 115 may also support an operating system under which various applications may be running. These applications may issue requests (which may also be termed commands) to read data from or write data to memory 115. When storage devices 120-1 and 120-2 (which may be referred to collectively as storage devices 120) are used to support applications reading or writing data via some sort of file system, storage devices 120 may be accessed using device driver 130. While FIG. 1 shows two storage devices 120-1 and 120-2, there may be any number of storage devices in machine 105 (although a RAID configuration is generally not used when there is only one storage device 120). Storage devices 120 may each support any desired protocol or protocols, including, for example, the Non-Volatile Memory Express (NVMe) protocol. Different storage devices 120 may support different protocols and/or interfaces. In particular, storage devices 120 may support a cache coherent interconnect protocol, which may support both block level (or any other higher level of granularity) access and byte level (or any other lower level of granularity) access to data on storage devices 120. An example of such a cache coherent interconnect protocol is the Compute Express Link (CXL) protocol, which supports accessing data in units of blocks using the cxl.io protocol and accessing data in units of bytes using the cxl.mem protocol.

FIG. 1 uses the generic term "storage device", and embodiments of the disclosure may include any storage device format that may support the cache coherent interconnect protocol, examples of which may include hard disk drives and Solid State Drives (SSDs). Any reference below to "SSD", "hard disk drive", or "storage device" should be understood to include such other embodiments of the disclosure. In addition, different types of storage devices may be mixed. For example, one storage device 120 might be a hard disk drive, and another storage device 120 might be an SSD.

As discussed above, storage devices 120 may be configured in a RAID. When a RAID is used, the underlying hardware (storage devices 120) may be hidden from processor 110. Instead, the RAID configuration presents a "virtual" device that looks like a single storage device but includes the storage on the various storage devices 120 included in the RAID configuration. (Some storage devices 120 may be part of the RAID while other storage devices 120 are not: storage devices 120 not included in the RAID may be accessed using device driver 130 and conventional access techniques.)

There are many different ways to use a RAID. These different ways are termed "levels", with different numbers identifying different ways the storage devices 120 are used. RAID level 0 (also termed a stripe set or striped volume) does not actually provide any redundancy. Instead, the data is written across the storage devices 120. In embodiments of the disclosure with two storage devices, half of the data is written to storage device 120-1 and half of the data is written to storage device 120-2. The total available storage in RAID level 0 is typically the size of the smallest storage device multiplied by the number of storage devices in the RAID: because of how RAID level 0 operates, mixing storage devices of different sizes may result in some storage being inaccessible. RAID level 0 improves performance: because each storage device 120 writes and reads only a portion of the data, each storage device 120 may access its portion of the data in parallel with the other storage devices 120 in the RAID, resulting in faster reads and/or writes. But because the data is spread across multiple storage devices, the failure of any individual storage device in the system may result in the loss of all the stored data.

RAID level 1 (also termed mirroring) stores the same data on multiple storage devices. That is, the data stored on storage device 120-1 is also stored on storage device 120-2, and vice versa. The total available space in RAID level 1 is typically the size of the smallest storage device: including additional storage devices in the RAID does not increase the available storage capacity (since the other storage devices are mirrors). Read performance may be improved, since data may be read from the various storage devices in the RAID. But because a write request results in all of the data being written to every storage device, write performance may not change. Redundancy is provided by keeping two (or more) copies of the data: if any individual storage device 120 fails, the data may be accessed from any other storage device 120 in the RAID.

RAID level 5 provides block level striping with distributed parity. That is, when data is to be written to a stripe across storage devices 120, the data is written to all but one of the storage devices 120: the last storage device 120 stores parity data based on the data stored for that stripe on the other storage devices. No individual storage device 120 is dedicated to storing the parity data: the parity data may rotate among all the storage devices 120. (Dedicating one storage device 120 to storing the parity data is RAID level 4.) RAID level 5 uses at least three storage devices 120: the total available storage is typically the size of the smallest storage device multiplied by the total number of storage devices in the RAID minus one. So, for example, if the RAID includes three 500 gigabyte storage devices, the total available storage is approximately (3 - 1) x 500 gigabytes = 1000 gigabytes = 1 terabyte. Read performance may be improved, since data may be read from the various storage devices in the RAID (other than the storage device storing the parity information for the stripe). Write performance may also be improved, since the data may be written across the storage devices 120. However, because the parity information may be generated from the data written to the other storage devices 120, some additional time may be needed to generate the parity information. If one storage device fails, the lost data may be recomputed using the data from the other storage devices (since the parity information may be used to rebuild the data from the failed storage device).

RAID level 6 is similar to RAID level 5, except that the parity information is stored on two storage devices in the RAID. As a result, RAID level 6 uses at least four storage devices, but RAID level 6 may survive the failure of up to two storage devices 120 in the RAID. The total available storage is typically the size of the smallest storage device multiplied by the number of storage devices in the RAID minus two. So, for example, if the RAID includes four 500 gigabyte storage devices, the total available storage is approximately (4 - 2) x 500 gigabytes = 1000 gigabytes = 1 terabyte. Read and write performance may be similar to RAID level 5.

RAID level 0, RAID level 1, RAID level 5, and RAID level 6 are the more common RAID levels, but other RAID levels also exist. In addition, storage devices 120 may be configured in combinations of these levels. For example, RAID level 10 is a stripe of mirrors. That is, sets of storage devices may be arranged in RAID level 1 configurations, with a RAID level 0 stripe spanning those mirrored sets. RAID level 10 offers the benefits of both mirroring and striping, at the cost of additional storage devices: for example, RAID level 10 uses at least four storage devices, with a total capacity of only the size of the smallest storage device multiplied by the number of mirrored sets used in the stripe.
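The capacity rules quoted above for these levels can be restated in a few lines. This sketch merely encodes the "smallest device times N (minus parity devices)" approximations from the preceding paragraphs; it is illustrative and not from the disclosure:

```python
def raid_capacity(level, sizes_gb):
    """Approximate usable capacity (GB) for the RAID levels discussed above."""
    n, smallest = len(sizes_gb), min(sizes_gb)
    if level == 0:
        return smallest * n          # striping, no redundancy
    if level == 1:
        return smallest              # mirroring
    if level == 5:
        return smallest * (n - 1)    # one device's worth of parity
    if level == 6:
        return smallest * (n - 2)    # two devices' worth of parity
    raise ValueError("level not covered here")

assert raid_capacity(5, [500, 500, 500]) == 1000        # ~1 terabyte
assert raid_capacity(6, [500, 500, 500, 500]) == 1000   # ~1 terabyte
```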

For purposes of this disclosure, the term "parity information" should be understood to include any form of information redundancy in a RAID, whether as parity information (as may be used, for example, in RAID level 5 or RAID level 6) or as mirrored data (as may be used, for example, in RAID level 1).

Some RAID implementations may offer the ability to add additional storage devices to the RAID. But whether an additional storage device increases the total available storage may depend on the RAID level being used. For example, adding another storage device to RAID level 0 may increase the storage capacity (while still providing no redundancy), whereas adding another storage device to RAID level 1 may increase the redundancy without increasing the total storage capacity. Adding an additional storage device to RAID level 5 or RAID level 6 may increase the total available storage to some extent. Whether a new storage device may be added to the RAID without taking the RAID offline may depend on the RAID implementation.

RAID engine 135 may manage the use of storage devices 120 to configure and use the RAID. That is, RAID engine 135 may receive requests to access data on storage devices 120, determine where the data is actually stored on storage devices 120, read or write the data, handle error correction (for example, using the parity information), and so on. RAID engine 135 may be implemented as hardware, and may also be termed a RAID circuit.

RAID engine 135 may be initialized using fabric manager 140. For example, fabric manager 140 may determine the RAID level to be used, identify the available storage devices 120, and carry out the other operations involved in initializing the RAID for machine 105. Fabric manager 140 is discussed further below with reference to FIG. 5 and FIGS. 9-11.

In some embodiments of the disclosure, RAID engine 135 may be connected directly to, or communicate directly with, storage devices 120. But in some embodiments of the disclosure, switch 145 may act as an intermediary between processor 110, RAID engine 135, fabric manager 140, and/or storage devices 120. Switch 145 may be responsible for ensuring that data intended to travel from one component of machine 105 to another is routed correctly to reach its intended destination. Because switch 145 may communicate with cache coherent interconnect protocol storage devices 120, switch 145 may be a cache coherent interconnect protocol switch. Further, because switch 145 may communicate with RAID engine 135, switch 145 may be a RAID-aware switch.

While FIG. 1 shows RAID engine 135 and fabric manager 140 as being external to switch 145, in some embodiments of the disclosure switch 145 may include either or both of RAID engine 135 and fabric manager 140.

A circuit board is not shown in FIG. 1. Such a circuit board (which may be a motherboard, backplane, or midplane) may include slots in which processor 110, memory 115, storage devices 120, RAID engine 135, fabric manager 140, and/or switch 145 may be installed. Note that, depending on the implementation, one or more of these components may be mounted directly on the circuit board rather than installed in slots. In addition, embodiments of the disclosure may include multiple interconnected circuit boards, with these components installed across them.

FIG. 2 shows details of the machine of FIG. 1, according to embodiments of the disclosure. In FIG. 2, machine 105 typically includes one or more processors 110, which may include memory controller 125 and clock 205, which may be used to coordinate the operations of the components of the machine. Processors 110 may also be coupled to memory 115, which may include random access memory (RAM), read-only memory (ROM), or other state preserving media, as examples. Processors 110 may also be coupled to storage devices 120 and to network connector 210, which may be, for example, an Ethernet connector or a wireless connector. Processors 110 may also be connected to bus 215, to which may be attached user interface 220 and input/output (I/O) interface ports that may be managed using I/O engine 225, among other components.

FIG. 3 shows the use of the RAID of FIG. 1, according to embodiments of the disclosure. In FIG. 3, processor 110 may be executing application 305. (In some embodiments of the disclosure, processor 110 may be executing multiple threads, which may come from multiple applications, the operating system, or other sources.) Application 305 may send requests to access data on storage devices 120, such as requests 310-1 and 310-2 (which may be referred to collectively as requests 310). (In this disclosure, the term "access" is intended to describe reading data from storage devices 120, writing data to storage devices 120, or performing any other operation involving storage devices 120, particularly operations involving the data stored on storage devices 120.) Requests 310 may be received at switch 145, which may forward the requests to RAID engine 135. RAID engine 135 may determine exactly how to access the data on storage devices 120 and may issue the commands appropriate to the RAID level that has been implemented.

In some embodiments of the disclosure, machine 105 of FIG. 1 may include computational modules, such as accelerators designed to perform various functions. If such a computational module is included in machine 105, then in some embodiments of the disclosure the computational module may also communicate with RAID engine 135 to access data on storage devices 120, so that the computational module does not need to know how the data is stored on storage devices 120 (and, in particular, does not need to know how to access data from storage devices 120 based on the RAID level being implemented).

As discussed above, storage devices 120 may support a cache coherent interconnect protocol, which may support both block level and byte level access to data on storage devices 120. That is, an application may read data from storage device 120 by issuing a read request to read a block (a sector, or some other equivalent concept) from storage device 120, or by issuing a load request to read a particular byte (or set of bytes) starting at a particular byte address on storage device 120. Similarly, an application may write data to storage device 120 by issuing a write request to write a block to storage device 120, or by issuing a store request to write data to a particular byte (or set of bytes) starting at a particular address on storage device 120. In this disclosure, read and write requests may be handled as block level requests (for example, accessing data from a file on storage device 120), and load and store requests may be handled as byte level requests (for example, accessing bytes as from memory 115 of FIG. 1). FIG. 4 shows these different forms of access.
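The four request types can be pictured as two method pairs on a single device. In the following sketch (the class name, method names, and 4096-byte block size are assumptions for illustration), read_block/write_block stand in for the block level protocol and load/store for the byte level protocol, over one shared medium:

```python
class CacheCoherentStorage:
    """Sketch of the two access granularities a CXL-style device exposes.
    One physical interface; two logical protocols."""

    BLOCK = 4096

    def __init__(self, num_blocks: int = 1024):
        self.media = bytearray(self.BLOCK * num_blocks)

    # Block level protocol (cxl.io style): whole blocks at a time.
    def read_block(self, lba: int) -> bytes:
        return bytes(self.media[lba * self.BLOCK:(lba + 1) * self.BLOCK])

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == self.BLOCK
        self.media[lba * self.BLOCK:(lba + 1) * self.BLOCK] = data

    # Byte level protocol (cxl.mem style): individual byte addresses.
    def load(self, byte_address: int) -> int:
        return self.media[byte_address]

    def store(self, byte_address: int, value: int) -> None:
        self.media[byte_address] = value
```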

In FIG. 4, block level requests may be carried out using block level protocol 405-1, and byte level requests may be carried out using byte level protocol 405-2. Block level protocol 405-1 and byte level protocol 405-2 may be referred to collectively as protocols 405. Note that FIG. 4 is not intended to suggest that there are different physical ports or physical interfaces on storage device 120 for these various requests: all requests may cross the same physical connection, physical interface, and/or physical port of storage device 120. FIG. 4 is intended only to show that storage device 120 may process these requests differently depending on the protocol 405 being used.

Storage device 120 may offer storage 410 overall. Storage 410 may be divided into blocks. For example, storage device 120 may include blocks 415-1 through 415-4 (which may be referred to collectively as blocks 415). In some embodiments of the disclosure, blocks 415 may be divided into pages, where a page is the smallest unit of data that may be accessed using block level protocol 405-1. But when data is erased from storage device 120, the smallest unit of data that may be erased may be the block. Thus, because the block may be the smallest unit of storage against which all possible commands may be issued, it is convenient to discuss accessing data from storage device 120 in terms of blocks. Depending on how storage device 120 operates, embodiments of the disclosure may include storage 410 divided into units other than blocks (for example, pages).

Application 305 of FIG. 3 may issue read request or write request 310-1 of FIG. 3 to read or write a block (or a page, which identifies the block to be accessed) of data at a time. Thus, for example, FIG. 4 shows block level protocol 405-1 being used to access data from block 415-3. Blocks 415 may collectively represent the available storage of storage device 120.

But because storage device 120 may support the cache coherent interconnect protocol, byte level protocol 405-2 may also be used to access individual bytes of data on storage device 120. Storage device 120 may expose an address range, which may appear to application 305 of FIG. 3 simply as an extension of the memory exposed by memory 115 of FIG. 1. Application 305 of FIG. 3 may therefore issue commands to load or store particular bytes of data within storage device 120. For example, FIG. 4 shows byte level protocol 405-2 being used to access byte 420 within block 415-4.

As discussed above, a block or page is typically the smallest unit of data that may be accessed within storage device 120. To support access to individual bytes using byte level protocol 405-2, storage device 120 may include another form of memory (for example, DRAM). Upon receiving a request using byte level protocol 405-2, storage device 120 may check whether the requested address has already been loaded somewhere into this memory. If the requested address has not been loaded into this memory, storage device 120 may read the block from the storage medium and store the data in this memory. (This memory may be smaller than the total available storage of storage device 120: if needed, storage device 120 may also flush data from this memory back to the storage medium to make room for data to be loaded into this memory.) Storage device 120 may then access, from this memory, the particular byte address identified in the request using byte level protocol 405-2.
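Extending the earlier sketch, the check-then-fetch behavior described above might look like the following; the cache capacity and the eviction choice here are arbitrary illustrations, not details from the disclosure:

```python
class CachedStorage(CacheCoherentStorage):
    """Adds the internal DRAM-style cache described above to the earlier
    sketch; capacity and victim selection are illustrative only."""

    def __init__(self, num_blocks=1024, cached_blocks=8):
        super().__init__(num_blocks)
        self.dram = {}                     # block id -> bytearray
        self.cached_blocks = cached_blocks

    def _cached(self, block_id: int) -> bytearray:
        if block_id not in self.dram:      # requested address not loaded yet
            if len(self.dram) >= self.cached_blocks:
                victim, data = self.dram.popitem()      # flush to make room
                self.write_block(victim, bytes(data))
            self.dram[block_id] = bytearray(self.read_block(block_id))
        return self.dram[block_id]

    def load(self, byte_address: int) -> int:
        block = self._cached(byte_address // self.BLOCK)
        return block[byte_address % self.BLOCK]

    def store(self, byte_address: int, value: int) -> None:
        block = self._cached(byte_address // self.BLOCK)
        block[byte_address % self.BLOCK] = value        # committed on eviction
```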

This other form of memory may be volatile or non-volatile storage, and may be persistent or not, depending on the embodiments of the disclosure. If volatile memory is used, storage device 120 may need to include some backup power source to protect against power interruptions, or storage device 120 may need to ensure that store commands are committed to the non-volatile storage medium immediately.

A typical RAID implementation processes requests to read data from and write data to the storage devices; a typical RAID implementation does not process memory requests. Any request using byte level protocol 405-2 might therefore fail to result in correct data processing. A load request might return the wrong data (for example, if the byte address attached to the load request actually stores parity information for a RAID level 5 configuration, the data returned might not actually be data written by application 305 of FIG. 3); a store request might actually introduce errors into the data stored on storage devices 120 of FIG. 1. With a RAID level 1 configuration, it might not be possible to determine which storage device 120 stores the correct data, only to detect that some error exists. Even the use of parity information in a RAID level 5 or RAID level 6 configuration might not protect against the problem: if enough such requests were issued, RAID engine 135 of FIG. 1 might be unable to determine the correct data. Thus, when storage devices 120 supporting both block level protocol 405-1 and byte level protocol 405-2 are used, RAID engine 135 may need to intercept requests that appear to access memory but actually access storage devices 120 of FIG. 1.

To be able to intercept memory requests that actually access data on storage devices 120 of FIG. 1, RAID engine 135 of FIG. 1 may establish an address range representing the available storage on storage devices 120 of FIG. 1. This RAID address range, rather than the individual address ranges of storage devices 120 of FIG. 1, may then be registered with the system. In this manner, RAID engine 135 of FIG. 1 may hide the address ranges of storage devices 120 of FIG. 1. FIG. 5 shows how this may be done.

FIG. 5 shows how a RAID address range may map to the individual address ranges of storage devices 120 of FIG. 1, according to embodiments of the disclosure. During RAID initialization, fabric manager 140 of FIG. 1 may determine address ranges 505-1 and 505-2 (which may be referred to collectively as address ranges 505) exposed by each storage device 120 of FIG. 1. Fabric manager 140 of FIG. 1 may then generate RAID address range 510, which may be large enough to encompass the address ranges of the individual storage devices. Any address in address range 510 may have a corresponding address in one of address ranges 505, and vice versa. Fabric manager 140 of FIG. 1 may then register RAID address range 510 with the system (for example, by adding RAID address range 510 to the memory map or system memory of machine 105 of FIG. 1). If address ranges 505 of storage devices 120 of FIG. 1 were separately registered with the system, fabric manager 140 of FIG. 1 may also deregister address ranges 505 from the system, so that application 305 of FIG. 3 does not attempt to use byte level protocol 405-2 of FIG. 4 to access byte addresses in storage devices 120 of FIG. 1 directly.
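A sketch of the concatenated mapping of FIG. 5 follows: per-device ranges are laid end to end into one RAID address range, and a RAID address is translated back to a (device, device-local address) pair. All function names are illustrative:

```python
def build_raid_map(device_range_sizes):
    """Concatenate per-device address ranges into one RAID address range
    (the layout suggested by FIG. 5; interleaving is an alternative)."""
    mapping, base = [], 0
    for dev_id, size in enumerate(device_range_sizes):
        mapping.append((base, base + size, dev_id))   # (raid_start, raid_end, device)
        base += size
    return mapping, base                              # mapping, total range size

def raid_to_device(mapping, raid_addr):
    for start, end, dev_id in mapping:
        if start <= raid_addr < end:
            return dev_id, raid_addr - start          # device-local address
    raise ValueError("address outside RAID address range")

m, total = build_raid_map([1 << 30, 1 << 30])         # two 1 GiB devices
assert raid_to_device(m, (1 << 30) + 5) == (1, 5)
```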

While FIG. 5 suggests that RAID address range 510 includes contiguous blocks of addresses corresponding to the individual address ranges 505, embodiments of the disclosure may include other ways to manage how addresses in RAID address range 510 correspond to addresses in address ranges 505. For example, the addresses in address ranges 505 may be interleaved within RAID address range 510.
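An interleaved alternative might rotate fixed-size chunks of the RAID address range across the devices, for example (the chunk size here is an assumption):

```python
def raid_to_device_interleaved(raid_addr, num_devices, chunk=4096):
    """Alternative interleaved mapping: consecutive chunks of the RAID
    address range rotate across the devices."""
    chunk_id, off = divmod(raid_addr, chunk)
    dev_id = chunk_id % num_devices
    dev_addr = (chunk_id // num_devices) * chunk + off
    return dev_id, dev_addr

assert raid_to_device_interleaved(0, 2) == (0, 0)
assert raid_to_device_interleaved(4096, 2) == (1, 0)
assert raid_to_device_interleaved(8192, 2) == (0, 4096)
```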

FIG. 6 shows details of the buffer in RAID engine 135 of FIG. 1, according to embodiments of the disclosure. In FIG. 6, RAID engine 135 may include various components. RAID engine 135 is shown as including read circuit 605, write circuit 610, and buffer 615. Read circuit 605 may read data from storage devices 120 using block level protocol 405-1 of FIG. 4. Similarly, write circuit 610 may write data to storage devices 120 using block level protocol 405-1 of FIG. 4.

When application 305 of FIG. 3 issues a load or store request using byte level protocol 405-2 of FIG. 4, buffer 615 may be used to manage access to the data. Based on the byte address falling within RAID address range 510 of FIG. 5, RAID engine 135 may determine that the request accesses data on one of storage devices 120. RAID engine 135 may load the data into buffer 615. In essence, buffer 615 is equivalent to the memory a storage device uses to handle requests using byte level protocol 405-2 of FIG. 4, but used by RAID engine 135. Buffer 615 may be volatile or non-volatile memory, and may be persistent or not. As with the memory storage device 120 uses to handle requests using byte level protocol 405-2 of FIG. 4, if buffer 615 is not non-volatile or persistent memory, the RAID engine may include a backup power source (not shown in FIG. 6), such as a battery or a capacitor, to provide enough power to flush the data in buffer 615 back to storage devices 120 in the event of a power interruption.

The buffer 615 may store data in stripes 620-1 through 620-3 (which may be referred to collectively as the stripes 620). Each stripe may include a portion of the data from each storage device. The stripes 620 may parallel the manner in which the RAID engine 135 stores data on the storage devices 120. For example, if a RAID level 0 configuration is used, the data in a stripe 620 may be read from every storage device 120 and loaded into the buffer 615; if a RAID level 1 configuration is used, the data in a stripe 620 may include the data from one of the storage devices 120 (with the data on the remaining storage devices 120 used to verify and correct any errors in the data loaded into the stripe 620); if a RAID level 5 configuration is used, the data in a stripe 620 may include the non-parity data from the storage devices 120 (with the parity information used to verify and correct any errors in the data loaded into the stripe 620); and so on.

The RAID engine 135 may identify the stripe 620 storing the requested data based on the byte address provided with the request. If that stripe 620 is not currently loaded in the buffer 615, the RAID engine 135 may read the stripe from the storage devices 120 and store the data in a stripe 620. The RAID engine 135 may then access the requested bytes from the buffer 615 and return the data to the application 305 of FIG. 3.
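The stripe lookup just described can be pictured with a short sketch: derive a stripe number from the byte address, fetch the stripe on a miss, and serve the requested bytes from the buffer. The fixed stripe size, the StripeBuffer name, and the dictionary cache are assumptions made purely for illustration.

    STRIPE_SIZE = 64 * 1024   # assumed stripe size in bytes

    class StripeBuffer:
        def __init__(self, read_stripe_from_devices):
            # read_stripe_from_devices stands in for the read circuit 605.
            self.read_stripe_from_devices = read_stripe_from_devices
            self.stripes = {}   # stripe number -> bytes of the whole stripe

        def get(self, byte_address, length):
            # Assumes the access does not cross a stripe boundary.
            stripe_no, offset = divmod(byte_address, STRIPE_SIZE)
            if stripe_no not in self.stripes:   # miss: fetch the stripe
                self.stripes[stripe_no] = self.read_stripe_from_devices(stripe_no)
            return self.stripes[stripe_no][offset:offset + length]

    buffer = StripeBuffer(lambda stripe_no: bytes(STRIPE_SIZE))  # stub read
    assert buffer.get(STRIPE_SIZE + 5, 4) == bytes(4)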

As with the memory a storage device 120 might use to service requests using the byte-level protocol 405-2 of FIG. 4, the buffer 615 may be smaller than the total available storage of the storage devices 120 in the RAID level configuration. In some embodiments of the disclosure, the buffer 615 may use any desired technique to decide which stripe 620 to evict to make room for new data. For example, the buffer 615 may use a least recently used (LRU) algorithm, a least frequently used (LFU) algorithm, or any other desired algorithm to select which stripe 620 to evict from the buffer 615 to make room for new data.
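As one example of such a policy, the sketch below uses Python's OrderedDict to realize an LRU eviction choice among buffered stripes; the capacity of two stripes and the method names are illustrative assumptions only.

    from collections import OrderedDict

    class LruStripeCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.stripes = OrderedDict()   # stripe number -> stripe data

        def touch(self, stripe_no, data):
            # Insert or refresh a stripe, evicting the least recently
            # used stripe when the buffer is over capacity.
            if stripe_no in self.stripes:
                self.stripes.move_to_end(stripe_no)
            self.stripes[stripe_no] = data
            if len(self.stripes) > self.capacity:
                victim_no, victim_data = self.stripes.popitem(last=False)
                # A real engine would write victim_data back to the
                # storage devices here if the stripe held updated data.

    cache = LruStripeCache(capacity=2)
    cache.touch(0, b"stripe-0")
    cache.touch(1, b"stripe-1")
    cache.touch(0, b"stripe-0")   # stripe 0 becomes most recently used
    cache.touch(2, b"stripe-2")   # evicts stripe 1, the LRU entry
    assert list(cache.stripes) == [0, 2]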

FIG. 6 shows the stripe 620-3 being evicted to make room for the stripe 620-1. The data in the stripe 620-3 may be divided into portions, each portion stored on a different storage device 120. For example, the stripe 620-3 may be divided into data 625-1, 625-2, and 625-3 (which may be referred to collectively as the data 625), to be written to the storage devices 120-1, 120-2, and 120-3, respectively. The data 625 may be actual data, or it may be parity information generated as appropriate from the data in the stripe 620-3. If a RAID configuration that includes mirroring (such as RAID level 1) is used, each data 625 might not be unique.

Once the data in the stripe 620-3 has been written to the storage devices 120, data (for example, the data 625) may be loaded into the stripe 620-1. In essence, the operation of evicting the stripe 620-3 and the operation of loading the stripe 620-1 are mirror images of each other: data is written or read, and parity or other redundancy information is either written out or used to verify that the data read is correct.

Note that evicting the stripe 620-3 might not need to include writing data to the storage devices 120. For example, if the data in the stripe 620-3 is already stored as a stripe on the storage devices 120, evicting the stripe 620-3 might involve only deleting the stripe 620-3 from the buffer 615. (Note that the term "delete" may be understood broadly as actually erasing the data from the stripe 620-3, marking the data in the stripe 620-3 as invalid (without actually erasing it from the stripe 620-3), or otherwise indicating that the stripe 620-3 is available to store new data, whether or not the data currently in the stripe 620-3 is actually removed from the buffer 615 in some manner.)

Although FIG. 6 shows data being loaded into the stripe 620-1 of the buffer 615 and data being evicted from the stripe 620-3 of the buffer 615, embodiments of the disclosure may evict data from, and/or load data into, any stripe 620 in the buffer 615. For example, the buffer 615 may include a table (not shown in FIG. 6) that tracks information about accesses to the stripes 620; this information may then be used to select which stripe 620 to evict and/or which stripe 620 to load data into. There is no requirement that the stripe 620-1 always be used to load data, or that the data in the stripe 620-3 always be evicted.

As discussed above, the RAID engine 135 may write parity information out to one or more of the storage devices 120. An error handler 630 may generate the appropriate parity information to be written to the storage devices 120. For example, assume that the machine 105 of FIG. 1 is configured to use a RAID level 5 configuration. This means that when a stripe (for example, the stripe 620-3) is stored on the storage devices 120, two storage devices 120 may receive actual data and the third storage device 120 may store parity information. In this manner, should the data on one storage device 120 degrade, or should one storage device 120 fail, the parity information may be used to rebuild the stripe. Thus, assume that for the stripe 620-3, data 625-1 and 625-2 are actual data and data 625-3 is parity information. The error handler 630 may generate the parity information of data 625-3 from data 625-1 and 625-2 in order to write the stripe 620-3 to the storage devices 120.
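In many RAID level 5 style implementations, the parity portion is the bitwise XOR of the data portions, so that any single lost portion can be recomputed from the others. The sketch below shows that arithmetic for data 625-1, 625-2, and 625-3; it illustrates one common choice and is not the only parity scheme the disclosure could use.

    def xor_parity(*portions):
        # Bitwise XOR across equally sized byte strings.
        result = bytearray(len(portions[0]))
        for portion in portions:
            for i, value in enumerate(portion):
                result[i] ^= value
        return bytes(result)

    # Data portions 625-1 and 625-2 produce the parity portion 625-3.
    data_625_1 = b"\x01\x02\x03\x04"
    data_625_2 = b"\x10\x20\x30\x40"
    data_625_3 = xor_parity(data_625_1, data_625_2)
    assert data_625_3 == b"\x11\x22\x33\x44"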

Although the above discussion focuses on generating parity information, embodiments of the disclosure may be extended to any desired form of error handling. For example, if the RAID level implemented using the machine 105 of FIG. 1 includes data mirroring, the error handler 630 may copy some data to a mirror drive.

Note that the error handler 630 may also perform error correction. For example, assume that at some point the data 625 is loaded into the stripe 620-1. The read circuit 605 may read the data 625, and the error handler 630 may use, for example, the parity information in data 625-3 to verify that data 625-1 and 625-2 are correct. If the error handler 630 determines that there is an error somewhere (possibly in any of the data 625), the error handler 630 may attempt to correct the error before the stripe 620-1 is considered loaded into the buffer 615. In this manner, the error handler 630 may also support correcting errors as data is accessed.
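Continuing the XOR-parity assumption from the earlier sketch, verification and correction can both be expressed with the same operation: a consistent stripe XORs to zero, and a single bad or missing portion equals the XOR of the surviving portions. The self-contained helper below illustrates both checks.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    data_625_1 = b"\x01\x02\x03\x04"
    data_625_2 = b"\x10\x20\x30\x40"
    data_625_3 = xor_bytes(data_625_1, data_625_2)   # parity portion

    # Detection: the data and parity portions of a consistent stripe
    # XOR together to all-zero bytes.
    check = xor_bytes(xor_bytes(data_625_1, data_625_2), data_625_3)
    assert not any(check)

    # Correction: a single lost portion is the XOR of the survivors.
    assert xor_bytes(data_625_2, data_625_3) == data_625_1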

FIG. 7 shows how the RAID engine 135 of FIG. 1 may handle a load request, according to embodiments of the disclosure. In FIG. 7, the application 305 may send a load request 705. The load request 705 may include a byte address 710, which specifies the byte address where the requested data is stored. The byte address 710 should be understood as representing any possible form of identifying a particular address, and any possible size of data to be loaded. Thus, for example, the byte address 710 may directly specify a particular address and indicate that one byte is to be loaded. Alternatively, the byte address 710 may include the base address of a block (or an identifier that may be mapped to the base address of the block), plus an offset relative to that base address, and the number of bytes to be loaded.
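The two addressing forms described above can be modeled with a small helper that normalizes either form to an (address, length) pair; the tuple layouts and names below are purely illustrative assumptions.

    def resolve_byte_address(base_addresses, request):
        # Form 1: ("direct", address) identifies one byte at the address.
        # Form 2: ("block", block_id, offset, count) names a block by an
        # identifier that maps to a base address, plus offset and length.
        if request[0] == "direct":
            return request[1], 1
        _, block_id, offset, count = request
        return base_addresses[block_id] + offset, count

    base_addresses = {"blk-7": 0x1000}
    assert resolve_byte_address(base_addresses, ("direct", 0x2345)) == (0x2345, 1)
    assert resolve_byte_address(base_addresses, ("block", "blk-7", 16, 8)) == (0x1010, 8)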

When the RAID engine 135 receives the load request 705, the RAID engine 135 may determine whether the stripe containing the byte address 710 is currently loaded in the buffer 615. If not, the RAID engine 135 may use the read circuit 605 to read the stripe from the storage devices 120 into the buffer 615. If the buffer 615 was previously full, the RAID engine 135 may select a stripe to evict from the buffer 615, as described above with reference to FIG. 6.

As discussed above with reference to FIG. 6, when a stripe is loaded into the buffer 615, the error handler 630 of FIG. 6 may be used to verify that the data is correct. If the data 625 includes errors when read by the read circuit 605 of FIG. 6, the error handler 630 of FIG. 6 may attempt to correct those errors before the stripe is fully loaded into the buffer 615 (or may report a read failure if the errors cannot be corrected).

Once the stripe containing the byte address 710 has been loaded into the buffer 615, the RAID engine 135 may access the data from that stripe in the buffer 615. The RAID engine may then return the data 715 to the application 305.
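Putting the load-side pieces together, a load might proceed roughly as in the following sketch, combining the miss handling, eviction, and verification steps described above. The class, the fixed stripe size, the simplistic FIFO eviction stand-in, and the stub callbacks are all assumptions for illustration.

    STRIPE_SIZE = 64 * 1024

    class RaidEngineSketch:
        def __init__(self, read_stripe, verify_stripe, capacity=4):
            self.read_stripe = read_stripe       # stands in for read circuit 605
            self.verify_stripe = verify_stripe   # stands in for error handler 630
            self.capacity = capacity
            self.buffer = {}                     # stripe number -> bytes

        def load(self, byte_address, count):
            stripe_no, offset = divmod(byte_address, STRIPE_SIZE)
            if stripe_no not in self.buffer:
                if len(self.buffer) >= self.capacity:
                    # Simplistic FIFO eviction stands in for LRU/LFU.
                    self.buffer.pop(next(iter(self.buffer)))
                stripe = self.read_stripe(stripe_no)
                if not self.verify_stripe(stripe):
                    raise IOError("uncorrectable error in stripe %d" % stripe_no)
                self.buffer[stripe_no] = stripe
            return self.buffer[stripe_no][offset:offset + count]

    engine = RaidEngineSketch(
        read_stripe=lambda stripe_no: bytes(STRIPE_SIZE),  # stub device read
        verify_stripe=lambda stripe: True)                 # stub parity check
    assert engine.load(3 * STRIPE_SIZE + 8, 2) == b"\x00\x00"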

FIG. 8 shows how the RAID engine 135 of FIG. 1 may handle a store request, according to embodiments of the disclosure. In FIG. 8, the application 305 may send a store request 805. The store request 805 may include a byte address 810, which specifies the byte address where the data is to be stored, and the data 815 to be stored. Like the byte address 710 of FIG. 7, the byte address 810 should be understood as representing any possible form of identifying a particular address, and any possible size of data to be stored. Thus, for example, the byte address 810 may directly specify a particular address and indicate that one byte is to be stored. Alternatively, the byte address 810 may include the base address of a block (or an identifier that may be mapped to the base address of the block), plus an offset relative to that base address, and the number of bytes to be stored.

When the RAID engine 135 receives the store request 805, the RAID engine 135 may determine whether the stripe containing the byte address 810 is currently loaded in the buffer 615. If not, the RAID engine 135 may use the read circuit 605 to read the stripe from the storage devices 120 into the buffer 615. If the buffer 615 was previously full, the RAID engine 135 may select a stripe to evict from the buffer 615, as described above with reference to FIG. 6.

As discussed above with reference to FIG. 6, when a stripe is loaded into the buffer 615, the error handler 630 of FIG. 6 may be used to verify that the data is correct. If the data 625 includes errors when read by the read circuit 605 of FIG. 6, the error handler 630 of FIG. 6 may attempt to correct those errors before the stripe is fully loaded into the buffer 615 (or may report a read failure if the errors cannot be corrected).

Once the stripe containing the byte address 810 has been loaded into the buffer 615, the RAID engine 135 may update the stripe in the buffer 615 with the data 815. That is, one or more of the data 625 may be updated using the data 815. For example, if the data 815 fits entirely within data 625-1, data 625-1 alone may be updated. Alternatively, if the data 815 is large enough to span multiple data 625 (for example, if one portion of the data 815 updates part of data 625-1 and another portion of the data 815 updates part of data 625-2), multiple data 625 may be updated with the data 815. In addition, if the stripe including data 625-1 includes mirrored data or parity information, data 625-2 and/or 625-3 may also be updated to account for the data 815 updating data 625-1.
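One well-known way to keep XOR parity consistent during such a partial update is the read-modify-write identity new parity = old parity XOR old data XOR new data. The sketch below demonstrates the identity; the disclosure does not mandate this particular method, and it is shown only to make the parity-update step concrete.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    old_625_1 = b"\x01\x02\x03\x04"
    old_625_2 = b"\x10\x20\x30\x40"
    parity = xor_bytes(old_625_1, old_625_2)   # data 625-3

    # Data 815 replaces the first byte of data 625-1.
    new_625_1 = b"\xaa\x02\x03\x04"
    parity = xor_bytes(xor_bytes(parity, old_625_1), new_625_1)

    # The incrementally updated parity matches a full recomputation.
    assert parity == xor_bytes(new_625_1, old_625_2)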

After the RAID engine 135 has updated the data in the stripe in the buffer 615, the RAID engine 135 may return the result 820 to the application 305.

Because the buffer 615 may now hold data in a stripe that has not yet been written to the storage devices 120, the RAID engine 135 may, at some point, commit the changes made to the stripe in the buffer 615 to the storage devices 120. Thus, at some point, the RAID engine 135 may use the write circuit 610 of FIG. 6 (and possibly the error handler 630 of FIG. 6) to actually write the data in the updated stripe to the storage devices 120.

In some embodiments of the disclosure, once the data in a stripe in the buffer 615 has been updated, the RAID engine 135 may use the write circuit 610 of FIG. 6 to write the updated stripe out of the buffer 615. In some embodiments of the disclosure, the write circuit 610 of FIG. 6 may write the data in the updated stripe to the storage devices 120 before the RAID engine 135 returns the result 820 to the application 305. In other embodiments of the disclosure, the RAID engine 135 may return the result 820 to the application 305 (so that the application 305 may continue executing) and then use the write circuit 610 of FIG. 6 to write the data in the updated stripe to the storage devices 120. In still other embodiments of the disclosure, the RAID engine 135 may delay writing the data to the storage devices 120 using the write circuit 610 of FIG. 6 until it is convenient or necessary to write the data. For example, the RAID engine 135 may wait until the utilization of the storage devices 120 has decreased, so that write commands issued to the storage devices 120 may execute with minimal (or no) impact on other processes that may be executing. Alternatively, if a stripe containing updated data is selected for eviction from the buffer 615, the RAID engine 135 may use the write circuit 610 of FIG. 6 to write the data in the updated stripe to the storage devices 120, since the write may no longer be delayed. Note that if the RAID engine 135 delays committing updates to the storage devices 120 until after the RAID engine 135 sends the result 820 back to the application 305, the buffer 615 may need backup power to prevent a power interruption from causing data loss: if power to the buffer 615 is interrupted, the RAID engine 135 may ensure that any updated data (for example, a stripe updated by the store request 805) is written to the storage devices 120 before such updates can be lost.
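These write-back policies can be modeled with a per-stripe dirty flag and an explicit flush, as in the sketch below; the names and structure are assumed for illustration. Whether flush runs before the result 820 is returned, afterward, on eviction, or on a power-loss signal is what distinguishes the embodiments described above.

    class WriteBackBuffer:
        def __init__(self, write_stripe):
            self.write_stripe = write_stripe   # stands in for write circuit 610
            self.stripes = {}                  # stripe number -> bytearray
            self.dirty = set()                 # stripes not yet committed

        def store(self, stripe_no, offset, data):
            # Update the buffered stripe and defer the device write.
            self.stripes[stripe_no][offset:offset + len(data)] = data
            self.dirty.add(stripe_no)

        def flush(self, stripe_no=None):
            # Commit one stripe (e.g. on eviction) or every dirty stripe
            # (e.g. on a power-loss signal, running from backup power).
            targets = [stripe_no] if stripe_no is not None else list(self.dirty)
            for number in targets:
                self.write_stripe(number, bytes(self.stripes[number]))
                self.dirty.discard(number)

    buffer = WriteBackBuffer(write_stripe=lambda no, data: None)  # stub write
    buffer.stripes[0] = bytearray(16)   # stripe already loaded in the buffer
    buffer.store(0, 4, b"\xff\xff")     # result 820 could be returned here
    buffer.flush()                      # commit later, per the chosen policy
    assert not buffer.dirty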

Note that all of the above discussion focuses on how the RAID engine 135 may handle byte-level protocol requests rather than block-level protocol requests. Embodiments of the disclosure effectively make block-level protocol requests and byte-level protocol requests equivalent, and the two may be handled similarly. In other words, embodiments of the disclosure effectively translate byte-level protocol requests into block-level protocol requests (which makes it possible to access the data using byte-level protocol requests without bypassing the RAID implementation), with that translation hidden from higher-level processes such as the application 305 of FIG. 3.

FIG. 9 shows a flowchart of an example procedure for loading a RAID configuration as part of the initialization of the RAID of FIG. 1, according to embodiments of the disclosure. At block 905, the fabric manager 140 of FIG. 1 may load the RAID configuration. The RAID configuration may be stored in any desired non-volatile storage. For example, the RAID configuration may be stored on one of the storage devices 120 of FIG. 1. Alternatively, the RAID configuration may be stored in local storage (for example, non-volatile storage of the fabric manager 140 of FIG. 1, non-volatile storage of the RAID engine 135 of FIG. 1, or non-volatile storage of the switch 145 of FIG. 1). Examples of such non-volatile storage may include NAND flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), or electrically erasable programmable ROM (EEPROM); other forms of non-volatile storage may also be used to store the RAID configuration.

FIG. 10 shows a flowchart of an example procedure for carrying out the initialization of the RAID of FIG. 1, according to embodiments of the disclosure. In FIG. 10, at block 1005, the fabric manager 140 of FIG. 1 may identify the storage device 120-1 of FIG. 1 for inclusion in the RAID. As discussed above, the storage device 120-1 of FIG. 1 may support a cache-coherent interconnect protocol. Similarly, at block 1010, the fabric manager 140 of FIG. 1 may identify the storage device 120-2 of FIG. 1 for inclusion in the RAID. As discussed above, the storage device 120-2 of FIG. 1 may support the cache-coherent interconnect protocol.

At block 1015, the fabric manager 140 of FIG. 1 may determine the address range 505-1 of FIG. 5 for the storage device 120-1 of FIG. 1. At block 1020, the fabric manager 140 of FIG. 1 may determine the address range 505-2 of FIG. 5 for the storage device 120-2 of FIG. 1. Finally, at block 1025, the fabric manager 140 of FIG. 1 may generate the RAID address range 510 of FIG. 5 based on the address ranges 505 of FIG. 5.

FIG. 11 shows a flowchart of an example procedure for managing the use of the RAID address range 510 of FIG. 5, according to embodiments of the disclosure. In FIG. 11, at block 1105, the fabric manager 140 of FIG. 1 may register the RAID address range 510 of FIG. 5 with the memory map, so that the machine 105 of FIG. 1 (more specifically, processes interacting with the machine 105 of FIG. 1) may "see" the RAID address range 510 of FIG. 5 as accessible for memory requests. At block 1110, if the address ranges 505-1 and/or 505-2 of FIG. 5 were registered with the memory map of the machine 105 of FIG. 1, the fabric manager 140 of FIG. 1 may deregister the address ranges 505 of FIG. 5 from the memory map of the machine 105 of FIG. 1. Note that if the address ranges 505 of FIG. 5 were not previously registered with the memory map of the machine 105 of FIG. 1, block 1110 may be omitted, as shown by the dashed line 1115.
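Blocks 1105 and 1110 amount to a register-then-deregister sequence on the system memory map, sketched below with a hypothetical MemoryMap table; the range names and sizes are invented for illustration.

    class MemoryMap:
        # Hypothetical stand-in for the host memory map of machine 105.
        def __init__(self):
            self.ranges = {}   # name -> (start, size)

        def register(self, name, start, size):
            self.ranges[name] = (start, size)

        def deregister(self, name):
            self.ranges.pop(name, None)   # no-op if never registered

    memory_map = MemoryMap()
    # Block 1105: expose the RAID address range 510 to memory requests.
    memory_map.register("raid-range-510", 0x0, 2 * (1 << 30))
    # Block 1110: hide the per-device ranges 505-1 and 505-2, if present.
    memory_map.deregister("range-505-1")
    memory_map.deregister("range-505-2")
    assert list(memory_map.ranges) == ["raid-range-510"]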

FIG. 12 shows a flowchart of an example procedure for processing the load request 705 of FIG. 7 using the RAID of FIG. 1, according to embodiments of the disclosure. In FIG. 12, at block 1205, the RAID engine 135 of FIG. 1 may receive the load request 705 of FIG. 7. As discussed above with reference to FIG. 7, the load request 705 of FIG. 7 may include the byte address 710 of FIG. 7. At block 1210, the RAID engine 135 of FIG. 1 may locate the data at the byte address 710 of FIG. 7 in the buffer 615 of FIG. 6. Finally, at block 1215, the RAID engine may return the data 715 of FIG. 7, retrieved from the buffer 615 of FIG. 6, to the process that issued the load request 705 of FIG. 7.

FIG. 13 shows a flowchart of an example procedure for locating the data requested in the load request 705 of FIG. 7, according to embodiments of the disclosure. At block 1305, the read circuit 605 of FIG. 6 may read the data 625 of FIG. 6 from the storage devices 120 of FIG. 1, and at block 1310, the RAID engine 135 of FIG. 1 may store the data 625 of FIG. 6 read from the storage devices 120 in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6. Note that blocks 1305 and 1310 may be performed more than once, depending on the data read from the storage devices 120 of FIG. 1, as shown by the dashed line 1315. At block 1320, the error handler 630 of FIG. 6 may verify that the data read by the read circuit 605 of FIG. 6 contains no errors (or, where possible, may correct any errors detected). In some situations, error handling might not be performed: for example, in a RAID level 0 configuration, there is no redundancy available with which to perform error correction. In such situations, block 1320 may be omitted, as shown by the dashed line 1325. Finally, at block 1330, the RAID engine 135 of FIG. 1 may use the byte address 710 of FIG. 7 to locate the requested data in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6.

FIG. 14 shows a flowchart of an example procedure for processing the store request 805 of FIG. 8 using the RAID of FIG. 1, according to embodiments of the disclosure. In FIG. 14, at block 1405, the RAID engine 135 of FIG. 1 may receive the store request 805 of FIG. 8. As discussed above with reference to FIG. 8, the store request 805 of FIG. 8 may include the data 815 of FIG. 8 and the byte address 810 of FIG. 8 at which the data 815 of FIG. 8 is to be stored. At block 1410, the RAID engine 135 of FIG. 1 may update the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6 based on the byte address 810 of FIG. 8 and the data 815 of FIG. 8. At block 1410, the RAID engine may also update other data in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6: for example, mirrored data and/or parity information may also be updated. Finally, at block 1415, the RAID engine 135 of FIG. 1 may return the result 820 of FIG. 8 to the process that issued the store request 805 of FIG. 8.

FIG. 15 shows a flowchart of an example procedure for handling the storage of data in the buffer 615 of FIG. 6, according to embodiments of the disclosure. In FIG. 15, at block 1505, the read circuit 605 may read the data 625 of FIG. 6 from the storage devices 120 of FIG. 1 to store the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6. If the stripe 620 of FIG. 6 containing the data to be updated has already been loaded into the buffer 615 of FIG. 6, block 1505 may be omitted, as shown by the dashed line 1510. At block 1515, the RAID engine 135 of FIG. 1 may update the data in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6 based on the byte address 810 of FIG. 8 and the data 815 of FIG. 8. At block 1520, the RAID engine 135 of FIG. 1 may protect the updated data against a power interruption. Such protection may be provided, for example, by a backup power source, to ensure that the data in the buffer 615 of FIG. 6 can be written to the storage devices 120. If the store operation 805 of FIG. 8 is not considered complete until after the updated stripe in the buffer 615 of FIG. 6 has been written to the storage devices 120 of FIG. 1, block 1520 may be omitted, as shown by the dashed line 1525. Finally, at block 1530 (which may be some time after the RAID engine 135 of FIG. 6 updates the data in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6 as described in block 1515), the write circuit 610 of FIG. 6 may write the updated data in the stripe 620 of FIG. 6 in the buffer 615 of FIG. 6 to the storage devices 120 of FIG. 1.

Some embodiments of the disclosure are shown in FIGS. 9 through 15. However, those skilled in the art will recognize that other embodiments of the disclosure are also possible, by changing the order of the blocks, by omitting blocks, or by including links not shown in the drawings. All such variations of the flowcharts are considered to be embodiments of the disclosure, whether expressly described or not.

Embodiments of the disclosure include a redundant array of independent disks (RAID) engine that may support processing byte-level protocol requests for data stored on cache-coherent interconnect protocol storage devices. The RAID engine may recognize byte-level protocol requests that access data on the storage devices in the RAID configuration. The data may be loaded into a buffer, and the particular data requested may be retrieved from the buffer and returned. Embodiments of the disclosure provide a technical advantage over configurations that are not equipped to process byte-level requests for data stored on the storage devices included in a RAID configuration.

A Compute Express Link (CXL) solid state drive (SSD) may be exposed as a block device. A user may then use the CXL block device interface to make the CXL SSD part of a redundant array of independent disks (RAID).

But if a CXL SSD is part of a RAID array, there are potential problems. First, writes to an individual SSD in the RAID array might leave the data in the RAID array unreadable. Second, if data is read from an individual SSD rather than as part of the RAID array, the data might not be checked using the parity provided by the RAID level in use.

In addition, the CXL.mem (or CXL.memory) path does not itself support RAID. Using software to map each individual CXL device in the RAID array, and performing the checks in software, is slow and involves substantial software adaptation.

To address these problems, embodiments of the disclosure may include the hardware RAID engine 135 of FIG. 1 for the CXL.mem path. With embodiments of the disclosure, software may use the address range provided by the hardware RAID engine.

Advantages of embodiments of the disclosure include a way to support RAID in both the CXL.mem path and the CXL.io path, and the ability to use RAID features to detect and recover from CXL.mem path errors. No application changes are needed, and reliability may be improved in the new architecture.

Embodiments of the disclosure support RAID configurations that use the CXL.mem path. Embodiments of the disclosure may reduce software complexity and may avoid application changes or recompilation. In addition, the reliability of applications that use CXL.mem may be improved. Finally, in some RAID configurations, performance may be improved by distributing traffic across the devices.

The following discussion is intended to provide a brief, general description of one or more suitable machines in which certain aspects of the disclosure may be implemented. The machine or machines may be controlled, at least in part, by input from conventional input devices, such as keyboards and mice, as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signals. As used herein, the term "machine" is intended to broadly encompass a single machine, a virtual machine, or a system of communicatively coupled machines, virtual machines, or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, and the like, as well as transportation devices such as private or public transportation (for example, automobiles, trains, and taxis).

The machine or machines may include embedded controllers, such as programmable or non-programmable logic devices or arrays, application specific integrated circuits (ASICs), embedded computers, smart cards, and the like. The machine or machines may utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines may be interconnected by way of a physical and/or logical network, such as an intranet, the Internet, local area networks, and wide area networks. Those skilled in the art will appreciate that network communication may utilize various wired and/or wireless short-range or long-range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth®, optical, infrared, cable, laser, and the like.

Embodiments of the disclosure may be described by reference to, or in conjunction with, associated data including functions, procedures, data structures, application programs, and the like which, when accessed by a machine, result in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data may be stored in, for example, volatile and/or non-volatile memory (for example, RAM, ROM, etc.), or in other storage devices and their associated storage media, including hard drives, floppy disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, and the like. Associated data may be delivered over transmission environments, including physical and/or logical networks, in the form of packets, serial data, parallel data, propagated signals, and the like, and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment and stored locally and/or remotely for machine access.

Embodiments of the disclosure may include a tangible, non-transitory machine-readable medium comprising instructions executable by one or more processors, the instructions comprising instructions to perform the elements of the disclosure as described herein.

The various operations of the methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software components, circuits, and/or modules. The software may comprise an ordered listing of executable instructions for implementing logical functions, and may be embodied in any "processor-readable medium" for use by or in connection with an instruction execution system, apparatus, or device, such as a single-core or multi-core processor or a processor-containing system.

The blocks or steps of a method or algorithm and the functions described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. If implemented in software, the functions may be stored on, or transmitted over as one or more instructions or code on, a tangible, non-transitory computer-readable medium. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.

Having described and illustrated the principles of the disclosure with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles, and may be combined in any desired manner. And, although the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though expressions such as "according to an embodiment of the disclosure" or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the disclosure to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments.

The foregoing illustrative embodiments are not to be construed as limiting the disclosure thereof. Although a few embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible to those embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure as defined in the claims.

Embodiments of the present disclosure may extend to the following statements, without limitation:

Statement 1. Embodiments of the present disclosure include a system, comprising: a first storage device supporting a cache-coherent interconnect protocol, the cache-coherent interconnect protocol including a block-level protocol and a byte-level protocol; a second storage device supporting the cache-coherent interconnect protocol; and a redundant array of independent disks (RAID) circuit in communication with the first storage device and the second storage device, the RAID circuit applying a RAID level to the first storage device and the second storage device, the RAID circuit being configured to receive a request using the byte-level protocol and to access data on the first storage device.
Statement 2. Embodiments of the present disclosure include a system according to Statement 1, wherein the RAID circuit is configured to receive the request using the byte-level protocol and to access the data on the first storage device while maintaining the RAID level.
Statement 3. Embodiments of the present disclosure include a system according to Statement 1, wherein the system further includes a processor configured to issue the request sent to the RAID circuit.
Statement 4. Embodiments of the present disclosure include a system according to Statement 1, wherein the cache-coherent interconnect protocol includes a Compute Express Link (CXL) protocol.
Statement 5. Embodiments of the present disclosure include a system according to Statement 1, wherein: the first storage device includes a first solid state drive (SSD) (120); and the second storage device includes a second SSD.
Statement 6. Embodiments of the present disclosure include a system according to Statement 1, further comprising a cache-coherent interconnect switch, wherein the cache-coherent interconnect switch is connected to the first storage device, the second storage device, and the RAID circuit.
Statement 7. Embodiments of the present disclosure include a system according to Statement 6, wherein the cache-coherent interconnect switch includes the RAID circuit.
Statement 8.
Embodiments of the present disclosure include a system according to Statement 1, wherein the RAID circuit is configured to support at least one of RAID level 0, RAID level 1, RAID level 5, or RAID level 6.
Statement 9. Embodiments of the present disclosure include a system according to Statement 1, wherein: the system further includes a circuit board including a first slot and a second slot; the first storage device is installed in the first slot; and the second storage device is installed in the second slot.
Statement 10. Embodiments of the present disclosure include a system according to Statement 9, wherein: the circuit board further includes a third slot; and the RAID circuit is installed in the third slot.
Statement 11. Embodiments of the present disclosure include a system according to Statement 10, further comprising a cache-coherent interconnect switch, wherein the cache-coherent interconnect switch is connected to the first storage device, the second storage device, and the RAID circuit, the cache-coherent interconnect switch is installed in the third slot, and the cache-coherent interconnect switch includes the RAID circuit.
Statement 12. Embodiments of the present disclosure include a system according to Statement 10, wherein: the circuit board further includes a fourth slot; and the system further includes a cache-coherent interconnect switch, wherein the cache-coherent interconnect switch is connected to the first storage device, the second storage device, and the RAID circuit, the cache-coherent interconnect switch being installed in the fourth slot.
Statement 13. Embodiments of the present disclosure include a system according to Statement 1, further comprising a fabric manager to configure the RAID circuit.
Statement 14. Embodiments of the present disclosure include a system according to Statement 13, wherein the fabric manager is configured to identify the first storage device and the second storage device and to configure the RAID circuit to use the RAID level.
Statement 15. Embodiments of the present disclosure include a system according to Statement 13, wherein the fabric manager is configured to determine a first address range of the first storage device and a second address range of the second storage device, and to map the first address range and the second address range to a RAID address range.
Statement 16. Embodiments of the present disclosure include a system according to Statement 15, wherein the fabric manager is further configured to identify the RAID address range as accessible by a processor.
Statement 17. Embodiments of the present disclosure include a system according to Statement 16, wherein the fabric manager is further configured to add the RAID address range to a memory map.
Statement 18. Embodiments of the present disclosure include a system according to Statement 16, wherein the first address range and the second address range are hidden from the processor.
Statement 19. Embodiments of the present disclosure include a system according to Statement 1, wherein the RAID circuit includes a buffer.
Statement 20. Embodiments of the present disclosure include a system according to Statement 19, further comprising a backup power source configured to provide backup power to the buffer.
Statement 21. Embodiments of the present disclosure include a system according to Statement 20, wherein the backup power source includes a battery or a capacitor.
Statement 22.
Embodiments of the present disclosure include a system according to Statement 19, wherein the RAID circuit is configured to return the data from the buffer based at least in part on a load request, the load request including a byte address.
Statement 23. Embodiments of the present disclosure include a system according to Statement 22, wherein the load request includes a byte-level protocol load request.
Statement 24. Embodiments of the present disclosure include a system according to Statement 22, further comprising a processor configured to issue the load request sent to the RAID circuit.
Statement 25. Embodiments of the present disclosure include a system according to Statement 22, wherein the RAID circuit further includes a read circuit configured to read second data from the first storage device into the buffer and to read third data from the second storage device into the buffer, based at least in part on the byte address and the RAID level of the RAID circuit.
Statement 26. Embodiments of the present disclosure include a system according to Statement 25, wherein the second data includes the data.
Statement 27. Embodiments of the present disclosure include a system according to Statement 26, wherein: the data includes a first part and a second part; the second data includes the first part of the data; and the third data includes the second part of the data.
Statement 28. Embodiments of the present disclosure include a system according to Statement 25, wherein the RAID circuit further includes an error handler configured to verify the second data using parity information in the third data.
Statement 29. Embodiments of the present disclosure include a system according to Statement 19, wherein the RAID circuit is configured to write the data to the buffer based at least in part on a store request, the store request including a byte address and the data.
Statement 30. Embodiments of the present disclosure include a system according to Statement 29, wherein the store request includes a byte-level protocol store request.
Statement 31. Embodiments of the present disclosure include a system according to Statement 29, further comprising a processor configured to issue the store request sent to the RAID circuit.
Statement 32. Embodiments of the present disclosure include a system according to Statement 29, wherein the RAID circuit further includes a write circuit configured to write second data from the buffer to the first storage device and to write third data from the buffer to the second storage device, based at least in part on the byte address and the RAID level of the RAID circuit.
Statement 33. Embodiments of the present disclosure include a system according to Statement 32, wherein the RAID circuit further includes an error handler configured to write parity information in the third data to verify the second data.
Statement 34. Embodiments of the present disclosure include a system according to Statement 29, wherein the RAID circuit further includes a read circuit configured to read second data from the first storage device into the buffer and to read third data from the second storage device into the buffer, based at least in part on the byte address and the RAID level of the RAID circuit.
Statement 35. Embodiments of the present disclosure include a system according to Statement 34, wherein the second data includes fourth data to be replaced by the data.
Statement 36.
Embodiments of the present disclosure include a system according to Statement 34, wherein: the data includes a first part and a second part; the second data includes fourth data to be replaced by the first part of the data; and the third data includes fifth data to be replaced by the second part of the data. Statement 37. Embodiments of the present disclosure include a system according to Statement 34, wherein the RAID circuit further includes an error handler configured to verify the second data using parity information in the third data. Statement 38. Embodiments of the present disclosure include a system according to Statement 1, further comprising a computing module, wherein the computing module is configured to access data from the first storage device or the second storage device via the RAID circuit. Statement 39. Embodiments of the present disclosure include a method that includes: identifying a first storage device that supports a cache coherence interconnection protocol; identifying a second storage device that supports the cache coherence interconnection protocol; determining a first address range of the first storage device; determining a second address range of the second storage device; and generating a RAID address range for a redundant array of independent disks (RAID) circuit based at least in part on the first address range and the second address range. Statement 40. Embodiments of the present disclosure include a method according to Statement 39, further comprising registering the RAID address range with a memory map of a system including the RAID circuit. Statement 41. Embodiments of the present disclosure include a method according to Statement 40, further comprising unregistering the first address range and the second address range from the memory map. Statement 42. Embodiments of the present disclosure include a method according to Statement 39, further comprising loading a RAID configuration from non-volatile storage. Statement 43. Embodiments of the present disclosure include a method comprising: receiving a load request at a redundant array of independent disks (RAID) circuit, the load request including a byte address; locating data in a buffer of the RAID circuit based at least in part on the byte address; and returning the data from the RAID circuit. Statement 44. Embodiments of the present disclosure include a method according to Statement 43, wherein the load request includes a byte-level protocol load request. Statement 45. Embodiments of the present disclosure include a method according to Statement 43, wherein: receiving the load request at the RAID circuit includes receiving the load request from a processor at the RAID circuit; and returning the data from the RAID circuit includes returning the data from the RAID circuit to the processor. Statement 46. Embodiments of the present disclosure include a method according to Statement 43, wherein locating the data for the byte address in the buffer of the RAID circuit includes: reading second data from a first storage device supporting the cache coherence interconnection protocol; reading third data from a second storage device supporting the cache coherence interconnection protocol; storing the second data in the buffer; and storing the third data in the buffer.
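Statements 39 through 42 outline initialization as a short sequence of discovery and mapping steps. A minimal sketch of that flow follows, reusing the MemoryMap model from the earlier sketch; `interconnect.enumerate()` is an assumed discovery call, and modeling the persisted RAID configuration of Statement 42 as a JSON file is likewise an assumption.

```python
import json

def initialize_raid(interconnect, memory_map, config_path=None):
    """Sketch of the initialization flow of Statements 39-42.

    `interconnect.enumerate()` is assumed to yield storage devices
    supporting the cache coherence interconnection protocol, each
    with `name`, `base`, and `size` attributes.
    """
    # Statement 42: optionally load a persisted RAID configuration
    # from non-volatile storage (modeled here as a JSON file).
    config = {}
    if config_path is not None:
        with open(config_path) as f:
            config = json.load(f)

    # Statement 39: identify the devices and determine their
    # address ranges.
    devices = list(interconnect.enumerate())
    ranges = [(dev.base, dev.size) for dev in devices]

    # Statement 39, continued: generate the RAID address range
    # (RAID-0-style concatenation assumed).
    raid_base = config.get("raid_base", 0x4000_0000)
    raid_size = sum(size for _, size in ranges)

    # Statement 40: register the RAID address range with the memory
    # map; Statement 41: unregister the per-device ranges.
    memory_map.register("raid", raid_base, raid_size)
    for dev in devices:
        memory_map.unregister(dev.name)
    return raid_base, raid_size
```

After this flow completes, only the RAID address range is visible to software; subsequent byte-level loads and stores against that range are handled as in Statements 43 and 51. Statement 47.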
Embodiments of the present disclosure include a method according to Statement 46, wherein locating the data for the byte address in the buffer of the RAID circuit further includes locating the data in the second data. Statement 48. Embodiments of the present disclosure include a method according to Statement 47, wherein locating the data in the second data includes locating the byte address in an address range of the second data. Statement 49. Embodiments of the present disclosure include a method according to Statement 46, wherein: the data includes a first part and a second part; the second data includes the first part of the data; and the third data includes the second part of the data. Statement 50. Embodiments of the present disclosure include a method according to Statement 46, wherein locating the data for the byte address in the buffer of the RAID circuit further includes verifying the second data using parity information in the third data. Statement 51. Embodiments of the present disclosure include a method comprising: receiving a storage request at a redundant array of independent disks (RAID) circuit, the storage request including a byte address and first data; updating second data in a buffer of the RAID circuit based at least in part on the byte address and the first data to generate updated second data; and returning a result from the RAID circuit. Statement 52. Embodiments of the present disclosure include a method according to Statement 51, wherein the storage request includes a byte-level protocol storage request. Statement 53. Embodiments of the present disclosure include a method according to Statement 51, wherein: receiving the storage request at the RAID circuit includes receiving the storage request from a processor at the RAID circuit; and returning the result from the RAID circuit includes returning the result from the RAID circuit to the processor. Statement 54. Embodiments of the present disclosure include a method according to Statement 51, further comprising writing the updated second data to a first storage device that supports a cache coherence interconnection protocol. Statement 55. Embodiments of the present disclosure include a method according to Statement 54, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol includes writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit. Statement 56. Embodiments of the present disclosure include a method according to Statement 55, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit includes using backup power to protect the buffer from a power interruption. Statement 57. Embodiments of the present disclosure include a method according to Statement 51, further comprising: updating third data in the buffer of the RAID circuit based at least in part on the first data to generate updated third data; and writing the updated third data to a second storage device that supports the cache coherence interconnection protocol.
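Statements 51 through 58 describe the store path, including the parity update of Statement 58. The sketch below models it as a read-modify-write in the buffer, reusing STRIPE_UNIT from the load sketch above. XOR parity is assumed as the parity function (the usual choice for RAID 5, though the disclosure does not fix one), and the one-data-device, one-parity-device layout and all names are illustrative assumptions.

```python
def xor_blocks(a, b):
    """XOR two equal-length byte blocks (assumed parity function)."""
    return bytes(x ^ y for x, y in zip(a, b))

def handle_store(raid, byte_addr, first_data):
    """Behavioral sketch of a byte-level store (Statements 51-58)."""
    stripe = byte_addr // STRIPE_UNIT
    offset = byte_addr % STRIPE_UNIT

    # Statement 60: read the second data (the stripe being updated)
    # into the buffer if it is not already cached there.
    if ("data", stripe) not in raid.buffer:
        raid.buffer[("data", stripe)] = raid.data_dev.read_stripe(stripe)
    old = raid.buffer[("data", stripe)]

    # Statement 51: update the second data with the first data.
    new = old[:offset] + first_data + old[offset + len(first_data):]
    raid.buffer[("data", stripe)] = new

    # Statements 57-59: update the third data (the parity). With XOR
    # parity, new_parity = old_parity XOR old_data XOR new_data.
    if ("parity", stripe) not in raid.buffer:
        raid.buffer[("parity", stripe)] = raid.parity_dev.read_stripe(stripe)
    new_parity = xor_blocks(xor_blocks(raid.buffer[("parity", stripe)], old), new)
    raid.buffer[("parity", stripe)] = new_parity

    # Statements 54-56: the result may be returned as soon as the
    # buffer is updated; backup power (Statement 20) protects the
    # buffer so the device write-back can complete after the return.
    raid.data_dev.write_stripe(stripe, new)
    raid.parity_dev.write_stripe(stripe, new_parity)
    return "ok"
```

The read-modify-write shape is forced by Statements 59 and 60: both the old data and the old parity must be in the buffer before the new parity can be computed. Statement 58.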
Embodiments of the present disclosure include a method according to Statement 57, wherein updating the third data in the buffer of the RAID circuit based at least in part on the first data to generate the updated third data includes generating parity information as the updated third data based at least in part on the first data. Statement 59. Embodiments of the present disclosure include a method according to Statement 57, further comprising reading the third data from the second storage device into the buffer. Statement 60. Embodiments of the present disclosure include a method according to Statement 51, further comprising reading the second data from the first storage device into the buffer. Statement 61. Embodiments of the present disclosure include an article including a non-transitory storage medium, the non-transitory storage medium having instructions stored on it that, when executed by a machine, result in: identifying a first storage device that supports a cache coherence interconnection protocol; identifying a second storage device that supports the cache coherence interconnection protocol; determining a first address range of the first storage device; determining a second address range of the second storage device; and generating a RAID address range for a redundant array of independent disks (RAID) circuit based at least in part on the first address range and the second address range. Statement 62. Embodiments of the present disclosure include an article according to Statement 61, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in registering the RAID address range with a memory map of a system including the RAID circuit. Statement 63. Embodiments of the present disclosure include an article according to Statement 62, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in unregistering the first address range and the second address range from the memory map. Statement 64. Embodiments of the present disclosure include an article according to Statement 61, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in loading a RAID configuration from non-volatile storage. Statement 65. Embodiments of the present disclosure include an article including a non-transitory storage medium, the non-transitory storage medium having instructions stored on it that, when executed by a machine, result in: receiving a load request at a redundant array of independent disks (RAID) circuit, the load request including a byte address; locating data in a buffer of the RAID circuit based at least in part on the byte address; and returning the data from the RAID circuit. Statement 66. Embodiments of the present disclosure include an article according to Statement 65, wherein the load request includes a byte-level protocol load request. Statement 67. Embodiments of the present disclosure include an article according to Statement 65, wherein: receiving the load request at the RAID circuit includes receiving the load request from a processor at the RAID circuit; and returning the data from the RAID circuit includes returning the data from the RAID circuit to the processor. Statement 68.
Embodiments of the present disclosure include an article according to Statement 65, wherein locating the data for the byte address in the buffer of the RAID circuit includes: reading second data from a first storage device supporting the cache coherence interconnection protocol; reading third data from a second storage device supporting the cache coherence interconnection protocol; storing the second data in the buffer; and storing the third data in the buffer. Statement 69. Embodiments of the present disclosure include an article according to Statement 68, wherein locating the data for the byte address in the buffer of the RAID circuit further includes locating the data in the second data. Statement 70. Embodiments of the present disclosure include an article according to Statement 69, wherein locating the data in the second data includes locating the byte address in an address range of the second data. Statement 71. Embodiments of the present disclosure include an article according to Statement 68, wherein: the data includes a first part and a second part; the second data includes the first part of the data; and the third data includes the second part of the data. Statement 72. Embodiments of the present disclosure include an article according to Statement 68, wherein locating the data for the byte address in the buffer of the RAID circuit further includes verifying the second data using parity information in the third data. Statement 73. Embodiments of the present disclosure include an article including a non-transitory storage medium, the non-transitory storage medium having instructions stored on it that, when executed by a machine, result in: receiving a storage request at a redundant array of independent disks (RAID) circuit, the storage request including a byte address and first data; updating second data in a buffer of the RAID circuit based at least in part on the byte address and the first data to generate updated second data; and returning a result from the RAID circuit. Statement 74. Embodiments of the present disclosure include an article according to Statement 73, wherein the storage request includes a byte-level protocol storage request. Statement 75. Embodiments of the present disclosure include an article according to Statement 73, wherein: receiving the storage request at the RAID circuit includes receiving the storage request from a processor at the RAID circuit; and returning the result from the RAID circuit includes returning the result from the RAID circuit to the processor. Statement 76. Embodiments of the present disclosure include an article according to Statement 73, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in writing the updated second data to a first storage device supporting the cache coherence interconnection protocol. Statement 77. Embodiments of the present disclosure include an article according to Statement 76, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol includes writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit. Statement 78.
Embodiments of the present disclosure include an article according to Statement 77, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit includes using backup power to protect the buffer from a power interruption. Statement 79. Embodiments of the present disclosure include an article according to Statement 73, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in: updating third data in the buffer of the RAID circuit based at least in part on the first data to generate updated third data; and writing the updated third data to a second storage device supporting the cache coherence interconnection protocol. Statement 80. Embodiments of the present disclosure include an article according to Statement 79, wherein updating the third data in the buffer of the RAID circuit based at least in part on the first data to generate the updated third data includes generating parity information as the updated third data based at least in part on the first data. Statement 81. Embodiments of the present disclosure include an article according to Statement 79, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in reading the third data from the second storage device into the buffer. Statement 82. Embodiments of the present disclosure include an article according to Statement 73, the non-transitory storage medium having additional instructions stored on it that, when executed by the machine, result in reading the second data from the first storage device into the buffer. Accordingly, in view of the many possible permutations of the embodiments described herein, this detailed description and the accompanying material are intended to be illustrative only and should not be taken as limiting the scope of the disclosure. What is claimed as the disclosure, therefore, is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.

105:Machine
110:Processor
115:Memory
120, 120-1, 120-2, 120-3:Storage device
125:Memory controller
130:Device driver
135:RAID engine
140:Fabric manager
145:Switch
205:Clock
210:Network connection
215:Bus
220:User interface
225:Input/output (I/O) engine
305:Application
310-1:Request/read request/write request
310-2:Request
405-1:Block-level protocol
405-2:Byte-level protocol
410:Storage
415-1, 415-2, 415-3, 415-4:Block
420:Byte
505-1, 505-2:Address range
510:RAID address range/address range
605:Read circuit
610:Write circuit
615:Buffer/buffer circuit
620-1, 620-2, 620-3:Stripe
625-1, 625-2, 625-3, 715, 815:Data
630:Error handler
705:Load request
710, 810:Byte address
805:Storage request
820:Result
905, 1005, 1010, 1015, 1020, 1025, 1105, 1110, 1205, 1210, 1215, 1305, 1310, 1320, 1330, 1405, 1410, 1415, 1505, 1515, 1520, 1530:Block
1115, 1315, 1325, 1510, 1525:Dashed line

The drawings set forth below are examples of how embodiments of the disclosure may be implemented and are not intended to limit the embodiments of the disclosure. Individual embodiments of the disclosure may include elements not shown in particular figures and/or may omit elements shown in particular figures. The drawings are intended to provide illustration and may not be drawn to scale.
FIG. 1 illustrates a machine including cache coherence interconnect storage devices that may be configured in a redundant array of independent disks (RAID), according to an embodiment of the disclosure.
FIG. 2 shows details of the machine of FIG. 1, according to an embodiment of the disclosure.
FIG. 3 illustrates the use of the RAID of FIG. 1, according to an embodiment of the disclosure.
FIG. 4 illustrates how data may be accessed from the storage devices of FIG. 1 using two different protocols, according to an embodiment of the disclosure.
FIG. 5 illustrates how a RAID address range may be mapped to the individual address ranges of the storage devices of FIG. 1, according to an embodiment of the disclosure.
FIG. 6 shows details of the buffer in the RAID engine of FIG. 1, according to an embodiment of the disclosure.
FIG. 7 illustrates how the RAID engine of FIG. 1 may handle a load request, according to an embodiment of the disclosure.
FIG. 8 illustrates how the RAID engine of FIG. 1 may handle a storage request, according to an embodiment of the disclosure.
FIG. 9 shows a flowchart of an example procedure for loading a RAID configuration as part of the initialization of the RAID of FIG. 1, according to an embodiment of the disclosure.
FIG. 10 shows a flowchart of an example procedure for performing the initialization of the RAID of FIG. 1, according to an embodiment of the disclosure.
FIG. 11 shows a flowchart of an example procedure for managing the use of the RAID address range of FIG. 5, according to an embodiment of the disclosure.
FIG. 12 shows a flowchart of an example procedure for processing a load request using the RAID of FIG. 1, according to an embodiment of the disclosure.
FIG. 13 shows a flowchart of an example procedure for locating the data requested in a load operation, according to an embodiment of the disclosure.
FIG. 14 shows a flowchart of an example procedure for processing a storage request using the RAID of FIG. 1, according to an embodiment of the disclosure.
FIG. 15 shows a flowchart of an example procedure for handling the storage of data in the buffer of FIG. 6, according to an embodiment of the disclosure.

105:Machine

110:Processor

115:Memory

120-1, 120-2:Storage device

125:Memory controller

130:Device driver

135:RAID engine

140:Fabric manager

145:Switch

Claims (20)

A system, comprising:
a first storage device supporting a cache coherence interconnection protocol, the cache coherence interconnection protocol including a block-level protocol and a byte-level protocol;
a second storage device supporting the cache coherence interconnection protocol; and
a redundant array of independent disks (RAID) circuit in communication with the first storage device and the second storage device, the RAID circuit applying a RAID level to the first storage device and the second storage device, the RAID circuit configured to receive a request using the byte-level protocol and to access data on the first storage device.
The system of claim 1, wherein the cache coherence interconnection protocol includes a Compute Express Link (CXL) protocol.
The system of claim 1, further comprising a cache coherence interconnect switch, wherein the cache coherence interconnect switch is connected to the first storage device, the second storage device, and the RAID circuit.
The system of claim 1, further comprising a fabric manager for configuring the RAID circuit.
The system of claim 4, wherein the fabric manager is configured to identify the first storage device and the second storage device and to configure the RAID circuit to use a RAID level.
The system of claim 4, wherein the fabric manager is configured to determine a first address range of the first storage device and a second address range of the second storage device and to map the first address range and the second address range to a RAID address range.
The system of claim 6, wherein the fabric manager is further configured to determine the RAID address range as accessible by a processor.
The system of claim 1, wherein the RAID circuit includes a buffer.
The system of claim 8, further comprising a backup power supply configured to provide backup power to the buffer.
A method, comprising:
receiving a load request at a redundant array of independent disks (RAID) circuit, the load request including a byte address;
locating data in a buffer of the RAID circuit based at least in part on the byte address; and
returning the data from the RAID circuit.
The method of claim 10, wherein locating the data for the byte address in the buffer of the RAID circuit includes:
reading second data from a first storage device supporting a cache coherence interconnection protocol;
reading third data from a second storage device supporting the cache coherence interconnection protocol;
storing the second data in the buffer; and
storing the third data in the buffer.
The method of claim 11, wherein locating the data for the byte address in the buffer of the RAID circuit further includes locating the data in the second data.
The method of claim 11, wherein locating the data for the byte address in the buffer of the RAID circuit further includes verifying the second data using parity information in the third data.
A method, comprising:
receiving a storage request at a redundant array of independent disks (RAID) circuit, the storage request including a byte address and first data;
updating second data in a buffer of the RAID circuit based at least in part on the byte address and the first data to generate updated second data; and
returning a result from the RAID circuit.
The method of claim 14, further comprising writing the updated second data to a first storage device supporting a cache coherence interconnection protocol.
The method of claim 15, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol includes writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit.
The method of claim 16, wherein writing the updated second data to the first storage device supporting the cache coherence interconnection protocol based at least in part on returning the result from the RAID circuit includes using backup power to protect the buffer from a power interruption.
The method of claim 14, further comprising:
updating third data in the buffer of the RAID circuit based at least in part on the first data to generate updated third data; and
writing the updated third data to a second storage device supporting the cache coherence interconnection protocol.
The method of claim 18, wherein updating the third data in the buffer of the RAID circuit based at least in part on the first data to generate the updated third data includes generating parity information as the updated third data based at least in part on the first data.
The method of claim 14, further comprising reading the second data from the first storage device into the buffer.
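Claims 13 and 19 turn on parity information generated over the stored data. As a worked illustration only, the following assumes XOR parity (the claims do not name a particular parity function): the parity stripe is the XOR of the data stripes, so a stripe set can be checked against its parity, and any single lost stripe can be reconstructed from the rest.

```python
from functools import reduce

def make_parity(stripes):
    """Generate parity over equal-length data stripes (cf. claim 19);
    XOR parity is an assumption, not mandated by the claims."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripes)

def verify(data_stripes, parity):
    """Verify data against parity information (cf. claim 13)."""
    return make_parity(data_stripes) == parity

# Example: two data stripes and their parity.
d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
p = make_parity([d0, d1])
assert verify([d0, d1], p)                                  # intact data passes
assert not verify([bytes([d0[0] ^ 0xFF]) + d0[1:], d1], p)  # corruption fails
assert make_parity([d1, p]) == d0                           # lost stripe rebuilt
```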
TW112118965A 2022-06-15 2023-05-22 Storage system and method of operating storage system TW202401232A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263352629P 2022-06-15 2022-06-15
US63/352,629 2022-06-15
US17/885,519 2022-08-10
US17/885,519 US20230409480A1 (en) 2022-06-15 2022-08-10 Systems and methods for a redundant array of independent disks (raid) using a raid circuit in cache coherent interconnect storage devices

Publications (1)

Publication Number Publication Date
TW202401232A true TW202401232A (en) 2024-01-01

Family

ID=86646665

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112118965A TW202401232A (en) 2022-06-15 2023-05-22 Storage system and method of operating storage system

Country Status (4)

Country Link
US (1) US20230409480A1 (en)
EP (1) EP4293493A1 (en)
KR (1) KR20230172394A (en)
TW (1) TW202401232A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989088B2 (en) * 2022-08-30 2024-05-21 Micron Technology, Inc. Read data path

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7991951B2 (en) * 2006-11-22 2011-08-02 Quantum Corporation Clustered storage network
US7975109B2 (en) * 2007-05-30 2011-07-05 Schooner Information Technology, Inc. System including a fine-grained memory and a less-fine-grained memory
US10540231B2 (en) * 2018-04-04 2020-01-21 International Business Machines Corporation Log-structured array (LSA) partial parity eviction and reassembly
US10909012B2 (en) * 2018-11-12 2021-02-02 H3 Platform, Inc. System having persistent memory
US10795817B2 (en) * 2018-11-16 2020-10-06 Western Digital Technologies, Inc. Cache coherence for file system interfaces
US11074189B2 (en) * 2019-06-20 2021-07-27 International Business Machines Corporation FlatFlash system for byte granularity accessibility of memory in a unified memory-storage hierarchy
US20220114086A1 (en) * 2021-12-22 2022-04-14 Intel Corporation Techniques to expand system memory via use of available device memory

Also Published As

Publication number Publication date
EP4293493A1 (en) 2023-12-20
US20230409480A1 (en) 2023-12-21
KR20230172394A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
US10289304B2 (en) Physical address management in solid state memory by tracking pending reads therefrom
US8819338B2 (en) Storage system and storage apparatus
US20190324859A1 (en) Method and Apparatus for Restoring Data after Power Failure for An Open-Channel Solid State Drive
US11543989B2 (en) Storage system and control method thereof
CN112148628A (en) Offload defragmentation operations for host managed storage
CN112286838B (en) Storage device configurable mapping granularity system
TW202401232A (en) Storage system and method of operating storage system
US11188425B1 (en) Snapshot metadata deduplication
US20140201167A1 (en) Systems and methods for file system management
KR20220119348A (en) Snapshot management in partitioned storage
KR20230154618A (en) Storage device, memory device, and system including storage device and memory device
US10789168B2 (en) Maintaining multiple cache areas
CN117234414A (en) System and method for supporting redundant array of independent disks
TW202401259A (en) System and method for redundant array of independent disk (raid) using decoder in cache coherent interconnect storage device
US11340795B2 (en) Snapshot metadata management
WO2023046129A1 (en) Computer device, method for processing data, and computer system
US11016896B2 (en) Reducing overhead of managing cache areas
US20230236737A1 (en) Storage Controller Managing Different Types Of Blocks, Operating Method Thereof, And Operating Method Of Storage Device Including The Same
US10795814B2 (en) Placement of local cache areas
EP3314390B1 (en) Returning coherent data in response to a failure of a storage device when a single input/output request spans two storage devices
CN117234415A (en) System and method for supporting Redundant Array of Independent Disks (RAID)
KR20230156524A (en) Operation method of host configured to communicate with storage devices and memory devices, and system including storage devices and memory devices
US20210286547A1 (en) Storage system