CN103136121B - Cache management method for solid-state disc - Google Patents


Info

Publication number
CN103136121B
CN103136121B (application CN201310096798.XA)
Authority
CN
China
Prior art keywords
page
linked list
cache
dirty
physical
Prior art date
Legal status
Active
Application number
CN201310096798.XA
Other languages
Chinese (zh)
Other versions
CN103136121A (en)
Inventor
宋振龙
魏登萍
李琼
郭御风
肖立权
周恩强
董勇
黎铁军
李元山
胡积平
谢徐超
王烨琛
李旭言
Current Assignee
HUNAN GREATWALL GALAXY TECHNOLOGY CO., LTD.
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201310096798.XA
Publication of CN103136121A
Application granted
Publication of CN103136121B
Legal status: Active

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System (AREA)

Abstract

The invention discloses a cache management method for a solid-state disk. The implementation steps are as follows: a page cache, a replacement-block buffer, a new-page linked list, a physical-block linked list, and a physical-page state table are established; IO requests from the host are received and served through the page cache. When a write request misses in the page cache and the page cache has no free space, the block-replacement procedure of the solid-state disk is executed: the space occupied by "valid" pages in the page cache is released first, and when the number of "valid" pages in the page cache reaches zero, the candidate block with the largest invalid ratio among the physical blocks in the rear half of the physical-block linked list is selected as the replacement block, and the replacement-block buffer is used to carry out the replacement write. The method makes effective use of the limited cache space and increases the cache hit rate, and it ensures that each block written to the flash medium contains as many dirty data pages and as few valid data pages as possible, thereby reducing erase operations, page-copy operations, and the subsequent garbage collection caused by dirty data pages. The method is also simple to implement.

Description

Cache management method for a solid-state disk
Technical field
The present invention relates to cache management methods for storage devices, and in particular to a cache management method for a solid-state disk.
Background art
A solid-state disk (Solid State Disk, SSD) is a hard disk that uses Flash media or DRAM chips for permanent data storage. The more common SSDs today are Flash-based, consisting of the Flash medium, an SSD controller, and a DRAM chip used as the controller's working memory. Because the solid-state disk abandons conventional magnetic media in favor of electronic storage, it escapes the mechanical limitations of magnetic hard disks and greatly reduces data seek time. Its latency is at the microsecond (us) level, and its random-access performance is one to two orders of magnitude higher than that of conventional hard disks. At the same time, solid-state disks are non-volatile, low-power, shock-resistant, and highly reliable, with high read/write bandwidth and fast random access, providing good support for small-granularity, random IO. The use of solid-state disks therefore solves, to some extent, certain problems faced by contemporary storage systems, such as limited access speed, and has become one of the research hotspots in the storage field.
However, alongside these advantages, SSDs also have problems, the most typical being the erase-before-write mechanism. The basic unit of SSD reads and writes is the page, while the basic unit of erasure is the block. Erase-before-write means that when a page within a block must be modified, the whole block must first be erased before the page can be written. To avoid losing the other data in the block, that data must first be moved elsewhere, the block erased, and the page then written. This greatly limits the random-write performance of the solid-state disk, is a principal factor shortening its lifetime, and restricts its applications: the erase-before-write mechanism makes write performance significantly lower than read performance, and frequent erasing shortens the service life of the solid-state disk.
To improve SSD performance, reduce erase counts, and extend service life, caching mechanisms have been introduced. Exploiting the temporal and spatial locality of the processed data, the DRAM space in the SSD is used to buffer write-request data, so that some read and write operations complete in the cache, reducing the number of accesses to the Flash medium. Several cache management algorithms for SSDs currently exist. For example, the BPLRU algorithm uses a write cache to optimize random-write performance, reordering all host-submitted write requests in the write cache before passing them to the Flash Translation Layer (FTL). The Clean-First LRU (CFLRU) algorithm delays dirty pages in the cache, reducing the number of writes to the storage cells of the Flash medium without lowering the cache hit rate. The LRU-WSR algorithm divides all pages into two classes, frequently accessed "hot" pages and infrequently accessed "cold" pages, adds a hot/cold flag for each page in the page linked list, and evicts "cold" pages from the cache first. These algorithms, each designed for a different goal, have their respective strengths and weaknesses and suit different IO access scenarios.
To increase SSD performance and service life, a cache management algorithm should satisfy three requirements: (1) use the limited cache space to raise the cache hit rate as much as possible, improving read/write speed and reducing the number of writes; (2) writing data back should cause as few erase operations and as few page copies as possible; (3) cache management should keep subsequent garbage-collection operations as simple and low-complexity as possible. The prior art described above each emphasizes only some of these aspects and cannot satisfy all three requirements at once, and no cache management method taking all three features into account has yet been reported in current patents or literature.
Summary of the invention
The technical problem to be solved by the present invention is to provide a cache management method for a solid-state disk that makes effective use of limited cache space and increases the cache hit rate, makes each block written to the Flash medium contain as many dirty pages as possible so as to reduce erase operations and the page copies they cause, keeps subsequent garbage collection simple, and is easy to implement.
To solve the above technical problem, the technical solution adopted by the present invention is:
A cache management method for a solid-state disk, whose implementation steps are as follows:
1) In the cache of the solid-state disk, establish in advance a page cache for buffering data and a replacement-block buffer for holding the replacement block; then establish in the cache a new-page linked list, a physical-block linked list, and a physical-page state table. The new-page linked list records the logical page numbers (LPNs) in the page cache that have not yet been written to the solid-state disk. The physical-block linked list records the physical block numbers corresponding to the logical pages in the page cache; each node of the physical-block linked list is a "dirty"-page linked list that records the data and valid flag of each "dirty" page belonging to that physical block. The physical-page state table records, for each physical page of the solid-state disk, one of the three states "valid", "invalid", or "clean".
2) Receive an IO request from the host; if it is a read request, go to step 3); if it is a write request, go to step 4).
3) Preferentially read the logical page from the page cache. On a page-cache miss, read the logical page from the solid-state disk through the FTL, store its data in the page cache, update the "dirty"-page linked list of the physical-block linked list, and mark the valid flag of the logical page as valid ("1"). Finally return the logical page data to the host and go back to step 2).
4) Determine whether the logical page of the write request hits in the page cache. On a hit, store the logical page in the page cache, update the "dirty"-page linked list of the physical-block linked list, mark the valid flag of the logical page as invalid ("0"), return the write result to the host, and go back to step 2). On a miss, further check whether the page cache has free space. If the page cache misses but has free space, store the logical page in the page cache and query the physical-page state table for the state of the physical page corresponding to the written logical page: if the state is "valid", update the "dirty"-page linked list of the physical-block linked list and change the state of that physical page in the physical-page state table to "invalid"; if the state is "clean", add the page to the new-page linked list. Finally return the write result to the host and go back to step 2). If the page cache misses and has no free space, go to step 5).
5) Initialize a "valid"-page counter to 0. Traverse each "dirty"-page linked list from the tail of the physical-block linked list toward its head, and traverse the "dirty" pages of each list from tail to head, using the valid flag to judge whether the current "dirty" page is valid. If it is valid, release the corresponding logical page from the page cache, delete the corresponding node from the "dirty"-page linked list, add 1 to the counter, and check whether the counter has reached a preset value; if so, go to step 7). If the valid flag of the current "dirty" page is invalid ("0"), move on to the next "dirty" page. After all "dirty" pages have been traversed, check whether the counter is 0: if it is 0, go to step 6); otherwise go to step 7).
6) Take the physical blocks in the rear half of the physical-block linked list as candidate replacement blocks. Look up the physical-page state table to obtain, for each candidate, the number of pages whose state is "invalid" and the number whose state is "valid"; divide the invalid count by the valid count to compute each candidate's invalid ratio; compare the ratios and choose the candidate with the largest ratio as the replacement block. Write the pages of the replacement block into the replacement-block buffer according to their offsets within the block, read the pages whose state is "valid" into the replacement-block buffer in advance, and release the space those pages occupy in the page cache. Check whether the replacement-block buffer is full; if it is, go to step 7). If it is not full, traverse each page of the replacement block and check whether the current page holds data; if it does, move on to judging the next page. If it does not, check whether the tail of the new-page linked list points to some logical page in the page cache; if it does, write that data into the current page of the replacement block, delete the corresponding entry from the new-page linked list, release that logical page's space in the page cache, and return to traversing the next page of the replacement block until the traversal completes; otherwise go to step 7).
7) Send a replacement-ready message to the block-write module, release the space occupied in the page cache by the pages of the replacement block, then return to step 4) and re-execute the check of whether the page cache has free space.
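The per-block bookkeeping of step 1) and the "valid"-page release of step 5) can be sketched as follows. This is a minimal Python sketch under stated assumptions: the function name release_valid_pages, the (lpn, valid_flag) tuple representation, and the limit parameter are illustrative and not taken from the patent.

```python
from collections import OrderedDict

def release_valid_pages(page_cache, block_list, limit):
    """Walk the per-block "dirty" page lists from the tail of the
    physical-block list toward its head, releasing cached pages whose
    valid flag is 1 (their data is still valid on flash, so the cache
    copy can be dropped without a flush).

    block_list: OrderedDict, block number -> [(lpn, valid_flag)], head = MRU.
    Returns how many "valid" pages were released (at most `limit`)."""
    released = 0
    for blk in reversed(block_list):                 # block-list tail -> head
        kept = []
        for lpn, flag in reversed(block_list[blk]):  # dirty-list tail -> head
            if released < limit and flag == 1:
                page_cache.pop(lpn, None)            # free the cache slot
                released += 1                        # and drop the list node
            else:
                kept.append((lpn, flag))
        kept.reverse()                               # restore head-to-tail order
        block_list[blk] = kept
        if released >= limit:                        # preset value reached
            break
    return released
```

Pages whose flag is "0" are skipped, matching the rule that only pages still valid on flash may be dropped without being written back.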
As a further improvement of the above technical scheme of the present invention:
The detailed steps of step 3) are as follows:
3.1) Determine whether the logical page number of the read request hits in the page cache; on a miss, go to step 3.2); otherwise read the hit page data directly from the page cache, return it to the host, and go back to step 2).
3.2) Forward the IO request to the FTL, then check whether the FTL has returned page data; once it has, proceed to the next step.
3.3) Check whether the page cache has free space. If not, return the page data just read directly to the host and go back to step 2); if so, store the page data in the page cache and go to step 3.4).
3.4) Look up the FTL mapping table to obtain the physical block number corresponding to the page data just read, and use it to search the physical-block linked list. If the list does not contain a "dirty"-page linked list for that physical block, create one as the target "dirty"-page linked list, place it at the head of the physical-block linked list, and go to step 3.5); otherwise go directly to step 3.5).
3.5) Add the logical page of the page data just read to the head of the target "dirty"-page linked list, set its valid flag to valid ("1"), then return the page data to the host and go back to step 2).
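A minimal sketch of the read path in steps 3.1) to 3.5). The FTL callbacks ftl_read and ftl_block_of, the capacity parameter, and all class and function names are illustrative assumptions, not names from the patent.

```python
from collections import OrderedDict

class DirtyPage:
    def __init__(self, lpn, valid_flag):
        self.lpn = lpn                # logical page number
        self.valid_flag = valid_flag  # 1 = still valid on flash, 0 = invalid

class CacheState:
    def __init__(self):
        self.page_cache = {}             # lpn -> page data
        self.block_list = OrderedDict()  # block no -> [DirtyPage], head = MRU

def handle_read(state, lpn, ftl_read, ftl_block_of, capacity):
    if lpn in state.page_cache:                    # 3.1: hit, serve from cache
        return state.page_cache[lpn]
    data = ftl_read(lpn)                           # 3.2: forward to the FTL
    if len(state.page_cache) >= capacity:          # 3.3: no free space,
        return data                                #      return uncached
    state.page_cache[lpn] = data
    blk = ftl_block_of(lpn)                        # 3.4: find or create the
    if blk not in state.block_list:                #      block's "dirty" list
        state.block_list[blk] = []
    state.block_list.move_to_end(blk, last=False)  # list moves to the head
    state.block_list[blk].insert(0, DirtyPage(lpn, 1))  # 3.5: flag "1"
    return data
```

A second read of the same LPN is then served from the page cache without touching the FTL.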
The detailed steps of step 4) are as follows:
4.1) Determine whether the logical page number of the write request hits in the page cache; on a hit, go to step 4.2); on a miss, go to step 4.6).
4.2) Write the data of the write request into the page cache.
4.3) Look up the FTL mapping table to obtain the physical page number and physical block number corresponding to the written page data, and use the block number to search the physical-block linked list. If the list contains a "dirty"-page linked list for that physical block, go to step 4.4); otherwise go to step 4.5).
4.4) Move the target "dirty"-page linked list to the head of the physical-block linked list, move the target "dirty" page to the head of the target "dirty"-page linked list, set its valid flag to invalid ("0"), and go back to step 2).
4.5) Create a "dirty"-page linked list for the new physical block number as the target "dirty"-page linked list, move it to the head of the physical-block linked list, move the target "dirty" page to the head of that list, set its valid flag to invalid ("0"), and go back to step 2).
4.6) Check whether the page cache has free space; if not, go to step 5); otherwise go to step 4.7).
4.7) Write the write-request data directly into the page cache.
4.8) Look up the FTL mapping table to obtain the physical page number corresponding to the logical page newly written to the page cache, and use the physical block number to search the physical-page state table for the state of the corresponding physical page. If the state is "valid", go to step 4.9); if the state is "clean", go to step 4.10).
4.9) Add the logical page written to the page cache to the head of the target "dirty"-page linked list and mark its valid flag as invalid ("0"), move the target "dirty"-page linked list to the head of the physical-block linked list, change the state of the physical page to "invalid", and go back to step 2).
4.10) Add the logical page to the head of the new-page linked list and go back to step 2).
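The write path of steps 4.1) to 4.10) can be sketched similarly, covering the hit case and the miss-with-free-space case; returning False stands in for "go to step 5)" (block replacement). The ftl_map callback, the return convention, and all names are illustrative assumptions.

```python
from collections import OrderedDict, deque

VALID, INVALID, CLEAN = "valid", "invalid", "clean"

class DirtyPage:
    def __init__(self, lpn, valid_flag):
        self.lpn, self.valid_flag = lpn, valid_flag

class CacheState:
    def __init__(self, num_phys_pages):
        self.page_cache = {}                        # lpn -> data
        self.block_list = OrderedDict()             # block -> [DirtyPage]
        self.new_page_list = deque()                # LPNs never on flash
        self.page_state = [CLEAN] * num_phys_pages  # physical-page state table

def handle_write(state, lpn, data, ftl_map, capacity):
    """ftl_map(lpn) -> (physical page no, physical block no); hypothetical."""
    hit = lpn in state.page_cache                   # 4.1
    if not hit and len(state.page_cache) >= capacity:
        return False                                # 4.6: replacement needed
    state.page_cache[lpn] = data                    # 4.2 / 4.7
    ppn, blk = ftl_map(lpn)                         # 4.3 / 4.8: FTL mapping
    if hit or state.page_state[ppn] == VALID:
        # 4.4 / 4.5 / 4.9: (re)insert as a dirty page at the head, flag "0"
        pages = state.block_list.setdefault(blk, [])
        pages[:] = [p for p in pages if p.lpn != lpn]
        pages.insert(0, DirtyPage(lpn, 0))
        state.block_list.move_to_end(blk, last=False)
        if not hit:
            state.page_state[ppn] = INVALID         # flash copy now stale
    else:
        state.new_page_list.appendleft(lpn)         # 4.10: "clean" page
    return True
```

A write that hits simply refreshes the cached data and moves the page and its block list to the respective heads.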
The present invention has the following advantages:
1. The present invention uses a page-level cache holding multiple pages together with a replacement-block buffer, which improves the chance of page hits within limited cache space, speeds up reads and writes, and reduces the number of read/write operations.
2. When selecting the replacement block to write to the Flash medium, the present invention chooses a block containing as much dirty data and as little valid data as possible, which reduces the number of page copies before the block is erased.
3. When performing block replacement, the present invention fills the block to be written with logical pages that have never been written to the Flash medium (the logical pages pointed to by the new-page linked list), so that each block is written as a whole block of data. This effectively reduces the repeated erase operations caused by repeated writes, and in turn reduces the complexity and time overhead of garbage collection.
4. Targeting the inherent characteristics of solid-state disks compared with conventional magnetic hard disks, namely the erase-before-write mechanism, the low random-write performance caused by long write and erase times, and the short lifetime, the present invention establishes a page cache, a replacement-block buffer, a new-page linked list, a physical-block linked list, and a physical-page state table, and performs reads, writes, and block replacement according to them. It can effectively use the limited space of the page cache and increase the cache hit rate; writing data back causes few erase operations and as few page copies as possible; each block written to the Flash medium of the solid-state disk contains as many pages as possible, reducing erase operations and the page copies they cause; and subsequent garbage collection is simple. The present invention thus meets the three requirements on cache management algorithms stated above, and can increase the performance and extend the service life of the solid-state disk.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the overall method of the embodiment of the present invention.
Fig. 2 is a schematic flowchart of block replacement in the embodiment of the present invention.
Fig. 3 is a schematic flowchart of read-request execution in the embodiment of the present invention.
Fig. 4 is a schematic flowchart of write-request execution in the embodiment of the present invention.
Fig. 5 is a schematic diagram of the logical structure of a solid-state disk applying the embodiment of the present invention.
Fig. 6 is a schematic structural diagram of the cache management subsystem corresponding to the method of the embodiment of the present invention.
Embodiment
As shown in Fig. 1 and Fig. 2, the implementation steps of the cache management method for a solid-state disk of the present embodiment are as follows:
1) In the cache of the solid-state disk, establish in advance a page cache for buffering data and a replacement-block buffer for holding the replacement block; then establish in the cache a new-page linked list, a physical-block linked list, and a physical-page state table. The new-page linked list records the logical page numbers (LPNs) in the page cache that have not yet been written to the solid-state disk. The physical-block linked list records the physical block numbers corresponding to the logical pages in the page cache; each node of the physical-block linked list is a "dirty"-page linked list that records the data and valid flag of each "dirty" page belonging to that physical block. The physical-page state table records, for each physical page of the solid-state disk, one of the three states "valid", "invalid", or "clean".
2) Receive an IO request from the host; if it is a read request, go to step 3); if it is a write request, go to step 4).
3) Preferentially read the logical page from the page cache. On a page-cache miss, read the logical page from the solid-state disk through the FTL, store its data in the page cache, update the "dirty"-page linked list of the physical-block linked list, and mark the valid flag of the logical page as valid ("1"). Finally return the logical page data to the host and go back to step 2).
4) Determine whether the logical page of the write request hits in the page cache. On a hit, store the logical page in the page cache, update the "dirty"-page linked list of the physical-block linked list, mark the valid flag of the logical page as invalid ("0"), return the write result to the host, and go back to step 2). On a miss, further check whether the page cache has free space. If the page cache misses but has free space, store the logical page in the page cache and query the physical-page state table for the state of the physical page corresponding to the written logical page: if the state is "valid", update the "dirty"-page linked list of the physical-block linked list and change the state of that physical page in the physical-page state table to "invalid"; if the state is "clean", add the page to the new-page linked list. Finally return the write result to the host and go back to step 2). If the page cache misses and has no free space, go to step 5).
5) Initialize a "valid"-page counter to 0. Traverse each "dirty"-page linked list from the tail of the physical-block linked list toward its head, and traverse the "dirty" pages of each list from tail to head, using the valid flag to judge whether the current "dirty" page is valid. If it is valid, release the corresponding logical page from the page cache, delete the corresponding node from the "dirty"-page linked list, add 1 to the counter, and check whether the counter has reached a preset value; if so, go to step 7). If the valid flag of the current "dirty" page is invalid ("0"), move on to the next "dirty" page. After all "dirty" pages have been traversed, check whether the counter is 0: if it is 0, go to step 6); otherwise go to step 7).
6) Take the physical blocks in the rear half of the physical-block linked list as candidate replacement blocks. Look up the physical-page state table to obtain, for each candidate, the number of pages whose state is "invalid" and the number whose state is "valid"; divide the invalid count by the valid count to compute each candidate's invalid ratio; compare the ratios and choose the candidate with the largest ratio as the replacement block. Write the pages of the replacement block into the replacement-block buffer according to their offsets within the block, read the pages whose state is "valid" into the replacement-block buffer in advance, and release the space those pages occupy in the page cache. Check whether the replacement-block buffer is full; if it is, go to step 7). If it is not full, traverse each page of the replacement block and check whether the current page holds data; if it does, move on to judging the next page. If it does not, check whether the tail of the new-page linked list points to some logical page in the page cache; if it does, write that data into the current page of the replacement block, delete the corresponding entry from the new-page linked list, release that logical page's space in the page cache, and return to traversing the next page of the replacement block until the traversal completes; otherwise go to step 7).
7) Send a replacement-ready message to the block-write module, release the space occupied in the page cache by the pages of the replacement block, then return to step 4) and re-execute the check of whether the page cache has free space.
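The candidate selection of step 6), which considers only the rear (colder) half of the physical-block linked list and picks the block with the largest invalid ratio, can be sketched as follows. The function name, the flat page-state list, and the pages_per_block parameter are illustrative assumptions.

```python
def choose_replacement_block(block_order, page_state, pages_per_block):
    """block_order: block numbers ordered head (hot) -> tail (cold).
    page_state:  list of "valid" / "invalid" / "clean" per physical page.
    Returns the candidate block with the largest invalid/valid ratio."""
    candidates = block_order[len(block_order) // 2:]   # rear (colder) half
    def ratio(blk):
        pages = page_state[blk * pages_per_block:(blk + 1) * pages_per_block]
        invalid = pages.count("invalid")
        valid = pages.count("valid")
        # guard the division: a block with no valid pages needs no page
        # copies at all, so it is the best possible victim
        return invalid / valid if valid else float("inf")
    return max(candidates, key=ratio)
```

Choosing the largest ratio maximizes the dirty/invalid data evicted per replacement while minimizing the valid pages that must be copied into the replacement-block buffer first.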
In the present embodiment, a valid-flag value of "1" in the physical-block linked list indicates that the data of the logical page is valid, and a value of "0" indicates that it has become invalid. The present embodiment uses a page-level cache holding multiple pages together with a replacement-block buffer, which improves the chance of page hits within limited cache space, speeds up reads and writes, and reduces the number of read/write operations. When selecting the replacement block to write to the Flash medium, the present embodiment chooses a block containing as much dirty data and as little valid data as possible, which reduces the number of page copies before the block is erased. When performing block replacement, the present embodiment fills the block to be written with logical pages that have never been written to the Flash medium (the logical pages pointed to by the new-page linked list), so that each block is written as a whole block of data; this effectively reduces the repeated erase operations caused by repeated writes, and in turn reduces the complexity and time overhead of garbage collection. The present embodiment can therefore meet the three requirements on cache management algorithms stated above, increasing the performance and extending the service life of the solid-state disk.
As shown in Fig. 3, the detailed steps of step 3) in the present embodiment are as follows:
3.1) Determine whether the logical page number of the read request hits in the page cache; on a miss, go to step 3.2); otherwise read the hit page data directly from the page cache, return it to the host, and go back to step 2).
3.2) Forward the IO request to the FTL, then check whether the FTL has returned page data; once it has, proceed to the next step, otherwise continue waiting.
3.3) Check whether the page cache has free space. If not, return the page data just read directly to the host and go back to step 2); if so, store the page data in the page cache and go to step 3.4).
3.4) Look up the FTL mapping table to obtain the physical block number corresponding to the page data just read, and use it to search the physical-block linked list. If the list does not contain a "dirty"-page linked list for that physical block, create one as the target "dirty"-page linked list, place it at the head of the physical-block linked list, and go to step 3.5); otherwise go directly to step 3.5).
3.5) Add the logical page of the page data just read to the head of the target "dirty"-page linked list, set its valid flag to valid ("1"), then return the page data to the host and go back to step 2).
As shown in Figure 4, in the present embodiment the detailed steps of step 4) are as follows:
4.1) Judge whether the logical page number (LPN) of the write request hits in the page cache; if it hits, go to step 4.2); if it misses, go to step 4.6);
4.2) Write the data of the write request into the page cache;
4.3) Look up the FTL mapping table to obtain the physical page number and physical block number corresponding to the written page data, and search the physical block linked list by the physical block number; if the physical block linked list contains the "dirty" page linked list of the physical block corresponding to this physical block number, go to step 4.4); otherwise go to step 4.5);
4.4) Move the target "dirty" page linked list to the head of the physical block linked list, move the target "dirty" page to the head of the target "dirty" page linked list, set the valid flag of the target "dirty" page to invalid ("0"), and go back to step 2);
4.5) Create a "dirty" page linked list for the new physical block number as the target "dirty" page linked list, move it to the head of the physical block linked list, move the target "dirty" page to the head of the target "dirty" page linked list, set the valid flag of the target "dirty" page to invalid ("0"), and go back to step 2);
4.6) Judge whether the page cache has free space; if there is no free space, go to step 5); otherwise go to step 4.7);
4.7) Write the write-request data directly into the page cache;
4.8) Look up the FTL mapping table to obtain the physical page number corresponding to the logical page newly written into the page cache, and search the physical page state table by the physical block number to obtain the state of the corresponding physical page; if the state of the physical page is "valid", go to step 4.9); otherwise, if the state of the physical page is "clean", go to step 4.10);
4.9) Add the logical page written into the page cache to the head of the target "dirty" page linked list and mark its valid flag as invalid ("0"), move the target "dirty" page linked list to the head of the physical block linked list, change the state of the physical page to "invalid", and go back to step 2);
4.10) Add the logical page to the head of the new-page linked list and go back to step 2).
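The write path of steps 4.1) to 4.10) can be sketched in the same style. This is a hedged illustration under assumed structures (the FTL mapping and page state table are plain dictionaries); it is not the patented implementation, and all names are illustrative.

```python
from collections import OrderedDict, deque

class PageCacheWrite:
    def __init__(self, capacity, mapping, page_state):
        self.capacity = capacity
        self.pages = {}                  # LPN -> data
        self.block_list = OrderedDict()  # PBN -> deque of (lpn, flag); left = head
        self.new_pages = deque()         # LPNs with no flash copy yet
        self.mapping = mapping           # stand-in FTL mapping: LPN -> PBN
        self.page_state = page_state     # LPN -> "valid" / "invalid" / "clean"

    def write(self, lpn, data):
        hit = lpn in self.pages
        if not hit and len(self.pages) >= self.capacity:
            return False                 # 4.6) full: block replacement (step 5) needed
        self.pages[lpn] = data           # 4.2) / 4.7) write data into the page cache
        if hit or self.page_state[lpn] == "valid":
            pbn = self.mapping[lpn]      # 4.3) / 4.8) look up the FTL mapping table
            dirty = deque(p for p in self.block_list.get(pbn, ())
                          if p[0] != lpn)            # drop any old node for this page
            dirty.appendleft((lpn, 0))               # 4.4)/4.5)/4.9) flag invalid "0"
            self.block_list[pbn] = dirty
            self.block_list.move_to_end(pbn, last=False)  # dirty list to the head
            self.page_state[lpn] = "invalid"         # flash copy is now stale
        else:                                        # state "clean": first write
            self.new_pages.appendleft(lpn)           # 4.10) add to new-page list
        return True
```

An overwritten page is flagged "0" because the cached copy now differs from flash and must eventually be written back.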
As shown in Figure 5, a solid-state disk mainly consists of Flash chips, a cache, and a Flash Translation Layer (FTL); the FTL comprises a series of management routines that handle the read, write, and erase operations of the solid-state disk. In application, the present embodiment is realized by a cache management subsystem located in the FTL. The cache management subsystem is connected with the host and with the Flash Translation Layer (FTL) of the solid-state disk; it is the management routine within the FTL dedicated to caching the data of IO requests so as to improve IO performance. The cache management subsystem is connected with the host, the cache, and the other management routines in the FTL; it receives IO requests from the host, processes them, and returns the results to the host. The cache management subsystem obtains the IO requests for the solid-state disk from the host; the address accessed by an IO request is a logical page number (LPN). If an IO request is a read request and its LPN hits in the cache, the cache management subsystem returns the data in the cache directly to the host; otherwise it forwards the read request to the FTL to read the corresponding data from the Flash medium, and receives and caches the data read from the Flash medium of the solid-state disk from the FTL. If an IO request is a write request, it judges whether the LPN of the write request hits in the cache: if it hits, the cache management subsystem writes the data directly into the cache; if it misses, it writes the data into the cache and updates the FTL mapping table in the FTL. Before storing data read from the Flash medium in the cache, or writing the data of a write request into the cache, the cache management subsystem judges whether the cache is full; if the cache is full, pages in the cache are replaced out, written to the Flash medium, and the corresponding cache space is released for the new data; otherwise it stores the page data of the request in the cache.
As shown in Figure 6, the cache management subsystem corresponding to the method of the present embodiment consists of a page cache, a replacement block buffer, a new-page linked list, a physical block linked list, a physical page state table, a cache management module, a block replacement module, and a block write module.
The page cache stores cached data; its basic storage unit is the page, and the number of pages = cache size / page size.
The replacement block buffer stores the replacement block; its basic storage unit is the block, and one block contains N pages, where N = block size / page size. The new-page linked list records the LPNs of cached pages that are to be written to the Flash medium for the first time; the physical pages corresponding to these LPNs hold no data, and each node in the list is an LPN.
All pages managed by the new-page linked list can be written to any position in the Flash medium. The physical block linked list records the physical block numbers corresponding to the cached pages. Each node in the list is a "dirty" page linked list (a "dirty" page is a physical page that holds data, as opposed to a "clean" page), and each "dirty" page linked list records all the "dirty" pages of one physical block. Each node in a "dirty" page linked list is one "dirty" page carrying a valid flag S: S = 1 means the page is "valid", and S = 0 means the page is "invalid".
The physical page state table records the state of every physical page in the Flash medium; the state is one of "valid", "invalid", and "clean". "Valid" means the physical page holds data and the data are valid; "invalid" means the physical page holds data but the data are invalid; "clean" means the physical page holds no data.
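The bookkeeping structures described above can be sketched as plain Python containers. The block, page, and cache sizes below are illustrative assumptions, not values from the patent.

```python
from collections import OrderedDict, deque

# Assumed geometry, for illustration only
BLOCK_SIZE, PAGE_SIZE, CACHE_SIZE = 8192, 512, 4096
N = BLOCK_SIZE // PAGE_SIZE               # pages per block
NUM_CACHE_PAGES = CACHE_SIZE // PAGE_SIZE # number of pages = cache size / page size

page_cache = {}                      # LPN -> page data
replace_block_buffer = [None] * N    # one block's worth of pages awaiting write-out
new_page_list = deque()              # LPNs never yet written to the Flash medium
# head (left end) = most recently touched block; each value is that block's
# "dirty" page linked list of (lpn, valid_flag) pairs
physical_block_list = OrderedDict()
page_state = {}                      # physical page -> "valid" / "invalid" / "clean"
```

Keeping one "dirty" list per physical block is what lets eviction later gather a whole block's worth of dirty pages at once.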
The cache management module is connected with the host, the block replacement module, the new-page linked list, the physical block linked list, the physical page state table, the cache, and the FTL. It receives and processes the IO requests sent from the host. If an IO request is a write request, the cache management module first judges whether the logical page of the request hits in the page cache. If it hits, the corresponding data are written directly into the page cache, overwriting the old data in that page. If it misses, the module judges whether the page cache has free space; if there is none, it invokes the block replacement module, which selects data in the page cache to write into the replacement block buffer and writes the data in the replacement block buffer to the Flash medium in order to free cache space. The cache management module then caches the data of the write request in the page cache, looks up the FTL mapping table to obtain the physical page corresponding to the logical page, and looks up the physical page state table to obtain the state of that physical page. If the state of the physical page is "valid", the cache management module adds the logical page to the head of the "dirty" page linked list of that physical block in the physical block linked list with the flag of the list head set to 0, places that "dirty" page linked list at the head of the physical block linked list, and sets the state of the page in the physical page state table to "invalid". If the state of the physical page is "clean", the cache management module adds the logical page to the head of the new-page linked list. If the IO request is a read request, the cache management module first judges whether the logical page of the read request hits in the page cache; if it hits, the corresponding data are read and returned directly to the host; otherwise the read request is forwarded to the FTL to read the corresponding data from Flash. The cache management module also receives the data read from the FTL and stores them in the page cache; it adds the page to the head of the corresponding "dirty" page linked list in the physical block linked list with the flag of the list head set to 1, places that "dirty" page linked list at the head of the physical block linked list, and finally returns the page data to the host.
The block replacement module chooses the pages to be replaced out of the cache; it is connected with the cache management module, the physical block linked list, the block write module, and the FTL. On receiving a block replacement request from the cache management module, it starts selecting pages to evict. The block replacement module traverses each "dirty" page linked list in the physical block linked list from the tail of the physical block linked list toward the head; within each "dirty" page linked list it likewise traverses from tail to head looking for pages whose flag is "1", until N such pages are found or the traversal ends. For every page found whose flag is "1", it releases the corresponding page cache space and then deletes the corresponding node from the "dirty" page linked list. If the cache contains no "valid" page (a page flagged "1"), the blocks corresponding to the rear half of the nodes of the physical block linked list are taken as candidate replacement blocks. By comparing the invalid ratios of the candidate replacement blocks, the one with the largest invalid ratio is selected as the replacement block, where the invalid ratio of a physical block is the ratio of its number of "invalid" pages to its number of "valid" pages. The pages of the replacement block in the cache are then written into the replacement block buffer according to their in-block page offsets. Next, the block replacement module reads the remaining "valid" pages of the replacement block from the Flash medium into the replacement block buffer; finally, pages selected from the tail of the new-page linked list are used to fill up the replacement block buffer. Once the replacement block buffer is full, the block replacement module sends a replacement-ready message to the block write module.
The block write module is connected with the block replacement module and the FTL. It receives the replacement-ready message sent from the block replacement module and writes the replacement block buffer into the Flash medium through the FTL; it then sends a write-success message together with the original physical block number to the block erase module in the FTL, so that the original physical block is erased and the physical page state table is updated, and sends the write address to the FTL mapping table update module in the FTL to update the FTL mapping table.
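The hand-off performed by the block write module can be sketched as follows. The `write_block`, `erase_block`, and `update_mapping` calls are illustrative stand-ins for the FTL routines named in the text, not real APIs.

```python
def write_replacement_block(buffer, old_pbn, ftl):
    """Write the replacement block buffer to flash, then trigger erase and remap.

    buffer: list of LPNs (or None for empty slots), one entry per page offset.
    """
    new_pbn = ftl.write_block(buffer)     # write the buffer to flash via the FTL
    ftl.erase_block(old_pbn)              # erase the original physical block
    for offset, lpn in enumerate(buffer):
        if lpn is not None:
            ftl.update_mapping(lpn, new_pbn, offset)  # refresh the FTL mapping table
    return new_pbn
```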
In the present embodiment, the cache management module processes an IO read request as follows:
A1. The cache management module obtains the IO read request and, according to its LPN, judges whether the LPN hits one of the pages in the page cache; it proceeds to step A1.1 or A1.2 according to the result.
A1.1. If the LPN hits, the cache management module reads the page data from the cache and goes to step A3.
A1.2. If the LPN misses, the request is forwarded to the FTL; go to step A2.
A2. The cache management module judges whether page data have arrived from the FTL and proceeds to step A2.1 or A2.2 according to the result.
A2.1. If not, go to step A2 and keep waiting.
A2.2. If so, the cache management module receives the page data read from the FTL and judges whether the page cache has free space, proceeding to step A2.2.1 or A2.2.2 according to the result.
A2.2.1. If the page cache has no free space, go to step A3.
A2.2.2. If the page cache has free space, store the page data in the page cache and go to step A2.2.3.
A2.2.3. The cache management module searches the physical block linked list by the physical block number corresponding to the page and judges whether the physical block linked list contains the "dirty" page linked list of that physical block, proceeding to step A2.2.3.1 or A2.2.3.2 according to the result.
A2.2.3.1. If the physical block linked list does not contain the "dirty" page linked list of the physical block, create a "dirty" page linked list and place it at the head of the physical block linked list.
A2.2.3.2. Obtain the "dirty" page linked list of the physical block, add the page to its head, and set the flag of the list head to 1.
A3. The cache management module returns the obtained page data to the host.
A4. The read procedure ends.
The cache management module processes an IO write request as follows:
B1. According to the LPN of the IO write request, the cache management module judges whether the LPN hits one of the pages in the page cache, proceeding to step B1.1 or B1.2 according to the result.
B1.1. If the LPN hits, the data of the write request are written directly into the page cache; steps B1.1.1 and B1.1.2 are then executed in order.
B1.1.1. The cache management module looks up the FTL mapping table to obtain the physical page number and physical block number corresponding to the logical page.
B1.1.2. The cache management module judges whether the physical block linked list contains the "dirty" page linked list of that physical block. If so, it moves that "dirty" page linked list to the head of the physical block linked list, adds the page to the head of that "dirty" page linked list, and sets the valid flag of the page to "0". If not, it creates a new "dirty" page linked list, adds the page to its head, adds the "dirty" page linked list to the head of the physical block linked list, and sets the valid flag of the page to "0".
B1.2. If the LPN misses, judge whether the page cache has free space, proceeding to step B1.2.1 or B1.2.2 according to the result.
B1.2.1. If the page cache has no free space, the cache management module sends a block replacement request to the block replacement module and then goes back to step B1.2.
B1.2.2. If the page cache has free space, the data of the write request are written directly into the page cache; step B1.2.2.1 is then executed.
B1.2.2.1. The cache management module looks up the FTL mapping table to obtain the physical page number corresponding to the page newly written into the page cache, and looks up the physical page state table to obtain the state of that physical page. According to the state of the physical page, it proceeds to step B1.2.2.1.1 or B1.2.2.1.2.
B1.2.2.1.1. If the state of the physical page is "valid", the cache management module adds the logical page to the head of the "dirty" page linked list of that physical block in the physical block linked list, sets the valid flag of the list head to "0", places that "dirty" page linked list at the head of the physical block linked list, and sets the state of the page in the physical page state table to "invalid"; go to step B1.3.
B1.2.2.1.2. If the state of the physical page is "clean", the cache management module adds the logical page to the head of the new-page linked list; go to step B1.3.
B1.3. The write procedure ends.
The block replacement module processes a block replacement as follows:
C1. The block replacement module receives the block replacement request and initializes the "valid" page counter variable to 0.
C2. The block replacement module traverses each "dirty" page linked list in the physical block linked list from the tail of the physical block linked list toward the head.
C2.1. For each "dirty" page linked list, the block replacement module likewise traverses every page from tail to head.
C2.2. The block replacement module judges whether the flag of the page is "1". If it is, it releases the corresponding page cache space, deletes the corresponding node from the "dirty" page linked list, and adds 1 to the "valid" page counter. Otherwise, it goes to step C2.1 and continues the traversal until it is complete.
C2.3. Judge whether the "valid" page counter equals N; if so, go to step C11.
C3. If the "valid" page counter is not 0, go to step C11; if it is 0, go to step C4.
C4. The block replacement module takes the physical blocks in the rear half of the physical block linked list as candidate replacement blocks.
C5. For each candidate replacement block, execute step C5.1.
C5.1. By looking up the physical page state table, the block replacement module obtains the numbers of "invalid" and "valid" pages in the candidate replacement block and calculates its invalid ratio, where invalid ratio = number of invalid pages / number of valid pages.
C6. The block replacement module compares the invalid ratios of the candidate replacement blocks and chooses the candidate with the largest invalid ratio as the replacement block.
C7. The block replacement module writes the pages of the replacement block into the replacement block buffer according to their in-block page offsets.
C8. The block replacement module reads the remaining valid pages of the physical block into the replacement block buffer in advance, then releases the space these pages occupy in the page cache.
C9. The block replacement module judges whether the replacement block buffer is full; if it is, go to step C11.
C10. Judge whether the current page of the replacement block buffer holds data; if it does, move to the next page and re-execute step C10. If the current page holds no data, go to step C10.1.
C10.1. Obtain the page pointed to by the tail of the new-page linked list and judge whether it holds data. If it does, write the data into the current page of the replacement block buffer, delete the corresponding page from the new-page linked list, release the space of the page in the page cache, and go to step C10.2; if it holds no data, go to step C11.
C10.2. Move to the next page of the replacement block buffer and go to step C10.
C11. The block replacement module sends the replacement-ready message to the block write module and releases the cache space occupied by the pages of the replacement block.
C12. The block replacement ends.
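The replacement-block selection of steps C4 to C6 can be sketched as follows: among the rear half of the physical block linked list, pick the block with the largest invalid ratio (number of invalid pages divided by number of valid pages). This is a hedged sketch with illustrative names; the guard for a block with zero valid pages is an assumption, since the text does not define that case.

```python
def pick_replacement_block(block_order, page_state, pages_of):
    """block_order: PBNs from list head to tail.
    page_state: physical page -> "valid" / "invalid" / "clean".
    pages_of: PBN -> list of that block's physical pages."""
    candidates = block_order[len(block_order) // 2:]  # C4: rear half of the list

    def invalid_ratio(pbn):                           # C5.1: invalid / valid pages
        states = [page_state[p] for p in pages_of[pbn]]
        valid = states.count("valid")
        # assumed guard: a block with no valid pages is the cheapest to reclaim
        return states.count("invalid") / valid if valid else float("inf")

    return max(candidates, key=invalid_ratio)         # C6: largest invalid ratio
```

Preferring a high invalid ratio means the chosen block carries few valid pages to copy and many stale pages whose garbage collection the write-back makes unnecessary.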
The above are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical schemes falling under the idea of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those skilled in the art, improvements and modifications made without departing from the principles of the present invention should also be regarded as falling within the protection scope of the present invention.

Claims (3)

1. A cache management method for a solid-state disk, characterized in that its implementation steps are as follows:
1) establishing in advance, in the cache of the solid-state disk, a page cache for storing cached data and a replacement block buffer for storing a replacement block, and then establishing in the cache of the solid-state disk a new-page linked list, a physical block linked list, and a physical page state table; the new-page linked list records the logical page numbers (LPNs) in the page cache that are to be written to the solid-state disk; the physical block linked list records the physical block numbers corresponding to the logical pages in the page cache, each node of the physical block linked list being a "dirty" page linked list that records the "dirty" pages, each holding data and a valid flag, of one physical block; the physical page state table records, for each physical page of the solid-state disk, a state that is one of "valid", "invalid", and "clean";
2) receiving an IO request from the host; if the IO request is a read request, going to step 3); if the IO request is a write request, going to step 4);
3) preferentially reading the logical page from the page cache; when the page cache misses, reading the logical page from the solid-state disk through the FTL, storing the logical page data in the page cache, updating the "dirty" page linked lists of the physical block linked list, and marking the valid flag of said logical page as valid ("1"); finally returning the logical page data to the host and going back to step 2);
4) judging whether the logical page of the write request hits in the page cache; when it hits in the page cache, storing the logical page in the page cache, updating the "dirty" page linked lists of the physical block linked list, marking the valid flag of said logical page as invalid ("0"), returning the write result to the host, and going back to step 2); when the page cache misses, further judging whether the page cache has free space; when the page cache misses and the page cache has free space, storing the logical page in the page cache and querying the physical page state table for the state of the physical page corresponding to the written logical page: if the state is "valid", updating the "dirty" page linked lists of the physical block linked list and changing the state of the physical page corresponding to said logical page in the physical page state table to "invalid"; if the state is "clean", adding the logical page to the new-page linked list; finally returning the write result to the host and going back to step 2); when the page cache misses and the page cache has no free space, going to step 5);
5) initializing the "valid" page counter variable to 0; traversing each "dirty" page linked list of the physical block linked list from tail to head, and traversing each "dirty" page of each "dirty" page linked list from tail to head; judging from the valid flag whether the current "dirty" page is valid; if valid, releasing the corresponding logical page of the page cache, deleting the corresponding node of the "dirty" page from the linked list, adding 1 to the "valid" page counter, and judging whether the "valid" page counter equals the preset value; if so, going to step 7); if the valid flag of the current "dirty" page is invalid ("0"), judging the valid flag of the next "dirty" page; after all "dirty" pages have been traversed, judging whether the "valid" page counter is 0; if it is 0, going to step 6); if the "valid" page counter is not 0, going to step 7);
6) first taking the physical blocks of the rear half of the physical block linked list as candidate replacement blocks; looking up the physical page state table to obtain, for each candidate replacement block, the number of pages whose state is "invalid" and the number of pages whose state is "valid"; calculating the invalid ratio of each candidate replacement block by dividing its number of "invalid" pages by its number of "valid" pages; comparing the invalid ratios of the candidate replacement blocks and choosing the candidate replacement block with the largest invalid ratio as the replacement block; writing the pages of the replacement block into the replacement block buffer according to their in-block page offsets; reading the pages of the replacement block whose state is "valid" into the replacement block buffer in advance and releasing the space these pages occupy in the page cache; judging whether the replacement block buffer is full; if the replacement block buffer is full, going to step 7); if the replacement block buffer is not full, traversing each page of the replacement block buffer and judging whether the current page holds data; if the current page holds data, moving on to judge the next page; if the current page holds no data, judging whether the tail of the new-page linked list points to a logical page in the page cache; if the tail points to a logical page in the page cache, writing the data into the current page of the replacement block buffer, deleting the corresponding page from the new-page linked list, releasing the space of that logical page in the page cache, and returning to traverse the next page of the replacement block buffer until the traversal is complete; if there are no data, going to step 7);
7) sending the replacement-ready message to the block write module, releasing the cache space occupied by the pages of the replacement block in the page cache, and returning to step 4) to re-execute the judgment of whether the page cache has free space.
2. The cache management method for a solid-state disk according to claim 1, characterized in that the detailed steps of said step 3) are as follows:
3.1) judging whether the logical page number (LPN) of the read request hits in the page cache; if it misses, going to step 3.2); otherwise, reading the hit page data directly from the page cache, returning it to the host, and returning to step 2);
3.2) forwarding the IO request to the FTL, then judging whether the FTL has sent page data; when page data sent by the FTL are present, proceeding to the next step;
3.3) judging whether the page cache has free space; if there is no free space, returning the page data that was read directly to the host and returning to step 2); if there is free space, storing the page data that was read in the page cache and going to step 3.4);
3.4) looking up the FTL mapping table to obtain the physical block number corresponding to the page data that was read, and searching the physical block linked list by said physical block number; if the physical block linked list does not contain the "dirty" page linked list of the physical block corresponding to the physical block number, creating a "dirty" page linked list for the physical block number as the target "dirty" page linked list, placing it at the head of the physical block linked list, and going to step 3.5); otherwise, going directly to step 3.5);
3.5) adding the logical page of the page data that was read to the head of the target "dirty" page linked list, setting the valid flag of the logical page at the head of the target "dirty" page linked list to valid ("1"), returning the page data that was read to the host, and going back to step 2).
3. The cache management method for a solid-state disk according to claim 1 or 2, characterized in that the detailed steps of said step 4) are as follows:
4.1) judging whether the logical page number (LPN) of the write request hits in the page cache; if it hits, going to step 4.2); if it misses, going to step 4.6);
4.2) writing the data of the write request into the page cache;
4.3) looking up the FTL mapping table to obtain the physical page number and physical block number corresponding to the written page data, and searching the physical block linked list by the physical block number; if the physical block linked list contains the "dirty" page linked list of the physical block corresponding to the physical block number, going to step 4.4); otherwise going to step 4.5);
4.4) moving the target "dirty" page linked list to the head of the physical block linked list, moving the target "dirty" page to the head of the target "dirty" page linked list, setting the valid flag of the target "dirty" page to invalid ("0"), and going back to step 2);
4.5) creating a "dirty" page linked list for the new physical block number as the target "dirty" page linked list, moving it to the head of the physical block linked list, moving the target "dirty" page to the head of the target "dirty" page linked list, setting the valid flag of the target "dirty" page to invalid ("0"), and going back to step 2);
4.6) judging whether the page cache has free space; if there is no free space, going to step 5); otherwise going to step 4.7);
4.7) writing the write-request data directly into the page cache;
4.8) looking up the FTL mapping table to obtain the physical page number corresponding to the logical page newly written into the page cache, and searching the physical page state table by the physical block number to obtain the state of the corresponding physical page; if the state of said physical page is "valid", going to step 4.9); otherwise, if the state of said physical page is "clean", going to step 4.10);
4.9) adding the logical page written into the page cache to the head of the target "dirty" page linked list and marking its valid flag as invalid ("0"), moving the target "dirty" page linked list to the head of the physical block linked list, changing the state of said physical page to "invalid", and going back to step 2);
4.10) adding the logical page to the head of the new-page linked list and going back to step 2).
Publications (2)

CN103136121A (en), published 2013-06-05
CN103136121B (en), granted 2014-04-16


Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631940B (en) * 2013-12-09 2017-02-08 中国联合网络通信集团有限公司 Data writing method and data writing system applied to HBASE database
CN103729145B (en) * 2013-12-19 2017-06-16 华为技术有限公司 The processing method and processing device of I/O Request
TWI604307B (en) 2014-10-31 2017-11-01 慧榮科技股份有限公司 Data storage device and flash memory control method
CN104408126B (en) * 2014-11-26 2018-06-15 杭州华为数字技术有限公司 A kind of persistence wiring method of database, device and system
CN104503703B (en) * 2014-12-16 2018-06-05 华为技术有限公司 The treating method and apparatus of caching
CN104484290B (en) * 2014-12-19 2018-09-28 上海斐讯数据通信技术有限公司 The operating method of Flash and the operating device of Flash
CN105739926A (en) * 2016-01-29 2016-07-06 四川长虹电器股份有限公司 Method for improving starting speed of set top box and prolonging service life of Flash
CN107122124B (en) * 2016-02-25 2021-06-15 中兴通讯股份有限公司 Data processing method and device
CN105930282B (en) * 2016-04-14 2018-11-06 北京时代民芯科技有限公司 A kind of data cache method for NAND FLASH
US10534716B2 (en) * 2016-07-13 2020-01-14 Seagate Technology Llc Limiting access operations in a data storage device
CN106227471A (en) 2016-08-19 2016-12-14 深圳大普微电子科技有限公司 Solid state hard disc and the data access method being applied to solid state hard disc
CN107870732B (en) * 2016-09-23 2020-12-25 伊姆西Ip控股有限责任公司 Method and apparatus for flushing pages from solid state storage devices
CN106502592A (en) * 2016-10-26 2017-03-15 郑州云海信息技术有限公司 Solid state hard disc caching block recovery method and system
US10318423B2 (en) * 2016-12-14 2019-06-11 Macronix International Co., Ltd. Methods and systems for managing physical information of memory units in a memory device
CN107273306B (en) * 2017-06-19 2021-01-12 苏州浪潮智能科技有限公司 Data reading and writing method for solid state disk and solid state disk
CN108519858B (en) * 2018-03-22 2021-06-08 雷科防务(西安)控制技术研究院有限公司 Memory chip hardware hit method
CN108776614B (en) * 2018-05-03 2021-08-13 华为技术有限公司 Recovery method and device of storage block
CN108920096A (en) * 2018-06-06 2018-11-30 深圳忆联信息系统有限公司 A kind of data storage method of SSD, device, computer equipment and storage medium
CN109324979B (en) * 2018-08-20 2020-10-16 华中科技大学 Data cache dividing method and data distribution method of 3D flash memory solid-state disk system
CN111367823B (en) * 2018-12-25 2022-03-29 北京兆易创新科技股份有限公司 Method and device for writing effective data
CN109918316B (en) * 2019-02-26 2021-07-13 深圳忆联信息系统有限公司 Method and system for reducing FTL address mapping space
CN110032671B (en) * 2019-04-12 2021-06-18 北京百度网讯科技有限公司 User track information processing method and device, computer equipment and storage medium
CN110688325B (en) * 2019-09-05 2021-12-03 苏州浪潮智能科技有限公司 Garbage recycling method, device and equipment for solid state disk and storage medium
CN110555001B (en) * 2019-09-05 2021-05-28 腾讯科技(深圳)有限公司 Data processing method, device, terminal and medium
CN111124305B (en) * 2019-12-20 2021-08-31 浪潮电子信息产业股份有限公司 Solid state disk wear leveling method and device and computer readable storage medium
CN111190835B (en) * 2019-12-29 2022-06-10 北京浪潮数据技术有限公司 Data writing method, device, equipment and medium
CN111857601B (en) * 2020-07-30 2023-09-01 暨南大学 Solid-state disk cache management method based on garbage collection and channel parallelism
CN112148631B (en) * 2020-09-25 2023-05-26 华侨大学 Garbage collection method, equipment and storage medium based on cache perception
CN113326214B (en) * 2021-06-16 2023-06-16 统信软件技术有限公司 Page cache management method, computing device and readable storage medium
CN113655955B (en) * 2021-07-16 2023-05-16 深圳大普微电子科技有限公司 Cache management method, solid state disk controller and solid state disk
CN114265694A (en) * 2021-12-23 2022-04-01 国家电网有限公司信息通信分公司 Memory replacement method and device
CN114490430A (en) * 2021-12-27 2022-05-13 天翼云科技有限公司 Data processing method, device, equipment and medium
CN114510198B (en) * 2022-02-16 2023-06-30 北京中电华大电子设计有限责任公司 Method for improving erasing and writing efficiency of NVM (non-volatile memory)
CN114697325B (en) * 2022-03-15 2024-06-18 浪潮云信息技术股份公司 Automatic deployment method and operation and maintenance device for cluster virtualization resource management platform cache equipment
CN115048056B (en) * 2022-06-20 2024-07-16 河北工业大学 Solid state disk buffer area management method based on page replacement cost
CN115858421B (en) * 2023-03-01 2023-05-23 浪潮电子信息产业股份有限公司 Cache management method, device, equipment, readable storage medium and server

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1862475A (en) * 2005-07-15 2006-11-15 华为技术有限公司 Method for managing magnetic disk array buffer storage
CN101673188A (en) * 2008-09-09 2010-03-17 上海华虹Nec电子有限公司 Data access method for solid state disk
CN101819509A (en) * 2010-04-19 2010-09-01 清华大学深圳研究生院 Solid state disk read-write method
CN102156753A (en) * 2011-04-29 2011-08-17 中国人民解放军国防科学技术大学 Data page caching method for file system of solid-state hard disc
CN102768645A (en) * 2012-06-14 2012-11-07 国家超级计算深圳中心(深圳云计算中心) Solid state disk (SSD) prefetching method for mixed caching and SSD
CN102981963A (en) * 2012-10-30 2013-03-20 华中科技大学 Implementation method for flash translation layer of solid-state disc

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8161241B2 (en) * 2010-01-12 2012-04-17 International Business Machines Corporation Temperature-aware buffered caching for solid state storage


Also Published As

Publication number Publication date
CN103136121A (en) 2013-06-05

Similar Documents

Publication Publication Date Title
CN103136121B (en) Cache management method for solid-state disc
CN103425600B (en) Address mapping method in a kind of solid-state disk flash translation layer (FTL)
CN103777905B (en) Software-defined fusion storage method for solid-state disc
CN107066393B (en) Method for improving mapping information density in address mapping table
US10241919B2 (en) Data caching method and computer system
EP2939120B1 (en) Priority-based garbage collection for data storage systems
CN102981963B (en) A kind of implementation method of flash translation layer (FTL) of solid-state disk
KR101257691B1 (en) Memory controller and data management method
CN108121503B (en) NandFlash address mapping and block management method
CN103631536B (en) A kind of method utilizing the invalid data of SSD to optimize RAID5/6 write performance
CN105930282B (en) A kind of data cache method for NAND FLASH
CN107368436B (en) Flash memory cold and hot data separated storage method combined with address mapping table
CN105339910B (en) Virtual NAND capacity extensions in hybrid drive
CN104461393A (en) Mixed mapping method of flash memory
CN106528438A (en) Segmented junk recovery method for solid-state storage device
JP2011198133A (en) Memory system and controller
CN103455435A (en) Data writing method and device
CN109710541B (en) Optimization method for Greedy garbage collection of NAND Flash main control chip
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN102768645A (en) Solid state disk (SSD) prefetching method for mixed caching and SSD
CN107797772A (en) A kind of garbage retrieving system and method based on flash media
CN109240939B (en) Method for rapidly processing solid state disk TRIM
CN110515552A (en) A kind of method and system of storage device data no write de-lay
US20170160940A1 (en) Data processing method and apparatus of solid state disk
CN107402890A (en) A kind of data processing method and system based on Solid-state disc array and caching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170731

Address after: 12th Floor, Science and Technology Building, No. 233 Yuelu Avenue, Changsha, Hunan 410013

Patentee after: Hunan Industrial Technology Cooperative Innovation Research Institute

Address before: School of Computer Science, National University of Defense Technology of the Chinese People's Liberation Army, No. 47 Yanwachi Street, Changsha, Hunan 410073

Patentee before: National University of Defense Technology of People's Liberation Army of China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180808

Address after: 2nd Floor, CLP Software Park Headquarters, No. 39 Jianshan Road, Yuelu District, Changsha, Hunan 410205

Patentee after: HUNAN GREATWALL GALAXY TECHNOLOGY CO., LTD.

Address before: 12th Floor, Science and Technology Building, No. 233 Yuelu Avenue, Changsha, Hunan 410013

Patentee before: Hunan Industrial Technology Cooperative Innovation Research Institute

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180910

Address after: 2nd Floor, CLP Software Park Headquarters, No. 39 Jianshan Road, Yuelu District, Changsha, Hunan 410205

Patentee after: HUNAN GREATWALL GALAXY TECHNOLOGY CO., LTD.

Address before: 12th Floor, Science and Technology Building, No. 233 Yuelu Avenue, Changsha, Hunan 410013

Patentee before: Hunan Industrial Technology Cooperative Innovation Research Institute

TR01 Transfer of patent right