Summary of the invention
The technical problem to be solved by the present invention is to provide a cache management method for a solid-state disk that makes effective use of limited cache space, increases the cache hit rate, ensures that each block written to the flash medium contains as many pages as possible so as to reduce the page copy operations brought about by erase operations, and keeps subsequent garbage collection of the solid-state disk simple.
In order to solve the above technical problem, the technical solution adopted by the present invention is:
A cache management method for a solid-state disk, whose implementation steps are as follows:
1) Set up in advance, in the cache of the solid-state disk, a page cache for buffering data and a replacement block cache for storing the replacement block; then set up, in the cache of the solid-state disk, a new-page linked list, a physical block linked list, and a physical page state table. The new-page linked list records the logical page numbers (LPNs) in the page cache that have yet to be written to the solid-state disk. The physical block linked list records the physical block numbers corresponding to the logical pages in the page cache; each node of the physical block linked list is a dirty-page linked list that records all the dirty pages, together with their valid flags, belonging to that physical block. The physical page state table records, for each physical page of the solid-state disk, one of three states: "valid", "invalid", or "clean".
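The five structures set up in step 1) can be sketched as follows. This is a minimal illustrative layout only; the class name, the use of dicts and a deque, and the sizes are assumptions for the sketch, not prescribed by the invention.

```python
from collections import OrderedDict, deque

# Physical page states named in the text
VALID, INVALID, CLEAN = "valid", "invalid", "clean"

class CacheStructures:
    """Illustrative layout of the five structures from step 1)."""
    def __init__(self, cache_pages, total_phys_pages):
        self.page_cache = {}                # LPN -> buffered page data
        self.capacity = cache_pages         # page cache size in pages
        self.replace_block_cache = {}       # in-block offset -> page data
        self.new_page_list = deque()        # LPNs never yet written to flash
        # Each entry maps a physical block number to its dirty-page linked
        # list; each dirty page carries an LPN and a valid flag (1 or 0).
        self.phys_block_list = OrderedDict()
        # Physical page state table: one state per physical page
        self.page_state = [CLEAN] * total_phys_pages
```

Every physical page starts "clean" and is later marked "valid" or "invalid" as the method proceeds.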
2) Receive an IO request from the host. If the IO request is a read request, go to step 3); if it is a write request, go to step 4).
3) Preferentially read the logical page from the page cache. On a page cache miss, read the logical page from the solid-state disk through the FTL, store the logical page data in the page cache, update the dirty-page linked list of the physical block linked list, and mark the valid flag of the logical page as valid ("1"). Finally, return the logical page data to the host and go back to step 2).
4) Determine whether the logical page of the write request hits in the page cache. On a page cache hit, store the logical page in the page cache, update the dirty-page linked list of the physical block linked list, mark the valid flag of the logical page as invalid ("0"), return the write result to the host, and go back to step 2). On a page cache miss, further determine whether the page cache has free space. If the page cache misses but has free space, store the logical page in the page cache and query the physical page state table for the state of the physical page corresponding to the written logical page: if the state is "valid", update the dirty-page linked list of the physical block linked list and change the state of that physical page in the physical page state table to "invalid"; if the state is "clean", add the logical page to the new-page linked list. Finally, return the write result to the host and go back to step 2). If the page cache misses and has no free space, go to step 5).
5) Initialize a valid-page counter variable to 0. Traverse each dirty-page linked list of the physical block linked list from list tail to list head, and traverse each dirty page of every dirty-page linked list from list tail to list head, checking its valid flag. If the current dirty page is valid, release the corresponding logical page in the page cache, delete the corresponding node from the dirty-page linked list, increment the valid-page counter by 1, and check whether the counter equals a preset value; if it does, go to step 7). If the valid flag of the current dirty page is invalid ("0"), examine the valid flag of the next dirty page. After all dirty pages have been traversed, check whether the valid-page counter is 0: if it is 0, go to step 6); otherwise, go to step 7).
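The tail-to-head scan of step 5) can be sketched as below. The concrete representation (a list of (block, dirty-page list) pairs with the list head first, and dirty pages as small dicts) is an illustrative assumption.

```python
def scan_valid_pages(phys_block_list, page_cache, preset):
    """Free up to `preset` valid dirty pages, scanning dirty-page lists
    from list tail to list head. Returns the number of pages freed
    (0 means fall through to block replacement, step 6)."""
    freed = 0
    for block, dirty_pages in reversed(phys_block_list):   # list tail first
        for page in list(reversed(dirty_pages)):           # tail of each dirty list
            if page["valid"] == 1:
                page_cache.pop(page["lpn"], None)          # release cached page
                dirty_pages.remove(page)                   # delete the list node
                freed += 1
                if freed == preset:
                    return freed                           # enough space: step 7)
            # invalid pages ("0") are simply skipped
    return freed
```

Valid pages are safe to drop because their flash copies are up to date, which is why the scan releases them without writing anything back.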
6) Take the blocks in the second half of the physical block linked list as candidate replacement blocks. Look up the physical page state table to obtain, for each candidate replacement block, the number of pages whose state is "invalid" and the number of pages whose state is "valid", and compute its invalid ratio by dividing the invalid page count by the valid page count. Compare the invalid ratios of the candidate replacement blocks and choose the candidate with the largest invalid ratio as the replacement block. Write the pages of the replacement block into the replacement block cache according to their in-block page offsets, read the pages of this replacement block whose state is "valid" into the replacement block cache in advance, and release the space those pages occupy in the page cache. Check whether the replacement block cache is full: if it is full, go to step 7); if it is not full, traverse each page of the replacement block and check whether the current page holds data. If the current page holds data, move on to examine the next page. If the current page holds no data, check whether the list tail of the new-page linked list points to some logical page in the page cache: if it does, write that data into the current page of the replacement block, delete the corresponding page from the new-page linked list, release the space of that logical page in the page cache, and return to traversing the next page of the replacement block until the traversal completes; if it does not, go to step 7).
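The invalid-ratio selection at the start of step 6) can be sketched as follows. The function name, the `block_pages` mapping, and the treatment of a block with zero valid pages (taken as an infinite ratio, a detail the text does not spell out) are all assumptions of this sketch.

```python
def choose_replacement_block(phys_block_list, block_pages, page_state):
    """phys_block_list: block numbers ordered head to tail;
    block_pages: block number -> physical page numbers in that block;
    page_state: physical page number -> 'valid' | 'invalid' | 'clean'."""
    candidates = phys_block_list[len(phys_block_list) // 2:]  # second half
    best, best_ratio = None, -1.0
    for block in candidates:
        states = [page_state[p] for p in block_pages[block]]
        invalid = states.count("invalid")
        valid = states.count("valid")
        # A block with no valid pages is the cheapest possible victim,
        # so treat its ratio as infinite (an assumption of this sketch).
        ratio = invalid / valid if valid else float("inf")
        if ratio > best_ratio:
            best, best_ratio = block, ratio
    return best
```

Maximizing invalid/valid picks the block whose eviction discards the most stale data while forcing the fewest valid-page copies, matching the stated goal of reducing pre-erase page copies.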
7) Send a replacement-ready message to the block writing module, release the space occupied in the page cache by the pages of the replacement block, and return to step 4) to re-check whether the page cache has free space.
As further improvements of the above technical solution of the present invention:
The detailed steps of step 3) are as follows:
3.1) Determine whether the logical page number of the read request hits in the page cache. If it misses, go to step 3.2); otherwise, read the hit page data directly from the page cache, return it to the host, and go back to step 2).
3.2) Forward the IO request to the FTL, then check whether the FTL has sent back page data; once page data arrives from the FTL, proceed to the next step.
3.3) Determine whether the page cache has free space. If there is no free space, return the read page data directly to the host and go back to step 2); if there is free space, store the read page data in the page cache and go to step 3.4).
3.4) Look up the FTL mapping table to obtain the physical block number corresponding to the read page data, and search the physical block linked list by that physical block number. If the physical block linked list does not contain the dirty-page linked list of the physical block with that number, create a dirty-page linked list for that physical block number as the target dirty-page linked list, place it at the head of the physical block linked list, and go to step 3.5); otherwise, go directly to step 3.5).
3.5) Add the logical page of the read page data to the head of the target dirty-page linked list, set the valid flag of the logical page at the head of the target dirty-page linked list to valid ("1"), return the read page data to the host, and go back to step 2).
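Steps 3.1) to 3.5) can be condensed into the following sketch. The callables `ftl_read` and `ftl_map` stand in for the FTL read path and mapping-table lookup named in the text, and the cache layout is the same illustrative one assumed above; none of these names come from the source.

```python
from collections import OrderedDict

def handle_read(lpn, cache, ftl_read, ftl_map):
    """Read path of steps 3.1)-3.5); returns the page data for `lpn`."""
    if lpn in cache.pages:                                # 3.1: page cache hit
        return cache.pages[lpn]
    data = ftl_read(lpn)                                  # 3.2: forward to FTL
    if len(cache.pages) >= cache.capacity:                # 3.3: no free space
        return data                                       # return without caching
    cache.pages[lpn] = data                               # cache the read page
    block = ftl_map(lpn)                                  # 3.4: physical block number
    dirty = cache.phys_block_list.setdefault(block, [])   # create dirty list if absent
    cache.phys_block_list.move_to_end(block, last=False)  # move list to the head
    dirty.insert(0, {"lpn": lpn, "valid": 1})             # 3.5: head node, flag = 1
    return data
```

Marking a freshly read page valid ("1") records that its cached copy agrees with flash, so step 5) may later evict it without a write-back.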
The detailed steps of step 4) are as follows:
4.1) Determine whether the logical page number of the write request hits in the page cache. If it hits, go to step 4.2); if it misses, go to step 4.6).
4.2) Write the data of the write request into the page cache.
4.3) Look up the FTL mapping table to obtain the physical page number and physical block number corresponding to the written page data, and search the physical block linked list by the physical block number. If the physical block linked list contains the dirty-page linked list of the physical block with that number, go to step 4.4); otherwise, go to step 4.5).
4.4) Move the target dirty-page linked list to the head of the physical block linked list, move the target dirty page to the head of the target dirty-page linked list, set the valid flag of the target dirty page to invalid ("0"), and go back to step 2).
4.5) Create a dirty-page linked list for the new physical block number as the target dirty-page linked list, move it to the head of the physical block linked list, move the target dirty page to the head of the target dirty-page linked list, set the valid flag of the target dirty page to invalid ("0"), and go back to step 2).
4.6) Determine whether the page cache has free space. If there is no free space, go to step 5); otherwise, go to step 4.7).
4.7) Write the write request data directly into the page cache.
4.8) Look up the FTL mapping table to obtain the physical page number corresponding to the logical page newly written into the page cache, and search the physical page state table by the physical block number to obtain the state of the corresponding physical page. If the state of the physical page is "valid", go to step 4.9); if the state of the physical page is "clean", go to step 4.10).
4.9) Add the logical page written into the page cache to the head of the target dirty-page linked list and mark its valid flag as invalid ("0"), move the target dirty-page linked list to the head of the physical block linked list, change the state of the physical page to "invalid", and go back to step 2).
4.10) Add the logical page to the head of the new-page linked list and go back to step 2).
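Steps 4.1) to 4.10) can likewise be condensed into a sketch. Eviction (steps 5-7) is abstracted as the callable `evict`, `ftl_map_block` stands in for the FTL mapping-table lookup, and `page_state` is keyed directly by LPN here as a simplification (the text keys the state table by physical page number); all of these are assumptions of this sketch.

```python
def handle_write(lpn, data, cache, ftl_map_block, page_state, evict):
    """Write path of steps 4.1)-4.10)."""
    if lpn in cache.pages:                                   # 4.1: hit
        cache.pages[lpn] = data                              # 4.2: overwrite in cache
        block = ftl_map_block(lpn)                           # 4.3: physical block
        dirty = cache.phys_block_list.setdefault(block, [])  # 4.4/4.5: target list
        cache.phys_block_list.move_to_end(block, last=False) # move list to head
        dirty.insert(0, {"lpn": lpn, "valid": 0})            # flag = 0 (invalid)
        return
    while len(cache.pages) >= cache.capacity:                # 4.6: no free space
        evict(cache)                                         # steps 5)-7)
    cache.pages[lpn] = data                                  # 4.7: write into cache
    state = page_state.get(lpn, "clean")                     # 4.8: page state
    if state == "valid":                                     # 4.9
        block = ftl_map_block(lpn)
        dirty = cache.phys_block_list.setdefault(block, [])
        cache.phys_block_list.move_to_end(block, last=False)
        dirty.insert(0, {"lpn": lpn, "valid": 0})
        page_state[lpn] = "invalid"                          # flash copy now stale
    elif state == "clean":                                   # 4.10
        cache.new_page_list.insert(0, lpn)                   # never yet on flash
```

The flag "0" on a written page records that the flash copy is stale, so such a page must be written back rather than silently dropped.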
The present invention has the following advantages:
1. The present invention adopts a multi-page page cache and a single replacement block cache, which improves the chance of page hits within limited cache space, speeds up reads and writes, and reduces the number of read and write operations.
2. When selecting a replacement block to be written to the flash medium of the solid-state disk, the present invention chooses a block containing as much dirty data and as little valid data as possible, thereby reducing the number of page copies required before block erasure.
3. When performing block replacement, the present invention fills up the block to be written with logical pages that have never been written to the flash medium of the solid-state disk (the logical pages pointed to by the new-page linked list), so that each block write is a whole-block write. This effectively reduces the repeated erase operations caused by repeated writes, and in turn reduces the complexity and time overhead of garbage collection.
4. The present invention addresses the inherent characteristics of solid-state disks compared with conventional magnetic hard disks: the erase-before-write mechanism, the poor random write performance caused by long write and erase times, and the short lifetime of solid-state disks. By setting up the page cache, the replacement block cache, the new-page linked list, the physical block linked list, and the physical page state table, and performing reads, writes, and block replacement on the basis of these structures, the present invention makes effective use of the limited space of the page cache, increases the cache hit rate, keeps the number of erase operations caused by writing data low, and minimizes the number of page copies, so that each block written to the flash medium of the solid-state disk contains as many pages as possible, reducing erase operations and the page copy operations they bring about, and keeping subsequent garbage collection simple. The present invention can therefore meet the aforementioned three requirements of a cache management algorithm, improve the performance of the solid-state disk, and extend its service life.
Embodiment
As shown in Fig. 1 and Fig. 2, the implementation steps of the cache management method for a solid-state disk in this embodiment are as follows:
1) Set up in advance, in the cache of the solid-state disk, a page cache for buffering data and a replacement block cache for storing the replacement block; then set up, in the cache of the solid-state disk, a new-page linked list, a physical block linked list, and a physical page state table. The new-page linked list records the logical page numbers (LPNs) in the page cache that have yet to be written to the solid-state disk. The physical block linked list records the physical block numbers corresponding to the logical pages in the page cache; each node of the physical block linked list is a dirty-page linked list that records all the dirty pages, together with their valid flags, belonging to that physical block. The physical page state table records, for each physical page of the solid-state disk, one of three states: "valid", "invalid", or "clean".
2) Receive an IO request from the host. If the IO request is a read request, go to step 3); if it is a write request, go to step 4).
3) Preferentially read the logical page from the page cache. On a page cache miss, read the logical page from the solid-state disk through the FTL, store the logical page data in the page cache, update the dirty-page linked list of the physical block linked list, and mark the valid flag of the logical page as valid ("1"). Finally, return the logical page data to the host and go back to step 2).
4) Determine whether the logical page of the write request hits in the page cache. On a page cache hit, store the logical page in the page cache, update the dirty-page linked list of the physical block linked list, mark the valid flag of the logical page as invalid ("0"), return the write result to the host, and go back to step 2). On a page cache miss, further determine whether the page cache has free space. If the page cache misses but has free space, store the logical page in the page cache and query the physical page state table for the state of the physical page corresponding to the written logical page: if the state is "valid", update the dirty-page linked list of the physical block linked list and change the state of that physical page in the physical page state table to "invalid"; if the state is "clean", add the logical page to the new-page linked list. Finally, return the write result to the host and go back to step 2). If the page cache misses and has no free space, go to step 5).
5) Initialize a valid-page counter variable to 0. Traverse each dirty-page linked list of the physical block linked list from list tail to list head, and traverse each dirty page of every dirty-page linked list from list tail to list head, checking its valid flag. If the current dirty page is valid, release the corresponding logical page in the page cache, delete the corresponding node from the dirty-page linked list, increment the valid-page counter by 1, and check whether the counter equals a preset value; if it does, go to step 7). If the valid flag of the current dirty page is invalid ("0"), examine the valid flag of the next dirty page. After all dirty pages have been traversed, check whether the valid-page counter is 0: if it is 0, go to step 6); otherwise, go to step 7).
6) Take the blocks in the second half of the physical block linked list as candidate replacement blocks. Look up the physical page state table to obtain, for each candidate replacement block, the number of pages whose state is "invalid" and the number of pages whose state is "valid", and compute its invalid ratio by dividing the invalid page count by the valid page count. Compare the invalid ratios of the candidate replacement blocks and choose the candidate with the largest invalid ratio as the replacement block. Write the pages of the replacement block into the replacement block cache according to their in-block page offsets, read the pages of this replacement block whose state is "valid" into the replacement block cache in advance, and release the space those pages occupy in the page cache. Check whether the replacement block cache is full: if it is full, go to step 7); if it is not full, traverse each page of the replacement block and check whether the current page holds data. If the current page holds data, move on to examine the next page. If the current page holds no data, check whether the list tail of the new-page linked list points to some logical page in the page cache: if it does, write that data into the current page of the replacement block, delete the corresponding page from the new-page linked list, release the space of that logical page in the page cache, and return to traversing the next page of the replacement block until the traversal completes; if it does not, go to step 7).
7) Send a replacement-ready message to the block writing module, release the space occupied in the page cache by the pages of the replacement block, and return to step 4) to re-check whether the page cache has free space.
In this embodiment, a valid flag value of "1" in the physical block linked list indicates that the data of the logical page is valid, and a value of "0" indicates that the data of the logical page is invalid. This embodiment adopts a multi-page page cache and a single replacement block cache, which improves the chance of page hits within limited cache space, speeds up reads and writes, and reduces the number of read and write operations. When selecting a replacement block to be written to the flash medium of the solid-state disk, this embodiment chooses a block containing as much dirty data and as little valid data as possible, thereby reducing the number of page copies required before block erasure. When performing block replacement, this embodiment fills up the block to be written with logical pages that have never been written to the flash medium of the solid-state disk (the logical pages pointed to by the new-page linked list), so that each block write is a whole-block write; this effectively reduces the repeated erase operations caused by repeated writes and in turn reduces the complexity and time overhead of garbage collection. This embodiment can therefore meet the aforementioned three requirements of a cache management algorithm, improve the performance of the solid-state disk, and extend its service life.
As shown in Fig. 3, the detailed steps of step 3) in this embodiment are as follows:
3.1) Determine whether the logical page number of the read request hits in the page cache. If it misses, go to step 3.2); otherwise, read the hit page data directly from the page cache, return it to the host, and go back to step 2).
3.2) Forward the IO request to the FTL, then check whether the FTL has sent back page data; once page data arrives from the FTL, proceed to the next step, otherwise continue waiting.
3.3) Determine whether the page cache has free space. If there is no free space, return the read page data directly to the host and go back to step 2); if there is free space, store the read page data in the page cache and go to step 3.4).
3.4) Look up the FTL mapping table to obtain the physical block number corresponding to the read page data, and search the physical block linked list by that physical block number. If the physical block linked list does not contain the dirty-page linked list of the physical block with that number, create a dirty-page linked list for that physical block number as the target dirty-page linked list, place it at the head of the physical block linked list, and go to step 3.5); otherwise, go directly to step 3.5).
3.5) Add the logical page of the read page data to the head of the target dirty-page linked list, set the valid flag of the logical page at the head of the target dirty-page linked list to valid ("1"), return the read page data to the host, and go back to step 2).
As shown in Fig. 4, the detailed steps of step 4) in this embodiment are as follows:
4.1) Determine whether the logical page number of the write request hits in the page cache. If it hits, go to step 4.2); if it misses, go to step 4.6).
4.2) Write the data of the write request into the page cache.
4.3) Look up the FTL mapping table to obtain the physical page number and physical block number corresponding to the written page data, and search the physical block linked list by the physical block number. If the physical block linked list contains the dirty-page linked list of the physical block with that number, go to step 4.4); otherwise, go to step 4.5).
4.4) Move the target dirty-page linked list to the head of the physical block linked list, move the target dirty page to the head of the target dirty-page linked list, set the valid flag of the target dirty page to invalid ("0"), and go back to step 2).
4.5) Create a dirty-page linked list for the new physical block number as the target dirty-page linked list, move it to the head of the physical block linked list, move the target dirty page to the head of the target dirty-page linked list, set the valid flag of the target dirty page to invalid ("0"), and go back to step 2).
4.6) Determine whether the page cache has free space. If there is no free space, go to step 5); otherwise, go to step 4.7).
4.7) Write the write request data directly into the page cache.
4.8) Look up the FTL mapping table to obtain the physical page number corresponding to the logical page newly written into the page cache, and search the physical page state table by the physical block number to obtain the state of the corresponding physical page. If the state of the physical page is "valid", go to step 4.9); if the state of the physical page is "clean", go to step 4.10).
4.9) Add the logical page written into the page cache to the head of the target dirty-page linked list and mark its valid flag as invalid ("0"), move the target dirty-page linked list to the head of the physical block linked list, change the state of the physical page to "invalid", and go back to step 2).
4.10) Add the logical page to the head of the new-page linked list and go back to step 2).
As shown in Fig. 5, the solid-state disk consists mainly of flash chips, a cache, and a flash translation layer (FTL); the FTL comprises a series of management routines that handle the read, write, and erase operations of the solid-state disk. In application, this embodiment is realized by a cache management subsystem located in the flash translation layer (FTL). The cache management subsystem is connected to the host and to the Flash Translation Layer (FTL) of the solid-state disk; it is the management routine within the FTL dedicated to caching the data of IO requests so as to improve IO performance. The cache management subsystem is connected to the host, the cache, and the other management routines in the FTL; it receives IO requests from the host, processes them, and returns the results to the host. The cache management subsystem obtains the solid-state disk's IO requests from the host, where the address accessed by an IO request is a logical page number. If the IO request is a read request and its logical page number hits in the cache, the cache management subsystem returns the cached data directly to the host; otherwise it forwards the read request to the FTL to read the corresponding data from the flash medium, then receives from the FTL and caches the data read from the flash medium of the solid-state disk. If the IO request is a write request, the subsystem determines whether the logical page number of the write request hits in the cache: if it hits, the cache management subsystem writes the data directly into the cache; if it misses, it writes the data into the cache and updates the FTL mapping table in the FTL. Before storing data read from the flash medium into the cache, or before writing the data of a write request into the cache, the cache management subsystem checks whether the cache is full. If the cache is full, it replaces pages out of the cache, writes them to the flash medium, and releases the corresponding cache space for new data; otherwise it stores the requested page data in the cache.
As shown in Fig. 6, the cache management subsystem corresponding to the method of this embodiment consists of the page cache, the replacement block cache, the new-page linked list, the physical block linked list, the physical page state table, a cache management module, a block replacement module, and a block writing module.
The page cache is used for buffering data; its basic storage unit is the page, and the number of pages = cache size / page size.
The replacement block cache is used for storing the replacement block; its basic storage unit is the block, and one block contains N pages, where N = block size / page size. The new-page linked list records the logical page numbers in the cache that are to be written to the flash medium for the first time; the physical pages corresponding to these logical page numbers hold no data. Each node of the list is a logical page number.
All pages managed by the new-page linked list can be written to any position in the flash medium. The physical block linked list records the physical block numbers corresponding to the pages in the cache. Each node of the list is a dirty-page linked list (a "dirty" page is a physical page that holds data; otherwise the page is "clean"), and the dirty-page linked list records all the dirty pages in that physical block. Each node of a dirty-page linked list is a dirty page and carries a valid flag S: S = 1 means the page is "valid", and S = 0 means the page is "invalid".
The physical page state table records the state of each physical page in the flash medium; a state is one of "valid", "invalid", or "clean". "Valid" means the physical page holds data and the data is valid; "invalid" means the physical page holds data but the data is invalid; "clean" means the physical page holds no data.
The cache management module is connected to the host, the block replacement module, the new-page linked list, the physical block linked list, the physical page state table, the cache, and the FTL. The cache management module receives and processes the IO requests sent by the host. If the IO request is a write request, the cache management module first determines whether the logical page of the request hits in the page cache. If it hits, it writes the corresponding data directly into the page cache, overwriting the old data in that page. If it misses, it determines whether the page cache has free space; if there is no free space, it invokes the block replacement module, which selects data from the page cache to write into the replacement block cache and writes the data of the replacement block cache to the flash medium so as to free cache space. The cache management module then caches the data of the write request in the page cache, looks up the FTL mapping table to obtain the physical page corresponding to the logical page, and looks up the physical page state table to obtain the state of that physical page. If the state of the physical page is "valid", the cache management module adds the logical page to the head of the dirty-page linked list of that physical block in the physical block linked list, sets the valid flag at the list head to 0, moves that dirty-page linked list to the head of the physical block linked list, and at the same time sets the state of that page in the physical page state table to "invalid". If the state of the physical page is "clean", the cache management module adds the logical page to the head of the new-page linked list. If the IO request is a read request, the cache management module first determines whether the logical page of the read request hits in the page cache; if it hits, it reads the corresponding data directly and returns it to the host; otherwise it forwards the read request to the FTL to read the corresponding data from flash. The cache management module also receives the data read back from the FTL and buffers it in the page cache, adds the page to the head of the corresponding dirty-page linked list in the physical block linked list, marks the list-head valid flag as 1, moves that dirty-page linked list to the head of the physical block linked list, and finally returns the page data to the host.
The block replacement module, which chooses the pages to be replaced out of the cache, is connected with the cache management module, the physical block linked list, the block write module, and the FTL. On receiving a block replacement request from the cache management module, it begins choosing pages to replace out of the cache. The block replacement module traverses each "dirty" page linked list in the physical block linked list, starting from the tail of the physical block linked list toward its head; within each "dirty" page linked list it likewise traverses from tail to head looking for pages flagged "1", until N such pages have been found or the traversal ends. For each page flagged "1" that is found, the corresponding page cache space is released and the corresponding node is deleted from the "dirty" page linked list. If the cache contains no "valid" page (that is, no page flagged "1"), the blocks corresponding to the rear half of the nodes in the physical block linked list are taken as candidate replacement blocks. By comparing the invalid ratios of the candidate replacement blocks, the block with the largest invalid ratio is selected as the replacement block, where the invalid ratio of a physical block is the ratio of the number of "invalid" pages to the number of "valid" pages in that block. The cached pages of the replacement block are then written into the replacement block buffer at their in-block page offsets. Next, the block replacement module reads the other "valid" pages of the replacement block from the Flash medium into the replacement block buffer, and finally fills the remaining space of the replacement block buffer with pages selected from the tail of the new-page linked list. Once the replacement block buffer is full, the block replacement module sends a replacement-ready message to the block write module.
The block write module is connected with the block replacement module and the FTL. It receives the replacement-ready message sent by the block replacement module, writes the replacement block buffer into the Flash medium through the FTL, sends a write-success message together with the original physical block number to the block erase module in the FTL so that the original physical block is erased and the physical page state table is updated, and sends the write address to the FTL mapping table update module in the FTL so that the FTL mapping table is updated.
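As a minimal sketch, the hand-off from the block write module to the FTL might look as follows. The `StubFTL` object and its three methods are hypothetical stand-ins for the FTL's block-write, block-erase, and mapping-update interfaces, which the text does not define.

```python
class StubFTL:
    """Hypothetical FTL front end; records the order of operations."""
    def __init__(self):
        self.ops = []
    def program_block(self, buffer):       # write a whole block at once
        self.ops.append("program")
        return 42                          # new physical block number
    def erase_block(self, block):          # block erase module
        self.ops.append(("erase", block))
    def update_mapping(self, old, new):    # FTL mapping table update module
        self.ops.append(("map", old, new))

def flush_replacement_block(ftl, repl_buffer, old_block):
    # Write the replacement block buffer into Flash as one block write,
    # then have the old block erased and the mapping table updated.
    new_block = ftl.program_block(repl_buffer)
    ftl.erase_block(old_block)
    ftl.update_mapping(old_block, new_block)
    return new_block
```

Because the buffer already holds a full block's worth of pages, the write reaches Flash as one sequential block program rather than scattered page writes, which is what keeps the subsequent garbage collection simple.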
In the present embodiment, the flow by which the cache management module processes an IO read request is as follows:
A1. The cache management module obtains the IO read request and, according to its logical page number, judges whether this logical page number hits a page of the page cache; according to the result, it enters step A1.1 or A1.2.
A1.1. If the logical page number hits, the cache management module reads the page data from the cache and goes to step A3.
A1.2. If the logical page number misses, the request is forwarded to the FTL; go to step A2.
A2. The cache management module judges whether page data have arrived from the FTL and, according to the result, enters step A2.1 or A2.2.
A2.1. If no data have arrived, go to step A2 and continue waiting.
A2.2. If data have arrived, the cache management module receives the page data read from the FTL and judges whether the page cache has free space; according to the result, it enters step A2.2.1 or A2.2.2.
A2.2.1. If the page cache has no free space, go to step A3.
A2.2.2. If the page cache has free space, store the page data in the page cache and go to step A2.2.3.
A2.2.3. The cache management module searches the physical block linked list according to the physical block number corresponding to this page and judges whether the physical block linked list contains the "dirty" page linked list of this physical block; according to the result, it enters step A2.2.3.1 or A2.2.3.2.
A2.2.3.1. If the physical block linked list does not contain the "dirty" page linked list of this physical block, create one and place it at the head of the physical block linked list.
A2.2.3.2. Obtain the "dirty" page linked list of this physical block, add this page to the head of that list, and set the page's flag to 1.
A3. The cache management module returns the obtained page data to the host.
A4. The read process ends.
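Steps A1 through A4 can be condensed into the following Python sketch. The `StubFTL`, the cache capacity, and the pages-per-block figure are illustrative assumptions; the head-of-list bookkeeping mirrors step A2.2.3.

```python
from collections import OrderedDict

CACHE_CAPACITY = 4       # assumed page cache size (pages)
PAGES_PER_BLOCK = 4      # assumed block geometry

class StubFTL:
    """Hypothetical FTL that maps an LPN to (data, physical block no)."""
    def read_page(self, lpn):
        return "data%d" % lpn, lpn // PAGES_PER_BLOCK

def handle_read(lpn, page_cache, block_lists, ftl):
    if lpn in page_cache:                      # A1.1: cache hit
        return page_cache[lpn]
    data, block = ftl.read_page(lpn)           # A1.2/A2.2: forward to FTL
    if len(page_cache) < CACHE_CAPACITY:       # A2.2.2: free space exists
        page_cache[lpn] = data
        pages = block_lists.pop(block, [])     # A2.2.3: per-block dirty list
        pages.insert(0, [lpn, 1])              # head of list, flag 1
        block_lists[block] = pages
        block_lists.move_to_end(block, last=False)
    return data                                # A3: return data to host
```

Note that a page brought in by a read is flagged 1, marking it as still valid in Flash; the replacement flow below can therefore release such pages without writing them back.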
The flow process that caching management module is processed certain IO write request is as follows:
B1, caching management module, according to the logical page number (LPN) of this IO write request, judge that whether this logical page number (LPN) hits with a certain of caching of page, enters step B1.1 or B1.2 according to judged result.
If B1.1 logical page number (LPN) hits, the caching of page that data in write request write direct, performs step B1.1.1 and B1.1.2 successively.
B1.1.1, caching management module are searched FTL mapping table, obtain physical page number and physical block piece number that this logical page (LPAGE) is corresponding.
B1.1.2, caching management module judge " dirty " page chained list that whether comprises this physical block in physical block chained list.If have, will be somebody's turn to do " dirty " page chained list and move to the gauge outfit of physical block chained list, and this page be added to the gauge outfit of this " dirty " page chained list, effective flag that this page is set is " 0 ".If nothing, creates new " dirty " page chained list, and the gauge outfit of this page being added to this " dirty " page chained list, then will be somebody's turn to do the gauge outfit that " dirty " page chained list adds physical block chained list to, effective flag that this page is set is " 0 ".
If B1.2 logical page number (LPN) is miss, judge the whether available free space of caching of page, according to judged result, enter step B1.2.1 or B1.2.2.
If B1.2.1 caching of page is without free space, caching management module sends piece replacement request to piece replacement module, and redirect execution step B1.2.
If the available free space of B1.2.2 caching of page, the caching of page that data of write request write direct, performs step B1.2.2.1 and B1.2.2.2 successively.
B1.2.2.1, caching management module be by searching FTL mapping table, obtains physical page number corresponding to page that newly writes caching of page, and search Physical Page state table and obtain the state of this Physical Page.According to the state of Physical Page, enter step B1.2.2.1.1 or B1.2.2.1.2.
If the state of this Physical Page of B1.2.2.1.1 is " effectively ", caching management module is added this logical page (LPAGE) to the gauge outfit of " dirty " page chained list of this physical block in physical block chained list, and be " 0 " by effective home position of gauge outfit, and will be somebody's turn to do " dirty " page chained list and be placed in the gauge outfit of physical block chained list, the state of this page in Physical Page state table is set to " inefficacy " simultaneously, goes to step B1.3.
If the state of this Physical Page of B1.2.2.1.2 is " totally ", caching management module is added this logical page (LPAGE) in the gauge outfit of new page chained list to, goes to step B1.3.
B1.3, the process of writing finish.
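The write path B1 through B1.3 amounts to the sketch below. The `lookup` callable, the `evict` hook, and the capacity value are assumptions standing in for the FTL mapping-table search and the block replacement request; dirtied pages are flagged "0" per step B1.1.2.

```python
from collections import OrderedDict, deque

VALID, INVALID, CLEAN = "valid", "invalid", "clean"

class WriteTables:
    def __init__(self):
        self.new_pages = deque()          # new-page linked list
        self.block_lists = OrderedDict()  # block no -> "dirty" page list
        self.page_state = {}              # (block no, page no) -> state

    def add_dirty_page(self, block, lpn):
        pages = self.block_lists.pop(block, [])
        pages.insert(0, [lpn, 0])         # head of list, flag "0" (B1.1.2)
        self.block_lists[block] = pages
        self.block_lists.move_to_end(block, last=False)

def handle_write(lpn, data, cache, tables, lookup, capacity=4, evict=None):
    if lpn in cache:                                    # B1.1: hit
        cache[lpn] = data
        block, _ = lookup(lpn)                          # B1.1.1
        tables.add_dirty_page(block, lpn)               # B1.1.2
        return
    if len(cache) >= capacity and evict:                # B1.2.1: replace
        evict()
    cache[lpn] = data                                   # B1.2.2
    block, page = lookup(lpn)                           # B1.2.2.1
    if tables.page_state.get((block, page), CLEAN) == VALID:
        tables.add_dirty_page(block, lpn)               # B1.2.2.1.1
        tables.page_state[(block, page)] = INVALID
    else:                                               # B1.2.2.1.2: clean
        tables.new_pages.appendleft(lpn)
```

The two branches at the end are the crux of the method: a page that overwrites valid Flash data joins its physical block's "dirty" list (and invalidates the Flash copy), while a page with no live Flash copy goes to the new-page list, from which the replacement flow later fills partially empty replacement blocks.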
The flow process that piece replacement module processing block is replaced is as follows:
C1, piece replacement module receive piece replacement request, and the value that variable " effectively " number of pages is set is 0.
C2, piece replacement module start to travel through each " dirty " page chained list physical block chained list from the table tail of physical block chained list to gauge outfit.
C2.1, for each " dirty " page chained list, piece replacement module also from table tail to gauge outfit, travel through every one page.
C2.2, piece replacement module judge whether this page of sign is " 1 ", if 1, discharge corresponding caching of page space, then delete the corresponding node in " dirty " page chained list, " effectively " number of pages adds 1.Otherwise, go to step C2.1 and continue traversal until travel through complete.
Whether C2.3, judgement " effectively " number of pages equal N, if set up, go to step C11.
If C3 " effectively " number of pages is not 0, go to step C11.If " effectively " number of pages is 0, go to step C4.
C4, piece replacement module are using the physical block of later half in physical block chained list as candidate's replace block.
C5, to each candidate's replace block, execution step C5.1.
C5.1, piece replacement module be by searching Physical Page state table, and obtain the quantity of " inefficacy " page in candidate's replace block and " effectively " page, and calculate inefficacy ratio, and inefficacy ratio=inefficacy number of pages/active page quantity.
C6, piece replacement module compare the actual effect ratio of candidate's replace block, choose candidate's replace block piece as an alternative of actual effect ratio maximum.
C7, piece replacement module write replace block buffer memory by the page in replace block according to page bias internal position.
C8, piece replacement module by and the active page in this physical block is read in replace block buffer memory in advance, then discharge these pages shared space in caching of page.
C9, piece replacement module judge that whether replace block buffer memory is write completely, if replace block buffer memory is full, goes to step C11.
C10, judge whether the current page of replace block has data, if current page has data, the lower one page of traversal re-executes step C10.If the countless certificates of current page, go to step C10.1.
C10.1, obtain the table tail page pointed of new page chained list, judge whether this page has data.If have, data are write to the current page in replace block, and delete the corresponding page in new page chained list, discharge the space of this page in caching of page, go to step C10.2; If countless certificates, go to step C11.
Next page in C10.2, traversal replace block buffer memory, goes to step C10.
C11, piece replacement module are issued piece writing module by replacing ready message, discharge the shared space of page in replace block in caching of page.
C12, piece are replaced complete.
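Steps C1 through C6 (free pages flagged "1" first; otherwise pick the rear-half block with the largest invalid ratio) can be sketched as follows. The guard against a zero valid-page count is an added assumption, since the text does not say how a block with no valid pages is ranked.

```python
from collections import OrderedDict

def choose_replacement_block(block_lists, page_state, cache, n):
    freed = 0
    for block in reversed(list(block_lists)):       # C2: tail -> head
        pages = block_lists[block]
        for entry in reversed(pages[:]):            # C2.1: tail -> head
            lpn, flag = entry
            if flag == 1:                           # C2.2: releasable page
                cache.pop(lpn, None)                # free page cache space
                pages.remove(entry)
                freed += 1
                if freed == n:                      # C2.3: N pages freed
                    return None
    if freed:                                       # C3: something was freed
        return None
    blocks = list(block_lists)
    candidates = blocks[len(blocks) // 2:]          # C4: rear half of list
    def invalid_ratio(b):                           # C5.1: invalid / valid
        states = [s for (blk, _), s in page_state.items() if blk == b]
        return states.count("invalid") / max(states.count("valid"), 1)
    return max(candidates, key=invalid_ratio)       # C6: largest ratio wins
```

Returning `None` signals that enough cache space was reclaimed from clean (flag "1") pages alone, so no block needs to be written back; only when every cached page is dirty does the invalid-ratio selection run.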
The above are only preferred embodiments of the present invention, and the protection scope of the present invention is by no means limited to the embodiments above; all technical schemes falling within the concept of the present invention belong to the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications may be made without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.