CN115905246B - KV caching method and device based on a dynamic compressed prefix tree


Info

Publication number: CN115905246B
Application number: CN202310238897.0A
Authority: CN (China)
Prior art keywords: node, thread, prefix tree, read, dynamic
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN115905246A (en)
Inventors: 高天一, 代晓磊, 姜诚, 谢丹博
Current Assignee: Zhizhe Sihai Beijing Technology Co Ltd
Original Assignee: Zhizhe Sihai Beijing Technology Co Ltd
Application filed by Zhizhe Sihai Beijing Technology Co Ltd
Priority date / filing date: 2023-03-14
Publication of CN115905246A: 2023-04-04
Application granted and publication of CN115905246B: 2023-05-09

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval; Database Structures and File System Structures Therefor (AREA)

Abstract

The application provides a KV caching method and device based on a dynamic compressed prefix tree. The method includes: obtaining a client request and distributing it to a plurality of threads; and, for any thread, parsing the client request to obtain an execution command and executing that command based on the dynamic compressed prefix tree to obtain an execution result. The dynamic compressed prefix tree keeps data storage globally ordered, improves query efficiency, saves memory space, improves cache locality, and better supports interval scanning; data integrity is further ensured by adding a prefix key node; request distribution at the network layer enables multi-core, multi-threaded processing of the underlying data structure; the concurrency management unit builds a concurrency-safe index tree that provides strong performance guarantees under massive concurrent operations; and the memory reclamation unit releases memory safely in multithreaded scenarios.

Description

KV caching method and device based on a dynamic compressed prefix tree
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a KV caching method and apparatus based on a dynamic compressed prefix tree.
Background
Well-known Chinese question-and-answer communities already host tens of millions of questions and hundreds of millions of answers; such communities depend heavily on caching and process hundreds of millions of cache requests daily. The performance and scalability of these massive request volumes are critical to online user experience, corporate decision-making, product strategy, and the like.
To handle cache requests of higher complexity and fully utilize the CPU through horizontal scaling, the most common industry solution is Redis, but Redis still has shortcomings. Although Redis supports multi-core I/O threads, its computing logic generally runs on a single core, so the extra cores contribute little to complex computing commands and multi-core scaling is limited. The Redis storage layer implements key storage with a hash table, so key ordering cannot be guaranteed. Furthermore, Redis handles interval (range) searches poorly, requiring a scan of the full key set.
Disclosure of Invention
The application provides a KV caching method and device based on a dynamic compressed prefix tree, so as to achieve horizontal scaling, handle cache requests of high complexity, and fully utilize the CPU performance of modern computers.
In a first aspect, the present application provides a KV caching method based on a dynamic compressed prefix tree, where the method includes:
obtaining a client request, and distributing the client request to a plurality of threads;
parsing the client request to obtain an execution command for any thread, and executing the execution command based on a dynamic compressed prefix tree to obtain an execution result;
where the dynamic compressed prefix tree includes a concurrency management unit, a memory reclamation unit, and a memory compression unit, the concurrency management unit is configured to adjust a version number for each node of the dynamic compressed prefix tree, and the version number includes a read-write blocking flag bit and a retry flag bit.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, adjusting the version number for each node of the dynamic compressed prefix tree includes: if a node undergoes a write operation, performing a first-type increment on the version number of the node before and after the write operation, so as to adjust the value of the read-write blocking flag bit; if a node undergoes a delete operation, performing a second-type increment on the version number of the node at the time of the delete operation, so as to adjust the value of the retry flag bit.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, the read-write blocking flag bit is used to determine whether a node has a write operation in progress and whether the read-write operations of other threads are blocked. Correspondingly, making this determination based on the read-write blocking flag bit specifically includes: for the node currently accessed by any thread, obtaining the version number of the node; determining the value of the node's read-write blocking flag bit based on the version number; if the value of the read-write blocking flag bit is 1, the node has a write operation in progress and the read-write operations of other threads are blocked; if the value is 0, the node has no write operation in progress and the read-write operations of other threads are not blocked.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, the retry flag bit is used to determine whether accessing a node requires a retry operation. Correspondingly, making this determination based on the retry flag bit specifically includes: for the node currently accessed by any thread, obtaining the version number of the node; determining the value of the node's retry flag bit based on the version number; if the value of the retry flag bit is 1, a retry operation is required when accessing the node; if the value is 0, no retry operation is required.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, the memory reclamation unit is configured to support multithreaded memory reclamation for the dynamic compressed prefix tree, which specifically includes: for any thread, setting the generation count of the thread to a maximum value; locking the global pre-delete data list, and traversing all threads to determine the minimum generation count; and unlocking the global pre-delete data list, and deleting, from the pre-delete data list corresponding to the thread, the data whose generation is smaller than the minimum generation count.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, the dynamic compressed prefix tree is provided with a prefix key node, and the prefix key node is used to store data identical to a prefix.
According to the KV caching method based on the dynamic compressed prefix tree provided by the present application, the memory compression unit is configured to compress the dynamic compressed prefix tree using path compression and lazy expansion.
In a second aspect, the present application further provides a KV caching device based on a dynamic compressed prefix tree, where the device includes:
a thread allocation module, configured to obtain a client request and distribute the client request to a plurality of threads;
a thread execution module, configured to parse the client request to obtain an execution command for any thread, and to execute the execution command based on a dynamic compressed prefix tree to obtain an execution result;
where the dynamic compressed prefix tree includes a concurrency management unit, a memory reclamation unit, and a memory compression unit, the concurrency management unit is configured to adjust a version number for each node of the dynamic compressed prefix tree, and the version number includes a read-write blocking flag bit and a retry flag bit.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor, when running the computer program, executes the steps in any implementation of the KV caching method based on the dynamic compressed prefix tree.
In a fourth aspect, an embodiment of the present application further provides a readable storage medium storing a computer program, where the computer program, when run on a processor, performs the steps in any implementation of the KV caching method based on the dynamic compressed prefix tree.
In summary, the KV caching method and device based on the dynamic compressed prefix tree achieve horizontal scaling, handle cache requests of high complexity, and make full use of the CPU performance of modern computers. Request distribution at the network layer enables multi-core, multi-threaded processing of the underlying data structure. The dynamic compressed prefix tree keeps data storage globally ordered, improves query efficiency, saves memory space, improves cache locality, and better supports interval scanning. Adding a prefix key node to the dynamic compressed prefix tree further ensures the integrity and accuracy of the data. The concurrency management unit adjusts the read-write blocking flag bit and the retry flag bit to manage concurrent read-write operations on each node, building a concurrency-safe index tree that provides strong performance guarantees under massive concurrent operations. The memory reclamation unit releases memory in multithreaded scenarios, and the memory compression unit further saves memory space.
Drawings
For a clearer description of the present application and the prior art, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below evidently show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a KV caching method based on a dynamic compressed prefix tree provided in the present application;
Fig. 2 is a flow chart of the method provided by the present application for adjusting version numbers for nodes of the dynamic compressed prefix tree;
Fig. 3 is a schematic structural diagram of a KV caching device based on a dynamic compressed prefix tree provided in the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 300 - KV caching device; 310 - thread allocation module; 320 - thread execution module; 400 - electronic device; 410 - memory; 420 - processor; 430 - bus.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the drawings. The described embodiments are evidently some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort fall within the protection scope of the present application.
Terms used in the embodiments are explained below:
Dynamic compressed prefix tree (Adaptive Radix Tree, ART): a prefix tree that uses binary bit strings as keys; a multi-way tree structure similar to a multi-level index table, in which each intermediate node contains an array of pointers to its child nodes and each leaf node contains a pointer to the actual object.
Key value: a string or byte array used for retrieval.
Value: the data corresponding to a Key value.
KV cache: KV is an abbreviation of Key-Value; KV storage is also called key-value pair storage.
Root node: the topmost node of the tree; there is exactly one.
Child node: any node other than the root node; a child node may itself branch further.
Leaf node: a node with no branches, also called a terminal node.
Fig. 1 is a flow chart of a KV caching method based on a dynamic compressed prefix tree provided in the present application. As shown in fig. 1, the KV caching method based on the dynamic compressed prefix tree includes:
s1, acquiring a request of a client, and distributing the request of the client to a plurality of threads.
The method comprises the steps of obtaining a request of a client, and distributing the request of the client to a plurality of threads, and specifically comprises the following steps:
step S11, a request of a client is obtained and sent to a network layer;
step S12, the request of the client is distributed to a plurality of threads and uniformly distributed to different threads.
Specifically, although the Libuv network library is a high-performance, event-driven I/O library, it does not by itself support multi-core, multi-threaded request processing. The embodiments of the present application therefore build the network layer on top of the Libuv library and use the uv_write2 function together with SO_REUSEPORT to send IPC (Inter-Process Communication) signals to multiple threads, so that client requests are distributed evenly across the threads and the threads do not interfere with one another.
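As an illustration of this dispatch pattern (not code from the patent), the sketch below follows libuv's standard handle-passing approach: the acceptor writes a dummy buffer plus the accepted TCP handle into a worker's IPC pipe with uv_write2. The Worker struct, pool size, and round_robin counter are our own illustrative assumptions; each worker pipe is assumed to have been created with uv_pipe_init(loop, &pipe, 1) so it can carry handles. The SO_REUSEPORT option mentioned above is the alternative route, letting each thread bind its own listening socket.

```cpp
// Illustrative sketch only: round-robin hand-off of an accepted connection to
// worker IPC pipes via uv_write2 (libuv's handle-passing pattern).
#include <uv.h>
#include <cstdlib>

struct Worker { uv_pipe_t pipe; };   // assumed initialized with uv_pipe_init(loop, &pipe, 1 /*ipc*/)
static Worker workers[4];            // illustrative fixed worker pool
static int round_robin = 0;

void on_new_connection(uv_stream_t* server, int status) {
    if (status < 0) return;
    uv_tcp_t* client = (uv_tcp_t*)std::malloc(sizeof(uv_tcp_t));
    uv_tcp_init(server->loop, client);
    if (uv_accept(server, (uv_stream_t*)client) == 0) {
        // A dummy one-byte buffer is required; the client handle rides along as
        // the IPC payload, so the receiving worker can adopt the connection.
        uv_write_t* req = (uv_write_t*)std::malloc(sizeof(uv_write_t));
        uv_buf_t dummy = uv_buf_init(const_cast<char*>("x"), 1);
        Worker* w = &workers[round_robin];
        round_robin = (round_robin + 1) % 4;
        uv_write2(req, (uv_stream_t*)&w->pipe, &dummy, 1, (uv_stream_t*)client, nullptr);
        // a real server would free req and close client in the write callback
    } else {
        uv_close((uv_handle_t*)client, nullptr);
    }
}
```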
To reduce the pressure on the database, hot-spot data is generally used as the cache, and the index layer adopts a dynamic compressed prefix tree for that cache; this better supports the network layer's multi-core, multi-threaded cache request processing, handles cache requests of higher complexity, and improves the read-write performance of the database. Hot-spot data is data that is frequently accessed by the majority of users.
S2: parse the client request to obtain an execution command for any thread, and execute the execution command based on the dynamic compressed prefix tree to obtain an execution result.
The dynamic compressed prefix tree includes a concurrency management unit, a memory reclamation unit, and a memory compression unit.
Specifically, each thread parses the client request, obtains the data stream corresponding to that thread, and continuously assembles it into complete commands, yielding the execution command for each thread. Each thread then pulls data from, or writes data into, the dynamic compressed prefix tree of the index layer according to its own execution command, completing the processing of that command and obtaining an execution result. Note that the execution commands of different threads may run simultaneously, each pulling or writing data in the index layer independently.
In some embodiments, the dynamic compressed prefix tree is provided with a prefix key node for storing data identical to a prefix; that is, a prefix key is a key value that is both a prefix within the prefix tree and a separate, complete key value. For example, after abca, abcc, and abcd are inserted into an empty index tree, the common prefix is abc; if the key value "abc" itself then needs to be inserted, a prefix key node must be added to store the key value "abc" that coincides with the common prefix "abc".
In some embodiments, the number of key values that each node of the dynamic compressed prefix tree can accommodate changes dynamically. The node types of the dynamic compressed prefix tree include node4, node16, node48, and node256: node4 contains 4 fixed-length pointer slots and 4 fixed-length index slots; node16 contains 16 fixed-length pointer slots and 16 fixed-length index slots; node48 contains 48 fixed-length pointer slots and 256 index slots; node256 contains 256 fixed-length pointer slots and 256 index slots.
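The four node sizes can be pictured with the following illustrative C++ layouts; this is a sketch under our own naming, since the patent does not give field definitions:

```cpp
// Illustrative layouts for the four adaptive node types (names are ours).
#include <cstdint>

struct Node {            // common header shared by all node types
    uint8_t  type;       // NODE4 / NODE16 / NODE48 / NODE256 tag
    uint64_t version;    // optimistic-lock version word (see the Fig. 2 discussion)
};

struct Node4  : Node {   // up to 4 children: 4 index slots + 4 pointer slots
    uint8_t keys[4];     // kept sorted in ASCII order (see below)
    Node*   children[4];
};
struct Node16 : Node {   // up to 16 children; the keys array is SIMD-searchable
    uint8_t keys[16];    // kept sorted in ASCII order
    Node*   children[16];
};
struct Node48 : Node {   // 256-entry index mapping a key byte to a child slot
    uint8_t child_index[256];   // e.g. 0xFF marks an empty entry
    Node*   children[48];
};
struct Node256 : Node {  // direct array: the key byte itself indexes the child
    Node*   children[256];
};
```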
In some embodiments, node16 nodes in the dynamic compressed prefix tree also support vectorized computation: multiple data elements are operated on simultaneously by SIMD (Single Instruction, Multiple Data) instructions, and this parallel acceleration improves efficiency. For example, reading [1 2 5 5] once and [2 3 7 1] once via SIMD instructions and then adding them computes the result [3 5 12 6] in a single operation; this is vectorized computation.
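For node16, the typical vectorized operation is the key lookup itself: one SSE2 comparison tests a search byte against all 16 stored key bytes at once. The sketch below is illustrative, following the published ART technique rather than code from the patent, and uses the GCC/Clang builtin __builtin_ctz:

```cpp
// Illustrative SSE2 lookup in a node16: compare one key byte against all 16
// stored key bytes with a single instruction sequence.
#include <cstdint>
#include <emmintrin.h>   // SSE2 intrinsics

int node16_find(const uint8_t keys[16], int count, uint8_t key_byte) {
    __m128i needle = _mm_set1_epi8((char)key_byte);                 // broadcast search byte
    __m128i stored = _mm_loadu_si128((const __m128i*)keys);         // load the 16 key bytes
    int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(needle, stored));   // one bit per matching slot
    mask &= (1 << count) - 1;                                       // ignore unused slots
    return mask ? __builtin_ctz(mask) : -1;                         // child index, or not found
}
```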
It should be noted that the dynamic compressed prefix tree is globally ordered. Specifically, for nodes of type node4 and node16, data must be written in ASCII order: for example, if an index tree stores a, c, and d and b must now be inserted, b is stored between a and c. For nodes of type node48 and node256, all characters can be stored in order without adjustment, since the node itself contains an ordered array of length 256.
Regarding step S2 above, the concurrency management unit of the dynamic compressed prefix tree is configured to adjust a version number for each node of the tree, where the version number includes a read-write blocking flag bit and a retry flag bit. The read-write blocking flag bit is used to judge whether a node has a write operation in progress and whether the read-write operations of other threads are blocked; the retry flag bit is used to judge whether accessing the node requires a retry operation.
In some embodiments, the concurrency management unit is a management unit designed around an optimistic lock. By introducing a version number on each node of the dynamic compressed prefix tree and managing each node's data through the read-write blocking flag bit and the retry flag bit of that version number, it avoids the reduced read accuracy, program interruptions, and similar problems caused by read and write operations occurring simultaneously in multithreaded scenarios. Specifically, while executing a command, a thread first obtains the version number of the node it is currently accessing; it then uses the value of the read-write blocking flag bit to judge whether that node has a write operation in progress and is blocking the read-write operations of other threads, i.e., whether the current thread can read the node's data normally, and uses the value of the retry flag bit to judge whether a retry operation is needed.
Regarding step S2 above, the memory reclamation unit of the dynamic compressed prefix tree is configured to support multithreaded memory reclamation. Multithreaded memory reclamation mainly solves the problem that, in multithreaded scenarios, a delete operation can leave other threads unable to read, or reading null values, so that those threads cannot make progress. In the embodiments of the present application, the memory reclamation unit may use the EBR (epoch-based reclamation) technique to support multithreaded memory reclamation for the dynamic compressed prefix tree. Because data inserted by a write operation may change a node's type and generate a new node, and a delete operation may need to delete the data corresponding to some nodes, pre-deleted data must first be marked rather than deleted immediately, for example by temporarily recording it in a global pre-delete data list.
In some embodiments, for any thread that performs a write or delete operation, a thread information file thread_info is first created locally when the thread is initialized; a locking operation is performed on the global pre-delete data list delete_map through delete_map_lock to prevent other threads from modifying delete_map, and the thread information file thread_info corresponding to each thread is added to delete_map.
In the above embodiment, the global pre-delete data list delete_map is the collection of the pre-delete data lists delete_list of all threads. The thread information file thread_info contains a thread identifier, the global generation count gobal_epoch, and pre-delete data information. The thread identifier may be obtained with the pthread_self function; the global generation count gobal_epoch increments sequentially from an initial value of 0; and the pre-delete data information is the node information each thread needs to delete. Adding the thread information file thread_info of each thread to delete_map includes synchronizing the pre-delete data information into the delete_list corresponding to that thread within delete_map.
In some embodiments, for any thread that performs a write or delete operation, a generation count local_epoch is set for the thread each time a command is executed, and each thread runs an exit-epoch-and-cleanup routine at the end of each generation cycle, using the EBR technique to implement multithreaded memory reclamation and thread cleanup. For any thread with a write or delete operation, implementing multithreaded memory reclamation with the EBR technique specifically includes the following steps (a minimal code sketch follows the steps below):
Step a1: set the generation count of the thread to the maximum value.
Specifically, the thread generation count local_epoch and the global generation count gobal_epoch are of type uint64, and the thread's local_epoch is set to the maximum value of uint64; an exiting thread therefore never holds the minimum back, which ensures that the oldest-generation data is cleared during the first round of memory-reclamation polling.
In some embodiments, the thread's generation count local_epoch may also be adjusted against the global generation count gobal_epoch: if local_epoch is smaller than gobal_epoch, local_epoch is kept unchanged; if local_epoch is greater than gobal_epoch, local_epoch is set to gobal_epoch to stay synchronized with the global count.
Step a2: lock the global pre-delete data list, and traverse all threads to determine the minimum generation count.
Specifically, the global pre-delete data list delete_map is locked through delete_map_lock to prevent other threads from modifying it; all threads in the global pre-delete data list are then traversed, and the smallest generation count epoch_min is determined.
Step a3: unlock the global pre-delete data list, and delete, from the pre-delete data list corresponding to each thread, the data whose generation is smaller than the minimum generation count.
Specifically, the global pre-delete data list delete_map is unlocked; the delete_list corresponding to the current thread within delete_map is traversed, and the data whose generation is smaller than the minimum generation count epoch_min is screened out and deleted. For example, for thread A, if local_epoch is set to 4 and epoch_min is 3, the delete_list corresponding to thread A in delete_map is traversed, and the marked pre-deleted node data whose generation is smaller than 3 is deleted.
Step a4: exit.
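A minimal sketch of steps a1 through a4, under our own illustrative names and types (the patent specifies the procedure but not the code; the actual freeing of retired nodes is elided):

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <map>
#include <mutex>
#include <vector>

struct Retired { void* node; uint64_t epoch; };      // pre-deleted node + its generation

struct ThreadInfo {
    uint64_t local_epoch = 0;                        // generation count of this thread
    std::vector<Retired> delete_list;                // this thread's pre-deleted data
};

static std::mutex delete_map_lock;                   // guards the global pre-delete list
static std::map<uint64_t, ThreadInfo*> delete_map;   // thread id -> thread info

void exit_epoch_and_cleanup(ThreadInfo* self) {
    // a1: set our generation count to the maximum so we never hold the minimum back
    self->local_epoch = std::numeric_limits<uint64_t>::max();

    // a2: lock the global list and take the minimum generation over all threads
    uint64_t epoch_min = std::numeric_limits<uint64_t>::max();
    {
        std::lock_guard<std::mutex> guard(delete_map_lock);
        for (const auto& [tid, info] : delete_map)
            epoch_min = std::min(epoch_min, info->local_epoch);
    }   // a3, first half: the lock is released here

    // a3, second half: drop our retired nodes whose generation is below the minimum
    auto& list = self->delete_list;
    list.erase(std::remove_if(list.begin(), list.end(),
                              [&](const Retired& r) {
                                  // actual freeing of r.node omitted in this sketch
                                  return r.epoch < epoch_min;
                              }),
               list.end());
}   // a4: exit
```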
Regarding step S2 above, the memory compression unit of the dynamic compressed prefix tree is configured to compress the tree using path compression and lazy expansion.
In some embodiments, compressing the dynamic compressed prefix tree with path compression and lazy expansion includes: for leaf nodes and non-leaf nodes of the dynamic compressed prefix tree, removing and merging the nodes that meet a first condition, where the first condition is that a node has only a single child node. For example, when BAR and BAZ are inserted into an empty index tree, parent node B has only one child node A; child node A is then merged into its parent node B and removed.
In some embodiments, compressing the dynamic compressed prefix tree with path compression and lazy expansion further includes: for leaf nodes of the dynamic compressed prefix tree, omitting the nodes that meet a second condition, where the second condition is that a node has fewer than two child nodes. For example, when FOO is inserted into an empty index tree, parent node F has only one child node O, which itself has only one child node O; in this case, to save space, the creation of the two internal child nodes can simply be omitted and parent node F can point directly to a node storing the suffix OO.
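The lookup-side counterpart of path compression can be sketched as follows; the node layout and names are illustrative assumptions, not the patent's definitions:

```cpp
#include <cstdint>
#include <cstring>

// Inner node carrying a compressed path: the key bytes swallowed when
// single-child ancestors were merged away (field names are ours).
struct InnerNode {
    uint8_t  prefix[8];     // compressed-path bytes (longer paths would spill elsewhere)
    uint32_t prefix_len;    // how many of them are valid
};

// During lookup, the stored prefix must match the next key bytes before we
// may descend; returns the number of bytes to skip, or -1 if the key diverges.
int check_prefix(const InnerNode* n, const uint8_t* key, uint32_t key_len, uint32_t depth) {
    if (depth + n->prefix_len > key_len) return -1;               // key ends inside the prefix
    if (std::memcmp(n->prefix, key + depth, n->prefix_len) != 0) return -1;
    return static_cast<int>(n->prefix_len);                      // skip these bytes and descend
}
```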
The KV caching method based on the dynamic compressed prefix tree provided by the embodiments of the present application achieves horizontal scaling, handles cache requests of high complexity, and makes full use of the CPU performance of modern computers. Request distribution at the network layer enables multi-core, multi-threaded processing of the underlying data structure. The dynamic compressed prefix tree keeps data storage globally ordered, improves query efficiency, saves memory space, improves cache locality, and better supports interval scanning. Adding a prefix key node to the dynamic compressed prefix tree further ensures the integrity and accuracy of the data. The concurrency management unit adjusts the read-write blocking flag bit and the retry flag bit to manage concurrent read-write operations on each node, building a concurrency-safe index tree that provides strong performance guarantees under massive concurrent operations. The memory reclamation unit releases memory in multithreaded scenarios, and the memory compression unit further saves memory space.
Fig. 2 is a flow chart of the method provided by the present application for adjusting version numbers for nodes of the dynamic compressed prefix tree. The version number is a variable of type uint64 whose 64 bits are initially all 0. The read-write blocking flag bit occupies the second-to-last bit of the version number and is used to judge whether a node has a write operation in progress and whether the read-write operations of other threads are blocked; the retry flag bit occupies the last bit of the version number and is used to judge whether accessing the node requires a retry operation. Note that the read-write blocking flag bit has higher priority than the retry flag bit: after the version number of a node is obtained, the read-write blocking flag bit is checked first to decide whether a normal read is possible, and only if it is not is the retry flag bit checked to decide whether a retry is required.
As shown in fig. 2, adjusting the version numbers of the nodes of the dynamic compressed prefix tree includes:
S21: if a node undergoes a write operation, perform a first-type increment on the version number of the node before and after the write operation, so as to adjust the value of the read-write blocking flag bit.
The first-type increment may be a +2 operation on the node's version number. If a node undergoes a write operation, a +2 operation is performed on its version number before the write, changing the last three bits from the initial 000 to 010, so the read-write blocking flag bit changes from 0 to 1; after the write operation, a +2 operation is performed again, changing the last three bits from 010 to 100, so the read-write blocking flag bit changes back from 1 to 0 while the version count advances.
In some embodiments, determining, based on the read-write blocking flag bit, whether a node has a write operation in progress and whether the read-write operations of other threads are blocked specifically includes:
Step b1: for the node currently accessed by any thread, obtain the version number of the node;
Step b2: determine the value of the node's read-write blocking flag bit based on the version number. If the value of the read-write blocking flag bit is 1, the node has a write operation in progress and the read-write operations of other threads are blocked; if the value is 0, the node has no write operation in progress and the read-write operations of other threads are not blocked.
If the read-write blocking flag bit of a node's version number is 1, all threads other than the one writing to the node can neither read nor write the node and are in a waiting state; however, they may repeatedly attempt the read or write by spinning, and as soon as the flag returns to 0 the read or write operation can proceed immediately.
S22: if a node undergoes a delete operation, perform a second-type increment on the version number of the node at the time of the delete operation, so as to adjust the value of the retry flag bit.
The second-type increment may be a +3 operation on the node's version number. If a node undergoes a delete operation, a +3 operation is performed on its version number; starting from an unlocked state in which the last two bits are 00, they become 11, so the read-write blocking flag bit changes from 0 to 1 and the retry flag bit changes from 0 to 1.
In some embodiments, determining, based on the retry flag bit, whether accessing a node requires a retry operation specifically includes:
Step c1: for the node currently accessed by any thread, obtain the version number of the node;
Step c2: determine the value of the node's retry flag bit based on the version number. If the value of the retry flag bit is 1, a retry operation is required when accessing the node; if the value is 0, no retry operation is required.
It should be noted that delete operations include node deletion and path deletion, and that a retry operation means jumping from the currently accessed node back to the root node and descending again. Because the optimistic lock assumes conflicts over the data are rare, retries are relatively infrequent and are not performed indefinitely; equivalently, the second-type increment is a low-frequency operation.
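Putting steps c1 and c2 together with the spin behavior described after steps b1 and b2, a read-side sketch might look as follows (illustrative, with our own names):

```cpp
#include <atomic>
#include <cstdint>

enum class ReadResult { Ok, RestartFromRoot };

// Spin while the read-write blocking flag (bit 1) is 1, then restart from the
// root if the retry flag (bit 0) is 1; otherwise hand the stable version back
// to the caller, who re-checks it after reading the node's data.
ReadResult read_version(const std::atomic<uint64_t>& version, uint64_t& stable) {
    uint64_t v = version.load(std::memory_order_acquire);
    while (v & 0b10)                                   // writer in progress: spin
        v = version.load(std::memory_order_acquire);
    if (v & 0b01)                                      // node deleted: retry needed
        return ReadResult::RestartFromRoot;            // jump to the root, descend again
    stable = v;
    return ReadResult::Ok;
}
```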
In the method described above for adjusting version numbers for the nodes of the dynamic compressed prefix tree, the first-type increment on the version number adjusts the value of the read-write blocking flag bit, providing blocking control over the read-write operations of other threads; the second-type increment adjusts the values of both the read-write blocking flag bit and the retry flag bit, which not only provides blocking control over other threads' read-write operations but also controls whether a thread must perform a retry. Moreover, managing locks per node through version numbers costs far less than locking the entire index tree, which greatly improves operating efficiency and achieves concurrency-safe management of the dynamic compressed prefix tree.
Fig. 3 is a schematic structural diagram of a KV caching device based on a dynamic compressed prefix tree provided in the present application, which may be used to implement the method described in the foregoing embodiments. As shown in fig. 3, the device includes:
a thread allocation module 310, configured to obtain a client request and distribute it to a plurality of threads;
a thread execution module 320, configured to parse the client request to obtain an execution command for any thread, and to execute the execution command based on a dynamic compressed prefix tree to obtain an execution result; the dynamic compressed prefix tree includes a concurrency management unit, a memory reclamation unit, and a memory compression unit, where the concurrency management unit is configured to adjust a version number for each node of the dynamic compressed prefix tree, and the version number includes a read-write blocking flag bit and a retry flag bit.
For a detailed description of the KV caching device based on the dynamic compressed prefix tree, refer to the description of the corresponding method steps in the above embodiments; repeated content is not restated here. The device embodiments described above are merely illustrative: a "module" shown as a separate component may or may not be physically separate and may be implemented as a combination of software and/or hardware realizing the intended function. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, and a person of ordinary skill in the art can understand and implement it without inventive effort.
Fig. 4 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 4, the electronic device includes a memory 410 and a processor 420; the memory 410 stores a computer program, and the processor 420, when running the computer program, executes the steps of the KV caching method based on the dynamic compressed prefix tree.
An embodiment of the present application further provides a readable storage medium storing a computer program that, when run on a processor, executes the steps of the KV caching method based on the dynamic compressed prefix tree.
It should be understood that the electronic device may be any device with logic computing capability, such as a personal computer, a tablet computer, or a smartphone, and that the readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, an optical disc, or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and they shall all be covered by the protection scope of the present application.

Claims (8)

1. A KV caching method based on a dynamic compressed prefix tree, the method comprising:
obtaining a client request, and distributing the client request to a plurality of threads;
parsing the client request to obtain an execution command for any thread, and executing the execution command based on a dynamic compressed prefix tree to obtain an execution result; wherein the dynamic compressed prefix tree comprises a concurrency management unit, a memory reclamation unit, and a memory compression unit;
the concurrency management unit is configured to adjust a version number for each node of the dynamic compressed prefix tree, the version number comprising a read-write blocking flag bit and a retry flag bit; adjusting the version number for each node of the dynamic compressed prefix tree comprises:
if a node undergoes a write operation, performing a first-type increment on the version number of the node before and after the write operation, so as to adjust the value of the read-write blocking flag bit;
if a node undergoes a delete operation, performing a second-type increment on the version number of the node at the time of the delete operation, so as to adjust the value of the retry flag bit;
the memory reclamation unit is configured to support multithreaded memory reclamation for the dynamic compressed prefix tree, which specifically comprises:
setting, for any thread, the generation count of the thread to a maximum value;
locking the global pre-delete data list, and traversing all threads to determine the minimum generation count; and
unlocking the global pre-delete data list, and deleting, from the pre-delete data list corresponding to the thread, the data whose generation is smaller than the minimum generation count.
2. The method according to claim 1, wherein the read-write blocking flag bit is used to determine whether a node has a write operation in progress and whether the read-write operations of other threads are blocked, and determining this based on the read-write blocking flag bit specifically comprises:
for the node currently accessed by any thread, obtaining the version number of the node;
determining the value of the node's read-write blocking flag bit based on the version number; if the value of the read-write blocking flag bit is 1, the node has a write operation in progress and the read-write operations of other threads are blocked; if the value of the read-write blocking flag bit is 0, the node has no write operation in progress and the read-write operations of other threads are not blocked.
3. The method of claim 1, wherein the retry flag bit is used to determine whether accessing a node requires a retry operation, and determining this based on the retry flag bit specifically comprises:
for the node currently accessed by any thread, obtaining the version number of the node;
determining the value of the node's retry flag bit based on the version number; if the value of the retry flag bit is 1, a retry operation is required when accessing the node; if the value of the retry flag bit is 0, no retry operation is required.
4. The method of claim 1, wherein the dynamic compressed prefix tree is provided with a prefix key node for storing data identical to a prefix.
5. The method of claim 1, wherein the memory compression unit is configured to compress the dynamic compressed prefix tree using path compression and lazy expansion.
6. A KV caching device based on a dynamic compressed prefix tree, the device comprising:
a thread allocation module, configured to obtain a client request and distribute the client request to a plurality of threads;
a thread execution module, configured to parse the client request to obtain an execution command for any thread, and to execute the execution command based on a dynamic compressed prefix tree to obtain an execution result; wherein the dynamic compressed prefix tree comprises a concurrency management unit, a memory reclamation unit, and a memory compression unit;
the concurrency management unit is configured to adjust a version number for each node of the dynamic compressed prefix tree, the version number comprising a read-write blocking flag bit and a retry flag bit; adjusting the version number for each node of the dynamic compressed prefix tree comprises:
if a node undergoes a write operation, performing a first-type increment on the version number of the node before and after the write operation, so as to adjust the value of the read-write blocking flag bit;
if a node undergoes a delete operation, performing a second-type increment on the version number of the node at the time of the delete operation, so as to adjust the value of the retry flag bit;
the memory reclamation unit is configured to support multithreaded memory reclamation for the dynamic compressed prefix tree, which specifically comprises:
setting, for any thread, the generation count of the thread to a maximum value;
locking the global pre-delete data list, and traversing all threads to determine the minimum generation count; and
unlocking the global pre-delete data list, and deleting, from the pre-delete data list corresponding to the thread, the data whose generation is smaller than the minimum generation count.
7. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when running the computer program, executes the KV caching method based on the dynamic compressed prefix tree according to any one of claims 1 to 5.
8. A readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the KV caching method based on the dynamic compressed prefix tree according to any one of claims 1 to 5.
CN202310238897.0A, priority date 2023-03-14, filing date 2023-03-14: KV caching method and device based on a dynamic compressed prefix tree. Active. CN115905246B (en)

Priority Applications (1)

CN202310238897.0A, priority date 2023-03-14, filing date 2023-03-14: KV caching method and device based on a dynamic compressed prefix tree


Publications (2)

Publication Number / Publication Date
CN115905246A (en) 2023-04-04
CN115905246B (en) 2023-05-09

Family

ID=85733786

Family Applications (1)

CN202310238897.0A (Active), priority/filing date 2023-03-14: KV caching method and device based on a dynamic compressed prefix tree, CN115905246B (en)

Country Status (1)

CN: CN115905246B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7293028B2 (en) * 2001-06-08 2007-11-06 Sap Ag Cache-conscious concurrency control scheme for database systems
CN109407979B (en) * 2018-09-27 2020-07-28 清华大学 Multithreaded persistent B+ tree data structure design and implementation method
CN112306991A (en) * 2020-10-30 2021-02-02 深圳前海微众银行股份有限公司 Method, device and equipment for processing data in tree structure and storage medium
CN112749198B (en) * 2021-01-21 2024-10-22 中信银行股份有限公司 Multistage data caching method and device based on version number
CN114138792A (en) * 2021-12-02 2022-03-04 浪潮云信息技术股份公司 Key-value separated storage method and system



Legal Events

Code / Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant