CN112860794B - Concurrency capability lifting method, device, equipment and storage medium based on cache - Google Patents

Info

Publication number
CN112860794B
CN112860794B (application CN202110150945.1A)
Authority
CN
China
Prior art keywords
data
version
version data
service
synchronized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110150945.1A
Other languages
Chinese (zh)
Other versions
CN112860794A (en)
Inventor
唐小龙
Current Assignee
Bigo Technology Pte Ltd
Original Assignee
Bigo Technology Pte Ltd
Priority date
Filing date
Publication date
Application filed by Bigo Technology Pte Ltd filed Critical Bigo Technology Pte Ltd
Priority to CN202110150945.1A priority Critical patent/CN112860794B/en
Publication of CN112860794A publication Critical patent/CN112860794A/en
Application granted granted Critical
Publication of CN112860794B publication Critical patent/CN112860794B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/275Synchronous replication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/219Managing data history or versioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2315Optimistic concurrency control
    • G06F16/2329Optimistic concurrency control using versioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2365Ensuring data consistency and integrity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a cache-based concurrency capability improvement method, device, equipment and storage medium. According to the technical scheme provided by the embodiment of the application, the memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, service processing is performed on the service version data in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, a corresponding request processing result is returned to the requester. This realizes low-latency responses to highly concurrent requests, ensures consistency between the returned results and the data in the data layer, and effectively improves the capacity for handling highly concurrent requests.

Description

Concurrency capability lifting method, device, equipment and storage medium based on cache
Technical Field
The embodiment of the application relates to the field of computer technology, and in particular to a cache-based concurrency capability improvement method, device, equipment and storage medium.
Background
In some Internet business scenarios, certain data is accessed and modified by a large number of users simultaneously, such as the mic-grabbing scenario in a live co-streaming service or the flash-sale scenario in an e-commerce service. For such scenarios, the services of the data layer and the logic layer are generally separated: when a request arrives, data is loaded from the data layer into the logic layer, the logic layer processes the data according to the request and writes it back to the data layer, and finally the processing result is returned to the requester.
In this processing mode, data is loaded, processed and written back for every request, so the processing time of a single request exceeds the data transmission latency between the logic layer and the data layer, and the next request can only be processed after the previous one completes. This exposes the contradiction between high-concurrency requests and data read/write latency, and limits the capacity for processing highly concurrent requests.
Disclosure of Invention
The embodiment of the application provides a concurrency capacity improving method, device, equipment and storage medium based on cache, so as to improve the processing capacity of high concurrency requests.
In a first aspect, an embodiment of the present application provides a method for improving concurrency capability based on cache, including:
Loading memory version data from a data layer, and caching the memory version data as read-only version data in a logic layer;
Copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester;
Copying the service version data after service processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer;
and carrying out data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on the data write-back operation result.
In a second aspect, an embodiment of the present application provides a concurrency capability improving device based on cache, including a data loading module, a data processing module, a data synchronization module, and a data write-back module, where:
the data loading module is used for loading the memory version data from the data layer and caching the memory version data in the logic layer as read-only version data;
The data processing module is used for copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester;
The data synchronization module is used for copying the service version data after service processing into version data to be synchronized and caching the version data to be synchronized in the logic layer;
the data write-back module is used for performing data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on a data write-back operation result.
In a third aspect, an embodiment of the present application provides a concurrency capability promotion device based on cache, including: a memory and one or more processors;
The memory is used for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache-based concurrency capability promotion method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer executable instructions which, when executed by a computer processor, are adapted to carry out a cache-based concurrency capability promotion method as described in the first aspect.
According to the embodiment of the application, the memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, service processing is performed on the service version data in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, a corresponding request processing result is returned to the requester. This realizes low-latency responses to highly concurrent requests, ensures consistency between the returned results and the data-layer data, and effectively improves the capacity for handling highly concurrent requests.
Drawings
FIG. 1 is a flowchart of a concurrency capability improving method based on cache, provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-concurrency request processing flow provided in an embodiment of the present application;
FIG. 3 is a flowchart of another method for improving concurrency capability based on cache according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a data storage state before a data processing request arrives according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a data saving state when a first data request arrives according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a data saving state when a second data request arrives according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a concurrency capability improving device based on cache according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a concurrency capability promotion device based on cache according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments of the present application is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 shows a flowchart of a method for improving concurrency capability based on cache, which is provided by the embodiment of the present application, where the method for improving concurrency capability based on cache may be implemented by a device for improving concurrency capability based on cache, and the device for improving concurrency capability based on cache may be implemented by means of hardware and/or software and integrated in a device for improving concurrency capability based on cache.
The following describes an example of a method for executing the concurrent capability lifting method based on the cache by using the concurrent capability lifting device based on the cache. Referring to fig. 1, the cache-based concurrency capability promotion method includes:
S101: and loading the memory version data from the data layer, and caching the memory version data as read-only version data in the logic layer.
The cache-based concurrency capability improving device provided in this embodiment includes a data layer and a logic layer, which are used to store and to process data, respectively. The memory version data provided in this embodiment is stored in the data layer.
For example, when a data processing service with multiple concurrent requests is required, the corresponding memory version data is loaded from the data layer according to the data content corresponding to the data processing service, and the loaded memory version data is then cached in the logic layer as read-only version data. In one possible embodiment, the memory version data may also be pre-loaded into the logic layer and cached as read-only version data when the data processing service is started (e.g., the mic-slot queue data is pre-loaded into the logic layer when the streamer starts broadcasting).
In one possible embodiment, after the memory version data is cached as the read-only version data in the logic layer, the embodiment of the present application further includes: the read-only version data is provided to a requester based on a correctness service request issued by the requester.
It can be understood that the read-only version data provided in this embodiment is unalterable read-only data copied from the memory version data, so its correctness can be guaranteed. When a correctness service request that requires guaranteed data correctness is received from the requester (for example, when mic-slot data is pushed to the client in the co-streaming service), the read-only version data in the logic layer can be obtained and sent to the corresponding requester, ensuring the correctness of the obtained data.
S102: copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester.
Illustratively, after the memory version data is loaded from the data layer and cached as read-only version data, the read-only version data is copied as service version data and the service version data is cached in the logic layer. The service version data provided in this embodiment is used as data for performing service processing, and when a service request arrives, service processing is performed on the service version data.
Optionally, each time the memory version data is loaded from the data layer and cached as read-only version data, the new read-only version data is copied again and replaces the original service version data, ensuring the correctness of the service version data used for service processing.
Further, upon receiving one or more data processing requests from the requester, the service version data is processed and modified according to the data processing requests. Wherein one or more data processing requests may be issued by one requester or by a plurality of requesters, respectively.
It can be understood that, when receiving a plurality of data processing requests, the embodiment of the application does not block subsequent data processing requests in the logic layer. The service version data can be processed according to the data processing requests simultaneously or sequentially, without waiting for the request processing result of the previous data processing request to be returned before processing the next one, meeting the capacity requirement of concurrent requests.
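Steps S101 and S102 can be sketched as a small logic-layer cache holding two copies of the data-layer state. This is a minimal, hypothetical Python sketch (the `LogicLayerCache` class, the mic-queue payload, and the dict-based data layer are illustrative, not from the patent): requests mutate only the service copy, while the read-only copy stays untouched for correctness reads.

```python
from copy import deepcopy

# Hypothetical in-memory stand-in for the data layer.
data_layer = {"value": {"mic_queue": ["alice"]}, "version": 7}

class LogicLayerCache:
    def __init__(self):
        self.read_only = None  # read-only version data (S101)
        self.service = None    # service version data (S102)

    def load_from_data_layer(self):
        # S101: load the memory version data and cache a read-only copy.
        self.read_only = deepcopy(data_layer["value"])
        # S102: copy the read-only data into the service copy used for processing.
        self.service = deepcopy(self.read_only)

    def handle_request(self, user):
        # Service processing modifies only the service copy; the read-only
        # copy is untouched, so correctness reads can still be served from it.
        self.service["mic_queue"].append(user)

cache = LogicLayerCache()
cache.load_from_data_layer()
cache.handle_request("bob")
cache.handle_request("carol")
```

After two requests, the service copy has diverged from the read-only copy, which still matches the data layer exactly.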
S103: copying the service version data after service processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer.
Illustratively, after service processing is performed on the service version data, the service version data is further copied to be the version data to be synchronized, and the version data to be synchronized is cached in the logic layer.
The data to be synchronized provided in this embodiment is copied from the service version data after service processing, and is used for asynchronously writing back into the data layer.
S104: and carrying out data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on the data write-back operation result.
Illustratively, after the service version data is copied and cached as the version data to be synchronized, a data write-back operation is performed on the data layer according to the version data to be synchronized; that is, the version data to be synchronized cached in the logic layer is written back to the data layer, and the memory version data in the data layer is updated with the version data to be synchronized.
Further, after the execution of the data write-back operation is completed, determining a write-back operation result corresponding to the data write-back operation, and returning a corresponding request processing result to a requester corresponding to each data processing request according to the write-back operation result.
For example, when the write-back operation result indicates that the data write-back operation is successful, the request processing result returned to the requester is a data processing result obtained by performing service processing on the service version data. And when the write-back operation result indicates that the data write-back operation fails, the request processing result returned to the requester is request processing error information (such as a service fault error code).
For any data processing request provided in this embodiment, the corresponding data processing result is returned to the requester only after the corresponding version data to be synchronized has been successfully written back to the data layer, ensuring consistency between the memory version data and the returned data processing result, and avoiding situations where a modification is reported to the requester as successful but was not actually applied.
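Steps S103 and S104 can be sketched as follows (a hypothetical sketch; `write_back`, `process_and_reply`, and the error string are illustrative stand-ins): the processed service data is copied as the to-be-synchronized copy, and per-request results are only returned once the write-back outcome is known.

```python
from copy import deepcopy

data_layer = {"value": 0}

def write_back(to_sync):
    # Hypothetical write-back; returns True on success. A real data layer
    # could fail here (network error, version conflict, ...).
    data_layer["value"] = deepcopy(to_sync)
    return True

def process_and_reply(service_version, requesters):
    # S103: copy the processed service version as the to-be-synchronized data.
    to_sync = deepcopy(service_version)
    # S104: results are returned to each requester only after the
    # write-back outcome is known, keeping result and data consistent.
    if write_back(to_sync):
        return {r: "processed" for r in requesters}
    return {r: "service fault error code" for r in requesters}

results = process_and_reply(42, ["req-1", "req-2"])
```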
Fig. 2 is a schematic diagram of a multi-concurrency request processing flow provided by an embodiment of the present application, in which the cache-based concurrency capability improvement method provided by this embodiment is applied to process multiple concurrent requests. Referring to fig. 2, when a data service with multiple concurrent requests is required (such as a multi-user co-streaming service in a live broadcast, an online flash-sale service, etc.), the corresponding memory version data is loaded from the data layer into the logic layer, and the loaded memory version data is cached as read-only version data. When data processing requests 1 and 2 are received, service processing is first performed on the service version data in the logic layer according to requests 1 and 2; the processed service version data is then cached as the version data to be synchronized, and the version data to be synchronized is written back to the data layer.
If the data processing request 3 is received in the data write-back operation process, the service version data is processed in the logic layer according to the data processing request 3, and when the data write-back operation for the data of the version to be synchronized is successful, the data processing results corresponding to the data processing requests 1 and 2 are returned to the requester.
Further, the data of the version to be synchronized is updated by using the service version data after service processing according to the data processing request 3, and the data of the version to be synchronized is written back to the data layer, and when the data writing back operation is successful, the data processing result corresponding to the data processing request 3 is returned to the requester.
In this multi-concurrency request processing flow, any request returns to the requester only after the data has been written back successfully, ensuring consistency between the data and the result. Meanwhile, if a plurality of requests arrive at the same time, the write-back waits until all of them have modified the service version data in the logic layer, and the finally modified service version data is written back to the data layer as the version data to be synchronized. During the data write-back operation, the logic layer does not block data processing requests but continues to provide the data processing service, so that the latency of any single data processing request is at most twice the data-layer latency (the latency introduced by service processing is negligible, since the service processing time is much shorter than the data-layer latency).
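The batching behavior described above can be simulated in a few lines (a hypothetical sketch; `apply_request`, `flush`, and the counter payload are illustrative): requests mutate the service copy without blocking, and one write-back acknowledges every request applied since the previous flush, which is why no request waits longer than roughly two write-back latencies.

```python
from copy import deepcopy

data_layer = {"counter": 0}
service = {"counter": 0}
pending = []  # request ids applied to the service copy, awaiting write-back

def apply_request(req_id):
    # The logic layer never blocks: requests mutate the service copy at once.
    service["counter"] += 1
    pending.append(req_id)

def flush():
    # One write-back covers every request applied since the previous flush.
    batch, to_sync = pending[:], deepcopy(service)
    pending.clear()
    data_layer.update(to_sync)           # simulated successful write-back
    return {req: "ok" for req in batch}  # results returned only after success

apply_request(1); apply_request(2)
first = flush()   # requests 1 and 2 are acknowledged together
apply_request(3)  # arrives while the first write-back is in flight
second = flush()  # the next write-back carries request 3
```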
The memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, service processing is performed on the service version data in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, a corresponding request processing result is returned to the requester. This realizes low-latency responses to highly concurrent requests, ensures consistency between the returned results and the data-layer data, and effectively improves the capacity for handling highly concurrent requests.
On the basis of the above embodiment, fig. 3 shows a flowchart of another method for improving concurrency capability based on cache, which is a specific implementation of the method for improving concurrency capability based on cache. Referring to fig. 3, the cache-based concurrency capability promotion method includes:
S201: based on the set loading time interval or based on the failure of the data write-back operation, the memory version data is loaded from the data layer, and the memory version data is cached in the logic layer as read-only version data.
Specifically, in the embodiment of the application, the memory version data is loaded from the data layer according to the set loading time interval and is cached in the logic layer as the read-only version data, so that the read-only version number of the read-only version data is kept consistent with the memory version number of the memory version data as far as possible. And starting to time the loading time interval when the memory version data is loaded from the data layer and cached as the read-only version data each time, and reloading the memory version data from the data layer and updating the read-only version data cache in the logic layer when the loading time interval is reached.
Furthermore, in addition to loading the memory version data of the data layer at regular time, when the data write-back operation to the data layer based on the version data to be synchronized fails, the embodiment of the application reloads the memory version data from the data layer, updates the read-only version data cache in the logic layer, and ensures that the read-only version data is consistent with the memory version data.
It can be understood that the read-only version number corresponding to the read-only version data cached by loading the memory version data from the data layer remains consistent with the memory version number of the memory version data, until the memory version data is modified. When the memory version data is modified, the corresponding memory version number is incremented; if another service or process modifies the memory version data at this time, the memory version number runs ahead of the current read-only version number.
S202: copying the read-only version data into service version data, increasing the service version number of the service version data, and caching the service version data in the logic layer.
Specifically, after each read-only version data update, the updated read-only version data is copied into service version data, and at this time, the service version number of the service version data is consistent with the read-only version number of the read-only version data.
Further, the service version number of the service version data is increased, that is, incremented by one, and the service version data is cached in the logic layer.
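The version-number bookkeeping of steps S201–S202 can be sketched as follows (a minimal sketch under illustrative values; the patent specifies no concrete data format): loading keeps the read-only version number equal to the memory version number, and copying into the service version increments it by one.

```python
from copy import deepcopy

# Data-layer memory version data with its memory version number (illustrative).
memory = {"data": "x", "version": 5}

# S201: load into the logic layer; the read-only version number matches
# the memory version number at load time.
read_only = deepcopy(memory)

# S202: copy the read-only data into the service version and increment
# its version number by one.
service = deepcopy(read_only)
service["version"] += 1
```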
S203: and carrying out service processing on the service version data according to one or more data processing requests sent by the requester.
S204: it is determined whether a logical layer is performing a data write back operation on the data layer. If yes, go to step S203, otherwise go to step S205.
When service processing of the service version data according to a data processing request is completed, the service version data needs to be copied as the version data to be synchronized, waiting for the data write-back operation to be performed.
Specifically, when the service processing of the service version data is completed according to the data processing request, whether the logic layer is performing data write-back operation on the data layer is judged.
If a data write-back operation is in progress at this time, the copying of the processed service data must wait until the write-back completes. The flow jumps to step S203 and continues to perform service processing on the service version data according to subsequently arriving data processing requests, so as to avoid blocking those subsequent requests.
S205: copying the service version data after service processing into to-be-synchronized version data, caching the to-be-synchronized version data in the logic layer, and increasing the service version number of the service version data.
And if the data write-back operation is not performed at the moment, copying the business version data into the version data to be synchronized. It can be understood that the version number to be synchronized of the version data to be synchronized at this time is identical to the service version number of the service version data.
Further, the version data to be synchronized obtained by copying the service version data is cached in the logic layer, and the service version number of the service version data is increased, that is, incremented by one. It can be understood that, in this embodiment, the service version number is incremented after each copy of the service version data, so that when the next processed service version data is copied as the version data to be synchronized, the new version number to be synchronized differs from the previous one by exactly 1, staying in step with the memory version number.
S206: and judging whether the version number to be synchronized corresponds to the next version number of the memory version number. If yes, go to step S207, otherwise, reload the memory version data from the data layer and go to step S202.
Specifically, the to-be-synchronized version number of the to-be-synchronized version data and the memory version number of the memory version data are determined, the to-be-synchronized version number and the memory version number are compared, and when the to-be-synchronized version number is consistent with the next version number of the memory version number (when the to-be-synchronized version number is equal to the memory version number plus one), the step S207 is skipped.
And when the version number to be synchronized is inconsistent with the next version number of the memory version number, determining that the data write-back operation fails, reloading the memory version data from the data layer, caching the memory version data as read-only version data in the logic layer, and jumping to the step S202.
Specifically, when the version number to be synchronized is not equal to the next version number after the memory version number, the memory version data is considered to have been modified by another process or service (the read-only version number of the read-only version data lags behind the memory version number of the memory version data). The memory version data must then be reloaded from the data layer and cached in the logic layer as read-only version data, ensuring that the read-only version data is consistent with the memory version data and that the read-only version number is consistent with the memory version number.
S207: and performing data write-back operation on the data layer according to the version data to be synchronized.
Specifically, when the version number to be synchronized is equal to the next version number of the memory version number, it can be determined that the memory version data is not modified by other processes or services at the moment, and then data write-back operation is performed on the data layer according to the version data to be synchronized, so that asynchronous update of the memory version data is realized. It will be appreciated that at this point, the memory version number is also incremented by the modification to the memory version data.
In one possible embodiment, if the data write back operation fails, the step S201 is skipped to reload the memory version data from the data layer, and the memory version data is cached as read-only version data in the logic layer. If the data write-back operation is successful, the process goes to step S208.
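The version-number precondition of steps S206–S207 amounts to optimistic concurrency control: the write-back succeeds only if the to-be-synchronized version number is exactly the next memory version number; otherwise another process has modified the data and a reload is required. A minimal sketch (the dict-based data layer and the `try_write_back` helper are hypothetical):

```python
def try_write_back(data_layer, to_sync):
    # S206: the write-back is allowed only when the to-be-synchronized
    # version number equals the memory version number plus one; otherwise
    # another process or service has modified the memory version data.
    if to_sync["version"] != data_layer["version"] + 1:
        return False  # conflict: the logic layer must reload (back to S201)
    # S207: perform the write-back, which also advances the memory version.
    data_layer["data"] = to_sync["data"]
    data_layer["version"] = to_sync["version"]
    return True

layer = {"data": "old", "version": 5}
ok = try_write_back(layer, {"data": "new", "version": 6})      # succeeds
conflict = try_write_back(layer, {"data": "x", "version": 6})  # now stale
```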
S208: Returning a corresponding request processing result to the requester based on the data write-back operation result.
S209: Based on a successful data write-back operation, updating the read-only version data according to the version data to be synchronized, and updating the version data to be synchronized according to the service version data, in preparation for the next data write-back operation.
Specifically, when the data write-back operation succeeds, the version data to be synchronized in the logic layer is copied directly, and the read-only version data is updated from it.
Further, after the read-only version data has been updated, it is copied into service version data and cached. The service version data is then copied as the version data to be synchronized, updating it in preparation for the next data write-back operation, and the process jumps to step S203 (at this point the next data write-back operation can proceed concurrently).
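The rotation of the three cached copies after a successful write-back (step S209) can be sketched as below. All class and method names are illustrative assumptions; the version-number relationships (V1, V2, V3) follow the description above.

```python
import copy

# Illustrative sketch of step S209: after a successful write-back, the
# logic layer rotates its three cached copies of the data.

class LogicLayer:
    def __init__(self, memory_data, memory_version):
        # Read-only version data mirrors the data layer (V1 = V0).
        self.read_only = copy.deepcopy(memory_data)
        self.read_only_version = memory_version
        # Service version data is one version ahead (V2 = V1 + 1).
        self.service = copy.deepcopy(self.read_only)
        self.service_version = self.read_only_version + 1
        self.to_sync = None
        self.to_sync_version = 0          # invalid until the first copy

    def snapshot_for_sync(self):
        # Copy service data into the version to be synchronized (V3 = V2),
        # then bump the service version so new requests modify a newer copy.
        self.to_sync = copy.deepcopy(self.service)
        self.to_sync_version = self.service_version
        self.service_version += 1

    def on_write_back_success(self):
        # Update the read-only data from the synchronized copy,
        # keeping it consistent with the data layer.
        self.read_only = copy.deepcopy(self.to_sync)
        self.read_only_version = self.to_sync_version

logic = LogicLayer({"queue": []}, memory_version=3)
logic.service["queue"].append("user1")    # a request modifies the service copy
logic.snapshot_for_sync()
logic.on_write_back_success()
assert logic.read_only == {"queue": ["user1"]}
assert logic.read_only_version == 4
```

Deep copies are used so that in-flight modifications to the service copy never leak into the snapshot being written back.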
Fig. 4 is a schematic diagram of the data storage state before a data processing request arrives according to an embodiment of the present application. As shown in fig. 4, assume the memory version number V0 = x for the memory version data initially stored in the data layer. The logic layer loads the memory version data from the data layer in advance according to the data required by the service and caches it as read-only version data, where the read-only version number V1 = y satisfies y ≤ x (y = x in the initial state; y < x when another process or service has modified the memory version data). The read-only version data is then copied and cached as service version data, and the service version number is incremented, so V2 = y + 1. At this point the version number of the version data to be synchronized is V3 = 0, and the version data to be synchronized is invalid data.
Fig. 5 is a schematic diagram of the data storage state when a first data request arrives according to an embodiment of the present application. As shown in fig. 5, assume that concurrent data processing requests R1 to R8 arrive. After the service version data has been modified according to requests R1 to R4, it is copied and cached as the version data to be synchronized for the data write-back operation, and the service version number is incremented, so V1 = y, V3 = y + 1, and V2 = y + 2. Because at this point V3 = V0 + 1 (the version number to be synchronized equals the next version number of the memory version number), the data write-back operation can proceed normally, and the service version data continues to be modified according to requests R5 to R8 during the write-back. When the write-back succeeds, the corresponding request processing results are returned to the requesters, and the version data to be synchronized is copied as read-only version data. At this point the read-only version number V1 = x + 1 and the memory version number V0 = x + 1.
Fig. 6 is a schematic diagram of the data storage state when a second data request arrives according to an embodiment of the present application. As shown in fig. 6, assume that data processing request R9 is received at this point. The service version data processed according to requests R5 to R8 is copied as the version data to be synchronized for the data write-back operation, and the service version data continues to be modified according to request R9 during the write-back. At this point V1 = x + 1, V3 = x + 2, and V2 = x + 2. Assuming V3 = V0 + 2 at this time (V0 denoting the initial memory version number x), the data write-back operation proceeds normally, and when it succeeds the corresponding request processing results are returned to the requesters. If V3 is less than V0 + 2, the data write-back operation fails and all executed data processing requests should fail; in this case all data processing requests R1 to R9 fail, the latest memory version data is forcibly reloaded from the data layer, and new data processing requests can be processed after the data loading completes.
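The progression through Figs. 4 and 5 can be traced end to end with a toy simulation. The variable names and the single integer payload are assumptions made for illustration; the point is that requests keep mutating the service copy while one write-back is in flight, and the next snapshot carries the whole batch.

```python
# Toy walkthrough of the Fig. 4 -> Fig. 5 states under stated assumptions.

memory = {"version": 0, "value": 0}    # data layer (V0 = x = 0)
read_only_v = memory["version"]        # V1 = y = x
service = {"value": memory["value"]}   # service version data
service_v = read_only_v + 1            # V2 = y + 1

# Requests R1-R4 modify the service copy.
for _ in range(4):
    service["value"] += 1

# Snapshot for write-back: V3 = V2, then the service version moves ahead.
to_sync, to_sync_v = dict(service), service_v
service_v += 1

# Requests R5-R8 keep modifying the service copy during the write-back.
for _ in range(4):
    service["value"] += 1

# The write-back succeeds because V3 == V0 + 1.
assert to_sync_v == memory["version"] + 1
memory["value"], memory["version"] = to_sync["value"], to_sync_v
assert memory == {"version": 1, "value": 4}   # batch R1-R4 committed
assert service["value"] == 8                  # R5-R8 await the next write-back
```

This illustrates why the scheme batches naturally: every request arriving during a write-back lands in the service copy and is committed by the following write-back.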
The concurrency capability improving method provided by the embodiment of the application can be applied to scenarios requiring high-concurrency request service capability. For example, in a live-streaming co-hosting (mic-connect) service, it can support a large number of users in a single live room simultaneously grabbing the mic, joining the mic-connect waiting queue, and similar operations, and can support broader product designs; for example, a pop-up notice can let all users in the room join the mic-connect waiting queue at the same time, improving the activity of hosts and co-hosting users. As another example, in an e-commerce flash-sale scenario, the commodity quantity data is cached in the logic layer; when a user buys a commodity, it is checked whether the remaining quantity in memory meets the purchase requirement, and if so, a purchase task is generated, with the order-completion logic executed asynchronously and serially.
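The flash-sale example can be sketched as follows. This is a deliberately simplified, single-process illustration; the function and variable names (`try_purchase`, `stock`, `order_tasks`) are assumptions, and the task queue stands in for the asynchronous serial order-completion logic described above.

```python
from collections import deque

# Hedged sketch of the flash-sale scenario: remaining stock is cached
# in the logic layer and checked before a purchase task is generated;
# order completion then runs asynchronously and serially.

stock = {"remaining": 2}       # commodity quantity data cached in the logic layer
order_tasks = deque()          # tasks to be executed asynchronously, in series

def try_purchase(user):
    if stock["remaining"] <= 0:
        return False           # in-memory copy already exhausted; reject fast
    stock["remaining"] -= 1
    order_tasks.append(user)   # generate a purchase task for serial execution
    return True

assert try_purchase("u1") and try_purchase("u2")
assert not try_purchase("u3")              # sold out; no task generated
assert list(order_tasks) == ["u1", "u2"]
```

The fast in-memory rejection keeps oversold requests from ever reaching the data layer, which is where the concurrency benefit of the cached copy comes from.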
In one possible embodiment, the availability of the service is ensured by a rate-limiting mechanism: for example, when a large number of data processing requests are received, data processing requests exceeding a set number are blocked, reducing the impact of the request flood on the server.
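One minimal way to realize such a rate limit is a bounded semaphore that sheds requests beyond a set concurrency budget. The `Throttle` class and its limit are assumptions for illustration, not part of the patent's method.

```python
import threading

# Minimal sketch of the rate-limiting idea: requests beyond a set
# concurrency budget are rejected to shield the server.

class Throttle:
    def __init__(self, limit):
        self._sem = threading.BoundedSemaphore(limit)

    def admit(self):
        # Non-blocking acquire: False means the request is shed.
        return self._sem.acquire(blocking=False)

    def release(self):
        # Called when an admitted request finishes processing.
        self._sem.release()

gate = Throttle(limit=2)
admitted = [gate.admit() for _ in range(3)]
assert admitted == [True, True, False]     # the third request is blocked
gate.release()                             # one slot frees up
assert gate.admit()                        # the next request passes
```

In practice the limit would be tuned to the server's capacity, and shed requests would receive a retry-later response rather than being dropped silently.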
According to the method, the memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, the service version data is processed in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, the corresponding request processing results are returned to the requesters. This achieves low-latency responses to high-concurrency requests, guarantees consistency between the returned results and the data-layer data, effectively improves the bearing capacity for high-concurrency requests, and supports high-concurrency modification of hot-spot data. Meanwhile, based on a compare-and-swap (CAS) update operation with a monotonically increasing version number, only a modification based on the latest version number can ultimately be committed to the data layer, which effectively ensures update accuracy. Because the corresponding request processing result is returned to the requester only when the data write-back operation succeeds, situations where a request actually fails but is reported as successful due to network or back-end node anomalies (for example, a server losing in-memory data that had not yet been written back to disk because of a power failure) are reduced, effectively preventing data disorder.
In a live-streaming co-hosting service scenario, the method effectively improves the concurrency bearing capacity of co-hosting in a room, improves protection against large volumes of request traffic, reduces server anomalies or crashes caused by request floods, and supports high-concurrency service scenarios. For example, after a host enables free co-hosting, a free co-hosting notification is sent to all users, and users can be scheduled onto the mic immediately in response to their mic-on requests, reducing user waiting time and optimizing user experience.
Fig. 7 is a schematic structural diagram of a concurrency capability lifting device based on cache according to an embodiment of the present application. Referring to fig. 7, the cache-based concurrency capability promotion device includes a data loading module 31, a data processing module 32, a data synchronization module 33, and a data write-back module 34.
The data loading module 31 is configured to load memory version data from a data layer, and cache the memory version data as read-only version data in a logic layer; the data processing module 32 is configured to copy the read-only version data into service version data, cache the service version data in the logic layer, and perform service processing on the service version data according to one or more data processing requests sent by a requester; the data synchronization module 33 is configured to copy the service version data after service processing into version data to be synchronized, and cache the version data to be synchronized in the logic layer; the data write-back module 34 is configured to perform a data write-back operation on the data layer according to the version data to be synchronized, and return a corresponding request processing result to the requester based on a data write-back operation result.
The memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, the service version data is processed in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester, achieving low-latency response to high-concurrency requests, ensuring consistency between the returned results and the data-layer data, and effectively improving the bearing capacity for high-concurrency requests.
The embodiment of the application also provides a cache-based concurrency capability promotion device, which may integrate the cache-based concurrency capability promotion apparatus provided by the above embodiment. Fig. 8 is a schematic structural diagram of a concurrency capability promotion device based on cache according to an embodiment of the present application. Referring to fig. 8, the cache-based concurrency capability promotion device includes: an input device 43, an output device 44, a memory 42, and one or more processors 41. The memory 42 is configured to store one or more programs; the one or more programs, when executed by the one or more processors 41, cause the one or more processors 41 to implement the cache-based concurrency capability promotion method provided by the above embodiments. The cache-based concurrency capability promotion apparatus and device described above can be used to execute the cache-based concurrency capability promotion method provided by any embodiment, and have corresponding functions and beneficial effects.
Embodiments of the present application also provide a storage medium containing computer executable instructions which, when executed by a computer processor, are configured to perform a cache-based concurrency capability promotion method as provided by the above embodiments. Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the above-mentioned cache-based concurrency capability promotion method, and related operations in the cache-based concurrency capability promotion method provided in any embodiment of the present application may also be performed. The device, the equipment and the storage medium for improving concurrency capability based on cache provided in the foregoing embodiments may perform the method for improving concurrency capability based on cache provided in any embodiment of the present application, and technical details not described in detail in the foregoing embodiments may be referred to the method for improving concurrency capability based on cache provided in any embodiment of the present application.
The foregoing description is only of the preferred embodiments of the application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous modifications, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, while the application has been described in connection with the above embodiments, the application is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit of the application, the scope of which is set forth in the following claims.

Claims (10)

1. The concurrency capability improving method based on the cache is characterized by comprising the following steps of:
Loading memory version data from a data layer, and caching the memory version data as read-only version data in a logic layer;
Copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester;
Copying the service version data after service processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer;
Determining a version number to be synchronized of the version data to be synchronized and a memory version number of the memory version data, and judging whether the version number to be synchronized corresponds to a next version number of the memory version number; if yes, performing a data write-back operation on the data layer according to the version data to be synchronized; if not, reloading the memory version data from the data layer, and caching the memory version data in the logic layer as read-only version data; and based on the data write-back operation result, returning a corresponding request processing result to the requester.
2. The cache-based concurrency capability promotion method of claim 1, wherein loading memory version data from a data layer comprises:
And loading the memory version data from the data layer based on the set loading time interval or based on failure of the data write-back operation.
3. The cache-based concurrency capability enhancement method of claim 1, wherein copying the read-only version data into service version data and caching the service version data in the logical layer comprises:
Copying the read-only version data into service version data, increasing the service version number of the service version data, and caching the service version data in the logic layer.
4. The cache-based concurrency capability enhancement method of claim 1, wherein the caching the version data to be synchronized in the logical layer comprises:
And caching the version data to be synchronized in the logic layer, and increasing the service version number of the service version data.
5. The method for improving concurrency capability based on cache as set forth in claim 1, wherein the copying the service version data after service processing as version data to be synchronized comprises:
Determining whether a logic layer is performing data write-back operation on the data layer;
If yes, continuing to process the service version data according to the data processing request;
If not, copying the service version data after service processing into the version data to be synchronized.
6. The cache-based concurrency capability promotion method of claim 1, further comprising, after the returning of the corresponding request processing result to the requester:
And based on successful data write-back operation, updating the read-only version data according to the version data to be synchronized, and updating the version data to be synchronized according to the service version data so as to perform the next data write-back operation.
7. The method for improving concurrency capability based on cache as set forth in claim 1, wherein after caching the memory version data as read-only version data in the logic layer, further comprising:
the read-only version data is provided to a requester based on a correctness service request issued by the requester.
8. The concurrency capability lifting device based on the cache is characterized by comprising a data loading module, a data processing module, a data synchronizing module and a data writing back module, wherein:
the data loading module is used for loading the memory version data from the data layer and caching the memory version data in the logic layer as read-only version data;
The data processing module is used for copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester;
The data synchronization module is used for copying the service version data after service processing into version data to be synchronized and caching the version data to be synchronized in the logic layer;
The data write-back module is configured to determine a version number to be synchronized of the version data to be synchronized and a memory version number of the memory version data, and determine whether the version number to be synchronized corresponds to a next version number of the memory version number; if yes, perform a data write-back operation on the data layer according to the version data to be synchronized; if not, reload the memory version data from the data layer, and cache the memory version data in the logic layer as read-only version data; and based on the data write-back operation result, return a corresponding request processing result to the requester.
9. A cache-based concurrency capability promotion device, comprising: a memory and one or more processors;
The memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache-based concurrency capability promotion method of any one of claims 1-7.
10. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the cache-based concurrency capability enhancement method of any one of claims 1-7.
CN202110150945.1A 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache Active CN112860794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110150945.1A CN112860794B (en) 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110150945.1A CN112860794B (en) 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache

Publications (2)

Publication Number Publication Date
CN112860794A CN112860794A (en) 2021-05-28
CN112860794B true CN112860794B (en) 2024-08-13

Family

ID=75986448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110150945.1A Active CN112860794B (en) 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache

Country Status (1)

Country Link
CN (1) CN112860794B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6578041B1 (en) * 2000-06-30 2003-06-10 Microsoft Corporation High speed on-line backup when using logical log operations
CN105740260A (en) * 2014-12-09 2016-07-06 阿里巴巴集团控股有限公司 Method and device for extracting template file data structure

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8089805B2 (en) * 2008-11-20 2012-01-03 Micron Technology, Inc. Two-part programming methods and memories
CN106357787A (en) * 2016-09-30 2017-01-25 郑州云海信息技术有限公司 Storage disaster tolerant control system
CN108228669B (en) * 2016-12-22 2020-08-14 腾讯科技(深圳)有限公司 Cache processing method and device
CN106951456B (en) * 2017-02-24 2020-03-17 广东广信通信服务有限公司 Memory database system and data processing system
CN109413127B (en) * 2017-08-18 2022-04-12 北京京东尚科信息技术有限公司 Data synchronization method and device
CN107545060A (en) * 2017-08-31 2018-01-05 聚好看科技股份有限公司 A kind of method for limiting speed and device of redis principals and subordinates full dose synchrodata
CN107870970B (en) * 2017-09-06 2019-10-25 北京理工大学 A kind of data store query method and system
CN108595451A (en) * 2017-12-04 2018-09-28 阿里巴巴集团控股有限公司 Service request processing method and device
US10402116B2 (en) * 2017-12-11 2019-09-03 Micron Technology, Inc. Systems and methods for writing zeros to a memory array
CN108234641B (en) * 2017-12-29 2021-01-29 北京奇元科技有限公司 Data reading and writing method and device based on distributed consistency protocol
CN110321227A (en) * 2018-03-29 2019-10-11 腾讯科技(深圳)有限公司 Page data synchronous method, electronic device and computer readable storage medium
CN108829413A (en) * 2018-05-07 2018-11-16 北京达佳互联信息技术有限公司 Data-updating method, device and computer readable storage medium, server
CN110059135B (en) * 2019-04-12 2024-05-17 创新先进技术有限公司 Data synchronization method and device
CN110737682A (en) * 2019-10-17 2020-01-31 贝壳技术有限公司 cache operation method, device, storage medium and electronic equipment
CN111258897A (en) * 2020-01-15 2020-06-09 网银在线(北京)科技有限公司 Service platform testing method, device and system
CN111427853A (en) * 2020-03-23 2020-07-17 腾讯科技(深圳)有限公司 Data loading method and related device
CN111581239A (en) * 2020-04-10 2020-08-25 支付宝实验室(新加坡)有限公司 Cache refreshing method and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6578041B1 (en) * 2000-06-30 2003-06-10 Microsoft Corporation High speed on-line backup when using logical log operations
CN105740260A (en) * 2014-12-09 2016-07-06 阿里巴巴集团控股有限公司 Method and device for extracting template file data structure

Also Published As

Publication number Publication date
CN112860794A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
JP3830886B2 (en) Method for storing data in nonvolatile memory
RU2619181C2 (en) System and method for downloadable content transmission optimizing
US6665747B1 (en) Method and apparatus for interfacing with a secondary storage system
KR20120005488A (en) System and method for reducing startup cost of a software application
CN115599747B (en) Metadata synchronization method, system and equipment of distributed storage system
CN106796546B (en) Method and apparatus for implementation in a data processing system
CN113254536A (en) Database transaction processing method, system, electronic device and storage medium
CN114911528B (en) Branch instruction processing method, processor, chip, board card, equipment and medium
CN112860794B (en) Concurrency capability lifting method, device, equipment and storage medium based on cache
US8090769B2 (en) Dynamically generating web contents
CN111309799A (en) Method, device and system for realizing data merging and storage medium
CN113467719A (en) Data writing method and device
WO2018188959A1 (en) Method and apparatus for managing events in a network that adopts event-driven programming framework
CN113204520A (en) Remote sensing data rapid concurrent read-write method based on distributed file system
CN115774621B (en) Request processing method, system, equipment and computer readable storage medium
US20160239237A1 (en) Externalized execution of input method editor
US20230236878A1 (en) Efficiently launching tasks on a processor
US20140279928A1 (en) System and method for reversing changes executed by change management
CN112114757B (en) Storage method and system in object storage system, computing device and medium
WO2006088917A1 (en) Methodology for effectively utilizing processor cache in an electronic system
JP2010176512A (en) Storage device, storage device control method, and storage device control program
CN113961298A (en) Page switching method, device, equipment and medium
CN112559568A (en) Virtual article determination method and device and computer readable storage medium
CN118377741B (en) Atomic operation execution system, method and device
US11755425B1 (en) Methods and systems for synchronous distributed data backup and metadata aggregation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant