CN115221082B - Data caching method and device and storage medium - Google Patents

Data caching method and device and storage medium

Info

Publication number
CN115221082B
CN115221082B (granted publication) · CN202210840912.4A (application)
Authority
CN
China
Prior art keywords
bit width
data
target
writing
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210840912.4A
Other languages
Chinese (zh)
Other versions
CN115221082A (en)
Inventor
李彦平
王文俊
吴昌昊
尹得智
邹佳鑫
邵德立
谭晟吉
张雄林
刘杰
柏森洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China South Industries Group Automation Research Institute
Original Assignee
China South Industries Group Automation Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China South Industries Group Automation Research Institute filed Critical China South Industries Group Automation Research Institute
Priority to CN202210840912.4A priority Critical patent/CN115221082B/en
Publication of CN115221082A publication Critical patent/CN115221082A/en
Application granted granted Critical
Publication of CN115221082B publication Critical patent/CN115221082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)
  • Image Input (AREA)

Abstract

The invention discloses a data caching method, device and storage medium. The data caching method comprises: determining a first bit width W1 of input data and a second bit width W2 of output data, where the first bit width W1 differs from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as the target bit width W of the storage units, and taking the n-th power of 2 as the target number d of storage units, where the target number d is not less than the ratio of the first bit width W1 to the common divisor. The method avoids complex Gray-code conversion and synchronization design in principle, reducing the FPGA's hardware resource overhead. The bit widths of the input/output interfaces can be defined more freely and flexibly, so the video memory footprint is greatly reduced and cost is saved.

Description

Data caching method and device and storage medium
Technical Field
The present invention relates to the field of data transmission technologies, and in particular, to a data caching method, apparatus, and storage medium.
Background
Generally, when two devices exchange data across clock domains or at different transfer rates, a data buffering device must be inserted between them. In digital systems, the most common such buffer is the FIFO (First In, First Out). Unlike an ordinary memory, a FIFO has no external read/write address lines, which makes it very simple to use; the drawback is that data can only be written and read in order — the data addresses are generated by internal read and write pointers that increment automatically — so a specific address cannot be read or written via address lines as with an ordinary memory.
FIFOs are typically used for data transfer between different clock domains or for interfacing data paths of different widths. Depending on whether the read and write clock domains are the same, FIFOs are divided into synchronous FIFOs and asynchronous FIFOs.
Common FIFO parameters:
Width: the number of data bits in a single FIFO read or write operation;
Depth: the number of N-bit words (for an N-bit width) that the FIFO can store;
Full flag (full): asserted when the FIFO is full;
Empty flag (empty): asserted when all data in the FIFO have been read out;
Read clock (r_clk): the reference clock of the read-side clock domain;
Write clock (w_clk): the reference clock of the write-side clock domain;
Read pointer (r_addr): always points to the next cell to be read; on reset it points to the 1st cell (numbered 0);
Write pointer (w_addr): always points to the cell about to be written; on reset it points to the 1st cell (numbered 0).
The core of asynchronous FIFO design is the generation of the full/empty flags. Typically, for a depth of 2^n, the read and write pointers are (n+1) bits wide. Example: for a FIFO of depth 16, the pointers are 5 bits wide. On reset, both pointers point to 0_0000; whenever the read and write pointers are identical, the FIFO is "empty". When the most significant bits of the two pointers differ but all other bits match — e.g. write pointer = 1_0000 and read pointer = 0_0000 — the FIFO is "full". The lower 4 bits (0000–1111) of each pointer address the actual storage cells, while the highest bit acts as a "lap" indicator: when both pointers address the same cell, it distinguishes the case where the write pointer has lapped the read pointer by one full circle (different MSBs — the FIFO has been written full) from the case where the read pointer has caught up with the write pointer on the same lap (same MSBs — the FIFO has been read empty).
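The empty/full rule just described (equal pointers mean empty; pointers that match except in the lap-indicator MSB mean full) can be sketched as follows for the depth-16 example. This is an illustrative model, not the patent's circuit:

```python
DEPTH = 16  # 2**4 storage cells, so pointers are n + 1 = 5 bits wide

def is_empty(w_ptr: int, r_ptr: int) -> bool:
    # Read pointer has caught up with the write pointer on the same lap.
    return w_ptr == r_ptr

def is_full(w_ptr: int, r_ptr: int) -> bool:
    # Same cell address (low n bits) but different lap-indicator MSB:
    # the write pointer has lapped the read pointer exactly once.
    mask = DEPTH - 1
    return (w_ptr & mask) == (r_ptr & mask) and (w_ptr ^ r_ptr) == DEPTH
```

After reset both pointers are 0_0000 (empty); after 16 uninterrupted writes the write pointer is 1_0000 while the read pointer is still 0_0000 (full).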
In practical designs, the read and write pointers are first converted from binary to Gray code and then compared to generate the full/empty flags, as shown in fig. 10. This is because the write pointer (w_addr) changes in the write clock domain (w_clk) while the read pointer (r_addr) changes in the read clock domain (r_clk); before the two can be compared across clock domains, the data metastability problem must first be addressed.
The defining property of Gray code is that only one bit changes on each increment or decrement. Its key advantage here is that the comparison value delivered to the empty/full flag generator can only be either the current state or the previous state — no intermediate transitional state can appear — so synchronizer circuits in different clock domains cannot capture a spurious intermediate value. Its limitation is that the counting depth must be a power of 2; otherwise the one-bit-per-step property is lost.
When the input and output bit widths differ, they are generally required to be related by a power-of-2 factor, for two reasons. First, the FIFO storage is still implemented on dual-port RAM resources and is constrained by the configurations those RAMs support. Second, an FPGA cannot compare the magnitude of Gray codes — it can only test them for equality — so if Gray code is to be used to eliminate metastability, the input and output widths must be related by a power of 2. For example: with an input width of 10 and a depth of 16, the write pointer needs (4+1) = 5 bits, as described above. If the chosen output width is 5, the corresponding depth is 32 and the read pointer needs (5+1) = 6 bits; the upper 5 bits of the read pointer are then compared (in Gray code) against the write pointer to generate the empty/full signals as before, while the lowest bit of the read pointer serves as the data-selector control signal on output.
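The binary-to-Gray mapping that this comparison relies on can be sketched generically (these are the standard conversions, not taken from the patent):

```python
def bin_to_gray(b: int) -> int:
    # Adjacent binary values map to codes differing in exactly one bit.
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    # Invert the mapping by folding the bits back down.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

The one-bit-per-step property holds only over a full power-of-2 cycle, which is why the counting depth must be 2^n.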
The input and output widths of an asynchronous FIFO must therefore keep a power-of-2 relationship. An SDRAM video memory, however, supports only output bit widths of 16, 32, 64 or 128 at its highest transfer rate, so the memory's output width (i.e. the input of the data buffering device) is set to 128 bits. The LCD uses an RGB display mode, so each output transfer should be 24 bits. Because the FIFO input and output widths must be power-of-2 multiples of each other, the FIFO output width can only be set to 128/4 = 32 bits; on each read, only the lower 24 bits of the FIFO output are passed to the LCD controller and the upper 8 bits are discarded.
Since each 32-bit storage word holds only 24 bits of valid data, the conventional buffering scheme wastes memory resources merely to satisfy the FIFO input/output width constraint. In the context of designs built on domestically produced components, memory space is scarce and large-capacity domestic memory chips are expensive.
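The overhead this paragraph describes is easy to quantify; the following back-of-the-envelope check is ours, not the patent's:

```python
fifo_word_bits = 32   # output width forced by the power-of-2 rule (128 / 4)
pixel_bits = 24       # valid RGB data actually needed per read
wasted_fraction = (fifo_word_bits - pixel_bits) / fifo_word_bits  # 8 of 32 bits
```

That is, a quarter of the buffer's capacity stores no pixel data at all.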
Disclosure of Invention
In view of the above, the present invention provides a data caching method, apparatus and storage medium for overcoming the above problems or at least partially solving the above problems.
The invention provides the following scheme:
a data caching method, comprising:
determining a first bit width W1 of input data and a second bit width W2 of output data, the first bit width W1 being different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of the storage units, and taking the n-th power of 2 as a target number d of storage units, the target number d being not less than the ratio of the first bit width W1 to the common divisor;
receiving a request to write first data, and writing the first data, in segments, into W1/W consecutive storage units, the first data having the first bit width W1;
and receiving a read data request, merging the data in the continuous W2/W storage units and outputting second data, wherein the second data has the second bit width W2.
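Using the concrete widths from the embodiment described later (W1 = 128, W2 = 24), the claimed parameters work out as follows; this is an illustrative computation, not the patent's hardware:

```python
import math

W1, W2 = 128, 24                  # example widths from the embodiment
W = math.gcd(W1, W2)              # target cell bit width: 8
write_cells = W1 // W             # cells consumed per write: 16
read_cells = W2 // W              # cells merged per read: 3

# Smallest power of two satisfying the claim d >= W1 / common divisor:
n_min = (write_cells - 1).bit_length()
d_min = 2 ** n_min
# The embodiment actually picks a deeper n = 7 (d = 128) for extra margin.
```

Any power of two at or above d_min satisfies the claim; deeper stores leave more headroom for the early full/empty thresholds described below.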
Preferably: the greatest common divisor of the first bit width W1 and the second bit width W2 is taken as the target bit width W of a storage unit.
Preferably: a bit width W3 of the read/write pointers is determined from the power n, with W3 = n + 1; writing the first data, in segments, into W1/W consecutive storage units adds W1/W to the write pointer; merging the data of W2/W consecutive storage units and outputting the second data adds W2/W to the read pointer.
Preferably: on receiving a request to write first data, it is judged whether a target write condition is met, and if so, the first data is written, in segments, into W1/W consecutive storage units;
on receiving a read request, it is judged whether a target read condition is met, and if so, the data of W2/W consecutive storage units are merged and output as second data.
Preferably: the target write condition and the target read condition both include a reset signal, and whether they are met is judged according to the state of that reset signal; the reset signal is issued by a reset controller.
Preferably: the target writing condition further comprises a write pointer signal and a full flag signal;
the target write condition is met either when the reset signal is 1, the write pointer points to address 0 and the full flag is 0, or when the reset signal is 0, the write-enable signal is 1 and the full flag is 0;
the logic for generating the full flag is: when a valid write occurs, the full flag is set to 1 if the storage-space count is within two write bursts of the full value, and under other conditions it is set to 1 when the count is within one write burst of the full value.
Preferably: the target readout condition further includes a read pointer signal and a null flag signal;
the target read condition is met either when the reset signal is 1, the read pointer points to address 0 and the empty flag is 0, or when the reset signal is 0, the read-enable signal is 1 and the empty flag is 0;
the logic for generating the empty flag is: when a valid read occurs, the empty flag is set to 1 if the storage-space count is within two read bursts of the empty value, and under other conditions it is set to 1 when the count is within one read burst of the empty value.
Preferably: the storage unit comprises a memory type data storage unit; the first bit width W1 is 128, the second bit width W2 is 24, the common divisor is 8, and the power exponent n is 7.
A data caching apparatus, for connecting between a video memory and a display apparatus, the apparatus comprising:
a parameter setting unit, for determining a first bit width W1 of input data and a second bit width W2 of output data, the first bit width W1 being different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of the storage units, and taking the n-th power of 2 as a target number d of storage units, the target number d being not less than the ratio of the first bit width W1 to the common divisor;
a writing unit, configured to receive a request to write first data and to write the first data, in segments, into W1/W consecutive storage units, the first data having the first bit width W1;
and the reading unit is used for receiving a data reading request, merging the data in the continuous W2/W storage units and outputting second data, wherein the second data has the second bit width W2.
A storage medium having stored therein computer-executable instructions that, when loaded and executed by a processor, implement a data caching method as described above.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the data caching method, the data caching device and the data caching storage medium, complex Gray code conversion and synchronous design are avoided in principle, and hardware resource cost of an FPGA is reduced. The bit width definition of the input/output interface is more free and flexible, so that the occupied space of the video memory is greatly reduced, and the cost is saved.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart of a data caching method according to an embodiment of the present invention;
fig. 2 is a block diagram of a data caching apparatus according to an embodiment of the present invention;
FIG. 3 is a first write flow diagram provided by an embodiment of the present invention;
FIG. 4 is a second write flow diagram provided by an embodiment of the present invention;
FIG. 5 is a first readout flow chart provided by an embodiment of the present invention;
FIG. 6 is a second read flow diagram provided by embodiments of the present invention;
fig. 7 is a data_num calculation flowchart provided by an embodiment of the present invention;
fig. 8 is a schematic diagram of a data caching apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a data caching apparatus in an application state according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a universal asynchronous FIFO provided by an embodiment of the present invention.
In the figure: write address counter 1, read address counter 2, full flag generator 3, empty flag generator 4, address subtractor 5, reset controller 6, write-address Gray-code conversion circuit 7, read-address Gray-code conversion circuit 8, D flip-flop 9.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It should be apparent that the described embodiments are only some of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
Referring to fig. 1, a data caching method provided in an embodiment of the present invention is, as shown in fig. 1, the method may include:
s101: determining a first bit width W1 of input data and a second bit width W2 of output data; the first bit width W1 is different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of a storage unit, and taking a power number n of 2 as a target number d of the storage unit; the target number d is not less than the ratio of the first bit width W1 to the common divisor;
s102, receiving a first data writing request, and writing the first data into continuous W1/W storage units in a segmented mode, wherein the first data has the first bit width W1;
s103, receiving a read data request, merging the data in the continuous W2/W storage units and outputting second data, wherein the second data has the second bit width W2.
The data caching method provided by this embodiment implements a display buffer that accepts input data of a first bit width and produces output data of a second bit width. By allowing a cache whose input and output widths are not related by a power of 2, it greatly reduces on-chip storage, increases design flexibility and lowers product cost.
It can be appreciated that the first bit width provided by the embodiments is generally larger than the second; for example, in image display applications the first bit width may be 128 bits and the second 24 bits. Since the bit width of each storage unit determines how many data bits it can hold, and to keep the number of storage units as small as possible, the embodiment may take the greatest common divisor of the first bit width W1 and the second bit width W2 as the target bit width W of the storage units. Using the greatest common divisor lets each storage unit hold as many data bits as possible without affecting the final output of the second data.
Further, the bit width W3 of the read/write pointers is determined from the power n as W3 = n + 1; writing the first data, in segments, into W1/W consecutive storage units adds W1/W to the write pointer, and merging the data of W2/W consecutive storage units into the second data adds W2/W to the read pointer.
In practical application, to prevent data from being lost or the cache from refusing writes once full, the embodiment may further provide that, on receiving a request to write first data, it is judged whether a target write condition is met, and if so, the first data is written, in segments, into W1/W consecutive storage units;
and on receiving a read request, it is judged whether a target read condition is met, and if so, the data of W2/W consecutive storage units are merged and output as second data.
Both the target write condition and the target read condition include a reset signal, and whether they are met is judged according to the state of that reset signal, which is issued by the reset controller. The reset controller may be external; a single reset signal clears the data in all storage units, and it can be issued during an idle interval without disturbing normal data processing. For example, in a picture-processing scenario, the reset may be performed in the gap after one frame finishes while waiting for the next frame. Resetting effectively prevents an accidental error from leaving the cache unusable, and ensures that no storage units are left holding stale data.
The target writing condition further comprises a write pointer signal and a full flag signal;
The target write condition is met either when the reset signal is 1, the write pointer points to address 0 and the full flag is 0, or when the reset signal is 0, the write-enable signal is 1 and the full flag is 0;
the logic for generating the full flag is: when a valid write occurs, the full flag is set to 1 if the storage-space count is within two write bursts of the full value, and under other conditions it is set to 1 when the count is within one write burst of the full value.
The target read condition further includes a read pointer signal and an empty flag signal;
the target read condition is met either when the reset signal is 1, the read pointer points to address 0 and the empty flag is 0, or when the reset signal is 0, the read-enable signal is 1 and the empty flag is 0;
the logic for generating the empty flag is: when a valid read occurs, the empty flag is set to 1 if the storage-space count is within two read bursts of the empty value, and under other conditions it is set to 1 when the count is within one read burst of the empty value.
In practical applications, the first bit width and the second bit width may be determined according to a bit width of data specified to be written and output in a field in which the caching method is applied. For example, in an implementation manner, the storage unit may include a memory-type data storage unit; the first bit width W1 is 128, the second bit width W2 is 24, the common divisor is 8, and the power exponent n is 7.
To achieve the highest transfer rate, the output bit width of the video memory (i.e. the write side of the data buffering device) is set to 128 bits. The LCD display device works in RGB mode, so each datum it receives (i.e. the output of the data buffering device) should be 24 bits wide.
The method provided by the embodiment of the present application is described in detail below by taking an example in which the method provided by the embodiment of the present application is applied to data caching between an SDRAM video memory and an LCD display device.
As shown in fig. 2, the system uses memory-type cells as storage units: the greatest common divisor 8 of the input bit width 128 (the first bit width W1) and the output bit width 24 (the second bit width W2) is taken as the storage-unit bit width W, and a suitable power of 2 as the depth — here n = 7, giving a target number d = 2^7 = 128 units. The bit width W3 of the read/write pointers is then n + 1 = 7 + 1 = 8 bits.
As shown in figs. 3 and 4, the reset signal has the highest priority; it is asserted in the gap after one display frame completes. When the reset signal is 1, the write pointer points to address 0 and the full signal is set to 0. When the reset signal is 0, if the write-enable signal wr_en = 1 and full = 0, the 128-bit input word is written, in segments, into 16 consecutive memory cells (W1/W = 128/8 = 16) and the write pointer is incremented by 16; otherwise the write is invalid and no operation is performed.
As shown in figs. 5 and 6, the reset signal has the highest priority: when it is 1, the read pointer points to address 0 and the empty signal is set to 1. When the reset signal is 0, if the read-enable signal rd_en = 1 and empty = 0, the data of 3 consecutive memory cells (W2/W = 24/8 = 3) are merged into one 24-bit output word and the read pointer is incremented by 3; otherwise the read is invalid and no operation is performed.
Logic for generating the full/empty flags: the key to producing full/empty flags equivalent to a conventional FIFO's lies in handling metastability. Unlike the conventional FIFO, which uses Gray code to avoid data metastability entirely, the strategy adopted in this application is to tolerate occasional metastability-induced errors for a bounded time.
The full-empty flag determination logic provided by the present application can refer to fig. 3, 4, 5, and 6, and specifically includes:
when a valid write occurs, if the storage-space count data_num is within two write bursts of the full value 127, full is set to 1 (count above 96); under other conditions, full is set to 1 once data_num is within one write burst (count above 112);
when a valid read occurs, if data_num is within two read bursts of the empty value 0, empty is set to 1 (count below 6); under other conditions, empty is set to 1 once data_num is within one read burst (count below 3);
where data_num = w_addr − r_addr; the calculation flow is shown in fig. 7.
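The occupancy count and the early-asserted flags described above can be sketched as follows. The exact boundary values are our reading of the figures (a margin of two write/read bursts, falling back to one); treat them as assumptions rather than the patent's precise thresholds:

```python
PTR_BITS = 8      # n + 1 = 7 + 1 in the embodiment
DEPTH = 128       # 2**7 storage cells
WRITE_CELLS = 16  # one 128-bit write spans 16 byte cells
READ_CELLS = 3    # one 24-bit read merges 3 byte cells

def data_num(w_addr: int, r_addr: int) -> int:
    # Occupancy; the subtraction wraps naturally modulo 2**(n+1).
    return (w_addr - r_addr) % (1 << PTR_BITS)

def full_flag(num: int, margin: int = 2) -> bool:
    # Assert "full" early, `margin` write bursts before space runs out,
    # so a momentary metastability-induced misread cannot cause overrun.
    return num > DEPTH - 1 - margin * WRITE_CELLS

def empty_flag(num: int, margin: int = 2) -> bool:
    # Symmetric early "empty": within `margin` read bursts of zero.
    return num < margin * READ_CELLS
```

Stopping writes (or reads) a burst or two early is what lets the design trade exact Gray-code comparison for a simple binary subtraction with margin.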
first, it is explained that in a general use scenario, errors due to data metastability do not occur continuously. The data change from one steady state to a new steady state can be divided into three stages of 'old steady state-metastable state-new steady state', after the FPGA finishes wiring, the delay from the combinational logic circuit to the next stage of trigger is fixed, so that the data metastable state is always in a fixed time period after the arrival of a trigger edge, and when the trigger edge has strong periodicity, the metastable state occurrence time period also keeps the same periodicity. Therefore, when the read-write clock frequencies of the asynchronous FIFOs are not consistent, the "virtual full" or "virtual empty" caused by data metastable state at a certain time can not bring continuous influence, and errors can be corrected when the next read-write clock edge arrives.
If the read and write clocks of the asynchronous FIFO are the same frequency (or an integer multiple of one another), the metastable window can be avoided by adjusting the phase relationship between the two clocks. This is easy to do with FPGA clock resources.
If the clocks are the same frequency (or an integer multiple) but the phase offset makes each clock's active edge always fall within the other side's metastable window (so the phase cannot be adjusted away), the problem can be avoided by choosing a deeper storage array and reserving margin. The write and read rates of an asynchronous FIFO necessarily differ: when the write rate exceeds the read rate, writing must pause intermittently, and lowering the full-comparison threshold stops writing early, avoiding the error of continuing to write into a full store. When the write rate is below the read rate, the analysis is symmetric.
In addition, this application adds a protective reset design: to prevent erroneous data left in the FIFO by an occasional read/write error from shifting the data of every subsequent display frame, the buffer FIFO is reset during the wait interval at the end of each displayed frame.
In summary, compared with a conventional display-buffer FIFO design, the data buffering method provided by this application allows the input/output interface bit widths to be defined more freely and flexibly, greatly reduces the video memory footprint and saves cost. It also avoids complex Gray-code conversion and synchronization design, reducing FPGA hardware resource overhead.
Corresponding to the above method embodiment, as shown in fig. 8 and 9, the present application may further provide a data caching device 30, configured to be connected between the display memory 10 and the display device 20, where the data caching device 30 includes:
a parameter setting unit 301, for determining a first bit width W1 of input data and a second bit width W2 of output data, the first bit width W1 being different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of the storage units, and taking the n-th power of 2 as a target number d of storage units, the target number d being not less than the ratio of the first bit width W1 to the common divisor;
a writing unit 302, configured to receive a request for writing first data, and write the first data, in segments, into W1/W consecutive storage units, where the first data has the first bit width W1;
a reading unit 303, configured to receive a data reading request, merge data in W2/W consecutive storage units, and output second data, where the second data has the second bit width W2.
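Using the example parameters of the embodiment (W1 = 128, W2 = 24, W = 8, n = 7), the segmented write and merged read performed by units 302 and 303 can be sketched as the following Python model. The class name, the MSB-first segment order, and the early-full check are illustrative assumptions, not the patented FPGA implementation:

```python
class WidthConvertingFifo:
    """Sketch of the bit-width-converting cache (hypothetical Python model).

    A W1-bit input word is split into W1/W cells of W bits each (unit 302);
    a W2-bit output word is merged from W2/W consecutive cells (unit 303).
    Read/write pointers are n + 1 bits wide, so they wrap at 2 * d and their
    distance distinguishes a full buffer from an empty one.
    """

    def __init__(self, w1=128, w2=24, w=8, n=7):
        assert w1 % w == 0 and w2 % w == 0   # W is a common divisor of W1, W2
        self.w1, self.w2, self.w = w1, w2, w
        self.d = 2 ** n                      # target number of storage units
        assert self.d >= w1 // w             # d not less than W1 / W
        self.mem = [0] * self.d              # W-bit storage cells
        self.wr = self.rd = 0                # (n + 1)-bit pointers

    def count(self):
        # Occupied cells; pointers wrap at 2 * d, so the difference is exact.
        return (self.wr - self.rd) % (2 * self.d)

    def write(self, word):
        segs = self.w1 // self.w
        if self.count() + segs > self.d:     # full: refuse the write
            return False
        mask = (1 << self.w) - 1
        for i in range(segs):                # segment the word MSB-first
            self.mem[(self.wr + i) % self.d] = (word >> ((segs - 1 - i) * self.w)) & mask
        self.wr = (self.wr + segs) % (2 * self.d)   # write pointer += W1/W
        return True

    def read(self):
        segs = self.w2 // self.w
        if self.count() < segs:              # empty: not enough cells to merge
            return None
        word = 0
        for i in range(segs):                # merge W2/W cells into one word
            word = (word << self.w) | self.mem[(self.rd + i) % self.d]
        self.rd = (self.rd + segs) % (2 * self.d)   # read pointer += W2/W
        return word
```

Writing one 128-bit word fills 16 cells; each subsequent read drains 3 cells and returns one 24-bit (pixel-sized) output word.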
An embodiment of the present invention provides a storage medium, on which a program is stored, and the program implements the data caching method when executed by a processor.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application may be embodied, in essence or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (a personal computer, a server, a network device, or the like) to execute the methods of the embodiments, or of some parts of the embodiments, of the present application.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are substantially similar to the method embodiments and are therefore described relatively simply; for related points, reference may be made to the descriptions of the method embodiments. The described apparatus and system embodiments are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. A method for caching data, comprising:
determining a first bit width W1 of input data and a second bit width W2 of output data; the first bit width W1 is different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of a storage unit, and taking the n-th power of 2 as a target number d of storage units; the target number d is not less than the ratio of the first bit width W1 to the common divisor; determining a bit width W3 of a read-write pointer according to the power n, wherein the bit width W3 of the read-write pointer is n + 1;
receiving a request for writing first data, judging whether a target writing condition is met, and if so, writing the first data, in segments, into W1/W consecutive storage units and adding W1/W to a write pointer; the first data has the first bit width W1;
receiving a data reading request, judging whether a target reading condition is met, and if so, merging the data in W2/W consecutive storage units, outputting second data, and adding W2/W to a read pointer; the second data has the second bit width W2;
the target writing condition and the target reading condition each comprise a reset signal, and whether the target writing condition and the target reading condition are met is judged according to the determined type of the reset signal; the reset signal is sent by a reset controller.
2. The data caching method of claim 1, wherein the target writing condition further comprises a write pointer signal and a full flag signal;
it is determined that the target writing condition is met when the reset signal is set to 1, the write pointer points to address 0, and the full flag signal is set to 0; or when the reset signal is set to 0, the write signal is set to 1, and the full flag signal is set to 0;
the logic for judging the full flag signal includes that when the writing occurs and is effective, if the counting distance of the storage space is less than twice of the full value, the full flag signal is set to 1 or less than 1 time of the writing space, and the full flag signal is set to 1.
3. The data caching method of claim 1, wherein the target reading condition further comprises a read pointer signal and an empty flag signal;
it is determined that the target reading condition is met when the reset signal is set to 1, the read pointer points to address 0, and the empty flag signal is set to 0; or when the reset signal is set to 0, the read signal is set to 1, and the empty flag signal is set to 0;
the logic for judging the empty flag signal includes that when reading occurs and is effective, if the counting distance of the storage space is less than two times of the empty value, writing occupies the storage space, the empty flag signal is set to 1 or the empty flag signal is set to 1 when the counting distance of the storage space is less than 1 time of the empty value, and writing occupies the storage space.
4. The data caching method of claim 1, wherein the storage unit comprises a memory-type data storage unit; the first bit width W1 is 128, the second bit width W2 is 24, the common divisor is 8, and the power n is 7.
5. A data caching apparatus, configured to be connected between a video memory and a display apparatus, the apparatus comprising:
a parameter setting unit for determining a first bit width W1 of input data and a second bit width W2 of output data; the first bit width W1 is different from the second bit width W2; taking a common divisor of the first bit width W1 and the second bit width W2 as a target bit width W of a storage unit, and taking the n-th power of 2 as a target number d of storage units; the target number d is not less than the ratio of the first bit width W1 to the common divisor; determining a bit width W3 of a read-write pointer according to the power n, wherein the bit width W3 of the read-write pointer is n + 1;
a writing unit, configured to receive a request for writing first data, judge whether a target writing condition is met, and if so, write the first data, in segments, into W1/W consecutive storage units and add W1/W to a write pointer; the first data has the first bit width W1;
a reading unit, configured to receive a data reading request, judge whether a target reading condition is met, and if so, merge the data in W2/W consecutive storage units, output second data, and add W2/W to a read pointer; the second data has the second bit width W2;
the target writing condition and the target reading condition each comprise a reset signal, and whether the target writing condition and the target reading condition are met is judged according to the determined type of the reset signal; the reset signal is sent by a reset controller.
6. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, implement a data caching method as claimed in any one of claims 1 to 4.
CN202210840912.4A 2022-07-18 2022-07-18 Data caching method and device and storage medium Active CN115221082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210840912.4A CN115221082B (en) 2022-07-18 2022-07-18 Data caching method and device and storage medium


Publications (2)

Publication Number Publication Date
CN115221082A CN115221082A (en) 2022-10-21
CN115221082B true CN115221082B (en) 2023-04-18

Family

ID=83612591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210840912.4A Active CN115221082B (en) 2022-07-18 2022-07-18 Data caching method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115221082B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562617B (en) * 2022-11-30 2023-03-03 苏州浪潮智能科技有限公司 Depth setting method and system of FIFO memory and electronic equipment
CN117194281B (en) * 2023-09-05 2024-09-24 上海芯炽科技集团有限公司 Asymmetric access method for variable-length data in ASIC
CN117743234A (en) * 2023-11-29 2024-03-22 中科驭数(北京)科技有限公司 Data bit width conversion method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101770356A (en) * 2008-12-30 2010-07-07 陈海红 Conversion device and method for data bit width in fixed length cell switch
CN102929808A (en) * 2012-11-02 2013-02-13 长沙景嘉微电子股份有限公司 Clock domain crossing data transmission circuit with high reliability

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP2001308832A (en) * 2000-04-24 2001-11-02 Oki Electric Ind Co Ltd Device for converting speed
JP2004240713A (en) * 2003-02-06 2004-08-26 Matsushita Electric Ind Co Ltd Data transfer method and data transfer device
CN101667451B (en) * 2009-09-11 2012-05-09 西安电子科技大学 Data buffer of high-speed data exchange interface and data buffer control method thereof
CN103680600B (en) * 2013-12-18 2016-08-03 北京航天测控技术有限公司 A kind of storage device of applicable different bit wide data
CN105607888A (en) * 2014-11-25 2016-05-25 中兴通讯股份有限公司 Data bit width conversion method and device
CN109002409A (en) * 2017-06-07 2018-12-14 深圳市中兴微电子技术有限公司 A kind of bit wide converting means and method
CN107220023B (en) * 2017-06-29 2022-03-22 无锡中微亿芯有限公司 Embedded configurable FIFO memory
JP7376459B2 (en) * 2020-11-30 2023-11-08 日本電波工業株式会社 transmission circuit
CN114637697A (en) * 2022-03-22 2022-06-17 深圳云豹智能有限公司 Data stream processing device, processing method, chip and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101770356A (en) * 2008-12-30 2010-07-07 陈海红 Conversion device and method for data bit width in fixed length cell switch
CN102929808A (en) * 2012-11-02 2013-02-13 长沙景嘉微电子股份有限公司 Clock domain crossing data transmission circuit with high reliability

Also Published As

Publication number Publication date
CN115221082A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN115221082B (en) Data caching method and device and storage medium
US5265231A (en) Refresh control arrangement and a method for refreshing a plurality of random access memory banks in a memory system
US6845414B2 (en) Apparatus and method of asynchronous FIFO control
US10133549B1 (en) Systems and methods for implementing a synchronous FIFO with registered outputs
US7500038B2 (en) Resource management
EP0484652B1 (en) First-in-first-out buffer
US6272583B1 (en) Microprocessor having built-in DRAM and internal data transfer paths wider and faster than independent external transfer paths
KR100288177B1 (en) Memory access control circuit
JP2004062630A (en) Fifo memory and semiconductor device
US5594743A (en) Fifo buffer system having an error detection and correction device
US8732377B2 (en) Interconnection apparatus and controlling method therefor
US5469449A (en) FIFO buffer system having an error detection and resetting unit
CN108984441B (en) Method and system for maintaining data transmission consistency
US7099972B2 (en) Preemptive round robin arbiter
US9767054B2 (en) Data transfer control device and memory-containing device
US20090037619A1 (en) Data flush methods
US20150150009A1 (en) Multple datastreams processing by fragment-based timeslicing
JP2873229B2 (en) Buffer memory controller
US6831920B1 (en) Memory vacancy management apparatus and line interface unit
JPS58223833A (en) Direct memory access control system
CN107544618B (en) Pointer synchronization circuit and method, message exchange device and method
US20240331746A1 (en) Direct memory access (dma) circuit and operation method thereof
JP2000299716A (en) Data receiver and data receiving method
KR100557561B1 (en) First in First out storage device
JP2625396B2 (en) Receive data processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant