CN109936513A - FPGA-based data packet processing method, smart NIC, and CDN server - Google Patents
- Publication number: CN109936513A (application CN201910123476.7A)
- Authority: CN (China)
- Prior art keywords: data packet, FPGA, smart NIC
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Classification: Data Exchanges in Wide-Area Networks
Abstract
The invention discloses an FPGA-based data packet processing method, a smart NIC, and a CDN server, belonging to the technical field of content delivery networks. The method includes: the FPGA determines the processing type of a data packet received by the smart NIC, the processing type including at least data plane processing and control plane processing; when the processing type of the data packet is data plane processing, the FPGA sends the data packet to a next-hop node through the smart NIC; when the processing type of the data packet is control plane processing, the FPGA sends the data packet to the CPU so that the CPU processes it. The invention can improve both the data packet transmission speed and the throughput of a CDN server.
Description
Technical field
The present invention relates to the technical field of content delivery networks, and in particular to an FPGA-based data packet processing method, a smart NIC, and a CDN server.
Background
With the continuous development of the Internet, the range of applications of CDN systems keeps expanding. At present, CDN systems can provide acceleration services for a variety of businesses, including web browsing, resource downloading, audio/video on demand, and live audio/video streaming.
When a CDN system provides acceleration services, the CDN server first receives data packets through its NIC. The NIC then forwards the received packets to the CPU, which performs the processing appropriate to each packet type. Once processing is complete, the CPU sends the processed packets back to the NIC, and the CDN server finally transmits them through the NIC.
In the course of implementing the present invention, the inventors found that the prior art has at least the following problems: during processing inside the CDN server, each data packet must be read and written multiple times. As data traffic keeps growing, the read/write speed of the CDN server limits the packet processing speed and in turn the packet transmission speed of the CDN server. A data packet processing method that can improve the packet transmission speed of a CDN server is therefore needed.
Summary of the invention
To solve the problems in the prior art, embodiments of the present invention provide an FPGA-based data packet processing method, a smart NIC, and a CDN server. The technical solutions are as follows:
In a first aspect, an FPGA-based data packet processing method is provided. The method is applicable to a smart NIC equipped with an FPGA and comprises:
the FPGA determining the processing type of a data packet received by the smart NIC, the processing type including at least data plane processing and control plane processing;
when the processing type of the data packet is data plane processing, the FPGA sending the data packet to a next-hop node through the smart NIC;
when the processing type of the data packet is control plane processing, the FPGA sending the data packet to a CPU so that the CPU processes the data packet.
Further, the FPGA determining the processing type of the data packet received by the smart NIC comprises: the FPGA determining the processing type of the data packet according to a processing-type identifier contained in the data packet received by the smart NIC.
Further, the FPGA sending the data packet to the next-hop node through the smart NIC comprises: the FPGA performing TCP offload on the data packet; the FPGA obtaining the destination IP of the TCP-offloaded data packet; and the FPGA sending the data packet to the next-hop node corresponding to the destination IP through the smart NIC.
Further, the smart NIC is also equipped with NIC memory, and when the processing type of the data packet is data plane processing, the method further comprises: the FPGA sending the data packet to the NIC memory; and the NIC memory storing the data packet.
Further, the smart NIC is also equipped with a NIC disk, and the method further comprises: when the remaining space of the NIC memory falls below a preset space threshold, the FPGA calculating the heat values of all data packets in the NIC memory; and the FPGA migrating the data packets to the NIC disk one by one in ascending order of heat value.
Further, the method further comprises: when the smart NIC receives a data request, the FPGA looking up the data packet corresponding to the data request in the NIC memory and the NIC disk; if the corresponding data packet is found, the FPGA sending that data packet to the next-hop node through the smart NIC; and if it is not found, the FPGA sending the data request to the next-hop node through the smart NIC.
Further, the method further comprises: the FPGA sending the data packet through the smart NIC based on transport-layer multicast and cut-through distribution.
In a second aspect, a smart NIC is provided. An FPGA is installed in the smart NIC and is configured to:
determine the processing type of a data packet received by the smart NIC, the processing type including at least data plane processing and control plane processing;
when the processing type of the data packet is data plane processing, send the data packet to a next-hop node through the smart NIC;
when the processing type of the data packet is control plane processing, send the data packet to a CPU so that the CPU processes the data packet.
Further, the FPGA is specifically configured to: determine the processing type of the data packet according to a processing-type identifier contained in the data packet received by the smart NIC.
Further, the FPGA is specifically configured to: perform TCP offload on the data packet; obtain the destination IP of the TCP-offloaded data packet; and send the data packet to the next-hop node corresponding to the destination IP through the smart NIC.
Further, the smart NIC is also equipped with NIC memory. When the processing type of the data packet is data plane processing: the FPGA is configured to send the data packet to the NIC memory; and the NIC memory is configured to store the data packet.
Further, the smart NIC is also equipped with a NIC disk, and the FPGA is further configured to: when the remaining space of the NIC memory falls below a preset space threshold, calculate the heat values of all data packets in the NIC memory; and migrate the data packets to the NIC disk one by one in ascending order of heat value.
Further, the FPGA is further configured to: when the smart NIC receives a data request, look up the data packet corresponding to the data request in the NIC memory and the NIC disk; if the corresponding data packet is found, send that data packet to the next-hop node through the smart NIC; and if it is not found, send the data request to the next-hop node through the smart NIC.
Further, the FPGA is further configured to: send the data packet through the smart NIC based on transport-layer multicast and cut-through distribution.
In a third aspect, a CDN server is provided. The CDN server comprises the smart NIC described in the second aspect above.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects: the FPGA determines the processing type of the data packet received by the smart NIC, the processing type including at least data plane processing and control plane processing; when the processing type is data plane processing, the FPGA sends the packet to the next-hop node through the smart NIC; when it is control plane processing, the FPGA sends the packet to the CPU so that the CPU processes it. By classifying packets by processing type in this way, data plane packets are sent out directly by the FPGA through the smart NIC and no longer need to be handed to the CPU, which reduces packet reads and writes between the smart NIC and the CPU, saves CPU resources, and improves packet response and transmission speed.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an FPGA-based data packet processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a smart NIC provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a CDN server provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
An embodiment of the present invention provides a data packet processing method based on an FPGA (Field-Programmable Gate Array), which can be applied to a smart NIC on which an FPGA is mounted. An FPGA is an integrated circuit that can be configured after manufacture to implement a desired function. Because an FPGA provides substantial computing power, mounting one on a smart NIC allows it to take over some simple data packet processing from the CPU of the CDN server. NIC memory and a NIC disk can also be installed on the smart NIC to store frequently accessed data packets. One or more smart NICs can be installed in a CDN server, and the CDN server sends and receives data packets through them. The application scenario of this embodiment may be as follows: when the CDN server receives a data packet through a smart NIC, the FPGA on the smart NIC determines the packet's processing type; depending on that type, the FPGA either sends the packet to the CPU of the CDN server for processing, after which the smart NIC receives and transmits the processed packet, or the FPGA transmits the packet directly through the smart NIC.
The FPGA-based data packet processing flow shown in Fig. 1 is described in detail below with reference to specific embodiments. The content may be as follows:
Step 101: The FPGA determines the processing type of the data packet received by the smart NIC.
The processing type includes at least data plane processing and control plane processing.
In an implementation, a smart NIC is installed in the CDN server, and an FPGA is installed on the smart NIC. After the smart NIC receives a data packet, the FPGA on it first determines the packet's processing type, which may be data plane processing or control plane processing. Data plane processing involves only the replication and distribution of packets; control plane processing may involve logic-heavy services such as packet storage, management, and layer-7 proxying.
Optionally, the processing of step 101 may specifically be as follows: the FPGA determines the processing type of the data packet according to a processing-type identifier contained in the packet received by the smart NIC.
In an implementation, the header of a data packet generally contains a field related to its processing type, in which different processing types carry different processing-type identifiers. After the smart NIC receives a data packet, the FPGA inspects the corresponding header field, reads the processing-type identifier the packet carries, and determines the processing type from it. It should be noted that the header fields of some data packets may not contain such an identifier; in that case, the FPGA can also parse the packet and determine its processing type from information such as the packet contents.
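The classification in step 101 can be sketched as follows. This is a minimal illustration only: the field offset, the identifier values, and the fallback of handing unidentified packets to the control plane are assumptions, since the patent does not fix a packet format.

```python
# Illustrative processing-type identifiers (assumed values, not from the patent).
DATA_PLANE = 0x01
CONTROL_PLANE = 0x02

def classify(packet: bytes, type_offset: int = 0) -> str:
    """Return 'data' or 'control' based on an assumed one-byte
    processing-type identifier at a fixed header offset."""
    if len(packet) > type_offset:
        ident = packet[type_offset]
        if ident == DATA_PLANE:
            return "data"
        if ident == CONTROL_PLANE:
            return "control"
    # No recognizable identifier: a real design would parse the packet
    # contents; this sketch conservatively chooses the CPU (control) path.
    return "control"
```

Packets carrying the data plane identifier would then be forwarded directly by the FPGA (step 102), all others handed to the CPU (step 103).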
Step 102: When the processing type of the data packet is data plane processing, the FPGA sends the packet to the next-hop node through the smart NIC.
In an implementation, after the FPGA has determined the processing type of the data packet, when that type is data plane processing, then, as described above, the processing involves only replication and forwarding and its logic is comparatively simple: the computing power of the FPGA is sufficient, so the FPGA can process the packet directly without handing it to the CPU. The FPGA can therefore send the packet directly to the corresponding next-hop node through the smart NIC. It should be noted that the next-hop node may be a CDN intermediate server directly connected to this CDN server in the CDN cluster, or a client or origin server directly connected to this CDN server.
Optionally, the corresponding processing of step 102 may specifically be as follows: the FPGA performs TCP offload on the data packet; the FPGA obtains the destination IP of the offloaded packet; and the FPGA sends the packet to the next-hop node corresponding to the destination IP through the smart NIC.
In an implementation, after determining that the processing type of the data packet is data plane processing, the FPGA performs TCP offload so that the packet is handled on the FPGA. The FPGA then decapsulates the packet to obtain information such as its source IP, destination IP, source port, and destination port; if required, the FPGA can also perform HTTPS decryption on the packet. After obtaining this information, the FPGA re-applies HTTPS encryption and encapsulation, and then, according to the packet's destination IP, sends it through the smart NIC to the corresponding next-hop node. For example, suppose the FPGA performs TCP offload on a packet, obtains its destination IP, and finds that the destination IP points to a client. If the CDN server is directly connected to the client, the FPGA sends the packet directly to the client through the smart NIC according to its destination IP. If the CDN server is connected to the client through one or more CDN intermediate servers, the FPGA sends the packet through the smart NIC to the intermediate server directly connected to the CDN server, and the packet eventually reaches the client through the forwarding of one or more intermediate servers. The transmission process when the destination IP points to an origin server is similar and is not repeated here.
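The forwarding decision above can be sketched in two steps: extract the destination IP (which, per RFC 791, occupies bytes 16-19 of an IPv4 header), then send directly if the destination is directly connected, otherwise relay via the adjacent intermediate server. The flat neighbor set is an illustrative assumption; a real FPGA design would use a hardware forwarding table.

```python
import ipaddress

def ipv4_dst(packet: bytes) -> str:
    # The destination address occupies bytes 16..19 of an IPv4 header.
    return str(ipaddress.IPv4Address(packet[16:20]))

def next_hop(dst_ip: str, direct_neighbors: set, intermediate: str) -> str:
    """Send directly when the destination (client/origin) is directly
    connected; otherwise relay via the adjacent CDN intermediate server."""
    return dst_ip if dst_ip in direct_neighbors else intermediate
```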
TCP offload means moving the TCP protocol stack onto hardware such as the FPGA, i.e., handling layer 3 (the network/IP layer) and layer 4 (the transport/TCP layer) of the seven-layer OSI (Open Systems Interconnection) model on the FPGA. In this way, the computing resources that the CPU originally had to provide for the TCP stack are provided by the FPGA instead, reducing CPU load. Since the CPU no longer needs to process these packets, they no longer need to be sent from the smart NIC into CDN server memory, which reduces reads, writes, copies, and transfers between the smart NIC and server memory, saves processing time, and thus improves transmission speed.
Optionally, to improve the response speed to data requests, NIC memory can also be installed in the smart NIC to store data packets. The corresponding processing may be as follows: when the processing type of the data packet is data plane processing, the FPGA sends the packet to the NIC memory, and the NIC memory stores it.
In an implementation, NIC memory is also installed in the smart NIC. When the FPGA determines that the processing type of a received data packet is data plane processing, it copies the packet to the NIC memory while performing TCP offload on it, and the NIC memory stores the copy. When the smart NIC of the CDN server later receives a data request for that packet again, the FPGA can answer the request directly with the copy stored in NIC memory, without forwarding the request back to the origin server to fetch the packet. When NIC memory is installed, DDR4 memory is typically used; according to the embodiment, top-end DDR4 can offer transfer speeds of up to 460 GB/s and storage capacity on the order of hundreds of gigabytes, enabling fast packet reads, writes, and transfers.
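The NIC-memory write path can be sketched as a capacity-bounded key/value store into which data plane packets are copied while being forwarded. Keys, the refuse-when-full policy, and byte-based accounting are all illustrative assumptions; the patent only specifies that packets are copied into NIC memory and stored there.

```python
class NicMemory:
    """Toy model of the on-NIC packet cache (assumed interface)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = {}  # key -> packet bytes

    def put(self, key: str, packet: bytes) -> bool:
        """Cache a copy of the packet; refuse when it would not fit."""
        old = self.store.get(key)
        new_used = self.used - (len(old) if old else 0) + len(packet)
        if new_used > self.capacity:
            return False
        self.store[key] = packet
        self.used = new_used
        return True

    def get(self, key: str):
        """Return the cached copy, or None on a miss."""
        return self.store.get(key)
```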
Optionally, since the storage capacity of NIC memory is relatively small, a NIC disk with a larger capacity can also be installed in the smart NIC to store data packets. The corresponding processing may be as follows: when the remaining space of the NIC memory falls below a preset space threshold, the FPGA calculates the heat values of all data packets in NIC memory and migrates them to the NIC disk one by one in ascending order of heat value.
In an implementation, although NIC memory reads, writes, and transfers quickly, its capacity is limited, so a NIC disk can also be installed in the smart NIC to increase the number of data packets the smart NIC can store. A space threshold for the NIC memory is preset on the FPGA; the preset space threshold is less than or equal to the total storage capacity of the NIC memory. When the remaining space of the NIC memory falls below the threshold, the FPGA calculates the heat values of all packets in NIC memory and then migrates them to the NIC disk one by one in ascending order of heat value; once the remaining space of the NIC memory is greater than or equal to the threshold again, the FPGA stops migrating packets to the NIC disk. Heat values are explained further below.
As noted above, NIC memory has strong read/write capability and can serve many packets concurrently, so frequently accessed packets can be kept in NIC memory, improving transmission speed. The NIC disk has weaker read/write capability but much larger capacity, so packets with lower access frequency can be stored there, increasing the number of packets the smart NIC can hold. When a NIC disk is installed, an SSD (Solid State Disk) is typically used, because an SSD offers higher transfer speeds than a traditional mechanical hard disk; a typical SSD can reach transfer speeds of up to 3 GB/s and storage capacity on the order of terabytes.
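The migration policy described above (coldest first, one by one, until free space recovers to the threshold) can be sketched as follows. The dict-of-sizes model and the numeric heat values are illustrative assumptions.

```python
def migrate_cold_entries(memory: dict, heat: dict, disk: dict,
                         capacity: int, threshold: int) -> None:
    """memory/disk map key -> packet size in bytes; heat maps key -> heat
    value. Moves the coldest entries to disk until free space >= threshold."""
    def free_space():
        return capacity - sum(memory.values())

    # Ascending order of heat value, migrated one by one, as in the patent.
    for key in sorted(memory, key=lambda k: heat.get(k, 0)):
        if free_space() >= threshold:
            break  # remaining space has recovered; stop migrating
        disk[key] = memory.pop(key)
```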
Optionally, once NIC memory and a NIC disk are installed on the smart NIC, the processing after the CDN server receives a data request through the smart NIC may be as follows: when the smart NIC receives a data request, the FPGA looks up the corresponding data packet in NIC memory and on the NIC disk; if the packet is found, the FPGA sends it to the next-hop node through the smart NIC; if it is not found, the FPGA sends the data request to the next-hop node through the smart NIC.
In an implementation, after the CDN server receives a data request from a client through the smart NIC, the FPGA first looks up the corresponding data packet in NIC memory; if it is not found there, the FPGA then searches the NIC disk. If the packet is found in either NIC memory or the NIC disk, the FPGA sends it to the corresponding next-hop node through the smart NIC according to the data request; the specific transmission process is as described above. If the packet is found in neither after both lookups, the FPGA sends the data request itself to the next-hop node through the smart NIC. Similarly to the packet transmission described earlier, after the FPGA determines the destination IP of the data request: if the CDN server is directly connected to the origin server indicated by the destination IP, the FPGA sends the request directly to the origin server through the smart NIC; if the CDN server is connected to the origin server through one or more CDN intermediate servers, the FPGA sends the request through the smart NIC to the intermediate server directly connected to the CDN server, and the request eventually reaches the origin server through the forwarding of one or more intermediate servers. After receiving the request, the origin server feeds the corresponding data packet back to the CDN server; after the smart NIC receives the packet fed back by the origin server, it can then send it on to the client.
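The lookup path just described (NIC memory first, then NIC disk, and only on a double miss relaying the request upstream) can be sketched as:

```python
def serve_request(key: str, nic_memory: dict, nic_disk: dict):
    """Return ('hit', packet) when the packet is cached on the NIC;
    otherwise ('miss', key), meaning the data request itself is forwarded
    to the next-hop node toward the origin. Dict-based caches are an
    illustrative stand-in for the NIC memory/disk."""
    if key in nic_memory:
        return ("hit", nic_memory[key])
    if key in nic_disk:
        return ("hit", nic_disk[key])
    return ("miss", key)
```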
It should be noted that, based on the data requests received through the smart NIC, the FPGA can keep statistical records of the access frequency of the data packets stored in NIC memory and on the NIC disk, which can be expressed concretely as heat values: the more data requests there are for a given packet, the higher its access frequency and the higher its heat value. In this way, as described above, the FPGA can migrate packets according to their heat values when needed.
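The heat bookkeeping can be sketched as a per-object request counter whose counts serve as the heat values used in migration decisions. Counting each request as one unit of heat is an assumption; the patent does not define the exact heat formula.

```python
from collections import Counter

class HeatTracker:
    """Illustrative per-packet heat bookkeeping (assumed: heat = request count)."""

    def __init__(self):
        self.heat = Counter()

    def record(self, key: str) -> None:
        # Each observed data request for a packet bumps its heat value.
        self.heat[key] += 1

    def coldest_first(self):
        """Keys in ascending heat order, i.e. the migration order."""
        return sorted(self.heat, key=self.heat.get)
```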
Step 103: When the processing type of the data packet is control plane processing, the FPGA sends the packet to the CPU so that the CPU processes it.
In an implementation, after the FPGA has determined the processing type of the data packet, when that type is control plane processing, then, as described above, processing the packet requires logic-heavy services such as storage, management, and layer-7 proxying; the computing power of the FPGA is no longer sufficient, and a CPU capable of heavy computation must complete the processing. The FPGA therefore sends the packet from the smart NIC into CDN server memory, from which the CPU of the CDN server reads it and performs the corresponding processing. The processed packet is written back into server memory, then sent from server memory to the smart NIC, and finally transmitted by the smart NIC. The smart NIC and CDN server memory are connected over a PCIe bus.
Optionally, before distributing content, a CDN server usually needs to collect all the data packets that make up a file and then distribute the content file by file, but this increases transmission latency. To reduce latency during transmission, the corresponding processing may be as follows: the FPGA sends the data packets through the smart NIC based on transport-layer multicast and cut-through distribution.
In an implementation, after receiving data packets through the smart NIC, the CDN server can also send them using transport-layer multicast and cut-through distribution. The FPGA combines transport-layer multicast with packet-granular cut-through distribution: as soon as the smart NIC of the CDN server receives a data packet, the packet is replicated and distributed to the corresponding next-hop nodes at once, without waiting for all the packets of the file to be collected, which reduces transmission latency. Taking the RTMP (Real-Time Messaging Protocol) packets commonly used in live streaming as an example, the distribution latency of a traditional CDN server is around 300 ms, while the present invention reduces it to around 1 ms.
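The per-packet cut-through distribution can be sketched as replicating each arriving packet to every subscribed next-hop queue immediately, with no file reassembly step. The list-of-queues subscriber model is an illustrative assumption.

```python
def cut_through_distribute(packet: bytes, subscribers: list) -> None:
    """Replicate one packet to all next-hop queues as soon as it arrives,
    instead of buffering until the whole file has been collected."""
    for queue in subscribers:
        queue.append(packet)  # copy out immediately; no reassembly wait
```

Each call handles a single packet, which is what keeps the per-hop distribution delay at packet granularity rather than file granularity.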
In the embodiment of the present invention, the FPGA determines the processing type of the data packet received by the smart NIC, the processing type including at least data plane processing and control plane processing; when the processing type is data plane processing, the FPGA sends the packet to the next-hop node through the smart NIC; when it is control plane processing, the FPGA sends the packet to the CPU so that the CPU processes it. By classifying packets by processing type in this way, data plane packets are sent out directly by the FPGA through the smart NIC and no longer need to be handed to the CPU, which reduces packet reads and writes between the smart NIC and the CPU, saves CPU resources, improves packet response and transmission speed, and also improves the throughput of the CDN server.
Based on the same technical concept, an embodiment of the present invention further provides a smart NIC 200. As shown in Fig. 2, an FPGA 201 is installed in the smart NIC 200 and is configured to:
determine the processing type of a data packet received by the smart NIC 200, the processing type including at least data plane processing and control plane processing;
when the processing type of the data packet is data plane processing, send the data packet to a next-hop node through the smart NIC 200;
when the processing type of the data packet is control plane processing, send the data packet to a CPU so that the CPU processes the data packet.
Optionally, the FPGA 201 is specifically configured to: determine the processing type of the data packet according to a processing-type identifier contained in the data packet received by the smart NIC 200.
Optionally, the FPGA 201 is specifically configured to:
perform TCP offloading on the data message;
obtain the destination IP of the data message after TCP offloading;
send the data message to the next-hop node corresponding to the destination IP through the intelligent network adapter 200.
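The forwarding path can be modeled as: let the (here hypothetical) TCP offload engine finish the transport-layer work, read the destination address out of the IPv4 header, then consult a next-hop table. The table contents and node name below are invented for the example.

```python
def destination_ip(ip_packet: bytes) -> str:
    """Extract the destination address from an IPv4 header; per RFC 791 it
    occupies bytes 16-19 of the header."""
    return ".".join(str(b) for b in ip_packet[16:20])

def forward(ip_packet: bytes, next_hop_table: dict):
    """Pick the next hop for the packet's destination IP once TCP offload
    has finished with the segment; returns (next_hop, packet)."""
    return next_hop_table[destination_ip(ip_packet)], ip_packet
```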
Optionally, a NIC memory 202 is also installed in the intelligent network adapter 200. When the processing type corresponding to the data message is data plane processing:
the FPGA 201 is configured to send the data message to the NIC memory 202;
the NIC memory 202 is configured to store the data message.
Optionally, a NIC hard disk 203 is also installed in the intelligent network adapter 200, and the FPGA 201 is further configured to:
when the remaining space of the NIC memory 202 is less than a preset space threshold, calculate the hotness value of every data message in the NIC memory 202; migrate the data messages one by one, in ascending order of hotness value, to the NIC hard disk 203.
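The eviction policy amounts to a coldest-first migration that runs until free space recovers. A software sketch under stated assumptions: byte-counted dict caches and a hotness map stand in for whatever counters and storage the FPGA actually maintains.

```python
def migrate_cold_entries(nic_memory, nic_disk, hotness, capacity, threshold):
    """Evict cached data messages from NIC memory to NIC disk, coldest
    (lowest hotness) first, until free space is back above the preset
    threshold. Sizes are counted in payload bytes."""
    used = sum(len(v) for v in nic_memory.values())
    for key in sorted(nic_memory, key=lambda k: hotness.get(k, 0)):
        if capacity - used >= threshold:
            break  # enough free space recovered; stop migrating
        payload = nic_memory.pop(key)
        nic_disk[key] = payload
        used -= len(payload)
```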
Optionally, the FPGA 201 is further configured to:
when the intelligent network adapter 200 receives a data request, look up the data message corresponding to the data request in the NIC memory 202 and the NIC hard disk 203;
if the data message corresponding to the data request is found, send the data message to the next-hop node through the intelligent network adapter 200;
if the data message corresponding to the data request is not found, send the data request to the next-hop node through the intelligent network adapter 200.
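The lookup behaves like a two-level read-through cache: memory first, then disk, then forward the request upstream on a miss. A minimal model, with illustrative callback names standing in for the send paths:

```python
def handle_request(key, nic_memory, nic_disk, send_data, forward_request):
    """Serve a data request from the NIC caches if possible; on a miss,
    pass the request itself on to the next-hop node (e.g. an upstream
    CDN tier) instead."""
    for cache in (nic_memory, nic_disk):  # fast tier first, then disk
        if key in cache:
            return send_data(cache[key])
    return forward_request(key)
```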
Optionally, the FPGA 201 is further configured to:
send the data message through the intelligent network adapter 200 based on transport-layer-aware multicast and cut-through distribution.
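The document does not detail this distribution mechanism. "Cut-through" generally means forwarding each piece of a message as soon as it arrives rather than buffering the whole message first (store-and-forward); one way to picture its combination with multicast, under that assumption only:

```python
def cut_through_multicast(chunks, subscribers):
    """Forward each arriving chunk to every subscriber immediately
    (cut-through), instead of assembling the full message first.
    'subscribers' is a list of per-receiver send callbacks."""
    for chunk in chunks:          # chunks arrive one at a time
        for send in subscribers:  # replicate to all receivers at once
            send(chunk)
```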
It should be noted that the intelligent network adapter provided in the above embodiment and the embodiments of the FPGA-based data message processing method belong to the same conception; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Based on the same technical idea, an embodiment of the present invention further provides a CDN server. As shown in Fig. 3, the CDN server includes one or more of the above intelligent network adapters, and further includes a CPU, a hard disk, memory, and the like.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, or the parts thereof that contribute to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in each embodiment or in certain parts of the embodiments.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (15)
1. A data message processing method based on an FPGA, characterized in that the method is applied to an intelligent network adapter equipped with an FPGA, the method comprising:
the FPGA determining a processing type corresponding to a data message received by the intelligent network adapter, the processing type including at least: data plane processing and control plane processing;
when the processing type corresponding to the data message is data plane processing, the FPGA sending the data message to a next-hop node through the intelligent network adapter;
when the processing type corresponding to the data message is control plane processing, the FPGA sending the data message to a CPU, so that the CPU processes the data message.
2. The method according to claim 1, characterized in that the FPGA determining the processing type corresponding to the data message received by the intelligent network adapter comprises:
the FPGA determining the processing type corresponding to the data message according to a processing type identifier contained in the data message received by the intelligent network adapter.
3. The method according to claim 1, characterized in that the FPGA sending the data message to the next-hop node through the intelligent network adapter comprises:
the FPGA performing TCP offloading on the data message;
the FPGA obtaining the destination IP of the data message after TCP offloading;
the FPGA sending the data message to the next-hop node corresponding to the destination IP through the intelligent network adapter.
4. The method according to claim 1, characterized in that a NIC memory is also installed in the intelligent network adapter, and when the processing type corresponding to the data message is data plane processing, the method further comprises:
the FPGA sending the data message to the NIC memory;
the NIC memory storing the data message.
5. The method according to claim 4, characterized in that a NIC hard disk is also installed in the intelligent network adapter, and the method further comprises:
when the remaining space of the NIC memory is less than a preset space threshold, the FPGA calculating the hotness value of every data message in the NIC memory;
the FPGA migrating the data messages one by one, in ascending order of hotness value, to the NIC hard disk.
6. The method according to claim 5, characterized in that the method further comprises:
when the intelligent network adapter receives a data request, the FPGA looking up the data message corresponding to the data request in the NIC memory and the NIC hard disk;
if the data message corresponding to the data request is found, the FPGA sending the data message to the next-hop node through the intelligent network adapter;
if the data message corresponding to the data request is not found, the FPGA sending the data request to the next-hop node through the intelligent network adapter.
7. The method according to any one of claims 1-6, characterized in that the method further comprises:
the FPGA sending the data message through the intelligent network adapter based on transport-layer-aware multicast and cut-through distribution.
8. An intelligent network adapter, characterized in that an FPGA is installed in the intelligent network adapter, the FPGA being configured to:
determine a processing type corresponding to a data message received by the intelligent network adapter, the processing type including at least: data plane processing and control plane processing;
when the processing type corresponding to the data message is data plane processing, send the data message to a next-hop node through the intelligent network adapter;
when the processing type corresponding to the data message is control plane processing, send the data message to a CPU, so that the CPU processes the data message.
9. The intelligent network adapter according to claim 8, characterized in that the FPGA is specifically configured to:
determine the processing type corresponding to the data message according to a processing type identifier contained in the data message received by the intelligent network adapter.
10. The intelligent network adapter according to claim 9, characterized in that the FPGA is specifically configured to:
perform TCP offloading on the data message;
obtain the destination IP of the data message after TCP offloading;
send the data message to the next-hop node corresponding to the destination IP through the intelligent network adapter.
11. The intelligent network adapter according to claim 9, characterized in that a NIC memory is also installed in the intelligent network adapter, and when the processing type corresponding to the data message is data plane processing:
the FPGA is configured to send the data message to the NIC memory;
the NIC memory is configured to store the data message.
12. The intelligent network adapter according to claim 9, characterized in that a NIC hard disk is also installed in the intelligent network adapter, the FPGA being further configured to:
when the remaining space of the NIC memory is less than a preset space threshold, calculate the hotness value of every data message in the NIC memory; migrate the data messages one by one, in ascending order of hotness value, to the NIC hard disk.
13. The intelligent network adapter according to claim 12, characterized in that the FPGA is further configured to:
when the intelligent network adapter receives a data request, look up the data message corresponding to the data request in the NIC memory and the NIC hard disk;
if the data message corresponding to the data request is found, send the data message to the next-hop node through the intelligent network adapter;
if the data message corresponding to the data request is not found, send the data request to the next-hop node through the intelligent network adapter.
14. The intelligent network adapter according to any one of claims 8-13, characterized in that the FPGA is further configured to:
send the data message through the intelligent network adapter based on transport-layer-aware multicast and cut-through distribution.
15. A CDN server, characterized in that the CDN server comprises the intelligent network adapter according to any one of claims 8-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910123476.7A CN109936513A (en) | 2019-02-18 | 2019-02-18 | Data message processing method, intelligent network adapter and CDN server based on FPGA |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109936513A true CN109936513A (en) | 2019-06-25 |
Family
ID=66985668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910123476.7A Pending CN109936513A (en) | 2019-02-18 | 2019-02-18 | Data message processing method, intelligent network adapter and CDN server based on FPGA |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109936513A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927265A (en) * | 2013-01-04 | 2014-07-16 | 深圳市龙视传媒有限公司 | Content hierarchical storage device, content acquisition method and content acquisition device |
CN105245456A (en) * | 2015-10-20 | 2016-01-13 | 浪潮(北京)电子信息产业有限公司 | Method and system for unloading SDN virtual network function in cloud server |
CN107273040A (en) * | 2016-04-08 | 2017-10-20 | 北京优朋普乐科技有限公司 | data cache method and device |
- 2019-02-18: Application CN201910123476.7A filed in China; publication CN109936513A; status: active, pending
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765064B (en) * | 2019-10-18 | 2022-08-23 | 山东浪潮科学研究院有限公司 | Edge-end image processing system and method of heterogeneous computing architecture |
CN110765064A (en) * | 2019-10-18 | 2020-02-07 | 山东浪潮人工智能研究院有限公司 | Edge-end image processing system and method of heterogeneous computing architecture |
WO2021082877A1 (en) * | 2019-10-28 | 2021-05-06 | 华为技术有限公司 | Method and apparatus for accessing solid state disk |
CN111245866A (en) * | 2020-03-04 | 2020-06-05 | 深圳市龙信信息技术有限公司 | Ethernet application layer protocol control system and method based on hardware acceleration |
CN111555973A (en) * | 2020-04-28 | 2020-08-18 | 深圳震有科技股份有限公司 | Data packet forwarding method and device based on 5G data forwarding plane |
CN111756628A (en) * | 2020-05-14 | 2020-10-09 | 深圳震有科技股份有限公司 | Data packet forwarding processing method and system, intelligent network card and CPU |
CN112492002A (en) * | 2020-07-08 | 2021-03-12 | 支付宝(杭州)信息技术有限公司 | Transaction forwarding method and device based on block chain all-in-one machine |
CN112492002B (en) * | 2020-07-08 | 2023-01-20 | 支付宝(杭州)信息技术有限公司 | Transaction forwarding method and device based on block chain all-in-one machine |
CN111541784B (en) * | 2020-07-08 | 2021-07-20 | 支付宝(杭州)信息技术有限公司 | Transaction processing method and device based on block chain all-in-one machine |
CN113438219A (en) * | 2020-07-08 | 2021-09-24 | 支付宝(杭州)信息技术有限公司 | Replay transaction identification method and device based on block chain all-in-one machine |
US11783339B2 (en) | 2020-07-08 | 2023-10-10 | Alipay (Hangzhou) Information Technology Co., Ltd. | Methods and apparatuses for transferring transaction based on blockchain integrated station |
US11336660B2 (en) | 2020-07-08 | 2022-05-17 | Alipay (Hangzhou) Information Technology Co., Ltd. | Methods and apparatuses for identifying replay transaction based on blockchain integrated station |
CN111541784A (en) * | 2020-07-08 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Transaction processing method and device based on block chain all-in-one machine |
US11665234B2 (en) | 2020-07-08 | 2023-05-30 | Alipay (Hangzhou) Information Technology Co., Ltd. | Methods and apparatuses for synchronizing data based on blockchain integrated station |
CN111541783A (en) * | 2020-07-08 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Transaction forwarding method and device based on block chain all-in-one machine |
US11444783B2 (en) | 2020-07-08 | 2022-09-13 | Alipay (Hangzhou) Information Technology Co., Ltd. | Methods and apparatuses for processing transactions based on blockchain integrated station |
US11463553B2 (en) | 2020-07-08 | 2022-10-04 | Alipay (Hangzhou) Information Technology Co., Ltd. | Methods and apparatuses for identifying to-be-filtered transaction based on blockchain integrated station |
CN111740847B (en) * | 2020-08-24 | 2020-12-11 | 常州楠菲微电子有限公司 | High-speed network data transmission system and method based on FPGA |
CN111740847A (en) * | 2020-08-24 | 2020-10-02 | 常州楠菲微电子有限公司 | High-speed network data transmission system and method based on FPGA |
WO2022116953A1 (en) * | 2020-12-01 | 2022-06-09 | 阿里巴巴集团控股有限公司 | Packet processing method, device, system, and storage medium |
CN113709135B (en) * | 2021-08-24 | 2023-02-07 | 杭州迪普科技股份有限公司 | SSL flow audit acquisition system and method |
CN113709135A (en) * | 2021-08-24 | 2021-11-26 | 杭州迪普科技股份有限公司 | SSL flow audit acquisition system and method |
CN114640447A (en) * | 2022-03-25 | 2022-06-17 | 广东浪潮智慧计算技术有限公司 | Data packet processing method, intelligent network card and storage medium |
CN115834494A (en) * | 2022-11-18 | 2023-03-21 | 深圳市海带智能有限公司 | Data processing method for PON equipment and PON equipment |
CN117119076A (en) * | 2023-07-26 | 2023-11-24 | 中国人民解放军战略支援部队信息工程大学 | Device and method for realizing TCP link management based on queue |
CN116668375A (en) * | 2023-07-31 | 2023-08-29 | 新华三技术有限公司 | Message distribution method, device, network equipment and storage medium |
CN116668375B (en) * | 2023-07-31 | 2023-11-21 | 新华三技术有限公司 | Message distribution method, device, network equipment and storage medium |
CN117749865A (en) * | 2024-02-06 | 2024-03-22 | 苏州元脑智能科技有限公司 | Session processing method, system, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109936513A (en) | Data message processing method, intelligent network adapter and CDN server based on FPGA | |
US11316946B2 (en) | Processing and caching in an information-centric network | |
Li et al. | ECCN: Orchestration of edge-centric computing and content-centric networking in the 5G radio access network | |
US8677011B2 (en) | Load distribution system, load distribution method, apparatuses constituting load distribution system, and program | |
CN103457993B (en) | Local cache device and the method that content caching service is provided | |
CN101656618B (en) | Multimedia message broadcasting method and system based on structural Peer-to-Peer Network (PPN) | |
US8250171B2 (en) | Content delivery apparatus, content delivery method, and content delivery program | |
CN106790675A (en) | Load-balancing method, equipment and system in a kind of cluster | |
KR20010088742A (en) | Parallel Information Delievery Method Based on Peer-to-Peer Enabled Distributed Computing Technology | |
CN102970242B (en) | Method for achieving load balancing | |
WO2018213052A1 (en) | System and method for efficiently distributing computation in publisher-subscriber networks | |
CN104967677A (en) | File transmission method and apparatus based on NDN cache optimization | |
CN108293023A (en) | The system and method for the content requests of the context-aware in network centered on support information | |
EP2747336B1 (en) | Content processing method, device and system | |
US9083725B2 (en) | System and method providing hierarchical cache for big data applications | |
Cha et al. | A mobility link service for ndn consumer mobility | |
CN103746768B (en) | A kind of recognition methods of packet and equipment | |
CN105915587B (en) | Content delivery method, system and cache server | |
Wang et al. | Maximizing real-time streaming services based on a multi-servers networking framework | |
CN114945032A (en) | Electric power internet of things terminal data access system, method, device, equipment and medium | |
CN117440053B (en) | Multistage cross die access method and system | |
CN102017568A (en) | System for delivery of content to be played autonomously | |
CN107493254B (en) | TCP message forwarding method, device and system | |
Al-Sakran et al. | A proposed performance evaluation of NoSQL databases in the field of IoT | |
CN107888643A (en) | A kind of UDP load-balancing methods, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190625