CN114465899A - Network acceleration method, system and device under complex cloud computing environment - Google Patents
- Publication number
- CN114465899A CN114465899A CN202210120548.4A CN202210120548A CN114465899A CN 114465899 A CN114465899 A CN 114465899A CN 202210120548 A CN202210120548 A CN 202210120548A CN 114465899 A CN114465899 A CN 114465899A
- Authority
- CN
- China
- Prior art keywords
- network
- virtual
- cloud computing
- computing environment
- acceleration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/083—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/04—Interdomain routing, e.g. hierarchical routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/38—Flow based routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/101—Packet switching elements characterised by the switching fabric construction using crossbar or matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3009—Header conversion, routing tables or routing tags
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/35—Switches specially adapted for specific applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/70—Virtual switches
Abstract
The invention discloses a network acceleration method, system and device for complex cloud computing environments, in the technical field of cloud computing and computer networks. The method decouples computation from network processing and offloads network tasks to dedicated equipment for processing: the forwarding logic of the virtual network is consolidated into the virtual switching device as flow tables (flow-table processing), and this flow-table processing is in turn offloaded to an array of dedicated hardware acceleration units. Each dedicated hardware acceleration unit is integrated through a PCI-E interface, and a purpose-built hardware network acceleration box, comprising access submodules and a switch-chip backplane, connects them to a high-speed switching chip over internal digital circuitry to form a complete switching matrix. The invention solves the problems of low virtual-network performance, heavy impact on computation, and complex network deployment; it greatly improves virtual-network performance and east-west scalability, and reduces the deployment complexity of the data-center network.
Description
Technical Field
The invention relates to the technical field of cloud computing and computer networks, and in particular to a network acceleration method, system and device for complex cloud computing environments.
Background
The rise of artificial intelligence, machine learning, network security, hyperscale architectures, cloud services and similar trends places unprecedented demands on the network, particularly in performance and high availability. These factors, coupled with the surge in network usage from wireless networking and remote work, are driving growth in network bandwidth, user counts and active traffic; the increased volume and complexity of that traffic put tremendous pressure on the CPUs of the compute nodes in the server infrastructure.
In a cloud computing environment the network is generally highly complex, with virtual and physical networks intertwined. With the rise of edge computing, many cloud services require lower latency to support real-time applications and services deployed on end systems, such as video conferencing, 5G and autonomous vehicles; in addition, traditional network services must still be supported. Together these factors impose high performance requirements on the cloud computing network.
Disclosure of Invention
Addressing these shortcomings, the technical task of the invention is to provide a network acceleration method, system and device for complex cloud computing environments that solve the problems of low virtual-network performance, heavy impact on computation, and complex network deployment; greatly improve virtual-network performance and east-west scalability; and reduce the deployment complexity of the data-center network.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the network acceleration method in a complex cloud computing environment decouples computation from network processing and offloads network tasks to dedicated equipment for processing:
the forwarding logic of the virtual network is consolidated into the virtual switching device as flow tables (flow-table processing);
the flow-table processing of the virtual switching device is offloaded to an array of dedicated hardware acceleration units, giving the network lower latency, higher bandwidth and lower CPU consumption;
each dedicated hardware acceleration unit is integrated through a PCI-E interface, and a purpose-built hardware network acceleration box, comprising access submodules and a switch-chip backplane, connects them to a high-speed switching chip over internal digital circuitry to form a complete switching matrix.
In any virtualized network infrastructure there is a large data-plane workload inside the servers, and network workloads are particularly computationally expensive: virtual switching alone can occupy more than 90% of a server's available CPU resources. Decoupling computation from network processing and offloading network tasks to dedicated equipment returns these critical resources to the application layer while improving virtual-network performance and efficiency.
Furthermore, the hardware network acceleration box is externally provided with a plurality of high-speed uplink ports for connecting to core switches, forming a larger-scale switching matrix.
Preferably, the number of high-speed uplink ports is 4 to 8.
Preferably, through the hardware network acceleration box, traffic between virtual machines in the same module is processed by a hardware-offloaded virtual switch, while traffic between nodes is exchanged through the high-speed switching matrix built into the acceleration box.
Preferably, each physical PCI-E device corresponds to one submodule, and each submodule corresponds to one physical machine;
each physical PCI-E device is virtualized into a plurality of logical PCI-E devices through the SR-IOV standard, and the logical PCI-E devices act as multiple virtual network cards used by the virtual machines on the same physical machine.
Each access submodule comprises a PCI-E network card supporting virtualization, an internal switching structure and so on. Its functions are similar to those of an ordinary smart NIC, but it needs no optical module to convert between electrical and optical signals: it is connected to the high-speed switching chip through internal digital circuitry to form a complete switching matrix.
Preferably, the layer-3 functions of the virtual network (routing, SNAT and Floating IP) are all realized with OpenFlow flow tables;
packet processing is completed in the OpenVSwitch switch in a single pass.
The layer-3 functions of the virtual network mainly comprise routing, SNAT and Floating IP. Conventionally they are realized through Linux kernel namespaces: each router corresponds to one namespace, and the Linux TCP/IP protocol stack performs routing and forwarding.
In a traditional virtual network architecture, a virtual machine port connects through a tap port to a Linux bridge, which applies the relevant firewall and network security functions. After that processing, traffic enters the integration bridge of OpenVSwitch for layer-2 handling. If layer-3 routing is needed, the packet is sent from an integration-bridge port into the virtual router's Linux namespace, where the configured subnet routes are looked up; floating-IP (FIP) traffic must additionally be sent into the FIP namespace for a further lookup. There the packet is NATed by the iptables rules configured in the namespace, sent back through the floating-IP gateway into the OpenVSwitch integration bridge, and finally delivered to the external port through a patch port. In short, virtual machine traffic traverses a lengthy chain of hops, so network performance is low: overall performance is limited by the weakest link, the topology is long, localizing problems is costly, and performance is constrained by the number of iptables rules, namespaces and so on. Any node on this lengthy chain can become the bottleneck of overall network bandwidth, fault paths are long to trace, and the resulting low bandwidth and high latency urgently need improvement.
Addressing these problems in the current state of virtual networking, packet processing is completed in the OpenVSwitch switch in a single pass: the layer-3 handling that previously required entering and leaving multiple bridges and multiple Linux namespaces is completed in one pass through the flow table, greatly shortening packet delay and greatly raising bandwidth. Beyond the typical layer-3 forwarding of the example above, the acceleration also covers all virtual network functions, such as load balancing and EIP packet processing.
To improve network performance, the layer-3 functions of the virtual network are all realized with OpenFlow flow tables. Taking security groups as an example: a conventional virtual network must create a corresponding network connection device every time a port is created, whereas with flow-table processing each created port is mounted directly on the virtual switch (OVS), greatly reducing the hop count.
Preferably, the switching matrix can be regarded as a switch that connects with other network acceleration boxes through cables and switching equipment to form a common Fabric network.
Furthermore, the switching matrix itself supports the trunk function: connected to an external switch, it can form a trunk, supporting frame-level load balancing and ensuring that no congestion occurs during data transmission.
In a cloud computing environment the network is generally highly complex: virtual and physical networks are intertwined, and many cloud services require lower latency to support real-time applications and services deployed on end systems. Mainstream virtual network models in current cloud computing make heavy use of the Linux network namespace mechanism. In a distributed-routing scenario, every compute node must create a virtual router and a corresponding network namespace to provide traffic forwarding. Used at scale, this style of virtual-network routing has obvious performance problems: traffic sent from a single virtual host must be processed by many virtual network devices, so the network performance loss is enormous. Especially when the DPDK software acceleration technique is used, traffic must repeatedly switch between user mode and kernel mode, so the achieved acceleration falls far below expectations.
To solve the problems of poor virtual-network performance and limited scalability in complex cloud computing environments, the invention provides a method based on flow-table offloading and direct hardware processing.
The invention also claims a network acceleration system in a complex cloud computing environment, which migrates the traditional network card from the host into acceleration-unit equipment in modular form; after being implemented in hardware, the unit equipment is connected to the host through PCI-E.
The system realizes the network acceleration method in a complex cloud computing environment described above.
The invention also claims a network acceleration device in a complex cloud computing environment, comprising at least one memory, at least one processor, and the hardware network acceleration box described above;
the at least one memory is used to store a machine-readable program;
the at least one processor is used to call the machine-readable program to execute the network acceleration method in a complex cloud computing environment described above.
Compared with the prior art, the network acceleration method, system and device in a complex cloud computing environment have the following beneficial effects:
all forwarding logic of the virtual network is consolidated into the virtual switching device as flow tables, greatly reducing the hop count; processing inside the virtual switch is further offloaded to dedicated software and hardware processing units, further improving efficiency, so the network achieves lower latency, higher bandwidth and lower CPU consumption; policy routing with symmetric forward and reverse traffic steering along the full path is realized, supporting various non-service virtual appliances, including security, monitoring and log-audit devices;
better network service capability is provided, devices are consolidated, network cabling and operations costs are greatly reduced, and efficient network optimization with a low deployment threshold is truly achieved;
the traditional functions of the virtual network are all processed centrally as flow tables, which greatly optimizes the performance of a traditional virtual network and improves the processing performance of the corresponding network functions; at the same time, network products with demanding routing and forwarding performance requirements, such as load balancers and cluster gateways, obtain greatly improved concurrency and throughput through this method.
Drawings
FIG. 1 is a schematic diagram of a network acceleration method in a complex cloud computing environment according to an embodiment of the present invention;
FIG. 2 is a diagram of an implementation process of the network acceleration method according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
The mainstream virtual network models in current cloud computing make heavy use of the Linux network namespace mechanism: Linux bridges, iptables, tap/tun/veth interfaces and the like. In a distributed-routing scenario, every compute node must create a virtual router and a corresponding network namespace to provide traffic forwarding. Used at scale, this style of virtual-network routing has obvious performance problems: traffic sent from a single virtual host must be processed by many virtual network devices, so the network performance loss is enormous. Especially when the DPDK software acceleration technique is used together with tap and veth interfaces, traffic must repeatedly switch between user mode and kernel mode, so the achieved acceleration falls far below expectations.
Some solutions exist in the industry, such as the relatively popular OVN technology, which reduces the number of virtual forwarding hops and improves system performance by turning control commands into flow tables. In practical use, however, OVN has many problems: some older cloud operating system versions do not support OVN; the database nodes can become a bottleneck; migrating an OVS environment to an OVN environment requires much extra work; tracing network problems becomes more complicated; and version updates and upgrades of the environment involve more steps. Moreover, the OVS is not completely offloaded: virtual network port functions remain on the virtual machine host and still occupy computing resources, which becomes a system bottleneck.
In addition, because of the requirements for high availability and network isolation, the network connections at the ToR switches of existing cloud data centers are very complex, making troubleshooting and traffic-model analysis difficult.
Aiming at the problems of low virtual-network performance, heavy impact on computation, and complex network deployment, an embodiment of the invention provides a network acceleration method in a complex cloud computing environment that decouples computation from network processing and offloads network tasks to dedicated equipment for processing:
all forwarding logic of the virtual network is consolidated into the virtual switching device as flow tables;
all layer-2 and layer-3 functions of the virtual network are processed centrally as flow tables. This processing is not limited to a virtual switch: any product with demanding network routing and forwarding performance requirements can obtain greatly improved concurrency and throughput through this device;
the flow-table processing of the virtual switching device is offloaded to an array of dedicated hardware acceleration units, giving the network lower latency, higher bandwidth and lower CPU consumption;
each dedicated hardware acceleration unit is integrated through a PCI-E interface, and a purpose-built hardware network acceleration box, comprising access submodules and a switch-chip backplane, connects them to a high-speed switching chip over internal digital circuitry to form a complete switching matrix.
In any virtualized network infrastructure there is a large data-plane workload inside the servers, and network workloads are particularly computationally expensive: virtual switching alone can occupy more than 90% of a server's available CPU resources. Decoupling computation from network processing and offloading network tasks to dedicated equipment returns these critical resources to the application layer while improving virtual-network performance and efficiency.
In this network acceleration method, each physical PCI-E device corresponds to one submodule, each submodule corresponds to one physical machine, and each physical PCI-E device is virtualized into a plurality of logical PCI-E devices through the SR-IOV standard; the logical devices act as multiple virtual network cards used by the virtual machines on the same physical machine. Because SR-IOV is implemented in hardware, I/O performance comparable to a physical network card can be obtained: under this standard the virtual server connects directly to the I/O device, bypassing the hypervisor and virtual switching layers, yielding extremely low processing delay and near-wire speed.
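The PF/VF relationship described above can be sketched as follows. This is an illustrative model, not the patent's implementation: the class names, the PCI address, and the VM names are all hypothetical, and `enable_sriov` only mimics what writing to the kernel's `sriov_numvfs` sysfs file triggers on real hardware.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualFunction:
    vf_index: int
    assigned_vm: Optional[str] = None   # VM that currently owns this virtual NIC

@dataclass
class PhysicalFunction:
    pci_address: str        # host-side PCI-E address (hypothetical example value)
    num_vfs: int
    vfs: list = field(default_factory=list)

    def enable_sriov(self):
        # Roughly what `echo N > /sys/.../sriov_numvfs` triggers on real hardware:
        # the physical function exposes N lightweight virtual functions.
        self.vfs = [VirtualFunction(i) for i in range(self.num_vfs)]

    def attach_vm(self, vm_name):
        # Hand the next free VF to a VM as a pass-through virtual NIC,
        # bypassing the hypervisor's software switching layer.
        for vf in self.vfs:
            if vf.assigned_vm is None:
                vf.assigned_vm = vm_name
                return vf
        raise RuntimeError("no free virtual functions left on this PF")

pf = PhysicalFunction("0000:3b:00.0", num_vfs=4)
pf.enable_sriov()
first = pf.attach_vm("vm-01")
second = pf.attach_vm("vm-02")
print(first.vf_index, second.vf_index)  # 0 1
```

The point of the model is the one-to-many mapping: one submodule (one physical PCI-E function) serves every VM on its physical machine through separate virtual functions, with no software switch in the data path.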
At the same time, through these acceleration units a virtual machine can move its packet-processing workload from the CPU to a programmable physical acceleration card. By offloading the network processing workloads and tasks of server CPUs, the network acceleration device can greatly improve server network performance in cloud and private data centers. Driven by the ever-increasing network traffic and computational complexity of data centers, adopting network accelerators provides a processing architecture that dedicates computation to certain workloads through the acceleration-box units and offloads those workloads from the general-purpose compute cores, improving the efficiency of the overall virtual network solution.
For the upstream traffic of virtual network cards, the acceleration box of the invention processes traffic between virtual machines in the same module through the hardware-offloaded virtual switch, while cross-node traffic is exchanged through the high-speed switching matrix built into the acceleration box. The built-in high-speed switching matrix resembles a blade switch: it exposes several external interfaces, while many more ports are distributed internally and connect directly to the network acceleration units. The hardware acceleration box integrates an Ethernet management interface and a web server, so it can be managed through a management framework or configured via the web. The built-in switching matrix greatly reduces the cabling of the cloud data-center network and the difficulty of troubleshooting, while also miniaturizing the cloud computing network and reducing energy consumption. Traffic between different network acceleration boxes can interact through the serially connected high-speed switching matrices.
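The three forwarding cases above (same submodule, same box but different node, different boxes) amount to a simple path-selection rule. The sketch below is a hedged illustration of that decision logic only; the box and module identifiers are hypothetical, and real hardware would of course make this choice in the switching silicon, not in Python.

```python
def forwarding_path(src, dst):
    """Pick the forwarding path between two VMs.

    src and dst are (box_id, module_id) pairs locating each VM's
    access submodule inside an acceleration box (illustrative naming).
    """
    src_box, src_module = src
    dst_box, dst_module = dst
    if src_box != dst_box:
        return "inter-box fabric"        # serially connected matrices / uplinks
    if src_module == dst_module:
        return "offloaded vswitch"       # same submodule: stays in the hardware vswitch
    return "built-in switching matrix"   # cross-node traffic inside one box

print(forwarding_path(("box-A", 1), ("box-A", 1)))  # offloaded vswitch
print(forwarding_path(("box-A", 1), ("box-A", 2)))  # built-in switching matrix
print(forwarding_path(("box-A", 1), ("box-B", 3)))  # inter-box fabric
```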
The high-speed switching matrix itself can be regarded as a switch and can be connected with other network acceleration boxes through cables and switching equipment to form a common Fabric network. It also supports the trunk function: connected to an external switch, it can form a trunk, supporting frame-level load balancing and ensuring that no congestion occurs during data transmission.
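Frame-level load balancing over a trunk is commonly done by hashing per-frame header fields onto a member link, so each conversation stays on one link (no reordering) while different conversations spread across the trunk. The patent does not specify its hash, so the following is only a sketch of one common approach, with made-up MAC addresses and link names:

```python
import zlib

def trunk_member(src_mac, dst_mac, members):
    # Hash the frame's MAC pair onto one trunk member link. Frames of the
    # same src/dst pair always hash to the same link, so their order is
    # preserved, while different pairs spread across all member links.
    key = f"{src_mac}->{dst_mac}".encode()
    return members[zlib.crc32(key) % len(members)]

links = ["uplink-0", "uplink-1", "uplink-2", "uplink-3"]
choice = trunk_member("02:00:00:00:00:01", "02:00:00:00:00:02", links)
```

Because the selection is a pure function of the header fields, it needs no per-flow state in the switch, which is what makes it cheap to implement in hardware.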
The specific technical implementation of the method comprises two aspects:
1. software implementation
The layer-3 functions of the virtual network mainly comprise routing, SNAT and Floating IP. Conventionally they are realized through Linux kernel namespaces: each router corresponds to one namespace, and the Linux TCP/IP protocol stack performs routing and forwarding.
In a traditional virtual network architecture, a virtual machine port connects through a tap port to a Linux bridge, which applies the relevant firewall and network security functions. After that processing, traffic enters the integration bridge of OpenVSwitch for layer-2 handling. If layer-3 routing is needed, the packet is sent from an integration-bridge port into the virtual router's Linux namespace, where the configured subnet routes are looked up; floating-IP (FIP) traffic must additionally be sent into the FIP namespace for a further lookup. There the packet is NATed by the iptables rules configured in the namespace, sent back through the floating-IP gateway into the OpenVSwitch integration bridge, and finally delivered to the external port through a patch port. In short, virtual machine traffic traverses a lengthy chain of hops, so network performance is low: overall performance is limited by the weakest link, the topology is long, localizing problems is costly, and performance is constrained by the number of iptables rules, namespaces and so on. Any node on this lengthy chain can become the bottleneck of overall network bandwidth, fault paths are long to trace, and the resulting low bandwidth and high latency urgently need improvement.
Addressing these problems in the current state of virtual networking, the method completes packet processing in the OpenVSwitch switch in a single pass: the layer-3 handling that previously required entering and leaving multiple bridges and multiple Linux namespaces is completed in one pass through the flow table, greatly shortening packet delay and greatly raising bandwidth. Beyond the typical layer-3 forwarding of the example above, the acceleration also covers all virtual network functions, such as load balancing and EIP packet processing.
To improve network performance, the method realizes the layer-3 functions of the virtual network with OpenFlow flow tables. Taking security groups as an example: a conventional virtual network must create a corresponding network connection device every time a port is created, whereas with flow-table processing each created port is mounted directly on the virtual switch (OVS), greatly reducing the hop count.
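The single-pass idea can be illustrated with a toy match-action table: one lookup decides both the SNAT rewrite and the output port, replacing the bridge/namespace round trips described above. This is a simplified sketch, not OpenFlow wire format or the patent's actual flow rules; the subnets, floating IP and port names are invented for illustration.

```python
import ipaddress

# First matching entry wins, as in an OVS flow table.
FLOW_TABLE = [
    # East-west: traffic destined to the tenant subnet goes straight to the VM port.
    ({"dst_subnet": "10.0.0.0/24"}, [("output", "vm-port")]),
    # North-south: rewrite the tenant source to its floating IP (SNAT), then send out.
    ({"src_subnet": "10.0.0.0/24"}, [("set_field", ("src_ip", "203.0.113.5")),
                                     ("output", "ext-port")]),
]

def process(packet):
    """Apply the flow table to a packet dict in a single pass."""
    for match, actions in FLOW_TABLE:
        hit = True
        if "dst_subnet" in match:
            hit &= ipaddress.ip_address(packet["dst_ip"]) in ipaddress.ip_network(match["dst_subnet"])
        if "src_subnet" in match:
            hit &= ipaddress.ip_address(packet["src_ip"]) in ipaddress.ip_network(match["src_subnet"])
        if hit:
            for action, arg in actions:
                if action == "set_field":
                    name, value = arg
                    packet[name] = value   # e.g. the SNAT rewrite
                elif action == "output":
                    packet["out_port"] = arg
            return packet                  # one table pass, no namespace hops
    packet["out_port"] = "drop"
    return packet

east = process({"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9"})
north = process({"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8"})
print(east["out_port"], north["src_ip"], north["out_port"])
# vm-port 203.0.113.5 ext-port
```

The northbound packet is routed, SNATed and emitted by a single table lookup; in the traditional architecture the same packet would cross the Linux bridge, the integration bridge, the router namespace and the FIP namespace before leaving the host.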
2. Hardware implementation
The flow-table based software processing greatly reduces the number of routing hops. On that basis, the processing unit can be further implemented in hardware and migrated into a dedicated network acceleration box, further improving processing performance and system stability. As shown in FIG. 2, the basic design concept comprises the following aspects:
First, the network card is migrated from the host into the hardware network acceleration box and connected directly to the host through PCI-E. In the hardware acceleration box, each physical PCI-E device corresponds to one submodule and each submodule corresponds to one physical machine; each physical PCI-E device can be virtualized into a plurality of logical PCI-E devices, which act as multiple virtual network cards used by the virtual machines on the same physical machine.
Each access submodule comprises a PCI-E network card supporting virtualization, an internal switching structure and so on. Its functions are similar to those of an ordinary smart NIC, but it needs no optical module to convert between electrical and optical signals: it is connected to the high-speed switching chip through internal digital circuitry to form a complete switching matrix.
Finally, 4 to 8 high-speed uplink ports are provided on the switching-matrix side for connecting core switches, forming a larger-scale network matrix.
Through the hardware network acceleration box, the virtual networking of the virtual machines can be completely offloaded to network-side hardware for processing, with no need for the physical machine's protocol stack or its complex internal virtual links, thereby providing better network service capability.
An embodiment of the invention also provides a network acceleration system in a complex cloud computing environment. The system migrates the traditional network card from the host into acceleration-unit equipment in modular form; after being implemented in hardware, the unit equipment is connected to the host through PCI-E.
The system realizes the network acceleration method in a complex cloud computing environment of the embodiment of the invention.
An embodiment of the present invention further provides a network acceleration apparatus in a complex cloud computing environment, comprising at least one memory, at least one processor, and the hardware network acceleration box described above;
the at least one memory is used to store a machine-readable program;
the at least one processor is configured to invoke the machine-readable program to execute the network acceleration method in a complex cloud computing environment according to the above embodiment of the present invention.
While the invention has been shown and described in detail in the drawings and in the preferred embodiments, it is not intended to limit the invention to the embodiments disclosed; it will be apparent to those skilled in the art that various combinations of the technical means in the embodiments described above may be used to obtain further embodiments of the invention, which are also within the scope of the invention.
Claims (10)
1. A network acceleration method in a complex cloud computing environment, characterized in that computing and network processing are decoupled and network tasks are offloaded to corresponding devices for processing:
the forwarding logic of the virtual network is integrated into the virtual switch device by means of flow quantization processing,
and the flow quantization processing of the virtual switch device is offloaded to a dedicated hardware acceleration processing unit array;
the dedicated hardware acceleration processing units are integrated through PCI-E interfaces, and a dedicated hardware network acceleration box comprising access sub-modules and a switching-chip backplane is designed, in which the access sub-modules are connected with the high-speed switching chip via internal digital circuits to form a complete switching matrix.
2. The network acceleration method in the complex cloud computing environment of claim 1, wherein the hardware network acceleration box is externally provided with a plurality of high-speed uplink ports for connecting core switches to form a larger-scale switching matrix.
3. The network acceleration method in the complex cloud computing environment of claim 2, wherein the number of the high-speed uplink ports is 4 to 8.
4. The network acceleration method in the complex cloud computing environment of claim 2, wherein, through the hardware network acceleration box, traffic between virtual machines inside the same module is processed by a hardware-offloaded virtual switch, and traffic between nodes is exchanged through the high-speed switching matrix built into the acceleration box.
5. The network acceleration method in the complex cloud computing environment of claim 1, 2 or 4, characterized in that each physical PCI-E corresponds to one sub-module, and each sub-module corresponds to one physical machine;
each physical PCI-E is virtualized into a plurality of logical PCI-E devices through the SR-IOV standard, and the logical PCI-E devices correspond to a plurality of virtual network cards used by a plurality of virtual machines on the same physical machine.
6. The network acceleration method in the complex cloud computing environment according to claim 1 or 2, wherein the routing, SNAT and floating-IP layer-3 functions of the virtual network are all implemented using OpenFlow flow tables;
the processing of a message is completed in one pass in the Open vSwitch switch.
7. The network acceleration method in a complex cloud computing environment of claim 1, wherein the switching matrix is connected to other network acceleration boxes through cables and switching devices to form a common fabric network.
8. The network acceleration method in the complex cloud computing environment of claim 1 or 7, characterized in that the switching matrix itself supports the trunk function; when connected to an external switch, a trunk can be formed, thereby supporting frame-level load balancing.
9. A network acceleration system in a complex cloud computing environment, characterized in that the traditional network card is migrated from the host machine, in modular form, to acceleration unit devices which, once implemented in hardware, can be connected to the host machine through PCI-E;
the system implements the network acceleration method in the complex cloud computing environment according to any one of claims 1 to 8.
10. A network acceleration apparatus in a complex cloud computing environment, characterized by comprising: at least one memory, at least one processor, and the hardware network acceleration box of any one of claims 1 to 8;
the at least one memory is configured to store a machine-readable program;
the at least one processor is configured to invoke the machine-readable program to execute the network acceleration method in the complex cloud computing environment according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210120548.4A CN114465899A (en) | 2022-02-09 | 2022-02-09 | Network acceleration method, system and device under complex cloud computing environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114465899A true CN114465899A (en) | 2022-05-10 |
Family
ID=81413316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210120548.4A Pending CN114465899A (en) | 2022-02-09 | 2022-02-09 | Network acceleration method, system and device under complex cloud computing environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114465899A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115622959A (en) * | 2022-11-07 | 2023-01-17 | 浪潮电子信息产业股份有限公司 | Switch control method, device, equipment, storage medium and SDN (software defined network) |
CN115858102A (en) * | 2023-02-24 | 2023-03-28 | 珠海星云智联科技有限公司 | Method for deploying virtual machine supporting virtualization hardware acceleration |
CN115914003A (en) * | 2022-12-08 | 2023-04-04 | 苏州浪潮智能科技有限公司 | Flow monitoring method and system based on intelligent network card |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190012193A1 (en) * | 2017-07-07 | 2019-01-10 | Gysbert Floris van Beek Van Leeuwen | Virtio relay |
US20190089641A1 (en) * | 2017-09-17 | 2019-03-21 | Mellanox Technologies, Ltd. | Stateful Connection Tracking |
CN112543137A (en) * | 2020-11-30 | 2021-03-23 | 中国电子科技集团公司第五十四研究所 | Virtual machine network acceleration system based on semi-virtualization and OVS-DPDK |
CN112929299A (en) * | 2021-01-27 | 2021-06-08 | 广州市品高软件股份有限公司 | SDN cloud network implementation method, device and equipment based on FPGA accelerator card |
CN113676544A (en) * | 2021-08-24 | 2021-11-19 | 优刻得科技股份有限公司 | Cloud storage network and method for realizing service isolation in entity server |
CN113821310A (en) * | 2021-11-19 | 2021-12-21 | 阿里云计算有限公司 | Data processing method, programmable network card device, physical server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220510 |