CN113765815B - Method, equipment and system for sharing multicast message load - Google Patents
Method, equipment and system for sharing multicast message load
- Publication number
- CN113765815B (application CN202010505915.3A)
- Authority
- CN
- China
- Prior art keywords
- link
- network device
- multicast
- parallel links
- link group
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/16—Multipoint routing
Abstract
The application provides a method for multicast message load sharing, comprising the following steps: a first network device receives a first multicast message; the first network device determines, according to a multicast forwarding table entry, a first link group corresponding to the first multicast message, where the first link group includes at least two parallel links between the first network device and a second network device, the second network device is a neighbor of the first network device, and the at least two parallel links are different; and the first network device selects a first link, which is one of the at least two parallel links, to send the first multicast message. With this technical scheme, when one or more of the parallel links used for load sharing fails or recovers, the convergence time of the multicast service can be shortened.
Description
Technical Field
The present application relates to the field of network communications, and more particularly, to a method, a first network device, a second network device, and a system for multicast message load sharing.
Background
Internet protocol (IP) multicast technology enables efficient point-to-multipoint data transmission in an IP network, and can effectively save network bandwidth and reduce network load.
In a scenario where multiple parallel links exist between two nodes forwarding a multicast message and the parallel links are not bundled into a link aggregation group (LAG), a related technical scheme sends different multicast flows, identified by their (source, group) pair (S, G), over different links, thereby achieving load sharing of multicast traffic across the parallel links. However, if one or more of the load-sharing parallel links fails or recovers, the convergence time of the multicast service in that scheme is long.
Therefore, how to shorten the convergence time of the multicast service when one or more of the load-sharing parallel links fails or recovers is a technical problem that currently needs to be solved.
Disclosure of Invention
The application provides a method for multicast message load sharing and a first network device, which can shorten the convergence time of the multicast service when one or more of the parallel links used for load sharing fails or recovers.
In a first aspect, a method for load sharing of multicast messages is provided, including: a first network device receives a first multicast message; the first network device determines, according to a multicast forwarding table entry, a first link group corresponding to the first multicast message, where the first link group includes at least two parallel links between the first network device and a second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device; and the first network device selects a first link, which is one of the at least two parallel links, to send the first multicast message.
It should be noted that, in the present application, a link of a network device being in the available state (up) may be understood as the link being normal, so that messages can be forwarded over it. A link of a network device being in the unavailable state (down) may be understood as the link having failed, so that messages cannot be forwarded over it.
The first link and the second link of the first network device may be two members of the first link group, where the links in the first link group may all be in the available state, or some of them may be in the unavailable state. The number of links in the first link group may be any integer greater than 1, which is not particularly limited in this application.
It should be understood that in this application, a "member" may refer to an interface, such as an Ethernet port, through which a network device connects to one of the load-sharing links in the first link group.
In this application, the at least two parallel links included in the first link group being different may be understood as the two or more links between the connected nodes not being bundled into a link aggregation group (LAG). In a possible implementation, the physical interfaces corresponding to the at least two parallel links in the first link group are different. In another possible implementation, the internet protocol (IP) addresses on the interfaces of the at least two parallel links are different. In another possible implementation, the at least two parallel links have both different physical interfaces and different corresponding IP addresses. In another possible implementation, the link types corresponding to the at least two parallel links in the first link group are different. In another possible implementation, the physical interfaces corresponding to the at least two parallel links in the first link group are the same, but the virtual local area networks (VLANs) configured on those physical interfaces are different; that is, the at least two parallel links between the nodes may be regarded as at least two different logical links.
It should be understood that, in each of the foregoing cases, "different" may mean that all items differ from one another, or merely that at least one difference exists. For example, the IP addresses on the interfaces of the at least two parallel links may all differ from one another, or it suffices that at least two of the interfaces carry different IP addresses.
In the above technical scheme, in a scenario where multicast messages sent between network devices are load-shared over multiple different parallel links, if at least one link in the load-sharing link group fails or recovers, only the member link states within the link group need to be refreshed; the multicast forwarding entries that conventionally record the correspondence between links and multicast traffic do not need to be refreshed. Because a large amount of multicast traffic may be carried on a small number of parallel links, refreshing the per-flow correspondence between multicast traffic and links would require updating many entries, whereas refreshing the member link states within the link group touches far fewer entries. Therefore, refreshing only the member link states within the link group shortens the convergence time of the multicast service.
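The indirection described above can be sketched as follows. This is a hypothetical Python model, not the patent's implementation; the table names, flow identifiers, and group names are illustrative:

```python
# Hypothetical model of the link-group indirection: the multicast forwarding
# table maps a flow identifier to a link-group identifier, and a separate,
# much smaller member table maps each group to its parallel links.

# Per-flow entries: potentially many (one per multicast (S, G) flow).
multicast_forwarding_table = {
    ("S1", "G1"): "group-1",
    ("S2", "G1"): "group-1",
    ("S3", "G2"): "group-1",
}

# Per-group member state: few (one entry per link group).
link_groups = {
    "group-1": {"link-A": "up", "link-B": "up"},
}

def available_links(group_id):
    """Return the member links of a group that are currently up."""
    return [l for l, state in link_groups[group_id].items() if state == "up"]

# When link-A fails, only the single member-state entry is refreshed;
# none of the per-flow forwarding entries change.
link_groups["group-1"]["link-A"] = "down"
assert multicast_forwarding_table[("S1", "G1")] == "group-1"  # unchanged
assert available_links("group-1") == ["link-B"]
```

The point of the sketch is the ratio of work on failure: one member-state update versus one update per affected multicast flow in the conventional scheme.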
In a possible implementation, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast message and an identifier of the first link group; the first network device obtains the first identifier from the first multicast message; and the first network device determines the first link group according to the multicast forwarding table entry and the first identifier.
It should be appreciated that the first identification of the first multicast message may also be referred to as a multicast flow identification of the first multicast message.
The identifier of the first link group may be a character-string name, or may be a link group type plus a link group ID represented by an integer value.
The member table corresponding to the identifier of the first link group may include at least two link identifiers corresponding to at least two parallel links in the first link group.
Further, the multicast forwarding table entry further includes a correspondence between the identifier of the second link group and identifiers of at least two parallel links in the second link group.
In another possible implementation, the first network device determines the at least two parallel links according to the identifier of the first link group; the first network device then selects the first link from the at least two parallel links to send the first multicast message according to the correspondence between the identifier of the second link group and the identifiers of the at least two parallel links in the second link group.
In another possible implementation, the method further includes: and when the state of the first link is unavailable, the first network device selects a second link except the first link from the at least two parallel links and sends the first multicast message through the second link.
In another possible implementation, before the first network device determines, according to the multicast forwarding table entry, the first link group corresponding to the first multicast message, the method further includes: the first network device receives, through each of the at least two parallel links, a message sent by the second network device, where the message sent on each link includes the identifier (ID) of the second network device; the first network device determines that the IDs of the second network device included in the at least two messages are the same; and the first network device establishes the first link group including the at least two parallel links based on the ID of the second network device and the correspondence between the first link group and the ID of the second network device.
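The grouping step above can be illustrated with a short sketch. This is a hypothetical model under the assumption that each local link reports the neighbor ID carried in the hello-style message it received; the function and link names are illustrative:

```python
# Hypothetical sketch: group parallel links by the neighbor ID carried in the
# message received on each link. Links whose messages carry the same neighbor
# ID terminate on the same neighbor and are placed in one link group.

def build_link_groups(hello_messages):
    """hello_messages: list of (local_link, neighbor_id) pairs."""
    groups = {}
    for link, neighbor_id in hello_messages:
        groups.setdefault(neighbor_id, []).append(link)
    # Only neighbors reachable over two or more parallel links form a group.
    return {nid: links for nid, links in groups.items() if len(links) >= 2}

msgs = [("link-A", "router-2"), ("link-B", "router-2"), ("link-C", "router-3")]
groups = build_link_groups(msgs)
assert groups == {"router-2": ["link-A", "link-B"]}  # link-C has no parallel peer
```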
In another possible implementation manner, the first network device selects a first link from the at least two parallel links according to the characteristic information of the first multicast message, and sends the first multicast message through the first link.
It should be understood that the characteristic information of the first multicast message is not specifically limited in this application, and may include one or more of the following: the source address information of the first multicast message, the destination address information of the first multicast message, the source address and the destination address information of the first multicast message, the hash result information corresponding to the source address of the first multicast message, the hash result information corresponding to the destination address information of the first multicast message, and the hash result information corresponding to the source address and the destination address information of the first multicast message.
In another possible implementation, the first network device takes the feature information of the first multicast message modulo the number of links in the first link group whose state is available, to obtain the first link in the first link group, and sends the first multicast message through the first link.
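The hash-and-modulo selection can be sketched as follows. This is a hypothetical illustration: the patent does not prescribe a hash function, so CRC32 over the source and destination addresses is used here purely as an example of stable, non-cryptographic feature hashing:

```python
# Hypothetical sketch of modulo-based link selection: hash the flow's feature
# information (here, source and destination addresses) and take the result
# modulo the number of currently available member links.

import zlib

def select_link(src, dst, links_up):
    """Pick one available link for the (src, dst) multicast flow."""
    key = f"{src}->{dst}".encode()
    index = zlib.crc32(key) % len(links_up)  # stable, non-cryptographic hash
    return links_up[index]

links = ["link-A", "link-B", "link-C"]
chosen = select_link("10.0.0.1", "232.1.1.1", links)
assert chosen in links
# The same flow always maps to the same link while membership is unchanged,
# so packets of one flow are not reordered across links.
assert chosen == select_link("10.0.0.1", "232.1.1.1", links)
```

Because the modulus is the count of *available* links, a failed member drops out of the candidate set without any per-flow table change, which is the convergence benefit described above.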
In a second aspect, another method for multicast message load sharing is provided, including: a first network device receives a first multicast join message through a first link, where the first multicast join message includes a first identifier of a first multicast message, and the first link is one link in a first link group; the first network device establishes a multicast forwarding table entry, where the multicast forwarding table entry includes a correspondence between the first identifier and the identifier of the first link group, the first link group includes at least two parallel links between the first network device and a second network device, and the at least two parallel links are different.
In one possible implementation, the method further includes: the first network equipment receives a second multicast joining message through a second link, wherein the second multicast joining message comprises a second identifier of the second multicast message, and the second link is one link in a second link group; the first network device establishes the multicast forwarding table entry, and the multicast forwarding table entry further includes a correspondence between the second identifier and the identifier of the second link group.
In another possible implementation, the method further includes: the first network device receives the first multicast message; the first network device determines a first link group corresponding to the first multicast message according to the multicast forwarding table item; the first network device selects a second link to send the first multicast message, wherein the second link is one link of the at least two parallel links.
In this application, at least two parallel links included in the first link group are different, and it may be understood that two or more links between connected nodes are not bundled into an aggregate link group (link aggregation group, LAG). In one possible implementation, at least two parallel links in the first link group have different physical interfaces, and the internet protocol (internet protocol, IP) addresses on the at least two parallel link interfaces are different. In another possible implementation manner, at least two parallel links in the first link group are different in corresponding link types. In another possible implementation manner, the physical interfaces corresponding to at least two parallel links in the first link group are the same, but virtual local area networks (virtual local area network, VLANs) configured on the physical interfaces are different, that is, at least two parallel links between nodes may be regarded as at least two different logical links.
It should be understood that, in each of the foregoing cases, "different" may mean that all items differ from one another, or merely that at least one difference exists. For example, the IP addresses on the interfaces of the at least two parallel links may all differ from one another, or it suffices that at least two of the interfaces carry different IP addresses.
In another possible implementation, the first network device obtains the first identifier from the first multicast message; and the first network device determines the first link group according to the multicast forwarding table entry and the first identifier.
In another possible implementation manner, the first network device determines the at least two parallel links according to the identification of the first link group; and the first network equipment selects the first link from the at least two parallel links to send the first multicast message.
In another possible implementation, the method further includes: and when the state of the first link is unavailable, the first network device selects a second link except the first link from the at least two parallel links and sends the first multicast message through the second link.
In another possible implementation manner, before the first network device determines, according to the multicast forwarding table entry, a first link group corresponding to the first multicast packet, the method further includes: the first network device receives at least two messages sent by the second network device through each link of the at least two parallel links, wherein the messages sent by each link comprise Identification (ID) of the second network device; the first network device establishes the first link group including the at least two parallel links based on the ID of the second network device and the correspondence between the IDs of the first link group and the second network device.
In particular, the messages may be protocol independent multicast (PIM) hello messages.
In another possible implementation manner, the first network device selects the second link from the at least two parallel links according to the characteristic information of the first multicast message, and sends the first multicast message through the second link.
It should be understood that the characteristic information of the first multicast message is not specifically limited in this application, and may include one or more of the following: the source address information of the first multicast message, the destination address information of the first multicast message, the source address and the destination address information of the first multicast message, the hash result information corresponding to the source address of the first multicast message, the hash result information corresponding to the destination address information of the first multicast message, and the hash result information corresponding to the source address and the destination address information of the first multicast message.
In another possible implementation, the first network device takes the feature information of the first multicast message modulo the number of links in the first link group whose state is available, to obtain the first link in the first link group, and sends the first multicast message through the first link.
The advantages of the second aspect and of any possible implementation of the second aspect correspond to those of the first aspect and of any possible implementation of the first aspect, and are not repeated here.
In a third aspect, a method for load sharing of multicast messages is provided, including: a second network device receives a first multicast message sent by a first network device through a first link, where the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; the second network device determines, based on the first link being one link in the second link group, that the first multicast message passes a reverse path forwarding (RPF) check; and the second network device forwards the first multicast message.
In a possible implementation, the second network device determines the second link group corresponding to the first multicast message according to a multicast forwarding table entry; the second network device determines that the first link is one link in the second link group, and accordingly determines that the first multicast message passes the RPF check.
In another possible implementation manner, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group, and the second network device obtains the first identifier from the first multicast packet; and the second network equipment determines the second link group according to the first identifier and the multicast forwarding table entry.
In another possible implementation manner, the multicast forwarding table entry further includes a correspondence between an identifier of the second link group and identifiers of at least two parallel links in the second link group, and the second network device determines that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding table entry.
In another possible implementation, before the second network device determines, based on the first link being one link in the second link group, that the first multicast message passes the reverse path forwarding (RPF) check, the method further includes: the second network device receives, through each of the at least two parallel links, a message sent by the first network device, where the message sent on each link includes the identifier (ID) of the first network device; the second network device determines that the IDs of the first network device included in the at least two messages are the same; and the second network device establishes the second link group including the at least two parallel links based on the ID of the first network device and the correspondence between the second link group and the ID of the first network device.
In particular, the messages may be protocol independent multicast (PIM) hello messages.
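The relaxed RPF check of the third aspect can be sketched as follows. This is a hypothetical model, not the patent's implementation: the key point is that arrival on *any* member link of the expected link group passes the check, so traffic moved to a sibling parallel link after a failure is not dropped:

```python
# Hypothetical sketch of the group-based RPF check: a multicast message passes
# if the link it arrived on is any member of the link group that the
# forwarding entry associates with the flow.

forwarding_table = {("S1", "G1"): "group-2"}          # flow -> expected link group
group_members = {"group-2": {"link-A", "link-B"}}     # group -> member links

def rpf_check(flow, incoming_link):
    group_id = forwarding_table.get(flow)
    return group_id is not None and incoming_link in group_members[group_id]

assert rpf_check(("S1", "G1"), "link-A")      # arrived on a member link: pass
assert rpf_check(("S1", "G1"), "link-B")      # any parallel member also passes
assert not rpf_check(("S1", "G1"), "link-X")  # non-member link: fail
```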
In a fourth aspect, there is provided a first network device comprising: the device comprises a receiving module, a determining module and a selecting module.
The receiving module is used for receiving a first multicast message;
the determining module is used for determining a first link group corresponding to the first multicast message according to a multicast forwarding table item, wherein the first link group comprises at least two parallel links between the first network device and second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device;
the selecting module is configured to select a first link to send the first multicast message, where the first link is one link of the at least two parallel links.
In a possible implementation manner, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;
the determining module is specifically configured to obtain, through an obtaining module, the first identifier from the first multicast message; and determining the first link group according to the multicast forwarding table item and the first identifier.
In another possible implementation manner, the selection module is specifically configured to determine the at least two parallel links according to the identification of the first link group; and selecting the first link from the at least two parallel links to send the first multicast message.
In another possible implementation manner, the selecting module is further configured to select a second link other than the first link from the at least two parallel links when the state of the first link is unavailable, and send the first multicast packet through the second link.
In another possible implementation, the receiving module is further configured to receive, through each of the at least two parallel links, a message sent by the second network device, where the message sent on each link includes the ID of the second network device; the determining module is further configured to determine that the IDs of the second network device included in the at least two messages are the same;
the first network device further comprises:
the establishing module is configured to establish the first link group including the at least two parallel links based on the ID of the second network device and a correspondence between the IDs of the first link group and the second network device.
In another possible implementation manner, the selecting module is specifically configured to: and selecting a first link from the at least two parallel links according to the characteristic information of the first multicast message, and sending the first multicast message through the first link.
In a fifth aspect, another first network device is provided, comprising a receiving module and a setup module.
The receiving module is used for receiving a first multicast joining message through a first link, wherein the first multicast joining message comprises a first identifier of the first multicast message, and the first link is one link in a first link group;
the establishing module is configured to establish the multicast forwarding table entry, where the multicast forwarding table entry includes a correspondence between the first identifier and an identifier of a first link group, and the first link group includes at least two parallel links between the first network device and the second network device, where the at least two parallel links are different.
In one possible implementation, the receiving module is further configured to: receiving a second multicast joining message through a second link, wherein the second multicast joining message comprises a second identifier of the second multicast message, and the second link is one link in a second link group;
The establishing module is further configured to establish the multicast forwarding table entry, where the multicast forwarding table entry further includes a correspondence between the second identifier and the identifier of the second link group.
In another possible implementation manner, the receiving module is further configured to receive the first multicast packet;
the first network device further comprises a determining module, wherein the determining module is used for determining a first link group corresponding to the first multicast message according to the multicast forwarding table item;
the first network device further comprises a selecting module, where the selecting module is configured to select a second link to send the first multicast message, and the second link is one link of the at least two parallel links.
In another possible implementation manner, the determining module is specifically configured to obtain, by using an obtaining module, the first identifier from the first multicast message; and determining the first link group according to the multicast forwarding table item and the first identifier.
In another possible implementation manner, the determining module is specifically configured to determine the at least two parallel links according to the identifier of the first link group; and selecting the first link from the at least two parallel links to send the first multicast message.
In another possible implementation manner, the selecting module is further configured to select a third link other than the second link from the at least two parallel links when the state of the second link is unavailable, and send the first multicast packet through the third link.
In another possible implementation, the receiving module is further configured to receive, through each of the at least two parallel links, a message sent by the second network device, where the message sent on each link includes the ID of the second network device;
the determining module is further configured to determine that an ID of the second network device included in each of the at least two messages is the same;
the establishing module is further configured to establish the first link group including the at least two parallel links based on the ID of the second network device and a correspondence between the IDs of the first link group and the second network device.
In another possible implementation manner, the selecting module is specifically configured to select the second link from the at least two parallel links according to the characteristic information of the first multicast packet, and send the first multicast packet through the second link.
In a sixth aspect, a second network device is provided, including a receiving module, a determining module, and a sending module.
The receiving module is configured to receive a first multicast message sent by a first network device through a first link, where the first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; the determining module is configured to determine, based on the first link being one link in the second link group, that the first multicast message passes a reverse path forwarding (RPF) check; and the sending module is configured to forward the first multicast message.
In a possible implementation manner, the determining module is specifically configured to determine, according to a multicast forwarding table entry, the second link group corresponding to the first multicast packet; and determining that the first link is one link in a second link group, and determining that the first multicast message passes the RPF check.
In another possible implementation manner, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group,
the determining module is specifically configured to obtain a first identifier from the first multicast message; and determining the second link group according to the first identifier and the multicast forwarding table item.
In another possible implementation manner, the multicast forwarding table entry further includes a correspondence between the identifier of the second link group and identifiers of at least two parallel links in the second link group,
the determining module is specifically configured to determine that the first link is one link in the second link group according to the identifier of the second link group and the multicast forwarding table entry.
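The RPF behaviour described above can be illustrated with a minimal sketch. All data structures and names here are hypothetical illustrations, not the patented implementation: a packet passes the check if the link it arrived on is a member of the link group that the multicast forwarding table entry maps its identifier to.

```python
# Hypothetical sketch of the link-group-based RPF check: a multicast
# packet passes if its incoming link belongs to the link group that the
# multicast forwarding table entry expects for that (source, group).

def rpf_check(packet, incoming_link, forwarding_table, link_groups):
    """Return True if the packet passes the link-group-based RPF check."""
    # Look up the expected link group for this multicast flow (S, G).
    group_id = forwarding_table.get((packet["source"], packet["group"]))
    if group_id is None:
        return False  # no forwarding entry: fail the check
    # Pass if the incoming link is any member of the expected link group.
    return incoming_link in link_groups.get(group_id, set())

forwarding_table = {("10.0.0.1", "232.1.1.1"): "link-group-1"}
link_groups = {"link-group-1": {"gi2/0/1", "gi2/0/2", "gi2/0/3"}}

pkt = {"source": "10.0.0.1", "group": "232.1.1.1"}
assert rpf_check(pkt, "gi2/0/2", forwarding_table, link_groups)      # member link: pass
assert not rpf_check(pkt, "gi3/0/1", forwarding_table, link_groups)  # other link: fail
```

Note that because any member of the link group passes, traffic arriving on any of the parallel links is accepted without per-link forwarding entries.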
In another possible implementation manner, the receiving module is further configured to receive at least two messages sent by the first network device through each of the at least two parallel links, where each of the messages sent by the at least two parallel links includes an ID of the first network device; the determining module is further configured to determine that an ID of the first network device included in each of the at least two messages is the same;
The second network device further includes an establishing module, configured to establish the second link group including the at least two parallel links based on the ID of the first network device and a correspondence between the second link group and the ID of the first network device.
A seventh aspect provides a first network device having functionality to implement the first network device behaviour in the first aspect, the second aspect, or any possible implementation thereof. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the structure of the first network device includes a processor and an interface, where the processor is configured to support the first network device in performing the corresponding functions in the method described above. The interface is configured to support the first network device in receiving or sending the first multicast packet. The first network device may also include a memory, coupled to the processor, that holds the program instructions and data necessary for the first network device.
In another possible design, the first network device includes a processor, a transmitter, a receiver, a random access memory, a read-only memory, and a bus. The processor is coupled to the transmitter, the receiver, the random access memory, and the read-only memory through the bus. When the first network device needs to run, a basic input/output system solidified in the read-only memory, or a bootloader in an embedded system, is started to boot the first network device into a normal operation state. After the first network device enters the normal operation state, an application program and an operating system run in the random access memory, so that the processor performs the method of the first aspect, the second aspect, or any possible implementation thereof.
In an eighth aspect, a first network device is provided. The first network device includes a main control board and an interface board, and may further include a switch fabric board. The first network device is configured to perform the method of the first aspect, the second aspect, or any possible implementation thereof. In particular, the first network device includes means for performing the method of the first aspect, the second aspect, or any possible implementation thereof.
In a ninth aspect, a first network device is provided that includes a controller and a first forwarding sub-device. The first forwarding sub-device includes an interface board, and may further include a switch fabric board. The first forwarding sub-device is configured to perform the function of the interface board in the eighth aspect, and may further perform the function of the switch fabric board in the eighth aspect. The controller includes a receiver, a processor, a transmitter, a random access memory, a read-only memory, and a bus. The processor is coupled to the receiver, the transmitter, the random access memory, and the read-only memory through the bus. When the controller needs to run, a basic input/output system solidified in the read-only memory, or a bootloader in an embedded system, is started to boot the controller into a normal operation state. After the controller enters the normal operation state, an application program and an operating system run in the random access memory, so that the processor performs the function of the main control board in the eighth aspect.
It will be appreciated that in actual practice, the first network device may comprise any number of interfaces, processors or memories.
In a tenth aspect, there is provided a second network device having functionality to implement the second network device behaviour in the third aspect or any of the possible implementations of the third aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the structure of the second network device includes a processor and an interface, where the processor is configured to support the second network device in performing the corresponding functions in the method described above. The interface is configured to support the second network device in receiving the multicast packet sent by the first network device through the first link. The second network device may also include a memory, coupled to the processor, that holds the program instructions and data necessary for the second network device.
In another possible design, the second network device includes a processor, a transmitter, a receiver, a random access memory, a read-only memory, and a bus. The processor is coupled to the transmitter, the receiver, the random access memory, and the read-only memory through the bus. When the second network device needs to run, a basic input/output system solidified in the read-only memory, or a bootloader in an embedded system, is started to boot the second network device into a normal operation state. After the second network device enters the normal operation state, an application program and an operating system run in the random access memory, so that the processor performs the method of the third aspect or any possible implementation of the third aspect.
In an eleventh aspect, there is provided a second network device including a main control board and an interface board, and optionally a switch fabric board. The second network device is configured to perform the method of the third aspect or any possible implementation of the third aspect. In particular, the second network device includes means for performing the method of the third aspect or any possible implementation of the third aspect.
In a twelfth aspect, a second network device is provided that includes a controller and a first forwarding sub-device. The first forwarding sub-device includes an interface board, and may further include a switch fabric board. The first forwarding sub-device is configured to perform the function of the interface board in the eleventh aspect, and may further perform the function of the switch fabric board in the eleventh aspect. The controller includes a receiver, a processor, a transmitter, a random access memory, a read-only memory, and a bus. The processor is coupled to the receiver, the transmitter, the random access memory, and the read-only memory through the bus. When the controller needs to run, a basic input/output system solidified in the read-only memory, or a bootloader in an embedded system, is started to boot the controller into a normal operation state. After the controller enters the normal operation state, an application program and an operating system run in the random access memory, so that the processor performs the function of the main control board in the eleventh aspect.
It will be appreciated that in actual practice, the second network device may comprise any number of interfaces, processors or memories.
In a thirteenth aspect, there is provided a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method of the first aspect, the second aspect, or any possible implementation thereof.
In a fourteenth aspect, there is provided a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method of the third aspect or any possible implementation of the third aspect.
A fifteenth aspect provides a computer readable medium having stored thereon program code which, when run on a computer, causes the computer to perform the method of the first aspect, the second aspect, or any possible implementation thereof. These computer-readable media include, but are not limited to, one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically EPROM (EEPROM), and hard disk drive (hard drive).
In a sixteenth aspect, there is provided a computer readable medium storing program code which, when run on a computer, causes the computer to perform the method of the third aspect or any of the possible implementations of the third aspect. These computer-readable media include, but are not limited to, one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically EPROM (EEPROM), and hard disk drive (hard drive).
A seventeenth aspect provides a chip comprising a processor and a data interface, wherein the processor reads instructions stored on a memory via the data interface to perform the method of the first aspect, the second aspect, or any possible implementation thereof. In a specific implementation, the chip may be implemented in the form of a central processing unit (central processing unit, CPU), microcontroller (micro controller unit, MCU), microprocessor (micro processing unit, MPU), digital signal processor (digital signal processing, DSP), system on chip (SoC), application-specific integrated circuit (ASIC), field programmable gate array (field programmable gate array, FPGA) or programmable logic device (programmable logic device, PLD).
In an eighteenth aspect, a chip is provided, the chip comprising a processor and a data interface, wherein the processor reads instructions stored on a memory via the data interface to perform the method of the third aspect or any one of the possible implementations of the third aspect. In a specific implementation, the chip may be implemented in the form of a central processing unit (central processing unit, CPU), microcontroller (micro controller unit, MCU), microprocessor (micro processing unit, MPU), digital signal processor (digital signal processing, DSP), system on chip (SoC), application-specific integrated circuit (ASIC), field programmable gate array (field programmable gate array, FPGA) or programmable logic device (programmable logic device, PLD).
In a nineteenth aspect, a system is provided that includes the first network device and the second network device described above.
Drawings
Fig. 1 is a schematic diagram of one possible application scenario suitable for the present application.
Fig. 2 is a schematic flowchart of a method for load sharing of multicast messages according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of another method for load sharing of multicast messages according to an embodiment of the present application.
Fig. 4 is a schematic block diagram of a first network device 400 according to an embodiment of the present application.
Fig. 5 is a schematic block diagram of another first network device 500 provided in an embodiment of the present application.
Fig. 6 is a schematic hardware structure of the first network device 2000 according to the embodiment of the present application.
Fig. 7 is a schematic hardware structure of another first network device 2100 according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of another second network device 800 according to an embodiment of the present application.
Fig. 9 is a schematic hardware structure of a second network device 2200 according to an embodiment of the present application.
Fig. 10 is a schematic hardware structure of another second network device 2400 according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
Multicast (multicast) is a data transmission method that uses one multicast address to send data efficiently and simultaneously to multiple recipients on a transmission control protocol (transmission control protocol, TCP)/internet protocol (internet protocol, IP) network. A multicast source sends a multicast stream via links in the network to the members of a multicast group, each of which can receive the stream. The multicast transmission mode realizes a point-to-multipoint data connection between the multicast source and the multicast group members. A multicast stream needs to be delivered only once on each network link, and is replicated only where the link branches. The multicast transmission mode therefore improves data transmission efficiency and reduces the possibility of congestion on the backbone network.
Internet protocol (internet protocol, IP) multicast technology realizes efficient point-to-multipoint data transmission in an IP network, and can effectively save network bandwidth and reduce network load. It is therefore widely used in real-time data transmission, multimedia conferencing, data replication, internet protocol television (internet protocol television, IPTV), gaming, simulation, and the like.
An application scenario suitable for the embodiments of the present application will be described in detail with reference to fig. 1.
Fig. 1 is a schematic diagram of one possible multicast scenario suitable for use in embodiments of the present application. Referring to fig. 1, a multicast Receiver (RCV), a router R1, a router R2, a router R3, and a multicast Source (SRC) may be included in the scenario.
It should be understood that there may be a plurality of routers (R) connected to the multicast receiver in the embodiment of the present application, and for convenience of description, three routers, such as router R1/router R2/router R3, are illustrated in fig. 1 as an example.
The multicast receiver may send a multicast join message to the router R1 connected thereto, the router R1 then sends the multicast join message to the router R2, and the router R2 then sends the multicast join message to the router R3. After receiving multicast data traffic from a multicast source, router R3 sends the multicast data traffic to multicast receivers along router R2 and router R1.
It should be understood that the embodiment of the present application does not specifically limit the multicast join message sent by the multicast receiver. It may be an internet group management protocol (internet group management protocol, IGMP) message, a multicast listener discovery (multicast listener discovery, MLD) message, or a protocol independent multicast (protocol independent multicast, PIM) message.
As shown in fig. 1, the IGMP protocol or the MLD protocol may run between router R1 and the multicast receiver. The network among router R1, router R2, and router R3 may be a PIM network, i.e., the PIM protocol may run among router R1, router R2, and router R3. The PIM protocol may also run on the interface of router R3 toward the multicast source.
It should be noted that the PIM protocol does not need to be run on the interface of the multicast source to the router R3.
In this embodiment of the present application, the PIM protocol may run among router R1, router R2, and router R3, and an interface on router R1, router R2, or router R3 that enables the PIM protocol may send PIM hello messages outward. For example, when a link interface of router R1 enables the PIM protocol, that interface of router R1 may send PIM hello messages to router R2/router R3. As another example, when a link interface of router R2 enables the PIM protocol, that interface of router R2 may send PIM hello messages to router R1/router R3.
It should be noted that, in the embodiment of the present application, when the link interface of a device is in the available state (up), it is understood that the link of the device is normal and message forwarding can be performed. If the link interface of a device is in the unavailable state (down), it is understood that the link of the device has failed and message forwarding cannot be performed.
Referring to fig. 1, there are cases where there are multiple parallel links between nodes in the network, for example, there are at least two parallel links between router R1 and router R2. For convenience of description, three parallel links (link 1, link 2, link 3) are illustrated in fig. 1 between router R1 and router R2.
It should be understood that in the embodiments of the present application, the parallel links may be two or more links between connected nodes, where the two or more links are different and are not bundled into a link aggregation group (link aggregation group, LAG).
In a possible implementation, the corresponding physical interfaces between at least two parallel links in the first link group are different, and the internet protocol (internet protocol, IP) addresses on the at least two parallel link interfaces are different. In another possible implementation manner, at least two parallel links in the first link group are different in corresponding link types. In another possible implementation manner, the physical interfaces corresponding to at least two parallel links in the first link group are the same, but virtual local area networks (virtual local area network, VLANs) configured on the physical interfaces are different, that is, at least two parallel links between nodes may be regarded as at least two different logical links.
It should also be understood that "different" in the foregoing may mean that the items all differ from one another, or merely that at least one difference exists. For example, the IP addresses on the at least two parallel link interfaces may all differ from each other, or it suffices that different IP addresses exist among the at least two parallel link interfaces.
In the related technical scheme, different multicast source groups (S, G) are sent over different links, thereby realizing load sharing of multicast traffic over multiple parallel links. Specifically, taking the scenario shown in fig. 1 as an example, three parallel links (link 1, link 2, and link 3) are provided between router R1 and router R2. If router R1 receives 30 multicast group join messages (S1, G1 to G30), then when router R1 sends PIM multicast join messages to router R2, according to the SG multicast forwarding table entries, the join messages corresponding to G1 to G10 are sent to router R2 through link 1, those corresponding to G11 to G20 through link 2, and those corresponding to G21 to G30 through link 3. Correspondingly, after receiving the multicast traffic, router R2, according to the SG multicast forwarding table entries, sends the multicast traffic of G1 to G10 to router R1 along link 1, the traffic of G11 to G20 along link 2, and the traffic of G21 to G30 along link 3, thereby realizing convergence of the multicast service.
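The static per-group pinning of the related scheme can be sketched as follows. The function and data layout are hypothetical illustrations of the idea, not an implementation from any product: each multicast group is statically assigned to one of the three parallel links in an SG forwarding table.

```python
# Illustrative sketch (names hypothetical) of the related scheme: each
# multicast group G1..G30 is statically pinned to one of three parallel
# links in the SG multicast forwarding table.

links = ["link1", "link2", "link3"]

def build_sg_table(groups, links):
    """Pin each group to a link: G1-G10 -> link1, G11-G20 -> link2, ..."""
    per_link = len(groups) // len(links)
    return {g: links[min(i // per_link, len(links) - 1)]
            for i, g in enumerate(groups)}

groups = [f"G{i}" for i in range(1, 31)]
sg_table = build_sg_table(groups, links)
assert sg_table["G1"] == "link1"
assert sg_table["G11"] == "link2"
assert sg_table["G30"] == "link3"
```

Because the table records one link per group, a link failure forces every entry pinned to that link to be rewritten, which is exactly the convergence cost the following paragraph describes.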
It should be appreciated that convergence of multicast traffic may be receiving multicast traffic corresponding to a group of multicast sources from a link.
In the above related technical solution, if some of the multiple parallel links fail, router R1 needs to update the SG multicast forwarding table entries and send multicast join messages to a new link. For example, when link 1 fails, router R1 needs to resend the multicast join messages corresponding to G1 to G10 to router R2 along link 2. Router R1 therefore updates the SG multicast forwarding table entries and sends the join messages for G1 to G10 over link 2, so that the join messages for both G1 to G10 and G11 to G20 travel to router R2 along link 2, and the multicast traffic of G1 to G10 can then be sent from router R2 to router R1 along link 2. In this related scheme, for a partial link failure or recovery among a plurality of parallel links, the SG multicast forwarding table entries must be updated and multicast join messages must be sent to the new link; when the number of multicast source groups is large, the convergence time of the multicast service is therefore long.
The method for multicast message load sharing provided in this application can shorten the convergence time of the multicast service when some of the plurality of parallel links fail or recover.
Fig. 2 is a schematic flowchart of a method for load sharing of multicast messages according to an embodiment of the present application. As shown in FIG. 2, the method may include steps 210-230, and steps 210-230 are described in detail below, respectively.
Step 210: the first network device receives a first multicast message.
For example, the first network device in the embodiment of the present application may correspond to the router R2 in fig. 1. The router R2 may receive a first multicast message sent by the multicast source.
Step 220: and the first network equipment determines a first link group corresponding to the first multicast message according to the multicast forwarding table entry.
In this embodiment, the first link group includes at least two parallel links between the first network device and a neighbor of the first network device, where the at least two parallel links are different. For a specific description of parallel links, please refer to the above description, and the description is omitted here.
It should be appreciated that, taking the scenario shown in fig. 1 as an example, the neighbors of the first network device may correspond to router R1 in fig. 1. At least two parallel links between router R1 and router R2 may be included in the first link group, and three parallel links between router R1 and router R2 are illustrated in fig. 1 as an example.
Optionally, in some embodiments, the multicast forwarding table entry may include a correspondence between the first identifier of the first multicast packet and the identifier of the first link group. The first network device may obtain the first identifier from the first multicast message, and determine the first link group according to the first identifier and the multicast forwarding table entry.
The identification of the first link group may take various forms, and embodiments of the present application are not limited in this regard. In one possible implementation, the identifier of the first link group may be a string name, where the string name indicates that the outgoing interface or the incoming interface of the message is a link group. In another possible implementation, the identifier of the first link group may be a link group type plus an integer value, where the integer value is the identifier and the link group type indicates that the identifier identifies a link group; that is, the link group type plus the integer value may indicate that the outgoing interface or the incoming interface of the packet is a link group.
Optionally, the first network device may establish a first link group prior to step 220. Specifically, the first network device may receive at least two messages sent by the neighbors of the first network device through the at least two parallel links, where the at least two messages include IDs of the neighbors; the first network device determines that the IDs of the neighbors included in the at least two messages are the same, and establishes a first link group including at least two parallel links. The message may be, for example, a PIM hello message.
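The link group establishment described above can be sketched briefly. The representation of hello messages as (interface, router-id) pairs is an assumption made for illustration: interfaces whose PIM hello messages carry the same neighbor ID are collected into one link group.

```python
# Minimal sketch (data structures hypothetical) of link group
# establishment: hello messages received on different parallel links that
# carry the same neighbor ID are grouped into one link group per neighbor.

from collections import defaultdict

def build_link_groups(hello_messages):
    """hello_messages: iterable of (receiving_interface, neighbor_router_id)."""
    groups = defaultdict(set)
    for interface, router_id in hello_messages:
        # Same router-id seen on several interfaces -> parallel links, one group.
        groups[router_id].add(interface)
    return dict(groups)

hellos = [("gi2/0/1", "2.2.2.2"), ("gi2/0/2", "2.2.2.2"), ("gi2/0/3", "2.2.2.2")]
link_groups = build_link_groups(hellos)
assert link_groups["2.2.2.2"] == {"gi2/0/1", "gi2/0/2", "gi2/0/3"}
```

A hello from a different neighbor ID would simply start a second group, matching the one-group-per-router-id behaviour described later in the detailed description.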
Optionally, before step 220, the first network device may further establish the multicast forwarding table entry. Specifically, the first network device may receive a message sent by a neighbor of the first network device, where the message includes a first identifier of the first multicast message. The message may be, for example, a PIM multicast join message. The first network device may determine that a link that receives a message sent by a neighbor of the first network device is one link in the first link group, and establish the multicast forwarding table.
Step 230: the first network device selects a first link from at least two parallel links in the first link group to send a first multicast message.
Wherein the first link is one of at least two parallel links in the first link group.
As an example, in the embodiment of the present application, the first network device may determine at least two parallel links according to the identifier of the first link group, and select a first link from the at least two parallel links to send the first multicast packet.
In the above technical solution, in a scenario where multicast packets sent between network devices are load-shared over a plurality of different parallel links, if at least one link in the load-sharing link group fails or recovers, only the link group needs to be refreshed; the traditional multicast forwarding table entries that record the correspondence between links and multicast traffic do not need to be refreshed, thereby shortening the convergence time of the multicast service.
Optionally, in some embodiments, when the state of the first link is unavailable, the first network device selects a second link other than the first link from the at least two parallel links, and sends the first multicast message through the second link.
Optionally, in some embodiments, the first network device selects a first link from the at least two parallel links according to the characteristic information of the first multicast packet, and sends the first multicast packet through the first link.
It should be understood that the characteristic information of the first multicast message is not specifically limited in this application, and may include one or more of the following: the source address information of the first multicast message, the destination address information of the first multicast message, the source address and the destination address information of the first multicast message, the hash result information corresponding to the source address of the first multicast message, the hash result information corresponding to the destination address information of the first multicast message, and the hash result information corresponding to the source address and the destination address information of the first multicast message.
Optionally, in some embodiments, the first network device may modulo the characteristic information of the first multicast packet according to the number of links in the first link group whose status is available, to obtain a first link in the first link group, and send the first multicast packet through the first link.
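The modulo-based selection can be sketched as follows. The hash function (CRC-32 over the source and group addresses) is an assumption for illustration; the text only requires taking the characteristic information modulo the number of available links in the link group.

```python
# Hedged sketch of the selection step: hash the packet's source and group
# addresses, then take the result modulo the number of currently
# available links in the link group to pick the outgoing link.

import zlib

def select_link(src, grp, link_group_links, link_state):
    """Pick one available link for flow (src, grp); None if all links are down."""
    available = [l for l in link_group_links if link_state.get(l) == "up"]
    if not available:
        return None
    key = f"{src},{grp}".encode()
    return available[zlib.crc32(key) % len(available)]

links = ["link1", "link2", "link3"]
state = {"link1": "up", "link2": "up", "link3": "up"}
chosen = select_link("10.0.0.1", "232.1.1.1", links, state)
assert chosen in links

# If the chosen link fails, only the availability set changes; the
# multicast forwarding entry (which points at the link group) is untouched.
state[chosen] = "down"
rechosen = select_link("10.0.0.1", "232.1.1.1", links, state)
assert rechosen in links and rechosen != chosen
```

The second half of the sketch mirrors the claimed benefit: on a link failure, the flow falls over to another member of the same link group without any change to the multicast forwarding table entry.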
Taking the scenario shown in fig. 1 as an example, a detailed description is given below of a possible implementation procedure of the method for load sharing of a multicast message provided in the embodiment of the present application in conjunction with the specific example in fig. 3. It should be understood that the example of fig. 3 is merely to aid one skilled in the art in understanding the present embodiments, and is not intended to limit the present embodiments to the specific numerical values or the specific scenarios of fig. 3. Various equivalent modifications and variations will be apparent to those skilled in the art from the examples given, and such modifications and variations are intended to be within the scope of the embodiments of the present application.
Fig. 3 is a schematic flowchart of another method for load sharing of multicast messages according to an embodiment of the present application. As shown in FIG. 3, the method may include steps 310-350, with steps 310-350 being described in detail below, respectively.
Step 310: router R1 and router R2 establish link group table entries.
In the embodiment of the application, the 3 parallel links between router R1 and router R2 may be configured as, or automatically determined and identified as, a parallel link group.
For the configuration on router R1: "Pim-Hello router-id 1.1.1.1" indicates that router R1 carries an identification (router-id) of 1.1.1.1 when router R1 sends PIM hello messages outward. "Pim-Hello Router-id enable" means that the enabled PIM interface of router R1 may carry the router-id value of router R1 when sending a PIM hello message. "Pim load-balance link-group auto-generation" means that a link group is automatically generated for parallel links; in the case of router R1, a parallel link group and its ID1 are automatically generated for the 3 parallel links between router R1 and router R2. "Interface gi1/0/1" is the interface on router R1 to the multicast receiver, with IP address 192.168.1.1. "Interface gi2/0/1" is the ingress interface on router R1 for link 1 connecting to R2, with IP address 192.168.11.101. "Interface gi2/0/2" is the ingress interface on router R1 for link 2 connecting to R2, with IP address 192.168.12.102. "Interface gi2/0/3" is the ingress interface on router R1 for link 3 connecting to R2, with IP address 192.168.13.103.
It should be appreciated that the router-ID may identify one router node, i.e. the identification ID of the network device.
It should be understood that, for the router R1, the interface on the router R1 that sends the multicast join message to the router R2 is a member of the parallel link group, and then the multicast ingress interface established on the router R1 is an ingress interface of the parallel link group.
It should also be understood that in this application embodiment, "member" may refer to an interface of a network device in a load sharing group that connects with a load sharing link. The interface includes an ethernet port, for example, in fig. 1, 3 ports of each of the router R1 and the router R2 are members of the load sharing group. Further, the router R1 may comprise another interface, in which case the router R1 comprises 4 load sharing group members.
For the configuration on router R2: "Pim-Hello router-id 2.2.2.2" means that router R2 carries a router-id of 2.2.2.2 when router R2 sends PIM hello messages outward. "Pim-Hello Router-id enable" means that the enabled PIM interface of router R2 may carry the router-id value of router R2 when sending a PIM hello message. "Pim load-balance link-group auto-generation" means that a link group is automatically generated for parallel links; in the case of router R2, a parallel link group and its ID2 are automatically generated for the 3 parallel links between router R2 and router R1. "Interface gi2/0/1" is the outbound interface on router R2 for link 1 connecting to R1, with IP address 192.168.11.201. "Interface gi2/0/2" is the outbound interface on router R2 for link 2 connecting to R1, with IP address 192.168.12.202. "Interface gi2/0/3" is the outbound interface on router R2 for link 3 connecting to R1, with IP address 192.168.13.203. "Interface gi3/0/1" is the interface on router R2 connected to router R3, with IP address 192.168.23.201.
For the configuration on router R3: "Interface gi3/0/1" means the interface on router R3 connected to router R2, and its IP address is 192.168.23.231.
As an example, a detailed description will be given of the specific implementation procedure by which the router R1 establishes the link group table entry.
The router R2 may send a PIM Hello message to the router R1 according to the configuration described above, with the PIM Hello message carrying the router-id value 2.2.2.2 of the router R2. The router R1 establishes the following correspondence according to the PIM Hello message sent by the router R2.
(Interface 2/0/1, neighbor (Nbr) =192.168.11.201, router-id=2.2.2.2)
(Interface 2/0/2,Nbr=192.168.12.202,router-id=2.2.2.2)
(Interface 2/0/3,Nbr=192.168.13.203,router-id=2.2.2.2)
For messages sent from the same node over different interfaces (different nodes may be distinguished by their router-ids), the router R1 may generate one link group per router-id. The link entries in the link group established on the router R1 are shown below.
Link table entry 1 in link group: (Link group ID1, interface gi 2/0/1)
(Link group ID1, interface gi 2/0/2)
(Link group ID1, interface gi 2/0/3)
The link group ID1 may have a one-to-one correspondence with the router-ID, for example, ID1 may be directly obtained from the router-ID value, or may be a value different from the router-ID value and set up the following correspondence with it. The correspondence may also be referred to as a link group entry.
Link group table entry 1: (Link group ID1, router-id=2.2.2.2)
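The link-group construction described above may be sketched as follows; this is an illustrative Python sketch, not the claimed implementation, and the record fields and the group-ID numbering scheme are assumptions (the text notes the group ID may equal the router-id or be any value mapped to it).

```python
from collections import defaultdict

# (interface, neighbor address, router-id) tuples learned from PIM Hello messages
neighbors = [
    ("gi2/0/1", "192.168.11.201", "2.2.2.2"),
    ("gi2/0/2", "192.168.12.202", "2.2.2.2"),
    ("gi2/0/3", "192.168.13.203", "2.2.2.2"),
]

def build_link_groups(records):
    """Group interfaces by the neighbor's router-id: one link group per neighbor node."""
    groups = defaultdict(list)
    for interface, _nbr, router_id in records:
        groups[router_id].append(interface)
    link_entries = {}   # link group ID -> member interfaces (link table entries)
    group_entries = {}  # link group ID -> router-id (link group table entry)
    for n, (router_id, interfaces) in enumerate(sorted(groups.items()), start=1):
        gid = f"ID{n}"  # illustrative: a simple counter stands in for the group ID
        link_entries[gid] = interfaces
        group_entries[gid] = router_id
    return link_entries, group_entries

links, mapping = build_link_groups(neighbors)
print(links)    # {'ID1': ['gi2/0/1', 'gi2/0/2', 'gi2/0/3']}
print(mapping)  # {'ID1': '2.2.2.2'}
```

Because all three neighbor records carry the same router-id 2.2.2.2, a single link group ID1 with three member interfaces results, matching link table entry 1 and link group table entry 1 above.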
As another example, a detailed description will be given of the specific implementation procedure by which the router R2 establishes the link group table entry.
The router R1 may send a PIM Hello message to the router R2 according to the configuration described above, with the PIM Hello message carrying the router-id value 1.1.1.1 of the router R1. The router R2 establishes the following correspondence according to the PIM Hello message sent by the router R1.
(Interface 2/0/1,Nbr=192.168.11.101,router-id=1.1.1.1)
(Interface 2/0/2,Nbr=192.168.12.102,router-id=1.1.1.1)
(Interface 2/0/3,Nbr=192.168.13.103,router-id=1.1.1.1)
For messages sent from the same node over different interfaces (different nodes may be distinguished by their router-ids), the router R2 may generate one link group per router-id. The link entries in the link group established on the router R2 are shown below.
Link table entry 2 in link group: (Link group ID2, interface gi 2/0/1)
(Link group ID2, interface gi 2/0/2)
(Link group ID2, interface gi 2/0/3)
The link group ID2 may have a one-to-one correspondence with the router-ID, for example, ID2 may be directly obtained from the router-ID value, or may be a value different from the router-ID value and set up the following correspondence with it. The correspondence may also be referred to as a link group entry.
Link group table entry 2: (Link group ID2, router-id=1.1.1.1)
For another example, for router R3: router R3 may receive a PIM Hello message sent by router R2 via interface gi3/0/1, where the PIM Hello message carries the router-id value 2.2.2.2 of router R2. Since the configuration of router R3 neither configures a router-id nor enables the PIM interface of router R3 to carry a router-id when sending PIM Hello messages, router R3 may choose to ignore the router-id value 2.2.2.2 of router R2 and process the message as a normal PIM Hello message.
The router-id value carried by the router R1 when sending the PIM Hello message, and the router-id value carried by the router R2 when sending the PIM Hello message, may each be a 32-bit value. For example, under an Internet protocol version 4 (IPv4) network, the router-id value may be a 32-bit value that is the same as the IP address of the loopback port. As another example, under an Internet protocol version 6 (IPv6) network, the router-id value may be a 32-bit value that has no relation to the IPv6 address of the loopback port.
Step 315: the router R1 receives the multicast joining message sent by the multicast receiver and establishes the SG multicast forwarding table item 1.
The multicast receiver sends a multicast source group (S, G) join message to the router R1, and after receiving the multicast source group (S, G) join message, the router R1 can query the next hop and the outgoing interface of the unicast route according to S. When the next hop outgoing interface is any one of the interfaces gi2/0/1, gi2/0/2, or gi2/0/3, the router R1 may send a multicast source group (S, G) join message to the router R2 through that next hop interface.
Router R1 also creates multicast forwarding table entry 1 as follows:
((S, G), input interface flag (IIFFlag) <link group>, input interface (IIF) = link group ID1, output interface flag (OIFFlag) <null>, output interface list (OIFList) = gi1/0/1)
It should be understood that, in the embodiment of the present application, the multicast forwarding table entry 1 is used for forwarding, by the router R1, the multicast traffic received from the router R2. In the multicast forwarding table entry 1, IIFFlag indicates that the ingress interface of the multicast traffic established on the router R1 is a parallel link group, and IIF indicates that the identifier of the parallel link group is ID1. OIFFlag <null> indicates that the outgoing interface for forwarding the received multicast traffic on the router R1 is not a link group, and OIFList indicates that this outgoing interface is interface gi1/0/1.
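The creation of such an entry in step 315 may be sketched as follows; this is an illustrative assumption-laden sketch (function and field names are made up for illustration, not taken from the claimed implementation): when the unicast next-hop interface toward the source S is a member of a link group, the (S, G) entry records the link-group ID as its ingress rather than the single physical interface.

```python
# Link table entries: link group ID -> member interfaces (from the text above)
link_table = {"ID1": {"gi2/0/1", "gi2/0/2", "gi2/0/3"}}

def build_sg_entry(next_hop_iface, downstream_iface, link_table):
    """Build an SG forwarding entry; use a link-group ingress when applicable."""
    for gid, members in link_table.items():
        if next_hop_iface in members:
            # Next hop goes over a parallel-link member: record the group as IIF.
            return {"IIFFlag": "link group", "IIF": gid,
                    "OIFFlag": None, "OIFList": downstream_iface}
    # Ordinary case: a single physical ingress interface.
    return {"IIFFlag": None, "IIF": next_hop_iface,
            "OIFFlag": None, "OIFList": downstream_iface}

entry = build_sg_entry("gi2/0/2", "gi1/0/1", link_table)
print(entry)  # {'IIFFlag': 'link group', 'IIF': 'ID1', 'OIFFlag': None, 'OIFList': 'gi1/0/1'}
```

Any of the three next-hop interfaces gi2/0/1, gi2/0/2, or gi2/0/3 yields the same entry with IIF = ID1, which is what later allows the RPF check to accept traffic from any member link.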
In the embodiment of the present application, the forwarding table entry established on the router R1 is as follows.
SG multicast forwarding table 1:
((S, G), IIFFlag <link group>, IIF = link group ID1, OIFFlag <null>, OIFList = gi1/0/1)
Link table entry 1 in link group:
(Link group ID1, interface gi 2/0/1)
(Link group ID1, interface gi 2/0/2)
(Link group ID1, interface gi 2/0/3)
Link group table entry 1:
(Link group ID1, router-id=2.2.2.2)
As an example, the identification of the first link group in the foregoing may be IIFFlag and IIF in SG multicast forwarding table entry 1, where IIFFlag is used to indicate that the ingress interface of the multicast traffic established on router R1 is a parallel link group. IIF is used to indicate that the identifier of the parallel link group is ID1, and ID1 may identify that the first link group includes the three parallel links.
Optionally, in some embodiments, there are multiple multicast entries, e.g., 30 multicast entries, and then the forwarding entries established on router R1 are as follows.
Step 320: the router R2 receives the multicast joining message sent by the router R1 and establishes the SG multicast forwarding table 2.
The router R1 sends a multicast source group (S, G) join message to the router R2, and after the router R2 receives the multicast source group (S, G) join message, if the router R2 determines that the received multicast source group (S, G) join message is from any one of the interfaces gi2/0/1, gi2/0/2, or gi2/0/3, the router R2 may establish the following SG multicast forwarding table entry 2:
((S, G), IIFFlag <null>, IIF = interface gi3/0/1, OIFFlag <link group>, OIFList = link group ID2)
It should be understood that, in the embodiment of the present application, SG multicast forwarding table entry 2 is used for the router R2 to forward multicast traffic received from the router R3. In SG multicast forwarding table entry 2, IIFFlag indicates that the ingress interface of the multicast traffic established on the router R2 is not a link group, and IIF indicates that the ingress interface of the multicast traffic received on the router R2 is interface gi3/0/1. OIFFlag indicates that the outgoing interface of multicast traffic established on the router R2 is a parallel link group, and OIFList indicates that the identifier of the parallel link group is ID2.
In this embodiment of the present application, the forwarding table entry established on the router R2 is as follows.
Multicast forwarding table entry 2:
((S, G), IIFFlag <null>, IIF = interface gi3/0/1, OIFFlag <link group>, OIFList = link group ID2)
Link table entry 2 in link group:
(Link group ID2, interface gi 2/0/1)
(Link group ID2, interface gi 2/0/2)
(Link group ID2, interface gi 2/0/3)
Link group table entry 2:
(Link group ID2, router-id=1.1.1.1)
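The three kinds of entries on the router R2 chain together at forwarding time: the SG entry's OIFList names a link-group ID, and the link table entry resolves that ID to member interfaces. A minimal illustrative sketch (field names mirror the text; values and function names are assumptions):

```python
# The three entry types established on router R2, as listed above
sg_entry = {"SG": ("S", "G"), "IIFFlag": None, "IIF": "gi3/0/1",
            "OIFFlag": "link group", "OIFList": "ID2"}
link_table = {"ID2": ["gi2/0/1", "gi2/0/2", "gi2/0/3"]}   # link table entry 2
link_group_table = {"ID2": "1.1.1.1"}                     # link group table entry 2

def egress_candidates(entry, link_table):
    """Resolve the outgoing interface candidates for one SG entry."""
    if entry["OIFFlag"] == "link group":
        # OIFList holds a link-group ID: all member links are candidates.
        return link_table[entry["OIFList"]]
    # Otherwise OIFList is a single physical interface.
    return [entry["OIFList"]]

print(egress_candidates(sg_entry, link_table))  # ['gi2/0/1', 'gi2/0/2', 'gi2/0/3']
```

A separate per-packet selection step (e.g., the hash-modulo scheme of step 340) then picks one interface from these candidates.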
Optionally, in some embodiments, there are multiple multicast entries, e.g., 30 multicast entries, and then the forwarding entries established on router R2 are as follows.
Step 325: the router R3 receives the multicast join message sent by the router R2 and sends the multicast join message to the multicast source.
Step 330: after receiving the multicast join message, the multicast source sends the multicast traffic to the router R3.
Step 335: the router R3 sends the multicast traffic to the router R2 through interface gi3/0/1, which connects to the router R2.
Step 340: after receiving the multicast traffic sent by the router R3, the router R2 sends the multicast traffic to the router R1 according to the established forwarding table entry.
After receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the established forwarding table entry.
Specifically, as an example, after receiving the multicast traffic from the interface gi3/0/1, the router R2 can determine that the outgoing interface of the multicast traffic is a parallel link group according to the OIFFlag in the SG multicast forwarding table entry 2. The router R2 then looks up the link table entry 2 in the link group according to the OIFList, and selects one interface from the plurality of outgoing interfaces in the link table entry 2 to send the multicast traffic to the router R1.
It should be understood that there are various implementations of selecting one interface from the plurality of outgoing interfaces in the link table entry 2 in the link group to send the multicast traffic to the router R1 in the embodiment of the present application. In a possible implementation manner, the router R2 may perform a hash computation on the multicast source S and the multicast group G to obtain a hash result, and then take that result modulo (i.e., take the remainder of) the number of available link interfaces in the link group ID2. There are 3 available link interfaces in link table entry 2 in the link group, namely interfaces gi2/0/1, gi2/0/2, and gi2/0/3. When the result of the modulo operation is 0, 1, or 2, the router R2 may forward the multicast traffic to the router R1 through the interface gi2/0/1, gi2/0/2, or gi2/0/3, respectively.
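The hash-modulo selection described above may be sketched as follows; this is one possible illustration, and the particular hash function (CRC32) and addresses are assumptions, since the text leaves the hash unspecified.

```python
import zlib

def select_link(source, group, available_links):
    """Pick one outgoing interface for the (S, G) flow: hash(S, G) modulo the
    number of available links. Stable while the set of available links is fixed."""
    if not available_links:
        raise RuntimeError("no available link in the link group")
    h = zlib.crc32(f"{source},{group}".encode())  # any deterministic hash works
    return available_links[h % len(available_links)]

links = ["gi2/0/1", "gi2/0/2", "gi2/0/3"]
out = select_link("10.0.0.1", "232.1.1.1", links)
print(out)  # one of the three parallel-link interfaces
```

Because the hash depends only on (S, G), all packets of one multicast flow take the same member link, while different flows spread across the group, which is the load-sharing behavior described.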
In this embodiment, the specific implementation manner of determining the link interface state in the link group ID2 by the router R2 is various. For convenience of description, the following will take an example of determining a state corresponding to interface gi2/0/1 of link 1 in link group ID 2.
Specifically, as an example, the interface on the router R2 that connects to link 1 toward the router R1 enables the PIM protocol and sends messages, such as PIM Hello messages, to the interface on the router R1 that connects to link 1 toward the router R2. Likewise, the interface on the router R1 that connects to link 1 toward the router R2 enables the PIM protocol and sends messages, e.g., PIM Hello messages, to the corresponding interface on the router R2. If the router R1 can receive the message sent by the router R2, the router R1 may consider the interface state of link 1 connecting to the router R2 to be available. If the router R1 does not receive the message sent by the router R2, the router R1 may consider the interface state of link 1 connecting to the router R2 to be unavailable.
As another example, bidirectional forwarding detection (BFD) may further be deployed between the router R1 and the router R2. The router R1 may determine the state of its interface on link 1 connecting to the router R2 according to the BFD detection result, and similarly, the router R2 may determine the state of its interface on link 1 connecting to the router R1 according to the BFD detection result. For example, BFD detection messages are sent between the router R1 and the router R2 at regular intervals. If the interface of link 1 on the router R1 receives, within the time interval, the BFD detection packet sent by the interface of link 1 on the router R2, the router R1 may consider the state of the link 1 interface to be available. If the interface of link 1 on the router R1 does not receive the BFD detection packet within the time interval, the router R1 may consider the state of the link 1 interface to be unavailable.
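Both keepalive schemes above (PIM Hello or BFD) reduce to the same timer logic, which may be sketched as follows; the function name, the tolerance factor, and the use of a monotonic clock are illustrative assumptions, not details from the text.

```python
import time

def link_state(last_rx_time, interval, now=None, tolerance=1.0):
    """'available' if a detection message was seen within interval * tolerance
    seconds; 'unavailable' otherwise (no hello/BFD packet arrived in time)."""
    now = time.monotonic() if now is None else now
    return "available" if (now - last_rx_time) <= interval * tolerance else "unavailable"

# A detection packet was last seen at t=100 s, with a 3 s detection interval:
print(link_state(last_rx_time=100.0, interval=3.0, now=102.0))  # available
print(link_state(last_rx_time=100.0, interval=3.0, now=110.0))  # unavailable
```

In practice BFD uses a multiple of the negotiated interval (a detect multiplier) before declaring the session down; the `tolerance` parameter stands in for that here.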
Step 345: and after receiving the multicast traffic sent by the router R2, the router R1 forwards the multicast traffic according to the established forwarding table item.
Take as an example that the router R1 receives the multicast traffic sent by the router R2 through the interface gi2/0/1. The router R1 may determine whether the multicast traffic can pass the reverse path forwarding (RPF) check according to whether the interface that actually receives the multicast traffic is consistent with the interface represented by the IIF field in the SG multicast forwarding table entry.
The multicast routing protocol determines upstream and downstream neighbor devices through the existing unicast routing information, and creates a multicast routing table entry. By using the RPF check mechanism, the multicast data stream can be ensured to be correctly transmitted along the multicast distribution tree (path), and the generation of a loop on the forwarding path can be avoided.
In this embodiment, on the one hand, the router R1 receives the multicast traffic sent by the router R2 through the interface gi2/0/1, determines that the ingress interface of the multicast traffic is the parallel link group ID1 according to the IIFFlag in the SG multicast forwarding table entry 1, and determines that the interface gi2/0/1 belongs to one link interface in the link group ID1 according to the link table entry 1 in the link group. On the other hand, the router R1 determines, according to the IIF in the SG multicast forwarding table entry 1, that the expected ingress for the multicast traffic sent by the router R2 is the link group ID1. Thus, the multicast traffic that the router R1 receives from the router R2 through the interface gi2/0/1 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver; for example, the router R1 can forward the multicast traffic to the multicast receiver through the interface gi1/0/1 connected to the multicast receiver.
For the router R1, whether the multicast traffic sent by the router R2 is received from link 1, link 2, or link 3, the multicast traffic can pass the RPF check and be sent out through the downstream egress interface of the router R1.
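The relaxed RPF check described above may be sketched as follows; this is an illustrative Python sketch with assumed field names mirroring the entries in the text: when the expected ingress of an (S, G) entry is a link group, traffic arriving on any member interface of that group passes the check.

```python
def rpf_check(entry, link_group_members, arrival_interface):
    """RPF check against an SG forwarding entry whose IIF may be a link group."""
    if entry["IIFFlag"] == "link group":
        # IIF holds a link-group ID; accept any member interface of that group.
        return arrival_interface in link_group_members[entry["IIF"]]
    # Otherwise IIF is a single physical interface and must match exactly.
    return arrival_interface == entry["IIF"]

# SG multicast forwarding table entry 1 and link table entry 1 from the text
entry = {"IIFFlag": "link group", "IIF": "ID1", "OIFList": "gi1/0/1"}
members = {"ID1": {"gi2/0/1", "gi2/0/2", "gi2/0/3"}}
print(rpf_check(entry, members, "gi2/0/1"))  # True: a member of link group ID1
print(rpf_check(entry, members, "gi3/0/1"))  # False: not in the link group
```

This is what lets load-shared traffic arrive on any of the three parallel links without being dropped by a strict single-interface RPF comparison.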
Step 350: when there is a link failure or recovery in a parallel link group including at least two links between the router R1 and the router R2, the router R1 and the router R2 implement convergence of the multicast service by refreshing link entries in the link group.
As an example, link 1 in the parallel link group fails. When link 1 fails, the router R1 checks whether other links are available in the parallel link group to which link 1 belongs. If other links in that parallel link group are available, the router R1 no longer sends a multicast join message or a multicast exit message to the router R2.
The router R1 and the router R2 may refresh the link entries in the link group; the refreshed link entries in the link group on the router R1 and the router R2 are as follows.
For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform a hash computation on the multicast source S and the multicast group G to obtain a hash result, and then take that result modulo (i.e., take the remainder of) the number of available link interfaces in the link group ID2. There are 2 available link interfaces in link table entry 2 in the link group, namely interfaces gi2/0/2 and gi2/0/3. When the result of the modulo operation is 0 or 1, the router R2 may forward the multicast traffic to the router R1 through the interface gi2/0/2 or gi2/0/3, respectively.
For the router R1, whether the multicast traffic sent by the router R2 is received from the interface gi2/0/2 or from the interface gi2/0/3, since the interfaces gi2/0/2 and gi2/0/3 belong to the link group ID1, the multicast traffic sent by the router R2 through the interfaces gi2/0/2 or gi2/0/3 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.
As another example, link 1 and link 2 in the parallel link group fail. When link 1 and link 2 fail, the router R1 checks whether other links are available in the parallel link group to which link 1 and link 2 belong. If other links in that parallel link group are available, the router R1 need not send a multicast join message or a multicast exit message to the router R2.
The router R1 and the router R2 may refresh the link entries in the link group; the refreshed link entries in the link group on the router R1 and the router R2 are as follows.
For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform a hash computation on the multicast source S and the multicast group G to obtain a hash result, and then take that result modulo the number of available link interfaces in the link group ID2. There is 1 available link interface in link table entry 2 in the link group, namely interface gi2/0/3. When the result of the modulo operation is 0, the router R2 may forward the multicast traffic to the router R1 through the interface gi2/0/3.
For the router R1, since the interface gi2/0/3 belongs to the link group ID1, the multicast traffic that the router R1 receives from the router R2 through the interface gi2/0/3 can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.
Therefore, in the case of a link failure, neither router R1 nor router R2 need to refresh for each SG multicast forwarding table entry, but only for the link table entries in the link group. The convergence speed is irrelevant to the quantity of the SG multicast forwarding table entries, so that the convergence time of the multicast service can be shortened.
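The convergence property just described may be sketched as follows; this illustrative sketch (with assumed names) shows that a member-link failure touches only the one link table entry, while the SG multicast forwarding entries, however many exist, still reference the unchanged link-group ID.

```python
def refresh_link_group(link_entries, group_id, failed_interfaces):
    """Remove failed member interfaces from a single link table entry."""
    link_entries[group_id] = [
        i for i in link_entries[group_id] if i not in failed_interfaces
    ]
    return link_entries

# Link table entry 2 on router R2, plus 30 SG entries all pointing at link group ID2
link_entries = {"ID2": ["gi2/0/1", "gi2/0/2", "gi2/0/3"]}
sg_entries = [{"SG": (f"S{i}", "G"), "OIFList": "ID2"} for i in range(30)]

refresh_link_group(link_entries, "ID2", {"gi2/0/1"})  # link 1 fails
print(link_entries["ID2"])  # ['gi2/0/2', 'gi2/0/3']
# One refresh, regardless of the 30 SG entries: each still reads OIFList = ID2.
```

Convergence cost is thus proportional to the number of member links, not to the number of SG multicast forwarding entries.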
As another example, link 1 and link 2 in the parallel link group are restored from the failure state to the normal state. When link 1 and link 2 recover, the router R1 adds them back to the parallel link group to which they belong as available members. Because the parallel link group already contains available links, the router R1 need not send a new multicast join message or multicast exit message to the router R2.
The router R1 and the router R2 may refresh the link entries in the link group; the refreshed link entries in the link group on the router R1 and the router R2 are as follows.
For the router R2, after receiving the multicast traffic sent by the router R3, the router R2 may forward the multicast traffic according to the forwarding table entry. The router R2 may perform a hash computation on the multicast source S and the multicast group G to obtain a hash result, and then take that result modulo the number of available link interfaces in the link group ID2. There are again 3 available link interfaces in link table entry 2 in the link group, namely interfaces gi2/0/1, gi2/0/2, and gi2/0/3. When the result of the modulo operation is 0, 1, or 2, the router R2 may forward the multicast traffic to the router R1 through the interface gi2/0/1, gi2/0/2, or gi2/0/3, respectively.
For the router R1, whether the multicast traffic sent by the router R2 is received from the interface gi2/0/1, gi2/0/2, or gi2/0/3, since the interfaces gi2/0/1, gi2/0/2, and gi2/0/3 all belong to the link group ID1, the multicast traffic sent by the router R2 through any of these interfaces can pass the RPF check, so that the multicast traffic can be forwarded to the multicast receiver.
Therefore, in the case that the link is restored from the failure state to the normal state, both the router R1 and the router R2 do not need to refresh for each SG multicast forwarding table entry, but only need to refresh for the link table entry in the link group. The convergence speed is irrelevant to the quantity of the SG multicast forwarding table entries, so that the convergence time of the multicast service can be shortened.
The method for sharing the multicast message load provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 3, and the device embodiments of the present application will be described in detail below with reference to fig. 4 to 10. It is to be understood that the description of the method embodiments corresponds to that of the device embodiments; parts not described in detail may therefore be found in the preceding method embodiments.
Fig. 4 is a schematic block diagram of a first network device 400 according to an embodiment of the present application. The first network device 400 shown in fig. 4 may perform the corresponding steps performed by the first network device in the method of the above-described embodiments. As shown in fig. 4, the first network device 400 includes: a receiving module 410, a determining module 420, and a selecting module 430.
It should be understood that the first network device 400 may perform the respective steps performed by the first network device in the method of the above-described embodiments, e.g., the respective steps performed by the first network device in the method of fig. 2. Specifically, the receiving module 410 may implement the method flow of step 210 in fig. 2, for receiving the first multicast packet; the determining module 420 may implement the method flow of step 220 in fig. 2, and is configured to determine, according to the multicast forwarding table entry, a first link group corresponding to the first multicast packet; the selecting module 430 may implement the method flow of step 230 in fig. 2, for selecting the first link to send the first multicast message.
Optionally, the internet protocol IP addresses of the corresponding physical interfaces between the at least two parallel links are different.
Optionally, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the first link group;
the determining module 420 is specifically configured to obtain, through the obtaining module 450, the first identifier from the first multicast message; and determining the first link group according to the multicast forwarding table item and the first identifier.
Optionally, the selecting module 430 is specifically configured to determine the at least two parallel links according to the identification of the first link group, and to select the first link from the at least two parallel links to send the first multicast message.
Optionally, the selecting module 430 is further configured to select a second link other than the first link from the at least two parallel links when the state of the first link is unavailable, and send the first multicast message through the second link.
Optionally, the receiving module 410 is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where each of the messages sent by the at least two parallel links includes an identification ID of the second network device;
The determining module 420 is further configured to determine that an ID of the second network device included in each of the at least two messages is the same;
the first network device 400 further comprises:
an establishing module 440, configured to establish the first link group including the at least two parallel links based on the ID of the second network device. Further, the first network device establishes a first link group based on the ID of the second network device, acquires the ID of the first link group, establishes a correspondence between the first link group and the ID of the second network device, and establishes a correspondence between the first link group and at least two parallel links.
Optionally, the selecting module 430 is specifically configured to select a first link from the at least two parallel links according to the characteristic information of the first multicast message, and send the first multicast message through the first link.
Fig. 5 is a schematic block diagram of another first network device 500 provided in an embodiment of the present application. As shown in fig. 5, the first network device 500 includes: a receiving module 510 and a setting up module 520.
It should be understood that the first network device 500 may perform the corresponding steps performed by the first network device in the method of the above-described embodiments.
Specifically, as an example, the receiving module 510 in the first network device 500 is configured to receive a first multicast join message through a first link; the establishing module 520 is configured to establish the multicast forwarding table entry.
The first multicast joining message includes a first identifier of a first multicast message, and the first link is one link in a first link group. The multicast forwarding table entry includes a correspondence between the first identifier and an identifier of a first link group, where the first link group includes at least two parallel links between the first network device and the second network device, and the at least two parallel links are different.
Optionally, the receiving module 510 further receives a second multicast join message through a second link, where the second multicast join message includes a second identifier of the second multicast message, and the second link is one link in a second link group;
the establishing module 520 is further configured to establish the multicast forwarding table, where the multicast forwarding table further includes a correspondence between the second identifier and the identifier of the second link group.
Optionally, the receiving module 510 is further configured to receive the first multicast packet; the first network device 500 further comprises a determining module 530 and a selecting module 540. The determining module 530 is configured to determine, according to the multicast forwarding table entry, a first link group corresponding to the first multicast packet; the selecting module 540 is configured to select a second link to send the first multicast message, where the second link is one link of the at least two parallel links.
Optionally, the determining module 530 is specifically configured to obtain, by the obtaining module 550, the first identifier from the first multicast message; and determining the first link group according to the multicast forwarding table item and the first identifier.
Optionally, the determining module 530 is specifically configured to determine the at least two parallel links according to the identification of the first link group, and to select the first link from the at least two parallel links to send the first multicast message.
Optionally, the selecting module 540 is further configured to select a third link other than the second link from the at least two parallel links when the interface state of the second link is unavailable, and send the first multicast message through the third link.
Optionally, the receiving module 510 is further configured to receive at least two messages sent by the second network device through each of the at least two parallel links, where each of the messages sent by the at least two parallel links includes an ID of the second network device;
the determining module 530 is further configured to determine that an ID of the second network device included in each of the at least two messages is the same;
The establishing module 520 is further configured to establish, based on the ID of the second network device, the first link group including the at least two parallel links, and to establish a correspondence between the ID of the first link group and the ID of the second network device.
Optionally, the selecting module 540 is specifically configured to select the second link from the at least two parallel links according to the characteristic information of the first multicast message, and send the first multicast message through the second link.
Fig. 6 is a schematic hardware structure of the first network device 2000 according to the embodiment of the present application. The first network device 2000 shown in fig. 6 may perform the corresponding steps performed by the first network device in the method of the above embodiment.
As shown in fig. 6, the first network device 2000 includes a processor 2001, a memory 2002, an interface 2003, and a bus 2004. The interface 2003 may be implemented in a wireless or wired manner, and may specifically be a network card. The processor 2001, memory 2002, and interface 2003 are connected by a bus 2004.
It should be understood that the first network device 2000 may perform the respective steps performed by the first network device in the method of the above-described embodiments, for example, the respective steps performed by the first network device in the method of fig. 2.
As an example, the processor 2001 is to: receiving a first multicast message; determining a first link group corresponding to the first multicast message according to a multicast forwarding table item; and selecting a first link to send the first multicast message.
The first link group comprises at least two parallel links between the first network device and a second network device, the second network device is a neighbor of the first network device, and the at least two parallel links are different. The first link is one of the at least two parallel links.
Specifically, the interface 2003 in the first network device 2000 may implement the method flow of step 210 in fig. 2, for receiving the first multicast packet; processor 2001 in first network device 2000 may implement the method flow of step 220 in fig. 2, for determining, according to a multicast forwarding table entry, a first link group corresponding to the first multicast packet; processor 2001 in first network device 2000 may also implement the method flow of step 230 in fig. 2 for selecting a first link to send the first multicast message.
The interface 2003 may include a transmitter and a receiver, for the first network device to implement the above-mentioned transceiving. For example, the interface 2003 is configured to receive a first multicast message. For another example, the interface 2003 is configured to send the first multicast message. For another example, the interface 2003 is configured to receive a message sent by a neighbor of the first network device.
The processor 2001 is configured to perform the processing performed by the first network device in the above embodiments. For example, the processor 2001 determines, according to the multicast forwarding table entry, a first link group corresponding to the first multicast message. For another example, the processor 2001 establishes a multicast forwarding table entry; and/or performs other processes of the techniques described herein. By way of example, the processor 2001 is configured to support steps 220 and 230 of fig. 2. The memory 2002 includes an operating system 20021 and application programs 20022, and is used for storing programs, code, or instructions which, when executed by the processor or a hardware device, can complete the processes of the method embodiments involving the first network device. Alternatively, the memory 2002 may include read-only memory (ROM) and random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system; the RAM includes the application programs and the operating system. When the first network device 2000 needs to be started, the BIOS stored in the ROM or the bootloader of the embedded system boots the first network device 2000 into a normal operation state. After the first network device 2000 enters the normal operation state, the application programs and the operating system run in the RAM, thereby completing the processes of the method embodiments involving the first network device 2000.
It is to be understood that fig. 6 only shows a simplified design of the first network device 2000. In practice, the first network device may comprise any number of interfaces, processors or memories.
Fig. 7 is a schematic hardware structure of another first network device 2100 according to an embodiment of the present application. The first network device 2100 shown in fig. 7 may perform the corresponding steps performed by the first network device in the method of the above-described embodiments.
As depicted in fig. 7, the first network device 2100 includes: a main control board 2110, an interface board 2130, a switching fabric board 2120, and an interface board 2140. The main control board 2110, the interface boards 2130 and 2140, and the switching fabric board 2120 are connected to the system backplane through a system bus to achieve interworking. The main control board 2110 is used for completing functions such as system management, device maintenance, and protocol processing. The switching fabric board 2120 is used to complete data exchange between the interface boards (interface boards are also referred to as line cards or service boards). The interface boards 2130 and 2140 are used to provide various service interfaces (e.g., a POS interface, a GE interface, an ATM interface, etc.) and to implement forwarding of data packets.
It is to be understood that the first network device 2100 may perform the corresponding steps performed by the first network device in the method of the above embodiments, e.g., the corresponding steps performed by the first network device in the method of fig. 2.
Specifically, the interface board 2130 may implement the method flow of step 210 in fig. 2, and is used for receiving the first multicast message; the main control board 2110 may implement the method flow of step 220 in fig. 2, and is configured to determine, according to the multicast forwarding table entry, the first link group corresponding to the first multicast message; the main control board 2110 may also implement the method flow of step 230 in fig. 2, and is used for selecting the first link to send the first multicast message.
The interface board 2130 may include a central processor 2131, a forwarding table entry memory 2134, a physical interface card 2133, and a network processor 2132. The central processor 2131 is used for controlling and managing the interface board and communicating with the central processor on the main control board. The forwarding table entry memory 2134 is used to hold entries, such as multicast forwarding entries above. The physical interface card 2133 is used to complete the reception and transmission of traffic.
Specifically, the physical interface card 2133 is configured to receive the first multicast message. After the physical interface card 2133 receives the first multicast message, it sends the first multicast message to the central processor 2111 on the main control board through the central processor 2131, and the central processor 2111 processes the first multicast message.
It should be understood that the operations on the interface board 2140 in the embodiment of the present application are consistent with the operations of the interface board 2130, and are not repeated for brevity. It should be understood that the first network device 2100 of this embodiment may correspond to the functions and/or the various steps implemented in the above-described method embodiments, which are not described herein.
In addition, it should be noted that there may be one or more main control boards; when there are multiple, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the first network device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching fabric board, or there may be one or more switching fabric boards; when there are multiple, they jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the first network device may not need a switching fabric board, and an interface board undertakes the processing of the service data of the whole system. Under a distributed forwarding architecture, the first network device may have at least one switching fabric board, through which data exchange among multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a first network device with the distributed architecture is greater than that of a device with the centralized architecture. Which architecture is used depends on the specific networking deployment scenario and is not limited herein.
Fig. 8 is a schematic block diagram of a second network device 800 according to an embodiment of the present application. The second network device 800 shown in fig. 8 may perform the corresponding steps performed by the second network device in the method of the above-described embodiment. As shown in fig. 8, the second network device 800 includes: a receiving module 810, a determining module 820, and a transmitting module 830.
It should be understood that the second network device 800 may perform the corresponding steps performed by the second network device in the method of the above-described embodiments, for example, the corresponding steps performed by the router R1 in fig. 3.
Specifically, as an example, the receiving module 810 in the second network device 800 is configured to receive, through the first link, a first multicast packet sent by the first network device; the determining module 820 is configured to determine that the first multicast packet passes a reverse path forwarding RPF check according to the first link being one link in a second link group; the sending module 830 is configured to forward the first multicast packet.
The first network device is a neighbor of the second network device, the first link is one link in a second link group, the second link group comprises at least two parallel links between the first network device and the second network device, and the at least two parallel links are different.
Optionally, the determining module 820 is specifically configured to: determining the second link group corresponding to the first multicast message according to a multicast forwarding table item; and determining that the first link is one link in a second link group, and determining that the first multicast message passes the RPF check.
Optionally, the multicast forwarding table entry includes a correspondence between a first identifier of the first multicast packet and an identifier of the second link group, and the determining module 820 is specifically configured to: acquiring a first identifier from the first multicast message; and determining the second link group according to the first identifier and the multicast forwarding table item.
Optionally, the multicast forwarding table entry further includes a correspondence between an identifier of the second link group and identifiers of at least two parallel links in the second link group, and the determining module 820 is specifically configured to: and determining that the first link is one link in the second link group according to the identification of the second link group and the multicast forwarding table entry.
Optionally, the receiving module 810 is further configured to receive at least two messages sent by the first network device through each of the at least two parallel links, where each of the messages sent by the at least two parallel links includes an ID of the first network device; the determining module 820 is further configured to determine that an ID of the first network device included in each of the at least two messages is the same;
the second network device 800 further comprises: an establishing module 840 is configured to establish the second link group including the at least two parallel links based on the ID of the first network device and a correspondence between the second link group and the ID of the first network device.
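The grouping performed by the establishing module 840 can be sketched as follows. This is a minimal illustrative sketch under assumed names: the `link_messages` mapping and the link and neighbor identifiers are hypothetical, and the per-link messages stand in for whatever protocol messages (e.g. hello messages) carry the neighbor's ID.

```python
# Illustrative sketch: parallel links whose received messages carry the SAME
# neighbor ID are gathered into one link group. All names are hypothetical.
from collections import defaultdict

def build_link_groups(link_messages):
    """link_messages maps a local link -> the neighbor ID received on that link."""
    by_neighbor = defaultdict(list)
    for link, neighbor_id in link_messages.items():
        by_neighbor[neighbor_id].append(link)  # same ID -> same link group
    # a link group needs at least two parallel links to the same neighbor
    return {nid: sorted(links) for nid, links in by_neighbor.items() if len(links) >= 2}

# link-A and link-B both report neighbor R2, so they form one link group;
# link-C is the only link to R3 and therefore forms no group.
groups = build_link_groups({"link-A": "R2", "link-B": "R2", "link-C": "R3"})
```

The returned mapping also records the correspondence between each link group and the neighbor's ID, which is what the establishing module keeps alongside the group membership.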
Fig. 9 is a schematic hardware architecture of a second network device 2200 according to an embodiment of the present application. The second network device 2200 shown in fig. 9 may perform the corresponding steps performed by the second network device in the method of the above-described embodiment.
As shown in fig. 9, the second network device 2200 includes a processor 2201, a memory 2202, an interface 2203, and a bus 2204. The interface 2203 may be implemented in a wireless or wired manner, and may specifically be a network card. The processor 2201, the memory 2202, and the interface 2203 are connected through the bus 2204.
It should be understood that the second network device 2200 may perform the corresponding steps performed by the second network device in the method of the above-described embodiments, such as router R1 in fig. 3.
As an example, the processor 2201 in the second network device 2200 is specifically configured to: receive, through a first link, a first multicast message sent by a first network device; determine, according to the first link being one link in a second link group, that the first multicast message passes a reverse path forwarding (RPF) check; and forward the first multicast message.
The interface 2203 may specifically include a transmitter and a receiver, used by the second network device to perform the receiving and sending described above. For example, the interface 2203 is configured to receive, through the first link, the first multicast message sent by the first network device. For another example, the interface 2203 is configured to forward the first multicast message.
The processor 2201 is configured to perform the processing performed by the second network device in the above embodiments. For example, the processor 2201 determines, according to the first link being one link in the second link group, that the first multicast message passes the reverse path forwarding (RPF) check; and/or performs other processes of the techniques described herein. The memory 2202 includes an operating system 22021 and application programs 22022, and is used for storing programs, code or instructions which, when executed by the processor or a hardware device, complete the processes of the method embodiments involving the second network device. Optionally, the memory 2202 may include a read-only memory (ROM) and a random access memory (RAM). The ROM includes a basic input/output system (BIOS) or an embedded system; the RAM includes the application programs and the operating system. When the second network device 2200 needs to be started, the BIOS stored in the ROM or the bootloader of the embedded system boots the second network device 2200 into a normal operation state. After the second network device 2200 enters the normal operation state, the application programs and the operating system run in the RAM, thereby completing the processing procedures involving the second network device 2200 in the method embodiments.
It is to be understood that fig. 9 only shows a simplified design of the second network device 2200. In practice, the second network device may comprise any number of interfaces, processors or memories.
Fig. 10 is a schematic hardware structure of another second network device 2400 according to an embodiment of the present application. The second network device 2400 shown in fig. 10 may perform the corresponding steps performed by the second network device in the method of the above-described embodiment.
As illustrated in fig. 10, the second network device 2400 includes: a main control board 2410, an interface board 2430, a switching fabric board 2420, and an interface board 2440. The main control board 2410, the interface boards 2430 and 2440, and the switching fabric board 2420 are connected to the system backplane through a system bus to achieve interworking. The main control board 2410 is used for completing functions such as system management, device maintenance, and protocol processing. The switching fabric board 2420 is used to complete data exchange between the interface boards (interface boards are also referred to as line cards or service boards). The interface boards 2430 and 2440 are used to provide various service interfaces (e.g., a POS interface, a GE interface, an ATM interface, etc.) and to implement forwarding of data packets.
It is to be understood that the second network device 2400 may perform the respective steps performed by the second network device in the method of the above-described embodiments, such as the respective steps performed by the router R1 in fig. 3.
The interface board 2430 can include a central processor 2431, a forwarding table entry store 2434, a physical interface card 2433, and a network processor 2432. The central processor 2431 is used for controlling and managing the interface board and communicating with the central processor on the main control board. Forwarding table entry store 2434 is used to hold entries, e.g., multicast forwarding entries, above. The physical interface card 2433 is used to complete the reception and transmission of traffic.
It should be understood that the operations on the interface board 2440 in the embodiment of the present application are consistent with the operations of the interface board 2430, and are not repeated for brevity. It should be understood that the second network device 2400 of the present embodiment may correspond to the functions and/or the steps implemented by the above-described method embodiments, which are not described herein.
In addition, it should be noted that there may be one or more main control boards; when there are multiple, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the second network device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching fabric board, or there may be one or more switching fabric boards; when there are multiple, they jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the second network device may not need a switching fabric board, and an interface board undertakes the processing of the service data of the whole system. Under a distributed forwarding architecture, the second network device may have at least one switching fabric board, through which data exchange among multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a second network device with the distributed architecture is greater than that of a device with the centralized architecture. Which architecture is used depends on the specific networking deployment scenario and is not limited herein.
The embodiment of the application also provides a system for sharing the multicast message load, which comprises the first network device and the second network device. Wherein the first network device may perform the corresponding steps performed by the first network device in the method of the above embodiments, e.g. the first network device in the method of fig. 2 or the router R2 in fig. 3. The second network device may perform the corresponding steps performed by the second network device in the method of the above embodiment, such as router R1 in fig. 3.
As an example, a first network device is to: receiving a first multicast message; determining a first link group corresponding to the first multicast message according to a multicast forwarding table item, wherein the first link group comprises at least two parallel links between the first network device and second network device, the at least two parallel links are different, and the second network device is a neighbor of the first network device; and selecting a first link to send the first multicast message, wherein the first link is one link of the at least two parallel links.
The second network device is configured to: receiving a first multicast message sent by a first network device through a first link, wherein the first network device is a neighbor of a second network device, the first link is one link in a second link group, the second link group comprises at least two parallel links between the first network device and the second network device, and the at least two parallel links are different; determining that the first multicast message passes Reverse Path Forwarding (RPF) inspection for one link in the second link group according to the first link; and forwarding the first multicast message.
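The second network device's relaxed RPF check described above can be sketched as follows: a message passes if its incoming link is any member of the expected link group, rather than one fixed upstream interface. This is a minimal illustrative sketch; the `rpf_table` layout and the (source, group) first identifier are assumptions, not the claimed implementation.

```python
# Illustrative sketch of the group-based RPF check. The table layout and the
# (source, group) first identifier are hypothetical.

# Multicast forwarding entry: first identifier -> second link group, i.e. the
# set of parallel links from the first network device (the RPF neighbor).
rpf_table = {
    ("10.0.0.1", "232.1.1.1"): {"link-A", "link-B"},
}

def rpf_check(first_identifier, incoming_link):
    # The check passes when the incoming link is one link in the second link
    # group; only then may the message be forwarded downstream.
    second_link_group = rpf_table.get(first_identifier, set())
    return incoming_link in second_link_group
```

Because any member link passes the check, the first network device may send a given multicast flow on any link of the group, for example a second link after the first link becomes unavailable, without the message being dropped by the downstream RPF check.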
The present application also provides a computer-readable medium storing program code which, when run on a computer, causes the computer to perform the methods of the above aspects. Such computer-readable storage media include, but are not limited to, one or more of the following: a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), a flash memory, an electrically erasable PROM (EEPROM), and a hard disk drive.
The embodiment of the application also provides a computer program product, which is applied to the first network device, and comprises: computer program code which, when run by a computer, causes the computer to perform the method in any of the possible implementations of any of the above aspects.
The embodiment of the application also provides a chip system applied to the first network device, the chip system comprising: the system comprises at least one processor, at least one memory and an interface circuit, wherein the interface circuit is responsible for information interaction between the chip system and the outside, the at least one memory, the interface circuit and the at least one processor are interconnected through a circuit, and instructions are stored in the at least one memory; the instructions are executable by the at least one processor to perform the operations of the first network device in the methods of the various aspects described above.
In a specific implementation, the chip may be implemented in the form of a central processing unit (central processing unit, CPU), microcontroller (micro controller unit, MCU), microprocessor (micro processing unit, MPU), digital signal processor (digital signal processing, DSP), system on chip (SoC), application-specific integrated circuit (ASIC), field programmable gate array (field programmable gate array, FPGA) or programmable logic device (programmable logic device, PLD).
Embodiments of the present application also provide a computer program product for use in a first network device, the computer program product comprising a series of instructions which, when executed, perform the operations of the first network device in the methods of the above aspects.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (29)
1. A method for load sharing of multicast messages, the method comprising:
a first network device receives a first multicast message;
the first network device determines a first link group corresponding to the first multicast message according to a multicast forwarding table item, wherein the first link group comprises at least two parallel links between the first network device and a second network device, the multicast forwarding table item comprises a corresponding relation between a first identifier of the first multicast message and an identifier of the first link group and a link table item, the link table item comprises a multicast traffic output interface corresponding to the first link group, the second network device is a neighbor of the first network device, and the at least two parallel links are different;
the first network device selects a first link to send the first multicast message, wherein the first link is one link of the at least two parallel links; and
and when the state of the first link is unavailable, the first network device refreshes the link table entry, selects a second link other than the first link from the at least two parallel links, and sends the first multicast message through the second link.
2. The method of claim 1, wherein the at least two parallel links are different, comprising:
the internet protocol IP addresses of the interfaces corresponding between the at least two parallel links are different.
3. The method according to claim 1 or 2, wherein the first network device determines a first link group corresponding to the first multicast message according to a multicast forwarding table entry, comprising:
the first network equipment acquires the first identifier from the first multicast message;
the first network device determines the first link group according to the multicast forwarding table item and the first identifier.
4. The method according to claim 1 or 2, wherein the first network device selecting a first link to send the first multicast message comprises:
the first network equipment determines the at least two parallel links according to the identification of a first link group;
and the first network equipment selects the first link from the at least two parallel links to send the first multicast message.
5. The method according to claim 1 or 2, wherein before the first network device determines the first link group corresponding to the first multicast packet according to the multicast forwarding table entry, the method further comprises:
The first network device receives at least two messages sent by the second network device through each link of the at least two parallel links, wherein the messages sent by each link comprise Identification (ID) of the second network device;
the first network device establishes the first link group including the at least two parallel links based on an ID of the second network device.
6. The method according to claim 1 or 2, wherein the first network device selects a first link to send the first multicast message, specifically comprising:
and the first network equipment selects a first link from the at least two parallel links according to the characteristic information of the first multicast message and sends the first multicast message through the first link.
7. A method for load sharing of multicast messages, the method comprising:
a second network device receives, through a first link, a first multicast message sent by a first network device, wherein the first network device is a neighbor of the second network device, the first link is one link of at least two parallel links between the first network device and the second network device included in a second link group, and the at least two parallel links are different;
the second network device determines, based on a multicast forwarding table entry and according to the first link being one link in the second link group, that the first multicast message passes a reverse path forwarding (RPF) check, wherein the multicast forwarding table entry comprises a correspondence between a first identifier of the first multicast message and an identifier of the second link group, and a link table entry, and the link table entry comprises a multicast traffic output interface corresponding to the second link group;
the second network device forwards the first multicast message; and
and when the state of the first link is unavailable, the second network device refreshes the link table entry.
8. The method of claim 7, wherein the at least two parallel links are different, comprising:
the interfaces corresponding to the at least two parallel links have different Internet Protocol (IP) addresses.
9. The method according to claim 7 or 8, wherein the second network device determining, according to the first link being one link in the second link group, that the first multicast message passes the reverse path forwarding (RPF) check comprises:
the second network device determines the second link group corresponding to the first multicast message according to the multicast forwarding table item;
The second network device determines that the first multicast message passes the RPF check based on the first link being one of a second link group.
10. The method of claim 9, wherein the second network device determining the second link group corresponding to the first multicast message according to a multicast forwarding table entry comprises:
the second network equipment acquires a first identifier from the first multicast message;
and the second network equipment determines the second link group corresponding to the first multicast message according to the first identifier and the multicast forwarding table item.
11. The method of claim 9, wherein the multicast forwarding table entry further includes a correspondence between the identification of the second link group and at least two parallel links in the second link group, the method further comprising:
and the second network equipment determines that the first link is one link in the second link group according to the identification of the second link group and the multicast forwarding table entry.
12. The method according to claim 7 or 8, wherein before the second network device determines, according to the first link being one link in the second link group, that the first multicast message passes the reverse path forwarding (RPF) check, the method further comprises:
The second network device receives at least two messages sent by the first network device through each link of the at least two parallel links, wherein the messages sent by each link comprise Identification (ID) of the first network device;
the second network device establishes the second link group including the at least two parallel links based on the ID of the first network device and a correspondence between the second link group and the ID of the first network device.
13. A first network device, comprising:
the receiving module is used for receiving the first multicast message;
a determining module, configured to determine a first link group corresponding to the first multicast packet according to a multicast forwarding table entry, where the first link group includes at least two parallel links between the first network device and a second network device, the multicast forwarding table entry includes a corresponding relationship between a first identifier of the first multicast packet and an identifier of the first link group, and a link table entry, and the link table entry includes a multicast traffic output interface corresponding to the first link group, where the at least two parallel links are different, and the second network device is a neighbor of the first network device;
The selecting module is used for selecting a first link to send the first multicast message, wherein the first link is one link of the at least two parallel links;
the selecting module is further configured to, when the state of the first link is unavailable, refresh the link table entry, select a second link other than the first link from the at least two parallel links, and send the first multicast message through the second link.
14. The first network device of claim 13, wherein the at least two parallel links are different, comprising:
the internet protocol IP addresses of the interfaces corresponding between the at least two parallel links are different.
15. The first network device according to claim 13 or 14, characterized in that,
the determining module is specifically configured to:
acquiring the first identifier from the first multicast message through an acquisition module;
and determining the first link group according to the multicast forwarding table item and the first identifier.
16. The first network device according to claim 13 or 14, wherein the selecting module is specifically configured to:
determine the at least two parallel links according to the identifier of the first link group; and
select the first link from the at least two parallel links to send the first multicast packet.
17. The first network device according to claim 13 or 14, wherein the receiving module is further configured to:
receive, through each link of the at least two parallel links, a message sent by the second network device, wherein the message sent over each link comprises an identifier (ID) of the second network device; and
wherein the first network device further comprises:
an establishing module, configured to establish the first link group comprising the at least two parallel links based on the ID of the second network device.
18. The first network device according to claim 13 or 14, wherein the selecting module is specifically configured to:
select the first link from the at least two parallel links according to characteristic information of the first multicast packet, and send the first multicast packet through the first link.
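Claim 18's per-flow selection can be illustrated with a stable hash over the packet's characteristic information. The choice of `(source, group)` as the characteristic fields, and the SHA-256 hash, are assumptions for the sketch; the claim does not fix either.

```python
import hashlib

def select_link(links, src_ip, group_ip):
    """Deterministically map one multicast flow to one of the parallel links,
    so a flow's packets stay in order while distinct flows spread across links."""
    key = ("%s-%s" % (src_ip, group_ip)).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return links[digest % len(links)]

links = ["link-A", "link-B"]
l1 = select_link(links, "10.0.0.1", "232.1.1.1")
l2 = select_link(links, "10.0.0.1", "232.1.1.1")
assert l1 == l2   # same flow always hashes to the same link
```

A deterministic hash (rather than round-robin) is what makes the selection stateless: every packet of a flow lands on the same link without the device tracking per-flow state.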
19. A second network device, comprising:
a receiving module, configured to receive a first multicast packet sent by a first network device through a first link, wherein the first network device is a neighbor of the second network device, the first link is one of at least two parallel links, comprised in a second link group, between the first network device and the second network device, and the at least two parallel links are different;
a determining module, configured to determine, based on a multicast forwarding table entry and on the first link being one link in the second link group, that the first multicast packet passes a reverse path forwarding (RPF) check, wherein the multicast forwarding table entry comprises a link table entry and a correspondence between a first identifier of the first multicast packet and an identifier of the second link group, and the link table entry comprises a multicast traffic output interface corresponding to the second link group; and
a sending module, configured to forward the first multicast packet;
wherein the determining module is further configured to refresh the link table entry when the state of the first link is unavailable.
20. The second network device according to claim 19, wherein the at least two parallel links being different comprises:
the interfaces corresponding to the at least two parallel links have different Internet Protocol (IP) addresses.
21. The second network device according to claim 19 or 20, wherein the determining module is specifically configured to:
determine the second link group corresponding to the first multicast packet according to the multicast forwarding table entry; and
determine, based on the first link being one link in the second link group, that the first multicast packet passes the RPF check.
22. The second network device according to claim 21, wherein the determining module is specifically configured to:
acquire the first identifier from the first multicast packet; and
determine the second link group corresponding to the first multicast packet according to the first identifier and the multicast forwarding table entry.
23. The second network device according to claim 21, wherein the multicast forwarding table entry further comprises a correspondence between the identifier of the second link group and the at least two parallel links in the second link group, and
the determining module is specifically configured to: determine, according to the identifier of the second link group and the multicast forwarding table entry, that the first link is one link in the second link group.
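The RPF check described in claims 19 to 23 relaxes the classic single-upstream-interface rule: a packet passes if its incoming link is any member of the link group associated with the flow. A minimal sketch, with the table layout and names assumed for illustration:

```python
def rpf_check(incoming_link, forwarding_table, first_identifier):
    """Pass the RPF check when the packet's incoming link belongs to the
    link group that the multicast forwarding entry maps the flow to."""
    link_group = forwarding_table.get(first_identifier)
    return link_group is not None and incoming_link in link_group

# One (S, G) flow is bound to a group of two parallel links.
table = {("10.0.0.1", "232.1.1.1"): {"link-A", "link-B"}}

assert rpf_check("link-B", table, ("10.0.0.1", "232.1.1.1"))      # member: pass
assert not rpf_check("link-C", table, ("10.0.0.1", "232.1.1.1"))  # outsider: drop
```

Checking group membership rather than a single fixed interface is what allows the upstream device to spread flows across the parallel links without the downstream device discarding them as RPF failures.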
24. The second network device according to claim 19 or 20, wherein the receiving module is further configured to:
receive, through each link of the at least two parallel links, a message sent by the first network device, wherein the message sent over each link comprises an identifier (ID) of the first network device; and
wherein the second network device further comprises:
an establishing module, configured to establish the second link group comprising the at least two parallel links based on the ID of the first network device.
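Claims 17 and 24 both build a link group by noticing that several links report the same neighbor ID. A sketch of that grouping step, assuming the per-link messages are available as simple `(link, neighbor_id)` pairs:

```python
from collections import defaultdict

def build_link_groups(neighbor_messages):
    """neighbor_messages: iterable of (link, neighbor_id) pairs, one per link.
    Links whose messages carry the same neighbor ID are bundled into one
    parallel-link group keyed by that ID."""
    groups = defaultdict(list)
    for link, neighbor_id in neighbor_messages:
        groups[neighbor_id].append(link)
    return dict(groups)

# Two links report neighbor R1, one reports R2.
messages = [("link-A", "R1"), ("link-B", "R1"), ("link-C", "R2")]
groups = build_link_groups(messages)
assert groups == {"R1": ["link-A", "link-B"], "R2": ["link-C"]}
```

Keying the bundle on the neighbor ID (rather than on interface addresses, which claim 14 says differ per link) is what lets the device recognize that physically distinct links all lead to the same neighbor.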
25. A first network device, comprising: a processor and a memory configured to store a program, wherein the processor is configured to invoke and run the program from the memory to perform the method according to any one of claims 1 to 6.
26. A second network device, comprising: a processor and a memory configured to store a program, wherein the processor is configured to invoke and run the program from the memory to perform the method according to any one of claims 7 to 12.
27. A system for multicast message load sharing, comprising: the first network device of any of claims 13 to 18 and the second network device of any of claims 19 to 24.
28. A computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6.
29. A computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 7 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010505915.3A CN113765815B (en) | 2020-06-05 | 2020-06-05 | Method, equipment and system for sharing multicast message load |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113765815A CN113765815A (en) | 2021-12-07 |
CN113765815B true CN113765815B (en) | 2024-03-26 |
Family
ID=78784977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010505915.3A Active CN113765815B (en) | 2020-06-05 | 2020-06-05 | Method, equipment and system for sharing multicast message load |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113765815B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1960282A (en) * | 2006-08-31 | 2007-05-09 | 华为技术有限公司 | Multicast service method and device of providing multiple types of protection and recovery |
CN1992707A (en) * | 2005-12-29 | 2007-07-04 | 上海贝尔阿尔卡特股份有限公司 | Fast restoration method of multicast service and network apparatus |
CN101039262A (en) * | 2007-01-24 | 2007-09-19 | 中国科学院计算机网络信息中心 | Half-covered self-organizing dynamic multicast routing method |
CN101043422A (en) * | 2006-03-24 | 2007-09-26 | 上海贝尔阿尔卡特股份有限公司 | Multicasting service protecting method of access network and its system, apparatus |
US8238344B1 (en) * | 2007-03-30 | 2012-08-07 | Juniper Networks, Inc. | Multicast load balancing |
CN103236975A (en) * | 2013-05-09 | 2013-08-07 | 杭州华三通信技术有限公司 | Message forwarding method and message forwarding device |
WO2013139270A1 (en) * | 2012-03-23 | 2013-09-26 | 华为技术有限公司 | Method, device, and system for implementing layer3 virtual private network |
CN105264834A (en) * | 2013-06-28 | 2016-01-20 | 华为技术有限公司 | Method and device for processing multicast message in nvo3 network, and nvo3 network |
WO2017201750A1 (en) * | 2016-05-27 | 2017-11-30 | 华为技术有限公司 | Method, device and system for processing multicast data |
CN107659496A (en) * | 2016-07-26 | 2018-02-02 | 新华三技术有限公司 | A kind of data processing method and device |
CN109787839A (en) * | 2019-02-28 | 2019-05-21 | 新华三技术有限公司 | A kind of message forwarding method and device |
CN109981308A (en) * | 2017-12-27 | 2019-07-05 | 北京华为数字技术有限公司 | Message transmitting method and device |
CN110999230A (en) * | 2017-10-18 | 2020-04-10 | 华为技术有限公司 | Method, network equipment and system for transmitting multicast message |
WO2020088655A1 (en) * | 2018-11-02 | 2020-05-07 | FG Innovation Company Limited | Sidelink measurement reporting in next generation wireless networks |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7586842B2 (en) * | 2006-05-19 | 2009-09-08 | Hewlett-Packard Development Company, L.P. | Failover of multicast traffic flows using NIC teaming |
US8223767B2 (en) * | 2009-12-31 | 2012-07-17 | Telefonaktiebolaget L M Ericsson (Publ) | Driven multicast traffic distribution on link-aggregate-group |
US8432789B2 (en) * | 2010-12-28 | 2013-04-30 | Avaya Inc. | Split multi-link trunking (SMLT) hold-down timer for internet protocol (IP) multicast |
US9100203B2 (en) * | 2012-01-12 | 2015-08-04 | Brocade Communications Systems, Inc. | IP multicast over multi-chassis trunk |
US9143444B2 (en) * | 2013-03-12 | 2015-09-22 | International Business Machines Corporation | Virtual link aggregation extension (VLAG+) enabled in a TRILL-based fabric network |
US9692677B2 (en) * | 2013-06-12 | 2017-06-27 | Avaya Inc. | Implementing multicast link trace connectivity fault management in an Ethernet network |
US9565027B2 (en) * | 2013-08-23 | 2017-02-07 | Futurewei Technologies, Inc. | Multi-destination traffic control in multi-level networks |
US9479349B2 (en) * | 2013-12-31 | 2016-10-25 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd | VLAG PIM multicast traffic load balancing |
- 2020-06-05 CN CN202010505915.3A patent/CN113765815B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9660939B2 (en) | Protection switching over a virtual link aggregation | |
US7639605B2 (en) | System and method for detecting and recovering from virtual switch link failures | |
US8438305B2 (en) | Method and apparatus for implementing multiple portals into an RBRIDGE network | |
US7751329B2 (en) | Providing an abstraction layer in a cluster switch that includes plural switches | |
US8599683B2 (en) | System and method for changing a delivery path of multicast traffic | |
CN112565046B (en) | Synchronizing multicast router capabilities | |
US20220255847A1 (en) | Packet Sending Method and First Network Device | |
CN114465920B (en) | Method, device and system for determining corresponding relation | |
US11546267B2 (en) | Method for determining designated forwarder (DF) of multicast flow, device, and system | |
CN112671642A (en) | Message forwarding method and device | |
CN101527727A (en) | Communication device and operation management method | |
US20210211351A1 (en) | Stacking-port configuration using zero-touch provisioning | |
JP2013026829A (en) | Transmission system and control method of transmission system | |
CN101534253A (en) | Message forwarding method and device | |
CN112822097B (en) | Message forwarding method, first network device and first device group | |
WO2017201750A1 (en) | Method, device and system for processing multicast data | |
CN113285878B (en) | Load sharing method and first network equipment | |
CN113765815B (en) | Method, equipment and system for sharing multicast message load | |
CN115580574A (en) | Message forwarding method, device and system | |
CN111491334B (en) | Load sharing method, device, system, single board and storage medium | |
WO2022135217A1 (en) | Load sharing method and system, root node device, and leaf node device | |
CN114006780A (en) | Method, equipment and system for forwarding message | |
CN111953786A (en) | System, method and device for recording messages in whole network, network equipment and storage medium | |
CN114726795A (en) | Load sharing method, root node equipment, leaf node equipment and system | |
WO2023165544A9 (en) | Method and apparatus for discovering root node |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||