US9769074B2 - Network per-flow rate limiting - Google Patents

Network per-flow rate limiting Download PDF

Info

Publication number
US9769074B2
Authority
US
United States
Prior art keywords
flow
data
flow control
switch
data flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/833,886
Other versions
US20140269319A1 (en)
Inventor
Casimer DeCusatis
Rajaram B. Krishnamurthy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/833,886
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (assignment of assignors interest). Assignors: DECUSATIS, CASIMER; KRISHNAMURTHY, RAJARAM B.
Publication of US20140269319A1
Application granted
Publication of US9769074B2
Assigned to AWEMANE LTD. (assignment of assignors interest). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Assigned to BEIJING PIANRUOJINGHONG TECHNOLOGY CO., LTD. (assignment of assignors interest). Assignors: AWEMANE LTD.
Assigned to BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD. (assignment of assignors interest). Assignors: BEIJING PIANRUOJINGHONG TECHNOLOGY CO., LTD.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L 43/0882 Utilisation of link capacity

Definitions

  • Ethernet networks are typically employed in local area networks (LANs) that include a plurality of network switches.
  • A number of communication protocols have been developed and continue to evolve to enhance Ethernet network performance for various environments. For example, an enhancement to Ethernet, called data center bridging (DCB), converged enhanced Ethernet (CEE) or data center Ethernet (DCE), supports the convergence of LANs with storage area networks (SANs). Other protocols used in a data center environment in conjunction with Ethernet include Fibre Channel over Ethernet (FCoE), Internet Wide Area Remote direct memory access Protocol (iWARP), and Remote direct memory access over Converged Ethernet (RoCE).
  • Network congestion occurs when a data flow is received from a source at a faster rate than the flow can be output or routed. Such congestion reduces quality of service, causing packets to be dropped, or queuing and/or transmission of packets to be delayed.
  • a method of monitoring data flow in a network includes: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
  • a method of processing data flows in a network switch includes: receiving a data flow at the network switch, the data flow including a plurality of data packets, wherein one or more of the data packets includes an indication of a flow control policy specific to the data flow; storing the indication of the flow control policy in a flow control queue in the network switch, the flow control policy associated with a threshold for comparison to flow statistics for the data flow; and processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
  • a computer program product for monitoring data flow in a network includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method including: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
  • a computer program product for processing data flows in a network switch.
  • the computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method including: receiving a data flow at the network switch, the data flow including a plurality of data packets, wherein one or more of the data packets includes an indication of a flow control policy specific to the data flow; storing the indication of the flow control policy in a flow control queue in the network switch, the flow control policy associated with a threshold for comparison to flow statistics for the data flow; and processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
  • an apparatus for controlling a network switch includes a memory having computer readable computer instructions; and a processor for executing the computer readable instructions.
  • the instructions are for: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
  • FIG. 1 depicts a block diagram of a system including a network with OpenFlow-capable switches that may be implemented according to an embodiment
  • FIG. 2 depicts a block diagram of an OpenFlow-capable switch according to an embodiment
  • FIG. 3 depicts an example of an OpenFlow flow switching definition that can be used in embodiments
  • FIG. 4 depicts an exemplary embodiment of a portion of a network including a network switch and a switch controller
  • FIG. 5 depicts an example of a data packet
  • FIG. 6 is a flow diagram showing a method of monitoring and/or processing data flows in a network.
  • Exemplary embodiments relate to monitoring data flows processed in network switches.
  • An embodiment of a network includes one or more switches, each connected to a network controller or switch controller configured to control the switch.
  • a plurality of network switches are connected to and controlled by a central switch controller.
  • the controller sends control data packets to the switch to effect various configurations and routing functions.
  • the controller is configured to incorporate flow control information such as Quality of Service (QoS) policies into data packets (e.g., in packet headers) of a data flow for use in monitoring specific flows as they are processed in a network switch.
  • An exemplary method includes monitoring flow statistics for data flows in a network switch, and comparing the flow statistics to a threshold based on flow control information specific to each data flow.
  • the flow control information is stored in the switch in a flow control queue that includes indications of QoS policies associated with specific data flows.
  • the system 100 is a data center environment including a plurality of servers 102 and client systems 104 configured to communicate over the network 101 using switches 106 that are OpenFlow-capable.
  • the servers 102 , also referred to as hosts or host systems, are high-speed processing devices (e.g., mainframe computers, desktop computers, laptop computers, hand-held devices, embedded computing devices, or the like) including at least one processing circuit (e.g., a computer processor/CPU) capable of reading and executing instructions, and handling interactions with various components of the system 100 .
  • the servers 102 may be storage system servers configured to access and store large amounts of data to one or more data storage systems 108 .
  • the client systems 104 can include a variety of desktop, laptop, general-purpose computer devices, mobile computing devices, and/or networked devices with processing circuits and input/output (I/O) interfaces, such as keys/buttons, a touch screen, audio input, a display device and audio output.
  • the client systems 104 can be linked directly to one or more of the switches 106 or wirelessly through one or more wireless access points 110 .
  • the data storage systems 108 refer to any type of computer readable storage media and may include one or more secondary storage elements, e.g., hard disk drive (HDD), solid-state memory, tape, or a storage subsystem that is internal or external to the servers 102 .
  • Types of data that may be stored in the data storage systems 108 include, for example, various files and databases.
  • the system 100 also includes a network controller 112 that is a central software defined network controller configured to make routing decisions within the network 101 .
  • the network controller 112 establishes one or more secure links 103 to configure the switches 106 and establish communication properties of links 105 between the switches 106 .
  • the network controller 112 can configure the switches 106 to control packet routing paths for data flows between the servers 102 and client systems 104 , as well as one or more firewalls 114 and one or more load balancers 116 .
  • the one or more firewalls 114 restrict access and the flow of network traffic between the network 101 and one or more external networks 118 .
  • the one or more load balancers 116 can distribute workloads across multiple computers, such as between the servers 102 .
  • the servers 102 , client systems 104 , and network controller 112 can include various computer/communication hardware and software technology known in the art, such as one or more processing units or circuits, volatile and non-volatile memory including removable media, power supplies, network interfaces, support circuitry, operating systems, and the like. Although the network controller 112 is depicted as a separate component, it will be understood that network configuration functionality can alternatively be implemented in one or more of the servers 102 or client systems 104 in a standalone or distributed format.
  • the network 101 can include a combination of wireless, wired, and/or fiber optic links.
  • the network 101 as depicted in FIG. 1 represents a simplified example for purposes of explanation.
  • Embodiments of the network 101 can include numerous switches 106 (e.g., hundreds) with dozens of ports and links per switch 106 .
  • the network 101 may support a variety of known communication standards that allow data to be transmitted between the servers 102 , client systems 104 , switches 106 , network controller 112 , firewall(s) 114 , and load balancer(s) 116 .
  • Communication protocols are typically implemented in one or more layers, such as a physical layer (layer-1), a link layer (layer-2), a network layer (layer-3), a transport layer (layer-4), and an application layer (layer-5).
  • the network 101 supports OpenFlow as a layer-2 protocol.
  • the switches 106 can be dedicated OpenFlow switches or OpenFlow-enabled general purpose switches that also support layer-2 and layer-3 Ethernet.
  • FIG. 2 depicts a block diagram of the switch 106 of FIG. 1 that supports OpenFlow.
  • the switch 106 includes switch logic 202 , secure channel 204 , protocol support 205 , flow table 206 , buffers 208 a - 208 n including various queues 209 a - 209 n , and ports 210 a - 210 n .
  • the switch 106 includes various counters or timers 211 , such as timers associated with queues 209 a - 209 n , the flow table 206 and/or flow table entries.
  • the switch logic 202 may be implemented in one or more processing circuits, where a computer readable storage medium is configured to hold instructions for the switch logic 202 , as well as various variables and constants to support operation of the switch 106 .
  • the switch logic 202 forwards packets between the ports 210 a - 210 n as flows defined by the network controller 112 of FIG. 1 .
  • the secure channel 204 connects the switch 106 to the network controller 112 of FIG. 1 .
  • the secure channel 204 allows commands and packets to be communicated between the network controller 112 and the switch 106 via the OpenFlow protocol.
  • the secure channel 204 can be implemented in software as executable instructions stored within the switch 106 .
  • Protocol details to establish a protocol definition for an implementation of OpenFlow and other protocols can be stored in the protocol support 205 .
  • the protocol support 205 may be software that defines one or more supported protocol formats.
  • the protocol support 205 can be embodied in a computer readable storage medium, for instance, flash memory, which is configured to hold instructions for execution by the switch logic 202 . Implementing the protocol support 205 as software enables updates in the field for new versions or variations of protocols and can provide OpenFlow as an enhancement to existing conventional routers or switches.
  • the flow table 206 defines supported connection types associated with particular addresses, virtual local area networks or switch ports, and is used by the switch to process data flows received at the switch.
  • a data flow is a sequence of data packets grouped in some manner, e.g., by source and/or destination, or otherwise defined by selected criteria. Each data flow may be mapped to a port and associated queue based on the flow table 206 . For example, a data flow is defined as all packets that match a particular header format.
  • Each entry 211 in the flow table 206 can include one or more rules 212 , actions 214 , and statistics 216 associated with a particular flow.
  • the rules 212 define each flow and can be determined by packet headers.
  • the actions 214 define how packets are processed.
  • the statistics 216 track information such as the size of each flow (e.g., number of bytes), the number of packets for each flow, and time since the last matching packet of the flow or connection time. Examples of actions include instructions for forwarding packets of a flow to one or more specific ports 210 a - 210 n (e.g., unicast or multicast), encapsulating and forwarding packets of a flow to the network controller 112 of FIG. 1 , and dropping packets of the flow.
  • Entries 211 in the flow table 206 can be added and removed by the network controller 112 of FIG. 1 via the secure channel 204 .
  • the network controller 112 of FIG. 1 can pre-populate the entries 211 in the flow table 206 .
  • the switch 106 can request creation of an entry 211 from the network controller 112 upon receiving a flow without a corresponding entry 211 in the flow table 206 .
  • the buffers 208 a - 208 n provide temporary storage in queues 209 a - 209 n for flows as packets are sent between the ports 210 a - 210 n .
  • the buffers 208 a - 208 n temporarily store packets until the associated ports 210 a - 210 n and links 105 of FIG. 1 are available.
  • Each of the buffers 208 a - 208 n may be associated with a particular port, flow, or sub-network.
  • Each of the buffers 208 a - 208 n is logically separate but need not be physically independent. Accordingly, when one of the buffers 208 a - 208 n is full, it does not adversely impact the performance of the other buffers 208 a - 208 n within the switch 106 .
  • each port 210 a - 210 n is attached to a respective queue 209 a - 209 n .
  • the switch 106 attempts to match the packet by comparing fields (referred to as “match fields”) to corresponding fields in flow entries of each flow table 206 .
  • Exemplary match fields include ingress port and metadata fields, as well as header fields such as those described below in reference to FIG. 3 .
  • matching starts at the first flow table and may continue to additional flow tables.
  • the switch 106 may perform an action based on the switch configuration, e.g., the packet may be forwarded to the controller or dropped. If the packet matches a flow entry in a flow table, the corresponding instruction set is executed based on the flow entry, e.g., the actions field 214 . For example, when a packet is matched to a flow entry including an output action, the packet is forwarded to one of ports 210 a - 210 n specified in the flow entry.
  • forwarding the packet to a port includes mapping packets in a flow to a queue attached to the port. Such flows are treated according to the queue's configuration (e.g., minimum rate).
  • the switch 106 also includes one or more flow control queues 218 that include flow control information for each data flow received by the switch 106 .
  • the flow control information includes data representing a flow control setting or policy, such as a quality of service (QoS) policy.
  • QoS policy defines a level of flow control assigned to a respective data flow. For example, a data flow can be assigned a QoS policy specifying a minimum throughput (e.g., queue transmission rate), maximum queue depth, dropped packet rate, bit error rate, latency, jitter, etc.
  • the flow control queue(s) 218 allow the switch 106 and/or its controller to individually monitor each data flow in the switch 106 and modify flow processing based on a QoS policy specific to each data flow, e.g., by throttling on/off specific flows or alerting the switch controller to allow the flow to be re-routed, dropped or rate-limited.
  • the flow control queue 218 is not limited to the embodiments described herein.
  • the flow control queue 218 may be embodied as any suitable data structure (e.g., a table) that allows the switch 106 and/or the controller to compare individual data flow metrics or statistics to a flow policy and modify flow processing therefrom.
  • flow control queue(s) 218 may be embodied as a separate or additional (physical or virtual) queue, or as space allocated from an existing virtual queue.
  • FIG. 3 depicts an example of an OpenFlow flow switching definition 300 that can be used in embodiments.
  • the OpenFlow flow switching definition 300 is a packet header that defines the flow and includes a number of fields.
  • the switching definition 300 is a flow header that includes up to eleven tuples or fields; however, not all tuples need to be defined depending upon particular flows.
  • the OpenFlow flow switching definition 300 includes tuples for identifying an ingress port 302 , an Ethernet destination address 304 , an Ethernet source address 306 , an Ethernet type 308 , a virtual local area network (VLAN) priority 310 , a VLAN identifier 312 , an Internet protocol (IP) source address 314 , an IP destination address 316 , an IP protocol 318 , a transmission control protocol (TCP)/user datagram protocol (UDP) source port 320 , and a TCP/UDP destination port 322 .
  • the Ethernet destination address 304 may represent a layer-2 Ethernet hardware address or media access control (MAC) address used in legacy switching and routing.
  • the IP destination address 316 may represent a layer-3 IP address used in legacy switching and routing.
  • Flow switching can be defined for any combination of tuples in the OpenFlow flow switching definition 300 , with a particular combination of tuples serving as a key (a data-structure sketch of such a key appears after this list).
  • flows can be defined in a rule 212 of FIG. 2 by exact matching or wildcard matching for aggregated MAC-subnets, IP-subnets, ports, VLAN identifiers, and the like.
  • FIG. 4 depicts a block diagram of a network portion 400 .
  • a server 402 is coupled by a link 404 to a switch 406 .
  • An exemplary server 402 is a server 102 of FIG. 1 , and an exemplary switch 406 is a switch 106 of FIG. 1 .
  • a controller 408 (e.g., a network controller) is linked to the switch 406 by, e.g., a secure link 410 .
  • the controller is a network controller such as network controller 112 of FIG. 1 .
  • functions of the controller 408 can be integrated into other network entities such as the server 402 or server 102 .
  • the switch 406 is connected to the server 402 , which includes at least one port 412 and various logical components such as mode selection logic 414 , wait pulse repetition time 416 , and protocol and mode of operation configuration 418 .
  • Logical components described herein can be implemented in instructions stored in a computer readable storage medium for execution by a processing circuit or in hardware circuitry, and can be configured to send frames such as link initialization frames and data packets.
  • the switch 406 , server 402 and controller 408 may support a number of modes of operation including, but not limited to, Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), Internet Wide Area Remote direct memory access Protocol (iWARP), and Remote direct memory access over Converged Ethernet (RoCE).
  • the switch 406 includes switch logic 420 , flow table 422 , protocol support 424 , port configuration and reset logic 425 and multiple ports, such as port 426 for communicating with the server 402 and port 428 for communicating with other network entities such as other switches or servers.
  • the ports may be physical ports, virtual ports defined by the switch, and/or a virtual port defined by the OpenFlow protocol. Each port may be attached to one or more port queues 427 and 429 .
  • the switch 406 When implemented as an OpenFlow switch, the switch 406 also includes a secure channel 430 for communicating with the network controller 408 on secure link 410 .
  • the switch also includes at least one flow control queue 432 configured to store flow control information such as a QoS policy that is associated with each individual data flow.
  • the switch 406 receives the flow control information from one or more data packets received as part of a specific data flow, and stores the flow control information in the flow control queue 432 .
  • the flow control information is associated with the specific flow.
  • Multiple QoS policies for multiple flows may be stored in the flow control queue 432 (or other designated queue, virtual queue space or other data structure).
  • An exemplary data packet in which the flow control information may be stored is described below in conjunction with FIG. 5 .
  • Capability of the switch to receive and store the flow control information and control packet processing based on the flow information is embodied in suitable logic, such as flow rate logic 434 .
  • This capability may be embodied in any suitable manner, e.g., configured as part of switch logic 420 .
  • the network controller 408 includes an action table 436 that holds port and protocol information for the switch 406 , as well as rules, actions, and statistics for flows through the switch 406 and other switches, such as switches 106 of FIG. 1 .
  • the network controller 408 also includes flow control logic 438 that can be implemented in instructions stored in a computer readable storage medium for execution by a processing circuit or in hardware circuitry.
  • the network controller 408 can manage updates of the flow table 422 in the switch 406 . Based on the updating of the flow table 422 , the port and protocol information in the action table 436 of the network controller 408 is updated to reflect the changes.
  • the network controller also includes suitable logic, e.g., QoS logic 440 , that allows the controller 408 to set flow control or QoS levels, monitor packet routing performance in the switch 406 and modify packet processing (e.g., dropping, rate-limiting or re-routing flows) according to pre-set QoS levels.
  • the controller 408 may also configure and/or modify data packets to include the flow control information, e.g., by inserting a tag in packet headers via a push action.
  • the network controller 408 communicates with the switch 406 via a secure link 410 established using a specified port, such as a port in a physical network controller 112 or a controller implemented in other processors, such as a server 102 or client system 104 .
  • the network controller 408 communicates with the switch 406 to configure and manage the switch, receive events from the switch and send packets out the switch.
  • Various message types can be sent between the switch and the controller to accomplish such functions, including controller-to-switch, asynchronous and symmetric messages.
  • Controller-to-switch messages are initiated by the controller 408 and may or may not require a response from the switch 406 .
  • Features messages are used to request the capabilities of the switch 406 (e.g., upon establishment of the secure link), in response to which the switch 406 should return a features reply that specifies the capabilities of the switch 406 .
  • Configuration messages are sent by the controller 408 to set and query configuration parameters in the switch 406 .
  • the switch 406 responds only to queries from the controller 408 .
  • Modify-State messages are sent by the controller 408 to manage state on the switches, e.g., to add/delete and/or modify flows/groups in the flow table 422 and to set switch port properties.
  • Read-State messages are used by the controller to collect statistics from the switch.
  • Packet-out messages are used by the controller to send packets out of a specified port on the switch, and to forward packets received via Packet-in messages.
  • Packet-out messages contain a full packet or a buffer ID referencing a packet stored in the switch. Packet-out messages also contain a list of actions to be applied in the order they are specified; an empty action list drops the packet.
  • Asynchronous messages are sent without the controller 408 soliciting them from the switch 406 .
  • the switch 406 sends asynchronous messages to the controller 408 to, e.g., denote a packet arrival, switch state change, or error.
  • a packet-in event message may be sent to the controller 408 from the switch 406 for packets that do not have a matching flow entry, and may be sent from the controller 408 to the switch 406 for packets forwarded to the controller 408 .
  • Flow-removed messages are used to indicate that a flow entry has been removed due to, e.g., inactivity or expiration of the flow entry.
  • Port-status messages are sent in response to changes in port configuration state and port status events. Error messages may be used by the switch 406 to notify the controller 408 of problems.
  • Symmetric messages are sent without solicitation, in either direction.
  • Hello messages may be exchanged between the switch 406 and controller 408 upon connection startup.
  • Echo request/reply messages can be sent from either the switch 406 or the controller 408 , and can be used to measure the latency or bandwidth of a controller-switch connection, as well as verify its liveness.
  • Experimenter messages provide a way for the switch 406 to offer additional functionality within the OpenFlow message type space.
  • FIG. 5 depicts an embodiment of a data frame or data packet 500 .
  • the data packet 500 includes a preamble 502 , a start of frame (SOF) delimiter 504 , a header 506 , payload data 508 and a cyclic redundancy check (CRC) checksum 510 .
  • the header 506 includes network address information and protocol information.
  • the frame 500 includes a destination MAC address 512 , a source MAC address 514 and an Ethernet type field 516 .
  • the frame 500 includes flow control or QoS information for the frame 500 and the corresponding data flow.
  • the flow control information may be included in a field inserted in the data packet 500 or in an existing field.
  • the flow control information is included in the packet header 506 .
  • the Ethertype field 516 includes a tag 518 that specifies a type or level of flow control associated with the data flow.
  • a new delimiter may be added to indicate any additional bytes included for the flow control information.
  • An exemplary tag 518 includes a two-bit field indicating a level of QoS. In this example, four levels of QoS may be specified (a tag-parsing sketch appears after this list).
  • exemplary data packets include flow information added to a packet header, e.g., an IPv4 or IPv6 header.
  • the information may be included by adding bits to the header or remapping bits in the header.
  • header fields such as source address (SA), destination address (DA) and type of service (TOS) fields can be set as match fields and include flow control information without requiring additional bits to be added to the header.
  • the frame 500 and the header 506 are not limited to the specific embodiments described herein.
  • the header 506 may include different or additional fields depending on protocols used.
  • the header 506 may include any number of fields as described with reference to the switching definition 300 .
  • An embodiment of a method 600 of monitoring data transmission and/or processing data flows in a network is described with reference to FIG. 6 .
  • the method 600 is described in conjunction with the network portion 400 shown in FIG. 4 , but is not so limited.
  • the method includes the steps represented by blocks 601 - 605 in the order described. However, in some embodiments, not all of the steps are performed and/or the steps are performed in a different order than that described.
  • the controller 408 (or other processor originating a network operation) receives data to be transmitted through a network and generates or configures a group of data packets such as packets 500 .
  • the group of data packets 500 is referred to as a data flow.
  • the network is an OpenFlow capable network.
  • each data packet 500 (or some subset of data packets 500 in the data flow) is configured to include flow control information such as QoS policy information.
  • the flow control information may be included in any suitable part of the data packet 500 , such as in a tag field 518 .
  • An example of the flow control information includes a QoS policy having multiple QoS levels.
  • Each QoS level corresponds to a level of service.
  • Exemplary service levels include queue depth levels, bit rate or throughput levels in the switch 406 and/or in various port queues, latency and jitter.
  • specific latency-sensitive messages may be tagged as exempt from the QoS policy.
  • a queue depth level refers to an amount or percentage of the queue depth that is filled, or the available queue depth. As described herein, “queue depth” refers to the maximum number of packets or maximum amount of data that a queue can hold before packets are dropped. Each QoS level can represent a different queue depth level.
  • Embodiments described herein include originating or configuring packets by the controller 408 at the beginning of the data flow, but are not so limited.
  • the switch 406 and/or controller 408 may receive data packets 500 from a source, and the controller 408 may configure each data packet 500 by inserting flow control information such as a QoS level into the packet header, e.g., by adding bits or reconfiguring an existing field.
  • the switch 406 receives and processes the data packets. Processing includes matching the data packets and performing actions such as forwarding the data packets to appropriate ports.
  • the controller 408 may perform various functions during operation of the switch. For example, the controller 408 manages switches 406 in a network by initializing switches, populating and/or modifying flow tables to manage flows, sending flows to various switches and communicating with switches in general.
  • the switch 406 , upon receiving each data packet, stores the QoS policy or other flow control information in the flow control queue 432 , and the switch 406 and/or the controller 408 monitors flow statistics specified by the QoS policy. For example, the switch 406 and/or the controller 408 monitors port ingress and/or egress queues (e.g., queues 427 and 429 ) and determines statistics such as amount of queue depth filled, ingress rate and egress rate.
  • the switch 406 and/or the controller 408 compares the monitored statistics to service level thresholds established by the QoS policy. For example, the switch 406 compares statistics such as the amount of queue filled, the ingress rate and/or the egress rate to a threshold defined by the QoS policy: the two-bit tag 518 indicates a selected QoS level that is associated with a threshold, and if the statistics exceed the threshold associated with that level, a congestion condition is detected. A congestion condition may be, for example, backpressure on a specific data flow in the switch 406 (a compare-and-remediate sketch appears after this list).
  • the controller 408 takes remedial or corrective action specifically for that data flow.
  • remedial actions include rate-limiting the data flow, turning off the data flow in the switch, and re-routing the data flow.
  • the remedial action includes rate-limiting the data flow at the data flow's source by the controller 408 .
  • the controller 408 throttles on or off specific flows in the switch 406 using the flow information stored in the flow control queue. For example, the controller 408 sends a controller-to-switch message to direct the switch 406 to turn off a specified data flow.
  • Embodiments described herein allow for the monitoring and rate-limiting of specific data flows in a switch.
  • Embodiments also allow a central controller to assign specific flow control levels to each data flow and monitor data flows in multiple switches according to flow control levels stored in the switches and assigned to each flow.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible and non-transitory storage medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
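
For illustration, the eleven-tuple switching definition 300 and the key-based matching of rules 212 can be modeled as a small data structure. The following Python sketch is illustrative only (the class and field names are inventions, not taken from the patent); a field left undefined acts as a wildcard, consistent with the exact/wildcard matching described above.

    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass(frozen=True)
    class FlowKey:
        """Eleven tuples of the OpenFlow flow switching definition 300.
        A field left as None acts as a wildcard and matches anything."""
        ingress_port: Optional[int] = None   # ingress port 302
        eth_dst: Optional[str] = None        # Ethernet destination address 304
        eth_src: Optional[str] = None        # Ethernet source address 306
        eth_type: Optional[int] = None       # Ethernet type 308
        vlan_priority: Optional[int] = None  # VLAN priority 310
        vlan_id: Optional[int] = None        # VLAN identifier 312
        ip_src: Optional[str] = None         # IP source address 314
        ip_dst: Optional[str] = None         # IP destination address 316
        ip_proto: Optional[int] = None       # IP protocol 318
        src_port: Optional[int] = None       # TCP/UDP source port 320
        dst_port: Optional[int] = None       # TCP/UDP destination port 322

        def matches(self, pkt: "FlowKey") -> bool:
            """Exact match on each defined tuple; None tuples are wildcards."""
            return all(
                getattr(self, f.name) is None
                or getattr(self, f.name) == getattr(pkt, f.name)
                for f in fields(self)
            )

    # A rule keyed on an IP pair and protocol, all other tuples wildcarded:
    rule = FlowKey(ip_src="10.0.0.5", ip_dst="10.0.0.9", ip_proto=6)
    pkt = FlowKey(ingress_port=3, eth_type=0x0800, ip_src="10.0.0.5",
                  ip_dst="10.0.0.9", ip_proto=6, src_port=49152, dst_port=80)
    assert rule.matches(pkt)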
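
The two-bit tag 518 and the flow control queue 432 can be sketched in the same style. The bit position of the tag and the threshold values below are assumptions made for illustration; the description specifies only that a two-bit field selects one of four QoS levels, each associated with a threshold.

    # Four QoS levels, selectable by the two-bit tag 518. The threshold
    # values are invented for illustration; the patent leaves them to policy.
    QOS_THRESHOLDS = {
        0: {"max_queue_fill": 0.90},  # least sensitive: tolerate a nearly full queue
        1: {"max_queue_fill": 0.75},
        2: {"max_queue_fill": 0.50},
        3: {"max_queue_fill": 0.25},  # most sensitive: flag congestion early
    }

    def qos_level_from_tag(tag_bits: int) -> int:
        """Extract the two-bit QoS level (0-3) carried in tag 518.
        The bit position is an assumed encoding."""
        return tag_bits & 0b11

    # The flow control queue 432, modeled as a mapping from a flow's key
    # to the QoS level read from the flow's tagged packets.
    flow_control_queue = {}

    def register_flow(flow_key, tag_bits):
        """Store the flow-specific policy indication when a tagged packet
        of a new data flow is received."""
        flow_control_queue[flow_key] = qos_level_from_tag(tag_bits)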
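
Finally, the compare-and-remediate behavior of method 600 reduces to a monitoring loop. The get_stats and rate_limit callbacks below are hypothetical stand-ins for switch statistics collection (e.g., via Read-State messages) and controller-to-switch commands; the sketch continues the two sketches above.

    def monitor_flows(get_stats, rate_limit):
        """Compare each monitored flow's statistics against the threshold
        implied by its QoS level; on congestion, remediate only that flow
        (rate-limiting here; re-routing or throttling off are alternatives)."""
        for flow_key, level in flow_control_queue.items():
            stats = get_stats(flow_key)  # e.g., {"queue_fill": 0.8} from queues 427/429
            if stats["queue_fill"] > QOS_THRESHOLDS[level]["max_queue_fill"]:
                rate_limit(flow_key)     # remedial action specific to this data flow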

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Environmental & Geological Engineering (AREA)

Abstract

A method of monitoring data flow in a network is provided. The method includes: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.

Description

BACKGROUND
Ethernet networks are typically employed in local area networks (LANs) that include a plurality of network switches. A number of communication protocols have been developed and continue to evolve to enhance Ethernet network performance for various environments. For example, an enhancement to Ethernet, called data center bridging (DCB), converged enhanced Ethernet (CEE) or data center Ethernet (DCE), supports the convergence of LANs with storage area networks (SANs). Other protocols that can be used in a data center environment in conjunction with Ethernet include, for instance, Fibre Channel over Ethernet (FCoE), Internet Wide Area Remote direct memory access Protocol (iWARP), and Remote direct memory access over Converged Ethernet (RoCE).
Network congestion occurs when a data flow is received from a source at a faster rate than the flow can be output or routed. Such congestion reduces quality of service, causing packets to be dropped, or queuing and/or transmission of packets to be delayed.
SUMMARY
According to an embodiment, a method of monitoring data flow in a network is provided. The method includes: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
According to an embodiment, a method of processing data flows in a network switch is provided. The method includes: receiving a data flow at the network switch, the data flow including a plurality of data packets, wherein one or more of the data packets includes an indication of a flow control policy specific to the data flow; storing the indication of the flow control policy in a flow control queue in the network switch, the flow control policy associated with a threshold for comparison to flow statistics for the data flow; and processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
According to another embodiment, a computer program product for monitoring data flow in a network is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method including: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
According to yet another embodiment, a computer program product for processing data flows in a network switch is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method including: receiving a data flow at the network switch, the data flow including a plurality of data packets, wherein one or more of the data packets includes an indication of a flow control policy specific to the data flow; storing the indication of the flow control policy in a flow control queue in the network switch, the flow control policy associated with a threshold for comparison to flow statistics for the data flow; and processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
According to still another embodiment, an apparatus for controlling a network switch is provided. The apparatus includes a memory having computer readable computer instructions; and a processor for executing the computer readable instructions. The instructions are for: configuring a data flow including a plurality of data packets by a switch controller, the switch controller configured to control routing through the switch and switch configuration, wherein configuring includes storing an indication of a flow control policy in one or more of the data packets; monitoring a network switch receiving the data flow, wherein monitoring includes determining flow statistics in the switch; determining whether a congestion condition exists for the data flow based on the flow statistics and the flow control policy; and based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
Additional features and advantages are realized through the embodiments described herein. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and the features, refer to the description and to the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 depicts a block diagram of a system including a network with OpenFlow-capable switches that may be implemented according to an embodiment;
FIG. 2 depicts a block diagram of an OpenFlow-capable switch according to an embodiment;
FIG. 3 depicts an example of an OpenFlow flow switching definition that can be used in embodiments;
FIG. 4 depicts an exemplary embodiment of a portion of a network including a network switch and a switch controller;
FIG. 5 depicts an example of a data packet; and
FIG. 6 is a flow diagram showing a method of monitoring and/or processing data flows in a network.
DETAILED DESCRIPTION
Exemplary embodiments relate to monitoring data flows processed in network switches. An embodiment of a network includes one or more switches, each connected to a network controller or switch controller configured to control the switch. In one embodiment, a plurality of network switches are connected to and controlled by a central switch controller. In one embodiment, the controller sends control data packets to the switch to effect various configurations and routing functions. In one embodiment, the controller is configured to incorporate flow control information such as Quality of Service (QoS) policies into data packets (e.g., in packet headers) of a data flow for use in monitoring specific flows as they are processed in a network switch. An exemplary method includes monitoring flow statistics for data flows in a network switch, and comparing the flow statistics to a threshold based on flow control information specific to each data flow. In one embodiment, the flow control information is stored in the switch in a flow control queue that includes indications of QoS policies associated with specific data flows.
Turning now to FIG. 1, an example of a system 100 including a network 101 that supports OpenFlow will now be described in greater detail. In the example depicted in FIG. 1, the system 100 is a data center environment including a plurality of servers 102 and client systems 104 configured to communicate over the network 101 using switches 106 that are OpenFlow-capable. In exemplary embodiments, the servers 102, also referred to as hosts or host systems, are high-speed processing devices (e.g., mainframe computers, desktop computers, laptop computers, hand-held devices, embedded computing devices, or the like) including at least one processing circuit (e.g., a computer processor/CPU) capable of reading and executing instructions, and handling interactions with various components of the system 100. The servers 102 may be storage system servers configured to access and store large amounts of data to one or more data storage systems 108.
The client systems 104 can include a variety of desktop, laptop, general-purpose computer devices, mobile computing devices, and/or networked devices with processing circuits and input/output (I/O) interfaces, such as keys/buttons, a touch screen, audio input, a display device and audio output. The client systems 104 can be linked directly to one or more of the switches 106 or wirelessly through one or more wireless access points 110.
The data storage systems 108 refer to any type of computer readable storage media and may include one or more secondary storage elements, e.g., hard disk drive (HDD), solid-state memory, tape, or a storage subsystem that is internal or external to the servers 102. Types of data that may be stored in the data storage systems 108 include, for example, various files and databases. There may be multiple data storage systems 108 utilized by each of the servers 102, which can be distributed in various locations of the system 100.
The system 100 also includes a network controller 112 that is a central software defined network controller configured to make routing decisions within the network 101. The network controller 112 establishes one or more secure links 103 to configure the switches 106 and establish communication properties of links 105 between the switches 106. For example, the network controller 112 can configure the switches 106 to control packet routing paths for data flows between the servers 102 and client systems 104, as well as one or more firewalls 114 and one or more load balancers 116. The one or more firewalls 114 restrict access and the flow of network traffic between the network 101 and one or more external networks 118. The one or more load balancers 116 can distribute workloads across multiple computers, such as between the servers 102.
The servers 102, client systems 104, and network controller 112 can include various computer/communication hardware and software technology known in the art, such as one or more processing units or circuits, volatile and non-volatile memory including removable media, power supplies, network interfaces, support circuitry, operating systems, and the like. Although the network controller 112 is depicted as a separate component, it will be understood that network configuration functionality can alternatively be implemented in one or more of the servers 102 or client systems 104 in a standalone or distributed format.
The network 101 can include a combination of wireless, wired, and/or fiber optic links. The network 101 as depicted in FIG. 1 represents a simplified example for purposes of explanation. Embodiments of the network 101 can include numerous switches 106 (e.g., hundreds) with dozens of ports and links per switch 106. The network 101 may support a variety of known communication standards that allow data to be transmitted between the servers 102, client systems 104, switches 106, network controller 112, firewall(s) 114, and load balancer(s) 116. Communication protocols are typically implemented in one or more layers, such as a physical layer (layer-1), a link layer (layer-2), a network layer (layer-3), a transport layer (layer-4), and an application layer (layer-5). In exemplary embodiments, the network 101 supports OpenFlow as a layer-2 protocol. The switches 106 can be dedicated OpenFlow switches or OpenFlow-enabled general purpose switches that also support layer-2 and layer-3 Ethernet.
FIG. 2 depicts a block diagram of the switch 106 of FIG. 1 that supports OpenFlow. The switch 106 includes switch logic 202, secure channel 204, protocol support 205, flow table 206, buffers 208 a-208 n including various queues 209 a-209 n, and ports 210 a-210 n. The switch 106 includes various counters or timers 211, such as timers associated with queues 209 a-209 n, the flow table 206 and/or flow table entries. The switch logic 202 may be implemented in one or more processing circuits, where a computer readable storage medium is configured to hold instructions for the switch logic 202, as well as various variables and constants to support operation of the switch 106. The switch logic 202 forwards packets between the ports 210 a-210 n as flows defined by the network controller 112 of FIG. 1.
The secure channel 204 connects the switch 106 to the network controller 112 of FIG. 1. The secure channel 204 allows commands and packets to be communicated between the network controller 112 and the switch 106 via the OpenFlow protocol. The secure channel 204 can be implemented in software as executable instructions stored within the switch 106. Protocol details to establish a protocol definition for an implementation of OpenFlow and other protocols can be stored in the protocol support 205. The protocol support 205 may be software that defines one or more supported protocol formats. The protocol support 205 can be embodied in a computer readable storage medium, for instance, flash memory, which is configured to hold instructions for execution by the switch logic 202. Implementing the protocol support 205 as software enables updates in the field for new versions or variations of protocols and can provide OpenFlow as an enhancement to existing conventional routers or switches.
The flow table 206 defines supported connection types associated with particular addresses, virtual local area networks or switch ports, and is used by the switch to process data flows received at the switch. A data flow is a sequence of data packets grouped in some manner, e.g., by source and/or destination, or otherwise defined by selected criteria. Each data flow may be mapped to a port and associated queue based on the flow table 206. For example, a data flow is defined as all packets that match a particular header format.
Each entry 211 in the flow table 206 can include one or more rules 212, actions 214, and statistics 216 associated with a particular flow. The rules 212 define each flow and can be determined by packet headers. The actions 214 define how packets are processed. The statistics 216 track information such as the size of each flow (e.g., number of bytes), the number of packets for each flow, and time since the last matching packet of the flow or connection time. Examples of actions include instructions for forwarding packets of a flow to one or more specific ports 210a-210n (e.g., unicast or multicast), encapsulating and forwarding packets of a flow to the network controller 112 of FIG. 1, and dropping packets of the flow. Entries 211 in the flow table 206 can be added and removed by the network controller 112 of FIG. 1 via the secure channel 204. The network controller 112 of FIG. 1 can pre-populate the entries 211 in the flow table 206. Additionally, the switch 106 can request creation of an entry 211 from the network controller 112 upon receiving a flow without a corresponding entry 211 in the flow table 206.
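By way of illustration only, the rules/actions/statistics layout of an entry 211 lends itself to a simple record structure. The following minimal Python sketch is not part of the disclosed embodiments; the field and action names are hypothetical:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FlowStats:
        byte_count: int = 0               # size of the flow in bytes (statistics 216)
        packet_count: int = 0             # number of packets seen for the flow
        seconds_since_match: float = 0.0  # time since the last matching packet

    @dataclass
    class FlowEntry:
        rules: Dict[str, str]   # header match fields that define the flow (rules 212)
        actions: List[str]      # how matching packets are processed (actions 214)
        stats: FlowStats = field(default_factory=FlowStats)

    # An entry that forwards a flow destined to 10.0.0.2 out of port 3:
    entry = FlowEntry(rules={"ip_dst": "10.0.0.2"}, actions=["output:3"])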
The buffers 208a-208n provide temporary storage in queues 209a-209n for flows as packets are sent between the ports 210a-210n. In a lossless configuration, rather than dropping packets when network congestion is present, the buffers 208a-208n temporarily store packets until the associated ports 210a-210n and links 105 of FIG. 1 are available. Each of the buffers 208a-208n may be associated with a particular port, flow, or sub-network. Each of the buffers 208a-208n is logically separate but need not be physically independent. Accordingly, when one of the buffers 208a-208n is full, it does not adversely impact the performance of the other buffers 208a-208n within the switch 106.
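A minimal sketch of such lossless, logically separate buffering follows; the class name, capacity value and back-pressure convention are assumptions for illustration, not the disclosed implementation:

    from collections import deque

    class LosslessBuffer:
        def __init__(self, capacity=1024):
            self.queue = deque()      # packets held until the port/link is available
            self.capacity = capacity  # illustrative limit

        def try_enqueue(self, packet) -> bool:
            # In a lossless configuration the packet is held back (back pressure)
            # rather than dropped when the buffer is full.
            if len(self.queue) >= self.capacity:
                return False
            self.queue.append(packet)
            return True

        def dequeue(self):
            # Called when the associated port and link become available.
            return self.queue.popleft() if self.queue else None

    # Each buffer is logically separate: one full buffer does not affect the others.
    buffers = {"port_a": LosslessBuffer(), "port_b": LosslessBuffer()}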
For example, in an OpenFlow switch, each port 210a-210n is attached to a respective queue 209a-209n. In operation, when the switch 106 receives a packet, the switch 106 attempts to match the packet by comparing fields (referred to as “match fields”) to corresponding fields in flow entries of each flow table 206. Exemplary match fields include ingress port and metadata fields, as well as header fields such as those described below in reference to FIG. 3. In one embodiment, matching starts at the first flow table and may continue to additional flow tables.
If no match is found, the switch 106 may perform an action based on the switch configuration, e.g., the packet may be forwarded to the controller or dropped. If the packet matches a flow entry in a flow table, the corresponding instruction set is executed based on the flow entry, e.g., the actions field 214. For example, when a packet is matched to a flow entry including an output action, the packet is forwarded to one of the ports 210a-210n specified in the flow entry.
In one embodiment, forwarding the packet to a port includes mapping packets in a flow to a queue attached to the port. Such flows are treated according to the queue's configuration (e.g., minimum rate).
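The lookup just described can be sketched as a first-match search across the flow tables. In this illustrative Python fragment the table layout and the miss-behavior default are assumptions:

    def match_packet(flow_tables, packet_fields, miss_action="to_controller"):
        # Matching starts at the first flow table and may continue to additional tables.
        for table in flow_tables:
            for match_fields, actions in table:
                # Only the match fields present in the rule must agree (wildcard semantics).
                if all(packet_fields.get(k) == v for k, v in match_fields.items()):
                    return actions          # execute the matching entry's instruction set
        return [miss_action]                # table miss: forward to controller or drop

    # A packet addressed to 10.0.0.2 matches a rule that outputs to port 3.
    tables = [[({"ip_dst": "10.0.0.2"}, ["output:3"])]]
    print(match_packet(tables, {"ip_dst": "10.0.0.2", "ip_src": "10.0.0.1"}))  # ['output:3']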
The switch 106 also includes one or more flow control queues 218 that include flow control information for each data flow received by the switch 106. The flow control information includes data representing a flow control setting or policy, such as a quality of service (QoS) policy. The QoS policy defines a level of flow control assigned to a respective data flow. For example, a data flow can be assigned a QoS policy specifying a minimum throughput (e.g., queue transmission rate), maximum queue depth, dropped packet rate, bit error rate, latency, jitter, etc. The flow control queue(s) 218 allow the switch 106 and/or the network controller 112 to individually monitor each data flow in the switch 106 and modify flow processing based on a QoS policy specific to each data flow, e.g., by throttling specific flows on or off, or by alerting the network controller 112 so that the flow can be re-routed, dropped or rate-limited.
The flow control queue 218 is not limited to the embodiments described herein. The flow control queue 218 may be embodied as any suitable data structure (e.g., a table) that allows the switch 106 and/or the network controller 112 to compare individual data flow metrics or statistics to a flow policy and modify flow processing accordingly. For example, the flow control queue(s) 218 may be embodied as a separate or additional (physical or virtual) queue, or as space allocated from an existing virtual queue.
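As one hypothetical realization, the flow control queue 218 could be modeled as a table keyed by flow identifier, with illustrative policy fields (the names below are not from the disclosure):

    # flow_id -> QoS policy stored for that specific data flow
    flow_control_queue = {}

    def store_policy(flow_id, min_rate_bps=None, max_queue_depth=None, max_latency_ms=None):
        flow_control_queue[flow_id] = {
            "min_rate_bps": min_rate_bps,
            "max_queue_depth": max_queue_depth,
            "max_latency_ms": max_latency_ms,
        }

    def violates_policy(flow_id, measured):
        # Compare measured per-flow statistics against the stored policy.
        policy = flow_control_queue.get(flow_id, {})
        limit = policy.get("max_queue_depth")
        return limit is not None and measured.get("queue_depth", 0) > limit

    store_policy("flow-A", max_queue_depth=100)
    print(violates_policy("flow-A", {"queue_depth": 120}))  # True: depth limit exceeded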
FIG. 3 depicts an example of an OpenFlow flow switching definition 300 that can be used in embodiments. The OpenFlow flow switching definition 300 is a packet header that defines the flow and includes a number of fields. In this example, the switching definition 300 is a flow header that includes up to eleven tuples or fields; however, not all tuples need to be defined depending upon particular flows. In the example of FIG. 3, the OpenFlow flow switching definition 300 includes tuples for identifying an ingress port 302, an Ethernet destination address 304, an Ethernet source address 306, an Ethernet type 308, a virtual local area network (VLAN) priority 310, a VLAN identifier 312, an Internet protocol (IP) source address 314, an IP destination address 316, an IP protocol 318, a transmission control protocol (TCP)/user datagram protocol (UDP) source port 320, and a TCP/UDP destination port 322. The Ethernet destination address 304 may represent a layer-2 Ethernet hardware address or media access control (MAC) address used in legacy switching and routing. The IP destination address 316 may represent a layer-3 IP address used in legacy switching and routing. Flow switching can be defined for any combination of tuples in the OpenFlow flow switching definition 300, with a particular combination of tuples serving as a key. For example, flows can be defined in a rule 212 of FIG. 2 by exact matching or wildcard matching for aggregated MAC-subnets, IP-subnets, ports, VLAN identifiers, and the like.
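For illustration, the eleven tuples and the exact/wildcard matching can be sketched as follows, using None as a wildcard marker (an assumption made here for readability):

    from collections import namedtuple

    # The eleven tuples of the switching definition 300.
    FlowKey = namedtuple("FlowKey", [
        "ingress_port", "eth_dst", "eth_src", "eth_type",
        "vlan_priority", "vlan_id",
        "ip_src", "ip_dst", "ip_proto",
        "l4_src_port", "l4_dst_port",
    ])

    def key_matches(rule, pkt):
        # Exact match on defined tuples; None in the rule wildcards that tuple.
        return all(r is None or r == p for r, p in zip(rule, pkt))

    # A rule keyed only on the TCP/UDP destination port, all other tuples wildcarded:
    rule = FlowKey(*([None] * 10), l4_dst_port=80)
    pkt = FlowKey(1, "aa:bb", "cc:dd", 0x0800, 0, 10,
                  "10.0.0.1", "10.0.0.2", "tcp", 12345, 80)
    print(key_matches(rule, pkt))  # True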
FIG. 4 depicts a block diagram of a network portion 400. A server 402 is coupled by a link 404 to a switch 406. An exemplary server 402 is a server 102 of FIG. 1, and an exemplary switch 406 is a switch 106 of FIG. 1. A controller 408 (e.g., a network controller) is linked to the switch 406 by, e.g., a secure link 410. In one embodiment, in OpenFlow-capable environments, the controller is a network controller such as network controller 112 of FIG. 1. In other embodiments, for non-OpenFlow environments, functions of the controller 408 can be integrated into other network entities such as the server 402 or server 102.
As shown in FIG. 4, the switch 406 is connected to the server 402, which includes at least one port 412 and various logical components such as mode selection logic 414, wait pulse repetition time 416, and protocol and mode of operation configuration 418. Logical components described herein can be implemented in instructions stored in a computer readable storage medium for execution by a processing circuit or in hardware circuitry, and can be configured to send frames such as link initialization frames and data packets. The switch 406, server 402 and controller 408 may support a number of modes of operation including, but not limited to, Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), Internet Wide Area Remote direct memory access Protocol (iWARP), and Remote direct memory access over Converged Ethernet (RoCE).
The switch 406 includes switch logic 420, flow table 422, protocol support 424, port configuration and reset logic 425 and multiple ports, such as port 426 for communicating with the server 402 and port 428 for communicating with other network entities such as other switches or servers. The ports may be physical ports, virtual ports defined by the switch, and/or virtual ports defined by the OpenFlow protocol. Each port may be attached to one or more port queues 427 and 429. When implemented as an OpenFlow switch, the switch 406 also includes a secure channel 430 for communicating with the network controller 408 on the secure link 410.
The switch also includes at least one flow control queue 432 configured to store flow control information, such as a QoS policy, that is associated with each individual data flow. The switch 406 receives the flow control information from one or more data packets received as part of a specific data flow and stores it in the flow control queue 432, where it is associated with that specific flow. Multiple QoS policies for multiple flows may be stored in the flow control queue 432 (or in another designated queue, virtual queue space or other data structure). An exemplary data packet in which the flow control information may be stored is described below in conjunction with FIG. 5.
The capability of the switch 406 to receive and store the flow control information, and to control packet processing based on that information, is embodied in suitable logic, such as flow rate logic 434. This capability may be embodied in any suitable manner, e.g., configured as part of the switch logic 420.
Still referring to FIG. 4, the network controller 408 includes an action table 436 that holds port and protocol information for the switch 406, as well as rules, actions, and statistics for flows through the switch 406 and other switches, such as switches 106 of FIG. 1. The network controller 408 also includes flow control logic 438 that can be implemented in instructions stored in a computer readable storage medium for execution by a processing circuit or in hardware circuitry. The network controller 408 can manage updates of the flow table 422 in the switch 406. Based on the updating of the flow table 422, the port and protocol information in the action table 436 of the network controller 408 is updated to reflect the changes.
The network controller 408 also includes suitable logic, e.g., QoS logic 440, that allows the controller 408 to set flow control or QoS levels, monitor packet routing performance in the switch 406, and modify packet processing (e.g., dropping, rate-limiting or re-routing flows) according to pre-set QoS levels. The controller 408 may also configure and/or modify data packets to include the flow control information, e.g., by inserting a tag into packet headers via a push action.
As indicated above, the network controller 408 communicates with the switch 406 via a secure link 410 established using a specified port, such as a port in a physical network controller 112 or a controller implemented in other processors, such as a server 102 or client system 104. The network controller 408 communicates with the switch 406 to configure and manage the switch, receive events from the switch and send packets out the switch. Various message types can be sent between the switch and the controller to accomplish such functions, including controller-to-switch, asynchronous and symmetric messages.
Controller-to-switch messages are initiated by the controller 408 and may or may not require a response from the switch 406. Features messages are used to request the capabilities of the switch 406 (e.g., upon establishment of the secure link), in response to which the switch 406 should return a features reply that specifies the capabilities of the switch 406. Configuration messages are sent by the controller 408 to set and query configuration parameters in the switch 406. The switch 406 only responds to a query from the controller 408. Modify-State messages are sent by the controller 408 to manage state on the switches, e.g., to add/delete and/or modify flows/groups in the flow table 422 and to set switch port properties. Read-State messages are used by the controller to collect statistics from the switch. Packet-out messages are used by the controller to send packets out of a specified port on the switch, and to forward packets received via Packet-in messages. Packet-out messages contain a full packet or a buffer ID referencing a packet stored in the switch. Packet-out messages also contain a list of actions to be applied in the order they are specified; an empty action list drops the packet.
Asynchronous messages are sent without the controller 408 soliciting them from the switch 406. The switch 406 sends asynchronous messages to the controller 408 to, e.g., denote a packet arrival, switch state change, or error. A packet-in event message may be sent to the controller 408 from the switch 406 for packets that do not have a matching flow entry, and may be sent from the controller 408 to the switch 406 for packets forwarded to the controller 408. Flow-removed messages are used to indicate that a flow entry has been removed due to, e.g., inactivity or expiration of the flow entry. Port-status messages are sent in response to changes in port configuration state and port status events. Error messages may be used by the switch 406 to notify the controller 408 of problems.
Symmetric messages are sent without solicitation, in either direction. Hello messages may be exchanged between the switch 406 and controller 408 upon connection startup. Echo request/reply messages can be sent from either the switch 406 or the controller 408, and can be used to measure the latency or bandwidth of a controller-switch connection, as well as verify its liveness. Experimenter messages provide a way for the switch 406 to offer additional functionality within the OpenFlow message type space.
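For illustration, the three message classes described above could be grouped as in the following sketch; the lowercase type names are shorthand for the messages just listed, not OpenFlow wire-format identifiers:

    from enum import Enum

    class MsgClass(Enum):
        CONTROLLER_TO_SWITCH = 1  # features, configuration, modify-state, read-state, packet-out
        ASYNCHRONOUS = 2          # packet-in, flow-removed, port-status, error
        SYMMETRIC = 3             # hello, echo request/reply, experimenter

    def classify(msg_type):
        controller_to_switch = {"features", "configuration", "modify_state",
                                "read_state", "packet_out"}
        asynchronous = {"packet_in", "flow_removed", "port_status", "error"}
        if msg_type in controller_to_switch:
            return MsgClass.CONTROLLER_TO_SWITCH
        if msg_type in asynchronous:
            return MsgClass.ASYNCHRONOUS
        return MsgClass.SYMMETRIC  # hello, echo and experimenter flow either way

    print(classify("packet_in"))  # MsgClass.ASYNCHRONOUS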
FIG. 5 depicts an embodiment of a data frame or data packet 500. The data packet 500 includes a preamble 502, a start of frame (SOF) delimiter 504, a header 506, payload data 508 and a cyclic redundancy check (CRC) checksum 510. The header 506 includes network address information and protocol information. For example, the frame 500 includes a destination MAC address 512, a source MAC address 514 and an Ethernet type field 516.
In one embodiment, the frame 500 includes flow control or QoS information for the frame 500 and the corresponding data flow. The flow control information may be included in a field inserted into the data packet 500 or in an existing field. In one embodiment, the flow control information is included in the packet header 506. For example, the Ethernet type field 516 includes a tag 518 that specifies a type or level of flow control associated with the data flow. A new delimiter may be added to indicate any additional bytes included for the flow control information.
An exemplary tag 518 includes a two-bit field indicating a level of QoS. In this example, four levels of QoS may be specified.
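A sketch of packing and unpacking such a two-bit tag follows; placing the tag in the two low-order bits of the field is an assumption for illustration:

    QOS_TAG_MASK = 0b11  # two bits encode four pre-set QoS levels (0-3)

    def set_qos_tag(field_value, level):
        if not 0 <= level <= 3:
            raise ValueError("a two-bit field encodes four QoS levels (0-3)")
        # Remap the (assumed) low two bits of the field to carry the level.
        return (field_value & ~QOS_TAG_MASK) | level

    def get_qos_tag(field_value):
        return field_value & QOS_TAG_MASK

    assert get_qos_tag(set_qos_tag(0x00, 2)) == 2  # round-trips QoS level 2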
Other exemplary data packets include flow control information added to a packet header, e.g., an IPv4 or IPv6 header. The information may be included by adding bits to the header or by remapping bits in the header. For example, header fields such as the source address (SA), destination address (DA) and type of service (TOS) fields can be set as match fields and carry flow control information without requiring additional bits to be added to the header.
It is noted that not all of the data packets in a data flow need to be tagged with flow control information. In addition, specific data packets can be tagged with a different flow policy, e.g., latency sensitive messages can be tagged as exempt.
It is noted that the frame 500 and the header 506, and their respective fields, are not limited to the specific embodiments described herein. For example, the header 506 may include different or additional fields depending on protocols used. In one example, the header 506 may include any number of fields as described with reference to the switching definition 300.
An embodiment of a method 600 of monitoring data transmission and/or processing data flows in a network is described with reference to FIG. 6. The method 600 is described in conjunction with the network portion 400 shown in FIG. 4, but is not so limited. In one embodiment, the method includes the steps represented by blocks 601-605 in the order described. However, in some embodiments, not all of the steps are performed and/or the steps are performed in a different order than that described.
At block 601, the controller 408 (or other processor originating a network operation) receives data to be transmitted through a network and generates or configures a group of data packets such as packets 500. The group of data packets 500 is referred to as a data flow. In one embodiment, the network is an OpenFlow capable network.
For example, each data packet 500 (or some subset of data packets 500 in the data flow) is configured to include flow control information such as QoS policy information. The flow control information may be included in any suitable part of the data packet 500, such as in a tag field 518.
An example of the flow control information includes a QoS policy having multiple QoS levels. Each QoS level corresponds to a level of service. Exemplary service levels include queue depth levels, bit rate or throughput levels in the switch 406 and/or in various port queues, latency and jitter. In some embodiments, specific latency sensitive messages may be tagged as exempt from the QoS policy.
A queue depth level refers to an amount or percentage of the queue depth that is filled, or the available queue depth. As described herein, “queue depth” refers to the maximum number of packets or maximum amount of data that a queue can hold before packets are dropped. Each QoS level can represent a different queue depth level.
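For instance, the four levels encodable in a two-bit tag might map to queue depth thresholds as in the following sketch; the fractions are hypothetical:

    # QoS level -> fraction of the queue depth that may be filled before the
    # threshold is exceeded (illustrative values only).
    QUEUE_DEPTH_THRESHOLDS = {0: 0.25, 1: 0.50, 2: 0.75, 3: 0.90}

    def depth_threshold(qos_level, queue_depth_pkts):
        # Packets allowed in the queue before this level's threshold is exceeded.
        return int(QUEUE_DEPTH_THRESHOLDS[qos_level] * queue_depth_pkts)

    print(depth_threshold(2, 1000))  # 750 packets for a 1000-packet queue at level 2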
Embodiments described herein include originating or configuring packets by the controller 408 at the beginning of the data flow, but are not so limited. For example, the switch 406 and/or controller 408 may receive data packets 500 from a source, and the controller 408 may configure each data packet 500 by inserting flow control information such as a QoS level into the packet header, e.g., by adding bits or reconfiguring an existing field.
At block 602, the switch 406 receives and processes the data packets. Processing includes matching the data packets and performing actions such as forwarding the data packets to appropriate ports. The controller 408 may perform various functions during operation of the switch. For example, the controller 408 manages switches 406 in a network by initializing switches, populating and/or modifying flow tables to manage flows, sending flows to various switches and communicating with switches in general.
At block 603, upon receiving each data packet, the switch 406 stores the QoS policy or other flow control information in the flow control queue 432 and the switch 406 and/or the controller 408 monitors flow statistics specified by the QoS policy. For example, the switch 406 and/or the controller 408 monitors port ingress and/or egress queues (e.g., queues 427 and 429) and determines statistics such as amount of queue depth filled, ingress rate and egress rate.
At block 604, the switch 406 and/or the controller 408 compares the monitored statistics to service level thresholds established by the QoS policy. For example, the switch 406 compares statistics such as the amount of queue filled, the ingress rate and/or the egress rate to a threshold defined by the QoS policy. For example, the two-bit tag 518 indicates a selected QoS level that is associated with a threshold. If the statistics exceed the threshold associated with that level, a congestion condition is detected. For example, a congestion condition may be backpressure on a specific data flow in the switch 406.
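The comparison of block 604 can be sketched as a simple threshold check over the monitored statistics; the metric names are illustrative:

    def congestion_detected(stats, thresholds):
        # stats holds measured values (e.g., queue_fill, ingress_rate, egress_rate);
        # thresholds holds the limits defined by the flow's pre-set QoS level.
        return any(metric in thresholds and value > thresholds[metric]
                   for metric, value in stats.items())

    # A port queue 80% full against a 75% limit indicates a congestion condition.
    print(congestion_detected({"queue_fill": 0.80}, {"queue_fill": 0.75}))  # True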
At block 605, if a congestion condition is detected for a particular data flow, the controller 408 takes remedial or corrective action specifically for that data flow. Exemplary remedial actions include rate-limiting the data flow, turning off the data flow in the switch, and re-routing the data flow. In one embodiment, the remedial action includes rate-limiting the data flow at the data flow's source by the controller 408. In another embodiment, the controller 408 throttles specific flows in the switch 406 on or off using the flow information stored in the flow control queue. For example, the controller 408 sends a controller-to-switch message directing the switch 406 to turn off a specified data flow.
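A sketch of dispatching the remedial actions of block 605 follows, where switch_cmd stands in for the controller-to-switch message sent over the secure channel; the verbs, rate value and port number are hypothetical:

    def remediate(flow_id, action, switch_cmd):
        # Apply a remedial action specific to one data flow.
        if action == "rate_limit":
            switch_cmd("rate_limit", flow_id, rate_bps=1_000_000)  # illustrative cap
        elif action == "turn_off":
            switch_cmd("throttle_off", flow_id)  # stop the flow in the switch
        elif action == "re_route":
            switch_cmd("modify_flow_entry", flow_id, new_output_port=2)  # steer around congestion
        else:
            raise ValueError("unknown remedial action: " + action)

    # Example: log the command instead of sending it over the secure channel.
    remediate("flow-A", "turn_off", lambda verb, fid, **kw: print(verb, fid, kw))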
Technical effects include the ability to implement rate limiting in a network controller. In addition, the embodiments described herein allow for the monitoring and rate-limiting of specific data flows in a switch. Embodiments also allow a central controller to assign specific flow control levels to each data flow and monitor data flows in multiple switches according to flow control levels stored in the switches and assigned to each flow.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible and non-transitory storage medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
The flow diagrams depicted herein are just one example. There may be many variations to this diagram or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.

Claims (24)

What is claimed is:
1. A method of monitoring data flow in a network, comprising:
configuring a data flow including a plurality of data packets by a switch controller, the data flow received from a source and configured to be routed through one or more switches in a network to a destination, the switch controller configured to control routing of the data flow through a network switch, wherein configuring includes associating the data flow with a quality of service (QoS) flow control policy selected from a plurality of QoS flow control policies, each QoS flow control policy associated with one of a plurality of pre-set flow control levels, and storing an indication of a selected pre-set flow control level in one or more of the data packets of the data flow, each pre-set flow control level defining a threshold value of a flow statistic, wherein a value of the flow statistic exceeding the threshold value is indicative of a congestion condition;
storing the selected QoS flow control policy in a flow control queue in the network switch that is separate from ingress and egress queues and from a flow table of the switch, the flow control queue associating the selected QoS flow control policy with the data flow, the flow control queue configured to associate one or more of the plurality of QoS flow control policies with each specific data flow received by the switch;
monitoring the network switch receiving the data flow, wherein monitoring includes determining the value of the flow statistic in the switch associated with the data flow;
determining whether a congestion condition exists for the data flow based on comparing the value of the flow statistic and the threshold value defined by the pre-set flow control level associated with the selected QoS flow control policy; and
based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
2. The method of claim 1, wherein monitoring includes determining flow statistics for a plurality of data flows in the network switch.
3. The method of claim 2, wherein the remedial action is performed for one or more data flows whose flow statistics exceed the threshold.
4. The method of claim 1, wherein the flow statistic includes at least one of a queue depth statistic, an ingress queue flow rate and an egress queue flow rate.
5. The method of claim 1, wherein performing the remedial action includes at least one of rate-limiting the data flow, turning off the data flow in the switch, and re-routing the data flow.
6. The method of claim 1, wherein the indication of the selected pre-set flow control level is stored as data in a header of the one or more data packets.
7. A method of processing data flows in a network switch, comprising:
receiving a data flow at the network switch, the data flow including a plurality of data packets, the data flow received from a source and configured to be routed through one or more switches in a network to a destination, wherein one or more of the data packets includes an indication of a selected pre-set flow control level associated with a quality of service (QoS) flow control policy selected from a plurality of QoS flow control policies, each QoS flow control policy associated with one of a plurality of pre-set flow control levels, each pre-set flow control level defining a threshold value of a flow statistic, wherein a value of the flow statistic exceeding the threshold value is indicative of a congestion condition, the indication inserted into the one or more data packets by a switch controller;
storing the indication of the QoS flow control policy in a flow control queue in the network switch that is separate from ingress and egress queues and from a flow table of the switch, the flow control queue associating the selected QoS flow control policy with the data flow, the flow control queue configured to associate one or more of the plurality of QoS flow control policies with each specific data flow received by the switch; and
processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
8. The method of claim 7, further comprising, based on the switch controller determining that the specified flow statistic for the data flow exceeds the threshold, receiving an instruction to perform a remedial action on the data flow.
9. The method of claim 8, wherein the remedial action includes at least one of adjusting a flow rate of the data flow, re-routing the data flow, and turning off the data flow.
10. The method of claim 7, wherein storing includes extracting the indication of the selected QoS flow control policy from a header of the one or more data packets and storing the indication with an identification of the data flow.
11. The method of claim 7, wherein the flow control queue is configured to store a plurality of QoS flow control policy indications, each QoS flow control policy indication associated with an identification of a different data flow being processed in the network switch.
12. A computer program product for monitoring data flow in a network, the computer program product comprising:
a non-transitory tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
configuring a data flow including a plurality of data packets by a switch controller, the data flow received from a source and configured to be routed through one or more switches in a network to a destination, the switch controller configured to control routing of the data flow through a network switch, wherein configuring includes associating the data flow with a quality of service (QoS) flow control policy selected from a plurality of QoS flow control policies, each QoS flow control policy associated with one of a plurality of pre-set flow control levels, and storing an indication of a selected pre-set flow control level in one or more of the data packets of the data flow, each pre-set flow control level defining a threshold value of a flow statistic, wherein a value of the flow statistic exceeding the threshold value is indicative of a congestion condition;
storing the selected QoS flow control policy in a flow control queue in the network switch that is separate from ingress and egress queues and from a flow table of the switch, the flow control queue associating the selected QoS flow control policy with the data flow, the flow control queue configured to associate one or more of the plurality of QoS flow control policies with each specific data flow received by the switch controller;
monitoring the network switch receiving the data flow, wherein monitoring includes determining the value of the flow statistic in the switch;
determining whether a congestion condition exists for the data flow based on comparing the value of the flow statistic and the threshold value defined by the pre-set flow control level associated with the selected QoS flow control policy; and
based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
13. The computer program product of claim 12, wherein the flow control queue includes an indication of a QoS flow control policy specific to each data flow, the indication retrieved from data in a header of the one or more data packets.
14. The computer program product of claim 13, wherein the remedial action is performed for one or more data flows whose flow statistics exceed the threshold.
15. The computer program product of claim 12, wherein the network switch is an OpenFlow switch and the switch controller is an OpenFlow switch controller.
16. The computer program product of claim 12, wherein performing the remedial action includes at least one of rate-limiting the data flow, turning off the data flow in the switch, and re-routing the data flow.
17. A computer program product for processing data flows in a network switch, the computer program product comprising:
a non-transitory tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
receiving a data flow at the network switch, the data flow including a plurality of data packets, the data flow received from a source and configured to be routed through one or more switches in a network to a destination, wherein one or more of the data packets includes an indication of a selected pre-set flow control level associated with a quality of service (QoS) flow control policy selected from a plurality of QoS flow control policies, each QoS flow control policy associated with one of a plurality of pre-set flow control levels, each pre-set flow control level defining a threshold value of a flow statistic, wherein a value of the flow statistic exceeding the threshold value is indicative of a congestion condition, the indication inserted into the one or more data packets by a switch controller;
storing the indication of the QoS flow control policy in a flow control queue in the network switch that is separate from ingress and egress queues and from a flow table of the switch, the flow control queue associating the selected QoS flow control policy with the data flow, the flow control queue configured to associate one or more of the plurality of QoS flow control policies with each specific data flow received by the switch; and
processing the data flow at the network switch according to instructions associated with the data flow and configured by a switch controller.
18. The computer program product of claim 17, further comprising, based on the switch controller determining that the specified flow statistic for the data flow exceeds the threshold, receiving an instruction to perform a remedial action on the data flow.
19. The computer program product of claim 18, wherein the remedial action includes at least one of adjusting a flow rate of the data flow, re-routing the data flow, and turning off the data flow.
20. The computer program product of claim 17, wherein each QoS flow control policy indication is associated with an identification of a different data flow being processed in the network switch.
21. An apparatus for controlling a network switch, comprising:
a memory having computer readable computer instructions; and
a processor for executing the computer readable instructions, the instructions including:
configuring a data flow including a plurality of data packets by a switch controller, the data flow received from a source and configured to be routed through one or more switches in a network to a destination, the switch controller configured to control routing of the data flow through a network switch, wherein configuring includes associating the data flow with a quality of service (QoS) flow control policy selected from a plurality of QoS flow control policies, each QoS flow control policy associated with one of a plurality of pre-set flow control levels, and storing an indication of a selected pre-set flow control level in one or more of the data packets of the data flow, each pre-set flow control level defining a threshold value of a flow statistic, wherein a value of the flow statistic exceeding the threshold value is indicative of a congestion condition;
storing the selected QoS flow control policy in a flow control queue in the network switch that is separate from ingress and egress queues and from a flow table of the switch, the flow control queue associating the selected QoS flow control policy with the data flow, the flow control queue configured to associate one or more of the plurality of QoS flow control policies with each specific data flow received by the switch controller;
monitoring the network switch receiving the data flow, wherein monitoring includes determining the value of the flow statistic in the switch associated with the data flow;
determining whether a congestion condition exists for the data flow based on comparing the value of the flow statistic and the threshold value defined by the pre-set flow control level associated with the selected QoS flow control policy; and
based on determining that the congestion condition exists for the data flow, performing a remedial action specific to the data flow to address the congestion condition.
22. The apparatus of claim 21, wherein the flow control queue includes an indication of a QoS flow control policy specific to each of the plurality of data flows.
23. The apparatus of claim 22, wherein the remedial action is performed for one or more data flows whose flow statistics exceed the threshold.
24. The apparatus of claim 21, wherein performing the remedial action includes at least one of rate-limiting the data flow, turning off the data flow in the switch, and re-routing the data flow.
US13/833,886, filed 2013-03-15 (priority date 2013-03-15): Network per-flow rate limiting. Active, anticipated expiration 2033-09-23; granted as US9769074B2.

Priority Applications (1)

US13/833,886 (filed 2013-03-15, priority date 2013-03-15): Network per-flow rate limiting, granted as US9769074B2.

Publications (2)

US20140269319A1, published 2014-09-18
US9769074B2, granted 2017-09-19

Family ID: 51526632

US7978607B1 (en) 2008-08-29 2011-07-12 Brocade Communications Systems, Inc. Source-based congestion detection and control
US20110179415A1 (en) 2010-01-20 2011-07-21 International Business Machines Corporation Enablement and acceleration of live and near-live migration of virtual machines and their associated storage across networks
US20110206025A1 (en) 2008-08-01 2011-08-25 Telefonica, S.A. Access point which sends geographical positioning information from the access point to mobile terminals and mobile terminal which receives the information and estimates the position thereof based on said information
JP2011166700A (en) 2010-02-15 2011-08-25 Nec Corp Network system, and packet speculative transfer method
US20110211834A1 (en) 2009-08-13 2011-09-01 New Jersey Institute Of Technology Scheduling wdm pon with tunable lasers with different tuning times
WO2011118575A1 (en) 2010-03-24 2011-09-29 日本電気株式会社 Communication system, control device and traffic monitoring method
US20110242966A1 (en) 2008-12-18 2011-10-06 Alcatel Lucent Method And Apparatus For Delivering Error-Critical Traffic Through A Packet-Switched Network
US20110256865A1 (en) 2010-04-15 2011-10-20 Zulfiquar Sayeed User Equipment Adjustment of Uplink Satellite Communications
US20110261696A1 (en) 2010-04-22 2011-10-27 International Business Machines Corporation Network data congestion management probe system
US20110261831A1 (en) 2010-04-27 2011-10-27 Puneet Sharma Dynamic Priority Queue Level Assignment for a Network Flow
US20110271007A1 (en) 2010-04-28 2011-11-03 Futurewei Technologies, Inc. System and Method for a Context Layer Switch
US20110273988A1 (en) 2010-05-10 2011-11-10 Jean Tourrilhes Distributing decision making in a centralized flow routing system
US20110283016A1 (en) 2009-12-17 2011-11-17 Nec Corporation Load distribution system, load distribution method, apparatuses constituting load distribution system, and program
US20110286324A1 (en) 2010-05-19 2011-11-24 Elisa Bellagamba Link Failure Detection and Traffic Redirection in an Openflow Network
US8069139B2 (en) 2006-03-30 2011-11-29 International Business Machines Corporation Transitioning of database service responsibility responsive to server failure in a partially clustered computing environment
US20110292830A1 (en) 2010-05-25 2011-12-01 Telefonaktiebolaget L M Ericsson (Publ) Method for enhancing table lookups with exact and wildcards matching for parallel environments
US20110295996A1 (en) 2010-05-28 2011-12-01 At&T Intellectual Property I, L.P. Methods to improve overload protection for a home subscriber server (hss)
US20110299389A1 (en) 2009-12-04 2011-12-08 Telcordia Technologies, Inc. Real Time Monitoring, Onset Detection And Control Of Congestive Phase-Transitions in Communication Networks
US20110305288A1 (en) 2010-06-11 2011-12-15 Yong Liu Method and Apparatus for Determining Channel Bandwidth
US20110305167A1 (en) 2009-12-28 2011-12-15 Nec Corporation Communication system, and method of collecting port information
US8082466B2 (en) 2008-10-30 2011-12-20 Hitachi, Ltd. Storage device, and data path failover method of internal network of storage controller
CN102291389A (en) 2011-07-14 2011-12-21 南京邮电大学 Cross-layer congestion control method in satellite network
US20120008958A1 (en) 2009-03-20 2012-01-12 Telefonaktiebolaget Lm Ericsson (Publ) Method and devices for automatic tuning in wdp-pon
US20120014693A1 (en) 2010-07-13 2012-01-19 Futurewei Technologies, Inc. Passive Optical Network with Adaptive Filters for Upstream Transmission Management
US20120014284A1 (en) 2010-07-19 2012-01-19 Raghuraman Ranganathan Virtualized shared protection capacity
US20120023231A1 (en) 2009-10-23 2012-01-26 Nec Corporation Network system, control method for the same, and controller
US20120020361A1 (en) 2010-01-05 2012-01-26 Nec Corporation Switch network system, controller, and control method
US20120030306A1 (en) 2009-04-28 2012-02-02 Nobuharu Kami Rapid movement system for virtual devices in a computing system, management device, and method and program therefor
US20120054079A1 (en) 2009-09-30 2012-03-01 Nec Corporation Charging system and charging method
US20120063316A1 (en) 2010-09-10 2012-03-15 Brocade Communications Systems, Inc. Congestion notification across multiple layer-2 domains
WO2012056816A1 (en) 2010-10-28 2012-05-03 日本電気株式会社 Network system and method for controlling communication traffic
US20120163175A1 (en) 2010-12-23 2012-06-28 Brocade Communications Systems, Inc. Ingress rate limiting
CN102546385A (en) 2010-12-15 2012-07-04 丛林网络公司 Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US20120170477A1 (en) 2010-01-14 2012-07-05 Nec Corporation Computer, communication system, network connection switching method, and program
CN102611612A (en) 2010-12-21 2012-07-25 微软公司 Multi-path communications in a data center environment
US20120195201A1 (en) * 2011-02-02 2012-08-02 Alaxala Networks Corporation Bandwidth policing apparatus and packet relay apparatus
US20120201140A1 (en) 2009-10-06 2012-08-09 Kazuya Suzuki Network system, controller, method, and program
US20120207175A1 (en) 2011-02-11 2012-08-16 Cisco Technology, Inc. Dynamic load balancing for port groups
US20120221887A1 (en) 2010-05-20 2012-08-30 International Business Machines Corporation Migrating Virtual Machines Among Networked Servers Upon Detection Of Degrading Network Link Operation
US20120287782A1 (en) 2011-05-12 2012-11-15 Microsoft Corporation Programmable and high performance switch for data center networks
US20130003735A1 (en) 2011-06-28 2013-01-03 Chao H Jonathan Dynamically provisioning middleboxes
US20130010600A1 (en) 2011-07-08 2013-01-10 Telefonaktiebolaget L M Ericsson (Publ) Controller Driven OAM for OpenFlow
US20130054761A1 (en) 2011-08-29 2013-02-28 Telefonaktiebolaget L M Ericsson (Publ) Implementing a 3G Packet Core in a Cloud Computer with Openflow Data and Control Planes
US20130058345A1 (en) 2011-09-01 2013-03-07 Fujitsu Limited Apparatus and Method for Establishing Tunnels Between Nodes in a Communication Network
US8429282B1 (en) 2011-03-22 2013-04-23 Amazon Technologies, Inc. System and method for avoiding system overload by maintaining an ideal request rate
US20130124683A1 (en) 2010-07-20 2013-05-16 Sharp Kabushiki Kaisha Data distribution system, data distribution method, data relay device on distribution side, and data relay device on reception side
US20130144995A1 (en) 2010-09-03 2013-06-06 Shuji Ishii Control apparatus, a communication system, a communication method and a recording medium having recorded thereon a communication program
US20130159415A1 (en) 2010-06-07 2013-06-20 Kansai University Congestion control system, congestion control method, and communication unit
US20130162038A1 (en) 2010-04-22 2013-06-27 Siemens Aktiengesellschaft Apparatus and method for stabilizing an electrical power import
US20130176850A1 (en) 2012-01-09 2013-07-11 Telefonaktiebolaget L M Ericcson (Publ) Expanding network functionalities for openflow based split-architecture networks
US20130205002A1 (en) 2012-02-02 2013-08-08 Cisco Technology, Inc. Wide area network optimization
US20130212578A1 (en) 2012-02-14 2013-08-15 Vipin Garg Optimizing traffic load in a communications network
US20130250770A1 (en) 2012-03-22 2013-09-26 Futurewei Technologies, Inc. Supporting Software Defined Networking with Application Layer Traffic Optimization
US20130258847A1 (en) * 2012-04-03 2013-10-03 Telefonaktiebolaget L M Ericsson (Publ) Congestion Control and Resource Allocation in Split Architecture Networks
US20130258843A1 (en) 2012-03-29 2013-10-03 Fujitsu Limited Network system and apparatis
US20130266317A1 (en) 1999-07-29 2013-10-10 Rockstar Consortium Us Lp Optical switch and protocols for use therewith
US20130268686A1 (en) 2012-03-14 2013-10-10 Huawei Technologies Co., Ltd. Method, switch, server and system for sending connection establishment request
US20130294236A1 (en) 2012-05-04 2013-11-07 Neda Beheshti-Zavareh Congestion control in packet data networking
US20140006630A1 (en) 2012-06-28 2014-01-02 Yigang Cai Session initiation protocol (sip) for message throttling
US20140010235A1 (en) 2011-03-18 2014-01-09 Nec Corporation Network system and switching method thereof
US8630307B2 (en) 2011-09-13 2014-01-14 Qualcomm Incorporated Methods and apparatus for traffic contention resource allocation
US20140016476A1 (en) 2011-03-24 2014-01-16 Nec Europe Ltd. Method for operating a flow-based switching system and switching system
US20140016647A1 (en) 2011-03-29 2014-01-16 Hirokazu Yoshida Network system and vlan tag data acquiring method
US20140040526A1 (en) 2012-07-31 2014-02-06 Bruce J. Chang Coherent data forwarding when link congestion occurs in a multi-node coherent system
US20140092907A1 (en) 2012-08-14 2014-04-03 Vmware, Inc. Method and system for virtual and physical network integration
US20140108632A1 (en) 2012-10-15 2014-04-17 Cisco Technology, Inc. System and method for efficient use of flow table space in a network environment
US20140119193A1 (en) 2012-10-30 2014-05-01 Telefonaktiebolget L M Ericsson (Publ) Method for dynamic load balancing of network flows on lag interfaces
US20140126907A1 (en) 2012-11-05 2014-05-08 Broadcom Corporation Data rate control in an optical line terminal
US8724470B2 (en) 2009-10-01 2014-05-13 Lg Electronics Inc. Method of controlling data flow in wireless communication system
US20140169189A1 (en) 2012-12-17 2014-06-19 Broadcom Corporation Network Status Mapping
US20140178066A1 (en) 2012-11-07 2014-06-26 Nec Laboratories America, Inc. QoS-aware united control protocol for optical burst switching in software defined optical netoworks
US20140192646A1 (en) 2011-03-29 2014-07-10 Nec Europe Ltd. User traffic accountability under congestion in flow-based multi-layer switches
US20140198648A1 (en) * 2013-01-15 2014-07-17 Cisco Technology, Inc. Identification of data flows based on actions of quality of service policies
US20140258774A1 (en) 2004-04-22 2014-09-11 At&T Intellectual Property I, L.P. Methods and systems for automatically tracking the rerouting of logical circuit data in a data network
US20140301204A1 (en) 2011-11-10 2014-10-09 Ntt Docomo, Inc. Mobile communication method, policy and charging rule server apparatus, and mobile management node
US8953453B1 (en) 2011-12-15 2015-02-10 Amazon Technologies, Inc. System and method for throttling service requests using work-based tokens

Patent Citations (170)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694390A (en) 1993-09-06 1997-12-02 Kabushiki Kaisha Toshiba Method and apparatus for controlling congestion in communication network
US6094418A (en) 1996-03-07 2000-07-25 Fujitsu Limited Feedback control method and device in ATM switching system
US5905711A (en) 1996-03-28 1999-05-18 Lucent Technologies Inc. Method and apparatus for controlling data transfer rates using marking threshold in asynchronous transfer mode networks
GB2313268A (en) 1996-05-17 1997-11-19 Motorola Ltd Transmitting data with error correction
US6504821B2 (en) 1996-06-12 2003-01-07 At&T Corp. Flexible bandwidth negotiation for the block transfer of data
US5966546A (en) 1996-09-12 1999-10-12 Cabletron Systems, Inc. Method and apparatus for performing TX raw cell status report frequency and interrupt frequency mitigation in a network node
US6208619B1 (en) 1997-03-27 2001-03-27 Kabushiki Kaisha Toshiba Packet data flow control method and device
US6356944B1 (en) 1997-03-31 2002-03-12 Compaq Information Technologies Group, L.P. System and method for increasing write performance in a fibre channel environment
EP0876023A1 (en) 1997-04-30 1998-11-04 Sony Corporation Transmitter and transmitting method, receiver and receiving method, and transceiver and transmitting/receiving method
WO1999030462A2 (en) 1997-12-12 1999-06-17 3Com Corporation A forward error correction system for packet based real-time media
US20020181484A1 (en) 1998-04-01 2002-12-05 Takeshi Aimoto Packet switch and switching method for switching variable length packets
US6813246B1 (en) 1998-08-06 2004-11-02 Alcatel Routing calls with overflows in a private network
US20010056459A1 (en) 1998-08-31 2001-12-27 Yoshitoshi Kurose Service assignment apparatus
US6795399B1 (en) 1998-11-24 2004-09-21 Lucent Technologies Inc. Link capacity computation methods and apparatus for designing IP networks with performance guarantees
US6504818B1 (en) 1998-12-03 2003-01-07 At&T Corp. Fair share egress queuing scheme for data networks
US20130266317A1 (en) 1999-07-29 2013-10-10 Rockstar Consortium Us Lp Optical switch and protocols for use therewith
US20020073354A1 (en) 2000-07-28 2002-06-13 International Business Machines Corporation Cascading failover of a data management application for shared disk file systems in loosely coupled node clusters
US6975592B1 (en) 2000-11-22 2005-12-13 Nortel Networks Limited Configurable rule-engine for layer-7 and traffic characteristic-based classification
US6947380B1 (en) 2000-12-01 2005-09-20 Cisco Technology, Inc. Guaranteed bandwidth mechanism for a terabit multiservice switch
US20020159386A1 (en) 2001-04-30 2002-10-31 Gilbert Grosdidier Method for dynamical identification of network congestion characteristics
US20120243476A1 (en) 2001-06-25 2012-09-27 Eyuboglu M Vedat Radio network control
US20020196749A1 (en) 2001-06-25 2002-12-26 Eyuboglu M. Vedat Radio network control
US7765328B2 (en) 2001-07-06 2010-07-27 Juniper Networks, Inc. Content service aggregation system
US20110158647A1 (en) 2001-07-19 2011-06-30 Alan Glen Solheim Wavelength Assignment In An Optical WDM Network
US20030051187A1 (en) 2001-08-09 2003-03-13 Victor Mashayekhi Failover system and method for cluster environment
US7289453B2 (en) 2001-12-13 2007-10-30 Sony Deutschland Gmbh Adaptive quality-of-service reservation and pre-allocation for mobile systems
US7187652B2 (en) 2001-12-26 2007-03-06 Tropic Networks Inc. Multi-constraint routing system and method
US7408876B1 (en) 2002-07-02 2008-08-05 Extreme Networks Method and apparatus for providing quality of service across a switched backplane between egress queue managers
US20040153866A1 (en) 2002-11-15 2004-08-05 Microsoft Corporation Markov model of availability for clustered systems
US20040170123A1 (en) 2003-02-27 2004-09-02 International Business Machines Corporation Method and system for managing of denial of service attacks using bandwidth allocation technology
US20040179476A1 (en) 2003-03-10 2004-09-16 Sung-Ha Kim Apparatus and method for controlling a traffic switching operation based on a service class in an ethernet-based network
US20100014487A1 (en) 2003-03-13 2010-01-21 Qualcomm Incorporated Method and system for a data transmission in a communication system
US20040199609A1 (en) 2003-04-07 2004-10-07 Microsoft Corporation System and method for web server migration
US20040228278A1 (en) 2003-05-13 2004-11-18 Corrigent Systems, Ltd. Bandwidth allocation for link aggregation
US7234073B1 (en) 2003-09-30 2007-06-19 Emc Corporation System and methods for failover management of manageable entity agents
US20060203828A1 (en) 2003-10-02 2006-09-14 Masayuki Kumazawa Router selecting method and router apparatus
US20100166424A1 (en) 2004-04-15 2010-07-01 Nagarajan Radhakrishnan L Coolerless photonic integrated circuits (PICs) for WDM transmission networks and PICs operable with a floating signal channel grid changing with temperature but with fixed channel spacing in the floating grid
US20140258774A1 (en) 2004-04-22 2014-09-11 At&T Intellectual Property I, L.P. Methods and systems for automatically tracking the rerouting of logical circuit data in a data network
US20090089609A1 (en) 2004-06-29 2009-04-02 Tsunehiko Baba Cluster system wherein failover reset signals are sent from nodes according to their priority
US20070263540A1 (en) 2004-11-29 2007-11-15 Joachim Charzinski Method and Device for the Automatic Readjustment of Limits for Access Controls Used to Restrict Traffic in a Communication Network
US20060126509A1 (en) 2004-12-09 2006-06-15 Firas Abi-Nassif Traffic management in a wireless data network
US20060187874A1 (en) 2005-02-24 2006-08-24 Interdigital Technology Corporation Method and apparatus for supporting data flow control in a wireless mesh network
US20060209695A1 (en) 2005-03-15 2006-09-21 Archer Shafford R Jr Load balancing in a distributed telecommunications platform
US20060215550A1 (en) 2005-03-23 2006-09-28 Richa Malhotra Method and apparatus for flow control of data in a network
US20070081454A1 (en) 2005-10-11 2007-04-12 Cisco Technology, Inc. A Corporation Of California Methods and devices for backward congestion notification
CN101313278A (en) 2005-12-02 2008-11-26 International Business Machines Corp. Maintaining session states within virtual machine environments
US20070183332A1 (en) 2006-02-06 2007-08-09 Jong-Sang Oh System and method for backward congestion notification in network
US20070204266A1 (en) 2006-02-28 2007-08-30 International Business Machines Corporation Systems and methods for dynamically managing virtual machines
US20070220121A1 (en) 2006-03-18 2007-09-20 Ignatia Suwarna Virtual machine migration between servers
US8069139B2 (en) 2006-03-30 2011-11-29 International Business Machines Corporation Transitioning of database service responsibility responsive to server failure in a partially clustered computing environment
US20090092046A1 (en) 2006-04-05 2009-04-09 Xyratex Technology Limited Method for Congestion Management of a Network, a Switch, and a Network
US20110032821A1 (en) 2006-08-22 2011-02-10 Morrill Robert J System and method for routing data on a packet network
US20090203350A1 (en) 2006-09-08 2009-08-13 Mgpatents, Llc List-based emergency calling device
US20080137669A1 (en) 2006-12-12 2008-06-12 Nokia Corporation Network of nodes
US20090268614A1 (en) 2006-12-18 2009-10-29 British Telecommunications Public Limited Company Method and system for congestion marking
US20080192752A1 (en) 2007-02-06 2008-08-14 Entropic Communications Inc. Parameterized quality of service architecture in a network
US20080225713A1 (en) 2007-03-16 2008-09-18 Cisco Technology, Inc. Source routing approach for network performance and availability measurement of specific paths
CN101663590A (en) 2007-04-20 2010-03-03 Cisco Technology, Inc. Parsing out of order data packets at a content gateway of a network
US20080298248A1 (en) 2007-05-28 2008-12-04 Guenter Roeck Method and Apparatus For Computer Network Bandwidth Control and Congestion Management
CN101335710A (en) 2007-06-27 2008-12-31 Global Packet Co., Ltd. Activating a tunnel upon receiving a control packet
US20090052326A1 (en) 2007-08-21 2009-02-26 Cisco Technology, Inc., A Corporation Of California Backward congestion notification
US20100238805A1 (en) 2007-08-22 2010-09-23 Reiner Ludwig Data Transmission Control Methods And Devices
US20100214970A1 (en) 2007-09-28 2010-08-26 Nec Europe Ltd Method and system for transmitting data packets from a source to multiple receivers via a network
CN101398770A (en) 2007-09-30 2009-04-01 Symantec Corp. System for and method of migrating one or more virtual machines
US20100238803A1 (en) 2007-11-01 2010-09-23 Telefonaktiebolaget Lm Ericsson (Publ) Efficient Flow Control in a Radio Network Controller (RNC)
US20090180380A1 (en) 2008-01-10 2009-07-16 Nuova Systems, Inc. Method and system to manage network traffic congestion
US20090213861A1 (en) 2008-02-21 2009-08-27 International Business Machines Corporation Reliable Link Layer Packet Retry
WO2009113106A2 (en) 2008-02-29 2009-09-17 Gaurav Raina Network communication
US20090232001A1 (en) 2008-03-11 2009-09-17 Cisco Technology, Inc. Congestion Control in Wireless Mesh Networks
US20090231997A1 (en) 2008-03-14 2009-09-17 Motorola, Inc. Method for displaying a packet switched congestion status of a wireless communication network
US7949893B1 (en) 2008-04-30 2011-05-24 Netapp, Inc. Virtual user interface failover
US20110090797A1 (en) 2008-06-27 2011-04-21 Gnodal Limited Method of data delivery across a network
US20100027424A1 (en) 2008-07-30 2010-02-04 Microsoft Corporation Path Estimation in a Wireless Mesh Network
US20100027420A1 (en) 2008-07-31 2010-02-04 Cisco Technology, Inc. Dynamic distribution of virtual machines in a communication network
US20110206025A1 (en) 2008-08-01 2011-08-25 Telefonica, S.A. Access point which sends geographical positioning information from the access point to mobile terminals and mobile terminal which receives the information and estimates the position thereof based on said information
US7978607B1 (en) 2008-08-29 2011-07-12 Brocade Communications Systems, Inc. Source-based congestion detection and control
CN101677321A (en) 2008-09-16 2010-03-24 Hitachi, Ltd. Method and apparatus for storage migration
US8082466B2 (en) 2008-10-30 2011-12-20 Hitachi, Ltd. Storage device, and data path failover method of internal network of storage controller
US20100138686A1 (en) 2008-11-26 2010-06-03 Hitachi, Ltd. Failure recovery method, failure recovery program and management server
US20100146327A1 (en) 2008-12-05 2010-06-10 Hitachi, Ltd. Server failover control method and apparatus and computer system group
US20100142539A1 (en) 2008-12-05 2010-06-10 Mark Gooch Packet processing indication
US20110242966A1 (en) 2008-12-18 2011-10-06 Alcatel Lucent Method And Apparatus For Delivering Error-Critical Traffic Through A Packet-Switched Network
US20100199275A1 (en) 2009-01-30 2010-08-05 Jayaram Mudigonda Server switch integration in a virtualized system
US20100211718A1 (en) 2009-02-17 2010-08-19 Paul Gratz Method and apparatus for congestion-aware routing in a computer interconnection network
US20120008958A1 (en) 2009-03-20 2012-01-12 Telefonaktiebolaget Lm Ericsson (Publ) Method and devices for automatic tuning in wdm-pon
US20120030306A1 (en) 2009-04-28 2012-02-02 Nobuharu Kami Rapid movement system for virtual devices in a computing system, management device, and method and program therefor
US20100281178A1 (en) 2009-04-29 2010-11-04 Terence Sean Sullivan Network Audio Distribution System and Method
US20100302935A1 (en) 2009-05-27 2010-12-02 Yin Zhang Method and system for resilient routing reconfiguration
US20100303238A1 (en) 2009-05-29 2010-12-02 Violeta Cakulev Session Key Generation and Distribution with Multiple Security Associations per Protocol Instance
US20100309781A1 (en) 2009-06-03 2010-12-09 Qualcomm Incorporated Switching between mimo and receiver beam forming in a peer-to-peer network
US20110026437A1 (en) 2009-07-30 2011-02-03 Roberto Roja-Cessa Disseminating Link State Information to Nodes of a Network
US20110031082A1 (en) 2009-08-06 2011-02-10 Chen-Lung Chi Wheeled luggage device with brake
US20110211834A1 (en) 2009-08-13 2011-09-01 New Jersey Institute Of Technology Scheduling wdm pon with tunable lasers with different tuning times
CN101997644A (en) 2009-08-24 2011-03-30 Huawei Technologies Co., Ltd. Speed adjusting method, system and coding scheme selection method and system thereof
WO2011037104A1 (en) 2009-09-24 2011-03-31 NEC Corporation Identification system for inter-virtual-server communication and identification method for inter-virtual-server communication
WO2011037148A1 (en) 2009-09-28 2011-03-31 NEC Corporation Computer system, and migration method of virtual machine
US20120054079A1 (en) 2009-09-30 2012-03-01 Nec Corporation Charging system and charging method
US8724470B2 (en) 2009-10-01 2014-05-13 Lg Electronics Inc. Method of controlling data flow in wireless communication system
US20120201140A1 (en) 2009-10-06 2012-08-09 Kazuya Suzuki Network system, controller, method, and program
US20110085444A1 (en) 2009-10-13 2011-04-14 Brocade Communications Systems, Inc. Flow autodetermination
US20120023231A1 (en) 2009-10-23 2012-01-26 Nec Corporation Network system, control method for the same, and controller
US20120250496A1 (en) 2009-11-26 2012-10-04 Takeshi Kato Load distribution system, load distribution method, and program
WO2011065268A1 (en) 2009-11-26 2011-06-03 NEC Corporation Load distribution system, load distribution method, and program
US20110299389A1 (en) 2009-12-04 2011-12-08 Telcordia Technologies, Inc. Real Time Monitoring, Onset Detection And Control Of Congestive Phase-Transitions in Communication Networks
US20110137772A1 (en) 2009-12-07 2011-06-09 At&T Mobility Ii Llc Devices, Systems and Methods for SLA-Based Billing
US20110135305A1 (en) 2009-12-08 2011-06-09 Vello Systems, Inc. Optical Subchannel Routing, Protection Switching and Security
US20110158658A1 (en) 2009-12-08 2011-06-30 Vello Systems, Inc. Optical Subchannel-Based Cyclical Filter Architecture
US20110142450A1 (en) 2009-12-11 2011-06-16 Alberto Tanzi Use of pre-validated paths in a wdm network
US20110283016A1 (en) 2009-12-17 2011-11-17 Nec Corporation Load distribution system, load distribution method, apparatuses constituting load distribution system, and program
US20110305167A1 (en) 2009-12-28 2011-12-15 Nec Corporation Communication system, and method of collecting port information
US20120020361A1 (en) 2010-01-05 2012-01-26 Nec Corporation Switch network system, controller, and control method
US20120170477A1 (en) 2010-01-14 2012-07-05 Nec Corporation Computer, communication system, network connection switching method, and program
US20110179415A1 (en) 2010-01-20 2011-07-21 International Business Machines Corporation Enablement and acceleration of live and near-live migration of virtual machines and their associated storage across networks
JP2011166700A (en) 2010-02-15 2011-08-25 Nec Corp Network system, and packet speculative transfer method
WO2011118575A1 (en) 2010-03-24 2011-09-29 NEC Corporation Communication system, control device and traffic monitoring method
US20110256865A1 (en) 2010-04-15 2011-10-20 Zulfiquar Sayeed User Equipment Adjustment of Uplink Satellite Communications
US20110261696A1 (en) 2010-04-22 2011-10-27 International Business Machines Corporation Network data congestion management probe system
US20130162038A1 (en) 2010-04-22 2013-06-27 Siemens Aktiengesellschaft Apparatus and method for stabilizing an electrical power import
US20110261831A1 (en) 2010-04-27 2011-10-27 Puneet Sharma Dynamic Priority Queue Level Assignment for a Network Flow
US20110271007A1 (en) 2010-04-28 2011-11-03 Futurewei Technologies, Inc. System and Method for a Context Layer Switch
US20110273988A1 (en) 2010-05-10 2011-11-10 Jean Tourrilhes Distributing decision making in a centralized flow routing system
US20110286324A1 (en) 2010-05-19 2011-11-24 Elisa Bellagamba Link Failure Detection and Traffic Redirection in an Openflow Network
US20120221887A1 (en) 2010-05-20 2012-08-30 International Business Machines Corporation Migrating Virtual Machines Among Networked Servers Upon Detection Of Degrading Network Link Operation
US20110292830A1 (en) 2010-05-25 2011-12-01 Telefonaktiebolaget L M Ericsson (Publ) Method for enhancing table lookups with exact and wildcards matching for parallel environments
US20110295996A1 (en) 2010-05-28 2011-12-01 At&T Intellectual Property I, L.P. Methods to improve overload protection for a home subscriber server (hss)
US20130159415A1 (en) 2010-06-07 2013-06-20 Kansai University Congestion control system, congestion control method, and communication unit
US20110305288A1 (en) 2010-06-11 2011-12-15 Yong Liu Method and Apparatus for Determining Channel Bandwidth
US20120014693A1 (en) 2010-07-13 2012-01-19 Futurewei Technologies, Inc. Passive Optical Network with Adaptive Filters for Upstream Transmission Management
US20120014284A1 (en) 2010-07-19 2012-01-19 Raghuraman Ranganathan Virtualized shared protection capacity
US20130124683A1 (en) 2010-07-20 2013-05-16 Sharp Kabushiki Kaisha Data distribution system, data distribution method, data relay device on distribution side, and data relay device on reception side
US20130144995A1 (en) 2010-09-03 2013-06-06 Shuji Ishii Control apparatus, a communication system, a communication method and a recording medium having recorded thereon a communication program
US20120063316A1 (en) 2010-09-10 2012-03-15 Brocade Communications Systems, Inc. Congestion notification across multiple layer-2 domains
WO2012056816A1 (en) 2010-10-28 2012-05-03 NEC Corporation Network system and method for controlling communication traffic
CN102546385A (en) 2010-12-15 2012-07-04 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
CN102611612A (en) 2010-12-21 2012-07-25 Microsoft Corp. Multi-path communications in a data center environment
US20120163175A1 (en) 2010-12-23 2012-06-28 Brocade Communications Systems, Inc. Ingress rate limiting
US20120195201A1 (en) * 2011-02-02 2012-08-02 Alaxala Networks Corporation Bandwidth policing apparatus and packet relay apparatus
US20120207175A1 (en) 2011-02-11 2012-08-16 Cisco Technology, Inc. Dynamic load balancing for port groups
US20140010235A1 (en) 2011-03-18 2014-01-09 Nec Corporation Network system and switching method thereof
US8429282B1 (en) 2011-03-22 2013-04-23 Amazon Technologies, Inc. System and method for avoiding system overload by maintaining an ideal request rate
US20140016476A1 (en) 2011-03-24 2014-01-16 Nec Europe Ltd. Method for operating a flow-based switching system and switching system
US20140192646A1 (en) 2011-03-29 2014-07-10 Nec Europe Ltd. User traffic accountability under congestion in flow-based multi-layer switches
US20140016647A1 (en) 2011-03-29 2014-01-16 Hirokazu Yoshida Network system and vlan tag data acquiring method
US20120287782A1 (en) 2011-05-12 2012-11-15 Microsoft Corporation Programmable and high performance switch for data center networks
US20130003735A1 (en) 2011-06-28 2013-01-03 Chao H Jonathan Dynamically provisioning middleboxes
US20130010600A1 (en) 2011-07-08 2013-01-10 Telefonaktiebolaget L M Ericsson (Publ) Controller Driven OAM for OpenFlow
CN102291389A (en) 2011-07-14 2011-12-21 Nanjing University of Posts and Telecommunications Cross-layer congestion control method in satellite network
US8762501B2 (en) 2011-08-29 2014-06-24 Telefonaktiebolaget L M Ericsson (Publ) Implementing a 3G packet core in a cloud computer with openflow data and control planes
US20130054761A1 (en) 2011-08-29 2013-02-28 Telefonaktiebolaget L M Ericsson (Publ) Implementing a 3G Packet Core in a Cloud Computer with Openflow Data and Control Planes
US20130058345A1 (en) 2011-09-01 2013-03-07 Fujitsu Limited Apparatus and Method for Establishing Tunnels Between Nodes in a Communication Network
US8630307B2 (en) 2011-09-13 2014-01-14 Qualcomm Incorporated Methods and apparatus for traffic contention resource allocation
US20140301204A1 (en) 2011-11-10 2014-10-09 Ntt Docomo, Inc. Mobile communication method, policy and charging rule server apparatus, and mobile management node
US8953453B1 (en) 2011-12-15 2015-02-10 Amazon Technologies, Inc. System and method for throttling service requests using work-based tokens
US20130176850A1 (en) 2012-01-09 2013-07-11 Telefonaktiebolaget L M Ericsson (Publ) Expanding network functionalities for openflow based split-architecture networks
US20130205002A1 (en) 2012-02-02 2013-08-08 Cisco Technology, Inc. Wide area network optimization
US20130212578A1 (en) 2012-02-14 2013-08-15 Vipin Garg Optimizing traffic load in a communications network
US20130268686A1 (en) 2012-03-14 2013-10-10 Huawei Technologies Co., Ltd. Method, switch, server and system for sending connection establishment request
US20130250770A1 (en) 2012-03-22 2013-09-26 Futurewei Technologies, Inc. Supporting Software Defined Networking with Application Layer Traffic Optimization
US20130258843A1 (en) 2012-03-29 2013-10-03 Fujitsu Limited Network system and apparatus
US20130258847A1 (en) * 2012-04-03 2013-10-03 Telefonaktiebolaget L M Ericsson (Publ) Congestion Control and Resource Allocation in Split Architecture Networks
US20130294236A1 (en) 2012-05-04 2013-11-07 Neda Beheshti-Zavareh Congestion control in packet data networking
US20140006630A1 (en) 2012-06-28 2014-01-02 Yigang Cai Session initiation protocol (sip) for message throttling
US20140040526A1 (en) 2012-07-31 2014-02-06 Bruce J. Chang Coherent data forwarding when link congestion occurs in a multi-node coherent system
US20140092907A1 (en) 2012-08-14 2014-04-03 Vmware, Inc. Method and system for virtual and physical network integration
US20140108632A1 (en) 2012-10-15 2014-04-17 Cisco Technology, Inc. System and method for efficient use of flow table space in a network environment
US20140119193A1 (en) 2012-10-30 2014-05-01 Telefonaktiebolaget L M Ericsson (Publ) Method for dynamic load balancing of network flows on lag interfaces
US20140126907A1 (en) 2012-11-05 2014-05-08 Broadcom Corporation Data rate control in an optical line terminal
US20140178066A1 (en) 2012-11-07 2014-06-26 Nec Laboratories America, Inc. QoS-aware united control protocol for optical burst switching in software defined optical networks
US20140169189A1 (en) 2012-12-17 2014-06-19 Broadcom Corporation Network Status Mapping
US20140198648A1 (en) * 2013-01-15 2014-07-17 Cisco Technology, Inc. Identification of data flows based on actions of quality of service policies

Non-Patent Citations (36)

* Cited by examiner, † Cited by third party
Title
Anonymous; "Intelligent VM Migration Based on Relative VM Priority and Relative Suitability of Migration Target"; https://priorartdatabase.com/IPCOM/000201632; Nov. 16, 2010, 3 pages.
Anonymous; "Management framework for efficient live migration of virtual machines running migration-aware applications";https://priorartdatabase.com/IPCOM000200260; Oct. 3, 2010, 5 pages.
Curtis, et al. "DevoFlow: Scaling Flow Management for High-Performance Networks". SIGCOMM'11, Aug. 15-19, 2011, Toronto, Ontario, Canada.
Egilmez, et al. "Scalable video streaming over OpenFlow networks: An optimization framework for QoS Routing". 2011 18th IEEE International Conference on Image Processing (ICIP), pp. 2241-2244.
El-Azzab, et al. "Slices isolator for a virtualized OpenFlow node", (2011) First International Symposium on Network Cloud Computing and Applications (NCCA), pp. 121-126.
IBM "Software Defined Networking, a new paradigm for virtual dynamic, flexible networking," IBM Systems and Technology, Oct. 2012, 6 pages.
IBM; "The automatic determination of master-slave relationship between embedded controllers by mearns of a shared hardware access switch"; https://www.ip.com/pubview/IPCOM000020741D; Dec. 11, 2003, 5 pages.
Johnson, R.D., et al.; "Detection of a Working Master Controller by a Slave Card"; https://www.ip.com/pubview/IPCOM000099594D; Feb. 1, 1990, 3 pages.
Li, Z., et al. Compatible TDM/WDM PON using a Single Tunable Optical Filter for both Downstream Wavelength Selection and Upstream Wavelength Generation. IEEE Photonics Technology Letters, vol. 24, No. 10, May 15, 2012, pp. 797-799.
Liu, et al. "OpenFlow-based Wavelength Path Control in Transparent Optical Networks: A Proof-of-Concept Demonstration". Sep. 2011, 37th European Conference and Exhibition on Optical Communication (ECOC).
McKeown, et al.; "OpenFlow: Enabling Innovation in Campus Networks"; ACM SIGCOMM Computer Communication Review, vol. 38, No. 2, Mar. 14, 2008, pp. 69-74.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, Application No. PCT/IB2014/059457; Date of Mailing: Jul. 1, 2014; 6 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, Application No. PCT/IB2014/059459; International Filing Date: Mar. 5, 2014; Date of Mailing: Jun. 30, 2014; 10 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, Application No. PCT/IB2014/059460; International Filing Date: Mar. 5, 2014; Date of Mailing: Jun. 30, 2014; 9 pages.
Pfaff, B., et al.; "OpenFlow Switch Specification"; www.openflow.org/document/openflow-spec-v1.0.0.pdf; Feb. 28, 2011, 56 pages.
U.S. Appl. No. 13/833,796, filed Mar. 15, 2013; Final Office Action; mailed Jan. 15, 2016; 18 pages.
U.S. Appl. No. 13/833,796, filed Mar. 15, 2013; Non-Final Office Action; mailed Jul. 30, 2015; 21 pages.
U.S. Appl. No. 13/833,796, filed Mar. 15, 2013; Non-Final Office Action; mailed Dec. 19, 2014; 31 pages.
U.S. Appl. No. 13/833,952, filed Mar. 15, 2013; Final Office Action; mailed Apr. 16, 2015; 25 pages.
U.S. Appl. No. 13/833,952, filed Mar. 15, 2013; Non-Final Office Action; mailed Aug. 5, 2015; 25 pages.
U.S. Appl. No. 13/833,952, filed Mar. 15, 2013; Non-Final Office Action; mailed Nov. 3, 2014; 39 pages.
U.S. Appl. No. 13/834,117, filed Mar. 15, 2013; Non-Final Office Action; mailed Dec. 16, 2015; 30 pages.
U.S. Appl. No. 13/834,117, filed Mar. 15, 2013; Final Office Action; mailed Jul. 17, 2015; 31 pages.
U.S. Appl. No. 13/834,117, filed Mar. 15, 2013; Non-Final Office Action; mailed Feb. 26, 2015; 61 pages.
U.S. Appl. No. 13/834,502, filed Mar. 15, 2013; Final Office Action; mailed Jun. 29, 2015; 26 pages.
U.S. Appl. No. 13/834,502, filed Mar. 15, 2013; Non-Final Office Action; mailed Dec. 4, 2014; 37 pages.
U.S. Appl. No. 14/501,457, filed Sep. 30, 2014; Final Office Action; mailed Jun. 29, 2015; 28 pages.
U.S. Appl. No. 14/501,663, filed Sep. 30, 2014; Final Office Action; mailed Apr. 9, 2015; 25 pages.
U.S. Appl. No. 14/501,663, filed Sep. 30, 2014; Final Office Action; mailed Jan. 20, 2016; 15 pages.
U.S. Appl. No. 14/501,663, filed Sep. 30, 2014; Non-Final Office Action; mailed Dec. 19, 2014; 11 pages.
U.S. Appl. No. 14/501,945, filed Sep. 30, 2014; Final Office Action; mailed Jul. 16, 2015; 30 pages.
U.S. Appl. No. 14/501,945, filed Sep. 30, 2014; Non-Final Office Action; mailed Jan. 5, 2015; 29 pages.
U.S. Appl. No. 14/501,945, filed Sep. 30, 2014; Non-Final Office Action; mailed Nov. 30, 2015; 22 pages.
U.S. Appl. No. 14/502,043, filed Sep. 30, 2014; Non-Final Office Action; mailed Dec. 23, 2014; 17 pages.
Wang et al., "Dynamic Bandwidth Allocation for Preventing Congestion in Data Center Networks," ISNN 2011, Part III, LNCS 6677, pp. 160-167, 2011.
Yong, S., et al.; "XOR Retransmission in Multicast Error Recovery"; Proceedings of the IEEE International Conference on Networks (ICON 2000), 2000, pp. 336-340.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170180240A1 (en) * 2015-12-16 2017-06-22 Telefonaktiebolaget Lm Ericsson (Publ) Openflow configured horizontally split hybrid SDN nodes
US10171336B2 (en) * 2015-12-16 2019-01-01 Telefonaktiebolaget Lm Ericsson (Publ) Openflow configured horizontally split hybrid SDN nodes
US10057170B2 (en) * 2016-09-23 2018-08-21 Gigamon Inc. Intelligent dropping of packets in a network visibility fabric
US20180351866A1 (en) * 2016-09-23 2018-12-06 Gigamon Inc. Intelligent Dropping of Packets in a Network Visibility Fabric
US10931582B2 (en) * 2016-09-23 2021-02-23 Gigamon Inc. Intelligent dropping of packets in a network visibility fabric
US10419350B2 (en) * 2017-10-06 2019-09-17 Hewlett Packard Enterprise Development Lp Packet admission

Also Published As

Publication number Publication date
US20140269319A1 (en) 2014-09-18

Similar Documents

Publication Publication Date Title
US9769074B2 (en) Network per-flow rate limiting
US10484518B2 (en) Dynamic port type detection
US9407560B2 (en) Software defined network-based load balancing for physical and virtual networks
US9503382B2 (en) Scalable flow and congestion control with OpenFlow
US9614930B2 (en) Virtual machine mobility using OpenFlow
US9237110B2 (en) Dynamic maximum transmission unit size adaption
US9590923B2 (en) Reliable link layer for control links between network controllers and switches
US10075396B2 (en) Methods and systems for managing distributed media access control address tables
US9071529B2 (en) Method and apparatus for accelerating forwarding in software-defined networks
US9621482B2 (en) Servers, switches, and systems with switching module implementing a distributed network operating system
US9742697B2 (en) Integrated server with switching capabilities and network operating system
US10237130B2 (en) Method for processing VxLAN data units
US8630296B2 (en) Shared and separate network stack instances
US20190052564A1 (en) Network element with congestion-aware match tables
US20160149817A1 (en) Analysis device

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DECUSATIS, CASIMER;KRISHNAMURTHY, RAJARAM B.;REEL/FRAME:030012/0058

Effective date: 20130312

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: AWEMANE LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:057991/0960

Effective date: 20210826

AS Assignment

Owner name: BEIJING PIANRUOJINGHONG TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AWEMANE LTD.;REEL/FRAME:064501/0498

Effective date: 20230302

AS Assignment

Owner name: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEIJING PIANRUOJINGHONG TECHNOLOGY CO., LTD.;REEL/FRAME:066565/0952

Effective date: 20231130