US9137179B2 - Memory-mapped buffers for network interface controllers - Google Patents

Memory-mapped buffers for network interface controllers

Info

Publication number
US9137179B2
US9137179B2 (Application US11/493,285)
Authority
US
United States
Prior art keywords
memory
mapped
data
buffer
nic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/493,285
Other versions
US20080028103A1 (en)
Inventor
Michael Steven Schlansker
Erwin Oertli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Valtrus Innovations Ltd
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US11/493,285
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (Assignors: SCHLANSKER, MICHAEL STEVEN; OERTLI, ERWIN)
Priority to CN200710137000.6A (published as CN101115054B)
Publication of US20080028103A1
Application granted
Publication of US9137179B2
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP (Assignor: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.)
Assigned to OT PATENT ESCROW, LLC under a patent assignment, security interest, and lien agreement (Assignors: HEWLETT PACKARD ENTERPRISE COMPANY; HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP)
Assigned to VALTRUS INNOVATIONS LIMITED (Assignor: OT PATENT ESCROW, LLC)
Active legal status
Adjusted expiration legal status

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/901Buffering arrangements using storage descriptor, e.g. read or write pointers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9063Intermediate storage in different physical parts of a node or terminal

Definitions

  • the present invention relates generally to data communication systems and methods and, more particularly, to data communication systems and methods in which memory-mapped receive and transmit buffers are provided to network interface controllers.
  • a network interface controller is a hardware device that supports the transmission of data between computers as illustrated in FIG. 1 .
  • a symmetric multiprocessor (SMP) system 10 includes a number of central processor units (CPUs) 12 which share memory unit 14 via a memory interconnect 16 .
  • Although SMP 10 is shown as having four CPUs (cores), those skilled in the art will appreciate that SMP 10 can have more or fewer CPUs.
  • SMP 10 sends messages to other SMPs 20 , 30 and 40 under the control of NIC 18 via Ethernet connections and a fabric (switch) 22 .
  • the NIC 18 will typically have a processor (not shown) associated therewith, either as an integral part of the NIC or in the form of a helper processor, so that the NIC has sufficient intelligence to interpret various commands.
  • the NIC 18 as well as various I/O devices 42 , 44 , are connected to the rest of the SMP via an I/O interconnect 46 .
  • the I/O interconnect 46 communicates with the memory interconnect 16 via an I/O adapter 48 (e.g., a bridge).
  • a common source and destination for transmitted data in such systems is paged virtual memory.
  • Paged virtual memory provides for virtual addresses which are translated or mapped onto physical pages and enables virtual pages to be swapped out to disk or removed from main memory and later swapped in from disk to a new physical page location.
  • An operating system can unilaterally perform page swaps of so-called “unpinned” virtual pages.
  • application software operating on such systems typically accesses main memory using address translation hardware that ensures that the correct physical page is accessed, e.g., that the operating system has not initiated a page swap for the page that the application software needs to access.
  • Software access pauses during time intervals when needed data is swapped out and resumes by accessing a new physical location when data is swapped in at that location.
  • Some networking solutions address the downtime associated with software suspension during virtual page swapping by providing for software to copy data from unpinned virtual memory to pinned interface memory.
  • Pinned memory consists of pages that cannot be swapped to disk by the operating system.
  • the NIC 18 will typically access only pinned interface memory. This simplifies direct memory access (DMA) transfers performed by the NIC 18 , since data is never swapped during a network operation which, in turn, guarantees that data remains accessible throughout a NIC's DMA data transfer and that the physical address of the data remains constant.
  • Such solutions require extra overhead in the form of data copying (e.g., copying from unpinned virtual memory to a pinned system buffer accessible by the NIC 18 ) that utilizes important system resources.
  • a processing system includes a plurality of processing cells, each of the processing cells including at least one processor and at least one system memory, and a network interface controller (NIC) associated with each of the plurality of processing cells for transmitting and receiving data between the processing cells, wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and said NIC includes at least one memory-mapped buffer.
  • a method for communicating data in a processing system includes the steps of providing a plurality of processing cells, each of the processing cells including at least one processor and at least one system memory, and transmitting and receiving data between the processing cells via a network interface controller (NIC) associated with each of the plurality of processing cells, wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and the NIC includes at least one memory-mapped buffer.
  • FIG. 1 illustrates an exemplary processing system in which a network interface controller is connected to the processing system via an I/O interconnect;
  • FIG. 2 illustrates an exemplary processing system in which a NIC is connected to the processing system via a direct connection to a memory interconnect according to an exemplary embodiment;
  • FIG. 3 depicts a portion of a NIC according to an exemplary embodiment of the present invention including a memory-mapped receive buffer
  • FIG. 4 is a flow chart depicting a general method for communicating data according to an exemplary embodiment of the present invention
  • FIG. 5 is a flow chart depicting a method for managing memory according to an exemplary embodiment of the present invention.
  • FIG. 6 is a flow chart depicting a method for managing memory according to another exemplary embodiment of the present invention.
  • transmit and receive command and data buffers within the NIC are mapped directly onto the processors' memory interconnect (also sometimes referred to as a “memory bus”, “coherent interconnect” or “front-side bus”).
  • This allows, among other things, for: efficient copying of data to the NIC; efficient copying of commands to the NIC; efficient copying of data from the NIC; and efficient detection of data arrival and of command completion.
  • the overall architecture of an SMP is modified such that the NIC is connected directly to the memory interconnect (rather than indirectly via an I/O interconnect 46 and I/O adapter 48 as shown in FIG. 1 ).
  • an SMP system 200 includes a number of central processor units (CPUs) 212 which share memory unit 214 via a memory interconnect 216 .
  • Although SMP 200 is shown as having four CPUs (cores), those skilled in the art will appreciate that SMP 200 can have more or fewer CPUs.
  • SMP 200 sends messages to other SMPs 200 under the control of NIC 218 via Ethernet connections and a fabric (switch) 220 .
  • the NIC 218 will typically have a processor (not shown) associated therewith, either as an integral part of the NIC or in the form of a helper processor, so that the NIC has sufficient intelligence to interpret various commands.
  • Various I/O devices 230 , e.g., displays, second memory storage devices, etc., are connected to the memory interconnect 216 via an I/O adapter 238 (e.g., a bridge) and an I/O interconnect 240 .
  • the exemplary embodiment of FIG. 2 provides for the NIC 218 to be directly connected to the memory interconnect 216 rather than indirectly via an I/O interconnect and I/O adapter.
  • this architecture facilitates memory-mapping of transmit and/or receive buffers within the NIC 218 .
  • memory-mapping refers to the ability of the CPUs 212 to write (or read) directly to (or from) the transmit (or receive) buffers within the NIC 218 via reserved memory locations within memory 214 that correspond to the buffers.
  • the receive side of the exemplary system of FIG. 2 (a portion of which is illustrated as FIG. 3 ) will include one or more memory-mapped receive buffers 300 .
  • the NIC 218 is directly connected to the memory interconnect 216 since data does not have to pass through an I/O bus on its way between the NIC 218 , on the one hand, and system memory 308 or processor 304 on the other hand.
  • the phrase “directly connected” does not exclude the provision of interfaces, e.g., memory interface circuitry 305 , between a buffer and the memory interconnect.
  • the memory-mapped receive buffer 300 is a circular queue data structure, although the present invention is not limited to circular queues.
  • the receive buffer 300 has contents stored therein (in the “live region”) which are identified by tail and head pointers that indicate most recently deposited data and least recently deposited data, respectively.
  • incoming packets can be dropped in order to prevent buffer overflow; dropped packets can be recovered using a reliable protocol, such as the Transmission Control Protocol (TCP).
  • Other techniques can be used to prevent buffer overflow. For example, instead of dropping a most recently received frame, the system could drop a frame which was received earlier, e.g., an oldest received frame.
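The circular-queue behavior described above can be sketched in a few lines of Python. `ReceiveQueue` is a hypothetical model (not from the patent) of the head/tail pointer discipline, showing both overflow policies: dropping the newest arriving frame, or evicting the oldest buffered frame to make room.

```python
# Illustrative model of the NIC's circular receive queue. The fill
# circuitry deposits at the tail; host software consumes at the head.
class ReceiveQueue:
    def __init__(self, capacity, drop_oldest=False):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0            # least recently deposited (next to consume)
        self.tail = 0            # one past most recently deposited
        self.count = 0
        self.drop_oldest = drop_oldest

    def deposit(self, frame):
        """Called on frame arrival; returns False if the frame is dropped."""
        if self.count == self.capacity:
            if not self.drop_oldest:
                return False                      # drop the newest frame
            self.head = (self.head + 1) % self.capacity   # evict the oldest
            self.count -= 1
        self.buf[self.tail] = frame
        self.tail = (self.tail + 1) % self.capacity  # advancing tail signals arrival
        self.count += 1
        return True

    def consume(self):
        """Called by host software after data has been copied out."""
        if self.count == 0:
            return None
        frame = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity  # advancing head frees space
        self.count -= 1
        return frame
```

Which policy is preferable depends on the higher-level protocol; under TCP either dropped frame will eventually be retransmitted.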
  • the receive queue fill circuitry 302 deposits data from an arriving Ethernet frame into a region of the receive buffer 300 beyond the tail pointer.
  • the tail pointer is advanced across valid data from the Ethernet frame. The advancement of the tail pointer across arriving data indicates that new data has arrived and may be processed.
  • the receive buffer head and tail pointers are also memory-mapped, i.e., the values associated with the receive buffer 300 's head pointer and tail pointer are automatically updated (via memory interface circuitry 305 ) at predetermined memory locations within the user buffer area 306 of system memory 308 .
  • This enables the head and tail pointer values to be directly accessed by processor 304 , which can make memory references using the memory interconnect 216 .
  • This allows processor 304 to read, for example, the receive buffer tail pointer and compare its current value to a previous receive buffer tail pointer value. When the value changes, the processor 304 knows that new Ethernet frame data has arrived.
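The polling step just described can be sketched as follows; `TailPoller` and `read_tail` are illustrative names, not patent terminology. On real hardware the repeated reads are satisfied from the processor's cache until the NIC's update invalidates the cached line, which is what makes this polling cheap.

```python
# Hypothetical sketch: detect new frame arrivals by comparing the
# memory-mapped tail pointer against the last observed value.
class TailPoller:
    def __init__(self, read_tail):
        self.read_tail = read_tail      # reads the memory-mapped tail pointer
        self.last_seen = read_tail()

    def poll(self):
        """Return True exactly when the tail pointer has advanced."""
        current = self.read_tail()
        if current != self.last_seen:
            self.last_seen = current
            return True
        return False
```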
  • This latter feature illustrates the capability of processing systems and methods according to exemplary embodiments of the present invention to implement efficient polling of the NIC 218 using cache coherent, memory interconnect access.
  • the processor 304 may read a cached value of that data that is held within the processor. This could occur, for example, during periods of time when no message arrives into the NIC 218 .
  • the receive queue fill circuitry causes data within the receive buffer 300 to change value.
  • exemplary embodiments of the present invention provide for efficient transport of received data to its post-receive location.
  • Data is transported from Ethernet frames residing within the receive buffer 300 to one or more application buffers (not shown) within one or more user applications.
  • Higher-level software protocols may specify data delivery instructions which are stored in a delivery instruction database 310 in system memory 308 . These delivery instructions may be selectively retrieved from the database 310 for use in handling received data based upon header information that is embedded in the Ethernet frame.
  • Applications may have complex data delivery requirements that allow arriving data to be matched against tag information (or other information) to determine the proper location for the data after it is received in the receive buffer 300 .
  • header data may indicate a specific TCP socket to which data should be delivered.
  • Low-level receive buffer processing software may then deliver data with a single copy from the receive buffer 300 to the proper target location. Note that although this exemplary embodiment describes receive operations in the context of a single receive buffer 300 , plural receive buffers can also be implemented in a similar manner to support, e.g., scatter operation.
  • An RDMA delivery instruction describes a buffer region (not shown) into which data can be directly placed, e.g., a list of physical pages that contain data that is contiguous in virtual memory.
  • An incoming Ethernet packet can carry the name of the RDMA delivery instruction, a buffer offset, a length, and actual data.
  • the data is delivered into the buffer region starting at the desired offset.
  • This RDMA delivery instruction can be reused until all data has been successfully placed in the buffer. At this time, the instruction may be removed from the RDMA delivery database.
  • a higher-layer protocol can be used to install/remove RDMA delivery instructions to/from the delivery instruction database 310 and a lower-layer protocol then uses these delivery instructions to directly deliver data.
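The two-layer scheme above can be sketched as a small delivery-instruction database: a higher-layer protocol installs named RDMA delivery instructions, and the lower-layer receive path uses the name and offset carried by each incoming packet to place its payload directly. All names here (`DeliveryDatabase`, `install`, `deliver`) are illustrative assumptions, not the patent's API.

```python
# Hypothetical sketch of the delivery instruction database 310.
class DeliveryDatabase:
    def __init__(self):
        self.instructions = {}

    def install(self, name, region_size):
        # The buffer region is modeled as a flat bytearray; in the patent
        # it is a list of physical pages contiguous in virtual memory.
        self.instructions[name] = {
            "region": bytearray(region_size),
            "remaining": region_size,
        }

    def deliver(self, name, offset, payload):
        """Place a packet's payload at the offset the packet names.

        Returns the completed region once all data has been placed,
        at which point the instruction is removed from the database.
        """
        inst = self.instructions[name]
        inst["region"][offset:offset + len(payload)] = payload
        inst["remaining"] -= len(payload)
        if inst["remaining"] <= 0:
            region = inst["region"]
            del self.instructions[name]     # reuse ends once the buffer is full
            return bytes(region)
        return None
```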
  • the processor 304 can use information obtained from the Ethernet packet as well as information obtained from the delivery database 310 in order to determine a target address location.
  • DMA commands are then inserted at the tail of the DMA command buffer 320 in order to initiate appropriate DMA transfers via DMA engine 324 .
  • the DMA command head pointer moves across the command to signal command completion.
  • data within the receive buffer 300 that has been copied to the final user buffer location is no longer needed.
  • the receive buffer's head pointer may then be moved across such data without data loss.
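The DMA command buffer protocol above can be sketched as follows: commands enter at the tail, the DMA engine advances the head as it completes them, and completion is detected by a cheap pointer comparison (both pointers being memory-mapped). `DmaCommandBuffer` and its methods are illustrative stand-ins, not the patent's interface.

```python
# Hypothetical sketch of the DMA command buffer 320 / DMA engine 324 pair.
class DmaCommandBuffer:
    def __init__(self):
        self.commands = []
        self.head = 0   # all commands with index < head are complete
        self.tail = 0   # next insertion index

    def submit(self, src_offset, length, target_addr):
        """Insert a DMA command at the tail; returns its index."""
        self.commands.append((src_offset, length, target_addr))
        self.tail += 1
        return self.tail - 1

    def engine_complete_one(self):
        """Simulated DMA engine: head moves across the oldest command."""
        if self.head < self.tail:
            self.head += 1

    def is_complete(self, index):
        # Polling this comparison is cheap: the head pointer value can be
        # cached by the processor until the engine actually advances it.
        return index < self.head
```

Once `is_complete` reports True for the commands covering a stretch of receive-buffer data, the receive buffer's head pointer may safely move across that data.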
  • Transmit-side operations in the NIC 218 can be performed in much the same way as the receive-side operations discussed above.
  • a memory-mapped transmit buffer (not shown), or multiple memory-mapped transmit buffers, can be provided to the NIC 218 , which has a corresponding memory space allocated within the user buffers area 306 of system memory 308 .
  • the processor 304 can readily monitor transmit activities by, for example, comparing its cached transmit values (e.g., head pointer and tail pointer) against corresponding values in the system memory 308 .
  • Exemplary embodiments of the present invention which implement NICs having memory-mapped transmit and receive buffers provide a number of benefits, some of which are mentioned above.
  • These benefits stem, in part, from processors, such as processor 304 , having direct access to buffer data.
  • A program thread (typically a kernel thread) running within the host operating system can efficiently initiate transmit-side or receive-side DMA and detect the completion of pending transmit-side or receive-side DMA operations.
  • DMA is initiated by writing to the memory-mapped command buffer 320 (which also has a corresponding memory space allocated within the user buffer portion 306 ) and DMA completion is detected by reading from the memory-mapped command buffer 320 .
  • Polling for command completion is efficient because the status of a DMA completion can be cached for repeated reading within processor 304 and no transfer of command status from NIC 218 to processor 304 is required until the DMA's completion status changes value.
  • the corresponding data within the processor's cache (not shown) can be invalidated. Any subsequent read of DMA completion status causes new valid data to be transferred from the NIC 218 to the processor 304 .
  • a program thread running on the host operating system can efficiently initiate a DMA transfer and detect the completion of that DMA transfer.
  • a method for communicating data in a processing system includes the steps illustrated in the flowchart of FIG. 4 .
  • a plurality of processing cells each of which include at least one processor and at least one system memory are provided.
  • Data is transmitted and received between the processing cells via a NIC associated with each of said plurality of processing cells at step 410 , wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and the NIC includes at least one memory-mapped buffer.
  • Exemplary embodiments of the present invention also facilitate page pinning by providing memory-mapped receive and transmit buffers which allow kernel programs running within a host operating system to efficiently control page pinning for DMA-based copying for the purposes of network transmission or network reception.
  • one or more kernel program threads running on the host operating system can regulate the flow of large amounts of data through both the transmission and reception process. This allows efficient pinning and unpinning of needed pages while not requiring that too many pages are pinned as described below with respect to the flowcharts of FIGS. 5 and 6 .
  • For transmission, in FIG. 5 , a thread performs page pinning actions needed for transmission at step 500 (e.g., informs the operating system that certain pages are not to be swapped until further notice), determines the fixed physical addresses for the pinned pages (step 502 ), and then initiates DMA to transmit data from the pinned pages (step 504 ).
  • When a thread detects the completion of message transmission at step 506 (e.g., by recognizing a change in a value associated with the memory-mapped transmit buffer, such as a change in a head pointer value), it can then unpin pages that have already been transmitted at step 508 .
  • the process illustrated in FIG. 5 can be repeated as necessary (either sequentially or in parallel, e.g., in a pipelined manner) such that, as DMA transmission progress is detected, new pages can be pinned as needed without unnecessarily pinning too many pages.
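The pipelined pin/transmit/unpin cycle of FIG. 5 can be sketched as a loop that bounds the number of simultaneously pinned pages. The OS and NIC hooks (`pin`, `unpin`, `dma_transmit`) are stand-in callables for illustration, not a real kernel API.

```python
# Hypothetical sketch: pin pages ahead of DMA transmission, but never
# hold more than max_pinned pages pinned at once (steps 500-508).
def transmit_pipelined(pages, pin, unpin, dma_transmit, max_pinned=2):
    pinned = []          # pinned pages awaiting transmission
    sent = []
    for page in pages:
        pin(page)                     # step 500: pin before DMA
        pinned.append(page)
        if len(pinned) >= max_pinned:
            done = pinned.pop(0)
            dma_transmit(done)        # step 504: DMA from a pinned page
            unpin(done)               # step 508: unpin after completion
            sent.append(done)
    for page in pinned:               # drain the tail of the pipeline
        dma_transmit(page)
        unpin(page)
        sent.append(page)
    return sent
```

The bound on `max_pinned` captures the goal stated above: pages are pinned as needed, without unnecessarily pinning too many at once.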
  • For reception, in FIG. 6 , a thread performs page pinning actions needed for reception at step 600 , determines the fixed physical addresses of the pinned pages (step 602 ), and then initiates DMA to copy received data to a pinned location (step 604 ).
  • When a thread detects the completion of message reception at step 606 (e.g., by detecting a change in a value associated with the memory-mapped receive buffer, such as a change in the tail pointer value), it can then unpin pages after the data copied thereto has been delivered to, e.g., an application buffer, based on delivery instructions as described above, at step 608 .
  • the process illustrated in FIG. 6 can be repeated (either sequentially or in parallel, e.g., in a pipelined manner), such that as DMA reception progress is detected, new pages can be pinned as needed without unnecessarily pinning too many pages.
  • Transmit-side and receive-side progress can be coordinated using protocols that send messages through the network of FIG. 2 .
  • a receiving NIC 218 can pin pages associated with its receive-side buffer 300 .
  • the receiving NIC 218 can then inform a transmitter, e.g., another NIC 218 in a different SMP 200 , by sending a message, that it is ready to receive a block of information into that receive-side buffer 300 .
  • the transmitter can then pin pages associated with the corresponding transmit-side buffer.
  • Data can then be transmitted using DMA to copy information from the transmitting user's buffer to the network.
  • Received data can be copied, using DMA, from the NIC's receive buffer 300 to the receiving user's buffer within user buffers region 306 .
  • the receiving NIC can unpin the pages associated with the receiving buffer 300 and send a message to the transmitting NIC indicating that reception is complete and that the transmit-side buffer can also be unpinned.
  • This cycle of page pinning for receiver and transmitter, DMA-based transmission and DMA-based reception, detection of DMA completion and page unpinning for receiver and transmitter can all be performed without interrupts or system calls, since the NICs are directly connected to their respective memory interconnects. This cycle can be repeated in order to move large amounts of data through a network without unnecessarily pinning too much memory or without incurring expensive overheads for handling system calls or interrupts.
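The coordination cycle above can be sketched as two endpoints exchanging "ready" and "complete" messages around pinned-page DMA. `Endpoint` and `rendezvous_cycle` are illustrative stand-ins for a NIC plus its kernel helper thread, not patent terminology.

```python
from collections import deque

class Endpoint:
    """Stand-in for a NIC and its kernel helper thread."""
    def __init__(self):
        self.pinned = False
        self.inbox = deque()
        self.peer = None
        self.wire = None                 # block "in flight" to this endpoint

    def pin(self):   self.pinned = True
    def unpin(self): self.pinned = False
    def send(self, msg): self.peer.inbox.append(msg)
    def recv(self):  return self.inbox.popleft()
    def dma_send(self, block):
        assert self.pinned               # DMA only touches pinned pages
        self.peer.wire = block
    def dma_recv(self):
        assert self.pinned
        return self.wire

def rendezvous_cycle(receiver, transmitter, block):
    receiver.pin()
    receiver.send("ready")               # receiver announces pinned buffer
    assert transmitter.recv() == "ready"
    transmitter.pin()
    transmitter.dma_send(block)          # DMA copy toward the receiver
    data = receiver.dma_recv()           # DMA copy into the user buffer
    receiver.unpin()
    receiver.send("complete")            # reception complete
    assert transmitter.recv() == "complete"
    transmitter.unpin()
    return data
```

No interrupts or system calls appear anywhere in the cycle; every hand-off is a message or a memory-mapped access.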
  • the receiver buffer 300 may be used to implement out of order delivery. For example, the processing of a first Ethernet frame for which delivery instructions are not yet available may be deferred. A second Ethernet frame, received subsequently by the NIC 218 relative to the first Ethernet frame, may be processed and delivered while the processing of the first frame is deferred. Meanwhile, delivery instructions can be inserted into the delivery instruction database 310 for the first Ethernet frame by a higher level application. Receive processing can then deliver that first Ethernet frame according to provided instructions.
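The deferred, out-of-order delivery just described can be sketched as follows: frames whose delivery instructions are not yet installed are set aside, later frames proceed immediately, and deferred frames drain once a higher-level application installs the missing instructions. Function and variable names here are illustrative assumptions.

```python
# Hypothetical sketch of deferred out-of-order delivery from buffer 300.
def process_frames(frames, instructions, deferred, delivered):
    """frames: list of (instruction_name, payload) tuples."""
    for name, payload in frames:
        if name in instructions:
            delivered.append((name, payload))
        else:
            deferred.append((name, payload))    # no instruction yet: defer

def install_instruction(name, instructions, deferred, delivered):
    """Higher-level application installs an instruction, draining any
    frames that were deferred while waiting for it."""
    instructions.add(name)
    still_deferred = []
    for dname, payload in deferred:
        if dname == name:
            delivered.append((dname, payload))  # now deliverable
        else:
            still_deferred.append((dname, payload))
    deferred[:] = still_deferred
```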
  • a kernel-mode helper thread can both access SMP operating system data structures as well as communicate directly with a NIC that is on the front-side bus.
  • the use of a kernel-mode thread enables processing systems and methods in accordance with these exemplary embodiments to perform the entire pin, transmit/receive data, unpin cycle without wasting bus cycles during polling and without using expensive interrupts on, e.g., an I/O bus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)

Abstract

Systems and methods for providing network interface controllers (NICs) with memory-mapped buffers are described. A processing system includes a plurality of processing cells, each including at least one processor and at least one system memory. A NIC is associated with each of the processing cells for transmitting and receiving data between the processing cells. Each of the cells further includes a memory interconnect to which the NIC is directly connected and the NIC includes at least one memory-mapped buffer.

Description

BACKGROUND
The present invention relates generally to data communication systems and methods and, more particularly, to data communication systems and methods in which memory-mapped receive and transmit buffers are provided to network interface controllers.
A network interface controller (NIC) is a hardware device that supports the transmission of data between computers as illustrated in FIG. 1. Therein, a symmetric multiprocessor (SMP) system 10 includes a number of central processor units (CPUs) 12 which share memory unit 14 via a memory interconnect 16. Although SMP 10 is shown as having four CPUs (cores), those skilled in the art will appreciate that SMP 10 can have more or fewer CPUs. SMP 10 sends messages to other SMPs 20, 30 and 40 under the control of NIC 18 via Ethernet connections and a fabric (switch) 22. The NIC 18 will typically have a processor (not shown) associated therewith, either as an integral part of the NIC or in the form of a helper processor, so that the NIC has sufficient intelligence to interpret various commands. The NIC 18, as well as various I/O devices 42, 44, are connected to the rest of the SMP via an I/O interconnect 46. The I/O interconnect 46 communicates with the memory interconnect 16 via an I/O adapter 48 (e.g., a bridge).
A common source and destination for transmitted data in such systems is paged virtual memory. Paged virtual memory provides for virtual addresses which are translated or mapped onto physical pages and enables virtual pages to be swapped out to disk or removed from main memory and later swapped in from disk to a new physical page location. An operating system can unilaterally perform page swaps of so-called “unpinned” virtual pages. Thus, application software operating on such systems typically accesses main memory using address translation hardware that ensures that the correct physical page is accessed, e.g., that the operating system has not initiated a page swap for the page that the application software needs to access. Software access pauses during time intervals when needed data is swapped out and resumes by accessing a new physical location when data is swapped in at that location.
Some networking solutions address the downtime associated with software suspension during virtual page swapping by providing for software to copy data from unpinned virtual memory to pinned interface memory. Pinned memory consists of pages that cannot be swapped to disk by the operating system. In such systems, the NIC 18 will typically access only pinned interface memory. This simplifies direct memory access (DMA) transfers performed by the NIC 18, since data is never swapped during a network operation which, in turn, guarantees that data remains accessible throughout a NIC's DMA data transfer and that the physical address of the data remains constant. However, such solutions require extra overhead in the form of data copying (e.g., copying from unpinned virtual memory to a pinned system buffer accessible by the NIC 18) that utilizes important system resources.
Another solution to the issue posed by unpinned virtual memory eliminates the above-described data copying but instead requires that the NIC 18 invoke an operating system function to pin a virtual page prior to transmitting data directly from or to that page. Additionally, the page must later be unpinned by a further NIC/operating system interaction in order to allow page swapping after network activity is finished. While this eliminates copies to pinned pages, the NIC 18 must now invoke expensive page pinning and page unpinning functions. Each of these operations requires communication between the NIC's processor and the operating system. When these communications require interrupts or polling of the I/O interconnect 46, they are very expensive in terms of resource utilization efficiency.
Accordingly, it would be desirable to provide mechanisms and methods which enable a NIC to more efficiently deal with data transfer issues.
SUMMARY
According to one exemplary embodiment of the present invention, a processing system includes a plurality of processing cells, each of the processing cells including at least one processor and at least one system memory, and a network interface controller (NIC) associated with each of the plurality of processing cells for transmitting and receiving data between the processing cells, wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and said NIC includes at least one memory-mapped buffer.
According to another exemplary embodiment of the present invention, a method for communicating data in a processing system includes the steps of providing a plurality of processing cells, each of the processing cells including at least one processor and at least one system memory, and transmitting and receiving data between the processing cells via a network interface controller (NIC) associated with each of the plurality of processing cells, wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and the NIC includes at least one memory-mapped buffer.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings:
FIG. 1 illustrates an exemplary processing system in which a network interface controller is connected to the processing system via an I/O interconnect;
FIG. 2 illustrates an exemplary processing system in which a NIC is connected to the processing system via a direct connection to a memory interconnect according to an exemplary embodiment;
FIG. 3 depicts a portion of a NIC according to an exemplary embodiment of the present invention including a memory-mapped receive buffer;
FIG. 4 is a flow chart depicting a general method for communicating data according to an exemplary embodiment of the present invention;
FIG. 5 is a flow chart depicting a method for managing memory according to an exemplary embodiment of the present invention; and
FIG. 6 is a flow chart depicting a method for managing memory according to another exemplary embodiment of the present invention.
DETAILED DESCRIPTION
The following description of the exemplary embodiments of the present invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
According to exemplary embodiments of the present invention, transmit and receive command and data buffers within the NIC are mapped directly onto the processors' memory interconnect (also sometimes referred to as a “memory bus”, “coherent interconnect” or “front-side bus”). This allows, among other things, for: efficient copying of data to the NIC; efficient copying of commands to the NIC; efficient copying of data from the NIC, and efficient detection of data arrival and of command completion. To implement such memory-mapped buffers, the overall architecture of an SMP is modified such that the NIC is connected directly to the memory interconnect (rather than indirectly via an I/O interconnect 46 and I/O adapter 48 as shown in FIG. 1). A general architecture which supports such memory-mapped transmit and receive buffers according to an exemplary embodiment will now be described with respect to FIG. 2.
Therein, an SMP system 200 includes a number of central processor units (CPUs) 212 which share memory unit 214 via a memory interconnect 216. Although SMP 200 is shown as having four CPUs (cores), those skilled in the art will appreciate that SMP 200 can have more or fewer CPUs. SMP 200 sends messages to other SMPs 200 under the control of NIC 218 via Ethernet connections and a fabric (switch) 220. The NIC 218 will typically have a processor (not shown) associated therewith, either as an integral part of the NIC or in the form of a helper processor, so that the NIC has sufficient intelligence to interpret various commands. Various I/O devices 230, e.g., displays, secondary storage devices, etc., are connected to the memory interconnect 216 via an I/O adapter 238 (e.g., a bridge) and an I/O interconnect 240. As can be seen from a comparison of FIGS. 1 and 2, the exemplary embodiment of FIG. 2 provides for the NIC 218 to be directly connected to the memory interconnect 216 rather than indirectly via an I/O interconnect and I/O adapter. Among other things, this architecture facilitates memory-mapping of transmit and/or receive buffers within the NIC 218. As used herein, the term “memory-mapping” refers to the ability of the CPUs 212 to write (or read) directly to (or from) the transmit (or receive) buffers within the NIC 218 via reserved memory locations within memory 214 that correspond to the buffers.
Thus, the receive side of the exemplary system of FIG. 2 (a portion of which is illustrated as FIG. 3) will include one or more memory-mapped receive buffers 300. As shown therein, the NIC 218 is directly connected to the memory interconnect 216 in the sense that data does not have to pass through an I/O bus on its way between the NIC 218, on the one hand, and system memory 308 or processor 304 on the other hand. As used herein, the phrase “directly connected” does not exclude the provision of interfaces, e.g., memory interface circuitry 305, between a buffer and the memory interconnect.
On the left hand side of FIG. 3, data arrives through, e.g., an Ethernet interface, and is placed into the memory-mapped receive buffer 300 by receive queue fill circuitry 302. According to this exemplary embodiment, the memory-mapped receive buffer 300 is a circular queue data structure, although the present invention is not limited to circular queues. In this example, however, the receive buffer 300 has contents stored therein (in the “live region”) which are identified by tail and head pointers that indicate most recently deposited data and least recently deposited data, respectively. As it is received, data is inserted at the tail of the receive buffer 300 and the tail pointer is moved across newly inserted data. Data is removed from the live region of the receive buffer 300 by moving the head pointer across unneeded data.
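The circular-queue behavior described above can be sketched as follows. This is an illustrative model only; the names (`rx_ring`, `RX_RING_SIZE`, `rx_deposit`, `rx_consume`) and the power-of-two sizing are assumptions, not the patent's actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the circular receive queue described above. */
#define RX_RING_SIZE 4096  /* power of two so indices wrap with a mask */

struct rx_ring {
    uint8_t data[RX_RING_SIZE];
    size_t  head;  /* least recently deposited data (consumer side) */
    size_t  tail;  /* most recently deposited data (producer side) */
};

/* Bytes currently in the live region between head and tail. */
static size_t rx_live_bytes(const struct rx_ring *r)
{
    return (r->tail - r->head) & (RX_RING_SIZE - 1);
}

/* Fill-circuitry behavior: deposit a frame beyond the tail, then
 * advance the tail across the newly inserted data. Returns 0 and
 * drops the frame if it would overflow the live region. */
static int rx_deposit(struct rx_ring *r, const uint8_t *frame, size_t len)
{
    if (len >= RX_RING_SIZE - rx_live_bytes(r))
        return 0;                      /* queue full: drop the packet */
    for (size_t i = 0; i < len; i++)
        r->data[(r->tail + i) & (RX_RING_SIZE - 1)] = frame[i];
    r->tail = (r->tail + len) & (RX_RING_SIZE - 1);
    return 1;
}

/* Consumer behavior: remove data by moving the head across it. */
static void rx_consume(struct rx_ring *r, size_t len)
{
    r->head = (r->head + len) & (RX_RING_SIZE - 1);
}
```

Note that, as in the patent, only the producer moves the tail and only the consumer moves the head, so no lock is needed between the fill circuitry and the processor.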
When the receive buffer 300 is full, incoming packets can be dropped in order to prevent buffer overflow. When a reliable protocol (such as Transmission Control Protocol or TCP) is used to retransmit packets, correct operation is maintained even though packets are dropped. Other techniques can be used to prevent buffer overflow. For example, instead of dropping a most recently received frame, the system could drop a frame which was received earlier, e.g., an oldest received frame. The receive queue fill circuitry 302 deposits data from an arriving Ethernet frame into a region of the receive buffer 300 beyond the tail pointer. The tail pointer is advanced across valid data from the Ethernet frame. The advancement of the tail pointer across arriving data indicates that new data has arrived and may be processed. The receive buffer head and tail pointers are also memory-mapped, i.e., the values associated with the receive buffer 300's head pointer and tail pointer are automatically updated (via memory interface circuitry 305) at predetermined memory locations within the user buffer area 306 of system memory 308. This enables the head and tail pointer values to be directly accessed by processor 304, which can make memory references using the memory interconnect 216. This, in turn, enables processor 304 to read, for example, the receive buffer tail pointer and compare its current value to a previous receive buffer tail pointer value. When the value changes, the processor 304 knows that new Ethernet frame data has arrived.
This latter feature illustrates the capability of processing systems and methods according to exemplary embodiments of the present invention to implement efficient polling of the NIC 218 using cache coherent, memory interconnect access. As long as the data within the receive buffer 300 remains unchanged, the processor 304 may read a cached value of that data that is held within the processor. This could occur, for example, during periods of time when no message arrives into the NIC 218. When a new message arrives from the network, data is delivered into the receive buffer 300 and the receive queue fill circuitry causes data within the receive buffer 300 to change value. When values within the receive buffer 300 change, and that data is read by the processor 304, the data is transferred to the processor 304 based on cache coherent shared memory protocols operating in the system, and the processor 304 then observes the changed value in its cache memory (not shown in FIG. 3). This causes the processor 304 to detect a newly arriving message. Note that this polling technique does not require the transfer of information from the receive buffer 300 to the polling processor until a message is received.
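The polling described above can be sketched as a single comparison against a remembered tail value. The function and parameter names below are illustrative assumptions; `tail_mmio` stands in for the memory-mapped location of the receive buffer's tail pointer:

```c
#include <stdint.h>

/* Sketch of cache-coherent polling of a memory-mapped tail pointer. */
static int rx_poll(volatile const uint32_t *tail_mmio, uint32_t *last_tail)
{
    /* While the NIC has not moved the tail, this load is satisfied
     * from the processor's cache and generates no interconnect
     * traffic. A NIC update invalidates the cached line, so the next
     * load observes the new value via the coherence protocol. */
    uint32_t t = *tail_mmio;
    if (t == *last_tail)
        return 0;   /* no new data */
    *last_tail = t; /* remember progress for the next poll */
    return 1;       /* tail advanced: new frame data has arrived */
}
```

This is why the spin loop is cheap: in the common no-message case the poll never leaves the processor's cache.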
In addition to enabling efficient polling, exemplary embodiments of the present invention provide for efficient data transport to its post-receive location. Data is transported from Ethernet frames residing within the receive buffer 300 to one or more application buffers (not shown) within one or more user applications. Higher-level software protocols may specify data delivery instructions which are stored in a delivery instruction database 310 in system memory 308. These delivery instructions may be selectively retrieved from the database 310 for use in handling received data based upon header information that is embedded in the Ethernet frame. Applications may have complex data delivery requirements that allow arriving data to be matched against tag information (or other information) to determine the proper location for the data after it is received in the receive buffer 300. For example, Message Passing Interface (MPI) techniques provide for tag and rank information to be matched to determine into which read buffer an arriving message is to be placed. In another example, header data may indicate a specific TCP socket to which data should be delivered. Low-level receive buffer processing software may then deliver data with a single copy from the receive buffer 300 to the proper target location. Note that although this exemplary embodiment describes receive operations in the context of a single receive buffer 300, plural receive buffers can also be implemented in a similar manner to support, e.g., scatter operation.
Another example of data delivery instructions that can be stored in delivery instruction database 310 are those which support remote direct memory access (RDMA). An RDMA delivery instruction describes a buffer region (not shown) into which data can be directly placed, e.g., a list of physical pages that contain data that is contiguous in virtual memory. An incoming Ethernet packet can carry the name of the RDMA delivery instruction, a buffer offset, a length, and actual data. Using the referenced RDMA delivery instruction, the data is delivered into the buffer region starting at the desired offset. This RDMA delivery instruction can be reused until all data has been successfully placed in the buffer. At this time, the instruction may be removed from the RDMA delivery database. A higher-layer protocol can be used to install/remove RDMA delivery instructions to/from the delivery instruction database 310 and a lower-layer protocol then uses these delivery instructions to directly deliver data.
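A possible in-memory shape for such an RDMA delivery instruction is sketched below. The field names, the page-list bound, and the `rdma_target` helper are illustrative assumptions about how "a list of physical pages that contain data that is contiguous in virtual memory" might be represented:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout for an RDMA delivery instruction. */
#define MAX_PAGES 16

struct rdma_delivery_instruction {
    uint32_t  name;                 /* identifier carried by the packet  */
    uintptr_t phys_page[MAX_PAGES]; /* physical pages backing a buffer
                                       that is contiguous in virtual
                                       memory                            */
    size_t    page_size;
    size_t    bytes_placed;         /* progress toward completion       */
    size_t    buffer_len;           /* instruction removable when all
                                       buffer_len bytes are placed      */
};

/* Resolve the buffer offset named by an incoming packet to the
 * physical address where its payload should be deposited. */
static uintptr_t rdma_target(const struct rdma_delivery_instruction *di,
                             size_t offset)
{
    return di->phys_page[offset / di->page_size] + offset % di->page_size;
}
```

The offset-to-page translation is the key point: the incoming packet carries only (name, offset, length, data), and the pre-installed instruction supplies the physical placement.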
Thus, when an incoming Ethernet frame is processed, the processor 304 can use information obtained from the Ethernet packet as well as information obtained from the delivery database 310 in order to determine a target address location. DMA commands are then inserted at the tail of the DMA command buffer 320 in order to initiate appropriate DMA transfers via DMA engine 324. As DMA commands are completed, the DMA command buffer's head pointer moves across them to signal command completion. When a DMA command is known to be complete, data within the receive buffer 300 that has been copied to the final user buffer location is no longer needed. The receive buffer's head pointer may then be moved across such data without data loss.
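The command-buffer interaction above can be modeled with the same head/tail discipline as the receive buffer: software issues at the tail, the NIC retires at the head. The ring layout and names below are assumptions for illustration:

```c
#include <stdint.h>

/* Sketch of the DMA command buffer 320's issue/completion protocol. */
#define CMD_RING_SIZE 256

struct dma_cmd { uintptr_t src, dst; uint32_t len; };

struct cmd_ring {
    struct dma_cmd cmd[CMD_RING_SIZE];
    uint32_t head;   /* advanced by the NIC as commands complete   */
    uint32_t tail;   /* advanced by software as commands are issued */
};

/* Issue a copy from the receive buffer to the user's target address
 * by inserting a command at the tail. */
static void dma_issue(struct cmd_ring *q, struct dma_cmd c)
{
    q->cmd[q->tail % CMD_RING_SIZE] = c;
    q->tail++;
}

/* A command is complete once the head has moved across it; the
 * source region in the receive buffer may then be reclaimed. */
static int dma_done(const struct cmd_ring *q, uint32_t cmd_index)
{
    return q->head > cmd_index;
}
```

Because the head pointer is itself memory-mapped, `dma_done` is the same cheap cached comparison used for receive polling.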
Transmit-side operations in the NIC 218 can be performed in much the same way as the receive-side operations discussed above. A memory-mapped transmit buffer (not shown), or multiple memory-mapped transmit buffers, can be provided in the NIC 218, each having a corresponding memory space allocated within the user buffers area 306 of system memory 308. The processor 304 can readily monitor transmit activities by, for example, comparing its cached transmit values (e.g., head pointer and tail pointer) against corresponding values in the system memory 308. Exemplary embodiments of the present invention which implement NICs having memory-mapped transmit and receive buffers provide a number of benefits, some of which are mentioned above. In addition to processors, such as processor 304, having direct access to buffer data, program threads (typically a kernel thread), running on the host operating system, can directly communicate with the NIC 218 through memory-mapped command and data buffers for transmission and reception.
For example, a program thread running within the host operating system can efficiently initiate transmit-side or receive-side DMA and detect the completion of pending transmit-side or receive-side DMA operations. DMA is initiated by writing to the memory-mapped command buffer 320 (which also has a corresponding memory space allocated within the user buffers portion 306) and DMA completion is detected by reading from the memory-mapped command buffer 320. Polling for command completion is efficient because the status of a DMA completion can be cached for repeated reading within processor 304 and no transfer of command status from NIC 218 to processor 304 is required until the DMA's completion status changes value. Whenever the value of data that represents the DMA's completion status changes, the corresponding data within the processor's cache (not shown) can be invalidated. Any subsequent read of DMA completion status causes new valid data to be transferred from the NIC 218 to the processor 304. Thus a program thread running on the host operating system can efficiently initiate a DMA transfer and detect the completion of that DMA transfer.
Thus, according to one general exemplary embodiment of the present invention, a method for communicating data in a processing system includes the steps illustrated in the flowchart of FIG. 4. Therein, at step 400, a plurality of processing cells is provided, each of which includes at least one processor and at least one system memory. Data is transmitted and received between the processing cells via a NIC associated with each of said plurality of processing cells at step 410, wherein each of the plurality of cells further includes a memory interconnect to which the NIC is directly connected and the NIC includes at least one memory-mapped buffer.
As mentioned above in the Background section, page pinning associated with NIC data transfer is also an interesting issue for SMP system designers (as well as designers of other processing systems). Exemplary embodiments of the present invention also facilitate page pinning by providing memory-mapped receive and transmit buffers which allow kernel programs running within a host operating system to efficiently control page pinning for DMA-based copying for the purposes of network transmission or network reception. In one exemplary embodiment, one or more kernel program threads running on the host operating system can regulate the flow of large amounts of data through both the transmission and reception process. This allows efficient pinning and unpinning of needed pages while not requiring that too many pages are pinned as described below with respect to the flowcharts of FIGS. 5 and 6.
For transmission, in FIG. 5, a thread performs page pinning actions needed for transmission at step 500 (e.g., informs the operating system that certain pages are not to be swapped until further notice), determines the fixed physical addresses for the pinned pages (step 502), and then initiates DMA to transmit data from the pinned pages (step 504). When a thread detects the completion of message transmission at step 506 (e.g., by recognizing a change in a value associated with the memory-mapped transmit buffer, such as a change in a head pointer value), the thread can then unpin pages that have already been transmitted at step 508. The process illustrated in FIG. 5 can be repeated as necessary (either sequentially or in parallel, e.g., in a pipelined manner) such that, as DMA transmission progress is detected, new pages can be pinned as needed without unnecessarily pinning too many pages.
For reception, in FIG. 6, a thread performs page pinning actions needed for reception at step 600, determines the fixed physical addresses of the pinned pages (step 602), then initiates DMA to copy received data to a pinned location (step 604). When a thread detects the completion of message reception at step 606 (e.g., by detecting a change in a value associated with the memory-mapped receive buffer, such as a change in tail pointer value), the thread can then unpin pages after the data copied thereto has subsequently been delivered to, e.g., an application buffer, based on delivery instructions as described above, at step 608. The process illustrated in FIG. 6 can be repeated (either sequentially or in parallel, e.g., in a pipelined manner), such that as DMA reception progress is detected, new pages can be pinned as needed without unnecessarily pinning too many pages.
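The pipelined pin/DMA/unpin cycle of FIGS. 5 and 6 can be sketched as a sliding window of pinned pages. This is a simplified model under stated assumptions: the windowing policy, the counter-based bookkeeping, and all names (`tx_state`, `tx_step`, `WINDOW`) are illustrative, and the actual OS pinning calls and NIC head-pointer reads are abstracted into counters:

```c
#include <stddef.h>

/* Model of the pipelined transmit cycle of FIG. 5: pin a window of
 * pages, transmit from them via DMA, and unpin pages as the
 * memory-mapped head pointer is observed to move across them. */
#define TOTAL_PAGES 64
#define WINDOW      8   /* bound on simultaneously pinned pages */

struct tx_state {
    size_t pinned;       /* pages pinned so far (steps 500-502)      */
    size_t transmitted;  /* pages the NIC head pointer has crossed   */
    size_t unpinned;     /* pages returned to the pageable pool      */
};

/* One polling step: unpin completed pages, then pin further pages as
 * the window opens, never holding more than WINDOW pages pinned. */
static void tx_step(struct tx_state *s, size_t head_progress)
{
    s->transmitted = head_progress;        /* read memory-mapped head */
    s->unpinned = s->transmitted;          /* step 508: unpin sent pages */
    size_t want = s->transmitted + WINDOW; /* steps 500-504: pin ahead */
    if (want > TOTAL_PAGES)
        want = TOTAL_PAGES;
    if (want > s->pinned)
        s->pinned = want;
}

/* Pages currently pinned in memory at any instant. */
static size_t tx_pinned_now(const struct tx_state *s)
{
    return s->pinned - s->unpinned;
}
```

The invariant `tx_pinned_now() <= WINDOW` captures the point of the flowcharts: large transfers proceed without ever pinning too many pages at once, and without interrupts or system calls in the steady state.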
Transmit-side and receive-side progress can be coordinated using protocols that send messages through the network of FIG. 2. For example, a receiving NIC 218 can pin pages associated with its receive-side buffer 300. The receiving NIC 218 can then inform a transmitter, e.g., another NIC 218 in a different SMP 200, by sending a message, that it is ready to receive a block of information into that receive-side buffer 300. The transmitter can then pin pages associated with the corresponding transmit-side buffer. Data can then be transmitted using DMA to copy information from the transmitting user's buffer to the network. Received data can be copied, using DMA, from the NIC's receive buffer 300 to the receiving user's buffer within user buffers region 306. After copying is complete, the receiving NIC can unpin the pages associated with the receiving buffer 300 and send a message to the transmitting NIC indicating that reception is complete and that the transmit-side buffer can also be unpinned. This cycle of page pinning for receiver and transmitter, DMA-based transmission and DMA-based reception, detection of DMA completion and page unpinning for receiver and transmitter can all be performed without interrupts or system calls, since the NICs are directly connected to their respective memory interconnects. This cycle can be repeated in order to move large amounts of data through a network without unnecessarily pinning too much memory or without incurring expensive overheads for handling system calls or interrupts.
Systems and methods according to these exemplary embodiments can also be used to perform other functions. For example, among other things, the receive buffer 300 may be used to implement out-of-order delivery. For example, the processing of a first Ethernet frame for which delivery instructions are not yet available may be deferred. A second Ethernet frame, received subsequently by the NIC 218 relative to the first Ethernet frame, may be processed and delivered while the processing of the first frame is deferred. Meanwhile, delivery instructions can be inserted into the delivery instruction database 310 for the first Ethernet frame by a higher level application. Receive processing can then deliver that first Ethernet frame according to provided instructions.
Thus, according to exemplary embodiments, a kernel-mode helper thread can both access SMP operating system data structures as well as communicate directly with a NIC that is on the front-side bus. The use of a kernel-mode thread enables processing systems and methods in accordance with these exemplary embodiments to perform the entire pin, transmit/receive data, unpin cycle without wasting bus cycles during polling and without using expensive interrupts on, e.g., an I/O bus.
The foregoing description of exemplary embodiments of the present invention provides illustration and description, but it is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The following claims and their equivalents define the scope of the invention.

Claims (20)

The invention claimed is:
1. A processing system comprising:
a plurality of processing cells, each of said processing cells including at least one processor and at least one system memory; and
a network interface controller (NIC) associated with each of said plurality of processing cells for transmitting and receiving data between said processing cells;
wherein each of said plurality of cells further includes a memory interconnect to which said NIC is directly connected and said NIC includes at least one memory-mapped buffer that is directly read and write accessible from a processing cell associated with said NIC.
2. The system of claim 1 wherein said NIC further comprises:
wherein said at least one memory-mapped buffer is one of a memory-mapped transmit buffer and a memory-mapped receive buffer, wherein a portion of said at least one system memory is allocated for memory-mapping said at least one of a memory-mapped transmit buffer and a memory-mapped receive buffer.
3. The system of claim 2, wherein said at least one of said memory-mapped transmit buffer and said memory-mapped receive buffer is a circular queue having a head pointer and a tail pointer and further wherein current values of said head pointer and said tail pointer are mapped to predetermined locations within said system memory.
4. The system of claim 3, wherein said at least one of a memory-mapped transmit buffer and a memory-mapped receive buffer is a memory mapped receive buffer and further wherein said at least one processor compares a previously stored value of one of said head pointer and said tail pointer with a respective current value of said one of said head pointer and said tail pointer as a polling mechanism for said memory-mapped receive buffer.
5. The system of claim 3, wherein said at least one processor compares a previously stored value of at least one of said head pointer and said tail pointer with a respective current value of said at least one of said head pointer and said tail pointer to determine whether to unpin one or more pages in said system memory.
6. The system of claim 1, further comprising:
an I/O interconnect through which I/O devices communicate with said at least one processor and at least one system memory; and
an I/O adapter for connecting said I/O interconnect to said memory interconnect.
7. The system of claim 1, wherein said system memory further comprises a delivery instruction database and wherein said at least one of a memory-mapped transmit buffer and a memory-mapped receive buffer is a memory-mapped receive buffer, such that said at least one processor operates to forward data received in said memory-mapped receive buffer in accordance with an instruction from said delivery instruction database.
8. The system of claim 7, wherein said processor performs a direct memory access (DMA) transfer of said data in accordance with said instruction from said delivery instruction database.
9. The system of claim 6, further comprising:
an operating system application running on said at least one processor;
a kernel-mode helper thread operating under said operating system which can access operating system data structures and communicate directly with said NIC,
wherein said kernel-mode helper thread performs a pin, transmit/receive data, unpin cycle without interrupting said I/O interconnect.
10. A method for communicating data in a processing system comprising the steps of:
providing a plurality of processing cells, each of said processing cells including at least one processor and at least one system memory; and
transmitting and receiving data between said processing cells via a network interface controller (NIC) associated with each of said plurality of processing cells;
wherein each of said plurality of cells further includes a memory interconnect to which said NIC is directly connected and said NIC includes at least one memory-mapped buffer that is directly read and write accessible from a processing cell associated with said NIC.
11. The method of claim 10 further comprising the step of:
allocating a portion of said at least one system memory for memory-mapping said at least one memory-mapped buffer.
12. The method of claim 11, wherein said at least one memory-mapped buffer is a circular queue having a head pointer and a tail pointer and further comprising the step of:
mapping current values of said head pointer and said tail pointer to predetermined locations within said system memory.
13. The method of claim 12, wherein said at least one memory-mapped buffer is a memory mapped receive buffer and further comprising the step of:
comparing, by said at least one processor, a previously stored value of one of said head pointer and said tail pointer with a respective current value of said one of said head pointer and said tail pointer as a polling mechanism for said memory-mapped receive buffer.
14. The method of claim 12, further comprising the step of:
comparing, by said at least one processor, a previously stored value of at least one of said head pointer and said tail pointer with a respective current value of said at least one of said head pointer and said tail pointer to determine whether to unpin one or more pages in said system memory.
15. The method of claim 10, further comprising the step of:
connecting I/O devices with said at least one processor and said at least one system memory via an I/O interconnect; and
connecting said I/O interconnect to said memory interconnect via an I/O adapter.
16. The method of claim 10, wherein said system memory further comprises a delivery instruction database and wherein said at least one memory-mapped buffer is a memory-mapped receive buffer, said method further comprising the step of:
forwarding, by said at least one processor, data received in said memory-mapped receive buffer in accordance with an instruction from said delivery instruction database.
17. The method of claim 16, further comprising the step of:
performing a direct memory access (DMA) transfer of said data in accordance with said instruction from said delivery instruction database.
18. The method of claim 16, further comprising the steps of:
running an operating system application on said at least one processor;
operating a kernel-mode helper thread under said operating system which can access operating system data structures and communicate directly with said NIC; and
performing, by said kernel-mode helper thread, a pin, transmit/receive data, unpin cycle without interrupting said I/O interconnect.
19. The method of claim 10 further comprising the steps of:
performing page pinning actions associated with data transmission in said NIC;
determining fixed, physical addresses for pinned pages;
initiating a direct memory access (DMA) transfer from said pinned pages;
detecting completion of said data transmission by recognizing a change in a value associated with a memory-mapped transmit buffer; and
unpinning pages from which data has been transmitted.
20. The method of claim 10 further comprising the steps of:
performing page pinning actions associated with data reception in said NIC;
determining fixed, physical addresses for pinned pages;
initiating a direct memory access (DMA) to copy received data to said pinned pages;
detecting completion of said data reception by recognizing a change in a value associated with a memory-mapped receive buffer; and
unpinning pages after data has been delivered from said pinned pages to another location.
US11/493,285 2006-07-26 2006-07-26 Memory-mapped buffers for network interface controllers Active 2031-07-29 US9137179B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/493,285 US9137179B2 (en) 2006-07-26 2006-07-26 Memory-mapped buffers for network interface controllers
CN200710137000.6A CN101115054B (en) 2006-07-26 2007-07-26 For the buffer of the memory mapped of network interface controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/493,285 US9137179B2 (en) 2006-07-26 2006-07-26 Memory-mapped buffers for network interface controllers

Publications (2)

Publication Number Publication Date
US20080028103A1 US20080028103A1 (en) 2008-01-31
US9137179B2 true US9137179B2 (en) 2015-09-15

Family

ID=38987719

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/493,285 Active 2031-07-29 US9137179B2 (en) 2006-07-26 2006-07-26 Memory-mapped buffers for network interface controllers

Country Status (2)

Country Link
US (1) US9137179B2 (en)
CN (1) CN101115054B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002343180A1 (en) * 2001-12-14 2003-06-30 Koninklijke Philips Electronics N.V. Data processing system having multiple processors
US20050091334A1 (en) * 2003-09-29 2005-04-28 Weiyi Chen System and method for high performance message passing
US20080065835A1 (en) * 2006-09-11 2008-03-13 Sun Microsystems, Inc. Offloading operations for maintaining data coherence across a plurality of nodes
US7813342B2 (en) * 2007-03-26 2010-10-12 Gadelrab Serag Method and apparatus for writing network packets into computer memory
US20090089475A1 (en) * 2007-09-28 2009-04-02 Nagabhushan Chitlur Low latency interface between device driver and network interface card
US8145749B2 (en) * 2008-08-11 2012-03-27 International Business Machines Corporation Data processing in a hybrid computing environment
US7984267B2 (en) * 2008-09-04 2011-07-19 International Business Machines Corporation Message passing module in hybrid computing system starting and sending operation information to service program for accelerator to execute application program
US8141102B2 (en) * 2008-09-04 2012-03-20 International Business Machines Corporation Data processing in a hybrid computing environment
US8230442B2 (en) * 2008-09-05 2012-07-24 International Business Machines Corporation Executing an accelerator application program in a hybrid computing environment
US8527734B2 (en) * 2009-01-23 2013-09-03 International Business Machines Corporation Administering registered virtual addresses in a hybrid computing environment including maintaining a watch list of currently registered virtual addresses by an operating system
US9286232B2 (en) * 2009-01-26 2016-03-15 International Business Machines Corporation Administering registered virtual addresses in a hybrid computing environment including maintaining a cache of ranges of currently registered virtual addresses
US8843880B2 (en) * 2009-01-27 2014-09-23 International Business Machines Corporation Software development for a hybrid computing environment
US8255909B2 (en) * 2009-01-28 2012-08-28 International Business Machines Corporation Synchronizing access to resources in a hybrid computing environment
US20100191923A1 (en) * 2009-01-29 2010-07-29 International Business Machines Corporation Data Processing In A Computing Environment
US9170864B2 (en) * 2009-01-29 2015-10-27 International Business Machines Corporation Data processing in a hybrid computing environment
US8001206B2 (en) * 2009-01-29 2011-08-16 International Business Machines Corporation Broadcasting data in a hybrid computing environment
KR101070511B1 (en) * 2009-03-20 2011-10-05 (주)인디링스 Solid state drive controller and method for operating of the solid state drive controller
US8037217B2 (en) * 2009-04-23 2011-10-11 International Business Machines Corporation Direct memory access in a hybrid computing environment
US8180972B2 (en) 2009-08-07 2012-05-15 International Business Machines Corporation Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally
US8478965B2 (en) * 2009-10-30 2013-07-02 International Business Machines Corporation Cascaded accelerator functions
US9417905B2 (en) * 2010-02-03 2016-08-16 International Business Machines Corporation Terminating an accelerator application program in a hybrid computing environment
US8578132B2 (en) * 2010-03-29 2013-11-05 International Business Machines Corporation Direct injection of data to be transferred in a hybrid computing environment
US9015443B2 (en) 2010-04-30 2015-04-21 International Business Machines Corporation Reducing remote reads of memory in a hybrid computing environment
KR20140065009A (en) * 2011-10-27 2014-05-28 후아웨이 테크놀러지 컴퍼니 리미티드 Data-fast-distribution method and device
CN102541803A (en) * 2011-12-31 2012-07-04 曙光信息产业股份有限公司 Data sending method and computer
CN104136474B (en) * 2012-03-07 2016-06-22 阿克佐诺贝尔国际涂料股份有限公司 Non-aqueous liquid coating composition
US9288163B2 (en) * 2013-03-15 2016-03-15 Avago Technologies General Ip (Singapore) Pte. Ltd. Low-latency packet receive method for networking devices
AU2013245529A1 (en) * 2013-10-18 2015-05-07 Cisco Technology, Inc. Network Interface
US20150326684A1 (en) * 2014-05-07 2015-11-12 Diablo Technologies Inc. System and method of accessing and controlling a co-processor and/or input/output device via remote direct memory access
US9742855B2 (en) * 2014-09-04 2017-08-22 Mellanox Technologies, Ltd. Hybrid tag matching
US10540317B2 (en) 2015-07-08 2020-01-21 International Business Machines Corporation Efficient means of combining network traffic for 64Bit and 31 bit workloads
US10078615B1 (en) * 2015-09-18 2018-09-18 Aquantia Corp. Ethernet controller with integrated multi-media payload de-framer and mapper
US11671382B2 (en) 2016-06-17 2023-06-06 Intel Corporation Technologies for coordinating access to data packets in a memory
US10341264B2 (en) * 2016-06-30 2019-07-02 Intel Corporation Technologies for scalable packet reception and transmission
US10999209B2 (en) 2017-06-28 2021-05-04 Intel Corporation Technologies for scalable network packet processing with lock-free rings
US10672095B2 (en) 2017-12-15 2020-06-02 Ati Technologies Ulc Parallel data transfer to increase bandwidth for accelerated processing devices
CN112596960B (en) * 2020-11-25 2023-06-13 新华三云计算技术有限公司 Distributed storage service switching method and device

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5515538A (en) * 1992-05-29 1996-05-07 Sun Microsystems, Inc. Apparatus and method for interrupt handling in a multi-threaded operating system kernel
US5659798A (en) * 1996-02-02 1997-08-19 Blumrich; Matthias Augustin Method and system for initiating and loading DMA controller registers by using user-level programs
US5764896A (en) * 1996-06-28 1998-06-09 Compaq Computer Corporation Method and system for reducing transfer latency when transferring data from a network to a computer system
US5859975A (en) * 1993-12-15 1999-01-12 Hewlett-Packard, Co. Parallel processing computer system having shared coherent memory and interconnections utilizing separate unidirectional request and response lines for direct communication or using crossbar switching device
US5887134A (en) * 1997-06-30 1999-03-23 Sun Microsystems System and method for preserving message order while employing both programmed I/O and DMA operations
US6067608A (en) * 1997-04-15 2000-05-23 Bull Hn Information Systems Inc. High performance mechanism for managing allocation of virtual memory buffers to virtual processes on a least recently used basis
US6067563A (en) * 1996-09-12 2000-05-23 Cabletron Systems, Inc. Method and apparatus for avoiding control reads in a network node
US6070219A (en) * 1996-10-09 2000-05-30 Intel Corporation Hierarchical interrupt structure for event notification on multi-virtual circuit network interface controller
US6185438B1 (en) * 1998-10-01 2001-02-06 Samsung Electronics Co., Ltd. Processor using virtual array of buffer descriptors and method of operation
US20030007457A1 (en) * 2001-06-29 2003-01-09 Farrell Jeremy J. Hardware mechanism to improve performance in a multi-node computer system
US20030158998A1 (en) * 2002-02-19 2003-08-21 Hubbert Smith Network data storage-related operations
US20040003135A1 (en) * 2002-06-27 2004-01-01 Moore Terrill M. Technique for driver installation
US6745286B2 (en) * 2001-01-29 2004-06-01 Snap Appliance, Inc. Interface architecture
US6799200B1 (en) 2000-07-18 2004-09-28 International Business Machines Corporation Mechanisms for efficient message passing with copy avoidance in a distributed system
US20050132365A1 (en) * 2003-12-16 2005-06-16 Madukkarumukumana Rajesh S. Resource partitioning and direct access utilizing hardware support for virtualization
US20060034275A1 (en) * 2000-05-03 2006-02-16 At&T Laboratories-Cambridge Ltd. Data transfer, synchronising applications, and low latency networks
US20060174169A1 (en) * 2005-01-28 2006-08-03 Sony Computer Entertainment Inc. IO direct memory access system and method
US7133940B2 (en) * 1997-10-14 2006-11-07 Alacritech, Inc. Network interface device employing a DMA command queue
US20080002578A1 (en) * 2006-06-30 2008-01-03 Jerrie Coffman Network with a constrained usage model supporting remote direct memory access
US7328232B1 (en) * 2000-10-18 2008-02-05 Beptech Inc. Distributed multiprocessing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7668165B2 (en) * 2004-03-31 2010-02-23 Intel Corporation Hardware-based multi-threading for packet processing

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Athanasaki et al., "Hyperplane Grouping and Pipelined Schedules: How to Execute Tiled Loops Fast on Clusters of SMPs", 2005. *
Athanasaki et al., "Pipelined Scheduling of Tiled Nested Loops onto Clusters of SMPs using Memory Mapped Network Interfaces", 2002. *
Blumrich et al., "Virtual Memory Mapped Network Interface for the SHRIMP Multicomputer", 1994 IEEE. *
Blumrich et al., "Virtual Memory Mapped Network Interface for the SHRIMP Multicomputer", 1994. *
Hsiao et al., "MICA: A Memory and Interconnect Simulation Environment for Cache-Based Architectures", 1989. *
Xiaofeng et al., "Research on Buffer Management Technology of NIC", MTI, Xidian University, Xi'an, 2001.
National Semiconductor Corporation, "DP8390D/NS32490D NIC Network Interface Controller", Jul. 1995. *

Also Published As

Publication number Publication date
CN101115054A (en) 2008-01-30
US20080028103A1 (en) 2008-01-31
CN101115054B (en) 2016-03-02

Similar Documents

Publication Publication Date Title
US9137179B2 (en) Memory-mapped buffers for network interface controllers
US7533197B2 (en) System and method for remote direct memory access without page locking by the operating system
US8131814B1 (en) Dynamic pinning remote direct memory access
EP2406723B1 (en) Scalable interface for connecting multiple computer systems which performs parallel mpi header matching
US9935899B2 (en) Server switch integration in a virtualized system
US5752078A (en) System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory
US9411644B2 (en) Method and system for work scheduling in a multi-chip system
US9003082B2 (en) Information processing apparatus, arithmetic device, and information transferring method
US8914458B2 (en) Look-ahead handling of page faults in I/O operations
US9639464B2 (en) Application-assisted handling of page faults in I/O operations
US8645596B2 (en) Interrupt techniques
US9529532B2 (en) Method and apparatus for memory allocation in a multi-node system
US7472205B2 (en) Communication control apparatus which has descriptor cache controller that builds list of descriptors
US9632901B2 (en) Page resolution status reporting
US20080109569A1 (en) Remote DMA systems and methods for supporting synchronization of distributed processes in a multi-processor system using collective operations
US20080109573A1 (en) RDMA systems and methods for sending commands from a source node to a target node for local execution of commands at the target node
TWI547870B (en) Method and system for ordering i/o access in a multi-node environment
US9372800B2 (en) Inter-chip interconnect protocol for a multi-chip system
TW201543218A (en) Chip device and method for multi-core network processor interconnect with multi-node connection
US20080109604A1 (en) Systems and methods for remote direct memory access to processor caches for RDMA reads and writes
US20150121376A1 (en) Managing data transfer
US20230014415A1 (en) Reducing transactions drop in remote direct memory access system
WO2008057833A2 (en) System and method for remote direct memory access without page locking by the operating system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHLANSKER, MICHAEL STEVEN;OERTLI, ERWIN;SIGNING DATES FROM 20060724 TO 20060725;REEL/FRAME:018138/0752

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHLANSKER, MICHAEL STEVEN;OERTLI, ERWIN;REEL/FRAME:018138/0752;SIGNING DATES FROM 20060724 TO 20060725

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: OT PATENT ESCROW, LLC, ILLINOIS

Free format text: PATENT ASSIGNMENT, SECURITY INTEREST, AND LIEN AGREEMENT;ASSIGNORS:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;HEWLETT PACKARD ENTERPRISE COMPANY;REEL/FRAME:055269/0001

Effective date: 20210115

AS Assignment

Owner name: VALTRUS INNOVATIONS LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OT PATENT ESCROW, LLC;REEL/FRAME:055403/0001

Effective date: 20210201

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8