US20070002172A1 - Linking frame data by inserting qualifiers in control blocks - Google Patents

Linking frame data by inserting qualifiers in control blocks

Info

Publication number
US20070002172A1
Authority
US
United States
Prior art keywords
fcb
frame
control block
bcb
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/469,390
Inventor
Jean Calvignac
Marco Heddes
Joseph Logan
Fabrice Verplanken
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US11/469,390
Publication of US20070002172A1
Status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/90: Buffering arrangements
    • H04L49/9047: Buffering arrangements including multiple buffers, e.g. buffer pools
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/30: Peripheral units, e.g. input or output ports
    • H04L49/3081: ATM peripheral units, e.g. policing, insertion or extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38: Information transfer, e.g. on bus
    • G06F13/42: Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4204: Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F13/4234: Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus
    • G06F13/4243: Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus with synchronous protocol
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/54: Organization of routing tables
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/54: Store-and-forward switching systems
    • H04L12/56: Packet switching systems
    • H04L12/5601: Transfer mode dependent, e.g. ATM
    • H04L2012/5678: Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681: Buffer or queue management

Definitions

  • the present invention relates to the field of networking communication systems, and more particularly to inserting qualifiers in control blocks to reduce memory accesses and thereby improve the efficiency of the bandwidth of the memory.
  • a packet switching network has switching points or nodes for transmission of data among senders and receivers connected to the network.
  • the switching performed by these switching points is in fact the action of passing on packets or “frames” of data received by a switching point or node to a further node in the network.
  • Such switching actions are the means by which communication data is moved through the packet switching network.
  • Each node may comprise a packet processor configured to process packets or frames of data.
  • the packet processor may comprise a data storage unit, e.g., Double Data Rate Static Random Access Memory (DDR SRAM), configured with a plurality of buffers to store frame data.
  • Each frame of data may be associated with a Frame Control Block (FCB) configured to describe the corresponding frame of data.
  • Each FCB may be associated with one or more Buffer Control Blocks (BCBs) where each BCB associated with an FCB may be associated with a buffer in the data storage unit.
  • a BCB may be configured to describe the associated buffer.
  • FCBs and BCBs comprise various fields of information where the fields of information in FCBs and BCBs are each supplied by a separate memory, e.g., Quadruple Data Rate Static Random Access Memory (QDR SRAM), in the packet processor. That is, the fields of information in FCBs and BCBs may be obtained by accessing a separate memory, e.g., QDR SRAM, in the packet processor.
  • qualifiers in control blocks may comprise information unrelated to the current control block.
  • qualifiers in control blocks may comprise information related to another control block or to a buffer associated with a next control block.
  • the last frame control block in a queue in the packet processor as well as the last buffer control block associated with a frame control block may comprise fields with no information, thereby reducing memory accesses to a memory, e.g., QDR SRAM, to access information to be inserted in those fields. Consequently, the bandwidth of the memory, e.g., QDR SRAM, supplying information to those fields is improved.
  • a system comprises a packet processor configured to process packets, i.e., frames, of data.
  • the processor may comprise a plurality of buffers configured to store frames of data where each frame of data may be associated with a frame control block.
  • Each frame control block associated with a frame of data may be associated with one or more buffer control blocks.
  • Each buffer control block associated with a frame control block may be associated with a particular buffer of the plurality of buffers.
  • the processor may further comprise a plurality of queues configured to temporarily store one or more frame control blocks.
  • Each control block, e.g., frame control block, buffer control block, may comprise one or more qualifier fields that comprise information related to a particular buffer in the plurality of buffers.
  • Each frame control block may comprise one or more qualifier fields where the one or more qualifier fields comprise information unrelated to a current frame control block.
  • the one or more qualifier fields may comprise information as to the byte count length of the one or more buffer control blocks associated with a next frame control block.
  • the one or more qualifier fields may comprise information as to a starting byte position and to an ending byte position of frame data stored in a particular buffer associated with the first buffer control block which is associated with the frame control block.
  • each buffer control block may comprise one or more qualifier fields.
  • the one or more qualifier fields may comprise information as to a starting byte position and to an ending byte position of frame data stored in a particular buffer associated with a next buffer control block.
  • FIG. 1 illustrates a packet processor configured in accordance with the present invention
  • FIG. 2 illustrates a data flow unit configured in accordance with the present invention
  • FIG. 3 is a diagram illustrating the reduction of memory accesses by linking frame data with qualifiers in control blocks.
  • FIG. 4 is a flowchart of a method for reducing memory accesses by linking frame data with qualifiers in control blocks.
  • a system comprises a packet processor configured to process packets, i.e., frames, of data.
  • the processor may comprise a plurality of buffers configured to store frames of data where each frame of data may be associated with a frame control block.
  • Each frame control block associated with a frame of data may be associated with one or more buffer control blocks.
  • Each buffer control block associated with a frame control block may be associated with a particular buffer of the plurality of buffers.
  • Each control block e.g., frame control blocks, buffer control blocks, may comprise one or more qualifier fields.
  • the one or more qualifier fields may comprise information unrelated to the current control block.
  • qualifiers in control blocks may comprise information related to another control block.
  • the last frame control block in a queue in the packet processor as well as the last buffer control block associated with a frame control block may comprise fields with no information, thereby reducing memory accesses to a memory, e.g., QDR SRAM, to access information to be inserted in those fields. Consequently, the bandwidth of the memory, e.g., QDR SRAM, supplying information to those fields is improved.
  • FIG. 1 Packet Processor
  • FIG. 1 illustrates an embodiment of the present invention of a packet processor 100 .
  • Packet processor 100 may comprise a data flow unit 110 configured to receive digital packets, i.e., frames, of data, from a particular switch (not shown) or port (not shown) of a packet switching network and transmit the digital packets, i.e., frames, of data to another switch or port, e.g., switch/port 120 , in the packet switching network.
  • Each frame of data may be associated with a Frame Control Block (FCB) where the FCB describes the associated frame of data.
  • FCB associated with a frame of data may be associated with one or more Buffer Control Blocks (BCBs) where each BCB associated with an FCB may be associated with a buffer in a data storage unit 140 .
  • a BCB may be configured to describe the buffer associated with the next chained BCB as described in the description of FIGS. 3 and 4 .
  • data flow unit 110 may reside on an integrated circuit, i.e., integrated chip.
  • Data flow unit 110 may be coupled to data storage unit 140 configured to temporarily store frames of data received by data flow unit 110 from a switch (not shown) or port (not shown) in the packet switching network.
  • Data flow unit 110 may further be coupled to a scheduler 130 configured to schedule frames of data to be transmitted from data flow unit 110 to switch/port 120 .
  • scheduler 130 may reside on an integrated circuit, i.e., integrated chip.
  • data flow unit 110 may further be coupled to an embedded processor 150 configured to process frames of data received by data flow unit 110 .
  • FIG. 2 Data Flow Unit
  • FIG. 2 illustrates an embodiment of the present invention of data flow unit 110 .
  • Data flow unit 110 may comprise a receiver controller 203 configured to receive and temporarily store packets, i.e., frames, of data received from a switch (not shown) or port (not shown) in a packet switching network.
  • Data flow unit 110 may further comprise a transmitter controller 201 configured to modify the frame data as well as transmit the modified frame data to a switch (not shown) or port (not shown) in a packet switching network.
  • Data flow unit 110 may further comprise an embedded processor interface controller 202 configured to exchange frames to be processed by embedded processor 150 .
  • Packets, i.e., frames, of data may be received by a port/switch interface unit 221 .
  • Port/switch interface unit 221 may receive data from a switch (not shown) in the packet switching network when data flow unit 110 operates in an egress mode. Otherwise, port/switch interface unit 221 may receive data from a port (not shown) that operates as an interface to the packet switching network when data flow unit 110 operates in an ingress mode.
  • Data received by data flow unit 110 may be temporarily stored in a receiving preparation area memory 220 prior to being stored in data storage unit 140 which may be represented by a plurality of slices 205 A-F. Slices 205 A-F may collectively or individually be referred to as slices 205 or slice 205 , respectively.
  • the number of slices 205 in FIG. 2 is illustrative, and an embodiment of data flow unit 110 in accordance with the principles of the present invention may have some other predetermined number of slices 205 .
  • Each slice may comprise a plurality of buffers.
  • Each slice may represent a slice of memory, e.g., Dynamic Random Access Memory (DRAM), so that frame data may be written into different buffers in different slices in order to maximize memory bandwidth.
  • a memory arbiter 204 may be configured to collect requests, e.g., read, write, from receiver controller 203 , transmitter controller 201 and embedded processor interface controller 202 and subsequently schedule access to particular data store memory slices, i.e., particular buffers in particular slices 205 .
  • receiver controller 203 may be configured to issue write requests to memory arbiter 204 in order to write received data into individual buffers in a particular slice 205 .
  • frame data may be stored in data storage unit 140 , i.e., a plurality of slices 205 .
  • frame data may be stored in one or more buffers in one or more slices 205 in a manner such that the data in each particular frame may be recomposed by having the buffers chained together. That is, data in a particular frame may be stored in one or more buffers that are chained together in the order that data is written into the one or more buffers.
  • the chaining of the one or more buffers may be controlled by a Buffer Control Block Unit (BCBU) 208 in a memory 229 , e.g., Quadruple Data Rate Static Random Access Memory (QDR SRAM), coupled to data flow unit 110 .
  • BCBU 208 may be configured to comprise the addresses of each of the one or more buffers chained together in the order data was written into buffers.
  • the different buffers comprising data of the same frames may be linked together by means of pointers stored in BCBU 208 .
  • each frame of data may be associated with a Frame Control Block (FCB) where the FCB describes the associated frame of data.
  • Frame Control Block Unit 1 (FCBU 1 ) 209 in a memory 210 , e.g., QDR SRAM, may be configured to store the information, e.g., frame control information, to be filled in the fields of the FCBs. That is, the fields of information in FCBs may be obtained by accessing FCBU 1 209 of memory 210 . Additional details regarding FCBU 1 209 of memory 210 storing fields of information are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor.”
  • Frame data stored in buffers may be processed by embedded processor 150 by transmitting the header of each frame to be processed to embedded processor 150 .
  • each frame of data may be represented by an FCB.
  • These FCBs may be temporarily stored in G Queues (GQs) 218 .
  • Dispatcher logic 217 may be configured to dequeue the next FCB from GQs 218 .
  • dispatcher logic 217 issues a read request to memory arbiter 204 to read the data at the beginning of the frame, i.e., header of the frame, stored in data storage unit 140 associated with the dequeued FCB.
  • the data read by dispatcher logic 217 is then processed by embedded processor 150 .
  • the processed frame data may be temporarily stored in data storage unit 140 , i.e., slices 205 , by embedded processor logic 216 issuing a write request to memory arbiter 204 to write the processed frame data into individual buffers in one or more slices 205 .
  • Scheduler 130 may be configured to comprise flow queues 223 configured to store FCBs.
  • Scheduler 130 may further comprise a Frame Control Block Unit 2 (FCBU 2 ) 225 within a memory 224 , e.g., QDR SRAM, configured to operate similarly as FCBU 1 209 .
  • FCBU 2 225 may be configured to store the information to be filled in the fields of the FCBs when the FCBs are temporarily residing in flow queues 223 . Additional details regarding FCBU 2 225 within memory 224 of scheduler 130 storing fields of information are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor.”
  • Scheduler 130 may be configured to transmit the FCBs stored in flow queues 223 to Target Blade Queue (TBQ) enqueue logic 227 , which is configured to enqueue the received FCBs in TBQs 215 .
  • FCBs queued in TBQs 215 may be scheduled to be dequeued from TBQs 215 by TBQ scheduler 228 and loaded into Port Control Block (PCB) 224 .
  • TBQ scheduler 228 may be configured to dequeue the next FCB from TBQs 215 and enqueue that FCB into PCB 224 .
  • PCB 224 may issue a read request to memory arbiter 204 to read the data at the beginning of the frame, i.e., header of the frame, stored in data storage unit 140 associated with the dequeued FCB.
  • the data read by PCB 224 may be temporarily stored in data preparation area memory 214 prior to transmitting the processed frame data to a switch (not shown) or port (not shown) in a packet switching network. It is noted for clarity that PCB 224 may be configured to read a portion of the data stored in the processed frame in each particular read request. That is, the entire data stored in the processed frame may be read in multiple read requests provided by PCB 224 . Once the entire data stored in the processed frame is read, the data storage unit 140 may store additional frame data.
  • Transmitter controller 201 may further comprise a frame alteration preparation area memory 213 configured to receive commands to modify the processed frames temporarily stored in data preparation area memory 214 . These commands are commonly referred to as frame modification commands which are issued by embedded processor 150 and stored in a particular bank in a particular buffer by embedded processor logic 216 . Additional details regarding the storing of frame modification commands in a particular bank in a particular buffer are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Storing Frame Modification Information in a Bank in Memory,” Attorney Docket No. RAL920000092US1, which is hereby incorporated herein by reference in its entirety.
  • PCB 224 may be configured to retrieve the frame modification commands stored in a particular bank in a particular buffer and store them in frame alteration preparation area memory 213 .
  • a Frame Alteration (FA) logic unit 212 may be configured to execute the commands stored in frame alteration preparation area memory 213 to modify the contents of the processed frames temporarily stored in data preparation area memory 214 .
  • FA logic 212 Once FA logic 212 has modified the contents of the processed frames, then modified processed frames may be transmitted through a switch/port interface unit 211 .
  • Switch/port interface unit 211 may transmit data to a port (not shown) that operates as an interface to the packet switching network when data flow unit 110 operates in an egress mode. Otherwise, switch/port interface unit 211 may transmit data to a switch (not shown) in the packet switching network when data flow unit 110 operates in an ingress mode.
  • Data flow unit 110 may further comprise a Buffer Control Block (BCB) Arbiter 207 configured to arbitrate among different BCB requests from transmitter controller 201 , embedded processor interface controller 202 and receiver controller 203 to read from or write to BCBU 208 .
  • BCB Arbiter 207 may be configured to schedule different accesses in order to utilize memory bandwidth as efficiently as possible.
  • Data flow unit 110 may further comprise a Frame Control Block (FCB) Arbiter 206 configured to arbitrate among different FCB requests from embedded processor interface controller 202 , receiver controller 203 and transmitter controller 201 to read from or write to FCBU 1 209 .
  • each frame of data may be associated with an FCB.
  • the FCB associated with such processed frame ceases to represent that particular frame of data.
  • the FCB may be stored in a FCB free queue 222 within FCB Arbiter 206 .
  • FCB free queue 222 may be configured to comprise a plurality of FCBs that are no longer associated with particular frame data. It is noted that FCB free queue 222 may comprise any number of FCBs that are no longer associated with particular frame data.
  • a Reassembly Control Block (RCB) 219 of receiver controller 203 may associate a particular FCB from FCB free queue 222 with the received frame of data where the newly associated FCB may then be queued in GQs 218 by RCB 219 .
  • each frame of data may be associated with an FCB.
  • Each FCB associated with a frame of data may be associated with one or more BCBs where each BCB associated with an FCB may be associated with a particular buffer of data storage unit 140 .
  • a BCB may be configured to describe the buffer associated with the next chained BCB as described in the discussion of FIGS. 3 and 4 .
  • once the particular buffer associated with a BCB no longer includes any frame data, the data in that BCB is no longer useful.
  • the BCB may be stored in a BCB free queue 226 within BCB Arbiter 207 .
  • BCB free queue 226 may be configured to comprise a plurality of BCBs that do not comprise any valid information. It is noted that BCB free queue 226 may comprise any number of BCBs that do not comprise any valid information.
  • receiver controller 203 may retrieve a BCB in BCB free queue 226 so that RCB 219 may write the received frame data in the particular buffer associated with the BCB retrieved from BCB free queue 226 .
  • FCBs and BCBs may comprise various fields of information where the fields of information are supplied by a separate memory 210 , i.e., FCBU 1 209 of memory 210 , and memory 229 , i.e., BCBU 208 of memory 229 , respectively. That is, the fields of information in FCBs may be obtained by accessing memory 210 , i.e., FCBU 1 209 of memory 210 . The fields of information in BCBs may be obtained by accessing memory 229 , i.e., BCBU 208 of memory 229 .
  • FIG. 3 Diagram Illustrating the Reduction of Memory Accesses by Linking Frame Data with Qualifiers in Control Blocks
  • FIG. 3 schematically illustrates an exemplary set 300 of control blocks depicting the inclusion of qualifiers in control blocks, e.g., FCB, BCB, in order to reduce accesses to memories 210 and 229 in accordance with the principles of the present invention.
  • FCBs may temporarily reside in a queue 305 which may be one of the following queues: FCB free queue 222 , GQs 218 , flow queues 223 , and TBQs 215 .
  • the queue has a control block commonly referred to as a Queue Control Block (QCB) 301 comprising information as to the number of FCBs that currently reside in that particular queue.
  • BCB free queue 226 may also comprise QCB 301 .
  • QCB 301 may comprise other information than the number of FCBs that currently reside in that particular queue.
  • QCB 301 may comprise a head field 302 , a tail field 303 and a count field 304 .
  • Head field 302 may comprise an FCB Address (FCBA) of a first FCB, e.g., FCB 310 A, located in the queue, e.g., FCB free queue 222 , GQs 218 , flow queues 223 , TBQs 215 , comprising QCB 301 .
  • Head field 302 may further comprise the Byte Count (BCNT) of the one or more BCBs, e.g., BCBs 320 A-C, associated with the first FCB, e.g., FCB 310 A, located in the queue, e.g., FCB free queue 222 , GQs 218 , flow queues 223 , TBQs 215 .
  • the BCNT in head field 302 may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs, e.g., BCBs 320 A-C, associated with the first FCB, e.g., FCB 310 A, in the queue 305 comprising QCB 301 .
  • Tail field 303 may comprise the FCB Address (FCBA) of the last FCB, e.g., FCB 310 B, in queue 305 , e.g., FCB free queue 222 , GQs 218 , flow queues 223 , TBQs 215 , comprising QCB 301 .
  • Count field 304 may comprise the number of FCBs in queue 305 comprising QCB 301 . Because in the exemplary set 300 of FIG. 3 there are two FCBs, e.g., FCBs 310 A-B, in queue 305 comprising QCB 301 , count field 304 comprises the number two.
  • FCB free queue 222 may comprise any number of FCBs and that FIG. 3 is illustrative.
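To make the QCB layout described above concrete, the following is a minimal C sketch of the fields named for QCB 301. It is an illustration only: the patent does not specify bit widths or an exact memory layout, so the field types and the name qcb_t are assumptions.

```c
#include <stdint.h>

/* Hypothetical layout of QCB 301; field widths are assumptions. */
typedef struct {
    uint32_t head_fcba; /* head field 302: FCBA of the first FCB in the queue */
    uint32_t head_bcnt; /* head field 302 qualifier: byte count (BCNT) of the
                           BCB chain of that first FCB, not of the queue itself */
    uint32_t tail_fcba; /* tail field 303: FCBA of the last FCB in the queue */
    uint32_t count;     /* count field 304: number of FCBs in the queue */
} qcb_t;
```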
  • FCBs 310 A-B may collectively or individually be referred to as FCBs 310 or FCB 310 , respectively.
  • Each FCB 310 may comprise two entries or rows of fields.
  • the first entry may comprise a field comprising a pointer to the Next FCB Address (NFA).
  • the first entry may further comprise a field comprising the Byte Count (BCNT) length of the one or more BCBs associated with the next FCB 310 . That is, instead of FCBs 310 storing the Byte Count length (BCNT) of the one or more BCBs associated with the current FCB 310 , the BCNT field stores the byte count length of the one or more BCBs associated with the next FCB 310 .
  • FCB 310 A comprises the FCB address of the following FCB 310 , e.g., FCB 310 B, in the NFA field as well as the byte count length of the one or more BCBs, e.g., BCBs 320 D-F, associated with the following FCB 310 , e.g., FCB 310 B, in the BCNT field.
  • the BCNT field may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs, e.g., BCBs 320 D-F, associated with FCB 310 B identified in the NFA field, i.e., the next FCB 310 in the chain of FCBs 310 .
  • FCB 310 B does not comprise any information in the NFA field or in the BCNT field as there are no more FCBs 310 following FCB 310 B. (This is denoted in the exemplary set 300 of FIG. 3 by “empty” parentheses “( )”). By not storing information in the NFA field and in the BCNT field of FCB 310 B, memory accesses to memory 210 are reduced thereby improving the efficiency of the bandwidth of memory 210 .
  • the second entry may comprise the fields of the First BCB Address (FBA) of the first BCB associated with that particular FCB, the Starting Byte Position (SBP) of the frame data stored in a buffer associated with the first BCB, and the Ending Byte Position (EBP) of the frame data stored in the buffer associated with the first BCB.
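A matching C sketch of the two-entry FCB 310 described above, under the same assumptions (illustrative field widths and names, not the patent's actual layout):

```c
#include <stdint.h>

/* Hypothetical layout of FCB 310. The qualifiers describe the NEXT FCB
   and the buffer of the FIRST BCB, not the FCB that holds them. */
typedef struct {
    /* first entry */
    uint32_t nfa;  /* Next FCB Address: pointer to the next FCB in the queue */
    uint32_t bcnt; /* qualifier: byte count of the BCB chain of the next FCB */
    /* second entry */
    uint32_t fba;  /* First BCB Address of this frame's BCB chain */
    uint32_t sbp;  /* qualifier: starting byte position of frame data in the
                      buffer of the first BCB */
    uint32_t ebp;  /* qualifier: ending byte position in that same buffer */
} fcb_t;

/* The last FCB in a queue leaves nfa and bcnt unwritten, which is what
   saves the accesses to memory 210 described above. */
```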
  • Each FCB may be associated with one or more BCBs. Referring to FIG. 3 , FCB 310 A is associated with BCBs 320 A-C. FCB 310 B is associated with BCBs 320 D-F. It is noted that FCBs may be associated with any number of BCBs and that FIG. 3 is illustrative, as would be recognized by an artisan of ordinary skill.
  • BCBs 320 A-F may collectively or individually be referred to as BCBs 320 or BCB 320 , respectively.
  • Each BCB 320 may be associated with a particular buffer in data storage unit 140 .
  • BCB 320 A is associated with buffer 330 A.
  • BCB 320 B is associated with buffer 330 B.
  • BCB 320 C is associated with buffer 330 C.
  • BCB 320 D is associated with buffer 330 D.
  • BCB 320 E is associated with buffer 330 E.
  • BCB 320 F is associated with buffer 330 F.
  • Buffers 330 A-F may collectively or individually be referred to as buffers 330 or buffer 330 , respectively.
  • data storage unit 140 of packet processor 100 may comprise any number of slices 205 comprising any number of buffers 330 . It is further noted that since there may be any number of buffers 330 there may be any number of BCBs 320 associated with those buffers 330 . It is further noted that in one embodiment, each BCB 320 may be associated with a particular buffer 330 in data storage unit 140 .
  • the FBA field in the second entry may comprise the address of the first BCB 320 , e.g., BCB 320 A, associated with FCB 310 A.
  • FCB 310 A may further comprise an SBP field storing the starting address of the frame data stored in the buffer 330 , e.g., buffer 330 A, associated with the first BCB 320 , e.g., BCB 320 A.
  • FCB 310 A may further comprise an EBP field storing the ending address of the frame data stored in the buffer 330 , e.g., buffer 330 A, associated with the first BCB 320 , e.g., BCB 320 A.
  • FCB 310 B may comprise an FBA field in the second entry which comprises the address of the first BCB 320 , e.g., BCB 320 D, associated with FCB 310 B.
  • FCB 310 B may further comprise an SBP field storing the starting address of the frame data stored in the buffer 330 , e.g., buffer 330 D, associated with the first BCB 320 , e.g., BCB 320 D.
  • FCB 310 B may further comprise an EBP field storing the ending address of the frame data stored in the buffer 330 , e.g., buffer 330 D, associated with the first BCB 320 , e.g., BCB 320 D.
  • the SBP and EBP fields may be referred to as qualifiers as they comprise information about the starting byte position and ending byte position of the frame data associated with the first BCB 320 , e.g., BCB 320 D, and not information about the current FCB 310 , e.g., FCB 310 B.
  • each FCB 310 may be associated with one or more BCBs 320 .
  • FCB 310 A is associated with BCBs 320 A-C and FCB 310 B is associated with BCBs 320 D-F.
  • Each BCB 320 may comprise three fields that are similar to the three fields in the second entry of FCBs 310 .
  • Each BCB 320 may comprise a pointer to the Next BCB Address (NBA) as well as the fields of SBP and EBP which are the starting and ending byte positions of the buffer 330 associated with the next BCB.
  • the SBP and EBP fields may be referred to as qualifiers as they store the starting byte position and ending byte position of the buffer 330 , e.g., buffer 330 B, associated with the next BCB 320 , e.g., BCB 320 B.
  • That is, instead of storing the starting and ending byte positions of the buffer 330 , e.g., buffer 330 B, in the current BCB 320 , e.g., BCB 320 B, thereby resulting in an extra write access to memory 229 , the starting and ending byte positions of the buffer 330 , e.g., buffer 330 B, associated with the next BCB 320 , e.g., BCB 320 B, are stored in the SBP and EBP fields, respectively, in the previous BCB 320 , e.g., BCB 320 A .
  • BCB 320 A comprises the BCB address of the next BCB 320 , e.g., BCB 320 B, in the NBA field as well as the starting byte position of the frame data stored in the buffer 330 , e.g., buffer 330 B, associated with the next BCB 320 , e.g., BCB 320 B, in the SBP field and the ending byte position of the frame data stored in the buffer 330 , e.g., buffer 330 B, associated with the next BCB, 320 , e.g., BCB 320 B, in the EBP field.
  • BCB 320 B comprises the BCB address of the next BCB 320 , e.g., BCB 320 C, in the NBA field as well as the starting byte position of the frame data stored in the buffer 330 , e.g., buffer 330 C, associated with the next BCB, e.g., BCB 320 C, in the SBP field and the ending byte position of the frame data stored in the buffer 330 , e.g., buffer 330 C, associated with the next BCB 320 , e.g., BCB 320 C, in the EBP field.
  • the last BCB 320 , e.g., BCB 320 C, associated with a particular FCB 310 , e.g., FCB 310 A, does not comprise any information in the NBA, SBP or EBP fields as there are no more BCBs 320 following it. By not storing information in those fields of the last BCB 320 , memory accesses to memory 229 are reduced.
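Completing the set, a hedged C sketch of BCB 320 as described for FIG. 3 (again with assumed widths and the invented name bcb_t):

```c
#include <stdint.h>

/* Hypothetical layout of BCB 320. The SBP/EBP qualifiers describe the
   buffer of the NEXT BCB in the chain, not this BCB's own buffer. */
typedef struct {
    uint32_t nba; /* Next BCB Address: pointer to the next BCB in the chain */
    uint32_t sbp; /* qualifier: starting byte position of frame data in the
                     buffer of the next BCB */
    uint32_t ebp; /* qualifier: ending byte position in that buffer */
} bcb_t;

/* The last BCB of a frame leaves nba, sbp and ebp unwritten, saving
   accesses to memory 229. */
```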
  • FIG. 4 Method for Reducing Memory Accesses by Linking Frame Data with Qualifiers in Control Blocks
  • FIG. 4 illustrates a flowchart of one embodiment of the present invention of a method 400 for reducing memory accesses to memories 210 and 229 by linking frame data with qualifiers in control blocks, e.g., FCBs 310 , BCBs 320 .
  • a frame of data may be received from a switch (not shown) or a port (not shown) in a packet switching network and temporarily stored in receiving preparation area memory 220 by receiver controller 203 .
  • RCB 219 of receiver controller 203 may then be configured to lease one or more BCBs 320 from BCB free queue 226 ( FIG. 2 ).
  • a BCB may be said to be “leased” from BCB free queue 226 as the BCB may be temporarily removed from BCB free queue 226 during the “life cycle” of the FCB. Additional details regarding the “life cycle” of the FCB are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1.
  • RCB 219 may then write the received data in one or more particular buffers 330 associated with the one or more BCBs 320 leased from BCB free queue 226 .
  • RCB 219 may issue one or more write requests to memory arbiter 204 in order to write the received frame data into one or more buffers 330 associated with the one or more BCBs leased from BCB free queue 226 .
  • each particular buffer 330 may be associated with a particular BCB 320 .
  • BCB 320 may be configured as illustrated in FIG. 3 where BCB 320 may comprise the field of a Next BCB Address (NBA) which comprises a pointer to the next BCB 320 address as well as the fields of SBP and EBP which are the starting and ending byte positions of the buffer 330 associated with the next BCB 320 .
  • the SBP and EBP fields may be referred to as qualifiers as they store the starting byte position and ending byte position of the buffer 330 associated with the next BCB 320 . That is, instead of storing the starting and ending byte positions of the buffer 330 , e.g., buffer 330 A ( FIG. 3 ), associated with the current BCB 320 , e.g., BCB 320 A ( FIG. 3 ), the starting and ending byte positions of the buffer 330 , e.g., buffer 330 B ( FIG. 3 ), associated with the next BCB 320 , e.g., BCB 320 B ( FIG. 3 ), are stored in the SBP and EBP fields, respectively, in the previous BCB 320 , e.g., BCB 320 A ( FIG. 3 ).
  • RCB 219 may read the head field 302 in QCB 301 ( FIG. 3 ) of BCB free queue 226 .
  • RCB 219 may then read the NBA field of the first BCB 320 , e.g., BCB 320 A, which may comprise the address of the next BCB 320 in the BCB free queue 226 to retrieve.
  • Head field 302 of QCB 301 may subsequently be updated by RCB 219 so that the address in the head field 302 is the address of the next BCB 320 that may be retrieved during the next lease operation.
  • the count field 304 of QCB 301 of BCB free queue 226 may then be decremented to indicate that a BCB 320 has been retrieved from BCB free queue 226 .
  • Head field 302 of QCB 301 may further comprise the Byte Count (BCNT) of the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), associated with the one or more buffers 330 , e.g., buffers 330 A-C ( FIG. 3 ), storing the frame of data.
  • the BCNT in head field 302 may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), associated with the one or more buffers 330 , e.g., buffers 330 A-C ( FIG. 3 ), storing the frame of data and not information about the BCB free queue 226 comprising QCB 301 .
  • QCB 301 may further comprise a tail field 303 comprising the address of the last FCB, e.g., FCB 310 B ( FIG. 3 ), located in the queue, e.g., FCB free queue 222 .
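The lease operation of steps 401 and 402 can be sketched as follows. This is a hedged illustration, not the hardware implementation: it assumes the bcb_t and qcb_t layouts sketched earlier, the bcb[] array merely stands in for BCBU 208 in memory 229, and all function and variable names are invented.

```c
#include <stdint.h>

typedef struct { uint32_t nba, sbp, ebp; } bcb_t;           /* as sketched above */
typedef struct { uint32_t head, bcnt, tail, count; } qcb_t; /* as sketched above */

#define NBCB 1024
static bcb_t bcb[NBCB]; /* stand-in for BCBU 208 in memory 229 */

/* Lease one BCB from BCB free queue 226: hand out the head BCB, follow
   its NBA field to the next free BCB, update the head, decrement the count. */
static uint32_t bcb_lease(qcb_t *freeq)
{
    uint32_t leased = freeq->head; /* address held in head field 302 */
    freeq->head = bcb[leased].nba; /* NBA of the leased BCB names the new head */
    freeq->count--;                /* count field 304 */
    return leased;
}
```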
  • RCB 219 of receiver controller 203 may associate the one or more BCBs 320 , e.g. BCBs 320 A-C ( FIG. 3 ), with an FCB 310 in step 403 .
  • RCB 219 may lease an FCB 310 from FCB free queue 222 ( FIG. 2 ) thereby associating the leased FCB 310 with the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), that are associated with the one or more buffers 330 , e.g., buffers 330 A-C ( FIG. 3 ), storing the frame of data.
  • FCB may be said to be “leased” from FCB free queue 222 as the FCB may be temporarily removed from FCB free queue 222 during the “life cycle” of the FCB. Additional details regarding the “life cycle” of the FCB are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1.
  • FCB 310 may be configured as illustrated in FIG. 3 with two entries or rows of fields. The first entry may comprise a field comprising a pointer to the Next FCB Address (NFA).
  • the first entry may further comprise a field comprising the Byte Count (BCNT) length of the one or more BCBs 320 , e.g., BCBs 320 D-F ( FIG. 3 ), associated with the next FCB 310 . That is, instead of FCB 310 storing the Byte Count (BCNT) length of the one or more BCBs 320 associated with the current FCB 310 , the BCNT field stores the byte count length of the one or more BCBs 320 associated with the next FCB 310 .
  • it is noted that the last FCB 310 , e.g., FCB 310 B ( FIG. 3 ), in the queue, e.g., FCB free queue 222 ( FIG. 2 ), does not comprise any information in the NFA field or in the BCNT field as there are no more FCBs 310 following it. By not storing information in the NFA and BCNT fields of the last FCB 310 , memory accesses to memory 210 are reduced, thereby improving the bandwidth of memory 210 .
  • tail field 303 may comprise the address of the last FCB 310 , e.g., FCB 310 B ( FIG. 3 ), located in the queue, e.g., FCB free queue 222 .
  • the second entry of FCB 310 may comprise the fields of the First BCB Address (FBA) of the first BCB 320 , e.g., BCB 320 A ( FIG. 3 ), associated with that particular FCB 310 , the Starting Byte Position (SBP) of the frame data stored in buffer 330 , e.g., buffer 330 A ( FIG. 3 ), associated with the first BCB 320 , e.g., BCB 320 A ( FIG. 3 ), and the Ending Byte Position (EBP) of the frame data stored in buffer 330 , e.g., buffer 330 A ( FIG. 3 ), associated with the first BCB 320 , e.g., BCB 320 A ( FIG. 3 ).
  • the SBP and EBP fields may be referred to as qualifiers as they comprise information about the starting byte position and ending byte position of the frame data associated with the first BCB 320 , e.g., BCB 320 A ( FIG. 3 ), and not information about the current FCB 310 , e.g., FCB 310 A ( FIG. 3 ).
  • RCB 219 may read the head field 302 in QCB 301 of FCB free queue 222 .
  • Head field 302 may include the address of the first FCB 310 , e.g., FCB 310 A ( FIG. 3 ), to retrieve from FCB free queue 222 .
  • RCB 219 may then read the NFA field of the first FCB 310 , e.g., FCB 310 A, which may comprise the address of the next FCB 310 in the FCB free queue 222 to retrieve.
  • Head field 302 of QCB 301 may subsequently be updated so that the address in head field 302 is the address of the next FCB 310 that may be retrieved during the next lease operation.
  • the count field 304 of QCB 301 of FCB free queue 222 may then be decremented to indicate that a FCB 310 has been retrieved from FCB free queue 222 and enqueued in GQs 218 .
  • FCB 310 may be enqueued in GQs 218 by RCB 219 .
  • FCB 310 may comprise the fields of FBA as well as the qualifiers SBP and EBP in the second entry. Once FCB 310 is enqueued in GQs 218 , the information in these fields is copied from RCB 219 .
  • RCB 219 may store the starting and ending byte positions of the frame data it stored in the first buffer 330 , e.g., buffer 330 A, associated with the first BCB 320 , e.g., BCB 320 A, of the FCB 310 , e.g., FCB 310 A, enqueued in GQs 218 . RCB 219 records these byte positions at the time it writes the received frame data into that buffer 330 .
  • the BCNT field in the first entry of FCB 310 may be copied from RCB 219 .
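Steps 403 and 404 can be sketched in the same style: an FCB is leased from FCB free queue 222 and the FBA/SBP/EBP values recorded by RCB 219 while writing the frame data are copied into its second entry. Layouts and names are again assumptions, not the patent's implementation.

```c
#include <stdint.h>

typedef struct { uint32_t nfa, bcnt, fba, sbp, ebp; } fcb_t; /* as sketched above */
typedef struct { uint32_t head, bcnt, tail, count; } qcb_t;

#define NFCB 512
static fcb_t fcb[NFCB]; /* stand-in for FCBU1 209 in memory 210 */

/* Lease an FCB and associate it with the frame's first BCB. sbp/ebp are
   the byte positions RCB 219 recorded when writing the first buffer. */
static uint32_t fcb_lease(qcb_t *freeq, uint32_t fba, uint32_t sbp, uint32_t ebp)
{
    uint32_t f = freeq->head; /* first free FCB, from head field 302 */
    freeq->head = fcb[f].nfa; /* its NFA names the next free FCB */
    freeq->count--;

    fcb[f].fba = fba; /* second entry: first BCB of the frame */
    fcb[f].sbp = sbp; /* qualifier for the first BCB's buffer */
    fcb[f].ebp = ebp;
    return f;
}
```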
  • FCB 310 enqueued in GQs 218 may then be chained together with the other FCBs 310 in GQs 218 in step 405 .
  • RCB 219 may read the tail field 303 in QCB 301 of GQs 218 to retrieve the address of the current last FCB 310 in GQs 218 .
  • the tail field 303 in QCB 301 of GQs 218 may subsequently be updated by RCB 219 so that the pointer in tail field 303 points to the FCB 310 just enqueued in GQs 218 .
  • the count field 304 of QCB 301 of GQs 218 may then be incremented by RCB 219 to indicate that an FCB 310 has been retrieved from FCB free queue 222 and enqueued in GQs 218 .
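Step 405 (chaining the leased FCB onto the tail of GQs 218) might look like the sketch below. The point of the qualifier scheme shows up here: the NFA and BCNT values are written into the previous tail FCB, so the new last FCB itself is never written. The empty-queue case is omitted and all names are illustrative.

```c
#include <stdint.h>

typedef struct { uint32_t nfa, bcnt, fba, sbp, ebp; } fcb_t;
typedef struct { uint32_t head, bcnt, tail, count; } qcb_t;

#define NFCB 512
static fcb_t fcb[NFCB];

/* Enqueue FCB f, whose BCB chain totals f_bcnt bytes, at the tail. */
static void gq_enqueue(qcb_t *gq, uint32_t f, uint32_t f_bcnt)
{
    uint32_t prev = gq->tail; /* current last FCB, from tail field 303 */
    fcb[prev].nfa  = f;       /* previous tail now points at f */
    fcb[prev].bcnt = f_bcnt;  /* qualifier: BCNT of f's BCB chain */
    gq->tail = f;             /* f becomes the last FCB in the queue */
    gq->count++;
    /* fcb[f].nfa and fcb[f].bcnt stay unwritten: no access to memory 210
       is needed for the new last FCB. */
}
```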
  • FCB 310 may be dequeued from GQs 218 .
  • Dispatcher logic 217 of embedded processor interface controller 202 may read the head field 302 of QCB 301 of GQs 218 to retrieve the address of the FCB 310 to dequeue.
  • Dispatcher logic 217 may further read the NFA field in the FCB 310 to determine the address of the next FCB 310 in GQs 218 to dequeue.
  • Dispatcher logic 217 may be configured to update the head field 302 of QCB 301 in GQs 218 so that the address in the head field 302 is the address of the next FCB 310 that may be dequeued at the next dequeue operation.
  • the count field 304 of QCB 301 in GQs 218 may be decremented to indicate that an FCB 310 has been dequeued from GQs 218 .
  • the contents of the dequeued FCB 310 may then be read by dispatcher logic 217 and transferred to embedded processor 150 .
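Step 406 is the mirror operation: dispatcher logic 217 dequeues the head FCB, and the dequeued FCB's own NFA/BCNT qualifiers become the new head field. Again a sketch under the assumed layouts above.

```c
#include <stdint.h>

typedef struct { uint32_t nfa, bcnt, fba, sbp, ebp; } fcb_t;
typedef struct { uint32_t head, bcnt, tail, count; } qcb_t;

#define NFCB 512
static fcb_t fcb[NFCB];

/* Dequeue the head FCB of GQs 218 and advance the head. */
static uint32_t gq_dequeue(qcb_t *gq)
{
    uint32_t f = gq->head;
    gq->head = fcb[f].nfa;  /* NFA qualifier: next FCB to dequeue */
    gq->bcnt = fcb[f].bcnt; /* BCNT qualifier for the new head's BCB chain */
    gq->count--;
    return f;
}
```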
  • the dequeued FCB 310 may be enqueued in TBQs 215 .
  • FCB 310 may first be enqueued in flow queues 223 of scheduler 130 and then dequeued and enqueued in TBQs 215 . Additional details regarding enqueuing and dequeuing the FCB 310 in flow queues 223 of scheduler 130 are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1.
  • the enqueued FCB 310 in TBQs 215 may then be dequeued by TBQ scheduler 228 to be loaded into PCB 224 to be read by PCB 224 as discussed in the description of FIG. 2 .
  • the address in the FBA field of the FCB 310 may be loaded into PCB 224 .
  • the frame of data to be transmitted through switch/port interface unit 211 to a switch (not shown) or a port (not shown) in the packet switching network may be read from the one or more buffers 330 associated with the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), by PCB 224 .
  • PCB 224 retrieves the address of the first BCB 320 , i.e., the address included in the FBA field of the FCB 310 dequeued in step 408 , that was loaded into PCB 224 .
  • the address of the next BCB 320 e.g., BCB 320 B ( FIG. 3 ), may be located in the NBA field of the first BCB 320 , e.g., BCB 320 A ( FIG. 3 ).
  • in step 410 , when all the data of a frame is read by PCB 224 from the one or more buffers 330 of data storage unit 140 that stored the frame of data, the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), associated with the one or more buffers 330 , e.g., buffers 330 A-C ( FIG. 3 ), are enqueued in BCB free queue 226 .
  • each BCB 320 may be enqueued one at a time in BCB free queue 226 as frame data in each associated buffer 330 is read by PCB 224 .
  • in step 411 , when the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), associated with the one or more buffers 330 , e.g., buffers 330 A-C ( FIG. 3 ), have been enqueued in BCB free queue 226 , the FCB 310 associated with the one or more BCBs 320 , e.g., BCBs 320 A-C ( FIG. 3 ), is enqueued in FCB free queue 222 .
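Steps 409 and 410 amount to walking the BCB chain from the FCB's FBA field, with each buffer's byte positions supplied by the qualifier fields of the BCB before it. The sketch below passes the buffer count explicitly for simplicity; in the scheme described above it would follow from the BCNT qualifier. All names are invented.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t nba, sbp, ebp; } bcb_t;

#define NBCB 1024
static bcb_t bcb[NBCB]; /* stand-in for BCBU 208 */

/* Read out one frame: the first buffer's SBP/EBP come from the FCB's
   second entry; every later buffer's SBP/EBP come from the previous BCB. */
static void read_frame(uint32_t fba, uint32_t sbp, uint32_t ebp, unsigned nbufs)
{
    uint32_t b = fba;
    for (unsigned i = 0; i < nbufs; i++) {
        printf("BCB %u: read buffer bytes %u..%u\n",
               (unsigned)b, (unsigned)sbp, (unsigned)ebp);
        sbp = bcb[b].sbp; /* qualifiers describe the NEXT buffer */
        ebp = bcb[b].ebp;
        b   = bcb[b].nba; /* follow the chain */
        /* each finished BCB would be returned to BCB free queue 226 here */
    }
}
```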


Abstract

A method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a processor configured to process frames of data. The processor may comprise a plurality of buffers configured to store frames of data where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each control block, e.g., frame control block, buffer control block, may comprise one or more qualifier fields that comprise information unrelated to the current control block. Instead, qualifiers may comprise information related to another control block. The last frame control block in a queue as well as the last buffer control block associated with a frame control block may comprise fields with no information, thereby reducing memory accesses to access information in those fields.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present invention is related to the following U.S. patent applications which are incorporated herein by reference:
  • Ser. No. ______ (Attorney Docket No. RAL920000091US1) entitled “Assignment of Packet Descriptor Field Positions in a Network Processor” filed ______.
  • Ser. No. ______ (Attorney Docket No. RAL920000092US1) entitled “Storing Frame Modification Information in a Bank in Memory” filed ______.
  • Ser. No. ______ (Attorney Docket No. RAL920000096US1) entitled “Efficient Implementation of Error Correction Code Scheme” filed ______.
  • TECHNICAL FIELD
  • The present invention relates to the field of networking communication systems, and more particularly to inserting qualifiers in control blocks to reduce memory accesses and thereby improve the efficiency of the bandwidth of the memory.
  • BACKGROUND INFORMATION
  • A packet switching network has switching points or nodes for transmission of data among senders and receivers connected to the network. The switching performed by these switching points is in fact the action of passing on packets or “frames” of data received by a switching point or node to a further node in the network. Such switching actions are the means by which communication data is moved through the packet switching network.
  • Each node may comprise a packet processor configured to process packets or frames of data. The packet processor may comprise a data storage unit, e.g., Double Data Rate Static Random Access Memory (DDR SRAM), configured with a plurality of buffers to store frame data. Each frame of data may be associated with a Frame Control Block (FCB) configured to describe the corresponding frame of data. Each FCB may be associated with one or more Buffer Control Blocks (BCBs) where each BCB associated with an FCB may be associated with a buffer in the data storage unit. A BCB may be configured to describe the associated buffer. Typically, FCBs and BCBs comprise various fields of information where the fields of information in FCBs and BCBs are each supplied by a separate memory, e.g., Quadruple Data Rate Static Random Access Memory (QDR SRAM), in the packet processor. That is, the fields of information in FCBs and BCBs may be obtained by accessing a separate memory, e.g., QDR SRAM, in the packet processor.
  • It would therefore be desirable to reduce the number of accesses to a particular memory, e.g., QDR SRAM, that supplies information to the fields of FCBs or BCBs thereby improving the efficiency of the bandwidth of the memory, e.g., QDR SRAM.
  • SUMMARY
  • The problems outlined above may at least in part be solved in some embodiments by inserting qualifiers in control blocks, e.g., frame control blocks, buffer control blocks, that comprise information unrelated to the current control block. Instead, qualifiers in control blocks, e.g., frame control blocks, buffer control blocks, may comprise information related to another control block or to a buffer associated with a next control block. The last frame control block in a queue in the packet processor as well as the last buffer control block associated with a frame control block may comprise fields with no information, thereby reducing memory accesses to a memory, e.g., QDR SRAM, to access information to be inserted in those fields. Consequently, the bandwidth of the memory, e.g., QDR SRAM, supplying information to those fields is improved.
  • In one embodiment, a system comprises a packet processor configured to process packets, i.e., frames, of data. The processor may comprise a plurality of buffers configured to store frames of data where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each buffer control block associated with a frame control block may be associated with a particular buffer of the plurality of buffers. The processor may further comprise a plurality of queues configured to temporarily store one or more frame control blocks. Each control block, e.g., frame control block, buffer control block, may comprise one or more qualifier fields that comprise information related to a particular buffer in the plurality of buffers.
  • Each frame control block may comprise one or more qualifier fields where the one or more qualifier fields comprise information unrelated to a current frame control block. In all but the last frame control block in a particular queue, the one or more qualifier fields may comprise information as to the byte count length of the one or more buffer control blocks associated with a next frame control block. In each of the one or more frame control blocks in a particular queue, the one or more qualifier fields may comprise information as to a starting byte position and to an ending byte position of frame data stored in a particular buffer associated with the first buffer control block which is associated with the frame control block.
  • In another embodiment of the present invention, each buffer control block may comprise one or more qualifier fields. In all but the last buffer control block associated with a frame control block, the one or more qualifier fields may comprise information as to a starting byte position and to an ending byte position of frame data stored in a particular buffer associated with a next buffer control block.
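One way to see the saving: when a new buffer is chained onto a frame, its link and its byte-position qualifiers can be written into the previous buffer control block in a single control-block memory access, whereas keeping the positions in the new control block itself would cost a second, separate write once they are known. A minimal C sketch under assumed layouts and invented names:

```c
#include <stdint.h>

typedef struct { uint32_t nba, sbp, ebp; } bcb_t; /* assumed layout */

#define NBCB 1024
static bcb_t bcb[NBCB]; /* stand-in for the control-block memory, e.g., QDR SRAM */

/* Chain a newly leased BCB onto a frame: one combined write to the
   previous BCB covers the NBA link and both byte-position qualifiers. */
static void chain_buffer(uint32_t prev, uint32_t next, uint32_t sbp, uint32_t ebp)
{
    bcb_t entry = { next, sbp, ebp };
    bcb[prev] = entry; /* a single write access instead of two */
}
```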
  • The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
  • FIG. 1 illustrates a packet processor configured in accordance with the present invention;
  • FIG. 2 illustrates a data flow unit configured in accordance with the present invention;
  • FIG. 3 is a diagram illustrating the reduction of memory accesses by linking frame data with qualifiers in control blocks; and
  • FIG. 4 is a flowchart of a method for reducing memory accesses by linking frame data with qualifiers in control blocks.
  • DETAILED DESCRIPTION
  • The present invention comprises a method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a packet processor configured to process packets, i.e., frames, of data. The processor may comprise a plurality of buffers configured to store frames of data where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each buffer control block associated with a frame control block may be associated with a particular buffer of the plurality of buffers. Each control block, e.g., frame control blocks, buffer control blocks, may comprise one or more qualifier fields. The one or more qualifier fields may comprise information unrelated to the current control block. Instead, qualifiers in control blocks, e.g., frame control blocks, buffer control blocks, may comprise information related to another control block. By inserting qualifiers storing information related to another control block, the last frame control block in a queue in the packet processor as well as the last buffer control block associated with a frame control block may comprise fields with no information, thereby reducing memory accesses to a memory, e.g., QDR SRAM, to access information to be inserted in those fields. Consequently, the bandwidth of the memory, e.g., QDR SRAM, supplying information to those fields is improved.
  • FIG. 1—Packet Processor
  • FIG. 1 illustrates an embodiment of the present invention of a packet processor 100. Packet processor 100 may comprise a data flow unit 110 configured to receive digital packets, i.e., frames, of data, from a particular switch (not shown) or port (not shown) of a packet switching network and transmit the digital packets, i.e., frames, of data to another switch or port, e.g., switch/port 120, in the packet switching network. Each frame of data may be associated with a Frame Control Block (FCB) where the FCB describes the associated frame of data. Each FCB associated with a frame of data may be associated with one or more Buffer Control Blocks (BCBs) where each BCB associated with an FCB may be associated with a buffer in a data storage unit 140. A BCB may be configured to describe the buffer associated with the next chained BCB as described in the description of FIGS. 3 and 4. In one embodiment, data flow unit 110 may reside on an integrated circuit, i.e., integrated chip. Data flow unit 110 may be coupled to data storage unit 140 configured to temporarily store frames of data received by data flow unit 110 from a switch (not shown) or port (not shown) in the packet switching network. Data flow unit 110 may further be coupled to a scheduler 130 configured to schedule frames of data to be transmitted from data flow unit 110 to switch/port 120. In one embodiment, scheduler 130 may reside on an integrated circuit, i.e., integrated chip. Furthermore, data flow unit 110 may further be coupled to an embedded processor 150 configured to process frames of data received by data flow unit 110.
  • FIG. 2—Data Flow Unit
  • FIG. 2 illustrates an embodiment of the present invention of data flow unit 110. Data flow unit 110 may comprise a receiver controller 203 configured to receive and temporarily store packets, i.e., frames, of data received from a switch (not shown) or port (not shown) in a packet switching network. Data flow unit 110 may further comprise a transmitter controller 201 configured to modify the frame data as well as transmit the modified frame data to a switch (not shown) or port (not shown) in a packet switching network. Data flow unit 110 may further comprise an embedded processor interface controller 202 configured to exchange frames to be processed by embedded processor 150.
  • Packets, i.e., frames, of data may be received by a port/switch interface unit 221. Port/switch interface unit 221 may receive data from a switch (not shown) in the packet switching network when data flow unit 110 operates in an egress mode. Otherwise, port/switch interface unit 221 may receive data from a port (not shown) that operates as an interface to the packet switching network when data flow unit 110 operates in an ingress mode. Data received by data flow unit 110 may be temporarily stored in a receiving preparation area memory 220 prior to being stored in data storage unit 140 which may be represented by a plurality of slices 205A-F. Slices 205A-F may collectively or individually be referred to as slices 205 or slice 205, respectively. The number of slices 205 in FIG. 2 is illustrative, and an embodiment of data flow unit 110 in accordance with the principles of the present invention may have some other predetermined number of slices 205. Each slice may comprise a plurality of buffers. Each slice may represent a slice of memory, e.g., Dynamic Random Access Memory (DRAM), so that frame data may be written into different buffers in different slices in order to maximize memory bandwidth. A memory arbiter 204 may be configured to collect requests, e.g., read, write, from receiver controller 203, transmitter controller 201 and embedded processor interface controller 202 and subsequently schedule access to particular data store memory slices, i.e., particular buffers in particular slices 205. For example, receiver controller 203 may be configured to issue write requests to memory arbiter 204 in order to write received data into individual buffers in a particular slice 205.
  • As stated above, frame data may be stored in data storage unit 140, i.e., a plurality of slices 205. In one embodiment, frame data may be stored in one or more buffers in one or more slices 205 in a manner such that the data in each particular frame may be recomposed by having the buffers chained together. That is, data in a particular frame may be stored in one or more buffers that are chained together in the order that data is written into the one or more buffers. The chaining of the one or more buffers may be controlled by a Buffer Control Block Unit (BCBU) 208 in a memory 229, e.g., Quadruple Data Rate Static Random Access Memory (QDR SRAM), coupled to data flow unit 110. BCBU 208 may be configured to comprise the addresses of each of the one or more buffers chained together in the order data was written into the buffers. The different buffers comprising data of the same frame may be linked together by means of pointers stored in BCBU 208.
  • As stated above, each frame of data may be associated with a Frame Control Block (FCB) where the FCB describes the associated frame of data. Frame Control Block Unit 1 (FCBU1) 209 in a memory 210, e.g., QDR SRAM, may be configured to store the information, e.g., frame control information, to be filled in the fields of the FCBs. That is, the fields of information in FCBs may be obtained by accessing memory 210, i.e., FCBU1 209 of memory 210. Additional details regarding FCBU1 209 of memory 210 storing fields of information are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1, which is hereby incorporated herein by reference in its entirety.
  • Frame data stored in buffers may be processed by embedded processor 150 by transmitting the header of each frame to be processed to embedded processor 150. As stated above, each frame of data may be represented by an FCB. These FCBs may be temporarily stored in G Queues (GQs) 218. Dispatcher logic 217 may be configured to dequeue the next FCB from GQs 218. Once dispatcher logic 217 dequeues the next FCB, dispatcher logic 217 issues a read request to memory arbiter 204 to read the data at the beginning of the frame, i.e., header of the frame, stored in data storage unit 140 associated with the dequeued FCB. The data read by dispatcher logic 217 is then processed by embedded processor 150.
  • Once frame data has been processed by embedded processor 150, the processed frame data may be temporarily stored in data storage unit 140, i.e., slices 205, by embedded processor logic 216 issuing a write request to memory arbiter 204 to write the processed frame data into individual buffers in one or more slices 205.
  • Once frame data has been processed by embedded processor 150, embedded processor logic 216 further issues the FCB associated with the processed frame to scheduler 130. Scheduler 130 may be configured to comprise flow queues 223 configured to store FCBs. Scheduler 130 may further comprise a Frame Control Block Unit 2 (FCBU2) 225 within a memory 224, e.g., QDR SRAM, configured to operate similarly to FCBU1 209. FCBU2 225 may be configured to store the information to be filled in the fields of the FCBs while the FCBs are temporarily residing in flow queues 223. Additional details regarding FCBU2 225 within memory 224 of scheduler 130 storing fields of information are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1. Scheduler 130 may be configured to transmit the FCBs stored in flow queues 223 to enqueue logic 227 of Target Blade Queues (TBQs) 215, where enqueue logic 227 is configured to enqueue the received FCBs in TBQs 215.
  • FCBs queued in TBQs 215 may be scheduled to be dequeued from TBQs 215 by TBQ scheduler 228 and loaded into Port Control Block (PCB) 224. TBQ scheduler 228 may be configured to dequeue the next FCB from TBQs 215 and enqueue that FCB into PCB 224. Once the next FCB is enqueued into PCB 224, PCB 224 may issue a read request to memory arbiter 204 to read the data at the beginning of the frame, i.e., header of the frame, stored in data storage unit 140 associated with the dequeued FCB. The data read by PCB 224 may be temporarily stored in data preparation area memory 214 prior to transmitting the processed frame data to a switch (not shown) or port (not shown) in a packet switching network. It is noted for clarity that PCB 224 may be configured to read a portion of the data stored in the processed frame in each particular read request. That is, the entire data stored in the processed frame may be read in multiple read requests provided by PCB 224. Once the entire data stored in the processed frame is read, the data storage unit 140 may store additional frame data.
  • Transmitter controller 201 may further comprise a frame alteration preparation area memory 213 configured to receive commands to modify the processed frames temporarily stored in data preparation area memory 214. These commands are commonly referred to as frame modification commands, which are issued by embedded processor 150 and stored in a particular bank in a particular buffer by embedded processor logic 216. Additional details regarding the storing of frame modification commands in a particular bank in a particular buffer are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Storing Frame Modification Information in a Bank in Memory,” Attorney Docket No. RAL920000092US1, which is hereby incorporated herein by reference in its entirety. In one embodiment, PCB 224 may be configured to retrieve the frame modification commands stored in a particular bank in a particular buffer and store them in frame alteration preparation area memory 213. A Frame Alteration (FA) logic unit 212 may be configured to execute the commands stored in frame alteration preparation area memory 213 to modify the contents of the processed frames temporarily stored in data preparation area memory 214. Once FA logic 212 has modified the contents of the processed frames, the modified processed frames may be transmitted through a switch/port interface unit 211. Switch/port interface unit 211 may transmit data to a port (not shown) that operates as an interface to the packet switching network when data flow unit 110 operates in an egress mode. Otherwise, switch/port interface unit 211 may transmit data to a switch (not shown) in the packet switching network when data flow unit 110 operates in an ingress mode.
  • Data flow unit 110 may further comprise a Buffer Control Block (BCB) Arbiter 207 configured to arbitrate among different BCB requests from transmitter controller 201, embedded processor interface controller 202 and receiver controller 203 to read from or write to BCBU 208. BCB Arbiter 207 may be configured to schedule different accesses in order to utilize memory bandwidth as efficiently as possible. Data flow unit 110 may further comprise a Frame Control Block (FCB) Arbiter 206 configured to arbitrate among different FCB requests from embedded processor interface controller 202, receiver controller 203 and transmitter controller 201 to read from or write to FCBU1 209.
  • As stated above, each frame of data may be associated with an FCB. As the processed frames are read from data storage unit 140, e.g., DDR DRAM, and the processed frames are modified and transmitted to a switch (not shown) or a port (not shown) in the packet switching network, the FCB associated with such a processed frame ceases to represent that particular frame of data. Once the FCB is no longer associated with frame data, the FCB may be stored in an FCB free queue 222 within FCB Arbiter 206. FCB free queue 222 may be configured to comprise a plurality of FCBs that are no longer associated with particular frame data. It is noted that FCB free queue 222 may comprise any number of FCBs that are no longer associated with particular frame data. Once data flow unit 110 receives a packet, i.e., frame, of data, a Reassembly Control Block (RCB) 219 of receiver controller 203 may associate a particular FCB from FCB free queue 222 with the received frame of data where the newly associated FCB may then be queued in GQs 218 by RCB 219.
  • As stated above, each frame of data may be associated with an FCB. Each FCB associated with a frame of data may be associated with one or more BCBs where each BCB associated with an FCB may be associated with a particular buffer of data storage unit 140. A BCB may be configured to describe the buffer associated with the next chained BCB as described in the discussion of FIGS. 3 and 4. Once the processed frame data stored in a buffer of data storage unit 140 has been retrieved by transmitter controller 201 and subsequently modified and transmitted to a switch (not shown) or port (not shown) in the packet switching network, the BCB associated with that particular buffer that no longer includes any frame data ceases to comprise any valid information. That is, the BCB associated with the particular buffer that no longer includes any frame data includes data that is not useful since the particular buffer associated with the BCB no longer includes any frame data. Once the BCB ceases to comprise any valid information, i.e., once the frame data in a particular buffer has been transmitted, the BCB may be stored in a BCB free queue 226 within BCB Arbiter 207. BCB free queue 226 may be configured to comprise a plurality of BCBs that do not comprise any valid information. It is noted that BCB free queue 226 may comprise any number of BCBs that do not comprise any valid information. Once receiver controller 203 receives frame data, RCB 219 of receiver controller 203 may retrieve a BCB in BCB free queue 226 so that RCB 219 may write the received frame data in the particular buffer associated with the BCB retrieved from BCB free queue 226.
  • As stated in the Background Information section, FCBs and BCBs may comprise various fields of information where the fields of information are supplied by a separate memory 210, i.e., FCBU1 209 of memory 210, and memory 229, i.e., BCBU 208 of memory 229, respectively. That is, the fields of information in FCBs may be obtained by accessing memory 210, i.e., FCBU1 209 of memory 210. The fields of information in BCBs may be obtained by accessing memory 229, i.e., BCBU 208 of memory 229. It would therefore be desirable to reduce the number of accesses to memories 210 and 229, e.g., QDR SRAM, that supply information to the fields of FCBs and BCBs, respectively, thereby improving the efficiency of the bandwidth of memories 210 and 229. A diagram illustrating the reduction of memory accesses to memories 210 and 229 by inserting qualifiers in the fields of control blocks is described below.
  • FIG. 3—Diagram Illustrating the Reduction of Memory Accesses by Linking Frame Data with Qualifiers in Control Blocks
  • FIG. 3 schematically illustrates an exemplary set 300 of control blocks depicting the inclusion of qualifiers in control blocks, e.g., FCB, BCB, in order to reduce accesses to memories 210 and 229 in accordance with the principles of the present invention. As stated above, FCBs may temporarily reside in a queue 305 which may be one of the following queues: FCB free queue 222, GQs 218, flow queues 223, and TBQs 215. Each of the above stated queues, i.e., FCB free queue 222, GQs 218, flow queues 223, TBQs 215, has a control block commonly referred to as a Queue Control Block (QCB) 301 comprising information as to the number of FCBs that currently reside in that particular queue. It is noted that BCB free queue 226 may also comprise QCB 301. It is further noted that QCB 301 may comprise other information than the number of FCBs that currently reside in that particular queue.
  • Referring to FIG. 3, QCB 301 may comprise a head field 302, a tail field 303 and a count field 304. Head field 302 may comprise an FCB Address (FCBA) of a first FCB, e.g., FCB 310A, located in the queue, e.g., FCB free queue 222, GQs 218, flow queues 223, TBQs 215, comprising QCB 301. (In the exemplary set 300 of FIG. 3, the notation (X) connected to certain fields of QCB 301, FCB 310A and FCB 310B, is used to denote that the particular field points to FCBX or comprises the byte count length of BCBsX.) Head field 302 may further comprise the Byte Count (BCNT) of the one or more BCBs, e.g., BCBs 320A-C, associated with the first FCB, e.g., FCB 310A, located in the queue, e.g., FCB free queue 222, GQs 218, flow queues 223, TBQs 215. The BCNT in head field 302 may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs, e.g., BCBs 320A-C, associated with the first FCB, e.g., FCB 310A, in the queue 305 comprising QCB 301. Tail field 303 may comprise the FCB Address (FCBA) of the last FCB, e.g., FCB 310B, in queue 305, e.g., FCB free queue 222, GQs 218, flow queues 223, TBQs 215, comprising QCB 301. Count field 304 may comprise the number of FCBs in queue 305 comprising QCB 301. Because in the exemplary set 300 of FIG. 3 there are two FCBs, e.g., FCBs 310A-B, in queue 305 comprising QCB 301, count field 304 comprises the number two. As an ordinary skilled artisan would recognize, it is noted that the queues, e.g., FCB free queue 222, GQs 218, flow queues 223, TBQs 215, may comprise any number of FCBs and that FIG. 3 is illustrative. It is further noted that FCBs 310A-B may collectively or individually be referred to as FCBs 310 or FCB 310, respectively.
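  • Purely as an illustration of the QCB layout just described, QCB 301 can be sketched as a C structure. The field names and 32-bit widths below are assumptions of this sketch rather than details from the specification; what matters is that head field 302 carries both the address of the first control block in the queue and the BCNT qualifier describing that control block's BCB chain.

      /* Hypothetical C sketch of QCB 301 (names and widths assumed). */
      #include <stdint.h>

      typedef struct {
          uint32_t head_addr;   /* head field 302: address of the first FCB (or, for
                                   BCB free queue 226, the first BCB) in the queue */
          uint32_t head_bcnt;   /* head field 302 qualifier: byte count of the BCBs
                                   chained to that first FCB */
          uint32_t tail_addr;   /* tail field 303: address of the last FCB in the queue */
          uint32_t count;       /* count field 304: number of FCBs in the queue */
      } qcb_t;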
  • Each FCB 310 may comprise two entries or rows of fields. In each FCB 310, the first entry may comprise a field comprising a pointer to the Next FCB Address (NFA). The first entry may further comprise a field comprising the Byte Count (BCNT) length of the one or more BCBs associated with the next FCB 310. That is, instead of FCBs 310 storing the Byte Count length (BCNT) of the one or more BCBs associated with the current FCB 310, the BCNT field stores the byte count length of the one or more BCBs associated with the next FCB 310. For example, FCB 310A comprises the FCB address of the following FCB 310, e.g., FCB 310B, in the NFA field as well as the byte count length of the one or more BCBs, e.g., BCBs 320D-F, associated with the following FCB 310, e.g., FCB 310B, in the BCNT field. The BCNT field may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs, e.g., BCBs 320D-F, associated with FCB 310B identified in the NFA field, i.e., the next FCB 310 in the chain of FCBs 310. FCB 310B does not comprise any information in the NFA field or in the BCNT field as there are no more FCBs 310 following FCB 310B. (This is denoted in the exemplary set 300 of FIG. 3 by “empty” parentheses “( )”.) By not storing information in the NFA field and in the BCNT field of FCB 310B, memory accesses to memory 210 are reduced, thereby improving the efficiency of the bandwidth of memory 210.
  • In each FCB 310, the second entry may comprise the fields of the First BCB Address (FBA) of the first BCB associated with that particular FCB, the Starting Byte Position (SBP) of the frame data stored in a buffer associated with the first BCB, and the Ending Byte Position (EBP) of the frame data stored in the buffer associated with the first BCB. Each FCB may be associated with one or more BCBs. Referring to FIG. 3, FCB 310A is associated with BCBs 320A-C. FCB 310B is associated with BCBs 320D-F. It is noted that FCBs may be associated with any number of BCBs and that FIG. 3 is illustrative, as would be recognized by an artisan of ordinary skill. It is further noted that BCBs 320A-F may collectively or individually be referred to as BCBs 320 or BCB 320, respectively. Each BCB 320 may be associated with a particular buffer in data storage unit 140. For example, BCB 320A is associated with buffer 330A. BCB 320B is associated with buffer 330B. BCB 320C is associated with buffer 330C. BCB 320D is associated with buffer 330D. BCB 320E is associated with buffer 330E. BCB 320F is associated with buffer 330F. Buffers 330A-F may collectively or individually be referred to as buffers 330 or buffer 330, respectively. It is noted that data storage unit 140 of packet processor 100 may comprise any number of slices 205 comprising any number of buffers 330. It is further noted that since there may be any number of buffers 330 there may be any number of BCBs 320 associated with those buffers 330. It is further noted that in one embodiment, each BCB 320 may be associated with a particular buffer 330 in data storage unit 140.
  • Referring to FCB 310A, the FBA field in the second entry may comprise the address of the first BCB 320, e.g., BCB 320A, associated with FCB 310A. FCB 310A may further comprise an SBP field storing the starting address of the frame data stored in the buffer 330, e.g., buffer 330A, associated with the first BCB 320, e.g., BCB 320A. FCB 310A may further comprise an EBP field storing the ending address of the frame data stored in the buffer 330, e.g., buffer 330A, associated with the first BCB 320, e.g., BCB 320A. The SBP and EBP fields may be referred to as qualifiers as they comprise information about the starting byte position and ending byte position of the frame data associated with the first BCB 320, e.g., BCB 320A, and not information about the current FCB 310, e.g., FCB 310A. Similarly, FCB 310B may comprise an FBA field in the second entry which comprises the address of the first BCB 320, e.g., BCB 320D, associated with FCB 310B. FCB 310B may further comprise an SBP field storing the starting address of the frame data stored in the buffer 330, e.g., buffer 330D, associated with the first BCB 320, e.g., BCB 320D. FCB 310B may further comprise an EBP field storing the ending address of the frame data stored in the buffer 330, e.g., buffer 330D, associated with the first BCB 320, e.g., BCB 320D. The SBP and EBP fields may be referred to as qualifiers as they comprise information about the starting byte position and ending byte position of the frame data associated with the first BCB 320, e.g., BCB 320D, and not information about the current FCB 310, e.g., FCB 310B.
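  • The two-entry FCB layout described above may likewise be sketched, for illustration only, as a C structure (continuing the hypothetical sketch begun after the QCB discussion; names and widths are assumptions). Note that the first entry qualifies the next FCB in the chain, while the second entry qualifies the first BCB's buffer:

      /* Hypothetical C sketch of FCB 310 (names and widths assumed). */
      #include <stdint.h>

      typedef struct {
          /* First entry: qualifiers describing the NEXT FCB in the chain. */
          uint32_t nfa;    /* Next FCB Address */
          uint32_t bcnt;   /* byte count of the BCBs associated with the FCB at nfa */
          /* Second entry: qualifiers describing the FIRST BCB of this frame. */
          uint32_t fba;    /* First BCB Address */
          uint16_t sbp;    /* starting byte position of frame data in the first BCB's buffer */
          uint16_t ebp;    /* ending byte position of frame data in the first BCB's buffer */
      } fcb_t;

      /* For the last FCB in a chain, nfa and bcnt are simply never written,
         which is the write access to memory 210 that the scheme saves. */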
  • As stated above, each FCB 310 may be associated with one or more BCBs 320. Referring to FIG. 3, FCB 310A is associated with BCBs 320A-C and FCB 310B is associated with BCBs 320D-F. Each BCB 320 may comprise three fields that are similar to the three fields in the second entry of FCBs 310. Each BCB 320 may comprise a pointer to the Next BCB Address (NBA) as well as the fields of SBP and EBP which are the starting and ending byte positions of the buffer 330 associated with the next BCB. The SBP and EBP fields may be referred to as qualifiers as they store the starting byte position and ending byte position of the buffer 330, e.g., buffer 330B, associated with the next BCB 320, e.g., BCB 320B. That is, instead of storing the starting and ending byte positions of the buffer 330, e.g., buffer 330B, associated with the current BCB 320, e.g., BCB 320B, thereby resulting in an extra write access to memory 229, the starting and ending byte positions of the buffer 330, e.g., buffer 330B, associated with the next BCB 320, e.g., BCB 320B, are stored in the SBP and EBP fields, respectively, in the previous BCB 320, e.g., BCB 320A.
  • For example, BCB 320A comprises the BCB address of the next BCB 320, e.g., BCB 320B, in the NBA field as well as the starting byte position of the frame data stored in the buffer 330, e.g., buffer 330B, associated with the next BCB 320, e.g., BCB 320B, in the SBP field and the ending byte position of the frame data stored in the buffer 330, e.g., buffer 330B, associated with the next BCB 320, e.g., BCB 320B, in the EBP field. BCB 320B comprises the BCB address of the next BCB 320, e.g., BCB 320C, in the NBA field as well as the starting byte position of the frame data stored in the buffer 330, e.g., buffer 330C, associated with the next BCB, e.g., BCB 320C, in the SBP field and the ending byte position of the frame data stored in the buffer 330, e.g., buffer 330C, associated with the next BCB 320, e.g., BCB 320C, in the EBP field. In the last BCB 320, e.g., BCB 320C, associated with an FCB 310, e.g., FCB 310A, there is no information in the fields of NBA, SBP and EBP since there are no more BCBs 320 following the last BCB 320, e.g., BCB 320C. By not storing information in the NBA, SBP and EBP fields of the last BCB 320, e.g., BCB 320C, there is no information to be written into the fields of the last BCB 320, e.g., BCB 320C, associated with an FCB 310, e.g., FCB 310A, thereby reducing memory accesses to memory 229 and improving the efficiency of the bandwidth of memory 229.
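  • Continuing the same illustrative sketch, a BCB 320 may be rendered as the following C structure (again, names and widths are assumptions). Its three fields mirror the second entry of an FCB, but all three qualify the next BCB's buffer rather than the current one's:

      /* Hypothetical C sketch of BCB 320 (names and widths assumed). */
      #include <stdint.h>

      typedef struct {
          uint32_t nba;    /* Next BCB Address */
          uint16_t sbp;    /* starting byte position of data in the buffer of the BCB at nba */
          uint16_t ebp;    /* ending byte position of data in the buffer of the BCB at nba */
      } bcb_t;

      /* For the last BCB of a frame, nba/sbp/ebp are never written, saving the
         corresponding write access to memory 229 (BCBU 208). */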
  • FIG. 4—Method for Reducing Memory Accesses by Linking Frame Data with Qualifiers in Control Blocks
  • FIG. 4 illustrates a flowchart of one embodiment of the present invention of a method 400 for reducing memory accesses to memories 210 and 229 by linking frame data with qualifiers in control blocks, e.g., FCBs 310, BCBs 320.
  • In step 401, a frame of data may be received from a switch (not shown) or a port (not shown) in a packet switching network and temporarily stored in receiving preparation area memory 220 by receiver controller 203.
  • In step 402, RCB 219 of receiver controller 203 may then be configured to lease one or more BCBs 320 from BCB free queue 226 (FIG. 2). A BCB may be said to be “leased” from BCB free queue 226 as the BCB may be temporarily removed from BCB free queue 226 during the “life cycle” of the FCB. Additional details regarding the “life cycle” of the FCB are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1. RCB 219 may then write the received data in one or more particular buffers 330 associated with the one or more BCBs 320 leased from BCB free queue 226. In one embodiment, RCB 219 may issue one or more write requests to memory arbiter 204 in order to write the received frame data into one or more buffers 330 associated with the one or more BCBs leased from BCB free queue 226.
  • As stated above, each particular buffer 330 may be associated with a particular BCB 320. BCB 320 may be configured as illustrated in FIG. 3 where BCB 320 may comprise the field of a Next BCB Address (NBA) which comprises a pointer to the next BCB 320 address as well as the fields of SBP and EBP which are the starting and ending byte positions of the buffer 330 associated with the next BCB 320. The SBP and EBP fields may be referred to as qualifiers as they store the starting byte position and ending byte position of the buffer 330 associated with the next BCB 320. That is, instead of storing the starting and ending byte positions of the buffer 330, e.g., buffer 330A (FIG. 3), associated with the current BCB 320, e.g., BCB 320A (FIG. 3), thereby resulting in an extra write access to memory 229, the starting and ending byte positions of the buffer 330, e.g., buffer 330B (FIG. 3), associated with the next BCB 320, e.g., BCB 320B (FIG. 3), are stored in the SBP and EBP fields, respectively, in the previous BCB 320, e.g., BCB 320A (FIG. 3).
  • During each lease operation, i.e., each BCB 320 leased from BCB free queue 226 by RCB 219, RCB 219 may read the head field 302 in QCB 301 (FIG. 3) of BCB free queue 226. RCB 219 may then read the NBA field of the first BCB 320, e.g., BCB 320A, which may comprise the address of the next BCB 320 in the BCB free queue 226 to retrieve. Head field 302 of QCB 301 may subsequently be updated by RCB 219 so that the address in the head field 302 is the address of the next BCB 320 that may be retrieved during the next lease operation. The count field 304 of QCB 301 of BCB free queue 226 may then be decremented to indicate that a BCB 320 has been retrieved from BCB free queue 226. Head field 302 of QCB 301 may further comprise the Byte Count (BCNT) of the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), associated with the one or more buffers 330, e.g., buffers 330A-C (FIG. 3), storing the frame of data. The BCNT in head field 302 may be referred to as a qualifier as it identifies the byte count length of the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), associated with the one or more buffers 330, e.g., buffers 330A-C (FIG. 3), storing the frame of data and not information about the BCB free queue 226 comprising QCB 301. QCB 301 may further comprise a tail field 303 comprising the address of the last FCB, e.g., FCB 310B (FIG. 3), located in the queue, e.g., FCB free queue 222.
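  • A single lease operation from BCB free queue 226, as just described, may be sketched in C as follows. This continues the hypothetical sketch above (qcb_t and bcb_t are the structures defined earlier); bcb_mem stands in for BCBU 208 in memory 229, and all names are assumptions of the sketch rather than details of the specification:

      /* Hypothetical sketch of one BCB lease from BCB free queue 226. */
      extern bcb_t bcb_mem[];   /* models BCBU 208 in memory 229 */

      uint32_t bcb_lease(qcb_t *free_q)
      {
          uint32_t leased = free_q->head_addr;      /* head field 302 names the first free BCB */
          free_q->head_addr = bcb_mem[leased].nba;  /* its NBA field names the next BCB to lease */
          free_q->count--;                          /* count field 304 reflects the removal */
          return leased;                            /* RCB 219 may now write frame data into
                                                       the buffer associated with this BCB */
      }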
  • RCB 219 of receiver controller 203 may associate the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), with an FCB 310 in step 403. RCB 219 may lease an FCB 310 from FCB free queue 222 (FIG. 2) thereby associating the leased FCB 310 with the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), that are associated with the one or more buffers 330, e.g., buffers 330A-C (FIG. 3), storing the frame of data. An FCB may be said to be “leased” from FCB free queue 222 as the FCB may be temporarily removed from FCB free queue 222 during the “life cycle” of the FCB. Additional details regarding the “life cycle” of the FCB are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1. FCB 310 may be configured as illustrated in FIG. 3 with two entries or rows of fields. The first entry may comprise a field comprising a pointer to the Next FCB Address (NFA). The first entry may further comprise a field comprising the Byte Count (BCNT) length of the one or more BCBs 320, e.g., BCBs 320D-F (FIG. 3), associated with the next FCB 310. That is, instead of FCB 310 storing the Byte Count length (BCNT) of the one or more BCBs 320 associated with the current FCB 310, the BCNT field stores the byte count length of the one or more BCBs 320 associated with the next FCB 310. It is noted that the last FCB 310, e.g., FCB 310B (FIG. 3), in the queue, e.g., FCB free queue 222 (FIG. 2), does not comprise any information in the NFA field or in the BCNT field as there are no more FCBs 310 following the last FCB 310, e.g., FCB 310B (FIG. 3), in the queue, e.g., FCB free queue 222 (FIG. 2). By not storing information in the NFA field and in the BCNT field of the last FCB 310, e.g., FCB 310B (FIG. 3), memory accesses to memory 210 are reduced thereby improving the bandwidth of memory 210. It is further noted that the last FCB 310, e.g., FCB 310B (FIG. 3), located in the queue, e.g., FCB free queue 222 (FIG. 2), may be identified in tail field 303 of QCB 301 (FIG. 3) of FCB free queue 222. As stated above, tail field 303 may comprise the address of the last FCB 310, e.g., FCB 310B (FIG. 3), located in the queue, e.g., FCB free queue 222.
  • The second entry of FCB 310 may comprise the fields of the First BCB Address (FBA) of the first BCB 320, e.g., BCB 320A (FIG. 3), associated with that particular FCB 310, the Starting Byte Position (SBP) of the frame data stored in buffer 330, e.g., buffer 330A (FIG. 3), associated with the first BCB 320, e.g., BCB 320A (FIG. 3), and the Ending Byte Position (EBP) of the frame data stored in buffer 330, e.g., buffer 330A (FIG. 3), associated with the first BCB 320, e.g., BCB 320A (FIG. 3). The SBP and EBP fields may be referred to as qualifiers as they comprise information about the starting byte position and ending byte position of the frame data associated with the first BCB 320, e.g., BCB 320A (FIG. 3), and not information about the current FCB 310, e.g., FCB 310A (FIG. 3).
  • In the lease operation, RCB 219 may read the head field 302 in QCB 301 of FCB free queue 222. Head field 302 may include the address of the first FCB 310, e.g., FCB 310A (FIG. 3), to retrieve from FCB free queue 222. RCB 219 may then read the NFA field of the first FCB 310, e.g., FCB 310A, which may comprise the address of the next FCB 310 in the FCB free queue 222 to retrieve. Head field 302 of QCB 301 may subsequently be updated so that the address in head field 302 is the address of the next FCB 310 that may be retrieved during the next lease operation. The count field 304 of QCB 301 of FCB free queue 222 may then be decremented to indicate that an FCB 310 has been retrieved from FCB free queue 222 and enqueued in GQs 218.
  • In step 404, FCB 310 may be enqueued in GQs 218 by RCB 219. As illustrated in FIG. 3, FCB 310 may comprise the fields of FBA as well as the qualifiers SBP and EBP in the second entry. Once FCB 310 is enqueued in GQs 218, the information in these fields is copied from RCB 219. RCB 219 may record the starting and ending byte positions of the frame data when RCB 219 writes the received frame data into the first buffer 330, e.g., buffer 330A, associated with the first BCB 320, e.g., BCB 320A, of the FCB 310, e.g., FCB 310A, enqueued in GQs 218 by RCB 219. Furthermore, the BCNT field in the first entry of FCB 310 may be copied from RCB 219.
  • FCB 310 enqueued in GQs 218 may then be chained together with the other FCBs 310 in GQs 218 in step 405. RCB 219 may read the tail field 303 in QCB 301 of GQs 218 to retrieve the address of the current last FCB 310 in GQs 218. The tail field 303 in QCB 301 of GQs 218 may subsequently be updated by RCB 219 so that the pointer in tail field 303 points to the FCB 310 just enqueued in GQs 218. The count field 304 of QCB 301 of GQs 218 may then be incremented by RCB 219 to indicate that an FCB 310 has been retrieved from FCB free queue 222 and enqueued in GQs 218.
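  • Step 405 may be illustrated, continuing the same hypothetical C sketch (qcb_t and fcb_t as defined earlier; fcb_mem stands in for FCBU1 209; all names are assumptions), as a tail enqueue that writes the new FCB's qualifiers into the previous tail rather than into the new FCB itself:

      /* Hypothetical sketch of chaining a newly leased FCB onto GQs 218. */
      extern fcb_t fcb_mem[];   /* models FCBU1 209 in memory 210 */

      void gq_enqueue(qcb_t *gq, uint32_t new_fcba, uint32_t new_bcnt)
      {
          fcb_mem[gq->tail_addr].nfa  = new_fcba;  /* old tail FCB now points at the new FCB */
          fcb_mem[gq->tail_addr].bcnt = new_bcnt;  /* and carries the new FCB's BCNT qualifier */
          gq->tail_addr = new_fcba;                /* tail field 303 names the new last FCB */
          gq->count++;                             /* count field 304 reflects the insertion */
          /* The new FCB's own NFA/BCNT fields stay unwritten: it is now the last
             FCB in the chain, which is the write access to memory 210 that the
             qualifier scheme saves. (Empty-queue handling omitted for brevity.) */
      }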
  • In step 406, FCB 310 may be dequeued from GQs 218. Dispatcher logic 217 of embedded processor interface controller 202 may read the head field 302 of QCB 301 of GQs 218 to retrieve the address of the FCB 310 to dequeue. Dispatcher logic 217 may further read the NFA field in the FCB 310 to determine the address of the next FCB 310 in GQs 218 to dequeue. Dispatcher logic 217 may be configured to update the head field 302 of QCB 301 in GQs 218 so that the address in the head field 302 is the address of the next FCB 310 that may be dequeued at the next dequeue operation. The count field 304 of QCB 301 in GQs 218 may be decremented to indicate that an FCB 310 has been dequeued from GQs 218. The contents of the dequeued FCB 310 may then be read by dispatcher logic 217 and transferred to embedded processor 150.
  • In step 407, the dequeued FCB 310 may be enqueued in TBQs 215. In another embodiment, FCB 310 may first be enqueued in flow queues 223 of scheduler 130 and then dequeued and enqueued in TBQs 215. Additional details regarding enqueuing and dequeuing the FCB 310 in flow queues 223 of scheduler 130 are disclosed in U.S. patent application Ser. No. ______, filed on ______, entitled “Assignment of Packet Descriptor Field Positions in a Network Processor,” Attorney Docket No. RAL920000091US1.
  • In step 408, the enqueued FCB 310 in TBQs 215 may then be dequeued by TBQ scheduler 228 to be loaded into PCB 224 to be read by PCB 224 as discussed in the description of FIG. 2. In particular, the address in the FBA field of the FCB 310 may be loaded into PCB 224.
  • In step 409, the frame of data to be transmitted through switch/port interface unit 211 to a switch (not shown) or a port (not shown) in the packet switching network may be read from the one or more buffers 330 associated with the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), by PCB 224. PCB 224 retrieves the address of the first BCB 320, i.e., the address included in the FBA field of the FCB 310 dequeued in step 408, that was loaded into PCB 224. The address of the next BCB 320, e.g., BCB 320B (FIG. 3), may be located in the NBA field of the first BCB 320, e.g., BCB 320A (FIG. 3).
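  • The chain walk of step 409 may be sketched as follows, continuing the hypothetical C sketch above (fcb_t, bcb_t and bcb_mem as before). read_buffer() is a stand-in, assumed for this sketch, for a read request issued to memory arbiter 204. Note how the qualifiers for each buffer come from the FCB or from the previous BCB, so the last BCB's fields are never read:

      /* Hypothetical sketch of reading a frame's buffers for transmission. */
      void read_buffer(uint32_t bcb_addr, uint16_t sbp, uint16_t ebp);  /* assumed stub */

      void transmit_frame(const fcb_t *fcb, uint32_t bcnt)
      {
          uint32_t cur = fcb->fba;   /* FBA field: first BCB of the frame */
          uint16_t sbp = fcb->sbp;   /* first buffer's qualifiers come from the FCB */
          uint16_t ebp = fcb->ebp;
          uint32_t remaining = bcnt; /* BCNT qualifier obtained from the queue linkage;
                                        assumes a well-formed chain whose buffer
                                        lengths sum to bcnt */

          for (;;) {
              read_buffer(cur, sbp, ebp);               /* read this buffer's frame data */
              remaining -= (uint32_t)(ebp - sbp + 1u);
              if (remaining == 0u)
                  break;          /* last buffer read; the last BCB's own NBA/SBP/EBP
                                     fields are never consulted */
              sbp = bcb_mem[cur].sbp;                   /* next buffer's qualifiers are */
              ebp = bcb_mem[cur].ebp;                   /* stored in the CURRENT BCB    */
              cur = bcb_mem[cur].nba;                   /* follow NBA to the next BCB   */
          }
      }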
  • In step 410, when all the data of a frame is read by PCB 224 from the one or more buffers 330 of data storage unit 140 that stored the frame of data, the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), associated with the one or more buffers 330, e.g., buffers 330A-C (FIG. 3), are enqueued in BCB free queue 226. In one embodiment, each BCB 320 may be enqueued one at a time in BCB free queue 226 as frame data in each associated buffer 330 is read by PCB 224.
  • In step 411, when the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), associated with the one or more buffers 330, e.g., buffers 330A-C (FIG. 3), have been enqueued in BCB free queue 226, the FCB 310 associated with the one or more BCBs 320, e.g., BCBs 320A-C (FIG. 3), is enqueued in FCB free queue 222.
  • Although the method and system of the present invention are described in connection with several embodiments, it is not intended to be limited to the specific forms set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention as defined by the appended claims. It is noted that the headings are used only for organizational purposes and not meant to limit the scope of the description or claims.

Claims (10)

1-10. (canceled)
11. A method for reducing memory accesses by inserting qualifiers in control blocks comprising the steps of:
receiving a frame of data;
leasing one or more buffer control blocks;
storing said frame of data in one or more buffers associated with said one or more buffer control blocks in a data storage unit; and
leasing a frame control block associated with said one or more buffer control blocks;
wherein said frame control block comprises one or more qualifier fields, wherein said one or more qualifier fields in said frame control block comprise information related to one of said one or more buffers in said data storage unit.
12. The method as recited in claim 11 further comprising the step of:
enqueuing said frame control block in a queue.
13. The method as recited in claim 11, wherein said one or more qualifier fields in said frame control block comprise information as to an address of one of said one or more buffer control blocks associated with said frame control block.
14. The method as recited in claim 11, wherein said one or more qualifier fields in said frame control block comprise information as to a starting byte position and to an ending byte position of data stored in said one of said one or more buffers storing said frame of data.
15. The method as recited in claim 11, wherein each of said one or more buffer control blocks comprises one or more qualifier fields, wherein said one or more qualifier fields in all but a last buffer control block of said one or more buffer control blocks associated with said frame control block comprise information as to a starting byte position and to an ending byte position of data stored in one of said one or more buffers associated with a next buffer control block.
16. The method as recited in claim 12, wherein said queue comprises a control block, wherein said control block in said queue comprises a head field, wherein said head field in said queue comprises an address of a first frame control block in said queue.
17. The method as recited in claim 16, wherein said head field in said queue further comprises a qualifier, wherein said qualifier comprises information as to a byte count length of one or more buffer control blocks associated with said first frame control block.
18. The method as recited in claim 16, wherein said control block of said queue further comprises a tail field, wherein said tail field comprises an address of a last frame control block in said queue.
19. The method as recited in claim 16, wherein said control block of said queue further comprises a count field, wherein said count field comprises a number of frame control blocks in said queue.