EP1654657A1 - Force no-hit indications for cam entries based on policy maps - Google Patents

Force no-hit indications for cam entries based on policy maps

Info

Publication number
EP1654657A1
Authority
EP
European Patent Office
Prior art keywords
entries
lookup
associative memory
result
access control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04753312A
Other languages
German (de)
French (fr)
Other versions
EP1654657A4 (en)
Inventor
Venkateshwar Rao Pullela
Dileep Kumar Devireddy
Bhushan Mangesh Kanekar
Stephen Francis Scheid
Suresh Gurajapu
Gyaneshwar S. Saharia
Atul Rawat
Dipankar Bhattacharya
Qizhong Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/630,176 (US7082492B2)
Priority claimed from US10/630,178 (US7689485B2)
Priority claimed from US10/630,174 (US7177978B2)
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Publication of EP1654657A1
Publication of EP1654657A4

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/14: Charging, metering or billing arrangements for data wireline or wireless communications
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/903: Querying
    • G06F 16/90335: Query processing
    • G06F 16/90339: Query processing by using parallel associative memories or content-addressable memories
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 45/745: Address table lookup; Address filtering
    • H04L 45/7453: Address table lookup; Address filtering using hashing

Definitions

  • One embodiment of the invention relates especially to computer and communications systems, especially network routers and switches; more particularly, one embodiment relates to associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, one embodiment relates to generating and merging lookup results to apply multiple features, and one embodiment relates to generating accounting data based on access control list entries.
  • IP Internet Protocol
  • a network device such as a switch or router, typically receives, processes, and forwards or discards a packet based on one or more criteria, including the type of protocol used by the packet, addresses of the packet (e.g., source, destination, group), and type or quality of service requested.
  • ACLs access control lists
  • IP forwarding requires a longest prefix match.
  • CMOS complementary metal-oxide-semiconductor
  • ASICs application-specific integrated circuits
  • custom circuitry software or firmware controlled processors
  • associative memories including, but not limited to binary content-addressable memories (binary CAMs) and ternary content-addressable memories (ternary CAMs or TCAMs).
  • binary CAMs binary content-addressable memories
  • ternary CAMs or TCAMs ternary content-addressable memories
  • Each entry of a binary CAM typically includes a value for matching against, while each TCAM entry typically includes a value and a mask.
  • the associative memory compares a lookup word against all of the entries in parallel, and typically generates an indication of the highest priority entry that matches the lookup word.
  • An entry matches the lookup word in a binary CAM if the lookup word and the entry value are identical, while an entry matches the lookup word in a TCAM if the lookup word and the entry value are identical in the bits that are not indicated by the mask as being irrelevant to the comparison operations.
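  • As an illustration of the matching rules just described, the following sketch (not part of the patent text; names are illustrative) models binary CAM and TCAM entry matching on integers, where a TCAM mask bit of 1 marks a bit as relevant and 0 marks it as a "don't care".

```python
def binary_cam_match(entry_value: int, lookup_word: int) -> bool:
    # A binary CAM entry matches only if every bit of the lookup word
    # is identical to the entry value.
    return entry_value == lookup_word


def tcam_match(entry_value: int, entry_mask: int, lookup_word: int) -> bool:
    # A TCAM entry matches if the entry value and the lookup word agree on
    # every bit the mask marks as relevant (1 = relevant, 0 = don't care).
    return (entry_value & entry_mask) == (lookup_word & entry_mask)


# Value 0b1010 with mask 0b1100 matches any lookup word whose two high bits are "10".
assert tcam_match(0b1010, 0b1100, 0b1011)
assert not binary_cam_match(0b1010, 0b1011)
```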
  • Associative memories are very useful in performing packet classification operations. In performing a packet classification, it is not uncommon for multiple lookup operations to be performed in parallel or in series using multiple associative memories, typically based on the same search key or a variant thereof, as one lookup operation might be related to packet forwarding while another is related to quality of service determination. Desired are new functionality, features, and mechanisms in associative memories to support packet classification and other applications. Additionally, as with most any system, errors can occur.
  • array parity errors can occur in certain content-addressable memories as a result of failure-in-time errors which are typical of semiconductor devices. Additionally, communications and other errors can occur.
  • Prior systems are known to detect certain errors and to signal that some error condition has occurred, but are typically lacking in providing enough information to identify and isolate the error. Desired is new functionality for performing error detection and identification.
  • One problem with performing packet classification is the rate at which it must be performed, especially when multiple features of a certain type are to be evaluated.
  • a prior approach uses a series of lookups to evaluate an action to be taken for each of these features. This approach is too slow, so techniques, such as Binary Decision Diagram (BDD) and Order Dependent Merge (ODM), were used for combining these features so they can be evaluated in a single lookup operation.
  • BDD Binary Decision Diagram
  • ODM Order Dependent Merge
  • For example, given a first ordered list with entries A1 and A2 and a second ordered list with entries B1 and B2, ODM combines these original lists to produce one of two cross-product equivalent ordered lists, each with four entries: A1B1, A1B2, A2B1, and A2B2; or A1B1, A2B1, A1B2, and A2B2.
  • These four entries can then be programmed into an associative memory and an indication of a corresponding action to be taken placed in an adjunct memory. Lookup operations can then be performed on the associative and adjunct memories to identify a corresponding action to use for a particular packet being processed.
  • ODM and BDD may also filter out entries which are unnecessary, as their values will never allow them to be matched.
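  • The cross-product behavior described above can be sketched as follows (a simplified illustration, not the patent's algorithm; the entry names follow the A1/A2 and B1/B2 example).

```python
from itertools import product

def odm_merge(list_a, list_b):
    # Cross-product two ordered feature lists into a single ordered list of
    # combined entries; iterating list_a in the outer loop yields one of the
    # two equivalent orderings mentioned above.
    return [a + b for a, b in product(list_a, list_b)]

print(odm_merge(["A1", "A2"], ["B1", "B2"]))
# ['A1B1', 'A1B2', 'A2B1', 'A2B2']
print([a + b for b, a in product(["B1", "B2"], ["A1", "A2"])])
# ['A1B1', 'A2B1', 'A1B2', 'A2B2']  (the other equivalent ordering)
```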
  • Methods and apparatus for defining and using associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, for merging lookup results, such as from one or more associative memory banks and/or memory devices, and for generating accounting or other data based on that indicated in an access control list or other specification, and typically using associative memory entries in one or more associative memory banks and/or memory devices.
  • a set of entries is determined based on a policy map with a force no-hit indication being associated with one or more of the entries.
  • programmable priority indications may be associated with one or more of the entries, or with the associative memory devices, associative memory banks, etc.
  • the force no-hit indications are often used in response to identified deny instructions in an access control list or other policy map.
  • a lookup operation is then performed on these associative memory entries, with highest matching result or results identified based on the programmed and/or implicit priority level associated with the entries, or with the associative memory devices, associative memory banks, etc.
  • One embodiment identifies an access control list including multiple access control list entries. A first set of access control list entries corresponding to a first feature of the access control list entries and a second set of access control list entries corresponding to a second feature of the access control list entries are identified.
  • a first associative memory bank is programmed with the first associative memory entries and a second associative memory bank is programmed with the second associative memory entries, with the first associative memory entries having a higher lookup precedence than the second associative memory entries.
  • a lookup value is then identified, such as that based on a packet or other item.
  • Lookup operations are then typically performed substantially simultaneously on the first and second sets of associative memory entries to generate multiple lookup results, with these results typically being identified directly, or via a lookup operation in an adjunct memory or other storage mechanism. These lookup results are then combined to generate a merged lookup result.
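  • A minimal sketch of this merge step, assuming two (value, mask) banks searched with the same lookup word, per-bank adjunct memories holding the corresponding actions, and bank 0 carrying the higher lookup precedence (all names are hypothetical):

```python
def lookup_bank(entries, lookup_word):
    # Return the index of the first (highest-priority) matching entry, or None.
    for index, (value, mask) in enumerate(entries):
        if (value & mask) == (lookup_word & mask):
            return index
    return None

def merged_lookup(banks, adjunct_memories, lookup_word):
    # In hardware the per-bank lookups would run substantially simultaneously.
    hits = [lookup_bank(bank, lookup_word) for bank in banks]
    for precedence, hit in enumerate(hits):      # bank 0 = highest precedence
        if hit is not None:
            return adjunct_memories[precedence][hit]
    return None                                  # no bank produced a hit
```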
  • One embodiment identifies an access control list including multiple access control list entries, with a subset of these access control list entries identifying accounting requests.
  • Accounting mechanisms such as, but not limited to counters or data structures, are associated with each of said access control list entries in the subset of access control list entries identifying accounting requests.
  • An item is identified. A particular one of the accounting mechanisms corresponding to the item is identified and updated.
  • the item corresponds to one or more fields of a received packet.
  • the item includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
  • at least one of the accounting mechanisms is associated with at least two different access control list entries in the subset of access control list entries identifying accounting requests.
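  • The accounting association might be modeled as in the sketch below (hypothetical names; counters are keyed so that two access control list entries can share one accounting mechanism, as noted above).

```python
from collections import defaultdict

class AclAccounting:
    def __init__(self, entry_to_counter):
        # entry_to_counter maps an ACL entry id to a counter id; entries that
        # map to the same counter id share a single accounting mechanism.
        self.entry_to_counter = entry_to_counter
        self.counters = defaultdict(int)

    def update(self, matched_entry_id):
        counter_id = self.entry_to_counter.get(matched_entry_id)
        if counter_id is not None:          # only entries requesting accounting
            self.counters[counter_id] += 1

acct = AclAccounting({"ace-3": "as-65000", "ace-7": "as-65000", "ace-9": "as-65010"})
acct.update("ace-3")   # an item (e.g., a received packet) matched ACL entry ace-3
acct.update("ace-7")   # shares the counter keyed by the same autonomous system number
```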
  • FIGs. 1A-E are block diagrams of various exemplary systems including one or more embodiments for performing lookup operations using associative memories
  • FIG. 2 is a block diagram of an associative memory including one or more embodiments for performing lookup operations
  • FIGs. 3A-D illustrate various aspects of a control used in one embodiment for performing lookup operations
  • FIGs. 4A-G illustrate various aspects of an associative memory block used in one embodiment for performing lookup operations
  • FIGs. 5A-C illustrate various aspects of an output selector used in one embodiment for performing lookup operations
  • FIGs. 6A-B illustrate an exemplary policy map and resultant associative memory entries
  • FIG. 6C illustrates a data structure for indicating priority of associative memories, blocks, or entries used in one embodiment
  • FIG. 7A illustrates a process for programming associative memory entries used in one embodiment
  • FIG. 7B illustrates a process for identifying a highest priority result used in one embodiment
  • FIGs. 8A-G illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for updating counters or other accounting devices, or for performing other functions
  • FIGs. 9A-K illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for generating merged results or for performing other functions.
  • DETAILED DESCRIPTION Methods and apparatus are disclosed for defining and using associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, for generating and merging lookup results to apply multiple features, for generating accounting or other data based on that indicated in an access control list or other specification, and for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions.
  • Embodiments described herein include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recites an aspect of the invention in its entirety. Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable medium containing instructions. One or multiple systems, devices, components, etc. may comprise one or more embodiments, which may include some elements or limitations of a claim being performed by the same or different systems, devices, components, etc. The embodiments described hereinafter embody various aspects and configurations within the scope and spirit of the invention, with the figures illustrating exemplary and non-limiting configurations.
  • packet refers to packets of all types or any other units of information or data, including, but not limited to, fixed length cells and variable length packets, each of which may or may not be divisible into smaller packets or cells.
  • packet as used herein also refers to both the packet itself or a packet indication, such as, but not limited to all or part of a packet or packet header, a data structure value, pointer or index, or any other part or identification of a packet.
  • packets may contain one or more types of information, including, but not limited to, voice, data, video, and audio information.
  • the term "item” is used generically herein to refer to a packet or any other unit or piece of information or data, a device, component, element, or any other entity.
  • processing a packet and “packet processing” typically refer to performing some steps or actions based on the packet contents (e.g., packet header or other fields), and such steps or action may or may not include modifying, storing, dropping, and/or forwarding the packet and/or associated data.
  • the term “system” is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof.
  • the term "computer” is used generically herein to describe any number of computers, including, but not limited to personal computers, embedded processing elements and systems, control logic, ASICs, chips, workstations, mainframes, etc.
  • processing element is used generically herein to describe any type of processing mechanism or device, such as a processor, ASIC, field programmable gate array, computer, etc.
  • device is used generically herein to describe any type of mechanism, including a computer or system or component thereof.
  • task and “process” are used generically herein to describe any type of running program, including, but not limited to a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or standalone application, and is not limited to any particular memory partitioning technique.
  • network and “communications mechanism” are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc.
  • messages is used generically herein to describe a piece of information which may or may not be, but is typically communicated via one or more communication mechanisms of any type.
  • storage mechanism includes any type of memory, storage device or other mechanism for maintaining instructions or data in any format.
  • Computer-readable medium is an extensible term including any memory, storage device, storage mechanism, and other storage and signaling mechanisms including interfaces and devices such as network interface cards and buffers therein, as well as any communications devices and signals received and transmitted, and other current and evolving technologies that a computerized system can interpret, receive, and/or transmit.
  • memory includes any random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components or elements.
  • storage device includes any solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices.
  • Memories and storage devices may store computer-executable instructions to be executed by a processing element and/or control logic, and data which is manipulated by a processing element and/or control logic.
  • data structure is an extensible term referring to any data element, variable, data structure, database, and/or one or more organizational schemes that can be applied to data to facilitate interpreting the data or performing operations on it, such as, but not limited to memory locations or devices, sets, queues, trees, heaps, lists, linked lists, arrays, tables, pointers, etc.
  • a data structure is typically maintained in a storage mechanism.
  • pointer and “link” are used generically herein to identify some mechanism for referencing or identifying another element, component, or other entity, and these may include, but are not limited to a reference to a memory or other storage mechanism or location therein, an index in a data structure, a value, etc.
  • associative memory is an extensible term, and refers to all types of known or future developed associative memories, including, but not limited to binary and ternary content addressable memories, hash tables, TRIE and other data structures, etc. Additionally, the term “associative memory unit” may include, but is not limited to one or more associative memory devices or parts thereof, including, but not limited to regions, segments, banks, pages, blocks, sets of entries, etc.
  • the phrases "based on x" and “in response to x” are used to indicate a minimum set of items x from which something is derived or caused, wherein “x” is extensible and does not necessarily describe a complete list of items on which the operation is performed, etc.
  • the phrase “coupled to” is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modifying or not modifying the coupled signal or communicated information.
  • the term “subset” is used to indicate a group of all or less than all of the elements of a set.
  • subtree is used to indicate all or less than all of a tree.
  • a set of entries is determined based on a policy map with a force no-hit indication being associated with one or more of the entries.
  • programmable priority indications may be associated with one or more of the entries, or with the associative memory devices, associative memory banks, etc. The force no-hit indications are often used in response to identified deny instructions in an access control list or other policy map.
  • a lookup operation is then performed on these associative memory entries, with highest matching result or results identified based on the programmed and/or implicit priority level associated with the entries, or with the associative memory devices, associative memory banks, etc.
  • Methods and apparatus are disclosed for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions. In one embodiment, each block retrieves a modification mapping from a local memory and modifies a received search key based on the mapping and received modification data. In one embodiment, each of the associative memory entries includes a field for indicating that a successful match on the entry should or should not force a no-hit result. In one embodiment, an indication of which associative memory sets or banks or entries to use in determining a lookup result is identified, such as based on a received profile ID.
  • One embodiment performs error detection and handling by identifying, handling, and communicating errors, which may include, but are not limited to, array parity errors in associative memory entries and communications errors such as protocol errors and interface errors on input ports.
  • Array parity errors can occur as a result of failure-in-time errors which are typical of semiconductor devices.
  • One embodiment includes a mechanism to scan associative memory entries in background, and to identify any detected errors back to a control processor for re-writing or updating the flawed entry. In one embodiment, certain identified errors or received error conditions are of a fatal nature in which no processing should be performed. For example, in one embodiment, a fatal error causes an abort condition. In response, the device stops an in-progress lookup operation and just forwards error and possibly no-hit signals.
  • these signals are generated at the time the in-progress lookup operation would have generated its result had it not been aborted so as to maintain timing among devices in a system including the associative memory.
  • error status messages indicating any error type and its corresponding source are propagated to indicate the error status to the next device and/or a control processor.
  • the communicated signal may indicate and generate an abort condition in the receiving device.
  • the receiving device does not perform its next operation or the received instruction, or it may abort its current operation or instruction.
  • the receiving device may or may not delay a time amount corresponding to that which its processing would have required in performing or completing the operation or instruction so as to possibly maintain the timing of a transactional sequence of operations.
  • One embodiment generates accounting or other data based on that indicated in an access control list or other specification, and typically using associative memory entries in one or more associative memory banks and/or memory devices.
  • One embodiment identifies an access control list including multiple access control list entries, with a subset of these access control list entries identifying accounting requests. Accounting mechanisms, such as, but not limited to counters or data structures, are associated with each of said access control list entries in the subset of access control list entries identifying accounting requests. An item is identified.
  • a particular one of the accounting mechanisms corresponding to the item is identified and updated. In one embodiment, the item corresponds to one or more fields of a received packet.
  • the item includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
  • at least one of the accounting mechanisms is associated with at least two different access control list entries in the subset of access control list entries identifying accounting requests.
  • One embodiment merges lookup results, such as from one or more associative memory banks and/or memory devices.
  • One embodiment identifies an access control list including multiple access control list entries.
  • a first set of access control list entries corresponding to a first feature of the access control list entries and a second set of access control list entries corresponding to a second feature of the access control list entries are identified.
  • a first associative memory bank is programmed with the first associative memory entries and a second associative memory bank is programmed with the second associative memory entries, with the first associative memory entries having a higher lookup precedence than the second associative memory entries.
  • a lookup value is then identified, such as that based on a packet or other item. Lookup operations are then typically performed substantially simultaneously on the first and second sets of associative memory entries to generate multiple lookup results, with these results typically being identified directly, or via a lookup operation in an adjunct memory or other storage mechanism.
  • FIGs. 1A-E are block diagrams of various exemplary systems and configurations thereof, with these exemplary systems including one or more embodiments for performing lookup operations using associative memories. First, FIG. 1A illustrates one such exemplary system.
  • control logic 110, via signals 111, programs and updates associative memory or memories 115, such as, but not limited to one or more associative memory devices, banks, and/or sets of associative memory entries which may or may not be part of the same associative memory device and/or bank. In one embodiment, control logic 110 also programs memory 120 via signals 123.
  • control logic 110 includes custom circuitry, such as, but not limited to discrete circuitry, ASICs, memory devices, processors, etc.
  • packets 101 are received by packet processor 105.
  • In addition to other operations (e.g., packet routing, security, etc.), packet processor 105 typically generates one or more items, including, but not limited to one or more packet flow identifiers based on one or more fields of one or more of the received packets 101 and possibly from information stored in data structures or acquired from other sources. Packet processor 105 typically generates a lookup value 103 which is provided to control logic 110 for providing control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memory or memories 115, which perform lookup operations and generate one or more results 117. In one embodiment, a result 117 is used by memory 120 to produce a result 125.
  • Control logic 110 then relays result 107, based on result 117 and/or result 125, to packet processor 105.
  • one or more of the received packets are manipulated and forwarded by packet processor 105 as indicated by packets 109.
  • results 117, 125 and 107 may include indications of error conditions.
  • FIG. 1B illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions.
  • Control logic 130 via signals 132, programs associative memory or memories 136.
  • control logic 130 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memory or memories 136, which perform lookup operations to generate results and error signals 134, which are received by control logic 130.
  • FIG. 1C illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions.
  • Control logic 140 via signals 141-143, programs associative memories 146-148.
  • control logic 140 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memories 146-148, which perform lookup operations to generate results and error signals 144-145.
  • associative memory 148 relays error indications received via signals 144 to control logic 140 via signals 145.
  • a synchronization bit field is included in messages 141-145 sent between devices 140 and 146-148, with the value being set or changed at predetermined periodic intervals such that each device 140, 146-148 expects the change.
  • One embodiment uses a single synchronization bit, and if this bit is set in the request or input data 141-145 to a device 146-148, then the device 146-148 will set this bit in the corresponding reply or output data 143-145.
  • control processor or logic 140 sets the sync bit in its request data 141 periodically, say once in every eight requests. Control processor or logic 140 also monitors the sync bit in the reply data 145.
  • control processor or logic 140 can detect it and recover from that error (by flushing the pipeline, etc.). In this manner, devices, especially those that are part of a transactional sequence, can synchronize themselves with each other. Resynchronization of devices may become important, for example, should an error condition occur, such as an undetected parity error in a communicated instruction signal (e.g., the number of parity errors exceeds the capability of the error detection mechanism). There is a possibility that a parity error in an instruction goes undetected, and that completely changes the transaction timing.
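  • One way to picture the synchronization bit check is the sketch below, assuming a fixed period of eight requests and replies that simply echo the bit (the names and the period are illustrative).

```python
SYNC_PERIOD = 8   # the control logic sets the sync bit once every eight requests

def expected_sync_bits(num_requests):
    return [(i % SYNC_PERIOD) == 0 for i in range(num_requests)]

def first_out_of_sync_reply(reply_sync_bits):
    # Each downstream device echoes the request's sync bit in its reply; a
    # mismatch means transaction timing was lost (e.g., an undetected parity
    # error) and the pipeline should be flushed to resynchronize.
    expected = expected_sync_bits(len(reply_sync_bits))
    for i, (got, want) in enumerate(zip(reply_sync_bits, expected)):
        if got != want:
            return i
    return None
```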
  • FIG. 1D illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions.
  • Control logic 150 via signals 151-153, programs associative memories 156-158.
  • control logic 150 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memories 156-158, which perform lookup operations to generate results and error signals 154-155 which are communicated to control logic 150.
  • system 180, which may be part of a router or other communications or computer system, used in one embodiment for distributing entries among associative memory units and selectively enabling less than all of the associative memory units when performing a lookup operation
  • system 180 includes a processing element 181, memory 182, storage devices 183, one or more associative memories 184, and an interface 185 for connecting to other devices, which are coupled via one or more communications mechanisms 189 (shown as a bus for illustrative purposes).
  • Various embodiments of system 180 may include more or less elements.
  • the operation of system 180 is typically controlled by processing element 181 using memory 182 and storage devices 183 to perform one or more tasks or processes, such as programming and performing lookup operations using associative memory or memories 184.
  • Memory 182 is one type of computer-readable medium, and typically comprises random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components. Memory 182 typically stores computer-executable instructions to be executed by processing element 181 and/or data which is manipulated by processing element 181 for implementing functionality in accordance with one embodiment of the invention.
  • Storage devices 183 are another type of computer-readable medium, and typically comprise solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Storage devices 183 typically store computer-executable instructions to be executed by processing element 181 and/or data which is manipulated by processing element 181 for implementing functionality in accordance with one embodiment of the invention. In one embodiment, processing element 181 provides control and data information
  • FIG. 2 illustrates an associative memory 200 used in one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions.
  • control logic 210 receives input control signals 202 which may include programming information.
  • control logic 210 may update information and data structures within itself, program/update associative memory blocks 218-219, and/or output selectors 231-232.
  • each of the associative memory blocks 218-219 include one or more associative memory sets or banks of associative memories entries, and logic or circuitry for performing lookup operations.
  • input data 201 (which may include, but is not limited to, search keys and modification data) and input control information 202 (which may include, but is not limited to, profile IDs, instructions, and programming information) are received by control logic 210, and possibly forwarded to other downstream associative memories in a cascaded configuration.
  • previous stage lookup results and/or error indications are received from previous stage associative memories in a cascaded configuration or from other devices by control logic 210.
  • control logic 210 possibly processes and/or forwards the received information via block control signals 211-212 to associative memory blocks 218-219 and via selector control signals and previous stage results 215 (which typically includes the received profile ID) to output selectors 231-232.
  • control logic 210 may generate error signals 216 based on a detected error in the received information or in response to received error condition indications.
  • control logic 210 merely splits or regenerates a portion of or the entire received input control 202 and optional previous stage results and errors 203 signals as selector control signals and previous stage results signals 215 and/or error signals 216.
  • control logic 210 could initiate an abort operation wherein a lookup operation will not occur because of a detected or received notification of an error condition.
  • control logic 210 identifies data representing which associative memory blocks 218-219 to enable, which associative memory blocks 218-219 each output selector 231-232 should consider in determining its lookup result, and/or modification mappings each associative memory block 218-219 should use in modifying an input search key.
  • this data is retrieved, based on received input control information 202 (e.g., a profile ID or other indication), from one or more memories, data structures, and/or other storage mechanisms. This information is then communicated as appropriate to associative memory blocks 218-219 via block control signals 211-212, and/or output selectors 231-232 via selector control signals and previous stage results signals 215.
  • associative memory blocks 218-219 each receive a search key and possibly modification data via signal 201, and possibly control information via block control signals 211-212.
  • Each enabled associative memory block 218-219 then performs a lookup operation based on the received search key, which may include generating a lookup word by modifying certain portions of the search key based on received modification data and/or modification mappings.
  • Each associative memory block 218-219 typically generates a result 228-229 which are each communicated to each of the output selectors 231-232.
  • each associative memory block 218-219 that is not enabled generates a no-hit signal as its corresponding result 228-229.
  • output selectors 231-232 receive an indication of the associative memory blocks 218-219 that are not enabled.
  • Output selectors 231-232 evaluate associative memory results 228-229 to produce results 240.
  • each output selector has a corresponding identified static or dynamic subset of the associative memory results 228-229 to evaluate in determining results 240.
  • an identification of this corresponding subset is provided to each output selector 231-232 via selector control signals 215.
  • each of the output selectors 231-232 receives a profile ID via selector control signals 215 and performs a memory lookup operation based on the received profile ID to retrieve an indication of the particular associative memory results 228-229 to evaluate in determining results 240.
  • results 240 are exported over one or more output buses 240, each typically connected to a different set of one or more pins of a chip of the associative memory.
  • the number of output buses used and their connectivity to output selectors 231-232 are static, while in one embodiment the number of output buses used and their connectivity to output selectors 231-232 are configurable, for example, at initialization or on a per or multiple lookup basis. In one embodiment, an output bus indication is received by an output selector 231-232, which uses the output bus indication to determine which output bus or buses to use. For example, this determination could include, but is not limited to a direct interpretation of the received output bus indication, performing a memory read operation based on the received output bus indication, etc. In one embodiment, an output selector 231-232 performs a memory access operation based on a profile ID to determine which output bus or buses to use for a particular lookup operation.
  • Associative memory 200 provides many powerful capabilities for simultaneously producing one or more results 240. For example, in one embodiment, based on a received profile ID, control logic 210 identifies which of the one or more associative memory blocks 218-219 to enable and then enables them, and provides the profile ID to output selectors 231-232 for selecting a lookup result among the multiple associative memory blocks 218-219. Each of the associative memory blocks 218-219 may receive/identify a modification mapping based on the profile ID, with this modification mapping possibly being unique to itself.
  • This modification mapping can then be used in connection with received modification data to change a portion of a received search key to produce the actual lookup word to be used in the lookup operation.
  • certain entries may be programmed with force no-hit indications to generate a no-hit result for the corresponding associative memory block 218-219 should a corresponding entry be identified as the highest priority entry matching the lookup word.
  • Each of these enabled associative memory blocks 218-219 typically generates a result (e.g., no-hit, or hit with the highest priority matching entry or location thereof identified), which is typically communicated to each of the output selectors 231-232.
  • the results are only communicated to the particular output selectors 231-232 which are to consider the particular result in selecting their respective highest priority result received from associative memory blocks 218-219 and possibly other lookup results from previous stage associative memories.
  • multiple associative memories 200 are cascaded or coupled in other methods so that results from one or more stages may depend on previous stage results, such that a lookup can be programmed to be performed across multiple associative memories 200.
  • FIG. 3A illustrates a control 300 (which may or may not correspond to control logic 210 of FIG. 2) of an associative memory used in one embodiment.
  • control 300 includes control logic 310 and memory 311.
  • programming signals 303 are received, and in response, one or more data structures in memory 311 are updated.
  • control logic 310 generates programming signals 318.
  • programming signals 318 are the same as programming signals 303, and thus a physical connection can be used rather than passing through control logic 310.
  • FIG. 3C One embodiment of a programming process is illustrated in FIG. 3C, in which processing begins with process block 380. Processing then proceeds to process block 382, wherein programming signals are received.
  • process block 384 data structures and other elements (e.g., associative memory blocks, output selectors, etc.) are updated. Processing is completed as indicated by process block 386.
  • input data 301, input control 302, and optionally previous stage results and errors 304 are received by control logic 310.
  • In response, control logic 310 references one or more data structures in memory 311.
  • Control logic 310 generates input data 314, block control signals 315, output selector control signals and (optionally) previous stage results 316, and possibly an error signal 319 indicating a detected error condition or a received error indicator.
  • input data 314 is the same as input data 301 and thus a physical connection can be used rather than passing through control logic 310.
  • FIG. 3B illustrates one set of data structures used in one embodiment.
  • Enable array 320 is programmed with an associative memory block enable indicator 325 for each profile ID 321 to be used.
  • Each associative memory block enable indicator 325 identifies which associative memory blocks are to be enabled for a given lookup operation. In one embodiment, associative memory block enable indicator 325 includes a programmable priority level indication for use in identifying which result should be used from results from multiple blocks and/or previous stages.
  • enable array 320 can be retrieved from memory 311 (FIG. 3A), which can then be used to generate associative memory block enable signals (and priority indications) included in block control signals 315 (FIG. 3A). In one embodiment, associative memory block enable indicator 325 is a bitmap data structure, while in one embodiment, associative memory block enable indicator 325 is a list, set, array, or any other data structure.
  • Output selector array 330 is programmed with an output selector ID 335 identifying which output selector, such as, but not limited to output selectors 231-232 (FIG. 2) for each tuple (profile ID 331, associative memory block ID 332).
  • an output selector ID 335 can be identified for each associative memory block ID 332.
  • output selector ID 335 is a numeric identifier, while in one embodiment, output selector ID 335 is any value or data structure.
  • Modification mapping array 340 is programmed with a modification mapping 345 for each tuple (profile ID 341, output selector ID 342).
  • a modification mapping 345 can be identified for each output selector ID 342.
  • each modification mapping is a data structure identifying how to modify a received search key with received modification data.
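  • These per-profile data structures could look roughly like the sketch below (Python dictionaries standing in for the enable array, output selector array, and modification mapping array; all values are made up).

```python
enable_array = {                       # profile ID -> blocks to enable (+ priorities)
    0: {"blocks": [0, 1], "priority": [2, 1]},
    1: {"blocks": [2], "priority": [1]},
}

output_selector_array = {              # (profile ID, block ID) -> output selector ID
    (0, 0): 0,
    (0, 1): 0,
    (1, 2): 1,
}

modification_mapping_array = {         # (profile ID, selector ID) -> byte replacement map
    (0, 0): {"source_bytes": [0, 1], "destination_bytes": [4, 5]},
    (1, 1): {"source_bytes": [], "destination_bytes": []},   # use the search key unchanged
}

def lookup_config(profile_id):
    # Gather everything a single lookup operation needs for this profile ID.
    enables = enable_array[profile_id]
    selectors = {b: output_selector_array[(profile_id, b)] for b in enables["blocks"]}
    mappings = {s: modification_mapping_array[(profile_id, s)] for s in set(selectors.values())}
    return enables, selectors, mappings
```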
  • FIG. 3D illustrates a process used in one embodiment for initiating a lookup operation. Processing begins with process block 360, and proceeds to process block 362, wherein input data and control signals are received. Next, in process block 364, any previous stage results and error indications are received. As determined in process block 366, if an abort operation should be performed, such as, but not limited to in response to a received fatal error indication or an identified fatal error condition, then processing proceeds to process block 374 (discussed hereinafter). Otherwise, in process block 368, the enable bitmap, output selector configuration, and modification mappings are retrieved based on the profile ID.
  • process block 370 data and control signals based on the retrieved and received information are forwarded to the associative memory blocks and output selectors.
  • As determined in process block 372, if an error condition is identified or has been received, then in process block 374, an error indication, typically including an indication of the error type and its source, is generated or forwarded. Processing is complete as indicated by process block 376.
  • FIG. 4A illustrates an associative memory block 400 used in one embodiment.
  • Associative memory block 400 typically includes control logic 410 and associative memory entries, global mask registers, operation logic and priority encoder 412 (e.g., elements for performing the associative memory match operation on a received lookup word). In one embodiment, sets of associative memory entries are grouped into banks of associative memory entries.
  • programming signals 401 are received, and in response, one or more associative memory entries and/or global mask registers in block 412 are updated.
  • an associative memory block 400 corresponds to a set or bank of associative memory entries and a mechanism for performing a lookup operation on the set or bank of associative memory entries to produce one or more results.
  • no mask register is included in associative memory block 400.
  • In one embodiment, associative memory block 400 includes a memory for storing modification mapping data (e.g., modification mapping 345 of FIG. 3B), and associative memory block 400 retrieves the modification mapping information, such as based on a received profile ID (e.g., rather than receiving the modification mapping signal 404).
  • a search key 402, modification data 403, modification mapping 404, an enable signal 405, a global mask enable signal 406, and a global mask select signal 407 are received. In response to performing a lookup operation and/or detecting an error condition, such as a parity fault in one of the associative memory entries, result and error indications 411 are generated. In one embodiment, associative memory entries are checked for parity errors in background.
  • As shown in FIG. 4B, one embodiment includes multiple global mask registers 415 for use in a lookup operation on associative memory entries 416.
  • Global mask enable signal 406 enables the use of a global mask register, while global mask select 407 identifies which of multiple masks to apply to each of the associative memory entries. Lookup word
  • FIG. 4C illustrates an error indication 420 used in one embodiment.
  • error indication 420 includes an error indication 421 for identifying whether any, and possibly how many, error indications are included therein. For any identified error condition or received error indication, an encoded description of each error is included in one or more of the error descriptors 422-423.
  • a bitmap is used in one or more of error descriptors 422-423, wherein each bit represents a possible error condition, and the value of the bit indicates whether or not a corresponding error has been identified (including received from a prior component or stage).
  • each error descriptor 422-423 corresponds to a different component, interface, or previous stage. In one embodiment, error indication 420 is used by other components in communicating error conditions or lack thereof.
  • FIG. 4D illustrates an associative memory entry 430 used in one embodiment.
  • associative memory entry 430 includes a value 431, an optional mask 432, force no-hit indication 433, valid/invalid flag 434, and an error detection value 435.
  • Error detection value 435 may be one or more parity bits, a cyclic redundancy checksum value, or a value corresponding to any other mechanism used for detecting data corruption errors.
  • value 431 is of a configurable width. In one embodiment, this configurable width includes 80 bits, 160 bits and 320 bits. In one embodiment, such as that of a binary content-addressable memory, no mask field 432 is included. In one embodiment, the width of mask field 432 is variable, and typically, although not required, matches the width of value field 431.
  • fields 431-435 are stored in a single physical memory; while in one embodiment, fields 431-435 are stored in multiple physical memories.
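  • A hypothetical rendering of such an entry, using a single even-parity bit as the error detection value (one of several possibilities the text mentions):

```python
from dataclasses import dataclass

@dataclass
class TcamEntry:
    value: int          # configurable-width match value
    mask: int           # per-entry mask (absent in a binary CAM)
    force_no_hit: bool  # a hit on this entry forces a no-hit result
    valid: bool         # invalid entries never match
    parity: int = 0     # error detection value: even parity over value and mask

    def compute_parity(self) -> int:
        return (bin(self.value).count("1") + bin(self.mask).count("1")) % 2

    def has_array_parity_error(self) -> bool:
        # Background scanning can recompute parity and flag a corrupted entry
        # so that a control processor can re-write or update it.
        return self.parity != self.compute_parity()
```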
  • FIG. 4E illustrates a mechanism to modify a search key based on modification mapping and modification information used in one embodiment. As shown, a modification mapping bit 443 is used to control selector 440 which selects either search key unit (e.g., one or more bits, bytes, etc.) 441 or modification data unit 442 as the value for lookup unit 445, which is typically a portion of the actual lookup word to be used in matching associative memory entries in a lookup operation.
  • modification mapping 450 includes a source portion 451 and a destination portion 452.
  • modification data 454 includes four bytes and search key 456 includes eight bytes.
  • the source portion 451 of modification mapping 450 identifies which bytes of modification data 454 are to be used in generating lookup word 458, and the destination portion 452 of modification mapping 450 identifies where the corresponding bytes to be used of modification data 454 are to be placed in lookup word 458, with the remaining bytes coming from search key 456. In other words, modification mapping 450 and modification data 454 are used to replace certain specified data units in search key 456 in producing the value which will be used in matching the associative memory entries. Of course, various embodiments use different numbers of bits and bytes for modification mapping 450 and modification data 454.
  • modification mapping 450 includes an indication of the portion of search key 456 to modify (e.g., the value of J in one embodiment, the high-order bytes, the low order bytes, etc.).
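  • The byte replacement of FIG. 4F might be sketched as follows (an illustration with an eight-byte search key and four bytes of modification data, as in the example above; function and parameter names are assumptions).

```python
def build_lookup_word(search_key: bytes, modification_data: bytes,
                      source_bytes, destination_bytes) -> bytes:
    # Splice the selected modification-data bytes into the indicated positions
    # of the search key to form the lookup word actually matched against entries.
    lookup_word = bytearray(search_key)
    for src, dst in zip(source_bytes, destination_bytes):
        lookup_word[dst] = modification_data[src]
    return bytes(lookup_word)

key = bytes(range(8))                      # 00 01 02 03 04 05 06 07
mod = bytes([0xAA, 0xBB, 0xCC, 0xDD])
print(build_lookup_word(key, mod, source_bytes=[0, 1], destination_bytes=[6, 7]).hex())
# -> 000102030405aabb
```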
  • FIG. 4G illustrates an associative memory process used in one embodiment in performing a lookup operation. Processing begins with process block 470, and proceeds to process block 472. If the associative memory is not enabled, then processing proceeds to process block 490 wherein a result with a no hit indication is generated, and processing continues to process block 484. Otherwise, in process block 474, the lookup word is determined typically based on the search key, modification mapping, and modification data. Note, in one embodiment, the search key is used as the lookup word and there is no concept of a modification mapping or modification data.
  • process block 476 the lookup word is used to match the associative memory entries with consideration of a selected and enabled global mask, if any. Note, in one embodiment, there is no concept of a global mask. As determined in process block 478, if at least one match has been identified, then processing proceeds to process block 480, otherwise to process block 490, wherein a result with a no hit indication is generated and processing proceeds to process block 484. Otherwise, as determined in process block 480, if the highest priority matching entry includes a force no hit indication, then processing proceeds to process block 490, wherein a result with a no hit indication is generated and processing proceeds to process block 484.
  • a result indicating a hit (i.e., successful match) with the highest priority matching entry identified is generated.
  • the result is communicated to at least the identified output selector or selectors. In one embodiment, the output selector to which to communicate the result is identified by output selector ID 335 (FIG. 3B).
  • a signal is generated indicating the type and location of the error. In one embodiment, error indication 420 (FIG. 4C) is used. Processing is complete as indicated by process block 499.
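  • The per-block flow of FIG. 4G can be condensed into the sketch below (reusing the hypothetical entry fields sketched earlier; entries are assumed to be ordered with the highest-priority entry first).

```python
def block_lookup(entries, lookup_word, enabled=True):
    if not enabled:
        return {"hit": False}                     # disabled block: no-hit result
    for index, e in enumerate(entries):
        if e.valid and (e.value & e.mask) == (lookup_word & e.mask):
            if e.force_no_hit:
                return {"hit": False}             # highest-priority match forces a no-hit
            return {"hit": True, "index": index}  # hit: identify the matching entry
    return {"hit": False}                         # nothing matched
```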
  • FIG. 5A illustrates an output selector 500 (which may or may not correspond to an output selector 231-232 of FIG. 2) used in one embodiment.
  • output selector 500 includes control logic 510 and memory 511.
  • programming signals 504 are received, and in response, one or more data structures in memory 511 are updated.
  • FIG. 5B illustrates one data structure used in one embodiment.
  • Available array 520 is programmed with an associative memory blocks and optionally previous stage results available for use indicator 525 for each profile ID 521 to be used.
  • Each indicator 525 identifies which, if any, associative memory blocks, sets of entries or associative memory banks are to be considered in determining which matching associative entry to select for the ultimate highest-priority matching associative memory entry.
  • indicator 525 further identifies which previous stage results to consider. In one embodiment, a priority level is associated with each of the banks and/or previous stage results.
  • associative memory blocks available for use indicator 525 is a bitmap data structure, while in one embodiment, associative memory blocks available for use indicator 525 is a list, set, array, or any other data structure.
  • output selector 500 receives selector control signal 501, which may include a profile ID.
  • output selector 500 receives any relevant previous stage results 502 and results 503 from zero or more of the associative memory blocks from which the highest-priority entry will be selected, and which, if any, will be identified in generated result 515.
  • selector control signal 501 includes an enable indication, the enable indication including an enabled or not-enabled value, such that when a not-enabled value is received, output selector 500 is not enabled and does not select among results from blocks 1-N 503 or optional previous stage results 502. In one embodiment, when not enabled, output selector 500 generates a result signal 515 indicating a no-hit, not-enabled, or some other predetermined or floating value.
  • result 515 is communicated over a fixed output bus, which may or may not be multiplexed with other results 515 generated by other output selectors 500.
  • the associative memory may include one or more output buses, each typically connected to a single pin of a chip of the associative memory, with the selection of a particular output bus possibly being hardwired or configurable, with the configuration possibly being on a per lookup basis, such as that determined from a received value or configuration information retrieved from a memory (e.g., based on the current profile ID).
  • control logic 510 or other mechanism typically selects which output bus (and the timing of sending result 515) to use for a particular or all results 515.
  • A process used in one embodiment for receiving and selecting a highest-priority associative memory entry, if any, is illustrated in FIG. 5C.
  • Processing begins with process block 540, and proceeds to process block 542, wherein the results from the associative memory blocks and the profile ID are received.
  • the set of associative memory blocks to consider in determining the result is retrieved from a data structure/memory based on the profile ID.
  • any relevant previous stage results are received from coupled associative memories.
  • In process block 548, the highest priority match from the available associative memory blocks and previous stage results, if any, is identified based on the implied and/or programmed priority values associated with the matching entries and/or associative memories, blocks, etc. (A software sketch of this selection follows below.)
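A minimal Python sketch of the output selector behavior of FIGs. 5A-5C, under assumed data layouts: `available` stands in for available array 520 (a profile ID selecting a bitmap of blocks to consider), `block_priority` stands in for a priority mapping such as data structure 650 of FIG. 6C, and previous stage results could be folded in as additional candidates in the same way. None of the names or values below come from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class BlockResult:
    hit: bool
    entry_index: int = -1

# Available array 520 analogue: profile ID -> bitmap of blocks to consider.
available: Dict[int, int] = {
    7: 0b0101,   # profile 7 considers blocks 0 and 2 (hypothetical values)
}

# Priority mapping analogue (FIG. 6C): per-block priority, lower = higher priority.
block_priority: List[int] = [0, 1, 2, 3]

def select_result(profile_id: int, block_results: List[BlockResult]) -> Optional[BlockResult]:
    """Pick the highest-priority hit among the blocks enabled for this profile."""
    bitmap = available.get(profile_id, 0)
    candidates = [
        (block_priority[i], result)
        for i, result in enumerate(block_results)
        if bitmap & (1 << i) and result.hit
    ]
    if not candidates:
        return None                                 # generate a no-hit result 515
    return min(candidates, key=lambda c: c[0])[1]   # highest-priority participating hit

results = [BlockResult(True, 12), BlockResult(True, 3), BlockResult(False), BlockResult(True, 9)]
print(select_result(7, results))   # block 0 wins: it is enabled and has the highest priority
```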
  • FIG. 6A illustrates an exemplary policy map 600, including deny and permit instructions. Note, there are many applications of embodiments, and not all use permit and deny instructions.
  • FIG. 6B illustrates associative memory entries 621 and 622 as determined by one embodiment based on policy map 600. Associative memory entries 621 and 622 could be programmed in a same or different associative memories or associative memory blocks.
  • Associative memory entries 621 and 622 are shown in separate groupings to illustrate how priority can be optionally used and programmed in one embodiment.
  • Entries 621 and 622 can be stored in different associative memories and/or associative memory banks, etc., which may be considered in determining where to store the entries in order to efficiently use the space available for the entries.
  • By associating a priority level with each entry, entries 621 and 622 may even be intermixed within a same associative memory and/or associative memory block, etc.
  • FIG. 6C illustrates a data structure 650 for indicating priority of associative memories, blocks, or entries, etc. used in one embodiment.
  • priority mapping data structure 650 provides a priority indication 652 (e.g., value) for each of the associative memories, associative memory blocks, associative memory entries, etc. (identified by indices 651).
  • Associative memories and/or blocks, etc. associated with programmed priority values can be used with or without programmed priority values associated with the associative memory entries themselves.
  • FIG. 7A illustrates a process for programming associative memory entries used in one embodiment.
  • Processing begins with process block 700, and proceeds to process block 702, wherein a policy map (e.g., any definition of desired actions, etc.) is identified.
  • In process block 704, a set of corresponding entries is identified based on the policy map.
  • In process block 706, a force no-hit indication is associated with one or more of the entries (if so correspondingly defined by the policy map).
  • a force no-hit indication is of particular use in implementing deny operations, but is not required to be identified with a deny operation.
  • In process block 708, priority indications are associated with each of the entries, associative memories, associative memory banks, etc. (A software sketch of this programming step follows below.)
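The programming flow of FIG. 7A can be sketched as follows. This is a hypothetical software rendering: the policy map format, the use of list position as the programmed priority, and the addresses are all assumptions; the patent only requires that deny-like statements receive a force no-hit indication and that priority indications may be programmed.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProgrammedEntry:
    value: int
    mask: int
    priority: int
    force_no_hit: bool

def program_policy_map(policy: List[Tuple[str, int, int]]) -> List[ProgrammedEntry]:
    """Turn an ordered (action, value, mask) policy map into associative memory entries.

    Deny statements are programmed with a force no-hit indication; list order
    supplies a programmed priority. This mirrors FIG. 7A only loosely and is
    not the patent's implementation."""
    entries = []
    for priority, (action, value, mask) in enumerate(policy):
        entries.append(ProgrammedEntry(
            value=value,
            mask=mask,
            priority=priority,
            force_no_hit=(action == "deny"),
        ))
    return entries

policy_map = [
    ("deny",   0x0A000005, 0x00000000),   # deny one host (hypothetical addresses)
    ("permit", 0x0A000000, 0x000000FF),   # permit the rest of the subnet
]
for entry in program_policy_map(policy_map):
    print(entry)
```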
  • FIG. 7B illustrates a process for identifying a highest priority result used in one embodiment. Processing begins with process block 750, and proceeds to process block 752, wherein results are received from the associative memories, blocks, etc. (including possibly from previous stages). In process block 754, the priority values are associated with the results (e.g., based on the entries, memories, blocks, etc.). In process block 756, the highest priority result is (or in one embodiment, results are) identified based on the inherent or programmed priority values.
  • FIGs. 8A-G illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for updating counters or other accounting devices, or for performing other functions.
  • Shown in FIG. 8A is an access control list 800 which defines accounting information to be collected in a counting mechanism one by statement 801 for access control list entries 803 and in a counting mechanism two by statement 802 for access control list entries 804. Note, there are multiple access control entries that will cause a same counting mechanism to be adjusted. Also, the value by which a particular counter is adjusted can be one (e.g., corresponding to one item or packet), a byte count (e.g., a size of an item, packet, frame, or datagram), or any other value.
  • FIG. 8B illustrates a process used in one embodiment to configure a mechanism for accumulating information based on access control entries.
  • this embodiment may be responsive to and/or implemented in computer-readable medium (e.g., software, firmware, etc.), custom hardware (e.g., circuits, ASICs, etc.) or via any other means or mechanism, such as, but not limited to that disclosed herein.
  • one embodiment uses a system described herein, and/or illustrated in FIGs. 1A-E, 2, 8D-8E, 9A, 9C-D, and/or any other figure. Processing of the flow diagram illustrated in FIG. 8B begins with process block
  • FIG. 8C illustrates a process used in one embodiment for updating an accounting mechanism based on an item, such as, but not limited to one or more fields or values associated with a packet. Processing begins with process block 820, and proceeds to process block 822, wherein an item is identified.
  • the identification of an item might include identifying an autonomous system number corresponding to the packet.
  • an autonomous system number is typically associated with a set of communication devices under a single administrative authority. For example, all packets sent from an Internet Service Provider typically are associated with a same autonomous system number.
  • a particular one of the accounting mechanisms corresponding to the item is identified, such as by, but not limited to a lookup operation in a data structure, associative memory, or by any other means or mechanism.
  • the identified accounting mechanism is updated. Processing is complete as indicated by process block 828. (A software sketch of this accounting update follows below.)
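A minimal sketch of the accounting update of FIG. 8C, assuming the item is keyed by a (source ASN, destination ASN) pair; the patent allows any keying and any lookup mechanism, and the counter names and ASN values below are invented for illustration.

```python
from collections import defaultdict

# Counter id per (source ASN, destination ASN) pair -- one possible keying; the
# text only requires that an item map to a particular accounting mechanism.
counter_for_item = {
    (64512, 64513): "counter_1",
    (64512, 64514): "counter_2",
}
counters = defaultdict(int)

def account(item, value_delta: int = 1) -> None:
    """Identify the accounting mechanism for an item and update it."""
    counter_id = counter_for_item.get(item)
    if counter_id is not None:
        counters[counter_id] += value_delta   # adjust by 1, a byte count, or any other value

account((64512, 64513), value_delta=1500)
account((64512, 64513))
account((64512, 64514), value_delta=64)
print(dict(counters))   # {'counter_1': 1501, 'counter_2': 64}
```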
  • FIG. 8D illustrates one embodiment of a system for updating an accounting value based on that defined by an access control list or other mechanism.
  • Packets 831 are received and processed by packet processor 832 to generate packets 839.
  • packet processor 832 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet.
  • a lookup value 833 is identified.
  • FIG. 9G illustrates a lookup value 960 used in one embodiment.
  • One embodiment uses all, less than all, or none of fields 960A-960I.
  • a lookup operation is performed in associative memory entries 834 in one or more associative memory banks and/or one or more associative memories to generate a counter indication 835.
  • FIG. 8E illustrates one embodiment of a system for updating an accounting value based on that defined by an access control list or other mechanism.
  • Packets 840 are received and processed by packet processor 841 to generate packets 849.
  • packet processor 841 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet. Based on an identified packet, autonomous system numbers, and/or other information, a lookup value 842 is identified.
  • FIG. 9G illustrates a lookup value 960 used in one embodiment.
  • One embodiment uses all, less than all, or none of fields 960A-960I.
  • a lookup operation is performed in associative memory entries 843 in one or more associative memory banks and/or one or more associative memories to produce a lookup result 844, which is then used to perform a lookup operation in adjunct memory 845 to generate a counter indication 846, and the corresponding counting mechanism within counters and decoder/control logic 847 is updated.
  • adjunct memory 845 stores counter indications for corresponding locations of access control list entries programmed in associative memory 843, and some of these counter indications may be the same value such that a same counting mechanism is updated for different matching access control list entries.
  • Counter values 848 are typically communicated via any communication mechanism and/or technique to packet processor 841 or another device to be forwarded or processed.
  • FIG. 8F illustrates an example of associative memory entries 860 and corresponding adjunct memory entries 870, such as those generated by one embodiment based on access control list entries 803 and 804 (FIG. 8A).
  • Associative memory entries 861-863 have the same counter indication in adjunct memory entries 871-873, while associative memory entry 864 has a different corresponding counter indication in adjunct memory entry 874.
  • associative memory entries include fields for a source address, destination address, and other fields, such as, but not limited to autonomous system numbers (ASNs), protocol type, source and destination port information, etc.
  • adjunct memory entries 870 include an indication of a counting mechanism and/or other values which may be used for other purposes (e.g., security, routing, policing, quality of service, etc.).
  • FIG. 8G illustrates a process used in one embodiment for processing a packet.
  • Processing begins with process block 880, and proceeds to process block 882, wherein a packet is identified.
  • In process block 884, one or more forwarding information base (FIB) lookup operations are performed to identify source and destination autonomous system numbers corresponding to the identified packet.
  • In process block 886, an accounting lookup value is identified, typically based on information contained in the identified packet and the source and destination ASNs.
  • A lookup operation is then performed in one or more associative memory banks and possibly in one or more corresponding adjunct memories to identify a counter indication. In process block 890, the counter, if any, corresponding to the counter indication is updated by some static or dynamic value. Processing is complete as indicated by process block 892. (A software sketch of this accounting lookup pipeline follows below.)
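The associative memory plus adjunct memory accounting pipeline of FIGs. 8E-8G can be modeled roughly as below. The 16-bit lookup values, masks, and counter indications are illustrative only; the point shown is that several associative memory locations can share one counter indication, as in FIG. 8F.

```python
# Associative memory entries (value, mask over a simplified lookup value) and a
# parallel adjunct memory of counter indications.
tcam = [
    (0x0A00, 0x00FF),   # entries 861-863 style: a block of entries sharing one counter
    (0x0B00, 0x00FF),
    (0x0C00, 0x00FF),
    (0x0D42, 0x0000),   # entry 864 style: its own counter
]
adjunct = [0, 0, 0, 1]   # counter indications per associative memory location
counters = [0, 0]

def update_counter(lookup_value: int, value_delta: int = 1) -> None:
    """The first matching location selects an adjunct entry, which names the counter."""
    for i, (value, mask) in enumerate(tcam):
        if (lookup_value ^ value) & ~mask & 0xFFFF == 0:
            counters[adjunct[i]] += value_delta
            return                      # a miss simply updates nothing in this sketch

update_counter(0x0A07, value_delta=1500)
update_counter(0x0C10)
update_counter(0x0D42, value_delta=64)
print(counters)   # [1501, 64]
```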
  • FIG. 9A illustrates one embodiment of a system for identifying a merged lookup result.
  • Packets 901 are received and processed by packet processor 902 to generate packets 909.
  • packet processor 902 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet. Based on an identified packet, autonomous system numbers, and/or other information, a lookup value 903 is identified.
  • FIG. 9G illustrates a lookup value 960 used in one embodiment.
  • One embodiment uses all, less than all, or none of fields 960A-960I.
  • a lookup operation is performed in associative memory entries 904 (e.g., access control list, security, quality of service, accounting entries) in multiple associative memory banks and/or one or more associative memories to generate results 905, based on which, memories 906 generate results 907.
  • Combiner mechanism 910 merges results 907 to produce one or more merged results 911, which are typically used by packet processor 902 in the processing of packets.
  • combiner mechanism 910 includes a processing element responsive to computer-readable medium (e.g., software, firmware, etc.), custom hardware (e.g., circuits, ASICs, etc.) and/or via any other means or mechanism.
  • a merged result 911 includes a counter indication which is used by counters and decoder/control logic 912 to update a value.
  • the accumulated accounting values 913 are typically communicated to packet processor 902 or another device.
  • FIG. 9B illustrates an access control list 915, including access control list entries of multiple features of a same type. For example, entries 916 correspond to security entries, such as indicating whether a packet should be dropped or processed, while entries 917 correspond to packets that should or should not be sent to a mechanism to encrypt the packet.
  • Different associative memories are each programmed with associative memory entries conesponding to a different one of the features.
  • a lookup operation is then performed substantially simultaneously on each of the feature sets of associative memory entries to generate associative memory results, which are then used to perform lookup operations substantially simultaneously in adjunct memories to produce the lookup results which then can be merged to produce the merged result.
  • the respective priorities of the lookup results may be implicit based on that corresponding to their respective associative memory banks and/or adjunct memories, or be specified, such as in the associative memory entries, from another data structure lookup operation, or identified using any other manner or mechanism.
  • one embodiment includes four associative memory banks for supporting one to four features.
  • An associative memory lookup operation is performed in parallel on the four banks and then in the adjunct memories (SRAMs), which indicate the action, type of entry (e.g., ACL, QoS, Accounting), and precedence for combiner mechanism.
  • the combiner mechanism merges the results to get the final merged result.
  • a miss in an ACL lookup in a bank is treated as a permit with lowest precedence. If in more than one bank there is a hit with the same specified precedence in the retrieved adjunct memory entry, the precedence used by the combiner mechanism is determined based on the implied or specified precedence of the associative memory bank. If there is a miss in all the banks, a default result is used from global registers.
  • a similar merge operation is performed for the QoS and accounting lookup results. (A software sketch of this merge follows below.)
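A Python sketch of the ACL merge rules just described, under assumed result encodings: a miss is treated as a permit with the lowest precedence, ties on specified precedence are broken by bank order, and an all-miss case falls back to a default result standing in for the global registers.

```python
from typing import List, Optional, Tuple

DEFAULT_RESULT: Tuple[str, Optional[int]] = ("permit", None)   # stand-in for the global-register default

def merge_acl(results: List[Optional[Tuple[str, int]]]) -> Tuple[str, Optional[int]]:
    """Merge per-bank ACL results into a single (action, winning bank) pair.

    Each element is (action, specified precedence) retrieved from the adjunct
    memory on a hit, or None for a miss. A miss behaves like a permit with the
    lowest possible precedence; a tie on specified precedence falls back to
    bank order (bank 0 assumed highest); if every bank missed, the default applies."""
    if all(r is None for r in results):
        return DEFAULT_RESULT
    best_bank = min(
        range(len(results)),
        key=lambda i: (results[i][1] if results[i] is not None else float("inf"), i),
    )
    return (results[best_bank][0], best_bank)   # best_bank is guaranteed to be a hit here

# Banks 1 and 2 hit with the same specified precedence; bank order breaks the tie.
print(merge_acl([None, ("deny", 5), ("permit", 5), None]))   # ('deny', 1)
print(merge_acl([None, None, None, None]))                   # ('permit', None)
```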
  • FIG. 9C illustrates a lookup and merge mechanism 920 used by one embodiment.
  • One or more of associative memory banks 921A-921C are programmed with associative memory entries of a same access control list type, with different features of the type programmed into a different one of the associative memory banks 921A-921C.
  • Corresponding adjunct memory entries 922A-922C are programmed in one or more adjunct memories.
  • lookup operations can be performed substantially simultaneously on associative memory banks 921A-921C to generate results, which are used to identify corresponding lookup results from adjunct memory entries 922A-922C, which are then merged by combiner mechanism 923 to generate the merged result 924.
  • FIG. 9D is substantially similar to that of FIG. 9C.
  • Lookup and merge mechanism 920, used by one embodiment, is programmed with feature sets of a same type in associative memory banks 931A-931B (there can be any number of banks), and of a different type in associative memory banks 931C-931D (there can be any number of banks).
  • Corresponding adjunct memory entries 932A-932D are programmed into one or more adjunct memories.
  • FIG. 9E illustrates a process used in one embodiment to program the associative and adjunct memories. Processing begins with process block 940, and proceeds to process block 941, wherein an access control list including multiple access control list entries is identified. In process block 942, a first set of the access control list entries corresponding to a first feature of the access control list entries is identified.
  • a first associative memory bank and a first adjunct memory are programmed with entries corresponding to the first set of access control list entries.
  • a second set of the access control list entries corresponding to a second feature of the access control list entries is identified.
  • a second associative memory bank and a second adjunct memory are programmed with entries corresponding to the second set of access control list entries.
  • the first set of associative memory entries has a higher lookup precedence than the second set of associative memory entries. (A software sketch of this per-feature programming follows below.)
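A hypothetical sketch of this per-feature programming step: ACL entries tagged with a feature name are split into per-bank lists, with a feature's position in the supplied order standing in for its bank's lookup precedence. The entry fields and feature names are assumptions, not the patent's format.

```python
from typing import Dict, List, Tuple

AclEntry = Tuple[str, str, int, int]   # (feature, action, value, mask) -- illustrative fields

acl: List[AclEntry] = [
    ("security",   "deny",   0x0A000005, 0x00000000),
    ("security",   "permit", 0x0A000000, 0x000000FF),
    ("encryption", "permit", 0x0B000000, 0x0000FFFF),
]

def program_banks(entries: List[AclEntry], feature_order: List[str]) -> Dict[str, list]:
    """Split ACL entries by feature into per-bank entry lists.

    A feature's position in feature_order stands in for its bank's lookup
    precedence: the first feature's bank has the higher precedence."""
    banks: Dict[str, list] = {feature: [] for feature in feature_order}
    for feature, action, value, mask in entries:
        banks[feature].append((value, mask, action))
    return banks

for feature, bank in program_banks(acl, ["security", "encryption"]).items():
    print(feature, bank)
```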
  • FIG. 9F illustrates a process used by one embodiment to perform lookup operations and to identify the merged result.
  • FIG. 9G illustrates a lookup value 960, result value 965, and merged result value 967 used in one embodiment.
  • lookup value 960 includes a lookup type 960A, source address 960B, destination address 960C, source port 960D, destination port 960E, protocol type 960F, source ASN 960G, destination ASN 960H, and possibly other fields 960I.
  • result value 965 includes a result type 965A, an action or counter indication 965B, and a precedence indication 965C.
  • result value 965 is programmed in the adjunct memories.
  • One embodiment uses all, less than all, or none of fields 965A-965C.
  • merged result value 967 includes a result type 967A and an action or counter indication 967B.
  • One embodiment uses all, less than all, or none of fields 967A-967B. (These field layouts are restated as types in the sketch below.)
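The field layouts of FIG. 9G, restated as types for reference. Field widths and encodings are not specified here; the integer types below are placeholders only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LookupValue:            # value 960
    lookup_type: int          # 960A
    source_address: int       # 960B
    destination_address: int  # 960C
    source_port: int          # 960D
    destination_port: int     # 960E
    protocol_type: int        # 960F
    source_asn: int           # 960G
    destination_asn: int      # 960H
    other: Optional[bytes] = None   # 960I (possibly other fields)

@dataclass
class ResultValue:            # value 965, programmed in the adjunct memories
    result_type: int          # 965A (e.g., ACL, QoS, accounting)
    action_or_counter: int    # 965B
    precedence: int           # 965C

@dataclass
class MergedResultValue:      # value 967
    result_type: int          # 967A
    action_or_counter: int    # 967B
```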
  • FIGs. 9H-9J illustrate merging logic truth tables 970, 972, and 974 for generating the merged result.
  • the merge result of a security lookup operation is illustrated in security combiner logic 970, and is based on the results of up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 970A-970D, with the corresponding merged result shown in column 970E.
  • the "—" entries in the fields indicate a don't care condition, as a merged result corresponding to a higher priority will be selected.
  • the merge result of a Quality of Service (QoS) lookup operation is illustrated in QoS combiner logic 972, and is based on the results of a previously merged security lookup operation and up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 972A-972E, with the corresponding merged result shown in column 972F.
  • the merge result of an accounting lookup operation is illustrated in accounting combiner logic 974, and is based on the results of a previously merged security lookup operation and up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 974A-974E, with the corresponding merged result, possibly identifying a counter to be updated, shown in column 974F.
  • FIG. 9K illustrates a process used in one embodiment to generate a security merged result, a QoS merged result, and an accounting merged result.
  • Processing begins with process block 980, and proceeds to process block 981, wherein a packet is identified.
  • In process block 982, one or more FIB lookup operations are performed to identify source and destination ASNs.
  • In process block 983, a security lookup value is identified.
  • In process block 984, lookup operations are performed based on the security lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple security results, which are merged in process block 985 to identify the merged security result.
  • this merged security result is stored in a data structure or other mechanism for use in identifying the merged QoS and accounting results.
  • In process block 986, the QoS lookup value is identified.
  • In process block 987, lookup operations are performed based on the QoS lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple QoS results, which, in process block 988, are merged along with the previously determined merged security result to identify the merged QoS result.
  • the accounting lookup value is identified.
  • lookup operations are performed based on the accounting lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple accounting results, which, in process block 991, are merged along with the previously determined merged security result to identify the merged accounting result. Also, an identified counter or other accounting mechanism is updated. Processing is complete as indicated by process block 992. (A software sketch of this three-stage merge follows below.)
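One plausible software reading of the FIG. 9K cascade, in which the merged security result also participates in the QoS and accounting merges. The precedence encoding (lower number wins) and the example results are assumptions; the authoritative behavior is defined by the truth tables of FIGs. 9H-9J.

```python
from typing import List, Optional, Tuple

Result = Tuple[str, int]   # (action or counter indication, precedence); lower number wins

def merge(results: List[Optional[Result]], carry: Optional[Result] = None) -> Optional[Result]:
    """Pick the best-precedence result, folding in a previously merged result
    (e.g., the merged security result) when one is supplied."""
    candidates = [r for r in results if r is not None]
    if carry is not None:
        candidates.append(carry)
    return min(candidates, key=lambda r: r[1]) if candidates else None

def classify(security, qos, accounting):
    """Security is merged first; its merged result then participates in the
    QoS and accounting merges, loosely following process blocks 985, 988 and 991."""
    merged_security = merge(security)
    merged_qos = merge(qos, carry=merged_security)
    merged_accounting = merge(accounting, carry=merged_security)
    return merged_security, merged_qos, merged_accounting

# A high-precedence deny from the security lookup dominates the later merges.
print(classify(
    security=[("permit", 2), None, ("deny", 0), None],
    qos=[("dscp_46", 1), None, None, None],
    accounting=[("counter_3", 3), None, None, None],
))
```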

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A set of entries is determined (704) based on a policy map (702) with a force no-hit indication being associated with one or more of the entries (706). Programmable priority indications may be associated with one or more of the entries (708), or with associative memory devices, associative banks, etc. The force no-hit indications are often used in response to identified deny instructions in an access control list or other policy map.

Description

FORCE NO-HIT INDICATIONS FOR CAM ENTRIES BASED ON POLICY MAPS
FIELD OF THE INVENTION One embodiment of an invention especially relates to computer and communications systems, especially network routers and switches; and more particularly, one embodiment relates to associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, one embodiment relates to generating and merging lookup results to apply multiple features, and one embodiment relates to generating accounting data based on access control list entries.
BACKGROUND OF THE INVENTION The communications industry is rapidly changing to adjust to emerging technologies and ever increasing customer demand. This customer demand for new applications and increased performance of existing applications is driving communications network and system providers to employ networks and systems having greater speed and capacity (e.g., greater bandwidth). In trying to achieve these goals, a common approach taken by many communications providers is to use packet switching technology. Increasingly, public and private communications networks are being built and expanded using various packet technologies, such as Internet Protocol (IP). A network device, such as a switch or router, typically receives, processes, and forwards or discards a packet based on one or more criteria, including the type of protocol used by the packet, addresses of the packet (e.g., source, destination, group), and type or quality of service requested. Additionally, one or more security operations are typically performed on each packet. But before these operations can be performed, a packet classification operation must typically be performed on the packet. Packet classification as required for, inter alia, access control lists (ACLs) and forwarding decisions, is a demanding part of switch and router design. The packet classification of a received packet is increasingly becoming more difficult due to ever increasing packet rates and number of packet classifications. For example, ACLs require matching packets on a subset of fields of the packet flow label, with the semantics of a sequential search through the ACL rules. IP forwarding requires a longest prefix match. Known approaches of packet classification include using custom application-specific integrated circuits (ASICs), custom circuitry, software or firmware controlled processors, and associative memories, including, but not limited to binary content-addressable memories (binary CAMs) and ternary content-addressable memories (ternary CAMs or TCAMs). Each entry of a binary CAM typically includes a value for matching against, while each TCAM entry typically includes a value and a mask. The associative memory compares a lookup word against all of the entries in parallel, and typically generates an indication of the highest priority entry that matches the lookup word. An entry matches the lookup word in a binary CAM if the lookup word and the entry value are identical, while an entry matches the lookup word in a TCAM if the lookup word and the entry value are identical in the bits that are not indicated by the mask as being irrelevant to the comparison operations. Associative memories are very useful in performing packet classification operations. In performing a packet classification, it is not uncommon for multiple lookup operations to be performed in parallel or in series using multiple associative memories basically based on a same search key or variant thereof, as one lookup operation might be related to packet forwarding while another related to quality of service determination. Desired are new functionality, features, and mechanisms in associative memories to support packet classification and other applications. Additionally, as with most any system, errors can occur. For example, array parity errors can occur in certain content-addressable memories as a result of failure-in-time errors which are typical of semiconductor devices. Additionally, communications and other errors can occur. 
Prior systems are known to detect certain errors and to signal that some error condition has occurred, but are typically lacking in providing enough information to identify and isolate the error. Desired is new functionality for performing error detection and identification. One problem with performing packet classification is the rate at which it must be performed, especially when multiple features of a certain type are to be evaluated. A prior approach uses a series of lookups to evaluate an action to be taken for each of these features. This approach is too slow, so techniques, such as Binary Decision Diagram (BDD) and Order Dependent Merge (ODM), were used for combining these features so they can be evaluated in a single lookup operation. For example, if there are two ACLs A (having entries Al and A2) and B (having entries Bl and B2, then ODM combines these original lists to produce one of two cross-product equivalent ordered lists, each with four entries: A1B1, A1B2, A2B1, and A2B2; or A1B1, A2B1, A1B2, and A2B2. These four entries can then be programmed into an associative memory and an indication of a corresponding action to be taken placed in an adjunct memory. Lookup operations can then be performed on the associative and adjunct memories to identify a corresponding action to use for a particular packet being processed. There are also variants of ODM and BDD which may filter out the entries which are unnecessary as their values will never allow them to be matched. However, one problem with these approaches is that there can be an explosion of entries generated by these algorithms. A typical worst case would be to multiply the number of items in each feature by each other. Thus, two features of one hundred items each can generate one thousand entries, and if a third feature is considered which also has one hundred items, one million entries could be generated. Desired is a new mechanism for efficiently performing lookup operations which may reduce the number of entries required. A known approach of identifying traffic flows for the purpose of prioritizing packets uses CAMs to identify and "remember" traffic flows allowing a network switch or router to identify packets belonging to that flow, at wire speed, without processor intervention, hi one approach, learning new flows is automatic. Once a flow is identified, the system software assigns the proper priority to the newly identified flow. In each of the cases where learning is necessary (i.e., adding a new connection), the next free address of the device is read out so the system software can keep track of where the new additions are being placed. This way, the system software can efficiently remove these entries when they are no longer active. If aging is not used, the system software would need to keep track of the locations of every entry, and when a session ends, remove the corresponding entries. This is not a real-time issue, so software can provide adequate performance. Additionally, it is possible, even desirable to store timestamp information in the device to facilitate aging and purging of inactive flow identifiers. For a purpose and context different from prioritizing packets, it is desirable to collect statistics about traffic flows (also referred to as "netflows"). 
These statistics can provide the metering base for real-time and post-processing applications including network traffic accounting, usage-based network billing, network planning, network monitoring, outbound marketing, and data mining capabilities for both service provider and enterprise customers. While this approach may work well for systems dealing with a relatively small amount of traffic with thousands of flows, this approach is not very scalable to systems handling larger amounts of data and flows as the collection of data on the raw flows generally produces too much unneeded data and requires a heavy burden on systems to collect all the information, if possible. Desired is a new mechanism for collecting accounting and other data.
SUMMARY OF THE INVENTION Methods and apparatus are disclosed for defining and using associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, for merging lookup results, such as from one or more associative memory banks and/or memory devices, and for generating accounting or other data based on that indicated in an access control list or other specification, and typically using associative memory entries in one or more associative memory banks and/or memory _, devices. hi one embodiment, a set of entries is determined based on a policy map with a force no-hit indication being associated with one or more of the entries. Additionally, programmable priority indications may be associated with one or more of the entries, or with the associative memory devices, associative memory banks, etc. The force no-hit indications are often used in response to identified deny instructions in an access control list or other policy map. A lookup operation is then performed on these associative memory entries, with highest matching result or results identified based on the programmed and/or implicit priority level associated with the entries, or with the associative memory devices, associative memory banks, etc. One embodiment identifies an access control list including multiple access control list entries. A first set of access control list entries corresponding to a first feature of the access control list entries and a second set of access control list entries corresponding to a second feature of the access control list entries are identified. A first associative memory bank is programmed with the first associative memory entries and a second associative memory bank is programmed with the second associative memory entries, with the first associative memory entries having a higher lookup precedence than the second associative memory entries. A lookup value is then identified, such as that based on a packet or other item. Lookup operations are then typically performed substantially simultaneously on the first and second sets of associative memory entries to generate multiple lookup results, with these results typically being identified directly, or via a lookup operation in an adjunct memory or other storage mechanism. These lookup results are then combined to generate a merged lookup result. One embodiment identifies an access control list including multiple access control list entries, with a subset of these access control list entries identifying accounting requests. Accounting mechanisms, such as, but not limited to counters or data structures, are associated with each of said access control list entries in the subset of access control list entries identifying accounting requests. An item is identified. A particular one of the accounting mechanisms corresponding to the item is identified and updated. In one embodiment, the item corresponds to one or more fields of a received packet. In one embodiment, the item includes at least one autonomous system number, said at least one autonomous system number identify a set of communication devices under a single administrative authority. In one embodiment, at least one of the accounting mechanisms is associated with at least two different access control list entries in the subset of access control list entries identifying accounting requests.
BRIEF DESCRIPTION OF THE DRAWINGS The appended claims set forth the features of the invention with particularity. The invention, together with its advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which: FIGs. 1A-E are block diagrams of various exemplary systems including one or more embodiments for performing lookup operations using associative memories; FIG. 2 is a block diagram of an associative memory including one or more embodiments for performing lookup operations; FIGs. 3 A-D illustrate various aspects of a control used in one embodiment for performing lookup operations; FIGs. 4A-G illustrate various aspects of an associative memory block used in one embodiment for performing lookup operations; FIGs. 5A-C illustrate various aspects of an output selector used in one embodiment for performing lookup operations; FIGs. 6A-B illustrate an exemplary policy map and resultant associative memory entries; FIG. 6C illustrates a data structure for indicating priority of associative memories, blocks, or entries used in one embodiment; FIG. 7A illustrates a process for programming associative memory entries used in one embodiment; FIG. 7B illustrates a process for identifying a highest priority result used in one embodiment; FIGs. 8A-G illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for updating counters or other accounting devices, or for performing other functions; and FIGs. 9A-K illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for generating merged results or for performing other functions. DETAILED DESCRIPTION Methods and apparatus are disclosed for defining and using associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices, for generating and merging lookup results to apply multiple features, for generating accounting or other data based on that indicated in an access control list or other specification, and for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions. Embodiments described herein include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recites an aspect of the invention in its entirety. Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable medium containing instructions. One or multiple systems, devices, components, etc. may comprise one or more embodiments, which may include some elements or limitations of a claim being performed by the same or different systems, devices, components, etc. 
The embodiments described hereinafter embody various aspects and configurations within the scope and spirit of the invention, with the figures illustrating exemplary and non-limiting configurations. As used herein, the term "packet" refers to packets of all types or any other units of information or data, including, but not limited to, fixed length cells and variable length packets, each of which may or may not be divisible into smaller packets or cells. The term "packet" as used herein also refers to both the packet itself or a packet indication, such as, but not limited to all or part of a packet or packet header, a data structure value, pointer or index, or any other part or identification of a packet. Moreover, these packets may contain one or more types of information, including, but not limited to, voice, data, video, and audio information. The term "item" is used generically herein to refer to a packet or any other unit or piece of information or data, a device, component, element, or any other entity. The phrases "processing a packet" and "packet processing" typically refer to performing some steps or actions based on the packet contents (e.g., packet header or other fields), and such steps or action may or may not include modifying, storing, dropping, and/or forwarding the packet and/or associated data. The term "system" is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof. The term "computer" is used generically herein to describe any number of computers, including, but not limited to personal computers, embedded processing elements and systems, control logic, ASICs, chips, workstations, mainframes, etc. The term "processing element" is used generically herein to describe any type of processing mechanism or device, such as a processor, ASIC, field programmable gate array, computer, etc. The term "device" is used generically herein to describe any type of mechanism, including a computer or system or component thereof. The terms "task" and "process" are used generically herein to describe any type of rumήng program, including, but not limited to a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or standalone application, and is not limited to any particular memory partitioning technique. The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to any block and flow diagrams and message sequence charts, may be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections and be combined with other functions in other embodiments in keeping within the scope and spirit of the invention. Furthermore, the term "identify" is used generically to describe any manner or mechanism for directly or indirectly ascertaining something, which may include, but is not limited to receiving, retrieving from memory, determining, defining, calculating, generating, etc. 
Moreover, the terms "network" and "communications mechanism" are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc. The term "message" is used generically herein to describe a piece of information which may or may not be, but is typically communicated via one or more communication mechanisms of any type. The term "storage mechanism" includes any type of memory, storage device or other mechanism for maintaining instructions or data in any format. "Computer-readable medium" is an extensible term including any memory, storage device, storage mechanism, and other storage and signaling mechanisms including interfaces and devices such as network interface cards and buffers therein, as well as any communications devices and signals received and transmitted, and other current and evolving technologies that a computerized system can interpret, receive, and/or transmit. The term "memory" includes any random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components or elements. The term "storage device" includes any solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Memories and storage devices may store computer-executable instructions to be executed by a processing element and/or control logic, and data which is manipulated by a processing element and/or control logic. The term "data structure" is an extensible term referring to any data element, variable, data structure, database, and/or one or more organizational schemes that can be applied to data to facilitate interpreting the data or performing operations on it, such as, but not limited to memory locations or devices, sets, queues, trees, heaps, lists, linked lists, arrays, tables, pointers, etc. A data structure is typically maintained in a storage mechanism. The terms "pointer" and "link" are used generically herein to identify some mechanism for referencing or identifying another element, component, or other entity, and these may include, but are not limited to a reference to a memory or other storage mechanism or location therein, an index in a data structure, a value, etc. The term "associative memory" is an extensible term, and refers to all types of known or future developed associative memories, including, but not limited to binary and ternary content addressable memories, hash tables, TRIE and other data structures, etc. Additionally, the term "associative memory unit" may include, but is not limited to one or more associative memory devices or parts thereof, including, but not limited to regions, segments, banks, pages, blocks, sets of entries, etc. The term "one embodiment" is used herein to reference a particular embodiment, wherein each reference to "one embodiment" may refer to a different embodiment, and the use of the term repeatedly herein in describing associated features, elements and/or limitations does not establish a cumulative set of associated features, elements and/or limitations that each and every embodiment must include, although an embodiment typically may include all these features, elements and/or limitations. 
In addition, the phrase "means for xxx" typically includes computer-readable medium containing computer-executable instructions for performing xxx. In addition, the terms "first," "second," etc. are typically used herein to denote different units (e.g., a first element, a second element). The use of these terms herein does not necessarily connote an ordering such as one unit or event occurring or coming before another, but rather provides a mechanism to distinguish between particular units. Additionally, the use of a singular tense of a noun is non-limiting, with its use typically including one or more of the particular thing rather than just one (e.g., the use of the word "memory" typically refers to one or more memories without having to specify "memory or memories," or "one or more memories" or "at least one memory", etc.). Moreover, the phrases "based on x" and "in response to x" are used to indicate a minimum set of items x from which something is derived or caused, wherein "x" is extensible and does not necessarily describe a complete list of items on which the operation is performed, etc. Additionally, the phrase "coupled to" is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modifying or not modifying the coupled signal or communicated information. The term "subset" is used to indicate a group of all or less than all of the elements of a set. The term "subtree" is used to indicate all or less than all of a tree. Moreover, the term "or" is used herein to identify a selection of one or more, including all, of the conjunctive items. , Methods and apparatus are disclosed for defining and using associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices. In one embodiment, a set of entries is determined based on a policy map with a force no-hit indication being associated with one or more of the entries. Additionally, programmable priority indications may be associated with one or more of the entries, or with the associative memory devices, associative memory banks, etc. The force no-hit indications are often used in response to identified deny instructions in an access control list or other policy map. A lookup operation is then performed on these associative memory entries, with highest matching result or results identified based on the programmed and/or implicit priority level associated with the entries, or with the associative memory devices, associative memory banks, etc. 
Methods and apparatus are disclosed for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions, hi one embodiment, each block retrieves a modification mapping from a local memory and modifies a received search key based on the mapping and received modification data, h one embodiment, each of the associative memory entries includes a field for indicating that a successful match on the entry should or should not force a no-hit result, one embodiment, an indication of which associative memory sets or banks or entries to use in a particular lookup operation is retrieved from a memory. One embodiment performs error detection and handling by identifying, handling and communication errors, which may include, but is not limited to array parity errors in associative memory entries and communications errors such as protocol errors and interface errors on input ports. Array parity errors can occur as a result of failure-in-time enors which are typical of semiconductor devices. One embodiment includes a mechanism to scan associative memory entries in background, and to identify any detected errors back to a control processor for re-writing or updating the flawed entry, hi one embodiment, certain identified errors or received error conditions are of a fatal nature in which no processing should be performed. For example, in one embodiment, a fatal enor causes an abort condition, hi response, the device stops an in-progress lookup operation and just forwards enor and possibly no-hit signals. Typically, these signals are generated at the time the in-progress lookup operation would have generated its result had it not been aborted so as to maintain timing among devices in a system including the associative memory. hi one embodiment, including cascaded or connected associative memory devices, enor status messages indicating any enor type and its conesponding source are propagated to indicate the enor status to the next device and/or a control processor. In addition, the communicated signal may indicate and generate an abort condition in the receiving device. In one embodiment, the receiving device does not perform its next operation or the received instruction, or it may abort its current operation or instruction. Moreover, the receiving device may or may not delay a time amount conesponding to that which its processing would have required in performing or completing the operation or instruction so as to possibly maintain the timing of a transactional sequence of operations. One embodiment generates accounting or other data based on that indicated in an access control list or other specification, and typically using associative memory entries in one or more associative memory banks and/or memory devices. One embodiment identifies an access control list including multiple access control list entries, with a subset of these access control list entries identifying accounting requests. Accounting mechanisms, such as, but not limited to counters or data structures, are associated with each of said access control list entries in the subset of access control list entries identifying accounting requests. 
An item is identified. A particular one of the accounting mechanisms conesponding to the item is identified and updated, hr one embodiment, the item conesponds to one or more fields of a received packet. In one embodiment, the item includes at least one autonomous system number, said at least one autonomous system number identify a set of communication devices under a single administrative authority. In one embodiment, at least one of the accounting mechanisms is associated with at least two different access control list entries in the subset of access control list entries identifying accounting requests. One embodiment merges lookup results, such as from one or more associative memory banks and/or memory devices. One embodiment identifies an access control list including multiple access control list entries. A first set of access control list entries conesponding to a first feature of the access control list entries and a second set of access control list entries conesponding to a second feature of the access control list entries are identified. A first associative memory bank is programmed with the first associative memory entries and a second associative memory bank is programmed with the second associative memory entries, with the first associative memory entries having a higher lookup precedence than the second associative memory entries. A lookup value is then identified, such as that based on a packet or other item. Lookup operations are then typically performed substantially simultaneously on the first and second sets of associative memory entries to generate multiple lookup results, with these results typically being identified directly, or via a lookup operation in an adjunct memory or other storage mechanism. These lookup results are then combined to generate a merged lookup result. FIGs. 1A-E are block diagrams of various exemplary systems and configurations thereof, with these exemplary systems including one or more embodiments for performing lookup operations using associative memories. First, FIG. 1 illustrates one embodiment of a system, which may be part of a router or other communications or computer system, for performing lookup operations to produce results which can be used in the processing of packets, hi one embodiment, control logic 110, via signals 111, programs and updates associative memory or memories 115, such as, but not limited to one or more associative memory devices, banks, and/or sets of associative memory entries which may or may not be part of the same associative memory device and/or bank, hi one embodiment, control logic 110 also programs memory 120 via signals 123. In one embodiment, control logic 110 includes custom circuitry, such as, but not limited to discrete circuitry, ASICs, memory devices, processors, etc. In one embodiment, packets 101 are received by packet processor 105. In addition to other operations (e.g., packet routing, security, etc.), packet processor 105 typically generates one or more items, including, but not limited to one or more packet flow identifiers based on one or more fields of one or more of the received packets 101 and possibly from information stored in data structures or acquired from other sources. Packet processor 105 typically generates a lookup value 103 which is provided to control logic 110 for providing control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memory or memories 115, which perform lookup operations and generate one or more results 117. 
In one embodiment, a result 117 is used is by memory 120 to produce a result 125. Control logic 110 then relays result 107, based on result 117 and/or result 125, to packet processor 105. In response, one or more of the received packets are manipulated and forwarded by packet processor 105 as indicated by packets 109. Note, results 117, 125 and 107 may include indications of enor conditions. FIG. IB illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating enor conditions. Control logic 130, via signals 132, programs associative memory or memories 136. In addition, control logic 130 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memory or memories 136, which perform lookup operations to generate results and enor signals 134, which are received by control logic 130. FIG. 1C illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating enor conditions. Control logic 140, via signals 141-143, programs associative memories 146-148. In addition, control logic 140 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memories 146-148, which perform lookup operations to generate results and enor signals 144-145. As shown each progressive stage forwards enor messages to a next associative memory stage or to control logic 140. For example, associative memory 148 relays received enor indications via signals 144 via signals 145 to control logic 140. Moreover, in one embodiment, a synchronization bit field is included in messages 141-145 sent between devices 140 and 146-148, with the value being set or changed at predetermined periodic intervals such that each device 140, 146-148 expects the change. One embodiment uses a single synchronization bit, and if this bit is set in the request or input data 141-145 to a device 146-148, then the device 146-148 will set this bit in the conesponding reply or output data 143-145. For example, in one embodiment, control processor or logic 140 sets the sync bit in its request data 141 periodically, say once in every eight requests. Control processor or logic 140 also monitors the sync bit in the reply data 145. If any kind of enor altered the request-reply association (or transaction timing) between the control processor or logic 140 and the associative memories 146-148, then control processor or logic 140 can detect it and recover from that enor (by flushing the pipeline, etc.) In this manner, devices, especially those as part of a transactional sequence, can synchronize themselves with each other. 
Resynchronization of devices may become important, for example, should an enor condition occur, such as an undetected parity enor in a communicated instruction signal (e.g., the number of parity enors exceed the enor detection mechanism). There is a possibility that a parity enor in an instruction goes undetected and that completely changes the transaction timing. Also, there could be other types of "unknown" enors that can put the control processor or logic and the associative memory chain out of synchronization. FIG. ID illustrates one embodiment for performing lookup operations using associative memories, including, but not limited to modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating enor conditions. Control logic 150, via signals 151-153, programs associative memories 156-158. hi addition, control logic 150 provides control and data information (e.g., lookup words, modification data, profile IDs, etc.) to associative memories 156-158, which perform lookup operations to generate results and enor signals 154-155 which are communicated to control logic 150. FIG. IE illustrates a system 180, which maybe part of a router or other communications or computer system, used in one embodiment for distributing entries among associative memory units and selectively enabling less than all of the associative memory units when performing a lookup operation, h one embodiment, system 180 includes a processing element 181, memory 182, storage devices 183, one or more associative memories 184, and an interface 185 for connecting to other devices, which are coupled via one or more communications mechanisms 189 (shown as a bus for illustrative purposes). Various embodiments of system 180 may include more or less elements. The operation of system 180 is typically controlled by processing element 181 using memory 182 and storage devices 183 to perform one or more tasks or processes, such as programming and performing lookup operations using associative memory or memories 184. Memory 182 is one type of computer-readable medium, and typically comprises random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components. Memory 182 typically stores computer-executable instructions to be executed by processing element 181 and/or data which is manipulated by processing element 181 for implementing functionality in accordance with one embodiment of the invention. Storage devices 183 are another type of computer-readable medium, and typically comprise solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Storage devices 183 typically store computer-executable instructions to be executed by processing element 181 and/or data which is manipulated by processing element 181 for implementing functionality in accordance with one embodiment of the invention. In one embodiment, processing element 181 provides control and data information
(e.g., lookup words, modification data, profile IDs, etc.) to associative memory or memories 184, which perform lookup operations to generate lookup results and possibly error indications, which are received and used by processing element 181 and/or communicated to other devices via interface 185.

FIG. 2 illustrates an associative memory 200 used in one embodiment for performing lookup operations using associative memories, including, but not limited to, modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry including a force no-hit indication, selecting among various associative memory blocks or sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions. As shown, control logic 210 receives input control signals 202, which may include programming information. In turn, control logic 210 may update information and data structures within itself, program/update associative memory blocks 218-219, and/or output selectors 231-232. Note, in one embodiment, each of the associative memory blocks 218-219 includes one or more sets or banks of associative memory entries, and logic or circuitry for performing lookup operations.

In one embodiment, input data 201, which may include, but is not limited to, search keys and modification data, is received by associative memory 200 and distributed to associative memory blocks 218-219, and possibly forwarded to other downstream associative memories in a cascaded configuration. In addition, input control information 202, which may include, but is not limited to, profile IDs (e.g., a value), instructions, and programming information, is received by control logic 210, and possibly forwarded to other downstream associative memories in a cascaded configuration. In addition, in one embodiment, previous stage lookup results and/or error indications are received from previous stage associative memories in a cascaded configuration or from other devices by control logic 210. Note, in one embodiment, input data 201, input control 202, previous stage results and errors 203, and/or portions thereof are communicated directly to associative memory blocks 218-219 and/or output selectors 231-232. Control logic 210 possibly processes and/or forwards the received information via block control signals 211-212 to associative memory blocks 218-219 and via selector control signals and previous stage results 215 (which typically include the received profile ID) to output selectors 231-232. In addition, control logic 210 may generate error signals 216 based on a detected error in the received information or in response to received error condition indications. Note, in one embodiment, control logic 210 merely splits or regenerates a portion of or the entire received input control 202 and optional previous stage results and errors 203 signals as selector control signals and previous stage results signals 215 and/or error signals 216. In addition, control logic 210 could initiate an abort operation wherein a lookup operation will not occur because of a detected error condition or a received notification of an error condition.
In one embodiment, control logic 210 identifies data representing which associative memory blocks 218-219 to enable, which associative memory blocks 218-219 each output selector 231-232 should consider in determining its lookup result, and/or modification mappings each associative memory block 218-219 should use in modifying an input search key. In one embodiment, this data is retrieved, based on received input control information 202 (e.g., a profile ID or other indication), from one or more memories, data structures, and/or other storage mechanisms. This information is then communicated as appropriate to associative memory blocks 218-219 via block control signals 211-212, and/or to output selectors 231-232 via selector control signals and previous stage results signals 215.

In one embodiment, associative memory blocks 218-219 each receive a search key and possibly modification data via signal 201, and possibly control information via block control signals 211-212. Each enabled associative memory block 218-219 then performs a lookup operation based on the received search key, which may include generating a lookup word by modifying certain portions of the search key based on received modification data and/or modification mappings. Each associative memory block 218-219 typically generates a result 228-229, which are each communicated to each of the output selectors 231-232. In one embodiment, each associative memory block 218-219 that is not enabled generates a no-hit signal as its corresponding result 228-229. In one embodiment, output selectors 231-232 receive an indication of the associative memory blocks 218-219 that are not enabled. Output selectors 231-232 evaluate associative memory results 228-229 to produce results 240. In one embodiment, each output selector has a corresponding identified static or dynamic subset of the associative memory results 228-229 to evaluate in determining results 240. In one embodiment, an identification of this corresponding subset is provided to each output selector 231-232 via selector control signals 215. In one embodiment, each of the output selectors 231-232 receives a profile ID via selector control signals 215 and performs a memory lookup operation based on the received profile ID to retrieve an indication of the particular associative memory results 228-229 to evaluate in determining results 240.

Moreover, in one embodiment, results 240 are exported over one or more output buses 240, each typically connected to a different set of one or more pins of a chip of the associative memory. In one embodiment, the number of output buses used and their connectivity to output selectors 231-232 are static, while in one embodiment the number of output buses used and their connectivity to output selectors 231-232 are configurable, for example, at initialization or on a per-lookup or multiple-lookup basis. In one embodiment, an output bus indication is received by an output selector 231-232, which uses the output bus indication to determine which output bus or buses to use. For example, this determination could include, but is not limited to, a direct interpretation of the received output bus indication, performing a memory read operation based on the received output bus indication, etc. In one embodiment, an output selector 231-232 performs a memory access operation based on a profile ID to determine which output bus or buses to use for a particular lookup operation.
Thus, depending on the configuration, a single output bus / pin or multiple output buses / pins can selectively be used to communicate results 240, with this decision possibly being made based on the tradeoff of receiving multiple results simultaneously versus the number of pins required.

Associative memory 200 provides many powerful capabilities for simultaneously producing one or more results 240. For example, in one embodiment, based on a received profile ID, control logic 210 identifies which of the one or more associative memory blocks 218-219 to enable and then enables them, and provides the profile ID to output selectors 231-232 for selecting a lookup result among the multiple associative memory blocks 218-219. Each of the associative memory blocks 218-219 may receive/identify a modification mapping based on the profile ID, with this modification mapping possibly being unique to itself. This modification mapping can then be used in connection with received modification data to change a portion of a received search key to produce the actual lookup word to be used in the lookup operation. Also, certain entries may be programmed with force no-hit indications to generate a no-hit result for the corresponding associative memory block 218-219 should a corresponding entry be identified as the highest priority entry matching the lookup word. Each of these enabled associative memory blocks 218-219 typically generates a result (e.g., a no-hit, or a hit with the highest priority matching entry or location thereof identified), which is typically communicated to each of the output selectors 231-232. Note, in one embodiment, the results are only communicated to the particular output selectors 231-232 which are to consider the particular result in selecting their respective highest priority result received from associative memory blocks 218-219 and possibly other lookup results from previous stage associative memories. Additionally, in certain configurations, multiple associative memories 200 are cascaded or coupled in other ways so that results from one or more stages may depend on previous stage results, such that a lookup can be programmed to be performed across multiple associative memories 200. These and other constructs provided by associative memory 200 and configurations thereof provide powerful programmable lookup search capabilities and result selection mechanisms using one or more stages of associative memories 200, each including N associative memory blocks 218-219 and M output selectors 231-232. In one embodiment, the actual values of N and M may vary among associative memories 200.

FIG. 3A illustrates a control 300 (which may or may not correspond to control logic 210 of FIG. 2) of an associative memory used in one embodiment. As shown, control 300 includes control logic 310 and memory 311. In one embodiment, programming signals 303 are received, and in response, one or more data structures in memory 311 are updated. In addition, control logic 310 generates programming signals 318. In one embodiment, programming signals 318 are the same as programming signals 303, and thus a physical connection can be used rather than passing through control logic 310. One embodiment of a programming process is illustrated in FIG. 3C, in which processing begins with process block 380. Processing then proceeds to process block 382, wherein programming signals are received. Next, in process block 384, data structures and other elements (e.g., associative memory blocks, output selectors, etc.) are updated.
Processing is completed as indicated by process block 386.

Returning to FIG. 3A, in performing a lookup operation, input data 301, input control 302, and optionally previous stage results and errors 304 (such as in a cascaded associative memory configuration) are received by control logic 310. In response, one or more data structures in memory 311 are referenced. Control logic 310 generates input data 314, block control signals 315, output selector control signals and (optionally) previous stage results 316, and possibly an error signal 319 indicating a detected error condition or a received error indicator. In one embodiment, input data 314 is the same as input data 301, and thus a physical connection can be used rather than passing through control logic 310.

FIG. 3B illustrates one set of data structures used in one embodiment. Enable array 320 is programmed with an associative memory block enable indicator 325 for each profile ID 321 to be used. Each associative memory block enable indicator 325 identifies which associative memory blocks are to be enabled for a given lookup operation. In one embodiment, associative memory block enable indicator 325 includes a programmable priority level indication for use in identifying which result should be used from results from multiple blocks and/or previous stages. Thus, based on a profile ID 321 received via input control 302 (FIG. 3A), enable array 320 can be retrieved from memory 311 (FIG. 3A), which can then be used to generate associative memory block enable signals (and priority indications) included in block control signals 315 (FIG. 3A). In one embodiment, associative memory block enable indicator 325 is a bitmap data structure, while in one embodiment, associative memory block enable indicator 325 is a list, set, array, or any other data structure.

Output selector array 330 is programmed with an output selector ID 335 identifying which output selector, such as, but not limited to, output selectors 231-232 (FIG. 2), for each tuple (profile ID 331, associative memory block ID 332). Thus, based on a profile ID 331 received via input control 302 (FIG. 3A), an output selector ID 335 can be identified for each associative memory block ID 332. In one embodiment, output selector ID 335 is a numeric identifier, while in one embodiment, output selector ID 335 is any value or data structure.

Modification mapping array 340 is programmed with a modification mapping 345 for each tuple (profile ID 341, output selector ID 342). Thus, based on a profile ID 341 received via input control 302 (FIG. 3A), a modification mapping 345 can be identified for each output selector ID 342. In one embodiment, each modification mapping is a data structure identifying how to modify a received search key with received modification data.

FIG. 3D illustrates a process used in one embodiment for initiating a lookup operation. Processing begins with process block 360, and proceeds to process block 362, wherein input data and control signals are received. Next, in process block 364, any previous stage results and error indications are received. As determined in process block 366, if an abort operation should be performed, such as, but not limited to, in response to a received fatal error indication or an identified fatal error condition, then processing proceeds to process block 374 (discussed hereinafter). Otherwise, in process block 368, the enable bitmap, output selector configuration, and modification mappings are retrieved based on the profile ID.
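As a rough illustration of the profile-indexed tables of FIG. 3B and of the retrieval in process block 368, the following sketch uses ordinary dictionaries; the field names and example values are invented for illustration and do not come from the embodiment.

# Illustrative profile-indexed tables (all names and values invented):
enable_array = {
    0: 0b0011,                 # profile 0 enables associative memory blocks 0 and 1
    1: 0b1100,                 # profile 1 enables blocks 2 and 3
}
output_selector_array = {
    (0, 0): 0, (0, 1): 0,      # profile 0: blocks 0 and 1 feed output selector 0
    (1, 2): 1, (1, 3): 1,      # profile 1: blocks 2 and 3 feed output selector 1
}
modification_mapping_array = {
    (0, 0): {"source_bytes": [0, 1], "dest_bytes": [6, 7]},
    (1, 1): {"source_bytes": [0], "dest_bytes": [0]},
}

def control_info_for_profile(profile_id, num_blocks):
    """Return the enabled blocks and, per block, the output selector to notify."""
    bitmap = enable_array.get(profile_id, 0)
    enabled = [b for b in range(num_blocks) if (bitmap >> b) & 1]
    selectors = {b: output_selector_array.get((profile_id, b)) for b in enabled}
    return enabled, selectors

print(control_info_for_profile(0, 4))   # -> ([0, 1], {0: 0, 1: 0})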
Next, in process block 370, data and control signals based on the retrieved and received information are forwarded to the associative memory blocks and output selectors. As determined in process block 372, if an error condition is identified or has been received, then in process block 374, an error indication, typically including an indication of the error type and its source, is generated or forwarded. Processing is complete as indicated by process block 376.

FIG. 4A illustrates an associative memory block 400 used in one embodiment. Associative memory block 400 typically includes control logic 410 and associative memory entries, global mask registers, operation logic and priority encoder 412 (e.g., elements for performing the associative memory match operation on a received lookup word). In one embodiment, sets of associative memory entries are grouped into banks of associative memory entries. In one embodiment, programming signals 401 are received, and in response, one or more associative memory entries and/or global mask registers in block 412 are updated. In one embodiment, an associative memory block 400 corresponds to a set or bank of associative memory entries and a mechanism for performing a lookup operation on the set or bank of associative memory entries to produce one or more results. In one embodiment, no mask register is included in associative memory block 400. Moreover, one embodiment of associative memory block 400 includes a memory
413 for storing configuration information, which may allow an associative memory block 400 to retrieve the information from memory 413 rather than receive it from another source. For example, in one embodiment, modification mapping data (e.g., modification mapping 345 of FIG. 3B) or other information is programmed into memory 413. Then, associative memory block 400 retrieves the modification mapping information, such as based on a received profile ID (e.g., rather than receiving the modification mapping signal 404). Additionally, in one embodiment, a search key 402, modification data 403, modification mapping 404, an enable signal 405, a global mask enable signal 406, and a global mask select signal 407 are received. In response to performing a lookup operation and/or detecting an error condition, such as a parity fault in one of the associative memory entries, result and error indications 411 are generated. In one embodiment, associative memory entries are checked for parity errors in the background. The use of these signals and information in one embodiment is further described in relation to FIGs. 4B-4G.

Turning to FIG. 4B, one embodiment includes multiple global mask registers 415 for use in a lookup operation on associative memory entries 416. Global mask enable signal 406 enables the use of a global mask register, while global mask select 407 identifies which of multiple masks to apply to each of the associative memory entries. Lookup word
414 is applied to associative memory entries 416, possibly using one or more global masks stored in global mask registers 415, to generate hit/no-hit indication 417 and possibly hit location 418 and/or error indication 419, which are incorporated directly or indirectly into result and error indications 411 (FIG. 4A).

FIG. 4C illustrates an error indication 420 used in one embodiment. As shown, error indication 420 includes an error indication 421 identifying whether any, and possibly how many, error indications are included therein. For any identified error condition or received error indication, an encoded description of each error is included in one or more of the error descriptors 422-423. In one embodiment, a bitmap is used in one or more of error descriptors 422-423, wherein each bit represents a possible error condition, and the value of the bit indicates whether or not a corresponding error has been identified (including one received from a prior component or stage). In one embodiment, each error descriptor 422-423 corresponds to a different component, interface, or previous stage. In one embodiment, error indication 420 is used by other components in communicating error conditions or the lack thereof.

FIG. 4D illustrates an associative memory entry 430 used in one embodiment. As shown, associative memory entry 430 includes a value 431, an optional mask 432, force no-hit indication 433, valid/invalid flag 434, and an error detection value 435. Error detection value 435 may be one or more parity bits, a cyclic redundancy checksum value, or a value corresponding to any other mechanism used for detecting data corruption errors. In one embodiment, value 431 is of a configurable width. In one embodiment, this configurable width includes 80 bits, 160 bits and 320 bits. In one embodiment, such as that of a binary content-addressable memory, no mask field 432 is included. In one embodiment, the width of mask field 432 is variable, and typically, although not required, matches the width of value field 431. In one embodiment, fields 431-435 are stored in a single physical memory; while in one embodiment, fields 431-435 are stored in multiple physical memories.

FIG. 4E illustrates a mechanism used in one embodiment to modify a search key based on a modification mapping and modification data. As shown, a modification mapping bit 443 is used to control selector 440, which selects either search key unit (e.g., one or more bits, bytes, etc.) 441 or modification data unit 442 as the value for lookup unit 445, which is typically a portion of the actual lookup word to be used in matching associative memory entries in a lookup operation.

FIG. 4F illustrates a mechanism used in one embodiment to modify a search key 456 based on modification mapping 450 and modification data 454. In one embodiment, modification mapping 450 corresponds to a modification mapping 345 (FIG. 3B). As shown in FIG. 4F, modification mapping 450 includes a source portion 451 and a destination portion 452. Referring to the lower portion of FIG. 4F, modification data 454 includes four bytes and search key 456 includes eight bytes. The source portion 451 of modification mapping 450 identifies which bytes of modification data 454 are to be used in generating lookup word 458, and the destination portion 452 of modification mapping 450 identifies where the corresponding bytes of modification data 454 are to be placed in lookup word 458, with the remaining bytes coming from search key 456.
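A minimal software sketch of the byte-replacement of FIG. 4F follows, assuming the mapping simply lists which modification-data bytes are used and where they land in the lookup word; the function name and byte widths are illustrative.

def build_lookup_word(search_key: bytes, modification_data: bytes,
                      source_bytes, dest_bytes) -> bytes:
    """Replace selected bytes of the search key with selected modification-data bytes."""
    lookup_word = bytearray(search_key)
    for src, dst in zip(source_bytes, dest_bytes):
        lookup_word[dst] = modification_data[src]
    return bytes(lookup_word)

# Example: an eight-byte search key, four bytes of modification data, and a mapping
# that places modification bytes 0 and 1 into lookup-word bytes 6 and 7.
key = bytes(range(8))
mod = bytes([0xAA, 0xBB, 0xCC, 0xDD])
word = build_lookup_word(key, mod, source_bytes=[0, 1], dest_bytes=[6, 7])
assert word == bytes([0, 1, 2, 3, 4, 5, 0xAA, 0xBB])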
In other words, modification mapping 450 and modification data 454 are used to replace certain specified data units in search key 456 in producing the value which will be used in matching the associative memory entries. Of course, various embodiments use different numbers of bits and bytes for modification mapping 450 and modification data 454. In one embodiment, modification mapping 450 includes an indication of the portion of search key 456 to modify (e.g., the value of J in one embodiment, the high-order bytes, the low-order bytes, etc.).

FIG. 4G illustrates an associative memory process used in one embodiment in performing a lookup operation. Processing begins with process block 470, and proceeds to process block 472. If the associative memory is not enabled, then processing proceeds to process block 490, wherein a result with a no-hit indication is generated, and processing continues to process block 484. Otherwise, in process block 474, the lookup word is determined, typically based on the search key, modification mapping, and modification data. Note, in one embodiment, the search key is used as the lookup word and there is no concept of a modification mapping or modification data. Next, in process block 476, the lookup word is used to match the associative memory entries with consideration of a selected and enabled global mask, if any. Note, in one embodiment, there is no concept of a global mask. As determined in process block 478, if at least one match has been identified, then processing proceeds to process block 480; otherwise, processing proceeds to process block 490, wherein a result with a no-hit indication is generated and processing proceeds to process block 484. As determined in process block 480, if the highest priority matching entry includes a force no-hit indication, then processing proceeds to process block 490, wherein a result with a no-hit indication is generated and processing proceeds to process block 484. Otherwise, in process block 482, a result indicating a hit (i.e., a successful match) with the highest priority matching entry identified is generated. In process block 484, the result is communicated to at least the identified output selector or selectors. In one embodiment, the output selector to which to communicate the result is identified by output selector ID 335 (FIG. 3B). As determined in process block 486, if an error condition has been identified or received, then in process block 492, a signal is generated indicating the type and location of the error. In one embodiment, error indication 420 (FIG. 4C) is used. Processing is complete as indicated by process block 499.

FIG. 5A illustrates an output selector 500 (which may or may not correspond to an output selector 231-232 of FIG. 2) used in one embodiment. As shown, output selector 500 includes control logic 510 and memory 511. In one embodiment, programming signals 504 are received, and in response, one or more data structures in memory 511 are updated.

FIG. 5B illustrates one data structure used in one embodiment. Available array 520 is programmed, for each profile ID 521 to be used, with an indicator 525 of the associative memory blocks and optionally previous stage results available for use. Each indicator 525 identifies which, if any, associative memory blocks, sets of entries, or associative memory banks are to be considered in determining which matching associative memory entry to select as the ultimate highest-priority matching associative memory entry.
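Referring back to the per-block lookup flow of FIG. 4G, the following is a condensed software model; the entry layout and the assumption that entry order equals priority order are illustrative simplifications, not the hardware implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    value: int
    mask: int                     # 1 bits are "care" bits of this ternary entry
    force_no_hit: bool = False
    valid: bool = True

def block_lookup(entries, lookup_word: int, global_mask: Optional[int] = None):
    """Return the block result; entries are assumed to be in priority order."""
    for index, e in enumerate(entries):
        if not e.valid:
            continue
        care = e.mask if global_mask is None else e.mask & global_mask
        if (lookup_word ^ e.value) & care == 0:
            if e.force_no_hit:
                return {"hit": False}          # highest-priority match forces a no-hit
            return {"hit": True, "location": index}
    return {"hit": False}                      # no matching entry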
In one embodiment, indicator 525 further identifies which previous stage results to consider. In one embodiment, a priority level is associated with each of the banks and/or previous stage results. Thus, based on a profile ID 521 received via selector control signal 501 (FIG. 5A), available array 520 can be retrieved from memory 511 (FIG. 5A). In one embodiment, there is an implied priority ordering of associative memory blocks and any previous stage results, while in one embodiment this priority ordering for determining the ultimate highest-priority matching entry is programmable and/or variable per lookup operation. In one embodiment, the available-for-use indicator 525 is a bitmap data structure, while in one embodiment, indicator 525 is a list, set, array, or any other data structure.

Returning to FIG. 5A, in the performance of a lookup operation, output selector 500 receives selector control signal 501, which may include a profile ID. In addition, output selector 500 receives any relevant previous stage results 502 and results 503 from zero or more of the associative memory blocks, from which the highest-priority entry will be selected and which, if any, will be identified in generated result 515. Moreover, in one embodiment, selector control signal 501 includes an enable indication having an enabled or not-enabled value, such that when a not-enabled value is received, output selector 500 is not enabled and does not select among results 503 from blocks 1-N or optional previous stage results 502. In one embodiment, when not enabled, output selector 500 generates a result signal 515 indicating a no-hit, not-enabled, or some other predetermined or floating value. Additionally, in one embodiment, result 515 is communicated over a fixed output bus, which may or may not be multiplexed with other results 515 generated by other output selectors 500. In one embodiment, the associative memory may include one or more output buses, each typically connected to a single pin of a chip of the associative memory, with the selection of a particular output bus possibly being hardwired or configurable, with the configuration possibly being on a per-lookup basis, such as determined from a received value or from configuration information retrieved from a memory (e.g., based on the current profile ID). In such a configuration, control logic 510 (or another mechanism) typically selects which output bus (and the timing of sending result 515) to use for a particular result 515 or for all results 515.

A process used in one embodiment for receiving and selecting a highest-priority associative memory entry, if any, is illustrated in FIG. 5C. Processing begins with process block 540, and proceeds to process block 542, wherein the results from the associative memory blocks and the profile ID are received. In process block 544, the set of associative memory blocks to consider in determining the result is retrieved from a data structure/memory based on the profile ID. In process block 546, any relevant previous stage results are received from coupled associative memories. Next, in process block 548, the highest priority match from the available associative memory blocks and previous stage results is identified, if any, based on the implied and/or programmed priority values associated with the matching entries and/or associative memories, blocks, etc.
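The selection of process blocks 544-548 might be modeled roughly as follows; the table and variable names are invented, and the convention that a larger value means higher priority is an assumption made for the sketch.

def select_result(profile_id, block_results, available_array, block_priority):
    """Pick the highest-priority hit among the blocks available for this profile."""
    candidates = []
    for block_id in available_array.get(profile_id, []):
        result = block_results.get(block_id)
        if result and result.get("hit"):
            candidates.append((block_priority.get(block_id, 0), block_id, result))
    if not candidates:
        return {"hit": False}
    priority, block_id, best = max(candidates)     # larger value = higher priority (assumed)
    return {"hit": True, "block": block_id, "location": best["location"]}

# Example (invented values): profile 7 considers blocks 0 and 2; block 2 outranks block 0.
result = select_result(
    7,
    {0: {"hit": True, "location": 12}, 2: {"hit": True, "location": 3}},
    {7: [0, 2]},
    {0: 1, 2: 5},
)
assert result == {"hit": True, "block": 2, "location": 3}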
Then, in process block 550, the result is communicated over a fixed or identified output bus/pin or to some other destination, with the result typically including a no-hit indication, or a hit indication and an identification of the ultimate highest-priority matching associative memory entry. Processing is complete as indicated by process block 552.

FIG. 6A illustrates an exemplary policy map 600, including deny and permit instructions. Note, there are many applications of embodiments, and not all use permit and deny instructions.

FIG. 6B illustrates associative memory entries 621 and 622 as determined by one embodiment based on policy map 600. Associative memory entries 621 and 622 could be programmed in the same or different associative memories or associative memory blocks. Associative memory entries 621 and 622 are shown in separate groupings to illustrate how priority can optionally be used and programmed in one embodiment. As shown, the deny statements in policy map 600 generate force no-hit indications (e.g., FORCE NO-HIT=1) in corresponding entries of entries 621 and 622. By using the optional priority indications, entries 621 and 622 can be stored in different associative memories and/or associative memory banks, etc., which can be considered in determining where to store the entries in order to efficiently use the space available for the entries. By associating a priority level with each entry, entries within a same associative memory and/or associative memory block, etc. can have different priority levels, which gives great flexibility in programming and managing the entries and the space available for storing the entries.

FIG. 6C illustrates a data structure 650, used in one embodiment, for indicating the priority of associative memories, blocks, entries, etc. As shown, priority mapping data structure 650 provides a priority indication 652 (e.g., a value) for each of the associative memories, associative memory blocks, associative memory entries, etc. (identified by indices 651). Associative memories and/or blocks, etc. associated with programmed priority values can be used with or without programmed priority values associated with the associative memory entries themselves.

FIG. 7A illustrates a process for programming associative memory entries used in one embodiment. Processing begins with process block 700, and proceeds to process block 702, wherein a policy map (e.g., any definition of desired actions, etc.) is identified. Next, in process block 704, a set of corresponding entries is identified based on the policy map. In process block 706, a force no-hit indication is associated with one or more of the entries (if so correspondingly defined by the policy map). A force no-hit indication is of particular use in implementing deny operations, but is not required to be identified with a deny operation. Next, in process block 708, optionally, priority indications are associated with each of the entries, associative memories, associative memory banks, etc. In process block 710, one or more associative memories and/or banks are programmed with the entries (and data structures are updated as required). Processing is complete as indicated by process block 712.

FIG. 7B illustrates a process for identifying a highest priority result used in one embodiment. Processing begins with process block 750, and proceeds to process block 752, wherein results are received from the associative memories, blocks, etc. (including possibly from previous stages).
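The programming step of FIG. 7A, in which deny statements yield force no-hit entries (as illustrated in FIG. 6B), might look roughly like this in software; the policy-map representation and field names are illustrative assumptions rather than the embodiment's format.

def entries_from_policy_map(policy_map):
    """policy_map: ordered (action, value, mask) tuples, action in {"permit", "deny"}."""
    entries = []
    for priority, (action, value, mask) in enumerate(policy_map):
        entries.append({
            "value": value,
            "mask": mask,
            "force_no_hit": action == "deny",   # deny statements get FORCE NO-HIT=1
            "priority": priority,               # optional per-entry priority indication
        })
    return entries

rules = [("deny", 0x0A000001, 0xFFFFFFFF), ("permit", 0x0A000000, 0xFFFFFF00)]
print(entries_from_policy_map(rules))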
In process block 754, the priority values are associated with the results (e.g., based on the entries, memories, blocks, etc.). In process block 756, the highest priority result is (or, in one embodiment, results are) identified based on the inherent or programmed priority values. The hierarchy (e.g., the order in which they are considered) of the types of priority values (e.g., those associated with the entries, banks, memories, etc.) can vary among embodiments and even among individual lookup operations. In process block 758, the highest priority result is (or results are) identified. Processing is complete as indicated by process block 759.

FIGs. 8A-G illustrate access control lists, processes, mechanisms, data structures, and/or other aspects of some of an unlimited number of systems employing embodiments for updating counters or other accounting devices, or for performing other functions.
Shown in FIG. 8A is an access control list 800, which defines accounting information to be collected in counting mechanism one by statement 801 for access control list entries 803, and in counting mechanism two by statement 802 for access control list entries 804. Note, there are multiple access control list entries that will cause a same counting mechanism to be adjusted. Also, the value by which a particular counter is adjusted can be one (e.g., corresponding to one item or packet), a byte count (e.g., the size of an item, packet, frame, or datagram), or any other value.

FIG. 8B illustrates a process used in one embodiment to configure a mechanism for accumulating information based on access control list entries. Note, this embodiment may be responsive to and/or implemented in a computer-readable medium (e.g., software, firmware, etc.), custom hardware (e.g., circuits, ASICs, etc.), or via any other means or mechanism, such as, but not limited to, that disclosed herein. For example, one embodiment uses a system described herein, and/or illustrated in FIGs. 1A-E, 2, 8D-8E, 9A, 9C-D, and/or any other figure. Processing of the flow diagram illustrated in FIG. 8B begins with process block
810, and proceeds to process block 812, wherein an access control list is identified. Typically, the access control list includes multiple access control list entries, with a subset of these entries identifying accounting requests. Next, in process block 814, accounting mechanisms are associated with each of the access control list entries specifying accounting requests. Typically, but not always, at least one of the accounting mechanisms is associated with at least two different access control list entries. Processing is complete as indicated by process block 816.

FIG. 8C illustrates a process used in one embodiment for updating an accounting mechanism based on an item, such as, but not limited to, one or more fields or values associated with a packet. Processing begins with process block 820, and proceeds to process block 822, wherein an item is identified. The identification of an item might include identifying an autonomous system number corresponding to the packet. Note, an autonomous system number is typically associated with a set of communication devices under a single administrative authority. For example, all packets sent from an Internet Service Provider are typically associated with a same autonomous system number. Next, in process block 824, a particular one of the accounting mechanisms corresponding to the item is identified, such as by, but not limited to, a lookup operation in a data structure or associative memory, or by any other means or mechanism. Then, in process block 826, the identified accounting mechanism is updated. Processing is complete as indicated by process block 828.

FIG. 8D illustrates one embodiment of a system for updating an accounting value based on that defined by an access control list or other mechanism. Packets 831 are received and processed by packet processor 832 to generate packets 839. In one embodiment, packet processor 832 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet. Based on an identified packet, autonomous system numbers, and/or other information, a lookup value 833 is identified. FIG. 9G illustrates a lookup value 960 used in one embodiment. One embodiment uses all, less than all, or none of fields 960A-960I. Based on lookup value 833, a lookup operation is performed in associative memory entries 834 in one or more associative memory banks and/or one or more associative memories to generate a counter indication 835. The corresponding counting mechanism within counters and decoder/control logic 836 is updated. Counter values 837 are typically communicated, via any communication mechanism and/or technique, to packet processor 832 or another device to be forwarded or processed.

FIG. 8E illustrates one embodiment of a system for updating an accounting value based on that defined by an access control list or other mechanism. Packets 840 are received and processed by packet processor 841 to generate packets 849. In one embodiment, packet processor 841 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet. Based on an identified packet, autonomous system numbers, and/or other information, a lookup value 842 is identified. FIG. 9G illustrates a lookup value 960 used in one embodiment. One embodiment uses all, less than all, or none of fields 960A-960I.
Based on lookup value 842, a lookup operation is performed in associative memory entries 843 in one or more associative memory banks and/or one or more associative memories to produce a lookup result 844, which is then used to perform a lookup operation in adjunct memory 845 to generate a counter indication 846, and the corresponding counting mechanism within counters and decoder/control logic 847 is updated. In one embodiment, adjunct memory 845 stores counter indications for corresponding locations of access control list entries programmed in associative memory 843, and some of these counter indications may be the same value, such that a same counting mechanism is updated for different matching access control list entries. Counter values 848 are typically communicated, via any communication mechanism and/or technique, to packet processor 841 or another device to be forwarded or processed.

FIG. 8F illustrates an example of associative memory entries 860 and corresponding adjunct memory entries 870, such as those generated by one embodiment based on access control list entries 803 and 804 (FIG. 8A). As shown, associative memory entries 861-863 have the same counter indication in adjunct memory entries 871-873, while associative memory entry 864 has a different corresponding counter indication in adjunct memory entry 874. In one embodiment, associative memory entries include fields for a source address, a destination address, and other fields, such as, but not limited to, autonomous system numbers (ASNs), protocol type, source and destination port information, etc. In one embodiment, adjunct memory entries 870 include an indication of a counting mechanism and/or other values which may be used for other purposes (e.g., security, routing, policing, quality of service, etc.).

FIG. 8G illustrates a process used in one embodiment for processing a packet. Processing begins with process block 880, and proceeds to process block 882, wherein a packet is identified. Next, in process block 884, one or more forwarding information base (FIB) lookup operations are performed to identify source and destination autonomous system numbers corresponding to the identified packet. In process block 886, an accounting lookup value is identified, typically based on information contained in the identified packet and the source and destination ASNs. In process block 888, a lookup operation is performed in one or more associative memory banks and possibly in one or more corresponding adjunct memories to identify a counter indication. In process block 890, the counter, if any, corresponding to the counter indication is updated by some static or dynamic value. Processing is complete as indicated by process block 892.

FIG. 9A illustrates one embodiment of a system for identifying a merged lookup result. Packets 901 are received and processed by packet processor 902 to generate packets 909. In one embodiment, packet processor 902 performs a lookup operation in a forwarding information base (FIB) data structure to identify the source and/or destination autonomous system number associated with the identified packet. Based on an identified packet, autonomous system numbers, and/or other information, a lookup value 903 is identified. FIG. 9G illustrates a lookup value 960 used in one embodiment. One embodiment uses all, less than all, or none of fields 960A-960I.
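The counter path of FIGs. 8E-8G can be sketched as follows, assuming a simple entry list in priority order and an adjunct memory represented as a list of counter indications; the names and the choice of a byte-count increment are illustrative.

from collections import defaultdict

counters = defaultdict(int)   # counting mechanisms, keyed by counter indication

def account_packet(lookup_value, entries, adjunct_memory, byte_count):
    """entries: (value, mask) pairs in priority order; adjunct_memory: one counter
    indication per entry (several entries may share the same indication)."""
    for index, (value, mask) in enumerate(entries):
        if (lookup_value ^ value) & mask == 0:
            counter_id = adjunct_memory[index]
            counters[counter_id] += byte_count   # or += 1 to count items instead of bytes
            return counter_id
    return None                                  # no matching access control list entry

Because several adjunct entries may carry the same counter indication, multiple access control list entries can feed a single counting mechanism, mirroring entries 861-863 and adjunct entries 871-873 of FIG. 8F.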
Based on lookup value 903, a lookup operation is performed in associative memory entries 904 (e.g., access control list, security, quality of service, accounting entries) in multiple associative memory banks and/or one or more associative memories to generate results 905, based on which memories 906 generate results 907. Combiner mechanism 910 merges results 907 to produce one or more merged results 911, which are typically used by packet processor 902 in the processing of packets. In one embodiment, combiner mechanism 910 includes a processing element responsive to a computer-readable medium (e.g., software, firmware, etc.), custom hardware (e.g., circuits, ASICs, etc.), and/or any other means or mechanism. In one embodiment, a merged result 911 includes a counter indication which is used by counters and decoder/control logic 912 to update a value. The accumulated accounting values 913 are typically communicated to packet processor 902 or another device.

FIG. 9B illustrates an access control list 915, including access control list entries of multiple features of a same type. For example, entries 916 correspond to security entries indicating whether a packet should be dropped or processed, while entries 917 correspond to packets that should or should not be sent to a mechanism to encrypt the packet. Different associative memories are each programmed with associative memory entries corresponding to a different one of the features. A lookup operation is then performed substantially simultaneously on each of the feature sets of associative memory entries to generate associative memory results, which are then used to perform lookup operations substantially simultaneously in adjunct memories to produce the lookup results, which then can be merged to produce the merged result. The respective priorities of the lookup results may be implicit, based on their corresponding associative memory banks and/or adjunct memories, or may be specified, such as in the associative memory entries or from another data structure lookup operation, or may be identified using any other manner or mechanism.

For example, one embodiment includes four associative memory banks for supporting one to four features. An associative memory lookup operation is performed in parallel on the four banks and then in the adjunct memories (SRAMs), which indicate the action, type of entry (e.g., ACL, QoS, accounting), and precedence for the combiner mechanism. The combiner mechanism merges the results to get the final merged result. A miss in an ACL lookup in a bank is treated as a permit with lowest precedence. If there is a hit in more than one bank with the same specified precedence in the retrieved adjunct memory entry, the precedence used by the combiner mechanism is determined based on the implied or specified precedence of the associative memory bank. If there is a miss in all the banks, a default result from global registers is used. A similar merge operation is performed for the QoS and accounting lookup results.

FIG. 9C illustrates a lookup and merge mechanism 920 used by one embodiment. One or more of associative memory banks 921A-921C (there can be any number of banks) are programmed with associative memory entries of a same access control list type, with different features of the type programmed into a different one of the associative memory banks 921A-921C. Corresponding adjunct memory entries 922A-922C are programmed in one or more adjunct memories.
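A software sketch of the ACL merge conventions of the four-bank example above follows; representing a miss as None, a bank result as an (action, precedence) pair, and breaking precedence ties by bank order are assumptions made for illustration.

def merge_acl_results(bank_results, default_action="permit"):
    """bank_results: per bank, None (miss) or an (action, precedence) pair; banks are
    listed in order of decreasing implied bank precedence."""
    best = None
    for bank_order, result in enumerate(bank_results):
        if result is None:
            continue                              # a miss acts as a lowest-precedence permit
        action, precedence = result
        key = (precedence, -bank_order)           # adjunct precedence first, bank order breaks ties
        if best is None or key > best[0]:
            best = (key, action)
    return best[1] if best else default_action    # all banks missed: fall back to the default

# Example (invented precedences): bank 0 permits at precedence 2, bank 1 denies at 5.
assert merge_acl_results([("permit", 2), ("deny", 5), None, None]) == "deny"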
Thus, lookup operations can be performed substantially simultaneously on associative memory banks 921A-C to generate results, which are used to identify corresponding lookup results from adjunct memory entries 922A-922C, which are then merged by combiner mechanism 923 to generate the merged result 924.

FIG. 9D is substantially similar to FIG. 9C, but illustrates that multiple merged results corresponding to multiple access control list entry types can be generated in parallel (e.g., substantially simultaneously). As shown, lookup and merge mechanism 920, used by one embodiment, is programmed with feature sets of a same type in associative memory banks 931A-931B (there can be any number of banks), and of a different type in associative memory banks 931C-931D (there can be any number of banks). Corresponding adjunct memory entries 932A-932D are programmed into one or more adjunct memories. Thus, lookup operations can be performed substantially simultaneously on associative memory banks 931A-D to generate results, which are used to identify corresponding lookup results from adjunct memory entries 932A-932D, which are then merged by combiner mechanism 933 to generate the multiple merged results 934 (e.g., typically one or more merged results per access control list type).

FIG. 9E illustrates a process used in one embodiment to program the associative and adjunct memories. Processing begins with process block 940, and proceeds to process block 941, wherein an access control list including multiple access control list entries is identified. In process block 942, a first set of the access control list entries corresponding to a first feature of the access control list entries is identified. In process block 943, a first associative memory bank and a first adjunct memory are programmed with entries corresponding to the first set of access control list entries. In process block 944, a second set of the access control list entries corresponding to a second feature of the access control list entries is identified. In process block 945, a second associative memory bank and a second adjunct memory are programmed with entries corresponding to the second set of access control list entries. The first set of associative memory entries has a higher lookup precedence than the second set of associative memory entries. Processing is complete as indicated by process block 946.

FIG. 9F illustrates a process used by one embodiment to perform lookup operations and to identify the merged result. Processing begins with process block 950, and proceeds to process block 951, wherein a lookup value is identified. Next, in process block 952, lookup operations are performed in the first and second associative memory banks and adjunct memories to generate first and second lookup results, which are merged in process block 953 to identify the merged result. Processing is complete as indicated by process block 954.

FIG. 9G illustrates a lookup value 960, result value 965, and merged result value 967 used in one embodiment. As shown, lookup value 960 includes a lookup type 960A, source address 960B, destination address 960C, source port 960D, destination port 960E, protocol type 960F, source ASN 960G, destination ASN 960H, and possibly other fields 960I. One embodiment uses all, less than all, or none of fields 960A-960I. As shown, result value 965 includes a result type 965A, an action or counter indication 965B, and a precedence indication 965C.
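Stepping back to the programming process of FIG. 9E, the split of one access control list into per-feature banks with per-bank precedence might be sketched as follows; the entry representation and the precedence numbering are assumptions made for illustration.

def program_feature_banks(acl_entries):
    """acl_entries: dicts with 'feature', 'value', 'mask', and 'result' keys, in ACL order."""
    features = []
    for entry in acl_entries:
        if entry["feature"] not in features:
            features.append(entry["feature"])        # first-seen feature gets bank 0, and so on
    banks, adjuncts, precedence = {}, {}, {}
    for bank_id, feature in enumerate(features):
        subset = [e for e in acl_entries if e["feature"] == feature]
        banks[bank_id] = [(e["value"], e["mask"]) for e in subset]
        adjuncts[bank_id] = [e["result"] for e in subset]
        precedence[bank_id] = len(features) - bank_id  # earlier bank = higher lookup precedence
    return banks, adjuncts, precedence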
In one embodiment, result value 965 is programmed in the adjunct memories. One embodiment uses all, less than all, or none of fields 965A-965C. As shown, merged result value 967 includes a result type 967A and an action or counter indication 967B. One embodiment uses all, less than all, or none of fields 967A-967B.

FIGs. 9H-9J illustrate merging logic truth tables 970, 972, and 974 for generating the merged result. In one embodiment, the merge result of a security lookup operation is illustrated in security combiner logic 970, and is based on the results of up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 970A-970D, with the corresponding merged result shown in column 970E. Note, the "—" in the fields indicates a don't-care condition, as a merged result corresponding to a higher priority will be selected. In one embodiment, the merge result of a Quality of Service (QoS) lookup operation is illustrated in QoS combiner logic 972, and is based on the results of a previously merged security lookup operation and up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 972A-972E, with the corresponding merged result shown in column 972F. In one embodiment, the merge result of an accounting lookup operation is illustrated in accounting combiner logic 974, and is based on the results of a previously merged security lookup operation and up to four substantially simultaneous (or not) lookup operations with differing precedence indicated in columns 974A-974E, with the corresponding merged result, possibly identifying a counter to be updated, shown in column 974F.

FIG. 9K illustrates a process used in one embodiment to generate a security merged result, a QoS merged result, and an accounting merged result. Processing begins with process block 980, and proceeds to process block 981, wherein a packet is identified. Next, in process block 982, one or more FIB lookup operations are performed to identify source and destination ASNs. In process block 983, a security lookup value is identified. In process block 984, lookup operations are performed based on the security lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple security results, which are merged in process block 985 to identify the merged security result. Also, this merged security result is stored in a data structure or other mechanism for use in identifying the merged QoS and accounting results. In process block 986, the QoS lookup value is identified. In process block 987, lookup operations are performed based on the QoS lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple QoS results, which, in process block 988, are merged along with the previously determined merged security result to identify the merged QoS result. In process block 989, the accounting lookup value is identified. In process block 990, lookup operations are performed based on the accounting lookup value in multiple associative memory banks and one or more adjunct memories to identify multiple accounting results, which, in process block 991, are merged along with the previously determined merged security result to identify the merged accounting result. Also, an identified counter or other accounting mechanism is updated. Processing is complete as indicated by process block 992.
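The cascaded merge of FIG. 9K might be sketched as follows; treating the merged security decision as suppressing QoS and accounting actions for dropped packets is an assumption made for illustration, since the truth tables of FIGs. 9H-9J are not reproduced here.

def merge_security(results):
    """results: per bank, "permit", "deny", or None for a miss."""
    hits = [r for r in results if r is not None]
    return "deny" if "deny" in hits else "permit"

def merge_qos(security_decision, qos_results):
    """qos_results: per bank, a (precedence, qos_value) pair or None for a miss."""
    if security_decision == "deny":
        return None                               # assumed: dropped packets get no QoS action
    hits = [r for r in qos_results if r is not None]
    return max(hits)[1] if hits else "default-qos"

def merge_accounting(security_decision, accounting_results):
    """accounting_results: per bank, a (precedence, counter_id) pair or None for a miss."""
    if security_decision == "deny":
        return None                               # assumed: no counter update for dropped packets
    hits = [r for r in accounting_results if r is not None]
    return max(hits)[1] if hits else None

security = merge_security(["permit", None, "permit", None])
qos = merge_qos(security, [(1, "low-latency"), None, (3, "best-effort"), None])
counter = merge_accounting(security, [(2, 17), None, None, None])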
In view of the many possible embodiments to which the principles of our invention may be applied, it will be appreciated that the embodiments and aspects thereof described herein with respect to the drawings/figures are only illustrative and should not be taken as limiting the scope of the invention. For example, and as would be apparent to one skilled in the art, many of the process block operations can be re-ordered to be performed before, after, or substantially concurrently with other operations. Also, many different forms of data structures could be used in various embodiments. The invention as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.

Claims

What is claimed is:
1. A method for performing operations for programming one or more associative memories, the method comprising: identifying a specified policy map; determining a set of entries based on the specified policy map; and associating a force no-hit indication with one or more entries of the set of entries.
2. The method of claim 1, comprising programming one or more associative memories with the set of entries.
3. The method of claim 1, comprising programming a plurality of banks of an associative memory with the set of entries.
4. The method of claim 3, comprising associating a priority indication with each entry of the set of entries.
5. The method of claim 4, comprising: programming a plurality of banks of an associative memory with the set of entries; and associating a programmable priority level with each of the plurality of banks.
6. The method of claim 3, comprising associating a programmable priority level with each of the plurality of banks.
7. The method of claim 1, wherein at least one of said one or more entries corresponds to a deny operation.
8. An associative memory comprising: a plurality of associative memory banks; wherein each of said plurality of associative memory banks includes a plurality of entries; and wherein each of the plurality of entries includes a force no-hit value field.
9. The associative memory of claim 8, wherein each of the plurality of entries includes a priority indication field.
10. The associative memory of claim 9, comprising: a plurality of mechanisms for identifying a block highest priority matching entry for each of the plurality of associative memory banks; and a priority mechanism for identifying a highest priority one of said associative memory entries based on the block highest priority matching entry of each of the plurality of associative memory banks and values of the priority indication fields associated with the block highest priority matching entry of each of the plurality of associative memory banks.
11. A computer-readable medium containing computer-executable instructions for performing steps for performing operations for programming one or more associative memories, said steps comprising: identifying a specified policy map; determining a set of entries based on the specified policy map; and associating a force no-hit indication with one or more entries of the set of entries.
12. The computer-readable medium of claim 11, wherein said steps comprise programming one or more associative memories with the set of entries.
13. The computer-readable medium of claim 11, wherein said steps comprise programming a plurality of banks of an associative memory with the set of entries.
14. The computer-readable medium of claim 13, wherein said steps comprise associating a priority indication with each entry of the set of entries.
15. The computer-readable medium of claim 14, wherein said steps comprise: programming a plurality of banks of an associative memory with the set of entries; and associating a programmable priority level with each of the plurality of banks.
16. The computer-readable medium of claim 13, wherein said steps comprise associating a programmable priority level with each of the plurality of banks.
17. The computer-readable medium of claim 11, wherein at least one of said one or more entries corresponds to a deny operation.
18. An apparatus for identifying a merged lookup result, the apparatus comprising: a mechanism for generating a lookup value; one or more associative memories for generating a plurality of associative memory results based on the lookup value, the plurality of associative memory results including at least one result from each of said one or more associative memories; one or more adjunct memories, coupled to said one or more associative memories, for performing lookup operations on said plurality of associative memory results to generate a plurality of lookup results; and a combiner, coupled to said one or more adjunct memories, for merging the plurality of lookup results to generate the merged lookup result.
19. The apparatus of claim 18, wherein the plurality of lookup results are each associated with precedence indications stored in said one or more adjunct memories, and wherein said combiner selects one of the plurality of lookup results as the merged lookup result based on said precedence indications of the plurality of lookup results.
20. The apparatus of claim 19, wherein each of the plurality of lookup results corresponds to a different feature as defined in an access control list.
21. A method for identifying a merged lookup result, the method comprising: identifying an access control list including a plurality of access control list entries; identifying a first set of access control list entries corresponding to a first feature of said plurality of access control list entries; programming a first associative memory bank and a first adjunct memory with first associative memory entries corresponding to the first set of access control list entries; identifying a second set of access control list entries corresponding to a second feature of said plurality of access control list entries; and programming a second associative memory bank and a second adjunct memory with second associative memory entries corresponding to the second set of access control list entries; wherein said first associative memory entries have a higher lookup precedence than said second associative memory entries.
22. The method of claim 21, comprising: identifying a lookup value; performing lookup operations in the first associative memory bank and the first adjunct memory to generate a first lookup result; performing lookup operations in the second associative memory bank and the second adjunct memory to generate a second lookup result; and merging the first and the second lookup results to identify a merged result.
23. The method of claim 22, wherein said lookup operations in the first and the second associative memory banks are performed substantially simultaneously.
24. The method of claim 22, wherein if the first associative memory result corresponds to a deny operation, the merged result corresponds to a drop packet operation.
25. The method of claim 22, wherein if the first associative memory result corresponds to a permit operation and the second associative memory result corresponds to a permit operation, the merged result corresponds to a permit operation.
26. The method of claim 22, wherein if the first associative memory result corresponds to a permit operation and the second associative memory result corresponds to a deny operation, the merged result corresponds to a drop packet operation.
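Read together, claims 24 through 26 amount to a simple truth table for merging the two per-feature results: a deny from either feature forces the packet to be dropped, and only permit combined with permit yields a permit. A minimal sketch of that rule (the function name is an assumption for this example) is:

    def merge_permit_deny(first_result: str, second_result: str) -> str:
        # Any deny forces a drop; the packet is permitted only when both
        # per-feature results are permits.
        if first_result == "deny" or second_result == "deny":
            return "drop packet"
        return "permit"

    # Example: merge_permit_deny("permit", "deny") returns "drop packet".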
27. A method for identifying a merged lookup result, the method comprising: identifying a packet; identifying a first lookup value; performing substantially simultaneous lookup operations in a plurality of associative memories and adjunct memories to generate a plurality of first lookup results; merging the plurality of first lookup results to identify a merged first result; identifying a second lookup value; performing substantially simultaneous lookup operations in the plurality of associative memories and adjunct memories to generate a plurality of second lookup results; and merging the plurality of second lookup results and the merged first result to identify a merged second result.
28. The method of claim 27, wherein each of the plurality of first lookup results corresponds to a different feature of a first type as defined in an access control list.
29. The method of claim 28, wherein each of the plurality of second lookup results corresponds to a different feature of a second type as defined in the access control list.
30. The method of claim 29, wherein the first type includes a security operation and the second type includes a quality of service operation.
31. The method of claim 27, wherein the first lookup value includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
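Claims 27 through 30 describe two passes over the same set of associative and adjunct memories: a first lookup value is applied to derive and merge the first-type (e.g. security) feature results, then a second lookup value is applied for the second-type (e.g. quality of service) feature results, which are merged together with the first merged result. The sketch below shows that control flow only, with a deliberately simplified merge rule; the per-bank lookup is assumed to behave like the hypothetical Bank.lookup introduced earlier.

    from typing import List, Optional

    def merge_results(results: List[Optional[str]]) -> Optional[str]:
        results = [r for r in results if r is not None]
        if any(r in ("deny", "drop packet") for r in results):
            return "drop packet"  # simplified merge rule: any deny or drop dominates
        return results[0] if results else None

    def two_pass_lookup(banks, first_value: int, second_value: int) -> Optional[str]:
        # First pass, e.g. security features.
        first_results = [bank.lookup(first_value) for bank in banks]
        merged_first = merge_results(first_results)
        # Second pass, e.g. quality of service features, merged with the first merged result.
        second_results = [bank.lookup(second_value) for bank in banks]
        return merge_results(second_results + [merged_first])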
32. A computer-readable medium containing computer-executable instructions for performing steps for identifying a merged lookup result, said steps comprising: identifying an access control list including a plurality of access control list entries; identifying a first set of access control list entries corresponding to a first feature of said plurality of access control list entries; programming a first associative memory bank and a first adjunct memory with first associative memory entries corresponding to the first set of access control list entries; identifying a second set of access control list entries corresponding to a second feature of said plurality of access control list entries; and programming a second associative memory bank and a second adjunct memory with second associative memory entries corresponding to the second set of access control list entries; wherein said first associative memory entries have a higher lookup precedence than said second associative memory entries.
33. The computer-readable medium of claim 32, wherein said steps comprise: identifying a lookup value; performing lookup operations in the first associative memory bank and the first adjunct memory to generate a first lookup result; performing lookup operations in the second associative memory bank and the second adjunct memory to generate a second lookup result; and merging the first and the second lookup results to identify a merged result.
34. The computer-readable medium of claim 33, wherein if the first associative memory result corresponds to a deny operation, the merged result corresponds to a drop packet operation.
35. The computer-readable medium of claim 33, wherein if the first associative memory result corresponds to a permit operation and the second associative memory result corresponds to a permit operation, the merged result corresponds to a permit operation.
36. The computer-readable medium of claim 33, wherein if the first associative memory result corresponds to a permit operation and the second associative memory result corresponds to a deny operation, the merged result corresponds to a drop packet operation.
37. A method for generating accounting data, the method comprising: identifying an access control list including a plurality of access control list entries, a subset of the plurality of access control list entries identifying accounting requests; associating accounting mechanisms with each of said access control list entries in the subset of the plurality of access control list entries identifying accounting requests; identifying an item; identifying a particular one of said accounting mechanisms corresponding to the item; and updating said accounting mechanism corresponding to the item.
38. The method of claim 37, wherein the item corresponds to one or more fields of a received packet.
39. The method of claim 38, wherein the item further includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
40. The method of claim 37, wherein at least one of said accounting mechanisms is associated with at least two different access control list entries in the subset of the plurality of access control list entries identifying accounting requests.
41. A method for generating accounting data, the method comprising: identifying a lookup value; performing a lookup operation in an associative memory based on the lookup value to identify an associative memory result; performing a lookup operation on an adjunct memory based on the associative memory result to identify a counter indication, wherein at least two entries within the adjunct memory include a same counter indication; and updating one of a plurality of counters based on the counter indication.
42. The method of claim 41, wherein said at least two entries are determined based on a corresponding specification in an access control list.
43. The method of claim 41, wherein the lookup value includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
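Claims 41 through 43 describe a counting pipeline in which the associative memory result selects an adjunct memory entry, the adjunct memory entry supplies a counter indication, and the counter named by that indication is updated; notably, two different adjunct memory entries may carry the same counter indication, so several access control list entries can share one counter. A rough software analogy, with all names assumed for illustration, is:

    from typing import Callable, Dict, List, Optional

    def account(lookup_value: int,
                cam_lookup: Callable[[int], Optional[int]],  # lookup value -> matching index or None
                adjunct: List[int],                          # adjunct[i] = counter indication for CAM entry i
                counters: Dict[int, int]) -> None:
        index = cam_lookup(lookup_value)        # associative memory lookup
        if index is None:
            return                              # no hit, nothing to count
        counter_indication = adjunct[index]     # adjunct memory lookup
        counters[counter_indication] = counters.get(counter_indication, 0) + 1

    # Two CAM entries (say indices 3 and 7) share a counter simply by storing the
    # same counter indication in adjunct[3] and adjunct[7].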
44. A method for generating accounting data, the method comprising: identifying a lookup value, wherein the lookup value includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority; performing a lookup operation in an associative memory based on the lookup value to identify an associative memory result; performing a lookup operation on an adjunct memory based on the associative memory result to identify a counter indication; and updating one of a plurality of counters based on the counter indication.
45. The method of claim 44, wherein said at least two entries are determined based on a corresponding specification in an access control list.
46. An apparatus for generating accounting data, the apparatus comprising: a lookup word generation mechanism for identifying a lookup value; an associative memory for generating an associative memory result based on the lookup value; an adjunct memory for generating a counter indication based on the associative memory result, at least two entries of the adjunct memory configured to generate a same counter indication value; and a plurality of counters for maintaining counts and for updating one of the plurality of counters based on the counter indication.
47. The apparatus of claim 46, wherein said at least two entries are determined based on a corresponding specification in an access control list.
48. The apparatus of claim 46, wherein the lookup word generation mechanism identifies at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority; and wherein the lookup word includes said at least one autonomous system number.
49. A computer-readable medium containing computer-executable instructions for performing steps for generating accounting data, said steps comprising: identifying an access control list including a plurality of access control list entries, a subset of the plurality of access control list entries identifying accounting requests; associating accounting mechanisms with each of said access control list entries in the subset of the plurality of access control list entries identifying accounting requests; identifying an item; identifying a particular one of said accounting mechanisms corresponding to the item; and updating said accounting mechanism corresponding to the item.
50. The computer-readable medium of claim 49, wherein the item corresponds to one or more fields of a received packet.
51. The computer-readable medium of claim 50, wherein the item further includes at least one autonomous system number, said at least one autonomous system number identifying a set of communication devices under a single administrative authority.
EP04753312A 2003-07-29 2004-05-26 Force no-hit indications for cam entries based on policy maps Withdrawn EP1654657A4 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US10/630,176 US7082492B2 (en) 2002-08-10 2003-07-29 Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US10/630,178 US7689485B2 (en) 2002-08-10 2003-07-29 Generating accounting data based on access control list entries
US10/630,174 US7177978B2 (en) 2002-08-10 2003-07-29 Generating and merging lookup results to apply multiple features
PCT/US2004/016463 WO2005017754A1 (en) 2003-07-29 2004-05-26 Force no-hit indications for cam entries based on policy maps

Publications (2)

Publication Number Publication Date
EP1654657A1 true EP1654657A1 (en) 2006-05-10
EP1654657A4 EP1654657A4 (en) 2008-08-13

Family

ID=34199009

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04753312A Withdrawn EP1654657A4 (en) 2003-07-29 2004-05-26 Force no-hit indications for cam entries based on policy maps

Country Status (2)

Country Link
EP (1) EP1654657A4 (en)
WO (1) WO2005017754A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065609B2 (en) * 2002-08-10 2006-06-20 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2176920A (en) * 1985-06-13 1987-01-07 Intel Corp Content addressable memory
US5267190A (en) * 1991-03-11 1993-11-30 Unisys Corporation Simultaneous search-write content addressable memory
US6467019B1 (en) * 1999-11-08 2002-10-15 Juniper Networks, Inc. Method for memory management in ternary content addressable memories (CAMs)
US6577520B1 (en) * 2002-10-21 2003-06-10 Integrated Device Technology, Inc. Content addressable memory with programmable priority weighting and low cost match detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EFTHYMIOU A ET AL: "An adaptive serial-parallel cam architecture for low-power cache bloc" LOW POWER ELECTRONICS AND DESIGN, 2002. ISLPED '02. PROCEEDINGS OF THE 2002 INTERNATIONAL SYMPOSIUM ON AUG. 12-14, 2002, PISCATAWAY, NJ, USA,IEEE, 12 August 2002 (2002-08-12), pages 136-141, XP010600872 ISBN: 978-1-58113-475-9 *
See also references of WO2005017754A1 *

Also Published As

Publication number Publication date
EP1654657A4 (en) 2008-08-13
WO2005017754A1 (en) 2005-02-24

Similar Documents

Publication Publication Date Title
US7350020B2 (en) Generating and merging lookup results to apply multiple features
US7689485B2 (en) Generating accounting data based on access control list entries
US7082492B2 (en) Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US7103708B2 (en) Performing lookup operations using associative memories optionally including modifying a search key in generating a lookup word and possibly forcing a no-hit indication in response to matching a particular entry
US7065609B2 (en) Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications
US7197597B1 (en) Performing lookup operations in a content addressable memory based on hashed values of particular use in maintaining statistics for packet flows
US7237059B2 (en) Performing lookup operations on associative memory entries
US7349382B2 (en) Reverse path forwarding protection of packets using automated population of access control lists based on a forwarding information base
EP1649389B1 (en) Internet protocol security matching values in an associative memory
US7028136B1 (en) Managing idle time and performing lookup operations to adapt to refresh requirements or operational rates of the particular associative memory or other devices used to implement the system
EP1678619B1 (en) Associative memory with entry groups and skip operations
US7249228B1 (en) Reducing the number of block masks required for programming multiple access control list in an associative memory
US7024515B1 (en) Methods and apparatus for performing continue actions using an associative memory which might be particularly useful for implementing access control list and quality of service features
US7523251B2 (en) Quaternary content-addressable memory
Shen et al. RVH: Range-vector hash for fast online packet classification
EP1128608B1 (en) Method and means for classifying data packets
EP1654657A1 (en) Force no-hit indications for cam entries based on policy maps
CN100498737C (en) Forced no-hit indications for CAM entries based on policy maps

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060228

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
RIN1 Information on inventor provided before grant (corrected)

Inventor name: SAHARIA, GYANESHWAR S.

Inventor name: KANEKAR, BHUSHAN MANGESH

Inventor name: PULLELA, VENKATESHWAR RAO

Inventor name: SCHEID, STEPHEN FRANCIS

Inventor name: CHEN, QIZHONG

Inventor name: GURAJAPU, SURESH

Inventor name: DEVIREDDY, DILEEP KUMAR

Inventor name: BHATTACHARYA, DIPANKAR

Inventor name: RAWAT, ATUL

A4 Supplementary search report drawn up and despatched

Effective date: 20080714

17Q First examination report despatched

Effective date: 20110330

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180103