US8462786B2 - Efficient TCAM-based packet classification using multiple lookups and classifier semantics - Google Patents


Info

Publication number
US8462786B2
Authority
US
United States
Prior art keywords
tcam
classifier
packet
rules
rule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US12/855,992
Other versions
US20110038375A1
Inventor
Alex X. LIU
Chad R. Meiners
Eric Torng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Michigan State University MSU
Original Assignee
Michigan State University MSU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Michigan State University (MSU)
Priority to US12/855,992
Assigned to BOARD OF TRUSTEES OF MICHIGAN STATE UNIVERSITY. Assignors: LIU, ALEX X.; MEINERS, CHAD R.; TORNG, ERIC
Publication of US20110038375A1
Application granted granted Critical
Publication of US8462786B2


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/74Address processing for routing
    • H04L45/745Address table lookup; Address filtering
    • H04L45/7453Address table lookup; Address filtering using hashing

Definitions

  • the present disclosure relates to methods for constructing a packet classifier for a computer network system.
  • Packet classification is the core mechanism that enables many networking devices, such as routers and firewalls, to perform services such as packet filtering, quality of service, traffic monitoring, virtual private networks (VPNs), network address translation (NAT), load balancing, traffic accounting and monitoring, differentiated services (Diffserv), etc.
  • the fundamental problem is to compare each packet with a list of predefined rules, which we call a packet classifier, and find the first (i.e., highest priority) rule that the packet matches.
  • Table 1 shows an example packet classifier of three rules. The format of these rules is based upon the format used in Access Control Lists (ACLs) on Cisco routers.
  • ACLs: Access Control Lists
  • TCAMs: Ternary Content Addressable Memories
  • a traditional random access memory chip receives an address and returns the content of the memory at that address;
  • a TCAM chip does the converse: it receives content and returns the address of the first entry where the content lies in the TCAM in constant time (i.e., a few clock cycles).
  • TCAM-based packet classification stores a rule in a TCAM entry as an array of 0's, 1's, or *'s (don't-care values).
  • a packet header serves as the search key.
  • Given a search key, a TCAM's circuits compare the key with all occupied entries in parallel and return the index (or the content, depending on the chip architecture and configuration) of the first matching entry.
  • TCAM-based classification is widely used because of its high speed. Although software based classification has been extensively studied, these schemes cannot match the wire speed performance of TCAM-based packet classification systems.
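  • To make these lookup semantics concrete, the following minimal Python sketch models a first-match ternary lookup; note that real TCAM hardware compares all entries in parallel, and the entries and keys below are illustrative only.

```python
def ternary_match(entry, key):
    """True if every bit of the key matches the entry, where '*' matches 0 or 1."""
    return all(e in ('*', k) for e, k in zip(entry, key))

def tcam_lookup(entries, key):
    """Return the index of the first (highest priority) matching entry, or None.
    A real TCAM searches all entries in parallel; this loop models the result only."""
    for index, entry in enumerate(entries):
        if ternary_match(entry, key):
            return index
    return None

# Illustrative 4-bit table; entry order encodes priority.
table = ["01**", "1***", "****"]
print(tcam_lookup(table, "0110"))  # -> 0 (first entry matches)
print(tcam_lookup(table, "0010"))  # -> 2 (only the default entry matches)
```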
  • Although TCAM-based packet classification is the de facto industry standard because packets can be classified in constant time, the speed and power efficiency of each memory access decrease significantly as TCAM chip capacity increases. Packet classification with a single TCAM lookup is possible because of the parallel search and priority match circuits in a TCAM chip. Unfortunately, because the capacity of the TCAM chip determines the amount and depth of circuitry active during each parallel priority search, there is a significant tradeoff between the capacity of a TCAM chip and the resulting speed and power efficiency of that chip. This tradeoff is quantified by the detailed TCAM power model disclosed by B. Agrawal and T. Sherwood in "Modeling TCAM power for next generation network devices", In Proc. IEEE Int. Symposium on Performance Analysis of Systems and Software (2006).
  • Building an efficient TCAM-based packet classification system requires careful optimization of the size, speed, and power of TCAM chips.
  • TCAM chips consume a large amount of power due to their parallel searching.
  • the power consumed by a TCAM chip is about 1.85 Watts per megabit (Mb), which is roughly 30 times larger than a comparably sized SRAM chip.
  • the high power consumption consequently causes TCAM chips to generate a huge amount of heat.
  • TCAM chips have large die areas.
  • a TCAM chip occupies 6 times (or more) the board space of an equivalent capacity SRAM chip.
  • The large die area of TCAM chips leads to TCAM chips being very expensive, often costing more than network processors. Although the limited market size may contribute to TCAM's high price, it is not the main reason. Finally, as we noted earlier, smaller TCAM chips support much faster lookups than larger TCAM chips.
  • Nevertheless, there is pressure to use large capacity TCAM chips.
  • the first reason is that encoding packet classification rules into TCAM rules often results in an explosion in the number of rules, which is referred to as the range expansion problem.
  • the fields of source and destination IP addresses and protocol type are specified as prefixes, so they can be directly stored in a TCAM; the port fields, however, are typically specified as ranges, which must first be converted to prefixes.
  • Range reencoding schemes have been proposed to improve the scalability of TCAMs, primarily by mitigating the effect of range expansion.
  • the basic idea is to first reencode a classifier into another classifier that requires less TCAM space and then reencode each packet correspondingly such that the decision made by the reencoded classifier for the reencoded packet is the same as the decision made by the original classifier for the original packet.
  • Range reencoding has two possible benefits: rule width compression so that narrower TCAM entries can be used and rule number compression so that fewer TCAM entries can be used.
  • In domain compression, we transform a given colored hyperrectangle, which represents the semantics of a given classifier, into the smallest possible "equivalent" colored hyperrectangle. This leads to both optimal rule width compression as well as rule number compression.
  • In prefix alignment, we strive for rule number compression only, by transforming a colored hyperrectangle into an equivalent "prefix-friendly" colored hyperrectangle whose ranges align well with prefix boundaries, minimizing the costs of range expansion.
  • a method for constructing a packet classifier for a computer network system includes: receiving a set of rules for packet classification, where a rule sets forth values for fields in a data packet and a decision for data packets having matching field values; representing the set of rules as a directed graph; partitioning the graph into at least two partitions; generating at least one lookup table for each partition of the graph; and instantiating the lookup tables from one partition on a first content-addressable memory device and the lookup tables from the other partition on a second content-addressable memory device.
  • a method for encoding multiple ranges of values for a given field in a data packet as defined in a rule of a packet classifier.
  • the method includes: finding candidate cut points over an entire domain of values for the given field, where candidate cut points correspond to starting or ending points in the ranges of values for the given field; selecting a number of bits to be used to encode the domain in a binary format; and recursively dividing the domain of values along the candidate cut points using dynamic programming and mapping each range of values to a result that is represented in a binary format using the selected number of bits.
  • FIGS. 1A and 1B depict a two-dimensional prefix classifier and a representation of the same using three one-dimensional tables, respectively;
  • FIG. 2A depicts an exemplary two-dimensional packet classifier
  • FIG. 2B depicts an equivalent representation of the packet classifier as a directed graph
  • FIG. 2C depicts a reduced representation of the directed graph
  • FIG. 2D depicts four TCAM tables that correspond to the four nonterminal nodes in the reduced directed graph
  • FIG. 2E illustrates the process of table concatenation
  • FIG. 3 illustrates the packet lookup process for the tables depicted in FIG. 2E;
  • FIG. 4 is a flowchart illustrating a TCAM SPliT approach to constructing a packet classifier
  • FIG. 5 illustrates partitioning an exemplary directed graph in accordance with the TCAM SPliT approach
  • FIGS. 6A and 6B are diagrams illustrating how TCAM tables can be shadow packed and a graph representation of the shadowing relationship among the tables, respectively;
  • FIGS. 7A and 7B illustrate the shadow packing process for tables depicted in FIG. 2 ;
  • FIG. 8 illustrates the compression ratios for two exemplary packet classifiers
  • FIGS. 9A-9C are graphs depicting power, latency and throughput measures, respectively, for the exemplary packet classifiers
  • FIGS. 10A-10J illustrate an exemplary topological transformation process
  • FIG. 11 illustrates an exemplary packet classification process using a multi-lookup architecture
  • FIGS. 12 and 13 illustrate the packet classification process using a parallel pipelined-lookup architecture and a chained pipelined-lookup architecture, respectively;
  • FIGS. 14A-14G illustrate an example of the domain compression technique
  • FIG. 15 is a flowchart illustrating the domain compression technique for constructing a packet classifier
  • FIG. 16 illustrates an example of a one-dimensional prefix alignment
  • FIG. 17 is a flowchart illustrating the prefix alignment technique
  • FIGS. 18 and 19 are graphs for a set of exemplary packet classifiers detailing some of their properties.
  • FIGS. 20-24 are graphs of experimental results when topological transformation is applied to the set of exemplary packet classifiers.
  • A multi-lookup approach to ternary content addressable memory (TCAM) based packet classification, referred to as TCAM SPliT, is presented.
  • the proposed TCAM SPliT approach greatly reduces the total required TCAM space. Unlike other TCAM optimization schemes, it effectively deals with the multiplicative effect by splitting apart dimensions that do not combine well together. For example, the exemplary 2-dimensional classifier in FIG. 1A, which illustrates the multiplicative effect, can be represented using the three one-dimensional tables in FIG. 1B, requiring a total of 8 rules, each of which is only 3 bits wide. Although the one-dimensional tables only reduce the number of entries by 2 in this example, the savings would be much larger if the number of distinct entries qi in each dimension were larger.
  • the TCAM SPliT approach also leads to much faster packet classification with lower power consumption as we enable the use of smaller and thus faster and more power efficient TCAM chips.
  • a field F i is a variable of finite length (i.e., of a finite number of bits).
  • the domain of field Fi of w bits, denoted D(Fi), is [0, 2^w − 1].
  • a packet over the d fields F1, . . . , Fd is a d-tuple (p1, . . . , pd) where each pi (1 ≤ i ≤ d) is an element of D(Fi).
  • Packet classifiers usually check the following five fields: source IP address, destination IP address, source port number, destination port number, and protocol type.
  • the lengths of these packet fields are 32, 32, 16, 16, and 8 bits, respectively.
  • Σ is used to denote the set of all packets over fields F1, . . . , Fd. It follows that Σ is a finite set and |Σ| = |D(F1)| × . . . × |D(Fd)|.
  • a rule has the form (predicate) ⁇ (decision).
  • a (predicate) defines a set of packets over the fields F1 through Fd, and is specified as F1 ∈ S1 ∧ . . . ∧ Fd ∈ Sd, where each Si is a subset of D(Fi) and is specified as either a prefix or a range.
  • a prefix {0,1}^k{*}^(w−k) with k leading 0s or 1s for a packet field of length w denotes the range [{0,1}^k{0}^(w−k), {0,1}^k{1}^(w−k)].
  • prefix 01** denotes the range [0100, 0111].
  • a rule F1 ∈ S1 ∧ . . . ∧ Fd ∈ Sd → (decision) is a prefix rule if and only if each Si is represented as a prefix.
  • a packet (p1, . . . , pd) matches a predicate F1 ∈ S1 ∧ . . . ∧ Fd ∈ Sd (and the corresponding rule) if and only if the condition p1 ∈ S1 ∧ . . . ∧ pd ∈ Sd holds.
  • DS is used to denote the set of possible values that a decision can take.
  • typical elements of DS include accept, discard, accept with logging, and discard with logging.
  • a sequence of rules (r 1 , . . . , r n ) is complete if and only if for any packet p, there is at least one rule in the sequence that p matches.
  • the predicate of the last rule is usually specified as F1 ∈ D(F1) ∧ . . . ∧ Fd ∈ D(Fd).
  • a packet classifier f is a sequence of rules that is complete. The size of f, denoted |f|, is the number of rules in f.
  • a packet classifier f is a prefix packet classifier if and only if every rule in f is a prefix rule.
  • Two rules in a packet classifier may overlap; that is, there exists at least one packet that matches both rules. Furthermore, two rules in a packet classifier may conflict; that is, the two rules not only overlap but also have different decisions. Packet classifiers typically resolve conflicts by employing a first-match resolution strategy where the decision for a packet p is the decision of the first (i.e., highest priority) rule that p matches in f.
  • the worst-case range expansion of a w-bit range results in a set containing 2w − 2 prefixes.
  • the next step is to compute the cross product of each set of prefixes for each field, resulting in a potentially large number of prefix rules.
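  • As a concrete illustration of range expansion, the following Python sketch (example values illustrative) converts a range into a minimal set of ternary prefixes using the standard greedy splitting, and then counts the cross-product expansion of a multi-field rule.

```python
from itertools import product

def range_to_prefixes(lo, hi, w):
    """Minimal set of w-bit ternary prefixes covering the integer range [lo, hi]."""
    prefixes = []
    while lo <= hi:
        size = 1  # largest aligned power-of-two block starting at lo that fits
        while lo % (size * 2) == 0 and lo + size * 2 - 1 <= hi:
            size *= 2
        k = size.bit_length() - 1                      # trailing don't-care bits
        prefixes.append(format(lo >> k, '0{}b'.format(w - k)) + '*' * k)
        lo += size
    return prefixes

# Worst case for a 4-bit field: [1, 14] needs 2w - 2 = 6 prefixes.
print(range_to_prefixes(1, 14, 4))  # ['0001', '001*', '01**', '10**', '110*', '1110']

# Cross-product expansion of a two-field rule multiplies the per-field counts.
rule = [range_to_prefixes(1, 14, 4), range_to_prefixes(1, 14, 4)]
print(len(list(product(*rule))))    # 6 x 6 = 36 prefix rules from one rule
```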
  • TCAM chips are allowed to be configured with a limited set of widths, which are typically 36, 72, 144, 288, and 576 bits though some are now 40, 80, 160, 320, and 640 bits. In traditional single-lookup approaches, TCAM chips are typically configured to be 144 bits wide because the standard five packet fields constitute 104 bits. The decision of each rule could be stored either in TCAM or its associated SRAM.
  • FIGS. 2A-2E illustrate the algorithm for constructing a single-field TCAM pipeline for a given classifier.
  • Four steps for constructing a pipeline are as follows: (1) FDD Construction: convert the classifier to its equivalent decision tree representation; (2) FDD Reduction: reduce the size of the decision tree; (3) Table Generation: treat each nonterminal node in the reduced decision tree as a 1-dimensional classifier and generate a TCAM table for this classifier; and (4) Table Concatenation: for each field, merge all the tables into one TCAM table, which will serve as one stage of the TCAM pipeline.
  • Classifiers can be represented using a special decision tree representation called a Firewall Decision Diagram.
  • a Firewall Decision Diagram (FDD) with a decision set DS and over fields F1, . . . , Fd is an acyclic and directed graph that has the following five properties. First, there is exactly one node that has no incoming edges. This node is called the root. The nodes that have no outgoing edges are called terminal nodes. Second, each node v has a label, denoted F(v), such that F(v) ∈ {F1, . . . , Fd} if v is a nonterminal node and F(v) ∈ DS if v is a terminal node.
  • each edge e:u→v is labeled with a nonempty set of integers, denoted I(e), where I(e) is a subset of the domain of u's label (i.e., I(e) ⊆ D(F(u))).
  • a directed path from the root to a terminal node is called a decision path.
  • a full-length ordered FDD is an FDD where in each decision path all fields appear exactly once and in the same order.
  • FIG. 2B shows the FDD constructed from the classifier in FIG. 2A , where a stands for accept, d stands for discard, and dl stands for discard with log.
  • An FDD construction algorithm, which converts a packet classifier to an equivalent full-length ordered FDD, is presented by A. X. Liu and M. G. Gouda in "Diverse Firewall Design", IEEE Transactions on Parallel and Distributed Systems (2008). While the following description makes reference to FDDs, it is readily understood that certain aspects of this disclosure are not limited to FDDs and may be implemented using other types of directed diagrams.
  • the second step is to reduce the size of the FDD.
  • the FDD is reduced by merging isomorphic subgraphs.
  • a full-length ordered FDD f is reduced if and only if it satisfies the following two conditions: (1) no two nodes in f are isomorphic; (2) no two nodes have more than one edge between them.
  • Two nodes v and v′ in an FDD are isomorphic if and only if v and v′ satisfy one of the following two conditions: (1) both v and v′ are terminal nodes with identical labels; (2) both v and v′ are nonterminal nodes and there is a one-to-one correspondence between the outgoing edges of v and the outgoing edges of v′ such that every pair of corresponding edges have identical labels and they both point to the same node.
  • Other types of reduction techniques are also contemplated by this disclosure.
  • FIG. 2C shows the resultant FDD reduced from the one in FIG. 2B .
  • an efficient signature-based FDD reduction algorithm is used.
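  • The reduction can be sketched as follows in Python (the node structure and function names are my own assumptions, not the patent's code): isomorphic subgraphs are merged bottom-up by interning each node under a canonical signature. A full reduction would additionally merge parallel edges to the same node into one edge labeled with the union of their value sets (condition (2) above).

```python
class Node:
    def __init__(self, label, edges=None):
        self.label = label        # field name for nonterminals, decision for terminals
        self.edges = edges or []  # list of (frozenset_of_values, child_node)

def reduce_fdd(root):
    """Merge isomorphic subgraphs by interning nodes on a canonical signature."""
    interned = {}  # signature -> representative node

    def visit(node):
        if not node.edges:                                  # terminal node
            sig = ('T', node.label)
        else:                                               # nonterminal node
            # Intern the children first, re-pointing edges at representatives.
            node.edges = [(vals, visit(child)[1]) for vals, child in node.edges]
            sig = ('N', node.label,
                   tuple(sorted((tuple(sorted(vals)), id(rep))
                                for vals, rep in node.edges)))
        return sig, interned.setdefault(sig, node)

    return visit(root)[1]

# Two structurally identical terminals collapse into one shared node.
root = Node('F1', [(frozenset({0, 1}), Node('accept')),
                   (frozenset({2, 3}), Node('accept'))])
root = reduce_fdd(root)
print(len({id(child) for _, child in root.edges}))  # -> 1
```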
  • Suppose the reduced FDD has n nonterminal nodes.
  • Since each nonterminal node v is complete with respect to its labeled field, we can view v as a one-dimensional packet classifier, and we construct an equivalent TCAM table Table(v) for each nonterminal node v as follows.
  • For each outgoing edge e of v pointing to node v′ and each prefix p in the label of e, generate a rule r as follows: the predicate of r is p; if v′ is a terminal node, the decision of r is the label of v′; if v′ is a nonterminal node, the decision of r is the ID of v′.
  • minimize the number of TCAM entries in Table(v) by using an optimal, polynomial-time algorithm for minimizing one-dimensional classifiers.
  • An exemplary algorithm is set forth by Suri et al. in "Compressing two-dimensional routing tables", Algorithmica, 35:287-300 (2003). Other types of algorithms may also be applied.
  • FIG. 2D shows the four minimal TCAM tables that correspond to the four nonterminal nodes in the FDD.
  • the final step is to merge all TCAM tables of field Fi into a single table. For every nonterminal node v with label Fi, prepend v's ID to the predicate of each rule in Table(v). Since each table ID provides a unique signature that distinguishes that table's entries from all other table entries, all tables of field Fi can be concatenated into a single table, as sketched below.
  • FIG. 2E illustrates the process of table concatenation.
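  • The concatenation step amounts to prepending each table's binary ID to its rules' predicates. A minimal Python sketch, assuming each rule is a (ternary predicate, decision) pair and using hypothetical tables:

```python
def concatenate_tables(tables):
    """Merge the per-node TCAM tables for one field into a single table by
    prepending each node's binary ID to the predicates of its rules."""
    id_width = max(1, (len(tables) - 1).bit_length())
    merged = []
    for node_id, rules in enumerate(tables):
        table_id = format(node_id, '0{}b'.format(id_width))
        merged.extend((table_id + predicate, decision)
                      for predicate, decision in rules)
    return merged

# Two hypothetical 3-bit single-field tables (cf. FIGS. 2D and 2E).
tables = [[('000', 'discard'), ('***', 'accept')],
          [('1**', 'accept'), ('***', 'discard')]]
for entry in concatenate_tables(tables):
    print(entry)  # ('0000', 'discard'), ('0***', 'accept'), ('11**', ...), ...
```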
  • The result is d TCAM tables corresponding to the d fields.
  • These d tables can be loaded (or instantiated) into d TCAM chips, which can be chained together into a d-stage pipeline; correspondingly, a d-dimensional packet lookup is SPliT into d lookups.
  • the lookup result of the i-th chip forms part of the search key for the (i+1)-st chip, and the result of the last chip is the decision for the packet.
  • d packets can be processed in parallel in the pipeline.
  • FIG. 3 illustrates the packet lookup process for the two tables t 1 and t 2 in FIG. 2E .
  • two packets (010, 001) and (111, 010) arrive one after the other.
  • the first search key, 010, is sent to table t1.
  • by the next cycle, table t1 has sent the search result 01 to table t2.
  • in that cycle, the first search key 111 for the second packet and the second search key 01001 for the first packet (the result 01 concatenated with the second field value 001) are formed in parallel, and the two keys are sent to tables t1 and t2, respectively.
  • This cycle will yield a result of accept for the first packet and a result of 10 for the second packet.
  • the above process continues for every packet.
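  • A minimal Python sketch of this two-stage lookup flow follows (table contents are hypothetical, modeled on the FIG. 2E example; a real pipeline would also overlap the two packets' lookups in time):

```python
def tcam_lookup_decision(entries, key):
    """First-match ternary lookup returning the stored decision."""
    for entry, decision in entries:
        if all(e in ('*', k) for e, k in zip(entry, key)):
            return decision
    raise LookupError('classifier is not complete')

def pipeline_classify(t1, t2, packet):
    """Two-stage SPliT lookup: the first table's result becomes the
    high-order bits of the second table's search key."""
    f1, f2 = packet
    result1 = tcam_lookup_decision(t1, f1)          # e.g. '01' = next-table ID
    return tcam_lookup_decision(t2, result1 + f2)   # e.g. key '01001'

# Hypothetical tables: t1 maps field F1 to a 2-bit result; t2 matches that
# result concatenated with field F2.
t1 = [('000', '00'), ('***', '01')]
t2 = [('00***', 'discard'), ('01001', 'accept'), ('*****', 'discard')]
print(pipeline_classify(t1, t2, ('010', '001')))    # -> accept
```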
  • a packet classifier defined by a set of rules for packet classification serves as the starting point.
  • the set of rules is first represented at 44 as a directed graph.
  • the graph is then partitioned at 46 into at least two partitions: a top partition and a bottom partition.
  • the graph is partitioned by horizontally cutting an FDD into k pieces; that is, divide the d fields into k partitions. For example, partition a 5-field FDD into two partitions as shown in FIG. 5 .
  • the top partition is a single, smaller dimensional FDD or sub-graph and consists of one 3-dimensional sub-FDD over fields F 1 , F 2 , and F 3 .
  • the bottom partition contains multiple sub-FDDs and consists of eight 2-dimensional sub-FDDs over fields F4 and F5. It is readily understood that the graph may be partitioned into more than two partitions so long as k is less than d.
  • Define a k-split to be a partition of a sequence of FDD fields into k in-order subsequences.
  • two valid 2-splits of the sequence of FDD fields F 1 , . . . , F 5 are (F 1 ,F 2 ,F 3 ),(F 4 ,F 5 ) and (F 1 ,F 2 ),(F 3 ,F 4 ,F 5 ).
  • limit the split tables to a reduced entry width. For example, limit the width of the split tables to half or a quarter of the original entry width to save TCAM space. That is, with 2 multi-field TCAM tables, each table will be 72 bits wide, while the original table was 144 bits wide. These shorter table widths limit the number of possible splits for a given FDD considerably, which enables examining each valid split to find the split that produces the smallest tables. Define a b-valid k-split to be a k-split such that all the tables generated by the k-split fit into entries that are at most b bits wide.
  • The inputs to the split-selection procedure are an FDD F with n fields F1, . . . , Fn, where w(Fi) is the number of bits in Fi, a bit bound b, and a table bound k; a sketch of enumerating b-valid k-splits follows.
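  • A minimal Python sketch of enumerating b-valid k-splits by brute force over cut positions (per-stage table ID bits are ignored here for simplicity):

```python
from itertools import combinations

def b_valid_k_splits(widths, b, k):
    """All ways to cut the field-width sequence into k in-order groups
    whose total bit widths each fit within the entry-width bound b."""
    n = len(widths)
    splits = []
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        groups = [widths[bounds[i]:bounds[i + 1]] for i in range(k)]
        if all(sum(group) <= b for group in groups):
            splits.append(groups)
    return splits

# Standard 5-tuple widths with 72-bit entries and a 2-stage pipeline:
for split in b_valid_k_splits([32, 32, 16, 16, 8], b=72, k=2):
    print(split)  # [[32], [32, 16, 16, 8]] and [[32, 32], [16, 16, 8]]
```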
  • a lookup table is generated at 48 from the sub-graphs in each of the partitions.
  • a lookup table is generated from the sub-tree in the top partition and multiple lookup tables are generated from the bottom partition, where a lookup table is generated for each sub-tree in the bottom partition.
  • the lookup table from the top partition is linked with the lookup tables in the bottom partition by uniquely assigned table identifiers.
  • the procedure for generating k multi-field TCAM tables is similar to the procedure for generating d single-field TCAM tables. First, rather than generating a single-field TCAM table from each nonterminal node, a multi-field TCAM table is generated from each sub-FDD.
  • the lookup tables are instantiated at 49 in content-addressable memory devices.
  • the lookup table from the top partition is instantiated on a first content-addressable memory device, whereas the lookup tables from the bottom partition are instantiated on a second content-addressable memory device.
  • the lookup tables could be instantiated in random access memory or another type of memory.
  • Packet processing in multi-field TCAM SPliT is similar to that in single-field TCAM SPliT except that a multi-field lookup is performed at each stage. This approach allows the number of TCAM chips and lookup stages to be reduced to any number less than d.
  • the decision for some packets may be completely determined by the first lookup.
  • an extra bit can be used to denote that the next lookup can be eliminated (or short circuited) and immediately return the decision.
  • the first lookup can never be eliminated unless it is a trivial classifier.
  • Packet classification rules periodically need to be updated.
  • the common practice for updating rules is to run two TCAMs in tandem where one TCAM is used while the other is updated.
  • TCAM SPliT is compatible with this current practice with a slight modification.
  • this scheme can be implemented using two pairs of small TCAM chips. However, it can also be implemented using only two small TCAM chips as long as the pipeline can be configured in either order.
  • the two chip update solution would be to write the new classification rules into the unused portion of each TCAM chip.
  • reversing the order of the chips in the pipeline for the next update is suggested.
  • An optimization technique referred to as TCAM packing is presented below. This optimization further reduces TCAM space by allowing multiple rules from different TCAM tables to co-reside in the same TCAM entry.
  • There are two natural ways to view TCAM packing. The first is to view columns as the primary dimension of TCAM chips and pack tables into fixed width columns. The second is to view rows as the primary dimension of TCAM chips and pack tables within fixed height rows. If we take the column-based approach and assume all tables have the same width, TCAM packing becomes a generalization of makespan scheduling on multiple identical machines. If we take the row-based approach and assume all tables have the same height, TCAM packing becomes a generalization of one-dimensional bin packing. Since both problems are NP-complete, TCAM packing is NP-hard.
  • TCAM packing seems most similar to tiling problems where we try to fit 2-dimensional rectangles into a given geometric area.
  • the main additional difficulty for TCAM packing is that the width of the tables is not fixed as part of the input because we must determine how many ID bits must be associated with each table.
  • table ID bits between tables in different columns cannot be shared; that is, while two adjacent tables may have the same table ID, each will have its own table ID bits.
  • the number of bits used for table IDs grows essentially linearly with the number of columns.
  • horizontally aligned tables in the same row can potentially share some “row ID” bits in their table IDs; these tables would be distinguished by their horizontal offsets.
  • table t0 shadows tables t00 and t01.
  • a set of rectangles (tables) is shadow packed if their packing obeys the Top-Left Property and any rectangle not on the boundary is in the shadow of its left neighbor. For example, the rectangles in FIG. 6A are shadow packed.
  • An efficient algorithm, SPack, that produces shadow packed tables is also presented.
  • a crucial property of shadow packed tables is that they facilitate the sharing of table ID bits among tables that span the same horizontal range.
  • We use VBegin(t) and VEnd(t) to denote the vertical indices of the TCAM entries where table t begins and ends, respectively, and HBegin(t) and HEnd(t) to denote the horizontal indices of the TCAM bit columns where table t begins and ends, respectively.
  • each ti (1 ≤ i ≤ m) can use t's ID to distinguish ti from tables outside [VBegin(t), VEnd(t)] vertically, and use ⌈log m⌉ additional bits to distinguish ti from tables inside [VBegin(t), VEnd(t)] vertically.
  • table t and each table ti can be distinguished by properly setting the global mask register (GMR) of the TCAM.
  • The shadow packing optimization problem becomes more difficult as we recurse because we must also address which tables should be allocated to which region. A greedy shadow packing algorithm, SPack, addresses this.
  • SPack Given a set of tables S and a TCAM region, SPack first finds the tallest table t that will fit in the region where ties are broken by choosing the fattest table. SPack returns when there are no such tables. Otherwise, SPack places t in the top left corner of the region, and SPack is recursively applied to S ⁇ t ⁇ in the region to the right of t. After that, let S′ be the set of tables in S that have not yet been allocated. SPack is applied to S′ in the region below t. Intuitively, SPack greedily packs the tallest (and fattest) possible table horizontally.
  • the pseudocode of SPack is provided as follows:
  • Input: S, a set of tables, and a region [v1, v2], [h1, h2].
  • Output: S′, the set of tables in S that have not been packed.
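  • A Python sketch of SPack, reconstructed from the description above, follows (the (name, height, width) table representation and the region bookkeeping are my own assumptions):

```python
def spack(tables, v1, v2, h1, h2, placements):
    """Greedy shadow packing: place the tallest (ties broken by fattest) table
    that fits at the top-left of the region, recurse to the right of it
    (within its shadow), then recurse below it with the remaining tables.
    Each table is (name, height, width); returns the tables left unpacked."""
    fitting = [t for t in tables
               if t[1] <= v2 - v1 + 1 and t[2] <= h2 - h1 + 1]
    if not fitting:
        return tables
    t = max(fitting, key=lambda u: (u[1], u[2]))
    placements.append((t[0], v1, h1))                 # top-left corner of t
    rest = [u for u in tables if u is not t]
    rest = spack(rest, v1, v1 + t[1] - 1, h1 + t[2], h2, placements)
    return spack(rest, v1 + t[1], v2, h1, h2, placements)

placements = []
unpacked = spack([('A', 4, 6), ('B', 2, 3), ('C', 2, 2), ('D', 1, 8)],
                 0, 9, 0, 9, placements)
print(placements)  # A at (0,0); B, C packed in A's shadow; D in the row below
```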
  • the height of the initial region is the total number of rules within the set of tables. We do not need to set this value carefully because SPack only moves to another row when all the remaining tables do not fit in any of the current shadows.
  • the width is more complicated and must be computed iteratively. For each valid TCAM width w ∈ {36, 72, 144, 288}, we set the initial width to be w and run SPack. Once we have a packing, we determine the number of bits b that are needed for node IDs. If the packing can accommodate these extra b bits, we are done. Otherwise, we choose an aggressive backoff scheme by recursing with a width of w − b.
  • each table needs a new ID, and all the rule decisions need to be remapped to reflect these new IDs.
  • Each table ID is determined by a tree representation similar to the one found in FIG. 6B, which we call a shadow packing tree. For each node v in a shadow packing tree, if v has m>1 outgoing edges, each outgoing edge is uniquely labeled using ⌈log m⌉ bits; if v has only one outgoing edge, that edge is labeled *. For each table t, let v be the corresponding node in the shadow packing tree; t's table ID is the concatenation of the edge labels on the path from the root to v.
  • FIG. 7A shows the shadow packing tree for the four tables in FIG. 2D and their reassigned table IDs.
  • FIG. 7B shows the final memory layout in the TCAM chip after shadow packing and the conceptual memory layout of the decision table within SRAM.
  • the one bit ID column in FIG. 7B is needed to distinguish between the tables with original IDs 01 and 11.
  • table 10 shares the table ID 0 with table 01 as it is the only table in table 01's shadow.
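  • A minimal Python sketch of deriving table IDs from a shadow packing tree per the rule above (the dict-of-dicts tree encoding is my own assumption): edges out of a node with m>1 children get ⌈log2 m⌉-bit labels, a lone edge is labeled *, and a table's ID is the concatenated path from the root.

```python
import math

def assign_ids(tree, path=''):
    """tree: {table_name: subtree_dict}; returns {table_name: ternary table ID}."""
    ids = {}
    m = len(tree)
    bits = math.ceil(math.log2(m)) if m > 1 else 0
    for i, (name, subtree) in enumerate(tree.items()):
        label = '*' if m == 1 else format(i, '0{}b'.format(bits))
        ids[name] = path + label
        ids.update(assign_ids(subtree, path + label))
    return ids

# Hypothetical shadowing relationship (cf. FIG. 6B): t0 shadows t00 and t01.
print(assign_ids({'t0': {'t00': {}, 't01': {}}}))
# -> {'t0': '*', 't00': '*0', 't01': '*1'}
```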
  • the algorithm for processing packets under the shadow packing approach is described using examples.
  • the first TCAM lookup key is *000******, and the lookup result is the index value of 0.
  • This index value is used to find entry 0 in the column 00 in the SRAM which contains the decision of 0@4:01.
  • the 0@4 means that the second lookup key should occur in table ID 0 at horizontal offset of 4, and the 01 means that the decision of the next search is located in column 01 in SRAM.
  • the GMR is modified to make the second lookup key 0***111***.
  • the result of the second lookup is the index value of 1, and the decision stored in the second entry of column 01 in SRAM is retrieved, which is accept.
  • The impact TCAM SPliT has on the space, power, latency, and throughput of TCAM-based packet classification systems is evaluated.
  • TCAM SPliT is evaluated with a 2-stage pipeline, and its performance is compared with that of an exemplary state-of-the-art compression technique described by Meiners et al. in "TCAM Razor: A systematic approach towards minimizing packet classifiers in TCAMs", In Proc. 15th IEEE Conference on Network Protocols (October 2007), referred to herein as "TCAM Razor". This comparison allows us to assess how much benefit is gained by going from one TCAM lookup to two TCAM lookups. The net result of the experiments is that TCAM SPliT allows us to significantly reduce required TCAM space and power consumption while significantly increasing packet classification throughput with only a small latency penalty.
  • RL denotes the set of real-world packet classifiers used in the experiments.
  • the classifiers in RL were chosen from a larger set of real-world classifiers obtained from various network service providers, where the classifiers range in size from a handful of rules to thousands of rules. The original classifiers were partitioned into 25 groups where the classifiers in each group share similar structure. For example, the ACLs configured for the different interfaces of a router often share a similar structure. RL was created by randomly choosing one classifier from each of the 25 groups. We did this because classifiers with similar structure often exhibit similar results for TCAM SPliT. If all our classifiers were used, results would be skewed by the relative size of each group.
  • a set of classifiers RL_U is created by replacing the decision of every rule in each classifier with a unique decision.
  • In the same way, the set SYN_U was created from the set SYN of synthetic classifiers.
  • each classifier in RL_U (or SYN_U) has the maximum possible number of distinct decisions.
  • Such classifiers might arise in the context of rule logging where the system monitors the frequency that each rule is the first matching rule for a packet.
  • Table 3 below shows the average and total compression ratios for TCAM SPliT and TCAM Razor on RL, RL_U, SYN, and SYN_U.
  • FIGS. 8A, 8B, 8C, and 8D show the compression ratios for each of the classifiers in RL and RL_U.
  • TCAM SPliT achieves significant space compression and significantly outperforms TCAM Razor.
  • the average compression ratio of TCAM SPliT on RL is 8.0%, which is three times better than the average compression ratio of 24.5% achieved by TCAM Razor on the same data set.
  • TCAM SPliT is able to find compression opportunities even when TCAM Razor cannot achieve any appreciable compression.
  • TCAM Razor is unable to compress classifiers 18 and 22 in FIG. 8B whereas TCAM SPliT is able to compress these two classifiers. This illustrates how TCAM SPliT is able to eliminate some of the multiplicative effect that is unavoidable in single-lookup schemes.
  • TCAM SPliT is very effective for classifiers with a large number of distinct decisions. For example, on RL_U, TCAM SPliT achieves an average compression ratio of 16.0%, which is roughly twice as good as TCAM Razor's average compression ratio of 31.9%. Note, there are a few cases in the RL_U data set where TCAM SPliT's compression is worse than TCAM Razor's compression. In such cases, we default to using TCAM Razor and a single lookup.
  • TCAM SPliT and TCAM Razor are analyzed using the TCAM power model presented by Agrawal and Sherwood in "Modeling TCAM power for next generation network devices", In Proc. IEEE Int. Symposium on Performance Analysis of Systems and Software (2006).
  • the power, latency, and throughput models are the only publicly available models that we know of for analyzing TCAM-based packet classification schemes.
  • Table 4 shows the average power ratios for TCAM SPliT and TCAM Razor on RL, RL_U, SYN, and SYN_U.
  • these sets only provide power ratio data for small classifiers that fit on TCAM chips that are smaller than 1 Mbit.
  • To extrapolate the power ratio to larger classifiers we consider theoretical classifiers whose direct expansion fits exactly within standard TCAM chip sizes ranging from 1 Mbit to 36 Mbit.
  • When TCAM SPliT and TCAM Razor are applied to these classifiers, the resulting compression ratios are assumed to be 8.0% and 24.5%, respectively.
  • we use Agrawal and Sherwood's TCAM power model to calculate the power consumed by each search for each of these classifiers and their compressed versions. The extrapolated data are included in FIG. 9A and Table 4.
  • TCAM SPliT uses two TCAM chips and each chip runs at a higher frequency than the single TCAM chip in single-lookup schemes
  • TCAM SPliT still achieves significant power savings because of its huge space savings.
  • TCAM SPliT reduces energy consumption per lookup at least 33% on all data sets.
  • the power savings of TCAM SPliT continues to grow as classifier size increases. For the largest classifier size, TCAM SPliT achieves a power ratio of 18.8%.
  • The reason TCAM SPliT works so well is twofold: TCAM chip energy consumption is reduced if we reduce the number of rows in a TCAM chip, and if we reduce the width of a TCAM chip.
  • TCAM SPliT reduces the width of a TCAM chip by a factor of 2 (from 144 to 72), and it reduces the number of rows by a significant amount as well. Even more energy could be saved if we continued to run the smaller TCAM chips at a lower frequency.
  • TCAM SPliT significantly outperforms TCAM Razor in reducing energy consumption. For every data set, TCAM SPliT uses roughly two thirds of the power that TCAM Razor uses. For example, on RL, TCAM SPliT reduces energy per packet by 37.9% which is significantly more than TCAM Razor's 9.1% reduction.
  • Latency is analyzed using the TCAM lookup latency model presented by Agrawal and Sherwood.
  • Let L(A(C)) represent the number of nanoseconds required to perform one search on a TCAM with size equal to the number of rules in A(C).
  • For TCAM SPliT, let L(A(C)) represent the number of nanoseconds required to perform both searches on the 2-stage pipeline.
  • Table 4 also shows the average latency ratios for TCAM SPliT and TCAM Razor on RL, RL_U, SYN, and SYN_U. Using the same methodology as we did for power, we extrapolate our results to larger classifiers. The extrapolated data are included in FIG. 9B and Table 4.
  • Although TCAM SPliT needs to perform two TCAM lookups for each packet, its total lookup time for each packet is significantly less than double that of single lookup direct expansion. The reason is that the lookup time of a TCAM chip increases as its size increases. Since TCAM SPliT can use two small TCAM chips, its latency is significantly less than double that of single lookup direct expansion. Second, for smaller classifiers, TCAM SPliT's latency is also much less than double that of TCAM Razor. However, for large packet classifiers, its latency is basically double that of TCAM Razor. Given that packet classification systems typically measure speed in terms of throughput rather than latency, the small latency penalty of TCAM SPliT is relatively unimportant if we can significantly improve packet classification throughput.
  • The packet classification throughput of TCAM SPliT and TCAM Razor is also analyzed using the TCAM throughput model presented by Agrawal and Sherwood.
  • Let T(A(C)) represent the number of lookups per second for a TCAM of size |A(C)|.
  • For TCAM SPliT, let T(A(C)) be the minimum throughput of either stage in the 2-stage pipeline.
  • Table 4 shows the average throughput ratios for TCAM SPliT on RL, RL_U, SYN, and SYN_U. Using the same methodology we did for power and latency, we extrapolate our results to larger classifiers. The extrapolated data are included in FIG. 9C and Table 4.
  • TCAM SPliT significantly increases throughput for classifiers of all sizes. The typical improvement is in the 60%-80% range. For an extremely large classifier whose direct expansion requires a 36 Mbit TCAM, TCAM SPliT improves throughput by 155.8%. Second, for smaller classifiers, TCAM SPliT outperforms even TCAM Razor. For example, on the RL data set, TCAM SPliT improves throughput by roughly 60% when compared to TCAM Razor. For large classifiers, TCAM SPliT and TCAM Razor achieve essentially the same throughput.
  • the TCAM SPliT algorithm was implemented on the Microsoft .NET Framework 2.0, and the experiments were carried out on a desktop PC running Windows XP with 8 GB of memory and a single 2.81 GHz AMD Athlon 64 X2 5400+. All algorithms used a single processor core.
  • a topological view of the TCAM encoding process is proposed, where the semantics of the packet classifier is considered.
  • Viewed this way, many coordinates (i.e., values) in a field's domain are redundant because they need not be distinguished by the classifier.
  • the idea of domain compression is to reencode the domain so as to eliminate as many redundant coordinates as possible. This leads to both rule width and rule number compression. From a geometric perspective, domain compression "squeezes" a colored hyperrectangle as much as possible. For example, consider the colored rectangle 102 in FIG. 10 that represents the classifier rules below the rectangle.
  • a topological transformation process produces two separate components. The first component is a set of d transformers T1, . . . , Td, where transformer Ti reencodes values of field Fi.
  • the second component is a transformed d-dimensional classifier C′ over a packet space Σ′ such that for any packet (p1, . . . , pd) ∈ Σ, the following condition holds: C(p1, . . . , pd) = C′(T1(p1), . . . , Td(pd)).
  • The TCAM space needed by our transformation approach is measured by the total TCAM space needed by the d+1 tables: C′, T1, . . . , Td.
  • Although TCAMs can be configured with varying widths, they do not allow arbitrary widths.
  • the width of a TCAM can be set at different values such as 36, 72, 144, 288, 576 or 40, 80, 160, and 320 bits (per entry). For this section, we assume the allowable widths are 40, 80, 160, and 320 bits.
  • the primary goal of the transformation approach is to produce C′, T1, . . . , Td such that the TCAM space needed by these d+1 TCAM tables is much smaller than the TCAM space needed by the original classifier C.
  • Most previous reencoding approaches ignore the space required by the transformers and only focus on the space required by the transformed classifier C′. Note that we can implement the table for the protocol field using SRAM if desired since the field has only 8 bits.
  • There are two natural architectures for storing the d+1 TCAM tables C′, T1, . . . , Td: the multi-lookup architecture and the pipelined-lookup architecture.
  • table IDs 00, 01, and 10 for the three tables C′, T 1 , and T 2 , respectively, are used.
  • the main advantage of the multi-lookup architecture is that it can be easily deployed since it requires minimal modification of existing TCAM-based packet processing systems. Its main drawback is a modest slowdown in packet processing throughput because d+1 TCAM searches are required to process a d-dimensional packet. In contrast, the main advantage of the two pipelined-lookup architectures is high packet processing throughput. Their main drawback is that the hardware needs to be modified to accommodate d+1 TCAM chips (or d chips if SRAM is used for the protocol field). A performance modeling analysis of the parallel pipelined lookup and multi-lookup architectures is presented below.
  • the domain compression algorithm generally comprises three steps: (1) computing equivalence classes, (2) constructing a transformer Ti for each field Fi, and (3) constructing the transformed classifier C′.
  • the first step of our domain compression algorithm is to convert a given d-dimensional packet classifier C to d equivalent reduced FDDs f 1 through f d where the root of FDD f i is labeled by field F i .
  • FIG. 14A shows an example packet classifier over two fields F 1 and F 2 where the domain of each field is [0,63].
  • FIGS. 14B and 14C show the two FDDs f 1 and f 2 , respectively.
  • the FDDs f 1 and f 2 are almost reduced except that the terminal nodes are not merged together for illustration purposes.
  • each edge out of reduced FDD fi's root node corresponds to one equivalence class of domain D(Fi).
  • For example, consider the classifier in FIG. 14A and the corresponding FDD f1 in FIG. 14B.
  • Given a packet classifier C over fields F1, . . . , Fd and the d equivalent reduced FDDs f1, . . . , fd where the root node of fi is labeled Fi, we compute transformer Ti as follows. Let v be the root of fi with m outgoing edges e1, . . . , em. First, for each edge ej out of v, we choose one of the ranges in ej's label to be a representative label, which we call the landmark.
  • a transformed classifier C′ is constructed from classifier C using transformers Ti for 1 ≤ i ≤ d as follows. Let F1 ∈ S1 ∧ . . . ∧ Fd ∈ Sd → (decision) be an original rule in C.
  • the domain compression algorithm converts Fi ∈ Si to Fi′ ∈ Si′ such that for any landmark range Lj (0 ≤ j ≤ m−1), Lj ∩ Si ≠ ∅ if and only if j ∈ Si′.
  • Specifically, we replace range Si with the range [a, b] ⊆ D′(Fi), where a is the smallest number in [0, m−1] such that La ∩ Si ≠ ∅ and b is the largest number in [0, m−1] such that Lb ∩ Si ≠ ∅.
  • If no landmark range intersects Si, then a and b are undefined and Si′ = ∅.
  • the five landmarks are the five grayed intervals in FIG. 14B, namely [0,0], [1,6], [7,11], [12,15], and [63,63].
  • [7,60] overlaps with [7,11] and [12,15], which are mapped to 2 and 3 respectively by transformer T 1 .
  • F 1 ⁇ [7,60] is converted to F 1 ′ ⁇ [2,3].
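  • A minimal Python sketch of this conversion, assuming each field's landmarks are given as an indexed list of (lo, hi) intervals (the values below come from the FIG. 14 example):

```python
def compress_range(si, landmarks):
    """Map an original range si = (lo, hi) to the compressed-domain range
    [a, b], where a and b are the smallest and largest landmark indices whose
    intervals intersect si; returns None (empty match) if none intersects."""
    lo, hi = si
    hits = [j for j, (llo, lhi) in enumerate(landmarks)
            if llo <= hi and lo <= lhi]         # interval intersection test
    return (hits[0], hits[-1]) if hits else None

# Landmarks of transformer T1 from FIG. 14B.
landmarks_f1 = [(0, 0), (1, 6), (7, 11), (12, 15), (63, 63)]
print(compress_range((7, 60), landmarks_f1))    # -> (2, 3), i.e. F1' in [2, 3]
```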
  • Suppose packet p′ matches some rule r1′ ∈ C′ that occurs before rule r′. This implies, for each conjunction Fi ∈ Si of the corresponding rule r1 ∈ C, that L(pi) ∩ Si ≠ ∅. However, this implies that xi ∈ Si, since if any point in L(pi) is in Si, then all points in L(pi) are in Si. It follows that x matches rule r1 ∈ C, contradicting our assumption that rule r was the first rule that x matches in C. Thus, p′ cannot match rule r1′. It then follows that r′ will be the first rule in C′ that p′ matches, and the theorem follows.
  • FIG. 15 further illustrates this domain compression technique for constructing a packet classifier.
  • a firewall decision diagram is first constructed at 152 for each field defined in the set of rules, where a root node in each of the firewall decision diagrams corresponds to a different field.
  • Each of these firewall decision diagrams is reduced in size at 154 by merging isomorphic subgraphs therein.
  • one label is selected at 156 for each of the outgoing edges from the root node in each of the firewall decision diagrams to be a representative label.
  • a first stage classifier is constructed at 158, where values for a given field of a data packet map to a result which corresponds to an input of a second stage classifier.
  • a second stage classifier is then constructed at 159 from the set of rules based on overlap of a given rule in the set of rules with a representative label from the reduced firewall decision diagrams. More specifically, the second stage classifier is constructed by comparing each field for a given rule in the set of rules to corresponding representative labels for the field and creating a mapping from the results from the first stage classifier to the decision associated with the given rule when a field in the given rule overlaps with corresponding representative labels for the field.
  • the first and second stage classifiers are preferably instantiated on different content-addressable memory devices but may also be instantiated on the same memory device.
  • In prefix alignment, we "shift", "shrink", or "stretch" ranges by transforming the domain of each field to a new "prefix-friendly" domain so that the majority of the reencoded ranges either are prefixes or can be expressed by a small number of prefixes. This reduces the costs of range expansion and leads to rule number compression with a potentially small loss in rule width compression.
  • D′(F 1 ) has a total of 4 elements
  • the classifier C′ in FIG. 16A shows the three transformed ranges using the first family of solutions. In both examples, the range expansion of the transformed ranges only has 4 prefix rules while the range expansion of the original ranges has 7 prefix rules.
  • the x 2 cut point partitions [0, 15] into [0, x 2 ], which transforms to prefix 0*, and [x 2 +1, 15], which transforms to prefix 1*.
  • suppose x2 = 4; that is, we choose the dashed line in FIG. 16A.
  • ranges [5,15] and [0, 15] are trimmed to [5,15] while range [0,12] is trimmed to [5,12].
  • the choice of x1 is immaterial since both trimmed ranges span the entire restricted domain.
  • This tree also encodes the transformation from the original domain to the target domain: all the values in a terminal node are mapped to the prefix represented by the path from the root to the terminal node. For example, as the path from the root to the terminal node of [0,4] is 0, all values in [0,4] ⁇ D(F 1 ) are transformed to 0*.
  • For prefix alignment, we consider transformers that map points in D(Fi) to prefix ranges in D′(Fi). If this is confusing, we can also work with transformers that map points in D(Fi) to points in D′(Fi) with no change in results; however, transformers that map to prefixes more accurately represent the idea of prefix alignment than transformers that map to points. Because we will perform range expansion on C′ before performing any further optimizations, including redundancy removal, we can ignore rule order. We can then view a one-dimensional classifier C as a multiset of ranges S in D(F1).
  • We show that prefix alignment preserves the semantics of the original classifier by first defining the concept of prefix transformers and then showing that prefix alignment must be correct when prefix transformers are used.
  • a transformer T i is an order-preserving prefix transformer from D(F i ) to D′(F i ) for a packet classifier C if T i satisfies the following three properties.
  • Lemma 6.1 and Theorem 6.1 easily follow from the definition of prefix transformers.
  • Lemma 6.1: Given any prefix transformer Ti for a field Fi, for any a, b, x ∈ D(Fi), x ∈ [a, b] if and only if Ti(x) ⊆ [min Ti(a), max Ti(b)].
  • range [0,12] creates S-start point 13 and S-end point 12
  • range [5,15] creates S-end point 4 and S-start point 5
  • range [0, 15] creates no S-start points or S-end points.
  • 0 is an S-start point
  • 15 is an S-end point.
  • For every 1 ≤ x ≤ y ≤ m, we define a prefix alignment problem PA(x,y,b) where the problem is to find a prefix transformer T1 for [sx, ey] ⊆ D(F1) such that the range expansion of (S@[x,y])′ has the smallest possible number of prefix rules and the transformed domain D′(F1) can be encoded in b bits.
  • We use cost(x,y,b) to denote the number of prefix rules in the range expansion of the optimal (S@[x,y])′.
  • the original prefix alignment problem then corresponds to PA(1,m,b) where b can be arbitrarily large.
  • the prefix alignment problem obeys the optimal substructure property. For example, consider PA(1,m,b). As we employ the divide and conquer strategy to locate a middle cut point that will establish what the prefixes 0{*}^(b−1) and 1{*}^(b−1) correspond to, there are m−1 choices of cut points to consider: namely e1 through e_(m−1). Suppose the optimal cut point is ek where 1 ≤ k ≤ m−1. Then the optimal solution to PA(1,m,b) will build upon the optimal solutions to sub-problems PA(1,k,b−1) and PA(k+1,m,b−1).
  • the optimal transformer for PA(1,m,b) will simply append a 0 to the start of all prefixes in the optimal transformer for PA(1,k,b−1) and a 1 to the start of all prefixes in the optimal transformer for PA(k+1,m,b−1).
  • Thus, cost(1,m,b) = cost(1,k,b−1) + cost(k+1,m,b−1) − |S@[1,m]|.
  • We subtract |S@[1,m]| in the above cost equation because ranges that include all of [s1, em] are counted twice, once in cost(1,k,b−1) and once in cost(k+1,m,b−1).
  • Theorem 6.2 shows how to compute the optimal cuts and binary cut tree. As stated earlier, the optimal prefix transformer T 1 can then be computed from the binary cut tree.
  • cost(l, r, b) = min_{k ∈ {l, . . . , r−1}} ( cost(l,k,b−1) + cost(k+1,r,b−1) − |S@[l,r]| )
  • We set the base case cost(k,k,0) to |S@[k,k]| for the convenience of the recursive case.
  • the interpretation is that with a 0-bit domain, we can allow only a single value in D′(F 1 ); this single value is sufficient to encode the transformation of an atomic interval.
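  • The recurrence translates directly into a memoized search. The following Python sketch computes the optimal cost only (recovering the cut tree and transformer is omitted); the atomic intervals and the |S@[l,r]| count follow the definitions above, and the base-case value |S@[k,k]| is inferred from the surrounding text.

```python
from functools import lru_cache

def prefix_alignment_cost(ranges, domain_hi, b):
    """ranges: multiset of (lo, hi) over [0, domain_hi]; b: bits for D'(F1).
    Returns the minimum number of prefix rules after optimal prefix alignment."""
    # Atomic intervals [s_1,e_1],...,[s_m,e_m] from the S-start/S-end points.
    starts = ({0} | {hi + 1 for _, hi in ranges if hi < domain_hi}
                  | {lo for lo, _ in ranges if lo > 0})
    ends = ({domain_hi} | {lo - 1 for lo, _ in ranges if lo > 0}
                        | {hi for _, hi in ranges if hi < domain_hi})
    s, e = sorted(starts), sorted(ends)
    m = len(s)

    def spanned(l, r):
        """|S@[l,r]|: how many ranges completely contain [s_l, e_r]."""
        return sum(1 for lo, hi in ranges if lo <= s[l] and e[r] <= hi)

    @lru_cache(maxsize=None)
    def cost(l, r, bits):
        if l == r:
            return spanned(l, l)   # one prefix per range covering this interval
        if bits == 0:
            return float('inf')    # cannot cut without one more bit
        return min(cost(l, k, bits - 1) + cost(k + 1, r, bits - 1) - spanned(l, r)
                   for k in range(l, r))

    return cost(0, m - 1, b)

# Ranges from the FIG. 16 example over [0, 15]: 7 prefixes before alignment,
# 4 after aligning to a 2-bit target domain.
print(prefix_alignment_cost([(0, 12), (5, 15), (0, 15)], 15, 2))  # -> 4
```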
  • the prefix alignment technique can be generalized as follows.
  • Candidate cut points for a domain of field value are first identified at 172 , where candidate cut points correspond to starting or ending points in the ranges of values for the given field.
  • a number of bits to be used to encode the domain in a binary format is selected at 174 .
  • the domain is recursively divided 176 along the candidate cut points using dynamic programming and mapping each range of values to a result that is represented in a binary format using the selected number of bits.
  • the number of bits is incremented by one, and each of the above steps is repeated until the mapping of the range values has been optimized.
  • Multi-dimensional prefix alignment is now considered. Unfortunately, while we can optimally solve the one-dimensional problem, there are complex interactions between the dimensions that complicate the multi-dimensional problem. In particular, the total range expansion required for each rule is the product of the range expansion required for each field. Thus, there may be complex tradeoffs where we sacrifice one field of a rule but align another field so that the costs do not multiply. The complexity of the multi-dimensional prefix alignment problem is currently unknown. A hill-climbing solution is presented where we iteratively apply our one-dimensional prefix alignment algorithm one field at a time. Because the range expansion of one field affects the numbers of ranges that appear in the other fields, we run prefix alignment for each field more than once. We stop when running prefix alignment in each field fails to improve the solution.
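  • A minimal sketch of this hill-climbing loop, treating the one-dimensional solver as a black-box parameter (align_field is assumed to return the classifier with field i realigned, together with its total prefix-rule count):

```python
def multi_field_prefix_alignment(classifier, num_fields, align_field):
    """Iteratively realign one field at a time, keeping any realignment that
    lowers the total range expansion, until no field yields an improvement."""
    best_cost = float('inf')
    improved = True
    while improved:
        improved = False
        for i in range(num_fields):
            candidate, cost = align_field(classifier, i)
            if cost < best_cost:
                classifier, best_cost = candidate, cost
                improved = True
    return classifier
```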
  • Although domain compression and prefix alignment can be used individually, they can be easily combined to achieve superior compression.
  • Given a classifier C over fields F1, . . . , Fd, we first perform domain compression, resulting in a transformed classifier C′ and d transformers; then, we perform prefix alignment on the classifier C′, resulting in a transformed classifier C″ and d more transformers. To combine the two transformation processes into one, we merge each pair of transformers into one transformer Ti for 1 ≤ i ≤ d.
  • For this, an optimal algorithm as described by Suri et al. may be used.
  • TCAMs are commonly configured into a series of row banks. Each bank can be individually enabled or disabled to determine whether or not its entries will be included in the TCAM search.
  • Packet classifiers sometimes allow rule logging; that is, recording the packets that match some particular rules.
  • Our algorithm handles rule logging by assigning each rule that is logged a unique decision.
  • Given a TCAM range encoding algorithm A and a classifier C, let A(C) denote the reencoded classifier, W(A(C)) denote the number of bits to represent each rule in A(C), TW(A(C)) denote the minimum TCAM entry width for storing A(C) given the choices 40, 80, 160, or 320, |A(C)| denote the number of rules in A(C), and B(A(C)) = TW(A(C)) × |A(C)| denote the total number of TCAM bits required to store A(C).
  • the main goal of TCAM optimization algorithms is to minimize B(A(C)).
  • For direct range expansion, W(Direct(C)) = 104, TW(Direct(C)) = 160, and B(Direct(C)) = 160 × |Direct(C)|.
  • Notation summary: A, a range encoding scheme; Direct, direct range expansion; C, a packet classifier; A(C), the reencoded classifier; W(A(C)), the width of rules in A(C).
  • We define the Rule Number Ratio of A on C as RNR(A(C)) = |A(C)| / |C|, which is often referred to as the expansion ratio, and the Rule Width Ratio of A on C as RWR(A(C)) = W(A(C)) / W(Direct(C)).
  • For a set of classifiers S, the average rule width ratio is RWR(A(S)) = ( Σ_{C ∈ S} RWR(A(C)) ) / |S|.
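  • These definitions compute directly; a small Python sketch (function and argument names are mine):

```python
def tw(width, allowed=(40, 80, 160, 320)):
    """Minimum allowable TCAM entry width that holds rules of the given width."""
    return min(w for w in allowed if w >= width)

def tcam_metrics(num_rules_A, width_A, num_rules_C, width_direct=104):
    """B, RNR, and RWR for a reencoded classifier A(C), per the definitions above."""
    B = tw(width_A) * num_rules_A        # total TCAM bits to store A(C)
    RNR = num_rules_A / num_rules_C      # rule number (expansion) ratio
    RWR = width_A / width_direct         # rule width ratio
    return B, RNR, RWR

# Direct expansion of a 1000-rule classifier into 2500 prefix rules:
print(tcam_metrics(2500, 104, 1000))     # -> (400000, 2.5, 1.0); TW = 160
```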
  • RL is used to denote a set of 40 real-world packet classifiers that we performed experiments on.
  • RL is chosen from a larger set of real-world classifiers obtained from various network service providers, where the classifiers range in size from a handful of rules to thousands of rules.
  • RL is split into two groups, RLa and RLb, where RNR(Direct(C)) ≤ 4 for all C ∈ RLa and RNR(Direct(C)) > 40 for all C ∈ RLb.
  • FIG. 18 shows the accumulated percentage graph of atomic intervals for each field for the classifiers in RL
  • FIG. 19 shows the accumulated percentage graphs of classifier sizes in RL before and after direct range expansion.
  • each classifier in RL_U has the maximum possible number of distinct decisions.
  • Such classifiers might arise in the context of rule logging where the system monitors the frequency that each rule is the first matching rule for a packet.
  • FIG. 20 shows the accumulated percentage graphs for the compression ratios of our combined techniques for both RL and RLU with and without transformers.
  • FIG. 21 shows the accumulated percentage graphs for the compression ratios of our combined techniques for each field in RL. Note that the data with transformers depicts the true space savings of our methods, but most previous range encoding papers focus only on the data without transformers.
  • FIG. 22 and FIG. 23 show the accumulated percentage graphs of our combined techniques on RL and RLU for rule number ratio and rule width ratio, respectively.
  • TW(A(C))=40 for all but two classifiers in RLU.
  • the remaining savings is due to rule number compression.
  • the average rule number compression ratio without transformers is 36.1%; that is, domain compression and redundancy removal eliminate an average of 63.9% of the rules from our real-life classifier sets.
  • the best average rule number compression ratio without transformers that any other reencoding scheme can achieve is 100%, since those schemes cannot reduce the number of rules.
  • Our algorithm performs well on all of our other data sets too. For example, for Taylor's rule set TRS, we achieve an average compression ratio of 2.7% with transformers included and 1.0% with transformers excluded. Note that prefix alignment is an important component of our algorithm because it reduces the average compression ratio without transformers for RL from 11.8% to 4.5%.
  • Our algorithm is effective for both efficiently specified classifiers and inefficiently specified classifiers.
  • the efficiently specified classifiers in RLa experience relatively little range expansion; the inefficiently specified classifiers in RLb experience significant range expansion.
  • our algorithm provides roughly 20 times better compression for RLb than for RLa with average compression ratios of 0.9% and 20.7%, respectively.
  • TCAM width compression contributes approximately 25% savings.
  • the difference is rule number compression.
  • Our algorithm outperforms all existing reencoding schemes by at least a factor of 3.11 including transformers and by at least a factor of 5.54 excluding transformers.
  • Our algorithm uses 40-bit wide TCAM entries for all but 2 classifiers in RLU whereas the smallest TCAM width achieved by prior work is 80 bits. Therefore, on TCAM entry width, our algorithm is 2 times better than the best known result.
  • we consider the number of TCAM entries. Excluding TCAM entries for transformers, the best rule number ratio that any other method can achieve on RL is 100% whereas we achieve 36.1%. Therefore, excluding TCAM entries for transformers, our algorithm is at least 5.54 (=2×100%/36.1%) times better than the optimal TCAM reencoding algorithm that does not consider classifier semantics.
  • the algorithms are implemented on the Microsoft .NET Framework 2.0 and the experiments are performed on a desktop PC running Windows XP with 3 GB of memory and a single 3.4 GHz Pentium D processor.
  • the minimum, mean, median, and maximum running times are 0.003, 37.642, 0.079, and 1093.308 seconds;
  • the minimum, mean, median, and maximum running times are 0.006, 1540.934, 0.203, and 54604.311 seconds.
  • Table 7 below shows the running time of some representative classifiers in RL and RLU.
  • T(A(C)) represents the number of packets per second that can be classified using the given scheme.
  • this is the minimum throughput of any of the 6 TCAM chips.
  • this is essentially the inverse of latency because there is no pipelining.
  • the modeling results demonstrate that topological transformation significantly improves throughput if we use the 6 chip configuration.
  • the reason for the throughput increase is the use of the pipeline and the use of smaller and thus faster TCAM chips.
  • the throughput of the 1-chip configuration is significantly reduced because there is no pipeline; however, the throughput remains better than the naive expectation of 16.6% (one sixth of the single-lookup throughput) because this configuration again uses smaller, faster TCAM chips.
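By way of illustration only, the following Python sketch models the metrics defined in the list above (W, TW, B, and RNR); it is a minimal sketch under the assumption that a classifier is characterized by its rule count and per-rule bit width, and all function names are illustrative rather than part of the disclosure.

STANDARD_WIDTHS = [40, 80, 160, 320]  # candidate TCAM entry widths in bits

def tw(rule_width_bits):
    # TW(A(C)): the minimum standard TCAM entry width that can hold a
    # rule of the given width.
    for width in STANDARD_WIDTHS:
        if rule_width_bits <= width:
            return width
    raise ValueError("rule wider than the largest TCAM entry")

def total_bits(num_rules, rule_width_bits):
    # B(A(C)) = |A(C)| x TW(A(C)): total TCAM space used by the classifier.
    return num_rules * tw(rule_width_bits)

def rule_number_ratio(reencoded_rule_count, original_rule_count):
    # RNR(A(C)) = |A(C)|/|C|, often referred to as the expansion ratio.
    return reencoded_rule_count / original_rule_count

# Direct range expansion of a 5-field classifier: 104-bit rules require
# 160-bit entries, so B(Direct(C)) = 160 x |Direct(C)|.
print(tw(104), total_bits(900, 104))  # prints 160 144000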

Abstract

A method is provided for constructing a packet classifier for a computer network system. The method includes: receiving a set of rules for packet classification, where a rule sets forth values for fields in a data packet and a decision for data packets having matching field values; representing the set of rules as a directed graph; partitioning the graph into at least two partitions; generating at least one lookup table for each partition of the graph; and instantiating the lookup tables from one partition on a first content-addressable memory and the lookup tables from the other partition on a second content-addressable memory device.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit and priority of U.S. Provisional Application No. 61/234,390, filed Aug. 17, 2009. The entire disclosure of the above application is incorporated herein by reference.
FIELD
The present disclosure relates to methods for constructing a packet classifier for a computer network system.
BACKGROUND
Packet classification is the core mechanism that enables many networking devices, such as routers and firewalls, to perform services such as packet filtering, quality of service, traffic monitoring, virtual private networks (VPNs), network address translation (NAT), load balancing, traffic accounting and monitoring, differentiated services (Diffserv), etc. The fundamental problem is to compare each packet with a list of predefined rules, which we call a packet classifier, and find the first (i.e., highest priority) rule that the packet matches. Table 1 shows an example packet classifier of three rules. The format of these rules is based upon the format used in Access Control Lists (ACLs) on Cisco routers. In this paper we use the terms packet classifiers, ACLs, rule lists, and lookup tables interchangeably.
Hardware-based classification using Ternary Content Addressable Memories (TCAMs) is the de facto industry standard. Whereas a traditional random access memory chip receives an address and returns the content of the memory at that address, a TCAM chip does the converse: it receives content and returns the address of the first entry where the content lies in the TCAM in constant time (i.e., a few clock cycles). Exploiting this hardware feature, TCAM-based packet classification stores a rule in a TCAM entry as an array of 0's, 1's, or *'s (don't-care values). A packet header (i.e., a search key) matches an entry if and only if their corresponding 0's and 1's match. Given a search key to a TCAM, the circuits compare the key with all its occupied entries in parallel and return the index (or the content, depending on the chip architecture and configuration,) of the first matching entry. TCAM-based classification is widely used because of its high speed. Although software based classification has been extensively studied, these schemes cannot match the wire speed performance of TCAM-based packet classification systems.
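To make the TCAM matching semantics concrete, the following is a minimal Python sketch (provided for illustration only and not part of the original disclosure) of how a ternary entry of 0's, 1's, and *'s matches a search key and how a first-match search returns the index of the first matching entry.

def ternary_match(entry, key):
    # A key bit matches an entry bit if the entry bit is * or the bits agree.
    return all(e == '*' or e == k for e, k in zip(entry, key))

def tcam_lookup(entries, key):
    # Model the TCAM first-match semantics: return the index of the first
    # entry that matches the key, or None when nothing matches.
    for index, entry in enumerate(entries):
        if ternary_match(entry, key):
            return index
    return None

# Example with three 4-bit entries: key 0110 matches entry 1 first.
print(tcam_lookup(['00**', '01**', '****'], '0110'))  # prints 1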
Although TCAM-based packet classification is the de facto industry standard because packets can be classified in constant time, the speed and power efficiency of each memory access decreases significantly as TCAM chip capacity increases. Packet classification with a single TCAM lookup is possible because of the parallel search and priority match circuits in a TCAM chip. Unfortunately, because the capacity of the TCAM chip determines the amount and depth of circuitry active during each parallel priority search, there is a significant tradeoff between the capacity of a TCAM chip and the resulting speed and power efficiency of that chip. For example, based on the detailed TCAM power model disclosed by B. Agrawal and T. Sherwood in "Modeling TCAM power for next generation network devices", In Proc. IEEE International Symposium on Performance Analysis of Systems and Software (2006), a single search on a 36 megabit (Mb) TCAM chip, the largest available, takes 483.4 nanojoules (nJ) and 46.9 nanoseconds (ns), whereas the same search on a 1 Mb TCAM chip takes 17.8 nJ and 2.1 ns.
Building an efficient TCAM-based packet classification system requires careful optimization of the size, speed, and power of TCAM chips. On one hand, there is pressure to use smaller capacity TCAM chips because small TCAM chips consume less power, generate less heat, occupy less line card space, have a lower price, and support faster lookups. TCAM chips consume a large amount of power due to their parallel searching. The power consumed by a TCAM chip is about 1.85 Watts per megabit (Mb), which is roughly 30 times larger than a comparably sized SRAM chip. The high power consumption consequently causes TCAM chips to generate a huge amount of heat. TCAM chips have large die areas. A TCAM chip occupies 6 times (or more) board space than an equivalent capacity SRAM chip. The large die area of TCAM chips leads to TCAM chips being very expensive, often costing more than network processors. Although the limited market size may contribute to TCAM's high price, it is not the main reason. Finally, as we noted earlier, smaller TCAM chips support much faster lookups than larger TCAM chips.
On the other hand, there is pressure to use large capacity TCAM chips. The first reason is that encoding packet classification rules into TCAM rules often results in an explosion in the number of rules, which is referred to as the range expansion problem. In a typical classification rule, the fields of source and destination IP addresses and protocol type are specified as prefixes, so they can be directly stored in a TCAM. However, the fields of source and destination port numbers are specified in ranges, which need to be converted to one or more prefixes before being stored in a TCAM. This can lead to a significant increase in the number of TCAM entries needed to encode a rule. For example, 30 prefixes are needed to represent the single range [1, 65534], so 30×30=900 TCAM entries are required to represent the single rule r1 in Table 1 below.
TABLE 1
Rule  Source IP   Dest. IP     Source Port  Dest. Port   Protocol  Action
r1    3.2.1.0/24  192.168.0.1  [1, 65534]   [1, 65534]   TCP       accept
r2    *           *            *            *            *         discard

The second reason to use large TCAM chips is that packet classifiers are growing rapidly in length and width due to several causes. First, the deployment of new Internet services and the rise of new security threats lead to larger and more complex packet classification rule sets. While traditional packet classification rules usually examine the five standard header fields, new classification applications examine additional fields such as protocol flags, ToS (type of service), switch port numbers, security tags, etc. Second, with the increasing adoption of IPv6, the number of bits required to represent source and destination IP addresses will grow from 64 to 256. The growth of packet classifier length and width puts more demand on TCAM capacity, power consumption, and heat dissipation.
Range reencoding schemes have been proposed to improve the scalability of TCAMs, primarily by mitigating the effect of range expansion. The basic idea is to first reencode a classifier into another classifier that requires less TCAM space and then reencode each packet correspondingly such that the decision made by the reencoded classifier for the reencoded packet is the same as the decision made by the original classifier for the original packet. Range reencoding has two possible benefits: rule width compression so that narrower TCAM entries can be used and rule number compression so that fewer TCAM entries can be used.
In another aspect of this disclosure, we observe that all previous reencoding schemes suffer from one fundamental limitation: they all ignore the decision associated with each rule and thus the classifier's decision for each packet. Disregarding classifier semantics leads all previous techniques to miss significant opportunities for space compression. Fundamentally different from prior work, we view reencoding as a topological transformation process from one colored hyperrectangle to another where the color is the decision associated with a given packet. Topological transformation allows us to reencode the entire classifier as opposed to reencoding only the ranges in a classifier. Furthermore, we also view reencoding as a classification process that can be implemented with small TCAM tables, which enables fast packet reencoding. We present two orthogonal, yet composable, reencoding approaches: domain compression and prefix alignment. In domain compression, we transform a given colored hyperrectangle, which represents the semantics of a given classifier, to the smallest possible “equivalent” colored hyperrectangle. This leads to both optimal rule width compression as well as rule number compression. In prefix alignment, on the other hand, we strive for rule number compression only by transforming a colored hyperrectangle to an equivalent “prefix-friendly” colored hyperrectangle where the ranges align well with prefix boundaries, minimizing the costs of range expansion.
This section provides background information related to the present disclosure which is not necessarily prior art.
SUMMARY
A method is provided for constructing a packet classifier for a computer network system. The method includes: receiving a set of rules for packet classification, where a rule sets forth values for fields in a data packet and a decision for data packets having matching field values; representing the set of rules as a directed graph; partitioning the graph into at least two partitions; generating at least one lookup table for each partition of the graph; and instantiating the lookup tables from one partition on a first content-addressable memory and the lookup tables from the other partition on a second content-addressable memory device.
In another aspect of this disclosure, a method is provided for encoding multiple ranges of values for a given field in a data packet as defined in a rule of a packet classifier. The method includes: finding candidate cut points over an entire domain of values for the given field, where candidate cut points correspond to starting or ending points in the ranges of values for the given field; selecting a number of bits to be used to encode the domain in a binary format; and recursively dividing the domain of values along the candidate cut points using dynamic programming and mapping each range of values to a result that is represented in a binary format using the selected number of bits.
This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features. Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
DRAWINGS
FIGS. 1A and 1B depict a two-dimensional prefix classifier and a representation of the same using three one-dimensional tables, respectively;
FIG. 2A depicts an exemplary two-dimensional packet classifier;
FIG. 2B depicts an equivalent representation of the packet classifier as a directed graph;
FIG. 2C depicts a reduced representation of the directed graph;
FIG. 2D depicts four TCAM tables that correspond to the four nonterminal nodes in the reduced directed graph;
FIG. 2E illustrates the process of table concatenation;
FIG. 3 illustrates the packet lookup process for the tables depicted in FIG. 2E;
FIG. 4 is a flowchart illustrating a TCAM SPliT approach to constructing a packet classifier;
FIG. 5 illustrates partitioning an exemplary directed graph in accordance with the TCAM SPliT approach;
FIGS. 6A and 6B are diagrams illustrating how TCAM tables can be shadow packed and a graph representation of the shadowing relationship among the tables, respectively;
FIGS. 7A and 7B illustrate the shadow packing process for tables depicted in FIG. 2;
FIG. 8 illustrates the compression ratios for two exemplary packet classifiers;
FIGS. 9A-9C are graphs depicting power, latency and throughput measures, respectively, for the exemplary packet classifiers;
FIGS. 10A-10J illustrate an exemplary topological transformation process;
FIG. 11 illustrates an exemplary packet classification process using a multi-lookup architecture;
FIGS. 12 and 13 illustrate the packet classification process using a parallel pipelined-lookup architecture and a chained pipelined-lookup architecture, respectively;
FIGS. 14A-14G illustrate an example of the domain compression technique;
FIG. 15 is a flowchart illustrating the domain compression technique for constructing a packet classifier;
FIG. 16 illustrates an example of a one-dimensional prefix alignment;
FIG. 17 is a flowchart illustrating the prefix alignment technique;
FIGS. 18 and 19 are graphs for a set of exemplary packet classifiers detailing some of their properties; and
FIGS. 20-24 are graphs of experimental results when topological transformation is applied to the set of exemplary packet classifiers.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure. Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION
To overcome the fundamental limitations of existing TCAM optimization schemes with respect to the multiplicative effect, the conventional scheme of performing a single d-dimensional lookup on a single d-dimensional classifier in a TCAM is abandoned. Instead, splitting a single d-dimensional classifier stored on a single large and slow TCAM chip into k≦d smaller classifiers stored on a pipeline of k small and fast TCAM chips is proposed (referred to herein as TCAM SPliT). While reference is made throughout this disclosure to ternary content addressable memory, concepts disclosed herein are applicable to other types of content addressable memory, random access memory or combination thereof.
The proposed TCAM SPliT approach greatly reduces the total required TCAM space. Unlike other TCAM optimization schemes, it effectively deals with the multiplicative effect by splitting apart dimensions that do not combine well together. For example, the exemplary 2-dimensional classifier in FIG. 1A, which illustrates the multiplicative effect, can be represented using the three one-dimensional tables in FIG. 1B, requiring a total of 8 rules, each of which is only 3 bits wide. Although the one-dimensional tables only reduce the number of entries by 2 in this example, the savings would be much larger if the number of distinct entries qi in each dimension were larger. The TCAM SPliT approach also leads to much faster packet classification with lower power consumption because it enables the use of smaller and thus faster and more power efficient TCAM chips.
Concepts of fields, packets, and packet classifiers are defined formally as follows. A field Fi is a variable of finite length (i.e., of a finite number of bits). The domain of field Fi of w bits, denoted D(Fi), is [0, 2w−1]. A packet over the d fields F1, . . . , Fd is a d-tuple (p1, . . . , pd) where each pi (1≦i≦d) is an element of D(Fi). Packet classifiers usually check the following five fields: source IP address, destination IP address, source port number, destination port number, and protocol type. The lengths of these packet fields are 32, 32, 16, 16, and 8, respectively. We use Σ to denote the set of all packets over fields F1, . . . , Fd. It follows that Σ is a finite set and |Σ|=|D(F1)|× . . . ×|D(Fd)|, where |Σ| denotes the number of elements in set Σ and |D(Fi)| denotes the number of elements in set D(Fi). It is readily understood that packet classifiers may be constructed to work with more or fewer fields as well as different fields.
A rule has the form ⟨predicate⟩→⟨decision⟩. A ⟨predicate⟩ defines a set of packets over the fields F1 through Fd, and is specified as F1εS1∧ . . . ∧FdεSd where each Si is a subset of D(Fi) and is specified as either a prefix or a range. A prefix {0,1}k{*}w−k with k leading 0s or 1s for a packet field of length w denotes the range [{0,1}k{0}w−k, {0,1}k{1}w−k]. For example, prefix 01** denotes the range [0100, 0111]. A rule F1εS1∧ . . . ∧FdεSd→⟨decision⟩ is a prefix rule if and only if each Si is represented as a prefix.
A packet (p1, . . . , pd) matches a predicate F1εS1∧ . . . ∧FdεSd and the corresponding rule if and only if the condition p1εS1∧ . . . ∧pdεSd holds. α is used to denote the set of possible values that ⟨decision⟩ can be. For packet classifiers, typical elements of α include accept, discard, accept with logging, and discard with logging.
A sequence of rules (r1, . . . , rn) is complete if and only if for any packet p, there is at least one rule in the sequence that p matches. To ensure that a sequence of rules is complete and thus is a packet classifier, the predicate of the last rule is usually specified as F1εD(F1)∧ . . . ∧FdεD(Fd). A packet classifier f is a sequence of rules that is complete. The size of f, denoted |f|, is the number of rules in f. A packet classifier f is a prefix packet classifier if and only if every rule in f is a prefix rule. Two rules in a packet classifier may overlap; that is, there exists at least one packet that matches both rules. Furthermore, two rules in a packet classifier may conflict; that is, the two rules not only overlap but also have different decisions. Packet classifiers typically resolve conflicts by employing a first-match resolution strategy where the decision for a packet p is the decision of the first (i.e., highest priority) rule that p matches in f.
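These definitions translate directly into a small executable model. The following Python sketch (illustrative only; the representation of rules as per-field inclusive ranges is an assumption made here for brevity) captures packets, rules, and the first-match resolution strategy.

class Rule:
    def __init__(self, ranges, decision):
        self.ranges = ranges      # one inclusive (low, high) range per field
        self.decision = decision

    def matches(self, packet):
        # p matches the predicate iff pi is in Si for every field Fi.
        return all(lo <= p <= hi for p, (lo, hi) in zip(packet, self.ranges))

def classify(rules, packet):
    # First-match semantics: the decision for packet p is the decision of
    # the first rule that p matches; a complete classifier always matches.
    for rule in rules:
        if rule.matches(packet):
            return rule.decision

# Two fields over 3-bit domains; the final rule makes the classifier complete.
rules = [Rule([(1, 6), (0, 3)], 'accept'), Rule([(0, 7), (0, 7)], 'discard')]
print(classify(rules, (4, 2)), classify(rules, (7, 7)))  # accept discard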
When using a TCAM to implement a packet classifier, all rules are typically required to be prefix rules. However, in a typical packet classifier rule, some fields such as source and destination port numbers are represented as ranges rather than prefixes. This leads to range expansion, the process of converting a rule that may have fields represented as ranges into one or more prefix rules. In range expansion, each field of a rule is first expanded separately. The goal is to find a minimum set of prefixes such that the union of the prefixes corresponds to the range. For example, if one 3-bit field of a rule is the range [1, 6], a corresponding minimum set of prefixes would be 001, 01*, 10*, 110. The worst-case range expansion of a w-bit range results in a set containing 2w−2 prefixes. The next step is to compute the cross product of each set of prefixes for each field, resulting in a potentially large number of prefix rules. The range expansion of rule r1 in Table 1 results in 30×30=900 prefix rules.
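The per-field expansion step can be illustrated with a short Python sketch (illustrative only): it covers an integer range with a minimum set of prefixes by repeatedly emitting the largest aligned prefix block that starts at the low end of the remaining range, which is one standard way of computing the minimum cover.

def range_to_prefixes(lo, hi, width):
    # Cover the inclusive range [lo, hi] of a width-bit field with a
    # minimum set of prefixes written as ternary strings.
    prefixes = []
    while lo <= hi:
        # The block at lo is limited by lo's alignment and the values left.
        size = lo & -lo if lo > 0 else 1 << width
        while size > hi - lo + 1:
            size //= 2
        bits = size.bit_length() - 1   # number of trailing * positions
        stem = '' if bits == width else format(lo >> bits, '0%db' % (width - bits))
        prefixes.append(stem + '*' * bits)
        lo += size
    return prefixes

# The 3-bit range [1, 6] expands to the four prefixes 001, 01*, 10*, 110.
print(range_to_prefixes(1, 6, 3))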
Commercially available TCAM chips are allowed to be configured with a limited set of widths, which are typically 36, 72, 144, 288, and 576 bits though some are now 40, 80, 160, 320, and 640 bits. In traditional single-lookup approaches, TCAM chips are typically configured to be 144 bits wide because the standard five packet fields constitute 104 bits. The decision of each rule could be stored either in TCAM or its associated SRAM.
In this section, how to construct a k-stage TCAM pipeline for k≦d is described. First assume k=d so that each stage contains a one-dimensional packet classifier and thus requires a single-field lookup. Then, how to construct a k-stage TCAM pipeline where k<d is described. In this case, some stages will contain multidimensional packet classifiers and thus require multi-field lookups.
FIGS. 2A-2E illustrate the algorithm for constructing a single-field TCAM pipeline for a given classifier. Four steps for constructing a pipeline are as follows: (1) FDD Construction: convert the classifier to its equivalent decision tree representation; (2) FDD Reduction: reduce the size of the decision tree; (3) Table Generation: treat each nonterminal node in the reduced decision tree as a 1-dimensional classifier and generate a TCAM table for this classifier; and (4) Table Concatenation: for each field, merge all the tables into one TCAM table, which will serve as one stage of the TCAM pipeline.
Classifiers can be represented using a special decision tree representation called a Firewall Decision Diagram. A Firewall Decision Diagram (FDD) with a decision set DS and over fields F1, . . . , Fd is an acyclic and directed graph that has the following five properties. First, there is exactly one node that has no incoming edges. This node is called the root. The nodes that have no outgoing edges are called terminal nodes. Second, each node v has a label, denoted F(v), such that
F(v)ε{F1, . . . , Fd} if v is a nonterminal node, and F(v)εDS if v is a terminal node.
Third, each edge e:u→v is labeled with a nonempty set of integers, denoted I(e), where I(e) is a subset of the domain of u's label (i.e., I(e)⊆D(F(u))). Fourth, a directed path from the root to a terminal node is called a decision path. No two nodes on a decision path have the same label. Fifth, the set of all outgoing edges of a node v, denoted E(v), satisfies the following two conditions: (i) Consistency: I(e)∩I(e′)=Ø for any two distinct edges e and e′ in E(v); (ii) Completeness: ∪eεE(v)I(e)=D(F(v)). Further information regarding FDDs can be found in "Structured Firewall Design" by M. G. Gouda and A. X. Liu, Computer Networks Journal, 51(4):1106-1120, March 2007.
A full-length ordered FDD is an FDD where in each decision path all fields appear exactly once and in the same order. For ease of presentation, the term "FDD" is used to mean "full-length ordered FDD" if not otherwise specified. FIG. 2B shows the FDD constructed from the classifier in FIG. 2A, where a stands for accept, d stands for discard, and dl stands for discard with log. An FDD construction algorithm, which converts a packet classifier to an equivalent full-length ordered FDD, is presented by A. X. Liu and M. G. Gouda in "Diverse Firewall Design", IEEE Transactions on Parallel and Distributed Systems (2008). While the following description makes reference to FDDs, it is readily understood that certain aspects of this disclosure are not limited to FDDs and may be implemented using other types of directed diagrams.
The second step is to reduce the size of the FDD. In an exemplary embodiment, the FDD is reduced by merging isomorphic subgraphs. A full-length ordered FDD f is reduced if and only if it satisfies the following two conditions: (1) no two nodes in f are isomorphic; (2) no two nodes have more than one edge between them. Two nodes v and v′ in an FDD are isomorphic if and only if v and v′ satisfy one of the following two conditions: (1) both v and v′ are terminal nodes with identical labels; (2) both v and v′ are nonterminal nodes and there is a one-to-one correspondence between the outgoing edges of v and the outgoing edges of v′ such that every pair of corresponding edges have identical labels and they both point to the same node. Other types of reduction techniques are also contemplated by this disclosure.
FDD reduction is an important step in reducing the TCAM space used by the final pipeline because it reduces the number of nonterminal nodes. FIG. 2C shows the resultant FDD reduced from the one in FIG. 2B. In this work, an efficient signature-based FDD reduction algorithm is used. Suppose the reduced FDD has n nonterminal nodes. Consider any nonterminal node v. Since v is complete with respect to its labeled field, we can view v as a one-dimensional packet classifier. Construct an equivalent TCAM table Table(v) for each nonterminal node v as follows. First, for all tables of field Fi, we assign a unique ID of ⌈log mi⌉ bits to both v and Table(v), where mi is the number of nodes with label Fi in the FDD. This ID will serve as the ID of both node v and table Table(v). In the case where there is a single nonterminal node with label Fi, no ID is assigned. For example, the IDs for the three nonterminal nodes with the label F2 in FIG. 2D are 00, 01, and 10. Second, generate a classifier for each nonterminal node v by generating one rule for each prefix on each outgoing edge of v. That is, for each of v's outgoing edges e from v to v′ and for each prefix p on edge e, generate a rule r as follows: the predicate of r is p; if v′ is a terminal node, the decision of r is the label of v′; if v′ is a nonterminal node, the decision of r is the ID of v′. Third, minimize the number of TCAM entries in Table(v) by using an optimal, polynomial-time algorithm for minimizing one-dimensional classifiers. An exemplary algorithm is set forth by Suri et al. in "Compressing two-dimensional routing tables", Algorithmica, 35:287-300 (2003). Other types of algorithms may also be applied. FIG. 2D shows the four minimal TCAM tables that correspond to the four nonterminal nodes in the FDD.
The final step is to merge all TCAM tables of field Fi into a single table. For every nonterminal node v with label Fi, prepend v's ID to the predicate of each rule in Table(v). Since each table ID provides a unique signature that distinguishes that table's entries from all other table entries, all tables of field Fi can be concatenated into a single table. FIG. 2E illustrates the process of table concatenation.
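A minimal Python sketch of the table generation and concatenation steps is given below (illustrative only; it assumes each nonterminal node's table is already given as a list of (prefix, decision) rules). Node IDs of ⌈log mi⌉ bits are assigned per field and prepended to each rule, mirroring the process of FIG. 2E.

from math import ceil, log2

def assign_ids(nodes):
    # Give each of the m same-field nodes a unique ceil(log2 m)-bit ID;
    # a lone node needs no ID and receives the empty string.
    bits = ceil(log2(len(nodes))) if len(nodes) > 1 else 0
    return {v: format(i, '0%db' % bits) if bits else ''
            for i, v in enumerate(nodes)}

def concatenate_tables(tables, ids):
    # Merge all per-node tables of one field into a single TCAM table by
    # prepending each node's ID to the predicates of its rules.
    merged = []
    for node, rules in tables.items():
        for prefix, decision in rules:
            merged.append((ids[node] + prefix, decision))
    return merged

# Three F2 nodes receive the IDs 00, 01, and 10 before concatenation.
tables = {'v1': [('00*', 'a')], 'v2': [('1**', 'd')], 'v3': [('***', 'dl')]}
print(concatenate_tables(tables, assign_ids(list(tables))))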
After table concatenation, d TCAM tables corresponding to the d fields are obtained. These d tables can be loaded (or instantiated) into d TCAM chips, which can be chained together into a d-stage pipeline; correspondingly, a d-dimensional packet lookup is SPliT into d lookups. The lookup result of the i-th chip is part of the search key for the (i+1)-st chip, and the result of the last chip is the decision for the packet. With such a chain, d packets can be processed in parallel in the pipeline.
FIG. 3 illustrates the packet lookup process for the two tables t1 and t2 in FIG. 2E. Suppose two packets (010, 001) and (111, 010) arrive one after the other. When (010, 001) arrives, the first search key, 010, is formed and sent to t1 while the rest of the packet (001) is forwarded to t2. When the next packet (111, 010) arrives, table t1 has sent the search result 01 to table t2. When the first search key for the second packet 111 is formed, the second search key for the first packet 01001 is formed in parallel, and both are sent to tables t1 and t2, respectively. This cycle will yield a result of accept for the first packet and a result of 10 for the second packet. The above process continues for every packet.
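The two-stage lookup just described can be simulated in a few lines of Python (illustrative only; the table contents below are simplified stand-ins rather than the exact tables of FIG. 2E). The first stage maps field F1 to a table ID, which is prepended to F2 to form the second-stage search key.

def lookup(table, key):
    # First-match lookup over (ternary entry, result) pairs.
    for entry, result in table:
        if all(e == '*' or e == k for e, k in zip(entry, key)):
            return result

t1 = [('010', '01'), ('111', '10'), ('***', '00')]                  # stage 1: F1
t2 = [('01001', 'accept'), ('10010', 'discard'), ('*****', 'discard')]  # stage 2

def classify(f1, f2):
    table_id = lookup(t1, f1)         # first lookup on F1 alone
    return lookup(t2, table_id + f2)  # second lookup on table ID plus F2

print(classify('010', '001'))         # prints accept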
With reference to FIGS. 4 and 5, a method for constructing a packet classifier in accordance with the TCAM SPliT approach is further described. A packet classifier defined by a set of rules for packet classification serves as the starting point. The set of rules is first represented at 44 as a directed graph. The graph is then partitioned at 46 into at least two partitions: a top partition and a bottom partition. The graph is partitioned by horizontally cutting an FDD into k pieces; that is, divide the d fields into k partitions. For example, partition a 5-field FDD into two partitions as shown in FIG. 5. The top partition is a single, smaller dimensional FDD or sub-graph and consists of one 3-dimensional sub-FDD over fields F1, F2, and F3. The bottom partition contains multiple sub-FDDs and consists of eight 2-dimensional sub-FDDs over fields F4 and F5. It is readily understood that the graph may be partitioned into more or less than two partitions so long as k is less than d.
When k multi-field TCAM tables are generated, there are C(d−1, k−1) options for splitting the FDD. Define a k-split to be a partition of a sequence of FDD fields into k in-order subsequences. For example, two valid 2-splits of the sequence of FDD fields F1, . . . , F5 are (F1,F2,F3),(F4,F5) and (F1,F2),(F3,F4,F5).
Furthermore, limit the split tables to a reduced entry width. For example, limit the width of the split tables to half or a quarter of the original entry width to save TCAM space. That is, with 2 multi-field TCAM tables, each table will be 72 bits wide, while the original table was 144 bits wide. These shorter table widths limit the number of possible splits for a given FDD considerably, which enables examining each valid split to find the split that produces the smallest tables. Define a b-valid k-split to be a k-split such that all the tables generated by the k-split fit into entries that are at most b bits wide. Given b, find the best k-split by generating tables for the b-valid k-splits from the C(d−1, k−1) choices, and using the k-split which uses the least number of entries. The algorithm below illustrates this concept.
Algorithm 1 FDD Split (FDDSplit)
Input: F: an FDD with n fields F1, . . . ,Fn such that w(Fi)
is the number of bits in Fi; a bit bound b; and a table
bound k
Output: A b-valid k-split
 1: Let C be the collection of all k-splits
 2: Let min = ∞ and minc = Ø
 3: for each c ∈ C do
 4:   if c is a b-valid k-split then
 5:     Generate the list of tables for c, T = T1, . . . ,Tk.
 6:     Let size be the total number of entries used by T.
 7:     if every table in T1, . . . , Tk has a width of at most
        b bits with table IDs and size < min then
 8:       Let min = size and minc = c
 9:     end if
10:   end if
11: end for
12: return minc

Other techniques for splitting the graph are also within the broader aspects of this disclosure.
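For illustration, the enumeration at the heart of the split selection can be sketched in Python as follows (illustrative only; table_cost and is_b_valid are stand-ins for the table generation and width check described above). There are C(d−1, k−1) in-order k-splits, one per choice of k−1 cut points.

from itertools import combinations
from math import prod

def k_splits(fields, k):
    # Yield every partition of the field sequence into k in-order,
    # nonempty subsequences.
    for cuts in combinations(range(1, len(fields)), k - 1):
        bounds = (0,) + cuts + (len(fields),)
        yield [fields[bounds[i]:bounds[i + 1]] for i in range(k)]

def best_split(fields, k, table_cost, is_b_valid):
    # Keep the b-valid k-split whose generated tables use the fewest entries.
    best, best_cost = None, float('inf')
    for split in k_splits(fields, k):
        if is_b_valid(split):
            cost = sum(table_cost(part) for part in split)
            if cost < best_cost:
                best, best_cost = split, cost
    return best

# Toy example: the cost of a sub-FDD is modeled as the product of the
# number of distinct values per field, reflecting the multiplicative effect.
sizes = {'F1': 4, 'F2': 3, 'F3': 5, 'F4': 2, 'F5': 2}
cost = lambda part: prod(sizes[f] for f in part)
print(best_split(list(sizes), 2, cost, lambda s: True))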
Next, a lookup table is generated at 48 from the sub-graphs in each of the partitions. With continued reference to FIG. 5, a lookup table is generated from the sub-tree in the top partition and multiple lookup tables are generated from the bottom partition, where a lookup table is generated for each sub-tree in the bottom partition. The lookup table from the top partition is linked with the lookup tables in the bottom partition by uniquely assigned table identifiers. With the exception of two primary distinctions, the procedure for generating k multi-field TCAM tables is similar to the procedure for generating d single-field TCAM tables. First, rather than generating a single-field TCAM table from each nonterminal node, a multi-field TCAM table is generated from each sub-FDD. Second, rather than optimizing each single-field TCAM table using a single-field classifier minimization algorithm, optimization occurs for each multi-field TCAM table using a multi-field classifier minimization algorithm. An exemplary multi-field classifier minimization algorithm is disclosed by Meiners et al. in “TCAM Razor: A systematic approach towards minimizing packet classifiers in TCAMs” In Proc. 15th IEEE Conference on Network Protocols (October 2007) which is incorporated herein by reference. Other minimization algorithms are contemplated by this disclosure.
Lastly, the lookup tables are instantiated at 49 in content-addressable memory devices. For example, the lookup table from the top partition is instantiated on a first content-addressable memory device, whereas the lookup tables from the bottom partition are instantiated on a second content-addressable memory device. It is envisioned that the lookup tables could be instantiated in random access memory or another type of memory. Packet processing in multi-field TCAM SPliT is similar to that in single-field TCAM SPliT except that a multi-field lookup is performed at each stage. This approach allows the number of TCAM chips and lookup stages to be reduced to any number less than d.
So far, assume the use of full-length FDDs where every classifier field is used. Also, assume that every packet will visit every stage of the pipeline. In some cases, both of these assumptions are unnecessary, and performance can be improved with field elimination and lookup short circuiting. Field elimination is first described. In some packet classifiers, a given field such as source IP may be irrelevant. This is the case if every node of that field has only one outgoing edge in the reduced FDD. In this case, eliminate the field from consideration and partition the remaining fields among the k chips in multi-field TCAM SPliT. After performing partitioning, it may still be the case that some nodes in the FDD will have only one outgoing edge. For example, in a 2-stage pipeline, the decision for some packets may be completely determined by the first lookup. In such a case, an extra bit can be used to denote that the next lookup can be eliminated (or short circuited) and the decision immediately returned. Note the first lookup can never be eliminated unless it is a trivial classifier. Experiments show that both field elimination and lookup short circuiting improve the performance of TCAM SPliT on real-life classifiers. In particular, field elimination creates more opportunities for shadow packing, the optimization technique discussed next.
Packet classification rules periodically need to be updated. The common practice for updating rules is to run two TCAMs in tandem where one TCAM is used while the other is updated. TCAM SPliT is compatible with this current practice with a slight modification. First, this scheme can be implemented using two pairs of small TCAM chips. However, it can also be implemented using only two small TCAM chips as long as the pipeline can be configured in either order. The two chip update solution would be to write the new classification rules into the unused portion of each TCAM chip. Furthermore, because of the nature of TCAM SPliT where the TCAM space requirements may be significantly different between the two TCAM chips, reversing the order of the chips in the pipeline for the next update is suggested. That is, write the updated rules for the second chip into the free space in the current first chip, and vice versa. Once the newly updated rules are ready, allow the pipeline to clear, change the active portion of each TCAM, and then reverse the pipeline with the new updated classifiers. This type of update is supported because TCAM chips allow regions of the memory to be deactivated.
An optimization technique referred to as TCAM packing is presented below. This optimization further reduces TCAM space by allowing multiple rules from different TCAM tables to co-reside in the same TCAM entry. There are two natural ways to view TCAM packing. The first is to view columns as the primary dimension of TCAM chips and pack tables into fixed width columns. The second is to view rows as the primary dimension of TCAM chips and pack tables within fixed height rows. If we take the column-based approach and assume all tables have the same width, TCAM packing becomes a generalization of makespan scheduling on multiple identical machines. If we take the row-based approach and assume all tables have the same height, TCAM packing becomes a generalization of one-dimensional bin packing. Since both problems are NP-complete, TCAM packing is NP-hard. TCAM packing seems most similar to tiling problems where we try to fit 2-dimensional rectangles into a given geometric area. The main additional difficulty for TCAM packing is that the width of the tables is not fixed as part of the input because we must determine how many ID bits must be associated with each table.
While both the row view and the column view are natural ways to simplify the TCAM packing problem, we focus on the row view for the following two reasons. First, with the column view, when tables of varying width are allocated to the same column, the number of bits assigned to each table t is equal to h(t)×w(t′) where t′ is the widest table assigned to that column. This leads to many unused bits if tables of different widths are assigned to the same column. On the other hand, horizontally packed tables can be placed next to each other as keeping the vertical boundaries across multiple tables is unnecessary. Of course, there may be wasted bits if tables of different heights are packed in the same row. We try to minimize this effect by allowing some tables to be stacked in the same row if they fit within the row boundaries. Second, with the column view, table ID bits between tables in different columns cannot be shared; that is, while two adjacent tables may have the same table ID, each will have its own table ID bits. Thus, the number of bits used for table IDs grows essentially linearly with the number of columns. On the other hand, horizontally aligned tables in the same row can potentially share some “row ID” bits in their table IDs; these tables would be distinguished by their horizontal offsets.
If we view the row version of the TCAM packing problem as a 2D strip packing problem, we are basically enforcing a Top-Left property where the top and left segments of every rectangle (table) touch the boundary or another rectangle. The key additional restriction that we will enforce is a shadowing relationship. If the left segment of a rectangle R2 touches the right segment of another rectangle R1, the left segment of R2 must be completely contained within the right segment of R1. In this case, we say that rectangle R1 shadows rectangle R2. We define shadowing to be transitive. That is, if R1 shadows R2 and R2 shadows R3, then R1 shadows R3. For example, in FIG. 6A, table t0 shadows tables t00 and t01. We say that a set of rectangles (tables) is shadow packed if their packing obeys the Top-Left Property and any rectangle not on the boundary is in the shadow of its left neighbor. For example, the rectangles in FIG. 6A are shadow packed.
An efficient algorithm, SPack, that produces shadow packed tables is also presented. A crucial property of shadow packed tables is that they facilitate the sharing of table ID bits that are in the same horizontal range. To fully explain this concept, we first formally define shadowing and shadow packing.
For a table t stored in a TCAM, we use VBegin(t) and VEnd(t) to denote the vertical indices of the TCAM entries where the table begins and ends, respectively, and we use HBegin(t) and HEnd(t) to denote the horizontal indices of the TCAM bit columns where the table begins and ends, respectively. For any two tables t1 and t2 where [VBegin(t2), VEnd(t2)]⊆[VBegin(t1), VEnd(t1)] (that is, VBegin(t1)≦VBegin(t2)≦VEnd(t2)≦VEnd(t1)) and HEnd(t1)<HBegin(t2), we say t1 shadows t2.
When table t1 shadows table t2, the ID of t1 can be reused as part of t2's ID. Suppose table t shadows tables t1, . . . , tm. Because t's ID defines the vertical TCAM region [VBegin(t), VEnd(t)], each ti (1≦i≦m) can use t's ID to distinguish ti from tables outside [VBegin(t), VEnd(t)] vertically, and use ⌈log m⌉ bits to distinguish ti from tables inside [VBegin(t), VEnd(t)] vertically. Horizontally, table t and each table ti can be distinguished by properly setting the GMR of the TCAM.
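The shadowing relation admits a direct transcription into code; the following Python sketch (illustrative only) checks the definition given above for two tables carrying their vertical entry indices and horizontal bit-column indices.

from dataclasses import dataclass

@dataclass
class Table:
    vbegin: int   # VBegin(t): first TCAM entry index
    vend: int     # VEnd(t): last TCAM entry index
    hbegin: int   # HBegin(t): first bit column
    hend: int     # HEnd(t): last bit column

def shadows(t1, t2):
    # t1 shadows t2 iff [VBegin(t2), VEnd(t2)] lies within
    # [VBegin(t1), VEnd(t1)] and HEnd(t1) < HBegin(t2).
    return (t1.vbegin <= t2.vbegin <= t2.vend <= t1.vend
            and t1.hend < t2.hbegin)

# A table spanning rows 0-5 shadows a narrower table in rows 0-2 to its right.
print(shadows(Table(0, 5, 0, 3), Table(0, 2, 4, 6)))  # prints True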
Given a region defined vertically by [v1, v2] and horizontally by [h1, h2], all tables completely contained within this region are shadow packed if and only if there exist m (m≧1) tables t1, . . . , tm in the region such that the following three conditions hold:
    • 1. Boundary: v1=VBegin(t1); VEnd(ti)+1=VBegin(ti+1) for 1≦i≦m−1; VEnd(tm)≦v2;
    • 2. No tables are allocated to the region defined vertically by [VEnd(tm)+1, v2] and horizontally by [h1, h2];
    • 3. For each i (1≦i≦m), the region defined vertically by [VBegin(ti), VEnd(ti)] and horizontally by [HEnd(ti)+1, h2] either has no tables or the tables allocated to the region are also shadow packed.
      For example, the tables in FIG. 6A are shadow packed. FIG. 6B shows the tree representation of the shadowing relationship among the tables in FIG. 6A.
Given a set of tables and a TCAM region, a shadow packing algorithm allocates the tables into the region. The goal of a shadow packing algorithm is to minimize the number of TCAM entries occupied by the tables, i.e., to minimize VEnd(tm). We call this minimization problem the Shadow Packing Optimization Problem. This problem becomes more difficult as we recurse because we must also address which tables should be allocated to which region.
In this disclosure, we present a shadow packing algorithm SPack, which has been shown to be effective in our experiments on real-life packet classifiers. The basic idea of SPack is as follows. Given a set of tables S and a TCAM region, SPack first finds the tallest table t that will fit in the region where ties are broken by choosing the fattest table. SPack returns when there are no such tables. Otherwise, SPack places t in the top left corner of the region, and SPack is recursively applied to S−{t} in the region to the right of t. After that, let S′ be the set of tables in S that have not yet been allocated. SPack is applied to S′ in the region below t. Intuitively, SPack greedily packs the tallest (and fattest) possible table horizontally. The pseudocode of SPack is provided as follows:
Algorithm 1 Shadow Packing (SPack)
Input: S : a set of tables, and a region [v1, v2], [h1, h2].
Output: S′: the set of tables in S that have not been packed.
1: Find the tallest table t ∈ S that will fit in
[v1, v2], [h1, h2] such that ties are broken by choosing
the fattest table;
2: if no table is found then
3: return S;
4: else
5: Place t in the top left corner of [v1, v2], [h1, h2];
6: S″ ← SPack
(S − {t}, VBegin(t), VEnd(t), HEnd(t) + 1, h2);
7: return SPack(S″, VEnd(t) + 1, v2, h1, h2);
8: end if
We, however, must compute the initial SPack region. The height of the initial region is the total number of rules within the set of tables. We do not need to set this value carefully because SPack only moves to another row when all the remaining tables do not fit in any of the current shadows. The width is more complicated and must be computed iteratively. For each valid TCAM width w ε {36, 72, 144, 288}, we set the initial width to be w and run SPack. Once we have a packing, we determine the number of bits b that are needed for node IDs. If the packing can accommodate these extra b bits, we are done. Otherwise, we choose an aggressive backoff scheme by recursing with a width of w−b. It is possible, particularly for w=36, that no solution will be found. To determine which TCAM width we should use, we choose the width w ε {36, 72, 144, 288} whose final successful value resulted in the fewest number of required TCAM bits. Note that there are other possible strategies for determining the width of the SPack regions; for instance, instead of reducing the region width by b, the width could be reduced by 1. Furthermore, to speed up this process, SPack can be modified to abort the packing once it detects that the table packing and IDs cannot fit within the region.
Because shadow packing establishes a hierarchy of table IDs, each table needs a new ID, and all the rule decisions need to be remapped to reflect these new IDs. Each table ID is determined by a tree representation similar to the one found in FIG. 6B, which we call a shadow packing tree. For each node v in a shadow packing tree, if v has m>1 outgoing edges, each outgoing edge is uniquely labeled using ┌log m┐ bits; if v has only one outgoing edge, that edge is labeled *. For each table t, let v be the corresponding node in the shadow packing tree. We can construct a table ID that distinguishes t from all other table IDs by concatenating the bits labeling each edge along the path from the root to v. Note that the * corresponds to a table where no additional ID bits are needed. In our shadow packing algorithm, we reserve I bit columns in the TCAM where I is the maximum number of bits needed to distinguish a table. Reserving some bit columns for storing table IDs has the advantage of simplifying the processing of packets since the bit columns containing the table IDs are fixed in the TCAM.
FIG. 7A shows the shadow packing tree for the four tables in FIG. 2D and their reassigned table IDs. FIG. 7B shows the final memory layout in the TCAM chip after shadow packing and the conceptual memory layout of the decision table within SRAM. The one bit ID column in FIG. 7B is needed to distinguish between the tables with original IDs 01 and 11. Note that table 10 shares the table ID 0 with table 01 as it is the only table in table 01's shadow. To make the decision table in FIG. 7B easier to understand, we encode it in a memory inefficient manner using columns.
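The ID assignment from a shadow packing tree can be sketched as follows (illustrative only; the tree below loosely mirrors the flavor of FIG. 7A rather than reproducing it). A node with m>1 outgoing edges labels them with ⌈log m⌉ bits, and a table's ID is the concatenation of the edge labels on its root path.

from math import ceil, log2

def assign_table_ids(tree, node='root', prefix='', ids=None):
    # tree maps each node to the list of tables in its shadow; the ID of a
    # node is built from the labels of the edges on its path from the root.
    if ids is None:
        ids = {}
    ids[node] = prefix
    children = tree.get(node, [])
    bits = ceil(log2(len(children))) if len(children) > 1 else 0
    for i, child in enumerate(children):
        label = format(i, '0%db' % bits) if bits else ''  # '' is the * edge
        assign_table_ids(tree, child, prefix + label, ids)
    return ids

tree = {'root': ['t0'], 't0': ['t00', 't01'], 't01': ['t010']}
print(assign_table_ids(tree))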
Next, the algorithm for processing packets under the shadow packing approach is described using examples. Given a packet (000, 111), the first TCAM lookup is *000******, and the lookup result is the index value of 0. This index value is used to find entry 0 in the column 00 in the SRAM which contains the decision of 0@4:01. The 0@4 means that the second lookup key should occur in table ID 0 at horizontal offset of 4, and the 01 means that the decision of the next search is located in column 01 in SRAM. To perform the second lookup, the GMR is modified to make the second lookup key 0***111***. The result of the second lookup is the index value of 1, and the decision stored in the second entry of column 01 in SRAM is retrieved, which is accept.
In the following sections, the impact TCAM SPliT has on the space, power, latency, and throughput of TCAM-based packet classification systems is evaluated. Consider TCAM SPliT with a 2-stage pipeline. Compare SPliT's performance with that of an exemplary state-of-the-art compression technique described by Meiners et al. in "TCAM Razor: A systematic approach towards minimizing packet classifiers in TCAMs", In Proc. 15th IEEE Conference on Network Protocols (October 2007), referred to herein as "TCAM Razor". This comparison allows us to assess how much benefit is gained by going from one TCAM lookup to two TCAM lookups. The net result of the experiments is that TCAM SPliT allows us to significantly reduce required TCAM space and power consumption while significantly increasing packet classification throughput with only a small latency penalty.
Experiments are first performed on a set of 25 real-world packet classifiers, which is denoted by RL. The classifiers in RL were chosen from a larger set of real-world classifiers obtained from various network service providers, where the classifiers range in size from a handful of rules to thousands of rules. Partition the original classifiers into 25 groups where the classifiers in each group share similar structure. For example, the ACLs configured for the different interfaces of a router often share a similar structure. RL is created by randomly choosing one classifier from each of the 25 groups. We did this because classifiers with similar structure often exhibit similar results for TCAM SPliT. If all of our classifiers were used, the results would be skewed by the relative size of each group.
Because packet classifiers are considered confidential due to security concerns, which makes it difficult to acquire a large quantity of real-world classifiers, we generated a set of synthetic classifiers SYN with the number of rules ranging from 250 to 8000. The predicate of each rule has five fields: source IP, destination IP, source port, destination port, and protocol type. The generation method is based upon Singh et al.'s model of synthetic rules described in "Packet classification using multidimensional cutting", In Proc. ACM SIGCOMM (2003).
To stress test the sensitivity of the algorithms to the number of decisions in a classifier, a set of classifiers RLU is created by replacing the decision of every rule in each classifier with a unique decision. Similarly, we created the set SYNU. Thus, each classifier in RLU (or SYNU) has the maximum possible number of distinct decisions. Such classifiers might arise in the context of rule logging where the system monitors the frequency with which each rule is the first matching rule for a packet.
To give a sense of the complexity of the classifier sets RL and SYN, compute the minimum number of "atomic intervals" in each field of each classifier, where an atomic interval is one that does not cross any rule range boundary. We also perform direct expansion on each of these atomic intervals to compute how many "atomic prefix intervals" are contained in each field of each classifier. Table 2 below shows the average number of unique atomic intervals and atomic prefix intervals for each field for RL and SYN. We do not include RLU and SYNU since the number of atomic intervals and atomic prefix intervals is not affected by the decision associated with each rule.
TABLE 2
       Average Atomic Intervals        Average Atomic Prefix Intervals
       P     SIP     SP    DIP   DP      P     SIP     SP    DIP   DP
RL     4.5   37.2    3.6  22.2  27.7   10.5   161.6  19.4   89.6  95.0
SYN    1.0  238.7   95.3   3.3   2.4    1.0   643.8 290.3   31.7  12.4
We focus our evaluation on the 2-stage pipeline where each chip is configured to be 72 bits wide. The variable order that we use to convert a classifier to an equivalent FDD affects the number of tables generated by our algorithms, which consequently affects the TCAM space efficiency. There are 5!=120 different permutations of the five packet fields (source IP address, destination IP address, source port number, destination port number, and protocol type). For RL, we tried each of the 120 permutations and discovered that the best permutation is (Protocol, Destination IP, Source Port, Destination Port, Source IP). In general, we try all possible partitions. However, since we limit each chip to be 72 bits wide, for this field order and assuming no field elimination, there are only two valid partitions: (Protocol+Destination IP, Source Port+Destination Port+Source IP) and (Protocol+Destination IP+Source Port, Destination Port+Source IP). When field elimination does occur, we have up to four valid partitions. In these cases, we select the partition that results in the best compression. Finally, in a few examples with the RLU data set, it is best to pack all five fields on one chip and only use TCAM Razor.
First define the metrics for measuring the space effectiveness of TCAM SPliT. Let C denote a classifier, S denote a set of classifiers, |S| denote the number of classifiers in S, and A denote an algorithm. We use A(C) and Direct(C) to denote the number of TCAM bits used for classifier C by algorithm A and direct expansion, respectively. For a single classifier C, we define the compression ratio of algorithm A on C as A(C)/Direct(C). For a set of classifiers S, we define the average compression ratio of algorithm A over S to be (ΣCεS A(C)/Direct(C))/|S| and the total compression ratio of algorithm A over S to be (ΣCεS A(C))/(ΣCεS Direct(C)).
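These metrics translate directly into code. The following minimal Python sketch (illustrative only) also shows why the average and the total compression ratio can differ: the average weighs every classifier equally, whereas the total is dominated by the larger classifiers.

def average_compression_ratio(pairs):
    # Mean of per-classifier ratios over S, given (A(C), Direct(C)) pairs.
    return sum(a / d for a, d in pairs) / len(pairs)

def total_compression_ratio(pairs):
    # Sum of A(C) over S divided by the sum of Direct(C) over S.
    return sum(a for a, _ in pairs) / sum(d for _, d in pairs)

pairs = [(200, 1000), (50, 10000)]        # two hypothetical classifiers
print(average_compression_ratio(pairs))   # (0.20 + 0.005)/2 = 0.1025
print(total_compression_ratio(pairs))     # 250/11000, roughly 0.0227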
Table 3 below shows the average and total compression ratios for TCAM SPliT and TCAM Razor on RL, RLU, SYN, and SYNU. FIGS. 8A, 8B, 8C, and 8D show the compression ratios for each of the classifiers in RL and RLU. We split RL and RLU into two groups so that the range of compression ratios in each figure is not too large, to improve the readability of the graphs.
TABLE 3
        Average Compression Ratio    Total Compression Ratio
          SPliT       Razor            SPliT       Razor
RL         8.0%       24.5%             2.0%        8.8%
RLU       16.0%       31.9%             6.3%       13.1%
SYN        2.6%       10.4%             1.7%        7.8%
SYNU       6.7%       42.7%            10.7%       38.4%
Three observations are made from the experimental results. First, TCAM SPliT achieves significant space compression and significantly outperforms TCAM Razor. For example, the average compression ratio of TCAM SPliT on RL is 8.0%, which is three times better than the average compression ratio of 24.5% achieved by TCAM Razor on the same data set. Second, TCAM SPliT is able to find compression opportunities even when TCAM Razor cannot achieve any appreciable compression. For example, TCAM Razor is unable to compress classifiers 18 and 22 in FIG. 8B whereas TCAM SPliT is able to compress these two classifiers. This illustrates how TCAM SPliT is able to eliminate some of the multiplicative effect that is unavoidable in single-lookup schemes. Third, TCAM SPliT is very effective for classifiers with a large number of distinct decisions. For example, on RLU, TCAM SPliT achieves an average compression ratio of 16.0%, which is roughly twice as good as TCAM Razor's average compression ratio of 31.9%. Note that there are a few cases in the RLU data set where TCAM SPliT's compression is worse than TCAM Razor's compression. In such cases, we default to using TCAM Razor and a single lookup.
The power and energy savings of TCAM SPliT and TCAM Razor are analyzed using the TCAM power model presented by Agrawal and Sherwood in "Modeling TCAM power for next generation network devices", In Proc. IEEE Int. Symposium on Performance Analysis of Systems and Software (2006). The power, latency, and throughput models are the only publicly available models that we know of for analyzing TCAM-based packet classification schemes.
Let P(A(C)) represent the nanojoules consumed to classify one packet on a TCAM with size equal to the number of rules in A(C). For one classifier C, we define the power ratio of algorithm A as
P(A(C)) / P(Direct(C)).
For a set of classifiers S, we define the average power ratio of algorithm A over S to be
(Σ_{C ε S} P(A(C))/P(Direct(C))) / |S|.
Table 4 below shows the average power ratios for TCAM SPliT and TCAM Razor on RL, RLU, SYN, and SYNU. However, these data sets only provide power ratio data for small classifiers that fit on TCAM chips smaller than 1 Mbit. To extrapolate the power ratio to larger classifiers, we consider theoretical classifiers whose direct expansion fits exactly within standard TCAM chip sizes ranging from 1 Mbit to 36 Mbit. We further assume that when TCAM SPliT and TCAM Razor are applied to these classifiers, the resulting compression ratios will be 8.0% and 24.5%, respectively. Finally, we use Agrawal and Sherwood's TCAM power model to calculate the power consumed by each search for each of these classifiers and their compressed versions. The extrapolated data are included in FIG. 9A and Table 4.
                            Average
          Power             Latency             Throughput
        SPliT    Razor    SPliT    Razor     SPliT     Razor
RL      62.1%    90.9%   122.5%    97.4%    163.3%    102.7%
RLU     62.7%    91.2%   123.2%    97.6%    161.5%    102.6%
SYN     54.8%    80.8%   111.9%    89.2%    182.8%    114.7%
SYNU    57.2%    86.9%   115.9%    91.2%    173.2%    111.3%
1 Mb    44.3%    63.5%   120.0%    81.9%    166.7%    122.1%
2 Mb    35.5%    51.2%   133.3%    70.9%    150.0%    141.0%
4.5 Mb  28.2%    41.0%   142.9%    71.4%    140.0%    140.0%
9 Mb    23.2%    33.7%   150.0%    75.0%    133.3%    133.3%
18 Mb   20.3%    29.5%   123.4%    61.7%    162.0%    162.0%
36 Mb   18.8%    27.0%    78.2%    39.1%    255.8%    255.8%
Two observations are made from the experimental results. First, although TCAM SPliT uses two TCAM chips and each chip runs at a higher frequency than the single TCAM chip in single-lookup schemes, TCAM SPliT still achieves significant power savings because of its huge space savings. TCAM SPliT reduces energy consumption per lookup by at least 33% on all data sets. On our extrapolated data, the power savings of TCAM SPliT continue to grow as classifier size increases: for the largest classifier size, TCAM SPliT achieves a power ratio of 18.8%. The reason TCAM SPliT works so well is twofold: TCAM chip energy consumption is reduced both by reducing the number of rows in a TCAM chip and by reducing its width. TCAM SPliT reduces the width of a TCAM chip by a factor of 2 (from 144 to 72 bits), and it reduces the number of rows by a significant amount as well. Even more energy could be saved if we ran the smaller TCAM chips at a lower frequency. Second, TCAM SPliT significantly outperforms TCAM Razor in reducing energy consumption. For every data set, TCAM SPliT uses roughly two thirds of the power that TCAM Razor uses. For example, on RL, TCAM SPliT reduces energy per packet by 37.9%, which is significantly more than TCAM Razor's 9.1% reduction.
The latency per packet of TCAM SPliT and TCAM Razor is analyzed using the TCAM lookup latency model presented by Agrawal and Sherwood. For single-lookup schemes, let L(A(C)) represent the number of nanoseconds required to perform one search on a TCAM with size equal to the number of rules in A(C). For TCAM SPliT, let L(A(C)) represent the number of nanoseconds required to perform both searches on the 2-stage pipeline. For one classifier C, we define the latency ratio of algorithm A as
L(A(C)) / L(Direct(C)).
For a set of classifiers S, we define the average latency ratio for algorithm A over S to be
(Σ_{C ε S} L(A(C))/L(Direct(C))) / |S|.
Table 4 also shows the average latency ratios for TCAM SPliT and TCAM Razor on RL, RLU, SYN, and SYNU. Using the same methodology as we did for power, we extrapolate our results to larger classifiers. The extrapolated data are included in FIG. 9B and Table 4.
The following observations are made from the experimental results. First, although TCAM SPliT needs to perform two TCAM lookups for each packet, its total lookup time per packet is significantly less than double that of single-lookup direct expansion. The reason is that the lookup time of a TCAM chip increases with its size; since TCAM SPliT can use two small TCAM chips, its latency is significantly less than double that of single-lookup direct expansion. Second, for smaller classifiers, TCAM SPliT's latency is also much less than double that of TCAM Razor; for large packet classifiers, however, its latency is essentially double that of TCAM Razor. Given that packet classification systems typically measure speed in terms of throughput rather than latency, the small latency penalty of TCAM SPliT is relatively unimportant if we can significantly improve packet classification throughput.
The packet classification throughput of TCAM SPliT and TCAM Razor is also analyzed using the TCAM throughput model presented by Agrawal and Sherwood. For single lookup schemes, let T(A(C)) represent the number of lookups per second for a TCAM of size A(C). For TCAM SPliT, let T(A(C)) be the minimum throughput of either stage in the 2-stage pipeline. For one classifier C, we define the throughput ratio of algorithm A as
T(A(C)) / T(Direct(C)).
For a set of classifiers S, we define the average throughput ratio for algorithm A over S to be
(Σ_{C ε S} T(A(C))/T(Direct(C))) / |S|.
Table 4 shows the average throughput ratios for TCAM SPliT and TCAM Razor on RL, RLU, SYN, and SYNU. Using the same methodology we did for power and latency, we extrapolate our results to larger classifiers. The extrapolated data are included in FIG. 9C and Table 4.
We make the following observations from the experimental results. First, compared with direct expansion, TCAM SPliT significantly increases throughput for classifiers of all sizes; the typical improvement is in the 60%-80% range. For an extremely large classifier whose direct expansion requires a 36 Mbit TCAM, TCAM SPliT improves throughput by 155.8%. Second, for smaller classifiers, TCAM SPliT outperforms even TCAM Razor. For example, on the RL data set, TCAM SPliT improves throughput by roughly 60% when compared to TCAM Razor. For large classifiers, TCAM SPliT and TCAM Razor achieve essentially the same throughput.
In an exemplary embodiment, the TCAM SPliT algorithm was implemented on the Microsoft .Net Framework 2.0 and the experiments were carried out on a desktop PC running Windows XP with 8 GB of memory and a single 2.81 GHz AMD Athlon 64 X2 5400+ processor. All algorithms used a single processor core.
In another aspect of this disclosure, a topological view of the TCAM encoding process is proposed, where the semantics of the packet classifier are considered. In most packet classifiers, many coordinates (i.e., values) within a field domain are equivalent. The idea of domain compression is to reencode the domain so as to eliminate as many redundant coordinates as possible. This leads to both rule width and rule number compression. From a geometric perspective, domain compression "squeezes" a colored hyperrectangle as much as possible. For example, consider the colored rectangle 102 in FIG. 10 that represents the classifier rules below the rectangle. In field F1, represented by the X-axis, all values in [0, 7] ∪ [66, 99] are equivalent; that is, for any y ε D(F2) and any x1, x2 ε [0, 7] ∪ [66, 99], packets (x1, y) and (x2, y) have the same decision. Therefore, when reencoding F1, we can map all values in [0, 7] ∪ [66, 99] to a single value, say 0. By identifying such equivalences along all dimensions, the rectangle 102 is reencoded to the adjacent rectangle 104, whose corresponding classifier rules are also shown below the rectangle. Two transforming tables, for F1 and F2 respectively, are shown between the two rectangles 102, 104. "a" is used as a shorthand for "accept" and "d" as a shorthand for "discard."
Given a d-dimensional classifier C over fields F1, . . . , Fd, a topological transformation process produces two separate components. The first component is a set of transformers T={Ti|1≦i≦d}, where transformer Ti transforms D(Fi) into a new domain D′(Fi). Together, the set of transformers T transforms the original packet space Σ into a new packet space Σ′. The second component is a transformed d-dimensional classifier C′ over packet space Σ′ such that for any packet (p1, . . . , pd) ε Σ, the following condition holds:
C(p1, . . . , pd)=C′(T1(p1), . . . , Td(pd))
Each of the d transformers Ti and the transformed packet classifier C′ are implemented in TCAM.
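As a minimal sketch of this condition, the Python fragment below models the transformers and the transformed classifier as plain callables standing in for TCAM lookups; all names are illustrative assumptions.

```python
# A minimal sketch of the transformation condition: classifying a packet
# through the transformers and the transformed classifier must reproduce
# the original decision. Plain callables stand in for TCAM lookups here.

def classify_transformed(packet, transformers, c_prime):
    """Return C'(T1(p1), ..., Td(pd)) for a packet (p1, ..., pd)."""
    encoded = tuple(t(p) for t, p in zip(transformers, packet))
    return c_prime(encoded)

# Equivalence check for any packet p (original_classifier is hypothetical):
#   assert original_classifier(p) == classify_transformed(p, T, c_prime)
```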
The TCAM space needed by our transformation approach is measured by the total TCAM space needed by the d+1 tables C′, T1, . . . , Td. We define the space used by a classifier or transformer in a TCAM as the number of entries (i.e., rules) multiplied by the width of the TCAM in bits: space=# of entries×TCAM width. Although TCAMs can be configured with varying widths, they do not allow arbitrary widths. The width of a TCAM can typically be set to values such as 36, 72, 144, 288, or 576 bits, or 40, 80, 160, or 320 bits (per entry). For this section, we assume the allowable widths are 40, 80, 160, and 320 bits. The primary goal of the transformation approach is to produce C′, T1, . . . , Td such that the TCAM space needed by these d+1 TCAM tables is much smaller than the TCAM space needed by the original classifier C. Most previous reencoding approaches ignore the space required by the transformers and only focus on the space required by the transformed classifier C′. Note that we can implement the table for the protocol field using SRAM if desired since the field has only 8 bits.
There are two natural architectures for storing the d+1 TCAM tables C′, T1, . . . , Td: the multi-lookup architecture and the pipelined-lookup architecture.
In the multi-lookup architecture, we store all d+1 tables in one TCAM chip. For each table, we prepend a ⌈log(d+1)⌉-bit table ID to every entry. FIG. 11 illustrates the packet classification process using the multi-lookup architecture when d=2. Suppose we use the table IDs 00, 01, and 10 for the three tables C′, T1, and T2, respectively. Given a packet (p1, p2), we first concatenate T1's table ID 01 with p1 and use the resulting bit string 01|p1 as the search key for the TCAM. Let p1′ denote the search result. Second, we concatenate T2's table ID 10 with p2 and use the resulting bit string 10|p2 as the search key for the TCAM. Let p2′ denote the search result. Third, we concatenate the table ID 00 of C′ with p1′ and p2′ and use the resulting bit string 00|p1′|p2′ as the search key for the TCAM. The search result is the final decision for the given packet (p1, p2).
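The three-search sequence can be sketched as follows, with bit strings modeled as Python strings and a hypothetical tcam_search callable standing in for one search on the shared TCAM chip.

```python
# Sketch of the multi-lookup sequence for d = 2: each search key is a 2-bit
# table ID prepended to the value, matching the IDs 01 (T1), 10 (T2), and
# 00 (C') used above.

def multi_lookup_classify(p1: str, p2: str, tcam_search) -> str:
    p1_prime = tcam_search("01" + p1)               # search transformer T1
    p2_prime = tcam_search("10" + p2)               # search transformer T2
    return tcam_search("00" + p1_prime + p2_prime)  # final search on C'
```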
There are two natural pipelined-lookup architectures: parallel pipelined-lookup and chained pipelined-lookup. In both, we store the d+1 tables in d+1 separate TCAMs, so table IDs are no longer needed. In the parallel pipelined-lookup architecture, the d transformer tables Ti, laid out in parallel, form a two-element pipeline with the transformed classifier C′. FIG. 12 illustrates the packet classification process using the parallel pipelined-lookup architecture when d=2. Given a packet (p1, p2), we send p1 and p2, in parallel over separate buses, to T1 and T2, respectively. Then, the search result p1′|p2′ is used as the key to search C′. This second search result is the final decision for the given packet (p1, p2). FIG. 13 illustrates the packet classification process using the chained pipelined-lookup architecture when d=2. We focus primarily on the parallel pipelined-lookup architecture as this allows us to minimize latency.
The main advantage of the multi-lookup architecture is that it can be easily deployed since it requires minimal modification of existing TCAM-based packet processing systems. Its main drawback is a modest slowdown in packet processing throughput because d+1 TCAM searches are required to process a d-dimensional packet. In contrast, the main advantage of the two pipelined-lookup architectures is high packet processing throughput. Their main drawback is that the hardware needs to be modified to accommodate d+1 TCAM chips (or d chips if SRAM is used for the protocol field). A performance modeling analysis of the parallel pipelined lookup and multi-lookup architectures is presented below.
An innovative domain compression technique is now described. The basic idea is to simplify the logical structure of a classifier by mapping the domain of each field D (Fi) to the smallest possible domain D′ (Fi). We implement domain compression by exploiting the equivalence classes that any classifier C defines on the domain of each of its fields. Domain compression is especially powerful because it contributes to both rule width compression, which allows us to use 40 bit TCAM entries instead of 160 bit TCAM entries, and rule number compression because each transformed rule r′ in classifier C′ will contain fewer equivalence classes than the original rule r did in classifier C. Through domain compression and redundancy removal, C′ typically has far fewer rules than C did, something no other reencoding scheme can achieve.
The domain compression algorithm generally comprises three steps: (1) computing equivalence classes, (2) constructing a transformer Ti for each field Fi, and (3) constructing the transformed classifier C′.
First, we formally define the equivalence relation that classifier C defines on each field domain and the resulting equivalence classes. We use the notation Σ−i to denote the set of all (d−1)-tuple packets over the fields (F1, . . . , Fi−1, Fi+1, . . . , Fd) and p−i to denote an element of Σ−i. We then use C(pi, p−i) to denote the decision that packet classifier C makes for the packet p that is formed by combining pi ε D(Fi) and p−i.
Given a packet classifier C over fields F1, . . . , Fd, we say that x, y ε D(Fi) for 1≦i≦d are equivalent with respect to C if and only if C(x, p−i)=C(y, p−i) for any p−i ε Σ−i. It follows that C partitions D(Fi) into equivalence classes. We use the notation C{x} to denote the equivalence class that x belongs to as defined by classifier C.
In domain compression, we compress every equivalence class in each domain D(Fi) to a single point in D′(Fi). The crucial tool of the domain compression algorithm is the Firewall Decision Diagram (FDD) noted above. After an FDD f is constructed, we reduce f's size by merging isomorphic subgraphs, yielding a reduced FDD.
The first step of our domain compression algorithm is to convert a given d-dimensional packet classifier C to d equivalent reduced FDDs f1 through fd where the root of FDD fi is labeled by field Fi. FIG. 14A shows an example packet classifier over two fields F1 and F2 where the domain of each field is [0,63]. FIGS. 14B and 14C show the two FDDs f1 and f2, respectively. The FDDs f1 and f2 are almost reduced except that the terminal nodes are not merged together for illustration purposes.
The crucial observation is that each edge out of reduced FDD fi's root node corresponds to one equivalence class of domain D(Fi). For example, consider the classifier in FIG. 14A and the corresponding FDD f1 in FIG. 14B. Obviously, for any p1 and p1′ in [7, 11] ∪ [16, 19] ∪ [39, 40] ∪ [43, 60], we have C(p1, p2)=C(p1′, p2) for any p2 in [0, 63], so it follows that C{p1}=C{p1′}. Thus, for any packet classifier C over fields F1, . . . , Fd and an equivalent reduced FDD fi rooted at an Fi node v, the labels of v's outgoing edges are exactly the equivalence classes over field Fi as defined by C (referred to as the equivalence class theorem).
Given a packet classifier C over fields F1, . . . , Fd and the d equivalent reduced FDDs f1, . . . , fd where the root node of fi is labeled Fi, we compute transformer Ti as follows. Let v be the root of fi with m outgoing edges e1, . . . , em. First, for each edge ej out of v, we choose one of the ranges in ej's label to be a representative label, which we call the landmark.
In accordance with the equivalence class theorem, all the ranges in ej's label belong to the same equivalence class, so any one of them can be chosen as the landmark. For each equivalence class, we choose the range that intersects the fewest number of rules in C as the landmark, breaking ties arbitrarily. We then sort the edges in increasing order of their landmarks. We use Lj and ej to denote the landmark range and corresponding edge in sorted order, where edge e1 has the smallest-valued landmark L1 and edge em has the largest-valued landmark Lm. Transformer Ti then maps all values in ej's label to the landmark index j−1, so that indices run from 0 to m−1, consistent with the converted-rule construction below. For example, in FIGS. 14B and 14C, the grayed ranges are chosen as the landmarks of their corresponding equivalence classes, and FIGS. 14D and 14E show transformers T1 and T2 that result from choosing those landmarks.
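A sketch of this construction in Python, where each edge is modeled as a list of (lo, hi) ranges already sorted by landmark; the helper names are illustrative.

```python
# Sketch of transformer construction: every range on edge e_j maps to that
# edge's landmark index. Indices are 0-based, as in the converted-rule
# construction below.

def build_transformer(edges_sorted_by_landmark):
    """Return a list of ((lo, hi), index) entries implementing transformer Ti."""
    mapping = []
    for j, edge_ranges in enumerate(edges_sorted_by_landmark):
        for lo, hi in edge_ranges:
            mapping.append(((lo, hi), j))
    return mapping

def apply_transformer(mapping, x):
    """Ti(x): return the index of the equivalence class containing x."""
    for (lo, hi), j in mapping:
        if lo <= x <= hi:
            return j
    raise ValueError("value outside the field domain")
```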
A transformed classifier C′ is constructed from classifier C using transformers Ti for 1≦i≦d as follows. Let F1 ε S1 ∧ . . . ∧ Fd ε Sd→(decision) be an original rule in C. The domain compression algorithm converts Fi ε Si to Fi′ ε Si′ such that for any landmark range Lj (0≦j≦m−1), Lj ∩ Si≠Ø if and only if j ε Si′. Stated another way, we replace range Si with range [a, b] ⊆ D′(Fi) where a is the smallest number in [0, m−1] such that La ∩ Si≠Ø and b is the largest number in [0, m−1] such that Lb ∩ Si≠Ø. Note that it is possible that no landmark range intersects range Si; in this case a and b are undefined and Si′=Ø. For a converted rule r′=F1′ ε S1′ ∧ . . . ∧ Fd′ ε Sd′→(decision) in C′, if there exists 1≦i≦d such that Si′=Ø, then the converted rule r′ is deleted from C′.
Consider the rule F1 ε [7, 60] ∧ F2 ε [10, 58]→discard in the example classifier in FIG. 14A. For field F1, the five landmarks are the five grayed intervals in FIG. 14B, namely [0, 0], [1, 6], [7, 11], [12, 15], and [63, 63]. Among these five landmarks, [7, 60] overlaps with [7, 11] and [12, 15], which are mapped to 2 and 3 respectively by transformer T1. Thus, F1 ε [7, 60] is converted to F1′ ε [2, 3]. Similarly, for field F2, [10, 58] overlaps with only one of F2's landmarks, [10, 19], which is mapped to 3 by transformer T2. Thus, F2 ε [10, 58] is converted to F2′ ε [3, 3].
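The conversion of a single field range reduces to an interval-overlap test against the ordered landmarks; the sketch below reproduces the F1 conversion from this example (the landmark list mirrors FIG. 14B, and the helper name is an assumption).

```python
# Sketch of the range-conversion step: Si is replaced by [a, b], where a and
# b are the smallest and largest landmark indices whose landmarks intersect
# Si. Landmarks are (lo, hi) tuples listed in sorted (index) order.

def convert_range(si, landmarks):
    lo_s, hi_s = si
    hits = [j for j, (lo, hi) in enumerate(landmarks)
            if lo <= hi_s and lo_s <= hi]   # standard interval-overlap test
    if not hits:
        return None                         # empty Si': the rule is deleted
    return (min(hits), max(hits))

# Worked example from the text: F1 in [7, 60] against F1's five landmarks.
landmarks_f1 = [(0, 0), (1, 6), (7, 11), (12, 15), (63, 63)]
assert convert_range((7, 60), landmarks_f1) == (2, 3)
```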
C′ together with T is semantically equivalent to C. Consider any classifier C and the resulting transformers T and transformed classifier C′. For any packet p=(p1, . . . , pd), we have
C(p1, . . . , pd)=C′(T1(p1), . . . , Td(pd)).
For each field Fi for 1≦i≦d, consider p's field value pi. Let L(pi) be the landmark range for C{pi}. Set xi=min(L(pi)). Now consider the packet x=(x1, . . . , xd) and the packets x(j)=(x1, . . . , xj, pj+1, . . . , pd) for 0≦j≦d; that is, in packet x(j), the first j fields are identical to packet x and the last d−j fields are identical to packet p. Note that x(0)=p and x(d)=x. We now show that C(p)=C(x). This follows from C(x(0))=C(x(1))= . . . =C(x(d)), where each equality C(x(j−1))=C(x(j)) follows from the fact that xj and pj belong to the same equivalence class within D(Fj).
Let r be the first rule in C that packet x matches. We argue that p′ will match the transformed rule r′ ε C′. Consider the conjunction Fi ε Si of rule r. Since x matches rule r, it must be the case that xi ε Si. This implies that L(pi) ∩ Si≠Ø. Thus, by our construction, pi′=Ti(pi)=Ti(xi) ε Si′. Since this holds for all fields Fi, packet p′ matches rule r′. We also argue that packet p′ will not match any rule before transformed rule r′ in C′. Suppose packet p′ matches some rule r1′ ε C′ that occurs before rule r′. This implies for each conjunction Fi ε Si of the corresponding rule r1 ε C that L(pi) ∩ Si≠Ø. However, this implies that xi ε Si, since if any point in L(pi) is in Si, then all points in L(pi) are in Si. It follows that x matches rule r1 ε C, contradicting our assumption that rule r was the first rule that x matches in C. Thus, p′ cannot match rule r1′. It then follows that r′ will be the first rule in C′ that p′ matches, and the theorem follows.
FIG. 15 further illustrates this domain compression technique for constructing a packet classifier. Given a set of rules for packet classification, a firewall decision diagram is first constructed at 152 for each field defined in the set of rules, where the root node in each of the firewall decision diagrams corresponds to a different field. Each of these firewall decision diagrams is reduced in size at 154 by merging isomorphic subgraphs therein. In each of the reduced diagrams, one label is selected at 156 for each of the outgoing edges from the root node to be a representative label. From each of the reduced diagrams, a first stage classifier is constructed at 158, where each value of a given field of a data packet maps to a result which corresponds to an input of a second stage classifier. A second stage classifier is then constructed at 159 from the set of rules based on the overlap of a given rule in the set of rules with the representative labels from the reduced firewall decision diagrams. More specifically, the second stage classifier is constructed by comparing each field of a given rule in the set of rules to the corresponding representative labels for the field and creating a mapping from the results of the first stage classifier to the decision associated with the given rule when a field in the given rule overlaps with the corresponding representative labels for that field. The first and second stage classifiers are preferably instantiated on different content-addressable memory devices but may also be instantiated on the same memory device.
In prefix alignment, we "shift," "shrink," or "stretch" ranges by transforming the domain of each field to a new "prefix-friendly" domain so that the majority of the reencoded ranges either are prefixes or can be expressed by a small number of prefixes. This reduces the cost of range expansion and leads to rule number compression with a potentially small loss in rule width compression.
First, we solve the special case where C has only one field F1; an optimal solution is developed using dynamic programming techniques. This solution is then used as a building block to perform prefix alignment on multi-dimensional classifiers. Finally, we compose domain compression and prefix alignment together.
The one-dimensional prefix alignment problem is equivalent to the following "cut" problem. Consider the three ranges [0, 12], [5, 15], and [0, 15] over domain D(F1)=[0, 15] in classifier C in FIG. 16A, and suppose the transformed domain is D′(F1)=[00, 11] in binary format. Because D′(F1) has a total of 4 elements, we want to identify three cut points 0≦x1≦x2≦x3≦15 such that if [0, x1] ε D(F1) transforms to 00 ε D′(F1), [x1+1, x2] transforms to 01, [x2+1, x3] transforms to 10, and [x3+1, 15] transforms to 11, the range expansion of the transformed ranges will have as few rules as possible. For this simple example, there are two families of optimal solutions: those with x1 anywhere in [0, 3], x2=4, and x3=12, and those with x1=4, x2=12, and x3 anywhere in [13, 15]. For the first family of solutions, range [0, 12] is transformed to [00, 10]=0* ∪ 10, range [5, 15] is transformed to [10, 11]=1*, and range [0, 15] is transformed to [00, 11]=**. In the second family of solutions, range [0, 12] is transformed to [00, 01]=0*, range [5, 15] is transformed to [01, 11]=01 ∪ 1*, and range [0, 15] is transformed to [00, 11]=**. The classifier C′ in FIG. 16A shows the three transformed ranges using the first family of solutions. In both cases, the range expansion of the transformed ranges has only 4 prefix rules while the range expansion of the original ranges has 7 prefix rules.
An optimal solution can be computed using a divide and conquer strategy. First observe that we can divide the original problem into two subproblems by choosing the middle cut point. Next observe that a cut point should be the starting or ending point of a range, if possible, in order to reduce range expansion. Suppose the target domain D′(F1) is [0, 2^b−1]. First we need to choose the middle cut point x_{2^{b−1}}, which divides the problem into two subproblems with target domains [0, 2^{b−1}−1]=0{*}^{b−1} and [2^{b−1}, 2^b−1]=1{*}^{b−1}, respectively. Consider the example in FIG. 16A: the x2 cut point partitions [0, 15] into [0, x2], which transforms to prefix 0*, and [x2+1, 15], which transforms to prefix 1*. The second observation implies either x2=4 or x2=12. Suppose we choose x2=4; that is, we choose the dashed line in FIG. 16A. This produces two subproblems where we need to identify the x1 cut point in the range [0, 4] and the x3 cut point in [5, 15]. In the two subproblems, we include each range trimmed to fit the restricted domain. For example, ranges [0, 12] and [0, 15] are trimmed to [0, 4] in the first subproblem. In the second subproblem, ranges [5, 15] and [0, 15] are trimmed to [5, 15] while range [0, 12] is trimmed to [5, 12]. We must maintain each trimmed range even if there are duplicates. In the first subproblem, the choice of x1 is immaterial since both trimmed ranges span the entire restricted domain. In the second subproblem, the range [5, 12] dictates that x3=12 is the right choice.
This divide and conquer process of computing cut points may be represented as a binary cut tree. FIG. 16B depicts the tree where we select x2=4 and x3=12. This tree also encodes the transformation from the original domain to the target domain: all the values in a terminal node are mapped to the prefix represented by the path from the root to the terminal node. For example, as the path from the root to the terminal node of [0,4] is 0, all values in [0,4] ε D(F1) are transformed to 0*.
In domain compression, we considered transformers that mapped points in D(Fi) to points in D′(Fi). In prefix alignment, we consider transformers that map points in D(Fi) to prefix ranges in D′(Fi). If this is confusing, we can also work with transformers that map points in D(Fi) to points in D′(Fi) with no change in results; however, transformers that map to prefixes more accurately represent the idea of prefix alignment than transformers that map to points. Because we will perform range expansion on C′ before performing any further optimizations including redundancy removal, we can ignore rule order. We can then view a one-dimensional classifier C as a multiset of ranges S in D(F1).
The technical details of our dynamic programming solution to the prefix alignment problem are presented by addressing four issues.
First, it is shown that prefix alignment preserves the semantics of the original classifier: we first define the concept of prefix transformers and then show that prefix alignment is correct when prefix transformers are used.
Given a prefix P, we use min P and max P to denote the smallest and largest values in P, respectively. A transformer Ti is an order-preserving prefix transformer from D(Fi) to D′(Fi) for a packet classifier C if Ti satisfies the following three properties: (1) (prefix property) ∀x ε D(Fi), Ti(x)=P where P is a prefix in domain D′(Fi); (2) (order-preserving property) ∀x, y ε D(Fi), x≦y implies either Ti(x)=Ti(y) or max Ti(x)<min Ti(y); (3) (consistency property) ∀x, y ε D(Fi), Ti(x)=Ti(y) implies C{x}=C{y}.
The following Lemma 6.1 and Theorem 6.1 easily follow from the definition of prefix transformers.
Lemma 6.1: Given any prefix transformer Ti for a field Fi, for any a, b, x ε D(Fi), x ε [a, b] if and only if Ti(x) ⊆ [min Ti(a), max Ti(b)].
Theorem 6.1 (Prefix Alignment Theorem): Given a packet classifier C over fields F1, . . . , Fd, d prefix transformers T={Ti|1≦i≦d}, and the classifier C′ constructed by replacing any range [a, b] over field Fi (1≦i≦d) by the range [min Ti(a), max Ti(b)], the condition C(p1, . . . , pd)=C′(T1(p1), . . . , Td(pd)) holds.
Next, we identify candidate cut points using the concept of atomic ranges. For any multiset of ranges S (a multiset may have duplicate entries) and any range x over domain D(F1), we use S@x to denote the multiset of ranges in S that contain x. Given a multiset S of ranges, the union of which constitutes a range denoted ∪S, and a set of ranges S′, S′ is the atomic range set of S if and only if the following four conditions hold: (1) (coverage property) ∪S=∪S′; (2) (disjoint property) ∀x, y ε S′ with x≠y, x ∩ y=Ø; (3) (atomicity property) ∀x ε S and ∀y ε S′, x ∩ y≠Ø implies y ⊆ x; (4) (maximality property) ∀x, y ε S′, max x+1=min y implies S@x≠S@y.
For any multiset of ranges S, there is a unique atomic range set of S, which we denote AR(S). Because of the maximality property of the atomic range set, the candidate cut points correspond to the end points of ranges in AR(S). We now show how to compute S-start points and S-end points. For any range [x, y] ε S, define the points x−1 and y to be S-end points, and define the points x and y+1 to be S-start points. Note that we ignore x−1 if x is the minimum element of ∪S and y+1 if y is the maximum element of ∪S. Let (s1, . . . , sm) and (e1, . . . , em) be the ordered lists of S-start points and S-end points. It follows for 1≦i≦m−1 that si≦ei=si+1−1. Thus, AR(S)={[s1, e1], . . . , [sm, em]}.
For example, if we consider the three ranges in classifier C in example FIG. 16A, range [0,12] creates S-start point 13 and S-end point 12, range [5,15] creates S-end point 4 and S-start point 5, and range [0, 15] creates no S-start points or S-end points. Finally, 0 is an S-start point and 15 is an S-end point. This leads to AR(S)={[0, 4], [5,12], [13,15]}.
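This construction is mechanical; the following sketch computes AR(S) from the S-start and S-end points and checks it against this example (names are illustrative).

```python
# Sketch of atomic-range computation following the S-start/S-end point
# construction above. Ranges are inclusive (lo, hi) tuples over the domain
# [dom_lo, dom_hi]; boundary points outside the domain are ignored.

def atomic_ranges(ranges, dom_lo, dom_hi):
    starts, ends = {dom_lo}, {dom_hi}
    for lo, hi in ranges:
        if lo > dom_lo:
            starts.add(lo)
            ends.add(lo - 1)
        if hi < dom_hi:
            ends.add(hi)
            starts.add(hi + 1)
    return list(zip(sorted(starts), sorted(ends)))

# The example from the text: AR({[0,12], [5,15], [0,15]}) over [0, 15].
assert atomic_ranges([(0, 12), (5, 15), (0, 15)], 0, 15) == \
    [(0, 4), (5, 12), (13, 15)]
```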
Next, we choose the number of bits b used to encode domain D′(F1). This value b imposes constraints on legal prefix transformers. Consider S={[0, 4], [0, 7], [0, 12], [0, 15]} with AR(S)={[0, 4], [5, 7], [8, 12], [13, 15]}. If b=2, then the only legal prefix transformer maps [0, 4] to 00, [5, 7] to 01, [8, 12] to 10, and [13, 15] to 11. If b=3, there are many more legal prefix transformers, including one that maps [0, 4] to 000, [5, 7] to 001, [8, 12] to 01*, and [13, 15] to 1**. In this case, the second prefix transformer is superior to the first.
We include b as an input parameter to our prefix alignment problem. We initialize b as ⌈log2 |AR(S)|⌉, the smallest possible value, and compute an optimal prefix alignment for this value of b. We then increment b and repeat until no improvement is seen. We choose a linear search as opposed to a binary search because computing the optimal solution for b bits requires an optimal solution for b−1 bits.
Now it is shown how to compute the optimal cut points given b bits. View a one-dimensional classifier C as a multiset of ranges S in D(F1) and formulate the prefix alignment problem as follows. Given a multiset of ranges S over field F1 and a number of bits b, find prefix transformer T1 such that the range expansion of the transformed multiset of ranges S′ has the minimum number of prefix rules and D′(F1) can be encoded using only b bits.
An optimal solution is presented using dynamic programming. Given a multiset of ranges S, we first compute AR(S). Suppose there are m atomic ranges R1, . . . , Rm, with S-start points s1 through sm and S-end points e1 through em sorted in increasing order. For any S-start point sx and S-end point ey, where 1≦x≦y≦m, we define S↓[x, y] to be the multiset of ranges from S that intersect range [sx, ey]; furthermore, we assume that each range in S↓[x, y] is trimmed so that its start point is at least sx and its end point is at most ey. We then define a collection of subproblems as follows. For every 1≦x≦y≦m, we define a prefix alignment problem PA(x, y, b) where the problem is to find a prefix transformer T1 for [sx, ey] ⊆ D(F1) such that the range expansion of the transformed multiset (S↓[x, y])′ has the smallest possible number of prefix rules and the transformed domain D′(F1) can be encoded in b bits. We use cost(x, y, b) to denote the number of prefix rules in the range expansion of the optimal (S↓[x, y])′. The original prefix alignment problem then corresponds to PA(1, m, b) where b can be arbitrarily large.
The prefix alignment problem obeys the optimal substructure property. For example, consider PA(1, m, b). As we employ the divide and conquer strategy to locate a middle cut point that establishes what the prefixes 0{*}^{b−1} and 1{*}^{b−1} correspond to, there are m−1 choices of cut points to consider: namely e1 through em−1. Suppose the optimal cut point is ek where 1≦k≦m−1. Then the optimal solution to PA(1, m, b) builds upon the optimal solutions to subproblems PA(1, k, b−1) and PA(k+1, m, b−1). That is, the optimal transformer for PA(1, m, b) simply appends a 0 to the start of all prefixes in the optimal transformer for PA(1, k, b−1) and a 1 to the start of all prefixes in the optimal transformer for PA(k+1, m, b−1). Moreover, cost(1, m, b)=cost(1, k, b−1)+cost(k+1, m, b−1)−|S@[1, m]|. We subtract |S@[1, m]| in this cost equation because ranges that include all of [s1, em] are counted twice, once in cost(1, k, b−1) and once in cost(k+1, m, b−1); however, as [s1, ek] transforms to 0{*}^{b−1} and [sk+1, em] transforms to 1{*}^{b−1}, the range [s1, em] can be expressed by the single prefix {*}^b=0{*}^{b−1} ∪ 1{*}^{b−1}.
Based on this analysis, Theorem 6.2 shows how to compute the optimal cuts and binary cut tree. As stated earlier, the optimal prefix transformer T1 can then be computed from the binary cut tree.
Given a multiset of ranges S with |AR(S)|=m, cost(l, r, b) for any b≧0 and 1≦l≦r≦m can be computed as follows. For any 1≦l<r≦m and any 1≦k≦m:

cost(l, r, 0)=∞,

cost(k, k, b)=|S@[k, k]|,

and for any 1≦l<r≦m and b≧1,

cost(l, r, b)=min_{k ε {l, . . . , r−1}} (cost(l, k, b−1)+cost(k+1, r, b−1)−|S@[l, r]|).
Note that we set cost (k,k,0) to |S@[k, k]| for the convenience of the recursive case. The interpretation is that with a 0-bit domain, we can allow only a single value in D′(F1); this single value is sufficient to encode the transformation of an atomic interval.
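The recurrence translates directly into a memoized recursion; in the sketch below, atomic holds the m atomic ranges in order, ranges holds the multiset S, and count_spanning(l, r) computes |S@[l, r]| (all names are illustrative assumptions).

```python
# Sketch of the dynamic program as a memoized recursion. atomic holds the m
# atomic ranges [(s1, e1), ..., (sm, em)]; ranges holds the multiset S as
# (lo, hi) tuples; count_spanning(l, r) computes |S@[l, r]|, the number of
# ranges in S containing all of [s_l, e_r].

from functools import lru_cache

def optimal_cost(atomic, ranges, b_max):
    def count_spanning(l, r):
        lo, hi = atomic[l - 1][0], atomic[r - 1][1]
        return sum(1 for a, b in ranges if a <= lo and hi <= b)

    @lru_cache(maxsize=None)
    def cost(l, r, b):
        if l == r:
            return count_spanning(l, l)  # cost(k, k, b) = |S@[k, k]|
        if b == 0:
            return float("inf")          # cannot split [s_l, e_r] with 0 bits
        return min(cost(l, k, b - 1) + cost(k + 1, r, b - 1)
                   - count_spanning(l, r) for k in range(l, r))

    return cost(1, len(atomic), b_max)

# Worked example from FIG. 16A: three ranges over [0, 15], m = 3, b = 2.
assert optimal_cost([(0, 4), (5, 12), (13, 15)],
                    [(0, 12), (5, 15), (0, 15)], 2) == 4
```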
With reference to FIG. 17, the prefix alignment technique can be generalized as follows. Candidate cut points for the domain of a field are first identified at 172, where candidate cut points correspond to starting or ending points of the ranges of values for the given field. Next, a number of bits to be used to encode the domain in binary format is selected at 174. The domain is then recursively divided at 176 along the candidate cut points using dynamic programming, and each range of values is mapped to a result represented in binary format using the selected number of bits. The number of bits is incremented by one and each of the above steps is repeated until the mapping of the range values has been optimized.
Multi-dimensional prefix alignment is now considered. Unfortunately, while we can optimally solve the one-dimensional problem, there are complex interactions between the dimensions that complicate the multi-dimensional problem. In particular, the total range expansion required for each rule is the product of the range expansion required for each field. Thus, there may be complex tradeoffs where we sacrifice one field of a rule but align another field so that the costs do not multiply. The complexity of the multi-dimensional prefix alignment problem is currently unknown. A hill-climbing solution is presented where we iteratively apply our one-dimensional prefix alignment algorithm one field at a time. Because the range expansion of one field affects the numbers of ranges that appear in the other fields, we run prefix alignment for each field more than once. We stop when running prefix alignment in each field fails to improve the solution. More precisely, for a classifier C over fields F1, . . . , Fd, we first create d identity prefix transformers T1^0, . . . , Td^0. We define multi-field prefix alignment iteration k as follows: for i from 1 to d, generate the optimal prefix transformer Ti^k assuming the prefix transformers for the other fields are {T1^k, . . . , T(i−1)^k, T(i+1)^(k−1), . . . , Td^(k−1)}. Our iterative solution starts at k=1 and performs successive multi-field prefix alignment iterations until no improvement is found for any field.
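A sketch of this hill-climbing loop follows, where one_d_prefix_alignment and total_cost are hypothetical stand-ins for the one-dimensional solver and the cost measure described above.

```python
# Sketch of the hill-climbing loop. one_d_prefix_alignment(classifier,
# transformers, i) returns the optimal transformer for field i with the
# other fields' transformers held fixed; total_cost measures the range
# expansion of the fully transformed classifier.

def multi_field_prefix_alignment(classifier, d, one_d_prefix_alignment, total_cost):
    transformers = [None] * d          # None denotes the identity transformer
    best = total_cost(classifier, transformers)
    improved = True
    while improved:                    # one pass = one multi-field iteration k
        improved = False
        for i in range(d):
            candidate = one_d_prefix_alignment(classifier, transformers, i)
            trial = transformers[:i] + [candidate] + transformers[i + 1:]
            trial_cost = total_cost(classifier, trial)
            if trial_cost < best:      # keep the realignment only if it helps
                best, transformers, improved = trial_cost, trial, True
    return transformers
```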
While domain compression and prefix alignment can be used individually, they can easily be combined to achieve superior compression. Given a classifier C over fields F1, . . . , Fd, we first perform domain compression, resulting in a transformed classifier C′ and d transformers T1^DC, . . . , Td^DC; then we perform prefix alignment on the classifier C′, resulting in a transformed classifier C″ and d transformers T1^PA, . . . , Td^PA. To combine the two transformation processes into one, we merge each pair of transformers Ti^DC and Ti^PA into one transformer Ti for 1≦i≦d. In one exemplary embodiment, an optimal algorithm as described by Suri et al. in "Compressing two-dimensional routing tables," Algorithmica, 35:287-300 (2003), is applied to compute the minimum possible transformers Ti for 1≦i≦d. Other algorithms are contemplated by this disclosure. When running prefix alignment after domain compression, computing the atomic ranges and candidate cut points is unnecessary because each point x ε D′(Fi) for 1≦i≦d belongs to its own equivalence class in D′(Fi), which implies that [x, x] is an atomic range.
Strategies for handling TCAM updates are now discussed. In most applications, such as router ACLs and firewalls, the rule sets are relatively static. Therefore, we propose using the bank mechanism in TCAMs to handle rule list updates. TCAMs are commonly configured into a series of row banks, and each bank can be individually enabled or disabled to determine whether or not its entries are included in the TCAM search. We propose storing the pre-update transformers and compressed classifier in the active banks and the post-update versions in the disabled banks. Once the writing is finished, we activate the banks containing the new transformers and compressed classifier and deactivate the banks containing the old ones.
In some applications, there may be more frequent updates of the rule set. Fortunately, such updates are typically the insertion of new rules at the top or front of the classifier or the deletion of recently added rules; we are not aware of any applications that require frequent updates involving rules at arbitrary locations in a classifier. We can support this update pattern by chaining the TCAM chips in our proposed architecture after a small TCAM chip of normal width (160 bits), which we call the "hot" TCAM chip. When a new rule arrives, we add the rule to the top of the hot TCAM chip. When a packet arrives, we first use the packet as the key to search the hot chip. If the packet has a match in the hot chip, then the decision of the first matching rule is the decision for the packet. Otherwise, we feed the packet to the TCAM chips in the architecture described above to find the decision for the packet. Although the lookup on the hot TCAM chip adds a constant delay to per-packet latency, throughput is not affected because we use pipelining. Using batch updating, we only need to run our topological transformation algorithms to recompute the TCAM lookup tables when the hot chip is about to fill up. Note that we may exclude specific rules when running topological transformation if they are likely to be deleted in the near future; instead, we run topological transformation on the remainder of the classifier and retain these likely-to-be-deleted rules in the hot TCAM chip.
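The resulting per-packet lookup path is a simple two-level fallback; in the sketch below, hot_search and main_classify are hypothetical stand-ins for the hot-chip search and the transformed-table pipeline lookup.

```python
# Sketch of the per-packet lookup path with a hot chip. hot_search returns
# the decision of the first matching rule in the hot TCAM chip, or None on
# a miss; main_classify is the transformed-table pipeline lookup.

def classify_with_hot_chip(packet, hot_search, main_classify):
    decision = hot_search(packet)    # recently inserted rules take priority
    if decision is not None:
        return decision
    return main_classify(packet)     # fall through to the transformed tables
```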
Packet classifiers sometimes allow rule logging; that is, recording the packets that match particular rules. Our algorithm handles rule logging by assigning each logged rule a unique decision. Our experiments show that even when all rules in a classifier have unique decisions, our algorithm still achieves significant TCAM space reduction.
The effectiveness and efficiency of our topological transformation approaches are evaluated on both real-world and synthetic packet classifiers. Although the two approaches can be used independently, they are much more effective when used together, so we report results primarily for both techniques used together. When a distinction is needed, we use the label DC+PA when reporting results obtained using both techniques combined and the label DC when reporting results obtained using only domain compression. In all cases, we preprocess each classifier by running a redundancy removal algorithm, such as the one described by Liu et al. in "All-match based complete redundancy removal for packet classifiers in TCAMs," In Proceedings of the 27th Annual IEEE Conference on Computer Communications, April 2008.
Given a TCAM range encoding algorithm A and a classifier C, let A(C) denote the reencoded classifier, W(A(C)) denote the number of bits to represent each rule in A(C), TW(A(C)) denote the minimum TCAM entry width for storing A(C) given the choices 40, 80, 160, or 320, |A(C)| denote the number of rules in A(C), and B(A(C))=TW(A(C))×|A(C)|, which represents the total number of TCAM bits required to store A(C). The main goal of TCAM optimization algorithms is to minimize B(A(C)). We use Direct to denote the direct range expansion algorithm, so B(Direct(C)) represents the baseline we compare against, W(Direct(C))=104, TW(Direct(C))=160, and B(Direct(C))=160×|Direct(C)|. The table below summarizes our notation.
A            range encoding scheme
Direct       direct range expansion
C            packet classifier
A(C)         reencoded classifier
W(A(C))      width of rules in A(C)
|A(C)|       # rules in A(C)
TW(A(C))     minimum TCAM entry width for A(C)
B(A(C))      TW(A(C)) × |A(C)|, i.e., total bits of A(C)
For any A and C, we measure overall effectiveness by the compression ratio CR(A(C))=B(A(C))/B(Direct(C)). To isolate the factors that contribute to the success of our approaches at compressing classifiers, we define the rule number ratio of A on C to be RNR(A(C))=|A(C)|/|Direct(C)|, which is often referred to as the expansion ratio, and the rule width ratio of A on C to be RWR(A(C))=W(A(C))/104. When we consider a set of classifiers S, where |S| denotes the number of classifiers in S, we generalize our metrics as follows. The average compression ratio of A for S is CR(A(S))=(Σ_{C ε S} CR(A(C)))/|S|, the average rule number ratio of A for S is RNR(A(S))=(Σ_{C ε S} RNR(A(C)))/|S|, and the average rule width ratio of A for S is RWR(A(S))=(Σ_{C ε S} RWR(A(C)))/|S|.
RL is used to denote a set of 40 real-world packet classifiers that we performed experiments on. RL is chosen from a larger set of real-world classifiers obtained from various network service providers, where the classifiers range in size from a handful of rules to thousands of rules. We eliminated structurally similar classifiers from RL because similar classifiers exhibited similar results; we created RL by randomly choosing a single classifier from each set of structurally similar classifiers. We then split RL into two groups, RLa and RLb, where RNR(Direct(C))≦4 for all C ε RLa and RNR(Direct(C))>40 for all C ε RLb. We have no classifiers where 4<RNR(Direct(C))≦40. It turns out that |RLa|=26 and |RLb|=14. By separating these classifiers into two groups, we can determine how well our techniques work on classifiers that do suffer significantly from range expansion as well as those that do not. FIG. 18 shows the accumulated percentage graph of atomic intervals for each field for the classifiers in RL, and FIG. 19 shows the accumulated percentage graphs of classifier sizes in RL before and after direct range expansion.
Because packet classifiers are considered confidential due to security concerns, making it difficult to acquire a large number of real-world classifiers, we generated a set of synthetic classifiers SYN with the number of rules ranging from 250 to 8000 using Singh et al.'s model of synthetic rules. The predicate of each rule has five fields: source IP, destination IP, source port, destination port, and protocol. We also performed experiments on TRS, a set of 490 classifiers produced by Taylor & Turner's ClassBench. These classifiers were generated using the parameter files downloaded from Taylor's web site (http://www.arl.wustl.edu/~det3/ClassBench/). To represent a wide range of classifiers, we chose a uniform sampling of the allowed values for the parameters of smoothness, address scope, and application scope.
To stress test the sensitivity of our algorithms to the number of decisions in a classifier, we created a set of classifiers RLU (and thus RLaU and RLbU) by replacing the decision of every rule in each classifier by a unique decision. Similarly, we created the set SYNU. Thus, each classifier in RLU (or SYNU) has the maximum possible number of distinct decisions. Such classifiers might arise in the context of rule logging, where the system monitors the frequency with which each rule is the first matching rule for a packet.
Table 6 below shows the average compression ratio, rule width ratio, and rule number ratio for our algorithm on all of our data sets. FIG. 20 shows the accumulated percentage graphs for the compression ratios of our combined techniques for both RL and RLU with and without transformers, and FIG. 21 shows the accumulated percentage graphs for the compression ratios of our combined techniques for each field in RL. Note that the data with transformers depict the true space savings of our methods, but most previous range encoding papers focus only on the data without transformers. FIG. 22 and FIG. 23 show the accumulated percentage graphs of our combined techniques on RL and RLU for rule number ratio and rule width ratio, respectively.
           Compression ratio          Rule width    Rule number ratio
         DC     w.o. T   with T         ratio       w.o. T    with T
RL      11.8%     4.5%    13.8%         15.9%        36.1%    126.0%
RLU     29.6%     9.8%    20.8%         19.2%        77.0%    183.0%
RLa     17.8%     6.8%    20.7%         20.4%        38.7%    105.2%
RLaU    44.6%    14.9%    31.3%         23.6%        82.7%    161.7%
RLb      0.6%     0.1%     0.9%          7.5%        31.2%    164.7%
RLbU     1.7%     0.4%     1.1%         11.0%        66.4%    222.6%
SYN      0.7%     0.6%     2.5%         10.4%         2.7%     11.8%
SYNU    13.4%     9.3%    12.4%         16.0%        43.9%     58.9%
TRS      6.2%     1.0%     2.7%         15.7%         9.7%     23.3%

(DC: domain compression alone, without transformer space; w.o. T / with T: combined DC+PA excluding/including transformer space.)
Our algorithm achieves significant compression on both real-world and synthetic classifiers. On RL, our algorithm achieves an average compression ratio of 13.8% if we count TCAM space for transformers and 4.5% if we do not. These savings are attributable to both rule width and rule number compression. The average rule width ratio is 15.9%, which means that a typical encoded classifier only requires 17 bits, instead of 104 bits, to store a rule. However, rule width compression contributes only a factor of 4 (a ratio of 25%) to the average compression ratio because the encoded classifiers use 40-bit wide TCAM entries, the smallest possible TCAM width, whereas direct range expansion uses 160-bit wide TCAM entries. That is, TW(A(C))=40 for all but two classifiers in RLU, which require an 80-bit wide TCAM entry. The remaining savings is due to rule number compression. Note that the average rule number ratio without transformers is 36.1%; that is, domain compression and redundancy removal eliminate an average of 63.9% of the rules from our real-life classifier sets. In comparison, the best any other reencoding scheme can achieve is an average rule number ratio without transformers of 100%. Our algorithm performs well on all of our other data sets too. For example, for Taylor's rule set TRS, we achieve an average compression ratio of 2.7% with transformers included and 1.0% with transformers excluded. Note that prefix alignment is an important component of our algorithm because it reduces the average compression ratio without transformers for RL from 11.8% to 4.5%.
Our algorithm is effective for both efficiently specified classifiers and inefficiently specified classifiers. The efficiently specified classifiers in RLa experience relatively little range expansion; the inefficiently specified classifiers in RLb experience significant range expansion. Not surprisingly, our algorithm provides roughly 20 times better compression for RLb than for RLa with average compression ratios of 0.9% and 20.7%, respectively. In both sets, TCAM width compression contributes approximately 25% savings. The difference is rule number compression. Whereas our algorithm achieves relatively similar average rule number ratios of 38.7% and 31.2% without transformers for RLa and RLb, respectively, these rule number ratios have significantly different impacts on the final compression ratios given that all the efficiently specified classifiers in RLa have modest range expansion while all the inefficiently specified classifiers in RLb have tremendous range expansion.
Our algorithm's effectiveness is only slightly diminished as we increase the number of unique decisions in a classifier. In the extreme case where we assign each rule a unique decision, on RLU our algorithm achieves an average compression ratio of 20.8% with transformers included and 9.8% with transformers excluded, and on SYNU it achieves an average compression ratio of 12.4% with transformers included and 9.3% with transformers excluded. In particular, the TCAM width used by each classifier is unaffected. Rule number compression is worse for RLU, but the rule number ratio without transformers is still less than 100% for all our data sets with unique decisions.
Our algorithm outperforms all existing reencoding schemes by at least a factor of 3.11 including transformers and by at least a factor of 5.54 excluding transformers. We first consider the width of TCAM entries. Our algorithm uses 40-bit wide TCAM entries for all but 2 classifiers in RLU, whereas the smallest TCAM width achieved by prior work is 80 bits; therefore, on TCAM entry width, our algorithm is 2 times better than the best known result. Next, we consider the number of TCAM entries. Excluding TCAM entries for transformers, the best rule number ratio that any other method can achieve on RL is 100%, whereas we achieve 36.1%. Therefore, excluding TCAM entries for transformers, our algorithm is at least 5.54 (=2×100%/36.1%) times better than the optimal TCAM reencoding algorithm that does not consider classifier semantics.
In an exemplary embodiment, the algorithms are implemented on the Microsoft .Net Framework 2.0 and the experiments are performed on a desktop PC running Windows XP with 3 GB of memory and a single 3.4 GHz Pentium D processor. On RL, the minimum, mean, median, and maximum running times are 0.003, 37.642, 0.079, and 1093.308 seconds; on RLU, they are 0.006, 1540.934, 0.203, and 54604.311 seconds. Table 7 below shows the running times of some representative classifiers in RL and RLU.
Size    Time (seconds)    Time (seconds), unique decisions
 511          0.40               1.92
1308          1.00               6.59
1365         14.51              80.91
1794         26.85             273.35
2331         42.33             355.52
3928          0.64               4.86
4004        117.01            3234.90
7652       1093.31           54604.31
We now assess the impact that our two topological transformation schemes (parallel pipelined-lookup using 6 TCAM chips and multi-lookup using 1 TCAM chip) have on power, latency, and throughput. We compare our topological transformation schemes against direct range expansion. Because we cannot build actual devices, we use Agrawal and Sherwood's power, latency, and throughput models for TCAM chips. To the best of our knowledge, Agrawal and Sherwood's TCAM models are the only publicly available models and have become widely adopted. To derive meaningful power results, we need much larger classifiers than the largest available classifier in RL. Rather than make large synthetic classifiers, we consider hypothetical classifiers whose direct range expansion fits exactly within standard TCAM chip sizes ranging from 1 Mbit to 36 Mbit. We further assume that when topological transformation is applied to these hypothetical classifiers, the resulting compression ratio will be 15%. Because we do not know how the bits will be allocated to each of the 5 transformers and the reencoded classifier, we conservatively assume that each transformer and the reencoded classifier will have a size that is 15% of the direct expansion classifier. For power, latency, and throughput, we then use Agrawal and Sherwood's TCAM model to estimate the relevant metric on each TCAM chip. As our modeling results demonstrate, the 6-chip configuration significantly improves throughput and power and is essentially neutral on latency, whereas the 1-chip configuration significantly improves power while suffering some loss in latency and throughput.
For any classifier C, let P(A(C)) represent the nanojoules needed to classify one packet using the given scheme. For the two topological transformation schemes, we include the power consumed by the transformers. For one classifier C, we define the power ratio of algorithm A as P(A(C))/P(Direct(C)). For a set of classifiers S, we define the average power ratio of algorithm A over S to be (Σ_{C ε S} P(A(C))/P(Direct(C))) / |S|. The extrapolated average power ratios are displayed in Table 8 and FIG. 24A.
The modeling results clearly demonstrate that topological transformation results in a significant improvement in power usage per search. The reason for the savings is that even though we perform more searches, each search is performed on a much smaller TCAM chip.
                           Average
          Power             Latency            Throughput
        6 Chip   1 Chip   6 Chip   1 Chip    6 Chip   1 Chip
1 Mb    122.0%   20.6%    100.7%   304.6%    200.4%   32.8%
2 Mb     91.7%   15.6%     92.0%   300.0%    238.1%   33.3%
4.5 Mb   66.8%   11.5%    100.0%   342.9%    233.3%   29.2%
9 Mb     50.3%    8.8%    112.5%   375.0%    200.0%   26.7%
18 Mb    40.5%    7.2%     97.0%   317.4%    226.8%   31.5%
36 Mb    34.8%    6.3%     63.5%   205.2%    341.1%   48.7%
For any classifier C, let T(A(C)) represent the number of packets per second that can be classified using the given scheme. For topological transformation with 6 chips, this is the minimum throughput of any of the 6 TCAM chips. For topological transformation with 1 TCAM chip, this is essentially the inverse of latency because there is no pipelining. For one classifier C, we define the throughput ratio of algorithm A as T(A(C))/T(Direct(C)). For a set of classifiers S, we define the average throughput ratio for algorithm A over S to be (Σ_{C ε S} T(A(C))/T(Direct(C))) / |S|.
The extrapolated average throughputs are included in FIG. 24C and Table 8.
The modeling results demonstrate that topological transformation significantly improves throughput if we use the 6-chip configuration. The reason for the throughput increase is the use of the pipeline and the use of smaller and thus faster TCAM chips. The throughput of the 1-chip configuration is significantly reduced because there is no pipeline; however, its throughput ratio is better than the naive 1/6≈16.6% that six sequential searches on the original chip would imply, because it again uses smaller, faster TCAM chips.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the invention, and all such modifications are intended to be included within the scope of the invention. Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.

Claims (11)

What is claimed is:
1. A method for constructing a packet classifier for a computer network system, comprising:
receiving a set of rules for packet classification, where a rule sets forth values for fields in a data packet and a decision for data packets having matching field values;
representing the set of rules as a directed graph;
partitioning the graph into at least a top partition and a bottom partition, the top partition having a single sub-graph with a vertex of the graph and the bottom partition having a plurality of sub-graphs;
generating a lookup table from the sub-graph in the top partition;
generating a lookup table for each sub-graph in the bottom partition;
assigning each table associated with the bottom partition a unique identifier;
linking the lookup table from the top partition with the lookup tables in the bottom partition using the assigned table identifiers; and
instantiating the lookup table from the top partition on a first content-addressable memory device and the lookup tables from the bottom partition on a second content-addressable memory device.
2. A method for constructing a packet classifier for a computer network system, comprising:
receiving a set of rules for packet classification, where each rule sets forth values for fields in a data packet and a decision for data packets having matching field values;
constructing a firewall decision diagram from the set of rules for each field defined in the set of rules, where a root node in each of the firewall decision diagrams corresponds to a different field;
reducing size of each firewall decision diagram by merging isomorphic subgraphs therein;
selecting one label for each outgoing edge from the root node in each of the firewall decision diagrams to be a representative label;
constructing a first stage classifier from each of the reduced firewall decision diagrams, where values for a given field of a data packet maps to a result which corresponds to an input of a second stage classifier; and
constructing the second stage classifier from the set of rules based on overlap of a given rule in the set of rules with a representative label from the reduced firewall decision diagrams.
3. The method of claim 2 wherein selecting one label further comprises selecting the label whose range implicates the fewest number of rules in the set of rules.
4. The method of claim 2 wherein constructing the second stage classifier further comprises comparing each field for a given rule in the set of rules to corresponding representative labels for the field; and creating a mapping from the results from the first stage classifier to the decision associated with the given rule when a field in the given rule overlaps with corresponding representative labels for the field.
5. The method of claim 2 wherein constructing the second stage classifier further comprises eliminating the given rule from the second stage classifier when a field of the given rule does not overlap with at least one representative label for the field.
6. The method of claim 2 further comprises instantiating the first and second stage classifier on content-addressable memory, a random access memory or a combination thereof.
7. The method of claim 2 further comprises instantiating the first stage classifier on a first content-addressable memory device and the second stage classifier on a second content-addressable memory device.
8. The method of claim 1 further comprises representing the set of rules as a firewall decision diagram.
9. The method of claim 8 further comprises constructing a firewall decision diagram from the set of rules for each field defined in the set of rules, where a root node in each of the firewall decision diagrams corresponds to a different field.
10. The method of claim 1 further comprises reducing the size of the directed graph by merging isomorphic subgraphs before partitioning the graph.
11. The method of claim 1 further comprises defining the content-addressable memory device as ternary content addressable memory.
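By way of illustration only, the following is a minimal Python sketch of the linked two-stage lookup recited in claims 1 and 2, with lists standing in for the two content-addressable memory devices. The field ranges, table identifiers, and decisions are invented for this example; a real TCAM performs ternary matching in hardware rather than scanning entries.

    # First stage (top partition): match on field 1, return a table ID.
    TOP_TABLE = [              # (low, high, table_id); first match wins
        (0, 99, 1),
        (0, 255, 2),           # default entry covering the rest of the space
    ]

    # Second stage (bottom partition): one table per sub-graph, selected
    # by the unique identifier produced by the first lookup.
    BOTTOM_TABLES = {          # table_id -> [(low, high, decision)]
        1: [(0, 9, "discard"), (0, 255, "accept")],
        2: [(0, 255, "accept")],
    }

    def classify(field1: int, field2: int) -> str:
        """Two sequential lookups: the stage-1 result keys the stage-2 table."""
        table_id = next(t for lo, hi, t in TOP_TABLE if lo <= field1 <= hi)
        return next(d for lo, hi, d in BOTTOM_TABLES[table_id]
                    if lo <= field2 <= hi)

    print(classify(50, 5))     # -> "discard"
    print(classify(200, 5))    # -> "accept"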
US12/855,992 2009-08-17 2010-08-13 Efficient TCAM-based packet classification using multiple lookups and classifier semantics Expired - Fee Related US8462786B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US23439009P 2009-08-17 2009-08-17
US12/855,992 US8462786B2 (en) 2009-08-17 2010-08-13 Efficient TCAM-based packet classification using multiple lookups and classifier semantics

Publications (2)

Publication Number Publication Date
US20110038375A1 US20110038375A1 (en) 2011-02-17
US8462786B2 true US8462786B2 (en) 2013-06-11

Family

ID=43588565

Country Status (1)

Country Link
US (1) US8462786B2 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9077669B2 (en) * 2010-06-14 2015-07-07 Dynamic Invention Llc Efficient lookup methods for ternary content addressable memory and associated devices and systems
US8750144B1 (en) * 2010-10-20 2014-06-10 Google Inc. System and method for reducing required memory updates
CN102480424A (en) * 2010-11-30 2012-05-30 瑞昱半导体股份有限公司 Device and method for processing network packet
US9001828B2 (en) * 2011-03-21 2015-04-07 Marvell World Trade Ltd. Method and apparatus for pre-classifying packets
US9110703B2 (en) * 2011-06-07 2015-08-18 Hewlett-Packard Development Company, L.P. Virtual machine packet processing
US8990492B1 (en) * 2011-06-22 2015-03-24 Google Inc. Increasing capacity in router forwarding tables
US10229139B2 (en) * 2011-08-02 2019-03-12 Cavium, Llc Incremental update heuristics
WO2013020001A1 (en) 2011-08-02 2013-02-07 Cavium, Inc. Lookup front end output processor
US9183244B2 (en) 2011-08-02 2015-11-10 Cavium, Inc. Rule modification in decision trees
US9208438B2 (en) 2011-08-02 2015-12-08 Cavium, Inc. Duplication in decision trees
KR20130093707A (en) * 2011-12-23 2013-08-23 한국전자통신연구원 Packet classification apparatus and method for classfying packet thereof
US9269411B2 (en) * 2012-03-14 2016-02-23 Broadcom Corporation Organizing data in a hybrid memory for search operations
US9438505B1 (en) * 2012-03-29 2016-09-06 Google Inc. System and method for increasing capacity in router forwarding tables
US8886879B2 (en) 2012-04-27 2014-11-11 Hewlett-Packard Development Company, L.P. TCAM action updates
US9680747B2 (en) * 2012-06-27 2017-06-13 Futurewei Technologies, Inc. Internet protocol and Ethernet lookup via a unified hashed trie
WO2014047863A1 (en) * 2012-09-28 2014-04-03 Hewlett-Packard Development Company, L. P. Generating a shape graph for a routing table
US8964752B2 (en) 2013-02-25 2015-02-24 Telefonaktiebolaget L M Ericsson (Publ) Method and system for flow table lookup parallelization in a software defined networking (SDN) system
US10009276B2 (en) * 2013-02-28 2018-06-26 Texas Instruments Incorporated Packet processing match and action unit with a VLIW action engine
US10083200B2 (en) * 2013-03-14 2018-09-25 Cavium, Inc. Batch incremental update
US9595003B1 (en) 2013-03-15 2017-03-14 Cavium, Inc. Compiler with mask nodes
US10229144B2 (en) 2013-03-15 2019-03-12 Cavium, Llc NSP manager
US9195939B1 (en) 2013-03-15 2015-11-24 Cavium, Inc. Scope in decision trees
CN104468357B (en) * 2013-09-16 2019-07-12 中兴通讯股份有限公司 Multipolarity method, the multilevel flow table processing method and processing device of flow table
US9825857B2 (en) 2013-11-05 2017-11-21 Cisco Technology, Inc. Method for increasing Layer-3 longest prefix match scale
US9602407B2 (en) 2013-12-17 2017-03-21 Huawei Technologies Co., Ltd. Trie stage balancing for network address lookup
US9275336B2 (en) 2013-12-31 2016-03-01 Cavium, Inc. Method and system for skipping over group(s) of rules based on skip group rule
US9544402B2 (en) 2013-12-31 2017-01-10 Cavium, Inc. Multi-rule approach to encoding a group of rules
US9667446B2 (en) 2014-01-08 2017-05-30 Cavium, Inc. Condition code approach for comparing rule and packet data that are provided in portions
US9350677B2 (en) 2014-01-16 2016-05-24 International Business Machines Corporation Controller based network resource management
WO2016023232A1 (en) 2014-08-15 2016-02-18 Hewlett-Packard Development Company, L.P. Memory efficient packet classification method
US9674081B1 (en) 2015-05-06 2017-06-06 Xilinx, Inc. Efficient mapping of table pipelines for software-defined networking (SDN) data plane
WO2017039689A1 (en) 2015-09-04 2017-03-09 Hewlett Packard Enterprise Development Lp Data tables in content addressable memory
US11157260B2 (en) * 2015-09-18 2021-10-26 ReactiveCore LLC Efficient information storage and retrieval using subgraphs
US10623339B2 (en) * 2015-12-17 2020-04-14 Hewlett Packard Enterprise Development Lp Reduced orthogonal network policy set selection
US9992111B2 (en) 2016-01-21 2018-06-05 Cisco Technology, Inc. Router table scaling in modular platforms
CN105763454B (en) * 2016-02-25 2018-11-27 比威网络技术有限公司 Data message forwarding method and device based on two-dimentional routing policy
CN106096022B (en) * 2016-06-22 2020-02-11 杭州迪普科技股份有限公司 Method and device for dividing multi-domain network packet classification rules
US10033750B1 (en) * 2017-12-05 2018-07-24 Redberry Systems, Inc. Real-time regular expression search engine
US9967272B1 (en) * 2017-12-05 2018-05-08 Redberry Systems, Inc. Real-time regular expression search engine

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020065870A1 (en) * 2000-06-30 2002-05-30 Tom Baehr-Jones Method and apparatus for heterogeneous distributed computation
US6867991B1 (en) * 2003-07-03 2005-03-15 Integrated Device Technology, Inc. Content addressable memory devices with virtual partitioning and methods of operating the same
US20060015514A1 (en) * 2004-06-03 2006-01-19 Canon Kabushiki Kaisha Information processing method and information processing apparatus
US20060218280A1 (en) * 2005-03-23 2006-09-28 Gouda Mohamed G System and method of firewall design utilizing decision diagrams
US20060277601A1 (en) * 2005-06-01 2006-12-07 The Board Of Regents, The University Of Texas System System and method of removing redundancy from packet classifiers
US20070016946A1 (en) * 2005-07-15 2007-01-18 University Of Texas System System and method of querying firewalls
US8089961B2 (en) * 2007-12-07 2012-01-03 University Of Florida Research Foundation, Inc. Low power ternary content-addressable memory (TCAMs) for very large forwarding tables

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
A whitepaper titled "A Cross-Domain Privacy-preserving Protocol for cooperative Firewall Optimization", Chen et al, 2007. *
A whitepaper titled "Packet classifier in multiple fields", Gupta et al, 1999. *
A whitepaper titled "Packet classifier in Ternary CAM can be smaller", Chen et al, 2006. *
A whitepaper titled "TCAM-Based distributed parallel packet classification with range encoding", Kai et al, 2006. *
C. Meiners et al "Algorithmic Approaches to Redesigning TCAM-Based Systems" Sigmetrics'08 Jun. 2008.
Dong et al. "Packet Classifier in ternary CAMs Can Be Smaller" 2006, the whole pages. *
Hadzic, "Cost Bounded Binary Decision Diagrams for 0-1 Programming" Jun. 1, 2007, the whole pages. *
K. Zheng et al "DPPC-RE: TCAM-Based Distributed Parallel Packet Classification With Range Encoding" IEEE Transactions on Computers, 2006.
P. Gupta et al "Packet Classification on Multiple Fields" Proceedings of the ACM SIGCOMM, 1999.

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8923301B1 (en) * 2011-12-28 2014-12-30 Juniper Networks, Inc. Fixed latency priority classifier for network data
US20150120754A1 (en) * 2013-10-31 2015-04-30 Oracle International Corporation Systems and Methods for Generating Bit Matrices for Hash Functions Using Fast Filtering
US10503716B2 (en) * 2013-10-31 2019-12-10 Oracle International Corporation Systems and methods for generating bit matrices for hash functions using fast filtering
US10496680B2 (en) 2015-08-17 2019-12-03 Mellanox Technologies Tlv Ltd. High-performance bloom filter array
US9706017B2 (en) 2015-09-29 2017-07-11 Mellanox Technologies Tlv Ltd. Atomic update of packet classification rules
US10171369B2 (en) * 2016-12-22 2019-01-01 Huawei Technologies Co., Ltd. Systems and methods for buffer management
US10491521B2 (en) 2017-03-26 2019-11-26 Mellanox Technologies Tlv Ltd. Field checking based caching of ACL lookups to ease ACL lookup search
US10476794B2 (en) 2017-07-30 2019-11-12 Mellanox Technologies Tlv Ltd. Efficient caching of TCAM rules in RAM
US11327974B2 (en) * 2018-08-02 2022-05-10 Mellanox Technologies, Ltd. Field variability based TCAM splitting
US11003715B2 (en) 2018-09-17 2021-05-11 Mellanox Technologies, Ltd. Equipment and method for hash table resizing
US10944675B1 (en) 2019-09-04 2021-03-09 Mellanox Technologies Tlv Ltd. TCAM with multi region lookups and a single logical lookup
US11558255B2 (en) 2020-01-15 2023-01-17 Vmware, Inc. Logical network health check in software-defined networking (SDN) environments
US11909653B2 (en) * 2020-01-15 2024-02-20 Vmware, Inc. Self-learning packet flow monitoring in software-defined networking environments
US11539622B2 (en) 2020-05-04 2022-12-27 Mellanox Technologies, Ltd. Dynamically-optimized hash-based packet classifier
US11782895B2 (en) 2020-09-07 2023-10-10 Mellanox Technologies, Ltd. Cuckoo hashing including accessing hash tables using affinity table
US11917042B2 (en) 2021-08-15 2024-02-27 Mellanox Technologies, Ltd. Optimizing header-based action selection
US11929837B2 (en) 2022-02-23 2024-03-12 Mellanox Technologies, Ltd. Rule compilation schemes for fast packet classification
US20230269310A1 (en) * 2022-02-24 2023-08-24 Mellanox Technologies, Ltd. Efficient Memory Utilization for Cartesian Products of Rules

Also Published As

Publication number Publication date
US20110038375A1 (en) 2011-02-17

Similar Documents

Publication Publication Date Title
US8462786B2 (en) Efficient TCAM-based packet classification using multiple lookups and classifier semantics
Liu et al. TCAM Razor: A systematic approach towards minimizing packet classifiers in TCAMs
US8654763B2 (en) Systematic approach towards minimizing packet classifiers
Meiners et al. Topological transformation approaches to TCAM-based packet classification
Meiners et al. Bit weaving: A non-prefix approach to compressing packet classifiers in TCAMs
Meiners et al. Split: Optimizing space, power, and throughput for TCAM-based classification
US7089240B2 (en) Longest prefix match lookup using hash function
Singh et al. Packet classification using multidimensional cutting
US9077669B2 (en) Efficient lookup methods for ternary content addressable memory and associated devices and systems
Liu et al. Packet classification using binary content addressable memory
US7536476B1 (en) Method for performing tree based ACL lookups
US8089961B2 (en) Low power ternary content-addressable memory (TCAMs) for very large forwarding tables
US8375433B2 (en) Method for multi-core processor based packet classification on multiple fields
US7990979B2 (en) Recursively partitioned static IP router tables
US8375165B2 (en) Bit weaving technique for compressing packet classifiers
US20150310342A1 (en) Overlay automata approach to regular expression matching for intrusion detection and prevention system
Meiners et al. Topological transformation approaches to optimizing TCAM-based packet classification systems
US9900409B2 (en) Classification engine for data packet classification
Cohen et al. Simple efficient TCAM based range classification
Lo et al. Flow entry conflict detection scheme for software-defined network
Lu et al. Succinct representation of static packet classifiers
Chang Efficient multidimensional packet classification with fast updates
Norige et al. A ternary unification framework for optimizing TCAM-based packet classification systems
Spitznagel Compressed data structures for recursive flow classification
Taylor et al. On using content addressable memory for packet classification

Legal Events

Date Code Title Description
AS Assignment

Owner name: BOARD OF TRUSTEES OF MICHIGAN STATE UNIVERSITY, MI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, ALEX X.;MEINERS, CHAD R.;TORNG, ERIC;REEL/FRAME:024834/0547

Effective date: 20100810

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210611