Rama Darbha

 

 

This guide provides insight into troubleshooting ACL resource partitions on the FWSM.

Multicontext Mode and Resource Partitions

An FWSM configured for multiple context mode uses a feature called the ACL resource partition. ACL partitions allocate ACL limits to contexts and assist in virtualization of the FWSM. One function of the ACL partition is to limit the number of access-list rules each FWSM context can use.

ACL Counter Outputs

There are easy ways to examine the current ACL utilization of the FWSM. The first command is show resource acl-partition. Its output indicates the number of ACL resource partitions. The maximum number of ACL resource partitions is 12; if a configuration has more than 12 contexts, some contexts will share an ACL resource partition.

 

In the example below, there are 4 ACL resource partitions. Notice that the count starts from 0. Two of the partitions are in use and two are not. The FWSM in question has three contexts: admin, context1 and context2. Currently, this FWSM is configured so that admin and context1 share the first partition (Partition #0) and context2 uses the second partition (Partition #1).

 

The "Number of rules" field in the output is an aggregate of all ACL rules currently present across all contexts allocated to that partition. The following section outlines outputs that allow a more granular examination of these ACLs. In our case, admin and context1 have 61 rules combined, while context2 has 16 rules.

 

FWSM/pri/actNoFailover# show resource acl-partition
Total number of configured partitions = 4
Partition #0
    Mode            : non-exclusive
    List of Contexts     : admin, context1
    Number of contexts     : 2(RefCount:2)
    Number of rules     : 61(Max:49971)
Partition #1
    Mode            : non-exclusive
    List of Contexts     : context2
    Number of contexts     : 1(RefCount:1)
    Number of rules     : 16(Max:49971)
Partition #2
    Mode            : non-exclusive
    List of Contexts     : none
    Number of contexts     : 0(RefCount:0)
    Number of rules     : 0(Max:49971)
Partition #3
    Mode            : non-exclusive
    List of Contexts     : none
    Number of contexts     : 0(RefCount:0)
    Number of rules     : 0(Max:49971)



The output of show np 3 acl count <number> allows the user to view granular ACL usage on a particular resource partition. Notice that below, the argument is referenced as "tree_id"; this refers to the partition number from the output above.

 

 

FWSM/pri/actNoFailover# show np 3 acl count ?

 

  <0-11>  tree_id

 

In the above example, it is clear that there are 61 rules currently in use in partition #0. In the output below, it can be seen that these 61 rules are made up of 48 fixup rules, 3 console rules and 10 ACL rules. The counters under the heading CLS Rule Current Counts show the current number of rules of each type.

 

Below this heading is a section labeled CLS Rule MAX Counts. It lists the maximum number of rules of each type that can be configured in this partition.

 

FWSM/pri/actNoFailover# show np 3 acl count 0
-------------- CLS Rule Current Counts --------------
CLS Filter Rule Count       :             0
CLS Fixup Rule Count        :            48
CLS Est Ctl Rule Count      :             0
CLS AAA Rule Count          :             0
CLS Est Data Rule Count     :             0
CLS Console Rule Count      :             3
CLS Policy NAT Rule Count   :             0
CLS ACL Rule Count          :            10
CLS ACL Uncommitted Add     :             0
CLS ACL Uncommitted Del     :             0

 

---------------- CLS Rule MAX Counts ----------------
CLS Filter MAX              :          1499
CLS Fixup MAX               :          3997
CLS Est Ctl Rule MAX        :           249
CLS Est Data Rule MAX       :           249
CLS AAA Rule MAX            :          3497
CLS Console Rule MAX        :           999
CLS Policy NAT Rule MAX     :           999
CLS ACL Rule MAX            :         38482

 

-------------- CLS Rule Counter Ranges --------------
CLS L7 Cnt     Start - End  :             1 -     1499
CLS Est Cnt    Start - End  :          1500 -     1748
CLS AAA Cnt    Start - End  :          1749 -     5245
CLS CP Cnt     Start - End  :          5246 -     6244
CLS Policy Cnt Start - End  :          6245 -     7243
CLS ACL Cnt    Start - End  :          7244 -    45725
CLS DYN Cnt    Start - End  :             0 -        0

 

----- CLS Rule Memory Management (Global) ----
CLS Rules Allocated         :           129
CLS Rules Deleted           :            52
CLS Rules Flagged           :             0
CLS Rules Reclaimed         :             0
CLS Rules No Memory         :             0

 

----- CLS Extension Memory Management (Global) ----
CLS Leaf Extensions Alloced :            14
CLS Leaf Extensions Updated :             2
CLS leaf Extensions Deleted :             4
MPC Leaf Extensions Alloced :             0
MPC Leaf Extensions Deleted :             0
MPC Leaf Ext Alloc Errors   :             0
MPC Leaf Ext Free Errors    :             0
-----------------------------------------------------

 

Notice that even though the first output indicated a maximum of 49971 rules, the maximum number of access-control ACL rules is actually 38482 (CLS ACL Rule MAX).
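
To see where these two numbers come from, add up the per-type MAX counts in the show np 3 acl count 0 output above:

1499 (Filter) + 3997 (Fixup) + 249 (Est Ctl) + 249 (Est Data) + 3497 (AAA) + 999 (Console) + 999 (Policy NAT) + 38482 (ACL) = 49971

In other words, 49971 is the total rule budget for the partition, while 38482 is the portion of that budget reserved for access-control ACL rules.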

ACL Partition Optimization

There are some commands that can be used to tweak the ACL partition settings. By default, the ACL rule allocation is set to the maximum, but the per-type rule allocations can be altered using the following command:

 

FWSM(config)# resource rule nat <value1> acl <value2> filter <value3> fixup <value4> est <value5> aaa <value6> console <value7>

 

Note that it is very important that the sum of all the rule values stays within the 49971 maximum listed in the output above. If the sum exceeds the limit, the command line prints the following error:

 

ERROR: New total max rules <sum> are more than the allowed total max rules 49971
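
As an illustration only (the values below are hypothetical, and the permitted range for each keyword may vary by FWSM version), a global allocation that stays within the 49971 total might look like this:

FWSM(config)# resource rule nat 999 acl 40000 filter 1000 fixup 3997 est 475 aaa 2500 console 999

Here the values sum to 49970, which is within the 49971 limit.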

 

The resource rule command above affects all partitions because it is a global configuration. The FWSM also allows a per-partition rule configuration, as shown below:

 

FWSM(config)# resource partition 0 rule nat <value1> acl <value2> filter <value3> fixup <value4> est <value5> aaa <value6> console <value7>
size <0 to max>     *where max is the number identified in show resource acl-partition
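
For example (again with purely illustrative values), partition #1 from the earlier output, which holds only context2, could be given a larger share of access-control ACL rules:

FWSM(config)# resource partition 1 rule nat 500 acl 42000 filter 500 fixup 3997 est 475 aaa 1400 console 999

The new allocation can then be checked with show resource acl-partition and show np 3 acl count 1.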

 

 

 

Finally, altering the number of partitions may be a solution based on the ACL requirements of the FWSM. The number of ACL partitions can be configured using the command:

FWSM(config)# resource acl-partition <0-12>

 

The number configured above is the number of ACL partitions. An optimized configuration has the same number of ACL partitions as contexts.
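
As a worked example based on the FWSM above, which has three contexts (admin, context1 and context2), the partition count could be matched to the context count:

FWSM(config)# resource acl-partition 3

As the Caution section below notes, a change to the number of partitions only takes effect after a reload.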

 

An important concept with regard to ACL partitions is the backup partition. In multiple context mode, the FWSM creates a backup partition which is used when changing an ACL. In the above example, there are 4 partitions in the NP3 ACL space, plus a backup partition:

------------------------
p1-|-p2-|-p3-|-p4-|-BK-|
------------------------

This partition is used when an ACL that belongs to another partition needs to be changed. The backup partition is designed to make ACL changes as transparent as possible. The backup partition is the same size as each of the other partitions.

 

If the configuration were changed to have only a single partition, the total ACL space would remain the same. As a result, the diagram below shows the ACL partition division:

---------------------------
----p1------|-----BK------|
---------------------------

 

The general rule (illustrated with rough numbers below):

- an increased number of partitions better utilizes the ACL space, but reduces the number of ACLs available per partition

- a decreased number of partitions wastes more of the ACL space on the backup partition, but increases the number of ACLs available per partition
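
To put rough numbers on this trade-off: the total ACL space is fixed, and the backup partition is the same size as each active partition, so the backup consumes 1/(N+1) of the space when there are N partitions.

12 partitions + 1 backup -> each partition gets 1/13 of the ACL space (backup overhead ~8%)
 4 partitions + 1 backup -> each partition gets 1/5  of the ACL space (backup overhead 20%)
 1 partition  + 1 backup -> the partition gets  1/2  of the ACL space (backup overhead 50%)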

 

This is why it is recommended to only have as many ACL partitions as the number of contexts.

 

Caution:

 

When failover is used, both FWSMs need to be reloaded at the same time after making partition changes. Reloading both FWSMs causes an outage; there is no possibility of a zero-downtime reload. At no time should two FWSMs with a mismatched number of partitions or rule limits be allowed to synchronize over failover.

*When replacing a failed FWSM, make sure to re-apply the customized ACL partition configuration before config synchronization.
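
A simple way to confirm that both units match before allowing synchronization (as also discussed in the comments below) is to compare the partition layout on each FWSM using the commands covered earlier:

FWSM# show resource acl-partition
FWSM# show np 3 acl count 0

The number of partitions, the per-partition rule limits, and the context-to-partition mapping should be identical on both units.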

 

References:

FWSM 3.1: http://www.cisco.com/en/US/docs/security/fwsm/fwsm31/command/reference/fwsm_ref.html

FWSM 3.2:  http://www.cisco.com/en/US/docs/security/fwsm/fwsm32/command/reference/fwsm_ref.html

FWSM 4.0: http://www.cisco.com/en/US/docs/security/fwsm/fwsm40/command/reference/qr.html#wp1622931

FWSM 4.1: http://www.cisco.com/en/US/docs/security/fwsm/fwsm41/command/reference/qr.html#wp1622931

Comments
Pratik Jhaveri

Great doco.

Juraj Papic

nice.

Andrey Calderon

Hello All,

Does anyone know of any errors or known issues when doing this procedure on FWSM 4.0(14) without reloading both failover units at the same time? Any major issue because of a mismatched number of partitions?

Rama Darbha

There are no issues reloading both units at the same time. They will still sync failover.

There is an issue if you have two FWSMs in failover and have mismatched partition numbers. You run the risk of losing ACLs during failover sync because there may not be enough partition space on the peer.

Before synchronizing failover, ensure that the number of partitions on both FWSMs is the same by using the command "show resource acl-partition".

Andrey Calderon

Rama, thanks for the clarification regarding the risk of losing ACLs.


