<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Hung CTS PAC provisioning job in Network Access Control</title>
    <link>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847467#M473449</link>
    <description>&lt;P&gt;I have a couple of Catalyst 9300 UXM switches in my SDA fabric that keep reaching out to ISE for PAC provisioning. The devices already have a valid, unexpired PAC; it appears they have a hung provisioning job. What is the best way to kill the hung session without blowing away the PAC actually in use? I would prefer an easier method and would love to avoid having to remove each switch from the fabric and re-add it.&lt;/P&gt;&lt;P&gt;In the ISE RADIUS live logs I see:&lt;/P&gt;&lt;P&gt;5405 RADIUS Request dropped&lt;BR /&gt;AAA:service-type=cts-pac-provisioning&lt;/P&gt;&lt;P&gt;On the devices I see:&lt;BR /&gt;#sh cts provisioning&lt;BR /&gt;A-ID: Unknown&lt;BR /&gt;Server XXXXX, using shared secret&lt;BR /&gt;Req-ID 1c6b002a: callback func 0xffef5a6ba8, context (nil)&lt;/P&gt;&lt;P&gt;#sh cts pacs returns a valid PAC and shows everything is good.&lt;/P&gt;&lt;P&gt;The hosts' 802.1X sessions all appear to be fine. However, every couple of minutes the live logs get flooded with the attempts/drops. My other fabric switches show no outstanding provisioning jobs. The two devices in question were rebooted over the weekend.&lt;/P&gt;</description>
    <pubDate>Mon, 29 Apr 2019 20:17:15 GMT</pubDate>
    <dc:creator>Mike.Cifelli</dc:creator>
    <dc:date>2019-04-29T20:17:15Z</dc:date>
    <item>
      <title>Hung CTS PAC provisioning job</title>
      <link>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847467#M473449</link>
      <description>&lt;P&gt;I have a couple of Catalyst 9300 UXM switches in my SDA fabric that keep reaching out to ISE for PAC provisioning. The devices already have a valid, unexpired PAC; it appears they have a hung provisioning job. What is the best way to kill the hung session without blowing away the PAC actually in use? I would prefer an easier method and would love to avoid having to remove each switch from the fabric and re-add it.&lt;/P&gt;&lt;P&gt;In the ISE RADIUS live logs I see:&lt;/P&gt;&lt;P&gt;5405 RADIUS Request dropped&lt;BR /&gt;AAA:service-type=cts-pac-provisioning&lt;/P&gt;&lt;P&gt;On the devices I see:&lt;BR /&gt;#sh cts provisioning&lt;BR /&gt;A-ID: Unknown&lt;BR /&gt;Server XXXXX, using shared secret&lt;BR /&gt;Req-ID 1c6b002a: callback func 0xffef5a6ba8, context (nil)&lt;/P&gt;&lt;P&gt;#sh cts pacs returns a valid PAC and shows everything is good.&lt;/P&gt;&lt;P&gt;The hosts' 802.1X sessions all appear to be fine. However, every couple of minutes the live logs get flooded with the attempts/drops. My other fabric switches show no outstanding provisioning jobs. The two devices in question were rebooted over the weekend.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Apr 2019 20:17:15 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847467#M473449</guid>
      <dc:creator>Mike.Cifelli</dc:creator>
      <dc:date>2019-04-29T20:17:15Z</dc:date>
    </item>
    <item>
      <title>Re: Hung CTS PAC provisioning job</title>
      <link>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847487#M473450</link>
      <description>Update: A reload of the device appears to have terminated the hung provisioning job. If anyone knows of any other ways, please advise.</description>
      <pubDate>Mon, 29 Apr 2019 20:49:50 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847487#M473450</guid>
      <dc:creator>Mike.Cifelli</dc:creator>
      <dc:date>2019-04-29T20:49:50Z</dc:date>
    </item>
    <item>
      <title>Re: Hung CTS PAC provisioning job</title>
      <link>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847628#M473451</link>
      <description>We had some interesting issues with 3850s and RADIUS/PAC processes when running 3.7. Some of them were fixed simply by entering and then exiting "aaa group server radius &amp;lt;name&amp;gt;" configuration mode.&lt;BR /&gt;&lt;BR /&gt;Out of curiosity, what software version is this running?</description>
      <pubDate>Tue, 30 Apr 2019 05:16:41 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847628#M473451</guid>
      <dc:creator>Damien Miller</dc:creator>
      <dc:date>2019-04-30T05:16:41Z</dc:date>
    </item>
    <item>
      <title>Re: Hung CTS PAC provisioning job</title>
      <link>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847909#M473455</link>
      <description>IOS XE 16.9.2 Fuji. Thanks for the tip about re-entering the RADIUS server group configuration as another way to resolve the issue.</description>
      <pubDate>Tue, 30 Apr 2019 12:31:40 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/hung-cts-pac-provisioning-job/m-p/3847909#M473455</guid>
      <dc:creator>Mike.Cifelli</dc:creator>
      <dc:date>2019-04-30T12:31:40Z</dc:date>
    </item>
  </channel>
</rss>

