5269 Views - 12 Helpful - 22 Replies

What is the difference between the IOS 12.2(55) and 12.2(58) versions?

goyourmin
Level 1

Hello!

I am reviewing the IOS 12.2 SE release train.

However, the many 12.2 SE releases, such as 12.2(53), 12.2(54), 12.2(55), and 12.2(58), are confusing to tell apart.

 

What is the difference between IOS 12.2(55)SE and IOS 12.2(58)SE?

Q1. 12.2(55) received the most recent update, but common sense suggests that 12.2(58) should be the later version because the number is higher. Which one is actually the newest (for example, in terms of features)?

 

Q2. Is there a roadmap for each of the 12.2 releases?
- I can't find the difference between 12.2(55) and 12.2(58) anywhere on the Cisco website. Or should 12.2(55) and 12.2(58) be considered separate IOS trains?
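My working assumption (please correct me if this is wrong) is that the number in parentheses is the maintenance release within the 12.2 train, so a higher number should mean a later release. Here is a quick sketch of how I am comparing the strings (plain Python, nothing Cisco-specific; the parsing is my own assumption and it does not handle rebuild suffixes such as SE1, SE2, and so on):

import re

def parse_ios_release(release: str):
    """Split a release string like '12.2(55)SE' into comparable parts.
    Assumes the common '<major>.<minor>(<maintenance>)<train>' naming."""
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)\)([A-Z]+\d*)", release)
    if not m:
        raise ValueError(f"Unrecognised release string: {release}")
    major, minor, maint, train = m.groups()
    return (int(major), int(minor), int(maint)), train

def is_newer(a: str, b: str) -> bool:
    """True if release a has a higher maintenance number than b in the same train."""
    (ver_a, train_a), (ver_b, train_b) = parse_ios_release(a), parse_ios_release(b)
    if train_a != train_b:
        raise ValueError("Different trains are not directly comparable")
    return ver_a > ver_b

print(is_newer("12.2(58)SE", "12.2(55)SE"))   # True: (58) is the later maintenance release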

 

I checked the link below, but I couldn't find any information about IOS 12.2(55)SE.
(The download page offers 12.2(55)SE, while 12.2(58) is not provided.)

link : https://www.cisco.com/c/en/us/support/ios-nx-os-software/ios-software-releases-12-2-se/products-release-notes-list.html

 

Best regards,

22 Replies


@Ramblin Tech wrote:

You are absolutely right about TAC not supporting products that are past their End-of-Support date, but a customer should still be aware that loading in an unsupported IOS release on EoS h/w may yield hard to troubleshoot issues later, even if the h/w initially boots without a problem.


The 15.0(2)SE train is supported.  When the 3560 and 3750 reached their End-of-Support date, Cisco removed all the files and links.  Had it not, the files would have been there too.  

Hi @Ramblin Tech - it sounds like you are quite close to, or inside, Cisco yourself, but no matter.

Just to support what Leo is saying - there are some bugs we discover these days which suggest that some releases have had little to no testing at all - definitely no full regression test.  There are far too many examples to list and I can't think of them all right now, but two examples come to mind from the recent past:
- The ROMMON for the C9800-80 which effectively bricked the box - all LAN ports disabled.  We never got a full and honest answer on this, but we suspect the ROMMON was actually built for another C9800 model (or another platform altogether?) because it literally did not support the 9800-80 ports at all.  Amazingly, it took TAC/BU months to remove the faulty image from software download even after it was reported by multiple customers with bricked WLCs - SHOCKING!  We were told it *was* tested - we doubt that is true for the 9800-80. CSCwa12216/CSCvz25229.  Funnily enough, CSCvz25229 is tagged as ASR1K even though it's a 9800 issue - maybe that gives us a clue to what really happened and what was actually tested?
- More recently - 17.6.5 (and 17.11.1/17.12.1) - CSCwe11637 - it just needs a dialer interface with a PPPoE config, which millions of Cisco routers must be running, so it should be in every IOS test cycle (?).  It fills the logs with MILLIONS of tracebacks, so no elaborate testing was needed to spot it - just a quick glance at the logs.  And yet somehow this got released?

It makes you wonder whether those releases were really tested much, or at all.  If they were, then there are questions about the quality of that testing.

Yes, I've had many of those discussions with account teams and Cisco quality-improvement champions over the years, so I've seen those stats, but my perception is that quality has got worse recently.  The other trend I've noticed more often recently is for TAC/BU to try to label bugs (things that were working and then got broken) as "enhancement requests" (Sev 6), presumably to make those quality stats look better than reality.  TAC are also sometimes very reluctant to open a bug at all, and while customers like us press hard until they do (we shouldn't have to), I suspect many others just give up.

Ramblin Tech
Spotlight

@Rich R:  I hear you.  I was a Cisco customer back in the early-mid '90s in another life, and felt the heat when things went off the rails.

I am not here to say there are no IOS s/w quality issues, nor to say that Cisco's testing is adequate to find all bugs. Every IOS/IOS-XE/IOS-XR/NX-OS release ships with bugs, as does every other NOS for every other vendor's networking products. I have no data to say that Cisco is doing any better than other vendors in this regard, so I will not argue with anyone who says Cisco is doing worse. Should Cisco be finding and fixing more defects before they become CFDs? Absolutely; it would be absurd to think otherwise. Software quality (eg, CFDs) is a constant complaint from Sales back to Engineering, and from customers directly to Engineering at various forums (Cisco Live!, etc). It is not a question of whether Cisco is testing s/w or not: it is. The question is whether there is enough testing taking place to cover all possible configurations that customers might deploy: obviously, there is not. And it is especially infuriating when a s/w release does not even boot up on officially supported h/w; how was that not caught internally in Cisco's "smoke" test (so named to describe the most basic of tests: boot up the device and see if smoke comes out!)?
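To make that concrete, here is a minimal sketch of what such a boot-and-look smoke check might amount to if automated. It assumes SSH access via the Netmiko library; the host, credentials, and the strings being matched are illustrative only, and this is in no way Cisco's actual test harness:

from netmiko import ConnectHandler

# Illustrative device details only; not a real host.
device = {
    "device_type": "cisco_ios",
    "host": "198.51.100.10",
    "username": "admin",
    "password": "example-password",
}

def smoke_test(dev: dict) -> bool:
    """The most basic of checks: the box is up, reports a version,
    and the log is not already full of tracebacks right after boot."""
    conn = ConnectHandler(**dev)
    try:
        version = conn.send_command("show version")
        logs = conn.send_command("show logging")
    finally:
        conn.disconnect()
    booted = "uptime is" in version
    tracebacks = logs.count("Traceback")
    print(f"booted={booted}, tracebacks_in_log={tracebacks}")
    return booted and tracebacks == 0

if __name__ == "__main__":
    assert smoke_test(device), "smoke test failed"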

Why, then, is eliminating CFDs so hard? Think of all the supported features in a given IOS/XE/XR/NX-OS release, now think of all the knobs for all the commands to enable those features and all the possible combinations of those knobs among all those features in the release. Now multiply that very large number by all the possible combinations of h/w modules that can be installed in a given product, and then multiply by the number of products supporting that s/w release. There are a staggering number of h/w & s/w permutations and combinations for any given release, and not all of them are going to get tested, as there is just not enough money, time, and people to do that. Many, many CFDs are corner cases that come from unique combinations of h/w, s/w, and scale.
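A back-of-the-envelope illustration of that multiplication, with numbers invented purely to show the scale (they are not actual Cisco counts):

# Hypothetical figures, chosen only to illustrate the combinatorial blow-up.
toggleable_features = 200                         # features that can be independently on or off
feature_combinations = 2 ** toggleable_features   # every possible mix of those toggles
hw_module_mixes = 50                              # plausible module combinations per chassis
platforms = 20                                    # products sharing the same release

total_permutations = feature_combinations * hw_module_mixes * platforms
print(f"{float(total_permutations):.2e} configurations that could, in principle, be tested")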

So what does Cisco do? The DEs (Development Engineers) run tests against their code to see if it performs as expected with regard to the functional spec provided to them (and yes, they might be using a vrouter as Leo suggested, if it's a PI feature). If their piece of the much bigger puzzle is not functioning wrt the spec, they fix their part of the code just as you would expect in any s/w dev. If the feature is Working As Designed (WAD) against the functional spec, DEs pass the code to devtest for more formal unit tests. (BTW, this is where the infuriating Sev 6 starts: if the functional spec was written for a very narrow use-case, then anything a customer tries beyond that use-case becomes a new feature request). When devtest finds defects, they document their findings and open a DDTS/CDETS ID for the DEs to work on. Bugs found during devtest typically have a flag set so they are not visible externally, so customers do not know of the enormous numbers found during devtest. As every customer can attest, devtest does not find all the bugs, with CFDs being the result. If you are a very large customer (sales-wise), there can even be another round of testing beyond devtest, as there is an Engineering organization that can take the code from devtest and run System Integration Tests (SIT) against actual test plans provided by customers, where those customers' product sets, topologies, and scale factors are specifically tested with the customers' configs, with results readouts provided back to those customers. This customer-specific SIT is quite expensive for Cisco Engineering to execute and performed very selectively.

Despite ongoing efforts to reduce CFDs, there are going to be defects in the code you download from cisco.com, even stupid ones that should have been caught early in the dev cycle. So what can a customer do? Basically, run mature, stable code releases, use supported IOS configs (some customers insist on using commands not supported by their releases) and SIT test it yourselves before putting it into production (Cisco Services can help identify stable code for your deployment). You can argue that customers should not have to do Cisco's testing job for it, or you can be pragmatic, accept bugs as a fact of life, and just build SIT testing into your deployment schedules. I worked for years with a very large SP customer that routinely tested s/w releases (from all vendors) for 12-18 months before deploying -- this was the longest, most extensive test cycle of any Cisco customer. They traded off being later to market with new features against having very few catastrophic bugs in their releases once they got to market. I am not advocating 12-18 month qualification cycles for customers (it's way too long), but no customer should ever just load anybody's brand-new release into a production network without a minimum of their own SIT/qualification testing. When you find defects in your qualification testing, work with Cisco TAC/Services for a workaround (eg, new config) and/or a bug filing. If Services pushes back against opening a non-WAD bug, get your account team involved to press the issue. Unless your topology and configs are relatively simple, your specific combination of topology, h/w SKUs, configs, and scale requirements could very well be unique, with that specific combination never having been tested by Cisco. This doesn't mean that Cisco does not perform testing; it does, but it cannot test the specific topologies/product-sets/configs/scale combinations of tens of thousands of customers.

Disclosure: I am recently retired from Cisco's systems engineering ranks after 24 years. I am still a shareholder, but have no financial interest or business arrangement otherwise.

Disclaimer: I am long in CSCO

Thanks for the extensive reply and explanations - good for everyone else to know.
I am already familiar with all of that, which is why I gave the examples I did. Example 1 should have been picked up in the smoke test, which suggests it wasn't tested at all.  Example 2 is pretty much a staple feature - no special config or feature mix required, used on literally millions of xDSL lines around the world on Cisco routers - so it should be in any base test config for any router which supports xDSL, but apparently it either wasn't in any test config for that release or the release(s) weren't tested.

The Sev 6 complaint is about existing features/functionality which were working fine and then got broken - definitely not what I would consider an enhancement.  If it was working, I expect it to stay working.

I wasn't comparing Cisco to other vendors at all - just to their own trends over the last 23 years (I started working with Cisco around 2000).  I expect what we're seeing is a consequence of various cost-cutting measures.

So I think we're largely in agreement.


@Ramblin Tech wrote:
Why, then, is eliminating CFDs so hard? Think of all the supported features in a given IOS/XE/XR/NX-OS release, now think of all the knobs for all the commands to enable those features and all the possible combinations of those knobs between all those feature in the release. Now multiply that very large number by all the possible combinations of h/w modules that can be installed in a given product, and then multiply by the number of products supporting that s/w release. There are a staggering number of h/w & s/w permutations and combinations for any given release, and not all of them are going to get tested, as there is just not enough money, time, and people to do that. Many, many CFDs are corner-cases that come from unique combinations of h/w, s/w, and scale

CSCwa12216/CSCvz25229 is a good example of software that was never tested at all.  This was confirmed by the TAC engineer, the dev, and our Wireless SE when they refused to answer the question posed by our management team.  This is not a "corner case", FFS.  Did anyone load the flippin' firmware into an appliance, with ZERO CONFIG, and see what happens?

And during the troubleshooting demonstration (because the dev would not believe us) of CSCvz25229, they asked us "Can we 'borrow' your WLC so we can run some tests of our own?"  Our management team looked at each other and heaved a collective "da fuq".  We gave TAC and the developers the issue as well as the workaround.  We had already done all the hard work for Cisco!  Now go and replicate (or "repro") it.  Oh wait, TAC and dev have no access to a physical appliance to test!  <FACEPALM>

And getting that infernal piece of code removed is another hair-pulling exercise.  "No, we are not removing this code because it is just a minor bug."  A what, now?  Minor?  You call a bug, which could be easily triggered and can lock down the uplink ports, "minor"?  A bug which, by the way, in the hands of a miscreant, can-and-will cripple anyone's WLC, "minor"???  <DOUBLE FACEPALM>

We begged the developers to no avail.  We got our account team involved and they were unsuccessful too.  The software finally got pulled because we reached out to our peers and one of them had an "executive sponsor".  

Just imagine what would've happened if that software stayed.  Imagine the ensuing chaos had a disgruntled staff loaded that software before being shown the door.  When that controller crashes or reboots, the uplinks are locked.  There is no way to access that WLC but the console port.  And there is no way to determine if someone has "pre-loaded" the faulty code.  It is the best DoS software Cisco has released to the general public.  

FN-72424 and FN-72258 are good examples. 

IOS-XE 16.10.X and 16.11.X are riddled with "stack merge" and PoE bugs.  I do not fully understand how, if all the elaborate steps you've mentioned were followed, these two trains could be released in the first place.  I mean, just take two switches (without any configs, of course), stack them, and leave them on for two weeks.  That's all it takes and the "stack merge" bugs will appear.   The PoE bugs are just as trivial to trigger, and it does not take "developer" or "super admin" skill to do it -- plug a phone in and leave it on for several days.  
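For what it's worth, the "leave it running for two weeks" style of check described above is easy to approximate with a periodic poll. A minimal sketch, again assuming SSH access via Netmiko, with the host, credentials, and interval purely illustrative:

import time
from netmiko import ConnectHandler

# Illustrative stack member; not a real host.
device = {
    "device_type": "cisco_ios",
    "host": "198.51.100.20",
    "username": "admin",
    "password": "example-password",
}

def poll_stack_and_poe(dev: dict, interval_s: int = 3600, iterations: int = 336):
    """Record stack membership and PoE state once an hour for two weeks,
    so a stack merge or a port that quietly stops delivering power shows up in the log."""
    for _ in range(iterations):
        conn = ConnectHandler(**dev)
        try:
            stack = conn.send_command("show switch")
            poe = conn.send_command("show power inline")
        finally:
            conn.disconnect()
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open("soak-log.txt", "a") as f:
            f.write(f"===== {stamp} =====\n{stack}\n{poe}\n")
        time.sleep(interval_s)

poll_stack_and_poe(device)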

Those elaborate steps are nice if people actually take the time to follow them.  Right now, as I read through a long list of updated Cisco Bug IDs, I am still unconvinced that these so-called "smoke tests" are being performed. 


@Ramblin Tech wrote:

Basically, run mature, stable code releases


Define "mature" because Cisco is not helping.  Putting a worthless "star" on an IOS-XE code does not mean the code is stable nor mature but Cisco seems to be giving everyone the impression it is.  There is a significant difference between a firmware tagged as "Safe Harbor" and one with a just a "gold star".

For one, I will sleep soundly at night knowing full well that our core switch infrastructure is running an IOS version tagged as "Safe Harbor".  And I have.  For more than twelve years.  That core switch stayed up without throwing any tantrums or crashes.  For twelve years.  Now, I have a 9500 on a "star release" that crashes every several months because of one bug after another.  

 


@Ramblin Tech wrote:
When you find defects in your qualification testing, work with Cisco TAC/Services for a workaround (eg, new config) and/or a bug filing. If Services pushes back against opening a non-WAD bug, get your account team involved to press the issue.

If someone finds an "operational" (not catastrophic) bug, it will take several years to get it fixed unless the organization that reported the bug has an "executive sponsor".

Cisco cannot be a "market leader" or a "trend setter" if it uses the argument of "if everyone else does it, why can't we".  A market leader or a trend setter stands out from the "rest of the pack" and ploughs forward.  If Cisco is following what the others are doing, then someone else is setting the trend and Cisco is merely a follower. 

But genuinely thanks for the extensive responses and have a nice weekend.  

To be fair, I think some of those fall into the category of production-line faults, so you wouldn't expect them to be caught in standard regression/unit testing.  But production-line QA is generally supposed to test a sample coming off the line (maybe 1 in 1000?) and that is what should have picked up those types of problems.  So either they're not doing those sample tests anymore or the quality of the "testing" is so bad that it simply doesn't pick up really obvious and/or serious problems.  The fact that these seem to be quite common lately suggests that production-line QA may also be getting generally worse.

The 9300 PoE problem (CSCwe22958) is likely easy to explain - they might have tested a single piece of hardware (if any), which was a different 9300 model, and because it worked on that hardware they assumed it would also work on all the others.  That's the risk of not testing every model and every feature for every release - you are guaranteed to miss model-specific problems like this.  

If any of these issues (or ones that you might find in the future) impacted your own network, the best way to bring about real change is to bring these to the attention of your account team and stress how a lack of quality will have an impact on future sales to your organization. Account teams are coin-operated: they react to potential sales opportunities and the potential for lost sales. If there is no money on the line, they may empathize with your plight, but their daily efforts are going to be directed towards the customers who might send them POs.

Each of the FNs and bug IDs you list will have a back story as to how they got through the system, and each one represents an opportunity for Cisco Engineering or Manufacturing to improve their quality processes. But, realistically speaking, no one is going to chase these down without a nudge. TAC does not see correcting systemic engineering or manufacturing issues as its mission, as it is focused on returning broken networks to service as quickly as it can. If you have an appropriate Advanced Services contract, their NCEs can research the back stories, but it would really be up to your product or services account teams to drive change into the BU/BE because there is money on the line.

Disclaimer: I am long in CSCO