
[Terraform] After removing resource cdp/lldp config left on interface

When I delete part of the code, e.g. the part-2 section responsible for cdp and lldp, Terraform reports the resources as destroyed, but the configuration stays on the Nexus switch. I don't understand why everything else gets deleted while those 3 lines (visible on the switch) remain, even though the Terraform logs clearly say "destroyed". My removal is simply deleting the part of the code responsible for adding cdp and lldp to the second interface, that is, everything from this tag down:
//============================================== part-2 ==============================================

[Screenshots: Terraform destroy output and the remaining cdp/lldp lines in the switch configuration]

Code from terraform: main.tf

terraform {
  required_providers {
    nxos = {
      source  = "CiscoDevNet/nxos"
      version = "0.5.3"
    }
  }
}

provider "nxos" {
  username = var.nxos_username
  password = var.nxos_password
  url      = var.nxos_url
}

/*============================================ common ============================================*/

resource "nxos_feature_lldp" "lldp" {
  admin_state = "enabled"
}

/*=====================================*/

resource "nxos_rest" "cdpEntity" {
  dn         = "sys/cdp"
  class_name = "cdpEntity"
}

//============================================== part-1 ==============================================

/*============================================ description & L3 ============================================*/

resource "nxos_physical_interface" "desc-L3" {
  interface_id          = "eth1/5"
  description           = "desc1"
  layer                 = "Layer3"
  admin_state           = "up"
  user_configured_flags = "admin_state"
}

/*============================================ lldp ============================================*/

resource "nxos_rest" "lldpInst" {
  depends_on = [nxos_feature_lldp.lldp]
  dn         = "sys/lldp/inst"
  class_name = "lldpInst"
  children = [
    {
      rn         = "if-[eth1/5]"
      class_name = "lldpIf"
      content = {
        adminRxSt = "disabled",
        adminTxSt = "disabled",
        id        = "eth1/5"
      }
    }
  ]
}

/*============================================ cdp ============================================*/

resource "nxos_rest" "cdpInst" {
  depends_on = [nxos_rest.cdpEntity]
  dn         = "sys/cdp/inst"
  class_name = "cdpInst"
  children = [
    {
      rn         = "if-[eth1/5]"
      class_name = "cdpIf"
      content = {
        adminSt = "disabled",
        id      = "eth1/5"
      }
    }
  ]
}

//============================================== part-2 ==============================================

/*============================================ description & L3 ============================================*/

resource "nxos_physical_interface" "desc-L3v2" {
  interface_id          = "eth1/6"
  description           = "desc2"
  layer                 = "Layer3"
  admin_state           = "up"
  user_configured_flags = "admin_state"
}

/*============================================ lldp ============================================*/

resource "nxos_rest" "lldpInstv2" {
  depends_on = [nxos_feature_lldp.lldp]
  dn         = "sys/lldp/inst"
  class_name = "lldpInst"
  children = [
    {
      rn         = "if-[eth1/6]"
      class_name = "lldpIf"
      content = {
        adminRxSt = "disabled",
        adminTxSt = "disabled",
        id        = "eth1/6"
      }
    }
  ]
}

/*============================================ cdp ============================================*/

resource "nxos_rest" "cdpInstv2" {
  depends_on = [nxos_rest.cdpEntity]
  dn         = "sys/cdp/inst"
  class_name = "cdpInst"
  children = [
    {
      rn         = "if-[eth1/6]"
      class_name = "cdpIf"
      content = {
        adminSt = "disabled",
        id      = "eth1/6"
      }
    }
  ]
}
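
One observation worth noting: both lldp resources above (lldpInst and lldpInstv2) target the same parent DN, sys/lldp/inst, and likewise both cdp resources target sys/cdp/inst, so two Terraform resources are managing children of the same device object. A possible workaround, sketched below as an untested assumption (the per-interface DN form sys/lldp/inst/if-[...] is not confirmed by the provider docs), is to address each per-interface child object by its own DN so that each interface gets its own independent resource:

```hcl
/* Hypothetical sketch: manage the eth1/6 lldp child object directly by its
   own DN, assuming lldpIf objects are individually addressable in the
   NX-OS DME. Each interface then has a dedicated resource, so destroying
   one does not touch a DN shared with another resource. */
resource "nxos_rest" "lldpIf_eth1_6" {
  depends_on = [nxos_feature_lldp.lldp]
  dn         = "sys/lldp/inst/if-[eth1/6]"
  class_name = "lldpIf"
  content = {
    adminRxSt = "disabled",
    adminTxSt = "disabled",
    id        = "eth1/6"
  }
}
```

If this addressing works, deleting this single resource from the code should map one-to-one to deleting the if-[eth1/6] object on the switch.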


I also opened another thread describing a related problem: when I add the configuration for the second interface, everything is applied except the "no shutdown" command (with no errors reported), yet in the end the interface does not show "no shutdown", even though the logs clearly show adminSt="up".
https://community.cisco.com/t5/devnet-general-discussions/problem-with-adding-quot-no-shutdown-quot-to-existing/td-p/5157845

 

1 Accepted Solution


At this point, the problem is still not solved.
I consider this to be a bug in Terraform.


16 Replies

Alexander Stevenson
Cisco Employee

@krzysztofmaciejewskiit 

I have two suggestions.

 

1) This is hacky, but try running terraform destroy again. I recently hit an issue where I had to run terraform apply twice to create the infrastructure, and I'm still trying to figure out why.

2) Try increasing logging verbosity

Terraform has a logging mechanism that allows you to set different levels of verbosity using the TF_LOG environment variable. The possible values for TF_LOG are:

TRACE: Provides the most detailed logs, including internal operations and debug information.
DEBUG: Includes detailed debugging information.
INFO: Shows high-level information about what Terraform is doing.
WARN: Displays warnings about potential issues.
ERROR: Shows only error messages.

To use it, you can set the TF_LOG environment variable before running your Terraform commands. For example, to set it to DEBUG, you can use:

1. Run terraform destroy with debug-level logging

On Unix/Linux/Mac:

export TF_LOG=DEBUG
terraform destroy


On Windows (Command Prompt):

set TF_LOG=DEBUG
terraform destroy


On Windows (PowerShell):

$env:TF_LOG="DEBUG"
terraform destroy

 


2. Save debug-level logs to a file

On Unix/Linux/Mac:

export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform_destroy_debug.log
terraform destroy


On Windows (Command Prompt):

set TF_LOG=DEBUG
set TF_LOG_PATH=C:\path\to\terraform_destroy_debug.log
terraform destroy


On Windows (PowerShell):

$env:TF_LOG="DEBUG"
$env:TF_LOG_PATH="C:\path\to\terraform_destroy_debug.log"
terraform destroy

 

Explanation

export TF_LOG=DEBUG (or set TF_LOG=DEBUG on Windows): This sets the logging level to DEBUG, which provides detailed information about what Terraform is doing.


export TF_LOG_PATH=./terraform_destroy_debug.log (or set TF_LOG_PATH=C:\path\to\terraform_destroy_debug.log on Windows): This specifies a file where the logs will be written. You can choose any file path you prefer.

 

 

I hope this helps!

 

Another thing: when I trigger the deletion of this resource a second time (by deleting that part of the code), Terraform no longer shows that it is deleting it, which suggests it considers the resource already deleted. But if I do a "show run", the configuration is still there. [Screenshot of the show run output]

At this point, the problem is still not solved.
I consider this to be a bug in Terraform.

@krzysztofmaciejewskiit I would open an issue on the provider repo in this case - https://github.com/CiscoDevNet/terraform-provider-nxos - if you have not done so already.

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

Of course! I will do it in my free time.
I'll probably have to open two issues, because I had another problem with doing shutdown on interfaces and changing them from L2 -> L3. However, I wonder if that is the right place for this second issue, because according to one commenter it's not a Terraform problem but an NX-OS API problem. Please advise.
Second case discussion: https://community.cisco.com/t5/devnet-general-discussions/terraform-problem-with-adding-quot-no-shutdown-quot-to/td-p/5157845 

It depends; the team who manages this could be in the NX-OS engineering org (this is a guess), or otherwise it would be a DTS fault case, I guess.


You could add a null_resource for eth1/6 with no actual config. This tells Terraform to manage the configuration for eth1/6 and explicitly removes any existing settings. 

Regarding the second issue you mentioned, where the "no shut" command is not being applied to the interface, it's possible there's an issue with the admin_state attribute of the nxos_physical_interface resource. It's a guess, but try setting admin_state to "true" instead of "up" to see if that makes a difference.


Setting admin_state = true is not possible; it produces an error when I try it.
As for null_resource, that won't work for us, because we would rather not run a Terraform configuration that explicitly deletes a particular part of the device configuration; we just want to delete the part of the code responsible for that configuration.
If you have an idea that fits that approach, I'd love to hear it.

Hmmm... you could try adding lifecycle { ignore_changes = all } to the resources. TF should then ignore any changes to these resources, including deletions, which means that when you remove the code responsible for them, TF will not try to delete them on the device. FYI, this could lead to configuration drift…
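
For reference, a minimal sketch of that suggestion applied to the part-2 cdp resource from the main.tf above. One caveat worth verifying against the Terraform docs: ignore_changes suppresses in-place update diffs, while lifecycle { prevent_destroy = true } is the documented option for blocking destruction, so the exact behavior on code removal should be tested:

```hcl
resource "nxos_rest" "cdpInstv2" {
  depends_on = [nxos_rest.cdpEntity]
  dn         = "sys/cdp/inst"
  class_name = "cdpInst"

  lifecycle {
    # Tell Terraform to ignore all attribute changes to this resource.
    ignore_changes = all
  }

  children = [
    {
      rn         = "if-[eth1/6]"
      class_name = "cdpIf"
      content = {
        adminSt = "disabled",
        id      = "eth1/6"
      }
    }
  ]
}
```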


I think we misunderstood each other.
I want to remove configuration from the Cisco device (the one Terraform runs against), but by deleting the TF code, not by running new TF code that changes my variable values, e.g. from enabled -> disabled.
The idea is that we configure one port, and later, when needed, configure the second port; while configuring the second port we still keep the code responsible for the first port so its changes are not overwritten. When I later want to remove the configuration from the second port, I simply delete the code responsible for it and run TF with only the first port configured. In my opinion this should remove the configuration from the second port, because the code responsible for it is no longer in main.tf.

Ah, ok. As I understand it, what should happen is: when you remove a resource from your configuration file (e.g., main.tf), Terraform will automatically delete that resource from the device during the next terraform apply run, because Terraform keeps track of the resources it has created or modified in its state file.

So, in this case, if you remove the code responsible for configuring the second port, Terraform should delete the corresponding configuration from the device when you run terraform apply again with the updated configuration file?


Exactly. The problem is the lldp and cdp resources created with the generic API (nxos_rest). The whole configuration includes 10+ entries on the interface: enabling it, L3 mode, description, IP, ACL, etc. Everything is removed when I delete part of the TF code and run terraform apply, except lldp and cdp. Oddly enough, as I posted above, the logs show lldp being removed and cdp producing an error, yet both remain in the configuration. After re-running terraform apply, it no longer attempts to remove these resources, but they are still in the switch configuration.

I wonder if this is just how the nxos_rest resource works? As it's a generic API wrapper, it might not give Terraform the information it needs to properly track and manage the resources; that's a guess. Hacky, but how about using the terraform state rm command to manually remove the lldp and cdp resources from the Terraform state file?
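
As a concrete sketch of that workaround, using the resource addresses from the main.tf shown earlier in the thread (note that terraform state rm only makes Terraform forget the resources; it does not change anything on the device itself):

```shell
# Remove the part-2 lldp/cdp entries from Terraform state only.
# Resource addresses are taken from the main.tf posted above.
terraform state rm nxos_rest.lldpInstv2
terraform state rm nxos_rest.cdpInstv2

# Verify they no longer appear in state.
terraform state list
```

After this, Terraform will neither update nor try to destroy those two objects, so any leftover lines would have to be cleaned up on the switch manually.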


I have another thing configured through nxos_rest, I would say 75% and all the others are removed.

As for manual deletion, I don't know how it will behave, since this is supposed to still exist on other interfaces, I wonder if it will then delete everywhere. I'll check it out and let you know because I'm pretty new to Terraform.