07-29-2019 12:56 PM - edited 07-29-2019 12:58 PM
Using Python, I have created a script to allow our imaging team to gather a list of MACs/descriptions from a specific endpoint group. I have the script and the API calls about 95% working for what I want. However, I am still figuring out two things:
1 - How to write the output to a CSV without overwriting rows. Opening the file in append mode works when I tweak the code, but I am at a loss for how to write headers only once when appending.
2 - Since the endpoint group in ISE is small (<100 MACs), I can write code that references the endpoint group href from the separate JSON fields for the next page, etc. My question is: how would one scale this if the endpoint group has more than a few pages of output?
For 2, I was able to run specific curl commands to gather the information I needed; the last output ends with:
"nextPage" : {
"rel" : "next",
"href" : "https://XXXXXX:9060/ers/config/endpoint?filter=groupId.EQ.a164XXX6840XXX7-0242f38bcc8a&page=2",
"type" : "application/xml"
}
}
}
which led me to the scaling question. For part 1, here is a snippet:
print "Total Number of Endpoints in Group:",endPoints['SearchResult']['total']
print "---------------------------------------"
for MAC in endPoints ["SearchResult"]["resources"]:
temp1 = MAC['name']
try:
temp2 = MAC['description']
with open ('Img_Grp_Macs.csv', 'wb') as csvfile:
##headers = ['MAC Address','Description']
filewriter = csv.writer(csvfile, delimiter=',',quoting=csv.QUOTE_MINIMAL)
#filewriter.writerow(['MAC Address', 'Description'])
filewriter.writerow([temp1, temp2])
print "MAC Address:", temp1, "||", "Description:", temp2
except KeyError:
with open ('Img_Grp_Macs.csv', 'wb') as csvfile:
filewriter = csv.writer(csvfile, delimiter=',',quoting=csv.QUOTE_MINIMAL)
#filewriter.writerow(['MAC Address', 'Description'])
filewriter.writerow([temp1, temp2])
print "MAC Address:", temp1, "||", "Description:"""
continue
Excuse the spacing from copy/paste. The output to screen works and looks something like this:
Total Number of Endpoints in Group: 64
---------------------------------------
MAC Address: x:xxxxxxx:x || Description: test123
MAC Address: x:xxxx:x || Description: test
....and so on
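One common pattern for the append-with-headers problem (a sketch, not code from this thread; `append_endpoint_rows` and the path are illustrative, and it is shown in Python 3 syntax — on Python 2 use mode 'ab' and drop `newline=''`) is to open the file once in append mode and write the header row only when the file is new or empty:

```python
import csv
import os

def append_endpoint_rows(path, rows):
    """Append (mac, description) pairs to a CSV, writing the header only once."""
    # Write the header only if the file does not exist yet or is empty
    need_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, 'a', newline='') as csvfile:  # mode 'ab' on Python 2
        filewriter = csv.writer(csvfile, delimiter=',', quoting=csv.QUOTE_MINIMAL)
        if need_header:
            filewriter.writerow(['MAC Address', 'Description'])
        for mac, description in rows:
            filewriter.writerow([mac, description])
```

Opening the file once per batch (or once before the whole loop) also avoids the overwriting caused by reopening it in write mode on every iteration.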
Any help is appreciated.
07-29-2019 09:12 PM
I know this doesn't answer your questions, as I am still learning Python myself, but FYI: you can increase the count per request to 100 by adding "&size=100" to the end of your query. That will at least cut down the number of pages to call for larger groups.
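For example (hypothetical hostname, and the group ID placeholder is illustrative), the query from earlier in the thread would become:

```python
# Hypothetical host and group ID; ERS caps size at 100 per request
url = ("https://ise.example.com:9060/ers/config/endpoint"
       "?filter=groupId.EQ.<group-id>&size=100")
```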
I had never filtered by groupId before, so that is pretty nice; I noted it for future reference. I tested the call in my Python shell and was able to cycle through the data (in JSON format).
Good stuff.
07-29-2019 10:23 PM
Here is how I did it:
1.) Initialize an empty result dictionary first (e.g. __result_content = {}).
2.) Initialize a variable that holds the next URL reference (__nextURL). Initially, this variable is populated with your initial API call.
3.) Build a loop (e.g. while __nextURL != None:).
4.) Make the API GET request inside the loop and parse the results into a dictionary.
5.) Append the results to the dictionary from step 1.
6.) From the results, check whether the nextPage element is present. If yes, update the __nextURL variable to that value; if not, set __nextURL to None.
==> If there is a next page, the loop runs again and updates the dictionary.
At the end of the day, the result dictionary contains all elements. After that you can do whatever you want with it (e.g. loop over the items and create a CSV).
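The steps above can be sketched roughly like this (hypothetical names throughout: `fetch_page` stands in for whatever authenticated GET-and-parse-JSON call you already use, and the accumulator is shown as a list rather than a dictionary):

```python
def collect_all_pages(fetch_page, first_url):
    """Follow ERS 'nextPage' links until no next page remains."""
    resources = []        # accumulated endpoints from every page (step 1)
    next_url = first_url  # step 2: start from the initial API call
    while next_url is not None:                       # step 3: loop per page
        search = fetch_page(next_url)["SearchResult"]  # step 4: GET + parse
        resources.extend(search["resources"])          # step 5: accumulate
        # step 6: 'nextPage' is only present when more pages remain
        next_page = search.get("nextPage")
        next_url = next_page["href"] if next_page else None
    return resources
```

Once the loop finishes, `resources` holds every endpoint across all pages, and the CSV can be written in a single pass afterwards.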