Upgraded vManage from 20.3.2.1 to 20.3.3.1, application-server (GUI) is not running

Upgraded vManage from 20.3.2.1 to 20.3.3.1 and started seeing this error. Are there any workarounds or solutions? Diagnostics did not reveal anything.

6 Replies

I have the same problem. After upgrading the vManage cluster to 20.3.3.1, the application server never started, even after several attempts to restart the service and reboot vManage; nothing helps. Any solution for this issue?

If you come across any solution, please post it here! Thanks.

ssarfaraz2412
Level 1

My vManage was working fine until last week. Now I'm facing the same issue: "upstream connect error or disconnect/reset before headers. reset reason: connection failure"

vManage version 20.3.3

I never found a solution for this; I got lucky with one of these options: if you have old snapshots, restore one; do a cold reboot; or power cycle, and see if it works. If it works, my advice is to upgrade to 20.6.3, which is much more stable.

I found out that the disk was full, which was causing the error. It was due to a freshly deployed monitoring tool. Now the issue is resolved. Please check the used space on the /dev/sdb drive in vShell.
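
For anyone who hasn't done that before: from the vManage CLI, vshell drops you into a shell where df works. A minimal check (prompts are illustrative; /dev/sdb is the drive named above, but the device can differ per deployment):

vmanage# vshell
vmanage:~$ df -kh | grep -E 'Filesystem|sdb'

Running du -sh on whatever mount point df reports for sdb will then show what is actually eating the space.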

Skjalg Eggen
Level 1

Just replying with how I managed to fix this issue, if anyone else comes across this. 

TAC just told me to upgrade, even though I sent them df -kh output that clearly showed /rootfs.rw was 90% full.

That is not going to work; the full disk is actually the root cause of the webserver failing.

The logs show that if you have less than 1 GB free on /rootfs.rw, the application server shuts down, causing the error.
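
To see how close you are to that limit, same drill via vshell (df -kh is the command from my outputs above):

vmanage# vshell
vmanage:~$ df -kh /rootfs.rw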

Turns out I had a ton of old logs filling up the disk.

The culprit is these files: /rootfs.rw/var/volatile/log/nms/vmanage-elastic-cluster.log.*

They are not auto-cleaned, and they fill up the disk.

I deleted them and restarted all the NMS services, and suddenly everything worked.
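
For anyone following along, roughly what that looked like. The rm path is exactly the one above, and the glob only matches the rotated files, not the active .log; "request nms all restart" is the vManage CLI command to restart all NMS services, but verify it on your version before relying on it:

vmanage# vshell
vmanage:~$ rm /rootfs.rw/var/volatile/log/nms/vmanage-elastic-cluster.log.*
vmanage:~$ exit
vmanage# request nms all restart
vmanage# request nms all status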

NB: The upgrade itself will also fail if you have less than 50% free space on /rootfs.rw.

So just delete those old logs, make sure you have more than 50% free disk space, and you should be fine.