In tailf-hcc (all versions to date), automatic failback to the original master/slave configuration following a master-to-slave failover is not supported. This is deliberate: after a failover, the user must manually determine which CDB (the slave's or the original master's) holds the most recent data before failing back.
With tailf-hcc, there are situations in which the slave node claims mastership (and creates a VIP interface) while the original master still believes it is master (with its own VIP interface) - yes, a split-brain situation. The slave claims mastership because its 'pings' to the master fail; it can no longer reach the master. In cases where the master server dies or the NSO process crashes, this is exactly what you want, and the slave becomes the sole master. However, when only the network path between the nodes is down (the master server and NSO are still operational), the slave claims mastership but cannot reach the master to inform it of the failover - thus both nodes act as master.
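To make those failure modes concrete, here is a minimal sketch of the distinction. This is not tailf-hcc code; the role strings are hypothetical stand-ins for whatever each node self-reports when you poll it (e.g. via CLI or a northbound API):

```python
# Hypothetical sketch: classify a two-node HA cluster from each node's
# self-reported role ("master", "slave", or "unreachable"). The roles
# reflect what each node *believes*, which is exactly what diverges in
# the split-brain case described above.

def classify_ha_state(master_role: str, slave_role: str) -> str:
    """Return a label for the cluster state given each node's reported role."""
    if master_role == "master" and slave_role == "master":
        # Network partition: pings fail but the original master is alive.
        # Both VIPs are up and the two CDBs can now diverge.
        return "split-brain"
    if master_role == "unreachable" and slave_role == "master":
        # Master host or NSO process is down; the slave correctly took over.
        return "failed-over"
    if master_role == "master" and slave_role == "slave":
        return "normal"
    return "unknown"
```

For example, `classify_ha_state("master", "master")` returns `"split-brain"`, while `classify_ha_state("unreachable", "master")` returns `"failed-over"`.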
In this case, if your northbound client issues NSO requests to the VIP, it will likely continue to update the original master - or, depending on your northbound/network design, it may instead connect to the VIP on the failed-over slave.
After the failover event, you need to determine which CDB has actually been receiving updates - in today's tailf-hcc this must be done manually.
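As a rough illustration of that manual step: assuming you can extract the timestamp of the most recent commit on each node (for example, from each node's commit history), the comparison itself is simple. Everything below is a hypothetical sketch, not a tailf-hcc feature:

```python
from datetime import datetime

# Hypothetical helper: given the last-commit timestamp read from each
# node's CDB (ISO-8601 strings), decide which node holds the most
# recent data. A tie favors the original master, since failing back to
# it restores the pre-failover configuration.

def most_recent_cdb(master_last_commit: str, slave_last_commit: str) -> str:
    """Return "master" or "slave", whichever committed most recently."""
    m = datetime.fromisoformat(master_last_commit)
    s = datetime.fromisoformat(slave_last_commit)
    return "slave" if s > m else "master"
```

So if the slave committed at 11:00 and the master's last commit was at 10:00, `most_recent_cdb("2020-01-01T10:00:00", "2020-01-01T11:00:00")` returns `"slave"`, telling you which CDB to treat as authoritative before failing back. Note this only covers recency; if northbound clients wrote to both nodes during the split, the diverged changes still have to be reconciled by hand.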
Enhancement requests have been filed for 'auto-failback'. NSO HA is currently being reworked to provide simpler configuration, including support for 'auto-failback', targeted for NSO 5.4 I believe.