We have an HA cluster of two 401Es. For patch-level upgrades our weekly maintenance window is long enough to upgrade firmware via the GUI, desk-check the new config, run through the test suite, and, if necessary, revert by booting to the secondary partition before the window closes.
For past major upgrades I've split the cluster, gone through the recommended upgrade steps on the offline unit, and once satisfied swapped it into production. The cluster is usually re-established a week later.
This time we will go from 6.0.12 -> 6.2.9 -> 6.4.6. On our test system those steps were rather uneventful. Splitting the cluster might introduce more work & risk than necessary. Does anyone know if I can manipulate the partition contents during the process to end up running 6.4.6 (as upgraded from 6.2.9) but with the initial firmware (6.0.12) & config in the secondary partition instead of the last intermediate step (6.2.9)? That would allow us to keep the cluster running but with the flexibility of booting back to our solid 6.0.12 setup in one quick step.
The test system is not a cluster. I thought about trying to break into the reboot on the console to force the new firmware with tftp into the active partition, and reboot leaving that partition as active. I don't see a point in trying this with only a standalone test system. And I don't see anything else in the docs or online. Wondering if someone has a technique to accomplish this they'd like to share. School me, please.
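For reference, the only partition-related CLI commands I'm aware of (verify against your FortiOS release) just list the flash contents and pick the next boot partition; I haven't found anything that writes into the inactive partition directly:

```
# Show both flash partitions: firmware version/build in each, and which is active
diagnose sys flash list

# Boot from the other partition on the next restart
execute set-next-reboot secondary
execute reboot
```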
I don't think what you're thinking is possible. When you upgrade to 6.4.6, the previous config with 6.2.9 needs to be in the partition. Otherwise it can't convert/upgrade the config to fit 6.4.6.
If you have a TFTP server connected to the cluster, in the worst case you just need to flash the drive, upload the 6.0.12 image, and then upload the saved config to recover the original environment. I wouldn't imagine it takes more than 15 min.
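From the CLI that recovery would look something like the following (the image and config filenames and the TFTP server IP are placeholders; check the exact syntax on your build):

```
# Pull the 6.0.12 image from the TFTP server into the active partition and reboot
execute restore image tftp <image_file.out> 192.0.2.10

# Once the unit is back up on 6.0.12, restore the saved configuration
execute restore config tftp <config_file.conf> 192.0.2.10
```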
Yeah, I shouldn't have mentioned that idea. I don't think it'll work either. That's why I'm not even bothering to try it on a standalone. But if someone's proven us wrong I'd love to know about it.
My objective is to find the most efficient way to manipulate the partition contents so I end up with the starting firmware & config in one and the target firmware & config in the other. It's not time to debate whether it's worth doing. I'll decide that (vs splitting the cluster) once the steps involved are known.
I presume I can do it by downgrading from 6.4.6 to 6.0.12 and reloading the 6.0.12 config, to end up with the two partitions the way we want. That's probably more time and reboots than are worthwhile. I'm hoping to find a method with less brute force and more elegance. Every now and again you learn about commands and parameters on the forum you didn't know about. I'm hoping for that.
Just posting my results in case this applies to anyone else. As predicted, I could not find a way to manipulate the partitions directly. The brute force approach I outlined above did work though. We got an extra long maintenance window so I tried this out. The real purpose of the test was to see how quickly we could revert to the original firmware & config after a multi-step upgrade. When doing a multi-step upgrade that appears straightforward it seems overkill to split the cluster. But we need to know how to quickly revert if we're not ready for production when nearing the end of the maintenance window. Of course we always keep config backups handy with a TFTP server too in case of disaster.
So first I upgraded as usual:
1. 'Configuration > Revisions > Save' to save a (6.0.12) config to revert to if required.
2. Upgrade the firmware to each step in the approved upgrade path (just 6.2.9 and 6.4.6 in this case). The cluster boots and auto-upgrades the config at each step to reflect the firmware changes.
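Between steps I confirmed the cluster had actually come up on the expected build and re-synced before moving on. From the CLI this is roughly (verify on your release):

```
# Confirm the running firmware version after each step
get system status

# Confirm both members are back in sync before the next step
get system ha status
```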
I think the easiest and quickest way to revert takes two boots of the cluster:
"Upgrade" firmware back to the starting firmware (6.0.9 in this case) and ignore the warnings about downgrades being unsupported. We don't care what happens to the config as long as it's bootable.
'Configuration > Revisions > Revert' to revert to the original config saved earlier. (You can do this with CLI via the console port if your management interface has a problem after the downgrade.)[/ol]
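The console equivalent of that revert, as best I recall (the revision ID below is just an example; check the list output for the real one):

```
# List the saved config revisions and note the ID of the pre-upgrade save
execute revision list config

# Restore that revision from flash (replace 1 with the actual revision ID)
execute restore config flash 1
```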