We have an HA cluster of two 401Es. For patch-level upgrades our weekly maintenance window is long enough to upgrade firmware via the GUI, desk-check the new config, run through the test suite, and, if necessary, revert by booting to the secondary partition before the window closes.
For past major upgrades I've split the cluster, gone through the recommended upgrade steps on the offline unit, and, once satisfied, swapped it into production. The cluster is usually re-established a week later.
This time we will go from 6.0.12 -> 6.2.9 -> 6.4.6. On our test system those steps were rather uneventful. Splitting the cluster might introduce more work & risk than necessary. Does anyone know if I can manipulate the partition contents during the process to end up running 6.4.6 (as upgraded from 6.2.9) but with the initial firmware (6.0.12) & config in the secondary partition instead of the last intermediate step (6.2.9)? That would allow us to keep the cluster running but with the flexibility of booting back to our solid 6.0.12 setup in one quick step.
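(For anyone not familiar with it, the flash layout can be inspected from the CLI; the command below is from memory, so double-check the exact syntax on your release:)

```
# List the flash partitions, the firmware image stored in each, and which one is active
diagnose sys flash list
```

Whatever sits in the non-active slot is what a single reboot falls back to, which is why I'd like 6.0.12 there instead of 6.2.9.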
The test system is not a cluster. I thought about breaking into the boot sequence on the console to force the new firmware into the active partition via TFTP and then rebooting with that partition still active, but I don't see a point in trying that with only a standalone test system. And I don't see anything else in the docs or online. Wondering if someone has a technique to accomplish this they'd like to share. School me, please.
...Fred
I don't think what you're describing is possible. When you upgrade to 6.4.6, the previous config from 6.2.9 needs to be in the partition; otherwise it can't convert/upgrade the config to fit 6.4.6.
If you have a TFTP server connected to the cluster, in the worst case you just need to flush the drive, upload the 6.0.12 image, and then upload the saved config to recover the original environment. I wouldn't imagine it takes more than 15 minutes.
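From the CLI that recovery is essentially two commands; the file names and TFTP address below are placeholders, and the exact syntax can vary a bit between releases:

```
# Pull the 6.0.12 image from the TFTP server and install it (the unit reboots on its own)
execute restore image tftp FGT_401E-v6.0.12.out 192.168.1.10

# Once it is back up (a downgrade normally resets the config), reload the saved 6.0.12 config
execute restore config tftp backup-6.0.12.conf 192.168.1.10
```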
Yeah, I shouldn't have mentioned that idea. I don't think it'll work either. That's why I'm not even bothering to try it on a standalone. But if someone's proven us wrong I'd love to know about it.
My objective is to find the most efficient way to manipulate the partition contents so I end up with the starting firmware & config in one and the target firmware & config in the other. It's not time to debate whether it's worth doing. I'll decide that (vs splitting the cluster) once the steps involved are known.
I presume I can do it by downgrading from 6.4.6 to 6.0.12 and reloading the 6.0.12 config to end up with the two partitions the way we want. That's probably more time and reboots than are worthwhile. I'm hoping to find a method with less brute force and more elegance. Every now and again you learn about commands and parameters on the forum you didn't know about. I'm hoping for that.
...Fred
Just posting my results in case this applies to anyone else. As predicted, I could not find a way to manipulate the partitions directly. The brute force approach I outlined above did work though. We got an extra long maintenance window so I tried this out. The real purpose of the test was to see how quickly we could revert to the original firmware & config after a multi-step upgrade. When doing a multi-step upgrade that appears straightforward it seems overkill to split the cluster. But we need to know how to quickly revert if we're not ready for production when nearing the end of the maintenance window. Of course we always keep config backups handy with a TFTP server too in case of disaster.
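(Pulling those backups is just one command per unit before the window opens; the file name and server address here are placeholders, so adjust for your environment:)

```
# Save the current running config to the TFTP server before touching any firmware
execute backup config tftp pre-upgrade-6.0.12.conf 192.168.1.10
```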
So first I upgraded as usual, stepping through 6.0.12 -> 6.2.9 -> 6.4.6.
I think the easiest and quickest way to revert takes two boots of the cluster: loading the 6.0.12 image is the first boot (the downgrade drops the running config), and restoring the saved 6.0.12 config is the second.
Now the cluster is in sync with the original firmware/config (6.0.12) in the active partition and the upgraded firmware/config (6.4.6) in the other. You can boot to switch partitions (e.g., at the next testing window) as outlined here: Technical Tip: Selecting an alternate firmware for the next reboot (fortinet.com)
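(If memory serves, the tip boils down to the commands below, but check the article for the exact syntax on your release:)

```
# Flag the other flash partition for the next boot, then reboot to switch to it
# ('secondary' assumes the image you want is in slot 2; use 'primary' otherwise)
execute set-next-reboot secondary
execute reboot
```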
I hope this helps someone.
...Fred