
Deploying Fortinet FortiVoice 7.2.3 on Proxmox 8.4.2: Troubleshooting Init Segfaults and Boot Loops


Target Audience: Linux System Administrators, DevOps Engineers, Virtualization Specialists
Environment: Proxmox VE 8.4.2, Ubuntu 22.04 LTS (Workstation), FortiVoice 7.2.3 for KVM deployment package
Disclaimer: This article documents the field research of an SE. It was not validated or endorsed by CSEs, TAC, or R&D, and it represents solely the opinion of the author.

Introduction

While Fortinet provides comprehensive documentation for deploying FortiVoice on vanilla KVM (FortiVoice Private Cloud KVM Deployment Guide), translating these instructions to Proxmox VE (PVE) requires careful adaptation.


A direct translation of KVM settings results in a critical failure where the appliance enters a reboot loop immediately after decompressing the root filesystem.

 

This article details the troubleshooting methodology used to identify the root cause—a combination of entropy starvation and disk bus mismatches—and provides the configuration required to stabilize the deployment.

The Symptom

When deploying the FortiVoice 7.2.3build0507 `qcow2` images on Proxmox 8.4.2 using default settings, the VM fails to boot.

 

The console displays a kernel panic triggered by the `init` process segfaulting.

 

Error Signature:

Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000

 

To diagnose the symptom, we must first gain visibility into the boot process, which is obscured by the default quiet boot parameters.

Phase 1: Troubleshooting

1. Enabling Kernel Debugging

To identify the process triggering the panic, we inject the `debug` flag into the kernel boot parameters. Since the VM cannot boot, we modify the image directly using `nbd` (Network Block Device) on a Linux workstation.

 

Mounting the Image:

# Load the NBD module
sudo modprobe nbd max_part=8

# Connect the qcow2 image
sudo qemu-nbd -c /dev/nbd0 /path/to/fortivoice-kvm.qcow2

# Verify partition layout
sudo fdisk -l /dev/nbd0

Output:

Device       Boot  Start     End  Sectors  Size  Id  Type
/dev/nbd0p1  *         1  602110   602110  294M  83  Linux

 

Modifying the Bootloader:
We mount the boot partition and modify `extlinux.conf` to append the debug flag.

sudo mount /dev/nbd0p1 /mnt/

# Append '_debug' to the default label
sudo sed -i '/DEFAULT boot/s/$/_debug/' /mnt/boot/extlinux.conf

# Verify the configuration change
cat /mnt/boot/extlinux.conf

TIMEOUT 100
DEFAULT boot_debug
DISPLAY boot.msg
LABEL boot
KERNEL /boot/vmlinuz
APPEND ro panic=5 console=ttyS0,9600n8 console=tty0 nf_conntrack.hashsize=524288 printk.time=1 initrd=/boot/rootfs.gz
LABEL boot_debug
KERNEL /boot/vmlinuz
APPEND ro panic=5 console=ttyS0,9600n8 console=tty0 debug nf_conntrack.hashsize=524288 printk.time=1 initrd=/boot/rootfs.gz

Extending the 5-second panic reboot timer (`panic=5` in the `APPEND` line) is also recommended, since it leaves more time to read the console output before the appliance resets.
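A minimal sketch of that tweak, assuming the `extlinux.conf` layout shown above and run while the partition is still mounted:

# Raise the panic reboot delay from 5 to 60 seconds on both labels
sudo sed -i 's/panic=5/panic=60/g' /mnt/boot/extlinux.conf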

 

Cleanup:

sudo umount /mnt
sudo qemu-nbd -d /dev/nbd0

 

The equivalent CLI command that reproduces the Proxmox web UI VM declaration, before the disks are added:

qm create 100 --name vanilla --memory 2048 --cores 1 --net0 e1000,bridge=vmbr0,firewall=1 --scsihw virtio-scsi-single --ostype l26 --cpu x86-64-v2-AES
boot: order=ide2;net0

 

Transfer and import the disk images:

# Import the OS disk
qm disk import 100 /var/lib/vz/template/iso/fortivoice-kvm.qcow2 localdir

# Import the data disk
qm disk import 100 /var/lib/vz/template/iso/250.qcow2 localdir

2. Capturing the Crash Trace via Serial Console

To capture the boot logs, we configure the Proxmox VM (ID 100) to redirect the serial console to a log file on the hypervisor host.

 

Proxmox Serial Redirection:

# Redirect serial output to a file for capture
qm set 100 -args "-chardev file,id=char0,mux=on,path=/tmp/serial.100.log,signal=off -serial chardev:char0"

# Disable graphical console interference for testing
qm set 100 --serial1 socket --vga serial1

# Start VM and tail the log
qm start 100 ; sleep 1 ; tail -f /tmp/serial.100.log

 

The Stack Trace:
The logs reveal the exact moment of failure. Multiple processes segfault in `libc.so.6` immediately before the panic.

[ 46.598952] ssh-keygen[1494]: segfault at 5 ip 00007f6a92301f82 sp 00007ffcf3b12d30 error 4 in libc.so.6
[ 46.706875] ssh-keygen[1495]: segfault at 5 ip 00007fd8e156af82 sp 00007ffc54899e60 error 4 in libc.so.6
[ 46.753581] init[1]: segfault at 5 ip 00007ff36e63df82 sp 00007ffee56c1900 error 4 in libc.so.6
[ 46.797530] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000

Phase 2: Root Cause Analysis

The stack trace points to `ssh-keygen` segfaulting, followed immediately by `init`.

 

1. Entropy Starvation (The Primary Crash): Security appliances like FortiVoice generate cryptographic keys during the `init` phase. If `ssh-keygen` attempts to read from `/dev/random` and finds insufficient entropy, it may hang or, in this specific implementation, crash due to a null pointer dereference when the crypto context fails to initialize.

* Vanilla KVM: Typically injects an RNG device (`virtio-rng`) by default or via the standard deployment XML.
* Proxmox: Does not add an RNG device by default.
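Where a comparable generic Linux guest is available, the starvation itself is easy to observe; this check is illustrative only, since the FortiVoice appliance does not expose a shell for it:

# Check the kernel's available entropy pool; values near zero at boot
# suggest starvation, while a virtio-rng device keeps the pool fed
cat /proc/sys/kernel/random/entropy_avail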

 

2. Disk Bus Mismatch (The Environment Error):

* Vanilla KVM: Uses `virtio-blk` (VirtIO Block), presenting disks as `/dev/vda`.
* Proxmox: Defaults to `virtio-scsi-single` (SCSI over VirtIO), presenting disks as `/dev/sda`.
* Impact: Hardcoded initialization scripts expecting `/dev/vd*` devices fail silently or incorrectly, contributing to the panic.
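A quick host-side way to see which bus a VM definition uses (illustrative):

# scsiN attachments present as /dev/sdX in the guest; virtioN as /dev/vdX
qm config 100 | grep -E '^(scsi|virtio|ide)[0-9]'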

 

Configuration Comparison

*Reference: Functional vanilla KVM command (trimmed for brevity)

# bus=virtio presents the disk as VirtIO Block; --rng adds an explicit RNG device
virt-install \
--name fve723-kvm2 \
--disk path=fortivoice-kvm.qcow2,device=disk,bus=virtio,format=qcow2 \
--rng /dev/urandom \
...

*Reference: Failing Proxmox Configuration (VM 100)
This configuration was generated via the GUI defaults.

# qm create 100 --name vanilla --memory 2048 --cores 1 --net0 virtio,bridge=vmbr0,firewall=1 --scsihw virtio-scsi-single --ostype l26 --cpu x86-64-v2-AES
boot: order=ide2;net0
scsihw: virtio-scsi-single # Mismatch: Presents as /dev/sda
# Missing: rng0 entry

Phase 3: Correcting the Deployment

To correct this and match the KVM environment, we must first ensure the disk images are imported correctly (preserving `qcow2`) before fixing the bus and RNG settings.

1. Import Images Correctly

Proxmox's default behavior when importing disk images is often to convert them to `raw` (especially on LVM-thin storage). To minimize variables and match the vendor's KVM spec, we must force the `qcow2` format.

 

Transfer Images to Proxmox Host:

scp fortivoice-kvm.qcow2 root@proxmox:/var/lib/vz/template/iso/
scp 250.qcow2 root@proxmox:/var/lib/vz/template/iso/

 

Import Images (Preserving QCOW2):
We import the disks to a file-level storage backend (e.g., `localdir` or `local`) to support `qcow2`.

# Import the OS disk
qm disk import 100 /var/lib/vz/template/iso/fortivoice-kvm.qcow2 localdir --format qcow2

# Import the data disk
qm disk import 100 /var/lib/vz/template/iso/250.qcow2 localdir --format qcow2
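To confirm the format survived the import, `qemu-img` can inspect the resulting file; the path below is a guess based on a typical file-level storage layout, so adjust it to wherever `localdir` actually lives:

# "file format: qcow2" in the output confirms no raw conversion occurred
qemu-img info /path/to/localdir/images/100/vm-100-disk-0.qcow2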

2. Correcting the Hardware Configuration

Stop the VM, overruling any pending shutdown task if one is active:

qm stop 100 -overrule-shutdown 1

Step A: Switch Disk Bus from SCSI to VirtIO

Detach the existing SCSI disks (if any were created by defaults) and attach the imported images as VirtIO block devices.

 

# Remove SCSI attachments
qm set 100 --delete scsi0
qm set 100 --delete scsi1

# Attach the imported images as VirtIO
# Note: Syntax assumes the import command above created these specific image names
qm set 100 --virtio0 localdir:100/vm-100-disk-0.qcow2
qm set 100 --virtio1 localdir:100/vm-100-disk-1.qcow2

Step B: Add the Random Number Generator

This is the critical fix for the `ssh-keygen` segfault.

qm set 100 --rng0 source=/dev/urandom
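To confirm the entry landed in the VM definition:

# Expect a line similar to: rng0: source=/dev/urandom
qm config 100 | grep rng0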

Step C: Fix Boot Order and Clean Up

Ensure the VM boots from the new VirtIO disk and restore standard console access.

 

# Set boot priority
qm set 100 --boot order=virtio0

# Restore standard display and serial settings
qm set 100 --delete args
qm set 100 --vga std
qm set 100 --serial0 socket

# Optional: remove secondary serial if not needed
qm set 100 --delete serial1

3. Verification

Start the VM:

qm start 100

Open the console. The kernel should now boot past the `init` stage, format the log disk, and present the login prompt.
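Since `serial0` was restored as a socket, the boot can also be followed from the Proxmox host (a quick sketch):

# Attach to the VM's first serial port; exit with Ctrl+O
qm terminal 100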

Post-Boot Network Check:
A working KVM deployment of FortiVoice expects a VirtIO network interface (which Proxmox usually handles correctly if VirtIO is selected). Verify the interface status in the FortiVoice CLI:

get system interface physical
diagnose netlink interface

If the interface is not detected, force the model to VirtIO in Proxmox:

qm set 100 --net0 virtio,bridge=vmbr0,firewall=1

Depending on the Proxmox network configuration, port1 might also have to be switched to a static address, as sketched below.
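A sketch of that change, assuming FortiVoice shares the FortiMail-style CLI; the address, mask, and allowaccess list are placeholders to adapt to your network:

config system interface
  edit port1
    set mode static
    set ip 192.168.1.99/24
    set allowaccess ping https ssh
  next
end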

 

The appliance is now fully operational and matches the stability of the vanilla KVM deployment.

If it ain't broke, don't fix it