
I use the Veeam Agent for Linux version 6.0.3.1221 on Debian 12.1 (Proxmox). Everything worked fine for weeks, then suddenly the following issue appeared (see the log below). Is it possible to disable the partition check? I suspect it has a problem with certain Proxmox virtual disks.

Log:

[ 16:24:37.478] <140405542733504> lsm | Probing partition table on [/dev/zd256].
[ 16:24:37.478] <140405542733504> lsm | Partition table type (on: /dev/zd256): [dos].
[ 16:24:37.478] <140405542733504> dsk | Async read mode is [true]
[ 16:24:37.478] <140405542733504> dsk | I/O statistics for '/dev/zd256': 512 bytes read unaligned (1 requests)
[ 16:24:37.478] <140405542733504> lsm | Objects relationship: link child [PartitionTable] 'dos 90909090' <-> parent [BlockDevice] '/dev/zd256' single flag: '1'.
[ 16:24:37.478] <140405542733504> lpbdevenu| Try get partition from blkid for device [/dev/zd256p1].
[ 16:24:37.479] <140405542733504> lpbdevenu| Try get partition from blkid for device [/dev/zd256p5].
[ 16:24:37.479] <140405542733504> lpbdevenu| Try get partition from blkid for device [/dev/zd256p6].
[ 16:24:37.480] <140405542733504> lsm | Partition [/dev/zd256p1]. Offset: [32256]. Size: [16105065984]. Index: [1].
[ 16:24:37.480] <140405542733504> lsm | Objects relationship: link child [Partition] 'part 1' <-> parent [PartitionTable] 'dos 90909090' single flag: '1'.
[ 16:24:37.480] <140405542733504> lsm | Objects relationship: link child [BlockDevice] '/dev/zd256p1' <-> parent [Partition] 'part 1' single flag: '1'.
[ 16:24:37.480] <140405542733504> lsm | Probing partition table on [/dev/zd256]. Failed.
[ 16:24:37.480] <140405542733504> lsm | Detect partition tables. Failed.

I have already tried reinstalling the Veeam Agent and looked for a configuration option to skip partition detection when the agent starts.
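For reference, the layout the agent is probing can be cross-checked on the host with standard tools; this is only a diagnostic sketch, using the device path from the log above:

fdisk -l /dev/zd256     # MBR partition table as the kernel sees it
blkid -p /dev/zd256     # low-level probe of partition-table / filesystem signatures
lsblk -f /dev/zd256     # partitions and filesystems currently known to the kernel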

Comments:
  • Is this a file-level or a volume-level backup? Does the device /dev/zd256 actually contain a DOS MBR partition table (with three partitions: p1, p5 and p6), or is that a misdetection? If it is a misdetection, what does the device actually contain? Is it partitioned with a GPT partition table, has it been initialized as a single whole-disk filesystem, or does it contain an LVM volume or something more complicated? Commented Oct 17, 2023 at 8:59
  • It is a volume-level backup. The partition table of the affected disk looks as follows: zd256 ├─zd256p1 ext4 1.0 cf... ├─zd256p2 └─zd256p5 swap 1 9c... These are virtual disks of a Proxmox system. p6 is a misdetection; the real partition is p2. The disk uses a DOS MBR partition table, no LVM or anything else (see the sketch after these comments). Commented Oct 17, 2023 at 13:51
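To make the misdetection visible, the MBR and extended-partition entries on the zvol can be dumped and compared with what the agent logs; again only a sketch, using the device from the question:

sfdisk -d /dev/zd256                  # dump the primary and logical partition entries
parted -s /dev/zd256 unit s print     # same layout, printed in sectors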

1 Answer


I had a similar issue. I had installed Veeam inside Proxmox to back up the first three partitions (so that I could restore Proxmox itself). All worked well until I got this message:

[error] Extend partition for device [/dev/zd64p5][230:69]
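As an aside, the [230:69] in that message appears to be the device's major:minor number; it can be matched back to a block device name, for example:

lsblk -o NAME,MAJ:MIN,SIZE,TYPE | grep '230:69'
ls -l /dev/zd64p5     # the major and minor numbers also appear in the ls output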

From that point on I couldn't configure the job or create a new one. I removed the Veeam Agent completely and started over, but got the same error.

It turned out (at least in my case) that the error was caused by a particular VM, a pfSense VM. I had replication enabled for this VM, which means a replica of it caused the same Veeam issue on the other Proxmox hosts. I verified this by disabling replication and removing the VM from its host; now Veeam works without issues.

To add some context: I had imported all my VMs from VMware. There is something about this specific imported VM's disk partitions that triggers the error in Veeam.

Some of the steps that helped me to find this VM:

I am using ZFS storage, so I listed the zvol symlinks to see which one maps to the failing device: ls -l /dev/zvol/poolname | grep zd64
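On Proxmox the zvol names themselves identify the owning guest (vm-<vmid>-disk-<n>), so the symlink that points at zd64 tells you which VM to look at. A sketch, where the pool path rpool/data and the VM ID 104 are only example values:

ls -l /dev/zvol/rpool/data | grep -w zd64
# expected: a symlink such as vm-104-disk-0 pointing at the zd64 device node
qm config 104     # show that guest's configuration to confirm which disks it uses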
