Hello,
On 07.05.25 at 17:27, Krzysztof Hajdamowicz wrote:
> A few months ago, I decided to experiment with bcachefs on my PVE 8.x home
> server.
> So far, the experience has been positive, although it requires building the
> latest kernel from Linus's source tree, which can be a bit bleeding edge.
On 13.05.25 at 12:56, Fiona Ebner wrote:
> The pve-lxc-syscalld systemd service currently uses /run/pve as a
> runtime directory. This means that when the service is restarted, the
> directory will be recreated. But the /run/pve directory is not just
> used as the runtime directory of this service
I moved the section on creating swap partitions on ZFS between
the "Disk Health Monitoring" section and the "Logical Volume
Manager" section. I also incorporated all suggestions Fiona
made, except one: no comma was put before "because", since the
subordinate clause it introduces is restrictive. One
On 13.05.25 at 12:56, Fiona Ebner wrote:
> When the service is restarted, the directory will be recreated. The
> issue is that the /run/pve directory is not just used as the runtime
> directory of this service, but also for other things, e.g. storage
> tunnel and mtunnel sockets and container stde
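For reference, a minimal sketch of what a unit-level mitigation could look
like, assuming the service declares /run/pve via systemd's
RuntimeDirectory= (an illustration, not the proposed fix):

    [Service]
    RuntimeDirectory=pve
    # RuntimeDirectoryPreserve= is a standard systemd.exec option; with
    # 'yes', systemd keeps the directory across restarts instead of
    # removing and recreating it, so sockets placed there by other
    # components would survive a restart of this service.
    RuntimeDirectoryPreserve=yes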
On 13.05.25 at 15:48, Daniel Kral wrote:
> On 5/13/25 10:03, Michael Köppl wrote:
>> The value in $conf->{opt} is not necessarily a volume ID. To ensure that
>> a valid volume ID is used, it is retrieved by calling parse_volume().
>>
>> Co-authored-by: Stefan Hrdlicka
>
> As already discussed in
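For context, the parse_volume() pattern being referenced, as a minimal
sketch (assuming the usual pve-container config API; $opt and $conf as in
the patch):

    # Sketch of the pattern under discussion: do not treat the raw config
    # value as a volume ID, but parse it first. parse_volume() returns a
    # mountpoint hash for the given config key; its 'volume' entry is the
    # actual volume ID.
    my $mp = PVE::LXC::Config->parse_volume($opt, $conf->{$opt});
    my $volid = $mp->{volume};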
Re Hannes,
I took some time today to test the new implementation, but I believe there are
still some issues with it.
Here's what I did:
1. Create IPAM entry with Nautobot plugin
2. Create a zone of type simple with Nautobot as an IPAM
3. Create a VNet in said zone, with a /
>>I'll continue some testing with the dir part (I thought LVM might be
>>more interesting to run into performance issues/.. ;))
>>
>>One thing I already noticed is that snapshot images are listed like
>>regular images:
>>$ pvesm list extsnapdir
>>Volid
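For illustration, the kind of filtering that would hide such entries,
assuming the snap-<snapname>-<volname> naming scheme from the patch
(sketch only, not the actual code):

    # Sketch only: skip external snapshot volumes when listing images,
    # matching on the snap- prefix in the volume name.
    my @visible = grep { $_->{volid} !~ m{/snap-} } @$images;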
> Alexandre Derumier via pve-devel wrote on
> 22.04.2025 13:51 CEST:
> add a snapext option to enable the feature
>
> When a snapshot is taken, the current volume is renamed to the snap
> volname and a new current image is created with the snap volume as its
> backing file
>
> Signed-off-by: Alex
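A rough sketch of that flow for a directory storage, with made-up paths
and names (the actual patch will differ in details):

    # Illustration of the described snapshot flow, not the patch itself:
    # the current image becomes the snapshot volume, then a fresh current
    # image is created with it as backing file.
    use PVE::Tools;

    my ($dir, $snap, $vol) = ('/mnt/extsnapdir/images/100', 'test1', 'vm-100-disk-0.qcow2');

    rename("$dir/$vol", "$dir/snap-$snap-$vol")
        or die "rename failed: $!\n";
    PVE::Tools::run_command([
        'qemu-img', 'create', '-f', 'qcow2',
        '-b', "snap-$snap-$vol", '-F', 'qcow2', # backing path relative to the new image
        "$dir/$vol",
    ]);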
Hi,
I'm trying to test specific aspects of the storage plugin API,
and I'm not sure how to verify whether my function
volume_rollback_is_possible
correctly sets up the blockers list.
I've checked the REST API documentation:
https://pve.proxmox.com/pve-docs/api-viewer/index.ht
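One way to poke at this without going through the REST API, sketched under
the assumption that the PVE::Storage wrapper takes an optional blockers
array ref (check the signature in your checkout; storage ID, volid and
snapshot name are placeholders):

    #!/usr/bin/perl
    # Minimal sketch for exercising the blockers logic directly.
    use strict;
    use warnings;
    use PVE::Storage;

    my $cfg = PVE::Storage::config();
    my $blockers = [];
    eval {
        PVE::Storage::volume_rollback_is_possible(
            $cfg, 'extsnapdir:100/vm-100-disk-0.qcow2', 'test1', $blockers);
    };
    print "error: $@" if $@;
    print "blockers: @$blockers\n";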
On 11.04.25 at 12:01, Dominik Csapak wrote:
> The consent window will try to size itself according to the max/min
> constraints set, but those might be too large for some viewport sizes.
>
> Since it's not possible to set those to relative viewport sizes (ExtJS
> does its own layout, so we can't
> DERUMIER, Alexandre wrote on 14.05.2025
> 12:45 CEST:
>
>
> >>removed snapshot test2 while VM is running:
> >>
> >>delete qemu external snapshot
> >>stream intermediate snapshot test2 to current
> >stream-drive-scsi1: transferred 309.0 MiB of 32.0 GiB (0.94%) in 0s
> >>stream-dri
>>removed snapshot test2 while VM is running:
>>
>>delete qemu external snapshot
>>stream intermediate snapshot test2 to current
>stream-drive-scsi1: transferred 309.0 MiB of 32.0 GiB (0.94%) in 0s
>>stream-drive-scsi1: stream-job finished
>>delete old /dev/extsnap/snap-test2-
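For reference, roughly what drives such a stream job at the QMP level,
sketched with made-up node names (the real code path may differ):

    # Sketch: removing an intermediate external snapshot while the VM
    # runs means streaming its data into the image above it; afterwards
    # the old snapshot volume can be deleted. mon_cmd() is the usual QMP
    # helper; device, node and job names here are illustrative.
    use PVE::QemuServer::Monitor;

    # base-node: the image below the snapshot being removed; everything
    # above it (including snap-test2's data) is pulled into the device.
    PVE::QemuServer::Monitor::mon_cmd(
        $vmid, 'block-stream',
        'job-id'    => 'stream-drive-scsi1',
        device      => 'drive-scsi1',
        'base-node' => 'snap-test1-node',
    );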
> Fiona Ebner wrote on 14.05.2025 11:31 CEST:
>
>
> On 14.05.25 at 11:06, Fabian Grünbichler wrote:
> >> Fiona Ebner wrote on 14.05.2025 10:22 CEST:
> >>
> >>
> >> On 13.05.25 at 15:31, Fiona Ebner wrote:
> >>> Signed-off-by: Fiona Ebner
> >>> ---
> >>> src/PVE/Stora
> Fiona Ebner wrote on 14.05.2025 10:22 CEST:
>
>
> On 13.05.25 at 15:31, Fiona Ebner wrote:
> > Signed-off-by: Fiona Ebner
> > ---
> > src/PVE/Storage/RBDPlugin.pm | 6 ++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storag
As commit e79ab52 ("Fix #2346: rbd storage shows wrong %-usage")
mentions, Ceph provides a 'stored' field since version 14.2.2 as an
approximation of the actually stored amount of user data. However, the
commit forgot to update the accompanying comment.
The 'bytes_used' field refers to the raw usage
As reported in the enterprise support, the usage percentage presented
by Proxmox VE can be quite different from what Ceph itself shows when
compression is used on the pool. The reason is that Proxmox VE used
the 'stored' value as the basis for the calculation, which is the amount
of logically stored user data
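To make the difference concrete, a quick sketch with made-up numbers,
using the field semantics from the commit messages (not the plugin's
actual code):

    # Sketch with made-up numbers: 'stored' is logical user data,
    # 'bytes_used' is what the pool actually consumes. With pool
    # compression the two diverge, so the percentage depends heavily
    # on which field is used as the basis.
    my $stats = {
        stored     => 800 * 1024**3,   # logical user data
        bytes_used => 500 * 1024**3,   # raw usage after compression
        max_avail  => 1500 * 1024**3,  # projected free space
    };
    printf "stored-based:     %.1f%%\n",
        100 * $stats->{stored} / ($stats->{stored} + $stats->{max_avail});
    printf "bytes_used-based: %.1f%%\n",
        100 * $stats->{bytes_used} / ($stats->{bytes_used} + $stats->{max_avail});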
On 14.05.25 at 11:06, Fabian Grünbichler wrote:
>> Fiona Ebner wrote on 14.05.2025 10:22 CEST:
>>
>>
>> On 13.05.25 at 15:31, Fiona Ebner wrote:
>>> Signed-off-by: Fiona Ebner
>>> ---
>>> src/PVE/Storage/RBDPlugin.pm | 6 ++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --gi
On 13.05.25 at 15:31, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner
> ---
> src/PVE/Storage/RBDPlugin.pm | 6 ++
> 1 file changed, 6 insertions(+)
>
> diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
> index 154fa00..b56f8e4 100644
> --- a/src/PVE/Storage/RBDPlugin
OK, I have found the problem. The call

    eval { lvm_qcow2_format($class, $storeid, $scfg, $name, $fmt,
        $backing_snap, $size) };

expects the size in bytes, but vdisk_alloc is sending KiB (and
volume_resize is sending bytes, for example...).
So it's creating a really big qcow2
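A sketch of the kind of normalization that avoids this class of bug,
using the caller names from the mail ($size_kib is a hypothetical
variable; the actual fix may look different):

    # vdisk_alloc passes the size in KiB, while lvm_qcow2_format()
    # (and volume_resize) work in bytes; converting once at the call
    # site avoids creating a qcow2 that is 1024 times too large.
    my $size_bytes = $size_kib * 1024;
    eval {
        lvm_qcow2_format($class, $storeid, $scfg, $name, $fmt,
            $backing_snap, $size_bytes);
    };
    die $@ if $@;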