-Original Message-
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Sunday, January 27, 2019 7:29 PM
To: users; cloudstack-fan <cloudstack-...@protonmail.com>
Cc: dev
Subject: Re: Snapshots on KVM corrupting disk images

Well, guys. I dived into the CS agent scripts that make volume snapshots and
found there is no code for suspend/resume and no code for a qemu-agent
fsfreeze/fsthaw call. I don't see any blockers to adding that code and will
try to add it in the nearest days. If tests go well, I'll publish the PR,
which I...
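For anyone who wants to prototype that ordering outside the agent, here is a
minimal sketch using the libvirt Python bindings. It is not the CloudStack
agent code: the snapshot_with_fsfreeze helper, the domain name, and the
snapshot callable are all placeholders, and it assumes qemu-guest-agent is
installed and running inside the guest.

    import libvirt

    def snapshot_with_fsfreeze(domain_name, take_snapshot):
        # take_snapshot is a placeholder callable that performs whatever the
        # actual volume-snapshot step is; it receives the libvirt domain.
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByName(domain_name)
            frozen = False
            try:
                dom.fsFreeze()          # sync + freeze all guest filesystems
                frozen = True
            except libvirt.libvirtError as err:
                # No guest agent (or it timed out): proceed with a merely
                # crash-consistent snapshot instead of failing outright.
                print("fsfreeze unavailable, continuing unfrozen:", err)
            try:
                take_snapshot(dom)
            finally:
                if frozen:
                    dom.fsThaw()        # always thaw, even if snapshot failed
        finally:
            conn.close()

The try/finally nesting matters: the thaw has to run even when the snapshot
step fails, otherwise guest I/O stays frozen. The same sequence can be
exercised by hand with virsh domfsfreeze / virsh domfsthaw.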
...we had Storage Array level snapshots in place as a safety net...

Thanks!!
Sean
> Hello,
>
> We are using NFS storage. It is actually native NFS mounts on a NetApp
> storage system. We haven't seen those log entries, but we also don't
> always know when a VM gets corrupted... When we finally get a call that a
> VM is having issues, we've found that it was corrupted a while ago.

-Original Message-
From: cloudstack-fan [mailto:cloudstack-...@protonmail.com.INVALID]
Sent: Sunday, January 27, 2019 1:45 PM
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: Re: Snapshots on KVM corrupting disk images

Hello Sean,

It seems that you've encountered the same issue that I've been facing
during the last 5-6 years of using...
Sent: ...2019 4:06 PM
To: dev@cloudstack.apache.org
Subject: Re: Snapshots on KVM corrupting disk images

Hi Sean,

The (recurring) volume snapshot on running VMs should be disabled in
CloudStack. According to some discussions (for example
https://bugzilla.redhat.com/show_bug.cgi?id=920020), the image might be
corrupted due to the concurrent read/write operations in a volume snapshot
(by qemu-img snapshot)...
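As a point of comparison, libvirt can take the disk-only snapshot itself, so
that only the QEMU process ever writes to the in-use qcow2 file. This is not
what the agent discussed above does; it is only a sketch under assumed names
(domain "vm01", disk target "vda", an overlay path) using the libvirt Python
bindings.

    import libvirt

    SNAPSHOT_XML = """
    <domainsnapshot>
      <name>vol-snap-demo</name>
      <disks>
        <disk name='vda' snapshot='external'>
          <source file='/var/lib/libvirt/images/vm01-overlay.qcow2'/>
        </disk>
      </disks>
    </domainsnapshot>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("vm01")

    flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
             | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC)
    try:
        # Ask QEMU (via the guest agent) to quiesce filesystems first.
        dom.snapshotCreateXML(SNAPSHOT_XML,
                              flags | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)
    except libvirt.libvirtError:
        # No guest agent: fall back to a crash-consistent snapshot.
        dom.snapshotCreateXML(SNAPSHOT_XML, flags)

    conn.close()

The command-line equivalent is virsh snapshot-create-as with --disk-only,
--atomic and (where a guest agent is available) --quiesce.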
I've run into situations where CLOUDSTACK+KVM+QCOW2+SNAPSHOTS led to
corrupted images, mostly with 4.3 and NFS, but I thought that CS stops the
VM just before it takes the snapshot. At least the VM's behaviour when a VM
snapshot is created looks like that is what happens (it freezes). That's why
it looks strange. But...
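One way to test that assumption is to watch the libvirt domain state while a
snapshot job runs: a genuinely suspended VM reports VIR_DOMAIN_PAUSED,
whereas a guest that only feels frozen because its I/O stalls still reports
VIR_DOMAIN_RUNNING. A rough sketch (the domain name is a placeholder):

    import time
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",
    }

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("vm01")

    # Poll for a minute while the snapshot is taken in another session.
    for _ in range(60):
        state, reason = dom.state()
        print(time.strftime("%H:%M:%S"), STATE_NAMES.get(state, state), reason)
        time.sleep(1)

    conn.close()

virsh domstate <vm> gives the same answer interactively.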
Sent: January 22, 2019 10:42 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Snapshots on KVM corrupting disk images

Sean,

What underlying primary storage are you using, and how is it being utilized
by ACS (e.g. NFS, shared mount, et al.)?

- Si...
Subject: Snapshots on KVM corrupting disk images

Hi all,

We had some instances where VM disks became corrupted when using KVM
snapshots. We are running CloudStack 4.9.3 with KVM on CentOS 7.

The first time was when someone mass-enabled scheduled snapshots on a large
number of VMs and secondary storage filled up. We had to restore...
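Corruption like this is often only noticed once a VM misbehaves, so a
periodic offline sweep with qemu-img check can surface it earlier. A rough
sketch follows; the storage path is a placeholder, and images that a running
VM is actively writing to should be skipped or checked from a snapshot copy,
since checking an in-use image can report spurious errors.

    import glob
    import json
    import subprocess

    for path in sorted(glob.glob("/mnt/primary/*.qcow2")):
        proc = subprocess.run(
            ["qemu-img", "check", "--output=json", "-f", "qcow2", path],
            capture_output=True, text=True)
        try:
            report = json.loads(proc.stdout)
        except json.JSONDecodeError:
            print(f"{path}: could not parse check output (rc={proc.returncode})")
            continue
        # The corruption/leak counters are only present when non-zero.
        corruptions = report.get("corruptions", 0)
        leaks = report.get("leaks", 0)
        status = "OK" if corruptions == 0 else "CORRUPTED"
        print(f"{path}: {status} (corruptions={corruptions}, leaks={leaks})")

qemu-img check exits non-zero when it finds problems, so the sketch parses
whatever JSON was printed rather than trusting the return code alone.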