Has anybody tried ceph + suspend|hibernate (for UPS power-off)? Can it cause
problems with ceph sync in the case of an async poweroff? I'm afraid to try it on
production (v2) first!
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
Hi guys,
I was wondering if anyone has done some work on saving qemu VM state
(RAM, registers, etc.) on Ceph itself?
The purpose for me would be to enable easy backups of non-cooperating
VMs - i.e. without the need to quiesce file systems, databases, etc.
I'm thinking an automated process w…
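A rough, untested sketch of what such an automated process might look like with the
libvirt Python bindings - the domain name "vm1" and the Ceph-backed path are
placeholders, and note that save()/restore() does stop and restart the guest:

# Rough sketch only: save a guest's full state (RAM, registers, device state)
# to a file that happens to live on Ceph (a CephFS mount or a mapped RBD
# image), then bring the guest straight back. "vm1" and the path are made up.
import libvirt

conn = libvirt.open("qemu:///system")       # local libvirtd
dom = conn.lookupByName("vm1")

STATE = "/mnt/cephfs/backups/vm1.state"     # assumed Ceph-backed path

dom.save(STATE)                             # writes state to the file and stops the guest
conn.restore(STATE)                         # resumes the guest from the saved state

conn.close()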
Hello,
I thought about the same mechanism a while ago. After a couple of tests I
concluded that the coredump should be done not to Ceph directly, but
to a tmpfs directory, to decrease the VM's idle time (explicitly for QEMU;
if another hypervisor is able to work with a Ceph backend and has COW-like
memory snaps…
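An untested sketch of that dump-to-tmpfs-first idea, using the libvirt Python
bindings plus the rbd CLI (the domain name, tmpfs path and the "backups" pool
are placeholders):

# Untested sketch of the idea above: take a live core dump of the guest's
# memory into tmpfs (fast, short pause), then stream it into Ceph afterwards.
import subprocess
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm1")

TMP = "/dev/shm/vm1.core"                   # tmpfs target keeps the dump fast

dom.coreDump(TMP, libvirt.VIR_DUMP_LIVE)    # dump memory while the guest keeps running

# Push the dump into Ceph at leisure, e.g. as an RBD image.
subprocess.check_call(["rbd", "import", TMP, "backups/vm1.core"])

conn.close()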
IMHO interaction between QEMU and the kernel's FREEZER (part of hibernation & cgroups) can
solve many of these problems. It can be done via QEMU host-to-guest sockets and scripts,
or embedded into the virtual hardware (simulating real "suspend" behavior).
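For illustration, an untested sketch of that host-to-guest interaction via the
QEMU guest agent channel (requires qemu-guest-agent running inside the guest;
"vm1" is a placeholder):

# Untested sketch: ask the QEMU guest agent (over its virtio-serial channel)
# to quiesce filesystems before a snapshot and thaw them afterwards.
import libvirt
import libvirt_qemu

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm1")              # placeholder domain name

libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-freeze"}', 30, 0)
try:
    pass    # take the RBD snapshot / memory dump here
finally:
    libvirt_qemu.qemuAgentCommand(dom, '{"execute": "guest-fsfreeze-thaw"}', 30, 0)

conn.close()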
Andrey Korolyov writes:
> Hello,
>
> I thought about the same mechanis…
A freezer in the guest agent? Does that exist, or is it a todo?
Dzianis Kahanovich writes:
> IMHO interaction between QEMU and the kernel's FREEZER (part of hibernation & cgroups) can
> solve many of these problems. It can be done via QEMU host-to-guest sockets and scripts,
> or embedded into the virtual hardware (simulating real "suspend" behavi…
If I understood you right, your approach is to suspend the VM via the ACPI
mechanism, then dump core, then restore it - this should take longer
than a simple coredump because of the time the guest OS needs to sleep/resume,
which seems unnecessary. A copy-on-write mechanism should reduce downtime to
very acceptable values, but u…
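To make the comparison concrete, an untested sketch of the host-side-only
variant - pause the vCPUs, dump, resume - where the downtime is just that
window, with no guest OS sleep/resume cycle ("vm1" and the path are placeholders):

# Untested sketch: downtime is just the host-side pause around the dump,
# with no ACPI sleep/resume cycle inside the guest.
import time
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm1")

t0 = time.monotonic()
dom.suspend()                               # pause vCPUs from the host
try:
    dom.coreDump("/dev/shm/vm1.core", 0)    # consistent dump while paused
finally:
    dom.resume()                            # guest continues immediately
print("guest paused for %.2f s" % (time.monotonic() - t0))

conn.close()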
Hi,
> I thought about the same mechanism a while ago. After a couple of tests I
> concluded that the coredump should be done not to Ceph directly, but
> to a tmpfs directory, to decrease the VM's idle time (explicitly for QEMU…
I see that more as an implementation detail - i.e. the state is
initially…
Hi,
> IMHO interaction between QEMU and the kernel's FREEZER (part of hibernation & cgroups) can
> solve many of these problems. It can be done via QEMU host-to-guest sockets and scripts…
That would require a cooperating VM. What I was looking at was how to do
this for non-cooperating VMs.
--
Jens Kristian Søgaard,
Hello Patrick and Ceph users,
On 5/16/13 17:02, Patrick McGarry wrote:
> Of course,
> we'd love to hear about anything you're working on. So, if you have
> notes to share about Ceph with other cloud flavors, massive storage
> clusters, or custom work, we'd treasure them appropriately.
as you alrea…
Speed is not critical even for a usual snapshot. I haven't looked into the qemu code, but
on x86 a normal fork just splits the virtual address space and marks both
"compacted" copies copy-on-write. So the "snapshot" is instantaneous (it does
not copy any real RAM, it just fixes up descriptors), but the forked copy ca…
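A toy Python illustration of that fork/copy-on-write point (no QEMU involved;
the "RAM" here is just a bytearray):

# Toy demo of the copy-on-write point: fork() gives the child an instant,
# consistent view of the parent's memory without copying any RAM up front;
# pages are only duplicated when one side writes to them.
import os

state = bytearray(b"guest RAM contents at snapshot time")

pid = os.fork()
if pid == 0:
    # Child: sees memory exactly as it was at fork() time and can write it
    # out slowly (e.g. towards Ceph) while the parent keeps running.
    with open("/tmp/snapshot.bin", "wb") as f:
        f.write(state)
    os._exit(0)
else:
    state[:5] = b"XXXXX"    # parent mutates its copy; kernel copies pages lazily
    os.waitpid(pid, 0)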
As far as I can tell - IMHO the problem is in writing the ACPI code (its own bytecode
language). After integrating this interaction into the standard suspend/resume
signals, the only remaining problem is guest suspend support, the same as for real hardware.
So, nothing special.
Jens Kristian Søgaard writes:
> Hi,
>
>> IMHO interac…
Hi,
> As far as I can tell - IMHO the problem is in writing the ACPI code (its own bytecode
> language). After integrating this interaction into the standard suspend/resume
> signals, the only remaining problem is guest suspend support, the same as for real hardware.
> So, nothing special.
You're right - it is nothing special... but t…
I have set up a configuration with 3 x MON + 2 x OSD, each on a different host,
as a test bench setup. I've written nothing to the cluster (yet).
I'm running ceph 0.61.2 (cuttlefish).
I want to discover what happens if I move an OSD from one host to another,
simulating the effect of moving a working harddrive from a dead host to a
live host, which I believe should work. So I stopped osd.0 on one host, and
copied (using scp…
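A rough sketch of that sequence of steps, spelled out as a script (untested and
not a recommended procedure; hostnames, paths and the sysvinit-style "service
ceph" commands are assumptions, and CRUSH placement may also need attention,
e.g. via "osd crush update on start"):

# Rough outline of the test only: stop the OSD, copy its data directory
# to the new host, start it there. All names and commands are assumptions.
import subprocess

OSD = "osd.0"
DATA_DIR = "/var/lib/ceph/osd/ceph-0"
OLD_HOST, NEW_HOST = "hostA", "hostB"       # placeholder hostnames

def run_on(host, command):
    """Run a shell command on a remote host over ssh."""
    subprocess.check_call(["ssh", host, command])

run_on(OLD_HOST, "service ceph stop %s" % OSD)                        # stop the daemon
run_on(OLD_HOST, "scp -r %s %s:%s" % (DATA_DIR, NEW_HOST, DATA_DIR))  # copy data dir + journal
run_on(NEW_HOST, "service ceph start %s" % OSD)                       # start it on the new host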
On 18 May 2013, at 18:20, Alex Bligh wrote:
> I want to discover what happens if I move an OSD from one host to another,
> simulating the effect of moving a working harddrive from a dead host to a
> live host, which I believe should work. So I stopped osd.0 on one host, and
> copied (using scp…