[ceph-users] Ceph + suspend|hibernate?

2013-05-18 Thread Dzianis Kahanovich
Has anybody tried Ceph with suspend/hibernate (for UPS power-off)? Can it cause
problems with Ceph sync in the case of an asynchronous power-off? I'm afraid to try
it on production (v2) first!
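
To be concrete about what I would script around such a power event - just a
sketch, not something I have run in production, and it assumes admin access to
the cluster from the node plus pm-utils for the actual suspend:

#!/bin/sh
# Hypothetical pre-suspend hook for a UPS event.
ceph osd set noout     # don't let the cluster mark sleeping OSDs out and rebalance
sync                   # flush dirty pages before the node goes down
pm-suspend             # pm-hibernate for hibernation; "systemctl suspend" on systemd
# pm-suspend returns only after resume, so clear the flag here:
ceph osd unset noout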

-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph and Qemu

2013-05-18 Thread Jens Kristian Søgaard

Hi guys,

I was wondering if anyone has done some work on saving qemu VM state 
(RAM, registers, etc.) on Ceph itself?


The purpose for me would be to enable easy backups of non-cooperating 
VMs - i.e. without the need to quiesce file systems, databases, etc.


I'm thinking an automated process which pauses the VM, flushes the Ceph 
writeback cache (if any), snapshots the rbd image and saves the VM state 
on Ceph as well. I imagine this should only take a very short amount of 
time, and then the VM can be unpaused and continue with minimal 
interruption.


The new Ceph export command could then be used to store that backup on a 
secondary Ceph cluster or on simple storage.
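
To make it concrete, here is a rough sketch of the kind of script I have in
mind (libvirt-based; the domain name, pool/image and paths below are
placeholders, the writeback-cache flush is hand-waved, and I have not verified
that a memory-only dump of a paused guest plus an rbd snapshot really gives a
consistent pair):

#!/bin/sh
# Sketch only: pause, snapshot the rbd image, save the VM state, resume.
VM=myvm
IMG=rbd/myvm-disk
SNAP=backup-$(date +%Y%m%d-%H%M%S)

virsh suspend "$VM"                                   # pause the vCPUs
rbd snap create "$IMG@$SNAP"                          # point-in-time snapshot of the disk
virsh dump --memory-only "$VM" "/tmp/$VM-$SNAP.core"  # RAM/registers to a local file
virsh resume "$VM"                                    # let the guest continue

# The slow parts can then run while the guest is already back up:
rados -p backups put "$VM-$SNAP.core" "/tmp/$VM-$SNAP.core"
# rbd export "$IMG@$SNAP" -   (or export-diff on recent releases) to ship the disk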


--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsulting.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Andrey Korolyov
Hello,

I thought about the same mechanism a while ago. After a couple of tests I
concluded that the coredump should not be written to Ceph directly, but to a
tmpfs directory, to reduce the VM's idle time (this is specific to QEMU; if
another hypervisor can work with a Ceph backend and has a COW-like memory
snapshotting mechanism, the time it takes to flush the coredump does not
matter). Anyway, with QEMU a relatively simple shell script should do the job.
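
In sketch form (the domain, pool and paths are made up, and /dev/shm stands in
for any tmpfs mount):

# Two-stage sketch: dump to tmpfs while the guest is paused, push the dump
# into Ceph only after the guest has been resumed.
virsh suspend vm1
virsh dump --memory-only vm1 /dev/shm/vm1.core
rbd snap create rbd/vm1-disk@$(date +%s)
virsh resume vm1
rados -p vmstate put vm1.core /dev/shm/vm1.core && rm -f /dev/shm/vm1.core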

On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
 wrote:
> Hi guys,
>
> I was wondering if anyone has done some work on saving qemu VM state (RAM,
> registers, etc.) on Ceph itself?
>
> The purpose for me would be to enable easy backups of non-cooperating VMs -
> i.e. without the need to quiesce file systems, databases, etc.
>
> I'm thinking an automated process which pauses the VM, flushes the Ceph
> writeback cache (if any), snapshots the rbd image and saves the VM state on
> Ceph as well. I imagine this should only take a very short amount of time,
> and then the VM can be unpaused and continue with minimal interruption.
>
> The new Ceph export command could then be used to store that backup on a
> secondary Ceph cluster or on simple storage.
>
> --
> Jens Kristian Søgaard, Mermaid Consulting ApS,
> j...@mermaidconsulting.dk,
> http://www.mermaidconsulting.com/
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Dzianis Kahanovich
IMHO, interaction between QEMU and the kernel's FREEZER (part of hibernation and
cgroups) could solve many of these problems. It could be done via QEMU host-to-guest
sockets and scripts, or embedded into the virtual hardware (simulating real "suspend"
behavior).
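
The closest existing host-to-guest channel I know of is the qemu guest agent's
filesystem freeze/thaw commands - not the kernel FREEZER itself, but the same
idea of asking the guest to quiesce; a sketch, assuming the agent is installed
in the guest and with placeholder names:

# Ask the guest to freeze its filesystems, snapshot the disk, then thaw.
virsh qemu-agent-command vm1 '{"execute":"guest-fsfreeze-freeze"}'
rbd snap create rbd/vm1-disk@frozen-$(date +%s)
virsh qemu-agent-command vm1 '{"execute":"guest-fsfreeze-thaw"}'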

Andrey Korolyov writes:
> Hello,
> 
> I`ve thought of the same mechanism a lot ago. After couple of tests I
> have concluded that coredump should be done not to Ceph directly, but
> to the tmpfs catalog to decrease VM` idle timeout(explicitly for QEMU,
> if other hypervisor able to work with Ceph backend and has COW-like
> memory snapshotting mechanism, time of the 'flush' of the coredump
> does not matter). Anyway, with QEMU relatively simple shell script
> should do the thing.
> 
> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>  wrote:
>> Hi guys,
>>
>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>> registers, etc.) on Ceph itself?
>>
>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>> i.e. without the need to quiesce file systems, databases, etc.
>>
>> I'm thinking an automated process which pauses the VM, flushes the Ceph
>> writeback cache (if any), snapshots the rbd image and saves the VM state on
>> Ceph as well. I imagine this should only take a very short amount of time,
>> and then the VM can be unpaused and continue with minimal interruption.
>>
>> The new Ceph export command could then be used to store that backup on a
>> secondary Ceph cluster or on simple storage.
>>
>> --
>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>> j...@mermaidconsulting.dk,
>> http://www.mermaidconsulting.com/
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Dzianis Kahanovich
A freezer in the guest agent? Does that already exist, or is it still a to-do?

Dzianis Kahanovich writes:
> IMHO interaction QEMU & kernel's FREEZER (part of hibernation & cgroups) can
> solve many of problems. It can be done via QEMU host-2-guest sockets and 
> scripts
> or embedded into virtual hardware (simulate real "suspend" behavior).
> 
> Andrey Korolyov writes:
>> Hello,
>>
>> I`ve thought of the same mechanism a lot ago. After couple of tests I
>> have concluded that coredump should be done not to Ceph directly, but
>> to the tmpfs catalog to decrease VM` idle timeout(explicitly for QEMU,
>> if other hypervisor able to work with Ceph backend and has COW-like
>> memory snapshotting mechanism, time of the 'flush' of the coredump
>> does not matter). Anyway, with QEMU relatively simple shell script
>> should do the thing.
>>
>> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>>  wrote:
>>> Hi guys,
>>>
>>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>>> registers, etc.) on Ceph itself?
>>>
>>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>>> i.e. without the need to quiesce file systems, databases, etc.
>>>
>>> I'm thinking an automated process which pauses the VM, flushes the Ceph
>>> writeback cache (if any), snapshots the rbd image and saves the VM state on
>>> Ceph as well. I imagine this should only take a very short amount of time,
>>> and then the VM can be unpaused and continue with minimal interruption.
>>>
>>> The new Ceph export command could then be used to store that backup on a
>>> secondary Ceph cluster or on simple storage.
>>>
>>> --
>>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>>> j...@mermaidconsulting.dk,
>>> http://www.mermaidconsulting.com/
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
> 
> 


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Andrey Korolyov
If I understood you correctly, your approach is to suspend the VM via the ACPI
mechanism, then dump core, then restore it - that should take longer than a
plain coredump because of the time the guest OS needs to sleep and resume,
which seems unnecessary. A copy-on-write mechanism should reduce the downtime
to very acceptable values, but unfortunately I have not heard of such a
mechanism outside of academic projects.

On Sat, May 18, 2013 at 5:48 PM, Dzianis Kahanovich
 wrote:
> IMHO interaction QEMU & kernel's FREEZER (part of hibernation & cgroups) can
> solve many of problems. It can be done via QEMU host-2-guest sockets and 
> scripts
> or embedded into virtual hardware (simulate real "suspend" behavior).
>
>> Andrey Korolyov writes:
>> Hello,
>>
>> I`ve thought of the same mechanism a lot ago. After couple of tests I
>> have concluded that coredump should be done not to Ceph directly, but
>> to the tmpfs catalog to decrease VM` idle timeout(explicitly for QEMU,
>> if other hypervisor able to work with Ceph backend and has COW-like
>> memory snapshotting mechanism, time of the 'flush' of the coredump
>> does not matter). Anyway, with QEMU relatively simple shell script
>> should do the thing.
>>
>> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>>  wrote:
>>> Hi guys,
>>>
>>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>>> registers, etc.) on Ceph itself?
>>>
>>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>>> i.e. without the need to quiesce file systems, databases, etc.
>>>
>>> I'm thinking an automated process which pauses the VM, flushes the Ceph
>>> writeback cache (if any), snapshots the rbd image and saves the VM state on
>>> Ceph as well. I imagine this should only take a very short amount of time,
>>> and then the VM can be unpaused and continue with minimal interruption.
>>>
>>> The new Ceph export command could then be used to store that backup on a
>>> secondary Ceph cluster or on simple storage.
>>>
>>> --
>>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>>> j...@mermaidconsulting.dk,
>>> http://www.mermaidconsulting.com/
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Jens Kristian Søgaard

Hi,


> I`ve thought of the same mechanism a lot ago. After couple of tests I
> have concluded that coredump should be done not to Ceph directly, but
> to the tmpfs catalog to decrease VM` idle timeout(explicitly for QEMU,


I see that more as an implementation detail - i.e. the state is initially
saved to RAM (or tmpfs/ramdisk) and then afterwards committed to Ceph storage.


The part I was interested in was whether someone had looked at a way to store
the disk image together with the state as one unit in Ceph. That would make it
easier to manage and back up.


As far as I have understood it, it is not immediately possible to use 
qcow2 on top of Ceph with qemu-kvm and librbd. I don't see why this 
should not be possible in theory - and that would make it easy to store 
the state alongside the disk image.


Am I wrong in assuming that it is not possible to layer qcow2 on top of 
rbd with qemu-kvm and librbd?
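
For reference, the raw-over-rbd form I am comparing against (as in the
Ceph/QEMU documentation; pool and image names are just examples):

qemu-img create -f raw rbd:data/vm1-disk 10G
qemu-system-x86_64 ... \
    -drive format=raw,file=rbd:data/vm1-disk,cache=writeback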


--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsulting.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Jens Kristian Søgaard

Hi,


> IMHO interaction QEMU & kernel's FREEZER (part of hibernation & cgroups) can
> solve many of problems. It can be done via QEMU host-2-guest sockets and scripts


That would require a cooperating VM. What I was looking at was how to do 
this for non-cooperating VMs.


--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsulting.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph and CloudStack? Let us know!

2013-05-18 Thread Constantinos Venetsanopoulos

Hello Patrick and Ceph users,

On 5/16/13 17:02, Patrick McGarry wrote:
> Of course,
> we'd love to hear about anything you're working on.  So, if you have
> notes to share about Ceph with other cloud flavors, massive storage
> clusters, or custom work, we'd treasure them appropriately.


as you already know from an older post on the Ceph blog [1] we have been
evaluating RADOS for use in our public cloud service [2], powered by the
open source cloud software Synnefo [3]. When writing the post, we had
already fully integrated RADOS in Synnefo (VM disks, Images, Files) and
we were in the process of moving everything into production.

Indeed, we are now happy to inform you that the deployment of RADOS
into our production environment has been completed successfully and
since last month [4] our users are storing their files and images on
RADOS and also have the choice of spawning VMs with their disks on
RADOS too; in seconds with thin cloning.

We are currently experimenting with thin disk snapshotting and hope to
have the functionality integrated in one of the next Synnefo versions.
At the same time, we are expanding our production RADOS cluster as
demand rises, with a plan to hit 1PB of raw storage.
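
(For context: Archipelago does its own thin cloning directly on top of RADOS;
the nearest stock-Ceph analogue, for anyone wanting to reproduce the same
effect with plain RBD, is snapshot layering - example names below, and the base
image must be a format 2 image:)

rbd snap create images/debian-base@golden
rbd snap protect images/debian-base@golden
rbd clone images/debian-base@golden vms/vm1-disk   # copy-on-write clone, created in seconds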

Keep up the good work,
Kind Regards,
Constantinos


[1] http://ceph.com/community/ceph-comes-to-synnefo-and-ganeti/
[2] http://okeanos.grnet.gr
[3] http://synnefo.org
[4] https://okeanos.grnet.gr/blog/2013/04/04/introducing-archipelago/


> Feel free to just reply to this email, send a message to
> commun...@inktank.com, message 'scuttlemonkey' on irc.oftc.net, or tie
> a note to our ip-over-carrier-pigeon network.  Thanks, and happy
> Ceph-ing.
>
> Best Regards,
>
> Patrick McGarry
> Director, Community || Inktank
>
> http://ceph.com  ||  http://inktank.com
> @scuttlemonkey || @ceph || @inktank


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Dzianis Kahanovich
Speed is not critical even for a usual snapshot. I haven't looked into the qemu
code, but a normal fork on x86 works by splitting the virtual address space and
marking both "compacted" copies copy-on-write. So the "snapshot" is instantaneous
(no real RAM is copied, only descriptors are fixed up), while the forked copy can
be written out at any speed. However, an untuned Linux guest, for example, is very
sensitive to time drift, so live migration only behaves well with nohz=off,
clock=acpi_pm, etc. I also saw frequent VM reboots on Ceph failures (1 out of 3
nodes down with 2/1 or 3/2 replication size) - I have tuned this, but have not
found a final solution. Windows guests usually survive random Ceph freezes. A
default Linux guest makes heavy use of very precise high-resolution timers and
schedulers, so the best workaround is "guest cooperation" ;) - letting the guest
freeze itself.
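
For what it's worth, the guest-side boot-parameter tuning I mean looks roughly
like this (Debian-style grub config assumed; recent kernels spell the clock
option "clocksource=" rather than "clock="):

# In the guest: append the options to the kernel command line, then rebuild grub.
sed -i 's/^GRUB_CMDLINE_LINUX="/&nohz=off clocksource=acpi_pm /' /etc/default/grub
update-grub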

Andrey Korolyov writes:
> If I understood you right, your approach is a suspend VM via ACPI
> mechanism, then dump core, then restore it - this should be longer
> than simple coredump due timings for guest OS to sleep/resume, which
> seems unnecessary. Copy-on-write mechanism should reduce downtime to
> very acceptable values but unfortunately I do not heard of such
> mechanism except academic projects.
> 
> On Sat, May 18, 2013 at 5:48 PM, Dzianis Kahanovich
>  wrote:
>> IMHO interaction QEMU & kernel's FREEZER (part of hibernation & cgroups) can
>> solve many of problems. It can be done via QEMU host-2-guest sockets and 
>> scripts
>> or embedded into virtual hardware (simulate real "suspend" behavior).
>>
>> Andrey Korolyov writes:
>>> Hello,
>>>
>>> I`ve thought of the same mechanism a lot ago. After couple of tests I
>>> have concluded that coredump should be done not to Ceph directly, but
>>> to the tmpfs catalog to decrease VM` idle timeout(explicitly for QEMU,
>>> if other hypervisor able to work with Ceph backend and has COW-like
>>> memory snapshotting mechanism, time of the 'flush' of the coredump
>>> does not matter). Anyway, with QEMU relatively simple shell script
>>> should do the thing.
>>>
>>> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>>>  wrote:
>>>> Hi guys,
>>>>
>>>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>>>> registers, etc.) on Ceph itself?
>>>>
>>>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>>>> i.e. without the need to quiesce file systems, databases, etc.
>>>>
>>>> I'm thinking an automated process which pauses the VM, flushes the Ceph
>>>> writeback cache (if any), snapshots the rbd image and saves the VM state on
>>>> Ceph as well. I imagine this should only take a very short amount of time,
>>>> and then the VM can be unpaused and continue with minimal interruption.
>>>>
>>>> The new Ceph export command could then be used to store that backup on a
>>>> secondary Ceph cluster or on simple storage.
>>>>
>>>> --
>>>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>>>> j...@mermaidconsulting.dk,
>>>> http://www.mermaidconsulting.com/
>>>> ___
>>>> ceph-users mailing list
>>>> ceph-users@lists.ceph.com
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>>
>> --
>> WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Dzianis Kahanovich
As far as I recall - IMHO the problem is in the ACPI code generation (its own
bytecode language). Once this interaction is integrated into the standard
suspend/resume signals, the only remaining problem is guest suspend support,
the same as for real hardware. So, nothing special.

Jens Kristian Søgaard writes:
> Hi,
> 
>> IMHO interaction QEMU & kernel's FREEZER (part of hibernation & cgroups) can
>> solve many of problems. It can be done via QEMU host-2-guest sockets and 
>> scripts
> 
> That would require a cooperating VM. What I was looking at was how to do this
> for non-cooperating VMs.
> 


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Qemu

2013-05-18 Thread Jens Kristian Søgaard

Hi,


> As far as I mind - IMHO problem is in acpi code creation (own bytecode
> language). After integrating this interaction into standard suspend/resume
> signals - there will only problem of guest suspend support for real hardware.
> So, nothing special.


You're right - it is nothing special... but there are still systems out
there that don't support suspend/resume.


It would be nice to be able to backup a running VM no matter what 
software is actually running inside.


--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsulting.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Abort on moving OSD

2013-05-18 Thread Alex Bligh
I have set up a configuration with 3 x MON + 2 x OSD, each on a different host, 
as a test bench setup. I've written nothing to the cluster (yet).

I'm running ceph 0.61.2 (cuttlefish).

I want to discover what happens if I move an OSD from one host to another, 
simulating the effect of moving a working harddrive from a dead host to a live 
host, which I believe should work. So I stopped osd.0 on one host, and copied 
(using scp) /var/lib/ceph/osd/ceph-0 from one host to another. My understanding 
is that starting osd.0 on the destination host with 'service ceph start osd.0' 
should rewrite the crush map and everything should be fine.

In fact what happened was:

root@ceph6:~# service ceph start osd.0
=== osd.0 === 
create-or-move updating item id 0 name 'osd.0' weight 0.05 at location 
{host=ceph6,root=default} to crush map
Starting Ceph osd.0 on ceph6...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 
/var/lib/ceph/osd/ceph-0/journal
...
root@ceph6:~# ceph health
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; 1/2 in osds are down

osd.0 was not running on the new host, due to the abort as set out below (from 
the log file). Should this work?

-- 
Alex Bligh


2013-05-18 17:03:00.345129 7fa408dbb780  0 ceph version 0.61.2 
(fea782543a844bb277ae94d3391788b76c5bee60), process ceph-osd, pid 3398
2013-05-18 17:03:00.676611 7fa408dbb780 -1 filestore(/var/lib/ceph/osd/ceph-0) 
limited size xattrs -- filestore_xattr_use_omap enabled
2013-05-18 17:03:00.891267 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount FIEMAP ioctl is supported and appears to work
2013-05-18 17:03:00.891314 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-05-18 17:03:00.891533 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount did NOT detect btrfs
2013-05-18 17:03:01.373741 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-05-18 17:03:01.374175 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount found snaps <>
2013-05-18 17:03:02.023315 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-05-18 17:03:02.024992 7fa408dbb780 -1 journal FileJournal::_open: 
disabling aio for non-block journal.  Use journal_force_aio to force use of aio 
anyway
2013-05-18 17:03:02.025372 7fa408dbb780  1 journal _open 
/var/lib/ceph/osd/ceph-0/journal fd 21: 1048576000 bytes, block size 4096 
bytes, directio = 1, aio = 0
2013-05-18 17:03:02.025580 7fa408dbb780  1 journal _open 
/var/lib/ceph/osd/ceph-0/journal fd 21: 1048576000 bytes, block size 4096 
bytes, directio = 1, aio = 0
2013-05-18 17:03:02.027454 7fa408dbb780  1 journal close 
/var/lib/ceph/osd/ceph-0/journal
2013-05-18 17:03:02.302070 7fa408dbb780 -1 filestore(/var/lib/ceph/osd/ceph-0) 
limited size xattrs -- filestore_xattr_use_omap enabled
2013-05-18 17:03:02.361438 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount FIEMAP ioctl is supported and appears to work
2013-05-18 17:03:02.361508 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-05-18 17:03:02.361755 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount did NOT detect btrfs
2013-05-18 17:03:02.424915 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-05-18 17:03:02.425107 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount found snaps <>
2013-05-18 17:03:02.519006 7fa408dbb780  0 filestore(/var/lib/ceph/osd/ceph-0) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-05-18 17:03:02.520446 7fa408dbb780 -1 journal FileJournal::_open: 
disabling aio for non-block journal.  Use journal_force_aio to force use of aio 
anyway
2013-05-18 17:03:02.520507 7fa408dbb780  1 journal _open 
/var/lib/ceph/osd/ceph-0/journal fd 29: 1048576000 bytes, block size 4096 
bytes, directio = 1, aio = 0
2013-05-18 17:03:02.520625 7fa408dbb780  1 journal _open 
/var/lib/ceph/osd/ceph-0/journal fd 29: 1048576000 bytes, block size 4096 
bytes, directio = 1, aio = 0
2013-05-18 17:03:02.522371 7fa408dbb780  0 osd.0 24 crush map has features 
33816576, adjusting msgr requires for clients
2013-05-18 17:03:02.522419 7fa408dbb780  0 osd.0 24 crush map has features 
33816576, adjusting msgr requires for osds
2013-05-18 17:03:02.533617 7fa408dbb780 -1 *** Caught signal (Aborted) **
 in thread 7fa408dbb780

 ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
 1: /usr/bin/ceph-osd() [0x79087a]
 2: (()+0xfcb0) [0x7fa408254cb0]
 3: (gsignal()+0x35) [0x7fa406a0d425]
 4: (abort()+0x17b) [0x7fa406a10b8b]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7fa40735f69d]
 6: (()+0xb5846) [0x7fa40735d846]
 7: (()+0xb5873) [0x7fa40735d873]
 8: (()+0xb596e) [0x7fa40735d96e]
 9: (ceph::buffer::list::iterator::copy(unsigned int, char*)+0x127) [0x841227]

Re: [ceph-users] Abort on moving OSD

2013-05-18 Thread Alex Bligh

On 18 May 2013, at 18:20, Alex Bligh wrote:

> I want to discover what happens if I move an OSD from one host to another, 
> simulating the effect of moving a working harddrive from a dead host to a 
> live host, which I believe should work. So I stopped osd.0 on one host, and 
> copied (using scp) /var/lib/ceph/osd/ceph-0 from one host to another. My 
> understanding is that starting osd.0 on the destination host with 'service 
> ceph start osd.0' should rewrite the crush map and everything should be fine.

Apologies, this was my idiocy. scp does not copy xattrs. rsync -aHAX does, and 
indeed works fine.
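
In sketch form, for anyone repeating this (the getfattr check is just a
suggested sanity check, not something the init script needs):

# On the old host: stop the OSD so its store is quiescent, then copy it
# preserving hardlinks, ACLs and - crucially - xattrs.
service ceph stop osd.0
rsync -aHAX /var/lib/ceph/osd/ceph-0/ newhost:/var/lib/ceph/osd/ceph-0/

# On the new host: spot-check that the xattrs made it across, then start it;
# the init script re-registers the crush location (the create-or-move line
# from my first mail).
getfattr -d /var/lib/ceph/osd/ceph-0/current/meta/* | head
service ceph start osd.0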

I suppose it would have been nice if it died a little more gracefully, but I 
think I got what I deserved.

-- 
Alex Bligh




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com