[ceph-users] Cannot mount ceph filesystem: error 5 (Input/Output error)

2013-03-10 Thread waed Albataineh
Hi, 

When trying to mount the Ceph filesystem using
"sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs"
the command hangs for a while and then fails with the error message
"mount error 5 = Input/output error". I then ran ceph -s and got this output:

2013-03-10 13:23:03.834877 7f26b0d8e700  0 -- :/6672 >> 10.242.20.248:6789/0 
pipe(0x1a768a0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-03-10 13:23:06.834997 7f26b74ad700  0 -- :/6672 >> 10.242.20.248:6789/0 
pipe(0x7f26a8000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-03-10 13:23:09.835215 7f26b0d8e700  0 -- :/6672 >> 10.242.20.248:6789/0 
pipe(0x7f26a8003010 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
.
.

What does that mean, and what can I do to solve it?

Thank you. 





Re: [ceph-users] Cannot mount ceph filesystem: error 5 (Input/Output error)

2013-03-10 Thread Wido den Hollander

On 03/10/2013 12:30 PM, waed Albataineh wrote:

Hi,

When trying to mount the Ceph filesystem using
"sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs"
the command hangs for a while and then fails with the error message
"mount error 5 = Input/output error". I then ran ceph -s and got this output:

2013-03-10 13:23:03.834877 7f26b0d8e700  0 -- :/6672 >>
10.242.20.248:6789/0 pipe(0x1a768a0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-03-10 13:23:06.834997 7f26b74ad700  0 -- :/6672 >>
10.242.20.248:6789/0 pipe(0x7f26a8000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-03-10 13:23:09.835215 7f26b0d8e700  0 -- :/6672 >>
10.242.20.248:6789/0 pipe(0x7f26a8003010 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
.
.

What does that mean, and what can I do to solve it?



That means your monitor is probably dead. Could you verify the ceph-mon 
process is actually running?
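
A quick sanity check would be something like this on the monitor host (the
monitor id "a" and the sysvinit script path below are assumptions for a
default mkcephfs deployment, adjust to your setup):

    # is the monitor daemon running at all?
    ps aux | grep ceph-mon
    # is it listening on the monitor port?
    sudo netstat -tlnp | grep 6789
    # if not, try starting it and check /var/log/ceph/ for errors
    sudo /etc/init.d/ceph start mon.a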



Thank you.










--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] Can't Mount CephFS

2013-03-10 Thread Wido den Hollander

On 03/09/2013 06:55 PM, Scott Kinder wrote:

I'm running ceph 0.56.3 on CentOS, and when I try to mount ceph as a
file system on other servers, the process just waits interminably. I'm
not seeing any relevant entries in syslog on the hosts trying to mount
the file system, nor am I seeing any entries in the ceph monitor logs.
Any ideas on how I can troubleshoot this problem? I'm not running
SELinux on the CentOS hosts, nor are there any firewall rules in place
on either the host or network level.




What does 'ceph -s' say? Is the cluster healthy with an active MDS?
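
For example (just a sketch, the exact output format varies between versions):

    ceph -s          # overall cluster state
    ceph health      # should report HEALTH_OK
    ceph mds stat    # the MDS should show as up:active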






--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] Raw disks under OSDs or HW-RAID6 is better?

2013-03-10 Thread Wido den Hollander

On 03/08/2013 02:17 PM, Mihály Árva-Tóth wrote:

Hello,

We're planning 3 hosts with 12 HDDs in each host. Which is better: a
1 OSD per 1 HDD structure, or a hardware RAID-6 across all 12 HDDs with
a single OSD using the whole disk space on each host?



I'd saw 12 "RAW" disks, so 12 OSDs. It will consume a bit more memory 
and CPU, but since Ceph does the replication there is no need for 
HW-RAID, it's just wasting space.


But still, 3 hosts with 12 HDDs each is not what I'd recommend. If you
lose one host, 33% of your cluster goes down and the recovery process
will cost you a lot of performance.


Better to go with 6 hosts of 6 disks each, or even more hosts with
fewer disks each.


The more hosts you have, the lower the impact of losing one.
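
As a rough illustration only (the hostnames and device names below are
invented), the 1-OSD-per-disk layout just means one section per disk in
ceph.conf, e.g. for the first host:

    [osd.0]
        host = ceph-node-1
        devs = /dev/sdb
    [osd.1]
        host = ceph-node-1
        devs = /dev/sdc
    ...
    [osd.11]
        host = ceph-node-1
        devs = /dev/sdm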


Thank you,
Mike






--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] Raw disks under OSDs or HW-RAID6 is better?

2013-03-10 Thread Dimitri Maziuk

On 3/8/2013 7:17 AM, Mihály Árva-Tóth wrote:

Hello,

We're planning 3 hosts with 12 HDDs in each host. Which is better: a
1 OSD per 1 HDD structure, or a hardware RAID-6 across all 12 HDDs with
a single OSD using the whole disk space on each host?


I suspect the issue is what you're going to store on it. With 3TB
low-end SATA drives and a hot spare, your OSD filesystem is going to be
27TB. If you plan on storing 10GB VM images you're probably better off
with 27 1TB OSDs, which of course will take 5 times the rack space,
5-10 times the power outlets, etc.
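
For reference, a back-of-the-envelope version of that capacity figure,
assuming one hot spare and RAID-6's two parity disks:

    usable capacity = (12 - 1 spare - 2 parity) disks x 3TB = 9 x 3TB = 27TB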


Dima



Re: [ceph-users] Configuration file

2013-03-10 Thread John Wilkins
Waed,

The only thing you need to consider on a small test cluster is performance.
Do you want to separate journal drives from data drives? That's where your
real performance difference occurs. With a two host cluster, you can just
specify the different hosts. The question is whether you separate the OS
disk from the data disk, and whether you separate the journal from the data
disk. Separate is better for total throughput.
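
As a minimal sketch of what that separation could look like in ceph.conf
(the device paths below are assumptions, not recommendations):

    [osd]
        osd journal size = 1000        # journal size in MB

    [osd.0]
        host = node-a
        devs = /dev/sdb                # data disk
        osd journal = /dev/sdc1        # journal on a separate device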

John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599

On Sat, Mar 9, 2013 at 4:05 AM, waed Albataineh wrote:

>
> John,
>
> It's just as you said: I only want to test Ceph on a small network, two
> hosts only. I want to see how it works generally, and try whether things
> could be a little bit different.
> So no harm will come from specifying just the journal size and hostname
> values only?
>
>
> --- On Fri, 3/8/13, John Wilkins wrote:
>
>
> From: John Wilkins 
> Subject: Re: [ceph-users] Configuration file
> To: "waed Albataineh" 
> Cc: "Ceph list" 
> Date: Friday, March 8, 2013, 11:15 PM
>
>
> Waed,
>
> These are optional settings. If you specify them, Ceph will create the
> file system for you. If you are just performing a local install for
> testing purposes, you can omit the values. You'd need to have a 'devs'
> setting for each OSD in order for mkcephfs to build a file system for
> you. The default values for
>
> #osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"
> #osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"
>
> are meaningless if you don't specify "devs" settings under the OSDs.
>
>
> If the drive for OSD data is separate from the OS drive, you'd have to
> create a file system anyway. Is there a reason you don't want to use
> mkcephfs?
>
> On Fri, Mar 8, 2013 at 9:44 AM, waed Albataineh wrote:
> >
> > I'm using v 0.56.
> > If I don't want to specify the osd mkfs, osd mount, and devs settings
> > in the configuration file, are there going to be default values?
> >
> >
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com 
> (415) 425-9599
>
>
>
>
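
To make the quoted settings concrete, a hedged example of how they might
be filled in (the fs type, device path and hostname here are placeholders
I've chosen, not defaults):

    [osd.0]
        host = node-a
        devs = /dev/sdb
        osd mkfs type = xfs
        osd mkfs options xfs = -f
        osd mount options xfs = rw,noatime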


Re: [ceph-users] I/O Speed Comparisons

2013-03-10 Thread Wolfgang Hennerbichler
Let me know if I can help out with testing somehow. 

Wolfgang

From: ceph-users-boun...@lists.ceph.com
[ceph-users-boun...@lists.ceph.com] on behalf of Mark Nelson
[mark.nel...@inktank.com]
Sent: Saturday, 9 March 2013 20:33
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] I/O Speed Comparisons

Thanks for all of this feedback guys!  It gives us some good data to try
to replicate on our end.  Hopefully I'll have some time next week to
take a look.

Thanks!
Mark

On 03/09/2013 08:14 AM, Erdem Agaoglu wrote:
> Mark,
>
> If it's any help, we've done a small totally unreliable benchmark on our
> end. For a KVM instance, we had:
> 260MB/s write, 200MB/s read on local SAS disks, attached as LVM LVs,
> 250MB/s write, 90MB/s read on RBD, 32 osds, all SATA.
>
> All sequential, a 10G network. It's more than enough currently but we'd
> like to improve RBD read performance.
>
> Cheers,
>
>
> On Sat, Mar 9, 2013 at 7:27 AM, Andrew Thrift wrote:
>
> Mark,
>
>
> I would just like to add, we too are seeing the same behavior with
> QEMU/KVM/RBD.  Maybe it is a common symptom of high IO with this setup.
>
>
>
> Regards,
>
>
>
>
>
> Andrew
>
>
> On 3/8/2013 12:46 AM, Mark Nelson wrote:
>
> On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:
>
>
>
> On 03/06/2013 02:31 PM, Mark Nelson wrote:
>
> If you are doing sequential reads, you may benefit by
> increasing the read_ahead_kb value for each device in
> /sys/block/<device>/queue on the OSD hosts.
>
>
> Thanks, that didn't really help. It seems the VM has to handle too
> much I/O, even the mouse-cursor is jerking over the screen when
> connecting via vnc. I guess this is the wrong list, but it has
> somehow to do with librbd in connection with kvm, as the same
> machine on LVM works just ok.
>
>
> Thanks for the heads up Wolfgang.  I'm going to be looking into
> QEMU/KVM RBD performance in the coming weeks so I'll try to watch
> out for this behaviour.
>
>
> Wolfgang
>
>
>
>
>
>
>
> --
> erdem agaoglu
>
>
>
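
The read_ahead_kb tweak quoted above boils down to something like this on
each OSD host (the device name and the 4096 value are only examples):

    # check the current read-ahead value for an OSD data disk
    cat /sys/block/sdb/queue/read_ahead_kb
    # raise it for sequential read workloads (value in KB)
    echo 4096 | sudo tee /sys/block/sdb/queue/read_ahead_kb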


--
Mark Nelson
Performance Engineer
Inktank