Another strange thing is that the last few (24) PGs never seem to get ready and
stay stuck in the creating state (after 6 hours of waiting):
[root@serverA ~]# ceph -s
2015-03-30 17:14:48.720396 7feb5bd7a700 0 -- :/1000964 >> 10.???.78:6789/0
pipe(0x7feb60026120 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7feb600263b0).fault
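A minimal set of commands to pull more detail on the stuck PGs (the PG id below is only a placeholder); the "pipe ... fault" line above also suggests the client cannot reach the monitor on port 6789, so that connectivity is worth ruling out first:

  ceph health detail            # lists the stuck PGs and, usually, the OSDs they map to
  ceph pg dump_stuck inactive   # PGs that never went active (should include those stuck creating)
  ceph pg 2.1f query            # replace 2.1f with one of the stuck PG ids; check recovery_state
  ceph osd tree                 # confirm the OSDs those PGs map to are really up and in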
Hello All,
I went through the link below and saw that Copy-on-Read is currently
supported only in librbd, not in the rbd kernel module.
https://wiki.ceph.com/Planning/Blueprints/Infernalis/rbd%3A_kernel_rbd_client_supports_copy-on-read
Can someone please let me know how to test Copy-on-Read u
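For concreteness, a rough librbd-only test sequence (pool/image names and the file path are placeholders, and I have not verified this end to end): enable the option on the client, clone an image, read the clone through librbd, and watch for copied-up objects.

  # in ceph.conf on the client
  [client]
      rbd clone copy on read = true

  rbd import /path/to/some/file rbd/parent --image-format 2
  rbd snap create rbd/parent@snap
  rbd snap protect rbd/parent@snap
  rbd clone rbd/parent@snap rbd/child
  rbd export rbd/child - > /dev/null           # reads go through librbd, not krbd
  rbd info rbd/child                           # note the block_name_prefix
  rados -p rbd ls | grep <block_name_prefix>   # objects copied up into the child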
One interesting use case for combining Ceph with compute is running big
data jobs on Ceph itself. With CephFS coming along, you can run
Hadoop/Spark jobs directly on Ceph, with data locality support, without
needing to move your data to the compute resources. I am wondering if anyone
in the community is l
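For reference, the CephFS Hadoop bindings are usually wired in through core-site.xml plus the cephfs-hadoop jar and libcephfs_jni on the Hadoop classpath. The property names below are from memory and the monitor address/paths are placeholders, so double-check them against the docs:

  fs.defaultFS      = ceph://mon-host:6789/      # fs.default.name on older Hadoop
  fs.ceph.impl      = org.apache.hadoop.fs.ceph.CephFileSystem
  ceph.conf.file    = /etc/ceph/ceph.conf
  ceph.auth.id      = admin
  ceph.auth.keyring = /etc/ceph/ceph.client.admin.keyring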
This is definitely something that we've discussed, though I don't think
anyone has really planned out what a complete solution would look like
including processor affinity, etc. Before I joined Inktank I worked at
a supercomputing institute and one of the projects we worked on was to
develop g
On 03/30/2015 01:29 PM, Mark Nelson wrote:
> This is definitely something that we've discussed, though I don't think
> anyone has really planned out what a complete solution would look like
> including processor affinity, etc. Before I joined Inktank I worked at
> a supercomputing institute and on
> Date: Wed, 25 Mar 2015 11:43:44 -0400
> From: yeh...@redhat.com
> To: neville.tay...@hotmail.co.uk
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Radosgw authorization failed
>
>
>
> - Original Message -
> > From: "Neville"
> > To: ceph-users@lists.ceph.com
> > Sent: We
We have a related topic in CDS about
Hadoop+Ceph (https://wiki.ceph.com/Planning/Blueprints/Infernalis/rgw%3A_Hadoop_FileSystem_Interface_for_a_RADOS_Gateway_Caching_Tier).
It doesn't directly solve the data locality problem, but it tries to avoid
data migration between different storage clusters.
It would
Hi,
I am planning to modify our deployment script so that it can create and deploy
multiple OSDs in parallel on the same host as well as on different hosts.
I just wanted to check whether there is any problem with running, say, 'ceph-deploy
osd create' etc. in parallel while deploying the cluster.
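For concreteness, what I have in mind is just backgrounding the calls from the admin node, something like this (host names and disk/journal paths are examples only):

  for host in node1 node2 node3; do
      ceph-deploy osd create ${host}:/dev/sdb:/dev/sdc1 &
  done
  wait

My main worry is that ceph-deploy maintains ceph.conf and the keyrings in its working directory, so concurrent runs from the same directory might race with each other.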
Thanks & Regards
The systemd service unit files were imported into the tree, but they
have not been added into any upstream packaging yet. See the discussion
at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769593 or "git log
-- systemd". I don't think there are any upstream tickets in Redmine for
this yet.
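Until they land in the packaging, one way to try them is to copy the unit templates out of the source tree by hand, something like this (assuming the checkout is in the current directory and the templates keep the ceph-osd@.service / ceph-mon@.service names they have in git today):

  cp systemd/ceph-osd@.service systemd/ceph-mon@.service /etc/systemd/system/
  systemctl daemon-reload
  systemctl enable ceph-mon@$(hostname -s) ceph-osd@0
  systemctl start ceph-mon@$(hostname -s) ceph-osd@0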
Si
- Original Message -
> From: "Neville"
> To: "Yehuda Sadeh-Weinraub"
> Cc: ceph-users@lists.ceph.com
> Sent: Monday, March 30, 2015 6:49:29 AM
> Subject: Re: [ceph-users] Radosgw authorization failed
>
>
> > Date: Wed, 25 Mar 2015 11:43:44 -0400
> > From: yeh...@redhat.com
> > To: nev
On Sat, Mar 28, 2015 at 10:12 AM, Barclay Jameson
wrote:
> I redid my entire Ceph build, going back to CentOS 7, hoping to get the
> same performance I did last time.
> The rados bench test was the best I have ever had, with 740 MB/s write
> and 1300 MB/s read. This was even better than the f
I will take a look into the perf counters.
Thanks for the pointers!
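For anyone following along, the counters come out of the admin socket on each daemon, e.g. (daemon ids and socket path below are examples):

  ceph daemon osd.0 perf dump
  ceph daemon mds.a perf dump
  # or, going through the socket path directly:
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump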
On Mon, Mar 30, 2015 at 1:30 PM, Gregory Farnum wrote:
> On Sat, Mar 28, 2015 at 10:12 AM, Barclay Jameson
> wrote:
>> I redid my entire Ceph build, going back to CentOS 7, hoping to get the
>> same performance I did last t
Hi,
I'm benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I am
using both RADOS bench and Block device (using fio) for testing.
I am seeing significant Write performance improvements, as expected. I am
however seeing the Reads coming out a bit slower on the SSD Journaling side.
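For reference, the rados bench sequence I mean is roughly the following (pool name, runtime and thread count are placeholders), dropping caches on the OSD nodes between the write and read passes so reads are not served from page cache; since the journal only sits in the write path, I would not expect the SSD journal to change reads either way:

  rados bench -p testpool 120 write --no-cleanup -t 16
  # on each OSD node:
  sync; echo 3 > /proc/sys/vm/drop_caches
  rados bench -p testpool 120 seq -t 16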
On 03/30/2015 03:01 PM, Garg, Pankaj wrote:
Hi,
I’m benchmarking my small cluster with HDDs vs HDDs with SSD Journaling.
I am using both RADOS bench and Block device (using fio) for testing.
I am seeing significant Write performance improvements, as expected. I
am however seeing the Reads comin
On Mon, Mar 30, 2015 at 1:01 PM, Garg, Pankaj
wrote:
> Hi,
>
> I’m benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I
> am using both RADOS bench and Block device (using fio) for testing.
>
> I am seeing significant Write performance improvements, as expected. I am
> however se
Hi!
I mistakenly created my MDS node on the 'wrong' server a few months
back. Now I realized I placed it on a machine lacking IPMI and would like
to move it to another node in my cluster.
Is it possible to non-destructively move an MDS?
Thanks!
On Mon, Mar 30, 2015 at 1:51 PM, Steve Hindle wrote:
>
> Hi!
>
> I mistakenly created my MDS node on the 'wrong' server a few months back.
> Now I realized I placed it on a machine lacking IPMI and would like to move
> it to another node in my cluster.
>
> Is it possible to non-destructively m
Hi,
Gregory Farnum wrote:
> The MDS doesn't have any data tied to the machine you're running it
> on. You can either create an entirely new one on a different machine,
> or simply copy the config file and cephx keyring to the appropriate
> directories. :)
Sorry to jump into this thread, but how can
On Mon, Mar 30, 2015 at 3:15 PM, Francois Lafont wrote:
> Hi,
>
> Gregory Farnum wrote:
>
>> The MDS doesn't have any data tied to the machine you're running it
>> on. You can either create an entirely new one on a different machine,
>> or simply copy the config file and cephx keyring to the appro
Gregory Farnum wrote:
>> Sorry to jump into this thread, but how can we *remove* an MDS daemon from a
>> Ceph cluster?
>>
>> Are the commands below enough?
>>
>> stop the daemon
>> rm -r /var/lib/ceph/mds/ceph-$id/
>> ceph auth del mds.$id
>>
>> Should we edit something in the mds map to remo
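For reference, the minimal sequence I would expect, assuming the daemon is named mds.$id and is run under sysvinit (not verified end to end):

  service ceph stop mds.$id      # or /etc/init.d/ceph stop mds.$id
  ceph mds fail $id              # if it was holding a rank
  ceph auth del mds.$id
  rm -r /var/lib/ceph/mds/ceph-$id
  # then drop any [mds.$id] section from ceph.conf and push the updated conf around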
I've been working on this peering problem all day. I've done a lot of
testing at the network layer and I just don't believe that we have a
problem that would prevent OSDs from peering. When looking through OSD debug
20/20 logs, it just doesn't look like the OSDs are trying to peer. I don't
know if i
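For the record, the obvious starting points (the PG id and OSD number below are placeholders):

  ceph health detail      # which PGs are stuck and which OSDs they involve
  ceph pg 3.1a5 query     # recovery_state shows what the PG thinks it is blocked on

Would something like 'ceph osd down 12' be enough to kick the OSD into re-peering when it comes back up, or is there a more direct way to force it?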
Sorry HTML snuck in somewhere.
-- Forwarded message --
From: Robert LeBlanc
Date: Mon, Mar 30, 2015 at 8:15 PM
Subject: Force an OSD to try to peer
To: Ceph-User , ceph-devel
I've been working on this peering problem all day. I've done a lot of
testing at the network layer and
Hi, all
I have a two-node Ceph cluster, and both nodes are monitors and OSDs. When they're
both up, the OSDs are all up and in and everything is fine... almost:
[root~]# ceph -s
health HEALTH_WARN 25 pgs degraded; 316 pgs incomplete; 85 pgs stale; 24
pgs stuck degraded; 316 pgs stuck inactive; 85 pgs
On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote:
> Hi, all
> I have a two-node Ceph cluster, and both nodes are monitors and OSDs. When
> they're both up, the OSDs are all up and in and everything is fine... almost:
Two things.
1 - You *really* need a minimum of three monitors. Ceph cannot form a quorum wi
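For a two-host setup the ceph.conf knobs that usually matter look something like this (the values are only a sketch, and they only affect new pools; existing pools need 'ceph osd pool set <pool> size 2' etc.):

  [global]
      osd pool default size = 2
      osd pool default min size = 1
      osd crush chooseleaf type = 1    # 1 = host, so replicas land on different hosts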
On Mon, Mar 30, 2015 at 8:02 PM, Lindsay Mathieson
wrote:
> On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote:
>> Hi, all
>> I have a two-node Ceph cluster, and both nodes are monitors and OSDs. When
>> they're both up, the OSDs are all up and in and everything is fine... almost:
>
>
>
> Two things.
>
> 1 -
On Sun, Mar 29, 2015 at 1:12 AM, Barclay Jameson
wrote:
> I redid my entire Ceph build, going back to CentOS 7, hoping to get the
> same performance I did last time.
> The rados bench test was the best I have ever had, with 740 MB/s write
> and 1300 MB/s read. This was even better than the fi
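For anyone wanting to reproduce numbers like these, the usual pattern is something along these lines (pool name, PG count, runtime and thread count are placeholders):

  ceph osd pool create bench 2048 2048
  rados bench -p bench 60 write --no-cleanup -t 32
  rados bench -p bench 60 seq -t 32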
I have a Ceph node that recovers into my Ceph pool correctly, and performance
looks normal for the rbd clients. However, a few minutes after recovery
finishes, the rbd clients begin to fall over and cannot write data to the pool.
I've been trying to figure this out for weeks! N
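For what it's worth, the obvious next checks would be (the OSD id below is a placeholder):

  ceph health detail                    # look for "requests are blocked" and the osd ids named there
  ceph daemon osd.3 dump_ops_in_flight  # run on the host that owns a named osd
  ceph -w                               # watch for slow request warnings while the clients stall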