Hello Filippos,
On Wed, 4 Jun 2014 17:22:35 +0300 Filippos Giannakos wrote:
> Hello Ian,
>
> Thanks for your interest.
>
> On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> > Thanks, Filippos! Very interesting reading.
> >
> > Are you comfortable enough yet to remove the RAID-1 from your
> > architecture and get all that space back?
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how can I
get information such as which OSDs and PGs the volume is stored on?
Thanks
Kumar
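A volume is not created on a single OSD or PG: RBD stripes it over many 4M
RADOS objects, each of which maps to its own PG. One way to inspect the
mapping, assuming the Cinder pool is named 'volumes' (the pool, volume and
object names below are placeholders, not taken from this thread):

  # find the prefix of the RADOS objects backing the volume
  rbd -p volumes info volume-XXXX | grep block_name_prefix
  # list the objects that currently exist for it
  rados -p volumes ls | grep <block_name_prefix>
  # map any one of those objects to its PG and acting OSDs
  ceph osd map volumes <object_name>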
Hi,
some of the OSDs in my environment keep trying to connect to the
monitors/ceph nodes, but get connection refused and end up down/out. It is
even worse when I try to initialize 100+ OSDs (800G HDD for each OSD): most
of the OSDs run into the same problem connecting to the monitor. I checked
the monitor sta
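A few checks that usually narrow this down (the host name and monitor id
below are placeholders):

  # can the OSD host reach the monitor port at all?
  nc -zv <mon-host> 6789
  # does that monitor consider itself part of a quorum?
  ceph daemon mon.<id> mon_status
  # overall cluster view, including the monitor quorum
  ceph -s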
Hello,
On Wed, 4 Jun 2014 23:46:33 +0800 Indra Pramana wrote:
> Hi Christian,
>
> In addition to my previous email, I realised that if I use dd with 4M
> block size, I can get higher speed.
>
> root@Ubuntu-12043-64bit:/data# dd bs=4M count=128 if=/dev/zero of=test4
> conv=fdatasync oflag=direct
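For comparison, the same amount of data written with a small block size
issues vastly more requests, which is usually what makes it slow; a sketch
(file names are illustrative):

  # 512 MB in 128 requests of 4 MB
  dd bs=4M count=128 if=/dev/zero of=test4 conv=fdatasync oflag=direct
  # the same 512 MB in 131072 requests of 4 KB
  dd bs=4k count=131072 if=/dev/zero of=test4k conv=fdatasync oflag=direct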
Hello,
On Wed, 4 Jun 2014 22:36:00 +0800 Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply.
>
> Just now I managed to identify 3 more OSDs which were slow and needed to
> be trimmed. Here is a longer (1 minute) result of rados bench after the
> trimming:
>
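For reference, a one-minute write benchmark of the kind quoted here is
invoked as follows (the pool name is a placeholder):

  rados bench -p <pool> 60 write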
On 04 Jun 2014, at 16:06, Sage Weil wrote:
> You can adjust this on running OSDs with something like 'ceph daemon
> osd.NN config set osd_snap_trim_sleep .01' or with 'ceph tell osd.*
> injectargs -- --osd-snap-trim-sleep .01'.
Thanks, trying that now.
I noticed that using osd_snap_trim_sleep = 0.01 in ceph.conf
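For a persistent setting, the equivalent entry goes in the [osd] section of
ceph.conf; a minimal sketch:

  [osd]
  # sleep 10 ms between snap trim operations
  osd snap trim sleep = 0.01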
On Wed, Jun 4, 2014 at 8:49 AM, Gregory Farnum wrote:
> On Wed, Jun 4, 2014 at 7:58 AM, Sylvain Munaut wrote:
>> Hi,
>>
>>
>> During a multi part upload you can't upload parts smaller than 5M, and
>> radosgw also slices objects into slices of 4M. Having those two be
>> different is a bit unfortunate because if you slice your files in the
>> minimum chunk size you end up with a main file of 4M and a shadowfile
>> of 1M for each
On 06/04/2014 07:22 PM, Sage Weil wrote:
> On Wed, 4 Jun 2014, Andrey Korolyov wrote:
>> On 06/04/2014 06:06 PM, Sage Weil wrote:
>>> On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
>>>> Hi Sage, all,
>>>>
>>>> On 21 May 2014, at 22:02, Sage Weil wrote:
>>>>> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
On Wed, Jun 4, 2014 at 7:58 AM, Sylvain Munaut wrote:
> Hi,
>
>
> During a multi part upload you can't upload parts smaller than 5M, and
> radosgw also slices objects into slices of 4M. Having those two be
> different is a bit unfortunate because if you slice your files in the
> minimum chunk size you end up with a main file of 4M and a shadowfile
> of 1M for each
On 06/04/2014 06:06 PM, Sage Weil wrote:
> On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
>> Hi Sage, all,
>>
>> On 21 May 2014, at 22:02, Sage Weil wrote:
>>
>>> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
>>
>> Do you have some advice about how to use the snap trim throttle?
On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> On 04 Jun 2014, at 16:06, Sage Weil wrote:
>
> > You can adjust this on running OSDs with something like 'ceph daemon
> > osd.NN config set osd_snap_trim_sleep .01' or with 'ceph tell osd.*
> > injectargs -- --osd-snap-trim-sleep .01'.
>
> Thanks,
On Wed, 4 Jun 2014, Andrey Korolyov wrote:
> On 06/04/2014 07:22 PM, Sage Weil wrote:
> > On Wed, 4 Jun 2014, Andrey Korolyov wrote:
> >> On 06/04/2014 06:06 PM, Sage Weil wrote:
> >>> On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> >>>> Hi Sage, all,
> >>>>
> >>>> On 21 May 2014, at 22:02, Sage Weil wrote:
On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> On 04 Jun 2014, at 16:06, Sage Weil wrote:
>
> > On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> >> Hi Sage, all,
> >>
> >> On 21 May 2014, at 22:02, Sage Weil wrote:
> >>
> >>> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
>
On Wed, 4 Jun 2014, Andrey Korolyov wrote:
> On 06/04/2014 06:06 PM, Sage Weil wrote:
> > On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> >> Hi Sage, all,
> >>
> >> On 21 May 2014, at 22:02, Sage Weil wrote:
> >>
> >>> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
> >>
> >>
On 04 Jun 2014, at 16:06, Sage Weil wrote:
> On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
>> Hi Sage, all,
>>
>> On 21 May 2014, at 22:02, Sage Weil wrote:
>>
>>> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
>>
>> Do you have some advice about how to use the snap trim throttle?
Hi,
During a multi part upload you can't upload parts smaller than 5M, and
radosgw also slices objects into slices of 4M. Having those two be
different is a bit unfortunate because if you slice your files in the
minimum chunk size you end up with a main file of 4M and a shadowfile
of 1M for each
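To make the arithmetic concrete: with minimum-size 5M parts, a 100M upload
becomes 20 parts, each stored as a 4M head object plus a 1M shadow object,
i.e. 40 RADOS objects. Choosing a part size that is a multiple of the 4M
stripe avoids the small tails entirely; a hedged example with s3cmd (the
flag is s3cmd's own, shown only as an illustration):

  # upload in 8M parts so each part splits evenly into two 4M objects
  s3cmd put --multipart-chunk-size-mb=8 bigfile s3://mybucket/bigfile

The stripe size itself is also configurable on the radosgw side, via
'rgw obj stripe size' in ceph.conf.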
Hello Ian,
Thanks for your interest.
On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> Thanks, Filippos! Very interesting reading.
>
> Are you comfortable enough yet to remove the RAID-1 from your architecture and
> get all that space back?
Actually, we are not ready to do that yet.
On Wed, 4 Jun 2014, Dan Van Der Ster wrote:
> Hi Sage, all,
>
> On 21 May 2014, at 22:02, Sage Weil wrote:
>
> > * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
>
> Do you have some advice about how to use the snap trim throttle? I saw
> osd_snap_trim_sleep, which is still 0 by default. But I didn't manage to
> follow the original
Hello,
How can I check a ceph client session on the client side? For example, when
mounting iscsi or nfs you can check it (nfs with just mount, iscsi with
iscsiadm -m session), but how can I do that with ceph? And is there more
detailed documentation about openstack and ceph than
http://ceph.com/docs/master/rbd/rbd-op
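What to check depends on the client type; a sketch (the monitor id is a
placeholder):

  # kernel RBD clients: list the images currently mapped
  rbd showmapped
  # kernel clients expose per-session state under debugfs
  ls /sys/kernel/debug/ceph/
  # from the cluster side, ask a monitor for the sessions it holds
  ceph daemon mon.<id> sessions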
On 04/06/2014 03:23, Christian Balzer wrote:
> On Tue, 03 Jun 2014 18:52:00 +0200 Cedric Lemarchand wrote:
>> On 03/06/2014 12:14, Christian Balzer wrote:
>>> A simple way to make 1) and 2) cheaper is to use AMD CPUs; they will do
>>> just fine at half the price with these loads.
>>> If you
Hi,
On 04.06.2014 14:51, yalla.gnan.ku...@accenture.com wrote:
> Hi All,
>
>
>
> I have a ceph storage cluster with four nodes. I have created block storage
> using cinder in openstack and ceph as its storage backend.
>
> So, I see a volume is created in ceph in one of the pools. But how can I
> get information such as which OSDs and PGs the volume is stored on?
Hi Sage, all,
On 21 May 2014, at 22:02, Sage Weil wrote:
> * osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
Do you have some advice about how to use the snap trim throttle? I saw
osd_snap_trim_sleep, which is still 0 by default. But I didn't manage to follow
the original
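To see the value currently in effect on a running OSD before changing it
(osd.0 below is a placeholder):

  ceph daemon osd.0 config get osd_snap_trim_sleep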