Dear all,
We have a Ceph cluster with several nodes, each containing 4-6 OSDs. We are
running the OS off a USB drive to maximise the use of the drive bays for
the OSDs, and so far everything has been running fine.
Occasionally, the OS running on the USB drive would fail, and we would
normally replace t
Dear all,
Is anyone using CloudStack with Ceph RBD as primary storage? I am using
CloudStack 4.2.0 with KVM hypervisors and the latest stable version of Ceph
Dumpling.
Based on what I see, when the Ceph cluster is in a degraded state (not
active+clean), for example because one node is down and in the recovering
pr
AFAIK, it's not possible. A journal should be on the same server as the OSD
it serves. CMIIW.
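As far as I understand, the journal is just a local device or file that the
OSD process opens directly, which is why it has to sit on the same host as
the OSD it serves. A minimal ceph.conf illustration, purely as a sketch (the
OSD id and device path are made up):

[osd.0]
    osd journal = /dev/sdg1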
Thank you.
On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰 wrote:
> thanks for your reply.
>
> in your case, you deploy 3 OSDs in one server. my case is that the 3 OSDs
> are on 3 servers.
> how to do that?
>
>
> 2014-07-21 17:
Hi Gandalf and all,
FYI, I checked sgdisk's man page and it seems that the correct command to
restore should be:
sgdisk --load-backup=/tmp/journal_table /dev/sdg
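For completeness, the full swap sequence I have in mind is roughly the
following. This is only a sketch: it assumes the OSD daemons are managed by
upstart and that osd.2 is one of the OSDs using /dev/sdg for its journal, so
adjust the IDs, devices and stop/start commands to your setup:

ceph osd set noout
stop ceph-osd id=2                                # repeat for each OSD on this journal SSD
ceph-osd -i 2 --flush-journal                     # flush pending journal entries to the OSD store
sgdisk --backup=/tmp/journal_table /dev/sdg       # dump the GPT from the old SSD
# (physically replace the SSD)
sgdisk --load-backup=/tmp/journal_table /dev/sdg  # restore the GPT onto the new SSD
ceph-osd -i 2 --mkjournal                         # recreate the journal on the new partition
start ceph-osd id=2
ceph osd unset noout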
Will try this next weekend and update again.
Thank you.
On Sat, May 10, 2014 at 10:58 PM, Indra Pramana wrote:
> Hi
Hi Gandalf,
I tried to dump the journal partition scheme from the old SSD:
sgdisk --backup=/tmp/journal_table /dev/sdg
and then restore the journal partition scheme to the new SSD after it's
replaced:
sgdisk --restore-backup=/tmp/journal_table /dev/sdg
and it doesn't work. :( parted -l doesn't sho
noout
Looking forward to your reply, thank you.
Cheers.
On Fri, May 9, 2014 at 1:08 AM, Indra Pramana wrote:
> Hi Gandalf and Sage,
>
> Many thanks! Will try this and share the outcome.
>
> Cheers.
>
>
> On Fri, May 9, 2014 at 12:55 AM, Gandalf Corvotempest
Hi Gandalf and Sage,
Many thanks! Will try this and share the outcome.
Cheers.
On Fri, May 9, 2014 at 12:55 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2014-05-08 18:43 GMT+02:00 Indra Pramana :
> > Since we don't use ceph.conf to indicate the da
Hi Sage,
On Fri, May 9, 2014 at 12:32 AM, Sage Weil wrote:
>
>
> On Fri, 9 May 2014, Indra Pramana wrote:
>
> > Hi Sage,
> > Thanks for your reply!
> >
> > Actually what I want is to replace the journal disk only, while I want
> to keep the OSD FS
tions, label them, and then create the (by
> default, XFS) fs and initialize the journal. After that, udev magic will
> take care of all the mounting and starting of daemons for you.
>
> sage
>
>
> On Fri, 9 May 2014, Indra Pramana wrote:
>
> > Hi Sage,
> > Sorry
Hi Sage,
Sorry to rope you in, but do you have any comments on this? I noticed you
advised Tim Snider on a similar situation before. :)
http://www.spinics.net/lists/ceph-users/msg05142.html
Looking forward to your reply, thank you.
Cheers.
On Wed, May 7, 2014 at 11:31 AM, Indra Pramana wrote
Hi Craig and all,
I checked Sébastien Han's blog post, and it seems the way the journal was
mounted there is a bit different. Is that because the article was based on
an older version of Ceph?
$ sudo mount /dev/sdc /journal
$ ceph-osd -i 2 --mkjournal
2012-08-16 13:29:58.735095 7ff0c4b58780 -1 cr
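For comparison, on my setup the journal does not seem to be a mounted
filesystem at all; as far as I can tell, the OSD data directory simply
contains a journal symlink pointing at the raw journal partition. A quick
way to check (osd.2 is just an example id):

ls -l /var/lib/ceph/osd/ceph-2/journal

I would expect this to point at the partition (e.g. something under
/dev/disk/by-partuuid/) rather than at a file on a mounted /journal
filesystem as in the blog post.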
Hi Cedric,
Thanks for your reply.
On Sun, May 4, 2014 at 6:16 PM, Cedric Lemarchand wrote:
> Hi Indra,
>
> On 04/05/2014 06:11, Indra Pramana wrote:
>
> Just to share: I tried this yesterday, and it doesn't work:
>
> > - ceph osd set noout
> > - sud
On Sat, May 3, 2014 at 4:01 AM, Indra Pramana wrote:
> > Sorry forgot to cc the list.
> >
> > On 3 May 2014 08:00, "Indra Pramana" wrote:
> >>
> >> Hi Andrey,
> >>
> >> I actually wanted to try this (instead of remove and readd OSD) to
Sorry forgot to cc the list.
On 3 May 2014 08:00, "Indra Pramana" wrote:
> Hi Andrey,
>
> I actually wanted to try this (instead of removing and re-adding the OSD)
> to avoid remapping of PGs to other OSDs and the unnecessary I/O load.
>
> Are you saying that doing this wi
Hi,
May I know if it's possible to replace an OSD drive without removing and
re-adding the OSD? I want to avoid the time and the excessive I/O load of
the recovery process that happens when:
- the OSD is removed; and
- the OSD is being put back into the cluster.
I read Da
Hi Irek,
Good day to you.
Any updates/comments on the below?
Looking forward to your reply, thank you.
Cheers.
On Tue, Apr 29, 2014 at 12:47 PM, Indra Pramana wrote:
> Hi Irek,
>
> Good day to you, and thank you for your e-mail.
>
> Is there a better way other than patchin
2014 at 12:28 PM, Gregory Farnum wrote:
> Are your OSDs actually running? I see that your older logs have more data
> in them; did you change log rotation from the defaults?
>
>
> On Monday, April 28, 2014, Indra Pramana wrote:
>
>> Hi Greg,
>>
>> The log rot
oads/code/CMD_FLUSH.diff).
> After rebooting, run the following commands:
> echo temporary write through > /sys/class/scsi_disk//cache_type
>
>
> 2014-04-28 15:44 GMT+04:00 Indra Pramana :
>
> Hi Irek,
>>
>> Thanks for the article. Do you have any other web sourc
Sunday, April 27, 2014, Indra Pramana wrote:
>
>> Dear all,
>>
>> I have multiple OSDs per node (normally 4) and I realised that for all
>> the nodes that I have, only one OSD will contain logs under /var/log/ceph,
>> the rest of the logs are empty.
>>
>> ro
Dear Christian and all,
Anyone can advise?
Looking forward to your reply, thank you.
Cheers.
On Thu, Apr 24, 2014 at 1:51 PM, Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply.
>
> On Wed, Apr 23, 2014 at 11:41 PM, Christia
> http://www.theirek.com/blog/2014/02/16/patch-dlia-raboty-s-enierghoniezavisimym-keshiem-ssd-diskov
>
>
> 2014-04-28 15:20 GMT+04:00 Indra Pramana :
>
> Hi Udo and Irek,
>>
>> Good day to you, and thank you for your emails.
>>
>>
>> >perhaps
embke :
>
>> Hi,
>> perhaps due to IOs from the journal?
>> You can test with iostat (like "iostat -dm 5 sdg").
>>
>> on debian iostat is in the package sysstat.
>>
>> Udo
>>
>> On 28.04.2014 07:38, Indra Pramana wrote:
>>
Dear all,
I have multiple OSDs per node (normally 4), and I realised that on every
node I have, only one OSD writes logs under /var/log/ceph; the rest of the
log files are empty.
root@ceph-osd-07:/var/log/ceph# ls -la *.log
-rw-r--r-- 1 root root 0 Apr 28 06:50 ceph-client.admin.log
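For anyone checking the same thing, the following should show whether each
OSD daemon is actually running and which log file it is configured to write
to. osd.1 is just an example, and the admin socket path assumes the default
location:

ps aux | grep ceph-osd
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep log_file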
Hi Craig,
Good day to you, and thank you for your enquiry.
As per your suggestion, I created a 3rd partition on the SSDs and ran the
dd test directly against the device, and the result is very slow.
root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdg3
conv=fdatasync oflag=d
Hi,
On one of our test clusters, I have a node with 4 OSDs with SAS / non-SSD
drives (sdb, sdc, sdd, sde) and 2 SSD drives (sdf and sdg) for journals to
serve the 4 OSDs (2 each).
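For reference, the partition layout below is what parted reports for the
first journal SSD; the command producing it would be something like this
(device name as on this node):

parted /dev/sdf print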
Model: ATA ST100FM0012 (scsi)
Disk /dev/sdf: 100GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
N
Hi Christian,
Good day to you, and thank you for your reply.
On Wed, Apr 23, 2014 at 11:41 PM, Christian Balzer wrote:
> > > > Using 32 concurrent writes, result is below. The speed really
> > > > fluctuates.
> > > >
> > > > Total time run:         64.317049
> > > > Total writes made:
Hi Christian,
Good day to you, and thank you for your reply.
On Tue, Apr 22, 2014 at 12:53 PM, Christian Balzer wrote:
> On Tue, 22 Apr 2014 02:45:24 +0800 Indra Pramana wrote:
>
> > Hi Christian,
> >
> > Good day to you, and thank you for your reply. :) See my repl
Hi Christian,
Good day to you, and thank you for your reply. :) See my reply inline.
On Mon, Apr 21, 2014 at 10:20 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 21 Apr 2014 20:47:21 +0800 Indra Pramana wrote:
>
> > Dear all,
> >
> > I have a Ceph RBD cl
Dear all,
I have a Ceph RBD cluster with around 31 OSDs running SSD drives, and I
tried to use the benchmark tools recommended by Sebastien on his blog here:
http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
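The write test I am running is essentially rados bench against a test pool,
along these lines (the pool name, PG count and duration are placeholders,
not my exact values):

ceph osd pool create bench 128 128
rados bench -p bench 60 write -t 32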
Our configuration:
- Ceph version 0.67.7
- 31 OSDs of 500 GB SSD drives each
ters I need to look out for to check the latency and
IOPS?
Looking forward to your reply, thank you.
Cheers.
On Sat, Mar 8, 2014 at 1:04 AM, Mariusz Gronczewski <
mariusz.gronczew...@artegence.com> wrote:
> On Fri, 7 Mar 2014 17:50:44 +0800, Indra Pramana wrote:
> >
r causes it and that you'll 'just' have to add more nodes.
>
> Cheers,
> Martin
>
>
> On Fri, Mar 7, 2014 at 10:50 AM, Indra Pramana wrote:
>
>> Hi,
>>
>> I have a Ceph cluster, currently with 5 osd servers and around 22 OSDs
>> with SS
nce, thank you.
On Fri, Mar 7, 2014 at 6:04 PM, Gilles Mocellin <
gilles.mocel...@nuagelibre.org> wrote:
> On 07/03/2014 10:50, Indra Pramana wrote:
>
> Hi,
>>
>> I have a Ceph cluster, currently with 5 osd servers and around 22 OSDs
>> with SSD drives and I n
Hi,
I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs with
SSD drives, and I noticed that the I/O speed, especially write access to the
cluster, is degrading over time. When we first started the cluster, we could
get up to 250-300 MB/s write speed to the SSD cluster, but now we can o
> John
>
> On Fri, Jan 24, 2014 at 9:14 AM, Indra Pramana wrote:
> > Dear all,
> >
> > Good day to you.
> >
> > I already have a running Ceph cluster consisting of 3 monitor servers and
> > several OSD servers. I would like to setup another cluster usin
Dear all,
Good day to you.
I already have a running Ceph cluster consisting of 3 monitor servers and
several OSD servers. I would like to set up another cluster using a
different set of OSD servers, but using the same 3 monitor servers. Is that
possible?
Can the 3 monitor servers become the MONs fo
Dear all,
I have 4 servers with 4 OSDs / drives each, so in total I have 16 OSDs. For
some reason, the last server is over-utilised compared to the first 3
servers, causing all the OSDs on the fourth server (osd.12, osd.13, osd.14
and osd.15) to be near full (above 85%).
/dev/sda1 458140932 3934
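In case it helps the discussion, one thing I am thinking of trying (purely
a sketch, not something I have run yet, and the weight value is only an
example) is to temporarily lower the reweight of the fullest OSDs so that
some PGs move off them:

ceph osd reweight 12 0.9
ceph osd tree    # check the reweight column and utilisation afterwards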
Hi Brian and Robert,
Thanks for your replies! Appreciate.
Can I safely say that there will be no downtime to the cluster when I
increase the pg_num and pgp_num values?
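Just so we are talking about the same steps, the commands I intend to run
are along these lines (the pool name and new values are placeholders, and I
understand pg_num should be raised before pgp_num):

ceph osd pool set <pool> pg_num 1024
ceph osd pool set <pool> pgp_num 1024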
Looking forward to your reply, thank you.
Cheers.
On Tue, Dec 3, 2013 at 2:31 PM, Robert van Leeuwen <
robert.vanleeu...@spi
Dear all,
Greetings to all, I am new to this list, so please bear with my newbie
question. :)
I am running a Ceph cluster with 3 servers and 4 drives / OSDs per server,
so in total there are currently 12 OSDs running on the cluster. I set the PG
(Placement Group) count to 600 based on the recommendation of the calcul