Hi there,
I have a Ceph cluster with radosgw and I have been using it in my production
environment for a while. Now I have decided to set up another cluster in
another geographic location to have a disaster recovery plan. I read some docs
like http://docs.ceph.com/docs/jewel/radosgw/federated-config/, but all of
them are abo
Hi,
I want to copy objects from one of my pools to another pool with "rados
cppool", but the speed of this operation is very low. On the other hand, the
PUT/GET speed through radosgw is much higher.
Is there any trick to speed it up?
ceph version 12.2.3
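The only workaround I can think of is copying the objects in parallel myself,
something like this (the pool names are placeholders, and as far as I can tell
a plain get/put like this does not copy xattrs or omap data):

    # copy object data from src-pool to dst-pool, 8 objects at a time
    rados ls -p src-pool | \
      xargs -P 8 -I{} sh -c \
        'rados -p src-pool get "{}" - | rados -p dst-pool put "{}" -'
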
Regards,
nd cable
4- cross exchanging SSD disks
To everyone who helped me with this problem: thank you very much.
Best regards,
Behnam Loghmani
On Thu, Feb 22, 2018 at 3:18 PM, David Turner wrote:
> Did you remove and recreate the OSDs that used the SSD for their WAL/DB?
> Or did you try to
>
> On Wed, Feb 21, 2018 at 5:46 PM, Behnam Loghmani <
> behnam.loghm...@gmail.com> wrote:
>
>> Hi there,
>>
>> I changed the SATA port and cable of the SSD disk, updated Ceph to version
>> 12.2.3, and rebuilt the OSDs,
>> but when recovery starts the OSDs
5e992254400 /var/lib/ceph/osd/ceph-7/block) close
2018-02-21 21:12:18.650473 7f3479fe2d00 1 bdev(0x55e992254000 /var/lib/ceph/osd/ceph-7/block) close
2018-02-21 21:12:18.93 7f3479fe2d00 -1 ** ERROR: osd init failed: (22) Invalid argument
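For reference, by "rebuild OSDs" I mean roughly these ceph-volume steps (the
OSD id and the device paths below are placeholders, not my exact ones):

    systemctl stop ceph-osd@7
    ceph osd purge 7 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdc                # data HDD (placeholder)
    ceph-volume lvm create --bluestore \
        --data /dev/sdc --block.db /dev/sdb1    # SSD partition for the DB (placeholder)
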
On Wed, Feb 21, 2018 at 5:06 PM, Behnam Loghmani
wrote:
ID-controller,
> port, cable) but not the disk? Does the "faulty" disk work OK on another server?
>
> Behnam Loghmani wrote on 21/02/18 16:09:
>
>> Hi there,
>>
>> I replaced the SSD on the problematic node with the new one and
>> reconfigured the OSDs an
at 5:16 PM, Behnam Loghmani
wrote:
> Hi Caspar,
>
> I checked the filesystem and there aren't any errors on it.
> The disk is an SSD, it doesn't have any attribute related to wear level in
> smartctl, and the filesystem is mounted with the default options and no
> discard.
>
> my
to interpret this.
Could you please help me recover this node or find a way to prove that the
SSD disk is the problem?
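The checks I ran were roughly these (the device path is a placeholder):

    smartctl -a /dev/sdb          # no wear/endurance attribute is listed
    smartctl -l error /dev/sdb    # no logged drive errors
    mount | grep /var/lib/ceph    # default mount options, no "discard"
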
Best regards,
Behnam Loghmani
On Mon, Feb 19, 2018 at 1:35 PM, Caspar Smit wrote:
> Hi Behnam,
>
> I would firstly recommend running a filesystem check on the monitor disk
>
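> For example, something like this (the device path and mon id are
> placeholders; either run it read-only or with the monitor stopped and the
> filesystem unmounted):
>
>     systemctl stop ceph-mon@$(hostname -s)   # mon id is often the short hostname
>     fsck -n /dev/sdX1                        # read-only check; for XFS: xfs_repair -n /dev/sdX1
>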
b 17, 2018 at 1:09 AM, Gregory Farnum wrote:
> The disk that the monitor is on...there isn't anything for you to
> configure about a monitor WAL though so I'm not sure how that enters into
> it?
>
> On Fri, Feb 16, 2018 at 12:46 PM Behnam Loghmani <
> behnam.loghm.
Thanks for your reply.
Do you mean the problem is with the disk I use for the WAL and DB?
On Fri, Feb 16, 2018 at 11:33 PM, Gregory Farnum wrote:
>
> On Fri, Feb 16, 2018 at 7:37 AM Behnam Loghmani
> wrote:
>
>> Hi there,
>>
>> I have a Ceph cluster versio
nd removed all
mon data and re-added this mon to the quorum,
and Ceph went back to a healthy status.
But now, after some days, this mon has stopped and I am facing the same
problem again.
My cluster setup is:
4 osd hosts
total 8 osds
3 mons
1 rgw
This cluster was set up with ceph-volume lvm and wal/db sep
ize, but rocksdb grows with the number of objects you
> have. You also have copies of the osdmap on each osd. There's just overhead
> that adds up. The biggest contributor is going to be rocksdb, given how many
> objects you have.
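> A rough way to see that per OSD is to look at the bluefs counters (osd.3 is
> just a placeholder id, run on that OSD's host):
>
>     ceph daemon osd.3 perf dump | grep -E 'db_(total|used)_bytes'
>     ceph osd df    # compare with the overall per-OSD usage
>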
>
> On Mon, Feb 12, 2018, 8:06 AM Behnam Loghmani
> wrot
on of this high disk usage?
Should I change "bluestore_min_alloc_size_hdd"? And if I change it and set it
to a smaller size, does it impact performance?
What is the best practice for storing small files on BlueStore?
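For reference, by changing it I mean something like this in ceph.conf (as far
as I understand, it only applies to OSDs created after the change, so existing
OSDs would have to be rebuilt):

    [osd]
    bluestore_min_alloc_size_hdd = 4096    # default for HDDs in 12.2.x is 64K
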
Best regards,
Behnam Loghmani
You have a typo in your apt source.
It must be
https://download.ceph.com/debian-luminous/
not
https://download.ceph.com/debian-luminos/
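i.e. the sources.list entry should look something like this (the "xenial"
codename here is just an example, use your distro's):

    deb https://download.ceph.com/debian-luminous/ xenial main
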
On Mon, Dec 18, 2017 at 7:58 PM, Andre Goree wrote:
> I'm working on setting up a cluster for testing purposes and I can't seem
> to install Luminous. All nodes are