On Tue, 10 May 2016 11:48:07 +0800 Geocast wrote:
Hello,
> We have 21 hosts for ceph OSD servers, each host has 12 SATA disks (4TB
> each), 64GB memory.
No journal SSDs?
What CPU(s) and network?
> ceph version 10.2.0, Ubuntu 16.04 LTS
> The whole cluster is newly installed.
>
> Can you help check whether the arguments we put in ceph.conf are reasonable or not?
Hello,
I'd like some advice about the setup of a new ceph cluster. Here is the use case:
RadosGW (S3 and maybe Swift for Hadoop/Spark) will be the main usage. Most of
the access will be in read-only mode. Write access will only be done by the
admin to update the datasets.
We might use rbd some ti
Hello Chris,
We don't use SSDs as journals.
Each host has one Intel E5-2620 CPU (6 cores).
The networking (both cluster and data networks) is 10Gbps.
My further questions include:
(1) osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_xattr_use_omap = true
for XFS filesystem, we shoul
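For reference, a minimal sketch of how these options usually sit in the [osd]
section of ceph.conf; the mount-options line is an assumption (a commonly seen
example), not something taken from the original mail:

  [osd]
  osd_mkfs_type = xfs
  osd_mkfs_options_xfs = -f
  # assumption: noatime,inode64 are common example XFS mount options, adjust to taste
  osd_mount_options_xfs = noatime,inode64
  # note: filestore_xattr_use_omap was mainly needed for filesystems with limited
  # xattrs (ext4); with XFS it is usually left at its default, but check your release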
> rbd_cache_size = 268435456
Are you sure that you have 256MB per client to waste on RBD cache?
If so, bully for you, but you might find that depending on your use case a
smaller RBD cache but more VM memory (for pagecache, SLAB, etc) could be
more beneficial.
We have changed this value to 64MB.
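For reference, a minimal sketch of where this would live in ceph.conf, assuming
the cache is configured on the client side; 67108864 is just 64 MiB in bytes:

  [client]
  rbd_cache = true
  rbd_cache_size = 67108864   # 64 MiB instead of the original 256 MiB
  # if you shrink the cache, the related dirty limits (rbd_cache_max_dirty,
  # rbd_cache_target_dirty) may also be worth reviewing so they stay below the cache size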
Hello,
On Tue, 10 May 2016 10:40:08 +0200 Yoann Moulin wrote:
> Hello,
>
> I'd like some advice about the setup of a new ceph cluster. Here is the
> use case:
>
> RadosGW (S3 and maybe Swift for Hadoop/Spark) will be the main usage.
> Most of the access will be in read-only mode. Write access
Hello,
On Tue, 10 May 2016 16:50:17 +0800 Geocast Networks wrote:
> Hello Chris,
>
> We don't use SSDs as journals.
> Each host has one Intel E5-2620 CPU (6 cores).
That should be enough.
> The networking (both cluster and data networks) is 10Gbps.
>
12 HDDs will barely saturate a 10Gb/s link.
Hello,
>> I'd like some advice about the setup of a new ceph cluster. Here is the
>> use case:
>>
>> RadosGW (S3 and maybe Swift for Hadoop/Spark) will be the main usage.
>> Most of the access will be in read-only mode. Write access will only be
>> done by the admin to update the datasets.
>>
>> We
Hello,
we are running ceph 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) on a
4.5.2 kernel. Our cluster currently consists of 5 nodes, with 6 OSDs each.
An issue has also been filed here (also containing logs, etc.):
http://tracker.ceph.com/issues/15813
Last night we observed a single OSD
Hello.
I have a two-node cluster with 4x replicas for all objects distributed between
the two nodes (two copies on each node). I recently converted my OSDs from
BTRFS to XFS (BTRFS was slow) by removing / preparing / activating OSDs on each
node (one at a time) as XFS, allowing the cluster to rebalance
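For readers following along, a hedged sketch of that remove/prepare/activate
cycle, assuming a systemd-based install and a hypothetical osd.3 on /dev/sdd
(not the poster's actual IDs or devices):

  ceph osd out 3                           # start draining data off the OSD
  # wait until all PGs are active+clean again, then remove the old OSD
  systemctl stop ceph-osd@3
  ceph osd crush remove osd.3
  ceph auth del osd.3
  ceph osd rm 3
  # recreate it with XFS and let the cluster rebalance onto it
  ceph-disk prepare --fs-type xfs /dev/sdd
  ceph-disk activate /dev/sdd1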
Hello,
On Tue, 10 May 2016 13:14:35 +0200 Yoann Moulin wrote:
> Hello,
>
> >> I'd like some advice about the setup of a new ceph cluster. Here is the
> >> use case:
> >>
> >> RadosGW (S3 and maybe Swift for Hadoop/Spark) will be the main usage.
> >> Most of the access will be in read-only mode.
> -Original Message-
> From: Eric Eastman [mailto:eric.east...@keepertech.com]
> Sent: 09 May 2016 23:09
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] CephFS + CTDB/Samba - MDS session timeout on
> lockfile
>
> On Mon, May 9, 2016 at 3:28 PM, Nick Fisk wrote:
> > Hi Eric,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Nick Fisk
> Sent: 10 May 2016 13:30
> To: 'Eric Eastman'
> Cc: 'Ceph Users'
> Subject: Re: [ceph-users] CephFS + CTDB/Samba - MDS session timeout on
> lockfile
>
> > -Original Message-
Hi,
On 2016-05-10 05:48, Geocast wrote:
Hi members,
We have 21 hosts for ceph OSD servers, each host has 12 SATA disks (4TB
each), 64GB memory.
ceph version 10.2.0, Ubuntu 16.04 LTS
The whole cluster is newly installed.
Can you help check whether the arguments we put in ceph.conf are reasonable
or not?
To answer my own question, it seems that you can change settings on the fly
using:
ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 5242880'
osd.0: osd_tier_promote_max_bytes_sec = '5242880' (unchangeable)
However, the response seems to imply I can't change this setting. Is there
another way?
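Since injectargs reports the value as "(unchangeable)", the usual fallback (a
sketch, not a confirmed fix for this particular option) is to make it
persistent in ceph.conf on the OSD hosts so it is picked up at the next OSD
restart:

  [osd]
  osd_tier_promote_max_bytes_sec = 5242880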
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Peter Kerdisle
> Sent: 10 May 2016 14:37
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Erasure pool performance expectations
>
> To answer my own question, it seems that you can
Hey,
we currently have a problem with our radosgw.
The quota value of a user does not get updated after an admin manually deletes
a bucket (via radosgw-admin). You can only circumvent this if you synced the
user stats before the removal. So there are now users who cannot upload new
objects al
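If it helps, the stats sync mentioned above can also be run after the fact; a
hedged sketch with a hypothetical user ID (whether it repairs the quota once
the bucket is already gone is exactly the open question here):

  radosgw-admin user stats --uid=exampleuser --sync-stats   # recompute the user's usage stats
  radosgw-admin user stats --uid=exampleuser                # inspect the resulting totals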
Re,
I'd like some advice about the setup of a new ceph cluster. Here is the
use case:
RadosGW (S3 and maybe Swift for Hadoop/Spark) will be the main usage.
Most of the access will be in read-only mode. Write access will only
be done by the admin to update the datasets.
Hello,
I just upgraded my cluster to version 10.1.2 and it worked well for a
while until I saw that the ceph-disk@dev-sdc1.service systemd unit had failed,
and I reran it.
From there the OSD stopped working.
This is Ubuntu 16.04.
I connected to IRC looking for help, where people pointed me to
On Thu, 21 Apr 2016, Dan van der Ster wrote:
> On Thu, Apr 21, 2016 at 1:23 PM, Dan van der Ster wrote:
> > Hi cephalapods,
> >
> > In our couple years of operating a large Ceph cluster, every single
> > inconsistency I can recall was caused by a failed read during
> > deep-scrub. In other words,
Hello,
I forgot to say that the nodes are in preboot status. Something seems
strange to me.
root@red-compute:/var/lib/ceph/osd/ceph-1# ceph daemon osd.1 status
{
"cluster_fsid": "9028f4da-0d77-462b-be9b-dbdf7fa57771",
"osd_fsid": "adf9890a-e680-48e4-82c6-e96f4ed56889",
"whoami": 1,
I must also add that I just found the following in the log. I don't know if
it has anything to do with the problem.
==> ceph-osd.admin.log <==
2016-05-10 18:21:46.060278 7fa8f30cc8c0 0 ceph version 10.1.2
(4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 14135
2016-05-10 18:21:
On Tue, May 10, 2016 at 6:48 AM, Nick Fisk wrote:
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Nick Fisk
>> Sent: 10 May 2016 13:30
>> To: 'Eric Eastman'
>> Cc: 'Ceph Users'
>> Subject: Re: [ceph-users] CephFS + CTDB/Samba - MDS
Thanks Nick. I added it to my ceph.conf. I'm guessing this is an OSD
setting and therefore I should restart my OSDs, is that correct?
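A minimal sketch of doing that one OSD at a time and confirming the value took
effect, assuming a systemd-based Jewel install and osd.0 as an example:

  systemctl restart ceph-osd@0
  # run this on the host where osd.0 lives to confirm the running value
  ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec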
On Tue, May 10, 2016 at 3:48 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>
All,
I am trying to add another OSD to our cluster using ceph-deploy. This is
running Jewel.
I previously set up the other 12 OSDs on a fresh install using the command:
ceph-deploy osd create :/dev/mapper/mpath:/dev/sda
Those are all up and happy. On these systems /dev/sda is an SSD, which I have
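For comparison, a hedged sketch of adding the remaining OSD with the same
data:journal layout; "osdhost13" and "mpathm" are placeholders, not the
poster's real host or device names:

  ceph-deploy disk list osdhost13                      # check how the new multipath device is seen
  ceph-deploy disk zap osdhost13:/dev/mapper/mpathm    # destructive: wipes the device
  ceph-deploy osd create osdhost13:/dev/mapper/mpathm:/dev/sda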
Hello All,
I'm writing to you because I'm trying to find a way to rebuild an OSD disk
without impacting the performance of the cluster.
That's because my applications are very latency sensitive.
1_ I found a way to reuse an OSD ID and not rebalance the cluster every
time I lose a disk
Hello,
As far as I know and can tell, you're doing everything possible
for a least-impact OSD rebuild/replacement.
If your cluster is still strongly, adversely impacted by this gradual and
throttled approach, how about the following:
1. Does scrub or deep_scrub also impact
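As a starting point for those questions, a hedged sketch of the knobs commonly
used to soften rebuild and scrub impact (the numbers are illustrative, not
tuned recommendations):

  ceph osd set noscrub                      # pause normal scrubbing during the rebuild
  ceph osd set nodeep-scrub                 # pause deep scrubbing as well
  ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
  # ... rebuild/replace the OSD ...
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub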
On Tue, 10 May 2016 17:51:24 +0200 Yoann Moulin wrote:
[snip]
> Journal or cache Storage : 2 x SSD 400GB Intel S3300 DC (no Raid)
> >>>
> >>> These SSDs do not exist according to the Intel site and the only
> >>> references I can find for them are on "no longer available" European
> >>> sites
Hi,
If we have 12 SATA disks (4TB each) as the storage pool,
how many SSD disks should we have for cache tier usage?
Thanks.
2016-05-10 16:40 GMT+08:00 Yoann Moulin :
> Hello,
>
> I'd like some advice about the setup of a new ceph cluster. Here is the use
> case:
>
> RadosGW (S3 and maybe Swift
Hello,
On Wed, 11 May 2016 11:24:29 +0800 Geocast Networks wrote:
> Hi,
>
> If we have 12 SATA disks (4TB each) as the storage pool,
> how many SSD disks should we have for cache tier usage?
>
That question makes no sense.
Firstly, you mentioned earlier that you have 21 of those hosts.
Which w
Hello,
I want to resize an image using the 'rbd resize' option, but without
data loss.
For example: I have an image of 100 GB (thin provisioned), and this
image has only 10 GB of data. Here I want to resize this image to
11 GB, so that the 10 GB of data is safe after the resize.
Can I do the above resize?
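A hedged sketch, with "rbd/myimage" as a placeholder name and --size given in
MB: growing an image is always safe, but shrinking it below the space the data
occupies will destroy data, so the filesystem inside has to be shrunk first.

  rbd resize --size 112640 rbd/myimage                 # grow to 110 GB: safe, existing data untouched
  rbd resize --size 11264 --allow-shrink rbd/myimage   # shrink to 11 GB: only after shrinking the FS inside

Note that with a thin-provisioned image, whether 10 GB of data survives a
shrink to 11 GB depends on where that data sits in the image, not just on how
much of it there is.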
Hi
I am using Infernalis 9.2.1. While creating a bucket, if the bucket already
exists, it still returns 0 as the exit status. Is this intentional for some
reason, or is it a bug?
root@node1:~# ceph osd crush add-bucket rack1 rack
bucket 'rack1' already exists
root@node1:~# echo $?
0
root@node1:~#
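If the goal is to detect the "already exists" case in a script, one hedged
workaround is to check the CRUSH tree first (a sketch, reusing the rack1/rack
names from the transcript above):

  # only create the bucket if it is not already in the CRUSH tree
  if ceph osd tree | grep -qw rack1; then
      echo "rack1 already exists"
  else
      ceph osd crush add-bucket rack1 rack
  fi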
+ceph users
Hi,
Here is the first cut result. I can only manage a 128TB box for now.
The columns of the results table are:
- Ceph code base
- Capacity
- Each drive capacity
- Compute-nodes
- Total copy
- Total data-set
- Failure domain
- Fault-injected
- Percentage of degraded PGs
- Full recovery time
- Last 1% of degraded PG recovery time
Hamm