Hi,
let's assume we have size=3 and min_size=2, we lost some OSDs, and now have
some placement groups with only one copy left.
Is there a setting to tell Ceph to recover those PGs first, so that they
reach min_size again and the cluster comes back online faster?
Regards,
Dennis
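A minimal sketch of one way to do this by hand on Luminous or later (PG IDs
are illustrative):
# ceph pg dump_stuck undersized
# ceph pg force-recovery 1.23 1.24
The first command lists PGs stuck in the undersized state; force-recovery
bumps the listed PGs to the front of the recovery queue.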
Hi all,
On my test Luminous 12.2.4 cluster, with this set (initially so I could use
upmap in the mgr balancer module):
# ceph osd set-require-min-compat-client luminous
# ceph osd dump | grep client
require_min_compat_client luminous
min_compat_client jewel
Not quite sure why min_compat_cli
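A quick, hedged aside: since Luminous you can see which release each
connected client actually reports with:
# ceph features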
Hi Guys,
Striped writes seem to be slightly faster than non-striped ones, given
that my storage is configured with 4 OSS and 48 OSDs; each OSD is an 8+2
RAID 6 array of 24 TB capacity.
But the performance is still only around 100 MB/s, while on a single Haswell
core I'm able to get 1 GB/s with dd buffered I/O.
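For reference, a minimal sketch of that kind of dd baseline, assuming a
mount point at /mnt/test (path illustrative):
# dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096
# for i in 1 2 3 4; do dd if=/dev/zero of=/mnt/test/ddtest.$i bs=1M count=1024 & done; wait
A single buffered stream measures one client core; the second line is a rough
check of whether parallel writers scale better than one.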
My qu
On Wed, May 30, 2018 at 5:17 PM, Oliver Freyermuth
wrote:
> Am 30.05.2018 um 10:37 schrieb Yan, Zheng:
>> On Wed, May 30, 2018 at 3:04 PM, Oliver Freyermuth
>> wrote:
>>> Hi,
>>>
>>> in our case, there's only a single active MDS
>>> (+1 standby-replay + 1 standby).
>>> We also get the health warn
Given what you've shown here, it's probably one of the odder cases CephFS
is subject to, rather than an actual "there's no disk space" error. How far
is the script actually getting? Is it possible your client doesn't have
permission to write to the RADOS pool and isn't finding that out until too
la
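One quick, hedged check for the permission theory (client name illustrative):
# ceph auth get client.cephfs-user
Look for an osd cap that actually allows writes on the data pool the
filesystem uses.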
Short version: https://pad.ceph.com/p/cfp-coordination is a space for
you to share talks you've submitted to conferences, if you want to let
other Ceph community members know what to look for and avoid
duplicating topics.
Longer version: a teammate and I almost duplicated a talk topic (for
the upc
On 30/05/18 20:35, Jack wrote:
Why would you deploy a Jewel cluster, which is almost 3 major versions
behind?
Bluestore is also the right answer:
it works well, has many advantages, and is simply the future of Ceph
Indeed, and normally I wouldn't even ask, but as I say there's been some
comment
On 05/30/2018 09:20 PM, Simon Ironside wrote:
> * What's the recommendation for what to deploy?
>
> I have a feeling the answer is going to be Luminous (as that's current
> LTS) and Bluestore (since that's the default in Luminous) but several
> recent threads and comments on this list make me doub
Hi again,
I've been happily using both Hammer and Jewel with SSD journals and
spinning disk Filestore OSDs for several years now and, as per my other
email, I'm about to purchase hardware to build a new (separate)
production cluster. I intend to use the same mixture of SSD for journals
(or DB
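For what it's worth, a minimal sketch of creating a Bluestore OSD with its DB
on a separate SSD device (device names illustrative):
# ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1
The block.db device plays roughly the role the SSD journal partition did for
Filestore.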
Hello Ceph Users,
I would like to know how folks are using EC profiles in production: what
kinds of EC configurations are you using (10+4, 5+3?), and with which other
configuration options? If you can reply to this thread or
update the shared Excel sheet below, that will help design better
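For concreteness, a minimal sketch of defining and using a 10+4 profile
(names and PG count illustrative; note k+m=14 needs at least 14 hosts with a
host failure domain):
# ceph osd erasure-code-profile set ec-10-4 k=10 m=4 crush-failure-domain=host
# ceph osd pool create ecpool 128 128 erasure ec-10-4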
Hi Everyone,
I'm about to purchase hardware for a new production cluster. I was going
to use 480GB Intel DC S4600 SSDs as either Journal devices for Filestore
and/or DB/WAL for Bluestore spinning disk OSDs until I saw David
Herselman's "Many concurrent drive failures" thread which has given me
Thanks Kefu.
Best,
Jialin
NERSC
On Tue, May 29, 2018 at 11:52 PM, kefu chai wrote:
> On Wed, May 30, 2018 at 11:53 AM, Jialin Liu wrote:
> > Hi Brad,
> >
> > You are correct. The librados.so has the symbol, but what I copied was the
> > wrong file.
> > Now I can test the striper API with the pre
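As a hedged aside, the rados CLI can also exercise the striper without any
code, via its --striper flag (pool and object names illustrative):
# rados --striper -p testpool put bigobject ./bigfile
# rados --striper -p testpool stat bigobject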
Hi All,
I'm having issues trying to get a 2nd Rados GW realm/zone up and
running. The configuration seemed to go well, but I'm unable to start the
gateway.
2018-05-29 21:21:27.119192 7fd26cfdd9c0 0 ERROR: failed to decode obj from
.rgw.root:zone_info.fe2e0680-d7e8-415f-bf91-501dda96d075
2018-0
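A few commands that are often useful for inspecting the realm/zone objects in
this situation (zone name illustrative):
# radosgw-admin zone get --rgw-zone=secondary
# radosgw-admin period get
# radosgw-admin period update --commit
The last one re-commits the period, which is needed after zone/zonegroup
changes before the gateway sees a consistent map.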
Hi Josef,
The main thing to make sure is that you have set up the host/vm
running nfs-ganesha exactly as if it were going to run radosgw. For
example, you need an appropriate keyring and ceph config. If radosgw
starts and services requests, nfs-ganesha should too.
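A hedged sketch of what that keyring setup might look like (user name and
caps illustrative):
# ceph auth get-or-create client.rgw.ganesha mon 'allow rw' osd 'allow rwx' -o /etc/ceph/ceph.client.rgw.ganesha.keyring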
With the debug settings you've
On 05/30/2018 07:26 PM, Alfredo Deza wrote:
If you don't want LVM, you can continue to use ceph-disk.
How can I do this if ceph-disk will be removed from master?
I really don't understand why we need LVs for new OSDs.
k
I am new to Ceph and have built a small Ceph instance on 3 servers. I realize
the configuration is probably not ideal but I’d like to understand an error I’m
getting.
Ceph hosts are cm1, cm2, cm3. CephFS is mounted with ceph-fuse on server c1.
I am attempting to perform a simple cp -rp from
On Wed, May 30, 2018 at 8:13 AM, Konstantin Shalygin wrote:
> On 05/30/2018 07:08 PM, Alfredo Deza wrote:
>>
>> ceph-volume accepts a bare block device as input, but it will create
>> an LV behind the scenes
>
>
> I think this is a regression. What if I don't need an LV?
ceph-volume has always used LV
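A hedged illustration of that behaviour (device name illustrative):
# ceph-volume lvm create --bluestore --data /dev/sdb
# ceph-volume lvm list
The first call wraps /dev/sdb in a VG/LV automatically; the second shows the
LVs ceph-volume created and tagged.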
Hi, thanks for the quick reply. As for 1.: I mentioned that I'm running
Ubuntu 16.04, kernel 4.4.0-121; it seems the platform
package (nfs-ganesha-ceph) does not include the RGW FSAL.
2. Nfsd was running; after rebooting I managed to get ganesha to bind.
rpcbind is running, though I still c
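A few hedged checks for the binding side (Ubuntu 16.04 service name assumed):
# ss -tlnp | grep 2049
# rpcinfo -p | grep nfs
# systemctl stop nfs-kernel-server
The first two show who currently owns the NFS port and what rpcbind has
registered; the last stops the kernel NFS server so ganesha can bind.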
On Tue, May 29, 2018 at 11:44 PM, Zhang Qiang wrote:
> Hi all,
>
> I'm new to Luminous. When I use ceph-volume create to add a new
> filestore OSD, it tells me that the journal's header magic is not
> good. But the journal device is a new LV. How do I make it write the new
> OSD's header to the
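One hedged workaround, assuming the journal LV carries a stale header from a
previous use (LV path illustrative): zap it before creating the OSD.
# ceph-volume lvm zap /dev/vg-journals/journal1
Note this destroys whatever is on that LV.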
On 05/30/2018 07:08 PM, Alfredo Deza wrote:
ceph-volume accepts a bare block device as input, but it will create
an LV behind the scenes
I think this is a regression. What if I don't need an LV?
k
On Fri, May 25, 2018 at 3:22 AM, Konstantin Shalygin wrote:
> ceph-disk should be considered as "frozen" and deprecated for Mimic,
> in favor of ceph-volume.
>
>
> ceph-volume will continue to support bare block devices, i.e. without
> LVM-ish stuff?
Not sure I follow, ceph-volume has two ways of
Hi Josef,
1. You do need the Ganesha FSAL driver to be present; I don't know
your platform and OS version, so I couldn't look up what packages you
might need to install (or whether the platform package builds the
RGW FSAL)
2. The most common reason for ganesha.nfsd to fail to bind to a port
is
I think it is not working; I'm having the same problem. I'm on the
ganesha mailing list and they have given me a patch for detailed logging
on this issue, so they can determine what is going on. (Didn't have time
to do this yet, though.)
Hi everyone, I'm currently trying to set up an NFS-Ganesha instance that
mounts RGW storage, but so far I haven't been successful. I'm running
Ceph Luminous 12.2.4 and Ubuntu 16.04. I tried compiling ganesha from
source (latest version), but I didn't manage to get the mount running
with that,
Am 30.05.2018 um 10:37 schrieb Yan, Zheng:
> On Wed, May 30, 2018 at 3:04 PM, Oliver Freyermuth
> wrote:
>> Hi,
>>
>> in our case, there's only a single active MDS
>> (+1 standby-replay + 1 standby).
>> We also get the health warning in case it happens.
>>
>
> Were there "client.xxx isn't respond
On Wed, May 30, 2018 at 3:04 PM, Oliver Freyermuth
wrote:
> Hi,
>
> in our case, there's only a single active MDS
> (+1 standby-replay + 1 standby).
> We also get the health warning in case it happens.
>
Were there "client.xxx isn't responding to mclientcaps(revoke)"
warnings in cluster log. ple
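A hedged way to look for those warnings (default log path assumed):
# ceph health detail
# grep mclientcaps /var/log/ceph/ceph.log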
Hi,
in our case, there's only a single active MDS
(+1 standby-replay + 1 standby).
We also get the health warning in case it happens.
Cheers,
Oliver
Am 30.05.2018 um 03:25 schrieb Yan, Zheng:
> It could be http://tracker.ceph.com/issues/24172
>
>
> On Wed, May 30, 2018 at 9:01 AM, Linh Vu wr