Hi,
I'm experiencing the same issue as outlined in this post:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013330.html
I have also deployed this jewel cluster using ceph-deploy.
This is the message I see at boot (happens for all drives, on all OSD nodes):
[ 92.938882] X
On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote:
> Hi,
>
> I'm experiencing the same issue as outlined in this post:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013330.html
>
> I have also deployed this jewel cluster using ceph-deploy.
>
> This is the message I see
Hello Pankaj
- Do you use the default port (7480)?
- Do you use cephx?
- I assume you use the default Civetweb (embedded already).
If you wish to use another port, you should modify your conf file and
add the line below (a fuller sketch follows below):
rgw_frontends = "civetweb port=80"  (this is for port 80)
- Now
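A minimal ceph.conf sketch of the above (the section name client.rgw.gateway
is an assumption; use your own RGW instance name):

  [client.rgw.gateway]
  rgw_frontends = "civetweb port=80"

Restart the radosgw service afterwards so the new port takes effect.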
On Wed, Mar 22, 2017 at 8:24 AM, Marcus Furlong wrote:
> Hi,
>
> I'm experiencing the same issue as outlined in this post:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013330.html
>
> I have also deployed this jewel cluster using ceph-deploy.
>
> This is the message I see
Does iostat (eg. iostat -xmy 1 /dev/sd[a-z]) show high util% or await
during these problems?
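For reference, a sample invocation and the columns worth watching (column
names differ slightly between sysstat versions):

  iostat -xmy 1 /dev/sd[a-z]
  # %util close to 100% means the device is saturated
  # await (or r_await/w_await) is per-request latency in ms; it climbs
  # sharply on HDDs under small sync writes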
Ceph filestore requires lots of metadata writing (directory splitting
for example), xattrs, leveldb, etc. which are small sync writes that
HDDs are bad at (100-300 iops), and SSDs are good at (cheapo woul
> [429280.254400] attempt to access beyond end of device
> [429280.254412] sdi1: rw=0, want=19134412768, limit=19134412767
We are seeing the same for our OSDs which have the journal as a
separate partition, always on the same disk, and only for OSDs which we
added after our cluster was upgraded to jewel.
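If it helps, a quick way to compare the partition size the kernel sees with
the access in the error (the device names here are assumptions; substitute
the affected disk):

  blockdev --getsz /dev/sdi1       # partition size in 512-byte sectors; compare with the "limit" value above
  cat /sys/block/sdi/sdi1/size     # the same number from sysfs
  sgdisk -v /dev/sdi               # verify the GPT for end-of-disk problems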
On 22 March 2017 at 21:23, Martin Palma wrote:
>> [429280.254400] attempt to access beyond end of device
>> [429280.254412] sdi1: rw=0, want=19134412768, limit=19134412767
>
> We are seeing the same for our OSDs which have the journal as a
> separate partition always on the same disk and only for
Hi Jonathan, Anthony and Steve,
Thanks very much for your valuable advice and suggestions!
MJ
On 03/21/2017 08:53 PM, Jonathan Proulx wrote:
If it took 7hr for one drive you've probably already done this (or the
defaults are set for low-impact recovery), but before doing anything you
want to be sure you
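For reference, the low-impact recovery settings being alluded to can be
tightened at runtime roughly like this (the values are illustrative, not a
recommendation):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

The same keys can be put under [osd] in ceph.conf to persist across restarts.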
I definitely saw it on a Hammer cluster, though I decided to check my IRC logs
for more context and found that in my specific cases it was due to PGs going
incomplete. `ceph health detail` offered the following, for instance:
pg 8.31f is remapped+incomplete, acting [39] (reducing pool one min_si
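For anyone hitting the same state, a couple of commands that help when
digging into an incomplete PG (using the PG id from the output above):

  ceph health detail
  ceph pg 8.31f query        # shows peering state and which OSDs the PG is waiting for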
Hello, I have a small Ceph cluster installed and I followed the manual
installation instructions since I do not have internet access.
I have configured the system with two network interfaces, one for the
client network and one for the cluster network.
The problem is that when the system begins
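For reference, the two networks are normally declared in ceph.conf roughly
like this (the subnets below are placeholders; use your actual ranges):

  [global]
  public network  = 192.168.1.0/24    # client-facing network
  cluster network = 192.168.2.0/24    # OSD replication/heartbeat network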
On Tue, Mar 21, 2017 at 5:31 PM, Deepak Naidu wrote:
> Greetings,
>
>
>
> I have the two cephFS “volumes/filesystems” below created on my ceph cluster. Yes,
> I used the “enable_multiple” flag to enable the multiple-filesystem feature. My
> question
>
>
>
> 1) How do I mention the fs name ie dataX or d
On Tue, Mar 21, 2017 at 1:54 PM, Kjetil Jørgensen wrote:
>> c. Reads can continue from the single online OSD even in pgs that
>> happened to have two of 3 osds offline.
>>
>
> Hypothetically (This is partially informed guessing on my part):
> If the survivor happens to be the acting primary and i
Hi,
radosgw-admin user create sometimes seems to misbehave when trying to
create similarly-named accounts with the same email address:
radosgw-admin -n client.rgw.sto-1-2 user create --uid=XXXDELETEME
--display-name=carthago --email=h...@sanger.ac.uk
{
"user_id": "XXXDELETEME",
[...]
radosgw-
Any thoughts?
On Tue, Mar 14, 2017 at 10:22 PM, Alejandro Comisario wrote:
> Greg, thanks for the reply.
> True that I can't provide enough information to know what happened since
> the pool is gone.
>
> But based on your experience, can I please take some of your time, and
> give me the TOP 5 f
Hey cephers,
Just wanted to share that the new interactive metrics dashboard is now
available for tire-kicking.
https://metrics.ceph.com
There are still a few data pipeline issues and other misc cleanup that
probably needs to happen. We have removed some of the repo tracking to
be more streamlin
Hi all,
Is it possible to create a pool where the minimum number of replicas for
the write operation to be confirmed is 2 but the minimum number of replicas
to allow the object to be read is 1?
This would be useful when a pool consists of immutable objects, so we'd
have:
* size 3 (we always keep
Hi John,
I tried the below option for ceph-fuse & the kernel mount. Below is what I
see (errors included).
1) When trying with ceph-fuse, the mount command succeeds but I see "parse
error setting 'client_mds_namespace' to 'dataX'". Not sure if this is a normal
message or some error.
2) When tryin
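For reference, a sketch of how the fs name is usually passed with jewel-era
clients (mount points and host names below are placeholders):

  ceph-fuse --client_mds_namespace=dataX /mnt/dataX
  mount -t ceph mon1:6789:/ /mnt/dataX -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=dataX

The kernel mds_namespace= option needs a reasonably recent kernel (roughly
4.8 or later).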
2017-03-22 5:30 GMT+01:00 Brad Hubbard :
> On Wed, Mar 22, 2017 at 10:55 AM, Deepak Naidu wrote:
> > Do we know which version of ceph client does this bug has a fix. Bug:
> > http://tracker.ceph.com/issues/17191
> >
> >
> >
> > I have ceph-common-10.2.6-0 ( on CentOS 7.3.1611) & ceph-fs-common-
>
For the most part - I'm assuming min_size=2, size=3. In the min_size=3
and size=3 case this changes.
size is how many replicas of an object to maintain; min_size is how many
writes need to succeed before the primary can ack the operation to the
client.
A larger min_size most likely means higher latency for writes.
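For reference, both are per-pool settings (a sketch; "mypool" is a
placeholder pool name):

  ceph osd pool set mypool size 3
  ceph osd pool set mypool min_size 2
  ceph osd pool get mypool min_size    # confirm the current value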
Hi,
I should clarify. When you worry about concurrent OSD failures, it's more
likely that the source of that is from e.g. network/rack/power - you'd
organize your OSDs spread across those failure domains, and tell CRUSH
to put each replica in a separate failure domain. I.e. you have 3 or
mo
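A hedged sketch of the CRUSH rule being described, spreading each replica
across racks (the rule name and the "default" root are assumptions; adjust
to your map):

  ceph osd crush rule create-simple replicated_across_racks default rack

which, in decompiled crushmap terms, corresponds to:

  step take default
  step chooseleaf firstn 0 type rack
  step emit

The pool is then pointed at the rule with "ceph osd pool set <pool>
crush_ruleset <id>" (the jewel-era name for that setting).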