To: Muhammad Junaid
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] help needed
The official Ceph documentation recommendation for a DB partition for a 4 TB
BlueStore OSD would be 160 GB each.
The Samsung 850 Pro is not an enterprise-class SSD; a quick search of the ML
will show why, and which SSDs and HBAs (eg. SAS2308) are better suited.
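For reference, that 160 GB figure follows the rough "DB should be about 4% of
the data device" guideline from the BlueStore docs. A sketch of the arithmetic
and of how a fixed DB size could be set (the option name and value below are
illustrative, check your release):

    4 TB x 0.04 ≈ 160 GB of DB space per OSD

    # ceph.conf sketch
    [osd]
    bluestore_block_db_size = 171798691840   # 160 GiB, in bytes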
Do not use Samsung 850 PRO for journal
Just use LSI logic HBA (eg. SAS2308)
-Original Message-
From: Muhammad Junaid [mailto:junaid.fsd...@gmail.com]
Sent: donderdag 6 september 2018 13:18
To: ceph-users@lists.ceph.com
Subject: [ceph-users] help needed
Hi there
Hope everyone is fine. I need urgent help with a Ceph cluster design.
We are planning a 3-node OSD cluster to begin with. Details are as under:
Servers: 3 x Dell R720xd
OS Drives: 2 x 2.5" SSD
OSD Drives: 10 x 3.5" SAS 7200 rpm, 3 or 4 TB
Journal Drives: 2 x Samsung 850 PRO SSDs, 256 GB each
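A quick back-of-the-envelope check of that layout against the sizing advice
quoted above, assuming the two SSDs are dedicated entirely to journals/DB:

    10 OSDs / 2 SSDs         = 5 journal/DB partitions per SSD
    256 GB / 5 partitions    ≈ 51 GB of SSD per OSD
    vs. the ~160 GB per 4 TB OSD guideline mentioned in the reply above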
Now here's the thing:
Some weeks ago Proxmox upgraded from kernel 4.13 to 4.15. Since then I'm
getting slow requests that
cause blocked IO inside the VMs that are running on the cluster (but not
necessarily on the host
with the OSD causing the slow request).
If I boot back into 4.13 then Ceph
Dear community,
TL;DR: The cluster runs fine with kernel 4.13 but produces slow_requests with
kernel 4.15. How do I debug this?
I'm running a combined Ceph / KVM cluster consisting of 6 hosts of 2 different
kinds (details at the end).
The main difference between those hosts is CPU generation (Westmere /
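As a first pass at the "how to debug" question, a few stock commands that are
commonly used to see which OSDs and which operations are stuck (a sketch, not
a full procedure; <id> is whatever OSD the health output points at):

    ceph health detail                        # which OSDs report slow/blocked requests
    ceph daemon osd.<id> dump_ops_in_flight   # on that OSD's host: ops currently stuck
    ceph daemon osd.<id> dump_historic_ops    # recent slow ops with per-stage timings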
Hello all,
would someone please help with recovering from a recent failure of all cache
tier pool OSDs?
My Ceph cluster has a usual replica-2 pool with a writeback cache tier of two
500 GB SSD OSDs over it (also replica 2).
Both cache OSDs were created with the standard ceph-deploy tool, and have 2
Yesterday we had an outage on our ceph cluster. One OSD was looping on << [call
rgw.bucket_complete_op] snapc 0=[]
ack+ondisk+write+known_if_redirected e359833) currently waiting for
degraded object >> for hours blocking all the requests to this OSD and
then ...
We had to delete the degraded object
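For anyone hitting something similar, a rough sketch of how such an object can
be inspected and, as a last resort, removed (the pool, PG and object names
below are placeholders):

    ceph health detail            # lists the degraded/blocked PGs
    ceph pg <pgid> query          # shows recovery state and which OSDs are involved
    rados -p <pool> rm <object>   # last resort, as described above: drop the object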
The immutable features are features that can be set only at image
creation time. These features are mutable (can be dynamically
enabled/disabled after image creation):
exclusive-lock, object-map, fast-diff, journaling
Also, deep-flatten feature can be dynamically disabled.
So all other feature
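A minimal example of toggling those mutable features on an existing image
(pool/image names are placeholders; object-map depends on exclusive-lock, so
enable that first):

    rbd feature enable rbd/myimage exclusive-lock
    rbd feature enable rbd/myimage object-map fast-diff
    rbd feature disable rbd/myimage journaling
    rbd info rbd/myimage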
It all depends on how you are creating your RBDs. Whatever you're using is
likely overriding the defaults with a custom line in its code.
What you linked did not say that you cannot turn on the features I
mentioned. There are indeed some features that cannot be enabled if they
have ever been
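For example, a provisioning script or library can pass an explicit feature set
at creation time, which overrides whatever the cluster default is (names and
sizes here are only illustrative):

    rbd create rbd/myimage --size 10240 --image-format 2 \
        --image-feature layering --image-feature exclusive-lock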
What seems strange is that the features are *all disabled* when I create some
images, while Ceph should use at least the Jewel default settings.
Do I need to put something in ceph.conf in order to use the default settings?
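If the goal is simply to get the Jewel defaults back, the usual knob is the
client-side "rbd default features" option; a sketch (61 is the Jewel default
mask: layering + exclusive-lock + object-map + fast-diff + deep-flatten):

    [client]
    rbd default features = 61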
On 23/06/2017 23:43, Massimiliano Cuttini wrote:
I guess you updated those features before the commit that fixes this:
https://github.com/ceph/ceph/blob/master/src/include/rbd/features.h
As stated:
/// features that make an image inaccessible for read or write by
/// clients that don't understand them
#define RBD_FEATURES_INCOMPATIBLE
I upgraded to Jewel from Hammer and was able to enable those features on
all of my rbds that were format 2, which yours is. Just test it on some
non-customer data and see how it goes.
On Fri, Jun 23, 2017, 4:33 PM Massimiliano Cuttini wrote:
Ok,
At the moment my client uses only rbd-nbd; can I use all these features, or is
this something unavoidable?
I guess it's ok.
Reading around, it seems that a lost feature cannot be re-enabled due to
backward compatibility with old clients.
... I guess I'll need to export and import into a new image fully f
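On the rbd-nbd point: since rbd-nbd goes through librbd rather than the kernel
RBD driver, it should not be subject to the kernel feature restrictions
discussed in the next message; a quick sketch with placeholder names:

    rbd-nbd map rbd/myimage       # returns a /dev/nbdX device
    rbd-nbd list-mapped
    rbd-nbd unmap /dev/nbd0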
All of the features you are talking about likely require the exclusive-lock
which requires the 4.9 linux kernel. You cannot map any RBDs that have
these features enabled with any kernel older than that.
The features you can enable are layering, exclusive-lock, object-map, and
fast-diff. You cann
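Conversely, if an image has to be mapped from an older kernel, the dynamic
features can be switched off first (a sketch with placeholder names; note that
deep-flatten can only ever be disabled, not re-enabled):

    rbd feature disable rbd/myimage fast-diff
    rbd feature disable rbd/myimage object-map
    rbd feature disable rbd/myimage exclusive-lock
    rbd map rbd/myimage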
Hi everybody,
I just realized that all my images are completely without features:
rbd info VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4
rbd image 'VHD-4c7ebb38-b081-48da-9b57-aac14bdf88c4':
        size 102400 MB in 51200 objects
        order 21 (2048 kB objects)
        block_name_
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Vincent Godin
Sent: 16 November 2016 18:02
To: ceph-users
Subject: [ceph-users] Help needed ! cluster unstable after upgrade from Hammer
to Jewel
Hello,
We now have a full cluster (Mon, OSD & Clients) on Jewel 10.2.2 (the initial
install was Hammer 0.94.5), but we still have some big problems in our
production environment:
- some Ceph filesystems are not mounted at startup and we have to mount
them with the "/bin/sh -c 'flock /var/lock/ceph-disk
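Not a fix for the root cause, but as a workaround for the not-mounted-at-boot
symptom the OSDs can usually be re-activated by hand afterwards; a sketch
assuming ceph-disk managed OSDs on a Jewel/systemd box:

    ceph-disk activate-all            # scan and activate all prepared OSD partitions
    systemctl status ceph-osd@<id>    # then check the individual OSD units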
Subject: RE: [ceph-users] Help needed
Did you set permissions to "sudo chmod +r /etc/ceph/ceph.client.admin.keyring"?
Thx
Alan
From: ceph-users on behalf of SUNDAY A. OLUTAYO
Sent: Tuesday, February 17, 2015 4:59 PM
ceph-users@lists.ceph.com;
maintain...@lists.ceph.com
Subject: Re: [ceph-users] Help needed
I did that but the problem still persists.
Thanks,
Sunday Olutayo
From: "Jacob Weeks (RIS-BCT)"
To: "SUNDAY A. OLUTAYO" , ceph-users@lists.ceph.com,
ceph-
Subject: RE: [ceph-users] Help needed
There should be a *.client.admin.keyring file in the directory you were in
while you ran ceph-deploy.
Try copying that file to /etc/ceph/
Thanks,
Jacob
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of SUNDAY
A. OL
I am setting up a Ceph cluster on Ubuntu 14.04.1 LTS. All went well without
error, but "ceph status" after "ceph-deploy mon create-initial" indicates
otherwise. This is the error message:
monclient[hunting]: Error: missing keyring, cannot use cephx for authentication
librados: client.admin in
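Pulling the suggestions in this thread together, one possible sequence,
assuming the keyrings were generated in the directory ceph-deploy was run from
(hostnames are placeholders):

    # from the ceph-deploy working directory
    ceph-deploy admin node1 node2 node3    # push ceph.conf + admin keyring to /etc/ceph
    # or copy the file by hand, then make it readable:
    sudo cp ceph.client.admin.keyring /etc/ceph/
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    ceph status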
On Wed, 28 Aug 2013, Gandalf Corvotempesta wrote:
> 2013/6/20 Matthew Anderson :
> > Hi All,
> >
> > I've had a few conversations on IRC about getting RDMA support into Ceph and
> > thought I would give it a quick attempt to hopefully spur some interest.
> > What I would like to accomplish is an RS
Hi Matthew,
I am not quite sure about the POLLRDHUP.
On the server side (ceph-mon), tcp_read_wait does see the POLLHUP - which
should be the indicator that the other side is shutting down.
I have also taken a brief look at the client side (ceph mon stat).
It initiates a shutdown - but never f
Hi Andreas,
I think we're both working on the same thing; I've just changed the function
calls over to rsockets in the source instead of using the pre-load library. It
explains why we're having the exact same problem!
From what I've been able to tell, the entire problem revolves around
rsockets n
So I've had a chance to re-visit this since Bécholey Alexandre was kind enough
to let me know how to compile Ceph with the RDMACM library (thank you again!).
At this stage it compiles and runs, but there appears to be a problem with
calling rshutdown in Pipe as it seems to just wait forever for the
Hi All,
I've had a few conversations on IRC about getting RDMA support into Ceph
and thought I would give it a quick attempt to hopefully spur some
interest. What I would like to accomplish is an RSockets only
implementation so I'm able to use Ceph, RBD and QEMU at full speed over an
Infiniband fa