Re: [ceph-users] commend "ceph dashboard create-self-signed-cert " ERR

2018-07-03 Thread John Spray
On Tue, Jul 3, 2018 at 6:25 AM jaywaychou wrote: Hi, Cephers: I am using Mimic Ceph for the Dashboard, following http://docs.ceph.com/docs/mimic/mgr/dashboard/. When installing a self-signed certificate with the built-in command, it fails with an error like the one below: [root@localhost ~]
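For context, the dashboard setup the thread refers to boils down to a short command sequence (a sketch based on the Mimic docs linked above; the credentials are placeholders, not anything from the thread):

    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    ceph dashboard set-login-credentials admin secretpassword   # placeholder user/password
    ceph mgr services    # prints the dashboard URL once the module is serving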

Re: [ceph-users] commend "ceph dashboard create-self-signed-cert " ERR

2018-07-03 Thread John Spray
On Tue, Jul 3, 2018 at 9:18 AM John Spray wrote: On Tue, Jul 3, 2018 at 6:25 AM jaywaychou wrote: Hi, Cephers: I am using Mimic Ceph for the Dashboard, following http://docs.ceph.com/docs/mimic/mgr/dashboard/. When installing a self-signed certificate with the built-in

[ceph-users] mgr modules not enabled in conf

2018-07-03 Thread Gökhan Kocak
Hello everyone, I tried to enable the Prometheus module (and later, with the same result, the Dashboard module) as outlined in the docs here: http://docs.ceph.com/docs/mimic/mgr/dashboard/#enabling with "[mon] mgr initial modules = prometheus". However, when I restart the mgr service the module

Re: [ceph-users] mgr modules not enabled in conf

2018-07-03 Thread John Spray
On Tue, Jul 3, 2018 at 9:37 AM Gökhan Kocak wrote: Hello everyone, I tried to enable the Prometheus module (and later, with the same result, the Dashboard module) as outlined in the docs here: http://docs.ceph.com/docs/mimic/mgr/dashboard/#enabling with "[mon] mgr initial modules"
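The reply is truncated here; the usual explanation, and presumably the one given, is that "mgr initial modules" is only read when a cluster is first bootstrapped, so on a running cluster modules are enabled at runtime instead. A minimal sketch:

    ceph mgr module enable prometheus
    ceph mgr module enable dashboard
    ceph mgr module ls    # shows enabled and available modules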

Re: [ceph-users] Adding SSD-backed DB & WAL to existing HDD OSD

2018-07-03 Thread Caspar Smit
2018-07-03 4:27 GMT+02:00 Brad Fitzpatrick: Hello, I was wondering if it's possible, or how best, to add a DB & WAL to an OSD retroactively? (Still using Luminous.) I hurriedly created some HDD-backed BlueStore OSDs without their WAL & DB on SSDs, and then loaded them up with data.
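For reference, the DB/WAL placement being asked about is normally chosen at OSD creation time; a rough ceph-volume sketch (device names are examples only):

    # BlueStore OSD with data on an HDD and the DB (and, implicitly, the WAL) on an SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1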

Re: [ceph-users] commend "ceph dashboard create-self-signed-cert " ERR

2018-07-03 Thread jaywaychou
Hi, John: I checked SELinux; it is disabled ([root@localhost ~]# getenforce reports "Disabled"). (Sent from NetEase Mail Master.) On 07/3/2018 16:19, John Spray wrote: On Tue, Jul 3, 2018 at 9:18 AM John Spray wrote: On Tue, Jul 3, 2018 at 6:25 AM jay

Re: [ceph-users] mgr modules not enabled in conf

2018-07-03 Thread Gökhan Kocak
Thanks for the clarification, John. Kind regards, Gökhan. On 07/03/2018 10:52 AM, John Spray wrote: On Tue, Jul 3, 2018 at 9:37 AM Gökhan Kocak wrote: Hello everyone, I tried to enable the Prometheus module (and later, with the same result, the Dashboard module) as outlined in the docs

Re: [ceph-users] Adding SSD-backed DB & WAL to existing HDD OSD

2018-07-03 Thread Eugen Block
Hi, we had to recreate the block.db for some OSDs just a couple of weeks ago because our existing journal SSD had failed. This way we avoided rebalancing the whole cluster; only the affected OSDs had to be filled up again. Maybe this will help you too. http://heiterbiswolkig.blogs.nde.ag/2018/04/08/r
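Before attempting anything like this, it helps to confirm where an OSD's DB currently lives; two ways to check (the OSD id and paths are examples):

    ceph osd metadata 0 | grep -E 'bluefs|db'
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block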

Re: [ceph-users] Adding SSD-backed DB & WAL to existing HDD OSD

2018-07-03 Thread Brad Fitzpatrick
Eugen, thanks! That looks like a nice improvement over what I'm doing now. I'll try that. On Tue, Jul 3, 2018 at 3:15 AM Eugen Block wrote: Hi, we had to recreate the block.db for some OSDs just a couple of weeks ago because our existing journal SSD had failed. This way we avoided

[ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
Hi there. I have an issue with CephFS and multiple data pools. I have six data pools inside the CephFS, and I control where files are stored using xattrs on the directories. The "root" directory contains only directories whose xattrs direct new objects to be stored in the different pools.
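The xattr mechanism referred to here is the CephFS directory layout; a minimal example of steering a directory to a pool (pool name and mount point are placeholders, and the pool must already be attached to the filesystem):

    ceph fs add_data_pool cephfs my_extra_pool
    setfattr -n ceph.dir.layout.pool -v my_extra_pool /mnt/cephfs/somedir
    getfattr -n ceph.dir.layout /mnt/cephfs/somedir    # verify the layout took effect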

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread John Spray
On Tue, Jul 3, 2018 at 11:53 AM Jesus Cea wrote: Hi there. I have an issue with CephFS and multiple data pools. I have six data pools inside the CephFS, and I control where files are stored using xattrs on the directories. The "root" directory contains only directories with

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 13:08, John Spray wrote: Right: as you've noticed, they're not spurious; they're where we keep a "backtrace" xattr for a file. Backtraces are lazily updated paths that enable CephFS to map an inode number to a file's metadata, which is needed when resolving hard links or N
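Those "spurious" objects can be inspected directly: for each file there is an object named <inode-hex>.00000000 in the primary data pool carrying the backtrace in its "parent" xattr, even when the file's data lives in another pool. A quick look (pool and object names are examples):

    rados -p cephfs_data ls | head                       # the seemingly empty objects
    rados -p cephfs_data listxattr 10000000001.00000000
    rados -p cephfs_data getxattr 10000000001.00000000 parent | hexdump -C | head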

Re: [ceph-users] "ceph pg scrub" does not start

2018-07-03 Thread Jake Grimmett
Dear All, Sorry to bump the thread, but I still can't manually repair inconsistent PGs on our Mimic cluster (13.2.0, upgraded from 12.2.5). There are many similarities to an unresolved bug: http://tracker.ceph.com/issues/15781 To give more examples of the problem: the following commands appear
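For comparison, the usual manual scrub/repair sequence on an inconsistent PG is below (the PG id is a placeholder; whether the scrub actually starts is exactly what this thread is about):

    ceph health detail                                    # identify the inconsistent PG(s)
    rados list-inconsistent-obj 2.1f --format=json-pretty
    ceph pg deep-scrub 2.1f
    ceph pg repair 2.1f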

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread John Spray
On Tue, Jul 3, 2018 at 12:24 PM Jesus Cea wrote: On 03/07/18 13:08, John Spray wrote: Right: as you've noticed, they're not spurious; they're where we keep a "backtrace" xattr for a file. Backtraces are lazily updated paths that enable CephFS to map an inode number to a file's

[ceph-users] RADOSGW err=Input/output error

2018-07-03 Thread Drew Weaver
An application is having general failures writing to a test cluster we have set up. 2018-07-02 23:13:26.128282 7fe00b560700 0 WARNING: set_req_state_err err_no=5 resorting to 500 2018-07-02 23:13:26.128460 7fe00b560700 1 == req done req=0x7fe00b55a110 op status=-5 http_status=500 == 20
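err_no=5 is EIO, which on its own says little; a common next step is to raise RGW logging and retry the request. One way to do that at runtime, assuming the default admin socket location (the daemon name is an example):

    ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config set debug_rgw 20/20
    ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config set debug_ms 1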

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 13:46, John Spray wrote: To directly address that warning rather than silencing it, you'd increase the number of PGs in your primary data pool. Since the number of PGs per OSD is limited (or at least there is a recommended limit), I would rather invest them in my data pools. Since I am
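For anyone weighing the same trade-off, the per-pool PG and object counts (including the empty backtrace objects accumulating in the primary pool) are easy to compare:

    ceph osd pool get cephfs_data pg_num    # pool name is an example
    ceph df detail                          # usage and object counts per pool
    rados df                                # another per-pool view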

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Steffen Winther Sørensen
On 3 Jul 2018, at 12.53, Jesus Cea wrote: Hi there. I have an issue with CephFS and multiple data pools. I have six data pools inside the CephFS, and I control where files are stored using xattrs on the directories. Couldn't you just use six CephFS filesystems, each with its own metadata + data pool
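Multiple filesystems do work, but in Mimic they still have to be enabled explicitly; a rough sketch (filesystem and pool names are placeholders):

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create cephfs2_metadata 64
    ceph osd pool create cephfs2_data 256
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data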

Re: [ceph-users] Spurious empty files in CephFS root pool when multiple pools associated

2018-07-03 Thread Jesus Cea
On 03/07/18 15:09, Steffen Winther Sørensen wrote: On 3 Jul 2018, at 12.53, Jesus Cea wrote: Hi there. I have an issue with CephFS and multiple data pools. I have six data pools inside the CephFS, and I control where files are stored using xattrs on the directories

Re: [ceph-users] VMWARE and RBD

2018-07-03 Thread Philip Schroth
You can contact me about iSCSI with SCST and Ceph; I have it running in my production environment. 2018-06-29 15:19 GMT+02:00 Steven Vacaroaia: Hi Horace, Thanks. Would you be willing to share instructions for using SCST instead of ceph-iscsi? Thanks, Steven. On Thu, 28 Jun 2018 a

Re: [ceph-users] VMWARE and RBD

2018-07-03 Thread Alex Gorbachev
On Mon, Jun 18, 2018 at 12:08 PM, Steven Vacaroaia wrote: Hi, I read somewhere that VMware is planning to support RBD directly. Anyone here know more about this? Maybe a tentative date / version? We use NFS quite successfully (see Nick Fisk's later post on the limitations and challenges
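The NFS approach mentioned here is, at its simplest, an RBD image re-exported by a Linux NFS server; a rough sketch of that idea, not the poster's actual setup (pool, image and paths are made up):

    rbd create vmware_pool/datastore1 --size 10T
    rbd map vmware_pool/datastore1
    mkfs.xfs /dev/rbd/vmware_pool/datastore1
    mkdir -p /export/datastore1
    mount /dev/rbd/vmware_pool/datastore1 /export/datastore1
    echo '/export/datastore1 *(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra    # the datastore is then mounted from ESXi as NFS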

[ceph-users] Ceph Developer Monthly - July 2018

2018-07-03 Thread Leonardo Vaz
Hey Cephers, This is just a friendly reminder that the next Ceph Developer Monthly meeting is coming up: http://wiki.ceph.com/Planning Important: due to the July 4th holiday in the US we are postponing the Ceph Developer Monthly meeting to July 11th. If you have work that you're doing that is a feature

[ceph-users] Long interruption when increasing placement groups

2018-07-03 Thread fcid
Hello ceph community, Last week I was increasing the PGs in a pool used for RBD, in an attempt to reach 1024 PGs (from 128 PGs). The increments were 32 at a time, and after creating the new placement groups I triggered the data rebalance using the pgp_num parameter. Everything was fine until
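For reference, the stepwise procedure described here looks roughly like this on Luminous/Mimic (pool name and numbers are examples); the pgp_num change is what actually moves data:

    ceph osd pool get rbd pg_num
    ceph osd pool set rbd pg_num 160     # +32
    ceph osd pool set rbd pgp_num 160    # triggers the rebalance
    ceph -s                              # wait for the cluster to settle before the next step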