OK, thanks.
From: Daleep Singh Bais
Sent: September 28, 2016 8:14:53
To: 卢 迪; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph user management question
Hi Dillon,
Please check
http://docs.ceph.com/docs/firefly/rados/operations/auth-intro/#ceph-authorization-caps
Hi, colleagues!
I'm using Ceph 10.0.2 and have built a Ceph cluster to use in a
production environment.
I'm also using the OpenStack L (Liberty) release. I tested a Ceph OSD node
crash, for example by pulling out the power supply or the network cable.
At the same time I tried to run some commands in the VM, and it will
Hello,
On Wed, 28 Sep 2016 19:36:28 +0200 Sascha Vogt wrote:
> Hi Christian,
>
> Am 28.09.2016 um 16:56 schrieb Christian Balzer:
> > 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log
> > of the OSDs.
> >
> > Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the c
S1148 is down but the cluster does not mark it as such.
cluster 3aac8ab8-1011-43d6-b281-d16e7a61b2bd
health HEALTH_WARN
3888 pgs backfill
196 pgs backfilling
6418 pgs degraded
52 pgs down
52 pgs peering
1 pgs recovery_wait
3653 pgs stuck degraded
52 pgs stuck inactive
6088 pgs stuck unclean
Hi,
we have the same situation with one PG on a different cluster of ours. Scrubs and
deep-scrubs are running over and over for the same PG (38.34). I've logged a
period with the deep-scrub and some of the scrubs repeating. The OSD log from the
primary OSD can be found here:
https://www.dropbox.com/s/njmixbgzkfo1wws/ceph-osd.
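For reference, a few standard commands that help when inspecting a PG like this
(just a sketch; 38.34 is the PG mentioned above, substitute your own):
  ceph pg 38.34 query                  # peering state plus last scrub / deep-scrub stamps
  ceph pg dump pgs | grep ^38.34       # same stamps in the pg dump, useful for watching repeats
  ceph pg deep-scrub 38.34             # trigger a deep-scrub by hand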
The question:
Is this something I need to investigate further, or am I being paranoid?
Seems bad to me.
I have a fairly new cluster built using ceph-deploy 1.5.34-0, ceph
10.2.2-0, and CentOS 7.2.1511.
I recently noticed on every one of
Hi Christian,
Am 28.09.2016 um 16:56 schrieb Christian Balzer:
> 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log
> of the OSDs.
>
> Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the cache tier and
> most likely discover a huge "LOG" file.
You're right, it was
On Wed, Sep 28, 2016 at 8:03 AM, Ranjan Ghosh wrote:
> Hi everyone,
>
> Up until recently, we were using GlusterFS to have two web servers in sync
> so we could take one down and switch back and forth between them - e.g. for
> maintenance or failover. Usually, both were running, though. The perfor
On Wed, 28 Sep 2016, Orit Wasserman wrote:
see below
On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote:
We googled around a bit and found the fix-zone script:
https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
Which ran fine until the last command, which
Hi everyone,
Up until recently, we were using GlusterFS to have two web servers in
sync so we could take one down and switch back and forth between them -
e.g. for maintenance or failover. Usually, both were running, though.
The performance was abysmal, unfortunately. Copying many small files
On Wed, 28 Sep 2016 14:08:43 +0200 Sascha Vogt wrote:
> Hi all,
>
> we currently experience a few "strange" things on our Ceph cluster and I
> wanted to ask if anyone has recommendations for further tracking them
> down (or maybe even an explanation already ;) )
>
> Ceph version is 0.94.5 and we
c and subsequent retries of
the sync fail with a return code of -5.
Any other suggestions?
2016-09-28 16:14:52.145933 7f84609e3700 20 rgw meta sync: entry:
name=20160928:bbp-gva-master.106061599.1
2016-09-28 16:14:52.145994 7f84609e3700 20 rgw meta sync: entry:
name=20160928:bbp-gva-master.106061
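For anyone comparing notes, the usual first checks on the syncing site are the
standard radosgw-admin status commands (a sketch, nothing specific to this setup):
  radosgw-admin sync status            # overall data and metadata sync state
  radosgw-admin metadata sync status   # metadata sync markers in detail
  radosgw-admin sync error list        # persistent sync errors recorded by the gateway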
> On 26 September 2016 at 19:51, Sam Yaple wrote:
>
>
> On Mon, Sep 26, 2016 at 5:44 PM, Wido den Hollander wrote:
>
> >
> > > On 26 September 2016 at 17:48, Sam Yaple wrote:
> > >
> > >
> > > On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander
> > wrote:
> > >
> > > > Hi,
> > > >
> > > >
This point release fixes several important bugs in RBD mirroring, RGW
multi-site, CephFS, and RADOS.
We recommend that all v10.2.x users upgrade.
Notable changes in this release include:
* build/ops: 60-ceph-partuuid-workaround-rules still needed by debian jessie
(udev 215-17) (#16351, runsisi,
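When picking this release up, the usual rolling order applies (a sketch assuming
systemd-managed daemons: monitors first, then OSDs, then MDS/RGW):
  systemctl restart ceph-mon.target    # one monitor host at a time, wait for quorum
  ceph -s                              # confirm health before moving on
  systemctl restart ceph-osd.target    # then each OSD host in turn
  ceph tell osd.* version              # confirm every OSD reports 10.2.3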
Hi,
I'm on CentOS 7 / Hammer 0.94.9 (upgraded from 0.94.7, where the RGW S3
objects were created) and I have radosgw multipart and shadow objects in
.rgw.buckets even though I deleted all buckets two weeks ago. Can
anybody advise on how to prune or garbage collect the orphan and
multipart objects? Pls help. Thx wi
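Not an authoritative answer, but these are the tools that usually come up for
this (a sketch; the job id is a made-up name, and "orphans find" needs a
radosgw-admin build that includes it):
  radosgw-admin gc list --include-all      # objects still queued for garbage collection
  radosgw-admin gc process                 # run the garbage collector now instead of waiting
  radosgw-admin orphans find --pool=.rgw.buckets --job-id=orphan-scan-1
  radosgw-admin orphans finish --job-id=orphan-scan-1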
Hi Burkhard,
thanks a lot for the quick response.
Am 28.09.2016 um 14:15 schrieb Burkhard Linke:
> someone correct me if I'm wrong, but removing objects in a cache tier
> setup results in empty objects which act as markers for deleting the
> object on the backing store. I've seen the same pattern
Hi,
someone correct me if I'm wrong, but removing objects in a cache tier
setup results in empty objects which act as markers for deleting the
object on the backing store. I've seen the same pattern you have
described in the past.
As a test you can try to evict all objects from the cache
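For reference, flushing and evicting everything from the cache pool is a single
rados call (a sketch; "cachepool" is a placeholder for the actual cache tier pool):
  rados -p cachepool cache-flush-evict-all   # flush dirty objects, then evict the clean ones
  rados df                                   # compare object counts before and after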
Hi all,
we currently experience a few "strange" things on our Ceph cluster and I
wanted to ask if anyone has recommendations for further tracking them
down (or maybe even an explanation already ;) )
Ceph version is 0.94.5 and we have an HDD-based pool with a cache pool on
NVMe SSDs in front of it.
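Given where this ends up (the unrotated omap LOG from the 0.94.5 bug mentioned
elsewhere in this thread), a quick way to check for it (a sketch; the path is the
one quoted in those replies):
  ls -lh /var/lib/ceph/osd/ceph-*/current/omap/LOG   # look for an unusually large LOG file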
On Tue, Sep 27, 2016 at 10:19 PM, John Rowe wrote:
> Hi Orit,
>
> It appears it must have been one of the known bugs in 10.2.2. I just
> upgraded to 10.2.3 and bi-directional syncing now works.
>
Good
> I am still seeing some errors when I run sync-related commands but they
> don't seem to be
see below
On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote:
> (I tried to start this discussion on irc, but I wound up with the wrong
> paste buffer and wound up getting kicked off for a paste flood, sorry,
> that was on me :( )
>
> We were having some weirdness with our Ceph and did an upgr
On 28 September 2016 at 19:22, Wido den Hollander wrote:
>
>
> > On 28 September 2016 at 0:35, "Nick @ Deltaband" wrote:
> >
> >
> > Hi Cephers,
> >
> > We need to add two new monitors to a production cluster (0.94.9) which has
> > 3 existing monitors. It looks like it's as easy as ceph-dep
> On 28 September 2016 at 0:35, "Nick @ Deltaband" wrote:
>
>
> Hi Cephers,
>
> We need to add two new monitors to a production cluster (0.94.9) which has
> 3 existing monitors. It looks like it's as easy as ceph-deploy mon add <mon>.
>
You are going to add two additional monitors, going from 3 to 5?
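For reference, the ceph-deploy flow being referred to (a sketch with a hypothetical
hostname, mon4; check quorum after adding each monitor):
  ceph-deploy mon add mon4                   # repeat for the fifth monitor
  ceph quorum_status --format json-pretty    # all monitors should appear in the quorum
  ceph mon stat                              # quick summary of mons in and out of quorum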
Dear Admins,
During the last day I have been trying to deploy a new radosgw, following the
Jewel guide. The Ceph cluster is healthy (3 MON and 2 OSD servers).
[root@cephrgw ceph]# ceph -v
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
[root@cephrgw ceph]# rpm -qa | grep ceph
ceph-common
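For comparison, the short ceph-deploy route for the same step (a sketch; cephrgw
is the gateway host from the prompt above):
  ceph-deploy install --rgw cephrgw    # install the radosgw package on the gateway host
  ceph-deploy rgw create cephrgw       # by default sets up client.rgw.cephrgw and starts the gateway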
Hi Dillon,
Please check
http://docs.ceph.com/docs/firefly/rados/operations/auth-intro/#ceph-authorization-caps
http://docs.ceph.com/docs/jewel/rados/operations/user-management/
This might provide some information on permissions.
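As a concrete illustration of the caps described there (a sketch; client.dillon
and the pool name are made-up examples):
  ceph auth get-or-create client.dillon mon 'allow r' osd 'allow rwx pool=volumes'
  ceph auth get client.dillon          # show the key and caps that were granted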
Thanks,
Daleep Singh Bais
On 09/28/2016 11:28 AM, 卢 迪 wrote:
>