19 pools, 372 pgs
objects: 54278 objects, 71724 MB
usage: 121 GB used, 27820 GB / 27941 GB avail
pgs: 372 active+clean
1. http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#replacing-an-osd
On Wed, Aug 2, 2017 at 11:08 AM Roger Brown wrote:
> Hi,
>
>
n auto-marked as owned by rgw though. We
>>>> do have a ticket around that (http://tracker.ceph.com/issues/20891)
>>>> but so far it's just confusing.
>>>> -Greg
>>>>
>>>> On Fri, Aug 4, 2017 at 9:07 AM Roger Brown
>>>> wrote:
>>
d. For more details see
> "Associate Pool to Application" in the documentation.
>
> It is always a good idea to read the release notes before upgrading to a
> new version of Ceph.
>
> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown wrote:
>
>> Is this something ne
Is this something new in Luminous 12.1.2, or did I break something? Stuff
still seems to function despite the warnings.
$ ceph health detail
POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
application not enabled on pool 'default.rgw.buckets.non-ec'
application not enabled on p
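For the record, the warning is cleared by tagging each pool with the application that uses it, per the Luminous release notes; for rgw pools like the one named above, that is along the lines of (repeat per pool):

$ ceph osd pool application enable default.rgw.buckets.non-ec rgw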
Whoops, never mind my last message. My eyes deceived me.
On Fri, Aug 4, 2017 at 8:21 AM Roger Brown wrote:
> Did you really mean to say "increase this value to 20 TB from 1 TB"?
>
>
> On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote:
>
>> Morning,
>>
>>
>
Did you really mean to say "increase this value to 20 TB from 1 TB"?
On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote:
> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without recreating
> the fil
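For reference, the 1 TB cap is the CephFS max_file_size setting, which can be raised on a live filesystem. A sketch, assuming the filesystem is named cephfs (20 TiB expressed in bytes):

$ ceph fs set cephfs max_file_size 21990232555520   # 20 * 1099511627776 bytes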
ager daemon. Did you set one up yet?
>
> On Thu, Aug 3, 2017 at 7:31 AM Roger Brown wrote:
>
>> I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs
>> that report they need to be scrubbed, however the command to scrub them
>> seems to
I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs
that report they need to be scrubbed, however the command to scrub them
seems to have gone away. The flapping OSD is an issue for another thread.
Please advise.
Example:
roger@desktop:~$ ceph --version
ceph version 12.1.2 (b
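As the reply above suggests, these pg commands are serviced by the manager daemon in Luminous, so a quick sanity check is to confirm an active mgr appears in the cluster status and then retry against a specific pg (the pg id below is just a placeholder):

$ ceph -s | grep mgr
$ ceph pg deep-scrub 1.2f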
Hi,
My OSD's were continuously crashing in cephx_verify_authorizer() while on
Luminous v12.1.0 and v12.1.1, but the crashes stopped once I upgraded to
v12.1.2.
Now however, one of my OSDs is continuing to crash. Looking closer, the
crash reason is different reason and started with v12.1.1.
I've
/ceph/ceph/pull/16421
>
> With the crashes in cephx_verify_authorizer() this rather looks like
> an instance of http://tracker.ceph.com/issues/20667 to me with
> https://github.com/ceph/ceph/pull/16455 as proposed fix. See Sage's
> mail on ceph-dev earlier.
>
> > On Thu,
I could be wrong, but I think you cannot achieve this objective. If you
declare a cluster network, OSDs will route heartbeat, object replication
and recovery traffic over the cluster network. We prefer that the cluster
network is NOT reachable from the public network or the Internet for added
security.
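For reference, that split is what the public/cluster network options in ceph.conf declare on every node; the subnets below are placeholders:

[global]
public network = 192.168.1.0/24    # clients, monitors, radosgw traffic
cluster network = 10.10.10.0/24    # OSD heartbeat, replication and recovery traffic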
I had same issue on Lumninous and worked around it by disabling ceph-disk.
The osds can start without it.
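A sketch of what "disabling ceph-disk" can look like, using a per-device unit name as an example (adjust the device to your own):

sudo systemctl disable ceph-disk@dev-sdb2.service
sudo systemctl mask ceph-disk@dev-sdb2.service      # prevent udev/boot from re-triggering it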
On Thu, Jul 27, 2017 at 3:36 PM Oscar Segarra
wrote:
> Hi,
>
> First of all, my version:
>
> [root@vdicnode01 ~]# ceph -v
> ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) lum
' | while read i; do
> ceph pg deep-scrub ${i}; done
>
>
>
> --
>
> Petr Malkov
>
>
>
> -
>
> Message: 57
>
> Date: Wed, 19 Jul 2017 16:38:20 +
>
> From: Roger Brown
>
> To: ceph-users
>
> Subject: [ceph-users] pgs not deep-scrubb
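A full version of the quoted deep-scrub loop, assuming health detail lines of the form "pg <pgid> not deep-scrubbed ...", might be:

ceph health detail | awk '/^[[:space:]]*pg .* not deep-scrubbed/ {print $2}' | while read i; do
    ceph pg deep-scrub ${i}
done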
I hope someone else can answer your question better, but in my case I found
something like this helpful to delete objects faster than I could through
the gateway:
rados -p default.rgw.buckets.data ls | grep 'replace this with pattern
matching files you want to delete' | xargs -d '\n' -n 200 rados
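Presumably the pipeline ends by handing the object names to rados rm in the same pool; a full version might look like:

rados -p default.rgw.buckets.data ls \
    | grep 'replace this with pattern matching objects you want to delete' \
    | xargs -d '\n' -n 200 rados -p default.rgw.buckets.data rm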
The method I have used is to 1) edit ceph.conf, 2) use ceph-deploy config
push, 3) restart monitors
Example:
roger@desktop:~/ceph-cluster$ vi ceph.conf   # make ceph.conf change
roger@desktop:~/ceph-cluster$ ceph-deploy --overwrite-conf config push
nuc{1..3}
[ceph_deploy.conf][DEBUG ] found confi
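Step 3 is then just a restart of each monitor; with the hostnames above, something like (ssh/sudo access assumed):

for h in nuc1 nuc2 nuc3; do ssh $h sudo systemctl restart ceph-mon@$h; done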
ceph/bootstrap-mgr/ceph.keyring
> > [client.bootstrap-mgr]
> > key =
> > caps mon = "allow profile bootstrap-mgr"
> >
> >
> > On Sun, Jul 23, 2017 at 5:16 PM Mark Kirkwood
> > wrote:
> >
>
> From the error message it does not seem to like
> /var/lib/ceph/bootstrap-mgr/ceph.keyring - what do the contents of
> that file look like?
>
> regards
>
> Mark
> On 24/07/17 03:09, Roger Brown wrote:
> > Mark,
> >
> > Thanks for that information. I can
Mark,
Thanks for that information. I can't seem to deploy ceph-mgr either. I also
have the busted mgr bootstrap key. I attempted the suggested fix, but my
issue may be different somehow. Complete output follows.
-Roger
roger@desktop:~$ ceph-deploy --version
1.5.38
roger@desktop:~$ ceph mon versio
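For anyone hitting the same empty client.bootstrap-mgr key, one thing worth trying (a sketch, not a verified fix) is to delete the broken entity, recreate it with the bootstrap-mgr profile, and rewrite the local keyring that ceph-deploy complains about:

ceph auth del client.bootstrap-mgr
ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
    -o /var/lib/ceph/bootstrap-mgr/ceph.keyring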
I'm on Luminous 12.1.1 and noticed I have flapping OSDs. Even with `ceph
osd set nodown`, the OSDs will catch signal Aborted and sometimes
Segmentation fault 2-5 minutes after starting. I verified hosts can talk to
eachother on the cluster network. I've rebooted the hosts. I'm running out
of ideas.
So I disabled ceph-disk and will chalk it up as a red herring to ignore.
On Thu, Jul 20, 2017 at 11:02 AM Roger Brown wrote:
> Also I'm just noticing osd1 is my only OSD host that even has an enabled
> target for ceph-disk (ceph-disk@dev-sdb2.service).
>
> roger@osd1:~$ sys
oaded active active ceph target allowing to
start/stop all ceph-radosgw@.service instances at once
ceph.target loaded active active ceph target allowing to
start/stop all ceph*@.service instances at once
On Thu, Jul 20, 2017 at 10:23 AM Roger Brown wrote:
> I think I need help with
I think I need help with some OSD trouble. OSD daemons on two hosts started
flapping. At length, I rebooted host osd1 (osd.3), but the OSD daemon still
fails to start. Upon closer inspection, ceph-disk@dev-sdb2.service is
failing to start due to, "Error: /dev/sdb2 is not a block device"
This is th
What's the trick to overcoming unsupported features error when mapping an
erasure-coded rbd? This is on Ceph Luminous 12.1.1, Ubuntu Xenial, Kernel
4.10.0-26-lowlatency.
Steps to replicate:
$ ceph osd pool create rbd_data 32 32 erasure default
pool 'rbd_data' created
$ ceph osd pool set rbd_data
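For context, the usual Luminous pattern for EC-backed RBD is an erasure-coded data pool with overwrites enabled (which requires BlueStore OSDs) plus a replicated pool holding the image metadata. A sketch reusing the pool name above; the metadata pool and image name are assumptions:

$ ceph osd pool set rbd_data allow_ec_overwrites true
$ ceph osd pool create rbd_meta 32 32 replicated
$ rbd create --size 10G --data-pool rbd_data rbd_meta/testimage

Even then, the kernel client has to support the data-pool image feature; on an older kernel the unsupported-features error can persist until the kernel is upgraded or the image is used via librbd/rbd-nbd instead.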
I just upgraded from Luminous 12.1.0 to 12.1.1 and was greeted with this
new "pgs not deep-scrubbed for" warning. Should this resolve itself, or
should I get scrubbing?
$ ceph health detail
HEALTH_WARN 4 pgs not deep-scrubbed for 86400; 15 pgs not scrubbed for 86400
PG_NOT_DEEP_SCRUBBED 4 pgs not
Roger
On Wed, Jul 19, 2017 at 7:34 AM David Turner wrote:
> I would go with the weight that was originally assigned to them. That way
> it is in line with what new osds will be weighted.
>
> On Wed, Jul 19, 2017, 9:17 AM Roger Brown wrote:
>
>> David,
>>
>> Th
Tue, Jul 18, 2017, 11:16 PM Roger Brown wrote:
>
>> Resolution confirmed!
>>
>> $ ceph -s
>> cluster:
>> id: eea7b78c-b138-40fc-9f3e-3d77afb770f0
>> health: HEALTH_OK
>>
>> services:
>> mon: 3 daemons, quorum desktop,mon1,n
: 54243 objects, 71722 MB
usage: 129 GB used, 27812 GB / 27941 GB avail
pgs: 372 active+clean
On Tue, Jul 18, 2017 at 8:47 PM Roger Brown wrote:
> Ah, that was the problem!
>
> So I edited the crushmap (
> http://docs.ceph.com/docs/master/rados/operations/crush-map/) with a
00 host osd1
> -6 9.09560 host osd2
> -2 9.09560 host osd3
>
> The weight allocated to host "osd1" should presumably be the same as
> the other two hosts?
>
> Dump your crushmap and take a good look at it, specifically the
> weighting of "osd1".
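The edit cycle for the crushmap (per the crush-map docs linked above) is roughly: dump the compiled map, decompile it, edit the weights, recompile, and inject it back:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (e.g. the weight on host osd1), then:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new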
I also tried ceph pg query, but it gave no helpful recommendations for any
of the stuck pgs.
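Two commands that help show which OSDs the stuck pgs map to, for comparison with the host weights:

ceph pg dump_stuck unclean   # pgs that are not active+clean, with their up/acting OSD sets
ceph osd tree                # host and OSD weights in the CRUSH hierarchy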
On Tue, Jul 18, 2017 at 7:45 PM Roger Brown wrote:
> Problem:
> I have some pgs with only two OSDs instead of 3 like all the other pgs
> have. This is causing active+undersized+degrad
Problem:
I have some pgs with only two OSDs instead of 3 like all the other pgs
have. This is causing active+undersized+degraded status.
History:
1. I started with 3 hosts, each with 1 OSD process (min_size 2) for a 1TB
drive.
2. Added 3 more hosts, each with 1 OSD process for a 10TB drive.
3. Rem
I've been trying to work through similar mgr issues for Xenial-Luminous...
roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr create
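If ceph-deploy keeps failing, a manual mgr setup is a possible workaround; a sketch following the standard manual-deployment steps, with the hostname mon1 and default paths as assumptions:

sudo mkdir -p /var/lib/ceph/mgr/ceph-mon1
sudo ceph auth get-or-create mgr.mon1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-mon1/keyring
sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-mon1
sudo systemctl start ceph-mgr@mon1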
the host to the ServerName before passing on the
> request. Try setting ProxyPreserveHost on as per
> https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost ?
> >
> > Rich
> >
> > On 11/07/17 21:47, Roger Brown wrote:
> >> Thank you Richard, that mostly work
e, Jul 11, 2017 at 10:22 AM Richard Hesketh <
richard.hesk...@rd.bbc.co.uk> wrote:
> On 11/07/17 17:08, Roger Brown wrote:
> > What are some options for migrating from Apache/FastCGI to Civetweb for
> RadosGW object gateway *without* breaking other websites on the domain?
>
What are some options for migrating from Apache/FastCGI to Civetweb for
RadosGW object gateway *without* breaking other websites on the domain?
I found documention on how to migrate the object gateway to Civetweb (
http://docs.ceph.com/docs/luminous/install/install-ceph-gateway/#migrating-from-apa
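One way to keep the other vhosts on the domain untouched is to leave Apache in place and simply reverse-proxy the S3 subdomain to civetweb, which listens on port 7480 by default. A minimal sketch, with the hostname as an assumption and SSL directives omitted:

<VirtualHost *:80>
    ServerName s3.example.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:7480/
    ProxyPassReverse / http://127.0.0.1:7480/
</VirtualHost>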
I'm a n00b myself, but I'll go on record with my understanding.
On Sun, Jun 4, 2017 at 3:03 PM Benoit GEORGELIN - yulPa <
benoit.george...@yulpa.io> wrote:
> Hi ceph users,
>
> Ceph have a very good documentation about technical usage, but there is a
> lot of conceptual things missing (from my po
I'm using fastcgi/apache2 instead of civetweb (CentOS 7) because I couldn't
get civetweb to work with SSL on port 443 and in a subdomain of my main
website.
So I have domain.com, www.domain.com, s3.domain.com (RGW), and *.
s3.domain.com for the RGW buckets. As long as you can do the same with
civitw
I'm using fastcgi/apache2 instead of civetweb (CentOS 7) because I couldn't
get civetweb to work with SSL on port 443 and in a subdomain of my main
website.
On Fri, May 5, 2017 at 1:51 PM Yehuda Sadeh-Weinraub
wrote:
> RGW has supported since forever. Originally it was the only supported
> front
How interesting! Thank you for that.
On Sat, Apr 29, 2017 at 4:04 PM Bryan Henderson
wrote:
> A few months ago, I posted here asking why the Ceph program takes so much
> memory (virtual, real, and address space) for what seems to be a simple
> task.
> Nobody knew, but I have done extensive resea
I don't recall. Perhaps later I can try a test and see.
On Fri, Apr 28, 2017 at 10:22 AM Ali Moeinvaziri wrote:
> Thanks. So, you didn't get any error on command "ceph-deploy mon
> create-initial"?
> -AM
>
>
> On Fri, Apr 28, 2017 at 9:50 AM, Roger Brown
I used ceph on centos 7. I check monitor status with commands like these:
systemctl status ceph-mon@nuc1
systemctl stop ceph-mon@nuc1
systemctl start ceph-mon@nuc1
systemctl restart ceph-mon@nuc1
For me the hostnames are nuc1, nuc2, and nuc3, so modify them to suit
your case.
On Fri, Apr 28,
My first thought is that ceph doesn't have permission to read the radosgw keyring file,
e.g.
[root@nuc1 ~]# ls -l /etc/ceph/ceph.client.radosgw.keyring
-rw-rw+ 1 root root 73 Feb 8 20:40
/etc/ceph/ceph.client.radosgw.keyring
You could give it read permission or be clever with setfacl, eg.
setfacl -m u:ce
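A complete setfacl invocation, assuming the daemon runs as the ceph user, might be:

setfacl -m u:ceph:r /etc/ceph/ceph.client.radosgw.keyring
getfacl /etc/ceph/ceph.client.radosgw.keyring    # verify the ACL took effect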
I had similar issues when I created all the rbd-related pools with
erasure-coding instead of replication. -Roger
On Wed, Mar 1, 2017 at 11:47 AM John Nielsen wrote:
> Hi all-
>
> We use Amazon S3 quite a bit at $WORK but are evaluating Ceph+radosgw as
> an alternative for some things. We have a
replace "master" with the release codename, eg.
http://docs.ceph.com/docs/kraken/
On Mon, Feb 27, 2017 at 12:45 PM Stéphane Klein
wrote:
> Hi,
>
> how can I read old Ceph version documentation?
>
> http://docs.ceph.com I see only "master" documentation.
>
> I look for 0.94.5 documentation.
>
>
Today I learned that you can't use an erasure-coded .rgw.buckets.index pool
with radosgw. If you do, expect HTTP 500 errors and messages
like rgw_create_bucket returned ret=-95.
My setup:
CentOS 7.3.1611
ceph 11.2.0 (Kraken)
Apache/2.4.6
PHP 5.5.38
radosgw via FastCGI
I recreated the pool without
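A sketch of recreating the index pool as replicated (pg counts are placeholders, and the pool must be empty or its contents expendable):

ceph osd pool delete .rgw.buckets.index .rgw.buckets.index --yes-i-really-really-mean-it
ceph osd pool create .rgw.buckets.index 8 8 replicated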