[ceph-users] Large omap objects in radosgw .usage pool: is there a way to reshard the rgw usage log?

2020-01-22 Thread Ingo Reimann
Hi All

>On 09/10/2019 09:07, Florian Haas wrote: 
>[...]
>the question about resharding the usage log still stands. (The untrimmed 
>usage log, in my case, would have blasted the old 2M keys threshold, too.) 
>
>Cheers, Florian

Is there any new wisdom about resharding the usage log for a single user? Since 
Nautilus we get a HEALTH_WARN about three weeks into every month, because the 
usage data of one single user reaches the threshold for large omap warnings - 
which I have already increased to 1M keys. At the start of the month we truncate 
the usage data, so we are safe again for a while.
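
For reference, the threshold bump and the monthly cleanup look roughly like
this on our side (the user ID and the cut-off date are placeholders):

# raise the large-omap warning threshold to 1M keys
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 1000000

# start of month: drop the accumulated usage log entries for the heavy user
radosgw-admin usage trim --uid=heavy-user --end-date=2020-01-01

That keeps the warning away, but it is only a workaround - hence the question
about resharding.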

Cheers,
ingo

-- 
Ingo Reimann
Teamleiter Technik
[ https://www.dunkel.de/ ]
Dunkel GmbH
Philipp-Reis-Straße 2
65795 Hattersheim
Fon: +49 6190 889-100
Fax: +49 6190 889-399
eMail: supp...@dunkel.de
http://www.Dunkel.de/
Amtsgericht Frankfurt/Main
HRB: 37971
Geschäftsführer: Axel Dunkel
Ust-ID: DE 811622001


[ceph-users] Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Robert LeBlanc
In the last release of Jewel [0] it mentions that omap data can be stored
in rocksdb instead of leveldb. We are seeing high latencies from compaction
of leveldb on our Jewel cluster (can't upgrade at this time). I installed
the latest version, but apparently that is not enough to do the conversion.
Is there a way to move OSDs from leveldb to rocksdb without trashing and
rebuilding all the osds?

[0] https://docs.ceph.com/docs/master/releases/jewel/

Thanks,
Robert LeBlanc

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


[ceph-users] Auto create rbd snapshots

2020-01-22 Thread Marc Roos


Is it possible to schedule the creation of snapshots on specific rbd 
images within ceph? 


[ceph-users] Re: Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Janne Johansson
On Wed, 22 Jan 2020 at 16:30, Robert LeBlanc wrote:

> In the last release of Jewel [0] it mentions that omap data can be stored
> in rocksdb instead of leveldb. We are seeing high latencies from compaction
> of leveldb on our Jewel cluster (can't upgrade at this time). I installed
> the latest version, but apparently that is not enough to do the conversion.
> Is there a way to move OSDs from leveldb to rocksdb without trashing and
> rebuilding all the osds?
>
> [0] https://docs.ceph.com/docs/master/releases/jewel/
>
>
If you are running something other than CentOS, there seems to be a way.
For some reason the CentOS Jewel build lacks snappy compression in its rocksdb,
but the conversion tool will create a snappy-compressed rocksdb anyhow.

https://ceph-users.ceph.narkive.com/Znl1HyKq/any-backfill-in-our-cluster-makes-the-cluster-unusable-and-takes-forever


-- 
May the most significant bit of your life be positive.


[ceph-users] ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Wesley Dillingham
After upgrading to Nautilus 14.2.6 from Luminous 12.2.12 we are seeing the
following behavior on OSDs which were created with "ceph-volume lvm create
--filestore --osd-id  --data  --journal "

Upon restart of the server containing these OSDs they fail to start with
the following error in the logs:

2020-01-21 13:36:11.635 7fee633e8a80 -1
filestore(/var/lib/ceph/osd/ceph-199) mount(1928): failed to open
journal /var/lib/ceph/osd/ceph-199/journal: (13) Permission denied

/var/lib/ceph/osd/ceph-199/journal is a symlink to /dev/sdc5 in our case, and
inspecting the ownership of /dev/sdc5 shows root:root; chowning it to
ceph:ceph lets the OSD start and come back up and in almost instantly.
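
In other words, what gets such an OSD going again for us is simply (device and
OSD id taken from the example above):

chown ceph:ceph /dev/sdc5
systemctl start ceph-osd@199

but that of course only holds until the next reboot, when the device node comes
back owned by root:root.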

As a note, the OSDs we see this on are ones that had previously failed and
been replaced using the ceph-volume command above; longer-running OSDs in the
same server created with ceph-disk or ceph-volume simple (the ones that have a
corresponding .json in /etc/ceph/osd) start up fine and get ceph:ceph on
their journal partition. Bluestore OSDs also do not have any issue.

My hope is that I can preemptively fix these OSDs before shutting them down
so that reboots happen seamlessly. Thanks for any insight.



Respectfully,

*Wes Dillingham*
w...@wesdillingham.com


[ceph-users] Re: Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Alexandru Cucu
Hi,

There is no need to rebuild all OSDs. You can follow the procedure
described by Red Hat [0] to convert the DB and tell the OSD to use
rocksdb.
I couldn't find this documented elsewhere. You may need a Red Hat account
to access the content, but you can create a developer one for free,
IIRC.

[0] https://access.redhat.com/solutions/3210951

---
Alex Cucu


On Wed, Jan 22, 2020 at 5:30 PM Robert LeBlanc  wrote:
>
> In the last release of Jewel [0] it mentions that omap data can be stored in 
> rocksdb instead of leveldb. We are seeing high latencies from compaction of 
> leveldb on our Jewel cluster (can't upgrade at this time). I installed the 
> latest version, but apparently that is not enough to do the conversion. Is 
> there a way to move OSDs from leveldb to rocksdb without trashing and 
> rebuilding all the osds?
>
> [0] https://docs.ceph.com/docs/master/releases/jewel/
>
> Thanks,
> Robert LeBlanc
> 
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


[ceph-users] Re: ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Janne Johansson
On Wed, 22 Jan 2020 at 18:01, Wesley Dillingham wrote:

> After upgrading to Nautilus 14.2.6 from Luminous 12.2.12 we are seeing the
> following behavior on OSDs which were created with "ceph-volume lvm create
> --filestore --osd-id  --data  --journal "
>
> Upon restart of the server containing these OSDs they fail to start with
> the following error in the logs:
>
> 2020-01-21 13:36:11.635 7fee633e8a80 -1 filestore(/var/lib/ceph/osd/ceph-199) 
> mount(1928): failed to open journal /var/lib/ceph/osd/ceph-199/journal: (13) 
> Permission denied
>
> /var/lib/ceph/osd/ceph-199/journal symlinks to /dev/sdc5 in our case and
> inspecting the ownership on /dev/sdc5 it is root:root, chowning that to
> ceph:ceph causes the osd to start and come back up and in near instantly.
>
> As a note these OSDs we experience this with are OSDs which have
> previously failed and been replaced using the above ceph-volume, longer
> running OSDs in the same server created with ceph-disk or ceph-volume
> simple (that have a corresponding .json in /etc/ceph/osd) start up fine and
> get ceph:ceph on their journal partition. Bluestore OSDs also do not have
> any issue.
>
> My hope is that I can preemptively fix these OSDs before shutting them
> down so that reboots happen seamlessly. Thanks for any insight.
>
>
Our workaround (not on Nautilus, but still) is to extend the prestart script
that the systemd unit file points to, like this:


more /usr/lib/systemd/system/ceph-osd\@.service

...
ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh 

then, in that script, after it has figured out what your journal should be
(even if it is a symlink), do a chown to ceph:ceph:

more /usr/lib/ceph/ceph-osd-prestart.sh

...


journal="$data/journal"


chown  --dereference ceph:ceph $journal

so it has the correct perms before the filestore OSD gets started.
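
A udev rule should achieve the same without touching the prestart script - a
sketch only, we have not used this ourselves, and the device match needs to be
adjusted to your layout:

cat > /etc/udev/rules.d/99-ceph-journal.rules <<'EOF'
# hand the filestore journal partition to ceph when the device node is created
KERNEL=="sdc5", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger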

-- 
May the most significant bit of your life be positive.


[ceph-users] Re: ceph-volume lvm filestore OSDs fail to start on reboot. Permission denied on journal partition

2020-01-22 Thread Marco Gaiarin
Hello, Wesley Dillingham!
  In that message you wrote...

> Upon restart of the server containing these OSDs they fail to start with the
> following error in the logs:

I've hit exactly the same trouble. Look at:


https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FH4QHTQZJ3R7MOXGJYW6YYLURGUHABPW/

https://tracker.ceph.com/issues/41777

still hoping that patch will be integrated upstream...

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(tax code 00307430132, category ONLUS or RICERCA SANITARIA)


[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-22 Thread Patrick Donnelly
Hi Yoann,

On Tue, Jan 21, 2020 at 11:58 PM Yoann Moulin  wrote:
>
> Hello,
>
> On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook 
> stable-4.0, I have an issue with cephfs. I can create a folder and I can
> create empty files, but I cannot write any data, as if I'm not allowed to write to 
> the cephfs_data pool.
>
> > $ ceph -s
> >   cluster:
> > id: fded5bb5-62c5-4a88-b62c-0986d7c7ac09
> > health: HEALTH_OK
> >
> >   services:
> > mon: 3 daemons, quorum iccluster039,iccluster041,iccluster042 (age 23h)
> > mgr: iccluster039(active, since 21h), standbys: iccluster041, 
> > iccluster042
> > mds: cephfs:3 
> > {0=iccluster043=up:active,1=iccluster041=up:active,2=iccluster042=up:active}
> > osd: 24 osds: 24 up (since 22h), 24 in (since 22h)
> > rgw: 1 daemon active (iccluster043.rgw0)
> >
> >   data:
> > pools:   9 pools, 568 pgs
> > objects: 800 objects, 225 KiB
> > usage:   24 GiB used, 87 TiB / 87 TiB avail
> > pgs: 568 active+clean
>
> The 2 cephfs pools:
>
> > $ ceph osd pool ls detail | grep cephfs
> > pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash 
> > rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 
> > 0/0/81 flags hashpspool stripe_width 0 expected_num_objects 1 application 
> > cephfs
> > pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 
> > object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 48 
> > flags hashpspool stripe_width 0 expected_num_objects 1 pg_autoscale_bias 4 
> > pg_num_min 16 recovery_priority 5 application cephfs
>
> The status of the cephfs filesystem:
>
> > $ ceph fs status
> > cephfs - 1 clients
> > ==
> > +--++--+---+---+---+
> > | Rank | State  | MDS  |Activity   |  dns  |  inos |
> > +--++--+---+---+---+
> > |  0   | active | iccluster043 | Reqs:0 /s |   34  |   18  |
> > |  1   | active | iccluster041 | Reqs:0 /s |   12  |   16  |
> > |  2   | active | iccluster042 | Reqs:0 /s |   10  |   13  |
> > +--++--+---+---+---+
> > +-+--+---+---+
> > |   Pool  |   type   |  used | avail |
> > +-+--+---+---+
> > | cephfs_metadata | metadata | 4608k | 27.6T |
> > |   cephfs_data   |   data   |0  | 27.6T |
> > +-+--+---+---+
> > +-+
> > | Standby MDS |
> > +-+
> > +-+
> > MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) 
> > nautilus (stable)
>
>
> > # mkdir folder
> > # echo "foo" > bar
> > -bash: echo: write error: Operation not permitted
> > # ls -al
> > total 4
> > drwxrwxrwx  1 root root2 Jan 22 07:30 .
> > drwxr-xr-x 28 root root 4096 Jan 21 09:25 ..
> > -rw-r--r--  1 root root0 Jan 22 07:30 bar
> > drwxrwxrwx  1 root root1 Jan 21 16:49 folder
>
> > # df -hT .
> > Filesystem Type  Size  Used Avail Use% 
> > Mounted on
> > 10.90.38.15,10.90.38.17,10.90.38.18:/dslab2020 ceph   28T 0   28T   0% 
> > /cephfs
>
> I tried 2 client configs:
>
> > $ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
> > [snip]
> > $ ceph auth  get client.fsadmin
> > exported keyring for client.fsadmin
> > [client.fsadmin]
> >   key = [snip]
> >   caps mds = "allow rw"
> >   caps mon = "allow r"
> >   caps osd = "allow rw tag cephfs data=cephfs"
>
> > $ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
> > [snip]
> > $ ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow 
> > rw tag cephfs pool=cephfs_data "
> > [snip]
> ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow 
> rw tag cephfs pool=cephfs_data "
> updated caps for client.cephfsadmin
> > $ ceph auth  get client.cephfsadmin
> > exported keyring for client.cephfsadmin
> > [client.cephfsadmin]
> >   key = [snip]
> >   caps mds = "allow rw"
> >   caps mon = "allow r"
> >   caps osd = "allow rw tag cephfs pool=cephfs_data "

This should be:

caps osd = "allow rw tag cephfs data=cephfs"

See also: https://docs.ceph.com/docs/nautilus/cephfs/client-auth/
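
In other words, something like this (followed by a remount on the client)
should make writes to the data pool work again:

ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw tag cephfs data=cephfs"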

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


[ceph-users] Re: Migrate Jewel from leveldb to rocksdb

2020-01-22 Thread Robert LeBlanc
On Wed, Jan 22, 2020 at 9:01 AM Alexandru Cucu  wrote:

> Hi,
>
> There is no need to rebuild all ODS. You can follow the procedure
> described by RedHat[0] to convert the DB and tell the OSD to use
> rocksdb.
> Couldn't find this documented elsewhere. You may need a RedHat account
> to access the content, but you could create a developer one for free
> IIRC.
>
> [0] https://access.redhat.com/solutions/3210951


Thanks guys,

I was able to convert some OSDs to rocksdb just fine (the ceph-kvstore-tool
was in the ceph-test debian package). I'm going to let that run for a few
days on one host before converting the others.
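
For the archives, the conversion boils down to something like this per OSD
(the OSD id and the keys-per-transaction batch size are examples; the linked
article has the authoritative steps, in particular how the OSD is told to use
the new backend):

ID=42   # example OSD id
systemctl stop ceph-osd@$ID
# copy all omap keys from the old leveldb store into a fresh rocksdb store
ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-$ID/current/omap \
  store-copy /var/lib/ceph/osd/ceph-$ID/current/omap.rocksdb 10000 rocksdb
# swap the directories and fix ownership
mv /var/lib/ceph/osd/ceph-$ID/current/omap /var/lib/ceph/osd/ceph-$ID/current/omap.leveldb
mv /var/lib/ceph/osd/ceph-$ID/current/omap.rocksdb /var/lib/ceph/osd/ceph-$ID/current/omap
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID/current/omap
# make sure filestore opens it as rocksdb (filestore_omap_backend = rocksdb),
# then bring the OSD back
systemctl start ceph-osd@$ID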

I really hope this helps with the heavy write/delete load kicking out OSDs.


Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


[ceph-users] Upcoming Ceph Days for 2020

2020-01-22 Thread Mike Perez
Hi Cephers,

We have just posted some upcoming Ceph Days. We are looking for sponsors
and content:

* Ceph Day Istanbul: March 17
* Ceph Day Oslo: May 13
* Ceph Day Vancouver: May 13

https://ceph.com/cephdays/

Also, don't forget about our big event, Cephalocon Seoul, March 3-5.
Registration, schedule, sponsorship, and hotel information are available:

https://ceph.io/cephalocon/seoul-2020/

If you have any questions about these events, please contact us at
eve...@ceph.io.

Any help with promoting these events is greatly appreciated. Thanks!

-- 

Mike Perez

he/him

Ceph Community Manager


M: +1-951-572-2633

494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
@Thingee
