[ceph-users] Re: [Suspicious newsletter] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Szabo, Istvan (Agoda)
Hi, Is it available for download or on YouTube? Thank you. From: Peter Sarossy Sent: Friday, October 2, 2020 12:12 AM To: Marc Roos Cc: ceph-users Subject: [Suspicious newsletter] [ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Bill

[ceph-users] Re: Feedback for proof of concept OSD Node

2020-10-01 Thread Ignacio Ocampo
Hi Brian, Here is more context about what I want to accomplish: I've migrated a bunch of services from AWS to a local server, but having everything in a single server is not safe, and instead of investing in RAID, I would like to start setting up a small Ceph Cluster to have redundancy and a robust m

[ceph-users] RFC: Possible replacement for ceph-disk

2020-10-01 Thread Nico Schottelius
Good evening, since 2018 we have been using a custom script to create disks / partitions, because at the time both ceph-disk and ceph-volume exhibited bugs that made them unreliable for us. We recently re-tested ceph-volume and while it seems, generally speaking [0], to work, using LVM seems to i

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Eric Ivancich
Hi Dan, One way to tell would be to do a: radosgw-admin bi list --bucket= And see if any of the lines output contains (perhaps using `grep`): "type": "olh", That would tell you if there were any versioned objects in the bucket. The “fix” we currently have only prevents this fro
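A minimal sketch of the check Eric describes. The real form needs a running RGW cluster and a bucket name (elided in the thread), so this sketch filters a small fabricated sample of bi-list style output instead:

```shell
# On a real cluster the check would be:
#   radosgw-admin bi list --bucket=<name> | grep -c '"type": "olh"'
# A nonzero count means the bucket index has OLH entries, i.e. versioned objects.
# Demo on fabricated sample lines:
sample='        "type": "plain",
        "type": "olh",
        "type": "plain",'
echo "$sample" | grep -c '"type": "olh"'   # prints 1: one OLH entry in the sample
```

The `<name>` placeholder and the sample entries are illustrative; only the `"type": "olh"` marker comes from Eric's mail.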

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread tri
Hi Matt, Marc, I'm using Ceph Octopus with cephadm as the orchestration tool. I've tried adding OSDs with ceph orch daemon add ... but it's pretty limited. For one, you can't create a dmcrypt OSD with it, nor use a separate db device. I found that the most reliable way to create an OSD with cephad

[ceph-users] Re: Feedback for proof of concept OSD Node

2020-10-01 Thread Brian Topping
Welcome to Ceph! I think better questions to start with are “what are your objectives in your study?” Is it just seeing Ceph run with many disks, or are you trying to see how much performance you can get out of it with distributed disk? What is your budget? Do you want to try different combinat

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Andrej Filipcic
On 2020-10-01 15:56, Frank Schilder wrote: There used to be / is a bug in ceph fs commands when using data pools. If you enable the application cephfs on a pool explicitly before running cephfs add datapool, the fs-tag is not applied. Maybe it's that? There is an older thread on the topic in th

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Patrick Donnelly
On Thu, Oct 1, 2020 at 6:57 AM Frank Schilder wrote: > > There used to be / is a bug in ceph fs commands when using data pools. If you > enable the application cephfs on a pool explicitly before running cephfs add > datapool, the fs-tag is not applied. Maybe it's that? There is an older thread >

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Frank Schilder
There used to be / is a bug in ceph fs commands when using data pools. If you enable the application cephfs on a pool explicitly before running cephfs add datapool, the fs-tag is not applied. Maybe it's that? There is an older thread on the topic in the users-list and also a fix/workaround. Best
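A sketch of the ordering Frank describes: let `ceph fs add_data_pool` set the cephfs application (and the fs tag) itself instead of enabling the application on the pool first. The pool name is hypothetical, and the `run` wrapper echoes each command and only executes it when the binary is actually installed, so the sketch is safe to run without a cluster:

```shell
# Echo each command; execute it only if the ceph CLI is present.
run() { echo "+ $*"; if command -v "$1" >/dev/null 2>&1; then "$@"; fi; }

run ceph osd pool create extra_data           # hypothetical pool name
run ceph fs add_data_pool cephfs extra_data   # this step applies the fs tag
# Per the bug described above, do NOT run
#   ceph osd pool application enable extra_data cephfs
# before add_data_pool, or the fs tag may not be applied.
```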

[ceph-users] Re: Feedback for proof of concept OSD Node

2020-10-01 Thread Ignacio Ocampo
RGW and RBD primarily, CephFS in less capacity. > On 1 Oct 2020, at 9:58, Nathan Fish wrote: > >  > What kind of cache configuration are you planning? Are you going to use > CephFS, RGW, and/or RBD? > >> On Tue, Sep 29, 2020 at 2:45 AM Ignacio Ocampo wrote: >> Hi All :), >> >> I would like

[ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
P, thanks, you are right; I was too blind and impatient to look under the options. -Original Message- Cc: ceph-users; miperez Subject: *SPAM* Re: [ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects You can click "join without audio and video"

[ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Peter Sarossy
You can click "join without audio and video" at the bottom On Thu, Oct 1, 2020 at 1:10 PM Marc Roos wrote: > > Mike, > > Can you allow access without mic and cam? > > Thanks, > Marc > > > > -Original Message- > > To: ceph-users@ceph.io > Subject: *SPAM* [ceph-users] Ceph Tech Tal

[ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
Mike, Can you allow access without mic and cam? Thanks, Marc -Original Message- To: ceph-users@ceph.io Subject: *SPAM* [ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects Hey all, We're live now with the latest Ceph tech talk! Join us:

[ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Mike Perez
Hey all, We're live now with the latest Ceph tech talk! Join us: https://bluejeans.com/908675367/browser -- Mike Perez he/him Ceph Community Manager M: +1-951-572-2633 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA @Thingee Thingee

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread Marc Roos
> Did you have any success with `ceph-volume` for activating your OSD? No, I have tried with ceph-volume prepare and ceph-volume activate, but got errors also. The only way for me to currently create an OSD without hassle is: ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create
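The two-step sequence Marc lands on can be sketched as below. `/dev/sdf` is his example device; the `run` wrapper makes the sketch echo-only on machines without ceph-volume, since the real commands are destructive and need root on an OSD host:

```shell
# Echo each command; execute it only if ceph-volume is present.
run() { echo "+ $*"; if command -v "$1" >/dev/null 2>&1; then "$@"; fi; }

DEV=/dev/sdf   # example device from Marc's mail
# Zap (destroying any prior LVM/partition state), then create a dmcrypt OSD:
run ceph-volume lvm zap --destroy "$DEV" \
  && run ceph-volume lvm create --data "$DEV" --dmcrypt
```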

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Dan van der Ster
Thanks Matt and Eric, Sorry for the basic question, but how can I as a ceph operator tell if a bucket is versioned? And for fixing this current situation, I would wait for the fix then reshard? (We want to reshard this bucket anyway because listing perf is way too slow for the user with 512 shard

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Eric Ivancich
Hi Matt and Dan, I too suspect it’s the issue Matt linked to. That bug only affects versioned buckets, so I’m guessing your bucket is versioned, Dan. This bug is triggered when the final instance of an object in a versioned bucket is deleted, but for reasons we do not yet understand, the object

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread Matt Larson
Hi Marc, Did you have any success with `ceph-volume` for activating your OSD? I am having a similar problem where the command `ceph-bluestore-tool` fails to be able to read a label for a previously created OSD on an LVM partition. I had previously been using the OSD without issues, but after a

[ceph-users] Re: cephfs tag not working

2020-10-01 Thread Eugen Block
Hi, I have a one-node-cluster (also 15.2.4) for testing purposes and just created a cephfs with the tag, it works for me. But my node is also its own client, so there's that. And it was installed with 15.2.4, no upgrade. For the 2nd, mds works, files can be created or removed, but client

[ceph-users] Re: rgw index shard much larger than others

2020-10-01 Thread Matt Benjamin
Hi Dan, Possibly you're reproducing https://tracker.ceph.com/issues/46456. That explains how the underlying issue worked; I don't remember how a bucket exhibiting this is repaired. Eric? Matt On Thu, Oct 1, 2020 at 8:41 AM Dan van der Ster wrote: > > Dear friends, > > Running 14.2.11, we hav

[ceph-users] CEPH iSCSI issue - ESXi command timeout

2020-10-01 Thread Golasowski Martin
Dear All, a week ago we had to reboot our ESXi nodes since our Ceph cluster suddenly stopped serving all I/O. We have identified a VM (vCenter appliance) which was swapping heavily and causing heavy load. However, since then we are experiencing strange issues, as if the cluster cannot handle an

[ceph-users] rgw index shard much larger than others

2020-10-01 Thread Dan van der Ster
Dear friends, Running 14.2.11, we have one particularly large bucket with a very strange distribution of objects among the shards. The bucket has 512 shards, and most shards have ~75k entries, but shard 0 has 1.75M entries: # rados -p default.rgw.buckets.index listomapkeys .dir.61c59385-085d-4caa
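Dan's per-shard counts come from listing omap keys on each index shard object. The bucket marker is truncated in the thread, so it stays a placeholder; the sketch below demonstrates only the counting step on fabricated sample output:

```shell
# Real cluster form (marker elided in the thread):
#   rados -p default.rgw.buckets.index \
#       listomapkeys .dir.<marker>.<shard> | wc -l
# Demo: count lines of sample listomapkeys output for one shard.
keys='obj_a
obj_b
obj_c'
echo "$keys" | wc -l   # 3 entries in this fabricated shard
```

Looping `<shard>` from 0 to 511 and comparing counts is how the skewed shard 0 (1.75M vs ~75k) shows up.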

[ceph-users] cephfs tag not working

2020-10-01 Thread Andrej Filipcic
Hi, on octopus 15.2.4 I have an issue with cephfs tag auth. The following works fine: client.f9desktop key: caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw pool=cephfs_data, allow rw pool=ssd_data, allow rw pool=fast_data, allow rw pool=ar
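For comparison, the tag-based form the subject line refers to would replace the per-pool OSD caps with a single tag cap, roughly as in the keyring fragment below (a sketch: the client name is from Andrej's mail, `data=cephfs` assumes the filesystem is named cephfs, and the tag syntax is the standard cephfs one):

```
client.f9desktop
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs"
```

With the tag cap, any pool carrying the cephfs application tag for that filesystem is matched automatically, so newly added data pools need no cap update.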

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-10-01 Thread Frank Schilder
Dear Mark and Nico, I think this might be the time to file a tracker report. As far as I can see, your set-up is as it should be, OSD operations on your clusters should behave exactly as on ours. I don't know of any other configuration option that influences placement calculation. The problems

[ceph-users] bugs ceph-volume scripting

2020-10-01 Thread Marc Roos
I have been creating LVM OSDs with: ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create --data /dev/sdf --dmcrypt Because this procedure failed: ceph-volume lvm zap --destroy /dev/sdf (waiting on slow human typing) ceph-volume lvm create --data /dev/sdf --dmcrypt However when I

[ceph-users] S3 Buckets with "object-lock"

2020-10-01 Thread Torsten Ennenbach
Hello, we are using Ceph 14.x for our S3 storage and some of our customers want to create a locked object bucket. BUT: While the creation of a locked bucket works, the objects are still deletable. Any ideas or hints? Best regards: Torsten