Re: [ceph-users] ceph 12.2.5 - atop DB/WAL SSD usage 0%

2018-04-30 Thread Hans van den Bogert
Shouldn't Steven see some data being written to the block/wal for object metadata? Though that might be negligible with 4MB objects. On 27-04-18 16:04, Serkan Çoban wrote: rados bench uses a 4MB block size for io. Try with an io size of 4KB; you will see the ssd being used for write operations.
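A small-block write bench along the lines Serkan suggests might look like this (the pool name `testbench` is an assumption; `-b` sets the write size in bytes). With 4KB writes, deferred writes land in the RocksDB WAL, so the DB/WAL SSD should show activity in atop/iostat:

```shell
# Create a throwaway pool and run a 4KB write benchmark for 60 seconds.
ceph osd pool create testbench 32 32
rados bench -p testbench 60 write -b 4096 -t 16 --no-cleanup
# Remove the benchmark objects and the pool afterwards.
rados -p testbench cleanup
ceph osd pool delete testbench testbench --yes-i-really-really-mean-it
```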

Re: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?

2018-04-19 Thread Hans van den Bogert
well, is not true (imo). Hans On Thu, Apr 19, 2018, 19:28 Steven Vacaroaia wrote: > fio is fine and megacli settings are as below (device with WT is the SSD) > > > Vendor Id : TOSHIBA > > Product Id : PX05SMB040Y > >

Re: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?

2018-04-19 Thread Hans van den Bogert
DB ( on separate SSD or same HDD) Thanks Steven On Thu, 19 Apr 2018 at 12:06, Hans van den Bogert wrote: > I take it that the first bench is with replication size 2, the second > bench is with replication size 3? Same for the 4 node OSD scenario? > > Also please let us know how you

Re: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?

2018-04-19 Thread Hans van den Bogert
4194304
> Bandwidth (MB/sec):     44.0793
> Stddev Bandwidth:       55.3843
> Max bandwidth (MB/sec): 232
> Min bandwidth (MB/sec): 0
> Average IOPS:           11
> Stddev IOPS:            13
> Max IOPS:               58
> Min IOPS:               0
> Average Latency(s

Re: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?

2018-04-19 Thread Hans van den Bogert
Hi Steven, There is only one bench. Could you show multiple benches of the different scenarios you discussed? Also provide hardware details. Hans On Apr 19, 2018 13:11, "Steven Vacaroaia" wrote: Hi, Any idea why 2 servers with one OSD each will provide better performance than 3

Re: [ceph-users] scalability new node to the existing cluster

2018-04-18 Thread Hans van den Bogert
just fine in my environment, and I don’t have experience with them. Good luck, Hans > On Apr 18, 2018, at 1:32 PM, Serkan Çoban wrote: > > You can add new OSDs with 0 weight and edit below script to increase > the osd weights instead of decreasing. > > https://github
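The gradual weight-in that the quoted script performs can be sketched as a small loop (the OSD id `osd.42`, the step sizes, and the final CRUSH weight of 1.0 are all assumptions for illustration):

```shell
# Add the new OSD with an initial CRUSH weight of 0, then raise the
# weight in small steps, letting backfill finish between steps.
ceph osd crush reweight osd.42 0.0
for w in 0.2 0.4 0.6 0.8 1.0; do
    ceph osd crush reweight osd.42 "$w"
    # Wait until the cluster has settled before the next increase.
    while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
done
```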

Re: [ceph-users] Luminous 12.2.3 release date?

2018-02-12 Thread Hans van den Bogert
Hi Wido, Did you ever get an answer? I'm eager to know as well. Hans On Tue, Jan 30, 2018 at 10:35 AM, Wido den Hollander wrote: > Hi, > > Is there a ETA yet for 12.2.3? Looking at the tracker there aren't that many > outstanding issues: http://tracker.ceph.com/projec

[ceph-users] Retrieving ceph health from restful manager plugin

2018-02-05 Thread Hans van den Bogert
as such a restful API call would be preferred. Regards, Hans ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Redirect for restful API in manager

2018-02-05 Thread Hans van den Bogert
em to be the case that the restful API redirects the client. Can anybody verify that? If it doesn't redirect, will this be added in the near future? Regards, Hans

[ceph-users] Scrub mismatch since upgrade to Luminous (12.2.2)

2018-01-24 Thread hans
Since upgrade to Ceph Luminous (12.2.2) from Jewel we get scrub mismatch errors every day at the same time (19:25), how can we fix them? Seems to be the same problem as described at http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023202.html (can't reply to archived messages),

Re: [ceph-users] Fwd: Ceph team involvement in Rook (Deploying Ceph in Kubernetes)

2018-01-21 Thread Hans van den Bogert
Should I summarize this as ceph-helm being EOL? If I'm spinning up a toy cluster for a homelab, should I invest time in Rook, or stay with ceph-helm for now? On Fri, Jan 19, 2018 at 11:55 AM, Kai Wagner wrote: > Just for those of you who are not subscribed to ceph-users. > > > For

Re: [ceph-users] Increasing PG number

2018-01-02 Thread Hans van den Bogert
s also incomplete, since you also need to change the ‘pgp_num’ as well. Regards, Hans > On Jan 2, 2018, at 4:41 PM, Vladimir Prokofev wrote: > > Increased number of PGs in multiple pools in a production cluster on 12.2.2 > recently - zero issues. > CEPH claims that increasing pg_num
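The point about `pgp_num` can be sketched as follows (the pool name `mypool` and the target of 256 PGs are assumptions): `pg_num` creates the placement groups, but `pgp_num` is what makes them eligible for placement and actually triggers data movement.

```shell
# Raise pg_num first, then pgp_num to the same value.
ceph osd pool set mypool pg_num 256
ceph osd pool set mypool pgp_num 256
```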

Re: [ceph-users] The way to minimize osd memory usage?

2017-12-11 Thread Hans van den Bogert
There are probably multiple reasons. However, I just wanted to chime in that I set my cache size to 1G and I constantly see OSD memory converge to ~2.5GB. In [1] you can see the difference between a node with 4 OSDs, v12.2.2, on the left, and a node with 4 OSDs, v12.2.1, on the right. I really hoped

[ceph-users] osd/bluestore: Get block.db usage

2017-12-04 Thread Hans van den Bogert
the block.db is full? -- For instance, I could not care for the extra latency when object metadata gets spilled to the backing disk if it for RGW-related data, in contrast to RBD objects metadata, which should remain on the faster SSD-based block.db.

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Hans van den Bogert
verify that you did that part? > On Nov 15, 2017, at 10:41 AM, Hans van den Bogert > wrote: > > Hi, > > Can you show the contents of the file, /etc/yum.repos.d/ceph.repo ? > > Regards, > > Hans >> On Nov 15, 2017, at 10:27 AM, Ragan, Tj (Dr.) >> wr

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Hans van den Bogert
Hi, Can you show the contents of the file, /etc/yum.repos.d/ceph.repo ? Regards, Hans > On Nov 15, 2017, at 10:27 AM, Ragan, Tj (Dr.) > wrote: > > Hi All, > > I feel like I’m doing something silly. I’m spinning up a new cluster, and > followed the instructions on the
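For comparison, a ceph.repo pointing at Luminous typically looks like this (the `el7` path assumes CentOS 7; adjust for your distribution):

```ini
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```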

Re: [ceph-users] Fwd: Luminous RadosGW issue

2017-11-09 Thread Hans van den Bogert
should be used in your config. If I'm right, you should use ‘client.rgw.radosgw’ in your ceph.conf. > On Nov 9, 2017, at 5:25 AM, Sam Huracan wrote: > > @Hans: Yes, I tried to redeploy RGW, and ensure client.radosgw.gateway is the > same in ceph.conf. > Everything go well, serv

Re: [ceph-users] Fwd: Luminous RadosGW issue

2017-11-08 Thread Hans van den Bogert
Are you sure you deployed it with the client.radosgw.gateway name as well? Try to redeploy the RGW and make sure the name you give it corresponds to the name you give in the ceph.conf. Also, do not forget to push the ceph.conf to the RGW machine. On Wed, Nov 8, 2017 at 11:44 PM, Sam Huracan wrote
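If the gateway was deployed under the name client.radosgw.gateway, the matching ceph.conf section would look roughly like this (the host name, port, and keyring path are assumptions; the section name must match the name the daemon was started with):

```ini
[client.radosgw.gateway]
host = rgw-host
rgw frontends = civetweb port=7480
keyring = /var/lib/ceph/radosgw/ceph-radosgw.gateway/keyring
```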

Re: [ceph-users] Ceph versions not showing RGW

2017-11-02 Thread Hans van den Bogert
Just to get this really straight, Jewel OSDs do send this metadata? Otherwise I'm probably mistaken that I ever saw 10.2.x versions in the output. Thanks, Hans On 2 Nov 2017 12:31 PM, "John Spray" wrote: > On Thu, Nov 2, 2017 at 11:16 AM, Hans van den Bogert

[ceph-users] Ceph versions not showing RGW

2017-11-02 Thread Hans van den Bogert
were still on Jewel. What are the semantics of the `ceph versions` ? -- Was I wrong in expecting that Jewel RGWs should show up there? Thanks, Hans

Re: [ceph-users] PGs inconsistent, do I fear data loss?

2017-11-02 Thread Hans van den Bogert
Never mind, I should’ve read the whole thread first. > On Nov 2, 2017, at 10:50 AM, Hans van den Bogert wrote: > > >> On Nov 1, 2017, at 4:45 PM, David Turner wrote: >> >> All it takes for data loss is that an osd on

Re: [ceph-users] PGs inconsistent, do I fear data loss?

2017-11-02 Thread Hans van den Bogert
> On Nov 1, 2017, at 4:45 PM, David Turner wrote: > > All it takes for data loss is that an osd on server 1 is marked down and a > write happens to an osd on server 2. Now the osd on server 2 goes down > before the osd on server 1 has finished backfilling and the first osd > receives a reque

Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)

2017-10-25 Thread Hans van den Bogert
Very interesting. I've been toying around with Rook.io [1]. Did you know of this project, and if so can you tell if ceph-helm and Rook.io have similar goals? Regards, Hans [1] https://rook.io/ On 25 Oct 2017 21:09, "Sage Weil" wrote: > There is a new repo under the ceph org

[ceph-users] Drive write cache recommendations for Luminous/Bluestore

2017-10-23 Thread Hans van den Bogert
or at least elaborate on the subject. 2. Depending on item 1., could and should I enable drive write cache for the disks attached to a HP b140i controller. Thanks! Hans

Re: [ceph-users] Ceph delete files and status

2017-10-20 Thread Hans van den Bogert
force the garbage collector with something like: $ radosgw-admin gc process I haven’t used this command to actually test if this would have the intended result of freeing up space. But it wouldn’t hurt anything. Regards, Hans > On Oct 19, 2017, at 11:06 PM, nigel davies wrote: >
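A hedged sketch of the commands involved: `gc list` shows objects awaiting collection, and `gc process` runs a collection cycle immediately instead of waiting for the next scheduled one.

```shell
# Show all objects currently pending garbage collection.
radosgw-admin gc list --include-all
# Run garbage collection now.
radosgw-admin gc process
```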

Re: [ceph-users] High mem with Luminous/Bluestore

2017-10-19 Thread Hans van den Bogert
> Memory usage is still quite high here even with a large onode cache! > Are you using erasure coding? I recently was able to reproduce a bug in > bluestore causing excessive memory usage during large writes with EC, > but have not tracked down exactly what's going on yet. > > Mark No, this is

Re: [ceph-users] High mem with Luminous/Bluestore

2017-10-18 Thread Hans van den Bogert
ke HDDs and monitor the memory usage. Thanks, Hans On Wed, Oct 18, 2017 at 11:56 AM, Wido den Hollander wrote: > > > Op 18 oktober 2017 om 11:41 schreef Hans van den Bogert < > hansbog...@gmail.com>: > > > > > > Hi All, > > > > I've c

[ceph-users] High mem with Luminous/Bluestore

2017-10-18 Thread Hans van den Bogert
…": {
    "items": 284680,
    "bytes": 91233440
},
"osdmap": {
    "items": 14287,
    "bytes": 731680
},
"osdmap_mapping": { "items": 0, "bytes": 0 },
"pgmap": { "items": 0, "bytes": 0 },
"mds_co": { "items": 0, "bytes": 0 },
"unittest_1": { "items": 0, "bytes": 0 },
"unittest_2": { "items": 0, "bytes": 0 },
"total": {
    "items": 434277707,
    "bytes": 4529200468
}
}
Regards, Hans

Re: [ceph-users] How to get current min-compat-client setting

2017-10-16 Thread Hans van den Bogert
Thanks, that’s what I was looking for. However, should we create the `get-require-min-compat-client` option nonetheless? I’m willing to write the patch, unless someone thinks it’s not a good idea. Regards Hans > On Oct 16, 2017, at 12:13 PM, Wido den Hollander wrote:

[ceph-users] How to get current min-compat-client setting

2017-10-13 Thread Hans van den Bogert
get-* variant of the above command. Does anybody know how I can retrieve the current setting with perhaps lower-level commands/tools? Thanks, Hans
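One lower-level way to read the value, sketched here as an assumption: the setting is recorded in the OSD map, so a plain dump exposes it even without a dedicated "get" command.

```shell
# require_min_compat_client is part of the OSD map.
ceph osd dump | grep min_compat_client
```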

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
Aug 3, 2017 at 1:55 PM, Hans van den Bogert wrote: > What are the implications of this? Because I can see a lot of blocked > requests piling up when using 'noout' and 'nodown'. That probably makes > sense though. > Another thing, now when the OSDs come back onli

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
cted? On Thu, Aug 3, 2017 at 1:36 PM, linghucongsong wrote: > > > set the osd noout nodown > > > > > At 2017-08-03 18:29:47, "Hans van den Bogert" > wrote: > > Hi all, > > One thing which has bothered since the beginning of using ceph is that a >
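A minimal reboot procedure along these lines, as a sketch (note that setting 'nodown' as well can lead to blocked requests, as observed elsewhere in this thread, so many operators set only 'noout'):

```shell
# Keep the cluster from marking this node's OSDs "out" (and starting
# a rebalance) while the host is down.
ceph osd set noout
reboot
# Once the node's OSDs have rejoined the cluster:
ceph osd unset noout
```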

[ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
? Thanks, Hans

[ceph-users] Linear space complexity or memory leak in `Radosgw-admin bucket check --fix`

2017-07-25 Thread Hans van den Bogert
The radosgw-admin process is killed eventually by the Out-Of-Memory-Manager. Is this high RAM usage to be expected, or should I file a bug? Regards, Hans

[ceph-users] Crash on startup

2017-02-01 Thread Hans van den Bogert
hough correct me if I'm wrong, that replaying the journal fails. Is this something which can just happen and should I just wipe the whole OSD and recreate a new OSD? Or is this a symptom of a bigger issue? Regards, Hans [1] http://pastebin.com/yBqkAqix