Shouldn't Steven see some data being written to the block/wal for object
metadata? Though that might be negligible with 4MB objects
On 27-04-18 16:04, Serkan Çoban wrote:
rados bench is using a 4MB block size for I/O. Try with an I/O size of 4KB;
you will see the SSD being used for write operations.
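For example, a minimal sketch assuming a test pool named 'bench' already exists:

  $ rados bench -p bench 60 write -b 4096 -t 16   # 60s of 4KB writes, 16 in flight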
well, is not
true (imo).
Hans
On Thu, Apr 19, 2018, 19:28 Steven Vacaroaia wrote:
> fio is fine and the MegaCLI settings are as below (the device with WT is the SSD)
>
>
> Vendor Id : TOSHIBA
>
> Product Id : PX05SMB040Y
>
>
DB ( on separate SSD or same HDD)
Thanks
Steven
On Thu, 19 Apr 2018 at 12:06, Hans van den Bogert
wrote:
> I take it that the first bench is with replication size 2, the second
> bench is with replication size 3? Same for the 4 node OSD scenario?
>
> Also please let us know how you
4194304
> Bandwidth (MB/sec): 44.0793
> Stddev Bandwidth: 55.3843
> Max bandwidth (MB/sec): 232
> Min bandwidth (MB/sec): 0
> Average IOPS: 11
> Stddev IOPS: 13
> Max IOPS: 58
> Min IOPS: 0
> Average Latency(s
Hi Steven,
There is only one bench. Could you show multiple benches of the different
scenarios you discussed? Also provide hardware details.
Hans
On Apr 19, 2018 13:11, "Steven Vacaroaia" wrote:
Hi,
Any idea why 2 servers with one OSD each will provide better performance
than 3
just
fine in my environment, and I don’t have experience with them.
Good luck,
Hans
> On Apr 18, 2018, at 1:32 PM, Serkan Çoban wrote:
>
> You can add new OSDs with 0 weight and edit the script below to increase
> the OSD weights instead of decreasing them.
>
> https://github
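Not the linked script (its URL is truncated here), just a rough sketch of the idea,
assuming a new osd.12 and a target CRUSH weight of 1.0:

  for w in 0.2 0.4 0.6 0.8 1.0; do
      ceph osd crush reweight osd.12 $w                          # next step up
      until ceph health | grep -q HEALTH_OK; do sleep 60; done   # wait for backfill to settle
  done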
Hi Wido,
Did you ever get an answer? I'm eager to know as well.
Hans
On Tue, Jan 30, 2018 at 10:35 AM, Wido den Hollander wrote:
> Hi,
>
> Is there an ETA yet for 12.2.3? Looking at the tracker there aren't that many
> outstanding issues: http://tracker.ceph.com/projec
as
such a restful API call would be preferred.
Regards,
Hans
em to be the case that the restful API redirects
the client. Can anybody verify that? If it doesn't redirect, will this
be added in the near future?
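One way to test this yourself (an assumption-laden sketch: the restful module's
default port 8003 and an API key created with 'ceph restful create-key admin' are
assumed; curl does not follow redirects by default, so a 3xx status here would
mean the standby mgr redirects to the active one):

  $ curl -k -s -o /dev/null -w '%{http_code}\n' -u admin:$API_KEY \
        https://standby-mgr:8003/server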
Regards,
Hans
Since upgrading to Ceph Luminous (12.2.2) from Jewel, we have been getting scrub
mismatch errors every day at the same time (19:25). How can we fix them? It seems
to be the same problem as described at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023202.html
(I can't reply to archived messages),
Should I take this to mean that ceph-helm is EOL? If I'm spinning up a
toy cluster for a homelab, should I invest time in Rook, or stay with
ceph-helm for now?
On Fri, Jan 19, 2018 at 11:55 AM, Kai Wagner wrote:
> Just for those of you who are not subscribed to ceph-users.
>
>
> For
s also incomplete, since you also need to change ‘pgp_num’.
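For example, with a hypothetical pool name and PG count:

  $ ceph osd pool set mypool pg_num 256
  $ ceph osd pool set mypool pgp_num 256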
Regards,
Hans
> On Jan 2, 2018, at 4:41 PM, Vladimir Prokofev wrote:
>
> Increased number of PGs in multiple pools in a production cluster on 12.2.2
> recently - zero issues.
> CEPH claims that increasing pg_num
There are probably multiple reasons. However, I just wanted to chime in that I set
my cache size to 1G and I constantly see OSD memory converge to ~2.5GB.
In [1] you can see the difference between a node with 4 OSDs, v12.2.2, on the
left; and a node with 4 OSDs v12.2.1 on the right. I really hoped
the block.db is
full? -- For instance, I would not mind the extra latency when
object metadata gets spilled to the backing disk if it is RGW-related
data, in contrast to RBD object metadata, which should remain on the
faster SSD-based block.db.
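One way to see how full block.db is and whether metadata has already spilled to
the slow device (osd.0 and the jq usage are assumptions; run on the OSD host):

  $ ceph daemon osd.0 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'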
verify that you did that part?
> On Nov 15, 2017, at 10:41 AM, Hans van den Bogert
> wrote:
>
> Hi,
>
> Can you show the contents of the file, /etc/yum.repos.d/ceph.repo ?
>
> Regards,
>
> Hans
>> On Nov 15, 2017, at 10:27 AM, Ragan, Tj (Dr.)
>> wr
Hi,
Can you show the contents of the file, /etc/yum.repos.d/ceph.repo ?
Regards,
Hans
> On Nov 15, 2017, at 10:27 AM, Ragan, Tj (Dr.)
> wrote:
>
> Hi All,
>
> I feel like I’m doing something silly. I’m spinning up a new cluster, and
> followed the instructions on the
should be used in your config. If I'm right,
you should use ‘client.rgw.radosgw’ in your ceph.conf.
> On Nov 9, 2017, at 5:25 AM, Sam Huracan wrote:
>
> @Hans: Yes, I tried to redeploy RGW and ensured client.radosgw.gateway is the
> same in ceph.conf.
> Everything goes well, serv
Are you sure you deployed it with the client.radosgw.gateway name as
well? Try to redeploy the RGW and make sure the name you give it
matches the name in the ceph.conf. Also, do not forget
to push the ceph.conf to the RGW machine.
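For example, assuming ceph-deploy is in use and the RGW host is called 'rgw1'
(the systemd unit name below is derived from the client.radosgw.gateway name and
is an assumption):

  $ ceph-deploy --overwrite-conf config push rgw1
  $ ssh rgw1 systemctl restart ceph-radosgw@radosgw.gateway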
On Wed, Nov 8, 2017 at 11:44 PM, Sam Huracan wrote
Just to get this really straight, Jewel OSDs do send this metadata?
Otherwise I'm probably mistaken that I ever saw 10.2.x versions in the
output.
Thanks,
Hans
On 2 Nov 2017 12:31 PM, "John Spray" wrote:
> On Thu, Nov 2, 2017 at 11:16 AM, Hans van den Bogert
were still on Jewel.
What are the semantics of `ceph versions`? Was I wrong in expecting
that Jewel RGWs should show up there?
Thanks,
Hans
Never mind, I should’ve read the whole thread first.
> On Nov 2, 2017, at 10:50 AM, Hans van den Bogert wrote:
>
>
>> On Nov 1, 2017, at 4:45 PM, David Turner wrote:
>>
>> All it takes for data loss is that an osd on
> On Nov 1, 2017, at 4:45 PM, David Turner wrote:
>
> All it takes for data loss is that an osd on server 1 is marked down and a
> write happens to an osd on server 2. Now the osd on server 2 goes down
> before the osd on server 1 has finished backfilling and the first osd
> receives a reque
Very interesting.
I've been toying around with Rook.io [1]. Did you know of this project, and
if so can you tell if ceph-helm and Rook.io have similar goals?
Regards,
Hans
[1] https://rook.io/
On 25 Oct 2017 21:09, "Sage Weil" wrote:
> There is a new repo under the ceph org
or at least elaborate on the subject.
2. Depending on the answer to item 1, could and should I enable the drive write
cache for the disks attached to an HP B140i controller?
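For what it's worth, a hedged sketch of how one might inspect the on-disk write
cache (/dev/sda is a placeholder; behaviour depends on how the B140i exposes the
disks):

  $ smartctl -g wcache /dev/sda   # SAS/SCSI disks
  $ hdparm -W /dev/sda            # SATA disks; 'hdparm -W1 /dev/sda' would enable it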
Thanks!
Hans
force the garbage collector with
something like:
$ radosgw-admin gc process
I haven’t actually tested whether this command has the intended result of freeing
up space, but it shouldn’t hurt anything.
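A hedged example of inspecting what is pending before forcing a run:

  $ radosgw-admin gc list --include-all   # objects waiting for garbage collection
  $ radosgw-admin gc process              # process them now instead of waiting for the GC interval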
Regards,
Hans
> On Oct 19, 2017, at 11:06 PM, nigel davies wrote:
>
> Memory usage is still quite high here even with a large onode cache!
> Are you using erasure coding? I recently was able to reproduce a bug in
> bluestore causing excessive memory usage during large writes with EC,
> but have not tracked down exactly what's going on yet.
>
> Mark
No, this is
ke HDDs and monitor the memory usage.
Thanks,
Hans
On Wed, Oct 18, 2017 at 11:56 AM, Wido den Hollander wrote:
>
> > On 18 October 2017 at 11:41, Hans van den Bogert <hansbog...@gmail.com> wrote:
> >
> >
> > Hi All,
> >
> > I've c
": {
    "items": 284680,
    "bytes": 91233440
},
"osdmap": {
    "items": 14287,
    "bytes": 731680
},
"osdmap_mapping": {
    "items": 0,
    "bytes": 0
},
"pgmap": {
    "items": 0,
    "bytes": 0
},
"mds_co": {
    "items": 0,
    "bytes": 0
},
"unittest_1": {
    "items": 0,
    "bytes": 0
},
"unittest_2": {
    "items": 0,
    "bytes": 0
},
"total": {
    "items": 434277707,
    "bytes": 4529200468
}
}
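For reference, a dump like the one above comes from the OSD's admin socket
(osd.0 assumed; run on the host where that OSD lives):

  $ ceph daemon osd.0 dump_mempools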
Regards,
Hans
Thanks, that’s what I was looking for.
However, should we create the `get-require-min-compat-client luminous` option
nonetheless? I’m willing to write the patch, unless someone thinks it’s not a
good idea.
Regards
Hans
> On Oct 16, 2017, at 12:13 PM, Wido den Hollander wrote:
>
get-* variant of the above command. Does anybody know
how I can retrieve the current setting with perhaps lower-level commands/tools?
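One lower-level way that should work (the jq usage and the exact JSON field name
are assumptions on my part):

  $ ceph osd dump --format json | jq -r .require_min_compat_client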
Thanks,
Hans
Aug 3, 2017 at 1:55 PM, Hans van den Bogert
wrote:
> What are the implications of this? Because I can see a lot of blocked
> requests piling up when using 'noout' and 'nodown'. That probably makes
> sense though.
> Another thing, now when the OSDs come back onli
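For context, the flags under discussion, set before maintenance and unset afterwards:

  $ ceph osd set noout
  $ ceph osd set nodown
  # ... do the maintenance ...
  $ ceph osd unset nodown
  $ ceph osd unset noout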
cted?
On Thu, Aug 3, 2017 at 1:36 PM, linghucongsong
wrote:
>
>
> set the osd noout nodown
>
>
>
>
> At 2017-08-03 18:29:47, "Hans van den Bogert"
> wrote:
>
> Hi all,
>
> One thing which has bothered me since the beginning of using Ceph is that a
>
?
Thanks,
Hans
osgw-admin process is killed eventually by the Out-Of-Memory-Manager. Is
this high RAM usage to be expected, or should I file a bug?
Regards,
Hans
hough correct me if I'm wrong,
that replaying the journal fails.
Is this something which can just happen, and should I just wipe the OSD
and recreate it? Or is this a symptom of a bigger issue?
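If it does come down to recreating it, a rough sketch of retiring the broken OSD
first (osd.7 is a placeholder; this discards its data, so only do it while the
rest of the cluster is healthy):

  $ ceph osd out 7
  $ systemctl stop ceph-osd@7
  $ ceph osd crush remove osd.7
  $ ceph auth del osd.7
  $ ceph osd rm 7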
Regards,
Hans
[1] http://pastebin.com/yBqkAqix