Basically, something is wrong.
The same settings and the same schema succeed on Jewel; only Luminous fails.
What might it be?
What do you need to know to solve this problem? Why does Ceph think I have
only 200GB of space?
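For reference, these are the stock commands I used to check where the 200G
figure comes from (a sketch; nothing here is specific to my setup):

    ceph df            # cluster-wide totals as Ceph sees them
    ceph osd df tree   # per-OSD sizes and utilization
    lsblk              # what the kernel reports for each disk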
Thanks,
Gencer.
TOTAL 200G 21381M 179G 10.44
MIN/MAX VAR: 1.00/1.00 STDDEV: 0.00
-Gencer.
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Monday, July 17, 2017 4:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
bluestore. This time I did not use BlueStore (I also removed it from the conf
file). The total is still seen as 200GB.
How can I make sure BlueStore is disabled (even if I don't pass any option)?
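One way I could verify which object store each OSD is actually running
(standard Luminous commands; osd id 0 is just an example):

    ceph osd metadata 0 | grep osd_objectstore   # prints "filestore" or "bluestore"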
-Gencer.
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Monday, July 17, 2017 5:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Also, one more thing: if I want to use BlueStore, how do I let it know that I
have more space? Do I need to specify a size at any point?
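As far as I can tell from the docs, BlueStore should take its size from the
underlying block device, so no size should need to be specified;
bluestore_block_size seems to matter only when the data store is backed by a
plain file. A sketch of how I would inspect it on a running OSD (osd.0 is an
example, run on the OSD host):

    ceph daemon osd.0 config get bluestore_block_size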
-Gencer.
-Original Message-
From: gen...@gencgiyen.com [mailto:gen...@gencgiyen.com]
Sent: Monday, July 17, 2017 6:04 PM
To: 'Wido den Hollander'
ceph-disk -v activate --mark-init systemd --mount /dev/sdb
Are you sure that we need to remove the "1" at the end?
Can you point me to any doc for this? Ceph's own documentation also shows
sdb1, sdc1...
If you have any sample, I will be very happy :)
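For comparison, the partition-based form from the documentation would look
like this (a sketch; device names are examples):

    # activate via the data partition, as the docs show:
    ceph-disk -v activate --mark-init systemd --mount /dev/sdb1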
-Gencer.
-Original Message-
using Jewel).
Thanks!
Gencer.
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> On July 17, 2017 at 5:03 PM, ... wrote:
It doesn't matter whether I have this [osd] section or not; the results are
the same.
I am open to all suggestions.
Thanks,
Gencer.
filestore_op_threads = 12
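For context, that setting lives in the [osd] section of my ceph.conf, roughly
like this (a sketch; only filestore_op_threads is taken from the thread):

    [osd]
    filestore_op_threads = 12   # worker threads for filestore operations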
On 2017-07-17 22:41, Patrick Donnelly wrote:
Hi Gencer,
On Mon, Jul 17, 2017 at 12:31 PM, wrote:
I located and applied almost every tuning setting/config I could find on the
internet. I couldn't manage to speed things up by one byte further. It is
always the same.
copy a big file to different targets in CephFS at the same time. Then I
looked at the network graphs and saw numbers up to 1.09 GB/s. But why can't a
single copy/rsync exceed 200 MB/s? I really wonder what prevents it.
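For the record, the parallel test looked roughly like this (a sketch; the
mount point, directories, and sizes are made up):

    # four simultaneous writers into CephFS (assumed mounted at /mnt/cephfs)
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/mnt/cephfs/dir$i/bigfile bs=4M count=2500 oflag=direct &
    done
    wait   # aggregate approached 1.09 GB/s; each single stream stayed near 200 MB/s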
Gencer.
On 2017-07-17 23:24, Peter Maloney wrote:
You should have a
-Original Message-
From: Patrick Donnelly [mailto:pdonn...@redhat.com]
Sent: Monday, July 17, 2017 11:21 PM
To: gen...@gencgiyen.com
Cc: Ceph Users
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
On Mon, Jul 17, 2017 at ... wrote:
total_used    30575M
total_avail   55857G
total_space   55887G
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Tuesday, July 18, 2017 2:31 AM
To: Gencer Genç ; Patrick Donnelly
Cc: Ceph Users
Subject: Re: [ceph-users] Yet another performance tuning for CephFS
What are
Not even 1 second).
So, false alarm here. Ceph is fast enough. I also ran stress tests (such as
multiple background writes at the same time) and they are very stable too.
Thanks for the heads-up to you and all the others.
Gencer.
-Original Message-
From: Patrick Donnelly [mailto:pdonn...@redhat.com]
the journal test?
They are not connected via NVMe, nor are they SSDs. Each node has 10x 3TB
SATA hard disk drives (HDDs).
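For what it's worth, the journal-style test I would run on such a disk is a
small synchronous direct write, something like this (a sketch; the target
path is only an example):

    # small sync writes, the access pattern an OSD journal produces
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/journal-test bs=4k count=1000 oflag=direct,dsync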
-Gencer.
-Original Message-
From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
Sent: Tuesday, July 18, 2017 2:47 PM
To: gen...@gencgiyen.com
Cc: ceph-users@lists.ceph.com
than filestore though if you use a large block size.
At the moment it looks good, but can you explain a bit more about block size?
(A reference page would also work.)
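To frame the question, this is the kind of comparison I have in mind (a
sketch using stock rados bench; the pool name is a placeholder):

    # small vs. large writes against a test pool, 16 concurrent ops each
    rados bench -p testpool 30 write -b 4096 -t 16      # 4 KB blocks
    rados bench -p testpool 30 write -b 4194304 -t 16   # 4 MB blocks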
Gencer.
-Original Message-
From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
Sent: Tuesday, July 18, 2017 5
15:05:10.125041 [INF] 3.5e scrub ok
2017-07-19 15:05:10.123522 [INF] 3.5e scrub starts
2017-07-19 15:05:14.613124 [WRN] Health check update: 914 pgs not scrubbed
for 86400 (PG_NOT_SCRUBBED)
2017-07-19 15:05:07.433748 [INF] 1.c4 scrub ok
...
Should this be s
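If it helps, the scrub scheduling settings behind the PG_NOT_SCRUBBED warning
can be read off a running OSD, and a scrub can be kicked manually (a sketch;
osd.0 is an example, the pgid is taken from the log above):

    ceph daemon osd.0 config show | grep scrub   # intervals, load thresholds, etc.
    ceph pg scrub 3.5e                           # manually schedule a scrub of one PG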