Sorry for top-posting, but...
The Intel 35xx drives are rated for a much lower DWPD (drive-writes-per-day)
than the 36xx or 37xx models.
Keep in mind that a single SSD that acts as journal for 5 OSDs will receive ALL
writes for those 5 OSDs before the data is moved off to the OSDs' actual data
drives.
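As a rough back-of-the-envelope sketch (the per-OSD write rate below is a made-up example, not a measurement), the journal SSD's daily write load is simply the sum of what its OSDs ingest:

  # Rough sketch with made-up numbers: daily writes hitting one journal SSD
  # that fronts 5 OSDs, each ingesting ~200 GB/day.
  per_osd_gb_per_day=200
  num_osds=5
  echo "journal SSD sees ~$((per_osd_gb_per_day * num_osds)) GB of writes per day"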
Also, I would set
osd_crush_initial_weight = 0
in ceph.conf and then increase the CRUSH weight step by step via
ceph osd crush reweight osd.36 0.05000
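Something like the following untested sketch would do the step-by-step ramp (the OSD id, target weight, step size and sleep interval are only examples; adjust for your drives and let backfill settle between steps):

  # Untested sketch: ramp osd.36 up to its target CRUSH weight in small steps.
  target=1.81940   # example final weight for a ~2 TB drive
  step=0.05
  current=0
  while (( $(echo "$current < $target" | bc -l) )); do
    current=$(echo "$current + $step" | bc -l)
    if (( $(echo "$current > $target" | bc -l) )); then current=$target; fi
    ceph osd crush reweight osd.36 "$current"
    sleep 600      # or wait until 'ceph -s' reports HEALTH_OK again
  done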
On 25 April 2017 at 23:19:08 MESZ, Reed Dier wrote:
>Others will likely be able to provide some better responses, but I’ll
>take a shot to see if anyt
Hi David,
Thanks so much for your reply. I will pass this information along to my
team.
Kind Regards,
Scott Lewis
Sr. Developer & Head of Content
Iconfinder Aps
http://iconfinder.com
http://twitter.com/iconfinder
"Helping Designers Make a Living Doing What They Love"
On Mon, May 1, 2017 at 12:
Also, I checked the Ceph logs and I see a ton of messages like this, which
seem like they could be (and probably are) related to the I/O issue:
2017-05-01 11:20:22.096657 7f12b6aa5700 0 -- 10.0.1.1:6846/3810 >>
10.0.33.1:6811/3413 pipe(0x76563600 sd=31 :53523 s=1 pgs=0 cs=1 l=0
c=0x8264ec60).connect claims
Hi,
"Yesterday I replaced one of the 100 GB volumes with a new 2 TB volume
which includes creating a snapshot, detaching the old volume, attaching the
new volume, then using parted to correctly set the start/end of the data
partition. This all went smoothly and no issues reported from AWS or the
s
Perfect. There's the answer, thanks. DWPD seems like an idiotic and
meaningless measurement, but the endurance figures on those data sheets
give the total TB or PB written, which is what I really want to see.
DC S3510: 0.56 TBW/GB of drive capacity
DC S3610: 6.60 TBW/GB of drive capacity
DC S3710
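For what it's worth, converting a DWPD rating into total bytes written is simple arithmetic; a quick sketch (the 0.3 DWPD and 5-year warranty figures below are only illustrative, take the real numbers from the data sheet):

  # Illustrative only: lifetime TB written = capacity * DWPD * warranty days.
  capacity_gb=480
  dwpd=0.3
  warranty_years=5
  awk -v c=$capacity_gb -v d=$dwpd -v y=$warranty_years \
    'BEGIN { printf "endurance ~= %.0f TB written\n", c*d*y*365/1000 }'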
Hi,
Lots of good info on SSD endurance in this thread.
For Ceph journals you should also consider the size of the backing OSDs: the
SSD journal won't last as long backing 5x8TB OSDs as it would backing 5x1TB OSDs.
For example, the S3510 480GB (275TB of endurance), if backing 5x8TB (40TB)
OSDs, will provide ver
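As a rough sketch of that arithmetic (the sustained ingest rate below is hypothetical), the journal's life expectancy falls straight out of the endurance figure:

  # Hypothetical numbers: time for a 275 TB endurance journal SSD to wear out
  # at a given sustained write rate into the 5 OSDs behind it.
  endurance_tb=275
  ingest_mb_per_s=20
  awk -v e=$endurance_tb -v r=$ingest_mb_per_s \
    'BEGIN { tbday = r*86400/1000/1000;
             printf "~%.2f TB/day -> ~%.1f years to the endurance limit\n",
                    tbday, e/tbday/365 }'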
I'm by no means a Ceph expert, but I feel this is not a fair representation
of Ceph. I am not saying the numbers would be better or worse, just that I
see some major holes that don't represent a typical Ceph setup.
1 MON? Most clusters have a minimum of 3.
1 OSD? Basically all your reads and writes are going
I can attest to this. I had a cluster that used 3510's for the first rack
and then switched to 3710's after that. We had 3TB drives and every single
3510 ran out of writes after 1.5 years. We noticed because we tracked down
incredibly slow performance to a subset of OSDs and each time they had a
My intention is just to identify the root cause of the excessive time spent
on a "table create" operation on CephFS. I am *not* trying to benchmark with
my testing. Sorry if that wasn't clear in my mail.
I am sure the time spent would be less if I had a proper Ceph setup.
But I believe even t
On Mon, May 1, 2017 at 9:17 AM, Babu Shanmugam wrote:
>
> My intention is just to identify the root cause of the excessive time spent
> on a "table create" operation on CephFS. I am *not* trying to benchmark with
> my testing. Sorry if that wasn't clear in my mail.
>
> I am sure the time spent wou
Hi all,
I’ve run across a peculiar issue on 10.2.7. On my 3x replicated cache tiering
cache pool, routine scrubbing suddenly found a bunch of PGs with
size_mismatch_oi errors. From the “rados list-inconsistent-pg tool”[1], I see
that all OSDs are reporting size 0 for a particular pg. I’ve check
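For reference, the commands involved look roughly like this (the pool name and PG id below are placeholders, not my real ones):

  # Placeholders: "cachepool" and PG 5.3f are examples only.
  rados list-inconsistent-pg cachepool                    # list inconsistent PGs in the pool
  rados list-inconsistent-obj 5.3f --format=json-pretty   # per-object detail, e.g. size_mismatch_oi
  # ceph pg repair 5.3f   # only once it's clear which copy is authoritative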
There is one more thing that I noticed when using cephfs instead of RBD for
MySQL, and that is CPU usage on the client.
When using RBD, I was using 99% of the CPUs. When I switched to cephfs, the
same tests were using 60% of the CPU. Performance was about equal. This test
was an OLTP sysbench usi
Hello all,
I am trying Ceph Jewel on Ubuntu 16.04 with Kubernetes 1.6.2 and Docker 1.11.2,
but for some unknown reason it's not coming up and keeps crashing; all ceph
commands are failing.
from *ceph-mon-check:*
kubectl logs -n ceph ceph-mon-check-3190136794-21xg4 -f
subprocess.CalledProcessError:
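A few standard kubectl commands that might help narrow this down (the pod name is taken from the output above):

  kubectl -n ceph get pods -o wide                                 # which pods are CrashLoopBackOff
  kubectl -n ceph describe pod ceph-mon-check-3190136794-21xg4     # events and restart reasons
  kubectl -n ceph logs --previous ceph-mon-check-3190136794-21xg4  # log from the crashed container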
Hi,
I've upgraded a tiny Jewel cluster from 10.2.2 to 10.2.7 and now
one of the OSDs fails to start.
here's (hopefully) important part of the backtrace:
2017-05-01 19:54:17.627262 7fb2bbf78800 10 filestore(/var/lib/ceph/osd/ceph-1)
stat meta/#-1:c0371625:::snapmapper:0# = 0 (size 0)
2017-05-01 19:54:
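If it helps, a sketch of how I'd try to get a fuller trace out of the failing daemon (assuming it is osd.1 with the default data path; the log levels are just examples):

  # Run the failing OSD in the foreground with verbose logging.
  ceph-osd -f -i 1 --debug-osd 20 --debug-filestore 20 --debug-journal 20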
Hello Ceph-users,
Florian has been helping with some issues on our proof-of-concept
cluster, where we've been experiencing these issues. Thanks for the
replies so far. I wanted to jump in with some extra details.
All of our testing has been with scrubbing turned off, to remove that as
a factor.
One additional detail, we also did filestore testing using Jewel and saw
substantially similar results to those on Kraken.
On Mon, May 1, 2017 at 2:07 PM, Patrick Dinnen wrote:
> Hello Ceph-users,
>
> Florian has been helping with some issues on our proof-of-concept cluster,
> where we've been e
Hi Patrick,
Is there any chance that you can graph the XFS stats to see if there is an
increase in inode/dentry cache misses as the ingest performance drops off? At
least that might confirm the issue.
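Something along these lines would capture the relevant counters over time so they can be graphed later (standard Linux paths; the sampling interval is arbitrary):

  # Sample XFS and slab counters every 60 s (reading slabinfo needs root).
  while true; do
    date +%s
    cat /proc/fs/xfs/stat                            # global XFS stats
    grep -E '^(xfs_inode|dentry) ' /proc/slabinfo    # inode/dentry cache sizes
    sleep 60
  done >> xfs_stats.log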
The only other thing I can think of would be to try running the OSDs on top of
something l
Thanks☺
We are using Hammer 0.94.5. Which commit is supposed to fix this bug? Thank you.
From: David Turner [mailto:drakonst...@gmail.com]
Sent: 25 April 2017 20:17
To: 许雪寒; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large META directory within each OSD's directory
Which version of Ceph are yo
> On 28 April 2017 at 19:14, Sage Weil wrote:
>
>
> Hi everyone,
>
> Are there any osd or filestore options that operators are tuning for
> all-SSD clusters? If so (and they make sense) we'd like to introduce them
> as defaults for ssd-backed OSDs.
>
osd_op_threads and osd_disk_threads.
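For example, in the [osd] section of ceph.conf (the values below are only an illustration, not a recommendation; tune them to your hardware):

  [osd]
  osd_op_threads = 8      # default is 2
  osd_disk_threads = 4    # default is 1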
Hi all,
I was wondering what happens when reads are issued to an RBD device with no
previously written data. Can somebody explain how such requests flow from
rbd (client) into OSDs and whether any of these reads would hit the disks
at all or whether OSD metadata would recognize that there is no da
Hello,
I have added 5 new Ceph OSD nodes to my Ceph cluster. I want to increase the
PG/PGP numbers of my pools based on the new OSD count, and at the same time I
need to increase the newly added OSDs' weight from 0 -> 1.
My question is:
Do I need to increase the PG/PGP num first and then reweight the OSDs?
Or reweight the OSDs first and then increase the PG/PGP num?
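For reference, the two operations in question are these (the pool name, PG count and weight below are placeholders):

  # Placeholders: pool "rbd", 2048 PGs and weight 1.0 are examples only.
  ceph osd pool set rbd pg_num 2048
  ceph osd pool set rbd pgp_num 2048
  ceph osd crush reweight osd.42 1.0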
On 28/04/2017 at 17:03, Mark Nelson wrote:
On 04/28/2017 08:23 AM, Frédéric Nass wrote:
On 28/04/2017 at 15:19, Frédéric Nass wrote:
Hi Florian, Wido,
That's interesting. I ran some bluestore benchmarks a few weeks ago on
Luminous dev (1st release) and came to the same (early) conclusi
Hi Jason,
thanks for your feedback. I have now run some tests over the weekend to verify
the memory overhead.
I was using qemu 2.8 (taken from the Ubuntu Cloud Archive) with librbd 10.2.7
on Ubuntu 16.04 hosts. I suspected the Ceph rbd cache to be the cause of the
overhead, so I just generated a lot
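If it is the librbd cache, the client-side settings to toggle for such a test are roughly these (a sketch; 32 MB is the default cache size when it is enabled):

  [client]
  rbd cache = false            # disable the librbd cache to see if the overhead disappears
  # rbd cache size = 33554432  # default 32 MB per image when the cache is enabled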