On Fri, Jun 5, 2015 at 6:47 AM, David Z wrote:
> Hi Ceph folks,
>
> We want to use rbd format v2, but find it is not supported by the 3.10.0 kernel
> of CentOS 7:
>
> [ceph@ ~]$ sudo rbd map zhi_rbd_test_1
> rbd: sysfs write failed
> rbd: map failed: (22) Invalid argument
> [ceph@ ~]$ dmesg |
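A quick way to see what the kernel is objecting to and what the image was
actually created with is something like the following (a sketch; the image name
is taken from the example above, and the exact dmesg wording varies by kernel):

[ceph@ ~]$ dmesg | tail                 # the kernel logs why the map was rejected
[ceph@ ~]$ rbd info zhi_rbd_test_1      # shows the image format and enabled features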
Thank you. Unfortunately this won't work because 0.21 is already marked as
creating:
~# ceph pg force_create_pg 0.21
pg 0.21 already creating
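If it helps, a rough way to see which PGs are stuck in creating and where CRUSH
maps them (a sketch using the stock CLI of that era; output formats may differ):

~# ceph pg dump_stuck inactive     # lists PGs stuck inactive, including those stuck creating
~# ceph pg map 0.21                # shows which OSDs the PG is currently mapped to
~# ceph osd tree                   # confirms whether those OSDs still exist and are up/in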
I think, and I am guessing here since I don't know the internals that well,
that 0.21 started to be created, but since its OSDs disappeared it never
finished and it k
That happened to us as well, but after moving the OSDs with blocked requests
out of the cluster it eventually regained HEALTH_OK.
Running ceph health detail should list those OSDs. Do you have any?
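If it is not obvious which OSDs those are, something like this should surface
them (a sketch; the exact wording of the blocked-request warnings varies by
release):

~# ceph health detail | grep -i blocked    # per-OSD "requests are blocked > 32 sec" lines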
On 07/06/2015 16:16, "Marek Dohojda" wrote:
> Thank you. Unfortunately this won't work be
I think this is the issue. Look at ceph health detail and you will see that
0.21 and others are orphans:
HEALTH_WARN 65 pgs stale; 22 pgs stuck inactive; 65 pgs stuck stale; 22 pgs
stuck unclean; too many PGs per OSD (456 > max 300)
pg 0.21 is stuck inactive since forever, current state creating, last
Incidentally, I am having similar issues with other PGs:
For instance:
pg 0.23 is stuck stale for 302497.994355, current state stale+active+clean,
last acting [5,2,4]
when I do:
# ceph pg 0.23 query
or
# ceph pg 5.5 query
It also freezes. I can't seem to see anything unusual in the log files, o
You can try moving osd.5 out and see what happens next.
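For reference, marking an OSD out and watching the resulting recovery looks
roughly like this (a sketch, using osd.5 from the example above):

~# ceph osd out 5     # mark osd.5 out; CRUSH remaps its PGs and recovery starts
~# ceph -w            # watch recovery/backfill progress
~# ceph osd in 5      # put it back in later if it turns out to be healthy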
Unfortunately nothing. It did its thing, re-balanced, and I was left with the
same thing in the end. BTW, thank you very much for the time and the
suggestion, I really appreciate it.
ceph health detail
HEALTH_WARN 65 pgs stale; 22 pgs stuck inactive; 65 pgs stuck stale; 22 pgs
stuck unclean; too many PGs per OSD (456 > max 300)
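For what it's worth, the "too many PGs per OSD" figure is roughly the total
number of PG replicas divided by the number of OSDs, compared against the
mon_pg_warn_max_per_osd threshold (default 300). A sketch with made-up numbers,
not this cluster's actual pools:

PGs per OSD ~= (sum over pools of pg_num * replica size) / number of OSDs
e.g. 10 pools * 128 PGs * 3 replicas / 8 OSDs = 480  -> trips the 300 warning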
We are setting up a Ceph cluster and want the journals for our spinning disks
to be on SSDs, but all of our SSDs are 1TB. We were planning on putting 3
journals on each SSD, but that leaves 900+GB unused on the drive. Is it
possible to use the leftover space as another OSD, or will it affect
performance?
Well I think I got it! The issue was with pools that were created but then
their OSDs were pulled out from under them via CRUSH rules (I created SSD and
regular disk pools and moved the OSDs into these). After I deleted these
pools, all the bad PGs dissipated, which made perfect sense, since these
were r
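For completeness, deleting a pool looks roughly like this (destructive, so Ceph
makes you repeat the pool name; "ssd-pool" is just an illustrative name):

~# ceph osd pool delete ssd-pool ssd-pool --yes-i-really-really-mean-it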
Cameron,
Generally, it's not a good idea.
You want to protect the SSDs used as journals. If anything goes wrong with that
disk, you will lose all of your dependent OSDs.
I don't think a bigger journal will gain you much performance, so the default 5
GB journal size should be good enough. If you want to r
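For reference, the journal size is set in ceph.conf before the OSDs are
created, roughly like this (a sketch; 5120 MB is the 5 GB default mentioned
above):

[osd]
osd journal size = 5120    # journal partition size in MB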
The other option we were considering was putting the journals on the OS
SSDs; they are only 250GB and the rest would be for the OS. Is that a
decent option?
Thanks!
Cameron Scrace
Infrastructure Engineer
Mobile +64 22 610 4629
Phone +64 4 462 5085
Email cameron.scr...@solnet.co.nz
Solnet So
Cameron, Somnath already covered most of these points, but I’ll add my $.02…
The key question to me is this: will these 1TB SSDs perform well as a Journal
target for Ceph? They’ll need to be fast at synchronous writes to fill that
role, and if they aren’t I would use them for other OSD-relate
Probably not, but if your SSD can sustain high endurance and high BW it may be
☺ Also, the amount of data written to the Ceph journal partitions will be much,
much higher than to your OS partition, and that could be a problem for the
SSD's wear leveling.
Again, I doubt anybody has tried out this scenario e
Hello,
On Mon, 8 Jun 2015 09:55:56 +1200 cameron.scr...@solnet.co.nz wrote:
> The other option we were considering was putting the journals on the OS
> SSDs, they are only 250GB and the rest would be for the OS. Is that a
> decent option?
>
You'll be getting a LOT better advice if you're tell
Hello,
When trying to deploy ceph mons on our rhel 7 cluster, I get the following
error:
ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.25): /usr/bin/ceph-deploy mon
create-initial
[ceph_deploy
Hi Christian,
Yes, we have purchased all our hardware; it was very hard to convince
management/finance to approve it, so some of the stuff we have is a bit
cheap.
We have four storage nodes, each with 6 x 6TB Western Digital Red SATA
drives (WD60EFRX-68M), 6 x 1TB Samsung 850 EVO SSDs, and 2x250
Hello Cameron,
On Mon, 8 Jun 2015 13:13:33 +1200 cameron.scr...@solnet.co.nz wrote:
> Hi Christian,
>
> Yes we have purchased all our hardware, was very hard to convince
> management/finance to approve it, so some of the stuff we have is a bit
> cheap.
>
Unfortunate. Both the done deal and t
Cameron,
To offer at least some constructive advice here instead of just all doom
and gloom, here's what I'd do:
Replace the OS SSDs with 2 400GB Intel DC S3700s (or S3710s).
They have enough BW to nearly saturate your network.
Put all your journals on them (journals for 3 SSD OSDs and 3 HDD OSDs per
drive).
While
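For reference, creating an OSD with its journal on a separate SSD partition via
ceph-deploy looks roughly like this (hostname and device names are made up for
illustration):

# one HDD OSD on /dev/sdd with its journal on the first partition of the journal SSD
ceph-deploy osd create node1:sdd:/dev/sdb1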
Thanks for all the feedback.
What makes the EVOs unusable? They should have plenty of speed, but your
link has them at 1.9MB/s. Is it just the way they handle O_DIRECT and
O_DSYNC?
Not sure if we will be able to spend anymore, we may just have to take the
performance hit until we can get more
On Mon, 8 Jun 2015 14:30:17 +1200 cameron.scr...@solnet.co.nz wrote:
> Thanks for all the feedback.
>
> What makes the EVOs unusable? They should have plenty of speed, but your
> link has them at 1.9MB/s. Is it just the way they handle O_DIRECT and
> O_DSYNC?
>
Precisely.
Read that ML thread
Has anyone had any luck using the radosgw-sync-agent to push or pull
to/from "real" S3?
--
Cheers,
~Blairo
Just used the method in the link you sent me to test one of the EVO 850s.
With one job it reached a speed of around 2.5MB/s, but it didn't max out
until around 32 jobs, at 24MB/s:
sudo fio --filename=/dev/sdh --direct=1 --sync=1 --rw=write --bs=4k
--numjobs=32 --iodepth=1 --runtime=60 --time_based