I am also planning to create an SSD-only cluster using multiple OSDs on
a few hosts. What's the best way to get maximum performance out of
SSD disks?
I don't have the cluster running yet, but seeing this thread makes me worry
that RBD will not be able to extract the full capability of SSD disks. I am
a beginner i
Thanks Florent
On Mon, Feb 2, 2015 at 11:26 PM, Florent MONTHEL wrote:
> Hi,
>
> Writes will be distributed every 4MB (the size of an IMAGEV1 RBD object).
> IMAGEV2 is not fully supported on KRBD (but you can customize the object size
> and striping).
>
> You need to take:
> - SSD SATA 6 Gbit/s
> - or SSD SAS
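As a reference for the striping comment above, here is a minimal sketch of
creating a format-2 image with a custom object size and striping. The pool/image
names and values are only illustrative, and exact option names can vary by rbd
version:
  $ rbd create mypool/myimage --size 10240 --image-format 2 \
        --order 22 --stripe-unit 65536 --stripe-count 4
(--order 22 gives 4MB objects.) Note that the kernel RBD client of that era may
refuse to map images with non-default striping, so custom striping is mainly
relevant to librbd/QEMU clients.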
On 02/03/2015 03:50 AM, Mark Kirkwood wrote:
> Same here - in the event you need to rebuild the whole thing, using
> parallel make speeds it up heaps (and seems to build it correctly), i.e.:
>
> $ make -j4
>
> Cheers
>
> Mark
It is already doing that.
If you look at debian/rules, you'll see the B
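For reference, the conventional way to get a parallel package build (assuming
debian/rules honors the standard parallel option, which the comment above
suggests it does) is something like:
  $ DEB_BUILD_OPTIONS="parallel=4" dpkg-buildpackage -us -uc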
Hi,
Is it possible to reduce pg_num on an unused pool, for example the data or
metadata (mds) pool? We are using only the rbd pool, but pg_num for the data
and metadata pools is set to 1024.
Regards
Mateusz
Mateusz,
Presumably you've tried deleting the data and metadata pools and found
that it refused because they were in use in a filesystem? In that
case you can deactivate the filesystem with "fs rm <name>" (identify the
name from "fs ls").
There was a version (I forget which) where the pools couldn't be
de
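For reference, the sequence would look roughly like this (filesystem and pool
names are placeholders, and "fs rm" expects the MDS daemons to be stopped first):
  $ ceph fs ls
  $ ceph fs rm <fs name> --yes-i-really-mean-it
  $ ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it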
Hi Sage,
Following are the greps through inotify. I created a file named cris at
the client end.
From stat, the ino number is 1099511628795,
but it does not match the hex *06E5F474*
On OSD:
/var/local/osd1/current/2.0_head/ OPEN *200.0001__head_06E5F474__2*
/var/local/osd1/current/2.0_head
On Tue, 3 Feb 2015, Mudit Verma wrote:
> Hi Sage,
> Following are the greps through inotify, I created a file name cris at
> client end
>
> From stats the ino number is 1099511628795
10003fb
The object names will be something like 10003fb.0001. The
filenames once they hit disk w
Great, I found the file which contains the data:
root@ceph-osd1:/var/local/osd1/current# grep "hello3" * -r
1.0_head/10003fb.__head_4BC00833__1:hello3
And it does indeed match the hex ino.
How is it determined that the data will go to 1.0_head, and what does
head_4BC00833 stand f
Thanks, I hadn't figured out how to delete these pools. Btw, I'm on the 0.87 release.
Mateusz
-Original Message-
From: john.sp...@inktank.com [mailto:john.sp...@inktank.com] On Behalf Of John
Spray
Sent: Tuesday, February 3, 2015 10:04 AM
To: Mateusz Skała
Cc: ceph-users@lists.ceph.com
Subject: R
The hash of the filename ($inode.$block) is 4BC00833, and the pg id is
a slightly weird function of those bits.
sage
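For anyone following along, a quick way to check which PG and OSDs a given
object maps to is "ceph osd map" (pool and object names here are placeholders):
  $ ceph osd map <pool> <object name>
This prints the raw hash, the PG it folds into, and the up/acting OSD sets.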
On Tue, 3 Feb 2015, Mudit Verma wrote:
> Great, I found the file which contains the data
> root@ceph-osd1:/var/local/osd1/current# grep "hello3" * -r
>
> 1.0_head/10003
Thanks a lot Sage.
On Tue, Feb 3, 2015 at 3:34 PM, Sage Weil wrote:
> The hash of the filename is ($inode.$block) is 4BC00833, and the pg id is
> a slightly weird function of those bits.
>
> sage
>
>
> On Tue, 3 Feb 2015, Mudit Verma wrote:
>
> > Great, I found the file which contains the data
>
Mudit,
Those are the journal objects you're seeing touched. Write some data
to the file, and do a "rados -p <pool> ls" to check the
objects for the inode number you're expecting.
Cheers,
John
On Tue, Feb 3, 2015 at 10:53 AM, Mudit Verma wrote:
> Hi Sage,
>
> Following are the greps through inotify,
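As a concrete sketch of that suggestion (the data pool is called "data" here
purely as an assumption; substitute your own):
  $ rados -p data ls | grep <inode hex>
With the default layout each CephFS file is striped across objects named
<inode hex>.<block>, so the grep should show one object per 4MB of data written.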
Hi all,
during some failover and configuration tests, we recently observed a
strange phenomenon:
Restarting one of our monitors (5 in total) triggers about 300 of the
following events:
osd.669 10.76.28.58:6935/149172 failed (20 reports from 20 peers after
22.005858 >= grace 20.00)
On Tue, Feb 3, 2015 at 2:38 PM, Christian Eichelmann
wrote:
> Hi all,
>
> during some failover tests and some configuration tests, we currently
> discover a strange phenomenon:
>
> Restarting one of our monitors (5 in sum) triggers about 300 of the
> following events:
>
> osd.669 10.76.28.58:6935/
Hi,
From my tests with Giant, it was the CPU that limited OSD performance.
I'm going to do some benchmarks with 2x 10 cores at 3.1 GHz for 6 SSDs next month.
I'll post the results to the mailing list.
- Original Message -
From: "mad Engineer"
To: "Gregory Farnum"
Cc: "ceph-users"
Sent: Tue
Hi!
We have a CephFS directory /baremetal mounted as /cephfs via FUSE on
our clients.
There are no specific settings configured for /baremetal.
As a result, trying to get the directory layout via getfattr does not work
getfattr -n 'ceph.dir.layout' /cephfs
/cephfs: ceph.dir.layout: No such att
Turns out I misunderstood the playbook: in "scenario 4" the variable
osd_directories refers to premounted partitions, not to the directory of
the ceph journals.
Also, unrelated, but it might help someone on Google: when you use virsh
attach-disk, remember to add --persistent like this (--persistent
migh
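For illustration (the domain name, image path and target device are assumptions,
not from the original mail):
  $ virsh attach-disk myvm /var/lib/libvirt/images/extra.img vdb --persistent
Without --persistent the attachment is live-only and disappears from the domain
definition on the next shutdown.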
Hi All,
We're going to start some basic testing with Ceph - throughput, node
failure, and custom crush maps. I'm trying to come up with a testing
plan - but I'm sort of stuck with the most basic check:
Is there an 'easy' way to take a filename and determine which OSDs
the replicas should be wri
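A hedged sketch of one way to do this for a file on CephFS (the pool name "data"
and the mount path are assumptions):
  # inode number of the file; CephFS object names are <inode hex>.<block>
  $ ino=$(stat -c %i /cephfs/path/to/file)
  # ask the cluster which PG and OSDs the first object maps to
  $ ceph osd map data $(printf '%x' "$ino").00000000
The last command prints the PG and the up/acting OSD sets, i.e. where the
replicas should be written.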
Hi all,
I have to build a new Ceph storage cluster. After I've read the hardware
recommendations and some mails from this mailing list, I would like to buy these
servers:
OSD:
SSG-6027R-E1R12L ->
http://www.supermicro.nl/products/system/2U/6027/SSG-6027R-E1R12L.cfm
Intel Xeon E5-2630 v2
64 GB RA
Hi,
Just a couple of points: you might want to see if you can get a Xeon v3
board+CPU, as they have more performance and use less power.
You can also get a SM 2U chassis which has 2x 2.5” disk slots at the rear; this
would allow you to have an extra 2x 3.5” disks in the front of the serve
This is the second-to-last chunk of new stuff before Hammer. Big items
include additional checksums on OSD objects, proxied reads in the
cache tier, image locking in RBD, optimized OSD Transaction and
replication messages, and a big pile of RGW and MDS bug fixes.
Upgrading
-
* The experi
Hi Nick,
Hi,
Just a couple of points, you might want to see if you can get a Xeon v3
board+CPU as they have more performance and use less power.
ok
You can also get a SM 2U chassis which has 2x 2.5” disk slots at the rear, this
would allow you to have an extra 2x 3.5” disks in the front o
In the absence of other clues, you might want to try checking that the
network is coming up before ceph tries to mount.
Now that I think of it, that might just be it - I seem to recall a similar problem
with cifs mounts, despite having the _netdev option. I had to issue a mount in
/etc/network/if-up
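A minimal sketch of that kind of workaround (the interface name and mount point
are assumptions; the script would live under /etc/network/if-up.d/):
  #!/bin/sh
  # /etc/network/if-up.d/mount-cephfs - (re)issue the fstab mount once the NIC is up
  [ "$IFACE" = "eth0" ] || exit 0
  mountpoint -q /cephfs || mount /cephfs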
I am using ceph firefly (ceph version 0.80.7) with a single radosgw
instance, no RBD.
I am facing the problem ".rgw.buckets has too few pgs".
I have tried to increase the number of PGs using the command "ceph osd
pool set <pool> pg_num <num>", but in vain.
I also tried "ceph osd crush tunables optimal" but no effe
Pardon me if this is a little basic...
but did you remember to set the pgp_num on the pool after setting pg_num ?
I believe the warning won't go away till that's done.
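For example (pool name taken from the earlier mail; the PG count is only
illustrative and should match whatever pg_num was raised to):
  $ ceph osd pool set .rgw.buckets pg_num 256
  $ ceph osd pool set .rgw.buckets pgp_num 256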
On Tue, Feb 3, 2015 at 9:27 AM, Shashank Puntamkar wrote:
> I am using ceph firefly (ceph version 0.80.7 ) with single Radosgw
>
I just went through and updated my Ubuntu 14.04 0.80.7 cluster to 0.80.8 this
morning. The package updates looked like they triggered _starts_ for some
services (ceph-all, ceph-mds-all), but not restarts. ceph-mon and ceph-osd
services were _not_ restarted, allowing me to restart them in the order I
de
On Tue, Feb 3, 2015 at 3:38 AM, Christian Eichelmann
wrote:
> Hi all,
>
> during some failover tests and some configuration tests, we currently
> discover a strange phenomenon:
>
> Restarting one of our monitors (5 in sum) triggers about 300 of the
> following events:
>
> osd.669 10.76.28.58:6935/
On Tue, Feb 3, 2015 at 5:21 AM, Daniel Schneller
wrote:
> Hi!
>
> We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our
> clients.
> There are no specific settings configured for /baremetal.
> As a result, trying to get the directory layout via getfattr does not work
>
> getfatt
We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our
clients.
There are no specific settings configured for /baremetal.
As a result, trying to get the directory layout via getfattr does not work
getfattr -n 'ceph.dir.layout' /cephfs
/cephfs: ceph.dir.layout: No such attribute
On Tue, Feb 3, 2015 at 9:23 AM, Daniel Schneller
wrote:
>>> We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our
>>> clients.
>>> There are no specific settings configured for /baremetal.
>>> As a result, trying to get the directory layout via getfattr does not
>>> work
>>>
>>>
Hi,
I tried to add a cache pool in front of the OpenStack vms and volumes
pools. I believed that the process was transparent, but as soon as I set
up the caching for both of these pools, the VMs could not find their
volumes anymore. Obviously, when I undid my changes, everything went back
to normal
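For context, the usual cache-tier setup sequence looks roughly like this (pool
names are assumptions; a commonly missed step is set-overlay, which is what
actually redirects client IO to the cache pool):
  $ ceph osd tier add volumes volumes-cache
  $ ceph osd tier cache-mode volumes-cache writeback
  $ ceph osd tier set-overlay volumes volumes-cache
Another common cause of breakage is cephx caps that allow the base pool but not
the new cache pool, since clients talk to the cache pool once the overlay is set.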
On Tue, Feb 3, 2015 at 10:23 AM, J-P Methot wrote:
> Hi,
>
> I tried to add a caching pool in front of openstack vms and volumes pools. I
> believed that the process was transparent, but as soon as I set the caching
> for both of these pools, the VMs could not find their volumes anymore.
> Obvious
On 2015-02-03 18:19:24 +, Gregory Farnum said:
Okay, I've looked at the code a bit, and I think that it's not showing
you one because there isn't an explicit layout set. You should still
be able to set one if you like, though; have you tried that?
Actually, no, not yet. We were setting up
Debian deb package updates are not restarting the services.
(So I think it should be the same for Ubuntu.)
You need to restart the daemons in this order:
- monitor
- osd
- mds
- rados gateway
http://ceph.com/docs/master/install/upgrading-ceph/
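For example, on an Ubuntu/upstart install the restarts might look roughly like
this (the exact service names depend on how the cluster was deployed, so treat
these as an assumption):
  $ sudo restart ceph-mon-all
  $ sudo restart ceph-osd-all
  $ sudo restart ceph-mds-all
  $ sudo service radosgw restart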
- Original Message -
From: "Stephen Jahl"
To: "Gregory Fa
On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller
wrote:
> Now, say I wanted to put /baremetal into a different pool, how would I go
> about this?
>
> Can I setfattr on the /cephfs mountpoint and assign it a different pool with
> e. g. different replication settings?
This should make it clearer:
h
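In short, the directory layout is driven through the ceph.dir.layout vxattrs; a
hedged sketch for a release of that era (the pool name is an assumption, and the
pool has to be added to the filesystem first):
  $ ceph mds add_data_pool mypool
  $ setfattr -n ceph.dir.layout.pool -v mypool /cephfs/baremetal
  $ getfattr -n ceph.dir.layout /cephfs/baremetal
New files created under the directory then go to the new pool; existing file
data stays in the old pool unless it is rewritten.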
On Tue, Feb 3, 2015 at 1:17 PM, John Spray wrote:
> On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller
> wrote:
>> Now, say I wanted to put /baremetal into a different pool, how would I go
>> about this?
>>
>> Can I setfattr on the /cephfs mountpoint and assign it a different pool with
>> e. g. dif
Understood. Thanks for the details.
Daniel
On Tue, Feb 3, 2015 at 1:23 PM -0800, "Gregory Farnum" wrote:
On Tue, Feb 3, 2015 at 1:17 PM, John Spray wrote:
> On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller
> wrote:
>> Now, say I wanted to put /baremetal into a different pool, ho
On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote:
>> If you explicitly change the layout of a file containing data to point
>> to a different pool, then you will see zeros when you try to read it
>> back (although new data will be written to the new pool).
>
> That statement sounds really sca
There seems to be a bug with the transaction encoding when upgrading
from v0.91 to v0.92. Users probably want to hold off on upgrading to
v0.92 until http://tracker.ceph.com/issues/10734 is resolved.
-Sam
On Tue, Feb 3, 2015 at 7:40 AM, Sage Weil wrote:
> This is the second-to-last chunk of new
On Tue, Feb 3, 2015 at 1:30 PM, John Spray wrote:
> On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote:
>>> If you explicitly change the layout of a file containing data to point
>>> to a different pool, then you will see zeros when you try to read it
>>> back (although new data will be writte
http://tracker.ceph.com/issues/10737
John
On Tue, Feb 3, 2015 at 10:36 PM, Gregory Farnum wrote:
> On Tue, Feb 3, 2015 at 1:30 PM, John Spray wrote:
>> On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote:
If you explicitly change the layout of a file containing data to point
to a d
On Tue, 3 Feb 2015 15:16:57 + Colombo Marco wrote:
> Hi all,
> I have to build a new Ceph storage cluster, after i‘ve read the
> hardware recommendations and some mail from this mailing list i would
> like to buy these servers:
>
Nick already mentioned a number of things that I totally agree wit