Hi Greg,
Thanks for your reply. Can we have mixed pools (EC and replicated) for
CephFS data and metadata, or do we have to use a single pool type (EC or
replicated) for creating CephFS? Also, we would like to know when the
production release of CephFS with erasure-coded pools will happen. We are ready to te
Joao has done it in the past, so it's definitely possible, but I
confess I don't know what, if anything, he had to hack up to make it
work or what's changed since then. ARMv6 is definitely not something
we worry about when adding dependencies. :/
-Greg
On Thu, Jan 15, 2015 at 12:17 AM, Prof. Dr. Chri
A couple of weeks ago, we had some involuntary maintenance come up
that required us to briefly turn off one node of a three-node ceph
cluster.
To our surprise, this resulted in a failure to write on the VMs on that
ceph cluster, even though we set noout before the maintenance.
This cluster is for
On Tue, Jan 20, 2015 at 5:48 AM, Mohamed Pakkeer wrote:
>
> Hi all,
>
> We are trying to create a 2 PB scale Ceph storage cluster for file system
> access using erasure-coded profiles in the Giant release. Can we create an
> erasure-coded pool (k+m = 10+3) for data and a replicated (4 replicas) pool for
> m
A while ago, I ran into this issue: http://tracker.ceph.com/issues/10411
I did manage to solve that by deleting the PGs; however, ever since that
issue my mon databases have been growing indefinitely. At the moment,
I'm up to 3404 sst files, totaling 7.4 GB of space.
This appears to be causing
We have a cluster running RGW (Giant release). We've noticed that the
".rgw" pool has an unexpectedly high number of objects:
$ ceph df
...
POOLS:
    NAME       ID     USED     %USED     MAX AVAIL     OBJECTS
    ...
    .rgw.root  5      840      0         29438G
Hello Team,
I have a radosgw node and a storage cluster running. I am able to upload a single
file, but the process fails when I enable the multipart option on the client
side. I am using Firefly (ceph version 0.80.8).
I've attached the debug log; below is an extract of it.
2015-01-21 19:29:
Hi all,
Our MDS is still fine today. Thanks, everyone!
Regards,
Bazli
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mohd Bazli Ab Karim
Sent: Monday, January 19, 2015 11:38 AM
To: John Spray
Cc: ceph-users@lists.ceph.com; ce
You only have one OSD node (ceph4). The default replication requirement
for your pools (size = 3) requires OSDs spread over three nodes, so that the
data can be replicated on three different nodes. That is why your PGs
are degraded.
You need to either add more OSD nodes or reduce your size setting.
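For example (a sketch only, not from the original thread; "rbd" is just a
placeholder pool name, and even with size 2 you still need OSDs on at least two
nodes under the default CRUSH rules):
$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 1
$ ceph osd pool get rbd size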
Hi All,
We have a bunch of shiny new hardware we are ready to configure for an all-SSD
cluster.
I am wondering what other people are doing for their journal configuration
on all-SSD clusters?
- Separate journal partition and OSD partition on each SSD
or
- Journal on OSD
Thanks,
Andrew
Hi,
Thanks for the reply. That clarifies it. I thought that redundancy
could be achieved with multiple OSDs (like multiple disks in a RAID) in case
you don't have more nodes. Obviously, the single point of failure would
be the box.
My current setting is:
osd_pool_default_size = 2
Thank you
J
Thanks Greg, that's an awesome feature I missed. I found some
explanation of the watch-notify thing:
http://www.slideshare.net/Inktank_Ceph/sweil-librados.
Just to confirm: it looks like I need to list all the RGW
instances in ceph.conf, and then these RGW instances will
automatically do the ca
Hi,
BTW, is there a way to achieve redundancy over multiple OSDs in one
box by changing the CRUSH map?
Thank you
Jiri
On 20/01/2015 13:37, Jiri Kanicky wrote:
Hi,
Thanks for the reply. That clarifies it. I thought that the redundancy
can be achieved with multiple OSDs (like multiple disks
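(For illustration only, not from this thread: one common way is a CRUSH rule
that picks leaves of type "osd" instead of "host" in the decompiled crushmap,
e.g.

rule replicated_osd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}

then recompile, inject the map, and point the pool at the new rule with
"ceph osd pool set <pool> crush_ruleset 1". New clusters can get the same effect
with "osd crush chooseleaf type = 0" in ceph.conf before the OSDs are created.)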
I've not built such a system myself, so I can't really be sure. The
size and speed of the cache pool would have to depend on how much hot
data you have at a time.
-Greg
On Wed, Jan 21, 2015 at 12:53 AM, Mohamed Pakkeer wrote:
> Hi Greg,
>
> We are planning to create 3 PB EC based storage cluster
Hi Greg,
We are planning to create a 3 PB EC-based storage cluster initially. What
would be the recommended hardware configuration for creating the caching pool?
How many nodes will the cache pool require to cater to the 3 PB storage cluster?
What is the size and network connectivity of each node?
-- Mohammed
Hi Greg/Zhou,
I have a similar setup where I have one HAProxy node and 3 RadosGW
clients. I have the rgw cache disabled in my setup.
Earlier I had only one node running RadosGW, and there I could see a
difference between inbound and outbound network traffic, sometimes to the
tune of a factor of 10. If
Greg, Thanks a lot for the education!
Sincerely, Yuan
On Tue, Jan 20, 2015 at 2:37 PM, Gregory Farnum wrote:
> You don't need to list them anywhere for this to work. They set up the
> necessary communication on their own by making use of watch-notify.
>
> On Mon, Jan 19, 2015 at 6:55 PM ZHOU Yu
Hi Samuel, Hi Gregory,
we are using Giant (0.87).
Sure, I was checking on these PGs. The strange thing was that they
reported a bad state ("state": "inactive"), but looking at the recovery
state, everything seemed to be fine. That would point to the mentioned
bug. Do you have a link to this bu
Hi Jake,
Thanks for this. I have been going through it and have a pretty good idea of
what you are doing now. However, I may be missing something looking through your
scripts, but I'm still not quite understanding how you are managing to make
sure locking is happening with the ESXi ATS SCSI
Hello
Is there a way to see running / active ceph.conf configuration items?
kind regards
Rob Fantini
You can use the admin socket:
$ ceph daemon mon.<id> config show
or, locally:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show
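To check a single value (just an example; osd.2 and the option name are
placeholders):
$ ceph daemon osd.2 config get osd_pool_default_size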
> On 21 Jan 2015, at 19:46, Robert Fantini wrote:
>
> Hello
>
> Is there a way to see running / active ceph.conf configuration items?
>
> kind regards
> R
Hello all,
What reasons would one have for wanting k>1?
I read that m determines the number of OSDs which can fail before data loss, but
I don't see it explained how to choose k. Are there any benefits to choosing k>1?
Thanks!
Chad.
Hello,
Could anyone provide a how-to for verifying that a tiered pool is working correctly?
E.g.
A command to watch as PGs migrate from one pool to another? (Or to determine
which pool a PG is currently in.)
A command to see how much data is in each pool (a global view of the number of
PGs, I guess)?
Thanks!
Ch
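(Not from the thread, but a possible starting point: per-pool usage and object
counts are visible with
$ ceph df detail
$ rados df
per-pool client I/O, including I/O hitting the cache pool, with
$ ceph osd pool stats
and "rados -p <pool> ls" lists the objects currently sitting in a given pool.)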
On Mon, Jan 19, 2015 at 8:40 AM, J David wrote:
> A couple of weeks ago, we had some involuntary maintenance come up
> that required us to briefly turn off one node of a three-node ceph
> cluster.
>
> To our surprise, this resulted in a failure to write on the VMs on that
> ceph cluster, even thoug
On Mon, Jan 19, 2015 at 2:48 PM, Brian Rak wrote:
> A while ago, I ran into this issue: http://tracker.ceph.com/issues/10411
>
> I did manage to solve that by deleting the PGs; however, ever since that
> issue my mon databases have been growing indefinitely. At the moment, I'm
> up to 3404 sst file
Version?
-Sam
On Tue, Jan 20, 2015 at 9:45 AM, Gregory Farnum wrote:
> On Tue, Jan 20, 2015 at 2:40 AM, Christian Eichelmann
> wrote:
>> Hi all,
>>
>> I want to understand what Ceph does if several OSDs are down. First of all,
>> some words about our setup:
>>
>> We have 5 Monitors and 12 OSD Serve
I think you're hitting issue #10271. It has been fixed, but not in a
formal firefly release yet. You can try picking up the unofficial
firefly branch package off the ceph gitbuilder and testing it.
Yehuda
On Wed, Jan 21, 2015 at 11:37 AM, Castillon de la Cruz, Eddy Gonzalo
wrote:
>
> Hello Team
Hello.
How can I get a pool's replicated size through the API rather than the command line?
I can't find the answer on the following page:
http://ceph.com/docs/master/rados/api/python/
Bell
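One way that should work (a rough sketch, not tested against the Firefly-era
bindings; "rbd" below is just an example pool name) is to send a mon command
through the python-rados bindings:

import json
import rados

# Connect using the local ceph.conf and the default keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Equivalent of "ceph osd pool get rbd size" on the command line.
cmd = json.dumps({"prefix": "osd pool get", "pool": "rbd",
                  "var": "size", "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b'')

# The reply is JSON containing a "size" field with the replica count.
print(json.loads(outbuf.decode("utf-8"))["size"])

cluster.shutdown()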
You can run CephFS with a caching pool that is backed by an EC pool,
but you can't use just an EC pool for either of them. There are
currently no plans to develop direct EC support; we have some ideas
but the RADOS EC interface is way more limited than the replicated
one, and we have a lot of other
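Roughly, as a sketch only (pool names, PG counts, and the EC profile below are
placeholders, and the metadata pool still has to be replicated):
$ ceph osd pool create fs-data-ec 1024 1024 erasure myprofile
$ ceph osd pool create fs-cache 1024 1024 replicated
$ ceph osd tier add fs-data-ec fs-cache
$ ceph osd tier cache-mode fs-cache writeback
$ ceph osd tier set-overlay fs-data-ec fs-cache
$ ceph osd pool create fs-metadata 256 256 replicated
$ ceph fs new cephfs fs-metadata fs-data-ec
A writeback tier also needs hit_set and target size settings before it will
flush and evict sensibly.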
Hi,
the Ceph admin and monitoring interface Inkscope is now packaged.
RPM and DEB packages are available at:
https://github.com/inkscope/inkscope-packaging
Enjoy it!
--
Eric Mourgaya,
Let's respect the planet!
Let's fight mediocrity!
On 21/01/2015 22:42, Chad William Seys wrote:
> Hello all,
> What reasons would one want k>1?
> I read that m determines the number of OSDs which can fail before loss. But
> I don't see explained how to choose k. Any benefits for choosing k>1?
The size of each chunk is object size / K. If
OK, I've set up 'giant' in a single-node cluster, played with a replicated pool
and an EC pool. All goes well so far. Question: I have two different kinds of
HDD in my server - some fast, 15K RPM SAS drives and some big, slow (5400 RPM!)
SATA drives.
Right now, I have OSDs on all, and when I
It has been proven that the OSDs can’t take advantage of the SSD, so I’ll
probably collocate both the journal and the OSD data.
Search the ML for [Single OSD performance on SSD] Can't go over 3, 2K IOPS
You will see that there is no difference in terms of performance between the
following:
* 1 SSD f
Well, look at it this way: with 3X replication, for each TB of data you need 3
TB of disk. With (for example) 10+3 EC, you get better protection, and for each
TB of data you need only 1.3 TB of disk.
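(In general the raw-space multiplier for k+m erasure coding is (k+m)/k, so 10+3
gives 13/10 = 1.3x, while a size-3 replicated pool gives 3/1 = 3x; a 4 MB object
in a 10+3 pool becomes ten 400 KB data chunks plus three 400 KB coding chunks.)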
-don-
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf O
Hello,
On Wed, 21 Jan 2015 23:28:15 +0100 Sebastien Han wrote:
> It has been proven that the OSDs can’t take advantage of the SSD, so
> I’ll probably collocate both journal and osd data. Search in the ML for
> [Single OSD performance on SSD] Can't go over 3, 2K IOPS
>
> You will see that there
Hi,
I have a Ceph cluster that works correctly (Firefly on Ubuntu Trusty servers).
I would like to install a radosgw. In fact, I would like to install two radosgws:
radosgw-1 and radosgw-2 with a floating IP address to support failover, etc.
After reading the docs, I still have a point that is not clear
I've been looking at the steps required to enable (say) multi-region
metadata sync where there is an existing RGW that has been in use (i.e. a
non-trivial number of buckets and objects) and which has been set up without
any region parameters.
Now, given that the existing objects are all in the pools corres
Hi!
I have a server (ceph version 0.80.7, 10Gb links) set up with one pool written
to 5 OSDs. I'm using an iscsi-target to write some data from another server to
this pool (disk rbd3). The speed on the network is near 150 Mbit/sec. In this
case, iostat shows 100% usage of the rbd3 drive, but the drives on w
Hi David,
What are your pools' size & min_size settings?
In your cluster, you may need to set min_size=1 on all pools before shutting down a server.
BR,
Luke
MYCOM-OSI
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of J David
[j.david.li...@gmail.com]
Hi,
From my last benchmark,
I was around 12 iops rand read 4k, 2 iops rand write 4k (3 nodes with
2 SSD OSDs + journal on SSD, Intel 3500).
My main bottleneck was the CPU (it was a 2x4-core 1.4GHz Intel), both on the OSD
and the client.
I'm going to test my production cluster next month, with bigger nod