On Thu, 29 Jan 2015 01:30:41 + Ramakrishna Nishtala (rnishtal) wrote:
> Hi,
> Apologize if something came up before like this.
> Reading archives, it appears that 4 to 5 spinning disks are recommended
> for single SSD.
>
It all depends on the SSDs and HDDs in question for one (how many HDDs c
https://bugs.launchpad.net/glance/+bug/1415679
--
Regards
Zeeshan Ali Shah
System Administrator - PDC HPC
PhD researcher (IT security)
Kungliga Tekniska Hogskolan
+46 8 790 9115
http://www.pdc.kth.se/members/zashah
___
ceph-users mailing list
ceph-use
On 29/01/15 13:58, Mark Kirkwood wrote:
However if I
try to write to eu-west I get:
Sorry - that should have said:
However if I try to write to eu-*east* I get:
The actual code is (see below) connecting to the endpoint for eu-east
(ceph4:80), so seeing it redirected to us-*west* is pretty
Thanks ! I have resolved it with your suggestion.
At 2015-01-28 22:38:21,"Yan, Zheng" wrote:
>On Wed, Jan 28, 2015 at 10:35 PM, Yan, Zheng wrote:
>> On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote:
>>> Hi:
>>>
>>> I have completed the installation of ceph cluster,and the ceph health is
>>>
Hi,
Apologize if something came up before like this.
Reading archives, it appears that 4 to 5 spinning disks are recommended for
single SSD.
I have two questions on the subject.
* Some of the links suggest that we should use 'sync writes' to really
size the journals. If true, then what
Hi,
I am following
http://docs.ceph.com/docs/master/radosgw/federated-config/ using cepg
0.91 (0.91-665-g6f44f7a):
- 2 regions (US and EU). US is the master region
- 2 ceph clusters, one per region
- 4 zones (us east and west, eu east and west
- 4 hosts (ceph1 + ceph2 being us-west + us-east
My apologies if this has been covered ad-naseum in the past; I wasn't finding a
lot of relevant archived info.
I'm curious how may people are using
1) OSD's on spinning disks, with journals on SSD's -- how many journals per
SSD? 4-5?
2) OSD's on spinning disks, with [10GB] journals co-locate
On Wed, Jan 28, 2015 at 11:43 AM, Gregory Farnum wrote:
>
> On Wed, Jan 28, 2015 at 10:06 AM, Sage Weil wrote:
> > On Wed, 28 Jan 2015, John Spray wrote:
> >> On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote:
> >> > My concern is whether we as the FS are responsible for doing anything
> >>
Hi Raj,
Sébastien Han has done some excellent Ceph benchmarking on his blog here:
http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
Maybe that's a good place to start for your own testing?
Cheers,
Lincoln
On Jan 28, 2015, at 12:59 PM, Jeripotula, Shashiraj wrote:
> Resending, Guys,
Resending, Guys, Please help me point to some good documentation.
Thanks in advance.
Regards
Raj
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Jeripotula, Shashiraj
Sent: Tuesday, January 27, 2015 10:32 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph Test
Thanks Greg. Perhaps this is a motivation for us to switch to ceph-fuse
from the kernel client - at least that way, we could easily upgrade for bug
fixes without waiting for a new kernel.
Chris
On Wed, Jan 28, 2015 at 9:32 AM, Gregory Farnum wrote:
> This is in our testing branch and should go
On Wed, Jan 28, 2015 at 10:06 AM, Sage Weil wrote:
> On Wed, 28 Jan 2015, John Spray wrote:
>> On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote:
>> > My concern is whether we as the FS are responsible for doing anything
>> > more than storing and returning that immutable flag ? are we suppos
On Wed, 28 Jan 2015, John Spray wrote:
> On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote:
> > My concern is whether we as the FS are responsible for doing anything
> > more than storing and returning that immutable flag ? are we supposed
> > to block writes to anything that has it set? That
On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote:
> My concern is whether we as the FS are responsible for doing anything
> more than storing and returning that immutable flag — are we supposed
> to block writes to anything that has it set? That could be much
> trickier...
The VFS layer is c
Hi,
I'm having an issue wuite similar to this old bug :
http://tracker.ceph.com/issues/5194, except that I'm using centos 6.
Basically, I setup a cluster using ceph-deploy to save some time (this
is a 90+ OSD cluster). I rebooted a node earlier today and now all the
drives are unmounted and a
This is in our testing branch and should go to Linus the next time we
send him stuff for merge. Unfortunately there's nobody doing CephFS
kernel backports at this time so you'll need to wait for that to come
out or spin your own. :(
-Greg
On Tue, Jan 27, 2015 at 10:46 AM, Christopher Armstrong
wr
On Wed, Jan 28, 2015 at 5:24 AM, John Spray wrote:
> We don't implement the GETFLAGS and SETFLAGS ioctls used for +i.
>
> Adding the ioctls is pretty easy, but then we need somewhere to put
> the flags. Currently we don't store a "flags" attribute on inodes,
> but maybe we could borrow the high b
Hi,
I'm running a small Ceph cluster (Emperor), with 3 servers, each running a
monitor and two 280 GB OSDs (plus an SSD for the journals). Servers have 16 GB
memory and a 8 core Xeon processor and are connected with 3x 1 gbps (lacp
trunk).
As soon as I give the cluster some load from a client
Thank you for the reply. This is a feature that we would like to see.
Should I write a cephfs tracker report on this as a possible future
enhancement?
On Wed, Jan 28, 2015 at 6:24 AM, John Spray wrote:
> We don't implement the GETFLAGS and SETFLAGS ioctls used for +i.
>
> Adding the ioctls is pr
On Wed, Jan 28, 2015 at 10:35 PM, Yan, Zheng wrote:
> On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote:
>> Hi:
>>
>> I have completed the installation of ceph cluster,and the ceph health is
>> ok:
>>
>> cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
>> health HEALTH_OK
>> monmap e1: 1 m
Hi,Sage.
Yes, Firefly.
[root@ceph05 ~]# ceph --version
ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
Yes, I have seen this behavior.
[root@ceph08 ceph]# rbd info vm-160-disk-1
rbd image 'vm-160-disk-1':
size 32768 MB in 8192 objects
order 22 (4096 kB objects)
On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote:
> Hi:
>
> I have completed the installation of ceph cluster,and the ceph health is
> ok:
>
> cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
> health HEALTH_OK
> monmap e1: 1 mons at {ceph01=10.194.203.251:6789/0}, election epoch 1,
> quor
On Wed, 28 Jan 2015, Irek Fasikhov wrote:
> Sage.
> Is a sentence when deleting objects bypass the cache tier pool.
There's currently no knob or hint to do that. It would be pretty simple
to add, but it's a heuristic that only works for certain workloads..
sage
> Thank
>
> Wed Jan 28 2015
Sage.
Is a sentence when deleting objects bypass the cache tier pool.
Thank
Wed Jan 28 2015 at 5:13:36 PM, Irek Fasikhov :
> Hi,Sage.
>
> Yes, Firefly.
> [root@ceph05 ~]# ceph --version
> ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
>
> Yes, I have seen this behavior.
>
> [root@
We don't implement the GETFLAGS and SETFLAGS ioctls used for +i.
Adding the ioctls is pretty easy, but then we need somewhere to put
the flags. Currently we don't store a "flags" attribute on inodes,
but maybe we could borrow the high bits of the mode attribute for this
if we wanted to implement
On 01/28/2015 02:10 AM, Nick Fisk wrote:
> Hi Mike,
>
> I've been working on some resource agents to configure LIO to use implicit
> ALUA in an Active/Standby config across 2 hosts. After a week long crash
> course in pacemaker and LIO, I now have a very sore head but it looks like
> it's working
I am using ceph firefly (ceph version 0.80.7 ) with single Radosgw
instance, no RBD.
I am facing problem of ".rgw.buckets has too few pgs "
I have tried to increased the number of pgs using command "ceph osd
pool set pg_num " but in vain.
I also tried "ceph osd crush tunables optimal " but no effe
Hi Mike,
I've been working on some resource agents to configure LIO to use implicit
ALUA in an Active/Standby config across 2 hosts. After a week long crash
course in pacemaker and LIO, I now have a very sore head but it looks like
it's working fairly well. I hope to be in a position in the next f
Your mount command?
Lindsay Mathieson
-Original Message-
From: "于泓海"
Sent: 28/01/2015 4:48 PM
To: "ceph-us...@ceph.com"
Subject: [ceph-users] Help:mount error
Hi:
I have completed the installation of ceph cluster,and the ceph health is ok:
cluster 15ee68b9-eb3c-4a49-8a9
29 matches
Mail list logo