Hello!
On Fri, Oct 30, 2015 at 09:30:40PM +, moloney wrote:
> Hi,
> I recently got my first Ceph cluster up and running and have been doing some
> stress tests. I quickly found that during sequential write benchmarks the
> throughput would often drop to zero. Initially I saw this i
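The benchmark tool isn't named in the truncated message above; a common way to drive a sequential write test like this is rados bench (the pool name and runtime below are placeholders, not taken from the poster's setup):

    # placeholder pool and a 60-second sequential write run; the per-second
    # "cur MB/s" column makes stalls like the ones described above easy to spot
    rados bench -p testpool 60 write --no-cleanup
    rados -p testpool cleanup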
Hi,
I am having trouble integrating Ceph 0.94.5 with OpenStack Kilo.
I can upload an image to Glance successfully, but I can't delete it; its status
stays stuck in "Deleting".
This is my glance-api.conf:
http://pastebin.com/index/TpZ4xps1
Thanks and regards
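The pastebin contents aren't quoted here, but for comparison, a minimal RBD-backed [glance_store] section for Kilo generally looks like the following (pool and user names are assumptions; note that deletes also require the Glance cephx user to have write access to that pool):

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8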
Bump... :)
On 2015-11-02 15:52:44 +, Daniel Schneller said:
Hi!
I am trying to set up a Rados Gateway, prepared for multiple regions
and zones, according to the documentation on
http://docs.ceph.com/docs/hammer/radosgw/federated-config/.
Ceph version is 0.94.3 (Hammer).
I am stuck at the
On 11/05/2015 01:13 PM, Daniel Schneller wrote:
> Bump... :)
>
> On 2015-11-02 15:52:44 +, Daniel Schneller said:
>
>> Hi!
>>
>>
>> I am trying to set up a Rados Gateway, prepared for multiple regions
>> and zones, according to the documentation on
>> http://docs.ceph.com/docs/hammer/radosgw/f
I am not sure of its status -- it looks like it was part of 3.6 planning, but it
was recently moved to 4.0 on the wiki. There is a video walkthrough of the
running integration from this past August [1]. You would only need to deploy
Cinder and Keystone -- no need for all the other bits. Again,
On 2015-11-04T14:30:56, Hugo Slabbert wrote:
> Sure. My post was not intended to say that iSCSI over RBD is *slow*, just
> that it scales differently than native RBD client access.
>
> If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs
> can saturate the 10G links, I
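Continuing the 10-host example from the quoted text, the arithmetic (illustrative, not a measurement) looks roughly like this:

    native RBD clients:   10 OSD hosts x 10 Gbit/s = ~100 Gbit/s aggregate toward clients
    iSCSI via a gateway:  bounded by the gateway's client-facing links,
                          e.g. 1 x 10 Gbit/s per gateway, however many OSDs sit behind it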
On 2015-11-05 12:16:35 +, Wido den Hollander said:
This is usually when keys aren't set up properly. Are you sure that the
cephx keys you are using are correct and that you can connect to the
Ceph cluster?
Wido
Yes, I could execute all kinds of commands, however it turns out, I
might ha
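For anyone hitting the same symptom, the key check Wido suggests can be done along these lines (the client name and keyring path are placeholders for whatever the gateway is actually configured to use):

    ceph auth get client.radosgw.gateway
    ceph -s --name client.radosgw.gateway \
        --keyring /etc/ceph/ceph.client.radosgw.keyring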
Hi,
Do you have any ideas as to what might be wrong? Since my last email I decided
to recreate the cluster. I am currently testing upgrading from 0.72 to 0.80.10
with hopes to end up on hammer.
So I completely erased the cluster and reloaded the machines with CentOS 6.5 (to
match my productio
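For reference, the usual order for each hop (0.72 -> 0.80.x -> 0.94) is: upgrade the packages, then restart monitors first, then OSDs, then any MDS daemons and clients. A rough sketch, assuming the cluster is managed with ceph-deploy and sysvinit (hostnames are placeholders):

    # placeholders: mon1..mon3, osd1..osdN
    ceph-deploy install --release firefly mon1 mon2 mon3 osd1 osd2
    # then restart daemons in order, one host at a time
    service ceph restart mon    # on each monitor host
    service ceph restart osd    # on each OSD host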
It worked.
So what's broken with caching?
- Original Message -
From: "Jason Dillaman"
To: "Joe Ryner"
Cc: ceph-us...@ceph.com
Sent: Thursday, November 5, 2015 3:18:39 PM
Subject: Re: [ceph-users] rbd hang
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'?
--
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'?
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner"
> To: "Jason Dillaman"
> Cc: ceph-us...@ceph.com
> Sent: Thursday, November 5, 2015 4:14:28 PM
> Subject: Re: [ceph-users] rbd hang
>
> Hi,
>
>
It appears you have set your cache size to 64 bytes(!):
2015-11-05 15:07:49.927510 7f0d9af5a760 20 librbd::ImageCtx: Initial cache
settings: size=64 num_objects=10 max_dirty=32 target_dirty=16 max_dirty_age=5
This exposed a known issue [1] when you attempt to read more data in a single
read req
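The rbd cache settings are in bytes, not megabytes, so a 64 MB cache would look something like this in the [client] section of ceph.conf (the dirty thresholds are illustrative values, not taken from the poster's config):

    [client]
    rbd cache = true
    # 64 MiB, expressed in bytes
    rbd cache size = 67108864
    # writeback thresholds must stay below the cache size
    rbd cache max dirty = 50331648
    rbd cache target dirty = 33554432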
Why did you guys go with partitioning the SSD for ceph journals, instead of
just using the whole SSD for bcache and leaving the journal on the filesystem
(which itself is on top of bcache)? Was there really a benefit to separating the
journals from the bcache-fronted HDDs?
I ask because it has been
Hi,
ceph-deploy mon create ceph5
builds the monitor ( 5th new monitor, 4 already existing )
ceph5# python /usr/sbin/ceph-create-keys --cluster ceph -i ceph5
hangs with:
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
^CTr
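A monitor stuck in "probing" generally means it cannot reach, or is not yet known to, the existing monitors. Its own view of the quorum can be checked on ceph5 via the admin socket (default socket path shown):

    ceph daemon mon.ceph5 mon_status
    # equivalent long form
    ceph --admin-daemon /var/run/ceph/ceph-mon.ceph5.asok mon_status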
Thanks for the heads up. I have had it set this way for a long time in all
of my deployments. I assumed that the units were in MB.
Arg..
I will test new settings.
Joe
- Original Message -
From: "Jason Dillaman"
To: "Joe Ryner"
Cc: ceph-us...@ceph.com
Sent: Thursday, November 5,
I have the following 4 pools:
pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash
rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash
rjenkins pg_num 256 pgp_num 256 last_
(128*2+256*2+256*14+256*5)/15 =~ 375.
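Spelled out, that is the sum of pg_num x (replica or EC size) over the four pools, divided by what is presumably the OSD count, i.e. roughly how many PGs each OSD carries:

    (128*2 + 256*2 + 256*14 + 256*5) / 15
    = (256 + 512 + 3584 + 1280) / 15
    = 5632 / 15
    =~ 375 PGs per OSD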
On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote:
> I have the following 4 pools:
>
> pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash
> rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool
> stripe_width 0 poo
On the bright side, at least your week of export-related pain should result in
a decent speed boost when your clients get 64MB of cache instead of 64B.
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner"
> To: "Jason Dillaman"
> Cc: ceph-us...@ceph.com
> Sent: Thursday, Nove
It's weird that it has even been working.
Thanks again for your help!
- Original Message -
From: "Jason Dillaman"
To: "Joe Ryner"
Cc: ceph-us...@ceph.com
Sent: Thursday, November 5, 2015 4:29:49 PM
Subject: Re: [ceph-users] rbd hang
On the bright side, at least your week of export-rel
Hi Craig,
I am testing a federated gateway with 1 region and 2 zones, and I found that only
metadata is replicated; the data is NOT.
I have been through your checklist and I am sure everything is in place. Could you
review my configuration scripts? The configuration files are similar to
http://docs.ceph.com/
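One common cause of exactly this split (metadata syncs, data does not) is the region configuration: both log_meta and log_data need to be "true" for the sync agent to have data log entries to replay, and radosgw-agent must not be running with --metadata-only. A quick check (the --name value is a placeholder for your gateway instance):

    radosgw-admin region get --name client.radosgw.us-east-1
    # confirm the output contains "log_meta": "true" and "log_data": "true"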
Dear Ceph supporters and developers,
This is just a suggestion to improve Ceph's visibility.
I have been looking into how to properly cite the Ceph project in proposals
and scientific literature. I have just found that GitHub provides a way
to generate a DOI for projects. Just check:
https://github