Hi,
Consider a Ceph cluster with one IO-intensive pool (e.g. VM storage) plus a few
not-so-IO-intensive ones.
I'm wondering whether it makes sense to use the available SSDs in the cluster
nodes (1 SSD per 4 HDDs) as part of a writeback cache pool in front of the
IO-intensive pool, instead of using
> I'm wondering whether it makes sense to use the available SSDs in the
> cluster nodes (1 SSD per 4 HDDs) as part of a writeback cache pool in front of
> the IO-intensive pool, instead of using them as journal SSDs? With this
> method, the OSD journals would be co-located on the HDDs or the SSD:
Hi James,
Yes, I've checked bcache, but as far as I can tell you need to manually
configure and register the backing devices and attach them to the cache device,
which is not really suitable for a dynamic environment (like RBD devices for
cloud VMs).
Benjamin
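(For reference, and not as a recommendation: a writeback cache tier in front of
an existing pool is set up roughly as below. The pool names are placeholders,
the SSD pool is assumed to already have a CRUSH rule mapping it to the SSDs,
and the sizing value is only an example.)

# create an SSD-backed pool to act as the cache tier
ceph osd pool create vm-cache 512
# put it in front of the IO-intensive pool in writeback mode
ceph osd tier add vm-storage vm-cache
ceph osd tier cache-mode vm-cache writeback
ceph osd tier set-overlay vm-storage vm-cache
# basic hit-set tracking and a cap so the tier starts flushing/evicting
ceph osd pool set vm-cache hit_set_type bloom
ceph osd pool set vm-cache target_max_bytes 500000000000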
Hi,
I have an issue with adding "new" (someone re-provisioned the machine
accidentally) OSDs to the cluster.
First I removed the old OSDs from the CRUSH map (they are no longer in ceph osd
tree).
Trying to add the OSDs again fails:
With ceph-deploy it *looks* like it is working, but then it hangs on OSD start
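(In case it helps anyone hitting the same thing: leftover state for the old OSD
IDs is a common cause. A rough cleanup-and-re-add sequence, with <id>, <host>
and <disk> as placeholders, would be something like:)

ceph osd crush remove osd.<id>        # already done here, per the above
ceph auth del osd.<id>                # drop the old OSD's cephx key
ceph osd rm <id>                      # remove the OSD id from the osdmap
ceph-deploy osd create <host>:<disk>  # then re-create the OSD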
Thank you Josh,
I'm not sure if this is ceph's fault or perhaps FastCGI's or Boto's.
When trying to upload a 10KB chunk I got those mysterious FastCGI
errors on the server side and boto.exception.BotoServerError:
BotoServerError: 500 Internal Server Error on the client side.
I've now tried to upload 4M
On 07/08/2014 11:04 AM, Robert van Leeuwen wrote:
Hi,
I have an issue with adding "new" (someone re-provisioned the machine
accidentally) OSDs to the cluster.
First I removed the old OSDs from the CRUSH map (they are no longer in ceph osd
tree).
Trying to add the OSDs again fails:
With ceph-deploy it *l
Hi Benjamin,
Unless I misunderstood, I think the suggestion was to use bcache devices on the
OSDs (not on the clients), so what you use them for in the end doesn't really
matter.
The setup of a bcache device is pretty similar to a mkfs, and once set up,
bcache devices come up and can be mounted as
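(For anyone who hasn't used it, the bcache workflow is roughly the following;
the device names and the cache-set UUID are placeholders:)

make-bcache -C /dev/sdf          # format the SSD as a cache device
make-bcache -B /dev/sdb          # format the HDD as a backing device
echo /dev/sdb > /sys/fs/bcache/register          # usually done by udev
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
mkfs.xfs /dev/bcache0            # then use /dev/bcache0 like any other disk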
hi all,
one of the changes in the 0.82 release (according to the notes) is:
mon: prevent EC pools from being used with cephfs
can someone clarify this a bit? does cephfs with EC pools make no sense? now?
ever? or is it just not recommended? (i'm also interested in the
technical reasons behind it)
On 07/08/2014 05:06 AM, Yitao Jiang wrote:
hi, cephers
i am new here, and launched a single-node ceph cluster with mon, mds and
osds; below is my ceph.conf. I want to use IPv4 with the mon daemon, but it
is still using IPv6.
*ceph.conf*
---
[global]
auth_se
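(The config is cut off above, but for what it's worth, forcing the mon to use
IPv4 usually comes down to [global] settings along these lines; the addresses
below are placeholders:)

ms bind ipv6 = false
public network = 192.168.1.0/24
mon host = 192.168.1.10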
> Hi James,
>
> Yes, I've checked bcache, but as far as I can tell you need to manually
> configure and register the backing devices and attach them to the cache
> device, which is not really suitable for a dynamic environment (like RBD
> devices for cloud VMs).
>
You would use bcache for the osd n
Hi Arne and James,
Ah, I misunderstood James' suggestion. Using bcache with SSDs can indeed be
another viable alternative to SSD journal partitions.
I think I will ultimately need to test the options myself, since very few
people have experience with cache tiering or bcache.
Thanks,
Benjamin
From: Arne
Hi,
I'm trying to get radosgw working properly with OpenStack Horizon,
and I'm facing some strange problems.
The overall data flow is as follows:
client -> [ nginx+horizon(via uwsgi) ] -> [ some_server + radosgw ]
Everything works fine when I use nginx as a server for radosgw, the only
pr
> Try to add --debug-osd=20 and --debug-filestore=20
> The logs might tell you more why it isn't going through.
Nothing of interest there :(
What I do notice is that when I run ceph-deploy it references a keyring
that does not exist:
--keyring /var/lib/ceph/tmp/mnt.J7nSi0/keyring
If I look o
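(For reference, the debug settings suggested above can also be made persistent
by adding them to ceph.conf on the OSD host, e.g.:)

[osd]
debug osd = 20
debug filestore = 20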
On 07/08/2014 04:28 AM, Stijn De Weirdt wrote:
hi all,
one of the changes in the 0.82 release (according to the notes) is:
mon: prevent EC pools from being used with cephfs
can someone clarify this a bit? does cephfs with EC pools make no sense? now?
ever? or is it just not recommended (i'm also in
Hello,
I want to enable the qemu rbd writeback cache; the following are the settings
in /etc/ceph/ceph.conf:
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = false
rbd_cache_size = 27180800
rbd_cache_max_dirty = 20918080
rbd_cache_target_dirty = 16808000
rbd_cache_max_dirty_age = 60
Do you set cache=writeback in your vm’s qemu conf for that disk?
// david
On 8 Jul 2014, at 14:34, lijian wrote:
> Hello,
>
> I want to enable the qemu rbd writeback cache; the following are the settings
> in /etc/ceph/ceph.conf:
> [client]
> rbd_cache = true
> rbd_cache_writethrough_until_flush =
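(To illustrate what David means: the cache mode is set on the disk itself. A
libvirt example, where the pool/image name and monitor address are
placeholders:)

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk'>
    <host name='192.168.1.10' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

With plain qemu, the equivalent is cache=writeback on the -drive option.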
With the info you provided, I think you have enabled the rbd cache. As for
the performance improvement, it's related to your performance tests
On Tue, Jul 8, 2014 at 8:34 PM, lijian wrote:
> Hello,
>
> I want to enable the qemu rbd writeback cache; the following are the settings
> in /etc/ceph/ceph.conf:
> [cl
Hi David,
I set cache=writeback in the VM's qemu definition.
Thanks!
Jian Li
At 2014-07-08 08:47:24, "David" wrote:
Do you set cache=writeback in your vm’s qemu conf for that disk?
// david
On 8 Jul 2014, at 14:34, lijian wrote:
Hello,
I want to enable the qemu rbd writeback cache, the following
Haomai Wang,
I used FIO to test 4K and 1M read/write at iodepth=1 and iodepth=32 inside the
VM. Do you know the improvement percentage from using the cache, based on your
own tests?
Thanks!
Jian Li
At 2014-07-08 08:47:39, "Haomai Wang" wrote:
>With the info you provided, I think you have enabled the rbd cache. As fo
Hello, Sylvain
Could you please share your civetweb conf for radosgw?
When launched directly (via --rgw-frontends "civetweb port=80") it has issues
with Horizon; maybe I'm missing something.
2014-07-07 14:41 GMT+03:00 Sylvain Munaut :
> Hi,
>
> > if anyone else is looking to run radosgw without ha
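(For context, the ceph.conf equivalent of that command-line flag is the rgw
frontends option in the gateway's client section; the section name and port
here are just examples:)

[client.radosgw.gateway]
rgw frontends = civetweb port=80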
Hi,
we maintain a cluster with 126 OSDs, replication 3 and approx. 148T of raw
used space. We basically store data objects on two pools, one being
approx. 300x larger than the other in terms of data stored and number of
objects. Based on the formula provided here
http://ceph.com/docs/master/rados/operations/p
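(Plugging this cluster's numbers into that formula, as I understand it:

total PGs ≈ (num OSDs × 100) / replica count = (126 × 100) / 3 = 4200,
rounded to a power of two: 4096 (or 8192 if you always round up),

which would then be divided across the pools, weighted by their expected share
of the data.)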
Hello,
how did you come up with those bizarre cache sizes?
Either way, if you test with FIO, anything that significantly exceeds the size
of the cache will see very little to no effect.
To verify things, set the cache values to around 2GB and test with a FIO file
that is just 1GB in siz
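(A minimal fio run along those lines, assuming a scratch file inside the VM;
the filename, size and job parameters are only examples:)

fio --name=cachetest --filename=/root/fio-test --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based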
hi mark,
thanks for clarifying it a bit. we'll certainly have a look at the
caching tier setup.
stijn
On 07/08/2014 01:53 PM, Mark Nelson wrote:
On 07/08/2014 04:28 AM, Stijn De Weirdt wrote:
hi all,
one of the changes in the 0.82 release (according to the notes) is:
mon: prevent EC pool
ping?
now that centos7 is out, we'd like to make some comparisons with el6 and fuse
(on el7 and el6), etc.
the ceph module from the kernel itself compiles and loads (the rbd
module from the kernel sources gives a build error, but i don't need
that one)
but in general it might be nice that the k
The impact won't be 300 times bigger, but it will be bigger. There are two
things impacting your cluster here:
1) the initial "split" of the affected PGs into multiple child PGs. You can
mitigate this by stepping through pg_num in small multiples.
2) the movement of data to its new location (when yo
Hi Greg,
We're also due for a similar splitting exercise in the not too distant future,
and will also need to minimize the impact on latency.
In addition to increasing pg_num in small steps and using a minimal
max_backfills/recoveries configuration, I was planning to increase pgp_num very
slowl
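(A sketch of the kind of sequence being discussed, with the pool name, step
size and throttle values as illustrative placeholders only:)

# throttle backfill/recovery before touching pg_num
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# step pg_num up in small increments, then follow with pgp_num
ceph osd pool set volumes pg_num 2048
ceph osd pool set volumes pgp_num 2048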
On Tue, Jul 8, 2014 at 10:14 AM, Dan Van Der Ster
wrote:
> Hi Greg,
> We're also due for a similar splitting exercise in the not too distant
> future, and will also need to minimize the impact on latency.
>
> In addition to increasing pg_num in small steps and using a minimal
> max_backfills/recov
You can and should run multiple RadosGW and Apache instances per zone. The
whole point of Ceph is eliminating as many points of failure as possible.
You'll want to set up a load balancer just like you would for any website.
You'll want your load balancer to recognize and forward both
http://us-wes
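(A minimal haproxy sketch of that kind of setup; the hostnames, ports and
section names are placeholders:)

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend rgw_frontend
    bind *:80
    default_backend rgw_backend

backend rgw_backend
    balance roundrobin
    server rgw1 gw1.example.com:80 check
    server rgw2 gw2.example.com:80 check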
Hi -
We have a multi-region configuration with metadata replication between the
regions for a unified namespace. Each region has pools in a different cluster.
Users created in the master region are replicated to the slave region without
any issues - we can get user info, and everything is cons
It is not a CRUSH map thing. What is your PG/OSD ratio? Ceph recommends 100-200
PGs per OSD (after multiplying by the replica count or EC stripe count). Even
so, we have also observed about 20-40% differences in the PG/OSD distribution.
You may try a higher PG/OSD ratio, but be warned that the messenge
Guess I'll try again. I gave this another shot, following the
documentation, and still end up with basically a fork bomb rather than
the nice ListAllMyBucketsResult output that the docs say I should get.
Everything else about the cluster works fine, and I see others
talking about the gateway
Hi Greg,
thanks for your immediate feedback. My comments follow.
Initially we thought that the 248 PG (15%) increment we used was
really small, but it seems that we should increase PGs in even smaller
increments. I think that "multiples" is not the appropriate
term here; I fear someone woul