How did you deploy Ceph Jewel on Debian 7?
2016-07-26 1:08 GMT+08:00 Mark Nelson :
> Several years ago Mark Kampe proposed doing something like this. I was
> never totally convinced we could make something accurate enough quickly
> enough for it to be useful.
>
> If I were to attempt it, I would
Good job, thank you for sharing, Wido~
It's very useful~
2016-07-14 14:33 GMT+08:00 Wido den Hollander :
> To add, the RGWs upgraded just fine as well.
>
> No regions in use here (yet!), so that upgraded as it should.
>
> Wido
>
> > Op 13 juli 2016 om 16:56 schreef Wido den Hollander :
> >
> >
>
Much appreciated~
2016-07-07 14:18 GMT+08:00 Haomai Wang :
> Previously dpdk plugin only support cmake.
>
> Currently I'm working on splitting that PR into multiple clean PRs to get
> them merged, so the previous PR isn't on my work list. Please move on to
> the following changes
>
> On T
645: recipe for target 'libcommon_crc.la' failed
make[3]: *** [libcommon_crc.la] Error 1
make[3]: *** Waiting for unfinished jobs
2016-07-07 9:04 GMT+08:00 席智勇 :
> Hi haomai:
>
> I noticed your PR about supporting DPDK in Ceph:
>
> https://github.com/ceph/ceph/pull/9230
Hi haomai:
I noticed your PR about supporting DPDK in Ceph:
https://github.com/ceph/ceph/pull/9230
It's a great piece of work for Ceph.
I want to do some tests based on the PR, but I cannot use it yet. First, I
could not find a package for DPDK on Debian/Ubuntu, so I downloaded the
DPDK source code and compiled it.
version info
==
cepher@10-165-160-18:~/xzy$ ceph -v
ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
cepher@10-165-160-18:~/xzy$ cat /etc/debian_version
8.4
2016-06-02 17:54 GMT+08:00 席智勇 :
> hi cepher:
>
>I upgrade my ceph cluster to Jewel, and use bluestore a
hi cepher:
I upgraded my ceph cluster to Jewel and use BlueStore as the backend
store. When I create an image using the rbd command line tool, it works OK, like:
cepher@10-165-160-18:~/xzy$ sudo rbd create xzy_vol -p
switch01_ssd_volumes --size 10240
cepher@10-165-160-18:~/xzy$ rbd ls -p switch01_
got~
thank you~
regards~
2016-04-28 20:59 GMT+08:00 Sage Weil :
> Hi,
>
> Here are the slides:
>
>
> http://www.slideshare.net/sageweil1/bluestore-a-new-faster-storage-backend-for-ceph
>
> sage
>
> On Thu, 28 Apr 2016, 席智勇 wrote:
>
> > hi sage:
> >
hi sage:
I found the slides of Vault 2016 on this page (
http://events.linuxfoundation.org/events/vault/program/slides), but it seems
not to be the whole set according to the schedule info, and I didn't find
yours. Can you share your slides or anything useful from Vault about BlueStore?
regards~
zhiyo
Can anyone give me some advice?
-- Forwarded message --
From:
Date: 2016-04-26 18:50 GMT+08:00
Subject: google perftools on ceph-osd
To: Stefan Priebe - Profihost AG
hi Stefan:
When we are using Ceph, I found the osd process uses much more CPU,
especially during small random writes.
And this setting will enable the "exclusive-lock, object-map, fast-diff,
deep-flatten" features.
>
>
>
>
> Best wishes,
> Mika
>
>
> 2016-04-21 16:56 GMT+08:00 席智勇 :
>
>> That's true for me too.
>> You can disable them via set in the conf fil
That's true for me too.
You can disable them by setting this in the conf file.
#ceph.conf
rbd_default_features = 3
#means only layering and striping are enabled
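The value 3 here is a bitmask over librbd's per-image feature flags. As a rough sketch (the bit values match the feature flags in the Jewel sources, but the helper names below are mine, for illustration only):

```python
# Illustrative sketch (not Ceph code) of the bitmask behind
# rbd_default_features; bit values as in the Jewel librbd sources.
RBD_FEATURES = {
    "layering":       1,   # 1 << 0
    "striping":       2,   # 1 << 1
    "exclusive-lock": 4,   # 1 << 2
    "object-map":     8,   # 1 << 3
    "fast-diff":      16,  # 1 << 4
    "deep-flatten":   32,  # 1 << 5
}

def feature_mask(names):
    """Combine named features into the integer used in ceph.conf."""
    mask = 0
    for name in names:
        mask |= RBD_FEATURES[name]
    return mask

def feature_names(mask):
    """Decode an rbd_default_features value back into feature names."""
    return [n for n, bit in RBD_FEATURES.items() if mask & bit]

# rbd_default_features = 3 keeps only the older, kernel-friendly features:
print(feature_mask(["layering", "striping"]))  # → 3
# The Jewel default (61) additionally enables the newer features that
# the kernel rbd module may not support:
print(feature_names(61))
```

Decoding 61 this way gives layering plus exclusive-lock, object-map, fast-diff, and deep-flatten, which matches the feature list mentioned above.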
2016-04-21 16:00 GMT+08:00 Mika c :
> Hi cephers,
> Had the same issue too. But the command "rbd feature disable" is not
> working for me.
> Any com
2016-04-07 0:18 GMT+08:00 Patrick McGarry :
> Hey cephers,
>
> I have all but one of the presentations from Ceph Day Sunnyvale, so
> rather than wait for a full hand I went ahead and posted the link to
> the slides on the event page:
>
> http://ceph.com/cephdays/ceph-day-sunnyvale/
thanks for sh
I have read SK's performance tuning work too; it's a good job, especially
the analysis of write/read latency on the OSD.
I want to ask a question about the 'Long logging time' optimization: what is
meant by 'split logging into another thread and do it later'?
AFAIK, ceph does logging async by
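For what it's worth, the general idea behind "split logging into another thread and do it later" can be sketched like this (a toy Python model, not Ceph's actual logging code): the hot path only enqueues the record, and a background thread does the slow formatting and I/O off the critical path.

```python
import queue
import threading

# Toy sketch (not Ceph code) of moving logging off the critical path:
# log() only enqueues; a background thread formats and writes later.
class AsyncLogger:
    def __init__(self, sink):
        self._q = queue.Queue()
        self._sink = sink  # e.g. a list standing in for a log file
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, msg):
        # Fast path: just enqueue, no formatting or I/O here.
        self._q.put(msg)

    def _drain(self):
        while True:
            msg = self._q.get()
            if msg is None:          # shutdown sentinel
                break
            self._sink.append("LOG: " + msg)  # slow work happens here

    def close(self):
        self._q.put(None)
        self._worker.join()

out = []
lg = AsyncLogger(out)
lg.log("osd op start")
lg.log("osd op done")
lg.close()
print(out)  # ['LOG: osd op start', 'LOG: osd op done']
```

The point of the optimization is that the latency-sensitive OSD thread pays only the cost of the enqueue, not of the write.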
Some tips:
1. If you enabled auth_cluster_required, you should check the keyring.
2. Can you reach the monitors from your admin node via ssh without a
password?
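Tip 2 can be scripted. A minimal sketch (the helper names are mine, not part of any Ceph tooling): with BatchMode=yes, ssh fails instead of prompting for a password, so a non-zero exit code means key-based login is not set up.

```python
import subprocess

# Hypothetical helper for checking passwordless ssh to a monitor host.
# BatchMode=yes makes ssh exit non-zero rather than prompt for a password.
def ssh_check_cmd(host):
    return ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
            host, "true"]

def can_reach(host, runner=subprocess.run):
    # runner is injectable so the logic can be exercised without real hosts
    return runner(ssh_check_cmd(host)).returncode == 0
```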
2016-04-16 18:16 GMT+08:00 AJ NOURI :
> Followed the preflight and quick start
> http://docs.ceph.com/docs/master/start/quick-ceph-depl
Hi, can you post the output of 'modinfo rbd' and your cluster state from 'ceph -s'?
>
> 2016-04-18 16:35 GMT+08:00 席智勇 :
> > hi cephers:
> >
> > I create a rbd volume(image) on Jewel release, when exec rbd map, I got
> the
> > error message as follows.i can not
hi cephers:
I created an rbd volume (image) on the Jewel release. When executing rbd
map, I got the error message below. I cannot find any useful message in
syslog/kern.log/messages.
Can anyone share some tips?
--my ceph
version
root@hzbxs-
hi Jan:
got it.
thanks for the reply.
At 2015-11-19 17:14:19, "Jan Schermer" wrote:
>There's no added benefit - it just adds resiliency.
>On the other hand - more monitors means more likelihood that one of them will
>break, when that happens there will be a brief interruption to so
hi all:
As the title says, if I deploy more than three ceph-mon nodes, I can tolerate
more monitor node failures. What I want to know is: is there any other
benefit, for example better IOPS or latency? On the other hand, what
disadvantages does it have?
best regards~
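The trade-off Jan describes comes down to quorum arithmetic: with n monitors, Paxos needs a majority alive, so extra monitors buy fault tolerance, not performance. A small sketch (helper names are mine):

```python
# Sketch of monitor quorum arithmetic: with n monitors, a majority must
# stay up, so extra monitors add resiliency, not IOPS -- client I/O goes
# to the OSDs, not through the monitors.
def quorum_size(n):
    return n // 2 + 1          # monitors that must remain alive

def tolerated_failures(n):
    return (n - 1) // 2        # monitors that may fail

for n in (3, 4, 5, 7):
    print(n, quorum_size(n), tolerated_failures(n))
# Note that 4 monitors tolerate no more failures than 3 (both tolerate 1),
# which is why odd monitor counts are usually recommended.
```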