On Sat, Nov 18, 2017 at 4:49 PM, Fred Gansevles wrote:
> Hi,
>
> Currently our company has +/- 50 apps where every app has its own
> data-area on NFS.
> We need to switch to S3, using Ceph, as our new data layer with
> every app using its own s3-bucket, equivalent to the NFS data-area.
> The sizes o
e you added the
> plugin? Is it simply a matter of putting a symlink in the right place or
> will I have to recompile?
Yep, maybe we could add lz4 as the default build option.
>
> Any suggestions or pointers would be gratefully received.
>
> -TJ Ragan
>
>
>
> On 26 Oct 201
On Thu, Oct 26, 2017 at 17:06, Stefan Priebe - Profihost AG wrote:
> Hi Sage,
>
> On 25.10.2017 at 21:54, Sage Weil wrote:
> > On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
> >> Hello,
> >>
> >> in the Luminous release notes it is stated that zstd is not supported by
> >> bluestore due to performance rea
ellanox
> (https://www.youtube.com/watch?v=Qb2SUWLdDCw)
>
> Is nobody out there running a cluster with RDMA?
> Any help is appreciated!
>
> Gerhard W. Recher
>
> net4sec UG (haftungsbeschränkt)
> Leitenweg 6
> 86929 Penzing
>
> +49 171 4802507
>
4802507
> On 27.09.2017 at 15:59, Haomai Wang wrote:
>> do you set local gid option?
>>
>> On Wed, Sep 27, 2017 at 9:52 PM, Gerhard W. Recher
>> wrote:
>>> Yep, RoCE.
>>>
>>> I followed all the recommendations in the Mellanox papers ...
>>>
inet6 fe80::268a:7ff:fee2:6071 prefixlen 64 scopeid 0x20
> ether 24:8a:07:e2:60:71 txqueuelen 1000 (Ethernet)
> RX packets 25450717 bytes 39981352146 (37.2 GiB)
> RX errors 0 dropped 77 overruns 77 frame 0
> TX packets 26554236 bytes 5341915
On Wed, Sep 27, 2017 at 8:33 PM, Gerhard W. Recher
wrote:
> Hi Folks!
>
> I'm totally stuck
>
> RDMA is running on my NICs; rping, udaddy etc. give positive results.
>
> cluster consist of:
> proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve)
> pve-manager: 5.0-32 (running version: 5.0-32/2560e
Oh, I was on a flight at the time.
On Wed, Sep 6, 2017 at 6:28 PM, Joao Eduardo Luis wrote:
> On 09/06/2017 06:06 AM, Leonardo Vaz wrote:
>>
>> Hey cephers,
>>
>> The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm
>> Eastern Time (EDT), in an APAC-friendly time slot.
>
>
> As much
29 09:40:21.541394
> /build/ceph-12.1.4/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 244:
> FAILED assert(!r)
>
>
> Reverting back to 4.4.0-92-generic solves it though.. so I stay with that
> for now.
> I will have a go with linux 4.6.0.
interesting, I have no idea on this. m
On Tue, Aug 29, 2017 at 12:01 AM, Florian Haas wrote:
> Sorry, I worded my questions poorly in the last email, so I'm asking
> for clarification here:
>
> On Mon, Aug 28, 2017 at 6:04 PM, Haomai Wang wrote:
>> On Mon, Aug 28, 2017 at 7:54 AM, Florian Haas wrote:
>>>
On Mon, Aug 28, 2017 at 7:54 AM, Florian Haas wrote:
> On Mon, Aug 28, 2017 at 4:21 PM, Haomai Wang wrote:
>> On Wed, Aug 23, 2017 at 1:26 AM, Florian Haas wrote:
>>> Hello everyone,
>>>
>>> I'm trying to get a handle on the current state of the async m
Did you follow these instructions (https://community.mellanox.com/docs/DOC-2693)?
On Mon, Aug 28, 2017 at 6:40 AM, Jeroen Oldenhof wrote:
> Hi All!
>
> I'm trying to run CEPH over RDMA, using a batch of Infiniband Mellanox
> MT25408 20GBit (4x DDR) cards.
>
> RDMA is running, rping works between all
On Wed, Aug 23, 2017 at 1:26 AM, Florian Haas wrote:
> Hello everyone,
>
> I'm trying to get a handle on the current state of the async messenger's
> RDMA transport in Luminous, and I've noticed that the information
> available is a little bit sparse (I've found
> https://community.mellanox.com/do
On Tue, Jul 11, 2017 at 11:11 PM, Sage Weil wrote:
> On Tue, 11 Jul 2017, Sage Weil wrote:
>> Hi all,
>>
>> Luminous features a new 'service map' that lets rgw's (and rgw nfs
>> gateways and iscsi gateways and rbd mirror daemons and ...) advertise
>> themselves to the cluster along with some metad
You can decrease "ms_async_rdma_send_buffers" and
"ms_async_rdma_receive_buffers" to see whether that helps, if the reason is a
system limitation.
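For reference, a minimal ceph.conf sketch of that tuning; the option names come from the message above, while the section placement and values are only illustrative and should be adapted to your memory limits:

  [global]
  # fewer registered RDMA buffers -> less pinned memory per messenger
  ms_async_rdma_send_buffers = 1024
  ms_async_rdma_receive_buffers = 1024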
On Wed, Jun 28, 2017 at 6:09 PM, Haomai Wang wrote:
> On Wed, Jun 28, 2017 at 6:02 PM, 한승진 wrote:
>> Hello Cephers!
>>
On Wed, Jun 28, 2017 at 6:02 PM, 한승진 wrote:
> Hello Cephers!
>
> I am testing CEPH over RDMA now.
>
> I cloned the latest source code of ceph.
>
> I added below configs in ceph.conf
>
> ms_type = async+rdma
> ms_cluster_type = async+rdma
> ms_async_rdma_device_name = mlx4_0
>
> However, I got same
refer to https://github.com/ceph/ceph/pull/5013
On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard wrote:
> +ceph-devel to get input on whether we want/need to check the value of
> /dev/cpu_dma_latency (platform dependant) at startup and issue a
> warning, or whether documenting this would suffice?
>
>
Regards,
>
> Hung-Wei Chiu(邱宏瑋)
> --
> Computer Center, Department of Computer Science
> National Chiao Tung University
>
> 2017-03-24 18:28 GMT+08:00 Haomai Wang :
>
>> the content of ceph.conf ?
>>
>> On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) &
What's the content of your ceph.conf?
On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋)
wrote:
> Hi Deepak.
>
> Thanks for your reply,
>
> I try to use gperf to profile the ceph-osd with basic mode (without RDMA)
> and you can see the result in the following link.
> http://imgur.com/a/SJgEL
>
> In the gper
On Thu, Mar 23, 2017 at 5:49 AM, Hung-Wei Chiu (邱宏瑋)
wrote:
> Hi,
>
> I use the latest (master branch, upgrade at 2017/03/22) to build ceph with
> RDMA and use the fio to test its iops/latency/throughput.
>
> In my environment, I setup 3 hosts and list the detail of each host below.
>
> OS: ubunt
Please use the master branch to test RDMA.
On Sun, Mar 19, 2017 at 11:08 PM, Hung-Wei Chiu (邱宏瑋) wrote:
> Hi
>
> I want to test the performance for Ceph with RDMA, so I build the ceph
> with RDMA and deploy into my test environment manually.
>
> I use the fio for my performance evaluation and it works
It must be that ceph-mon wasn't compiled with RDMA support.
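One hedged way to sanity-check whether a packaged daemon was built with RDMA support is to look for the verbs library among its link dependencies (this assumes a dynamically linked build; the exact library names can differ per distro):

  ldd /usr/bin/ceph-mon | grep -Ei 'ibverbs|rdma'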
>
> Appreciate any pointers.
>
> On Thu, Mar 9, 2017 at 5:56 PM, Haomai Wang wrote:
>>
>> On Fri, Mar 10, 2017 at 4:28 AM, PR PR wrote:
>> > Hi,
>> >
>> > I am trying to use ceph w
On Fri, Mar 10, 2017 at 4:28 AM, PR PR wrote:
> Hi,
>
> I am trying to use ceph with RDMA. I have a few questions.
>
> 1. Is there a prebuilt package that has rdma support or the only way to try
> ceph+rdma is to checkout from github and compile from scratch?
>
> 2. Looks like there are two ways o
On Tue, Feb 14, 2017 at 11:44 PM, Bastian Rosner
wrote:
>
> Hi,
>
> according to kraken release-notes and documentation, AsyncMessenger now also
> supports RDMA and DPDK.
>
> Is anyone already using async-ms with RDMA or DPDK and might be able to tell
> us something about real-world performance
On Wed, Nov 16, 2016 at 7:19 PM, fridifree wrote:
> Hi,
> Thanks
> This is for rados, not for s3 with nodejs
> If someone can send examples how to do that I will appreciate it
Oh, if you are referring to S3, you can get Node.js support via the AWS SDK; see the AWS docs.
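Since radosgw speaks the S3 protocol, the usual route from Node.js is the AWS SDK pointed at the gateway. A minimal sketch assuming the aws-sdk v2 npm package; the endpoint, keys and signature version are placeholders to adapt to your radosgw setup:

  // npm install aws-sdk
  const AWS = require('aws-sdk');

  const s3 = new AWS.S3({
    endpoint: 'http://rgw.example.com:7480',  // your radosgw endpoint
    accessKeyId: 'RGW_ACCESS_KEY',
    secretAccessKey: 'RGW_SECRET_KEY',
    s3ForcePathStyle: true,                   // radosgw generally expects path-style URLs
    signatureVersion: 'v2'                    // older radosgw releases only speak v2
  });

  // list the buckets owned by this user
  s3.listBuckets((err, data) => {
    if (err) return console.error(err);
    console.log(data.Buckets.map(b => b.Name));
  });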
>
> Thank you
>
>
> On Nov 1
https://www.npmjs.com/package/rados
On Wed, Nov 16, 2016 at 6:29 PM, fridifree wrote:
> Hi Everyone,
>
> Does someone know how to use Node.js with Ceph S3 (radosgw)?
> I succeeded in doing that in Python using boto, but I can't find any examples of
> how to do this in Node.js.
> If someone can share with me examples
There may be something broken in the switch to cmake. I will check this. Thanks!
On Mon, Oct 31, 2016 at 6:46 PM, wrote:
> Hi Cephers:
>
> I built Ceph (v11.0.2) with SPDK, then hit the following error:
>
> [ 87%] Building CXX object src/os/CMakeFiles/os.dir/FuseStore.cc.o
> [ 87%] Building
On Thu, Oct 27, 2016 at 2:10 AM, Trygve Vea
wrote:
> - On 26 Oct 2016 at 16:37, Sage Weil s...@newdream.net wrote:
>> On Wed, 26 Oct 2016, Trygve Vea wrote:
>>> - On 26 Oct 2016 at 14:41, Sage Weil s...@newdream.net wrote:
>>> > On Wed, 26 Oct 2016, Trygve Vea wrote:
>>> >> Hi,
>>> >>
>>> >> We
On Wed, Oct 26, 2016 at 9:57 PM, Trygve Vea
wrote:
> - On 26 Oct 2016 at 15:36, Haomai Wang hao...@xsky.com wrote:
>> On Wed, Oct 26, 2016 at 9:09 PM, Trygve Vea
>> wrote:
>>>
>>> - On 26 Oct 2016 at 14:41, Sage Weil s...@newdream.net wrote:
>>>
On Wed, Oct 26, 2016 at 9:09 PM, Trygve Vea
wrote:
>
> - On 26 Oct 2016 at 14:41, Sage Weil s...@newdream.net wrote:
> > On Wed, 26 Oct 2016, Trygve Vea wrote:
> >> Hi,
> >>
> >> We have two Ceph-clusters, one exposing pools both for RGW and RBD
> >> (OpenStack/KVM) pools - and one only for RBD.
Could you check dmesg? I suspect there is a disk EIO error.
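A hedged sketch of the checks meant here; the device name is a placeholder:

  dmesg | grep -iE 'i/o error|ata[0-9]|sd[a-z]'   # look for block-layer EIO messages
  smartctl -a /dev/sdX                            # check the drive's own error counters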
On Tue, Oct 25, 2016 at 9:58 AM, Zhang Qiang wrote:
> Hi,
>
> One of several OSDs on the same machine crashed several times within days.
> It's always that one, other OSDs are all fine. Below is the dumped message,
> since it's too long
On Fri, Oct 21, 2016 at 10:56 PM, Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Haomai Wang
> > Sent: 21 October 2016 15:40
> > To: Nick Fisk
> > Cc: ceph-users@lists.ceph.com
>
On Fri, Oct 21, 2016 at 10:31 PM, Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Haomai Wang
> > Sent: 21 October 2016 15:28
> > To: Nick Fisk
> > Cc: ceph-users@lists.ceph.com
>
On Fri, Oct 21, 2016 at 10:19 PM, Nick Fisk wrote:
> Hi,
>
> I'm just testing out using a Ceph client in a DMZ behind a FW from the
> main Ceph cluster. One thing I have noticed is that if the
> state table on the FW is emptied maybe by restarting it or just clearing
> the state table...etc. Then
Did you try restarting the OSD to see the memory usage?
On Fri, Oct 7, 2016 at 1:04 PM, David Burns wrote:
> Hello all,
>
> We have a small 160TB Ceph cluster used only as a test s3 storage repository
> for media content.
>
> Problem
> Since upgrading from Firefly to Hammer we are experiencing very hi
BTW, why do you need to iterate over so many objects? I think the goal
could be achieved another way.
On Wed, Sep 21, 2016 at 11:23 PM, Iain Buclaw wrote:
> On 20 September 2016 at 19:27, Gregory Farnum wrote:
>> In librados getting a stat is basically equivalent to reading a small
>> o
On Wed, Sep 21, 2016 at 2:41 AM, Wido den Hollander wrote:
>
>> On 20 September 2016 at 20:30, Haomai Wang wrote:
>>
>>
>> On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander wrote:
>> >
>> >> Op 20 september 2016 om 19:27 schreef Gregory Farnu
On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander wrote:
>
>> On 20 September 2016 at 19:27, Gregory Farnum wrote:
>>
>>
>> In librados getting a stat is basically equivalent to reading a small
>> object; there's not an index or anything so FileStore needs to descend its
>> folder hierarchy. I
On Mon, Sep 19, 2016 at 8:25 PM, Daniel Schneller
wrote:
> Hello!
>
>
> We are observing a somewhat strange IO pattern on our OSDs.
>
>
> The cluster is running Hammer 0.94.1, 48 OSDs, 4 TB spinners, xfs,
>
> colocated journals.
I think you need to upgrade to a newer Hammer version.
>
>
> Over peri
users
>
>
> -- Original --
> From: "Haomai Wang";
> Date: Mon, Sep 19, 2016 04:18 PM
> To: "王海生-软件研发部";
> Cc: "ceph-users";
> Subject: Re: [ceph-users] ceph object merge file pieces
>
> On Mon, Sep 19, 2016 at 4:10 P
On Mon, Sep 19, 2016 at 4:10 PM, 王海生-软件研发部
wrote:
> Dear all
> we have set up a Ceph cluster using object storage. There is a case in our
> product where the client can upload a large file; right now we are thinking of splitting the
> large file into several pieces and sending them to Ceph. Can I send each piece to
> c
Previously the DPDK plugin only supported cmake.
Currently I'm working on splitting that PR into multiple clean PRs so they can
be merged, so the previous PR isn't on my work list. Please move on to the
following changes.
On Thu, Jul 7, 2016 at 1:25 PM, 席智勇 wrote:
> I copy rte_config.h to /usr/include/ and it can pass the ./conf
On Fri, May 13, 2016 at 8:11 PM, Florent B wrote:
> Hi everyone,
>
> I would like to setup Ceph cache tiering and I would like to know if I
> can have a single cache tier pool, used as "hot storage" for multiple
> backend pools ?
No, we can't. I think it's too complex to implement this in the curr
Your
cluster is Ceph running in Docker via NET; I'm not sure whether there are some
potential problems, but obviously it's not a very common deployment.
>
>
>
> 2016-02-20 9:01 GMT+03:00 Haomai Wang :
> >
> >
> > On Sat, Feb 20, 2016 at 2:26 AM, wrote:
> >>
On Sat, Feb 20, 2016 at 2:26 AM, wrote:
> Hi All.
>
> We're running 180-node cluster in docker containers -- official
> ceph:hammer.
> Recently, we've found a rarely reproducible problem on it: sometimes
> data transfer freezes for significant time (5-15 minutes). The issue
> is taking place whil
On Tue, Dec 22, 2015 at 3:27 PM, Florian Haas wrote:
> On Tue, Dec 22, 2015 at 3:10 AM, Haomai Wang wrote:
> >> >> >> Hey everyone,
> >> >> >>
> >> >> >> I recently got my hands on a cluster that has been underperforming
> &
On Tue, Dec 22, 2015 at 3:33 AM, Florian Haas wrote:
> On Mon, Dec 21, 2015 at 4:15 PM, Haomai Wang wrote:
> >
> >
> > On Mon, Dec 21, 2015 at 10:55 PM, Florian Haas
> wrote:
> >>
> >> On Mon, Dec 21, 2015 at 3:35 PM, Haomai Wang wrote:
> >&
On Mon, Dec 21, 2015 at 10:55 PM, Florian Haas wrote:
> On Mon, Dec 21, 2015 at 3:35 PM, Haomai Wang wrote:
> >
> >
> > On Fri, Dec 18, 2015 at 1:16 AM, Florian Haas
> wrote:
> >>
> >> Hey everyone,
> >>
> >> I recently got my hands
resend
On Mon, Dec 21, 2015 at 10:35 PM, Haomai Wang wrote:
>
>
> On Fri, Dec 18, 2015 at 1:16 AM, Florian Haas wrote:
>
>> Hey everyone,
>>
>> I recently got my hands on a cluster that has been underperforming in
>> terms of radosgw throughput, averaging
On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda
wrote:
> No, SSD and SAS are in two separate pools.
>
> On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang wrote:
>>
>> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
>> wrote:
>> > I have a Hammer Ceph cluster on 7 n
On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
wrote:
> I have a Hammer Ceph cluster on 7 nodes with total 14 OSDs. 7 of which are
> SSD and 7 of which are SAS 10K drives. I get typically about 100MB IO rates
> on this cluster.
Did you mix SAS and SSD in one pool?
>
> I have a simple questio
I guess we have a lot of QEMU performance related mails in the
ML; you may get insight from those discussions.
You could run rbd bench-write to see how many IOPS you can get
outside the VM.
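A hedged example of such a baseline run from a client node, against a throwaway test image (image name, pool and sizes are placeholders):

  rbd -p rbd create bench-img --size 10240
  rbd -p rbd bench-write bench-img --io-size 4096 --io-threads 16 \
      --io-total 1073741824 --io-pattern rand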
On Thu, Nov 19, 2015 at 6:46 PM, Sean Redmond wrote:
> Hi Mike/Warren,
>
> Thanks for helping out
.
>
> No, rbd cache is not enabled. Even if each Image creates only one extra
> thread, if I have tens of thousands of Image objects open, there will be
> tens of thousands of threads in my process. Practically speaking, am I not
> allowed to cache Image objects?
>
> On Fri, No
What's your Ceph version?
Do you have the rbd cache enabled? By default, each Image should only have one
extra thread (maybe we should also obsolete this?).
On Sat, Nov 21, 2015 at 9:26 AM, Allen Liao wrote:
> I am developing a python application (using rbd.py) that requires querying
> information about te
39 2 57 2 0 0| 058M|1711k 2187k| 0 0 |406223k
> 38 2 58 3 0 0| 060M| 910k 1373k| 0 0 |407522k
Hmm, it's a really strange result to me. As far as I recall, it should show burst
bandwidth at least. In my tests, the disk bandwidth stays at a high
level.
Yes, it's an expected case. Actually, if you use Hammer you can enable
filestore_fiemap to use sparse copy, which is especially useful for RBD
snapshot copies. But keep in mind that some old kernels are *broken* in
fiemap; CentOS 7 is the only distro I have verified works fine with this feature.
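A minimal sketch of the corresponding ceph.conf entry, assuming a Hammer filestore OSD and a kernel whose fiemap is known good (see the caveat above):

  [osd]
  filestore_fiemap = true    # use FIEMAP for sparse object copies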
On Wed, Nov 18, 2015 at 12:
On Sat, Nov 14, 2015 at 9:23 AM, Artie Ziff wrote:
>>>
>> Yes, this is definitely an old version of librados getting picked up
>> somewhere in your library load path.
>>
>> You can find where the old librados is via:
>>
>> strace -f -e open ./rbd ls 2>&1 | grep librados.so
>
> Thanks very much, Jo
On Sat, Nov 14, 2015 at 8:29 AM, Artie Ziff wrote:
> Hello!
>
> I see similar behavior on a build of version 9.2.0-702-g7d926ce
>
> Single node.
> Ceph Mon only service that is running.
>
> In Ceph configuration file (/etc/ceph/ceph.conf)
> ms_crc_header = false
>
> $ ceph -s
> 2015-11-13 16:06:24
On Fri, Nov 13, 2015 at 8:31 AM, Artie Ziff wrote:
> Greetings Ceph Users everywhere!
>
> I was hoping to locate an entry for this Ceph configuration setting:
> ms_crc_header
> Would it be here:
> http://docs.ceph.com/docs/master/rados/configuration/ms-ref/
> Or perhaps it is deprecated?
> I have
Actually, keyvaluestore also submits transactions with the sync
flag (relying on the keyvaluedb implementation's journal/logfile).
Yes, if we disabled the sync flag, keyvaluestore's performance would
increase a lot, but we don't provide that option now.
On Tue, Oct 20, 2015 at 9:22 PM, Z Zhang wrote:
> Thanks, Sage, for
On Tue, Oct 20, 2015 at 8:47 PM, Sage Weil wrote:
> On Tue, 20 Oct 2015, Z Zhang wrote:
>> Hi Guys,
>>
>> I am trying latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with
>> rocksdb 3.11 as OSD backend. I use rbd to test performance and following
>> is my cluster info.
>>
>> [ceph@xxx ~]$ ceph -
The fact is that the journal can help a lot for RBD use cases,
especially for small IOs; I don't think it will be the bottleneck. If we
just want to reduce the double write, that doesn't solve any performance
problem.
For RGW and CephFS, we actually need the journal to keep operations atomic.
On Tue, Oct 20, 2015 at 8:54
On Wed, Oct 14, 2015 at 1:03 AM, Sage Weil wrote:
> On Mon, 12 Oct 2015, Robert LeBlanc wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> After a weekend, I'm ready to hit this from a different direction.
>>
>> I replicated the issue with Firefly so it doesn't seem an issue that
>
>
> Regards
> Somnath
>
>
> -----Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Monday, October 12, 2015 11:35 PM
> To: Somnath Roy
> Cc: Mark Nelson; ceph-devel; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Initial performance cluste
s out of messenger thread.
>
> Could you please send out any documentation around the async messenger? I tried
> to google it, but not even a blueprint is popping up.
>
>
>
>
>
> Thanks & Regards
>
> Somnath
>
> From: ceph-users [mailto:ceph-users-boun...@lists
resend
On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang wrote:
> COOL
>
> Interesting that async messenger will consume more memory than simple, in my
> mind I always think async should use less memory. I will give a look at this
>
> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelso
Cool.
Interesting that the async messenger consumes more memory than simple; in
my mind I always thought async should use less memory. I will take a look at
this.
On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson wrote:
> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> conf
On Sun, Oct 4, 2015 at 6:06 PM, niejunwei wrote:
> HI, All:
>
> I am new and this is my first time writing an email here; I would very much
> appreciate your help.
>
> I created 3 virtual machines on my personal computer and built a simple test
> Ceph storage platform based on these 3 VMs.
ice, actually I have been diving into all-SSD Ceph since 2013, and I can see the
improvement from Dumpling to Hammer.
You can find my related thread from 2014, mainly about ensuring fd cache hits,
disabling CPU powersave, and memory management.
>
> Jan
>
>
> On 10 Sep 2015, at 16:38, Haomai Wang wrote:
>
>
On Thu, Sep 10, 2015 at 10:36 PM, Jan Schermer wrote:
>
> On 10 Sep 2015, at 16:26, Haomai Wang wrote:
>
> Actually we can reach 700us per 4k write IO for single io depth(2 copy,
> E52650, 10Gib, intel s3700). So I think 400 read iops shouldn't be a
> unbridgeable problem
Actually we can reach 700us per 4k write IO at a single IO depth (2 copies,
E5-2650, 10Gb, Intel S3700), so I think 400 read IOPS shouldn't be an
unbridgeable problem.
CPU is critical for an SSD backend, so what's your CPU model?
On Thu, Sep 10, 2015 at 9:48 PM, Jan Schermer wrote:
> It's certainly not
On Wed, Sep 9, 2015 at 3:00 AM, Niels Jakob Darger wrote:
> Hello,
>
> Excuse my ignorance, I have just joined this list and started using Ceph
> (which looks very cool). On AWS I have set up a 5-way Ceph cluster (4 vCPUs,
> 32G RAM, dedicated SSDs for system, osd and journal) with the Object
> Ga
Yes, we already noticed this, and I think we have a PR that partially fixes
it: https://github.com/ceph/ceph/pull/5451/files
On Fri, Aug 28, 2015 at 4:59 AM, Chad William Seys
wrote:
> Hi all,
>
> It appears that OSD daemons only very slowly free RAM after an extended period
> of an unhealthy cluster (sh
It seems like a leveldb problem. Could you just kick that OSD out and add a
new one to get the cluster healthy first?
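A hedged sketch of the usual replacement sequence for a single broken OSD (osd.12 is a placeholder; on this Firefly-era cluster the init command may be "service ceph stop osd.12" as shown, or systemd on newer setups):

  ceph osd out 12                 # start draining data off the failing OSD
  service ceph stop osd.12        # stop the daemon
  ceph osd crush remove osd.12    # drop it from the CRUSH map
  ceph auth del osd.12            # remove its key
  ceph osd rm 12                  # remove it from the OSD map
  # then add a fresh OSD on a new disk, e.g. with ceph-deploy or ceph-disk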
On Wed, Aug 12, 2015 at 1:31 AM, Gerd Jakobovitsch wrote:
>
>
> Dear all,
>
> I run a ceph system with 4 nodes and ~80 OSDs using xfs, with currently 75%
> usage, running firefly. On fri
mages. It could not relief the imbalance
>> within the existing data. Please correct me if I'm wrong.
For the existing pool, you could adjust the CRUSH weights to get better
data balance.
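For example (the OSD id and weight are placeholders; check the current weights first):

  ceph osd tree                        # inspect current CRUSH weights
  ceph osd crush reweight osd.12 0.8   # nudge an over-full OSD's weight down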
>>
>> Thanks,
>> Jevon
>>
>> 2015-08-04 22:01 GMT+08:00 Haomai Wang :
>&
On Mon, Aug 3, 2015 at 4:05 PM, 乔建峰 wrote:
> [Including ceph-users alias]
>
> 2015-08-03 16:01 GMT+08:00 乔建峰 :
>>
>> Hi Cephers,
>>
>> Currently, I'm experiencing an issue which suffers me a lot, so I'm
>> writing to ask for your comments/help/suggestions. More details are provided
>> bellow.
>>
>
Please follow the quick installation guide here
(http://ceph.com/docs/master/start/quick-start-preflight/) instead of installing
manually.
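For orientation, the preflight/quick-start flow boils down to a short ceph-deploy sequence; the hostnames and disk below are placeholders, and the exact osd commands differ slightly between releases:

  ceph-deploy new mon1
  ceph-deploy install mon1 osd1 osd2
  ceph-deploy mon create-initial
  ceph-deploy osd prepare osd1:sdb osd2:sdb
  ceph-deploy osd activate osd1:sdb1 osd2:sdb1
  ceph-deploy admin mon1 osd1 osd2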
On Mon, Aug 3, 2015 at 3:48 PM, Jiwan Ninglekhu
wrote:
> Hello Ceph user peers,
>
> I am trying to setup a Ceph cluster in OpenStack cloud. I remotely created
> 3 VMs via
On Fri, Jul 31, 2015 at 5:47 PM, Jan Schermer wrote:
> I know a few other people here were battling with the occasional issue of OSD
> being extremely slow when starting.
>
> I personally run OSDs mixed with KVM guests on the same nodes, and was
> baffled by this issue occuring mostly on the mos
"ceph daemon osd.0 config show | grep memstore"
> Aakanksha
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Tuesday, July 28, 2015 7:36 PM
> To: Aakanksha Pudipeddi-SSI
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] Co
I think option 2 should be reliable.
On Wed, Jul 29, 2015 at 9:00 PM, Kenneth Waegeman
wrote:
> Hi all,
>
> We are considering migrating all the OSDs of our EC pool from KeyValue to
> Filestore. Does anyone have experience with this? What would be a good
> procedure?
>
> We have Erasure Code usi
On Wed, Jul 29, 2015 at 10:21 AM, Aakanksha Pudipeddi-SSI
wrote:
> Hello Haomai,
>
> I am using v0.94.2.
>
> Thanks,
> Aakanksha
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Tuesday, July 28, 2015 7:20 PM
> To: Aaka
Which version are you using?
https://github.com/ceph/ceph/commit/c60f88ba8a6624099f576eaa5f1225c2fcaab41a
should fix your problem.
On Wed, Jul 29, 2015 at 5:44 AM, Aakanksha Pudipeddi-SSI
wrote:
> Hello,
>
>
>
> I am trying to setup a ceph cluster with a memstore backend. The problem is,
> it is alw
On Tue, Jul 28, 2015 at 5:28 PM, Burkhard Linke
wrote:
> Hi,
>
> On 07/28/2015 11:08 AM, Haomai Wang wrote:
>>
>> On Tue, Jul 28, 2015 at 4:47 PM, Gregory Farnum wrote:
>>>
>>> On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke
>>> wrote:
>
>
On Tue, Jul 28, 2015 at 4:47 PM, Gregory Farnum wrote:
> On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke
> wrote:
>> Hi,
>>
>> On 07/27/2015 05:42 PM, Gregory Farnum wrote:
>>>
>>> On Mon, Jul 27, 2015 at 4:33 PM, Burkhard Linke
>>> wrote:
Hi,
the nfs-ganesha documentation st
On Fri, Jul 24, 2015 at 11:55 PM, Jason Dillaman wrote:
>> Hi all,
>> I am looking for a way to alleviate the overhead of RBD snapshots/clones for
>> some time.
>>
>> In our scenario there are a few “master” volumes that contain production
>> data, and are frequently snapshotted and cloned for dev
Image metadata isn't supported by Hammer; Infernalis supports it.
On Mon, Jul 13, 2015 at 11:29 PM, Maged Mokhtar wrote:
> Hello
>
> I am trying to use rbd image-meta set.
> I get an error from rbd that this command is not recognized,
> yet it is documented in the rbd documentation:
> http://ceph.com/
Did you use an upstream Ceph version previously? And did you shut down the
running ceph-osd when upgrading the OSD?
How many OSDs hit this problem?
This assert failure means that the OSD detected an upgraded PG meta object
but failed to read (or is missing one key of) the meta keys from the object.
On Thu, Jul 23, 2015 at 7:03 PM,
I guess you only need to add "osd objectstore = keyvaluestore" and
"enable experimental unrecoverable data corrupting features =
keyvaluestore".
And be aware that keyvaluestore is an experimental backend now; it's
not recommended to deploy in a production environment!
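Putting that into ceph.conf, a sketch (both option names are quoted above; the section placement is the only assumption here):

  [global]
  enable experimental unrecoverable data corrupting features = keyvaluestore

  [osd]
  osd objectstore = keyvaluestore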
On Thu, Jul 23, 2015 at 7:13 AM, S
On Tue, Jun 30, 2015 at 9:07 PM, Michał Chybowski
wrote:
> Hi,
>
> Lately I've been working on XEN RBD SM and I'm using RBD's built-in snapshot
> functionality.
>
> My system looks like this:
> base image -> snapshot -> snaphot is used to create XEN VM's volumes ->
> volume snapshots (via rbd snap
On Wed, Jul 1, 2015 at 4:50 AM, Steffen Tilsch wrote:
> Hello Cephers,
>
> I got some questions regarding where what type of IO is generated.
>
>
>
> As far as I understand it looks like this (please see picture:
> http://imageshack.com/a/img673/4563/zctaGA.jpg ) :
>
> 1. Clients -> OSD (Journal):
On Tue, Jun 30, 2015 at 12:24 AM, German Anders wrote:
> hi cephers,
>
>Want to know if there's any 'best' practice or procedure to implement
> Ceph with Infiniband FDR 56gb/s for front and back end connectivity. Any
> crush tunning parameters, etc.
>
> The Ceph cluster has:
>
> - 8 OSD server
Actually it looks reasonable to me, and we already reached the same
conclusion for latency with SSDs.
I think a potential direction is that we need to reduce CPU utilization for the
OSD, and reduce how much an SSD backend depends on high-end CPUs.
On Sat, Jun 13, 2015 at 10:58 PM, Nick Fisk wrote:
> Hi All,
>
> I know there has
Hi Patrick,
It looks confusing to use this. Do we need to upload a txt file
to describe a blueprint instead of editing it directly online?
On Wed, May 27, 2015 at 5:05 AM, Patrick McGarry wrote:
> It's that time again, time to gird up our loins and submit blueprints
> for all work slated for the
On Thu, May 28, 2015 at 1:40 AM, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> With all the talk of tcmalloc and jemalloc, I decided to do some
> testing og the different memory allocating technologies between KVM
> and Ceph. These tests were done a pre-production s
ld keyring and conf. Any important thing to note for this
> kind of data recovery work? thx
>
> regards,
> mingfai
>
> On Sat, May 23, 2015 at 10:59 AM, Haomai Wang wrote:
>>
>> Experimental feature like keyvaluestore won't support upgrade from 0.87 to
>> 0.94.
Experimental features like keyvaluestore don't support upgrading from 0.87 to 0.94.
Sorry.
On Sat, May 23, 2015 at 7:35 AM, Mingfai wrote:
> hi,
>
> I have a ceph cluster that use keyvaluestore-dev. After upgraded from v0.87
> to v0.94.1, and changed the configuration (removed "-dev" suffix and adde
It looks like deep scrub makes the disk busy and some threads are blocking on this.
Maybe you could lower the scrub-related configuration values and watch the
disk utilization while deep-scrubbing.
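A hedged example of such tuning in ceph.conf; the option names are common scrub knobs of that era, but availability varies by release and the values are purely illustrative:

  [osd]
  osd_max_scrubs = 1                  # at most one scrub per OSD at a time
  osd_scrub_sleep = 0.1               # pause between scrub chunks to give client IO air
  osd_deep_scrub_interval = 2419200   # deep-scrub every 4 weeks instead of weekly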
On Sat, Apr 11, 2015 at 3:01 AM, Andrei Mikhailovsky wrote:
> Hi guys,
>
> I was wondering if anyone noticed that the d
Oh, you also need to turn off "mon_osd_adjust_down_out_interval"
On Tue, Apr 7, 2015 at 8:57 PM, lijian wrote:
>
> Haomai Wang,
>
> the mon_osd_down_out_interval is 300, please refer to my settings, and I use
> the cli 'service ceph stop osd.X' to stop a
Whatever version you tested, Ceph won't start recovering data the moment you
manually stop an OSD; it will only mark the down OSD out after
"mon_osd_down_out_interval" seconds.
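A sketch of setting both knobs mentioned in this thread (values illustrative; they are monitor-side options, so [mon] or [global] placement):

  [mon]
  mon_osd_down_out_interval = 600            # wait 10 minutes before marking a down OSD out
  mon_osd_adjust_down_out_interval = false   # don't let the monitor scale that interval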
On Tue, Apr 7, 2015 at 8:33 PM, lijian wrote:
> Hi,
> The recovering start delay 300s after I stop a osd and th
We have a related topic in CDS about
hadoop+ceph (https://wiki.ceph.com/Planning/Blueprints/Infernalis/rgw%3A_Hadoop_FileSystem_Interface_for_a_RADOS_Gateway_Caching_Tier).
It doesn't directly solve the data locality problem, but tries to avoid
data migration between different storage clusters.
It would