Please vote for my presentation (search "swami reddy")
https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/presentation/15137/?q=ranga
On Wed, Jul 27, 2016 at 1:16 AM, Patrick McGarry wrote:
> Hey cephers,
>
> It seems that direct links to specific OpenStack talks have been
disabled.
The purpose of the cluster network is to isolate the heartbeat (and
recovery) traffic. I imagine that is why you are struggling to get the
heartbeat traffic on the public network.
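That split is normally made explicit in ceph.conf. A minimal sketch, where the
two subnets are placeholders for whatever ranges you actually use:

    [global]
    # example subnets - replace with your actual ranges
    # client, monitor and MDS traffic
    public network = 10.0.1.0/24
    # replication, recovery and OSD heartbeat traffic
    cluster network = 10.0.2.0/24

The OSDs bind to both networks, so each OSD host needs an interface in each
subnet; clients and monitors only ever talk on the public network.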
On 27 Jul 2016 8:32 p.m., "Venkata Manojawa Paritala" wrote:
> Hi,
>
> I have configured the below 2 networks in Ceph
On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
> On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> >
> > > Hi list,
> > >
> > > I just followed the placement group guide to set pg_num for the rbd pool.
> > >
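For anyone else following that guide, the values can be checked and raised
from the CLI. A quick sketch - the pool name rbd comes from the thread, the
target of 128 is only an example:

    ceph osd pool get rbd pg_num
    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128

Remember to raise pgp_num to match pg_num afterwards, and note that on Jewel
pg_num can only ever be increased, never decreased.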
Hello,
On Fri, 29 Jul 2016 04:46:54 +0000 zhu tong wrote:
> Right, that was the formula I used to calculate osd_pool_default_pg_num in
> our test cluster.
>
>
> 7 OSDs, 11 pools, osd_pool_default_pg_num is calculated to be 256, but when
> ceph status shows
>
Already wrong, that default is _per_ pool.
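To spell it out with the numbers from this thread, assuming the common size=3
replication (the post doesn't say):

    cluster-wide target:  7 OSDs * 100 / 3 replicas ~=  233 PGs
    what you get instead: 11 pools * 256 PGs         = 2816 PGs
    per OSD:              2816 * 3 / 7              ~= 1200 PG copies

Over a thousand PG copies per OSD is an order of magnitude past the ~100
target, which is exactly the situation ceph status will warn about.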
Dear cephers,
I would like to request some clarification on migrating from legacy to optimal
(jewel) tunables.
We have recently migrated from Infernalis to Jewel. However, we are still using
legacy tunables.
All our Ceph infrastructure (mons, osds and mdss) is running 10.2.2 on CentOS
7.2.15
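For reference, checking and switching the profile is a one-liner each, though
note the caveat below:

    ceph osd crush show-tunables     # inspect the current profile
    ceph osd crush tunables optimal  # switch to the jewel profile

Switching from legacy to optimal tunables changes CRUSH placement and will
rebalance a large fraction of your data, so schedule it for a quiet window and
first confirm that any kernel clients are recent enough for the jewel profile.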
Thanks, Chengwei Yang.
2016-07-29 17:17 GMT+07:00 Chengwei Yang:
> Would http://ceph.com/pgcalc/ help?
>
> On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> > Hi all,
> > I have a cluster consisting of: 3 Monitors, 1 RGW, 1 host of 24
> > OSDs (2TB/OSD) and
> > some pool as:
> >
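The rule of thumb behind pgcalc, applied to a cluster like the one above
(24 OSDs, and assuming size=3 replication, which the truncated quote doesn't
show):

    total pg_num ~= 24 OSDs * 100 / 3 replicas = 800  -> round up to 1024

That total is then divided among the pools in proportion to the share of data
each one is expected to hold, which is what the per-pool rows on the pgcalc
page work out for you.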
Hello,
On Sat, 30 Jul 2016 16:51:10 +1000 Richard Thornton wrote:
> Hi,
>
> Thanks for taking a look, any help you can give would be much appreciated.
>
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love tinkering.
>
Commenda
Thanks Wido, David and Christian, much appreciated!
Regarding using SSD for OSD, I don’t want to spend any more, so I will
use the 2TB spinning disks; performance is not a huge issue. Ceph is
overkill but I have the hardware lying around.
It’s a small business, just a few users, no current file s
Hello,
On Mon, 1 Aug 2016 15:03:14 +1000 Richard Thornton wrote:
> Thanks Wido, David and Christian, much appreciated!
>
> Regarding using SSD for OSD, I don’t want to spend any more, so I will
> use the 2TB spinning disks; performance is not a huge issue. Ceph is
> overkill but I have the hardware lying around.
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
>
> > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> > >
> > > > Hi list,
> > > >
> > > >
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote:
>
> > On 29 July 2016 at 16:30, Chengwei Yang wrote:
> >
> >
> > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> > >
> > > > On 29 July 2016 at 13:20, Chengwei Yang wrote:
> > > >
> > > >
>