Ryan, a team at eBay recently did some metadata testing, have a search on
this list. Pretty sure they found there wasn't a huge benefit in putting
the metadata pool on solid state. As Christian says, it's all about RAM and CPU.
You want to get as many inodes into cache as possible.
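Roughly, the knob to look at is the MDS cache size; a minimal sketch, assuming a
jewel-era MDS (the value is only an example, size it to your RAM):
# in ceph.conf on the MDS host:
#   [mds]
#   mds cache size = 1000000
# or at runtime over the admin socket (replace <name> with your MDS id):
$ ceph daemon mds.<name> config set mds_cache_size 1000000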
On 26 Sep 2016 2:09 a
Hello,
On Mon, 26 Sep 2016 08:28:02 +0100 David wrote:
> Ryan, a team at eBay recently did some metadata testing, have a search on
> this list. Pretty sure they found there wasn't a huge benefit in putting
> the metadata pool on solid state. As Christian says, it's all about RAM and CPU.
> You want to get as many inodes into cache as possible.
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote:
>
> > On 23 September 2016 at 5:59, Chengwei Yang wrote:
> >
> >
> > Hi list,
> >
> > I found that the ceph repo is broken these days, there is no repodata in the
> > repo at all.
> >
> > http://us-east.ceph.com/rpm-jewel/el7/x
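A quick sanity check for the repodata (the full path here is my guess based on
the truncated URL above):
$ curl -sI http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/repomd.xml
$ sudo yum clean metadata && sudo yum makecache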
On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote:
>
>
> On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
>> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
[snipped]
cat /sys/bus/rbd/devices/47/client_id
c
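For reference, the sysfs attributes of a mapped krbd device can be inspected like
this (device id 47 taken from the output above):
$ ls /sys/bus/rbd/devices/
$ cat /sys/bus/rbd/devices/47/client_id   # entity id of the kernel client
$ cat /sys/bus/rbd/devices/47/pool /sys/bus/rbd/devices/47/name   # pool and image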
On Mon, Sep 26, 2016 at 8:28 AM, David wrote:
> Ryan, a team at eBay recently did some metadata testing, have a search on
> this list. Pretty sure they found there wasn't a huge benefit in putting the
> metadata pool on solid state. As Christian says, it's all about RAM and CPU. You
> want to get as many inodes into cache as possible.
On Mon, Sep 26, 2016 at 11:13 AM, Ilya Dryomov wrote:
> On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote:
>>
>>
>> On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
>
>
Hi John,
Can you provide:
radosgw-admin zonegroupmap get on both us-dfw and us-phx?
radosgw-admin realm get and radosgw-admin period get on all the gateways?
Orit
On Thu, Sep 22, 2016 at 4:37 PM, John Rowe wrote:
> Hello Orit, thanks.
>
> I will do all 6 just in case. Also as an FYI I originally
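For the record, a sketch of how I'd collect that on each gateway so the outputs
can be diffed (file names are just a suggestion):
$ radosgw-admin zonegroupmap get > zonegroupmap.$(hostname).json
$ radosgw-admin realm get > realm.$(hostname).json
$ radosgw-admin period get > period.$(hostname).json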
Hi,
This has been discussed on the ML before [0], but I would like to bring this up
again with the outlook towards BlueStore.
Bcache [1] allows for block device level caching in Linux. This can be
read/write(back) and vastly improves read and write performance to a block
device.
With the curr
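For readers who haven't used it, a minimal bcache setup sketch (device names are
placeholders; writeback mode is optional and carries its own risks):
# /dev/sdb = slow backing device for the OSD, /dev/nvme0n1p1 = cache partition
$ make-bcache -B /dev/sdb -C /dev/nvme0n1p1
$ echo writeback > /sys/block/bcache0/bcache/cache_mode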
Hello all!
I need some help with my Ceph cluster.
I've installed a Ceph cluster with two physical servers, each with a 40G OSD
mounted at /data.
Here is ceph.conf:
[global]
fsid = 377174ff-f11f-48ec-ad8b-ff450d43391c
mon_initial_members = vm35, vm36
mon_host = 192.168.1.35,192.168.1.36
auth_cluster_required = c
Hi,
On 09/26/2016 12:58 PM, Dmitriy Lock wrote:
Hello all!
I need some help with my Ceph cluster.
I've installed a Ceph cluster with two physical servers, each with a 40G OSD
mounted at /data.
Here is ceph.conf:
[global]
fsid = 377174ff-f11f-48ec-ad8b-ff450d43391c
mon_initial_members = vm35, vm36
mon_host
Yes, you are right!
I've changed this for all pools, but not for the last two!
pool 1 '.rgw.root' replicated size 2 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 8 pgp_num 8 last_change 27 owner 18446744073709551615 flags
hashpspool stripe_width 0
pool 2 'default.rgw.control' replicated size
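(For anyone hitting the same problem: with only two OSD hosts the default size-3
pools can never go clean; a sketch of the change, <pool> is a placeholder:)
$ ceph osd pool set <pool> size 2
$ ceph osd pool set <pool> min_size 1   # optional, allows I/O with a single replica
$ ceph osd dump | grep 'replicated size'   # verify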
Hello,
> Yes, you are right!
> I've changed this for all pools, but not for the last two!
>
> pool 1 '.rgw.root' replicated size 2 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 8 pgp_num 8 last_change 27 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 2 'default.rgw
Hey,
10.2.3 was tagged in the jewel branch more than 5 days ago already, but there
has been no announcement for it yet. Is there any reason for that?
Packages seem to be present too.
Hi experts,
I need your help. I have a running cluster with 19 OSDs and 3 MONs. I
created a separate LVM for /var/lib/ceph on one of the nodes. I
stopped the mon service on that node, rsynced the content to the newly
created LVM and restarted the monitor, but obviously, I didn't do that
c
(Sorry, sometimes I use the wrong shortcuts too quickly)
Hi experts,
I need your help. I have a running cluster with 19 OSDs and 3 MONs. I
created a separate LVM for /var/lib/ceph on one of the nodes. I
stopped the mon service on that node, rsynced the content to the newly
created LVM and re
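Roughly: stop the mon, copy the data, remount, start it again; a sketch with
placeholder names (not my exact commands):
$ systemctl stop ceph-mon@<id>
$ mount /dev/vg0/ceph /mnt/newceph
$ rsync -aX /var/lib/ceph/ /mnt/newceph/
$ umount /mnt/newceph
$ mount /dev/vg0/ceph /var/lib/ceph   # old content stays hidden under the mount
$ systemctl start ceph-mon@<id>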
2016-09-26 11:31 GMT+02:00 Wido den Hollander :
...
> Does anybody know the proper route we need to take to get this fixed
> upstream? Does anyone have contacts with the bcache developers?
I do not have direct contacts either, but having partitions on bcache
would be really great. Currently we do some nas
On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander wrote:
> Hi,
>
> This has been discussed on the ML before [0], but I would like to bring
> this up again with the outlook towards BlueStore.
>
> Bcache [1] allows for block device level caching in Linux. This can be
> read/write(back) and vastly
Hello all
I need your help.
I have a running Ceph cluster on AWS with 3 MONs and 3 OSDs.
My question is: can I use an EBS snapshot of an OSD as a backup solution? Will
it work if I create a volume from a snapshot of an OSD and add it to the Ceph
cluster as a new OSD?
Any help on whether this approach is correct or not. w
> On 26 September 2016 at 17:48, Sam Yaple wrote:
>
>
> On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander wrote:
>
> > Hi,
> >
> > This has been discussed on the ML before [0], but I would like to bring
> > this up again with the outlook towards BlueStore.
> >
> > Bcache [1] allows for blo
On Mon, Sep 26, 2016 at 5:44 PM, Wido den Hollander wrote:
>
> > On 26 September 2016 at 17:48, Sam Yaple wrote:
> >
> >
> > On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander
> wrote:
> >
> > > Hi,
> > >
> > > This has been discussed on the ML before [0], but I would like to bring
> > > this
Agreed, no announcement like there usually is; what is going on?
Hopefully there is an explanation. :|
On Mon, Sep 26, 2016 at 6:01 AM Henrik Korkuc wrote:
> Hey,
>
> 10.2.3 was tagged in the jewel branch more than 5 days ago already, but there
> has been no announcement for it yet. Is there any reason
We are running Hammer 0.94.7 and have had very bad experiences with PG
folders splitting into further sub-directories: OSDs being marked out, hundreds of
blocked requests, etc. We have modified our settings and watched the behavior
match the Ceph documentation for splitting, but right now the su
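For reference, the relevant filestore knobs and the split point (values below
are examples only, not a recommendation):
$ ceph daemon osd.0 config show | grep -E 'filestore_(merge_threshold|split_multiple)'
# in ceph.conf under [osd]:
#   filestore merge threshold = 40
#   filestore split multiple = 8
# a subdirectory splits at roughly merge_threshold * 16 * split_multiple objects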
We are looking to implement a small setup with Ceph + OpenStack + KVM for a
college that teaches IT careers. We want to empower teachers and students to
self-provision resources and to develop skills to extend and/or build
multi-tenant portals.
Currently:
45 VMs (90% Linux and 10% Windows) using 70 vCPUs f
Please try:
ceph pg repair <pgid>
Most of the time that will fix it!
Good luck!
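A minimal sketch of that workflow (<pgid> is a placeholder; list-inconsistent-obj
needs jewel or later):
$ ceph health detail | grep inconsistent   # find the affected PGs
$ rados list-inconsistent-obj <pgid> --format=json-pretty
$ ceph pg repair <pgid>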
> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> (Sorry, sometimes I use the wrong shortcuts too quick)
>
> Hi experts,
>
> I need your help. I have a running cluster with 19 OSDs and 3 MONs. I created
> a separate LVM
> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> And the number of scrub errors is increasing, although I started with more
> than 400 scrub errors.
> What I have tried is to manually repair single PGs as described in [1]. But
> some of the broken PGs have no entries in the log file so I don't have
> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> What I have tried is to manually repair single PGs as described in [1]. But
> some of the broken PGs have no entries in the log file so I don't have
> anything to look at.
> In case there is an object in one OSD but it is missing in the other, how do I
Hello all,
I'm a Ceph newbie. I read the documentation and created a Ceph cluster on VMs.
I have a question about how to apply user management to the cluster. I'm
not asking how to create or modify users or user privileges. I have found this
in the Ceph documentation.
I want to know:
1. Is t
Hi Jason,
I've been able to rebuild some of the images, but all are corrupted at this
time; still, your procedure appears OK.
Thanks!
2016-09-22 15:07 GMT+02:00 Jason Dillaman :
> You can do something like the following:
>
> # create a sparse file the size of your image
> $ dd if=/dev/zero of=rbd_expor
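For the archives, the general shape of that procedure as I understood it (a
sketch, not the exact steps; sizes and names are placeholders and assume 4M
objects):
# create a sparse file the size of the original image (10G is only an example)
$ dd if=/dev/zero of=rbd_export bs=1 count=0 seek=10G
# write each recovered 4M rados object back at its offset (offset = object index * 4M)
$ dd if=<object_file> of=rbd_export bs=4M seek=<object_index> conv=notrunc
# then import the reassembled file
$ rbd import rbd_export <pool>/<image>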
Hi Dillon,
Ceph uses CephX authentication, which gives users permission to read / write
on selected pools. We give mon 'allow r'
so the client can fetch the cluster/CRUSH maps.
You can refer to the URL below for more information on CephX and creating
user keyrings for access to selected / specific pools.
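For example, a minimal restricted keyring can be created like this (client name
and pool are placeholders):
$ ceph auth get-or-create client.appuser mon 'allow r' osd 'allow rw pool=mypool' \
    -o /etc/ceph/ceph.client.appuser.keyring
$ ceph auth list   # verify the caps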