Congratulations on the release!
John
On Wed, Sep 14, 2016 at 4:08 PM, Lenz Grimmer wrote:
> Hi,
>
> if you're running a Ceph cluster and would be interested in trying out a
> new tool for managing/monitoring it, we've just released version 2.0.14
> of openATTIC that now provides a first implemen
> On 13 September 2016 at 18:54, "WRIGHT, JON R (JON R)" wrote:
>
>
> VM Client OS: ubuntu 14.04
>
> Openstack: kilo
>
> libvirt: 1.2.12
>
> nova-compute-kvm: 1:2015.1.4-0ubuntu2
>
What librados/librbd version are you running on the client?
Wido
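For an Ubuntu 14.04 client like the one described above, the installed client library versions can be checked with the distribution's package tools, for example:

$ dpkg -l librados2 librbd1 | grep ^ii
$ dpkg -s librbd1 | grep ^Version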
> Jon
>
> On 9/13/2016 11:17 AM, Wido
> On 15 September 2016 at 10:34, Florent B wrote:
>
>
> Hi everyone,
>
> I have a Ceph cluster on Jewel.
>
> Monitors are on 32GB ram hosts.
>
> After a few days, the ceph-mon process uses 25 to 35% of 32GB (8 to 11 GB):
>
> 1150 ceph 20 0 15.454g 7.983g 7852 S 0.3 25.5 490:29.11
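In case it helps anyone reading the archive: a common first check for a bloated ceph-mon resident size (an assumption on my part, not a confirmed diagnosis for this cluster) is whether tcmalloc is simply holding on to freed memory. The daemons must be built with tcmalloc for these commands to work; the monitor id "a" is a placeholder:

$ ceph tell mon.a heap stats
$ ceph tell mon.a heap release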
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jim
> Kilborn
> Sent: 14 September 2016 20:30
> To: Reed Dier
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Replacing a failed OSD
>
> Reed,
>
>
>
> Thanks for the response.
> On 14 September 2016 at 14:56, "Dennis Kramer (DT)" wrote:
>
>
> Hi Burkhard,
>
> Thank you for your reply, see inline:
>
> On Wed, 14 Sep 2016, Burkhard Linke wrote:
>
> > Hi,
> >
> >
> > On 09/14/2016 12:43 PM, Dennis Kramer (DT) wrote:
> >> Hi Goncalo,
> >>
> >> Thank you. Yes, i have
Hi,
does CephFS impose an upper limit on the number of files in a directory?
We currently have one directory with a large number of subdirectories:
$ ls | wc -l
158141
Creating a new subdirectory fails:
$ touch foo
touch: cannot touch 'foo': No space left on device
Creating files in a diffe
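One setting worth checking here (an assumption, not a confirmed diagnosis) is the MDS per-directory-fragment entry limit, mds_bal_fragment_size_max, which defaults to 100000 in Jewel. A minimal sketch of inspecting and raising it; the MDS name and the new value are placeholders:

# On the MDS host, show the current limit via the admin socket:
$ ceph daemon mds.<name> config get mds_bal_fragment_size_max

# Raise it at runtime (add it to ceph.conf as well to make it persistent):
$ ceph tell mds.<name> injectargs '--mds_bal_fragment_size_max 500000'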
Hi Jim,
I'm using a location script for OSDs, so when I add an OSD, this script
determines its place in the cluster and which bucket it belongs in.
In your ceph.conf there is a setting you can use:
osd_crush_location_hook =
With regards,
On 09/14/2016 09:30 PM, Jim Kilborn wrote:
> Reed,
>
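For readers of the archive, a minimal sketch of what a hook behind the osd_crush_location_hook setting mentioned above could look like; the device detection and the root=ssd / root=hdd bucket names are illustrative assumptions, not the poster's actual script:

#!/bin/sh
# Ceph invokes the hook roughly as: <hook> --cluster <name> --id <osd-id> --type osd
# and expects a CRUSH location such as "host=node1 root=ssd" on stdout.

while [ "$#" -gt 0 ]; do
    case "$1" in
        --id) OSD_ID="$2"; shift ;;
    esac
    shift
done

HOST="$(hostname -s)"

# Filestore layout: the OSD data dir is a mounted filesystem, so df reports the
# backing device; strip the partition number (assumes simple /dev/sdXN naming).
DEV="$(df --output=source "/var/lib/ceph/osd/ceph-${OSD_ID}" | tail -n 1)"
DISK="$(basename "${DEV}" | sed 's/[0-9]*$//')"

if [ "$(cat "/sys/block/${DISK}/queue/rotational" 2>/dev/null)" = "0" ]; then
    echo "host=${HOST} root=ssd"
else
    echo "host=${HOST} root=hdd"
fi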
Hello cephers,
last week we survived a 3-day outage on our ceph cluster (Hammer
0.94.7, 162 OSDs, 27 "fat" nodes, 1000s of clients) due to 6 out of
162 OSDs crashing in the SAME node. The outage unfolded along the
following timeline:
time 0: OSDs living in the same node (rd0-19) start heavily flapping
> On 15 September 2016 at 10:40, Florent B wrote:
>
>
> On 09/15/2016 10:37 AM, Wido den Hollander wrote:
> >> On 15 September 2016 at 10:34, Florent B wrote:
> >>
> >>
> >> Hi everyone,
> >>
> >> I have a Ceph cluster on Jewel.
> >>
> >> Monitors are on 32GB ram hosts.
> >>
> >> After a few
On Thu, Sep 15, 2016 at 2:20 PM, Burkhard Linke
wrote:
> Hi,
>
> does CephFS impose an upper limit on the number of files in a directory?
>
>
> We currently have one directory with a large number of subdirectories:
>
> $ ls | wc -l
> 158141
>
> Creating a new subdirectory fails:
>
> $ touch foo
>
On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
wrote:
>
>
> On 09/15/2016 09:22 AM, Nikolay Borisov wrote:
>>
>>
>> On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
>>> On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote:
On 09/14/2016 02:55 PM, Ilya Dryomov wrote:
> On Wed, S
Hello,
I have a Ceph Jewel 10.2.1 cluster and RadosGW. The issue is that when
authenticating against the Swift API I receive different values for the
X-Storage-Url header:
# curl -i -H "X-Auth-User: internal-it:swift" -H "X-Auth-Key: ***"
https://ed-1-vip.cloud/auth/v1.0 | grep X-Storage-Url
X-Storage-Url: htt
Hi,
On 09/15/2016 12:00 PM, John Spray wrote:
On Thu, Sep 15, 2016 at 2:20 PM, Burkhard Linke
wrote:
Hi,
does CephFS impose an upper limit on the number of files in a directory?
We currently have one directory with a large number of subdirectories:
$ ls | wc -l
158141
Creating a new subd
On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
> wrote:
>>
>>
>> On 09/15/2016 09:22 AM, Nikolay Borisov wrote:
>>>
>>>
>>> On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote:
>
>
>
Sorry, I withdraw my question. On one node there was a duplicate RGW
daemon with an old config. That's why I was sometimes receiving wrong
URLs.
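For anyone hitting the same symptom, two quick checks for a stray radosgw instance and for the endpoints the gateway advertises (run on each gateway node; the zonegroup layout is whatever your setup defines):

# List running radosgw processes and the options each one was started with:
$ ps -eo pid,args | grep [r]adosgw

# Check which endpoints the zonegroup advertises:
$ radosgw-admin zonegroup get | grep -A3 endpoints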
2016-09-15 13:23 GMT+03:00 Василий Ангапов :
> Hello,
>
> I have a Ceph Jewel 10.2.1 cluster and RadosGW. The issue is that when
> authenticating against the Swift API
Hello cephers,
being in a degraded cluster state with 6/162 OSDs down (Hammer
0.94.7, 162 OSDs, 27 "fat" nodes, 1000s of clients), as the ceph
cluster log below indicates:
2016-09-12 06:26:08.443152 mon.0 62.217.119.14:6789/0 217309 : cluster
[INF] pgmap v106027148: 28672 pgs: 2 down+remapped+
> On 15 September 2016 at 13:27, Kostis Fardelas wrote:
>
>
> Hello cephers,
> being in a degraded cluster state with 6/162 OSDs down (Hammer
> 0.94.7, 162 OSDs, 27 "fat" nodes, 1000s of clients), as the ceph
> cluster log below indicates:
>
> 2016-09-12 06:26:08.443152 mon.0 62.217.119.1
On Thu, Sep 15, 2016 at 12:54 PM, Nikolay Borisov wrote:
>
>
> On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
>> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
>> wrote:
>>>
>>>
>>> On 09/15/2016 09:22 AM, Nikolay Borisov wrote:
On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
> On
On 09/15/2016 03:15 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 12:54 PM, Nikolay Borisov wrote:
>>
>>
>> On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
>>> wrote:
On 09/15/2016 09:22 AM, Nikolay Borisov wrote:
>
>
Nick/Dennis,
Thanks for the info. I did fiddle with a location script that would determine
whether the drive is a spinning disk or an SSD, and put it in the appropriate
bucket. I might move back to that now that I understand Ceph better.
Thanks for the link to the sample script as well.
Se
On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
>
> [snipped]
>
> cat /sys/bus/rbd/devices/47/client_id
> client157729
> cat /sys/bus/rbd/devices/1/client_id
> client157729
>
> Client client157729 is alxc13, based on correlation by the ip address
> shown by the rados -p ... command. So it'
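For reference, the correlation described above can also be done from the cluster side by listing the watchers on the image's header object; the pool and image names are placeholders, and the rbd_header object naming applies to format 2 images:

# Find the image id used in the header/data object names:
$ rbd -p <pool> info <image> | grep block_name_prefix

# List watchers on the header object; the output includes the client id and IP,
# which can be matched against /sys/bus/rbd/devices/*/client_id on the hosts:
$ rados -p <pool> listwatchers rbd_header.<image-id>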
Our ceph cluster (from Emperor through Hammer) has gone through many
recoveries during host outages/network failures, and threads never
exceeded 10K. The thread leaks we experienced with down+peering PGs
(lasting for several hours) were something we saw for the first
time. I don't see the reason to b
Dear Ceph users,
Any suggestions on this, please?
Regards
Gaurav Goyal
On Wed, Sep 14, 2016 at 2:50 PM, Gaurav Goyal
wrote:
> Dear Ceph Users,
>
> I need your help to sort out the following issue with my cinder volume.
>
> I have created Ceph as the backend for Cinder. Since I was using SAN storage
> fo
All,
I have been making some progress on troubleshooting this.
I am seeing that when rgw is configured for LDAP, I am getting an error in my
slapd log:
Sep 14 06:56:21 mgmt1 slapd[23696]: conn=1762 op=0 RESULT tag=97 err=2
text=historical protocol version requested, use LDAPv3 instead
Am I corr
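Two checks that may help here (suggestions, not a confirmed diagnosis): verify that a plain LDAPv3 simple bind with the gateway's service account works, and note that slapd only accepts v2 binds when explicitly allowed. The URI and bind DN below are placeholders:

# Test an LDAPv3 simple bind (OpenLDAP client tools default to protocol v3):
$ ldapwhoami -H ldap://ldap.example.com -x -D "uid=rgw,ou=services,dc=example,dc=com" -W

# If v2 binds really are required, slapd must allow them, e.g. in slapd.conf:
#   allow bind_v2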
I have a replicated cache pool and a metadata pool which reside on SSD drives,
with a size of 2, backed by an erasure-coded data pool.
The cephfs filesystem was in a healthy state. I pulled an SSD drive to perform
an exercise in OSD failure.
The cluster recognized the SSD failure and replicated ba
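For the archive, a typical Jewel-era manual sequence for removing a dead OSD and bringing in its replacement looks roughly like the following; the OSD id and device name are placeholders, and ceph-deploy or your own provisioning may take the place of the ceph-disk steps:

# Remove the failed OSD from the cluster:
$ ceph osd out osd.<id>
$ systemctl stop ceph-osd@<id>        # on the host holding the failed drive
$ ceph osd crush remove osd.<id>
$ ceph auth del osd.<id>
$ ceph osd rm osd.<id>

# After swapping the physical drive, prepare and activate the new OSD:
$ ceph-disk prepare /dev/sdX
$ ceph-disk activate /dev/sdX1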
> On 14 Sep 2016, at 23:07, Gregory Farnum wrote:
>
> On Wed, Sep 14, 2016 at 7:19 AM, Dan Van Der Ster
> wrote:
>> Indeed, seems to be trimmed by osd_target_transaction_size (default 30) per
>> new osdmap.
>> Thanks a lot for your help!
>
> IIRC we had an entire separate issue before adding
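For reference, the osd_target_transaction_size option mentioned above can be inspected or changed on a running OSD; the OSD id and the new value are placeholders, and raising it trades larger transactions for faster map trimming:

$ ceph daemon osd.<id> config get osd_target_transaction_size
$ ceph tell osd.<id> injectargs '--osd_target_transaction_size 50'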
Thanks a lot for your explanation!
I just increased the 'rasize' option of the kernel module and got
significantly better throughput for sequential reads.
Thanks,
Andreas
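For anyone else tuning this: rasize is set at mount time on the kernel client and is given in bytes. A minimal example; the monitor address, secret file and the 64 MB value are placeholders:

$ mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864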
2016-09-15 0:29 GMT+02:00 Gregory Farnum :
> Oh hrm, I missed the stripe count settings. I'm not sure if that's
> helping you
Hi,
So, maybe someone has an idea of where to go on this.
I have just set up 2 rgw instances in a multisite configuration. They are
working nicely. I have added a couple of test buckets and some files to make
sure it works, is all. The status shows both are caught up. Nobody else is
accessing or using
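For reference, the "caught up" status mentioned above presumably comes from the multisite sync report, e.g.:

$ radosgw-admin sync status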
Hi, just to let the admins know that when searching for terms (I searched
for "erasure coding") in the mailing list archive at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/
this error is returned in the browser at
http://lists.ceph.com/mmsearch.cgi/ceph-users-ceph.com:
ht://Dig error
htsearch detected
Can someone point me to a thread or site that uses ceph+erasure coding to
serve block storage for Virtual Machines running with Openstack+KVM?
All the references I found use erasure coding for cold data, or do *not*
cover VM block access.
thanks,
--
-
Erick
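Not a full answer, but for the archive: on Jewel, RBD cannot write directly to an erasure-coded pool, so the usual arrangement is a replicated cache tier in front of the EC pool. A rough sketch; the pool names, PG counts and EC profile are placeholders, and the cache tier still needs hit_set and target sizing configured:

$ ceph osd pool create rbd-ec 512 512 erasure default
$ ceph osd pool create rbd-cache 128 128
$ ceph osd tier add rbd-ec rbd-cache
$ ceph osd tier cache-mode rbd-cache writeback
$ ceph osd tier set-overlay rbd-ec rbd-cache

# RBD images are then created against the EC pool and I/O flows through the cache tier:
$ rbd create --pool rbd-ec --size 10240 vm-test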
Does anyone know about this problem? Please help me take a look at it.
On 13 September 2016 at 5:58 PM, "Brian Chang-Chien" wrote:
> Hi ,naga.b
>
> I use Ceph jewel 10.2.2
> my ceph.conf as follow
> [global]
> fsid = d056c174-2e3a-4c36-a067-cb774d176ce2
> mon_initial_members = brianceph
> mon_host = 10.62.9.140
> auth_cluster
On 09/16/2016 09:46 AM, Erick Perez - Quadrian Enterprises wrote:
Can someone point me to a thread or site that uses ceph+erasure coding
to serve block storage for Virtual Machines running with Openstack+KVM?
All references that I found are using erasure coding for cold data or
*not* VM block acc
On Thu, Sep 15, 2016 at 4:53 PM, lewis.geo...@innoscale.net
wrote:
> Hi,
> So, maybe someone has an idea of where to go on this.
>
> I have just set up 2 rgw instances in a multisite configuration. They are
> working nicely. I have added a couple of test buckets and some files to
> make sure it works, is all.