> What I want is to mount a Ceph volume to a VM instance, but I am getting
> fatal errors like the one below. I am hoping for your help with this.
> [root@rdo /(keystone_admin)]# uname -a
> Linux rdo 3.10.18-1.el6.elrepo.x86_64 #1 SMP Mon Nov 4 19:12:54 EST 2013
> x86_64 x86_64 x86_64 GNU/Linux
Since it is Redhat 6 ba
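Since this is an RDO setup, one common path is to attach a Cinder volume backed by the Ceph RBD driver; a minimal sketch, assuming Cinder is already configured with the rbd backend (the volume name, size and instance id below are placeholders):

    # create a 10 GB Cinder volume (stored in RBD if Cinder uses the rbd driver)
    cinder create --display-name ceph-test 10
    # attach it to the instance; "auto" lets nova pick the next free device
    nova volume-attach <INSTANCE_ID> <VOLUME_ID> auto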
Hi,
we are currently using the patched fastcgi version
(2.4.7-0910042141-6-gd4fffda). Updating to a more recent version is currently
blocked by http://tracker.ceph.com/issues/6453
Is there any documentation for running radosgw with nginx? I can only find some
mailing list posts with config snippets.
Hi,
Our conf:
server {
    listen 80;
    listen [::]:80;
    server_name radosgw-prod;
    client_max_body_size 1000m;
    error_log /var/log/nginx/radosgw-prod-error.log;
    access_log off;
    location / {
        fastcgi_pass_header Authorization;
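For comparison, a fuller location block for a typical radosgw FastCGI setup might look roughly like the following; the socket path and the parameter list are assumptions for illustration and must match the rgw socket configured in ceph.conf:

    location / {
        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;
        include fastcgi_params;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_LENGTH $content_length;
        # assumed socket path; must match "rgw socket path" in ceph.conf
        fastcgi_pass unix:/var/run/ceph/radosgw.sock;
    }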
> On 4 Dec 2013, at 04:37, "Yan, Zheng" wrote:
>
> On Tue, Dec 3, 2013 at 4:00 PM, Miguel Afonso Oliveira
> wrote:
>>
>>> If your issue is caused by the bug I presume, you need to use the
>>> newest client (0.72 ceph-fuse or 3.12 kernel)
>>>
>>> Regards
>>> Yan, Zheng
>>
>>
>> Hi,
>>
>> W
On Wed, Dec 4, 2013 at 8:11 PM, Miguel Oliveira
wrote:
>
>
>> On 4 Dec 2013, at 04:37, "Yan, Zheng" wrote:
>>
>> On Tue, Dec 3, 2013 at 4:00 PM, Miguel Afonso Oliveira
>> wrote:
>>>
If your issue is caused by the bug I presume, you need to use the
newest client (0.72 ceph-fuse or 3.12 kernel)
Thanks a lot, that helped.
Should have cleaned leftovers before :)
Ugis
2013/12/3 Gregory Farnum :
> CRUSH is failing to map all the PGs to the right number of OSDs.
> You've got a completely empty host which has ~1/3 of the cluster's
> total weight, and that is probably why — remove it!
> -Greg
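If that is indeed the cause, the empty host bucket can simply be dropped from the CRUSH map; a minimal sketch, assuming the empty host shows up in "ceph osd tree" as node3 (a placeholder name):

    # remove the empty host bucket so it stops attracting ~1/3 of the weight
    ceph osd crush remove node3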
Hello Ceph,
I'm trying to find documentation on the feature that allows an admin to
set a quota on a bucket. I intend to use the API in a middleware tool
communicating with a billing solution.
Some information which I managed to find:
Release notes of 0.72 mention "rgw: bucket quotas" as one
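A sketch of the radosgw-admin side of bucket quotas in 0.72, assuming a user id of "customer1" and illustrative limits (both are placeholders, not taken from the message above):

    radosgw-admin quota set --quota-scope=bucket --uid=customer1 \
        --max-objects=10000 --max-size-kb=1048576
    radosgw-admin quota enable --quota-scope=bucket --uid=customer1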
Hi,
I have a ceph cluster with 3 nodes on Ubuntu 12.04.3 LTS and ceph
version 0.72.1.
My configuration is the following:
* 3 MON
- XRVCLNOSTK001=10.170.0.110
- XRVCLNOSTK002=10.170.0.111
- XRVOSTKMNG001=10.170.0.112
* 3 OSD
- XRVCLNOSTK001=10.170.0.110
- XRVCLNOSTK002=10.170.0.111
- X
Hi James,
Can you generate an OSD log with 'debug filestore = 20' for an idle period?
Thanks!
sage
James Harper wrote:
>I have been testing osd on btrfs, and the first thing I notice is that
>there is constant write activity when idle.
>
>The write activity hovers between 5Mbytes/second and 30M
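One way to apply that debug level to a running OSD without restarting it (osd.0 is a placeholder; pick one of the btrfs OSDs):

    ceph tell osd.0 injectargs '--debug-filestore 20'
    # capture /var/log/ceph/ceph-osd.0.log for a few idle minutes, then lower it again
    ceph tell osd.0 injectargs '--debug-filestore 1'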
Hi,
Can you attach the output from 'ceph osd dump'?
Thanks!
sage
"Alexis GÜNST HORN" wrote:
>Hello,
>
>I can't understand an error I have been getting:
>
>HEALTH_WARN pool .rgw.buckets has too few pgs.
>Do you have any ideas?
>
>Some info :
>
>[root@admin ~]# ceph --version
>ceph version 0.72.1
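If the warning really is just a low PG count on that pool, the usual remedy is to raise pg_num and pgp_num; a sketch with an illustrative value (128 is only an example, and pg_num cannot be decreased again later):

    ceph osd pool set .rgw.buckets pg_num 128
    ceph osd pool set .rgw.buckets pgp_num 128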
Gandalf Corvotempesta writes:
> what do you think about using the same SSD as both journal and root partition?
> For example:
> 1x 128GB SSD
> [...]
> All logs are stored remotely via rsyslogd.
> Is this good? AFAIK, in this configuration, ceph will run entirely
> in RAM.
I think this is a fine c
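For the journal half of that layout, the ceph.conf fragment would be something like the following; the partition device is an assumption for illustration:

    [osd.0]
        ; journal on a raw partition of the shared SSD (device name is illustrative)
        osd journal = /dev/sda2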
Hi Sage,
On 04.12.2013 17:13, Sage Weil wrote:
Hi James,
Can you generate an OSD log with 'debug filestore = 20' for an idle period?
I had reported something very similar to Zendesk (#702), with no result.
Stefan
Thanks!
sage
James Harper wrote:
I have been testing osd on btrfs,
On Wed, 4 Dec 2013, Stefan Priebe wrote:
> Hi Sage,
>
> On 04.12.2013 17:13, Sage Weil wrote:
> > Hi James,
> >
> > Can you generate an OSD log with 'debug filestore = 20' for an idle period?
>
> I had reported something very similar to Zendesk (#702), with no result.
Were you also seeing the
It looks like the same effect can be achieved by downloading the src.rpm
and building it with "rpmbuild -ba --with rhev_features --without
guest_agent qemu-kvm.spec"
This can then be used as a drop-in replacement for qemu-kvm and will not
break things that look for qemu-kvm specifically.
On 12/02/
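A rough end-to-end sketch of that rebuild, assuming the default ~/rpmbuild layout and that the qemu-kvm source RPM has already been downloaded:

    rpm -ivh qemu-kvm-*.src.rpm
    cd ~/rpmbuild/SPECS
    rpmbuild -ba --with rhev_features --without guest_agent qemu-kvm.spec
    # the resulting binary RPMs end up under ~/rpmbuild/RPMS/x86_64/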
On 04.12.2013 20:25, Sage Weil wrote:
On Wed, 4 Dec 2013, Stefan Priebe wrote:
Hi Sage,
On 04.12.2013 17:13, Sage Weil wrote:
Hi James,
Can you generate an OSD log with 'debug filestore = 20' for an idle period?
I had reported something very similar to Zendesk (#702), with no result.
Can you try
ceph tell osd.71 injectargs '--filestore-max-sync-interval 20'
(or some other btrfs osd) and see if it changes?
sage
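If the longer sync interval does help, the setting can be made persistent in ceph.conf; a minimal fragment:

    [osd]
        filestore max sync interval = 20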
On Wed, 4 Dec 2013, James Harper wrote:
> > Hi James,
> >
> > Can you generate an OSD log with 'debug filestore = 20' for an idle period?
> >
>
> The best I ca
(reposted to the list without the attachment as the list blocks it. If anyone
else wants it I can send it direct)
> Hi James,
>
> Can you generate an OSD log with 'debug filestore = 20' for an idle period?
>
The best I can do right now is 'relatively idle'. Ceph -w says that the cluster
is av
Hi all,
I was going through the documentation
(http://ceph.com/docs/master/radosgw/federated-config/) with a (future)
replicated Swift object store between 2 geographically separated
datacenters (and 2 different Ceph clusters) in mind, and a few things caught my
attention. Considering I'm pl
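For reference, the federated setup in those docs revolves around region and zone definitions fed to radosgw-admin; a heavily abridged sketch, where the JSON file names and the zone name are placeholders:

    radosgw-admin region set --infile us.json
    radosgw-admin zone set --rgw-zone=us-east --infile us-east.json
    radosgw-admin regionmap update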
>
> Can you try
>
> ceph tell osd.71 injectargs '--filestore-max-sync-interval 20'
>
> (or some other btrfs osd) and see if it changes?
>
It's now 9:15am here so the network is getting less idle and any measurements
I'm taking will be more noisy, but anyway...
iostat -x 10 now alternates be
Hello, I'm testing Ceph as storage for KVM virtual machine images.
My cluster has 3 MONs and 3 data nodes; every data node has 8x 2TB SATA HDDs and
1 SSD for the journal.
When I shut down one data node to simulate a server fault, the cluster begins to
recover, and during recovery
I can see many blocked
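The usual way to soften that impact is to throttle recovery and backfill; a sketch with illustrative values, applied at runtime to all OSDs:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'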
>> Is having two cluster networks like this a supported configuration? Every
>> osd and mon can reach every other so I think it should be.
>
> Maybe. If your back end network is a supernet and each cluster network is a
> subnet of that supernet. For example:
>
> Ceph.conf cluster network (supernet)
> > Ceph.conf cluster network (supernet): 10.0.0.0/8
> >
> > Cluster network #1: 10.1.1.0/24
> > Cluster network #2: 10.1.2.0/24
> >
> > With that configuration OSD address autodetection *should* just work.
>
> It should work but thinking more about it the OSDs will likely be
> assigned IPs on a si
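In ceph.conf terms the supernet idea boils down to something like the following; only the cluster-side addresses come from the example above, the public network value is an assumption:

    [global]
        ; supernet covering both per-site cluster subnets (10.1.1.0/24, 10.1.2.0/24)
        cluster network = 10.0.0.0/8
        ; assumed single flat public network
        public network = 192.168.0.0/16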
Hello,
Here it is:
http://pastie.org/private/u5yut673fv6csobuvain9g
Thanks a lot for your help
Best Regards - Cordialement
Alexis GÜNST HORN
>
> Can you generate an OSD log with 'debug filestore = 20' for an idle period?
>
Any more tests you would like me to run? I'm going to recreate that osd as xfs
soon.
James