My apache conf is as follows
cat /etc/apache2/httpd.conf
ServerName radosgw01.swisstxt.ch
cat /etc/apache2/sites-enabled/000_radosgw
ServerName *.radosgw01.swisstxt.ch
# ServerAdmin {email.address}
ServerAdmin serviced...@swisstxt.ch
DocumentRoot /var/www
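For comparison, the radosgw install docs of that era describe a vhost along the lines of the sketch below. This is not taken from the poster's setup: the rewrite rule and socket path are the commonly documented ones, and the hostname is reused only as an example. Note that a wildcard normally belongs in ServerAlias rather than ServerName.

    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock

    <VirtualHost *:80>
        ServerName radosgw01.swisstxt.ch
        ServerAlias *.radosgw01.swisstxt.ch
        DocumentRoot /var/www
        RewriteEngine On
        RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        AllowEncodedSlashes On
        ServerSignature Off
    </VirtualHost>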
Hi.
With this patch, everything is OK.
Thanks for help!
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 21, 2013 7:16 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk
On Wed, Aug
Hey Samuel,
On wo, 2013-08-21 at 20:27 -0700, Samuel Just wrote:
> I think the rbd cache one you'd need to run for a few minutes to get
> meaningful results. It should stabilize somewhere around the actual
> throughput of your hardware.
Ok, I now also ran this test on Cuttlefish as well as Dumpl
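For anyone reproducing this, the cache under test is enabled client-side in ceph.conf, roughly as sketched below; these are illustrative lines, not the exact settings used in this thread:

    [client]
    rbd cache = true
    rbd cache size = 33554432        # 32 MB, the approximate default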
Hi Josh,
thank you for your answer, but I was on Bobtail, so no listwatchers command :)
I scheduled a reboot of the affected compute nodes and everything went fine afterwards. I also updated
Ceph to the latest stable release, though.
From: Josh Durgin [josh.dur...@inktank.com]
Sent:
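For anyone on Cuttlefish or later hitting the same thing, a rough sketch of the listwatchers check; pool and image names are placeholders:

    rbd info myimage                               # note block_name_prefix, e.g. rbd_data.1234
    rados -p rbd listwatchers myimage.rbd          # header object for a format 1 image
    rados -p rbd listwatchers rbd_header.1234      # format 2, using the id from block_name_prefix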
hello!
Today our radosgw crashed while running multiple deletions via the S3 API.
Is this a known bug?
POST
WSTtobXBlBrm2r78B67LtQ==
Thu, 22 Aug 2013 11:38:34 GMT
/inna-a/?delete
-11> 2013-08-22 13:39:26.650499 7f36347d8700 2 req 95:0.000555:s3:POST
/inna-a/:multi_object_delete:reading permissio
Hi,
I'm thinking about sharding S3 buckets in our Ceph cluster: creating one
bucket per XX (256 buckets) or even one bucket per XXX (4096 buckets),
where XX/XXX is a prefix taken from the MD5 of the object URL.
Could this be a problem (performance, or some limits)?
--
Regards
Dominik
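A sketch of how such a prefix might be derived on the client side; the bucket base name and key are made up:

    key="images/photo-123.jpg"
    prefix=$(echo -n "$key" | md5sum | cut -c1-2)   # 2 hex chars -> 256 buckets; cut -c1-3 -> 4096
    bucket="mybucket-$prefix"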
On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov
wrote:
> Hi.
> With this patch - is all ok.
> Thanks for help!
>
Thanks for confirming this. I have opened a ticket
(http://tracker.ceph.com/issues/6085) and will work on this patch to
get it merged.
> -Original Message-
> From: Alfred
Hello!
In our environment we need a shared file system for
/var/lib/nova/instances and the Glance image cache (_base).
Is anyone using CephFS for this purpose?
When folks say CephFS is not production ready, is the primary concern
stability/data-integrity or performance?
Is NFS (with NFS-Ganesha) i
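If you do experiment with CephFS for this, a minimal kernel-client mount sketch, with the monitor address and secret file as placeholders:

    mount -t ceph 10.0.0.1:6789:/ /var/lib/nova/instances -o name=admin,secretfile=/etc/ceph/admin.secret
    ceph-fuse -m 10.0.0.1:6789 /var/lib/nova/instances    # userspace alternative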
I'm sorry for the spam :-(
--
Dominik
2013/8/22 Dominik Mostowiec :
> Hi,
> I think about sharding s3 buckets in CEPH cluster, create
> bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets)
> where XXX is sign from object md5 url.
> Could this be the problem? (performance, or some lim
Hi,
RBD has had support for sparse allocation for some time now. However, when
using an RBD volume as a virtual disk for a virtual machine, the RBD volume
will inevitably grow until it reaches its actual nominal size, even if the
filesystem in the guest machine never reaches full utilization.
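On releases that have rbd diff, a quick hedged way to see how much of an image is actually allocated: it lists the allocated extents, so summing the lengths approximates real usage (the image name is a placeholder):

    rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'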
On Thu, Aug 22, 2013 at 7:11 AM, Dominik Mostowiec
wrote:
> Hi,
> I think about sharding s3 buckets in CEPH cluster, create
> bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets)
> where XXX is sign from object md5 url.
> Could this be the problem? (performance, or some limits)
>
The
On Thu, Aug 22, 2013 at 5:18 AM, Pawel Stefanski wrote:
> hello!
>
> Today our radosgw crashed while running multiple deletions via s3 api.
>
> Is this known bug ?
>
> POST
> WSTtobXBlBrm2r78B67LtQ==
>
> Thu, 22 Aug 2013 11:38:34 GMT
> /inna-a/?delete
>-11> 2013-08-22 13:39:26.650499 7f36347d8
There is TRIM/discard support, and I use it with some success. There are some
details here: http://ceph.com/docs/master/rbd/qemu-rbd/. The one caveat I have
is that I've sometimes been able to crash an OSD by doing fstrim inside a guest.
On Aug 22, 2013, at 10:24 AM, Guido Winkelmann
wrote:
> H
Thanks for your answer.
--
Regards
Dominik
2013/8/22 Yehuda Sadeh :
> On Thu, Aug 22, 2013 at 7:11 AM, Dominik Mostowiec
> wrote:
>> Hi,
>> I think about sharding s3 buckets in CEPH cluster, create
>> bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets)
>> where XXX is sign from ob
On Thu, Aug 22, 2013 at 12:36 AM, Fuchs, Andreas (SwissTXT)
wrote:
> My apache conf is as follows
>
> cat /etc/apache2/httpd.conf
> ServerName radosgw01.swisstxt.ch
>
> cat /etc/apache2/sites-enabled/000_radosgw
>
>
> ServerName *.radosgw01.swisstxt.ch
> # ServerAdmin {email.addre
On Thu, 22 Aug 2013, Mihály Árva-Tóth wrote:
> Hello,
>
> Is there any way for one radosgw user to have more than one access/secret key?
Yes, you can have multiple keys for each user:
radosgw-admin key create ...
sage
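Something along these lines, with the uid as a placeholder:

    radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret

Running it again with --gen-access-key/--gen-secret should add another S3 key pair to the same user.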
And here is my ceph.log
.
.
.
[ceph@cephadmin my-clusters]$ less ceph.log
2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Creating new cluster named ceph
2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Resolving host cephs1
2013-08-22 09:01:27,382 ceph_deploy.new DEBUG Monitor cephs1 at 10.2.9.223
2013
Hi,
I'm thinking about sharding S3 buckets in our Ceph cluster: creating one bucket per XX (256
buckets) or even one bucket per XXX (4096 buckets), where XX/XXX is a prefix taken from the
MD5 of the object URL.
Could this be a problem (performance, or some limits)?
--
Regards
Dominik
Hi!
I am trying Ceph on RHEL 6.4.
My Ceph version is Cuttlefish.
I followed the intro and ran ceph-deploy new .. and then ceph-deploy install ..
--stable cuttlefish
No error appeared up to that point.
Then I typed ceph-deploy mon create ..
and the error below appeared:
.
.
.
[ceph@cephadmi
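Spelled out, the sequence being described is roughly the following, reusing the cephs1 host that appears in the ceph.log elsewhere in this thread; a sketch, not the exact commands run:

    ceph-deploy new cephs1
    ceph-deploy install --stable cuttlefish cephs1
    ceph-deploy mon create cephs1
    ceph-deploy gatherkeys cephs1    # usually follows mon create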
Our Ceph cluster is running fine on CentOS 6.4.
Now I would like to export a block device to a client using RBD.
My questions are:
1. I tried to modprobe rbd on one of the monitor hosts, but I got the error
FATAL: Module rbd not found
and I could not find the rbd module. How can I do this?
2. Once the RBD is
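The stock RHEL/CentOS 6.4 kernel does not ship the rbd module, so modprobe cannot find it; you either need a newer kernel (e.g. from ELRepo) or a userspace client such as qemu/librbd. With a kernel that has the module, the usual sequence is roughly the following, with pool and image names as placeholders:

    modprobe rbd
    rbd create myimage --size 10240 --pool rbd
    rbd map rbd/myimage --id admin
    # the mapped device appears as /dev/rbd0
    mkfs.ext4 /dev/rbd0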
We should perhaps hack the old (cuttlefish and earlier) flushing behavior
into the new code so that we can confirm that it is really the writeback
that is causing the problem and not something else...
sage
On Thu, 22 Aug 2013, Oliver Daudey wrote:
> Hey Samuel,
>
> On wo, 2013-08-21 at 20:27
For what it's worth, I was still seeing some small sequential write
degradation with kernel RBD with dumpling, though random writes were not
consistently slower in the testing I did. There was also some variation
in performance between 0.61.2 and 0.61.7 likely due to the workaround we
had to i
On Thursday, 22 August 2013, at 10:32:30, Mike Lowe wrote:
> There is TRIM/discard support and I use it with some success. There are some
> details here http://ceph.com/docs/master/rbd/qemu-rbd/ The one caveat I
> have is that I've sometimes been able to crash an osd by doing fstrim
> inside a gu
Jumping in pretty late on this thread, but I can confirm much higher CPU
load on ceph-osd using 0.67.1 compared to 0.61.7 under a write-heavy RBD
workload. Under my workload, it seems like it might be 2x-5x higher CPU
load per process.
Thanks,
Mike Dawson
On 8/22/2013 4:41 AM, Oliver Daudey
On Thursday, August 22, 2013, Amit Vijairania wrote:
> Hello!
>
> We, in our environment, need a shared file system for
> /var/lib/nova/instances and Glance image cache (_base)..
>
> Is anyone using CephFS for this purpose?
> When folks say CephFS is not production ready, is the primary concern
>
> I see yet another caveat: According to that documentation, it only works with
> the IDE driver, not with virtio.
>
> Guido
I've just been looking into this but have not yet tested. It looks like
discard is supported in the newer virtio-scsi devices but not virtio-blk.
This Sheepdog pag
I use the virtio-scsi driver.
On Aug 22, 2013, at 12:05 PM, David Blundell
wrote:
>> I see yet another caveat: According to that documentation, it only works with
>> the IDE driver, not with virtio.
>>
>>Guido
>
> I've just been looking into this but have not yet tested. It looks like
>
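A hedged sketch of wiring this up on the qemu command line with a reasonably recent qemu (1.5+); the pool/image and ids are placeholders, and monitor/auth options are elided:

    qemu-system-x86_64 ... \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=rbd:rbd/myimage,if=none,id=drive0,format=raw,cache=writeback,discard=unmap \
      -device scsi-hd,bus=scsi0.0,drive=drive0

Then fstrim -v / inside the guest should release unused space, bearing in mind the OSD-crash caveat mentioned earlier in the thread.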
On Wed, Aug 21, 2013 at 10:05 PM, SOLO wrote:
> Hi!
>
> I am trying ceph on RHEL 6.4
> My ceph version is cuttlefish
> I followed the intro and ceph-deploy new .. ceph-deploy instal ..
> --stable cuttlefish
> It didn't appear an error until here.
> And then I typed ceph-deploy mon create
Same problem here. Adding the public network parameter to all the ceph.conf files got me one
step further. However, ceph-deploy tells me the mons are created, but they won't show up in the
ceph -w output.
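For reference, the parameter in question is just a subnet declaration in the [global] section of ceph.conf; the subnet below is a placeholder:

    [global]
    public network = 10.2.9.0/24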
On 22.08.2013 at 18:43, Alfredo Deza wrote:
> On Wed, Aug 21, 2013 at 10:05 PM, SOLO wro
I have been benchmarking our Ceph installation for the last week or so, and
I've come across an issue that I'm having some difficulty with.
Ceph bench reports reasonable write throughput at the OSD level:
ceph tell osd.0 bench
{ "bytes_written": 1073741824,
"blocksize": 4194304,
"bytes_per_se
Hi,
I'm trying to create a snapshot from a KVM VM:
# virsh snapshot-create one-5
error: unsupported configuration: internal checkpoints require at least
one disk to be selected for snapshot
RBD should support such snapshots, according to the wiki:
http://ceph.com/w/index.php?title=QEMU-RBD#Sn
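If libvirt refuses the internal checkpoint, a hedged workaround is to snapshot the image directly on the Ceph side; the pool/image name below is a made-up example, and the guest should be quiesced first if you need filesystem consistency:

    rbd snap create rbd/one-5-disk-0@snap1
    rbd snap ls rbd/one-5-disk-0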
Hey Greg,
I encountered a similar problem and we're just in the process of
tracking it down here on the list. Try downgrading your OSD-binaries to
0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD,
you're probably experiencing the same problem I have with Dumpling.
PS: Only dow
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey wrote:
> Hey Greg,
>
> I encountered a similar problem and we're just in the process of
> tracking it down here on the list. Try downgrading your OSD-binaries to
> 0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD,
> you're probably
Hey Greg,
Thanks for the tip! I was assuming a clean shutdown of the OSD would
flush the journal for you and have the OSD try to exit with its
data store in a clean state? Otherwise, I would first have to stop
updates to that particular OSD, then flush the journal, then stop it?
Regards,
On Thu, Aug 22, 2013 at 2:47 PM, Oliver Daudey wrote:
> Hey Greg,
>
> Thanks for the tip! I was assuming a clean shutdown of the OSD should
> flush the journal for you and have the OSD try to exit with it's
> data-store in a clean state? Otherwise, I would first have to stop
> updates a that par
Hey Greg,
I didn't know that option, but I'm always careful to downgrade and
upgrade the OSDs one by one and wait for the cluster to report healthy
again before proceeding to the next, so, as you said, chances of losing
data should have been minimal. Will flush the journals too next time.
Thanks!
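For the record, a sketch of the stop/flush sequence being discussed; the OSD id is a placeholder:

    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal
    # swap binaries, then
    service ceph start osd.12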
I should have also said that we experienced similar performance on
Cuttlefish. I have run identical benchmarks on both.
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey wrote:
> Hey Greg,
>
> I encountered a similar problem and we're just in the process of
> tracking it down here on the list. Tr
On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum wrote:
> You don't appear to have accounted for the 2x replication (where all
> writes go to two OSDs) in these calculations. I assume your pools have
>
Ah. Right. So I should then be looking at:
# OSDs * Throughput per disk / 2 / repl factor ?
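As a worked example with made-up numbers, reading the leading /2 as the journal double-write when journals share the data disks: 24 OSDs at roughly 100 MB/s each, with 2x replication, gives about 24 * 100 / 2 / 2 = 600 MB/s of aggregate client write throughput, before any other overhead.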
Hi,
It was mentioned on the devel mailing list that for a two-network setup, if the
cluster network fails, the cluster behaves pretty badly. Ref:
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail
May I know if this problem still exists in Cuttlefish or Dum
On Fri, 23 Aug 2013, Keith Phua wrote:
> Hi,
>
> It was mentioned in the devel mailing list that for 2 networks setup, if
> the cluster network failed, the cluster behave pretty badly. Ref:
> http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail
>
> May I
Thank you - It works now as expected.
I've removed the MDS. As soon as the 2nd osd machine came up, it fixed
the other errors!?
On 19.08.2013 18:28, Gregory Farnum wrote:
Have you ever used the FS? It's missing an object which we're
intermittently seeing failures to create (on initial setup) w
My radosgw is up now.
There were two problems in my config:
1.) I had missed copying the "FastCgiExternalServer /var/www/s3gw.fcgi -socket
/tmp/radosgw.sock" entry from the instructions into my Apache config.
2.) I made a mistake in the ceph conf; I had entered:
[client.radosgw.gateway]
h
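For anyone hitting the same thing, a hedged sketch of what a working section of that era typically looks like; the hostname, keyring and paths below are placeholders, not the poster's actual values:

    [client.radosgw.gateway]
    host = radosgw01
    keyring = /etc/ceph/keyring.radosgw.gateway
    rgw socket path = /tmp/radosgw.sock
    log file = /var/log/ceph/radosgw.log

The rgw socket path has to match the -socket argument passed to FastCgiExternalServer.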