Hi, everyone!
I'm pleased to announce that we have begun preparations for the first
Ceph Developer Summit. This summit is where planning for the upcoming
Dumpling release will happen, and attendance is open to all. It will be
virtual (IRC, etherpad, video conference), so you won't even have to
No, I'm not using RDMA in this configuration, since this will eventually get
deployed to production with 10G Ethernet (yes, RDMA is faster). I would
prefer Ceph because it has a storage driver built into OpenNebula, which my
company is using, and because, as you mentioned, it works with individual drives.
I'm not sure what t
Done. I also added some comments to the OSD configuration section noting
that OSD names are numeric and incremental, e.g., 0, 1, 2, 3: osd.0, osd.1,
etc.
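For example (illustrative output; the id you get back depends on your cluster):

    $ ceph osd create
    4

Here 4 would be the new {osd-number}, and the OSD is then referred to as osd.4
in ceph.conf and in subsequent commands.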
On Thu, Apr 11, 2013 at 12:46 PM, Joe Ryner wrote:
> Probably should mention that the "ceph osd create" command will output
> what the new {osd
Probably should mention that the "ceph osd create" command will output what the
new {osd-number} should be.
Thanks for making the change so fast.
Joe
----- Original Message -----
From: "John Wilkins"
To: "Joe Ryner"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, April 11, 2013 2:37:33 PM
Subje
Thanks Joe! I've made the change. You should see it up on the site shortly.
On Thu, Apr 11, 2013 at 10:00 AM, Joe Ryner wrote:
> Hi,
>
> I have found some issues in:
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds
>
> In the adding section:
> Step 6 should be run before 1-5 as it
Hi,
I have found some issues in:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds
In the adding section:
Step 6 should be run before steps 1-5, as it outputs the OSD number when it
exits. I had a really hard time figuring this out. I am currently running
0.56.4 on RHEL 6. The first 5 st
That's certainly not great. Have you lost any data or removed anything
from the cluster? It looks like perhaps your MDS log lost an object,
and maybe got one shortened as well.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Apr 8, 2013 at 11:55 PM, x yasha wrote:
> I'
It's more or less a Ceph bug; the patch fixing this is in the 3.9-rc's
(although it should backport trivially if you're willing to build a
kernel: 92a49fb0f79f3300e6e50ddf56238e70678e4202). You can look at
http://tracker.ceph.com/issues/3793 if you want details.
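If you do go that route, the backport is roughly (sketch only; adjust for
whatever tree and config you actually build from):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    git checkout -b cephfs-fix v3.8
    git cherry-pick 92a49fb0f79f3300e6e50ddf56238e70678e4202
    # then configure, build, and install the kernel as you normally would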
-Greg
Software Engineer #42 @ http:/
Yehuda, Caleb,
Thanks for your quick replies.
Setting
rgw print continue = false
helped indeed; the problem is gone. We apparently misunderstood the line
saying
"if you do NOT use a modified fastcgi …" in the radosgw manual install
documentation.
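For the record, we set it in the radosgw client section of ceph.conf (the
section name below is just our gateway instance name; yours may differ) and
restarted radosgw:

    [client.radosgw.gateway]
        rgw print continue = false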
Thanks again for your help!
Cheers,
Arn
On Thu, Apr 11, 2013 at 5:53 AM, Arne Wiebalck wrote:
> Hi,
>
> We see a reproducible "Internal Server Error" when doing something like
>
> -->
> #!/usr/bin/env python
>
> import boto
> import boto.s3.connection
> access_key = '...'
> secret_key = '...'
>
> conn = boto.connect_s3(
> aws_ac
Hey Arne,
Could you send me your RGW logs and your Apache RGW configuration?
Caleb
--
Developer, inktank
On Apr 11, 2013 8:53 AM, "Arne Wiebalck" wrote:
> Hi,
>
> We see a reproducible "Internal Server Error" when doing something like
>
> -->
> #!/usr/bin/env python
>
> import boto
> import bot
Hi,
We see a reproducible "Internal Server Error" when doing something like
-->
#!/usr/bin/env python
import boto
import boto.s3.connection
access_key = '...'
secret_key = '...'
conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
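    # (the snippet above is cut off here; the rest of this kind of reproducer
    #  looks roughly like the following -- the hostname and the final request
    #  are placeholders, not the exact original)
    host = 'rgw.example.com',
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

# ...followed by whatever simple request you are testing, e.g. listing buckets:
buckets = conn.get_all_buckets()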
With GlusterFS are you using the native RDMA support?
Ceph and Gluster tend to prefer pretty different disk setups too. AFAIK
RH still recommends RAID6 behind each brick, while we do better with
individual disks behind each OSD. You might want to watch the OSD admin
socket and see if operation
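(To query the admin socket, something like the following works; the socket
path depends on your cluster and OSD name, and `help` lists what your version
supports:)

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump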
Hi,
I ran the test: I have 3 nodes on CentOS 6.3 for mon/mds and 8 nodes under
Debian Squeeze with a 3.2.0 kernel for the OSDs (each with one 1 TB disk for the
OSD and the journal on the OS disk) and for mounting cephfs (I know it's very,
very bad... but I use Ceph for a read-only filesystem shared on all nodes of
Can anybody repeat this test in their own production cluster?
2013/4/4 Timofey Koolin
> I have centos 6.3 with kernel 3.8.4-1.el6.elrepo.x86_64 from elrepo.org.
> Cephfs mount with kernel module.
>
> [root@localhost t1]# wget
> http://joomlacode.org/gf/download/frsrelease/17965/78413/Joomla_3.0.3-Stabl