I've been testing Ceph for a while with a 4-node cluster (1 mon, 1 mds, and 2
osds), each node running Ceph 0.56.2.
Today I ran into an MDS crash: on the MDS host, the ceph-mds process was
terminated by an assert().
My questions are:
1. What is the reason for the MDS crash?
2. How can I recover from it without re-running mkcephfs?
It's reproducible [...]
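For reference, the usual first steps to narrow something like this down look
roughly as follows (the mds id "a" and the log path are just the defaults and
may differ on your setup):
grep -B2 -A20 'FAILED assert' /var/log/ceph/ceph-mds.a.log   # backtrace around the assert
ceph -s          # overall cluster health
ceph mds dump    # current mdsmap: rank and state of each MDS
service ceph restart mds.a   # often enough to bring a wedged MDS back up
The backtrace in the log is what identifies the actual assert, so it is worth
including when reporting the crash.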
On 04/08/2013 05:55 AM, Mark Nelson wrote:
On 04/08/2013 01:09 AM, Matthieu Patou wrote:
On 04/01/2013 11:26 PM, Matthieu Patou wrote:
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
Hi,
I was doing some testing with iozone and found that performance
On 04/08/2013 04:12 PM, Ziemowit Pierzycki wrote:
There is one SSD in each node. IPoIB performance is about 7 Gbps
between each host. CephFS is mounted via the kernel client. The Ceph version
is ceph-0.56.3-1. I have a 1 GB journal on the same drive as the OSD but
on a separate file system split via LVM. [...]
There is one SSD in each node. IPoIB performance is about 7 Gbps between
each host. CephFS is mounted via the kernel client. The Ceph version
is ceph-0.56.3-1. I have a 1 GB journal on the same drive as the OSD but on
a separate file system split via LVM.
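For context, a 1 GB journal living on the same SSD but on a separate
LVM-backed filesystem usually ends up wired into ceph.conf along these lines
(the paths and the osd id below are made-up examples):
[osd.0]
    osd journal = /srv/ceph/osd0-journal/journal   # file on the LVM-backed fs
    osd journal size = 1024                        # in MB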
Here is output of another test with fdatasync:
Hi,
How many drives? Have you tested your IPoIB performance with iperf? Is
this CephFS with the kernel client? What version of Ceph? How are your
journals configured? etc. It's tough to make any recommendations
without knowing more about what you are doing.
Also, please use conv=fdatasync [...]
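For illustration, the checks being asked about here would look something like
this (host name and mount point are placeholders):
iperf -s                      # on one node
iperf -c node1 -t 30          # on another node: raw IPoIB throughput
dd if=/dev/zero of=/mnt/ceph/test.out bs=512k count=1000 conv=fdatasync
conv=fdatasync makes dd flush the data before reporting, so the number
reflects the cluster rather than the client's page cache.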
Hi,
The first test wrote a 500 MB file and clocked in at 1.2 GB/s. The
second test wrote a 5000 MB file at 17 MB/s. The third test read
the file back at ~400 MB/s.
On Mon, Apr 8, 2013 at 2:56 PM, Gregory Farnum wrote:
> More details, please. You ran the same test twice and perform
More details, please. You ran the same test twice and performance went
up from 17.5MB/s to 394MB/s? How many drives in each node, and of what
kind?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki wrote:
> Hi,
>
> I have a 3 n
Hi,
I'm currently testing Ceph with the POSIX fs at work as a clustered filesystem.
So far I've managed to cripple the test environment three or four times by
unintentionally crashing the MDS (1 active, 2 hot-standby). It seems to me the
MDS daemons are pretty sensitive to all kinds of environmental changes, and so [...]
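For what it's worth, a quick way to see which MDS is active and which are
standing by (daemon names below are just examples):
ceph mds stat
# e.g.  e42: 1/1/1 up {0=a=up:active}, 2 up:standby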
Hi,
I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
here is the performance I am seeing:
[root@triton temp]# !dd
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
[root@tri
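As an aside, a 16K MTU means the IPoIB interfaces should be running in
connected mode (datagram mode caps the MTU at roughly 2K-4K); a quick sanity
check, assuming the interface is ib0:
cat /sys/class/net/ib0/mode   # should print "connected"
ip link show ib0              # confirm the MTU actually in effect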
There seems to be an open issue against s3cmd for this:
https://github.com/s3tools/s3cmd/issues/37. I'll try with other tools [...]
On Mon, Apr 8, 2013 at 9:26 PM, Yehuda Sadeh wrote:
> This one fails because copying an object onto itself only works if you're
> replacing its attrs (X_AMZ_METADATA_DIRECTIVE=REPLACE).
>
> O
This one fails because copying an object onto itself only works if you're
replacing its attrs (X_AMZ_METADATA_DIRECTIVE=REPLACE).
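In other words, the copy-onto-itself request has to carry the metadata
directive, roughly (the path is taken from the example below):
PUT /imgiz/data/avatars/492/492923.jpg HTTP/1.1
x-amz-copy-source: /imgiz/data/avatars/492/492923.jpg
x-amz-metadata-directive: REPLACE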
On Mon, Apr 8, 2013 at 10:35 AM, Erdem Agaoglu wrote:
> This is the log, grepped for the relevant thread id. It shows the 400 in the last
> lines but nothing seems odd besides that. [...]
This is the log, grepped for the relevant thread id. It shows the 400 in the
last lines, but nothing else seems odd.
http://pastebin.com/xWCYmnXV
Thanks for your interest.
On Mon, Apr 8, 2013 at 8:21 PM, Yehuda Sadeh wrote:
> Each bucket has a unique prefix which you can get by doing radosgw
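For reference, that per-bucket prefix can be read out with radosgw-admin, e.g.:
radosgw-admin bucket stats --bucket=imgiz
# the "id"/"marker" value (4470.1 in this thread) is the bucket prefix used in the .dir.<id> index object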
Just tried that file:
$ s3cmd mv s3://imgiz/data/avatars/492/492923.jpg \
    s3://imgiz/data/avatars/492/492923.jpg
ERROR: S3 error: 400 (InvalidRequest)
A more verbose run shows that the sign-headers value was
'PUT\n\n\n\nx-amz-copy-source:/imgiz/data/avatars/492/492923.jpg\nx-amz-date:Mon,
08 Apr 2013 [...]
Can you try copying one of these objects to itself? Would that work and/or
change the index entry? Another option would be to try copying all the
objects to a different bucket.
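If it comes to copying everything to a fresh bucket, a rough sketch (the
imgiz-rescue bucket and keys.txt are made-up names; the object list has to
come from outside, e.g. the application's own records, since the bucket
itself can't be listed):
s3cmd mb s3://imgiz-rescue
while read key; do
    s3cmd cp "s3://imgiz/$key" "s3://imgiz-rescue/$key"
done < keys.txt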
On Mon, Apr 8, 2013 at 9:48 AM, Erdem Agaoglu wrote:
> omap header and all other omap attributes was destroyed. I copie
The omap header and all the other omap attributes were destroyed. I copied
another bucket's index over the destroyed one to get a somewhat valid header,
and it seems intact. After a 'check --fix':
# rados -p .rgw.buckets getomapheader .dir.4470.1
header (49 bytes) :
: 03 02 2b 00 00 00 01 00 00 00 01 02 02 18
We'll need to have more info about the current state. Was just the
omap header destroyed, or does it still exist? What does the header
contain now? Are you able to actually access objects in that bucket,
but just fail to list them?
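For reference, the index object's omap can be inspected directly with rados:
rados -p .rgw.buckets listomapkeys .dir.4470.1        # one key per object the index knows about
rados -p .rgw.buckets listomapvals .dir.4470.1 | head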
On Mon, Apr 8, 2013 at 8:34 AM, Erdem Agaoglu wrote:
> Hi again,
Hey Ceph-users,
It looks like Inktank will be showing up in Portland in force next
week! The OpenStack Developer Summit is drawing a total of 11
Inktankers of varying flavors, so if you would like to say hi, have a
chat, or beat us about the head and shoulders, there will be a wide
variety of outlets [...]
Hi again,
I managed to replace the file with some other bucket's index.
'--check-objects --fix' ran, but my hopes were dashed as it didn't actually
read through the files or fix anything. Any suggestions?
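For completeness, the full invocation being referred to would be along the
lines of:
radosgw-admin bucket check --bucket=imgiz --check-objects --fix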
On Thu, Apr 4, 2013 at 5:56 PM, Erdem Agaoglu wrote:
> Hi all,
>
> After a major failure,
Matthew,
I have seen the same behavior on 0.59. I ran through some troubleshooting
with Dan and Joao on March 21st and 22nd, but I haven't looked at it
since then.
If you look at running processes, I believe you'll see an instance of
ceph-create-keys start each time you start a Monitor. So, if
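Easy enough to check on a monitor node:
ps aux | grep [c]eph-create-keys   # lists any ceph-create-keys processes still hanging around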
On 04/08/2013 01:09 AM, Matthieu Patou wrote:
On 04/01/2013 11:26 PM, Matthieu Patou wrote:
On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
Hi,
I was doing some testing with iozone and found that the performance of an
exported rbd volume where 1/3 of the p [...]
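For anyone wanting to reproduce, an iozone run of that sort looks roughly
like this (sizes and the mount point are placeholders, not the exact options
used in the test):
iozone -i 0 -i 1 -s 4g -r 128k -f /mnt/rbd/iozone.tmp   # sequential write then read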