On 09/07/14 16:53, Christian Balzer wrote:
> On Wed, 09 Jul 2014 07:07:50 -0500 Mark Nelson wrote:
>
>> On 07/09/2014 06:52 AM, Xabier Elkano wrote:
>>> On 09/07/14 13:10, Mark Nelson wrote:
On 07/09/2014 05:57 AM, Xabier Elkano wrote:
>
> Hi,
>
> I was doing some tests i
Hello,
the ceph command hangs after execution (doing what it is supposed to do)
following an update on one Jessie machine today.
---
# ceph -s
cluster d6b84616-ff3e-4b04-b50b-bd398d7fa69a
health HEALTH_OK
monmap e1: 3 mons at
{c-admin=10.0.0.10:6789/0,ceph-01=10.0.0.41:6789/0,cep
Hi,
I would like to know if a CentOS 7 repository will be available soon?
Or can I use the current rhel7 one for the moment?
http://ceph.com/rpm-firefly/rhel7/x86_64/
Cheers,
Alexandre
Just to ask a couple obvious questions...
You didn't accidentally put 'http://us-secondary.example.comhttp://
us-secondary.example.com/' in any of your region or zone configuration
files? The fact that it's missing the :80 makes me think it's getting that
URL from someplace that isn't the command
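For comparison, the endpoint entries in the region/zone configuration normally
carry the full URL including the port. A sketch of the relevant fragment (names
and values are examples; compare against the real output of
"radosgw-admin region get" and "radosgw-admin zone get"):
# radosgw-admin region get
  ...
  "zones": [
    { "name": "us-secondary",
      "endpoints": ["http://us-secondary.example.com:80/"],
      ...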
FWIW, I'm beginning to think that SSD journals are a requirement.
Even with minimal recovery/backfilling settings, it's very easy to kick off
an operation that will bring a cluster to its knees. Increasing PG/PGP,
increasing replication, adding too many new OSDs, etc. These operations
can cause
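For reference, the recovery/backfill throttles in question are typically along
these lines (illustrative values, applied at runtime; the equivalents can also
go under [osd] in ceph.conf):
# ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'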
Sending this over to ceph-devel and ceph-users, where it will be of
interest (ceph-community is for the user committee). Thanks.
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
On Wed, Jul 9, 2014 at 11:1
On 07/09/2014 02:22 PM, Pierre BLONDEAU wrote:
Hi,
Is there any chance to restore my data?
Okay, I talked to Sam and here's what you could try before anything else:
- Make sure you have everything running on the same version.
- Unset the chooseleaf_vary_r flag -- this can be accomplished
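One way that can be accomplished (a sketch only; back up the CRUSH map first,
and expect data movement once the map changes) is to edit the decompiled map:
# ceph osd getcrushmap -o crush.bin
# crushtool -d crush.bin -o crush.txt
  (edit crush.txt and set "tunable chooseleaf_vary_r 0", or remove that line)
# crushtool -c crush.txt -o crush.new
# ceph osd setcrushmap -i crush.new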
It crashed on an OSD reply. What's the output of "ceph -s"?
-Greg
On Wednesday, July 9, 2014, Florent B wrote:
> Hi all,
>
> I have been running a Firefly cluster with an MDS server for a while without any problem.
>
> I would like to set up a second one to get a failover server.
>
> To minimize downtime in cas
On 07/09/2014 02:22 PM, Pierre BLONDEAU wrote:
Hi,
Is there any chance to restore my data?
Hello Pierre,
I've been giving this some thought and my guess is that yes, it should
be possible. However, it may not be a simple fix.
So, first of all, you got bit by http://tracker.ceph.com/issue
I found that the OSD log contains "fault with nothing to send, going to
standby". What happened?
baijia...@126.com
On Wed, 09 Jul 2014 07:07:50 -0500 Mark Nelson wrote:
> On 07/09/2014 06:52 AM, Xabier Elkano wrote:
> > On 09/07/14 13:10, Mark Nelson wrote:
> >> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
> >>>
> >>>
> >>> Hi,
> >>>
> >>> I was doing some tests in my cluster with the fio tool, one fio instance
You're physically moving (lots of) data around between most of your
disks. There's going to be an IO impact from that, although we are
always working on ways to make it more controllable and try to
minimize its impact. Your average latency increase sounds a little
high to me, but I don't have much
On 09 Jul 2014, at 15:30, Robert van Leeuwen
wrote:
>> Which leveldb from where? 1.12.0-5 that tends to be in el6/7 repos is broken
>> for Ceph.
>> You need to remove the “basho fix” patch.
>> 1.7.0 is the only readily available version that works, though it is so old
>> that I suspect it is r
> Which leveldb from where? 1.12.0-5 that tends to be in el6/7 repos is broken
> for Ceph.
> You need to remove the “basho fix” patch.
> 1.7.0 is the only readily available version that works, though it is so old
> that I suspect it is responsible for various
> issues we see.
Apparently at some
Hi,
Is there any chance to restore my data?
Regards
Pierre
On 07/07/2014 15:42, Pierre BLONDEAU wrote:
There is no chance to get those logs, and even less in debug mode; I made this
change 3 weeks ago.
I put all my logs here in case it can help:
https://blondeau.users.greyc.fr/cephlog/all/
I have a ch
There is a memory leak bug in the standby replay code; your issue is likely
caused by it.
Yan, Zheng
On Wed, Jul 9, 2014 at 4:49 PM, Florent B wrote:
> Hi all,
>
> I have been running a Firefly cluster with an MDS server for a while without any problem.
>
> I would like to set up a second one to get a failover server
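For reference, a plain standby MDS only needs its own section in ceph.conf on
the second node before starting the daemon; a sketch (section id and hostname
are examples, and the commented options enable the standby-replay mode affected
by the leak mentioned above):
[mds.b]
    host = mds2
    # optional, standby-replay mode (the mode affected by the memory leak):
    # mds standby replay = true
    # mds standby for rank = 0
then start the daemon on that node, e.g. with "ceph-mds -i b".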
Hi,
On 09 Jul 2014, at 14:44, Robert van Leeuwen
wrote:
>> I cannot add a new OSD to a current Ceph cluster.
>> It just hangs, here is the debug log:
>> This is ceph 0.72.1 on CentOS.
>
> Found the issue:
> Although I installed the specific Ceph version (0.72.1), the latest leveldb
> was insta
> I cannot add a new OSD to a current Ceph cluster.
> It just hangs, here is the debug log:
> This is ceph 0.72.1 on CentOS.
Found the issue:
Although I installed the specific Ceph version (0.72.1), the latest leveldb
was installed.
Apparently this breaks stuff...
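If anyone else runs into this, a rough way to check and pin the leveldb version
on a yum-based system might be (package availability and exact version strings
depend on your repositories):
# rpm -q leveldb
# yum downgrade leveldb-1.7.0
# yum install yum-plugin-versionlock && yum versionlock leveldb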
Cheers,
Robert van Leeuwen
On 09/07/14 13:14, hua peng wrote:
> What is the IO throughput (MB/s) for the test cases?
>
> Thanks.
Hi Hua,
The throughput in each test is the IOPS multiplied by the 4K block size; all
tests are random writes.
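For example (illustrative numbers, not results from these runs): a test
sustaining 5,000 IOPS at a 4K block size corresponds to 5,000 x 4 KiB,
i.e. roughly 20 MB/s.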
Xabier
>
> On 14-7-9 6:57 PM, Xabier Elkano wrote:
>>
>>
>> Hi,
>>
>> I was doing some tests in my clust
On 09/07/14 14:07, Mark Nelson wrote:
> On 07/09/2014 06:52 AM, Xabier Elkano wrote:
>> On 09/07/14 13:10, Mark Nelson wrote:
>>> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
Hi,
I was doing some tests in my cluster with the fio tool, one fio instance
with 70 jobs, e
On 07/09/2014 06:52 AM, Xabier Elkano wrote:
On 09/07/14 13:10, Mark Nelson wrote:
On 07/09/2014 05:57 AM, Xabier Elkano wrote:
Hi,
I was doing some tests in my cluster with the fio tool, one fio instance
with 70 jobs, each job writing 1GB of random data with a 4K block size. I did this
test with 3 var
Christian,
Excellent performance improvements based on your guide.
I had set such a small rbd cache size before that I did not get any
improvements using the RBD cache.
Thanks a lot!
Jian Li
At 2014-07-08 10:14:27, "Christian Balzer" wrote:
>
>Hello,
>
>how did you come up with those bizarre cache sizes?
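For reference, the RBD cache is configured in the [client] section of
ceph.conf; a sketch with purely illustrative sizes (not a recommendation):
[client]
    rbd cache = true
    rbd cache size = 67108864          # 64 MB, example value
    rbd cache max dirty = 50331648     # 48 MB, example value
    rbd cache target dirty = 33554432  # 32 MB, example value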
On 09/07/14 13:10, Mark Nelson wrote:
> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
>>
>>
>> Hi,
>>
>> I was doing some tests in my cluster with the fio tool, one fio instance
>> with 70 jobs, each job writing 1GB of random data with a 4K block size. I did
>> this test with 3 variations:
>>
>> 1- Creating 70
What is the IO throughput (MB/s) for the test cases?
Thanks.
On 14-7-9 6:57 PM, Xabier Elkano wrote:
Hi,
I was doing some tests in my cluster with the fio tool, one fio instance
with 70 jobs, each job writing 1GB of random data with a 4K block size. I did this
test with 3 variations:
1- Creating 70 images
On 07/09/2014 05:57 AM, Xabier Elkano wrote:
Hi,
I was doing some tests in my cluster with the fio tool, one fio instance
with 70 jobs, each job writing 1GB of random data with a 4K block size. I did this
test with 3 variations:
1- Creating 70 images, 60GB each, in the pool. Using the rbd kernel module,
format
Hi,
I cannot add a new OSD to a current Ceph cluster.
It just hangs, here is the debug log:
ceph-osd -d --debug-ms=20 --debug-osd=20 --debug-filestore=31 -i 10
--osd-journal=/mnt/ceph/journal_vg_sda/journal0 --mkfs --mkjournal --mkkey
2014-07-09 10:50:28.934959 7f80f6a737a
Hi,
I was doing some tests in my cluster with the fio tool, one fio instance
with 70 jobs, each job writing 1GB of random data with a 4K block size. I did this
test with 3 variations:
1- Creating 70 images, 60GB each, in the pool. Using the rbd kernel module,
format and mount each image as ext4. Each fio job wri
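A job file for that first variation might look roughly like this (paths and
job names are examples; one job section per mounted image, 70 in total):
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=1g
[img00]
directory=/mnt/rbd00
[img01]
directory=/mnt/rbd01
; ... and so on up to [img69] / /mnt/rbd69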
Thank you for your reply. I am running Ceph 0.80.1 and radosgw-agent 1.2 on
Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64). I also ran into
this same issue with Ubuntu 12.04 previously.
There are no special characters in the access or secret key (I've had
issues with this before so I make su
Hi Mark,
what kind of caching is actually needed here?
I assume a 2-replica pool for writes (we don't want to lose the data while
it has not been flushed back to the EC pools) and a 1-replica pool for
reading (no additional write IOs and network traffic while reading, no
data loss if the read cache po
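For reference, wiring a replicated cache pool in front of an EC pool looks
roughly like this (pool names are examples; a read-only tier would use
cache-mode readonly instead of writeback):
# ceph osd tier add ecpool cachepool
# ceph osd tier cache-mode cachepool writeback
# ceph osd tier set-overlay ecpool cachepool
# ceph osd pool set cachepool hit_set_type bloom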