Hi,
Can you try modifying osd_snap_trim_sleep? The default value is 0; I have had good
results with 0.25 on a ceph cluster using SATA disks:
ceph tell osd.* injectargs -- --osd_snap_trim_sleep 0.25
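If that helps, the setting can also be made persistent across OSD restarts via
ceph.conf; a minimal sketch, assuming the usual /etc/ceph/ceph.conf layout:

# /etc/ceph/ceph.conf
[osd]
osd snap trim sleep = 0.25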
Best regards,
- On 10 Dec 15, at 7:52, Wukongming wrote:
> Hi, All
> I used a rbd c
On Thu, Dec 10, 2015 at 5:06 AM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 9 Dec 2015 15:57:36 + MATHIAS, Bryn (Bryn) wrote:
>
>> to update this, the error looks like it comes from updatedb scanning the
>> ceph disks.
>>
>> When we make sure it doesn’t, by putting the ceph mount points in
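For reference, a minimal sketch of that kind of exclusion, assuming a stock
mlocate /etc/updatedb.conf (the paths are only illustrative):

# /etc/updatedb.conf -- add the ceph mount points so updatedb skips them
PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"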
On Thu, 10 Dec 2015 09:11:46 +0100 Dan van der Ster wrote:
> On Thu, Dec 10, 2015 at 5:06 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 9 Dec 2015 15:57:36 + MATHIAS, Bryn (Bryn) wrote:
> >
> >> to update this, the error looks like it comes from updatedb scanning
> >> the ceph di
Hi Kris,
Indeed I am seeing some spikes in latency; they seem to be linked to
other spikes in throughput and cluster-wide IOPS. I also see some spikes
on the OSDs (I guess this is when the journal is flushed), but IO on the
journals is quite steady. I have already tuned the osd filestore a bit and
On 10.12.2015 at 06:38, Robert LeBlanc wrote:
> I noticed this a while back and did some tracing. As soon as the PGs
> are read in by the OSD (very limited amount of housekeeping done), the
> OSD is set to the "in" state so that peering with other OSDs can
> happen and the recovery process can beg
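If that early "in" transition is the problem, one thing to experiment with (a
sketch, not necessarily what Robert traced) is the cluster-wide noin flag:

ceph osd set noin       # newly booted OSDs are not marked "in" automatically
ceph osd in 12          # mark an OSD in by hand once it has settled (id is an example)
ceph osd unset noin     # back to normal behaviour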
Hi Loic,
I applied the fixed version. I don't get error messages when running
ceph-disk list, but the output is not as I expect it to be (On hammer
release I saw all partitions):
ceph-disk list
/dev/cciss/c0d0 other, unknown
/dev/cciss/c0d1 other, unknown
/dev/cciss/c0d2 other, unknown
/dev/cciss
Hi,
do you know why this happens when I try to install ceph on a CentOS 7 system?
$ ceph-deploy install ceph4
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.5): /usr/bin/ceph-deploy install
ceph4
[ceph_deploy.install][DEB
On 12/10/2015 04:00 AM, deeepdish wrote:
> Hello,
>
> I encountered a strange issue when rebuilding monitors reusing the same
> hostnames but with different IPs.
>
> Steps to reproduce:
>
> - Build monitor using ceph-deploy create mon
> - Remove monitor
> via http://docs.ceph.com/docs/master/rados/
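For what it's worth, the removal and re-creation described there boils down to
roughly the following (a sketch; the hostname is a placeholder):

ceph mon remove mon01                 # drop the monitor from the monmap
ceph-deploy mon destroy mon01         # stop the daemon and clean up its data dir
ceph-deploy mon create mon01          # re-create it; it registers with its current IP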
Just try to give the booting OSD and all MONs the resources they ask for (CPU,
memory).
Yes, it causes disruption but only for a select group of clients, and only for
a moment (<20s with my extremely high number of PGs).
From a service provider perspective this might break SLAs, but until you get
Unfortunately I haven't found a newer package for CentOS in the ceph repos,
not even a src.rpm so I could build the newer package on CentOS myself.
I've re-created the monitor on that machine from scratch (this is fairly
simple and quick).
Ubuntu has leveldb 1.15, CentOS has 1.12. I've found leveldb 1.1
Hello,
We are using ceph version 0.94.4, with radosgw offering S3 storage
to our users.
Each user is assigned one bucket (and only one; max_buckets is set to 1).
The bucket name is actually the user name (typical unix login name, up to
8 characters long).
Users can read and write objects in thei
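For context, a user with this policy would typically be created with something
like the following (uid and display name here are just placeholders):

radosgw-admin user create --uid=jsmith --display-name="J. Smith" --max-buckets=1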
Removing a snapshot means looking for every *potential* object the snapshot can
have, and this takes a very long time (a 6TB snapshot will consist of 1.5M
objects (in one replica) assuming the default 4MB object size). The same
applies to large thin volumes (don't try creating and then dropping a 1
When I adjusted the third parameter of OPTION(osd_snap_trim_sleep, OPT_FLOAT,
0) from 0 to 1, the issue was fixed. I tried again with the value 0.1, and it
did not cause any problem either.
So what is the best choice? Do you have a recommended value?
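For reference, the value currently active on a running OSD can be checked
through its admin socket, e.g. (osd.0 is just an example, run on that OSD's host):

ceph daemon osd.0 config get osd_snap_trim_sleep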
Thanks!!
Kongming Wu
---
Thanks, I'll look into that.
On 10/12/2015 10:27, Stolte, Felix wrote:
> Hi Loic,
>
> I applied the fixed version. I don't get error messages when running
> ceph-disk list, but the output is not as I expect it to be (On hammer
> release I saw all partitions):
>
> ceph-disk list
> /dev/cciss/c0d0
Hi,
I am trying to test a ceph 9.2 cluster.
My lab has 1 mon and 2 OSD servers with 4 disks each.
Only 1 OSD server (with 4 disks) is online.
The disks of the second OSD server don't come up ...
Some info about environment:
[ceph@OSD1 ~]$ sudo ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEI
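A first sanity check on the second OSD host would be something along these
lines (a sketch, assuming systemd-managed OSDs as in 9.2; the osd id is a
placeholder):

sudo systemctl status ceph-osd@4            # is the daemon running at all?
sudo journalctl -u ceph-osd@4 --no-pager    # or look at /var/log/ceph/ceph-osd.4.log
sudo ceph-disk activate-all                 # try re-activating the prepared disks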
On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz
wrote:
> 2015-12-09 13:11:51.171377 7fac03c7f880 -1
> filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb : Corruption:
> 29 missing files; e.g.: /var/lib/ceph/osd/ceph-5/current/omap/046388.sst
Did you have .ldb files? If so, this shou
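If the files are actually present but with the newer .ldb extension, one
workaround that has been reported (a sketch; back up the omap directory before
trying it) is renaming them to the .sst name the older leveldb expects:

cd /var/lib/ceph/osd/ceph-5/current/omap
for f in *.ldb; do sudo mv "$f" "${f%.ldb}.sst"; done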
On 12/10/2015 02:50 PM, Dan van der Ster wrote:
On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz
wrote:
2015-12-09 13:11:51.171377 7fac03c7f880 -1
filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb : Corruption:
29 missing files; e.g.: /var/lib/ceph/osd/ceph-5/current/omap/046388.s
On Thu, 10 Dec 2015, Jan Schermer wrote:
> Removing snapshot means looking for every *potential* object the snapshot can
> have, and this takes a very long time (6TB snapshot will consist of 1.5M
> objects (in one replica) assuming the default 4MB object size). The same
> applies to large thin v
> On 10 Dec 2015, at 15:14, Sage Weil wrote:
>
> On Thu, 10 Dec 2015, Jan Schermer wrote:
>> Removing snapshot means looking for every *potential* object the snapshot
>> can have, and this takes a very long time (6TB snapshot will consist of 1.5M
>> objects (in one replica) assuming the defaul
Hi all,
Does anyone have news about geo-replication?
I have found this really nice article by Sebastien
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/ but
it's from 3 years ago...
My question is about configuration (and limitations: TTL, distance, flapping
network consideratio
If you don't need synchronous replication then asynchronous is the way to go,
but Ceph doesn't offer that natively (not for RBD anyway; not sure how radosgw
could be set up).
200km will add at least 1ms of latency network-wise (2ms RTT); for TCP it will
be more.
For sync replication (which ce
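If async is enough for RBD, one common do-it-yourself approach is shipping
snapshots/diffs to the remote site, roughly like this (a sketch; pool, image
and host names are made up):

# initial sync: snapshot the source and copy it in full
rbd snap create rbd/vm1@base
rbd export rbd/vm1@base - | ssh backup-site rbd import - rbd/vm1
ssh backup-site rbd snap create rbd/vm1@base
# incremental sync: ship only what changed since @base
rbd snap create rbd/vm1@now
rbd export-diff --from-snap base rbd/vm1@now - | ssh backup-site rbd import-diff - rbd/vm1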
If using s3cmd with radosgw and s3cmd's --disable-multipart option, is
there any limit to the size of the object that can be stored through radosgw?
Also, is there a recommendation for multipart chunk size for radosgw?
-- Tom
On Thu, Dec 10, 2015 at 2:26 AM, Xavier Serrano
wrote:
> Hello,
>
> We are using ceph version 0.94.4, with radosgw offering S3 storage
> to our users.
>
> Each user is assigned one bucket (and only one; max_buckets is set to 1).
> The bucket name is actually the user name (typical unix login name,
On Thu, Dec 10, 2015 at 11:10 AM, Deneau, Tom wrote:
> If using s3cmd to radosgw and using s3cmd's --disable-multipart option, is
> there any limit to the size of the object that can be stored thru radosgw?
>
rgw limits plain uploads to 5GB
> Also, is there a recommendation for multipart chunk
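On the client side the chunking is controlled by s3cmd itself; a sketch (file
and bucket names are placeholders):

s3cmd put --multipart-chunk-size-mb=64 bigfile s3://mybucket/bigfile
s3cmd put --disable-multipart bigfile s3://mybucket/bigfile   # single PUT, subject to the 5GB cap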
On Thu, Dec 10, 2015 at 11:25 AM, Gregory Farnum wrote:
> On Thu, Dec 10, 2015 at 2:26 AM, Xavier Serrano
> wrote:
>> Hello,
>>
>> We are using ceph version 0.94.4, with radosgw offering S3 storage
>> to our users.
>>
>> Each user is assigned one bucket (and only one; max_buckets is set to 1).
>>
Thanks Josh - this turned out to be a snafu on our end (filesystem out of
space); sorry for the hassle.
The workaround completely resolved the merge-diff failure. Thanks again!
http://tracker.ceph.com/issues/14030
> As a workaround, you can pass the first diff in via stdin, e.g.:
>
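That is, something along these lines (file names are placeholders):

cat first.diff | rbd merge-diff - second.diff merged.diff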
Thanks a lot for your help, Varada. Since I was deploying Ceph via ceph-deploy
I could not see the actual errors. Low disk space led to a failure in creating
monfs. Things are now working fine.
Thanks,
Aakanksha
From: Varada Kari [mailto:varada.k...@sandisk.com]
Sent: Tuesday, December 08, 2015
Hi Ilya,
I had already recovered but I managed to recreate the problem again. I
ran the commands against rbd_data.f54f9422698a8. which
was one of those listed in osdc this time. We have 2048 PGs in the
pool so the list is long.
As for when I fetched the object using rados, it grab
Hi,
I missed two; could you please try again with:
https://raw.githubusercontent.com/dachary/ceph/b1ad205e77737cfc42400941ffbb56907508efc5/src/ceph-disk
This is from https://github.com/ceph/ceph/pull/6880
Thanks for your patience :-)
Cheers
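If it is easier, the script can be tested standalone without touching the
installed one, e.g.:

curl -o /tmp/ceph-disk https://raw.githubusercontent.com/dachary/ceph/b1ad205e77737cfc42400941ffbb56907508efc5/src/ceph-disk
chmod +x /tmp/ceph-disk
sudo /tmp/ceph-disk list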
On 10/12/2015 10:27, Stolte, Felix wrote:
> Hi Loic,
Hi Loic,
output is still the same:
ceph-disk list
/dev/cciss/c0d0 other, unknown
/dev/cciss/c0d1 other, unknown
/dev/cciss/c0d2 other, unknown
/dev/cciss/c0d3 other, unknown
/dev/cciss/c0d4 other, unknown
/dev/cciss/c0d5 other, unknown
/dev/cciss/c0d6 other, unknown
/dev/cciss/c0d7 other, unknow
On Wed, Dec 2, 2015 at 7:35 PM, Alfredo Deza wrote:
> On Tue, Dec 1, 2015 at 4:59 AM, Deepak Shetty wrote:
> > Hi,
> > Does anybody know how/where I can get the F21 repo for ceph hammer release?
> >
> > In download.ceph.com/rpm-hammer/ I only see F20 dir, not F21
>
> Right, we haven't built FC bina