Hi All,
We have Ceph RBD with OCFS2 mounted on our servers. We are facing simultaneous
I/O errors while moving data within the same disk (copying does not cause any
problem). As a temporary workaround we remount the partition and the issue gets
resolved, but after some time the problem reproduces again. If anybo
Hi,
In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two very
large files, LOG and LOG.OLD (multiple GBs), and my disk space is running low.
Can I safely delete those files?
Regards,
Erwin
Hello,
On Thu, 8 Oct 2015 09:38:02 +0200 Erwin Lubbers wrote:
> Hi,
>
> In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two
> very large files, LOG and LOG.OLD (multiple GBs), and my disk space is
> running low. Can I safely delete those files?
>
That sounds odd, what version
On 10/07/2015 10:52 PM, Sage Weil wrote:
> On Wed, 7 Oct 2015, David Zafman wrote:
>> There would be a benefit to doing fadvise POSIX_FADV_DONTNEED after
>> deep-scrub reads for objects not recently accessed by clients.
> Yeah, it's the 'except for stuff already in cache' part that we don't do
>
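For illustration, this is the page-cache effect being discussed: after a scrub
read, an object's pages stay resident unless something drops them. A rough way
to observe and evict them by hand, using the third-party vmtouch tool (which
evicts via fadvise DONTNEED) and a hypothetical object path, would be:

  OBJ=/var/lib/ceph/osd/ceph-12/current/<pgid>_head/<object-file>   # hypothetical path
  vmtouch "$OBJ"       # reports how many pages of the file are resident in the page cache
  vmtouch -e "$OBJ"    # evicts them, the effect POSIX_FADV_DONTNEED would have after scrub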
Christian,
Still running Dumpling (I know I have to start upgrading). The cluster has 66 OSDs
and a total size close to 100 GB. The cluster has been running for around 2 years
now and the monitor server has an uptime of 258 days.
The LOG file is 1.2 GB in size and ls shows the current time for it. The
LOG.
Hello,
On Thu, 8 Oct 2015 10:27:16 +0200 Erwin Lubbers wrote:
> Christian,
>
> Still running Dumpling (I know I have to start upgrading). The cluster has
> 66 OSDs and a total size close to 100 GB. The cluster has been running for
> around 2 years now and the monitor server has an uptime of 258 days.
>
> T
Hi all,
I hadn't noticed that the osd reweight for the SSDs was curiously set to a low value.
I don't know how or when these values were set so low.
Our environment is Mirantis-driven and the installation was done with Fuel and
Puppet.
(the installation was run by the OpenStack team and I checked the cep
Hi, all:
If the Ceph cluster health status is HEALTH_OK, the execution time of 'sudo
rbd ls rbd' is very short, like the following results.
$ time sudo rbd ls rbd
real    0m0.096s
user    0m0.014s
sys     0m0.028s
But if there are several warnings (eg: 1 pgs degraded; 6 pgs incomplete; 1650
Hi,
I've moved all files from a CephFS data pool (EC pool with frontend
cache tier) in order to remove the pool completely.
Some objects are left in the pools ('ceph df' output of the affected pools):
NAME            ID  USED   %USED  MAX AVAIL  OBJECTS
cephfs_ec_data  19  7565k      0     66288G       13
Listing the object
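For reference, a quick way to inspect those leftover objects is the rados CLI
(a sketch; the pool name is taken from the 'ceph df' line above, the object
name is a placeholder):

  $ rados -p cephfs_ec_data ls | head           # list (some of) the remaining object names
  $ rados -p cephfs_ec_data stat <object-name>  # size and mtime of one leftover object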
On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke
wrote:
> Hi,
>
> I've moved all files from a CephFS data pool (EC pool with frontend cache
> tier) in order to remove the pool completely.
>
> Some objects are left in the pools ('ceph df' output of the affected pools):
>
> cephfs_ec_data
Hi,
I'm trying to get a list of all users from the radosgw REST gateway, analogous to
"radosgw-admin metadata list user".
I can retrieve the user info for a specified user from
https://rgw01.XXX.de/admin/user?uid=klaus&format=json.
http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info says "
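For what it's worth, a sketch of what has worked for this elsewhere (the
endpoint below is an assumption based on the metadata admin API rather than
the linked doc page, the request must be S3-signed with the admin user's keys,
and the uid is a placeholder):

  # grant the admin user read access to the metadata API
  $ radosgw-admin caps add --uid=<admin-uid> --caps="metadata=read"
  # then an authenticated GET should return the list of user ids:
  #   GET https://rgw01.XXX.de/admin/metadata/user?format=json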
Hi John,
On 10/08/2015 12:05 PM, John Spray wrote:
On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke
wrote:
Hi,
*snipsnap*
I've moved all files from a CephFS data pool (EC pool with frontend cache
tier) in order to remove the pool completely.
Some objects are left in the pools ('ceph df' out
On Thu, Oct 8, 2015 at 11:41 AM, Burkhard Linke
wrote:
> Hi John,
>
> On 10/08/2015 12:05 PM, John Spray wrote:
>>
>> On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>
> *snipsnap*
>>>
>>>
>>> I've moved all files from a CephFS data pool (EC pool with frontend cache
>>> tier
I have a probably similar situation on latest hammer & 4.1+ kernels on spinning
OSDs (journal on a separate partition of the same HDD): eventual slow requests, etc. Try:
1) even with the journal on a separate partition - "journal aio = false";
2) single-queue "noop" scheduler (OSDs);
3) reduce nr_requests to 32 (OSDs);
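A sketch of how those three settings are usually applied:

  # ceph.conf on the OSD hosts (restart the OSDs afterwards)
  [osd]
      journal aio = false

Then, as root, per OSD data disk (sdX is a placeholder device name):

  echo noop > /sys/block/sdX/queue/scheduler
  echo 32   > /sys/block/sdX/queue/nr_requests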
On 07/10/2015 13:44, Paweł Sadowski wrote:
> Hi,
>
> Can anyone tell if deep scrub is done using O_DIRECT flag or not? I'm
> not able to verify that in source code.
>
> If not would it be possible to add such feature (maybe config option) to
> help keeping Linux page cache in better shape?
Note
Hi All,
Could anybody please help me with this issue?
Regards
Prabu
On Thu, 08 Oct 2015 12:35:27 +0530 gjprabu
wrote
Hi All,
We have Ceph RBD with OCFS2 mounted on our servers. We are facing simultaneous
I/O errors while moving data within the sa
Hi John,
On 10/08/2015 01:03 PM, John Spray wrote:
On Thu, Oct 8, 2015 at 11:41 AM, Burkhard Linke
wrote:
*snipsnap*
Thanks for the fast reply. During the transfer of all files from the EC pool
to a standard replicated pool I've copied the file to a new file name,
removed the original one an
On 10/08/2015 10:46 AM, wd_hw...@wistron.com wrote:
> Hi, all:
> If the Ceph cluster health status is HEALTH_OK, the execution time of 'sudo
> rbd ls rbd' is very short, like the following results.
> $ time sudo rbd ls rbd
> real    0m0.096s
> user    0m0.014s
> sys     0m0.028s
>
> But if th
Hi Wido,
According to your reply, if I add/remove OSDs from the Ceph cluster, I have to
wait until all PG movement is completed.
Then the 'rbd ls' operation may work well.
Is there any way to speed up the PG movement when adding/removing OSDs?
Thanks a lot.
Best Regards,
WD
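Not an answer to the slow 'rbd ls' itself, but the knobs usually raised to make
backfill/recovery finish faster (at the cost of more recovery load competing
with client I/O) look roughly like this; the values are examples only:

  $ ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'

or persistently in ceph.conf under [osd]:

  osd max backfills = 4
  osd recovery max active = 8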
On 10/08/2015 04:28 PM, wd_hw...@wistron.com wrote:
> Hi Wido:
> According to your reply, if I add/remove OSDs from the Ceph cluster, I have to
> wait until all PG movement is completed.
> Then the 'rbd ls' operation may work well.
> Is there any way to speed up the PG movement when adding/removing OSDs
This issue with the conflicts between Firefly and EPEL is tracked at
http://tracker.ceph.com/issues/11104
On Sun, Aug 30, 2015 at 4:11 PM, pavana bhat
wrote:
> In case someone else runs into the same issue in future:
>
> I came out of this issue by installing epel-release before installing
> ceph
On Wed, Sep 30, 2015 at 7:46 PM, Goncalo Borges
wrote:
> - Each time logrotate is executed, we received a daily notice with the
> message
>
> ibust[8241/8241]: Warning: HOME environment variable not set. Disabling
> LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
Thank
Hi Sage,
Will this patch be in 0.94.4? We've got the same problem here.
-Lincoln
> On Oct 8, 2015, at 12:11 AM, Sage Weil wrote:
>
> On Wed, 7 Oct 2015, Adam Tygart wrote:
>> Does this patch fix files that have been corrupted in this manner?
>
> Nope, it'll only prevent it from happening to n
Somewhat related to this, I have a pending pull request to dynamically load
LTTng-UST via your ceph.conf or via the admin socket [1]. While it won't solve
this particular issue if you have manually enabled tracing, it will prevent
these messages in the new default case where tracing isn't enabl
On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
wrote:
> Hammer 0.94.3 does not support a 'dump cache' mds command.
> 'dump_ops_in_flight' does not list any pending operations. Is there any
> other way to access the cache?
"dumpcache", it looks like. You can get all the supported commands
with "he
On Thu, Oct 8, 2015 at 7:23 PM, Gregory Farnum wrote:
> On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
> wrote:
>> Hammer 0.94.3 does not support a 'dump cache' mds command.
>> 'dump_ops_in_flight' does not list any pending operations. Is there any
>> other way to access the cache?
>
> "dumpcache
On Fri, Sep 25, 2015 at 10:04 AM, Jan Schermer wrote:
> I get that, even though I think it should be handled more gracefully.
> But is it expected to also lead to consistency issues like this?
I don't think it's expected, but obviously we never reproduced it in
the lab. Given that dumpling is EOL
On Tue, Sep 29, 2015 at 12:08 AM, Balázs Kossovics wrote:
> Hey!
>
> I'm trying to understand the peering algorithm based on [1] and [2]. There
> are things that aren't really clear or I'm not entirely sure if I understood
> them correctly, so I'd like to ask some clarification on the points below
On Tue, Sep 29, 2015 at 7:24 AM, Andras Pataki
wrote:
> Thanks, that makes a lot of sense.
> One more question about checksumming objects in rados. Our cluster uses
> two copies per object, and I have some where the checksums mismatch between
> the two copies (that deep scrub warns about). Does c
After discovering this excellent blog post [1], I thought that taking
advantage of users' "default_placement" feature would be a preferable
way to achieve my multi-tenancy requirements (see previous post).
Alas I seem to be hitting a snag. Any attempt to create a bucket with a
user setup with
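In case it helps to compare notes, the way default_placement usually gets set
on a user is through the metadata interface (a sketch; <uid> is a placeholder):

  $ radosgw-admin metadata get user:<uid> > user.json
  # edit "default_placement" in user.json to point at the custom placement target
  $ radosgw-admin metadata put user:<uid> < user.json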
On Thu, Oct 8, 2015 at 1:55 PM, Christian Sarrasin
wrote:
> After discovering this excellent blog post [1], I thought that taking
> advantage of users' "default_placement" feature would be a preferable way to
> achieve my multi-tenancy requirements (see previous post).
>
> Alas I seem to be hittin
Hi Yehuda,
Yes I did run "radosgw-admin regionmap update" and the regionmap appears
to know about my custom placement_target. Any other idea?
Thanks a lot
Christian
radosgw-admin region-map get
{ "regions": [
{ "key": "default",
"val": { "name": "default",
"ap
When you start radosgw, do you explicitly state the name of the region
that gateway belongs to?
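For reference, what is being asked about is ceph.conf pinning along these lines
(a sketch with placeholder names; the section name depends on how the gateway
instance is started):

  [client.radosgw.gateway]
      rgw region = <region-name>
      rgw zone = <zone-name>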
On Thu, Oct 8, 2015 at 2:19 PM, Christian Sarrasin
wrote:
> Hi Yehuda,
>
> Yes I did run "radosgw-admin regionmap update" and the regionmap appears to
> know about my custom placement_target. Any oth
Hello everyone,
I am very new to Ceph so, please excuse me if this has already been
discussed. I couldn't find anything on the web.
We are interested in using Ceph and accessing it directly via its native RADOS
API from Python. We noticed that certain functions that are available in
the C library ar
On Thu, Oct 8, 2015 at 5:01 PM, Rumen Telbizov wrote:
> Hello everyone,
>
> I am very new to Ceph so, please excuse me if this has already been
> discussed. I couldn't find anything on the web.
>
> We are interested in using Ceph and access it directly via its native rados
> API with python. We no
Sounds good. We'll try to work on this.
On Thu, Oct 8, 2015 at 5:06 PM, Gregory Farnum wrote:
> On Thu, Oct 8, 2015 at 5:01 PM, Rumen Telbizov wrote:
> > Hello everyone,
> >
> > I am very new to Ceph so, please excuse me if this has already been
> > discussed. I couldn't find anything on the we
Hi,
On 08/10/2015 22:25, Gregory Farnum wrote:
> So that means there's no automated way to guarantee the right copy of
> an object when scrubbing. If you have 3+ copies I'd recommend checking
> each of them and picking the one that's duplicated...
It's curious because I have already tried with c
On Thu, Oct 8, 2015 at 6:45 PM, Francois Lafont wrote:
> Hi,
>
> On 08/10/2015 22:25, Gregory Farnum wrote:
>
>> So that means there's no automated way to guarantee the right copy of
>> an object when scrubbing. If you have 3+ copies I'd recommend checking
>> each of them and picking the one that'
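A sketch of that manual check, assuming filestore OSDs and placeholder names;
the find and md5sum steps are run on each OSD host in the acting set:

  $ ceph osd map <pool> <object-name>      # shows the PG id and the acting OSDs
  # on each OSD host holding a copy:
  $ find /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head/ -name '<object-name>*'
  $ md5sum <path-from-find>                # compare the checksums across the copies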
On Wed, Oct 7, 2015 at 11:46 PM, Dzianis Kahanovich
wrote:
> John Spray writes:
>
> [...]
>
> Here is part of the log for the restarted mds at debug 7 (without standby-replay,
> but IMHO it doesn't matter):
>
> (PS How [un]safe are multiple MDSes in current hammer? For now I try and temporarily
> work with "set_max_mds 3", but
Sage,
After trying to bisect this issue (all tests moved the bisect towards
Infernalis) and eventually testing the Infernalis branch again, it
looks like the problem still exists although it is handled a tad
better in Infernalis. I'm going to test ag
No further info?
From: Fulin Sun
Date: 2015-10-08 10:39
To: ceph-users
Subject: ceph osd start failed
The failure message looks like the following:
What would be the root cause?
=== osd.0 ===
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0
--keyring=/var/lib/ceph/os