goes worse for my small production cluster :(
I tweaked the WBThrottle options (specifically the param
'filestore_wbthrottle_xfs_inodes_start_flusher'), and finally the OSD
no longer suicides due to high disk usage.
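For anyone hitting the same throttling, here is roughly how the option can be
set (the value 1000 below is only an illustration, not the exact number I
ended up with; it needs tuning per drive):

    # ceph.conf, under [osd], then restart the OSDs
    filestore_wbthrottle_xfs_inodes_start_flusher = 1000

    # or injected at runtime without a restart
    ceph tell osd.* injectargs '--filestore_wbthrottle_xfs_inodes_start_flusher 1000'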
Hope your wonderful PR "writepath optimizati
                                 bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=357381, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
On 2015-12-02 16:20, flisky wrote:
> It works
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> flisky
> Sent: Tuesday, December 01, 2015 11:04 AM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] does anyone know what xfsaild and kworker are? they
> make osd di
On 2015-12-02 01:31, Somnath Roy wrote:
> This is the xfs metadata sync process... when it wakes up and there is a lot
> of data to sync, it will throttle all the processes accessing the drive...
> There are some xfs settings to control the behavior, but you can't stop that
May I ask how to tune the x
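In case it helps others who find this thread, one knob that looks related (my
own guess, Somnath did not name specific settings here) is the XFS periodic
sync interval, in centiseconds:

    # check the current value
    sysctl fs.xfs.xfssyncd_centisecs

    # lengthen the interval so metadata flushes fire less often
    sysctl -w fs.xfs.xfssyncd_centisecs=6000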
On 2014-11-11 12:23, duan.xuf...@zte.com.cn wrote:
On 2015-07-02 00:16, Gregory Farnum wrote:
How reproducible is this issue for you? Ideally I'd like to get logs
from both clients and the MDS server while this is happening, with mds
and client debug set to 20. And also to know if dropping kernel caches
and re-listing the directory resolves the
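For anyone collecting the same data, this is roughly how I gathered it (the
mds id is a placeholder; the debug options shown are the usual knobs):

    # on the MDS node (replace <id> with your mds name)
    ceph daemon mds.<id> config set debug_mds 20
    ceph daemon mds.<id> config set debug_ms 1

    # on the ceph-fuse client: add to ceph.conf under [client] and remount
    #   debug client = 20
    #   debug ms = 1

    # drop dentries/inodes from the kernel cache, then re-list the directory
    echo 2 > /proc/sys/vm/drop_caches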
On 2015-07-01 16:11, Gregory Farnum wrote:
On Wed, Jul 1, 2015 at 9:02 AM, flisky wrote:
Hi list,
I've run into a strange problem:
sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any thoughts?
What version
Hi list,
I've run into a strange problem:
sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any thoughts?
Thanks!
On 2015-05-19 17:07, Markus Blank-Burian wrote:
Here are some logs and the info from the mdsc files. But I am afraid
that there might not be much info in the logs, since I had a very low
log level. Look for example at 2015-05-18T21:28:33+02:00. The mdsc files
are concatenated from all of the cli
On 2015-01-10 03:21, Gregory Farnum wrote:
On Fri, Jan 9, 2015 at 2:00 AM, Nico Schottelius
wrote:
Lionel, Christian,
we do have exactly the same trouble as Christian,
namely
Christian Eichelmann [Fri, Jan 09, 2015 at 10:43:20AM +0100]:
We still don't know what caused this specific error...
I think the RGW would eventually become unavailable if there are more slow
requests. Is that true?
Thanks!
On 2015-05-15 23:54, flisky wrote:
All OSDs are up and in, and crushmap should be okay.
ceph -s:
health HEALTH_WARN
9 pgs stuck inactive
9 pgs stuck unclean
149
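To narrow down which pgs those are, the queries I used were along these lines:

    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean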
onnects to 4 OSD ports and exits with
'initialization timeout' after a restart. However, telnetting to other OSD
ports is okay.
Please help.
Thanks!
On 2015-05-15 18:17, flisky wrote:
Hi list,
I reformatted some OSDs to increase the journal_size, and since I did it in
a hurry, some pgs
Hi list,
I reformatted some OSDs to increase the journal_size, and since I did it in
a hurry, some pgs have lost data and are in the incomplete state.
The cluster is stuck in the 'creating' state after **ceph osd lost xx** and
**force_create_pg**. I find that the dir 'osd-xx/current/xx.xxx_head' only
co
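For the record, the commands were along these lines (the osd id and pg id
below are placeholders, not the real ones from my cluster):

    # declare the reformatted OSD's old data permanently lost
    ceph osd lost 12 --yes-i-really-mean-it

    # then ask the cluster to recreate the incomplete pg from scratch
    ceph pg force_create_pg 3.1a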
On 2015-05-02 03:02, John Spray wrote:
On 30/04/2015 09:21, flisky wrote:
When I read the file through ceph-fuse, the process crashed.
Here is the log -
terminate called after throwing an instance of
'ceph::buffer::end_of_buffer'
what(): buffer::end
n use.
I'm not sure why it would be getting aborted without any output though
— is there any traceback at all in the logs? A message about the
OOM-killer zapping it or something?
-Greg
On Thu, Apr 30, 2015 at 1:45 AM, flisky wrote:
Sorry, I cannot reproduce the "Operation not permitted"
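In case it is useful to anyone else, this is how I checked whether the
OOM-killer was involved (log locations vary by distro):

    dmesg | grep -iE 'out of memory|killed process'
    grep -i oom /var/log/syslog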
I think you can take a look at [ceph-ansible][1], and at
[rolling_update][2] in particular.
In our upgrade everything went smoothly, except the [dirty data
issue][3], which bugged me a lot.
[1]: https://github.com/ceph/ceph-ansible
[2]: https://github.com/ceph/ceph-ansible/blob/master/rolling_
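Roughly how we invoke it (the inventory file name and the playbook location
are from our setup and may not match the current layout of the repo):

    ansible-playbook -i hosts rolling_update.yml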
Second this. A VPN works when you cannot open ceph.com.
And FYI, you can use 'eu.ceph.com' to download packages when the GFW sucks.
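For example, on Debian/Ubuntu you can point apt at the EU mirror (hammer is
just an example release here):

    echo "deb http://eu.ceph.com/debian-hammer/ $(lsb_release -sc) main" | \
        sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update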
On 2015-04-30 15:08, Ray Sun wrote:
I think it's GFW's problem. I think it's ok for me to use a VPN inside the
wall.
Best Regards
-- Ray
On Thu, Apr 30, 2015 at 3:00
.149:6823/982446
0> 2015-04-30 16:29:12.872787 7fe97bfff700 -1 *** Caught signal
(Aborted) **
On 2015-04-30 16:21, flisky wrote:
When I read the file through ceph-fuse, the process crashed.
Here is the log -
terminate called after throwing an instance of
'ce
When I read the file through ceph-fuse, the process crashed.
Here is the log -
terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
what(): buffer::end_of_buffer
*** Caught signal (Aborted) **
in thread 7fe0814d3700
ceph version 0.94.1 (e4bfad