Dear all,
It happened again; the detailed error info is listed below:
770'25857 snapset=0=[]:[] snapc=0=[]) v10 1295+0+39 (3373886567 0
3511520646) 0xccbd800 con 0xaa41760
-27> 2014-09-07 15:28:58.921019 7f5ad3a92700 5 -- op tracker -- , seq:
1849, time: 2014-09-07 15:28:58.920839, event
Hi,
I'm trying to install Ceph Firefly on RHEL 7 on three of my storage
servers.
Each server has 17 HDDs, so I expect each to have 17 OSDs, and I've
installed monitors on all three servers.
After the installation I get this output:
# ceph health
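(A few follow-up commands that are often useful alongside "ceph health" when
checking a fresh install; a sketch only, assuming the admin keyring is
available on the node you run them from:)

ceph -s              # overall cluster status, including PG states
ceph osd tree        # which OSDs are up/in and how they map to hosts
ceph osd stat        # quick count of OSDs that are up and in
ceph quorum_status   # confirm all three monitors have formed quorum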
According to your log, your Ceph version is 0.80.5, which has several
KeyValueStore bugs. I recommend using the latest Ceph release for
KeyValueStore.
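A quick way to confirm what each daemon is actually running before you
upgrade (a sketch; the osd.* wildcard and paths are the usual defaults,
adjust as needed):

ceph --version            # version of the locally installed ceph packages
ceph tell osd.* version   # version reported by each running OSD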
On Sun, Sep 7, 2014 at 7:56 PM, 廖建锋 wrote:
> already set ulimit -a 65536
> already set debug_keyvaluestore=20/20 in global section of ceph.conf
>
I have found the root cause. It's a bug.
When a chunky scrub happens, it iterates over the whole PG's objects, and
each iteration scans only a few objects:
osd/PG.cc:3758
  ret = get_pgbackend()->objects_list_partial(
    start,
    cct->_conf->osd_scrub_chunk_min,
    cct->_conf-
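For reference, the two chunk limits used by that loop are runtime-tunable; a
sketch of how to inspect and adjust them (the socket path and values below
are only examples):

# inspect the current limits on one OSD via its admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep scrub_chunk

# change them on all OSDs without a restart
ceph tell osd.* injectargs '--osd_scrub_chunk_min 5 --osd_scrub_chunk_max 25'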
Thanks Greg, that helped get the last stuck PGs back online, and
everything looks normal again.
Here's the promised post-mortem. It might contain only a little of
value to developers, but certainly a bunch of face-palming for readers
(and a big red face for me).
This mess started during a r
On 07/09/2014 14:11, yr...@redhat.com wrote:
> Hi,
>
> I'm trying to install Ceph Firefly on RHEL 7 on three of my storage
> servers.
> Each server has 17 HDDs, so I expect each to have 17 OSDs, and I've
> installed monitors on all three servers.
> installed monitors on all three servers.
>
> After the installation I get this outp
Hello guys,
I was wondering whether it is a good idea to enable TRIM (the discard mount
option) on the SSD disks which are used for either the cache pool or the
OSD journals.
For performance, is it better to enable it or to run fstrim from cron every
once in a while?
Thanks
Andrei
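For comparison, the two approaches look roughly like this (a sketch only;
the device, mount point and filesystem are placeholders, and the SSD must
actually support TRIM):

# option 1: online discard via a mount option (example /etc/fstab entry)
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,discard  0 0

# option 2: periodic TRIM instead, e.g. a weekly cron job running
fstrim -v /var/lib/ceph/osd/ceph-0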
I recently found out about the "ceph --admin-daemon
/var/run/ceph/ceph-osd..asok dump_historic_ops" command, and noticed
something unexpected in the output on my cluster, after checking
numerous output samples...
It looks to me like "normal" write ops on my cluster spend roughly:
<1ms between
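One way to collect comparable samples of those timings (a sketch; the OSD id
is an example, and the JSON field names follow this cluster's output and may
differ between releases):

# take a snapshot of the slowest recent ops on one OSD
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops > historic_ops.json

# if jq is available, pull out just the description and total duration per op
jq '.Ops[] | {description, duration}' historic_ops.json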
Andrei Mikhailovsky writes:
>
> Hello guys,
>
> was wondering if it is a good idea to enable TRIM (mount option discard)
> on the ssd disks which are used for
> either cache pool or osd journals?
As far as the journals are concerned, isn't this irrelevant if you're
assigning a block device to
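One way to check whether a given journal is a raw partition (where a
filesystem discard option doesn't apply at all) is to look at what the
journal link points to; the paths below follow the default layout and are
only examples:

ls -l /var/lib/ceph/osd/ceph-*/journal        # symlink to a block device = raw journal
readlink -f /var/lib/ceph/osd/ceph-0/journal  # resolve which device backs one OSD's journal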
On Mon, 8 Sep 2014 00:20:37 +0000 (UTC) Alex Moore wrote:
> Andrei Mikhailovsky writes:
>
> >
> > Hello guys,
> >
> > was wondering if it is a good idea to enable TRIM (mount option
> > discard) on the ssd disks which are used for
> > either cache pool or osd journals?
>
> As far as the jo
Hi Yehuda,
I need more info on the Ceph object backup mechanism. Could you please
share a related doc or link for this?
Thanks
Swami
On Thu, Sep 4, 2014 at 10:58 PM, M Ranga Swami Reddy wrote:
> Hi,
> I need more info on Ceph object backup mechanism.. Could someone share a
> related doc or link for thi
I wouldn't trust the 3.15.x kernel; it's already EOL and has issues.
http://tracker.ceph.com/issues/8818 is the one that hit me; I switched to a
3.14 kernel and my problems went away. The fix is supposed to land in
3.16.2, but I looked at the changelog and couldn't find any reference to
it, so I'm not su