I checked the log; it has an assert like this, so the OSD daemon is stopped:
0> 2015-08-28 17:03:36.898139 7f322a9b2700 -1 osd/ReplicatedPG.cc: In function
'virtual void ReplicatedPG::on_local_recover(const hobject_t&, const
object_stat_sum_t&, const ObjectRecoveryInfo&, ObjectContextRef,
ObjectStore
Hello CephFS / Ceph gurus...
I am currently using CephFS to store data in a Ceph object storage cluster.
CephFS is using separate pools for data and metadata.
I am trying to understand how to recover CephFS in a situation where:
1. The cluster loses more OSDs than the number of configured re
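For context, a minimal sketch of how a CephFS filesystem with separate data and
metadata pools is typically set up; the pool names and pg_num values here are
illustrative, not taken from the cluster above:

    ceph osd pool create cephfs_data 128             # data pool (pg_num is illustrative)
    ceph osd pool create cephfs_metadata 128         # metadata pool
    ceph fs new cephfs cephfs_metadata cephfs_data   # metadata pool first, then data pool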
Preface: as I mentioned on your other thread about repairing a
filesystem, these tools are not finished yet, and probably won't be
comprehensively documented until they all fit together. So my answers
here are for interest, not an encouragement to treat these tools as
"ready to go".
On Tue, Sep 1
Hi, Cephers!
We have an issue on a Firefly production cluster: after a disk error, one OSD
was out of
the cluster. For half an hour, XFS async writes tried to commit the XFS
journal to a
bad disk, and the whole node went down with "BUG: cpu## soft lockup". We suspect
that it can be
a bug or stra
> -----Original Message-----
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 15 September 2015 00:09
> To: Nick Fisk ; Samuel Just
> Cc: Shinobu Kinjo ; GuangYang
> ; ceph-users
> Subject: Re: [ceph-users] Ceph performance, empty vs part full
>
> It's been a while since I looked a
So, I asked this on IRC, but I will ask it here as well.
When one does 'ceph -s' it shows client IO.
The question is simple.
Is this total throughput or what the clients would see?
Since it's a replication factor of 3, that means for every write, 3 are
actually written.
First let's assum
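To make the write-amplification point concrete, a toy calculation; the
100 MB/s figure is purely illustrative:

    # With size=3, every client write is stored three times.
    CLIENT_WRITE_MBPS=100                      # what the clients push (made-up number)
    REPLICAS=3
    echo $(( CLIENT_WRITE_MBPS * REPLICAS ))   # ~300 MB/s of writes actually hit the OSDs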
Hi,
I'd like to run Ceph on a few machines, each of which has multiple disks. The
disks are heterogeneous: some are rotational disks of larger capacities while
others are smaller solid state disks. What are the recommended ways of running
Ceph OSDs on them?
Two of the approaches can be:
1)
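Not necessarily what the poster had in mind for approach 1), but one common
pattern at the time was to give SSDs and HDDs separate CRUSH roots and rules;
a rough sketch, with made-up bucket, host, and pool names and an unverified
ruleset id:

    ceph osd crush add-bucket ssd root                    # separate root for SSD OSDs
    ceph osd crush add-bucket hdd root                    # separate root for HDD OSDs
    ceph osd crush move node1-ssd root=ssd                # host buckets are hypothetical
    ceph osd crush move node1-hdd root=hdd
    ceph osd crush rule create-simple ssd-rule ssd host   # replicate across hosts under 'ssd'
    ceph osd crush rule create-simple hdd-rule hdd host
    ceph osd pool set fastpool crush_ruleset 1            # check the actual ruleset id first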
On Tue, Sep 15, 2015 at 9:10 AM, Barclay Jameson
wrote:
> So, I asked this on IRC, but I will ask it here as well.
>
> When one does 'ceph -s' it shows client IO.
>
> The question is simple.
>
> Is this total throughput or what the clients would see?
>
> Since it's a replication factor of
Unfortunately, it's no longer idle, as my CephFS cluster is now in production :)
On Tue, Sep 15, 2015 at 11:17 AM, Gregory Farnum wrote:
> On Tue, Sep 15, 2015 at 9:10 AM, Barclay Jameson
> wrote:
>> So, I asked this on IRC, but I will ask it here as well.
>>
>> When one does 'ceph -s
FWIW I wouldn't totally trust these numbers. At one point a while back
I had ceph reporting 226GB/s for several seconds sustained. While that
would have been really fantastic, I suspect it probably wasn't the case. ;)
Mark
On 09/15/2015 11:25 AM, Barclay Jameson wrote:
Unfortunately, it's no
I have a program that monitors the speed, and I have seen 1TB/s pop up and
there is just no way that is true.
The way it is calculated is probably prone to extreme measurements; if you
average it out you get a more realistic number.
On Tue, Sep 15, 2015 at 12:25 PM Mark Nelson wrote:
> FWI
Good point. I have seen some really weird numbers, something like 7x my
normal client IO. This happens very rarely though.
On Tue, Sep 15, 2015 at 2:25 PM, Mark Nelson wrote:
> FWIW I wouldn't totally trust these numbers. At one point a while back I
> had ceph reporting 226GB/s for several second
Yes, they're both delayed and a guesstimate: the OSDs send periodic
information to the monitors about the state of their PGs, which
includes amount of data read/written from them. The monitor
extrapolates the throughput at each report interval based on the pg
updates it received during that time.
-
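A back-of-the-envelope version of that extrapolation; the numbers are invented
and the real monitor logic is more involved:

    # PGs reported 600 MB written since the last report, roughly 5 seconds ago.
    DELTA_MB=600
    INTERVAL_S=5
    echo "$(( DELTA_MB / INTERVAL_S )) MB/s"   # roughly what would show up as client io in 'ceph -s'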
Dear all,
I have a Ceph cluster deployed on Debian; I'm trying to test ISA
erasure-coded pools, but there is no plugin (libec_isa.so) included in
the library.
Looking at the packages in the Debian Ceph repository, I found a "trusty"
package that includes the plugin. Is it created to use with deb
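For reference, once libec_isa.so is available, an ISA-backed profile and pool
would be created roughly like this; the profile name, k/m values, and pg_num
are illustrative:

    ceph osd erasure-code-profile set isaprofile plugin=isa k=2 m=1 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure isaprofile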
Hi Gerd,
On 16/09/2015 00:52, Gerd Jakobovitsch wrote:
> Dear all,
>
> I have a Ceph cluster deployed on Debian; I'm trying to test ISA
> erasure-coded pools, but there is no plugin (libec_isa.so) included in the
> library.
>
> Looking at the packages in the Debian Ceph repository, I found a "trus
Hi,
I'm working to correct a partitioning error from when our cluster was
first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
partitions for our OSDs, instead of the 2.8TB actually available on
disk, a 29% space hit. (The error was due to a gdisk bug that
mis-computed the end of t
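One common approach, sketched here with a hypothetical OSD id and no claim to
completeness, is to drain and rebuild one OSD at a time on a corrected partition:

    ceph osd out 12                 # mark it out and wait for backfill to finish
    # stop the daemon, then repartition the disk with the full-size layout
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # recreate the OSD on the new partition and let it backfill back in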
On 16/09/2015 01:21, John-Paul Robinson wrote:
> Hi,
>
> I'm working to correct a partitioning error from when our cluster was
> first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
> partitions for our OSDs, instead of the 2.8TB actually available on
> disk, a 29% space hit. (Th