Hello,
Thank you Bryan.
I was trying to upgrade to Hammer or later, but before that I wanted to get
the cluster into a healthy state.
Do you think it is safe to upgrade now, first to the latest Firefly and then
to Hammer?
Regards.
Dimitar Boichev
SysAdmin Team Lead
AXSMarine Sofia
Phone: +359 889
My ceph cluster config:
7 nodes (including 3 mons, 3 MDS).
9 SATA HDDs in every node, each HDD serving as an OSD with its journal on the
same disk (deployed by ceph-deploy).
CPU: 32 cores
Mem: 64 GB
public network: 2x 1 Gb (bond0),
cluster network: 2x 1 Gb (bond0).
Bandwidth is 109910 KB/s for 1M reads and 34329 KB/s for 1M writes.
Why is i
Hello,
This is sort of a FAQ; Google is your friend.
For example, find the recent thread "Performance Testing of CEPH on ARM
MicroServer" in this ML, which addresses some points pertinent to your query.
Read it; I will reference things from it below.
On Tue, 23 Feb 2016 19:55:22 +0800 yang wrote:
Hello,
On Tue, 23 Feb 2016 22:49:44 +0900 Christian Balzer wrote:
[snip]
> > 7 nodes(including 3 mons, 3 mds).
> > 9 SATA HDD in every node and each HDD as an OSD&journal(deployed by
> What replication, default of 3?
>
> That would give the theoretical IOPS of 21 HDDs, but your slow (more
> pre
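(Spelling out where that 21 comes from, based on the config quoted above; the
journal line at the end is my own addition, since the journals share the data
disks:)

    7 nodes x 9 HDDs       = 63 OSDs
    63 OSDs / 3 replicas   = the write IOPS of about 21 HDDs
    journals on same disks = roughly half of that again for sustained writes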
Hello Guys
I am getting weird output from osd map. The object does not exist in the pool,
but osd map still shows the PG and OSD on which it is stored.
So I have an RBD device coming from pool 'gold'; this image has an object
'rb.0.10f61.238e1f29.2ac5'.
The commands below verify this:
*[root@ce
This Hammer point release fixes a range of bugs, most notably a fix for
unbounded growth of the monitor’s leveldb store, and a workaround in the
OSD to keep most xattrs small enough to be stored inline in XFS inodes.
We recommend that all hammer v0.94.x users upgrade.
For more detailed informat
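As usual for point releases, the standard pattern is to upgrade and restart
the monitors first, then the OSDs one at a time; roughly like the sketch below
(the restart commands depend on your distro and init system and are only
illustrative):

    ceph osd set noout                  # avoid rebalancing while OSDs restart
    # upgrade packages, restart each mon, then each OSD in turn,
    # e.g. "service ceph restart mon" / "service ceph restart osd.<id>"
    ceph tell osd.* version             # check every OSD runs the new version
    ceph osd unset noout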
This is not a bug. The map command just says which PG/OSD an object maps
to; it does not go out and query the osd to see if there actually is such
an object.
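For example, something along these lines shows the difference: ceph osd map
computes the placement via CRUSH whether or not the object exists, while
rados stat actually asks the OSD (pool and object names taken from the
earlier mail):

    ceph osd map gold rb.0.10f61.238e1f29.2ac5
    # prints a PG and up/acting OSD set even if the object was never written
    rados -p gold stat rb.0.10f61.238e1f29.2ac5
    # errors with "No such file or directory" if the object does not exist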
-Greg
On Tuesday, February 23, 2016, Vickey Singh
wrote:
> Hello Guys
>
> I am getting weird output from osd map. The object does not exis
Dimitar,
I would agree with you that getting the cluster into a healthy state first
is probably the better idea. Based on your pg query, it appears that
you're using only 1 replica. Any idea why that would be?
The output should look like this (with 3 replicas):
osdmap e133481 pg 11.1b8 (11.1b
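To check (and, if that was not intended, raise) the pool's replication factor,
something along these lines should work; the pool name is just a placeholder:

    ceph osd pool get <pool-name> size
    ceph osd pool set <pool-name> size 3
    ceph osd pool set <pool-name> min_size 2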
I had a weird thing happen when I was testing an upgrade in a dev
environment where I had removed an MDS from a machine a while back.
I upgraded to 0.94.6 and, lo and behold, the MDS daemon started up on the
machine again. I know the /var/lib/ceph/mds folder was removed because I
renamed it /var/l
On Saturday, February 20, 2016, Sorin Manolache wrote:
> Hello,
>
> I can set a watch on an object in librados. Does this object have to exist
> already at the moment I'm setting the watch on it? What happens if the
> object does not exist? Is my watcher valid? Will I get notified when
> someone
Thanks Greg,
Do you mean the ceph osd map command is not displaying accurate information?
I guess one of these things is happening with my cluster:
- ceph osd map is not printing true information
- Object-to-PG mapping is not correct (one object is mapped to multiple
PGs)
This is happening f
On Tuesday, February 23, 2016, Vickey Singh
wrote:
> Thanks Greg,
>
> Do you mean the ceph osd map command is not displaying accurate information?
>
> I guess one of these things is happening with my cluster:
> - ceph osd map is not printing true information
> - Object to PG mapping is not corre
Thank you Greg, much appreciated.
I'll test with the crush tool to see if it complains about this new layout.
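In case it helps anyone following along, the kind of check I have in mind
looks roughly like this (rule number and replica count are just examples):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt       # decompile for inspection
    crushtool -i crushmap.bin --test --show-statistics \
        --rule 0 --num-rep 3                        # simulate mappings for the rule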
George
On Mon, Feb 22, 2016 at 3:19 PM, Gregory Farnum wrote:
> On Mon, Feb 22, 2016 at 9:29 AM, George Mihaiescu
> wrote:
> > Hi,
> >
> > We have a fairly large Ceph cluster (3.2 PB)
Dear all:
I have a Ceph object storage cluster with 143 OSDs and 7 radosgw instances,
with XFS as the underlying file system.
I recently ran into a problem where sometimes an OSD is marked down when the
return value of the function "chain_setxattr()" is -117. I can only umount
the disk and repair it with "
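(For context, -117 is EUCLEAN, "structure needs cleaning", i.e. XFS thinks the
filesystem is corrupted. A repair cycle for one OSD typically looks something
like the sketch below; the OSD id and device are placeholders, and the exact
service commands depend on your init system.)

    service ceph stop osd.<id>            # or: systemctl stop ceph-osd@<id>
    umount /var/lib/ceph/osd/ceph-<id>
    xfs_repair /dev/<osd-device>
    mount /var/lib/ceph/osd/ceph-<id>     # remount per your fstab/udev setup
    service ceph start osd.<id>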
>> ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
>>
>> Ceph contains
>> MON: 3
>> OSD: 3
>>
> For completeness sake, the OSDs are on 3 different hosts, right?
It is a single machine. I'm only doing tests.
>> File system: ZFS
> That is the odd one out, very few people I'm aware of
Dimitar
Is it fixed?
- Is your cluster pool size 2?
- You can consider running ceph pg repair {pgid}, or ceph osd lost 4 (this is
a bit of a dangerous command); a rough sketch follows below.
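For instance, finding the inconsistent or stuck PG first and then repairing it
would look something like this (the pg id is only an example):

    ceph health detail            # lists inconsistent/stuck PGs and their ids
    ceph pg dump_stuck unclean    # another way to find PGs not active+clean
    ceph pg repair 3.45           # example pg id; triggers a repair on that PG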
Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center fo
Adding community for further help on this.
On Tue, Feb 23, 2016 at 10:57 PM, Vickey Singh
wrote:
>
>
> On Tue, Feb 23, 2016 at 9:53 PM, Gregory Farnum
> wrote:
>
>>
>>
>> On Tuesday, February 23, 2016, Vickey Singh
>> wrote:
>>
>>> Thanks Greg,
>>>
>>> Do you mean ceph osd map command is not d
The problem is now solved; the cluster is backfilling/recovering normally,
with no more NEAR FULL OSDs.
It turns out that I had RBD objects that should have been deleted a long
time ago but were still there. OpenStack Glance did not remove them; I think
it's an issue with snapshots, an RBD file can't be
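(In case it's useful to others hitting the same thing: for an image that won't
delete because of leftover snapshots, a cleanup along these lines usually
works; pool, image and snapshot names are placeholders, and unprotect/purge
obviously destroys the snapshots.)

    rbd snap ls <pool>/<image>                 # see which snapshots remain
    rbd snap unprotect <pool>/<image>@<snap>   # only needed for protected snaps
    rbd snap purge <pool>/<image>              # remove all snapshots of the image
    rbd rm <pool>/<image>                      # now the image itself can go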
- Original Message -
> From: "Sorin Manolache"
> To: ceph-users@lists.ceph.com
> Sent: Sunday, 21 February, 2016 8:20:13 AM
> Subject: [ceph-users] librados: how to get notified when a certain object is
> created
>
> Hello,
>
> I can set a watch on an object in librados. Does this obj
Hi,
Every time, 2 of 18 OSDs are crashing. I think it happens when PG
replication runs, because only 2 OSDs crash and they are the same ones
every time.
0> 2016-02-24 04:51:45.884445 7fd994825700 -1 osd/ReplicatedPG.cc: In
function 'int ReplicatedPG::fill_in_copy_get(ReplicatedPG::OpContext*,
c
You probably haven't written to any objects after fixing the problem.
Do some client I/O on the cluster and the PG will show fixed again. I
had this happen to me as well.
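(For example, a bit of throwaway client I/O like this is enough to make the PG
status refresh; the pool name is a placeholder.)

    rados -p <pool> bench 10 write    # generate some client writes
    ceph health detail                # the PG should now report clean again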
ceph pg dump
Since all objects map to a PG, as long as you can verify that no PG has more
than one replica on the same host/chassis/rack, you are good.
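(A sketch of doing that check by hand: list the up/acting OSD set of each PG,
then compare it against which host each OSD sits on.)

    ceph pg dump pgs_brief    # per-PG up and acting OSD sets
    ceph osd tree             # which host/chassis/rack each OSD belongs to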
Hi Ceph,
TL;DR: If you have one day a week to work on the next Ceph stable releases [1],
your help would be most welcome.
The Ceph "Long Term Stable" (LTS) releases - currently hammer[2] - are used by
individuals, non-profits, government agencies and companies for their
production Ceph clusters