*Hi all*
I have two Ceph monitors working fine; I added them a while ago. Now I
have added a new Ceph monitor, and it is showing me the following in its
log file:
2015-05-18 10:54:42.585123 7f4a9609d700 0 mon.monitor03@2(synchronizing).data_health(0) update_stats avail 44% total 51175 M
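In case it helps while diagnosing, two hedged checks (mon ID taken from
the log line above; the second one has to run on the monitor03 host
itself, via its admin socket):
# From any node with the admin keyring: which mons are currently in quorum?
ceph quorum_status
# On the monitor03 host: the daemon's own view of its state
ceph daemon mon.monitor03 mon_status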
Not that I know of, but if you wanted to repurpose this code it would
probably be pretty easy:
https://github.com/ceph/Diamond/blob/calamari/src/collectors/ceph/ceph.py
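For reference, a collector like that ultimately just polls the cluster's
JSON status and forwards selected fields; a minimal sketch of the
underlying call it would be built around:
# Cluster-wide status as JSON, the same data a metrics plugin would parse
ceph -s --format json | python -m json.tool | head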
Cheers,
John
On 17/05/2015 23:19, German Anders wrote:
Hi all,
I want to know if someone has deployed some New Relic (Python) p
Hi Ali,
Which version of Ceph are you using? Are there any re-spawning OSDs?
Regards
K.Mohamed Pakkeer
On Mon, May 18, 2015 at 2:23 PM, Ali Hussein <
ali.alkhazr...@earthlinktele.com> wrote:
> *Hi all*
>
> I have two Ceph monitors working fine; I added them a while ago. Now
> I have
The two old monitors run Ceph version 0.87.1, while the newly added
monitor runs 0.87.2.
P.S.: NTP is installed and working fine.
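One hedged way to confirm which version each daemon is actually running
(the tell variant assumes your release accepts it; the second command is
universal):
# Ask a specific monitor for its running version
ceph tell mon.monitor03 version
# Or, on each monitor host, check the installed binary
ceph --version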
On 18/05/2015 12:11 PM, Mohamed Pakkeer wrote:
Hi Ali,
Which version of Ceph are you using? Are there any re-spawning OSDs?
Regards
K.Mohamed Pakkeer
On Mon, May 18,
On 05/18/2015 10:33 AM, Ali Hussein wrote:
> The two old monitors run Ceph version 0.87.1, while the newly added
> monitor runs 0.87.2.
> P.S.: NTP is installed and working fine.
This is not related to clocks (or, at least, it should not be).
State 'synchronizing' means the monitor is getting its st
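A hedged way to watch that state from the new monitor's host (mon ID
assumed from the log earlier in this thread):
# Reports "state": "synchronizing" until the store catches up,
# then flips to "peon" or "leader"
ceph daemon mon.monitor03 mon_status | grep state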
Thanks a lot John, will definitely take a look at that.
Best regards,
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.com
2015-05-18 6:04 GMT-03:00 John Spray :
> Not that I know of, but if y
Just to update on this, I've been watching iostat across my Ceph nodes and I
can see something slightly puzzling happening, which is most likely the cause
of the slow (>32s) requests I am getting.
During a client write-only IO stream, I see reads and writes to the cache
tier, which is normal as block
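For anyone reproducing this, a hedged sketch of the observation method
(device names below are hypothetical; substitute your cache-tier data
disks):
# Extended per-device stats every 5 seconds on the cache-tier OSD disks
iostat -x 5 /dev/sdb /dev/sdc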
We just enabled a small cache pool on one of our clusters (v 0.94.1) and
have run into some issues:
1) Cache population appears to happen via the public network (not the
cluster network). We're seeing basically no traffic on the cluster
network, and multiple gigabits inbound to our cache OSDs
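If I remember the split correctly, anything acting as a client of the base
pool, which includes cache-tier OSDs promoting objects, talks over the
public network; the cluster network carries only OSD-to-OSD replication
and recovery. That would be consistent with what you are seeing. For
reference, the relevant ceph.conf knobs (subnets below are hypothetical):
[global]
# client, mon, and promotion traffic
public network = 192.168.0.0/24
# OSD replication and recovery only
cluster network = 192.168.1.0/24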
Hello all,
I've encountered a problem when upgrading my single-node home cluster from
Giant to Hammer, and I would greatly appreciate any insight.
I upgraded the packages as normal, then proceeded to restart the mon and,
once that came back, restarted the first OSD (osd.3). However it
subsequentl
You have most likely hit http://tracker.ceph.com/issues/11429. There are some
workarounds in the bugs marked as duplicates of that bug, or you can wait for
the next hammer point release.
-Sam
- Original Message -
From: "Berant Lemmenes"
To: ceph-users@lists.ceph.com
Sent: Monday, May 1
Sam,
Thanks for taking a look. It does seem to fit my issue. Would just removing
the 5.0_head directory be appropriate or would using ceph-objectstore-tool
be better?
Thanks,
Berant
On Mon, May 18, 2015 at 1:47 PM, Samuel Just wrote:
> You have most likely hit http://tracker.ceph.com/issues/11
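For reference, a hedged sketch of the ceph-objectstore-tool route (paths
and the pg id 5.0 are taken from this thread; stop the OSD first, and
treat this as destructive, since it deletes that OSD's copy of the PG):
# With osd.3 stopped:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
    --journal-path /var/lib/ceph/osd/ceph-3/journal \
    --pgid 5.0 --op remove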
[..]
> Seeing this in the Firefly cluster as well. Tried a couple of rados
> commands on the .rgw.root pool; this is what is happening:
>
> abhi@st:~$ sudo rados -p .rgw.root put test.txt test.txt
> error putting .rgw.root/test.txt: (6) No such device or address
>
> abhi@st:~$ sudo ceph osd map .rg
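ENXIO from rados put usually means the object's PG maps to no usable OSD;
two hedged checks (pool name from the quoted output; the option is spelled
crush_ruleset in Firefly-era releases):
# Which CRUSH ruleset does the pool use, and are there OSDs up under it?
ceph osd pool get .rgw.root crush_ruleset
ceph osd tree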
Hi List,
We would like to prevent users or subusers (S3 or Swift) from creating
buckets directly.
We would like to do it through an administration interface (with a
dedicated user and special rights) in order to normalize bucket names.
Is it possible to do this (with caps or a parameter)?
Thanks
Sent from my iPhone
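One hedged pattern that may fit (the uid and bucket name below are
hypothetical): have the admin create the normalized bucket with its own
credentials, then hand ownership to the end user:
# Reassign an admin-created bucket to the target user
radosgw-admin bucket link --bucket=normalized-bucket-name --uid=enduser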
Running a benchmark:
rados bench -p cephfs_data 300 write --no-cleanup
While watching ceph client io:
ceph -w
I get different numbers.
The output from rados bench is as follows:
Total time run: 300.108725
Total writes made: 66306
Write size: 4194304
Bandwidth (MB/sec):
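For what it's worth, the average can be reconstructed from the totals
above: 66306 writes x 4194304 bytes = 265224 MB over 300.108725 s, i.e.
roughly 884 MB/s. rados bench prints that whole-run average from the
client's point of view, whereas ceph -w samples instantaneous cluster-wide
client IO, so the two are not expected to match moment to moment.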
Hello all,
I am attempting to install a Ceph cluster which has been built from source. I
first cloned the Ceph master repository and then followed the steps given in
the Ceph documentation about installing a Ceph build. So I now have the binaries
available in /usr/local/bin.
The next step is for m
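If the goal is just a test cluster out of a source tree, a hedged shortcut
is the developer script shipped in the repository (flags shown are the
common defaults; adjust to taste):
# From the ceph source tree: new cluster (-n), debug output (-d), cephx (-x)
cd src && ./vstart.sh -n -d -x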
Hi List,
I would like to know the best way to run several radosgw servers on the
same cluster with the same ceph.conf file.
For now, I have 2 radosgw servers, but I have one conf file on each, with
the section below on parrot:
[client.radosgw.gateway]
host = parrot
keyring = /etc/ceph/ceph.client.rad
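The usual approach is a single shared ceph.conf containing one section per
gateway instance, each with its own name and host line, so every server
starts only its own instance; a hedged sketch (the second hostname and the
keyring paths below are hypothetical):
[client.radosgw.parrot]
host = parrot
keyring = /etc/ceph/ceph.client.radosgw.parrot.keyring

[client.radosgw.magpie]
host = magpie
keyring = /etc/ceph/ceph.client.radosgw.magpie.keyring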
Hi List,
I would like to know the contents of each radosgw pool in order to
understand usage.
So I have checked the contents with rados ls -p poolname
### .intent-log
=> This pool is empty on my side. What is it needed for?
### .log
=> This pool is empty on my side. What is it needed for?
### .rgw
=> b
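To see which of these pools actually hold data, and how much, two hedged
checks alongside the per-pool listing already shown:
# Per-pool object counts and sizes
rados df
# Peek at object names in a given pool
rados -p .rgw ls | head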