Hello all,
I am testing a cluster with mixed OSD types on the same data node (yes, it's the idea
from:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/),
and have run into a strange status:
ceph -s and ceph pg dump show incorrect PG information after setting pg_num on a pool.
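For context, a minimal sketch of the commands involved here (pool name and PG counts are placeholders, not taken from the message):

# raise pg_num, then pgp_num to match, and check what the cluster reports
ceph osd pool set <pool-name> pg_num 256
ceph osd pool set <pool-name> pgp_num 256
ceph pg dump pools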
hi,
does anyone know who is maintaining rados-java and performing releases to
Maven Central? In May there was a release to Maven Central *[1], but the
released version is not based on the latest code base from:
https://github.com/ceph/rados-java
I wonder if the one who does the Maven release could t
On Mon, Jul 13, 2015 at 11:00 PM, Simion Rad wrote:
> Hi,
>
> I'm running a small CephFS cluster (21 TB, 16 OSDs with sizes ranging between
> 400 GB and 3.5 TB) that is used as a file warehouse (for both small and
> big files).
> Every day there are times when a lot of processes running on the c
Thank you for your reply.
Comments inline.
I’m still hoping to get some more input, but many people are running Ceph
on ext4, and it sounds like it works pretty well out of the box. Maybe I’m
overthinking this, then?
Jan
> On 13 Jul 2015, at 21:04, Somnath Roy wrote:
>
> <
> -Orig
Hi,
On 14-07-15 11:05, Mingfai wrote:
> hi,
>
> does anyone know who is maintaining rados-java and performing releases to
> Maven Central? In May there was a release to Maven Central *[1],
> but the released version is not based on the latest code base from:
> https://github.com/ceph/rados-java
>
On Tue, Jul 14, 2015 at 10:53 AM, Jan Schermer wrote:
> Thank you for your reply.
> Comments inline.
>
> I’m still hoping to get some more input, but many people are running
> Ceph on ext4, and it sounds like it works pretty well out of the box. Maybe
> I’m overthinking this, then?
I thin
Hi,
The output of ceph -s:
cluster 50961297-815c-4598-8efe-5e08203f9fea
health HEALTH_OK
monmap e5: 5 mons at
{pshn05=10.71.13.5:6789/0,pshn06=10.71.13.6:6789/0,pshn13=10.71.13.13:6789/0,psosctl111=10.71.13.111:6789/0,psosctl112=10.71.13.112:6789/0},
election epoch 258, quorum 0,1
On Tue, Jul 14, 2015 at 11:30 AM, Simion Rad wrote:
> Hi,
>
> The output of ceph -s:
>
> cluster 50961297-815c-4598-8efe-5e08203f9fea
> health HEALTH_OK
> monmap e5: 5 mons at
> {pshn05=10.71.13.5:6789/0,pshn06=10.71.13.6:6789/0,pshn13=10.71.13.13:6789/0,psosctl111=10.71.13.111:6789/0
I don't think there were any stale or unclean PGs (when there are,
I have seen "ceph health detail" list them, and it did not in this case).
I have since restarted the 2 OSDs and the health went immediately to HEALTH_OK.
-- Tom
> -Original Message-
> From: Will.Boege [mailto:will.bo...@target
Instead of guessing, I took a look at one of my OSDs.
TL;DR: I’m going to bump the inode size to 512, which should fit the majority of
xattrs, so there is no need to touch the filestore parameters.
The short news first: I can’t find a file with more than 2 xattrs (and that’s
good).
Then I extracted all the xattrs on a
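For reference, a minimal sketch of formatting an OSD data disk with a larger inode size, as described above (device path and OSD number are placeholders, not from the message):

# 512-byte inodes keep most xattrs inline in the inode (hypothetical device)
mkfs.xfs -f -i size=512 /dev/sdX
mount -o noatime,inode64 /dev/sdX /var/lib/ceph/osd/ceph-NN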
Hi All,
I am trying to debug the ceph_erasure_code_benchmark app available in the ceph
repo, using the cauchy_good technique. I am running it under gdb with the following command:
src/ceph_erasure_code_benchmark --plugin jerasure_neon --workload encode
--iterations 10 --size 1048576 --parameter k=6 --parameter m=2 --par
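For context, a sketch of how a run like this is typically wrapped in gdb (the trailing parameters are cut off above, so the ones shown here are only illustrative):

# pass the benchmark arguments through gdb; technique parameter assumed
gdb --args src/ceph_erasure_code_benchmark --plugin jerasure_neon \
    --workload encode --iterations 10 --size 1048576 \
    --parameter k=6 --parameter m=2 --parameter technique=cauchy_good
(gdb) run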
Hi,
I managed to destroy my development cluster yesterday after upgrading it to
Scientific Linux and kernel 2.6.32-504.23.4.el6.x86_64.
Upon rebooting, the development node hung whilst attempting to start the
monitor. It was still in the same state after being left overnight to
see if it would tim
Hi,
This reminds me of when a buggy leveldb package slipped into the ceph
repos (http://tracker.ceph.com/issues/7792).
Which version of leveldb do you have installed?
Cheers, Dan
On Tue, Jul 14, 2015 at 3:39 PM, Barry O'Rourke wrote:
> Hi,
>
> I managed to destroy my development cluster yesterday
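For anyone checking, a quick way to query the installed version (package name assumed to be leveldb):

rpm -q leveldb          # RPM-based distributions such as Scientific Linux
dpkg -l | grep leveldb  # Debian/Ubuntu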
Hi,
I've observed the same thing but never spent the time to figure it out. It would
be nice to know. I don't think it's a bug, just something slightly confusing.
Cheers
On 14/07/2015 14:52, Nitin Saxena wrote:
> Hi All,
>
> I am trying to debug the ceph_erasure_code_benchmark app available in the ceph
I'll consider looking into the slow OSDs in more detail.
Thank you,
Simion Rad.
From: Gregory Farnum [g...@gregs42.com]
Sent: Tuesday, July 14, 2015 13:42
To: Simion Rad
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph daemons stucked in FUTEX_WAIT
In my experience I have seen something like this happen twice. The first
time, there were unclean PGs because Ceph was down to one replica of a PG.
When that happens, Ceph blocks IO to the remaining replicas when the number
falls below the 'min_size' parameter. That will manifest as blocked ops.
Second
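For reference, a small sketch of how min_size can be inspected and adjusted (pool name and value are placeholders, not from the message):

# show the current min_size of a pool
ceph osd pool get <pool-name> min_size
# lower it so IO continues with a single surviving replica (use with care)
ceph osd pool set <pool-name> min_size 1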
I'm trying to understand the real-world reliability of Ceph, to provide
some data to our upper management; it may also be valuable to others
investigating Ceph.
Things I'm trying to understand:
1. How many clusters are in production?
2. How long has the c
Hi,
Does anyone know if it is possible to use Ceph storage in Red Hat Enterprise
Virtualization (RHEV),
and connect it as a data domain in the Red Hat Enterprise Virtualization Manager
(RHEVM)?
My RHEV version and hypervisors are the latest RHEV 6.5 version.
Thanks,
Peter Calum
TDC
RHEV does not formally support Ceph yet. Future versions are looking to
include Cinder support which will allow you to hook in Ceph.
You should contact your RHEV contacts who can give an indication of the
timeline for this.
Neil
On Tue, Jul 14, 2015 at 10:43 AM, Peter Michael Calum wrote:
> Hi
When starting the rbdmap.service to provide map/unmap of rbd devices across
boot/shutdown cycles, the /etc/init.d/rbdmap script includes /lib/lsb/init-functions.
This is not a problem, except that the rbdmap script makes calls to the
log_daemon_*, log_progress_* and log_action_* functions that are includ
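As a sketch of one possible workaround, assuming the problem is simply that these LSB helpers are undefined on the target distribution (function names below are the usual Debian ones, used illustratively):

# define no-op fallbacks before the script calls them
type log_daemon_msg >/dev/null 2>&1 || log_daemon_msg() { echo "$*"; }
type log_action_msg >/dev/null 2>&1 || log_action_msg() { echo "$*"; }
type log_end_msg >/dev/null 2>&1 || log_end_msg() { return "${1:-0}"; }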
Hi,
Currently tracker.ceph.com doesn't have SSL enabled.
Every time I log in I'm sending my password over plain text which I'd
rather not.
Can we get SSL enabled on tracker.ceph.com?
And while we are at it, can we enable IPv6 as well? :)
--
Wido den Hollander
42on B.V.
Ceph trainer and consult
Hi,
We have an OpenStack + Ceph cluster based on the Giant release. We use Ceph for the
VMs' volumes, including the boot volumes. Under load, we see write access to
the volumes get stuck from within the VM. The same writes work after a VM reboot.
The issue is seen with and without the rbd cache. Let me kno
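For anyone digging into this kind of stall, a small sketch of the usual first checks (the OSD id is a placeholder):

# look for slow/blocked request warnings cluster-wide
ceph health detail
# inspect in-flight operations on a suspect OSD via its admin socket
ceph daemon osd.<id> dump_ops_in_flight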
Hi All,
I've just upgraded a Ceph cluster from Firefly 0.80.8 (Red Hat Ceph 1.2.3) to
Hammer (Red Hat Ceph 1.3). Usage: radosgw with Apache 2.4.19 in MPM prefork
mode.
I'm experiencing a huge write performance degradation just after the upgrade
(Cosbench).
Do you already run performance tests between H
On 07/15/2015 01:17 AM, Jeya Ganesh Babu Jegatheesan wrote:
> Hi,
>
> We have an OpenStack + Ceph cluster based on the Giant release. We use Ceph for
> the VMs' volumes, including the boot volumes. Under load, we see the write
> access to the volumes get stuck from within the VM. The same would work after
On 07/14/2015 06:42 PM, Florent MONTHEL wrote:
Hi All,
I've just upgraded a Ceph cluster from Firefly 0.80.8 (Red Hat Ceph 1.2.3) to
Hammer (Red Hat Ceph 1.3). Usage: radosgw with Apache 2.4.19 in MPM prefork
mode.
I'm experiencing a huge write performance degradation just after the upgrade
(Cosbench).
Yes of course, thanks Mark.
Infrastructure: 5 servers with 10 SATA disks each (50 OSDs in total), 10 GbE network,
EC 2+1 on the rgw.buckets pool, 2 radosgw instances (RR-DNS style) installed on 2 of the cluster
servers.
No SSD drives used.
We're using Cosbench to send:
- 8k object size: 100% read with 256 workers: better r
Hi list,
Do you recommend enabling or disabling hyper-threading on the CPUs?
Is the answer the same for the mon, OSD and radosgw daemons?
Thanks
Sent from my iPhone
On 7/14/15, 4:56 PM, "ceph-users on behalf of Wido den Hollander"
wrote:
>On 07/15/2015 01:17 AM, Jeya Ganesh Babu Jegatheesan wrote:
>> Hi,
>>
>> We have an OpenStack + Ceph cluster based on the Giant release. We use Ceph
>> for the VMs' volumes, including the boot volumes. Under load, we see the
>> w
I was getting better performance with HT enabled (Intel CPU) for ceph-osd. I
guess for the mon it doesn't matter, but for RadosGW I didn't measure the
difference... We are running our benchmarks with HT enabled for all components,
though.
Thanks & Regards
Somnath
-Original Message-
From: cep
Thanks for the feedback, Somnath.
Sent from my iPhone
> On 14 Jul 2015, at 20:24, Somnath Roy wrote:
>
> I was getting better performance with HT enabled (Intel CPU) for ceph-osd. I
> guess for the mon it doesn't matter, but for RadosGW I didn't measure the
> difference... We are running our benchma
Hi Florent,
10x degradation is definitely unusual! A couple of things to look at:
Are 8K rados bench writes to the rgw.buckets pool slow? You can check with
something like:
rados -p rgw.buckets bench 30 write -t 256 -b 8192
You may also want to try targeting a specific RGW server to make sure
t
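As a sketch of a fuller sequence along the same lines (the --no-cleanup write, the rand read pass and the cleanup step are additions for illustration, not from the message):

rados -p rgw.buckets bench 30 write -t 256 -b 8192 --no-cleanup
rados -p rgw.buckets bench 30 rand -t 256
rados -p rgw.buckets cleanup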
On 07/13/2015 02:11 PM, Wido den Hollander wrote:
> On 07/13/2015 09:43 PM, Corin Langosch wrote:
>> Hi Wido,
>>
>> I'm the dev of https://github.com/netskin/ceph-ruby and still use it in
>> production on some systems. It has everything I
>> need so I didn't develop any further. If you find any bu
Hi John,
I cut the test down to a single client running only Ganesha NFS,
without any ceph drivers loaded on the CephFS client. After deleting
all the files in the Ceph file system and rebooting all the nodes, I
restarted the 5-million-file create test, using 2 NFS clients against the
one Ceph file system
On 07/14/2015 04:14 PM, Wido den Hollander wrote:
> Hi,
>
> Currently tracker.ceph.com doesn't have SSL enabled.
>
> Every time I log in I'm sending my password over plain text which I'd
> rather not.
>
> Can we get SSL enabled on tracker.ceph.com?
>
> And while we are at it, can we enable IPv6
I changed "mds_cache_size" from 100000 to 500000 to get rid of the
WARN temporarily.
Dumping the mds daemon now shows:
"inode_max": 500000,
"inodes": 124213,
But I have no idea whether I should change "mds_cache_size" again if "inodes"
rises above 500000.
Thanks.
2015-07-15 13
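For reference, a small sketch of how this setting is typically raised and checked at runtime (daemon name and value are placeholders):

# raise the cache limit on a running MDS
ceph tell mds.<name> injectargs '--mds_cache_size 500000'
# check the inode counters via the admin socket
ceph daemon mds.<name> perf dump | grep -E '"inode_max"|"inodes"'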
Hi,
I have an issue where I cannot delete files or folders from buckets; there are no issues
when copying data in. Whenever I try to delete something I get
an internal error 500. Here is a sample from the radosgw log:
2015-07-12 17:51:33.216750 7f5daaf65700 15 calculated
digest=4/aScqOXY8O45BFQds0
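If it helps with debugging, a common way to get more detail out of radosgw is to raise its log verbosity; a sketch (section name illustrative):

# in ceph.conf on the gateway host, then restart radosgw
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1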