Hi,
We're about to start looking more seriously into CephFS and were wondering
about the tradeoffs in our RHEL6.x-based environment between using the kernel
client (on 3.13) vs. using the FUSE client.
Current Ceph version we use is 0.67.5.
Thoughts?
Thanks!
Arne
--
Arne Wiebalck
CERN IT
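For reference, the two clients are mounted roughly like this; the monitor
address, mount point and secret file below are placeholders, not values from
this thread:

  # kernel client (needs a reasonably recent kernel, e.g. the 3.13 mentioned above)
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

  # FUSE client (runs in userspace and follows the installed ceph version)
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs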
Yes, so somebody from Inktank should comment. I've put that one into my
private repo.
Stefan
On 24.01.2014 04:08, Alexandre DERUMIER wrote:
>>> But there is an official libleveldb version from Ceph for Wheezy:
>>>
>>> http://gitbuilder.ceph.com/leveldb-deb-x86_64/
>>>
>>> http://gitbuilder.ceph
Dear all,
Good day to you.
I already have a running Ceph cluster consisting of 3 monitor servers and
several OSD servers. I would like to set up another cluster using a
different set of OSD servers, but using the same 3 monitor servers, is it
possible?
Can the 3 monitor servers become the MONs fo
Hi,
I've got 6 OSDs and I want 3 replicas per object, so following the
formula that's 200 PGs per OSD, which is 1,200 overall.
I've got two RBD pools and the .rgw.buckets pool, which have considerably
more objects than the others (given
that the RADOS gateway needs
Hi.
Please read: http://ceph.com/docs/master/rados/operations/placement-groups/
2014/1/24 Graeme Lambert
> Hi,
>
> I've got 6 OSDs and I want 3 replicas per object, so following the
> formula that's 200 PGs per OSD, which is 1,200 overall.
>
> I've got two RBD pools and the .rgw.buckets pool
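For reference, a quick sketch of how the formula from that placement-groups
page works out for this setup (assuming the ~100-PGs-per-OSD target it
recommends):

  # total PGs ~= (OSDs * 100) / replica count, rounded up to a power of two
  # (6 * 100) / 3 = 200  ->  round up to 256
  # i.e. roughly 256 PGs to spread across the cluster's pools,
  # not 200 PGs per OSD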
On Fri, Jan 24, 2014 at 5:03 PM, Arne Wiebalck wrote:
> Hi,
>
> We're about to start looking more seriously into CephFS and were wondering
> about the tradeoffs in our RHEL6.x-based environment between using the kernel
> client (on 3.13) vs. using the FUSE client.
> Current Ceph version we use is 0.67.5.
Usually you would like to start here:
http://ceph.com/docs/master/rbd/rbd-openstack/
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.eno
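A minimal sketch of the first steps from that guide (pool name, PG count and
the client.cinder user follow the doc's examples and may differ in your setup):

  # create a pool for Cinder volumes
  ceph osd pool create volumes 128

  # create a cephx user for Cinder to access that pool
  ceph auth get-or-create client.cinder \
      mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'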
Greg,
Do you have any estimate of how much the heartbeat messages use the network?
How busy is it?
At some point (if the cluster gets big enough), could this degrade the network
performance? Will it make sense to have a separate network for this?
So in addition to public and storage we will have an
Hi,
> At some point (if the cluster gets big enough), could this degrade the
> network performance? Will it make sense to have a separate network for this?
>
> So in addition to public and storage we will have a heartbeat network, so we
> could pin it to a specific network link.
I think the wh
I agree but somehow this generates more traffic too. We just need to find a
good balance.
But I don’t think this will change the scenario where the cluster network is
down and OSDs die because of this…
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone:
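For context, the public/cluster split discussed here is configured in
ceph.conf roughly like this (the subnets are placeholders):

  [global]
      public network  = 10.0.0.0/24    # client and monitor traffic
      cluster network = 10.0.1.0/24    # OSD replication (and OSD-to-OSD heartbeats)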
Hi all,
We already have a full-featured Ceph Storage plugin in our Proxmox VE solution
and now - BRAND NEW - it is possible to install and manage the Ceph Server
directly on Proxmox VE - integrated in our management stack (GUI and CLI via
Proxmox VE API).
Documentation
http://pve.proxmox.c
On 01/24/2014 09:31 AM, Christian Kauhaus wrote:
Hi,
we're using Ceph to serve VM images via RBD, and thus RBD performance is
important for us. I've prepared some write benchmarks using different object
sizes. In one case I use 'rados bench' directly and in the other 'rbd
bench-write'.
The resu
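For reference, the two benchmarks are invoked roughly like this (pool name,
image name and sizes are placeholders):

  # raw RADOS write benchmark against a pool: 4 MB objects, 16 concurrent ops
  rados -p rbd bench 60 write -b 4194304 -t 16

  # write benchmark through librbd against an existing image
  rbd bench-write test-image --io-size 4096 --io-threads 16 --io-total 1073741824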
Hi Yehuda,
I was able to finally get this working by disabling the default Apache
site.
I'm now able to create S3 buckets, create objects within those buckets,
etc.
The Ceph documentation was slightly in error (at least for my Ubuntu
setup):
sudo a2dissite default
I had to issue the followi
On Friday, January 24, 2014, Sebastien Han
wrote:
> Greg,
>
> Do you have any estimate of how much the heartbeat messages use the network?
> How busy is it?
Not very. It's one very small message per OSD peer per...second?
>
> At some point (if the cluster gets big enough), could this degrade the
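A back-of-the-envelope sketch of why this stays small (the per-message size
and peer count are assumptions, not measured values):

  # assume ~100 bytes per heartbeat, ~20 peers per OSD, one ping per peer per second
  # per OSD:      20 * ~100 B/s  ~= 2 KB/s
  # 1000 OSDs:  1000 * ~2 KB/s   ~= 2 MB/s cluster-wide, negligible next to client I/O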
Ok Greg, thanks for the clarification!
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 24 Jan 201
>> Any comments and feedback are welcome!
I think it could be great to add some OSD statistics (IO/s, ...); I think it's
possible through the Ceph API.
Also maybe an email alerting system if an OSD state changes (up/down).
- Original Mail -
From: "Martin Maurer"
To: ceph-users@lists.cep
> I think it could be great to add some OSD statistics (IO/s, ...); I think it's
> possible through the Ceph API.
You can see IO/s in the log.
I also added latency stats for OSDs recently.
> Also maybe an email alerting system if an OSD state changes (up/down).
Yes, and SMART,
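For reference, the kind of numbers mentioned here can also be pulled from the
CLI on recent releases, e.g.:

  ceph osd perf          # per-OSD commit/apply latency
  ceph osd pool stats    # per-pool client IO rates
  ceph -w                # follow the cluster log (IO/s shows up here)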
I've been trying to debug this problem that shows up when using
ceph-deploy, and have found some pretty weird behavior:
[ceph_deploy.gatherkeys][WARNIN] Unable to find
/etc/ceph/ceph.client.admin.keyring on ['issdm-2']
The below findings are on the monitor node after the failed gatherkeys.
T
Yes, you can have two different monitor daemons on the same server, as
long as they have different names and are using different ports (set
"mon addr" to ip:port for the relevant in ceph.conf for the new mons).
Not sure which deployment tools would do it that way, it might be a
bit of a manual pro
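A minimal ceph.conf sketch of the idea (the mon name, host and address are
made up for illustration; a second cluster would keep its own conf, e.g.
/etc/ceph/<clustername>.conf):

  # the existing mon keeps the default port 6789; the extra mon for the
  # second cluster gets its own name and port on the same host
  [mon.b1]
      host     = monhost1
      mon addr = 192.168.0.10:6790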
On Thu, Jan 23, 2014 at 4:27 PM, Greg Poirier wrote:
> Hello!
>
> I have a great deal of interest in the ability to version objects in buckets
> via the S3 API. Where is this on the roadmap for Ceph?
This still needs to be prioritized. There was a discussion about it in
the latest CDS, but there
On Fri, Jan 24, 2014 at 4:28 PM, Yehuda Sadeh wrote:
> For each object that rgw stores, it keeps a version tag. However, this
> version is not ascending; it's just used for identifying whether an
> object has changed. I'm not completely sure what problem you're trying
> to solve, though.
So we have a test cluster and two production clusters, all running on
RHEL6.5. Two are running Emperor and one is running Dumpling. On
all of them the OSDs do not seem to start at boot via the udev rules.
The OSDs were created with ceph-deploy and are all GPT. The OSDs are
visible with `ce
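For anyone debugging the same thing, the partition state that the udev rules
key off can be inspected roughly like this (device names are placeholders):

  ceph-disk list                # which partitions are recognised as ceph data/journal
  sgdisk --info=1 /dev/sdb      # the GPT partition type GUID must be the ceph OSD type
  ceph-disk activate /dev/sdb1  # what the udev rule ends up calling to start the OSD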
Hi,
can someone please mail the correct ownership and permissions of the
dirs and files in /etc/ceph and /var/lib/ceph?
Thank you,
Markus
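The releases discussed in this thread run the daemons as root, so a common
layout is the sketch below; treat it as an assumption to check against your
distribution's packages rather than an official reference:

  chown -R root:root /etc/ceph /var/lib/ceph
  chmod 644 /etc/ceph/ceph.conf                    # world-readable config
  chmod 600 /etc/ceph/ceph.client.admin.keyring    # keep keyrings private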
Hi Gregory,
On 24/01/14 12:20, Gregory Farnum wrote:
>>> I could see in netstat that the connections were still "up" after
>>> a few minutes.
> How long did you wait? By default the node will get timed out after
> ~30 seconds, be marked down, and then the remaining OSDs will take
> over all acti