Hello,
There are several older discussions regarding RGW performance with high
volume small files.
I'm planning on running some tests on our test cluster to benchmark this
performance, but before I do I wanted to ask several questions, to make
sure that my test is valid.
1) does firefly have any
Hello,
I am having an issue with Ceph in that it won't communicate over the
private network, so the moment I turn the firewalls on it starts
marking OSDs offline. I have specified in the ceph.conf file that it has a
separate cluster network, but it looks like it is not obeying my orders.
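For reference, a minimal ceph.conf sketch of how a separate cluster network is usually declared (the subnets below are placeholders, not from this thread); note that OSDs only pick this up after a restart, and the firewall on the cluster interface still has to allow the OSD port range:

```ini
[global]
# client <-> daemon traffic
public network  = 192.168.1.0/24
# OSD replication/heartbeat traffic (placeholder subnet)
cluster network = 10.0.0.0/24
```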
Hi!
I have installed Ceph and created two OSDs and was very happy with that,
but apparently not everything was correct.
Today after a system reboot the cluster comes up and for a few moments
it seems that it's ok (using the "ceph health" command) but after a few
seconds the "ceph health" com
My setup consists of two nodes.
The first node (master) is running:
-mds
-mon
-osd.0
and the second node (CLIENT) is running:
-osd.1
I've restarted the ceph services on both nodes.
Leaving "ceph -w" running for as long as it can, after a few seconds
the error that is produced i
On 03/05/2014 11:21 AM, Georgios Dimitrakakis wrote:
> My setup consists of two nodes.
> The first node (master) is running:
> -mds
> -mon
> -osd.0
> and the second node (CLIENT) is running:
> -osd.1
> I've restarted the ceph services on both nodes.
> Leaving "ceph -w" running for as long as it
Actually there are two monitors (my bad in the previous e-mail).
One at the MASTER and one at the CLIENT.
The monitor in CLIENT is failing with the following
2014-03-05 13:08:38.821135 7f76ba82b700 1
mon.client1@0(leader).paxos(paxos active c 25603..26314) is_readable
now=2014-03-05 13:08:38.
Hi Alexandre,
Can you tell me the version of libvirt, qemu and OS you use? thanks.
Thanks & Regards
Li JiaMin
System Cloud Platform
3#4F108
-Original Message-
From: ljm李嘉敏
Sent: March 5, 2014 15:10
To: 'Alexandre DERUMIER'
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Enabling discard/trim
Thank you
Hello,
I don't use libvirt (I'm using the Proxmox solution + qemu 1.7).
Discard is supported since qemu 1.5, but I'm not sure about the rbd block
driver (maybe 1.6).
I'm using virtio-scsi in the guest.
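For anyone following along, a hedged sketch of what a qemu command line with discard enabled through virtio-scsi might look like (the image name, ids and cache mode are illustrative, not from this thread; requires qemu >= 1.5):

```shell
# attach an rbd image via virtio-scsi with discard passed through
qemu-system-x86_64 \
  -drive file=rbd:rbd/vm-disk-1,if=none,id=drive0,discard=unmap,cache=writeback \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=drive0,bus=scsi0.0
```

Inside the guest, trim is then issued with `fstrim <mountpoint>` or the `discard` mount option.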
- Original Message -
From: "ljm李嘉敏"
To: "Alexandre DERUMIER"
Cc: ceph-us...@ceph.com
Sent
I really appreciate your help, thank you!
Best Regards,
Jia-min
On 2014-3-5, at 20:35, "Alexandre DERUMIER" wrote:
> Hello,
> I don't use libvirt (I'm using the Proxmox solution + qemu 1.7).
>
> Discard is supported since qemu 1.5, but I'm not sure about the rbd block
> driver (maybe 1.6).
> I'm us
Can someone help me with this error:
2014-03-05 14:54:27.253711 7f654fd3d700 0
mon.client1@0(leader).data_health(96) update_stats avail 3% total
51606140 used 47174264 avail 1810436
2014-03-05 14:54:27.253916 7f654fd3d700 -1
mon.client1@0(leader).data_health(96) reached critical
On Wed, Mar 5, 2014 at 2:41 AM, kenneth wrote:
> Hi all,
> I'm trying to create a ceph cluster with 3 nodes; is it a requirement to use
> ceph-deploy for deployment?
It is not a requirement, but it is definitely easier because it uses a
lot of defaults and well-known conventions
while at the same time
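For the archives, the usual ceph-deploy sequence for a small cluster looks roughly like this (hostnames and the disk name are placeholders; run from an admin node with passwordless SSH to all three nodes):

```shell
ceph-deploy new node1 node2 node3        # generate ceph.conf and initial mon map
ceph-deploy install node1 node2 node3    # install the ceph packages
ceph-deploy mon create-initial           # bootstrap the monitors and gather keys
ceph-deploy osd create node1:sdb         # repeat per OSD disk on each node
```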
In an attempt to add a mon server, I appear to have completely broken a
mon service to the cluster:-
# ceph quorum_status --format json-pretty
2014-03-05 14:36:43.704939 7fb065058700 0 monclient(hunting):
authenticate timed out after 300
2014-03-05 14:36:43.705029 7fb065058700 0 librados: client
Hello all,
Recently I am working on Ceph performance analysis on our cluster, our OSD
hardware looks like:
11 SATA disks, 4TB for each, 7200RPM
48GB RAM
When we broke down the latency, we found that half of it (average latency
is around 60 milliseconds via radosgw) comes from file loo
On 03/05/2014 02:30 PM, Jonathan Gowar wrote:
> In an attempt to add a mon server, I appear to have completely broken a
> mon service to the cluster:-
Did you start the mon you added? How did you add the new monitor?
Are your other monitors running? How many do you have?
For each of your monitors,
On Wed, 2014-03-05 at 16:35 +, Joao Eduardo Luis wrote:
> On 03/05/2014 02:30 PM, Jonathan Gowar wrote:
> > In an attempt to add a mon server, I appear to have completely broken a
> > mon service to the cluster:-
>
> Did you start the mon you added? How did you add the new monitor?
From the
We're going to get started on Ceph Developer Day 2, which is our
APAC-friendly day, here in a couple minutes. If you aren't already
tuned in, make sure you click on the appropriate session link:
https://wiki.ceph.com/Planning/CDS/CDS_Giant_(Mar_2014)
For those who aren't able to make it, all vid
On Wed, Mar 5, 2014 at 12:26 AM, Christopher O'Connell
wrote:
> Hello,
>
> There are several older discussions regarding RGW performance with high
> volume small files.
>
> I'm planning on running some tests on our test cluster to benchmark this
> performance, but before I do I wanted to ask sever
Hi,
We experience something similar with our Openstack Swift setup.
You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are
being kept in cache.
(Do not set this to 0, because you will trigger the OOM killer at some point. ;)
We also decided to go for nodes with more memory
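For reference, a sketch of the sysctl change (50 is just an illustrative value; the kernel default is 100, and lower values make the kernel hold on to dentry/inode caches longer):

```shell
# check the current value (kernel default is 100)
sysctl vm.vfs_cache_pressure
# keep inodes/dentries cached more aggressively (as root);
# never use 0 -- that can eventually trigger the OOM killer
sysctl -w vm.vfs_cache_pressure=50
# persist across reboots
echo 'vm.vfs_cache_pressure = 50' >> /etc/sysctl.conf
```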