Georgios, oh, sorry for my poor English _-_, maybe I expressed what I want
poorly =]
I know how to write a simple CRUSH rule and how to use it; I want several
things:
1. To understand why, after injecting a bad map, my test node went offline.
This is unexpected.
2. Maybe somebody can explain what and wh
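For what it's worth, a bad CRUSH map can usually be caught before injection by testing it offline with crushtool. A minimal sketch of the usual round-trip (the rule number and replica count are placeholders for your setup):

```shell
# Extract the current CRUSH map and decompile it to editable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt, then recompile ...
crushtool -c crushmap.txt -o crushmap.new

# Dry-run the new map before injecting it: show where PGs would land
# for rule 0 with 2 replicas (adjust --rule / --num-rep to your setup)
crushtool -i crushmap.new --test --rule 0 --num-rep 2 --show-mappings

# Only inject once the test output looks sane
ceph osd setcrushmap -i crushmap.new
```

`--show-bad-mappings` is also useful here: it prints only the inputs the rule fails to map.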
Hello Jason,
> but to me it sounds like you are saying that there are no/minimal deltas
between snapshots move2db24-20150428 and 2015-05-05 (both from the
export-diff and from your clone).
Yep, that's correct. The difference between snapshots move2db24-20150428 & 2015-05-05
is too small: 4 KB instead of 20
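As a side note, the delta between the two snapshots can be measured directly with rbd export-diff; a sketch, assuming the image lives in the default rbd pool and is named myimage (both assumptions — only the snapshot names come from the thread):

```shell
# Export only the changes between the two snapshots; the size of the
# resulting file is a good proxy for the real delta
rbd export-diff --from-snap move2db24-20150428 rbd/myimage@2015-05-05 delta.diff
ls -lh delta.diff
```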
Timofey,
maybe your best chance is to connect directly to the server and see
what is going on.
Then you can try to debug why the problem occurred. If you don't want to
wait until tomorrow,
you may try to see what is going on using the server's direct remote
console access.
The majority of the ser
Any updates on when this is going to be released?
Daniel
On Wed, May 6, 2015 at 3:51 AM, Yehuda Sadeh-Weinraub
wrote:
> Yes, so it seems. The librados::nobjects_begin() call expects at least a
> Hammer (0.94) backend. Probably need to add a try/catch there to catch this
> issue, and maybe see i
I built two Ceph clusters.
For the first cluster, I did the following steps:
1. Create pools
sudo ceph osd pool create .us-east.rgw.root 64 64
sudo ceph osd pool create .us-east.rgw.control 64 64
sudo ceph osd pool create .us-east.rgw.gc 64 64
sudo ceph osd pool create .us-east.rgw.buckets 64 64
sudo
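As an aside on those pool-creation commands: the 64/64 values are the PG and PGP counts, and the usual rule of thumb for sizing them is roughly (number of OSDs x 100 / replica count), rounded up to the next power of two. A small sketch of that arithmetic in shell (the 8-OSD / 3-replica numbers are made up for illustration):

```shell
# Rule of thumb for PG count: (num_osds * 100 / replicas),
# rounded up to the next power of two
num_osds=8
replicas=3
target=$(( num_osds * 100 / replicas ))   # 266
pgs=1
while [ "$pgs" -lt "$target" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"   # prints 512
```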
Hello ceph developers and users,
some time ago, I posted here a question regarding very different
performance for two volumes in one pool (backed by SSD drives).
After some examination, I probably got to the root of the problem.
When I create a fresh volume (i.e. rbd create --image-format 2 --size
Two things..
1. You should always precondition SSD drives before benchmarking them.
2. After creating and mapping an rbd LUN, you need to write data first and read it
afterward; otherwise fio output will be misleading. In fact, I think you will
see that IO is not even hitting the cluster (check with
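For the prefill step, a hedged fio sketch (the /dev/rbd0 path, sizes, and queue depths are illustrative, not from the thread): write the whole test region first, then run the read job:

```shell
# Prefill the mapped rbd device with sequential writes so later reads
# hit real data instead of unallocated space (device path is an example)
fio --name=prefill --filename=/dev/rbd0 --rw=write --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=16 --size=10G

# Now a random-read test will actually hit the cluster
fio --name=randread --filename=/dev/rbd0 --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --size=10G \
    --runtime=60 --time_based
```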
On Mon, May 11, 2015 at 05:20:25AM +, Somnath Roy wrote:
> Two things..
>
> 1. You should always precondition SSD drives before benchmarking them.
well, I don't really understand... ?
>
> 2. After creating and mapping an rbd LUN, you need to write data first and read
> it afterward; oth
Hello again,
Just an update on this; I restarted all the acting OSD daemons, and the unfound
message is now gone. There must have been some sort of bookkeeping error
which got fixed on daemon restart.
-Original Message-
From: Eino Tuominen
Sent: 4 May 2015 13:27
To: Eino Tuo
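For anyone else chasing unfound objects, these are the usual inspection commands before resorting to daemon restarts (the PG id 2.5 is a placeholder):

```shell
# Show which PGs are reporting unfound objects
ceph health detail

# List the unfound objects in a specific PG (example PG id)
ceph pg 2.5 list_unfound

# Query the PG's full state, including which OSDs have been probed
ceph pg 2.5 query
```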
Hi All.
We have a weird issue where civetweb just locks up: it fails to
respond to HTTP, and a restart resolves the problem. This happens anywhere
from every 60 seconds to every 4 hours with no apparent cause.
We have run the gateway in full debug mode and there is nothing there that
seems
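One way to get more signal when civetweb wedges is to raise debug levels on the live daemon via the admin socket instead of restarting it; a sketch, assuming the gateway daemon is named client.rgw.gateway1 (the name and debug levels here are illustrative):

```shell
# Bump radosgw debug logging on the running daemon
ceph daemon client.rgw.gateway1 config set debug_rgw 20
ceph daemon client.rgw.gateway1 config set debug_civetweb 10

# When it locks up, see whether requests are stuck waiting on RADOS
ceph daemon client.rgw.gateway1 objecter_requests
```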
Yes, you need to run fio clients on a separate box; it will take quite a bit of
CPU.
If you stop OSDs on other nodes, rebalancing will start. Have you waited for the
cluster to reach the active+clean state? If you run the benchmark while
rebalancing is going on, performance will be impacted.
~110% cpu uti
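The wait-for-active+clean step can be scripted; a minimal sketch that just polls overall cluster health (it assumes no other warnings are expected, since any HEALTH_WARN would keep it looping):

```shell
# Poll until the cluster reports HEALTH_OK, then show the summary;
# only then start the fio run so rebalancing does not skew results
until ceph health | grep -q HEALTH_OK; do
    sleep 10
done
ceph -s
```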
On Mon, May 11, 2015 at 06:07:21AM +, Somnath Roy wrote:
> Yes, you need to run fio clients on a separate box; it will take quite a bit
> of CPU.
> If you stop OSDs on other nodes, rebalancing will start. Have you waited for
> the cluster to reach the active+clean state? If you run while rebalan