Hello.
Some strange things happen with my ceph installation after I moved the journal
to an SSD disk.
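(For reference, the move itself followed the usual flush/mkjournal sequence, roughly as below; osd.0 and the partition path are only placeholders:)
# stop the OSD first (via its init script / unit), then flush its journal
ceph-osd -i 0 --flush-journal
# point the journal symlink at the new SSD partition and recreate the journal
ln -sf /dev/disk/by-partuuid/<ssd-journal-partition> /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal
# start the OSD again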
OS: Ubuntu 15.04 with ceph version 0.94.2-0ubuntu0.15.04.1
Server: Dell R510 with PERC H700 Integrated 512MB RAID cache
My cluster has:
1 monitor node
2 OSD nodes with 6 OSD daemons on each server
performance of the SSD disk
>(10-30k write IO).
>
>This is not realistic. Try:
>
>fio --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=aio
>
>Jan
>
>On 23 Oct 2015, at 16:31, K K < n...@mail.ru > wrote:
>Hello.
>Some strange things happen with my ceph installation…
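For reference, a more complete form of the fio test Jan suggests would be something like this (the file path, size and runtime are placeholders, and fio's Linux AIO engine is called libaio):
# synced, direct 4k writes at queue depth 1 (a journal-like pattern)
fio --name=ssd-journal-test --filename=/mnt/ssd/fio.test --size=1G \
    --rw=write --bs=4k --sync=1 --fsync=1 --direct=1 \
    --iodepth=1 --ioengine=libaio --runtime=60 --time_based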
I got the same situation on a 1Gbit network. Try changing the MTU to 9000 on the NIC and
on the switch.
Can you show your cluster configs?
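If it helps, this is roughly how I set it on Ubuntu (eth0 and the peer address are only examples; the switch ports must allow jumbo frames as well):
# temporarily, until reboot
ip link set dev eth0 mtu 9000
# verify end to end with a non-fragmenting ping (8972 = 9000 minus 28 bytes of IP/ICMP headers)
ping -M do -s 8972 <other-node>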
--
Kostya
Saturday, 31 October 2015, 02:30 +05:00 from Brendan Moloney <molo...@ohsu.edu>:
>Hi,
>
>I recently got my first Ceph cluster up and running and have been doing some
>st
Hello, guys
I'm facing poor performance in a Windows 2012 R2 instance running on RBD
(OpenStack cluster). The RBD disk is 17 TB in size. My Ceph cluster consists of:
- 3 monitor nodes (Celeron G530/6Gb RAM, DualCore E6500/2Gb RAM, Core2Duo
E7500/2Gb RAM). Each node has a 1Gbit network to the fron
>…network to the clients from the
>storage nodes is fully functional.
The network has been tested with iperf: 950-970 Mbit/s among all nodes in the clusters
(OpenStack and Ceph).
Monday, 11 July 2016, 10:58 +05:00 from Christian Balzer:
>
>
>Hello,
>
>On Mon, 11 Jul 2016 07:35:02 +0300 K K wrote:
I can't change those parameters on the fly:
ceph tell osd.* injectargs '--osd_scrub_end_hour=6'
osd.0: osd_scrub_end_hour = '6' (unchangeable)
osd.1: osd_scrub_end_hour = '6' (unchangeable)
osd.2: osd_scrub_end_hour = '6' (unchangeable)
...
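To keep such a scrub window across restarts I would also put it into ceph.conf on the OSD nodes; a minimal sketch (the hours are only an example):
[osd]
# only allow (deep-)scrubbing between 22:00 and 06:00
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 6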
I tried dumping the current scrub settings:
"osd_scrub_auto_repair_num_errors": "5",
"osd_scrub_priority": "5",
"osd_scrub_cost": "52428800",
Christian, can you suggest optimal parameters for my environment?
>Monday, 11 July 2016, 12:38 +05:00 from Christian
+deep
client io 3909 kB/s rd, 30277 B/s wr, 23 op/s rd, 9 op/s wr
Now I have temporarily disabled deep-scrub via "ceph osd set nodeep-scrub",
but performance is still poor inside the VM.
Monday, 11 July 2016, 12:38 +05:00 from Christian Balzer <ch...@gol.com>:
>
>
>Hello,
>
>On Mon,
Random Read 4KB (QD=32) : 32.220 MB/s [ 7866.1 IOPS]
Random Write 4KB (QD=32) : 12.564 MB/s [ 3067.4 IOPS]
Test : 4000 MB [D: 97.5% (15699.7/16103.1 GB)] (x3)
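The same pattern can be reproduced with fio inside the guest, roughly like this (assuming the Windows fio build; the file name, size and runtime are placeholders, and fio needs the drive-letter colon escaped):
fio --name=rand4k-qd32 --filename=D\:\fio.test --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
    --ioengine=windowsaio --runtime=60 --time_based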
>Monday, 11 July 2016, 12:38 +05:00 from Christian Balzer:
>
>
>Hello,
>
>On Mon, 11 Jul 2016 09:54:59 +0300 K K wrote:
>
>…settings will only apply to new scrubs, not running ones, as you
>found out.
>
>On Mon, 11 Jul 2016 15:37:49 +0300 K K wrote:
>
>>
>> I have tested windows instance Crystal Disk Mark. Result is:
>>
>Again, when running a test like this, check with atop/iostat how
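For reference, a minimal way to watch the OSD data/journal disks on the storage nodes while such a test runs (the 2-second interval is arbitrary):
# extended per-device stats (utilisation, await, queue size) every 2 seconds
iostat -x 2
# or interactively with atop, refreshing every 2 seconds
atop 2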
Hello.
Maybe deep-scrub is starting at this time?
Thursday, 21 July 2016, 11:10 +05:00 from Christian Balzer:
>
>
>Hello,
>
>On Wed, 20 Jul 2016 12:19:07 -0700 Kane Kim wrote:
>
>> Hello,
>>
>> I was running cosbench for some time and noticed sharp consistent
>> performance decrease at some point.
Hello, all!
I have successfully created a 2-zone cluster (se and se2). But my radosgw machines
are sending many GET /admin/log requests to each other after putting 10k items into the
cluster via radosgw. It looks like:
2017-03-03 17:31:17.897872 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24
- - [03/M
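For what it's worth, the sync state between the zones can be checked with the standard jewel multisite commands (a sketch):
# overall metadata/data sync state as seen from this gateway's zone
radosgw-admin sync status
# current period / zonegroup layout
radosgw-admin period get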
Hi
I ran into this error too. After some research I commented out my custom settings:
#rgw zonegroup root pool = se.root
#rgw zone root pool = se.root
and after that rgw started successfully. Now the settings are placed in the default
pool: .rgw.root
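To double-check where the realm/zonegroup/zone objects ended up, the default root pool can be listed (a sketch):
# the zone and zonegroup objects should now show up here
rados -p .rgw.root ls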
>Saturday, 4 March 2017, 6:40 +05:00 from Gagandeep Arora
"029e0f49-f4dc-4f29-8855-bcc23a8bbcd9",
"name": "se2-k12",
"endpoints": [
"http:\/\/se2.local:80"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 0,
"read_
Hello,
my cluster doesn't have an MDS. I recommend adding "ceph osd set noout" before
shutting down the OSD daemons.
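Roughly the order I used (a sketch; the exact service names depend on how the OSDs were deployed):
# before the shutdown, stop CRUSH from marking stopped OSDs out
ceph osd set noout
# then stop the OSD daemons on each node, e.g. with systemd:
# systemctl stop ceph-osd.target
# ...maintenance / reboot...
# once everything is back up and PGs are active+clean:
ceph osd unset noout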
I did this operation recently and my cluster is working again.
Friday, 7 April 2017, 13:47 +05:00 from TYLin:
>
>Hi all,
>
>We’re trying to stop and then restart our ceph cluster. Our s
Hi all!
jewel 10.2.6 release
I am trying to set up "X-History-Location: Arhive", but that function does not work.
Does anybody know of plans to add this option to radosgw?
X-Versions-Location works fine.
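For comparison, this is how the working variant is enabled (container names are only examples):
# keep old object copies from "docs" in the "docs-versions" container
swift post docs-versions
swift post docs -H "X-Versions-Location: docs-versions"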
Thanks all
--
Konstantin