hello sir!
According to TomiTakussaari/riak_zabbix, the currently supported Zabbix keys are:
riak.ring_num_partitions
riak.memory_total
riak.memory_processes_used
riak.pbc_active
riak.pbc_connects
riak.node_gets
riak.node_puts
riak.node_get_fsm_time_median
riak.node_put_fsm_time_median
All these metrics are m
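For reference, a rough sketch of how such keys are usually wired into the Zabbix agent; the helper script name and path below are hypothetical, and it assumes the values come from Riak's HTTP /stats endpoint on the default port:

#!/bin/sh
# /usr/local/bin/riak_stat.sh -- hypothetical helper: print one stat from the
# local Riak HTTP /stats endpoint, e.g. "riak_stat.sh node_gets"
curl -s http://127.0.0.1:8098/stats | python -c \
  "import json,sys; print(json.load(sys.stdin)['$1'])"

# /etc/zabbix/zabbix_agentd.d/riak.conf -- maps the keys above onto the helper
UserParameter=riak.node_gets,/usr/local/bin/riak_stat.sh node_gets
UserParameter=riak.node_puts,/usr/local/bin/riak_stat.sh node_puts
UserParameter=riak.memory_total,/usr/local/bin/riak_stat.sh memory_total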
Examples
Backups:
/usr/bin/nice -n +20 /usr/bin/rbd -n client.backup export
test/vm-105-disk-1@rbd_data.505392ae8944a - | /usr/bin/pv -s 40G -n -i 1 |
/usr/bin/nice -n +20 /usr/bin/pbzip2 -c > /backup/vm-105-disk-1
Restore:
pbzip2 -dk /nfs/RBD/big-vm-268-disk-1-LyncV2-20140830-011308.pbzip2 -c |
rb
Hi.
For faster operation, use rbd export/export-diff and import/import-diff
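A rough sketch of that diff-based flow, assuming a one-time full copy first; pool, image, snapshot and host names below are placeholders, not taken from this thread:

# one-time full copy, plus a matching base snapshot on both sides
rbd snap create pool/image@base
rbd export pool/image@base - | ssh backuphost \
    'rbd import - backuppool/image && rbd snap create backuppool/image@base'

# afterwards, ship only the blocks changed since the previous snapshot
rbd snap create pool/image@snap1
rbd export-diff --from-snap base pool/image@snap1 - | ssh backuphost \
    'rbd import-diff - backuppool/image'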
2014-12-11 17:17 GMT+03:00 Lindsay Mathieson :
>
> Anyone know why a VM live restore would be excessively slow on Ceph?
> restoring
> a small VM with 12GB disk/2GB Ram is taking 18 *minutes*. Larger VM's can
> be
> over
Hi.
We use Zabbix.
2014-12-12 8:33 GMT+03:00 pragya jain :
> hello sir!
>
> I need some open source monitoring tool for examining these metrics.
>
> Please suggest some open source monitoring software.
>
> Thanks
> Regards
> Pragya Jain
>
>
> On Thursday, 11 December 2014 9:16 PM, Denish Patel
hello sir!
I need some open source monitoring tool for examining these metrics.
Please suggest some open source monitoring software.
Thanks
Regards
Pragya Jain
On Thursday, 11 December 2014 9:16 PM, Denish Patel
wrote:
Try http://www.circonus.com
On Thu, Dec 11, 2014 at 1:22 AM, pr
OK! I will give it some time and will try again later!
Thanks a lot for your help!
Warmest regards,
George
Hi all,
Can anyone help?
On Dec 11, 2014, at 20:34, mail list wrote:
> Hi all,
>
> I followed http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to
> deploy Ceph,
> but when installing the monitor node, I got the error below:
>
> {code}
> [louis@adminnode my-cluster]$ ceph-deploy new
On Thu, Dec 11, 2014 at 2:21 AM, Joao Eduardo Luis wrote:
> On 12/11/2014 04:28 AM, Christopher Armstrong wrote:
>>
>> If someone could point me to where this fix should go in the code, I'd
>> actually love to dive in - I've been wanting to contribute back to Ceph,
>> and this bug has hit us perso
Anyone know why a VM live restore would be excessively slow on Ceph? Restoring
a small VM with a 12GB disk/2GB RAM is taking 18 *minutes*. Larger VMs can be
over half an hour.
The same VMs on the same disks, but native or on GlusterFS, take less than 30
seconds.
The VMs are KVM on Proxmox.
thank
On 12/10/2014 07:30 PM, Kevin Sumner wrote:
The mons have grown another 30GB each overnight (except for 003?), which
is quite worrying. I ran a little bit of testing yesterday after my
post, but not a significant amount.
I wouldn’t expect compact on start to help this situation based on the
nam
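For reference, the two usual compaction knobs, roughly (the mon id is a placeholder):

# one-off compaction of a running monitor's store
ceph tell mon.<id> compact

# or compact the store every time the daemon starts: in the [mon] section of ceph.conf
#   mon compact on start = true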
Be very careful with running "ceph pg repair". Have a look at this
thread:
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/15185
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On Thu, Dec 11, 2014 at 10:57:22AM +, Luis Periquito wrote:
> Hi,
>
> I've stopped OSD.16, removed the PG from
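A cautious sequence, as a sketch (the PG id and OSD number are placeholders):

# find the inconsistent PG and the OSDs holding it
ceph health detail | grep inconsistent
ceph pg map <pgid>

# re-run a deep scrub and check the primary OSD's log for the exact object
ceph pg deep-scrub <pgid>
grep ERR /var/log/ceph/ceph-osd.<N>.log

# only once you know which replica is the bad one
ceph pg repair <pgid>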
On 12/11/2014 04:28 AM, Christopher Armstrong wrote:
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to contribute back to Ceph,
and this bug has hit us personally so I think it's a good candidate :)
I'm not sure where the bug
On 12/10/2014 04:08 PM, Mike wrote:
> Hello all!
> Some of our customers asked for SSD-only storage.
> Right now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI3008 chip,
> 8 internal 12Gb ports on each), 24 x Intel DC S3700 800Gb SSD drives, 2
> x mellanox 40Gbit ConnectX-3 (maybe newer ConnectX-4
Hi all,
I am using ceph-deploy as follows on CentOS 6.5 x86_64:
ceph-deploy -v install --release=giant adminnode
As you can see, I specified the release version as giant, but got the following
error:
[adminnode][WARNIN] curl: (22) The requested URL returned error: 404 Not Found
[adminnode][DEBU
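One thing worth checking is whether the el6 repository path ceph-deploy is trying actually exists, and if it does, pointing ceph-deploy at it explicitly with --repo-url; the URL below is only an illustration of the layout at the time, not something verified from this thread:

# does the giant el6 repo resolve at all?
curl -I http://ceph.com/rpm-giant/el6/

# if so, hand the repo to ceph-deploy directly
ceph-deploy install --release=giant --repo-url http://ceph.com/rpm-giant/el6/ adminnode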
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update the ceph-radosgw package I get the following:
Installed Packages
Name: ceph-radosgw
Arch
Hi all,
I have upgraded two LSI SAS9201-16i HBAs to the latest firmware P20.00.00,
and after that I got the following syslog messages:
Dec 9 18:11:31 ceph-03 kernel: [ 484.602834] mpt2sas0: log_info(0x3108):
originator(PL), code(0x08), sub_code(0x)
Dec 9 18:12:15 ceph-03 kernel: [ 528.31017
Pushed a fix to wip-10271. Haven't tested it though, let me know if you try it.
Thanks,
Yehuda
On Thu, Dec 11, 2014 at 8:38 AM, Yehuda Sadeh wrote:
> I don't think it has been fixed recently. I'm looking at it now, and
> not sure why it hasn't triggered before in other areas.
>
> Yehuda
>
> On T
I don't think it has been fixed recently. I'm looking at it now, and
not sure why it hasn't triggered before in other areas.
Yehuda
On Thu, Dec 11, 2014 at 5:55 AM, Georgios Dimitrakakis
wrote:
> This issue seems very similar to these:
>
> http://tracker.ceph.com/issues/8202
> http://tracker.cep
Hi all,
I followed http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to
deploy Ceph,
but when installing the monitor node, I got the error below:
{code}
[louis@adminnode my-cluster]$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/louis/.cephdeploy.con
This issue seems very similar to these:
http://tracker.ceph.com/issues/8202
http://tracker.ceph.com/issues/8702
Would it make any difference if I try to build CEPH from sources?
I mean, is anyone aware of it having been fixed in any of the recent commits
but probably not yet having made it to the reposit
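If it does come to building from source, the firefly-era tree built with autotools, roughly like this (a sketch only; dependency installation is omitted):

git clone --recursive https://github.com/ceph/ceph.git
cd ceph
git checkout firefly
./autogen.sh && ./configure && make -j4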
Christian,
That indeed looks like the bug! We tried with moving the monitor
host/address into global and everything works as expected - see
https://github.com/deis/deis/issues/2711#issuecomment-66566318
This seems like a potentially bad bug - how has it not come up before?
Anything we can do to h
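For anyone else hitting this, the layout that ended up working was roughly the following (fsid, names and addresses are placeholders):

# ceph.conf -- monitor addresses kept in [global] rather than in per-mon sections
[global]
    fsid = <cluster-fsid>
    mon initial members = mon-a, mon-b, mon-c
    mon host = 10.0.0.1,10.0.0.2,10.0.0.3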
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to contribute back to Ceph,
and this bug has hit us personally so I think it's a good candidate :)
On Wed, Dec 10, 2014 at 8:25 PM, Christopher Armstrong
wrote:
> We're running Cep
The branch I pushed earlier was based off recent development branch. I
just pushed one based off firefly (wip-10271-firefly). It will
probably take a bit to build.
Yehuda
On Thu, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
wrote:
> Hi again!
>
> I have installed and enabled the development b
Hi,
Can you post the commands you ran for both benchmarks? Without knowing the
block size, write pattern and queue depth, it's hard to determine where the
bottleneck might be.
I can see OSD sde has a very high service time, which could be a sign of a
problem; does it always show up high
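For comparison, the kind of commands that make the numbers interpretable (pool name, device and sizes below are placeholders):

# rados bench: 4 MB writes, 16 concurrent ops, 60 seconds
rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
rados bench -p testpool 60 seq -t 16

# fio against an RBD-backed block device: 4k random writes, queue depth 32
fio --name=rbdtest --filename=/dev/rbd0 --rw=randwrite --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based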
On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito wrote:
> Hi,
>
> I've stopped OSD.16, removed the PG from the local filesystem and started
> the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
> deep-scrub and the PG is still inconsistent.
What led you to remove it from osd 16? Is
(On Giant, v0.87)
On Thu, Dec 11, 2014 at 10:34 AM, Christopher Armstrong
wrote:
> Our users are running CoreOS with kernel 3.17.2. Our user tested this by
> setting up the config and then bringing down one of the mons. See
> https://github.com/deis/deis/issues/2711#issuecomment-66566318 for his
Our users are running CoreOS with kernel 3.17.2. Our user tested this by
setting up the config and then bringing down one of the mons. See
https://github.com/deis/deis/issues/2711#issuecomment-66566318 for his
testing scenario.
On Thu, Dec 11, 2014 at 8:16 AM, Joao Eduardo Luis wrote:
> On 12/11
Was there any activity against your cluster when you reduced the size
from 3 -> 2? I think maybe it was just taking time to percolate
through the system if nothing else was going on. When you reduced them
to size 1 then data needed to be deleted so everything woke up and
started processing.
-Greg
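(For reference, the size change itself is a single command, and whether the cluster actually has work to do afterwards shows up in the PG states; the pool name is a placeholder.)

ceph osd pool set <pool> size 2
ceph -w    # watch whether any PGs actually change state afterwards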
There is a Ceph pool on an HP DL360 G5 with 25 SAS 10k disks (sda-sdy) on an MSA70,
which gives me about 600 MB/s continuous write speed with rados write bench.
tgt on the server with the rbd backend uses this pool. Mounting locally (on the host)
with iscsiadm, sdz is the virtual iSCSI device. As you can see, sdz maxes out
w
On 12/11/2014 02:46 PM, Gregory Farnum wrote:
On Thu, Dec 11, 2014 at 2:21 AM, Joao Eduardo Luis wrote:
On 12/11/2014 04:28 AM, Christopher Armstrong wrote:
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to contribute back t
Hello,
On 12/11/2014 11:35 AM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
>
>> Hello all!
>> Some of our customers asked for SSD-only storage.
>> Right now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI3008 chip,
>> 8 internal 12Gb ports on each), 24 x
On Thu, Dec 11, 2014 at 3:18 AM, Irek Fasikhov wrote:
> Hi, Cao.
>
> https://github.com/ceph/ceph/commits/firefly
>
>
> 2014-12-11 5:00 GMT+03:00 Cao, Buddy :
>>
>> Hi, I tried to download the firefly rpm package, but found two rpms existing
>> in different folders; what is the difference between 0.87.0 an
Hi,
I've stopped OSD.16, removed the PG from the local filesystem and started
the OSD again. After ceph rebuilt the PG in the removed OSD I ran a
deep-scrub and the PG is still inconsistent.
I'm running out of ideas on trying to solve this. Does this mean that all
copies of the object should also
On 12/11/2014 04:18 AM, Christian Balzer wrote:
On Wed, 10 Dec 2014 20:09:01 -0800 Christopher Armstrong wrote:
Christian,
That indeed looks like the bug! We tried with moving the monitor
host/address into global and everything works as expected - see
https://github.com/deis/deis/issues/2711#i
Hi
Is it possible to share performance results with this kind of config? How many
IOPS? Bandwidth? Latency?
Thanks
Sent from my iPhone
> On 11 déc. 2014, at 09:35, Christian Balzer wrote:
>
>
> Hello,
>
>> On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
>>
>> Hello all!
>> Some our custom
Hello,
On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
> Hello all!
> Some of our customers asked for SSD-only storage.
> Right now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI3008 chip,
> 8 internal 12Gb ports on each), 24 x Intel DC S3700 800Gb SSD drives, 2
> x mellanox 40Gbit ConnectX-3 (m
Hi,
Back on this.
I finally worked out the logic in the mapping.
After taking the time to note all the disks' serial numbers on 3 different
machines and 2 different OSes, I now know that my specific LSI SAS 2008 cards
(no reference printed on them, but I think they are LSI SAS 9207-8i) map the disks of
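For anyone repeating the exercise, a quick way to collect the serial-to-device mapping in one pass (a sketch; it assumes smartmontools is installed):

for d in /dev/sd[a-z]; do
    echo -n "$d  "
    smartctl -i "$d" | grep -i 'serial number'
done

# udev's persistent names encode the same information
ls -l /dev/disk/by-id/ | grep -v part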