[ceph-users] subscribe to ceph-user list

2018-01-15 Thread German Anders
___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Recommendations for I/O (blk-mq) scheduler for HDDs and SSDs?

2017-12-11 Thread German Anders
Hi Patrick, some thoughts about blk-mq: *(virtio-blk)* - it's activated by default on kernels >= 3.13 for the virtio-blk driver - *The blk-mq feature is currently implemented, and enabled by default, in the following drivers: virtio-blk, mtip32xx, nvme, and rbd*. ( https://access.redhat.
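For reference, a quick way to check which I/O scheduler a block device is using, and to switch it, is through sysfs; a minimal sketch (device names are only examples, and on blk-mq kernels the choices are the multi-queue schedulers such as none or mq-deadline, with the active one shown in brackets):

    # show the active scheduler for a device
    $ cat /sys/block/sda/queue/scheduler
    # switch it, e.g. to 'none' on a blk-mq NVMe device
    $ echo none | sudo tee /sys/block/nvme0n1/queue/scheduler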

Re: [ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-11 Thread German Anders
cephfs_metadata The good news is that after restarting the ceph-mgr, it started to work :) but like you said, it would be nice to know how the system got into this state. Thanks a lot John :) Best, *German* 2017-12-11 12:17 GMT-03:00 John Spray : > On Mon, Dec 11, 2017 at 3:13 PM, German Anders >
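For anyone hitting the same traceback, the workaround described above is just bouncing the active mgr; a rough sketch of the usual ways to do that (the daemon and mgr names are placeholders):

    $ sudo systemctl restart ceph-mgr@<hostname>
    # or fail over to a standby mgr instead
    $ ceph mgr fail <active-mgr-name>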

Re: [ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-11 Thread German Anders
handle_fs_status(cmd) File "/usr/lib/ceph/mgr/status/module.py", line 219, in handle_fs_status stats = pool_stats[pool_id] KeyError: (15L,) *German* 2017-12-11 12:08 GMT-03:00 John Spray : > On Mon, Dec 4, 2017 at 6:37 PM, German Anders > wrote: > > Hi, > > >

[ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-04 Thread German Anders
Hi, I just upgraded a ceph cluster from version 12.2.0 (rc) to 12.2.2 (stable), and I'm getting a traceback while trying to run: *# ceph fs status* Error EINVAL: Traceback (most recent call last): File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command return self.handle_fs_s

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-12-04 Thread German Anders
Could anyone run the tests and share some results? Thanks in advance, Best, *German* 2017-11-30 14:25 GMT-03:00 German Anders : > That's correct, IPoIB for the backend (already configured the irq > affinity), and 10GbE on the frontend. I would love to try rdma but like > y

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-30 Thread German Anders
_sysbench_perf_test.out 2>/dev/null I'm looking for tps, qps and the 95th percentile; could anyone with an all-nvme cluster run the test and share the results? I would really appreciate the help :) Thanks in advance, Best, *German * 2017-11-29 19:14 GMT-03:00 Zoltan Arnold Nagy : > On 2017-11-27 14

Re: [ceph-users] Transparent huge pages

2017-11-29 Thread German Anders
Is it possible that on Ubuntu, with kernel version 4.12.14 at least, the parameter comes enabled as [madvise] by default? *German* 2017-11-28 12:07 GMT-03:00 Nigel Williams : > Given that memory is a key resource for Ceph, this advice about switching > Transparent Huge Pages kernel setting
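A simple way to verify what the kernel is actually using (the value in brackets is the active setting) and to change it at runtime, as a small sketch:

    $ cat /sys/kernel/mm/transparent_hugepage/enabled
    always [madvise] never
    $ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled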

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
statsd collectors running on the osd nodes so, I hope to get some info about that :) *German* 2017-11-28 16:12 GMT-03:00 Marc Roos : > > I was wondering if there are any statistics available that show the > performance increase of doing such things? > > > > > >

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
ve you tuned > them to have a bigger cache? > > These are from what I've learned using filestore - I've yet to run full > tests on bluestore - but they should still apply... > > On Mon, Nov 27, 2017 at 5:10 PM, German Anders > wrote: > >> Hi Nick, >>

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
et back as soon as get got those tests running. Thanks a lot, Best, *German* 2017-11-27 12:16 GMT-03:00 Nick Fisk : > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* 27 November 2017 14:44 > *To:* Maged Mokhtar > *Cc:* ce

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
t; NVMe, environment. > > > > David Byte > > Sr. Technology Strategist > > *SCE Enterprise Linux* > > *SCE Enterprise Storage* > > Alliances and SUSE Embedded > > db...@suse.com > > 918.528.4422 > > > > *From: *ceph-users on behalf of >

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
more tests and param changes to see if we get better :) Thanks, Best, *German* 2017-11-27 11:36 GMT-03:00 Maged Mokhtar : > On 2017-11-27 15:02, German Anders wrote: > > Hi All, > > I've a performance question, we recently install a brand new Ceph cluster > with all-nvme disk

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
e use this server also with an old ceph cluster. we are going to upgrade the version and see if tests get better. Thanks *German* 2017-11-27 10:16 GMT-03:00 Wido den Hollander : > > > Op 27 november 2017 om 14:02 schreef German Anders >: > > > > > > Hi All,

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
ot; > as well since per-message log gathering takes a large hit on small IO > performance. > > On Mon, Nov 27, 2017 at 8:02 AM, German Anders > wrote: > >> Hi All, >> >> I've a performance question, we recently install a brand new Ceph cluster >> w

[ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
Hi All, I have a performance question: we recently installed a brand new Ceph cluster with all-nvme disks, using ceph version 12.2.0 with bluestore configured. The back-end of the cluster is using a bonded IPoIB link (active/passive), and for the front-end we are using a bonding config with active/active (2

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread German Anders
Prometheus has a nice data exporter built in Go; you can then send the data to Grafana or any other tool: https://github.com/digitalocean/ceph_exporter *German* 2017-10-02 8:34 GMT-03:00 Osama Hasebou : > Hi Everyone, > > Is there a guide/tutorial about how to setup Ceph monitoring system using > co
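A rough sketch of wiring that exporter up (the image name and port follow the project's README; treat them as illustrative rather than authoritative):

    $ docker run -d -v /etc/ceph:/etc/ceph -p 9128:9128 digitalocean/ceph_exporter
    # then add <host>:9128 as a Prometheus scrape target and build Grafana dashboards on top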

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread German Anders
Try to work with the tunables: $ *ceph osd crush show-tunables* { "choose_local_tries": 0, "choose_local_fallback_tries": 0, "choose_total_tries": 50, "chooseleaf_descend_once": 1, "chooseleaf_vary_r": 1, "chooseleaf_stable": 0, "straw_calc_version": 1, "allowed_buc

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
= /path/to/customized-ceph-crush-location" (see > https://github.com/ceph/ceph/blob/master/src/ceph-crush-location.in). > > Cheers, > Maxime > > On Wed, 13 Sep 2017 at 18:35 German Anders wrote: > >> *# ceph health detail* >> HEALTH_OK >> >> *# ceph osd st
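Besides the custom crush-location hook mentioned above, the usual ceph.conf knobs for OSDs jumping back to the default location on reboot look roughly like this (values are examples, not taken from the thread):

    [osd]
    # stop OSDs from re-registering themselves at the default location on start
    osd crush update on start = false
    # or keep the automatic update but pin the location explicitly, e.g.
    # crush location = root=root host=cpm01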

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
*# ceph health detail* HEALTH_OK *# ceph osd stat* 48 osds: 48 up, 48 in *# ceph pg stat* 3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650 GB avail *German* 2017-09-13 13:24 GMT-03:00 dE : > On 09/13/2017 09:08 PM, German Anders wrote: > > Hi cephers, >

[ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
Hi cephers, I'm having an issue with a newly created cluster 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I reboot one of the nodes and it comes back, it appears outside of the root type in the tree: root@cpm01:~# ceph osd tree ID CLASS WEIGHT TYPE NAME

Re: [ceph-users] Upgrade target for 0.82

2017-06-27 Thread German Anders
Thanks a lot Wido Best, *German* 2017-06-27 16:08 GMT-03:00 Wido den Hollander : > > > Op 27 juni 2017 om 20:56 schreef German Anders : > > > > > > Hi Cephers, > > > >I want to upgrade an existing cluster (version 0.82), and I would like > > to

[ceph-users] Upgrade target for 0.82

2017-06-27 Thread German Anders
Hi Cephers, I want to upgrade an existing cluster (version 0.82), and I would like to know if there's any recommended upgrade-path and also the recommended target version. Thanks in advance, *German* ___ ceph-users mailing list ceph-users@lists.ceph

Re: [ceph-users] ceph-deploy to a particular version

2017-05-02 Thread German Anders
I think you can do *$ceph-deploy install --release --repo-url http://download.ceph.com/. .. *, also you can replace the --release flag with --dev or --testing and specify the version; I've done it with the release and dev flags and it works great :) hope it helps best, *Ger

Re: [ceph-users] Ceph UPDATE (not upgrade)

2017-04-26 Thread German Anders
in the > repo files and make it so that you have to include the packages to update > the ceph packages. > > On Wed, Apr 26, 2017 at 1:12 PM German Anders > wrote: > >> Hi Massimiiano, >> >> I think you best go with the upgrade process from Ceph site, take a

Re: [ceph-users] Ceph UPDATE (not upgrade)

2017-04-26 Thread German Anders
t and get things fine :) hope it helps, Best, *German Anders* 2017-04-26 11:21 GMT-03:00 Massimiliano Cuttini : > On a Ceph Monitor/OSD server can i run just: > > *yum update -y* > > in order to upgrade system and packages or d

Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-25 Thread German Anders
_ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-user

[ceph-users] News on RDMA on future releases

2016-12-07 Thread German Anders
Hi all, I want to know if there's any news on future releases regarding RDMA and whether it's going to be integrated or not, since RDMA should increase IOPS performance a lot, especially on small block sizes. Thanks in advance, Best, *German* ___ ceph-users m

Re: [ceph-users] A VM with 6 volumes - hangs

2016-11-14 Thread German Anders
ot; > > But Ceph status is in OK state. > > Thanks > Swami > > On Mon, Nov 14, 2016 at 8:27 PM, German Anders > wrote: > >> Could you share some info about the ceph cluster? logs? did you see >> anything different from normal op on the logs? >> >&

Re: [ceph-users] A VM with 6 volumes - hangs

2016-11-14 Thread German Anders
Could you share some info about the ceph cluster? Logs? Did you see anything different from normal operation in the logs? Best, *German* 2016-11-14 11:46 GMT-03:00 M Ranga Swami Reddy : > +ceph-devel > > On Fri, Nov 11, 2016 at 5:09 PM, M Ranga Swami Reddy > wrote: > >> Hello, >> I am using the ceph

Re: [ceph-users] ceph on two data centers far away

2016-10-25 Thread German Anders
snapshots to a remote location (separate > cluster or separate pool). Similar to RBD mirroring, in this situation > your client writes are not subject to that latency. > > On Thu, Oct 20, 2016 at 1:51 PM, German Anders > wrote: > > Thanks, that's too far actually
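The snapshot-shipping approach referred to above can be scripted with rbd export-diff/import-diff; a hedged sketch, with made-up pool, image, snapshot and host names:

    $ rbd snap create rbd/vm01@snap2
    $ rbd export-diff --from-snap snap1 rbd/vm01@snap2 - | ssh remote-site rbd import-diff - rbd/vm01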

Re: [ceph-users] ceph on two data centers far away

2016-10-20 Thread German Anders
16-10-20 9:54 GMT-07:00 German Anders : > >> from curiosity I wanted to ask you what kind of network topology are you >> trying to use across the cluster? In this type of scenario you really need >> an ultra low latency network, how far from each other? >> >> Best,

Re: [ceph-users] ceph on two data centers far away

2016-10-20 Thread German Anders
Out of curiosity, I wanted to ask what kind of network topology you are trying to use across the cluster. In this type of scenario you really need an ultra-low-latency network; how far apart are the sites? Best, *German* 2016-10-18 16:22 GMT-03:00 Sean Redmond : > Maybe this would be an option for

Re: [ceph-users] is the web site down ?

2016-10-12 Thread German Anders
I think that you can check it over here: http://www.dreamhoststatus.com/2016/10/11/dreamcompute- us-east-1-cluster-service-disruption/ *German Anders* Storage Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2016-10-12

Re: [ceph-users] Ceph InfiniBand Cluster - Jewel - Performance

2016-04-07 Thread German Anders
gt; =EF9A > -END PGP SIGNATURE----- > > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Thu, Apr 7, 2016 at 1:43 PM, German Anders > wrote: > > Hi Cephers, > > > > I've setup a produ

[ceph-users] Ceph InfiniBand Cluster - Jewel - Performance

2016-04-07 Thread German Anders
Hi Cephers, I've set up a production-environment Ceph cluster with the Jewel release (10.1.0 (96ae8bd25f31862dbd5302f304ebf8bf1166aba6)) consisting of 3 MON Servers and 6 OSD Servers: 3x MON Servers: 2x Intel Xeon E5-2630v3@2.40Ghz 384GB RAM 2x 200G Intel DC3700 in RAID-1 for OS 1x InfiniBand Conn

Re: [ceph-users] OSD crash after conversion to bluestore

2016-03-31 Thread German Anders
Having Jewel installed, is it possible to run a command to check that the OSD is actually using bluestore? Thanks in advance, Best, *German* 2016-03-31 1:24 GMT-03:00 Adrian Saul : > > I upgraded my lab cluster to 10.1.0 specifically to test out bluestore and > see what latency difference i
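One reliable check is the 'type' file in the OSD's data directory; newer releases also report it through the OSD metadata. A small sketch (the OSD id is an example):

    $ cat /var/lib/ceph/osd/ceph-0/type
    bluestore
    $ ceph osd metadata 0 | grep osd_objectstore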

Re: [ceph-users] Scrubbing a lot

2016-03-30 Thread German Anders
OK, but I have kernel 3.19.0-39-generic, so the new version is supposed to work, right? And I'm still getting issues while trying to map the RBD: $ *sudo rbd --cluster cephIB create e60host01vX --size 100G --pool rbd -c /etc/ceph/cephIB.conf* $ *sudo rbd -p rbd bench-write e60host01vX --io-size 4096

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
ot; to the "rbd create" command-line (or by updating > your config as documented in the release notes). > > -- > > Jason Dillaman > > > - Original Message - > > > From: "German Anders" > > To: "Jason Dillaman" > > Cc: &

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
format 1 when creating the image. > > > > Originalmeddelande > Från: Samuel Just > Datum: 2016-03-29 22:24 (GMT+01:00) > Till: German Anders > Kopia: ceph-users > Rubrik: Re: [ceph-users] Scrubbing a lot > > Sounds like a version/compatibility thing. Are your rbd clients

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
ore > than layering, you need to disable them via the 'rbd feature disable' > command. > > [1] https://github.com/ceph/ceph/blob/master/doc/release-notes.rst#L302 > > -- > > Jason Dillaman > > > - Original Message - > > > From: "Ge
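For older kernel clients, the fix along the lines discussed above is either disabling the extra image features or creating images with layering only; a rough sketch (image and pool names follow the earlier post in the thread):

    $ rbd feature disable rbd/e60host01vX deep-flatten fast-diff object-map exclusive-lock
    # or create new images with only the layering feature
    $ rbd create rbd/newimage --size 100G --image-feature layering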

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
-03-29 17:24 GMT-03:00 Samuel Just : > Or you needed to run it as root? > -Sam > > On Tue, Mar 29, 2016 at 1:24 PM, Samuel Just wrote: > > Sounds like a version/compatibility thing. Are your rbd clients really > old? > > -Sam > > > > On Tue, Mar 29,

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
eph 526 Mar 28 12:06 cephIB.conf -rw--- 1 ceph ceph 63 Mar 29 16:11 cephIB.client.admin.keyring ​Thanks in advance, Best, *German* 2016-03-29 14:45 GMT-03:00 German Anders : > Sure, also the scrubbing is happening on all the osds :S > > # ceph --cluster cephIB daemon osd.

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
uot;, "log_to_stderr": "true", "mds_data": "\/var\/lib\/ceph\/mds\/ceph-4", "mon_cluster_log_file": "default=\/var\/log\/ceph\/ceph.$channel.log cluster=\/var\/log\/ceph\/ceph.log", "m

[ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
Hi All, maybe a simple question: I've set up a new cluster with the Infernalis release; there's no IO going on at the cluster level and I'm receiving a lot of these messages: 2016-03-29 12:22:07.462818 mon.0 [INF] pgmap v158062: 8192 pgs: 8192 active+clean; 20617 MB data, 46164 MB used, 52484 GB

Re: [ceph-users] Crush Map tunning recommendation and validation

2016-03-24 Thread German Anders
; > Thanks > > On Wed, Mar 23, 2016 at 3:50 PM, German Anders > wrote: > >> Hi all, >> >> I had a question, I'm in the middle of a new ceph deploy cluster and I've >> 6 OSD servers between two racks, so rack1 would have osdserver1,3 and 5, >&g

[ceph-users] Crush Map tunning recommendation and validation

2016-03-23 Thread German Anders
Hi all, I have a question: I'm in the middle of a new ceph cluster deployment and I have 6 OSD servers split between two racks, so rack1 would have osdserver1, 3 and 5, and rack2 osdserver2, 4 and 6. I've edited the following crush map and I want to know if it's OK and also if the objects would be stored one on
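For the one-copy-per-rack goal, the heart of the rule usually boils down to choosing leaves across the rack bucket type; a minimal sketch of what such a rule can look like (bucket and rule names are illustrative, pre-Luminous syntax):

    rule replicated_racks {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }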

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
elp because the > bottleneck is my disks and CPU. > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Tue, Nov 24, 2015 at 10:26 AM, German Anders wrote: > > Thanks a lot Robert for the explanation. I unders

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
Yes, I'm wondering if this is the top performance threshold for this kind of setup, although I would assume that IB performance should be better.. :( *German* 2015-11-24 14:24 GMT-03:00 Mark Nelson : > On 11/24/2015 09:05 AM, German Anders wrote: > >> Thanks a lot for the response Ma

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
t; DRnXv0qd8UAIgza0VYTyZuElUC4V39wMe503tXo5By+NGKWzVNOWR1X0+46i > Xq2zvZQzc9MPtGHMmnm1dkJ+d6imfLzTf099njZ+Wl1xbagnQiKbiwKL8T/k > d3OClf514rV4i7FtwOoB8NQcUMUjaeZGmPVDhmVt7fRYz/+rARkN/jwXH4qG > x/Dk > =/88f > -END PGP SIGNATURE- > > Robert LeBlanc > PGP Fi

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
ffic across many ports. We've seen this in > lab environments, especially with bonded ethernet. > > Mark > > On 11/24/2015 07:22 AM, German Anders wrote: > >> After doing some more in deep research and tune some parameters I've >> gain a little bi

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
. > > "iperf -c -P " on the client > > This will give you an idea of how your network is doing. All-To-All > network tests are also useful, in that sometimes network issues can crop up > only when there's lots of traffic across many ports. We've seen this in
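A concrete form of that test, with a made-up server address and stream count:

    # on the server side
    $ iperf -s
    # on the client, 4 parallel streams
    $ iperf -c <server-ip> -P 4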

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
All > network tests are also useful, in that sometimes network issues can crop up > only when there's lots of traffic across many ports. We've seen this in > lab environments, especially with bonded ethernet. > > Mark > > On 11/24/2015 07:22 AM, German Anders wrote: &

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
ert LeBlanc : > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Are you using unconnected mode or connected mode? With connected mode > you can up your MTU to 64K which may help on the network side. > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
Got it Robert, it was my mistake: I put post-up instead of pre-up; now it changed OK. I'll run new tests with this config and let you know. Regards, *German* 2015-11-23 15:36 GMT-03:00 German Anders : > Hi Robert, > > Thanks for the response. I was configured as 'dat

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
Are you using unconnected mode or connected mode? With connected mode > you can up your MTU to 64K which may help on the network side. > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Mon, Nov 23, 2015 at 10:4
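As a sketch of what checking and switching that looks like on an IPoIB interface (the interface name is an example; 65520 is the usual connected-mode ceiling):

    $ cat /sys/class/net/ib0/mode
    datagram
    $ echo connected | sudo tee /sys/class/net/ib0/mode
    $ sudo ip link set ib0 mtu 65520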

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
on. > > 3) Assuming you are using IPoIB, try some iperf tests to see how your > network is doing. > > Mark > > > On 11/23/2015 10:17 AM, German Anders wrote: > >> Thanks a lot for the quick update Greg. This lead me to ask if there's >> anything out there

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
GMT-03:00 Gregory Farnum : > On Mon, Nov 23, 2015 at 10:05 AM, German Anders > wrote: > > Hi all, > > > > I want to know if there's any improvement or update regarding ceph 0.94.5 > > with accelio, I've an already configured cluster (with no data on it) &

[ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
Hi all, I want to know if there's any improvement or update regarding ceph 0.94.5 with accelio; I have an already-configured cluster (with no data on it) and I would like to know if there's a way to 'modify' the cluster to use accelio. Any info would be really appreciated. Cheers, *German

Re: [ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
satisfiable. Check what the rule is doing. > -Greg > > > On Friday, November 20, 2015, German Anders wrote: > >> Hi all, I've finished the install of a new ceph cluster with infernalis >> 9.2.0 release. But I'm getting the following error msg: >> >> $
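A way to follow that suggestion and test whether the rule can actually be satisfied, plus a look at the stuck PGs (rule id and replica count are examples):

    $ ceph osd getcrushmap -o crushmap.bin
    $ crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-mappings
    $ ceph pg dump_stuck stale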

[ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
Hi all, I've finished the install of a new ceph cluster with the infernalis 9.2.0 release, but I'm getting the following error msg: $ ceph -w cluster 29xx-3xxx-xxx9-xxx7-b8xx health HEALTH_WARN 64 pgs degraded 64 pgs stale 64 pgs stuck degraded

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread German Anders
Date: Thursday, 19 November 2015 at 18:43 > To: German Anders > Cc: ceph-users > Subject: Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0 > > I believe the error message says that there is no space left on the device > for the second partition to be created. Perhaps tr

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
I've already tried that with no luck at all. On Thursday, 19 November 2015, Mykola Dvornik wrote: > *'Could not create partition 2 from 10485761 to 10485760'.* > > Perhaps try to zap the disks first? > > On 19 November 2015 at 16:22, German Anders > wrote:
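For completeness, the zap step being discussed usually looks like this (device name taken from the post; note the poster reports it did not help in this case, and double-check the device before wiping it):

    $ sudo ceph-disk zap /dev/sdf
    # or, more bluntly, clear the GPT structures
    $ sudo sgdisk --zap-all /dev/sdf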

[ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
Hi cephers, I had some issues while running the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread German Anders
I have a similar problem while trying to run the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) pri

[ceph-users] Performance output con Ceph IB with fio examples

2015-11-17 Thread German Anders
Hi cephers, Is there anyone out there using Ceph (any version) with an Infiniband FDR topology network (both public and cluster) who could share some performance results? To be more specific, running something like this on an RBD volume mapped to an IB host: # fio --rw=randread --bs=4m --numjobs=4 -

[ceph-users] Bcache and Ceph Question

2015-11-17 Thread German Anders
Hi all, Is there any way to use bcache in an already-configured Ceph cluster? I have both the OSD and the journal inside the same OSD daemon, and I want to try bcache in front of the OSD daemon and also move the journal onto the bcache device, so for example I have: /dev/sdc --> SSD disk /dev/sdc1 --> 1st
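As a rough sketch of what putting bcache in front of an OSD data disk involves (device names are illustrative; the backing disk has to be reformatted with a bcache superblock, so in practice this means rebuilding the OSD rather than converting it in place):

    # make the SSD partition a cache device and the data disk a backing device
    $ sudo make-bcache -C /dev/sdc2
    $ sudo make-bcache -B /dev/sdd
    # attach the backing device to the cache set (uuid comes from the -C step)
    $ echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
    # then create the OSD (and/or journal) on /dev/bcache0 as usual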

[ceph-users] Question about OSD activate with ceph-deploy

2015-11-13 Thread German Anders
Hi all, I'm having some issues while trying to run the osd activate command with the ceph-deploy tool (1.5.28); the osd prepare command ran fine, but then... osd: sdf1 journal: /dev/sdc1 $ ceph-deploy osd activate cibn01:sdf1:/dev/sdc1 [ceph_deploy.conf][DEBUG ] found configuration file at: /home/c

Re: [ceph-users] v0.94.4 Hammer released

2015-10-20 Thread German Anders
with > reduced privileges, but upgraded daemons will continue to run as > root. > > > > Udo > > On 20.10.2015 14:59, German Anders wrote: > > trying to upgrade from hammer 0.94.3 to 0.94.4 I'm getting the following > er

Re: [ceph-users] v0.94.4 Hammer released upgrade

2015-10-20 Thread German Anders
Yep also: $ ceph-mon -v ceph version 0.94.4 (95292699291242794510b39ffde3f4df67898d3a) *German* 2015-10-20 11:48 GMT-03:00 Sage Weil : > On Tue, 20 Oct 2015, German Anders wrote: > > trying to upgrade from hammer 0.94.3 to 0.94.4 I'm getting the following > > error

[ceph-users] v0.94.4 Hammer released upgrade

2015-10-20 Thread German Anders
Trying to upgrade from hammer 0.94.3 to 0.94.4, I'm getting the following error msg while trying to restart the mon daemons ($ sudo restart ceph-mon-all): 2015-10-20 08:56:37.410321 7f59a8c9d8c0 0 ceph version 0.94.4 (95292699291242794510b39ffde3f4df67898d3a), process ceph-mon, pid 6821 2015-10-20

Re: [ceph-users] v0.94.4 Hammer released

2015-10-20 Thread German Anders
Trying to upgrade from hammer 0.94.3 to 0.94.4, I'm getting the following error msg while trying to restart the mon daemons: 2015-10-20 08:56:37.410321 7f59a8c9d8c0 0 ceph version 0.94.4 (95292699291242794510b39ffde3f4df67898d3a), process ceph-mon, pid 6821 2015-10-20 08:56:37.429036 7f59a8c9d8c0

[ceph-users] Error after upgrading to Infernalis

2015-10-16 Thread German Anders
Hi all, I'm trying to upgrade a ceph cluster (prev hammer release 0.94.3) to the latest release of *infernalis* (9.1.0-61-gf2b9f89). So far so good while upgrading the mon servers; all worked fine. But then, when trying to upgrade the OSD servers, I got an error while trying to start the osd services ag
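For what it's worth, a common cause of OSDs failing to start right after a hammer-to-infernalis upgrade is the switch to running the daemons as the ceph user; the release notes give two options, sketched here with the default paths:

    # either hand ownership of the data over to the ceph user...
    $ sudo chown -R ceph:ceph /var/lib/ceph
    # ...or keep running as root by adding this to ceph.conf:
    # setuser match path = /var/lib/ceph/$type/$cluster-$id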

[ceph-users] error while upgrading to infernalis last release on OSD serv

2015-10-15 Thread German Anders
Hi all, I'm trying to upgrade a ceph cluster (prev hammer release) to the latest release of infernalis. So far so good while upgrading the mon servers; all worked fine. But then, when trying to upgrade the OSD servers, I got an error while trying to start the osd services again: What I did first was to

[ceph-users] Fwd: Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
eers, *German* -- Forwarded message ------ From: German Anders Date: 2015-10-14 12:46 GMT-03:00 Subject: Proc for Impl XIO mess with Infernalis To: ceph-users Hi all, I would like to know whether, with this new release of Infernalis, there is a procedure somewhere to implemen

[ceph-users] Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
Hi all, I would like to know whether, with this new release of Infernalis, there is a procedure somewhere to implement the xio messenger with IB and ceph. Also, is it possible to change an existing ceph cluster to this kind of new setup (the existing cluster does not have any production data yet)? T

Re: [ceph-users] ceph-deploy prepare btrfs osd error

2015-09-07 Thread German Anders
gt; > There appears to be an issue with zap not wiping the partitions correctly. > http://tracker.ceph.com/issues/6258 > > > > Yours seems slightly different though. Curious, what size disk are you > trying to use? > > > > Cheers, > > > > Simon > > > &g

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-05 Thread German Anders
a lot! Best regards German On Saturday, September 5, 2015, Christian Balzer wrote: > > Hello, > > On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote: > > > Hi cephers, > > > >I've the following scheme: > > > > 7x OSD servers with: > >

[ceph-users] ceph-deploy prepare btrfs osd error

2015-09-04 Thread German Anders
Any ideas? ceph@cephdeploy01:~/ceph-ib$ ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc [ceph_deploy.cl

[ceph-users] ceph osd prepare btrfs

2015-09-04 Thread German Anders
Trying to do a prepare on an osd with btrfs, and getting this error: [cibosd04][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdc [cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid [cibosd04][WARNI

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
If you can get by with just the > SAS disks for now and make a more informed decision about the cache tiering > when Infernalis is released then that might be your best bet. > > > > Otherwise you might just be best using them as a basic SSD only Pool. > > > > Nick > &g

[ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
Hi cephers, I have the following scheme: 7x OSD servers with: 4x 800GB SSD Intel DC S3510 (OSD-SSD) 3x 120GB SSD Intel DC S3500 (Journals) 5x 3TB SAS disks (OSD-SAS) The OSD servers are located in two separate racks with two power circuits each. I would like to know what is the

[ceph-users] Ceph new mon deploy v9.0.3-1355

2015-09-02 Thread German Anders
Hi cephers, I'm trying to deploy a new ceph cluster with the master release (v9.0.3), and when trying to create the initial mons an error appears saying "admin_socket: exception getting command descriptions: [Errno 2] No such file or directory"; here is the log: ... [ceph_deploy.mon][INFO ] distro

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
> > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, September 01, 2015 12:00 PM > *To:* Somnath Roy > > *Cc:* ceph-users > *Subject:* Re: [ceph-users] Accelio & Ceph > > > > Th

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
out the doc you are maintaining ? > > > > Regards > > Somnath > > > > *From:* German Anders [mailto:gand...@despegar.com] > *Sent:* Tuesday, September 01, 2015 11:36 AM > > *To:* Somnath Roy > *Cc:* Robert LeBlanc; ceph-users > *Subject:* Re: [ceph-users] Accelio &am

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
probably, not sure if it is added as git > submodule or not, Vu , could you please confirm ? > > > > Since we are working to make this solution work at scale, could you please > give us some idea what is the scale you are looking at for future > deployment ? > >

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
how many nodes/OSDs/SSD or HDDs/ EC or Replication etc. > etc.). > > > > Thanks & Regards > > Somnath > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, September 01, 2015 10:39 AM

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
elio and Ceph are still in heavy development and not ready for production. > > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote: > Hi cephers, > > I would lik

[ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
Hi cephers, I would like to know the production-readiness status of Accelio & Ceph; does anyone have a home-made procedure implemented on Ubuntu? Recommendations, comments? Thanks in advance, Best regards, *German* ___ ceph-users mailing list ceph-

Re: [ceph-users] ceph version for productive clusters?

2015-08-31 Thread German Anders
Thanks a lot Kobi *German* 2015-08-31 14:20 GMT-03:00 Kobi Laredo : > Hammer should be very stable at this point. > > *Kobi Laredo* > *Cloud Systems Engineer* | (*408) 409-KOBI* > > On Mon, Aug 31, 2015 at 8:51 AM, German Anders > wrote: > >> Hi cephers, >>

[ceph-users] ceph version for productive clusters?

2015-08-31 Thread German Anders
Hi cephers, What's the recommended version for new production clusters? Thanks in advance, Best regards, *German* ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
an save money on memory. > >>> > >>> What will be the role of this cluster? VM disks? Object storage? > >>> Streaming?... > >>> > >>> Jan > >>> > >>> On 27 Aug 2015, at 17:56, German Anders wrote: > >&g

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
cages on different UPSes, then you can do stuff like disable > barriers if you go with some cheaper drives that need it.) I'm not a CRUSH > expert, there are more tricks to do before you set this up. > > Jan > > On 27 Aug 2015, at 18:31, German Anders wrote: > > Hi Jan,

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
ly > need higher-grade SSDs. You can save money on memory. > > What will be the role of this cluster? VM disks? Object storage? > Streaming?... > > Jan > > On 27 Aug 2015, at 17:56, German Anders wrote: > > Hi all, > >I'm planning to deploy a new Ce

[ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
Hi all, I'm planning to deploy a new Ceph cluster with IB FDR 56Gb/s and I have the following HW: *3x MON Servers:* 2x Intel Xeon E5-2600@v3 8C 256GB RAM 1xIB FDR ADPT-DP (two ports for PUB network) 1xGB ADPT-DP Disk Layout: SOFT-RAID: SCSI1 (0,0,0) (sda) - 120.0 GB ATA IN

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
yeah 3TB SAS disks *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-02 9:04 GMT-03:00 Jan Schermer : > And those disks are spindles? > Looks like there’s simply too few of

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
eaf firstn -1 type host step emit } # end crush map *German* 2015-07-02 8:15 GMT-03:00 Lionel Bouton : > On 07/02/15 12:48, German Anders wrote: > > The idea is to cache rbd at a host level. Also could be possible to > > cache at the osd level. We have high iowait and we n

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
gt; On 02 Jul 2015, at 11:29, Emmanuel Florac > wrote: > > > > Le Wed, 1 Jul 2015 17:13:03 -0300 > > German Anders > écrivait: > > > >> Hi cephers, > >> > >> Is anyone out there that implement enhanceIO in a production > >&g

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
> big of an issue. Now that assumes that replication actually works well in > that size cluster. We're still cessing out this part of the PoC > engagement. > > ~~shane > > > > > On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders" < > ceph

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
kind of disk you will get no more than 100-110 iops per disk *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-01 20:54 GMT-03:00 Nate Curry : > 4TB is too much to lose? Why would
