[ceph-users] Using Ramdisk wi

2014-07-30 Thread German Anders
Hi everyone, is anybody using a ramdisk to hold the journal? If so, could you please share the commands to implement it? I'm having some issues with it and want to test whether I can get better performance. Thanks in advance, German Anders
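
A minimal sketch of the kind of setup being asked about (assuming a brd ramdisk and OSD id 0; device names and sizes are illustrative, and a RAM-backed journal loses all buffered writes on power failure, so this is for testing only):

    # Load the RAM block driver with one 4GB disk (rd_size is in KB)
    sudo modprobe brd rd_nr=1 rd_size=4194304
    # Stop the OSD and flush its current journal (Upstart-era Ubuntu)
    sudo stop ceph-osd id=0
    sudo ceph-osd -i 0 --flush-journal
    # Repoint the journal symlink at the ramdisk and recreate it
    sudo ln -sf /dev/ram0 /var/lib/ceph/osd/ceph-0/journal
    sudo ceph-osd -i 0 --mkjournal
    sudo start ceph-osd id=0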

Re: [ceph-users] Using Ramdisk wi

2014-07-30 Thread German Anders
try RAMDISK on journals; I've noticed that he implemented that on their Ceph cluster. I will really appreciate help on this. Also, if you need me to send you some more information about the Ceph scheme, please let me know. And if someone could share some detailed conf info, it will really help

Re: [ceph-users] Calamari Goes Open Source

2014-07-30 Thread German Anders
I have the same issue; it would be really helpful if someone has a home-made procedure or some notes. German Anders --- Original message --- Subject: Re: [ceph-users] Calamari Goes Open Source From: Larry Liu To: Mike Dawson Cc: Ceph Devel , Ceph-User Date: Wednesday, 30

[ceph-users] flashcache from fb and dm-cache??

2014-07-30 Thread German Anders
Also, has anyone tried flashcache from Facebook on Ceph? Pros? Cons? Any performance improvement? And what about dm-cache? German Anders
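
For reference, a hedged sketch of how flashcache is typically layered under an OSD data disk (device and cache names are illustrative; write-back caching on a single SSD risks data loss if the SSD dies):

    # Build a write-back cache device from an SSD partition and the data disk
    sudo flashcache_create -p back osd0cache /dev/sdb1 /dev/sdd
    # The combined device appears under /dev/mapper and replaces the raw disk
    sudo mkfs.xfs /dev/mapper/osd0cache
    sudo mount /dev/mapper/osd0cache /var/lib/ceph/osd/ceph-0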

Re: [ceph-users] Using Ramdisk wi

2014-07-30 Thread German Anders
Hi Christian, How are you? Thanks a lot for the answers; mine are in red. --- Original message --- Subject: Re: [ceph-users] Using Ramdisk wi From: Christian Balzer To: Cc: German Anders Date: Wednesday, 30/07/2014 11:42 Hello, On Wed, 30 Jul 2014 09:55:49 -0400 German Anders wrote

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-07-31 Thread German Anders
Hi Ilya, I think you need to upgrade the kernel version of that Ubuntu server; I had a similar problem, and after upgrading the kernel to 3.13 it was resolved successfully. Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] 0

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread German Anders
From: Ilya Dryomov Date: 01/08/2014 08:22 (GMT-03:00) To: German Anders Cc: Larry Liu , ceph-users@lists.ceph.com Subject: Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS On Fri, Aug 1, 2014 at 12:29 AM, German Anders wrote: > Hi Ilya, > I think you n

Re: [ceph-users] Ceph writes stall for long periods with no disk/network activity

2014-08-04 Thread German Anders
--- Original message --- Subject: Re: [ceph-users] Ceph writes stall for long periods with no disk/network activity From: Chris Kitzmiller To: Mariusz Gronczewski Cc: Date: Monday, 04/08/2014 17:28 On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote: I got weird stalling during writes

[ceph-users] build ceph from tar.gz proc

2014-08-04 Thread German Anders
Hi all, does anybody have a step-by-step procedure to install Ceph from a tar.gz file? I would like to test version 0.82. Thanks in advance, Best regards, German Anders
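
A minimal sketch of the usual autotools flow for a Ceph tarball of that era (Ubuntu package names; the exact dependency list varies by version, so treat this as a starting point):

    tar xzf ceph-0.82.tar.gz && cd ceph-0.82
    sudo apt-get install build-essential autoconf automake libtool pkg-config
    ./autogen.sh
    ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
    make -j4
    sudo make install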

[ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread German Anders
I also tried running the command manually on the OSD server, but I get the same error message. Any ideas? Thanks in advance, Best regards, German Anders

Re: [ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread German Anders

[ceph-users] ceph-deploy activate actually didn't activate the OSD

2014-08-07 Thread German Anders
1 root root 92 May 12 11:14 rbdmap Any ideas? I'm stuck here and can't go any further. Thanks in advance, Best regards, German Anders
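
When ceph-deploy reports success but the OSD never comes up, running the underlying activate step by hand on the OSD host usually shows where it stops — a sketch (assuming the data partition is /dev/sdf1):

    # What ceph-deploy invokes under the hood
    sudo ceph-disk activate /dev/sdf1
    # If that succeeds, the partition should be mounted and the OSD registered
    df -h | grep ceph
    ceph osd tree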

Re: [ceph-users] Can't start OSD

2014-08-08 Thread German Anders
What about the logs? Is there anything in them? ls /var/log/ceph/ German Anders --- Original message --- Subject: Re: [ceph-users] Can't start OSD From: "O'Reilly, Dan" To: Karan Singh Cc: ceph-users@lists.ceph.com Date: Friday, 08/08/2014 10:53 Nope. Not

[ceph-users] Performance really drops from 700MB/s to 10MB/s

2014-08-13 Thread German Anders
id-journal [client.volumes] keyring = /etc/ceph/ceph.client.volumes.keyring Thanks in advance, Best regards, German Anders

Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s

2014-08-13 Thread German Anders
[eta 01h:26m:43s] It seems like it's doing nothing... German Anders --- Original message --- Subject: Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s From: Mark Nelson To: Date: Wednesday, 13/08/2014 11:00 On 08/13/2014 08:19 AM, German Anders wrot

Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s

2014-08-13 Thread German Anders
I can't even run an "ls" on the RBD. Thanks in advance, Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s From: German Anders To: Mark Nelson Cc: Date: Wednesday, 13/08/2014 11:09 A

Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s

2014-08-14 Thread German Anders
Run status group 0 (all jobs): WRITE: io=10240MB, aggrb=741672KB/s, minb=741672KB/s, maxb=741672KB/s, mint=14138msec, maxt=14138msec Disk stats (read/write): rbd0: ios=182/20459, merge=0/0, ticks=92/1213748, in_queue=1214796, util=99.80% ceph@mail02-old:~$ German Anders --
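
For context, a command along these lines produces a run-status summary like the one above — a sketch against a mapped krbd device (path and sizes illustrative; this writes destructively to the device):

    sudo fio --name=rbdwrite --filename=/dev/rbd0 --rw=write \
        --bs=4M --size=10G --direct=1 --ioengine=libaio --iodepth=32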

Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s

2014-08-14 Thread German Anders
I use nmon on each OSD server; it's a really good tool to find out what is going on regarding CPU, memory, disks, and networking. German Anders --- Original message --- Subject: Re: [ceph-users] Performance really drops from 700MB/s to 10MB/s From: Craig Lewis To: Mariusz

[ceph-users] Ceph + Infiniband CLUS & PUB Network

2015-03-17 Thread German Anders
Hi All, Does anyone have Ceph implemented with Infiniband for the cluster and public networks? Thanks in advance, German Anders
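
The split itself is plain ceph.conf configuration — a sketch assuming the IPoIB interfaces sit on dedicated subnets (addresses illustrative):

    [global]
        # client-facing traffic
        public network  = 10.10.1.0/24
        # OSD replication/recovery traffic over the IB fabric
        cluster network = 10.10.2.0/24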

Re: [ceph-users] Ceph + Infiniband CLUS & PUB Network

2015-03-17 Thread German Anders
es to install on the hosts, etc. Any help will really be appreciated. Thanks in advance, German Anders Storage System Engineer Leader Despegar | IT Team office +54 11 4894 3500 x3408 mobile +54 911 3493 7262 mail gand...@despegar.com --- Original message --- Subject: Re: [ceph-

[ceph-users] new relic ceph plugin

2015-05-17 Thread German Anders
Hi all, I want to know if someone has deployed a New Relic (Python) plugin for Ceph. Thanks a lot, Best regards, *Ger*

Re: [ceph-users] new relic ceph plugin

2015-05-18 Thread German Anders
Thanks a lot John, will definitely take a look at that. Best regards, *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-05-18 6:04 GMT-03:00 John Spray : > Not that I know of, but

Re: [ceph-users] krbd splitting large IO's into smaller IO's

2015-06-10 Thread German Anders
74.96 sdm 0.00 0.000.60 544.6019.20 40348.00 148.08 118.31 217.00 17.33 217.22 1.67 90.80 Thanks in advance, Best regards, *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail*

Re: [ceph-users] High IO Waits

2015-06-10 Thread German Anders
Thanks a lot Nick, I'll try with more PGs and if I don't see any improvement I'll add more OSD servers to the cluster. Best regards, *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.

[ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
Hi all, Is there any IO bottleneck reported on kernel 3.18.3-031803-generic? I'm having a lot of iowait and the cluster is really getting slow, yet there's actually not much going on. I read some time ago that there were some issues with kernel 3.18, so I would like to know what's the 'bes

Re: [ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
Hi Lincoln, how are you? It's with RBD. Thanks a lot, Best regards, *German* 2015-06-24 11:53 GMT-03:00 Lincoln Bryant : > Hi German, > > Is this with CephFS, or RBD? > > Thanks, > Lincoln > > On Jun 24, 2015, at 9:44 AM, German Anders wrote: > > Hi al

Re: [ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
mq was introduced which > brings two other limitations:- > > > > 1. Max queue depth of 128 > > 2. IO’s sizes are restricted/split to 128kb > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* 24

Re: [ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
Got it, thanks a lot Nick; I'll go with 4.0.6-wily. Best regards! *German* 2015-06-24 12:07 GMT-03:00 Nick Fisk : > There isn’t really a best option at the moment, although if your IO sizes > aren’t that big, 4.0+ is probably the best option. > > > > *From:* Ger

[ceph-users] infiniband implementation

2015-06-29 Thread German Anders
Hi cephers, I want to know if there's any 'best' practice or procedure to implement Ceph with Infiniband FDR 56Gb/s for front- and back-end connectivity. Any CRUSH tuning parameters, etc.? The Ceph cluster has: - 8 OSD servers - 2x Intel Xeon E5 8C with HT - 128G RAM - 2x 200G Intel
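
One commonly cited piece of host preparation for IPoIB front/back networks, independent of Ceph itself — a sketch (interface name illustrative):

    # Switch IPoIB to connected mode so the interface can carry a large MTU
    echo connected | sudo tee /sys/class/net/ib0/mode
    sudo ip link set ib0 mtu 65520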

Re: [ceph-users] infiniband implementation

2015-06-29 Thread German Anders
me is using the S3700 for OS but the S3500 for > journals. I would use the S3700 for journals and S3500 for the OS. Looks > pretty good other than that! > > > > ------ > *From: *"German Anders" > *To: *"ceph-users" > *S

[ceph-users] any recommendation of using EnhanceIO?

2015-07-01 Thread German Anders
Hi cephers, Is anyone out there using EnhanceIO in a production environment? Any recommendations? Any perf output to share showing the difference between using it and not? Thanks in advance, *German*

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
I would probably go with smaller OSD disks; 4TB is too much to lose in case of a broken disk, so maybe more OSD daemons of smaller size, 1TB or 2TB. A 4:1 relationship is good enough. Also, I think that a 200G disk for the journals would be OK, so you can save some money there; the OSDs of c

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
kind of disk you will get no more than 100-110 IOPS per disk. *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-01 20:54 GMT-03:00 Nate Curry : > 4TB is too much to lose? Why would
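
For what it's worth, the sizing rule of thumb from the Ceph docs of that era: the journal should absorb twice what the OSD can ingest during one filestore sync interval, i.e. osd journal size = 2 * (expected throughput * filestore max sync interval). In ceph.conf (value in MB; numbers illustrative):

    [osd]
        # e.g. 2 * 100 MB/s * 5 s = 1000 MB, rounded up
        osd journal size = 1024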

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
> big of an issue. Now that assumes that replication actually works well in > that size cluster. We're still assessing this part of the PoC > engagement. > > ~~shane > > > > > On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders" < > ceph

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
> On 02 Jul 2015, at 11:29, Emmanuel Florac > wrote: > > > > On Wed, 1 Jul 2015 17:13:03 -0300 > > German Anders > wrote: > > > >> Hi cephers, > >> > >> Is anyone out there using enhanceIO in a production > >&g

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
eaf firstn -1 type host step emit } # end crush map *German* 2015-07-02 8:15 GMT-03:00 Lionel Bouton : > On 07/02/15 12:48, German Anders wrote: > > The idea is to cache RBD at the host level. It could also be possible to > > cache at the OSD level. We have high iowait and we n

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
Yeah, 3TB SAS disks. *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-02 9:04 GMT-03:00 Jan Schermer : > And those disks are spindles? > Looks like there’s simply too few of

[ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
Hi all, I'm planning to deploy a new Ceph cluster with IB FDR 56Gb/s and I have the following HW: *3x MON Servers:* 2x Intel Xeon E5-2600 v3 8C 256GB RAM 1x IB FDR ADPT-DP (two ports for PUB network) 1x GB ADPT-DP Disk layout: SOFT-RAID: SCSI1 (0,0,0) (sda) - 120.0 GB ATA IN

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
ly > need higher-grade SSDs. You can save money on memory. > > What will be the role of this cluster? VM disks? Object storage? > Streaming?... > > Jan > > On 27 Aug 2015, at 17:56, German Anders wrote: > > Hi all, > >I'm planning to deploy a new Ce

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
cages on different UPSes, then you can do stuff like disable > barriers if you go with some cheaper drives that need it.) I'm not a CRUSH > expert, there are more tricks to do before you set this up. > > Jan > > On 27 Aug 2015, at 18:31, German Anders wrote: > > Hi Jan,

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
an save money on memory. > >>> > >>> What will be the role of this cluster? VM disks? Object storage? > >>> Streaming?... > >>> > >>> Jan > >>> > >>> On 27 Aug 2015, at 17:56, German Anders wrote: > >&g

[ceph-users] ceph version for production clusters?

2015-08-31 Thread German Anders
Hi cephers, What's the recommended version for new production clusters? Thanks in advance, Best regards, *German*

Re: [ceph-users] ceph version for production clusters?

2015-08-31 Thread German Anders
Thanks a lot Kobi *German* 2015-08-31 14:20 GMT-03:00 Kobi Laredo : > Hammer should be very stable at this point. > > *Kobi Laredo* > *Cloud Systems Engineer* | (*408) 409-KOBI* > > On Mon, Aug 31, 2015 at 8:51 AM, German Anders > wrote: > >> Hi cephers, >>

[ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
Hi cephers, I would like to know the production-readiness status of Accelio & Ceph. Does anyone have a home-made procedure implemented on Ubuntu? Recommendations, comments? Thanks in advance, Best regards, *German*

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
elio and Ceph are still in heavy development and not ready for production. > > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote: > Hi cephers, > > I would lik

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
how many nodes/OSDs/SSD or HDDs/ EC or Replication etc. > etc.). > > > > Thanks & Regards > > Somnath > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, September 01, 2015 10:39 AM

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
probably, not sure if it is added as git > submodule or not, Vu , could you please confirm ? > > > > Since we are working to make this solution work at scale, could you please > give us some idea what is the scale you are looking at for future > deployment ? > >

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
out the doc you are maintaining ? > > > > Regards > > Somnath > > > > *From:* German Anders [mailto:gand...@despegar.com] > *Sent:* Tuesday, September 01, 2015 11:36 AM > > *To:* Somnath Roy > *Cc:* Robert LeBlanc; ceph-users > *Subject:* Re: [ceph-users] Accelio &

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
> > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, September 01, 2015 12:00 PM > *To:* Somnath Roy > > *Cc:* ceph-users > *Subject:* Re: [ceph-users] Accelio & Ceph > > > > Th

[ceph-users] adding another mon failed

2013-11-29 Thread German Anders
avail 192 active+clean Does anyone have any idea what could be the issue here? Thanks in advance, Best regards, German Anders

Re: [ceph-users] radosgw daemon stalls on download of some files

2013-11-29 Thread German Anders
ks in advance, Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] radosgw daemon stalls on download of some files From: Sebastian To: ceph-users Date: Friday, 29/11/2013 16:18 Hi Yehuda, It's interesting: the responses are received, but it seems that

[ceph-users] Ceph Performance MB/sec

2013-12-01 Thread German Anders
5903 s, 702 MB/s Thanks in advance, Best regards, German Anders

Re: [ceph-users] Ceph Performance MB/sec

2013-12-01 Thread German Anders
156, util=50.86% sdd: ios=67692/34736, merge=0/0, ticks=490456/34692, in_queue=525144, util=51.05% root@e05-host05:/home/cloud# German Anders --- Original message --- Subject: Re: [ceph-users] Ceph Performance MB/sec From: Gilles Mocellin To: Date: Sunday, 01/12/2013 13:59 Le 0

Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server

2014-09-24 Thread German Anders
things work fine on kernel 3.13.0-35 German Anders --- Original message --- Subject: Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server From: Ilya Dryomov To: Micha Krause Cc: ceph-users@lists.ceph.com Date: Wednesday, 24/09/2014 11:33 On Wed, Sep 24, 2014 at

Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server

2014-09-24 Thread German Anders
3.13.0-35-generic? Really? I found myself in a similar situation to yours, and downgrading to that version worked fine; you could also try 3.14.9-031, which worked fine for me too. German Anders --- Original message --- Subject: Re: [ceph-users] Frequent Crashes on rbd

Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server

2014-09-24 Thread German Anders
And on 3.14.9-031? German Anders --- Original message --- Subject: Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server From: Andrei Mikhailovsky To: German Anders Cc: , Micha Krause Date: Wednesday, 24/09/2014 12:43 I also had the hang tasks issues with 3.13.0

[ceph-users] question about activate OSD

2014-10-31 Thread German Anders
ing']' returned non-zero exit status 1 [ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init upstart --mount /dev/sdf1 ceph@cephbkdeploy01:~/desp-bkp-cluster$ I'

Re: [ceph-users] question about activate OSD

2014-11-03 Thread German Anders
-keyring', '/var/lib/ceph/tmp/mnt.MW51n4/keyring']' returned non-zero exit status 1 [ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init upstart --mount /d

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread German Anders
also, between two hosts on a NetGear switch at 10GbE: rtt min/avg/max/mdev = 0.104/0.196/0.288/0.055 ms German Anders --- Original message --- Subject: [ceph-users] Typical 10GbE latency From: Wido den Hollander To: Date: Thursday, 06/11/2014 10:18 Hello, While
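
The numbers in this thread come from plain ICMP round-trip tests — a sketch of the usual invocation (hostname illustrative; intervals below 0.2s require root):

    sudo ping -c 1000 -i 0.01 -q osd-host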

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-11-13 Thread German Anders

Re: [ceph-users] Typical 10GbE latency

2014-11-13 Thread German Anders
[fixed] l2-fwd-offload: off busy-poll: on [fixed] ceph@cephosd01:~$ German Anders --- Original message --- Subject: Re: [ceph-users] Typical 10GbE latency From: Stephan Seitz To: Wido den Hollander Cc: Date: Thursday, 13/11/2014 15:39 Indeed, there must be something

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-11-13 Thread German Anders

Re: [ceph-users] slow read-performance inside the vm

2015-01-08 Thread German Anders
That depends: with which block size do you get those numbers? Ceph is really good with block sizes > 256KB, 1M, 4M... German Anders --- Original message --- Subject: [ceph-users] slow read-performance inside the vm From: Patrik Plank To: ceph-users@lists.ceph.com Date:
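
A quick way to see the block-size effect being described — a sketch using rados bench against a scratch pool (pool name illustrative; -b takes bytes):

    # 4MB writes vs 4KB writes, 30 seconds each
    rados bench -p testpool 30 write -b 4194304 --no-cleanup
    rados bench -p testpool 30 write -b 4096 --no-cleanup
    rados -p testpool cleanup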

[ceph-users] Ceph with IB and ETH

2015-01-23 Thread German Anders
o our existing Ethernet clients can communicate with the IB clients... now, is there any specification or consideration regarding this type of configuration in terms of Ceph? Thanks in advance, Regards, German Anders

Re: [ceph-users] Error in starting ceph

2013-12-05 Thread German Anders
Hi Sahana, Did you already create any OSD, with the osd prepare and activate commands? Best regards. Sent from my Samsung GT-i8190L Original message From: Sahana Date: 05/12/2013 07:26 (GMT-03:00) To: ceph-us...@ceph.com Subject: [ceph-users] Error in star

[ceph-users] Ceph read & write performance benchmark

2013-12-11 Thread German Anders
if anybody had some recommendations or tips regarding the configuration for performance. The filesystem to be used is XFS. I'd really appreciate the help. Thanks in advance, Best regards, German Anders

Re: [ceph-users] Ceph read & write performance benchmark

2013-12-12 Thread German Anders
ournal you lose all those 4 OSDs, right? The 10Gb connection is because we already had our environment at that connectivity speed. Do you know of customers that have a Ceph cluster running Cassandra, MongoDB, and Hadoop clusters on it? Thanks in advance,

[ceph-users] Ceph not responding after trying to add a new MON

2013-12-13 Thread German Anders

Re: [ceph-users] Ceph not responding after trying to add a new MON

2013-12-13 Thread German Anders

Re: [ceph-users] Ceph not responding after trying to add a new MON

2013-12-13 Thread German Anders

Re: [ceph-users] Ceph not responding after trying to add a new MON

2013-12-13 Thread German Anders

Re: [ceph-users] Ceph not responding after trying to add a new MON

2013-12-13 Thread German Anders

Re: [ceph-users] Ceph not responding after trying to add a new MON

2013-12-17 Thread German Anders

[ceph-users] Ceph New Cluster Configuration Recommendations

2013-12-18 Thread German Anders
ster how can I specify the name of the cluster? Again, sorry if these are very newbie questions, but as I said I'm new to Ceph. Thanks in advance, Best regards, German Anders

Re: [ceph-users] Ceph New Cluster Configuration Recommendations

2013-12-18 Thread German Anders
Thanks! I forgot to mention that we are using a D2200sb Storage Blade for the disks inside the enclosure. German Anders --- Original message --- Subject: Re: [ceph-users] Ceph New Cluster Configuration Recommendations From: Alfredo Deza To: German Anders Cc: ceph-users

[ceph-users] Journal configuration

2013-12-26 Thread German Anders
ideas? or recommendations? It's better to partition the journal with XFS, right? Thanks in advance, Best regards, German Anders

[ceph-users] HEALTH_WARN too few pgs per osd (3 < min 20)

2013-12-27 Thread German Anders
50  0.06999  osd.50  up  1
51  0.06999  osd.51  up  1
52  0.06999  osd.52  up  1
53  0.45     osd.53  up  1
54  0.45     osd.54  up  1
ceph@ceph-node04:~$ Could someone give me a hand resolving this situation?

[ceph-users] rbd: add failed: (1) Operation not permitted

2013-12-27 Thread German Anders
try to map it, it fails with: sudo rbd map ceph-pool/RBDTest --id admin -k /home/ceph/ceph-cluster-prd/ceph.client.admin.keyring rbd: add failed: (1) Operation not permitted Could anyone give me a hand, or does anyone know what this issue could be? Am I missing something? Thanks in advance, Be

Re: [ceph-users] HEALTH_WARN too few pgs per osd (3 < min 20)

2013-12-27 Thread German Anders
Thanks a lot! I've increased the number of PGs and PGPs and now it works fine :) Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] HEALTH_WARN too few pgs per osd (3 < min 20) From: Ирек Фасихов To: German Anders Cc: ceph-users@lists.ceph.co
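
For reference, the fix amounts to the following (pool name and counts illustrative; pg_num can only be increased, and pgp_num should be raised to match):

    ceph osd pool set rbd pg_num 512
    ceph osd pool set rbd pgp_num 512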

[ceph-users] Cluster Performance very Poor

2013-12-27 Thread German Anders
9
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average Latency: 0.420573
Stddev Latency:  0.226378
Max latency:     1.81426
Min latency:     0.101352
root@ceph-node03:/home/ceph# Thanks in advance, Best regards, German A

Re: [ceph-users] rbd: add failed: (1) Operation not permitted

2013-12-27 Thread German Anders
That doesn't work either; it displays the "rbd: add failed: (1) Operation not permitted" error message. The only way I've found to map it is by running: rbd map -m 10.1.1.151 RBDTest --pool ceph-pool --id admin -k /home/ceph/ceph-cluster-prd/ceph.client.admin.keyri

Re: [ceph-users] Cluster Performance very Poor

2013-12-27 Thread German Anders
u have the commands to do those movements? Thanks a lot, Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] Cluster Performance very Poor From: Mark Nelson To: Date: Friday, 27/12/2013 15:39 On 12/27/2013 12:19 PM, German Anders wrote: Hi Ce

Re: [ceph-users] Cluster Performance very Poor

2013-12-27 Thread German Anders
dvance, German Anders --- Original message --- Subject: Re: [ceph-users] Cluster Performance very Poor From: Mark Nelson To: Date: Friday, 27/12/2013 15:39 On 12/27/2013 12:19 PM, German Anders wrote: Hi Cephers, I've run a rados bench to measure the throughput o

Re: [ceph-users] Unable to add monitor nodes

2014-02-17 Thread German Anders

Re: [ceph-users] Ceph t-shirts are available

2014-03-29 Thread German Anders
How can I get one? Sent from my Samsung GT-i8190L Original message From: Loic Dachary Date: 29/03/2014 11:35 (GMT-03:00) To: ceph-users Cc: Ceph Community Subject: [ceph-users] Ceph t-shirts are available

Re: [ceph-users] RBD as backend for iSCSI SAN Targets

2014-04-01 Thread German Anders
ench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw cleanup Thanks in advance, Best regards, German Anders --- Original message --- Subject: Re: [ceph-users] RBD as b

Re: [ceph-users] write speed issue on RBD image

2014-04-02 Thread German Anders
l options when formatting the XFS filesystem? And/or mount options? What hypervisor are you using? Best regards, German Anders Field Storage Support Engineer Despegar.com - IT Team --- Original message --- Subject: [ceph-users] write speed issue on RBD image From: Russell E. Glaue

Re: [ceph-users] write speed issue on RBD image

2014-04-02 Thread German Anders
ks, memory, CPU and swap, and look there for anything that's not normal. Hope this helps. Best regards, German Anders Field Storage Support Engineer Despegar.com - IT Team --- Original message --- Subject: Re: [ceph-users] write speed issue on RBD image From: Russell E

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-14 Thread German Anders
Has anyone achieved throughput on RBD of 600MB/s or more on (rw) with a block size of 32768k? German Anders Field Storage Support Engineer Despegar.com - IT Team --- Original message --- Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-14 Thread German Anders
I forgot to mention: of course, on a 10GbE network. German Anders Field Storage Support Engineer Despegar.com - IT Team --- Original message --- Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices From: German Anders To: Christian Balzer Cc: Date

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-14 Thread German Anders
Hi Josef, Thanks a lot for the quick answer. Yes, 32M and random writes. Also, do you get those values with an MTU of 9000 or with the traditional and beloved MTU of 1500? German Anders Field Storage Support Engineer Despegar.com - IT Team --- Original message --- Subject: Re

[ceph-users] Ceph new mon deploy v9.0.3-1355

2015-09-02 Thread German Anders
Hi cephers, trying to deploy a new Ceph cluster with the master release (v9.0.3), and when trying to create the initial MONs an error appears saying "admin_socket: exception getting command descriptions: [Errno 2] No such file or directory"; find the log: ... [ceph_deploy.mon][INFO ] distro

[ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
Hi cephers, I have the following scheme: 7x OSD servers with: 4x 800GB SSD Intel DC S3510 (OSD-SSD) 3x 120GB SSD Intel DC S3500 (Journals) 5x 3TB SAS disks (OSD-SAS) The OSD servers are located on two separate racks with two power circuits each. I would like to know what is the
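
One common way to keep the SSD and SAS OSDs in separate pools is a second CRUSH root with its own rule — a hedged sketch (bucket, host, rule, and pool names are illustrative):

    # SSD-only root with per-host SSD buckets hanging off it
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket cibosd01-ssd host
    ceph osd crush move cibosd01-ssd root=ssd
    ceph osd crush set osd.0 0.8 host=cibosd01-ssd
    # Rule that selects only from the ssd root, and a pool that uses it
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd pool create ssdpool 512 512 replicated ssd-rule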

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
If you can get by with just the > SAS disks for now and make a more informed decision about the cache tiering > when Infernalis is released then that might be your best bet. > > > > Otherwise you might just be best using them as a basic SSD only Pool. > > > > Nick >

[ceph-users] ceph osd prepare btrfs

2015-09-04 Thread German Anders
Trying to prepare an OSD with btrfs, and getting this error: [cibosd04][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdc [cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid [cibosd04][WARNI

[ceph-users] ceph-deploy prepare btrfs osd error

2015-09-04 Thread German Anders
Any ideas? ceph@cephdeploy01:~/ceph-ib$ ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc [ceph_deploy.cl

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-05 Thread German Anders
a lot! Best regards German On Saturday, September 5, 2015, Christian Balzer wrote: > > Hello, > > On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote: > > > Hi cephers, > > > >I've the following scheme: > > > > 7x OSD servers with: > >

Re: [ceph-users] ceph-deploy prepare btrfs osd error

2015-09-07 Thread German Anders
> There appears to be an issue with zap not wiping the partitions correctly. > http://tracker.ceph.com/issues/6258 > > > > Yours seems slightly different though. Curious, what size disk are you > trying to use? > > > > Cheers, > > > > Simon > >
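
Given the zap issue referenced above, wiping the GPT structures by hand before prepare is a common workaround — a sketch (destructive; device name illustrative):

    # Destroy the primary and backup GPT, then clear the start of the disk
    sudo sgdisk --zap-all /dev/sdc
    sudo dd if=/dev/zero of=/dev/sdc bs=1M count=10
    sudo partprobe /dev/sdc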

[ceph-users] Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
Hi all, I would like to know if, with this new release of Infernalis, there is somewhere a procedure to implement the XIO messenger with IB and Ceph. Also, is it possible to change an existing Ceph cluster to this kind of new setup (the existing cluster does not hold any production data yet)? T

[ceph-users] Fwd: Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
eers, *German* -- Forwarded message ------ From: German Anders Date: 2015-10-14 12:46 GMT-03:00 Subject: Proc for Impl XIO mess with Infernalis To: ceph-users Hi all, I would like to know if, with this new release of Infernalis, there is somewhere a procedure to implemen
