Hi Everyone,
Is anybody using a ramdisk to put the journal on? If so, could
you please share the commands to implement that? I'm having some
issues with it and want to test it out to see if I could get
better performance.
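What I had in mind, unless someone has a better recipe, is roughly the
following for a single OSD (osd.0, the 4G size and the paths are just
examples, and obviously the journal contents are lost on reboot, so this
is only for testing):

# stop the OSD and flush its current journal (assumes osd.0 on Ubuntu/upstart)
sudo stop ceph-osd id=0
sudo ceph-osd -i 0 --flush-journal
# create a tmpfs-backed ramdisk and point the journal symlink at it
sudo mkdir -p /mnt/ram-journal
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ram-journal
sudo rm /var/lib/ceph/osd/ceph-0/journal
sudo ln -s /mnt/ram-journal/journal-osd.0 /var/lib/ceph/osd/ceph-0/journal
# recreate the journal in the ramdisk and bring the OSD back up
sudo ceph-osd -i 0 --mkjournal
sudo start ceph-osd id=0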
Thanks in advance,
German A
try a RAMDISK for journals; I've noticed that
he implemented that on their Ceph cluster.
I will really appreciate the help on this. Also if you need me to send
you some more information about the Ceph scheme please let me know.
Also, if someone could share some detailed conf info it will really help.
I have the same issue; it would be really helpful if someone has any
home-made procedure or some notes.
German Anders
--- Original message ---
Subject: Re: [ceph-users] Calamari Goes Open Source
From: Larry Liu
To: Mike Dawson
Cc: Ceph Devel, Ceph-User
Date: Wednesday, 30
Also, has anyone tried flashcache from Facebook on Ceph? Cons? Pros?
Any performance improvement? And dm-cache?
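From what I've read, the basic flashcache setup would look roughly like
this (device names and the cache name are purely illustrative; check the
flashcache README for the exact options and modes):

# create a writeback cache device "cachedev" from an SSD partition in front
# of an OSD data disk; the result appears under /dev/mapper/cachedev
sudo flashcache_create -p back cachedev /dev/sdk1 /dev/sdb
# the OSD filesystem would then go on /dev/mapper/cachedev instead of /dev/sdb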
German Anders
Hi Christian,
How are you? Thanks a lot for the answers; mine are in red.
--- Original message ---
Subject: Re: [ceph-users] Using Ramdisk wi
From: Christian Balzer
To:
Cc: German Anders
Date: Wednesday, 30/07/2014 11:42
Hello,
On Wed, 30 Jul 2014 09:55:49 -0400 German Anders wrote
Hi Ilya,
I think you need to upgrade the kernel version of that Ubuntu
server. I had a similar problem, and after upgrading the kernel to 3.13
the problem was resolved successfully.
Best regards,
German Anders
--- Original message ---
From: Ilya Dryomov
Date: 01/08/2014 08:22 (GMT-03:00)
To: German Anders
Cc: Larry Liu ,ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS
On Fri, Aug 1, 2014 at 12:29 AM, German Anders wrote:
> Hi Ilya,
> I think you n
--- Original message ---
Subject: Re: [ceph-users] Ceph writes stall for long periods with no
disk/network activity
From: Chris Kitzmiller
To: Mariusz Gronczewski
Cc:
Date: Monday, 04/08/2014 17:28
On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote:
I got weird stalling during writes
Hi all, does anybody have a step-by-step procedure to install Ceph
from a tar.gz file? I would like to test version 0.82.
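What I've pieced together so far is the usual autotools flow, something
like the following (assuming the build dependencies from the README are
already installed; the download URL is from memory, so adjust as needed):

# fetch and unpack the release tarball
wget http://ceph.com/download/ceph-0.82.tar.gz
tar xzf ceph-0.82.tar.gz
cd ceph-0.82
# build and install with the usual system paths
./autogen.sh
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j$(nproc)
sudo make install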
Thanks in advance,
Best regards,
German Anders
I also
tried to run the command manually on the OSD server, but I'm getting the
same error message. Any ideas?
Thanks in advance,
Best regards,
German Anders
1 root root 92 May 12 11:14 rbdmap
Any ideas? I'm stuck here and can't go any further.
Thanks in advance,
Best regards,
German Anders
How about the logs? Is there anything there?
ls /var/log/ceph/
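For example (the exact file names depend on the daemon IDs running on
that host):

ls -l /var/log/ceph/
tail -n 50 /var/log/ceph/ceph-osd.*.log
grep -i error /var/log/ceph/ceph-osd.*.log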
German Anders
--- Original message ---
Subject: Re: [ceph-users] Can't start OSD
From: "O'Reilly, Dan"
To: Karan Singh
Cc: ceph-users@lists.ceph.com
Date: Friday, 08/08/2014 10:53
Nope. Not
id-journal
[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring
Thanks in advance,
Best regards,
German Anders
[eta 01h:26m:43s]
It seems like it is doing nothing...
German Anders
--- Original message ---
Subject: Re: [ceph-users] Performance really drops from 700MB/s to
10MB/s
From: Mark Nelson
To:
Date: Wednesday, 13/08/2014 11:00
On 08/13/2014 08:19 AM, German Anders wrote:
I can't run an "ls" on the rbd.
Thanks in advance,
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] Performance really drops from 700MB/s to
10MB/s
From: German Anders
To: Mark Nelson
Cc:
Date: Wednesday, 13/08/2014 11:09
A
Run status group 0 (all jobs):
WRITE: io=10240MB, aggrb=741672KB/s, minb=741672KB/s,
maxb=741672KB/s, mint=14138msec, maxt=14138msec
Disk stats (read/write):
rbd0: ios=182/20459, merge=0/0, ticks=92/1213748, in_queue=1214796,
util=99.80%
ceph@mail02-old:~$
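For reference, a sequential-write run like the one above can be produced
with an fio job along these lines (the options are illustrative, not the
exact job file used here):

# 10G of 4M sequential writes straight to the mapped RBD device
fio --name=writetest --filename=/dev/rbd0 --rw=write --bs=4M --size=10G \
    --direct=1 --ioengine=libaio --iodepth=32 --group_reporting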
German Anders
--
I use nmon on each OSD server; it is a really good tool to find out
what is going on regarding CPU, memory, disks and networking.
German Anders
--- Original message ---
Subject: Re: [ceph-users] Performance really drops from 700MB/s to
10MB/s
From: Craig Lewis
To: Mariusz
Hi All,
Does anyone have Ceph implemented with InfiniBand for the cluster
and public networks?
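What I have in mind is simply pointing both networks at the IPoIB
interfaces in ceph.conf, something like this (the subnets are made up):

[global]
# public (client-facing) and cluster (replication) networks over IPoIB
public network  = 10.10.10.0/24
cluster network = 10.10.20.0/24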
Thanks in advance,
German Anders
es to install on the hosts, etc.
Any help will really be appreciated.
Thanks in advance,
German Anders
Storage System Engineer Leader
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail gand...@despegar.com
--- Original message ---
Subject: Re: [ceph-
Hi all,
I want to know if someone has deployed a New Relic (Python) plugin for
Ceph.
Thanks a lot,
Best regards,
*Ger*
Thanks a lot John, I will definitely take a look at that.
Best regards,
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.com
2015-05-18 6:04 GMT-03:00 John Spray :
> Not that I know of, but
74.96
sdm   0.00   0.00   0.60   544.60   19.20   40348.00   148.08   118.31   217.00   17.33   217.22   1.67   90.80
Thanks in advance,
Best regards,
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail*
Thanks a lot Nick, I'll try with more PGs, and if I don't see any improvement
I'll add more OSD servers to the cluster.
Best regards,
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.
Hi all,
Is there any IO bottleneck reported on kernel 3.18.3-031803-generic?
I'm having a lot of iowait and the cluster is really getting slow,
and there's actually not much going on. I read some time ago that there
were some issues with kernel 3.18, so I would like to know what's the 'bes
Hi Lincoln,
how are you? It's with RBD
Thanks a lot,
Best regards,
*German*
2015-06-24 11:53 GMT-03:00 Lincoln Bryant :
> Hi German,
>
> Is this with CephFS, or RBD?
>
> Thanks,
> Lincoln
>
> On Jun 24, 2015, at 9:44 AM, German Anders wrote:
>
> Hi al
mq was introduced which
> brings two other limitations:-
>
>
>
> 1. Max queue depth of 128
>
> 2. IO’s sizes are restricted/split to 128kb
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *German Anders
> *Sent:* 24
Got it, thanks a lot Nick, I'll go with 4.0.6-wily.
Best regards!
*German*
2015-06-24 12:07 GMT-03:00 Nick Fisk :
> There isn’t really a best option at the moment, although if your IO sizes
> aren’t that big, 4.0+ is probably the best option.
>
>
>
> *From:* Ger
Hi cephers,
I want to know if there's any 'best' practice or procedure to implement
Ceph with InfiniBand FDR 56Gb/s for front-end and back-end connectivity. Any
CRUSH tuning parameters, etc.?
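For the tunables part, I assume it mostly comes down to picking a
profile, e.g. (the 'optimal' profile here is just an example, and
changing it triggers data movement):

# show the current CRUSH tunables, then switch profile
ceph osd crush show-tunables
ceph osd crush tunables optimal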
The Ceph cluster has:
- 8 OSD servers
- 2x Intel Xeon E5 8C with HT
- 128G RAM
- 2x 200G Intel
me is using the S3700 for OS but the S3500 for
> journals. I would use the S3700 for journals and S3500 for the OS. Looks
> pretty good other than that!
>
>
>
> ------
> *From: *"German Anders"
> *To: *"ceph-users"
> *S
Hi cephers,
Is anyone out there that implement enhanceIO in a production
environment? any recommendation? any perf output to share with the diff
between using it and not?
Thanks in advance,
*German*
I would probably go with smaller OSD disks; 4TB is too much to lose in
case of a broken disk, so maybe more OSD daemons with smaller disks, maybe 1TB
or 2TB in size. A 4:1 relationship is good enough, and I also think that a 200G
disk for the journals would be OK, so you can save some money there; the OSDs
of c
kind of disk you will get no more than 100-110 iops per disk
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.com
2015-07-01 20:54 GMT-03:00 Nate Curry :
> 4TB is too much to lose? Why would
> big of an issue. Now that assumes that replication actually works well in
> that size cluster. We're still cessing out this part of the PoC
> engagement.
>
> ~~shane
>
>
>
>
> On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders" <
> ceph
> On 02 Jul 2015, at 11:29, Emmanuel Florac wrote:
> >
> > On Wed, 1 Jul 2015 17:13:03 -0300
> > German Anders wrote:
> >
> >> Hi cephers,
> >>
> >> Is anyone out there that implement enhanceIO in a production
> >&g
eaf firstn -1 type host
step emit
}
# end crush map
*German*
2015-07-02 8:15 GMT-03:00 Lionel Bouton :
> On 07/02/15 12:48, German Anders wrote:
> > The idea is to cache rbd at a host level. Also could be possible to
> > cache at the osd level. We have high iowait and we n
yeah 3TB SAS disks
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.com
2015-07-02 9:04 GMT-03:00 Jan Schermer :
> And those disks are spindles?
> Looks like there’s simply too few of
Hi all,
I'm planning to deploy a new Ceph cluster with IB FDR 56Gb/s and I have
the following HW:
*3x MON Servers:*
2x Intel Xeon E5-2600 v3 8C
256GB RAM
1x IB FDR ADPT-DP (two ports for PUB network)
1x GB ADPT-DP
Disk Layout:
SOFT-RAID:
SCSI1 (0,0,0) (sda) - 120.0 GB ATA IN
ly
> need higher-grade SSDs. You can save money on memory.
>
> What will be the role of this cluster? VM disks? Object storage?
> Streaming?...
>
> Jan
>
> On 27 Aug 2015, at 17:56, German Anders wrote:
>
> Hi all,
>
>I'm planning to deploy a new Ce
cages on different UPSes, then you can do stuff like disable
> barriers if you go with some cheaper drives that need it.) I'm not a CRUSH
> expert, there are more tricks to do before you set this up.
>
> Jan
>
> On 27 Aug 2015, at 18:31, German Anders wrote:
>
> Hi Jan,
an save money on memory.
> >>>
> >>> What will be the role of this cluster? VM disks? Object storage?
> >>> Streaming?...
> >>>
> >>> Jan
> >>>
> >>> On 27 Aug 2015, at 17:56, German Anders wrote:
> >&g
Hi cephers,
What's the recommended version for new production clusters?
Thanks in advance,
Best regards,
*German*
Thanks a lot Kobi
*German*
2015-08-31 14:20 GMT-03:00 Kobi Laredo :
> Hammer should be very stable at this point.
>
> *Kobi Laredo*
> *Cloud Systems Engineer* | (*408) 409-KOBI*
>
> On Mon, Aug 31, 2015 at 8:51 AM, German Anders
> wrote:
>
>> Hi cephers,
>>
Hi cephers,
I would like to know the production-ready status of Accelio & Ceph.
Does anyone have a home-made procedure implemented on Ubuntu? Any
recommendations or comments?
Thanks in advance,
Best regards,
*German*
elio and Ceph are still in heavy development and not ready for production.
>
> -
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote:
> Hi cephers,
>
> I would lik
how many nodes/OSDs/SSD or HDDs/ EC or Replication etc.
> etc.).
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *German Anders
> *Sent:* Tuesday, September 01, 2015 10:39 AM
probably, not sure if it is added as git
> submodule or not, Vu , could you please confirm ?
>
>
>
> Since we are working to make this solution work at scale, could you please
> give us some idea what is the scale you are looking at for future
> deployment ?
>
>
out the doc you are maintaining ?
>
>
>
> Regards
>
> Somnath
>
>
>
> *From:* German Anders [mailto:gand...@despegar.com]
> *Sent:* Tuesday, September 01, 2015 11:36 AM
>
> *To:* Somnath Roy
> *Cc:* Robert LeBlanc; ceph-users
> *Subject:* Re: [ceph-users] Accelio &am
>
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *German Anders
> *Sent:* Tuesday, September 01, 2015 12:00 PM
> *To:* Somnath Roy
>
> *Cc:* ceph-users
> *Subject:* Re: [ceph-users] Accelio & Ceph
>
>
>
> Th
avail
192 active+clean
Anyone have any idea what could be the issue here?
Thanks in advance,
Best regards,
German Anders
ks in advance,
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] radosgw daemon stalls on download of some
files
From: Sebastian
To: ceph-users
Date: Friday, 29/11/2013 16:18
Hi Yehuda,
It's interesting, the responses are received but seems that
5903 s, 702 MB/s
Thanks in advance,
Best regards,
German Anders
156, util=50.86%
sdd: ios=67692/34736, merge=0/0, ticks=490456/34692,
in_queue=525144, util=51.05%
root@e05-host05:/home/cloud#
German Anders
--- Original message ---
Subject: Re: [ceph-users] Ceph Performance MB/sec
From: Gilles Mocellin
To:
Date: Sunday, 01/12/2013 13:59
Le 0
things work fine on kernel 3.13.0-35
German Anders
--- Original message ---
Subject: Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server
From: Ilya Dryomov
To: Micha Krause
Cc: ceph-users@lists.ceph.com
Date: Wednesday, 24/09/2014 11:33
On Wed, Sep 24, 2014 at
3.13.0-35-generic? Really? I found myself in a similar situation
to yours, and downgrading to that version works fine; you could also
try 3.14.9-031, it works fine for me as well.
German Anders
--- Original message ---
Subject: Re: [ceph-users] Frequent Crashes on rbd
And on 3.14.9-031?
German Anders
--- Original message ---
Subject: Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server
From: Andrei Mikhailovsky
To: German Anders
Cc: Micha Krause
Date: Wednesday, 24/09/2014 12:43
I also had the hang tasks issues with 3.13.0
ing']' returned non-zero exit status
1
[ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit
status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init upstart --mount /dev/sdf1
ceph@cephbkdeploy01:~/desp-bkp-cluster$
I'
-keyring',
'/var/lib/ceph/tmp/mnt.MW51n4/keyring']' returned non-zero exit status
1
[ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit
status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init upstart --mount /d
also, between two hosts on a NetGear SW model at 10GbE:
rtt min/avg/max/mdev = 0.104/0.196/0.288/0.055 ms
German Anders
--- Original message ---
Subject: [ceph-users] Typical 10GbE latency
From: Wido den Hollander
To:
Date: Thursday, 06/11/2014 10:18
Hello,
While
[fixed]
l2-fwd-offload: off
busy-poll: on [fixed]
ceph@cephosd01:~$
German Anders
--- Original message ---
Subject: Re: [ceph-users] Typical 10GbE latency
From: Stephan Seitz
To: Wido den Hollander
Cc:
Date: Thursday, 13/11/2014 15:39
Indeed, there must be something
That depends... with which block size do you get those numbers? Ceph is
really good with block sizes > 256KB, 1M, 4M...
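A quick way to compare is a rados bench run at a couple of object sizes,
something like (pool name, duration and thread count are just examples):

# 4KB vs 4MB writes, 16 concurrent ops each
rados bench -p rbd 30 write -t 16 -b 4096
rados bench -p rbd 30 write -t 16 -b 4194304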
German Anders
--- Original message ---
Subject: [ceph-users] slow read-performance inside the vm
From: Patrik Plank
To: ceph-users@lists.ceph.com
Date
o our existing Ethernet clients can communicate
with the IB clients... now... is there any specification or
consideration regarding this type of configuration in terms of Ceph?
Thanks in advance,
Regards,
German Anders
Hi Sahana,
Did you already create any OSDs, with the osd prepare and activate commands?
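I mean something along these lines with ceph-deploy (the hostnames and
devices below are just placeholders):

# old-style syntax: {node}:{data-disk}[:{journal-disk}]
ceph-deploy osd prepare node1:sdb:/dev/sdc
ceph-deploy osd activate node1:/dev/sdb1:/dev/sdc1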
Best regards
Sent from my personal Samsung GT-i8190L
Original message
From: Sahana
Date: 05/12/2013 07:26 (GMT-03:00)
To: ceph-us...@ceph.com
Subject: [ceph-users] Error in star
if anybody had some recommendations or tips regarding the
configuration for performance. The filesystem to be used is XFS.
I would really appreciate the help.
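To be concrete, the XFS-related settings I keep seeing referenced are
along these lines (these are the commonly cited defaults, not something
tuned for my workload):

[osd]
# mkfs and mount options used for the OSD data partitions
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = rw,noatime,inode64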
Thanks in advance,
Best regards,
German Anders
ournal you lose all
those 4 OSDs, right?
The 10Gb connection is because we already had our environment
with that connectivity speed. Do you know of customers that had a Ceph
cluster with Cassandra, MongoDB and Hadoop clusters running on it?
Thanks in advance,
ster how can I specify
the name of the cluster?
Again, sorry if these are very newbie questions, but as I said I'm new to
Ceph.
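For instance, is it just the --cluster flag on ceph-deploy, something
like this ("backup" being only an example name)?

# create the new cluster config under the name "backup" (backup.conf)
ceph-deploy --cluster backup new mon01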
Thanks in advance,
Best regards,
German Anders
Thanks! I forgot to mention that we are using a D2200sb Storage Blade
for the disks inside the enclosure.
German Anders
--- Original message ---
Subject: Re: [ceph-users] Ceph New Cluster Configuration
Recommendations
From: Alfredo Deza
To: German Anders
Cc: ceph-users
Any ideas or recommendations? It is better to
partition the journal with XFS, right?
Thanks in advance,
Best regards,
German Anders
50   0.06999   osd.50   up   1
51   0.06999   osd.51   up   1
52   0.06999   osd.52   up   1
53   0.45      osd.53   up   1
54   0.45      osd.54   up   1
ceph@ceph-node04:~$
Could someone give me a hand to resolve this situation?
try to map it, it failed
with:
sudo rbd map ceph-pool/RBDTest --id admin -k
/home/ceph/ceph-cluster-prd/ceph.client.admin.keyring
rbd: add failed: (1) Operation not permitted
Could anyone give me a hand or know what this issue could be? Am I
missing something?
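So far the only generic checks I know of are the client caps and the
kernel log right after the failed map:

# verify what caps this client actually has
ceph auth get client.admin
# the kernel rbd client usually logs the real reason for the failure here
dmesg | tail -n 20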
Thanks in advance,
Be
Thanks a lot! I've increased the number of PGs and PGPs and now it works
fine :)
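For the archives, that is just the standard pool resize (the pool name is
mine, the counts below are illustrative rather than the exact ones I used):

ceph osd pool set ceph-pool pg_num 512
ceph osd pool set ceph-pool pgp_num 512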
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] HEALTH_WARN too few pgs per osd (3 < min 20)
From: Ирек Фасихов
To: German Anders
Cc: ceph-users@lists.ceph.co
9
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average Latency:0.420573
Stddev Latency: 0.226378
Max latency:1.81426
Min latency:0.101352
root@ceph-node03:/home/ceph#
Thanks in advance,
Best regards,
German A
That doesn't work either; it displays the "rbd: add failed: (1) Operation
not permitted" error message. The only way I've found to map it is by
running:
rbd map -m 10.1.1.151 RBDTest --pool ceph-pool --id admin -k
/home/ceph/ceph-cluster-prd/ceph.client.admin.keyri
u had
the commands to do those movements?
Thanks a lot,
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] Cluster Performance very Poor
From: Mark Nelson
To:
Date: Friday, 27/12/2013 15:39
On 12/27/2013 12:19 PM, German Anders wrote:
Hi Ce
dvance,
German Anders
--- Original message ---
Subject: Re: [ceph-users] Cluster Performance very Poor
From: Mark Nelson
To:
Date: Friday, 27/12/2013 15:39
On 12/27/2013 12:19 PM, German Anders wrote:
Hi Cephers,
I've run a rados bench to measure the throughput o
How can I get one?
Sent from my personal Samsung GT-i8190L
Original message
From: Loic Dachary
Date: 29/03/2014 11:35 (GMT-03:00)
To: ceph-users
Cc: Ceph Community
Subject: [ceph-users] Ceph t-shirts are available
ench --num-threads=16 --test=fileio --file-total-size=3G
--file-test-mode=rndrw run
sysbench --num-threads=16 --test=fileio --file-total-size=3G
--file-test-mode=rndrw cleanup
Thanks in advance,
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] RBD as b
l options when formatting the XFS filesystem? and/or mount
options? What hypervisor are you using?
Best regards,
German Anders
Field Storage Support Engineer
Despegar.com - IT Team
--- Original message ---
Subject: [ceph-users] write speed issue on RBD image
From: Russell E. Glaue
ks, memory, CPU and swap,
and look there for something that is not normal.
Hope this helps,
Best regards,
German Anders
Field Storage Support Engineer
Despegar.com - IT Team
--- Original message ---
Subject: Re: [ceph-users] write speed issue on RBD image
From: Russell E
Has anyone gotten throughput on RBD of 600MB/s or more
on (rw) with a block size of 32768k?
German Anders
Field Storage Support Engineer
Despegar.com - IT Team
--- Original message ---
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and
backing devices
I forgot to mention, of course on a 10GbE network
German Anders
Field Storage Support Engineer
Despegar.com - IT Team
--- Original message ---
Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal
and backing devices
From: German Anders
To: Christian Balzer
Cc:
Date
Hi Josef,
Thanks a lot for the quick answer.
Yes, 32M and random writes.
Also, do you get those values, I guess, with an MTU of 9000 or with
the traditional and beloved MTU of 1500?
German Anders
Field Storage Support Engineer
Despegar.com - IT Team
--- Original message ---
Subject: Re
Hi cephers, I'm trying to deploy a new Ceph cluster with the master release
(v9.0.3), and when trying to create the initial MONs an error appears
saying "admin_socket: exception getting command descriptions: [Errno
2] No such file or directory"; find the log below:
...
[ceph_deploy.mon][INFO ] distro
Hi cephers,
I've the following scheme:
7x OSD servers with:
4x 800GB SSD Intel DC S3510 (OSD-SSD)
3x 120GB SSD Intel DC S3500 (Journals)
5x 3TB SAS disks (OSD-SAS)
The OSD servers are located on two separate Racks with two power circuits
each.
I would like to know what is the
If you can get by with just the
> SAS disks for now and make a more informed decision about the cache tiering
> when Infernalis is released then that might be your best bet.
>
>
>
> Otherwise you might just be best using them as a basic SSD only Pool.
>
>
>
> Nick
>
&g
Trying to do a prepare on an OSD with btrfs, and getting this error:
[cibosd04][INFO ] Running command: sudo ceph-disk -v prepare --cluster
ceph --fs-type btrfs -- /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[cibosd04][WARNI
Any ideas?
ceph@cephdeploy01:~/ceph-ib$ ceph-deploy osd prepare --fs-type btrfs
cibosd04:sdc
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy osd
prepare --fs-type btrfs cibosd04:sdc
[ceph_deploy.cl
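Given the zap issue mentioned further down, one thing worth trying before
re-running prepare is wiping the disk first (host and device taken from the
log above):

ceph-deploy disk zap cibosd04:sdc
ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc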
a lot!
Best regards
German
On Saturday, September 5, 2015, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote:
>
> > Hi cephers,
> >
> >I've the following scheme:
> >
> > 7x OSD servers with:
> >
>
> There appears to be an issue with zap not wiping the partitions correctly.
> http://tracker.ceph.com/issues/6258
>
>
>
> Yours seems slightly different though. Curious, what size disk are you
> trying to use?
>
>
>
> Cheers,
>
>
>
> Simon
>
>
>
&g
Hi all,
I would like to know whether, with this new release of Infernalis, there is
somewhere a procedure to implement the XIO messenger with IB and Ceph.
Also, is it possible to change an existing Ceph cluster to this kind of
new setup (the existing cluster does not have any production data yet)?
T
eers,
*German*
-- Forwarded message ------
From: German Anders
Date: 2015-10-14 12:46 GMT-03:00
Subject: Proc for Impl XIO mess with Infernalis
To: ceph-users
Hi all,
I would like to know if with this new release of Infernalis is there
somewhere a procedure in order to implemen