Re: [ceph-users] exclusive-lock

2016-07-11 Thread Christian Balzer
le to > > protect the volume in this scenario? > > > > > > _______ > > ceph-users mailing list > > ceph-users@lists.ceph.com > > http://lists.ceph.com/listinfo.cgi/ceph-users-ce

Re: [ceph-users] Cache Tier configuration

2016-07-11 Thread Christian Balzer
> full error on cache tier, I was thinking that only one of pools can stops > and other without cache tier should still work. > Once you activate a cache-tier it becomes for all intents and purposes the pool it's caching for. So any problem with it will be fatal. Christian --
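
Why a broken cache tier is fatal becomes clearer from how a tier is wired in front of its base pool. A minimal sketch using the standard Ceph CLI, with hypothetical pool names "rbd" (base) and "cache" (tier):

  ceph osd tier add rbd cache                # attach the cache pool to the base pool
  ceph osd tier cache-mode cache writeback   # client writes land in "cache" first
  ceph osd tier set-overlay rbd cache        # route all client I/O for "rbd" through "cache"

Once the overlay is set, every client request for the base pool is served via the cache pool, so a full or broken cache tier stalls the base pool along with it.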

Re: [ceph-users] Advice on increasing pgs

2016-07-11 Thread Christian Balzer
running ceph 0.80.9 and have a cluster of 126 > > OSDs with only 64 pgs allocated to the pool. As a result, 2 OSDs are now > > 88% full, while the pool is only showing as 6% used. > > > > Based on my understanding, this is clearly a placement problem, so the > > plan

Re: [ceph-users] Flood of 'failed to encode map X with expected crc' on 1800 OSD cluster after upgrade

2016-07-11 Thread Christian Balzer
es just fine. > > I know that the messages pop up due to a version mismatch, but is there any > way to suppress them? > > Wido > _______ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph

Re: [ceph-users] Advice on meaty CRUSH map update

2016-07-12 Thread Christian Balzer
istered in England and Wales no. 05611763 > > ___ > > ceph-users mailing list > > ceph-users@lists.ceph.com > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > __

Re: [ceph-users] Quick short survey which SSDs

2016-07-12 Thread Christian Balzer
essive on the > >higher end with multiple threads, but figured for most of our nodes with > >4-6 OSD's the intel were a bit more proven and had better "light-medium" > >load numbers. > > > >Carlos M. Perez > >CMP Consulting Services > >305-6

Re: [ceph-users] SSD Journal

2016-07-12 Thread Christian Balzer
ists.ceph.com > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Christian Balzer Network/Sys

Re: [ceph-users] multiple journals on SSD

2016-07-12 Thread Christian Balzer
urnal but a write cache during operation. I had that > > kind of configuration with 1 SSD for 20 SATA HDD. With a Ceph bench, > > i notice that my rate was limited between 350 and 400 MB/s. In fact, > > a iostat show me that my SSD was 100% utilised with a rate of 350-40

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Christian Balzer
sure if I can delete the fs and re-create it using the > > existing > > > data pool and the cloned metadata pool. > > > > > > Thank you. > > > > > > > > > Zhang Di > > > > > > ___ >

Re: [ceph-users] Cache Tier configuration

2016-07-12 Thread Christian Balzer
Hello, On Tue, 12 Jul 2016 11:01:30 +0200 Mateusz Skała wrote: > Thank You for the reply. Answers below. > > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: Tuesday, July 12, 2016 3:37 AM > > To: ceph-users@lists.ceph.com >

Re: [ceph-users] cephfs change metadata pool?

2016-07-12 Thread Christian Balzer
but read the recent filestore merge and split threads, including the entirety of this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg29243.html Christian > Thanks for the hints. > > On Tue, Jul 12, 2016 at 8:19 PM, Christian Balzer wrote: > > > > > Hello,

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Christian Balzer
d > > Total time run: 10.546251 > > Total reads made: 293 > > Read size:4194304 > > Object size: 4194304 > > Bandwidth (MB/sec): 111.13 > > Average IOPS: 27 > > Stddev IOPS: 2 > > Max IOPS: 32 &

Re: [ceph-users] SSD Journal

2016-07-13 Thread Christian Balzer
is is especially important if you were to run a MON on those machines as well. Christian > Thanks, > Ashley > > -Original Message- > From: Christian Balzer [mailto:ch...@gol.com] > Sent: 13 July 2016 01:12 > To: ceph-users@lists.ceph.com > Cc: Wido den Hollander

Re: [ceph-users] Question on Sequential Write performance at 4K blocksize

2016-07-13 Thread Christian Balzer
is > addressed or by their designee. If the reader of this message is not the > intended recipient, you are on notice that any distribution of this message, > in any form, is strictly prohibited. If you have received this message in > erro

Re: [ceph-users] cephfs change metadata pool?

2016-07-13 Thread Christian Balzer
| erro 0/s | drpo 0/s | > > /dev/sda is the OS and journaling SSD. The other three are OSDs. > > Am I missing anything? > > Thanks, > > > > > Zhang, Di > Postdoctoral Associate > Baylor College of Medicine &g

Re: [ceph-users] SSD Journal

2016-07-14 Thread Christian Balzer
Hello, On Thu, 14 Jul 2016 13:37:54 +0200 Steffen Weißgerber wrote: > > > >>> Christian Balzer schrieb am Donnerstag, 14. Juli 2016 um > 05:05: > > Hello, > > > Hello, > > > > On Wed, 13 Jul 2016 09:34:35 + Ashley Merrick wrote: > &

Re: [ceph-users] Slow requet on node reboot

2016-07-15 Thread Christian Balzer
low requests, but they cleared up very quickly and definitely did not require any of the OSDs to be brought back up. Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Christian Balzer
s which we wanted to add to the Jewel > > cluster. > > > > > > Using Salt and ceph-disk we ran into a partprobe issue in > > combination with ceph-disk. There was already a Pull Request for > > the fix, but that was not included in Jewel 10.2.2. > &g

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Christian Balzer
iling list > >> ceph-users@lists.ceph.com > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > > > > > > _______ > > ceph-users mailing list > > ceph-users@lists.ceph.com > > http://l

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-19 Thread Christian Balzer
ove is not a bug. Christian > > So you can obviously ignore the ceph --show-config command. Its simply > not working correctly. > > -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/

Re: [ceph-users] Cache Tier configuration

2016-07-19 Thread Christian Balzer
Hello, On Tue, 19 Jul 2016 15:15:55 +0200 Mateusz Skała wrote: > Hello, > > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: Wednesday, July 13, 2016 4:03 AM > > To: ceph-users@lists.ceph.com > > Cc: Mateusz Skała

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-20 Thread Christian Balzer
g 6.4 (6.4) -> up [53,24,6] acting [53,6,26] > > # ceph pg map 5.24 > osdmap e1054 pg 5.24 (5.24) -> up [32,13,56] acting [32,13,51] > > # ceph pg map 5.306 > osdmap e1054 pg 5.306 (5.306) -> up [44,60,26] acting [44,7,59] > > > To complete

Re: [ceph-users] Cache Tier configuration

2016-07-20 Thread Christian Balzer
he pool, but in practice I think it would break horribly, at least until you removed the broken cache pool manually. The readforward and readproxy modes will cache writes (and thus reads for objects that have been written to and are still in the cache). And as such they contain your most valuable da
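
As a hedged illustration of the modes mentioned above, switching them is a one-liner per mode on a hypothetical cache pool named "cache" (some releases want --yes-i-really-mean-it for the read* modes):

  ceph osd tier cache-mode cache readproxy     # misses are proxied through the tier to the base pool
  ceph osd tier cache-mode cache readforward   # clients are redirected to the base pool on a miss

In both modes writes still go through the cache pool, which is why it keeps holding the most recently written data.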

Re: [ceph-users] performance decrease after continuous run

2016-07-20 Thread Christian Balzer
on your RGW (since you mention cosbench). Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/ ___ ceph-users mailing list ceph-users@l

Re: [ceph-users] Storage tiering in Ceph

2016-07-20 Thread Christian Balzer
on different performance > hardware, however is there any automation possible in Ceph that will promote > data from slow hardware to fast one and back? > http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ Christian -- Christian Balzer Network/Systems Engineer

Re: [ceph-users] thoughts about Cache Tier Levels

2016-07-21 Thread Christian Balzer
ng ! :) > A dedicated pool (something with replica 1 or 2 and backed by RAID6 OSDs for example or an EC pool) and "manual" moving would likely be better suited than the approach above. Christian > Thanks for feedback and suggestion on how to handle data you "never will &

Re: [ceph-users] OSD / Journal disk failure

2016-07-21 Thread Christian Balzer
ery few seconds or so), not really needed either. > 2) If there is a journal disk failure, how long does the ceph cluster > detect the journal disk failure? > > Is there any config option to allow the ceph cluster to detect the status > of journal disk? > Same as abov

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-25 Thread Christian Balzer
. Christian > Thanks for the help > Goncalo > > > > From: Christian Balzer [ch...@gol.com] > Sent: 20 July 2016 19:36 > To: ceph-us...@ceph.com > Cc: Goncalo Borges > Subject: Re: [ceph-users] pgs stuck unclean after reweight > > Hello, > > On Wed

Re: [ceph-users] mon_osd_nearfull_ratio (unchangeable) ?

2016-07-25 Thread Christian Balzer
at was run after restarting > services so it is still unclear to me why the new value is not picked up > and why running 'ceph --show-config --conf /dev/null | grep > mon_osd_nearfull_ratio' still shows 0.85 > Don't use that, do something like: ceph --admin-daem
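
The truncated suggestion above is presumably the monitor's admin socket, which reports what the running daemon actually uses (unlike ceph --show-config, which only reflects defaults plus the local conf). A sketch, assuming the default socket path and a mon named after the short hostname:

  ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok config show | grep mon_osd_nearfull_ratio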

Re: [ceph-users] OSD host swap usage

2016-07-27 Thread Christian Balzer
sts keep on swapping, and others don't? > Could this be some issue? > > Thanks !! > > Kenneth > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Christian Balzer

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread Christian Balzer
onnection @ 1x 10GB; A second > connection is also connected via 10GB but provides only a Backup > connection when the active Switch fails - no LACP possible. > - We do not use Jumbo Frames yet.. > - Public and Cluster-Network related Ceph traffic is going through this > one active 10GB Interface on each S

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Christian Balzer
per OSD is [30, 300], > any > other pg_num out of this range will bring the cluster to HEALTH_WARN. > > So what I would like to say: is the document misleading? Should we fix it? > Definitely. Christian -- Christian Balzer Network/Systems Engineer ch...@gol.co

Re: [ceph-users] Re: too many PGs per OSD (307 > max 300)

2016-07-28 Thread Christian Balzer
256 per pool. Again, see pgcalc. Christian > Thanks. > ________ > From: ceph-users on behalf of Christian Balzer > > Sent: July 29, 2016 2:47:59 > To: ceph-users > Subject: Re: [ceph-users] too many PGs per OSD (307 > max 300) > > On Fri, 29 Jul 2016 0
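
The pgcalc rule of thumb behind that suggestion, as a rough sketch (the numbers below are illustrative, not taken from this thread):

  # total PGs ~= (number of OSDs * 100) / replica size, rounded to a power of two,
  # then split across pools according to their expected share of the data.
  # Example: 62 OSDs, size 3 -> 62 * 100 / 3 ~= 2067 -> ~2048 PGs in total,
  # so several pools end up at 256 or 512 each rather than 2048 apiece.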

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-31 Thread Christian Balzer
On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote: > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > > > > > Hi list, > > > > > > I just followed the placement

Re: [ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-31 Thread Christian Balzer
OTAL), so your individual pools would be 46 PGs on average, meaning small ones would be 32 and larger ones 64 or 128. Getting this right with a small # of OSDs is a challenge. Christian > > Thanks. > > > From: Christian Balzer > Sent: July 29, 2016 3

Re: [ceph-users] 2TB useable - small business - help appreciated

2016-07-31 Thread Christian Balzer
> > I have no idea on what I should do for RGW, RBD and CephFS, should I > just have them all running on the 3 nodes? > I don't see how RGW and CephFS enter your setup at all, RBD is part of the Ceph basics, no extra server required for it. Christian > Thanks again! >

Re: [ceph-users] 2TB useable - small business - help appreciated

2016-07-31 Thread Christian Balzer
Christian > Thanks again. > > Richard > _______ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com -- Christian Balzer Network/Systems Engineer ch...@gol.com

Re: [ceph-users] Small Ceph cluster

2016-08-01 Thread Christian Balzer
not so much. Your network can't even saturate one 200GB DC S3710. From a redundancy point of view you might be better off with more nodes. Christian > > Kind regards, > Tom -- Christian Balzer Network/Systems Engineer ch...@gol.com

Re: [ceph-users] Small Ceph cluster

2016-08-01 Thread Christian Balzer
ing that your cache pool needs to be just as reliable as everything else. > Is a caching tier with one SSD recommended or should i always have two SSD > in replicated mode ? > See above. Christian > > Kind regards, > Tom > > > > On Mon, Aug 1, 2016 at 2:00 PM, Chri

[ceph-users] Intel SSD (DC S3700) Power_Loss_Cap_Test failure

2016-08-02 Thread Christian Balzer
. ^o^ Thanks, Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/ ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
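
For anyone wanting to watch the same SMART attribute on their Intel DC SSDs, smartctl shows it directly (device name is illustrative):

  smartctl -A /dev/sda | egrep -i 'Power_Loss_Cap_Test|Reallocated|Media_Wearout'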

Re: [ceph-users] CRUSH map utilization issue

2016-08-03 Thread Christian Balzer
X> > > > > root@ceph2:~/crush_files# crushtool -i crushmap --test > > --show-utilization-all -- - Pastebin.com<http://pastebin.com/ar6SAFnX> > > >

Re: [ceph-users] Intel SSD (DC S3700) Power_Loss_Cap_Test failure

2016-08-03 Thread Christian Balzer
s with this > > workload? Are you really writing ~600TB/month?? > > > > Jan > > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- C

Re: [ceph-users] Intel SSD (DC S3700) Power_Loss_Cap_Test failure

2016-08-03 Thread Christian Balzer
anks > Jan > > > On 03 Aug 2016, at 13:33, Christian Balzer wrote: > > > > > > Hello, > > > > yeah, I was particularly interested in the Power_Loss_Cap_Test bit, as it > > seemed to be such an odd thing to fail (given that's not a single capacitor)

Re: [ceph-users] Number of PGs: fix from start or change as we grow ?

2016-08-03 Thread Christian Balzer
onstantly at its breaking point, it's also an operation that should be doable w/o major impacts. I'd start with 1024 PGs on those 20 OSDs, at 50 OSDs go to 4096 PGs and at around 100 OSDs it is safe to go to 8192 PGs. Christian -- Christian Balzer Network/Systems Engineer
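
Growing pg_num in steps like that is done per pool; a minimal sketch assuming a pool named "rbd" (pgp_num has to follow pg_num before the data actually rebalances):

  ceph osd pool set rbd pg_num 1024
  ceph osd pool set rbd pgp_num 1024    # repeat with 4096 / 8192 as the OSD count grows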

[ceph-users] Upgrading a "conservative" [tm] cluster from Hammer to Jewel, a nightmare in the making

2016-08-04 Thread Christian Balzer
t won't be systemd (as Jewel actually has the targets now), but the inability to deal with a manually deployed environment like mine. Expect news about that next week at the latest. Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLi

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
s ceph user. > > It works when i don't specify a separate journal > > Any idea of what i'm doing wrong ? > > thks -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
ke this: > > > ceph-deploy osd prepare ceph-osd1:sdd:sdf7 > > > And then: > > > ceph-deploy osd activate ceph-osd1:sdd:sdf7 > > > I end up with "wrong permission" on the osd when activating, complaining > > > about "tmp" directory where

Re: [ceph-users] question about ceph-deploy osd create

2016-08-04 Thread Christian Balzer
t;:". Christian > Le 5 août 2016 02:30, "Christian Balzer" a écrit : > > > > > Hello, > > > > On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote: > > > > > I am reading half your answer > > > > > > Do you mean that

[ceph-users] fio rbd engine "perfectly" fragments filestore file systems

2016-08-05 Thread Christian Balzer
0 --- Regards, Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/ ___ ceph-users mailing list ceph-users@lists.ceph

Re: [ceph-users] OSDs going down when we bring down some OSD nodes Or cut-off the cluster network link between OSD nodes

2016-08-07 Thread Christian Balzer
n't know. > > B) Add 2 monitors to each site. This would make each site with 3 monitors > and the overall cluster will have 9 monitors. The reason we wanted to try > this is, we think that the OSDs are going down as the quorum is unable > to find the minimum number of nodes (may

[ceph-users] Better late than never, some XFS versus EXT4 test results

2016-08-08 Thread Christian Balzer
ver according to atop the avio per HDD is 12ms with XFS and 8ms with EXT4. Some food for thought, minor though with BlueStore in the pipeline. Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communic

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-08 Thread Christian Balzer
ients. You will also want faster and more cores and way more memory (at least 64GB), how much depends on your CephFS size (number of files). > I assume to use for an acceleration SSD for a cache and a log of OSD. MDS don't hold any local data (caches), a logging SSD is fine. Christian -

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-09 Thread Christian Balzer
5:11 +03:00 from Christian Balzer : > > > > > >Hello, > > > >On Mon, 08 Aug 2016 17:39:07 +0300 Александр Пивушков wrote: > > > >> > >> Hello dear community! > >> I'm new to the Ceph and not long ago took up the theme of build

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-09 Thread Christian Balzer
t it does work. > Well, I saw this before I gave my answer: http://www.ovirt.org/develop/release-management/features/storage/cinder-integration/ And based on that I would say oVirt is not a good fit for Ceph at this time. Even less so than OpenNebula, which currently needs an additional

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-09 Thread Christian Balzer
ONE drivers are mostly a set of shell scripts). > Thanks, I'll give that a spin next week. Christian > Best regards, > Vladimir > > > Best regards, > Дробышевский Владимир > "АйТи Город" (IT City) company > +7 343 192 > > Hardware and software

Re: [ceph-users] Include mon restart in logrotate?

2016-08-11 Thread Christian Balzer
Registered office and court of registration: Hamburg, HRB 90934 > > >>Board: Jens-U. Mozdzen > > >> VAT ID DE 814 013 983 > > >> > > >> ___ > > >> ceph-users

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-14 Thread Christian Balzer
have dedicated slots on the back for OS disks, then i > >>> would recommend using SATADOM flash modules directly into a SATA port > >>> internal in the machine. Saves you 2 slots for osd's and they are > >>> quite reliable. you could even use 2 sd cards if your machine have > >>> the internal SD s

Re: [ceph-users] Substitute a predicted failure (not yet failed) osd

2016-08-14 Thread Christian Balzer
reate the osd > # ceph osd unset noout > > Cheers > Goncalo > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com -- Christian Balzer Network/Sy
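
The quoted recipe ends with re-creating the OSD and unsetting noout; the usual full sequence for swapping a still-alive-but-failing disk looks roughly like this (OSD id 12 is hypothetical):

  ceph osd set noout                  # keep the cluster from rebalancing while the disk is swapped
  systemctl stop ceph-osd@12          # or the distro's init script on pre-systemd setups
  ceph osd out 12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # replace the disk, re-create the OSD (ceph-disk / ceph-deploy), then:
  ceph osd unset noout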

Re: [ceph-users] rbd readahead settings

2016-08-15 Thread Christian Balzer
message in error, please immediately notify the sender and delete or > > destroy any copy of this message! > > ___ > > ceph-users mailing list > > ceph-users@lists.ceph.com > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.co

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-16 Thread Christian Balzer
es at 1.9 Ghz > > 3. MDS - 2 pcs. All virtual server: > > a. 1 Gbps Ethernet / c - 1 port. > > b. SATA drive 40 GB for installation of the operating system (or > > booting from the network, which is preferable) > > c. SATA drive 40 GB > > d. 6

Re: [ceph-users] Testing Ceph cluster for future deployment.

2016-08-16 Thread Christian Balzer
ne get > zone.conf.json > unable to initialize zone: (2) No such file or directory > > This could have something to do with the other error radosgw-admin is > giving me. > > -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/ ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-16 Thread Christian Balzer
Hello, On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote: > Christian, > > thanks a lot for your time. Please see below. > > > 2016-08-17 5:41 GMT+05:00 Christian Balzer : > > > > > Hello, > > > > On Wed, 17 Aug 201

Re: [ceph-users] How can we repair OSD leveldb?

2016-08-17 Thread Christian Balzer
> > Sadly this directory is empty. > > -- Dan > > > Wido > > > >> Thanks, > >> > >> -- Dan J___ > >> ceph-users mailing list > >> ceph-users@lists.ceph.com > >> htt

Re: [ceph-users] Simple question about primary-affinity

2016-08-18 Thread Christian Balzer
___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http:/

Re: [ceph-users] Understanding write performance

2016-08-18 Thread Christian Balzer
crappy SATA disks each (so 16 OSDs), I can get better and more consistent write speed than you, around 100MB/s. Christian > Anyway, some basic idea on those concepts or some pointers to some good > docs or articles would be wonderful. Thank you! > > Lewis George > > &g

Re: [ceph-users] Understading osd default min size

2016-08-18 Thread Christian Balzer
be blocked given the above values. Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global OnLine Japan/Rakuten Communications http://www.gol.com/ ___ ceph-users mailing list ceph-users@lists.cep
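
The size/min_size interplay referred to above can be inspected and changed per pool (pool name hypothetical); with size 3 and min_size 2, writes block as soon as fewer than two replicas of a PG are available:

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  ceph osd pool set rbd min_size 2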

Re: [ceph-users] Spreading deep-scrubbing load

2016-08-18 Thread Christian Balzer
Holy thread necromancy, Batman! On Fri, 19 Aug 2016 15:39:13 +1200 Mark Kirkwood wrote: > On 15/06/16 13:18, Christian Balzer wrote: > > > > "osd_scrub_min_interval": "86400", > > "osd_scrub_max_interval": "604800", >
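
For reference, spreading the (deep-)scrub load is normally done with the related [osd] options in ceph.conf; the values below are illustrative only, not recommendations:

  [osd]
  osd scrub min interval    = 86400     # 1 day
  osd scrub max interval    = 604800    # 1 week
  osd deep scrub interval   = 2419200   # 4 weeks, so deep scrubs spread over a longer window
  osd scrub sleep           = 0.1       # throttle scrub I/O against client traffic
  osd scrub load threshold  = 0.5       # skip scheduled scrubs while the host is busy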

Re: [ceph-users] Understanding write performance

2016-08-18 Thread Christian Balzer
> > Lewis George > > > -------- > From: "Christian Balzer" > Sent: Thursday, August 18, 2016 6:31 PM > To: ceph-users@lists.ceph.com > Cc: "lewis.geo...@innoscale.net" > Subject: Re: [ceph-users] Understanding write performance > > Hello

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Christian Balzer
;> > >> >> >>>> Hi, > >> >> >>>> > >> >> >>>> Same here, I've read some blog saying that vmware will > >> >> >>>> frequently verify the locking on VMFS over iSCSI, hence it will > >> >> >

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Christian Balzer
Hello, On Sun, 21 Aug 2016 09:57:40 +0100 Nick Fisk wrote: > > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Christian Balzer > > Sent: 21 August 2016 09:32 > > To: ceph-users > &g

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-22 Thread Christian Balzer
On Mon, 22 Aug 2016 10:18:51 +0300 Александр Пивушков wrote: > Hello, > Several answers below > > >Wednesday, 17 August 2016, 8:57 +03:00 from Christian Balzer : > > > > > >Hello, > > > >On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote:

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Christian Balzer
>>> increase the measured iops. > > >>>> > > >>>> Our ceph.conf is pretty basic (debug is set to 0/0 for > > >>>> everything) and > > >>>> the crushmap just defines the different buckets/rules for >

Re: [ceph-users] Cache-tiering work abnormal

2017-06-27 Thread Christian Balzer
cached_test_cache 5 71625M 23.57 226G 185 > test_cache 6 44324M 1.75 2421G 189 -- Christian Balzer Network/Systems Engineer ch...@gol.com Rakuten Communications

Re: [ceph-users] TRIM/Discard on SSDs with BlueStore

2017-06-27 Thread Christian Balzer
in the > controller. > If it were that last bit, I'd be for it, if it isn't then something that you can fully control akin to fstrim would be a much better idea. That being said, I'm disinclined to deploy any SSDs that actually REQUIRE trim/discard to maintai
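
Something "akin to fstrim" in practice means batch discards issued on an administrator's schedule instead of inline discard mounts; a sketch for a filestore OSD mount (path is illustrative):

  fstrim -v /var/lib/ceph/osd/ceph-0    # run from cron or a systemd timer during quiet hours

combined with mounting the OSD file system without the "discard" option, so the drive never sees inline TRIMs during normal I/O.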

Re: [ceph-users] TRIM/Discard on SSDs with BlueStore

2017-06-27 Thread Christian Balzer
On Tue, 27 Jun 2017 13:24:45 +0200 (CEST) Wido den Hollander wrote: > > Op 27 juni 2017 om 13:05 schreef Christian Balzer : > > > > > > On Tue, 27 Jun 2017 11:24:54 +0200 (CEST) Wido den Hollander wrote: > > > > > Hi, > > > > >

Re: [ceph-users] TRIM/Discard on SSDs with BlueStore

2017-06-27 Thread Christian Balzer
On Tue, 27 Jun 2017 14:07:24 +0200 Dan van der Ster wrote: > On Tue, Jun 27, 2017 at 1:56 PM, Christian Balzer wrote: > > On Tue, 27 Jun 2017 13:24:45 +0200 (CEST) Wido den Hollander wrote: > > > >> > Op 27 juni 2017 om 13:05 schreef Christian Balzer : > >>

Re: [ceph-users] Upgrade target for 0.82

2017-06-27 Thread Christian Balzer
.ceph.com/listinfo.cgi/ceph-users-ceph.com > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Christian Balzer Network/Systems Engineer c

Re: [ceph-users] Ceph New OSD cannot be started

2017-06-29 Thread Christian Balzer
l > > 45/5084377 objects degraded (0.001%) > > 1103 active+clean > >4 active+degraded > > 109 active+remapped > > client io 21341 B/s rd, 477 kB/s wr, 118 op/s > > > > > > Any idea how

Re: [ceph-users] dropping filestore+btrfs testing for luminous

2017-06-30 Thread Christian Balzer
ers that support for it is going to be removed in > > the near future. The documentation must be updated accordingly and it > > must be clearly emphasized in the release notes. > > > > Simply disabling the tests while keeping the code in the distribution

Re: [ceph-users] Cache Tier or any other possibility to accelerate RBD with SSD?

2017-07-03 Thread Christian Balzer
2 with sufficiently small/fast SSDs. With bcache etc just caching reads, you can get away with a single replication of course, however failing SSDs may then cause your cluster to melt down. Christian -- Christian Balzer Network/Systems Engineer

Re: [ceph-users] Cache Tier or any other possibility to accelerate RBD with SSD?

2017-07-03 Thread Christian Balzer
Hello, On Mon, 3 Jul 2017 14:18:27 +0200 Mateusz Skała wrote: > @Christian ,thanks for quick answer, please look bellow. > > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: Monday, July 3, 2017 1:39 PM > > To: ceph-users@

Re: [ceph-users] Speeding up backfill after increasing PGs and or adding OSDs

2017-07-06 Thread Christian Balzer
sd_debug_reject_backfill_probability": "0", > "osd_recovery_op_priority": "5", > "osd_recovery_priority": "5", > "osd_recovery_cost": "20971520", > "osd_recovery_op_warn_multiple": "
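
Settings like the ones listed above are usually adjusted at runtime with injectargs rather than a restart; a hedged example that temporarily raises backfill throughput (revert once the cluster has settled, since it competes with client I/O):

  ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'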

Re: [ceph-users] How to set Ceph client operation priority (ionice)

2017-07-06 Thread Christian Balzer
crush rules to separate the distinct users so that those reads go to OSDs that aren't used by the batch stuff. Beyond that, journal SSDs (future WAL SSDs for Bluestore), SSDs for bcache or so to cache reads, SSD pools, cache-tiering, etc. Christian -- Christian Balzer Netwo

[ceph-users] Stealth Jewel release?

2017-07-09 Thread Christian Balzer
Hello, so this morning I was greeted with the availability of 10.2.8 for both Jessie and Stretch (much appreciated), but w/o any announcement here or updated release notes on the website, etc. Any reason other than "Friday" (US time) for this? Christian -- Christian Balzer Netwo

[ceph-users] Access rights of /var/lib/ceph with Jewel

2017-07-09 Thread Christian Balzer
s decision? Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Rakuten Communications ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Access rights of /var/lib/ceph with Jewel

2017-07-10 Thread Christian Balzer
/ceph has these permissions: "drwxr-x---", while > every directory below it still has the world aXessible bit set. > > This makes it impossible (by default) for nagios and other non-root bits > to determine the disk usage for example. > > Any rhyme or reason for this dec
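
A possible workaround sketch for the monitoring problem described, not an endorsement of loosening Jewel's defaults: check what the packages set and either relax only the top-level directory or put the monitoring user into the ceph group.

  ls -ld /var/lib/ceph       # drwxr-x--- ceph ceph on a stock Jewel install
  chmod 0755 /var/lib/ceph   # or: usermod -aG ceph nagios (user name depends on the monitoring setup)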

Re: [ceph-users] VMware + Ceph using NFS sync/async ?

2017-08-16 Thread Christian Balzer
other professional privilege. They are > intended solely for the attention and use of the named addressee(s). They may > only be copied, distributed or disclosed with the consent of the copyright > owner. If you have received this email by mistake or by breach of the >

Re: [ceph-users] Ceph cluster with SSDs

2017-08-17 Thread Christian Balzer
-least 40% high > >as compared with HDD's OSD bench. > > > >Did I miss anything here? Any hint is appreciated. > > > >Thanks > >Swami > >___ > >ceph-users mailing list > >ceph-users@lists.ceph.com &

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer
ache tier is that Ceph is going to need to promote and > evict stuff all the time (not free). A lot of people that want to use SSD > cache tiering for RBDs end up with slower performance because of this. > Christian Balzer is the expert on Cache Tiers for RBD usage. His primary > stan

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Christian Balzer
>> „filestore“ to „bluestore“ 😊 > >>> > >>> As far as i have read bluestore consists of > >>> - „the device“ > >>> - „block-DB“: device that store RocksDB metadata > >>> - „block-WAL“: device that stores RocksDB „write-ahead journal“

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer
tic tests. > --> Yes, the problem is that I have to buy a HW and for Windows 10 VDI... > and I cannot make realistic tests previously :( but I will work on this > line... > > Thanks a lot again! > > > > 2017-08-18 3:14 GMT+02:00 Christian Balzer : > > >

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread Christian Balzer
//lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > >> > >> > >> ___ > >> ceph-users mailing list > >> ceph-users@lists.ceph.com > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > >> > _

Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
result: https://forum.proxmox.com/threads/slow-ceph-journal-on-samsung-850-pro.27733/ Christian > > > On 20 Aug 2017 at 06:03, Christian Balzer wrote the > > following: > > > > DWPD > > -- Christian Balzer Network/Systems Eng
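
The way such journal SSDs are usually vetted (and what threads like the one linked above boil down to) is a single-job O_DSYNC write test with fio; destructive when pointed at a raw device, so use a scratch disk (device name illustrative):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting

Consumer drives like the 850 EVO/Pro typically collapse to a few hundred IOPS here, while DC-class drives with power-loss protection stay vastly higher.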

Re: [ceph-users] Ceph Random Read Write Performance

2017-08-20 Thread Christian Balzer
ets lost. The network part is unavoidable (a local SAS/SATA link is not the same as a bonded 10Gbps link), though 25Gbps, IB etc can help. The Ceph stack will benefit from faster CPUs as mentioned above. > We are using Ceph Jewel 10.2.5-1trusty, kernel 4.4.0.-31 generic, Ubuntu > 14.04 >

Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
solely for the attention and use of the named addressee(s). They may > only be copied, distributed or disclosed with the consent of the copyright > owner. If you have received this email by mistake or by breach of the > confidentiality clause, please notify the sender immediately by return email > and delete or destroy all copies of the email. Any confidentiality, privilege > or copyright is not waived or lost because this email has been sent to you by > mistake. > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Christian Balzer Network/Systems Engineer ch...@gol.com Rakuten Communications ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Christian Balzer
n "expensive" in most cases and this is no exception. Smaller hosts are more expensive in terms of space and parts (a NIC for each OSD instead of one per 12, etc). And before you mention really small hosts with 1GbE NICs, the latency penalty is significant there, the limitation to 100MB/

Re: [ceph-users] Ceph cluster with SSDs

2017-08-21 Thread Christian Balzer
t worse due to the overhead outside the SSD itself. Christian > On Sun, Aug 20, 2017 at 9:33 AM, Christian Balzer wrote: > > > > Hello, > > > > On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote: > > > >> SSD make details : SSD 850 EVO 2.5&q

Re: [ceph-users] NVMe + SSD + HDD RBD Replicas with Bluestore...

2017-08-21 Thread Christian Balzer
drive) > but obviously performance is nowhere near to SSDs or NVMe. > > So, what do you think? Does anybody have some opinions or experience he would > like to share? > > Thanks! > Xavier. > > > -- Christian Balzer Network/Systems Engineer ch...@gol.com Rakuten Communications ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-21 Thread Christian Balzer
le. > And > I don't have enough hardware to setup a test cluster of any significant > size to run some actual testing. > You may want to set up something to get a feeling for CephFS, if it's right for you or if something else on top of RBD may be more suitable. Christian -- Ch

Re: [ceph-users] Cache tier unevictable objects

2017-08-22 Thread Christian Balzer
of the > volumes listed in the cache pool, but the objects didn't change at > all, the total number was also still 39. For the rbd_header objects I > don't even know how to identify their "owner", is there a way? > > Has anyone a hint what else I could c
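
Identifying which image an rbd_header object belongs to generally works via the image id embedded in the object name; a sketch with hypothetical pool/image names (format 2 images, Jewel era):

  rbd info rbd/myimage | grep block_name_prefix   # e.g. block_name_prefix: rbd_data.5d823d1b58ba
  # the matching header object is rbd_header.5d823d1b58ba
  rados -p cache ls | grep rbd_header             # list the header objects currently sitting in the cache tier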
