Re: [ceph-users] Ceph User Teething Problems

2015-03-04 Thread Lionel Bouton
On 03/04/15 22:18, John Spray wrote: > On 04/03/2015 20:27, Datatone Lists wrote: >> [...] [Please don't mention ceph-deploy] > This kind of comment isn't very helpful unless there is a specific > issue with ceph-deploy that is preventing you from using it, and > causing you to resort to manual ste

Re: [ceph-users] Ceph User Teething Problems

2015-03-04 Thread Lionel Bouton
On 03/04/15 22:50, Travis Rhoden wrote: > [...] > Thanks for this feedback. I share a lot of your sentiments, > especially that it is good to understand as much of the system as you > can. Everyone's skill level and use-case is different, and > ceph-deploy is targeted more towards PoC use-cases.

Re: [ceph-users] Slow performance during recovery operations

2015-04-02 Thread Lionel Bouton
hatever you use to supervise your cluster). I actually considered monitoring Ceph for backfills and using ceph set nodeep-scrub automatically when there are some and unset it when they disappear. Best regards, Lionel Bouton
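
A minimal sketch of that automation idea, assuming a cron-style loop on a host holding the client.admin keyring (this is not the poster's actual monitoring integration, and the flag does not cancel deep scrubs already in progress):

    #!/bin/bash
    # Pause deep scrubs while the cluster reports backfilling, resume them afterwards.
    while sleep 60; do
        if ceph -s | grep -q backfill; then
            ceph osd set nodeep-scrub
        else
            ceph osd unset nodeep-scrub
        fi
    done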

Re: [ceph-users] Slow performance during recovery operations

2015-04-02 Thread Lionel Bouton
On 04/02/15 21:02, Stillwell, Bryan wrote: >> With these settings and no deep-scrubs the load increased a bit in the >> VMs doing non negligible I/Os but this was manageable. Even disk thread >> ioprio settings (which is what you want to get the ionice behaviour for >> deep scrubs) didn't seem to m
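
For reference, the ioprio settings mentioned here are the OSD disk thread ioprio options; a hedged example of applying them at runtime (they only take effect when the OSD data disks use the CFQ I/O scheduler, and as the poster notes they did not make a visible difference in this case):

    # Run the OSD disk/scrub threads in the idle ioprio class on all OSDs.
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
    # To persist the change, add the same two options to the [osd] section of ceph.conf.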

Re: [ceph-users] Slow performance during recovery operations

2015-04-02 Thread Lionel Bouton
On 04/02/15 21:02, Stillwell, Bryan wrote: > > I'm pretty sure setting 'nodeep-scrub' doesn't cancel any current > deep-scrubs that are happening, Indeed it doesn't. > but something like this would help prevent > the problem from getting worse. If the cause of the recoveries/backfills are an OS

Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Lionel Bouton
Hi, On 04/06/15 02:26, Francois Lafont wrote : > Hi, > > Lionel Bouton wrote : > >> Sorry this wasn't clear: I tried the ioprio settings before disabling >> the deep scrubs and it didn't seem to make a difference when deep scrubs >> occurred

Re: [ceph-users] long blocking with writes on rbds

2015-04-08 Thread Lionel Bouton
Ds, 1504GB (~ 250G / osd) and a total of 4400 pgs ? With a replication of 3 this is 2200 pgs / OSD, which might be too much and unnecessarily increase the load on your OSDs. Best regards, Lionel Bouton

Re: [ceph-users] advantages of multiple pools?

2015-04-17 Thread Lionel Bouton
On 04/17/15 16:01, Saverio Proto wrote: > For example you can assign different read/write permissions and > different keyrings to different pools. >From memory you can set different replication settings, use a cache pool or not, use specific crush map rules too. Lion

Re: [ceph-users] long blocking with writes on rbds

2015-04-22 Thread Lionel Bouton
On 04/22/15 17:57, Jeff Epstein wrote: > > > On 04/10/2015 10:10 AM, Lionel Bouton wrote: >> On 04/10/15 15:41, Jeff Epstein wrote: >>> [...] >>> This seems highly unlikely. We get very good performance without >>> ceph. Requisitioning and manupula

Re: [ceph-users] long blocking with writes on rbds

2015-04-22 Thread Lionel Bouton
On 04/22/15 19:50, Lionel Bouton wrote: > On 04/22/15 17:57, Jeff Epstein wrote: >> >> >> On 04/10/2015 10:10 AM, Lionel Bouton wrote: >>> On 04/10/15 15:41, Jeff Epstein wrote: >>>> [...] >>>> This seems highly unlikely. We get very

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Lionel Bouton
(it can't really do much harm as you don't want your journals to be too big anyway). I would be *very* surprised if the drive would get any performance or life expectancy benefit when used at 19.2% instead of 32% or even 75%... Best regards, Lionel Bouton

[ceph-users] Experience going through rebalancing with active VMs / questions

2015-05-02 Thread Lionel Bouton
Hi, we are currently running the latest firefly (0.80.9) and we have difficulties maintaining good throughput when Ceph is backfilling/recovering and/or deep-scrubbing after an outage. This got to the point where when the VMs using rbd start misbehaving (load rising, some simple SQL update queries t

[ceph-users] Btrfs defragmentation

2015-05-03 Thread Lionel Bouton
Hi, we began testing one Btrfs OSD volume last week and for this first test we disabled autodefrag and began to launch manual btrfs fi defrag. During the tests, I monitored the number of extents of the journal (10GB) and it went through the roof (it currently sits at 8000+ extents for example). I
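
A hedged sketch of that kind of manual inspection and defragmentation, with placeholder paths rather than the actual test layout:

    # Report the number of extents of the 10GB journal file.
    filefrag /var/lib/ceph/osd/ceph-0/journal
    # Manually defragment the OSD data directory, recursively.
    btrfs filesystem defragment -r /var/lib/ceph/osd/ceph-0/current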

Re: [ceph-users] Btrfs defragmentation

2015-05-03 Thread Lionel Bouton
On 05/04/15 01:34, Sage Weil wrote: > On Mon, 4 May 2015, Lionel Bouton wrote: >> Hi, we began testing one Btrfs OSD volume last week and for this >> first test we disabled autodefrag and began to launch manual btrfs fi >> defrag. During the tests, I monitored the num

Re: [ceph-users] Btrfs defragmentation

2015-05-04 Thread Lionel Bouton
On 05/04/15 01:34, Sage Weil wrote: > On Mon, 4 May 2015, Lionel Bouton wrote: >> Hi, >> >> we began testing one Btrfs OSD volume last week and for this first test >> we disabled autodefrag and began to launch manual btrfs fi defrag. >> [...] > Cool.. let us kno

Re: [ceph-users] Btrfs defragmentation

2015-05-05 Thread Lionel Bouton
On 05/05/15 06:30, Timofey Titovets wrote: > Hi list, > Excuse me, what I'm saying is off topic > > @Lionel, if you use btrfs, did you already try to use btrfs compression for > OSD? > If yes, can you share your experience? Btrfs compresses by default using zlib. We force lzo compression inst
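
Forcing lzo instead of the zlib default is done with standard btrfs mount options; a hedged example with a placeholder device and mount point (compress=lzo also works if forcing compression of incompressible data is not wanted):

    # Remount an existing btrfs OSD volume with forced lzo compression.
    mount -o remount,compress-force=lzo /var/lib/ceph/osd/ceph-0
    # Or persistently, in /etc/fstab:
    # /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  noatime,compress-force=lzo  0 0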

Re: [ceph-users] Btrfs defragmentation

2015-05-06 Thread Lionel Bouton
On 05/05/15 02:24, Lionel Bouton wrote: > On 05/04/15 01:34, Sage Weil wrote: >> On Mon, 4 May 2015, Lionel Bouton wrote: >>> Hi, >>> >>> we began testing one Btrfs OSD volume last week and for this first test >>> we disabled autodefrag and began to lau

Re: [ceph-users] Btrfs defragmentation

2015-05-06 Thread Lionel Bouton
Hi, On 05/06/15 20:04, Mark Nelson wrote: > [...] > Out of curiosity, do you see excessive memory usage during > defragmentation? Last time I spoke to josef it sounded like it wasn't > particularly safe yet and could make the machine go OOM, especially if > there are lots of snapshots. > We have

Re: [ceph-users] Btrfs defragmentation

2015-05-06 Thread Lionel Bouton
Hi, On 05/06/15 20:07, Timofey Titovets wrote: > 2015-05-06 20:51 GMT+03:00 Lionel Bouton : >> Is there something that would explain why initially Btrfs creates the >> 4MB files with 128k extents (32 extents / file) ? Is it a bad thing for >> performance ? > This kind of b

Re: [ceph-users] Btrfs defragmentation

2015-05-07 Thread Lionel Bouton
On 05/06/15 19:51, Lionel Bouton wrote: > During normal operation Btrfs OSD volumes continue to behave in the same > way XFS ones do on the same system (sometimes faster/sometimes slower). > What is really slow though it the OSD process startup. I've yet to make > serious tes

Re: [ceph-users] Btrfs defragmentation

2015-05-07 Thread Lionel Bouton
Hi, On 05/07/15 12:30, Burkhard Linke wrote: > [...] > Part of the OSD boot up process is also the handling of existing > snapshots and journal replay. I've also had several btrfs based OSDs > that took up to 20-30 minutes to start, especially after a crash. > During journal replay the OSD daemon

Re: [ceph-users] Btrfs defragmentation

2015-05-12 Thread Lionel Bouton
On 05/06/15 20:28, Lionel Bouton wrote: > Hi, > > On 05/06/15 20:07, Timofey Titovets wrote: >> 2015-05-06 20:51 GMT+03:00 Lionel Bouton : >>> Is there something that would explain why initially Btrfs creates the >>> 4MB files with 128k extents (32 extent

Re: [ceph-users] Performance and CPU load on HP servers running ceph (DL380 G6, should apply to others too)

2015-05-26 Thread Lionel Bouton
On 05/26/15 10:06, Jan Schermer wrote: > Turbo Boost will not hurt performance. Unless you have 100% load on > all cores it will actually improve performance (vastly, in terms of > bursty workloads). > The issue you have could be related to CPU cores going to sleep mode. Another possibility is tha
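
If cores entering deep sleep states are suspected, a quick hedged check with standard Linux tools (the fix, if confirmed, is a BIOS setting or a kernel command line change, not shown here):

    # List the C-states the kernel may use and how often they are entered.
    cpupower idle-info
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name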

Re: [ceph-users] Discuss: New default recovery config settings

2015-06-01 Thread Lionel Bouton
On 06/01/15 09:43, Jan Schermer wrote: > We had to disable deep scrub or the cluster would be unusable - we need to > turn it back on sooner or later, though. > With minimal scrubbing and recovery settings, everything is mostly good. > Turned out many issues we had were due to too few PGs - once

Re: [ceph-users] Is Ceph right for me?

2015-06-11 Thread Lionel Bouton
On 05/20/15 23:34, Trevor Robinson - Key4ce wrote: > > Hello, > > > > Could somebody please advise me if Ceph is suitable for our use? > > > > We are looking for a file system which is able to work over different > locations which are connected by VPN. If one location was to go > offline then

[ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-18 Thread Lionel Bouton
Hi, I've just noticed an odd behaviour with the btrfs OSDs. We monitor the amount of disk writes on each device, our granularity is 10s (every 10s the monitoring system collects the total amount of sector written and write io performed since boot and computes both the B/s and IO/s). With only res
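
A minimal sketch of that kind of sampling, assuming a plain Linux host and a placeholder device name rather than the monitoring system actually used:

    DEV=sdb; INTERVAL=10
    # /proc/diskstats: field 8 = writes completed, field 10 = sectors written (512 bytes each).
    read w1 s1 < <(awk -v d="$DEV" '$3==d {print $8, $10}' /proc/diskstats)
    sleep "$INTERVAL"
    read w2 s2 < <(awk -v d="$DEV" '$3==d {print $8, $10}' /proc/diskstats)
    echo "write IO/s: $(( (w2 - w1) / INTERVAL ))  write B/s: $(( (s2 - s1) * 512 / INTERVAL ))"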

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-18 Thread Lionel Bouton
btrfs. On 06/18/15 23:28, Lionel Bouton wrote: > Hi, > > I've just noticed an odd behaviour with the btrfs OSDs. We monitor the > amount of disk writes on each device, our granularity is 10s (every 10s > the monitoring system collects the total amount of sector written and > w

Re: [ceph-users] Fwd: Re: Unexpected disk write activity with btrfs OSDs

2015-06-19 Thread Lionel Bouton
On 06/19/15 13:42, Burkhard Linke wrote: > > Forget the reply to the list... > > Forwarded Message > Subject: Re: [ceph-users] Unexpected disk write activity with btrfs OSDs > Date: Fri, 19 Jun 2015 09:06:33 +0200 > From: Burkhard Linke

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/22/15 11:27, Jan Schermer wrote: > I don’t run Ceph on btrfs, but isn’t this related to the btrfs > snapshotting feature ceph uses to ensure a consistent journal? It's possible: if I understand correctly the code, the btrfs filestore backend creates a snapshot when syncing the journal. I'm a

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/19/15 13:23, Erik Logtenberg wrote: > I believe this may be the same issue I reported some time ago, which is > as of yet unsolved. > > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg19770.html > > I used strace to figure out that the OSD's were doing an incredible > amount of getx

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/22/15 17:21, Erik Logtenberg wrote: > I have the journals on a separate disk too. How do you disable the > snapshotting on the OSD? http://ceph.com/docs/master/rados/configuration/filestore-config-ref/ : filestore btrfs snap = false

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-23 Thread Lionel Bouton
On 06/23/15 11:43, Gregory Farnum wrote: > On Tue, Jun 23, 2015 at 9:50 AM, Erik Logtenberg wrote: >> Thanks! >> >> Just so I understand correctly, the btrfs snapshots are mainly useful if >> the journals are on the same disk as the osd, right? Is it indeed safe >> to turn them off if the journals

Re: [ceph-users] Unexpected issues with simulated 'rack' outage

2015-06-24 Thread Lionel Bouton
On 06/24/15 14:44, Romero Junior wrote: > > Hi, > > > > We are setting up a test environment using Ceph as the main storage > solution for my QEMU-KVM virtualization platform, and everything works > fine except for the following: > > > > When I simulate a failure by powering off the switches on

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Lionel Bouton
On 07/02/15 12:48, German Anders wrote: > The idea is to cache rbd at a host level. Also could be possible to > cache at the osd level. We have high iowait and we need to lower it a > bit, since we are getting the max from our sas disks 100-110 iops per > disk (3TB osd's), any advice? Flashcache?

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Lionel Bouton
On 07/02/15 13:49, German Anders wrote: > output from iostat: > > CEPHOSD01: > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await r_await w_await svctm %util > sdc(ceph-0) 0.00 0.00 1.00 389.00 0.00 35.98 > 188.96 60.32 120.1

Re: [ceph-users] Where does 130IOPS come from?

2015-07-02 Thread Lionel Bouton
On 07/02/15 17:53, Steffen Tilsch wrote: > > Hello Cephers, > > Whenever I read about HDDs for OSDs it is told that "they will deliver > around 130 IOPS". > Where does this number come from and how was it measured (random/seq, > how big were the IOs, at which queue depth, at what latency) or is it more

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Lionel Bouton
On 07/02/15 18:27, Shane Gibson wrote: > > On 7/2/15, 9:21 AM, "Nate Curry" > wrote: > > Are you using the 4TB disks for the journal? > > > Nate - yes, at the moment the Journal is on 4 TB 7200 rpm disks as > well as the OSDS. It's what I've got for hardware ... si

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Lionel Bouton
On 07/02/15 19:13, Shane Gibson wrote: > > Lionel - thanks for the feedback ... inline below ... > > On 7/2/15, 9:58 AM, "Lionel Bouton" <mailto:lionel+c...@bouton.name>> wrote: > > > Ouch. These spinning disks are probably a bottleneck: there are &

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
Hi Dmitry, On 07/07/15 14:42, Dmitry Meytin wrote: > Hi Christian, > Thanks for the thorough explanation. > My case is Elastic Map Reduce on top of OpenStack with Ceph backend for > everything (block, object, images). > With default configuration, performance is 300% worse than bare metal. > I di

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
On 07/07/15 17:41, Dmitry Meytin wrote: > Hi Lionel, > Thanks for the answer. > The missing info: > 1) Ceph 0.80.9 "Firefly" > 2) map-reduce makes sequential reads of blocks of 64MB (or 128 MB) > 3) HDFS which is running on top of Ceph is replicating data for 3 times > between VMs which could be l

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
On 07/07/15 18:20, Dmitry Meytin wrote: > Exactly because of that issue I've reduced the number of Ceph replications to > 2 and the number of HDFS copies is also 2 (so we're talking about 4 copies). > I want (but didn't tried yet) to change Ceph replication to 1 and change HDFS > back to 3. You

Re: [ceph-users] How to prefer faster disks in same pool

2015-07-10 Thread Lionel Bouton
On 07/10/15 02:13, Christoph Adomeit wrote: > Hi Guys, > > I have a ceph pool that is mixed with 10k rpm disks and 7.2 k rpm disks. > > There are 85 osds and 10 of them are 10k > Size is not an issue, the pool is filled only 20% > > I want to somehow prefer the 10 k rpm disks so that they get more
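
One hedged way to bias reads toward the faster spindles without moving any data is primary affinity (available since Firefly, possibly requiring mon osd allow primary affinity = true on clusters of that era); the OSD ids below are placeholders:

    # Prefer a 10k rpm OSD as primary, make a 7.2k rpm OSD less likely to be primary.
    ceph osd primary-affinity osd.12 1.0
    ceph osd primary-affinity osd.3 0.5

This only affects reads (served by the primary); writes still hit every replica, so a dedicated pool selected by a CRUSH rule is the usual answer when write latency matters.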

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-12 Thread Lionel Bouton
On 07/12/15 05:55, Alex Gorbachev wrote: > FWIW. Based on the excellent research by Mark Nelson > (http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/) > we have dropped SSD journals altogether, and instead went for the > battery protected controller writeback c

Re: [ceph-users] Issue with journal on another drive

2015-07-13 Thread Lionel Bouton
On 07/14/15 00:08, Rimma Iontel wrote: > Hi all, > > [...] > Is there something that needed to be done to the journal partition to > enable sharing between multiple OSDs? Or is there something else > that's causing the issue? > IIRC you can't share a volume between multiple OSDs. What you could do i
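
A hedged sketch of the usual layout this points to: one journal partition per OSD on the shared device instead of one shared volume (device, sizes and OSD ids are placeholders):

    # Create one small journal partition per OSD on the shared SSD.
    sgdisk -n 1:0:+10G -c 1:"journal-osd.0" /dev/sdf
    sgdisk -n 2:0:+10G -c 2:"journal-osd.1" /dev/sdf
    # Then point each OSD at its own partition (via the osd journal setting or the
    # journal symlink in the OSD data directory) and recreate the journal.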

Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-15 Thread Lionel Bouton
Le 15/07/2015 10:55, Jelle de Jong a écrit : > On 13/07/15 15:40, Jelle de Jong wrote: >> I was testing a ceph cluster with osd_pool_default_size = 2 and while >> rebuilding the OSD on one ceph node a disk in an other node started >> getting read errors and ceph kept taking the OSD down, and instea

Re: [ceph-users] CephFS vs RBD

2015-07-22 Thread Lionel Bouton
Le 22/07/2015 21:17, Lincoln Bryant a écrit : > Hi Hadi, > > AFAIK, you can’t safely mount RBD as R/W on multiple machines. You > could re-export the RBD as NFS, but that’ll introduce a bottleneck and > probably tank your performance gains over CephFS. > > For what it’s worth, some of our RBDs are

Re: [ceph-users] btrfs w/ centos 7.1

2015-08-07 Thread Lionel Bouton
Le 07/08/2015 22:05, Ben Hines a écrit : > Howdy, > > The Ceph docs still say btrfs is 'experimental' in one section, but > say it's the long term ideal for ceph in the later section. Is this > still accurate with Hammer? Is it mature enough on centos 7.1 for > production use? > > (kernel is 3.10.

Re: [ceph-users] Ceph for multi-site operation

2015-08-24 Thread Lionel Bouton
Le 24/08/2015 15:11, Julien Escario a écrit : > Hello, > First, let me advise I'm really a noob with Cephsince I have only read some > documentation. > > I'm now trying to deploy a Ceph cluster for testing purposes. The cluster is > based on 3 (more if necessary) hypervisors running proxmox 3.4. >

Re: [ceph-users] EXT4 for Production and Journal Question?

2015-08-24 Thread Lionel Bouton
Le 24/08/2015 19:34, Robert LeBlanc a écrit : > Building off a discussion earlier this month [1], how "supported" is > EXT4 for OSDs? It seems that some people are getting good results with > it and I'll be testing it in our environment. > > The other question is if the EXT4 journal is even necessa

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Lionel Bouton
eedback from the community on the challenges you might have deploying your platform. Then you'll have a better grasp on what you'll have to ask an integrator if you really need one. Best regards, Lionel Bouton

[ceph-users] Ceph OSD very slow startup

2014-10-13 Thread Lionel Bouton
'snap_21161268' got (2) No such file or directory I suppose it is harmless (at least these OSD don't show any other error/warning and have been restarted and their filesystem remounted on numerous occasions), but I'd like to be sure: is it? Best regards, Lionel Bouton

Re: [ceph-users] Ceph OSD very slow startup

2014-10-13 Thread Lionel Bouton
Le 14/10/2014 01:28, Lionel Bouton a écrit : > Hi, > > # First a short description of our Ceph setup > > You can skip to the next section ("Main questions") to save time and > come back to this one if you need more context. Missing important piece of information: this

Re: [ceph-users] Ceph OSD very slow startup

2014-10-14 Thread Lionel Bouton
Le 14/10/2014 18:17, Gregory Farnum a écrit : > On Monday, October 13, 2014, Lionel Bouton <mailto:lionel%2bc...@bouton.name>> wrote: > > [...] > > What could explain such long startup times? Is the OSD init doing > a lot > of random disk accesses? Is

Re: [ceph-users] Ceph OSD very slow startup

2014-10-14 Thread Lionel Bouton
Le 14/10/2014 18:51, Lionel Bouton a écrit : > Le 14/10/2014 18:17, Gregory Farnum a écrit : >> On Monday, October 13, 2014, Lionel Bouton > <mailto:lionel%2bc...@bouton.name>> wrote: >> >> [...] >> >> What could explain such long startup times?

Re: [ceph-users] Ceph OSD very slow startup

2014-10-20 Thread Lionel Bouton
Hi, More information on our Btrfs tests. Le 14/10/2014 19:53, Lionel Bouton a écrit : > > > Current plan: wait at least a week to study 3.17.0 behavior and > upgrade the 3.12.21 nodes to 3.17.0 if all goes well. > 3.17.0 and 3.17.1 have a bug which remounts Btrfs filesystem

Re: [ceph-users] why the erasure code pool not support random write?

2014-10-20 Thread Lionel Bouton
Le 20/10/2014 16:39, Wido den Hollander a écrit : > On 10/20/2014 03:25 PM, 池信泽 wrote: >> hi, cephers: >> >> When I look into the ceph source code, I found the erasure code pool >> not support >> the random write, it only support the append write. Why? Is that random >> write of is erasure co

Re: [ceph-users] why the erasure code pool not support random write?

2014-10-20 Thread Lionel Bouton
een OSDs (and probably use a "majority wins" rule for repair). If you are using Btrfs it will report an I/O error because it uses an internal checksum by default which will force Ceph to use other OSDs for repair. I'd be glad to be proven wrong

Re: [ceph-users] why the erasure code pool not support random write?

2014-10-21 Thread Lionel Bouton
Le 21/10/2014 09:31, Nicheal a écrit : > 2014-10-21 7:40 GMT+08:00 Lionel Bouton : >> Hi, >> >> Le 21/10/2014 01:10, 池信泽 a écrit : >> >> Thanks. >> >>Another reason is the checksum in the attr of object used for deep scrub >> in EC po

[ceph-users] Question/idea about performance problems with a few overloaded OSDs

2014-10-21 Thread Lionel Bouton
ctive loads of all OSDs storing a given PG to avoid "ping-pong" situations where read requests overload OSDs before overloading another and coming round again. Any thought? Is it based on wrong assumptions? Would it prove to be a can of worms if someo

Re: [ceph-users] Question/idea about performance problems with a few overloaded OSDs

2014-10-21 Thread Lionel Bouton
Hi Gregory, Le 21/10/2014 19:39, Gregory Farnum a écrit : > On Tue, Oct 21, 2014 at 10:15 AM, Lionel Bouton > wrote: >> [...] >> Any thought? Is it based on wrong assumptions? Would it prove to be a >> can of worms if someone tried to implement it? > Yeah, there

Re: [ceph-users] Problems with pgs incomplete

2014-12-01 Thread Lionel Bouton
gs, I'd expect ~1/3rd of your pgs to be incomplete given your "ceph osd tree" output) but reducing min_size to 1 should be harmless and should unfreeze the recovering process. Best regards, Lionel Bouton
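
For reference, the min_size change suggested above is a single pool setting, shown here with a placeholder pool name; it should be raised back once recovery completes:

    ceph osd pool set rbd min_size 1
    # ... wait for the incomplete PGs to recover ...
    ceph osd pool set rbd min_size 2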

Re: [ceph-users] Problems with pgs incomplete

2014-12-01 Thread Lionel Bouton
Le 01/12/2014 17:08, Lionel Bouton a écrit : > I may be wrong here (I'm surprised you only have 4 incomplete pgs, I'd > expect ~1/3rd of your pgs to be incomplete given your "ceph osd tree" > output) but reducing min_size to 1 should be harmless and should >

Re: [ceph-users] HEALTH_WARN 29 pgs degraded; 29 pgs stuck degraded; 133 pgs stuck unclean; 29 pgs stuck undersized;

2014-12-29 Thread Lionel Bouton
reat for us (giving us 20-30% additional space and most probably a little performance advantage too) Best regards, Lionel Bouton

Re: [ceph-users] OSDs with btrfs are down

2015-01-04 Thread Lionel Bouton
of (de)activating the two configuration options above are (expected performance gains? Additional Ceph features?). Best regards, Lionel Bouton

Re: [ceph-users] OSDs with btrfs are down

2015-01-06 Thread Lionel Bouton
On 01/06/15 02:36, Gregory Farnum wrote: > [...] > "filestore btrfs snap" controls whether to use btrfs snapshots to keep > the journal and backing store in check. WIth that option disabled it > handles things in basically the same way we do with xfs. > > "filestore btrfs clone range" I believe con

Re: [ceph-users] OSDs with btrfs are down

2015-01-06 Thread Lionel Bouton
ity arise I'll just adapt my tests accordingly and report (may take some months before we create new OSDs though). Best regards, Lionel Bouton

Re: [ceph-users] Is ceph production ready? [was: Ceph PG Incomplete = Cluster unusable]

2015-01-07 Thread Lionel Bouton
On 12/30/14 16:36, Nico Schottelius wrote: > Good evening, > > we also tried to rescue data *from* our old / broken pool by map'ing the > rbd devices, mounting them on a host and rsync'ing away as much as > possible. > > However, after some time rsync got completly stuck and eventually the > host w

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-14 Thread Lionel Bouton
only disks on the s3 server in write-back mode but given the user experience reports on the list it may actually perform worse than your current setup. Best regards, Lionel Bouton

Re: [ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Lionel Bouton
first you might want to disable and check how autodefrag and defrag behave. It might be possible to use snap and defrag, BTRFS was quite stable for us (but all our OSDs are on systems with at least 72GB RAM which have enough CPU power so memory wasn't much of an issue). Best regards, Lionel

Re: [ceph-users] Corruption of file systems on RBD images

2015-09-02 Thread Lionel Bouton
Hi Mathieu, Le 02/09/2015 14:10, Mathieu GAUTHIER-LAFAYE a écrit : > Hi All, > > We have some troubles regularly with virtual machines using RBD storage. When > we restart some virtual machines, they starts to do some filesystem checks. > Sometime it can rescue it, sometime the virtual machine d

Re: [ceph-users] Corruption of file systems on RBD images

2015-09-02 Thread Lionel Bouton
Le 02/09/2015 18:16, Mathieu GAUTHIER-LAFAYE a écrit : > Hi Lionel, > > - Original Message - >> From: "Lionel Bouton" >> To: "Mathieu GAUTHIER-LAFAYE" , >> ceph-us...@ceph.com >> Sent: Wednesday, 2 September, 2015 4:40:26 PM >>

[ceph-users] backfilling on a single OSD and caching controllers

2015-09-09 Thread Lionel Bouton
Hi, just a tip I just validated on our hardware. I'm currently converting an OSD from xfs with journal on same platter to btrfs with journal on SSD. To avoid any unwanted movement, I reused the same OSD number, weight and placement : so Ceph is simply backfilling all PGs previously stored on the o

Re: [ceph-users] Hammer reduce recovery impact

2015-09-10 Thread Lionel Bouton
Le 10/09/2015 22:56, Robert LeBlanc a écrit : > We are trying to add some additional OSDs to our cluster, but the > impact of the backfilling has been very disruptive to client I/O and > we have been trying to figure out how to reduce the impact. We have > seen some client I/O blocked for more than

Re: [ceph-users] Hammer reduce recovery impact

2015-09-10 Thread Lionel Bouton
Le 11/09/2015 00:20, Robert LeBlanc a écrit : > I don't think the script will help our situation as it is just setting > osd_max_backfill from 1 to 0. It looks like that change doesn't go > into effect until after it finishes the PG. That was what I was afraid of. Note that it should help a little
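
A hedged example of the runtime knob being discussed; as noted above, a lowered value only applies to PGs that start backfilling after the change, not to the one currently in flight:

    # Throttle recovery/backfill cluster-wide at runtime.
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'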

Re: [ceph-users] Hammer reduce recovery impact

2015-09-10 Thread Lionel Bouton
Le 11/09/2015 01:24, Lincoln Bryant a écrit : > On 9/10/2015 5:39 PM, Lionel Bouton wrote: >> For example deep-scrubs were a problem on our installation when at >> times there were several going on. We implemented a scheduler that >> enforces limits on simultaneous deep-scru
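
A minimal sketch of the "limit simultaneous deep-scrubs" idea, assuming a periodic job run with the admin keyring (the scheduler actually deployed is more elaborate than this):

    MAX=2
    # Count PGs currently deep-scrubbing and hold further ones back with the nodeep-scrub flag.
    active=$(ceph pg dump pgs_brief 2>/dev/null | grep -c 'scrubbing+deep')
    if [ "$active" -ge "$MAX" ]; then
        ceph osd set nodeep-scrub
    else
        ceph osd unset nodeep-scrub
    fi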

Re: [ceph-users] question on reusing OSD

2015-09-15 Thread Lionel Bouton
Le 16/09/2015 01:21, John-Paul Robinson a écrit : > Hi, > > I'm working to correct a partitioning error from when our cluster was > first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB > partitions for our OSDs, instead of the 2.8TB actually available on > disk, a 29% space hit. (Th

[ceph-users] Simultaneous CEPH OSD crashes

2015-09-27 Thread Lionel Bouton
. I made copies of the ceph osd logs (including the stack trace and the recent events) if needed. Can anyone put some light on why these OSDs died ? Best regards, Lionel Bouton

Re: [ceph-users] Simultaneous CEPH OSD crashes

2015-09-27 Thread Lionel Bouton
Le 27/09/2015 09:15, Lionel Bouton a écrit : > Hi, > > we just had a quasi simultaneous crash on two different OSD which > blocked our VMs (min_size = 2, size = 3) on Firefly 0.80.9. > > the first OSD to go down had this error : > > 2015-09-27 06:30:33.257133 7f7ac7fef70

Re: [ceph-users] Issue with journal on another drive

2015-09-29 Thread Lionel Bouton
Le 29/09/2015 07:29, Jiri Kanicky a écrit : > Hi, > > Is it possible to create journal in directory as explained here: > http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster Yes, the general idea (stop, flush, move, update ceph.conf, mkjournal, sta
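
A hedged outline of that sequence with a placeholder OSD id (service commands vary by distribution and Ceph version):

    service ceph stop osd.0                # or: systemctl stop ceph-osd@0
    ceph-osd -i 0 --flush-journal          # flush outstanding journal entries to the store
    # update osd journal in ceph.conf (or the journal symlink) to the new location
    ceph-osd -i 0 --mkjournal              # create the journal at the new location
    service ceph start osd.0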

Re: [ceph-users] Issue with journal on another drive

2015-09-29 Thread Lionel Bouton
Hi, Le 29/09/2015 13:32, Jiri Kanicky a écrit : > Hi Lionel. > > Thank you for your reply. In this case I am considering to create > separate partitions for each disk on the SSD drive. Would be good to > know what is the performance difference, because creating partitions > is kind of waste of spa

Re: [ceph-users] Simultaneous CEPH OSD crashes

2015-09-29 Thread Lionel Bouton
Le 27/09/2015 10:25, Lionel Bouton a écrit : > Le 27/09/2015 09:15, Lionel Bouton a écrit : >> Hi, >> >> we just had a quasi simultaneous crash on two different OSD which >> blocked our VMs (min_size = 2, size = 3) on Firefly 0.80.9. >> >> the first OSD to go

Re: [ceph-users] Predict performance

2015-10-02 Thread Lionel Bouton
Hi, Le 02/10/2015 18:15, Christian Balzer a écrit : > Hello, > On Fri, 2 Oct 2015 15:31:11 +0200 Javier C.A. wrote: > > Firstly, this has been discussed countless times here. > For one of the latest recurrences, check the archive for: > > "calculating maximum number of disk and node failure that c

Re: [ceph-users] Simultaneous CEPH OSD crashes

2015-10-03 Thread Lionel Bouton
Hi, Le 29/09/2015 19:06, Samuel Just a écrit : > It's an EIO. The osd got an EIO from the underlying fs. That's what > causes those asserts. You probably want to redirect to the relevant > fs mailing list. Thanks. I didn't get any answer on this from BTRFS developers yet. The problem seems har

Re: [ceph-users] O_DIRECT on deep-scrub read

2015-10-08 Thread Lionel Bouton
Le 07/10/2015 13:44, Paweł Sadowski a écrit : > Hi, > > Can anyone tell if deep scrub is done using O_DIRECT flag or not? I'm > not able to verify that in source code. > > If not would it be possible to add such feature (maybe config option) to > help keeping Linux page cache in better shape? Note

Re: [ceph-users] CEPH over SW-RAID

2015-11-23 Thread Lionel Bouton
Le 23/11/2015 18:17, Jan Schermer a écrit : > SW-RAID doesn't help with bit-rot if that's what you're afraid of. > If you are afraid bit-rot you need to use a fully checksumming filesystem > like ZFS. > Ceph doesn't help there either when using replicas - not sure how strong > error detection+cor

Re: [ceph-users] CEPH over SW-RAID

2015-11-23 Thread Lionel Bouton
Hi, Le 23/11/2015 18:37, Jose Tavares a écrit : > Yes, but with SW-RAID, when we have a block that was read and does not match > its checksum, the device falls out of the array I don't think so. Under normal circumstances a device only falls out of a md array if it doesn't answer IO queries afte

Re: [ceph-users] CEPH over SW-RAID

2015-11-23 Thread Lionel Bouton
Le 23/11/2015 19:58, Jose Tavares a écrit : > > > On Mon, Nov 23, 2015 at 4:15 PM, Lionel Bouton > <mailto:lionel-subscript...@bouton.name>> wrote: > > Hi, > > Le 23/11/2015 18:37, Jose Tavares a écrit : > > Yes, but with SW-RAID, when we

Re: [ceph-users] CEPH over SW-RAID

2015-11-23 Thread Lionel Bouton
Le 23/11/2015 21:01, Jose Tavares a écrit : > > > > > My new question regarding Ceph is if it isolates this bad sectors where > it found bad data when scrubbing? or there will be always a replica of > something over a known bad block..? > Ceph OSDs don't know about bad sectors, they deleg
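
Since the OSD leaves bad-sector handling to the drive and the filesystem, the drive's own remapping counters are what to watch; a hedged example with a placeholder device:

    # Reallocated and pending sector counts as reported by the drive itself.
    smartctl -A /dev/sdb | egrep -i 'reallocated|pending|uncorrect'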

Re: [ceph-users] CEPH over SW-RAID

2015-11-23 Thread Lionel Bouton
Le 23/11/2015 21:58, Jose Tavares a écrit : > > AFAIK, people are complaining about lots of bad blocks in the new big > disks. The hardware list seems to be small and unable to replace > these blocks. Note that if by big disks you mean SMR-based disks, they can exhibit what looks like bad blocks

Re: [ceph-users] Scrubbing question

2015-11-26 Thread Lionel Bouton
Le 26/11/2015 15:53, Tomasz Kuzemko a écrit : > ECC will not be able to recover the data, but it will always be able to > detect that data is corrupted. No. That's a theoretical impossibility as the detection is done by some kind of hash over the memory content which brings the possibility of hash

Re: [ceph-users] Global, Synchronous Blocked Requests

2015-11-28 Thread Lionel Bouton
Hi, Le 28/11/2015 04:24, Brian Felton a écrit : > Greetings Ceph Community, > > We are running a Hammer cluster (0.94.3-1) in production that recently > experienced asymptotic performance degradation. We've been migrating > data from an older non-Ceph cluster at a fairly steady pace for the > pas

Re: [ceph-users] dense storage nodes

2016-05-18 Thread Lionel Bouton
Hi, I'm not yet familiar with Jewel, so take this with a grain of salt. Le 18/05/2016 16:36, Benjeman Meekhof a écrit : > We're in process of tuning a cluster that currently consists of 3 > dense nodes with more to be added. The storage nodes have spec: > - Dell R730xd 2 x Xeon E5-2650 v3 @ 2.30

Re: [ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Lionel Bouton
Le 27/06/2016 17:42, Daniel Schneller a écrit : > Hi! > > We are currently trying to pinpoint a bottleneck and are somewhat stuck. > > First things first, this is the hardware setup: > > 4x DELL PowerEdge R510, 12x4TB OSD HDDs, journal colocated on HDD > 96GB RAM, 2x6 Cores + HT > 2x1GbE bonded i

Re: [ceph-users] pg scrub and auto repair in hammer

2016-06-28 Thread Lionel Bouton
Hi, Le 28/06/2016 08:34, Stefan Priebe - Profihost AG a écrit : > [...] > Yes but at least BTRFS is still not working for ceph due to > fragmentation. I've even tested a 4.6 kernel a few weeks ago. But it > doubles its I/O after a few days. BTRFS autodefrag is not working over the long term. Tha

Re: [ceph-users] Another cluster completely hang

2016-06-29 Thread Lionel Bouton
Hi, Le 29/06/2016 12:00, Mario Giammarco a écrit : > Now the problem is that ceph has put out two disks because scrub has > failed (I think it is not a disk fault but due to mark-complete) There is something odd going on. I've only seen deep-scrub failing (ie detect one inconsistency and marking

Re: [ceph-users] pg scrub and auto repair in hammer

2016-06-29 Thread Lionel Bouton
Hi, Le 29/06/2016 18:33, Stefan Priebe - Profihost AG a écrit : >> Am 28.06.2016 um 09:43 schrieb Lionel Bouton >> : >> >> Hi, >> >> Le 28/06/2016 08:34, Stefan Priebe - Profihost AG a écrit : >>> [...] >>> Yes but at least BTRFS is still no

Re: [ceph-users] Fwd: Ceph OSD suicide himself

2016-07-11 Thread Lionel Bouton
Le 11/07/2016 04:48, 한승진 a écrit : > Hi cephers. > > I need your help for some issues. > > The ceph cluster version is Jewel(10.2.1), and the filesytem is btrfs. > > I run 1 Mon and 48 OSD in 4 Nodes(each node has 12 OSDs). > > I've experienced one of OSDs was killed himself. > > Always it issued s

Re: [ceph-users] Fwd: Ceph OSD suicide himself

2016-07-11 Thread Lionel Bouton
Le 11/07/2016 11:56, Brad Hubbard a écrit : > On Mon, Jul 11, 2016 at 7:18 PM, Lionel Bouton > wrote: >> Le 11/07/2016 04:48, 한승진 a écrit : >>> Hi cephers. >>> >>> I need your help for some issues. >>> >>> The ceph cluster version is Jewel

Re: [ceph-users] Fwd: Ceph OSD suicide himself

2016-07-12 Thread Lionel Bouton
Hi, Le 12/07/2016 02:51, Brad Hubbard a écrit : > [...] This is probably a fragmentation problem : typical rbd access patterns cause heavy BTRFS fragmentation. >>> To the extent that operations take over 120 seconds to complete? Really? >> Yes, really. I had these too. By default Ceph/R
