Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean

2014-05-23 Thread Michael
and see if your "step" is set to osd or rack. If it's not host then change it to that and pull it in again. Check the docs on crush maps http://ceph.com/docs/master/rados/operations/crush-map/ for more info. -Michael On 23/05/2014 10:53, Karan Singh wrote: Try increasing the
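A minimal sketch of the crush edit being suggested, assuming the usual decompile/recompile workflow (file names are placeholders):
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # in the pool's rule, change "step chooseleaf firstn 0 type osd" to "... type host"
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new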

Re: [ceph-users] Is there a way to repair placement groups?

2014-05-27 Thread Michael
Hi Peter, Please use "ceph pg repair XX.xx". It might take a few seconds to kick in after being instructed. -Michael On 27/05/2014 21:40, phowell wrote: Hi First apologies if this is the wrong place to ask this question. We are running a small Ceph (0.79) cluster will about 12 o
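A short sketch of that repair flow; the pg id 2.1f is only a placeholder:
  ceph health detail | grep inconsistent   # find the affected pg
  ceph pg repair 2.1f                      # may take a few seconds to kick in
  ceph -w                                  # watch the repair scrub complete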

[ceph-users] Fwd: Re: pgs incomplete; pgs stuck inactive; pgs stuck unclean

2014-05-27 Thread Michael
ies but still usable, cluster stops accepting data at one accessible copy. -Michael On 27/05/2014 18:38, Sudarsan, Rajesh wrote: I am seeing the same error message with ceph health command. I am using Ubuntu 14.04 with ceph 0.79. I am using the ceph distribution that comes with the Ubuntu release.

Re: [ceph-users] Is there a way to repair placement groups?

2014-05-27 Thread Michael
Would it be feasible to try for an odd-one-out policy by default when repairing from a pool of 3 or more disks? Or is the most common cause of inconsistency most likely not to affect the primary? -Michael On 27/05/2014 23:55, Gregory Farnum wrote: Note that while the "repair" com

Re: [ceph-users] Hard drives of different sizes.

2014-06-05 Thread Michael
ceph osd dump | grep size Check that all pools are size 2, min size 2 or 1. If not you can change on the fly with: ceph osd pool set #poolname size/min_size #size See docs http://ceph.com/docs/master/rados/operations/pools/ for alterations to pool attributes. -Michael On 05/06/2014 17:29
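Spelled out, the check and the change look roughly like this, with "rbd" as an example pool name:
  ceph osd dump | grep size
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1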

Re: [ceph-users] anti-cephalopod question

2014-07-28 Thread Michael
ON split that doesn't risk the two nodes being up and unable to serve data while the three are down so you'd need to find a way to make it a 2/2/1 split instead. -Michael On 28/07/2014 18:41, Robert Fantini wrote: OK for higher availability then 5 nodes is better then 3 . So we'

Re: [ceph-users] Deployment scenario with 2 hosts

2014-07-28 Thread Michael
You can use multiple "steps" in your crush map in order to do things like choose two different hosts then choose a further OSD on one of the hosts and do another replication so that you can get three replicas onto two hosts without risking ending up with three replicas on a single node. On 28/
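A sketch of a rule along those lines, assuming the default root and a pool size of 3; the rule name and numbers are illustrative only:
  rule replicated_two_hosts {
      ruleset 1
      type replicated
      min_size 2
      max_size 3
      step take default
      step choose firstn 2 type host
      step chooseleaf firstn 2 type osd
      step emit
  }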

Re: [ceph-users] GPF kernel panics

2014-07-31 Thread Michael
The mainline packages from Ubuntu should be helpful in testing. Info: https://wiki.ubuntu.com/Kernel/MainlineBuilds Packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D On 31/07/2014 10:31, James Eckersall wrote: Ah, thanks for the clarification on that. We are very close to the 250

Re: [ceph-users] ceph can not repair itself after accidental power down, half of pgs are peering

2014-08-26 Thread Michael
How far out are your clocks? It's showing a clock skew, if they're too far out it can cause issues with cephx. Otherwise you're probably going to need to check your cephx auth keys. -Michael On 26/08/2014 12:26, yuelongguang wrote: hi,all i have 5 osds and 3 mons. its status is
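A quick way to confirm the skew being warned about, assuming ntpd is running on the monitors:
  ceph health detail | grep skew
  ntpq -p    # on each mon, compare the offset column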

Re: [ceph-users] three way replication on pool a failed

2014-09-18 Thread Michael
t down you'll only have 1 mon left, 1/3 will fail quorum and so the cluster will stop taking data to prevent split-brain scenarios. For 2 nodes to be down and the cluster to continue to operate you'd need a minimum of 5 mons or you'd need to mo

Re: [ceph-users] ls/file access hangs on a single ceph directory

2013-10-23 Thread Michael
/0x100 [ceph] [] vfs_getattr+0x4e/0x80 [] vfs_fstatat+0x4e/0x70 [] vfs_lstat+0x1e/0x20 [] sys_newlstat+0x1a/0x40 [] system_call_fastpath+0x16/0x1b [] 0x Started occurring shortly (within an hour or so) after adding a pool, not sure if that's relevant yet. -Michael On 23/10

Re: [ceph-users] ls/file access hangs on a single ceph directory

2013-10-24 Thread Michael
On 24/10/2013 03:09, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote: Trying to gather some more info. CentOS - hanging ls [root@srv ~]# cat /proc/14614/stack [] wait_answer_interruptible+0x81/0xc0 [fuse] [] fuse_request_send+0x1cb/0x290 [fuse] [] fuse_do_getattr+0x10c/0x2c0

Re: [ceph-users] ls/file access hangs on a single ceph directory

2013-10-24 Thread Michael
On 24/10/2013 13:53, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote: On 24/10/2013 03:09, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote: Trying to gather some more info. CentOS - hanging ls [root@srv ~]# cat /proc/14614/stack [] wait_answer_interruptible

[ceph-users] ls/file access hangs on a single ceph directory

2013-10-24 Thread Michael
1: 1/1/1 up {0=srv10=up:active} Have done a full deep scrub/repair cycle on all of the osd which has come back fine so not really sure where to start looking to find out what's wrong with it. Any ideas? -Michael ___ ceph-users mailing list ceph-

Re: [ceph-users] ls/file access hangs on a single ceph directory

2013-10-24 Thread Michael
On 24/10/2013 14:55, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 9:13 PM, Michael wrote: On 24/10/2013 13:53, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote: On 24/10/2013 03:09, Yan, Zheng wrote: On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote: Trying to gather some more

Re: [ceph-users] New to Ceph.... Install Guide

2013-10-26 Thread michael
'leftover' information here and there for older versions but otherwise ceph has some very, very nice documentation. -Michael On 26/10/2013 05:21, Raghavendra Lad wrote: Hi Cephs, I am new to Ceph. I am planning to install CEPH. I already have Openstack Grizzly installed and for storage t

Re: [ceph-users] ceph-deploy problems on CentOS-6.4

2013-10-29 Thread Michael
manually using http://ceph.com/docs/next/install/rpm/ -Michael On 29/10/2013 15:57, Narendra Trivedi wrote: Hi All, I am a newbie to ceph. I am installing ceph (dumpling release) using *ceph-deploy* (issued from my admin node) on one monitor and two OSD nodes running CentOS 6.4 (64-bit)

Re: [ceph-users] please help me.problem with my ceph

2013-11-08 Thread Michael
it's probably a good idea to double check your pg numbers while you're doing this. -Michael On 08/11/2013 11:08, Karan Singh wrote: Hello Joseph This sounds like a solution , BTW how to set replication level to 1 , is there any direct command or need to edit configuration

Re: [ceph-users] please help me.problem with my ceph

2013-11-08 Thread Michael
Apologies, that should have been: ceph osd dump | grep 'rep size' What I get from blindly copying from a wiki! -Michael On 08/11/2013 11:38, Michael wrote: Hi Karan, There's info on http://ceph.com/docs/master/rados/operations/pools/ But primarily you need to check your rep

Re: [ceph-users] ceph-deploy: osd creating hung with one ssd disk as shared journal

2013-11-12 Thread Michael
eing /dev/sda mean you're putting your journal onto an SSD that's already partitioned and in use by the OS? -Michael On 12/11/2013 18:09, Gruher, Joseph R wrote: I didn't think you could specify the journal in this manner (just pointing multiple OSDs on the same host all to journal /d

Re: [ceph-users] ceph-deploy: osd creating hung with one ssd disk as shared journal

2013-11-12 Thread Michael
Sorry, just spotted you're mounting on sdc. Can you chuck out a partx -v /dev/sda to see if there's anything odd about the data currently on there? -Michael On 12/11/2013 18:22, Michael wrote: As long as there's room on the SSD for the partitioner it'll just use the conf v

[ceph-users] HEALTH_WARN # requests are blocked > 32 sec

2013-11-25 Thread Michael
eems to happen for periods of a couple of minutes then wake up again. Thanks much, -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] HEALTH_WARN # requests are blocked > 32 sec

2013-11-25 Thread Michael
g during the block but this is now getting more frequent and seems to be for longer periods. Looking at the osd logs for 3 and 8 there's nothing of relevance in there. Any ideas on the next step? Thanks, -Michael On 25/11/2013 15:28, Ирек Фасихов wrote: ceph health detail -- С у

Re: [ceph-users] Constant slow / blocked requests with otherwise healthy cluster

2013-11-27 Thread Michael
's running of it responding noticeably slower. Wish I knew what actually caused it. :/ What version of ceph are you on? -Michael On 27/11/2013 21:00, Andrey Korolyov wrote: Hey, What number do you have for a replication factor? As for three, 1.5k IOPS may be a little bit high for 36 dis

Re: [ceph-users] adding another mon failed

2013-11-29 Thread Michael
This previous thread looks like it might be the same error, could be helpful. http://www.spinics.net/lists/ceph-users/msg05295.html -Michael On 29/11/2013 19:24, German Anders wrote: Hi, i'm having issues while trying to add another monitor to my cluster: ceph@ceph-deploy01:~/ceph-cl

Re: [ceph-users] After reboot nothing worked

2013-12-17 Thread Michael
The clocks on your two nodes are not aligned, you'll need to set up an ntp daemon and either sync them to a remote system or sync them to an internal system. Either way you just need to get them the same. -Michael On 17/12/2013 10:39, Umar Draz wrote: 2) After fixing the above issue I
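A minimal sketch of getting the clocks aligned, assuming Debian/Ubuntu-style nodes and the stock ntp package:
  apt-get install ntp
  ntpq -p        # confirm both nodes converge on the same source
  ceph health    # the clock-skew warning clears once the offsets are small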

Re: [ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-08 Thread Michael
Would also be interested to know about these performance issues, we have a 12.04 cluster using RBD caching we're about to double in size so it'd be good to know if we could be about to run into any potential bottlenecks. -Michael On 06/01/2014 21:03, LaSalle, Jurvis wrote: On 1/

Re: [ceph-users] RBD cache questions (kernel vs. user space, KVM live migration)

2014-01-14 Thread michael
" the live migrations are allowed regardless of cache mode. https://www.suse.com/documentation/sles11/singlehtml/book_kvm/book_kvm.html#idm139742235036576 Afaik a full FS flush is called just as it completes copying the memory across for the live migration. -Michael On 15/01/2014 02:41, C

Re: [ceph-users] RBD cache questions (kernel vs. user space, KVM live migration)

2014-01-15 Thread Michael
good for a bit more peace of mind! -Michael On 15/01/2014 05:41, Christian Balzer wrote: Hello, Firstly thanks to Greg and Sage for clearing this up. Now all I need for a very early Xmas is ganeti 2.10 released and a Debian KVM release that has RBD enabled. ^o^ Meaning that for now I'm

Re: [ceph-users] Emperor Upgrade: osds not starting

2014-01-16 Thread Michael
h osd dump | grep 'pg_num' And see the docs: http://ceph.com/docs/master/rados/operations/placement-groups/ You can currently increase the number of PG/PGP of a pool but not decrease them, so take care if you need to balance them as higher numbers increase CPU load. -Michael Howe
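A sketch of the increase itself; the pool name and target count are placeholders, and pgp_num should follow pg_num:
  ceph osd pool get rbd pg_num
  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256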

[ceph-users] Removing OSD, double data migration

2014-02-12 Thread Michael
Hi Ceph users, Have always wondered this: why does data get shuffled twice when you delete an OSD? You out an OSD and the data gets moved to other nodes - understandable - but then when you remove that OSD from crush it moves data again. Aren't outed OSDs and OSDs not in crush the same fro

[ceph-users] Flapping/Crashing OSD

2014-02-20 Thread Michael
Hi All, Have a log full of - "log [ERR] : 1.9 log bound mismatch, info (46784'1236417,46797'1239418] actual [46784'1235968,46797'1239418]" "192.168.7.177:6800/15655 >> 192.168.7.183:6802/3348 pipe(0x20e4f00 sd=65 :56394 s=2 pgs=24194 cs=1 l=0 c=0x19668f20).fault, initiating reconnect" and

Re: [ceph-users] Flapping/Crashing OSD

2014-02-20 Thread Michael
Thanks Gregory. Currently it's just the one OSD with the issue. If it's more of a general failing of an OSD I'll rip it out and replace the drive. -Michael On 20/02/2014 17:55, Gregory Farnum wrote: On Thu, Feb 20, 2014 at 4:26 AM, Michael wrote: Hi All, Have a log full

[ceph-users] Ubuntu 13.10 packages

2014-02-25 Thread Michael
Hi All, Just wondering if there was a reason for no packages for Ubuntu Saucy in http://ceph.com/packages/ceph-extras/debian/dists/. Could do with upgrading to fix a few bugs but would hate to have to drop Ceph from being handled through the package manager! Thanks, -Michael

Re: [ceph-users] Ubuntu 13.10 packages

2014-02-27 Thread Michael
Thanks Tim, I'll give the raring packages a try. Found a tracker for Saucy packages; looks like the person they were assigned to hasn't checked in for a fair while, so they might have just been overlooked http://tracker.ceph.com/issues/6726. -Michael On 27/02/2014 13:33, Tim Bi

Re: [ceph-users] Ceph 0.72.2 installation on Ubuntu 12.04.4 LTS never got active + clean

2014-04-29 Thread Michael
tep chooseleaf firstn 0 type osd" or similar depending on your crush setup. Please see the documentation for more info https://ceph.com/docs/master/rados/operations/crush-map/. -Michael On 29/04/2014 21:00, Vadim Kimlaychuk wrote: Hello all, I have tried to install subj. almo

Re: [ceph-users] Ceph 0.72.2 installation on Ubuntu 12.04.4 LTS never got active + clean

2014-04-29 Thread Michael
Have just looked at the documentation Vadim was trying to use to set up a cluster and http://eu.ceph.com/docs/wip-6919/start/quick-start/ should really be updated or removed as it will not result in a working cluster with recent Ceph versions. -Michael On 29/04/2014 21:09, Michael wrote: Hi

[ceph-users] 0.80 Firefly Debian/Ubuntu Trusty Packages

2014-05-08 Thread Michael
Hi, Have these been missed or have they been held back for a specific reason? http://ceph.com/debian-firefly/dists/ looks like Trusty is the only one that hasn't been updated. -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] 0.80 Firefly Debian/Ubuntu Trusty Packages

2014-05-08 Thread Michael
Ah, thanks for the info. Will keep an eye on it there instead and clean the ceph.com from the sources list. -Michael On 08/05/2014 21:48, Henrik Korkuc wrote: hi, trusty will include ceph in usual repos. I am tracking http://packages.ubuntu.com/trusty/ceph and https://bugs.launchpad.net

Re: [ceph-users] ceph firefly PGs in active+clean+scrubbing state

2014-05-13 Thread Michael
n scrub status was completely ignoring standard restart commands which prevented any scrubbing from continuing within the cluster even after update. -Michael On 13/05/2014 17:03, Fabrizio G. Ventola wrote: I've upgraded to 0.80.1 on a testing instance: the cluster gets cyclically active+clea

[ceph-users] Ceph 0.80.1 delete/recreate data/metadata pools

2014-05-13 Thread Michael
ot mounted anywhere either. Any way I can clean out these pools now and reset the pgp num etc? Thanks, -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph 0.80.1 delete/recreate data/metadata pools

2014-05-13 Thread Michael
Answered my own question. Created two new pools, used mds newfs on them and then deleted the original pools and renamed the new ones. -Michael On 13/05/2014 22:20, Michael wrote: Hi All, Seems commit 2adc534a72cc199c8b11dbdf436258cbe147101b has removed the ability to delete and recreate the

[ceph-users] RBD cache pool - not cleaning up

2014-05-21 Thread Michael
pool full of data and both of them full of objects. Anyone else trying this out? -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] RBD cache pool - not cleaning up

2014-05-21 Thread Michael
Thanks Sage, the cache systems look pretty great so far. Combined with erasure coding it's really adding a lot of options. -Michael On 21/05/2014 21:54, Sage Weil wrote: On Wed, 21 May 2014, Michael wrote: Hi All, Experimenting with cache pools for RBD, created two pools, sl

Re: [ceph-users] [Luminous]How to choose the proper ec profile?

2017-10-30 Thread Michael
. shadow_lin wrote: What would be a good ec profile for archive purposes (decent write performance and just OK read performance)? I don't actually know that - but the default is not bad if you ask me (not that it features writes faster than reads). Plus it lets you pick m. - Michael

[ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-01 Thread Michael
ome way in which I can tell rocksdb to truncate or delete / skip the respective log entries? Or can I get access to rocksdb('s files) in some other way to just manipulate it or delete corrupted WAL files manually? -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-01 Thread Michael
ount, the OSD won't activate and the error is the same. Is there any fix in .2 that might address this, or do you just mean that in general there will be bug fixes? Thanks for your response! - Michael ___ ceph-users mailing list ceph-users@lists.

Re: [ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-11-04 Thread Michael
ch. As you might see on the bug tracker, the patch did apparently avoid the immediate error for me, but Ceph then ran into another error. - Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-13 Thread Michael
Konstantin Shalygin wrote: > I think Christian talks about version 12.2.2, not 12.2.* Which isn't released yet, yes. I could try building the development repository if you think that has a chance of resolving the issue? Although I'd still like to know how I could theoretically get my hands at

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Michael
l replication and so on will trigger *before* you remove it. (There is a configurable timeout for how long an OSD can be down, after which the OSD is essentially treated as dead already, at which point replication and rebalancing starts). -Michael
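The timeout referred to is 'mon osd down out interval'; a sketch of raising it in ceph.conf (the value in seconds is arbitrary here):
  [mon]
      mon osd down out interval = 600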

[ceph-users] Dell Ceph Hardware recommendations

2016-02-09 Thread Michael
Hello, I'm looking at purchasing Qty 3-4, Dell PowerEdge T630 or R730xd for my OSD nodes in a Ceph cluster. Hardware: Qty x 1, E5-2630v3 2.4Ghz 8C/16T 128 GB DDR4 Ram QLogic 57810 DP 10Gb DA/SFP+ Converged Network Adapter I'm trying to determine which RAID controller to use, since I've read JBO

Re: [ceph-users] Dell Ceph Hardware recommendations

2016-02-11 Thread Michael
Alex Leake writes: > > Hello Michael​, > > I maintain a small Ceph cluster at the University of Bath, our cluster consists of: > > Monitors: > 3 x Dell PowerEdge R630 > > - 2x Intel(R) Xeon(R) CPU E5-2609 v3 > - 64GB RAM > - 4x 300GB SAS (RAID 10) >

Re: [ceph-users] nginx (tengine) and radosgw

2014-05-29 Thread Michael Lukzak
HTTP/1.1" 100 0 "-" "Boto/2.27.0 Python/2.7.6 Linux/3.13.0-24-generic" Do you also have this problem? I tested with the original nginx and also have a problem with 100-Continue. Only Apache 2.x works fine. BR, Michael I haven't tried SSL yet. We currently do

[ceph-users] NGINX and 100-Continue

2014-05-29 Thread Michael Lukzak
Hi, I have a question about Nginx and 100-Continue. If I use client like boto or Cyberduck all works fine, but when I want to upload file on 100% upload, progress bar hangs and after about 30s Cyberduck reports that HTTP 100-Continue timeouted. I use nginx v1.4.1 Only when I use Apache 2 with fa

Re: [ceph-users] NGINX and 100-Continue

2014-05-29 Thread Michael Lukzak
nrert report that this might be involved with mod fastcgi (fcgi can't handle this, only fastcgi can handle http 100-Continue). I can't find how this might be correlated with fastcgi in nginx. Michael Don't use nginx. The current version buffers all the uploads to the local dis

Re: [ceph-users] nginx (tengine) and radosgw

2014-05-29 Thread Michael Lukzak
Hi, Oops, so I didn't read the doc carefully... I will try this solution. Thanks! Michael From the docs, you need this setting in ceph.conf (if you're using nginx/tengine): rgw print continue = false This will fix the 100-continue issues. On 5/29/2014 5:56 AM, Michael Lukzak w
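For reference, a sketch of where that setting sits in ceph.conf, assuming a typical radosgw client section name:
  [client.radosgw.gateway]
      rgw print continue = false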

[ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
Hi all, How do I get my Ceph Cluster back to a healthy state? root@ceph-admin-storage:~# ceph -v ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6) root@ceph-admin-storage:~# ceph -s cluster 6b481875-8be5-4508-b075-e1f660fd7b33 health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
__ From: Karan Singh [karan.si...@csc.fi] Sent: Tuesday, 12 August 2014 10:35 To: Riederer, Michael Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean Can you provide your cluster's ceph osd du

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
2014 13:00 To: Riederer, Michael Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean I am not sure if this helps, but have a look https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10078.html - Karan - On 12

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-12 Thread Riederer, Michael
From: Craig Lewis [cle...@centraldesktop.com] Sent: Tuesday, 12 August 2014 20:02 To: Riederer, Michael Cc: Karan Singh; ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean For the incomplete PGs, can you give

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-14 Thread Riederer, Michael
b the pgs. Many thanks for your help. Regards, Mike From: Craig Lewis [cle...@centraldesktop.com] Sent: Wednesday, 13 August 2014 19:48 To: Riederer, Michael Cc: Karan Singh; ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete;

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-18 Thread Riederer, Michael
From: Craig Lewis [cle...@centraldesktop.com] Sent: Thursday, 14 August 2014 19:56 To: Riederer, Michael Cc: Karan Singh; ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean It sounds like you need to thro

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-18 Thread Riederer, Michael
-boun...@lists.ceph.com]" on behalf of "Riederer, Michael [michael.riede...@br.de] Sent: Monday, 18 August 2014 13:40 To: Craig Lewis Cc: ceph-users@lists.ceph.com; Karan Singh Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean Hi C

[ceph-users] Cache tiering and CRUSH map

2014-08-18 Thread Michael Kolomiets
to use some location type under host level to group OSDs by type and then use it in mapping rules? -- Michael Kolomiets ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-19 Thread Riederer, Michael
Regards, Mike ____ From: Craig Lewis [cle...@centraldesktop.com] Sent: Monday, 18 August 2014 19:22 To: Riederer, Michael Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean I take it th

Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

2014-08-22 Thread Riederer, Michael
Hi Craig, many thanks for your help. I decided to reinstall ceph. Regards, Mike From: Craig Lewis [cle...@centraldesktop.com] Sent: Tuesday, 19 August 2014 22:24 To: Riederer, Michael Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] HEALTH_WARN 4 pgs

[ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Michael Kolomiets
.2’ saved [3249803264/3249803264] root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2 5e28d425f828440b025d769609c5bb41 XXX.iso.2 root@lw01p01-mgmt01:/export/secondary# md5sum XXX.iso.2 5e28d425f828440b025d769609c5bb41 XXX.iso.2 -- Michael Kolomiets ___ c

Re: [ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Michael Kolomiets
03:00 Yan, Zheng : > I suspect the client does not have permission to write to pool 3. > could you check if the contents of XXX.iso.2 are all zeros. > > Yan, Zheng > > On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets > wrote: >> Hi! >> I use ceph pool mount

Re: [ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Michael Kolomiets
ent does not have permission to write to pool 3. > could you check if the contents of XXX.iso.2 are all zeros. > > Yan, Zheng > > On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets > wrote: >> Hi! >> I use ceph pool mounted via cephfs for cloudstack secondary storage &

Re: [ceph-users] HELP FOR CEPH SOURCE CODE

2015-02-21 Thread Michael Andersen
C++ is case sensitive, it will be very difficult... On Feb 21, 2015 3:44 AM, "Stefan Priebe - Profihost AG" < s.pri...@profihost.ag> wrote: > This will be very difficult with a broken keyboard! > > > Am 21.02.2015 um 12:16 schrieb khyati joshi : > > > > I WANT TO ADD 2 NEW FEATURES IN CEPH NAMELY

Re: [ceph-users] who is using radosgw with civetweb?

2015-02-26 Thread Michael Kuriger
I'd also like to set this up. I'm not sure where to begin. When you say enabled by default, where is it enabled? Many thanks, Mike On 2/25/15, 1:49 PM, "Sage Weil" wrote: >On Wed, 25 Feb 2015, Robert LeBlanc wrote: >> We tried to get radosgw working with Apache + mod_fastcgi, but due to >> t

Re: [ceph-users] who is using radosgw with civetweb?

2015-02-26 Thread Michael Kuriger
Thanks Sage for the quick reply! -=Mike On 2/26/15, 8:05 AM, "Sage Weil" wrote: >On Thu, 26 Feb 2015, Michael Kuriger wrote: >> I'd also like to set this up. I'm not sure where to begin. When you >> say >> enabled by default, where is it enabled? > >Th

[ceph-users] ceph binary missing from ceph-0.87.1-0.el6.x86_64

2015-03-02 Thread Michael Kuriger
child_exception [ceph201][ERROR ] OSError: [Errno 2] No such file or directory [ceph201][ERROR ] [ceph201][ERROR ] Michael Kuriger ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph repo - RSYNC?

2015-03-05 Thread Michael Kuriger
I use reposync to keep mine updated when needed. Something like: cd ~/ceph/repos reposync -r Ceph -c /etc/yum.repos.d/ceph.repo reposync -r Ceph-noarch -c /etc/yum.repos.d/ceph.repo reposync -r elrepo-kernel -c /etc/yum.repos.d/elrepo.repo Michael Kuriger Sr. Unix Systems Engineer mk7

Re: [ceph-users] [SPAM] Changing pg_num => RBD VM down !

2015-03-16 Thread Michael Kuriger
I always keep my pg number a power of 2. So I'd go from 2048 to 4096. I'm not sure if this is the safest way, but it's worked for me. Michael Kuriger Sr. Unix Systems Engineer • mk7...@yp.com • 818-649-7235 From: Chu Duc Minh <chu.ducm...@gma

Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-04-08 Thread Michael Kidd
I don't think this came through the first time.. resending.. If it's a dupe, my apologies.. For Firefly / Giant installs, I've had success with the following: yum install ceph ceph-common --disablerepo=base --disablerepo=epel Let us know if this works for you as well. Thanks,

Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-05-04 Thread Michael Kidd
For Firefly / Giant installs, I've had success with the following: yum install ceph ceph-common --disablerepo=base --disablerepo=epel Let us know if this works for you as well. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Apr 8, 2015

Re: [ceph-users] very different performance on two volumes in the same pool #2

2015-05-11 Thread Mason, Michael
I had the same problem when doing benchmarks with small block sizes (<8k) to RBDs. These settings seemed to fix the problem for me. sudo ceph tell osd.* injectargs '--filestore_merge_threshold 40' sudo ceph tell osd.* injectargs '--filestore_split_multiple 8' After you apply the settings give it
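To keep those values across OSD restarts, the same settings can also go in ceph.conf; a sketch using the values above:
  [osd]
      filestore merge threshold = 40
      filestore split multiple = 8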

[ceph-users] New Calamari server

2015-05-11 Thread Michael Kuriger
I had an issue with my calamari server, so I built a new one from scratch. I've been struggling trying to get the new server to start up and see my ceph cluster. I went so far as to remove salt and diamond from my ceph nodes and reinstalled again. On my calamari server, it sees the hosts connect

Re: [ceph-users] New Calamari server

2015-05-12 Thread Michael Kuriger
In my case, I did remove all salt keys. The salt portion of my install is working. It’s just that the calamari server is not seeing the ceph cluster. Michael Kuriger Sr. Unix Systems Engineer * mk7...@yp.com |( 818-649-7235 On 5/12/15, 1:35 AM, "Alexandre DERUMIER" wrote:

Re: [ceph-users] Does anyone understand Calamari??

2015-05-13 Thread Michael Kuriger
.el6 Installed: salt.noarch 0:2014.7.1-1.el6 salt-minion.noarch 0:2014.7.1-1.el6 This is on CentOS 6.6 -=Mike Kuriger Michael Kuriger Sr. Unix Systems Engineer • mk7...@yp.com • 818-649-7235 From: Bruce McFarland <bruce.

Re: [ceph-users] client.radosgw.gateway for 2 radosgw servers

2015-05-19 Thread Michael Kuriger
= civetweb port=80 rgw_socket_path = /var/run/ceph/ceph-client.radosgw.ceph-gw3.asok Michael Kuriger Sr. Unix Systems Engineer * mk7...@yp.com * 818-649-7235 From: Florent MONTHEL <fmont...@flox-arts.net> Date: Monday, May 18, 2015 at 6:14 PM To:

Re: [ceph-users] Beginners ceph journal question

2015-06-09 Thread Michael Kuriger
You could mount /dev/sdb to a filesystem, such as /ceph-disk, and then do this: ceph-deploy osd create ceph-node1:/ceph-disk Your journal would be a file doing it this way. Michael Kuriger Sr. Unix Systems Engineer * mk7...@yp.com * 818-649-7235 From:

Re: [ceph-users] radosgw backup

2015-06-11 Thread Michael Kuriger
You may be able to use replication. Here is a site showing a good example of how to set it up. I have not tested replicating within the same datacenter, but you should just be able to define a new zone within your existing ceph cluster and replicate to it. http://cephnotes.ksperis.com/blog/20

Re: [ceph-users] ceph mount error

2015-06-11 Thread Michael Kuriger
1) set up mds server ceph-deploy mds --overwrite-conf create 2) create filesystem ceph osd pool create cephfs_data 128 ceph osd pool create cephfs_metadata 16 ceph fs new cephfs cephfs_metadata cephfs_data ceph fs ls ceph mds stat 3) mount it! From: ceph-users [mailto:ceph-users-boun...@lists.
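For step 3, a sketch of the two usual mount paths; the monitor address, key, and mount point are placeholders:
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=AQAT...
  # or with the FUSE client:
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs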

Re: [ceph-users] Is Ceph right for me?

2015-06-11 Thread Michael Kuriger
You might be able to accomplish that with something like dropbox or owncloud From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Trevor Robinson - Key4ce Sent: Wednesday, May 20, 2015 2:35 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] Is Ceph right for me? Hello, C

[ceph-users] firefly to giant upgrade broke ceph-gw

2015-06-15 Thread Michael Kuriger
) but deleting is not working unless I specify an exact file to delete. Also, my radosgw-agent is not syncing buckets any longer. I'm using s3cmd to test reads/writes to the gateway. Has anyone else had problems in giant? Michael Kuriger Sr. Unix Systems Engineer * mk7...@yp.com * 818-649-7235

[ceph-users] CEPH-GW replication, disable /admin/log

2015-06-22 Thread Michael Kuriger
Is it possible to disable the replication of /admin/log and other replication logs? It seems that this log replication is occupying a lot of time in my cluster(s). I'd like to only replicate users' data. Thanks! Michael Kuriger Sr. Unix Systems Engineer • mk7...@yp.com

Re: [ceph-users] Happy SysAdmin Day!

2015-07-31 Thread Michael Kuriger
Thanks Mark you too Michael Kuriger Sr. Unix Systems Engineer * mk7...@yp.com |( 818-649-7235 On 7/31/15, 3:02 PM, "ceph-users on behalf of Mark Nelson" wrote: >Most folks have either probably already left or are on their way out the >door late on a friday, but I ju

[ceph-users] Osd crash and misplaced objects after rapid object deletion

2013-07-23 Thread Michael Lowe
On two different occasions I've had an osd crash and misplace objects when rapid object deletion has been triggered by discard/trim operations with the qemu rbd driver. Has anybody else had this kind of trouble? The objects are still on disk, just not in a place the osd considers valid.

[ceph-users] Glance image upload errors after upgrading to Dumpling

2013-08-14 Thread Michael Morgan
Hello Everyone, I have a Ceph test cluster doing storage for an OpenStack Grizzly platform (also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster showing healthy but suddenly I can't upload images into Glance anymore. The upload fails and glance-api throws an error: 2013-0

Re: [ceph-users] Glance image upload errors after upgrading to Dumpling

2013-08-15 Thread Michael Morgan
On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote: > On 08/14/2013 02:22 PM, Michael Morgan wrote: > >Hello Everyone, > > > > I have a Ceph test cluster doing storage for an OpenStack Grizzly > > platform > >(also testing). Upgrading to 0.67 we

Re: [ceph-users] RBD hole punching

2013-08-22 Thread Michael Lowe
I use the virtio-scsi driver. On Aug 22, 2013, at 12:05 PM, David Blundell wrote: >> I see yet another caveat: According to that documentation, it only works with >> the IDE driver, not with virtio. >> >>Guido > > I've just been looking into this but have not yet tested. It looks like >

Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Michael Lowe
FWIW: I use a qemu 1.4.2 that I built with a debian package upgrade script and the stock libvirt from raring. > On Oct 2, 2013, at 10:59 PM, Josh Durgin wrote: > >> On 10/02/2013 06:26 PM, Blair Bethwaite wrote: >> Josh, >> >>> On 3 October 2013 10:36, Josh Durgin wrote: >>> The version bas

Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-09 Thread Michael Lowe
There used to be, can't find it right now. Something like 'ceph osd set pg_num ' then 'ceph osd set pgp_num ' to actually move your data into the new pg's. I successfully did it several months ago, when bobtail was current. Sent from my iPad > On Oct 9, 2013, at 10:30 PM, Guang wrote: > > T
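The intended commands are probably the pool-level form; a sketch with a placeholder pool name and count:
  ceph osd pool set data pg_num 1024
  ceph osd pool set data pgp_num 1024   # this is what actually moves data into the new pgs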

Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-09 Thread Michael Lowe
at the same time? > 2) What is the recommended way to scale a cluster from like 1PB to 2PB, > should we scale it to like 1.1PB to 1.2PB or move to 2PB directly? > > Thanks, > Guang > >> On Oct 10, 2013, at 11:10 AM, Michael Lowe wrote: >> >> There used to b

Re: [ceph-users] monitor failover of ceph

2013-10-11 Thread Michael Lowe
You must have a quorum, i.e. MORE than 50% of your monitors functioning, for the cluster to function. With one of two you only have 50%, which isn't enough and stops i/o. Sent from my iPad > On Oct 11, 2013, at 11:28 PM, "飞" wrote: > hello, I am a new user of ceph, > I have built a ceph testing

Re: [ceph-users] Full OSD with 29% free

2013-10-14 Thread Michael Lowe
How fragmented is that file system? Sent from my iPad > On Oct 14, 2013, at 5:44 PM, Bryan Stillwell > wrote: > > This appears to be more of an XFS issue than a ceph issue, but I've > run into a problem where some of my OSDs failed because the filesystem > was reported as full even though ther
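One way to answer that on an XFS-backed OSD, with the device name as a placeholder:
  xfs_db -r -c frag /dev/sdb1   # prints actual/ideal extents and a fragmentation factor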

Re: [ceph-users] kvm live migrate wil ceph

2013-10-14 Thread Michael Lowe
I live migrate all the time using the rbd driver in qemu, no problems. Qemu will issue a flush as part of the migration so everything is consistent. It's the right way to use ceph to back vm's. I would strongly recommend against a network file system approach. You may want to look into format

Re: [ceph-users] Perl Bindings for Ceph

2013-10-20 Thread Michael Lowe
1. How about enabling trim/discard support in virtio-SCSI and using fstrim? That might work for you. 4. Well you can mount them rw in multiple vm's with predictably bad results, so I don't see any reason why you could not specify ro as a mount option and do ok. Sent from my iPad > On Oct 21
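A sketch of the guest-side half of suggestion 1, assuming the disk is attached through virtio-scsi with discard enabled on the host:
  fstrim -v /    # run inside the guest; reports how much space was handed back to rbd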
