[ceph-users] mds daemon damaged

2018-07-12 Thread Kevin
Sorry for the long posting, but I'm trying to cover everything. I woke up to find my cephfs filesystem down. This was in the logs: 2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object read crc 0x6fc2f65a != expected 0x1c08241c on 2:292cf221:::200.:head I had one standby MDS, but as far as
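The object named in that read error looks like the MDS journal header. A rough first-look sketch, not the thread's actual fix (the pool name cephfs_metadata and rank 0 are assumptions; the object id above is truncated):
  ceph status
  ceph fs status
  # inspect the suspect journal-header object (pool/object names assumed)
  rados -p cephfs_metadata stat 200.00000000
  # only once the underlying object is readable again can the rank be marked repaired
  ceph mds repaired 0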

[ceph-users] Continuing placement group problems

2014-06-26 Thread Kevin Horan
that these are harmless and will go away in a future version. I also looked in the monitor logs but didn't see any reference to inconsistent or scrubbed objects. Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/lis

Re: [ceph-users] Continuing placement group problems

2014-06-26 Thread kevin horan
On 06/26/2014 01:08 PM, Gregory Farnum wrote: On Thu, Jun 26, 2014 at 12:52 PM, Kevin Horan wrote: I am also getting inconsistent object errors on a regular basis, about 1-2 every week or so for about 300GB of data. All OSDs are using XFS filesystems. Some OSDs are individual 3TB internal

Re: [ceph-users] OSD Performance

2015-02-24 Thread Kevin Walker
fragmentation problems other users have experienced? Kind regards Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] OSD Performance

2015-02-24 Thread Kevin Walker
SSD's for OSD's and RAM disk pcie devices for the Journals so this would be ok. Kind regards Kevin Walker +968 9765 1742 On 25 Feb 2015, at 02:35, Mark Nelson wrote: > On 02/24/2015 04:21 PM, Kevin Walker wrote: > Hi All > > Just recently joined the list and have been

Re: [ceph-users] OSD Performance

2015-02-24 Thread Kevin Walker
vide FC targets, which adds further power consumption. Kind regards Kevin Walker +968 9765 1742 On 25 Feb 2015, at 04:40, Christian Balzer wrote: On Wed, 25 Feb 2015 02:50:59 +0400 Kevin Walker wrote: > Hi Mark > > Thanks for the info, 22k is not bad, but still massively below what

Re: [ceph-users] Extreme slowness in SSD cluster with 3 nodes and 9 OSD with 3.16-3 kernel

2015-02-28 Thread Kevin Walker
What about the Samsung 845DC Pro SSD's? These have fantastic enterprise performance characteristics. http://www.thessdreview.com/our-reviews/samsung-845dc-pro-review-800gb-class-leading-speed-endurance/ Kind regards Kevin On 28 February 2015 at 15:32, Philippe Schwarz wrote: > --

Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results

2015-03-01 Thread Kevin Walker
Can I ask what xio and simple messenger are and the differences? Kind regards Kevin Walker +968 9765 1742 On 1 Mar 2015, at 18:38, Alexandre DERUMIER wrote: Hi Mark, I found an previous bench from Vu Pham (it's was about simplemessenger vs xiomessenger) http://www.spinics.net/lists

[ceph-users] how to improve seek time using hammer-test release

2015-03-09 Thread kevin parrikar
found from mailing list) - This showed some noticeable difference. Will configuring SSD in RAID0 improve this, a single OSD from RAID0? Regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users

[ceph-users] client crashed when osd gets restarted - hammer 0.93

2015-03-11 Thread kevin parrikar
Hi, I am trying hammer 0.93 on Ubuntu 14.04. rbd is mapped in the client, which is also Ubuntu 14.04. When I did a stop ceph-osd-all and then a start, the client machine crashed and the attached pic was in the console. Not sure if it's related to ceph. Thanks ___

Re: [ceph-users] client crashed when osd gets restarted - hammer 0.93

2015-03-11 Thread kevin parrikar
thanks i will follow this work around. On Thu, Mar 12, 2015 at 12:18 AM, Somnath Roy wrote: > Kevin, > > This is a known issue and should be fixed in the latest krbd. The problem > is, it is not backported to 14.04 krbd yet. You need to build it from > latest krbd source if yo

[ceph-users] calculating maximum number of disk and node failure that can be handled by cluster with out data loss

2015-06-09 Thread kevin parrikar
I have a 4 node cluster, each with 5 disks (4 OSD and 1 operating system disk, also hosting 3 monitor processes) with default replica 3. Total OSD disks: 16 Total Nodes: 4 How can I calculate the - Maximum number of disk failures my cluster can handle without any impact on current data and new
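For the setup described (4 hosts, 16 OSDs, size 3), a back-of-the-envelope reading, assuming the default CRUSH rule that places each replica on a different host:
  replicas per PG (size)               = 3, one copy per host
  whole-host failures before data loss = size - 1 = 2   (one copy still readable)
  failures with writes still flowing   = size - min_size = 3 - 2 = 1   (default min_size 2)
Disk failures follow the same logic as long as the failed disks sit on different hosts and the cluster has spare capacity to re-replicate.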

[ceph-users] trouble authenticating after bootstrapping monitors

2013-08-02 Thread Kevin Weiler
elot on camelot... === mds.camelot === Starting Ceph mds.camelot on camelot... starting mds.camelot at :/0 [root@camelot ~]# ceph auth get mon. access denied If someone could tell me what I'm doing wrong it would be greatly appreciated. Thanks! -- Kevin Weiler IT IMC Financial Markets | 233 S. Wack

Re: [ceph-users] trouble authenticating after bootstrapping monitors

2013-08-05 Thread Kevin Weiler
when creating the client.admin key so it doesn't need capabilities? Thanks again! -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@imc-chicago.com

[ceph-users] adding osds manually

2013-08-06 Thread Kevin Weiler
Hi again Ceph devs, I'm trying to deploy ceph using puppet and I'm hoping to add my osds non-sequentially. I spoke with dmick on #ceph about this and we both agreed it doesn't seem possible given the documentation. However, I have an example of a ceph cluster that was deployed using ceph-deploy

[ceph-users] ceph-deploy pushy dependency problem

2013-08-27 Thread Kevin Weiler
rrect version). The spec file looks fine in the ceph-deploy git repo, maybe you just need to rerun the package/repo generation? Thanks! -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-

Re: [ceph-users] ceph-deploy pushy dependency problem

2013-08-28 Thread Kevin Weiler
k=0 proxy=_none_ metadata_expire=0 -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chicago.com> From: Gary Lowe

Re: [ceph-users] ceph-deploy pushy dependency problem

2013-08-28 Thread Kevin Weiler
: NOKEY /usr/bin/env gdisk or pushy >= 0.5.3 python(abi) = 2.7 python-argparse python-distribute python-pushy >= 0.5.3 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 it seems to require both pushy AND python-pushy. -- Kevin Weiler IT IMC Financial Mar

[ceph-users] mounting RBD in linux containers

2013-10-17 Thread Kevin Weiler
sages on either the container or the host box. Any ideas on how to troubleshoot this? Thanks! -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@

Re: [ceph-users] mounting RBD in linux containers

2013-10-18 Thread Kevin Weiler
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did, however, try a map with an RBD that was format 2. I got the same error. -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax

Re: [ceph-users] mounting RBD in linux containers

2013-10-28 Thread Kevin Weiler
Hi Josh, We did map it directly to the host, and it seems to work just fine. I think this is a problem with how the container is accessing the rbd module. -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312

[ceph-users] ceph recovery killing vms

2013-10-28 Thread Kevin Weiler
o that our VMs don't go down when there is a problem with the cluster? -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@imc-chicago.com<mailt

Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
Thanks Kyle, What's the unit for osd recovery max chunk? Also, how do I find out what my current values are for these osd options? -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +
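For reference, a sketch of how the running values can be read over the admin socket (stock socket path assumed; osd_recovery_max_chunk is in bytes):
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep recovery
  # change at runtime on all OSDs, value in bytes (1 MB here)
  ceph tell osd.* injectargs '--osd-recovery-max-chunk 1048576'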

[ceph-users] near full osd

2013-11-05 Thread Kevin Weiler
Hi guys, I have an OSD in my cluster that is near full at 90%, but we're using a little less than half the available storage in the cluster. Shouldn't this be balanced out? -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-c

Re: [ceph-users] near full osd

2013-11-05 Thread Kevin Weiler
All of the disks in my cluster are identical and therefore all have the same weight (each drive is 2TB and the automatically generated weight is 1.82 for each one). Would the procedure here be to reduce the weight, let it rebal, and then put the weight back to where it was? -- Kevin Weiler IT
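A minimal sketch of the temporary-reweight approach being asked about (OSD id and weights are placeholders; ceph osd reweight sets the 0-1 override weight, while ceph osd crush reweight would change the permanent CRUSH weight):
  ceph osd reweight 12 0.85    # push some PGs off the near-full OSD
  # ...wait for rebalance and HEALTH_OK...
  ceph osd reweight 12 1.0     # restore the override weight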

Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
; 20 I assume this is in bytes. -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chicago.com> From: Kur

Re: [ceph-users] near full osd

2013-11-08 Thread Kevin Weiler
Thanks Gregory, One point that was a bit unclear in documentation is whether or not this equation for PGs applies to a single pool, or the entirety of pools. Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs or should all the pools ADD UP to 3000 PGs? Thanks! -- Kevin Weiler IT
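The rule-of-thumb formula from the placement-group docs, as a worked example (numbers illustrative); the result is the cluster-wide total, and the per-pool pg_num values should add up to roughly that figure:
  total PGs ~= (OSDs * 100) / replica count
             = (90 * 100) / 3 = 3000, rounded up to a power of two -> 4096
  per pool  ~= total PGs / number of pools, weighted by each pool's expected share of data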

Re: [ceph-users] near full osd

2013-11-08 Thread Kevin Weiler
Thanks again Gregory! One more quick question. If I raise the amount of PGs for a pool, will this REMOVE any data from the full OSD? Or will I have to take the OSD out and put it back in to realize this benefit? Thanks! -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite

[ceph-users] strange radios df output

2013-11-09 Thread Kevin Weiler
bytes. Am I reading this incorrectly? -- Kevin Weiler IT IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | http://imc-chicago.com/ Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chi

[ceph-users] OSD on an external, shared device

2013-11-26 Thread Kevin Horan
r any help. Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] OSD on an external, shared device

2013-11-27 Thread kevin horan
r any help. Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] OSD on an external, shared device

2013-11-27 Thread Kevin Horan
drives, how do you limit the visibility of the drives? Am I missing something here? Could there be a configuration option or something added to ceph to ensure that it never tries to mount things on its own? Thanks. Kevin On 11/26/2013 05:14 PM, Kyle Bader wrote: Is there any way to

Re: [ceph-users] OSD on an external, shared device

2013-11-27 Thread Kevin Horan
Ah, that sounds like what I want. I'll look into that, thanks. Kevin On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote: Is LUN masking an option in your SAN? On 11/27/13, 2:34 PM, "Kevin Horan" wrote: Thanks. I may have to go this route, but it seems awfully fragile. One stray

[ceph-users] CephFS unresponsive at scale (2M files,

2014-11-17 Thread Kevin Sumner
near-idle up to similar 100-150% CPU. Hopefully, I’ve missed something in the CephFS tuning. However, I’m looking for direction on figuring out if it is, indeed, a tuning problem or if this behavior is a symptom of the “not ready for production” banner in the documentation. -- Kevin Sumner ke

Re: [ceph-users] CephFS unresponsive at scale (2M files,

2014-11-17 Thread Kevin Sumner
> On Nov 17, 2014, at 15:52, Sage Weil wrote: > > On Mon, 17 Nov 2014, Kevin Sumner wrote: >> I've got a test cluster together with a ~500 OSDs and, 5 MON, and 1 MDS. All >> the OSDs also mount CephFS at /ceph. I've got Graphite pointing at a space >> under /ceph.

Re: [ceph-users] CephFS unresponsive at scale (2M files,

2014-11-18 Thread Kevin Sumner
minute, so cache at 1 million is still undersized. If that doesn’t work, we’re running Firefly on the cluster currently and I’ll be upgrading it to Giant. -- Kevin Sumner ke...@sumner.io > On Nov 18, 2014, at 1:36 AM, Thomas Lemarchand > wrote: > > Hi Kevin, > > There

Re: [ceph-users] CephFS unresponsive at scale (2M files,

2014-11-19 Thread Kevin Sumner
Making mds cache size 5 million seems to have helped significantly, but we’re still seeing issues occasionally on metadata reads while under load. Settings over 5 million don’t seem to have any noticeable impact on this problem. I’m starting the upgrade to Giant today. -- Kevin Sumner ke

[ceph-users] "store is getting too big" on monitors after Firefly to Giant upgrade

2014-12-09 Thread Kevin Sumner
nt io 3463 MB/s rd, 18710 kB/s wr, 7456 op/s -- Kevin Sumner ke...@sumner.io ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] "store is getting too big" on monitors after Firefly to Giant upgrade

2014-12-10 Thread Kevin Sumner
5360 MB -- 85% avail mon.cluster4-monitor004 store is getting too big! 93414 MB >= 15360 MB -- 69% avail mon.cluster4-monitor005 store is getting too big! 88232 MB >= 15360 MB -- 71% avail -- Kevin Sumner ke...@sumner.io > On Dec 9, 2014, at 6:20 PM, Haomai Wang wrote: > > Mayb

[ceph-users] Stripping data

2014-12-14 Thread Kevin Shiah
Hello All, Does anyone know how to configure data stripping when using ceph as file system? My understanding is that configuring stripping with rbd is only for block device. Many thanks, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http
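On recent releases, CephFS striping is set through the virtual layout xattrs on the mounted filesystem; a sketch, assuming a reasonably new client (paths and values are placeholders, and files inherit the directory layout only at creation time):
  setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/dir
  setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/dir
  getfattr -n ceph.dir.layout /mnt/cephfs/dir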

Re: [ceph-users] File System stripping data

2014-12-16 Thread Kevin Shiah
lying file system does not support xattr. Has anyone ever run into similar problem before? I deployed CephFS on Debian wheezy. And here is the mounting information: ceph-fuse on /dfs type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other) Many thanks, Kev

Re: [ceph-users] File System stripping data

2014-12-17 Thread Kevin Shiah
Hi John, I am using 0.56.1. Could it be because data striping is not supported in this version? Kevin On Wed Dec 17 2014 at 4:00:15 AM PST Wido den Hollander wrote: > On 12/17/2014 12:35 PM, John Spray wrote: > > On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander > wrote: >

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-22 Thread Kevin Wolf
ration with rbd and cache.direct=off. > If yes, it is possible to disable manually writeback online with qmp ? No, such a QMP command doesn't exist, though it would be possible to implement (for toggling cache.direct, that is; cache.writeback is guest visible and

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-22 Thread Kevin Wolf
Am 19.04.2014 um 00:33 hat Josh Durgin geschrieben: > On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote: > >Thanks Kevin for for the full explain! > > > >>>cache.writeback=on,cache.direct=off,cache.no-flush=off > > > >I didn't known about the cache option

[ceph-users] cannot revert lost objects

2014-05-01 Thread kevin horan
"incomplete": 0, "last_epoch_started": 20323}, "recovery_state": [ { "name": "Started\/Primary\/Active", "enter_time": "2014-05-01 09:03:30.557244", "might_have_unfound": [ {

Re: [ceph-users] cannot revert lost objects

2014-05-03 Thread Kevin Horan
t the operation just hangs. Kevin On 5/1/14 10:11 , kevin horan wrote: Here is how I got into this state. I have only 6 OSDs total, 3 on one host (vashti) and 3 on another (zadok). I set the noout flag so I could reboot zadok. Zadok was down for 2 minutes. When it came up

Re: [ceph-users] cannot revert lost objects

2014-05-07 Thread Kevin Horan
While everything was moving from degraded to active+clean, it finally finished probing. If it's still happening tomorrow, I'd try to find a Geeks on IRC Duty (http://ceph.com/help/community/). On 5/3/14 09:43 , Kevin Horan wrote: Craig, Thanks for your response

Re: [ceph-users] cannot revert lost objects

2014-05-15 Thread Kevin Horan
t the operation just hangs. Kevin On 5/1/14 10:11 , kevin horan wrote: Here is how I got into this state. I have only 6 OSDs total, 3 on one host (vashti) and 3 on another (zadok). I set the noout flag so I could reboot zadok. Zadok was down for 2 minutes. When it came up

Re: [ceph-users] CephFS First product release discussion

2013-03-05 Thread Kevin Decherf
ame needs here. -- Kevin Decherf - @Kdecherf GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F http://kdecherf.com ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] "Recommended" cache size on MDS

2013-04-24 Thread Kevin Decherf
on the ratio cache size/total cluster size? Or any better ratio than others observed in your labs? Thanks, -- Kevin Decherf - @Kdecherf GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F http://kdecherf.com ___ ceph-users mailing list ceph-users

Re: [ceph-users] "Recommended" cache size on MDS

2013-04-25 Thread Kevin Decherf
at least an order of magnitude higher. Ok :) Do you have an idea of the average size of an inode in the cache? -- Kevin Decherf - @Kdecherf GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F http://kdecherf.com ___ ceph-users mailing list ceph-users

Re: [ceph-users] MDS Crash on recovery (0.60)

2013-04-30 Thread Kevin Decherf
On Tue, Apr 30, 2013 at 03:10:00PM +0100, Mike Bryant wrote: > All of my MDS daemons have begun crashing when I start them up, and > they try to begin recovery. Hi, It seems to be the same bug as #4644 http://tracker.ceph.com/issues/4644 -- Kevin Decherf - @Kdecherf GPG C610 FE73 E70

[ceph-users] Wireshark on Windows

2013-05-31 Thread Kevin Jones
Hi, I have done a bit of work on the wireshark plugin so it will compile for WIN32, really as a by-product of trying to investigate a problem as a learning exercise and finding the plugin was not decoding the area I was interested in. I haven't tried to improve the plugin but thought I would me

[ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-11 Thread kevin parrikar
Hello All, I am trying to upgrade a small test setup having one monitor and one osd node which is on the hammer release. I updated from hammer to jewel using package update commands and things are working. However, after updating from Jewel to Luminous, I am facing issues with osd failing to start

Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread kevin parrikar
Can someone please help me on this. I have no idea how to bring up the cluster to operational state. Thanks, Kev On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar wrote: > hello All, > I am trying to upgrade a small test setup having one monitor and one osd > node which is in hamme

Re: [ceph-users] upgrade Hammer>Jewel>Luminous OSD fail to start

2017-09-12 Thread kevin parrikar
> match path" option) and such after upgrading from Hammer to Jewel? I am not > sure if that matters here, but it might help if you elaborate on your > upgrade process a bit. > > --Lincoln > > > On Sep 12, 2017, at 2:22 PM, kevin parrikar > wrote: > > > >

[ceph-users] Flattening loses sparseness

2017-10-12 Thread Massey, Kevin
ete...done. $ rbd du NAME PROVISIONED USED child 10240k 10240k parent@snap 10240k 0 parent 10240k 0 20480k 10240k Is there any way to flatten a clone while retaining its sparseness, perhaps in Luminous or with BlueStor

[ceph-users] Cluster Down from reweight-by-utilization

2017-11-04 Thread Kevin Hrpcek
t daemons are simply waiting for new maps. I can often see the "newest_map" incrementing on osd daemons, but it is slow and some are behind by thousands. Thanks, Kevin Cluster details: CentOS 7.4 Kraken ceph-11.2.1-0.el7.x86_64 540 OSD, 3 mon/mgr/mds ~3.6PB, 72% raw used, ~40 million ob

Re: [ceph-users] Cluster Down from reweight-by-utilization

2017-11-04 Thread Kevin Hrpcek
d, a little tight on some of those early gen servers, but I haven't seen OOM killing things off yet. I think I saw mention of that patch and luminous handling this type of situation better while googling the issue...larger osdmap increments or something similar if i recall correctly.

Re: [ceph-users] Cluster Down from reweight-by-utilization

2017-11-06 Thread Kevin Hrpcek
quickly setting nodown,noout,noup when everything is already down will help as well. Sage, thanks again for your input and advice. Kevin On 11/04/2017 11:54 PM, Sage Weil wrote: On Sat, 4 Nov 2017, Kevin Hrpcek wrote: Hey Sage, Thanks for getting back to me this late on a weekend. Do you
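The flag dance referred to above is roughly the following (set while things are down so flapping stops generating new maps, cleared once the cluster settles):
  ceph osd set noout; ceph osd set nodown; ceph osd set noup
  # ...bring OSDs back, let osdmaps catch up...
  ceph osd unset noup; ceph osd unset nodown; ceph osd unset noout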

Re: [ceph-users] Pool shard/stripe settings for file too large files?

2017-11-09 Thread Kevin Hrpcek
28MB limit is a bit high but not unreasonable. If you have an application written directly to librados that is using objects larger than 128MB you may need to adjust osd_max_object_size" Kevin On 11/09/2017 02:01 PM, Marc Roos wrote: I would like store objects with rados -p ec32
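A hedged sketch for checking and, if really needed, raising the limit mentioned above (Luminous-era option name; value in bytes):
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_object_size
  # ceph.conf, [osd] section, e.g. 256 MB
  osd_max_object_size = 268435456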

[ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-20 Thread kevin parrikar
showing correct values. Can someone help me here please? Regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-20 Thread kevin parrikar
17-12-21 02:39:10.622835 7fb40a22b700 0 Cannot get stat of OSD 141 Not sure what's wrong in my setup Regards, Kevin On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez wrote: > Hi, > > make sure client.admin user has an MGR cap using ceph auth list. At some > point there was a glitch w
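The cap check/fix being suggested at this point is roughly the following (note that ceph auth caps replaces the whole cap set, so every cap has to be restated; per the follow-up below, the eventual root cause was a firewall rule, not the caps):
  ceph auth get client.admin
  ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'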

Re: [ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-21 Thread kevin parrikar
key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ== caps: [mds] allow * caps: [mgr] allow * caps: [mon] allow * caps: [osd] allow * Regards, Kevin On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar wrote: > Thanks JC, > I tried > ceph auth caps client.admin o

Re: [ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-26 Thread kevin parrikar
It was a firewall issue on the controller nodes. After allowing the ceph-mgr port in iptables everything is displaying correctly. Thanks to people on IRC. Thanks a lot, Kevin On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar wrote: > accidently removed mailing list email > > ++ceph-users >

[ceph-users] slow 4k writes, Luminous with bluestore backend

2017-12-26 Thread kevin parrikar
objects, 72319 MB usage: 229 GB used, 39965 GB / 40195 GB avail pgs: 6240 active+clean Can someone suggest a way to improve this? Thanks, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users
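For a before/after comparison while tuning, a quick small-write baseline can be taken with rados bench (pool name, runtime and queue depth are placeholders):
  rados bench -p testpool 60 write -b 4096 -t 16
  rados -p testpool cleanup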

Re: [ceph-users] Ceph luminous - throughput performance issue

2018-01-31 Thread Kevin Hrpcek
e a ton of different configurations to test but I only did a few focused on writes. Kevin R440, Perc H840 with 2 MD1400 attached with 12 10TB NLSAS drives per md1400. Xfs filestore with 10gb journal lv on each 10tb disk. Ceph cluster set up as a single mon/mgr/osd server for testing. These tables p

[ceph-users] RFC Bluestore-Cluster of SAMSUNG PM863a

2018-02-02 Thread Kevin Olbrich
4.4.x-kernel. We plan to migrate to Ubuntu 16.04.3 with HWE (kernel 4.10). Clients will be Fedora 27 + OpenNebula. Any comments? Thank you. Kind regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-

Re: [ceph-users] RFC Bluestore-Cluster of SAMSUNG PM863a

2018-02-02 Thread Kevin Olbrich
2018-02-02 12:44 GMT+01:00 Richard Hesketh : > On 02/02/18 08:33, Kevin Olbrich wrote: > > Hi! > > > > I am planning a new Flash-based cluster. In the past we used SAMSUNG > PM863a 480G as journal drives in our HDD cluster. > > After a lot of tests with luminous and

[ceph-users] _read_bdev_label failed to open

2018-02-04 Thread Kevin Olbrich
te: Failed to activate > [osd01.cloud.example.local][WARNIN] unmount: Unmounting > /var/lib/ceph/tmp/mnt.pAfCl4 > Same problem on 2x 14 disks. I was unable to get this cluster up. Any ideas? Kind regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] _read_bdev_label failed to open

2018-02-04 Thread Kevin Olbrich
I also noticed there are no folders under /var/lib/ceph/osd/ ... Mit freundlichen Grüßen / best regards, Kevin Olbrich. 2018-02-04 19:01 GMT+01:00 Kevin Olbrich : > Hi! > > Currently I try to re-deploy a cluster from filestore to bluestore. > I zapped all disks (multiple times

Re: [ceph-users] _read_bdev_label failed to open

2018-02-04 Thread Kevin Olbrich
artitions 1 - 2 were not added, they are (this disk has only two partitions). Should I open a bug? Kind regards, Kevin 2018-02-04 19:05 GMT+01:00 Kevin Olbrich : > I also noticed there are no folders under /var/lib/ceph/osd/ ... > > > Mit freundlichen Grüßen / best regards, > Kevi

Re: [ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-07 Thread Kevin Olbrich
Would be interested as well. - Kevin 2018-02-04 19:00 GMT+01:00 Yoann Moulin : > Hello, > > What is the best kernel for Luminous on Ubuntu 16.04 ? > > Is linux-image-virtual-lts-xenial still the best one ? Or > linux-virtual-hwe-16.04 will offer some improvement ? > >

Re: [ceph-users] Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-08 Thread Kevin Olbrich
diff,object-map,deep-flatten on the image. > Otherwise it runs well. > I always thought that the latest features are built into newer kernels, are they available on non-HWE 4.4, HWE 4.8 or HWE 4.10? Also I am researching for the OSD server side. - Kevin _

[ceph-users] How are replicas spread in default crush configuration?

2016-11-23 Thread Kevin Olbrich
OSDs (and setting size to 3). I want to make sure we can resist two offline hosts (in terms of hardware). Is my assumption correct? Mit freundlichen Grüßen / best regards, Kevin Olbrich. ___ ceph-users mailing list ceph-users@lists.ceph.com http://list
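One way to confirm how the default rule spreads replicas is to decompile the CRUSH map (a sketch; with size 3 and one replica per host, two whole hosts can be offline and the data is still readable):
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
  # the default replicated rule normally contains:
  #   step chooseleaf firstn 0 type host    <- one replica per host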

Re: [ceph-users] degraded objects after osd add

2016-11-23 Thread Kevin Olbrich
regards, Kevin Olbrich. > > Original Message > Subject: Re: [ceph-users] degraded objects after osd add (17-Nov-2016 9:14) > From:Burkhard Linke > To: c...@dolphin-it.de > > Hi, > > > On 11/17/2016 08:07 AM, Steffen Weißgerber wrot

[ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack

2016-11-25 Thread Kevin Olbrich
them run remote services (terminal). My question is: Are 80 VMs hosted on 53 disks (mostly 7.2k SATA) too much? We sometimes experience lags where nearly all servers suffer from "blocked IO > 32" seconds. What are your experiences? Mit freundlichen Grüßen / best regards,

[ceph-users] Deploying new OSDs in parallel or one after another

2016-11-28 Thread Kevin Olbrich
Hi! I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and only need to activate them. What is better? One by one or all at once? Kind regards, Kevin. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com

Re: [ceph-users] Deploying new OSDs in parallel or one after another

2016-11-28 Thread Kevin Olbrich
I need to note that I already have 5 hosts with one OSD each. Mit freundlichen Grüßen / best regards, Kevin Olbrich. 2016-11-28 10:02 GMT+01:00 Kevin Olbrich : > Hi! > > I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and > only need to activate them. > What

Re: [ceph-users] [EXTERNAL] Re: 2x replication: A BIG warning

2016-12-07 Thread Kevin Olbrich
is safe regardless of full outage. Mit freundlichen Grüßen / best regards, Kevin Olbrich. 2016-12-07 21:10 GMT+01:00 Wido den Hollander : > > > Op 7 december 2016 om 21:04 schreef "Will.Boege" >: > > > > > > Hi Wido, > > > > Just curious how

[ceph-users] What happens if all replica OSDs journals are broken?

2016-12-12 Thread Kevin Olbrich
, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] What happens if all replica OSDs journals are broken?

2016-12-13 Thread Kevin Olbrich
Ok, thanks for your explanation! I read those warnings about size 2 + min_size 1 (we are using ZFS as RAID6, called zraid2) as OSDs. Time to raise replication! Kevin 2016-12-13 0:00 GMT+01:00 Christian Balzer : > On Mon, 12 Dec 2016 22:41:41 +0100 Kevin Olbrich wrote: > > > Hi, >
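The usual knobs for raising replication on an existing pool, sketched (the pool name is a placeholder; expect data movement while the extra copies are created):
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2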

Re: [ceph-users] What happens if all replica OSDs journals are broken?

2016-12-14 Thread Kevin Olbrich
2016-12-14 2:37 GMT+01:00 Christian Balzer : > > Hello, > Hi! > > On Wed, 14 Dec 2016 00:06:14 +0100 Kevin Olbrich wrote: > > > Ok, thanks for your explanation! > > I read those warnings about size 2 + min_size 1 (we are using ZFS as > RAID6, > > called

[ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-05 Thread kevin parrikar
understand this better. Regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-06 Thread kevin parrikar
for your suggestion. Regards, Kevin On Fri, Jan 6, 2017 at 8:56 AM, jiajia zhong wrote: > > > 2017-01-06 11:10 GMT+08:00 kevin parrikar : > >> Hello All, >> >> I have setup a ceph cluster based on 0.94.6 release in 2 servers each >> with 80Gb intel s3510 and

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-06 Thread kevin parrikar
Thanks Christian for your valuable comments, each comment is a new learning for me. Please see inline On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote: > > Hello, > > On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote: > > > Hello All, > > > > I h

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-06 Thread kevin parrikar
. Regards, Kevin On Fri, Jan 6, 2017 at 4:42 PM, kevin parrikar wrote: > Thanks Christian for your valuable comments,each comment is a new learning > for me. > Please see inline > > On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote: > >> >> Hello, >> >&g

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread kevin parrikar
m SEC. I suppose this also shows slow performance. Any idea where the issue could be? I use an LSI 9260-4i controller (firmware 12.13.0.-0154) on both nodes with write-back enabled. I am not sure if this controller is suitable for ceph. Regards, Kevin On Sat, Jan 7, 2017 at 1:23 PM, Mag
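A common check for whether the journal SSD (or the controller cache in front of it) is the bottleneck is a direct, synchronous 4k write test with fio, something like the following (destructive on the target device; the device path is a placeholder):
  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-sync-test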

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread kevin parrikar
bought S3500 because last time when we tried ceph, people were suggesting this model :) :) Thanks a lot for your help On Sat, Jan 7, 2017 at 6:01 PM, Lionel Bouton < lionel-subscript...@bouton.name> wrote: > Hi, > > Le 07/01/2017 à 04:48, kevin parrikar a écrit : > > i reall

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread kevin parrikar
more osd "per" node or more osd "nodes". Thanks a lot for all your help. Learned so many new things, thanks again. Kevin On Sat, Jan 7, 2017 at 7:33 PM, Lionel Bouton < lionel-subscript...@bouton.name> wrote: > Le 07/01/2017 à 14:11, kevin parrikar a écrit : > > T

[ceph-users] rbd lock remove unable to parse address

2018-07-09 Thread Kevin Olbrich
s for this image? Kind regards, Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-09 Thread Kevin Olbrich
Is it possible to force-remove the lock or the image? Kevin 2018-07-09 21:14 GMT+02:00 Jason Dillaman : > Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses since > it is failing to parse the address as valid. Perhaps it's barfing on the > "%eth0" sc
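The usual by-hand sequence, sketched (pool/image names, the lock id and the locker are placeholders taken from the lock list output; in this thread the address parse error may block even this path):
  rbd lock list rbd/myimage
  rbd lock remove rbd/myimage "auto 123456789" client.4567
  rbd rm rbd/myimage    # only once the lock is gone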

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-09 Thread Kevin Olbrich
and IPv6 addresses >> since it is failing to parse the address as valid. Perhaps it's barfing on >> the "%eth0" scope id suffix within the address. >> >> On Mon, Jul 9, 2018 at 2:47 PM Kevin Olbrich wrote: >> >>> Hi! >>> >>> I tri

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-09 Thread Kevin Olbrich
ink local when there is an ULA-prefix available. The address is available on brX on this client node. - Kevin > On Mon, Jul 9, 2018 at 3:43 PM Kevin Olbrich wrote: > >> 2018-07-09 21:25 GMT+02:00 Jason Dillaman : >> >>> BTW -- are you running Ceph on a one-node computer

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-10 Thread Kevin Olbrich
2018-07-10 14:37 GMT+02:00 Jason Dillaman : > On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich wrote: > >> 2018-07-10 0:35 GMT+02:00 Jason Dillaman : >> >>> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least >>> present on the clien

Re: [ceph-users] PGs stuck peering (looping?) after upgrade to Luminous.

2018-07-11 Thread Kevin Olbrich
Sounds a little bit like the problem I had on OSDs: [ceph-users] Blocked requests activating+remapped after extending pg(p)_num <http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026680.html> *Kevin Olbrich* - [ceph-users] Blocked requests activating+remapped afterextendi

Re: [ceph-users] Bluestore and number of devices

2018-07-13 Thread Kevin Olbrich
You can keep the same layout as before. Most people place DB/WAL combined in one partition (similar to the journal on filestore). Kevin 2018-07-13 12:37 GMT+02:00 Robert Stanford : > > I'm using filestore now, with 4 data devices per journal device. > > I'm confused by th
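A minimal ceph-volume sketch for the combined layout described above (device paths are placeholders; the WAL lands in the DB partition when no separate --block.wal is given):
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1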

[ceph-users] Periodically activating / peering on OSD add

2018-07-14 Thread Kevin Olbrich
Hi, why do I see activating followed by peering during OSD add (refill)? I did not change pg(p)_num. Is this normal? From my other clusters, I don't think that happened... Kevin ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] Periodically activating / peering on OSD add

2018-07-14 Thread Kevin Olbrich
PS: It's luminous 12.2.5! Mit freundlichen Grüßen / best regards, Kevin Olbrich. 2018-07-14 15:19 GMT+02:00 Kevin Olbrich : > Hi, > > why do I see activating followed by peering during OSD add (refill)? > I did not change pg(p)_num. > > Is this normal? From my other

Re: [ceph-users] v12.2.7 Luminous released

2018-07-19 Thread Kevin Olbrich
Hi, on upgrade from 12.2.4 to 12.2.5 the balancer module broke (mgr crashes minutes after service started). Only solution was to disable the balancer (service is running fine since). Is this fixed in 12.2.7? I was unable to locate the bug in bugtracker. Kevin 2018-07-17 18:28 GMT+02:00
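The disable step mentioned above corresponds roughly to these Luminous mgr commands:
  ceph balancer off
  ceph mgr module disable balancer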
