Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 12:52 PM, Mike Christie wrote: > On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: >> Hi Jason, >> Thanks for your prompt responses >> >> I have used same iscsi-gateway.cfg file - no security changes - just >> added prometheus entry >>

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/10/2018 12:40 PM, Mike Christie wrote: > On 10/09/2018 05:09 PM, Brady Deetz wrote: >> I'm trying to replace my old single point of failure iscsi gateway with >> the shiny new tcmu-runner implementation. I've been fighting a Windows >> initiator all day. I have

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
tcmu-runner 1.4.0. > You need this patch which sets the failover type back to implicit to match tcmu-runner 1.4.0 and also makes it configurable for future versions: commit 8d66492b8c7134fb37b72b5e8e77d7c8109220d9 Author: Mike Christie Date: Mon Jul 23 15:45:09 2018 -0500 Allow alua fail

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 03:18 PM, Steven Vacaroaia wrote: > so, it seems OSD03 is having issues when creating disks ( I can create > target and hosts ) - here is an excerpt from api.log > Please note I can create disk on the other node > > 2018-10-10 16:03:03,369DEBUG [lun.py:381:allocate()] - LUN.all

Re: [ceph-users] OSDs crash after deleting unfound object in Luminous 12.2.8

2018-10-12 Thread Mike Lovell
one anyways. i think that just happened for a little bit in irc. i guess it didn't happen cause no one followed up on it. good luck and hopefully you don't blame me if things get worse. :) mike On Fri, Oct 12, 2018 at 7:34 AM Lawrence Smith < lawrence.sm...@uni-muenster.de> wrote:

Re: [ceph-users] OSDs crash after deleting unfound object in Luminous 12.2.8

2018-10-18 Thread Mike Lovell
mark another hit set missing. :) i think the code that removes the hit set from the pg data is before that assert so its possible it still removed it from the history. mike On Thu, Oct 18, 2018 at 9:11 AM Lawrence Smith < lawrence.sm...@uni-muenster.de> wrote: > Hi Mike, > > Tha

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-02-22 Thread Mike Lovell
d any luck there. mike On Thu, Feb 22, 2018 at 9:49 AM, Chris Sarginson wrote: > Hi Caspar, > > Sean and I replaced the problematic DC S4600 disks (after all but one had > failed) in our cluster with Samsung SM863a disks. > There was an NDA for new Intel firmware (as mentioned e

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-02-22 Thread Mike Lovell
erent behavior. i'll try to post updates as i have them. mike On Thu, Feb 22, 2018 at 2:33 PM, David Herselman wrote: > Hi Mike, > > > > I eventually got hold of a customer relations manager at Intel but his > attitude was lack luster and Intel never officially responde

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-02-22 Thread Mike Lovell
i did try some micron m600s a couple years ago and was disappointed by them so i'm avoiding the "prosumer" ones from micron if i can. my use case has been the 1TB range ssds and am using them mainly as a cache tier and filestore. my needs might not line up closely with yours though.

Re: [ceph-users] PG mapped to OSDs on same host although 'chooseleaf type host'

2018-02-22 Thread Mike Lovell
was the pg-upmap feature used to force a pg to get mapped to a particular osd? mike On Thu, Feb 22, 2018 at 10:28 AM, Wido den Hollander wrote: > Hi, > > I have a situation with a cluster which was recently upgraded to Luminous > and has a PG mapped to OSDs on the same host.
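For readers landing on this thread: on Luminous and later a PG can indeed be pinned to specific OSDs through the upmap interface. A minimal sketch, with the PG id and OSD numbers as placeholders:

    ceph osd set-require-min-compat-client luminous
    # remap PG 1.2f so the copy on osd.4 moves to osd.9
    ceph osd pg-upmap-items 1.2f 4 9
    # drop the exception again later
    ceph osd rm-pg-upmap-items 1.2f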

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Mike Christie
On 03/02/2018 01:24 AM, Joshua Chen wrote: > Dear all, > I wonder how we could support VM systems with ceph storage (block > device)? my colleagues are waiting for my answer for vmware (vSphere 5) We were having difficulties supporting older versions, because they will drop down to using SCSI-2

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-06 Thread Mike Christie
On 03/06/2018 01:17 PM, Lazuardi Nasution wrote: > Hi, > > I want to do load balanced multipathing (multiple iSCSI gateway/exporter > nodes) of iSCSI backed with RBD images. Should I disable exclusive lock > feature? What if I don't disable that feature? I'm using TGT (manual > way) since I get so

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
> target_core_rbd? > Thanks. > > 2018-03-07 > > shadowlin > > ---- > > *From:* Mike Christie > *Sent:* 2018-03-07

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
On 03/08/2018 10:59 AM, Lazuardi Nasution wrote: > Hi Mike, > > Since I have moved from LIO to TGT, I can do full ALUA (active/active) > of multiple gateways. Of course I have to disable any write back cache > at any level (RBD cache and TGT cache). It seem to be safe to disable &

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
On 03/08/2018 12:44 PM, Mike Christie wrote: > stuck/queued then your osd_request_timeout value might be too short. For Sorry, I meant too long.
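For context, osd_request_timeout here is the kernel RBD client option; assuming a kernel new enough to support it (roughly 4.12 or later), it can be set per mapping. Image name and value below are placeholders:

    # fail outstanding OSD requests after 30 seconds instead of queueing indefinitely
    rbd map rbd/myimage -o osd_request_timeout=30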

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-11 Thread Mike Christie
--- > shadowlin > > > > *From:* Jason Dillaman > *Sent:* 2018-03-11 07:46 > *Subject:* Re: Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD > Exclusive Lock > *To:* "shadow_lin" > *Cc:* "Mike Christie","Lazuardi > Na

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-15 Thread Mike Christie
> > >> ... > Where you send the patches that add your delays could you send the > target side /var/log/tcmu-runner.log with log_level = 4. > ... > > > Mike, see please patches and /var/log/tcmu-runner.log in attachment. > > T

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-15 Thread Mike Christie
On 03/15/2018 02:32 PM, Maxim Patlasov wrote: > On Thu, Mar 15, 2018 at 12:48 AM, Mike Christie <mailto:mchri...@redhat.com>> wrote: > > ... > > It looks like there is a bug. > > 1. A regression was added when I stopped killing the iscsi connection >

[ceph-users] Erasure Coded Pools and OpenStack

2018-03-22 Thread Mike Cave
ive data (such as OS volumes and database volumes). Thank you for taking the time to read this far. I am happy to provide any further details you might need or try any configuration changes you might suggest. This is completely development so I’m not afraid to try t

Re: [ceph-users] Erasure Coded Pools and OpenStack

2018-03-23 Thread Mike Cave
from OpenStack! Thanks again, Mike -Original Message- From: Jason Dillaman Reply-To: "dilla...@redhat.com" Date: Thursday, March 22, 2018 at 5:15 PM To: Cave Mike Cc: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] Erasure Coded Pools and OpenStack On Fri, Mar
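For anyone following along: since Luminous an RBD image can keep its data in an erasure-coded pool while its metadata lives in a replicated pool, which is the usual way to combine EC with OpenStack volumes. A rough sketch, with pool and image names as placeholders:

    ceph osd pool create ec-data 128 128 erasure
    ceph osd pool set ec-data allow_ec_overwrites true   # requires BlueStore OSDs
    ceph osd pool create rbd-meta 128 128 replicated
    rbd create --size 10G --data-pool ec-data rbd-meta/myimage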

Re: [ceph-users] All pools full after one OSD got OSD_FULL state

2018-03-29 Thread Mike Lovell
On Thu, Mar 29, 2018 at 1:17 AM, Jakub Jaszewski wrote: > Many thanks Mike, that justifies stopped IOs. I've just finished adding > new disks to cluster and now try to evenly reweight OSD by PG. > > May I ask you two more questions? > 1. As I was in a hurry I did not che
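The reweighting mentioned above can be rehearsed before it is applied; a sketch, where 120 means only OSDs more than 20% above the mean PG load are touched:

    ceph osd test-reweight-by-pg 120   # dry run, shows the proposed changes
    ceph osd reweight-by-pg 120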

[ceph-users] Too many objects per pg than average: deadlock situation

2018-05-20 Thread Mike A
rk when there is one pool with a huge object / pool ratio There is no obvious solution. How to solve this problem correctly? — Mike, runs!
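The health warning in question is driven by the monitor's object-skew threshold. If the imbalance is intentional, the threshold itself can be raised; the value below is only an example:

    # default skew factor is 10; 0 disables the check
    ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 20'
    # on Luminous and later the check is raised by ceph-mgr, so the option may
    # instead need to be set in ceph.conf for the mgr daemons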

Re: [ceph-users] Too many objects per pg than average: deadlock situation

2018-05-21 Thread Mike A
Hello, > On 21 May 2018, at 2:05, Sage Weil wrote: > > On Sun, 20 May 2018, Mike A wrote: >> Hello! >> >> In our cluster, we see a deadlock situation. >> This is a standard cluster for an OpenStack without a RadosGW, we have a >> standard block

Re: [ceph-users] Too many objects per pg than average: deadlock situation

2018-05-23 Thread Mike A
Hello > On 21 May 2018, at 2:05, Sage Weil wrote: > > On Sun, 20 May 2018, Mike A wrote: >> Hello! >> >> In our cluster, we see a deadlock situation. >> This is a standard cluster for an OpenStack without a RadosGW, we have a >> standard block

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Mike Christie
On 06/01/2018 02:01 AM, Wladimir Mutel wrote: > Dear all, > > I am experimenting with Ceph setup. I set up a single node > (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs, > Ubuntu 18.04 Bionic, Ceph packages from > http://download.ceph.com/debian-luminous/dists/xe

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-18 Thread Mike Christie
On 06/15/2018 12:21 PM, Wladimir Mutel wrote: > Jason Dillaman wrote: > [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/ > >>> I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10 >>> initiator (not Win2016 but hope there is no much difference). I try >>> to

Re: [ceph-users] Issue with upgrade from 0.94.9 to 10.2.5

2017-01-23 Thread Mike Lovell
h but just wanted to second that i've seen this happen on a recent hammer to recent jewel upgrade. mike On Wed, Jan 18, 2017 at 4:25 AM, Piotr Dałek wrote: > On 01/17/2017 12:52 PM, Piotr Dałek wrote: > >> During our testing we found out that during upgrade from 0.94.9 to 10.2.5

Re: [ceph-users] Migrate cephfs metadata to SSD in running cluster

2017-02-16 Thread Mike Miller
there other alternatives to this suggested configuration? I am kind of a little paranoid to start playing around with crush rules in the running system. Regards, Mike On 1/5/17 11:40 PM, jiajia zhong wrote: 2017-01-04 23:52 GMT+08:00 Mike Miller <mailto:millermike...@gmail.com>>:

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-17 Thread Mike Miller
SATA drives with only slightly higher price/capacity ratios. - mike On 2/3/17 2:46 PM, Stillwell, Bryan J wrote: On 2/3/17, 3:23 AM, "ceph-users on behalf of Wido den Hollander" wrote: Op 3 februari 2017 om 11:03 schreef Maxime Guyot : Hi, Interesting feedback! > In my opin

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-20 Thread Mike Miller
/ recovery. In all cases we tested the cluster is useless from the client side during backfilling / recovery. - mike On 2/19/17 9:54 AM, Wido den Hollander wrote: Op 18 februari 2017 om 17:03 schreef rick stehno : I work for Seagate and have done over a hundred of tests using SMR 8TB

Re: [ceph-users] Ceph on XenServer - Using RBDSR

2017-02-25 Thread Mike Jacobacci
s working on other pool members of an existing pool. Let me know if you have any questions. Cheers, Mike

Re: [ceph-users] Ceph on XenServer - RBD Image Size

2017-02-27 Thread Mike Jacobacci
am totally open to change if I am doing something wrong. Cheers, Mike >Hi Mike, > >Have you considered creating SR which doesn't make one huge RBD volume >and on top of it creates LVM but instead creates separate RBD volumes >for each VDI? _

[ceph-users] osds crashing during hit_set_trim and hit_set_remove_all

2017-03-03 Thread Mike Lovell
ll then flush the cache tier and remove it then add one back. still not sure if its going to work. does anyone else have any other thoughts about how to handle this or why this is happening? or what else we could do to get the osds back online? this has crashed almost all of the cache tier osds

[ceph-users] hammer to jewel upgrade experiences? cache tier experience?

2017-03-06 Thread Mike Lovell
of the communities' experiences there? thanks mike

[ceph-users] Creating new Pools - PG's

2017-03-15 Thread Mike Jacobacci
ve a clean slate going forward... Would it be bad to create a new pool with the same PG/Replication if I am going to remove the old pool after I have migrated the VM's over? Right now I am only using about 2TB out of 109TB. Cheers, Mike

Re: [ceph-users] Creating new Pools - PG's

2017-03-15 Thread Mike Jacobacci
Hi David, Thank you for your response! I was thinking that I may use Ceph to back other projects outside of our infrastructure, so I calculated 75% VM and 25% other usage when I created the pool. Cheers, Mike On Wed, Mar 15, 2017 at 12:57 PM, David Turner wrote: > Especially if you

Re: [ceph-users] cephfs cache tiering - hitset

2017-03-20 Thread Mike Lovell
s hesitant to increase it will still on hammer. min_write_recency_for_promote wasn't added till after hammer. hopefully that helps. mike On Fri, Mar 17, 2017 at 2:02 PM, Webert de Souza Lima wrote: > Hello everyone, > > I`m deploying a ceph cluster with cephfs and I`d like to tune c
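For reference, the hit-set and recency knobs discussed in this thread are ordinary pool settings; a hedged example against a hypothetical cache pool named 'cache':

    ceph osd pool set cache hit_set_type bloom
    ceph osd pool set cache hit_set_count 4
    ceph osd pool set cache hit_set_period 1200
    ceph osd pool set cache min_read_recency_for_promote 1
    # min_write_recency_for_promote only exists on releases after hammer
    ceph osd pool set cache min_write_recency_for_promote 1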

Re: [ceph-users] cephfs cache tiering - hitset

2017-03-20 Thread Mike Lovell
On Mon, Mar 20, 2017 at 4:20 PM, Nick Fisk wrote: > Just a few corrections, hope you don't mind > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Mike Lovell > > Sent: 20 March 2017 20:30 > &g

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/06/2017 03:22 AM, yipik...@gmail.com wrote: > On 06/04/2017 09:42, Nick Fisk wrote: >> >> I assume Brady is referring to the death spiral LIO gets into with >> some initiators, including vmware, if an IO takes longer than about >> 10s. I haven’t heard of anything, and can’t see any changes, s

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/06/2017 08:46 AM, David Disseldorp wrote: > On Thu, 6 Apr 2017 14:27:01 +0100, Nick Fisk wrote: > ... >>> I'm not to sure what you're referring to WRT the spiral of death, but we did >>> patch some LIO issues encountered when a command was aborted while >>> outstanding at the LIO backstore la

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/10/2017 01:21 PM, Timofey Titovets wrote: > JFYI: Today we get totaly stable setup Ceph + ESXi "without hacks" and > this pass stress tests. > > 1. Don't try pass RBD directly to LIO, this setup are unstable > 2. Instead of that, use Qemu + KVM (i use proxmox for that create VM) > 3. Attach

[ceph-users] Re-weight Entire Cluster?

2017-05-29 Thread Mike Cave
to 2 gradually? Any and all suggestions are welcome. Cheers, Mike Cave

Re: [ceph-users] Re-weight Entire Cluster?

2017-05-30 Thread Mike Cave
ceph-users Cc: Cave Mike Subject: Re: [ceph-users] Re-weight Entire Cluster? > It appears the current best practice is to weight each OSD according to it?s > size (3.64 for 4TB drive, 7.45 for 8TB drive, etc). OSD’s are created with those sorts of CRUSH weights by default, yes
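The per-size weights mentioned above (3.64 for a 4 TB drive, 7.45 for 8 TB) correspond roughly to the drive capacity in TiB and can be applied one OSD at a time; osd.12 below is a placeholder:

    ceph osd crush reweight osd.12 3.64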

[ceph-users] Mix of SATA and SSD

2015-12-11 Thread Mike Miller
less intense redistribute from SSD to the spinners? Is this possible using a suitable crushmap? Is this thought equivalent to having large SSD journals? Thanks and regards, Mike

[ceph-users] Debug / monitor osd journal usage

2015-12-14 Thread Mike Miller
Hi, is there a way to debug / monitor the osd journal usage? Thanks and regards, Mike

[ceph-users] mount.ceph not accepting options, please help

2015-12-16 Thread Mike Miller
I am using hammer 0.94.5 and ubuntu trusty. Thanks for your help! Mike

[ceph-users] Upgrade from hammer to infernalis - osd's down

2016-01-05 Thread Mike Carlson
Hey ceph-users We upgraded from hammer to infernalis, stopped all osd's to change the user permissions from root to ceph, and all of our osd's are down (some say they are up, but the status says it is booting) ceph -s cluster cabd1728-2eca-4e18-a581-b4885364e5a4 health HEALTH_WARN

Re: [ceph-users] Upgrade from hammer to infernalis - osd's down

2016-01-05 Thread Mike Carlson
Well, we figured it out :) This mailing list post fixed our problem http://www.spinics.net/lists/ceph-users/msg24220.html We had to mark the osds that were falsely reported as up, as down, and then restart all osd's Thanks! On Tue, Jan 5, 2016 at 6:43 PM, Mike Carlson wrote: > Hey ce
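The workaround described boils down to forcing the stale 'up' entries down and restarting the daemons; a sketch, with the OSD id as a placeholder:

    ceph osd down 7
    systemctl restart ceph-osd@7   # or the distribution's init script on pre-systemd hosts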

[ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-13 Thread Mike Carlson
-alnd SCHOOL667055 drwxrwsr-x 1 21695 21183 2962751438 Jan 13 09:33 SCHOOL667055 Any tips are appreciated! Thanks, Mike C

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-13 Thread Mike Carlson
herring. With that, it is oddly coincidental that we just started seeing issues. On Wed, Jan 13, 2016 at 11:30 AM, Gregory Farnum wrote: > On Wed, Jan 13, 2016 at 11:24 AM, Mike Carlson wrote: > > Hello. > > > > Since we upgraded to Infernalis last, we have noticed a severe prob

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
4 active+clean+scrubbing+deep Now we are really down for the count. We cannot get our MDS back up in an active state and none of our data is accessible. On Wed, Jan 13, 2016 at 7:05 PM, Yan, Zheng wrote: > On Thu, Jan 14, 2016 at 3:37 AM, Mike Carlson wrote: > > Hey Greg,

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
ck it up, we're okay with rebuilding. We just need the data back. Mike C On Thu, Jan 14, 2016 at 3:33 PM, Yan, Zheng wrote: > On Fri, Jan 15, 2016 at 3:28 AM, Mike Carlson wrote: > > Thank you for the reply Zheng > > > > We tried set mds bal frag to true, but the end re

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
Did I just loose all of my data? If we were able to export the journal, could we create a brand new mds out of that and retrieve our data? On Thu, Jan 14, 2016 at 4:15 PM, Yan, Zheng wrote: > > > On Jan 15, 2016, at 08:01, Gregory Farnum wrote: > > > > On Thu, Jan 14,

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
okay, that sounds really good. Would it help if you had access to our cluster? On Thu, Jan 14, 2016 at 4:19 PM, Yan, Zheng wrote: > > > On Jan 15, 2016, at 08:16, Mike Carlson wrote: > > > > Did I just loose all of my data? > > > > If we were able to expo

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
t; > > On 2016-01-14 18:46, Yan, Zheng wrote: > >> Here is patch for v9.2.0. After install the modified version of >> ceph-mon, run “ceph mds add failed 1” >> >> >> >> >> >> On Jan 15, 2016, at 08:20, Mike Carlson wrote: >>> >

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-01-14 Thread Mike Carlson
Hey ceph-users, I wanted to follow up, Zheng's patch did the trick. We re-added the removed mds, and it all came back. We're sync-ing our data off to a backup server. Thanks for all of the help, Ceph has a great community to work with! Mike C On Thu, Jan 14, 2016 at 4:46 PM, Yan, Zh

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-19 Thread Mike Christie
d do the right thing. On 01/19/2016 05:45 AM, Василий Ангапов wrote: > So is it a different approach that was used here by Mike Christie: > http://www.spinics.net/lists/target-devel/msg10330.html ? > It seems to be a confusion because it also implements target_core_rbd > module. Or not?

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-20 Thread Mike Christie
On 01/20/2016 06:07 AM, Nick Fisk wrote: > Thanks for your input Mike, a couple of questions if I may > > 1. Are you saying that this rbd backing store is not in mainline and is only > in SUSE kernels? Ie can I use this lrbd on Debian/Ubuntu/CentOS? The target_core_rbd backing

[ceph-users] fsid changed?

2016-01-21 Thread Mike Carlson
ch And yes, we need to increase our PG count. This cluster has grown from a few 2TB drives to multiple 600GB sas drives, but I don't want to touch anything else until I can get this figured out. This is running as our Openstack VM storage, so it is not something we can simply rebuild. T
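When the PG count is eventually raised, note that releases before Nautilus need pg_num and pgp_num bumped separately; the pool name and target below are placeholders:

    ceph osd pool set volumes pg_num 512
    ceph osd pool set volumes pgp_num 512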

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-02-15 Thread Mike Christie
re other issues you can find on various lists. Some people on this list have got it working ok for their specific application or at least have made other workarounds for any issues they were hitting. > > > Thanks > > > Dominik > > > > On 21 January 2016 at 12:08,

[ceph-users] HBA - PMC Adaptec HBA 1000

2016-03-02 Thread Mike Miller
Hi, can someone report their experiences with the PMC Adaptec HBA 1000 series of controllers? https://www.adaptec.com/en-us/smartstorage/hba/ Thanks and regards, Mike

[ceph-users] Infernalis 9.2.1: the "rados df" command shows wrong data

2016-03-04 Thread Mike Almateia
tier for 'data' Any one can explain this? -- Mike, runs!

Re: [ceph-users] MDS memory sizing

2016-03-05 Thread Mike Miller
memory and hangs. All on hammer 0.94. Regards, Mike On 3/1/16 8:13 AM, Yan, Zheng wrote: On Tue, Mar 1, 2016 at 7:28 PM, Dietmar Rieder wrote: Dear ceph users, I'm in the very initial phase of planning a ceph cluster an have a question regarding the RAM recommendation for an MDS. Acco

[ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-06 Thread Mike Almateia
0570792 25030755 It a bug or predictable action? -- Mike. runs!

[ceph-users] osds crashing on Thread::create

2016-03-07 Thread Mike Lovell
log. i removed some extraneous bits from after the osds was restarted and a large amount of 'recent events' that were from well before the crash. thanks mike 2016-03-07 10:51:08.739907 7fb0c56a1700 0 -- 10.208.16.26:6802/7034 >> 10.208.16.42:0/3019478 pipe(0x47aab000 sd=1360 :6802 s=0

Re: [ceph-users] osds crashing on Thread::create

2016-03-07 Thread Mike Lovell
12 osds running. it looks like they're creating over 2500 threads each. i don't know the internals of the code but that seems like a lot. oh well. hopefully this fixes it. mike On Mon, Mar 7, 2016 at 1:55 PM, Gregory Farnum wrote: > On Mon, Mar 7, 2016 at 11:04 AM, Mike Lovell
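With a dozen OSDs each spawning a few thousand threads, the kernel's process and thread limits are a common culprit for Thread::create failures; raising them is a frequently used mitigation (values are examples only):

    sysctl -w kernel.pid_max=4194303
    sysctl -w kernel.threads-max=2097152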

Re: [ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-07 Thread Mike Almateia
On 06-Mar-16 17:28, Christian Balzer wrote: On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote: Hello Cephers! When my cluster hit "full ratio" settings, objects from the cache pool didn't flush to cold storage. As always, versions of everything, Ceph foremost. Yes of cours

Re: [ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-07 Thread Mike Almateia
option. But the cluster has earned again after I add new OSD in the cache tier pool and 'full OSD' status was dropped. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia wrote: Hello Cephers! When my cluster

Re: [ceph-users] Infernalis 9.2.1: the "rados df" command shows wrong data

2016-03-07 Thread Mike Almateia
On 07-Mar-16 21:28, Gregory Farnum wrote: On Fri, Mar 4, 2016 at 11:56 PM, Mike Almateia wrote: Hello Cephers! On my small cluster I see this: [root@c1 ~]# rados df pool name KB objects clones degraded unfound rd rd KB wr wr KB data

[ceph-users] data corruption with hammer

2016-03-14 Thread Mike Lovell
advance for any help you can provide. mike

Re: [ceph-users] data corruption with hammer

2016-03-15 Thread Mike Lovell
close. mike On Mon, Mar 14, 2016 at 9:35 PM, Christian Balzer wrote: > > Hello, > > On Mon, 14 Mar 2016 20:51:04 -0600 Mike Lovell wrote: > > > something weird happened on one of the ceph clusters that i administer > > tonight which resulted in virtual machines using rbd

Re: [ceph-users] data corruption with hammer

2016-03-19 Thread Mike Lovell
set to greater than 1. mike On Wed, Mar 16, 2016 at 4:41 PM, Mike Lovell wrote: > robert and i have done some further investigation the past couple days on > this. we have a test environment with a hard drive tier and an ssd tier as > a cache. several vms were created with volumes from
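The setting referenced here is the cache pool's read-recency promotion threshold; on the affected hammer releases the safe value was 1. The pool name below is a placeholder:

    ceph osd pool get cache min_read_recency_for_promote
    ceph osd pool set cache min_read_recency_for_promote 1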

Re: [ceph-users] data corruption with hammer

2016-03-19 Thread Mike Lovell
? anyone have a similar problem? mike On Mon, Mar 14, 2016 at 8:51 PM, Mike Lovell wrote: > something weird happened on one of the ceph clusters that i administer > tonight which resulted in virtual machines using rbd volumes seeing > corruption in multiple forms. > > when ever

Re: [ceph-users] ZFS or BTRFS for performance?

2016-03-20 Thread Mike Almateia
node * 12 HDD). Btrfs on the disks - one disk, one OSD. Also we use EC 3+2. The cluster is built for video recording from street cams. Our tests with a 4 MB block size / 99% sequential writes show good performance: * around 500-550 MB/s with BTRFS vs 120-140 MB/s with disk+journal on the same disk. We use Ce
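For completeness, an EC 3+2 data pool like the one described can be created along these lines; profile and pool names are placeholders:

    ceph osd erasure-code-profile set ec32 k=3 m=2
    ceph osd pool create video-data 1024 1024 erasure ec32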

Re: [ceph-users] ZFS or BTRFS for performance?

2016-03-22 Thread Mike Almateia
use also 10Gbit network. -- Mike.

[ceph-users] Question about cache tier and backfill/recover

2016-03-25 Thread Mike Miller
tier? Thanks and regards, Mike

Re: [ceph-users] Question about cache tier and backfill/recover

2016-03-26 Thread Mike Miller
, right? Mike Christian Bob On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller wrote: Hi, in case of a failure in the storage tier, say single OSD disk failure or complete system failure with several OSD disks, will the remaining cache tier (on other nodes) be used for rapid backfilling/recovering

[ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-19 Thread Mike Miller
least 200-300 MB/s when reading, but I am seeing 10% of that at best. Thanks for your help. Mike

[ceph-users] Build Raw Volume from Recovered RBD Objects

2016-04-19 Thread Mike Dawson
n the default 4MB chunk size be handled? Should they be padded somehow? 3) If any objects were completely missing and therefore unavailable to this process, how should they be handled? I assume we need to offset/pad to compensate. -- Thanks, Mike Dawson Co-Founder & Director of Cloud Arc

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-21 Thread Mike Miller
know if there is a way to enable clients better single threaded read performance for large files. Thanks and regards, Mike On 4/20/16 10:43 PM, Nick Fisk wrote: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Udo Lembke Sent: 20 April 2016

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-21 Thread Mike Miller
%) but not much. I also found this info http://tracker.ceph.com/issues/9192 Maybe Ilya can help us, he knows probably best how this can be improved. Thanks and cheers, Mike On 4/21/16 4:32 PM, Udo Lembke wrote: Hi Mike, Am 21.04.2016 um 09:07 schrieb Mike Miller: Hi Nick and Udo, thanks
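For a kernel-mapped RBD device, single-threaded sequential reads are largely bounded by readahead, so one commonly tried adjustment is raising it well above the 128 KB default (the device name is a placeholder):

    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb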

Re: [ceph-users] Using FC with LIO targets

2018-10-30 Thread Mike Christie
On 10/28/2018 03:18 AM, Frédéric Nass wrote: > Hello Mike, Jason, > > Assuming we adapt the current LIO configuration scripts and put QLogic HBAs > in our SCSI targets, could we use FC instead of iSCSI as a SCSI transport > protocol with LIO ? Would this still work with multip

[ceph-users] Ceph Community Newsletter (October 2018)

2018-11-02 Thread Mike Perez
Hey Cephers, The Ceph Community Newsletter of October 2018 has been published: https://ceph.com/community/ceph-community-newsletter-october-2018-edition/

Re: [ceph-users] New us-central mirror request

2018-11-05 Thread Mike Perez
Hi Zachary, Thanks for contributing this mirror to the community! It has now been added: https://ceph.com/get/ On Tue, Oct 30, 2018 at 8:30 AM Zachary Muller wrote: > > Hi all, > > We are GigeNET, a datacenter based in Arlington Heights, IL (close to > Chicago). We are starting to mirror ceph a

Re: [ceph-users] New open-source foundation

2018-11-14 Thread Mike Perez
Hi Eric, Please take a look at the new Foundation site's FAQ for answers to these questions: https://ceph.com/foundation/ On Tue, Nov 13, 2018 at 11:51 AM Smith, Eric wrote: > > https://techcrunch.com/2018/11/12/the-ceph-storage-project-gets-a-dedicated-open-source-foundation/ > > > > What does

[ceph-users] CentOS Dojo at Oak Ridge, Tennessee CFP is now open!

2018-12-03 Thread Mike Perez
would make a great topic as an example. https://wiki.centos.org/Events/Dojo/ORNL2019 -- Mike Perez (thingee) Ceph Community Manager, Red Hat

Re: [ceph-users] CentOS Dojo at Oak Ridge, Tennessee CFP is now open!

2018-12-03 Thread Mike Perez
On 14:26 Dec 03, Mike Perez wrote: > Hey Cephers! > > Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on > Tuesday, April 16th, 2019. > > The CFP is now open, and I would like to encourage our community to > participate > if you can make t

Re: [ceph-users] Cephalocon (was Re: CentOS Dojo at Oak Ridge, Tennessee CFP is now open!)

2018-12-04 Thread Mike Perez
On 16:25 Dec 04, Matthew Vernon wrote: > On 03/12/2018 22:46, Mike Perez wrote: > > >Also as a reminder, lets try to coordinate our submissions on the CFP > >coordination pad: > > > >https://pad.ceph.com/p/cfp-coordination > > I see that mentions a Cephaloco

Re: [ceph-users] ceph-iscsi iSCSI Login negotiation failed

2018-12-05 Thread Mike Christie
On 12/05/2018 09:43 AM, Steven Vacaroaia wrote: > Hi, > I have a strange issue > I configured 2 identical iSCSI gateways but one of them is complaining > about negotiations although gwcli reports the correct auth and status ( > logged-in) > > Any help will be truly appreciated > > Here are some

Re: [ceph-users] cephday berlin slides

2018-12-06 Thread Mike Perez
Hi Serkan, I'm currently working on collecting the slides to have them posted to the Ceph Day Berlin page as Lenz mentioned they would show up. I will notify once the slides are available on mailing list/twitter. Thanks! On Fri, Nov 16, 2018 at 2:30 AM Serkan Çoban wrote: > > Hi, > > Does anyone

Re: [ceph-users] cephday berlin slides

2018-12-06 Thread Mike Perez
The slides are now posted: https://ceph.com/cephdays/ceph-day-berlin/ -- Mike Perez (thingee) On Thu, Dec 6, 2018 at 8:49 AM Mike Perez wrote: > > Hi Serkan, > > I'm currently working on collecting the slides to have them posted to > the Ceph Day Berlin page as Lenz menti

[ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Mike Perez
. If you have any questions, please let me know. [1] - https://ceph.com/cephalocon/barcelona-2019/ -- Mike Perez (thingee)

Re: [ceph-users] cephday berlin slides

2018-12-10 Thread Mike Perez
On Mon, Dec 10, 2018 at 3:07 AM stefan wrote: > > Quoting Marc Roos (m.r...@f1-outsourcing.eu): > > > > Are there video's available (MeerKat, CTDB)? > > Nope, no recordings were made during the day. > > > PS. Disk health prediction link is not working

Re: [ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Mike Perez
On Mon, Dec 10, 2018 at 8:05 AM Wido den Hollander wrote: > > > > On 12/10/18 5:00 PM, Mike Perez wrote: > > Hello everyone! > > > > It gives me great pleasure to announce the CFP for Cephalocon Barcelona > > 2019 is now open [1]! > > > > Cephaloco

[ceph-users] Ceph is now declared stable in Rook v0.9

2018-12-10 Thread Mike Perez
Luminous and upgrading to Mimic while running a simple application. More details and schedule hopefully later. Great way to start KubeCon Seattle soon! -- Mike Perez (thingee)

[ceph-users] Ceph Meetings Canceled for Holidays

2018-12-17 Thread Mike Perez
=OXRzOWM3bHQ3dTF2aWMyaWp2dnFxbGZwbzBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ .ics file: https://calendar.google.com/calendar/ical/9ts9c7lt7u1vic2ijvvqqlfpo0%40group.calendar.google.com/public/basic.ics -- Mike Perez (thingee)

[ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
] host = blade7 Any ideas ? more information ? Mike

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
mmm wonder why the list is saying my email is forged, wonder what I have wrong. My email is sent via an outbound spam filter, but I was sure I had the SPF set correctly. Mike On 18/12/18 10:53 am, Mike O'Connor wrote: > Hi All > > I have a ceph cluster which has been working with

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Added DKIM to my server, will this help On 18/12/18 11:04 am, Mike O'Connor wrote: > mmm wonder why the list is saying my email is forged, wonder what I have > wrong. > > My email is sent via an outbound spam filter, but I was sure I had the > SPF set correctly. > > Mik

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
ing clearly when new issues hit me. Really need the next few weeks to go well so I can get some de-stress time. Mike On 18/12/18 1:44 pm, Oliver Freyermuth wrote: > That's kind of unrelated to Ceph, but since you wrote two mails already, > and I believe it is caused by the maili
