Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-16 Thread Mike Perez
://governance.openstack.org/tc/goals/pike/python35.html [2] - https://releases.openstack.org/ [3] - https://governance.openstack.org/tc/goals/stein/python3-first.html -- Mike Perez (thingee) On Wed, Jan 16, 2019 at 7:45 AM Sage Weil wrote: > > Hi everyone, > > This has come up s

[ceph-users] Ceph Nautilus Release T-shirt Design

2019-01-16 Thread Mike Perez
b.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/frontend/src/assets/1280px-Nautilus_Octopus.jpg I'm waiting to hear back from our vendor of how much notice they would need to have the design ready for print on demand through the ceph store https://www.proforma.com/sdscommunitystore --

[ceph-users] Google Summer of Code / Outreachy Call for Projects

2019-01-17 Thread Mike Perez
t for Outreachy so we can start selecting applicants. End of May will begin the internships. https://www.outreachy.org/mentor/ You can submit project ideas for both programs on this etherpad. https://pad.ceph.com/p/project-ideas Stay tuned for more updates. -- Mike Perez (th

Re: [ceph-users] Cephalocon Barcelona 2019 CFP now open!

2019-01-17 Thread Mike Perez
/prog/cephalocon_2019/ https://pad.ceph.com/p/cfp-coordination -- Mike Perez (thingee) On Mon, Dec 10, 2018 at 8:00 AM Mike Perez wrote: > > Hello everyone! > > It gives me great pleasure to announce the CFP for Cephalocon Barcelona 2019 > is now open [1]! > > Cephalocon Ba

Re: [ceph-users] [Ceph-announce] Ceph tech talk tomorrow: NooBaa data platform for distributed hybrid clouds

2019-01-21 Thread Mike Perez
Hey all, Here's the tech talk recording: https://www.youtube.com/watch?v=uW6NvsYFX-s -- Mike Perez (thingee) On Wed, Jan 16, 2019 at 4:01 PM Sage Weil wrote: > > Hi everyone, > > First, this is a reminder that there is a Tech Talk tomorrow from Guy > Margalit about N

[ceph-users] Cephalocon Barcelona 2019 Early Bird Registration Now Available!

2019-01-21 Thread Mike Perez
assistance with your presentation or have any questions, please reach out to eve...@ceph.io. Thank you! -- Mike Perez (thingee) ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Cephalocon Barcelona 2019 CFP ends tomorrow!

2019-01-31 Thread Mike Perez
Hey everyone, Just a last minute reminder if you're considering presenting at Cephalocon Barcelona 2019, the CFP will be ending tomorrow. Early bird ticket rate ends February 15. https://ceph.com/cephalocon/barcelona-2019/ -- Mike Perez (th

[ceph-users] Orchestration weekly meeting location change

2019-02-06 Thread Mike Perez
-915-6466 (US) See all numbers: https://www.redhat.com/en/conference-numbers 2.) Enter Meeting ID: 908675367 3.) Press # Want to test your video connection? https://bluejeans.com/111 -- Mike Perez (thingee)

Re: [ceph-users] Cephalocon Barcelona 2019 Early Bird Registration Now Available!

2019-02-14 Thread Mike Perez
Reminder that early bird rate ends tomorrow. If you are proposing a talk, please still register and we can issue you a refund if your talk is accepted. We will plan better with early bird and CFP acceptance dates with future events. https://ceph.com/cephalocon/barcelona-2019/ -- Mike Perez

Re: [ceph-users] Ceph Nautilus Release T-shirt Design

2019-02-14 Thread Mike Perez
Hi Marc, You can see previous designs on the Ceph store: https://www.proforma.com/sdscommunitystore -- Mike Perez (thingee) On Fri, Jan 18, 2019 at 12:39 AM Marc Roos wrote: > > > Is there an overview of previous tshirts? > > > -Original Message- > From: Ant

Re: [ceph-users] CEPH ISCSI Gateway

2019-03-09 Thread Mike Christie
On 03/07/2019 09:22 AM, Ashley Merrick wrote: > Been reading into the gateway, and noticed it’s been mentioned a few > times it can be installed on OSD servers. > > I am guessing therefore there be no issues like is sometimes mentioned > when using kRBD on a OSD node apart from the extra resources

Re: [ceph-users] 答复: CEPH ISCSI LIO multipath change delay

2019-03-21 Thread Mike Christie
On 03/21/2019 11:27 AM, Maged Mokhtar wrote: > > Though i do not recommend changing it, if there is a need to lower > fast_io_fail_tmo, then osd_heartbeat_interval + osd_heartbeat_grace sum > need to be lowered as well, their default sum is 25 sec, which i would > assume why fast_io_fail_tmo is se
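
The 25-second sum mentioned above lines up with the `fast_io_fail_tmo` value commonly recommended for Ceph iSCSI initiators. A sketch of the corresponding multipath.conf stanza, assuming a LIO-ORG target as in the upstream ceph-iscsi docs (the vendor/product strings and values here are illustrative; verify them against your distribution and your cluster's `osd_heartbeat_*` settings):

```conf
# /etc/multipath.conf (initiator side) -- illustrative sketch
devices {
    device {
        vendor                 "LIO-ORG"
        product                "TCMU device"   # product string assumed; check your target
        path_grouping_policy   "failover"
        path_checker           "tur"
        # Should stay >= osd_heartbeat_interval + osd_heartbeat_grace (25 s by default),
        # per the reasoning in the thread above.
        fast_io_fail_tmo       25
        no_path_retry          "queue"
    }
}
```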

[ceph-users] Ceph will be at SUSECON 2019!

2019-03-25 Thread Mike Perez
n the conference this year, please consider providing content and/or support for questions at the booth. Please reply to me directly if you're interested. Thanks! -- Mike Perez (thingee)

[ceph-users] DevConf US CFP Ends Today + Planning

2019-04-08 Thread Mike Perez
attending and want to help with Ceph's presence, please email me directly so I can make sure you're part of any communication. Looking forward to potentially meeting more people in the community! [1] - https://devconf.info/us/2019 [2] - https://pad.ceph.com/p/cfp-coordination --

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Mike Lowe
I’ve run production Ceph/OpenStack since 2015. The reality is running OpenStack Newton (the last one with pki) with a post Nautilus release just isn’t going to work. You are going to have bigger problems than trying to make object storage work with keystone issued tokens. Worst case is you will

Re: [ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-23 Thread Mike Christie
On 04/18/2019 06:24 AM, Matthias Leopold wrote: > Hi, > > the Ceph iSCSI gateway has a problem when receiving discovery auth > requests when discovery auth is not enabled. Target discovery fails in > this case (see below). This is especially annoying with oVirt (KVM > management platform) where yo

[ceph-users] [events] Ceph at Red Hat Summit May 7th 6:30pm

2019-04-30 Thread Mike Perez
-community-happy-hour-at-red-hat-summit-registration-60698158827 -- Mike Perez (thingee)

[ceph-users] [events] Ceph Day Netherlands July 2nd - CFP ends June 3rd

2019-05-23 Thread Mike Perez
community. In addition to Ceph experts, community members, and vendors, you’ll hear from production users of Ceph who’ll share what they’ve learned from their deployments. Each Ceph Day ends with a Q&A session and cocktail reception. Join us! -- Mike Perez (thi

[ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
All event information for CFP, registration, and accommodations can be found on the CERN website: https://indico.cern.ch/event/765214/ And thank you to Dan van der Ster for reaching out to organize this event! -- Mike Perez (thingee)

Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
Hi Peter, Thanks for verifying this. September 17 is the new date. We moved it in order to get a bigger room for the event after receiving good interest about it during Cephalocon. — Mike Perez (thingee) On May 27, 2019, 2:56 AM -0700, Peter Wienemann , wrote: > Hi Mike, > > there

Re: [ceph-users] [events] Ceph Day Netherlands July 2nd - CFP ends June 3rd

2019-05-29 Thread Mike Perez
Hi everyone, This is the last week to submit for the Ceph Day Netherlands CFP ending June 3rd: https://ceph.com/cephdays/netherlands-2019/ https://zfrmz.com/E3ouYm0NiPF1b3NLBjJk -- Mike Perez (thingee) On Thu, May 23, 2019 at 10:12 AM Mike Perez wrote: > > Hi everyone, > > We wi

[ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-29 Thread Mike Cave
be able provide. I’m happy to provide any details you might want. Cheers, Mike

[ceph-users] [events] Ceph Day London - October 24 - CFP now open

2019-05-30 Thread Mike Perez
n us! -- Mike Perez (thingee)

[ceph-users] Ceph Day Netherlands CFP Extended to June 14th

2019-06-10 Thread Mike Perez
for some great discussion and content in Utrecht! — Mike Perez (thingee)

[ceph-users] Ceph Day Netherlands Schedule Now Available!

2019-06-13 Thread Mike Perez
-netherlands-tickets-62122673589 -- Mike Perez (thingee)

[ceph-users] Octopus roadmap planning series is now available

2019-06-13 Thread Mike Perez
In case you missed these events on the community calendar, here are the recordings: https://www.youtube.com/playlist?list=PLrBUGiINAakPCrcdqjbBR_VlFa5buEW2J -- Mike Perez (thingee)

[ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-06-25 Thread Mike Perez
All event information for CFP, registration, and accommodations can be found on the CERN website: https://indico.cern.ch/event/765214/ And thank you to Dan van der Ster for reaching out to organize this event! -- Mike Perez (thingee)

Re: [ceph-users] New best practices for osds???

2019-07-16 Thread Mike O'Connor
th spinning rust you can use a SAS expander, a single drive can not saturate the link but SSD has to be 1 to 1. Mike

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-19 Thread Mike Christie
On 07/19/2019 02:42 AM, Marc Schöchlin wrote: > Hello Jason, > > Am 18.07.19 um 20:10 schrieb Jason Dillaman: >> On Thu, Jul 18, 2019 at 1:47 PM Marc Schöchlin wrote: >>> Hello cephers, >>> >>> rbd-nbd crashes in a reproducible way here. >> I don't see a crash report in the log below. Is it reall

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/22/2019 06:00 AM, Marc Schöchlin wrote: >> With older kernels no timeout would be set for each command by default, >> so if you were not running that tool then you would not see the nbd >> disconnect+io_errors+xfs issue. You would just see slow IOs. >> >> With newer kernels, like 4.15, nbd.ko

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/19/2019 02:42 AM, Marc Schöchlin wrote: > We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5, > kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel > 4.15.0-45, ubuntu 16.04) - we never experienced problems like this. For this setup, do you have 25

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-24 Thread Mike Christie
On 07/23/2019 12:28 AM, Marc Schöchlin wrote: >>> For testing purposes i set the timeout to unlimited ("nbd_set_ioctl >>> /dev/nbd0 0", on already mounted device). >>> >> I re-executed the problem procedure and discovered that the >>> >> compression-procedure crashes not at the same file, but cra

Re: [ceph-users] Ceph Scientific Computing User Group

2019-08-02 Thread Mike Perez
We have scheduled the next meeting on the community calendar for August 28 at 14:30 UTC. Each meeting will then take place on the last Wednesday of each month. Here's the pad to collect agenda/notes: https://pad.ceph.com/p/Ceph_Science_User_Group_Index -- Mike Perez (thingee) On Tue, J

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-05 Thread Mike Christie
On 08/05/2019 05:58 AM, Matthias Leopold wrote: > Hi, > > I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12 > before I dare to put it into production. I installed latest tcmu-runner > release (1.5.1) and (like before) I'm seeing that both nodes switch > exclusive locks for th

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-06 Thread Mike Christie
On 08/06/2019 07:51 AM, Matthias Leopold wrote: > > > Am 05.08.19 um 18:31 schrieb Mike Christie: >> On 08/05/2019 05:58 AM, Matthias Leopold wrote: >>> Hi, >>> >>> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12 >>&

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-06 Thread Mike Christie
On 08/06/2019 11:28 AM, Mike Christie wrote: > On 08/06/2019 07:51 AM, Matthias Leopold wrote: >> >> >> Am 05.08.19 um 18:31 schrieb Mike Christie: >>> On 08/05/2019 05:58 AM, Matthias Leopold wrote: >>>> Hi, >>>> >>>> I

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-13 Thread Mike Christie
On 07/31/2019 05:20 AM, Marc Schöchlin wrote: > Hello Jason, > > it seems that there is something wrong in the rbd-nbd implementation. > (added this information also at https://tracker.ceph.com/issues/40822) > > The problem not seems to be related to kernel releases, filesystem types or > the c

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-13 Thread Mike Christie
On 08/13/2019 07:04 PM, Mike Christie wrote: > On 07/31/2019 05:20 AM, Marc Schöchlin wrote: >> Hello Jason, >> >> it seems that there is something wrong in the rbd-nbd implementation. >> (added this information also at https://tracker.ceph.com/issues/40822) >&g

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-14 Thread Mike Christie
On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>> He removed that code from the krbd. I will ping him on that. > > Interesting. I activated Coredumps for that processes - probably we can > find something interesting here.

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-14 Thread Mike Christie
On 08/14/2019 02:09 PM, Mike Christie wrote: > On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>>> He removed that code from the krbd. I will ping him on that. >> >> Interesting. I

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-15 Thread Mike Christie
On 08/14/2019 06:55 PM, Mike Christie wrote: > On 08/14/2019 02:09 PM, Mike Christie wrote: >> On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>>>> He removed that code

[ceph-users] RBD error when run under cron

2019-09-11 Thread Mike O'Connor
91817.diff FATAL: Cannot open tty: No such device or address. rbd: export-diff error: (32) Broken pipe Error in upload --- Thanks Mike
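
The "Cannot open tty" error above is commonly produced when something in a cron job tries to allocate a terminal (for example `ssh -t`, `screen`, or a `sudo` configured with `requiretty`), since cron jobs run without a controlling tty. A hedged sketch of a crontab entry for such a piped `rbd export-diff` backup, forcing ssh to skip tty allocation (all paths, snapshot names, and the backup host are hypothetical placeholders, not taken from the thread):

```conf
# crontab fragment -- no tty exists under cron, so use ssh -T (never -t)
# and capture stderr to a log file instead of relying on terminal output.
15 2 * * *  rbd export-diff --from-snap snap1 rbd/myimage@snap2 - \
            | ssh -T backup@backuphost 'cat > /backups/myimage.diff' \
            2>> /var/log/rbd-backup.log
```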

Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

2019-10-02 Thread Mike Christie
On 10/02/2019 02:15 PM, Kilian Ries wrote: > Ok i just compared my local python files and the git commit you sent me > - it really looks like i have the old files installed. All the changes > are missing in my local files. > > > > Where can i get a new ceph-iscsi-config package that has the fixe

Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

2019-10-15 Thread Mike Christie
On 10/14/2019 09:01 AM, Kilian Ries wrote: > > @Mike > > > Did you have the chance to update download.ceph.com repositories for the > new version? No. I have updated the upstream repos with the needed patches and made new releases there. I appear to be hitting a bug with jenk

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: > hi,all > we deploy ceph with ceph-ansible.osds,mons and daemons of iscsi runs in > docker. > I create iscsi target according to > https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. > I discovered and logged in to the iscsi target on another host,as sh

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/17/2019 10:52 AM, Mike Christie wrote: > On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: >> hi,all >> we deploy ceph with ceph-ansible.osds,mons and daemons of iscsi runs in >> docker. >> I create iscsi target according to >> https://docs.ceph.com/docs/lumin

Re: [ceph-users] TCMU Runner: Could not check lock ownership. Error: Cannot send after transport endpoint shutdown

2019-10-22 Thread Mike Christie
looks like it has the fix: commit dd7dd51c6cafa8bbcd3ca0eef31fb378b27ff499 Author: Mike Christie Date: Mon Jan 14 17:06:27 2019 -0600 Allow some commands to run while taking lock so we should not be seeing it. Could you turn on tcmu-runner debugging? Open the file: /etc/tcmu/tcmu.conf and set: log_level = 5
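
The debugging step quoted above amounts to a one-line change in the runner's config file, followed by a restart of the tcmu-runner service for it to take effect. A minimal sketch of the relevant fragment (the comment on the level scale is an assumption; verify it against the annotated file your package ships):

```conf
# /etc/tcmu/tcmu.conf
# log_level = 5 enables the most verbose debug output, as requested in the
# thread above; lower values progressively reduce verbosity down to errors only.
log_level = 5
```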

Re: [ceph-users] TCMU Runner: Could not check lock ownership. Error: Cannot send after transport endpoint shutdown

2019-10-22 Thread Mike Christie
runs VMs on each > of the LUNs) > > > Ok, I'll update this tomorrow with the logs you asked for ... > > ---- > *From:* Mike Christie > *Sent:* Tuesday, October 22, 2019 19:43:40 > *To

[ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
. I’m looking for opinions on best practices to complete this as I’d like to minimize impact to our clients. Cheers, Mike Cave

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
On Fri, Nov 15, 2019 at 6:04 PM Mike Cave wrote: > > Greetings all! > > > > I am looking at upgrading to Nautilus in the near future (currently on Mimic). We have a cluster built on 480 OSDs all using multipath and simple block devices. I see that the

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
From: Janne Johansson Date: Friday, November 15, 2019 at 11:46 AM To: Cave Mike Cc: Paul Emmerich , ceph-users Subject: Re: [ceph-users] Migrating from block to lvm Den fre 15 nov. 2019 kl 19:40 skrev Mike Cave mailto:mc...@uvic.ca>>: So would you recommend do

Re: [ceph-users] Migrating from block to lvm

2019-11-15 Thread Mike Cave
Losing a node is not a big deal for us (dual bonded 10G connection to each node). I’m thinking: 1. Drain node 2. Redeploy with Ceph Ansible It would require much less hands-on time for our group. I know the churn on the cluster would be high, which was my only concern. Mike Senior
