[1] - https://governance.openstack.org/tc/goals/pike/python35.html
[2] - https://releases.openstack.org/
[3] - https://governance.openstack.org/tc/goals/stein/python3-first.html
--
Mike Perez (thingee)
On Wed, Jan 16, 2019 at 7:45 AM Sage Weil wrote:
>
> Hi everyone,
>
> This has come up s
https://github.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/frontend/src/assets/1280px-Nautilus_Octopus.jpg
I'm waiting to hear back from our vendor about how much notice they would
need to have the design ready for print on demand through the Ceph
store:
https://www.proforma.com/sdscommunitystore
--
t for
Outreachy so we can start selecting applicants. The internships will
begin at the end of May.
https://www.outreachy.org/mentor/
You can submit project ideas for both programs on this etherpad.
https://pad.ceph.com/p/project-ideas
Stay tuned for more updates.
--
Mike Perez (thingee)
/prog/cephalocon_2019/
https://pad.ceph.com/p/cfp-coordination
--
Mike Perez (thingee)
On Mon, Dec 10, 2018 at 8:00 AM Mike Perez wrote:
>
> Hello everyone!
>
> It gives me great pleasure to announce the CFP for Cephalocon Barcelona 2019
> is now open [1]!
>
> Cephalocon Ba
Hey all,
Here's the tech talk recording:
https://www.youtube.com/watch?v=uW6NvsYFX-s
--
Mike Perez (thingee)
On Wed, Jan 16, 2019 at 4:01 PM Sage Weil wrote:
>
> Hi everyone,
>
> First, this is a reminder that there is a Tech Talk tomorrow from Guy
> Margalit about N
assistance
with your presentation or have any questions, please reach out to
eve...@ceph.io. Thank you!
--
Mike Perez (thingee)
Hey everyone,
Just a last-minute reminder: if you're considering presenting at
Cephalocon Barcelona 2019, the CFP ends tomorrow.
The early bird ticket rate ends February 15.
https://ceph.com/cephalocon/barcelona-2019/
--
Mike Perez (thingee)
-915-6466 (US)
See all numbers: https://www.redhat.com/en/conference-numbers
2.) Enter Meeting ID: 908675367
3.) Press #
Want to test your video connection?
https://bluejeans.com/111
--
Mike Perez (thingee)
Reminder that the early bird rate ends tomorrow. If you are proposing a
talk, please still register; we can issue you a refund if your talk
is accepted. We will coordinate early bird and CFP acceptance
dates better for future events.
https://ceph.com/cephalocon/barcelona-2019/
--
Mike Perez
Hi Marc,
You can see previous designs on the Ceph store:
https://www.proforma.com/sdscommunitystore
--
Mike Perez (thingee)
On Fri, Jan 18, 2019 at 12:39 AM Marc Roos wrote:
>
>
> Is there an overview of previous t-shirts?
>
>
> -Original Message-
> From: Ant
On 03/07/2019 09:22 AM, Ashley Merrick wrote:
> Been reading into the gateway, and noticed it’s been mentioned a few
> times it can be installed on OSD servers.
>
> I am guessing there would therefore be no issues like those sometimes
> mentioned when using kRBD on an OSD node, apart from the extra resources
On 03/21/2019 11:27 AM, Maged Mokhtar wrote:
>
> Though I do not recommend changing it, if there is a need to lower
> fast_io_fail_tmo, then the osd_heartbeat_interval + osd_heartbeat_grace
> sum needs to be lowered as well; their default sum is 25 sec, which I
> would assume is why fast_io_fail_tmo is se
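For anyone who wants to cross-check those values against their own setup, a
minimal sketch; osd.0, the file path, and the grep are illustrative only and
not taken from this thread:
  # run on an OSD host; the sum of these two is the failure-detection
  # window discussed above
  ceph daemon osd.0 config get osd_heartbeat_interval
  ceph daemon osd.0 config get osd_heartbeat_grace
  # on the iSCSI initiator, fast_io_fail_tmo is normally set in
  # multipath.conf; multipathd can show the value actually in effect
  grep fast_io_fail_tmo /etc/multipath.conf
  multipathd show config | grep fast_io_fail_tmo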
n the conference this year, please consider providing
content and/or support for questions at the booth. Please reply to me
directly if you're interested.
Thanks!
--
Mike Perez (thingee)
attending and want to help with Ceph's presence,
please email me directly so I can make sure you're part of any
communication.
Looking forward to potentially meeting more people in the community!
[1] - https://devconf.info/us/2019
[2] - https://pad.ceph.com/p/cfp-coordination
--
I’ve run production Ceph/OpenStack since 2015. The reality is that running
OpenStack Newton (the last release with PKI) with a post-Nautilus release just
isn’t going to work. You are going to have bigger problems than trying to make
object storage work with Keystone-issued tokens. Worst case is you will
On 04/18/2019 06:24 AM, Matthias Leopold wrote:
> Hi,
>
> the Ceph iSCSI gateway has a problem receiving discovery auth
> requests when discovery auth is not enabled. Target discovery fails in
> this case (see below). This is especially annoying with oVirt (KVM
> management platform) where yo
-community-happy-hour-at-red-hat-summit-registration-60698158827
--
Mike Perez (thingee)
community.
In addition to Ceph experts, community members, and vendors, you’ll
hear from production users of Ceph who’ll share what they’ve learned
from their deployments.
Each Ceph Day ends with a Q&A session and cocktail reception. Join us!
--
Mike Perez (thingee)
All event information for CFP, registration, accommodations can be
found on the CERN website:
https://indico.cern.ch/event/765214/
And thank you to Dan van der Ster for reaching out to organize this event!
--
Mike Perez (thingee)
Hi Peter,
Thanks for verifying this. September 17 is the new date. We moved it in order
to get a bigger room for the event after receiving a lot of interest in it
during Cephalocon.
— Mike Perez (thingee)
On May 27, 2019, 2:56 AM -0700, Peter Wienemann wrote:
> Hi Mike,
>
> there
Hi everyone,
This is the last week to submit for the Ceph Day Netherlands CFP
ending June 3rd:
https://ceph.com/cephdays/netherlands-2019/
https://zfrmz.com/E3ouYm0NiPF1b3NLBjJk
--
Mike Perez (thingee)
On Thu, May 23, 2019 at 10:12 AM Mike Perez wrote:
>
> Hi everyone,
>
> We wi
be able to provide. I'm happy to
provide any details you might want.
Cheers,
Mike
n us!
--
Mike Perez (thingee)
for some great discussion and content in
Utrecht!
— Mike Perez (thingee)
-netherlands-tickets-62122673589
--
Mike Perez (thingee)
In case you missed these events on the community calendar, here are
the recordings:
https://www.youtube.com/playlist?list=PLrBUGiINAakPCrcdqjbBR_VlFa5buEW2J
--
Mike Perez (thingee)
All event information for CFP, registration, accommodations can be
found on the CERN website:
https://indico.cern.ch/event/765214/
And thank you to Dan van der Ster for reaching out to organize this event!
--
Mike Perez (thingee)
th spinning rust you can use a SAS expander, since a single drive cannot
saturate the link, but SSDs have to be 1 to 1.
Mike
On 07/19/2019 02:42 AM, Marc Schöchlin wrote:
> Hello Jason,
>
> Am 18.07.19 um 20:10 schrieb Jason Dillaman:
>> On Thu, Jul 18, 2019 at 1:47 PM Marc Schöchlin wrote:
>>> Hello cephers,
>>>
>>> rbd-nbd crashes in a reproducible way here.
>> I don't see a crash report in the log below. Is it reall
On 07/22/2019 06:00 AM, Marc Schöchlin wrote:
>> With older kernels no timeout would be set for each command by default,
>> so if you were not running that tool then you would not see the nbd
>> disconnect+io_errors+xfs issue. You would just see slow IOs.
>>
>> With newer kernels, like 4.15, nbd.ko
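As an aside for anyone following along: luminous-era rbd-nbd also accepts a
map-time timeout, so a mapping can be given a generous per-request timeout
explicitly instead of relying on the kernel default. A minimal sketch,
assuming the --timeout option is present in your build; pool and image names
are placeholders:
  # map with a 120-second IO timeout so slow requests do not escalate
  # into an nbd disconnect
  rbd-nbd map --timeout 120 rbd/myimage
  # confirm which nbd device the image was mapped to
  rbd-nbd list-mapped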
On 07/19/2019 02:42 AM, Marc Schöchlin wrote:
> We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5,
> kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel
> 4.15.0-45, ubuntu 16.04) - we never experienced problems like this.
For this setup, do you have 25
On 07/23/2019 12:28 AM, Marc Schöchlin wrote:
>>> For testing purposes I set the timeout to unlimited ("nbd_set_ioctl
>>> /dev/nbd0 0", on an already mounted device).
>>> I re-executed the problem procedure and discovered that the
>>> compression procedure does not crash at the same file, but cra
We have scheduled the next meeting on the community calendar for August 28
at 14:30 UTC. Each meeting will then take place on the last Wednesday of
each month.
Here's the pad to collect agenda/notes:
https://pad.ceph.com/p/Ceph_Science_User_Group_Index
--
Mike Perez (thingee)
On Tue, J
On 08/05/2019 05:58 AM, Matthias Leopold wrote:
> Hi,
>
> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12
> before I dare to put it into production. I installed latest tcmu-runner
> release (1.5.1) and (like before) I'm seeing that both nodes switch
> exclusive locks for th
On 08/06/2019 07:51 AM, Matthias Leopold wrote:
>
>
> Am 05.08.19 um 18:31 schrieb Mike Christie:
>> On 08/05/2019 05:58 AM, Matthias Leopold wrote:
>>> Hi,
>>>
>>> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12
>>&
On 08/06/2019 11:28 AM, Mike Christie wrote:
> On 08/06/2019 07:51 AM, Matthias Leopold wrote:
>>
>>
>> Am 05.08.19 um 18:31 schrieb Mike Christie:
>>> On 08/05/2019 05:58 AM, Matthias Leopold wrote:
>>>> Hi,
>>>>
>>>> I
On 07/31/2019 05:20 AM, Marc Schöchlin wrote:
> Hello Jason,
>
> it seems that there is something wrong in the rbd-nbd implementation.
> (added this information also at https://tracker.ceph.com/issues/40822)
>
> The problem does not seem to be related to kernel releases, filesystem types,
> or the c
On 08/13/2019 07:04 PM, Mike Christie wrote:
> On 07/31/2019 05:20 AM, Marc Schöchlin wrote:
>> Hello Jason,
>>
>> it seems that there is something wrong in the rbd-nbd implementation.
>> (added this information also at https://tracker.ceph.com/issues/40822)
>>
On 08/14/2019 07:35 AM, Marc Schöchlin wrote:
>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd.
>>> He removed that code from the krbd. I will ping him on that.
>
> Interesting. I activated core dumps for those processes; perhaps we can
> find something interesting here.
On 08/14/2019 02:09 PM, Mike Christie wrote:
> On 08/14/2019 07:35 AM, Marc Schöchlin wrote:
>>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd.
>>>> He removed that code from the krbd. I will ping him on that.
>>
>> Interesting. I
On 08/14/2019 06:55 PM, Mike Christie wrote:
> On 08/14/2019 02:09 PM, Mike Christie wrote:
>> On 08/14/2019 07:35 AM, Marc Schöchlin wrote:
>>>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd.
>>>>> He removed that code
91817.diff
FATAL: Cannot open tty: No such device or address.
rbd: export-diff error: (32) Broken pipe
Error in upload
---
Thanks
Mike
On 10/02/2019 02:15 PM, Kilian Ries wrote:
> OK, I just compared my local Python files with the git commit you sent me,
> and it really looks like I have the old files installed. All the changes
> are missing from my local files.
>
>
>
> Where can I get a new ceph-iscsi-config package that has the fixe
On 10/14/2019 09:01 AM, Kilian Ries wrote:
>
> @Mike
>
>
> Did you have the chance to update download.ceph.com repositories for the
> new version?
No. I have updated the upstream repos with the needed patches and made
new releases there. I appear to be hitting a bug with jenk
On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote:
> Hi all,
> We deploy Ceph with ceph-ansible; the OSDs, MONs, and iSCSI daemons run in
> Docker.
> I created an iSCSI target according to
> https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/.
> I discovered and logged in to the iSCSI target on another host, as sh
On 10/17/2019 10:52 AM, Mike Christie wrote:
> On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote:
>> Hi all,
>> We deploy Ceph with ceph-ansible; the OSDs, MONs, and iSCSI daemons run in
>> Docker.
>> I created an iSCSI target according to
>> https://docs.ceph.com/docs/lumin
looks like it has the fix:
commit dd7dd51c6cafa8bbcd3ca0eef31fb378b27ff499
Author: Mike Christie
Date: Mon Jan 14 17:06:27 2019 -0600
Allow some commands to run while taking lock
so we should not be seeing it.
Could you turn on tcmu-runner debugging? Open the file:
/etc/tcmu/tcmu.conf
and set:
log_level = 5
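A minimal sketch of applying that from a shell; the sed pattern, service
handling, and log path are assumptions on my part, so adjust for your
packaging:
  # raise tcmu-runner logging to debug (log_level = 5)
  sed -i 's/^#\?\s*log_level\s*=.*/log_level = 5/' /etc/tcmu/tcmu.conf
  # depending on the tcmu-runner version the new level may be picked up on
  # the fly; otherwise restart the daemon (this briefly interrupts the gateway)
  systemctl restart tcmu-runner
  # reproduce the problem, then collect the log, typically:
  tail -f /var/log/tcmu-runner.log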
runs VMs on each
> of the LUNs)
>
>
> OK, I'll update this tomorrow with the logs you asked for ...
>
> ----
> *From:* Mike Christie
> *Sent:* Tuesday, October 22, 2019 19:43:40
> *To
.
I’m looking for opinions on best practices for completing this, as I’d like to
minimize the impact on our clients.
Cheers,
Mike Cave
On Fri, Nov 15, 2019 at 6:04 PM Mike Cave wrote:
>
> Greetings all!
>
>
>
> I am looking at upgrading to Nautilus in the near future (currently on
> Mimic). We have a cluster built on 480 OSDs, all using multipath and simple
> block devices. I see that the
Victoria
O: 250.472.4997
From: Janne Johansson
Date: Friday, November 15, 2019 at 11:46 AM
To: Cave Mike
Cc: Paul Emmerich, ceph-users
Subject: Re: [ceph-users] Migrating from block to lvm
On Fri, Nov 15, 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:
So would you recommend do
Losing a node is not a big deal for us (dual bonded 10G connection to each
node).
I’m thinking:
1. Drain node
2. Redeploy with Ceph Ansible
It would require much less hands-on time for our group.
I know the churn on the cluster would be high, which was my only concern.
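A rough sketch of what the drain step above could look like; the OSD IDs are
placeholders and the exact sequence should be adapted to the cluster:
  # mark the node's OSDs out so PGs migrate off them (IDs are examples)
  for id in 10 11 12; do ceph osd out $id; done
  # wait until recovery finishes and all PGs are active+clean again
  ceph -s
  # optional sanity check before removing anything (Mimic and later)
  ceph osd safe-to-destroy 10 11 12
  # then stop and purge each OSD before redeploying the node with ceph-ansible
  systemctl stop ceph-osd@10
  ceph osd purge 10 --yes-i-really-mean-it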
Mike
Senior