On 10/10/2018 12:52 PM, Mike Christie wrote:
> On 10/10/2018 08:21 AM, Steven Vacaroaia wrote:
>> Hi Jason,
>> Thanks for your prompt responses
>>
>> I have used the same iscsi-gateway.cfg file - no security changes - just
>> added a prometheus entry
>>
On 10/10/2018 12:40 PM, Mike Christie wrote:
> On 10/09/2018 05:09 PM, Brady Deetz wrote:
>> I'm trying to replace my old single point of failure iscsi gateway with
>> the shiny new tcmu-runner implementation. I've been fighting a Windows
>> initiator all day. I have
>> tcmu-runner 1.4.0.
>
You need this patch which sets the failover type back to implicit to
match tcmu-runner 1.4.0 and also makes it configurable for future versions:
commit 8d66492b8c7134fb37b72b5e8e77d7c8109220d9
Author: Mike Christie
Date: Mon Jul 23 15:45:09 2018 -0500
Allow alua fail
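If it is unclear whether an installed build already carries that change, a quick
check against the gateway code's git tree works; the checkout path below is just a
placeholder for wherever the repo was cloned:
  # shell sketch: confirm the ALUA failover commit is present in the checkout
  cd /path/to/ceph-iscsi-config
  git log --oneline | grep -i 8d66492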
On 10/10/2018 03:18 PM, Steven Vacaroaia wrote:
> so, it seems OSD03 is having issues when creating disks ( I can create
> target and hosts ) - here is an excerpt from api.log
> Please note I can create disk on the other node
>
> 2018-10-10 16:03:03,369 DEBUG [lun.py:381:allocate()] - LUN.all
one anyways. i think
that just happened for a little bit in irc. i guess it didn't happen cause
no one followed up on it.
good luck and hopefully you don't blame me if things get worse. :)
mike
On Fri, Oct 12, 2018 at 7:34 AM Lawrence Smith <
lawrence.sm...@uni-muenster.de> wrote:
mark another hit set
missing. :) i think the code that removes the hit set from the pg data is
before that assert so it's possible it still removed it from the history.
mike
On Thu, Oct 18, 2018 at 9:11 AM Lawrence Smith <
lawrence.sm...@uni-muenster.de> wrote:
> Hi Mike,
>
> Tha
d any luck there.
mike
On Thu, Feb 22, 2018 at 9:49 AM, Chris Sarginson wrote:
> Hi Caspar,
>
> Sean and I replaced the problematic DC S4600 disks (after all but one had
> failed) in our cluster with Samsung SM863a disks.
> There was an NDA for new Intel firmware (as mentioned e
erent behavior. i'll try to post updates as i have them.
mike
On Thu, Feb 22, 2018 at 2:33 PM, David Herselman wrote:
> Hi Mike,
>
>
>
> I eventually got hold of a customer relations manager at Intel but his
> attitude was lackluster and Intel never officially responde
i did try some micron
m600s a couple years ago and was disappointed by them so i'm avoiding the
"prosumer" ones from micron if i can. my use case has been the 1TB range
ssds and am using them mainly as a cache tier and filestore. my needs might
not line up closely with yours though.
was the pg-upmap feature used to force a pg to get mapped to a particular
osd?
mike
On Thu, Feb 22, 2018 at 10:28 AM, Wido den Hollander wrote:
> Hi,
>
> I have a situation with a cluster which was recently upgraded to Luminous
> and has a PG mapped to OSDs on the same host.
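For anyone hitting the same thing, a minimal way to check for and clear an explicit
upmap entry on a Luminous+ cluster (the pg id below is a placeholder):
  ceph osd dump | grep pg_upmap       # list any explicit upmap entries
  ceph pg map 1.2f                    # show which OSDs the PG currently maps to
  ceph osd rm-pg-upmap-items 1.2f     # drop the explicit mapping so CRUSH re-places it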
On 03/02/2018 01:24 AM, Joshua Chen wrote:
> Dear all,
> I wonder how we could support VM systems with ceph storage (block
> device)? My colleagues are waiting for my answer about vmware (vSphere 5)
We were having difficulties supporting older versions, because they drop
down to using SCSI-2
On 03/06/2018 01:17 PM, Lazuardi Nasution wrote:
> Hi,
>
> I want to do load-balanced multipathing (multiple iSCSI gateway/exporter
> nodes) of iSCSI backed by RBD images. Should I disable the exclusive-lock
> feature? What if I don't disable that feature? I'm using TGT (the manual
> way) since I get so
> target_core_rbd?
> Thanks.
>
> 2018-03-07
>
> shadowlin
>
> ----
>
> *From:* Mike Christie
> *Sent:* 2018-03-07
On 03/08/2018 10:59 AM, Lazuardi Nasution wrote:
> Hi Mike,
>
> Since I have moved from LIO to TGT, I can do full ALUA (active/active)
> across multiple gateways. Of course I have to disable any write-back cache
> at any level (RBD cache and TGT cache). It seems to be safe to disable
&
On 03/08/2018 12:44 PM, Mike Christie wrote:
> stuck/queued then your osd_request_timeout value might be too short. For
Sorry, I meant too long.
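For reference, a minimal sketch of the client-side settings discussed above; the
pool and image names are placeholders, and whether disabling exclusive-lock is
appropriate depends on the setup:
  # in the gateway node's ceph.conf, disable librbd write-back caching:
  #   [client]
  #   rbd cache = false
  # and, if going the multi-gateway route without exclusive-lock:
  rbd feature disable mypool/myimage exclusive-lock
  rbd info mypool/myimage | grep features     # verify the feature list afterwards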
---
> shadowlin
>
>
>
> *From:* Jason Dillaman
> *Sent:* 2018-03-11 07:46
> *Subject:* Re: Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD
> Exclusive Lock
> *To:* "shadow_lin"
> *Cc:* "Mike Christie", "Lazuardi
> Na
>
> >> ...
> When you send the patches that add your delays, could you send the
> target side /var/log/tcmu-runner.log with log_level = 4.
> ...
>
>
> Mike, please see the patches and /var/log/tcmu-runner.log attached.
>
> T
On 03/15/2018 02:32 PM, Maxim Patlasov wrote:
> On Thu, Mar 15, 2018 at 12:48 AM, Mike Christie <mchri...@redhat.com> wrote:
>
> ...
>
> It looks like there is a bug.
>
> 1. A regression was added when I stopped killing the iscsi connection
>
ive data (such as OS volumes and database
volumes).
Thank you for taking the time to read this far.
I am happy to provide any further details you might need or try any
configuration changes you might suggest. This is completely development so I’m
not afraid to try t
from OpenStack!
Thanks again,
Mike
-Original Message-
From: Jason Dillaman
Reply-To: "dilla...@redhat.com"
Date: Thursday, March 22, 2018 at 5:15 PM
To: Cave Mike
Cc: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Erasure Coded Pools and OpenStack
On Fri, Mar
On Thu, Mar 29, 2018 at 1:17 AM, Jakub Jaszewski
wrote:
> Many thanks Mike, that explains the stopped IOs. I've just finished adding
> new disks to the cluster and am now trying to evenly reweight OSDs by PG.
>
> May I ask you two more questions?
> 1. As I was in a hurry I did not che
rk when there is one pool with a huge object / pool ratio
There is no obvious solution.
How to solve this problem correctly?
—
Mike, runs!
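A minimal sketch of the reweight-by-PG approach mentioned above; the
oversubscription threshold (110) and pool name are example values:
  ceph osd df                      # check current PG counts and utilization per OSD
  ceph osd reweight-by-pg 110 rbd  # reweight OSDs holding >110% of the average PGs for pool 'rbd'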
Hello,
> On 21 May 2018, at 2:05, Sage Weil wrote:
>
> On Sun, 20 May 2018, Mike A wrote:
>> Hello!
>>
>> In our cluster, we see a deadlock situation.
>> This is a standard cluster for an OpenStack without a RadosGW, we have a
>> standard block
On 06/01/2018 02:01 AM, Wladimir Mutel wrote:
> Dear all,
>
> I am experimenting with Ceph setup. I set up a single node
> (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
> Ubuntu 18.04 Bionic, Ceph packages from
> http://download.ceph.com/debian-luminous/dists/xe
On 06/15/2018 12:21 PM, Wladimir Mutel wrote:
> Jason Dillaman wrote:
>
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
>
>>> I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
>>> initiator (not Win2016 but I hope there is not much difference). I try
>>> to
h but just wanted to second that i've seen this happen on a recent
hammer to recent jewel upgrade.
mike
On Wed, Jan 18, 2017 at 4:25 AM, Piotr Dałek
wrote:
> On 01/17/2017 12:52 PM, Piotr Dałek wrote:
>
>> During our testing we found out that during upgrade from 0.94.9 to 10.2.5
there other alternatives to this suggested configuration?
I am a little paranoid about playing around with crush rules
on the running system.
Regards,
Mike
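One low-risk way to experiment is to test CRUSH rule changes offline with crushtool
before injecting anything into the live cluster; a rough sketch (the rule number and
replica count are examples):
  ceph osd getcrushmap -o crush.bin       # export the live map
  crushtool -d crush.bin -o crush.txt     # decompile, then edit crush.txt as needed
  crushtool -c crush.txt -o crush.new     # recompile
  crushtool -i crush.new --test --show-mappings --rule 0 --num-rep 3 | head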
On 1/5/17 11:40 PM, jiajia zhong wrote:
2017-01-04 23:52 GMT+08:00 Mike Miller <millermike...@gmail.com>:
SATA drives with only slightly higher price/capacity ratios.
- mike
On 2/3/17 2:46 PM, Stillwell, Bryan J wrote:
On 2/3/17, 3:23 AM, "ceph-users on behalf of Wido den Hollander"
wrote:
On 3 February 2017 at 11:03, Maxime Guyot wrote:
Hi,
Interesting feedback!
> In my opin
/ recovery.
In all cases we tested the cluster is useless from the client side
during backfilling / recovery.
- mike
On 2/19/17 9:54 AM, Wido den Hollander wrote:
On 18 February 2017 at 17:03, rick stehno wrote:
I work for Seagate and have done over a hundred tests using SMR 8TB
s working on other pool members of an existing pool.
Let me know if you have any questions.
Cheers,
Mike
am totally open to change if I am doing something wrong.
Cheers,
Mike
>Hi Mike,
>
>Have you considered creating an SR which, instead of making one huge RBD
>volume with LVM on top of it, creates separate RBD volumes
>for each VDI?
ll then flush the cache tier and remove
it then add one back. still not sure if it's going to work.
does anyone else have any other thoughts about how to handle this or why
this is happening? or what else we could do to get the osds back online?
this has crashed almost all of the cache tier osds
of the
communities' experiences there?
thanks
mike
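For context, the usual sequence for draining and detaching a writeback cache tier
looks roughly like this (pool names are placeholders, and on newer releases some of
these steps need --yes-i-really-mean-it):
  rados -p cachepool cache-flush-evict-all     # flush dirty objects and evict clean ones
  ceph osd tier cache-mode cachepool forward   # stop new promotions into the cache
  ceph osd tier remove-overlay basepool        # detach the overlay from the base pool
  ceph osd tier remove basepool cachepool      # finally remove the tier relationship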
ve a
clean slate going forward... Would it be bad to create a new pool with the
same PG/Replication if I am going to remove the old pool after I have
migrated the VM's over? Right now I am only using about 2TB out of 109TB.
Cheers,
Mike
Hi David,
Thank you for your response!
I was thinking that I may use Ceph to back other projects outside of our
infrastructure, so I calculated 75% VM and 25% other usage when I created
the pool.
Cheers,
Mike
On Wed, Mar 15, 2017 at 12:57 PM, David Turner
wrote:
> Especially if you
s hesitant to increase it
while still on hammer. min_write_recency_for_promote wasn't added till after
hammer.
hopefully that helps.
mike
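Once off hammer, the recency knobs can be set per cache pool; a minimal example
(pool name and values are placeholders):
  ceph osd pool set cachepool min_read_recency_for_promote 1
  ceph osd pool set cachepool min_write_recency_for_promote 1   # only exists on post-hammer releases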
On Fri, Mar 17, 2017 at 2:02 PM, Webert de Souza Lima wrote:
> Hello everyone,
>
> I`m deploying a ceph cluster with cephfs and I`d like to tune c
On Mon, Mar 20, 2017 at 4:20 PM, Nick Fisk wrote:
> Just a few corrections, hope you don't mind
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Mike Lovell
> > Sent: 20 March 2017 20:30
> &g
On 04/06/2017 03:22 AM, yipik...@gmail.com wrote:
> On 06/04/2017 09:42, Nick Fisk wrote:
>>
>> I assume Brady is referring to the death spiral LIO gets into with
>> some initiators, including vmware, if an IO takes longer than about
>> 10s. I haven’t heard of anything, and can’t see any changes, s
On 04/06/2017 08:46 AM, David Disseldorp wrote:
> On Thu, 6 Apr 2017 14:27:01 +0100, Nick Fisk wrote:
> ...
>>> I'm not to sure what you're referring to WRT the spiral of death, but we did
>>> patch some LIO issues encountered when a command was aborted while
>>> outstanding at the LIO backstore la
On 04/10/2017 01:21 PM, Timofey Titovets wrote:
> JFYI: Today we got a totally stable Ceph + ESXi setup "without hacks" and
> it passes stress tests.
>
> 1. Don't pass RBD directly to LIO; that setup is unstable.
> 2. Instead of that, use Qemu + KVM (I use proxmox to create the VM)
> 3. Attach
to 2 gradually?
Any and all suggestions are welcome.
Cheers,
Mike Cave
ceph-users
Cc: Cave Mike
Subject: Re: [ceph-users] Re-weight Entire Cluster?
> It appears the current best practice is to weight each OSD according to its
> size (3.64 for 4TB drive, 7.45 for 8TB drive, etc).
OSD’s are created with those sorts of CRUSH weights by default, yes
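For completeness, adjusting a weight by hand is a single command per OSD; the ids
and weights here are examples matching the sizes above:
  ceph osd crush reweight osd.12 3.64    # 4TB drive
  ceph osd crush reweight osd.13 7.45    # 8TB drive
  ceph osd tree                          # confirm the new CRUSH weights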
less intense redistribute from SSD
to the spinners?
Is this possible using a suitable crushmap?
Is this thought equivalent to having large SSD journals?
Thanks and regards,
Mike
Hi,
is there a way to debug / monitor the osd journal usage?
Thanks and regards,
Mike
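One option, assuming access to the OSD host's admin socket, is to watch the
filestore journal perf counters; a rough sketch (osd.0 is a placeholder):
  ceph daemon osd.0 perf dump | grep -A1 journal_queue   # queued journal ops/bytes
  ceph daemon osd.0 perf dump | grep journal_latency     # journal commit latency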
I am using hammer 0.94.5 and ubuntu trusty.
Thanks for your help!
Mike
Hey ceph-users
We upgraded from hammer to infernalis, stopped all osd's to change the user
permissions from root to ceph, and all of our osd's are down (some say they
are up, but the status says it is booting)
ceph -s
cluster cabd1728-2eca-4e18-a581-b4885364e5a4
health HEALTH_WARN
Well, we figured it out :)
This mailing list post fixed our problem
http://www.spinics.net/lists/ceph-users/msg24220.html
We had to mark the osds that were falsely reported as up, as down, and then
restart all osd's
Thanks!
On Tue, Jan 5, 2016 at 6:43 PM, Mike Carlson wrote:
> Hey ce
-alnd SCHOOL667055
drwxrwsr-x 1 21695 21183 2962751438 Jan 13 09:33 SCHOOL667055
Any tips are appreciated!
Thanks,
Mike C
herring. With that, it is oddly
coincidental that we just started seeing issues.
On Wed, Jan 13, 2016 at 11:30 AM, Gregory Farnum wrote:
> On Wed, Jan 13, 2016 at 11:24 AM, Mike Carlson wrote:
> > Hello.
> >
> > Since we upgraded to Infernalis last, we have noticed a severe prob
4 active+clean+scrubbing+deep
Now we are really down for the count. We cannot get our MDS back up in an
active state and none of our data is accessible.
On Wed, Jan 13, 2016 at 7:05 PM, Yan, Zheng wrote:
> On Thu, Jan 14, 2016 at 3:37 AM, Mike Carlson wrote:
> > Hey Greg,
ck it up, we're okay with rebuilding. We just
need the data back.
Mike C
On Thu, Jan 14, 2016 at 3:33 PM, Yan, Zheng wrote:
> On Fri, Jan 15, 2016 at 3:28 AM, Mike Carlson wrote:
> > Thank you for the reply Zheng
> >
> > We tried set mds bal frag to true, but the end re
Did I just lose all of my data?
If we were able to export the journal, could we create a brand new mds out
of that and retrieve our data?
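Before attempting anything destructive it is worth taking a copy of the MDS journal;
a minimal sketch with cephfs-journal-tool (the output filename is a placeholder):
  cephfs-journal-tool journal inspect              # check whether the journal is readable
  cephfs-journal-tool journal export backup.bin    # keep a copy before any recovery steps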
On Thu, Jan 14, 2016 at 4:15 PM, Yan, Zheng wrote:
>
> > On Jan 15, 2016, at 08:01, Gregory Farnum wrote:
> >
> > On Thu, Jan 14,
okay, that sounds really good.
Would it help if you had access to our cluster?
On Thu, Jan 14, 2016 at 4:19 PM, Yan, Zheng wrote:
>
> > On Jan 15, 2016, at 08:16, Mike Carlson wrote:
> >
> > Did I just lose all of my data?
> >
> > If we were able to expo
>
> On 2016-01-14 18:46, Yan, Zheng wrote:
>
>> Here is a patch for v9.2.0. After installing the modified version of
>> ceph-mon, run “ceph mds add failed 1”
>>
>>
>>
>>
>>
>> On Jan 15, 2016, at 08:20, Mike Carlson wrote:
>>>
>
Hey ceph-users,
I wanted to follow up, Zheng's patch did the trick. We re-added the removed
mds, and it all came back. We're sync-ing our data off to a backup server.
Thanks for all of the help, Ceph has a great community to work with!
Mike C
On Thu, Jan 14, 2016 at 4:46 PM, Yan, Zh
d do the right thing.
On 01/19/2016 05:45 AM, Василий Ангапов wrote:
> So is it a different approach from the one used here by Mike Christie:
> http://www.spinics.net/lists/target-devel/msg10330.html ?
> It seems confusing because it also implements a target_core_rbd
> module. Or not?
On 01/20/2016 06:07 AM, Nick Fisk wrote:
> Thanks for your input Mike, a couple of questions if I may
>
> 1. Are you saying that this rbd backing store is not in mainline and is only
> in SUSE kernels? Ie can I use this lrbd on Debian/Ubuntu/CentOS?
The target_core_rbd backing
ch
And yes, we need to increase our PG count. This cluster has grown from a
few 2TB drives to multiple 600GB sas drives, but I don't want to touch
anything else until I can get this figured out.
This is running as our Openstack VM storage, so it is not something we can
simply rebuild.
T
re other issues you can find on various lists. Some people on
this list have got it working ok for their specific application or at
least have made other workarounds for any issues they were hitting.
>
>
> Thanks
>
>
> Dominik
>
>
>
> On 21 January 2016 at 12:08,
Hi,
can someone report their experiences with the PMC Adaptec HBA 1000
series of controllers?
https://www.adaptec.com/en-us/smartstorage/hba/
Thanks and regards,
Mike
tier for 'data'
Can anyone explain this?
--
Mike, runs!
memory and hangs.
All on hammer 0.94.
Regards,
Mike
On 3/1/16 8:13 AM, Yan, Zheng wrote:
On Tue, Mar 1, 2016 at 7:28 PM, Dietmar Rieder
wrote:
Dear ceph users,
I'm in the very initial phase of planning a ceph cluster and have a
question regarding the RAM recommendation for an MDS.
Acco
0570792 25030755
Is it a bug or expected behavior?
--
Mike. runs!
log. i removed some extraneous bits from after the osds were restarted and a
large amount of 'recent events' that were from well before the crash.
thanks
mike
2016-03-07 10:51:08.739907 7fb0c56a1700 0 -- 10.208.16.26:6802/7034 >>
10.208.16.42:0/3019478 pipe(0x47aab000 sd=1360 :6802 s=0
12 osds running. it looks like they're creating over 2500
threads each. i don't know the internals of the code but that seems like a
lot. oh well. hopefully this fixes it.
mike
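A quick way to confirm the per-process thread counts on an OSD host (plain shell,
nothing Ceph-specific):
  for pid in $(pgrep ceph-osd); do
    echo -n "$pid: "; grep Threads /proc/$pid/status
  done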
On Mon, Mar 7, 2016 at 1:55 PM, Gregory Farnum wrote:
> On Mon, Mar 7, 2016 at 11:04 AM, Mike Lovell
On 06-Mar-16 17:28, Christian Balzer wrote:
On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote:
Hello Cephers!
When my cluster hit the "full ratio" setting, objects from the cache pool
didn't flush to cold storage.
As always, versions of everything, Ceph foremost.
Yes of cours
option.
But the cluster started working again after I added a new OSD to the cache
tier pool and the 'full OSD' status was dropped.
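The usual way to make the tiering agent flush and evict before the cache OSDs fill
up is to cap the cache pool explicitly; a sketch with example values (pool name and
sizes are placeholders):
  ceph osd pool set cachepool target_max_bytes 1099511627776   # ~1 TiB cap for the cache pool
  ceph osd pool set cachepool cache_target_dirty_ratio 0.4     # start flushing at 40% dirty
  ceph osd pool set cachepool cache_target_full_ratio 0.8      # start evicting at 80% full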
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia wrote:
Hello Cephers!
When my cluster
On 07-Mar-16 21:28, Gregory Farnum wrote:
On Fri, Mar 4, 2016 at 11:56 PM, Mike Almateia wrote:
Hello Cephers!
On my small cluster I see this:
[root@c1 ~]# rados df
pool name        KB    objects    clones    degraded    unfound
       rd      rd KB        wr      wr KB
data
advance for any help you can provide.
mike
close.
mike
On Mon, Mar 14, 2016 at 9:35 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 14 Mar 2016 20:51:04 -0600 Mike Lovell wrote:
>
> > something weird happened on one of the ceph clusters that i administer
> > tonight which resulted in virtual machines using rbd
set to greater than 1.
mike
On Wed, Mar 16, 2016 at 4:41 PM, Mike Lovell
wrote:
> robert and i have done some further investigation the past couple days on
> this. we have a test environment with a hard drive tier and an ssd tier as
> a cache. several vms were created with volumes from
? anyone have a similar problem?
mike
On Mon, Mar 14, 2016 at 8:51 PM, Mike Lovell
wrote:
> something weird happened on one of the ceph clusters that i administer
> tonight which resulted in virtual machines using rbd volumes seeing
> corruption in multiple forms.
>
> when ever
node * 12 HDD).
Btrfs on the disks - one disk, one OSD.
Also we use EC 3+2.
The cluster was built for video recording from street cameras.
Our tests with 4Mb block size, 99% sequential writes show good performance:
* around 500-550 Mb/s with BTRFS vs 120-140 Mb/s with disk+journal on the same disk.
We use Ce
We also use a 10Gbit network.
--
Mike.
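For reference, an EC 3+2 pool like the one described above is created roughly like
this on hammer-era releases (profile and pool names, PG counts and the failure
domain are example values; later releases spell it crush-failure-domain):
  ceph osd erasure-code-profile set ec32 k=3 m=2 ruleset-failure-domain=host
  ceph osd pool create ecpool 1024 1024 erasure ec32
  ceph osd erasure-code-profile get ec32      # verify the profile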
tier?
Thanks and regards,
Mike
, right?
Mike
Christian
Bob
On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller
wrote:
Hi,
in case of a failure in the storage tier, say single OSD disk failure
or complete system failure with several OSD disks, will the remaining
cache tier (on other nodes) be used for rapid backfilling/recovering
least 200-300 MB/s when reading, but I
am seeing 10% of that at best.
Thanks for your help.
Mike
n the default 4MB chunk size be
handled? Should they be padded somehow?
3) If any objects were completely missing and therefore unavailable to
this process, how should they be handled? I assume we need to offset/pad
to compensate.
--
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Arc
know if there is a way to give clients better single-threaded
read performance for large files.
Thanks and regards,
Mike
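One knob that often helps large sequential reads over krbd is the block-layer
readahead on the mapped device; a sketch assuming the image is mapped as /dev/rbd0:
  cat /sys/block/rbd0/queue/read_ahead_kb           # default is usually small (128)
  echo 4096 > /sys/block/rbd0/queue/read_ahead_kb   # raise readahead to 4 MB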
On 4/20/16 10:43 PM, Nick Fisk wrote:
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Udo Lembke
Sent: 20 April 2016
%) but not
much.
I also found this info
http://tracker.ceph.com/issues/9192
Maybe Ilya can help us, he knows probably best how this can be improved.
Thanks and cheers,
Mike
On 4/21/16 4:32 PM, Udo Lembke wrote:
Hi Mike,
Am 21.04.2016 um 09:07 schrieb Mike Miller:
Hi Nick and Udo,
thanks
On 10/28/2018 03:18 AM, Frédéric Nass wrote:
> Hello Mike, Jason,
>
> Assuming we adapt the current LIO configuration scripts and put QLogic HBAs
> in our SCSI targets, could we use FC instead of iSCSI as a SCSI transport
> protocol with LIO ? Would this still work with multip
Hey Cephers,
The Ceph Community Newsletter of October 2018 has been published:
https://ceph.com/community/ceph-community-newsletter-october-2018-edition/
Hi Zachary,
Thanks for contributing this mirror to the community! It has now been added:
https://ceph.com/get/
On Tue, Oct 30, 2018 at 8:30 AM Zachary Muller
wrote:
>
> Hi all,
>
> We are GigeNET, a datacenter based in Arlington Heights, IL (close to
> Chicago). We are starting to mirror ceph a
Hi Eric,
Please take a look at the new Foundation site's FAQ for answers to
these questions:
https://ceph.com/foundation/
On Tue, Nov 13, 2018 at 11:51 AM Smith, Eric wrote:
>
> https://techcrunch.com/2018/11/12/the-ceph-storage-project-gets-a-dedicated-open-source-foundation/
>
>
>
> What does
would make a great topic as an example.
https://wiki.centos.org/Events/Dojo/ORNL2019
--
Mike Perez (thingee)
Ceph Community Manager, Red Hat
On 14:26 Dec 03, Mike Perez wrote:
> Hey Cephers!
>
> Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on
> Tuesday, April 16th, 2019.
>
> The CFP is now open, and I would like to encourage our community to
> participate
> if you can make t
On 16:25 Dec 04, Matthew Vernon wrote:
> On 03/12/2018 22:46, Mike Perez wrote:
>
> >Also as a reminder, lets try to coordinate our submissions on the CFP
> >coordination pad:
> >
> >https://pad.ceph.com/p/cfp-coordination
>
> I see that mentions a Cephaloco
On 12/05/2018 09:43 AM, Steven Vacaroaia wrote:
> Hi,
> I have a strange issue
> I configured 2 identical iSCSI gateways but one of them is complaining
> about negotiations although gwcli reports the correct auth and status (
> logged-in)
>
> Any help will be truly appreciated
>
> Here are some
Hi Serkan,
I'm currently working on collecting the slides to have them posted to
the Ceph Day Berlin page as Lenz mentioned they would show up. I will
notify once the slides are available on mailing list/twitter. Thanks!
On Fri, Nov 16, 2018 at 2:30 AM Serkan Çoban wrote:
>
> Hi,
>
> Does anyone
The slides are now posted:
https://ceph.com/cephdays/ceph-day-berlin/
--
Mike Perez (thingee)
On Thu, Dec 6, 2018 at 8:49 AM Mike Perez wrote:
>
> Hi Serkan,
>
> I'm currently working on collecting the slides to have them posted to
> the Ceph Day Berlin page as Lenz menti
. If you have any questions, please let me
know.
[1] - https://ceph.com/cephalocon/barcelona-2019/
--
Mike Perez (thingee)
On Mon, Dec 10, 2018 at 3:07 AM stefan wrote:
>
> Quoting Marc Roos (m.r...@f1-outsourcing.eu):
> >
> > Are there video's available (MeerKat, CTDB)?
>
> Nope, no recordings were made during the day.
>
> > PS. Disk health prediction link is not working
On Mon, Dec 10, 2018 at 8:05 AM Wido den Hollander wrote:
>
>
>
> On 12/10/18 5:00 PM, Mike Perez wrote:
> > Hello everyone!
> >
> > It gives me great pleasure to announce the CFP for Cephalocon Barcelona
> > 2019 is now open [1]!
> >
> > Cephaloco
Luminous and upgrading to Mimic while running a simple application.
More details and schedule hopefully later.
Great way to start KubeCon Seattle soon!
--
Mike Perez (thingee)
=OXRzOWM3bHQ3dTF2aWMyaWp2dnFxbGZwbzBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
.ics file:
https://calendar.google.com/calendar/ical/9ts9c7lt7u1vic2ijvvqqlfpo0%40group.calendar.google.com/public/basic.ics
--
Mike Perez (thingee)
]
host = blade7
Any ideas? More information?
Mike
mmm wonder why the list is saying my email is forged, wonder what I have
wrong.
My email is sent via an outbound spam filter, but I was sure I had the
SPF set correctly.
Mike
On 18/12/18 10:53 am, Mike O'Connor wrote:
> Hi All
>
> I have a ceph cluster which has been working with
Added DKIM to my server, will this help?
On 18/12/18 11:04 am, Mike O'Connor wrote:
> mmm wonder why the list is saying my email is forged, wonder what I have
> wrong.
>
> My email is sent via an outbound spam filter, but I was sure I had the
> SPF set correctly.
>
> Mik
ing clearly when new issues hit me.
Really need the next few weeks to go well so I can get some de-stress time.
Mike
On 18/12/18 1:44 pm, Oliver Freyermuth wrote:
> That's kind of unrelated to Ceph, but since you wrote two mails already,
> and I believe it is caused by the maili