Sorry for the long posting, but I'm trying to cover everything.
I woke up to find my CephFS filesystem down. This was in the logs:
2018-07-11 05:54:10.398171 osd.1 [ERR] 2.4 full-object read crc
0x6fc2f65a != expected 0x1c08241c on 2:292cf221:::200.:head
I had one standby MDS, but as far as
that these are harmless and will go
away in a future version. I also looked in the monitor logs but didn't
see any reference to inconsistent or scrubbed objects.
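(In case it helps anyone searching later — a minimal sketch of what I'd check, assuming the PG id 2.4 from the log line above; exact output depends on your release:)

ceph health detail                                    # any scrub errors / inconsistent PGs?
rados list-inconsistent-obj 2.4 --format=json-pretty  # which object/shard carries the bad checksum
ceph pg repair 2.4                                    # rewrite the bad copy from a good replica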
Kevin
On 06/26/2014 01:08 PM, Gregory Farnum wrote:
On Thu, Jun 26, 2014 at 12:52 PM, Kevin Horan
wrote:
I am also getting inconsistent object errors on a regular basis, about 1-2
every week or so for about 300GB of data. All OSDs are using XFS
filesystems. Some OSDs are individual 3TB internal
fragmentation problems other users have
experienced?
Kind regards
Kevin
SSDs
for OSDs and RAM-disk PCIe devices for the journals, so this would be ok.
Kind regards
Kevin Walker
+968 9765 1742
On 25 Feb 2015, at 02:35, Mark Nelson wrote:
> On 02/24/2015 04:21 PM, Kevin Walker wrote:
> Hi All
>
> Just recently joined the list and have been
vide FC targets, which adds
further power consumption.
Kind regards
Kevin Walker
+968 9765 1742
On 25 Feb 2015, at 04:40, Christian Balzer wrote:
On Wed, 25 Feb 2015 02:50:59 +0400 Kevin Walker wrote:
> Hi Mark
>
> Thanks for the info, 22k is not bad, but still massively below what
What about the Samsung 845DC Pro SSDs?
These have fantastic enterprise performance characteristics.
http://www.thessdreview.com/our-reviews/samsung-845dc-pro-review-800gb-class-leading-speed-endurance/
Kind regards
Kevin
On 28 February 2015 at 15:32, Philippe Schwarz wrote:
> --
Can I ask what the xio and simple messengers are, and what the differences between them are?
Kind regards
Kevin Walker
+968 9765 1742
On 1 Mar 2015, at 18:38, Alexandre DERUMIER wrote:
Hi Mark,
I found a previous benchmark from Vu Pham (it was about simplemessenger vs
xiomessenger)
http://www.spinics.net/lists
found from the mailing list) - This showed
some noticeable difference.
Will configuring the SSDs in RAID0 improve this, i.e. a single OSD on top of RAID0?
Regards,
Kevin
Hi,
I am trying Hammer 0.93 on Ubuntu 14.04.
An RBD is mapped on the client, which is also Ubuntu 14.04.
When I did a stop of ceph-osd-all and then a start, the client machine crashed and
the attached picture was in the console. Not sure if it's related to Ceph.
Thanks
Thanks, I will follow this workaround.
On Thu, Mar 12, 2015 at 12:18 AM, Somnath Roy
wrote:
> Kevin,
>
> This is a known issue and should be fixed in the latest krbd. The problem
> is, it is not backported to 14.04 krbd yet. You need to build it from
> latest krbd source if yo
I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating-system
disk; the cluster also hosts 3 monitor processes), with the default replica count of 3.
Total OSD disks : 16
Total nodes : 4
How can I calculate the
- maximum number of disk failures my cluster can handle without any
impact on current data and new
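(A rough way to reason about this, assuming the default CRUSH rule that separates replicas by host; <pool> is a placeholder:)

ceph osd pool get <pool> size        # number of replicas, 3 here
ceph osd pool get <pool> min_size    # copies required to keep serving I/O
ceph osd crush rule dump             # confirm the failure domain really is "host"

With size 3 spread across 4 hosts, every PG keeps copies on 3 different hosts, so all the disks in up to 2 hosts can fail without data loss; PGs left with fewer than min_size copies will block I/O until recovery catches up.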
elot on camelot...
=== mds.camelot ===
Starting Ceph mds.camelot on camelot...
starting mds.camelot at :/0
[root@camelot ~]# ceph auth get mon.
access denied
If someone could tell me what I'm doing wrong it would be greatly appreciated.
Thanks!
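(One thing I'd check, only a guess and assuming default paths and a mon id of "camelot": the mon. key lives in the monitor's own keyring, which can be read directly or used to authenticate the command:)

cat /var/lib/ceph/mon/ceph-camelot/keyring
ceph auth get mon. -n mon. -k /var/lib/ceph/mon/ceph-camelot/keyring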
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wack
when creating
the client.admin key so it doesn't need capabilities? Thanks again!
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com
Hi again Ceph devs,
I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph about this and we both agreed it
doesn't seem possible given the documentation. However, I have an example of a
ceph cluster that was deployed using ceph-deploy
rrect version). The spec file looks fine in
the ceph-deploy git repo, maybe you just need to rerun the package/repo
generation? Thanks!
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-
k=0
proxy=_none_
metadata_expire=0
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chicago.com>
From: Gary Lowe
: NOKEY
/usr/bin/env
gdisk
or
pushy >= 0.5.3
python(abi) = 2.7
python-argparse
python-distribute
python-pushy >= 0.5.3
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
it seems to require both pushy AND python-pushy.
--
Kevin Weiler
IT
IMC Financial Mar
sages on either
the container or the host box. Any ideas on how to troubleshoot this?
Thanks!
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did,
however, try a map with an RBD that was format 2. I got the same error.
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax
Hi Josh,
We did map it directly to the host, and it seems to work just fine. I
think this is a problem with how the container is accessing the rbd module.
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312
o that our VMs don't go
down when there is a problem with the cluster?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com<mailt
Thanks Kyle,
What's the unit for osd recovery max chunk?
Also, how do I find out what my current values are for these osd options?
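(Partially answering myself, in case others wonder — a sketch assuming osd.0 on the local host: the running values can be read from the OSD's admin socket, and osd_recovery_max_chunk is expressed in bytes (the default 8388608 is 8 MB):)

ceph daemon osd.0 config show | grep osd_recovery
ceph daemon osd.0 config get osd_recovery_max_chunk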
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +
Hi guys,
I have an OSD in my cluster that is near full at 90%, but we're using a little
less than half the available storage in the cluster. Shouldn't this be balanced
out?
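(For reference, the knobs I'm looking at — a sketch only, the threshold and osd id are just examples:)

ceph osd reweight-by-utilization 120   # trim the reweight of OSDs more than 20% above average
ceph osd reweight 12 0.85              # or nudge a single over-full OSD down by hand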
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-c
All of the disks in my cluster are identical and therefore all have the same
weight (each drive is 2TB and the automatically generated weight is 1.82 for
each one).
Would the procedure here be to reduce the weight, let it rebal, and then put
the weight back to where it was?
--
Kevin Weiler
IT
; 20
I assume this is in bytes.
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chicago.com>
From: Kur
Thanks Gregory,
One point that was a bit unclear in the documentation is whether this
equation for PGs applies to a single pool or to the entirety of pools.
Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs, or should
all the pools ADD UP to 3000 PGs? Thanks!
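(For reference, the guidance I've seen elsewhere is per cluster, not per pool — roughly 100 PGs per OSD after replication, shared out between pools in proportion to their expected data:)

# e.g. 90 OSDs, replica size 3:
#   (90 * 100) / 3 = 3000 PGs in total across all pools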
--
Kevin Weiler
IT
Thanks again Gregory!
One more quick question. If I raise the number of PGs for a pool, will this
REMOVE any data from the full OSD? Or will I have to take the OSD out and put
it back in to realize this benefit? Thanks!
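(For anyone following along, the split itself is just the two pool settings below — both pg_num and pgp_num have to be raised before data actually moves; the pool name and target count are placeholders:)

ceph osd pool set <pool> pg_num 4096
ceph osd pool set <pool> pgp_num 4096   # this second step triggers the actual re-placement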
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite
bytes.
Am I reading this incorrectly?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com<mailto:kevin.wei...@imc-chi
r any help.
Kevin
drives, how do you limit
the visibility of the drives? Am I missing something here? Could there
be a configuration option or something added to ceph to ensure that it
never tries to mount things on its own?
Thanks.
Kevin
On 11/26/2013 05:14 PM, Kyle Bader wrote:
Is there any way to
Ah, that sounds like what I want. I'll look into that, thanks.
Kevin
On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, "Kevin Horan" wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One
stray
near-idle up to similar 100-150% CPU.
Hopefully, I’ve missed something in the CephFS tuning. However, I’m looking
for direction on figuring out if it is, indeed, a tuning problem or if this
behavior is a symptom of the “not ready for production” banner in the
documentation.
--
Kevin Sumner
ke
> On Nov 17, 2014, at 15:52, Sage Weil wrote:
>
> On Mon, 17 Nov 2014, Kevin Sumner wrote:
>> I've got a test cluster together with ~500 OSDs, 5 MONs, and 1 MDS. All
>> the OSDs also mount CephFS at /ceph. I've got Graphite pointing at a space
>> under /ceph.
minute, so cache at 1
million is still undersized. If that doesn’t work, we’re running Firefly on
the cluster currently and I’ll be upgrading it to Giant.
--
Kevin Sumner
ke...@sumner.io
> On Nov 18, 2014, at 1:36 AM, Thomas Lemarchand
> wrote:
>
> Hi Kevin,
>
> There
Making mds cache size 5 million seems to have helped significantly, but we’re
still seeing issues occasionally on metadata reads while under load. Settings
over 5 million don’t seem to have any noticeable impact on this problem. I’m
starting the upgrade to Giant today.
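(For reference, the setting in question — on the pre-Luminous releases discussed here it counts inodes, not bytes — can go in ceph.conf or be injected at runtime; the mds id is a placeholder:)

# ceph.conf, [mds] section
mds cache size = 5000000

# or at runtime:
ceph tell mds.0 injectargs '--mds-cache-size 5000000'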
--
Kevin Sumner
ke
nt io 3463 MB/s rd, 18710 kB/s wr, 7456 op/s
--
Kevin Sumner
ke...@sumner.io
5360 MB -- 85% avail
mon.cluster4-monitor004 store is getting too big! 93414 MB >= 15360 MB -- 69% avail
mon.cluster4-monitor005 store is getting too big! 88232 MB >= 15360 MB -- 71% avail
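(The usual way to shrink these, a sketch using the monitor names above — though in my experience the store mostly only grows like this while PGs are unclean and the mons can't trim old maps:)

ceph tell mon.cluster4-monitor004 compact
# or have every mon compact its store at startup, via ceph.conf [mon]:
mon compact on start = true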
--
Kevin Sumner
ke...@sumner.io
> On Dec 9, 2014, at 6:20 PM, Haomai Wang wrote:
>
> Mayb
Hello All,
Does anyone know how to configure data striping when using Ceph as a file
system? My understanding is that configuring striping with rbd is only for the
block device.
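(In case it helps others searching: CephFS striping is set per file or directory through layout attributes rather than an rbd-style option. On a reasonably recent client it looks roughly like this — paths and values are only examples:)

setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /mnt/cephfs/mydir   # 1 MB stripe unit
setfattr -n ceph.dir.layout.stripe_count -v 4       /mnt/cephfs/mydir   # stripe across 4 objects
getfattr -n ceph.dir.layout /mnt/cephfs/mydir                           # check the resulting layout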
Many thanks,
Kevin
lying file system does not support xattr. Has
anyone ever run into a similar problem before?
I deployed CephFS on Debian wheezy.
And here is the mounting information:
ceph-fuse on /dfs type fuse.ceph-fuse
(rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
Many thanks,
Kev
Hi John,
I am using 0.56.1. Could it be because data striping is not supported in
this version?
Kevin
On Wed Dec 17 2014 at 4:00:15 AM PST Wido den Hollander
wrote:
> On 12/17/2014 12:35 PM, John Spray wrote:
> > On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander
> wrote:
>
ration with rbd and cache.direct=off.
> If yes, is it possible to manually disable writeback online with QMP?
No, such a QMP command doesn't exist, though it would be possible to
implement (for toggling cache.direct, that is; cache.writeback is guest
visible and
On 19.04.2014 at 00:33, Josh Durgin wrote:
> On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
> >Thanks Kevin for for the full explain!
> >
> >>>cache.writeback=on,cache.direct=off,cache.no-flush=off
> >
> >I didn't known about the cache option
"incomplete": 0,
"last_epoch_started": 20323},
"recovery_state": [
{ "name": "Started\/Primary\/Active",
"enter_time": "2014-05-01 09:03:30.557244",
"might_have_unfound": [
{
t the operation just
hangs.
Kevin
On 5/1/14 10:11 , kevin horan wrote:
Here is how I got into this state. I have only 6 OSDs total, 3 on
one host (vashti) and 3 on another (zadok). I set the noout flag
so I could reboot zadok. Zadok was down for 2 minutes. When it
came up
While everything was
moving from degraded to active+clean, it finally finished probing.
If it's still happening tomorrow, I'd try to find a Geeks on IRC
Duty (http://ceph.com/help/community/).
On 5/3/14 09:43 , Kevin Horan wrote:
Craig,
Thanks for your response
ame needs here.
--
Kevin Decherf - @Kdecherf
GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F
http://kdecherf.com
on the ratio cache
size/total cluster size? Or any better ratio than others observed
in your labs?
Thanks,
--
Kevin Decherf - @Kdecherf
GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F
http://kdecherf.com
at least an order of magnitude higher.
Ok :) Do you have an idea of the average size of an inode in the cache?
--
Kevin Decherf - @Kdecherf
GPG C610 FE73 E706 F968 612B E4B2 108A BD75 A81E 6E2F
http://kdecherf.com
On Tue, Apr 30, 2013 at 03:10:00PM +0100, Mike Bryant wrote:
> All of my MDS daemons have begun crashing when I start them up, and
> they try to begin recovery.
Hi,
It seems to be the same bug as #4644
http://tracker.ceph.com/issues/4644
--
Kevin Decherf - @Kdecherf
GPG C610 FE73 E70
Hi,
I have done a bit of work on the Wireshark plugin so it will compile for
WIN32, really as a by-product of trying to investigate a problem as a
learning exercise and finding that the plugin was not decoding the area I was
interested in. I haven't tried to improve the plugin but thought I would
me
Hello all,
I am trying to upgrade a small test setup having one monitor and one OSD
node, which is on the Hammer release.
I updated from Hammer to Jewel using package update commands and things
are working.
However, after updating from Jewel to Luminous, I am facing issues with the OSDs
failing to start.
Can someone please help me with this? I have no idea how to bring the
cluster back to an operational state.
Thanks,
Kev
On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar
wrote:
> hello All,
> I am trying to upgrade a small test setup having one monitor and one osd
> node which is in hamme
> match path" option) and such after upgrading from Hammer to Jewel? I am not
> sure if that matters here, but it might help if you elaborate on your
> upgrade process a bit.
>
> --Lincoln
>
> > On Sep 12, 2017, at 2:22 PM, kevin parrikar
> wrote:
> >
> >
ete...done.
$ rbd du
NAME         PROVISIONED    USED
child             10240k  10240k
parent@snap       10240k       0
parent            10240k       0
                  20480k  10240k
Is there any way to flatten a clone while retaining its sparseness, perhaps in
Luminous or with BlueStor
t daemons are simply
waiting for new maps. I can often see the "newest_map" incrementing on
osd daemons, but it is slow and some are behind by thousands.
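(For anyone debugging something similar: the per-daemon epochs are visible on the admin socket, which makes the lag easy to quantify — osd.12 is just an example id:)

ceph daemon osd.12 status    # reports oldest_map / newest_map for that OSD
ceph osd dump | head -1      # current osdmap epoch on the monitors, for comparison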
Thanks,
Kevin
Cluster details:
CentOS 7.4
Kraken ceph-11.2.1-0.el7.x86_64
540 OSD, 3 mon/mgr/mds
~3.6PB, 72% raw used, ~40 million ob
d, a little tight on some of those early
gen servers, but I haven't seen OOM killing things off yet. I think I
saw mention of that patch and Luminous handling this type of situation
better while googling the issue... larger osdmap increments or something
similar, if I recall correctly.
quickly setting nodown,noout,noup when
everything is already down will help as well.
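(i.e. something along these lines, plus the matching unset once things settle — a sketch only:)

ceph osd set nodown; ceph osd set noout; ceph osd set noup
# ...let the mons and OSDs catch up on maps, then:
ceph osd unset noup; ceph osd unset nodown; ceph osd unset noout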
Sage, thanks again for your input and advice.
Kevin
On 11/04/2017 11:54 PM, Sage Weil wrote:
On Sat, 4 Nov 2017, Kevin Hrpcek wrote:
Hey Sage,
Thanks for getting back to me this late on a weekend.
Do you
28MB limit is a bit high but not unreasonable. If
you have an application written directly to librados that is using
objects larger than 128MB you may need to adjust osd_max_object_size"
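(If the bigger objects are genuinely needed, the limit above is just a config option — a sketch with an example value of 256 MB, set in ceph.conf under [osd] and applied after an OSD restart:)

osd max object size = 268435456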
Kevin
On 11/09/2017 02:01 PM, Marc Roos wrote:
I would like store objects with
rados -p ec32
showing correct
values.
Can someone help me here, please?
Regards,
Kevin
17-12-21 02:39:10.622835 7fb40a22b700 0 Cannot get stat of OSD 141
Not sure what's wrong in my setup.
Regards,
Kevin
On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez
wrote:
> Hi,
>
> make sure client.admin user has an MGR cap using ceph auth list. At some
> point there was a glitch w
key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
Regards,
Kevin
On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar
wrote:
> Thanks JC,
> I tried
> ceph auth caps client.admin o
It was a firewall issue on the controller nodes. After allowing the ceph-mgr
port in iptables, everything is displaying correctly. Thanks to the people on
IRC.
Thanks alot,
Kevin
On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar
wrote:
> accidentally removed mailing list email
>
> ++ceph-users
>
objects, 72319 MB
usage: 229 GB used, 39965 GB / 40195 GB avail
pgs: 6240 active+clean
Can someone suggest a way to improve this?
Thanks,
Kevin
e a ton of different configurations to test but I only did
a few focused on writes.
Kevin
R440, PERC H840 with 2 MD1400s attached, with 12 10TB NL-SAS drives per
MD1400. XFS filestore with a 10GB journal LV on each 10TB disk. Ceph
cluster set up as a single mon/mgr/osd server for testing. These tables
p
4.4.x-kernel. We plan to migrate
to Ubuntu 16.04.3 with HWE (kernel 4.10).
Clients will be Fedora 27 + OpenNebula.
Any comments?
Thank you.
Kind regards,
Kevin
2018-02-02 12:44 GMT+01:00 Richard Hesketh :
> On 02/02/18 08:33, Kevin Olbrich wrote:
> > Hi!
> >
> > I am planning a new Flash-based cluster. In the past we used SAMSUNG
> PM863a 480G as journal drives in our HDD cluster.
> > After a lot of tests with luminous and
te: Failed to activate
> [osd01.cloud.example.local][WARNIN] unmount: Unmounting
> /var/lib/ceph/tmp/mnt.pAfCl4
>
Same problem on 2x 14 disks. I was unable to get this cluster up.
Any ideas?
Kind regards,
Kevin
I also noticed there are no folders under /var/lib/ceph/osd/ ...
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2018-02-04 19:01 GMT+01:00 Kevin Olbrich :
> Hi!
>
> Currently I try to re-deploy a cluster from filestore to bluestore.
> I zapped all disks (multiple times
artitions 1 - 2 were not added, they are (this disk
has only two partitions).
Should I open a bug?
Kind regards,
Kevin
2018-02-04 19:05 GMT+01:00 Kevin Olbrich :
> I also noticed there are no folders under /var/lib/ceph/osd/ ...
>
>
> Mit freundlichen Grüßen / best regards,
> Kevi
Would be interested as well.
- Kevin
2018-02-04 19:00 GMT+01:00 Yoann Moulin :
> Hello,
>
> What is the best kernel for Luminous on Ubuntu 16.04 ?
>
> Is linux-image-virtual-lts-xenial still the best one ? Or
> linux-virtual-hwe-16.04 will offer some improvement ?
>
>
diff,object-map,deep-flatten on the image.
> Otherwise it runs well.
>
I always thought that the latest features are built into newer kernels, are
they available on non-HWE 4.4, HWE 4.8 or HWE 4.10?
Also I am researching for the OSD server side.
- Kevin
_
OSDs (and setting size
to 3).
I want to make sure we can resist two offline hosts (in terms of hardware).
Is my assumption correct?
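(A quick way to sanity-check that, pool name being a placeholder: with size 3 and replicas separated by host, the data itself survives two failed hosts, but any PG left with fewer than min_size copies will pause I/O until recovery:)

ceph osd crush rule dump          # the chooseleaf step should say "type": "host"
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size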
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
regards,
Kevin Olbrich.
>
> Original Message
> Subject: Re: [ceph-users] degraded objects after osd add (17-Nov-2016 9:14)
> From:Burkhard Linke
> To: c...@dolphin-it.de
>
> Hi,
>
>
> On 11/17/2016 08:07 AM, Steffen Weißgerber wrot
them run remote services (terminal).
My question is: are 80 VMs hosted on 53 disks (mostly 7.2k SATA) too much?
We sometimes experience lags where nearly all servers suffer from "blocked
IO > 32 seconds".
What are your experiences?
Mit freundlichen Grüßen / best regards,
Hi!
I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
only need to activate them.
What is better? One by one or all at once?
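(Either way, a sketch of what I'd wrap around the activation so data doesn't start moving between each step — device paths are placeholders:)

ceph osd set noout
ceph osd set norebalance
ceph-disk activate /dev/sdb1      # repeat per prepared OSD, or: ceph-disk activate-all
ceph osd unset norebalance
ceph osd unset noout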
Kind regards,
Kevin.
I need to note that I already have 5 hosts with one OSD each.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2016-11-28 10:02 GMT+01:00 Kevin Olbrich :
> Hi!
>
> I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
> only need to activate them.
> What
is safe regardless of full outage.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2016-12-07 21:10 GMT+01:00 Wido den Hollander :
>
> > Op 7 december 2016 om 21:04 schreef "Will.Boege" >:
> >
> >
> > Hi Wido,
> >
> > Just curious how
,
Kevin
Ok, thanks for your explanation!
I read those warnings about size 2 + min_size 1 (we are using ZFS RAID6-style
vdevs, called raidz2, as OSDs).
Time to raise replication!
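(i.e., for each pool — name is a placeholder:)

ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2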
Kevin
2016-12-13 0:00 GMT+01:00 Christian Balzer :
> On Mon, 12 Dec 2016 22:41:41 +0100 Kevin Olbrich wrote:
>
> > Hi,
>
2016-12-14 2:37 GMT+01:00 Christian Balzer :
>
> Hello,
>
Hi!
>
> On Wed, 14 Dec 2016 00:06:14 +0100 Kevin Olbrich wrote:
>
> > Ok, thanks for your explanation!
> > I read those warnings about size 2 + min_size 1 (we are using ZFS as
> RAID6,
> > called
understand this better.
Regards,
Kevin
for your
suggestion.
Regards,
Kevin
On Fri, Jan 6, 2017 at 8:56 AM, jiajia zhong wrote:
>
>
> 2017-01-06 11:10 GMT+08:00 kevin parrikar :
>
>> Hello All,
>>
>> I have setup a ceph cluster based on 0.94.6 release in 2 servers each
>> with 80Gb intel s3510 and
Thanks Christian for your valuable comments; each comment is a new learning
experience for me.
Please see inline
On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote:
>
> > Hello All,
> >
> > I h
.
Regards,
Kevin
On Fri, Jan 6, 2017 at 4:42 PM, kevin parrikar
wrote:
> Thanks Christian for your valuable comments,each comment is a new learning
> for me.
> Please see inline
>
> On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote:
>
>>
>> Hello,
>>
>&g
m SEC.
I suppose this also shows slow performance.
Any idea where the issue could be?
I use an LSI 9260-4i controller (firmware 12.13.0.-0154) on both nodes
with write-back enabled. I am not sure if this controller is suitable for
Ceph.
Regards,
Kevin
On Sat, Jan 7, 2017 at 1:23 PM, Mag
bought the S3500 because last time when we tried Ceph, people were
suggesting this model :) :)
Thanks a lot for your help.
On Sat, Jan 7, 2017 at 6:01 PM, Lionel Bouton <
lionel-subscript...@bouton.name> wrote:
> Hi,
>
> Le 07/01/2017 à 04:48, kevin parrikar a écrit :
>
> i reall
more osd "per" node or more osd "nodes".
Thanks alot for all your help.Learned so many new things thanks again
Kevin
On Sat, Jan 7, 2017 at 7:33 PM, Lionel Bouton <
lionel-subscript...@bouton.name> wrote:
> Le 07/01/2017 à 14:11, kevin parrikar a écrit :
>
> T
s for this image?
Kind regards,
Kevin
Is it possible to force-remove the lock or the image?
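(For reference, a lock can be removed by hand if nothing else works — a sketch, where the lock id and locker come straight from the list output and the pool/image names are placeholders:)

rbd lock list <pool>/<image>
rbd lock remove <pool>/<image> "<lock id>" client.<id>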
Kevin
2018-07-09 21:14 GMT+02:00 Jason Dillaman :
> Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses since
> it is failing to parse the address as valid. Perhaps it's barfing on the
> "%eth0" sc
and IPv6 addresses
>> since it is failing to parse the address as valid. Perhaps it's barfing on
>> the "%eth0" scope id suffix within the address.
>>
>> On Mon, Jul 9, 2018 at 2:47 PM Kevin Olbrich wrote:
>>
>>> Hi!
>>>
>>> I tri
ink local when there is an ULA-prefix available.
The address is available on brX on this client node.
- Kevin
> On Mon, Jul 9, 2018 at 3:43 PM Kevin Olbrich wrote:
>
>> 2018-07-09 21:25 GMT+02:00 Jason Dillaman :
>>
>>> BTW -- are you running Ceph on a one-node computer
2018-07-10 14:37 GMT+02:00 Jason Dillaman :
> On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich wrote:
>
>> 2018-07-10 0:35 GMT+02:00 Jason Dillaman :
>>
>>> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least
>>> present on the clien
Sounds a little bit like the problem I had on OSDs:
[ceph-users] Blocked requests activating+remapped after extending pg(p)_num
<http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026680.html>
Kevin Olbrich
You can keep the same layout as before. Most place DB/WAL combined in one
partition (similar to the journal on filestore).
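(i.e. when creating the OSD with ceph-volume you only point block.db at the fast partition and the WAL lands there too — device names below are just examples:)

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1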
Kevin
2018-07-13 12:37 GMT+02:00 Robert Stanford :
>
> I'm using filestore now, with 4 data devices per journal device.
>
> I'm confused by th
Hi,
why do I see activating followed by peering during OSD add (refill)?
I did not change pg(p)_num.
Is this normal? From my other clusters, I don't think that happened...
Kevin
PS: It's luminous 12.2.5!
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2018-07-14 15:19 GMT+02:00 Kevin Olbrich :
> Hi,
>
> why do I see activating followed by peering during OSD add (refill)?
> I did not change pg(p)_num.
>
> Is this normal? From my other
Hi,
on upgrade from 12.2.4 to 12.2.5 the balancer module broke (the mgr crashes
minutes after the service starts).
The only solution was to disable the balancer (the service has been running fine since).
Is this fixed in 12.2.7?
I was unable to locate the bug in the tracker.
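(For anyone hitting the same crash, turning it off until a fix lands is just one of these, depending on whether you want the module unloaded entirely:)

ceph balancer off
ceph mgr module disable balancer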
Kevin
2018-07-17 18:28 GMT+02:00