Hello,
On Mon, 19 Mar 2018 10:39:02 -0400 Mark Steffen wrote:
> At the moment I'm just testing things out and have no critical data on
> Ceph. I'm using some Intel DC S3510 drives at the moment; these may not be
optimal, but I'm just trying to do some testing and get my feet wet with
> Ceph (
David,
Pretty sure you must be aware of the random splitting of existing PGs on filestore
OSDs, `filestore split rand factor`; maybe you could try that too.
Thanks,
-Pavan.
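For reference, a minimal sketch of where that option lives, with a purely illustrative
value (the right factor depends on your PG object counts, and some releases need an
OSD restart for it to take effect):
[osd]
    filestore split rand factor = 20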
From: ceph-users on behalf of David Turner
Date: Monday, March 19, 2018 at 1:36 PM
To: Caspar Smit
Cc: ceph-users
Subject: E
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote:
> > Then I tried to reduce the number of MDS, from 4 to 1,
> On Monday, 19 March 2018 at 19:15 +0300, Sergey Malinin wrote:
> Forgot to mention that in my setup the issue was gone when I
> reverted back to a single MDS and switched dirfrag
Hi,
What is the best way to proceed if the network segments differ between OSD
to OSD, OSD to MON, and OSD to client due to some networking policy? What
should I put for public_addr and cluster_addr? Is it simply "as is",
depending on the connected network segments of each OSD and MON? If it is not
r
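In case it helps, a minimal ceph.conf sketch of the options involved; the subnets and
addresses below are made-up examples, not a recommendation for your policy:
[global]
    public network  = 192.168.10.0/24    # MON/client-facing segment (example)
    cluster network = 192.168.20.0/24    # OSD-to-OSD replication segment (example)
[osd.0]
    public addr  = 192.168.10.21         # per-daemon override (example address)
    cluster addr = 192.168.20.21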
Sorry for being away. I set all of my backfilling to VERY slow settings
over the weekend and things have been stable, but incredibly slow (1%
recovery from 3% misplaced to 2% all weekend). I'm back on it now and well
rested.
@Caspar, SWAP isn't being used on these nodes and all of the affected OS
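(For context, a rough sketch of the throttling knobs usually meant by "very slow
settings" -- the values here are illustrative, not the ones actually used:)
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-sleep 0.1'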
Hi everybody,
has anybody used PetaSAN?
The website claims it provides Ceph with ready-to-use iSCSI.
Has anybody tried it already?
Experience?
Thoughts?
Reviews?
Doubts?
Pros?
Cons?
Thanks for any thoughts.
Max
Forgot to mention that in my setup the issue was gone when I reverted back to a
single MDS and switched dirfrag off.
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote:
> Then I tried to reduce the number of MDS, from 4 to 1,
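(A minimal sketch of that rollback with Luminous-era commands, assuming a filesystem
named "cephfs" and that rank 3 is the highest active rank -- both assumptions, not
details from the thread:)
$ ceph fs set cephfs max_mds 1
$ ceph mds deactivate cephfs:3                 # repeat for each rank above 0
$ ceph fs set cephfs allow_dirfrags false      # flag is deprecated/removed on newer releases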
On Monday, 19 March 2018 at 15:30 +0300, Sergey Malinin wrote:
> The default for mds_log_events_per_segment is 1024; in my setup I ended
> up with 8192.
> I calculated that value as IOPS / log segments * 5 seconds (afaik the
> MDS performs journal maintenance once every 5 seconds by default).
I tried 4096
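(For reference, a sketch of how the value can be changed at runtime and persisted;
"mds.a" and 4096 are placeholders:)
$ ceph daemon mds.a config set mds_log_events_per_segment 4096
# to persist across restarts, in ceph.conf:
[mds]
    mds log events per segment = 4096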
Hi Marc,
You mentioned following the instructions 'except for doing this ldap
token'. Do I understand correctly that you did not generate / use an
LDAP token with your client? I think that is a necessary part of
triggering the LDAP authentication (Sections 3.2 and 3.3 of the doc you
linked). I ca
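(For reference, the token generation step looks roughly like this per the radosgw
LDAP docs -- the credentials below are placeholders:)
$ export RGW_ACCESS_KEY_ID="ldapuser"
$ export RGW_SECRET_ACCESS_KEY="ldapsecret"
$ radosgw-token --encode --ttype=ldap    # prints the base64 token the client then uses as its access key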
On Mon, Mar 05, 2018 at 12:55:52PM -0500, Jonathan D. Proulx wrote:
:Hi All,
:
:I've recently noticed my deep scrubs are EXTREMELY poorly
:distributed. They are starting within the 18->06 local time start/stop
:window but are not distributed over enough days or well distributed
:over the range of da
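(Not part of the original message, but the knobs that govern that window and spread
are roughly these; the 18/6 window matches the one described above, and the other two
values are the upstream defaults as I recall -- double-check them on your version:)
[osd]
    osd scrub begin hour = 18
    osd scrub end hour = 6
    osd deep scrub interval = 604800     # one week, in seconds
    osd scrub load threshold = 0.5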
At the moment I'm just testing things out and have no critical data on
Ceph. I'm using some Intel DC S3510 drives at the moment; these may not be
optimal, but I'm just trying to do some testing and get my feet wet with
Ceph (since trying it out with 9 OSDs on 2TB spinners about 4 years ago).
I had
The default for mds_log_events_per_segment is 1024; in my setup I ended up with
8192.
I calculated that value as IOPS / log segments * 5 seconds (afaik the MDS
performs journal maintenance once every 5 seconds by default).
On Monday, March 19, 2018 at 15:20, Nicolas Huillard wrote:
> I can't find any
On Monday, 19 March 2018 at 10:01, Sergey Malinin wrote:
> I experienced the same issue and was able to reduce metadata writes
> by raising mds_log_events_per_segment to
> several times its original value.
I changed it from 1024 to 4096:
* rsync status (1 line per file) scrolls m
On Mon, Mar 19, 2018 at 7:29 AM, ST Wong (ITSC) wrote:
> Hi,
>
>
>
> I tried to extend my experimental cluster with more OSDs running CentOS 7,
> but it failed with a warning and an error in the following steps:
>
>
>
> $ ceph-deploy install --release luminous newosd1
> # no error
>
> $ ceph-deploy osd creat
Hi,
I tried to extend my experimental cluster with more OSDs running CentOS 7, but it
failed with a warning and an error in the following steps:
$ ceph-deploy install --release luminous newosd1    # no error
$ ceph-deploy osd create newosd1 --data /dev/sdb
cut here --
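(Without the elided output it's hard to say more, but a typical way to check what
ceph-volume did on the new host and whether the OSD registered is roughly:)
$ ssh newosd1 sudo ceph-volume lvm list    # shows what was prepared/activated on /dev/sdb
$ ceph osd tree                            # the new OSD should appear and come 'up'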
Hello,
On Sun, 18 Mar 2018 10:59:15 -0400 Mark Steffen wrote:
> Hello,
>
> I have a Ceph newb question that I would appreciate some advice on.
>
> Presently I have 4 hosts in my Ceph cluster, each with 4 480GB eMLC drives
> in them. These 4 hosts have 2 more empty slots each.
>
A lot of the answer
Hi Steven,
Le 16/03/2018 à 17:26, Steven Vacaroaia a écrit :
Hi All,
Can someone please confirm that, for a good performance/safety
compromise, the following would be the best settings (id 0 is SSD,
id 1 is HDD)?
Alternatively, any suggestions / shared configurations / advice would
be g
We don't run compression as far as I know, so that wouldn't be it. We do
actually run a mix of bluestore & filestore - due to the rest of the
cluster predating a stable bluestore by some amount.
12.2.2 -> 12.2.4 on 2018/03/10: I don't see any increase in memory usage. No
compression of any kind, of cour
I experienced the same issue and was able to reduce metadata writes by raising
mds_log_events_per_segment to several times its original value.
From: ceph-users on behalf of Nicolas
Huillard
Sent: Monday, March 19, 2018 12:01:09 PM
To: ceph-users@list
Hi Berant
I've created a Prometheus exporter that scrapes the RADOSGW Admin Ops API and
exports the usage information for all users and buckets. This is my first
Prometheus exporter, so if anyone has feedback I'd greatly appreciate it.
I've tested it against Hammer, and will shortly test against J
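(For anyone who wants to eyeball the same data the exporter pulls, the admin-side
equivalent is roughly the following; "someuser" is a placeholder:)
$ radosgw-admin usage show --uid=someuser --show-log-entries=false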
The MDS has to write to its local journal when clients open files, in
case of certain kinds of failures.
I guess it doesn't distinguish between read-only (when it could
*probably* avoid writing them down? Although it's not as simple a
thing as it sounds) and writeable file opens. So every file you
Hi all,
I'm experimenting with a new little storage cluster. I wanted to take
advantage of the week-end to copy all data (1TB, 10M objects) from the
cluster to a single SATA disk. I expected to saturate the SATA disk
while writing to it, but the storage cluster actually saturates its
network links
Maybe (likely?) in Mimic. Certainly the next release.
Some code has been written but the reason we haven’t done this before is
the number of edge cases involved, and it’s not clear how long rounding
those off will take.
-Greg
On Fri, Mar 16, 2018 at 2:38 PM Ovidiu Poncea
wrote:
> Hi All,
>
> Is
Mostly, this exists because syslog is just receiving our raw strings, and
those embed timestamps generated deep in the code.
So we *could* strip them out for syslog, but we'd still have paid the cost
of generating them, and as you can see we have much higher precision than
that syslog output, plus it
You can explore the rbd exclusive lock functionality if you want to do
this, but it’s not typically advised because using it makes moving live VMs
across hosts harder, IIUC.
-Greg
On Sat, Mar 17, 2018 at 7:47 PM Egoitz Aurrekoetxea
wrote:
> Good morning,
>
>
> Does some kind of config param exis
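(A minimal sketch of the feature Greg mentions, with placeholder pool/image names:)
$ rbd feature enable rbdpool/vmdisk1 exclusive-lock
$ rbd lock ls rbdpool/vmdisk1    # inspect current lock holders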