On 1/9/19 10:10 AM, 楼锴毅 wrote:
> Well, but the client with kernel 3.10 worked fine several days ago, when
> the cluster version was also 12.2.10
>
I see, but you went forward with that flag. Somebody set the flag.
I know that Red Hat backports a lot of things to the 3.10 kernel; maybe
that is why nobody else has this problem?
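In case it helps, a minimal sketch of how one might compare what the cluster now requires with what the kernel clients advertise (the "jewel" level in the last command is only an example, not taken from this thread):

# Which client release the cluster currently requires
ceph osd dump | grep require_min_compat_client

# Feature bits reported by daemons and connected clients
ceph features

# Lowering the requirement again is only possible if no incompatible
# features (e.g. upmap) are in use
ceph osd set-require-min-compat-client jewel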
On Mon, Jan 7, 2019 at 10:11 AM 王俊 wrote:
> Hi,all:
> May I ask a question?
> The default number of files in a subdirectory before splitting into child
> directories is between 320 and 640. My server has 256 GB of memory and 64
> cores. How can I calculate th
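If this is about the FileStore directory-split threshold, here is a rough sketch of the usual calculation, assuming the stock option names and defaults (not tuned for any particular memory or core count):

# A PG directory is split once it holds more than about
#   filestore_split_multiple * abs(filestore_merge_threshold) * 16
# objects; with the defaults (2 and 10) that is 320, and
# filestore_split_rand_factor spreads the actual split point up to
# roughly twice that, hence the 320-640 range.
ceph daemon osd.0 config get filestore_split_multiple
ceph daemon osd.0 config get filestore_merge_threshold
ceph daemon osd.0 config get filestore_split_rand_factor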
Hello,
I have a cluster of 3 nodes, 3 OSDs per node (so 9 OSDs in total),
replication set to 3 (so each node has a copy).
For some reason, I would like to recreate node 1. What I have done:
1. out the 3 OSDs of node 1, stop them, then destroy them (almost at the
same time)
2. recreate the new
I've done something similar. I used a process like this:
ceph osd set noout
ceph osd set nodown
ceph osd set nobackfill
ceph osd set norebalance
ceph osd set norecover
Then I did my work to manually remove/destroy the OSDs I was replacing,
brought the replacements online, and unset all of those o
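For completeness, a sketch of clearing those flags again once the replacement OSDs are in and healthy (same flag set as above):

ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset nodown
ceph osd unset noout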
Hello,
I have a Ceph cluster on Luminous 12.2.10 dedicated to CephFS.
The raw size is 65.5 TB; with replica 3, I should have ~21.8 TB usable.
But the size of the CephFS as seen by df is *only* 19 TB. Is that normal?
Best regards,
here some hopefully useful information :
> apollo@icadmin004:~$
Hello ceph-users. I'm operating a moderately large Ceph cluster with
CephFS. We currently have 288 OSDs, all on 10 TB drives, and are
getting ready to migrate another 432 drives into the cluster (I'm going to
have more questions on that later). Our workload is highly distributed
(containeri
Hey folks, I’m looking into what I thought would be a simple problem, but
it is turning out to be more complicated than I would have anticipated. A
virtual machine managed by OpenNebula was blown away, but the backing RBD
images remain. Upon investigating, it appears
that the images still ha
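A sketch of how one might inspect such leftover images before removing them; the pool and image names below are made-up placeholders, not taken from the thread:

rbd ls one
rbd info one/one-123-disk-0
rbd status one/one-123-disk-0      # lists any remaining watchers
rbd snap ls one/one-123-disk-0     # snapshots must be removed first
rbd snap purge one/one-123-disk-0
rbd rm one/one-123-disk-0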
Hi,
On 08/01/2019 18:58, David Galloway wrote:
> The current distro matrix is:
>
> Luminous: xenial centos7 trusty jessie stretch
> Mimic: bionic xenial centos7
Thanks for clarifying :)
> This may have been different in previous point releases because, as Greg
> mentioned in an earlier post in
Hi all
I'm seeing some behaviour I wish to check on a Luminous (12.2.10) cluster
that I'm running for RBD and RGW (mostly SATA FileStore with NVMe journals,
with a few SATA-only BlueStore OSDs). There's a set of dedicated SSD OSDs
running BlueStore for the .rgw.buckets.index pool and also holding the
.r
Hello Jonathan,
On Wed, Jan 9, 2019 at 5:37 AM Jonathan Woytek wrote:
> While working on examining performance under load at scale, I see a marked
> performance improvement whenever I would restart certain mds daemons. I was
> able to duplicate the performance improvement by issuing a "daemon m
On Wed, Jan 9, 2019 at 4:34 PM Patrick Donnelly wrote:
> Hello Jonathan,
>
> On Wed, Jan 9, 2019 at 5:37 AM Jonathan Woytek wrote:
> > While working on examining performance under load at scale, I see a
> marked performance improvement whenever I would restart certain mds
> daemons. I was able t
Dear Cephalopodians,
inspired by
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-January/032092.html I
did a check of the object-maps of our RBD volumes
and snapshots. We are running 13.2.1 on the cluster I am talking about, all
hosts (OSDs, MONs, RBD client nodes) still on CentOS 7.5.
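For reference, a minimal sketch of the kind of per-image check described, with placeholder pool/image/snapshot names:

# Validate the object map of an image and of one of its snapshots
rbd object-map check rbd/myimage
rbd object-map check rbd/myimage@mysnap

# Rebuild an object map that the check flags as invalid
rbd object-map rebuild rbd/myimage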
On Thu, Jan 10, 2019 at 8:02 AM Jonathan Woytek wrote:
>
> On Wed, Jan 9, 2019 at 4:34 PM Patrick Donnelly wrote:
>>
>> Hello Jonathan,
>>
>> On Wed, Jan 9, 2019 at 5:37 AM Jonathan Woytek wrote:
>> > While working on examining performance under load at scale, I see a marked
>> > performance im
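As an aside, a sketch of the MDS introspection typically used when chasing this kind of behaviour (the daemon name is a placeholder; run on the MDS host):

ceph daemon mds.<name> cache status
ceph daemon mds.<name> perf dump mds
ceph daemon mds.<name> config get mds_cache_memory_limit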
Hi, cephers

I have two inconsistent PGs. I tried to list the inconsistent objects and got nothing:

rados list-inconsistent-obj 388.c29
No scrub information available for pg 388.c29
error 2: (2) No such file or directory

so I se
On 1/10/19 8:36 AM, hnuzhoulin2 wrote:
>
> Hi, cephers
>
> I have two inconsistent PGs. I tried to list the inconsistent objects and got nothing:
>
> rados list-inconsistent-obj 388.c29
> No scrub information available for pg 388.c29
> error 2: (2) No such file or directory
>
Have you tried to run a deep-scrub on this PG first?
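i.e. something along these lines, using the PG id from the post above:

ceph pg deep-scrub 388.c29
# wait for the scrub to finish (ceph -w or ceph pg 388.c29 query), then:
rados list-inconsistent-obj 388.c29 --format=json-pretty
# and only once the damaged replica has been identified:
ceph pg repair 388.c29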
On 1/9/19 2:33 PM, Yoann Moulin wrote:
> Hello,
>
> I have a Ceph cluster on Luminous 12.2.10 dedicated to CephFS.
>
> The raw size is 65.5 TB; with replica 3, I should have ~21.8 TB usable.
>
> But the size of the CephFS as seen by df is *only* 19 TB. Is that normal?
>
Yes. Ceph will calcul
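For anyone following along, a sketch of where the numbers behind that df figure usually come from (the exact arithmetic depends on the fullest OSD and the configured full ratio):

# Per-pool MAX AVAIL, which is what the CephFS df size is derived from
ceph df detail

# Per-OSD utilisation; one unevenly filled OSD pulls MAX AVAIL down
ceph osd df tree

# The configured full ratio also caps the usable space
ceph osd dump | grep full_ratio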