CephFS is not alone in this; there are other inode-less filesystems
around. They all report zeroes:
# df -i /nfs-dir
Filesystem Inodes IUsed IFree IUse% Mounted on
xxx.xxx.xx.x:/xxx/xxx/x 0 0 0 - /xxx
# df -i /reiserfs-dir
Filesystem
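For the same reason a plain stat shows zero inode counters on such mounts. A quick hedged check (the mount point is just an example):

  # total and free inode counters straight from statfs(2); both come back as 0 on CephFS
  stat -f -c 'inodes total=%c free=%d' /mnt/cephfs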
On Tue, 30 Apr 2019 at 20:56, Igor Podlesny wrote:
> On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
> [..]
> > Any suggestions ?
>
> -- Try different allocator.
Ah, BTW, besides the memory allocator there's another option: the recently
backported bitmap allocator.
Igor Fedotov wrote about its expected
On Wed, May 1, 2019 at 10:54 AM Brad Hubbard wrote:
>
> Which size is correct?
Sorry, accidental discharge =D
If the object info size is *incorrect* try forcing a write to the OI
with something like the following.
1. rados -p [name_of_pool_17] setomapval 10008536718. temporary-key anything
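The quoted instructions are cut off after step 1; a hedged sketch of how such a forced object-info rewrite is usually completed (pool, object and PG names are placeholders, and the follow-up steps are my assumption, not part of the quoted text):

  # 1. write a throwaway omap key so the object info gets rewritten
  rados -p <pool> setomapval <object> temporary-key anything
  # 2. remove the throwaway key again
  rados -p <pool> rmomapkey <object> temporary-key
  # 3. deep-scrub the affected PG and re-check for inconsistencies
  ceph pg deep-scrub <pgid>
  rados list-inconsistent-obj <pgid> --format=json-pretty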
Which size is correct?
On Tue, Apr 30, 2019 at 1:06 AM Reed Dier wrote:
>
> Hi list,
>
> Woke up this morning to two PG's reporting scrub errors, in a way that I
> haven't seen before.
>
> $ ceph versions
> {
> "mon": {
> "ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988
Dear ceph users,
I would like to ask: does the metadata server need much block device storage,
or does it only need RAM? How can I calculate the amount of disks
and/or memory needed?
Thank you very much
Manuel Sopena Ballesteros
Big Data Engineer | Kinghorn Centre for Clinical Genom
Am 01.05.19 um 00:51 schrieb Patrick Donnelly:
> On Tue, Apr 30, 2019 at 8:01 AM Oliver Freyermuth
> wrote:
>>
>> Dear Cephalopodians,
>>
>> we have a classic libvirtd / KVM based virtualization cluster using Ceph-RBD
>> (librbd) as backend and sharing the libvirtd configuration between the nodes
Hi, I've just noticed PR #27652, will test this out on my setup today.
Thanks again
From: "Wes Cilldhaire"
To: "Ricardo Dias"
Cc: "ceph-users"
Sent: Tuesday, 9 April, 2019 12:54:02 AM
Subject: Re: [ceph-users] Unable to list rbd block images in nautilus dashboard
Thank you
-
On Tue, Apr 30, 2019 at 8:01 AM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> we have a classic libvirtd / KVM based virtualization cluster using Ceph-RBD
> (librbd) as backend and sharing the libvirtd configuration between the nodes
> via CephFS
> (all on Mimic).
>
> To share the libvir
Hello Reed,
I would give PG repair a try.
IIRC there shouldn't be an issue when you have size 3... it would be more
difficult when you have size 2, I guess...
Hth
Mehmet
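A minimal sketch of what "give PG repair a try" looks like in practice (the PG id is a placeholder; it is worth inspecting the inconsistency before repairing):

  # list the inconsistent PGs and see which objects/shards disagree
  ceph health detail | grep inconsistent
  rados list-inconsistent-obj <pgid> --format=json-pretty
  # ask the primary OSD to repair the PG
  ceph pg repair <pgid>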
Am 29. April 2019 17:05:48 MESZ schrieb Reed Dier :
>Hi list,
>
>Woke up this morning to two PG's reporting scrub errors, in a way that
>
Hi my beloved Ceph list,
After an upgrade from Ubuntu Cosmic to Ubuntu Disco (and the Ceph
packages accordingly updated from 13.2.2 to 13.2.4), I now get this when I enter
"ceph health":
HEALTH_WARN 3 modules have failed dependencies
"ceph mgr module ls" only reports those 3 modules enabled:
"ena
On Tue, Apr 30, 2019 at 9:01 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:26, Igor Podlesny wrote:
> > On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> > >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform
> > >> > on our clusters.
> > >>
> > >> mode upmap ?
On Wed, 1 May 2019 at 01:58, Dan van der Ster wrote:
> On Tue, Apr 30, 2019 at 8:26 PM Igor Podlesny wrote:
[...]
> All of the clients need to be luminous or newer:
>
> # ceph osd set-require-min-compat-client luminous
>
> You need to enable the module:
>
> # ceph mgr module enable balancer
(En
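The quoted steps are truncated here; a sketch of the full sequence, assuming the stock mgr balancer (the mode/on pair is confirmed later in this thread):

  # all clients must be Luminous or newer for upmap to be usable
  ceph osd set-require-min-compat-client luminous
  # enable the balancer module and switch it to upmap mode
  ceph mgr module enable balancer
  ceph balancer mode upmap
  ceph balancer on
  # check progress
  ceph balancer status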
On Wed, 1 May 2019 at 01:26, Igor Podlesny wrote:
> On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
> >> > our clusters.
> >>
> >> mode upmap ?
> >
> > yes, mgr balancer, mode upmap.
Also -- do your CEPHs have
On Tue, Apr 30, 2019 at 8:26 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
> >> > our clusters.
> >>
> >> mode upmap ?
> >
> > yes, mgr balancer, mode upmap.
>
> I see. Was it a mat
Hey all,
If you happen to be attending the Boston Red Hat Summit or in the area,
please join the Ceph and Gluster community May 7th 6:30pm at our happy hour
event. Find all the details on the Eventbrite page. Looking forward to
seeing you all there!
https://www.eventbrite.com/e/ceph-and-gluster-c
On Wed, 1 May 2019 at 01:26, Jack wrote:
> If those pools are useless, you can:
> - drop them
As Dan pointed out, it's unlikely to have any effect.
The thing is, imbalance is a "property" of a pool -- I'd suppose that
most often of the most loaded one (or of a few of the most loaded ones).
Not that muc
On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
>> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
>> > our clusters.
>>
>> mode upmap ?
>
> yes, mgr balancer, mode upmap.
I see. Was it a matter of just:
1) ceph balancer mode upmap
2) ceph balancer on
or were th
You have a lot of useless PGs, yet they have the same "weight" as the
useful ones.
If those pools are useless, you can:
- drop them
- raise npr_archive's pg_num using the freed PGs
As npr_archive owns 97% of your data, it should get 97% of your PGs (which
is ~8000).
The balancer module is still quite
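A rough sketch of that sizing, assuming ~8192 PGs overall as implied above (pool name taken from the thread; on Luminous pg_num can only be increased, never decreased):

  # see how data and PGs are spread across pools today
  ceph df
  ceph osd pool ls detail
  # give the pool holding ~97% of the data roughly 97% of the PGs
  ceph osd pool set npr_archive pg_num 8192
  ceph osd pool set npr_archive pgp_num 8192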
Removing pools won't make a difference.
Read up to slide 22 here:
https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
..
Dan
(Apologies for terseness, I'm mobile)
On Tue, 30 Apr 2019, 20:02 Shain Miley, wrote:
> Here is the per
Here is the per pool pg_num info:
'data' pg_num 64
'metadata' pg_num 64
'rbd' pg_num 64
'npr_archive' pg_num 6775
'.rgw.root' pg_num 64
'.rgw.control' pg_num 64
'.rgw' pg_num 64
'.rgw.gc' pg_num 64
'.users.uid' pg_num 64
'.users.email' pg_num 64
'.users' pg_num 64
'.usage' pg_num 64
'.rgw.buckets
On Tue, 30 Apr 2019, 19:32 Igor Podlesny, wrote:
> On Wed, 1 May 2019 at 00:24, Dan van der Ster wrote:
> >
> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
> our clusters.
> >
> > .. Dan
>
> mode upmap ?
>
yes, mgr balancer, mode upmap.
.. Dan
> --
> End of mes
On Wed, 1 May 2019 at 00:24, Dan van der Ster wrote:
>
> The upmap balancer in v12.2.12 works really well... Perfectly uniform on our
> clusters.
>
> .. Dan
mode upmap ?
--
End of message. Next message?
The upmap balancer in v12.2.12 works really well... Perfectly uniform on
our clusters.
.. Dan
On Tue, 30 Apr 2019, 19:22 Kenneth Van Alstyne,
wrote:
> Unfortunately it looks like he’s still on Luminous, but if upgrading is an
> option, the options are indeed significantly better. If I recall
Unfortunately it looks like he’s still on Luminous, but if upgrading is an
option, the options are indeed significantly better. If I recall correctly, at
least the balancer module is available in Luminous.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disab
Hi,
I see that you are using rgw
RGW comes with many pools, yet most of them are used for metadata and
configuration; those do not store much data.
Such pools do not need more than a couple of PGs each (I use pg_num = 8).
You need to allocate your PGs to the pool that actually stores the data.
Please do th
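A hedged way to confirm which pools actually hold the data before changing any pg_num values:

  # per-pool usage; the RGW metadata/config pools should show close to zero bytes
  ceph df detail
  # per-pool pg_num for comparison
  ceph osd pool ls detail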
Shain:
Have you looked into doing a "ceph osd reweight-by-utilization” by chance?
I’ve found that data distribution is rarely perfect and on aging clusters, I
always have to do this periodically.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Vetera
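A hedged sketch of the reweight workflow Kenneth mentions (the threshold argument is optional; the test variant only reports what would change):

  # dry run: show which OSDs would be reweighted and by how much
  ceph osd test-reweight-by-utilization
  # apply it, touching only OSDs above 120% of average utilization
  ceph osd reweight-by-utilization 120
  # watch the resulting data movement and per-OSD fill levels
  ceph osd df
  ceph -s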
Hi,
We have a cluster with 235 osd's running version 12.2.11 with a
combination of 4 and 6 TB drives. The data distribution across osd's
varies from 52% to 94%.
I have been trying to figure out how to get this a bit more balanced as
we are running into 'backfillfull' issues on a regular bas
Dear Cephalopodians,
we have a classic libvirtd / KVM based virtualization cluster using Ceph-RBD
(librbd) as backend and sharing the libvirtd configuration between the nodes
via CephFS
(all on Mimic).
To share the libvirtd configuration between the nodes, we have symlinked some
folders from
On Tue, 30 Apr 2019 at 19:10, Denny Fuchs wrote:
[..]
> Any suggestions ?
-- Try different allocator.
In Proxmox 4 they had this by default in /etc/default/ceph {{
## use jemalloc instead of tcmalloc
#
# jemalloc is generally faster for small IO workloads and when
# ceph-osd is backed by SSDs.
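The quoted file is cut off; the usual way to actually switch to jemalloc there is an LD_PRELOAD line -- a sketch, assuming Debian/Ubuntu-style packaging (the exact library path varies by distro and jemalloc version):

  # /etc/default/ceph -- uncomment/adjust to preload jemalloc for the Ceph daemons
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1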
On Tue, 30 Apr 2019 at 19:11, Adrien Gillard
wrote:
> On Tue, Apr 30, 2019 at 10:06 AM Igor Podlesny wrote:
> >
> > On Tue, 30 Apr 2019 at 04:13, Adrien Gillard
> wrote:
> > > I would add that the use of cache tiering, though still possible, is
> not recommended
> >
> > It lacks references. CEP
On Tue, Apr 30, 2019 at 10:06 AM Igor Podlesny wrote:
>
> On Tue, 30 Apr 2019 at 04:13, Adrien Gillard wrote:
> > I would add that the use of cache tiering, though still possible, is not
> > recommended
>
> It lacks references. CEPH docs I gave links to didn't say so.
The cache tiering document
hi,
I also want to add a memory problem.
What we have:
* Ceph version 12.2.11
* 5 x 512GB Samsung 850 Evo
* 5 x 1TB WD Red (5.4k)
* OS Debian Stretch ( Proxmox VE 5.x )
* 2 x CPU CPU E5-2620 v4
* Memory 64GB DDR4
I've added to ceph.conf
...
[osd]
osd memory target = 3221225472
...
Which i
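For reference, 3221225472 bytes is exactly 3 GiB per OSD daemon, so roughly 15 GiB of cache target across the 5 OSDs on a 64 GB node. A hedged check that a running OSD actually picked the value up (osd.0 is a placeholder, run on the OSD host):

  # value is reported in bytes
  ceph daemon osd.0 config get osd_memory_target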
On Mon, 15 Apr 2019 at 19:40, Wido den Hollander wrote:
>
> Hi,
>
> With the release of 12.2.12 the bitmap allocator for BlueStore is now
> available under Mimic and Luminous.
>
> [osd]
> bluestore_allocator = bitmap
> bluefs_allocator = bitmap
Hi!
Have you tried this? :)
--
End of message. Ne
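A hedged way to verify the bitmap allocator is in use after the OSDs have been restarted (daemon name is a placeholder, run on the OSD host):

  ceph daemon osd.0 config get bluestore_allocator
  ceph daemon osd.0 config get bluefs_allocator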
On Tue, 30 Apr 2019 at 04:13, Adrien Gillard wrote:
> I would add that the use of cache tiering, though still possible, is not
> recommended
It lacks references. CEPH docs I gave links to didn't say so.
> comes with its own challenges.
It's challenging for some to not over-quote when replying,
Hi,
Only available in mimic and up.
To create or delete snapshots, clients require the ‘s’ flag in
addition to ‘rw’. Note that when the capability string also contains the
‘p’ flag, the ‘s’ flag must appear after it (all flags except ‘rw’
must be specified in alphabetical order).
http://docs.ceph.co
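A hedged example of what such a capability string can look like (client name and pool are placeholders; per the note above, ‘s’ goes after ‘p’):

  # allow this client to create/delete snapshots anywhere in the tree
  ceph auth caps client.snapadmin \
      mds 'allow rwps' \
      mon 'allow r' \
      osd 'allow rw pool=cephfs_data'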
Hi,
> Any recommendations?
>
> .. found a lot of names already ..
> OpenStack
> CloudStack
> Proxmox
> ..
>
> But recommendations are truely welcome.
I would recommend OpenNebula. Adopters of the KISS methodology.
Gr. Stefan
--
| BIT BV  http://www.bit.nl/  Kamer van Koophandel
Hi folks,
we are using nfs-ganesha to expose cephfs (Luminous) to nfs clients. I want to
make use of snapshots, but limit the creation of snapshots to ceph admins. I
read about cephx capabilities which allow/deny the creation of snapshots a
while ago, but I can’t find the info anymore. Can some