Hi all,
We deployed a new Nautilus cluster using ceph-ansible and enabled the dashboard
through group_vars.
Then we enabled the pg autoscaler from the command line with "ceph mgr module enable
pg_autoscaler".
Should we update group_vars and deploy again to make the change permanent?
Sorry for the newbie quest
I believe that change should be persistent regardless of whether you apply
it with ceph-ansible or via the command line.
The only reason I would also enable it in your ceph-ansible group_vars is so that the
next time you deploy a cluster, you won’t have to remember to run that
command by hand.
Someone feel free to correct me if I'm wrong.
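For reference, roughly what both routes look like (a sketch; the ceph_mgr_modules
variable name is an assumption based on typical ceph-ansible mgr group_vars, so check
your version's sample file, and <pool> is a placeholder):

  # one-off, by hand (what was already run):
  ceph mgr module enable pg_autoscaler
  ceph osd pool set <pool> pg_autoscale_mode on   # per pool

  # persisted via ceph-ansible group_vars (assumed variable name, verify against
  # your version's mgrs.yml.sample):
  #   ceph_mgr_modules:
  #     - status
  #     - pg_autoscaler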
Hi,
I can’t get data flushed out of an OSD whose weight is set to 0. Is there
any way of checking the tasks queued for PG remapping? Thank you.
Can you give some more details about your cluster (replicated or EC
pools, applied rules, etc.)? My first guess would be that the other
OSDs are either
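In the meantime, a few commands that usually show what is still queued or moving
(a sketch; adjust IDs and names to your cluster):

  ceph osd df tree        # confirm the OSD really has (re)weight 0
  ceph -s                 # overall backfill/recovery progress
  ceph pg dump pgs_brief | grep -E 'remapped|backfill'   # PGs still being moved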
Bump!
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Friday, 28 June, 2019 14:54:53
> Subject: [ceph-users] troubleshooting space usage
> Hi
> Could someone please explain / show how to troubleshoot the space usage in
> Ceph
> and how to reclaim the unused space?
> I have a small
Hi Andrei,
The most obvious reason is space usage overhead caused by BlueStore
allocation granularity: e.g. if bluestore_min_alloc_size is 64K and the
average object size is 16K, one will waste 48K per object on average.
This is rather speculation so far, as we lack the key information about
your
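A rough way to check whether allocation overhead is what is eating the space
(a sketch; osd.0 stands in for any OSD, the daemon command has to run on that OSD's
host, and the effective min_alloc_size is fixed when the OSD is created):

  ceph df detail       # logical (STORED) vs. raw (USED) usage per pool
  rados df             # object counts and sizes per pool
  ceph daemon osd.0 config show | grep bluestore_min_alloc_size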
Yes, this should be possible using an object class which is also a
RADOS client (via the RADOS API). You'll still have some client
traffic, as the machine running the object class will still need to
connect to the relevant primary OSD and send the write (presumably in
some situations though this will
Hi all,
I'm facing a very strange issue after migrating my Luminous cluster to
Nautilus.
I have two pools configured for OpenStack Cinder volumes with a multiple-backend
setup: one "service" Ceph pool with cache tiering and one "R&D"
Ceph pool.
After the upgrade, the R&D pool became inaccessible f
Hi,
did you try to use rbd and rados commands with the cinder keyring, not
the admin keyring? Did you check if the caps for that client are still
valid (do the caps differ between the two cinder pools)?
Are the ceph versions on your hypervisors also nautilus?
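Something along these lines, for example (a sketch; client.cinder and the pool name
are placeholders for whatever OpenStack actually uses):

  ceph auth get client.cinder        # compare the caps used for both pools
  rados ls -p <rnd_pool> --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
  rbd ls <rnd_pool> --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring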
Regards,
Eugen
Quoting Adr
Hi Eugen,
The cinder keyring used by the two pools is the same; the rbd command
works using this keyring and the ceph.conf used by OpenStack, while the rados
ls command stays stuck.
I tried with the previously used ceph-common version (10.2.5) and the latest
ceph version (14.2.1).
With the Nautilus ceph-co
Hi,
on client machines, when I use the command rbd, for example, rbd ls
poolname, this message is always displayed:
2019-07-02 11:18:10.613 7fb2eaffd700 -1 set_mon_vals failed to set
cluster_network = 10.1.2.0/24: Configuration option 'cluster_network'
may not be modified at runtime
2019-07-02 11
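If it helps narrow it down: that message presumably means the option is being pushed
to the client from the monitors' central config store, so you can check where it is
defined with something like (a sketch):

  ceph config dump | grep cluster_network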
I'm not sure how or why you'd get an object class involved in doing
this in the normal course of affairs.
There's a copy_from op that a client can send and which copies an
object from another OSD into the target object. That's probably the
primitive you want to build on. Note that the OSD doesn't
Hi all,
Starting to make preparations for Nautilus upgrades from Mimic, and I'm looking
over my client/session features and trying to fully grasp the situation.
> $ ceph versions
> {
> "mon": {
> "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic
> (stable)": 3
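For the feature-bit side specifically, these are usually the relevant views
(a sketch; substitute your mon id, and run the daemon command on that mon's host):

  ceph features                   # connected clients/daemons grouped by release and feature bits
  ceph daemon mon.<id> sessions   # per-session features on a given monitor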
On Wed, Jul 3, 2019 at 4:25 AM Gregory Farnum wrote:
>
> I'm not sure how or why you'd get an object class involved in doing
> this in the normal course of affairs.
>
> There's a copy_from op that a client can send and which copies an
> object from another OSD into the target object. That's probab
I wouldn't say that's a pretty common failure. The flaw here perhaps is the
design of the cluster and that it was relying on a single power source.
Power sources fail. Dual power supplies connected to A and B power sources in
the data centre are pretty standard.
On Tuesday, July 2, 2019, Bryan Henderso
I am getting "Operation not permitted" on a write when trying to set caps
for a user. Admin user (allow * for everything) works ok.
This does not work:
caps: [mds] allow r,allow rw path=/home
caps: [mon] allow r
caps: [osd] allow rwx tag cephfs data=cephfs_data2
This does
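One way to cross-check the caps is to let Ceph generate them for that path and
compare (a sketch; the fs name and client id are placeholders):

  ceph fs authorize <fs_name> client.<id> /home rw   # generates matching mds/mon/osd caps
  ceph auth get client.<id>                          # compare with the caps shown above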
Here's some counter-evidence to the proposition that it's uncommon
for an entire cluster to go down because of a power failure.
Every data-center-class hardware storage server product I know of has dual
power inputs and is also designed to tolerate losing power on both at once. If
that ha
On Fri, Jun 28, 2019 at 8:27 AM Jorge Garcia wrote:
>
> This seems to be an issue that gets brought up repeatedly, but I haven't
> seen a definitive answer yet. So, at the risk of repeating a question
> that has already been asked:
>
> How do you migrate a cephfs data pool to a new data pool? The
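The direction that usually gets suggested (a sketch under assumptions, not a
definitive answer; names and paths are placeholders) is to add the new pool to the
filesystem, point the directory layout at it for new files, and then rewrite existing
files so they move:

  ceph fs add_data_pool <fs_name> <new_pool>
  setfattr -n ceph.dir.layout.pool -v <new_pool> /mnt/cephfs/<dir>
  # existing files keep their old layout; they have to be copied/rewritten to migrate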
I'd suggest creating a tracker similar to
http://tracker.ceph.com/issues/40554 which was created for the issue
in the thread you mentioned.
On Wed, Jul 3, 2019 at 12:29 AM Vandeir Eduardo
wrote:
>
> Hi,
>
> on client machines, when I use the command rbd, for example, rbd ls
> poolname, this messa
Hi All,
Some feedback on my end: I managed to recover the "lost data" from one of
the other OSDs. It seems my initial summary was a bit off, in that the
PGs were replicated; Ceph just wanted to confirm that the objects were
still relevant.
For future reference, I basically marked the OSD as lost
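For anyone hitting this later, the commands typically involved in that step look
roughly like this (a sketch; both are destructive, so only use them when you are sure
the data cannot be recovered elsewhere):

  ceph osd lost <osd-id> --yes-i-really-mean-it
  ceph pg <pgid> mark_unfound_lost revert    # or 'delete'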