How does one read/set that from the command line?
Thanks,
Lindsay
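Assuming "that" is the pool replication size asked about in the post below, a
minimal sketch of reading and setting it from the CLI (pool name is a placeholder):

ceph osd pool get <poolname> size
ceph osd pool set <poolname> size 3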
On 29/04/2021 11:52 pm, Schmid, Michael wrote:
I am new to ceph and at the moment I am doing some performance tests with a 4
node ceph-cluster (pacific, 16.2.1).
Ceph doesn't do well with small numbers, 4 OSDs is really marginal.
Your latency isn't crash hot either. What size are you running
On 20/10/2020 11:38 pm, Mac Wynkoop wrote:
Autoscaler isn't on, what part of Ceph is handling the increase of pgp_num?
Because I'd like to turn up the rate at which it splits the PG's, but if
autoscaler isn't doing it, I'd have no clue what to adjust. Any ideas?
Normal recovery ops I imagine -
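If it is just normal backfill doing the splits, a rough sketch of turning the
rate up would be along these lines (the values are illustrative, not
recommendations):

ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8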
On 23/09/2020 2:29 pm, Виталий Филиппов wrote:
Not RBD, it has its own qemu driver
How have you integrated it into Qemu? From memory, Qemu doesn't support
plugin drivers.
Do we need to custom patch Qemu?
--
Lindsay
On 23/09/2020 8:44 am, vita...@yourcmc.ru wrote:
There are more details in the README file which currently opens from the
domain https://vitastor.io
that redirects to https://yourcmc.ru/git/vitalif/vitastor
Is that your own site?
--
Lindsay
On 23/09/2020 8:44 am, vita...@yourcmc.ru wrote:
After almost a year of development in my spare time I present my own
software-defined block storage system: Vitastor - https://vitastor.io
Interesting, thanks.
Does it support qemu connecting via rbd?
--
Lindsay
On 23/09/2020 12:51 am, Lenz Grimmer wrote:
It's on the OSD page, click "Cluster-wide configuration -> Recovery
priority" option on top of the table.
Thanks Lenz! Totally missed that (and the PG Scrub options).
--
Lindsay
On 22/09/2020 10:55 pm, René Bartsch wrote:
What do you mean with EC?
Proxmox doesn't support creating EC pools via its GUI, as EC is not
considered a good fit for VM hosting. However, you can create EC pools
via the command line as normal.
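For example, a minimal sketch (profile/pool names, PG count and k/m values are
just placeholders):

ceph osd erasure-code-profile set myprofile k=2 m=1
ceph osd pool create ecpool 128 128 erasure myprofile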
--
Lindsay
On 22/09/2020 7:10 pm, Lenz Grimmer wrote:
Alternatively, you could have used the Dashboard's OSD Recovery
Priority Feature (see screen shot)
Whereabouts is that? Not seeing it on the Ceph Nautilus dashboard.
--
Lindsay
On 21/09/2020 5:40 am, Stefan Kooman wrote:
My experience with bonding and Ceph is pretty good (OpenvSwitch). Ceph
uses lots of tcp connections, and those can get shifted (balanced)
between interfaces depending on load.
Same here - I'm running 4*1GB (LACP, Balance-TCP) on a 5 node cluster
with
On 8/09/2020 5:30 pm, Marc Roos wrote:
Do know that this is the only mailing list I am subscribed to, that
sends me so much spam. Maybe the list admin should finally have a word
with other list admins on how they are managing their lists
On 28/08/2020 5:19 pm, Zhenshi Zhou wrote:
In my deployment I partition the disk for wal and db separately, as I can assign
the size manually.
When using ceph-volume, you can specify the sizes on the command line.
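e.g. something along these lines with ceph-volume's batch mode (a sketch only -
the device names and the 30G figure are made up):

ceph-volume lvm batch --bluestore /dev/sdd /dev/sde --db-devices /dev/sdc --block-db-size 30G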
--
Lindsay
On 25/08/2020 6:07 am, Tony Liu wrote:
I don't need to create
WAL device, just primary on HDD and DB on SSD, and WAL will be
using DB device cause it's faster. Is that correct?
Yes.
But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
Anything less than those sizes will have the extra space go unused.
Did you check the ceph status? ("ceph -s")
On 16/08/2020 1:47 am, Matt Dunavant wrote:
Hi all,
We just completed maintenance on an OSD node and we ran into an issue where all
data seemed to stop flowing while the node was down. We couldn't connect to any
of our VMs during that time. I was und
On 6/08/2020 8:52 pm, Marc Roos wrote:
Can you block gmail.com or so!!!
! Gmail account here :(
Can't we just restrict the list to emails from members?
--
Lindsay
On 20/07/2020 10:48 pm, carlimeun...@gmail.com wrote:
After trying to restart the mds master, it also failed. Now the cluster state
is :
Try deleting and recreating one of the MDS.
--
Lindsay
On 12/07/2020 6:34 pm, c...@elchaka.de wrote:
I guess ceph.log in your mon would be a good place to start... but am not sure
Thanks. Guess I should do a grep through all the ceph logs :)
--
Lindsay
On 10/07/2020 1:33 pm, Zhenshi Zhou wrote:
Hi, not trying to save storage, I just wanna know what would be impacted if
I modify the total number of object copies.
Storage is cheap, data is expensive.
--
Lindsay
On 7/07/2020 2:41 am, Etienne Mula wrote:
Update on this one, we have now changed the active ceph_mgr to another
instance and now getting:
0 mgr[zabbix] Exception when sending: 'zabbix_sender'
strange thing since using zabbix_sender is still working.
What user does it run as when ceph_mgr r
On 5/07/2020 10:43 pm, Wout van Heeswijk wrote:
After unsetting the norecover and nobackfill flag some OSDs started
crashing every few minutes. The OSD log, even with high debug
settings, doesn't seem to reveal anything, it just stops logging mid log
line.
POOMA U, but could the OOM Killer be
Dumb question - in what log file are the RocksDB spillover warnings posted?
Thanks.
--
Lindsay
On 5/07/2020 8:16 pm, Lindsay Mathieson wrote:
But from what you are saying, the 500GB disk would have been gaining
no benefit? I would be better off allocating 30GB (or 30GB) for each
disk?
Edit: 30GB or 62GB (it's a 127GB SSD)
--
Lindsay
On 5/07/2020 7:38 pm, Alexander E. Patrakov wrote:
If the wal location is not explicitly specified, it goes together with
the db. So it is on the SSD.
Conversely, what happens with the block.db if I place the wal with
--block.wal
The db then stays with the data.
Ah, so my 2nd reading was correct.
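i.e. to split all three out explicitly it would be something like this (device
names are just examples):

ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/sdc1 --block.wal /dev/sdc2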
Nautilus install.
Documentation seems a bit ambiguous to me - this is for a spinner + SSD,
using ceph-volume
If I put the block.db on the SSD with
"ceph-volume lvm create --bluestore --data /dev/sdd --block.db
/dev/sdc1"
does the wal exist on the SSD (/dev/sdc1) as well, or does it re
On 2/07/2020 4:38 pm, Burkhard Linke wrote:
# ceph-volume lvm list
Perfect, thank you.
--
Lindsay
Is there a way to display an OSD's setup - data, data.db and WAL
disks/partitions?
--
Lindsay
On 30/06/2020 8:17 pm, Eugen Block wrote:
Don't forget to set the correct LV tags for the new db device as
mentioned in [1] and [2].
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6OHVTNXH5SLI4ABC75VVP7J2DT7X4FZA/
[2] https://tracker.ceph.com/issues/42928
Thanks and
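For reference, those are ordinary LVM tags on the OSD's LVs, so setting them
looks roughly like this (a sketch - the VG/LV names are made up, and any stale
tags need removing with --deltag first):

lvchange --addtag ceph.db_device=/dev/vg_db/db-osd0 /dev/ceph-block-vg/osd-block-0
lvchange --addtag ceph.db_uuid=<uuid of the new db LV> /dev/ceph-block-vg/osd-block-0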
On 29/06/2020 11:44 pm, Stefan Priebe - Profihost AG wrote:
You need to use:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD}
bluefs-bdev-new-db --dev-target /dev/bluesfs_db/db-osd${OSD}
and
ceph-bluestore-tool bluefs-bdev-migrate --path dev/osd1/ --devs-source dev/osd1/block
--dev-target dev/osd1/block.db
Nautilus - Bluestore OSDs created with everything on disk. Now I have
some spare SSDs - can I move the location of the existing WAL and/or DB
to SSD partitions without recreating the OSD?
I suspect not - saw emails from 2018, in the negative :(
Failing that - is it difficult to add lvmcache
On 26/06/2020 8:08 pm, Zhenshi Zhou wrote:
Hi Lindsay,
I have only 3 hosts, and is there any method to set up an EC pool cluster
in a better way
There's failure domain by OSD, which Janne knows far better than I :)
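e.g. when creating the EC profile (the k/m values here are just an example):

ceph osd erasure-code-profile set ec-by-osd k=4 m=2 crush-failure-domain=osd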
--
Lindsay
On 26/06/2020 6:31 pm, Zhenshi Zhou wrote:
I'm going to deploy a cluster with erasure code pool for cold storage.
There are 3 servers for me to set up the cluster, 12 OSDs on each server.
Does that mean the data is secure while 1/3 of the cluster's OSDs are down,
or only 2 of the OSDs are down, if I
On 26/06/2020 5:27 pm, Francois Legrand wrote:
In that case it's normal to have misplaced objects (because with the new disk
some pgs need to be migrated to populate this new space), but degraded pgs do
not seem to be the right behaviour!
Yes, that would be bad, not sure if that's the proce
On 25/06/2020 5:10 pm, Frank Schilder wrote:
I was pondering that. The problem is that on CentOS systems it seems to
be ignored, in general it does not apply to SAS drives, for example, and that
it has no working way of configuring which drives to exclude.
For example, while for data dis
On 26/06/2020 1:44 am, Jiri D. Hoogeveen wrote:
In Mimic I had only some misplaced objects and it recovered within an hour.
In Nautilus, when I do exactly the same, I get, besides misplaced objects,
also degraded PGs and undersized PGs, and the recovery takes almost a day.
Slowness of recovery as
On 25/06/2020 3:17 am, dhils...@performair.com wrote:
Completely non-portable, but...
Couldn't you write a script to issue the necessary commands to the desired
drives, then create a system unit that calls it before OSD initialization?
Couldn't we just set (uncomment)
write_cache = off
in /e
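Failing that, the script + unit approach from the quoted post would look
roughly like this (a sketch - the device list, paths and unit name are made up):

#!/bin/sh
# /usr/local/sbin/disable-write-cache.sh - turn off the volatile write cache
for dev in /dev/sd[a-d]; do
    hdparm -W 0 "$dev"
done

# /etc/systemd/system/disable-write-cache.service
[Unit]
Description=Disable drive write cache before the OSDs start
Before=ceph-osd.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/disable-write-cache.sh

[Install]
WantedBy=multi-user.target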
Thanks Eugen
On 22/06/2020 10:27 pm, Eugen Block wrote:
Regarding the inactive PGs, how are your pools configured? Can you share
ceph osd pool ls detail
It could be an issue with min_size (is it also set to 3?).
pool 2 'ceph' replicated size 3 min_size 1 crush_rule 0 object_hash
rjenkins p
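If min_size 1 turns out to be the issue, bumping it for that pool is just:

ceph osd pool set ceph min_size 2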
On 22/06/2020 9:36 am, Lindsay Mathieson wrote:
I have a problem with one osd (osd.5 on server lod) that keeps
crashing. Often it immediately crashes on restart, but oddly a server
reboot fixes that, also it always starts ok from the command line.
Service status and journalctl don't sho
I have a problem with one osd (osd.5 on server lod) that keeps crashing.
Often it immediately crashes on restart, but oddly a server reboot fixes
that, also it always starts ok from the command line. Service status and
journalctl don't show any useful information.
There's two OSDs on the serv
Nautilus 14.2.9, setup using Proxmox.
* 5 Hosts
* 18 OSDs with a mix of disk sizes (3TB, 1TB, 500GB), all bluestore
* Pool size = 3, pg_num = 512
According to:
https://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/#preselection
With 18 OSDs I should be using pg_num=1024, bu
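For reference, bumping it would just be (pool name is a placeholder):

ceph osd pool set <pool> pg_num 1024
ceph osd pool set <pool> pgp_num 1024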