That's really useful to know, thanks Daniel.
On 15/11/2023 19:07, Daniel Baumann wrote:
On 11/15/23 19:52, Daniel Baumann wrote:
for 18.2.0, there's only one trivial thing needed:
https://git.progress-linux.org/packages/graograman-backports-extras/ceph/commit/?id=ed59c69244ec7b81ec08f7a2d1a1f0a
On 13/11/2023 16:28, Daniel Baumann wrote:
On 11/13/23 17:14, Luke Hall wrote:
How is it that Proxmox were able to release Debian12 packages for Quincy
quite some time ago?
because you can, as always, just (re-)build the package yourself.
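A rough sketch of what re-building the package yourself involves, assuming deb-src entries for a ceph source repository are configured (details vary per release):

  # fetch the Debian source package and its build dependencies
  apt-get source ceph
  sudo apt-get build-dep ceph
  # rebuild the binary packages locally, unsigned
  cd ceph-*/
  dpkg-buildpackage -us -uc -b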
I guess I was just trying to point out that there
How is it that Proxmox were able to release Debian12 packages for Quincy
quite some time ago?
https://download.proxmox.com/debian/ceph-quincy/dists/
My understanding is that they change almost nothing in their packages
and just roll them to fit with their naming schema etc.
On 01/11/2023 07:
Hi,
Since the recent update to 16.2.14-1~bpo11+1 on Debian Bullseye I've
started seeing OSD crashes being registered almost daily across all six
physical machines (6x OSD disks per machine). There's a --block-db for
each OSD on an LV from an NVMe.
If anyone has any idea what might be causing t
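In case it helps anyone dig further, the recorded crashes can be inspected from the cluster itself (sketch; the crash ID is a placeholder):

  # list the crash reports the mgr has collected
  ceph crash ls
  # show full metadata and backtrace for one report
  ceph crash info <crash-id>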
Ditto this query. I can't recall if there's a separate list for Debian
packaging of Ceph or not.
On 22/06/2023 15:25, Christian Peters wrote:
Hi ceph users/maintainers,
I installed ceph quincy on debian bullseye as a ceph client and now want
to update to bookworm.
I see that there is at the
There are definitely at least one or two 16.2.11-1~bpo11+1_amd64.deb
packages missing, actually.
python3-rados, for example.
On 26/01/2023 11:02, Luke Hall wrote:
So it looks as though the python packages are in the pool OK, e.g.
https://download.ceph.com/debian-pacific/pool/main/c/ceph/py
but they aren't being listed as available, so does the repo need its
Release file etc. rebuilding?
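A quick way to check is to compare the pool against the published index, e.g. (the dists path and compression are assumptions about the repo layout):

  # does the bullseye amd64 index actually list python3-rados?
  curl -s https://download.ceph.com/debian-pacific/dists/bullseye/main/binary-amd64/Packages.gz \
    | zcat | grep -A 2 '^Package: python3-rados'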
On 26/01/2023 10:49, Luke Hall wrote:
Hi,
Trying to dist-upgrade an osd server this morning and lots of necessary
packages have been removed!
Start-Date: 2023-01-26 10:04:57
Commandline: apt dist-upgrade
Install: linux-image-5.10.0-21-amd64:amd64 (5.10.162-1, automatic)
Upgrade: librados2:amd64 (16.2.10-1~bpo11+1, 16.2.11-1~bpo11
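For anyone else hitting this, it may be worth simulating the upgrade first and holding the ceph packages (a sketch; the package list is illustrative, not complete):

  # dry run: shows what apt intends to remove without doing it
  apt -s dist-upgrade
  # hold the ceph packages so the dist-upgrade cannot remove them
  sudo apt-mark hold ceph-osd ceph-mon ceph-common librados2 librbd1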
Hello,
Recently when running "rbd ls -l .. " I am seeing the list of all the
existing disks/snapshots, but also this error:
rbd: error opening vm-225-state-_before_upgrade_03may2022: (2)
No such file or directory
...
rbd: listing images failed: (2) No such file or directory
I am quite ce
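A couple of commands that might help narrow it down (the pool name 'rbd' is a guess, substitute your own):

  # the short listing usually still works and shows the image name
  rbd ls --pool rbd
  # query the image named in the error directly
  rbd info rbd/vm-225-state-_before_upgrade_03may2022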
Hi,
Looking to take our Octopus Ceph up to Pacific in the coming days.
All the machines (physical: osd, mon, admin, meta) are running Debian
'buster' and the setup was done originally with ceph-deploy (~2016).
Previously I've been able to upgrade the core OS, keeping the ceph
packages at the sa
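One way to keep the ceph packages at the same version while the OS underneath is upgraded is an apt pin, e.g. something along these lines in /etc/apt/preferences.d/ (a sketch only; the package globs and the 15.2.* version assume Octopus):

  Explanation: keep the existing Octopus (15.2.x) packages during the OS upgrade
  Package: ceph* librados2 librbd1 python3-rados python3-rbd
  Pin: version 15.2.*
  Pin-Priority: 1001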
13:57, Luke Hall wrote:
Hello,
We are looking to replace the 36 aging 4TB HDDs in our 6 OSD machines
with 36x 4TB SATA SSDs.
There's obviously a big range of prices for large SSDs so I would
appreciate any recommendations of manufacturers/models to consider/avoid.
I expect the balance to
Thanks, but replacing these chassis will be something we look to do in
perhaps a year's time. For now we need a stop-gap, and switching out to
SATA SSDs is the easiest option.
On Mon, 22 Nov 2021 at 13:57, Luke Hall <l...@positive-internet.com> wrote:
Hello,
We are looking to replace the 36 aging 4TB HDDs in our 6 OSD machines
with 36x 4TB SATA SSDs.
There's obviously a big range of prices for large SSDs so I would
appreciate any recommendations of manufacturers/models to consider/avoid.
I expect the balance to be between
price/performan
It is best practice to have rulesets that select either hdd or ssd
classes and then assign these rules to different pools.
It is not good practice to just mix these classes in one pool, except
for a transition period like with your project. The performance
difference is just too large.
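For reference, a sketch of what that looks like in practice (rule and pool names are placeholders):

  # one replicated rule per device class
  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  # point each pool at the rule for the class it should live on
  ceph osd pool set mypool_hdd crush_rule replicated_hdd
  ceph osd pool set mypool_fast crush_rule replicated_ssd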
Hi,
We have six OSD machines, each containing 6x 4TB HDDs plus one NVMe for
RocksDB. I need to plan upgrading these machines to all or partial SSDs.
The question I have is:
I know that ceph recognises SSDs as distinct from HDDs from their
physical device ids etc. In a setup with 50/50 HDDs/SS
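For what it's worth, the classes ceph has assigned can be checked with (sketch):

  # device classes known to the crush map
  ceph osd crush class ls
  # per-OSD view including class, size and utilisation
  ceph osd df tree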