- Forwarded message from Jan Fajerski -
Date: Mon, 28 Oct 2024 17:31:49 +0100
From: Jan Fajerski
To: fos...@lists.fosdem.org
Cc:
Subject: [devroom-managers] Call for participation: Software Defined Storage
devroom at FOSDEM 2025
FOSDEM is a free software event that offers open source and free software communities
a place to meet, share ideas and collaborate. Proposals will be reviewed by a steering committee:
- Niels de Vos (OpenShift Container Storage Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- TBD
Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM21
- If necessary, create a Pentabarf account and activate it.
On Fri, Nov 06, 2020 at 10:15:52AM -, victorh...@yahoo.com wrote:
I'm building a new 4-node Proxmox/Ceph cluster, to hold disk images for our
VMs. (Ceph version is 15.2.5).
Each node has 6 x NVMe SSDs (4TB), and 1 x Optane drive (960GB).
CPU is AMD Rome 7442, so there should be plenty of CPU available.
It sure is: https://docs.ceph.com/en/latest/releases/octopus/#rbd-block-storage
On Tue, Oct 27, 2020 at 10:29:18AM -0400, Adam Boyhan wrote:
That is exactly what I am thinking. My mistake, I should have
specified RBD.
Is snapshots scheduling/retention for RBD already in Octopus as well?
Maybe you are thinking of the rbd variety of this?
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/
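For anyone following along, the snapshot-based mode described in that page is driven
roughly like this (a sketch based on the linked docs; pool, image and interval are
placeholders):

  rbd mirror pool enable mypool image
  rbd mirror image enable mypool/myimage snapshot
  # take mirror-snapshots on a schedule rather than by hand
  rbd mirror snapshot schedule add --pool mypool --image myimage 3h
  rbd mirror snapshot schedule ls --pool mypool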
On Tue, Oct 27, 2020 at 10:19:25AM -0400, Adam Boyhan wrote:
I thought Octopus brought the new snapshot replication feature to the
table? Were there issues with it?
Snapshot scheduling for CephFS has landed in master already; the Octopus backport is
still pending but on the agenda (https://github.com/ceph/ceph/pull/37142).
Replication of those snapshots is being worked on and is planned for Pacific,
but rsync can help you out until then.
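Once the backport is available, the snap_schedule mgr module is driven roughly as
follows (a sketch based on the module as it landed in master; path and intervals are
placeholders):

  ceph mgr module enable snap_schedule
  # snapshot the CephFS root every hour
  ceph fs snap-schedule add / 1h
  # keep 24 hourly and 7 daily snapshots
  ceph fs snap-schedule retention add / h 24
  ceph fs snap-schedule retention add / d 7
  ceph fs snap-schedule status /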
On Tue, Sep 08, 2020 at 07:14:16AM -, kle...@psi-net.si wrote:
I found out that it's already possible to specify a storage path in the OSD service
specification yaml. It works for data_devices, but unfortunately not for
db_devices and wal_devices, at least not in my case.
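For illustration, an OSD spec along those lines might look like the sketch below
(service id, hosts and devices are made up; as noted, paths: currently only seems to
be honoured for data_devices, so db_devices/wal_devices usually get a filter such as
rotational or model instead):

  service_type: osd
  service_id: osd_example
  placement:
    host_pattern: '*'
  data_devices:
    paths:
      - /dev/sda
      - /dev/sdb
  db_devices:
    rotational: 0

and would be applied with something like: ceph orch apply -i osd_spec.yml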
Aside from the question of whether paths are honoured there, the rotational filter maps
to /sys/block/<device>/queue/rotational.
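The flag can be checked directly if in doubt; the device name here is just an example:

  cat /sys/block/nvme0n1/queue/rotational   # prints 0 for SSD/NVMe, 1 for rotational media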
Thanks!
Tony
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
--
Jan Fajerski
Senior Software Engineer Enterprise Storage
SUSE Software Solutions Germany GmbH
node while I'm up all night recovering.)
>>
>>If you could tell me where to look I'd gladly read some code and see
>>if I can find anything that way. Or if there's any sort of design
>>document describing the deep internals I'd be glad to scan it to
with debug output enabled:
CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore ...
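Purely as an illustration, with made-up device names a full invocation might look like:

  CEPH_VOLUME_DEBUG=true ceph-volume --cluster ceph lvm batch --bluestore /dev/nvme0n1 /dev/nvme1n1

The debug log itself normally lands in /var/log/ceph/ceph-volume.log.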
Ideally you could also open a bug report here:
https://tracker.ceph.com/projects/ceph-volume/issues/new
Thanks!
>
>Thanks.
>
>-Dave
On Fri, Dec 13, 2019 at 11:49:50PM +0100, Oscar Segarra wrote:
> Hi,
> I have recently started working with the Ceph Nautilus release and I have
> realized that you have to start working with LVM to create OSDs instead
> of the "old fashioned" ceph-disk.
> In terms of performance and best practices
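For what it's worth, the ceph-volume equivalent of the old ceph-disk workflow is roughly
the following (a sketch; the device path is a placeholder):

  # create a bluestore OSD on a raw device; ceph-volume creates the VG/LV for you
  ceph-volume lvm create --bluestore --data /dev/sdb
  # or point --data at an existing logical volume, e.g. --data my_vg/my_lv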
"name": "restful",
> "can_run": true,
> "error_string": ""
> },
> {
> "name": "selftest",
> "can_run": true,
>
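Output of this shape usually comes from the mgr module listing, i.e. (assuming that is
what was pasted above):

  ceph mgr module ls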
opening a bug report on
https://tracker.ceph.com/projects/ceph-volume.
I have found another situation where a roll-back is not working as it should, though
not with as much impact as this.
>
>Regards,
>
>Matthew
>