[ceph-users] librbd Python asyncio

2023-07-09 Thread Tony Liu
Hi,

Wondering if there is librbd supporting Python asyncio,
or any plan to do that?


Thanks!
Tony
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CEPH orch made osd without WAL

2023-07-09 Thread Jan Marek
Hello,

I've tried to add to CEPH cluster OSD node with a 12 rotational
disks and 1 NVMe. My YAML was this:

service_type: osd
service_id: osd_spec_default
service_name: osd.osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
rotational: 1
  db_devices:
paths:
- /dev/nvme0n1
  filter_logic: AND
  objectstore: bluestore

Now I have 12 OSD with DB on NVMe device, but without WAL. How I
can add WAL to this OSD?

NVMe device still have 128GB free place.

Thanks a lot.

Sincerely
Jan Marek
-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


signature.asc
Description: PGP signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CEPH orch made osd without WAL

2023-07-09 Thread Eugen Block

Hi,

if you don't specify a different device for WAL it will be  
automatically colocated on the same device as the DB. So you're good  
with this configuration.


Regards,
Eugen


Zitat von Jan Marek :


Hello,

I've tried to add to CEPH cluster OSD node with a 12 rotational
disks and 1 NVMe. My YAML was this:

service_type: osd
service_id: osd_spec_default
service_name: osd.osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
rotational: 1
  db_devices:
paths:
- /dev/nvme0n1
  filter_logic: AND
  objectstore: bluestore

Now I have 12 OSD with DB on NVMe device, but without WAL. How I
can add WAL to this OSD?

NVMe device still have 128GB free place.

Thanks a lot.

Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Planning cluster

2023-07-09 Thread Jan Marek
Hello,

I have a cluster, which have this configuration:

osd pool default size = 3
osd pool default min size = 1

I have 5 monitor nodes and 7 OSD nodes.

I have changed a crush map to divide ceph cluster to two
datacenters - in the first one will be a part of cluster with 2
copies of data and in the second one will be part of cluster
with one copy - only emergency.

I still have this cluster in one 

This cluster have a 1 PiB of raw data capacity, thus it is very
expensive add a further 300TB capacity to have 2+2 data redundancy.

Will it works?

If I turn off the 1/3 location, will it be operational? I
believe, it is a better choose, it will. And what if "die" 2/3
location? On this cluster is pool with cephfs - this is a main
part of CEPH.

Many thanks for your notices.

Sincerely
Jan Marek
-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


signature.asc
Description: PGP signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CEPH orch made osd without WAL

2023-07-09 Thread Jan Marek
Hello,

but when I try to list devices config with ceph-volume, I can see
a DB devices, but no WAL devices:

ceph-volume lvm list

== osd.8 ===

  [db]  
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

  block device  
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
  block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
  cephx lockbox secret
  cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
  cluster name  ceph
  crush device class
  db device 
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
  db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
  encrypted 0
  osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
  osd id8
  osdspec affinity  osd_spec_default
  type  db
  vdo   0
  devices   /dev/nvme0n1

  [block]   
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

  block device  
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
  block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
  cephx lockbox secret
  cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
  cluster name  ceph
  crush device class
  db device 
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
  db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
  encrypted 0
  osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
  osd id8
  osdspec affinity  osd_spec_default
  type  block
  vdo   0
  devices   /dev/sdi

(part of listing...)

Sincerely
Jan Marek


Dne Po, čec 10, 2023 at 08:10:58 CEST napsal Eugen Block:
> Hi,
> 
> if you don't specify a different device for WAL it will be automatically
> colocated on the same device as the DB. So you're good with this
> configuration.
> 
> Regards,
> Eugen
> 
> 
> Zitat von Jan Marek :
> 
> > Hello,
> > 
> > I've tried to add to CEPH cluster OSD node with a 12 rotational
> > disks and 1 NVMe. My YAML was this:
> > 
> > service_type: osd
> > service_id: osd_spec_default
> > service_name: osd.osd_spec_default
> > placement:
> >   host_pattern: osd8
> > spec:
> >   block_db_size: 64G
> >   data_devices:
> > rotational: 1
> >   db_devices:
> > paths:
> > - /dev/nvme0n1
> >   filter_logic: AND
> >   objectstore: bluestore
> > 
> > Now I have 12 OSD with DB on NVMe device, but without WAL. How I
> > can add WAL to this OSD?
> > 
> > NVMe device still have 128GB free place.
> > 
> > Thanks a lot.
> > 
> > Sincerely
> > Jan Marek
> > --
> > Ing. Jan Marek
> > University of South Bohemia
> > Academic Computer Centre
> > Phone: +420389032080
> > http://www.gnu.org/philosophy/no-word-attachments.cs.html
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html


signature.asc
Description: PGP signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io