[ceph-users] Re: Issue with ceph-ansible installation, No such file or directory

2020-06-30 Thread Mason-Williams, Gabryel (DLSLtd,RAL,LSCI)
Hello,

I have added how I solved this problem to issue 
https://github.com/ceph/ceph-ansible/issues/4955. In short, it requires you to set 
PATH inside the environment keyword in all relevant files to something like:

environment:
  CEPH_VOLUME_DEBUG: 1
  CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else None }}"
  CEPH_CONTAINER_BINARY: "{{ container_binary }}"
  PYTHONIOENCODING: utf-8
  PATH: "{{ ansible_env.PATH }}:/Path/To/Ceph-Volume"

If you are unsure of what I have done, please check the Ansible docs: 
https://docs.ansible.com/ansible/latest/reference_appendices/faq.html
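
To make the end result concrete, here is a hedged sketch of the full task after the 
change (it is simply the task quoted in the report below plus the extra PATH key; 
/usr/sbin is only an assumption for where ceph-volume lives, and 
`ansible <osd-group> -m shell -a 'command -v ceph-volume'` will tell you the real 
directory on your hosts):

- name: look up for ceph-volume rejected devices
  ceph_volume:
    cluster: "{{ cluster }}"
    action: "inventory"
  register: rejected_devices
  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else None }}"
    CEPH_CONTAINER_BINARY: "{{ container_binary }}"
    PYTHONIOENCODING: utf-8
    # assumed location; replace with the directory that actually contains ceph-volume
    PATH: "{{ ansible_env.PATH }}:/usr/sbin"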

Kind regards

Gabryel

From: sachin.ni...@gmail.com 
Sent: 29 June 2020 19:24
To: ceph-users@ceph.io 
Subject: [ceph-users] Issue with ceph-ansible installation, No such file or 
directory

Error occurs here:

- name: look up for ceph-volume rejected devices
  ceph_volume:
    cluster: "{{ cluster }}"
    action: "inventory"
  register: rejected_devices
  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else None }}"
    CEPH_CONTAINER_BINARY: "{{ container_binary }}"
    PYTHONIOENCODING: utf-8

With Error:

fatal: [18.225.11.17]: FAILED! => changed=false
  cmd: ceph-volume inventory --format=json
  msg: '[Errno 2] No such file or directory'
  rc: 2

I think Ansible is not able to find the ceph-volume command. How can I fix 
this?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bench on specific OSD

2020-06-30 Thread vitalif
Create a pool with size=min_size=1 and use ceph-gobench: 
https://github.com/rumanzo/ceph-gobench
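
If it helps, a rough sketch of setting up such a throwaway pool (the pool name and 
PG count below are arbitrary choices, not something from this thread):

# single-replica pool used only for benchmarking; never keep real data in it
ceph osd pool create bench 64 64
ceph osd pool set bench size 1
ceph osd pool set bench min_size 1
# point ceph-gobench at this pool and the OSDs you want to exercise (see its README
# for the exact flags), then clean up:
ceph osd pool delete bench bench --yes-i-really-really-mean-it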



Hi all.

Is there any way to completely health-check one OSD host or instance?
For example, run rados bench just on that OSD, or do some checks for the disk and
the front and back network?

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Suspicious memory leakage

2020-06-30 Thread XuYun
Hi,

We’ve observed some suspicious memory leak problems with the MGR since upgrading to 
Nautilus. 
Yesterday I upgraded our cluster to the latest 14.2.10 and this problem still seems 
reproducible. According to the monitoring chart (memory usage of the 
active mgr node), the memory consumption started to increase noticeably faster 
at about 21:40. As of now the resident memory of the mgr is about 8.3 GB 
according to top. I also checked the mgr log and found that from 21:38:40 a 
message “client.0 ms_handle_reset on v2:10.3.1.3:6800/6” was produced every 
second:


2020-06-29 21:38:24.173 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9979: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 8.1 MiB/s rd, 21 MiB/s wr, 673 op/s
2020-06-29 21:38:26.180 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9980: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 9.6 MiB/s rd, 26 MiB/s wr, 764 op/s
2020-06-29 21:38:28.183 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9981: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 9.3 MiB/s rd, 23 MiB/s wr, 667 op/s
2020-06-29 21:38:30.186 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9982: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 9.3 MiB/s rd, 22 MiB/s wr, 661 op/s
2020-06-29 21:38:32.191 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9983: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 8.1 MiB/s rd, 21 MiB/s wr, 683 op/s
2020-06-29 21:38:34.195 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9984: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 4.0 MiB/s rd, 17 MiB/s wr, 670 op/s
2020-06-29 21:38:36.200 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9985: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 2.4 MiB/s rd, 15 MiB/s wr, 755 op/s
2020-06-29 21:38:38.203 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9986: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 12 MiB/s wr, 668 op/s
2020-06-29 21:38:40.207 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9987: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 1.1 MiB/s rd, 12 MiB/s wr, 681 op/s
2020-06-29 21:38:40.887 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:41.887 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:42.213 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9988: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 13 MiB/s wr, 735 op/s
2020-06-29 21:38:42.887 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:43.887 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:44.216 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9989: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 982 KiB/s rd, 12 MiB/s wr, 687 op/s
2020-06-29 21:38:44.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:45.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:46.222 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9990: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 17 MiB/s wr, 789 op/s
2020-06-29 21:38:46.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:47.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:48.225 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9991: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 15 MiB/s wr, 684 op/s
2020-06-29 21:38:48.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:49.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:50.228 7f2ecab7a700  0 log_channel(cluster) log [DBG] : pgmap 
v9992: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 TiB data, 
169 TiB used, 153 TiB / 322 TiB avail; 940 KiB/s rd, 16 MiB/s wr, 674 op/s
2020-06-29 21:38:50.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:51.888 7f2edbec3700  0 client.0 ms_handle_reset on 
v2:10.3.1.3:6800/6
2020-06-29 21:38:52.235 7f2ecab7a700  0 log_channel(cluster) log [DBG]

[ceph-users] Re: Suspicious memory leakage

2020-06-30 Thread XuYun
Seems the attached log file is missing:
https://pastebin.com/wAULN20N 


> On 30 Jun 2020, at 1:26 PM, XuYun  wrote:
> 
> Hi,
> 
> We’ve observed some suspicious memory leak problems of MGR since upgraded to 
> Nautilus. 
> Yesterday I upgrade our cluster to the latest 14.2.10 and this problem seems 
> still reproducible. According to the monitoring chart (memory usage of the 
> active mgr node), the memory consumption started to increase with higher 
> velocity at about 21:40pm. Up to now the reserved memory of mgr is about 8.3G 
> according to top. I also checked the log of mgr and found that from 21:38:40, 
> a message "client.0 ms_handle_reset on v2:10.3.1.3:6800/6” was produced every 
> second:
> 
> 
> 2020-06-29 21:38:24.173 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9979: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 8.1 MiB/s rd, 21 MiB/s wr, 
> 673 op/s
> 2020-06-29 21:38:26.180 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9980: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 9.6 MiB/s rd, 26 MiB/s wr, 
> 764 op/s
> 2020-06-29 21:38:28.183 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9981: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 9.3 MiB/s rd, 23 MiB/s wr, 
> 667 op/s
> 2020-06-29 21:38:30.186 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9982: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 9.3 MiB/s rd, 22 MiB/s wr, 
> 661 op/s
> 2020-06-29 21:38:32.191 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9983: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 8.1 MiB/s rd, 21 MiB/s wr, 
> 683 op/s
> 2020-06-29 21:38:34.195 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9984: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 4.0 MiB/s rd, 17 MiB/s wr, 
> 670 op/s
> 2020-06-29 21:38:36.200 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9985: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 2.4 MiB/s rd, 15 MiB/s wr, 
> 755 op/s
> 2020-06-29 21:38:38.203 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9986: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 12 MiB/s wr, 
> 668 op/s
> 2020-06-29 21:38:40.207 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9987: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 1.1 MiB/s rd, 12 MiB/s wr, 
> 681 op/s
> 2020-06-29 21:38:40.887 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:41.887 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:42.213 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9988: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 13 MiB/s wr, 
> 735 op/s
> 2020-06-29 21:38:42.887 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:43.887 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:44.216 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9989: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 982 KiB/s rd, 12 MiB/s wr, 
> 687 op/s
> 2020-06-29 21:38:44.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:45.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:46.222 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9990: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 17 MiB/s wr, 
> 789 op/s
> 2020-06-29 21:38:46.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:47.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:48.225 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9991: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB data, 169 TiB used, 153 TiB / 322 TiB avail; 1.2 MiB/s rd, 15 MiB/s wr, 
> 684 op/s
> 2020-06-29 21:38:48.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:49.888 7f2edbec3700  0 client.0 ms_handle_reset on 
> v2:10.3.1.3:6800/6
> 2020-06-29 21:38:50.228 7f2ecab7a700  0 log_channel(cluster) log [DBG] : 
> pgmap v9992: 1824 pgs: 3 active+clean+scrubbing+deep, 1821 active+clean; 54 
> TiB 

[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-30 Thread Eugen Block
Don't forget to set the correct LV tags for the new db device as  
mentioned in [1] and [2].



[1]  
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6OHVTNXH5SLI4ABC75VVP7J2DT7X4FZA/

[2] https://tracker.ceph.com/issues/42928
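
As a hedged illustration of what setting those tags involves (the VG/LV names below 
are made up; the authoritative list of tags and values is whatever 
`lvs -o lv_name,lv_tags` shows for an OSD that ceph-volume created with a db device 
from the start):

# inspect the tags on a reference OSD created by ceph-volume
lvs -o lv_name,lv_tags
# then add the db-related tags to the migrated OSD's data LV, e.g.
lvchange --addtag ceph.db_device=/dev/bluefs_db/db-osd1 ceph-block-1/osd-block-1
lvchange --addtag ceph.db_uuid=$(lvs --noheadings -o lv_uuid bluefs_db/db-osd1 | tr -d ' ') ceph-block-1/osd-block-1
# the new db LV needs its own tag set as well; mirror what the reference OSD shows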


Quoting Lindsay Mathieson :


On 29/06/2020 11:44 pm, Stefan Priebe - Profihost AG wrote:

You need to use:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD}
bluefs-bdev-new-db --dev-target /dev/bluesfs_db/db-osd${OSD}

and

ceph-bluestore-tool --path dev/osd1/ --devs-source dev/osd1/block
--dev-target dev/osd1/block.db bluefs-bdev-migrate

Those are not tested and are just what I remembered, but they should point you
in the right direction.

greets,
Stefan

Thanks Stefan, that's great. Best I test with a single OSD first :)  
I presume it needs to be down for this. Going to have to wait a bit  
while the cluster rebalances after a PG reduction though.



Might start a separate thread on lvmcache versus WAL/DB on SSD for  
VM usage, if that hasn't been done to death...


--
Lindsay
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread EDH - Manuel Rios
You can ignore the rgw.none details; they don't make sense today, from our experience.

Still don't know why the devs don't clean up buckets with those rgw.none stats...

Some of our buckets have it, other new ones don't.


-----Original Message-----
From: Janne Johansson  
Sent: Tuesday, 30 June 2020 8:40
To: Liam Monahan 
CC: ceph-users 
Subject: [ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus 
upgrade

On Mon, 29 June 2020 at 17:27, Liam Monahan  wrote:

>
> For example, here is a bucket that all of a sudden reports that it has
> 18446744073709551615 objects!  The actual count should be around 20,000.
>
> "rgw.none": {
> "size": 0,
> "size_actual": 0,
> "size_utilized": 0,
> "size_kb": 0,
> "size_kb_actual": 0,
> "size_kb_utilized": 0,
> "num_objects": 18446744073709551615
> },
>

That number is a small negative 64bit signed value, printed as an unsigned
64bit integer.
Seems like the counter underflowed.

2^64 = 18446744073709551616
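
A quick way to convince yourself of that (plain 64-bit shell arithmetic, nothing
Ceph-specific; assumes a 64-bit bash):

$ printf '%u\n' -1
18446744073709551615
$ printf '%d\n' $(( 0xFFFFFFFFFFFFFFFF ))
-1

i.e. the stored counter is almost certainly -1, which reads back as 2^64 - 1 when
treated as unsigned.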


-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-30 Thread Lindsay Mathieson

On 30/06/2020 8:17 pm, Eugen Block wrote:
Don't forget to set the correct LV tags for the new db device as 
mentioned in [1] and [2].



[1] 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6OHVTNXH5SLI4ABC75VVP7J2DT7X4FZA/ 


[2] https://tracker.ceph.com/issues/42928



Thanks, and *ouch* - that's a lot of manual tweaking with things I don't 
really understand if stuff goes wrong, and it's still in testing/experimental?



I might just settle for slowly recreating the OSDs - seems safer. It's 
not like I'm in a hurry or will be traveling anywhere soon :)



Thanks!

--
Lindsay
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Debian install

2020-06-30 Thread Rafael Quaglio
Thanks for your reply, Anastasios.

I was waiting for an answer.

 

My /etc/apt/sources.list.d/ceph.list content is:

deb https://download.ceph.com/debian-nautilus/ buster main

 

Even if I do “apt-get update”, the packages stay the same.
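
One thing worth checking (an assumption on my part, not something confirmed in this
thread): Debian buster's own archive ships ceph 12.2.11, and if it outranks
download.ceph.com in apt's priorities you will keep getting Luminous no matter how
often you update. Roughly:

# which repository wins for the ceph packages?
apt-cache policy ceph-common

# if the Debian archive has the higher priority, pin download.ceph.com above it
# (a priority above 1000 also allows replacing the already-installed Debian build)
cat > /etc/apt/preferences.d/ceph <<'EOF'
Package: *
Pin: origin "download.ceph.com"
Pin-Priority: 1001
EOF

apt-get update
apt-get install ceph-common   # or re-run: ceph-deploy install --release nautilus node1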

 

The Ceph client (CephFS mount) is working well, but I can't deploy new OSDs.

 

The error that I posted occurs when I do: “ceph-deploy osd create --data 
/dev/sdb node1”

 

 

I appreciate any help.

 

 

Rafael.  

 

From: Anastasios Dados  
Sent: Monday, 29 June 2020 20:01
To: Rafael Quaglio ; ceph-users@ceph.io
Subject: Re: [ceph-users] Debian install

 

Hello Rafael,

Can you check the apt sources list that exists on your ceph-deploy node? Maybe 
you have put the Luminous Debian package version there?

 

Regards,

Anastasios

 

 

On Mon, 2020-06-29 at 06:59 -0300, Rafael Quaglio wrote:

Hi,

 

We have already installed a new Debian (10.4) server and I need to put it in a

Ceph cluster. 

 

When I execute the command to install ceph on this node:

 

 

 

ceph-deploy install --release nautilus node1

 

 

 

It starts to install version 12.x on my node...

 

 

 

(...)

 

[serifos][DEBUG ] After this operation, 183 MB of additional disk space will

be used.

 

[serifos][DEBUG ] Selecting previously unselected package python-cephfs.

 

(Reading database ... 30440 files and directories currently installed.)

 

[serifos][DEBUG ] Preparing to unpack

.../python-cephfs_12.2.11+dfsg1-2.1+b1_amd64.deb ...

 

[serifos][DEBUG ] Unpacking python-cephfs (12.2.11+dfsg1-2.1+b1) ...

 

[serifos][DEBUG ] Selecting previously unselected package ceph-common.

 

[serifos][DEBUG ] Preparing to unpack

.../ceph-common_12.2.11+dfsg1-2.1+b1_amd64.deb ...

 

[serifos][DEBUG ] Unpacking ceph-common (12.2.11+dfsg1-2.1+b1) ...

 

(...)

 

 

 

How do I upgrade these packages?

 

Even though it installed packages of this version, the installation

completes without errors.

 

 

 

The question is due to an error message that I'm receiving

when deploying a new OSD.

 

 

 

ceph-deploy osd create --data /dev/sdb node1

 

 

 

 

 

At this point:

 

 

 

[ceph_deploy.osd][INFO  ] Distro info: debian 10.4 buster

 

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

 

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

 

[node1][DEBUG ] find the location of an executable

 

[node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph

lvm create --bluestore --data /dev/sdb

 

[node1][WARNIN] -->  RuntimeError: Unable to create a new OSD id

 

[node1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key

 

[node1][DEBUG ] Running command: /bin/ceph --cluster ceph --name

client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i -

osd new 76da6c51-8385-4ffc-9a8e-0dfc11e31feb

 

[node1][DEBUG ]  stderr:

/build/ceph-qtARip/ceph-12.2.11+dfsg1/src/mon/MonMap.cc: In function 'void

MonMap::sanitize_mons(std::map,

entity_addr_t>&)' thread 7f2bc7fff700 time 2020-06-29 06:56:17.331350

 

[node1][DEBUG ]  stderr:

/build/ceph-qtARip/ceph-12.2.11+dfsg1/src/mon/MonMap.cc: 77: FAILED

assert(mon_info[p.first].public_addr == p.second)

 

[node1][DEBUG ]  stderr: ceph version 12.2.11

(26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)

 

[node1][DEBUG ]  stderr: 1: (ceph::__ceph_assert_fail(char const*, char

const*, int, char const*)+0xf5) [0x7f2bdaff5f75]

 

[node1][DEBUG ]  stderr: 2:

(MonMap::sanitize_mons(std::map, std::allocator >, entity_addr_t,

std::less,

std::allocator > >,

std::allocator, std::allocator > const, entity_addr_t> >

&)+0x568) [0x7f2bdb050038]

 

[node1][DEBUG ]  stderr: 3:

(MonMap::decode(ceph::buffer::list::iterator&)+0x4da) [0x7f2bdb05500a]

 

[node1][DEBUG ]  stderr: 4: (MonClient::handle_monmap(MMonMap*)+0x216)

[0x7f2bdb042a06]

 

[node1][DEBUG ]  stderr: 5: (MonClient::ms_dispatch(Message*)+0x4ab)

[0x7f2bdb04729b]

 

[node1][DEBUG ]  stderr: 6: (DispatchQueue::entry()+0xeba) [0x7f2bdb06bf5a]

 

[node1][DEBUG ]  stderr: 7: (DispatchQueue::DispatchThread::entry()+0xd)

[0x7f2bdb1576fd]

 

[node1][DEBUG ]  stderr: 8: (()+0x7fa3) [0x7f2be499dfa3]

 

[node1][DEBUG ]  stderr: 9: (clone()+0x3f) [0x7f2be45234cf]

 

[node1][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS

` is needed to interpret this.

 

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

 

[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume

--cluster ceph lvm create --bluestore --data /dev/sdb

 

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

 

 

 

 

 

I think this error occurs because of the wrong package that was

installed.

 

 

 

 

 

Thanks,

 

Rafael

 

___

ceph-users mailing list -- ceph-users@ceph.io  

To unsu

[ceph-users] Multisite setup with and without replicated region

2020-06-30 Thread Szabo, Istvan (Agoda)
Hi,

Is it possible to create a multisite cluster with multiple zones?
I'd like to have a zone/region which is replicated across DCs, but I want to have 
one without replication as well.

I would prefer to use an earlier version of Ceph, not Octopus yet.

Thank you


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Best practice for object store design

2020-06-30 Thread Szabo, Istvan (Agoda)
Hi,

What is, let's say, the best practice for placing the haproxy, RGW and MON services in a 
new cluster?
We would like to have a new setup, but are unsure how to create the best setup in 
front of the OSD nodes.

Let's say we have 3 MONs, as Ceph suggests; where should I put haproxy and 
radosgw?
Should they be VMs or physical?

Thank you for the ideas.


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
Thanks, both.  That’s a useful observation.  I wonder what I can try to get 
accurate user stats.  All of our users have quotas, so wrong user stats 
actually stop them from writing data.  Since stats are only updated on write, I 
have some users who are inactive and whose stats are correct, and other 
users who have been actively writing.  I see users whose reported usage is up to 
55 times the actual size.  I looped over the buckets manually via the Admin Ops 
API, pulled the stats for all of the user’s buckets, summed them, and 
compared that to the output of “radosgw-admin user stats”.
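
For reference, a rough CLI-only equivalent of that comparison (a sketch only; the jq 
filter assumes the usual `radosgw-admin bucket stats` JSON layout and "someuser" is a 
placeholder):

# sum size_actual over all of the user's buckets...
radosgw-admin bucket stats --uid=someuser \
    | jq '[.[].usage["rgw.main"].size_actual // 0] | add'
# ...and compare with what the user-level accounting believes
radosgw-admin user stats --uid=someuser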

I would guess that underflowing counters could be one explanation, but there 
may be other things going wrong in the stats aggregation...

Thanks,
Liam

> On Jun 30, 2020, at 6:36 AM, EDH - Manuel Rios  
> wrote:
> 
> You can ignore rgw.none details, it dont make sense today from our experience
> 
> Still dont know why dev dont cleanup bucket with those rgw.none stats...
> 
> Some of our buckets got it others new ones no.
> 
> 
> -----Original Message-----
> From: Janne Johansson  
> Sent: Tuesday, 30 June 2020 8:40
> To: Liam Monahan 
> CC: ceph-users 
> Subject: [ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus 
> upgrade
> 
> On Mon, 29 June 2020 at 17:27, Liam Monahan  wrote:
> 
>> 
>> For example, here is a bucket that all of a sudden reports that it has
>> 18446744073709551615 objects!  The actual count should be around 20,000.
>> 
>>"rgw.none": {
>>"size": 0,
>>"size_actual": 0,
>>"size_utilized": 0,
>>"size_kb": 0,
>>"size_kb_actual": 0,
>>"size_kb_utilized": 0,
>>"num_objects": 18446744073709551615
>>},
>> 
> 
> That number is a small negative 64bit signed value, printed as an unsigned
> 64bit integer.
> Seems like the counter underflowed.
> 
> 2^64 = 18446744073709551616
> 
> 
> -- 
> May the most significant bit of your life be positive.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
Reposting here since it seemed to start a new thread.

Thanks, both.  That’s a useful observation.  I wonder what I can try to get 
accurate user stats.  All of our users have quotas, so wrong user stats 
actually stop them from writing data.  Since stats are only updated on write, I 
have some users who are inactive and whose stats are correct, and other 
users who have been actively writing.  I see users whose reported usage is up to 
55 times the actual size.  I looped over the buckets manually via the Admin Ops 
API, pulled the stats for all of the user’s buckets, summed them, and 
compared that to the output of “radosgw-admin user stats”.

I would guess that underflowing counters could be one explanation, but there 
may be other things going wrong in the stats aggregation...

Thanks,
Liam
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Anthony D'Atri


> That is an interesting point. We are using 12 on 1 nvme journal for our 
> Filestore nodes (which seems to work ok). The workload for wal + db is 
> different so that could be a factor. However when I've looked at the IO 
> metrics for the nvme it seems to be only lightly loaded, so does not appear 
> to be the issue (at 1st sight anyway).

How are you determining “lightly loaded”?  Not iostat %util, I hope.


> Also the particular nvme model could be a factor (in the 6 vs 12 area) -  
> what type are you using?

I suspect that SSD OSDs would favor the 6 end of the scale; 12 doesn’t seem 
wholly unreasonable for HDD OSDs.  In the end though, add the hassle factor and 
the cost of the NVMe device (when provisioned with enough capacity) and maybe 
just using SATA SSDs makes sense.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] removing the private cluster network

2020-06-30 Thread Magnus HAGDORN
Hi there,
we currently have a ceph cluster with 6 nodes and a public and cluster
network. Each node has two bonded 2x1GE network interfaces, one for the
public and one for the cluster network. We are planning to upgrade the
networking to 10GE. Given the modest size of our cluster we would like
to shut down the cluster network. The new 10GE switches will be on the
public netowkr. What's the best way achieving this while the cluster is
running.
Regards
magnus
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] v15.2.4 Octopus released

2020-06-30 Thread David Galloway
We're happy to announce the fourth bugfix release in the Octopus series.
In addition to a security fix in RGW, this release brings a range of fixes
across all components. We recommend that all Octopus users upgrade to this
release. For detailed release notes with links & changelog, please
refer to the official blog entry at 
https://ceph.io/releases/v15-2-4-octopus-released

Notable Changes
---
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
  (William Bowling, Adam Mohammed, Casey Bodley)

* Cephadm: There were a lot of small usability improvements and bug fixes:
  * Grafana when deployed by Cephadm now binds to all network interfaces.
  * `cephadm check-host` now prints all detected problems at once.
  * Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard
  * `cephadm adopt` now supports adopting an Alertmanager
  * `ceph orch ps` now supports filtering by service name
  * `ceph orch host ls` now marks hosts as offline, if they are not
accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs, that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: `ceph orch ls --export` now returns all service specifications in
  yaml representation that is consumable by `ceph orch apply`. In addition,
  the commands `orch ps` and `orch ls` now support `--format yaml` and
  `--format json-pretty`.

* Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a 
preview of
  the OSD specification before deploying OSDs. This makes it possible to
  verify that the specification is correct, before applying it.

* RGW: The `radosgw-admin` sub-commands dealing with orphans --
  `radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
  `radosgw-admin orphans list-jobs` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster.  They have been
  replaced by a tool, currently considered experimental,
  `rgw-orphan-list`.

* RBD: The name of the rbd pool object that is used to store
  rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using
  `rbd trash purge schedule` functionality and have per pool or namespace
  schedules configured should copy "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

rados -p <pool> [-N namespace] cp rbd_trash_trash_purge_schedule 
rbd_trash_purge_schedule
rados -p <pool> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c

-- 
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
I downgraded a single RGW node in the cluster back to Nautilus (14.2.10), ran 
"radosgw-admin user stats --uid= --sync-stats", and the usage 
came out to the correct values.  It seems like there is a potential bug 
in how Octopus is calculating its user stats.

~ Liam
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Nigel Williams
On Wed, 1 Jul 2020 at 01:47, Anthony D'Atri  wrote:
> > However when I've looked at the IO metrics for the nvme it seems to be only 
> > lightly loaded, so does not appear to be the issue (at 1st sight anyway).
>
> How are you determining “lightly loaded”.  Not iostat %util I hope.

For reference, this explains why iostat for SSDs can be misleading:

https://brooker.co.za/blog/2014/07/04/iostat-pct.html
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Mark Kirkwood

Increasing the memory target appears to have solved the issue.
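
For anyone who lands here later, a sketch of how those two settings can be applied on 
Luminous (the values are the ones from this thread, 16 GiB = 17179869184 bytes; some 
options may only take full effect after an OSD restart):

# ceph.conf, [osd] section on the Bluestore node:
#   osd_memory_target = 17179869184
#   debug_rocksdb = 1/5
# or injected at runtime:
ceph tell osd.* injectargs '--osd_memory_target=17179869184 --debug_rocksdb=1/5'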

On 26/06/20 11:47 am, Mark Kirkwood wrote:

Progress update:

- tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests

- will increase osd_memory_target from 4 to 16G, and observe

On 24/06/20 1:30 pm, Mark Kirkwood wrote:

Hi,

We have recently added a new storage node to our Luminous (12.2.13) 
cluster. The prev nodes are all setup as Filestore: e.g 12 osds on 
hdd (Seagate Constellations) with one NVMe (Intel P4600) journal. 
With the new guy we decided to introduce Bluestore so it is 
configured as: (same HW) 12 osd with data on hdd and db + wal on one 
NVMe.


We noticed there are periodic slow requests logged, and the 
implicated osds are the Bluestore ones 98% of the time! This suggests 
that we need to tweak our Bluestore settings in some way. 
Investigating I'm seeing:


- A great deal of rocksdb debug info in the logs - perhaps we should 
tone that down? (debug_rocksdb 4/5 -> 1/5)


- We look to have the default cache settings 
(bluestore_cache_size_hdd|ssd etc), we have memory to increase these


- There are some buffered io settings (bluefs_buffered_io, 
bluestore_default_buffered_write), set to (default) false. Are these 
safe (or useful) to change?


- We have default rocksdb options, should some of these be changed? 
(bluestore_rocksdb_options, in particular 
max_background_compactions=2 - should we have less, or more?)


Also, anything else we should be looking at?

regards

Mark

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: removing the private cluster network

2020-06-30 Thread Frank Schilder
There are plenty of threads with info on this, see, for example, 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Y23SQN357RYMFTBKJ2VKIQRR43KURWZJ/#4EYQVVJ7IOSEBPZGMPPRZAZ5XBUHHF5F
 .
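
Very roughly, what those threads boil down to (an untested outline; adjust to whether 
your configuration lives in ceph.conf or in the mon config store):

# 1. drop "cluster network = ..." from ceph.conf on every node
#    (or, with a centralized config: ceph config rm global cluster_network)
# 2. restart the OSDs one node or failure domain at a time so they re-bind their
#    back-side traffic to the public network:
systemctl restart ceph-osd.target
# 3. confirm no OSD still advertises an address on the old cluster subnet:
ceph osd dump | grep 'osd\.'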

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Magnus HAGDORN 
Sent: 30 June 2020 17:59:57
To: ceph-users@ceph.io
Subject: [ceph-users] removing the private cluster network

Hi there,
we currently have a ceph cluster with 6 nodes and a public and cluster
network. Each node has two bonded 2x1GE network interfaces, one for the
public and one for the cluster network. We are planning to upgrade the
networking to 10GE. Given the modest size of our cluster we would like
to shut down the cluster network. The new 10GE switches will be on the
public netowkr. What's the best way achieving this while the cluster is
running.
Regards
magnus
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Sasha Litvak
David,

Download link points to 14.2.10 tarball.

On Tue, Jun 30, 2020, 3:38 PM David Galloway  wrote:

> We're happy to announce the fourth bugfix release in the Octopus series.
> In addition to a security fix in RGW, this release brings a range of fixes
> across all components. We recommend that all Octopus users upgrade to this
> release. For a detailed release notes with links & changelog please
> refer to the official blog entry at
> https://ceph.io/releases/v15-2-4-octopus-released
>
> Notable Changes
> ---
> * CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's
> ExposeHeader
>   (William Bowling, Adam Mohammed, Casey Bodley)
>
> * Cephadm: There were a lot of small usability improvements and bug fixes:
>   * Grafana when deployed by Cephadm now binds to all network interfaces.
>   * `cephadm check-host` now prints all detected problems at once.
>   * Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
> when generating an SSL certificate for Grafana.
>   * The Alertmanager is now correctly pointed to the Ceph Dashboard
>   * `cephadm adopt` now supports adopting an Alertmanager
>   * `ceph orch ps` now supports filtering by service name
>   * `ceph orch host ls` now marks hosts as offline, if they are not
> accessible.
>
> * Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS
> with
>   a service id of mynfs, that will use the RADOS pool nfs-ganesha and
> namespace
>   nfs-ns::
>
> ceph orch apply nfs mynfs nfs-ganesha nfs-ns
>
> * Cephadm: `ceph orch ls --export` now returns all service specifications
> in
>   yaml representation that is consumable by `ceph orch apply`. In addition,
>   the commands `orch ps` and `orch ls` now support `--format yaml` and
>   `--format json-pretty`.
>
> * Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a
> preview of
>   the OSD specification before deploying OSDs. This makes it possible to
>   verify that the specification is correct, before applying it.
>
> * RGW: The `radosgw-admin` sub-commands dealing with orphans --
>   `radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
>   `radosgw-admin orphans list-jobs` -- have been deprecated. They have
>   not been actively maintained and they store intermediate results on
>   the cluster, which could fill a nearly-full cluster.  They have been
>   replaced by a tool, currently considered experimental,
>   `rgw-orphan-list`.
>
> * RBD: The name of the rbd pool object that is used to store
>   rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
>   to "rbd_trash_purge_schedule". Users that have already started using
>   `rbd trash purge schedule` functionality and have per pool or namespace
>   schedules configured should copy "rbd_trash_trash_purge_schedule"
>   object to "rbd_trash_purge_schedule" before the upgrade and remove
>   "rbd_trash_purge_schedule" using the following commands in every RBD
>   pool and namespace where a trash purge schedule was previously
>   configured::
>
> rados -p  [-N namespace] cp rbd_trash_trash_purge_schedule
> rbd_trash_purge_schedule
> rados -p  [-N namespace] rm rbd_trash_trash_purge_schedule
>
>   or use any other convenient way to restore the schedule after the
>   upgrade.
>
> Getting Ceph
> 
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
>
> --
> David Galloway
> Systems Administrator, RDU
> Ceph Engineering
> IRC: dgalloway
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Dan Mick
True.  That said, the blog post points to 
http://download.ceph.com/tarballs/ where all the tarballs, including 
15.2.4, live.


 On 6/30/2020 5:57 PM, Sasha Litvak wrote:

David,

Download link points to 14.2.10 tarball.

On Tue, Jun 30, 2020, 3:38 PM David Galloway  wrote:


We're happy to announce the fourth bugfix release in the Octopus series.
In addition to a security fix in RGW, this release brings a range of fixes
across all components. We recommend that all Octopus users upgrade to this
release. For a detailed release notes with links & changelog please
refer to the official blog entry at
https://ceph.io/releases/v15-2-4-octopus-released

Notable Changes
---
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's
ExposeHeader
   (William Bowling, Adam Mohammed, Casey Bodley)

* Cephadm: There were a lot of small usability improvements and bug fixes:
   * Grafana when deployed by Cephadm now binds to all network interfaces.
   * `cephadm check-host` now prints all detected problems at once.
   * Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
 when generating an SSL certificate for Grafana.
   * The Alertmanager is now correctly pointed to the Ceph Dashboard
   * `cephadm adopt` now supports adopting an Alertmanager
   * `ceph orch ps` now supports filtering by service name
   * `ceph orch host ls` now marks hosts as offline, if they are not
 accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS
with
   a service id of mynfs, that will use the RADOS pool nfs-ganesha and
namespace
   nfs-ns::

 ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: `ceph orch ls --export` now returns all service specifications
in
   yaml representation that is consumable by `ceph orch apply`. In addition,
   the commands `orch ps` and `orch ls` now support `--format yaml` and
   `--format json-pretty`.

* Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a
preview of
   the OSD specification before deploying OSDs. This makes it possible to
   verify that the specification is correct, before applying it.

* RGW: The `radosgw-admin` sub-commands dealing with orphans --
   `radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
   `radosgw-admin orphans list-jobs` -- have been deprecated. They have
   not been actively maintained and they store intermediate results on
   the cluster, which could fill a nearly-full cluster.  They have been
   replaced by a tool, currently considered experimental,
   `rgw-orphan-list`.

* RBD: The name of the rbd pool object that is used to store
   rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
   to "rbd_trash_purge_schedule". Users that have already started using
   `rbd trash purge schedule` functionality and have per pool or namespace
   schedules configured should copy "rbd_trash_trash_purge_schedule"
   object to "rbd_trash_purge_schedule" before the upgrade and remove
   "rbd_trash_purge_schedule" using the following commands in every RBD
   pool and namespace where a trash purge schedule was previously
   configured::

 rados -p  [-N namespace] cp rbd_trash_trash_purge_schedule
rbd_trash_purge_schedule
 rados -p  [-N namespace] rm rbd_trash_trash_purge_schedule

   or use any other convenient way to restore the schedule after the
   upgrade.

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c

--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
___
Dev mailing list -- d...@ceph.io
To unsubscribe send an email to dev-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Neha Ojha
On Tue, Jun 30, 2020 at 6:04 PM Dan Mick  wrote:
>
> True.  That said, the blog post points to
> http://download.ceph.com/tarballs/ where all the tarballs, including
> 15.2.4, live.
>
>   On 6/30/2020 5:57 PM, Sasha Litvak wrote:
> > David,
> >
> > Download link points to 14.2.10 tarball.
> >
> > On Tue, Jun 30, 2020, 3:38 PM David Galloway  wrote:
> >
> >> We're happy to announce the fourth bugfix release in the Octopus series.
> >> In addition to a security fix in RGW, this release brings a range of fixes
> >> across all components. We recommend that all Octopus users upgrade to this
> >> release. For a detailed release notes with links & changelog please
> >> refer to the official blog entry at
> >> https://ceph.io/releases/v15-2-4-octopus-released
> >>
> >> Notable Changes
> >> ---
> >> * CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's
> >> ExposeHeader
> >>(William Bowling, Adam Mohammed, Casey Bodley)
> >>
> >> * Cephadm: There were a lot of small usability improvements and bug fixes:
> >>* Grafana when deployed by Cephadm now binds to all network interfaces.
> >>* `cephadm check-host` now prints all detected problems at once.
> >>* Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
> >>  when generating an SSL certificate for Grafana.
> >>* The Alertmanager is now correctly pointed to the Ceph Dashboard
> >>* `cephadm adopt` now supports adopting an Alertmanager
> >>* `ceph orch ps` now supports filtering by service name
> >>* `ceph orch host ls` now marks hosts as offline, if they are not
> >>  accessible.
> >>
> >> * Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS
> >> with
> >>a service id of mynfs, that will use the RADOS pool nfs-ganesha and
> >> namespace
> >>nfs-ns::
> >>
> >>  ceph orch apply nfs mynfs nfs-ganesha nfs-ns
> >>
> >> * Cephadm: `ceph orch ls --export` now returns all service specifications
> >> in
> >>yaml representation that is consumable by `ceph orch apply`. In 
> >> addition,
> >>the commands `orch ps` and `orch ls` now support `--format yaml` and
> >>`--format json-pretty`.
> >>
> >> * Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a
> >> preview of
> >>the OSD specification before deploying OSDs. This makes it possible to
> >>verify that the specification is correct, before applying it.
> >>
> >> * RGW: The `radosgw-admin` sub-commands dealing with orphans --
> >>`radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
> >>`radosgw-admin orphans list-jobs` -- have been deprecated. They have
> >>not been actively maintained and they store intermediate results on
> >>the cluster, which could fill a nearly-full cluster.  They have been
> >>replaced by a tool, currently considered experimental,
> >>`rgw-orphan-list`.
> >>
> >> * RBD: The name of the rbd pool object that is used to store
> >>rbd trash purge schedule is changed from 
> >> "rbd_trash_trash_purge_schedule"
> >>to "rbd_trash_purge_schedule". Users that have already started using
> >>`rbd trash purge schedule` functionality and have per pool or namespace
> >>schedules configured should copy "rbd_trash_trash_purge_schedule"
> >>object to "rbd_trash_purge_schedule" before the upgrade and remove
> >>"rbd_trash_purge_schedule" using the following commands in every RBD
> >>pool and namespace where a trash purge schedule was previously
> >>configured::
> >>
> >>  rados -p  [-N namespace] cp rbd_trash_trash_purge_schedule
> >> rbd_trash_purge_schedule
> >>  rados -p  [-N namespace] rm rbd_trash_trash_purge_schedule
> >>
> >>or use any other convenient way to restore the schedule after the
> >>upgrade.
> >>
> >> Getting Ceph
> >> 
> >> * Git at git://github.com/ceph/ceph.git

Correction:
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.4.tar.gz

> >> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> >> * Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
> >>
> >> --
> >> David Galloway
> >> Systems Administrator, RDU
> >> Ceph Engineering
> >> IRC: dgalloway
> >> ___
> >> Dev mailing list -- d...@ceph.io
> >> To unsubscribe send an email to dev-le...@ceph.io
> >>
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io