[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-16 Thread Janne Johansson
On Wed, 16 Sep 2020 at 06:27, Danni Setiawan <danni.n.setia...@gmail.com> wrote:

> Hi all,
>
> I'm trying to find the performance penalty for HDD OSDs when using the
> WAL/DB on a faster device (SSD/NVMe) vs the WAL/DB on the same device (HDD)
> for different workloads (RBD, RGW with the index bucket in an SSD pool, and
> CephFS with metadata in an SSD pool). I want to know whether giving up a
> disk slot for a WAL/DB device is worth it vs adding more OSDs.
>
> Unfortunately I cannot find benchmarks for these kinds of workloads. Has
> anyone ever done this benchmark?
>

I think this is probably too vague and broad a question. If you ask "will my
cluster handle far more write IOPS if I have the WAL/DB (or journal) on
SSD/NVMe instead of on the same drive as the data", then almost everyone will
agree that yes, a flash WAL/DB will make your writes (and recoveries) a lot
quicker, since NVMe/SSD will do anywhere from 10x to 100x as many small writes
per second as the best spinning HDDs. But how this affects any single end-user
experience behind S3 or CephFS is very hard to put into pure numbers without
diving into a ton of implementation details, like "how much RAM cache does the
MDS have for CephFS" or "how many RGWs and S3 streams are you running in
parallel to speed up S3/RGW operations".

Also, even if flash devices are "only" used for speeding up writes, normal
clusters see a lot of mixed IO, so if writes theoretically take 0 ms you get a
lot more free time to do reads on the HDDs, and reads can often be accelerated
with RAM caches in various places.

So, as with any other storage system, if you put a flash device in front of
the spinners you will see improvements, especially for many small write ops;
but whether your use case consists of "copy these 100 10G images to this pool
every night" or "every hour we unzip the sources of a large program, checksum
the files and then clean the directory" will have a large impact on how much
flash helps your cluster.

Also, more boxes add more performance in more ways than just "more disk":
every extra CPU, every GB of RAM, every extra network port means the overall
performance of the cluster goes up by sharing the total load better. This will
not show up in simple single-threaded tests, but as you get 2-5-10-100 active
clients doing IO it will be noticeable.
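
If you just want rough numbers from your own hardware rather than someone
else's benchmark, a crude sketch with the built-in tools could look like the
following (assuming a throwaway test pool called "bench" that you delete
afterwards; pool deletion also requires mon_allow_pool_delete=true):

  # create a small test pool (PG count is just an example)
  ceph osd pool create bench 32 32
  ceph osd pool application enable bench rbd

  # 60 seconds of 4k writes with 16 concurrent ops, then sequential and random reads
  rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup
  rados bench -p bench 60 seq -t 16
  rados bench -p bench 60 rand -t 16

  # clean up
  rados -p bench cleanup
  ceph osd pool delete bench bench --yes-i-really-really-mean-it

Run that once against OSDs whose DB/WAL sit on flash and once against pure-HDD
OSDs and you at least get an apples-to-apples small-write comparison for your
own hardware, even if it says little about RGW index or MDS behaviour.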

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-16 Thread Marc Roos
 

Hi David, Jan, 

Terribly, terribly sorry, I just noticed that the imaptest program used this
'DON'T DELETE THIS MESSAGE -- FOLDER INTERNAL DATA' message to test with,
which is 537 bytes. That at least explains why half the messages were so
small. So the conclusion is that either:

- the hint is not being set correctly to request passive compression, or
- passive compression is not working even when this hint is set.
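
In case it helps to rule out the pool side, this is roughly how I would
double-check the compression settings and whether BlueStore is actually
compressing anything (pool name "mail" as in your listing; the exact counter
names may vary a little between releases):

  # pool-level compression settings
  ceph osd pool get mail compression_mode
  ceph osd pool get mail compression_algorithm
  ceph osd pool get mail compression_required_ratio
  ceph osd pool get mail compression_min_blob_size

  # cluster-wide view of stored vs. compressed bytes per pool
  ceph df detail

  # on an OSD host: BlueStore compression counters for one OSD
  ceph daemon osd.0 perf dump | grep -i compress

If the compression success counters stay at zero while objects are written
with the COMPRESSIBLE hint, that points at the hint path; if they increment
but some objects stay uncompressed, the compression_required_ratio and
min_blob_size thresholds are the more likely explanation.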

Thanks,
Marc


-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] ceph rbox test on passive compressed pool

On 09/11 09:36, Marc Roos wrote:
> 
> Hi David,
> 
> Just to let you know, this hint is being set. What is the reason for 
> Ceph compressing only half the objects? Could there be some issue with my 
> OSDs? Like some maybe having an old filesystem (still created with 
> ceph-disk rather than ceph-volume)? Is this still to be expected, or does 
> Ceph drop compression when under pressure?
> 
> https://github.com/ceph-dovecot/dovecot-ceph-plugin/blob/56d6c900cc9ec
> 07dfb98ef2abac07aae466b7610/src/librmb/rados-storage-impl.cpp#L75


I was trying to look into this a bit :). Can you give me some more info about 
the OSDs that you are using? What filesystem are they on?

Cheers!
> 
>  Thanks,
> Marc
> 
> 
> 
> -Original Message-
> Cc: jan.radon
> Subject: Re: [ceph-users] ceph rbox test on passive compressed pool
> 
> The hints have to be given from the client side as far as I understand; 
> can you share the client code too?
> 
> Also, it seems that there are no guarantees that it will actually do 
> anything (best effort, I guess):
> https://docs.ceph.com/docs/mimic/rados/api/librados/#c.rados_set_alloc_hint
> 
> Cheers
> 
> 
> On 6 September 2020 15:59:01 BST, Marc Roos 
> wrote:
> 
>   I have been inserting 10790 copies of exactly the same 64kb text message
>   into a pool with passive compression enabled. I am still counting, but it
>   looks like only half the objects are compressed.
>
>   mail/b08c3218dbf1545ff43052412a8e mtime 2020-09-06 16:27:39.00, size 63580
>   mail/00f6043775f1545ff43052412a8e mtime 2020-09-06 16:25:57.00, size 525
>   mail/b875f40571f1545ff43052412a8e mtime 2020-09-06 16:25:53.00, size 63580
>   mail/e87c120b19f1545ff43052412a8e mtime 2020-09-06 16:24:25.00, size 525
>
>   I am not sure if this should be expected from passive; these docs[1] say
>   that passive means 'compress if hinted COMPRESSIBLE'. From that I would
>   conclude that all text messages should be compressed. A previous test
>   with a 64kb gzip attachment seemed not to compress, although I did not
>   look at all object sizes.
>
>   on 14.2.11
>
>   [1]
>   https://documentation.suse.com/ses/5.5/html/ses-all/ceph-pools.html#sec-ceph-pool-compression
>   https://docs.ceph.com/docs/mimic/rados/operations/pools/
> 
> 
>   ceph-users mailing list -- ceph-users@ceph.io
>   To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> 
> 

--
David Caro

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-16 Thread Danni Setiawan
Yes, I agree that there are many knobs for fine-tuning Ceph performance. 
The problem is that we don't have data on which workload benefits most from 
WAL/DB on SSD vs. on the same spinning drive, and by how much. Does it really 
help in a cluster that is mostly used for object storage/RGW? Or is it perhaps 
only block storage/RBD workloads that benefit most?


IMHO, we need some cost-benefit analysis here, because the cost of placing the 
WAL/DB on SSD is quite noticeable: multiple OSDs fail when the SSD fails, and 
a drive slot's worth of capacity is given up.


Thanks.
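
For what it's worth, the two layouts being compared boil down to something
like this (a sketch only; the device names are made up, adjust for your
hardware):

  # HDD-only OSD: data, WAL and DB all on the spinner
  ceph-volume lvm create --data /dev/sdb

  # hybrid OSD: data on the spinner, RocksDB (and WAL) on a flash partition/LV
  ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

  # afterwards you can verify where the DB ended up
  ceph osd metadata 0 | grep -E 'bluefs_db|rotational'

And the blast-radius concern is real: if one NVMe carries the DB for, say,
five HDD OSDs, losing that NVMe means re-creating and backfilling all five,
so the SSD:HDD ratio and the SSD's endurance matter as much as the raw
speedup.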

On 16/09/20 14.45, Janne Johansson wrote:
On Wed, 16 Sep 2020 at 06:27, Danni Setiawan <danni.n.setia...@gmail.com> wrote:


Hi all,

I'm trying to find the performance penalty for HDD OSDs when using the
WAL/DB on a faster device (SSD/NVMe) vs the WAL/DB on the same device (HDD)
for different workloads (RBD, RGW with the index bucket in an SSD pool, and
CephFS with metadata in an SSD pool). I want to know whether giving up a
disk slot for a WAL/DB device is worth it vs adding more OSDs.

Unfortunately I cannot find benchmarks for these kinds of workloads. Has
anyone ever done this benchmark?


I think this is probably too vague and broad a question. If you ask "will my
cluster handle far more write IOPS if I have the WAL/DB (or journal) on
SSD/NVMe instead of on the same drive as the data", then almost everyone will
agree that yes, a flash WAL/DB will make your writes (and recoveries) a lot
quicker, since NVMe/SSD will do anywhere from 10x to 100x as many small writes
per second as the best spinning HDDs. But how this affects any single end-user
experience behind S3 or CephFS is very hard to put into pure numbers without
diving into a ton of implementation details, like "how much RAM cache does the
MDS have for CephFS" or "how many RGWs and S3 streams are you running in
parallel to speed up S3/RGW operations".

Also, even if flash devices are "only" used for speeding up writes, normal
clusters see a lot of mixed IO, so if writes theoretically take 0 ms you get a
lot more free time to do reads on the HDDs, and reads can often be accelerated
with RAM caches in various places.

So, as with any other storage system, if you put a flash device in front of
the spinners you will see improvements, especially for many small write ops;
but whether your use case consists of "copy these 100 10G images to this pool
every night" or "every hour we unzip the sources of a large program, checksum
the files and then clean the directory" will have a large impact on how much
flash helps your cluster.

Also, more boxes add more performance in more ways than just "more disk":
every extra CPU, every GB of RAM, every extra network port means the overall
performance of the cluster goes up by sharing the total load better. This will
not show up in simple single-threaded tests, but as you get 2-5-10-100 active
clients doing IO it will be noticeable.

--
May the most significant bit of your life be positive.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph pgs inconsistent, always the same checksum

2020-09-16 Thread Igor Fedotov

Hi David,

the morning's log is fine for now. The full log is preferred unless it's too 
large; if that's the case then please take 2 lines prior to the read failure. 
Just in case, please also check whether additional occurrences of 
"_verify_csum bad" are present before this snippet.

Once you face the issue again, please run a deep fsck - I'd like to make sure 
whether these checksum failures are persistent or not. The latter case 
(non-persistent failures) is a strong symptom of 
https://tracker.ceph.com/issues/22464
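
For reference, a concrete sketch of what I mean, using osd.123 from your log
snippet as the placeholder (paths and commands differ a bit for
cephadm/containerized deployments, where the OSD data lives under
/var/lib/ceph/<fsid>/osd.123 and you would run these inside "cephadm shell"):

  # look for earlier occurrences in the OSD log
  grep '_verify_csum bad' /var/log/ceph/ceph-osd.123.log

  # non-zero read retries hint at transient (non-persistent) checksum failures
  ceph daemon osd.123 perf dump | grep bluestore_reads_with_retries

  # deep fsck: stop the OSD first, then run against its data directory
  systemctl stop ceph-osd@123
  ceph-bluestore-tool fsck --deep 1 --path /var/lib/ceph/osd/ceph-123
  systemctl start ceph-osd@123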



Thanks,

Igor


On 9/16/2020 2:36 AM, David Orman wrote:
Yes, we can do this (log/deep fsck) on the next incident. I have not repaired 
the currently inconsistent PG; would you like me to do the fsck on it now, 
prior to the PG repair? PG repair is what we've been using. Unfortunately, I 
did not see this until now, and we had a failure in the morning, so I am not 
sure how useful the logs will be to you. Are you just looking for the entries 
related to the failure, such as the ones posted in the initial message, or the 
full OSD log? If you've got a preferred method for me to capture what you 
want, let me know. We are running Ceph in cephadm/podman-operated containers.


We will update with logs when the next failure happens.

On Tue, Sep 15, 2020 at 4:05 AM Igor Fedotov wrote:


Hi Welby,

could you share an OSD log containing such errors then, please?


Also - David mentioned 'repair' which fixes the issue - is it a
bluestore repair or a PG one?

If the latter could you please try bluestore deep fsck (via
'ceph-bluestore-tool --command fsck --deep 1') immediately after
the failure has been discovered?

Will it succeed?


Thanks,

Igor


On 9/14/2020 8:45 PM, Welby McRoberts wrote:

Hi Igor

We'll take a look at disabling swap on the nodes and see if that
improves the situation.

Having checked across all OSDs, we're not seeing
bluestore_reads_with_retries as anything other than a zero value.
We get anywhere from 3 to 10 occurrences of the error a week, but
it's usually only one or two PGs that are inconsistent at any one
time.

Thanks
Welby

On Mon, Sep 14, 2020 at 12:17 PM Igor Fedotov <ifedo...@suse.de> wrote:

Hi David,

you might want to try disabling swap on your nodes. It looks like there
is some implicit correlation between such read errors and enabled
swapping.

Also wondering whether you can observe non-zero values for the
"bluestore_reads_with_retries" performance counter on your OSDs. How
widespread are these cases? How high does this counter get?


Thanks,

Igor


On 9/9/2020 4:59 PM, David Orman wrote:
> Right, you can see the previously referenced ticket/bug in the link I had
> provided. It's definitely not an unknown situation.
>
> We have another one today:
>
> debug 2020-09-09T06:49:36.595+ 7f570871d700 -1 bluestore(/var/lib/ceph/osd/ceph-123) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x6, got 0x6706be76, expected 0x929a618, device location [0x2f387d7~1000], logical extent 0xe~1000, object 0#2:7ff493bc:::rbd_data.3.20d195d612942.04228a96:head#
>
> debug 2020-09-09T06:49:36.611+ 7f570871d700 -1 bluestore(/var/lib/ceph/osd/ceph-123) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x6, got 0x6706be76, expected 0x929a618, device location [0x2f387d7~1000], logical extent 0xe~1000, object 0#2:7ff493bc:::rbd_data.3.20d195d612942.04228a96:head#
>
> debug 2020-09-09T06:49:36.611+ 7f570871d700 -1 bluestore(/var/lib/ceph/osd/ceph-123) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x6, got 0x6706be76, expected 0x929a618, device location [0x2f387d7~1000], logical extent 0xe~1000, object 0#2:7ff493bc:::rbd_data.3.20d195d612942.04228a96:head#
>
> debug 2020-09-09T06:49:36.611+ 7f570871d700 -1 bluestore(/var/lib/ceph/osd/ceph-123) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x6, got 0x6706be76, expected 0x929a618, device location [0x2f387d7~1000], logical extent 0xe~1000, object 0#2:7ff493bc:::rbd_data.3.20d195d612942.04228a96:head#
>
> debug 2020-09-09T06:49:37.315+ 7f570871d700 -1 log_channel(cluster) log [ERR] : 2.3fe shard 123(0) soid 2:7ff493bc:::rbd_data.3.20d195d612942.04228a96:head : candidate had a read error
>
> debug 2020-09-09T06:57:08.930+ 7f570871d700 -1 log_channel(cluster) log
  

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-16 Thread Frank Schilder
Sounds similar to this one: https://tracker.ceph.com/issues/46847

If you have, or can reconstruct, the crush map from before adding the OSDs, you 
might be able to discover everything with the "temporary reversal of the crush 
map" method described there.

Not sure if there is another method; I never got a reply to my question in the 
tracker.
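
Regarding the question further down about identifying the affected CephFS
files: a rough sketch, assuming the unfound objects are in the CephFS data
pool and the filesystem is mounted on a client (the PG id and mount point are
placeholders, and the exact JSON layout may differ slightly by release):

  # which OSDs has the primary probed / still not queried for this PG?
  ceph pg 2.3f query | grep -A3 might_have_unfound

  # list the unfound objects themselves
  ceph pg 2.3f list_unfound | jq -r '.objects[].oid.oid'

  # data-pool objects are named <inode-in-hex>.<stripe-index>;
  # e.g. for an object called 10000000123.00000000, look up the file by inode:
  find /mnt/cephfs -inum $((16#10000000123))

Once the affected files are known and restorable from tape, the documented
last resort is "ceph pg <pgid> mark_unfound_lost delete" (or revert), after
which the files can be removed and restored from backup, but obviously only
after the probing really has exhausted all possible locations.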

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Michael Thomas 
Sent: 16 September 2020 01:27:19
To: ceph-users@ceph.io
Subject: [ceph-users] multiple OSD crash, unfound objects

Over the weekend I had multiple OSD servers in my Octopus cluster
(15.2.4) crash and reboot at nearly the same time.  The OSDs are part of
an erasure coded pool.  At the time the cluster had been busy with a
long-running (~week) remapping of a large number of PGs after I
incrementally added more OSDs to the cluster.  After bringing all of the
OSDs back up, I have 25 unfound objects and 75 degraded objects.  There
are other problems reported, but I'm primarily concerned with these
unfound/degraded objects.

The pool with the missing objects is a cephfs pool.  The files stored in
the pool are backed up on tape, so I can easily restore individual files
as needed (though I would not want to restore the entire filesystem).

I tried following the guide at
https://docs.ceph.com/docs/octopus/rados/troubleshooting/troubleshooting-pg/#unfound-objects.
  I found a number of OSDs that are still 'not queried'.  Restarting a
sampling of these OSDs changed the state from 'not queried' to 'already
probed', but that did not recover any of the unfound or degraded objects.

I have also tried 'ceph pg deep-scrub' on the affected PGs, but never
saw them get scrubbed.  I also tried doing a 'ceph pg force-recovery' on
the affected PGs, but only one seems to have been tagged accordingly
(see ceph -s output below).

The guide also says "Sometimes it simply takes some time for the cluster
to query possible locations."  I'm not sure how long "some time" might
take, but it hasn't changed after several hours.

My questions are:

* Is there a way to force the cluster to query the possible locations
sooner?

* Is it possible to identify the files in cephfs that are affected, so
that I could delete only the affected files and restore them from backup
tapes?

--Mike

ceph -s:

   cluster:
 id: 066f558c-6789-4a93-aaf1-5af1ba01a3ad
 health: HEALTH_ERR
 1 clients failing to respond to capability release
 1 MDSs report slow requests
 25/78520351 objects unfound (0.000%)
 2 nearfull osd(s)
 Reduced data availability: 1 pg inactive
 Possible data damage: 9 pgs recovery_unfound
 Degraded data redundancy: 75/626645098 objects degraded
(0.000%), 9 pgs degraded
 1013 pgs not deep-scrubbed in time
 1013 pgs not scrubbed in time
 2 pool(s) nearfull
 1 daemons have recently crashed
 4 slow ops, oldest one blocked for 77939 sec, daemons
[osd.0,osd.41] have slow ops.

   services:
 mon: 4 daemons, quorum ceph1,ceph2,ceph3,ceph4 (age 9d)
 mgr: ceph3(active, since 11d), standbys: ceph2, ceph4, ceph1
 mds: archive:1 {0=ceph4=up:active} 3 up:standby
 osd: 121 osds: 121 up (since 6m), 121 in (since 101m); 4 remapped pgs

   task status:
 scrub status:
 mds.ceph4: idle

   data:
 pools:   9 pools, 2433 pgs
 objects: 78.52M objects, 298 TiB
 usage:   412 TiB used, 545 TiB / 956 TiB avail
 pgs: 0.041% pgs unknown
  75/626645098 objects degraded (0.000%)
  135224/626645098 objects misplaced (0.022%)
  25/78520351 objects unfound (0.000%)
  2421 active+clean
   5    active+recovery_unfound+degraded
   3    active+recovery_unfound+degraded+remapped
   2    active+clean+scrubbing+deep
   1    unknown
   1    active+forced_recovery+recovery_unfound+degraded

   progress:
 PG autoscaler decreasing pool 7 PGs from 1024 to 512 (5d)
   []
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] v15.2.5 octopus released

2020-09-16 Thread Abhishek Lekshmanan

This is the fifth backport release of the Ceph Octopus stable release
series. This release brings a range of fixes across all components. We
recommend that all Octopus users upgrade to this release. 

Notable Changes
---

* CephFS: Automatic static subtree partitioning policies may now be configured
  using the new distributed and random ephemeral pinning extended attributes on
  directories. See the documentation for more information:
  https://docs.ceph.com/docs/master/cephfs/multimds/

* Monitors now have a config option `mon_osd_warn_num_repaired`, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  `OSD_TOO_MANY_REPAIRS` health warning is generated (a small usage example
  follows below this list).

* Now when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.

* Fix an issue with osdmaps not being trimmed in a healthy cluster (
  issue#47297, pr#36981)
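
As an aside, for anyone who wants to tune the new repair warning, a minimal
sketch using the standard config interface (the threshold of 20 is just an
example value, and the option is assumed to be read on the mon side, as its
name suggests):

  # raise the repaired-read-error threshold from the default of 10
  ceph config set mon mon_osd_warn_num_repaired 20

  # check whether any OSD currently trips the warning
  ceph health detail | grep -i OSD_TOO_MANY_REPAIRS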

For the detailed changelog please refer to the blog entry at
https://ceph.io/releases/v15-2-5-octopus-released/

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 2c93eff00150f0cc5f106a559557a58d3d7b6f1f
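
For cephadm-managed clusters, the upgrade itself should be a one-liner
(a sketch; package-based installs follow the usual package upgrade and
daemon restart sequence instead):

  ceph orch upgrade start --ceph-version 15.2.5

  # watch progress
  ceph orch upgrade status
  ceph -W cephadm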

-- 
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Migration to ceph.readthedocs.io underway

2020-09-16 Thread Neha Ojha
Hi everyone,

We are in the process of migrating from docs.ceph.com to
ceph.readthedocs.io. We enabled it in
https://github.com/ceph/ceph/pull/34499 and will now be using it by
default.

Why?

- The search feature in ceph.readthedocs.io is much better than
docs.ceph.com and allows you to search multiple strings.
- RTD provides a built-in version-switching feature which we plan to
use in the future.

What does it mean to you?

- Some broken links are expected during this migration. Things like
ceph API documentation need special handling (example:
https://docs.ceph.com/en/latest/rados/api/) and are expected to be
broken temporarily.

- Much better Ceph documentation experience once the migration is done.

Thanks for your patience!

Cheers,
Neha
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-16 Thread Marc Roos


 
- In the future you will not be able to read the docs if you have an 
adblocker(?)



-Original Message-
To: dev; ceph-users
Cc: Kefu Chai
Subject: [ceph-users] Migration to ceph.readthedocs.io underway

Hi everyone,

We are in the process of migrating from docs.ceph.com to 
ceph.readthedocs.io. We enabled it in
https://github.com/ceph/ceph/pull/34499 and will now be using it by 
default.

Why?

- The search feature in ceph.readthedocs.io is much better than 
docs.ceph.com and allows you to search multiple strings.
- RTD provides an in-built version switching feature which we plan to 
use in future.

What does it mean to you?

- Some broken links are expected during this migration. Things like ceph 
API documentation need special handling (example:
https://docs.ceph.com/en/latest/rados/api/) and are expected to be 
broken temporarily.

- Much better Ceph documentation experience once the migration is done.

Thanks for your patience!

Cheers,
Neha
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-16 Thread Sasha Litvak
I wonder if this new system allows me to choose Ceph versions. I see the
v:latest switcher in the bottom right corner, but it seems to be the only
choice so far.

On Wed, Sep 16, 2020 at 12:31 PM Marc Roos  wrote:

>
>
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
>
>
>
> -Original Message-
> To: dev; ceph-users
> Cc: Kefu Chai
> Subject: [ceph-users] Migration to ceph.readthedocs.io underway
>
> Hi everyone,
>
> We are in the process of migrating from docs.ceph.com to
> ceph.readthedocs.io. We enabled it in
> https://github.com/ceph/ceph/pull/34499 and will now be using it by
> default.
>
> Why?
>
> - The search feature in ceph.readthedocs.io is much better than
> docs.ceph.com and allows you to search multiple strings.
> - RTD provides an in-built version switching feature which we plan to
> use in future.
>
> What does it mean to you?
>
> - Some broken links are expected during this migration. Things like ceph
> API documentation need special handling (example:
> https://docs.ceph.com/en/latest/rados/api/) and are expected to be
> broken temporarily.
>
> - Much better Ceph documentation experience once the migration is done.
>
> Thanks for your patience!
>
> Cheers,
> Neha
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-16 Thread Neha Ojha
On Wed, Sep 16, 2020 at 10:51 AM Sasha Litvak
 wrote:
>
> I wonder if this new system allows me to choose Ceph versions.  I see the 
> v:latest in the right bottom corner but it seems to be the only choice so far.

Not yet, but that's where we plan to incorporate other versions.

>
> On Wed, Sep 16, 2020 at 12:31 PM Marc Roos  wrote:
>>
>>
>>
>> - In the future you will not be able to read the docs if you have an
>> adblocker(?)
>>
>>
>>
>> -Original Message-
>> To: dev; ceph-users
>> Cc: Kefu Chai
>> Subject: [ceph-users] Migration to ceph.readthedocs.io underway
>>
>> Hi everyone,
>>
>> We are in the process of migrating from docs.ceph.com to
>> ceph.readthedocs.io. We enabled it in
>> https://github.com/ceph/ceph/pull/34499 and will now be using it by
>> default.
>>
>> Why?
>>
>> - The search feature in ceph.readthedocs.io is much better than
>> docs.ceph.com and allows you to search multiple strings.
>> - RTD provides an in-built version switching feature which we plan to
>> use in future.
>>
>> What does it mean to you?
>>
>> - Some broken links are expected during this migration. Things like ceph
>> API documentation need special handling (example:
>> https://docs.ceph.com/en/latest/rados/api/) and are expected to be
>> broken temporarily.
>>
>> - Much better Ceph documentation experience once the migration is done.
>>
>> Thanks for your patience!
>>
>> Cheers,
>> Neha
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
>> email to ceph-users-le...@ceph.io
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-16 Thread Neha Ojha
On Wed, Sep 16, 2020 at 11:08 AM Marc Roos  wrote:
>
>
>
> - In the future you will not be able to read the docs if you have an
> adblocker(?)

I'm not aware of anything of this sort.

>
>
>
> -Original Message-
> To: dev; ceph-users
> Cc: Kefu Chai
> Subject: [ceph-users] Migration to ceph.readthedocs.io underway
>
> Hi everyone,
>
> We are in the process of migrating from docs.ceph.com to
> ceph.readthedocs.io. We enabled it in
> https://github.com/ceph/ceph/pull/34499 and will now be using it by
> default.
>
> Why?
>
> - The search feature in ceph.readthedocs.io is much better than
> docs.ceph.com and allows you to search multiple strings.
> - RTD provides an in-built version switching feature which we plan to
> use in future.
>
> What does it mean to you?
>
> - Some broken links are expected during this migration. Things like ceph
> API documentation need special handling (example:
> https://docs.ceph.com/en/latest/rados/api/) and are expected to be
> broken temporarily.
>
> - Much better Ceph documentation experience once the migration is done.
>
> Thanks for your patience!
>
> Cheers,
> Neha
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io