Replace some hard-coded section numbers with dynamic links.
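
The pattern, sketched below with a hypothetical label and section name, is to
declare a Sphinx label right before the target section and reference it with
the :ref: role (or an implicit `Section Title`_ target when the reference stays
in the same document), so section names and numbers are resolved at build time
instead of being copied by hand:

    .. _my_label:

    My Section
    ----------

    See :ref:`my_label` from any document,
    or `My Section`_ from within the same document.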

Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
 doc/guides/nics/virtio.rst                           | 2 +-
 doc/guides/prog_guide/ip_fragment_reassembly_lib.rst | 2 +-
 doc/guides/prog_guide/kernel_nic_interface.rst       | 2 ++
 doc/guides/prog_guide/lpm6_lib.rst                   | 2 +-
 doc/guides/prog_guide/lpm_lib.rst                    | 2 ++
 doc/guides/prog_guide/mbuf_lib.rst                   | 2 ++
 doc/guides/prog_guide/mempool_lib.rst                | 2 ++
 doc/guides/prog_guide/multi_proc_support.rst         | 2 +-
 doc/guides/prog_guide/qos_framework.rst              | 8 ++++----
 doc/guides/prog_guide/writing_efficient_code.rst     | 2 +-
 doc/guides/sample_app_ug/l2_forward_job_stats.rst    | 5 +++--
 doc/guides/sample_app_ug/multi_process.rst           | 2 +-
 12 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 200a8be..06ca433 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -140,7 +140,7 @@ Host2VM communication example
     For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
     place them onto kni's raw socket's queue and wake up the vhost kernel thread to exchange packets with the virtio virt queue.

-    For more details about kni, please refer to Chapter 24 "Kernel NIC Interface".
+    For more details about kni, please refer to :ref:`kni`.

 #.  Enable the kni raw socket functionality for the specified physical NIC port,
     get the generated file descriptor and set it in the qemu command line parameter.
diff --git a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
index 1d3d4ac..43168f0 100644
--- a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
+++ b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
@@ -54,7 +54,7 @@ Finally 'direct' and 'indirect' mbufs for each fragment are linked together via

 The caller has an ability to explicitly specify which mempools should be used to allocate 'direct' and 'indirect' mbufs from.

-For more information about direct and indirect mbufs, refer to the *DPDK Programmers guide 7.7 Direct and Indirect Buffers.*
+For more information about direct and indirect mbufs, refer to :ref:`direct_indirect_buffer`.

 Packet reassembly
 -----------------
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 0d91476..fac1960 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -28,6 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

+.. _kni:
+
 Kernel NIC Interface
 ====================

diff --git a/doc/guides/prog_guide/lpm6_lib.rst b/doc/guides/prog_guide/lpm6_lib.rst
index 87f5066..0aea5c5 100644
--- a/doc/guides/prog_guide/lpm6_lib.rst
+++ b/doc/guides/prog_guide/lpm6_lib.rst
@@ -75,7 +75,7 @@ The main methods exported for the LPM component are:
 Implementation Details
 ~~~~~~~~~~~~~~~~~~~~~~

-This is a modification of the algorithm used for IPv4 (see Section 19.2 "Implementation Details").
+This is a modification of the algorithm used for IPv4 (see :ref:`lpm4_details`).
 In this case, instead of using two levels, one with a tbl24 and a second with a tbl8, 14 levels are used.

 The implementation can be seen as a multi-bit trie where the *stride*
diff --git a/doc/guides/prog_guide/lpm_lib.rst b/doc/guides/prog_guide/lpm_lib.rst
index c33e469..8b5ff99 100644
--- a/doc/guides/prog_guide/lpm_lib.rst
+++ b/doc/guides/prog_guide/lpm_lib.rst
@@ -62,6 +62,8 @@ The main methods exported by the LPM component are:
     the algorithm picks the rule with the highest depth as the best match rule,
     which means that the rule has the highest number of most significant bits matching between the input key and the rule key.

+.. _lpm4_details:
+
 Implementation Details
 ----------------------

diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 32a041e..8e61682 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -235,6 +235,8 @@ The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
 (specifically the csumonly.c file) for details.

+.. _direct_indirect_buffer:
+
 Direct and Indirect Buffers
 ---------------------------

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index f0ca06f..5fae79a 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -98,6 +98,8 @@ no padding is required between objects (except for objects whose size are n x 3

 When creating a new pool, the user can specify to use this feature or not.

+.. _mempool_local_cache:
+
 Local Cache
 -----------

diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index 1680d6b..badd102 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -80,7 +80,7 @@ and point to the same objects, in both processes.

 .. note::

-    Refer to Section 23.3 "Multi-process Limitations" for details of
+    Refer to `Multi-process Limitations`_ for details of
     how Linux kernel Address-Space Layout Randomization (ASLR) can affect memory sharing.

 .. _figure_multi_process_memory:
diff --git a/doc/guides/prog_guide/qos_framework.rst b/doc/guides/prog_guide/qos_framework.rst
index c4e390c..f3f60b8 100644
--- a/doc/guides/prog_guide/qos_framework.rst
+++ b/doc/guides/prog_guide/qos_framework.rst
@@ -147,7 +147,7 @@ these packets are later on removed and handed over to the NIC TX with the packet

 The hierarchical scheduler is optimized for a large number of packet queues.
 When only a small number of queues are needed, message passing queues should be used instead of this block.
-See Section 26.2.5 "Worst Case Scenarios for Performance" for a more detailed discussion.
+See `Worst Case Scenarios for Performance`_ for a more detailed discussion.

 Scheduling Hierarchy
 ~~~~~~~~~~~~~~~~~~~~
@@ -712,7 +712,7 @@ where, r = port line rate (in bytes per second).
    |   |                         |     of the grinders), update the credits for the pipe and its subport.      |
    |   |                         |                                                                             |
    |   |                         | The current implementation is using option 3.  According to Section         |
-   |   |                         | 26.2.4.4 "Dequeue State Machine", the pipe and subport credits are          |
+   |   |                         | `Dequeue State Machine`_, the pipe and subport credits are                  |
    |   |                         | updated every time a pipe is selected by the dequeue process before the     |
    |   |                         | pipe and subport credits are actually used.                                 |
    |   |                         |                                                                             |
@@ -783,7 +783,7 @@ as described in :numref:`table_qos_10` and :numref:`table_qos_11`.
    | 1 | tc_time               | Bytes | Time of the next update (upper limit refill) for the 4 TCs of the     |
    |   |                       |       | current subport / pipe.                                               |
    |   |                       |       |                                                                       |
-   |   |                       |       | See  Section 26.2.4.5.1, "Internal Time Reference" for the            |
+   |   |                       |       | See  Section `Internal Time Reference`_ for the                       |
    |   |                       |       | explanation of why the time is maintained in byte units.              |
    |   |                       |       |                                                                       |
    +---+-----------------------+-------+-----------------------------------------------------------------------+
@@ -1334,7 +1334,7 @@ Where:

 The time reference is in units of bytes,
 where a byte signifies the time duration required by the physical interface to send out a byte on the transmission medium
-(see Section 26.2.4.5.1 "Internal Time Reference").
+(see Section `Internal Time Reference`_).
 The parameter s is defined in the dropper module as a constant with the value: s=2^22.
 This corresponds to the time required by every leaf node in a hierarchy with 64K leaf nodes
 to transmit one 64-byte packet onto the wire and represents the worst case scenario.
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index 613db88..78d2afa 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -113,7 +113,7 @@ it is advised to use the DPDK ring API, which provides a lockless ring implement

 The ring supports bulk and burst access,
 meaning that it is possible to read several elements from the ring with only one costly atomic operation
-(see Chapter 5 "Ring Library").
+(see :doc:`ring_lib`).
 Performance is greatly improved when using bulk access operations.

 The code algorithm that dequeues messages may be something similar to the following:
diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
index acf6273..03d9977 100644
--- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
+++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
@@ -154,7 +154,8 @@ Command Line Arguments
 ~~~~~~~~~~~~~~~~~~~~~~

 The L2 Forwarding sample application takes specific parameters,
-in addition to Environment Abstraction Layer (EAL) arguments (see Section 9.3).
+in addition to Environment Abstraction Layer (EAL) arguments
+(see `Running the Application`_).
 The preferred way to parse parameters is to use the getopt() function,
 since it is part of a well-defined and portable library.

@@ -344,7 +345,7 @@ The list of queues that must be polled for a given lcore is stored in a private
 Values of struct lcore_queue_conf:

 *   n_rx_port and rx_port_list[] are used in the main packet processing loop
-    (see Section 9.4.6 "Receive, Process and Transmit Packets" later in this chapter).
+    (see Section `Receive, Process and Transmit Packets`_ later in this chapter).

 *   rx_timers and flush_timer are used to ensure forced TX on low packet rate.

diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index ffe7ee6..3571490 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -495,7 +495,7 @@ For threads/processes not created in that way, either pinned to a core or not, t
 rte_lcore_id() function will not work in the correct way.
 However, sometimes these threads/processes still need the unique ID mechanism to do easy access on structures or resources.
 For example, the DPDK mempool library provides a local cache mechanism
-(refer to *DPDK Programmer's Guide* , Section 6.4, "Local Cache")
+(refer to :ref:`mempool_local_cache`)
 for fast element allocation and freeing.
 If using a non-unique ID or a fake one,
 a race condition occurs if two or more threads/ processes with the same core ID try to use the local cache.
-- 
2.7.0
