Signed-off-by: Mauricio Vasquez B <mauricio.vasq...@polito.it>
---
 INSTALL.DPDK.rst      | 148 +++++++++++++++++++++++++++++++++++++-------------
 INSTALL.Debian.rst    |  20 +++++--
 INSTALL.Docker.rst    |  76 +++++++++++++++++++-------
 INSTALL.KVM.rst       |  24 ++++++--
 INSTALL.Windows.rst   | 136 ++++++++++++++++++++++++++++++++++------------
 INSTALL.XenServer.rst |  32 ++++++++---
 INSTALL.userspace.rst |  12 +++-
 7 files changed, 336 insertions(+), 112 deletions(-)
diff --git a/INSTALL.DPDK.rst b/INSTALL.DPDK.rst
index c4b9167..5780909 100644
--- a/INSTALL.DPDK.rst
+++ b/INSTALL.DPDK.rst
@@ -66,7 +66,9 @@ Installing
 DPDK
 ~~~~
 
-1. Download the `DPDK sources`_, extract the file and set ``DPDK_DIR``:::
+1. Download the `DPDK sources`_, extract the file and set ``DPDK_DIR``:
+
+::
 
        $ cd /usr/src/
        $ wget http://dpdk.org/browse/dpdk/snapshot/dpdk-16.07.zip
@@ -76,13 +78,17 @@ DPDK
 
 2. Configure and install DPDK
 
-   Build and install the DPDK library:::
+   Build and install the DPDK library:
+
+::
 
        $ export DPDK_TARGET=x86_64-native-linuxapp-gcc
        $ export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
        $ make install T=$DPDK_TARGET DESTDIR=install
 
-   If IVSHMEM support is required, use a different target:::
+   If IVSHMEM support is required, use a different target:
+
+::
 
        $ export DPDK_TARGET=x86_64-ivshmem-linuxapp-gcc
 
@@ -106,7 +112,9 @@ has to be configured with DPDK support (``--with-dpdk``).
 2. Bootstrap, if required, as described in the `installation guide
    <INSTALL.rst>`__.
 
-3. Configure the package using the ``--with-dpdk`` flag:::
+3. Configure the package using the ``--with-dpdk`` flag:
+
+::
 
        $ ./configure --with-dpdk=$DPDK_BUILD
 
@@ -132,19 +140,27 @@ Setup Hugepages
 Allocate a number of 2M Huge pages:
 
 -  For persistent allocation of huge pages, write to hugepages.conf file
-   in `/etc/sysctl.d`:::
+   in `/etc/sysctl.d`:
+
+::
 
        $ echo 'vm.nr_hugepages=2048' > /etc/sysctl.d/hugepages.conf
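+
+   Settings in this file normally take effect at boot; to apply the file
+   immediately as well, you can load it with ``sysctl`` (a small sketch,
+   assuming the path used above):
+
+::
+
+      $ sysctl -p /etc/sysctl.d/hugepages.conf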
 
--  For run-time allocation of huge pages, use the ``sysctl`` utility:::
+-  For run-time allocation of huge pages, use the ``sysctl`` utility:
+
+::
 
        $ sysctl -w vm.nr_hugepages=N  # where N = No. of 2M huge pages
 
-To verify hugepage configuration:::
+To verify hugepage configuration:
+
+::
 
     $ grep HugePages_ /proc/meminfo
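+
+The output should resemble the following (the exact values depend on your
+configuration):
+
+::
+
+    HugePages_Total:    2048
+    HugePages_Free:     2048
+    HugePages_Rsvd:        0
+    HugePages_Surp:        0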
 
-Mount the hugepages, if not already mounted by default:::
+Mount the hugepages, if not already mounted by default:
+
+::
 
    $ mount -t hugetlbfs none /dev/hugepages
 
@@ -157,13 +173,17 @@ VFIO is prefered to the UIO driver when using recent versions of DPDK. VFIO
 support required support from both the kernel and BIOS. For the former, kernel
 version > 3.6 must be used. For the latter, you must enable VT-d in the BIOS
 and ensure this is configured via grub. To ensure VT-d is enabled via the BIOS,
-run:::
+run:
+
+::
 
     $ dmesg | grep -e DMAR -e IOMMU
 
 If VT-d is not enabled in the BIOS, enable it now.
 
-To ensure VT-d is enabled in the kernel, run:::
+To ensure VT-d is enabled in the kernel, run:
+
+::
 
     $ cat /proc/cmdline | grep iommu=pt
     $ cat /proc/cmdline | grep intel_iommu=on
@@ -171,7 +191,9 @@ To ensure VT-d is enabled in the kernel, run:::
 If VT-d is not enabled in the kernel, enable it now.
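+
+One way to do so on a GRUB-based system (file locations and commands vary by
+distribution; this is an illustrative sketch) is to append the options to the
+kernel command line and regenerate the GRUB configuration:
+
+::
+
+    # In /etc/default/grub, append to GRUB_CMDLINE_LINUX:
+    #     GRUB_CMDLINE_LINUX="... iommu=pt intel_iommu=on"
+    $ update-grub        # Debian/Ubuntu; or:
+    $ grub2-mkconfig -o /boot/grub2/grub.cfg
+    $ reboot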
 
 Once VT-d is correctly configured, load the required modules and bind the NIC
-to the VFIO driver:::
+to the VFIO driver:
+
+::
 
     $ modprobe vfio-pci
     $ /usr/bin/chmod a+x /dev/vfio
@@ -187,7 +209,9 @@ Open vSwitch should be started as described in the `installation guide
 special configuration to enable DPDK functionality. DPDK configuration
 arguments can be passed to ovs-vswitchd via the ``other_config`` column of the
 ``Open_vSwitch`` table. At a minimum, the ``dpdk-init`` option must be set to
-``true``. For example:::
+``true``. For example:
+
+::
 
     $ export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
     $ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
@@ -216,7 +240,9 @@ listed below. Defaults will be provided for all values not explicitly set.
 
 If allocating more than one GB hugepage (as for IVSHMEM), you can configure the
 amount of memory used from any given NUMA nodes. For example, to use 1GB from
-NUMA node 0, run:::
+NUMA node 0, run:
+
+::
 
     $ ovs-vsctl --no-wait set Open_vSwitch . \
         other_config:dpdk-socket-mem="1024,0"
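+
+Each comma-separated value applies to the NUMA node with the same index, so,
+for example, ``"512,512"`` would take 512 MB from each of nodes 0 and 1.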
@@ -224,7 +250,9 @@ NUMA node 0, run:::
 Similarly, if you wish to better scale the workloads across cores, then
 multiple pmd threads can be created and pinned to CPU cores by explicity
 specifying ``pmd-cpu-mask``. For example, to spawn two pmd threads and pin
-them to cores 1,2, run:::
+them to cores 1,2, run:
+
+::
 
     $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
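+
+.. note::
+  The mask is a bitmap of CPU cores: the value 6 is binary 110, i.e. bits 1
+  and 2 are set, which selects cores 1 and 2.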
 
@@ -245,33 +273,43 @@ Creating bridges and ports
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 You can now use ovs-vsctl to set up bridges and other Open vSwitch features.
-Bridges should be created with a ``datapath_type=netdev``:::
+Bridges should be created with a ``datapath_type=netdev``:
+
+::
 
     $ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
 
 Now you can add DPDK devices. OVS expects DPDK device names to start with
 ``dpdk`` and end with a portid. ovs-vswitchd should print the number of dpdk
-devices found in the log file:::
+devices found in the log file:
+
+::
 
     $ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
     $ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
 
 After the DPDK ports get added to switch, a polling thread continuously polls
 DPDK devices and consumes 100% of the core, as can be checked from 'top' and
-'ps' cmds:::
+'ps' cmds:
+
+::
 
     $ top -H
     $ ps -eLo pid,psr,comm | grep pmd
 
 Creating bonds of DPDK interfaces is slightly different to creating bonds of
 system interfaces. For DPDK, the interface type must be explicitly set. For
-example:::
+example:
+
+::
 
     $ ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 \
         -- set Interface dpdk0 type=dpdk \
         -- set Interface dpdk1 type=dpdk
 
-To stop ovs-vswitchd & delete bridge, run:::
+To stop ovs-vswitchd & delete bridge, run:
+
+::
 
     $ ovs-appctl -t ovs-vswitchd exit
     $ ovs-appctl -t ovsdb-server exit
@@ -280,23 +318,31 @@ To stop ovs-vswitchd & delete bridge, run:::
 PMD thread statistics
 ~~~~~~~~~~~~~~~~~~~~~
 
-To show current stats:::
+To show current stats:
+
+::
 
     $ ovs-appctl dpif-netdev/pmd-stats-show
 
-To clear previous stats:::
+To clear previous stats:
+
+::
 
     $ ovs-appctl dpif-netdev/pmd-stats-clear
 
 Port/rxq assigment to PMD threads
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-To show port/rxq assignment:::
+To show port/rxq assignment:
+
+::
 
     $ ovs-appctl dpif-netdev/pmd-rxq-show
 
 To change default rxq assignment to pmd threads, rxqs may be manually pinned to
-desired cores using:::
+desired cores using:
+
+::
 
     $ ovs-vsctl set Interface <iface> \
         other_config:pmd-rxq-affinity=<rxq-affinity-list>
@@ -308,7 +354,9 @@ where:
                            ``<affinity-pair>`` , ``<non-empty-list>``
 - ``<affinity-pair>`` ::= ``<queue-id>`` : ``<core-id>``
 
-For example:::
+For example:
+
+::
 
     $ ovs-vsctl set interface dpdk0 options:n_rxq=4 \
         other_config:pmd-rxq-affinity="0:3,1:7,3:8"
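+
+This pins rx queue 0 to core 3, queue 1 to core 7 and queue 3 to core 8;
+queue 2 is not pinned and keeps the default assignment.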
@@ -343,7 +391,9 @@ the `advanced install guide <INSTALL.DPDK-advanced.md>`__.
 .. note::
   Support for DPDK in the guest requires QEMU >= 2.2.0.
 
-To being, instantiate the guest:::
+To begin, instantiate the guest:
+
+::
 
     $ export VM_NAME=Centos-vm export GUEST_MEM=3072M
     $ export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
@@ -360,7 +410,9 @@ To being, instantiate the guest:::
         -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
        -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \
 
-Download the DPDK sourcs to VM and build DPDK:::
+Download the DPDK sources to the VM and build DPDK:
+
+::
 
     $ cd /root/dpdk/
     $ wget http://dpdk.org/browse/dpdk/snapshot/dpdk-16.07.zip
@@ -371,14 +423,18 @@ Download the DPDK sourcs to VM and build DPDK:::
     $ cd $DPDK_DIR
     $ make install T=$DPDK_TARGET DESTDIR=install
 
-Build the test-pmd application:::
+Build the test-pmd application:
+
+::
 
     $ cd app/test-pmd
     $ export RTE_SDK=$DPDK_DIR
     $ export RTE_TARGET=$DPDK_TARGET
     $ make
 
-Setup huge pages and DPDK devices using UIO:::
+Setup huge pages and DPDK devices using UIO:
+
+::
 
     $ sysctl vm.nr_hugepages=1024
     $ mkdir -p /dev/hugepages
@@ -398,7 +454,9 @@ Testing
 -------
 
 Below are few testcases and the list of steps to be followed. Before beginning,
-ensure a userspace bridge has been created and two DPDK ports added:::
+ensure a userspace bridge has been created and two DPDK ports added:
+
+::
 
     $ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
     $ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
@@ -407,7 +465,9 @@ ensure a userspace bridge has been created and two DPDK ports added:::
 PHY-PHY
 ~~~~~~~
 
-Add test flows to forward packets betwen DPDK port 0 and port 1:::
+Add test flows to forward packets between DPDK port 0 and port 1:
+
+::
 
     # Clear current flows
     $ ovs-ofctl del-flows br0
@@ -421,14 +481,18 @@ Transmit traffic into either port. You should see it returned via the other.
 PHY-VM-PHY (vhost loopback)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Add two ``dpdkvhostuser`` ports to bridge ``br0``:::
+Add two ``dpdkvhostuser`` ports to bridge ``br0``:
+
+::
 
     $ ovs-vsctl add-port br0 dpdkvhostuser0 \
         -- set Interface dpdkvhostuser0 type=dpdkvhostuser
     $ ovs-vsctl add-port br0 dpdkvhostuser1 \
         -- set Interface dpdkvhostuser1 type=dpdkvhostuser
 
-Add test flows to forward packets betwen DPDK devices and VM ports:::
+Add test flows to forward packets between DPDK devices and VM ports:
+
+::
 
     # Clear current flows
     $ ovs-ofctl del-flows br0
@@ -456,7 +520,9 @@ Create a VM using the following configuration:
 +----------------------+--------+-----------------+
 
 You can do this directly with QEMU via the ``qemu-system-x86_64``
-application:::
+application:
+
+::
 
     $ export VM_NAME=vhost-vm
     $ export GUEST_MEM=3072M
@@ -475,7 +541,9 @@ application:::
       -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off
 
 Alternatively, you can configure the guest using libvirt. Below is an XML
-configuration for a 'demovm' guest that can be instantiated using `virsh`:::
+configuration for a 'demovm' guest that can be instantiated using `virsh`:
+
+::
 
     <domain type='kvm'>
       <name>demovm</name>
@@ -553,7 +621,9 @@ configuration for a 'demovm' guest that can be instantiated using `virsh`:::
 Once the guest is configured and booted, configure DPDK packet forwarding
 within the guest. To accomplish this, DPDK and testpmd application have to
 be first compiled on the VM as described in **Guest Setup**. Once compiled, run
-the ``test-pmd`` application:::
+the ``test-pmd`` application:
+
+::
 
     $ cd $DPDK_DIR/app/test-pmd;
     $ ./testpmd -c 0x3 -n 4 --socket-mem 1024 -- \
@@ -561,14 +631,18 @@ the ``test-pmd`` application:::
     $ set fwd mac retry
     $ start
 
-When you finish testing, bind the vNICs back to kernel:::
+When you finish testing, bind the vNICs back to kernel:
+
+::
 
     $ $DPDK_DIR/tools/dpdk-devbind.py --bind=virtio-pci 0000:00:03.0
     $ $DPDK_DIR/tools/dpdk-devbind.py --bind=virtio-pci 0000:00:04.0
 
 .. note::
   Appropriate PCI IDs to be passed in above example. The PCI IDs can be
-  retrieved like so:::
+  retrieved like so:
+
+::
 
       $ $DPDK_DIR/tools/dpdk-devbind.py --status
 
diff --git a/INSTALL.Debian.rst b/INSTALL.Debian.rst
index 4947af1..418ce21 100644
--- a/INSTALL.Debian.rst
+++ b/INSTALL.Debian.rst
@@ -50,7 +50,9 @@ Git tree with these instructions.
 
 You do not need to be the superuser to build the Debian packages.
 
-1. Install the "build-essential" and "fakeroot" packages. For example:::
+1. Install the "build-essential" and "fakeroot" packages. For example:
+
+::
 
        $ apt-get install build-essential fakeroot
 
@@ -66,17 +68,23 @@ directory. If you've installed all the dependencies properly,
 ``dpkg-checkbuilddeps`` will exit without printing anything. If you forgot to
 install some dependencies, it will tell you which ones.
 
-4. Build the package:::
+4. Build the package:
+
+::
 
        $ fakeroot debian/rules binary
 
    This will do a serial build that runs the unit tests. This will take
    approximately 8 to 10 minutes. If you prefer, you can run a faster parallel
-   build:::
+   build:
+
+::
 
        $ DEB_BUILD_OPTIONS='parallel=8' fakeroot debian/rules binary
 
-   If you are in a big hurry, you can even skip the unit tests:::
+   If you are in a big hurry, you can even skip the unit tests:
+
+::
 
        $ DEB_BUILD_OPTIONS='parallel=8 nocheck' fakeroot debian/rules binary
 
@@ -85,7 +93,9 @@ install some dependencies, it will tell you which ones.
   There are a few pitfalls in the Debian packaging building system so that,
   occasionally, you may find that in a tree that you have using for a while,
   the build command above exits immediately without actually building anything.
-  To fix the problem, run:::
+  To fix the problem, run:
+
+::
 
       $ fakeroot debian/rules clean
 
diff --git a/INSTALL.Docker.rst b/INSTALL.Docker.rst
index 35dcce2..32885de 100644
--- a/INSTALL.Docker.rst
+++ b/INSTALL.Docker.rst
@@ -46,7 +46,9 @@ Setup
 For multi-host networking with OVN and Docker, Docker has to be started with a
 destributed key-value store. For example, if you decide to use consul as your
 distributed key-value store and your host IP address is ``$HOST_IP``, start
-your Docker daemon with:::
+your Docker daemon with:
+
+::
 
     $ docker daemon --cluster-store=consul://127.0.0.1:8500 \
         --cluster-advertise=$HOST_IP:0
@@ -87,7 +89,9 @@ The "overlay" mode
 
   Start ovn-northd daemon. This daemon translates networking intent from Docker
   stored in the OVN\_Northbound database to logical flows in ``OVN_Southbound``
-  database. For example:::
+  database. For example:
+
+::
 
       $ /usr/share/openvswitch/scripts/ovn-ctl start_northd
 
@@ -95,7 +99,9 @@ The "overlay" mode
 
    On each host, where you plan to spawn your containers, you will need to run
    the below command once. You may need to run it again if your OVS database
-   gets cleared. It is harmless to run it again in any case:::
+   gets cleared. It is harmless to run it again in any case:
+
+::
 
        $ ovs-vsctl set Open_vSwitch . \
            external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
@@ -117,7 +123,9 @@ The "overlay" mode
      Open vSwitch kernel module from upstream Linux, you will need a minumum
      kernel version of 3.18 for ``geneve``. There is no ``stt`` support in
      upstream Linux. You can verify whether you have the support in your kernel
-     as follows:::
+     as follows:
+
+::
 
          $ lsmod | grep $ENCAP_TYPE
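+
+     If the module is available but not yet loaded, loading it manually should
+     suffice, e.g. ``modprobe geneve`` for the ``geneve`` encapsulation type.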
 
@@ -126,7 +134,9 @@ The "overlay" mode
    distribution packaging for Open vSwitch (e.g. .deb or .rpm packages), or if
    you use the ovs-ctl utility included with Open vSwitch, it automatically
    configures a system-id.  If you start Open vSwitch manually, you should set
-   one up yourself. For example:::
+   one up yourself. For example:
+
+::
 
        $ id_file=/etc/openvswitch/system-id.conf
        $ test -e $id_file || uuidgen > $id_file
@@ -134,7 +144,9 @@ The "overlay" mode
 
 3. Start the ``ovn-controller``.
 
-   You need to run the below command on every boot:::
+   You need to run the below command on every boot:
+
+::
 
        $ /usr/share/openvswitch/scripts/ovn-ctl start_controller
 
@@ -146,13 +158,17 @@ The "overlay" mode
 
    The Open vSwitch driver uses the Python's flask module to listen to Docker's
    networking api calls. So, if your host does not have Python's flask module,
-   install it:::
+   install it:
+
+::
 
        $ sudo pip install Flask
 
    Start the Open vSwitch driver on every host where you plan to create your
   containers. Refer to the note on ``$OVS_PYTHON_LIBS_PATH`` that is used below
-   at the end of this document:::
+   at the end of this document:
+
+::
 
        $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
 
@@ -175,7 +191,9 @@ commands. Here are some examples.
 Create a logical switch
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-To create a logical switch with name 'foo', on subnet '192.168.1.0/24', run:::
+To create a logical switch with name 'foo', on subnet '192.168.1.0/24', run:
+
+::
 
     $ NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
 
@@ -187,7 +205,9 @@ List all logical switches
     $ docker network ls
 
 You can also look at this logical switch in OVN's northbound database by
-running the following command:::
+running the following command:
+
+::
 
     $ ovn-nbctl --db=tcp:$CENTRAL_IP:6640 ls-list
 
@@ -204,7 +224,9 @@ Create a logical port
 
 Docker creates your logical port and attaches it to the logical network in a
 single step. For example, to attach a logical port to network ``foo`` inside
-container busybox, run:::
+container busybox, run:
+
+::
 
     $ docker run -itd --net=foo --name=busybox busybox
 
@@ -212,7 +234,9 @@ List all logical ports
 ~~~~~~~~~~~~~~~~~~~~~~
 
 Docker does not currently have a CLI command to list all logical ports but you
-can look at them in the OVN database by running:::
+can look at them in the OVN database by running:
+
+::
 
     $ ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lsp-list $NID
 
@@ -250,22 +274,30 @@ The "underlay" mode
    that belongs to management logical networks. The tenant needs to fetch the
   port-id associated with the interface via which he plans to send the container
    traffic inside the spawned VM. This can be obtained by running the below
-   command to fetch the 'id' associated with the VM:::
+   command to fetch the 'id' associated with the VM:
+
+::
 
        $ nova list
 
-   and then by running:::
+   and then by running:
+
+::
 
        $ neutron port-list --device_id=$id
 
    Inside the VM, download the OpenStack RC file that contains the tenant
   information (henceforth referred to as ``openrc.sh``). Edit the file and add the
   previously obtained port-id information to the file by appending the following
-   line:::
+   line:
+
+::
 
        $ export OS_VIF_ID=$port_id
 
-   After this edit, the file will look something like:::
+   After this edit, the file will look something like:
+
+::
 
        #!/bin/bash
        export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
@@ -298,17 +330,23 @@ The "underlay" mode
    networking api calls. The driver also uses OpenStack's
    ``python-neutronclient`` libraries. If your host does not have Python's
    ``flask`` module or ``python-neutronclient`` you must install them. For
-   example:::
+   example:
+
+::
 
        $ pip install python-neutronclient
        $ pip install Flask
 
-   Once installed, source the ``openrc`` file:::
+   Once installed, source the ``openrc`` file:
+
+::
 
        $ . ./openrc.sh
 
    Start the network driver and provide your OpenStack tenant password when
-   prompted:::
+   prompted:
+
+::
 
        $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-underlay-driver \
            --bridge breth0 --detach
diff --git a/INSTALL.KVM.rst b/INSTALL.KVM.rst
index 9e0b2bd..d6f8a35 100644
--- a/INSTALL.KVM.rst
+++ b/INSTALL.KVM.rst
@@ -37,7 +37,9 @@ Setup
 -----
 
 KVM uses tunctl to handle various bridging modes, which you can install with
-the Debian/Ubuntu package ``uml-utilities``:::
+the Debian/Ubuntu package ``uml-utilities``:
+
+::
 
     $ apt-get install uml-utilities
 
@@ -45,7 +47,9 @@ Next, you will need to modify or create custom versions of the ``qemu-ifup``
 and ``qemu-ifdown`` scripts. In this guide, we'll create custom versions that
 make use of example Open vSwitch bridges that we'll describe in this guide.
 
-Create the following two files and store them in known locations. For example:::
+Create the following two files and store them in known locations. For example:
+
+::
 
    cat << EOF > /etc/ovs-ifup
     #!/bin/sh
@@ -66,18 +70,24 @@ Create the following two files and store them in known locations. For example:::
 
 The basic usage of Open vSwitch is described at the end of the `installation
 guide <INSTALL.rst>`__. If you haven't already, create a bridge named ``br0``
-with the following command:::
+with the following command:
+
+::
 
     $ ovs-vsctl add-br br0
 
 Then, add a port to the bridge for the NIC that you want your guests to
-communicate over (e.g. ``eth0``):::
+communicate over (e.g. ``eth0``):
+
+::
 
     $ ovs-vsctl add-port br0 eth0
 
 Refer to ovs-vsctl(8) for more details.
 
-Next, we'll start a guest that will use our ifup and ifdown scripts:::
+Next, we'll start a guest that will use our ifup and ifdown scripts:
+
+::
 
     $ kvm -m 512 -net nic,macaddr=00:11:22:EE:EE:EE -net \
         tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -drive \
@@ -88,7 +98,9 @@ script will add a port on the br0 bridge so that the guest will be able to
 communicate over that bridge.
 
 To get some more information and for debugging you can use Open vSwitch
-utilities such as ovs-dpctl and ovs-ofctl, For example:::
+utilities such as ovs-dpctl and ovs-ofctl, For example:
+
+::
 
     $ ovs-dpctl show
     $ ovs-ofctl show br0
diff --git a/INSTALL.Windows.rst b/INSTALL.Windows.rst
index 3cec9e7..174b21b 100644
--- a/INSTALL.Windows.rst
+++ b/INSTALL.Windows.rst
@@ -52,7 +52,9 @@ The following explains the steps in some detail.
   automake and autoconf(version 2.68).
 
   Also make sure that ``/mingw`` mount point exists. If its not, please
-  add/create the following entry in ``/etc/fstab``:::
+  add/create the following entry in ``/etc/fstab``:
+
+::
 
       'C:/MinGW /mingw'.
 
@@ -123,7 +125,9 @@ Bootstrapping
 This step is not needed if you have downloaded a released tarball. If
 you pulled the sources directly from an Open vSwitch Git tree or got a
 Git tree snapshot, then run boot.sh in the top source directory to build
-the "configure" script:::
+the "configure" script:
+
+::
 
     > ./boot.sh
 
@@ -134,7 +138,9 @@ Configuring
 
 Configure the package by running the configure script.  You should provide some
 configure options to choose the right compiler, linker, libraries, Open vSwitch
-component installation directories, etc. For example:::
+component installation directories, etc. For example:
+
+::
 
     > ./configure CC=./build-aux/cccl LD="$(which link)" \
         LIBS="-lws2_32 -liphlpapi" --prefix="C:/openvswitch/usr" \
@@ -146,7 +152,9 @@ component installation directories, etc. For example:::
   By default, the above enables compiler optimization for fast code.  For
   default compiler optimization, pass the ``--with-debug`` configure option.
 
-To configure with SSL support, add the requisite additional options:::
+To configure with SSL support, add the requisite additional options:
+
+::
 
     > ./configure CC=./build-aux/cccl LD="`which link`"  \
         LIBS="-lws2_32 -liphlpapi" --prefix="C:/openvswitch/usr" \
@@ -155,7 +163,9 @@ To configure with SSL support, add the requisite additional options:::
          --with-pthread="C:/pthread" \
          --enable-ssl --with-openssl="C:/OpenSSL-Win32"
 
-Finally, to the kernel module also:::
+Finally, to configure the kernel module as well:
+
+::
 
     > ./configure CC=./build-aux/cccl LD="`which link`" \
         LIBS="-lws2_32 -liphlpapi" --prefix="C:/openvswitch/usr" \
@@ -182,7 +192,9 @@ Building
 Once correctly configured, building Open vSwitch on Windows is similar to
 building on Linux, FreeBSD, or NetBSD.
 
-#. Run make for the ported executables in the top source directory, e.g.:::
+#. Run make for the ported executables in the top source directory, e.g.:
+
+::
 
        > make
 
@@ -198,15 +210,21 @@ building on Linux, FreeBSD, or NetBSD.
 
          > mingw-get upgrade msys-core-bin=1.0.17-1
 
-#. To run all the unit tests in Open vSwitch, one at a time:::
+#. To run all the unit tests in Open vSwitch, one at a time:
+
+::
 
        > make check
 
-   To run all the unit tests in Open vSwitch, up to 8 in parallel:::
+   To run all the unit tests in Open vSwitch, up to 8 in parallel:
+
+::
 
        > make check TESTSUITEFLAGS="-j8"
 
-#. To install all the compiled executables on the local machine, run:::
+#. To install all the compiled executables on the local machine, run:
+
+::
 
        > make install
 
@@ -236,7 +254,9 @@ the target Hyper-V machine.
 Now run ``./uninstall.cmd`` to remove the old extension. Once complete, run
 ``./install.cmd`` to insert the new one.  For this to work you will have to
 turn on ``TESTSIGNING`` boot option or 'Disable Driver Signature
-Enforcement' during boot.  The following commands can be used:::
+Enforcement' during boot.  The following commands can be used:
+
+::
 
     > bcdedit /set LOADOPTIONS DISABLE_INTEGRITY_CHECKS
     > bcdedit /set TESTSIGNING ON
@@ -251,7 +271,9 @@ existing switch, make sure to enable the "Allow Management OS" option for VXLAN
 to work (covered later).
 
 The command to create a new switch named 'OVS-Extended-Switch' using a physical
-NIC named 'Ethernet 1' is:::
+NIC named 'Ethernet 1' is:
+
+::
 
     PS > New-VMSwitch "OVS-Extended-Switch" -AllowManagementOS $true \
         -NetAdapterName "Ethernet 1"
@@ -262,7 +284,9 @@ NIC named 'Ethernet 1' is:::
 
 In the properties of any switch, you should now see "Open vSwitch
 Extension" under 'Extensions'.  Click the check box to enable the extension.
-An alternative way to do the same is to run the following command:::
+An alternative way to do the same is to run the following command:
+
+::
 
     PS > Enable-VMSwitchExtension "Open vSwitch Extension" OVS-Extended-Switch
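+
+You can also confirm the extension state from PowerShell with the standard
+Hyper-V cmdlet (a sketch; parameter names follow the Hyper-V module):
+
+::
+
+    PS > Get-VMSwitchExtension -VMSwitchName OVS-Extended-Switch \
+        -Name "Open vSwitch Extension"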
 
@@ -302,7 +326,9 @@ harmless::
     > ovs-vsctl --no-wait init
 
 .. tip::
-  If you would later like to terminate the started ovsdb-server, run:::
+  If you would later like to terminate the started ovsdb-server, run:
+
+::
 
       > ovs-appctl -t ovsdb-server exit
 
@@ -312,7 +338,9 @@ domain socket::
     > ovs-vswitchd -vfile:info --log-file --pidfile --detach
 
 .. tip::
-  If you would like to terminate the started ovs-vswitchd, run:::
+  If you would like to terminate the started ovs-vswitchd, run:
+
+::
 
       > ovs-appctl exit
 
@@ -329,7 +357,9 @@ Add bridges
 ~~~~~~~~~~~
 
 Let's start by creating an integration bridge, ``br-int`` and a PIF bridge,
-``br-pif``:::
+``br-pif``:
+
+::
 
     > ovs-vsctl add-br br-int
     > ovs-vsctl add-br br-pif
@@ -340,7 +370,9 @@ Let's start by creating an integration bridge, ``br-int`` and a PIF bridge,
   issue despite that, hit Ctrl-C to terminate ovs-vsctl and check the output to
   see if your command succeeded.
 
-Validate that ports are added by dumping from both ovs-dpctl and ovs-vsctl:::
+Validate that ports are added by dumping from both ovs-dpctl and ovs-vsctl:
+
+::
 
     > ovs-dpctl show
     system@ovs-system:
@@ -387,7 +419,9 @@ switch using the instructions above.  In OVS for Hyper-V, we use a the name of
 that specific adapter as a special name to refer to that adapter. By default it
 is created under the following rule ``vEthernet (<name of the switch>)``.
 
-As a whole example, if we issue the following in a powershell console:::
+As a whole example, if we issue the following in a powershell console:
+
+::
 
    PS C:\package\binaries> Get-NetAdapter | select Name,MacAddress,InterfaceDescription
     Name                   MacAddress         InterfaceDescription
@@ -403,12 +437,16 @@ As a whole example, if we issue the following in a powershell console:::
 
 We can see that we have a switch(external) created upon adapter name
 'Ethernet0' with an internal port under name ``vEthernet (external)``. Thus
-resulting into the following ovs-vsctl commands:::
+resulting in the following ovs-vsctl commands:
+
+::
 
     > ovs-vsctl add-port br-pif Ethernet0
     > ovs-vsctl add-port br-pif "vEthernet (external)"
 
-Dumping the ports should show the additional ports that were just added:::
+Dumping the ports should show the additional ports that were just added:
+
+::
 
     > ovs-dpctl show
     system@ovs-system:
@@ -459,11 +497,15 @@ assumed to be the Hyper-V switch with OVS extension enabled.::
     PS> Connect-VMNetworkAdapter -VMNetworkAdapter $vnic[0] \
           -SwitchName OVS-Extended-Switch
 
-Next, add the VIFs to ``br-int``:::
+Next, add the VIFs to ``br-int``:
+
+::
 
     > ovs-vsctl add-port br-int ovs-port-a
 
-Dumping the ports should show the additional ports that were just added:::
+Dumping the ports should show the additional ports that were just added:
+
+::
 
     > ovs-dpctl show
     system@ovs-system:
@@ -498,19 +540,25 @@ Add patch ports and configure VLAN tagging
 The Windows Open vSwitch implementation support VLAN tagging in the switch.
 Switch VLAN tagging along with patch ports between ``br-int`` and ``br-pif`` is
 used to configure VLAN tagging functionality between two VMs on different
-Hyper-Vs.  To start, add a patch port from ``br-int`` to ``br-pif``:::
+Hyper-Vs.  To start, add a patch port from ``br-int`` to ``br-pif``:
+
+::
 
     > ovs-vsctl add-port br-int patch-to-pif
     > ovs-vsctl set interface patch-to-pif type=patch \
         options:peer=patch-to-int
 
-Add a patch port from ``br-pif`` to ``br-int``:::
+Add a patch port from ``br-pif`` to ``br-int``:
+
+::
 
     > ovs-vsctl add-port br-pif patch-to-int
     > ovs-vsctl set interface patch-to-int type=patch \
         options:peer=patch-to-pif
 
-Re-Add the VIF ports with the VLAN tag:::
+Re-add the VIF ports with the VLAN tag:
+
+::
 
     > ovs-vsctl add-port br-int ovs-port-a tag=900
     > ovs-vsctl add-port br-int ovs-port-b tag=900
@@ -520,7 +568,9 @@ Add tunnels
 
 The Windows Open vSwitch implementation support VXLAN and STT tunnels. To add
 tunnels. For example, first add the tunnel port between 172.168.201.101 <->
-172.168.201.102:::
+172.168.201.102:
+
+::
 
     > ovs-vsctl add-port br-int tun-1
     > ovs-vsctl set Interface tun-1 type=<port-type>
@@ -529,7 +579,9 @@ tunnels. For example, first add the tunnel port between 172.168.201.101 <->
     > ovs-vsctl set Interface tun-1 options:in_key=flow
     > ovs-vsctl set Interface tun-1 options:out_key=flow
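+
+Here ``<port-type>`` is one of the tunnel types named above, i.e. ``vxlan``
+or ``stt``.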
 
-...and the tunnel port between 172.168.201.101 <-> 172.168.201.105:::
+...and the tunnel port between 172.168.201.101 <-> 172.168.201.105:
+
+::
 
     > ovs-vsctl add-port br-int tun-2
     > ovs-vsctl set Interface tun-2 type=<port-type>
@@ -554,12 +606,16 @@ daemons via ``make install``.
 .. note::
   The commands shown here can be run from MSYS bash or Windows command prompt.
 
-To start, create the database:::
+To start, create the database:
+
+::
 
     > ovsdb-tool create C:/openvswitch/etc/openvswitch/conf.db \
         "C:/openvswitch/usr/share/openvswitch/vswitch.ovsschema"
 
-Create the ovsdb-server service and start it:::
+Create the ovsdb-server service and start it:
+
+::
 
     > sc create ovsdb-server \
         binpath="C:/openvswitch/usr/sbin/ovsdb-server.exe \
@@ -571,30 +627,42 @@ Create the ovsdb-server service and start it:::
 .. tip::
   One of the common issues with creating a Windows service is with mungled
   paths.  You can make sure that the correct path has been registered with the
-  Windows services manager by running:::
+  Windows services manager by running:
+
+::
 
       > sc qc ovsdb-server
 
-Check that the service is healthy by running:::
+Check that the service is healthy by running:
+
+::
 
     > sc query ovsdb-server
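+
+A healthy service reports a ``RUNNING`` state. A sketch of the expected
+output (abridged; the exact fields vary by Windows version):
+
+::
+
+    SERVICE_NAME: ovsdb-server
+        STATE              : 4  RUNNING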
 
-Initialize the database:::
+Initialize the database:
+
+::
 
     > ovs-vsctl --no-wait init
 
-Create the ovs-vswitchd service and start it:::
+Create the ovs-vswitchd service and start it:
+
+::
 
     > sc create ovs-vswitchd \
       binpath="C:/openvswitch/usr/sbin/ovs-vswitchd.exe \
       --pidfile -vfile:info --log-file  --service --service-monitor"
     > sc start ovs-vswitchd
 
-Check that the service is healthy by running:::
+Check that the service is healthy by running:
+
+::
 
     > sc query ovs-vswitchd
 
-To stop and delete the services, run:::
+To stop and delete the services, run:
+
+::
 
     > sc stop ovs-vswitchd
     > sc stop ovsdb-server
diff --git a/INSTALL.XenServer.rst b/INSTALL.XenServer.rst
index f3855d4..5b380f4 100644
--- a/INSTALL.XenServer.rst
+++ b/INSTALL.XenServer.rst
@@ -41,7 +41,9 @@ Git tree.  The recommended build environment to build RPMs for Citrix XenServer
 is the DDK VM available from Citrix.
 
 1. If you are building from an Open vSwitch Git tree, then you will need to
-   first create a distribution tarball by running:::
+   first create a distribution tarball by running:
+
+::
 
        $ ./boot.sh
        $ ./configure
@@ -58,7 +60,9 @@ is the DDK VM available from Citrix.
 3. In the DDK VM, unpack the distribution tarball into a temporary directory
    and "cd" into the root of the distribution tarball.
 
-4. To build Open vSwitch userspace, run:::
+4. To build Open vSwitch userspace, run:
+
+::
 
        $ rpmbuild -bb xenserver/openvswitch-xen.spec
 
@@ -69,7 +73,9 @@ is the DDK VM available from Citrix.
    - ``openvswitch-debuginfo``
 
    The above command automatically runs the Open vSwitch unit tests.  To
-   disable the unit tests, run:::
+   disable the unit tests, run:
+
+::
 
        $ rpmbuild -bb --without check xenserver/openvswitch-xen.spec
 
@@ -79,7 +85,9 @@ Build Parameters
 ``openvswitch-xen.spec`` needs to know a number of pieces of information about
 the XenServer kernel.  Usually, it can figure these out for itself, but if it
 does not do it correctly then you can specify them yourself as parameters to
-the build.  Thus, the final ``rpmbuild`` step above can be elaborated as:::
+the build.  Thus, the final ``rpmbuild`` step above can be elaborated as:
+
+::
 
     $ VERSION=<Open vSwitch version>
     $ KERNEL_NAME=<Xen Kernel name>
@@ -103,7 +111,9 @@ where:
   ``kernel-NAME-xen``, without the ``kernel-`` prefix.
 
 ``<Xen Kernel version>``
-  is the output of:::
+  is the output of:
+
+::
 
       $ rpm -q --queryformat "%{Version}-%{Release}" <kernel-devel-package>,
 
@@ -118,7 +128,9 @@ where:
 
 For XenServer 6.5 or above, the kernel version naming no longer contains
 KERNEL_FLAVOR.  In fact, only providing the ``uname -r`` output is enough.  So,
-the final ``rpmbuild`` step changes to:::
+the final ``rpmbuild`` step changes to:
+
+::
 
     $ KERNEL_UNAME=<`uname -r` output>
     $ rpmbuild \
@@ -130,7 +142,9 @@ Installing Open vSwitch for XenServer
 
 To install Open vSwitch on a XenServer host, or to upgrade to a newer version,
 copy the ``openvswitch`` and ``openvswitch-modules-xen`` RPMs to that host with
-``scp``, then install them with ``rpm -U``, e.g.:::
+``scp``, then install them with ``rpm -U``, e.g.:
+
+::
 
     $ scp openvswitch-$VERSION-1.i386.rpm \
         openvswitch-modules-xen-$XEN_KERNEL_VERSION-$VERSION-1.i386.rpm \
@@ -141,7 +155,9 @@ copy the ``openvswitch`` and ``openvswitch-modules-xen`` RPMs to that host with
     $ rpm -U openvswitch-$VERSION-1.i386.rpm \
         openvswitch-modules-xen-$XEN_KERNEL_VERSION-$VERSION-1.i386.rpm
 
-To uninstall Open vSwitch from a XenServer host, remove the packages:::
+To uninstall Open vSwitch from a XenServer host, remove the packages:
+
+::
 
     $ ssh root@<host>
     # Enter <host>'s root password again.
diff --git a/INSTALL.userspace.rst b/INSTALL.userspace.rst
index e327c0e..baf3ce1 100644
--- a/INSTALL.userspace.rst
+++ b/INSTALL.userspace.rst
@@ -63,7 +63,9 @@ Using the Userspace Datapath with ovs-vswitchd
 ----------------------------------------------
 
 To use ovs-vswitchd in userspace mode, create a bridge with
-``datapath_type=netdev`` in the configuration database.  For example:::
+``datapath_type=netdev`` in the configuration database.  For example:
+
+::
 
     $ ovs-vsctl add-br br0
     $ ovs-vsctl set bridge br0 datapath_type=netdev
@@ -76,7 +78,9 @@ the same as the bridge, as well as for each configured internal interface.
 
 Currently, on FreeBSD, the functionality required for in-band control support
 is not implemented.  To avoid related errors, you can disable the in-band
-support with the following command:::
+support with the following command:
+
+::
 
     $ ovs-vsctl set bridge br0 other_config:disable-in-band=true
 
@@ -87,7 +91,9 @@ On Linux, when a physical interface is in use by the userspace datapath,
 packets received on the interface still also pass into the kernel TCP/IP stack.
 This can cause surprising and incorrect behavior.  You can use "iptables" to
 avoid this behavior, by using it to drop received packets.  For example, to
-drop packets received on eth0:::
+drop packets received on eth0:
+
+::
 
     $ iptables -A INPUT -i eth0 -j DROP
     $ iptables -A FORWARD -i eth0 -j DROP
-- 
1.9.1
