Here's a simple example to (hopefully) make it clear:
$ cd /srv/node/vdc/quarantined/containers/3a7b4bae41a17d0f54b247c727b4f0cd
$ ls -l
total 20
-rw------- 1 swift swift 18432 Aug 15 2016 3a7b4bae41a17d0f54b247c727b4f0cd.db
$ sudo swift-container-info ./3a7b4bae41a17d0f54b247c727b4f0cd.db
Tr
Thanks! But I haven't fully understood...
Do you mean on each of the physical hosts, I should create one neutron port and
interface for each of the tenant networks, and write appropriate routes?
If so, I'm worried that would result in too many ports, interfaces, and routes
on each host.
Regards,
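For reference, a minimal sketch of what that per-network setup could look like
for a single tenant network. Every name here (tenant-net-101, VLAN ID 101, the
bond0 physical interface, and the addresses) is hypothetical:
$ # create a Neutron port so the host's address is reserved in the tenant subnet
$ # (network name, port name, and host binding below are made up for illustration)
$ openstack port create --network tenant-net-101 --host $(hostname) host-access-101
$ # add a VLAN interface on the host for that network's segmentation ID
$ sudo ip link add link bond0 name bond0.101 type vlan id 101
$ sudo ip addr add 10.0.101.250/24 dev bond0.101   # the fixed IP from the port
$ sudo ip link set bond0.101 up
Reserving the address through a Neutron port keeps it from being handed to an
instance, but the concern above is fair: this repeats once per tenant network,
so the number of ports, VLAN interfaces, and connected routes on each host
grows linearly with the number of networks.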
Hi,
Lately I've been testing out the virtio-scsi libvirt driver for block
devices. While it seems to work well and does support unmap/discard, I'm
having problems when it comes to attaching additional cinder block
devices. Essentially, I'm able to provision VMs from block devices, with
the bl
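As context for anyone reproducing this: the usual way to request the
virtio-scsi model from nova is through Glance image properties. A minimal
sketch, where the image name "my-image" is a placeholder:
$ # "my-image" is hypothetical; the two property keys are the standard ones
$ openstack image set --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi my-image
Instances booted from an image carrying these properties get a virtio-scsi
controller, which is the setup that exposes the unmap/discard support
mentioned above.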
Hello Anastasia,
thanks for the feedback and your hard work.
I will clone the new master, try to build it again, and let you know my
findings.
Warmest regards,
George
Hi Georgios,
The difference was caused by the presence of metadata caching in nova
and the lack of caching in ec2-api
2017-06-15 7:01 GMT+00:00 duhongwei :
>
> Hi guys,
>
> Now I'm using a vlan based Neutron network and there's no IP overlap.
> Meanwhile I want to let the physical hosts be able to access each of the
> tenant vlan networks (still keep tenant networks isolated).
>
> Any ideas?
Well, you could create
Hi guys,
Now I'm using a vlan based Neutron network and there's no IP overlap. Meanwhile
I want to let the physical hosts be able to access each of the tenant vlan
networks (still keep tenant networks isolated).
Any ideas?
Regards,
Dastan