Maybe I should point out more clearly that there are several ways of
providing disk space for your instances.
If you choose file-based storage for your instances (e.g. ephemeral
disks as qcow2 images), you'll need a lot of space in
/var/lib/nova/instances as ephemeral storage. If you delete an
instance, its disk is gone as well.
Then there's Cinder, which can provide persistent storage for your
instances or additional volumes for existing VMs. If you delete an
instance, its volume will not be deleted (if you choose so).
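As a small sketch with hypothetical names (vol1, vm1 etc. are just
examples, assuming the openstack CLI is configured):

  # create a persistent 50 GB volume and boot an instance from it
  openstack volume create --size 50 vol1
  openstack server create --volume vol1 --flavor m1.small --network net1 vm1
  # or attach an additional volume to an existing VM
  openstack server add volume vm1 vol2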
Cinder can be configured with different backends, e.g. LVM or Ceph.
The short description in [1] only scratches the surface, but maybe it
helps with understanding the basics. For now you can ignore the HA
references.
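Just to illustrate, a minimal LVM backend in /etc/cinder/cinder.conf
looks roughly like this (using the VG name cinder-volumes from the
install guide, adjust to your setup):

  [DEFAULT]
  enabled_backends = lvm

  [lvm]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  iscsi_protocol = iscsi
  iscsi_helper = tgtadm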
So, in conclusion, you'll need to make a choice (for now) about how to
provide disk space for your instances (ephemeral or persistent).
You'll see "phys_disk" grow if you provide more space to
/var/lib/nova/instances. For example, we use Ceph as backend and have
/var/lib/nova/instances mounted on shared storage, which gives us
22 TB of space:
Final resource view: name=compute2 phys_ram=64395MB used_ram=68048MB phys_disk=22877GB used_disk=490GB
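phys_disk basically reflects the filesystem backing the instances
path, so you can cross-check it with (assuming the default path):

  df -h /var/lib/nova/instances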
If you use cinder with LVM these statistics will differ, of course.
I hope this clears it up a little bit.
Regards
[1] https://docs.openstack.org/ha-guide/storage-ha-backend.html
Quoting Bernd Bausch <berndbau...@gmail.com>:
Your node uses logical volume h020--vg-root as its root filesystem.
This logical volume has a size of 370GB:
# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
(...)
└─sdk5 LVM2_member 371.5G
  ├─h020--vg-root ext4 370.6G /
  └─h020--vg-swap_1 swap 976M [SWAP]
Now you created another physical volume, /dev/sdb1, and added it to
volume group h020-vg. This increases the size of the *volume group*,
but not the size of the *logical volume*.
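You can verify this, for example, with:

  vgs h020-vg   # VFree now includes the added capacity
  lvs h020-vg   # the logical volume sizes are unchanged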
If you want to provide more space to instances' ephemeral storage, you
could:
* increase the size of the root volume h020--vg-root using the
lvextend command, then grow the filesystem on it with resize2fs.
Since a mounted ext4 filesystem can be grown online, this should not
require a reboot (see the sketch after this list).
or
* create another logical volume, e.g. lvcreate -L 1000G -n
lv-instances h020-vg for a 1000 GB logical volume, and mount it under
/var/lib/nova/instances: mount /dev/h020-vg/lv-instances
/var/lib/nova/instances
(before mounting, create a filesystem on lv-instances and transfer
the data from /var/lib/nova/instances to the new filesystem. Also,
don't forget to persist the mount by adding it to /etc/fstab. A full
command sketch follows below.)
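A rough sketch of both options, assuming the VG/LV names from above
and ext4 (sizes and names are only examples):

  # option 1: grow the root LV and the filesystem on it
  lvextend -L +1000G /dev/h020-vg/root
  resize2fs /dev/h020-vg/root

  # option 2: dedicated LV for /var/lib/nova/instances
  lvcreate -L 1000G -n lv-instances h020-vg
  mkfs.ext4 /dev/h020-vg/lv-instances
  service nova-compute stop
  mount /dev/h020-vg/lv-instances /mnt
  cp -a /var/lib/nova/instances/. /mnt/   # preserves ownership (nova:nova)
  umount /mnt
  mount /dev/h020-vg/lv-instances /var/lib/nova/instances
  echo '/dev/h020-vg/lv-instances /var/lib/nova/instances ext4 defaults 0 2' >> /etc/fstab
  service nova-compute start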
The second option is by far the better one, in my opinion, as you
should separate operating system files from OpenStack data.
You say that you are new to OpenStack. That's fine, but you seem to be
lacking the fundamentals of Linux system management as well. You can't
learn OpenStack without a certain level of Linux skills. At least learn
about LVM (it's not that hard) and filesystems. You will also need to
have networking fundamentals and Linux networking tools under your belt.
Good luck!
Bernd Bausch
On 8/9/2018 2:30 AM, Jay See wrote:
Hi Eugen,
Thanks for your suggestions. I went back to find out more about adding
the new HD to the VG; I think it was successful (logs are at the end
of the mail).
I followed this link:
https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group
But the nova-compute log still shows the wrong phys_disk size. Even in
Horizon it doesn't get updated with the new HD added to the compute
node.
2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view: name=h020 phys_ram=515767MB used_ram=512MB phys_disk=364GB used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[]
I understood they are not supposed to be mounted
under /var/lib/nova/instances, so I removed them now.
Thanks
Jay.
root@h020:~# vgdisplay
--- Volume group ---
VG Name h020-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 371.52 GiB
PE Size 4.00 MiB
Total PE 95109
Alloc PE / Size 95105 / 371.50 GiB
Free PE / Size 4 / 16.00 MiB
VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U
root@h020:~# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
root@h020:~# pvdisplay
--- Physical volume ---
PV Name /dev/sdk5
VG Name h020-vg
PV Size 371.52 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 95109
Free PE 4
Allocated PE 95105
PV UUID BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR
"/dev/sdb1" is a new physical volume of "5.46 TiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 5.46 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443
root@h020:~# vgextend /dev/h020-vg /dev/sdb1
Volume group "h020-vg" successfully extended
root@h020:~# vgdisplay
--- Volume group ---
VG Name h020-vg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 5.82 TiB
PE Size 4.00 MiB
Total PE 1525900
Alloc PE / Size 95105 / 371.50 GiB
Free PE / Size 1430795 / 5.46 TiB
VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U
root@h020:~# service nova-compute restart
root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 5.5T
├─sda1 vfat 500M ESP
├─sda2 vfat 100M DIAGS
└─sda3 vfat 2G OS
sdb 5.5T
└─sdb1 LVM2_member 5.5T
sdk 372G
├─sdk1 ext2 487M /boot
├─sdk2 1K
└─sdk5 LVM2_member 371.5G
├─h020--vg-root ext4 370.6G /
└─h020--vg-swap_1 swap 976M [SWAP]
root@h020:~# pvscan
PV /dev/sdk5 VG h020-vg lvm2 [371.52 GiB / 16.00 MiB free]
PV /dev/sdb1 VG h020-vg lvm2 [5.46 TiB / 5.46 TiB free]
Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0 ]
root@h020:~# vgs
VG #PV #LV #SN Attr VSize VFree
h020-vg 2 2 0 wz--n- 5.82t 5.46t
root@h020:~# vi /var/log/nova/nova-compute.log
root@h020:~#
On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block <ebl...@nde.ag> wrote:
Okay, I'm really not sure if I understand your setup correctly.
The server does not add them automatically; I tried to mount them. I
tried the way discussed in the page with /dev/sdb only. The other
hard disks I have mounted myself. Yes, I can see them in the lsblk
output below.
What do you mean by "tried with /dev/sdb"? I assume this is a
fresh setup and Cinder didn't work yet, am I right?
The new disks won't be added automatically to your cinder
configuration, if that's what you expected. You'll have to create
new physical volumes and then extend the existing VG to use the new disks.
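For example (assuming the new disk is /dev/sdb and the VG is named
cinder-volumes as in the install guide):

  pvcreate /dev/sdb1
  vgextend cinder-volumes /dev/sdb1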
In the nova-compute logs I can only see the main hard disk in the
complete phys_disk; it was supposed to show more phys_disk available,
at least 5.8 TB, if only /dev/sdb is added, as per my understanding.
(Maybe I am thinking about it the wrong way; I want to increase my
compute node's disk size to launch more VMs.)
If you plan to use cinder volumes as disks for your instances, you
don't need much space in /var/lib/nova/instances, but rather more
space available for cinder, so you'll need to grow the VG.
Regards
Quoting Jay See <jayachander...@gmail.com>:
Hi,
Thanks for the quick response.
- what do you mean by "disks are not added"? Does the server
recognize them? Do you see them in the output of "lsblk"?
The server does not add them automatically; I tried to mount them. I
tried the way discussed in the page with /dev/sdb only. The other
hard disks I have mounted myself. Yes, I can see them in the lsblk
output below.
root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 5.5T
├─sda1 vfat 500M ESP
├─sda2 vfat 100M DIAGS
└─sda3 vfat 2G OS
sdb 5.5T
├─sdb1 5.5T
├─cinder--volumes-cinder--volumes--pool_tmeta 84M
│ └─cinder--volumes-cinder--volumes--pool 5.2T
└─cinder--volumes-cinder--volumes--pool_tdata 5.2T
  └─cinder--volumes-cinder--volumes--pool 5.2T
sdc 5.5T
└─sdc1 xfs 5.5T
sdd 5.5T
└─sdd1 xfs 5.5T /var/lib/nova/instances/sdd1
sde 5.5T
└─sde1 xfs 5.5T /var/lib/nova/instances/sde1
sdf 5.5T
└─sdf1 xfs 5.5T /var/lib/nova/instances/sdf1
sdg 5.5T
└─sdg1 xfs 5.5T /var/lib/nova/instances/sdg1
sdh 5.5T
└─sdh1 xfs 5.5T /var/lib/nova/instances/sdh1
sdi 5.5T
└─sdi1 xfs 5.5T /var/lib/nova/instances/sdi1
sdj 5.5T
└─sdj1 xfs 5.5T /var/lib/nova/instances/sdj1
sdk 372G
├─sdk1 ext2 487M /boot
├─sdk2 1K
└─sdk5 LVM2_member 371.5G
  ├─h020--vg-root ext4 370.6G /
  └─h020--vg-swap_1 swap 976M [SWAP]
- Do you already have existing physical volumes for cinder (assuming
you deployed cinder with lvm as in the provided link)?
Yes, I have tried one of the HDs (/dev/sdb).
- If the system recognizes the new disks and you deployed cinder with
lvm you can create a new physical volume and extend your existing
volume group to have more space for cinder. Is this a failing step or
something else?
The system does not recognize the disks automatically; I have manually
mounted them or added them to cinder.
In the nova-compute logs I can only see the main hard disk in the
complete phys_disk; it was supposed to show more phys_disk available,
at least 5.8 TB, if only /dev/sdb is added, as per my understanding.
(Maybe I am thinking about it the wrong way; I want to increase my
compute node's disk size to launch more VMs.)
2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] Final resource view: name=h020 phys_ram=515767MB used_ram=512MB phys_disk=364GB used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[]
- Please describe more precisely what exactly you tried and what
exactly fails.
As explained in the previous point, I want to increase the phys_disk
size to use the compute node more efficiently. So, to add the HDs to
the compute node, I am installing cinder on the compute node to add
all the HDs.
I might be doing something wrong.
Thanks and Regards,
Jayachander.
On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block <ebl...@nde.ag> wrote:
Hi,
a couple of questions come up:
- what do you mean by "disks are not added"? Does the server
recognize them? Do you see them in the output of "lsblk"?
- Do you already have existing physical volumes for cinder (assuming
you deployed cinder with lvm as in the provided link)?
- If the system recognizes the new disks and you deployed cinder with
lvm you can create a new physical volume and extend your existing
volume group to have more space for cinder. Is this a failing step or
something else?
- Please describe more precisely what exactly you tried and what
exactly fails.
The failing neutron-l3-agent shouldn't have anything to do with your
disk layout, so it's probably something else.
Regards,
Eugen
Quoting Jay See <jayachander...@gmail.com>:
Hi,
I am installing OpenStack Queens on Ubuntu Server.
My server has extra hard disk(s) apart from the main hard disk where
the OS (Ubuntu) is running.
(https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html)
As suggested in cinder (above link), I have been trying to add the new
hard disk, but the other hard disks are not getting added.
Can anyone tell me what I am missing to add these hard disks?
Other info: neutron-l3-agent on the controller is not running; is it
related to this issue? I am thinking it is not related to this issue.
I am new to OpenStack.
~ Jayachander.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack