Your node uses logical volume /h020--vg-root/ as its root filesystem.
This logical volume has a size of 370GB:
# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME                FSTYPE       SIZE    MOUNTPOINT  LABEL
(...)
└─sdk5              LVM2_member  371.5G
* ├─h020--vg-root   ext4         370.6G  /
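A quick way to confirm how much of that space is still unallocated is to ask
LVM directly (a minimal sketch; the VG name h020-vg is inferred from the
device-mapper name above, so adjust if yours differs):
# vgs h020-vg   # VSize vs. VFree shows how much room is left in the group
# lvs h020-vg   # sizes of the logical volumes carved out of it
# pvs           # which physical disks back the group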
Got it! Thank you, Jay! - Cody
On Wed, Aug 8, 2018 at 11:36 AM Jay Pipes wrote:
>
> So, that is normal operation, actually. The conductor calls the
> scheduler to find a place for your requested instances. The scheduler
> responded to the conductor that, sorry, there were no hosts that were
> able to match the request (I don't know what the details of that
> request were).
Hai Eugen,
Thanks for your suggestions. I went back to find out more about adding the
new HD to the VG, and I think it was successful. (Logs are at the end of the mail.)
Followed this link -
https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group
But still on the nova-compute log
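For anyone following the same path, the sequence that guide describes boils
down to the following (a sketch only; /dev/sdc and the cinder-volumes group
name are assumptions here, substitute the device and VG from your own
lsblk/vgs output):
# pvcreate /dev/sdc                  # initialize the new disk as an LVM physical volume
# vgextend cinder-volumes /dev/sdc   # add it to the existing volume group
# vgs cinder-volumes                 # confirm the extra capacity shows up under VFree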
So, that is normal operation, actually. The conductor calls the
scheduler to find a place for your requested instances. The scheduler
responded to the conductor that, sorry, there were no hosts that were
able to match the request (I don't know what the details of that request
were).
And so th
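When the scheduler reports that no hosts matched, its log normally names the
filter that eliminated them, and the hypervisor views show the capacity it was
working with (a sketch; the log path is the usual Ubuntu packaging location
and the exact message wording varies by release):
# grep -i 'returned 0 hosts' /var/log/nova/nova-scheduler.log
# openstack hypervisor list         # compute nodes the scheduler knows about
# openstack hypervisor stats show   # aggregate vCPU/RAM/disk used vs. available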
Victoria,
Thank you for everything you've done with the Outreachy program!
Amy (spotz)
On Tue, Aug 7, 2018 at 6:47 PM, Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com> wrote:
> Hi all,
>
> I'm reaching out to let you know that I'll be stepping down as
> coordinator for OpenStack next round.
Hi Jay,
Thank you for getting back to me. I attached the log in my previous reply,
but I guess Gmail hid it from you as a quoted message. Here it is again:
From nova-conductor.log
### BEGIN ###
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
[req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
2499343
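To pull the full traceback behind an entry like that, grepping for the request
ID across the nova logs usually works (a sketch; /var/log/nova is the typical
Ubuntu packaging path and may differ in your deployment):
# grep req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 /var/log/nova/*.log
# grep -A 20 'ERROR nova.conductor.manager' /var/log/nova/nova-conductor.log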
On 08/08/2018 09:37 AM, Cody wrote:
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work.
> On 08/08/2018 07:19 AM, Bernd Bausch wrote:
> > I would think you don't even reach the scheduling stage. Why bother
> > looking for a suitable compute node if you exceeded your quota anyway?
> >
> > The message is in the conductor log because it's the conductor that does
> > most of the work. The
Okay, I'm really not sure if I understand your setup correctly.
The server does not add them automatically; I tried to mount them. I tried the
way they discussed on the page with /dev/sdb only. The other hard disks I have
mounted myself. Yes, I can see them in the lsblk output, as below
What do you m
Thank you Victoria for the initiative and the effort all these years!
On a related note, I will continue to coordinate OpenStack Outreachy for
the next round and if anyone else would like to join the effort, please
feel free to contact me or Victoria.
Best,
Mahati
On Wed, Aug 8, 2018 at 5:17 AM,
Thanks, Victoria, for all your efforts, which are highly appreciated!
---
Emilien Macchi
On Tue, Aug 7, 2018, 7:48 PM Victoria Martínez de la Cruz, <
victo...@vmartinezdelacruz.com> wrote:
> Hi all,
>
> I'm reaching out to let you know that I'll be stepping down as
> coordinator for OpenStack next round.
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work. The others are just sl
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work. The others are just slackers (like nova-api) or wait
for instruction
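If quota really is the limiting factor, it can be checked and, where
appropriate, raised without touching the scheduler at all (a sketch; "demo" is
a placeholder project name and the numbers are examples only):
# openstack limits show --absolute   # usage vs. limits for the current project
# openstack quota show demo          # configured limits for a given project
# openstack quota set --instances 20 --cores 40 --ram 51200 demo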
Hai,
Thanks for the quick response.
- what do you mean by "disks are not added"? Does the server recognize
them? Do you see them in the output of "lsblk"?
The server does not add them automatically; I tried to mount them. I tried the
way they discussed on the page with /dev/sdb only. The other hard disks
Hi,
there are a couple of questions that come up:
- what do you mean by "disks are not added"? Does the server recognize
them? Do you see them in the output of "lsblk"?
- Do you already have existing physical volumes for cinder (assuming
you deployed cinder with LVM as in the provided link)? A quick check is
sketched below.
-
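A quick way to check that second point, i.e. whether LVM already has physical
volumes and a volume group set aside for cinder (a sketch, assuming the
default cinder-volumes name from the install guide):
# pvs                  # physical volumes and the VG each one belongs to
# vgs                  # volume groups; the install guide's default is cinder-volumes
# lvs cinder-volumes   # any volumes cinder has already created, if the VG exists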
Hai,
I am installing OpenStack Queens on Ubuntu Server.
My server has extra hard disk(s) apart from the main hard disk where the
OS (Ubuntu) is running.
(
https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html
)
As suggested in cinder (above link), I have been trying to add t
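For reference, the storage-node part of that guide amounts to initializing the
extra disk for LVM rather than mounting it (a sketch; /dev/sdb is the guide's
example device, substitute whichever disk lsblk shows as unused):
# apt install lvm2 thin-provisioning-tools   # supporting packages, if not already present
# pvcreate /dev/sdb                          # mark the raw disk as an LVM physical volume
# vgcreate cinder-volumes /dev/sdb           # the volume group cinder's LVM backend expects
# vgs cinder-volumes                         # confirm it exists and shows the disk's size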
On 8 August 2018 at 00:47, Victoria Martínez de la Cruz
wrote:
> I'm reaching out to let you know that I'll be stepping down as
> coordinator for OpenStack next round. I have been contributing to this effort
> for several rounds now and I believe it is a good moment for somebody else to
> take the