I'm reading through all of the documentation at
https://ovirt.org/documentation/, and am a bit overwhelmed with all of the
different options for installing oVirt.
My particular use case is that I'm looking for a way to manage VMs on multiple
physical servers from 1 interface, and be able to de
but most are now 128GB
> 24core or more.
>
> oVirt Node NG is a prepackaged installer for an oVirt hypervisor/gluster
> host; with its Cockpit interface you can create and install the hosted-engine
> VM for the user and admin web interface. It's very good on enterprise serve
ons with 8GB RAM.
> > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > CentOS7 was OK with 1GB, CentOS6 maybe 512MB.
> > > The tendency is always increasing with updated OS versions.
> >
> > > My minimum ovirt systems were mostly 48
> > > > > you're probably better off with just plain KVM/qemu and using
> > > > > virt-manager for the interface.
> >
> > > > > Those memory/cpu requirements you listed are really tiny and I
> > > > > wouldn't recommend e
d SSDs for gluster storage. You could get
> away with non-RAID to save money, since you can do replica three with gluster,
> meaning your data is fully replicated across all three hosts.
>
> On Tue, Jun 23, 2020 at 5:17 PM David White via Users wrote:
>
> > Thanks.
>
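The replica-three layout described in the quote can be sketched as follows; the host and brick names are hypothetical:

```shell
# Create a replica-3 Gluster volume: every file is fully copied to all three
# bricks, so losing any one host still leaves two complete copies.
gluster volume create data replica 3 \
  host1:/gluster_bricks/data/data \
  host2:/gluster_bricks/data/data \
  host3:/gluster_bricks/data/data
gluster volume start data
```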
hile much more systems as
> ovirt nodes (CPU & RAM) to host VMs.
> In case of a 4-node setup - 3 hosts have the gluster data, and the 4th is
> not part of the gluster, just hosting VMs.
>
> Best Regards,
> Strahil Nikolov
>
> On 19 July 2020 at 15:25:10 GMT+
Hi,
I started an email thread a couple months ago, and felt like I got some great
feedback and suggestions on how to best setup an oVirt cluster. Thanks for your
responses thus far. My goal is to take a total of 3-4 servers that I can use for
both the storage and the virtualization, and I want bo
:)
Ok, after all of the questions I've pestered you guys with, I'm actually
getting started. Thank you all for your help.
A few questions right off the bat.
For this install, I'm installing the latest (4.4.2) onto CentOS 8.
1) What's the difference between the `ovirt-hosted-engine-setup` com
> Please see my reply from a few minutes ago to the thread
> "[ovirt-users] ovirt-imageio : can't upload / download".
Thank you.
I read through the "ovirt-imageio: can't upload / download" thread, and your
brief glossary was very helpful. Perhaps it would make sense to put some basic
terminology
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, August 21, 2020 8:19 PM, David White via Users
wrote:
> > Please see my reply from a few minutes ago to the thread
> > "[ovirt-users] ovirt-imageio : can't upload / download".
>
> Thank you.
&g
that "Connection to
ovirt-imageio-proxy service has failed."
‐‐‐ Original Message ‐‐‐
On Saturday, August 22, 2020 6:53 AM, Michael Jones
wrote:
> On 22/08/2020 11:20, David White via Users wrote:
>
> > I'm not sure
the bug
> will be actioned).
>
> Kind Regards,
>
> Mike
>
> On 22/08/2020 12:06, David White via Users wrote:
>
> > Ok, that at least got the certificate trusted.
> > Thank you for the fast response on that!
> > The certificate is now installed and
"?
‐‐‐ Original Message ‐‐‐
On Saturday, August 22, 2020 9:37 AM, Michael Jones
wrote:
> On 22/08/2020 13:58, David White via Users wrote:
>
> > So, what's the point of all-in-one if you cannot upload ISOs and boot VMs
> >
I'm running into the same problem.
I just wiped my CentOS 8.2 system, and in place of that, installed oVirt Node
4.4.1.
I'm downloading 4.4.2-2020081922 now.
‐‐‐ Original Message ‐‐‐
On Friday, August 7, 2020 11:55 AM, Roberto Nunin wrote:
> Il gior
Getting the same problem on 4.4.2-2020081922.
I'll try the image that Roberto found to work, and will report back.
Perhaps I'm still too new to this. :)
‐‐‐ Original Message ‐‐‐
On Saturday, August 22, 2020 7:12 PM, David White via Use
t point. I want to make sure the
problem isn't me.
‐‐‐ Original Message ‐‐‐
On Tuesday, August 25, 2020 3:47 AM, Yedidyah Bar David wrote:
> On Sun, Aug 23, 2020 at 3:25 AM David White via Users users@ovirt.org wrote:
>
> > Gettin
In a recent thread, Roberto mentioned seeing the error message "FQDN Not
Reachable" when trying to deploy oVirt Node 4.4.1, but was able to get past
that error when using ovirt-node-ng-installer-4.4.2-2020080612.el8.iso.
I experienced the same problems on oVirt Node install 4.4.1, so I tried the
‐‐‐ Original Message ‐‐‐
On Wednesday, August 26, 2020 3:32 AM, Yedidyah Bar David
wrote:
> On Wed, Aug 26, 2020 at 2:32 AM David White via Users users@ovirt.org wrote:
>
> > In a recent thread, Roberto mentioned seeing the error message "FQDN Not
> > Reachable"
e data?
And is this basically what I should expect for a 3-node deployment as well?
Make sure I have two different devices, 1 for the host OS and 1 for the data?
‐‐‐ Original Message ‐‐‐
On Wednesday, August 26, 2020 5:41 AM, David White via Users
I finally got oVirt node installed with gluster on a single node.
So that's great progress!
Once that step was complete...
I noticed that the Engine Deployment wizard asks for SMTP settings for where to
send notifications. I was kind of surprised that it doesn't allow one to enter
any credentia
I tested oVirt (4.3? I can't remember) last fall on a single host
(hyperconverged).
Now, I'm getting ready to deploy to a 3 physical node (possibly 4)
hyperconverged cluster, and I guess I'll go ahead and go with 4.4.
Although Red Hat's recent shift of CentOS 8 to the Stream model, as well as th
If I have a private network (10.1.0.0/24) that is being used by the cluster for
intra-host communication & replication, how do I get a block of public IP
addresses routed to the virtual cluster?
For example, let's say I have a public /28, and let's use 1.1.1.0/28 for
example purposes.
I'll assi
the public
> space on a vlan, add public as a vlan tagged network in ovirt. Only your
> public facing VM's need addresses in the space.
>
> On 2021-03-08 05:53, David White via Users wrote:
>
> > If I have a private network (10.1.0.0/24) that is being used by the
> >
Hello,
Reviewing https://ovirt.org/release/4.4.5/, I see that the target release date
was set to March 9. However, glancing at
https://bugzilla.redhat.com/buglist.cgi?quicksearch=ALL%20target_milestone%3A%22ovirt-4.4.5%22%20-target_milestone%3A%22ovirt-4.4.5-%22,
I see a number of outstanding op
I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster
running on Red Hat 8.3 OS.
Over the course of the setup, I noticed that I had to setup the storage for the
engine separately from the gluster bricks.
It looks like the engine was installed onto /rhev/data-center/ on the
capable of running the hosted engine.
>
> On Sat, Mar 20, 2021 at 5:14 PM David White via Users wrote:
>
> > I just finished deploying oVirt 4.4.5 onto a 3-node hyperconverged cluster
> > running on Red Hat 8.3 OS.
> >
> > Over the course of the setup, I
ne should already be HA and can
> > run on any host. If you look at the GUI you will see a crown beside each host
> > that is capable of running the hosted engine.
> >
> > On Sat, Mar 20, 2021 at 5:14 PM David White via Users
> > wrote:
> >
> > >
Ah, I see.
The "host" in this context does need to be the backend mgt / gluster network.
I was able to add the 2nd host, and I'm working on adding the 3rd now.
‐‐‐ Original Message ‐‐‐
On Saturday, March 20, 2021 4:32 PM, David White vi
Hi all,
I used the oVirt installer via cockpit to setup a hyperconverged cluster on 3
physical hosts running Red Hat 8.3. I used the following two resources to guide
my work:
-
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
-
https://blogs.ovi
and make sure all of my traffic is segmented properly, and get the management
network and VM network separated out again onto their own vlans.
‐‐‐ Original Message ‐‐‐
On Friday, March 26, 2021 9:38 PM, David White via Users
wrote:
> Hi all,
I'm replying to Thomas's thread below, but am creating a new subject so as not
to hijack the original thread.
I'm sure that this topic has come up before.
I first joined this list last fall, when I began planning and testing with
oVirt, but as of the past few weeks, I'm paying closer attentio
wrote:
> Dear David,
>
> do you have a link to that announcement which you have referenced below: "so
> the announcement of RHV's (commercial) demise was poor timing for me"?
>
> Cheers
> Timo
>
> > Am 02.04.2021 um 17:10 schrieb David White via Users users
I'm working on setting up my environment prior to production, and have run into
an issue.
I got most things configured, but due to a limitation on one of my switches, I
decided to change the management vlan that the hosts communicate on. Over the
course of changing that vlan, I wound up resetti
virt-ha-broker: [Errno 2] No such file or directory,
[monitor: 'network', options: {'addr': '10.1.0.1', 'network_test': 'dns',
'tcp_t_address': '', 'tcp_t_port': ''}]
MainThread::ERROR::2021-04-07
20:23:09,842
ine 561, in _initialize_broker
> m.get('options', {}))
> File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 91, in start_monitor
> ).format(t=type, o=options, e=e)
> ovirt_hosted_engine_ha.lib.exceptions.Re
he
hosted engine and configure oVirt.
4. If possible, don't turn off and turn on your servers constantly. :) I
realize this is a given. I just don't have much choice in the matter right now,
due to lack of datacenter in my home office.
‐‐
oughts?
Thanks again,
David
[Screenshot from 2021-04-10 17-21-39.png]
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Sunday, April 11, 2021 3:05 AM, Yedidyah Bar David wrote:
> On Sat, Apr 10, 2021 at 1:14 PM David White via Users wrote:
>
> > This is
As readers will already know, I have three oVirt hosts in a hyperconverged
cluster.
I had a DIMM with too many errors on it, so I scavenged from a spare server
that I had in my office precisely for this purpose.
I went ahead and replaced all of the RAM, because the DIMMs in my spare were a
di
ge and see if RAM usage is
> consistent with having 64GB rather than 32GB.
>
> Thanks,
> Joe
>
> -- Original Message --
> From: "David White via Users"
> To: "users@ovirt.org"
> Sent: 13/04/2021 10:00:38 AM
> Subject: [ovirt-users] Adding
Is it possible to expand an existing gluster volume?
I have a hyperconverged environment, and have enough space right now, but I'm
going to have to significantly over-provision my environment. The vast majority
of our customers are using a small fraction of the amount of space that they
are tec
I need to mount a partition across 3 different VMs.
How do I attach a disk to multiple VMs?
This looks like fairly old documentation-not-documentation:
https://www.ovirt.org/develop/release-management/features/storage/sharedrawdisk.html
publickey - dmwhite823@
rsday, April 15, 2021 5:05 PM, David White via Users
wrote:
> I need to mount a partition across 3 different VMs.
> How do I attach a disk to multiple VMs?
>
> This looks like fairly old documentation-not-documentation:
> https://www.ovirt.org/develop/release-management/featu
rious what the use case is. :) Do you plan on using the disk with
> three vms at the same time? This isn’t really what shareable disks are meant
> to do afaik. If you want to share storage with multiple vms I’d probably just
> setup an nfs share on one of the vms
>
> On Thu
I'm currently thinking about just setting up a rsync cron to run every minute.
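That per-minute rsync idea, as a crontab sketch; the paths and the peer hostname are hypothetical, and flock is added so overlapping runs cannot pile up:

```shell
# m h dom mon dow  command -- push /shared to a second VM every minute;
# flock -n skips a run if the previous sync is still in flight.
* * * * * flock -n /tmp/shared-sync.lock rsync -a --delete /shared/ vm2:/shared/
```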
‐‐‐ Original Message ‐‐‐
On Thursday, April 15, 2021 8:55 PM, David White via Users
wrote:
> > David, I’m curious what the use case is
>
> This is f
e impact as
> > gluster will search the new bricks (for the files' whose hash matches the
> > new subvolume) before searching the old bricks.
> > P.S: I think that the Engine's web interface fully supports that operation,
> > although I'm used to the cli.
> &
constraints but IMHO worth considering.
>
> Vojta
>
> On Friday, 16 April 2021 03:00:23 CEST David White via Users wrote:
>
> > I'm currently thinking about just setting up a rsync cron to run every
> > minute.
> &g
> Ah, you got free space on lvm.
> Just 'lvextend -r -l +xxx vg/lv' on all bricks and you are good to go.
>
> Best Regards,
> Strahil Nikolov
>
> > On Fri, Apr 16, 2021 at 12:39, David White via Users
> > wrote:
> > Sorry, I meant to reply-all, t
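Strahil's lvextend step above, sketched with hypothetical VG/LV names (the `-r` flag grows the filesystem together with the logical volume):

```shell
# Run on each brick host: grow the brick LV into the VG's remaining free
# space and resize the filesystem in the same step.
lvextend -r -l +100%FREE gluster_vg/gluster_lv_data
```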
‐‐‐ Original Message ‐‐‐
On Friday, April 16, 2021 6:39 AM, Strahil Nikolov via Users
wrote:
> Have you thought about copying the data via rsync/scp to the new disks
> (assuming that you have similar machine) ?
But this would still require that I remove a host from the cluster, and add
need shared storage at all, so you will be quite
> fine.
>
> Don't forget that latency kills gluster, so keep it as tight as possible,
> but at the same time keep them on separate hosts.
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, 16 April 2021, 03
I'm running into the issue described in this thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KT3B6N3UZ3DS3J6FV6OKQAXPNPTLZPOB/
In short, I have ssh to the datacenter. I can ssh to a public IP address with
the "-D 8080" option to have local port 8080 act as a SOCKS proxy.
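The SSH step described here, sketched with the example remote address from this message (the username is hypothetical):

```shell
# Open a local SOCKS5 proxy on port 8080, tunneled through the datacenter
# host; -N means "no remote command, just forward".
ssh -D 8080 -N admin@1.2.3.4
# Then point a browser's SOCKS proxy at localhost:8080 to reach the oVirt
# web UI on the private 10.1.0.0/24 network.
```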
.4
0.0.0.0/0
Note that "1.2.3.4" is my remote IP address in the above example.
Also note that I've had to enter in the remote IP address twice (once when
passing it in using the -x argument)
‐‐‐ Original Message ‐‐‐
On Saturday, April 17
I discovered that the servers I purchased did not come with 10Gbps network
cards, like I thought they did. So my storage network has been running on a
1Gbps connection for the past week, since I deployed the servers into the
datacenter a little over a week ago. I purchased 10Gbps cards, and put
This turned into quite a discussion. LOL.
A lot of interesting points.
Thomas said -->
> If only oVirt was a product rather than only a patchwork design!
I think Sandro already spoke to this a little bit, but I would echo what they
(he? she?) said. oVirt is an open source project, so there's
As part of my troubleshooting earlier this morning, I gracefully shut down the
ovirt-engine so that it would come up on a different host (can't remember if I
mentioned that or not).
I just verified forward DNS on all 3 of the hosts.
All 3 resolve each other just fine, and are able to ping each o
.anysubdomain.domain host1
> 10.10.10.11 host2.anysubdomain.domain host2
>
> Usually the hostname is defined for each peer in the /var/lib/glusterd/peers.
> Can you check the contents on all nodes ?
>
> Best Regards,
> Strahil Nikolov
>
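Strahil's peer-file check can be done like this (the path is as quoted; output format varies by Gluster version):

```shell
# Each file under peers/ holds one peer's uuid and hostname1= entry;
# those hostnames must resolve identically on every node.
cat /var/lib/glusterd/peers/*
gluster peer status
```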
Maybe you got too many backups
> running in parallel ?
>
> Best Regards,
> Strahil Nikolov
>
> > On Mon, May 10, 2021 at 19:13, David White via Users
> > wrote:
> > ___
> > Users mailing list -- users@ovirt
interfaces are also bridged - and controlled - by oVirt itself.
Is it possible that oVirt took them down for some reason?
I don't know what that reason might be.
‐‐‐ Original Message ‐‐‐
On Monday, May 10, 2021 7:14 PM, David White via Users wrote:
s using a linux bridge and maybe STP kicked in?
> Do you know of any changes done in the network at that time ?
>
> Best Regards,
> Strahil Nikolov
>
> > On Tue, May 11, 2021 at 2:27, David White via Users
> > wrote:
> > ___
So I have two switches.
All 3 of my HCI oVirt servers are connected to both switches.
1 switch serves the ovirtmgmt network (internal, gluster communication and
everything else on that subnet)
The other switch serves the "main" front-end network (Private).
It turns out that my datacenter plugged
Hi Pavel,
At the risk of being somewhat lazy, could I ask you where the docs are for
installing the iDrac modules and getting power management setup? This is a
topic that I haven't explored yet, but probably need to.
I have 3x Dell R630s.
‐‐‐ Original Me
Hello,
Is it possible to use Ubuntu to share an NFS export with oVirt? I'm trying to
setup a Backup Domain for my environment.
I got to the point of actually adding the new Storage Domain.
When I click OK, I see the storage domain appear momentarily before
disappearing, at which point I get a mes
I have a 3-node hyperconverged cluster with Gluster filesystem running on RHEL
8.3 hosts.
It's been stable on oVirt 4.4.5.
Today, I just upgraded the Engine to v4.4.6.
[Screenshot from 2021-05-22 20-29-23.png]
I then logged into the oVirt manager, navigated to Compute -> Clusters, and
clicked on U
t sure what else is going on. It wants me to put all three hosts into
> > maintenance mode which is impossible.
> >
> > On Sat, May 22, 2021 at 8:36 PM David White via Users
> > wrote:
> >
> > > I have a 3-node hyperconverged cluster with Glust
I have oVirt 4.4.6 running on my Engine VM.
I also have oVirt 4.4.6 on one of my hosts.
The other two hosts are still on oVirt 4.4.5.
My issue seems slightly different than the issue(s) other people have described.
I'm on RHEL 8 hosts.
My Engine VM is running fine on the upgraded host, as is one
upgraded a 2nd host, onto which I was able to migrate VMs for
preparation of upgrading the 3rd host.
All seems well, for now, after I upgraded the 1st host a second time earlier
today.
‐‐‐ Original Message ‐‐‐
On Wednesday, May 26, 2021 2:55 PM, David
Hello,
Is there documentation anywhere for adding a 4th compute-only host to an
existing HCI cluster?
I did the following earlier today:
- Installed RHEL 8.4 onto the new (4th) host
- Setup an NFS share on the host
- Attached the NFS share to oVirt as a new storage domain
- I then turne
2, 2021 at 3:08 AM Nir Soffer nsof...@redhat.com wrote:
>
> > On Sat, May 22, 2021 at 8:20 PM David White via Users users@ovirt.org wrote:
> >
> > > Hello,
> > > Is it possible to use Ubuntu to share an NFS export with oVirt?
> > > I'm trying to s
I'm trying to figure out how to keep a "broken" NFS mount point from causing
the entire HCI cluster to crash.
HCI is working beautifully.
Last night, I finished adding some NFS storage to the cluster - this is storage
that I don't necessarily need to be HA, and I was hoping to store some backups
Ever since I deployed oVirt a couple months ago, I've been unable to boot any
VMs from a RHEL ISO.
Ubuntu works fine, as does CentOS.
I've tried multiple RHEL 8 ISOs on multiple VMs.
I've destroyed and re-uploaded the ISOs, and I've also destroyed and re-created
the VMs.
Every time I try to bo
‐‐‐
On Friday, June 4, 2021 12:29 PM, Nir Soffer wrote:
> On Fri, Jun 4, 2021 at 12:11 PM David White via Users users@ovirt.org wrote:
>
> > I'm trying to figure out how to keep a "broken" NFS mount point from
> > causing the entire HCI cluster to crash.
> &
t version ?
>
> Best Regards,
> Strahil Nikolov
>
> > On Fri, Jun 4, 2021 at 12:25, David White via Users
> > wrote:
> > Ever since I deployed oVirt a couple months ago, I've been unable to boot
> > any VMs from a RHEL ISO.
> > Ubuntu works fi
from 2021-06-04 20-47-14.png]
‐‐‐ Original Message ‐‐‐
On Friday, June 4, 2021 8:27 PM, David White via Users wrote:
> I uploaded the RHEL ISOs the same way I uploaded the Ubuntu ISOs:
> Navigate to Storage -> Disks and click Upload (so us
If you plan on using CentOS going forward, I would recommend using (starting
with) stream, as CentOS 8 will be completely EOL at the end of this year.
That said, you can easily convert a CentOS 8 server to CentOS Stream by running
these commands:
dnf swap centos-linux-repos centos-stream-repos
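For completeness, the commonly documented conversion sequence is the repo swap followed by a distro-sync (a sketch; run as root, and snapshot the system first if you can):

```shell
# Replace the CentOS Linux repo definitions with the Stream ones, then
# roll all installed packages forward to the Stream versions.
dnf swap centos-linux-repos centos-stream-repos
dnf distro-sync
```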
VM is
booting to that ISO fine.
Problem resolved.
‐‐‐ Original Message ‐‐‐
On Friday, June 4, 2021 8:47 PM, David White via Users wrote:
> More details here:
>
> I just tested this (new) VM now on a CentOS 7 ISO.
> That worked perfectly
n external certificate.
(And I did / do use the "Test Connection" button)
Which logs would be helpful?
‐‐‐ Original Message ‐‐‐
On Sunday, June 6th, 2021 at 2:52 AM, Yedidyah Bar David
wrote:
> On Sat, Jun 5, 2021 at 2:41 PM David
I deployed a rootless Podman container on a RHEL 8 guest on Saturday (3 days
ago).
At the time, I remember seeing some selinux AVC "denied" messages related to
qemu-guest-agent and podman, but I didn't have time to look into it further,
but made a mental note to come back to it, because it real
Hello,
Reading
https://www.ovirt.org/documentation/administration_guide/index.html#IPv6-networking-support-labels,
I see this tidbit:
- Dual-stack addressing, IPv4 and IPv6, is not supported.
- Switching clusters from IPv4 to IPv6 is not supported.
If I'm understanding this correctly... does t
My current hyperconverged environment is replicating data across all 3 servers.
I'm running critically low on disk space, and need to add space.
To that end, I've ordered 8x 800GB ssd drives, and plan to put 4 drives in 1
server, and 4 drives in the other.
What's my best option for reconfiguring
lica 3 arbiter 1) to the volume - wait for the heals to
> finish
>
> Then repeat again for each volume.
>
> Adding the new disks should be done later.
>
> Best Regards,
> Strahil Nikolov
>
> > On Sat, Jul 10, 2021 at 3:15, David White via Users
> > wrote
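The add-brick step quoted above, sketched with hypothetical volume/brick names and the mount options from the quote:

```shell
# Mount the rebuilt brick with the same options as the existing nodes, add
# it back as quoted (replica 3 arbiter 1), then wait for heals to finish
# before touching the next brick.
mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 \
  /dev/gluster_vg/gluster_lv_data /gluster_bricks/data
gluster volume add-brick data replica 3 arbiter 1 \
  host3:/gluster_bricks/data/data
gluster volume heal data info summary
```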
like the rest of the nodes. Usually I use
> > 'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0' - add this
> > new brick (add-brick replica 3 arbiter 1) to the volume - wait for the heals
> > to finish
> >
> > Then repeat again for each v
> volumes. I’ve done this in the past when doing major maintenance on gluster
> volumes to err on the side of caution.
>
> On Sat, Jul 10, 2021 at 7:22 AM David White via Users wrote:
>
> > Hmm right as I said that, I just had a thought.
> > I DO have a "bac
Thank you.
I'm doing some more research & reading on this to make sure I understand
everything before I do this work.
You wrote:
> If you rebuild the raid, you are destroying the brick, so after mounting it
> back, you will need to reset-brick. If it doesn't work for some reason , you
> can alw
My hyperconverged cluster was running out of space.
The reason for that is a good problem to have - I've grown more in the last 4
months than in the past 4-5 years combined.
But the downside was, I had to go ahead and upgrade my storage, and it became
urgent to do so.
I began that process last w
Hi Patrick,
This would be amazing, if possible.
Checking /gluster_bricks/data/data on the host where I've removed (but not
replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:
dom_md
images
If I go into the images directory, I think I see the has
Thank you for all the responses.
Following Strahil's instructions, I *think* that I was able to reconstruct the
disk image. I'm just waiting for that image to finish downloading onto my local
machine, at which point I'll try to import into VirtualBox or something.
Fingers crossed!
Worst case sc
Hello,
It appears that my Manager / hosted-engine isn't working, and I'm unable to get
it to start.
I have a 3-node HCI cluster, but right now, Gluster is only running on 1 host
(so no replication).
I was hoping to upgrade / replace the storage on my 2nd host today, but aborted
that maintenance
‐‐‐ Original Message ‐‐‐
On Friday, August 13th, 2021 at 2:41 PM, Nir Soffer wrote:
> On Fri, Aug 13, 2021 at 9:13 PM David White via Users users@ovirt.org wrote:
>
> > Hello,
> >
> > It appears that my Manager / hosted-engine isn't wo
be a
> major deal to have to rebuild all 20+ of those VMs.
>
>
> ‐‐‐ Original Message ‐‐‐
>
> On Friday, August 13th, 2021 at 2:41 PM, Nir Soffer nsof...@redhat.com wrote:
>
> > On Fri, Aug 13, 2021 at 9:13
So it looks like I'm going to move to a new datacenter. I went into somewhere
cheap on a month-to-month contract earlier this year, and they've been a pain
to deal with. At the same time, I've grown a lot faster than expected, so I've
decided to move into a better, more reputable datacenter soon
I have an unused 200GB partition that I'd like to use to copy / export / backup
a few VMs onto, so I mounted it to one of my oVirt hosts as /ova-images/, and
then ran "chown 36:36" on ova-images.
>From the engine, I then tried to export an OVA to that directory.
Watching the directory with "ls",
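The export-directory preparation described above, as a sketch (/ova-images is the path from the message; uid/gid 36 is vdsm:kvm on oVirt hosts):

```shell
# The engine writes the OVA via vdsm, which runs as uid/gid 36 (vdsm:kvm),
# so the target directory must be owned and writable by that user.
chown 36:36 /ova-images
ls -ld /ova-images
```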
ut it into pastebin or provide a link to access these files?
‐‐‐ Original Message ‐‐‐
On Sunday, August 22nd, 2021 at 3:58 AM, Liran Rotenberg
wrote:
> On Sun, Aug 22, 2021 at 3:42 AM David White via Users users@ovirt.org wrote:
>
> >
I have an HCI cluster running on Gluster storage. I exposed an NFS share into
oVirt as a storage domain so that I could clone all of my VMs (I'm preparing to
move physically to a new datacenter). I got 3-4 VMs cloned perfectly fine
yesterday. But then this evening, I tried to clone a big VM, and
very large ones.
>
> Best Regards,
> Strahil Nikolov
>
> Sent from Yahoo Mail on Android
>
> > On Thu, Aug 26, 2021 at 3:27, David White via Users
> > wrote: I have an HCI cluster running on Gluster storage. I exposed an NFS
> > share into oVirt as a storage doma
September 3rd, 2021 at 4:10 AM, David White via Users
wrote:
> In this particular case, I have 1 (one) 250GB virtual disk.
>
>
> ‐‐‐ Original Message ‐‐‐
>
> On Tuesday, August 31st, 2021 at 11:21 PM, Strahil Nikolov
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, September 3rd, 2021 at 4:10 AM, David White via Users
> wrote:
>
> > In this particular case, I have 1 (one) 250GB virtual disk.
> >
I can't remember if I've asked this already, or if someone else has brought
this up.
I have noticed that gluster replication in a hyperconverged environment is very
slow.
I just (successfully, this time) added a brick to volume that was originally
built on a single-node Hyperconverged cluster.
About a month ago, I completely rebuilt my oVirt cluster, as I needed to move
all my hardware from 1 data center to another with minimal downtime.
All my hardware is in the new data center (yay for HIPAA compliance and 24/7
access, unlike the old place!)
I originally built the cluster as a singl
st c back on,
> the engine shuts down from a, and comes back up on c.
>
> Check the scores of the systems via 'hosted-engine --vm-status'. Check vdsm
> logs on both hosts. Check the logs in the engine itself.
>
> Best Regards,
> Strahil Nikolov
>
> > On Tue, Oct 12, 2021
type : he_local
‐‐‐ Original Message ‐‐‐
On Tuesday, October 12th, 2021 at 4:38 PM, David White via Users
wrote:
> > Check the scores of the systems via 'hosted-engine --vm-status'.
> That must be the problem.
>
> Host a has a
I am trying to put a host into maintenance mode, and keep getting this error:
Error while executing action: Cannot switch Host cha1-storage.my-domain.com to
Maintenance mode. Image transfer is in progress for the following (3) disks:
e0f46dc5-7f98-47cf-a586-4645177bd6a2,
06bd3678-bfab-4793-a839-