Nothing against oVirt, it's a great product. However, its future is
somewhat up in the air from what I've last seen. If you are looking at
starting a new project I would personally consider an alternative like
Proxmox.
Just my humble opinion.
On Sat, Apr 12, 2025 at 8:59 AM Andreas Gianna
I've always found the same personally. You'd be much better off if you
could implement an incremental backup solution. vProtect is one option that
might do the job
On Tue, Feb 4, 2025 at 9:44 AM Enrico Becchetti via Users
wrote:
>Dear all,
> I have an Ovirt cluster with 3 Dell R7525 nodes a
From what I recall, the only times I ever had issues like this were due to
not having the certificates imported into my browser. The engine CA
certificate is linked from the main oVirt admin login page.
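For reference, a rough way to grab the CA bundle so it can be imported into
the browser/OS trust store (this assumes the standard pki-resource URL;
replace engine.example.com with your engine FQDN):

  # download the engine CA certificate in PEM format
  curl -k -o ovirt-engine-ca.pem \
    'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

Then import ovirt-engine-ca.pem into Keychain or the browser certificate
store and mark it trusted.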
On Thu, Jan 16, 2025 at 1:36 PM morgan cox wrote:
> Hi Marcos.
>
> Thanks for taking t
You are seeing these errors because CentOS 8 Stream is EOL and the repos are
no longer available.
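One workaround I've seen (not an official fix, and the exact paths are an
assumption, so check against vault.centos.org first) is to repoint the repo
files at the vault:

  # comment out mirrorlist and point baseurl at the CentOS vault
  sed -i -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#\?baseurl=http://mirror.centos.org|baseurl=https://vault.centos.org|g' \
      /etc/yum.repos.d/CentOS-Stream-*.repo
  dnf clean all && dnf makecache

Longer term the right answer is migrating the hosts/engine off CentOS Stream 8.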
On Sun, Sep 1, 2024 at 6:53 AM a_kagra--- via Users wrote:
> Hello,
>
> I am currently sitting on oVirt 4.5 and I have 5 Linux hosts. I was hoping
> I could get some insight as to the correct way to g
You could add an NFS storage domain, move the VMs to it, then detach it,
attach it to the new cluster, and import the VMs from it.
On Mon, Aug 12, 2024 at 8:42 PM Diggy Mc wrote:
> Since export domains have been deprecated starting with oVirt v4.4, what
> is the preferred/best method for moving VMs from an old o
can a CentOS 8 stream engine be converted to AlmaLinux 8?
On Mon, Jun 3, 2024 at 2:57 PM Jayme wrote:
> can engine run on Centos 9?
>
> On Mon, Jun 3, 2024 at 5:41 AM Sandro Bonazzola
> wrote:
>
>> Hi, just a reminder that CentOS Stream 8 reached its end of life at the
>
can engine run on Centos 9?
On Mon, Jun 3, 2024 at 5:41 AM Sandro Bonazzola wrote:
> Hi, just a reminder that CentOS Stream 8 reached its end of life at the
> end of last month on Friday, May 31, 2024.
> As it has been announced by CentOS project[1], all the CentOS Stream 8
> repositories are go
I believe all you should need to do is run engine-setup --offline
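A rough sketch of that run (assuming a standalone engine; take a backup
first just in case):

  # back up engine config + DB before touching PKI
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
  # re-run setup without checking for package updates; answer "Yes" when it
  # asks whether to renew the PKI/certificates
  engine-setup --offline

engine-setup restarts the engine services at the end of the run.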
On Sat, May 25, 2024 at 10:14 AM Bogdan Dumitrescu via Users <
users@ovirt.org> wrote:
> Need some help with the renewal of the ssl certificate (I believe it is a
> self-signed cert )that is about to expire in a month, this is a st
those are VNC ports iirc
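If you want to confirm which VM owns each console port, something like this
on the host should show it (a quick sketch, nothing oVirt-specific):

  # list listening 59xx ports together with the owning qemu-kvm process
  ss -tlnp | grep ':59'

Each running VM with a VNC/SPICE console gets its own port in that range, so
new ports appearing usually just means more VMs (or consoles) were started
on that host.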
On Wed, May 15, 2024 at 3:46 PM Muhesh Kumar
wrote:
> Hi Everyone,
> Initially, In ovirt host machine qemu-kvm is running in
> 5901,5902,5903,5904,5906 and 5907. But suddenly, It has now running with
> additional ports 5900 and 5905. Can anyone know why this kind of port
I'm looking at doing something similar with a 3-node HCI setup currently
running oVirt 4.5 with CentOS 8 node NG. I want to move to EL9 node but keep
the Gluster config intact. Is it as simple as re-installing the OS and
leaving the Gluster drives untouched? Is there a guide for this?
On Fri, Apr 5, 2024 at 5:57
What happens if you place a node in maintenance and run yum update on the
node directly and reboot?
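Roughly what I mean (a sketch; put the host into maintenance from the admin
portal first, and on hosted-engine hosts also set local HA maintenance):

  # on the host, after it is in maintenance in the engine
  hosted-engine --set-maintenance --mode=local   # only on self-hosted-engine hosts
  yum update -y
  reboot
  # after reboot: hosted-engine --set-maintenance --mode=none, then activate it in the UI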
On Thu, Jan 4, 2024 at 7:22 AM Gianluca Amato
wrote:
> Hello,
> I am trying to update my oVirt installation to 4.5.5, but while I had no
> problems upgrading the self-hosted ovirt-engine, I am not
Thanks to all for your work on this!
On Fri, Dec 1, 2023 at 8:59 AM Sandro Bonazzola wrote:
> oVirt 4.5.5 is now generally available
>
> The oVirt project is excited to announce the general availability of oVirt
> 4.5.5, as of December 1st, 2023.
> Important notes before you install / upgrade
>
Exporting OVA can be a decent backup method depending on the environment. It
can be resource intensive and you can't do incremental backups.
For an enterprise-grade backup solution look into vProtect.
For simpler backup operations I wrote this Ansible playbook a while
back to automate backing up
You would likely need to use the REST API or ansible to achieve this.
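A very rough sketch of the REST flow with curl (the endpoints are from the
image transfer API; the FQDN, password, IDs and file name are placeholders,
and the Python SDK example scripts are the better-supported route):

  # 1) start a download image transfer for the disk
  curl -s -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
    -d '<image_transfer><disk id="DISK_ID"/><direction>download</direction></image_transfer>' \
    https://engine.example.com/ovirt-engine/api/imagetransfers
  # 2) take transfer_url (and the transfer id) from the response, then pull the data
  curl -k -o disk.img 'TRANSFER_URL'
  # 3) finalize the transfer so the disk is unlocked again
  curl -s -k -u admin@internal:PASSWORD -X POST -H 'Content-Type: application/xml' \
    -d '<action/>' \
    https://engine.example.com/ovirt-engine/api/imagetransfers/TRANSFER_ID/finalize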
On Mon, Oct 31, 2022 at 10:32 AM wodel youchi
wrote:
> Hi,
>
> VM disks can be downloaded from the Manager UI. How can I achieve the same
> result using curl or wget or something else?
> I have several disks that I wish to do
It appears that I may have resolved the issue after putting host into
maintenance again and rebooting a second time. I'm really not sure why but
all bricks are up now
On Mon, Aug 29, 2022 at 3:45 PM Jayme wrote:
> A bit more info from the host's brick log
>
> [2022-08-29 18
98486-GRAPH_ID:3-PID:2473-HOST:host0.x-PC_NAME:engine-client-0-RECON_NO:-0},
{error-xlator=engine-posix}, {errno=22}, {error=Invalid argument}]
On Mon, Aug 29, 2022 at 3:18 PM Jayme wrote:
> Hello All,
>
> I've been struggling with a few issues upgrading my 3-node HCI cluster from
>
Hello All,
I've been struggling with a few issues upgrading my 3-node HCI cluster from
4.4 to 4.5.
At present the self-hosted engine VM is properly running oVirt 4.5 on
CentOS 8 Stream.
I set the first host node in maintenance and installed the new node-ng image. I
ran into an issue with rescue mode on bo
vProtect has a decent backup offering for oVirt that would be worth looking
into. It's free for up to 10 VMs to try out.
On Mon, Feb 14, 2022 at 2:53 PM marcel d'heureuse
wrote:
> Moin,
>
> We have in our Environment 12 servers managed by one self hosted engine.
> It is ovirt 4.3.9. We are Frei
With 4 servers, only three would be used for hyperconverged storage; the 4th
would be added as a compute node, which would not participate in GlusterFS
storage.
To expand hyperconverged storage to more than 3 servers you have to add
hosts in multiples of 3.
On Tue, Sep 28, 2021 at 9:49 AM wrote:
> Kindly
wrote:
> I appreciate the quick replies. Yes very basic. Single host/node with
> NFS and local storage.
>
> I will try to reinstall the host and import. I haven't had to do this yet!
>
> On Sat, Sep 18, 2021, 5:26 PM Jayme wrote:
>
>> It sounds like your setup is
Wesley Stewart wrote:
> I believe I should have the dump from the initial engine-setup. Can that
> be used?
>
> Also, I'm only running about 8 vms and need to start over for 4.4. Would
> it be easier to just reinstall and import the disks?
>
>
>
> On Sat, Se
Do you have a backup of the hosted engine that you can restore? If your VMs
are on NFS mounts you should be able to re-add the storage domain and import
the VMs.
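Rough outline, assuming the backup was taken with engine-backup and the VMs
live on NFS:

  # redeploy the hosted engine from the backup file
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz
  # then in the admin portal: Storage > Domains > Import Domain (NFS),
  # attach it to the data center, open its "Virtual Machines" tab and import the VMs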
On Sat, Sep 18, 2021 at 4:26 PM Wesley Stewart wrote:
> Luckily this is for a home lab and nothing critical is lost. However I
> was tr
Shouldn’t that be admin@internal or was that a typo?
On Wed, Sep 15, 2021 at 4:40 AM wrote:
> i've put all in a rest client to check the syntax and how the request
> looks, now i got this response:
>
> access_denied: Cannot authenticate user 'admin@intern': No valid profile
> found in credential
I use the nagios check_rhv plugin, it has support for monitoring GlusterFS
as well: https://github.com/rk-it-at/check_rhv
On Tue, Sep 7, 2021 at 8:39 AM Jiří Sléžka wrote:
> Hi,
>
> On 9/7/21 1:05 PM, si...@justconnect.ie wrote:
> > Hi All,
> >
> > Does anyone have recommendations for GlusterFS
You could use a single server with VMs on local storage, or connected to
remote storage such as NFS.
There are drawbacks of course. You could not keep VMs running if the host is
down or during upgrades, etc.
For any kind of high availability you'd want at least two servers with
remote storage, but then you
Just a thought but depending on resources you might be able to use your 4th
server as nfs storage and live migrate vm disks to it and off of your
gluster volumes. I’ve done this in the past when doing major maintenance on
gluster volumes to err on the side of caution.
On Sat, Jul 10, 2021 at 7:22
I have observed this behaviour recently and in the past on 4.3 and 4.4, and
in my case it’s almost always following an ovirt upgrade. After upgrade
(especially upgrades involving glusterfs) I’d have bricks randomly go down
like you're describing for about a week or so after the upgrade and I'd have to
ma
Check if there’s another lvm.conf file in the dir like lvm.conf.rpmsave and
swap them out. I recall having to do something similar to solve a
deployment issue much like yours
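Something along these lines (paths assumed; keep a copy of the current file):

  # see whether an .rpmsave/.rpmnew copy of lvm.conf exists
  ls -l /etc/lvm/lvm.conf*
  # keep a backup of the active file, then swap the rpmsave copy in
  cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
  cp /etc/lvm/lvm.conf.rpmsave /etc/lvm/lvm.conf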
On Wed, Jun 30, 2021 at 6:39 PM wrote:
> Been beating my head against this for a while, but I'm having issues
> deploying
A while ago I wrote an Ansible playbook to back up oVirt VMs via OVA export
to storage attached to one of the hosts (NFS in my case). You can check it
out here:
https://github.com/silverorange/ovirt_ansible_backup
I've been using this for a while and it has been working well for me on
oVirt 4.3 and
I'm not sure if the hosted engine is on stream yet. I'm also on 4.4.6 and
while my nodes are CentOS 8 stream my hosted engine is also still 8.3
On Mon, May 31, 2021 at 3:45 AM mail--- via Users wrote:
> I upgraded. The upgrade seems to have been successful.
> However, the distribution OS of the
Removing the ovirt-node-ng-image-update package and re-installing it
manually seems to have done the trick. Thanks for pointing me in the right
direction!
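For anyone hitting the same thing, the remove/re-install amounted to roughly
this (then reboot into the new layer):

  dnf remove ovirt-node-ng-image-update
  dnf install ovirt-node-ng-image-update
  # confirm the expected image layer is now present
  nodectl info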
On Thu, May 27, 2021 at 9:57 PM Jayme wrote:
> # rpm -qa | grep ovirt-node
> ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
> python3-o
3-1.el8.noarch.rpm
>>
>
> PS: remember to use tmux if executing via ssh.
>
> Regards.
>
> Le jeu. 27 mai 2021 à 22:21, Jayme a écrit :
>
>> The good host:
>>
>> bootloader:
>> default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>
.x86_64)
blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
On Thu, May 27, 2021 at 6:18 PM Jayme wrote:
> It shows 4.4.5 image on
I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully and rebooted and are active. I notice that only one
host out of the three is actually running oVirt node 4.4.6 and the other
two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
availabl
The problem appears to be MTU related, I may have a network configuration
problem. Setting back to 1500 mtu seems to have solved it for now
On Thu, May 27, 2021 at 2:26 PM Jayme wrote:
> I've gotten a bit further. I have a separate 10Gbe network for GlusterFS
> traffic which was also
ed to
work fine on GlusterFS migration network in the past.
On Thu, May 27, 2021 at 2:11 PM Jayme wrote:
> I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
> is mix of GlusterFS and NFS. Everything has been running smoothly, but the
> other day I noticed m
I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
is mix of GlusterFS and NFS. Everything has been running smoothly, but the
other day I noticed many VMs had invalid snapshots. I run a script to
export OVA for VMs for backup purposes, exports seemed to have been fine
but sna
What do you mean Gluster being announced as EOL? Where did you find this
information?
On Mon, Apr 26, 2021 at 9:34 AM penguin pages
wrote:
>
> I have been building out HCI stack with KVM/RHEV + oVirt with the HCI
> deployment process. This is very nice for small / remote site use cases,
> but w
vProtect would be worth looking into.
On Sun, Apr 18, 2021 at 3:23 AM wrote:
> Hi there,
>
> I want forever incremental backup for over 150+ virtual machines inside
> oVirt to save more backup space, then restore in case some problem occurs,
> any good advice?
> __
David, I'm curious what the use case is. Do you plan on using the disk with
three VMs at the same time? This isn't really what shareable disks are
meant to do, AFAIK. If you want to share storage with multiple VMs I'd
probably just set up an NFS share on one of the VMs
On Thu, Apr 15, 2021 at 7:37
If it's a smaller setup one option might be to use RHEL. A developer
account with Red Hat will allow for 16 licensed servers for free.
On Mon, Apr 12, 2021 at 4:07 AM dhanaraj.ramesh--- via Users <
users@ovirt.org> wrote:
> We had done successful POC of ovirt node & HE with 4.4.5 version and now
er volume) using
> that EL8 host.
>
> Once it's successful, the rest of the hosts should be available and you
> will be able to remove one of the other nodes, reduce gluster, reinstall ,
> get gluster running and add the host again in oVirt.
>
>
> Best Regards,
> Strahil
I have a fairly stock three node HCI setup running oVirt 4.3.9. The hosts
are oVirt node. I'm using GlusterFS storage for the self hosted engine and
for some VMs. I also have some other VMs running from an external NFS
storage domain.
Is it possible for me to upgrade this environment to 4.4 while
The hosts run the vms. The engine just basically coordinates everything.
On Sun, Mar 21, 2021 at 8:50 PM jenia mtl wrote:
> Hi Edward.
>
> "Therein" meaning inside the engine? The virtualization hosts run inside
> the engine not inside the hypervision/Ovirt-node? And just to make sure,
> the vir
If you deployed with the wizard the hosted engine should already be HA and can
run on any host. If you look at the GUI you will see a crown beside each host
that is capable of running the hosted engine.
On Sat, Mar 20, 2021 at 5:14 PM David White via Users
wrote:
> I just finished deploying oVirt 4.4.
Are you trying to set this up as an HCI deployment? If so it might be
failing if the Raspberry Pi CPU is not supported by oVirt.
On Wed, Feb 24, 2021 at 3:08 AM wrote:
> Hey there,
>
> I tried using oVirt for some time now and like to convert my main Proxmox
> Cluster (2 Nodes + RPI for Quorum) to
I believe you'd need to add in multiples of 3 hosts to expand gluster
storage
On Mon, Feb 22, 2021 at 3:56 AM wrote:
> Ok, thanks for your answer.
>
> If I understand well, I can't expand my gluster storage ?
>
> My goal was add a node when I can to growing up my gluster and my compute
> abilit
Take a look at configuring affinity rules
On Wed, Jan 20, 2021 at 7:49 PM Shantur Rathore wrote:
> Hi all,
>
> I am trying to figure if there is a way to force oVirt to schedule VMs on
> different hosts.
> So if I am cloning 6 VMs from a template, I want oVirt to schedule them on
> all different
Correct me if I'm wrong but according to the docs, there might be a more
elegant way of doing something similar with gluster cli ex: gluster volume
heal split-brain latest-mtime -- although I have never
tried it myself.
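The full syntax, as I understand it from the docs (again, untested by me;
VOLNAME and FILE are placeholders):

  # list files currently in split-brain on the volume
  gluster volume heal VOLNAME info split-brain
  # resolve a specific file by keeping the copy with the latest mtime
  gluster volume heal VOLNAME split-brain latest-mtime FILE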
On Mon, Jan 11, 2021 at 1:50 PM Strahil Nikolov via Users
wrote:
>
> > Is
It takes a very small amount of effort to do it one time using ssh-copy-id
but I suppose you could easily do it with Ansible too.
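For a handful of hosts a simple loop is enough (hostnames are placeholders;
assumes password auth is still enabled on the new nodes):

  for h in newnode1 newnode2 newnode3; do
      ssh-copy-id root@"$h"
  done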
On Thu, Jan 7, 2021 at 11:42 AM marcel d'heureuse
wrote:
> hi,
>
> we have setup a ovirt system with 9 hosts.
> we will now add three more nodes and we have to excha
It looks like a few forks are popping up already. A new project called
RockyLinux and now CloudLinux announced an RHEL fork today which sounds
promising:
https://blog.cloudlinux.com/announcing-open-sourced-community-driven-rhel-fork-by-cloudlinux
On Thu, Dec 10, 2020 at 5:42 AM Jorick Astrego wro
Ok yeah that is fairly similar to my setup, except I only have two drives
in each host.
In my case I created completely separate data volumes, one per drive. You
could do the same, three data volumes for storage at 7TB each for example.
On one of the volumes you'll need to split off a 100GB volume for
I have two ssds in each host for storage. I ended up using the wizard but
in the wizard I simply added two data volumes
Ex:
storage1 = /dev/sda
storage2 = /dev/sdb
You can create as many storage volumes as you want. You don’t need to have
just one single large volume. You could have just one big
Personally I also found this confusing when I setup my cluster a while
back. I ended up creating multiple data volumes. One for each drive. You
could probably software raid the drives first and present it to the
deployment wizard as one block device. I’m not sure if deployment wizard
will combine m
IMO this is best handled at hardware level with UPS and battery/flash
backed controllers. Can you share more details about your oVirt setup? How
many servers are you working with and are you using replica 3 or replica 3
arbiter?
On Thu, Oct 8, 2020 at 9:15 AM Jarosław Prokopowski
wrote:
> Hi Guys
https://docs.google.com/forms/u/1/d/e/1FAIpQLSdzzh_MSsSq-LSQLauJzuaHC0Va1baXm84A_9XBCIileLNSPQ/viewform?usp=send_form
On Tue, Oct 6, 2020 at 7:28 PM Strahil Nikolov via Users
wrote:
> Hello All,
>
>
>
> can someone send me the full link (not the short one) as my proxy is
> blocking it :)
>
>
>
t HCI would support it. You might have to roll
out your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.
On Mon, Sep 28, 2020 at 1:26 PM C Williams wrote:
> Jayme,
>
> Thank for getting back with me !
>
> If I wanted to be wasteful with st
You can only do HCI in multiples of 3. You could do a 3-server HCI setup
and add the other two servers as compute nodes, or you could add a 6th
server and expand HCI across all 6.
On Mon, Sep 28, 2020 at 12:28 PM C Williams wrote:
> Hello,
>
> We recently received 5 servers. All have about 3 TB o
Assuming you don't care about data on the drive you may just need to use
wipefs on the device i.e. wipefs -a /dev/sdb
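You can check what's on it first before wiping (wipefs without options only
lists the signatures it finds):

  wipefs /dev/sdb       # show existing partition/filesystem signatures
  wipefs -a /dev/sdb    # destroy them so the deployment can use the disk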
On Fri, Sep 25, 2020 at 12:53 PM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:
> Hello,
> how do you manage a gluster host when upgrading a node?
>
> I upgr
Interested to hear how upgrading 4.3 HCI to 4.4 goes. I've been considering
it in my environment but was thinking about moving all VMs off to NFS
storage then rebuilding oVirt on 4.4 and importing.
On Thu, Sep 24, 2020 at 1:45 PM wrote:
> I am hoping for a miracle like that, too.
>
> In the mean
You could try setting host to maintenance and check stop gluster option,
then re-activate host or try restarting glusterd service on the host
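On the affected host that would look roughly like:

  systemctl restart glusterd
  # verify the peer is connected and the bricks come back online
  gluster peer status
  gluster volume status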
On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise wrote:
>
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of al
I believe if you go into the storage domain in the GUI there should be a tab
for VMs which lists the VMs; then you can click the : menu and choose
import.
On Wed, Sep 2, 2020 at 9:24 AM Darin Schmidt wrote:
> I am running this as an all in one system for a test bed at home. The
> system crashed
Thanks for letting me know, I suspected that might be the case. I’ll make a
note to fix that in the playbook
On Mon, Aug 31, 2020 at 3:57 AM Stefan Wolf wrote:
> I think, I found the problem.
>
>
>
> It is case sensitive. For the export it is NOT case sensitive but for the
> step "wait for expor
Interesting I’ve not hit that issue myself. I’d think it must somehow be
related to getting the event status. Is it happening to the same vms every
time? Is there anything different about the vm names or anything that would
set them apart from the others that work?
On Sun, Aug 30, 2020 at 11:56 AM
Also, if you look at the blog post linked on the GitHub page, it has info about
increasing the Ansible timeout on the oVirt engine machine. This will be
necessary when dealing with large VMs that take over 2 hours to export.
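From memory the knob is ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT (in minutes),
set via a drop-in on the engine; treat the exact name and location as an
assumption and check the blog post before relying on it:

  # on the engine machine: raise the ansible execution timeout to 4 hours
  echo 'ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=240' \
    > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
  systemctl restart ovirt-engine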
On Sun, Aug 30, 2020 at 8:52 AM Jayme wrote:
> You should be able to fix
You should be able to fix it by increasing the timeout variable in main.yml. I
think the default is pretty low, around 600 seconds (10 minutes). I have
mine set to a few hours since I'm dealing with large VMs. I'd also
increase the poll interval so it's not checking for completion every 10
secon
Probably the easiest way is to export the VM as OVA. The OVA format is a
single file which includes the entire VM image along with the config. You
can import it back into oVirt easily as well. You can do this from the GUI
on a running VM and export to OVA without bringing the VM down. The export
pr
vProtect can do some form of incremental backup of oVirt VMs, at least on
4.3; I'm not sure where they're at for 4.4 support. Worth checking out, free
for 10 VMs.
On Wed, Aug 19, 2020 at 7:03 AM Kevin Doyle
wrote:
> Hi
>
> I am looking at ways to backup VM's, ideally that support incremental
> bac
I think you are perhaps overthinking a tad. GlusterFS is a fine solution
but it has had a rocky road. It would not be my first suggestion if you are
seeking high write performance, although that has been improving and
can be fine-tuned. Instability, at least in the past, was mostly centered
arou
Check engine.log in /var/log/ovirt-engine on the engine server/VM.
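For example, tail it while re-trying the import:

  tail -f /var/log/ovirt-engine/engine.log
  # or pull out recent errors/failures
  grep -iE 'error|fail' /var/log/ovirt-engine/engine.log | tail -n 50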
On Tue, Jul 28, 2020 at 7:16 PM Philip Brown wrote:
> I just tried to import an OVA file.
> The GUI status mentions that things seem to go along fairly happily..
> it mentions that it creates a disk for it
> but then eventually
ervers in order to increase the total amount of
> > > disk space made available to oVirt as a whole. And then from there, I
> > > would of course setup a number of virtual disks that I would attach
> > > back to that customer's VM.
> > > So to recap, if I
Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.
On Wed, Jul 15, 2020 at 6:44 PM Philip Brown wrote:
> Hmm...
>
>
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
>
>
> on a related n
Personally I find the rhev documentation much more complete:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/
On Mon, Jul 13, 2020 at 6:17 PM Philip Brown wrote:
> I find it odd that the ovirt website allows to see older version RELEASE
> NOTES...
> but doesnt seem to gi
ontrollers.
You should still be able to get it to work using a driver update disk
during install. See: https://forums.centos.org/viewtopic.php?t=71862
Either way, this is good to know ahead of time to limit surprises!
- Jayme
On Tue, Jul 7, 2020 at 10:22 AM shadow emy wrote:
> i found the p
I’ve tried various methods to improve gluster performance on similar
hardware and never had much luck. Small file workloads were particularly
troublesome. I ended up switching high performance vms to nfs storage and
performance with nfs improved greatly in my use case.
On Sun, Jun 28, 2020 at 6:42
Yes, this is the point of hyperconverged. You only need three hosts to set up
a proper HCI cluster. I would recommend SSDs for Gluster storage. You could
get away with non-RAID to save money since you can do replica three with
Gluster, meaning your data is fully replicated across all three hosts.
On
This is of course not recommended but there have been times where I have
lost network access to storage or the storage server while VMs were running.
They paused and came back up when storage was available again without
causing any problems. This doesn't mean it's 100% safe but from my
experience it has
I wrote a simple unofficial Ansible playbook to back up full VMs here:
https://github.com/silverorange/ovirt_ansible_backup -- it works great for
my use case, but it is more geared toward smaller environments.
For commercial software I'd take a look at vProtect (it's free for up to 10
VMs)
I've
Also, I can't think of the limit off the top of my head. I believe it's
either 75 or 100Gb. If the engine volume is set any lower the installation
will fail. There is a minimum size requirement.
On Fri, May 29, 2020 at 12:09 PM Jayme wrote:
> Regarding Gluster question. The vol
Regarding Gluster question. The volumes would be provisioned with LVM on
the same block device. I believe 100Gb is recommended for the engine
volume. The other volumes such as data would be created on another logical
volume and you can use up the rest of the available space there. Ex. 100gb
engine,
Here is the bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1832210
On Thu, May 28, 2020 at 8:23 AM Jayme wrote:
> If it’s the issue I’m thinking of it’s because Apple Mojave started
> rejecting certs whose validity period is longer than a certain limit, which
> the ovir
If it's the issue I'm thinking of, it's because Apple Mojave started
rejecting certs whose validity period is longer than a certain limit, which
the oVirt CA does not follow. I posted another message on this group
about it a little while ago and I think a bug report was made.
The only way I c
This is likely due to CentOS 8, not the node image in particular. CentOS 8
dropped support for many LSI RAID controllers, including older PERC
controllers.
Has the drive been used before? It might have an existing partition/filesystem
on it. If you are sure it's fine to overwrite, try running wipefs -a
/dev/sdb on all hosts. Also make sure there aren't any filters set up in
lvm.conf (there shouldn't be on a fresh install, but worth checking).
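Quick way to check both (device name is just an example):

  lsblk -f /dev/sdb                 # anything already on the disk?
  grep -nE '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf   # any active LVM filters?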
On Tue, Apr 28
Oh, and the Gluster interface should not be set as the default route either.
On Tue, Apr 28, 2020 at 7:19 PM Jayme wrote:
> On gluster interface try setting gateway to 10.0.1.1
>
> If that doesn’t work let us know where the process is failing currently
> and with what errors etc.
>
>
one
>
> DEFROUTE=yes
>
> IPV4_FAILURE_FATAL=no
>
> IPV6INIT=no
>
> IPV6_AUTOCONF=yes
>
> IPV6_DEFROUTE=yes
>
> IPV6_FAILURE_FATAL=no
>
> IPV6_ADDR_GEN_MODE=stable-privacy
>
> NAME=p1p1
>
> UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b
>
> DEVIC
ng SSH keys seems to take over a minute just to
>> prompt for a password. Something smells here.
>>
>> On Tue, Apr 28, 2020 at 7:32 PM Jayme wrote:
>>
>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>> 10.0.1.30 for example
>>>
You should be using a different subnet for each. I.e. 10.0.0.30 and
10.0.1.30 for example
On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq wrote:
> Hi,
>
> I'm in the process of trying to set up an HCI 3 node cluster in my homelab
> to better understand the Gluster setup and have failed at the fir
What is the VM optimizer you speak of?
Have you tried the high performance VM profile? When set, it will prompt you
to make additional manual changes such as configuring NUMA and hugepages,
etc.
On Tue, Apr 21, 2020 at 8:52 AM wrote:
> On oVirt 4.3. i installed w10_64 with q35 cpu.
> i've used v
Do you have the guest agent installed on the VMs?
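If not, on EL-family guests it's just (other distros package it under a
similar name):

  dnf install -y qemu-guest-agent
  systemctl enable --now qemu-guest-agent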
On Thu, Apr 16, 2020 at 2:55 PM wrote:
> Are you getting any errors in the engine log or
> /var/log/libvirt/qemu/.log?
> I have Windows 10 and haven't experienced that. You can't shut it down in
> the UI? Even after you try to shut it down inside
In oVirt admin go to Storage > Domains. Click your storage domain. Click
"Virtual Machines" tab. You should see a list of VMs on that storage
domain. Click one or highlight multiple then click import.
On Thu, Apr 16, 2020 at 2:34 PM wrote:
> If you click on the 3 dots in the vm portal, there is
The error suggests a problem with ansible. What packages are you using?
On Tue, Apr 14, 2020 at 1:51 AM Gabriel Bueno wrote:
> Does anyone have any clue that it may be happening?
I recently set up a new oVirt environment using the latest 4.3.9 installer. I
can't seem to get the noVNC client to work for the life of me in Safari or
Chrome on macOS Catalina.
I have downloaded the CA from the login page and imported it into Keychain
and made sure it was fully trusted. In both syste
Was wondering if there are any guides or if anyone could share their
storage configuration details for NFS. If using LVM is it safe to snapshot
volumes with running VM images for backup purposes?
Christian,
I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency related and my workloads are typically small file
operations which have been especially challenging.
Couple of things
1. Abou
I strongly believe that the FUSE mount is the real reason for poor performance
in HCI and these minor Gluster and other tweaks won't satisfy most seeking
I/O performance. Enabling libgfapi is probably the best option. Red Hat has
recently closed bug reports related to libgfapi citing won't fix and one
c
Hey Sandro,
Do you have more specific details or guidelines in regards to the graphics
you are looking for?
Thanks!
On Tue, Mar 24, 2020 at 1:27 PM Sandro Bonazzola
wrote:
> Hi,
> in preparation of oVirt 4.4 GA it would be nice to have some graphics we
> can use for launching oVirt 4.4 GA on s
I too struggle with speed issues in HCI. Latency is a big problem with
writes for me, especially when dealing with small file workloads. How are
you testing exactly?
Look into enabling libgfapi and try some comparisons with that. People have
been saying it's much faster, but it's not a default opti
9/03/2020 11:18, Jayme wrote:
> > At the very least you should make sure to apply the gluster virt profile
> > to vm volumes. This can also be done using optimize for virt store in
> > the ovirt GUI
>
> --
> with kind regards,
> m