On 23/06/16 01:05 AM, Andrew Dent wrote:
> Hi
>
> We have a single box running oVirt 3.5.6.2-1 + CentOS 7. The Engine,
> the VDSM host, and the storage are all on the single box, which
> contains 2 * RAID1 arrays.
>
> We are looking to purchase a second box, and I'm wondering if someone
> can please help me understand how best to migrate to an HA
> environment.
RHEV is a cloud solution with some HA features. It is not an actual HA
solution.
digimer
On 23/06/16 12:08 AM, Eero Volotinen wrote:
> How about trying commercial RHEV?
>
> Eero
>
> On 22.6.2016 at 8:02 AM, "Tom Robinson" wrote:
>
>> Hi,
>>
>> I have two KVM hosts (CentOS 7) and would like them to operate as
>> High Availability servers, automatically migrating guests when one of
>> the hosts goes down.
>>
>> My question is: Is this even possible?
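It is possible; the fencing discussion further down this page is about
exactly this kind of setup. As a minimal sketch, a live-migratable
guest under Pacemaker on CentOS 7 could be defined like this (the
resource name and XML path are placeholders, and a working, fenced
two-node cluster with shared storage is assumed):

    # Manage the guest as a cluster resource that may live-migrate
    # between the two hosts instead of being stopped and restarted:
    pcs resource create guest1 VirtualDomain \
        hypervisor="qemu:///system" \
        config="/etc/libvirt/qemu/guest1.xml" \
        migration_transport="ssh" \
        meta allow-migrate="true" \
        op monitor interval="30s"

The guest's XML must be identical on both hosts and its disk image must
sit on storage both hosts can reach; without working fencing, the
cluster cannot safely restart the guest after a node failure.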
On 23.06.2016 02:52, listmail wrote:
According to the compatibility chart over here:
https://access.redhat.com/support/policy/intel
...anything later than 6.3 (6.4 and up) should work with the E3-12xx v3
family of processors. But those are not the results I am seeing.
Does anyone have experience …
Hi All,
Hopefully someone with a broad overview of CentOS compatibility issues
can comment on this:
I am evaluating a Supermicro X10SLM motherboard with an Intel E3-1231 v3
CPU. Testing with boots from Live DVDs, the CentOS 6.x family is
panicking at boot time. I have tried 6.8, 6.5, and 6.3, and …
I had no real reason to doubt. I was just being lazy. I figured that,
if anyone knew the correct answer, it would be the people on this list.
Thank you for your gracious forbearance.
On 06/21/16 20:01, Boris Epstein wrote:
> I would think the same as Gordon that as long as your 64-bit VM
> virtual …
On 22/06/16 02:36 PM, Paul Heinlein wrote:
> On Wed, 22 Jun 2016, Digimer wrote:
>
>> The nodes are not important, the hosted services are.
>
> The only time this isn't true is when you're using the node to heat the
> room.
>
> Otherwise, the service is always the important thing. (The node may …
On 22/06/16 02:34 PM, m.r...@5-cent.us wrote:
> Digimer wrote:
>> On 22/06/16 02:01 PM, Chris Adams wrote:
>>> Once upon a time, John R Pierce said:
>>>> On 6/22/2016 10:47 AM, Digimer wrote:
>>>>> This is called "fabric fencing" and was originally the only
>>>>> supported option in the very early days of HA. …
On 22/06/16 02:31 PM, John R Pierce wrote:
> On 6/22/2016 11:06 AM, Digimer wrote:
>> I know this goes against the
>> grain of sysadmins to yank power, but in an HA setup, nodes should be
>> disposable and replaceable. The nodes are not important, the hosted
>> services are.
>
> of course, the really tricky problem is implementing an iSCSI …
Once upon a time, John R Pierce said:
> of course, the really tricky problem is implementing an iSCSI
> storage infrastructure that's fully redundant and has no single point
> of failure. this requires the redundant storage controllers to
> have shared write-back cache, fully redundant networking …
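The array-side redundancy Pierce describes (mirrored controllers with
shared write-back cache) has to come from the storage vendor, but the
host-side half is standard dm-multipath: two NICs on separate switches,
sessions to both iSCSI portals, and multipathd merging the paths into
one device. A minimal CentOS 7 sketch, with everything below being
placeholder/default values:

    # /etc/multipath.conf -- merge all paths to each LUN, fail back
    # to the preferred path as soon as it returns:
    defaults {
        user_friendly_names yes
        find_multipaths yes
        path_grouping_policy multibus
        failback immediate
    }

    # Enable the configuration and start the daemon:
    mpathconf --enable
    systemctl start multipathd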
On Wed, 22 Jun 2016, Digimer wrote:
> The nodes are not important, the hosted services are.
The only time this isn't true is when you're using the node to heat
the room.
Otherwise, the service is always the important thing. (The node may
become synonymous with the service because there's …
Digimer wrote:
> On 22/06/16 02:01 PM, Chris Adams wrote:
>> Once upon a time, John R Pierce said:
>>> On 6/22/2016 10:47 AM, Digimer wrote:
>>>> This is called "fabric fencing" and was originally the only
>>>> supported option in the very early days of HA. It has fallen out
>>>> of favour for several reasons, …
On 6/22/2016 11:06 AM, Digimer wrote:
> I know this goes against the grain of sysadmins to yank power, but in
> an HA setup, nodes should be disposable and replaceable. The nodes are
> not important, the hosted services are.
of course, the really tricky problem is implementing an iSCSI storage
infrastructure …
On 22/06/16 02:12 PM, Chris Adams wrote:
> Once upon a time, Digimer said:
>> The cluster software and any hosted services aren't running. It's not
>> that they think they're wrong, they just have no existing state so they
>> won't try to touch anything without first ensuring it is safe to do so.
Once upon a time, Digimer said:
> The cluster software and any hosted services aren't running. It's not
> that they think they're wrong, they just have no existing state so they
> won't try to touch anything without first ensuring it is safe to do so.
Well, I was being short; what I meant was, in …
On 22/06/16 02:01 PM, Chris Adams wrote:
> Once upon a time, John R Pierce said:
>> On 6/22/2016 10:47 AM, Digimer wrote:
>>> This is called "fabric fencing" and was originally the only supported
>>> option in the very early days of HA. It has fallen out of favour for
>>> several reasons, but it does still work fine. …
Once upon a time, John R Pierce said:
> On 6/22/2016 10:47 AM, Digimer wrote:
>> This is called "fabric fencing" and was originally the only supported
>> option in the very early days of HA. It has fallen out of favour for
>> several reasons, but it does still work fine. The main issue is that
>> it …
On 6/22/2016 10:47 AM, Digimer wrote:
This is called "fabric fencing" and was originally the only supported
option in the very early days of HA. It has fallen out of favour for
several reasons, but it does still work fine. The main issue is that it
leaves the node in an unclean state. If an admin …
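The approach that largely replaced fabric fencing is power fencing via
IPMI. A minimal sketch of what that looks like under Pacemaker on
CentOS 7 (the address and credentials are placeholders; the option
names are the older fence_ipmilan spellings shipped in that era):

    # One fence device per node, pointed at that node's BMC/iLO/iDRAC:
    pcs stonith create fence-node1 fence_ipmilan \
        pcmk_host_list="node1" ipaddr="10.0.0.11" \
        login="admin" passwd="secret" lanplus="1" \
        op monitor interval="60s"

    # Fencing must be enabled or the cluster will not use the device:
    pcs property set stonith-enabled=true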
On 22/06/16 01:38 PM, John R Pierce wrote:
> On 6/21/2016 10:01 PM, Tom Robinson wrote:
>> Currently when I migrate a guest, I can all too easily start it up on
>> both hosts! There must be some
>> way to fence these off but I'm just not sure how to do this.
>
> in addition to power fencing as described by others, you can also
> fence at the ethernet …
I have multiple VMs that are hanging on boot. Sometimes they'll boot
fine after 5 mins and other times it'll take over an hour. The problem
seems to be related to journald but I'd like to figure out how I can
get more information.
The VMs are running CentOS 7.1.1503. systemd and journald are both
v…
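A standard way to get more information out of a hang like this is to
raise systemd's own log level and send it to the kernel ring buffer,
then read the timeline back once the guest is finally up:

    # Append to the kernel command line for one boot (edit at the
    # grub menu):
    #   systemd.log_level=debug systemd.log_target=kmsg log_buf_len=8M

    # After boot, look for the gap in the monotonic timestamps:
    journalctl -b -o short-monotonic | less

    # And see which units consumed the time:
    systemd-analyze blame
    systemd-analyze critical-chain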
On 6/21/2016 10:01 PM, Tom Robinson wrote:
> Currently when I migrate a guest, I can all too easily start it up on
> both hosts! There must be some way to fence these off but I'm just not
> sure how to do this.
in addition to power fencing as described by others, you can also fence
at the ethernet …
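Separately from cluster-level fencing, libvirt itself can refuse to
start a guest whose disk is already locked by another host, which
blocks exactly the accidental double-start described above. A sketch,
assuming the lockspace directory lives on storage shared by both hosts:

    # /etc/libvirt/qemu.conf
    lock_manager = "lockd"

    # /etc/libvirt/qemu-lockd.conf
    file_lockspace_dir = "/var/lib/libvirt/lockd/files"

    # On each host:
    systemctl enable virtlockd
    systemctl start virtlockd
    systemctl restart libvirtd

This only protects the disk image from concurrent writers; it is a
backstop, not a substitute for proper fencing.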
On 22 June 2016 at 09:03, Indunil Jayasooriya wrote:
>
> When an UNCLEAN SHUTDOWN happens, or on ifdown eth0 on node1, can
> oVirt migrate VMs from node1 to node2?
Yep.
> in that case, is power management such as iLO needed?
It needs a way to ensure the host is down to prevent storage
corruption, …
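That confirmation is what the per-host power-management entry provides:
the engine queries (or forces) power state through it before restarting
the VMs elsewhere. iLO can be driven over IPMI, and the check the fence
agent performs can be reproduced by hand (address and credentials are
placeholders):

    # Ask the management processor for the chassis power state:
    ipmitool -I lanplus -H 10.0.0.11 -U admin -P secret power status

    # Fencing amounts to forcing the node off:
    ipmitool -I lanplus -H 10.0.0.11 -U admin -P secret power off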