On 2012-08-17T08:19:45, Ulrich Windl <[email protected]> wrote:

> Likewise, if you use resource utilization on primitives in a group, the group 
> begins to start on one node, then stalls when the next primitive's 
> utilization cannot be fulfilled. That's bad, especially when there are enough 
> resources for the whole group on another node. (Here utilizations are not 
> summed.)

This was not the target use case for utilization. It was aimed at
"I have a base storage stack and now don't want to place all VMs
manually"; e.g., only the top-level resource would have utilization
applied.

People are now applying it to other scenarios, but for those, the PE
(Policy Engine) has to be extended to cope first.

(A work-around is to manually sum up the utilization of all group
members and set it on the lowest resource in the group. Not optimal
from a usability perspective, but it works.)
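For a two-member group, that work-around might look roughly like this
in the crm shell (a sketch only: the node names, utilization values,
and the Dummy agent standing in for real resources are all
illustrative):

```
# Make placement honour utilization attributes:
property placement-strategy=utilization

# Node capacities (illustrative values):
node node1 utilization cpu=8 memory=16384
node node2 utilization cpu=8 memory=16384

# p1 alone would need cpu=1 memory=1024 and p2 cpu=2 memory=2048;
# the *sum* for the whole group is set on p1, the lowest member,
# so placement considers the group's total demand up front:
primitive p1 ocf:heartbeat:Dummy \
        utilization cpu=3 memory=3072
primitive p2 ocf:heartbeat:Dummy
group g1 p1 p2
```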

> Some concepts had been implemented very "ad hoc".

We like to phrase this as "sufficiently implemented to satisfy the
business need" ;-)

> And one of the popular cluster books describes the XML configuration.

Uhm. But that is hardly the fault of SLE HA ;-)

> The best tool around is the crm shell (IMHO), while the GUI has 
> extraordinarily poor performance once your cluster has a reasonable number of 
> resources.

True. The Python GUI is sort of sluggish for larger clusters. Which is
why we're providing the crm shell and Hawk; the Python GUI is
basically in maintenance mode.

> There is an access control concept (ACLs) based on XPath.
> Unfortunately, really implementing proven access restrictions would
> require exactly describing the CIB's data model. It's a bit
> complicated...

The ACL model targets common use cases like "I want my operations staff
to see, but not modify" or "This person is allowed to see, but only
start/stop a single resource". These use cases are trivial to express in
the shell, for example.

It's not meant to provide formally validated and BSI/DoD certified
levels of security.
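For illustration, the "may see everything, but only start/stop a
single resource" case might look something like this in the CIB XML
(a sketch only: the ids, user name, resource name, and XPaths are
invented, and the exact ACL element names vary between Pacemaker
versions):

```xml
<acls>
  <acl_role id="ops">
    <!-- read access to the entire CIB -->
    <read id="ops-read-all" xpath="/cib"/>
    <!-- write access limited to one resource's target-role
         attribute, i.e. the ability to start/stop it -->
    <write id="ops-start-stop-vm1"
           xpath="//primitive[@id='vm1']//nvpair[@name='target-role']"/>
  </acl_role>
  <acl_user id="operator1">
    <role_ref id="ops"/>
  </acl_user>
</acls>
```

The shell can express the common cases like this in a couple of lines,
without hand-writing the XPath.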

> Yes: I found out that there is no mechanism to repair non-clustered
> MD-RAIDs,

You mean those not managed by the Raid resource agent?

> so I wrote a RAID monitor and proposed it to support. Still haven't
> heard any feedback about it...

Feature requests take a while. They're not usually considered bugs. But
I've actually seen this being discussed internally; ping your support
contact again for the current status.


Regards,
    Lars

-- 
Architect Storage/HA
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 
21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
