Re: [VOTE] Primate as modern UI for CloudStack

2019-10-16 Thread K B Shiv Kumar
+1 (binding)

On Mon, Oct 7, 2019 at 5:01 PM Rohit Yadav wrote:

> All,
>
> The feedback and response have been positive on the proposal to use Primate
> as the modern UI for CloudStack [1] [2]. Thank you all.
>
> I'm starting this vote to:
>
>   *   Accept Primate codebase [3] as a project under Apache CloudStack
> project
>   *   Create and host a new repository (cloudstack-primate) and follow
> Github based development workflow (issues, pull requests etc) as we do with
> CloudStack
>   *   Given this is a new project, to encourage development cadence until it
> reaches feature completeness, the merge criteria are proposed as:
>  *   Manual testing against each PR and/or screenshots from the author
> or a testing contributor; Travis integration is possible once we have
> JS/UI tests
>  *   At least 1 LGTM from any of the active contributors; we'll move
> this to 2 LGTMs when the codebase reaches feature parity with the
> existing/old CloudStack UI
>  *   Squash and merge PRs
>   *   Accept the proposed timeline [1][2] (subject to achievement of goals
> for the Primate technical release and GA)
>  *   the first technical preview targeted for the winter 2019 LTS
> release (~Q1 2020), with that release also serving a deprecation notice for
> the older UI
>  *   define a release approach before the winter LTS
>  *   stop taking feature requests for the old/existing UI after the
> winter 2019 LTS release, and work on an upgrade path/documentation from
> the old UI to Primate
>  *   the first Primate GA targeted for the summer LTS 2020 (~H2 2019),
> but still ship the old UI with a final deprecation notice
>  *   old UI removed from the codebase in the winter 2020 LTS release
>
> The vote will be up for the next two weeks to give enough time for the PMC
> and the community to gather consensus and still have room for questions,
> feedback and discussions. The results will be shared on/after 21st October
> 2019.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> [1] Primate Proposal:
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Proposal%3A+CloudStack+Primate+UI
>
> [2] Email thread reference:
> https://markmail.org/message/z6fuvw4regig7aqb
>
> [3] Primate repo current location: https://github.com/shapeblue/primate
>
>
> Regards,
>
> Rohit Yadav
>
> Software Architect, ShapeBlue
>
> https://www.shapeblue.com
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London WC2E 9DP, UK
> @shapeblue
>
>
>
>

-- 
Regards
Shiv


Re: Root disk resizing

2021-10-11 Thread K B Shiv Kumar
I believe there's a section called "boothook" in cloud-init, which is probably
what you want.

We're also trying things out with cloud-init. ☺️
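
For illustration, a minimal boothook sketch along those lines (the device
names are only examples and depend on the hypervisor, e.g. /dev/xvda on
XenServer; it assumes cloud-utils-growpart is installed and would need to go
into the user data as a #cloud-boothook part, e.g. via a MIME multi-part if
combined with #cloud-config):

#cloud-boothook
#!/bin/sh
# Boothook parts run early on every boot, unlike the growpart module,
# which (per this thread) only acts on first boot.
growpart /dev/xvda 1 || true          # extend partition 1 to fill the disk
resize2fs /dev/xvda1 || xfs_growfs /  # grow an ext4 or xfs root filesystem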

Best Regards
Shiv
(Sent from mobile device. Apologies for brevity and typos)

On Mon, 11 Oct, 2021, 20:55 Marcus,  wrote:

> Cloud-init is always fun to debug :-). It will probably require some
> playing with to get a pattern down.
>
> There is perhaps a way to get it to re-check and grow on every reboot if you
> adjust/override the module frequency, delete the module semaphore in
> /var/lib/cloud/sem, or, worst case, clear the metadata via 'cloud-init
> clean' or delete /var/lib/cloud.
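
(For illustration, a rough sketch of what that could look like; the semaphore
file name below is hypothetical and paths vary by cloud-init version and
distro, so check what actually exists first:)

# list the per-instance module semaphores to see what has already run
ls /var/lib/cloud/instance/sem/
# remove the relevant semaphore (name is hypothetical) so the module re-runs
rm -f /var/lib/cloud/instance/sem/config_growpart
# heavier option: reset all cloud-init state and logs
cloud-init clean --logs
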
>
> On Mon, Oct 11, 2021 at 3:07 AM Wido den Hollander  wrote:
>
> >
> >
> > On 10/10/21 10:35 AM, Ranjit Jadhav wrote:
> > > Hello folks,
> > >
> > > I have implemented CloudStack with a XenServer host. The template has
> > > been made from a VM with a basic CentOS 7 install and the following
> > > packages installed on it:
> > > 
> > > sudo yum -y install cloud-init
> > > sudo yum -y install cloud-utils-growpart
> > > sudo yum -y install gdisk
> > > 
> > >
> > > After creating a new VM with this template, the root disk is created as
> > > per the size mentioned in the template, or we are able to increase it at
> > > the time of creation.
> > >
> > > But later, when we try to increase the root disk again, the disk space
> > > increases, but the "/" partition does not get auto-resized.
> > >
> >
> > As far as I know it only grows the partition once, i.e. upon first boot.
> > It won't do it again afterwards.
> >
> > Wido
> >
> > >
> > > Following parameters were passed in userdata
> > > 
> > > #cloud-config
> > > growpart:
> > >   mode: auto
> > >   devices: ["/"]
> > >   ignore_growroot_disabled: true
> > > 
> > >
> > > Thanks & Regards,
> > > Ranjit
> > >
> >
>


Re: [Consultation] Remove DB HA feature (db.ha.enabled)

2023-08-22 Thread K B Shiv Kumar
We faced some issues when running Galera, so we went back to master-slave
replication.

Has anyone been using Galera in production for a long time?

Regards,
Shiv

> On 22-Aug-2023, at 19:34, Nux  wrote:
> 
> Happy to contribute a doc on how to achieve HA if we decide to remove this.
> 
> Thanks
> 
> On 2023-08-22 15:01, Rohit Yadav wrote:
>> +1 it's a broken feature that at least doesn't work with MySQL 8.x, I'm not 
>> sure if it worked with prior versions of MySQL. However, we need to document 
>> some sort of suggested MySQL HA setup in our docs.
>> Regards.
>> 
>> From: Nux 
>> Sent: Tuesday, August 22, 2023 18:54
>> To: us...@cloudstack.apache.org ; Dev 
>> 
>> Subject: [Consultation] Remove DB HA feature (db.ha.enabled)
>> Hello everyone,
>> A few weeks ago I asked you whether you use, or managed to use, the DB HA
>> CloudStack feature (db.ha.enabled) [1], and after reading some of the
>> replies and doing intensive testing myself I have found that the
>> feature is indeed non-functional; it's broken.
>> In my testing I discovered DB HA can easily be done outside of
>> CloudStack by employing load balancers and other techniques.
>> Personally I have achieved that by using HAProxy in front of a Galera
>> cluster, but also introduced Keepalived (VRRP) in my setup to "balance"
>> multiple HAProxies, which also worked well.
>> As such, since the feature is basically broken, it will not be trivial
>> to fix, and there are better ways of doing HA, I propose to
>> remove it altogether.
>> Thoughts? Anyone against it?
>> Cheers
>> [1] -
>> https://docs.cloudstack.apache.org/en/latest/adminguide/reliability.html#database-high-availability




Re: [Consultation] Remove DB HA feature (db.ha.enabled)

2023-08-22 Thread K B Shiv Kumar
Well, if it is broken and that is not prominently mentioned anywhere, new adopters 
may go ahead and use it in production. So I guess it's best to remove it, or at 
least mention that it is not production grade.

Thanks
Shiv

> On 22-Aug-2023, at 20:12, Nux  wrote:
> 
> But what do you think of the removal of DB HA code?
> 
> When using Galera you need to query against a single node; don't spread the 
> load among all 3, as this will break certain locking functionality in 
> CloudStack and lead to problems.
> 
> In an HAProxy configuration you should keep just one node active, e.g.:
>server galera1 10.0.3.2:3306 check
>server galera2 10.0.3.3:3306 check backup
>server galera3 10.0.3.4:3306 check backup
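
For context, a minimal HAProxy listen block built around those server lines
might look like the sketch below (the bind address and health-check options
are illustrative assumptions, not taken from the thread):

listen mysql-galera
    bind 0.0.0.0:3306
    mode tcp
    option tcpka
    # only galera1 takes traffic; the backups serve only if it fails its check
    server galera1 10.0.3.2:3306 check
    server galera2 10.0.3.3:3306 check backup
    server galera3 10.0.3.4:3306 check backup
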
> 
> Regards
> 
> On 2023-08-22 15:36, K B Shiv Kumar wrote:
>> We faced some issues when running Galera. We went back to master slave.
>> Anyone using Galera in production for a long time?
>> Regards,
>> Shiv
>>> On 22-Aug-2023, at 19:34, Nux  wrote:
>>> Happy to contribute a doc on how to achieve HA if we decide to remove this.
>>> Thanks
>>> On 2023-08-22 15:01, Rohit Yadav wrote:
>>>> +1 it's a broken feature that at least doesn't work with MySQL 8.x, I'm 
>>>> not sure if it worked with prior versions of MySQL. However, we need to 
>>>> document some sort of suggested MySQL HA setup in our docs.
>>>> Regards.
>>>> 
>>>> From: Nux 
>>>> Sent: Tuesday, August 22, 2023 18:54
>>>> To: us...@cloudstack.apache.org ; Dev 
>>>> 
>>>> Subject: [Consultation] Remove DB HA feature (db.ha.enabled)
>>>> Hello everyone,
>>>> A few weeks ago I asked you whether you use, or managed to use, the DB HA
>>>> CloudStack feature (db.ha.enabled) [1], and after reading some of the
>>>> replies and doing intensive testing myself I have found that the
>>>> feature is indeed non-functional; it's broken.
>>>> In my testing I discovered DB HA can easily be done outside of
>>>> CloudStack by employing load balancers and other techniques.
>>>> Personally I have achieved that by using HAProxy in front of a Galera
>>>> cluster, but also introduced Keepalived (VRRP) in my setup to "balance"
>>>> multiple HAProxies, which also worked well.
>>>> As such, since the feature is basically broken, it will not be trivial
>>>> to fix, and there are better ways of doing HA, I propose to
>>>> remove it altogether.
>>>> Thoughts? Anyone against it?
>>>> Cheers
>>>> [1] -
>>>> https://docs.cloudstack.apache.org/en/latest/adminguide/reliability.html#database-high-availability

