[GSoC] [CLOUDSTACK-6114] Progress update

2014-06-01 Thread Ian Duffy
Hi all,

I am making slow but steady progress on my GSoC project. If anybody is
interested in seeing the work completed and the work left to do, the JIRA
ticket can be seen here: https://issues.apache.org/jira/browse/CLOUDSTACK-6114

All of the work can be seen at https://github.com/imduffy15/GSoC-2014

At the moment I have the following:

# XenServer as Vagrant Box

https://github.com/imduffy15/packer-xenserver

I have a Packer template which will output a XenServer Vagrant box. This box
is configured with two interfaces: the Vagrant NAT interface and a host-only
interface which will be used by CloudStack.

Originally I planned to use iptables on the XenServer box to let the
host-only interface use the NAT interface, supplying internet access to
VMs brought up on the hypervisor. Sadly this didn't work as planned, as the
iptables rules were overwritten.

The solution for this was to supply a gateway on the host-only network.

# Mysql, NFS and Gateway vagrant box

https://github.com/imduffy15/GSoC-2014/tree/master/MySQL_NFS

I was able to re-use the Chef recipes from the folks over at CloudOps to
bring up a Vagrant box with MySQL and NFS installed and configured.

The MySQL server is configured with no password. The Vagrantfile I use
forwards port 3306 from the host machine to the VM, enabling the host
machine to execute things like mvn -P developer -pl developer -Ddeploydb
without the need to change configuration files within CloudStack.
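
As a quick sanity check of what this enables, both of the following run
unchanged from the host once the box is up (assuming a MySQL client is
installed on the host; the exact commands are just an illustration):

    # verify the forward reaches the VM's MySQL (no password configured)
    mysql -h 127.0.0.1 -P 3306 -u root -e 'SELECT VERSION();'

    # deploy the database straight from the cloudstack source tree
    mvn -P developer -pl developer -Ddeploydb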

The NFS recipe simply exports /exports.
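
The export entry the recipe renders looks something like this (the exact
options here are my assumption, not copied from the recipe):

    # /etc/exports
    /exports *(rw,async,no_root_squash,no_subtree_check)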

I wrote a simple Chef recipe (
https://github.com/imduffy15/cookbook_nat-router ) that uses iptables to
forward traffic from the host-only interface to the NAT interface. This
lets the box act as a gateway for the host-only network, which the VMs
brought up on XenServer can use to get internet access.
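
A minimal sketch of the forwarding the recipe sets up, assuming eth0 is the
NAT interface and eth1 the host-only one (interface names may differ):

    # enable routing between the two interfaces
    sysctl -w net.ipv4.ip_forward=1

    # masquerade host-only traffic out through the NAT interface
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 \
        -m state --state ESTABLISHED,RELATED -j ACCEPT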


# Modified devcloud.cfg

I have supplied a modified devcloud.cfg file that works with the
environment described above:
https://github.com/imduffy15/GSoC-2014/blob/master/devcloud.cfg


My next step is to create a Chef recipe that downloads a system VM
template specified by a Chef attribute and installs it onto the MySQL, NFS
and gateway Vagrant box.
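
The recipe will most likely just wrap CloudStack's own seeding script,
along these lines (the mount point and template URL are placeholders that
would come from Chef attributes):

    # seed a XenServer system VM template into NFS secondary storage
    ./scripts/storage/secondary/cloud-install-sys-tmplt \
        -m /exports/secondary \
        -u http://example.com/systemvm64template-xen.vhd.bz2 \
        -h xenserver -F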


Re: [PROPOSAL] git workflow

2014-06-01 Thread Rajani Karuturi
Yes, as Mike said, if it's a one-off case we can do an empty merge (merge -s
ours) for it; git will consider the branch merged but will not bring in any
changes.

If the branches have diverged a lot, for example after a major rewrite, we
could stop merging to that branch and the ones above it, and make the fix
manually.
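
For concreteness, a sketch of both cases using the branch names from the
proposal quoted below (the commit and its id are placeholders):

    # normal flow: commit the fix on the oldest affected release
    # branch, then merge upward so one commit id reaches master
    git checkout 4.3                      # commit the fix here
    git checkout 4.4 && git merge 4.3
    git checkout master && git merge 4.4

    # one-off case: mark 4.3 as merged without taking its changes
    git checkout 4.4 && git merge -s ours 4.3

    # check which branches already contain a given fix
    git branch --contains <commit-id>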


~Rajani



On 30-May-2014, at 11:26 pm, Mike Tutkowski  
wrote:

> Yep, that's what I was referring to in that a particular fix for an old
> release may not apply to newer versions. That does happen.
> 
> We used to mark those as "don't need to merge to branch x" in SVN and then
> you handled it however made sense on the applicable branch(es).
> 
> 
> On Fri, May 30, 2014 at 11:53 AM, Stephen Turner 
> wrote:
> 
>> What happens if a fix isn't relevant for newer versions, or has to be
>> rewritten for newer versions because the code has changed? Don't the
>> branches diverge and you end up cherry-picking after that?
>> 
>> --
>> Stephen Turner
>> 
>> 
>> -Original Message-
>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>> Sent: 30 May 2014 18:48
>> To: dev@cloudstack.apache.org
>> Subject: Re: [PROPOSAL] git workflow
>> 
>> I think this flow is something we should seriously consider.
>> 
>> I find cherry-picking from branch to branch to be error prone in that it's
>> easy for someone to forget to cherry-pick to all applicable branches, and
>> you don't have any easy way to see that the cherry picks are related.
>> 
>> When I worked at HP, we had automated tools check whether you checked a
>> fix into a prior release but not later releases. In such a situation, you
>> either 1) forgot to perform the check-in or 2) the check-in was no longer
>> applicable in the later release(s), so you needed to mark it as
>> unnecessary (SVN supported this ability... not sure about Git).
>> 
>> 
>> On Fri, May 30, 2014 at 10:49 AM, Rajani Karuturi <
>> rajani.karut...@citrix.com> wrote:
>> 
>>> Hi all,
>>> 
>>> 
>>> 
>>> Our current git workflow is confusing with the *-forward branches and
>>> cherry-picking. It's hard to track which releases a commit has gone
>>> into unless I do some git log grepping. Also, as a contributor, I
>>> end up creating patches for each branch, as a patch doesn't apply
>>> cleanly on different branches.
>>> 
>>> 
>>> 
>>> I think we should have some guidelines. Here is what I propose.
>>> 
>>> 
>>> 
>>>  1.  There should be a branch for every major release (ex: 4.3.x, 4.4.x,
>>> 5.0.x, 5.1.x) and the minor releases should be tagged accordingly on
>>> the respective branches.
>>>  2.  The branch naming convention is to be followed. Many branches
>>> named 4.3, 4.3.0, 4.3.1 etc. are confusing.
>>>  3.  Cherry-picking should be avoided. In git, when we cherry-pick,
>>> we have two physically distinct commits for the same change or fix,
>>> which is difficult to track unless you do cherry-pick -x.
>>>  4.  There should always be a continuous flow from release branches to
>>> master. This doesn’t mean cherry-picking. They should be merged (either
>>> ff or no-ff), which retains the commit ids and is easily trackable with
>>> git branch --contains.
>>> *   Every bug fix should always flow from the minimal release up till
>>> master. A bug isn't fixed until the fix reaches master.
>>> *   For ex., a fix for a bug in 4.2.1 should be committed to
>>> 4.2.x->4.3.x->4.4.x->master.
>>> *   If someone forgets to do the merge, the next time a new commit is
>>> done this will also get merged.
>>>  5.  There should always be a continuous flow from master to feature
>>> branches, meaning all feature branch owners should proactively take
>>> any new commits from master by doing a merge from master.
>>>  6.  The commits from a feature branch will make it to master on code
>>> completion through a merge.
>>>  7.  There should never be a merge from master to release branches.
>>>  8.  Every commit in an LTS branch (targeted to any minor release)
>>> should have at least a bug id and correct author information.
>>> *   Cassandra's template: patch by ; reviewed by 
>>> for CASSANDRA-
>>>  9.  Once the release branch is created (after code freeze), any bug
>>> in JIRA can be marked with a fix version of the current release (4.4)
>>> only on the RM's approval, and only those can go to the release branch.
>>> This can be done through JIRA with certain rules (maybe using JIRA
>>> vote?). This would save the cherry-picking time and another branch's
>>> maintenance.
>>> 
>>> 
>>> 
>>> Please add your thoughts/suggestions/comments.
>>> 
>>> 
>>> 
>>> Ref:
>>> http://www.draconianoverlord.com/2013/09/07/no-cherry-picking.html
>>> https://www.youtube.com/watch?v=AJ-CpGsCpM0
>>> 
>>> ~Rajani
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud
>> *™*
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 

Review Request 22126: [Windows] Integrate SystemSeed Template into installer and add progress bar messages to the installer so that the admin can be aware of changes happening to the system

2014-06-01 Thread Damodar Reddy Talakanti

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22126/
---

Review request for cloudstack, Abhinandan Prateek and Murali Reddy.


Bugs: CLOUDSTACK-6701 and CLOUDSTACK-6702

https://issues.apache.org/jira/browse/CLOUDSTACK-6701

https://issues.apache.org/jira/browse/CLOUDSTACK-6702


Repository: cloudstack-git


Description
---

1. Integrate the System Seed Template into the MSI installer
2. Give necessary progress bar messages to the admin before changing system
properties like environment variables, firewall rules etc.
3. Move all titles and descriptions to a property file or to build
properties


Diffs
-

  client/pom.xml c55d5b7 
  scripts/installer/windows/WixInstallerDialog.wxs b0f510b 
  scripts/installer/windows/acs.wxs 8206afa 
  scripts/installer/windows/en-us.wxl b43393c 
  scripts/storage/secondary/cloud-install-sys-tmplt.py PRE-CREATION 

Diff: https://reviews.apache.org/r/22126/diff/


Testing
---

Tested on Windows 2012 R2 Server


Thanks,

Damodar Reddy Talakanti



[DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Hieu LE
Hi all,

There are some problems when deploying a large number of VMs at my company
with CloudStack. All VMs are deployed from the same template (e.g. Windows 7)
and the quantity is approximately ~1000 VMs. The problems here are low IOPS
and low VM performance (about ~10-11 IOPS; boot time is very high). My
company's storage is SAN/NAS with NFS and XenServer 6.2.0. All XenServer
nodes have standard server HDD RAID.

I have found some solutions for this such as:

   - Enable XenServer IntelliCache, with some tweaks in the CloudStack code
   to deploy and start VMs in IntelliCache mode. But this solution transfers
   all IOPS from shared storage to local storage, and hence affects and
   limits some CloudStack features.
   - Buy some expensive storage and network solutions to increase IOPS.
   Nah..

So, I am thinking about a new feature that may increase the IOPS and
performance of VMs:

   1. Separate the golden image onto a high-IOPS partition: buy a new SSD,
   plug it into XenServer, and deploy new VMs in NFS storage WITH the golden
   image on this new SSD partition. This can reduce READ IOPS on shared
   storage and decrease VM boot time. (Currently, a VM deployed on XenServer
   always has its master image (the golden image, in VMware terms) in the
   same storage repository as its differencing image (child image).) We can
   do this trick by tweaking the VHD header file with a new XenServer
   plug-in.
   2. Create a golden primary storage and a VM template that enables this
   feature.
   3. All VMs deployed from a template that has this feature enabled will
   have their golden image stored in the golden primary storage (an SSD or
   some other high-IOPS partition), and their differencing image (child
   image) stored in other, normal primary storage.

This new feature would not transfer all IOPS from shared storage to local
storage (because the high-IOPS partition can be another high-IOPS shared
storage) and would require less money than buying a new storage solution.

What do you think? If possible, may I write a proposal in the CloudStack wiki?

BRs.

Hieu Lee

-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C()$ ULC(++)$ P L++(+++)$ E
!W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
e++(+++) h-- r(++)>+++ y-
--END GEEK CODE BLOCK--


Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Punith S
Hi Hieu,

Your problem is the bottleneck we storage vendors see in the cloud: the VMs
in the cloud are not guaranteed IOPS from the primary storage. In your case
I'm assuming you are running 1000 VMs on a Xen cluster in which all the VM
disks lie on the same primary NFS storage mounted to the cluster, so you
won't get dedicated IOPS for each VM, since every VM is sharing the same
storage. To solve this issue in CloudStack, we third-party vendors have
implemented plugins (namely CloudByte, SolidFire etc.) to support managed
storage (dedicated volumes with guaranteed QoS for each VM), where we map
each root disk (VDI) or data disk of a VM to one NFS or iSCSI share coming
out of a pool. We are also proposing a new feature in 4.5 to change volume
IOPS on the fly, where you can increase or decrease your root disk IOPS
while booting or at peak times. But to use this plugin you have to buy our
storage solution.

If not, you can try creating an NFS share out of an SSD storage pool and
create a primary storage in CloudStack out of it, named golden primary
storage, with a specific tag like "gold", and create a compute offering for
your template with the storage tag "gold". All the VMs you create will then
sit on this gold primary storage with high IOPS, with other data disks on
other primary storage. But even here you cannot guarantee the QoS at the VM
level.
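
If it helps, a rough cloudmonkey sketch of that setup (all ids, names and
offering sizes below are made-up examples, not a tested recipe):

    # register the SSD-backed NFS share as a tagged primary storage
    cloudmonkey create storagepool zoneid=<zone-id> podid=<pod-id> \
        clusterid=<cluster-id> name=golden-primary \
        url=nfs://ssd-filer/export/gold tags=gold

    # compute offering whose root disks must land on gold-tagged storage
    cloudmonkey create serviceoffering name=gold-2x4 \
        displaytext="2 vCPU, 4 GB, gold root disk" \
        cpunumber=2 cpuspeed=2000 memory=4096 tags=gold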

Thanks


On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE  wrote:

> Hi all,
>
> There are some problems while deploying a large amount of VMs in my company
> with CloudStack. All VMs are deployed from same template (e.g: Windows 7)
> and the quantity is approximately ~1000VMs. The problems here is low IOPS,
> low performance of VM (about ~10-11 IOPS, boot time is very high). The
> storage of my company is SAN/NAS with NFS and Xen Server 6.2.0. All Xen
> Server nodes have standard server HDD disk raid.
>
> I have found some solutions for this such as:
>
>- Enable Xen Server Intellicache and some tweaks in CloudStack codes to
>deploy and start VM in Intellicache mode. But this solution will
> transfer
>all IOPS from shared storage to all local storage, hence affect and
> limit
>some CloudStack features.
>- Buying some expensive storage solutions and network to increase IOPS.
>Nah..
>
> So, I am thinking about a new feature that (may be) increasing IOPS and
> performance of VMs:
>
>1. Separate golden image in high IOPS partition: buying new SSD, plug in
>Xen Server and deployed a new VM in NFS storage WITH golden image in
> this
>new SSD partition. This can reduce READ IOPS in shared storage and
> decrease
>boot time of VM. (Currenty, VM deployed in Xen Server always have a
> master
>image (golden image - in VMWare) always in the same storage repository
> with
>different image (child image)). We can do this trick by tweaking in VHD
>header file with new Xen Server plug-in.
>2. Create golden primary storage and VM template that enable this
>feature.
>3. So, all VMs deployed from template that had enabled this feature will
>have a golden image stored in golden primary storage (SSD or some high
> IOPS
>partition), and different image (child image) stored in other normal
>primary storage.
>
> This new feature will not transfer all IOPS from shared storage to local
> storage (because high IOPS partition can be another high IOPS shared
> storage) and require less money than buying new storage solution.
>
> What do you think ? If possible, may I write a proposal in CloudStack wiki
> ?
>
> BRs.
>
> Hieu Lee
>
> --
> -BEGIN GEEK CODE BLOCK-
> Version: 3.1
> GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C()$ ULC(++)$ P L++(+++)$
> E
> !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
> e++(+++) h-- r(++)>+++ y-
> --END GEEK CODE BLOCK--
>



-- 
regards,

punith s
cloudbyte.com


Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Mike Tutkowski
Thanks, Punith - this is similar to what I was going to say.

Any time a set of CloudStack volumes share IOPS from a common pool, you
cannot guarantee IOPS to a given CloudStack volume at a given time.

Your choices at present are:

1) Use managed storage (where you can create a 1:1 mapping between a
CloudStack volume and a volume on a storage system that has QoS). As Punith
mentioned, this requires that you purchase storage from a vendor who
provides guaranteed QoS on a volume-by-volume basis AND has this integrated
into CloudStack.

2) Create primary storage in CloudStack that is not managed, but has a high
number of IOPS (ex. using SSDs). You can then storage-tag this primary
storage and create Compute and Disk Offerings that use this storage tag to
make sure their volumes end up on this storage pool (primary storage). This
still does not guarantee IOPS on a CloudStack volume-by-volume basis, but
it at least places the CloudStack volumes that need higher IOPS on a
storage pool that could provide them.

A big downside here is that you need to watch how many CloudStack volumes
get deployed on this primary storage: you'll essentially need to
over-provision IOPS in this primary storage to increase the probability
that each and every CloudStack volume that uses it gets the necessary IOPS
(and isn't as likely to suffer from the Noisy Neighbor Effect). You should
be able to tell CloudStack to only use, say, 80% (or whatever) of the
storage you're providing to it, so as to increase your effective IOPS per
GB ratio. This over-provisioning of IOPS to control Noisy Neighbors is
avoided in option 1; in that situation, you only provision the IOPS and
capacity you actually need. It is a much more sophisticated approach.
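
To put hypothetical numbers on that last point (all figures made up):

    100,000 IOPS / 10,000 GB usable = 10.0 IOPS per GB (fully provisioned)
    100,000 IOPS /  8,000 GB usable = 12.5 IOPS per GB (capacity capped at 80%)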

Thanks,
Mike


On Sun, Jun 1, 2014 at 11:36 PM, Punith S  wrote:

> hi hieu,
>
> your problem is the bottle neck we see as a storage vendors in the cloud,
> meaning all the vms in the cloud have not been guaranteed iops from the
> primary storage, because in your case i'm assuming you are running 1000vms
> on a xen cluster whose all vm's disks are lying on a same primary nfs
> storage mounted to the cluster,
> hence you won't get the dedicated iops for each vm since every vm is
> sharing the same storage. to solve this issue in cloudstack we the third
> party vendors have implemented the plugin(namely cloudbyte , solidfire etc)
> to support managed storage(dedicated volumes with guaranteed qos for each
> vms) , where we are mapping each root disk(vdi) or data disk of a vm with
> one nfs or iscsi share coming out of a pool, also we are proposing the new
> feature to change volume iops on fly in 4.5, where you can increase or
> decrease your root disk iops while booting or at peak times. but to use
> this plugin you have to buy our storage solution.
>
> if not , you can try creating a nfs share out of ssd pool storage and
> create a primary storage in cloudstack out of it named as golden primary
> storage with specific tag like gold, and create a compute offering for your
> template with the storage tag as gold, hence all the vm's you create will
> sit on this gold primary storage with high iops. and other data disks on
> other primary storage but still here you cannot guarantee the qos at vm
> level.
>
> thanks
>
>
> On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE  wrote:
>
>> Hi all,
>>
>> There are some problems while deploying a large amount of VMs in my
>> company
>> with CloudStack. All VMs are deployed from same template (e.g: Windows 7)
>> and the quantity is approximately ~1000VMs. The problems here is low IOPS,
>> low performance of VM (about ~10-11 IOPS, boot time is very high). The
>> storage of my company is SAN/NAS with NFS and Xen Server 6.2.0. All Xen
>> Server nodes have standard server HDD disk raid.
>>
>> I have found some solutions for this such as:
>>
>>- Enable Xen Server Intellicache and some tweaks in CloudStack codes to
>>deploy and start VM in Intellicache mode. But this solution will
>> transfer
>>all IOPS from shared storage to all local storage, hence affect and
>> limit
>>some CloudStack features.
>>- Buying some expensive storage solutions and network to increase IOPS.
>>Nah..
>>
>> So, I am thinking about a new feature that (may be) increasing IOPS and
>> performance of VMs:
>>
>>1. Separate golden image in high IOPS partition: buying new SSD, plug
>> in
>>Xen Server and deployed a new VM in NFS storage WITH golden image in
>> this
>>new SSD partition. This can reduce READ IOPS in shared storage and
>> decrease
>>boot time of VM. (Currenty, VM deployed in Xen Server always have a
>> master
>>image (golden image - in VMWare) always in the same storage repository
>> with
>>different image (child image)). We can do this trick by tweaking in VHD
>>header file with new Xen Server plug-in.
>>2. Create golden primary storage and VM template that enable this
>> 

RE: seeing "Unknown parameters : ctxdetails" for addResourceDetail/removeResourceDetail

2014-06-01 Thread Santhosh Edukulla
1. I just did a rough grep: 158 log lines out of 6674 are these, roughly
2.3%. This is just from creating a datacenter and playing with a few initial
things. As well, the message format of these logs is a little incomplete, I
believe; it ends at "typ.." without giving further information.
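
For reference, the rough count was along these lines (the log path is the
default one; yours may differ):

    LOG=/var/log/cloudstack/management/management-server.log
    grep -c 'Unknown parameters' "$LOG"   # 158 on my run
    wc -l < "$LOG"                        # 6674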

2. It gives good information on unknown parameters, but what is the
definition of unknown, and what should users do with it? Maybe we can
document this for production users.

Thanks!
Santhosh

From: Daan Hoogland [daan.hoogl...@gmail.com]
Sent: Friday, May 30, 2014 4:41 PM
To: dev
Cc: Antonio Fornié Casarrubios
Subject: Re: seeing "Unknown parameters : ctxdetails" for 
addResourceDetail/removeResourceDetail

I am alright with the ability to turn it off, preferably at run time.

On Fri, May 30, 2014 at 8:20 PM, Nitin Mehta  wrote:
> Hey Daan,
> In the thread we have discussed the merits and demerits of having this.
> IMHO, I am not convinced why we should have this, but I am very concerned
> that the downsides include the performance of APIs, unnecessary clutter in
> the logs, etc. It's evident that not everyone wants this ability. There
> should be, at the minimum, an ability to turn off this worker. Let me know
> if you agree.
>
> Thanks,
> -Nitin
>
> On 29/05/14 10:30 AM, "Daan Hoogland"  wrote:
>
>>If removing the worker means that unknown parameters are never logged,
>>I am -1 on removing it.
>>
>>On Thu, May 29, 2014 at 7:25 PM, Nitin Mehta 
>>wrote:
>>> Antonio - Can you please remove this worker? I had filed a bug
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-6658 for the same
>>>
>>> Thanks,
>>> -Nitin
>>>
>>> On 29/05/14 5:06 AM, "Santhosh Edukulla" 
>>> wrote:
>>>
We still see a huge number of "Unknown parameters..." logs in the server
log. Maybe we can make this configurable, or dump it to some other log, say
a misc log:

if mandatory params are not sent as per the request, dump to the log and
return; if arg types and arguments are wrong as per validation, then dump.
But by dumping every unknown param, real issues get lost in this huge
rolling log set when debugging.

Regards,
Santhosh

From: Nitin Mehta [nitin.me...@citrix.com]
Sent: Monday, May 19, 2014 2:42 AM
To: Antonio Fornié Casarrubios; cloudstack
Subject: Re: seeing "Unknown parameters : ctxdetails" for
addResourceDetail/removeResourceDetail

Thanks Antonio. That’s what I have been saying from the beginning. IMHO,
I don’t see much value in having this, but I am really concerned with the
performance of the APIs, especially in production setups.
For this reason, can we please remove this worker, or at the very least
have a setting so that it is not turned on by default?

-Nitin

From: Antonio Fornié Casarrubios <antonio.for...@gmail.com>
Date: Sunday 18 May 2014 4:22 PM
To: cloudstack <dev@cloudstack.apache.org>
Cc: Nitin Mehta <nitin.me...@citrix.com>
Subject: Re: seeing "Unknown parameters : ctxdetails" for
addResourceDetail/removeResourceDetail


If the parameter is correct then it should not be logged as unknown. And
so it should be added to the worker's list of parameters that the worker
will never blame. That is the fix. Right?

Perhaps it is not considered good that every time a new parameter is added
to the API requests it has to be included in the worker. In that case
perhaps it's better to just completely remove the worker itself.

Thanks, cheers
Antonio

On 16/05/2014 23:21, "Min Chen" <min.c...@citrix.com> wrote:
The ctxdetails parameter complained about in your warning log is one of the
internal parameters added by ApiDispatcher, and it is not publicly presented
in the API Cmd class. Those parameters are not errors in the request, and
there is nothing to be fixed.

Thanks
-min

On 5/14/14 12:46 AM, "Antonio Fornié Casarrubios"
<antonio.for...@gmail.com> wrote:

>The errors in the requests are created by these well-known clients;
>that's why they should be fixed. It's not that the admin misspelled a
>param; it's more that the code that creates the requests did (the js in
>the web ui, cloudmonkey, Marvin or any other...).
>
>Cheers
>antonio
>
>
>2014-05-14 3:05 GMT+02:00 Nitin Mehta <nitin.me...@citrix.com>:
>
>> Daan - MS logs are visible only to the admin and not a general user. So
>> are you saying this is for the admin to debug in case he misspelled a
>> param?
>>
>> I feel that this shouldn’t be ON by default, and should such logic
>> be part of CS core?
>> I also find it difficult to understand that in production the admin
>> would commit such basic mistakes. I am assuming that he/she would be a
>>

RE: Patch Management

2014-06-01 Thread Santhosh Edukulla
I am not sure if it's stretching a limit, but I have seen many patches
missing in master or only applied to one branch.

One RCA for many failures or issues reported during the initial days of
every new branch is that these fixes are missing. I believe we should
enforce that a patch submitter provides a working patch for at least master
and the concerned branch; until then, we should not put closure to the bug,
review, or patch submission. Or have Review Board submissions considered
"only" when they contain patches for both master and the individual branch.
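
A cheap entry gate could be as simple as the check below (a sketch; the
branch names and patch file are examples):

    # refuse to close the review unless the patch applies cleanly on both
    for b in master 4.4; do
        git checkout "$b" && git apply --check fix.patch \
            || echo "patch does not apply cleanly on $b"
    done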

Santhosh

From: Daan Hoogland [daan.hoogl...@gmail.com]
Sent: Saturday, May 17, 2014 5:06 AM
To: dev
Subject: Re: Patch Management

Plus one; the review board does not easily facilitate multiple patches,
so I would use the 'depends on' feature to point to the other
branch patches.

On Thu, May 15, 2014 at 9:18 AM, Santhosh Edukulla
 wrote:
> Team,
>
> Currently, it seems we have a few patches missing in master but available
> in the current running branch, which could be attributed to various
> reasons.
>
> It is sometimes hard to track that all the changes made for issue fixes
> are available in master at any given time. I believe we can have an entry
> gate: if the patch does not apply cleanly on either of the branches (and
> if it is supposed to go to both), then it's better we don't push it to
> either branch until clean patches for both master and the running branch
> are submitted again.
>
> This way, the review will not be closed on submission. Otherwise, from a
> bug-fix perspective, fixes should go hand in hand to both master and the
> running branch, I believe. Tracking missing patches manually through
> review/git logs may be a little tedious and error prone.
>
> You can see if there is a better way to track a patch to completion for
> both branches and whether this is proper.
>
> Thanks!
> Santhosh



--
Daan