Prasanna, 

If we broke these tests down into smaller, more specific test cases, could
we then automate them together into one end-to-end test?

E.g. (and I'm oversimplifying this for the sake of the e-mail):

1. Create Zone
2. Create Domain
3. Create Accounts
4. Create Users
5. Create Networks
6. Create VMs
7. Destroy VMs
8. Delete Users
9. Delete Accounts
10. Delete Zone

Each of these would have multiple steps; could we then have an integration
test where you pick and choose the unit test cases to be run together? I
don't know the testing system well, but I know there are automated systems
out there that have done this, so I'm asking what our systems can do.
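
Something like the sketch below, perhaps. It's purely illustrative; the
module and test class names are made up, not anything we have today:

import unittest

# Hypothetical module: each small case lives on its own and can also run
# standalone.
from lifecycle_tests import (TestCreateZone, TestCreateDomain,
                             TestCreateAccounts, TestCreateUsers,
                             TestCreateNetworks, TestCreateVMs,
                             TestDestroyVMs, TestDeleteUsers,
                             TestDeleteAccounts, TestDeleteZone)

def end_to_end_suite():
    """Chain the small cases, in order, into one end-to-end run."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for case in (TestCreateZone, TestCreateDomain, TestCreateAccounts,
                 TestCreateUsers, TestCreateNetworks, TestCreateVMs,
                 TestDestroyVMs, TestDeleteUsers, TestDeleteAccounts,
                 TestDeleteZone):
        suite.addTests(loader.loadTestsFromTestCase(case))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(end_to_end_suite())

Skipping a step would just mean dropping a class from that tuple.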

Matt 



On 7/23/13 1:26 PM, "Ahmad Emneina" <aemne...@gmail.com> wrote:

>In terms of integration tests, it could be broken up into smaller pieces,
>but as part of an overall 'networking functional' suite. Maybe we need to
>further divide the tests between functional tests and unit or integration
>tests. We really need to improve the testing around error handling and the
>lifecycle of a feature.
>
>For example, network.gc does its job of cleaning up rules after a
>network has shut down. But when restarting a VM in said network, none of
>the previous rules get reinstated. That to me is brittle and needs to
>mature...
>sorry for the anti-rant :p
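>
>A test for exactly that lifecycle gap could be tiny. A sketch (the
>helper names here are invented, not real Marvin calls):
>
>    def test_pf_rule_survives_vm_restart(self):
>        """network.gc removes rules on shutdown; restarting the VM
>        should reinstate them."""
>        vm = deploy_vm(self.network)            # hypothetical helper
>        create_pf_rule(vm, public_port=22)      # hypothetical helper
>        stop_vm(vm)
>        wait_for_network_gc(self.network)       # rules cleaned up here
>        start_vm(vm)
>        # Fails today, per the behaviour described above:
>        self.assertTrue(pf_rule_active(vm, public_port=22))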
>
>
>On Tue, Jul 23, 2013 at 9:22 AM, Prasanna Santhanam <t...@apache.org>
>wrote:
>
>> here's another one:
>>
>> def test_07_delete_network_with_rules(self):
>>         """ Test delete network that has PF/StaticNat/LB rules/Network ACL
>>
>>         # Validate the following
>>         # 1. Create a VPC with cidr - 10.1.1.1/16
>>         # 2. Add network1 (10.1.1.1/24) and network2 (10.1.2.1/24) to
>>         #    this VPC.
>>         # 3. Deploy vm1 and vm2 in network1 and vm3 and vm4 in network2.
>>         # 4. Create a PF/Static NAT/LB rule for VMs in network1.
>>         # 5. Create a PF/Static NAT/LB rule for VMs in network2.
>>         # 6. Create ingress network ACL allowing all the above rules
>>         #    from the public ip range on network1 and network2.
>>         # 7. Create egress network ACL for network1 and network2 to
>>         #    access google.com.
>>         # 8. Create a private gateway for this VPC and add a static
>>         #    route to this gateway.
>>         # 9. Create a VPN gateway for this VPC and add a static route
>>         #    to this gateway.
>>         # 10. Make sure that all the PF, LB, Static NAT rules work as
>>         #     expected.
>>         # 11. Make sure that we are able to access google from all user
>>         #     VMs.
>>         # 12. Make sure that the newly added private gateway's and VPN
>>         #     gateway's static routes work as expected.
>>         # Steps:
>>         # 1. Delete the 1st network.
>>         # 2. Delete the account.
>>         # Validations:
>>         # 1. As part of network deletion all the resources attached to
>>         #    the network should get deleted. All other VMs and rules
>>         #    should work as expected.
>>         # 2. All the resources associated with the account should be
>>         #    deleted.
>>
>> This is such a complicated test. I can see breaking it down into at
>> least 5 tests. The point I'm trying to make here is simply this -
>>
>> When we don't have simple tests that make sure ACLs are working
>> correctly, we shouldn't overindulge in this kind of testing. Bear in
>> mind that testing that a simple ACL ingress works correctly is also a
>> "system integration" test. There's no reason why unit tests should be
>> the only ones condensed in form.
>>
>> If we had one test for ACLs, one for VPC offerings, one for the VPN
>> gw, and one to check whether ACLs allow reaching an external service
>> like 'google', the tests would be much simpler to debug and much
>> better indicators of failure.
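>>
>> To illustrate, the small focused variants might look like this. A
>> sketch only; the fixtures and helpers are invented, not our actual
>> Marvin base classes:
>>
>> import unittest
>>
>> class TestVpcAcl(unittest.TestCase):
>>     def test_ingress_acl_allows_ssh(self):
>>         """One tier, one VM, one ingress ACL rule, nothing else."""
>>         tier = create_vpc_tier(self.vpc, cidr="10.1.1.1/24")  # hypothetical
>>         vm = deploy_vm(tier)
>>         add_ingress_acl(tier, protocol="tcp", port=22)
>>         self.assertTrue(can_ssh(vm.public_ip))
>>
>>     def test_egress_acl_reaches_external_service(self):
>>         """Egress ACL to an external service, in isolation."""
>>         tier = create_vpc_tier(self.vpc, cidr="10.1.2.1/24")
>>         vm = deploy_vm(tier)
>>         add_egress_acl(tier, protocol="tcp", port=80)
>>         self.assertTrue(can_reach(vm, "google.com"))
>>
>> When one of these fails, the test name alone points at what broke.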
>>
>> This entire scenario not only convolutes the analysis of a failure but
>> also makes maintaining it a problem. I'm completely fine with
>> integration tests being tests that put things together as a whole. But
>> Sanjay worked on this today and Sheng worked on another scenario
>> related to VPC load balancing, and it takes a few hours to get this
>> entire thing debugged and fixed. And the entire suite (at least 8 to
>> 10 tests) consists of such scenarios.
>>
>> We need to start working on critically reviewing the tests that come
>> through.
>>
>> Of course I'd like to hear others' thoughts on this.
>>
>> On Fri, Jul 19, 2013 at 06:06:18PM +0530, Prasanna Santhanam wrote:
>> > My problem is that we have tests that already check cleanup of
>> > accounts. A test should do the most crystallized set of steps to
>> > achieve the scenario and not try to put everything and the kitchen
>> > sink into it. If we see ourselves doing that, we need to break down
>> > our tests into smaller blocks. They'll still be
>> > system/integration/live tests only.
>> >
>> > As to the simulator - yes you can run these tests on a simulator
>> > today.
>> >
>> > On Fri, Jul 19, 2013 at 01:12:14AM +0000, Alex Huang wrote:
>> > > I disagree.  Error handling should be part of our testing.
>> > >
>> > > We should incorporate the simulator into the BVT and regression
>> > > tests.  On test cases that are really meant to test the business
>> > > logic rather than the provisioning code, the test case should
>> > > perform all of the provisioning on the simulator instead. The
>> > > simulator can then be programmed to simulate a VM-stopped failure,
>> > > etc., and we can see how the business logic responds to these
>> > > problems.
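>> > >
>> > > A rough sketch of the idea (the orchestrator class and the
>> > > programmable-failure hook below are hypothetical stand-ins, not
>> > > the real simulator API):
>> > >
>> > > import unittest
>> > > from unittest import mock
>> > >
>> > > class TestStopVmBusinessLogic(unittest.TestCase):
>> > >     def test_stop_failure_is_handled(self):
>> > >         # A mock provisioning layer standing in for the simulator,
>> > >         # programmed to fail the stop call on demand.
>> > >         simulator = mock.Mock()
>> > >         simulator.stop_vm.side_effect = RuntimeError("agent timeout")
>> > >
>> > >         orchestrator = VmOrchestrator(provisioning=simulator)  # hypothetical
>> > >         with self.assertRaises(RuntimeError):
>> > >             orchestrator.stop("vm-1")
>> > >         # The business logic should not wedge the VM in a
>> > >         # transitional state after the failure:
>> > >         self.assertNotEqual(orchestrator.state("vm-1"), "Stopping")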
>> > >
>> > > --Alex
>> > >
>> > > > -----Original Message-----
>> > > > From: Anthony Xu [mailto:xuefei...@citrix.com]
>> > > > Sent: Thursday, July 18, 2013 3:02 PM
>> > > > To: dev@cloudstack.apache.org
>> > > > Subject: RE: [rant] stupid test cases
>> > > >
>> > > > +1   VM can be in "Stopped" state
>> > > >
>> > > >
>> > > > Anthony
>> > > >
>> > > > -----Original Message-----
>> > > > From: Marcus Sorensen [mailto:shadow...@gmail.com]
>> > > > Sent: Wednesday, July 17, 2013 10:47 PM
>> > > > To: dev@cloudstack.apache.org
>> > > > Subject: Re: [rant] stupid test cases
>> > > >
>> > > > I can understand that we may want to test that everything related
>> > > > to the domain gets cleaned up properly. We have run into all
>> > > > sorts of things when deleting accounts, for example where
>> > > > resources won't clean up because the account is gone and we throw
>> > > > null pointers because a bunch of code looks up the account when
>> > > > deleting. However, to your point, VMs can be created in a
>> > > > "stopped" state and that wouldn't incur the overhead of
>> > > > deployment.
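>> > > >
>> > > > For reference, a sketch of that (assuming the API's
>> > > > startvm=false flag on deployVirtualMachine is plumbed through
>> > > > the framework's VM helper; treat the kwarg as an assumption):
>> > > >
>> > > > vm = VirtualMachine.create(
>> > > >     self.apiclient,
>> > > >     self.services["virtual_machine"],
>> > > >     accountid=self.account.name,
>> > > >     domainid=self.domain.id,
>> > > >     startvm=False,  # VM lands in "Stopped"; no hypervisor work
>> > > > )
>> > > > self.assertEqual(vm.state, "Stopped")
>> > > >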
>> > > > On Jul 17, 2013 11:33 PM, "Prasanna Santhanam" <t...@apache.org>
>> wrote:
>> > > >
>> > > > > I was just going through one of the automated test cases and I
>> find it
>> > > > > really silly that there's the following test:
>> > > > >
>> > > > > def test_forceDeleteDomain(self):
>> > > > >         """ Test delete domain force option"""
>> > > > >
>> > > > >         # Steps for validations
>> > > > >         # 1. create a domain DOM
>> > > > >         # 2. create 2 users under this domain
>> > > > >         # 3. deploy 1 VM into each of these user accounts
>> > > > >         # 4. create PF / FW rules for port 22 on these VMs for
>> > > > >         #    their respective accounts
>> > > > >         # 5. delete the domain with force=true option
>> > > > >         # Validate the following
>> > > > >         # 1. listDomains should list the created domain
>> > > > >         # 2. listAccounts should list the created accounts
>> > > > >         # 3. listVirtualMachines should show the Running VMs
>> > > > >         # 4. PF and FW rules should be shown in listFirewallRules
>> > > > >         # 5. domain should delete successfully and the above
>> > > > >         #    three list calls should show all the resources now
>> > > > >         #    deleted. listRouters should not return any routers
>> > > > >         #    in the deleted accounts/domains
>> > > > >
>> > > > > Why would one need the overhead of creating VMs in a domain
>> > > > > deletion test? Do we not understand that the basic
>> > > > > accounts/domains etc. in cloudstack have nothing to do with the
>> > > > > virtual machines? This kind of a test slows down other useful
>> > > > > tests that we could be running. Moreover, when this fails in
>> > > > > the VM creation step I'd have to go in and analyse logs to
>> > > > > realize that deleteDomain was perhaps fine but VM creation had
>> > > > > failed.
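>> > > > >
>> > > > > The crystallized version needs none of that overhead. Roughly
>> > > > > (a sketch; whether the force option is exposed as cleanup=True
>> > > > > by the framework helper is an assumption):
>> > > > >
>> > > > > def test_forceDeleteDomain(self):
>> > > > >     """Force-delete a domain holding only accounts, no VMs."""
>> > > > >     domain = Domain.create(self.apiclient, self.services["domain"])
>> > > > >     for _ in range(2):
>> > > > >         Account.create(self.apiclient, self.services["account"],
>> > > > >                        domainid=domain.id)
>> > > > >     domain.delete(self.apiclient, cleanup=True)  # force delete
>> > > > >     self.assertFalse(Domain.list(self.apiclient, id=domain.id))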
>> > > > >
>> > > > > That's a pointless effort. I'm sure there are others in the
>> > > > > automated tests that do this kind of wasteful testing. So
>> > > > > please please please &*()#@()# please review test plans before
>> > > > > automating them!
>> > > > >
>> > > > > I'm not going to be looking at this forever to fix these
>> > > > > issues when we want to see pretty metrics and numbers.
>> > > > >
>> > > > > --
>> > > > > Prasanna.,
>> > > > >
>> > > > > ------------------------
>> > > > > Powered by BigRock.com
>> > > > >
>> > > > >
>> >
>> > --
>> > Prasanna.,
>> >
>> > ------------------------
>> > Powered by BigRock.com
>>
>> --
>> Prasanna.,
>>
>> ------------------------
>> Powered by BigRock.com
>>
>>
