We actually have a machine that is a perfect clone of the production
machine. The only difference is the passwords. We test all deployments
to it first. We call it staging. Having a staging machine has two
benefits:
1) We can test our deployment scripts, migrations, etc. on something
as close as we can get to production.
2) If the production box dies, we have one that can take its place
very quickly (change the database passwords/pointers and go).
We also have a demo box that is updated via Capistrano whenever the
build passes.
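If you wanted to wire up that kind of setup, capistrano-ext's
multistage support is one way to do it. A minimal sketch (the app
name and hostnames here are placeholders, not our real config):

    # config/deploy.rb
    set :application, "myapp"               # placeholder
    set :stages, %w(staging demo production)
    set :default_stage, "staging"
    require 'capistrano/ext/multistage'

    # config/deploy/staging.rb
    role :app, "staging.example.com"        # placeholder host
    role :db,  "staging.example.com", :primary => true

Then "cap staging deploy" pushes to the clone, and "cap production
deploy" only goes out once staging looks good.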
Testing configuration / deployment is hard because you can assert that
the config is what you think it is, but that in no way proves that it's
actually working. It's like using mocks to build up functionality
against a mock library. At some point you actually have to test against
the real thing or you're just guessing.
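To make that concrete, here's the kind of spec I mean (the paths and
hostname are made up). It can pass while production is completely
broken:

    # spec/config_spec.rb
    require 'yaml'

    describe "database config" do
      it "points at the host we expect" do
        config = YAML.load_file("config/database.yml")["production"]
        config["host"].should == "db.example.com"
        # Passes even if db.example.com is unreachable, the password
        # is wrong, or the database server isn't running at all.
      end
    end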
-Mike
Matt Wynne wrote:
On 27 Jan 2009, at 17:08, Brian Takita wrote:
On Tue, Jan 27, 2009 at 4:15 AM, Matt Wynne <[email protected]> wrote:
Not done it, but Cucumber acceptance tests would surely be a good
fit here:
Given there is a database "foo"
When I run the script
Then there should be a backup no more than 10 minutes old
And the backup should restore OK
And the restored database should contain the same records as the
original database
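The step definitions behind that would have to shell out and poke at
the real system. A rough sketch (the cap task name, backup location,
and mysql commands are guesses about the plugin, not its actual API):

    # features/step_definitions/backup_steps.rb
    Given /^there is a database "([^"]*)"$/ do |name|
      system("mysqladmin create #{name}")   # or load a fixture dump
    end

    When /^I run the script$/ do
      @output = `cap backup:create 2>&1`    # hypothetical task name
      $?.success?.should be_true
    end

    Then /^there should be a backup no more than (\d+) minutes old$/ do |mins|
      newest = Dir["backups/*.sql.gz"].sort_by { |f| File.mtime(f) }.last
      newest.should_not be_nil
      (Time.now - File.mtime(newest)).should be < mins.to_i * 60
    end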
On 27 Jan 2009, at 03:31, Scott Taylor wrote:
Does anyone have any insight into testing capistrano tasks? More
specifically, I'm looking to add regression tests to this package,
which adds database backup tasks to capistrano:
Yes, I have experience testing Capistrano, and my experience with
unit testing it has been less than positive. Capistrano is difficult
to test: basically you have to mock out the shell/run/file transfer
commands to do unit tests. The big problem is, how do you know the
shell commands themselves are correct?
There is some benefit, though: you do get to see the list of shell
commands that are run, all in one place. However, there can be many
permutations of logic inside a deploy, so it's often difficult to
capture every situation.
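For what it's worth, the mocking approach looks something like this
(the task and recipe names are hypothetical; the point is that run is
stubbed, so the commands are never actually exercised):

    # spec/deploy_recipes_spec.rb
    require 'capistrano'

    describe "deploy:backup" do
      before(:each) do
        @config = Capistrano::Configuration.new
        @config.load "config/deploy.rb"     # assumed recipe location
      end

      it "dumps the database before restarting" do
        @config.should_receive(:run).with(/mysqldump/).ordered
        @config.should_receive(:run).with(/restart/).ordered
        @config.find_and_execute_task("deploy:backup")
      end
    end

You can assert the exact strings passed to run, but nothing here
tells you whether those strings actually work on a real box.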
I realize that this sounds terrible, and perhaps there is a better way
to go about this, but this has been my experience.
IMO, the best way to test Capistrano is to have a staging environment
that simulates your production environment, where you deploy and make
sure things actually work.
I can't recall a situation where I had non-obvious issues that would
have been prevented by unit tests. Often the non-obvious issues are
related to properly restarting processes, where the monit + shell
script interactions misbehave. This is testable, of course.
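For example, an acceptance-level check that a restart really happened
might compare pids before and after (the pid file path and host are
assumptions about your setup):

    Then /^the app server should have been restarted$/ do
      new_pid = `ssh staging.example.com cat /var/run/app.pid`.strip
      new_pid.should_not be_empty
      new_pid.should_not == @pid_before_deploy  # captured in an earlier step
    end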
That's why I suggested *acceptance tests* in Cucumber. I don't think
mocking / unit testing is going to get you much value here - what you
need is something that feeds back whether the whole thing works. So
yeah you'll need a sandbox / staging environment for that.
Matt Wynne
http://blog.mattwynne.net
http://www.songkick.com
_______________________________________________
rspec-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/rspec-users