On Tue, Jan 15, 2013 at 3:50 PM, Erik Paulson <epaul...@unit1127.com> wrote:
> I'm curious what Hadoop developers use for their day-to-day hacking on
> Hadoop. I'm talking changes to the Hadoop libraries and daemons, and not
> developing Map-Reduce jobs or using the HDFS Client libraries to talk
> to a filesystem from an application.
>
> I've checked out Hadoop, made minor changes and built it with Maven, and
> tracked down the resulting artifacts in a target/ directory that I could
> deploy. Is this typically how a cloudera/hortonworks/mapr/etc dev works, or
> are the IDEs more common?

I use both vim and Eclipse (3.8.0~rc4-1 from Debian). I use git for
version control with a branch per JIRA. Most testing is done with
JUnit tests; I try to write a test case that reproduces a bug before
trying to fix it. Sometimes for a particular bug I need to install
artifacts on a cluster (of VMs or physical machines) during the
edit-compile-debug cycle; in such cases I build with mvn and carefully
choose which artifacts need to be updated on the target cluster, using
rsync to speed up the cycle.
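
For what it's worth, a minimal sketch of that cycle (the JIRA id,
module, and cluster paths here are made up for illustration):

    $ git checkout -b HDFS-1234        # one branch per JIRA
    $ mvn -DskipTests package          # rebuild the artifacts
    # copy only the jar that actually changed, rather than redeploying
    # the whole build
    $ rsync -av hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-*.jar \
        testnode:/opt/hadoop/share/hadoop/hdfs/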

It's pretty difficult to develop in Java without using Eclipse or
something similar. Like Todd, I stuck to my preferred editor
environment for several months but found the IDE crutch too useful to
avoid entirely. Luckily, nowadays Eclipse and vim synchronize through
the filesystem pretty well (much better than 6-8 years ago); I haven't
yet lost a single line of code to "oh, you edited the same file in two
editors and they overwrote each other"; both vim and Eclipse carefully
say "It was changed on disk! Oh Noes! What shall we do?".

You can run JUnit tests from either Eclipse or mvn, and I do both regularly.
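
For example, from the command line Surefire lets you run a single test
class (the class name here is picked just for illustration):

    $ cd hadoop-hdfs-project/hadoop-hdfs
    $ mvn test -Dtest=TestFileCreation

In Eclipse the equivalent is right-clicking the test class and
choosing Run As > JUnit Test.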

-andy
