Hello - I'm curious what Hadoop developers use for their day-to-day hacking on Hadoop itself. I'm talking about changes to the Hadoop libraries and daemons, not developing Map-Reduce jobs or using the HDFS client libraries to talk to a filesystem from an application.
I've checked out Hadoop, made minor changes, built it with Maven, and tracked down the resulting artifacts in a target/ directory that I could deploy. Is this typically how a Cloudera/Hortonworks/MapR/etc. dev works, or are IDEs more common? I realize this sort of sounds like a dumb question, but I'm mostly curious what I might be missing out on if I stick with vim and nothing else. I'm also not entirely sure where Maven caches the jars it uses to build, or how careful I have to be to ensure my changes wind up in the right places without doing a clean build every time. Thanks! -Erik
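
P.S. In case it helps to see the concrete steps, my current workflow looks roughly like this (just a sketch of what I've been doing; the -Pdist and -Dtar flags I took from the project's BUILDING.txt, so correct me if there's a better incantation):

    # check out the source
    git clone https://github.com/apache/hadoop.git
    cd hadoop

    # compile and install the modules into the local Maven repo, skipping tests
    mvn clean install -DskipTests

    # build a deployable distribution tarball (I believe it lands under hadoop-dist/target/)
    mvn package -Pdist -DskipTests -Dtar

It's that last step, and knowing when I can get away with rebuilding just one module instead of the whole tree, that I'm fuzziest on.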