+1. This is Jeff Zhang from the Zeppelin community.
Thanks Xun for bringing this up. Submarine was integrated into Zeppelin
several months ago, and I have already seen some early adoption of it in China.
AI is a fast-growing area, and I believe moving into a separate project would
be helpful for Submarine to
Jeff Zhang created HADOOP-15514:
---
Summary: NoClassDefFoundError of TimelineCollectorManager when
starting MiniCluster
Key: HADOOP-15514
URL: https://issues.apache.org/jira/browse/HADOOP-15514
Project
Jeff Zhang created HADOOP-12028:
---
Summary: Allow to set handler thread name when building IPC Server
Key: HADOOP-12028
URL: https://issues.apache.org/jira/browse/HADOOP-12028
Project: Hadoop Common
Another workaround I can think of is to keep my own copy of Hadoop and copy
the extra jars into it, but that results in more maintenance effort.
On Wed, Mar 23, 2011 at 9:19 AM, Jeff Zhang wrote:
> Hi all,
>
> When I use command "hadoop fs -text" I need to add ext
OP_HOME/lib
Is there any other way to add extra jars to the CLASSPATH?
--
Best Regards
Jeff Zhang
s (e.g. consider
> load balancing, affinity to data?). For example, how to decide on
> which nodes mappers and reducers are to be executed and when.
> Thanks!
>
> Gerald
>
--
Best Regards
Jeff Zhang
e.org/jira/browse/HADOOP-6957
Project: Hadoop Common
Issue Type: Improvement
Components: ipc
Reporter: Jeff Zhang
Priority: Minor
I am trying to make pseudo-distributed Hadoop accessible from outside, so I
changed fs.default.name to 0.
ted
> or
> invalidated. However ,When I run the same job with the purge operation at the
> end multiple times, I find the local files have never been deleted and the
> modification time is when the first job run. How can I ask my job to
> re-distributed the cache again anyway?
>
>
't find the place where the distributed cache
> starts
> working. I want to know, between DistributedCache.addCacheFile at the master
> node
> and DistributedCache.getLocalCacheFiles at the client side, when and where
> the files get distributed.
>
>
> Thanks,
> -Gang
>
>
>
>
>
--
Best Regards
Jeff Zhang
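
For reference, a minimal sketch of the two calls discussed in this thread,
using the old org.apache.hadoop.filecache.DistributedCache API from that era;
the file name and paths below are made up for illustration:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;

    public class CacheSketch extends MapReduceBase {

        // Job-submission side: register an HDFS file so the framework copies
        // it to every task node before the tasks start.
        public static void register(JobConf conf) throws Exception {
            DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt"), conf);
        }

        // Task side: the framework has already localized the file by the time
        // a task starts, so configure() only asks where the local copy is.
        public void configure(JobConf conf) {
            try {
                Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
                // localFiles[0] now points at the node-local copy of lookup.txt
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

The actual distribution happens when the job is submitted and tasks are
launched; localized copies may be reused across runs if the underlying HDFS
file is unchanged, which seems to be what the purge question above runs into.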
hen how they
> > use it? Explain the architecture or flow of how google or other search
> > engines work and what is the part of mapreduce in it.
> >
> >Please Explain.
> >
> > With Regards,
> > B.Yuhendar
> >
> >
> > -
> > This email was sent using TCEMail Service.
> > Thiagarajar College of Engineering
> > Madurai-625 015, India
> >
>
>
--
Best Regards
Jeff Zhang
VM for each task, and
>> >“*io.sort.mb*”
>> >represents the buffer size in memory inside *one map task child-JVM*; the
>> >default value of 100MB should be large enough, because the input split of
>> >one map task is usually 64MB, as large as the block size we usually set.
>> >Then why is the recommendation for “*io.sort.mb*” 200MB for large jobs
>> >(and it really works)? How could the job size affect the procedure?
>> >Is there any fault in my understanding? Any comment/suggestion will be
>> >highly valued; thanks in advance.
>> >
>> >Best Regards,
>> >Carp
>>
>
--
Best Regards
Jeff Zhang
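
For context, io.sort.mb sizes the buffer that holds a map task's output
records (plus accounting metadata) before they are spilled and sorted, so it
is the map's output volume rather than the 64MB input split that drives how
often spills happen. A minimal sketch of setting it on a job with the old
mapred API; the value 200 simply mirrors the recommendation quoted above:

    import org.apache.hadoop.mapred.JobConf;

    public class SortBufferSketch {
        public static void main(String[] args) {
            JobConf conf = new JobConf(SortBufferSketch.class);
            // Per-map-task sort buffer, in MB (default 100). A bigger buffer
            // means fewer spill files to merge when map output is large.
            conf.setInt("io.sort.mb", 200);
            // The buffer must fit inside the task child JVM heap, so raise
            // mapred.child.java.opts (-Xmx...) along with it.
            // ... set mapper/reducer, input/output paths, and submit ...
        }
    }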
y setups and
> cleanups. What's Hadoop's philosophy on this?
>
> Thanks,
> Min
> --
> My research interests are distributed systems, parallel computing and
> bytecode based virtual machine.
>
> My profile:
> http://www.linkedin.com/in/coderplay
> My blog:
> http://coderplay.javaeye.com
>
--
Best Regards
Jeff Zhang
ata and solr for creating instances. Is that right???
> thanks
> sujitha
>
>
>
>
> -
> This email was sent using TCEMail Service.
> Thiagarajar College of Engineering
> Madurai-625 015, India
>
>
--
Best Regards
Jeff Zhang
Amazon EC2 charges by the hour, so I think it will fit your requirement.
Jeff Zhang
On Thu, Nov 26, 2009 at 1:42 PM, Palikala, Rajendra (CCL) <
rpalik...@carnival.com> wrote:
>
> Hi,
>
> I am planning to develop some prototypes on Hadoop for ETL to a
> datwareho
Not sure why you need a File object.
In HDFS, Path is something like a File object in the local filesystem.
Jeff Zhang
On Wed, Nov 4, 2009 at 3:28 AM, iGama wrote:
> Hi all. I'm using Java.
>
> I have a function that receives a File (it manipulates images). In a
> local file s
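
To illustrate the Path-vs-File point above, a small hedged sketch (the path
and the byte handling are made up; in practice you would hand the stream to
whatever image code you use):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // In HDFS a file is named by a Path and read through the
            // FileSystem API, instead of java.io.File + FileInputStream.
            Path image = new Path("/user/demo/photo.jpg");
            FSDataInputStream in = fs.open(image);
            byte[] header = new byte[16];
            in.readFully(header);
            in.close();
        }
    }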
;
}
Best Regards,
Jeff zhang
On Thu, Oct 29, 2009 at 12:32 PM, Steve Gao wrote:
>
> Does anybody have a similar issue? If you store XML files in HDFS, how
> can you make sure a chunk read by a mapper does not contain partial data of
> an XML segment?
>
> For example:
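
Jeff's code in this message is cut off by the archive, so purely as a hedged,
Hadoop-free sketch of the usual trick for this problem (not necessarily what
he posted): each split owns the records whose start tag begins inside it, and
the reader keeps reading past the split end until the matching end tag, so no
record is ever torn in half. In Hadoop this logic would live inside a custom
RecordReader reading from HDFS; the <record> tag name below is invented for
illustration.

    import java.util.ArrayList;
    import java.util.List;

    public class XmlSplitSketch {
        static final String START = "<record>";
        static final String END = "</record>";

        // Return the records "owned" by the byte range [splitStart, splitEnd):
        // a record belongs to the split in which its start tag begins, and the
        // reader is allowed to run past splitEnd to finish the last record.
        static List<String> readSplit(String data, int splitStart, int splitEnd) {
            List<String> records = new ArrayList<String>();
            int pos = data.indexOf(START, splitStart);
            while (pos >= 0 && pos < splitEnd) {
                int end = data.indexOf(END, pos);
                if (end < 0) {
                    break; // malformed tail, ignore
                }
                records.add(data.substring(pos, end + END.length()));
                pos = data.indexOf(START, end + END.length());
            }
            return records;
        }

        public static void main(String[] args) {
            String data = "<record>a</record><record>bb</record><record>ccc</record>";
            int mid = data.length() / 2; // pretend the HDFS block boundary falls here
            System.out.println(readSplit(data, 0, mid));             // first two records
            System.out.println(readSplit(data, mid, data.length())); // last record only
        }
    }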