Abhishek Modi created HADOOP-19343:
--
Summary: Add native support for GCS connector
Key: HADOOP-19343
URL: https://issues.apache.org/jira/browse/HADOOP-19343
Project: Hadoop Common
Issue
Abhishek Pal created HADOOP-19275:
-
Summary: dtutil cancel command fails with NullPointerException
Key: HADOOP-19275
URL: https://issues.apache.org/jira/browse/HADOOP-19275
Project: Hadoop Common
Abhishek Das created HADOOP-18129:
-
Summary: Change URI[] in INodeLink to String[] to reduce memory
footprint of ViewFileSystem
Key: HADOOP-18129
URL: https://issues.apache.org/jira/browse/HADOOP-18129
Abhishek Das created HADOOP-18100:
-
Summary: Change scope of inner classes in InodeTree to make them
accessible outside package
Key: HADOOP-18100
URL: https://issues.apache.org/jira/browse/HADOOP-18100
Abhishek Das created HADOOP-17039:
-
Summary: Change scope of InternalDirOfViewFs and InodeTree to make
ViewFileSystem extendable outside common package
Key: HADOOP-17039
URL: https://issues.apache.org/jira/browse/HADOOP-17039
Abhishek Das created HADOOP-17032:
-
Summary: Handle path having multiple children mount points
pointing to different filesystems
Key: HADOOP-17032
URL: https://issues.apache.org/jira/browse/HADOOP-17032
Abhishek Das created HADOOP-17029:
-
Summary: ViewFS does not return correct user/group and ACL
Key: HADOOP-17029
URL: https://issues.apache.org/jira/browse/HADOOP-17029
Project: Hadoop Common
; https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm PST.
> >
> > I have done testing with a pseudo cluster and distributed shell job. My +1 to start.
> >
> > Thanks & Regards
> > Rohith Sharma K S
> >
>
--
Regards,
Abhishek Modi
986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> > [2]
> >
> >
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
> >
>
--
With Regards,
Abhishek Modi
Member of Technical Staff,
Qubole Private Ltd,
Bengaluru
Mobile: +91-9560486536
> >>
> > > >> On Tue, Mar 5, 2019 at 11:20 AM Eric Payne wrote:
> > > >>
> > > >>> It is my pleasure to announce that Eric Badger has accepted an
> > > invitation
> > > >>> to become a Hadoop Core committer.
> > > >>>
> > > >>> Congratulations, Eric! This is well-deserved!
> > > >>>
> > > >>> -Eric Payne
> > > >>>
> > > >>
> > >
> > >
> > >
> >
>
--
With Regards,
Abhishek Modi
Abhishek Agarwal created HADOOP-12423:
-
Summary: ShutdownHookManager throws exception if JVM is already
being shut down
Key: HADOOP-12423
URL: https://issues.apache.org/jira/browse/HADOOP-12423
Abhishek Gayakwad created HADOOP-9403:
-
Summary: in case of zero map jobs map completion graph is broken
Key: HADOOP-9403
URL: https://issues.apache.org/jira/browse/HADOOP-9403
Project: Hadoop
choose to decide the number of reducers to mention explicitly, what should I
consider? Choosing an inappropriate number of reducers hampers performance.
Sorry for this question, I am a little confused about the number of reducers.
Regards
Abhishek
Sent from my iPhone
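For later readers: a minimal sketch of setting the reducer count explicitly with the classic org.apache.hadoop.mapred API (the class name and the value 8 are illustrative assumptions, not advice from this thread):

    import org.apache.hadoop.mapred.JobConf;

    public class ReducerCountExample {
        public static void main(String[] args) {
            JobConf conf = new JobConf(ReducerCountExample.class);
            conf.setJobName("reducer-count-demo");   // hypothetical job name
            // A common starting heuristic from the Hadoop tutorial is roughly
            // 0.95 or 1.75 times (nodes * reduce slots per node); tune from there.
            conf.setNumReduceTasks(8);               // assumed value, tune per cluster
        }
    }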
Data is skewed here so does it help?
Thanks
Abhi
Sent from my iPhone
On Jul 24, 2012, at 9:50 AM, Nitin Pawar wrote:
> Tried rarely.
>
> It also depends on the data :)
>
> On Tue, Jul 24, 2012 at 6:26 PM, Abhishek wrote:
>> Hi nitin,
>>
>> Thanks for
ioner;
> set total.order.partitioner.natural.order=[false|true];
> set total.order.partitioner.path=[path];
>
> On Mon, Jul 23, 2012 at 8:56 AM, abhiTowson cal
> wrote:
>>
>> hi all,
>>
>> How to use total order partitioner hive?
>>
>> Regards
>> Abhishek
>
>
>
>
> --
> Nitin Pawar
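For reference, the Hive settings quoted above sit on top of Hadoop's TotalOrderPartitioner. A rough sketch of the equivalent plain MapReduce setup is below (new mapreduce API; the paths, reducer count, and sampler rates are assumptions for illustration, not values from this thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class TotalOrderSortSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "total-order-sort");       // hypothetical job name

            // Identity map/reduce over tab-separated key/value lines.
            job.setInputFormatClass(KeyValueTextInputFormat.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            job.setNumReduceTasks(4);                                   // assumed reducer count
            job.setPartitionerClass(TotalOrderPartitioner.class);

            FileInputFormat.addInputPath(job, new Path("/data/in"));        // assumed path
            FileOutputFormat.setOutputPath(job, new Path("/data/sorted"));  // assumed path

            // Sample the input and write the cut points the partitioner will use.
            TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
                    new Path("/tmp/_partitions"));                      // assumed path
            InputSampler.writePartitionFile(job,
                    new InputSampler.RandomSampler<Text, Text>(0.01, 1000)); // assumed sampling

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }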
> MyUniqueNumber myUniqueNumber = MyUniqueNumber.getInstance();
> for (int n = 0; n <= 5; n++) {
>     System.out.println("read number: " + myUniqueNumber.getUniqueNumber());
> }
> }
> }
>
>
> On Thu, Jun 7, 2012 at 6:14 PM, abhishek dodda
Actually, I am trying to look at how we can write a MapReduce program for this.
On Thu, Jun 7, 2012 at 7:12 AM, Harsh J wrote:
> Ashish - Note though that Twitter's Snowflake is only roughly
> sequential in generation.
>
> Abhishek - Do you really require proper sequence IDs? Why not just
>
Hi all,
I have a scenario where all my tables are in DB2 and I am dumping
them into HDFS.
A unique sequence id (or unique identifier) should be added as an extra
column at the beginning or at the end of each table. Can anyone help me
with this, please?
Thanks
Regards
Abhi.
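If strictly sequential ids are not required (as suggested earlier in the thread), one common approach is to build the id from the task id plus a per-task counter inside the mapper. A hedged sketch, with hypothetical class and field names:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper: prefixes every input line with an id built from the
    // task id and a local counter, so ids never collide across mappers.
    public class AddUniqueIdMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        private long counter = 0;
        private int taskId;

        @Override
        protected void setup(Context context) {
            taskId = context.getTaskAttemptID().getTaskID().getId();
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // e.g. task 3, record 42 -> "3_000000042" (the format is an assumption)
            String id = taskId + "_" + String.format("%09d", counter++);
            context.write(NullWritable.get(), new Text(id + "," + value.toString()));
        }
    }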
Hi Shivam,
The following paper by Zaharia et al. has design insights as well as
lots of evaluation.
http://www.cs.berkeley.edu/~matei/papers/2010/eurosys_delay_scheduling.pdf
Abhishek
On Mon, Oct 17, 2011 at 1:20 PM, Harsh J wrote:
> Shivam,
>
> Here lies its inception with good read
Actually, I found the reason. I am running HDFS as "root" and there is
a bug that has recently been fixed.
https://issues.apache.org/jira/browse/HDFS-1943
Thanks,
Abhishek
On Thu, Sep 1, 2011 at 6:25 PM, Ravi Prakash wrote:
> Hi Abhishek,
>
> Try reading through the s
"1.6.0_27" but get same error with it.
Abhishek
On Thu, Sep 1, 2011 at 4:00 PM, hailong.yang1115
wrote:
> Hi abhishek,
>
> Have you successfully installed a Java virtual machine such as the Sun JDK before
> running Hadoop? Or maybe you forgot to configure the environment variable
> JAVA_HOME?
Hi all,
I am trying to install Hadoop (release 0.20.203) on a machine with CentOS.
When I try to start HDFS, I get the following error.
: Unrecognized option: -jvm
: Could not create the Java virtual machine.
Any idea what might be the problem?
Thanks,
Abhishek
Hi,
What do the following two File Sytem counters associated with a job
(and printed at the end of a job's execution) represent?
FILE_BYTES_READ and FILE_BYTES_WRITTEN
How are they different from the HDFS_BYTES_READ and HDFS_BYTES_WRITTEN?
Thanks,
Abhishek
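Roughly speaking, FILE_BYTES_READ/WRITTEN count bytes going through the local file system (mainly spilled and merged intermediate map output), while HDFS_BYTES_READ/WRITTEN count bytes read from and written to HDFS (normally the job input and final output). If it helps to inspect them programmatically, a sketch using the old mapred API follows; the group name "FileSystemCounters" reflects the 0.20-era releases and should be treated as an assumption for other versions, and the job configuration is elided:

    import org.apache.hadoop.mapred.Counters;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class PrintFsCounters {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(PrintFsCounters.class);
            // ... configure input/output/mapper/reducer here ...
            RunningJob job = JobClient.runJob(conf);   // runs and waits for the job

            Counters counters = job.getCounters();
            // Local file system bytes (spills, merges, map output read by reducers)
            long fileRead = counters.findCounter("FileSystemCounters", "FILE_BYTES_READ").getCounter();
            // Bytes read from HDFS (typically the job input)
            long hdfsRead = counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getCounter();
            System.out.println("FILE_BYTES_READ=" + fileRead + " HDFS_BYTES_READ=" + hdfsRead);
        }
    }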
What is the inter-arrival time between these jobs?
There is a "set up" phase for jobs before they are launched. It is
possible that the order of jobs can change due to slightly different
set up times. Apart from the number of blocks, it may matter "where"
these blocks lie.
A
change the Fair Scheduler to assign more than 1 map
task to a TT per heart beat (I did that and achieved 100% utilization
even with small map tasks). But I am wondering, if doing so will
violate some fairness properties.
Thanks,
Abhishek
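For later readers: newer Fair Scheduler releases exposed this behaviour as a configuration switch instead of a code change. A hedged sketch follows; the property names come from later Fair Scheduler documentation and should be verified against your release:

    import org.apache.hadoop.conf.Configuration;

    // Scheduler-side settings, normally placed in the JobTracker's mapred-site.xml
    // rather than set per job; shown as Java only for illustration.
    public class FairSchedulerHeartbeatKnobs {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Let the Fair Scheduler hand out more than one task per heartbeat
            // (e.g. a map and a reduce together). Verify the property name
            // exists in your version before relying on it.
            conf.setBoolean("mapred.fairscheduler.assignmultiple", true);
            // Some releases also expose per-heartbeat caps such as
            // "mapred.fairscheduler.assignmultiple.maps" -- treat that as an
            // assumption and check your version's documentation.
            System.out.println(conf.getBoolean("mapred.fairscheduler.assignmultiple", false));
        }
    }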
information (say, in the TaskTracker logs)?
Thanks,
Abhishek
I realized that I made a mistake in my earlier post. So here is the correct one.
I have a job ("loadgen") with only 1 input (say) part-0 of size
1368654 bytes.
So when I submit this job, I get the following output:
INFO mapred.FileInputFormat: Total input paths to process : 1
However, in th
two splits greater than the size of my
input?
I also noticed that if I run the same job with 2 inputs (say)
part-0 and part-1, then only 2 map tasks are created.
To my knowledge, the number of map tasks should be the same as the
number of inputs.
Thanks,
Abhishek
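For what it's worth, the split count is not simply the number of input files: the 0.20-era FileInputFormat divides each file into splits whose size is derived from the requested map count, the minimum split size, and the block size. A simplified model of that calculation (not the exact Hadoop code), using the 1368654-byte input from the post:

    // Simplified model of how FileInputFormat sizes splits (per file).
    public class SplitSizeSketch {
        static long computeSplitSize(long totalInputBytes, int requestedMaps,
                                     long minSplitSize, long blockSize) {
            long goalSize = totalInputBytes / Math.max(requestedMaps, 1);
            return Math.max(minSplitSize, Math.min(goalSize, blockSize));
        }

        public static void main(String[] args) {
            long totalBytes = 1368654L;                 // single input file from the post
            long splitSize = computeSplitSize(totalBytes, 2, 1L, 64L * 1024 * 1024);
            // Each file yields roughly ceil(fileLength / splitSize) splits, so a
            // single small file can still produce more than one map task when the
            // job requests several maps.
            long numSplits = (totalBytes + splitSize - 1) / splitSize;
            System.out.println("splitSize=" + splitSize + " numSplits=" + numSplits);
        }
    }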
> No. of slots per task tracker cannot be varied so even if some nodes
> have additional cores, extra slots cannot be added.
True. This is what I have been wishing for;-) I routinely use clusters
where some machines have 8 while others have 4 cores.
Abhishek
>
> Regards
more
cores compared to servers with fewer cores.
Abhishek
On Wed, Mar 10, 2010 at 11:39 PM, Sujitha wrote:
> hi all
> MapReduce is running on Hadoop clusters (homogeneous). Is it possible to
> use it in a heterogeneous environment? Do we need any interface to
> support this?
irectory
from the command line, for example: $ mkdir {hadoop-home}/logs/userlogs/testdir
If you have too many directories in userlogs, the OS should fail to
create the directory and report that there are too many.
Thanks,
Abhishek
ota over time. How do I reduce the number of
these queries in Hadoop?
Thanks,
Abhishek
Do you want the latest source code or the source code for a particular
release? If it is a particular release, the source code comes
with the distribution.
Abhishek
On Wed, Jan 13, 2010 at 12:21 AM, wrote:
> Hi all,
> I want to know how to get the source code for hadoop an
Hi Matei,
Many thanks for your prompt reply.
Abhishek
On Tue, Dec 22, 2009 at 8:42 PM, Matei Zaharia wrote:
> Hi Abhishek,
>
> You can find the in-development version of Hadoop in SVN, using the
> instructions at http://hadoop.apache.org/mapreduce/version_control.html. Note
possible
for me to download the code related to the modified Fair Scheduler in
order to use the global scheduling feature?
Thanks,
Abhishek