You need to use "load data local inpath".
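For reference, a minimal sketch of the two variants (table and path names are placeholders). LOCAL INPATH copies from the client machine's filesystem; plain INPATH moves the file within HDFS, so the HiveServer2 user needs write access to the source path:

load data local inpath '/tmp/sample.txt' into table my_table;   -- copies from the local filesystem
load data inpath '/user/me/sample.txt' into table my_table;     -- moves the file inside HDFS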
From: Vineet Mishra [mailto:clearmido...@gmail.com]
Sent: Tuesday, October 20, 2015 6:08 PM
To: user@hive.apache.org; cdh-u...@cloudera.org
Subject: HiveServer2 load data inpath fails
Hi All,
I am trying to run load data inpath to update/refresh …
>
> I am also new to Hadoop but faced this problem previously.
> What I did was combine the values in the mapper with "," and then pass
> them to context.write.
>
>
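A hedged alternative to joining the values by hand: TextOutputFormat's key/value separator is configurable (the property name below is the old mapred-API one, and this assumes the job uses ToolRunner so -D options are honored):

hadoop jar myjob.jar MyJob -Dmapred.textoutputformat.separator=, input_dir output_dir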
> On Mon, Jan 9, 2012 at 5:56 PM, vikas Srivastava wrote:
>
>> Hi Bhavesh Shah
> …output in different directories
> according to reducer values.
>
> Refer to this link; here is the solution:
>
> https://sites.google.com/site/hadoopandhive/home/how-to-write-output-to-multiple-named-files-in-hadoop-using-multipletextoutputformat
>
>
>
> --
> Regards
Hi folks,
I have a few questions:
1:- How do I format the output from reduce? The default is a tab separator;
can we make it a "," separator?
2:- How do I write output to different directories according to the reducer
values?
Thanks in advance
Check whether your jobtracker and tasktracker are running...
On Mon, Jan 2, 2012 at 7:23 PM, wd wrote:
> Because 'select *' will not run a map reduce job; maybe you should
> check whether your hadoop cluster is working.
>
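A quick way to test wd's point from the Hive CLI (table name is a placeholder); the first query is served by a plain fetch, while the second forces a MapReduce job and will hang or fail if the cluster is broken:

select * from my_table limit 5;   -- no MapReduce job
select count(1) from my_table;    -- launches a MapReduce job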
> On Mon, Jan 2, 2012 at 10:37 AM, Aditya Kumar
> wrote:
> >
> > Hi,
> > I am able to …
Hey,
Yes, it's possible to load data through Hive into Hadoop, but you can't decide
where the data file will be stored (on which node). That can only be decided
by the namenode.
Regards
Vikas Srivastava
On Thu, Dec 8, 2011 at 12:49 PM, Savant, Keshav <
keshav.c.sav...@fisglobal.com> wrote
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
>> It should create a table "testHiveDriverTable" in hive, but "show
>> tables;" on $hive returns no such table.
>>
>> Please help... anyone working with Hive JDBC, am I missing any
>> configuration?
>>
>> Regards,
>> Alok
>>
>>
>
>
> --
> Shashwat Shriparv
> 09900059620
> 09663531241
>
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
…when we read or write a particular region, it sometimes takes a few seconds
to a few minutes. Is there any way we can avoid or improve this situation?
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Hey,
There is no need for Ant and such; you can install directly from the tar.gz.
Here is the full documentation:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration
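A minimal sketch of the tarball route (the release file name is a placeholder; it assumes HADOOP_HOME already points at a working Hadoop install):

tar -xzf hive-0.7.1-bin.tar.gz
export HIVE_HOME=$(pwd)/hive-0.7.1-bin
export PATH=$HIVE_HOME/bin:$PATH
hive   # starts the CLI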
Regards
Vikas Srivastava
On Fri, Nov 11, 2011 at 5:59 AM, Vandana
Hey Aditya!!
Column values are case sensitive, so you have to use the exact value:
select * from table_name where col_name='EXACT_VALUE'
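If you don't know the exact casing, a hedged workaround is to normalize both sides with lower(), a standard Hive UDF:

select * from table_name where lower(col_name) = lower('Exact_Value');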
regards
Vikas Srivastava
On Tue, Nov 8, 2011 at 5:17 PM, Aditya Singh30
wrote:
> Hi,
>
> I have set up a two-node hadoop cluster …
Hey,
I'm new to Mahout. Can you guys give me some idea about Mahout, or any PDF on
it?
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
Hey Ashutosh,
Is it true that Hive 0.8 supports INSERT to append data into a table?
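For reference, the append form being asked about looks like this (a sketch; table names are placeholders, and it assumes Hive 0.8's INSERT INTO support):

-- appends to the table instead of overwriting it
insert into table target_table select * from staging_table;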
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
From: Ashutosh …
It should produce fewer files compared to the non-compressed
case.
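A sketch of the session settings that turn compressed output on (the codec is an assumption; any codec installed on the cluster works):

set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;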
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
00%, reduce = 33%
>> 2011-10-19 08:55:43,839 Stage-1 map = 100%, reduce = 0%
>> 2011-10-19 08:55:50,855 Stage-1 map = 100%, reduce = 33%
>> 2011-10-19 08:55:54,864 Stage-1 map = 100%, reduce = 0%
>> 2011-10-19 08:56:00,878 Stage-1 map = 100%, reduce = 33%
>> 2011-10-
Hi,
Did you change the new hostname in /etc/hosts on all the datanodes and on the
Hive server?
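For example, every datanode and the Hive server would need an entry like this in /etc/hosts (IP and hostname are placeholders):

192.168.1.10   new-namenode-hostname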
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
Yup!!!
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
From: Ankit Jain [mailto:ankitjainc...@gmail.com]
Sent: Tuesday, October 11, 2011 6:22 PM
That I have also done... I put the MySQL connector in that lib.
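For reference, the usual placement is something like this (the jar version is a placeholder):

cp mysql-connector-java-5.1.x-bin.jar $HIVE_HOME/lib/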
On Tue, Oct 11, 2011 at 5:39 PM, Ankit Jain wrote:
> Hello Vikas,
>
> I think you have to put the MySQL driver in the lib dir of Hive.
>
> Thanks,
> Ankit
>
> On Tue, Oct 11, 2011 at 5:18 PM, Vikas Srivastava <
> SKYPE: chen0727
> Mobil: 886-937545215
> Tel: 886-2-8798-2988 #222
> Fax:886-2-8751-5499
>
> -----Original Message-----
> From: vikas srivastava [mailto:vikas.srivast...@one97.net]
> Sent: Tuesday, October 11, 2011 3:29 PM
> To: user@hive.apache.org
> Subject: pr
2011-10-11 12:45:14,150 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.text" but it cannot be resolved.
2011-10-11 12:45:21,053 ERROR ql.Driver (
Hey All,
I have some questions:
1:- What is the maximum space of a datanode in a Hadoop cluster?
2:- What is the best RAID setup for Hadoop (HDFS)?
3:- What is the minimum size of a Hadoop cluster for good performance?
Please help.
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
ava:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
Hey Folks,
I have configured an 8 TB datanode in my Hadoop cluster, but the DFS URL shows
0.25 TB instead of 8 TB.
The rest of the datanodes are 2 TB and this one is 8 TB.
Please suggest why it is showing less capacity!
Is it configurable, or what?
With Regards
Vikas Srivastava
DWH & Analytics Team
…data into Hadoop.
Please provide your valuable suggestions.
With Regards
Vikas Srivastava
DWH & Analytics Team
M: +91 9560885900
P: + 91 120 4770102
Email: vikas.srivast...@one97.net
W: www.one97world.com
One97 | Let's get talking !
at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.Ac
Any help would be appreciated.
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
ke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged
hadoop.ipc.RPC$Server.call(RPC.java:508)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>
>
>
> Please suggest; any help would be appreciated!
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
$Handler.run(Server.java:953)
Please suggest; any help would be appreciated!
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Thanks Ayon!!
I'll try that and then let you know how it works...
regards
Vikas Srivastava
On Wed, Sep 14, 2011 at 9:33 PM, Ayon Sinha wrote:
> Hi Vikas,
> The imbalance does create imbalance in MR, but with your configuration it
> may not be a big issue. Basically the balancer will put da…
All nodes are configured with 16 GB RAM.
Regards
Vikas Srivastava
On Tue, Sep 13, 2011 at 11:20 PM, Ayon Sinha wrote:
> What you can do for each node:
> 1. decommission node (or 2 nodes if you want to do this faster). You can do
> this with the excludes file.
> 2. Wait for blocks to be moved off
…and run the balancer to balance the load.
2:- Free up any one node (the replacement node).
Question:- Does an imbalanced data size on any one datanode create a problem
or have any bad impact on the cluster?
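For reference, the balancer mentioned above is started like this; the threshold is the allowed per-node deviation from average utilization in percent (10 is just an example):

hadoop balancer -threshold 10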
regards
Vikas Srivastava
On Tue, Sep 13, 2011 at 5:37 PM, Sonal Goyal wrote:
> Hi Vikas,
>
>
Hi,
Can anyone tell me how we can migrate Hadoop, i.e. replace old hard disks with
new, bigger HDDs?
Actually I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
this efficiently?
The problem is migrating the data from one HDD to the other.
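A common approach is to decommission one node at a time so HDFS re-replicates its blocks before you swap the disk (a sketch; the excludes-file path is whatever dfs.hosts.exclude points to in your config):

echo "datanode1.example.com" >> /path/to/excludes   # file named by dfs.hosts.exclude
hadoop dfsadmin -refreshNodes                       # starts decommissioning
# wait until the node shows as Decommissioned in the namenode UI,
# then swap the disk and remove the node from the excludes file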
--
With Regards
Vikas Srivastava
DWH & Analytics Team
> …that $path can
> itself be a script that feeds the data in streaming fashion? Something like
> "load data using script 'loader.py' into table foo".
>
> On 2011/09/12, at 15:36, Vikas Srivastava wrote:
>
> > hive -e "load data local inpath '$path
hive -e "load data local inpath '$path' into table $table
partition(date='$date')"
On Mon, Sep 12, 2011 at 7:04 PM, Adriaan Tijsseling
wrote:
> Do you have the syntax for the proper hive QL command?
>
> Thanks!
>
> On 2011/09/12, at 15:23, Vikas S
> Copying data from file:/home/myserver/data-load/social_sample.txt
> Copying file: file:/home/myserver/data-load/social_sample.txt
> Loading data to table default.social
> OK
> Time taken: 3.823 seconds
>
> Any help is appreciated.
>
> Thanks
> - Prvn
>
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
> …options when using
> SELECT TRANSFORM.
>
> Thanks in advance,
>
> Adriaan
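For context, a minimal SELECT TRANSFORM sketch (script name, columns, and table are placeholders):

add file my_script.py;
select transform (col1, col2)
  using 'python my_script.py'
  as (out1 string, out2 string)
from my_table;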
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
-- Forwarded message --
From: Vikas Srivastava
Date: Fri, Aug 26, 2011 at 6:21 PM
Subject: Need help in hive
To: user@hive.apache.org
Hey folks,
I am getting this error while running a simple query,
like desc table.
I am using hive 0.7 and hadoop 0.20.2.
This is the hive.log:
ect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
2011-08-26 17:41:59,229 ERROR ql.Driver (SessionState.java:printError(351))
- FAILED: Execution Error, re
need help on this
-- Forwarded message --
From: Vikas Srivastava
Date: Thu, Aug 25, 2011 at 6:37 PM
Subject: Re: Problem in hive
To: user@hive.apache.org
Hey Ashu!!
I have given full permission to the hadoop user on the new server (A), with
user name and password.
It can only read and desc those tables made by this server (A);
from the other server (B) we cannot read tables created by this
server (A).
regards
Vikas Srivastava
On Thu, Aug 25
Hey Ashutosh,
Thanks for the reply.
The output of that is:
Failed with exception null
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
regards
Vikas Srivastava
On Thu, Aug 25, 2011 at 4:52 AM, Ashutosh Chauhan wrote:
> Vikas,
> Looks like your metastore …
> …the sequence of queries you have executed.
>
> When I checked the trunk code, this exception comes when getCols()
> returns null. Check whether your metadata is in a good state or not.
>
> Thanks
>
> Chinna Rao Lalam
>
> -- Forwarded message --
> From:
Hey folks,
Can anyone tell me how many tables are allowed in Hive?
Is there any limit, or a configuration through which we can set the maximum
number of tables in Hive?
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
-- Forwarded message --
From: Vikas Srivastava
Date: Tue, Aug 23, 2011 at 7:26 PM
Subject: Problem in hive
To: user@hive.apache.org
Hi team,
I am facing this problem:
show tables runs fine, but when I run the query below:
hive> select * from aircel_obd;
FAILED: Hive Inter
> …data nodes. So I expect Hive should kick
> off 3 map tasks, one on each task node. What can make Hive run only one map
> task? Do I need to set something to kick off multiple map tasks? In my
> config, I didn't change the Hive config.
>
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
oke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Hey Sid!!
Thanks, bro...
But I can't parse the file; we actually have 3 TB of data in that format, so I
need another solution. And one more thing: it would take too much time to
parse it.
regards
Vikas Srivastava
On Fri, Aug 19, 2011 at 6:41 PM, Siddharth Tiwari wrote:
> You will have to parse t
019795974_224281||0\N0
Actually the problem is that Hive reads a single '|' as the field separator,
due to which 2 columns get split into 3 columns.
Does anybody have a solution for that?
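Since re-parsing 3 TB is out, one hedged option is to make Hive treat '||' as the delimiter at read time with a RegexSerDe (a sketch; the class name is the Hive 0.7 contrib one, and the two-column pattern is an assumption):

add jar /path/to/hive-contrib.jar;   -- path is a placeholder
create table pipe_data (col1 string, col2 string)
row format serde 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
with serdeproperties ("input.regex" = "(.*?)\\|\\|(.*)")
stored as textfile;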
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Hey,
You can simply run a query like this:
FROM sales_temp INSERT OVERWRITE TABLE sales PARTITION (period_key) SELECT *
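One caveat worth adding: dynamic partitioning has to be switched on first, and the partition column must be the last column of the select list. A sketch:

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
FROM sales_temp INSERT OVERWRITE TABLE sales PARTITION (period_key) SELECT *;
-- works because period_key is the last column of sales_temp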
Regards
Vikas Srivastava
2011/8/12 Daniel,Wu
> suppose the table is partitioned by period_key, and the csv file also has
> a column named as period_key. The cs
Hey All,
Please tell me where to enter the datanode IPs in CDH3u2. I actually installed
all the components on the namenode and datanode, but I am confused about where
to put the datanode IPs on the namenode so that they get connected.
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885
Hi,
How do I create a read-only user in Hive, and what steps need to be taken?
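A sketch of one direction, assuming Hive's built-in authorization (advisory, not real security) is enabled in hive-site.xml; user and table names are placeholders:

set hive.security.authorization.enabled=true;   -- normally set in hive-site.xml
grant select on table some_table to user readonly_user;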
regards
vikas Srivastava
Hey,
Is anyone using Google Snappy? I tried it but didn't get it working.
If anyone is using it, please tell me the procedure for using it.
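For what it's worth, assuming a Hadoop build that bundles the Snappy codec (plain Apache 0.20.2 does not ship it), wiring it into Hive output is a matter of:

set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;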
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Hey,
Can anyone tell me how to use or install the patches given in JIRA for Hadoop
or Hive?
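A hedged sketch of the usual workflow (the ticket number and paths are placeholders; Hive of this era builds with ant):

svn checkout http://svn.apache.org/repos/asf/hive/trunk hive-src
cd hive-src
patch -p0 < HIVE-1234.patch   # patch file downloaded from the JIRA ticket
ant package                   # rebuild with the patch applied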
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
Hey,
I want to use some compression in Hadoop, and I have heard that LZO is the
best among the compressions (after Snappy).
Is anyone already using any kind of compression on Hadoop 0.20.2? Please
advise.
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
O
> …way to get column headers in the results
> for the queries.
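For reference, the CLI setting in question is probably this one (a sketch):

set hive.cli.print.header=true;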
>
> -Ayon
> See My Photos on Flickr <http://www.flickr.com/photos/ayonsinha/>
> Also check out my Blog for answers to commonly asked
> questions.<http://dailyadvisor.blogspot.com>
>
>
>
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
>> …browser to work to use
>> HUE.
>>
>> Any suggestions?
>>
>>
>> -SB
>>
>
>
--
With Regards
Vikas Srivastava
DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !
…ception: Connection refused)'
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.MapRedTask
Actually I added the namenode into hive-site at dfs.default.name, but now I am
facing the above error!
Please advise!
regards
vikas srivastava
On Thu, Jul 21, 2011 at 11:29 AM, Gu
9378326346525810
> ,
> expected: hdfs://hadoopnametes:9000
>
> For your fs.default.name config, avoid putting in an IP, and place a
> hostname instead.
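For example, the core-site.xml entry would look like this (using the hostname from the error above; the port is whatever your cluster uses):

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoopnametes:9000</value>
</property>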
>
> On Wed, Jul 20, 2011 at 2:30 PM, Vikas Srivastava
> wrote:
> > HI Team,
> >
> > i m facing problem
org.apache.hadoop.hdfs.DistributedFileSystem.makeQualified(DistributedFileSystem.java:116)
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:148)
at
org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:194)
... 15 more
--
With Regards
Vikas Srivastava
DWH
> …it is not only the ping from the name-node that is an issue here; you
> should run a ping command from the data-node to all data-nodes/name-node.
>
> Thanks,
> Viral
>
> On Tue, Jul 19, 2011 at 6:50 AM, Edward Capriolo wrote:
>
>>
>>
>> On Tue, Jul 19, 2011 at 9:46 AM, Vikas Srivastava
that..
Regards
Vikas Srivastava
9560885900
On Tue, Jul 19, 2011 at 7:03 PM, Edward Capriolo wrote:
> It must be a hostname or DNS problem. Use dig and ping to find out what is
> wrong.
>
> On Tue, Jul 19, 2011 at 9:05 AM, Vikas Srivastava <
> vikas.srivast...@one97.net> wrote:
On Tue, Jul 19, 2011 at 6:29 PM, Vikas Srivastava <
vikas.srivast...@one97.net> wrote:
>
> HI Team,
>>
>>
>> we are using 1 namenode with 11 Datanode each of (16GB ram and 1.4 tb hdd)
>>
>> I am getting this error while running any query, even a simple one; it…