at it takes a few seconds while Hive's ORC format takes
> a fraction of a second.
>
> Regards,
> Amey
>
--
Nitin Pawar
ive 1.2.1
>
> regards
>
>
>
--
Nitin Pawar
>
>> Hello,
>>
>> We have a requirement to load data from xml file to Hive tables.
> The xml tags would be the columns and the values will be the data for those
>> columns.
>> Any pointers will be really helpful.
>>
>> Thanks,
>> Nitin
>>
>
>
--
Nitin Pawar
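A common route for XML-to-columns loading is a third-party XML SerDe. A minimal sketch, assuming the hivexmlserde library is on the classpath (the jar path, record tags, and XPaths here are all illustrative, not taken from the thread):

  ADD JAR /path/to/hivexmlserde.jar;  -- hypothetical local path

  CREATE EXTERNAL TABLE xml_events (id STRING, name STRING)
  ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
  WITH SERDEPROPERTIES (
    "column.xpath.id"   = "/record/id/text()",
    "column.xpath.name" = "/record/name/text()"
  )
  STORED AS
    INPUTFORMAT 'com.ibm.spss.hive.serde2.xml.XmlInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
  TBLPROPERTIES ("xmlinput.start" = "<record>", "xmlinput.end" = "</record>");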
owing setting, it started working:
> set hive.auto.convert.join=true;
>
> Can you please help me understand what happened?
>
>
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
> On Tue, Sep 22, 2015 at 11:41 AM, Nitin Pawar
> wrote:
>
>> C
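For context: hive.auto.convert.join tells Hive to rewrite a common (shuffle) join into a map-side join when the smaller table fits under a size threshold, removing the reduce stage entirely; that is the most likely reason the query above started working. The related settings, as a minimal sketch (the threshold value shown is the usual default, so treat it as illustrative):

  set hive.auto.convert.join=true;
  set hive.mapjoin.smalltable.filesize=25000000;  -- small-table size threshold, in bytes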
.main(CliDriver.java:570)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
>
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
--
Nitin Pawar
; CREATE EXTERNAL TABLE IF NOT EXISTS test_table
>>> OK
>>> Time taken: 0.124 seconds
>>>
>>> MSCK REPAIR TABLE test_table
>>> OK
>>> Tables missing on filesystem: test_table
>>>
>>> Time taken: 0.691 seconds, Fetched: 1 row(s)
>>>
>>>
>>> Thanks,
>>> Ravi
>>>
>>>
>>
>
--
Nitin Pawar
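The "Tables missing on filesystem" message means the table's directory does not exist in HDFS; MSCK REPAIR can only register partitions it actually finds on disk. A minimal sketch of the usual flow (location and partition values are illustrative):

  CREATE EXTERNAL TABLE IF NOT EXISTS test_table (id STRING)
  PARTITIONED BY (dt STRING)
  LOCATION '/data/test_table';

  dfs -mkdir -p /data/test_table/dt=2015-08-01;  -- the directory must exist first
  MSCK REPAIR TABLE test_table;                  -- then this registers dt=2015-08-01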
s data won't
> be available to hive until it converts to parquet and writes to the hive
> location?
>
>
>
>
> On Tue, Aug 25, 2015 at 11:53 AM, Nitin Pawar
> wrote:
>
>> Is it possible for you to write the data into a staging area and run a job
>> on that and th
have some raw events right?
>
>
>
>
> On Tue, Aug 25, 2015 at 11:35 AM, Nitin Pawar
> wrote:
>
>> The file format in Hive is a table-level property.
>> I am not sure why you would have data at 15-minute intervals going to your
>> actual table instead of a staging table an
ed and parquet files in
>>>> same folder. Can Hive load these?
>>>>
>>>> I am getting JSON data and storing it in HDFS. Later I am running a job to
>>>> convert JSON to Parquet (every 15 mins), so we will have 15 minutes of JSON
>>>> data.
>>>>
>>>> Can I provide multiple SerDes in Hive?
>>>>
>>>> regards
>>>> Jeetendra
>>>>
>>>
>>>
>>
>
--
Nitin Pawar
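The staging pattern being suggested, sketched minimally (assuming the HCatalog JsonSerDe that ships with Hive; table and column names are illustrative): land raw JSON in an external staging table, keep the query table pure Parquet, and make the 15-minute job a plain insert-select:

  CREATE EXTERNAL TABLE events_stage (ts STRING, payload STRING)
  ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
  LOCATION '/data/staging/events_json';

  CREATE TABLE events (ts STRING, payload STRING) STORED AS PARQUET;

  INSERT INTO TABLE events SELECT ts, payload FROM events_stage;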
ies where you don't
> specify statistics partitions, Hive doesn't pre-compute which one to take,
> so it will scan the whole table.
>
> I would suggest implementing the max date by code in a separate query.
>
>
> On Thu, Aug 20, 2015 at 12:16 PM, Nitin Pawar
> wrote:
>
any help guys ?
On Thu, Aug 13, 2015 at 2:52 PM, Nitin Pawar
wrote:
> Hi,
>
> right now hive does not support the equality clause in sub-queries.
> for ex: select * from A where date = (select max(date) from B)
>
> It does, though, support the IN clause:
> select * from A where dat
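In full, the rewrite being described (Hive 0.13+ accepts IN/NOT IN subqueries in the WHERE clause):

  -- fails to parse:
  -- SELECT * FROM A WHERE date =  (SELECT max(date) FROM B);
  -- accepted:
  SELECT * FROM A WHERE date IN (SELECT max(date) FROM B);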
wrote:
> I have used hive query to get column values that returns HiveResultSet. I
> need to find Min and Max value in HiveResultSet in code level.
> Is there any possibility. I am using c#.
>
> -Renuka N
>
>
> On Fri, Jul 31, 2015 at 3:29 AM, Nitin Pawar
> wrote:
>
then why not just use the max function?
select max(a) from (select sum(a) as a, b from t group by b) n
On Fri, Jul 31, 2015 at 12:48 PM, Renuka Be wrote:
> Hi Nitin,
>
> I am using hive query.
>
> Regards,
> Renuka N.
>
> On Fri, Jul 31, 2015 at 2:42 AM, Nitin Pawar
>
sing fields!
> Expected 14 fields but only got 7! Last field end 97 and serialize buffer
> end 61. Ignoring similar problems.
>
> On Fri, Jul 31, 2015 at 12:47 PM, Nitin Pawar
> wrote:
>
>> is there a different output format or the output table bucketed?
>> can you try
e,
> then this problem occurs.
>
> Please find the answers inline.
>
>
> Thanks,
> Ravi
>
> On Fri, Jul 31, 2015 at 12:34 PM, Nitin Pawar
> wrote:
>
>> Sorry, but I could not find the following info:
>> 1) are you using Tez as the execution engine? If yes, make sure
ctInspector.java:64)
> at
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator._evaluate(ExprNodeColumnEvaluator.java:94)
> at
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
> at
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
> at
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:558)
> at
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.process(ReduceSinkOperator.java:383)
> ... 13 more
>
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>
>
>
> Thanks,
> Ravi
>
>
--
Nitin Pawar
". When I use this
> 'HiveResultSet.Max()' it throws an exception.
>
> Error : At least one object must implement IComparable.
>
> Is there any way to find Min, Max from the HiveResultSet?
>
> Thanks,
> Renuka N.
>
--
Nitin Pawar
from Table, any more; I just got the error "line 1:1
> character '' not supported here", no matter Tez or MR engine.
>
> How can you solve the problem in your case?
>
> BR,
> Patcharee
>
>
>
> On 18. juli 2015 21:26, Nitin Pawar wrote:
>
> can you tell exac
able in orc format, partitioned and compressed by ZLIB. The
> problem happened just after I concatenated the table.
>
> BR,
> Patcharee
>
> On 18/07/15 12:46, Nitin Pawar wrote:
>
> select * without a where clause will work because it does not involve file
> processing.
> I suspec
, Jul 18, 2015 at 3:58 PM, patcharee
wrote:
> This select * from table limit 5; works, but not others. So?
>
> Patcharee
>
>
> On 18. juli 2015 12:08, Nitin Pawar wrote:
>
> can you do select * from table limit 5;
>
> On Sat, Jul 18, 2015 at 3:35 PM, patcharee
>
' not supported here
> line 1:139 character '' not supported here
> line 1:140 character '' not supported here
> line 1:141 character '' not supported here
> line 1:142 character '' not supported here
> line 1:143 character '' not supported here
> line 1:144 character '' not supported here
> line 1:145 character '' not supported here
> line 1:146 character '' not supported here
>
> BR,
> Patcharee
>
>
>
--
Nitin Pawar
--
Nitin Pawar
g following
> result
>
> [image: Inline image]
>
>
> I am using Spark version 1.3.1 on Windows 8
>
> Thanks in advance,
> Vinod
>
>
--
Nitin Pawar
; java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ]
> DAG failed due to vertex failure. failedVertices:1 killedVertices:0
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
> BR,
> Patcharee
>
>
--
Nitin Pawar
:55 PM, Erwan Queffélec <
> erwan.queffe...@gmail.com> wrote:
>
>> Hi Nitin,
>>
>> Digging up a bit I discovered that the error is probably on our end :
>>
>>
>>
>> On Mon, Jun 29, 2015 at 3:54 PM, Nitin Pawar
>> wrote:
>>
>>
78d4
> Compiled by jenkins on Tue Mar 31 16:26:33 EDT 2015
> From source with checksum 1f34a1d4e566c3e801582862ed85ee93
>
> Thanks for taking the time.
>
> Kind regards,
>
> Erwan
>
> On Mon, Jun 29, 2015 at 3:44 PM, Nitin Pawar
> wrote:
>
>> by any chance
:
>>
>> # ls -l /usr/hdp/current/hive-server2/lib/commons-httpclient-3.0.1.jar
>> -rw-r--r-- 1 root root 279781 Mar 31 20:26
>> /usr/hdp/current/hive-server2/lib/commons-httpclient-3.0.1.jar
>> # ls -l /usr/hdp/current/hive-client/lib/commons-httpclient-3.0.1.jar
>> -rw-r--r-- 1 root root 279781 Mar 31 20:26
>> /usr/hdp/current/hive-client/lib/commons-httpclient-3.0.1.jar
>>
>> What am I missing ?
>>
>> Thanks a lot for your help,
>>
>> Kind regards,
>>
>> Erwan
>>
>
>
--
Nitin Pawar
lease help any other way to achieve this scenario?
>
>
> Regards
> Ravisnkar
>
--
Nitin Pawar
Answering my own question:
either way, the file was available via the distributed cache.
It was a spelling mistake in my code; correcting it solved the
problem.
On Sun, May 17, 2015 at 2:46 AM, Nitin Pawar
wrote:
> Hi,
>
> I am trying to access a lookup file from a udf.
> There ar
ac-4bcb-bee1-7d8ed9a271a0_resources/tmp.txt
Question: how do I get the file at the same location (like option 1) at all
times? Because with option 2 I keep getting the error that tmp.txt does not
exist when I initialize the UDF.
thanks
--
Nitin Pawar
ession from server ?
>
> The query succeeds in the hive command line.
>
> On Fri, May 15, 2015 at 11:52 AM, Nitin Pawar
> wrote:
>
>> Is this happening for Hue?
>>
>> If yes, may be you can try cleaning up hue sessions from server. (this
>> may clean all us
The query succeeds in the hive command line.
>>
>> Please suggest on this,
>>
>>
>> Thanks you
>> Amit
>>
>
>
--
Nitin Pawar
--
Nitin Pawar
s compared to Hive. However, Hive as I understand is
> widely used everywhere!
>
> Thank you
>
--
Nitin Pawar
--
Nitin Pawar
2 'BUILDING' == 'BUILDING ', Here is a link
> <http://support.microsoft.com/en-us/kb/316626> for an article about it.
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
> On Fri, Mar 27, 2015 at 1:41 PM, Nitin Pawar
> wrote:
>
>> Hive d
"' , status , '"') from customer WHERE status =
> 'BUILDING' LIMIT 2;
>
> ***<>***
>
> It seems that Teradata is doing a trimming sort of thing before actually
> comparing string values. But Hive matches strings as they are.
>
> Not sure whether it is expected behaviour, a bug, or something that can be
> raised as an enhancement.
>
> I see one possible solution:
>
>- Convert it into a LIKE expression with a wildcard character before
>and after
>
> Looking forward to your response on this. How can it be handled/achieved
> in hive?
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
--
Nitin Pawar
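One way to emulate Teradata's blank-padded CHAR comparison in Hive is to trim before comparing; a minimal sketch against the query quoted above:

  SELECT concat('"', status, '"') FROM customer
  WHERE rtrim(status) = 'BUILDING' LIMIT 2;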
ns and handle the privileges for it
>
> Daniel
>
> On 26 במרץ 2015, at 12:40, Allen wrote:
>
> hi,
>
> We use SQL standards based authorization for authorization in Hive 0.14,
> but it has no support for column-level privileges.
>
> So, I want to know: is there any way to set column-level privileges?
>
> Thanks!
>
>
>
>
--
Nitin Pawar
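The usual workaround under SQL-standard authorization is the one hinted at above: put the permitted columns behind a view and grant on the view rather than the base table. A minimal sketch (table, view, and role names are illustrative):

  CREATE VIEW emp_public AS SELECT name, dept FROM emp;
  GRANT SELECT ON emp_public TO ROLE analysts;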
select sysdate();
>>
>> Execution log at:
>> /tmp/hadoop/hadoop_20141230101717_282ec475-8621-40fa-8178-a7927d81540b.log
>> java.io.FileNotFoundException: File does not exist:
>> hdfs://tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar
>
gt; Job Submission failed with exception 'java.io.FileNotFoundException(File
> does not exist:
> hdfs://tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar
> )'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
> Stage-1
> Logs:
> /tmp/hadoop/hive.log
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>
>
> Step 5: (check the file)
> hive> dfs -ls
> /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar;
> ls:
> `/tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar':
> No such file or directory
> Command failed with exit code = 1
> Query returned non-zero code: 1, cause: null
>
>
>
>
>
>
>
>
>
>
>
--
Nitin Pawar
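The hdfs://tmp/... URI in that error has no namenode authority, which suggests the session resource path is being resolved against a misconfigured fs.defaultFS. As a sanity check, re-registering the jar from a local path in a fresh session is the usual flow; a minimal sketch (the jar path and the UDF class name are assumptions, not taken from the thread):

  ADD JAR /local/path/nexr-hive-udf-0.2-SNAPSHOT.jar;  -- local path, copied per session
  CREATE TEMPORARY FUNCTION sysdate
    AS 'com.nexr.platform.hive.udf.UDFSysDate';        -- class name assumed
  SELECT sysdate();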
oop, Bigdata Developer*
> *Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
> http://www.unmeshasreeveni.blogspot.in/
>
>
>
--
Nitin Pawar
What's your CREATE TABLE DDL?
On 24 Nov 2014 13:43, "unmesha sreeveni" wrote:
> Hi
>
> I am using hive-0.14.0, which supports the UPDATE statement,
>
> but I am getting an error when I run this command:
> UPDATE Emp SET salary = 5 WHERE employeeid = 19;
>
> FAILED: SemanticException [Error 10294]: At
> @Nitin
> Would be very grateful if you're able to dig it out! Thanks!
>
> Best Regards
>
>
> On Thu, Nov 6, 2014 at 7:48 AM, Jason Dere wrote:
>
>> That would be great!
>>
>> On Nov 5, 2014, at 10:49 PM, Nitin Pawar wrote:
>>
>> May be a
e
>>>> timestamp in question would fall into the client's daylight saving time
>>>> period. This behaviour would make sense to me, however:
>>>>
>>>> • this is server, not client settings we're talking about here
>>>> • the server and client do
I have shared this on github :
> https://github.com/devopam/hadoopHA
> apologies if there is any problem on github as I have limited familiarity
> with it :(
>
>
> regards
> Devopam
>
>
>
> On Wed, Nov 5, 2014 at 12:31 PM, Nitin Pawar
> wrote:
>
>> +1
&
ut spending
> effort to code it.
>
> Do share your feedback/ fixes if you spot any.
>
> --
> Devopam Mittra
> Life and Relations are not binary
>
--
Nitin Pawar
.? Can you send me
> examples?
>
> Thanks
> Mahesh
>
> On Tue, Nov 4, 2014 at 12:21 PM, Nitin Pawar
> wrote:
>
>> As the error says, your table file format has to be AcidOutputFormat, or
>> the table needs to be bucketed, to perform an update operation.
>>
>> Yo
on
> table default.new that does not use an AcidOutputFormat or is not bucketed.
>
> When i update the table i got the above error.
>
> Can you help me guys.
>
> Thanks
>
> Mahesh.S
>
>
>
>
--
Nitin Pawar
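For reference, a minimal sketch of a table that satisfies both conditions in that error (Hive 0.14+ with the transaction manager enabled on the server side; names and the bucket count are illustrative):

  CREATE TABLE emp_acid (employeeid INT, salary INT)
  CLUSTERED BY (employeeid) INTO 4 BUCKETS
  STORED AS ORC
  TBLPROPERTIES ('transactional' = 'true');

  UPDATE emp_acid SET salary = 5 WHERE employeeid = 19;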
;> Any reason why
>>
>> select from_unixtime(0) t0 FROM …
>>
>> gives
>>
>> 1970-01-01 01:00:00
>>
>> ?
>>
>> By all available definitions (epoch, from_unixtime etc..) I would expect
>> it to be 1970-01-01 00:00:00…?
>>
>
>
>
> --
> Kind Regards
> Maciek Kocon
>
>
--
Nitin Pawar
.) I would expect
> it to be 1970-01-01 01:00:00…?
>
--
Nitin Pawar
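from_unixtime renders the epoch in the server's local time zone, which is why a zone one hour east of UTC prints 01:00:00. To get an unambiguous UTC value, convert explicitly; a minimal sketch (the 'CET' zone is an assumption based on the one-hour offset seen above):

  SELECT to_utc_timestamp(from_unixtime(0), 'CET');  -- 1970-01-01 00:00:00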
What's your table CREATE DDL?
Is the data in CSV-like format?
On 21 Oct 2014 00:26, "Raj Hadoop" wrote:
> I am able to see the data in the table for all the columns when I issue
> the following -
>
> SELECT * FROM t1 WHERE dt1='2013-11-20'
>
>
> But I am unable to see the column data when I issue
;
> --
> Shiang Luong
> Software Engineer in Test | OpenX
> 888 East Walnut Street, 2nd Floor | Pasadena, CA 91101
> o: +1 (626) 466-1141 x | m: +1 (626) 512-2165 | shiang.lu...@openx.com
> OpenX ranked No. 7 in Forbes’ America’s Most Promising Companies
>
--
Nitin Pawar
So, I thought bucketing will
> speed up the queries. What are my options ?
>
> Please let me know.
>
> Regards,
> Murali.
>
>
--
Nitin Pawar
p 15, 2014 at 6:56 AM, Sreenath wrote:
>
>> How about writing a Python UDF that takes input line by line,
>> saves the previous line's location, and replaces the location with it
>> if the location turns out to be '-1'?
>>
>> On 15 September 2014 17:01, Nitin Pawa
(1)
> T2.location
> FROM #temp1 AS T2
> WHERE T2.record < T1.record
> AND T2.fk = T1.fk
> AND T2.location != -1
> ORDER BY T2.Record DESC
> )
> END FROM #temp1 AS T1
>
> Thank you for your help in advance!
>
--
Nitin Pawar
/reading material appreciated
>
> Thanks!
> Manoj
>
--
Nitin Pawar
Thanks for correcting me Anusha,
Here are the links you gave me
https://cwiki.apache.org/confluence/display/Hive/HCatalog+Config+Properties
https://issues.apache.org/jira/secure/attachment/12622686/HIVE-6109.pdf
On Tue, Sep 9, 2014 at 5:16 PM, Nitin Pawar wrote:
> you can not modify
get the partition name as just the column values, INDIA and DELHI, not
> including the column name, like
> /hive/warehouse/invoice_details_hive_partitioned/INDIA/DELHI?
>
> Thanks in Advance
>
>
>
--
Nitin Pawar
an you please specify what this means?
>
>
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Thursday, September 04, 2014 4:00 PM
> *To:* user@hive.apache.org
> *Subject:* Re: Hive columns
>
>
>
> If those are text files you can create the table with sing
--
Nitin Pawar
sales report using hive, with data pulled
> from mysql using the prototype tool. My data will be around 2 GB/day.
>
>
>
> *Regards Muthupandi.K*
>
> [image: Picture (Device Independent Bitmap)]
>
>
--
Nitin Pawar
ables, there are a few column data types that are not supported in
>>> Hive. So to map the source table columns to my destination table columns in
>>> Hive, I want to create my own data type in Hive.
>>>
>>> I know about writing UDFs in Hive but have no idea about creating a
>>> user-defined data type in Hive. Any idea and example on the same would be
>>> of great help.
>>>
>>> Thanks.
>>>
>>
>>
>
--
Nitin Pawar
ust be enclosed in ' '.
>
> Hope this helps.
>
> --Bala G.
>
>
> On Sun, Aug 24, 2014 at 12:57 AM, Nitin Pawar
> wrote:
>
>> I am not sure you can pass an array from shell to Java; you may want
>> to write your own custom UDF for that.
>>
. I can get the start date and the end date. But
>> can I get all the dates within START DATE and END DATE, so that my
>> query looks something like this:
>>
>> "Select a, b, c from table_x where date in (${hiveconf:LIST_OF DATES})"
>>
>>
>>
>>
om> wrote:
> As my raw-data table is partitioned by date, I want to run a
> query every day to find the top 10 products in the last 15 days.
>
> How to pass list of dates dynamically as arguments in hive query using
> hiveconf?
>
>
>
--
Nitin Pawar
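A minimal sketch of passing a date list through hiveconf (dates and names are illustrative): the whole list travels as one variable, and the substitution is purely textual, so the quoting must survive the shell:

  hive -hiveconf LIST_OF_DATES="'2014-08-01','2014-08-02','2014-08-03'" \
       -e 'SELECT a, b, c FROM table_x WHERE date IN (${hiveconf:LIST_OF_DATES})'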
va <
karthiksrivasth...@gmail.com> wrote:
> Hi,
>
> I am passing substitution variable using hiveconf in Hive..
>
> But I couldn't execute simple queries when trying to pass more than
> one parameter. It throws NoViableAltException - AtomExpression. Am I
> missing something?
>
--
Nitin Pawar
> Sushant
> On Tuesday 19 August 2014 02:33 PM, Nitin Pawar wrote:
>
> can you give an example of your dataset?
>
>
> On Tue, Aug 19, 2014 at 2:31 PM, Sushant Prusty wrote:
>
>> Pl let me know how I can load a CSV file with embedded map and arrays
>> data into Hiv
can you give an example of your dataset?
On Tue, Aug 19, 2014 at 2:31 PM, Sushant Prusty wrote:
> Please let me know how I can load a CSV file with embedded map and array
> data into Hive.
>
> Regards,
> Sushant
>
--
Nitin Pawar
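A minimal sketch of the delimited-table route (the delimiters are illustrative and must match the file; truly quoted CSV with embedded commas would need a CSV SerDe instead):

  CREATE TABLE t (
    id    INT,
    tags  ARRAY<STRING>,
    attrs MAP<STRING,STRING>
  )
  ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    COLLECTION ITEMS TERMINATED BY '|'
    MAP KEYS TERMINATED BY ':'
  STORED AS TEXTFILE;

  LOAD DATA LOCAL INPATH '/path/data.csv' INTO TABLE t;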
are you talking about the tables in a map-join being loaded into the
distributed cache?
On Wed, Aug 13, 2014 at 6:01 PM, harish tangella
wrote:
> Hi all,
>
> Request you to help
>
> What are cache tables in Hive?
>
> Regards
> Harish
>
>
>
>
--
Nitin Pawar
then you
can use them as well
On Tue, Aug 12, 2014 at 5:58 PM, CHEBARO Abdallah <
abdallah.cheb...@murex.com> wrote:
> Yes I mean the data is on hdfs like filesystem
>
>
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Tuesday, August 12, 2
--
Nitin Pawar
supports this as per the Cloudera documentation
> http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_perf_hdfs_caching.html
>
> Thanks
> uli
>
--
Nitin Pawar
--
Nitin Pawar
n hive, and
> return the result to the shell.
> How can I do that?
>
--
Nitin Pawar
on: Index: 29,
> Size: 5*
> at java.util.ArrayList.RangeCheck(ArrayList.java:547)
> at java.util.ArrayList.get(ArrayList.java:322)
> at
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.init(DataWritableReadSupport.java:96)
> at
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.getSplit(ParquetRecordReaderWrapper.java:204)
> at
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:79)
> at
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
> at
> org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
> at
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:471)
> at
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:561)
> ... 18 more
>
> Looks like it is trying to access the column with index 29, whereas there
> are only 5 non-null columns present in the row - which matches the
> ArrayList size.
>
> What could be going wrong here?
>
>
> Thanks
>
> Suma
>
>
>
>
--
Nitin Pawar
--
Nitin Pawar
heb...@murex.com> wrote:
> “With hive, without creating a table with full data, you can do
> intermediate processing like select only few columns and write into another
> table”. How can I do this process?
>
>
>
> Thank you a lot!
>
>
>
> *From:* Nitin Pawar [mailto:nitin
ah.cheb...@murex.com> wrote:
>
> Hello,
>
>
>
> Thank you for your reply.
>
>
>
> Consider we have data divided into 5 columns (col1, col2, col3, col4,
> col5).
>
> So I can’t load directly col1, col3 and col5?
>
> If I can’t do it directly, can y
sorry hit send too soon ..
I mean without creating intermediate tables, in hive you can process the
file directly
On Wed, Jul 30, 2014 at 3:06 PM, Nitin Pawar
wrote:
> With hive, without creating a table with full data, you can do
> intermediate processing like select only few colum
--
Nitin Pawar
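Concretely, the intermediate processing described above is just a CTAS or an insert-select over the wanted columns; a minimal sketch using the col1..col5 example from this thread:

  CREATE TABLE narrow AS SELECT col1, col3, col5 FROM full_table;
  -- or, into a table that already exists:
  INSERT OVERWRITE TABLE narrow SELECT col1, col3, col5 FROM full_table;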
Do you want to know how Hive initializes a UDTF, or how to build a UDTF?
On Tue, Jul 29, 2014 at 1:30 AM, Doug Christie
wrote:
> Can anyone point me to the source code in hive where the calls to
> initialize, process and forward in a UDTF are made? Thanks.
>
>
>
> Doug
>
>
>
--
Nitin Pawar
> src_filename string
> server_date date
>
> my analyze query is
> analyze table mytable partition(server_date='2013-11-30') compute
> statistics for columns load_inst_id;
>
> I am always getting 0 as the load instant id; I have to turn off
> hive.compute.query.using.stats to get the correct result (through a
> map-reduce max(load_inst_id)).
>
>
>
--
Nitin Pawar
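The workaround mentioned above, spelled out (this forces the aggregate to run as a real job instead of being answered from stale column statistics):

  set hive.compute.query.using.stats=false;
  SELECT max(load_inst_id) FROM mytable WHERE server_date = '2013-11-30';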
You can try with a LIKE statement.
On 21 Jul 2014 19:32, "fab wol" wrote:
> Hi everyone,
>
> I have the following problem: I have a partitioned managed table (the
> partition column is a string which represents a date, e.g. log-date="2014-07-15").
> Unfortunately there is one partition in there like this:
> *rank() over(partition by p_mfgr order by p_name)*?
>
> Thanks,
>
> Eric
>
>
--
Nitin Pawar
> I am using the below command to alter the partitioned column name: -
>
>
>
> ALTER TABLE siplogs_partitioned PARTITION str_date RENAME TO PARTITION
> call_date;
>
>
>
> When I run the above command I am getting an error : -
>
>
>
> FAILED: ParseException line 1:12 cannot recognize input near
> 'siplogs_partitioned' 'PARTITION' 'str_date' in alter table partition
> statement
>
>
>
> Is the “ALTER TABLE” usage correct to rename the partitioned column names?
>
>
>
> Any pointer or help is appreciated.
>
>
>
> Thanks,
>
> Manish
>
>
>
--
Nitin Pawar
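The ParseException comes from the missing parenthesized partition specs. Note that RENAME TO PARTITION renames partition values, not the partition column itself; a minimal sketch with illustrative values:

  ALTER TABLE siplogs_partitioned PARTITION (str_date = '2014-07-01')
    RENAME TO PARTITION (str_date = '2014-07-02');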
ption: Unable
> to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>
>
> This is related to hive metastore only.
> Can anyone please help me out with this.
>
> Thanks,
> Rishabh
>
--
Nitin Pawar
Hi,
can someone add me to hive wiki editors?
My userid is : nitinpawar432
--
Nitin Pawar
":"analytics-android","libraryVersion":"0.6.13"},"properties":{"comment":"Much
>>> joy."}}, ...]
>>>
>>> This "batch" may contain n events will a structure like above.
>>>
>>> I want to put all events in a table where each "element" will be stored
>>> in a unique column: timestamp, requestId, sessionId, event, userId, action,
>>> context, properties
>>>
>>> 2. explode the "batch" I read a lot about SerDe, etc. - but I don't get
>>> it.
>>>
>>> - I tried to create a table with an array and load the data into it -
>>> several errors
>>> - used explode in the query, but it doesn't accept "batch" as an array
>>> - integrated several SerDes but get things like "unknown function jspilt"
>>> - I'm lost in too many documents, howtos, etc. and could use some
>>> advice...
>>>
>>> Thank you in advance!
>>>
>>> Best, Chris
>>>
>>
>>
>
--
Nitin Pawar
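For the flat fields, the built-in json_tuple avoids any SerDe. A minimal sketch assuming each JSON document was loaded as a single string column (table and field names are illustrative; the nested "batch" array itself would still need a JSON SerDe or a custom UDF to explode):

  SELECT j.request_id, j.event
  FROM raw_json r
  LATERAL VIEW json_tuple(r.line, 'requestId', 'event') j AS request_id, event;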
iableSubstitution-SubstitutionDuringQueryConstruction>
> .
>
> Please check my wording and let me know if revisions are needed.
>
> -- Lefty
>
>
> On Fri, Jun 20, 2014 at 5:17 AM, Nitin Pawar
> wrote:
>
>> hive variables are not replaced on mapreduce jobs but when the que
to-set-variables-in-hive-scripts
>
>
>
> Thanks,
>
> Chandra
>
>
>
--
Nitin Pawar
and how to create them, I'd
> just go on the Hive wiki page.
>
> Good luck!
>
> Best,
> Nishant
>
> On Jun 19, 2014 6:17 AM, "Clay McDonald"
> wrote:
>
> hi all,
>
> how do I write the following query to insert a note with a current system
> timestamp?
>
> I tried the following;
>
>
> INSERT INTO TEST_LOG VALUES (unix_timestamp(),'THIS IS A TEST.');
>
> thanks, Clay
>
--
Nitin Pawar
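The failure here is that VALUES only accepts literals; routing the function call through a SELECT is the usual workaround. A minimal sketch ('dummy' stands in for any existing one-row source):

  INSERT INTO TABLE test_log
  SELECT unix_timestamp(), 'THIS IS A TEST.' FROM dummy LIMIT 1;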
ethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> FAILED: ParseException line 1:19 mismatched input '<EOF>' expecting FROM
> near 'CURRENT_TIME' in from clause
>
--
Nitin Pawar
ch I know another man's religion is folly
> teaches
> me to suspect that my own is also."
>-- Mark Twain
>
>
>
>
--
Nitin Pawar
/ co-develop custom UDFs for text analytics and
> data mining over Hive directly.
>
> --
> Devopam Mittra
> Life and Relations are not binary
>
--
Nitin Pawar
>>> Have gone through some sites but am not able to figure it out correctly. A
>>> few mention that we need to use some JARs to achieve it...
>>>
>>>
>>> Thanks in advance,
>>> Rams
>>>
>>
>>
>
--
Nitin Pawar
nning
> same query multiple times and insert only in single table at a time)?
>
>
>
> Thanks,
>
> Chandra
>
--
Nitin Pawar
ry like
>> select distinct name,age from testing;
>>
>> It outputs,
>> A 21
>> B 21
>> C 21
>>
>> I want to know whether A 21 is from file1 or file2.
>>
>> Thanks,
>> Rishabh.
>>
>
>
--
Nitin Pawar
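Hive exposes the source file of each row through the INPUT__FILE__NAME virtual column, so provenance can be selected directly (before any distinct, which would collapse rows coming from different files):

  SELECT INPUT__FILE__NAME, name, age FROM testing;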
> X. For example, if we had the same IP enter the
> same query Y times we wouldn't want to include this in the final
> result unless there have been X-Y other IPs that searched for that
> query.
>
> Is this perhaps better suited for Pig?
>
> Thanks
>
--
Nitin Pawar
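This is expressible in plain HiveQL with GROUP BY and HAVING, no Pig needed; a minimal sketch (table, column names, and the threshold are illustrative):

  SELECT query, count(DISTINCT ip) AS uniq_ips
  FROM searches
  GROUP BY query
  HAVING count(DISTINCT ip) >= 10;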
> --
> Regards
> Shengjun
>
--
Nitin Pawar