> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>     at java.net.PlainSocketImpl.socketConnect(Native Method)
>     at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>     at java.net.AbstractPlain
Yes, it works. Thank you very much,
Garry
From: Suresh Kumar Sethuramaswamy
Reply-To: "user@hive.apache.org"
Date: Wednesday, November 7, 2018 at 3:10 PM
To: "user@hive.apache.org"
Subject: Re: Create external table with s3 location error
Thanks for the logs. Couple of th
From: Suresh Kumar Sethuramaswamy
Reply-To: "user@hive.apache.org"
Date: Wednesday, November 7, 2018 at 2:50 PM
To: "user@hive.apache.org"
Subject: Re: Create external table with s3 location error
Are you using EMR or Apache hadoop open source?
Can you share your hive metastore logs?
On Wed, Nov 7, 2018, 2:19 PM Garry Chen wrote:
> hi All,
>
> I am trying to create an external table using S3 as the location,
> but it failed. I added my access key and secret key in hive-site.xml and
> rebooted t
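For reference, a minimal sketch of the kind of DDL being discussed, assuming
the s3a connector; the bucket name, columns, and key values are placeholders,
not details from the thread:

-- Credentials normally belong in hive-site.xml / core-site.xml;
-- setting them per session may be blocked by hive.conf.restricted.list.
SET fs.s3a.access.key=<your-access-key>;
SET fs.s3a.secret.key=<your-secret-key>;

CREATE EXTERNAL TABLE s3_demo (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3a://your-bucket/path/';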
I have Kerberos enabled in my cluster.
When I create an external table using beeline, I see from the HDFS namenode
log that it does Kerberos auth for every single file, I guess.
That may be the reason why creating the external Hive table fails when I
have loads of directories and files under them.
M
Now I got closer and discovered that my problem is related to permissions.
For example:
drwxr-xr-x - margusja hdfs 0 2016-05-12 03:33 /tmp/files_10k
...
-rw-r--r-- 3 margusja hdfs 5 2016-05-12 02:01
/tmp/files_10k/f1959.txt
-rw-r--r-- 3 margusja hdfs 4 2016-05
One more example:
[hdfs@hadoopnn1 ~]$ hdfs dfs -count -h /user/margusja/files_10k/
19.8 K 47.7 K /user/margusja/files_10k
[hdfs@hadoopnn1 ~]$ hdfs dfs -count -h /datasource/dealgate/
537.9 K 8.5 G /datasource/dealgate
More information:
2016-05-11 13:31:17,086 INFO [HiveServer2-Handler-Pool: Thread-5867]:
parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command:
create external table files_10k (i int) row format delimited fields
terminated by '\t' location '/user/margusja/files_10k'
2016-05-11 13:3
What do you mean?
Margus (margusja) Roo
http://margus.roo.ee
skype: margusja
+372 51 48 780
On 11/05/16 08:21, Mich Talebzadeh wrote:
Yes, but the table then still exists, correct? I mean the second time.
Did you try:
use default;
drop table if exists trips;
It is still registered within the Hive metadata as an existing table.
Sadly, in our environment:
I generated files like you did.
Connected to: Apache Hive (version 1.2.1.2.3.4.0-3485)
Driver: Hive JDBC (version 1.2.1.2.3.4.0-3485)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoopnn1.estpak.ee:2181,hado> create external table
files_10k (i int
From: Margus Roo [mailto:mar...@roo.ee]
Sent: Tuesday, May 10, 2016 11:26 PM
To: user@hive.apache.org
Subject: Re: Create external table
Hi again
I opened hive (an old client).
And exactly the same "create external table ... location [path in HDFS to a
place where there are loads of files]" works, while the same DDL does not
work via beeline.
Yes, but the table then still exists, correct? I mean the second time.
Did you try:
use default;
drop table if exists trips;
It is still registered within the Hive metadata as an existing table.
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8
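A minimal illustration of Mich's point, with the table name from the thread
(the LOCATION is whatever the original DDL used):

use default;
-- For an EXTERNAL table this removes only the metastore entry;
-- the files under the table's LOCATION stay in HDFS.
drop table if exists trips;
-- Re-running the CREATE EXTERNAL TABLE over the same directory
-- then succeeds, since the name is no longer registered.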
Hi
Thanks for your answer.
---
First I create an empty HDFS directory (if the directory is empty, I did not
have problems before either).
[margusja@hadoopnn1 ~]$ hdfs dfs -mkdir /user/margusja/trips
[margusja@hadoopnn1 ~]$ beeline -f create_externat_table_trips.hql -u
"jdbc:hive2://hadoopnn1.ex
Try this simple external table creation in beeline (check first that it
connects OK):
use default;
drop table if exists trips;
CREATE EXTERNAL TABLE `TRIPS`(
  `bike_nr` string, `duration` int, `start_date` string,
  `start_station` string, `end_station` string)
PARTITIONED
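The statement is cut off in the archive; a hypothetical completion, where the
partition column, delimiter, and location are assumptions rather than the
original text, would look like:

CREATE EXTERNAL TABLE `TRIPS`(
  `bike_nr` string, `duration` int, `start_date` string,
  `start_station` string, `end_station` string)
PARTITIONED BY (`year` int)                    -- assumed partition column
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','  -- assumed delimiter
LOCATION '/user/margusja/trips';               -- path used later in the thread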
Hi again
I opened hive (an old client).
And exactly the same "create external table ... location [path in HDFS to a
place where there are loads of files]" works, while the same DDL does not
work via beeline.
Margus (margusja) Roo
http://margus.roo.ee
skype: margusja
+372 51 48 780
On 10/05/16 23:03, Mar
> And again: the same row is correct if I export a small set of data, and
> incorrect if I export a large set - so I think that file/data size has
> something to do with this.
My Phoenix vs LLAP benchmark hit size-related issues in ETL.
In my case, the tipping point was >1 HDFS block per CSV file.
Is there a workaround for this?
-Original Message-
From: Nicholas Hakobian [mailto:nicholas.hakob...@rallyhealth.com]
Sent: Thursday, January 28, 2016 3:15 PM
To: user@hive.apache.org
Subject: Re: "Create external table" nulling data from source table
Do you have any fields with embedded newline characters? If so, certain
Hive output formats will parse the newline character as the end of a row,
and when importing, chances are the missing fields (now part of the next
row) will be padded with nulls. This happens in Hive as well if you are
using a Te
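One common mitigation, sketched here with made-up table and column names, is
to strip the embedded newlines before writing to a text-format table:

-- Hypothetical names; replaces embedded CR/LF characters so a TEXTFILE
-- export keeps exactly one record per line.
INSERT OVERWRITE TABLE export_copy
SELECT id, regexp_replace(free_text, '\r|\n', ' ')
FROM source_table;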
Sent from remote device, Please excuse typos
-Original Message-
From: Joseph D Antoni
Date: Fri, 15 Feb 2013 08:55:50
To: user@hive.apache.org
Reply-To: user@hive.apache.org
Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
Not sure--I just truncated the file list from the ls-
Sent: Friday, February 15, 2013 11:50 AM
Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
Something's odd about this output; why is there no / in front of 715? I always
get the full path when I run a -ls command. I would expect either:
/715/file.csv
or
/user//715/file.csv
Or is that
the directory--wasn't clear on that..
>
> Joey
>
>
>
> --
> From: Dean Wampler
> To: user@hive.apache.org; Joseph D Antoni
> Sent: Friday, February 15, 2013 11:37 AM
> Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
>
> You confirmed that 715 is an
To: user@hive.apache.org; Joseph D Antoni
Sent: Friday, February 15, 2013 11:37 AM
Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
You confirmed that 715 is an actual directory? It didn't become a file by
accident?
By the way, you don't need to include the file name in the LOCATION. It
will read all the files in the directory.
dean
On Fri, Feb 15, 2013 at 10:29 AM, Joseph D Antoni wrote:
> I'm trying to create a series of
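A minimal sketch of Dean's point (the table, column, and user names here are
made up):

-- LOCATION names the directory; Hive reads every file inside it.
CREATE EXTERNAL TABLE example_715 (line string)
LOCATION '/user/someuser/715';   -- a directory, not .../715/file.csv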
Thanks for the reference Bejoy.
V
On Fri, Jul 27, 2012 at 12:36 PM, Bejoy Ks wrote:
> Hi Vidhya
>
> This bug was reported and fixed in a later version of Hive, Hive 0.8. An
> upgrade would set things in place.
>
> https://issues.apache.org/jira/browse/HIVE-2888
>
> Regards,
> Bejoy KS
>
> -
Hi Vidhya
This bug was reported and fixed in a later version of Hive, Hive 0.8. An
upgrade would set things in place.
https://issues.apache.org/jira/browse/HIVE-2888
Regards,
Bejoy KS
From: Vidhya Venkataraman
To: user@hive.apache.org
Sent: Friday, July
Naga"
To: user@hive.apache.org
Sent: Tuesday, June 19, 2012 6:16:31 PM
Subject: Re: create external table on existing hive partitioned table ?
Thanks Mark,
The reason to create the 2nd table is that one of the columns is defined as
a string in the first table, and I wanted to read the string into a Map
data type.
i.e.
Existing table:
{"UY": 2, "BR": 1}
{"LV": 1, "BR": 1}
To:
Country Map
Thanks
Gopi
On Tue, Jun 19, 2012 at 1:37 PM, Mark Grover wrote:
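A hedged sketch of one way to do that conversion; the table and column names
are made up, and str_to_map returns map<string,string>, so the counts would
still need a cast to int:

-- Turn a string like {"UY": 2, "BR": 1} into a map by stripping the
-- braces, quotes, and spaces, then splitting entries on ',' and ':'.
SELECT str_to_map(regexp_replace(country_str, '[{}" ]', ''), ',', ':')
FROM existing_table;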
Sai,
Maybe I don't understand your question properly, but creating an external
table on a partitioned table is no different from creating an external
table on a non-partitioned one.
Your syntax looks right. After table creation, you would have to add all
existing partitions of the table so that th
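A sketch of what adding those partitions looks like; the table name and
partition column here are assumptions:

-- Register one existing partition explicitly:
ALTER TABLE ext_copy ADD PARTITION (dt='2012-06-19');
-- Or let Hive discover all partition directories under the table LOCATION:
MSCK REPAIR TABLE ext_copy;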