Anand,
I doubt this information is readily available in Hive, as it is access
information rather than metadata.
For the number of records in a table, you can just run a query like select
count(1) from table;
For the access details on table data, you will need to process the Hadoop logs
and based
I have 2 tables; each has 6 million records and is clustered into 10 buckets.
These tables are very simple, with 1 key column and 1 value column. All I
want is to get the keys that exist in both tables but with different values.
The normal join did the trick; it took only 141 secs.
select * from ra_md_cdr
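The original query is truncated above; a query along these lines would return the keys that appear in both tables with differing values (a sketch only: the second table name and the column names `key`/`val` are assumptions, since they are not shown in the original message).

```sql
-- Sketch: ra_md_cdr_2 and the columns key/val are assumed names.
-- Returns keys present in both tables whose values differ.
SELECT a.key, a.val AS val_a, b.val AS val_b
FROM ra_md_cdr a
JOIN ra_md_cdr_2 b ON (a.key = b.key)
WHERE a.val <> b.val;
```

Since both tables are bucketed on the key into the same number of buckets, Hive can use a bucket map join here when that optimization is enabled.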
How do I get the following metadata about a table:
1. recent users of table,
2. top users of table,
3. recent queries/jobs/reports,
4. number of rows in a table
I don't see anything in either the DESCRIBE FORMATTED or SHOW TABLE EXTENDED
LIKE commands.
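For item 4, Hive can store a row count in the metastore via table statistics (a sketch; `my_table` is a placeholder, and whether this is available depends on your Hive release):

```sql
-- Compute table statistics; numRows then appears under "Table Parameters"
-- in the DESCRIBE FORMATTED output.
ANALYZE TABLE my_table COMPUTE STATISTICS;
DESCRIBE FORMATTED my_table;
```

Items 1-3 (users and queries) are access history, which, as noted above, has to be reconstructed from job or audit logs rather than from the metastore.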
Thanks
Anand
Hi Ranjith,
Your understanding is correct.
I'm going to answer your 2nd question here. I would say that if you don't have
a lot of concurrent users using Hive (and hence the metastore) at the same
time, a local relational DB (like MySQL) would work well as your metastore.
If you have more concurrent co
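For reference, pointing the metastore at a local MySQL instance is done in hive-site.xml along these lines (a sketch; the host, database name, and credentials are placeholders):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>
```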
I tried creating the Hive table like this:
CREATE TABLE my_table (..) ROW FORMAT DELIMITED FIELDS TERMINATED BY
'\t' LINES TERMINATED BY '\n' STORED AS TEXTFILE;
And then I would add about 95 rows into this Hive table and use the Sqoop
command below to export. It works, so we know data is inta
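The export command itself is truncated above; for a tab-delimited Hive table like this one, a typical Sqoop export looks something like the following (a sketch only: the connection string, credentials, table names, and warehouse path are all assumptions, not the poster's actual command):

```shell
sqoop export \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser -P \
  --table my_table \
  --export-dir /user/hive/warehouse/my_table \
  --input-fields-terminated-by '\t' \
  --input-lines-terminated-by '\n'
```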
Hi Bejoy,
The syntax you suggested does work.
I have many years of Oracle (as well as other RDBMSs), so it would have
been more natural to have assumed the AS was present; but instead I
followed the syntax in the JIRA that came up (and which lacks the AS
clause). But as Edward mentions
Thanks very much, Edward; I am looking at those resources now. The queries in
clientpositive are quite instructive.
Here is the correct way to do it, so it seems:
hive> create table dem2 like demographics_local;
OK
Time taken: 0.188 seconds
hive> insert overwrite table dem2 select * from demogra
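For comparison, the single-statement form discussed in this thread is CTAS with the AS keyword (a sketch using the same table names as the session above):

```sql
-- Create and populate in one statement instead of
-- CREATE TABLE ... LIKE followed by INSERT OVERWRITE.
CREATE TABLE dem2 AS
SELECT * FROM demographics_local;
```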
That did the trick.
Thanks Carla!
Matt Tucker
-Original Message-
From: carla.stae...@nokia.com [mailto:carla.stae...@nokia.com]
Sent: Friday, March 30, 2012 11:31 AM
To: user@hive.apache.org
Subject: RE: Variable Substitution Depth Limit
Sorry, hit send too fast...As a work around for
Sorry, hit send too fast... As a workaround for now, what I ended up doing was
using the 'sed' command to replace all of my variables with a shell script.
Kind of a hack, but it worked in the end for what I needed.
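A minimal sketch of that sed workaround (the template file name, variable name, and substituted value here are all hypothetical):

```shell
# Write a hypothetical Hive script template containing a ${hiveconf:...}
# variable; the quoted heredoc delimiter stops the shell expanding it.
cat > /tmp/etl_template.hql <<'EOF'
CREATE TABLE etl_${hiveconf:table}_traffic AS SELECT * FROM src_traffic;
EOF

# Substitute the variable with sed instead of relying on Hive's own
# variable substitution (which hits the depth limit).
sed 's/[$]{hiveconf:table}/orders/g' /tmp/etl_template.hql > /tmp/etl_orders.hql

cat /tmp/etl_orders.hql
```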
Carla
-Original Message-
From: Staeben Carla (Nokia-LC/Boston)
Sent: F
Yeah, you're not the only one who's run into that issue. There is an open Jira
item for it, so they're aware we'd like it configurable anyway...
https://issues.apache.org/jira/browse/HIVE-2021
-Original Message-
From: ext Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, Ma
Unfortunately you are going to have to roll your own Hive. It was just
a concept we borrowed from Hadoop, since it does not support more than
40 levels of substitution depth. We can probably make it configurable via a
hive-site property.
On Fri, Mar 30, 2012 at 10:59 AM, Tucker, Matt wrote:
> I’m trying to m
I'm trying to modify a script to allow for more code reuse, by prepending table
names with a variable.
For example: CREATE TABLE etl_${hiveconf:table}_traffic AS ...
The problem I'm running into is that after building all of these etl_* tables,
I use a final query to join all of the tables and
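For context, a script parameterized this way would be invoked with -hiveconf (the script name and value here are placeholders):

```shell
hive -hiveconf table=orders -f etl_traffic.hql
```

Inside the script, each occurrence of ${hiveconf:table} then expands to "orders", subject to the substitution depth limit discussed above.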
Thanks for the quick response, Bejoy.
I tried auxpath as you suggested, but it has no effect; here is my Hive
session: http://pastie.org/private/kmkw3wofiegdqzhckyrgq
Also, I checked my jar's permissions; it's readable by all.
-v_abhi_v
On Fri, Mar 30, 2012 at 5:06 PM, Bejoy Ks wrote:
> Abhishek
>
> Hive
Abhishek
Hive is unable to locate a proper jar that has the connector. Usually ADD JAR
should work, but still, did you try to include the jar file in the Hive aux
jars path, like this? (Also ensure that the jar has no permission issues.)
hive --auxpath /location/postgresql9jdbc3.jar
Or in your hive-site.x
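The hive-site.xml alternative mentioned above would look something like this (a sketch; the jar path matches the example command):

```xml
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///location/postgresql9jdbc3.jar</value>
</property>
```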
Abhishek
What is the version of Sqoop you are using? Also, can you paste in the
Sqoop command you use, with the full stack trace?
Make sure that you have the required JDBC driver jar in the lib directory of
Sqoop.
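Placing the driver amounts to copying the jar into Sqoop's lib directory; a runnable sketch with stand-in paths (your real Sqoop install dir and driver jar name will differ):

```shell
# Stand-in install dir and jar so the sketch is self-contained; in practice
# use your actual Sqoop install dir and the vendor's JDBC driver jar.
DEMO_SQOOP_HOME=/tmp/sqoop-demo
mkdir -p "$DEMO_SQOOP_HOME/lib"
touch /tmp/mysql-connector-java-5.1.40-bin.jar   # stand-in for the real jar

# The actual step: drop the JDBC driver jar into Sqoop's lib directory.
cp /tmp/mysql-connector-java-5.1.40-bin.jar "$DEMO_SQOOP_HOME/lib/"
ls "$DEMO_SQOOP_HOME/lib/"
```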
Regards
Bejoy
From: Abhishek Parolkar
To
I even tried Sqoop, but with no luck. It complains about the connection
manager even though my MySQL connector jar is in the lib folder of Sqoop's
installation dir.
Any help?
If Sqoop's purpose is to allow import/export from an RDBMS, why aren't the
basic MySQL/Postgres connectors bundled with it?
-v_abhi_v
On Fri, Mar 3