ATT
the table has more than 12,000 records
On Fri, Jul 19, 2013 at 9:34 AM, Stephen Boesch wrote:
> One mapper. How big is the table?
>
>
> 2013/7/18 ch huang
>
>> I have waited a long time with no result. Why is Hive so slow?
>>
>> hive> select cookie,url,ip,source,vsid,token,residence,edate from
>> hb_cook
One mapper. How big is the table?
2013/7/18 ch huang
> I have waited a long time with no result. Why is Hive so slow?
>
> hive> select cookie,url,ip,source,vsid,token,residence,edate from
> hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
> Total MapReduce jobs = 1
> Launching Job
I have waited a long time with no result. Why is Hive so slow?
hive> select cookie,url,ip,source,vsid,token,residence,edate from
hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce
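For a range predicate like the one above, Hive has to scan the whole table; on 12,000 rows that is a single map task, so most of the wait is usually MapReduce job-startup overhead rather than data volume. If the table were large, partitioning by day would let Hive prune the scan. A minimal sketch, assuming a day-partitioned copy of the table (the column types and partition column name are illustrative, not taken from the thread):

```sql
-- Illustrative: a day-partitioned variant of the table, so that a
-- time-range query reads only the partitions it needs.
CREATE TABLE hb_cookie_history_part (
  cookie STRING, url STRING, ip STRING, source STRING,
  vsid STRING, token STRING, residence STRING, edate STRING
)
PARTITIONED BY (dt STRING);

-- A predicate on the partition column is pruned at compile time,
-- so only that day's data is scanned.
SELECT cookie, url, ip, source, vsid, token, residence, edate
FROM hb_cookie_history_part
WHERE dt = '2013-06-17'
  AND edate >= '1371398400500' AND edate <= '1371400200500';
```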
On Jul 18, 2013, at 1:40 PM, Tzur Turkenitz wrote:
> Hello,
> Just finished reading the Hive-Architecture pdf, and failed to find the
> answers I was hoping for. So here I am, hoping this community will shed some
> light.
> I think I know what the answers will be; I need that bolted down and se
Hello,
Just finished reading the Hive-Architecture pdf, and failed to find the
answers I was hoping for. So here I am, hoping this community will shed some
light.
I think I know what the answers will be; I need that bolted down and
secured.
We are concerned about how data is transferred between
unsubscribe
> This message has no content.
You have to send an email to user-unsubscr...@hive.apache.org
On Thu, Jul 18, 2013 at 1:30 PM, Beau Rothrock wrote:
>
Changing the datatype of a column will *not* alter the column's data
itself - just Hive's metadata for that table. To modify the type of
existing data:
1. Create a new table with the desired structure
2. Copy the existing table into the new table - applying any necessary
type casting
3. Drop
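The steps above can be sketched in HiveQL, with CREATE TABLE AS SELECT combining steps 1 and 2 (the table and column names are illustrative; here a STRING column is converted to BIGINT):

```sql
-- 1+2. Create the new table and copy the data, casting as we go.
CREATE TABLE my_table_new AS
SELECT cookie, url, CAST(edate AS BIGINT) AS edate
FROM my_table;

-- 3. Drop the old table, then rename the new one into its place.
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;
```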
--help me!---
Hi Hive users! Please help me, I can't start the Hive Metastore!
I am starting the Hive Metastore and using Kerberos for authentication.
In Kerberos, I created a principal: bss_ra/mas...@example.com
[bss_ra@master sbin]$ sudo kadmin.local
Authenticating as principal root/ad
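For reference, a typical sequence for creating a metastore service principal and exporting its keytab with MIT Kerberos looks like the following; the principal name and keytab path here are illustrative, not taken from the message above:

```shell
# Illustrative MIT Kerberos commands; adjust principal and paths to your setup.
sudo kadmin.local -q "addprinc -randkey hive/metastore.example.com@EXAMPLE.COM"
sudo kadmin.local -q "ktadd -k /etc/hive/conf/hive.keytab hive/metastore.example.com@EXAMPLE.COM"

# Verify the keytab contents before pointing the metastore at it.
klist -kt /etc/hive/conf/hive.keytab
```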
Let's put it this way: what makes you think that altering the
datatype wouldn't work? After all, that's why it's there. :)
On Thu, Jul 18, 2013 at 2:45 AM, Manickam P wrote:
> Hi experts,
>
> I have created a table in Hive and loaded the data into it. Now I want to
> change the dat
Hi experts,
I have created a table in Hive and loaded the data into it. Now I want to
change the datatype of one particular column. Do I need to drop the table and
move the file into Hive again, or will it work fine if I just alter the
datatype in Hive?
Thanks,
Manickam P
Hi,
Since we saw that we have to give arguments to the RANK() function, I'm trying
to translate this expression (which works on Oracle 10g) into a functional equivalent in Hive:
RANK() OVER (PARTITION BY mag.co_magasin, dem.id_produit ORDER BY
pnvente.dt_debut_commercial DESC,
COALESCE(pnvente.id_produit,dem.id_produit
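For what it's worth, Hive 0.11 and later support standard windowing syntax, so the Oracle expression should carry over largely unchanged. A hedged sketch, reusing the aliases from the snippet above and closing the truncated COALESCE call in the obvious way (the output alias and FROM clause are illustrative):

```sql
-- Hive 0.11+ windowing; assumes the same joined tables/aliases as the
-- Oracle query.
SELECT mag.co_magasin,
       dem.id_produit,
       RANK() OVER (
         PARTITION BY mag.co_magasin, dem.id_produit
         ORDER BY pnvente.dt_debut_commercial DESC,
                  COALESCE(pnvente.id_produit, dem.id_produit)
       ) AS rnk
FROM ...  -- joins as in the original query
```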
I searched for that for a long time without finding an answer. But if you are
not in the cloud (AWS, Azure, ...), you can add the jar to $HADOOP_HOME/lib on
all of your DataNodes and then restart the mapreduce-tasktracker service like
this:
/etc/init.d/*mapreduce-tasktracker stop
/etc/init.d/*mapreduce-
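A hedged sketch of the full sequence on one node; the jar name and exact init-script name are illustrative (the wildcard in the commands above matches whatever your distribution calls the TaskTracker service):

```shell
# Illustrative: distribute a custom jar and restart the TaskTracker on a node.
sudo cp my-udf.jar $HADOOP_HOME/lib/
sudo /etc/init.d/hadoop-0.20-mapreduce-tasktracker stop
sudo /etc/init.d/hadoop-0.20-mapreduce-tasktracker start
```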
The best way to restore is from a backup. We use distcp to keep this
scalable: http://hadoop.apache.org/docs/r1.2.0/distcp2.html
The data we feed to HDFS also gets pushed to this backup, and the metadata
database from Hive gets pushed there as well. So this combination works
well for us (had to use it on
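A typical distcp invocation between two clusters looks like the following; the NameNode hostnames and paths are illustrative, not taken from the message:

```shell
# Illustrative: copy the warehouse tree from the primary cluster to the
# backup cluster. distcp runs the copy as a MapReduce job, which is what
# keeps it scalable for large datasets.
hadoop distcp hdfs://primary-nn:8020/user/hive/warehouse \
              hdfs://backup-nn:8020/backups/hive/warehouse
```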