Hi Michael,
You can try using a WITH statement (a common table expression; I believe
Hive supports these from 0.13 on), roughly:
WITH input AS (SELECT colA, colB FROM table ORDER BY colA ASC)
SELECT colB FROM input;
Best regards,
Robin Verlangen
*Chief Data Architect*
W http://www.robinverlangen.nl
E ro...@us2.nl
*What is CloudPelican?* <http://goo.gl/Lt7BC>
One thing I found in the change logs is
https://issues.apache.org/jira/browse/HIVE-7041, which sounds like it
might have something to do with this. I don't use any byte datatypes in
the structure, though, so that would be hard to verify.
Best regards,
Robin Verlangen
*Chief Data Architect*
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-DecimalTypeIncompatibilitiesbetweenHive0.12.0and0.13.0
but that doesn't seem to help either.
Any idea on how I can resolve this?
Thanks in advance!
Best regards,
Robin Verlangen
*Chief Data Architect*
W http://www.robinverlangen.nl
Hi Christian,
Sounds like a workaround, but how do you prefix the job with a certain
name? Is that possible from within a Hive query or session?
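Something like this is what I have in mind, if such a property exists (I
am guessing at the classic MapReduce property name here, and the job name
is just an example):
-- give the next query's MapReduce job a recognizable name (Hive CLI)
SET mapred.job.name=robin_daily_rollup;
SELECT COUNT(*) FROM standard_feed;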
Best regards,
Robin Verlangen
*Data Architect*
W http://www.robinverlangen.nl
E ro...@us2.nl
*What is CloudPelican?* <http://goo.gl/Lt7BC>
Hmm, after looking in the job- and task-tracker web interfaces, it seemed
that one of the new nodes was unable to connect to two of the others. This
caused the copying of data to "hang" (in fact: timeout after timeout,
after timeout, ...).
Best regards,
Robin Verlangen
*Software engineer*
It actually seems that only 1 of the 24 reducers hangs at the "copy" phase.
Any solutions for this?
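Would speculative execution help here, so a backup attempt of the slow
reducer could take over? I am thinking of something like this (property
name per the classic Hadoop 1.x config, if I have it right):
-- launch backup attempts for slow reduce tasks
SET mapred.reduce.tasks.speculative.execution=true;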
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
Nothing special over there. Most of the jobs do complete after quite some
time, but it makes no sense to me that it takes that long. It's probably
just a few hundred megabytes.
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
Hi there,
It seems that some of my jobs hang in the reduce phase for a very long
time (days, in some cases). Is there anything I could tune? The query is
pretty simple, like:
SELECT SUM(colA), to_date(colB) AS dt FROM table GROUP BY to_date(colB)
ORDER BY dt ASC;
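Would tuning the reducer count make any difference? I was thinking of
knobs like these (property names as I understand them from the Hive and
Hadoop docs; the values are just a starting point):
-- force more reducers, or lower the data volume each reducer gets
SET mapred.reduce.tasks=24;
SET hive.exec.reducers.bytes.per.reducer=256000000;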
Best regards,
Robin
because of enabling the MySQL binlog for replication. I didn't know that
could relate to Hive. I've disabled it for now. Thank you all for your
time.
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
ask*
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
Hi Chen,
The user that ran this job is root, and all HDFS folders are also owned
by root.
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
SS*
*Total MapReduce CPU Time Spent: 38 minutes 31 seconds 260 msec*
Does anyone have a clue how to resolve this?
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
I found a workaround / solution to this, read more about it on my personal
blog:
http://www.robinverlangen.nl/index/view/507e5cc902681-422420/tableau-server-with-hive-issue-column-index-out-of-bounds.html
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
Does anyone out here have a solution to this?
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
doesn't sound like something production-ready.
Thank you very much in advance!
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
Hi there,
We notice that Hive leaves a lot of _copy files in place. Does anyone
know how to resolve this? It seems to happen when the MoveTask fails
after a LOAD DATA command.
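I suppose the leftovers can at least be removed by hand from the Hive
CLI, along these lines (the warehouse path and file name below are made
up; adjust to your own layout):
-- list leftover _copy files under the table's warehouse directory
dfs -ls /user/hive/warehouse/standard_feed;
-- remove one explicitly, once confirmed it's an orphan
dfs -rm /user/hive/warehouse/standard_feed/data.txt_copy_1;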
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
<http://goo.gl/Lt7BC>
@Jamie:
I was trying this too with a view, like:
DROP VIEW IF EXISTS standard_today; CREATE VIEW standard_today AS SELECT *
FROM standard_feed WHERE bdate='2012-09-21';
However, when I run SELECT * FROM standard_today, it starts scanning all
of the data again.
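I wonder if substituting a literal into the query from outside would keep
the partition pruning, e.g. via hivevar (assuming our Hive CLI version
supports variable substitution):
-- set the date once per session, then use it as a constant predicate
SET hivevar:run_date=2012-09-21;
SELECT * FROM standard_feed WHERE bdate='${hivevar:run_date}';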
Best regards,
Robin
Hi Bejoy,
Thank you for your reply. Is there any way to fix my problem? I want a
query with a dynamic range: from now back to, in some cases, now minus x
days.
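To be concrete, this is roughly the query shape I'm after (using
date_sub/from_unixtime as I read them in the UDF docs; I don't know yet
whether partition pruning survives these expressions):
-- rows from the last 7 days up to today
SELECT * FROM standard_feed
WHERE bdate >= date_sub(to_date(from_unixtime(unix_timestamp())), 7)
  AND bdate <= to_date(from_unixtime(unix_timestamp()));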
Best regards,
Robin Verlangen
*Software engineer*
*
*
W http://www.robinverlangen.nl
E ro...@us2.nl
Disclaimer: The
When we use it dynamically:
SELECT * FROM standard_feed WHERE bdate=to_date(unix_timestamp())
*Starts a job with 1000 mappers, 2 reducers*
What's the problem here? The result of to_date on the current timestamp
should be equal to a plain fixed date, shouldn't it? Does anyone have a
solution?
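As a workaround I'm considering resolving the date before the query is
compiled, so the planner sees a constant and pruning can kick in (the
variable name is my own; in a real cron job the date would come from
date +%Y-%m-%d rather than being hard-coded):
hive -hiveconf run_date=2012-09-21 -e 'SELECT * FROM standard_feed WHERE bdate="${hiveconf:run_date}"'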
Best regards,
Robin Verlangen
Same for me. It has always been there and never really bothered me, but
if there's a fix, we should apply it.
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
Hi Manish,
Thank you for your response. I can't really see why this would happen,
as we used the same format in another cluster and it worked well. Is this
a version problem? We do actually have a date field in the rows.
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
house*
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl