be more than
one of each? Is there anywhere this implementation is documented or defined?
Thank you,
Andreea
From: Andreea Paduraru
Reply-To: "user@hive.apache.org"
Date: Thursday, 13 December 2018 at 14:39
To: "user@hive.apache.org"
Subject: Partition Filtering Using LIKE
Hi,
I would like to know what kind of expressions the ‘LIKE’ operator supports in
the case of partition filtering. Is it percent (“%”) and underscore (“_”) or
something else?
Thanks,
Andreea
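For reference, standard HiveQL LIKE patterns use '%' for any sequence of characters and '_' for exactly one character; whether the metastore itself can prune partitions on a LIKE predicate (rather than Hive filtering the partition list afterwards) varies by Hive version. A hedged sketch, with a hypothetical table sales partitioned by a STRING column dt:

-- '%' matches any sequence of characters:
SELECT *
FROM sales
WHERE dt LIKE '2018-12-%';    -- every partition in December 2018

-- '_' matches exactly one character:
SELECT *
FROM sales
WHERE dt LIKE '2018-12-1_';   -- partitions 2018-12-10 through 2018-12-19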
Hi everyone,
I recently found out that a row/column masking and filtering feature was added to
Hive: https://issues.apache.org/jira/browse/HIVE-13125
During my research I found that this feature can be used through Apache Ranger. Is
it possible to configure and use th
Hello Hive,
I posted the below question on Stackoverflow:
<http://stackoverflow.com/questions/24817308/hive-support-for-filtering-unicode-data?noredirect=1#comment38534961_24817308>
w. In my custom InputFormat I can read the
> config settings
>
> JobConf.get("hive.io.filter.text");
>
> JobConf.get("hive.io.filter.expr.serialized");
>
well, you don't need double quotes, but yes.
And so I can then find the predicate that I need to do the filtering.
In particular I can set the input splits so that it just reads the right
records.
>Really? With ORC, allowing the reader to skip over rows that don't matter is
>very important. Keeping Hive from rechecki
>inputformats to negotiate what parts of the predicate they can process.
Ah, yes, sorry. I really want to be able to remove part of the predicate and
subsume the filtering into the InputFormat class.
There’s little point in me going down this route if I can’t do that.
>>
>>-- Owen
On Wed, May 15, 2013 at 3:38 AM, Peter Marron <
peter.mar...@trilliumsoftware.com> wrote:
> Hi,
>
> I’m using Hive 0.10.0 and Hadoop 1.0.4.
>
> I would like to create a normal table but have some of my code run so that
> I can r
Hi,
I'm using Hive 0.10.0 and Hadoop 1.0.4.
I would like to create a normal table but have some of my code run so that I can
remove filtering parts of the query and limit the output in the splits of the
InputFormat. I believe that this is "Filtering Pushdown" as desc
est function to
get this to work or is there something else I should be using
From: Ladda, Anand
Sent: Monday, May 28, 2012 11:00 AM
To: user@hive.apache.org
Subject: RE: FW: Filtering on TIMESTAMP data type
Debarshi
Didn't quite follow your first comment. I get the write-your-own UDF part b
Debarshi
Didn't quite follow your first comment. I get the write-your-own UDF part but
was wondering how others have been transitioning from STRING dates to TIMESTAMP
dates and getting filtering, partition pruning, etc. to work with constants.
-Anand
From: Debarshi Basak [mailto:debars
SolutionsOutsourcing-"Ladda, Anand" wrote: -
To: "user@hive.apache.org" , "d...@hive.apache.org" From: "Ladda, Anand" Date: 05/26/2012 06:58PMSubject: FW: Filtering on TIMESTAMP data type
How do I set-up a filter constant for TIMESTAMP dat
How do I set up a filter constant for the TIMESTAMP datatype? In Hive 0.7, since
timestamps were represented as strings, a query like this would return data:
select * from LU_day where day_date ='2010-01-01 00:00:00';
But now with day_date as a TIMESTAMP column it doesn't. Is there some type of a
TO_T
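One commonly suggested workaround (a hedged sketch, reusing the LU_day and day_date names from the message above) is to make the constant an explicit TIMESTAMP rather than a bare string:

-- Compare against an explicit TIMESTAMP constant rather than a string literal.
SELECT *
FROM LU_day
WHERE day_date = CAST('2010-01-01 00:00:00' AS TIMESTAMP);

-- Or keep string-style constants by comparing the column's string rendering
-- (note that wrapping the column in a CAST generally defeats partition pruning).
SELECT *
FROM LU_day
WHERE CAST(day_date AS STRING) = '2010-01-01 00:00:00';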
From: Cam Bazz
To: user@hive.apache.org
Sent: Tue, February 8, 2011 7:57:53 PM
Subject: filtering out crawlers
Hello,
Is there a practical way to filter the logs left by crawlers like google?
They usually have user-agent strings like
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
Is there a database for the
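A hedged sketch of the usual approach, filtering on the user-agent field with LIKE or RLIKE patterns; the access_logs table and user_agent column are hypothetical names:

-- RLIKE uses Java regular expressions; (?i) makes the match case-insensitive.
-- Rows with a NULL user_agent are also dropped by these predicates.
SELECT *
FROM access_logs
WHERE user_agent NOT RLIKE '(?i)googlebot|bingbot|slurp|baiduspider';

-- The same idea with plain LIKE wildcards:
SELECT *
FROM access_logs
WHERE user_agent NOT LIKE '%Googlebot%'
  AND user_agent NOT LIKE '%bingbot%';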
hmmm, I've seen mention of SymLink but I don't yet grasp how it works/applies
to selecting files to process. Also, I don't have much control over how the
data gets to the bucket I end up reading from, hence the need to powerfully
select.
Could you point me to some SymLink documentation or an
On Mon, Jan 24, 2011 at 5:58 PM, Avram Aelony wrote:
> Hi,
>
> I really like the virtual column feature in 0.7 that allows me to request
> INPUT__FILE__NAME and see the names of files that are being acted on.
>
> Because I can see the files that are being read, I see that I am spending
> time qu
Hi,
I really like the virtual column feature in 0.7 that allows me to request
INPUT__FILE__NAME and see the names of files that are being acted on.
Because I can see the files that are being read, I see that I am spending time
querying many, many very large files, most of which I do not need
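A hedged illustration of using the virtual column in a predicate; my_table and the path fragment are hypothetical. Note that filtering on INPUT__FILE__NAME restricts which rows come back, but Hive may still open and read the other files before discarding their rows:

-- See which files contribute rows, restricted to a subset of paths.
SELECT INPUT__FILE__NAME, COUNT(*) AS row_count
FROM my_table
WHERE INPUT__FILE__NAME LIKE '%/2011/01/24/%'
GROUP BY INPUT__FILE__NAME;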
Hi,
I just recently updated to trunk; I was lagging a few months behind. Now I'm
getting errors like: "Filtering is supported only on partition keys of type
string"
It seems some type checking was added on
org.apache.hadoop.hive.metastore.parser.ExpressionTree.java:161 wh
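For reference, the check in question concerns metastore-side partition filtering, which at that point only handled STRING partition keys. A hedged sketch of the distinction (table and column names are hypothetical):

-- A STRING partition key keeps metastore partition filtering working:
CREATE TABLE events (id BIGINT, payload STRING)
PARTITIONED BY (dt STRING);

-- A non-string partition key, e.g. PARTITIONED BY (hr INT), could trigger the
-- "Filtering is supported only on partition keys of type string" error when a
-- partition filter was pushed to the metastore.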
21 matches