[ 
https://issues.apache.org/jira/browse/HIVE-21509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801561#comment-16801561
 ] 

Adam Szita commented on HIVE-21509:
-----------------------------------

This looks like quite a serious issue. The root cause is as follows:
 # On the first execution of the query, the LLAP IO thread has to read from the input text file and produce (Long)ColumnVectors (CVs) from the data, wrapped into VectorizedRowBatches (VRBs).
 # These VRBs are
 ## passed by VectorDeserializeOrcWriter to a newly created async ORC writer 
thread for ORC encoding and cache persistence. 
 ## also propagated back to consumers, namely to OrcEncodedDataConsumer and 
then finally to LLAPRecordReader.
 # The ORC writer thread may only get to writing out the VRB (and therefore the CV) after the IO thread has:
 ## Created a CVB in OrcEncodedDataConsumer#decodeBatch to wrap the CV coming 
from VRB and passed the batch to LLAPRecordReader
 ## LLAPRecordReader has used this batch and is receiving a new one. At this point (on a Tez thread) it returns the previous CVB and offers it back to an object pool so that the next decodeBatch call may reuse it.
 ## The next decodeBatch call polls this reused CVB from the pool, calls CV.reset() on the CVs wrapped inside, and finally overwrites the existing data in them.
 ## ...and only now does the ORC writer thread get to writing the VRB, and therefore the very same CVs, into the cache; these have just been modified in the meantime by this reuse logic of LLAPRecordReader and OrcEncodedDataConsumer.
 ## (I guess this is why a high executor count versus a low IO thread count helps surface this issue: the 32 Tez threads are very fast at returning the used CVBs, while the single IO thread and its single ORC writer thread are outnumbered when trying to write the data out in time, before it gets corrupted.)
 # Because of this, the correct query result is displayed at first (LLAPRecordReader does receive all the correct CVs), but the content written to the cache is corrupted.
 # The second run of the query goes directly to the cache and uses the corrupted data there, producing a wrong result this time.
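The reuse race above can be condensed into a minimal, self-contained Java sketch (class and variable names here are hypothetical, not Hive's actual API): a slow async writer persists a column vector only after the consumer side has already returned it to a pool, reset it, and overwritten it, so the "cache" ends up with the wrong value.

```java
import java.util.concurrent.CountDownLatch;

// Minimal sketch of the reuse race (hypothetical names, not Hive's API):
// the async ORC writer persists the column vector only after the consumer
// has already reset and overwritten it via pool reuse.
public class VectorReuseRace {

    static long demonstrate() throws InterruptedException {
        long[] columnVector = {2450816L}; // data produced by the IO thread
        long[] cache = new long[1];       // stands in for the LLAP cache

        CountDownLatch consumerDone = new CountDownLatch(1);
        Thread asyncOrcWriter = new Thread(() -> {
            try {
                consumerDone.await();     // writer is slow: runs only after reuse
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            cache[0] = columnVector[0];   // persists the already-corrupted value
        });
        asyncOrcWriter.start();

        // Tez-side consumer: the batch goes back to the pool, is reset, reused
        columnVector[0] = 0L;             // CV.reset() + overwrite by next batch
        consumerDone.countDown();
        asyncOrcWriter.join();
        return cache[0];                  // 0 instead of 2450816
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cached value = " + demonstrate());
    }
}
```

The latch only makes the unlucky interleaving deterministic for illustration; in LLAP the same ordering occurs by chance, which is what makes the bug flaky.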

 

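One way to defuse such a race (a hedged sketch of the general technique, not necessarily the actual HIVE-21509 fix) is to hand the async writer a private copy of the vector before the batch is propagated to consumers, so pool reuse on the Tez side can no longer mutate the writer's view:

```java
import java.util.Arrays;

// Sketch of a defensive copy before hand-off (hypothetical helper, not
// Hive's actual code): the async writer gets its own snapshot of the vector.
public class CopyBeforeHandoff {

    static long[] snapshotForWriter(long[] liveVector) {
        return Arrays.copyOf(liveVector, liveVector.length); // private copy
    }

    public static void main(String[] args) {
        long[] liveVector = {2450816L};
        long[] writerCopy = snapshotForWriter(liveVector);
        liveVector[0] = 0L;                // simulated reset/reuse on the Tez side
        System.out.println(writerCopy[0]); // the snapshot is unaffected
    }
}
```

The trade-off is an extra allocation and copy per batch, which is presumably why the pooling and in-place reuse existed in the first place.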
> LLAP may cache corrupted column vectors and return wrong query result
> ---------------------------------------------------------------------
>
>                 Key: HIVE-21509
>                 URL: https://issues.apache.org/jira/browse/HIVE-21509
>             Project: Hive
>          Issue Type: Bug
>          Components: llap
>            Reporter: Adam Szita
>            Assignee: Adam Szita
>            Priority: Major
>
> In some scenarios, LLAP might store column vectors in the cache that get
> reused and reset just before their original content would be written out.
> This is a concurrency issue and is therefore flaky. It is not easy to
> reproduce, but the odds of surfacing it can be improved by setting the
> LLAP executor and IO thread counts this way:
>  * set hive.llap.daemon.num.executors=32;
>  * set hive.llap.io.threadpool.size=1;
>  * using TPCDS input data of store_sales table, which is in text format:
> {code:java}
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> WITH SERDEPROPERTIES (
>   'field.delim'='|',
>   'serialization.format'='|')
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> {code}
>  * run this query on the table: select min(ss_sold_date_sk) from store_sales;
> The first query result is correct (2450816 in my case). Repeating the query 
> will trigger reading from LLAP cache and produce a wrong result: 0.
> To make sure of running into this issue, place a Thread.sleep(250) at the
> beginning of VectorDeserializeOrcWriter#run().
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
