Could you send us your configuration of the Spark interpreter in Zeppelin?
I can see how both jobs can be long-lived in Spark and Hive/Tez, but they should
not block one another.
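For reference, the relevant pieces are usually the %spark section on the
Interpreter page plus anything exported in conf/zeppelin-env.sh. A hedged way to
grab the latter from the Zeppelin host (the path assumes a default install
layout):

    # non-comment, non-empty settings from the Zeppelin env file
    grep -v '^#' conf/zeppelin-env.sh | grep .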



    _____________________________
From: Will Du <will...@gmail.com>
Sent: Thursday, December 3, 2015 2:15 AM
Subject: Re: zeppelin job is running all the time
To:  <users@zeppelin.incubator.apache.org>


When I run spark-shell and the Hive CLI, everything is good. One more thing I
find is that when I run the Hive CLI, there is a single Hive job running (seen
from the Hue job browser) the whole time, until I exit the Hive CLI. No matter
how many HQL statements I submit, it is still a single job. I think this is
because of using Tez. Does Tez conflict with Zeppelin?
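That single long-lived job is what Tez session mode looks like: the Hive CLI
keeps one Tez application master alive and submits each query as a DAG into it,
so Hue shows one YARN app rather than one per query. A quick sanity check from
the Hive CLI (standard Hive setting; output shown only as an illustration):

    hive> set hive.execution.engine;
    hive.execution.engine=tez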
On Dec 3, 2015, at 2:15 AM, Felix Cheung <felixcheun...@hotmail.com> wrote:
I don't know enough about HDP, but there should be a way to check the user
queue in YARN?

A Spark job shouldn't affect a Hive job, though. Have you tried running
spark-shell (--master yarn-client) and a Hive job at the same time?
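For example, something along these lines in two terminals (stock Spark and YARN
CLI commands; nothing Zeppelin-specific is assumed):

    # terminal 1: a plain Spark shell on YARN
    spark-shell --master yarn-client

    # terminal 2: see what is actually running and in which queue
    yarn application -list -appStates RUNNING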
         
From: will...@gmail.com
Subject: Re: zeppelin job is running all the time
Date: Tue, 1 Dec 2015 22:56:55 -0500
To: users@zeppelin.incubator.apache.org
Do I have to stop Zeppelin to make the Hive job run? I am thinking of having
one notebook with both Spark and Hive code running.

In addition, I am quite sure the first job in the picture is the Hive job. The
second one is Zeppelin running Spark. I wonder how to unblock the Hive job.
On Dec 1, 2015, at 10:45 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>>> the status of the Hive query changes from ACCEPTED to RUNNING

The status you see in Hue is the YARN app status, which for Spark covers the
whole long-lived application. It is expected to keep running until you shut it
down in Zeppelin.
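If the long-lived Spark app ever needs to be reclaimed without restarting
Zeppelin itself, restarting the Spark interpreter from the Interpreter page
should release it; killing the YARN app directly also works. A sketch with the
stock YARN CLI (the application ID below is only a placeholder):

    yarn application -kill application_1448852434644_0001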
                         
On Wed, Dec 2, 2015 at 11:42 AM, Will Du <will...@gmail.com> wrote:
The latest HDP sandbox, 2.3.2.
On Dec 1, 2015, at 10:38 PM, Jeff Zhang <zjf...@gmail.com> wrote:
Which version of HDP do you use?
On Wed, Dec 2, 2015 at 11:23 AM, Will Du <will...@gmail.com> wrote:
I have assigned a dedicated YARN queue to Spark, and the status of the Hive
query changes from ACCEPTED to RUNNING. However, it still seems to run forever.
Everything else is in the default config of the HDP sandbox. Do I need to set
something else? The browser I saw the status in is Hue.
                                                                                
Thanks,
wd
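A dedicated-queue setup like that is normally defined in capacity-scheduler.xml;
a minimal sketch with two queues, written here as property = value for brevity
(queue names and percentages are illustrative, not taken from the sandbox):

    yarn.scheduler.capacity.root.queues            = default,spark
    yarn.scheduler.capacity.root.default.capacity  = 50
    yarn.scheduler.capacity.root.spark.capacity    = 50

The Spark interpreter would then submit with spark.yarn.queue=spark, leaving
Hive/Tez in the default queue.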
                                                         
                                                                                
On Nov 30, 2015, at 12:40 AM, Rick Moritz <rah...@gmail.com> wrote:
To explain the previous reply: the SparkContext created by Zeppelin is
persistent and independent of whether it's currently processing a paragraph or
not. Therefore the Zeppelin job will claim all resources assigned to it until
Zeppelin is stopped.
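One hedged way to keep a persistent SparkContext from pinning the cluster is
dynamic allocation, so idle executors are given back between paragraphs. The
relevant Spark properties, assuming the external shuffle service is installed
on the NodeManagers (values are illustrative):

    spark.dynamicAllocation.enabled        true
    spark.shuffle.service.enabled          true
    spark.dynamicAllocation.minExecutors   0
    spark.dynamicAllocation.maxExecutors   4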
                                                                                
         
On Mon, Nov 30, 2015 at 4:20 AM, Jeff Zhang <zjf...@gmail.com> wrote:
                                                                                
I assume this is the YARN app Job Browser. How many executors do you specify
for your Zeppelin YARN app? It seems your Zeppelin YARN app consumes all the
resources, so it blocks other applications.
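On YARN the executor footprint of the Zeppelin app is capped by the usual Spark
settings, which can be set in the Spark interpreter properties. A sketch sized
for a small sandbox (placeholder values, not a recommendation):

    spark.executor.instances   1
    spark.executor.memory      512m
    spark.executor.cores       1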
                                                           
On Mon, Nov 30, 2015 at 11:10 AM, Will Du <will...@gmail.com> wrote:
                                                                                
Hi folks,

I am running a simple Scala word count from Zeppelin in the HDP sandbox. The
job succeeds with the expected result, but the Zeppelin job is shown in the Hue
job status forever. It seems to block my Hive job. Does anyone know why?
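The paragraph is essentially the canonical Spark word count; a minimal sketch
of what it looks like in a Zeppelin notebook, where sc is the SparkContext the
interpreter provides (the input path is a placeholder):

    %spark
    // count word occurrences in a text file and print the first few
    val counts = sc.textFile("/tmp/input.txt")   // placeholder path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.take(10).foreach(println)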
                                                                                
thanks,
wd
                                                                                
<PastedGraphic-1.tiff>
                                                                                
--
Best Regards

Jeff Zhang
                                                 
--
Best Regards

Jeff Zhang
                             
--
Best Regards

Jeff Zhang