Nice, it worked!!

Thanks

Jorge Machado
www.jmachado.me





> On 12 Jan 2017, at 17:46, Asher Krim <ak...@hubspot.com> wrote:
> 
> Have you tried using an alias? You should be able to replace 
> ("dbtable","sometable") with ("dbtable","SELECT utc_timestamp AS my_timestamp 
> FROM sometable")
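
[Editor's note] The alias suggestion above can be sketched as below. One caveat: Spark's JDBC source wraps whatever `dbtable` contains in `SELECT * FROM <dbtable>`, so a raw `SELECT ...` string typically needs to be parenthesized and given a table alias (`(SELECT ...) AS t`). The column list and the alias `my_ts` here are illustrative, not from the original mail:

```scala
// Sketch: alias the column named like MySQL's UTC_TIMESTAMP() function inside
// a subquery, so Spark's pushed-down WHERE clause never emits the bare
// identifier that MySQL would parse as a function call.
object JdbcAliasExample {
  // Spark's JDBC "dbtable" option accepts a parenthesized subquery with an
  // alias in place of a plain table name.
  val dbtable: String =
    "(SELECT id, last_value, `utc_timestamp` AS my_ts FROM sometable) AS t"

  def main(args: Array[String]): Unit = {
    println(dbtable)
    // With a SQLContext/SparkSession and MySQL driver on the classpath,
    // this would be used roughly as:
    //
    // val df = sqlContext.read.format("jdbc")
    //   .option("url", "jdbc:mysql://localhost:3306")
    //   .option("driver", "com.mysql.jdbc.Driver")
    //   .option("dbtable", dbtable)
    //   .load()
    //
    // df.filter("last_value IS NOT NULL").filter("my_ts <= 1347369839").count
  }
}
```

Filters then reference `my_ts`, which is an ordinary column alias on the MySQL side, so no backtick quoting is needed at all.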
> 
> -- 
> Asher Krim
> Senior Software Engineer
> 
> 
> On Thu, Jan 12, 2017 at 10:49 AM, Jorge Machado <jom...@me.com> wrote:
> Hi Guys, 
> 
> I’m having an issue loading data with the JDBC connector.
> My line of code is:
> 
> val df = sqlContext.read.format("jdbc")
>   .option("url", "jdbc:mysql://localhost:3306")
>   .option("driver", "com.mysql.jdbc.Driver")
>   .option("dbtable", "sometable")
>   .option("user", "superuser")
>   .option("password", "supersafe")
>   .option("partitionColumn", "id")
>   .option("lowerBound", "1325376000")
>   .option("upperBound", "1483228800")
>   .option("numPartitions", "20")
>   .load()
> 
> when I do: df.filter("last_value IS NOT NULL").filter("utc_timestamp <= 1347369839").count
> or: df.filter("last_value IS NOT NULL").filter("`utc_timestamp` <= 1347369839").count
> 
> on the mysql logs I see : SELECT `last_value`,`utc_timestamp` FROM sometable 
> WHERE utc_timestamp <= 1347369839 AND id >= 1451658240 AND id < 145955088
> 
> The problem is that UTC_TIMESTAMP is a MySQL function, so the unquoted 
> identifier gets executed as a function call. How can I force Spark not to 
> strip the backticks (``) in the WHERE clause?
> 
> 
> 
> 
> Jorge Machado
> www.jmachado.me
> 
