hi, fightfate,
Can I, inside the bulkPut() function, first use a Get to read a value, and then
put that value back to HBase?
> On Dec 9, 2015, at 16:02, censj wrote:
>
> Thank you! Now I know.
>> On Dec 9, 2015, at 15:59, fightf...@163.com wrote:
>>
>> If you are using
hi all:
How can I, from a Spark job, get a value from HBase, update it, and then put
the new value back to HBase?
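A minimal sketch of that read-modify-write pattern, assuming the plain HBase 1.x client API and Spark's foreachPartition; the table, column family, qualifier, and the rowKeys RDD are hypothetical names, not from this thread:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

rowKeys.foreachPartition { keys =>
  // one connection per partition: connections are costly and not serializable
  val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
  val table = connection.getTable(TableName.valueOf("mytable"))
  try {
    keys.foreach { key =>
      val row = Bytes.toBytes(key)
      // read the current value
      val old = table.get(new Get(row)).getValue(Bytes.toBytes("cf"), Bytes.toBytes("count"))
      // the "update" step (here: increment a counter)
      val next = (if (old == null) 0L else Bytes.toLong(old)) + 1L
      // write the new value back
      table.put(new Put(row).addColumn(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(next)))
    }
  } finally { table.close(); connection.close() }
}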
hi all:
I am writing data to HBase, but HBase raises this ERROR. Could you help me?
>
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
> for /hbase-unsecure/rs/byd0157,16020,1449106975377
> 2015-12-17 21:24:29,854 WARN [regionserver/byd0157/192.168.0.157:16020]
> zooke
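For what it's worth, a session expiry on the region server's znode usually means the region server stalled (commonly a long GC pause) beyond the ZooKeeper session timeout. One hedged mitigation, assuming you can edit hbase-site.xml, is raising the timeout (the ZooKeeper server's maxSessionTimeout must also allow the larger value):

<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>
</property>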
I use Huawei-Spark/Spark-SQL-on-HBase, but running ./bin/hbase-sql throws the following:
15/12/19 16:59:34 INFO storage.BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoSuchMethodError:
jline.Terminal.getTerminal()Ljline/Terminal;
at jline.ConsoleReader.<init>(ConsoleReader.j
OK! But I think it is a jline version error. I found jline
0.9.94 in the pom.xml.
> On Dec 19, 2015, at 17:29, Ravindra Pesala wrote:
>
> Hi censj,
>
> Please try the new repo at https://github.com/HuaweiBigData/astro ;
> we are not maintaining the old repo.
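For the record, jline.Terminal.getTerminal() exists in jline 0.9.x but not in jline 2.x, so a NoSuchMethodError here usually means two jline versions meet on the classpath. A hedged sketch of the usual fix: run mvn dependency:tree to find which dependency drags in the unwanted jline, then exclude it under that dependency in the pom.xml:

<exclusions>
  <exclusion>
    <groupId>jline</groupId>
    <artifactId>jline</artifactId>
  </exclusion>
</exclusions>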
Why do you want to override a method from SparkContext?
> On Jan 8, 2016, at 14:36, yuliya Feldman wrote:
>
> Hello,
>
> I am new to Spark and have a most likely basic question - can I override a
> method from SparkContext?
>
> Thanks
You can try it.
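A minimal sketch of what that could look like, assuming Scala: SparkContext is a regular (non-final) class, so any non-final method can be overridden in a subclass. The logging behavior below is a made-up example:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

class LoggingSparkContext(conf: SparkConf) extends SparkContext(conf) {
  override def textFile(path: String, minPartitions: Int): RDD[String] = {
    println(s"loading $path")            // example-only extra behavior
    super.textFile(path, minPartitions)  // delegate to the real implementation
  }
}

val sc = new LoggingSparkContext(new SparkConf().setAppName("demo").setMaster("local[*]"))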
> On Jan 8, 2016, at 14:44, yuliya Feldman wrote:
>
> invoked
Dear all,
Can you tell me how to get the SQLContext load function to read multiple
tables?
> …but if your tables have the same columns, you can read them one by one, then
> unionAll them, such as:
>
> val df1 = sqlContext.table("table1")
> val df2 = sqlContext.table("table2")
>
> val df = df1.unionAll(df2)
>> On Dec 2, 2015, at 4:06
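Generalizing the quoted snippet to any number of same-schema tables (the table names here are hypothetical):

val names = Seq("table1", "table2", "table3")
val all = names.map(sqlContext.table).reduce(_ unionAll _)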
…load multiple tables by using sql? JdbcRelation currently can only
> load a single table. It doesn't accept SQL as a loading command.
>
> On Wed, Dec 2, 2015 at 4:33 PM, censj <ce...@lotuseed.com> wrote:
> hi Fengdong Yu:
> I want to use sqlContext.read.format(
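One hedged workaround sketch: the JDBC source's "dbtable" option also accepts a parenthesized subquery that is treated as a table, so a union or join over several tables can be pushed down to the database (the URL and credentials below are made up):

val df = sqlContext.read.format("jdbc").options(Map(
  "url"      -> "jdbc:mysql://dbhost:3306/mydb",
  "dbtable"  -> "(select * from table1 union all select * from table2) as t",
  "user"     -> "user",
  "password" -> "secret"
)).load()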
hi all,
I want to update a row in HBase. How do I create an HBase connection inside an RDD operation?
OK! I'll try it.
> On Dec 7, 2015, at 20:11, ayan guha wrote:
>
> Kindly take a look at https://github.com/nerdammer/spark-hbase-connector
>
> On Mon, Dec 7, 2015 at 10:56 PM, censj <ce...@lotuseed.com> wrote:
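A short usage sketch based on that connector's README (treat the exact API as an assumption and check the README yourself): write an RDD of tuples, with the first element as the row key, into an HBase table.

import it.nerdammer.spark.hbase._

// requires sparkConf.set("spark.hbase.host", "your-zookeeper-quorum") beforehand
val rdd = sc.parallelize(1 to 100).map(i => (i.toString, i + 1, "Hello"))
rdd.toHBaseTable("mytable")
  .toColumns("column1", "column2")
  .inColumnFamily("mycf")
  .save()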
hi all,
I am using Spark now, but I have not found an open-source project for operating
on HBase from Spark. Can anyone point me to one?
> https://issues.apache.org/jira/browse/HBASE-13992
> already integrates that feature into the HBase side, but
> that feature has not been released.
>
> Best,
> Sun.
>
> fightf...@163.com
>
> From: censj <ce...@lotuseed.com>
> Date: 2015-
Can you give me an example?
I want to update HBase data.
> On Dec 9, 2015, at 15:19, Fengdong Yu wrote:
>
> https://github.com/nerdammer/spark-hbase-connector
>
> This one is better and easier to use.
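For the "update HBase data" part, a minimal sketch assuming the HBase 1.x client: checkAndPut writes only if the cell still holds the value read earlier, which avoids lost updates between the Get and the Put. All names here are made up:

import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Connection, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

def incrementCell(connection: Connection, key: String): Unit = {
  val table = connection.getTable(TableName.valueOf("mytable"))
  val (row, cf, col) = (Bytes.toBytes(key), Bytes.toBytes("cf"), Bytes.toBytes("count"))
  var done = false
  while (!done) {
    val old = table.get(new Get(row)).getValue(cf, col)
    val next = Bytes.toBytes((if (old == null) 0L else Bytes.toLong(old)) + 1L)
    // succeeds only if the cell is unchanged since the read; retry otherwise
    done = table.checkAndPut(row, cf, col, old, new Put(row).addColumn(cf, col, next))
  }
  table.close()
}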
So, how do I get this jar? I use an sbt-based project, but I could not find it in the sbt libraries.
> On Dec 9, 2015, at 15:42, fightf...@163.com wrote:
>
> I don't think it really needs the CDH component. Just use the API.
>
> fightf...@163.com
>
> From: censj <ce.
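A hedged sketch for getting the jar with sbt; these coordinates follow the connector's README and its spark-packages listing, so verify them before use:

resolvers += "Spark Packages Repo" at "http://dl.bintray.com/spark-packages/maven"
libraryDependencies += "it.nerdammer.bigdata" % "spark-hbase-connector_2.10" % "1.0.3"

With spark-shell you can also try: spark-shell --packages it.nerdammer.bigdata:spark-hbase-connector_2.10:1.0.3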
spark-hbase-connector
> <http://spark-packages.org/package/nerdammer/spark-hbase-connector>
> As Fengdong Yu recommended, you can also try this one, but I have no
> experience of using it.
>
>
> fightf...@163.com
>
> From: censj