...have timestamp mapping implemented.
Regards,
Marek
2015-04-06 22:25 GMT+02:00 Nicolas Maillard <nmaill...@hortonworks.com>:
Hello Marek
your error seems to point to a conversion mismatch when bringing back the
timestamp type from Phoenix; essentially the timestamp is being brought back...
...4.2, but I have the 4.3 version installed in
HBase - could it be the root cause of the exception?
Thanks!
Marek
2015-04-06 20:57 GMT+02:00 Nicolas Maillard <nmaill...@hortonworks.com>:
Hello Marek
There are a couple of ways of reaching Phoenix through Hive:
- One is calling out directly to the HBase layer with the Hive-HBase connector,
but this has some caveats (a sketch of that mapping follows below).
- The second is this project I am working on; the latest branch is built
against Phoenix 4.3, but building it against 4.2...
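For the first route, a minimal sketch of a Hive table mapped onto an existing
HBase table through the Hive-HBase storage handler; the table name "t1", the
column family "cf" and the columns are hypothetical, not taken from this thread:

CREATE EXTERNAL TABLE hbase_t1 (
  rowkey STRING,
  val STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  -- :key maps to the HBase row key; cf:val is a hypothetical HBase column
  "hbase.columns.mapping" = ":key,cf:val"
)
TBLPROPERTIES ("hbase.table.name" = "t1");

One caveat to keep in mind: this reads raw HBase cells, so values written by
Phoenix with its own type encodings will not be decoded for you.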
> *From:* "xiaobo1982"
> *Send time:* Wednesday, Dec 24, 2014 10:13 AM
> *To:* "user"
> *Subject:* Re: sqlline.py failed to work
>
> This works, thanks.
>
> ------ Original ------
> *From:* "Nicolas Maillard"
> *Send time:*
> ...using builtin-java classes where applicable
>
> 14/12/10 05:20:59 ERROR
> client.HConnectionManager$HConnectionImplementation: The node /hbase is not
> in ZooKeeper. It should have been written by the master. Check the value
> configured in 'zookeeper.znode.parent'. There could be a mismatch with the
> one configured in the master.
> [xiaobogu@lix1 bin]$
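This error usually means the client is looking under the wrong parent znode;
on HDP the HBase parent znode is typically /hbase-unsecure rather than /hbase.
A sketch of a sqlline start line that spells out the znode (the ZooKeeper host
and port here are assumptions, not taken from this thread):

./sqlline.py lix1:2181:/hbase-unsecure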
--
Nicolas Maillard
Hello Prakash
Considering Hive or Phoenix is a little misleading; they do serve different
needs, so let me break it down as best I can.
You mention security:
Phoenix and Hive both work on a secured Hadoop cluster, but Hive with Hive
Atz has a more fine-grained authorization model. So from that perspective...
Hello
Just to be sure: you are using HDP 2.1, you have gotten the Phoenix jars from
the page you listed, you have put the Phoenix jar in the lib directory on all
the HBase nodes, and you have restarted the whole HBase service.
If so, could you also paste the line you use to start sqlline.
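For reference, a start line typically just passes the ZooKeeper quorum (the
host name here is a placeholder, not taken from this thread):

./sqlline.py zk-host:2181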
On Mon, Sep 1, 2014...
Hello Russel
Phoenix works transparently on HDP 2.1; I have not tried it on HDP 2.0, and I
am not sure a lot of tests have been done on HBase 0.96. If I am not mistaken,
Phoenix 3 is compiled against HBase 0.94 and Phoenix 4 against HBase 0.98,
but this can be changed.
Do you have any information on a specific e...
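As for changing the HBase target, a hedged sketch of such a rebuild; the
-Dhbase.version property name is an assumption, so check the actual property
in Phoenix's pom.xml before relying on it:

# rebuild Phoenix against a different HBase release (property name assumed)
mvn clean package -DskipTests -Dhbase.version=0.98.0-hadoop2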
Hello Rahul
You will need to create at least one column; a default column family will be
created if you do not specify one.
You will need to create at least one column name per column family you would
like to have if you want specific column families in your table.
At a later stage you can write or read from...
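A minimal sketch of such a create in Phoenix SQL; the table and column names
are hypothetical. The prefix before the dot names the column family, and
unprefixed columns land in the default family:

CREATE TABLE example_t (
  id BIGINT NOT NULL PRIMARY KEY,
  -- "a" and "b" become explicit column families
  a.col1 VARCHAR,
  b.col2 VARCHAR,
  -- no prefix, so this goes to the default column family
  val VARCHAR
);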
Hey James
Looking at your reply got me thinking: could pagination of very large
result sets be set up in a temporary HBase table?
Say you query for a large dataset with a complex query; the result could
be "cached" in a temp table and served paginated, where rowkeys would be
positions. This way you...
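A rough sketch of that idea in Phoenix SQL; the table, columns and page size
are all hypothetical. Keying the cache on the result position makes each page
a cheap range scan over the row key:

-- cache table keyed by result position
CREATE TABLE result_cache (
  pos BIGINT NOT NULL PRIMARY KEY,
  payload VARCHAR
);

-- page 3 with a page size of 100 rows
SELECT payload FROM result_cache
WHERE pos >= 200 AND pos < 300;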
Hello Abe
You are right, currently the Phoenix client is the final step, hence some
processing can happen there.
One way is to actually put the client on the cluster to avoid long and
suboptimal network hops.
Maybe a service standing in the cluster for you, in front of the client, to
do the last steps and ev...