On Mon, Feb 13, 2017 at 4:52 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> My project depends heavily on protobuf 2. Can I tell Phoenix which version
>> of protobuf to use when I am sending a request?
>>
>
>
--
Mark Heppner
> There was a discussion to drop HBase 1.0 support
> from v4.9 onwards.
> http://search-hadoop.com/m/Phoenix/9UY0h21AGFh2sgGFK?
> subj=Re+DISCUSS+Drop+HBase+1+0+support+for+4+9
>
>
>
> On Tue, Feb 7, 2017 at 8:13 PM, Mark Heppner
> wrote:
>
>> Pedro,
>> I can't answer your question
/1.2
>
> Is there any plan to support HBase 1.0 again in this (or a newer) version?
>
> Thanks for the great work!
>
> Regards.
>
> --
> Best regards.
> Pedro Boado.
>
--
Mark Heppner
Will Xu wrote:
> Hi Mark,
>
> When you say Phoenix supports ACID do you mean via Tephra?
>
>
> Regards,
>
> Will
> --
> *From:* Mark Heppner
> *Sent:* Tuesday, January 31, 2017 6:37 AM
> *To:* user@phoenix.apache.org
> *Cc:* noam.bu
--
Mark Heppner
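
On the Tephra question above: Phoenix's transaction support in the 4.x line is Tephra-backed, and a table opts in with the TRANSACTIONAL=true property (the server side also needs phoenix.transactions.enabled=true in hbase-site.xml). A minimal sketch via phoenixdb, with hypothetical table and column names, not taken from the thread:

# Sketch only: assumes a Phoenix Query Server at localhost:8765 and
# phoenix.transactions.enabled=true in hbase-site.xml.
import phoenixdb

conn = phoenixdb.connect('http://localhost:8765/', autocommit=False)
cur = conn.cursor()

# TRANSACTIONAL=true opts this table into Phoenix's Tephra-backed transactions.
cur.execute("""
    CREATE TABLE IF NOT EXISTS demo_txn (
        id INTEGER NOT NULL PRIMARY KEY,
        val VARCHAR
    ) TRANSACTIONAL=true
""")

cur.execute("UPSERT INTO demo_txn VALUES (1, 'first')")
cur.execute("UPSERT INTO demo_txn VALUES (2, 'second')")
conn.commit()  # both upserts become visible atomically
conn.close()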
data and make sure you've got the schema, queries, and HBase settings set up
> the way you like, then add Spark into the mix. Then start adding a bit more
> data, check results, find any bottlenecks, and tune as needed.
>
> If you're able to identify any issues specifically wit
> Phoenix seems to do a very good job not reading data from column families
> that aren't needed by the query, so I think your schema design is fine.
>
> On Jan 19, 2017, at 10:30 AM, Mark Heppner wrote:
>
> Thanks for the quick reply, Josh!
>
> For our demo cluster, we h
gions servers
>>
>> Also added the following properties to hbase-site.xml:
>>
>> <property>
>>   <name>phoenix.trace.statsTableName</name>
>>   <value>SYSTEM.TRACING_STATS</value>
>> </property>
>> <property>
>>   <name>phoenix.trace.frequency</name>
>>   <value>always</value>
>> </property>
>>
>> After this, I am not clear where to place the DDL for SYSTEM.TRACING_STATS.
>> Also I could not see ./bin/traceserver.py to start.
>> Please advise.
>>
>> Thanks,
>> Pradheep
>>
>
>
--
Mark Heppner
One thing to be aware of is the present Phoenix MR /
> Spark code isn't location aware, so executors are likely reading big chunks
> of data from another node. There are a few patches to address this, but
> they're not in a released version yet:
>
> https://issues.apache.org/jira/brow
query more efficiently?
If this is a better design, is there any way of moving the "image" column
family from "mytable" to the default column family of the new "images"
table? Is it possible to create the new table with the "image_id"s, make
the foreign keys, then move the column family into the new table?
--
Mark Heppner
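
One hedged way to do the migration asked about above, since Phoenix has no FOREIGN KEY constraint and no command to move a column family between tables: copy the data server-side with UPSERT SELECT, then drop the old column. The column names below are assumptions, not from the thread:

# Sketch under assumptions: "mytable" has an INTEGER id and a single
# "image".data VARBINARY column; adjust names to the real schema.
import phoenixdb

conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
cur = conn.cursor()

# New table keyed by image_id, holding the payload in the default family.
cur.execute("""
    CREATE TABLE IF NOT EXISTS images (
        image_id INTEGER NOT NULL PRIMARY KEY,
        data VARBINARY
    )
""")

# Copy rows across; the "foreign key" is just the shared id value,
# maintained by the application rather than enforced by Phoenix.
cur.execute('UPSERT INTO images (image_id, data) SELECT id, "image".data FROM mytable')

# After verifying the copy, drop the old column from the original table.
cur.execute('ALTER TABLE mytable DROP COLUMN "image".data')
conn.close()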
on.relocateRegion(ConnectionManager.java:1152)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1136)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:957)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:531)
> ... 32 more
> sqlline version 1.1.9
>
> Kindly let me know how to fix this error.
>
> Thanks,
>
>
>
--
Mark Heppner
g each of these
> 3600 columns, the query takes around 2+ minutes to return just a few rows
> (LIMIT 2, LIMIT 10, etc.).
>
> Subsequently, when selecting a smaller number of columns, performance seems
> to improve.
>
> Is it an anti-pattern to have a large number of columns in Phoenix tables?
>
> *Cheers !!*
> Arvind
>
--
Mark Heppner
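
A small illustration of the usual remedy for the slowdown described above, with hypothetical names: project only the columns a query needs, since Phoenix skips column families that no projected or filtered column touches, while SELECT * forces every one of the ~3600 columns to be materialized per row:

# Hypothetical table/column names, for illustration only.
import phoenixdb

conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
cur = conn.cursor()

# Anti-pattern from the question: every column is deserialized for each
# returned row, even with a small LIMIT.
# cur.execute('SELECT * FROM wide_table LIMIT 10')

# Cheaper: name only what the application actually uses.
cur.execute('SELECT id, col_17, col_942 FROM wide_table LIMIT 10')
for row in cur.fetchall():
    print(row)
conn.close()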
ow many I mean, should one be dedicated to the HBase RegionServer, one to
> the HBase Master, and one to ZooKeeper?
>
--
Mark Heppner
If you have the sqlline-thin client you can test it out.
>>>>>
>>>>> $>bin/sqlline-thin.py http://localhost:8765
>>>>>
>>>>> Regards,
>>>>> Will
>>>>> --
>>>>> *From:* Cui Lin
>>>>> *Sent:* Friday, December 16, 2016 10:29 AM
>>>>> *To:* user@phoenix.apache.org
>>>>> *Subject:* Phoenix database adapter for Python not working
>>>>>
>>>>> I followed the instructions from http://python-phoenixdb.readthedocs.io/en/latest/
>>>>>
>>>>> to connect Hbase in cloudera cluster, but I got the following error
>>>>> below.
>>>>>
>>>>>
>>>>> >>> import phoenixdb
>>>>> >>> database_url = 'http://localhost:8765/'
>>>>> >>> conn = phoenixdb.connect(database_url, autocommit=True)
>>>>> Traceback (most recent call last):
>>>>> File "", line 1, in
>>>>> File "/root/anaconda2/lib/python2.7/site-packages/phoenixdb/__init__.py",
>>>>> line 63, in connect
>>>>> client.connect()
>>>>> File "/root/anaconda2/lib/python2.7/site-packages/phoenixdb/avatica.py",
>>>>> line 152, in connect
>>>>> raise errors.InterfaceError('Unable to connect to the specified
>>>>> service', e)
>>>>> phoenixdb.errors.InterfaceError: ('Unable to connect to the specified
>>>>> service', error(111, 'Connection refused'), None, None)
>>>>>
>>>>>
>>>>> I can create tables using phoenix-sqlline.py localhost:2181:/hbase or
>>>>> even use ./psql.py to import CSV, so why does the Python adapter not
>>>>> work? Could someone give me a simple example that allows the adapter to
>>>>> connect to HBase in Cloudera?
>>>>>
>>>>> I've been trying to find the solution for days... please help!
>>>>>
>>>>>
>>>>> --
>>>>> Best regards!
>>>>>
>>>>> Lin,Cui
>>>>>
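
For anyone hitting the same error: "Connection refused" on 8765 normally means the Phoenix Query Server simply isn't running on that host. sqlline.py and psql.py talk to ZooKeeper (port 2181) through the thick JDBC driver, while phoenixdb talks HTTP to the query server, so the former working proves nothing about the latter. A minimal check, assuming a default install:

# Start the Phoenix Query Server on the server side first, e.g.:
#   bin/queryserver.py start      (listens on port 8765 by default)
import phoenixdb

# Point at the host actually running the query server, not at ZooKeeper.
conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
cur = conn.cursor()
cur.execute('SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5')  # smoke test
print(cur.fetchall())
conn.close()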
--
Mark Heppner
db issue.
>>
>> Thanks,
>> James
>>
>> On Tue, Dec 6, 2016 at 8:58 AM, Mark Heppner
>> wrote:
>>
I encountered something interesting and I'm not sure if it's a bug in
Phoenix itself, the query server, or just a side effect of using a large
binary column. If I create a table like this (in sqlline):
create table test1 (
id integer not null primary key,
image varbinary,
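
The DDL above is cut off by the archive. A self-contained reconstruction of the kind of table being described, with the truncated remainder assumed, plus a round-trip of a binary value through phoenixdb:

# The table shape after "image" is an assumption; the original is truncated.
import phoenixdb

conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS test1 (
        id INTEGER NOT NULL PRIMARY KEY,
        image VARBINARY
    )
""")

# Round-trip a binary payload; the thread's issue concerned large blobs.
payload = b'\x00\x01\x02' * 1000
cur.execute('UPSERT INTO test1 VALUES (?, ?)', (1, phoenixdb.Binary(payload)))
cur.execute('SELECT image FROM test1 WHERE id = 1')
print(len(cur.fetchone()[0]))
conn.close()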