DDL. See http://phoenix.apache.org/language
>
> On Tue, Feb 2, 2016 at 11:54 AM, Serega Sheypak
> wrote:
>
>> Hm... and what is the right way to presplit a table then?
>>
>> 2016-02-02 18:30 GMT+01:00 Mujtaba Chohan :
>>
>>> If your filter matches few rows due to filter on l
r standard features.
> Thanks,
> James
>
> [1] https://phoenix.apache.org/views.html
>
> On Tue, Feb 2, 2016 at 1:42 PM, Serega Sheypak
> wrote:
>
>> It's a lot of overhead, you have to query twice...
>>
>> 2016-02-02 22:34 GMT+01:00 Steve Terrell :
>>
>
d so that the person writing
> queries for the first table can know what fields are available.
>
> Also would like to know if there is a way to do bulk upserting with
> dynamic fields.
>
> On Tue, Feb 2, 2016 at 3:27 PM, Serega Sheypak
> wrote:
>
>> Hi, is it p
Hi, is it possible to select all dynamic columns if you don't know their
names in advance?
Example:
I have a table with a single defined column named PK, which is the primary key.
Someone runs the query:
UPSERT INTO MY_TBL(PK, C1, C2, C3) VALUES('x', '1', '2', '3')
where C1, C2, C3 are dynamic columns
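As far as I know, Phoenix (at least in the 4.x versions discussed here) cannot select dynamic columns whose names it doesn't know; they have to be declared, with their types, in every statement that touches them. A minimal sketch, assuming the MY_TBL/PK/C1..C3 names from the example above and a placeholder connection URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DynamicColumnsExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Dynamic columns are declared inline, with their types, in the UPSERT.
            stmt.executeUpdate(
                "UPSERT INTO MY_TBL (PK, C1 VARCHAR, C2 VARCHAR, C3 VARCHAR) "
                    + "VALUES ('x', '1', '2', '3')");
            conn.commit();

            // They must be declared again in the SELECT to be read back.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT PK, C1, C2, C3 FROM MY_TBL (C1 VARCHAR, C2 VARCHAR, C3 VARCHAR) "
                        + "WHERE PK = 'x'")) {
                while (rs.next()) {
                    System.out.println(rs.getString("PK") + " -> " + rs.getString("C1"));
                }
            }
        }
    }
}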
need for multiple block reads for
> salted one.
>
>
> On Tuesday, February 2, 2016, Serega Sheypak
> wrote:
>
>> > then you would be better off not using salt buckets altogether
>> rather than having 100 parallel scans and block reads in your case. I
>> Did
Does phoenix have something similar:
hbase org.apache.hadoop.hbase.util.RegionSplitter MY_TABLE HexStringSplit
-c 10 -f c
The command creates a pre-split table with 10 splits, where each split takes a
part of the range from 000 to f?
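There is no direct RegionSplitter wrapper in Phoenix that I'm aware of, but the DDL accepts explicit split points via SPLIT ON, which gives a similar effect. A sketch with a placeholder table, URL, and illustrative split points (not an exact hex-range split):

// Placeholder table and split points; SPLIT ON pre-creates one region per range.
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     java.sql.Statement stmt = conn.createStatement()) {
    stmt.executeUpdate(
        "CREATE TABLE IF NOT EXISTS MY_TABLE (PK VARCHAR PRIMARY KEY, V VARCHAR) "
            + "SPLIT ON ('2', '4', '6', '8', 'a', 'c', 'e')");
}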
2016-02-02 10:34 GMT+01:00 Serega Sheypak :
> > th
salted table offer much better performance
> since it ends up reading fewer blocks from a single region.
>
> //mujtaba
>
> On Mon, Feb 1, 2016 at 1:16 PM, Serega Sheypak
> wrote:
>
>> Hi, here is my table DDL:
>> CREATE TABLE IF NOT EXISTS id_ref
>> (
Hi, here is my table DDL:
CREATE TABLE IF NOT EXISTS id_ref
(
id1 VARCHAR NOT NULL,
value1 VARCHAR,
id2 VARCHAR NOT NULL,
value2 VARCHAR,
CONSTRAINT id_ref_pk PRIMARY KEY (id1, id2)
) IMMUTABLE_ROWS=true, SALT_BUCKETS=100, VERSIONS=1, TTL=691200
I'm trying to analyze resu
Hi, I'm using Phoenix in a web application. My Phoenix version is 4.3.0.
I'm getting exceptions immediately when restarting the application.
What could it be? I'm doing a select by primary key.
Caused by: org.apache.phoenix.exception.PhoenixIOException:
java.lang.RuntimeException:
java.util.concurrent.
Hi, I see a surprising result.
The Phoenix implementation is 2 times slower than the pure HBase implementation.
Both implementations do the same:
1. put 2 rowkeys at once
2. get by rowkey.
No scans.
The previous implementation took 40-50 ms per insert (without batching, one request
= one put).
The current one takes 120
I'm using 4.3.0-clabs-phoenix-1.0.0 (phoenix for CDH)
2015-10-06 20:41 GMT+02:00 Serega Sheypak :
> Hi, it's a web app.
> There are many concurrent web threads (100 per app). Each thread:
> 1. create connection
> 2. execute statement
> 3. close statement
> 4. cl
't have such problems.
2015-10-06 18:52 GMT+02:00 Samarth Jain :
> Serega, any chance you have other queries concurrently executing on the
> client? What version of Phoenix are you on?
>
>
> On Tuesday, October 6, 2015, Serega Sheypak
> wrote:
>
>> Hi,
Hi, found something similar here:
http://mail-archives.apache.org/mod_mbox/phoenix-user/201501.mbox/%3CCAAF1Jdg-E4=54e5dC3WazL=mvue8c93e4zohobiywaovs86...@mail.gmail.com%3E
My queries are:
1. insert into TABLE(KEY_COL, A, B, C) values(?, ?, ?, ?)
2. select A, B, C, KEY_COL from TABLE where KEY_COL=?
Why
https://phoenix.apache.org/dynamic_columns.html
It works 100%. Feel free to ask if it doesn't work for you.
2015-09-10 11:08 GMT+02:00 Hafiz Mujadid :
> Hi!
>
> How can I add a new column into an existing table ?
>
> Thanks
>
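Dynamic columns aside, a permanent column can also be added to an existing table with ALTER TABLE. A minimal sketch with placeholder table, column, and URL names:

// Placeholder names; adds a nullable VARCHAR column to an existing table.
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     java.sql.Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("ALTER TABLE MY_TBL ADD NEW_COL VARCHAR");
}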
; otherwise phoenix would throw an exception
>
> Alex
>
> On Sun, Sep 6, 2015 at 12:36 PM, Serega Sheypak
> wrote:
>
>> Hi, the approach above doesn't fit a web app. There are multiple simultaneous
>> upserts coming from different threads.
>> So the only thing is to
s?
>>> Thanks in advance!
>>> -Jaime
>>>
>>> On Thu, Sep 3, 2015 at 3:35 PM, Samarth Jain
>>> wrote:
>>>
>>>> Yes. PhoenixConnection implements java.sql.Connection.
>>>>
>>>> On Thu, Sep 3, 2015 at 12:34 PM,
{
> stmt.executeUpdate();
> batchSize++;
> if (batchSize % commitSize == 0) {
> conn.commit();
> }
> }
> conn.commit(); // commit the last batch of records
> }
>
> You don't want commitSize to be too large since Phoenix client keeps the
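For illustration, a self-contained sketch of the batching pattern quoted above; the table, columns, row count, URL, and commitSize value are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchUpsertExample {
    public static void main(String[] args) throws Exception {
        int commitSize = 1000;
        int batchSize = 0;
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             PreparedStatement stmt =
                 conn.prepareStatement("UPSERT INTO MY_TBL (PK, C1) VALUES (?, ?)")) {
            conn.setAutoCommit(false);          // buffer mutations on the client
            for (int i = 0; i < 10000; i++) {
                stmt.setString(1, "key-" + i);
                stmt.setString(2, "value-" + i);
                stmt.executeUpdate();
                batchSize++;
                if (batchSize % commitSize == 0) {
                    conn.commit();              // flush one batch to the server
                }
            }
            conn.commit();                      // commit the last partial batch
        }
    }
}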
Hi, I'm using Phoenix in a Java web application. The app does upserts or selects
by primary key.
What is the right pattern to do it?
- I create a new connection for each request
- prepare and execute a statement
- close the statement
- close the connection
Does Phoenix cache connections internally? What is the right way
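For illustration, a minimal per-request sketch with try-with-resources; the URL, table, and column names are placeholders. The usual guidance is that Phoenix connections are cheap to create (the underlying HBase connection is cached by the driver), so open, use, and close per request is the commonly recommended pattern:

// Placeholder URL, table, and column names.
public String findValueByPk(String pk) throws java.sql.SQLException {
    String sql = "SELECT VAL FROM MY_TBL WHERE PK = ?";
    try (java.sql.Connection conn =
             java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
         java.sql.PreparedStatement stmt = conn.prepareStatement(sql)) {
        stmt.setString(1, pk);
        try (java.sql.ResultSet rs = stmt.executeQuery()) {
            return rs.next() ? rs.getString("VAL") : null;
        }
    } // connection, statement, and result set are all closed here, even on exceptions
}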
No, the problem is different: DATA_BLOCK_ENCODING='SNAPPY' doesn't work for some
reason.
1. I can't execute such DDL from sqlline.
2. I can execute it via JDBC, but I can't query or insert into this table...
2015-09-01 23:35 GMT+02:00 Serega Sheypak :
> Thanks, I'
(DispatchCallback.java:83)
at
sqlline.SunSignalHandler.handle(SunSignalHandler.java:38)
2015-09-02 22:09 GMT+02:00 Serega Sheypak :
> Hi, I'm here again. Wrote local unit-tests, all works perfectly. Started
> to run smokes on prod
Hi, I'm here again. Wrote local unit tests, everything works perfectly. Started to
run smoke tests on production and can't achieve much success. What does this exception
mean?
My ddl is:
CREATE TABLE IF NOT EXISTS cross_id_attributes
(
crossId VARCHAR NOT NULL
CONSTRAINT cross_id_reference_pk
Ok, I hit this one:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.2/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdpch_relnotes-hdp-2.1.1-knownissues-phoenix.html
problem is solved.
2015-09-01 23:51 GMT+02:00 Serega Sheypak :
> Hm... If I pass quorum as:
> node01:2181,node04:2181,
): Malformed connection url.
jdbc:phoenix:node01:2181,node04:2181,node05:2181
at
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:361)
~[phoenix-core-4.3.0-clabs-phoenix-1.0.0.jar:na]
2015-09-01 23:37 GMT+02:00 Serega Sheypak :
> Hi, I wrote ninja-applicat
Hi, I wrote a ninja application (ninjaframework.org) with Phoenix. I used my
custom testing utility to test my app. When I deployed my app to the server, I
got an exception:
java.sql.SQLException: No suitable driver found for
jdbc:phoenix:node01,node04,node05:2181
at java.sql.DriverManager.getConnection(Dri
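For reference, "No suitable driver found" usually means the Phoenix client jar is missing from the runtime classpath (or the driver class never got registered). A sketch that forces the driver to load before connecting, using the URL from the message above:

// Fails fast with ClassNotFoundException if the Phoenix client jar is absent.
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
java.sql.Connection conn =
    java.sql.DriverManager.getConnection("jdbc:phoenix:node01,node04,node05:2181");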
Thanks, I'll try.
It's a template query, it works 100% through JDBC
2015-09-01 23:26 GMT+02:00 Michael McAllister :
> I think you need a comma between your column definition and your
> constraint definition.
>
>
> On Sep 1, 2015, at 2:54 PM, Serega Sheypak
> wrote:
>
Hi, I wrote an integration test that uses the HBase testing utility and Phoenix.
The test creates a table and inserts data. It works fine.
I'm trying to run
CREATE TABLE IF NOT EXISTS cross_id_attributes
(
crossId VARCHAR NOT NULL
CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId)
)SALT_BUCKE
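For illustration, the same DDL with the comma that the reply above suggests between the column definition and the constraint; the connection URL and SALT_BUCKETS value are placeholders because the original is truncated:

// Placeholder URL and SALT_BUCKETS value; note the comma before CONSTRAINT.
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     java.sql.Statement stmt = conn.createStatement()) {
    stmt.executeUpdate(
        "CREATE TABLE IF NOT EXISTS cross_id_attributes ( "
            + "crossId VARCHAR NOT NULL, "
            + "CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId) "
            + ") SALT_BUCKETS = 10");
}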
VARCHAR in your upsert statement:
>
> upsert into cross_id_attributes (crossId, id1, id2) values
> ('crossIdvalue','id1Value','id2Value')
>
>
> Sent from my iPhone
>
> On 30 Aug 2015, at 22:38, Serega Sheypak wrote:
>
> Hi, Getting error
I would suggest you use
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/BufferedMutator.html
instead of a list of Puts, and share the BufferedMutator across threads (it's
thread-safe). I reduced my response time from 30-40 ms to 4 ms while using the
BufferedMutator. It also sends mutations in
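For illustration, a minimal sketch of that suggestion using the plain HBase 1.x client API; the table, column family, and qualifier names are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedMutatorExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             // BufferedMutator is thread-safe, so one instance can be shared
             // across request-handling threads.
             BufferedMutator mutator =
                 conn.getBufferedMutator(TableName.valueOf("MY_TABLE"))) {
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            mutator.mutate(put);   // buffered on the client
            mutator.flush();       // force the buffered mutations out to HBase
        }
    }
}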
Hi, reading https://phoenix.apache.org/language/
I'm confused by this statement:
Otherwise, data is buffered on the client and, if auto commit is on,
committed in row batches as specified by the UpsertBatchSize connection
property (or the phoenix.mutate.upsertBatchSize HBase config property which
defaults t
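For illustration, a sketch of how that batch size might be set; the UpsertBatchSize property name comes from the documentation quoted above, while the URL and value are placeholders:

// Placeholder URL and batch size; the property name is from the docs quoted above.
java.util.Properties props = new java.util.Properties();
props.setProperty("UpsertBatchSize", "500");
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
    conn.setAutoCommit(true);  // with auto commit on, rows are committed in batches
    // ... run UPSERT statements here ...
}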
Try to use this:
https://repository.cloudera.com/cloudera/cloudera-repos/org/apache/phoenix/phoenix-core/4.3.0-clabs-phoenix-1.0.0/
2015-07-15 0:05 GMT+02:00 Veerraju Tadimeti :
> Hi,
>
> I am trying to connect from phoenix 4.0.0-HBase1.0 to Cloudera 5.4.3, HBase
> 1.0. I am getting the following
Hi, here is my table
CREATE TABLE IF NOT EXISTS cross_id_reference
(
id1 VARCHAR NOT NULL,
id2 VARCHAR NOT NULL,
CONSTRAINT my_pk PRIMARY KEY (id1)
) IMMUTABLE_ROWS=true, TTL=691200;
Is it ok to set TTL and IMMUTABLE_ROWS at the same time? TTL should delete
expired rows
Hi, I have an immutable table with 4 columns:
id1, id2, meta_id1, meta_id2.
The primary key is id1; I select all fields of a row by id1, so it's
the fastest way to get data.
The second access path is to select by id2.
I have a serious mixed workload. What is better:
1. use a secondary index for id2 (a sketch follows below)
2
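For option 1, a hedged sketch of a global secondary index on id2; the table name and URL are placeholders, and the included columns are taken from the list above:

// Placeholder table name; covering the meta columns allows index-only reads.
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     java.sql.Statement stmt = conn.createStatement()) {
    stmt.executeUpdate(
        "CREATE INDEX IF NOT EXISTS idx_by_id2 ON my_table (id2) "
            + "INCLUDE (meta_id1, meta_id2)");
}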
at allows Cloudera
> Manager to understand what it is and how to use it.
>
> Kevin
>
> *From:* Serega Sheypak [mailto:serega.shey...@gmail.com
> ]
> *Sent:* Tuesday, June 23, 2015 1:27 PM
> *To:* user@phoenix.apache.org
> *Subject:* Re: CDH 5.4 and Phoenix
>
> I read that l
ction returned by
> getJdbcFacade().createConnection().
> If not, you need to call connection.commit() after executeUpdate()
>
> -Samarth
>
> On Tuesday, June 23, 2015, Serega Sheypak
> wrote:
>
>> Hi, I'm testing dummy code:
>>
>> int result = getJdb
om the Cloudera Manager like any other
> parcel. And it's for Phoenix 4.3.1; 1.0 is probably the version of
> Cloudera's parcel.
>
>
> On Tue, Jun 23, 2015 at 2:48 PM Serega Sheypak
> wrote:
>
>> Hi, no
Hi, I'm testing dummy code:
int result = getJdbcFacade().createConnection().prepareStatement("upsert
into unique_site_visitor (visitorId, siteId, visitTs) values ('xxxyyyzzz',
1, 2)").executeUpdate();
LOG.debug("executeUpdate result: {}", result); //executeUpdate
result: 1
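For illustration, the same upsert with the commit that the reply above mentions; without it (and without auto commit enabled), the mutation stays buffered on the client. The URL is a placeholder; table and values are from the snippet above:

// Placeholder URL; table and values are from the snippet above.
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:localhost:2181");
     java.sql.PreparedStatement stmt = conn.prepareStatement(
         "upsert into unique_site_visitor (visitorId, siteId, visitTs) "
             + "values ('xxxyyyzzz', 1, 2)")) {
    int result = stmt.executeUpdate();  // returns 1 even before the commit
    conn.commit();                      // without this the row never reaches HBase
}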
Hi, no one:) ?
2015-06-21 22:41 GMT+02:00 Serega Sheypak :
> Hi!, did anyone try to integrate Phoenix 1.0 with CDH 5.4.x?
> I see weird installation path here:
>
> http://www.cloudera.com/content/cloudera/en/developers/home/cloudera-labs/apache-phoenix/install-apache-phoenix-cloud
Hi! Did anyone try to integrate Phoenix 1.0 with CDH 5.4.x?
I see a weird installation path here:
http://www.cloudera.com/content/cloudera/en/developers/home/cloudera-labs/apache-phoenix/install-apache-phoenix-cloudera-labs.pdf
I would like to avoid it and run the app using plain Maven dependencies.
Rig