Re: looking for help with Pherf setup

2016-02-29 Thread Gaurav Kanade
-client/ folder (not from the ../phoenix-client/lib folder) HTH Gaurav

On 29 February 2016 at 11:23, Gaurav Kanade wrote:
> Are you running Pherf with "thin" client? In that case it is possible the
> class path is missing that particular jar.
>
> Gaurav
>
> On 29 Feb
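A note for anyone hitting the same ClassNotFoundException: the fix is usually a classpath issue, i.e. making sure Pherf picks up the correct client jar. A minimal sketch, assuming a Phoenix 4.x layout (jar names, install paths, and flags here are illustrative, not from the thread):

```shell
# Illustrative paths/versions only -- adjust to your installation.
# Thick client: the full client jar lives under the phoenix-client/ folder
PHOENIX_JAR=/opt/phoenix/phoenix-client/phoenix-4.7.0-HBase-1.1-client.jar

# Launch Pherf with that jar (and the HBase config dir) on the classpath
java -cp "$PHOENIX_JAR:/etc/hbase/conf" \
  org.apache.phoenix.pherf.Pherf -z zk-host:2181 -l -q
```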

Re: looking for help with Pherf setup

2016-02-29 Thread Gaurav Kanade
URLClassLoader.findClass(URLClassLoader.java:354)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> ... 2 more
>>
>> We did check that the table was there, so this is a bit puzzling.
>>
>> Thanks,
>>
>> Peter

--
Gaurav Kanade, Software Engineer
Big Data Cloud and Enterprise Division
Microsoft

YCSB with Phoenix?

2016-02-19 Thread Gaurav Kanade
Hi All, I am relatively new to Phoenix and was working on some performance tuning/benchmarking experiments. I tried to search online for whether there exists a YCSB client that goes through Phoenix. I came across this https://github.com/brianfrankcooper/YCSB/pull/178 and some related links, but it seems
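That pull request aside, one commonly used route is YCSB's generic JDBC binding driven through the Phoenix JDBC driver. A hedged sketch (property names follow the YCSB JDBC binding; the hostname and jar path are assumptions):

```shell
# The JDBC binding expects a pre-created 'usertable' with a YCSB_KEY key
# column and FIELD0..FIELD9 value columns; create it via sqlline first.
bin/ycsb load jdbc -P workloads/workloada \
  -p db.driver=org.apache.phoenix.jdbc.PhoenixDriver \
  -p db.url=jdbc:phoenix:zk-host:2181 \
  -cp /opt/phoenix/phoenix-client.jar
```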

Re: Problems with Phoenix SqlLine loading large amounts of data

2015-09-28 Thread Gaurav Kanade
n of
> the query. Have you tried running "UPDATE STATISTICS" on the table? If
> not, please see this link:
> http://phoenix.apache.org/update_statistics.html
>
> Manually splitting the table will also likely improve the
> parallelization of a select count(*) query.
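For reference, collecting statistics is a single SQL statement from sqlline; the table name below is a placeholder:

```sql
-- Gathers guideposts so Phoenix can chunk scans more finely,
-- improving parallelization of full-table queries like count(*)
UPDATE STATISTICS MY_TABLE;
```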

Problems with Phoenix SqlLine loading large amounts of data

2015-09-25 Thread Gaurav Kanade
Hello Guys, I was able to load my large data set (200 GB) with the Phoenix bulk load tool with your help last week. But I am running into another problem running queries on this now using sqlline. All I am trying to do is run a simple count(*) query. Initially I hit timeout issues due to a socketconnec

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-17 Thread Gaurav Kanade
output - but just wanted to check if this is expected behavior on workloads of this size. Thanks Gaurav

On 16 September 2015 at 12:21, Gaurav Kanade wrote:
> Thanks for the pointers Gabriel! Will give it a shot now!
>
> On 16 September 2015 at 12:15, Gabriel Reid wrote:
>

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
bs like this using screen [1] so that losing a
> client terminal connection won't get in the way of the full job completing.
>
>
> - Gabriel
>
>
> 1. https://www.gnu.org/software/screen/manual/screen.html
>
> On Wed, Sep 16, 2015 at 9:07 PM, Gaurav Kanade
> w
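The screen workflow Gabriel describes can be sketched as follows (the bulk-load command itself is illustrative; the jar path, table name, and input path are assumptions):

```shell
# Start a named session so the job survives a dropped SSH connection
screen -S bulkload

# Inside the session, launch the long-running MapReduce bulk load
hadoop jar /opt/phoenix/phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE --input /data/input.csv

# Detach with Ctrl-a d; reattach later with:
screen -r bulkload
```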

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
ase, apart from that there isn't any basic thing that you're
>> probably missing, so any additional information that you can supply
>> about what you're running into would be useful.
>>
>> - Gabriel
>>
>>
>> O

Re: Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-16 Thread Gaurav Kanade
; high
> > to me. There are very few specifics in your mail. Are you using YARN? Can
> > you provide details like table structure, # of rows & columns, etc. Do you
> > have an error stack?
> >
> >
> > On Friday, September 11, 2015, Gaurav Kanade

Re: timeouts for long queries

2015-09-15 Thread Gaurav Kanade
ffa.,
>>> hostname=ip-172-31-31-177.ec2.chonp.net,60020,1442309899160, seqNum=2
>>>
>>> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>>
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>>>
>>> at
>>> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:534)
>>>
>>> ... 31 more
>>>
>>> Is this a client-side timeout, or do I need to change something
>>> HBase-related on the server and restart the cluster? On master, or all
>>> region servers?
>>>
>>> If it's a client-side thing, where (in JDBC terms) do I do this?
>>>
>>> I've tried various things, but I always hit this timeout, and it always
>>> says the timeout is 6 (ms, presumably).
>>>
>>> James

--
Gaurav Kanade, Software Engineer
Big Data Cloud and Enterprise Division
Microsoft
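For readers with the same question: this is typically the client-side Phoenix query timeout, set in the hbase-site.xml on the client's classpath; the HBase RPC and scanner timeouts often need raising in step. The property names below are the standard ones; the values are examples only:

```xml
<!-- client-side hbase-site.xml; example values (10 minutes) -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value>
</property>
```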

Using Phoenix Bulk Upload CSV to upload 200GB data

2015-09-11 Thread Gaurav Kanade
complaining that Node Health is bad (log-dirs and local-dirs are bad). Is there some inherent setting I am missing that I need to set up for this particular job? Any pointers would be appreciated.

Thanks
--
Gaurav Kanade, Software Engineer
Big Data Cloud and Enterprise Division
Microsoft
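"log-dirs and local-dirs are bad" usually means the YARN NodeManager's disk health checker marked the disks unhealthy, commonly because utilization crossed its default 90% cutoff while the job spilled intermediate data to local dirs. If that is the cause here, the threshold can be raised in yarn-site.xml (the value below is illustrative):

```xml
<!-- yarn-site.xml: per-disk utilization cutoff, default 90.0 -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
```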