[GENERAL] How to INSERT INTO one table from another table, WHERE

2013-05-03 Thread Kirk Wythers
I am trying to insert data from 2 columns in tableB (colX and colY) into the same two columns of tableA, with a join-like WHERE clause. Is this possible? For example: INSERT INTO tableA (colX, colY) (SELECT colX, colY FROM tableB WHERE tableA.blockname = tableB.block_name AND t
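A hedged sketch of the two usual forms this takes (table and column names are from the question; the join condition is assumed): a plain INSERT ... SELECT adds new rows, while UPDATE ... FROM is the form that matches a join-like WHERE clause against rows already in tableA.

```sql
-- Adding new rows from tableB to tableA:
INSERT INTO tableA (colX, colY)
SELECT colX, colY
FROM tableB;

-- Filling colX/colY on rows that already exist in tableA, matched by block
-- name (this is what a join-like WHERE clause usually means):
UPDATE tableA a
SET colX = b.colX,
    colY = b.colY
FROM tableB b
WHERE a.blockname = b.block_name;
```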

Re: [GENERAL] run COPY as user other than postgres

2013-04-24 Thread Kirk Wythers
On Apr 24, 2013, at 6:14 AM, Bill Moran wrote: >>> > > Write your own client that uses the copy interface to > load a file from wherever and send it to the server. > > Or just use the one built in to psql, as Jasen suggested. > I am using "copy to" to write data from the db out to csv files.

[GENERAL] run COPY as user other than postgres

2013-04-23 Thread Kirk Wythers
I would like to run the COPY command as a user other than "postgres". I find it a bit of a pain (or at least requiring an extra step or two) to have the postgres user own the files that I am creating with COPY TO. Here is a simple example where the location '/some/path/to/file/file.csv' is owned
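One way around the ownership issue, sketched with a hypothetical table name: server-side COPY writes the file as the postgres server user, while psql's client-side \copy meta-command reads and writes files with the permissions of whoever runs psql.

```sql
-- Server-side: the file is created and owned by the postgres server process.
COPY some_table TO '/some/path/to/file/file.csv' WITH (FORMAT csv, HEADER);

-- Client-side alternative (psql meta-command, one line, no trailing
-- semicolon): the file is owned by the user running psql.
-- \copy some_table TO '/some/path/to/file/file.csv' WITH (FORMAT csv, HEADER)
```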

Re: [GENERAL] pulling year out of a timestamp

2013-04-11 Thread Kirk Wythers
On Apr 11, 2013, at 10:55 AM, Ryan Kelly wrote: > You want date_part, not date_trunc. > > -Ryan Thanks Ryan. It looks like " EXTRACT(YEAR FROM table2.time2)" works as well.
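A quick sketch of the difference between the three functions mentioned in the thread:

```sql
-- EXTRACT and date_part both return the year as a number:
SELECT EXTRACT(YEAR FROM TIMESTAMP '2009-01-01 00:00:00');   -- 2009
SELECT date_part('year', TIMESTAMP '2009-01-01 00:00:00');   -- 2009

-- date_trunc instead returns a timestamp truncated to the year boundary:
SELECT date_trunc('year', TIMESTAMP '2009-06-15 12:30:00');  -- 2009-01-01 00:00:00
```

So for the join in the original question, `table1.year = EXTRACT(YEAR FROM table2.time2)` is the shape that works.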

[GENERAL] pulling year out of a timestamp

2013-04-11 Thread Kirk Wythers
I am trying to perform a join between two tables where I need to join "year" in table 1 with the year component of a timestamp in table 2. Something like this: table1.year = table2.timestamp where timestamp has the format: "2009-01-01 00:00:00" I've tried date_trunc('year', table2.timestamp

Re: [GENERAL] big un stacking query - help save me from myself

2013-03-15 Thread Kirk Wythers
On Mar 14, 2013, at 10:27 AM, Kevin Grittner wrote: >> > > I didn't understand your description of what you are trying to do, > and the example has so many columns and cases that it would take a > long time to understand it. Can you distill this down to just a > few columns and cases so that i

[GENERAL] big un stacking query - save me from myself

2013-03-13 Thread Kirk Wythers
I have a fairly large table (235 million rows) with two columns that I need to "de-normalize". There has got to be a better (i.e. faster) approach than what I am doing. I am using a MAX CASE on each of the 24 variables (column names variable and value) that I want to unstack. Any suggestions wou
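For reference, a hedged sketch of the MAX CASE shape alongside the aggregate FILTER form that later PostgreSQL versions (9.4+) offer; the table name and variable names here are assumed, not from the post:

```sql
-- Classic unstack, one MAX(CASE ...) per variable:
SELECT ts,
       max(CASE WHEN variable = 'tsoil_a' THEN value END) AS tsoil_a,
       max(CASE WHEN variable = 'tsoil_b' THEN value END) AS tsoil_b
       -- ... repeated for each of the 24 variables
FROM stacked
GROUP BY ts;

-- Equivalent with FILTER (PostgreSQL 9.4+), usually clearer to maintain:
SELECT ts,
       max(value) FILTER (WHERE variable = 'tsoil_a') AS tsoil_a,
       max(value) FILTER (WHERE variable = 'tsoil_b') AS tsoil_b
FROM stacked
GROUP BY ts;
```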

[GENERAL] big un stacking query - help save me from myself

2013-03-13 Thread Kirk Wythers
I have a fairly large table (235 million rows) with two columns that I need to "de-normalize". There has got to be a better (i.e. faster) approach than what I am doing. I am using a MAX CASE on each of the 24 variables (column names variable and value) that I want to unstack. Any suggestions wou

Re: [GENERAL] PostgreSQL took over my login

2013-02-20 Thread Kirk Wythers
On Feb 20, 2013, at 5:42 PM, Adrian Klaver wrote: > On 02/19/2013 07:04 PM, NiRE wrote: >> All of a sudden my macbook pro force quit and when I restarted it it is >> run by user PostgreSQL user - i do not know the password and cannot >> change it. I cannot find out online how to fix this back. A

Re: [GENERAL] partial time stamp query

2013-02-04 Thread Kirk Wythers
y choose from an interval pick list & the > > SQL is hidden. Some extra enhancements are the automatic collation of lat & > > lon gps readings into a Postgis point for each reading record, & the > > automatic aggregation of daily points into daily track lines, so t

Re: [GENERAL] date_trunc to aggregate values?

2013-02-04 Thread Kirk Wythers
On Feb 4, 2013, at 3:26 PM, Jason Dusek wrote: > 2013/2/4 Kirk Wythers : >> I am looking for suggestions on aggregation techniques using a timestamp >> column. In my case I have tried: >> >> date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2)

Re: [GENERAL] partial time stamp query

2013-02-04 Thread Kirk Wythers
; > Cheers > > Brent Wood > > Programme leader: Environmental Information Delivery > NIWA > DDI: +64 (4) 3860529 > > From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] > on behalf of Kirk Wythers [wythe...@u

[GENERAL] date_trunc to aggregate values?

2013-02-04 Thread Kirk Wythers
I am looking for suggestions on aggregation techniques using a timestamp column. In my case I have tried: date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2), but date_trunc only seems to aggregate the timestamp. I thought I could use AVG(derived_tsoil_fifteen_min_stacked.value) in co
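The missing piece is GROUP BY: date_trunc only rewrites each timestamp, and the aggregation comes from grouping on the truncated value. A sketch using the thread's column names:

```sql
SELECT date_trunc('hour', time2) AS hour,
       avg(value)                AS avg_value
FROM derived_tsoil_fifteen_min_stacked
GROUP BY date_trunc('hour', time2)
ORDER BY hour;
```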

Fwd: [GENERAL] partial time stamp query

2013-02-04 Thread Kirk Wythers
mn representing the mean of the first and last four of each 15 minute records. Perhaps date_trunc only works for the timestamp? On Feb 4, 2013, at 8:50 AM, Misa Simic wrote: > WHERE derived_tsoil_fifteen_min_stacked.time2::date = '2010-07-07'::date > > On Monday, Febru

Re: [GENERAL] partial time stamp query

2013-02-04 Thread Kirk Wythers
mn representing the mean of the first and last four of each 15 minute records. Perhaps date_trunc only works for the timestamp? On Feb 4, 2013, at 8:50 AM, Misa Simic wrote: > WHERE derived_tsoil_fifteen_min_stacked.time2::date = '2010-07-07'::date > > On Monday, Febru

[GENERAL] partial time stamp query

2013-02-04 Thread Kirk Wythers
I am trying to write a query that grabs one particular day from a timestamp column. The data are ordered in 15 minute chunks like this: 2010-07-07 12:45:00 2010-07-07 13:00:00 2010-07-07 13:15:00 2010-07-07 13:30:00 etc… If I wanted all records from july 7th 2010, I would expect 4 x 24 = 96 reco
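Two common ways to pull one day out of a timestamp column, sketched with a hypothetical table name:

```sql
-- Cast to date: simple, but a plain index on time2 cannot be used.
SELECT * FROM readings WHERE time2::date = DATE '2010-07-07';

-- Half-open range: returns the same rows and can use an index on time2.
SELECT * FROM readings
WHERE time2 >= TIMESTAMP '2010-07-07'
  AND time2 <  TIMESTAMP '2010-07-08';
```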

[GENERAL] date_trunc to aggregate by timestamp?

2013-01-24 Thread Kirk Wythers
I am trying to come up with an approach that uses "date_trunc" to aggregate 15 minute time series data into hourly bins. My current query, which utilizes a view, performs a join, after which I use a series of WHERE clauses to specify which of the 15 minute records I want to look at. I think

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-18 Thread Kirk Wythers
~ 'tsoil' The whole point of the de-normalized table "fifteen_min_stacked_propper" is so that variable names in fifteen_min_stacked_propper.variable can be used to join on data_key.variable_channel. Does that make sense? Kirk > >> -Original Message- >> From: Kirk

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-18 Thread Kirk Wythers
On Jan 18, 2013, at 8:10 AM, Igor Neyman wrote: > Yes, my mistake, I meant to ask about fifteen_min_stacked_view definition, > and Postgres parameters from postgresql.conf configuration file, at least > those - modified from default setting and related to "resource consumption" > and "query t

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-17 Thread Kirk Wythers
tement -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv CFLAGS_SL = LDFLAGS = -arch x86_64 -pipe -Os -g -Wall -Wno-deprecated-declarations -Wl,-dead_strip_dylibs LDFLAGS_EX = -mdynamic-no-pic LDFLAGS_SL = LIBS = -lpgport -lxslt -lxml2 -lpam -lssl -lcrypto -lgssapi_krb5 -lz

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-17 Thread Kirk Wythers
> > Not enough information: > > Postgres version? > OS? > Some Postgres configuration parameters, specifically related to "RESOURCE > USAGE" and " QUERY TUNING"? > Table structures (including indexes) for: fifteen_min_stacked_propper, > fifteen_min, and data_key? > View definition for fifteen_m

[GENERAL] speeding up a join query that utilizes a view

2013-01-16 Thread Kirk Wythers
I am looking for advice on a performance problem. I'm pretty sure that the culprit of my slow performance is a view that is several hundred million records in size. Because it is a view, I can only index the underlying table, but because the view generates an "un-pivoted" version of the underlyi
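One option in this situation, sketched under the assumption that the view is the fifteen_min_stacked_view mentioned later in the thread: materialize the un-pivoted output once so the join column can be indexed directly (on 9.3+ a MATERIALIZED VIEW does the same job).

```sql
-- Materialize the view's output into a real, indexable table:
CREATE TABLE fifteen_min_stacked_mat AS
SELECT * FROM fifteen_min_stacked_view;

CREATE INDEX ON fifteen_min_stacked_mat (variable);
ANALYZE fifteen_min_stacked_mat;
```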

Re: [GENERAL] psql copy from through bash

2013-01-11 Thread Kirk Wythers
On Jan 11, 2013, at 12:18 PM, Szymon Guz wrote: > > > > On 11 January 2013 19:13, Kirk Wythers wrote: > Can anyone see what I'm missing? I am trying to run a psql "copy from" > command through a bash script to load a bunch of csv files into the same > t

[GENERAL] psql copy from through bash

2013-01-11 Thread Kirk Wythers
Can anyone see what I'm missing? I am trying to run a psql "copy from" command through a bash script to load a bunch of csv files into the same table. I'm getting an error about the file "infile" not existing? #!/bin/sh for infile in /path_to_files/*.csv do cat infile | psql dbname -c "\copy
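The likely culprit: `cat infile` reads a literal file named "infile"; the loop variable needs to be expanded as `"$infile"`. A dry-run sketch (echo in place of the real psql call; the table name is hypothetical):

```shell
#!/bin/sh
# Print the command each CSV would get; swap echo for the real psql call
# once the output looks right. Note the quoted "$infile" expansion.
for infile in /path_to_files/*.csv; do
  echo psql dbname -c "\\copy some_table FROM '$infile' WITH (FORMAT csv)"
done
```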

Re: [GENERAL] query by partial timestamp

2013-01-08 Thread Kirk Wythers
On Jan 8, 2013, at 6:48 PM, Tom Lane wrote: > The OP didn't > suggest how many years his data covers, but it's quite possible that > pulling a full year's worth of data will read enough of the table that > there's no point in worrying about whether an index could be used > anyway. There are onl

[GENERAL] query by partial timestamp

2013-01-08 Thread Kirk Wythers
I have a column of type TIMESTAMP, I'd like to query all records from 2011. If it were text I could use a partial such as: WHERE text ~ '2011' There must be a simple way to pull the year part out of a timestamp format. Thanks in advance. -- Sent via pgsql-general mailing list (pgsq
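A sketch of both shapes (column and table names assumed): extracting the year works, but a half-open range lets an ordinary index on the timestamp column help, which is the point Tom Lane raises in the reply above.

```sql
-- Works, but cannot use a plain index on ts_col:
SELECT * FROM obs WHERE EXTRACT(YEAR FROM ts_col) = 2011;

-- Index-friendly equivalent:
SELECT * FROM obs
WHERE ts_col >= TIMESTAMP '2011-01-01'
  AND ts_col <  TIMESTAMP '2012-01-01';
```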

[GENERAL] recasting to timestamp from varchar

2013-01-04 Thread Kirk Wythers
I am trying to re-cast a column as a timestamp: ALTER TABLE sixty_min ALTER COLUMN time2 TYPE timestamp; ERROR: column "time2" cannot be cast to type timestamp without time zone The column time2 is currently a varchar. I actually do not want to mess with time zones, but it seems that postgres w
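The error just means PostgreSQL wants an explicit conversion expression; a USING clause supplies it. (Plain `timestamp` is `timestamp without time zone`, so no time zones get involved.)

```sql
ALTER TABLE sixty_min
  ALTER COLUMN time2 TYPE timestamp
  USING time2::timestamp without time zone;
```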

[GENERAL] un-pivot with crosstab?

2013-01-02 Thread Kirk Wythers
I am trying to follow the example given below, but in reverse. I have something like table2, that I am trying to "un-pivot" to look like table1… And I seem to be having a booger of a time. My attempt: SELECT * FROM crosstab( 'SELECT rowid, key1,key1 FROM test WHERE key1= ''val
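Going from wide to tall needs no crosstab at all; a UNION ALL per column un-pivots directly (the layout of table2 is assumed here):

```sql
SELECT rowid, 'key1' AS key, key1 AS value FROM table2
UNION ALL
SELECT rowid, 'key2' AS key, key2 AS value FROM table2
ORDER BY rowid, key;
```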

[GENERAL] do I need a table function to do this

2012-12-29 Thread Kirk Wythers
I have been given an interesting problem to solve in a postgres db. I was given two tables t1: site treatment variable id (pk) - A X BLUE1A

[GENERAL] do I need a table function to do this?

2012-12-29 Thread Kirk Wythers
I have been given an interesting problem to solve in a postgres db. I was given two tables t1: site treatment variable id (pk) - A X BLUE1A

[GENERAL] update from a csv file?

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea
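A common pattern for this, sketched with hypothetical table and column names: COPY the new file into a staging table, then UPDATE the existing rows from it by primary key.

```sql
-- Stage the CSV, then update matching rows in place:
CREATE TEMP TABLE staging (LIKE target_table INCLUDING ALL);
COPY staging FROM '/path/to/new_data.csv' WITH (FORMAT csv, HEADER);

UPDATE target_table t
SET value = s.value
FROM staging s
WHERE t.id = s.id;
```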

Re: [GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers
On Dec 27, 2012, at 10:39 AM, Adrian Klaver wrote: > No. Some questions though. Thanks for the reply Adrian. > > What version of Postgres? 9.1 > Is that the actual UPDATE statement, I see no SET? I was reading the docs but obviously don't understand the syntax of the update statement.

[GENERAL] update table from csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea

[GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into a new database. I have discovered, however, a small number of records (a few thousand) in one of the files that contain new data that needs to be added to the database, but on rows that have a primary key and have alrea

[GENERAL] data type troubles

2012-12-21 Thread Kirk Wythers
I seem to be dealing with a data type issue when I try and import data into a new and empty database. Error Message: ERROR: date/time field value out of range: "1332471600:00:00" LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_B4WARM_A','133247160… Here are the first two rows in the files I

Re: [GENERAL] copy from questions

2012-12-19 Thread Kirk Wythers
Thanks for the reply Steve. These suggestions are new to me, so I'd like to rephrase them back to you in order to make sure I understand the bits and details. On Dec 19, 2012, at 10:48 AM, Steve Crawford wrote: >> > I suppose you could use a trigger to check each record before inserting bu

[GENERAL] copy from questions

2012-12-19 Thread Kirk Wythers
I am using version 9.1 and have a large number of files to insert. I am trying to use a simple COPY FROM command but have a couple questions. 1. There are a small number of instances where there are duplicate records that are being caught by the primary key (as it is supposed to do). However, th
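For the duplicate-key problem, one hedged sketch (all names assumed): stage the file, then insert only keys not already present. On 9.5+ an `INSERT ... ON CONFLICT DO NOTHING` replaces the NOT EXISTS; on 9.1 the staging approach below works.

```sql
-- Stage the CSV, then insert only rows whose key is new:
CREATE TEMP TABLE staging (LIKE target_table);
COPY staging FROM '/path/to/file.csv' WITH (FORMAT csv, HEADER);

INSERT INTO target_table
SELECT s.*
FROM staging s
WHERE NOT EXISTS (
  SELECT 1 FROM target_table t WHERE t.id = s.id
);
```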

Re: [GENERAL] data type - import problem

2012-12-17 Thread Kirk Wythers
NA NA NA On Dec 17, 2012, at 4:05 PM, Adrian Klaver wrote: > On 12/17/2012 01:53 PM, Kirk Wythers wrote: >> >> I seem to be dealing with a data type issue when I try and import data >> into a new and empty database. >> >> Error Message: E

[GENERAL] data type - import problem

2012-12-17 Thread Kirk Wythers
I seem to be dealing with a data type issue when I try and import data into a new and empty database. Error Message: ERROR: date/time field value out of range: "1332471600:00:00" LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_B4WARM_A','133247160… Here are the first two rows in the files
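A guess, not a diagnosis: "1332471600" looks like Unix epoch seconds with a stray ":00:00" appended (1332471600 falls on 2012-03-22, matching the adjacent value in the row). If that is what happened, the value can be converted rather than parsed as a clock time:

```sql
-- Convert epoch seconds to a timestamp (assumes the field really is epoch time):
SELECT to_timestamp(1332471600);

-- Stripping the stray ':00:00' suffix first:
SELECT to_timestamp(split_part('1332471600:00:00', ':', 1)::bigint);
```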

Re: [GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Kirk Wythers
On Oct 16, 2009, at 4:51 PM, Scott Marlowe wrote: On Fri, Oct 16, 2009 at 11:25 AM, Kirk Wythers wrote: Any ideas what the problem could be here? Use the pg_dump from the target (i.e. newer) pgsql. I.e. if going from 8.3.8 to 8.4.1, use the pg_dump that comes with 8.4.1 to dump the

[GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Kirk Wythers
I am trying to move databases to another machine (and update from 8.2 to 8.4 along the way). I first tried pg_dumpall, but I found that one of the databases did not restore any data, just an empty db with no tables. Since then I have tried pg_dump with the following: bash-3.2$ /usr/local/pg

[GENERAL] query help

2007-08-14 Thread Kirk Wythers
I need some help with rewriting a query. I have a fairly complicated query (for me anyway) that dumps daily climate data, filling in missing data with monthly averages (one line per day). I want to output monthly averages (one line per month). I am having a hard time wrapping my head around
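The usual shape for rolling daily rows up to monthly ones, sketched with hypothetical table and column names (the precip/tmin/tmax columns appear elsewhere in these threads):

```sql
SELECT date_trunc('month', obs_date) AS month,
       avg(tmin)   AS tmin_avg,
       avg(tmax)   AS tmax_avg,
       sum(precip) AS precip_total
FROM daily_climate
GROUP BY 1
ORDER BY 1;
```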

Re: [GENERAL] problem with a conditional statement

2007-05-09 Thread Kirk Wythers
On May 9, 2007, at 10:41 AM, Erik Jones wrote: On May 9, 2007, at 10:32 AM, Kirk Wythers wrote: Here is a more elaborate version, I'm trying to add 'avgsol' to your original FROM clause: SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE s.obs_i

Re: [GENERAL] problem with a conditional statement

2007-05-09 Thread Kirk Wythers
Here is a more elaborate version, I'm trying to add 'avgsol' to your original FROM clause: SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE s.obs_id END AS obs_id, site_near.station_id, site_near.longname, w.year, w.doy, w.precip, w.tmin, w.tmax, --replace missing val

Re: [GENERAL] problem with a conditional statement

2007-05-08 Thread Kirk Wythers
On May 8, 2007, at 2:02 AM, Albe Laurenz wrote: Kirk Wythers wrote: I am struggling to get a CASE WHEN statement to work within another CASE WHEN. Here is my original code: SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE s.obs_id END AS obs_id

[GENERAL] problem with a conditional statement

2007-05-07 Thread Kirk Wythers
I am struggling to get a CASE WHEN statement to work within another CASE WHEN. Here is my original code: SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE s.obs_id END AS obs_id, site_near.station_id, site_near.longname, w.year, w.doy, w.precip, w.tmin, w.tmax, --repl
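A self-contained illustration of the structure being attempted, one CASE nested in another's ELSE branch (values here are made up for the demo):

```sql
SELECT x,
       CASE
         WHEN x > 0 THEN 'pos'
         ELSE CASE WHEN x = 0 THEN 'zero' ELSE 'neg' END
       END AS sign
FROM (VALUES (1), (0), (-1)) AS t(x);
```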

Re: [GENERAL] out of memory error on 3 table join

2006-12-11 Thread Kirk Wythers
On Dec 11, 2006, at 1:43 PM, Tom Lane wrote: Kirk Wythers <[EMAIL PROTECTED]> writes: I have a database (pg 8.1.0 on OS X) where a three table inner-join gives the following errors: psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3) psql(606) malloc: *** error:

[GENERAL] out of memory error on 3 table join

2006-12-11 Thread Kirk Wythers
I have a database (pg 8.1.0 on OS X) where a three table inner-join gives the following errors: psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3) psql(606) malloc: *** error: can't allocate region psql(606) malloc: *** set a breakpoint in szone_error to debug out of memory