Thanks for looking into it, Tom. We're using 9.0.4, so that might indeed
be the problem. What additional data (if any) would you like to see? If
you want to look into it further, I can give you schema, though I hesitate
to spam the whole list. I could also mock up some tables and see what's the
Hi,
2011/8/12 David Johnston :
> In my table, some of the columns are of text datatype. A few values will come
> down from the UI layer as integers. I want to convert those to string/text before
> saving them into the table. Please help me with this.
>
>
> SQL Standard: "CAST( value AS text )" [or varchar]
>
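For reference, both the standard and the PostgreSQL-shorthand casts work here (table and column names are illustrative only):

    -- SQL-standard form
    INSERT INTO mytable (text_col) VALUES (CAST(42 AS text));
    -- PostgreSQL shorthand
    INSERT INTO mytable (text_col) VALUES (42::text);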
On Tue, Mar 20, 2012 at 7:14 PM, Stefan Keller wrote:
> But this only works if the input is a clean list of number characters already!
> Anything other than this will issue an error:
>
> postgres=# SELECT '10'::int;
>
> After trying hard to cope with anything possibly as an input string I
> found
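The message is cut off here, but one common way to cope with arbitrary input is to trap the cast error in a PL/pgSQL function; a minimal sketch (the function name is invented):

    CREATE OR REPLACE FUNCTION safe_to_int(s text) RETURNS integer AS $$
    BEGIN
        RETURN s::integer;
    EXCEPTION WHEN invalid_text_representation THEN
        RETURN NULL;  -- non-numeric input yields NULL instead of an error
    END;
    $$ LANGUAGE plpgsql IMMUTABLE;

    SELECT safe_to_int('10');       -- 10
    SELECT safe_to_int('10 cats');  -- NULL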
2012/3/20 Chris Angelico :
> On Tue, Mar 20, 2012 at 7:14 PM, Stefan Keller wrote:
>> But this only works if the input is a clean list of number characters
>> already!
>> Anything other than this will issue an error:
>>
>> postgres=# SELECT '10'::int;
>>
>> After trying hard to cope with anythin
On Mar 19, 2012, at 10:59 AM, Welty, Richard wrote:
> I just finished this thread from May of last year, and am wondering if this
> still represents consensus thinking about postgresql deployments in the EC2
> cloud:
>
> http://postgresql.1045698.n5.nabble.com/amazon-ec2-td4368036.html
>
> Yes,
Florent THOMAS wrote:
>>> 1 - Is there a way to have conditions for committing transactions like in oracle:
>>> http://www.scribd.com/doc/42831667/47/Validation-conditionnelle-de-transaction-62
>>
>> PostgreSQL follows the SQL standard, which does not allow anything like that.
>>
>> Later versio
Kevin Goess writes:
> Thanks for looking into it, Tom. We're using 9.0.4, so that might indeed
> be the problem. What additional data (if any) would you like to see?
Well, the first thing to do is update to 9.0.latest and see if the plan
changes. There are plenty of good reasons to do that besi
Hi,
I'm looking for a way to reproduce the encrypted string stored as a password by
means other than using the CREATE ROLE command.
When using CREATE ROLE ... PASSWORD 'somepass', the resulting string for
rolpassword in pg_authid always starts with md5, suggesting it would create
some md5 string.
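The md5 form can in fact be reproduced in plain SQL: PostgreSQL stores the md5 hash of the password concatenated with the role name, prefixed with the literal 'md5'. A quick check (the role name 'someuser' is invented for illustration):

    SELECT 'md5' || md5('somepass' || 'someuser');
    -- compare with: SELECT rolpassword FROM pg_authid WHERE rolname = 'someuser';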
On Tue, Mar 20, 2012 at 8:28 AM, Alexander Reichstadt wrote:
> Hi,
>
> I'm looking for a way to reproduce the encrypted string stored as a password by
> means other than using the CREATE ROLE command.
>
> When using CREATE ROLE ... PASSWORD 'somepass', the resulting string for
> rolpassword in pg_a
Hi all,
I'm using embedded SQL in C to create a User-Defined Aggregate (UDA).
The command is:
CREATE AGGREGATE aggname( sfunc=kmeans,
stype=double precision[],
finalfunc=kmeansfinal,
INITCOND='{1,2,3}');
Since I need
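The snippet cuts off here, but note that CREATE AGGREGATE expects the input type in its own parenthesized list. A sketch of the corrected statement, assuming the input is also double precision[] (that part is a guess):

    CREATE AGGREGATE aggname (double precision[]) (
        SFUNC = kmeans,
        STYPE = double precision[],
        FINALFUNC = kmeansfinal,
        INITCOND = '{1,2,3}'
    );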
I have now tried at least 7 different install methods to get pg up and running
on Lion. I fear that my system is now thoroughly inoculated and will never be
able to run postgres/postgis.
I started with the pg mac installer / stack builder. That worked to get pg
installed, but could not get post
On 3/20/2012 9:22 AM, Sam Loy wrote:
I have now tried at least 7 different install methods to get pg up and running
on Lion. I fear that my system is now thoroughly inoculated and will never be
able to run postgres/postgis.
I started with the pg mac installer / stack builder. That worked to ge
Sam,
I started with the pg mac installer / stack builder. That worked to get pg
installed, but could not get postgis installed.
I haven't installed PostGIS, but I have no problems running the database in
Lion using the EnterpriseDB (EDB) installer as provided.
Is there a way to purge my syst
On Wed, Mar 7, 2012 at 3:49 PM, Merlin Moncure wrote:
> On Wed, Mar 7, 2012 at 2:31 PM, Tom Lane wrote:
>> Merlin Moncure writes:
>>> On Wed, Mar 7, 2012 at 11:45 AM, Mike Blackwell
>>> wrote:
>>>> alter table a add column even_more_stuff boolean not null default false;
>>
>>> aha! that's not
On Tue, 2012-03-20 at 09:22 -0500, Sam Loy wrote:
> I have now tried at least 7 different install methods to get pg up and
> running on Lion. I fear that my system is now thoroughly inoculated and will
> never be able to run postgres/postgis.
>
> I started with the pg mac installer / stack build
On Mon, Mar 19, 2012 at 03:07:02PM -0700, Jeff Davis wrote:
> On Mon, 2012-03-19 at 15:30 -0400, Bruce Momjian wrote:
> > On Thu, Mar 01, 2012 at 02:01:31PM -0800, Lonni J Friedman wrote:
> > > I've got a 3 node cluster (1 master/2 slaves) running 9.0.x with
> > > streaming replication. I'm in the
Robert Haas writes:
> On Wed, Mar 7, 2012 at 3:49 PM, Merlin Moncure wrote:
>> On Wed, Mar 7, 2012 at 2:31 PM, Tom Lane wrote:
>>> It is not a bug. The ALTER ADD ... DEFAULT ... form implies rewriting
>>> every existing tuple of the rowtype to insert a non-null value in the
>>> added column, an
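For the archives: the usual way to avoid that full-table rewrite is to split the ALTER into steps, reusing the column from Mike's example (batching details left out):

    ALTER TABLE a ADD COLUMN even_more_stuff boolean;              -- no rewrite; existing rows read as NULL
    ALTER TABLE a ALTER COLUMN even_more_stuff SET DEFAULT false;  -- applies to new rows only
    UPDATE a SET even_more_stuff = false;                          -- backfill: still touches every row,
                                                                   -- but can be batched outside the ALTER's lock
    ALTER TABLE a ALTER COLUMN even_more_stuff SET NOT NULL;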
On Tue, Mar 20, 2012 at 11:46 AM, Bruce Momjian wrote:
> On Mon, Mar 19, 2012 at 03:07:02PM -0700, Jeff Davis wrote:
>> On Mon, 2012-03-19 at 15:30 -0400, Bruce Momjian wrote:
>> > On Thu, Mar 01, 2012 at 02:01:31PM -0800, Lonni J Friedman wrote:
>> > > I've got a 3 node cluster (1 master/2 slaves
On Tue, Mar 20, 2012 at 11:56:29AM -0700, Lonni J Friedman wrote:
> >> So how can you resume streaming without rebuilding the slaves?
> >
> > Oh, wow, I never thought of the fact that the system tables will be
> > different? I guess you could assume the pg_dump restore is going to
> > create thin
On Tue, Mar 20, 2012 at 12:16 PM, Robert Haas wrote:
> I think Tom's correct about what the right behavior would be if
> composite types supported defaults, but they don't, never have, and
> maybe never will. I had a previous argument about this with Tom, and
> lost, though I am not sure that any
Interesting idea. However, I think this is SSL between the client and the
database. Given that the client would be the server hosting the web service, I
don't think this would work for the web service client.
On Fri, Mar 16, 2012 at 2:54 PM, Raymond O'Donnell wrote:
> On 16/03/2012 18:39, Bryan Montgomery
On 20/03/12 15:22, Sam Loy wrote:
I have now tried at least 7 different install methods to get pg up and running
on Lion. I fear that my system is now thoroughly inoculated and will never be
able to run postgres/postgis.
I started with the pg mac installer / stack builder. That worked to
Hi,
The link [1] for the development snapshots of pg-admin as advertised
here [2] seems to be broken. Are these snapshots hosted somewhere
else these days, or are they no longer produced? I have a colleague
who's bravely switching from SQL Server to Postgresql who'd really
like to use the new scri
> Is there anyone who has ever successfully gotten postgres/postGIS running on
> Mac Lion? Really? How?
Hello Sam,
I'm running Lion, and had the same trouble using the Enterprise Stack Builder
to install PostGIS. I finally got it working by using Kyng Chaos' installers
for both PostgreSQL an
On Tue, Mar 20, 2012 at 02:58:20PM -0400, Bruce Momjian wrote:
> On Tue, Mar 20, 2012 at 11:56:29AM -0700, Lonni J Friedman wrote:
> > >> So how can you resume streaming without rebuilding the slaves?
> > >
> > > Oh, wow, I never thought of the fact that the system tables will be
> > > different?
Hi,
On Tue, 2012-03-20 at 16:01 -0400, Andy Chambers wrote:
[...]
> The link [1] for the development snapshots of pg-admin as advertised
> here [2] seems to be broken. Are these snapshots hosted somewhere
> else these days, or are they no longer produced?
They are no longer produced. I'll fix the
On Tue, 2012-03-20 at 16:49 -0400, Bruce Momjian wrote:
> On Tue, Mar 20, 2012 at 02:58:20PM -0400, Bruce Momjian wrote:
> > On Tue, Mar 20, 2012 at 11:56:29AM -0700, Lonni J Friedman wrote:
> > > >> So how can you resume streaming without rebuilding the slaves?
> > > >
> > > > Oh, wow, I never tho
On Tue, Mar 20, 2012 at 4:53 PM, Guillaume Lelarge
wrote:
> Hi,
>
> On Tue, 2012-03-20 at 16:01 -0400, Andy Chambers wrote:
> [...]
>> The link [1] for the development snapshots of pg-admin as advertised
>> here [2] seems to be broken. Are these snapshots hosted somewhere
>> else these days, or are
I've got a SaaS situation where I'm using 1000+ schemas in a single
database (each schema contains the same tables, just different data
per tenant). I used schemas so that the shared app servers could
share a connection to the single database for all schemas. Things are
working fine. However, whe
Actually rsync works fine at the file level and is good for manual syncing.
It really checks the files with the stat command, so a single bit change will trigger
the copy.
In practice you need to keep an eye on the completeness of the rsync action.
Try to use it without compression for large data sets, it saves t
On Tue, 2012-03-20 at 17:17 -0400, Andy Chambers wrote:
> On Tue, Mar 20, 2012 at 4:53 PM, Guillaume Lelarge
> wrote:
> > Hi,
> >
> > On Tue, 2012-03-20 at 16:01 -0400, Andy Chambers wrote:
> > [...]
> >> The link [1] for the development snapshots of pg-admin as advertised
> >> here [2] seems to be
Actually, through some experimentation, googling and looking at a postgres
book, I found out how to encrypt the password, and to compare that to
pg_shadow. However, during my research I realized the need for double
encryption, as the postgres clients do it.
So, another option is to use encryption on the w
New to PostgreSQL, I'd like to install an "unaccent.rules" dictionary and
need a how-to.
I need to install it on Ubuntu 11.10 and the latest Mac OS X.
--
Yvon
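Assuming a reasonably recent server (9.1+) with the contrib modules present, unaccent is installed per-database; a sketch (on Ubuntu 11.10 the contrib package is presumably postgresql-contrib-9.1):

    CREATE EXTENSION unaccent;
    SELECT unaccent('Hôtel');  -- returns 'Hotel'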
Greetings list!
I am pretty new to postgresql from mysql and did a fairly extensive
search of the list and came up with a few good ones but didn't find
the exact same situation as I have now, so I am venturing to ask here.
I have daily minute stock price data from 2005 on and each day with
columns
>
> right now I have about 7000 tables for individual stocks, and I use
> perl to do inserts; it's very slow. I would like to use copy or another
> bulk loading tool to load the daily raw gz data, but I need to split
> the file into per-stock files first before I do bulk loading. I consider
> this
Cody Cutrer writes:
> I've got a SaaS situation where I'm using 1000+ schemas in a single
> database (each schema contains the same tables, just different data
> per tenant). ...
> if I add "nspname = ANY(current_schemas(true))" to the query psql is
> using, and an index to pg_class on relnamespac
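A simplified illustration of the kind of catalog query being discussed (the real psql tab-completion query is considerably more involved):

    SELECT c.relname
      FROM pg_catalog.pg_class c
      JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = ANY (current_schemas(true));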
On 03/20/2012 04:27 PM, Jim Green wrote:
Greetings list!
I am pretty new to postgresql from mysql
Welcome.
I have daily minute stock price data from 2005 on and each day with
columns timestamp, open, high, low, close, volume and a few more. Each
day's data is about 1.2 million rows. I want to import al
On 20 March 2012 19:45, Michael Nolan wrote:
>
>>
>> right now I have about 7000 tables for individual stocks, and I use
>> perl to do inserts; it's very slow. I would like to use copy or another
>> bulk loading tool to load the daily raw gz data, but I need to split
>> the file into per-stock fil
On 20 March 2012 20:19, Steve Crawford wrote:
> On 03/20/2012 04:27 PM, Jim Green wrote:
>>
>> Greetings list!
>> I am pretty new to postgresql from mysql
>
> Welcome.
>
>> I have daily minute stock price data from 2005 on and each day with
>> columns timestamp, open, high, low, close, volume and a fe
On Tue, 2012-03-20 at 22:21 +0100, Henk Bronk wrote:
> Actually rsync works fine at the file level and is good for manual syncing.
> It really checks the files with the stat command, so a single bit change will trigger
> the copy.
> In practice you need to keep an eye on the completeness of the rsync action.
Rsyn
On 03/20/2012 04:27 PM, Jim Green wrote:
Greetings list!
I am pretty new to postgresql from mysql and did a fairly extensive
search of the list and came up with a few good ones but didn't find
the exact same situation as I have now, so I am venturing to ask here.
I have daily minute stock price
On Tue, Mar 20, 2012 at 8:27 PM, Jeff Davis wrote:
> On Tue, 2012-03-20 at 22:21 +0100, Henk Bronk wrote:
> > Actually rsync works fine at the file level and is good for manual syncing.
> > It really checks the files with the stat command, so a single bit change will
> > trigger the copy.
> > In practice you ne
On 20 March 2012 21:40, David Kerr wrote:
> On 03/20/2012 04:27 PM, Jim Green wrote:
>
> Greetings list!
> I am pretty new to postgresql from mysql and did a fairly extensive
> search of the list and came up with a few good ones but didn't find
the exact same situation as I have now, so I am ven
On 20 March 2012 21:54, Brent Wood wrote:
>
> Also look at a clustered index on timestamp
Thanks, this looks very helpful. What do you think about the thousands
of tables vs one table partitioned by month? I guess if I go with one
table, the index would be too big to fit in RAM?
Jim.
On 03/20/2012 06:50 PM, Jim Green wrote:
On 20 March 2012 21:40, David Kerr wrote:
On 03/20/2012 04:27 PM, Jim Green wrote:
Greetings list!
I am pretty new to postgresql from mysql and did a fairly extensive
search of the list and came up with a few good ones but didn't find
the exact same sit
On 20 March 2012 22:03, David Kerr wrote:
> \copy on 1.2 million rows should only take a minute or two, you could make
> that table "unlogged"
> as well to speed it up more. If you could truncate / drop / create / load /
> then index the table each
> time then you'll get the best throughput.
Tha
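A sketch of the staging pattern David describes, with invented table and file names (UNLOGGED needs 9.1 or later):

    CREATE UNLOGGED TABLE ticks_staging (LIKE ticks INCLUDING DEFAULTS);
    \copy ticks_staging from 'daily_ticks.csv' with csv
    -- process/slice from here, then truncate or drop before the next day's load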
On 20 March 2012 22:08, Jim Green wrote:
> On 20 March 2012 22:03, David Kerr wrote:
>
>> \copy on 1.2 million rows should only take a minute or two, you could make
>> that table "unlogged"
>> as well to speed it up more. If you could truncate / drop / create / load /
>> then index the table each
On 03/20/2012 07:08 PM, Jim Green wrote:
On 20 March 2012 22:03, David Kerr wrote:
\copy on 1.2 million rows should only take a minute or two, you could make
that table "unlogged"
as well to speed it up more. If you could truncate / drop / create / load /
then index the table each
time then yo
On 03/20/12 7:12 PM, Jim Green wrote:
Also, if I use copy, I would be tempted to go the one-table route, or
else I need to parse my raw daily file, separate it into individual symbol
files, and copy into an individual table for each symbol (this sounds
rather inefficient)..
your 7000 tables all contain
On 03/20/2012 09:12 PM, Jim Green wrote:
On 20 March 2012 22:08, Jim Green wrote:
On 20 March 2012 22:03, David Kerr wrote:
\copy on 1.2 million rows should only take a minute or two, you could make
that table "unlogged"
as well to speed it up more. If you could truncate / drop / create / lo
On 20 March 2012 22:21, David Kerr wrote:
> I'm imagining that you're loading the raw file into a temporary table that
> you're going to use to
> process / slice new data into your 7000+ actual tables per stock.
Thanks! Would "slice new data into your 7000+ actual tables per
stock." be
On 20 March 2012 22:22, John R Pierce wrote:
> your 7000 tables all contain the exact same information, with the only
> difference being the stock ticker symbol, right? then really, the single
> table, perhaps partitioned by month or whatever, is the right way to go.
> Any schema that makes yo
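In the releases current at the time, "partitioned by month" means inheritance plus CHECK constraints, roughly (all names invented):

    CREATE TABLE ticks (
        symbol text,
        ts     timestamptz,
        open   numeric, high numeric, low numeric, close numeric,
        volume bigint
    );
    CREATE TABLE ticks_2012_03 (
        CHECK (ts >= '2012-03-01' AND ts < '2012-04-01')
    ) INHERITS (ticks);
    -- plus a trigger (or explicit INSERTs) to route rows to the right child,
    -- and constraint_exclusion = partition so scans can skip other months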
On 20 March 2012 22:25, Andy Colson wrote:
> I think the decisions:
>
> 1) one big table
> 2) one big partitioned table
> 3) many little tables
>
> would probably depend on how you want to read the data. Writing would be
> very similar.
>
> I tried to read through the thread but didn't see how you
On 03/20/2012 09:35 PM, Jim Green wrote:
On 20 March 2012 22:25, Andy Colson wrote:
I think the decisions:
1) one big table
2) one big partitioned table
3) many little tables
would probably depend on how you want to read the data. Writing would be
very similar.
I tried to read through the t
On 20 March 2012 22:43, Andy Colson wrote:
> Here is some copy/pasted parts:
>
> my @list = glob('*.gz');
> for my $fname (@list)
> {
>     $db->do('copy access from stdin');
>     open my $fh, "-|", "/usr/bin/zcat $fname" or die "$fname: $!";
>     while (<$fh>)
>     {
>         $db->pg_putcopydata($_);   # feed each line into the COPY (continuation sketched in via DBD::Pg)
>     }
>     $db->pg_putcopyend();          # finish the COPY for this file (sketched in)
> }
On 03/20/12 7:49 PM, Jim Green wrote:
Yes, it's possible, but I would more likely pull the data into R and get
the avg in R..
avg() in the database is going to be a lot faster than copying the data
into memory for an application to process.
Also, you know there's a plR for postgres that lets you
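For instance, with the single-table layout sketched elsewhere in the thread, the per-symbol average never leaves the server (names invented):

    SELECT symbol, avg(high) FROM ticks GROUP BY symbol;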
On 03/20/2012 09:49 PM, Jim Green wrote:
On 20 March 2012 22:43, Andy Colson wrote:
Do you ever plan on batch deleting a BUNCH of records?
no, after historical data is populated, I'll only add data daily. no delete..
Do you ever want to read all of one symbol (like, select avg(high) fro
Also look at a clustered index on timestamp
Brent Wood
GIS/DBA consultant
NIWA
+64 (4) 4 386-0300
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org]
on behalf of Jim Green [student.northwest...@gmail.com]
Sent: Wednesday, Marc
Looks promising. Does anyone know if you install postgres using the postgres
EDB installer before using Kyng Chaos'? I'm not sure of the process…
Thanks,
Sam
On Mar 20, 2012, at 3:16 PM, Bryan Lee Nuse wrote:
>> Is there anyone who has ever successfully gotten postgres/postGIS running on
>> Mac Lion? Re
On 03/20/2012 08:54 PM, Brent Wood wrote:
Also look at a clustered index on timestamp
Brent Wood
GIS/DBA consultant
NIWA
+64 (4) 4 386-0300
A clustered index is only "clustered" at the point in time you run the command.
It won't remain that way, and with a really big table, you don't wanna
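That is, CLUSTER is a one-shot physical reorder (names invented):

    CLUSTER ticks USING ticks_ts_idx;
    -- rows inserted afterwards are not kept in index order; re-clustering
    -- a huge table means rewriting the whole thing again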
On 03/20/2012 07:26 PM, Jim Green wrote:
On 20 March 2012 22:21, David Kerr wrote:
I'm imagining that you're loading the raw file into a temporary table that
you're going to use to
process / slice new data into your 7000+ actual tables per stock.
Thanks! Would "slice new data into
On 20 March 2012 22:57, John R Pierce wrote:
> avg() in the database is going to be a lot faster than copying the data into
> memory for an application to process.
I see..
>
> Also, you know there's a plR for postgres that lets you embed R functions in
> the database server and invoke them in S
On 20 March 2012 23:01, Andy Colson wrote:
> Of course, there are probably other usage patterns I'm not aware of. And I
> also am assuming some things based on what I've heard -- not from actual
> experience.
I am not an expert in SQL, so what I get out of postgresql is probably
mostly selects, but as
On Fri, Mar 16, 2012 at 11:39 AM, Bryan Montgomery wrote:
> Hello,
> We are looking at implementing a web service that basically makes calls to
> the database.
>
> I have been thinking about ways to secure the web service based on the
> database.
>
> I initially thought about just connecting to th
Hello.
We have a FreeBSD/amd64 PostgreSQL 9.0 server and would like to move data
to a Linux/amd64 PostgreSQL 9.0 server. Are the databases on these systems
binary compatible? Can I just transfer the data files, or do I have to do a full
export/import?
--
Best regards,
Alexander Pyhalov,
system administrator of C
Alexander Pyhalov writes:
> We have a FreeBSD/amd64 PostgreSQL 9.0 server and would like to move data
> to a Linux/amd64 PostgreSQL 9.0 server. Are the databases on these systems
> binary compatible? Can I just transfer the data files, or do I have to do a full
> export/import?
If built with the same configure optio
folks,
I am a newbie in PostgreSQL, in the midst of deciding which database
technology I should choose for our large web apps: MySQL or PostgreSQL?
Could you share large sites that have implemented PostgreSQL successfully in
terms of replication, clustering, scale, etc.? I heard postg
On Tue, Mar 20, 2012 at 11:27 PM, Geek Matter wrote:
> folks,
>
> I am a newbie in PostgreSQL, in the midst of deciding which
> database technology I should choose for our large web apps: MySQL or
> PostgreSQL?
> Could you share large sites that have implemented PostgreSQL successfully in
Do any other large sites use PostgreSQL? I need to make the right decision
because my decision will affect a business that deals with $
From: Scott Marlowe
To: Geek Matter
Cc: "pgsql-general@postgresql.org"
Sent: Wednesday, March 21, 2012 1:32 PM
Subject: Re: [G
On Tue, Mar 20, 2012 at 11:54 PM, Geek Matter wrote:
> Do any other large sites use PostgreSQL? I need to make the right decision
> because my decision will affect a business that deals with $
The people who run the .info and .org domains... Lots more. Google
is your friend.