2010/4/30 Vincenzo Romano :
> 2010/4/30 David Fetter :
>> On Thu, Apr 29, 2010 at 11:29:36AM +0200, Vincenzo Romano wrote:
>>> > No info about this point (partial indexes)?
>>> > Is also this geared with linear algorithms ?
>>>
>>> Should I move to an "enterprise grade" version of PostgreSQL?
>>
>>
Hi
Did you eventually figure out what was wrong?
Was it just that you were trying to load a full result set and running
out of memory with an OutOfMemoryError?
Or was the JVM truly crashing rather than just throwing an OutOfMemoryError?
--
Craig Ringer
2010/4/30 David Fetter :
> On Thu, Apr 29, 2010 at 11:29:36AM +0200, Vincenzo Romano wrote:
>> > No info about this point (partial indexes)?
>> > Is also this geared with linear algorithms ?
>>
>> Should I move to an "enterprise grade" version of PostgreSQL?
>
> The enterprise grade version of Post
On Thu, Apr 29, 2010 at 11:29:36AM +0200, Vincenzo Romano wrote:
> > No info about this point (partial indexes)?
> > Is also this geared with linear algorithms ?
>
> Should I move to an "enterprise grade" version of PostgreSQL?
The enterprise grade version of PostgreSQL is the community version.
Thanks a lot for all your responses
I am impressed, really impressed. I never thought I could get this many
responses in such a short time. Wonderful support :)
Thanks a lot :) :)
I don't have details; I'll get them really soon. But all your input is
really valuable. I now have much more information.
On 04/29/2010 05:08 PM, Thomas Kellerer wrote:
SELECT organization, state, lastdate, age(lastdate)
FROM (
SELECT organization,
state,
(select max(idate) from times where
customers.custid=times.custid and taskid = 27) as lastdate
FROM customers
) t
order by lastdate desc
Hello there,
1. Try using the COPY command; you will see a significant decrease in the
loading time.
2. Turn off autocommit and remove foreign key constraints if it is only a
one-time load - this will also help in decreasing the load time.
Try these options and let us know how it went.
We load around 6
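A minimal sketch of points 1 and 2 together (table, column, and file names here are placeholders, not from the original post):

BEGIN;
-- One bulk COPY replaces thousands of individual INSERTs.
COPY big_table (id, payload) FROM '/tmp/big_table.csv' WITH CSV;
COMMIT;

Dropping the foreign keys before the COPY and recreating them afterwards avoids a per-row constraint check during the load.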
Thanks, Merlin! The "restack" function solves the problem! :)
> what are you trying to do w/unfold function exactly?
The recursive query I mentioned was to produce from the argument
array[array[1,2,3],array[4,5,6],array[7,8,9],array[10,11]] the result
array[1,2,3,4,5,6,7,8,9,10,11].
The behavio
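For what it's worth, on 8.4+ the rectangular case can also be flattened without recursion; a sketch (this is not Merlin's restack function, just the built-in unnest):

SELECT ARRAY(SELECT unnest(ARRAY[[1,2,3],[4,5,6],[7,8,9]]));
-- {1,2,3,4,5,6,7,8,9}

Note that PostgreSQL multidimensional arrays must be rectangular, which is why the ragged example stops at [7,8,9] here.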
On 30/04/2010 5:29 AM, Eric Langheinrich wrote:
I'm looking for options to recover data from a crashed postgres database
server. We recently had a solid state storage device blow up taking the
database server with it.
The database is version 8.3, the pg_clog, pg_xlog and subdirectories of
pg_tbl
Andy Colson wrote on 29.04.2010 23:51:
Here is my query, which works:
select organization,
state,
(select max(idate) from times where customers.custid=times.custid and
taskid = 27) as lastdate,
age( (select max(idate) from times where customers.custid=times.custid
and taskid = 27) )
from custom
On 4/29/2010 4:51 PM, Andy Colson wrote:
I tried this:
select organization, state, max(idate), age(max(idate))
from customers
inner join times using(custid)
where taskid = 27
group by organization, state
order by idate desc nulls last;
but get error that times.idate must appear in group by or
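One way around that error, assuming the intent was to sort by the aggregate (a sketch, not tested against the original schema):

select organization, state, max(idate), age(max(idate))
from customers
inner join times using(custid)
where taskid = 27
group by organization, state
order by max(idate) desc nulls last;

Ordering by max(idate) rather than the bare idate keeps the ORDER BY column out of the GROUP BY requirement. Note the inner join also drops customers with no times rows at all, which the subselect version keeps.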
Here is my query, which works:
select organization,
state,
(select max(idate) from times where customers.custid=times.custid and
taskid = 27) as lastdate,
age( (select max(idate) from times where
customers.custid=times.custid and taskid = 27) )
from customers
order by lastdate desc null
On Thu, 2010-04-29 at 15:29 -0600, Eric Langheinrich wrote:
> I'm looking for options to recover data from a crashed postgres
> database server. We recently had a solid state storage device blow up
> taking the database server with it.
>
> The database is version 8.3, the pg_clog, pg_xlog and s
I'm looking for options to recover data from a crashed postgres database
server. We recently had a solid state storage device blow up taking the
database server with it.
The database is version 8.3, the pg_clog, pg_xlog and subdirectories of
pg_tblspc were wiped out with the crashed storage devic
Any suggestions?
On Thu, Apr 29, 2010 at 4:42 PM, raghavendra t wrote:
> Hi All,
>
> I am using Postgres 8.4 and its pg_restore -j option. I have a dump of the database
> taken with -Fc and selected the pg_restore -j option for faster restoration.
> When the restoration process is in progress, I want to mon
Thanks Merlin:
I failed to mention that I'm running 8.3 (no array_agg), but you certainly
pointed me in the right direction. This worked:
INSERT INTO foo_arrays SELECT
cde,
nbr,
ARRAY_ACCUM(CAST(aaa AS text)),
ARRAY_ACCUM(CAST(bbb AS text)),
ARRAY_ACCUM(CAST(ccc AS text))
FROM raw_foo
GROU
On Apr 29, 2010, at 10:45 AM, Justin Graf wrote:
> Many people encode the binary data in Base64 and store it as a text data
> type, then never have to deal with escaping the bytea data type, which I
> have found can be a pain
Damn. Wish I'd thought of that ;-)
--
Scott Ribe
scott_r...@elevated-dev
On Thu, Apr 29, 2010 at 1:41 PM, Scott Marlowe wrote:
> On Wed, Apr 28, 2010 at 7:08 PM, Jaime Rodriguez
> wrote:
>> hi,
>> Today is my first day looking at PostgreSQL
>> I am looking to migrate a MS SQL DB to PostgreSQL :) :)
>> My customer requires that DBMS shall support 4000 simultaneous requ
On Wed, Apr 28, 2010 at 7:08 PM, Jaime Rodriguez
wrote:
> hi,
> Today is my first day looking at PostgreSQL
> I am looking to migrate a MS SQL DB to PostgreSQL :) :)
> My customer requires that DBMS shall support 4000 simultaneous requests
> Also the system to be deploy maybe a cluster, with 12 mi
On 4/29/2010 3:18 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>
> However, that toast limit is per-table, whereas the pg_largeobject limit
> is per-database. So for example if you have a partitioned table then
> the toast limit only applies per partition. With large objects you'd
> fall ove
This whole sockets conversation has wandered way off topic. PostgreSQL
runs into high-connection scaling issues due to memory limitations (on
Windows in particular, as noted in the FAQ entry I suggested), shared
resource contention, and general per-connection overhead long before
socket issues
Alvaro Herrera writes:
> Each toasted object also requires an OID, so you cannot have more than 4
> billion toasted attributes in a table.
> I've never seen this to be a problem in real life, but if you're talking
> about having that many large objects, then it will be a problem with
> toast too.
Justin Graf wrote:
> On 4/29/2010 12:07 PM, David Wall wrote:
> >
> >
> > Big downside for the DB is that all large objects appear to be stored
> > together in pg_catalog.pg_largeobject, which seems axiomatically
> > troubling that you know you have lots of big data, so you then store
> > them t
I don't think it's that easy. 50,000 sockets open, sure, but what's the
performance? The programming model has everything to do with that,
and Windows select() won't support that many sockets with any sort of
performance. For Windows you have to convert to using non-blocking
sockets w/ messages
On Thu, Apr 29, 2010 at 1:51 PM, David Wall wrote:
> I missed the part that BYTEA was being used since it's generally not a good
> way for storing large binary data because you are right that BYTEA requires
> escaping across the wire (client to backend) in both directions, which for true
> binary da
On 4/29/2010 11:49 AM, Ozz Nixon wrote:
On 4/29/10 12:42 PM, Greg Smith wrote:
Alban Hertroys wrote:
The reason I'm asking is that Postgres doesn't perform at its best on
Windows and I seriously wonder whether the OS would be able to handle
a load like that at all (can Windows handle 4000 open
it has been years since I've mucked in the C++ swamp, but
that means your (near) heap is OK but your stack is hosed..
probably specific to compiler (version) and operating system (version) and
environment settings.. ping back if you are still experiencing those problems
with those configuration
On 4/29/2010 1:51 PM, David Wall wrote:
>
>> Put it another way: bytea values are not stored in the pg_largeobject
>> catalog.
>
> I missed the part that BYTEA was being used since it's generally not a
> good way for storing large binary data because you are right that
> BYTEA requires escaping
Jorge Arevalo writes:
> Many thanks! That was one of my errors. Another one was this:
> char szDataPointer[10];
> sprintf(szDataPointer, "%p", a_pointer);
> These lines caused a memory error.
That looks all right in itself (unless you're on a 64-bit machine, in
which case you need a bigger arra
Huh??? Isn't that the point of using bytea or text datatypes?
I could have sworn bytea does not use the large object interface, it uses
TOAST - or have I gone insane?
You're not insane :)
Put it another way: bytea values are not stored in the pg_largeobject
catalog.
I missed the part that
You can avoid stemming by using 'simple' instead of 'english' as the
language of the words in to_tsvector (which is a little more awkward
than the cast).
"There are no stop words for the simple dictionary. It will just
convert to lower case, and index every unique word.
SELECT to_tsvector(
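A complete example along those lines (a sketch):

SELECT to_tsvector('simple', 'hollywood') @@ to_tsquery('simple', 'holly:*');
 ?column?
----------
 t
(1 row)

With 'simple' nothing gets stemmed to 'holli', so the prefix match behaves as expected.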
On Thu, Apr 29, 2010 at 3:56 PM, Tom Lane wrote:
> Jorge Arevalo writes:
>> Yes. For example, the function expects 2 arguments, and it's called
>> with 2 arguments: 1 composite type (following this format
>> https://svn.osgeo.org/postgis/spike/wktraster/doc/RFC1-SerializedFormat)
>> and one integ
>
> Curious note - how does the non-subselect version and the subselect
> version compare performance-wise?
Magnus,
On a test table with 12,000 rows there's not much in it, the subselect has a
simpler plan but they both take practically the same time.
The two plans (note I've been rewriting th
I have a product names table like this:
datab=# select product_id, name from table.product_synonyms where name
ilike '%%olympus e-pl1%%';
 product_id | name
------------+---
On 29/04/2010 18:45, Justin Graf wrote:
> On 4/29/2010 12:07 PM, David Wall wrote:
>>
>>
>> Big downside for the DB is that all large objects appear to be stored
>> together in pg_catalog.pg_largeobject, which seems axiomatically
>> troubling that you know you have lots of big data, so you the
On 4/29/10 12:42 PM, Greg Smith wrote:
Alban Hertroys wrote:
The reason I'm asking is that Postgres doesn't perform at its best on
Windows and I seriously wonder whether the OS would be able to handle
a load like that at all (can Windows handle 4000 open sockets for
example?).
You have to g
On 4/29/2010 12:07 PM, David Wall wrote:
>
>
> Big downside for the DB is that all large objects appear to be stored
> together in pg_catalog.pg_largeobject, which seems axiomatically
> troubling that you know you have lots of big data, so you then store
> them together, and then worry about run
Alban Hertroys wrote:
The reason I'm asking is that Postgres doesn't perform at its best on Windows and I seriously wonder whether the OS would be able to handle a load like that at all (can Windows handle 4000 open sockets for example?).
You have to go out of your way to even get >125 connecti
Things to consider when /not /storing them in the DB:
1) Backups of DB are incomplete without a corresponding backup of the files.
2) No transactional integrity between filesystem and DB, so you will
have to deal with orphans from both INSERT and DELETE (assuming you
don't also update the file
On 29/04/2010 10:40, Piotr Kublicki wrote:
> Guillaume Lelarge wrote on 28/04/2010 15:04:07:
>
>>> In such case the new created start-up script postgresql2 should not be
>>> modified in the following line:
>>>
>>> # Override defaults from /etc/sysconfig/pgsql if file is present
>>> [ -f /etc/s
In response to Geoffrey Myers :
> I'm trying the following:
>
> ship_date between '04/30/2010' AND '04/30/2010' + 14
>
> But this returns:
>
> ERROR: invalid input syntax for integer: "04/30/2010"
>
> Can I use between with dates?
Sure, why not, but you have to CAST your STRING into a DATE, o
On 29 April 2010 14:55, Geoffrey Myers wrote:
> I'm trying the following:
>
> ship_date between '04/30/2010' AND '04/30/2010' + 14
>
> But this returns:
>
> ERROR: invalid input syntax for integer: "04/30/2010"
>
> Can I use between with dates?
>
>
You need to cast that last date, so:
ship_date
2010/4/29 Geoffrey Myers
> I'm trying the following:
>
> ship_date between '04/30/2010' AND '04/30/2010' + 14
>
> But this returns:
>
> ERROR: invalid input syntax for integer: "04/30/2010"
>
> Can I use between with dates?
>
>
>
This should be fine:
ship_date between '04/30/2010'::date AND '04/
I'm trying the following:
ship_date between '04/30/2010' AND '04/30/2010' + 14
But this returns:
ERROR: invalid input syntax for integer: "04/30/2010"
Can I use between with dates?
--
Geoffrey Myers
Myers Consulting Inc.
770.592.1651
Tom Lane wrote:
Geoffrey writes:
ship_date between '04/30/2010' AND '04/30/2010' + 14
ERROR: invalid input syntax for integer: "04/30/2010"
Can I use between with dates?
The problem with that is the parser has no reason to treat the strings
as dates, at least not till it comes to consider
On Thu, 29 Apr 2010 19:27:33 +0530 wrote
>On Thu, Apr 29, 2010 at 01:13:40PM -, sandeep prakash dhumale wrote:
>> Hello All,
>>
>> I am trying to get tsearch working for my application but I am facing a
>> problem when the alphabet 'Y' is in the tsquery.
>>
>> can anyone please share some
Geoffrey writes:
> ship_date between '04/30/2010' AND '04/30/2010' + 14
> ERROR: invalid input syntax for integer: "04/30/2010"
> Can I use between with dates?
The problem with that is the parser has no reason to treat the strings
as dates, at least not till it comes to consider the BETWEEN
com
On Thursday 29 April 2010 6:58:26 am Geoffrey wrote:
> I'm trying the following:
>
> ship_date between '04/30/2010' AND '04/30/2010' + 14
>
> But this returns:
>
> ERROR: invalid input syntax for integer: "04/30/2010"
>
> Can I use between with dates?
>
> --
> Until later, Geoffrey
>
> "I predict
Geoffrey wrote:
I'm trying the following:
ship_date between '04/30/2010' AND '04/30/2010' + 14
But this returns:
ERROR: invalid input syntax for integer: "04/30/2010"
Can I use between with dates?
Got it:
ship_date between '04/30/2010' and timestamp '04/30/2010' + interval '14
day'
-
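Casting to date also works, since date + integer yields a date (a sketch):

ship_date between date '04/30/2010' and date '04/30/2010' + 14

The typed literal gives the parser the type information it was missing in the original query.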
I'm trying the following:
ship_date between '04/30/2010' AND '04/30/2010' + 14
But this returns:
ERROR: invalid input syntax for integer: "04/30/2010"
Can I use between with dates?
--
Until later, Geoffrey
"I predict future happiness for America if they can prevent
the government from wasti
"sandeep prakash dhumale" writes:
> I am trying to get tsearch working for my application but I am facing a
> problem when the alphabet 'Y' is in the tsquery.
> # SELECT 'hollywood'::tsvector @@ to_tsquery('holly:*');
> ?column?
> --
> f
> (1 row)
You can't use to_tsquery for this sor
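The usual workaround is to cast the string to tsquery directly, bypassing the dictionary (a sketch):

SELECT 'hollywood'::tsvector @@ 'holly:*'::tsquery;
 ?column?
----------
 t
(1 row)

The english stemmer rewrites 'holly' to 'holli', so to_tsquery produces 'holli':*, which no longer prefix-matches 'hollywood'; the plain cast leaves the lexeme alone.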
Jorge Arevalo writes:
> Yes. For example, the function expects 2 arguments, and it's called
> with 2 arguments: 1 composite type (following this format
> https://svn.osgeo.org/postgis/spike/wktraster/doc/RFC1-SerializedFormat)
> and one integer. But PG_NARGS() returns a really big value (16297)
>
On Thu, Apr 29, 2010 at 01:13:40PM -, sandeep prakash dhumale wrote:
> Hello All,
>
> I am trying to get tsearch working for my application but I am facing a
> problem when the alphabet 'Y' is in the tsquery.
>
> can anyone please share some light on it.
>
>
> # SELECT 'hollywood'::tsvector
On 29/04/2010 9:23 PM, ashish.a...@sungard.com wrote:
Hi Craig,
Sorry for creating confusion. Let me (I work with Ambarish, the original
author of the mail) try to be more specific now.
We have a library (written in C) which helps us in doing phonetic based
name search. We want to use this libr
Hi Craig,
Sorry for creating confusion. Let me (I work with Ambarish, the original
author of the mail) try to be more specific now.
We have a library (written in C) which helps us in doing phonetic based
name search. We want to use this library inside a postgres DB function.
To achieve this we wr
Hello All,
I am trying to get tsearch working for my application, but I am facing a
problem when the alphabet 'Y' is in the tsquery.
Can anyone please shed some light on it?
# SELECT 'hollywood'::tsvector @@ to_tsquery('holly:*');
?column?
--
f
(1 row)
SELECT 'hollywood'::tsvector
On Wed, Apr 28, 2010 at 10:43 PM, Tom Lane wrote:
> Jorge Arevalo writes:
>> My doubt is if I'm doing things right getting all the stuff I need (an
>> array) in the first call, pointing user_fctx to this array and
>> accessing myStructsArray[call_cntr] in each successive call, until
>> myStructsA
On 29 Apr 2010, at 3:08, Jaime Rodriguez wrote:
> hi,
> Today is my first day looking at PostgreSQL
> I am looking to migrate a MS SQL DB to PostgreSQL :) :)
> My customer requires that DBMS shall support 4000 simultaneous requests
> Also the system to be deploy maybe a cluster, with 12 microproce
On 29/04/2010 8:48 PM, a.bhattacha...@sungard.com wrote:
Your understanding is slightly incorrect. Actually we are required to use a
special library from postgres.
That mystery library being? From "postgres"? Do you mean a library
supplied by the PostgreSQL project itself? Libpq? If not, what?
On Thu, Apr 29, 2010 at 8:52 AM, Thom Brown wrote:
> On 29 April 2010 13:35, Bård Grønbech wrote:
>>
>> Have a string like '0.1;0.2;null;0.3' which I would like to convert
>> into a double precision[] array.
>>
>> Trying:
>>
>> select cast (string_to_array('0.1;0.2;null;0.3', ';') as float8[])
>>
On Thu, Apr 29, 2010 at 8:46 AM, Merlin Moncure wrote:
> On Wed, Apr 28, 2010 at 8:48 PM, Belka Lambda wrote:
>> Hi!
>>
>> I tried to write a recursive SELECT that would do the concatenation, but a
>> problem appeared:
>> can't make a {1,2,3} from {{1,2,3}}.
>> Here are some experiments:
>> ---
On 29 April 2010 13:35, Bård Grønbech wrote:
> Have a string like '0.1;0.2;null;0.3' which I would like to convert
> into a double precision[] array.
>
> Trying:
>
> select cast (string_to_array('0.1;0.2;null;0.3', ';') as float8[])
>
> gives me an error: invalid input syntax for type double prec
-----Original Message-----
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Thursday, April 29, 2010 6:02 PM
To: Bhattacharya, A
Cc: pgsql-general@postgresql.org
Subject: Re: FW: [GENERAL] Java Memory Issue while Loading Postgres
library
On 29/04/2010 7:34 PM, a.bhattacha...@sungard.
On Wed, Apr 28, 2010 at 8:48 PM, Belka Lambda wrote:
> Hi!
>
> I tried to write a recursive SELECT that would do the concatenation, but a
> problem appeared:
> can't make a {1,2,3} from {{1,2,3}}.
> Here are some experiments:
>
Have a string like '0.1;0.2;null;0.3' which I would like to convert
into a double precision[] array.
Trying:
select cast (string_to_array('0.1;0.2;null;0.3', ';') as float8[])
gives me an error: invalid input syntax for type double precision: "null".
Can anybody help me?
-Bård
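A workaround sketch for releases where string_to_array has no way to map a token to NULL (unnest assumes 8.4; the 'null' token and float8 target are taken from the question):

SELECT ARRAY(
  SELECT NULLIF(elem, 'null')::float8
  FROM unnest(string_to_array('0.1;0.2;null;0.3', ';')) AS elem
);
-- {0.1,0.2,NULL,0.3}

NULLIF turns the literal string 'null' into a real NULL before the cast, so the cast no longer chokes on it.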
On 29/04/2010 7:34 PM, a.bhattacha...@sungard.com wrote:
We have a java exe making a call to a postgres function. This postgres
function internally makes a call to a dll (which is written using
Postgres extended C).
If I understand you correctly, you have a dll that uses ecpg to
communicate w
On Thu, Apr 29, 2010 at 13:43, Oliver Kohll - Mailing Lists
wrote:
>>
>> Aren't you looking for something along the line of:
>>
>> SELECT year, sum(c) over (order by year)
>> FROM (
>> SELECT extract(year from signup_date) AS year, count(email_address) AS c
>> FROM email_list
>> GROUP BY extrac
>>
>> Aren't you looking for something along the line of:
>>
>> SELECT year, sum(c) over (order by year)
>> FROM (
>> SELECT extract(year from signup_date) AS year, count(email_address) AS c
>> FROM email_list
>> GROUP BY extract(year from signup_date)
>> )
>>
>> (adjust for typos, I didn't t
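Filled in with the alias the subquery needs (a sketch of the same idea, using column names from the earlier posts):

SELECT year, sum(c) OVER (ORDER BY year)
FROM (
  SELECT extract(year FROM signup_date) AS year,
         count(email_address) AS c
  FROM email_list
  GROUP BY extract(year FROM signup_date)
) AS t
ORDER BY year;

The inner query collapses to one row per year; the outer window function then accumulates those counts in year order.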
Dear All Experts,
We have a java exe making a call to a postgres function. This postgres
function internally makes a call to a dll (which is written using
Postgres extended C).
Now the issue is that, when we make a call to this dll, it consumes a
lot of memory and this memory is getting cons
Hi All,
I am using Postgres 8.4 and its pg_restore -j option. I have a dump of the database
taken with -Fc and selected the pg_restore -j option for faster restoration.
When the restoration process is in progress, I want to monitor the threads
invoked by pg_restore (suppose I give -j 4). I have verified in
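One way to watch the workers from inside the database (a sketch; each -j worker opens its own connection, so they show up as separate rows):

SELECT procpid, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'COPY%'
   OR current_query LIKE 'CREATE INDEX%';

Column names assume 8.4 (procpid/current_query); from the OS side, plain ps would show the corresponding backend processes.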
On 29 April 2010 11:39, Oliver Kohll - Mailing Lists <
oliver.li...@gtwm.co.uk> wrote:
>
> On 29 Apr 2010, at 10:01, Magnus Hagander wrote:
>
>
> select extract(year from signup_date),
>
> count(email_address),
>
> sum(count(email_address)) over (partition by 1 order by 1 asc rows
> unbounded pr
On 29 Apr 2010, at 10:01, Magnus Hagander wrote:
>>
>> select extract(year from signup_date),
>> count(email_address),
>> sum(count(email_address)) over (partition by 1 order by 1 asc rows
>> unbounded preceding)
>> from email_list group by 1 order by 1;
>>
>> Does anyone have any other ideas
2010/4/26 Vincenzo Romano :
> 2010/4/26 Bruce Momjian :
>> Vincenzo Romano wrote:
>>> Hi all.
>>>
>>> I'm wondering how efficient the inheritance can be.
>>> I'm using the constraint exclusion feature and for each child table
>>> (maybe but one) I have a proper CHECK constraint.
>>> How efficient c
2010/4/22 Brian Peschel :
>
> On 04/22/2010 10:12 AM, Ben Chobot wrote:
>>
>> On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote:
>>
>>
>>>
>>> I have a replication problem I am hoping someone has come across before
>>> and can provide a few ideas.
>>>
>>> I am looking at a configuration of on 'writa
2010/4/28 Adrian Klaver :
> On Tuesday 27 April 2010 5:45:43 pm Anthony wrote:
>> On Tue, Apr 27, 2010 at 5:17 AM, Cédric Villemain <
>>
>> cedric.villemain.deb...@gmail.com> wrote:
>> > store your files in a filesystem, and keep the path to the file (plus
>> > metadata, acl, etc...) in database.
>
On Thu, Apr 29, 2010 at 10:52, Oliver Kohll - Mailing Lists
wrote:
> Hello,
>
> Many thanks to andreas.kretschmer for this helpful reply about how to set up
> a window function to perform a running total:
> http://archives.postgresql.org/pgsql-general/2010-03/msg01122.php
>
> It works perfectly w
Hello,
Many thanks to andreas.kretschmer for this helpful reply about how to set up a
window function to perform a running total:
http://archives.postgresql.org/pgsql-general/2010-03/msg01122.php
It works perfectly with the simple test data but I've just got back to work,
tried implementing it
Guillaume Lelarge wrote on 28/04/2010 15:04:07:
> > In such case the new created start-up script postgresql2 should not be
> > modified in the following line:
> >
> > # Override defaults from /etc/sysconfig/pgsql if file is present
> > [ -f /etc/sysconfig/pgsql/${NAME} ] && . /etc/sysconfig/pgsql
On Thu, Apr 29, 2010 at 05:02, Andreas wrote:
> Hi,
>
> I've got an 8.4.3 Unicode DB that accidentally holds a few records with
> characters that can't be converted to Latin1 or 9 for output to CSV.
>
> I'd just need a way to check if a column contains values that CAN NOT be
> converted from Utf8
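One possible approach is a small PL/pgSQL helper (a sketch; convert_to raises untranslatable_character for values Latin1 cannot represent):

CREATE FUNCTION is_latin1(t text) RETURNS boolean AS $$
BEGIN
  -- Attempt the conversion; failure means the value has no Latin1 form.
  PERFORM convert_to(t, 'LATIN1');
  RETURN true;
EXCEPTION WHEN untranslatable_character THEN
  RETURN false;
END;
$$ LANGUAGE plpgsql;

-- Hypothetical usage, with placeholder table/column names:
SELECT id, the_column FROM the_table WHERE NOT is_latin1(the_column);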
I had a similar problem: older versions of Postgres have IP addressing in
one column and subnetting/mask in the next one. 8.4 uses CIDR expression in
one column - applying CIDR notation solved my problem. I think it's
advisable to manually correct the pg_hba.conf file instead of replacing it
with t