Kirk Wythers <[EMAIL PROTECTED]> writes:
> The new table needs to be filled with the results of the join. If
> there is a way to do this without a SELECT, please share.
If it's an entirely new table, then you probably want to use INSERT
... SELECT. If what you want is to update existing rows, use
UPDATE ... FROM.
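A minimal sketch of both forms, with made-up table and column names:

-- Populate a brand-new table with the result of a join:
INSERT INTO climate_summary (site_id, obs_date, avg_temp)
SELECT s.site_id, o.obs_date, avg(o.temp)
FROM sites s
JOIN observations o USING (site_id)
GROUP BY s.site_id, o.obs_date;

-- Update rows that already exist, joining against another table
-- (UPDATE ... FROM is a PostgreSQL extension):
UPDATE climate_summary c
SET avg_temp = d.avg_temp
FROM daily_averages d
WHERE c.site_id = d.site_id AND c.obs_date = d.obs_date;

If the table doesn't exist yet at all, CREATE TABLE climate_summary AS SELECT ... builds and fills it in one step.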
On Feb 2, 2007, at 8:32 PM, Tom Lane wrote:

> Kirk Wythers <[EMAIL PROTECTED]> writes:
>> On Feb 2, 2007, at 7:39 PM, Luke Lonergan wrote:
>>> Now he's got to worry about how to page through 8GB of results in
>>> something less than geological time with the space bar ;-)

I actually have no intention of paging through the results, but
rather ...
Joshua D. Drake wrote:
> Luke Lonergan wrote:
>>> \o /tmp/really_big_cursor_return
>>>
>>> ;)
>>
>> Tough crowd :-D
>
> Yeah well Andrew probably would have said use sed and pipe it through
> awk to get the data you want.

Chances are, if you're using awk, you shouldn't need sed. :)

--
Until later, Geoffrey
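Joking aside, \o is the practical way to deal with output this size: it redirects psql's query output to a file instead of the pager. A quick sketch (the file name and query are only examples):

-- In psql: send query output to a file, run the query,
-- then switch output back to the terminal.
\o /tmp/really_big_cursor_return
SELECT * FROM observations;
\o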
Luke Lonergan wrote:
>> \o /tmp/really_big_cursor_return
>>
>> ;)
>
> Tough crowd :-D

Yeah well Andrew probably would have said use sed and pipe it through
awk to get the data you want.

Joshua D. Drake
On Feb 2, 2007, at 7:53 PM, Luke Lonergan wrote:

> Tough crowd :-D

No kidding ;-)

> \o /tmp/really_big_cursor_return
>
> ;)

Tough crowd :-D

- Luke
Tom,

On 2/2/07 2:18 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> as of 8.2 there's a psql variable
> FETCH_COUNT that can be set to make it happen behind the scenes.)

FETCH_COUNT is a godsend and works beautifully for exactly this purpose.

Now he's got to worry about how to page through 8GB of results in
something less than geological time with the space bar ;-)

- Luke
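For reference, FETCH_COUNT is set from within psql, and with it set the client retrieves and prints the result in batches through a cursor rather than buffering every row in memory first. A sketch, with an arbitrary batch size and a made-up query:

-- psql 8.2+: fetch and display 10000 rows at a time instead of
-- slurping the entire result set into client memory.
\set FETCH_COUNT 10000
SELECT * FROM climate_join_view ORDER BY obs_date;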
Kirk Wythers <[EMAIL PROTECTED]> writes:
> However, setting ulimit to unlimited does not seem to solve the
> issue.

After some experimentation I'm left wondering exactly what ulimit's -d
option is for on OS X, because it sure doesn't seem to be limiting
process data size. (I should have been suspicious ...
Is there any way of monitoring and/or controlling disk buffer cache
allocation and usage on Mac OS X? I'm thinking of two things: 1) being
able to give PG a more accurate estimate of the size of the cache, and
2) being able to more quickly flush the cache for testing the
performance of cold queries.
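On the second point, one commonly cited approach (an assumption here, not something confirmed in this thread) is the purge utility from Apple's CHUD/developer tools, which flushes the filesystem buffer cache:

# Flush the OS X disk buffer cache, then time the query cold.
# (purge ships with Apple's developer tools; availability varies by release.)
sudo purge
time psql -d climate -c "SELECT count(*) FROM observations"   # example query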
Thanks Jeff, this was exactly the kind of answer I was looking for.
On Fri, 2 Feb 2007, Jeff Frost wrote:

> On Thu, 1 Feb 2007, Ben wrote:
>> I'm looking to replace some old crusty hardware with some sparkling new
>> hardware. In the process, I'm looking to move away from the previous
>> mentality of having ...
On Thu, 1 Feb 2007, Ben wrote:

> I'm looking to replace some old crusty hardware with some sparkling new
> hardware. In the process, I'm looking to move away from the previous
> mentality of having the Big Server for Everything to having a cluster of
> servers, each of which handles some discrete subset ...
On Feb 2, 2007, at 10:11 AM, Tom Lane wrote:

> "Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
>> On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:
>>> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And
>>> how to raise ulimit -d?
>> Try multiplying it by 100 for a start:
>> ulimit -d 614400

At this point there are no silly questions. But I am running the
query under the shell session that I adjusted. I did discover that
ulimit -d only changes the shell session that you issue the command
in. So I changed ulimit -d to unlimited, connected to the db with
psql db_name, then ran the ...
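The sequence being described looks like this (db_name stands in for the real database); the point is that ulimit affects only the current shell and the processes it spawns, so psql has to be started from that same session:

# Raise the per-process data-segment limit for this shell session,
# then start the client from the same shell so it inherits the limit.
ulimit -d unlimited
ulimit -a            # verify the new limits
psql db_name         # run the big query from here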
Kirk Wythers <[EMAIL PROTECTED]> writes:
> However, setting ulimit to unlimited does not seem to solve the
> issue. Output from ulimit -a is:

Possibly a silly question, but you are running the client code under the
shell session that you adjusted ulimit for, yes?

regards, tom lane
Tom,

I tried ulimit -d 614400, but the query ended with the same error. I
thought then that the message:

psql(21522) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(21522) malloc: *** error: can't allocate region
psql(21522) malloc: *** set a breakpoint in szone_error to debug

...
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:
>> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And
>> how to raise ulimit -d?
> Try multiplying it by 100 for a start:
> ulimit -d 614400
Or just "ulimit -d
On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:
> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And
> how to raise ulimit -d?
Try multiplying it by 100 for a start:
ulimit -d 614400
/* Steinar */
--
Homepage: http://www.sesse.net/
On Feb 2, 2007, at 9:46 AM, Tom Lane wrote:

> Gábriel Ákos <[EMAIL PROTECTED]> writes:
>> Richard Huxton wrote:
>>> Kirk Wythers wrote:
>>>> I am trying to do fairly simple joins on climate databases that should
>>>> return ~ 7 million rows of data.
>> If you look at the message carefully ...
Thanks for the reply Steinar,

On Feb 2, 2007, at 8:41 AM, Steinar H. Gunderson wrote:

> On Fri, Feb 02, 2007 at 07:52:48AM -0600, Kirk Wythers wrote:
>> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)
>> psql(15811) malloc: *** error: can't allocate region
>> psql(15811) malloc: *** set a breakpoint in szone_error to debug
Gábriel Ákos <[EMAIL PROTECTED]> writes:
> Richard Huxton wrote:
>> Kirk Wythers wrote:
>>> I am trying to do fairly simple joins on climate databases that should
>>> return ~ 7 million rows of data.

> If you look at the message carefully, it looks like (for me) that the
> client ...
On Fri, Feb 02, 2007 at 07:52:48AM -0600, Kirk Wythers wrote:
> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)
> psql(15811) malloc: *** error: can't allocate region
> psql(15811) malloc: *** set a breakpoint in szone_error to debug

It sounds like you are out of memory. Have you ...
I am trying to do fairly simple joins on climate databases that
should return ~ 7 million rows of data. However, I'm getting an error
message on an OS X (10.4 Tiger Server) machine that seems to imply
that I am running out of memory. The errors are:

psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(15811) malloc: *** error: can't allocate region
psql(15811) malloc: *** set a breakpoint in szone_error to debug
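The client-side fix the rest of the thread converges on is to stream the result instead of buffering it: either FETCH_COUNT in psql 8.2+, or an explicit cursor, which works on any version. A sketch of the cursor form (cursor and table names are made up):

-- Stream a huge join through a cursor so the client never holds
-- more than one batch of rows in memory.
BEGIN;
DECLARE big_join CURSOR FOR
    SELECT s.site_id, o.obs_date, o.temp
    FROM sites s JOIN observations o USING (site_id);
FETCH 10000 FROM big_join;   -- repeat until it returns no rows
CLOSE big_join;
COMMIT;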
Michael Artz wrote:

> Here are some numbers for 3 different queries using a very selective
> query (port = ). I'm thinking that, since the row estimates are
> different from the actuals (2 vs 2000), that this particular port
> didn't make it into the statistics ... is that true? Does this
> matter?
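It can matter: if a value is too rare to land in the column's most-common-values list, the planner falls back to a generic estimate. One standard remedy (table and column names here are hypothetical) is to raise the column's statistics target and re-analyze:

-- Sample the column more densely so rarer port values show up in
-- the most-common-values statistics, then rebuild the stats.
ALTER TABLE connections ALTER COLUMN port SET STATISTICS 1000;
ANALYZE connections;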