Hi Seb:
On Thu, May 1, 2014 at 8:50 PM, Seb wrote:
> Thanks, I'm glad to hear you've used this approach successfully.
Well, this is always successful if you are able to develop a
moderately complex script.
> It seems as though the best solution is to do a single SELECT to get the
> data out of
On Thu, 1 May 2014 23:41:04 +0200,
Szymon Guz wrote:
[...]
> In this form it is quite unreadable. Could you paste the plan to
> http://explain.depesz.com/ and provide here a URL of the page?
Nice.
http://explain.depesz.com/s/iMJi
--
Seb
On 1 May 2014 22:50, Seb wrote:
> On Thu, 1 May 2014 22:31:46 +0200,
> Szymon Guz wrote:
>
> [...]
>
> > Can you show us the query plan for the queries you are using, the view
> > definition, and how you query that view?
>
> Thanks for your help with this. Here's the view definition (eliding
> similar column references):
On Thu, 1 May 2014 22:31:46 +0200,
Szymon Guz wrote:
[...]
> Can you show us the query plan for the queries you are using, the view
> definition, and how you query that view?
Thanks for your help with this. Here's the view definition (eliding
similar column references):
---<---
On 1 May 2014 22:24, Seb wrote:
> On Thu, 1 May 2014 22:17:24 +0200,
> Szymon Guz wrote:
>
> > On 1 May 2014 21:01, Seb wrote:
> > On Thu, 1 May 2014 20:22:26 +0200,
> > Szymon Guz wrote:
>
> >> Hi, several Gb is about 1GB, that's not too much. In case you meant
> >> 'several GB', that shouldn't be a problem either.
On Thu, 1 May 2014 22:17:24 +0200,
Szymon Guz wrote:
> On 1 May 2014 21:01, Seb wrote:
> On Thu, 1 May 2014 20:22:26 +0200,
> Szymon Guz wrote:
>> Hi, several Gb is about 1GB, that's not too much. In case you meant
>> 'several GB', that shouldn't be a problem either.
> Sorry, I meant several GB. Although that may not be a problem for
> PostgreSQL, it is for post-processing the output file.
On 1 May 2014 21:01, Seb wrote:
> On Thu, 1 May 2014 20:22:26 +0200,
> Szymon Guz wrote:
>
> > Hi, several Gb is about 1GB, that's not too much. In case you meant
> > 'several GB', that shouldn't be a problem either.
>
> Sorry, I meant several GB. Although that may not be a problem for
> PostgreSQL, it is for post-processing the output file.
On Thu, 01 May 2014 21:12:46 +0200,
Torsten Förtsch wrote:
[...]
> # copy (select * from generate_series(1,1000)) to program 'split -l 100 - /tmp/xxx';
> COPY 1000
> # \q
> $ ls -l /tmp/xxxa*
> -rw------- 1 postgres postgres 292 May  1 19:08 /tmp/xxxaa
> -rw------- 1 postgres postgres 400 May  1 19:0
On 01/05/14 19:50, Seb wrote:
> Hello,
>
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions. Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other languages.
On Thu, 1 May 2014 20:22:26 +0200,
Szymon Guz wrote:
> Hi, several Gb is about 1GB, that's not too much. In case you meant
> 'several GB', that shouldn't be a problem either.
Sorry, I meant several GB. Although that may not be a problem for
PostgreSQL, it is for post-processing the output file.
On Thu, 1 May 2014 20:20:23 +0200,
Francisco Olarte wrote:
[...]
> As you mention looping and a shell, I suppose you are in something
> unix like, with pipes et al. You can pipe COPY ( either with the pipe
> options for copy, or piping a psql command, or whichever thing you
> like ) through a script
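For instance, a minimal client-side sketch of that piping idea, assuming a
database mydb and a table big_table with a timestamp column ts (all names
invented for illustration):

    $ psql -d mydb -c "COPY (SELECT * FROM big_table ORDER BY ts) TO STDOUT WITH CSV" \
        | split -l 1000000 - /tmp/big_table_part_

Unlike COPY ... TO PROGRAM, this runs split on the client and needs no
superuser rights, since COPY ... TO STDOUT is available to any user who can
read the table.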
Hi,
several Gb is about 1GB, that's not too much. In case you meant 'several
GB', that shouldn't be a problem either.
The first thing I'd do would be creating an index on the column used for
dividing the data. Then I'd just use the command COPY with a proper select
to save the data to a file.
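For example, a minimal sketch of that index-plus-COPY approach, assuming a
hypothetical table big_table whose timestamp column ts is used for dividing
the data (names invented for illustration):

    CREATE INDEX big_table_ts_idx ON big_table (ts);

    COPY (
        SELECT *
        FROM big_table
        WHERE ts >= '2014-01-01' AND ts < '2014-02-01'
    ) TO '/tmp/big_table_2014_01.csv' WITH CSV HEADER;

Note that COPY ... TO 'file' writes the file on the server and requires
superuser; from psql, \copy writes to a client-side file instead.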
Hi:
On Thu, May 1, 2014 at 7:50 PM, Seb wrote:
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions. Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other languages.
Hi,
Would the kind of approach below work for you? I haven't tested this, but
the idea is roughly as follows.
Create a plpgsql function that takes three parameters: "From Date", "To
Date", and "Interval", then loop over the range along these lines (a fuller
sketch follows below):
prev_interval := '0'::interval;
LOOP
IF ( "From Date" + "Interval" <= "To D
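To make that concrete, here is an untested sketch along those lines; the
function name, table big_table, column ts, and output path are all invented
for illustration:

    -- Untested sketch: export big_table in slices of width p_interval.
    -- COPY cannot take plpgsql variables directly, so the statement is
    -- built dynamically with format() and run with EXECUTE.
    CREATE OR REPLACE FUNCTION dump_by_interval(p_from     timestamp,
                                                p_to       timestamp,
                                                p_interval interval)
    RETURNS void AS $$
    DECLARE
        slice_start timestamp := p_from;
        slice_end   timestamp;
        n           int := 0;
    BEGIN
        WHILE slice_start < p_to LOOP
            slice_end := LEAST(slice_start + p_interval, p_to);
            EXECUTE format(
                'COPY (SELECT * FROM big_table
                       WHERE ts >= %L AND ts < %L)
                 TO %L WITH CSV',
                slice_start, slice_end,
                '/tmp/big_table_' || to_char(n, 'FM0000') || '.csv');
            slice_start := slice_end;
            n := n + 1;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

It could then be called as, e.g.:

    SELECT dump_by_interval('2014-01-01', '2014-06-01', '1 month');

(As above, COPY to a server-side file requires superuser.)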
Hello,
I've been looking for a way to write a table into multiple files, and am
wondering if there are some clever suggestions. Say we have a table
that is too large (several Gb) to write to a file that can be used for
further analyses in other languages. The table consists of a timestamp
field