Hi:
On Wed, Apr 30, 2014 at 7:40 PM, Elanchezhiyan Elango wrote:
> Francisco,
> Thanks for the partitioning idea. I used to have the tables partitioned. But
> now that I have moved to a schema where data is split across ~90 tables, I
> have moved away from partitioning. But it's something I
Joe, that is exactly what I want.
Could you please give a more detailed example of this crosstab? I have
warehouse and product tables like this:
CREATE TABLE tblwarehouse (
    id integer NOT NULL,
    warehousename character varying(20)
);
COPY tblwarehouse (id, warehousename) FROM stdin;
2
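For reference, a crosstab over tables like these might look roughly like the
untested sketch below, using the tablefunc extension. The tblstock table, its
columns, and the output column list are hypothetical placeholders, since the
product/stock definitions aren't shown here:

CREATE EXTENSION IF NOT EXISTS tablefunc;

-- hypothetical link table: one row per (product, warehouse) pair
-- CREATE TABLE tblstock (productid int, warehouseid int, qty int);

SELECT *
FROM crosstab(
  $$ SELECT productid, warehouseid, sum(qty)
       FROM tblstock
      GROUP BY 1, 2
      ORDER BY 1, 2 $$,
  $$ SELECT id FROM tblwarehouse ORDER BY id $$
) AS ct (productid int, warehouse_a bigint, warehouse_b bigint);
-- the AS clause needs one output column per row of tblwarehouse, in the same order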
I have a basic setup with async replication between two distant,
geographically separated servers over VPN.
Replication happens every 0.5 seconds or so, and is incredibly reliable.
Currently, I've been running a backup on the master server every twelve hours.
I'm wondering if it would be possible to execute these
On 05/01/2014 10:31 AM, Edson Richter wrote:
> I'm wondering if it would be possible to execute these backups on the slave
> server instead, so I can avoid the overhead of backups on the master system?
If you're on PostgreSQL 9.3, you can back up the slave server safely. If
not, you'll need to run this c
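On 9.3, for example, something along these lines should work against the
standby (a sketch only, not tested here; the host name, user, and paths are
placeholders, and the standby must accept replication connections, i.e.
max_wal_senders > 0 and a replication entry in its pg_hba.conf):

$ pg_basebackup -h standby.example.com -U replicator \
    -D /backups/base_$(date +%Y%m%d) -F tar -z -X stream -P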
Hello,
I've been looking for a way to write a table into multiple files, and am
wondering if there are some clever suggestions. Say we have a table
that is too large (several Gb) to write to a file that can be used for
further analyses in other languages. The table consists of a timestamp
field
Hi,
Does the approach below work for you? I haven't tested this, but I'd like to
sketch the idea.
Create a plpgsql function which takes three parameters: "From Date", "To
Date" and "Interval".
prev_interval := '0'::interval;
LOOP
IF ( "From Date" + "Interval" <= "To D
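A fuller (still untested) sketch of that idea, with placeholder table and
column names ("mytable", "ts") and a placeholder output path:

CREATE OR REPLACE FUNCTION dump_by_interval(from_date timestamptz,
                                            to_date   timestamptz,
                                            step      interval)
RETURNS void AS $$
DECLARE
    chunk_start timestamptz := from_date;
BEGIN
    WHILE chunk_start < to_date LOOP
        -- server-side COPY: the file is written by the server process
        EXECUTE format(
            'COPY (SELECT * FROM mytable WHERE ts >= %L AND ts < %L) TO %L CSV HEADER',
            chunk_start,
            chunk_start + step,
            '/tmp/chunk_' || to_char(chunk_start, 'YYYYMMDD_HH24MI') || '.csv');
        chunk_start := chunk_start + step;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT dump_by_interval('2014-01-01', '2014-05-01', '1 month');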
Hi:
On Thu, May 1, 2014 at 7:50 PM, Seb wrote:
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions. Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other lan
Hi,
several Gb is about 1GB, that's not too much. In case you meant 'several
GB', that shouldn't be a problem as well.
The first thing I'd do is create an index on the column used for
dividing the data. Then I'd just use COPY with a proper SELECT
to save the data to a file.
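For instance (an untested sketch; "mytable", "ts" and the file path are
placeholders):

CREATE INDEX ON mytable (ts);

COPY (SELECT * FROM mytable
       WHERE ts >= '2014-01-01' AND ts < '2014-02-01')
TO '/tmp/mytable_2014_01.csv' CSV HEADER;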
If
All,
apologies if this has been addressed somewhere already. I don't have a
lot of experience in PostgreSQL; this is my first setup where I'm trying
to scale and provide some of the more advanced features (like WAL
shipping, master-slave sync, integrating pgbouncer, etc.), and I'm
looking for help
On Thu, 1 May 2014 20:20:23 +0200,
Francisco Olarte wrote:
[...]
> As you mention looping and a shell, I suppose you are on something
> Unix-like, with pipes et al. You can pipe COPY (either with the pipe
> options for copy, or piping a psql command, or whichever thing you
> like) through a sc
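Something along those lines, for example (untested; the database, table, and
file names are placeholders):

$ psql -d mydb -c "COPY mytable TO STDOUT CSV" | split -l 1000000 - /tmp/mytable_part_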
On Thu, 1 May 2014 20:22:26 +0200,
Szymon Guz wrote:
> Hi, several Gb is about 1GB, that's not too much. In case you meant
> 'several GB', that shouldn't be a problem as well.
Sorry, I meant several GB. Although that may not be a problem for
PostgreSQL, it is for post-processing the output file
On 01/05/14 19:50, Seb wrote:
> Hello,
>
> I've been looking for a way to write a table into multiple files, and am
> wondering if there are some clever suggestions. Say we have a table
> that is too large (several Gb) to write to a file that can be used for
> further analyses in other languages.
On Thu, 01 May 2014 21:12:46 +0200,
Torsten Förtsch wrote:
[...]
> # copy (select * from generate_series(1,1000)) to program 'split -l 100 - /tmp/xxx';
> COPY 1000
> # \q
> $ ls -l /tmp/xxxa*
> -rw--- 1 postgres postgres 292 May 1 19:08 /tmp/xxxaa
> -rw--- 1 postgres postgres 400 May 1 19:0
On 01/05/2014 19:40, Stephan Fabel wrote:
> All,
>
> apologies if this has been addressed somewhere already. I don't have a
> lot of experience in PostgreSQL; this is my first setup where I'm trying
> to scale and provide some of the more advanced features (like WAL
> shipping, master-slave sync,
On Thu, May 1, 2014 at 8:54 AM, Shaun Thomas wrote:
> On 05/01/2014 10:31 AM, Edson Richter wrote:
>
>> I'm wondering if it would be possible to execute these backups on the slave
>> server instead, so I can avoid the overhead of backups on the master system?
>>
>
> If you're on PostgreSQL 9.3, you can b
On 05/01/2014 09:35 AM, Raymond O'Donnell wrote:
> You haven't made it clear that you are actually replicating to a
> different PostgreSQL server (whether on the same machine or on another
> one) - is that the case? Ray.
Indeed that is the case. Two servers, one master, one slave. Both
identical
On 1 May 2014 21:01, Seb wrote:
> On Thu, 1 May 2014 20:22:26 +0200,
> Szymon Guz wrote:
>
> > Hi, several Gb is about 1GB, that's not too much. In case you meant
> > 'several GB', that shouldn't be a problem as well.
>
> Sorry, I meant several GB. Although that may not be a problem for
> Postg
On Thu, 1 May 2014 22:17:24 +0200,
Szymon Guz wrote:
> On 1 May 2014 21:01, Seb wrote:
> On Thu, 1 May 2014 20:22:26 +0200,
> Szymon Guz wrote:
>> Hi, several Gb is about 1GB, that's not too much. In case you meant
>> 'several GB', that shouldn't be a problem as well.
> Sorry, I m
On 1 May 2014 22:24, Seb wrote:
> On Thu, 1 May 2014 22:17:24 +0200,
> Szymon Guz wrote:
>
> > On 1 May 2014 21:01, Seb wrote:
> > On Thu, 1 May 2014 20:22:26 +0200,
> > Szymon Guz wrote:
>
> >> Hi, several Gb is about 1GB, that's not too much. In case you meant
> >> 'several GB', that
On Thu, 1 May 2014 22:31:46 +0200,
Szymon Guz wrote:
[...]
> Can you show us the query plan for the queries you are using, the view
> definition, and how you query that view?
Thanks for your help with this. Here's the view definition (eliding
similar column references):
---<---
On 05/01/2014 11:40 AM, Stephan Fabel wrote:
> I'm using Ubuntu 12.04 for these deployments at the moment. The Ubuntu
> packages don't put the configuration files with the cluster data (by
> default under /var/lib/postgresql/9.1/main on 12.04), but in
> /etc/postgresql/9.1/main, and they start postgre
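For what it's worth, you can confirm where a given cluster keeps its files
from psql; on a stock 12.04 install this would typically report something
like:

# SHOW config_file;
  /etc/postgresql/9.1/main/postgresql.conf
# SHOW data_directory;
  /var/lib/postgresql/9.1/main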
On 1 May 2014 22:50, Seb wrote:
> On Thu, 1 May 2014 22:31:46 +0200,
> Szymon Guz wrote:
>
> [...]
>
> > Can you show us the query plan for the queries you are using, the view
> > definition, and how you query that view?
>
> Thanks for your help with this. Here's the view definition (eliding
> s
On Thu, 1 May 2014 23:41:04 +0200,
Szymon Guz wrote:
[...]
> In this form it is quite unreadable. Could you paste the plan to
> http://explain.depesz.com/ and provide the URL of the page?
Nice.
http://explain.depesz.com/s/iMJi
--
Seb
On Wed, Apr 30, 2014 at 9:59 AM, Elanchezhiyan Elango wrote:
> Hi,
>
> I need help on deciding my vacuuming strategy. I need to know if I ever
> need to do 'vacuum full' for my tables.
>
>
An important and critical configuration setting is "fillfactor". "fillfactor" will
have a greater impact on VACUUMING s
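For example, fillfactor can be set per table (an illustrative sketch; the table
name and the value are placeholders):

ALTER TABLE mytable SET (fillfactor = 70);
-- leaves ~30% of each page free so HOT updates can reuse the space,
-- which reduces the bloat that VACUUM later has to clean up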
Hi,
I can't seem to figure out what is wrong here. Why am I getting "database does
not exist"? I just created the database and am able to connect to it as the
"postgres" user.
I am trying to restrict "testuser" from connecting to the "myDB" database.
Thanks in advance.
postgres@ulinux3:~$ createuser
I guess you need to quote the identifier, as you use mixed case. I.e. try "myDB" with the double quotes.
Tomas
On 2. 5. 2014 at 2:49, Prashanth Kumar wrote:
Hi, I can't seem to figure out what is wrong here. Why am I getting "database does not exist"? I just created the database and am able to co
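For example, inside SQL the mixed-case name has to be double-quoted, otherwise
it folds to lower case (illustrative statements only; note that CONNECT is also
granted to PUBLIC by default, so that grant may need revoking as well):

REVOKE CONNECT ON DATABASE "myDB" FROM testuser;  -- identifier keeps its case
REVOKE CONNECT ON DATABASE myDB FROM testuser;    -- folds to "mydb": database does not exist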
Prashanth Kumar wrote:
> Hi,
>
> I can't seem to figure out what is wrong here. Why am I getting "database
> does not exist"? I just created the database and am able to connect to it
> as the "postgres" user.
> I am trying to restrict "testuser" from connecting to the "myDB" database.
Tomas is likely co