On 11/22/2011 3:28 PM, Merlin Moncure wrote:
On Sun, Nov 13, 2011 at 5:38 AM, Phoenix Kiula wrote:
Hi.
I currently have a cron job that does a full pg_dump of the database every
day, then gzips it for saving to my backup drive.
However, my db is now 60GB in size, so this daily operation is making…
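For a database that size, pg_dump's custom format (-Fc) is worth considering: it is compressed internally, so the separate gzip pass goes away, and it allows selective restore with pg_restore. A minimal cron sketch; the database name "mydb" and the /backup path are placeholders:

```shell
#!/bin/sh
# Nightly dump in pg_dump's custom format, which is compressed
# internally -- no separate gzip step needed.  Restore later with
# pg_restore.  "mydb" and /backup are placeholder names.
pg_dump -Fc mydb > /backup/mydb-$(date +%Y%m%d).dump
```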
On 11/8/2011 1:00 PM, Ascarabina wrote:
Would something like this work? -
select ip, max("time") - min("time") as session_duration
from log_table
group by ip;
I don't think this is the right way to do it. The grouping is based on
IP address, so if
- a client connects at different times with the same IP
-
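When the logs carry no session identifier, sessions are often approximated by splitting each IP's events wherever the idle gap exceeds a timeout. A sketch using window functions (PostgreSQL 8.4 or later), assuming a single timestamp column `ts` — the table in this thread splits date and time, which could be combined as `date + "time"` — and an arbitrary 30-minute timeout:

```sql
-- Start a new session whenever the gap from the previous event for the
-- same ip exceeds 30 minutes (or there is no previous event), then
-- number sessions per ip with a running sum of those flags.
WITH gaps AS (
    SELECT ip, ts,
           CASE
               WHEN lag(ts) OVER w IS NULL
                 OR ts - lag(ts) OVER w > interval '30 minutes'
               THEN 1 ELSE 0
           END AS new_session
    FROM log_table
    WINDOW w AS (PARTITION BY ip ORDER BY ts)
),
sessions AS (
    SELECT ip, ts,
           sum(new_session) OVER (PARTITION BY ip ORDER BY ts) AS session_no
    FROM gaps
)
SELECT ip, session_no, max(ts) - min(ts) AS session_duration
FROM sessions
GROUP BY ip, session_no
ORDER BY ip, session_no;
```

This still conflates distinct users behind one NAT address, but at least it no longer merges visits that are hours apart into one giant "session".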
Hello all,
I have a table which stores action logs from users. It looks
something like this:
log_type text,
date date,
"time" time without time zone,
ip inet
The log type can be action1, action2, action3, action4, or action5. I
know that each user session will have a max of one of each log type.
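For reference, the columns above as a complete table definition; the name `log_table` is borrowed from the query proposed earlier in the thread:

```sql
-- Reconstruction of the log table described above; the column list is
-- taken verbatim from the thread.
CREATE TABLE log_table (
    log_type text,                    -- one of action1 .. action5
    date     date,
    "time"   time without time zone,  -- quoted: time is also a type name
    ip       inet
);
```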
You should look at table partitioning. That is, you make a master
table and then make a child table for each state that inherits from
the master. That way you can query each state individually, or you can
query the whole country if need be.
http://www.postgresql.org/docs/current/static/ddl-partiti
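In the releases this thread dates from, partitioning is done with inheritance plus CHECK constraints, as that documentation chapter describes. A minimal sketch; the table, column, and state names are placeholders:

```sql
-- Master table; children inherit its columns.
CREATE TABLE readings (
    state text NOT NULL,
    ts    timestamp NOT NULL,
    value numeric
);

-- One child per state; the CHECK constraint lets the planner skip
-- irrelevant children when constraint_exclusion is enabled.
CREATE TABLE readings_tx (CHECK (state = 'TX')) INHERITS (readings);
CREATE TABLE readings_ny (CHECK (state = 'NY')) INHERITS (readings);

SET constraint_exclusion = on;

-- Scans only readings_tx (plus the empty master):
SELECT count(*) FROM readings WHERE state = 'TX';

-- Or query one state's child table directly:
SELECT count(*) FROM readings_ny;
```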
I've had tables that big before and things were very slow. That's when
I partitioned them out. Luckily that table was just for reporting and
could be slow. Are you expecting that many rows and just don't know
how to handle them? I would recommend partitioning if at all possible.
…and is therefore a kind of date index already.
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
On 2/9/2010 11:47 AM, Asher wrote:
Hello.
I'm putting together a database to store the readings from various
measurement devices for later…
…there's only one script that inserts, so I just generate the correct
table name there.
Alex Thurlow
On 5/22/2009 9:56 AM, Vick Khera wrote:
On Thu, May 21, 2009 at 3:37 PM, Alex Thurlow wrote:
I was hoping not to have to change all my code to automate the
partition table creation, but if that's really the best way, I'll
check it out. Thanks for the advice.
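The "automate the table creation" part can be a small PL/pgSQL function called from the insert script or from cron. A sketch under assumed names: a master table `logs` with a date column `logdate`, and monthly children named `logs_YYYYMM`:

```sql
-- Create the monthly child table for a given month if it is missing.
-- "logs" and "logdate" are assumed names; the pg_class lookup ignores
-- schemas, which is fine for a single-schema sketch.
CREATE OR REPLACE FUNCTION make_log_partition(month_start date)
RETURNS void AS $$
DECLARE
    child text := 'logs_' || to_char(month_start, 'YYYYMM');
BEGIN
    PERFORM 1 FROM pg_class WHERE relname = child;
    IF NOT FOUND THEN
        EXECUTE 'CREATE TABLE ' || quote_ident(child)
             || ' (CHECK (logdate >= ' || quote_literal(month_start::text)
             || ' AND logdate < '
             || quote_literal((month_start + interval '1 month')::date::text)
             || ')) INHERITS (logs)';
    END IF;
END;
$$ LANGUAGE plpgsql;

-- e.g. from a nightly cron job, ahead of the month rollover:
-- SELECT make_log_partition(date_trunc('month', current_date)::date);
```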
Alex Thurlow
On 5/21/2009, Scott Marlowe wrote:
On Thu, May 21, 2009 at 1:13 PM, Alex Thurlow wrote:
I have a PostgreSQL database that I'm using for logging data. There's
basically one table where each row is a line from my log files. It's
getting to a size where it's running very slowly, though.
…set these to, or if there are others I should be using that I'm missing?
Thanks,
Alex
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general