At 08:56 16/06/2009, you wrote:
Hi,
I need to measure the memory used by postgres under various
loads in my application. How do I calculate the exact memory used by
postgres on Windows as well as Linux?
Download Process Explorer from Microsoft (formerly from Sysinternals).
There you
Hi.
I have a little problem (and a suggestion for a solution) that I
wondered if anyone would care to comment on.
I have a standard table filled with customers (with a unique customer
id, names etc.) and for each customer I need to store some integer
values. The problem is that the number of integer values that should
be stored WILL change over time
J. Greg Davidson wrote:
> Hi dear colleagues,
>
> I'm trying to pull some platform-specific constants out of my
> code by using psql variables, e.g.:
>
> $ psql -v TypeLength=4
>
> # CREATE TYPE tref (
> INTERNALLENGTH = :TRefTypeLength,
> INPUT = tref_in,
> OUTPUT = tref_out,
> PASSEDBY
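A sketch of how the pieces fit together, assuming the name passed with -v matches the one referenced in the script (note that in the mail above, -v TypeLength=4 does not match :TRefTypeLength, which would leave that variable unset; the file name here is illustrative):

```sql
-- create_tref.sql, run as:  psql -v TRefTypeLength=4 -f create_tref.sql
CREATE TYPE tref (
    INTERNALLENGTH = :TRefTypeLength,  -- psql substitutes 4 here before sending
    INPUT  = tref_in,
    OUTPUT = tref_out,
    PASSEDBYVALUE
);
```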
Hi everyone,
Recently we have upgraded to the postgres JDBC4 driver for 8.3. After that,
while executing rst.first(), it raises an exception.
The statement was created correctly as
dbconnection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
org.postgresql.util.PSQLExcep
Hi Artur,
I am the owner of a database about war, war crimes and terrorism with more than
1.6 TByte, and I am already fscked...
On 2009-06-15 14:00:05, Artur wrote:
> Hi!
>
> We are thinking to create some stocks related search engine.
> It is experimental project just for fun.
>
> The problem is that
Yes, they are both your packages from your official site. So this means that
in 8.2 and earlier versions the rounding is not the regular one.
Best regards,
Otto
2009/6/15 Dave Page
> On Mon, Jun 15, 2009 at 3:33 PM, Tom Lane wrote:
> > Havasvölgyi Ottó writes:
> >> I
Hello Arthur,
We have a database with a table growing by ~1.5M rows each month.
The overall growth for the db is ~1GB/month.
PostgreSQL 8.2.9 on x86_64 - a very modest Dell R200 with 4GB of ram.
Although poor planning was made in the beginning (i.e.
On Tue, Jun 16, 2009 at 09:11:20AM +0200, A B wrote:
> I have a standard table filled with customers (with a unique customer
> id, names etc.) and for each customer I need to store some integer
> values. The problem is that the number of integer values that should
> be stored WILL change over time
While playing around with the new features in 8.4rc1, I observe the following:
I create a simple set returning function:
CREATE OR REPLACE FUNCTION n_plicate(x anyelement, i integer) RETURNS SETOF
anyelement LANGUAGE plpgsql AS
$$BEGIN
FOR n IN 1..i LOOP
RETURN NEXT x;
END LOOP;
RETURN;
END;$$;
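Assuming the definition above is completed with RETURN; END;$$;, it can be exercised like this; the second query shows the select-list SRF behaviour discussed elsewhere in this digest:

```sql
-- set-returning function in the FROM clause:
SELECT * FROM n_plicate(42, 3);      -- three rows of 42

-- two SRFs directly in the select list: the row count is the
-- least common multiple of the two cardinalities (6 rows here)
SELECT n_plicate(1, 2), n_plicate(2, 3);
```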
Hi Merlin, thanks for the detailed input.
As per your suggestion I will try to implement Slony-I.
I think I will need some help to do it.
I am using Postgres 8.3.7, on Windows.
I was following the Slony-I example in the help for pgAdmin III. I am able
to perform the steps from 1-7. Step 8 : cre
Does anyone know why I get an unknown relation error when I query for
relation size?
kls=# select tablename, pg_size_pretty(pg_relation_size(tablename))
from pg_tables where schemaname = 'econ' order by tablename;
ERROR: relation "series_info" does not exist
kls=#
Is there a better way to do thi
> The way you described the problem the EAV solution sounds like the best
> match--not sure if I'd use your synthetic keys though, they will save a
> bit of space on disk but queries will be much more complicated to write.
I guess I'll have to build procedures for all the complicated queries
when e
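For readers unfamiliar with the pattern under discussion, a minimal EAV-style layout for per-customer integer values might look like this (table and column names are purely illustrative, not from the thread):

```sql
CREATE TABLE customer_value (
    customer_id integer NOT NULL,   -- references the customers table
    attribute   text    NOT NULL,   -- e.g. 'balance_2009_06'
    value       integer NOT NULL,
    PRIMARY KEY (customer_id, attribute)
);
```

New kinds of values need no schema change, at the cost of more complicated queries, as noted above.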
The search_path isn't correct. You're looking for the size of tables
in schema 'econ', but you don't mention this schema inside the function
pg_relation_size(). Use the schema names as well; it saves you a
lot of problems with assumptions.
This one should work:
SELECT
    tablename,
    pg_size_pretty(pg_relation_size(schemaname || '.' || tablename))
FROM pg_tables
WHERE schemaname = 'econ'
ORDER BY tablename;
On Mon, Jun 15, 2009 at 2:04 PM, Gnanam wrote:
>
> I also read some article which talks about the type of patterns:
> 1. Meta-database
> 2. Mutating
> 3. Fixed
> 4. LOB
>
> My question here is, what is the best approach to define the architecture
> for custom fields. Performance should not be compr
Thanks, Frank.
That works perfectly.
Cheers,
Whit
On Tue, Jun 16, 2009 at 7:36 AM, Frank Heikens wrote:
> The search_path isn't correct. You're looking for the size of tables in
> schema 'econ' but you don't mention this schema inside the function
> pg_relation_size(). Try to use the schemanam
On Tue, Jun 16, 2009 at 12:21 PM, A B wrote:
>
>> Just had a quick flick through your previous posts; and I'd probably
>> stick with the multiple tables approach. It's the most natural fit to
>> relational databases and until you know more about the problem (i.e.
>> you've experienced the data you
2009/6/16 Greg Stark
> On Tue, Jun 16, 2009 at 12:21 PM, A B wrote:
>
> I don't think it's fair to call this EAV actually. It sounds
> like the integers are a collection of things which represent the same
> thing. I.e., they're all bank balances or all distances driven, just for
> different
2009/6/16 A B :
>
> 2009/6/16 Greg Stark
>>
>> I don't think it's fair to call this EAV actually. It sounds
>> like the integers are a collection of things which represent the same
>> thing. I.e., they're all bank balances or all distances driven, just for
>> different time periods. Storing al
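If the integers really are one kind of value across time periods, as suggested above, one alternative sometimes proposed for this shape of data is an integer array per customer; a minimal sketch (names illustrative):

```sql
-- one row per customer, one array slot per time period
CREATE TABLE customer_values (
    customer_id   integer PRIMARY KEY,
    period_values integer[]        -- e.g. one balance per month
);

-- appending a value for a new period:
UPDATE customer_values
SET period_values = period_values || 1234
WHERE customer_id = 42;
```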
On Tue, Jun 16, 2009 at 6:48 AM, Nishkarsh wrote:
>
> Hi Merlin, thanks for the detailed input.
>
> As per your suggestion I will try to implement Slony-I.
>
> I think I will need some help to do it.
>
> I am using Postgres 8.3.7, on Windows.
>
> I was following the Slony-I example in the help for p
"Albe Laurenz" writes:
> So it looks like the number of result rows is the least common multiple
> of the cardinalities of all columns in the select list.
It's always been that way. The lack of any obviously-sane way to
handle multiple SRFs in a targetlist is exactly why the feature is
looked on
Frank Heikens writes:
> pg_size_pretty(pg_relation_size(schemaname || '.' || tablename))
At some point you're going to wish you'd used quote_ident() here.
regards, tom lane
PS: Personally I prefer to rely on pg_relation_size(oid), but to use
that you need to be loo
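Putting Tom's two suggestions together, a sketch of both variants (the schema name 'econ' comes from the earlier query in this thread):

```sql
-- safe variant over pg_tables, quoting identifiers:
SELECT tablename,
       pg_size_pretty(pg_relation_size(
           quote_ident(schemaname) || '.' || quote_ident(tablename)))
FROM pg_tables
WHERE schemaname = 'econ'
ORDER BY tablename;

-- oid-based variant over pg_class, avoiding name parsing entirely:
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid))
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'econ'
  AND c.relkind = 'r'
ORDER BY c.relname;
```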
Agreed.
Personally I wouldn't use pg_tables at all because of the missing oid.
Would be nice to have in this view, but it can't be changed because
it's a system-view. pg_class would do the job.
Regards,
Frank
On 16 Jun 2009, at 16:12, Tom Lane wrote:
Frank Heikens
On Mon, Jun 15, 2009 at 10:21 PM, Sim Zacks wrote:
> From the user's perspective: If you design an application for me and I
> want to add a new data field or a new form, should I have to call you
> back and pay your exorbitant consulting fees? I would prefer to pay a
> little bit more at the begin
Actually, is there any particular reason why we can't *add* that column
to the view in a future version? We certainly shouldn't go modify it,
but adding to it should be pretty safe, no?
--
Magnus Hagander
Self: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Frank Heikens wrote
The same problem, missing oids, occurs with several other system
views as well. If you have to do some serious work, it's always
pg_class you need.
oids in these views would be nice, but only if all the system views
have the oids of the underlying objects. In case of pg_tables you
nee
> The problem is that we expect to have more than 250 GB of data every month.
Sounds like Teradata or Netezza territory
On Mon, Jun 15, 2009 at 06:53:00PM -0700, Just Someone wrote:
> Hi,
>
> I've seen both - some unknown reason for it to die (mostly related to
> the underlying hardware having issues). We also see instance failure
> from time to time with advanced notice. Just like a regular machine
> dies from tim
On Tue, Jun 16, 2009 at 01:21:42PM +0200, A B wrote:
> > The examples you gave (i.e. shoe size, hair length) would fit normal
> > table columns much better.
> Sorry, shoe size was not a good example; think of it as an arbitrary
> string instead of shoe size. The data/name is nothing you can relate
> to in any
would suggest Oracle 11 for DB of 10TB or greater
http://www.oracle.com/solutions/performance_scalability/tpc-h-10tb-11g.html
Martin Gainty
__
Disclaimer and confidentiality note:
This message is confidential.
On Mon, 15 Jun 2009, Artur wrote:
We are thinking to create some stocks related search engine.
It is experimental project just for fun.
I hope your idea of fun involves spending a bunch of money on hardware and
endless tweaking to get data loading every day with appropriate
corrections, beca
On Mon, 15 Jun 2009, Whit Armstrong wrote:
Is anyone aware of additional examples or tutorials on partitioning?
http://www.pgcon.org/2007/schedule/events/41.en.html
http://blog.mozilla.com/webdev/2007/05/15/partitioning-fun-in-postgresql/
http://benjamin.smedbergs.us/blog/2007-05-12/when-parti
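For a quick taste of what those links cover, the classic inheritance-based scheme from the 8.x documentation looks roughly like this (table and column names are illustrative):

```sql
CREATE TABLE measurement (
    logdate date NOT NULL,
    value   integer
);

-- one child table per month, constrained to its date range
CREATE TABLE measurement_2009_06 (
    CHECK (logdate >= DATE '2009-06-01' AND logdate < DATE '2009-07-01')
) INHERITS (measurement);

-- with constraint_exclusion = on, a query such as
--   SELECT * FROM measurement WHERE logdate = DATE '2009-06-15';
-- only scans measurement_2009_06, not the other children
```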
On Tue, 16 Jun 2009, Brent Wood wrote:
For data retrieval, clustered indexes may help, but as this requires a
physical reordering of the data on disk, it may be impractical.
This tends to be irrelevant for this sort of data because it's normally
inserted in a fairly clustered way in the first
Hi,
> So, when a cloud machine fails does it get de-allocated/wiped out? or
> does it is it still out there in a bad state? how do you recover your
> data?
It depends. Sometimes it dies and you can't do anything with it. In
others you can restart it. As we store the data on EBS (which is a
networ
On Tue, 16 Jun 2009, Just Someone wrote:
In case the volume got corrupted (a very rare situation, as the EBS
volumes are very durable), there are snapshots I can recover from and
the WAL files I stream to another storage system (Amazon's S3).
I wouldn't go so far as to say "very durable", beca
Hi,
On Tue, Jun 16, 2009 at 11:05 AM, Greg Smith wrote:
> You just have to recognize that the volumes are
> statistically pretty fragile compared to a traditional RAID configuration on
> dedicated hardware and plan accordingly.
I agree completely. I think the advantage is that it FORCES you to
p
Hi Greg,
On 2009-06-16 12:13:20, Greg Smith wrote:
> The first level of problems you'll run into are how to keep up with
> loading data every day. The main way to get bulk data in PostgreSQL,
> COPY, isn't particularly fast, and you'll be hard pressed to keep up with
> 250GB/day unless you
S Arvind wrote:
Hi everyone,
Recently we have upgraded to the postgres JDBC4 driver for 8.3. After that,
while executing rst.first(), it raises an exception.
The statement was created correctly as
dbconnection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
org.postg
On Tue, 16 Jun 2009, Michelle Konzack wrote:
On 2009-06-16 12:13:20, Greg Smith wrote:
you'll be hard pressed to keep up with 250GB/day unless you write a
custom data loader that keeps multiple cores
AFAIK he was talking about 250 GByte/month, which is around 8 GByte a
day or 300 MByte p
2009/6/16 Martin Gainty :
> would suggest Oracle 11 for DB of 10TB or greater
> http://www.oracle.com/solutions/performance_scalability/tpc-h-10tb-11g.html
You are joking, right?
Better invest that money in paying someone from -hackers to add
features required, if there will be any!
Or buy for t
2009/6/16 Grzegorz Jaśkiewicz :
> 2009/6/16 Martin Gainty :
>> would suggest Oracle 11 for DB of 10TB or greater
>> http://www.oracle.com/solutions/performance_scalability/tpc-h-10tb-11g.html
>
> You are joking, right ?
> Better invest that money in paying someone from -hackers to add
> features re
On Jun 16, 2009, at 12:11 AM, A B wrote:
Hi.
I have a little problem (and a suggestion for a solution) that I
wondered if anyone would care to comment on.
I have a standard table filled with customers (with a unique customer
id, names etc.) and for each customer I need to store some integer
va
anyone know a way to get nagios to monitor the number of postgres connections?
Thanks,
Whit
Whit Armstrong wrote:
anyone know a way to get nagios to monitor the number of postgres connections?
a script that counts how many lines of postgres processes there are and
subtracts a few
$ ps -C postmaster uww
USER       PID %CPU %MEM   VSZ   RSS TTY  STAT START  TIME COMMAND
On Tue, Jun 16, 2009 at 11:32 PM, John R Pierce wrote:
> Whit Armstrong wrote:
>
>> anyone know a way to get nagios to monitor the number of postgres
>> connections?
>>
>>
>
You could also login to the database and run:
select count(1) from pg_stat_activity;
--Scott
John R Pierce wrote:
> Whit Armstrong wrote:
>> anyone know a way to get nagios to monitor the number of postgres
>> connections?
>>
>
Check out the check_postgres nagios plugin:
http://bucardo.org/check_postgres/
Specifically you want the backends check:
http://bucardo.org/check_postgres/chec
Whit Armstrong wrote:
> anyone know a way to get nagios to monitor the number of postgres connections?
>
> Thanks,
> Whit
>
Hello
We use this plugin. Save it under /etc/munin/plugins/pg_connections and
update the file /etc/munin/plugin-conf.d/postgres with:
---
[pg_connection]
Rafael Martinez wrote:
> Whit Armstrong wrote:
>> anyone know a way to get nagios to monitor the number of postgres
>> connections?
>>
>> Thanks,
>> Whit
>>
>
> Hello
>
> We use this plugin. Save it under /etc/munin/plugins/pg_connections and
> update the file /etc/munin/plugin-conf.d/postgres w