On 24/05/12 08:18, Lonni J Friedman wrote:
On Wed, May 23, 2012 at 12:36 PM, Gavin Flower
wrote:
On 24/05/12 05:09, Lonni J Friedman wrote:
On Wed, May 23, 2012 at 9:37 AM, Tom Lane wrote:
Lonni J Friedman writes:
After banging my head on the wall for a long time, I happened to
notice that khugepaged was consuming 100% CPU every time autovacuum was running.
On Wed, May 23, 2012 at 6:16 PM, Greg Williamson
wrote:
> Mike --
>
>
> <...>
>
>>
>>Is PG compressing this data? I'm curious as I was considering
>>converting this column to a byte array and gzip'ing the data to save
>>space, however if PG is already doing this for me, then I'm not going
>>to bother. Thanks!
Mike --
<...>
>
>Is PG compressing this data? I'm curious as I was considering
>converting this column to a byte array and gzip'ing the data to save
>space, however if PG is already doing this for me, then I'm not going
>to bother. Thanks!
>
>Mike
It may vary from version of postgres to version
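PostgreSQL does compress large text values automatically (TOAST uses an LZ-family scheme, not gzip), which is consistent with the on-disk total being about half the summed lengths. A rough way to estimate what explicitly gzip'ing the column would buy — the approach Mike is weighing — is to compress a sample client-side. A minimal sketch; the sample HTML string is made up for illustration:

```python
import zlib

# Made-up sample standing in for one row of the html column.
sample_html = ("<html><head><title>Recipe</title></head><body>"
               + "<p>Mix flour and water.</p>" * 200
               + "</body></html>")

raw = sample_html.encode("utf-8")
packed = zlib.compress(raw, 6)  # gzip uses the same DEFLATE algorithm

print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes, "
      f"ratio: {len(packed) / len(raw):.2f}")
```

If the ratio here is not much better than what TOAST already achieves, converting the column to bytea probably is not worth the trouble.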
On 05/23/2012 06:07 PM, Mike Christensen wrote:
If I run this query:
select sum(length(html)) from Indexer.Pages;
I get:
15,680,005,116
However, if I type:
C:\Program Files\PostgreSQL>dir /s
I get:
Total Files Listed:
5528 File(s) 7,414,385,333 bytes
575 Dir(s) 43,146,137,600 bytes free
If I run this query:
select sum(length(html)) from Indexer.Pages;
I get:
15,680,005,116
However, if I type:
C:\Program Files\PostgreSQL>dir /s
I get:
Total Files Listed:
5528 File(s) 7,414,385,333 bytes
575 Dir(s) 43,146,137,600 bytes free
So all the Postgres data on disk i
FindFirstFile can take a wildcard filename
pattern. It appears that we are effectively
calling FindFirstFile without a pattern, getting
all 56000 file names with complete stat
information, doing a poor-man's regex on
those names, and matching just the temporary
files.
If RemovePgTempFiles were mo
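The point above is that the filtering could happen in the filesystem layer instead of in PostgreSQL: FindFirstFile accepts a wildcard such as `pgsql_tmp*`, so only matching names would ever be returned. A cross-platform sketch of the two approaches, with a made-up directory listing standing in for the 56,000 files:

```python
import fnmatch

# Made-up listing: many ordinary relation files plus a few temp files.
names = [str(16384 + i) for i in range(56000)] + ["pgsql_tmp123.0", "pgsql_tmp456.7"]

# Approach 1: fetch every name, then filter afterwards -- the
# "poor-man's regex" the port of readdir() effectively forces.
scan_all = [n for n in names if n.startswith("pgsql_tmp")]

# Approach 2: hand the pattern to the enumerator, so only matches
# come back (what FindFirstFile("...\\pgsql_tmp*", ...) would do).
pattern_match = fnmatch.filter(names, "pgsql_tmp*")

print(scan_all == pattern_match)  # same result; the difference is who filters
```

Either way the survivors are the same two temp files; the win from the wildcard is avoiding a full stat of every non-matching entry.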
Mark Dilger writes:
> I am running this code on Windows 2003. It
> appears that postgres has in src/port/dirent.c
> a port of readdir() that internally uses the
> WIN32_FIND_DATA structure, and the function
> FindNextFile() to iterate through the directory.
> Looking at the documentation, it seems that this function does collect
On 05/23/2012 04:11 PM, Andreas wrote:
Hi,
I'm running a PG 9.1.3 on OpenSuse 12.1.
I found that pg_log is 2 hours ahead though date on the console
shows the right date and time.
I have the ntp daemon watching the system's time every 60 minutes so
this shouldn't be an issue.
The time in
Hi,
I'm running a PG 9.1.3 on OpenSuse 12.1.
I found that pg_log is 2 hours ahead though date on the console
shows the right date and time.
I have the ntp daemon watching the system's time every 60 minutes so
this shouldn't be an issue.
The time in PG's logfiles filenames as well as the t
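When the shell's date is right but the log timestamps are off, the usual culprit is PostgreSQL's own log_timezone setting rather than the system clock, since it governs the timestamps in log lines (and the time used in log file names). A postgresql.conf sketch — the zone name here is an assumption; check the actual value with `SHOW log_timezone;`:

```
# postgresql.conf -- timestamps written to the server log
log_timezone = 'Europe/Berlin'   # hypothetical zone; set to your local zone
timezone     = 'Europe/Berlin'   # zone used for timestamps shown to sessions
```

Reload the server after changing these (`pg_ctl reload` or SIGHUP).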
I am running this code on Windows 2003. It
appears that postgres has in src/port/dirent.c
a port of readdir() that internally uses the
WIN32_FIND_DATA structure, and the function
FindNextFile() to iterate through the directory.
Looking at the documentation, it seems that
this function does collect
On Wed, May 23, 2012 at 3:33 PM, Tom Lane wrote:
> Gavin Flower writes:
>>> 16 core Xeon X5550 2.67GHz
>>> 128GB RAM
>>> $PGDATA sits on a RAID5 array comprised of 3 SATA disks. It's Linux's
>>> md software RAID.
>
>> How does this compare to your other machines running the same, or
>> similar, databases?
Gavin Flower writes:
>> 16 core Xeon X5550 2.67GHz
>> 128GB RAM
>> $PGDATA sits on a RAID5 array comprised of 3 SATA disks. It's Linux's
>> md software RAID.
> How does this compare to your other machines running the same, or
> similar, databases?
> However, you do say that the other machines ar
On Wed, 2012-05-23 at 14:37 -0700, Lists wrote:
> I'm wondering if there are particular performance bottlenecks we
> should be aware of as we scale the hardware up? Does
> PG handle 64 GB of RAM well? 128 GB? 16 cores? 48 cores? SAS/SATA III
> with SSDs? (etc.)
These are not "extreme" for Postgres
We're getting ready to do system upgrades to our postgresql hosting
cluster. Currently, we're using
8-core servers with 32 GB of RAM in each to host approximately 75 end
user databases per server. I'm wondering
if there are particular performance bottlenecks we should be aware of as
we scale the
Mark Dilger writes:
> We only use one database, not counting the
> built-in template databases. The server is
> running 9.1.3. We were running 9.1.1 until
> fairly recently.
OK. I had forgotten that in recent versions, RemovePgTempFiles doesn't
only iterate through the pgsql_tmp directories; i
Up...
2012/5/18 Igor
> Good day.
>
> Continuing the subject about DB cluster based on plproxy - a question for
> developers:
>
> Is it possible to manage schema visibility from one database for different
> databases, at least in the read-only mode ? For example as schema
> pg_catalog.
> Or is it
On Wed, May 23, 2012 at 12:36 PM, Gavin Flower
wrote:
> On 24/05/12 05:09, Lonni J Friedman wrote:
>
> On Wed, May 23, 2012 at 9:37 AM, Tom Lane wrote:
>
> Lonni J Friedman writes:
>
> After banging my head on the wall for a long time, I happened to
> notice that khugepaged was consuming 100% CPU every time autovacuum was running.
We only use one database, not counting the
built-in template databases. The server is
running 9.1.3. We were running 9.1.1 until
fairly recently.
We are still getting set up to test this on
non-virtual hardware, but hope to have results
from that in a few hours or less.
On 24/05/12 05:09, Lonni J Friedman wrote:
On Wed, May 23, 2012 at 9:37 AM, Tom Lane wrote:
Lonni J Friedman writes:
After banging my head on the wall for a long time, I happened to
notice that khugepaged was consuming 100% CPU every time autovacuum
was running. I did:
echo "madvise" > /sys/kernel/mm/transparent_hugepage/defrag
Mark Dilger writes:
> We do not use tablespaces at all.
[ scratches head... ] If you aren't using any tablespaces, there should
be only *one* pgsql_tmp directory, which makes this even more confusing.
(Unless you're using a pre-8.3 release, in which case there would be one
per database, so mayb
We do not use tablespaces at all. We do use table
partitioning very heavily, with many check
constraints. That is the only thing unusual about
the schema.
To my eyes, the birds appear to be flying pretty
darned fast, though we have not figured out how
to remove the message bands quickly without
Mark Dilger writes:
> Prior to posting to the mailing list, we made some
> changes in postmaster.c to identify where time was
> being spent. Based on the elog(NOTICE,...) lines
> we put in the file, we determined the time was spent
> inside RemovePgTempFiles.
> I then altered RemovePgTempFiles to take a starttime parameter and
We tried setting the timezone, as:
timezone = 'US/Eastern'
in postgresql.conf, but it did not help.
From: Tom Lane
To: Mark Dilger
Cc: deepak ; Alban Hertroys ;
"pgsql-general@postgresql.org"
Sent: Wednesday, May 23, 2012 9:50 AM
Subject: Re: [GENER
On Wed, May 23, 2012 at 9:37 AM, Tom Lane wrote:
> Lonni J Friedman writes:
>> After banging my head on the wall for a long time, I happened to
>> notice that khugepaged was consuming 100% CPU every time autovacuum
>> was running. I did:
>> echo "madvise" > /sys/kernel/mm/transparent_hugepage/defrag
Prior to posting to the mailing list, we made some
changes in postmaster.c to identify where time was
being spent. Based on the elog(NOTICE,...) lines
we put in the file, we determined the time was spent
inside RemovePgTempFiles.
I then altered RemovePgTempFiles to take a starttime
parameter and
Mark Dilger writes:
> I tried moving the call to RemovePgTempFiles until
> after the PID file is fully written, but it did not help.
I wonder whether you correctly identified the source of the slowness.
The thing I would have suspected is identify_system_timezone(), which
will attempt to read eve
Lonni J Friedman writes:
> After banging my head on the wall for a long time, I happened to
> notice that khugepaged was consuming 100% CPU every time autovacuum
> was running. I did:
> echo "madvise" > /sys/kernel/mm/transparent_hugepage/defrag
> and immediately the entire problem went away.
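The echo above switches the kernel's transparent-hugepage defrag mode to "madvise". The sysfs file lists every available mode with the active one in brackets, so the change can be verified by reading it back. A small sketch of parsing that format — the sample string is hard-coded here rather than read from /sys:

```python
import re

# Typical contents of /sys/kernel/mm/transparent_hugepage/defrag
# after the echo; the bracketed token is the active mode.
sysfs_value = "always [madvise] never"

match = re.search(r"\[(\w+)\]", sysfs_value)
active = match.group(1) if match else None
print(active)  # -> madvise
```

On a live box, replace the hard-coded string with the contents of the sysfs file to confirm the setting survived.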
F
Alberto Zanon writes:
> I agree with you about the concept of "port number". I only wanted to know if
> it is a new behavior of Postgresql 9.1.
More like 9.0, according to our commit logs:
Author: Tom Lane
Branch: master Release: REL9_0_BR [61be11ff0] 2009-09-27 03:43:10 +
Make libpq
I tried moving the call to RemovePgTempFiles until
after the PID file is fully written, but it did not help.
pg_ctl attempts to connect to the database, and does
not report the database as running until that connection
succeeds. I am not comfortable moving the call to
RemovePgTempFiles after the p
On 23/05/2012, at 18:54, Bartosz Dmytrak wrote:
> hi,
> my suggestion is to redesign the reporting database to fit reporting specifics
> (e.g. break normal form of the database; in some cases this will speed up
> reports). Then you can use some ETL tool to sync production and reporting.
> A good thing is to use some OLAP software for multidimensional analysis -
Thanks for your reply.
On Tue, May 22, 2012 at 7:19 PM, Andy Colson wrote:
> On Mon, May 21, 2012 at 2:05 PM, Lonni J Friedman
> wrote:
>>>
>>> Greetings,
>>>
>>> When I got in this morning, I found
>>> an autovacuum process that had been running since just before the load
>>> spiked,
>
>
> Aut
Hi Richard,
I agree with you about the concept of "port number". I only wanted to know if
it is a new behavior of Postgresql 9.1.
Alberto
----- Original Message -----
From: "Richard Welty"
To: "Alberto Zanon" , "Merlin Moncure"
Cc: pgsql-general@postgresql.org
Sent: Wednesday, 23
Thanks, I have put one of the other developers working on this issue, to
comment.
--
Deepak
On Mon, May 21, 2012 at 10:55 PM, Tom Lane wrote:
> deepak writes:
> > We could reproduce the start-up problem on Windows 2003. After a reboot,
> > postmaster, in its start-up sequence cleans up old tem
hi,
my suggestion is to redesign the reporting database to fit reporting specifics
(e.g. break normal form of the database; in some cases this will speed up
reports). Then you can use some ETL tool to sync production and reporting.
A good thing is to use some OLAP software for multidimensional analysis -
t
On 23/05/2012, at 17:20, Chris Ernst wrote:
> I would have a look at slony. It is a trigger based replication system
> that allows you to replicate only the tables you define and you can have
> different indexing on the slave. The only requirement is that each
> table you want to replicate has
port numbers are restricted to 2 octets (16 bits). they are TCP/IP entities and
are restricted in size by the RFCs (internet standards).
richard
-Original Message-
From: pgsql-general-ow...@postgresql.org on behalf of Alberto Zanon
Sent: Wed 5/23/2012 10:19 AM
To: Merlin Moncure
Cc: pgsq
Thanks Merlin for the reply,
when you define a "Unix domain socket" you can use any number because it's
just used for the filename, e.g. "/var/run/.s.PGSQL.123456".
It works in PgPool and Postgres 8.2 with no problems.
Alberto
----- Original Message -----
From: "Merlin Moncure"
To: "Al
On 05/23/2012 07:11 AM, Herouth Maoz wrote:
> Hi guys,
>
> I'm interested in a solution that will allow our customers to run reports -
> which may involve complicated queries - on data which is as up-to-date as
> possible.
>
> One thing I don't want to do is to let the reporting system connect
On Wed, May 23, 2012 at 7:56 AM, Alberto Zanon
wrote:
> Hi all,
>
> in my production environment I have Postgresql 8.2 + PgPool. The port number
> I specify in PgPool configuration is not a real port number (e.g. 123456). I
> can define a "dblink" as :
>
> dblink('dbname= host=/var/run port=123
Hi guys,
I'm interested in a solution that will allow our customers to run reports -
which may involve complicated queries - on data which is as up-to-date as
possible.
One thing I don't want to do is to let the reporting system connect to the
production database. I want the indexes in product
Hi all,
in my production environment I have Postgresql 8.2 + PgPool. The port number I
specify in PgPool configuration is not a real port number (e.g. 123456). I can
define a "dblink" as :
dblink('dbname= host=/var/run port=123456 user= password='::text,
'select ... '::text) t1( .
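The distinction the thread lands on: a TCP port must fit in 16 bits, but when host points at a directory, libpq only uses the "port" to build the socket file name (the .s.PGSQL.&lt;port&gt; convention), so 123456 works fine. A sketch of both checks in pure string logic, with no server involved:

```python
def tcp_port_ok(port: int) -> bool:
    # Real TCP ports are 16-bit: 1..65535.
    return 0 < port < 65536

def unix_socket_path(directory: str, port: int) -> str:
    # With host=/var/run the "port" is only a filename suffix,
    # mirroring libpq's .s.PGSQL.<port> naming convention.
    return f"{directory}/.s.PGSQL.{port}"

print(tcp_port_ok(123456))                   # False: too big for TCP
print(unix_socket_path("/var/run", 123456))  # /var/run/.s.PGSQL.123456
```

This is why the dblink connection string in the original message works over a Unix socket but the same number could never be a listen port.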