Can anyone recommend a PostgreSQL-compatible free tool that I can use
to generate some schema diagrams of an existing database?
Thanks
Dan
On Mon, Jul 13, 2009 at 3:53 PM, Dan
Armbrust wrote:
>> So this thought leads to a couple of other things Dan could test.
>> First, see if turning off full_page_writes makes the hiccup go away.
>> If so, we know the problem is in this area (though still not exactly
>> which
>
> Propose a DTrace probe immediately after the "goto begin" at line 740 of
> xlog.c, so we can start tracing from the first backend following
> checkpoint, and turn off tracing when all backends have completed a
> transaction.
>
That's Greek to me. But I'm happy to test things if you send me
pa
> So this thought leads to a couple of other things Dan could test.
> First, see if turning off full_page_writes makes the hiccup go away.
> If so, we know the problem is in this area (though still not exactly
> which reason); if not we need another idea. That's not a good permanent
> fix though,
> Hm, I'm not sure I believe any of that except the last bit, seeing that
> he's got plenty of excess CPU capability. But the last bit fits with
> the wimpy-I/O problem, and it also offers something we could test.
> Dan, please see what happens when you vary the wal_buffers setting.
> (Note you ne
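A sketch of the experiments being proposed here, as postgresql.conf entries
(the values are only illustrative starting points; full_page_writes takes
effect on a reload, wal_buffers needs a restart on 8.3, and neither is meant
as a permanent fix):
full_page_writes = off   # diagnostic: does the post-checkpoint hiccup disappear?
wal_buffers = 1MB        # vary this and watch how the transient changes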
> As Greg commented upthread, we seem to be getting forced to the
> conclusion that the initial buffer scan in BufferSync() is somehow
> causing this. There are a couple of things it'd be useful to try
> here:
>
> * see how the size of the hiccup varies with shared_buffers;
I tried decreasing sha
> However, the latest report says that he
> managed that, and yet there's still a one-or-two-second transient of
> some sort. I'm wondering what's causing that. If it were at the *end*
> of the checkpoint, it might be the disk again (failing to handle a bunch
> of fsyncs, perhaps). But if it rea
>> Wouldn't increasing the length between checkpoints result in the
>> checkpoint process taking even longer to complete?
>
> You don't really care how long it takes. What you want is for it not to
> be chewing a bigger fraction of your I/O bandwidth than you can spare.
> Hence, you want it to tak
On Wed, Jul 8, 2009 at 1:23 PM, Tom Lane wrote:
> Dan Armbrust writes:
>> With checkpoint_segments set to 10, the checkpoints appear to be
>> happening due to checkpoint_timeout - which I've left at the default
>> of 5 minutes.
>
> Well, you could increase both
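A sketch of what "increase both" might look like in postgresql.conf (8.3-era
setting names; the numbers are illustrative, not tuned recommendations):
checkpoint_segments = 30             # was 10; fewer checkpoints triggered by WAL volume
checkpoint_timeout = 15min           # was the 5min default
checkpoint_completion_target = 0.9   # spread each checkpoint's writes over more of the interval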
On Wed, Jul 8, 2009 at 12:50 PM, Tom Lane wrote:
> Dan Armbrust writes:
>> However, once the checkpoint process begins, I get a whole flood of
>> queries that take between 1 and 10 seconds to complete. My throughput
>> crashes to near nothing. The checkpoint takes betwee
I'm running a steady state test where I am pushing about 600 queries
per second through a Postgres 8.3 system on an 8 CPU Linux system.
It's a mix of inserts, updates, and deletes on a few tables - the two
biggest ones probably have about 200,000 rows.
Harddrive is just a simple, run-of-the-mill de
> These reports seem to come up a bit, with disk full issues resulting in
> the need to pg_resetxlog, dump, and re-initdb, but I wouldn't be too
> shocked if they all turned out to be on xfs or something like that.
>
My particular disk-full condition was on ext2. Nothing exotic. Also,
the proces
> In general, pg_resetxlog would be
> the tool to try here. Don't panic yet. ;-)
>
>
Yep, that was the command I was looking for. That at least got the DB
to a point where it would start, and I was able to do a dump.
So, I dumped and reloaded all of the databases. Things seem fine, but
bits
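The sequence described above, sketched as shell commands (the data directory
path is illustrative; pg_resetxlog can discard recent transactions, so treat
the result as salvage to dump out, not a healthy cluster):
pg_resetxlog /var/lib/pgsql/data
pg_ctl -D /var/lib/pgsql/data start
pg_dumpall -U postgres > salvage.sql    # then re-initdb and reload from the dump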
I had a test system (read as: not backed up, sigh) which had the disk
go full while PostgreSQL was loaded; consequently, PostgreSQL will no
longer start.
It is logging an error about detecting an invalid shutdown, trying to
replay something, and then an error about not being able to open a
file it
> Well, your throughput on this machine is horrible. It looks like with
> 8.1 all your time is sys + cpu for your cpus, while with 8.3 you've
> got more idle and more i/o wait, which tells me that 8.3 is smarter
> about vacuuming, so it's spending less time working the cpus and more
> time waiting
On PostgreSQL 8.1, while a long vacuum is running, the output of
vmstat 10 looks like this (sorry, can't format this very well in this
e-mail client):
r  b  swpd  free  buff  cache  si  so  bi  bo  in  cs  us  sy  id  wa  st
5 2112 53732 4388 116340400 13524 13
I'm no closer to a solution, but here are some additional data points
- all taken on Fedora Core 6.
Postgres 8.1 built from source. Auto vacuum disabled.
Create Empty Database.
Run our load on the system for 2 hours to populate and exercise the database.
Run Vacuum. Takes more than a minute.
R
On Tue, Jan 6, 2009 at 3:36 PM, Tom Lane wrote:
> "Dan Armbrust" writes:
>> INFO: "cpe": found 415925 removable, 50003 nonremovable row versions
>> in 10849 pages
>
>> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
>> that i
>
> Obviously the choice of operating system has no impact on the contents of
> your index.
>
> A better question might be, what did your application or maintenance
> procedures do different in the different tests?
>
>
> --
> Alan
Our problem for a long time has been assuming the "obvious". But w
> On Tue, Jan 6, 2009 at 1:39 PM, Dan Armbrust
> wrote:
>> Here is an interesting new datapoint.
>>
>> Modern Ubuntu distro - PostgreSQL 8.1. SATA drive. No Raid. Cannot
>> reproduce slow vacuum performance - vacuums take less than a second
>> for th
On Tue, Jan 6, 2009 at 3:01 PM, Alvaro Herrera
wrote:
> Dan Armbrust escribió:
>
>> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
>> that is bloating and/or corrupting the indexes like this?
>
> Postgres 8.1 was slow to vacuum btree indexes. My gues
Here is an interesting new datapoint.
Modern Ubuntu distro - PostgreSQL 8.1. SATA drive. No Raid. Cannot
reproduce slow vacuum performance - vacuums take less than a second
for the whole database.
Reinstall OS - Fedora Core 6 - PostgreSQL 8.1. Push data through
PostgreSQL for a couple hours (
>
>> Their workaround had been to run a daily autovacuum at the lowest load
>> time of day, to cause the least disruption.
>
> What is a "daily autovacuum"? It sounds like some tables just need
> vacuuming more often. If they find that the system is not responsive
> during that, it tells us that
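One way to read that advice: schedule plain VACUUMs of just the hot tables
more often, instead of one big daily pass. A cron sketch (the table and
database names are made up for illustration):
*/15 * * * *  psql -d mydb -c 'VACUUM ANALYZE cpe'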
>> INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
>> in 7 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> There were 280173 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 5.35s/0.99u sec elapsed 724.38 sec.
>
> How many idle transactions are th
>> INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
>> in 7 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> There were 280173 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 5.35s/0.99u sec elapsed 724.38 sec.
>>
>> Then, running vacuum again imme
To follow up on an old thread that I started - I had a customer who
had a system where manual vacuum runs were taking a very long time to
run. I was seeing output like this:
INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
in 7 pages
DETAIL: 0 dead row versions cannot be
>
> There was concurrent access to the table during VACUUMing, so the long
> delay is explainable as long waits for cleanup lock, plus probably
> thrashing the cache with bloated indexes. The CPU overhead per row seems
> OK. We should instrument the wait time during a VACUUM and report that
> also.
Thanks everyone. You have helped back up my suspicions. It is indeed
a Linux system, and it has a RAID IO system, but I don't yet know the
details of that IO system. Time to put them back to work looking at
their hardware, rather than blaming our software :)
Thanks for the extra tips on hunting
I have a system backed by a PostgreSQL DB at a customer site that
mysteriously slowed way down - and couldn't keep up with the load for
no apparent reason.
I had them run a vacuum analyze verbose on my database, and had these
lines come back which made me suspicious:
INFO: index "ix_cpe_ispid" n
This is interesting.
On windows, if I attempt to drop a database when I am purposefully
holding a connection open to the DB, it fails immediately - ~ 100 ms.
On linux, if I do the same thing, the drop database command takes
upwards of 4 seconds before it gives up, and says it can't drop the
DB.
It would seem that way. But if you have ever tried programming with
the constraints of an InstallAnywhere installer, you would know why :)
On Fri, Oct 17, 2008 at 10:43 AM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Dan Armbrust escribió:
>> I don't suppose that there is a
I don't suppose that there is any easy way that I can stop and/or
disable the autovacuum temporarily for the database that I want to drop.
The only thing that I have seen so far is that I would have to add
rows to the pg_autovacuum table for each table in my database, but
I'm not confident t
work? That really doesn't seem
right.
Thanks,
Dan
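For the record, a sketch of the pg_autovacuum route mentioned above (this
system catalog existed through 8.3 and was removed in 8.4; -1 means "use the
default", and 8.3 appends two more columns, freeze_min_age and
freeze_max_age, so add two more -1s there):
-- disable autovacuum for one table (8.1/8.2 column layout)
INSERT INTO pg_autovacuum VALUES
    ('mytable'::regclass, false, -1, -1, -1, -1, -1, -1);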
On Fri, Oct 17, 2008 at 9:24 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 17, 2008 at 8:11 AM, Dan Armbrust
> <[EMAIL PROTECTED]> wrote:
>>> There are obvious race conditions in that assumption. Why
> There are obvious race conditions in that assumption. Why don't you
> just try the drop and see if it succeeds?
>
>regards, tom lane
>
I don't follow - why is there a race condition? I'm driving the
commands into postgresql via the command line.
The command that does t
I have some installation tools which in some cases need to replace a
postgres database.
For this example, assume my database name is 'fred'.
Also, assume that no other processes are accessing this database. Any
database access will come from my running installer.
First, after validating some
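A sketch of the "just try the drop" approach from the replies, with a
pre-check that is useful for logging even though it stays racy (8.3-era
column names):
SELECT procpid FROM pg_stat_activity WHERE datname = 'fred';  -- who is still connected?
DROP DATABASE fred;  -- fails cleanly if anyone (including autovacuum) is connected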
> > The part that confused me was the "-2484". Is that just a hyphen in
> > a strange place? Or did it really calculate a negative value?
>
> Yeah, it really did. AFAICS this could only happen if the value of time(2)
> went backwards. Something fooling with your system clock?
>
>
I got a warning from PostgreSQL 8.3.1 that said something to the effect of
checkpoints are happening too frequently... (-2484 seconds)
The part that confused me was the "-2484". Is that just a hyphen in
a strange place? Or did it really calculate a negative value?
Is this expected?
Thanks,
D
On Wed, Apr 30, 2008 at 12:36 PM, <[EMAIL PROTECTED]> wrote:
> > In this case, Postgres had been started in the foreground on a
> > terminal, so I went to that terminal, and did a ctrl-c. Eventually,
> > postgres stopped, but the terminal wouldn't respond either - and I had
> > to close it.
>
> I wonder whether you saw some variant of the deadlock reported in bug
> #3883 --- see discussion here:
> http://archives.postgresql.org/pgsql-hackers/2008-01/msg00873.php
> The only known way of getting into that problem is fixed in 8.3,
> but a potential for silent deadlock is definitely st
> Well, there went the evidence :-( ... but what exactly did you have
> to do to shut it down? I'm wondering whether the backends responded
> to SIGINT or SIGTERM.
Sorry :(
First, I tried issuing a kill to the pid of the vacuum process. It
didn't respond to kill.
Then, I tried a kill -9. It
Semantics of deadlock aside, I seem to be deadlocked, yet postgres
didn't detect any deadlock situation.
There are no DDL queries running. Just a lot of updates, and some
inserts and deletes.
I had to restart postgres to let things recover - so I can't gather
any more info right now - but if/whe
I have an app that we were load testing - it maintains a pool of
connections to PostgreSQL - 8.3.1
Suddenly, after running fine for weeks, the app hung - unable to get a
free connection from the pool.
"select * from pg_stat_activity" shows me that most of my connections
are in a COMMIT phase:
03:0
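A fuller version of that query, useful when things wedge like this (8.3
column names; "waiting" shows which backends are blocked on a lock):
SELECT procpid, usename, waiting, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;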
moot point if no changes were planned to the
Windows installer WRT that debug setting.
Thanks,
Dan
On Fri, Feb 29, 2008 at 11:05 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Dan Armbrust" <[EMAIL PROTECTED]> writes:
>
> > If I disable that debugger library in the co
eally just warnings?
Thanks,
Dan
On Fri, Feb 29, 2008 at 10:19 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Dan Armbrust" <[EMAIL PROTECTED]> writes:
>
> > It turns out, that commenting out the "shared_preload_libraries" in
> > the postgresql.conf f
OK, it turns out that the fact that my pg_restore won't work on
Windows is directly related to this (from a couple of days ago)
"Dan Armbrust" <[EMAIL PROTECTED]> writes:
> To follow up on my old thread -
> I tested another install of Postgres 8.3.0.1 - and on windows, the
Using PostgreSQL 8.3 on Windows, I make a backup like this:
pg_dump -U postgres -C -d -D -Fc -f ispaaa-pg.bak ispaaa
Then restore like this:
pg_restore -U postgres -C -d template1 -v -Fc ispaaa-pg.bak
And I get lots of these errors:
pg_restore: creating TABLE voip
pg_restore: creating FUNCTION
To follow up on my old thread -
I tested another install of Postgres 8.3.0.1 - and on windows, the
postgresql.conf file has this in it:
shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'  # (change requires restart)
I got the impression that that debugger wasn't inte
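The workaround that came out of this thread was simply to comment that line
out of postgresql.conf (a restart is required for it to take effect):
#shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'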
Thanks for all the help. Performance is back where I thought it
should be, after I fixed our pooling bug.
I didn't think that postgres would be released with performance issues
like that - it's just too good :)
Thanks,
Dan
So, my ill Postgres 8.3 database is filling up log files in the pg_log
directory with the following:
2008-02-13 18:29:31 CST LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-02-13 18:29:31 CST LOG: loaded library
"$libdir/plugins/plugin_debugger.dll"
2008-02-13 18:29:31 CST LOG: l
Are there any known regression issues WRT performance on the 8.3.0.1
binary build for windows?
And I mean serious -multiple orders of magnitude- performance issues
running simple queries on a small database...
A little more background. I built 8.3.0 on CentOS 5 today. Started
using it with a s
I'm trying to convert a database from either MS Access or MySQL into
Postgres. I have found a couple of tools that will almost do what I
want - but not quite. To make things match up with code that is already
written - I need to have all of the table names and column names
converted to lower
Tony Caduto wrote:
Jim Wilson wrote:
Those Windows developer tools must be working really well
How right you are Jim, far superior and more productive than anything
available on
Linux at this time, though the netbeans IDE comes close, to bad Java is
slow for
almost all desktop application
I have a java application that is trying to dynamically drop a set of
tables. Problem is, when it gets to a particular table and I execute
the "drop table foo cascade" command from a prepared statement, the
query never returns. It just hangs indefinitely. I presume that it is
waiting on a lo
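A sketch of how to see what the DROP is waiting on (pg_locks has looked
roughly like this since 8.0; granted = false marks the waiter):
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
WHERE l.relation = 'foo'::regclass;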
Dave Page wrote:
On 8/2/06 17:26, "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
Certainly not what I want, since that database doesn't exist. Is
there a system database I could always count on being available that I
could connect to?
template1 but only if you allow it from pg_hba.conf.
The
How do I make a connection to a Postgresql database that (potentially)
has 0 databases from jdbc, so that I can issue a create database command?
In that _other_ database, I can make a jdbc connection to an address
like this:
jdbc:mysql://servername/
And then issue a "Create database foo" com
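The PostgreSQL equivalent, given the template1 answer quoted above, is to
point the driver at template1 (always present, assuming pg_hba.conf allows
the connection) and issue CREATE DATABASE from that connection:
jdbc:postgresql://servername/template1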
Is there any way to specify which schema I want to use (or have at the
front of the search path) within the jdbc connection string?
I've seen several people asking for the feature in the archives, and one
person saying they wrote it but the patch was rejected, but no
authoritative answers...
Th
Jim C. Nasby wrote:
> pg_dump handles table ordering properly.
>
Maybe I missed something then, because it didn't last time I tried to
move some data. I had to drop my foreign keys before I could reload it.
Dan
--
Daniel Armbrust
Biomedical Informatics
Mayo Clin
Jim C. Nasby wrote:
Since no one else has mentioned it, there has been discussion this week
on one of the lists (probably -general or -hackers) about expanding the
capabilities of pg_dump. I've advocated for allowing a file that
specifies what objects to dump and what kind of filtering to apply t
My use case is not so much for database backup purposes as it is for
fine grained export and import.
Our database schema consists of 15 tables that represent a terminology.
Each database can hold multiple terminologies - every table has a
terminologyId column which distinguishes one termino
Dan Armbrust wrote:
Bricklen Anderson wrote:
pg_dump --help
...
-n, --schema=SCHEMA dump the named schema only
-s, --schema-only    dump only the schema, no data
-t, --table=TABLE    dump the named table only
...
Just after I sent the e-mail, I realized that I forgot
Bricklen Anderson wrote:
pg_dump --help
...
-n, --schema=SCHEMA dump the named schema only
-s, --schema-only    dump only the schema, no data
-t, --table=TABLE    dump the named table only
...
Just after I sent the e-mail, I realized that I forgot that I saw that
option -
Does postgresql have any facility to dump anything more fine grained
than a database to a text file?
For example, to mention a bad word, MySQL's dump command allows you to
specify individual tables to dump - and not only that - you can also
specify a SQL query that must be satisfied for a row
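pg_dump's -t option covers the per-table case (see the --help excerpts
above); for the row-filtering case, a sketch using COPY with a query (this
form needs 8.2 or later; the table and column names are illustrative):
COPY (SELECT * FROM concept WHERE terminologyid = 42)
    TO '/tmp/concept_42.txt';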
Greg Sabino Mullane wrote:
The other answer may be the license: plugging PG into the MySQL system
(which is about as technically feasible as trying to breed a porpoise
and an elephant) keeps MySQL GPL, which is another reason many people
like it.
The fact that PostgreSQL is NOT released under GP
IBM previously bought Informix (who bought Illustra, RedBrick, and
Cloudscape). None of those takeovers has led to a product actually
surviving.
That's not exactly true - Cloudscape was just given to Apache, and is now
open-sourced under the name "Derby":
http://db.apache.org/derby/
Michael Schmidt wrote:
Mr. Armbrust,
Hopefully, this isn't off topic. Just wondering why SWT instead
of Swing?
Michael Schmidt
I'm pretty sure this is completely off topic, but I'll indulge it -
I've done development using Swing and SWT - and I find SWT's APIs to be
muc
Gerson - Yahoo wrote:
Dear friends,
I need to develop a system natively on Linux (language and database).
Today I use PostgreSQL with Delphi 2005 on Windows 2005 servers, and I
need to know what language to use to do this. What do you suggest?
This language wor
Bruce Momjian wrote:
I have updated the FAQ to handle three of the items you mentioned. In
one case, I added a new FAQ entry (double-quoting identifiers with a
link to our docs), and in two other cases (indexing long columns,
case-insensitive columns) I added to existing FAQ items where
ap
[EMAIL PROTECTED] wrote:
First, make sure the PostgreSQL database is listening for TCP/IP
connections on the computer you're trying to connect to. On Mac OS X, run
this command in a terminal, i.e. Terminal.app:
netstat -l
You should see a line like this somewhere in the output:
tcp6 0 0
Jaime Casanova wrote:
On 8/2/05, Dan Armbrust <[EMAIL PROTECTED]> wrote:
I've been trying to track down some performance problems that I am
having doing large inserts on tables with foreign keys.
I'm 99% sure that the issue I am having is BUG 1552 -
http://archive
An aha moment:
http://archives.postgresql.org/pgsql-bugs/2005-03/msg00183.php
Some of the ensuing conversation seemed to indicate that a change was
made in the 8.0 branch in March, that was intended to fix this issue.
Any idea if that fix would have made it into the 8.0.3 release?
Or maybe
Alvaro Herrera wrote:
On Tue, Aug 02, 2005 at 10:01:50AM -0500, Dan Armbrust wrote:
I shouldn't have to manually run Analyze to make the DB be capable of
handling inserts involving tables with foreign keys correctly. My code
that is doing the inserts is a java application
My loading is done programmatically, from another format, so COPY is
not an option.
Why not? A lot of my bulk-loads are generated from other systems and I
go through a temporary file/pipe via COPY when I can. When I don't, I
batch inserts into groups of e.g. 1000 and stick in an analyse/etc
Joshua D. Drake wrote:
Why can't postgres compile some rough statistics on tables without
running analyze?
Why can't you just run analyze? You don't have to empty the tables to
do so and you can alter the statistics on the fly. Heck you can even
run analyze while doing the inserts.
I shou
What, ALWAYS faster, even for the first FK check when there's only one
row in the target table and that's cached?
If you're really in a hurry doing your bulk loads:
1. Use COPY.
2. Drop/restore the foreign-key constraints before/after.
That will be hugely faster than INSERTs, although it's
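A sketch of that recipe (the constraint, table, and column names are
invented for illustration):
ALTER TABLE conceptproperty DROP CONSTRAINT conceptproperty_concept_fkey;
COPY conceptproperty FROM '/tmp/conceptproperty.dat';
ALTER TABLE conceptproperty
    ADD CONSTRAINT conceptproperty_concept_fkey
    FOREIGN KEY (conceptcode) REFERENCES concept (conceptcode);
ANALYZE conceptproperty;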
Dan Armbrust wrote:
I have one particular insert query that is running orders of magnitude
slower than other insert queries, and I cannot understand why.
For example, Inserts into "conceptProperty" (detailed below) are at
least 5 times faster than inserts into "conceptProperty
Dan Armbrust wrote:
I'm trying to load some data into PostgreSQL 8.0.3, and I got the
error message "index row size 2904 exceeds btree maximum, 2713".
After a bunch of searching, I believe that I am getting this error
because a value that I am indexing is longer than ~ 1/3 of
I have one particular insert query that is running orders of magnitude
slower than other insert queries, and I cannot understand why.
For example, Inserts into "conceptProperty" (detailed below) are at
least 5 times faster than inserts into "conceptPropertyMultiAttributes".
When I am running th
I'm trying to load some data into PostgreSQL 8.0.3, and I got the error
message "index row size 2904 exceeds btree maximum, 2713". After a
bunch of searching, I believe that I am getting this error because a
value that I am indexing is longer than ~ 1/3 of the block size - or the
BLCKSZ variab
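Rather than rebuilding with a larger BLCKSZ, the usual workaround (not from
this thread, so treat it as one option among several) is to index a hash of
the long value and compare through the same expression:
CREATE INDEX ix_prop_md5 ON conceptproperty (md5(propertyvalue));
SELECT * FROM conceptproperty
WHERE md5(propertyvalue) = md5('...the long value...');
Note this only helps equality lookups; it won't make the ILIKE queries
mentioned below use the index.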
Hmm, well, I don't know if it is actually building an index properly on
this column, I just assumed that it was. It doesn't fail on every
insert, only on the one that has a really long text value. I know it
doesn't use the index when I do "ILIKE" queries, resulting in poor
performance... but
also don't want to negatively affect the performance of the rest of
the DB just to accommodate this instance - hence the questions about
the implications of changing the BLCKSZ variable.
Dan
Bruno Wolff III wrote:
On Mon, Jul 18, 2005 at 14:44:26 -0500,
Dan Armbrust <[EMAIL PROTECTED]>
Nevermind this question...
>
Where is the documentation on tsearch2?
Google first, ask second, I remind myself again... I knew I hadn't
seen it mentioned in the official PostgreSQL manual... didn't think
about it being an extension.
Dan
Dan Armbrust wrote:
Hmm, well, I don