On 2/18/2014 12:34 AM, Heikki Linnakangas wrote:
> On 02/18/2014 12:14 AM, David Wall wrote:
>> I am running PG 9.2.4 and I am trying to figure out why my database
>> size shows one value, but the sum of my total relation sizes is so
>> much less. Basically, I'm told my database is 188MB, but the sum of
>> my total relation sizes adds up to just 8.7MB, which is 1/20th of
>> the reported total. Where does the rest of the space go?
Adrian Moisey wrote:
> Hi
>
> INFO: "blahxxx": scanned 27 of 27 pages, containing 1272 live rows
> and 0 dead rows; 1272 rows in sample, 1272 estimated total rows

This is a small table that takes up 27 pages, and it scanned all of
them. You have 1272 rows in it and none of them are dead (i.e.
deleted/updated but still taking up space).
Hi

You are tracking ~4.6 million pages and have space to track ~15.5
million, so that's fine. You are right up against your limit of 1200
tracked relations (tables, indexes, etc.), though. You'll probably
want to increase max_fsm_relations; see the manual for details
(Server Configuration chapter).
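For an 8.2-era server, the free space map settings live in postgresql.conf and need a restart to take effect. A sketch with illustrative values only (these parameters were removed in 8.4, where the free space map became automatic):

```
# postgresql.conf (8.2/8.3 era only)
max_fsm_pages     = 16000000   # currently tracking ~4.6M of ~15.5M pages
max_fsm_relations = 2000       # currently at the 1200-relation limit
```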
Adrian Moisey wrote:
> Hi
>
>> Running VACUUM VERBOSE will give you a detailed view of space usage
>> of each individual table.
>
> I did that.
>
> Not too sure what I'm looking for; can someone tell me what this means:
>
> INFO: "blahxxx": scanned 27 of 27 pages, containing 1272 live rows and
> 0 dead rows; 1272 rows in sample, 1272 estimated total rows
Adrian Moisey <[EMAIL PROTECTED]> wrote:
> Hi
>
>> Now, is the bloat in the tables (which tables?) or in the indexes
>> (which indexes?), or in the toast tables perhaps, or in the system
>> catalogs, or all of the above? Or perhaps there is a long-forgotten
>> process that got zombified while holding a huge temp table? (Not
>> very likely, but possible.)
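One way to answer "which tables, which indexes" is to rank relations by their on-disk pages. A sketch against the pg_class catalog (relpages is the estimate refreshed by VACUUM/ANALYZE; 8192 assumes the default block size):

```sql
SELECT relname, relkind, relpages,
       pg_size_pretty(relpages::bigint * 8192) AS approx_size
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;
```

relkind 'r' rows are tables, 'i' indexes, and 't' TOAST tables, so the heaviest offenders in each category show up directly.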
In response to Adrian Moisey <[EMAIL PROTECTED]>:
> We currently have a 16CPU 32GB box running postgres 8.2.
>
> When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
> UTF8 -F c -b" I get a file of 14GB in size.
>
> But the database is 110GB in size on the disk. Why the big difference
> in size?
>
> Will this help with performance?

Depends if the bloat is in part of your working set. If debloating can
make the working set fit in RAM, or lower your IOs, you'll get a boost.
Now, is the bloat in the tables (which tables?) or in the indexes
(which indexes?), or in the toast tables perhaps, or in the system
catalogs?
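Whether the working set currently fits in RAM can be roughly gauged from the buffer-cache hit ratio, using the standard pg_stat_database view:

```sql
SELECT blks_hit::float / nullif(blks_hit + blks_read, 0) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```

A ratio well below ~0.99 on a read-heavy workload suggests the working set is spilling out of shared buffers, so debloating is more likely to help. (Caveat: blks_read counts reads requested from the OS, which may still be served from the OS page cache.)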
Hi

>> the live one is 113G
>> the restored one is 78G
>
> Good news for you is that you know that you can do something ;)

:)

Will this help with performance?
Adrian Moisey wrote:
> Hi
>
>> If you suspect your tables or indexes are bloated, restore your
>> dump to a test box.
>> Use fsync=off during restore, you don't care about integrity on
>> the test box.
>> This will avoid slowing down your production database.
>> Then look at the size of the restored database.
>> If it is much smaller than the live one, you have bloat.
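The restore-and-compare procedure can be sketched as follows (testdb and the dump path are placeholders, not from the thread; fsync is set in the test box's postgresql.conf before the restore):

```
# test box postgresql.conf:
#   fsync = off      # safe only because this box is disposable

# restore the custom-format dump produced by pg_dump -F c:
pg_restore -d testdb /path/to/dump.fc

# then compare sizes with the live database:
psql -d testdb -c "SELECT pg_size_pretty(pg_database_size('testdb'));"
```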
Hi Adrian,

> When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
> UTF8 -F c -b" I get a file of 14GB in size.

From the man page of pg_dump:

"
-F format, --format=format
    Selects the format of the output. format can be one of the following:

    c
        Output a custom archive suitable for input into pg_restore.
        This format is also compressed by default.
"
Hi

We currently have a 16CPU 32GB box running postgres 8.2.

When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
UTF8 -F c -b" I get a file of 14GB in size.

But the database is 110GB in size on the disk. Why the big difference
in size? Does this have anything to do with performance?