On 03/28/12 10:32 PM, Carson Gross wrote:
I've got a pretty big database (~30 gigs) and when I do a pg_dump, it
ends up only being 2 gigs.
The database consists mainly of one very large table (w/ a few varchar
columns) which, according to pg_relation_size(), is 10 gigs,
and pg_total_relation_size() ...
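For reference, the size functions being compared here can be checked side by
side; a minimal sketch, where 'bigtable' is a hypothetical table name:

-- compare heap-only size, total size (indexes + TOAST), and database size
SELECT pg_size_pretty(pg_relation_size('bigtable'))         AS heap_only,
       pg_size_pretty(pg_total_relation_size('bigtable'))   AS incl_indexes_toast,
       pg_size_pretty(pg_database_size(current_database())) AS whole_db;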
On 29 March 2012 09:11, John R Pierce wrote:
> On 03/28/12 10:32 PM, Carson Gross wrote:
>>
>> I've got a pretty big database (~30 gigs) and when I do a pg_dump, it ends
>> up only being 2 gigs.
I suppose you're talking about a plain text dump here? A compressed
dump would likely[*] be quite a bit smaller ...
leaf_yxj wrote:
> For Oracle, a normal user can't see all of the system catalogs, but for
> PostgreSQL, it looks like every user can see the system catalogs. Should
> we limit users' read privileges on the system catalogs?
You can try that, but things may break in unexpected ways.
For example, psql's use of ...
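For illustration, a sketch of what such a restriction could look like;
untested, and as the warning above says, likely to break clients (psql's
backslash commands, pgAdmin, drivers) that read the catalogs:

-- untested sketch: revoke public read access from individual catalogs
REVOKE SELECT ON pg_catalog.pg_proc FROM PUBLIC;
REVOKE SELECT ON pg_catalog.pg_stat_activity FROM PUBLIC;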
Vikash3 S wrote:
> I would like to submit a patch for this TODO list item, which deals with
> the psql client: "psql: Allow processing of multiple -f (file) options".
>
> The code base I am working on is from the Postgres 9.1.3 release.
> But when I diff the code base against the git repository, the changes a...
Hi Toby,
On 28/03/12 09:14, Toby Corkindale wrote:
Is there any function like pg_is_in_backup() to tell if the mode has
been enabled?
Currently, there's no such function. In general we simply check whether a
'backup_label' file exists in PGDATA. However, it could be a good idea
to add it as an admin function (I will have a think about it and possibly
come up with a patch).
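As a rough sketch of what such an admin function might look like (not an
official API; 'my_is_in_backup' is a hypothetical name, and pg_stat_file is
superuser-only):

-- true if a backup_label file exists in PGDATA, false otherwise
CREATE OR REPLACE FUNCTION my_is_in_backup() RETURNS boolean AS $$
BEGIN
    PERFORM pg_stat_file('backup_label');
    RETURN true;
EXCEPTION
    WHEN undefined_file THEN
        RETURN false;
END;
$$ LANGUAGE plpgsql;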
Arvind Singh wrote:
> I have queries regarding columns in Postgres CSV Log.
>
> Following is a sample log line:
>
> 2012-03-28 19:25:47.968 IST,"postgres","stock_apals",2388,"localhost:1898",4f731863.954,6,"SET",2012-03-28 ...
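For reference, the leading csvlog columns (from the documentation's
postgres_log example table, abbreviated here) line up with the sample like
this:

-- leading columns of the csvlog format, annotated with the sample values
CREATE TABLE postgres_log (
    log_time           timestamp(3) with time zone, -- 2012-03-28 19:25:47.968 IST
    user_name          text,                        -- "postgres"
    database_name      text,                        -- "stock_apals"
    process_id         integer,                     -- 2388
    connection_from    text,                        -- "localhost:1898"
    session_id         text,                        -- 4f731863.954
    session_line_num   bigint,                      -- 6
    command_tag        text,                        -- "SET"
    session_start_time timestamp with time zone     -- 2012-03-28 ...
    -- ... remaining columns omitted; see the csvlog documentation
);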
Hi Toby,
On 29/03/12 10:46, Gabriele Bartolini wrote:
Currently, there's no such function. In general we simply check whether a
'backup_label' file exists in PGDATA. However, it could be a good idea
to add it as an admin function (I will have a think about it and possibly
come up with a patch).
Thanks for your response!
It is a browser-based application, so would you advise me about the best way
to poll the database for notifications?
I've been reading about DB triggers but still can't tell if they will help me.
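For what it's worth, the usual pattern is LISTEN/NOTIFY driven by a trigger
rather than repeatedly polling a table; a minimal sketch, where 'messages',
'notify_new_row', and the 'new_row' channel are hypothetical names:

-- a trigger fires NOTIFY on insert; the app holds a connection that has
-- run LISTEN and checks for notifications
CREATE FUNCTION notify_new_row() RETURNS trigger AS $$
BEGIN
    NOTIFY new_row;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER notify_on_insert
    AFTER INSERT ON messages
    FOR EACH ROW EXECUTE PROCEDURE notify_new_row();

-- in the application's session:
LISTEN new_row;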
Hi all,
Is there any real reason why checkpoint_timeout is limited to 1 hour?
In my case I have some replicas with WAL on a SAS RAID and PGDATA on SSDs with
limited write endurance.
And I don't worry about a possibly long recovery after a power failure in
that case.
What's more, the working dataset fits in s...
On Wed, Mar 28, 2012 at 6:31 PM, Naoko Reeves wrote:
> Do you think this should be the next step I might take?
> Could you give me advice on how I could identify the corruption?
It seems to me that since you can successfully dump the table (I
assume you validated the data was all there somehow) ...
Oh, thanks for the answer on the PG log.
Our Postgres is on Windows; it is version 9.0.5. There are a variety of
applications, like CRM and stock-related ones,
and now there is an online game based on it.
We are providing application development, statistics, and data-metrics
support. For simple application...
On Wed, Mar 28, 2012 at 6:31 PM, Naoko Reeves wrote:
> Version: "PostgreSQL 8.4.6 on
Oh, and also upgrade to 8.4.11 to ensure you do not have any known
data loss bugs.
Arvind Singh wrote:
> Query 1
> ---
> Do we have a standard list of the following log codes?
> - command_tag, e.g. IDLE, SELECT, ...
See the source code for your version:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/tcop/utility.c;h=ec36644a492ab69d5306b52294daab0599f332
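If the csvlog has been loaded into a table such as the postgres_log sketch
earlier, the command tags that actually occur can also be surveyed directly;
for example:

-- count how often each command tag appears in the imported log
SELECT command_tag, count(*)
  FROM postgres_log
 GROUP BY command_tag
 ORDER BY count(*) DESC;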
Interesting. Is there a perf hit to having a big file on disk? My
understanding is that the primary thing that really matters is keeping your
active set in memory.
This is on Postgres 9.0.x, running on Heroku/ec2.
We do have extremely compressible data, so it may be that the dump is
compressed: ...
I have a database which contains two primary sets of data:
1) A large (~150GB) dataset. This data set is mainly static. It is
updated, but not by the users (it is updated by our company, which provides
the data to users). There are some deletions, but it is safe to consider
this an "add-only" database ...
On 03/29/12 9:43 AM, Jonathan Bartlett wrote:
1) A large (~150GB) dataset. This data set is mainly static. It is
updated, but not by the users (it is updated by our company, which
provides the data to users). There are some deletions, but it is safe
to consider this an "add-only" database, w
>
>
> by 'dataset' do you mean table, aka relation?
>
It's a group of tables.
> by 'not using any referential integrity', do you mean you're NOT using
> foreign keys ('REFERENCES table(field)' in your table declarations)?
Correct.
Also, many queries cross the datasets together.
Hi Jonathan,
On 29/03/12 19:01, Jonathan Bartlett wrote:
Now, my issue is that right now when we do updates to the
dataset, we have to make them to the live database. I would
prefer to manage data releases the way we manage software
releases - have a staging area ...
I'm storing vector map attribute data in postgres tables and somehow
managed to create two databases (of similar names) rather than one. I want
to combine the two.
For tables that exist in the one database I want to eliminate, I thought
to use pg_dump to create .sql files, then use pg_restore ...
Hi Rich,
On 29/03/12 21:10, Rich Shepard wrote:
For tables that exist in the one database I want to eliminate, I thought
to use pg_dump to create .sql files, then use pg_restore to add the table to
the other database. Did this for one table (with 4201 rows), but 'pg_restore
-d databas...
Hi,
First of all, shut down both servers (you indicated that you have a
replica) and make a full copy of both data directories. At the first
sign of corruption, that's always a good step as long as it's a
practical amount of data (obviously this is more of a challenge if you
have terabytes of data).
On 3/29/2012 2:10 PM, Rich Shepard wrote:
I'm storing vector map attribute data in postgres tables and somehow
managed to create two databases (of similar names) rather than one. I want
to combine the two.
For tables that exist in the one database I want to eliminate, I thought
to use pg_dump to
On Thu, 29 Mar 2012, Gabriele Bartolini wrote:
I suggest that you look at the -l and -L options in pg_restore, which
allow you to select which dump entries to restore (selective restore).
Gabriele,
After sending the message I realized the proper syntax is 'psql -d
database -f table.sql'. Th...
On Thu, 29 Mar 2012, Andy Colson wrote:
How many tables are we talking about? If it's a few tables, I'd rename them:
alter table lake rename to lake_old;
... etc
then dump it out and restore into the proper db.
Andy,
This will work just fine. Thanks for the insight.
Rich
Tom -
Your suggestion gives me an idea, and I'd like your opinion since I haven't
done much with schemas.
(1) Separate the datasets into different schemas
(2) Use different schema names for different static data releases
(3) For the *company*, we can use a schema search path that includes the
next ...
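To make the idea concrete, a minimal sketch of steps (1)-(3); the schema name
'release_2012_04' and the role 'app_user' are hypothetical:

-- stage the next static data release in its own schema
CREATE SCHEMA release_2012_04;
-- ... load and verify the new tables here ...

-- company-internal sessions can look at the staged release first:
SET search_path TO release_2012_04, public;

-- at release time, point the application roles at it:
ALTER ROLE app_user SET search_path = release_2012_04, public;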
On Thu, Mar 29, 2012 at 2:39 PM, Jonathan Bartlett wrote:
> Tom -
>
> Your suggestion gives me an idea, and I'd like your opinion since I haven't
> done much with schemas.
>
> (1) Separate the datasets into different schemas
> (2) Use different schema names for different static data releases
> (3)
Jonathan Bartlett wrote on 29.03.2012 22:39:
Your suggestion gives me an idea, and I'd like your opinion since I haven't
done much with schemas.
(1) Separate the datasets into different schemas
(2) Use different schema names for different static data releases
(3) For the *company*, we can use a
On 03/29/12 2:16 PM, Thomas Kellerer wrote:
Jonathan Bartlett wrote on 29.03.2012 22:39:
Your suggestion gives me an idea, and I'd like your opinion since I
haven't done much with schemas.
(1) Separate the datasets into different schemas
(2) Use different schema names for different static data
In trying to get an SQL statement to return a default value, I read in the docs:
"The COALESCE function returns the first of its arguments that is not null.
Null is returned only if all arguments are null. It is often used to
substitute a default value for null values when data is retrieved for display."
On Fri, Mar 30, 2012 at 9:16 AM, David Salisbury wrote:
> development=# select coalesce(anum,100) from t1 where anum = 4;
What you have there is rather different from COALESCE, as you're
looking for a case where the row completely doesn't exist. But you can
fudge it with an outer join.
Untested
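One way to write such an outer-join fudge (an untested sketch, reusing t1 and
anum from the question):

-- returns 100 when no row with anum = 4 exists, and 4 when one does
SELECT COALESCE(t1.anum, 100) AS anum
  FROM (SELECT 4 AS wanted) AS q
  LEFT JOIN t1 ON t1.anum = q.wanted;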
Hello
2012/3/30 David Salisbury:
>
> In trying to get an sql stmt to return a default value, I read in the docs..
>
> "The COALESCE function returns the first of its arguments that is not null.
> Null is returned only if all arguments are null. It is often used to
> substitute a default value for
On 3/29/12 4:26 PM, Chris Angelico wrote:
On Fri, Mar 30, 2012 at 9:16 AM, David Salisbury wrote:
development=# select coalesce(anum,100) from t1 where anum = 4;
What you have there is rather different from COALESCE, as you're
looking for a case where the row completely doesn't exist. But
On 29/03/12 19:50, Gabriele Bartolini wrote:
Hi Toby,
On 29/03/12 10:46, Gabriele Bartolini wrote:
Currently, there's no such function. In general we simply check whether a
'backup_label' file exists in PGDATA. However, it could be a good idea
to add it as an admin function (I will have a think about it and possibly
come up with a patch).
We have a number of varchar fields and I'm looking to see what the
greatest data length in any is, after UTF-8 encoding. The two-argument
length function appears (I think) to take a byte array, so it's the
opposite of what I'm looking for (give it a UTF-8 encoded string and
the second parameter 'UTF8' ...
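A sketch of the opposite direction: encode the text to UTF-8 bytes with
convert_to() and measure those with octet_length(); 'mytable' and 'mycol' are
hypothetical names:

-- greatest byte length of a text column after UTF-8 encoding
SELECT max(octet_length(convert_to(mycol, 'UTF8')))
  FROM mytable;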
It depends on what exactly it is you're trying to do, and where your
default is supposed to be used. Are you wanting a single number returned?
In that case, something like this:
SELECT COALESCE((SELECT anum FROM t1 WHERE anum=4 [ LIMIT 1 ]),100)
That would get you back a 4 or 100 in this case. I...
On Thu, 2012-03-29 at 14:49 -0500, Andy Colson wrote:
> On 3/29/2012 2:10 PM, Rich Shepard wrote:
> > I'm storing vector map attribute data in postgres tables and somehow
> > managed to create two databases (of similar names) rather than one. I want
> > to combine the two.
> >
> > For tables that e
Vick,
Thank you very much. Yes, I just went ahead and did what you said and all
appears to be fine.
On Thu, Mar 29, 2012 at 7:08 AM, Vick Khera wrote:
> On Wed, Mar 28, 2012 at 6:31 PM, Naoko Reeves wrote:
> > Do you think this should be the next step I might take?
> > Could you give me an advice
Thank you very much!!! I really appreciate it.
Grace
At 2012-03-29 16:18:23,"Albe Laurenz *EXTERN* [via PostgreSQL]"
wrote:
leaf_yxj wrote:
> For Oracle, a normal user can't see all of the system catalogs, but for
> PostgreSQL, it looks like every user can see the system catalogs. Should
> we l...
On Thu, Mar 29, 2012 at 7:47 PM, Naoko Reeves wrote:
> Vick,
> Thank you very much. Yes, I just go ahead did what you said and all appears
> to be fine.
You definitely need to check your hardware for faults, especially the
one that caused your server to crash. Run some memory tests, drive
tests ...
Hi all, friends. Thanks for replying to my questions; you've helped me a lot.
Thank you. I have another question, please.
After I create tables and roles, I want to double-check that I granted the
proper privileges to the users. I can
use the \dp and \z commands to check object privileges. I can use pg_aut...
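For a flat listing of table privileges per role, information_schema can also
help; a minimal sketch, with 'grace' as a hypothetical role name:

-- list table privileges granted to one role
SELECT grantee, table_schema, table_name, privilege_type
  FROM information_schema.role_table_grants
 WHERE grantee = 'grace';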
On Thu, Mar 29, 2012 at 7:49 PM, leaf_yxj wrote:
> Hi all, friends. Thanks for replying to my questions; you've helped me a lot.
> Thank you. I have another question, please.
>
> After I create tables and roles, I want to double-check that I granted the
> proper privileges to the users. I can
> use the \dp and \...
Hi Chris,
My bosses asked me to list all the users and all the privileges which the
superuser granted to the users,
so they can double-check whether I did the right thing.
Thanks.
Grace
At 2012-03-30 10:54:50, "Chris Travers-5 [via PostgreSQL]" wrote:
On Thu, Mar 29, 2012 at 7:49 PM, leaf