> The usual set of tricks is to
> increase shared_buffers, checkpoint_segments, and checkpoint_timeout to cut
> down
Uh, didn't know shared_buffers played a role in index insertion as well...
got to try that. Thank you.
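For reference, the kind of postgresql.conf settings being discussed
would look something like this (values are purely illustrative,
8.3-era parameter names; tune them to your RAM and I/O):

  shared_buffers = 2GB                 # more cache to absorb random index writes
  checkpoint_segments = 64             # fewer, larger checkpoints
  checkpoint_timeout = 15min           # spread checkpoints further apart
  checkpoint_completion_target = 0.9   # smooth checkpoint I/O (8.3+)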
> On a few very narrow applications I've gotten good
> performance in the
> low hundreds. After that things fall apart
> quickly.
Ehm... what exactly does "fall apart quickly" mean?
I can trade some "select" speed for "insert" speed...
I don't have experience with partitioning; if some of
you alr
> Does it help to reindex the index at that point?
Didn't try; but I guess a reindex of such a big table
would block inserts for a long time... but I'll try
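One way to sidestep most of that blocking (available since 8.2) is to
build a replacement index without locking out writers, then swap it
in; a rough sketch, with invented names:

  CREATE INDEX CONCURRENTLY big_table_a_new ON big_table (a);
  DROP INDEX big_table_a;                    -- brief lock only
  ALTER INDEX big_table_a_new RENAME TO big_table_a;

Note that CREATE INDEX CONCURRENTLY is slower than a plain rebuild and
cannot run inside a transaction block.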
> Bad. The partitioning code isn't designed to scale
> beyond a few dozen partitions.
What kind of problems am I going to experience?
It
Craig Ringer wrote:
> On 19/04/2010 4:51 PM, Craig Ringer wrote:
>
>> There might be another possible approach that uses the system
>> "xmin/xmax" fields of each tuple. That'd permit your incremental dumps
>> to be done read-only, saving you a whole lot of expensive I/O and bloat.
>> I'm just not
Bruce Momjian wrote:
> Tom Lane wrote:
> > John R Pierce writes:
> > > I don't know if the build trees can be structured so the include
> > > directories can be differentiated the same as the bin and lib...
> > The RPM distributions are able to deal with this without actually
> > differentiating: if you
Tom Lane wrote:
> John R Pierce writes:
> > I don't know if the build trees can be structured so the include
> > directories can be differentiated the same as the bin and lib...
>
> The RPM distributions are able to deal with this without actually
> differentiating: if you install both 32- and 6
John R Pierce wrote:
> there's a definite structural problem in that the 32 bit and 64 bit
> Solaris binaries are in the same tree, with just the bin and lib
> directories differentiated. you can in theory install them both, and
> point them to different $PGDATA directories, and have them both
John R Pierce writes:
> I don't know if the build trees can be structured so the include
> directories can be differentiated the same as the bin and lib...
The RPM distributions are able to deal with this without actually
differentiating: if you install both 32- and 64-bit RPMs then the
overlapp
Bruce Momjian wrote:
> Yes, great. One point is that while you are trying to fix this for
> the one-off case, we should be realizing that we need a proper fix so
> all your future upgrades will be clean, and other users will not also
> have this problem. I agree with your approach to first find out if
sunpeng wrote:
> I noticed that in the function StartBufferIO() in bufmgr.c there is no
> actual reading of buffer data from disk; it only sets InProgressBuf = buf
> and returns true.
> I wondered: where is the reading of the buffer data from disk really done?
> Is there a background process to do this work? Which function
sunpeng writes:
> I noticed that in the function StartBufferIO() in bufmgr.c there is no
> actual reading of buffer data from disk; it only sets InProgressBuf = buf
> and returns true.
> I wondered: where is the reading of the buffer data from disk really done?
The caller. StartBufferIO is only concerned with interlo
I noticed that in the function StartBufferIO() in bufmgr.c there is no
actual reading of buffer data from disk; it only sets InProgressBuf = buf
and returns true.
I wondered: where is the reading of the buffer data from disk really done?
Is there a background process to do this work? Which function does this
work?
Thanks
The version is 8.3.3, and I use autovacuum for the routine maintenance.
The ctid's are distinct
grande=# select oid, ctid, relname from pg_class where oid IN
(26770910, 26770918, 26770919, 26770920);
   oid    |  ctid   | relname
----------+---------+----------
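When two live tuples really do exist for the same oid, the usual (and
dangerous) remedy is to remove the bogus copy by ctid as a superuser,
after taking a backup and verifying which tuple is the good one.
Purely a sketch; the ctid value here is invented:

  DELETE FROM pg_class
  WHERE oid = 26770910 AND ctid = '(123,45)';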
On Mon, Apr 19, 2010 at 5:14 PM, Wells Oliver wrote:
> Just did an install via apt-get of 8.4 and ... well ... I was able to set
> the password for the postgres user, then I log in and type:
>
> createdb test
Is that from a command line that looks like this:
smarl...@steamboat:/$
or this:
Welc
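The distinction matters: createdb is an operating-system program,
while inside psql you need SQL. Typing "createdb test" at the psql
prompt just sits in the query buffer waiting for a semicolon, which
would explain the silence:

  $ createdb test                      # at the shell prompt

  postgres=# CREATE DATABASE test;     -- the equivalent inside psql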
Leonardo F wrote:
> Is there anything else I can try to "help" postgres update those
> indexes faster?
It sounds like your problem is that read/write activities on the indexes
are becoming increasingly large and random as more data is inserted.
There are two standard ways to improve on that:
Just did an install via apt-get of 8.4 and ... well ... I was able to set
the password for the postgres user, then I log in and type:
createdb test
It goes quietly, which according to the documentation is a success, but...
\l doesn't show the database. Nor does
/var/log/postgresql/postgresql-8.4
> Alban Hertroys wrote:
>
> It would help if you'd show us what result you expect from ordering the
> above.
>
> Most people would order this by path I think. However that doesn't match
> your sort column and I can't think of any method that would give results
> in such an arbitrary order as you s
John R Pierce wrote:
> Tom Lane wrote:
> > No, but trying to build against a non-self-consistent set of files is
> > bad. You really need a pg_config.h that matches the original build of
> > the server, and you haven't got that. I think Greg's point is that
> > trying to reverse-engineer that fil
On 19 Apr 2010, at 9:23, cojack wrote:
> Hello,
> id | path | sort
> ----+------+------
George Woodring writes:
> Upon investigation I found that I have a table that is in the database twice
> db=> select oid, relname from pg_class where oid IN (26770910,
> 26770918, 26770919);
>    oid    | relname
> ----------+---------
> 26770910 | av
George Woodring wrote:
> I have found that I have a database problem after receiving the
> following error from pg_dump:
Lack of vacuuming, most likely. What version is this? Did you read
previous threads about this problem in the archives?
--
Alvaro Herrera
Rene Schickbauer wrote:
> Hi!
>
> I'm looking into implementing an updateable "virtual" table using pl/perl.
>
> What I basically want to do is read out/update some external
> resources. For programs using the SQL interface, this should look
> just like a regular table except that it isn't ACI
Merlin Moncure writes:
> *) csv is supposed to be CRLF *always*. We do not do this. We do
> however read different types of newlines. So we are a 4180 reader but
> not an emitter. Not so sure if I think changing this is a good idea
> though without exposing a knob.
Given the lack of field com
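For anyone who wants to check the emitter behaviour themselves, an
8.x-era example (table name invented); as Merlin notes, the output
lines end in LF rather than the CRLF that RFC 4180 prescribes:

  COPY mytable TO STDOUT CSV;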
On Mon, Apr 19, 2010 at 11:04 AM, Tom Lane wrote:
> Leonardo F writes:
>> I have a simple table that has indexes on 2 integer columns.
>> Data is inserted very often (no updates, no deletes, just inserts):
>> at least 4000/5000 rows per second.
>> The input for the 2 indexed columns is very rando
Leonardo F writes:
> I have a simple table that has indexes on 2 integer columns.
> Data is inserted very often (no updates, no deletes, just inserts):
> at least 4000/5000 rows per second.
> The input for the 2 indexed columns is very random.
> Everything is "fine" for the first 10-20M rows; aft
On Mon, 2010-04-19 at 09:39 -0600, Scott Marlowe wrote:
> On Mon, Apr 19, 2010 at 9:34 AM, Vasiliy G Tolstov
> wrote:
> > On Mon, 2010-04-19 at 09:18 -0600, Scott Marlowe wrote:
> >> pg_dump?
> >
> > Yes. But can this tool dump all the domains, triggers, procedures and
> > functions belonging to a speci
On Thu, Apr 15, 2010 at 12:29 AM, Tom Lane wrote:
> Bruce Momjian writes:
>> Merlin Moncure wrote:
>>> aside: anyone know if postgres properly handles csv according to rfc4180?
>
>> Wow, I had no idea there was an RFC for CSV.
>
> Me either. I'd bet the percentage of "CSV"-using programs that ac
Hi,
I have a simple table that has indexes on 2 integer columns.
Data is inserted very often (no updates, no deletes, just inserts):
at least 4000/5000 rows per second.
The input for the 2 indexed columns is very random.
Everything is "fine" for the first 10-20M rows; after that, performance
get
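A hypothetical reconstruction of the schema being described, just to
fix ideas (all names invented):

  CREATE TABLE events (
      a       integer,   -- random values, indexed
      b       integer,   -- random values, indexed
      payload text
  );
  CREATE INDEX events_a_idx ON events (a);
  CREATE INDEX events_b_idx ON events (b);
  -- load pattern: 4000-5000 INSERTs/second, no UPDATEs or DELETEs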
On Mon, Apr 19, 2010 at 9:34 AM, Vasiliy G Tolstov wrote:
> On Mon, 2010-04-19 at 09:18 -0600, Scott Marlowe wrote:
>> pg_dump?
>
> Yes. But can this tool dump all the domains, triggers, procedures and
> functions belonging to a specific database?
>
Yep. It makes a consistent backup of an entire datab
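For example (database and file names invented):

  pg_dump -f mydb.sql mydb          # schema plus data, including domains,
                                    # triggers, functions and procedures
  pg_dump -s -f schema.sql mydb     # schema objects only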
On Mon, 2010-04-19 at 09:18 -0600, Scott Marlowe wrote:
> pg_dump?
Yes. But can this tool dump all the domains, triggers, procedures and
functions belonging to a specific database?
>
> On Mon, Apr 19, 2010 at 9:07 AM, Vasiliy G Tolstov
> wrote:
> > Hello. I'm new to PostgreSQL (I was using MySQL before)
pg_dump?
On Mon, Apr 19, 2010 at 9:07 AM, Vasiliy G Tolstov wrote:
> Hello. I'm new to PostgreSQL (I was using MySQL before).
> MySQL supports dumping all triggers, relations, and stored procedures from
> a specific database to one file.
> How about this in PostgreSQL?
>
>
> --
> Vasiliy G Tolstov
> Selfip.
Hello. I'm new to PostgreSQL (I was using MySQL before).
MySQL supports dumping all triggers, relations, and stored procedures from
a specific database to one file.
How about this in PostgreSQL?
--
Vasiliy G Tolstov
Selfip.Ru
On Mon, Apr 19, 2010 at 5:13 AM, Andre Lopes wrote:
> Ok,
>
> So the best option is to send the data to a table and then, with a cronjob,
> verify whether there is data to send to the MySQL database.
correct. this is the classic way of doing this type of work.
merlin
Hi all -
Can you please suggest a way to handle code migration
between environments? We have SVN. Can we first check in and then check out
to the different environments? Or should we back up and restore the function
or database object to the other environment? Appreciate your suggest
On 19/04/2010 4:51 PM, Craig Ringer wrote:
> There might be another possible approach that uses the system
> "xmin/xmax" fields of each tuple. That'd permit your incremental dumps
> to be done read-only, saving you a whole lot of expensive I/O and bloat.
I'm just not sure what I'm thinking of will wor
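A very rough sketch of the xmin idea (8.3+ txid functions; this
deliberately ignores transaction-id wraparound and long-running
transactions, and the xmin cast is an unofficial trick, so treat it
as an illustration only):

  -- at the end of each dump, remember a watermark:
  SELECT txid_current();                  -- store this number externally

  -- next incremental run: rows created since the watermark
  SELECT *
  FROM big_table                          -- invented name
  WHERE xmin::text::bigint > 1234567;     -- previous stored watermark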
Not in a huge hurry (fixed right is better than fixed fast); just trying
to figure out where the fix is in terms of progress.
Koichi Suzuki wrote:
> Not yet. I'm rebuilding the test suite for better testing. If you're
> in a hurry, I can send you an update for test.
>
> --
> Koichi Suzuk
Has there been an update on this situation?
Koichi Suzuki wrote:
> I understand the situation. I'll upload the improved code ASAP.
>
> --
> Koichi Suzuki
>
>
>
> 2010/2/11 Karl Denninger :
>
>> Will this come through as a commit on the pgfoundry codebase? I've
>> subscribed looking f
I have found that I have a database problem after receiving the
following error from pg_dump:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: more than one row returned
by a subquery used as an expression
pg_dump: The command was: SELECT tableoid, oid, typname, typnamespace
Not yet. I'm rebuilding the test suite for better testing. If you're
in a hurry, I can send you an update for test.
--
Koichi Suzuki
2010/4/19 Karl Denninger :
> Has there been an update on this situation?
>
> Koichi Suzuki wrote:
>
> I understand the situation. I'll upload the impr
Hello,
I'm here because Oleg Bartunov invited me to this mailing list. I'm searching
for help with the ltree module; its todo list has had "Better
documentation" on it since 2003, and no one has done anything more about it. What
is the problem? With the example from the manual about ltree, we have some data
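To make the discussion concrete, a minimal setup along the lines of
the manual's ltree example (table and values invented; assumes the
contrib ltree module is installed):

  CREATE TABLE test (id serial, path ltree, sort integer);
  INSERT INTO test (path, sort)
  VALUES ('Top.Hobbies', 2), ('Top.Science', 1);

  SELECT * FROM test ORDER BY path;  -- label-by-label, roughly alphabetical
  SELECT * FROM test ORDER BY sort;  -- an independent, arbitrary order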
Hi!
I'm looking into implementing an updateable "virtual" table using pl/perl.
What I basically want to do is read out/update some external resources.
For programs using the SQL interface, this should look just like a
regular table except that it isn't ACID compliant (no rollbacks).
I'm pret
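One 8.x-era way to approximate this is a view over a set-returning
pl/perl function, with rewrite rules redirecting writes; very much a
sketch, all names invented and no error handling:

  CREATE VIEW ext_resource AS
      SELECT * FROM ext_resource_read();   -- hypothetical plperl SRF

  CREATE RULE ext_resource_update AS
      ON UPDATE TO ext_resource
      DO INSTEAD SELECT ext_resource_write(NEW.id, NEW.value);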
Ok,
So the best option is to send the data to a table and then, with a cronjob,
verify whether there is data to send to the MySQL database.
Best Regards,
André.
On Mon, Apr 19, 2010 at 10:09 AM, Pavel Stehule wrote:
> 2010/4/19 Andre Lopes :
> > Hi,
> >
> > Thanks for the reply.
> >
> > I can't insta
2010/4/19 Andre Lopes :
> Hi,
>
> Thanks for the reply.
>
> I can't install languages. The database is on shared hosting.
>
> Are there other solutions?
yes. you can store values in some stack table (via a trigger) and later
move the contents of this table to MySQL from the application. But you can't
execu
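A sketch of that stack-table approach (assumes a procedural language
such as plpgsql is already installed; all names invented):

  CREATE TABLE mysql_outbox (
      id      serial PRIMARY KEY,
      payload text,
      queued  timestamptz NOT NULL DEFAULT now()
  );

  CREATE FUNCTION queue_for_mysql() RETURNS trigger AS $$
  BEGIN
      INSERT INTO mysql_outbox (payload) VALUES (NEW.some_column);
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER source_table_queue AFTER INSERT ON source_table
      FOR EACH ROW EXECUTE PROCEDURE queue_for_mysql();

The cron job then reads mysql_outbox, pushes the rows into MySQL, and
deletes what it has shipped.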
Hi,
Thanks for the reply.
I can't install languages. The database is on shared hosting.
Are there other solutions?
Best Regards,
On Mon, Apr 19, 2010 at 10:00 AM, Pavel Stehule wrote:
> Hello
>
> 2010/4/19 Andre Lopes :
> > Hi,
> >
> > I have an account in a shared hosting account and I c
Hello
2010/4/19 Andre Lopes :
> Hi,
>
> I have an account in a shared hosting account and I can't install any 3rd
> party software.
>
> I need to send data from a PostgreSQL database to a MySQL database. I was
> thinking of using a trigger, with the trigger calling a PHP file to help me
> pass the d
Ostrovsky Eugene wrote:
> Hi.
> I need to export data from the database to an external file. The difficulty
> is that only data modified or added since the previous export should be
> written to the file.
> I am considering adding a "modification_time" timestamp field to all the tables
> that should be exported.
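A sketch of that modification_time approach (names invented; assumes
plpgsql is available):

  ALTER TABLE doc ADD COLUMN modified_at timestamptz NOT NULL DEFAULT now();

  CREATE FUNCTION touch_modified() RETURNS trigger AS $$
  BEGIN
      NEW.modified_at := now();
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER doc_touch BEFORE UPDATE ON doc
      FOR EACH ROW EXECUTE PROCEDURE touch_modified();

  -- the incremental export then becomes:
  COPY (SELECT * FROM doc WHERE modified_at > '2010-04-18 00:00')
      TO '/tmp/increment.csv' CSV;

One caveat: this cannot detect rows deleted since the last export.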
Hi,
I have an account in a shared hosting account and I can't install any 3rd
party software.
I need to send data from a PostgreSQL database to a MySQL database. I was
thinking of using a trigger, with the trigger calling a PHP file to help me
pass the data to the MySQL database.
It is possible to
Yeb Havinga wrote:
> Craig Ringer wrote:
>> Đỗ Ngọc Trí Cường wrote:
>>
>>> Dear all,
>>>
>>> I have a problem, but I searched all the help files and can't find the
>>> solution.
>>>
>>> I want to track all actions of a specific role on all or one schema in
>>> the database.
>>>
>>> Can you help me?
>>>
Craig Ringer wrote:
> Đỗ Ngọc Trí Cường wrote:
>> Dear all,
>> I have a problem, but I searched all the help files and can't find the
>> solution.
>> I want to track all actions of a specific role on all or one schema in
>> the database.
>> Can you help me?
> You can use statement-level logging, though there are no
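One concrete knob along the lines Craig mentions is per-role statement
logging (role name invented; requires superuser):

  ALTER ROLE audited_user SET log_statement = 'all';

Every statement that role runs is then written to the server log,
where it can be filtered by user name if %u is part of log_line_prefix.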