Hi there
we are having some problems using OLEDB PGNP and SSIS. This is a post we
added to Experts Exchange, but we were wondering whether anyone here
could shed some light on it. We are also interested in how others manage ETL.
Cheers
Jamie
Data Warehousing Postgres
We're
Hi,
I have a MapInfo file named map.TAB, and when I try to export it to
Postgres I get an encoding error, so I used konwert to convert it to UTF-8
with the following statement:
konwert any/es-utf8 map.TAB -O
But this only converts the header name of each column, not the
records of e
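If konwert converted only the headers, and the row data is in a single-byte encoding (ISO-8859-1 is a guess here, common for Spanish data), iconv can convert the whole file in one pass. A sketch, using a stand-in sample instead of the real map.TAB:

```shell
# stand-in sample: one ISO-8859-1 encoded line ("café")
printf 'caf\xe9\n' > sample.TAB
# convert the whole file to UTF-8 (the source encoding is an assumption)
iconv -f ISO-8859-1 -t UTF-8 sample.TAB > sample_utf8.TAB
```

With the file fully in UTF-8, `SET client_encoding TO 'UTF8';` on the Postgres side should then accept the rows.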
Hello,
After a series of sessions searching the web for information, I am
asking for the help of people with a bit more knowledge of the internals
of pg_dump to try to solve a performance problem I have. I am running
PostgreSQL version 8.3.8 for both server and pg_dump.
The context is a farm hosting of
I used pg_top to monitor my database, and I'm looking for a method to find out
which queries are currently being serviced by the database.
pg_top informs me that the queries are IDLE, and select * from
pg_stat_activity returns current_query as always in IDLE status.
Can you please help?
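On 8.3 the idle connections can be filtered out of pg_stat_activity directly; a sketch of the query, saved to a file for `psql -f` since it needs a live server (column names are the pre-9.2 ones, procpid and current_query):

```shell
# save a monitoring query for `psql -f`; uses pre-9.2 column names
cat > active_queries.sql <<'SQL'
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';
SQL
```

Rows showing `<IDLE>` are connections waiting between transactions, not running queries; if current_query is blank for everything, check that track_activities is on and that you are querying as a superuser.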
paresh masani writes:
> The function below doesn't work (I tried each combination mentioned with
> #, but none of them works).
I haven't tried it, but a look at the code makes me think that
spi_prepare wants each type name to appear as a separate argument.
It definitely won't work to smash them all
"Loic d'Anterroches" writes:
> Each night I am running:
> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> /path/to/backups/%s/%s-%s.sql.gz
> this for each installation, so 1100 times. Substitution strings are to
> timestamp and get the right schema.
This seems like a pretty d
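The nightly job quoted above amounts to a loop of this shape (the paths, schema list, and timestamp format are placeholders standing in for the %s substitutions):

```shell
#!/bin/sh
# per-schema nightly dump: one pg_dump run per schema (sketch of the quoted job)
: > schemas.txt                 # placeholder: fill with one schema name per line
STAMP=$(date +%Y-%m-%d)         # stands in for the timestamp %s substitution
BASE=/path/to/backups
while read -r s; do
  pg_dump --blobs --schema="$s" --no-acl -U postgres indefero \
    | gzip > "$BASE/$s/$s-$STAMP.sql.gz"
done < schemas.txt
```

With 1100 schemas this means 1100 separate connections and catalog scans per night, which is where the scaling problem discussed in this thread comes from.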
Tom Lane writes:
> I'm not much of a Perl hacker, but I seem to recall that it's possible
> to pass an array to a function in a way that will make the array
> elements look like separate arguments. If you really need a dynamic
> list of types and values, maybe there's some solution in that dire
I have two Linux servers that are pretty similar to each other, and both are
running PostgreSQL servers, but in one server a certain Perl script succeeds
in connecting to the localhost server whereas in the other one the same
script fails. The error on the second server is of the form "fe_sendauth
Kynn Jones writes:
> I have two Linux servers that are pretty similar to each other, and both are
> running PostgreSQL servers, but in one server a certain Perl script succeeds
> in connecting to the localhost server whereas in the other one the same
> script fails. The error on the second serve
OK, I did find this
http://www.postgresql.org/support/professional_hosting_asia
but does anyone have experience with any of them?
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of "In Defense of Food"
On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane wrote:
> "Loic d'Anterroches" writes:
>> Each night I am running:
>> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
>> /path/to/backups/%s/%s-%s.sql.gz
>> this for each installation, so 1100 times. Substitution strings are to
>> timesta
In response to "Loic d'Anterroches" :
> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane wrote:
> > "Loic d'Anterroches" writes:
> >> Each night I am running:
> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
> >> /path/to/backups/%s/%s-%s.sql.gz
> >> this for each installation, so
Loic,
>settings up each time. The added benefit of doing a per schema dump is
>that I provide it to the users directly, that way they have a full
>export of their data.
you should try the timing with
pg_dump --format=c completedatabase.dmp
and then generating the separate schemas in an extra st
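The quoted suggestion, with the output flag spelled out (the line as quoted omits -f, so pg_dump would read completedatabase.dmp as a database name; 'indefero' and 'some_schema' are taken from and assumed for this thread), could look like:

```shell
# dump once in custom format, then extract per schema with pg_restore
# saved as a script since it needs a live server; 'some_schema' is a placeholder
cat > split_dump.sh <<'EOF'
#!/bin/sh
pg_dump --format=c -f completedatabase.dmp -U postgres indefero
pg_restore --schema=some_schema completedatabase.dmp | gzip > some_schema.sql.gz
EOF
chmod +x split_dump.sh
```

This makes one pass over the server instead of 1100; note that large objects are not schema-qualified, so the per-schema extracts may not carry --blobs data the way the per-schema pg_dump runs do.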
On Wed, Oct 7, 2009 at 5:54 PM, Bill Moran wrote:
> In response to "Loic d'Anterroches" :
>
>> On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane wrote:
>> > "Loic d'Anterroches" writes:
>> >> Each night I am running:
>> >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip >
>> >> /path/to/
Harald,
>>settings up each time. The added benefit of doing a per schema dump is
>>that I provide it to the users directly, that way they have a full
>>export of their data.
>
> you should try the timing with
>
> pg_dump --format=c completedatabase.dmp
>
> and then generating the separate schemas
On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
> Hello,
> My problem is that the dump increased steadily with the number of
> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
> now ballooning at 120MB of memory usage when running the dump.
>
And it will contin
A colleague gave me the following query to run:
DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM
data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))
log_id is the primary key (big serial)
data_log is the table described below
This query keeps the most recent 10 mi
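An equivalent form of the quoted statement that avoids building the big IN list is a range delete against a single cutoff id, since log_id is a monotonic bigserial. A sketch, written to a file here because it needs the live table (LIMIT 1 OFFSET 999 picks the 1000th-newest id, so the newest 1000 rows survive, matching the quoted OFFSET 1000):

```shell
# range-based variant of the quoted DELETE, saved for psql -f
cat > trim_log.sql <<'SQL'
DELETE FROM data_log_20msec_table
WHERE log_id < (SELECT log_id
                FROM data_log_20msec_table
                ORDER BY log_id DESC
                LIMIT 1 OFFSET 999);
SQL
```

If the table holds fewer than 1000 rows the subquery returns no row and nothing is deleted, the same as the original.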
Dave Huber wrote:
A colleague gave me the following query to run:
DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM
data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))
...
This query keeps the most recent 10 million rows and deletes the
remaining ones. If I
Hi Joshua,
On Wed, Oct 7, 2009 at 6:29 PM, Joshua D. Drake wrote:
> On Wed, 2009-10-07 at 12:51 +0200, Loic d'Anterroches wrote:
>> Hello,
>
>> My problem is that the dump increased steadily with the number of
>> schemas (now about 20s from about 12s with 850 schemas) and pg_dump is
>> now balloon
John, I got your previous post, but I think I misunderstood something. You
didn't mean a disk partition. I think I get what you're describing now. I had
previously missed the link in your earlier post, too. Please accept my
apologies for not being more diligent in my reading. I'll look into this
On Wed, 7 Oct 2009, Kynn Jones wrote:
Is there some way to have Postgres dump excruciatingly thorough details
about every single step of the authentication sequence?
There's a postgresql.conf parameter named log_min_messages that you can
crank up until you see the detail level you're looking
This is our first project using PostgreSQL, where I have a problem I can't
solve in a neat way (I assume PGSQL should provide a nice solution...).
So we have an old xBase ba
> What version are you running? IIRC it should remember the password
> between databases.
8.4.0 on Linux/x86_64. It does not, and the man page clearly says:
"pg_dumpall needs to connect several times to the
PostgreSQL server (once per database). If you use password
authentication it will
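The man-page caveat quoted above is usually handled with a ~/.pgpass file, so pg_dumpall's once-per-database connections don't each prompt. The format per the libpq docs ("secret" is obviously a placeholder):

```
# ~/.pgpass — must be chmod 600; fields are hostname:port:database:username:password
localhost:5432:*:postgres:secret
```

The `*` wildcard matches any database, which is exactly what pg_dumpall's repeated connections need.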
On Wed, Oct 07, 2009 at 09:19:58PM +0200, Zsolt wrote:
> For a given house I would like to start the numbering of tenants from
> 1. Each house could have tenant_ID=1, obviously in this case the
> house_ID will differ. The combination of tenant_ID and house_ID will
> be the unique identifier of each
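One common shape for this per-house numbering is a composite primary key, with the next tenant_ID computed per house. A sketch with hypothetical table and column names based on the quoted description, saved to a file since it needs a live database:

```shell
# per-house tenant numbering via a composite key (sketch for psql -f)
cat > tenants.sql <<'SQL'
CREATE TABLE tenants (
    house_id  integer NOT NULL,
    tenant_id integer NOT NULL,
    name      text,
    PRIMARY KEY (house_id, tenant_id)   -- unique per (house, tenant) pair
);
-- next free number within one house; concurrent inserts need locking or retry
INSERT INTO tenants (house_id, tenant_id, name)
SELECT 42, coalesce(max(tenant_id), 0) + 1, 'new tenant'
FROM tenants WHERE house_id = 42;
SQL
```

Each house then starts its tenant numbering at 1, and the (house_id, tenant_id) pair is the unique identifier.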
Thank you all! Someone else in our team found the problem (a missing user
in the failing server).
k
You were correct: the trigger below worked. Giving a reference for others.
CREATE OR REPLACE FUNCTION init() RETURNS TEXT AS $$
my $raw_row = "(\"col1\", \"col2\")";
my @new_row = ('5', '6');
my @col_types = ("integer", "character varying");
my $query = "INSERT INTO mytable $raw_row VALUES (\$1, \$2)";
my $plan = spi_prepare($query, @col_types);
spi_exec_prepared($plan, @new_row);
return 'done';
$$ LANGUAGE plperl;
In response to Zsolt :
>
> This is our first project using PostgreSQL, where I have a problem I can't
> solve in a neat way (I assume PGSQL should provide a nice solution...).
>
> So we have an old xBase based program we are trying to port to PostgreSQL
> while
> we should keep the original da