Solved.
Here is the procedure for getting plpython working when you hit the ucs2/ucs4 error. By
default Python is built with ucs2 and we have to change it to ucs4.
Compile Python 2.7 or 3 with the option below:
./configure --enable-unicode=ucs4
then run
make and make altinstall
then use that Python path for the PostgreSQL source build.
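A quick way to check which build you ended up with is Python's own sys.maxunicode (a sketch; the value is the standard narrow/wide indicator):

```python
import sys

# A "wide" (ucs4) Python build reports 0x10ffff as its largest code point,
# while a "narrow" (ucs2) build reports 0xffff. plpython linked against a
# wide build needs the PyUnicodeUCS4_* symbols, hence the undefined-symbol
# error when the interpreter it finds at runtime is a narrow build.
print(hex(sys.maxunicode))
```

Run this with the interpreter PostgreSQL actually loads to confirm the rebuild took effect.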
Yes, it is the Python shipped with Fedora 15, and the binary installers are the all-in-one
ones from EnterpriseDB.
CPK
On Wed, Jul 6, 2011 at 8:36 AM, John R Pierce wrote:
> On 07/05/11 7:34 PM, c k wrote:
>
>> I have default python 2.7.1 installed along with fedora15. Then installed
>> postgresql from binary installers. Th
On 07/05/11 7:34 PM, c k wrote:
I have default python 2.7.1 installed along with fedora15. Then installed
postgresql from binary installers. This creates the plpython.so,
plpython2.so and plpython3.so in lib/postgresql directory under postgresql
installation. When I go for creating a new language plpython, it gives me
some error
On 06/07/11 01:12, Geoffrey Myers wrote:
Wanted to add more specifics. Here is the actual code that generated the
error:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
    $error = $conn->errorMessage;
    die "Error: <$error> Failed: <$select>";
}
That looks like
It seems to me one solution is to alter your table topology by
partitioning your table by the keys you need to query on, and then
using simple aggregates.
You'd have to set up ON INSERT DO INSTEAD rules, and you might get a
performance hit.
Another solution might be to break up the query int
On Tue, 5 Jul 2011 19:38:25 -0400, "Jonathan Brinkman" wrote:
> I was really hoping to keep the data-replication (between MSSQL
> --> PG) contained within a PG function.
>
> Instead I could write a small shell script or C service to do this
> using tsql (freetds). I have access to the MSSQL data
On 5/07/2011 11:12 PM, Geoffrey Myers wrote:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
    $error = $conn->errorMessage;
    die "Error: <$error> Failed: <$select>";
}
So you're saying this select request failing would not be logged to the
postgres database log
I was really hoping to keep the data-replication (between MSSQL --> PG)
contained within a PG function.
Instead I could write a small shell script or C service to do this using
tsql (freetds). I have access to the MSSQL data via unixodbc and
tdsodbc/freetds in my Ubuntu console.
But I want to r
On 5 Jul 2011, at 23:27, Susan Cassidy wrote:
> >Thanks
> >I’m importing into Postgresql 8.4.8 from MSSQL 2005.
> >
> >I do not have control over the MSSQL server, it is at a customer’s site. I
> >only have access to read-only views on their server, from which I replicate
> >the data to my post
>From: pgsql-general-ow...@postgresql.org
>[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jonathan Brinkman
>Sent: Tuesday, July 05, 2011 7:48 AM
>To: pgsql-general@postgresql.org
>Cc: 'Brent Wood'
>Subject: Re: [GENERAL] Read MS-SQL data into Postgres via ODBC link?
>
>Thanks
>I'm impor
Thanks for your reply
Yes, I did install libpq-dev; it was installed instead when I tried to
run this on Ubuntu 10.04 LTS:
sudo apt-get install postgresql-dev
It gave a message that postgresql-dev had been replaced by libpq-dev...
But that installation did not install the files including
/usr/lib/p
Thanks
I'm importing into Postgresql 8.4.8 from MSSQL 2005.
I do not have control over the MSSQL server, it is at a customer's site. I
only have access to read-only views on their server, from which I replicate
the data to my postgres staging tables.
I cannot have the MSSQL server do anyth
c k writes:
> I updated my development machine with Fedora 15 and as there is python 2.7.
> I have also migrated my few postgresql databases. While creating plpython in
> one database, I got the following error undefined symbol
> PyUnicodeUCS4_AsEncodedString.
> Then I recompiled source code and g
On Tue, Jul 5, 2011 at 9:49 AM, Alban Hertroys wrote:
> On 5 Jul 2011, at 9:13, Daniel Farina wrote:
>
>>> Setup a materialized view.
>>
>> This rather defeats the point AFAIK, because keeping the materialized
>> view up to date (being more than thirty seconds out of date is not
>> desirable) will
On 07/05/11 4:31 AM, Condor wrote:
Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried
about running a database server on it.
Well, I made the pg_dump on an ext3 fs with postgres 8.x and 9, and the sql
file was truncated.
On 5.7.2011 13:31, Condor wrote:
> On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
>> On 5/07/2011 5:00 PM, Condor wrote:
>>> Hello ppl,
>>> can I ask how to dump large DB ?
>>
>> Same as a smaller database: using pg_dump . Why are you trying to
>> split your dumps into 1GB files? What
Dear All,
I updated my development machine with Fedora 15 and as there is python 2.7.
I have also migrated my few postgresql databases. While creating plpython in
one database, I got the following error undefined symbol
PyUnicodeUCS4_AsEncodedString.
Then I recompiled source code and got a plpython
On 5 Jul 2011, at 9:13, Daniel Farina wrote:
>> Setup a materialized view.
>
> This rather defeats the point AFAIK, because keeping the materialized
> view up to date (being more than thirty seconds out of date is not
> desirable) will be expensive. Maintaining the index on the (key,
> recency)
Guys, the OP isn't using MySQL, but MS-SQL.
Not that your solutions don't apply in that case, but it's just a little sloppy
to be talking about the wrong database all the time ;)
On 5 Jul 2011, at 9:37, Sim Zacks wrote:
> I've done similar things with a plpythonu function.
>
> Basically, import
Alexander Shulgin writes:
> I understand that there's really not much point in running COUNT w/o
> the FROM list, but maybe we should just disallow COUNT(*) with empty
> FROM list?
While I don't offhand see a use case for aggregates without FROM,
it's a long way from there to asserting that there
Tom Lane wrote:
Geoffrey Myers writes:
Geoffrey Myers wrote:
out of memory for query result
One other note that is bothering me. There is no reference in the log
regarding the out of memory error. Should that not also show up in the
associated database log?
Not if it's a client-side er
Geoffrey Myers writes:
> Geoffrey Myers wrote:
>> out of memory for query result
> One other note that is bothering me. There is no reference in the log
> regarding the out of memory error. Should that not also show up in the
> associated database log?
Not if it's a client-side error.
(Whic
Craig Ringer wrote:
On 3/07/2011 6:00 PM, Geoffrey Myers wrote:
out of memory for query result
How is this possible?
Resource limits?
Could this message be generated because of shared memory issues?
The odd thing is the error was generated by a user process, but there is
no reference to
On 05/07/11 11:48, Daniel Farina wrote:
This is basically exactly the same as
http://archives.postgresql.org/pgsql-sql/2008-10/msg9.php; I'm
just asking again, to see if thinking on the problem has changed:
The basic problem, restated, is one has a relation with tuples like this:
(key, rece
When the filesystem containing my database fills up, the server repeats
the following log message about as fast as it can log:
Jun 29 23:00:55 src@giraffe postgres: LOG: could not write temporary
statistics file "pg_stat_tmp/pgstat.tmp": No space left on device
Is this an infinite loop or the
Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of memory for query result
Marti Raudsepp writes:
> Hi,
> On Tue, Jul 5, 2011 at 09:50, Yan Cheng CHEOK wrote:
>> The essential difference between inet and cidr data types is that inet
>> accepts values with nonzero bits to the right of the netmask, whereas cidr
>> does not.
> Say, if you have a /8 netmask
Hi,
On Tue, Jul 5, 2011 at 09:50, Yan Cheng CHEOK wrote:
> The essential difference between inet and cidr data types is that inet
> accepts values with nonzero bits to the right of the netmask, whereas cidr
> does not.
Say, if you have a /8 netmask, the 'cidr' type requires that all the
24 rig
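The same strict-versus-lenient split exists in Python's ipaddress module, which makes a convenient illustration of the point (this is only an analogy to inet vs. cidr, not PostgreSQL itself):

```python
import ipaddress

# Like inet: a host address may carry nonzero bits to the
# right of the /8 netmask.
iface = ipaddress.ip_interface("10.1.2.3/8")
print(iface)  # 10.1.2.3/8

# Like cidr: a network value must have all 24 bits to the right
# of the /8 prefix set to zero, so this one is rejected.
try:
    ipaddress.ip_network("10.1.2.3/8")
except ValueError as err:
    print("rejected:", err)
```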
One other note, there is no error in the postgres log for this database.
I would have expected to find an error there.
--
Until later, Geoffrey
"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of
Alban Hertroys wrote:
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16
gig of memory with the foll
On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to
split your dumps into 1GB files? What does that gain you?
Are you using some kind of old fi
Hello,
Today I've mistyped a SELECT (effectively omitting the FROM clause):
$ SELECT COUNT(*) my_table;
my_table
--
1
(1 row)
Apparently, my_table was treated as an alias to the COUNT(*)
expression. This has been discussed before, e.g. here:
http://archives.postgresql.org/pgsql
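SQLite's parser makes the same choice, so the surprise is easy to reproduce outside PostgreSQL (a sketch using sqlite3 purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "my_table" is parsed as a column alias for COUNT(*), not as a FROM item,
# so the query runs and yields a single column named my_table.
cur = con.execute("SELECT COUNT(*) my_table")
print(cur.description[0][0], cur.fetchone())
```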
On 5/07/2011 5:00 PM, Condor wrote:
Hello ppl,
can I ask how to dump large DB ?
Same as a smaller database: using pg_dump . Why are you trying to split
your dumps into 1GB files? What does that gain you?
Are you using some kind of old file system and operating system that
cannot handle file
* Condor wrote:
The problem was when I ran: pg_dump dbname | split -b 1G - filename, I was
unable to restore it correctly. When I started restoring the DB I got an error
from sql; it did not like one line. I investigated and the problem was
in the last line of the first file: a value field was something like '"This is a '
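That is expected: split -b cuts at byte boundaries, mid-line, so the chunks must be concatenated back into one stream before restoring; feeding the pieces to psql one by one is what fails. A minimal sketch of the round trip, with a generated file standing in for the real dump:

```shell
# stand-in for a real pg_dump output file
seq 1 100000 > dump.sql

# split into fixed-size chunks; the boundaries fall mid-line
split -b 100k dump.sql part_

# reassemble before restoring, e.g. `cat part_* | psql dbname`
cat part_* > restored.sql
cmp -s dump.sql restored.sql && echo identical
```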
> On Tue, Jul 5, 2011 at 10:38, Condor wrote:
>> Hello,
>> any one can explain me why I have difference between db size when I dump
>> db,
>> I see it's 5G and when I run SELECT
>> pg_size_pretty(pg_database_size('somedatabase')) As fulldbsize; on my DB
>> postgresql return: 10 GB
>>
>> I run vacu
On Tue, 5 Jul 2011 10:43:38 +0200, Magnus Hagander wrote:
On Tue, Jul 5, 2011 at 10:38, Condor wrote:
Hello,
any one can explain me why I have difference between db size when I
dump db,
I see it's 5G and when I run SELECT
pg_size_pretty(pg_database_size('somedatabase')) As fulldbsize; on
my
Hello ppl,
can I ask how to dump a large DB? I read the documentation but I had a
problem with split a year ago and have not used it since.
The problem was when I ran: pg_dump dbname | split -b 1G - filename, I was
unable to restore it correctly. When I started restoring the DB I got an error
from sql he did
On Tue, Jul 5, 2011 at 10:38, Condor wrote:
> Hello,
> any one can explain me why I have difference between db size when I dump db,
> I see it's 5G and when I run SELECT
> pg_size_pretty(pg_database_size('somedatabase')) As fulldbsize; on my DB
> postgresql return: 10 GB
>
> I run vacuum on db eve
Hello,
can anyone explain why there is a difference in db size: when I dump the
db I see it's 5G, but when I run SELECT
pg_size_pretty(pg_database_size('somedatabase')) AS fulldbsize; on my DB
postgresql returns: 10 GB
I run vacuum on the db every night. Why is there such a huge difference in size?
--
R
Jonathan Brinkman wrote:
> Makefile:12: /usr/lib/postgresql/8.4/lib/pgxs/src/makefiles/pgxs.mk: No such
> file or directory
Maybe you have to install the software package that contains PostgreSQL's
development environment.
Yours,
Laurenz Albe
--
Sent via pgsql-general mailing list (pgsql-genera
On Tue, Jul 5, 2011 at 12:32 AM, Simon Riggs wrote:
> I think its a pretty common requirement and we should be looking to
> optimize it if it isn't handled well.
I agree; although I wanted to be sure that it is not in fact handled
well by some mechanism I haven't seen yet.
> The only problem is
On Tue, Jul 5, 2011 at 12:48 AM, Daniel Farina wrote:
> This is basically exactly the same as
> http://archives.postgresql.org/pgsql-sql/2008-10/msg9.php; I'm
> just asking again, to see if thinking on the problem has changed:
>
> The basic problem, restated, is one has a relation with tuples
I've done similar things with a plpythonu function.
Basically, import the mysql module, call your select statement and then
for each row do a plpy.execute(insert stmt)
Sim
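The pattern Sim describes can be sketched like this, with sqlite3 standing in for both the ODBC source and the PostgreSQL side (plpy.execute is the real call inside a plpythonu function; the table and column names here are invented for illustration):

```python
import sqlite3

# source database (would be the MS-SQL connection via an ODBC module)
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
src.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "ann"), (2, "bob")])

# destination (would be the local PostgreSQL table, written via plpy.execute)
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE customers (id INTEGER, name TEXT)")

# run the select on the source, then one insert per row; inside plpythonu
# each insert would be plpy.execute("INSERT ...") instead
for ident, name in src.execute("SELECT id, name FROM customers"):
    dst.execute("INSERT INTO customers VALUES (?, ?)", (ident, name))

print(dst.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 2
```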
On 07/05/2011 12:10 AM, Jonathan Brinkman wrote:
Greetings
I'd like to INSERT data into my Postgresql 8.4.8 table di
On Mon, Jul 4, 2011 at 11:55 PM, Alban Hertroys wrote:
> On 5 Jul 2011, at 3:23, David Johnston wrote:
>
>>> Does anyone have fresh thoughts or suggestions for dealing with
>>> INSERT-mostly tables conceived in this manner?
>
> You're struggling with read-performance in an INSERT-mostly table? Whe