Thomas,
Thank you for your comment.
> I found that using getBinaryStream(), setBinaryStream(),
> getCharacterStream()
> and setCharacterStream() to handle LOBs across different DBMS
> is much more
> portable (and reliable) than using the Clob()/Blob() methods.
According to the JDBC 3.0 specification
"EBIHARA, Yuichiro" <[EMAIL PROTECTED]> writes:
> Using Large Objects may solve my issue but I have to note that a large
> object is not automatically deleted when the record referring to it is
> deleted.
The contrib/lo module can help with this.
regards, tom lane
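For the archives, the contrib/lo machinery is roughly this (a sketch; the table and column names are invented):

```sql
-- contrib/lo provides an "lo" domain over oid plus a lo_manage() trigger
-- function that unlinks the referenced large object when its row is
-- updated or deleted, so orphaned large objects don't accumulate.
CREATE TABLE image (
    title  text,
    raster lo            -- large-object reference managed by the trigger
);

CREATE TRIGGER t_raster
    BEFORE UPDATE OR DELETE ON image
    FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);
```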
---
EBIHARA, Yuichiro wrote on 22.06.2007 06:09:
It seems like the PG JDBC driver CANNOT handle 'bytea' as BLOB or 'text' as CLOB.
getBlob()/setBlob()/getClob()/setClob() can work only with Large Objects
(at least with postgresql-8.1-405.jdbc3.jar).
org.postgresql.util.PSQLException: Bad Integer Z\27
Yes please send me a copy.
Bob
- Original Message -
From: "Harvey, Allan AC" <[EMAIL PROTECTED]>
To: "Joshua D. Drake" <[EMAIL PROTECTED]>; "Scott Marlowe"
<[EMAIL PROTECTED]>
Cc: "Csaba Nagy" <[EMAIL PROTECTED]>; "David Gardner"
<[EMAIL PROTECTED]>; "Postgres general mailing list"
> > Because I'm delivering reports to dozens of people who have windows, no
> > psql client, and just want to go to a web page, click a button, and get
> > their report (or was that a banana?)
I do exactly this with bog basic HTML and bash scripts.
Can send you a copy if you want examples.
Allan
Hi,
I found my understanding was incorrect.
> > > Is there any plan to support BLOB and CLOB in future releases?
> > >
> > Looking at the spec, and postgresql's implementation, I can't
> > see much reason you couldn't just use a domain to declare that
> > a bytea is a blob and varchar is a clob
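An untested sketch of that domain idea (the declarations alone are easy; whether a driver then maps them to Blob/Clob is another matter):

```sql
-- hypothetical aliases for the standard LOB type names
CREATE DOMAIN blob AS bytea;
CREATE DOMAIN clob AS varchar;

CREATE TABLE document (
    id     serial PRIMARY KEY,
    body   clob,
    attach blob
);
```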
Michael Glaesemann wrote:
On Jun 21, 2007, at 17:35 , brian wrote:
I have a lookup table with a bunch of disciplines:
To answer your ordering question first:
SELECT id, name
FROM discipline
ORDER BY name = 'other'
, name;
 id |   name
----+-----------
  8 | community
  4 | dance
Josh Tolley wrote:
It seems to me you could replace it all with one query, something like
this:
SELECT discipline, COUNT(1) FROM showcase WHERE EXISTS (SELECT * FROM
showcase_item WHERE showcase_id = showcase.id LIMIT 1) GROUP BY
discipline ORDER BY (discipline != 'other'), discipline;
disci
On Jun 21, 2007, at 17:35 , brian wrote:
I have a lookup table with a bunch of disciplines:
To answer your ordering question first:
SELECT id, name
FROM discipline
ORDER BY name = 'other'
, name;
 id |        name
----+---------------------
8 | community
4 | dance
5 | film and television
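The ordering trick works because a boolean sort key places false before true, so every row except 'other' sorts first, and the second key alphabetizes within each group. A minimal demonstration with made-up values:

```sql
SELECT name
FROM (VALUES ('other'), ('dance'), ('music')) AS d(name)
ORDER BY name = 'other', name;
-- dance and music (false) sort before other (true)
```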
Scott Marlowe wrote:
Csaba Nagy wrote:
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
Another option is to use your favorite scripting language and throw
an excel header then the data in tab delimited format. Or even in
excel xml format.
Why would you need any scripting language ?
On 6/21/07, brian <[EMAIL PROTECTED]> wrote:
I have a lookup table with a bunch of disciplines:
# SELECT id, name FROM discipline;
 id |        name
----+---------------------
1 | writing
2 | visual arts
3 | music
4 | dance
5 | film and television
6 | theatre
7 | media arts
Germán Hüttemann Arza wrote:
Hi,
I need a way to throw a message in a function when an exception occurs, but I
don't want to write the same message again and again in every place I need to
throw it. So, is there a way to handle this situation in a more general
manner?
Why not create a table
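That suggestion might look like the following sketch (the table, function, and column names are invented):

```sql
-- keep the error texts in one place and raise them through one helper
CREATE TABLE error_message (
    code    text PRIMARY KEY,
    message text NOT NULL
);

CREATE FUNCTION raise_error(p_code text) RETURNS void AS $$
DECLARE
    msg text;
BEGIN
    SELECT message INTO msg FROM error_message WHERE code = p_code;
    RAISE EXCEPTION '%', coalesce(msg, 'unknown error code: ' || p_code);
END;
$$ LANGUAGE plpgsql;
```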
Csaba Nagy wrote:
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
Another option is to use your favorite scripting language and throw an
excel header then the data in tab delimited format. Or even in excel
xml format.
Why would you need any scripting language ? COPY supports CSV output
I have a lookup table with a bunch of disciplines:
# SELECT id, name FROM discipline;
 id |        name
----+---------------------
1 | writing
2 | visual arts
3 | music
4 | dance
5 | film and television
6 | theatre
7 | media arts
8 | community
9 | fine craft
10 | other
(10 rows)
Sergey Konoplev schrieb:
My Question:
How can I do "OLD.columnName != NEW.columnName" if I don't know what the
columnNames are at compile time?
I have the columnName in a variable.
I suggest you use plpython. In this case you'll be able to do it.
TD['old'][colNameVar] != TD['new'][colNameVar]
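Fleshed out a little (a sketch; assumes the plpythonu language is installed and that the column name is passed as a trigger argument):

```sql
CREATE FUNCTION warn_if_changed() RETURNS trigger AS $$
    col = TD['args'][0]        # column name supplied at trigger-creation time
    if TD['old'][col] != TD['new'][col]:
        plpy.notice("column %s changed" % col)
$$ LANGUAGE plpythonu;

-- hypothetical table; the column to watch is named in the argument list
CREATE TRIGGER t_warn
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE warn_if_changed('status');
```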
Christan Josefsson wrote:
> Any guess when 8.4 could be production ready? A year or more?
Why don't you just use Bizgres?
Right, they don't release that often, and 0.9 misses various fixes that
went into PostgreSQL. But if it has what you are after and works for you..
--
Best regards,
Hannes D
Henk - CityWEB wrote:
> I can't wait to get a decent master/multi-slave setup going where I can
> turn fsync on and still get semi-decent performance...
I don't see how replication can help you with fsync performance
problems. Controllers with battery backed write cache are cheap. What is
the poin
Richard Huxton wrote:
Ah, but this just includes the time of the last message, not its data.
Oops, I read the OP's question as "date and time", rather than "data
and time". Nevermind. :)
- John D. Burger
MITRE
The first thing you have to do is disable User Account Control (UAC).
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of dfx
> Sent: Thursday, June 21, 2007 12:58 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] How to install Post
Joshua D. Drake <[EMAIL PROTECTED]> schrieb:
> >>I tried it but get errors on create user postgres.
> >>Is there some workaround?
> >I'm not familiar with this crappy OS, but maybe you should disable UAC.
>
> In your mind, it may be crappy but it is indeed an officially supported
> operating sy
Andreas Kretschmer wrote:
dfx <[EMAIL PROTECTED]> schrieb:
I tried it but get errors on create user postgres.
Is there some workaround?
I'm not familiar with this crappy OS, but maybe you should disable UAC.
In your mind, it may be crappy but it is indeed an officially supported
operati
dfx <[EMAIL PROTECTED]> schrieb:
> I tried it but get errors on create user postgres.
>
> Is there some workaround?
I'm not familiar with this crappy OS, but maybe you should disable UAC.
Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side e
I tried it but get errors on create user postgres.
Is there some workaround?
Thank you
Domenico
On Thu, 21 Jun 2007, Gregory Stark wrote:
> Ugh. The worst part is that you won't even know that there's anything wrong
> with your data. I would actually suggest that if you run with fsync off and
> have a power failure or kernel crash you should just immediately restore from
> your last backup
On Thu, 21 Jun 2007, Tom Lane wrote:
> "Henka" <[EMAIL PROTECTED]> writes:
> > I happened to notice this error in the log when my application was refused
> > a db connection (quite unexpectedly):
>
> > PANIC: corrupted item pointer: offset = 3308, size = 28
> > LOG: autovacuum process (PID 1816
Hi,
I need a way to throw a message in a function when an exception occurs, but I
don't want to write the same message again and again in every place I need to
throw it. So, is there a way to handle this situation in a more general
manner?
Thanks in advance,
--
Germán Hüttemann Arza
CNC - C
On Jun 21, 2007, at 5:16 AM, Bruce McAlister wrote:
That's exactly what I think. There is something strange going on. At the
moment I think it is the disk I am writing the data to that is slow,
possibly due to the fact that it is mounted up as "forcedirectio", so as
not to interfere with the
On Jun 21, 2007, at 11:57 , Josh Tolley wrote:
On 6/21/07, danmcb <[EMAIL PROTECTED]> wrote:
Hi
I have two tables, say A and B, that have a many-to-many
relationship, implemented in the usual way with a join table A_B.
How can I economically find all the rows in table A whose id's are
not
On 6/21/07, danmcb <[EMAIL PROTECTED]> wrote:
Hi
I have two tables, say A and B, that have a many-to-many
relationship, implemented in the usual way with a join table A_B.
How can I economically find all the rows in table A whose id's are not
in A_B at all (i.e. they have zero instances of B a
Hi
I have two tables, say A and B, that have a many-to-many
relationship, implemented in the usual way with a join table A_B.
How can I economically find all the rows in table A whose id's are not
in A_B at all (i.e. they have zero instances of B associated)?
Thanks
Daniel
--
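Assuming the join table is a_b(a_id, b_id), two common formulations:

```sql
-- anti-join: rows of A with no match in the join table
SELECT a.*
FROM a
LEFT JOIN a_b ON a_b.a_id = a.id
WHERE a_b.a_id IS NULL;

-- equivalently, with NOT EXISTS
SELECT a.*
FROM a
WHERE NOT EXISTS (SELECT 1 FROM a_b WHERE a_b.a_id = a.id);
```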
"Henka" <[EMAIL PROTECTED]> writes:
> I happened to notice this error in the log when my application was refused
> a db connection (quite unexpectedly):
> PANIC: corrupted item pointer: offset = 3308, size = 28
> LOG: autovacuum process (PID 18165) was terminated by signal 6
FWIW, the only occu
On Thu, Jun 21, 2007 at 10:39:29AM +0200, Christan Josefsson wrote:
> Any guess when 8.4 could be production ready? A year or more?
"In the future" is what I'd be willing to state out loud ;-) 8.3
hasn't finished development yet. I wouldn't hold my breath.
You can find out more about bizgres at
Raymond O'Donnell wrote:
[EMAIL PROTECTED] wrote:
However, with this new Postgres site, I don't have access to my temp
tables after I've traversed another pg_connect. So PHP is either
creating a new connection, or giving me another session, not the one
which I created my tables in.
You wouldn
On Monday 18 June 2007 16:27, John Smith wrote:
> guys
> need to pitch postgresql to some hard-to-budge solaris sysadmins- they
> don't even know about the postgresql-solaris 10 package, just used to
> oracle and don't want to break their backs over postgresql. plus i
> don't know enough slony yet.
Reid Thompson wrote:
Each server process claims a jobq record by selecting for update a
jobq record where the pid column is null, then rewrites the record with
the pid set in the pid column.
The "distilled" sql select statement is:
* SELECT J.*, C.name, C.client_id, C.priority
* FROM j
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
> Another option is to use your favorite scripting language and throw an
> excel header then the data in tab delimited format. Or even in excel
> xml format.
Why would you need any scripting language ? COPY supports CSV output
pretty well, it ca
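For the archives: from 8.2 on, COPY accepts a query directly, so the whole export is one statement (column list assumed from the thread):

```sql
-- emit a query result as CSV with a header row (PostgreSQL 8.2+)
COPY (SELECT id, name FROM discipline ORDER BY name)
TO STDOUT WITH CSV HEADER;
```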
John D. Burger wrote:
On Jun 21, 2007, at 09:22, Richard Huxton wrote:
Naz Gassiep wrote:
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the data an
David Gardner wrote:
Agreed, ODBC is the way to go; depending on what you are doing, Access
may be helpful as an intermediate step.
Joshua D. Drake wrote:
Bob Pawley wrote:
Hi All
Is there a fast and easy method of transferring information between
MS Excel and PostgreSQL??
odbc?
Anot
Hello list,
We are using PostgreSQL 8.0.3. Some background, and a couple of
questions..
We have a database table called "jobq" on the database machine,
and 2 networked server machines.
One of the network server machines has around 20 server processes
connecting over the network using ODBC.
These
On Jun 21, 2007, at 09:22, Richard Huxton wrote:
Naz Gassiep wrote:
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra
column
with the number of messages they have posted and the data and time of
the last m
On 6/21/07, Vincenzo Romano <[EMAIL PROTECTED]> wrote:
Hi all.
I'd like to do the following:
insert into t1
values (
'atextvalue',(
insert into t2
values ( 'somethingelse' )
returning theserial
)
)
;
that is, I first insert data into t2 getting back the newly c
Naz Gassiep wrote:
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the data and time of
the last message? At the moment I am using a subquery to do this,
PFC wrote:
>> Hi. I have a few databases created with UNICODE encoding, and I would
>> like to be able to search with accent insensitivity. There's something
>> in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do
>> this, but I found nothing in PostgreSQL, just the 'to_ascii'
"Henka" <[EMAIL PROTECTED]> writes:
>> Other than that it might be interesting to know the values of some server
>> parameters: "fsync" and "full_page_writes". Have you ever had this machine
>> crash or had a power failure? And what kind of i/o controller is this?
>
> fsync = off
> full_page_writ
>> I'm using PG 8.2.3:
>
> You should update to 8.2.4, it includes a security fix and several bug
> fixes.
That was my next option. My last backup dump looks suspiciously small,
but the day before that looks about right.
> My first thought is bad memory. It's always good to rule that out sinc
Albe Laurenz wrote:
> Richard Huxton wrote:
>>> In our environment it takes approx 2 hours to perform a PIT backup of
>>> our live system:
>>>
> >>> [1] select pg_start_backup('label')
>>> [2] cpio & compress database directory (exclude wals)
>>> [3] select pg_stop_backup()
>>>
>>> However, if we per
Richard Huxton wrote:
> Bruce McAlister wrote:
> >> That's exactly what I think. There is something strange going on. At the
>> moment I think it is the disk I am writing the data to that is slow,
>> possibly due to the fact that it is mounted up as "forcedirectio", so as
>> not to interfere with the
"Henka" <[EMAIL PROTECTED]> writes:
> Hello all,
>
> I'm using PG 8.2.3:
You should update to 8.2.4, it includes a security fix and several bug fixes.
However afaik none of them look like this.
> PANIC: corrupted item pointer: offset = 3308, size = 28
> LOG: autovacuum process (PID 18165) was
"PFC" <[EMAIL PROTECTED]> writes:
>> Hi. I have a few databases created with UNICODE encoding, and I would like to
>> be able to search with accent insensitivity. There's something in Oracle
>> (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do this, but I found
>> nothing in PostgreSQL
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the data and time of
the last message? At the moment I am using a subquery to do this,
however it seems sub
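One way to avoid a per-row subquery is a single aggregated join (a sketch; assumes schemas users(id, ...) and messages(user_id, posted_at)):

```sql
SELECT u.*, m.msg_count, m.last_posted
FROM users u
LEFT JOIN (
    SELECT user_id,
           count(*)       AS msg_count,
           max(posted_at) AS last_posted
    FROM messages
    GROUP BY user_id
) m ON m.user_id = u.id;
```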
Richard Huxton wrote:
>> In our environment it takes approx 2 hours to perform a PIT backup of
>> our live system:
>>
>> [1] select pg_start_backup('label')
>> [2] cpio & compress database directory (exclude wals)
>> [3] select pg_stop_backup()
>>
>> However, if we perform a plain dump (pg_dump/p
Hello all,
I'm using PG 8.2.3:
PostgreSQL 8.2.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.6
I happened to notice this error in the log when my application was refused
a db connection (quite unexpectedly):
PANIC: corrupted item pointer: offset = 3308, size = 28
LOG: autovacuum proces
Bruce McAlister wrote:
That's exactly what I think. There is something strange going on. At the
moment I think it is the disk I am writing the data to that is slow,
possibly due to the fact that it is mounted up as "forcedirectio", so as
not to interfere with the file system cache which we want to
Richard Huxton wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> Is it at all possible to "roll forward" a database with archive logs
>> when it has been recovered using a dump?
>>
>> Assuming I have the archive_command and archive_timeout parameters set
>> on our "live" system, then I follow these
Hi all.
I'd like to do the following:
insert into t1
values (
'atextvalue',(
insert into t2
values ( 'somethingelse' )
returning theserial
)
)
;
that is, I first insert data into t2 getting back the newly created
serial values, then I insert these values in anoth
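A nested INSERT like that isn't valid SQL. At the time, the usual workaround was two statements in one transaction (column and sequence names assumed); from 9.1 on, a data-modifying WITH clause can do it in one statement.

```sql
BEGIN;
INSERT INTO t2 (something) VALUES ('somethingelse');
-- currval() returns the serial just generated in this session
INSERT INTO t1 (txt, t2_ref)
VALUES ('atextvalue', currval('t2_theserial_seq'));
COMMIT;
```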
Hi. I have a few databases created with UNICODE encoding, and I would
like to be able to search with accent insensitivity. There's something
in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do
this, but I found nothing in PostgreSQL, just the 'to_ascii' function,
which AF
Bruce McAlister wrote:
Hi All,
Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?
Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:
[1] pg_dump -d database > /backup
Hi. I have a few databases created with UNICODE encoding, and I would
like to be able to search with accent insensitivity. There's something
in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do
this, but I found nothing in PostgreSQL, just the 'to_ascii' function,
which AFAIK, d
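For what it's worth, the to_ascii() route only works for a few single-byte encodings (LATIN1, LATIN2, LATIN9, WIN1250), so on a UNICODE database the value had to be converted first. A rough sketch using the pre-8.3 convert() signature, with an invented table:

```sql
SELECT *
FROM places
WHERE to_ascii(convert(name, 'LATIN1'), 'LATIN1')
      ILIKE to_ascii(convert('São Paulo', 'LATIN1'), 'LATIN1');
```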
Hi All,
Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?
Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:
[1] pg_dump -d database > /backup/database.dump,
[2] initd
Ok.
Big thanks for the information.
You mentioned Bizgres, do you have any more information in that direction,
or do you know who to contact regarding information on Bizgres bitmap
indexes. If there is a bitmap index patch in Bizgres which can be applied to
the latest stable source of PostgreSQL
Pedro Doria Meunier wrote:
> (First of all sorry for cross-posting but I feel this is a matter that
> interests all recipients)
> Thread on pgadmin support:
> http://www.pgadmin.org/archives/pgadmin-support/2007-06/msg00046.php
>
> Hello Dave,
Hi Pedro
> This behavior (trying to show the entire
Hi All,
I have enabled autovacuum in our PostgreSQL cluster of databases. What I
have noticed is that the autovacuum process keeps selecting the same
database to perform autovacuums on and does not select any of the others
within the cluster. Is this normal behaviour or do I need to do
something m
Hi there!
First of all, sorry if this is not the correct place to send my question, but
I didn't find any installation mailing list. I'd appreciate it if you could
tell me where the correct mailing list is.
My question is: can PostgreSQL 8.2 be installed on Windows 2000? In the
installation file ppl cl