Considering the discussions on the pgsql-bugs user list regarding ossp-uuip:
Re: BUG #4167: When generating UUID using UUID-OSSP module, UUIDs are not
unique on Windows,
What is the normal solution in pgsql-land for making a serious number of
rows unique across multiple databases?
I mean p
Hi Kimball-san
Thanks,
The discussion has concentrated on this problem because the cause lies
not in pgsql but in uuid-ossp.
Certainly, I think that a UUID is effective and the best workaround.
Regards,
Hiroshi Saito
>Hiroshi,
>
>Thank you very much. I will look forward to it.
>
>However, I
>
>
> Wow, this is a fascinating situation. Are you sure the fsyncs are the only
> thing to worry about though? Postgres will call write(2) many times even if
> you disabled fsync entirely. Surely the kernel and filesystem will
> eventually
> send some of them through even if no fsyncs arrive?
>
G
Hiroshi,
Thank you very much. I will look forward to it.
However, I really wanted to know what PGSQL people are doing now and have
done in the past to address this kind of issue.
Surely the need to have unique rows across databases has come up before?
Thanks,
Kimball
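For the archives: a common answer to this question (not spelled out in the thread above) is either a UUID column or a composite key carrying a per-database identifier. A minimal sketch, assuming the uuid-ossp contrib module is installed and using hypothetical table and column names:

```sql
-- Option 1: requires the uuid-ossp contrib module (and, on Windows,
-- the BUG #4167 fix discussed in this thread).
CREATE TABLE widgets (
    id      uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    payload text
);

-- Option 2: no contrib module needed. A composite key with an
-- admin-assigned per-database node id keeps rows unique across databases.
CREATE TABLE widgets_alt (
    node_id  integer   NOT NULL,  -- unique per database, assigned manually
    local_id bigserial NOT NULL,
    PRIMARY KEY (node_id, local_id)
);
```

The composite-key variant trades a second key column for independence from any UUID library.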
-----Original Message-----
>
>
>> Are you sure this will work correctly for database use at all? The known
> issue listed at
> http://www.persistentfs.com/documentation/Release_Notes sounded like a much
> bigger consistency concern than the fsync trivia you're
> bringing up:
>
> "In the current Technology Preview release,
Hi.
Ah, yes.
I am inspecting the patch now.
Please wait a while.
Regards,
Hiroshi Saito
>From: Kimball Johnson [mailto:[EMAIL PROTECTED]
>Sent: Monday, June 02, 2008 7:42 PM
>To: 'pgsql-general@postgresql.org'
>Subject: make rows unique across db's without UUIP on windows?
>
>
>
>Conside
From: Kimball Johnson [mailto:[EMAIL PROTECTED]
Sent: Monday, June 02, 2008 7:42 PM
To: 'pgsql-general@postgresql.org'
Subject: make rows unique across db's without UUIP on windows?
Considering the discussions on the pgsql-bugs user list regarding ossp-uuip:
Re: BUG #4167: When generatin
On Mon, 2 Jun 2008, Ram Ravichandran wrote:
My current plan is to mount an Amazon S3 bucket as a drive using
PersistentFS which is a POSIX-compliant file system.
Are you sure this will work correctly for database use at all? The known
issue listed at http://www.persistentfs.com/documentation
Craig Ringer wrote:
Given the choice I'd want to take the async I/O option over threading in
C++, but I don't get the impression that libpq is really built around
that model.
libpq has several commands for asynchronous command processing:
http://www.postgresql.org/docs/8.3/static/libpq-async
"Ram Ravichandran" <[EMAIL PROTECTED]> writes:
> Hey,
> I am running a postgresql server on Amazon EC2. My current plan is to mount
> an Amazon S3 bucket as a drive using PersistentFS which is a POSIX-compliant
> file system.
> I will be using this for write-ahead-logging. The issue with S3 is tha
On Mon, Jun 2, 2008 at 6:42 PM, Ram Ravichandran <[EMAIL PROTECTED]> wrote:
>
>> Running without fsyncs is likely to lead to a corrupted db if you get
>> a crash / loss of connection etc...
>
> Just to clarify, by corrupted db you mean that all information (even the
> ones prior to the last fsync)
> Running without fsyncs is likely to lead to a corrupted db if you get
> a crash / loss of connection etc...
>
Just to clarify, by corrupted db you mean that all information (even the
ones prior to the last fsync) will be lost. Right?
Thanks,
Ram
On Mon, Jun 2, 2008 at 6:12 PM, Ram Ravichandran <[EMAIL PROTECTED]> wrote:
> Hey,
> I am running a postgresql server on Amazon EC2. My current plan is to mount
> an Amazon S3 bucket as a drive using PersistentFS which is a POSIX-compliant
> file system.
> I will be using this for write-ahead-loggi
Peter Geoghegan wrote:
> Hello,
>
> I'm writing a C++ application that stores data in a table that may
> ultimately become very large (think tens of millions of rows). It has
> an index on one row, in addition to the index created on/as part of
> its primary key. My concern is that a call to the p
Hey,
I am running a postgresql server on Amazon EC2. My current plan is to mount
an Amazon S3 bucket as a drive using PersistentFS which is a POSIX-compliant
file system.
I will be using this for write-ahead-logging. The issue with S3 is that
though the actual storage is cheap, they charge $1 per 1
I completely understand that what I am proposing is somewhat mad and I
didn't expect it to be easy.
Basically, I'm doing some research on a new operator and would like to
start testing it by inserting it into a very specific place in very
specific plans without having to do too much work in
In response to PJ <[EMAIL PROTECTED]>:
> In trying to learn both php and postgresql as a novice, I am trying to
> debug a double migration of our old website from php4 to php5 and from
> postgres 7.4 to 8.3.
> I am running freebsd 7.0 on two machines and sometimes debugging from a
> Windows XP.
In trying to learn both php and postgresql as a novice, I am trying to
debug a double migration of our old website from php4 to php5 and from
postgres 7.4 to 8.3.
I am running freebsd 7.0 on two machines and sometimes debugging from a
Windows XP.
The migrations themselves went fine; now to debug
On Mon, Jun 2, 2008 at 2:45 PM, JD Wong <[EMAIL PROTECTED]> wrote:
> Hey,
> I'm a first-time postgres user. Yesterday I was able to access a
> database I had loaded that same day, until the system was shut down without
> warning. Now I get the error:
>
> createdb: could not connect to database
Not really. It was decided long ago that in that way madness lies.
OTOH, there are ways to tune the behaviour through changes to the
random_page_cost, cpu_xxx_cost and effective_cache_size settings.
Then there's the mallet to the forebrain that is the set of
enable_nestloop=off type settings. They wo
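The knobs mentioned above can be exercised per-session; a hedged sketch (the values and the query are placeholders, not recommendations):

```sql
-- Steer the planner for this session only, then put things back.
SET enable_nestloop = off;     -- discourage nested-loop joins
SET random_page_cost = 2.0;    -- illustrative value; default in 8.3 is 4.0
EXPLAIN ANALYZE SELECT 1;      -- substitute the query under test
RESET enable_nestloop;
RESET random_page_cost;
```

Note this only biases the cost estimates; PostgreSQL has no way to dictate an exact plan.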
Hey,
I'm a first-time postgres user. Yesterday I was able to access a
database I had loaded that same day, until the system was shut down without
warning. Now I get the error:
createdb: could not connect to database postgres: could not connect to
server: No such file or directory
Is t
Many thanks Scott!! This is the solution I needed :)
Jordi
On 19 mayo, 23:36, [EMAIL PROTECTED] ("Scott Marlowe") wrote:
> On Mon, May 19, 2008 at 4:51 AM,jrivero<[EMAIL PROTECTED]> wrote:
> > Hi, I need help with a query. I have three fields, year, month and day
> > in a table and need to join and u
I'm doing some performance experiments with postgres (8.3.1) and would
like to force postgres to execute a particular query plan. Is there a
straightforward way to specify a query plan to postgres either
interactively or programatically?
Thanks.
John Cieslewicz.
--
Sent via pgsql-general
Hello,
I'm writing a C++ application that stores data in a table that may
ultimately become very large (think tens of millions of rows). It has
an index on one row, in addition to the index created on/as part of
its primary key. My concern is that a call to the pl/pgSQL function
that INSERTs data
On Mon, June 2, 2008 6:53 pm, Tom Lane wrote:
> "Henry" <[EMAIL PROTECTED]> writes:
>> I'm trying to code a function to copy rows from one machine to another
>> using dblink and cursors:
>
> What PG version is this, exactly?
Arg, dammit. Sorry, it's version 8.2.6 (where the function is running),
"Henry" <[EMAIL PROTECTED]> writes:
> I'm trying to code a function to copy rows from one machine to another
> using dblink and cursors:
What PG version is this, exactly?
> perform dblink_connect ('dbname=db1...host=othermachine.com');
> perform dblink_open ('cur_other1', 'SELECT col1 FROM tab1')
On Mon, Jun 02, 2008 at 11:55:14AM -0400, Michael P. Soulier wrote:
> Hello,
>
> I'm migrating a db schema in an automated fashion, using this
>
> UPDATE clients_client
> SET icp_id = null
> WHERE icp_id = 1;
> UPDATE icps_icp
> SET id = nextval('public.icps_icp_id_seq')
> WHERE i
Hello,
I'm migrating a db schema in an automated fashion, using this
UPDATE clients_client
SET icp_id = null
WHERE icp_id = 1;
UPDATE icps_icp
SET id = nextval('public.icps_icp_id_seq')
WHERE id = 1;
UPDATE clients_client
SET icp_id = currval('public.icps_icp_id_seq')
WHE
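The re-keying pattern in the excerpt above can be sketched in full; the final predicate is an assumption, since the last statement is cut off:

```sql
-- Sketch: move row id 1 onto a fresh sequence value, then repoint the
-- referencing rows. Must run in a single session, because currval()
-- is per-session state.
BEGIN;
UPDATE clients_client SET icp_id = NULL WHERE icp_id = 1;
UPDATE icps_icp SET id = nextval('public.icps_icp_id_seq') WHERE id = 1;
UPDATE clients_client SET icp_id = currval('public.icps_icp_id_seq')
 WHERE icp_id IS NULL;  -- assumption: repoint the rows parked above
COMMIT;
```

Wrapping the three statements in one transaction keeps other sessions from seeing the intermediate NULLs.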
Hello all,
I'm trying to code a function to copy rows from one machine to another
using dblink and cursors:
...
perform dblink_connect ('dbname=db1...host=othermachine.com');
perform dblink_open ('cur_other1', 'SELECT col1 FROM tab1');
loop
fnd := 0;
for rec in
-- grab a 1000 rows at a
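For reference, the fetch loop being attempted can be sketched with dblink's documented cursor functions; the batch size and names follow the excerpt, while the loop body and destination table are assumptions:

```sql
-- plpgsql sketch of a batched copy via dblink cursors.
PERFORM dblink_connect('dbname=db1 host=othermachine.com');
PERFORM dblink_open('cur_other1', 'SELECT col1 FROM tab1');
LOOP
    fnd := 0;
    FOR rec IN
        -- grab 1000 rows at a time from the remote cursor
        SELECT col1 FROM dblink_fetch('cur_other1', 1000) AS t(col1 text)
    LOOP
        INSERT INTO tab1_local (col1) VALUES (rec.col1);  -- hypothetical target
        fnd := fnd + 1;
    END LOOP;
    EXIT WHEN fnd = 0;  -- dblink_fetch returns no rows once exhausted
END LOOP;
PERFORM dblink_close('cur_other1');
PERFORM dblink_disconnect();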
I'm trying to drop an old user but got some strange issues:
template1=# drop USER szhuchkov;
ERROR: role "szhuchkov" cannot be dropped because some objects depend on it
DETAIL: 1 objects in database billing
2 objects in database shop
OK... let's look closer at these two DBs:
shop=# drop USER szhuchkov;
E
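A standard way out of this error (available since 8.2, though not shown in the excerpt) is to reassign or drop the role's objects in each database the DETAIL mentions:

```sql
-- Run in each database listed in the DETAIL (here: billing and shop).
REASSIGN OWNED BY szhuchkov TO postgres;  -- hand objects to another role, or:
DROP OWNED BY szhuchkov;                  -- drop them (also revokes privileges)
-- Then, from any database:
DROP USER szhuchkov;
```

REASSIGN OWNED transfers ownership but not privileges, so DROP OWNED is often still needed afterwards to clear grants.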
On Mon, 02.06.2008 at 9:57:17 -0400, Tom Lane wrote:
> "A. Kretschmer" <[EMAIL PROTECTED]> writes:
> > i expected for a 1000 row test-table a cost per function of 2.5
> > (cpu_operator_cost = 0.0025), but i got 5.
>
> What's the data type of "i"? I suspect you really have two fun
"A. Kretschmer" <[EMAIL PROTECTED]> writes:
> i expected for a 1000 row test-table a cost per function of 2.5
> (cpu_operator_cost = 0.0025), but i got 5.
What's the data type of "i"? I suspect you really have two function
calls in that expression: a type coercion function and cos() itself.
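Tom's point can be checked directly: if the column is integer, cos(i) is really cos(float8(i)), i.e. two function calls per row. A hedged sketch (table name is hypothetical, cost figures approximate):

```sql
CREATE TEMP TABLE t AS SELECT generate_series(1, 1000) AS i;  -- i is integer
EXPLAIN SELECT cos(i) FROM t;   -- coercion + cos(): ~2x cpu_operator_cost/row
ALTER TABLE t ALTER COLUMN i TYPE float8;
EXPLAIN SELECT cos(i) FROM t;   -- single call: ~1x cpu_operator_cost/row
```

With cpu_operator_cost = 0.0025 and 1000 rows, the two-call case accounts for the observed 5 versus the expected 2.5.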
On Mon, 2008-06-02 at 15:05 +0200, Alain Barthe wrote:
>
> Sounds like a fun project.
> I agree.
With PostgreSQL, the agent can simply daemonize and talk to the
Postmaster using libpq and proper HBA. Everything in pg_catalog.* and
information_schema.* is already quantified in
On Mon, Jun 02, 2008 at 09:38:29AM -0400, Tom Lane wrote:
> That's not very surprising at all: a backend might have to write out a
> dirty buffer in order to reclaim the buffer for re-use, and which
> database the page is from doesn't enter into that.
> What does seem surprising is that it's had to
hubert depesz lubaczewski <[EMAIL PROTECTED]> writes:
> why backend process from one database has opened files from other databases?
That's not very surprising at all: a backend might have to write out a
dirty buffer in order to reclaim the buffer for re-use, and which
database the page is from do
On Mon, 2008-06-02 at 13:53 +0100, Dave Shield wrote:
> 2008/6/2 Brian A. Seklecki <[EMAIL PROTECTED]>:
> >> <[EMAIL PROTECTED]>:
> >> There should be an AgentX sub-agent for Xen that feeds Net-SNMP
> >> ~BAS
> >>
>
> > With Xen we'd have to look at how a Net-SNMP daemon running in
smiley2211 wrote:
Hello all,
I have created a backup via 'pg_dump -c -f mydump.backup' - however when I
try to load it via the pgAdmin tool, it does not allow me to - the 'OK' button
is grayed out even though I have selected the file to be restored... is this
doable?
pgAdmin's restore tool can
I just checked what process uses the most file descriptors on my system.
It's a postgresql backend, but there is something wrong:
USER    PID   %CPU %MEM VSZ   RSS   TTY STAT START TIME COMMAND
pgdba   20845 0.0  2.8  57976 29160 ?   Ss   May22 2:20 postgres:
jabberd jabberd 127.0
On Mon, 2008-06-02 at 09:10 +0200, Alain Barthe wrote:
> 2008/5/31 Brian A. Seklecki (Mobile)
> <[EMAIL PROTECTED]>:
> There should be an AgentX sub-agent for Xen that feeds Net-SNMP
> ~BAS
>
We can work on one. The Net-SNMP folks have a great AgentX API I hear.
I also need to w
Hi,
according to the doc
(http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html#GUC-CPU-OPERATOR-COST),
Quote:
Sets the planner's estimate of the cost of processing each operator or function
executed during a query.
I expected for a 1000-row test-table a cost per function of 2.
On 29/05/2008, Bob Pawley <[EMAIL PROTECTED]> wrote:
> ... get their point across up front without making me wade through
> previous posts which I have already read.
Good for you :}
> I can understand the concept of bottom posting
No one advocates bottom-posting here. It's all about interspersed
Hi,
I have the following problem when trying to access other PostgreSQL
databases with DBLink. I followed the instructions on
http://www.postgresonline.com/journal/index.php?/archives/44-Using-DbLink-to-access-other-PostgreSQL-Databases-and-Servers.html.
My query to get access to another da
> EXECUTE 'INSERT INTO '||tablename||' ('||fields||') VALUES
> ('||vals||') RETURNING currval('''||seqname||''')' INTO newid
>
> Note where last quote goes.
That was exactly what I wanted to do!
SELECT 'Thank you' FROM heart;
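The quoting trick above generalizes; a sketch using quote_ident/quote_literal to keep the dynamic SQL safe (variable names follow the excerpt):

```sql
-- plpgsql fragment: currval's argument is quoted as a *literal* inside
-- the EXECUTE string, and RETURNING ... INTO captures the new id.
EXECUTE 'INSERT INTO ' || quote_ident(tablename)
     || ' (' || fields || ') VALUES (' || vals || ')'
     || ' RETURNING currval(' || quote_literal(seqname) || ')'
  INTO newid;
```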
I have not found any core dumps. The server does not seem to stop
completely but continues to run.
2008/5/30 Zdenek Kotala <[EMAIL PROTECTED]>:
> Do you have any core dump? Stack trace should help.
>
>Zdenek
>
> A B napsal(a):
>>
>> I get a lot of
>> Error server closed the connection unexpecte