PostgreSQL,
either.
Is there an existing implementation of this? Perhaps a perl program that
creates the required triggers and stored procedures from looking at a
schema?
Thanks.
Gordan
---(end of broadcast)---
TIP 9: In versions below 8.0, the pl
me you mean that you cannot attach triggers to schema changes. Yes,
I had thought of that a minute ago. I don't suppose this could be deemed a
feature request for CREATE/ALTER/DROP schema level triggers? ;)
Gordan
it, is written in Perl.
I looked at all of the above, and they all seemed (to me at least) to
involve unnecessary complication or limitations I saw as unreasonable (or
both). I looked at Bucardo in detail, and I was rather disappointed to see
that it only supports two m
the biggest counter, and release the lock
on everything else until it catches up, then re-lock, then replicate. It
would add a fair bit of latency, though.
Gordan
e query hash
could be implemented. Replicator function issues locks and compares the
counters/hashes to establish whether a state is consistent on all nodes
before a write query is replicated. It's a kludge and a horrible one at
that, and it will slow down the writes under load, but I thi
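The counter/hash comparison described above can be sketched minimally as follows. The node states, field names, and the consistency function are hypothetical illustrations; a real replicator would also hold the locks across the check and the subsequent write.

```python
# Minimal sketch of the counter/hash consistency check described above.
# Node states and field names are made up for illustration; a real
# replicator would hold its locks from this check through the write.

def is_consistent(nodes):
    """Return True if every node reports the same (counter, query_hash)."""
    states = {(n["counter"], n["query_hash"]) for n in nodes.values()}
    return len(states) == 1

nodes = {
    "node_a": {"counter": 42, "query_hash": "abc123"},
    "node_b": {"counter": 42, "query_hash": "abc123"},
    "node_c": {"counter": 41, "query_hash": "abc122"},  # lagging node
}

print(is_consistent(nodes))  # node_c lags, so False
nodes["node_c"] = {"counter": 42, "query_hash": "abc123"}
print(is_consistent(nodes))  # all nodes agree now, so True
```

Only once the check passes on all nodes would the write query be replicated, which is where the latency cost under load comes from.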
of the existing solutions. They're all way
easier than re-inventing the wheel.
Existing solutions can't handle multiple masters. MySQL can do it at
least in a ring arrangement.
Gordan
Gregory Youngblood wrote:
On Sat, 2008-01-19 at 23:46 +, Gordan Bobic wrote:
David Fetter wrote:
> In that case, use one of the existing solutions. They're all way
> easier than re-inventing the wheel.
Existing solutions can't handle multiple masters. MySQL can do it at
Scott Marlowe wrote:
On Jan 19, 2008 6:14 PM, Gordan Bobic <[EMAIL PROTECTED]> wrote:
Gregory Youngblood wrote:
On Sat, 2008-01-19 at 23:46 +, Gordan Bobic wrote:
David Fetter wrote:
In that case, use one of the existing solutions. They're all way
easier than re-inventin
statement based
triggers?
Thanks.
Gordan
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
these:
http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=Solid State Drive&type1_idno=13
Gordan
better results at a fraction of the
cost with appliances I've built myself.
Gordan
e following:
CREATE TRIGGER MyTable_Trigger_DELETE BEFORE DELETE ON MyTable
FOR EACH ROW
EXECUTE PROCEDURE MyTable_Trigger_DELETE();
Can I create a trigger function like this? If not, what are my options
WRT alternatives?
Many thanks.
Gordan
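For what it's worth, a trigger function like the one named above is possible: triggers are named per-table while functions live in the schema namespace, so the names don't clash. A minimal plpgsql sketch (the body here is purely illustrative):

```sql
-- Sketch only: the trigger name and the function name may coincide,
-- since triggers and functions live in different namespaces.
CREATE FUNCTION MyTable_Trigger_DELETE() RETURNS trigger AS $$
BEGIN
    -- e.g. log OLD.* somewhere, or veto the delete by returning NULL
    RETURN OLD;  -- returning OLD lets the DELETE proceed
END;
$$ LANGUAGE plpgsql;
```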
Richard Huxton wrote:
Gordan Bobic wrote:
Hi,
I'm trying to figure out how to do this from the documentation, but I
can't figure it out. :-(
Here is what I'm trying to do:
CREATE TABLE MyTable
(
ID bigserial unique,
MyData char(255),
PRIMARY KEY (ID)
p2 (ID, test) VALUES ( $1 $2 )
^^^
What did I miss?
A comma in the indicated position I guess...
Thanks. I'm feeling really stupid now. You may all mock me. :-)
Thanks for your help, it's most appreciated. :-)
Gordan
So, where can I find the specification for the protocol that I am going
to have to talk to the socket?
Many thanks.
Gordan
Mike Rylander wrote:
On Wed, 30 Mar 2005 12:07:06 +0100, Gordan Bobic <[EMAIL PROTECTED]> wrote:
Hi,
How difficult is it to write a driver for pgsql (via network or UNIX
domain sockets) for an as yet unsupported language?
Specifically, I'd like a driver for JavaScript, for use with Mo
being thick and producing broken SQL? Can anybody
think of a different way of doing this that would yield a performance
increase? I don't want to believe that doing a ~* unindexed sequential search
is the best solution here...
Thanks.
Gordan
mething MS SQL instead. How can I
get this to work with PostgreSQL? Who maintains the FTI contrib?
Kind regards.
Gordan
n why
exactly is an additional $5,500 for another licence a problem all of a
sudden???
Regards.
Gordan
a 4 GB table with 40M rows requires over 40GB of
temporary scratch space to copy, due to the WAL temp files. That sounds
totally silly. Why doesn't pg_dump insert commits every 1000 rows or so???
Cheers.
Gordan
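The "commit every 1000 rows or so" idea suggested above can be sketched generically as a batching helper; the row source and batch size are placeholders, and a real restore would issue one database commit per yielded batch.

```python
# Sketch of the "commit every N rows" idea for large restores.
# The row source is a stand-in; a real restore would call the driver's
# insert/commit once per yielded batch instead of one huge transaction.

def batched(rows, batch_size=1000):
    """Yield lists of at most batch_size rows at a time."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# e.g. 2500 rows restore as batches of 1000, 1000 and 500,
# bounding the amount of WAL/scratch space any one transaction needs.
sizes = [len(b) for b in batched(range(2500), 1000)]
print(sizes)  # [1000, 1000, 500]
```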
n't let me tweak the parameter I need to change.
Can anybody suggest a way of doing this?
Thanks.
Gordan
Sorry for replying to my own email, but I've just stumbled upon an article
that seems to imply that v7.1 will support unlimited record lengths. Is
this the case? When is v7.1 due for release? Is a beta available?
Thanks.
Gordan
- Original Message -
From: "Gordan Bobi
eit with greater
latency.
> That's one of the greatest hurdles to distributed computing. That's
why
> the applications that are best adapted to distributed computing are those
> that don't require much data over the wire - which certainly doesn't
apply
> to databases. : )
I think it depends whether the amount of data is the problem, or fitting it
together.
Somebody please explain to me further why I am wrong in all this?
Regards.
Gordan
ve a clustered open source database with the less effort
> possible, now.
>
> The project to do good stuff (ie code) in this field is very long...
Indeed. There has to be a feasible starting point that yields modest
improvements at modest cost (in time and effort in this case)
> i hope that some guy will start a real thing ... one idea is to start a
> project on cosource or similar to receive founding $$.
> This project is very important for the OpenSource world.
I agree. Having a fully clustered database with very little network
overhead would be a major success, both for Postgres and OpenSource. Here's
an obvious question - how good is (does it exist?) clustering support on
Oracle?
Regards.
Gordan
/usr/local) for a RH6.2 system...
Cheers.
Gordan
ought to have a UPS if
you have a mission critical system. I have recently had a complete disk's
worth of data hosed due to power failure, as something went wrong and the
root inode got corrupted. Usefulness of backups is difficult to
overestimate...
HTH.
Gordan
ed, you should disable fsync() and DISABLE
WAL (can someone more clued up please confirm this?) for optimum speed? I
thought that WAL was designed as a "solution inbetween"...
Also, make sure that your benchmark findings include results for EACH test
separately. Different databases will have different performance benefits in
different environments, so make sure that your benchmark is sufficiently
diverse to test for those separate cases.
Are you put off the benchmarking yet?
Regards.
Gordan
some other kind of weird hardware failure that will
wipe out their data. Then they will come back again and complain.
And the answer is always to simply spend an hour or so reading the
documentation...
Some people, eh...
Regards.
Gordan
till know what you need to send as the
"password" from the front end to let you into the database.
Unless I am missing something here, doing this doesn't make any
difference... Not for someone serious about breaching security, anyway...
Regards.
Gordan
they the chances are that you will go through the tuning
process yourself regardless of how it is shipped. All the default that is
slightly slower will do is encourage you to read the docs that little bit
sooner, if your system becomes large enough for this to be an issue.
Regards.
Gordan
then it doesn't matter whether the password is
encrypted or not. You are still, effectively, transmitting a "password
string" that is used for authentication.
The security of passwords, encrypted or otherwise is purely reliant on the
security of your database server that stores the data.
Does that make sense?
Regards.
Gordan
f I just use no view and do
SELECT Date FROM PastInvoices WHERE Company = 'SomeCompany' ORDER BY Date
DESC LIMIT 1;
which does PRECISELY the same thing, that finishes in a fraction of a
second. This was the same speed that the max() view query ran at on v7.0.x.
Why such a sudden change?
Regards.
Gordan
ly, can anyone think of a solution to this problem?
Thanks.
Gordan
d run the first query, explain says that indices are used, but it STILL
takes forever. The first, slow query executes a merge join, while the
second only executes two index scans in a nested loop.
Why? This seems like a fairly basic thing, but it seems to break something
in the way the query is executed...
Regards.
Gordan
y plan changes,
but select times are still roughly the same... Doing the separate
subqueries on each table and joining the data manually in the application code
takes literally seconds. I am sure that cannot be right and I must be doing
something wrong; if anyone has a good idea of how to solve this type of
problem, I'd appreciate it, because I'm not sure I have a lot of options left...
Regards.
Gordan
n code for now. I'll try again in
straight SQL when the next beta or release are available.
Thanks.
Gordan
>For one of our customer, we are running a PostgreSQL database on a
> dynamic PHP-driven site. This site has a minimum of 40 visitors at a
> time and must be responsive 24h a day.
And from the bandwidth and hit logs, they cannot determine a time of day
when there are hardly any hits? Possible
those records.
So, what 8KB limit are you talking about? If there is one that I'm not
aware of, I'd sure like to find out about it...
Regards.
Gordan
elp or advise would be appreciated.
The only thing that comes to mind is that if you're doing a bulk
insert, you should probably drop all indices that aren't unique or for
primary keys, and re-create them once your insert all your data...
Regards.
Gordan
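The drop-and-recreate approach above might look like this in outline (index, table and file names are made up for illustration; unique and primary-key indexes stay in place since they enforce constraints):

```sql
-- Sketch of bulk loading with non-constraint indexes dropped first.
DROP INDEX mytable_mydata_idx;

COPY mytable FROM '/tmp/bulk_data.csv';

-- Rebuilding once at the end is far cheaper than maintaining the
-- index incrementally for every inserted row.
CREATE INDEX mytable_mydata_idx ON mytable (mydata);
```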
(Frequent Access)
If you just have lots of queries in parallel, try replication, and
pick a random server for each connection.
(Complex Queries)
If you absolutely, positively need one query to be executed across all
nodes in the cluster because one machine would just take too long no
matter how b
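The "pick a random server for each connection" scheme for the frequent-access case can be sketched trivially; the host names are hypothetical, and a real setup would also need health checks and failover.

```python
# Sketch of spreading read connections across replicas at random.
# Host names are hypothetical; real code would also handle failover.
import random

SERVERS = ["db1.example.com", "db2.example.com", "db3.example.com"]

def pick_server(servers=SERVERS):
    """Choose a replica for a new connection."""
    return random.choice(servers)

conn_host = pick_server()
print(conn_host in SERVERS)  # True
```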
3 server with 1 GB of PC133 CAS2, and
everything else for significantly less than $1,500! Where did the
remaining $28,500 go? 4 TB hardware RAID5 disk array with 1 GB of
cache and 10 hot spare disks? Because that would cost you roughly
$28K...
Regards.
Gordan
't run Redhat.
I don't have a copy of it either, and I do run RedHat. The trick is to
disable the services you don't use, and have portsentry firewall all
the ports. :-)
Regards.
Gordan
Are you using the "quote" function? You have to use it if you are to
guarantee that the data will be acceptable as "input".
$myVar = $myDB -> quote ($myVar)
> I'm using the Pg perl interface. But, think my problem was that I
had
> unescaped single quotes in the string. Added the following to my
Not sure, but the syntax is as I described below. Try checking the
perl DBD::Pg documentation. I think that's where I read about it
originally, many moons ago.
> Just checked the Pg docs, don't see a quote function. What is it
part of?
>
>
> > Are you using the "quote" function? You have to use i
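The effect of a driver-level quote function can be sketched in minimal form: wrap the value in single quotes and double any embedded single quotes. This is a simplification of what DBD::Pg's quote actually does (real drivers also handle NULLs, encodings, and type-specific rules).

```python
# Minimal sketch of driver-style quoting: wrap in single quotes and
# double embedded ones. Simplified relative to a real driver's quote(),
# which also handles NULLs, encodings and type-specific cases.

def quote(value):
    return "'" + str(value).replace("'", "''") + "'"

print(quote("O'Brien"))  # 'O''Brien'
print(quote("plain"))    # 'plain'
```

The unescaped-single-quote failure described above is exactly what this prevents: the embedded quote no longer terminates the SQL string literal early.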
Or try:
http://pgreplicator.sourceforge.net/
Haven't used it myself yet, but it looks pretty good...
> > Now, erserver seems to work, but it needs a bit of hacking around
that I
> > hadn't done yet. Maybe when I get it working I'll see to writing
> > something. In the mean time, source code is the