>
> I wonder. If it's a write-heavy database, I totally agree with you. But if
> it's mostly read-only, and mostly fits in RAM, then a pgpool of servers
> should be faster.
>
> It'd be nice to know the usage patterns of this database (and its size).
>
In this case the databases are small to medium and the
On Tue, Jan 17, 2012 at 12:31 PM, David Morton wrote:
> Have you looked at a 'shared storage' solution based on DRBD ? I configured
> a test environment using SLES HAE and DRBD with relative ease and it behaved
> very well (can probably supply a build script if you like), there are lots
> of peopl
> Only a single-master. If you want a multi-master solution, see Postgres-XC.
>
Is Postgres-XC production-ready? Can I trust my most valuable data to it?
Cheers.
>
> I have a few clusters running on EC2 using DRBD to replicate between
> availability zones. It's not fast, but it works. If your write load is under
> 30MB/sec it's definitely an option. I run DRBD over SSH tunnels to get around
> the random IP address issue. I use heartbeat on top for resource
>
> because you quickly get trapped into OS specific quicksand with these
> features.
>
Isn't that an issue with just about every feature? Besides, those issues
have mostly been solved already. Pgpool already exists. Tatsuo Ishii
says porting it to Windows is just a resource issue, as he doesn't have the
I have a situation where I am pulling CSV data from various sources
and putting it into a database after it is cleaned up and such.
Currently I am doing the bulk of the work outside the database using code,
but I think the work would go much faster if I were to import the
data into temp tables u
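For what it's worth, a minimal sketch of that approach (table and column
names here are made up; the cleanup expressions would depend on the data):

CREATE TEMP TABLE raw_import (col_a text, col_b text, col_c text);

COPY raw_import FROM '/path/on/server/data.csv' WITH CSV HEADER;
-- (or \copy from psql if the file lives on the client)

INSERT INTO clean_table (a, b, c)
SELECT trim(col_a), nullif(col_b, ''), col_c::int
FROM raw_import
WHERE col_a IS NOT NULL;

Once the rows are in a temp table the cleanup can be done in set-based SQL
instead of row-at-a-time code, which is usually where the speedup comes from.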
>
>> 1. COPY from a text field in a table like this COPY from (select
>> text_field from table where id =2) as text_data ...
>
> The syntax is a bit different:
> CREATE TABLE text_data AS SELECT text_field FROM table WHERE id = 2
Really? Wow, I would have never guessed that. That's awesome.
Thanks
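Worth noting for the archives: since 8.2, COPY also accepts a query directly
for output, which may be closer to what was originally asked (names below are
illustrative):

COPY (SELECT text_field FROM my_table WHERE id = 2) TO STDOUT;

CREATE TABLE ... AS is the right tool when the result should be kept in a new
table; COPY (query) TO is for getting it out as text or CSV.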
I am trying to understand this bit of documentation about GiST and GIN searches:
Also, * can be attached to a lexeme to specify prefix matching:
SELECT to_tsquery('supern:*A & star:A*B');
        to_tsquery
---------------------------
 'supern':*A & 'star':*AB
I tried various experiments but can'
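For anyone else puzzling over this, a minimal check of the prefix-matching
behaviour (the output is what I'd expect with the default 'english'
configuration):

test=# SELECT to_tsvector('supernovae stars') @@ to_tsquery('supern:*');
 ?column?
----------
 t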
I want to be able to search a lot of fields using queries that use
ILIKE and unfortunately many of the queries will be using the
'%SOMETHING%' or '%SOMETHING' type clauses. Since indexes are useless
on those I was thinking I could use tsvectors but I can't figure out
how to accomplish this.
One
>
> If you're using 9.1, you might look into contrib/pg_trgm instead.
If I were to use pg_trgm, would it be better to create a trigram index on
each text field? In the past I have created a text field which
contains the rest of the fields concatenated. That works great as long
as you are looking for a
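A minimal sketch of the pg_trgm route, assuming 9.1 (9.1 is when LIKE/ILIKE
searches became able to use trigram indexes; table and column names below are
made up):

CREATE EXTENSION pg_trgm;
CREATE INDEX idx_docs_body_trgm ON docs USING gin (body gin_trgm_ops);
-- leading-wildcard searches can now use the index:
SELECT * FROM docs WHERE body ILIKE '%something%';

For the concatenated-field trick, an expression index should also work, as
long as the query repeats the same expression:

CREATE INDEX idx_docs_all_trgm ON docs
  USING gin ((coalesce(f1,'') || ' ' || coalesce(f2,'')) gin_trgm_ops);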
>
> We made most of our text/varchar columns citext data types so that we
> could do case-insensitive searches. Is this going to negate most of the
> index searches? It appeared to our DBA that it would be easier to use
> the citext data type than to need to use ILIKE instead.
>
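For reference, a small sketch of how citext behaves (assuming the extension
is installed; the table and values are made up):

CREATE EXTENSION citext;
CREATE TABLE users (email citext PRIMARY KEY);
INSERT INTO users VALUES ('Tim@Example.com');
SELECT * FROM users WHERE email = 'tim@example.com';  -- matches
-- plain equality like this can use the ordinary btree index;
-- '%foo%'-style patterns still can't use it, citext or not.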
In the same vein...
D
> However, given the size of this table, I have no idea how long something
> like this might take. In general I've had a tough time getting feedback
> from postgres on the progress of a query, how long something might take,
> etc.
>
You can always do this, which would result in minimal hassle.
> It is my understanding that since the extension citext is available,
> this gives you what you're asking for, and at least at this point isn't
> going to be part of the core.
>
For me it's more of a workaround than a solution, but yes, probably good
enough. Collation is more subtle than case inse
Is there a way to back up a database or a cluster through a database
connection? I mean I want to write some code that connects to the
database remotely and then issues a backup command like it would issue
any other SQL command. I realize the backups would need to reside on
the database server.
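One option that is literally "a backup command over a SQL connection" is
server-side COPY, though it is per-table rather than a full dump, and it
needs superuser (the table name and path here are made up):

COPY important_table TO '/var/backups/important_table.csv' WITH CSV;

The file lands on the database server's filesystem, which matches the
constraint above.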
--
On Tue, Mar 27, 2012 at 1:00 PM, David Boreham wrote:
> fwiw we run pg_dump locally, compress the resulting file and scp or rsync it
> to the remote server.
I wanted to see if I can do that without running pg_dump on the remote
server. That would involve connecting to the server via ssh and I wan
>
> We're also using libpq to trigger backups using NOTIFY from a client
> app.
Do you have an example of how this is done?
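My guess at the shape of it would be plain LISTEN/NOTIFY (the channel name
here is invented):

-- a small daemon on the backup host holds a connection open:
LISTEN run_backup;
-- the client app triggers a backup like any other SQL command:
NOTIFY run_backup;
-- when the notification arrives, the daemon shells out to pg_dump.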
On Thu, Dec 11, 2008 at 1:05 PM, David Wall <[EMAIL PROTECTED]> wrote:
> We've done warm standby as you indicate, and we've not needed anything
> special.
Thanks for sharing your configuration. I have one additional question though...
How do you handle the reverting? For example:
Say I have a p
>
> You have to run a new base backup and have the slave ship logs to the
> master.
Mmmm. Does this backup have to be a full backup? What if your database
is very large?
I am hoping to get a setup which is similar to SQL server mirroring.
It uses a witness server to keep track of who got what "lo
> No, probably not. I mean they are all pretty easy (especially log
> shipping) but it is definitely true they are slow, depending on the size
> of the database.
>
As an alternative, is there a clustering or multi-master replication
scheme that would be useful in a WAN? Preferably with a "preferred
>
> 1. It's OK if we lose a few seconds (or even minutes) of transactions
> should one of our primary databases crash.
> 2. It's unlikely we'll need to load a backup that's more than a few days
> old.
>
How do you handle failover, and failing back to the primary once it's back up?
On Wed, Feb 11, 2009 at 11:24 PM, Serge Fonville wrote:
> Hi,
> I am in the process of setting up a two node cluster.
> Can PostgreSQL use DRBD as its storage?
> Since the in-memory database would be synchronized with the on-disk
> database.
> If this would be done with every query, this would gre
>
> We're very happy with pgpool-II for load-balancing and multi-master
> usage of PostgreSQL (keep in mind to enable HA for pgpool-II itself to
> avoid a SPOF, e.g. with heartbeat).
>
>
Thanks.
I am going to see which one has better documentation and try that one first.
If you could publish a brief howto on this I would be most grateful. I bet
many others would too.
On Mon, Feb 23, 2009 at 2:56 PM, Bryan Murphy wrote:
> On Sun, Feb 22, 2009 at 7:30 PM, Tim Uckun wrote:
> >> 1. It's OK if we lose a few seconds (or even minutes) of transaction
On Wed, Feb 25, 2009 at 9:40 PM, Dave Page wrote:
> On Wed, Feb 25, 2009 at 8:16 AM, Scara Maccai wrote:
> > What? Hot standby won't make it in 8.4?
>
> Hot standby != synch-rep.
>
> The former is still being reviewed, though it's starting to look like
> it's cutting it pretty fine for inclusion
>
> I think I'm starting to get an idea of the most suitable solution
> for my situation.
> I was hoping there would be some sort of patch for the PostgreSQL download
> instead of an entire rebuild of the sources.
>
> I'll post any updates I find.
>
Hey Serge.
Any update on this?
I can't
>
>
> Again, this is a lot of work to avoid master / slave with failover.
> Are you sure it's really needed for your situation?
>
>
What is the most straightforward and simple way to achieve master/slave with
failover?
Preferably a solution that would have decent monitoring, alerting and
failback
Today the database shut down unexpectedly. I have included the log file
that shows the shutdown. Can anybody tell me why this happened and how I can
make sure it doesn't happen again?
The only thing I can think of that I did was to specify a password for the
postgres user in the operating system.
On Thu, Mar 26, 2009 at 2:23 AM, Bill Moran wrote:
> In response to Tim Uckun:
>
> > Today the database shut down unexpectedly. I have included the log file
> > that shows the shutdown. Can anybody tell me why this happened and how I
> > can make sure it doesn'
According to the documentation it's not possible to log ship from a 64 bit
server to a 32 bit server.
I just want to confirm that this is the case before I waste a whole lot of
time trying to set it up.
On Thu, Mar 26, 2009 at 2:05 PM, Tatsuo Ishii wrote:
> > According to the documentation it's not possible to log ship from a 64 bit
> > server to a 32 bit server.
>
> I think the doc is quite correct.
>
So what is the best way to accomplish a failover from a 64 bit machine to a
32 bit machin
>
> slony?
>
That sounds more like a question than an answer :)
Can I presume it doesn't care about the architecture of the OS?
It looks like most avenues for high availability with postgres are not
available if one of the machines is a 64 bit machine and the other a 32.
Somebody on this list suggested I install a 32 bit version of postgres on my
x64 machine. What's the best way to handle this? Should I compile it fresh?
>
> What about running a 32bit build of PG on the 64bit machine?
>
How would one go about doing something like this?
Does anybody know if there is a sample database or text files I can import
to do some performance testing?
I would like to have tables with tens of millions of records if possible.
>
> > I would like to have tables with tens of millions of records if possible.
>
> It is easy to create such a table:
>
> test=# create table huge_data_table as select s, md5(s::text) from
> generate_series(1,10000000) s;
Thanks, I'll try something like that.
I guess I can create some random dates too.
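Something like this, maybe (the extra column names are invented):

test=# create table huge_data_table as
       select s as id,
              md5(s::text) as payload,
              timestamp '2009-01-01' + random() * interval '365 days' as created_at
       from generate_series(1, 10000000) s;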
As long as the SQL statement that the view is based on is still valid, why
does it care if the table is dropped and recreated?
--
Tim Uckun
Mobile Intelligence Unit.
--
"There are some who
It isn't materialized and it's simply a RULE, which is to say
that it's nothing more than a SQL statement. As long as that SQL statement
is valid, parseable, and returns a recordset it really ought not to care
about OIDs.
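For anyone curious, the behaviour in question looks like this on a current
release (names invented; older versions simply left the view broken after
the drop):

test=# CREATE TABLE t (x int);
test=# CREATE VIEW v AS SELECT x FROM t;
test=# DROP TABLE t;
ERROR:  cannot drop table t because other objects depend on it
-- DROP TABLE t CASCADE drops v too; recreating t afterwards does not
-- revive v, because the stored rule referenced the old table's OID.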
----------
Tim
against the big commercial database engines.
>
>WAL is a backup system.
>TOAST is a system for working with rows that have to use more than the 8K
>limitation.
>AFAIK!
What happened to outer joins? Don't you need outer joins to compete with
the big boys?
------
At 01:37 AM 10/12/2000 -0400, Tom Lane wrote:
>Tim Uckun <[EMAIL PROTECTED]> writes:
> > What happened to outer joins? Don't you need outer joins to compete with
> > the big boys?
>
>They're done too ;-)
Wooo
At 04:58 PM 10/12/00 -0400, Louis Bertrand wrote:
>Thanks. It helped cheer me up: I'm fighting with MS-Access at the
>moment (and losing badly).
Error Number 3135 There is no message for this error.
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation Expert.
anest
approach and is unlikely to break the database in any way. I am surprised
nobody has done this yet. Is there a document which describes how to create
locales?
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation Expert.
If your company isn't doing background checks, maybe you haven't considered
the risks of a bad hire.
>'ll be here feeding off the scraps of knowledge
>that are dribbled here and there...
You are right of course, but what happens once you have learned it? For me, I
never seem to be able to do the right thing, that being "now that I
have solved the problem I should write it down and submit
a field type in postgres.
BTW, is the currency datatype working with Access and ODBC yet?
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation Expert.
If your company isn't doing background checks, maybe you haven't considered
the risks of a bad hire.