I have thought of doing something like a single table inheritance and it
could be done but I thought this might be a little more elegant.
On Tue, Apr 4, 2017 at 2:15 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Apr 3, 2017 at 7:07 PM, Tim Uckun wrote:
>
> basic queries like listing all
> known unix variants; if that is hidden in the table names
> then you'll have to issue DDL queries to do the work of SELECT queries,
> which just sounds wrong to me.
>
> I'd go for a tree, possibly using recursive CTE's to dig it.
>
>
I am curious about the stored proc languages inside of postgres. When I
write a stored proc is it compiled to some internal representation or just
interpreted? How does this work with other languages?
Also would it be possible to extract PL-PGSQL into a standalone (albeit
crippled) language? Is th
Interesting, thank you. I was curious to know how it worked.
Cheers.
I have read many articles about dealing with hierarchies in postgres
including nested sets, ltree, materialized paths, using arrays as
parentage, CTEs etc but nobody talks about the following scenario.
Say I have a hierarchy like this
1
1.1
1.1.1
1.1.2
1.2
1.3
2
2.1
In this hierarchy the order
more
users are trying to update the hierarchy either by inserts or updates? I
can definitely see a situation where transactions trip over
each other.
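For reference, the recursive-CTE approach suggested above could look like this sketch over a plain adjacency-list table (the table and column names are assumptions, not from the thread):

```sql
-- Hypothetical adjacency-list table for the 1 / 1.1 / 1.1.1 hierarchy above
CREATE TABLE nodes (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES nodes (id),
    label     text NOT NULL
);

-- Walk the tree from the roots, carrying an array path that preserves
-- the hierarchical ordering shown above
WITH RECURSIVE tree AS (
    SELECT id, parent_id, label, ARRAY[id] AS path
    FROM nodes
    WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.parent_id, n.label, t.path || n.id
    FROM nodes n
    JOIN tree t ON n.parent_id = t.id
)
SELECT * FROM tree ORDER BY path;
```

This doesn't by itself answer the concurrency question, but it is the query shape the quoted advice has in mind.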
On Mon, Jul 24, 2017 at 10:32 PM, Alban Hertroys wrote:
>
> > On 24 Jul 2017, at 9:02, Tim Uckun wrote:
> >
> >
I still use pgadmin3. I also have a subscription to the jetbrains tools so
I also give datagrip a try once in a while. Datagrip has a lot going for it
but it's lagging behind pgadmin3 in a lot of ways so it hasn't replaced it
for me.
On Thu, Jul 27, 2017 at 10:41 AM, Tiffany Thang
wrote:
> Hi,
>
> Check this page out as well:
> https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
>
> Cheers,
>
> Brent Wood
> --
> *From:* Tim Uckun
> *To:* Tiffa
I think it's funny that after all these years pgadmin3 is still the most
comprehensive GUI for postgres. Even though it's prone to crashing on my
machine and I have paid for datagrip I still reach for it first.
On Fri, Jul 28, 2017 at 2:46 AM, vinny wrote:
> On 2017-07-27 00:41, Tiffany Thang wr
I don't run windows so I haven't looked at the EMS product.
On Fri, Jul 28, 2017 at 6:53 PM, vinny wrote:
> On 2017-07-28 06:31, Tim Uckun wrote:
>
>> I think it's funny that after all these years pgadmin3 is still the
>> most comprehensive GUI for postgres.
What's the best way to deal with global constants in PLPGSQL. Currently I
am putting them in a function with out parameters and then calling that
function from every other function that needs them like this.
CREATE OR REPLACE FUNCTION hashids.constants(
OUT min_alphabet_length integer,
OU
of compute
cycles.
On Wed, Aug 2, 2017 at 1:04 AM, Merlin Moncure wrote:
> On Tue, Aug 1, 2017 at 4:56 AM, Tim Uckun wrote:
> > What's the best way to deal with global constants in PLPGSQL. Currently
> I am
> > putting them in a function with out parameters and then cal
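For what it's worth, a function with OUT parameters like the hashids.constants() above can be consumed either in plain SQL or unpacked into a record in PL/pgSQL; a sketch (only the constant name comes from the fragment above, the rest is illustrative):

```sql
-- Read a single constant in plain SQL
SELECT min_alphabet_length FROM hashids.constants();

-- Or unpack all OUT parameters at once inside PL/pgSQL
DO $$
DECLARE
    c record;
BEGIN
    SELECT * INTO c FROM hashids.constants();
    RAISE NOTICE 'min alphabet length: %', c.min_alphabet_length;
END
$$;
```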
Say I have a function like this.
CREATE OR REPLACE FUNCTION some_constant()
RETURNS text
LANGUAGE plpgsql
IMMUTABLE
COST 100
AS $BODY$
begin
  return 'some_string';
end;
$BODY$;
Then I have another function that calls it but is also immutable
CREATE OR REPLACE FUNCTIO
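The truncated second function presumably looks something like this sketch (the name and body here are assumptions):

```sql
CREATE OR REPLACE FUNCTION uses_constant()
RETURNS text
LANGUAGE plpgsql
IMMUTABLE
AS $$
BEGIN
    -- IMMUTABLE is a promise to the planner, not something PostgreSQL
    -- verifies; calling another function from here is allowed
    RETURN 'prefix-' || some_constant();
END;
$$;
```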
I am setting up a database for an app. So I create a user without superuser
but with login privileges.
I then create a database and set its owner to that user like this...
dropdb --if-exists api_development
dropuser --if-exists api_user
createuser api_user -P -d
createdb api_development -O api_u
I just downloaded postgres 8.3 for windows and noticed that citext is
not an option for columns.
The web site has the source code but no binaries for windows.
I downloaded the enterprisedb and noticed that it has the citext.dll
in the lib directory so I copied the dll over to the postgres lib
dir
Where can I get uuid-ossp for windows? Also where can I get citext for windows.
These two are missing from the windows installer.
Thanks.
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org
> citext is not part of core PostgreSQL or something we have any
> intention to include in the Windows distribution at this time.
Is there an alternative for people wanting a case insensitive collation?
Does anybody know of an open source application which leverages
postgres to build a scheduling/calendaring application. Especially if
it uses some of the datetime types and functions that are unique to
postgres.
I am specifically interested in methods to deal with recurring events
and dealing with
2009/5/28 Eddy Ernesto Baños Fernández :
> Try Cybercluster
I looked into that. There is one piece of documentation that is less
than ten pages long. There is no users group, no listserve, no
community that I can discern.
Do you have experience with it and if so could you please share.
Thank
Does anybody have any experience with tungsten or sequia they would
like to share?
How about pgcluster or cybercluster?
There are a lot of options but not a lot of insight or documentation really.
Also note that the postgres-r web site says it's not production ready
and it will take months if n
I don't see any ubuntu packages for 8.4 in the default repositories.
Does anybody know if they will be upgrading the postgresql package to
8.4 or creating a new package for it.
I'd rather use the packages than to compile it myself. If anybody has
an argument as to why I should compile it I am all
On Fri, Jul 10, 2009 at 9:22 PM, Stuart Bishop wrote:
> On Fri, Jul 10, 2009 at 8:28 AM, Tim Uckun wrote:
>> I don't see any ubuntu packages for 8.4 in the default repositories.
>>
>> Does anybody know if they will be upgrading the postgresql package to
>> 8.4
I am using pg_standby to set up a warm standby.
Everything seems to be working OK so far but I do have one question.
I wanted to check to make sure that the replication was happening so I
created the trigger file, which put the database into "live" mode. I ran
a query on the standby to make sure the
I am trying to monitor replication lag using zabbix. I have written a
simple script in ruby to get the lag it goes like this.
require 'date'
require 'yaml'
y = YAML.load `/usr/lib/postgresql/8.3/bin/pg_controldata
/var/lib/postgresql/8.3/main`
last_checkpoint = DateTime.parse( y['Time of latest c
> On lots of systems, giving group permissions is nearly as bad as giving
> world permissions (eg, all the users might be in a "users" group).
> So we don't do it by default. If you want to poke holes in the security
> of your own installation, go right ahead.
I decided to see if I could do it wi
I am trying to monitor my replication lag with pg_controldata and it's
driving me nuts.
If I run pg_controldata from the command line as user postgres or root
I get the following line
Time of latest checkpoint: Thu 30 Jul 2009 00:36:12 NZST
If I run it from the crontab I get this out
O
> cron runs programs in a very limited environment. Things like TZ etc. are
> usually not set. To see what your cron sets, just run a shell script with
> something like
>
> #!/bin/sh
> env
>
> and look at the email you get with the output.
> Read the cron/crontab manpage. It tells you how to set
I have a pretty simple query on a pretty simple table with about 60
million records in it.
This is the query.
SELECT * FROM "changes" WHERE (id > 1935759 and company_id = 4 and
source_model_name = 'CommissionedVisit') ORDER BY id ASC LIMIT 1
The id field is the primary key. The other fields are
> It probably thinks the id check is going to be better to limit the result
> set.
>
> How many records are there for id > 1935759 ?
About 40 million or so.
> vs
> How many records for company_id = 4 and source_model_name =
> 'CommissionedVisit' ?
>
> If this is a common query you could probably
>
> If you try the multi-column index (which is a good idea), be sure that "id"
> is the last of the three columns, since that's the column on which you have
> an inequality test rather than an equality test; eg,
> (company_id,source_model_name,id).
>
Interesting. I would have thought the order o
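Spelled out, the index the quote suggests would be (table and column names taken from the query earlier in the thread):

```sql
-- Equality columns first, the inequality column (id) last
CREATE INDEX changes_company_model_id_idx
    ON changes (company_id, source_model_name, id);
```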
I just did an upgrade on two of my servers (the main and the
failover). The main went OK but the postgres on the failover won't
start.
Unfortunately there is nothing anywhere telling me what the problem
is. The log file is empty, there is nothing in the /var/log/messages
or /var/log/syslog either.
> I know, from IRC, the problem has been solved, there was no space on the
> disk ...
>
> Unfortunately, i haven't logs.
>
Yes. Thanks to everybody on the IRC who helped me out.
The suggestion that was most helpful was to call the postgres binary
directly. /usr/lib/postgresql/8.3/bin/postgres. Cal
I have the following query.
with parsed_data as (
SELECT
devicereportedtime ,
DATE_TRUNC('minute', devicereportedtime - (EXTRACT(minute FROM
devicereportedtime)::integer % 5 || 'minutes')::interval) as interval_start
FROM systemevents
WHERE devicereportedtime >=
Also seems counterintuitive but I guess I can always convert it. I am just
not getting the right offset when I convert. That's what's puzzling.
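Standalone, the 5-minute bucketing expression from the query above behaves like this (the sample timestamp is illustrative):

```sql
-- Truncate a timestamp down to its 5-minute bucket
SELECT DATE_TRUNC('minute',
       ts - (EXTRACT(minute FROM ts)::integer % 5 || ' minutes')::interval)
       AS interval_start
FROM (VALUES (timestamp '2013-10-02 09:17:42')) AS v(ts);
-- interval_start: 2013-10-02 09:15:00
```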
On Wed, Oct 2, 2013 at 9:15 PM, Albe Laurenz wrote:
> Tim Uckun wrote:
> > I have the following query.
>
> [...]
>
13 at 10:05 PM, Tony Theodore wrote:
>
> On 02/10/2013, at 6:49 PM, Tim Uckun wrote:
>
> >The reason for that is that in PostgreSQL there is no time zone
> information stored along with a "timestamp with time zone",
> it is stored in UTC.
>
> That seems unintuitive
I do think it would help to add it to the docs. The name "TIMESTAMP WITH
TIME ZONE" clearly implies the time zone is stored in the field. One day
there will be a type with the time zone really embedded in it, and I
wonder what they will call that.
I have a table foo. It has a serial column called "id". I execute the
following statement
ALTER TABLE table_name RENAME TO archived_table_name;
CREATE TABLE table_name (LIKE archived_table_name INCLUDING
DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
. Archive the table
>
>
> I cannot readily speak to why you are not seeing sequence ownership as a
> dependent when looking at the now-archive table definition.
>
>
pgadmin knows it's a dependency because when you try to drop it you get a
message saying so but I can't see it in the definition of the table.
BTW is ther
>
>
> As I've marked here, both default expressions are depending on the
> sequence, but there's only one "ownership" dependency of the sequence
> on a column. To complete the switchover you'd need to use ALTER SEQUENCE
> ... OWNED BY ... to move that ownership dependency to the new table.
> Then
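Spelled out, the suggested switchover step looks like this (the sequence name is an assumption based on the usual serial naming):

```sql
-- Move the sequence's ownership dependency from the archived table
-- to the freshly created table's id column
ALTER SEQUENCE table_name_id_seq OWNED BY table_name.id;
```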
Hi all.
I have the following scenario I want to accomplish.
In order to test a new branch of code I want to create a snapshot of the
live database into a testing database. The code will be deployed after that
and it may run some migrations which will change the schema of the
database. The code i
How can this be combined with other commands. For example psql doesn't
like this
psql service=test1 -d test_database -c "some command"
the PGSERVICE=test1 psql blah blah works but seems cumbersome. Why isn't
there a psql --service=blah option?
On Thu, Jan 24, 2013 at 9:48 PM, Guillaume Lela
I want to set up a central database and several satellite databases
which use some of the data from the central database.
For example
Say my central database contains people records, with a many to many
relationship with clients records.
Each client has their own database but needs read, write a
What is the preferred way to share tables between databases?
I read about dblink but it doesn't seem optimal because it needs to
pull in the entire query every time the view is referenced so it might
be highly inefficient if I am trying to join a dblinked table with a
"local" table.
Cheers.
--
That's disappointing. I guess I'll have to look towards a replication solution.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> or carefully structure your dblink joins so they can perform efficiently,
> possibly using temp tables as a sort of materialized view.
According to the documents unless you are writing procedural code with
cursors when you touch the dblink view it will pull the entire
table/recordset over.
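One way to apply the temp-table suggestion from the quote (the connection string, table names, and columns are all assumptions):

```sql
-- Pull the remote rows over once, then index and join them locally
CREATE TEMP TABLE remote_people AS
SELECT *
FROM dblink('service=central', 'SELECT id, name FROM people')
     AS t(id integer, name text);

CREATE INDEX ON remote_people (id);

SELECT l.*, r.name
FROM local_orders l
JOIN remote_people r ON r.id = l.person_id;
```

The remote table is still pulled in full, but only once per session instead of on every reference to a dblink view.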
>
>
>
> I sometimes think it'd be nice if Pg offered the ability to translate schema
> to "databases", so it runs with a single database and multiple schema, and
> you "connect" to a schema, MySQL style. It'd help people who want to use
> multiple databases on a machine and query between them, though o
I am trying to backup one database and restore it into a new schema in
another database. Database1 has the tables in the public schema
database2 has some tables in the public schema but their names will
clash so the database needs to be stored in a different schema.
I back up like this.
/usr/bin/
>
> There is no support for that built into pg_dump. You could try:
That's too bad.
>
> * dumping to a text script and doing search-and-replace for the schema
> name on the script file.
I did a dump without privileges or owners so I was thinking I could
just replace the
SET search_path = publi
I have a couple of tables (people and addresses) which are using
serials as primary keys and contain many potentially duplicate data in
them. The problem is that the data has not been input in a careful way
so for example you have a first_name, middle_name and last_name fields
but you could have Sa
I have three tables. traffic, sales and dates. Both the traffic table
and the sales table has multiple entries per date with each row
representing the date, some subdivision, and the total. For example
every day five divisions could be reporting their sales so there would
be five entries in the sa
On Tue, Jul 12, 2011 at 3:01 PM, David Johnston wrote:
> If traffic has 5 records on a date and sales has 4 on the same date you would
> output 20 records for that date.
What would I have to do in order to get 9 records instead of 20. Like
a union but with dissimilar schema.
I have two tables, traffic and sales. Each one has a date field and
lists the traffic and sales broken down by various parameters
(multiple rows for each date).
If I run select (select count(*) from traffic) as traffic, (select
count(*) from sales) as sales; I get the following 49383;167807
if
> Only the first SELECT is used to define column types and names (in the case
> of NULL AS source2_* I am not positive if you need to cast the NULL or if it
> will use the type found in the second SELECT) and I generally put a "source"
> field into the output with a textual representation of whi
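A sketch of that shape, using the traffic/sales tables from earlier in the thread (the non-date column names are assumptions):

```sql
-- 5 traffic rows + 4 sales rows on a date yield 9 rows, not 20.
-- The first SELECT fixes the output column names and types; the
-- second pads its missing columns with NULLs.
SELECT 'traffic' AS source, date, division, visits, NULL::numeric AS amount
FROM traffic
UNION ALL
SELECT 'sales', date, division, NULL, amount
FROM sales
ORDER BY date, source;
```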
I am using a library which is emitting SQL like this SELECT COUNT(*)
FROM batches LIMIT 15 OFFSET 15 the library fails because on postgres
this query returns nothing (not even zero as a result). Presumably it
returns some valid value on mysql and other databases.
Other than hacking the library is
Yea I figured it would need a subquery. I filed a ticket with the
library. Hopefully they will fix it.
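For the record, the subquery form that makes COUNT(*) respect the page looks like this:

```sql
-- COUNT over the page itself rather than the whole table;
-- this returns 0 when the page is past the end instead of no rows
SELECT COUNT(*)
FROM (SELECT 1 FROM batches LIMIT 15 OFFSET 15) AS page;
```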
I am occasionally getting this kind of error when attempting a SELECT statement.
PGError: ERROR: missing chunk number 0 for toast value 27143 in pg_toast_2619
What does this mean? Is some sort of corruption creeping into the database?
Postgres 9.0 linux.
>
> Hard to tell. We've seen enough reports like that to make it seem like
> there may be some bug buried there, but no one has provided anything to
> do any debugging work with. Can you create a reproducible test case?
Not really. I have a nightly process which downloads data and sticks
it into
>
>> Not really. I have a nightly process which downloads data and sticks
>> it into a text field. Afterwards another process reads that text data
>> and processes it creating rows in another table. The problem occurs in
>> the last step and at seemingly random intervals. For example one time
>> it
>
> A self-contained test case (code and data) that triggers the error.
> If it only does so probabilistically, once in every-so-many runs,
> that's fine.
I'll see what I can do.
Give me a few days.
Cheers.
I am wondering if either of these features are on the plate for
postgres anytime soon? I see conversations going back to 2007 on
updateable views and some conversations about synonyms but obviously
they have never been added to the database for some reason or another.
With regards to synonyms. It
>
> what sorts of operations do synonyms need to support?
> truncate?
> alter table?
> reindex?
>
I am no expert or anything but I would think they would be like
symlinks so yes to all of the above.
I am following the instructions on the wiki
https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#PITR.2C_Warm_Standby.2C_Hot_Standby.2C_and_Streaming_Replication
using the "10 minute" version of the setup.
On the master I have
postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
c
I am using 9.1.
Apparently it's working now, it took a couple of restarts but it seems
to be going.
Thanks.
On Mon, Jun 25, 2012 at 12:57 PM, Michael Nolan wrote:
>
>
> On Sun, Jun 24, 2012 at 1:57 AM, Tim Uckun wrote:
>>
>> I am following the instructio
I am sure this is intended behavior but it seems odd (and inconvenient) to me.
create database tim_test_copy template tim_test
ERROR: source database "tim_test" is being accessed by other users
DETAIL: There are 1 other session(s) using the database.
I would presume only reads are required fro
On Wed, Apr 25, 2012 at 8:34 PM, wrote:
> Andreas writes:
>
>> How would I group the table so that it shows groups that have
>> similarity () > x ?
>>
>> Lets say the table looks like this:
>>
>> id, txt
>> 1, aa1
>> 2, bb1
>> 3, cc1
>> 4, bb2
>> 5, bb3
>> 6, aa2
>> ...
>>
>> How wo
What is the best strategy for deleting orphaned records from a large table.
The usual NOT IN is very slow so there must be a better way in
postgres for dealing with these.
I know it's best not to have the orphans in the first place but in
this case it happened and I need to clean up before I can
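An anti-join usually beats NOT IN on large tables; a sketch with assumed parent/child table names:

```sql
-- Delete child rows whose parent no longer exists; rows with a NULL
-- parent_id also count as orphans under this form
DELETE FROM child c
WHERE NOT EXISTS (
    SELECT 1 FROM parent p WHERE p.id = c.parent_id
);
```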
> If the table is large, I sometimes use the following pattern:
The table is very large so I will use your advice thanks.
I have a situation where there are dozens of daemons hitting the same
postgres database. They all run different types of queries but each
daemon runs the same set of queries over and over again.
Sometimes some queries get "stuck" in that they run for hours and
hours. They never stop running. Kil
>
> The other thing that just hit my mind, is that you might need to bump
> up work_mem a bit, just for the session. So
> SET work_mem = '32MB';
>
>
> and that should make it slightly faster.
I tried the method suggested. I created a table of IDs and a field
called "mark". I indexed both fields.
Then
> Why are you messing with ctid? Does the table have no key? If not, you should
> fix that first.
>
I got the idea from here
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks
On Tue, Sep 28, 2010 at 3:39 PM, Tom Lane wrote:
> Tim Uckun writes:
>> Is there a way to tell postgres to stop any query that runs longer
>> than a specified amount of time? Say an hour?
>
> Setting statement_timeout would do that. You ought to figure out
> what
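Concretely, with the one-hour figure from the question (the role name below is an assumption):

```sql
-- Abort any statement in this session that runs longer than an hour
SET statement_timeout = '1h';

-- Or bake it into the daemons' role so every new session inherits it
ALTER ROLE daemon_user SET statement_timeout = '1h';
```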
>
> You really should try to find out why they get "stuck". Killing stuck clients
> isn't going to solve your problem (aside from the fact that you probably
> shouldn't be using kill -9 on them, that's like using a jackhammer on a
> jammed door).
Well I didn't use kill -9 I used the pg_cancel_b
I have two tables. Table C has about 300K records in it. Table E has
about a million records in it. Today I tried to run this query.
update C
set result_count = X.result_count
from C
inner join (select c_id, count(c_id) as result_count
On Tue, Oct 5, 2010 at 10:33 PM, Thomas Guettler wrote:
> Hi,
>
> just a guess: Counting is slow, since it needs to check all rows. Explained
> here:
>
Actually counting was pretty fast. As I said when I created a table
using the subquery that went pretty fast. The update was the slow
part.
On Tue, Oct 5, 2010 at 11:38 PM, Thom Brown wrote:
> On 5 October 2010 10:20, Tim Uckun wrote:
>> The database machine is a linode host.
>
> Good choice ;)
>
> Obvious question, but I'll ask it: Have you run ANALYZE or VACUUM
> ANALZYE on the database? Also bei
>
> I'd still recommend manually running ANALYZE on any involved tables,
> just in case.
>
Just out of curiosity
Say I did run it. Would it make the query run a few orders of magnitude faster?
If you read my original post you'd see that I let the query run for
over an hour before I killed
> whats the storage like on linode? Is that some sort of virtualized cloudy
> thing? wild guess says committed random writes are really slow.
Yes it's some sort of a virtual cloudy thing. I can't tell you more
than that though. They sell virtual machines and this project runs on
them.
Does a
If I have a primary key constraint defined in the database do I also
need to create an index on that field for fast lookup?
The documentation on the web seems to imply that the constraint is not
an index. Is that right?
What's the difference between creating a unique, not null index and
setting a pr
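To answer the first part: a primary key constraint creates its own unique index behind the scenes, so no extra index on the same column is needed for fast lookups:

```sql
CREATE TABLE t (id integer PRIMARY KEY);
-- psql's \d t lists the implicit index:
--   "t_pkey" PRIMARY KEY, btree (id)
```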
Hey all.
I am using postgres 8.3 with a rails application. I have a column
defined like this.
ALTER TABLE provisions ADD COLUMN provider_id integer;
ALTER TABLE provisions ALTER COLUMN provider_id SET STORAGE PLAIN;
ALTER TABLE provisions ALTER COLUMN provider_id SET NOT NULL;
ALTER TABLE provisi
> Not sure how you do this in rails but if you use DEFAULT in the query it
> works:
>
>> create table a(a int default 5);
> CREATE TABLE
>
>> insert into a(a) values (DEFAULT);
> INSERT 0 1
Unfortunately the SQL is being generated by the ORM. I really don't
want to bypass the ORM that would be wa
>
> Are you using the ruby-pg interface? I was under the impression it
> handled this properly.
>
I am using postgres-pr
On Fri, Oct 17, 2008 at 4:40 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 16, 2008 at 9:09 PM, Tim Uckun <[EMAIL PROTECTED]> wrote:
>> Hey all.
>>
>> I am using postgres 8.3 with a rails application. I have a column
>> defined like this.
I want to accomplish what I would think would be a simple thing. I
want the root user to be able to connect to the postgres database as
user postgres from the local machine without passwords. Since I am
doing this from a program I don't want to use the su facility.
I have tried a lot of different
> I suspect you are expecting that the map will cause root to be
> logged in as postgres without asking for that. It won't.
> What it will do is allow "psql -U postgres" and similar to work.
That's exactly what I am looking to do. In my case I have a script
that runs as root. I want to log in as
> then say you're postgres in the script with the -U (if you're using psql)
>
> AS ROOT:
> psql -U postgres -h remote_db dbname
>
> Note that ident doesn't work so well between machines, so you might
> want to look at .pgpass
>
That's what I am trying to get working. In actuality I am using ruby
a
>
>
> either create a postgres user named 'root' and give it superuser privileges,
In order to do that I need to connect to the database with my script
which is running under the root account.
> or switch to a different method of authentication for LOCAL users
I am confused. I presumed the prop
> But afterwards, inside the script, you could use su to temporarily switch to
> a less priviledged user:
>
> ... commands running as root
> su postgres -c 'psql ' # running as postgres
> ... running as root again
OK I will try this.
I am very confused about something though. Not one person
>
> authenication type is controlled via the pg_hba.conf file.
>
> frankly, I've never used the pg_ident file, it just seems like it would add
> more confusion to things. But, it appears to use it you need a
> map=/mapname/ primitive in your pg_hba.conf
>
That's what I attempted to do. I read th
> I just tried with 8.4.1. Started with the default configuration, created
> data/pg_ident.conf with:
> pg_map root postgres
> pg_map postgres postgres
>
> Replaced in pg_hba.conf:
> < local all all trust
> by
>> local all all
Is there a way I can get a list of the top 10 longest running queries
for the day/week/month or the top 10 queries that took the most CPU?
select * from pg_stat_activity only shows the current status.
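The pg_stat_statements contrib module (it has to be installed and listed in shared_preload_libraries) keeps cumulative per-query statistics that survive beyond pg_stat_activity's point-in-time view; a sketch:

```sql
CREATE EXTENSION pg_stat_statements;

-- Top 10 queries by cumulative execution time
-- (the column is renamed total_exec_time in PostgreSQL 13 and later)
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```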
Is there a way to get pg_standby to put timestamps in the output it
generates? I am currently piping the output to a log file but since it
contains no timestamps it's of limited use to me.
> printf "%s %s\n", strftime("%Y-%m-%d %H:%M:%S", localtime(time)), $item
>
> (untested, and I am not a regular Perl programmer, but File::Tail is the
> best library I know of to do this sort of thing)
>
Ah if I am going to do that I suppose something like this would work.
#!/bin/sh
whil
>
> If that works, great. I'm not sure if you'll run afoul of output buffering
> in this situation. Clearly you've got the right idea, just need to make
> sure it behaves as you expect and doesn't clump the line reads into larger
> chunks.
Actually I could not get it to send the output to the pi
> You might use the copy command instead of insert, which is far faster.
> If you want the fastest possible inserts, then probably copy is the way
> to go instead of insert.
> Here is copy command via API:
> http://www.postgresql.org/docs/current/static/libpq-copy.html
> Here is copy command via SQ
>
> Technically you *can* disable triggers, including RI checks, but it's VERY
> unwise and almost completely defeats the purpose of having the checks. In
> most such situations you're much better off dropping the constraints then
> adding them again at the end of the load.
I know that the SQL se
> I, for one, would loudly and firmly resist the addition of such a
> feature. Almost-as-fast options such as intelligent re-checking of
Even if it was not the default behavior?
>
> If you really want to do that, look at the manual for how to disable
> triggers, but understand that you are throwi
On Thu, Jan 7, 2010 at 3:13 AM, Dimitri Fontaine wrote:
> Tim Uckun writes:
>> Is there a command like COPY which will insert the data but skip all
>> triggers and optionally integrity checks.
>
> pg_bulkload does that AFAIK.
>
That's a great utility. Unfortunatel
Hey Guys.
It's been a while since I looked into this and it seems like new
options have cropped up for postgres HA and scalability. Is there a
consensus on the "best" way to achieve HA. My primary concern is HA
but of course any scalability gains would be more than welcome. All
the servers will
On Tue, Jan 17, 2012 at 10:47 AM, David Morton wrote:
> Is shared storage an option for you ? We've had a fairly pleasant experience
> with shared storage partnered up with SLES and its HAE (high availability
> extension) suite using a Pacemaker cluster for resource control. On top of
> this we re
>
> virtual servers tend to have lousy storage performance, for what thats
> worth. the actual physical resources are being shared by who knows what
> other workloads, and they tend to be higher latency than direct-attach
> storage, or proper SAN.
I realize that. Eventually we might have to go to