/docs/9.6/static/sql-createtrigger.html that
PostgreSQL uses triggers to implement foreign keys, so I am probably
just missing the syntactic sugar for arrays. I will try to use a trigger.
Thank you.
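Something along these lines is what I have in mind - only a rough sketch, with
made-up names (an orders.item_ids int[] column whose elements should exist in
items.id):

-- made-up schema: orders.item_ids int[] should only hold ids present in items.id
CREATE OR REPLACE FUNCTION check_item_ids() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- reject the row if any array element has no match in items
    IF EXISTS (
        SELECT 1
        FROM unnest(NEW.item_ids) AS elem(id)
        LEFT JOIN items i ON i.id = elem.id
        WHERE i.id IS NULL
    ) THEN
        RAISE EXCEPTION 'item_ids contains a value not present in items.id';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER orders_item_ids_fk
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW EXECUTE PROCEDURE check_item_ids();

That only covers the referencing side, of course - a companion trigger on items
would be needed to stop deletes of ids that are still referenced.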
On Wed, Apr 19, 2017 at 12:24 PM Rob Sargent wrote:
>
>
> On 04/19/2017 01:13 PM, Henr
I was just reading this question on reddit (the text duplicated below). I
was wondering if there is an approach for handling array foreign key
references. I am interested in the answer since I started using array
fields as well. Thank you.
- below is the message from the reddit poster:
https
On Sun, Dec 9, 2012 at 7:16 PM, Jeff Janes wrote:
> The obvious difference is that this one finds all 5 buffers it needs
> in buffers already, while the first one had to read them in. So this
> supports the idea that your data has simply grown too large for your
> RAM.
>
> Cheers,
>
> Jeff
>
J
;@ '[2012-07-03,2012-07-11)'::
daterange)"
" Buffers: shared hit=5"
"Total runtime: 0.046 ms"
Thank you.
On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
> On Fri, Nov 30, 2012 at 12:22 PM, Henry Drexler
> wrote:
> > On Fri, Nov 30, 2012 at
On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
> Could you do it for the recursive
> SQL (the one inside the function) like you had previously done for the
> regular explain?
>
> Cheers,
>
> Jeff
>
Here they are:
for the 65 million row table:
"Index Scan using ctn_source on massive (cost=0
On Fri, Nov 30, 2012 at 1:23 PM, Kevin Grittner wrote:
> Henry Drexler wrote:
>
> > why would the query time go from 4 minutes to over 50, for an
> > increase in table rows from 30 million to 65 million?
>
> Did the active (frequently referenced) portion of the database g
On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
Thanks, here they are:
for the approx. 65 million row, approx. 50 minute version:
EXPLAIN (ANALYZE, BUFFERS)
select
massive_expansion(ctn,the_range)
from
critical_visitors;
"Seq Scan on crit
On Fri, Nov 30, 2012 at 8:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did,
>
I realize this could be confusing - since I ruled out that difference, the
real question is - given this setup, why would the query
Thanks to all who responded - upgrade was successful!
One final note, when using pg_upgrade ... --link, it finally recommends
use of delete_old_cluster.sh to remove the old data files. I'm tempted,
but --link re-uses the old data files... a bit of a contradiction there, if you
follow my meaning?
Is
> "C" is the official name of that locale. Not sure how you got it to say
> "POSIX" ... maybe we didn't have normalization of the locale name back
> then?
>
> Anyway, simplest fix seems to be to update the 9.0 installation's
> pg_database to say "C" in those entries.
Never ceases to amaze me wher
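For anyone else who hits this, the fix boils down to something like the
following, run as superuser against the old 9.0 cluster (the WHERE clause is a
guess at which entries are affected - check pg_database first):

-- check first: SELECT datname, datcollate, datctype FROM pg_database;
UPDATE pg_database
   SET datcollate = 'C', datctype = 'C'
 WHERE datcollate = 'POSIX' OR datctype = 'POSIX';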
Hi all,
Using centos 5.x
I'm trying to upgrade (without having to dump/restore a 1.5TB db) from 9.0
to 9.2 using pg_upgrade, but am having a few issues.
1. I ran into the (usual?) issue with ld libraries conflicting, so
renamed /etc/ld.so.conf.d/postgresql-9.0-libs.conf to blah, and reran
ldcon
The combination of pandas, IPython and psycopg2 works wonders for pulling
data from the db and manipulating/plotting it,
although I don't know in more detail what the client's use cases are.
On Wed, Jul 25, 2012 at 1:41 PM, Mark Phillips
wrote:
> I am seeking suggestions for business intelligence and
On Fri, Dec 9, 2011 at 5:48 PM, Jack Christensen wrote:
> CREATE TABLE people(
> id serial PRIMARY KEY,
> name varchar NOT NULL
> );
>
> INSERT INTO people(name) VALUES('Adam'), ('Adam'), ('Adam'), ('Bill'),
> ('Sam'), ('Joe'), ('Joe');
>
> SELECT name, count(*), random()
> FROM people
> GROUP B
google 'weeks of supply'
On Mon, Nov 21, 2011 at 1:18 PM, Jason Long
wrote:
> I have a custom inventory system that runs on PG 9.1. I realize this is
> not a postgres specify question, but I respect the skills of the members of
> this list and was hoping for some general advice.
>
> The system i
On Thu, Nov 10, 2011 at 8:34 AM, Thomas Kellerer wrote:
>
>>
> SELECT type,
> string_agg(color, ',') as organized_by_type
> FROM clothes
> GROUP BY type;
>
>
>
wow, yes that is cleaner.
Thank you for taking the time - obviously I need to read through the string
functions again.
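For the archives, a self-contained version of that approach (the clothes table
and rows here are made up):

-- stand-in table and data
CREATE TABLE clothes (type text, color text);
INSERT INTO clothes VALUES
    ('pants', 'red'), ('pants', 'blue'), ('pants', 'orange'),
    ('shirt', 'black'), ('shirt', 'gray');

-- string_agg() collapses each group's colors into one comma-separated string
SELECT type,
       string_agg(color, ', ') AS organized_by_type
FROM clothes
GROUP BY type;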
I am thinking there is a better/simpler way, though this is what I have
working:
(postgres 9.1)
I would like to have the list of colors for each type of clothing to be
comma separated in the end result,
like this:
type   organized_by_type
pants  red, blue, orange
shirt  black, gra
On Fri, Oct 21, 2011 at 2:57 PM, Henry Drexler wrote:
> I realize I have sent a lot of messages on this thread so this will be the
> last one unless I come up with a solution, then I will post that.
>
>
Resolved.
Ray - thanks again for your help.
The pattern was it was only matchin
I realize I have sent a lot of messages on this thread so this will be the
last one unless I come up with a solution, then I will post that.
The idea behind this is to take a string and remove one character from it
successively and try to match that against any of the nodes in the query.
So for
On Fri, Oct 21, 2011 at 1:02 PM, Henry Drexler wrote:
>
> On Fri, Oct 21, 2011 at 6:10 AM, Raymond O'Donnell wrote:
>
>>
>> Glad you got sorted. What was the problem in the end?
>>
>> Ray.
>>
>> apart from the solution I sent earlier I have n
On Fri, Oct 21, 2011 at 6:10 AM, Raymond O'Donnell wrote:
>
> Glad you got sorted. What was the problem in the end?
>
> Ray.
>
> apart from the solution I sent earlier I have now noticed an aberration -
in testing I have not isolated it, but I have a simple example.
for instance, using the functio
, 2011 at 6:10 AM, Raymond O'Donnell wrote:
> On 20/10/2011 23:16, Henry Drexler wrote:
> >
> >
> > On Thu, Oct 20, 2011 at 5:41 PM, Raymond O'Donnell > <mailto:r...@iol.ie>> wrote:
> >
> >
> > Are you sure about this? Try using
On Thu, Oct 20, 2011 at 5:41 PM, Raymond O'Donnell wrote:
>
> Are you sure about this? Try using RAISE NOTICE statements in the
> function to output the value of nnlength each time it's executed.
>
> Ray.
>
>
Thank you for showing me RAISE NOTICE, I had not seen that before and
it helped me
On Thu, Oct 20, 2011 at 5:42 PM, Raymond O'Donnell wrote:
>
> I was just trying to figure your function out... :-) I think you're
> mistaken about step 3 - This statement -
>
> node = substring(newnode, 1, i-1) || substring (newnode, i+1, nnlength)
>
> - is concatenating two substrings - the firs
On Thu, Oct 20, 2011 at 4:57 PM, Raymond O'Donnell wrote:
>
>
> Not sure what you mean by the above...
>
> Ray.
>
>
This is what I thought it was doing.
1. it gets the node from the first row
2. measures its length
3. then loops through removing one character at a time and comparing that
to th
On Thu, Oct 20, 2011 at 4:57 PM, Raymond O'Donnell wrote:
>
> Not sure what you mean by the above... that statement only gets executed
> once, so the value of nnlength doesn't change.
>
> Ray.
doesn't the function get executed once for each row in the query?
so in the below example
thr wil
I found the problem, it looks like nnlength := length(newnode); is not
getting reset
create or replace function nnodetestt(text) returns text language plpgsql as
$$
DECLARE
newnode alias for $1;
nnlength integer;
t text;
nmarker text;
BEGIN
nnlength := length(newnode);
for i in 1..(nnlength-1) loo
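For reference, a rough sketch of the rest of the loop as described above (the
comparison query at the end is only a placeholder - the real one is not shown
here):

create or replace function nnodetest(text) returns text language plpgsql as
$$
DECLARE
    newnode   alias for $1;
    nnlength  integer;
    shortened text;
    nmarker   text := '';
BEGIN
    nnlength := length(newnode);
    for i in 1..(nnlength - 1) loop
        -- drop the i-th character and keep the rest
        shortened := substring(newnode, 1, i - 1) || substring(newnode, i + 1, nnlength);
        -- placeholder check: does the shortened string match an existing node?
        perform 1 from nodes where nodes.node = shortened;  -- "nodes" table/column assumed
        if found then
            nmarker := 'N';
        end if;
    end loop;
    return nmarker;
end;
$$;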
I am struggling to understand at what point the query knowledge comes into
play here.
Ideally it should look in nmarker and if there is an 'N' then execute the
query (but how would it know that without running the query first?) and
return the results in the nnodetest, but (in its current form it s
On Mon, Oct 17, 2011 at 3:11 PM, Henry Drexler wrote:
> couldn't you just wrap it in a case statement to change the t to true
> etc...?
>
>
example:
select
case when (1=1) = true then 'true' else 'false' end
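Putting that together with COPY, something like this writes the booleans out as
words rather than t/f (table and column names are invented):

COPY (
    SELECT id,
           CASE WHEN active THEN 'true' ELSE 'false' END AS active
    FROM some_table            -- made-up table with a boolean column "active"
) TO '/tmp/some_table.csv' WITH CSV HEADER;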
couldn't you just wrap it in a case statement to change the t to true
etc...?
On Mon, Oct 17, 2011 at 2:29 PM, Viktor Rosenfeld wrote:
> Hi,
>
> I need to move data from PostgreSQL to MonetDB and also bulk-import data
> into MonetDB that was bulk-exported from PostgreSQL by other people. My
> pr
down the
line.
You need to eliminate the date column in the query, or whatever fits your
requirements.
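Roughly like this (table and column names are guesses):

-- highest result per location and chemical, with sample date left out of the grouping
SELECT location, chemical, max(result) AS max_result
FROM measurements
GROUP BY location, chemical;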
On Mon, Oct 10, 2011 at 6:24 PM, Henry Drexler wrote:
> you are also grouping by sample date, those are the largest values for the
> criteria you have set out in the group by.
>
>
>
you are also grouping by sample date, those are the largest values for the
criteria you have set out in the group by.
On Mon, Oct 10, 2011 at 6:17 PM, Rich Shepard wrote:
> I'm trying to query the table to extract the single highest value of a
> chemical by location and date. This statement gi
On Thu, Oct 6, 2011 at 4:37 PM, Gavin Flower
wrote:
> On 07/10/11 01:40, Henry Drexler wrote:
>
>> I have a workaround to the error/result, but am wondering what the result
>> of ts_rank of '1e-020' represents?
>>
>> Here is the original:
>
it sent before I finished, here is the rest:
I have fixed this by doing the following:
select
ts_rank(to_tsvector(replace('a_a_do_ug_read_retreqmon_ptam','_','
')),plainto_tsquery(replace('a_a_do_ug_read_retrmso.com_ptam','_',' ')))
so I have found a solution, just wondering what the earlier err
I have a workaround to the error/result, but am wondering what the result of
ts_rank of '1e-020' represents?
Here is the original:
select
ts_rank(to_tsvector('a_a_do_ug_read_retreqmon_ptam'),to_tsquery('a_a_do_ug_read_retrmso.com_ptam'))
that was spot on Richard. Thank you for your time and the solution.
On Wed, Oct 5, 2011 at 3:22 PM, Richard Huxton wrote:
> On 05/10/11 19:29, Henry Drexler wrote:
>
>>
>> and would like to have a column indicate like this:
>>
>> 'evaluation' '
I can do this in Excel with VBA, though due to the volume of data it is
now impracticable, and I am trying to move most of my logic into the query
and db for analysis.
Looking at the analytic functions I see no way to carry values over the way
they need to be.
Example column:
I have a column th
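For instance, a window function can carry a value over from a neighbouring row,
which may be the kind of thing needed here (column and table names are made up):

-- lag() pulls the value from the previous row in the declared ordering
SELECT id,
       reading,
       lag(reading) OVER (ORDER BY id) AS previous_reading
FROM samples;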
Are you looking for stuff like this?
http://www.postgresql.org/docs/9.0/static/functions-window.html
http://www.postgresql.org/docs/9.0/static/functions-string.html
On Fri, Sep 30, 2011 at 10:12 AM, Dario Beraldi wrote:
> Hello,
>
> I'm looking for some information (guidelines, docs, tutorials,
From: Rohan Malhotra
select * from items order by random() limit 5;
my basic requirement is to get random rows from a table, my where clause
will make sure I won't get same rows in repeated execution of above queries.
--
Regards
To clarify, you are not looking for random then yes? as you
tested pairs.
>
> David J.
>
>
> On Sep 19, 2011, at 10:37, Henry Drexler wrote:
>
> Thank you, that is the kind of suggestion I was looking for - I will look
> into plpgsql.
>
> Yes, there are several optimizations in it - though due to the actual data
> the first few char
ou are doing (given your specification below) in VBA is
> also doable in PostgreSQL.
>
> David J.
>
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Henry Drexler
> Sent:
I have no problem doing this in Excel VBA, though as the list grows larger
Excel obviously has row limits.
What is being done:
There is a column of data imported into the db - they are just text strings,
there are about 80,000 rows of them. The goal is to do a single character
elimination to fin
Perfect, thank you. I will try to find that in the documentation as I was
obviously not looking at the correct page I had linked to earlier.
On Fri, Sep 9, 2011 at 11:05 AM, Day, David wrote:
> Henry,
>
> Does this suit your need?
>
thanks Tom and Guillaume,
That sequencing of casting makes sense - I appreciate the clear
explanation.
On Fri, Sep 9, 2011 at 11:12 AM, Tom Lane wrote:
> Henry Drexler writes:
> > [ "1/3" yields zero ]
>
> Yeah, it's an integer division.
>
> > I th
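For the archives, a minimal illustration of the point - cast either side so it
is no longer integer/integer division:

SELECT 1/3;             -- integer division, truncates to 0
SELECT 1/3.0;           -- numeric division: 0.33333...
SELECT 1::numeric / 3;  -- explicit cast, same result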
take any table and run

Query
-
select
1/3
from
storage
limit 1

Result
-
?column? (integer)
0

Expected Result
-
?column? (double precision)
0.3...

Question
-
Since there is no column type to begin with as this is
Is there a way to set the display format of boolean values in psql just
as one can set the display of nulls using \pset null ? I
find presentation of true as 't' and false as 'f' rather poor since 't'
and 'f' do not look very different from each other. I'd like to instead
get 'TRUE' or 'FALSE'.
No
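A query-side workaround, for what it's worth (table and column names are made
up):

SELECT active::text FROM some_table;    -- renders as 'true' / 'false'
SELECT CASE WHEN active THEN 'TRUE' ELSE 'FALSE' END AS active
FROM some_table;                        -- if upper case is wanted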
On Fri, June 3, 2011 13:57, t...@fuzzy.cz wrote:
> There's something very wrong with snames - the planner expects 22 rows but
> gets 164147851. Which probably causes a bad plan choice or something like
> that.
> Try to analyze the snames table (and maybe increase the statistics
> target on the col
lyze the snames table (and maybe increase the statistics
> target on the columns).
Thanks - like you say, looks like the interesting bit is:
rows=22 --> rows=164147851 for table snames.
Nice online tool you have there my china!
Cheers
Henry
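For completeness, the commands that advice boils down to (the column name is a
placeholder, since the actual join column isn't shown):

ANALYZE snames;

-- if the estimates stay badly off, raise the statistics target on the relevant column
ALTER TABLE snames ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE snames;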
ank pr (cost=0.00..2.53 rows=1
width=64) (actual time=2.000..2.000 rows=0 loops=1)
Index Cond: (pl.did = pr.did)
-> Index Scan using skplink_count0 on plink_count plc (cost=0.00..3.92
rows=1 width=36) (actual time=0.000..0.000 rows=0 loops=1)
Index Cond: (md
Hi,
Is it possible to replicate only a single or selected tables (as opposed to
the whole shebang) using PG's built-in replication?
I can't seem to find much on this topic, so I'm guessing not.
I have a feeling I'll need to return to Londiste for this particular
application.
Thanks
Greets,
I've just activated another replication slave and noticed the following in the
logs:
WARNING: xlog min recovery request 38E/E372ED60 is past current point
38E/D970
It seems to be happily restoring log files from the archive, but the warning
message above concerns me.
Googling only
Resolved the startup problem by identifying which pg_clog file it was failing
on with:
strace postgres --single -D 9.0/data
Then grabbed that file from the replication slave.
Cheers
h
I managed to resolve this issue. Using strace
On Sat, April 23, 2011 09:56, Henry C. wrote:
> 1. how to proceed with getting db1 back up so I can run the script?
> 2. how to proceed with replicated database (db2)? (switch to standalone
> (since it's in readonly replication mode) and run upgrade fix script as per
> wik
e
DEBUG: proc_exit(1): 3 callbacks to make
DEBUG: exit(1)
DEBUG: shmem_exit(-1): 0 callbacks to make
DEBUG: proc_exit(-1): 0 callbacks to make
Any suggestions would be welcomed with even more misty-eyed thanks.
Cheers
Henry
a redundant sense of safety)? ie, use a
non-journalling battle-tested fs like ext2.
Regards
Henry
On Thu, April 14, 2011 20:54, Andrew Sullivan wrote:
> On Thu, Apr 14, 2011 at 12:27:34PM -0600, Scott Marlowe wrote:
>
>>> That's what a UPS and genset are for. Who writes critical stuff to
>>> *any*
>>> drive without power backup?
>>
>> Because power supply systems with UPS never fail.
>>
>
> R
On Thu, April 14, 2011 18:56, Benjamin Smith wrote:
> After a glowing review at AnandTech (including DB benchmarks!) I decided to
> spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300
> with shipping, etc and at this point, won't be putting any
>
> Considering that I sp
> On 14/04/2011 2:15 AM, Henry C. wrote:
> Nope, it's working as designed I'm afraid.
>
> There are params you can tune to control how far slaves are allowed to
> get behind the master before cancelling queries...
Thanks Craig - this dawned on me eventually.
On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
> have a look at
>
> http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
>
>
>
> It looks like those are "safe" to use with a db, and aren't that expensive.
The new SSDs look great. From our experience,
On Thu, April 14, 2011 10:51, Craig Ringer wrote:
> On 14/04/2011 4:35 PM, Henry C. wrote:
>
>
>> There is no going back. Hint: don't use cheap SSDs - cough up and use
>> Intel.
>>
>
> The server-grade SLC stuff with a supercap, I hope, not the scary
>
h is extremely busy (xid wraparound stuff)
and the performance gains are game-changing.
There is no going back. Hint: don't use cheap SSDs - cough up and use Intel.
Cheers
Henry
> However, a SELECT eventually fails with "canceling statement due to conflict
> with recovery".
>
> Where else can I check, or what else can I do to determine what the problem
> is?
...or maybe there _is_ no problem.
select count(*) from big_table; -- will fail because it's long-lived and rows
a
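For reference, the standby-side settings that control how long recovery will
wait for such a query before cancelling it (example values only):

# postgresql.conf on the standby
max_standby_archive_delay = 300s      # WAL replayed from the archive
max_standby_streaming_delay = 300s    # WAL received via streaming; -1 means wait forever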
On Wed, April 13, 2011 20:15, Henry C. wrote:
> If I try and execute a long-lived SQL query on the slave, it eventually fails
> with "canceling statement due to conflict with recovery". Replication is
> definitely working (DML actions are propagated to the slave), but somethi
Forgot to mention recovery.conf on slave:
standby_mode = 'on'
primary_conninfo = 'host..."
restore_command = 'cp /home/psql-wal-archive/%f "%p"'
archive_cleanup_command = 'pg_archivecleanup /home/psql-wal-archive %r'
The wiki states "If wal_keep_segments is a high enough number to retain the
WA
Greets,
Pg 9.0.3
This must be due to my own misconfiguration, so apologies if I'm not seeing
the obvious - I've noticed that my slave seems to be stuck in a permanent
startup/recovery state. ps on the slave shows:
...
postgres: wal receiver process streaming 190/A6C384A0
postgres: startup pro
ere
are a lot of updates to catch up on (recovery has been at it for several hours
now).
Cheers
Henry
1.1.1.1(55390) streaming 190/244FEA80
There are quite a few log files to process and both machines are not heavily
taxed. Is there any way to expedite this initial recovery process (1)? It
seems to be chugging along at a rather sedate pace.
Thanks
Henry
On Wed, April 13, 2011 04:28, Fujii Masao wrote:
> When the standby fails to read the WAL file from the archive, it tries to
> read that from the master via replication connection. So the standby would not
> skip that file.
Great, thanks. It looks like it's proceeding normally (if slow) then.
-
unexpected pageaddr' is possibly
not that serious and is probably unrelated to the cp/stat error above.
However, since recovery seems to have skipped a log file, what would that mean
in terms of the slave being a true copy of master and integrity of the data?
thanks
Henry
to do anything.
It's been on the default 20ms. Now giving 0 a try. In our app responsiveness
is less of a concern since we don't have human interaction. Reliability is a
greater concern.
> It's also possible that Henry is getting bit by the bug fixed here:
>
>
> Autho
On Sat, April 2, 2011 21:26, Scott Marlowe wrote:
> On Sat, Apr 2, 2011 at 11:26 AM, Henry C. wrote:
>
>> On Sat, April 2, 2011 14:17, Jens Wilke wrote:
>>
>>> Nevertheless since at least 8.4 IMO there's no need to bother with
>>> manual vacuum any mo
xid was 64 bits instead of 32, but that's another topic
entirely.
Cheers
Henry
Forgot to mention: I'm using 9.0.3
> Usually a manual vacuum cancels a running autovacuum task.
Not in my case - however, the autovac does seem to be in a waiting state.
> You should find a notice about the cancellation in the logfile.
>
> > current_query | vacuum analyze
> > age | 11:
* not re-vacuum the table?
Regards
Henry
hell of a lot slower to mitigate impact on general performance.
Anyway, is that autovac duplicating work or locked out and waiting?
Thanks
Henry
On Fri, November 5, 2010 09:52, Grzegorz Jaśkiewicz wrote:
> Timing is on.
> I would say hashtext is consistently beating md5 in terms of performance
> here.
nice concise answer, thanks Grzegorz.
nice tight integer value, whereas md5() produces a fixed
string. My instinct says hashtext(), but there may be a lot more to hashtext()
than meets the eye.
Any ideas?
Thanks
Henry
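A quick way to compare the two (hashtext() is an internal, undocumented
function, so treat the numbers as illustrative only):

\timing on
-- 32-bit integer hash; internal function, not guaranteed stable across major versions
SELECT count(DISTINCT hashtext(i::text)) FROM generate_series(1, 1000000) AS s(i);
-- 32-character hex text hash
SELECT count(DISTINCT md5(i::text)) FROM generate_series(1, 1000000) AS s(i);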
Quoting "A. Kretschmer" :
Try:
test=*# SELECT '1.1.1.1' ~ E'^\\d+';
Ag, of course, thanks Andreas.
Cheers
Henry
Greets,
I must be missing something here:
SELECT '1.1.1.1' ~ E'^\d+';
returns FALSE, when I would expect TRUE, as for:
SELECT '1.1.1.1' ~ E'^[[:digit:]]+';
ie, '[[:digit:]]' != '\d'
In config, "regex_flavor = advanced"
ce the most heavily used data would be
in smaller tables (partitions).
Thoughts?
Cheers
Henry
cenario).
Cheers
Henry
tional relations tipped things over -- suddenly many defaults are
just too low and I'm having to dig into arcane settings.
This thread seems to be related:
http://archives.postgresql.org//pgsql-admin/2008-10/msg00041.php
Cheers
Henry
ault of 1000 or something. My database reindex in single-user mode
kindly made the suggestion (we have many, many table partitions with
hordes of indexes - relations approaching 9000+). reindexing due to
"Cannot find namespace X" error on insert.
Henry
strange connectivity-delay could be about?
Cheers
Henry
Quoting "Craig Ringer" :
... I have a SCO OpenServer 5.0.5 VM ... business critical
application ... compiled for Microsoft Xenix, ... source code ...
long-lost, ... OpenServer's Xenix emulation mode.
triple egad; otherwise known as Good Lord Almighty, better you than
from scratch? Only problem is the DB is
large and takes 24*n hours to restore, so this is a last resort.
Thanks
Henry
Henry
Quoting "Simon Riggs" :
Use Rules is the current answer, though that has other issues also.
Hi Simon - as you say, Rules have issues. From my understanding,
partitioning with rules is impractical.
Anyway, thanks for clarifying.
Cheers
Henry
or is the *only*
solution to mess with existing (working) front-end code to work around
this issue?
Right now, the untenable situation is to simply ignore the return
codes and act like all is well in la-la land.
Comments, admonishments, hope for the future, all welcome.
Cheers!
Henry
Hello everyone,
I need help configuring my database to accept SSL connections
from any IP; my database is installed on Windows.
Where do I need to make the change:
pg_hba.conf
postgresql.conf
Thanks
Henry Interiano
San Pedro Sula, Honduras
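A minimal sketch of what that usually involves (adjust database, user and auth
method to your setup):

# postgresql.conf
ssl = on                    # server.crt and server.key must be in the data directory

# pg_hba.conf - accept SSL connections from any IPv4 address
hostssl  all  all  0.0.0.0/0  md5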
I am looking for a mac platform installer for what I was told I
needed, pgcrypto.
Assistance finding this would be appreciated...
Steve Henry
San Diego Mac IT
http://www.sdmacit.com
760.751.4292 Office - 760.546.8863 Cell
What am I missing? Any help would be greatly appreciated...
Steve Henry
San Diego Mac IT
w
many rows are fetched each time (instead of 1 at a time).
So, setting aside my self-outsmartiness, is there a way to achieve this?
Regards
Henry
work, but I was wondering whether this could be done.
Cheers
Henry
d Skytools (as used by Skype, I believe, which is now Open Source),
which is far simpler to use and in my experience far more reliable.
Cheers
Henry
requirements, it will make up - and save - a mountain
of time. The alternative will be a few years of stabilising any new
replication code before it's considered safe to adopt in production.
There. I've had my moan for the day, and I feel much better :-)
Cheers
Henry
On Sun, August 10, 2008 3:03 pm, Henry wrote:
>
> I scratched around some more, found doc/pgpool-en.html and my ignorance
> has been somewhat lessened.
oi, wrong list /blushes
I really should *not* use multi-users under one login in squirrelmail...
localhost, on which
Pg wasn't LISTENing at time... a deal-killer :P
Regards
Henry
ful as Slony, but
if all you need to do is replicate some tables with minimum fuss and
without having to learn a new language, then Skytools (based on my
personal experience with a cluster and Slony versus Skytools) is my
recommendation.
Regards
Henry
r to use and to manage (eg, when things go wrong [they do]).
Regards
Henry