Hi
I'm trying to build a Windows Form application that needs to alter the
definition of a View, depending on user input/selection. Essentially,
the user is selecting a property of known coordinates ('x','y') and
setting a 'buffer' distance. I then want to retrieve the records which
represent t
On 10 May 2010, at 24:01, Christoph Zwerschke wrote:
> We want to find all entries in b where txt begins with an
> existing txt entry in a:
>
> select * from b join a on b.txt like a.txt||'%'
>
> At first glance you would expect this to be performant
> since it can use the index, but sadly
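The quoted predicate (`b.txt like a.txt||'%'`) is a prefix test. A rough Python sketch of the range-scan idea a btree index can exploit for anchored prefix patterns; the function and variable names are illustrative only, not from the thread:

```python
from bisect import bisect_left

def prefix_join(a_txts, b_txts):
    """For each txt in a, find the entries in b that start with it.
    Strings sharing a prefix are contiguous in sorted order, so each
    prefix maps to one half-open range [prefix, prefix + hi_char),
    the same rewrite an index scan uses for LIKE 'foo%'."""
    b_sorted = sorted(b_txts)
    hi_char = chr(0x10FFFF)  # sorts after any ordinary character
    pairs = []
    for prefix in a_txts:
        lo = bisect_left(b_sorted, prefix)
        hi = bisect_left(b_sorted, prefix + hi_char)
        pairs.extend((prefix, b) for b in b_sorted[lo:hi])
    return pairs

print(prefix_join(["foo", "ba"], ["foobar", "bar", "baz", "qux"]))
# [('foo', 'foobar'), ('ba', 'bar'), ('ba', 'baz')]
```

Two binary searches per prefix replace a full scan of b, which is the behaviour one would hope the planner could reach here.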
On 10 May 2010, at 6:02, Boyd, Craig wrote:
> I have been using PostgreSQL for a short while, but I have not had to use the
> pg_catalog tables before and the columns are a little cryptic to me. I think
> it ties to pg_class, but I am not sure how to relate them. Also, I have not
> had a chan
On 10 May 2010, at 2:09, Jonathan Vanasco wrote:
> i was given a unique index on
> (country_id, state_id, city_id, postal_code_id)
> in the two records below, only country_id and state_id are assigned ( aside
> from the serial )
>
> geographic_location_id | coordinates_latitude | coordin
I've encountered the following problem:
ivoras=# create table htest2(id integer, t hstore);
CREATE TABLE
ivoras=# create table htest3(id integer, t2 hstore);
CREATE TABLE
ivoras=# select id, t from htest2 union select id,t2 as t from htest3;
ERROR: could not identify an ordering operator for t
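The error arises because UNION must eliminate duplicates, which requires comparing whole rows, and hstore had no ordering operator. A loose Python analogy, with dicts standing in for hstore values (purely illustrative):

```python
# dicts play the role of hstore here: they support equality but are
# unhashable and have no default ordering, so duplicate elimination
# (what UNION needs) has nothing to work with.
rows = [(1, {"k": "v"}), (1, {"k": "v"})]

try:
    unique = set(rows)           # UNION-style dedup needs hashable values
except TypeError as err:
    print("dedup failed:", err)  # unhashable type: 'dict'

union_all = rows + rows          # UNION ALL: concatenation, no comparisons
print(len(union_all))  # 4
```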
Bruce Momjian wrote:
Bruce Momjian wrote:
John Gage wrote:
Is the documentation available anywhere as a single page text file?
This would be enormously helpful for searching using regular
expressions in Vim, for example, or excerpting pieces for future
reference.
Uh, no, and no one has e
> How do I get around this? I really don't care how hstores get sorted any
> more, and would like to avoid sorting them at all as they could get big.
union all
seems to work. Would that serve the purpose?
Regards,
Jayadevan
When we do a union, the database has to get rid of duplicates and get
distinct values. To achieve this, it probably does a sort. Just
guesswork.
Regards,
Jayadevan
On 05/10/10 14:10, Jayadevan M wrote:
When we do a union, the database has to get rid of duplicates and get
distinct values. To achieve this, it probably does a sort. Just
guesswork.
You are right, it looks like I have inverted the logic of UNION and
UNION ALL - I actually needed "UNION ALL".
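The distinction resolved above can be sketched in a few lines of Python, with lists standing in for result sets (an analogy, not how the executor works):

```python
a = [(1, "x"), (2, "y")]
b = [(2, "y"), (3, "z")]

union_all = a + b           # concatenation: keeps duplicates, no comparisons
union = sorted(set(a + b))  # dedup: every row must be hashed or ordered

print(union_all)  # [(1, 'x'), (2, 'y'), (2, 'y'), (3, 'z')]
print(union)      # [(1, 'x'), (2, 'y'), (3, 'z')]
```

Because UNION ALL never compares rows, it sidesteps the missing ordering operator and avoids sorting large values entirely.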
On 5/10/2010 4:43 AM, OisinJK wrote:
Hi
I’m trying to build a Windows Form application that needs to alter the
definition of a View, depending on user input/selection. Essentially,
the user is selecting a property of known coordinates (‘x’,’y’) and
setting a ‘buffer’ distance. I then want to re
On May 10, 2010, at 6:29 AM, Alban Hertroys wrote:
As the docs state and as others already mentioned, "Null values are
not considered equal".
Ah. I interpreted that wrong. I thought it applied to indexes
differently. I'll have to experiment now...
--
Sent via pgsql-general mailing list
On Sat, May 8, 2010 at 12:12 AM, Mike Christensen wrote:
> What's the best way to do this? Looks like something like pgPool
> might be what I want, but I haven't looked into it deeply yet.
I don't think your requirement and postgres are consistent with each
other. Unless your data volume is *
Excerpts from John Gage's message of Sat May 08 05:06:35 -0400 2010:
> Is the documentation available anywhere as a single page text file?
> This would be enormously helpful for searching using regular
> expressions in Vim, for example, or excerpting pieces for future
> reference.
There's a
Excerpts from Alvaro Herrera's message of Mon May 10 12:01:22 -0400 2010:
> There's a texinfo output that could perhaps be useful. Try
> "make postgres.info" in the doc/src/sgml directory; while it's tagged
> "experimental" and outputs a boatload of warnings, it does work for me and the
> text it
On Fri, May 7, 2010 at 10:12 PM, Mike Christensen wrote:
> I'm considering using a cloud hosting solution for my website. It
> will probably be either Amazon, Rackspace or Hosting.com. I'm still
> comparing. Either way, my site will consist of multiple virtual
> server instances that I can crea
I would like to replicate the following Unix pipe within a Perl script,
perhaps using DBD::Pg:
% pg_dump -Z9 -Fc -U | pg_restore -v -d -p
-h localhost -U
Of course, I can try to use Perl's system, and the like, to run this pipe
verbatim, but I regard this as a last-resort approach.
Is there a more
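One way a script can build such a pipe is to wire the producer's stdout into the consumer's stdin. A minimal Python sketch using `subprocess`, with `echo`/`cat` as stand-ins for the pg_dump/pg_restore argument lists (which are elided in the original message):

```python
import subprocess

# Producer's stdout feeds the consumer's stdin, as in a shell pipe.
producer = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["cat"], stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()  # so the producer sees SIGPIPE if cat exits early
out, _ = consumer.communicate()
producer.wait()
print(out.decode().strip())  # hello
```

The same wiring works for any producer/consumer pair; the data streams between the two processes without ever being materialized in the script.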
Hi,
I have a question about a feature in PostgreSQL 9.0.
I am looking for support for windowing functions when using: RANGE
BETWEEN PRECEDING/FOLLOWING AND
PRECEDING/FOLLOWING
The latest documentation:
http://www.postgresql.org/docs/9.0/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
Say
I am using the Mac and, although the Mac does not ship with this, the
Zotero add-on to Firefox includes it:
/Users/johngage/Library/Application Support/Firefox/Profiles/
m35vu1ez.default/zotero/pdftotext-MacIntel
Will try it out. Thanks very much,
John
On May 10, 2010, at 1:58 PM, Geoff
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160
> I would like to replicate the following Unix pipe within a Perl script,
> perhaps using DBD::Pg:
>
>
> % pg_dump -Z9 -Fc -U | pg_restore -v -d -p
> -h localhost -U
>
> Of course, I can try to use Perl's system, and the like, to run this
Excerpts from Daniel Scott's message of Mon May 10 13:20:06 -0400 2010:
> Says "The value PRECEDING and value FOLLOWING cases are currently only
> allowed in ROWS mode."
>
> However, I have found this post:
>
> http://archives.postgresql.org/message-id/e08cc0400912310149me7150cek3c9aa92e4d396...
On Sun, May 9, 2010 at 8:33 AM, Ovid wrote:
> My apologies. This isn't PG-specific, but since this is running on PostgreSQL
> 8.4, maybe there are specific features which might help.
>
> I have a tree structure in a table and it uses materialized paths to allow me
> to find children quickly. How
On Sun, May 9, 2010 at 4:47 PM, Tom Lane wrote:
> Ovid writes:
>> My apologies. This isn't PG-specific, but since this is running on
>> PostgreSQL 8.4, maybe there are specific features which might help.
>> I have a tree structure in a table and it uses materialized paths to allow
>> me to find
On Mon, 2010-05-10 at 17:33 +, Greg Sabino Mullane wrote:
>
>
> > I would like to replicate the following Unix pipe within a Perl script,
> > perhaps using DBD::Pg:
> >
> >
> > % pg_dump -Z9 -Fc -U | pg_restore -v -d -p
> > -h localh
Hi,
On Mon, May 10, 2010 at 13:35, Alvaro Herrera wrote:
> It was ripped out of the patch before commit because the implementation was
> not
> acceptable.
That's strange - the CommitFest says that it was committed and I can't
find any mention of it being removed. Is there somewhere I can see a
On 10.05.2010 11:50, Alban Hertroys wrote:
> On 10 May 2010, at 24:01, Christoph Zwerschke wrote:
>
>> select * from b join a on b.txt like a.txt||'%'
>>
> >> I feel there should be a performant way to query these entries,
>> but I can't come up with anything. Can anybody help me?
>
> Have you tried
On Mon, May 10, 2010 at 1:33 PM, Greg Sabino Mullane wrote:
>
>
>
> > I would like to replicate the following Unix pipe within a Perl script,
> > perhaps using DBD::Pg:
> >
> >
> > % pg_dump -Z9 -Fc -U | pg_restore -v -d -p
> > -h localhos
On Mon, May 10, 2010 at 2:59 PM, Joshua D. Drake wrote:
> On Mon, 2010-05-10 at 17:33 +, Greg Sabino Mullane wrote:
> >
> >
> > > I would like to replicate the following Unix pipe within a Perl script,
> > > perhaps using DBD::Pg:
> >
Daniel Scott writes:
> On Mon, May 10, 2010 at 13:35, Alvaro Herrera wrote:
>> It was ripped out of the patch before commit because the implementation was
>> not
>> acceptable.
> That's strange - the CommitFest says that it was committed and I can't
> find any mention of it being removed. Is th
Kynn Jones writes:
> But I have not found a way for my script to provide a password when it
runs commands like dropdb, createdb, and pg_restore with the "-h <HOST>" flag. So I end up resorting to SSH-tunneling. This is what I'm
> trying to avoid.
You don't really want to embed a password in t
Alban Hertroys wrote:
> [...]
> None of these solutions are pretty. It should be quite a common problem
> though, how do people normally solve this?
Partial indexes? Doesn't look pretty either though:
| tim=# \d DE_Postcodes
| Table "public.de_postcodes"
| Column | Type | Attributes
|
Kynn Jones writes:
> Actually, that was a mistake on my part. That should have been "-Ft" rather
> than "-Z9 -Fc", since I *don't* want compression (most of the data being
> transmitted consists of highly incompressible blobs anyway). Regarding SSH,
> my understanding is that to get compression
As it keeps coming up on the list off and on, I decided to write a wiki
article; comments and suggestions welcome:
http://wiki.postgresql.org/wiki/BinaryFilesInDB
I also read over the 9.0 beta release notes, bytea type now allows hex values??
http://developer.postgresql.org/pgdocs/postgres/release-9-0.html#AE
> > The solution is very simple and can be done in the cPanel configuration,
> > just disable "Shell Fork Bomb Protection" in the security center. That's
> > all. The ulimit restrictions are removed!
>
> Huh, that's interesting. With a name like that, I'd have thought it
> would set limits on n
On 5/10/2010 2:46 PM, Kynn Jones wrote:
On Mon, May 10, 2010 at 1:33 PM, Greg Sabino Mullane <g...@turnstep.com> wrote:
> I would like to replicate the following Unix pipe within a Perl
script,
> perhaps using DBD
On 10 May 2010, at 20:06, Greg Stark wrote:
> On Sun, May 9, 2010 at 4:47 PM, Tom Lane wrote:
>> Ovid writes:
>>> My apologies. This isn't PG-specific, but since this is running on
>>> PostgreSQL 8.4, maybe there are specific features which might help.
>>> I have a tree structure in a table and
On 10 May 2010, at 21:24, Christoph Zwerschke wrote:
> On 10.05.2010 11:50, Alban Hertroys wrote:
> > On 10 May 2010, at 24:01, Christoph Zwerschke wrote:
> >
> >> select * from b join a on b.txt like a.txt||'%'
> >>
> >> I feel there should be a performant way to query these entries,
> >> but I
Ovid wrote on 09.05.2010 15:33:
My apologies. This isn't PG-specific, but since this is running on
PostgreSQL 8.4, maybe there are specific features which might help.
I have a tree structure in a table and it uses materialized paths to
allow me to find children quickly. However, I also need to s
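The materialized-path lookup being described can be sketched in Python; the row layout and helper name here are made up for illustration, not taken from Ovid's schema:

```python
def descendants(rows, parent_path):
    """rows: (id, path) pairs where path is a materialized path such
    as '1.2.3'. Every descendant of a node shares the node's path
    plus a dot as a string prefix, so one prefix test (or, in SQL,
    one anchored LIKE) finds the whole subtree."""
    prefix = parent_path + "."
    return [(i, p) for (i, p) in rows if p.startswith(prefix)]

rows = [(1, "1"), (2, "1.2"), (3, "1.2.3"), (4, "4")]
print(descendants(rows, "1"))  # [(2, '1.2'), (3, '1.2.3')]
```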
Running 8.4.3, I have a table with 43 million rows. Two of the columns are
(topic_id int not null) and (status message_status_enum not null), where
message_status_enum is defined as
CREATE TYPE message_status_enum AS ENUM ( 'V', 'X', 'S', 'R', 'U', 'D' );
Among the indexes there is this:
"m_2010
Gordon Shannon writes:
>-> Bitmap Heap Scan on m_20100201 (cost=987806.75..987810.75 rows=1
> width=0) (actual time=2340.191..2340.191 rows=0 loops=1)
> Recheck Cond: (status = ANY ('{S,X}'::message_status_enum[]))
> -> Bitmap Index Scan on m_20100201_topic_multi
> (cost=
Hi,
I'm stumped by an issue we are experiencing at the moment. We have
been successfully archiving logs to two standby sites for many months
now using the following command:
rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
--bwlimit=1250 -az %p postg...@14.121.70.98:/WAL_Archive/
Due t
valentin.hoc...@kabelbw.de (Valentin Hocher) writes:
> [ cPanel's "Shell Fork Bomb Protection" actually does this: ]
> ulimit -n 100 -u 20 -m 20 -d 20 -s 8192 -c 20 -v 20
> 2>/dev/null
Just to annotate that: some experimentation I did confirms that on
RHEL5 x86_64, PG 8.4.
Sorry, version: PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled
by GCC gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit
On Mon, May 10, 2010 at 5:01 PM, bricklen wrote:
> Hi,
>
> I'm stumped by an issue we are experiencing at the moment. We have
> been successfully archiving logs to two
bricklen writes:
> Due to some heavy processing today, we have been falling behind on
> shipping log files (by about 1000 logs or so), so we wanted to up our
> bwlimit like so:
> rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
> --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/
Thanks for the advice. In that case, I'll stick with the standard
approach of having a single SQL server and several web frontends and
employ a caching mechanism such as memcache as well. Thanks!
Mike
On Mon, May 10, 2010 at 9:30 AM, Scott Marlowe wrote:
> On Fri, May 7, 2010 at 10:12 PM, Mike
Tom Lane wrote:
>
> My first suspicion
> is that those are unvacuumed dead rows ... what's your vacuuming policy
> on this database?
>
Ah, I didn't know that number included dead tuples. That probably explains
it. pg_stat_user_tables says the table has 370,269 dead tuples. On this
table,
On Mon, May 10, 2010 at 7:04 PM, Mike Christensen wrote:
> Thanks for the advice. In that case, I'll stick with the standard
> approach of having a single SQL server and several web frontends and
> employ a caching mechanism such as memcache as well. Thanks!
And with 9.0 it will be pretty easy
Tom Lane wrote:
A look at the code shows that the archiver only notices SIGHUP once
per outer loop, so the change would only take effect once you catch up,
which is not going to help much in this case. Possibly we should change
it to check for SIGHUP after each archive_command execution.
I
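The behaviour Tom describes can be caricatured in a few lines of Python; this is entirely a toy model, not the actual archiver code:

```python
def archive_backlog(files, sighup_at, check_per_file):
    """Toy model: a SIGHUP (config reload) arrives while a backlog is
    being drained. If the reload flag is only examined once per outer
    loop, every remaining file still goes out under the old setting."""
    config = "bwlimit=1250"
    done = []
    for i, f in enumerate(files):
        if check_per_file and i >= sighup_at:
            config = "bwlimit=1875"  # reload noticed mid-drain
        done.append((f, config))
    if not check_per_file:
        config = "bwlimit=1875"  # only noticed after the backlog drains
    return done

print(archive_backlog(["w1", "w2", "w3"], sighup_at=1, check_per_file=False))
# every file still archived with bwlimit=1250
```

Checking the flag after each archive_command execution (the `check_per_file` case) is exactly the change being proposed.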
On Mon, May 10, 2010 at 7:21 PM, Mike Christensen wrote:
> Man that sounds awesome. I need that now. So does that mean you'd
> have one beefy SQL server for all the updates and everything writes to
> that, and then you'd have a bunch of read-only servers and new data
> trickles into them from th
On Mon, May 10, 2010 at 5:50 PM, Tom Lane wrote:
> A look at the code shows that the archiver only notices SIGHUP once
> per outer loop, so the change would only take effect once you catch up,
> which is not going to help much in this case. Possibly we should change
> it to check for SIGHUP after
On Mon, May 10, 2010 at 6:12 PM, Greg Smith wrote:
> Tom Lane wrote:
>>
>> A look at the code shows that the archiver only notices SIGHUP once
>> per outer loop, so the change would only take effect once you catch up,
>> which is not going to help much in this case. Possibly we should change
>> i
The concept of updating one database and doing all your reads from
another database is kinda confusing to me. Does that mean you have to
design your whole app around that concept, have a different connection
string and what not for your "writable" database and "read-only
databases"? I'm using Cas
On Mon, May 10, 2010 at 8:00 PM, Mike Christensen wrote:
> The concept of updating one database and doing all your reads from
> another database is kinda confusing to me. Does that mean you have to
> design your whole app around that concept, have a different connection
> string and what not for
Man that sounds awesome. I need that now. So does that mean you'd
have one beefy SQL server for all the updates and everything writes to
that, and then you'd have a bunch of read-only servers and new data
trickles into them from the master continuously?
Mike
On Mon, May 10, 2010 at 6:09 PM, Sco
Scott Marlowe wrote:
Is there any sort of abstraction layer (like in the driver level) that
can abstract that and just make updates go to one DB and reads
round-robin to other DBs? Hopefully there's a way to make this design
simple to implement.
Pretty sure pgpool can do the "read from th
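A toy sketch of the read/write split being asked about; connection objects are just strings here, and real pgpool does this transparently at the protocol level rather than in application code:

```python
import itertools

class Router:
    """Writes go to the primary; reads round-robin over replicas."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def for_query(self, sql):
        # Crude read detection: anything starting with SELECT.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

r = Router("primary", ["replica1", "replica2"])
print(r.for_query("SELECT 1"))          # replica1
print(r.for_query("UPDATE t SET x=1"))  # primary
print(r.for_query("select 2"))          # replica2
```

An abstraction like this at the driver level is what keeps the application using a single connection string.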
On Mon, May 10, 2010 at 8:59 PM, John R Pierce wrote:
> Scott Marlowe wrote:
>>>
>>> Is there any sort of abstraction layer (like in the driver level) that
>>> can abstract that and just make updates go to one DB and reads
>>> round-robin to other DBs? Hopefully there's a way to make this design
On Tue, May 11, 2010 at 9:50 AM, Tom Lane wrote:
> bricklen writes:
>> Due to some heavy processing today, we have been falling behind on
>> shipping log files (by about 1000 logs or so), so we wanted to up our
>> bwlimit like so:
>
>> rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
>>
I was wondering, how can I check whether a VACUUM operation was executed
without problems?
I use the following Java code to execute the VACUUM operation:
final Statement st2 = connection.createStatement();
st2.executeUpdate("VACUUM FULL ANALYZE VERBOSE");
st2.close();
Nothing prints out at the console.
Hi all,
In my database application, I have a table whose record count can
reach 10M, and insertions can happen at a fast rate, around 100
insertions per second at peak times. I configured postgres to do
auto vacuum on an hourly basis. I have a frontend GUI application in CGI
which displays the data fr
On 11 May 2010 10:18, venu madhav wrote:
> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
> s.sig_id = e.signature AND e.timestamp >= '1270449180'