I just noticed that path_distance() in geo_ops.c (the "<->" operator
for path datatype) claims to be computing the minimum distance between
any two line segments of the two paths, but actually it's computing the
maximum such distance.
Isn't this broken?
regards, tom lane
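A quick sanity check from SQL, assuming I have the open-path literal syntax right and that the path-to-path '<->' operator goes through path_distance(): the first path below is a single segment on the x axis, the second has one segment at distance 1 and another at distance 2 from it, so a correct minimum would return 1 while the max-of-pairs behaviour described above would return 2.

    SELECT path '[(0,0),(1,0)]' <-> path '[(0,1),(0,2),(0,5)]' AS dist;

This only observes the behaviour from outside, of course; it isn't a patch for geo_ops.c.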
> > That's an excellent point, especially considering that *sequences* use
> > an integer to hold their max_value, which is by default 2,147,483,647.
> > You cannot go larger than that, either. I guess it's constrained to be
> > positive. So OIDs give you more potential unique values than sequences.
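For what it's worth, a quick way to look at that ceiling, assuming the usual behaviour where a sequence is an ordinary one-row relation you can select from (the sequence name here is made up):

    CREATE SEQUENCE id_seq;
    SELECT max_value FROM id_seq;   -- 2147483647 by default, i.e. the int4 maximum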
"Jason C. Pion" <[EMAIL PROTECTED]> writes:
> I have a legacy database that I am porting to PostgreSQL. One of the
> fields is an integer column that actually represents a date. It is
> represented as the number of days since July 1, 1867.
> What I am wondering is: Is there a function or other means of getting this
> integer converted into a real date?
On Fri, Jul 28, 2000 at 06:53:41PM -0400, Tom Lane wrote:
> George Robinson II <[EMAIL PROTECTED]> writes:
> > What approach would be the most efficient way to accomplish this goal?
> > What language or tools would you recommend? If I were to leave the
> > time as an int4, epoch time, what would the select look like to return
> > other time formats?
I have a legacy database that I am porting to PostgreSQL. One of the
fields is an integer column that actually represents a date. It is
represented as the number of days since July 1, 1867.
What I am wondering is: Is there a function or other means of getting this
integer converted into a real date?
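A sketch of one way to do it, assuming date-plus-integer arithmetic is available in your version; the column and table names day_count and legacy_table are made up here, and it is worth checking whether day 0 means July 1, 1867 itself or the day after, to catch an off-by-one:

    SELECT DATE '1867-07-01' + day_count AS real_date
      FROM legacy_table;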
George Robinson II <[EMAIL PROTECTED]> writes:
> What approach would be the most efficient way to accomplish this goal?
> What language or tools would you recommend? If I were to leave the
> time as an int4, epoch time, what would the select look like to return
> other time formats?
P
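One possible shape for that select, assuming the column is called when_secs and holds Unix epoch seconds, that the table is called event_log (both names made up here), and that your version has to_char() for timestamps. The literal 'epoch' is the timestamp origin 1970-01-01 00:00:00 UTC, so time zone handling is worth double-checking:

    SELECT to_char(TIMESTAMP 'epoch' + when_secs * INTERVAL '1 second',
                   'YYYY-MM-DD HH24:MI:SS') AS when_formatted
      FROM event_log;

Swapping in a different to_char() format string gives the other output formats.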
I'm new to postgres, but I've quickly become a big fan. Thank you for
such a great project and I hope in the future to be able to contribute
to the effort.
I'm a newbie to the list and as such, I haven't had much of a chance
to lurk. I hope my explanation isn't too long and my q
> How suitable is PG for doing larger databases? The need I am
> considering would be a financial database that does maybe up to 100k
> transactions/day.
In a day? I think a lot of us do that much in an hour
> Obviously, it needs to be very reliable, and have minimal scheduled
> downtime and no unscheduled downtime.
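For scale, 100,000 transactions spread evenly over a day is roughly 100000 / 86400, or about 1.2 per second on average; even 100,000 per hour works out to only about 28 per second.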
Looks like one of my tables got corrupted. Can someone explain how to
recover from this? Trying to drop the table is not working... Postgres
hangs.
Any help is appreciated.
Arthur
I have added the PostgreSQL manual pages to the appendix of my book. I
will be adding an index once the publisher is done proofreading it. All
reports I get are that it looks good.
Addison-Wesley will be printing this book directly from a
LaTeX-generated PostScript file that I provide. I have a
Ernie <[EMAIL PROTECTED]> writes:
> This query is very fast.
>
> cw=# SELECT distinct n.news_id, headline, link, to_char(created, 'mm-dd-hh24:mi'),
> cw-# created FROM news_article_summary n, news_cat nc WHERE n.news_id = nc.news_id AND
> cw-# created > CURRENT_TIMESTAMP-30 AND nc.code_i
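When one version of a query is fast and another is not, running EXPLAIN on each and comparing the plans is usually the quickest way to see why. A sketch against the statement above, with the predicate that got cut off left out:

    EXPLAIN
    SELECT DISTINCT n.news_id, headline, link,
           to_char(created, 'mm-dd-hh24:mi'), created
      FROM news_article_summary n, news_cat nc
     WHERE n.news_id = nc.news_id
       AND created > CURRENT_TIMESTAMP - 30;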
How suitable is PG for doing larger databases? The need I am
considering would be a financial database that does maybe up to 100k
transactions/day. Obviously, it needs to be very reliable, and have
minimal scheduled downtime and no unscheduled downtime. Should this project
be on Oracle or Postgres?
t
On Fri, Jul 28, 2000 at 11:48:10AM -0500, Keith G. Murphy wrote:
> Mitch Vincent wrote:
> >
> > There is something else that many aren't considering. In every application
> > I've ever written to use any database, I use ID numbers of my own making,
> > and they're always integers. 4 billion is the limit on any integer field,
> > not just the OID.
Thomas Lockhart wrote:
>
> > FWIW, I checked into MySQL, and as far as I can tell, they have nothing
> > like this implicit 4 billion transactional "limit". So maybe competitive
> > spirit will drive the postgres hackers to fix this problem sooner than
> > later. ;-)
>
> We have *never* had a
Mitch Vincent wrote:
>
> There is something else that many aren't considering. In every application
> I've ever written to use any database, I use ID numbers of my own making,
> and they're always integers. 4 billion is the limit on any integer field,
> not just the OID, so there are limitations everywhere.
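If the two-to-four-billion ceiling being discussed here ever becomes a practical concern for application-side IDs, one hedged way around it, assuming your build has the int8 type, is to declare the ID column as int8 instead of int4 (the table and column names below are made up):

    CREATE TABLE app_record (
        id      int8 PRIMARY KEY,   -- about 9.2 * 10^18 possible values vs ~2.1 * 10^9 for int4
        payload text
    );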