> > The first result (30 sept 23:00:00) is obviously due to
> > a timezone-daylight saving issue.
Fixed in current sources by using mktime() rather than by rotating the
date to 12 noon to try to get the correct time zone (didn't work around
daylight savings time).
> Thomas Lockhart is our lead g
Hello Steven,
Tuesday, September 19, 2000, 11:00:02 PM, you wrote:
SL> A couple of questions and concerns about Blobs.
SL> I'm wondering what kind of performance hit BLOBs have on a large
SL> database.
SL> I'm currently working on implementing a database with images. I guess I'm
SL>
"Edward Q. Bridges" <[EMAIL PROTECTED]> writes:
> since there is no email address for a maintainer on that, i post it here
> for review, comment, and (hopefully) integration with the source tree.
Mark Hollomon <[EMAIL PROTECTED]> is the originator of plperl. Please
get together with him on docum
Buddy Lee Haystack <[EMAIL PROTECTED]> writes:
> Here they are, but they seem as vague as the Apache error logs - to me anyway...
You're right, not much info there except that a backend died untimely.
Unless you had ulimit set to prevent it, the crashing backend should've
left a core file in the
> /usr/bin/postmaster: CleanupProc: pid 888 exited with status 139
Okay, we have a postgres process going down with a SEGV. Do you
have a core file? I don't quite remember where they end up, but my
guess would be either the directory with postgres or somewhere in
the data directory (probably t
Thanks for the quick response!
Here they are, but they seem as vague as the Apache error logs - to me anyway...
FindExec: found "/usr/bin/postgres" using argv[0]
/usr/bin/postmaster: BackendStartup: pid 886 user nobody db rzone socket 4
FindExec: found "/usr/bin/postgres" using argv[0]
started:
Actually, MySQL itself does not support transactions, and, from what I can
tell, it never will. Berkeley DB, though, does support transactions ...
what MySQL has done is provide an SQL interface on top of Berkeley DB
files to give the *appearance* of transactions ...
Basically, MySQL remains
Many others have posted on this but I have not seen an authoritative answer:
execution of initdb on NT results in syntax errors - these seem to be
induced by whitespace only on some control command lines (for, case, ???).
I've been correcting them, one-by-one, by adding spaces on the ends of the
I've looked there but the site seems seriously out of date. Hasn't been
updated since June 1999? There are CVS commits that date from June 20 this
year!
Any other ideas?
Mike
-----Original Message-----
From: Jackson Ching [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 21, 2000 2:14 AM
To:
Hi guys,
Where can I get a compiled version of the latest JDBC driver? The one I have
(downloaded from FTP a few days ago) gives errors when using
DatabaseMetaData - errors which seem to have been fixed in CVS ages ago.
Ideas?
Mike
[EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> I'll bet there is some bit of internal state somewhere that affects
>> the results. It could be inside libc, or it could be in Postgres.
> postgres, I would tend to think...
> For one thing I've just found out: the 'hysteresis' effect occurs
> only
to replace the one currently in $PGSRC/src/pl/plperl
It encompasses the information in that document while adding more structure
and more specific details about what is needed. It also addresses
a couple of issues that came up when I personally installed it.
since there is no email address
On Tue, 19 Sep 2000, Buddy Lee Haystack wrote:
> I'm confused. Where do I need to start looking? The script is fine...
Best bet is to start by getting the end of the postmaster logs. That'll
probably have more information about immediate causes.
Tom Lane wrote:
>> Timezone is set to America/Buenos Aires
>> Changing this seems to eliminate the bug.
> What did you change it *to*, exactly? And what dates did you test
> after changing?
I changed it to "Etc/GMT+4" and tested just the same dates
>
Edward Q
I have been using:
*RedHat Linux 6.1 [2.2.12-20] on Intel
*PostgreSQL 6.5.3-3 [your RPMs]
*Perl 5.00503
*Apache 1.3.9
*mod_perl 1.21
*DBI 1.13
*DBD-Pg-0.93
on 2 Intel systems without any problems for several months now, the production website
is an SMP box & the development box is an old, single
[EMAIL PROTECTED] writes:
> test6=# select '01-10-2000'::date::timestamp;
>            ?column?
> ------------------------------
> Sat 30 Sep 23:00:00 2000 ART
> (1 row)
> test6=# select '13-10-2000'::date::timestamp;
>            ?column?
> ------------------------------
> Fri 13 Oct 00:00:0
Question:
Must transaction logging be ON while importing large data sets? Can
transaction logging be disabled when making batch updates to a large
database?
Background:
We have been testing with MS SQL Server for some time with some large
databases (greater than 30 million records per db). Th
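For what it's worth, PostgreSQL has no switch to turn its transaction machinery off, but the per-row overhead of a big import can be cut by doing the load with COPY inside a single transaction and creating indexes only afterwards. A minimal sketch, with a hypothetical table and file path:
-- Hypothetical table; adjust the columns to match the real data set.
CREATE TABLE import_test (id integer, payload text);
BEGIN;
-- COPY reads the whole file as one bulk operation inside one transaction,
-- so there is no per-row commit cost.  The path must be readable by the backend.
COPY import_test FROM '/tmp/import_test.dat';
COMMIT;
-- Build indexes after the load rather than before it.
CREATE INDEX import_test_id_idx ON import_test (id);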
For what it's worth, when I run these two tests, I
get the correct results.
I'm using RedHat 6.2 also.
Here are more details:
[ebridges@sleeepy]$ uname -a
Linux sleeepy 2.2.16 #2 SMP Mon Jul 31 14:51:33 EDT 2000 i686 unknown
[ebridges@sleeepy]$ psql -V
psql (PostgreSQL) 7.0.2
Portions Copyright
[EMAIL PROTECTED] writes:
> Timezone is set to America/Buenos Aires
> Changing this seems to eliminate the bug.
What did you change it *to*, exactly? And what dates did you test
after changing?
I would expect the bug to follow the DST transition date, which varies
in different timezones. Als
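One way to narrow it down is to set the zone explicitly and cast a date from each side of the suspected transition. A sketch of such a session, using ISO-style spellings of the same 1 Oct / 13 Oct 2000 dates as in the report (the zone names are assumptions based on what the reporter described):
SET TIME ZONE 'America/Buenos_Aires';
SELECT '2000-10-01'::date::timestamp;  -- reported to come back as Sat 30 Sep 23:00:00 2000 ART
SELECT '2000-10-13'::date::timestamp;  -- reported to come back correctly at 00:00
SET TIME ZONE 'Etc/GMT+4';
SELECT '2000-10-01'::date::timestamp;  -- the zone the reporter switched to; no shift was reported there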
"Alexey V. Borzov" <[EMAIL PROTECTED]> writes:
> Nope, that's not the problem. I just checked and every DB has its own
> PG_VERSION. Besides, _all_ of the databases are accessed on a regular
> basis (I'm speaking of a website), but the crashes occur only once in
> a while (like, once a week)...
Doe
Hello Tom,
Tuesday, September 19, 2000, 8:24:01 PM, you wrote:
TL> There is also supposed to be a PG_VERSION file in each database
TL> subdirectory.
TL> If you accidentally deleted one of these per-database PG_VERSION files
TL> then future connects to that database would fail with the above
TL>
On Tue, 19 Sep 2000, Tomas B. Winkler wrote:
>
> I would like to allow any user who has a Unix account on our system to
> be able to connect to a DB. Can Postgres be configured so that a Unix user
> automatically becomes a Postgres user as well? I can figure out some ways
> to do it, yet I'm looking for the most transparent
Danny writes:
> mydb=# INSERT INTO Customer
>(Customer_ID,Customer_Name,Customer_Address,Customer_Email)
> mydb-# VALUES ('1','Danny Ho','99 Second Ave, Kingswood','[EMAIL PROTECTED]'),
> mydb-# ('2','Randal Handel','54 Oxford Road, Cambridge','[EMAIL PROTECTED]')
> mydb-# ;
>
> -and I get the
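Without the error text it is hard to be sure, but the statement gives two parenthesized row lists after VALUES, a form the parser in releases of this vintage does not accept. If that is what it is complaining about, a workaround is one INSERT per row (or COPY), e.g. with the same names as in the example:
INSERT INTO Customer (Customer_ID, Customer_Name, Customer_Address, Customer_Email)
VALUES ('1', 'Danny Ho', '99 Second Ave, Kingswood', '[EMAIL PROTECTED]');
INSERT INTO Customer (Customer_ID, Customer_Name, Customer_Address, Customer_Email)
VALUES ('2', 'Randal Handel', '54 Oxford Road, Cambridge', '[EMAIL PROTECTED]');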
Tomas B. Winkler writes:
> I would like to allow any user who has a Unix account on our system to
> be able to connect to a DB. Can Postgres be configured so that a Unix user
> automatically becomes a Postgres user as well? I can figure out some ways
> to do it, yet I'm looking for the most transparent
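For what it's worth, the usual recipe is ident-style authentication in pg_hba.conf (so the operating-system user name is taken as the database user name) combined with a database user of the same name. The SQL half of that, with a hypothetical account name:
-- Hypothetical: create a database user matching the Unix account "tomas".
CREATE USER tomas;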
Marko Kreen writes:
> But now I am only curious: Will PostgreSQL support binary
> arithmetics on ordinary integers someday or is the 'bit-string'
> only way to go?
AFAIK, there's no one working on the former. Feel free to contribute. :-)
--
Peter Eisentraut [EMAIL PROTECTED] http://
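As a point of reference, the bit-string route looks roughly like this, assuming a build where the SQL bit-string type and its operators are available (the literals are just illustrations):
-- AND, OR, XOR and a left shift on bit strings of equal length.
SELECT B'10101010' & B'11110000' AS anded,
       B'10101010' | B'11110000' AS ored,
       B'10101010' # B'11110000' AS xored,
       B'10101010' << 2          AS shifted;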
Well, I've tracked down the problem to its
minimal form, I think:
Here it goes:
[postgres@bert postgres]$ createdb test5
CREATE DATABASE
[postgres@bert postgres]$ psql test5
Welcome to psql, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with
A couple of questions and concerns about Blobs.
I'm wondering what kind of performance hit BLOBs have on a large
database.
I'm currently working on implementing a database with images. I guess I'm
looking for some numbers showing the performance. Note that it would be
for web databas
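One way to get rough numbers is to load a few representative images as large objects and time the round trip. A minimal sketch with hypothetical table and file names; note that lo_import() and lo_export() run on the server side, so the paths must be visible to the backend:
-- Hypothetical table: one row per image, storing the large-object OID.
CREATE TABLE images (name text, img oid);
-- Import a file from the server's filesystem as a large object.
INSERT INTO images VALUES ('logo', lo_import('/tmp/logo.jpg'));
-- Export it again to check the round trip.
SELECT lo_export(img, '/tmp/logo_copy.jpg') FROM images WHERE name = 'logo';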
On Tue, Sep 19, 2000 at 09:15:32AM +0200, Andreas Tille wrote:
> If I do a database dump via pg_dump, the PostgreSQL internal tables
> named pga_* are also stored in the dump. However, if I drop a database and
pga_* are not really internal tables. The internal tables are named pg_*.
pga_* are tables c
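Since they are ordinary user tables, they show up in pg_class and can be listed, dumped, or dropped like anything else, for instance:
-- List the pgaccess bookkeeping tables in the current database.
SELECT relname FROM pg_class WHERE relname LIKE 'pga_%';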
>Recompile your 7.0.2 without --enable-multibyte option.
That's a static setting, then? Oh, bother. I was hoping the pg7 clients
would be smart enough to fall back as necessary when connecting to
non-multibyte servers.
Hello,
If I do a database dump via pg_dump, the PostgreSQL internal tables
named pga_* are also stored in the dump. However, if I drop a database and
create it via "create database ", those tables are created
automatically. Restoring the old content of the database using
cat .dump | psql
leads to w