Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> http://lwn.net/Articles/178199/
> Check out the article on sync_file_range():
> Is that at all useful for PostgreSQL's purposes?
I'm interested in it; we could use it to improve responsiveness during
checkpoints. Though it is Linux specific s
"Tom Lane" <[EMAIL PROTECTED]> wrote
>
> Really? An indexscan will release pin before returning no-more-tuples,
> and had better do so else we leak pins during queries involving many
> indexscans.
>
I guess I see your point. For the scan stages not returning no-more-tuples,
we can do kill, but t
http://lwn.net/Articles/178199/
Check out the article on sync_file_range():
long sync_file_range(int fd, loff_t offset, loff_t nbytes, int flags);
This call will synchronize a file's data to disk, starting at the given
offset and proceeding for nbytes bytes (or to the end of the file if nbytes is zero).
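For concreteness, here is a rough sketch (not proposed PostgreSQL code) of how a
checkpoint-style writer might use the call: push dirty data in small chunks, then
wait for the whole file so the eventual fsync() has little left to do. The chunk
size and function name are made up, and it assumes a kernel and glibc new enough
to expose sync_file_range() and its SYNC_FILE_RANGE_* flags.

    /*
     * Sketch only: flush one data file's dirty pages in small chunks, then
     * wait for the whole file, so the final fsync() at checkpoint time has
     * little left to do.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    #define FLUSH_CHUNK (1024 * 1024)   /* push 1MB of dirty data at a time */

    static void
    flush_file_incrementally(int fd, off_t len)
    {
        off_t   offset;

        for (offset = 0; offset < len; offset += FLUSH_CHUNK)
        {
            /* start writeback of this chunk without waiting for it */
            if (sync_file_range(fd, offset, FLUSH_CHUNK,
                                SYNC_FILE_RANGE_WRITE) != 0)
            {
                perror("sync_file_range");
                return;
            }
            /* a checkpointer could sleep here to spread the I/O over time */
        }

        /* now wait until everything queued above has reached disk */
        if (sync_file_range(fd, 0, len,
                            SYNC_FILE_RANGE_WAIT_BEFORE |
                            SYNC_FILE_RANGE_WRITE |
                            SYNC_FILE_RANGE_WAIT_AFTER) != 0)
            perror("sync_file_range");
    }

The appeal for checkpoints is that the writeback can be started and throttled
incrementally instead of being dumped on the kernel all at once.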
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> As I read it, the kill_prior_tuple optimization doesn't work for the bitmap
> scan code. To fix this problem, we have two choices.
> One is to still use the kill_prior_tuple trick in a modified way. Heap TID
> recycling should not be a problem. This is because
As I read it, the kill_prior_tuple optimization doesn't work for the bitmap scan
code. To fix this problem, we have two choices.
One is to still use the kill_prior_tuple trick in a modified way. Heap TID
recycling should not be a problem. This is because generally we always hold
a pin on the last index page
"Bort, Paul" <[EMAIL PROTECTED]> writes:
>> Anyone know a variant of this that really works?
> Here's a theory: If the counter is bumped to an odd number before
> modification, and an even number after it's done, then the reader will
> know it needs to re-read if the counter is an odd number.
Gr
> > BTW, I think the writer would actually need to bump the counter twice,
> > once before and once after it modifies its stats area. Else there's
> > no way to detect that you've copied a partially-updated stats entry.
>
> Actually, neither of these ideas works: it's possible that
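To make the odd/even-counter idea concrete, here is a rough sketch of the
protocol being discussed. The struct and field names are invented, and
memory-ordering and compiler-reordering issues are glossed over entirely,
which is a large part of why such schemes are trickier than they look.

    #include <time.h>

    /* Illustration only: a hypothetical fixed-size per-backend stats entry
     * guarded by a change counter.  Real code would need memory barriers
     * around the counter accesses; that is ignored here. */
    typedef struct BackendStatsSlot
    {
        unsigned int change_count;   /* odd while an update is in progress */
        int          xact_count;
        time_t       last_command_time;
    } BackendStatsSlot;

    /* Writer side: run only by the backend that owns the slot. */
    static void
    update_slot(BackendStatsSlot *slot, int xacts, time_t now)
    {
        slot->change_count++;        /* now odd: update in progress */
        slot->xact_count = xacts;
        slot->last_command_time = now;
        slot->change_count++;        /* even again: update complete */
    }

    /* Reader side: copy the slot, retrying if the writer was active. */
    static BackendStatsSlot
    read_slot(const BackendStatsSlot *slot)
    {
        BackendStatsSlot copy;
        unsigned int before, after;

        do
        {
            before = slot->change_count;
            copy = *slot;            /* may be torn if the writer is active */
            after = slot->change_count;
        } while ((before & 1) || before != after);

        return copy;
    }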
On Sun, Jun 18, 2006 at 07:18:07PM -0600, Michael Fuhr wrote:
> Maybe I'm misreading the packet, but I think the query is for
> ''kaltenbrunner.cc (two single quotes followed by kaltenbrunner.cc)
Correction: ''.kaltenbrunner.cc
--
Michael Fuhr
On Sun, Jun 18, 2006 at 07:50:04PM -0400, Tom Lane wrote:
> 24583 postgres CALL
> recvfrom(0x3,0x477e4000,0x1,0,0xfffe4da0,0xfffe4d5c)
> 24583 postgres GIO fd 3 read 37 bytes
>"\M-Sr\M^A\M^B\0\^A\0\0\0\0\0\0\^B''\rkaltenbrunner\^Bcc\0\0\^A\0\^A"
> 24583 postgres R
Tom Lane wrote:
Anyway, the tail end of the trace
shows it repeatedly sending off a UDP packet and getting practically the
same data back:
I'm not too up on what the DNS protocol looks like on-the-wire, but I'll
bet this is it. I think it's trying to look up "kaltenbrunner.cc" and
failing.
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> One idea that comes to mind is a DNS lookup timeout. Can you strace the
>> postmaster to see what it's doing?
> There is ktrace output I managed to capture at
> http://developer.postgresql.org/~adunstan/ktrace.txt
> Not sure what it
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
We need both, I think. I am still trying to find out why it's taking so
long. This is on the 8.0 branch, though. Later branches seem to be working.
One idea that comes to mind is a DNS lookup timeout. Can you strace the
pos
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> We need both, I think. I am still trying to find out why it's taking so
> long. This is on the 8.0 branch, though. Later branches seem to be working.
One idea that comes to mind is a DNS lookup timeout. Can you strace the
postmaster to see what it's d
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
The problem is that if the postmaster takes more than 60 seconds to
start listening (as is apparently happening on spoonbill - don't yet
know why) this code falls through.
If the postmaster takes that long to start listenin
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> The problem is that if the postmaster takes more than 60 seconds to
> start listening (as is apparently happening on spoonbill - don't yet
> know why) this code falls through.
If the postmaster takes that long to start listening, I'd say we need to
fi
>There is no "regular shared locks" in postgres in that sense. Shared locks >are only used for maintaining FK integrity. Or by manually issuing a >SELECT FOR SHARE, but that's also for maintaining integrity. MVCC >rules take care of the "plain reads". If you're not familiar with MVCC, >it's explain
While investigating some problems with buildfarm member spoonbill I came
across this piece of code in pg_regress.sh, which seems less than robust:
# Wait till postmaster is able to accept connections (normally only
# a second or so, but Cygwin is reportedly *much* slower). Don't
# w
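For comparison, the kind of wait being asked for might look roughly like this
if it were written against libpq instead of the shell loop: retry until an
explicit deadline, then fail loudly instead of falling through. The conninfo
string, timeout, and function name here are invented for illustration.

    /* Sketch only: wait for the postmaster to accept connections, with an
     * explicit deadline instead of a fixed iteration count. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include "libpq-fe.h"

    static int
    wait_for_postmaster(const char *conninfo, int timeout_secs)
    {
        time_t  deadline = time(NULL) + timeout_secs;

        for (;;)
        {
            PGconn *conn = PQconnectdb(conninfo);

            if (PQstatus(conn) == CONNECTION_OK)
            {
                PQfinish(conn);
                return 0;           /* postmaster is ready */
            }
            PQfinish(conn);

            if (time(NULL) >= deadline)
            {
                fprintf(stderr, "postmaster did not start within %d seconds\n",
                        timeout_secs);
                return -1;          /* give up explicitly, don't fall through */
            }
            sleep(1);
        }
    }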
I wrote:
> PFC <[EMAIL PROTECTED]> writes:
>> So, the proposal :
>> On executing a command, Backend stores the command string, then
>> overwrites the counter with (counter + 1) and with the timestamp of
>> command start.
>> Periodically, like every N seconds, a separate process reads the counte
On one fine day, Sun, 2006-06-18 at 15:09, Tom Lane wrote:
> "Magnus Hagander" <[EMAIL PROTECTED]> writes:
> > Might it not be a win to also store "per backend global values" in the
> > shared memory segment? Things like "time of last command", "number of
> > transactions executed in this back
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> Might it not be a win to also store "per backend global values" in the
> shared memory segment? Things like "time of last command", "number of
> transactions executed in this backend", "backend start time" and other
> values that are fixed-size?
I'm
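For what it's worth, the fixed-size part could be as simple as one struct per
backend in the shared memory segment, along these lines. The field names are
invented for illustration; the point is only that everything has a known size,
so the array can be allocated once at postmaster start.

    #include <time.h>

    #define COMMAND_DISPLAY_SIZE 256    /* long command strings get truncated */

    /* Illustration: one fixed-size slot per backend in shared memory. */
    typedef struct PerBackendSharedStats
    {
        time_t  backend_start_time;
        time_t  last_command_start_time;
        long    xacts_committed;
        long    xacts_aborted;
        char    current_command[COMMAND_DISPLAY_SIZE];
    } PerBackendSharedStats;

    /* total shared memory needed:
     * sizeof(PerBackendSharedStats) * max_backends, fixed at startup */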
> The existing stats collection mechanism seems OK for event
> counts, although I'd propose two changes: one, get rid of the
> separate buffer process, and two, find a way to emit event
> reports in a time-driven way rather than once per transaction
> commit. I'm a bit vague about how to do th
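One simple-minded sketch of the time-driven idea, for illustration only (the
interval and function name are invented): accumulate counters locally and send
them to the collector only when enough time has passed since the last report,
rather than at every transaction commit.

    #include <stddef.h>
    #include <sys/time.h>

    #define STATS_REPORT_INTERVAL_MS 500    /* made-up reporting interval */

    static struct timeval last_report;

    static void
    maybe_send_stats_report(void)
    {
        struct timeval now;
        long        elapsed_ms;

        gettimeofday(&now, NULL);
        elapsed_ms = (now.tv_sec - last_report.tv_sec) * 1000L +
                     (now.tv_usec - last_report.tv_usec) / 1000L;

        if (elapsed_ms >= STATS_REPORT_INTERVAL_MS)
        {
            /* send the accumulated event counts to the collector here */
            last_report = now;
        }
    }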
On Sun, 18 Jun 2006, paolo romano wrote:
Anyway, again in theory, if one wanted to minimize logging overhead for
shared locks, one might adopt a different treatment for (i) regular
shared locks (i.e. locks due to plain reads not requiring durability in
case of 2PC) and (ii) shared locks held
Jason Essington <[EMAIL PROTECTED]> writes:
> Has there been any movement on this? as of 8.1.2 psql still whines on
> OS X tiger when you exit.
> I realize it is not significant, but I'd still rather not see it.
I've committed that fix into CVS HEAD.
regards, tom lane
Douglas McNaught <[EMAIL PROTECTED]> writes:
> (a) and (b): of course you would only do it on a temporary basis for
> problem diagnosis.
Temporary or not it isn't really an option when you're dealing with high
volumes. You could imagine a setup where say, 1% of page requests randomly
turn on deb
"Gurjeet Singh" <[EMAIL PROTECTED]> writes:
> Probably this explains the ERROR for the last query... The ORDER BY
> and LIMIT clauses are expected to end a query (except for subqueries,
> of course), and hence the keyword UNION is not expected after the
> LIMIT clause...
Yeah. In theory that's un
Probably this explains the ERROR for the last query... The ORDER BY
and LIMIT clauses are expected to end a query (except for subqueries,
of course), and hence the keyword UNION is not expected after the
LIMIT clause...
On 6/18/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Joe Conway <[EMAIL PROTECTED
Joe Conway <[EMAIL PROTECTED]> writes:
> I was trying to work around limitations with "partitioning" of tables
> using constraint exclusion, when I ran across this little oddity:
I think you're under a misimpression about the syntax behavior of ORDER
BY and UNION. Per spec, ORDER BY binds less tightly than UNION
I was trying to work around limitations with "partitioning" of tables
using constraint exclusion, when I ran across this little oddity:
-- works
test=# select * from (select time from url_access_2006_06_07 order by 1
limit 2) as ss1;
time
-
2006-06-07 15:07:41
200
Lines 509-512 of contrib/dblink/expected/dblink.out read:
-- this should fail because there is no open transaction
SELECT dblink_exec('myconn','DECLARE xact_test CURSOR FOR SELECT * FROM foo');
ERROR: sql error
DETAIL: ERROR: cursor "xact_test" already exists
The error message is not consisten
paolo romano <[EMAIL PROTECTED]> writes:
> Anyway, again in theory, if one wanted to minimize logging overhead for
> shared locks, one might adopt a different treatment for (i) regular shared
> locks (i.e. locks due to plain reads not requiring durability in case of 2PC)
> and (ii) shared locks
Never mind. I scrubbed my folders and obtained a new fresh copy from CVS. Now
it works.
Regards,
Thomas Hallgren
No, it's not safe to release them until 2nd phase commit. Imagine table foo
and table bar. Table bar has a foreign key reference to foo.
1. Transaction A inserts a row to bar, referencing row R in foo. This
   acquires a shared lock on R.
2. Transaction A precommits, releasing the lock.
3. Transaction B d
Some more info. If I manually create the data directory first, the output is
different:
C:\Tada\Workspace>mkdir data
C:\Tada\Workspace>initdb -D data
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database
I just compiled a fresh copy from CVS head. I then tried to do an initdb
as user 'postgres' (non admin user on my system). I get the following error:
C:\Tada\Workspace>initdb -D data
The files belonging to this database system will be owned by user
"postgres".
This user must also own the se
Greg Stark <[EMAIL PROTECTED]> writes:
> Douglas McNaught <[EMAIL PROTECTED]> writes:
>
>> Yeah, but if you turn on query logging in that case you'll see the
>> bajillions of short queries, so you don't need the accurate snapshot
>> to diagnose that.
>
> Query logging on a production OLTP machine?
On 17-6-2006 1:24, Josh Berkus wrote:
Arjen,
I can already confirm very good scalability (with our workload) on
postgresql on that machine. We've been testing a 32-thread/16GB version
and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
(with all four threads enabled).
Keen.
Douglas McNaught <[EMAIL PROTECTED]> writes:
> Yeah, but if you turn on query logging in that case you'll see the
> bajillions of short queries, so you don't need the accurate snapshot
> to diagnose that.
Query logging on a production OLTP machine? a) that would be a huge
performance drain on th