UNION ALL
(SELECT 2 AS seq, *
FROM mydata
ORDER BY random_number ASC LIMIT K)
ORDER BY seq ASC, random_number ASC LIMIT K;
This should provide each row with an equal chance of being selected
while requiring the database to fetch at most 2 * K rows.
Regards,
Paul Tillotson
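The first branch of the UNION ALL is cut off in the snippet above; presumably it took the K rows at or above a randomly chosen starting point, so the full query would look roughly like this (K and some_random_value are placeholders):

(SELECT 1 AS seq, *
 FROM mydata
 WHERE random_number >= some_random_value
 ORDER BY random_number ASC LIMIT K)
UNION ALL
(SELECT 2 AS seq, *
 FROM mydata
 ORDER BY random_number ASC LIMIT K)
ORDER BY seq ASC, random_number ASC LIMIT K;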
in there. The default
stack depth should be good for hundreds of function calls, but if your
triggers are recursive then no depth will be enough.
Regards,
Paul Tillotson
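For illustration (all names here are invented), a trigger that recurses without bound looks like this:

CREATE TABLE t (id int PRIMARY KEY, n int);

CREATE FUNCTION bump() RETURNS trigger AS $$
BEGIN
    -- this UPDATE fires the same trigger again, so it recurses until the stack limit is hit
    UPDATE t SET n = n + 1 WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_bump AFTER UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE bump();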
Alvaro Herrera wrote:
On Wed, Jul 27, 2005 at 11:46:05AM -0400, Robert Treat wrote:
Seems unlikely unless folks like Tom Lane, Stephan Szabo, and Richard
Huxton have unsubscribed and resubscribed lately... Funny thing is it
isn't every message, but maybe half of them. And it's not to specifi
might find this informative:
http://www.postgresql.org/docs/8.0/interactive/explicit-locking.html
Regards,
Paul Tillotson
(*) Actually, you can get this kind of deadlock with just UPDATES.
Suppose that your web application does:
BEGIN;
UPDATE hits SET count = count + 1 WHERE page = 'somepage.a
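The snippet is cut off here; the classic two-session sequence that deadlocks with nothing but UPDATEs looks roughly like this (the page names are illustrative):

-- session 1
BEGIN;
UPDATE hits SET count = count + 1 WHERE page = 'a.html';
-- session 2
BEGIN;
UPDATE hits SET count = count + 1 WHERE page = 'b.html';
-- session 1: blocks, waiting for session 2's row lock
UPDATE hits SET count = count + 1 WHERE page = 'b.html';
-- session 2: now waits for session 1 -> deadlock detected, one transaction is aborted
UPDATE hits SET count = count + 1 WHERE page = 'a.html';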
're not there). If you need to run it regularly, you're almost certainly
not reserving enough space in the free space map. VACUUM takes no locks
that conflict with selecting, inserting, updating, or deleting, so that
should be perfectly safe.
Regards,
Paul Tillotson
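For reference, "reserving enough space in the free space map" meant raising the 8.x-era settings in postgresql.conf; the numbers below are only placeholders, and a database-wide VACUUM VERBOSE reports how much of the map is actually in use:

max_fsm_pages = 200000
max_fsm_relations = 1000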
Tom Lane wrote:
Paul Tillotson <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Hardly likely, considering it's estimating only 296 rows in the subquery
output. My bet is that you've chosen a datatype whose comparisons are
not hashable (like char(n)). What is the datatyp
Tom Lane wrote:
Paul Tillotson <[EMAIL PROTECTED]> writes:
For the following query, postgres is running the IN subquery over and
over again (once for each row scanned in the parent table.)
I would have expected it to run the whole query once and create a hash
which would then be
tuples in the
whole table = 25 KB. Shouldn't the optimizer think that the subquery
will only fetch 25 KB worth of rows?
(Later, I realized that the official name for "sort_mem" is now
work_mem. Now, does this mean that my set sort_mem = 50 did not do
anything?)
Regards,
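For what it's worth, work_mem (like sort_mem before it) is measured in kilobytes, so "SET sort_mem = 50" most likely set it to 50 kB, well below the usual default of 1024. A quick check looks like this:

SET work_mem = 51200;   -- about 50 MB, expressed in kB
SHOW work_mem;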
Tom Lane wrote:
Paul Tillotson <[EMAIL PROTECTED]> writes:
I don't think anyone wants to defend the negative modulus as such, but to fix
it, we have to do one of these:
(1) Keep rounding division, but rewrite the numeric modulus operator to use a
form of division
ast numerics with no fractional part)
behave like integer division seems more useful.
Thoughts, anyone?
Regards,
Paul Tillotson
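For comparison, the integer operators already follow truncating division, with the remainder taking the sign of the dividend; the suggestion is to make numerics with no fractional part consistent with that:

SELECT (-13) / 4, (-13) % 4;   -- -3, -1
SELECT 13 / (-4), 13 % (-4);   -- -3,  1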
(perhaps wrongly--I have not tested much) that putty goes much
slower when using UTF8.
Have you confirmed that psql is at fault, and not your shell? (I.e.,
try pasting directly to the shell--hopefully your query doesn't start
with rm -rf / : )
Regards,
Paul Tillotson
Joseph Shraibman
lement your fix and take the credit
for this. Ideally, over time, you will build up a reputation as a
problem solver and gain more responsibility for database design and thus
have more ability to fix the underlying problems.
Regards,
Paul Tillotson
-returning functions), was efficiently implemented, was well tested,
and was installed by default.
Regards,
Paul Tillotson
SELECT * FROM information_schema.tables;
SELECT * FROM information_schema.columns;
These should have most of the information you need.
Regards,
Paul Tillotson
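For example, to list the columns of a single table (the schema and table names are placeholders):

SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'mytable'
ORDER BY ordinal_position;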
Srikanth Utpala (Virinchi) wrote:
Hi
I am Srikanth. I want to get information about all the tables in an
existing PostgreSQL database. I want
Greg Stark wrote:
Tom Lane <[EMAIL PROTECTED]> writes:
Alvaro Herrera <[EMAIL PROTECTED]> writes:
On Fri, Apr 01, 2005 at 10:14:07PM -0500, Paul Tillotson wrote:
...
Well, at that point you need to take a lock in order to be able to
manage locks. Managing no
ts TOASTED when it gets too wide. I suppose this
would make taking a lock as expensive as doing an update, though, right?
Paul Tillotson
tion of them) without a lot of extra IO
provided that the pages were not 100 % full. (Which is almost never the
case in postgresql anyway.)
Paul Tillotson
Alvaro Herrera wrote:
On Thu, Mar 31, 2005 at 06:54:31PM -0600, Guy Rouillier wrote:
Alvaro Herrera wrote:
Now this can't be appl
u post any more timings on this list, please post the EXPLAIN
ANALYZE as well. This allows us to see what plan the planner picked,
how much time each step took, and how many rows were actually affected.
To get the EXPLAIN ANALYZE, just type EXPLAIN ANALYZE and copy the output.
Regards,
Paul
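For instance, a made-up query against one of the tables mentioned in this thread:

EXPLAIN ANALYZE SELECT * FROM gltrans WHERE period = 13;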
there is a gltrans posting to
the period and account of the chartdetails ie quite often.
If it gets updated often it will need vacuuming often as well.
My rule of thumb is that if more than 10% of the data in a table is
getting updated, vacuum immediately before and immediately after the
code t
hich is the
variable that governs the WHILE loop that it is within. Can you take it
out of the while loop and still get the same results?
Second, could you write that update statement to say WHERE period =
$periodno rather than WHERE period >= $period? If not, why not?
Regards,
Paul Till
could use:
SELECT EXISTS (SELECT 1 FROM chartdetails WHERE );
This will be much faster since with EXISTS, postgres only runs the query
long enough to find out whether even one row would be returned--if so,
it stops.
Regards,
Paul Tillotson
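A hypothetical filled-in version (the WHERE clause in the original snippet is elided, and the column names here are assumptions):

SELECT EXISTS (
    SELECT 1 FROM chartdetails
    WHERE accountcode = 7600 AND period = 13
);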
Phil Daintree wrote:
Dear psqlers,
I need your help!
I
Correct me if I am wrong, but doesn't the postmaster notice that
something killed a backend and cause all the other ones to roll back?
Paul Tillotson
Neil Conway wrote:
Paul Tillotson wrote:
Does anyone know a safe way to shutdown just one backend
Sending it a SIGTERM via kill(1) should be
ip address connection
limits. How hard would that be?
Paul Tillotson
The usual method for handling this is the LIMIT and OFFSET clauses in a
SELECT.
For example, this would get the results to put on the fifth page:
SELECT * FROM products ORDER BY stock_number DESC LIMIT 10 OFFSET 40;
Paul Tillotson
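(In general, OFFSET = (page_number - 1) * rows_per_page, so page 5 at 10 rows per page gives OFFSET 40 as above, and page 1 is OFFSET 0.)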
Rick Schumeyer wrote:
I think this is a common task, but I’m not
You do not have to send the transaction all at once.
Paul Tillotson
A question about using transactions from php:
Does the entire transaction have to be sent all at once,
or can I begin the transaction, issue commands one
at a time, and then end the transaction?
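A minimal sketch of the statement-at-a-time style (the table names are invented; from PHP each line would simply be its own query call on the same connection):

BEGIN;
INSERT INTO orders (customer_id) VALUES (1);
UPDATE stock SET qty = qty - 1 WHERE item_id = 42;
COMMIT;   -- or ROLLBACK; to abandon the whole thing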
r Postgres,
which would presumably show you the communication between the client and
server in a nicely-formatted way.
Paul Tillotson
olumn is
varchar(8192).
How do you figure this out? Use ethereal.
http://www.ethereal.com/
Paul Tillotson
omplish the desired shrinking.
Comments? Am I missing some obvious way of accomplishing this goal? Is
anyone working on something like this?
Paul Tillotson
P. S. Possible snags that I have thought of include:
- I don't know for sure that UPDATE will use the free space map (will it
put the new
your query to see if it is using your index. If not, it
is only slowing you down.)
- As said before, VACUUM frequently, maybe even every 10 seconds
(experiment with different intervals.)
Paul Tillotson
I have a small table (about 20 rows, essentially constant) that is receiving
about 160 updates per second
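One concrete way to act on both suggestions (the table and column names are invented):

EXPLAIN SELECT val FROM counters WHERE id = 7;   -- confirm the index is actually used
VACUUM counters;   -- issue this from a client-side loop, e.g. every 10 seconds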
Don Isgitt wrote:
Hello,
I have received no posts from GENERAL since yesterday morning; is the
list broken? Thank you.
Don
running that query (as each process only runs the query
once). The amount of per-process memory used will vary with the
complexity of the query and the plan chosen by the planner.
Paul Tillotson
Alvaro Herrera wrote:
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
I seem to remember hearing that the memory limit on certain operations,
such as sorts, is not "enforced" (may the hackers correct me if I
drastically more rows than the
planner thinks it will.
Paul Tillotson
t any requirements for being able to do math "in the database"
and get a valid answer. (Why don't you store it as a string?)
Paul Tillotson
*As evidenced:
mysql> create table foobar (i bigint unsigned);
Query OK, 0 rows affected (0.00 sec)
mysql> insert into foobar values (
template1=# insert into foobar (i) values (pow(2::numeric,
64::numeric)); --too large
ERROR: value for domain bigint_unsigned violates check constraint "$1"
template1=# select * from foobar;
i
--
0
18446744073709551615
(2 rows)
Paul Tillotson
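The domain definition itself isn't shown in the snippet; a declaration along these lines would produce the behavior above:

CREATE DOMAIN bigint_unsigned AS numeric(20,0)
    CHECK (VALUE >= 0 AND VALUE <= 18446744073709551615);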
Hi All
Which solution to use depends on how many other tables there are and their
relative sizes, but the following option has a reasonably good chance:
Suppose you have basetable, and joined1, and joined2.
Basetable.tablename tells which of the secondary tables to join against
(contains either 'joined1' o
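A sketch of the query shape being suggested (the join keys and the detail column are assumptions):

SELECT b.*, COALESCE(j1.detail, j2.detail) AS detail
FROM basetable b
LEFT JOIN joined1 j1 ON b.tablename = 'joined1' AND j1.id = b.ref_id
LEFT JOIN joined2 j2 ON b.tablename = 'joined2' AND j2.id = b.ref_id;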
ity. We realized that a lot
of simplicity was to be gained by connecting directly to the database
and putting most of the middle tier (there isn't that much business
logic anyway) inside postgres itself in the form of user-defined
functions and triggers.
Regards,
Paul Tillotson
At 07:35 PM 9/2/
everything
over ssh, or (b) just making sure that users have "strong" passwords and
requiring "md5" authentication in pg_hba.conf.
Our client app is in C# using the postgresql .net data provider.
Regards,
Paul Tillotson
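The pg_hba.conf side of option (b) is a single line of this shape (the addresses are placeholders):

host    all    all    192.168.1.0/24    md5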
2. Instability of plans. Right now, the planner will not change plans
underneath you --- you have to issue an explicit VACUUM or ANALYZE
to change the terms of discussion. That would stop being true if
physical file size were always taken into account. Maybe this is a
problem, or maybe
Kevin,
I've been casually following this thread (you might say "lurking"), and
I don't know *why* the problem is occurring, but you said that you
didn't see a foolproof way to break the problem into steps. I think
there is such a way, which will at least allow you to get the job done.
This exam
as a round up has already occurred.
Thus, the value of (x / y) is 1 too large, and so x % y is actually
giving you (x % y) - y, a negative number. I tried looking at how the
division actually works, but it is over my head, at least on a 30-minute
perusal.
Regards,
Paul Till
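A concrete illustration (the numbers are mine): take x = 7 and y = 2. With truncating division, 7 / 2 = 3 and the remainder is 7 - 3*2 = 1. If the division instead rounds 3.5 up to 4, the same formula gives 7 - 4*2 = -1, which is exactly (x % y) - y.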
eleted from? If so, check who else
is deleting or updating while you are looking at the data. You should
make a careful check to see if this is the case before assuming that
this is a problem with postgres.
Regards,
Paul Tillotson
On Fri, 23 Apr 2004, Jerry Robertson wrote:
We have been r