[SQL] We all are looped on Internet: request + transport = invariant
Usage of a database consists of a request plus transport of the results to a visualizer. We are all connected through the Internet, so the visualizer should be remote. The most general format for data is XML, and we also need to choose a transport protocol: the most widespread is HTTP. SQL could deliver data this way, but today it does not, so we need one more language (PHP, etc.). That is very, very bad for an inexperienced user. We imagine that we have simplified SQL by rejecting this feature, but in reality we have only moved the non-triviality into another language. The non-triviality is preserved! And on top of it, you now have a coupling point between SQL and that other language.

I propose the following approach:
http://sql40.by.ru/site/sql40/en/author/introduction_eng.htm
http://sql40.by.ru/site/sql40/en/author/determination_eng.htm

Representing data as a tree (instead of as a matrix) is not a goal in itself; it is a side result.
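For reference, later PostgreSQL releases did grow SQL/XML export functions, so a query result can be turned into an XML document without leaving SQL. This is only a sketch of that facility, assuming the query_to_xml() function (available from 8.3 onward) and a made-up customers table:

SELECT query_to_xml('SELECT id, name FROM customers',  -- any SQL query as text
                    true,    -- nulls: emit xsi:nil attributes for NULL columns
                    false,   -- tableforest: false wraps all rows in one <table> element
                    '');     -- target namespace (empty here)

The client still has to ship the resulting document over HTTP itself, so this does not remove the need for a middle layer, but it does keep the XML generation inside SQL.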
[SQL] How can I know if a row is Locked?
How can I know whether a row is locked by another transaction? I have a transaction like this:

BEGIN;
SELECT * FROM compels.teste WHERE id = '1' FOR UPDATE;

PS1: id is the primary key.
PS2: The COMMIT is issued only after a long time.

In this case the row whose primary key equals '1' will be locked against other transactions until the COMMIT is executed. So, how can I detect whether this row is locked?

Thanks,

Carlos Henrique Iazzetti Santos
Compels Informática
Santa Rita do Sapucaí - MG
www.compels.net
Re: [SQL] How can I know if a row is Locked?
On Fri, 13 Apr 2007, at 04:27:34 -0700, Carlos Santos wrote:
> How can I know whether a row is locked by another transaction?
> I have a transaction like this:
>
> BEGIN;
> SELECT * FROM compels.teste WHERE id = '1' FOR UPDATE;
>
> PS1: id is the primary key.
> PS2: The COMMIT is issued only after a long time.
>
> In this case the row whose primary key equals '1' will be locked against
> other transactions until the COMMIT is executed.
> So, how can I detect whether this row is locked?

You can run the SELECT with 'FOR UPDATE NOWAIT' and catch the error, if an error occurs. Maybe there are better solutions...

Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG-ID: 0x3FFF606C, private 0x7F4584DA http://wwwkeys.de.pgp.net
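A rough sketch of that idea (mine, not Andreas's): wrap the probe in a PL/pgSQL function and trap the lock error. The function name is made up, NOWAIT requires PostgreSQL 8.1 or later, and the parameter type should be adjusted to match the real id column:

CREATE OR REPLACE FUNCTION row_is_free(p_id text) RETURNS boolean AS $$
BEGIN
    -- Try to take the row lock without waiting; FOR UPDATE NOWAIT raises
    -- lock_not_available (SQLSTATE 55P03) if another transaction holds it.
    PERFORM 1 FROM compels.teste WHERE id = p_id FOR UPDATE NOWAIT;
    RETURN true;
EXCEPTION
    WHEN lock_not_available THEN
        RETURN false;
END;
$$ LANGUAGE plpgsql;

Note that when the function returns true, the calling transaction now holds the row lock itself until it commits or rolls back, so this is only useful immediately before doing the real update.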
Re: [SQL] A long-running transaction
On Fri, Apr 13, 2007 at 09:02:58AM +0800, John Summerfield wrote:

> Why is this implementation preferable to not doing that?

Because this way, readers never wait for writers. On most database systems, that pending UPDATE would block anyone reading the row too, even if they weren't going to write on it. It's a trade-off. It happens to mean that the trade is bad for the thing you happen to be doing, but since most database operations aren't bulk ones in the way you're trying to achieve this, that's quite possibly a reasonable trade off.

> Where is enterprisedb? enterprisedb.com seems to be held by a squatter
> in Kingston. About where they're playing an important cricket
> competition about now.

Google took me here in one: http://www.enterprisedb.com

But anyway, their stuff is built on top of Pg, so going to their site won't help you.

> I can see we need before and after, but why all those intermediate rows?

Because in the general case, you need the intermediate rows. The problem is that you'd have to write a special piece of code to catch the case where nobody else can see the row that you're about to expire, and that test isn't free. Therefore, you do it the same way any other row gets expired.

> Also, I don't see why (apparently) a sequential search is used; surely
> if all of these rows might be required, still a pointer to the last
> would be right? Or last two, if you need the ability to check the order.

No, it's not a seqscan. It's following the chain of versions of the row. You don't know until you look at the actual row whether the version of it you are looking at is valid for you. There isn't any other place to store that information. (If you want someone who really understands how all this works to explain it to you in more accurate terms, you should ask the question on -hackers. They'll point you to the developers' docs that I can't seem to find right now.)

> Is there a development version I can try, with this improvement in it?

Well, you could try using the current CVS HEAD, which is close to feature freeze. It'd sure be a service to the community, at least, because we'd learn whether the proposed change fixes this sort of case. (But I wouldn't trust my data to the HEAD for real. You said you're not actually in production yet, though.)

> 1. For the first day or so, my observation was that the disk was not
> particularly busy.

That's completely consistent with the theory I have. As the number of dead tuples goes up, your disk activity will slowly get worse.

> At present I'm trying to clarify in my mind the nature of the problem.
> What I'm trying to do seems to me reasonable. I have some data, and I
> want it all in or none of it, so it fits the idea of a single transaction.
>
> It might be that my demands exceed Postgresql's current capabilities,
> but by itself it doesn't make what I'm trying to do unreasonable.

No, it's not unreasonable, but it happens to be a pessimal case under Postgres.

A

--
Andrew Sullivan | [EMAIL PROTECTED]
"The year's penultimate month" is not in truth a good way of saying
November.
    --H.W. Fowler
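To make the "readers never wait for writers" point concrete, here is a small two-session sketch of my own (it reuses the testtab example table that appears later in this thread):

-- session 1 (the writer):
BEGIN;
UPDATE testtab SET col2 = 'new' WHERE id = 1;   -- row lock taken, not yet committed

-- session 2, while session 1 is still open:
SELECT col2 FROM testtab WHERE id = 1;          -- returns immediately with the OLD value;
                                                -- a plain reader is never blocked
SELECT * FROM testtab WHERE id = 1 FOR UPDATE;  -- but a would-be writer DOES wait here

-- session 1:
COMMIT;                                         -- session 2's FOR UPDATE now proceeds
                                                -- and sees col2 = 'new'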
Re: [SQL] How can I know if a row is Locked?
On Fri, 13 Apr 2007 04:27:34 -0700, Carlos Santos wrote:

> How can I know whether a row is locked by another transaction?
> I have a transaction like this:
>
> BEGIN;
> SELECT * FROM compels.teste WHERE id = '1' FOR UPDATE;
>
> PS1: id is the primary key.
> PS2: The COMMIT is issued only after a long time.

I've a feeling that you're trying to do a BadThing(tm). It shouldn't be necessary unless you've got some very specific needs. To quote Tom Lane:

---
> Is there a recommended "postgres way" to determine if a certain row is
> locked... without blocking?

8.1 has a SELECT FOR UPDATE NOWAIT option. Alternatively, just do a wait while having a very short statement_timeout.

> In my custom postgres client app I'd like to be able to determine if
> another user is "modifying" a given record. If so, I would present a
> dialog to the user such as "Record Locked. Sam Smith is already
> modifying this record. Try again later."

However, I think the question is moot because it's predicated on a terrible underlying approach. You should NEVER design a DB app to hold a lock while some user is editing a record (and answering the phone, going out to lunch, etc). Fetch the data and then let the user edit it while you are not in a transaction. When he clicks UPDATE, do

    BEGIN;
    SELECT the row FOR UPDATE;
    check for any changes since you fetched the data originally
    if none, UPDATE and commit
    else rollback and tell user about it

If you do see conflicting changes, then you have enough info to resolve the conflicts or abandon the update.
---

--
| And Do What You Will be the challenge | http://apcoln.linuxpl.org
| So be it in love that harms none      | http://biznes.linux.pl
| For this is the only commandment.     | http://www.juanperon.info
`---* JID: [EMAIL PROTECTED] *---'        http://www.naszedzieci.org
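A bare-bones sketch of Tom's re-check step (my own wording, using the compels.teste table from the question and a made-up editable column named col_value):

-- Earlier, outside any transaction, the application read the row:
--   SELECT id, col_value FROM compels.teste WHERE id = '1';
--   (suppose it saw col_value = 'original')

-- When the user finally clicks "save":
BEGIN;
SELECT col_value FROM compels.teste WHERE id = '1' FOR UPDATE;  -- lock the row now

-- If col_value still equals 'original', nobody changed it while the
-- user was editing, so apply the edit and commit:
UPDATE compels.teste SET col_value = 'edited value' WHERE id = '1';
COMMIT;

-- Otherwise someone modified it first: ROLLBACK; and report the conflict.

Tom's other suggestion, a very short statement_timeout, instead bounds how long the SELECT ... FOR UPDATE is allowed to wait before it errors out.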
Re: [SQL] We all are looped on Internet: request + transport = invariant
On Fri, Apr 13, 2007 at 10:58:12AM +0300, Dmitry Turin wrote:

> The most general format for data is XML.

Surely not. In particular, XML is actually miserably bad at capturing certain kinds of relations between items. It's in fact this assumption that has made this conversation seem like we're talking past one another.

> today it does not, so we need one more language (PHP, etc.).
> That is very, very bad for an inexperienced user.

So what? If a user has no experience, and wants to do something new, why should they not have to do some work to learn how to do those things? SQL is not so fantastically hard that you can't learn it. Indeed, I'm pretty sure that if someone as foolish as I can learn it, anyone can.

A

--
Andrew Sullivan | [EMAIL PROTECTED]
If they don't do anything, we don't need their acronym.
    --Josh Hamilton, on the US FEMA
Re: [SQL] A long-running transaction
On Fri, Apr 13, 2007 at 07:49:19AM -0400, Andrew Sullivan wrote:
> On Fri, Apr 13, 2007 at 09:02:58AM +0800, John Summerfield wrote:
> > 1. For the first day or so, my observation was that the disk was not
> > particularly busy.
>
> That's completely consistent with the theory I have. As the number
> of dead tuples goes up, your disk activity will slowly get worse.
This simple demonstration occurred to me on the subway on the way
here, to show that the dead tuples really will mount.
testing=# \d testtab
                            Table "public.testtab"
 Column |  Type   |                      Modifiers
--------+---------+------------------------------------------------------
 id     | integer | not null default nextval('testtab_id_seq'::regclass)
 col1   | text    |
 col2   | text    |
Indexes:
    "testtab_pkey" PRIMARY KEY, btree (id)

testing=# SELECT * from testtab;
 id | col1 | col2
----+------+------
  1 | one  |
(1 row)
Now, we check how many dead tuples we have:
testing=# VACUUM VERBOSE testtab ;
INFO: vacuuming "public.testtab"
INFO: index "testtab_pkey" now contains 1 row versions in 2 pages
DETAIL: 0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "testtab": found 0 removable, 1 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 3 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: vacuuming "pg_toast.pg_toast_90325"
INFO: index "pg_toast_90325_index" now contains 0 row versions in 1 pages
DETAIL: 0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "pg_toast_90325": found 0 removable, 0 nonremovable row versions in 0
pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
Notice the lines:
INFO: "testtab": found 0 removable, 1 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
So, let's do an update:
testing=# begin;
BEGIN
testing=# UPDATE testtab set col2='1';
UPDATE 1
testing=# commit;
COMMIT
This time, when we run vacuum, we get this (snipped out for brevity):
INFO: "testtab": found 1 removable, 1 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
So, several updates in one transaction:
testing=# begin;
BEGIN
testing=# UPDATE testtab set col2='2';
UPDATE 1
testing=# UPDATE testtab set col2='3';
UPDATE 1
testing=# UPDATE testtab set col2='4';
UPDATE 1
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# commit;
COMMIT
This time, VACUUM VERBOSE tells us that all of those were dead:
INFO: "testtab": found 4 removable, 1 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
Ok, what if we just update without actually changing anything?
testing=# begin;
BEGIN
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# UPDATE testtab set col2='5';
UPDATE 1
testing=# commit;
COMMIT
We get the same result:
INFO: "testtab": found 5 removable, 1 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
A
>
> > At present I'm trying to clarify in my mind the nature of the problem.
> > What I'm trying to do seems to me reasonable. I have some data, and I
> > want it all in or none of it, so it fits the idea of a single transaction.
> >
> > It might be that my demands exceed Postgresql's current capabilities,
> > but by itself it doesn't make what I'm trying to do unreasonable.
>
> No, it's not unreasonable, but it happens to be a pessimal case under
> Postgres.
>
> A
>
> --
> Andrew Sullivan | [EMAIL PROTECTED]
> "The year's penultimate month" is not in truth a good way of saying
> November.
> --H.W. Fowler
>
--
Andrew Sullivan | [EMAIL PROTECTED]
Users never remark, "Wow, this software may be buggy and hard
to use, but at least there is a lot of code underneath."
--Damien Katz
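As a small aside on watching those dead tuples mount: releases new enough to have the n_dead_tup column in pg_stat_user_tables (8.3 onward, if I recall correctly) let you poll the count without running VACUUM VERBOSE at all:

SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'testtab';
-- n_dead_tup climbs with every UPDATE or DELETE and drops back after VACUUM.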
Re: [SQL] A long-running transaction
Hi Andrew,

On Fri, 2007-04-13 at 07:49 -0400, Andrew Sullivan wrote:
> Because in the general case, you need the intermediate rows. The
> problem is that you'd have to write a special piece of code to catch
> the case where nobody else can see the row that you're about to
> expire, and that test isn't free. Therefore, you do it the same way
> any other row gets expired.

Just curious: is there a way to defeat MVCC? i.e., if you can lock the database exclusively, there won't be any readers or writers?

Joe
Re: [SQL] A long-running transaction
On Fri, Apr 13, 2007 at 12:42:36PM -0400, Joe wrote:

> Just curious: is there a way to defeat MVCC?

No. That is,

> i.e., if you can lock the
> database exclusively, there won't be any readers or writers?

while this is true, it doesn't shut off the way writes go through the system. MVCC is rather close to the central guts of PostgreSQL.

If you want to understand more about why this is the case, what the trade-off considerations are, &c., then I strongly advise you to troll through the -hackers archives. Approximately once every 8-10 months someone comes along with an idea for a change that disables MVCC in just this or that case. Most of the time, the actual effects of this are different than people think they will be -- concurrency is hard, and making changes so that it's impossible to get into a race is tricky. This is why the in-place storage management that EnterpriseDB contributed was both somewhat controversial and somewhat limited in its application.

A

--
Andrew Sullivan | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that
visionary and imaginative work need not end up well.
    --Dennis Ritchie
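As a small illustration of that point (my own sketch, not from the thread): even an exclusive table lock does not change how the writes themselves are stored -- every UPDATE still leaves a dead row version behind for VACUUM, exactly as in the testtab demonstration above.

BEGIN;
-- Keep all other readers and writers out of the table for the duration:
LOCK TABLE testtab IN ACCESS EXCLUSIVE MODE;
UPDATE testtab SET col2 = 'a';
UPDATE testtab SET col2 = 'b';
COMMIT;
-- A subsequent VACUUM VERBOSE testtab would still report 2 removable row
-- versions: the lock removes concurrency, but it does not bypass MVCC.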
