Did you try increasing the "max_execution_time" value in the php.ini file?
--Nirmalya
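For reference, that timeout lives in php.ini; a minimal fragment (the value here is illustrative -- 0 disables the limit):

```ini
; php.ini -- cap on wall-clock seconds a script may run (0 = no limit).
; CLI runs already default to 0; this matters for web (mod_php/FPM) requests.
max_execution_time = 0
```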
--- Ben-Nes Yonatan <[EMAIL PROTECTED]> wrote:
Hi all,
I sent the following email to the PHP mailing list also, but maybe this
is a more appropriate mailing list.
I wrote a PHP script which is running very long queries (hours) on a
database.
- Original Message -
From: "Tino Wildenhain" <[EMAIL PROTECTED]>
To: "Ben-Nes Yonatan" <[EMAIL PROTECTED]>
Cc: "Martijn van Oosterhout" ;
Sent: Sunday, October 02, 2005 4:26 PM
Subject: Re: [GENERAL] Broken pipe
On Sunday, 02.10
Martijn van Oosterhout wrote:
On Sun, Oct 02, 2005 at 12:07:18PM +0200, Ben-Nes Yonatan wrote:
I wrote a PHP script which is running very long queries (hours) on a
database.
I seem to have a problem running the code when there are single queries
which take a long time (like 5 hours for one), so I'll have to think what
will be better in my case.
Shana Tova everyone! (Happy New Year in Hebrew :))
Ben-Nes Yonatan
---(end of broadcast)---
TIP 6: explain analyze is your friend
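One workaround often suggested for web-script timeouts on hour-long queries is to push the query into a background worker and let the front-end poll for the result instead of blocking. A minimal, database-free sketch of that hand-off (all names here are illustrative):

```python
import threading
import queue

def run_async(fn, *args):
    """Run a long job on a worker thread; the caller polls the queue."""
    q = queue.Queue()
    t = threading.Thread(target=lambda: q.put(fn(*args)), daemon=True)
    t.start()
    return q

# Stand-in for a 5-hour query: the caller gets the queue back immediately.
result_queue = run_async(sum, range(1_000_000))
```

`result_queue.get()` blocks until the worker finishes; a web script would instead check `get_nowait()` on each request and render "still running" until the job completes.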
encountered no connection to the file
(broken pipe).
Am I correct in my assumption? If so, how can I configure PHP to wait as
long as I tell it?
Of course, if I'm wrong, I would like to know the reason as well :)
Thanks in advance,
Ben-Nes Yonatan
increasingly amateur at this subject... :)
Cheers!
Ben-Nes Yonatan
Bohdan Linda wrote:
On Tue, Aug 30, 2005 at 06:07:24PM +0200, Michael Fuhr wrote:
tables, and a VACUUM might start or complete immediately after you
issue the query but before you read the results). This method is
therefore unreliable.
I intend to do the VACUUM FULL during quiet hours, thus
Martijn van Oosterhout wrote:
On Wed, Aug 31, 2005 at 09:19:05AM +0200, Ben-Nes Yonatan wrote:
If the subtransaction writes at least one tuple, it counts as another
transaction. Otherwise it doesn't count.
Oh crap, I fear that now I'm in serious trouble.
Where can I read about this limit?
Alvaro Herrera wrote:
On Tue, Aug 30, 2005 at 10:39:57PM -0500, Bruno Wolff III wrote:
On Wed, Aug 31, 2005 at 01:27:30 +0200,
Ben-Nes Yonatan <[EMAIL PROTECTED]> wrote:
Now again, I'm probably just paranoid, but when I'm starting a transaction
and in it making more than 4 billion operations before closing it, it's counted
as only one transaction, right? (should I duck to avoid the manual? ;))
As always, thanks a lot!
Ben-Nes Yonatan
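The 4-billion figure in the question comes from PostgreSQL's 32-bit transaction IDs (xids): there are 2^32 of them before the counter wraps, and xid comparisons are wraparound-aware, treating an xid as "older" if it sits within the 2^31 ids behind the other. A simplified sketch of that modulo arithmetic (ignoring the few reserved special xids):

```python
XID_SPACE = 2 ** 32  # ~4.29 billion transaction ids before the counter wraps

def xid_precedes(a: int, b: int) -> bool:
    """True if xid a is logically earlier than xid b, wraparound-aware."""
    diff = (b - a) % XID_SPACE
    return 0 < diff < 2 ** 31
```

This is why routine VACUUM matters: old rows must be frozen before the counter laps them, or the comparison above would start treating old data as being in the future.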
Tom Lane wrote:
Ben-Nes Yonatan <[EMAIL PROTECTED]> writes:
Indexes:
"items_items_id_key" UNIQUE, btree (items_id)
"items_left" btree (left)
"items_left_right" btree (left, right)
You could get rid of the items_left index --- it's redundant with
items_left_right, whose leading column serves the same lookups.
with different values up to even 1000, but that
didn't help a bit (I ran VACUUM ANALYZE after each change).
I'm quite clueless and also in quite a hurry to finish this project, so
any help or piece of a clue will be gladly welcomed!
Thanks a lot in advance (even just for reading)
Bruno Wolff III wrote:
On Sat, Aug 27, 2005 at 18:19:54 +0530,
sunil arora <[EMAIL PROTECTED]> wrote:
Bruno,
thanks for the reply,
we did run vacuum on it, and we do it regularly to maintain its
performance, but it's not giving the expected results.
Did you do a VACUUM FULL or just a plain VACUUM?
Dann Corbit wrote:
-Original Message-
From: Ben-Nes Yonatan [mailto:[EMAIL PROTECTED]
Sent: Monday, August 22, 2005 3:28 PM
To: Jim C. Nasby; Sean Davis; Dann Corbit
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Query results caching?
On Mon, Aug 22, 2005 at 10:13:49PM
Jim C. Nasby wrote:
On Tue, Aug 23, 2005 at 12:27:39AM +0200, Ben-Nes Yonatan wrote:
Jim C. Nasby wrote:
Emptying the cache will not show real-life results. You are always going
to have some stuff cached, even if you get a query for something new. In
this case (since you'll obvi
On Mon, Aug 22, 2005 at 10:13:49PM +0200, Ben-Nes Yonatan wrote:
I think that I was misunderstood; I'll make an example:
Let's say I'm making the following query for the first time on the
"motorcycles" table, which has an index on the "manufacturer" field:
EX
On 22.08.2005 at 22:13:49 +0200, Ben-Nes Yonatan wrote the following:
I think that I was misunderstood; I'll make an example:
Okay:
Let's say I'm making the following query for the first time on the
"motorcycles" table, which has an index on the "manufacturer" field
Sean Davis wrote:
On 8/22/05 1:59 PM, "Dann Corbit" <[EMAIL PROTECTED]> wrote:
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Ben-Nes Yonatan
Sent: Monday, August 22, 2005 9:03 AM
To: pgsql-general@postgresql.org
S
delete that "caching" after every
query test that I run, because I want to see the real-time results for my
queries (it's for a search option for users, so it will vary a lot).
Is it possible to do it manually each time, or maybe only from the
configuration?
Thanks in advance,
Ben-Nes Yonatan
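The caching point being made in this thread can be reproduced with any memoized function: the first call pays the full cost, every repeat is served from cache, which is why timing the same query twice tells you little about cold-cache performance. A small stand-in illustration (not PostgreSQL's actual buffer cache, just the same effect):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lookup(manufacturer: str) -> str:
    # Stand-in for a disk-bound query; the real cost is paid only once.
    return f"rows for {manufacturer}"

lookup("honda")              # cold: does the work (a cache miss)
lookup("honda")              # warm: answered from the cache (a hit)
stats = lookup.cache_info()  # one miss, then one hit
```

The practical upshot from the thread stands: rather than trying to flush caches between runs, benchmark with the realistic mixed workload users will generate.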
that I tried) it
becomes extremely slow; what can I do to solve this problem?
Thanks in advance,
Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il
millions of rows) that it's making the process of deleting
the content way too slow, and I need to do it each day. Am I correct
in what I'm doing?
Thanks again,
Yonatan
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
If I query: DELETE FROM table1; it will just get stuck...
If I try: DELETE F
B server I was able to delete the
current row which stuck the process, but then I got stuck at some other
row in the table.
Thanks in advance,
Ben-Nes Yonatan
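A common answer to "a DELETE over huge tables is too slow" is to delete in bounded batches (or TRUNCATE when the whole table goes), so each transaction touches a limited number of rows and locks are released between rounds. The batching idea itself is just chunking; sketched here with a hypothetical id list standing in for rows:

```python
def batches(ids, size):
    """Yield fixed-size chunks so each DELETE touches a bounded row count."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

# Ten row ids deleted four at a time -> three DELETE rounds: 4 + 4 + 2.
chunks = list(batches(list(range(10)), 4))
```

Each chunk would then drive one `DELETE ... WHERE id = ANY(...)` round trip, with a commit in between.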
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
Richard Huxton wrote:
Can anyone tell me if Pl/PgSQL can support a multi dimensional array
(of up to 5 levels top I guess) with about 100,000 values?
and does it stress the system too much?
I can't imagine it being wonderful - you probably want a different approach.
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
Hi all,
Can anyone tell me if Pl/PgSQL can support a multi dimensional array
(of up to 5 levels top I guess) with about 100,000 values?
and does it stress the system too much?
I can't imagine it being wonderful - you probably want a different approach
Hi all,
Can anyone tell me if PL/pgSQL can support a multi-dimensional array (of
up to 5 levels top, I guess) with about 100,000 values?
And does it stress the system too much?
Thanks!
Ben-Nes Yonatan
Canaan Surfing ltd.
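For a sense of scale, a 5-level structure holding ~100,000 values is trivial to model outside the database; whether PL/pgSQL arrays handle it gracefully is the open question in the thread. A neutral sketch of such a structure using nested dictionaries (all key names invented for illustration):

```python
def insert(tree: dict, path: list, value) -> None:
    """Store a value under a multi-level key path using nested dicts."""
    for key in path[:-1]:
        tree = tree.setdefault(key, {})
    tree[path[-1]] = value

catalog: dict = {}
insert(catalog, ["region", "country", "city", "shop", "item"], 42)
```

If the hierarchy maps onto real entities, the usual relational answer is a table with one row per leaf and a column per level (plus indexes), rather than one giant in-memory array.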
if this process
will stress the server... and what can I do to let the server work on it
in a way that it won't disturb the rest of the processes.
Thanks alot again,
Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il
create "Temp table2" which will
contain all of the information of the "Temp table", plus a subquery will
retrieve the ID of the "Second table" for the foreign key - quite a heavy
process, I suspect.
F. INSERT the data from "Temp table2" into the "Main table"
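The subquery that fetches the foreign-key ID above amounts to a lookup join: for each staged row, find the matching id in the second table. In lookup-table terms (field names are hypothetical):

```python
def resolve_fk(staged_rows, second_table):
    """Attach the second table's id to each staged row by name lookup."""
    return [dict(row, second_id=second_table[row["name"]])
            for row in staged_rows]

ids = {"honda": 1, "yamaha": 2}           # stands in for the "Second table"
rows = resolve_fk([{"name": "honda", "price": 10}], ids)
```

In SQL the same thing is usually cheapest as a single `INSERT ... SELECT` with a join onto the second table, rather than a per-row subquery.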
> On Feb 4, 2005, at 8:34 AM, Ben-Nes Yonatan wrote:
>
>>> On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
>>>> Hi all,
>>>>
>>>> Does anyone know if PostgreSQL has a function which works like
>>>> load_file() of MySQL?
>On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
>> Hi all,
>>
>> Does anyone know if PostgreSQL has a function which works like
>> load_file() of MySQL?
>
> I am not quite sure what load_file() does, but check the COPY command
> and the analogous
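For context: MySQL's load_file() reads a server-side file and returns its contents as a string (subject to file privileges), while PostgreSQL's usual answer for bulk file ingestion is COPY, or psql's client-side \copy. A rough Python analogue of what load_file() returns, just to make the comparison concrete:

```python
import os
import tempfile

def load_file(path: str) -> bytes:
    """Return the file's raw contents, roughly like MySQL's load_file()."""
    with open(path, "rb") as f:
        return f.read()

# Demo on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
contents = load_file(tmp.name)
os.unlink(tmp.name)
```

The difference in spirit: load_file() yields one value inside a query, whereas COPY streams a whole file into (or out of) a table.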
Apache 1.3.26
Thanks in advance,
Ben-Nes Yonatan
seen='0'
Doesn't anyone know why?
With thanks in advance,
Ben-Nes Yonatan