the DB stores the date/timestamps changes. I mean, if
instead of being stored as MMDD it is stored as DDMM, would we
have to change all the queries? I thought the
to_char/to_date/to_timestamp functions were intended for this purpose.
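Something like this is what I have in mind (a minimal sketch; the stored values are made up):

```sql
-- Hypothetical text value holding a date code. to_date() parses it
-- according to a format mask, so if the stored layout changes from
-- MMDD to DDMM only the mask has to change, not the rest of the query:
SELECT to_date('0731', 'MMDD');
SELECT to_date('3107', 'DDMM');  -- same date, different stored layout
-- to_char() goes the other way and renders a date in any layout:
SELECT to_char(current_date, 'DDMM');
```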
--
Arnau
-
stamp) could be used, but I haven't been able to do
it. Any suggestion?
Thanks all
--
Arnau
---(end of broadcast)---
TIP 6: explain analyze is your friend
Hi Michael,
Michael Glaesemann wrote:
On Jul 6, 2007, at 9:42 , Arnau wrote:
I have the following scenario: I have users and groups, where a user
can belong to n groups and a group can have n users. A user must
belong to at least one group. So when I delete a group I must check that
there
ction is fired from the user interface of the application.
Do you have any idea about how I could improve the performance of this?
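A sketch of the kind of check I mean, assuming made-up tables user_groups(user_id, group_id) and groups(group_id):

```sql
-- Find users whose ONLY membership is in group 42, i.e. users who
-- would be left without any group if group 42 were deleted:
SELECT ug.user_id
  FROM user_groups ug
 WHERE ug.group_id = 42
   AND NOT EXISTS (
         SELECT 1 FROM user_groups other
          WHERE other.user_id = ug.user_id
            AND other.group_id <> 42);
-- If that returns no rows, the group can be removed safely:
DELETE FROM user_groups WHERE group_id = 42;
DELETE FROM groups WHERE group_id = 42;
```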
Thanks all
--
Arnau
re if this will be the maximum memory used by PostgreSQL or whether it
will take more memory in addition to this. Because if shared_buffers is
the maximum, I could raise that value even more.
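For reference, shared_buffers is only the shared cache, not a ceiling; each backend can allocate more on top of it (parameter names as in 8.x; on 7.4 work_mem was called sort_mem):

```sql
-- shared_buffers is shared memory only. Each backend may additionally
-- use up to work_mem per sort/hash step, and maintenance commands may
-- use maintenance_work_mem, so total usage can exceed shared_buffers.
SHOW shared_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;
```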
Cheers!
--
Arnau
Tom Lane wrote:
Arnau <[EMAIL PROTECTED]> writes:
Can you instead run things with one postmaster per machine and one
database per customer within that instance? From a performance
perspective this is likely to work much better.
What I meant is to have just one postmaster per serv
Hi Tom,
Arnau <[EMAIL PROTECTED]> writes:
I have an application that works with multiple customers. Thinking
about scalability, we are considering the following approaches:
- Create a separate database instance for each customer.
- We think that customer's DB wi
rmance would be worse.
I have been following the list, and one of the pieces of advice that
appears most often is to keep your DB in memory. So if I have just one
instance instead of "hundreds", will the performance be better?
Thank you very much
--
Arnau
---
1.357
rows=1587 loops=1)
Index Cond: ((epoch_in2 >= 1171321200::double precision) AND
(epoch_in2 <= 1171494000::double precision))
Total runtime: 57.065 ms
(3 rows)
As you can see the time differences are very big:
Timestamp: 318.328 ms
int8 index: 120.804 ms
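If I understood the comparison right, the epoch variant looks roughly like this (column name taken from the plan above, the table name is made up):

```sql
-- Index on the epoch column (double precision in the plan above):
CREATE INDEX idx_epoch_in2 ON my_table (epoch_in2);
-- Casting the literals to the column's exact type keeps the index
-- usable for the range condition:
SELECT count(*)
  FROM my_table
 WHERE epoch_in2 >= 1171321200::double precision
   AND epoch_in2 <= 1171494000::double precision;
```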
Hi Thor,
Thor-Michael Støre wrote:
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this
is how I can do in PostgreSQL to have tables where I can query
on them very often something like every few seconds and get
results very fast
Hi Ansgar,
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is how
I can do in PostgreSQL to have tables where I can query on them very
often something like every few seconds and get results very fast
without overloading the
Hi Josh,
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.
If you're
determine the maximum and minimum numbers of rows"
Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.
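One approach often suggested on this list: precompute the aggregates into a small summary table and refresh it from a scheduled job, so the frequent queries only hit the summary (all names invented):

```sql
-- Build the summary once:
CREATE TABLE stats_summary AS
  SELECT group_id, count(*) AS n FROM big_table GROUP BY group_id;
-- Refresh it periodically (e.g. from a scheduled job):
BEGIN;
DELETE FROM stats_summary;
INSERT INTO stats_summary
  SELECT group_id, count(*) FROM big_table GROUP BY group_id;
COMMIT;
-- The query that runs every few seconds is then a cheap lookup:
SELECT n FROM stats_summary WHERE group_id = 9;
```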
Hi Bill,
In response to Arnau <[EMAIL PROTECTED]>:
I have PostgreSQL 7.4.2 running on Debian and I have the oddest
PostgreSQL behaviour I've ever seen.
I do the following queries:
espsm_asme=# select customer_app_config_id, customer_app_config_name
from customer_app_c
es:
A lot of rules that I don't paste, for the sake of length.
Do you have any idea about how I can fix this?
--
Arnau
index definition? I
have checked the PostgreSQL documentation but I haven't been able to
find anything about it.
Thanks
--
Arnau
cron task that
creates these monthly elements (tables, rules, ...) automatically, or
whether there is another approach that doesn't require external tools
like cron, only PostgreSQL.
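A sketch of what such a monthly job might run (the names are pure guesswork; PostgreSQL itself has no built-in scheduler, so cron or pgAgent is the usual answer):

```sql
-- A cron job could invoke psql once a month with something like:
CREATE TABLE statistics_200707 () INHERITS (statistics);
-- plus whatever indexes and rules each monthly table needs, e.g.:
CREATE INDEX idx_statistics_200707_time ON statistics_200707 (time);
```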
--
Arnau
Hi all,
In a previous post, Ron Peacetree suggested checking how much work_mem
a query needs. How can that be done?
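One way I know of (the parameter is work_mem in 8.x, sort_mem in 7.4; the "Sort Method" line in the plan output only appears in newer versions):

```sql
-- Run the query under EXPLAIN ANALYZE and watch the sort/hash nodes:
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
-- In newer versions the plan reports e.g.
--   Sort Method: external merge  Disk: ...kB
-- when work_mem is too small. Raise work_mem and re-run until the
-- sort stays in memory ("Sort Method: quicksort  Memory: ...kB");
-- that is roughly what the query needs.
```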
Thanks all
--
Arnau
How can I know how much work_mem a query needs?
Regards
--
Arnau
, like how many rows have been deleted per date. I was thinking of
creating a function; any recommendations?
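A rough sketch of the kind of function I mean (all names made up; the $$ quoting needs 8.0 or later):

```sql
CREATE TABLE deletion_stats (
    del_date     date PRIMARY KEY,
    rows_deleted bigint NOT NULL DEFAULT 0
);

-- Call this after each purge with the number of rows just deleted:
CREATE OR REPLACE FUNCTION record_deletions(p_n bigint) RETURNS void AS $$
BEGIN
    UPDATE deletion_stats
       SET rows_deleted = rows_deleted + p_n
     WHERE del_date = current_date;
    IF NOT FOUND THEN
        INSERT INTO deletion_stats (del_date, rows_deleted)
        VALUES (current_date, p_n);
    END IF;
END;
$$ LANGUAGE plpgsql;
```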
Thank you very much
--
Arnau
eric)
-> Index Scan using pk_agenda_uid on agenda_users u
(cost=0.00..3.09 rows=1 width=78) (actual time=2.262..2.264 rows=1
loops=150)
Index Cond: ("outer".user_id = u.user_id)
Total runtime: 76853.504 ms
(16 rows)
Do you think I could do anything t
=0.00..2722.33
rows=400379 width=0) (actual time=151.298..151.298 rows=367026 loops=1)
Index Cond: (group_id = 9::numeric)
Total runtime: 1527.039 ms
(5 rows)
Thanks
--
Arnau
Tom Lane wrote:
Arnau <[EMAIL PROTECTED]> writes:
Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675
width=8) (actual time=916.903..5763.830 rows=367026 loops=1)
Filter: (group_id = 9::numeric)
Total runtime: 7259.861 ms
(3 rows)
espsm_moviltelevision=# select
chris smith wrote:
On 4/25/06, Arnau <[EMAIL PROTECTED]> wrote:
Hi all,
I have the following running on postgresql version 7.4.2:
CREATE SEQUENCE agenda_user_group_id_seq
MINVALUE 1
MAXVALUE 9223372036854775807
CYCLE
INCREMENT 1
START 1;
CREATE TABLE AGENDA_USERS_
it does a sequential scan and doesn't use the index, and I don't
understand why. Any idea? I have the same in PostgreSQL 8.1 and there
it uses the index :-|
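Two things worth checking (the explicit cast matters mostly before 8.0, which added cross-type index comparisons):

```sql
-- 1. Make the literal's type match the numeric column explicitly:
SELECT count(*) FROM agenda_users_groups WHERE group_id = 9::numeric;
-- 2. Check whether the seq scan is actually the cheaper plan: with
--    ~367k matching rows it may well be. As a diagnostic only,
--    compare the index plan's real cost:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT count(*) FROM agenda_users_groups WHERE group_id = 9::numeric;
SET enable_seqscan = on;
```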
Thanks
--
Arnau
is even higher than 60%.
I know it's a problem with a very big scope, but could you give me a
hint about where I should look?
Thank you very much
--
Arnau
in one go. This minimizes the number
of round trips between the client and the server.
Thanks Teemu! Could you paste an example of one of those functions? ;-)
An example of those SELECTs would also be great; I'm not sure I have
completely understood what you mean.
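If I guess right, something along these lines, with all names invented, would bundle several statements into one round trip:

```sql
-- One function call replaces several client/server round trips:
CREATE OR REPLACE FUNCTION process_message(p_user integer, p_body text)
RETURNS void AS $$
BEGIN
    INSERT INTO messages (user_id, body) VALUES (p_user, p_body);
    UPDATE user_stats SET msg_count = msg_count + 1
     WHERE user_id = p_user;
END;
$$ LANGUAGE plpgsql;

-- The client then issues a single statement:
SELECT process_message(42, 'hello');
```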
--
Hi all,
What is the best way to import data into tables? I have to import
9 rows into a column and doing it as INSERTs takes ages. Would it be
faster with COPY? Is there any other alternative to INSERT/COPY?
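For bulk loads COPY is usually much faster than individual INSERTs; a minimal sketch (file path and table are made up):

```sql
-- Server-side COPY reads the file directly on the database host:
COPY my_table (col1, col2) FROM '/tmp/data.txt' WITH DELIMITER ',';
-- From a client, psql's \copy does the same over the connection:
--   \copy my_table FROM 'data.txt' WITH DELIMITER ','
-- If COPY isn't an option, wrapping many INSERTs in one transaction
-- already avoids a commit per row.
```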
Cheers!
e a similar tool. Are any
of you using anything like that? All kinds of hints are welcome :-)
Cheers!
--
Arnau
Hi all,
>
> COPY FROM a file with all the ID's to delete, into a temporary
table, and do a joined delete to your main table (thus, only one query).
I already did this, but I have no idea how to do this join,
could you give me a hint ;-) ?
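A sketch of the temp-table approach (names made up; the DELETE ... USING form needs 8.1 or later, the IN form works everywhere):

```sql
-- Load the ids to delete into a temporary table with one COPY...
CREATE TEMP TABLE ids_to_delete (id integer);
COPY ids_to_delete FROM '/tmp/ids.txt';
-- ...then delete them from the main table in a single statement:
DELETE FROM main_table
 WHERE id IN (SELECT id FROM ids_to_delete);
-- Or, on 8.1 and later, as a joined delete:
-- DELETE FROM main_table USING ids_to_delete d
--  WHERE main_table.id = d.id;
```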
Thank you ve
Hi all,
I have the following table:
espsm_asme=# \d statistics_sasme
Table "public.statistics_sasme"
Column | Type |
Modifiers
--+--+-
Seq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual
time=0.41..688.34 rows=27867 loops=1)
Total runtime: 730.82 msec
That query is not using the index. Does anybody know what I'm doing wrong?
Thank you very much
--
Arnau
here time::date < current_date - interval
'1 month';
As the number of rows grows, the time needed to execute this query gets
longer. What should I do to improve the performance of this query?
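Two ideas, assuming time is a plain timestamp column (table name made up): an expression index matching the query's predicate, or monthly tables that can simply be dropped:

```sql
-- An index on the same expression the query uses helps the planner
-- find the old rows without scanning the whole table:
CREATE INDEX idx_stats_time_date ON my_table ((time::date));
DELETE FROM my_table
 WHERE time::date < current_date - interval '1 month';
-- For very large purges, keeping one table per month and DROPping the
-- old one is far cheaper than deleting row by row.
```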
Thank you very much
--
Arnau