The table was vacuum analyzed during the tests.
total number of records in table: 93
-------------
Regds
Rajesh Kumar Mallah.
On 9/28/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > Hi
> >
> > While doing some stress testing for updates in a small sized table
> > we found the following results. We are not too happy about the speed
>
On 9/29/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > > > Number of Copies | Updates per Sec
> > > >
> > > > 1 --> 119
> > > > 2 --> 59
> > > > 3 --> 3
On 12/5/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Jean Arnaud <[EMAIL PROTECTED]> writes:
> Is there a relation between database size and PostgreSQL restart
> duration ?
No.
> Does anyone know the behavior of restart time ?
It depends on how many updates were applied since the last checkpoint
before
On 12/6/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Rajesh Kumar Mallah" <[EMAIL PROTECTED]> writes:
> Startup time of a clean shutdown database is constant. But we still
> face problems when it comes to shutting down. PostgreSQL waits
> for clients to finish gracefully
On 12/6/06, asif ali <[EMAIL PROTECTED]> wrote:
Hi,
I have a "product" table having 350 records. It takes approx 1.8 seconds to
get all records from this table. I copied this table to a "product_temp"
table and ran the same query to select all records; it took 10 ms (much
faster).
I did "VACU
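A 350-row table taking 1.8 seconds almost certainly means heavy bloat from dead row versions. A minimal diagnostic/repair sketch (the pg_class columns are standard; only the table names come from the message above):

-- a bloated 350-row table will show a disproportionately large relpages
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('product', 'product_temp');

-- plain VACUUM only marks dead space reusable;
-- VACUUM FULL compacts the table itself
VACUUM FULL VERBOSE product;
ANALYZE product;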
We have a view in our database.
CREATE view public.hogs AS
SELECT pg_stat_activity.procpid, pg_stat_activity.usename,
pg_stat_activity.current_query
FROM ONLY pg_stat_activity;
Selecting current_query from public.hogs helps us to spot errant queries
at times.
regds
mallah.
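For reference, a typical query against the view above (the '<IDLE>' marker is how pre-9.2 servers report an idle backend; adjust if your version differs):

-- show what every non-idle backend is currently executing
SELECT procpid, usename, current_query
FROM public.hogs
WHERE current_query <> '<IDLE>'
ORDER BY procpid;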
On 12/7/06, asif
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Hello,
How do I get the PostgreSQL threshold value ? Are any commands available ?
What is meant by threshold value ?
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Thanks.
I am using Postgres 8.1.4 on Windows 2000 and i don't get the proper
response for threshold.
What is the response you get ? Please be specific about the issues.
Also, the footer that comes with your emails is
not appr
So, my questions:
Is it possible to use COPY FROM STDIN with JDBC?
Should be. It's at least possible using DBI and DBD::Pg (Perl):
my $copy_sth = $dbh->prepare( "COPY
general.datamining_mailing_lists (query_id,email_key) FROM STDIN;" );
$copy_sth->execute();
while ( my ($email_key) = $fetch_sth->fetchrow_array() ) {  # $fetch_sth: the SELECT handle (name truncated in the original post)
    $dbh->pg_putcopydata("$query_id\t$email_key\n");  # one tab-separated COPY line per row; $query_id assumed set earlier
}
$dbh->pg_putcopyend();  # finish the COPY
On 12/13/06, Steven Flatt <[EMAIL PROTECTED]> wrote:
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table
1. how frequently are you committing ?
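(The point behind question 1: without an explicit transaction every INSERT commits, and therefore fsyncs, on its own. A minimal sketch of batching, with hypothetical table and column names:

BEGIN;
INSERT INTO my_temp_table (id, payload) VALUES (1, 'a');
INSERT INTO my_temp_table (id, payload) VALUES (2, 'b');
-- ... thousands more single-row inserts ...
COMMIT;  -- one commit, one fsync, for the whole batch instead of one per row
)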
[offtopic];
hmm, quite a long thread; below are the stats of the postings:
Total Messages: 87, Total Participants: 27
-
19 Daniel van Ham Colchete
12 Michael Stone
9 Ron
5 Steinar H. Gunderson
5 Alexander Staubo
4 Tom Lane
4 Greg
hi,
this is not really postgresql specific, but any help is appreciated.
i have read that more spindles are better for IO performance.
suppose i have 8 drives : should a stripe (raid0) be created over
2 mirrors (raid1) of 4 drives each, OR should a stripe over 4 mirrors
of 2 drives each be created ?
you can lose up to half of the disks and still be
operational. In the mirror of stripes, the most you could lose is two
drives. The performance of the two should be similar - perhaps the seek
performance would be different for high concurrent use in PG.
- Luke
On 5/29/07 2:14 PM, "Raj
Sorry for posting and disappearing.
i am still not clear about the best way of throwing more
disks into the system.
do more stripes mean more performance (mostly) ?
also, is there any rule of thumb about the best stripe size ? (8k,16k,32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED] <[EMAIL
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what is the best way of throwing in more
> disks into the system.
> does more stripes means more performance (mostly) ?
> also
PasteBin for the vmstat output
http://pastebin.com/mpHCW9gt
On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
wrote:
> Dear List ,
>
> I observe that my postgresql (ver 8.4.2) dedicated server has turned cpu
> bound and there is a high load average in the server > 50 usuall
On 6/23/10, Kevin Grittner wrote:
> Rajesh Kumar Mallah wrote:
>> PasteBin for the vmstat output
>> http://pastebin.com/mpHCW9gt
>>
>> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
>> wrote:
>>> Dear List ,
>>>
>>> I observe th
riable class names
general.report_level = ''
general.disable_audittrail2 = ''
general.employee=''
Also i would like to apologize that some of the discussion of this problem
inadvertently became private between me & Kevin.
On Thu, Jun 24, 2010 at 12:10 AM, Rajes
und and 90% of syscalls being
lseek(XXX, 0, SEEK_END) = YYY
>
> Rajesh Kumar Mallah wrote:
>
>> 3. we use xfs and our controller has BBU , we changed barriers=1
>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync
>> as the sync method, the
010 at 10:55 PM, Rajesh Kumar Mallah
wrote:
> On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner
> wrote:
>> I'm not clear whether you still have a problem, or whether the
>> changes you mention solved your issues. I'll comment on potential
>> issues that leap out a
A scary phenomenon is being exhibited by the server: it is suddenly
slurping all the swap. Some of the relevant sar -r output:
10:30:01 AM kbmemfree kbmemused %memused kbbuffers kbcached
kbswpfree kbswpused %swpused kbswpcad
10:40:01 AM  979068  31892208  97.02
g business hours.
Warm Regds
Rajesh Kumar Mallah.
On Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>>
>> A scary phenomenon is being exhibited by the server , which is the server
>> is slurping all the swap suddenly
>> 8 1 4192912 9
I changed shared_buffers from 10G to 4G ;
swap usage has almost become nil.
# free
             total       used       free     shared    buffers     cached
Mem:      32871276   24575824    8295452          0      11064   22167324
-/+ buffers/cache:    2397436   30473840
Swap:      4192912
Dear List,
pgtune suggests the following:
(current values are in braces with the reason) ; (*) indicates a significant
difference from the current value.
default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100
via default)
(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB v
Dear Craig,
also check for the possibility of installing sysstat on your system;
it goes a long way in collecting the system stats. you may
consider increasing the frequency of data collection by
changing the interval of the cron job manually in /etc/cron.d/
normally its */10 , you may make it */2 for
commit nor rollback.
On 6/25/10, Tom Molesworth wrote:
> On 25/06/10 16:59, Rajesh Kumar Mallah wrote:
>> when i reduce max_connections i start getting errors, i will see again
>> concurrent connections
>> during business hours. lot of our connections are in > transactio
Dear Greg/Kevin/List ,
Many thanks for the comments regarding the params. I am however able to
change and
experiment on production in a certain time window ; when that arrives i
shall post
my observations.
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B eMarketPlace.
Dear List,
Today has been good since morning. Although it is a lean day
for us, the indications are nice. I thank everyone who shared
the concern. I think the most significant change has been to reduce
shared_buffers from 10G to 4G ; this has led to reduced memory
usage and some breathing spa
Regds
mallah.
On Sat, Jun 26, 2010 at 3:23 PM, Rajesh Kumar Mallah <
mallah.raj...@gmail.com> wrote:
> Dear List,
>
> Today has been good since morning. Although it is a lean day
> for us but the indications are nice. I thank everyone who shared
> the concern. I think
Dear List,
just removing the "order by co_name" reduces the query time dramatically,
from ~ 9 sec to 63 ms. Can anyone please help ?
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> just by removing the order by co_name reduces the query time dramatically
>> from ~ 9 sec to 63 ms. Can anyone please help.
>>
> The 63 ms query result
Dear Tom/Kevin/List
thanks for the insight, i will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
> The way to make this go faster is to set up the actually recommended
> infrastructure for full text search, namely create an index on
> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector
> column). If you don't want to maintain such an index, fine, but don't
> expect full text
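A rough sketch of the index Tom describes (the table name is hypothetical; GIN needs 8.2+, older releases can use GiST):

-- expression index so the planner can match
-- WHERE (co_name_vec)::tsvector @@ to_tsquery('...')
CREATE INDEX co_name_vec_tsv_idx
    ON profile_master USING gin (((co_name_vec)::tsvector));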
analysis a trivial problem. We want the subsequent runs
of the query to take times similar to the first run so that we can work
on optimizing the calling patterns to the database.
regds
Rajesh Kumar Mallah.
the i/o bandwidth . I think you should check exactly when
the max cpu utilisation
is taking place.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes wrote:
> Hello,
>
> When I run an SQL to create new tables and indexes is when Postgres
> consumes
Dear Sri,
Please post at least the EXPLAIN ANALYZE output . There is also a nice
posting guideline
on how to post query optimization questions:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
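(i.e. something along these lines, run on the slow query and posted untruncated together with the table definitions:

EXPLAIN ANALYZE
SELECT * FROM your_table WHERE some_col = 42;  -- hypothetical query
)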
On Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata wrote:
>
> Please tell me What is the best
On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer
wrote:
> On 01/07/10 17:41, Rajesh Kumar Mallah wrote:
> > Hi,
> >
> > this is not really a performance question , sorry if its bit irrelevant
> > to be posted here. We have a development environment and we want
> > t
profiling requires multiple
iterations, it is not feasible to reboot the machine. I think i will try to
profile
my code using new and unique input parameters in each iteration; this shall
roughly serve my purpose.
On Fri, Jul 2, 2010 at 8:30 AM, Craig Ringer wrote:
> On 02/07/10 01:59, Rajesh Ku
about how much data you are loading ? row count or
GB of data etc
2. how many indexes are you creating ?
regds
Rajesh Kumar Mallah.
rious
why
in spite of 0 clients waiting pgbouncer introduces a drop in tps.
Warm Regds
Rajesh Kumar Mallah.
CTO - tradeindia.com.
Keywords: pgbouncer performance
On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner wrote:
> Craig Ringer wrote:
>
> > So rather than asking "
note: my postgresql server & pgbouncer were not in a virtualised environment
in the first setup. Only the application server has many openvz containers.
Nice suggestion to try ;
I will put pgbouncer on raw hardware and run pgbench from the same hardware.
regds
rajesh kumar mallah.
> Why in VM (openvz container) ?
>
> Did you also try it in the same OS as your appserver ?
>
> Perhaps even connecting from appserver via unix sockets
i get less performance
(even if no clients are waiting);
without pooling the dbserver CPU usage increases but performance of apps
also becomes good.
Regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was 10 ( -c 10) carrying out 1 transactions each
>> (-t 1) .
>> pgbench db was initilised with scaling factor -s 100.
>>
>> since client co
Looks like
pgbench cannot be used for testing with pgbouncer if the number of
pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer:
pgbench keeps waiting, doing nothing. I am using the pgbench of postgresql 8.1.
Have there been changes to pgbench in this respect ?
regds
Rajesh Kumar Mallah.
On
Thanks for the thought, but it (-C) does not work.
>
>
> BTW, I think you should use -C option with pgbench for this kind of
> testing. -C establishes connection for each transaction, which is
> pretty much similar to the real world application which do not use
> connection pooling. You will be s
applicable to your case.
Regds
Rajesh Kumar Mallah
On 4/3/06, Kenji Morishige <[EMAIL PROTECTED]> wrote:
> I am using postgresql to be the central database for a variety of tools for
> our testing infrastructure. We have web tools and CLI tools that require
> access
> to machine
On 4/9/06, Chethana, Rao (IE10) <[EMAIL PROTECTED]> wrote:
> Hello!
> Kindly go through the following:
> I wanted to know whether the command line arguments (function
> arguments) -- $1 $2 $3 -- can be
> used as in the following, like ---
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1. I'd run
pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and
gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql into the 8.1 database seems to take
sorry for the post , i didn't see the other replies until after posting.
On 4/10/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]
> wrote:
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]
> wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1. I'd run pg_dump | gzip >
what is the query ? use LIMIT or a restricting where clause.
regds
mallah.
On 4/10/06, soni de <[EMAIL PROTECTED]> wrote:
Hello,
I have difficulty in fetching the records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than 1 hour and that's w
4. fsync can also be turned off while loading a huge dataset , but seek others' comments too (and study the docs) as i am not sure about the reliability. i think it can make a lot of difference.
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Rajesh Kumar Mallah wrote:>> I'd r
On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:
>
>
>
> Hi,
>
>
>
> I'm trying to evaluate PostgreSQL as a database that will have to store a
> high volume of data and access that data frequently. One of the features on
> our wish list is to be able to use stored procedures to access the data and
Greetings,
Is there any performance penalty of having too many columns in
a table, in terms of read and write speeds ?
In order to keep operational queries simple (avoid joins) we plan to
add columns to the main customer dimension table.
Adding more columns also means an increase in concurrency in the
Shea,Dan [CIS] wrote:
The index is
Indexes:
"forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
-Original Message-
From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
Sent: Monday, April 12, 2004 10:39 AM
To: Postgres Performance
Subject: [PERFORM] Deleting certain duplicates
Richard Huxton wrote:
On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
SELECT count(*) from
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages.
VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
Regds
mallah.
postgresql.conf
--
max_fsm_pages = 55099264 # min max_fsm_rela
:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
SELECT count(*) from eyp_rfi;
If this is the actual query you're runn
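For what it's worth: in these releases count(*) always scans the whole heap, so the usual list advice for a frequently needed total is a trigger-maintained counter. A rough sketch (names hypothetical; dollar quoting needs 8.0+, and the single counter row serializes concurrent writers):

CREATE TABLE eyp_rfi_count AS SELECT count(*) AS n FROM eyp_rfi;

CREATE FUNCTION eyp_rfi_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE eyp_rfi_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE eyp_rfi_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER eyp_rfi_count_t AFTER INSERT OR DELETE ON eyp_rfi
    FOR EACH ROW EXECUTE PROCEDURE eyp_rfi_count_trig();

-- cheap replacement for SELECT count(*) FROM eyp_rfi
SELECT n FROM eyp_rfi_count;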
The relation size for this table is 1.7 GB
tradein_clients=# SELECT public.relation_size ('general.rfis');
+---------------+
| relation_size |
+---------------+
| 1,762,639,872 |
+---------------+
(1 row)
Regds
mallah.
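(On 8.1 and later the same figure is available without a custom function:

SELECT pg_size_pretty(pg_relation_size('general.rfis'));
)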
Rajesh Kumar Mallah wrote:
The problem is that
, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid > 0;
Time: 117560.635 ms
which is approximately 4804 records
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid >
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the table.
The query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts
.8 0:00 postmaster
Richard Huxton wrote:
On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 second
Have you checked Tsearch2 ?
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
It is the most feature-rich full text search system available
for postgresql. We are also using the same system in
the revamped version of our website.
Regds
Mallah.
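The usual Tsearch2 setup, as a rough sketch (table and column names hypothetical; see the Tsearch2 docs for the trigger that keeps the column current):

-- auxiliary tsvector column plus a GiST index on it
ALTER TABLE articles ADD COLUMN body_tsv tsvector;
UPDATE articles SET body_tsv = to_tsvector(body);
CREATE INDEX articles_body_tsv_idx ON articles USING gist (body_tsv);

-- match with the @@ operator
SELECT * FROM articles WHERE body_tsv @@ to_tsquery('full & text & search');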
Mark Stosberg wrote:
Hello,
I work for Summersault
V i s h a l Kashyap @ [Sai Hertz And Control Systems] wrote:
Dear all,
Has anyone compiled PostgreSQL with kernel 2.6.x ? If YES:
1. Were there any performance gains ?
Else:
1. Is it possible ?
2. What problems would keep us from compiling on kernel 2.6 ?
We run pgsql on 2.6.6; there was up to 30% impr
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 15K SAS drives +
300 GB EMC SATA SAN STORAGE,
I seek suggestions from users sharing their experience with
similar hardware if any. I have following specific concerns.
1. On list i read that RAID10 function in PERC5 is not really
strip
> Are there any reasonable choices for bigger (3+ shelf) direct-connected
> RAID10 arrays, or are hideously expensive SANs the only option? I've
> checked out the latest Areca controllers, but the manual available on
> their website states there's a limitation of 32 disks in an ar
Hi ,
I have a query in which two huge tables (A,B) are joined using an indexed
column and a search is made on a tsvector column of B. Very few
rows of B are expected to match the query on the tsvector column.
With default planner settings the query takes too long ( > 100 secs) , but
with h
Index Cond: (trade_leads.profile_id = pm.profile_id)
Total runtime: 55.333 ms
(11 rows)
SELECT SUM(1) FROM general.trade_leads WHERE status = 'm';
sum
127371
this constitutes 90% of the total rows.
regds
mallah.
On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrot
> Can't use an undefined value as an ARRAY reference at
> /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.
>
> Can someone please give inputs to resolve this issue? Any help on this will
> be appreciated.
519 sub transactions {
520 my $self = shift;
521 return @{$self->{data}->
On Tue, Feb 10, 2009 at 9:09 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrote:
>>> I'm guessing that the problem is that the selectivity estimate for
>>> co_name_vec @@ to_tsquery('plastic&tubes
r_uid) CLUSTER
"rfis_part_2009_01_sender_uid" btree (sender_uid)
Check constraints:
"rfis_part_2009_01_generated_date_check" CHECK (generated_date >=
3289 AND generated_date <= 3319)
"rfis_part_2009_01_rfi_id_check" CHECK (rfi_id >= 12344252 AND
rfi_id <= 126
thanks for the hint,
now the peak hour is over and the same scan is taking 71 ms in place of 8 ms
and the total query time is also acceptable. But it is surprising that
the scan was
taking so long consistently at that point of time. I shall test again
under similar
circumstances tomorrow.
Is i
eiver_uid = 1320721)
Filter: (generated_date >= 2251)
Total runtime: 0.082 ms
(5 rows)
tradein_clients=>
On Wed, Feb 11, 2009 at 6:07 PM, Rajesh Kumar Mallah
wrote:
> thanks for the hint,
>
> now the peak hour is over and the same scan is taking 71 ms in place of 8
> ms
Hi,
Is it possible to configure autovacuum to run only
during certain hours ? We are forced to keep
it off because it pops up during the peak
query hours.
Regds
rajesh kumar mallah.
On Wed, Feb 11, 2009 at 7:11 PM, Guillaume Cottenceau wrote:
> Rajesh Kumar Mallah writes:
>
>> Hi,
>>
>> Is it possible to configure autovacuum to run only
>> during certain hours ? We are forced to keep
>> it off because it pops up during the peak
>> q
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz wrote:
> On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
> wrote:
>
>>> vacuum_cost_delay = 150
>>> vacuum_cost_page_hit = 1
>>> vacuum_cost_page_miss = 10
>>> vacuum_cost
On Wed, Feb 11, 2009 at 11:30 PM, Brad Nicholson
wrote:
> On Wed, 2009-02-11 at 22:57 +0530, Rajesh Kumar Mallah wrote:
>> On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz
>> wrote:
>> > On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
>> > wrote:
>
I have received a Dell Poweredge 2950 MIII with 2 kinds of
drives. I can't make out the reason behind it ; does it
make any difference in the long run or in performance ?
The drives are similar in overall characteristics, but will
the minor differences cause any problem ?
scsi0 : LSI Logic SAS based
It's nice to know the evolution of autovacuum, and i understand that
the suggestion/requirement of "autovacuum at lean hours only"
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne wrote:
> mallah.raj...@gmail.com (Rajesh
BTW
our machine got built with 8 15k drives in raid10 ;
from the bonnie++ results it looks like the machine is
able to do 400 Mbytes/s seq write and 550 Mbytes/s
read. the BB cache is enabled with 256MB.
sda6 --> xfs with default formatting options.
sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /
The URL of the result is
http://98.129.214.99/bonnie/report.html
(sorry if this was a repost)
On Tue, Feb 17, 2009 at 2:04 AM, Rajesh Kumar Mallah
wrote:
> BTW
>
> our Machine got build with 8 15k drives in raid10 ,
> from bonnie++ results its looks like the machine is
>
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
> On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
>>
>> sda6 --> xfs with default formatting options.
>> sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
>> sda8 --> ext3 (default)
>>
>
ad-performance-single-command
>
> ____
> From: pgsql-performance-ow...@postgresql.org
> [pgsql-performance-ow...@postgresql.org] On Behalf Of Rajesh Kumar Mallah
> [mallah.raj...@gmail.com]
> Sent: Tuesday, February 17, 2009 5:25 AM
> To:
Detailed bonnie++ figures.
http://98.129.214.99/bonnie/report.html
On Wed, Feb 18, 2009 at 1:22 PM, Rajesh Kumar Mallah
wrote:
> the raid10 voulme was benchmarked again
> taking in consideration above points
>
> # fdisk -l /dev/sda
> Disk /dev/sda: 290.9 GB, 290984034304 bytes
>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
SEQUENTIAL
>> xfs_ra0      414741,  66144
>> xfs_ra256    403647, 545026    (all tests on sda6)
>> xfs_ra512    411357, 564769
>> xfs_ra1024   404392, 431168
>>
>> looks like 512
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
> have you tried hanging bunch of raid1 to linux's md, and let it do
> raid0 for you ?
Hmmm , i will have only 3 bunches in that case, as the system has to boot
from the first bunch
and the system has only 8 drives. i think reducing spindles will red
There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
i shall redo the tests and post results.
Databases are usually IO bound ; vmstat results can confirm individual
cases and setups.
In case the server is IO bound, the entry point should be setting up
properly performing
IO. RAID10 helps to a great extent in improving IO bandwidth by
parallelizing the IO operations;
the more spindles the better. Al
Hi All,
data_bank.updated_profiles and public.city_master are small tables
with 21790 and 49303 records respectively. Both have indexes on the join
column: in the first one on (city,source) and in the second one on (city).
The query below does not return for long durations, > 10 mins.
explain analyze s
Hi ,
I have a view which is a union of selects of certain fields from
identical tables. The problem is that when we query a column on
which an index exists for each of the tables, the view does not use the
indexes.
But when we query the individual tables, the indexes are used.
Regds
Mallah.
tradein_clients=# crea
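One classic thing to check here (a hedged sketch with hypothetical names): a plain UNION forces a duplicate-eliminating step that old planners cannot push index conditions through, while the arms of a UNION ALL can each use their own index, so if the member tables are disjoint the view should say UNION ALL:

CREATE VIEW all_rfis AS
    SELECT rfi_id, sender_uid FROM rfis_2003_a
    UNION ALL                 -- not UNION: no dedup node in the way
    SELECT rfi_id, sender_uid FROM rfis_2003_b;

SELECT * FROM all_rfis WHERE sender_uid = 12345;  -- can use each table's index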
Hi,
For each company_id in a certain table i have to search the same table,
get certain rows, sort them and pick the top one. i tried using this
subselect:
explain analyze SELECT company_id , (SELECT edition FROM ONLY
public.branding_master b WHERE old_company_id = a.company_id OR company_id =
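For reference, the standard rewrite of such an OR into two index-friendly scans (a sketch; 123 stands in for a.company_id, and it would need to be wrapped back into the outer query or a function):

SELECT edition
FROM (
    SELECT edition, company_id FROM ONLY public.branding_master
     WHERE old_company_id = 123
    UNION ALL
    SELECT edition, company_id FROM ONLY public.branding_master
     WHERE company_id = 123
) AS candidates
ORDER BY company_id DESC
LIMIT 1;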
Tom Lane wrote:
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
explain analyze SELECT company_id , (SELECT edition FROM ONLY
public.branding_master b WHERE old_company_id = a.company_id OR company_id =
a.company_id ORDER BY b.company_id DESC LIMIT 1
On Wednesday 30 Jul 2003 3:02 am, Tom Lane wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Odd. Apparently the planner is picking a better plan in the function
> >> context than in the subselect context --- which is strange since
Tom Lane wrote:
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
What led to the degradation was bumping the
effective_cache_size parameter from 1000 to 64K
Check the plan then; AFAIR the only possible effect of changing
effective_cache_size is to influence which plan the planner
Stephan Szabo wrote:
On Thu, 31 Jul 2003, Christopher Browne wrote:
select * from log_table where request_time between 'june 11 2003' and
'june 12 2003';
returns a plan:
Subquery Scan log_table (cost=0.00..10950.26 rows=177126 width=314)
->
On Thursday 30 Oct 2003 4:53 am, you wrote:
> <[EMAIL PROTECTED]> writes:
> > Actually PostgreSQL is at par with MySQL when the query is being
> > Properly Written(simplified)
>
> These are not the same query, though. Your original looks like
Yes, that was a hasty optimisation; the simplifica
Dear Tom,
Can you please have a look at the below and suggest why it apparently puts
7.3.4 into an infinite loop ? the CPU utilisation of the backend running it
approaches 99%.
Query:
I have tried my best to indent it :)
SELECT DISTINCT main.* FROM
(
(
(
(
Tickets
'open'::text)))
-> Index Scan using groups1 on groups groups_1 (cost=0.00..5.90 rows=1 width=12)
Index Cond: (((groups_1."domain")::text = 'RT::Ticket-Role'::text) AND (("outer".id)::text = (groups_1.ins