Hi,
On Monday 08 November 2010 23:12:57 Greg Smith wrote:
> This seems to be ignoring the fact that unless you either added a
> non-volatile cache or specifically turned off all write caching on your
> drives, the results of all power-fail testing done on earlier versions
> of Linux were that it
Scott Carey wrote:
In my opinion, the burden of proof lies with those contending that the default
value should _change_ from fdatasync to O_DSYNC on Linux. If the default
changes, all power-fail testing and other reliability tests done previously on a
hardware configuration may become invalid with
Use a replicated setup?
On Nov 8, 2010 4:21 PM, "Lello, Nick" wrote:
How about either:-
a) Size the pool so all your data fits into it.
b) Use a RAM-based filesystem (i.e. a memory disk or SSD) for the
data storage [memory disk will be faster] with a smaller pool
- Your seed data should be
"Lello, Nick" writes:
> A bigger gain can probably be had if you have a tightly controlled
> suite of queries that will be run against the database and you can
> spend the time to tune each to ensure it performs no sequential scans
> (ie: Every query uses index lookups).
Given a fixed pool of queries
On Mon, Nov 8, 2010 at 1:16 AM, shaiju.ck wrote:
> [...] I have increased shared_buffers to 1024MB, but no
> improvement. I have noticed that the query "show shared_buffers" always shows
> 8MB. Why is this? Does it mean that changing shared_buffers in the config
> file has no impact? Can anybody
The table has 200 records now.
Select * from employee takes 15 seconds to fetch the data!!!
Which seems to be very slow.
But when I say select id,name from employee it executes in 30ms.
30 ms is also amazingly slow for so few records and so little data.
- please provide the results of "EXPLAIN ANALYZE
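A minimal sketch of what is being asked for, assuming the table name from the report:

    EXPLAIN ANALYZE SELECT * FROM employee;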
Scott Carey writes:
> No matter how you slice it, the default on Linux is implicitly changing and
> the choice is to either:
> * Return the default to fdatasync
> * Let it implicitly change to O_DSYNC
> The latter choice is the one that requires testing to prove that it is the
> proper and pr
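Whichever way that argument is settled, the method actually in effect can be checked from SQL, and pinning it in postgresql.conf avoids depending on the platform default; a minimal sketch (fdatasync shown only as an example value):

    SHOW wal_sync_method;
    -- To pin it regardless of the build's default, set in postgresql.conf:
    --   wal_sync_method = fdatasync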
On Nov 7, 2010, at 6:35 PM, Marti Raudsepp wrote:
> On Mon, Nov 8, 2010 at 01:35, Greg Smith wrote:
>> Yes; it's supposed to, and that logic works fine on some other platforms.
>
> No, the logic was broken to begin with. Linux technically supported
> O_DSYNC all along. PostgreSQL used fdatasync
Thomas Kellerer wrote:
> Kevin Grittner, 08.11.2010 18:01:
>> I would add a request to see the output from `VACUUM VERBOSE
>> employee;`.
> Do you really think that VACUUM is the problem? If the OP only
> selects two columns it is apparently fast.
> If he selects all columns it's slow, so I wo
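One quick way to test that suspicion is to compare the table's heap size with its total size including TOAST data; a sketch using the standard size functions and the table name from the report:

    SELECT pg_size_pretty(pg_relation_size('employee'))       AS heap_size,
           pg_size_pretty(pg_total_relation_size('employee')) AS total_size;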
Kevin Grittner, 08.11.2010 18:01:
"shaiju.ck" wrote:
The table has 200 records now.
Select * from employee takes 15 seconds to fetch the data!!!
Which seems to be very slow.
But when I say select id,name from employee it executes in 30ms.
Same performance if I say select count(*) from employee.
"shaiju.ck" wrote:
> I have increased shared_buffers to 1024MB, but no improvement.
> I have noticed that the query "show shared_buffers" always shows
> 8MB. Why is this? Does it mean that changing shared_buffers in the
> config file has no impact?
Did you signal PostgreSQL to "reload" its configuration?
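Note that shared_buffers only takes effect at server start, so a reload alone will not change it. After editing postgresql.conf and doing a full restart, the new value should be visible:

    SHOW shared_buffers;   -- should now report 1024MB, not the old 8MB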
"shaiju.ck" wrote:
> The table has 200 records now.
> Select * from employee takes 15 seconds to fetch the data!!!
> Which seems to be very slow.
> But when I say select id,name from employee it executes in 30ms.
> Same performance if I say select count(*) from employee.
You haven't given nearly enough information
On 8 November 2010 06:16, shaiju.ck wrote:
> Hi, I have a table employee with 33 columns. The table has 200 records
> now. Select * from employee takes 15 seconds to fetch the data!!! Which
> seems to be very slow. But when I say select id,name from employee it
> executes in 30ms. Same performance
Hello
Do you use a VACUUM statement?
Regards
Pavel Stehule
2010/11/8 shaiju.ck :
> Hi, I have a table employee with 33 columns. The table has 200 records now.
> Select * from employee takes 15 seconds to fetch the data!!! Which seems to
> be very slow. But when I say select id,name from employee
Hi,
I have a table employee with 33 columns.
The table has 200 records now.
Select * from employee takes 15 seconds to fetch the data!!!
Which seems to be very slow.
But when I say select id,name from employee it executes in 30ms.
Same performance if I say select count(*) from employee.
Why is the query
How about either:-
a) Size the pool so all your data fits into it.
b) Use a RAM-based filesystem (i.e. a memory disk or SSD) for the
data storage [memory disk will be faster] with a smaller pool
- Your seed data should be a copy of the datastore on disk filesystem;
at startup time copy the sto
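If option (b) is used with PostgreSQL itself, one way to place data on a RAM-backed filesystem is a tablespace; a minimal sketch, where the mount point and table are hypothetical (and anything stored there is lost on reboot):

    CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk';   -- hypothetical tmpfs mount
    CREATE TABLE seed_copy (id integer, payload text) TABLESPACE ram_ts;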
I wrote a little Perl script, intended to test the difference that array
insert makes with PostgreSQL. Imagine my surprise when a single record
insert into a local database was faster than batches of 100 records.
Here are the two respective routines:
sub do_ssql
{
    my $exec_cnt = 0;
    while (my $row = shift @rows) {   # assumed completion; the snippet was truncated at "whil"
        $sth->execute(@$row);         # $sth/@rows assumed from enclosing scope; one INSERT per row
        $exec_cnt++;
    }
}
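For comparison, the effect the batching is meant to achieve can also be expressed in plain SQL as a multi-row INSERT (supported since PostgreSQL 8.2), which sends many rows in one statement; a sketch with a hypothetical table:

    INSERT INTO items (id, name)
    VALUES (1, 'a'),
           (2, 'b'),
           (3, 'c');   -- one round trip instead of three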
Hello
We have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have
observed that such sequential scans run roughly twenty times slower than raw disk reads (5 MB/s versus 100 MB/s). Part
of this is due to the storage
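When the goal is simply to stream every row out of a table, COPY is often much faster than fetching a SELECT * result set row by row; a minimal sketch with a hypothetical table name:

    COPY big_table TO STDOUT;   -- streams the whole table in COPY's wire format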
On Mon, Nov 8, 2010 at 02:05, Greg Smith wrote:
> Where are your benchmarks proving it then? If you're right about this, and
> I'm not saying you aren't, it should be obvious in simple benchmarks by
> stepping through various sizes for wal_buffers and seeing the
> throughput/latency situation improve
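For anyone reproducing such a benchmark, the values currently in effect can be read back before each run; a small sketch against the standard pg_settings view:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('wal_buffers', 'wal_sync_method');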
Robert Haas writes:
This part looks really strange to me. Here we have a nested loop
whose outer side is estimated to produce 33 rows and whose inner side
is estimated to produce 2 rows.
We have retained someone to help us troubleshoot the issue.
Once the issue has been resolved I will make s