On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner wrote:
Mladen Gogala wrote:
There is an operating system which comes with a very decent extent-based
file system and a defragmentation tool, included in the OS.
The file system is called "NTFS".
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
Kevin Grittner wrote:
Mladen Gogala wrote:
Been there, done that. Not only was performance quite poor compared
to Linux, but reliability and staff time to manage things suffered
in comparison to Linux.
I must say that I am quite impressed with Windows 7 servers, especially
64-bit.
Kenneth Marshall wrote:
Red Hat 6 comes with ext4, which is an extent-based filesystem with
decent performance.
Ken
But e4defrag is still not available. And, of course, Red Hat 6 is still
not available, either. Maybe Red Hat 7 will do the trick? I assume it
will work beautifully with PostgreSQL.
On Wed, Dec 01, 2010 at 12:15:19PM -0500, Mladen Gogala wrote:
> Mario Splivalo wrote:
>> I'll try what Pierre suggested, on a whole new filesystem. This one did get
>> quite filled with thousands of files that I deleted while the database was
>> working.
>>
>> Mario
>>
>
> Yes, that is a good idea. That's the reason why we need a
> defragmentation tool on Linux.
Mario Splivalo wrote:
I'll try what Pierre suggested, on a whole new filesystem. This one did
get quite filled with thousands of files that I deleted while the
database was working.
Mario
Yes, that is a good idea. That's the reason why we need a
defragmentation tool on Linux. Unfor
Mladen Gogala wrote:
> How is it possible to insert the same value twice into a UNIQUE
> index?
You get multiple entries for the same value in a UNIQUE index all
the time in PostgreSQL. Any non-HOT update of a table with a UNIQUE
index will cause that. You just can't have duplicate entries
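For illustration, a minimal sketch of watching that happen, assuming the
pageinspect contrib module is installed (the table, column, and index names
here are made up):

    create table t (id integer unique, val integer);
    create index t_val_idx on t (val);
    insert into t values (1, 10);
    update t set val = 11;  -- non-HOT: val is indexed, so every index gets a new entry
    -- the unique index "t_id_key" should now hold two entries for id = 1,
    -- one pointing at the dead tuple version and one at the live one:
    select itemoffset, ctid, data from bt_page_items('t_id_key', 1);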
Richard Broersma wrote:
It looks like the check isn't performed until COMMIT.
So, the index is not actually updated until commit? Hmm, that seems
unlikely.
--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com
The Leader in Integrated Media Intelligence Solutions
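A minimal sketch of the behaviour in question (table and constraint names
made up): the index entries are written immediately; only the uniqueness
error check waits until COMMIT when the constraint is set deferred.

    create table t2 (col1 integer, constraint t2_pk primary key (col1) deferrable);
    insert into t2 values (1), (2);
    begin;
    set constraints t2_pk deferred;
    update t2 set col1 = col1 + 1;  -- transient duplicate mid-update: no error yet
    commit;                         -- the uniqueness check runs here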
On 12/01/2010 05:34 PM, Mladen Gogala wrote:
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such, you
have to actually set the constraints deferred to have an effect
On Wed, Dec 1, 2010 at 8:46 AM, Mladen Gogala wrote:
> PostgreSQL 9.0, however, creates a unique index:
>
> scott=# create table test1
> scott-# (col1 integer,
> scott(# constraint test1_pk primary key(col1) deferrable);
> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "test1_pk" for table "test1"
>
In Oracle, deferrable primary keys are enforced by non-unique indexes.
That seems logical, because the index must tolerate duplicate values for
the duration of the transaction:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
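One way to verify that on the Oracle side, as a sketch (the data dictionary
query is standard; the expected output is an assumption):

    create table test1 (col1 integer, constraint test1_pk primary key (col1) deferrable);
    select index_name, uniqueness from user_indexes where table_name = 'TEST1';
    -- expected: TEST1_PK  NONUNIQUE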
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such, you
have to actually set the constraints deferred to have an effect. You
have to do it within a transaction
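In Mario's case that would look something like the following sketch, reusing
the thread's table names; note that SET CONSTRAINTS ALL DEFERRED only affects
constraints that were declared DEFERRABLE in the first place:

    begin;
    set constraints all deferred;
    insert into drones_history (sample_id, drone_id, drone_log_notice,
                                drone_temperature, drone_pressure)
        select 2771, drone_id, drone_log_notice, drone_temperature, drone_pressure
        from tmp_drones_history;
    commit;  -- deferred PK/FK checks all run here, in one pass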
Pierre C wrote:
>
> > Is that true? I have no idea. I thought everything was done at the
> > 512-byte block level.
>
> Newer disks (2TB and up) can have 4k sectors, but this still means a page
> spans several sectors.
Yes, I had heard about that.
--
Bruce Momjian http://momjian.us
>FWIW, it isn't going to happen anyway, because the TID scan mechanism
>doesn't support scanning based on a join condition. That hasn't gotten
>to the top of the to-do list because the use case is almost vanishingly
>small. ctids generally aren't stable enough for it to be useful to
>store references to them.
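A quick sketch of what ctids look like and why they are unstable (table and
column names taken from the thread, values illustrative):

    select ctid, drone_id from drones_history limit 3;  -- e.g. (0,1), (0,2), (0,3)
    select * from drones_history where ctid = '(0,1)';  -- executed as a Tid Scan
    update drones_history set drone_log_notice = drone_log_notice
        where ctid = '(0,1)';
    -- the updated row now lives at a new ctid, so any stored '(0,1)'
    -- reference is stale; VACUUM can later let another row reuse that slot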
So, I did. I ran the whole script in psql, and here is the result for
the INSERT:
realm_51=# explain analyze INSERT INTO drones_history (sample_id, drone_id,
drone_log_notice, drone_temperature, drone_pressure) SELECT 2771, drone_id,
drone_log_notice, drone_temperature, drone_pressure FROM tmp_drones_history;
Just once.
OK, another potential problem eliminated; this is getting strange...
If I have 5000 lines in the CSV file (that I load into a 'temporary' table
using COPY) I can be sure that drone_id there is the PK. That is because the
CSV file contains measurements from all the drones, one measurement per
drone. I u
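For reference, loading that CSV would look roughly like this sketch (the
column types are guesses and the file path is hypothetical):

    create temporary table tmp_drones_history (
        drone_id          integer,
        drone_log_notice  text,
        drone_temperature integer,
        drone_pressure    integer
    );
    copy tmp_drones_history from '/tmp/drones.csv' with (format csv);
    -- sanity check: any duplicated drone_id in the file?
    select drone_id from tmp_drones_history group by drone_id having count(*) > 1;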
Is that true? I have no idea. I thought everything was done at the
512-byte block level.
Newer disks (2TB and up) can have 4k sectors, but this still means a page
spans several sectors.
On 12/01/2010 02:47 AM, Joshua D. Drake wrote:
> On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
>> The database for monitoring certain drone statuses is quite simple:
>>
>
>> This is the slow part:
>> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
>> drone_temperature, drone_pressure)
On 12/01/2010 01:51 AM, Pierre C wrote:
>
>> Now I tried removing the constraints from the history table (including
>> the PK) and the inserts were fast. After a few 'rounds' of inserts I
>> added the constraints back, and several rounds after that were fast again.
>> But then it was all the same. Insert of som
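The drop-and-recreate pattern being described would be something like this
sketch (the constraint name and the primary key column list are guesses based
on the thread's schema):

    alter table drones_history drop constraint drones_history_pk;
    -- ... bulk inserts now run without per-row uniqueness checks ...
    alter table drones_history add constraint drones_history_pk
        primary key (sample_id, drone_id);  -- rebuilds the index in a single pass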