Can you help me append the rows of two tables into a single table without
performing an INSERT?
Note that these tables have the same schema.
Is there any SQL command that supports this?
Thanks,
Hanu
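For what it's worth, the usual approaches look something like this (a minimal
sketch; the table names t1, t2 and t_combined are placeholders, not from the
thread):

  -- Append the rows of t2 into t1 (the standard way; both tables share a schema):
  INSERT INTO t1
  SELECT * FROM t2;

  -- If the goal is to avoid an INSERT into an existing table, build a new
  -- table containing the rows of both in a single statement:
  CREATE TABLE t_combined AS
  SELECT * FROM t1
  UNION ALL
  SELECT * FROM t2;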
On 5/29/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Michal Szymanski wrote:
> There is another strange thing
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> I am still not clear on the best way of adding more disks to the system.
> Do more stripes mean more performance (mostly)?
> Also, is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
Are you referring to pgpool? BTW, thanks for this insight.
Yudhvir
On 5/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Y Sidhu" <[EMAIL PROTECTED]> writes:
> The question is: Is this method of repeatedly establishing and
> re-establishing database connections with the same 3 tables efficient?
"Y Sidhu" <[EMAIL PROTECTED]> writes:
> The question is: Is this method of repeatedly establishing and
> re-establishing database connections with the same 3 tables efficient?
No. Launching a new backend process is a fairly expensive proposition;
if you're striving for performance you don't want to be doing that over and over.
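As a rough way to watch the connection churn from SQL (a sketch using the
standard statistics views, assuming stats collection is enabled):

  -- One row per live backend/connection:
  SELECT count(*) FROM pg_stat_activity;

  -- Backend counts per database:
  SELECT datname, numbackends FROM pg_stat_database;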
I have a question regarding "connection for xxyy established". The situation
below shows records being added to 3 tables which are heavily populated. We
never "update" any table; we only read from them, or delete a full day's worth
of records from them.
The question is: Is this method of repeatedly establishing and re-establishing
database connections with the same 3 tables efficient?
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> I am still not clear on the best way of adding more disks to the system.
> Do more stripes mean more performance (mostly)?
> Also, is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
It isn't that
Mark,
On 5/30/07 8:57 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> One part is corruption. Another is ordering and consistency. ZFS represents
> both RAID-style storage *and* a journal-style file system. I imagine consistency
> and ordering are handled through journalling.
Yep, and versioning
Albert,
On 5/30/07 8:00 AM, "Albert Cervera Areny" <[EMAIL PROTECTED]> wrote:
> The hardware isn't very good, I believe, and it's about 2-3 years old, but the RAID
> is Linux software RAID, and though it's not very good, the difference between reading
> and writing should probably be greater... (?)
Not for one
Sorry for posting and disappearing.
I am still not clear on the best way of adding more disks to the system.
Do more stripes mean more performance (mostly)?
Also, is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
"Michael Stone" <[EMAIL PROTECTED]> writes:
"Michael Stone" <[EMAIL PROTECTED]> writes:
> On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:
>
> > Much better to get a RAID system that checksums blocks so that "good" is
> > known. Solaris ZFS does that, as do high end systems from EMC
As there is no 'contiguous space' option on ext3/ext2 (or perhaps "-f
fragment_size" may do the trick?), I think that after some filesystem
activity you simply lose contiguous space allocation, and the expected
sequential reads may be transformed into random seeks over 'logically'
sequential blocks
"Tyrrill, Ed" <[EMAIL PROTECTED]> writes:
> I did a vacuum analyze after inserting all the data. Is there possibly
> a bug in analyze in 8.1.5-6? I know it says rows=436915, but the last
> time the backup_location table has had that little data in it was a
> couple months ago, and analyze has b
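A quick way to see what ANALYZE actually stored is to compare the planner's
estimates in pg_class against a real count (a sketch; reltuples and relpages
are only estimates maintained by VACUUM/ANALYZE):

  -- What the planner thinks the table sizes are:
  SELECT relname, reltuples, relpages
  FROM pg_class
  WHERE relname IN ('backupobjects', 'backup_location');

  -- Compare against the actual row count:
  SELECT count(*) FROM backup_location;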
Tom Lane <[EMAIL PROTECTED]> writes:
> Klint Gore <[EMAIL PROTECTED]> writes:
>> On Tue, 29 May 2007 17:16:57 -0700, "Tyrrill, Ed" <[EMAIL PROTECTED]> wrote:
>>> mdsdb=# explain analyze select backupobjects.record_id from
>>> backupobjects left outer join backup_location using(record_id) where
>>>
Michael Glaesemann <[EMAIL PROTECTED]> writes:
> Off the cuff, when was the last time you vacuumed or ran ANALYZE?
> Your row estimates look off by a couple orders of magnitude. With
> up-to-date statistics the planner might do a better job.
>
> As for any other improvements, I'll leave that
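Concretely, that suggestion amounts to something like the following (a sketch;
the WHERE clause of the original query is omitted here because it was cut off
above):

  -- Refresh the planner statistics, then re-check the row estimates:
  VACUUM ANALYZE backupobjects;
  VACUUM ANALYZE backup_location;

  EXPLAIN ANALYZE
  SELECT backupobjects.record_id
  FROM backupobjects
  LEFT OUTER JOIN backup_location USING (record_id);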
On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:
> > This is standard stuff, very well proven: try googling 'self healing zfs'.
> The first hit on this search is a demo of ZFS detecting corruption of one of
> the mirror pair using checksums, very cool:
>
> http://www.opensolaris.org/os/community/zfs/demos/selfheal/
> This is standard stuff, very well proven: try googling 'self healing zfs'.
The first hit on this search is a demo of ZFS detecting corruption of one of
the mirror pair using checksums, very cool:
http://www.opensolaris.org/os/community/zfs/demos/selfheal/
Oh, by the way, I saw a nifty patch in the queue:
Find a way to reduce rotational delay when repeatedly writing last WAL page
Currently fsync of WAL requires the disk platter to perform a full
rotation to fsync again.
One idea is to write the WAL to different offsets that might reduce the
rotational delay.
On Wed, 30 May 2007 16:36:48 +0200, Luke Lonergan
<[EMAIL PROTECTED]> wrote:
I don't see how that's better at all; in fact, it reduces to
exactly the same problem: given two pieces of data which
disagree, which is right?
The one that matches the checksum.
- postgres tells OS "write
It's created when the data is written to both drives.
This is standard stuff, very well proven: try googling 'self healing zfs'.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Michael Stone [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 30, 2007 11:11 AM Eastern Standard Time
On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:
I don't see how that's better at all; in fact, it reduces to
exactly the same problem: given two pieces of data which
disagree, which is right?
The one that matches the checksum.
And you know the checksum is good, how?
Mike Stone
The hardware isn't very good, I believe, and it's about 2-3 years old, but the RAID
is Linux software RAID, and though it's not very good, the difference between reading
and writing should probably be greater... (?)
Would you set 512KB readahead on both drives and the RAID? I tried various
configurations and none
> I don't see how that's better at all; in fact, it reduces to
> exactly the same problem: given two pieces of data which
> disagree, which is right?
The one that matches the checksum.
- Luke
On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:
On 5/30/07 12:29 AM, "Peter Childs" <[EMAIL PROTECTED]> wrote:
Good point; also, if you had RAID 1 with 3 drives with some bit errors, at least
you can take a vote on what's right. Whereas if you only have 2 and they
disagree, how do you
On Tue, May 29, 2007 at 07:56:07PM +0200, Joost Kraaijeveld wrote:
> Thanks, I tried it and it worked. I did not know that changing this
> setting would result in such a performance drop (I just followed an
It's not a performance drop. It's a deliberate delay of the
functionality, introduced so
This sounds like a bad RAID controller - are you using a built-in hardware
RAID? If so, you will likely want to use Linux software RAID instead.
Also - you might want to try a 512KB readahead - I've found that is optimal
for RAID1 on some RAID controllers.
- Luke
On 5/30/07 2:35 AM, "Albert Cervera Areny" <[EMAIL PROTECTED]> wrote:
Hi Peter,
On 5/30/07 12:29 AM, "Peter Childs" <[EMAIL PROTECTED]> wrote:
> Good point; also, if you had RAID 1 with 3 drives with some bit errors, at least
> you can take a vote on what's right. Whereas if you only have 2 and they
> disagree, how do you know which is right other than to pick one and hope...
"Jonah H. Harris" <[EMAIL PROTECTED]> writes:
> On 5/29/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
>> to me.
Sure you can. In fact it's a very common backup strategy. You build a
three-way mirror and then when it co
* Peter Childs ([EMAIL PROTECTED]) wrote:
> Good point; also, if you had RAID 1 with 3 drives with some bit errors, at
> least you can take a vote on what's right. Whereas if you only have 2 and
> they disagree, how do you know which is right other than to pick one and hope...
> But whatever it will be s
Hi,
after doing the "dd" tests for a server we have at work I obtained:
Read: 47.20 MB/s
Write: 39.82 MB/s
Some days ago read performance was around 20 MB/s due to no readahead on
md0, so I modified it using hdparm. However, it seems to me that, this being
a RAID1, the read speed could be
Joost Kraaijeveld wrote:
> On Tue, 2007-05-29 at 21:43 +0100, Dave Page wrote:
>> Cliff, Jason or Rob era? Could be important...
> Cliff and Jason.
>
> Rob is in my Ozzy collection ;-)
And rightly so imho.
:-)
/D
On 30/05/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Wed, 30 May 2007, Jonah H. Harris wrote:
> On 5/29/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make
>> sense to me.
>
> Yeah, I've never seen a way to RAID-1 more than two drives