Greg Smith wrote:
> Bruce Momjian wrote:
>> I always assumed SCSI disks had a write-through cache and therefore
>> didn't need a drive cache flush comment.
Some do. Some SCSI disks have write-back caches.
Some have both(!) - a write-back cache but the user can explicitly
send write-through requests.
I always assumed SCSI disks had a write-through cache and therefore
didn't need a drive cache flush comment.
Maximum performance can only be reached with a writeback cache so the
drive can reorder and cluster writes, according to the realtime position
of the heads and platter rotation.
Bruce Momjian wrote:
I always assumed SCSI disks had a write-through cache and therefore
didn't need a drive cache flush comment.
There's more detail on all this mess at
http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks and it includes
this perception, which I've recently come to bel
Greg Smith wrote:
> Ron Mayer wrote:
> > Linux apparently sends FLUSH_CACHE commands to IDE drives in the
> > exact same places it sends SYNCHRONIZE CACHE commands to SCSI
> > drives[2].
> > [2] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114
> >
>
> Well, that's old enough to not even be completely right anymore
Ron Mayer wrote:
> Bruce Momjian wrote:
> > Greg Smith wrote:
> >> Bruce Momjian wrote:
> >>> I have added documentation about the ATAPI drive flush command, and the
> >>
> >> If one of us goes back into that section one day to edit again it might
> >> be worth mentioning that FLUSH CACHE EXT i
Ron Mayer wrote:
Linux apparently sends FLUSH_CACHE commands to IDE drives in the
exact same places it sends SYNCHRONIZE CACHE commands to SCSI
drives[2].
[2] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114
Well, that's old enough to not even be completely right anymore
Bruce Momjian wrote:
> Greg Smith wrote:
>> Bruce Momjian wrote:
>>> I have added documentation about the ATAPI drive flush command, and the
>>
>> If one of us goes back into that section one day to edit again it might
>> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command
>
Greg Smith wrote:
> Bruce Momjian wrote:
> > I have added documentation about the ATAPI drive flush command, and the
> > typical SSD behavior.
> >
>
> If one of us goes back into that section one day to edit again it might
> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command
Bruce Momjian wrote:
I have added documentation about the ATAPI drive flush command, and the
typical SSD behavior.
If one of us goes back into that section one day to edit again it might
be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command
that a drive needs to support pr
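For anyone curious what issuing that flush from user space looks like, here is a minimal, hedged C sketch along the lines of hdparm's flush option; the device path is an assumption, it needs root, and on a properly set up kernel an ordinary fsync() should end up triggering the same drive flush for you:

/*
 * Hedged sketch only: ask an ATA drive to flush its write cache directly,
 * roughly what hdparm's flush option does, via the Linux HDIO_DRIVE_CMD
 * ioctl.  0xE7 is ATA FLUSH CACHE and 0xEA is FLUSH CACHE EXT (ATAPI-6).
 * The device path is an arbitrary example and this needs root.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>        /* HDIO_DRIVE_CMD */

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sda";  /* assumed device */
    unsigned char cmd[4] = { 0xEA, 0, 0, 0 };             /* FLUSH CACHE EXT */
    int fd = open(dev, O_RDONLY | O_NONBLOCK);

    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, HDIO_DRIVE_CMD, cmd) != 0) {
        cmd[0] = 0xE7;                    /* fall back to plain FLUSH CACHE */
        if (ioctl(fd, HDIO_DRIVE_CMD, cmd) != 0)
            perror("HDIO_DRIVE_CMD flush");
    }
    close(fd);
    return 0;
}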
I have added documentation about the ATAPI drive flush command, and the
typical SSD behavior.
---
Greg Smith wrote:
> Ron Mayer wrote:
> > Bruce Momjian wrote:
> >
> >> Agreed, though I thought the problem was that SSDs
It's always possible to rebuild into a consistent configuration by assigning
a precedence order; for parity RAID, the data drives take precedence over
parity drives, and for RAID-1 sets it assigns an arbitrary master.
You *should* never lose a whole stripe ... for example, RAID-5 updates do
"read
On 02/23/2010 04:22 PM, da...@lang.hm wrote:
On Tue, 23 Feb 2010, Aidan Van Dyk wrote:
* da...@lang.hm [100223 15:05]:
However, one thing that you do not get protection against with software
raid is the potential for the writes to hit some drives but not others.
If this happens the software
On Tue, 23 Feb 2010, Aidan Van Dyk wrote:
* da...@lang.hm [100223 15:05]:
However, one thing that you do not get protection against with software
raid is the potential for the writes to hit some drives but not others.
If this happens the software raid cannot know what the correct contents
of
* da...@lang.hm [100223 15:05]:
> However, one thing that you do not get protection against with software
> raid is the potential for the writes to hit some drives but not others.
> If this happens the software raid cannot know what the correct contents
> of the raid stripe are, and so you co
On Tue, 23 Feb 2010, da...@lang.hm wrote:
On Mon, 22 Feb 2010, Ron Mayer wrote:
Also worth noting - Linux's software raid stuff (MD and LVM)
needs to handle this right as well - and last I checked (sometime
last year) the default setups didn't.
I think I saw some stuff in the last few months on this issue on the
kernel mailing list.
On Feb 23, 2010, at 3:49 AM, Pierre C wrote:
> Now I wonder about something. SSDs use wear-leveling which means the
> information about which block was written where must be kept somewhere.
> Which means this information must be updated. I wonder how crash-safe and
> how atomic these updates
On Tue, Feb 23, 2010 at 6:49 AM, Pierre C wrote:
> Note that's power draw per bit. dram is usually much more densely
>> packed (it can be built with fewer transistors per cell) so the individual
>> chips for each may have similar power draws while the dram will be 10
>> times as densely packed as the
Note that's power draw per bit. dram is usually much more densely
packed (it can be built with fewer transistors per cell) so the individual
chips for each may have similar power draws while the dram will be 10
times as densely packed as the sram.
Differences between SRAM and DRAM:
- price per byte
On Mon, 22 Feb 2010, Ron Mayer wrote:
Also worth noting - Linux's software raid stuff (MD and LVM)
needs to handle this right as well - and last I checked (sometime
last year) the default setups didn't.
I think I saw some stuff in the last few months on this issue on the
kernel mailing list.
On Mon, Feb 22, 2010 at 7:21 PM, Scott Marlowe wrote:
> On Mon, Feb 22, 2010 at 6:39 PM, Greg Smith wrote:
>> Mark Mielke wrote:
>>>
>>> I had read the above when posted, and then looked up SRAM. SRAM seems to
>>> suggest it will hold the data even after power loss, but only for a period
>>> of t
On Mon, Feb 22, 2010 at 6:39 PM, Greg Smith wrote:
> Mark Mielke wrote:
>>
>> I had read the above when posted, and then looked up SRAM. SRAM seems to
>> suggest it will hold the data even after power loss, but only for a period
>> of time. As long as power can restore within a few minutes, it see
Mark Mielke wrote:
I had read the above when posted, and then looked up SRAM. SRAM seems
to suggest it will hold the data even after power loss, but only for a
period of time. As long as power can restore within a few minutes, it
seemed like this would be ok?
The normal type of RAM everyone u
Ron Mayer wrote:
I know less about other file systems. Apparently the NTFS guys
are aware of such stuff - but I don't know what kind of fsync equivalent
you'd need to make it happen.
It's actually pretty straightforward--better than ext3. Windows with
NTFS has been perfectly aware how to d
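For comparison, here is a hedged sketch of the Windows side that reply alludes to: FlushFileBuffers() is the rough equivalent of fsync(), flushing the file's dirty pages and asking the device to flush its write cache. The file name below is arbitrary.

/*
 * Hedged sketch of the Windows/NTFS equivalent of fsync() mentioned above.
 * FlushFileBuffers() flushes the file's dirty pages and requests a drive
 * cache flush.  The file name is an arbitrary example.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("commit-test.dat", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    const char buf[] = "commit record";
    DWORD written = 0;
    WriteFile(h, buf, sizeof(buf), &written, NULL);

    /* Equivalent of fsync(): don't report the commit until this returns. */
    if (!FlushFileBuffers(h))
        fprintf(stderr, "FlushFileBuffers failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}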
On 02/22/2010 08:04 PM, Greg Smith wrote:
Arjen van der Meijden wrote:
That's weird. Intel's SSDs didn't have a write cache afaik:
"I asked Intel about this and it turns out that the DRAM on the Intel
drive isn't used for user data because of the risk of data loss,
instead it is used as memory
Arjen van der Meijden wrote:
That's weird. Intel's SSDs didn't have a write cache afaik:
"I asked Intel about this and it turns out that the DRAM on the Intel
drive isn't used for user data because of the risk of data loss,
instead it is used as memory by the Intel SATA/flash controller for
d
Bruce Momjian wrote:
> Greg Smith wrote:
>> If you have a regular SATA drive, it almost certainly
>> supports proper cache flushing
>
> OK, but I have a few questions. Are a write to the drive and a cache
> flush command the same thing?
I believe they're different as of ATAPI-6 from 2001.
>
Ron Mayer wrote:
> Bruce Momjian wrote:
> > Agreed, though I thought the problem was that SSDs lie about their
> > cache flush like SATA drives do, or is there something I am missing?
>
> There's exactly one case I can find[1] where this century's IDE
> drives lied more than any other drive with
Greg Smith wrote:
> Ron Mayer wrote:
> > Bruce Momjian wrote:
> >
> >> Agreed, though I thought the problem was that SSDs lie about their
> >> cache flush like SATA drives do, or is there something I am missing?
> >>
> >
> > There's exactly one case I can find[1] where this century's IDE
>
On 22-2-2010 6:39 Greg Smith wrote:
But the point of this whole testing exercise coming back into vogue
again is that SSDs have returned this negligent behavior to the
mainstream again. See
http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion
of this in a ZFS context just last
Ron Mayer wrote:
Bruce Momjian wrote:
Agreed, though I thought the problem was that SSDs lie about their
cache flush like SATA drives do, or is there something I am missing?
There's exactly one case I can find[1] where this century's IDE
drives lied more than any other drive with a ca
Bruce Momjian wrote:
> Agreed, though I thought the problem was that SSDs lie about their
> cache flush like SATA drives do, or is there something I am missing?
There's exactly one case I can find[1] where this century's IDE
drives lied more than any other drive with a cache:
Under 120GB Maxtor
Scott Carey wrote:
> On Feb 20, 2010, at 3:19 PM, Bruce Momjian wrote:
>
> > Dan Langille wrote:
> >> -BEGIN PGP SIGNED MESSAGE-
> >> Hash: SHA1
> >>
> >> Bruce Momjian wrote:
> >>> Matthew Wakeling wrote:
> On Fri, 13 Nov 2009, Greg Smith wrote:
> > In order for a drive to work r
On Feb 20, 2010, at 3:19 PM, Bruce Momjian wrote:
> Dan Langille wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Bruce Momjian wrote:
>>> Matthew Wakeling wrote:
On Fri, 13 Nov 2009, Greg Smith wrote:
> In order for a drive to work reliably for database use such as for
Dan Langille wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Bruce Momjian wrote:
> > Matthew Wakeling wrote:
> >> On Fri, 13 Nov 2009, Greg Smith wrote:
> >>> In order for a drive to work reliably for database use such as for
> >>> PostgreSQL, it cannot have a volatile write cache.
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Bruce Momjian wrote:
> Matthew Wakeling wrote:
>> On Fri, 13 Nov 2009, Greg Smith wrote:
>>> In order for a drive to work reliably for database use such as for
>>> PostgreSQL, it cannot have a volatile write cache. You either need a write
>>> cache
Matthew Wakeling wrote:
> On Fri, 13 Nov 2009, Greg Smith wrote:
> > In order for a drive to work reliably for database use such as for
> > PostgreSQL, it cannot have a volatile write cache. You either need a write
> > cache with a battery backup (and a UPS doesn't count), or to turn the cache
On Fri, 13 Nov 2009, Greg Smith wrote:
In order for a drive to work reliably for database use such as for
PostgreSQL, it cannot have a volatile write cache. You either need a write
cache with a battery backup (and a UPS doesn't count), or to turn the cache
off. The SSD performance figures you
On 11/19/09 1:04 PM, "Greg Smith" wrote:
> That won't help. Once the checkpoint is done, the problem isn't just
> that the WAL segments are recycled. The server isn't going to use them
> even if they were there. The reason why you can erase/recycle them is
> that you're doing so *after* writi
Ron Mayer wrote:
> Bruce Momjian wrote:
> > Greg Smith wrote:
> >> Bruce Momjian wrote:
> >>> I thought our only problem was testing the I/O subsystem --- I never
> >>> suspected the file system might lie too. That email indicates that a
> >>> large percentage of our install base is running on unr
Bruce Momjian wrote:
> Greg Smith wrote:
>> Bruce Momjian wrote:
>>> I thought our only problem was testing the I/O subsystem --- I never
>>> suspected the file system might lie too. That email indicates that a
>>> large percentage of our install base is running on unreliable file
>>> systems ---
Bruce Momjian wrote:
>> For example, ext3 fsync() will issue write barrier commands
>> if the inode was modified; but not if the inode wasn't.
>>
>> See test program here:
>> http://www.mail-archive.com/linux-ker...@vger.kernel.org/msg272253.html
>> and read two paragraphs further to see how touchi
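The linked test program isn't reproduced in this archive, but a hedged sketch of the same idea looks roughly like this: time fsync() after a plain in-place overwrite, then again while also dirtying the inode, and compare the rates. The file name, block size, iteration count, and the fchmod() trick are arbitrary choices for this sketch.

/*
 * Sketch of the kind of test the linked lkml program performs: fsync() after
 * a plain overwrite (where ext3 may skip the drive cache flush because the
 * inode is unchanged) versus after also dirtying the inode with fchmod().
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    char block[8192] = {0};
    int fd = open("fsync-test.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int touch_inode = 0; touch_inode <= 1; touch_inode++) {
        double start = now();
        for (int i = 0; i < 100; i++) {
            if (pwrite(fd, block, sizeof(block), 0) != (ssize_t) sizeof(block))
                perror("pwrite");
            if (touch_inode)
                fchmod(fd, (i & 1) ? 0644 : 0664);   /* force an inode change */
            fsync(fd);
        }
        printf("%s inode change: %.1f fsyncs/sec\n",
               touch_inode ? "with" : "without", 100 / (now() - start));
    }
    close(fd);
    return 0;
}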
Greg Smith wrote:
> Bruce Momjian wrote:
> > I thought our only problem was testing the I/O subsystem --- I never
> > suspected the file system might lie too. That email indicates that a
> > large percentage of our install base is running on unreliable file
> > systems --- why have I not heard abo
Bruce Momjian wrote:
I thought our only problem was testing the I/O subsystem --- I never
suspected the file system might lie too. That email indicates that a
large percentage of our install base is running on unreliable file
systems --- why have I not heard about this before? Do the write
barr
Ron Mayer wrote:
> Bruce Momjian wrote:
> > Greg Smith wrote:
> >> A good test program that is a bit better at introducing and detecting
> >> the write cache issue is described at
> >> http://brad.livejournal.com/2116715.html
> >
> > Wow, I had not seen that tool before. I have added a link to
Bruce Momjian wrote:
> Greg Smith wrote:
>> A good test program that is a bit better at introducing and detecting
>> the write cache issue is described at
>> http://brad.livejournal.com/2116715.html
>
> Wow, I had not seen that tool before. I have added a link to it from
> our documentation, an
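That tool (diskchecker.pl) isn't reproduced here, but a stripped-down, hedged imitation of the idea looks like the following; the file name and record format are invented, and a real test should report over the network the way diskchecker does, so that the log survives the power pull.

/*
 * Tiny imitation of what diskchecker.pl tests.  "write" mode appends
 * sequence numbers, fsync()s each one, and reports the last number it
 * believes is durable; after cutting power, run "verify" and compare.  If
 * the highest record on disk is lower than the last number reported durable,
 * something in the stack lied about the flush.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define PATH "cacheprobe.dat"

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "verify") == 0) {
        FILE *f = fopen(PATH, "r");
        long seq, last = -1;
        if (!f) { perror("fopen"); return 1; }
        while (fscanf(f, "%ld", &seq) == 1)
            last = seq;
        printf("highest record actually on disk: %ld\n", last);
        fclose(f);
        return 0;
    }

    /* write mode: run until the power is pulled */
    int fd = open(PATH, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    for (long seq = 0; ; seq++) {
        char rec[32];
        int len = snprintf(rec, sizeof(rec), "%ld\n", seq);
        if (write(fd, rec, len) != len) { perror("write"); break; }
        if (fsync(fd) != 0) { perror("fsync"); break; }
        printf("durable through %ld\n", seq);   /* only after fsync succeeded */
        fflush(stdout);
    }
    return 0;
}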
Greg Smith wrote:
> Merlin Moncure wrote:
> > I am right now talking to someone on postgresql irc who is measuring
> > 15k iops from x25-e and no data loss following power plug test.
> The funny thing about Murphy is that he doesn't visit when things are
> quiet. It's quite possible the window fo
On Fri, Nov 20, 2009 at 7:27 PM, Greg Smith wrote:
> Richard Neill wrote:
>>
>> The key issue for short, fast transactions seems to be
>> how fast an fdatasync() call can run, forcing the commit to disk, and
>> allowing the transaction to return to userspace.
>> Attached is a short C program which
Richard Neill wrote:
The key issue for short, fast transactions seems to be
how fast an fdatasync() call can run, forcing the commit to disk, and
allowing the transaction to return to userspace.
Attached is a short C program which may be of use.
Right. I call this the "commit rate" of the storage
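The program itself isn't attached in this archive; a minimal sketch of the same measurement, with an arbitrary file name and record size, would be something like:

/*
 * Sketch of a commit-rate test: how many write()+fdatasync() cycles per
 * second the storage can complete, i.e. the ceiling on single-session
 * commits/sec.  File name, record size and iteration count are arbitrary.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    const int iterations = 1000;
    char record[512];
    memset(record, 'x', sizeof(record));

    int fd = open("commit-rate.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < iterations; i++) {
        if (pwrite(fd, record, sizeof(record), 0) != (ssize_t) sizeof(record))
            { perror("pwrite"); return 1; }
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.0f fdatasync'd writes/sec\n", iterations / secs);

    close(fd);
    unlink("commit-rate.dat");
    return 0;
}

On a drive that honors the flush, the result should be bounded by rotation speed (on the order of 100-120 per second for a 7200 RPM disk); numbers in the thousands usually mean a volatile write cache is absorbing the sync.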
Axel Rau wrote:
On 13.11.2009 at 14:57, Laszlo Nagy wrote:
I was thinking about ARECA 1320 with 2GB memory + BBU. Unfortunately,
I cannot find information about using ARECA cards with SSD drives.
They told me: currently not supported, but they have positive customer
reports. No date yet for
On Wed, Nov 18, 2009 at 8:24 PM, Tom Lane wrote:
> Scott Carey writes:
>> For your database DATA disks, leaving the write cache on is 100% acceptable,
>> even with power loss, and without a RAID controller. And even in high write
>> environments.
>
> Really? How hard have you tested that config
On Thu, 19 Nov 2009, Greg Smith wrote:
This is why turning the cache off can tank performance so badly--you're going
to be writing a whole 128K block no matter what if it's forced to disk without
caching, even if it's just to write an 8K page to it (a 16x write
amplification for that one page).
Theoretically, this does not need to be the ca
On 13.11.2009 at 14:57, Laszlo Nagy wrote:
I was thinking about ARECA 1320 with 2GB memory + BBU.
Unfortunately, I cannot find information about using ARECA cards
with SSD drives.
They told me: currently not supported, but they have positive customer
reports. No date yet for implementatio
On Thu, Nov 19, 2009 at 2:39 PM, Merlin Moncure wrote:
> On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith wrote:
>> You can use pgbench to either get interesting peak read results, or peak
>> write ones, but it's not real useful for things in between. The standard
>> test basically turns into a huge
On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith wrote:
> You can use pgbench to either get interesting peak read results, or peak
> write ones, but it's not real useful for things in between. The standard
> test basically turns into a huge stack of writes to a single table, and the
> select-only one
Scott Marlowe wrote:
On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure wrote:
pgbench is actually a pretty awesome i/o tester assuming you have a big
enough scaling factor
Seeing as how pgbench only goes to a scaling factor of 4000, are there any
plans on enlarging that number?
I'm doing pgbenc
Scott Carey wrote:
Have PG wait a half second (configurable) after the checkpoint fsync()
completes before deleting/overwriting any WAL segments. This would be a
trivial "feature" to add to a postgres release, I think. Actually, it
already exists! Turn on log archiving, and have the script th
On Thu, 2009-11-19 at 19:01 +0100, Anton Rommerskirchen wrote:
> On Thursday, 19 November 2009 13:29:56, Craig Ringer wrote:
> > On 19/11/2009 12:22 PM, Scott Carey wrote:
> > > 3: Have PG wait a half second (configurable) after the checkpoint
> > > fsync() completes before deleting/overwriting
On Thursday, 19 November 2009 13:29:56, Craig Ringer wrote:
> On 19/11/2009 12:22 PM, Scott Carey wrote:
> > 3: Have PG wait a half second (configurable) after the checkpoint
> > fsync() completes before deleting/overwriting any WAL segments. This
> > would be a trivial "feature" to add to a
On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure wrote:
> On Wed, Nov 18, 2009 at 11:39 PM, Scott Carey wrote:
>> Well, that is sort of true for all benchmarks, but I do find that bonnie++
>> is the worst of the bunch. I consider it relatively useless compared to
>> fio. It's just not a great be
On Wed, Nov 18, 2009 at 11:39 PM, Scott Carey wrote:
> Well, that is sort of true for all benchmarks, but I do find that bonnie++
> is the worst of the bunch. I consider it relatively useless compared to
> fio. It's just not a great benchmark for server-type load and I find it
> lacking in the ab
Scott Carey wrote:
Moral of the story: Nothing is 100% safe, so sometimes a small bit of KNOWN
risk is perfectly fine. There is always UNKNOWN risk. If one risks losing
256K of cached data on an SSD if you're really unlucky with timing, how
dangerous is that versus the chance that the raid car
Greg Smith wrote:
> Scott Carey wrote:
>> For your database DATA disks, leaving the write cache on is 100%
>> acceptable,
>> even with power loss, and without a RAID controller. And even in
>> high write
>> environments.
>>
>> That is what the XLOG is for, isn't it? That is where this behavior is
Scott Carey wrote:
For your database DATA disks, leaving the write cache on is 100% acceptable,
even with power loss, and without a RAID controller. And even in high write
environments.
That is what the XLOG is for, isn't it? That is where this behavior is
critical. But that has completely di
On 19/11/2009 12:22 PM, Scott Carey wrote:
> 3: Have PG wait a half second (configurable) after the checkpoint fsync()
> completes before deleting/overwriting any WAL segments. This would be a
> trivial "feature" to add to a postgres release, I think.
How does that help? It doesn't provide any
On 11/17/09 10:58 PM, "da...@lang.hm" wrote:
>
> keep in mind that bonnie++ isn't always going to reflect your real
> performance.
>
> I have run tests on some workloads that were definitely I/O limited where
> bonnie++ results that differed by a factor of 10x made no measurable
> difference in
On 11/17/09 10:51 AM, "Greg Smith" wrote:
> Merlin Moncure wrote:
>> I am right now talking to someone on postgresql irc who is measuring
>> 15k iops from x25-e and no data loss following power plug test.
> The funny thing about Murphy is that he doesn't visit when things are
> quiet. It's quit
Scott Carey writes:
> For your database DATA disks, leaving the write cache on is 100% acceptable,
> even with power loss, and without a RAID controller. And even in high write
> environments.
Really? How hard have you tested that configuration?
> That is what the XLOG is for, isn't it?
Once
On 11/15/09 12:46 AM, "Craig Ringer" wrote:
> Possible fixes for this are:
>
> - Don't let the drive lie about cache flush operations, ie disable write
> buffering.
>
> - Give Pg some way to find out, from the drive, when particular write
> operations have actually hit disk. AFAIK there's no su
On 11/13/09 10:21 AM, "Karl Denninger" wrote:
>
> One caution for those thinking of doing this - the incremental
> improvement of this setup on PostgreSQL in a WRITE SIGNIFICANT environment
> isn't NEARLY as impressive. Indeed the performance in THAT case for
> many workloads may only be 20 or
I found a bit of time to play with this.
I started up a test with 20 concurrent processes all inserting into
the same table and committing after each insert. The db was achieving
about 5000 inserts per second, and I kept it running for about 10
minutes. The host was doing about 5MB/s of P
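For reference, here is a hedged reconstruction of what one of those twenty workers might look like with libpq; the connection string, table name and loop count are invented, and you'd build it with something like "cc worker.c -lpq".

/*
 * Hedged reconstruction of one worker process from the test described above:
 * autocommitted single-row INSERTs in a tight loop against an assumed table
 * "t(v int)".
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* assumed connection info */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (int i = 0; i < 100000; i++) {
        /* Each statement is its own transaction, so every INSERT has to wait
         * for a WAL flush before it can be acknowledged. */
        PGresult *res = PQexec(conn, "INSERT INTO t(v) VALUES (1)");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

Note that several concurrent commits can share a single WAL flush, so an aggregate of 5000 inserts/sec from 20 clients is not by itself proof that the drive is ignoring flush requests.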
On Wed, 18 Nov 2009, Greg Smith wrote:
Merlin Moncure wrote:
But what's up with the 400 iops measured from bonnie++?
I don't know really. SSD writes are really sensitive to block size and the
ability to chunk writes into larger chunks, so it may be that Peter has just
found the worst-case be
Merlin Moncure wrote:
But what's up with the 400 iops measured from bonnie++?
I don't know really. SSD writes are really sensitive to block size and
the ability to chunk writes into larger chunks, so it may be that Peter
has just found the worst-case behavior and everybody else is seeing
som
On 11/17/2009 01:51 PM, Greg Smith wrote:
Merlin Moncure wrote:
I am right now talking to someone on postgresql irc who is measuring
15k iops from x25-e and no data loss following power plug test.
The funny thing about Murphy is that he doesn't visit when things are
quiet. It's quite possible
On Tue, Nov 17, 2009 at 1:51 PM, Greg Smith wrote:
> Merlin Moncure wrote:
>>
>> I am right now talking to someone on postgresql irc who is measuring
>> 15k iops from x25-e and no data loss following power plug test.
>
> The funny thing about Murphy is that he doesn't visit when things are quiet.
Merlin Moncure wrote:
I am right now talking to someone on postgresql irc who is measuring
15k iops from x25-e and no data loss following power plug test.
The funny thing about Murphy is that he doesn't visit when things are
quiet. It's quite possible the window for data loss on the drive is
v
On Tue, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:
> I am right now talking to someone on postgresql irc who is measuring
> 15k iops from x25-e and no data loss following power plug test. I am
> becoming increasingly suspicious that peter's results are not
> representative: given that 90% of
On Tue, Nov 17, 2009 at 9:54 AM, Brad Nicholson
wrote:
> On Tue, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:
>> 2009/11/13 Greg Smith :
>> > As far as what real-world apps have that profile, I like SSDs for small to
>> > medium web applications that have to be responsive, where the user shows
On Tue, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:
> 2009/11/13 Greg Smith :
> > As far as what real-world apps have that profile, I like SSDs for small to
> > medium web applications that have to be responsive, where the user shows up
> > and wants their randomly distributed and uncached dat
2009/11/13 Greg Smith :
> As far as what real-world apps have that profile, I like SSDs for small to
> medium web applications that have to be responsive, where the user shows up
> and wants their randomly distributed and uncached data with minimal latency.
> SSDs can also be used effectively as se
Craig James wrote:
> I've wondered whether this would work for a read-mostly application: Buy
> a big RAM machine, like 64GB, with a crappy little single disk. Build
> the database, then make a really big RAM disk, big enough to hold the DB
> and the WAL. Then build a duplicate DB on another mach
I've wondered whether this would work for a read-mostly application: Buy a big
RAM machine, like 64GB, with a crappy little single disk. Build the database,
then make a really big RAM disk, big enough to hold the DB and the WAL. Then
build a duplicate DB on another machine with a decent disk
- Pg doesn't know the erase block sizes or positions. It can't group
writes up by erase block except by hoping that, within a given file,
writing in page order will get the blocks to the disk in roughly
erase-block order. So your write caching isn't going to do anywhere near
as good a job as the
On 15/11/2009 2:05 PM, Laszlo Nagy wrote:
>
>> A change has been written to the WAL and fsync()'d, so Pg knows it's hit
>> disk. It can now safely apply the change to the tables themselves, and
>> does so, calling fsync() to tell the drive containing the tables to
>> commit those changes to disk.
A change has been written to the WAL and fsync()'d, so Pg knows it's hit
disk. It can now safely apply the change to the tables themselves, and
does so, calling fsync() to tell the drive containing the tables to
commit those changes to disk.
The drive lies, returning success for the fsync when
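Boiled down to a sketch (not PostgreSQL's actual code), the ordering being described is:

/*
 * The ordering rule from the quoted paragraph in miniature: the WAL record
 * is written and fsync()'d before the table page is touched, and the table
 * file gets its own fsync() later (at checkpoint time).  File names and the
 * record layout are invented for this sketch.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int durable_write(int fd, const void *buf, size_t len, off_t off)
{
    if (pwrite(fd, buf, len, off) != (ssize_t) len)
        return -1;
    return fsync(fd);    /* should not return until the drive really has it */
}

int main(void)
{
    int wal  = open("wal.sketch",  O_WRONLY | O_CREAT, 0644);
    int heap = open("heap.sketch", O_RDWR   | O_CREAT, 0644);
    if (wal < 0 || heap < 0) { perror("open"); return 1; }

    const char walrec[] = "UPDATE page 0: hello";
    const char newpage[8192] = "hello";

    /* 1. WAL first: the commit cannot be acknowledged before this succeeds. */
    if (durable_write(wal, walrec, sizeof(walrec), 0) != 0) {
        perror("wal fsync");
        return 1;
    }

    /* 2. Only then modify the table page; its fsync can wait for the
     *    checkpoint.  If the drive lied about step 1's flush, a crash here
     *    can lose the very WAL record that was supposed to let us redo it. */
    pwrite(heap, newpage, sizeof(newpage), 0);
    fsync(heap);

    close(wal);
    close(heap);
    return 0;
}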
On 15/11/2009 11:57 AM, Laszlo Nagy wrote:
> Ok, I'm getting confused here. There is the WAL, which is written
> sequentially. If the WAL is not corrupted, then it can be replayed on
> next database startup. Please somebody enlighten me! In my mind, fsync
> is only needed for the WAL. If I could c
* I could buy two X25-E drives and have 32GB disk space, and some
redundancy. This would cost about $1600, not counting the RAID
controller. It is on the edge.
This was the solution I went with (4 drives in a raid 10 actually).
Not a cheap solution, but the performance is amazing.
Robert Haas wrote:
2009/11/14 Laszlo Nagy :
32GB is for one table only. This server runs other applications, and you
need to leave space for sort memory, shared buffers etc. Buying 128GB memory
would solve the problem, maybe... but it is too expensive. And it is not
> safe. Power out -> data loss.
On Sat, Nov 14, 2009 at 8:47 AM, Heikki Linnakangas
wrote:
> Merlin Moncure wrote:
>> On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas
>> wrote:
lots of ram doesn't help you if:
*) your database gets written to a lot and you have high performance
requirements
>>> When all the (h
2009/11/14 Laszlo Nagy :
> 32GB is for one table only. This server runs other applications, and you
> need to leave space for sort memory, shared buffers etc. Buying 128GB memory
> would solve the problem, maybe... but it is too expensive. And it is not
> safe. Power out -> data loss.
Huh?
...Rob
Heikki Linnakangas wrote:
Laszlo Nagy wrote:
* I need at least 32GB disk space. So DRAM based SSD is not a real
option. I would have to buy 8x4GB memory, costs a fortune. And
then it would still not have redundancy.
At 32GB database size, I'd seriously consider just buying
Merlin Moncure wrote:
> On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas
> wrote:
>>> lots of ram doesn't help you if:
>>> *) your database gets written to a lot and you have high performance
>>> requirements
>> When all the (hot) data is cached, all writes are sequential writes to
>> the WAL,
On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas
wrote:
>> lots of ram doesn't help you if:
>> *) your database gets written to a lot and you have high performance
>> requirements
>
> When all the (hot) data is cached, all writes are sequential writes to
> the WAL, with the occasional flushing
Merlin Moncure wrote:
> 2009/11/13 Heikki Linnakangas :
>> Laszlo Nagy wrote:
>>>* I need at least 32GB disk space. So DRAM based SSD is not a real
>>> option. I would have to buy 8x4GB memory, costs a fortune. And
>>> then it would still not have redundancy.
>> At 32GB database size,
Lists wrote:
Laszlo Nagy wrote:
Hello,
I'm about to buy SSD drive(s) for a database. For decision making, I
used this tech report:
http://techreport.com/articles.x/16255/9
http://techreport.com/articles.x/16255/10
Here are my concerns:
* I need at least 32GB disk space. So DRAM based SS
Laszlo Nagy wrote:
Hello,
I'm about to buy SSD drive(s) for a database. For decision making, I
used this tech report:
http://techreport.com/articles.x/16255/9
http://techreport.com/articles.x/16255/10
Here are my concerns:
* I need at least 32GB disk space. So DRAM based SSD is not a rea
The FusionIO products are a little different. They are card-based rather than trying to
emulate a traditional disk. In terms of volatility, they have an on-board
capacitor that allows power to be supplied until all writes drain. They do not
have a cache in front of them like a disk-type SSD might. I
Fernando Hevia wrote:
Shouldn't their write performance be more than a trade-off for fsync?
Not if you have sequential writes that are regularly fsync'd--which is
exactly how the WAL writes things out in PostgreSQL. I think there's a
potential for SSD to reach a point where they can give go
2009/11/13 Greg Smith :
> As far as what real-world apps have that profile, I like SSDs for small to
> medium web applications that have to be responsive, where the user shows up
> and wants their randomly distributed and uncached data with minimal latency.
> SSDs can also be used effectively as se
Brad Nicholson wrote:
Out of curiosity, what are those narrow use cases where you think
SSD's are the correct technology?
Dave Crooke did a good summary already, I see things like this:
* You need to have a read-heavy app that's bigger than RAM, but not too
big so it can still fit on SSD
* You
> -Original message-
> Laszlo Nagy
>
> My question is about the last option. Are there any good RAID
> cards that are optimized (or can be optimized) for SSD
> drives? Do any of you have experience in using many cheaper
> SSD drives? Is it a bad idea?
>
> Thank you,
>
>Laszlo
>