On Sunday, 31 July 2011, 19:11:06, Michael Mol wrote:
> On Sun, Jul 31, 2011 at 6:37 PM, Volker Armin Hemmann
>
> <volkerar...@googlemail.com> wrote:
> > On Sunday, 31 July 2011, 10:44:28, Michael Mol wrote:
> >> While I take your point about write-cycle limitations, and I would
> >> *assume* you're familiar with the various improvements on
> >> wear-leveling technique that have happened over the past *ten years*
> >
> > yeah, I am. Or let me phrase it differently:
> > I know what is claimed.
> >
> > The problem is, even the best wear leveling does not help you if your
> > disk is pretty full and you still do a lot of writing. 1,000,000 write
> > cycles aren't much.
>
> Ok; I wasn't certain, but it sounded like you'd had your head in the
> sand (if you'll pardon the expression). It's clear you hadn't. I'm
> sorry.
>
> >> since those concerns were first raised, I could probably raise an
> >> argument that a fresh SSD is likely to last longer as a swap device
> >> than as a filesystem.
> >
> > depends - because thanks to wear leveling that 'swap partition' is just
> > something the firmware makes the kernel believe is there.
> >
> >> Swap is only touched as-needed, while there's been an explosion in
> >> programs and user software which demands synchronous writes to disk
> >> for data integrity purposes. (Firefox uses sqlite in such a way, for
> >> example; I discovered this when I was using sqlite heavily in my *own*
> >> application, and Firefox hung for a couple minutes during every batch
> >> insert.)
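
For what it's worth, the usual application-side mitigation for that is to
wrap the batch in a single transaction, so sqlite syncs once per batch
instead of once per statement. A rough sketch with the sqlite3 shell
(database and table names are made up):

  # autocommit: each INSERT is its own transaction, i.e. a sync per row
  sqlite3 test.db "INSERT INTO t VALUES (1);"
  sqlite3 test.db "INSERT INTO t VALUES (2);"

  # explicit transaction: one sync for the whole batch
  sqlite3 test.db "BEGIN; INSERT INTO t VALUES (1); INSERT INTO t VALUES (2); COMMIT;"

Doesn't change what Firefox itself does internally, of course.
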
> >
> > which is another good reason not to use firefox - but
> >
> >              total       used       free     shared    buffers     cached
> > Mem:       8182556    7373736     808820          0      56252    2197064
> > -/+ buffers/cache:    5120420    3062136
> > Swap:     23446848      82868   23363980
> >
> > even with lots of ram, you will hit swap. And since you are using the
> > wear-leveling of the drive's firmware it does not matter that your
> > swap resides on its own partition - every page written means a
> > block-rewrite somewhere. Really not good for your ssd.
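
To put rough numbers on that (assuming a 4 KiB page and a 512 KiB erase
block - erase-block sizes vary by drive and mostly aren't published): the
82868 KiB of swap in use above is roughly 20,700 pages, and in the worst
case every page written back touches a different erase block, i.e. up to
20,700 * 512 KiB, about 10 GiB of flash rewritten for ~81 MiB of actual
swap traffic. A good controller coalesces writes and does far better than
that, but the asymmetry is the point.
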
>
> Fair enough.
>
> It Would Be Nice(tm) if the SSD's block size and alignment matched
> the kernel's page size. I'm not certain whether it's possible to tune
> those settings (reliably) in the kernel.
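
For what it's worth, the page size and whatever the drive reports are easy
enough to check; the flash erase-block size is the one the drive usually
won't tell you. Something like (sda is just a placeholder):

  getconf PAGESIZE                              # kernel page size, normally 4096
  cat /sys/block/sda/queue/logical_block_size   # what the drive advertises
  cat /sys/block/sda/queue/physical_block_size
  parted /dev/sda align-check optimal 1         # partition 1 aligned? (recent parted)

Aligning partitions to 1 MiB boundaries covers most page/erase-block
combinations in practice.
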
>
> Also, my stats, from three different systems (they appear to be using
> trivial amounts of swap, though my Gentoo box doesn't appear to be
> using any)
>
> (Desktop box)
> shortcircuit:1@serenity~
> Sun Jul 31 07:03 PM
> !499 #1 j0 ?0 $ free -m
>              total       used       free     shared    buffers     cached
> Mem:          5975       3718       2256          0        617       1106
> -/+ buffers/cache:       1994       3980
> Swap:         9993          0       9993
>
> (laptop)
> shortcircuit@saffron:~$ free -m
>              total       used       free     shared    buffers     cached
> Mem:          1995       1732        263          0        169        913
> -/+ buffers/cache:        648       1347
> Swap:         3921          3       3918
>
> (server)
> shortcirc...@rosettacode.xen.prgmr.com~
> 23:05:34 $ free -m
>              total       used       free     shared    buffers     cached
> Mem:          2048       2000         47          0        285        488
> -/+ buffers/cache:       1225        822
> Swap:          511          1        510
>
> >> Also, despite the MTBF data provided by the manufacturers, there's
> >> more empirical evidence that the drives expire faster than expected,
> >> anyway. I'm aware of this, and not particularly concerned about it.
> >
> > well, it is your money to burn.
>
> Best evidence I've read lately is that the drives last about a year
> under heavy use. I was going to include a reference in the last email,
> but I can't find a link to the post. I thought it was something Joel
> Spolsky (or *someone* at StackOverflow) wrote, but I was unable to
> find it quickly.
>
> My parts usually last 3-5 years, so that's pretty low. Still, having
> my swap partition drop (and the entire system halt) would generally be
> less damaging to me than it would be if the drive held real data.
>
> >> False dichotomy. Yes, it increases the wear on the device. That says
> >> nothing of its impact on system performance, which was the nature of
> >> my point.
> >
> > if you are so concerned about swap performance you should probably go
> > with a smaller SSD, get more RAM, and let the few MB of swap you need
> > be handled by several swap partitions.
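
To make the 'several swap partitions' bit concrete: give them the same
priority and the kernel round-robins pages across them. Device names below
are just placeholders:

  swapon -p 1 /dev/sda2
  swapon -p 1 /dev/sdb2

  # or persistently in /etc/fstab:
  # /dev/sda2   none   swap   sw,pri=1   0 0
  # /dev/sdb2   none   swap   sw,pri=1   0 0

Only pays off if the partitions sit on different spindles, obviously.
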
>
> This is where I get back to my original 'prohibitively expensive'
> bit. I can get 16GB of RAM into my system for about $200. The use
> cases where I've been contemplating this have been where I wanted to
> have 60GB to 80GB of data quickly accessible in a random-access
> fashion, but where that type of load wasn't what I normally spent my
> time doing. (Hence the idea of getting a broader improvement from
> something such as the file cache.)
>
> And, really, the whole point of the thread was for thought
> experiments. Posits are occasionally required.
>
> >> As for a filecache not being that important, that's only the case if
> >> your data of interest exists on the filesystem you put on the SSD.
> >>
> >> Let's say you're someone like me, who would tend to go with 60GB for /
> >> and 3TB for /home. At various times, I'll be doing HDR photo
> >> processing, some video transcoding, some random non-portage compile
> >> jobs, web browsing, coding, etc.
> >
> > 60gb for /, 75gb for /var, and 2.5tb data...
> > my current setup.
>
> Handy; we'll have common frames of reference.
>
> >> If I take a 160GB SSD, I could put / (or, at least, /var/ and /usr),
> >> and have some space left over for scratch--but it's going to be a pain
> >> trying to figure out which of my 3TB of /home data I want in that fast
> >> scratch.
> >>
> >> File cache is great, because it caches your most-used data from
> >> *anywhere* and keeps it in a fast-access datastore. I could have a 3
> >> *petabyte* volume, not be particularly concerned about data
> >> distribution, and get just as good a response from the filecache as
> >> if I had a mere 30GB volume. Putting a filesystem on an SSD simply
> >> cannot scale that way.
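
A quick way to watch that happen, by the way (path is made up):

  time cat /home/somebigfile > /dev/null   # first pass comes off the platters
  time cat /home/somebigfile > /dev/null   # second pass comes out of the page cache
  free -m                                  # 'cached' grows accordingly
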
> >
> > true, but all those microseconds saved with swap on ssd won't offset the
> > pain when the ssd dies earlier.
>
> It really depends on the quantity and nature of the pain. When the
> things I'm toying around with have projected completion times of a
> *week* rather than an hour or two, and when I don't normally need so
> much memory, it wouldn't be too much of a hassle to remove the dead
> drive from fstab and boot back up. (after fsck, etc, natch). In the
> words of the Architect, "There are levels of existence we are prepared
> to accept..."
>
> >> Actually, this conversation reminds me of another idea I'd had at one
> >> point...putting ext3/ext4's journal on an SSD, while keeping the bulk
> >> of the data on large, dense spinning platters.
> >
> > which sounds nice in theory.
>
> Yet would potentially run afoul of the SSD's write block resolution.
> And, of course, having the journal fail out from under me would be a
> fair bit worse than the kernel panicking during a swap operation.
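
For reference, ext3/ext4 do support exactly that via an external journal
device. A rough sketch for a fresh filesystem - partitions are placeholders,
and the block size has to match between journal device and filesystem:

  mke2fs -b 4096 -O journal_dev /dev/sdb1           # SSD partition becomes the journal
  mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/sda1   # big disk uses it as its journal

Losing the journal device mid-flight would still mean an unclean filesystem
and an fsck, which is the failure-mode worry above.
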
>
> >> Did you miss the last week's worth of discussion of memory limits on
> >> tmpfs?
> >
> > probably. Because I have been using tmpfs for /var/tmp/portage for ages
> > and the only problematic package is openoffice/libreoffice.
>
> I ran into trouble with Thunderbird a couple months ago, which is why
> I had to stop using tmpfs. (Also, I compile with -ggdb in CFLAGS,
> so I expect my build sizes bloat a bit more than most)
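
For anyone following along, the usual setup is just a size-capped tmpfs
mount; the size here is made up, tune it to your RAM and what you build:

  mount -t tmpfs -o size=6G,nr_inodes=1M tmpfs /var/tmp/portage

  # or in /etc/fstab:
  # tmpfs   /var/tmp/portage   tmpfs   size=6G,nr_inodes=1M   0 0

Big C++ packages, and -ggdb builds in general, are what blow past the cap.
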
>
> Anyway, the edge cases and caveats like the ones discussed are why I
> ask about what people have tried, and what mitigations, workarounds and
> technological improvements people have been working on.
--
#163933