OpenSolaris needs support for the TRIM command for SSDs. This command is
issued to an SSD to indicate that a block is no longer in use and the SSD may
erase it in preparation for future writes.
A SECURE_FREE dataset property might be added that says that when a block is
released to free space
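If a property along those lines were ever added, it would presumably be set with the ordinary dataset-property syntax. A purely hypothetical sketch (secure_free does not exist today, and tank/sensitive is an example dataset name):

    # zfs set secure_free=on tank/sensitive    (hypothetical property from the proposal above)
    # zfs get secure_free tank/sensitive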
It is a Corsair 650W modular power supply, with 2 or 3 disks per cable.
However, the Areca card is not reporting any errors, so I think power to the
disks is unlikely to be a problem.
Here's what is in /var/adm/messages
Apr 11 22:37:41 fs9 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-GH,
On Sun, Apr 11, 2010 at 23:59, Willard Korfhage wrote:
> I'm struggling to get a reliable OpenSolaris system on a file server. I'm
> running an Asus P5BV-C/4L server motherboard, 4GB ECC ram, an E3110
> processor, and an Areca 1230 with 12 1-TB disks attached. In a previous
> posting, it looked
I'm struggling to get a reliable OpenSolaris system on a file server. I'm
running an Asus P5BV-C/4L server motherboard, 4GB ECC ram, an E3110 processor,
and an Areca 1230 with 12 1-TB disks attached. In a previous posting, it looked
like RAM or the power supply may be a problem, so I ended up upg
> From: Daniel Carosone [mailto:d...@geek.com.au]
>
> Please look at the pool property "failmode". Both of the preferences
> you have expressed are available, as well as the default you seem so
> unhappy with.
I ... did not know that. :-)
Thank you.
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote:
> Heck, even if the faulted pool spontaneously sent the server into an
> ungraceful reboot, even *that* would be an improvement.
Please look at the pool property "failmode". Both of the preferences
you have expressed are available
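For anyone who wants to check or change it, a minimal sketch on a pool named tank (the pool name is an example): "wait" is the default, "continue" returns EIO to new writes, and "panic" panics the box.

    # zpool get failmode tank
    # zpool set failmode=continue tank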
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
> >
> > In the event a pool is faulted, I wish you didn't have to power cycle the
> > machine. Let all the zfs filesystems that are in that pool simply
> > disappear, and when some
On 04/11/10 12:46, Volker A. Brandt wrote:
The most paranoid will replace all the disks and then physically
destroy the old ones.
I thought the most paranoid would encrypt everything and then forget
the key... :-)
Actually, I hear that the most paranoid encrypt everything *and then*
destroy th
> The most paranoid will replace all the disks and then physically destroy
> the old ones.
I thought the most paranoid would encrypt everything and then forget
the key... :-)
Seriously, once encrypted zfs is integrated that's a viable method.
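As it eventually landed in ZFS (the syntax below is from modern OpenZFS, not the OpenSolaris builds discussed in this thread), the "encrypt and throw away the key" approach looks roughly like this; the dataset name is an example:

    # zfs create -o encryption=on -o keyformat=passphrase tank/secret
    ...use the dataset...
    # zfs unmount tank/secret
    # zfs unload-key tank/secret     (data unreadable without the passphrase)

Destroying every copy of the passphrase then leaves the on-disk blocks effectively unrecoverable.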
Regards -- Volker
On 04/11/10 10:19, Manoj Joseph wrote:
Earlier writes to the file might have left
older copies of the blocks lying around which could be recovered.
Indeed; to be really sure you need to overwrite all the free space in
the pool.
If you limit yourself to worrying about data accessible via a re
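A crude way to approximate that, assuming the pool is mounted at /tank and has neither compression nor dedup enabled (either would defeat the point), is to fill the free space and then delete the filler:

    # dd if=/dev/urandom of=/tank/fillfile bs=1M
    # sync; rm /tank/fillfile

Note this still does not touch blocks held by snapshots, and the pool is effectively full while the filler file exists.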
Hi all,
An update.
After a while I emailed Jeb Campbel, who created logfix.
Because I'm running snv_134, he suggested trying zpool import -F.
I tried it, but with no luck.
Next, I tried zpool import -FX, still with no luck. It seemed the OS was
stuck.
I assumed maybe because the HD is 1.5TB it might
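For anyone hitting the same thing, the recovery forms are (pool name is an example; -n turns -F into a dry run that only reports what would be rolled back):

    # zpool import -F tank      (discard the last few transactions if needed)
    # zpool import -Fn tank     (dry run: show what -F would discard)
    # zpool import -FX tank     (extreme rewind; can run for a very long time)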
joerg.schill...@fokus.fraunhofer.de wrote:
> The secure deletion of the data would be something that happens before
> the file is actually unlinked (e.g. by rm). This secure deletion would
> need to open the file in a non-COW mode.
That may not be sufficient. Earlier writes to the file might have left
older copies of the blocks lying around which could be recovered.
Hi all,
on Friday night two disks in one raidz2 vdev decided to die within a couple
of minutes. Swapping drives and resilvering one at a time worked quite OK;
however, now I'm faced with a nasty problem:
s07:~# zpool status -v
  pool: atlashome
 state: ONLINE
status: One or more d
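For reference, the one-at-a-time swap described above looks like this (device names are examples):

    # zpool replace atlashome c4t2d0     (resilver onto the new disk in the same slot)
    # zpool status atlashome             (watch resilver progress before touching the next disk)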
On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
>
> In the event a pool is faulted, I wish you didn't have to power cycle the
> machine. Let all the zfs filesystems that are in that pool simply
> disappear, and when somebody does "zpool status" you can see why.
In general, I agree. How wou
On Apr 10, 2010, at 11:32 PM, valrh...@gmail.com wrote:
> A theoretical question on how ZFS works, for the experts on this board.
> I am wondering about how and where ZFS puts the physical data on a mechanical
> hard drive. In the past, I have spent lots of money on 15K rpm SCSI and then
> SAS dr
On 10.04.10 21:06, Andrey Kuzmin wrote:
No, until all snapshots referencing the file in question are removed.
The simplest way to understand snapshots is to consider them as
references. Any file-system object (say, a file or block) is only
removed when its reference count drops to zero.
another thin
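A quick way to see that reference counting in action (dataset, snapshot, and file names are examples):

    # zfs snapshot tank/data@before
    # rm /tank/data/secret.dat
    # zfs list -o name,used,refer tank/data tank/data@before   (the snapshot still holds the blocks)
    # zfs destroy tank/data@before                              (only now can the space be freed)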
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> wrote:
> > Hi all
> >
> > Is it possible to securely delete a file from a zfs dataset/zpool
> once it's been snapshotted, meaning "delete (and perhaps overwrite) all
> copies of this file"?
>
> No, until all snapshots referencing
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> Thanks for the testing. So FINALLY with version > 19 does ZFS
> demonstrate production-ready status in my book. How long is it going to
> take Solaris to catch up?
Oh, it's been production-worthy for some time - just don't use u
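To see where a given build stands (the pool name is an example):

    # zpool upgrade -v          (lists every pool version this build supports)
    # zpool get version tank    (shows the version the pool is currently at)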
> From: Tim Cook [mailto:t...@cook.ms]
>
> Awesome! Thanks for letting us know the results of your tests, Ed;
> that's extremely helpful. I was actually interested in grabbing some
> of the cheaper intel SSD's for home use, but didn't want to waste my
> money if it wasn't going to handle the vari
That was a while back when I was shopping for my own HBAs. There were
compatibility warnings all over the place with some Adaptec controllers and LSI
SAS expanders.
AFAIK, even the 106x need to be operated in IT mode to properly work with SAS
expanders. IT mode disables all RAID functions of th