> (hopefully the lead itself won't be radioactive)
Or that the chips themselves don't have some alpha particle generation of their
own. It has happened, and from premium vendors.
There is no replacement for good system design :)
khb...@gmail.com
Sent from my iPod
> Enterprises will not care about ease so much as they
> have dedicated professionals to pamper their arrays.
Enterprises can afford the professionals. I work for a fairly large bank which
can, and does, afford a dedicated storage team.
On the other hand, no enterprise can afford downtime. Whe
Thanks for your reply.
What if I wrap the RAM in a sheet of lead? ;-)
(hopefully the lead itself won't be radioactive)
I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
know whether this means ECC actually works, or that ECC memory can be used but ECC will not function.
Do you?
Asus M4N7
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
> On Mon, 20 Jul 2009, Marion Hakanson wrote:
>
> It is definitely real. Sun has opened internal CR 6859997. It is now in
> Dispatched state at High priority.
>
Is there a way we can get a Sun person on this list to supply a little
bit mor
On Mon, 20 Jul 2009, Marion Hakanson wrote:
> Bob, have you tried changing your benchmark to be multithreaded? It
> occurs to me that maybe a single cpio invocation is another bottleneck.
> I've definitely experienced the case where a single bonnie++ process was
> not enough to max out the storage syst
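A rough way to test that, as a sketch only (the mount point and directory names here are made up): run several readers in parallel and compare the aggregate throughput with the single-stream result.
# run four concurrent read streams over different subtrees of the pool
for d in dir1 dir2 dir3 dir4; do
  ( cd /tank/$d && find . -type f | cpio -o > /dev/null ) &
done
wait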
I have a 10-drive raidz2 setup with one hot spare. I checked the status of my
array this morning and it had a weird reading: it shows all 10 of my drives
ONLINE with no faults, but my hot spare is also currently replacing a
perfectly OK drive:
NAME STATE READ WRITE
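If the original drive really is healthy, the usual way to return the spare to the available list is to detach it; the pool and device names below are made up:
# zpool detach tank c4t5d0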
On Mon, 20 Jul 2009, chris wrote:
> If none, maybe top-quality RAM (suggestions?) would allow me to
> forgo ECC and use a well-supported low-power Intel board
> (suggestions?) instead, with an E5200?
Even top quality RAM will not protect you from an alpha particle.
I would be surprised if the AMD
Ok, sorry for spamming - got some more info from mdb -k
devu...@zfs05:/var/crash/zfs05# mdb -k unix.0 vmcore.0
mdb: failed to read panicbuf and panic_reg -- current register set will be
unavailable
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp
scsi_vhci zfs sd ip hook n
Forgot to mention:
1. This system was installed as 2008.11, so it should have no upgrade issues.
2. Not sure how to do the mdb -k on the dump; the only thing it produced is the
following:
> ::status
debugging live kernel (64-bit) on zfs05
operating system: 5.11 snv_101b (i86pc)
> $C
>
We have just got a hang like this.
Here's the output of ps -ef | grep zfs:
root 425 7 0 Jun 17 console 0:00 /usr/lib/saf/ttymon -g -d
/dev/console -l console -m ldterm,ttcompat -h -p zfs0
root 22879 22876 0 18:18:37 ? 0:01 /usr/sbin/zfs rollback -r
tank/aa
root
bfrie...@simple.dallas.tx.us said:
> No. I am suggesting that all Solaris 10 (and probably OpenSolaris systems)
> currently have a software-imposed read bottleneck which places a limit on
> how well systems will perform on this simple sequential read benchmark.
> After a certain point (which is
OK, so the choice for a motherboard boils down to:
- Intel desktop MB, no ECC support
- Intel server MB, ECC support, expensive (requires a Xeon for SpeedStep
support). It is a shame to waste top kit doing nothing 24/7.
- AMD K8: ECC support (right?), no Cool'n'Quiet support (but maybe still cool
enough w
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101
kernel, I think) suffer hardware failure and refuse to boot. I've migrated the
disks into a SPARC system (b115) in an attempt to bring the data back online
while I see about repairing the former system. However, I'm
This all started when I decided to name the pool on my laptop "root". I didn't
think anything of it until I realized the ZFS boot info ended up in my root
user's home directory... I also had a hard time doing LUs (I'm running SXCE).
I decided to rename my pool by exporting it and reimporting it
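For reference, the rename itself is just an export followed by an import under a new name; a minimal sketch, assuming the pool is not the one currently booted from and that "tank" is the new name:
# zpool export root
# zpool import root tank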
On Mon, 20 Jul 2009, chris wrote:
That would be nice. Before developers worry about such exotic
features, I would rather that they attend to the gross performance
issues so that zfs performs at least as well as Windows NTFS or Linux
XFS in all common cases.
To each their own.
I was referring
On Tue 21/07/09 03:13 , "Andre Lue" no-re...@opensolaris.org sent:
> I have noticed this in snv_114 now at 117.
>
> I have the following filesystems.
> fs was created using zfs create pool/fs
> movies created using zfs create pool/fs/movies
> pool/fs/movies
> pool/fs/music
> pool/fs/photos
> pool/
> That would be nice. Before developers worry about such exotic
> features, I would rather that they attend to the gross performance
> issues so that zfs performs at least as well as Windows NTFS or Linux
> XFS in all common cases.
To each their own.
A FS that calculates and writes parity onto dis
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html
Second bug; it's the same link as in the first post.
I have noticed this in snv_114, and it is still there now at 117.
I have the following filesystems:
fs was created using zfs create pool/fs
movies created using zfs create pool/fs/movies
pool/fs/movies
pool/fs/music
pool/fs/photos
pool/fs/archives
at boot /lib/svc/method/fs-local fails where zfs mount -a is called. fa
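A quick way to see which of those nested datasets actually mounted after the failure, assuming the pool name used above:
# zfs list -r -o name,mountpoint,canmount,mounted pool/fs
# zfs mount -a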
On Jul 20, 2009, at 15:54, Roger wrote:
> Several PDFs out there suggest any of the following:
> a) Solaris comes with 128-bit encryption (full filesystem)
> b) Solaris supports full root encryption.
> Any truth to any of this?
> The company I work for is mandating full root encryption.
Part (a) is in-
Hello,
I am new to Solaris.
Several PDFs out there suggest any of the following:
a) Solaris comes with 128-bit encryption (full filesystem)
b) Solaris supports full root encryption.
Any truth to any of this?
The company I work for is mandating full root encryption.
Thanks.
Hello,
I cannot help with your problem, but one of my drives was also destroyed by last
weekend's storm. And what a storm it was!
Good luck with your restore.
Hello,
Just like sid81, I had the same storm destroy one of the drives in my pool.
I had two drives in a pool, one of 160GB and one of 1TB; the 160GB is dead but the
1TB is still OK.
But I did something really stupid:
I destroyed the header of the disk by typing the command zpool create piszkos
/
On 07/19/09 06:10 PM, Richard Elling wrote:
> Not that bad. Uncommitted ZFS data in memory does not tend to
> live that long. Writes are generally out to media in 30 seconds.
Yes, but memory hits are instantaneous. On a reasonably busy
system there may be buffers in queue all the time. You may hav
After rebuilding a server which included moving the disks around and
replacing one, zpool status reports the following...
# zpool status
pool: export
state: DEGRADED
status: One or more devices could not be used because the label is
missing or
invalid. Sufficient replicas exist for
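When disks have been physically rearranged, one common first step is to export and re-import the pool so the device paths get rescanned; a sketch only, using the pool name from the output above:
# zpool export export
# zpool import export
If the disk that was replaced is the one with the missing label, zpool replace on that device would be the other likely step.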
I notice that on my ZFS filesystems the "atime" of snapshots is equal to the
snapshot creation time. Is this guaranteed to be kosher? It does not seem to
change when I read the snapshot:
# ls -lu /blah/.zfs/snapshot
total 55
drwxr-xr-x 16 root root 16 Jul 7 00:43 2009-07-07-00:43
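One way to compare the two directly, with hypothetical dataset and snapshot names:
# zfs get -H -o value creation blah@2009-07-07-00:43
# ls -ldu /blah/.zfs/snapshot/2009-07-07-00:43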
F. Wessels wrote:
> Also in reply to the previous email by Will.
> Can anyone shed more light on the combination of LSI SAS HBA, the LSISASx36
> expander chip (or its relatives), and SATA disks?
> I'm investigating a migration from discrete channels (like in the Thumper) to a
> multiplexed solution via a
Hello,
I set up a NAS based on EON 0.58.9-b104 (Osol 2008/11), including a ZFS raidz
zpool. This is shared on my LAN via SMB and is working. The system runs from a
USB flash stick.
I have now installed Osol 2009/11 on a flash disk in the same machine, so I am able
to boot either EON or Osol 2009/11.
I trie
On Mon, Jul 20, 2009 at 05:44, F. Wessels wrote:
> Also in reply to the previous email by Will.
>
> Can anyone shed more light on the combination of LSI SAS HBA, the LSISASx36
> expander chip (or its relatives), and SATA disks?
> I'm investigating a migration from discrete channels (like in the thum
> which gap?
>
> 'RAID-Z should mind the gap on writes' ?
>
> Message was edited by: thometal
I believe this is in reference to the RAID-5 write hole, described here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
RAIDZ should avoid this via its copy-on-write model:
http:
On 20-Jul-09, at 6:26 AM, Russel wrote:
> Well I did have a UPS on the machine :-)
> but the machine hung and I had to power it off...
> (yep it was virtual, but that happens on direct HW too,
As has been discussed here before, the failure modes are different as
the layer stack from filesystem t
OK.
So do we have a zpool import --txg 56574 mypoolname,
or help (a script?) to do it?
Russel
Thanks a lot, Cindy!
Let me know how it goes or if I can provide more info.
Part of the bad luck I've had with that set is that it reports such errors
about once a month, then everything goes back to normal again. So I'm pretty
sure that I'll be able to try to offline the disk someday.
Lauren
> You're right, from the documentation it definitely
> should work. Still, it doesn't. At least not in
> Solaris 10. But I am not a ZFS developer, so this
> should probably be answered by them. I will give it a
> try with a recent OpenSolaris VM and check whether
> this works in newer implementations.
> the machine hung and I had to power it off.
Kinda getting off the "zpool import --txg -3" request, but
"hangs" are exceptionally rare and usually a RAM or other
hardware issue; Solaris usually abends on software faults.
r...@pdm # uptime
9:33am up 1116 day(s), 21:12, 1 user, load average:
> Hm, what are you actually referring to?
Sorry, I'm not subscribed to this list, so I just replied on the forum. This
segment of the discussion is what I'm replying to:
http://www.opensolaris.org/jive/message.jspa?messageID=397730#397730
Hi.
Hm, what are you actually referring to?
On Mon, Jul 20, 2009 at 13:45, Ross wrote:
> That's the stuff. I think that is probably your best bet at the moment.
> I've not seen even a mention of an actual tool to do that, and I'd be
> surprised if we saw one this side of Christmas.
That's the stuff. I think that is probably your best bet at the moment. I've
not seen even a mention of an actual tool to do that, and I'd be surprised if
we saw one this side of Christmas.
Well I did have a UPS on the machine :-)
but the machine hung and I had to power it off...
(yep it was virtual, but that happens on direct HW too,
and virtualisation is the happening thing at Sun and elsewhere!
I have a version of the data backed up, but it will
take ages (10 days) to restore).
Peter Farmer writes:
> Hi All,
>
> I have a zfs pool setup on one server, the pool is made up of 4 iSCSI
> luns, is it possible to migrate the zfs pool to another server? Each
> of the iSCSI luns would be available on the other server.
>
>
> Thanks,
Yes.
zpool export $mypoolname on the old serv
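Filling in the rest of that sketch with a hypothetical pool name: export on the old server, make the same iSCSI LUNs visible on the new server, then import there.
old# zpool export mypool
new# zpool import mypool
If the pool doesn't show up by name, running zpool import with no arguments lists the pools it can find.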
Hi,
Yes, I read those threads. Wow, dd directly over blocks at some offset point.
I was hoping some tools might have been created by now. Hoping...
Russel
Hi All,
I have a zfs pool setup on one server, the pool is made up of 4 iSCSI
luns, is it possible to migrate the zfs pool to another server? Each
of the iSCSI luns would be available on the other server.
Thanks,
--
Peter Farmer
After a power outage due to a thunderstorm, my 3-disk raidz1 pool has become
UNAVAILable.
It is a ZFS v13 pool using the 3 whole disks, created on FreeBSD 8-CURRENT x64,
and it worked well for over a month. Unfortunately I wasn't able to import the
pool with either a FreeBSD LiveCD or the current Op
Also in reply to the previous email by Will.
Can anyone shed more light on the combination of LSI SAS HBA, the LSISASx36
expander chip (or its relatives), and SATA disks?
I'm investigating a migration from discrete channels (like in the Thumper) to a
multiplexed solution via a SAS expander.
I'm aw
which gap?