On Tue, 29 Jul 2008, Sam wrote:
> So it says it's a minor error but still one to be concerned about. I
> thought resilvering takes care of checksum errors, does it not?
> Should I be running out to buy 3 new 500GB drives?
Presumably these are SATA drives. Studies show that typical SATA
drives tend
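For reference, a minimal sequence for handling isolated checksum errors,
assuming a pool named "pile" as in Sam's output:
  # zpool status -v pile   # list any files with permanent errors
  # zpool scrub pile       # re-read everything and repair from redundancy
  # zpool status pile      # wait until it reports "scrub completed"
  # zpool clear pile       # then reset the per-device error counters
A scrub repairs bad blocks from redundancy; resilvering only runs when a
device is replaced or reattached.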
exactly.
that's why i'm trying to get an account on that site (looks like open
registration for the forums is disabled) so i can shoot the breeze and talk
about all this stuff too.
zfs would be perfect for this as most of these guys are trying to find hardware
raid cards that will fit, etc... wit
Sam schrieb:
> I've had my 10x500 ZFS+ running for probably 6 months now and had thought it
> was scrubbing occasionally (wrong), so I started a scrub this morning; it's
> almost done now and I got this:
>
> errors: No known data errors
> # zpool status
> pool: pile
> state: ONLINE
> status: One
Could this in some way be related to the rather large (100GB) difference that 'zfs
list' and 'zpool list' report:
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
pile   4.53T  4.31T  223G   95%  ONLINE  -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pile 3.44T 120G 3.44T /pile
I know th
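The usual explanation, and likely what's happening here, is that 'zpool list'
reports raw capacity with raidz parity included, while 'zfs list' reports
usable space after parity. A rough check, assuming the ten 500GB disks form
one raidz2 vdev:
  # zpool list pile   # SIZE = raw capacity: 10 x 500GB, about 4.53T
  # zfs list pile     # USED+AVAIL = usable: roughly 8/10 of raw after parity
That puts usable space near 3.6T, in the neighborhood of what 'zfs list'
shows here.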
I've had my 10x500 ZFS+ running for probably 6 months now and had thought it
was scrubbing occasionally (wrong), so I started a scrub this morning; it's
almost done now and I got this:
errors: No known data errors
# zpool status
pool: pile
state: ONLINE
status: One or more devices has experienc
On Wed, 30 Jul 2008, Robert Milkowski wrote:
>
> Both cases are basically the same.
> Please notice I'm not talking about disabling ZIL, I'm talking about
> disabling cache flushes in ZFS. ZFS will still wait for the array to
> confirm that it did receive data (nvram).
So it seems that in your opi
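For reference, the knob being described here is presumably the
zfs_nocacheflush tunable; a sketch of setting it on Solaris, with the usual
caveat that it is only safe when every device under the pool has
battery-backed cache:
  # echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system   # persistent, needs a reboot
  # mdb -kw
  > zfs_nocacheflush/W0t1      (mdb prompt; flips it on a live system)
With this set, ZFS stops issuing SYNCHRONIZE CACHE commands but still waits
for the array to acknowledge that the write reached NVRAM.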
If I understood properly there is just one piece that has to be modified: a
flat aluminium board with a square hole in the center, which any fine mechanic
in your city should be able to make very easily...
More than the noise, the problem in this tight case might be the temperature!
Hello Bob,
Friday, July 25, 2008, 4:58:54 PM, you wrote:
BF> On Fri, 25 Jul 2008, Robert Milkowski wrote:
>> Both on 2540 and 6540 if you do not disable it your performance will
>> be very bad especially for synchronous IOs as ZIL will force your
>> array to flush its cache every time. If you
that mashie link might be exactly what i wanted...
that mini-itx board w/ 6 SATA. use CF maybe for boot (might need IDE to CF
converter) - 5 drive holder (hotswap as a bonus) - you get 4 gig ram,
core2-based chip (64-bit), onboard graphics, 5 SATA2 drives... that is cool.
however. would need to
A little case modding is maybe not so difficult... there are examples (and
instructions) like:
http://www.mashie.org/casemods/udat2.html
But for sure there are more advanced like:
http://forums.bit-tech.net/showthread.php?t=76374&pp=20
And here you have a full example of human ingenuity!!
h
Hello Bob,
Friday, July 25, 2008, 9:00:41 PM, you wrote:
BF> On Fri, 25 Jul 2008, Brandon High wrote:
>>> I am not sure if ZFS really has to wait for both sides of a mirror to
>>> finish, but even if it does, if there are enough VDEVs then ZFS can still
>>> proceed with writing.
>>
>> It would h
Obviously, I should stop answering, as all I deal with and all that I will
deal with is GA Solaris. OpenSolaris might as well not exist as far as I'm
concerned. With that in mind, I'll just keep reading and appreciating all of
the good zfs info that comes along.
Peace out.
On Tue, Jul 29, 2008 at
On Jul 29, 2008, at 2:24 PM, Chris Cosby wrote:
>
>
> On Tue, Jul 29, 2008 at 5:13 PM, Stefano Pini <[EMAIL PROTECTED]>
> wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS
> (i.e. NFS servers).
> Both servers will contain the same files and should be acces
Stefano Pini wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS
> (i.e. NFS servers).
> Both servers will contain the same files and should be accessed by
> different clients at the same time (i.e. they should be both active)
What exactly are they trying to do
I'd say some good places to look are silentpcreview.com and mini-itx.com.
I found this tasty morsel on an ad at mini-itx...
http://www.american-portwell.com/product.php?productid=16133
6x onboard SATA. 4 gig support. core2duo support. which means 64 bit = yes, 4
gig = yes, 6x sata is nice.
now
On Tue, Jul 29, 2008 at 5:13 PM, Stefano Pini <[EMAIL PROTECTED]> wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS (i.e.
> NFS servers).
> Both servers will contain the same files and should be accessed by different
> clients at the same time (i.e. they should
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
different clients at the same time (i.e. they should be both active).
So we need to guarantee that both X4500s contain the same files:
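A common building block for keeping the two in sync, sketched here with
hypothetical pool and host names, is periodic incremental zfs send/receive:
  # zfs snapshot tank/export@today
  # zfs send -i tank/export@yesterday tank/export@today | \
      ssh x4500-b zfs receive -F tank/export
Note that this gives an active/standby replica rather than two writable
copies; true active/active access to the same files needs something layered
above ZFS.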
On Tue, Jul 29, 2008 at 9:20 AM, Steve <[EMAIL PROTECTED]> wrote:
> So is Intel better? Which motherboard could be a good choice? (microatx?)
Inexpensive Intel motherboards do not support ECC memory, while all
current AMD CPUs do.
If ECC is important to you, Intel is not a good choice.
I'm disapp
Just a side comment: this discussion shows all the classic symptoms of
two groups of people with different basic assumptions, each wondering why
the other said what they did.
Getting these out in the open would be A Good Thing (;-))
--dave
Jonathan Loran wrote:
> I think the important point
I think the important point here is that this makes the case for ZFS
handling at least one layer of redundancy. If the disk you pulled was
part of a mirror or raidz, there wouldn't be data loss when your system
was rebooted. In fact, the zpool status commands would likely keep
working, and a
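As a concrete illustration (device names hypothetical), the redundancy is
declared when the pool is built, for example either of:
  # zpool create pile raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0    # single-parity raidz
  # zpool create pile mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # striped mirrors
ZFS then repairs damaged blocks through that redundancy transparently.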
I just read about this new NAS product based on OpenSolaris and ZFS. There are
lots of questions on this forum about good hardware for a home NAS box so the
hardware/software this company is using might be interesting. From their site,
they are using a "1.5 GHz Low Voltage VIA C7 processor with
Bob Friesenhahn wrote:
> On Tue, 29 Jul 2008, Emiel van de Laar wrote:
>
>> I'm not sure what to do next. Is my final pool completely lost?
>>
>
> It sounds like your "good" disk has some serious problems and that
> formatting the two disks with bad sectors was the wrong thing to do.
> Yo
waynel wrote:
>
> We have a couple of machines similar to your just
> spec'ed. They have worked great. The only problem
> is, the power management routine only works for K10
> and later. We will move to Intel Core 2 Duo for
> future machines (mainly b/c of power management
> considerations).
>
S
Mark,
Thanks for your detailed review comments. I will check where the latest
man pages are online and get back to you.
In the meantime, I can file the bugs to get these issues fixed on your
behalf.
Thanks again,
Cindy
Marc Bevand wrote:
> I noticed some errors in ls(1), acl(5) and the ZFS Adm
There may be some work being done to fix this:
zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
Discussed in this thread:
Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
Thi
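Until that RFE lands, the closest supported layouts stripe across mirror or
raidz vdevs rather than mirroring raidz itself (device names hypothetical):
  # zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0             # RAID-10 style
  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c2t0d0 c2t1d0 c2t2d0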
On Tue, 29 Jul 2008, Steve wrote:
> I agree with mike503. If you create the awareness (of the
> instability of recorded information) there is a large potential
> market waiting for a ZFS/NAS little server!
The big mistake in the posting was to assume that Sun should be in
this market. Sun has
On Tue, 29 Jul 2008, Emiel van de Laar wrote:
>
> I'm not sure what to do next. Is my final pool completely lost?
It sounds like your "good" disk has some serious problems and that
formatting the two disks with bad sectors was the wrong thing to do.
You might have been able to recover using the
A little more information today. I had a feeling that ZFS would continue for
quite some time before giving an error, and today I've shown that you can carry
on working with the filesystem for at least half an hour with the disk removed.
I suspect on a system with little load you could carry on wo
Ian Collins wrote:
> I'd like to extend my ZFS root pool by adding the old swap and root slice
> left over from the previous LU BE.
>
> Are there any known issues with concatenating slices from the same drive?
Having done this in the past (many builds ago) I found the performance
wasn't good.
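For anyone trying it anyway, the command shape is a plain zpool add (slice
name hypothetical), though root pools of this era may refuse a second
top-level vdev, and two vdevs on one spindle will make the heads seek between
them:
  # zpool add rpool c0t0d0s1   # may be rejected on a bootable root pool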
I noticed some errors in ls(1), acl(5) and the ZFS Admin Guide about ZFS/NFSv4
ACLs:
ls(1): "read_acl (r) Permission to read the ACL of a file." The compact
representation of read_acl is "c", not "r".
ls(1): "-c | -vThe same as -l, and in addition displays the [...]" The
options are in
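For readers checking the man pages against actual behavior, the compact and
verbose forms are easy to compare directly (file and user names hypothetical):
  # ls -V file.txt                               # compact form: read_acl shows as "c"
  # chmod A+user:alice:read_acl:allow file.txt   # grant read_acl to one user
  # ls -v file.txt                               # verbose form spells out read_acl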
I agree with mike503. If you create awareness (of the instability of recorded
information), there is a large potential market waiting for a little ZFS/NAS
server!
The thin client idea is very nice. It would also be good to use the NAS server
as a full server and access it remotely with a very thin cli
I didn't use any.
That would be my -ideal- setup :)
I waited and waited, and still no eSATA/Port Multiplier support out there, or
what exists isn't stable enough. So I scrapped it.
> Actually, my ideal setup would be:
> Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
> 2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
Mike, may I ask which eSATA controllers you used? I searched the Solaris HCL
and found very few listed there
Thanks
justin
Hello list,
My ZFS pool has found its way into a bad state after a period of
neglect and now I'm having trouble recovering. The pool is a three-way
mirror of which 2 disks started showing errors, and thus the pool
was degraded.
I shut down the system and started at the lowest level by using ES To
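For what it's worth, the usual sequence for a degraded mirror, before
reformatting anything, looks like this (device names hypothetical):
  # zpool status -v pool               # see which sides are FAULTED or DEGRADED
  # zpool replace pool c1t1d0 c1t3d0   # resilver a failing side onto a fresh disk
  # zpool detach pool c1t2d0           # or drop a dead side entirely
Formatting disks that still hold one good copy of the data removes the
redundancy ZFS needs to repair the rest.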