't get a single response, I have a hard time
recommending ANYONE go to Nexenta. It's great they're employing you now,
but the community edition has an extremely long way to go before it comes
close to touching the community that still hangs around here, despite
Oracle's lack of care and feeding.
http://www.nexenta.org/boards/1/topics/211
--Tim
On Fri, Jul 2, 2010 at 9:25 PM, Richard Elling wrote:
> On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
> > Given that the most basic of functionality was broken in Nexenta, and not
> Opensolaris, and I couldn't get a single response, I have a hard time
> recommending ANYONE
On Fri, Jul 2, 2010 at 9:55 PM, James C. McPherson wrote:
> On 3/07/10 12:25 PM, Richard Elling wrote:
>
>> On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
>>
>>> Given that the most basic of functionality was broken in Nexenta, and not
>>> Opensolaris, and I coul
rown jewels, then runs off to a new company and
creates a filesystem that looks and feels so similar.
Of course, taking stabs in the dark on this mailing list without having
access to all of the court documents isn't really constructive in the first
place. Then again, neither are people trying
On Mon, Jul 12, 2010 at 8:32 AM, Edward Ned Harvey
wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> >
> > Because VSS isn't doing anything remotely close to what WAFL is doing
> > when it takes snapshots.
>
> It may not do what you want it to do, but it'
*          First      Sector     Last
*          Sector     Count      Sector
*          34         1953525100 1953525133
*
*                          First      Sector     Last
* Partition  Tag  Flags    Sector     Count      Sector   Mount Directory
j...@opensolaris:~#
OK. There it is.
Should I carefully dd label 0 and 1 to the label 2 and 3 place on each drive?
What about the strange prtvtoc statuses?
Please help me: How can I import my pool? What should I do?
Tim
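For what it's worth, a quick way to see which of the four labels survive on each
disk is zdb; a minimal sketch, assuming zdb is available on the box and using a
placeholder device name (substitute your own cXtYdZs0 paths):

# zdb -l /dev/rdsk/c1t0d0s0
(prints labels 0-3; labels 0 and 1 sit at the front of the device and labels 2
 and 3 at the end, so you can see exactly which copies are gone before
 attempting anything destructive with dd)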
on that
controller)
Disk 0 (first disk at the end of that target)
http://www.idevelopment.info/data/Unix/Solaris/SOLARIS_UnderstandingDiskDeviceFiles.shtml
--Tim
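As a worked example of the naming convention described at that link (the
numbers here are made up, not taken from the thread):

c1t2d0s0
  c1 = controller (HBA) number 1
  t2 = target 2 on that controller
  d0 = disk/LUN 0 at that target
  s0 = slice 0 of that disk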
out there. Unless
they're Google and can leave a dead server in a rack for years, it's
an unsustainable plan. Out of the Fortune 500, I'd be willing to bet
there are exactly zero companies that use whitebox systems, and for a reason.
--Tim
:~# zpool import -f files
internal error: Value too large for defined data type
Abort (core dumped)
j...@opensolaris:~# zpool import -d /dev
...shows nothing after 20 minutes
Tim
Alright, I created the links
# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10
and ran 'zpool import -d /mydev'
Nothing - the links in /mydev are all broken.
Thanks again,
Tim
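The /dev/ad* names look like FreeBSD-style device nodes and don't exist on
OpenSolaris, which is why those links dangle. A minimal sketch of the same trick
using Solaris device nodes instead (the cXtYdZ names are placeholders; use
whatever format(1M) reports for your disks):

# mkdir /mydev
# ln -s /dev/dsk/c1t0d0s0 /mydev/c1t0d0s0
# ln -s /dev/dsk/c1t1d0s0 /mydev/c1t1d0s0
# zpool import -d /mydev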
On Thu, Jul 15, 2010 at 1:50 AM, BM wrote:
> On Thu, Jul 15, 2010 at 1:51 PM, Tim Cook wrote:
> > Not to mention you've then got full-time staff on-hand to constantly be
> replacing
> > parts.
>
> Maybe I don't understand something, but we also had on-ha
On Thu, Jul 15, 2010 at 9:09 AM, David Dyer-Bennet wrote:
>
> On Wed, July 14, 2010 23:51, Tim Cook wrote:
> > On Wed, Jul 14, 2010 at 9:27 PM, BM wrote:
> >
> >> On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey
> >> wrote:
> >> > I'll s
If you're simply using ZFS, there is no VMFS to worry about. You don't have to
have another ESX box if something goes wrong, any client with an nfs client
can mount the share and diagnose the VMDK.
--Tim
Short of a piss-poor NFS server
implementation, I've never once seen iSCSI beat out NFS in a VMware
environment. I have however seen countless examples of their "clustered
filesystem" causing permanent SCSI locks on a LUN that result in an entire
datastore going offline.
--Tim
ng VMFS
> resignaturing, which is also irritating.
>
> I don't want to argue with you about the other stuff.
>
>
Which is why block storage with VMware blows :)
--Tim
s in its
tracks before it really even got started (perhaps that explains the timing
of this press release) as well as killed the Opensolaris community.
Quite frankly, I think there will be an even faster decline of Solaris
installed base after this move. I know I have no interest in pushing it
anywher
On Fri, Aug 13, 2010 at 3:54 PM, Erast wrote:
>
>
> On 08/13/2010 01:39 PM, Tim Cook wrote:
>
>> http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/
>>
>> I'm a bit surprised at this development... Oracle really just doesn't
>> get it. The
unning.
>
> The previous Sun software support pricing model was completely bogus. The
> Oracle model is also bogus, but at least it provides a means for an
> entry-level user to be able to afford support.
>
> Bob
>
>
The cost discussion is ridiculous, period. $400 is a ste
ermine
> how much longer left?
>
> I'd appreciate any advice, cheers
>
>
It would be extremely beneficial for you to switch off and upgrade to 8GB.
--Tim
ute it due to the GPL. The original author is free to license the
code as many times under as many conditions as they like, and release or not
release subsequent changes they make to their own code.
I absolutely guarantee Oracle can and likely already has dual-licensed
BTRFS.
--Tim
On Mon, Aug 16, 2010 at 10:40 AM, Ray Van Dolson wrote:
> On Mon, Aug 16, 2010 at 08:35:05AM -0700, Tim Cook wrote:
> > No, no they don't. You're under the misconception that they no
> > longer own the code just because they released a copy as GPL. That
> > is
BTRFS+Oracle-license troll-ml
>
Before making yourself look like a fool, I suggest you look at the BTRFS
commits. Can you find a commit submitted by anyone BUT Oracle employees?
I've yet to see any significant contribution from anyone outside the walls
of Oracle to the project.
--Tim
egated itself to a non-player in the Linux filesystem
> space...
>
> So, yes, they can do it if they want, I just think they're not THAT
> stupid. :)
>
>
>
Or, for all you know, Chris Mason's contract has a non-compete that states
if he leaves Oracle he's not al
2010/8/16 "C. Bergström"
> Tim Cook wrote:
>
>>
>>
>> 2010/8/16 "C. Bergström" <codest...@osunix.org>
>>
>>
>> Joerg Schilling wrote:
>>
>> "C. Bergström" <codest.
e, the duplicate metadata copy might be corrupt but the problem
>>> is not detected since it did not happen to be used.
>>>
>>
>> Too bad we cannot scrub a dataset/object.
>>
>
> Can you provide a use case? I don't see why scrub couldn't start and
> stop at specific txgs for instance. That won't necessarily get you to a
> specific file, though.
> -- richard
>
I get the impression he just wants to check a single file in a pool without
waiting for it to check the entire pool.
--Tim
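In the meantime, a crude stand-in is just to read the file end to end: ZFS
verifies checksums on every read, so a clean pass tells you the copies that were
actually read are good (it does not touch every redundant copy the way scrub
does). A minimal sketch with placeholder names:

# dd if=/tank/data/somefile of=/dev/null bs=1M
# zpool status -v tank
(any checksum errors hit during the read show up in the status output)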
the face of a drive failure.
BTW, you shouldn't need one hot spare per tray of 14 disks. Unless you've got
some known bad disks or environmental issues, one spare per 2-3 trays should be
fine. Quite frankly, if you're doing raid-z3, I'd feel comfortable with one per
Thumper.
--Tim
FS
> utilities (zfs list, zpool list, zpool status) causes a hang until I replace
> the disk.
> --
>
Did you set your failmode to continue?
--Tim
rt is I cannot estimate how much life the old disks have
> left because in a few months, I am going to have a handful of the fastest
> SSDs around, and I'm not sure if I would trust them for much of anything.
>
> Am I really that wrong?
>
> Derek
>
I'll take them whe
On Tue, Oct 13, 2009 at 9:42 AM, Aaron Brady wrote:
> I did, but as tcook suggests running a later build, I'll try an
> image-update (though, 111 > 2008.11, right?)
>
It should be, yes. b111 was released in April of 2009.
--Tim
run on systems purchased as a 7000 series, Sun will
not support it on anything else.
--Tim
On Fri, Oct 16, 2009 at 1:14 PM, Frank Cusack wrote:
> On October 16, 2009 1:08:17 PM -0500 Tim Cook wrote:
>
>> On Fri, Oct 16, 2009 at 1:05 PM, Frank Cusack
>> wrote:
>>
>>> Can the software which runs on the 7000 series servers be installed
>>> on
r problem with its
latency? Assuming you aren't using absurdly large block sizes, it would
appear to fly. 0.15ms is bad?
http://blogs.sun.com/BestPerf/entry/1_6_million_4k_iops
--Tim
ch a workload in the real world. It sounds like
you're comparing paper numbers for the sake of comparison, rather than to
solve a real-world problem...
BTW, latency does not give you "# of random accesses per second".
A 5-microsecond latency for one access != number of random accesses per second, sorry.
--Tim
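To put rough numbers on the distinction: with one request outstanding, latency
does bound the rate, but only for that one stream.

  serial:      IOPS ~= 1 / latency        e.g. 1 / 0.15 ms ~= 6,700
  N in flight: IOPS ~= N / latency, until the device or interconnect saturates

So a single latency figure says little about sustained random IOPS without also
knowing the queue depth and where the device tops out.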
that can somehow take in 1 billion I/O requests, process them, have a
memory back end that can return them, but does absolutely nothing with them
for a full minute. Even if you scale those numbers down, your theory is
absolutely ridiculous.
Of course, you also failed to address the other issue. H
t and
> repository will
>also be removed shortly.
>
> The community is migrating to a new google group:
>http://groups.google.com/group/zfs-macos
>
> -- richard
>
Any official word from Apple on the abandonment?
--Tim
a
> while without losing anything? I would expect the system to resilver the
> data onto the remaining vdevs, or tell me to go jump off a pier. :)
> --
>
Jump off a pier. Removing devices is not currently supported but it is in
the works.
--Tim
expect.
http://www.sun.com/servers/x64/x4540/server_architecture.pdf
One drive per channel, 6 channels total.
I also wouldn't be surprised to find out that they found this the optimal
configuration from a performance/throughput/IOPS perspective as well. Can't
seem to find those numbers publ
t once. You're talking (VERY conservatively)
2800 IOPS.
Even ignoring that, I know for a fact that the chip can't handle raw
throughput numbers on 46 disks unless you've got some very severe raid
overhead. That chip is good for roughly 2GB/sec each direction. 46 7200RPM
drive
zfs and then upgraded to the
latest? Second, would all of the blocks be re-checksummed with a zfs
send/receive on the receiving side?
--Tim
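On the second question, the checksum used on the receiving side is whatever is
in effect for the destination dataset when the blocks are written there. A
minimal sketch, with placeholder pool/dataset names and a plain send (no -p), so
the received dataset inherits its new parent's checksum setting:

# zfs create -o checksum=sha256 backup/archive
# zfs snapshot tank/data@migrate
# zfs send tank/data@migrate | zfs recv backup/archive/data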
's backup. My assumption would be it's something coming in over the network,
in which case I'd say you're far, far better off throttling at the network
stack.
--Tim
On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling wrote:
>
> Tim has a valid point. By default, ZFS will queue 35 commands per disk.
> For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven to
> be
> relatively easy to crater performance or cause problems with ver
On Fri, Oct 23, 2009 at 7:19 PM, Adam Leventhal wrote:
> On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote:
> > So, from what I gather, even though the documentation appears to state
> > otherwise, default checksums have been changed to SHA256. Making that
> > a
On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote:
> The iostat I posted previously was from a system we had already tuned the
> zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10
> in actv per disk).
>
> I reset this value in /etc/system to 7, rebooted, and started a sc
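For anyone trying the same tuning at home: the zfs:zfs_vdev_max_pending knob
being discussed goes in /etc/system and takes effect after a reboot (the value
is whatever you are experimenting with; the mdb line is just to confirm what the
running kernel is using):

* /etc/system
set zfs:zfs_vdev_max_pending = 10

# echo zfs_vdev_max_pending/D | mdb -k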
On Sat, Oct 24, 2009 at 11:20 AM, Tim Cook wrote:
>
>
> On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote:
>
>> The iostat I posted previously was from a system we had already tuned the
>> zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10
>> in actv per disk).
On Sat, Oct 24, 2009 at 12:30 PM, Carson Gaspar wrote:
>
> I saw this with my WD 500GB SATA disks (HDS725050KLA360) and LSI firmware
> 1.28.02.00 in IT mode, but I (almost?) always had exactly 1 "stuck" I/O.
> Note that my disks were one per channel, no expanders. I have _not_ seen it
> since rep
MORE if they're forced into having to deal
with third-party vendors pointing fingers at software problems vs.
hardware problems and wasting Sun support engineers' valuable time. I think
you'd find yourself unpleasantly surprised at the end price tag.
--Tim
XYZ isn't working is
because their hardware isn't supported... oh, and they have no plans to ever
add support either.
I honestly can't believe this is even a discussion. What next, are you
going to ask NetApp to support ONTAP on Dell systems, and EMC to support
Enginuity on
an old version of zfs. Grab a new ISO.
How would you expect a system that shipped with version 10 of zfs to know
what to do with version 15?
--Tim
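Once you're booted off newer bits, bumping the pool is straightforward; a
minimal sketch with a placeholder pool name (note the upgrade is one-way, and
older kernels will no longer import the pool afterwards):

# zpool upgrade -v     (lists the versions this build supports)
# zpool upgrade tank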
On Tue, Oct 27, 2009 at 4:59 PM, dick hoogendijk wrote:
> Tim Cook wrote:
>
>>
>>
>> On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons <paulrly...@gmail.com> wrote:
>>
>>When I boot off Solaris 10 U8 I get the error that pool is
>>forma
"panic" will cause the system to panic and core dump. The only
real advantage I see in wait is that it will alert the admin to a failure
rather quickly if you aren't checking the health of the system on a regular
basis.
--Tim
It's a step up from a whitebox 2-disk mirror from some no-name
> vendor who won't exist in 6 months.
>
> --eric
>
> PS: Not having enough engineers to support a growing and paying
> customer base is a *good* problem to have. The opposite is much, much
> worse.
>
So use Nexenta?
--Tim
2009/10/28 Eric D. Mudama
> On Wed, Oct 28 at 13:40, "C. Bergström" wrote:
>
>> Tim Cook wrote:
>>
>>>
>>>
>>> PS: Not having enough engineers to support a growing and paying
>>> customer base is a *good* problem to have.
> -Kyle
>
>
Either they don't like you, or you don't read your emails :)
It's now hub.opensolaris.org for the main page.
The forums can be found at:
http://opensolaris.org/jive/index.jspa?categoryID=1
Although they appear to be having technical difficulties with the forum at
the moment.
-
I've sent this to the driver list as well, but since the zfs folks tend to
be intimately involved with the marvell driver stack, I figured I'd give you
guys a shot too.
Does anyone happen to know if there was a driver change with build 126? I
had a pool that was 2x5+1 raidz vdev's. I moved all
you Sun folks comment on this?
--Tim
m Sun. It seems the conflicts from the lawsuit may or
> may not be resolved, but still..
>
> Where's the code?
I highly doubt you're going to get any commentary from sun engineers
on pending litigation.
--Tim
d, in most cases ZFS currently provides a
*much* better solution to random data corruption than any other
filesystem+fsck on the market.
The code for the putback of 2009/479 allows reverting to an earlier uberblock
AND defers the re-use of blocks for a short time to make this
Orvar Korvar wrote:
Does this putback mean that I have to upgrade my zpool, or is it a zfs tool? If
I missed upgrading my zpool I am smoked?
The putback did not bump zpool or zfs versions. You shouldn't have to upgrade
your pool.
-tim
The current build in-process is 128 and that's the
build into which the changes were pushed.
-tim
ivers causing the problem or not.
It's tough to say what exactly is causing the problems. I would imagine
ripping something like sd from the older version would break more than it
would fix.
--Tim
On Sat, Nov 7, 2009 at 12:02 PM, Cindy Swearingen
wrote:
> Hi Tim and all,
>
> I believe you are saying that marvell88sx2 driver error messages started
> in build 126, along with new disk errors in RAIDZ pools.
>
> Is this correct? If so, please send me the following informati
le
> personal information.
>
> Thanks in advance.
>
> Leandro.
>
> --
>
Of course, it doesn't matter which drive is plugged in where. When you
import a pool, zfs scans the headers of each disk to verify if they're part
of a pool or no
Rich Teer wrote:
Congrats for integrating dedup! Quick question: in what build
of Nevada will dedup first be found? b126 is the current one
presently.
Cheers,
128
-tim
ratio from 1.16x to 1.11x, which seems to indicate
that dedup does not detect that the English text is identical in every file.
Theory: Your files may end up being in one large 128K block or maybe a
couple of 64K blocks where there isn't much redundancy to de-dup.
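One way to poke at that theory (dataset and pool names are placeholders;
changing recordsize only affects blocks written afterwards):

# zfs get recordsize,dedup tank/docs
# zpool list tank        (the DEDUP column is the pool-wide ratio)
# zfs set recordsize=8k tank/docs
Smaller records give dedup more chances to find identical blocks, at the cost
of more metadata.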
nstall b125? Like "0.5.12-0.125"?
>
No. That's the SunOS version number, and you should always use 0.5.11- for
anything in OpenSolaris today. Solaris 10 = "5.10", OpenSolaris = "5.11",
Solaris 9 = "5.9", etc.
http://en.wikipedia.org/wiki/Solaris_%28operating
his VM, I'm prepared to do that.
>
>
> Is this idea retarded? Something you would recommend or do yourself? All of
> this convenience is pointless if there will be significant problems, I would
> like to eventually serve production serve
On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty wrote:
> Tim Cook wrote:
>
> On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote:
>
>> I'm entertaining something which might be a little wacky, I'm wondering
>> what your general reaction to this scheme might be :)
>&
On Sun, Nov 8, 2009 at 11:37 AM, Joe Auty wrote:
> Tim Cook wrote:
>
> On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty wrote:
>
>> Tim Cook wrote:
>>
>> On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote:
>>
>>> I'm entertaining something which mi
On Sun, Nov 8, 2009 at 11:48 AM, Joe Auty wrote:
> Tim Cook wrote:
>
>
>
>> It appears that one can get more in the way of features out of VMWare
>> Server for free than with ESX, which is seemingly a hook into buying more
>> VMWare stuff.
>>
>> I
the entire product suite. vCenter is only
required for advanced functionality like HA/DPM/DRS that you don't have with
VMware server either.
Are you just throwing out buzzwords, or do you actually know what they do?
--Tim
you'd probably lose a
lot of other data at the same time. We don't offer the ability to rollback if
the pool can be opened/imported successfully anyway.
-tim
> > driver change with build 126?
> not for the SATA framework, but for HBAs there is:
> http://hub.opensolaris.org/bin/view/Community+Group+on/2009093001
>
> I will find a thumper, load build 125, create a raidz pool, and
> upgrade to b126.
>
> I'll also send the error
Anyone have any thoughts? I'm trying to figure out how to get c7t6d0 back
to being a hotspare since c7t5d0 is installed, there, and happy. It's
almost as if it's using both disks for "spare-11" right now.
--Tim
d to
corrupt blocks that are part of an existing snapshot though, as they'd be
read-only. The only way that should even be able to happen is if you took a
snapshot after the blocks were already corrupted. Any new writes would be
allocated from new blocks.
--Tim
On Tue, Nov 10, 2009 at 3:19 PM, A Darren Dunham wrote:
> On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote:
> > No. The whole point of a snapshot is to keep a consistent on-disk state
> > from a certain point in time. I'm not entirely sure how you managed to
>
On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen
wrote:
> Hi Tim,
>
> I'm not sure I understand this output completely, but have you
> tried detaching the spare?
>
> Cindy
>
>
Hey Cindy,
Detaching did in fact solve the issue. During my previous issues when the
            c7t3d0  ONLINE       0     0     0  2.05G resilvered
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
        spares
          c7t6d0    AVAIL

errors: No known data errors
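For the archives, the detach that cleared it is just the following (the pool
name here is a placeholder for whatever the pool is actually called):

# zpool detach tank c7t6d0
# zpool status tank      (c7t6d0 should drop back to AVAIL under spares)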
e same boat, it should constantly be filling and
emptying as new data comes in. I'd imagine the TRIM would just add
unnecessary overhead. It could in theory help there by zeroing out blocks
ahead of time before a new batch of writes come in if you have a period of
little I/O. My thou
On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook wrote:
>
>
> On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling wrote:
>
>>
>> On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote:
>>
>> Does this mean that there are no driver changes in marvell88sx2, between
>>
gelog-b126.html
--Tim
On Wed, Nov 11, 2009 at 11:51 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Tue, 10 Nov 2009, Tim Cook wrote:
>
>>
>> My personal thought would be that it doesn't really make sense to even
>> have it, at least for readzilla. In theory, you al
p? Am I just missing something obvious? Detach seems to only apply
to mirrors and hot spares.
--Tim
On Wed, Nov 11, 2009 at 12:29 PM, Darren J Moffat
wrote:
> Joerg Moellenkamp wrote:
>
>> Hi,
>>
>> Well ... i think Darren should implement this as a part of zfs-crypto.
>> Secure Delete on SSD looks like quite challenge, when wear leveling and bad
>> block relocation kicks in ;)
>>
>
> No I won't
I've tried exporting and
importing the pool, and it doesn't make a difference.
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
fserv   3.25T  2.73T   532G   84%  ONLINE  -
--Tim
On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen
wrote:
> Hi Tim,
>
> In a pool with mixed disk sizes, ZFS can use only the amount of disk
> space that is equal to the smallest disk and spares aren't included in
> pool size until they are used.
>
> In your RAIDZ-2 pool
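A hypothetical example of that rule, with invented sizes: in a raidz2 vdev the
usable space is roughly (number of disks - 2) x the smallest member, so

  7 x 1TB + 1 x 750GB in a single raidz2 vdev
  usable ~= (8 - 2) x 750GB = 4.5TB

and the extra 250GB on each 1TB disk stays unused until the smaller disk is
replaced with a bigger one and the vdev can grow.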
previous thread, Adam had said that it automatically keeps
more copies of a block based on how many references there are to that block.
IE: If there's 20 references it would keep 2 copies, whereas if there's
20,000 it would keep 5. I'll have to see if I can dig up th
problem with the SCSI bus
> termination or a bad cable?
>
>
> Bob
>
SCSI? Try PATA ;)
--Tim
es scrub finish in 8h, and
> then rearranging the SATA cables, it takes 15h - with the same data?
>
>
What's the motherboard model?
--Tim
sponse because the first result on google
should have the answer you're looking for. In any case, if memory serves
correctly, Jeff's blog should have all the info you need:
http://blogs.sun.com/bonwick/entry/raid_z
--Tim
t space possible out of them.
>
So have two raidsets. One with the 1TB drives, and one with the 300's.
--Tim
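A minimal sketch of that layout as one pool with two raidz vdevs, so each set
stays uniform but writes stripe across both (device names are placeholders):

# zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz c2t0d0 c2t1d0 c2t2d0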
ll just stripe across all the drives. You're taking a performance
penalty for a setup that essentially has zero redundancy. If you lose a 500GB
drive, you lose everything.
--Tim
2008/07/19/opensolaris-upgrade-instructions/
If you want the latest development build, which would be required to get to
a version 21 zpool, you'd need to change your repository.
http://pkg.opensolaris.org/dev/en/index.shtml
--Tim
On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 15 Nov 2009, Tim Cook wrote:
>
>>
>> Once again I question why you're wasting your time with raid-z. You might
>> as well just stripe across all the drives. You
On Mon, Nov 16, 2009 at 2:10 PM, Martin Vool wrote:
> I encountered the same problem... like I said in the first post, the zpool
> command freezes. Does anyone know how to make it respond again?
> --
>
>
Is your failmode set to wait?
--Tim
On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool wrote:
> I actually already got my files back and the disk already contains new
> pools, so I have no idea how it was set.
>
> I have to make a virtualbox installation and test it.
> Can you please tell me how-to set the failmode?
>
>
>
http://prefetch
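Short version, since the link got cut off: failmode is a pool property. A
minimal sketch with a placeholder pool name (wait is the default and blocks I/O
until the device returns, continue returns errors but keeps the pool
responsive, panic crashes the box):

# zpool get failmode tank
# zpool set failmode=continue tank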
ocess that stream (sshing to a remote server and doing
a zfs recv, for example)
If you do use that functionality, it'd be good to drop a mail to the
thread[1] on the zfs-auto-snapshot alias.
It's been a wild ride, but my work on zfs-auto-snapshot is done I
think :-)
cheers
>
Also, I never said anything about setting it to panic. I'm not sure why you
can't set it to continue while alerting you that a vdev has failed?
--
--Tim
ions
> on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to
> date, however, the system is reporting no updates are available and stays at
> zfs v19, any ideas?
>
>
v21 isn't included in b127. As far as I know, the only way to get to 21 is
to buil
On Wed, Nov 18, 2009 at 12:49 PM, Jacob Ritorto wrote:
> Tim Cook wrote:
>
> > Also, I never said anything about setting it to panic. I'm not sure why
> > you can't set it to continue while alerting you that a vdev has failed?
>
>
> Ah, right, thanks for the
ay be going awry, could
> anyone tell me or point me in the right direction?
>
> Thanks,
> Emily
>
> --
> Emily
>
CIFS information generally gets dumped into /var/adm/messages. What do you
mean by "it stops working"? You have to remount the shar