l power sources can
> do. Conveniently, they also allow you to do a remote hard-reset of hung
> boxes without walking to the server room ;)
>
> My 2c,
> //Jim Klimov
>
>
Any modern JBOD should have the intelligence built in to stagger drive
spin-up. I wouldn't spend money o
would
not consider that production ready.
--Tim
tros as long as you
are at a compatible zpool version (which they currently are). I'd avoid
deduplication unless you absolutely need it... it's still a bit of a
kludge. Stick to compression and your world will be a much happier place.
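As a rough illustration (hypothetical pool/dataset names), checking the
pool version on both sides and switching on compression looks like:

  # zpool get version tank      # compare on both distros
  # zpool upgrade -v            # list versions this binary supports
  # zfs set compression=on tank/data
  # zfs get compressratio tank/data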
--Tim
On Mon, Feb 25, 2013 at 7:57 AM, Volker A. Brandt wrote:
> Tim Cook writes:
> > > I need something that will allow me to share files over SMB (3 if
> > possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, I would
> > like something I can manage "ea
Thanks.
>
>
All of them should provide the basic functionality you're looking for.
None of them will provide SMB3 (at all) or AFP (without a third party
package).
--Tim
t is busy
>
> Does this do what you want? (zpool destroy is already undo-able)
>
> Jan
>
>
That suggestion makes the very bold assumption that you want a
long-standing snapshot of the dataset. If it's a rapidly changing dataset,
the snapshot will become an issue very quickly.
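For reference, the undo Jan alludes to works roughly like this
(hypothetical pool name, from memory):

  # zpool destroy tank
  # zpool import -D        # lists destroyed pools still intact on disk
  # zpool import -D tank   # brings the destroyed pool back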
--Tim
On Wed, Feb 20, 2013 at 6:47 PM, Richard Elling wrote:
> On Feb 20, 2013, at 3:27 PM, Tim Cook wrote:
>
> On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling
> wrote:
>
>> On Feb 20, 2013, at 2:49 PM, Markus Grundmann
>> wrote:
>>
>> Hi!
>>
>> My
rved from fewer spindles than
> data written after the new vdev is added. Performance with the newer data
> should be improved.
>
> Bob
>
That depends entirely on how full the pool is when the new vdev is added,
and how frequently the older data changes, snapshot
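To see how writes actually land after growing a pool, something along
these lines works (hypothetical devices):

  # zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0
  # zpool iostat -v tank 5   # per-vdev capacity and I/O, every 5 seconds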
t;deny", so
that means you either have to give *everyone* all permissions besides
delete, or you have to go through every user/group on the box and give
specific permissions and on top of not allowing destroy. And then if you
change your mind later you have to go back through and give everyo
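A sketch of what that delegation looks like in practice (hypothetical
user and dataset); note you can only enumerate grants, never a deny:

  # zfs allow -u alice create,mount,snapshot,rollback tank/home
  # zfs allow tank/home              # show the delegations
  # zfs unallow -u alice tank/home   # and change your mind later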
s,
> Markus
>
>
>
I think you're underestimating your English, it's quite good :) In any
case, I think the proposal is a good one. With the default behavior being
off, it won't break anything for existing datasets, and it can absolutely
help prevent a fat finger
On Sun, Feb 17, 2013 at 8:58 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> >
> > Why would I spend all that time and
> > energy participating in ANO
rrent maintainers decide they
no longer wish to contribute to the project. On the flip side, I think we
welcome all Oracle employees to participate in that list should corporate
policy allow you to.
--Tim
On Sat, Feb 16, 2013 at 11:21 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> >
> > That would be the logical decision, yes. Not to poke fun, but did yo
s the same company that refused to release any Java patches until
the DHS issued a national warning suggesting that everyone uninstall Java.
--Tim
...
>
>
I have a few coworkers using it. No horror stories and it's been in use
about 6 months now. If there were any showstoppers I'm sure I'd have heard
loud complaints by now :)
--Tim
f statistics I think would be
useful/interesting to record over time.
--Tim
On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling wrote:
> On Jan 20, 2013, at 8:16 AM, Edward Harvey
> wrote:
> > But, by talking about it, we're just smoking pipe dreams. Cuz we all
> know zfs is developmentally challenged now. But one can dream...
>
I disagree that ZFS is developmentally chal
's.
The HP H221 is the newer SAS2008 based HBA that replaces the SC08Ge,
it's definitely a pure HBA as I have one but I don't have any external
disk shelves to test with currently.
http://h18004.www1.hp.com/products/quickspecs/14222_div/14222_div.html
n IT firmware.
The Dell method is more involved but it's the only way that I've
managed to get a Dell H200 cross-flashed.
Seems like the M1015 has spiked in price again on eBay (US) whilst
the H200 is still under $100.
--
Tim Fletcher
On 07/01/13 21:16, Sašo Kiselkov wrote:
On 01/07/2013 09:32 PM, Tim Fletcher wrote:
On 07/01/13 14:01, Andrzej Sochon wrote:
Hello *Sašo*!
I found you here:
http://mail.opensolaris.org/pipermail/zfs-discuss/2012-May/051546.html
“How about reflashing LSI firmware to the card? I read on Dell
ow to get LSI firmware and reflash
Dell H310.
I've successfully crossflashed Dell H200 cards with this method
http://forums.servethehome.com/showthread.php?467-DELL-H200-Flash-to-IT-firmware-Procedure-for-DELL-servers
--
Tim Fletcher
related to raidz but unsure.
>
>
Why don't you just use a SAN that supports full drive encryption? There
should be basically 0 performance overhead.
--Tim
dware...
> Or, rather, shop for the equivalent non-appliance servers...
>
> //Jim
>
You'd be paying a massive premium to buy them and then install some other
OS on them. You'd be far better off buying equivalent servers.
--Tim
On Mon, Nov 12, 2012 at 10:39 AM, Trond Michelsen wrote:
> On Sat, Nov 10, 2012 at 5:00 PM, Tim Cook wrote:
> > On Sat, Nov 10, 2012 at 9:48 AM, Jan Owoc wrote:
> >> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen
> >> wrote:
> >>> How can I replace t
only have
a very small pool, and have the ability to add an equal amount of storage
to dump to. Probably not a big deal if you've only got a handful of
drives, or if the drives you have are small and you can take downtime.
Likely impossible for OP with 42 large drives.
--Tim
ade
up of a different block structure. Not happening.
*insert everyone saying they want bp_rewrite and the guys who have the
skills to do so saying their enterprise customers have other needs*
--Tim
are far more common at
> the 1/2Gb range, while PCI-E starts to be the most common choice at 4Gb+
>
> Here's a list of all the old Sun FC HBAs (which can help you sort out
> which are for x64 systems, and which were for SPARC systems):
>
>
> http://www.oracle.com/technetwork/d
The built-in drivers support MPxIO, so you're good to go.
On Friday, October 19, 2012, Christof Haemmerle wrote:
> Yep i Need. 4 Gig with multipathing if possible.
>
> On Oct 19, 2012, at 10:34 PM, Tim Cook <t...@cook.ms> wrote:
>
>
>
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen wrote:
> On 10/20/2012 01:10 AM, Tim Cook wrote:
> >
> >
> > On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen <sensi...@gmx.net> wrote:
> >
> > On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
>
On Friday, October 19, 2012, Christof Haemmerle wrote:
> hi there,
> i need to connect some old raid subsystems to a opensolaris box via fibre
> channel. can you recommend any FC HBA?
>
> thanx
>
How old? If it's 1Gbit, you'll need a 4Gb or slower HBA. Qlogic woul
file format" are related to these
> > two formats. "FITS btrfs" didn't return anything specific to the file
> > format, either.
>
> It's not too late to change it, but I have a hard time coming up with
> some better name. Also, the format is stil
On 10/01/2012 09:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Just perform a bunch of writes, time it. Then set sync=disabled,
perform the same set of writes, time it. Then enable sync, add a ZIL
device, time it. The third option will be somewhere in between the
first
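Concretely, the three timed runs described map to something like this
(hypothetical dataset and slog device, untested):

  # zfs set sync=standard tank/test   # run 1: default sync behavior
  # zfs set sync=disabled tank/test   # run 2: upper bound, sync ignored
  # zfs set sync=standard tank/test
  # zpool add tank log c9t0d0         # run 3: dedicated slog device

with the same write workload timed after each step.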
On Thu, Sep 27, 2012 at 12:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> > Sent: Wednesday, September 26, 2012 3:45 PM
> >
> > I would sugge
force import it on the other
> host.
>
>
> Can anybody think of a reason why Option 2 would be stupid, or can you
> think of a better solution?
>
>
>
I would suggest that if you're doing a crossover between systems, you use
InfiniBand rather than etherne
ore you
take snapshot 2, snapshot 2 will only capture the final state of the file.
You will not get 50 revisions of the file. This is not continuous data
protection; it's a point-in-time copy.
--Tim
change rate, and
how long you keep the snapshots around, it may very well be true. It's not
universally true, but it's also not universally false.
--Tim
No. Missing slogs is a potential data-loss condition. Importing the pool
> without
> slogs requires acceptance of the data-loss -- human interaction.
> -- richard
>
> --
> ZFS Performance and Training
> richard.ell...@richardelling.com
> +1-760-896-4422
>
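If I recall correctly, the human interaction Richard mentions is the -m
flag at import time (hypothetical pool name):

  # zpool import -m tank   # explicitly accept importing with missing slogs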
> So, the "10 extra reads" will sometimes be true - if the duplicate block
> doesn't already exist in ARC. And the "10 extra reads" will sometimes be
> false - if the duplicate block is already in ARC.
Sašo: yes, it's absolutely worth implementing a higher
to the disk. Scrub more often!
>
> --
> Dan.
>
>
>
>
Personally unless the dataset is huge and you're using z3, I'd be scrubbing
once a week. Even if it's z3, just do a window on Sundays or something so
that you at
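A root crontab entry is enough for the Sunday-window idea (pool name and
hour are placeholders):

  0 2 * * 0 /usr/sbin/zpool scrub tank   # every Sunday at 02:00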
Oracle never promised anything. A leaked internal memo does not signify an
official company policy or statement.
On Apr 18, 2012 11:13 AM, "Freddie Cash" wrote:
> On Wed, Apr 18, 2012 at 7:54 AM, Cindy Swearingen
> wrote:
> >>Hmmm, how come they have encryption and we don't?
> >
> > As in Solar
On Fri, Apr 13, 2012 at 11:46 AM, Freddie Cash wrote:
> On Fri, Apr 13, 2012 at 9:30 AM, Tim Cook wrote:
> > You will however have an issue replacing them if one should fail. You
> need
> > to have the same block count to replace a device, which is why I asked
> for a
>
bably skip that step.
>
You will however have an issue replacing them if one should fail. You need
to have the same block count to replace a device, which is why I asked for
a "right-sizing" years ago. Deaf ears :/
--Tim
>
>
gt; > RAID" would go into just making another write-block allocator
> > in the same league "raidz" or "mirror" are nowadays...
> > BTW, are such allocators pluggable (as software modules)?
> >
> > What do you think - can and should
ttp://www.RichardElling.com
> illumos meetup, Jan 10, 2012, Menlo Park, CA
> http://www.meetup.com/illumos-User-Group/events/41665962/
>
>
>
Speaking of illumos, what exactly is the deal with the zfs discuss mailing
list? There's all of 3 posts that show up for
g
to be a nightmare long-term (which is why most products use a version
number in the first place).
--Tim
you are proactively looking for them.
>
> myers
>
>
>
>
Or, if you aren't scrubbing on a regular basis, just change your zpool
failmode property. Had you set it to wait or panic, it would've been very
clear, very quickly that something was wrong.
http:/
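For reference (hypothetical pool name):

  # zpool get failmode tank
  # zpool set failmode=wait tank    # hang I/O until the fault clears
  # zpool set failmode=panic tank   # or crash loudly so you notice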
Do you still need to do the grub install?
On Dec 15, 2011 5:40 PM, "Cindy Swearingen"
wrote:
> Hi Anon,
>
> The disk that you attach to the root pool will need an SMI label
> and a slice 0.
>
> The syntax to attach a disk to create a mirrored root pool
> is like this, for example:
>
> # zpool att
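From memory, the attach sequence being quoted typically continues like
this on x86 (device names are placeholders):

  # zpool attach rpool c1t0d0s0 c1t1d0s0
  # zpool status rpool    # wait for the resilver to complete
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

(on SPARC it's installboot rather than installgrub)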
? I seem to recall people on this mailing
list using mbuffer to speed it up because it was so bursty and slow at one
point. IE:
http://blogs.everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
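The pipeline from that post is along these lines (hosts, port, and buffer
sizes are placeholders):

  receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/data
  sender#   zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090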
--Tim
On Tue, Oct 18, 2011 at 3:27 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
> >
> >
> > On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> > wrote:
> >>
> >> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >>
On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >
> > Every scrub I've ever done that has found an error required manual
> fixing.
> > Every pool I've ever created has been raid-z or raid-z2, so the
On Tue, Oct 18, 2011 at 2:41 PM, Kees Nuyt wrote:
> On Tue, 18 Oct 2011 12:05:29 -0500, Tim Cook wrote:
>
> >> Doesn't a scrub do more than what
> >> 'fsck' does?
> >>
> > Not really. fsck will work on an offline filesystem to correct er
ut it's good to have it anyways, and is critical for
> > personal systems such as laptops.
>
> IIRC, fsck was seldom needed at
> my former site once UFS journalling
> became available. Sweet update.
>
> Mark
>
>
We all hope to never have to run fsck, but not having it at all is a bit of
a non-starter in most environments.
--Tim
blade for
> VM farming needs, but it would consume much of the LAN
> bandwidth of the blades using its storage services.
>
> Today, HDDs aren't fast, and are not getting faster.
>> -- richard
>>
> Well, typical consumer disks did get about 2-3 times faster for
> linear RW speeds over the past decade; but for random access
> they do still lag a lot. So, "agreed" ;)
>
> //Jim
>
>
Quite frankly, your choice of blade chassis was a horrible design decision.
From your description of its limitations it should never be the building
block for a vmware cluster in the first place. I would start by rethinking
that decision instead of trying to pound a round ZFS peg into a square hole.
--Tim
t; file in vain - it will be equally available to the "new host"
>>> at the correct point in migration, just as it was accessible
>>> to the "old host".
>>>
>> Again. NFS/iscsi/IB = ok.
>>
>
> True, except that this is not an optima
explicitly states doesn't
offer support from the parent company. Nobody from Oracle is going to show
up with a patch for you on this mailing list because none of the Oracle
employees want to lose their job and subsequently be subjected to a
lawsuit. If that's what you're planning o
What are the specs on the client?
On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote:
> Dear all.
> We finally got all the parts for our new fileserver following several
> recommendations we got over this list. We use
>
> Dell R715, 96GB RAM, dual 8-core Opterons
> 1 10GE Intel dual-port NIC
> 2 LSI 920
On Tue, Jun 14, 2011 at 3:16 PM, Frank Van Damme
wrote:
> 2011/6/10 Tim Cook :
> > While your memory may be sufficient, that cpu is sorely lacking. Is it
> even
> > 64bit? There's a reason intel couldn't give those things away in the
> early
> > 2000s
llow Exchange to
be more "storage friendly" (IE: more of a large sequential I/O profile),
they've done away with SIS. The defense for it is that you can buy more
"cheap" storage for less money than you'd save with SIS and 15k rpm disks.
Whether that's factual I suppose is for the reader to decide.
--Tim
ci8086,32c@0/pci11ab,11ab@1/disk@2,0
> 9. c8t3d0
>
> So the question is, why didn't it expand? And can I fix it?
>
>
Autoexpand is likely turned off.
http://download.oracle.com/docs/cd/E19253-01/819-5461/githb/index.html
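Quick check, using your device name and a placeholder pool name:

  # zpool get autoexpand tank
  # zpool set autoexpand=on tank
  # zpool online -e tank c8t3d0   # expand an already-online device in place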
--Tim
made workarounds for the OS to come up okay,
> though. Since the root pool is separate, I removed
> "pool" and "dcpool" from zpool.cache file, and now the
> OS milestones do not depend on them to be available.
>
> Instead, importing the "pool" (with cachefile=none),
;
>
I'd go with the option of allowing both a weighted and a forced option. I
agree though, if you do primarycache=metadata, the system should still
attempt to cache userdata if there is additional space remaining.
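Today's knobs, for comparison (hypothetical dataset):

  # zfs set primarycache=metadata tank/db   # ARC keeps metadata only
  # zfs set secondarycache=all tank/db      # L2ARC may still take userdata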
--Tim
S600 (300Gb SAS 15kRPM) - or is it
> a waste of money? (And are they known to work in these boxes?)
>
> > > Hint: Nexenta people seem to be good OEM friends with
> > Supermicro, so they
> > > might know ;)
> >
> > Yes :-)
> > -- richard
>
> Thanks!
>
> //Jim Klimov
>
>
SAS drives are SAS drives; they aren't like SCSI. There aren't 20 different
versions with different pinouts.
Multipathing is handled by mpxio.
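Enabling it is a one-liner plus a reboot, if memory serves:

  # stmsboot -e   # turn on MPxIO for supported controllers
  # stmsboot -L   # after reboot, map old device names to new ones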
--Tim
't just Microsoft at all. There were three vendors on the
original RFC, and one of the authors was Paul Vixie... the author of BIND.
http://www.ietf.org/rfc/rfc2782.txt
You should probably do a bit of research before throwing out claims like
that to try to shoot someone down.
--Tim
d to sell licenses
> to an open source product)?
>
Because they OWN the code, and the patents to protect the code.
--Tim
sed on the
FreeNAS/FreeBSD code. I don't think they have a full-blown implementation
of CIFS (just Samba), but other than that, I don't think you'll have too
many issues. I actually considered moving over to it, but I made the
unfortunate mistake of upgrading to Solaris 11 Expre
at LUN creation time. You
still align to a 4K block on a filer because there is no way to
automatically align an encapsulated guest, especially when you could have
different guest OS types on a LUN.
--Tim
u choose blocksizes carefully to make them align. But that seems
> complicated and likely to fail.
>
>
>
That's patently false. VM images are the absolute best use-case for dedup
outside of backup workloads. I'm not sure who told you/where you got the
idea th
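Before flipping the switch you can ask zdb to simulate it (hypothetical
pool name; it walks every block, so expect it to take a while):

  # zdb -S tank                 # prints a simulated dedup table histogram
  # zfs set dedup=on tank/vms   # note: only affects newly written blocks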
w they're allocating
resources from soup to nuts.
As far as this discussion is concerned, there are only two points that matter:
They've got dedup on primary storage, it works in the field. The rest is
just static that doesn't matter. Let's focus on how to mak
On Wed, May 4, 2011 at 6:51 PM, Erik Trimble wrote:
> On 5/4/2011 4:44 PM, Tim Cook wrote:
>
>
>
> On Wed, May 4, 2011 at 6:36 PM, Erik Trimble wrote:
>
>> On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
>>
>>> On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon H
souces for EACH pool ALL the time,
> and can't really share them well if it expects to keep performance from
> tanking... (no pun intended)
>
>
On a 2050? Probably not. It's got a single-core mobile Celeron CPU and
2GB of RAM. You couldn't even run ZFS on that box, much
back.
>
>- Garrett
>
>
>
That's mature. "If you don't like it, fork it yourself". With responses
like that, I can only imagine how quickly you're going to build up steam
behind your project outside of th
2011/3/12 Fred Liu
> Tim,
>
>
>
> Thanks.
>
>
>
> Is there a mapping mechanism like what Data ONTAP does to map the
> permission/acl between NIS/LDAP and AD?
>
>
>
> Thanks.
>
>
>
> Fred
>
>
>
> *From:* Tim Cook [mailto:t...@coo
f
you're using NFSv4 with AD integration, it's a bit more manageable, but it's
still definitely a work in progress.
--Tim
ith Sun OEM disks as well). It's ridiculous they don't take
into account the slight differences in drive sizes from vendor to vendor.
Forcing you to single-source your disks is a bad habit to get into IMO.
--Tim
On Tue, Jan 4, 2011 at 8:21 PM, Garrett D'Amore wrote:
> On 01/ 4/11 09:15 PM, Tim Cook wrote:
>
>
>
> On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore wrote:
>
>> On 01/ 3/11 05:08 AM, Robert Milkowski wrote:
>>
>> On 12/26/10 05:40 AM, Tim Cook wr
On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore wrote:
> On 01/ 3/11 05:08 AM, Robert Milkowski wrote:
>
> On 12/26/10 05:40 AM, Tim Cook wrote:
>
>
>
> On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling wrote:
>
>>
>> There are more people outs
ty and bugfixes to ZFS. I've seen exactly nothing out of
"outside of Oracle" in the time since it went closed. We used to see
updates bi-weekly out of Sun. Nexenta spending hundreds of man-hours on a
GUI and userland apps isn't work on ZFS.
--Tim
ilized
> country?
>
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
> Timezone: US/Pacific (GMT-0800)
>
>
If you've got enough money, we do. You just have to make it to the end of
the trial, and have a judge who feels similarly. They often award monetary
settlements for the cost of legal defense to the victor.
--Tim
their time for free instead of collecting a
paycheck since it's quite obvious they should no longer be able to charge
for their product.
What I find most entertaining is all the armchair lawyers on this mailing
list that think they've got prior art when THEY'VE NEVER EV
Just boot off a live cd, import the pool, and swap it that way.
I'm guessing you haven't changed your failmode to continue?
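Rough sketch (pool and disk names are placeholders):

  # zpool import -f -R /mnt tank   # force-import under an alternate root
  # zpool replace tank c0t2d0      # swap in the new disk
  # zpool export tank              # then boot back into FreeNAS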
On Dec 20, 2010 10:48 AM, "Albert Frenz" wrote:
> hi there,
>
> I got FreeNAS installed with a raidz1 pool of 3 disks. One of them now
failed and it gives me errors like "Unr
You have to have a support contract to download BIOS and firmware now.
On Dec 19, 2010 12:29 PM, "Eugen Leitl" wrote:
>
> I realize this is off-topic, but Oracle has completely
> screwed up the support site from Sun. I figured someone
> here would know how to obtain
>
> Sun Fire X2100 M2 Server So
-
>
>
Random IOPS won't max out the SAS link. You'll be fine stacking them. But
again, if you have the ports available, and already have the cables, it
won't hurt anything to use them.
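Back-of-the-envelope, assuming a 4-lane 3Gb/s wide port and ordinary
spinning disks:

  4 lanes x 3Gb/s            = 12Gb/s, roughly 1.2 GB/s after encoding
  24 disks x ~150 IOPS x 8KB = ~28 MB/s of random I/O

so random workloads sit far below the link's ceiling; only big sequential
streams come close.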
--Tim
ern you will get more peak
bandwidth putting them on separate ports.
--Tim
ills. Claiming you'd
start paying for Solaris if they gave you ZFS for free in Linux is
absolutely ridiculous. If the best response you can come up with is
"goodwill", I suggest wishing in one hand and shitting in the other because
there's no way Oracle is going to give away such
ity, and
simply adds more complexity. If you're doing iSCSI across a WAN (I really
hope you aren't), you'd be better served using a VPN. If you're doing it on
a LAN and you're concerned about security, use VLANs. It's generally a
good idea to dedicate a VLAN to VMware storage traffic anyway (whether it
be iSCSI or NFS) if your infrastructure can handle VLANs.
--Tim
On Sun, Dec 12, 2010 at 6:41 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sat, 11 Dec 2010, Tim Cook wrote:
>
> You are not a court of law, and that statement has not been tested. It is
>> your opinion and nothing more. I'd appreciate it if every
On Sat, Dec 11, 2010 at 5:17 PM, Joerg Schilling <
joerg.schill...@fokus.fraunhofer.de> wrote:
> Tim Cook wrote:
>
> > > I don't believe that there is a significant risk as the NetApp patents
> are
> > > invalid because of prior art.
> > >
> >
nough to consult with a lawyer, but it's probably better to just
not spread unsubstantiated rumor in the first place.
--Tim
m not sure how it's trolling. There have been 0 public statements I've
seen from Oracle on their future plans for what was opensolaris. A leaked
internal memo is NOT official company policy. Until I see source or an
official statement, I'm not holding my breath.
--Tim
It's based on a jumper on most new drives.
On Dec 6, 2010 8:41 PM, "taemun" wrote:
> On 7 December 2010 13:25, Brandon High wrote:
>>
>> There shouldn't be any problems using a 3TB drive with Solaris, so
>> long as you're using a 64-bit kernel. Recent versions of zfs should
>> properly recognize
c4t41d0    ONLINE   0  0  0
> c4t42d0    ONLINE   0  0  0
> cache
>   c8t0d0     ONLINE   0  0  0
>   c8t1d0     ONLINE   0  0  0
> spares
>   c4t43d0    INUSE    currently in use
s to me. Do you have examples? There were plenty of
revisions when they first dropped 6-8 months ago, but I haven't heard of
anything similar in quite some time. As for Intel, they've had their share
of issues as well. I assume you remember the data-loss inducing BIOS
password bug?
--Tim
On Sun, Nov 28, 2010 at 10:42 AM, David Magda wrote:
> On Nov 27, 2010, at 16:14, Tim Cook wrote:
>
> You don't need drivers for any SATA based SSD. It shows up as a standard
>> hard drive and plugs into a standard SATA port. By the time the G3 Intel
>> drive is o
have fewer problems.
>
>
According to what forum posts? There were issues when Crucial and a few
others released alpha firmware into production... Anandtech has put those
drives through the wringer without issue. Several people on this list are
running them as well.
--Tim
ts corrupted. I suspect that a DDRdrive or one
> of the STEC Zeus drives might help me, but I can overwhelm any other SSD
> quickly.
>
> I'm doing compiles of the JDK, with a single backed ZFS system handing the
> files for 20-30 clien
next gen Sandforce should be out as well. Unless Intel
does something revolutionary, they will still be behind the Sandforce
drives.
--Tim
>
>
TRIM was putback in July... You're telling me it didn't make it into S11
Express?
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html
--Tim
37,0 442,1 4489,6 51326,1 7,5 2,0 15,7 4,1 98 100 c7d0
Desktop usage is a different beast as I alluded to. A dedicated server
typically doesn't have any issues. I'd strongly suggest getting one of the
SandForce-controller-based SSDs. They're the best on the marke
kind of IOPS you can get out of most
modern SSDs. If you were using the system as a workstation, it'd
definitely help, as applications tend to feel more responsive with an SSD.
That's all I run in my laptops now.
--Tim
-IOPS workloads like that,
the back-end is going to fall over and die long before the hour time-limit.
Your 38k IOPS would need nearly 500 drives to sustain that workload with any
kind of decent latency. If you've got 500 drives, you're going to want a
hell of a lot more ZIL space than t
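(The arithmetic, assuming ~80 random IOPS per 7200rpm spindle: 38,000 / 80
is roughly 475 drives, before any RAID write penalty makes it worse.)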