On Fri, Nov 20, 2009 at 6:41 PM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Wow that was mighty quick Tim!
>
> Sorry, I have to reboot the server. I can SSH into the box, VNC etc but no
> CIFS shares are visible.
>
> Here are the last few messages from /var
>
> Cheers,
> Em
>
>
My novice opinion is that your dyndns client is broken, and affecting smbd.
If the system is confused as to what the domain is, I'd imagine that would
cause lots of stuff to become sluggish/broken.
--
--Tim
we can revert to 2009.06 later but is there a way of just updating ZFS
> instead of downloading 800Mb more and updating the entire OS?
>
> Cheers,
> Em
>
>
>
We'd need to know what the console output was to help. If you have console
access, can you log in locally?
lly have a need for ssd's. You'd be better off getting a
large SATA drive for your OS and using the ssd's as readzilla/logzilla.
I think everyone will also tell you to get ECC ram. To do this cheaply, it
generally means going the AMD route.
--
--Tim
nd that. I
> ... think ... the SATA controller works under opensolaris, and you could run
> an ESATA cable back through a hole in the case to get a sixth SATA disk.
>
> This uncertainty is what pushed me back to the intel Xeon stack.
>
> I've thrashed this pretty hard for several weeks now.
>
>
Someone can correct me if I'm wrong... but I believe that OpenSolaris can do
the ECC scrubbing in software even if the motherboard BIOS doesn't support
it.
--
--Tim
think an ls would tell you if it was or not. Do you see this output
when you run a '/bin/ls -dV'?
root# /bin/ls -dV /
drwxr-xr-x 26 root root 35 Nov 15 10:58 /
owner@:--------------:------:deny
owner@:rwxp---A-W-Co-:------:allow
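For reference, a short sketch of reading and editing entries in this NFSv4 ACL
format (the user and path below are placeholders, not from this thread):

/bin/ls -V /export/data                                         # list every ACL entry on a file or directory
chmod A+user:webservd:read_data/write_data:allow /export/data   # prepend an ACE granting that user read/write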
iate to
> know.
>
> Thanks,
> Moshe
>
>
>
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6574286
It says right there it was fixed in b125, which would mean it will be
included in the next opensolaris release.
--
--Tim
e extensive testing, but booting off a livecd, it sees the disks
just fine, and loads a driver for them.
--
--Tim
ver, and also to get a guaranteed consistent
snapshot, you'll need to pause all traffic to the LUN while taking the
snapshot.
--
--Tim
e to include GPL stuff as
> well and no idea the status of removing whatever parts of that may be
> hanging around. Who cares about license as long as you have the right to do
> what *you* need with the source.
>
> /me -> back to coding..
>
>
I'd say EVERYONE should care. If they're improperly using a license, it
could cause the project to be discontinued entirely. Tying yourself/your
production workload to a project that may potentially be gone tomorrow isn't
exactly a good idea.
--
--Tim
t and
> make sure it's doing something.
>
>
How big was the pool you destroyed, and what are the system specs?
--
--Tim
't care where
the physical location of the disk is. It's no different than setting up ASM
and moving from direct device paths to a multipathing situation. I've never
seen PowerPath touch the actual contents of the LUN; it merely manages paths.
--
--Tim
868.8    0.0    0.0  0.4    0.0  13.0   0  38 c3d1
>   30.2    0.0 1932.8    0.0    0.0  0.6    0.0  20.3   0  61 c4d0
>   30.2    0.0 1920.2    0.0    0.0  0.4    0.0  12.1   0  37 c4d1
>
>
That's expected: ZFS will use all the memory it can unless you tell it not
to. It shouldn't freeze the box.
--
--Tim
As long as you've already rebooted, you should limit the amount of memory
ZFS can use before you restart the import.
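For example, by capping the ARC via /etc/system (the 8 GB value below is only
an illustration, not a recommendation):

# /etc/system entry; takes effect at the next boot
set zfs:zfs_arc_max = 0x200000000

After that boot, the import can be restarted with ZFS limited to that much cache.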
--
--Tim
rd
reset anyway.
---
Number 1 is best done now, before you have a hang. It won't hurt
anything to have crash dumps enabled - and if you ever get a panic
you'll have the data needed for someone to analyze the issue.
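Something along these lines is usually all it takes (the directories are the
common defaults; adjust to taste):

dumpadm                         # show the current crash dump configuration
dumpadm -d swap -s /var/crash   # dump to swap, have savecore write dumps under /var/crash
savecore -L                     # grab a dump of the live system if it wedges without panicking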
If the crash dump savi
to use a newer version of opensolaris to recover it
automagically.
http://www.c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
--
--Tim
; --
>
>
I would bet that the /etc/driver_aliases no longer has the pci device ID's
for your cards and thus isn't attaching any driver.
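A quick way to check that theory (the pci1000,56 ID and mpt driver below are
only examples; substitute whatever your cards actually report):

prtconf -pv | grep -i 'pci1000'       # find the vendor,device IDs the cards report
grep mpt /etc/driver_aliases          # see whether an alias maps those IDs to a driver
update_drv -a -i '"pci1000,56"' mpt   # add the missing alias so the driver attaches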
--
--Tim
ds going, I don't see how that's
a "sequential" I/O load. To the backend storage it's going to look like the
equivalent of random I/O. I'd also be surprised to see 12 1TB disks
supporting 600MB/sec throughput and would be interested in hearing where
ing to look like
> the
> > equivalent of random I/O. I'd also be surprised to see 12 1TB disks
> > supporting 600MB/sec throughput and would be interested in hearing where
> you
> > got those numbers from.
> >
> > Is your video capture doing 430MB or 430Mb
pand a raid-z, and there is no ETA on being able
to do so.
--
--Tim
On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais
wrote:
>
> Le 26 déc. 09 à 04:47, Tim Cook a écrit :
>
>
>>
>> On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
>> wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> I've s
On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 27 Dec 2009, Tim Cook wrote:
>
>>
>> That is ONLY true when there's significant free space available/a fresh
>> pool. Once those files have been deleted and the bl
On Sun, Dec 27, 2009 at 8:40 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 27 Dec 2009, Tim Cook wrote:
>
> How is that going to prevent blocks being spread all over the disk when
>> you've got files several GB in size being written concurrentl
it
>> doesn't have checksumming built-in.
>>
>
> ZFS always checksums everything unless you explicitly disable
> checksumming for data. Metadata is always checksummed.
> -- richard
>
>
>
I imagine he's referring to the fact that it cannot fix any checksum
't just a troll as I don't recall seeing you ask for help
on php or cifs, you'd need to ask on the zfs-fuse mailing list. That
project has no relation to OpenSolaris or the devs here. Are they even
using the same or a newer ZFS version than the one you created your pool with on OpenSolaris?
If
On Tue, Dec 29, 2009 at 12:48 PM, Eric D. Mudama
wrote:
> On Tue, Dec 29 at 12:40, Tim Cook wrote:
>
>> On Tue, Dec 29, 2009 at 9:04 AM, Duane Walker > >wrote:
>>
>> I tried running an OpenSolaris server so I could use ZFS but SMB Serving
>>> wasn'
linux for a few years more but solaris is very new to me and
> quite different in a lot of ways.
>
>
Nope, on import it will scan all the disks for ZFS pools. It doesn't care
about the physical device names changing.
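For example (the pool name is a placeholder):

zpool export tank
# ...move or recable the disks; the c#t#d# device names may all change...
zpool import          # scans attached disks for pool labels and lists importable pools
zpool import tank     # imports the pool under whatever device paths it now has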
--
--Tim
ives! Yeah!
>
>
>
While I'm sure to offend someone, it must be stated. That's not going to
happen for the simple fact that there's all of two vendors that could
utilize it, both niche (in relative terms). NetApp and Sun. Why would SSD
MFG's waste their time bui
ft's tail does not even wiggle. It just sort of lays there.
>
> Bob
> --
Ahhh, so the truth comes out. Another Apple zealot. They to this day
can't touch Outlook/Exchange or SQL or AD, or anything remotely
resembling enterprise management of workstations. Win7 is an incredibl
On Sat, Jan 2, 2010 at 9:45 PM, David Magda wrote:
> On Jan 2, 2010, at 20:51, Tim Cook wrote:
>
> Apple users not complaining is more proof of them having
>> not only drunk the koolaid but also bathed in it than them knowing any
>> limitations of what they have today. Thi
the other way around, specify the ones I agree
> with).
> Is there any other way than setting up a firewall to filter the interface?
>
>
>
I believe it can be done with Crossbow and Flows by defining cifs as a
service.
http://hub.opensolaris.org/bi
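Roughly, the idea would be to define a flow that matches CIFS traffic (TCP
port 445) and then hang properties off it; the link name here is a placeholder:

flowadm add-flow -l e1000g0 -a transport=tcp,local_port=445 cifs-flow
flowadm set-flowprop -p maxbw=200M cifs-flow   # e.g. cap CIFS at 200 Mbit/s
flowadm show-flow

Note that flows classify and shape matching traffic (bandwidth, priority)
rather than acting as a packet filter.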
n I said claiming unlimited
snapshots and filesystems was disingenuous at best, and that likely we'd
need to see artificial limitations to make many of these features usable.
But I digress :)
--
--Tim
rage the VSS providers in
Windows... but it won't be easy.
Any reason in particular you're using iSCSI? I've found NFS to be much
simpler to manage, and performance to be equivalent if not better (in large
clusters).
--
--Tim
lity force it to
be web-only? The 7000 series has a VERY robust web GUI, but they also have
a CLI that provides all of the same functions. Why would a web GUI cause
Sun to go browser-only?
--
--Tim
's a must. Most DCs I operate in wouldn't tolerate
having a card separately wired from the chassis power. It's far, far, far
more likely to have a tech knock that power cord out and not have anyone
notice than to have a battery spontaneously combust. My .02.
--
--Tim
formance tanking, whereas we can expect linear performance if it's a
percentage. No?
--
--Tim
r
> of servers.
>
> I'm very interested in a cost-effective device that will interface to two
> systems.
>
>
That's called an SSD in a SAS array.
--
--Tim
t; FC/SAS->SATA gateway in the external drive enclosure.
>
>
>
Seagate claims the SAS versions of their drives actually see IOPS
improvements:
http://www.seagate.com/www/en-us/products/servers/barracuda_es/barracuda_es.2
If the SAS version is dual
hear the components they use for their SAS interfaces yield significantly
better performance. Plus, if it's dual ported... I wouldn't expect to see
38% consistently, but I would expect to see better performance across the
board.
--
--Tim
d to imagine any other filesystem which can exploit them so
> completely.
>
>
>
You mean like WAFL?
--
--Tim
> I can't answer the latter, but judging by the date of the CR, I wouldn't
> hold my
> breath. Give the fine folks at Zmanda a look.
>
> Also, the ADM project seems to be dead. Unfortunate?
> http://hub.opensolaris.org/bin/view/Project+adm/WhatisADM
> -- richard
>
>
I
tml?locale=EN&remote=1
--
--Tim
id, it is possible that 2010.03 will resolve this. But we do
> not put development releases in production.
>
>
You should probably make that clear from the start then. You just bashed
the opensource drivers based on your experience with som
redundancy? It's the larger drives that forces you
> to add more parity.
>
> -frank
Smaller devices get you to raid-z3 because they cost less money. Therefore,
you can afford to buy more of them.
--
--Tim
On Sat, Jan 23, 2010 at 5:39 PM, Frank Cusack wrote:
> On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote:
>
>> Smaller devices get you to raid-z3 because they cost less money.
>> Therefore, you can afford to buy more of them.
>>
>
> I sure hope you are
On Sat, Jan 23, 2010 at 7:57 PM, Frank Cusack wrote:
> On January 23, 2010 6:09:49 PM -0600 Tim Cook wrote:
>
>> When you've got a home system and X amount of dollars
>> to spend, $/GB means absolutely nothing when you need a certain number of
>> drives to ha
On Sun, Jan 24, 2010 at 10:38 AM, Frank Cusack wrote:
> On January 23, 2010 8:23:08 PM -0600 Tim Cook wrote:
>
>> I bet you'll get the same performance out of 3x1.5TB drives you get out of
>> 6x500GB drives too.
>>
>
> Yup. And if that's the case, p
This sounds like yet another instance of
6910767 deleting large holey objects hangs other I/Os
I have a module based on 130 that includes this fix if you would like to try it.
-tim
/read_attributes/write_attributes/delete/read_acl
/write_acl/synchronize:allow
but this makes no difference...I can still change the group ownership.
Clearly I am doing something wrong, or I have incorrect expectations.
Anyone got any ideas on this ?
Thanks
Tim
--
*Tim Thomas
Open Storage Technical
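One hedged guess at what is happening here: changing a file's group is
governed by the write_owner ACL permission rather than write_acl, and POSIX
semantics let a file's owner chgrp to any group the owner belongs to in any
case. A test along these lines (names are placeholders) would help confirm or
rule that out:

chmod A+user:tim:write_owner:deny /tank/testfile
ls -V /tank/testfile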
erested in paying
> 10x for a silly SATA hard drive.
>
>
>
Good luck. Unless you find a third party to buy drive sleds from, you're
buying them from s/Sun/Oracle.
--Tim
hardware. If it were easy, everyone would be doing it, and it wouldn't
be expensive.
If you think they're overcharging, you're more than welcome to go into
business, undercut the shit out of them, and still make a ton of money,
since you think they're charging 10x market val
On Tue, Feb 2, 2010 at 9:45 AM, David Dyer-Bennet wrote:
>
> On Tue, February 2, 2010 01:27, Tim Cook wrote:
>
> > Except you think the original engineering is just a couple grand, and
> > that's
> > where you're wrong. I hate the prices just as much as
's left?
>
>
Pretty sure HP and IBM are still alive and well.
--Tim
On Tue, Feb 2, 2010 at 12:00 PM, Frank Cusack
wrote:
> On February 2, 2010 11:58:17 AM -0600 Tim Cook wrote:
>
>> On Tue, Feb 2, 2010 at 11:53 AM, Frank Cusack
>> wrote:
>>
>> On February 2, 2010 8:57:32 AM -0800 Orvar Korvar <
>>> knatte_fnatte_tja...@
't think
anyone in the Unix community has duplicated it to date. As for differences,
google is your friend?
http://www3.sympatico.ca/n.rieck/docs/vms_vs_unix.html
--Tim
; Probably still somewhat marginal on your ranch, though better than a
> Ferrari. The ground clearance is medium, and it's not mainly a
> cargo-hauler.
>
>
And how well does your Camry run when you try to replace the Toyota
transmission with one manufactured by Ford? A
you quite a bit. If
you've got a good cisco/hp/brocade/extreme networks/force10/etc switch, it's
fine. If you've got a $50 soho netgear, you typically are going to get what
you paid for :)
--Tim
to the dev branch at this point. The next stable "named" release
> is due out in a month or so (tentatively for late March, though April is
> likely). This will be automatically available via your current pkg
> repository. This release should be based on b133 or thereabouts.
>
&
s 10U8 Generic_141445-09 - zpool version 15 - zfs version 4
>>
>>
>> Thx for your answers.
>>
>> --
>> Francois
>> ___
>>
>
I think it might be helpful to explain exactly what that means. I'll give
it a shot, feel free to correct my mistake(s). Francois: when you have
autoreplace on, what that means is if you remove the bad drive, and stick in
a new one to replace it, it will automatically be added to the pool. To do
what you're trying to do, you shouldn't have drives added as hot spares at
all. If you want it to be a "cold" spare, put it in the system, and just
leave it unassigned.
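A compact illustration of the difference (pool and device names are placeholders):

zpool set autoreplace=on tank    # a new disk in the failed disk's slot is rebuilt onto automatically
zpool add tank spare c5t3d0      # by contrast: dedicate a disk to the pool as a hot spare
zpool status tank                # hot spares are listed in their own section of the output

A "cold" spare is then simply a disk that shows up in neither place:
installed, powered, and unassigned.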
--Tim
hem an arm and a leg for
the enclosure, and nothing for the drives? Again, the idea is that you're
charging based on capacity. Generally speaking, an entity that needs tons
and tons of storage has the money to pay for it. Home users ripping
legitimately or pirating illegitimate
the drives under linux either.
>
> Can anyone shed any light on this issue, or suggest what I could try next ? I
> am sort of discounting hardware problems given that I do not see errors from
> the live linux CD. Maybe I should install linux and see if the problem
> persists ?
>
> Cheers.
> --
> This message posted fro
On Monday, February 8, 2010, Kjetil Torgrim Homme wrote:
> Daniel Carosone writes:
>
>> In that context, I haven't seen an answer, just a conclusion:
>>
>> - All else is not equal, so I give my money to some other hardware
>> manufacturer, and get frustrated that Sun "won't let me" buy the
>>
w do you figure that? There are 5 columns on the front page:
Database
Middleware
Applications
Server and Storage Systems
Industry
How much more focus were you hoping for beyond front page status? Were you
expecting them to remove all references to that little database thing that
their en
d it's
gotten much better since then. I'd imagine there's only so much integration
they could do ahead of time.
--Tim
s.
>
> -Ross
You'd need more than 3 to get his 40TB usable with current 2.5" drive
capacities, unless you're suggesting he use laptop drives.
--Tim
ged blocks after that
point. If 9pm goes away, it would use 8pm instead; if 8pm is gone... etc.
If you had moved to the secondary, but for some reason get the primary up
and had snapshots through say, 10am the next day, everything after the 11pm
snapshot on the prima
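In zfs send/receive terms, each incremental is anchored to whatever snapshot
both sides still share, e.g. (dataset and host names are placeholders):

zfs send -i tank/data@8pm tank/data@9pm | ssh backuphost zfs receive -F backup/data

If @9pm has been destroyed, the next incremental is simply based on @8pm, or
on the newest snapshot the two sides still have in common.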
r final goal.
>
>
>
You're extremely light on RAM for a system with 24TB of storage and two
E5520's. I don't think it's the entire source of your issue, but I'd
strongly suggest considering doubling what you have as a starting point.
What version of opensolaris are
On Wed, Feb 10, 2010 at 4:31 PM, David Dyer-Bennet wrote:
>
> On Wed, February 10, 2010 16:15, Tim Cook wrote:
> > On Wed, Feb 10, 2010 at 3:38 PM, Terry Hull wrote:
> >
> >> Thanks for the info.
> >>
> >> If that last common snapshot gets destroyed
at's why they show up as the same controller,
different disk. He might have a tough time turning IDE ports into SATA in
the BIOS ;)
--Tim
d-of-service-life page which is fairly standard practice, then decided to
freak out about it.
http://www.sun.com/software/solaris/lifecycle.xml
The reason there's an end of service page is because Oracle isn't going to
be supporting 2009.06 for 30 years. I don't see how that le
ceplans/sunspectrum/index.jsp
Sun System Service Plans for Solaris
Sun System Service Plans for the Solaris Operating System provide integrated
hardware and *Solaris OS (or OpenSolaris OS)* support service coverage to
help keep your systems running smoothly. This singl
the correct end. You can also measure this directly using
> something like iosnoop
> when running zdb -l.
> -- richard
>
> >
> > Any hints appreciated ..
> >
> > p.s. Clearing the whole disk is troublesome, because those are a bunch of
>
ng mismatched replication levels?
> Is there any performance penalty?
>
> Thanks,
> Eduardo
>
The primary concern as I understand it is performance. If they're close in
size, it shouldn't be a big deal, but when you've got mismatched rg's it can
cause quite the p
en customers dislike this type of licensing
> model most. Dan may or may not be reading this, but I'd strongly discourage
> this approach. Without knowing more I don't know what alternative I could
> recommend though.. (Too bad I missed that irc meeting..)
>
> ./C
>
>
So don't buy the 7000 series. I find no issue with that model.
--Tim
t; file in vain - it will be equally available to the "new host"
>>> at the correct point in migration, just as it was accessible
>>> to the "old host".
>>>
>> Again. NFS/iscsi/IB = ok.
>>
>
> True, except that this is not an optima
blade for
> VM farming needs, but it would consume much of the LAN
> bandwidth of the blades using its storage services.
>
> Today, HDDs aren't fast, and are not getting faster.
>> -- richard
>>
> Well, typical consumer disks did get about 2-3 times faster for
> linear RW speeds over the past decade; but for random access
> they do still lag a lot. So, "agreed" ;)
>
> //Jim
>
>
Quite frankly, your choice in blade chassis was a horrible design decision.
From your description of its limitations it should never be the building
block for a vmware cluster in the first place. I would start by rethinking
that decision instead of trying to pound a round ZFS peg into a square hole.
--Tim
ut it's good to have it anyways, and is critical for
> > personal systems such as laptops.
>
> IIRC, fsck was seldom needed at
> my former site once UFS journalling
> became available. Sweet update.
>
> Mark
>
>
We all hope to never have to run fsck, but not having it at all is a bit of
a non-starter in most environments.
--Tim
On Tue, Oct 18, 2011 at 2:41 PM, Kees Nuyt wrote:
> On Tue, 18 Oct 2011 12:05:29 -0500, Tim Cook wrote:
>
> >> Doesn't a scrub do more than what
> >> 'fsck' does?
> >>
> > Not really. fsck will work on an offline filesystem to correct er
On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >
> > Every scrub I've ever done that has found an error required manual
> fixing.
> > Every pool I've ever created has been raid-z or raid-z2, so the
On Tue, Oct 18, 2011 at 3:27 PM, Peter Tribble wrote:
> On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
> >
> >
> > On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> > wrote:
> >>
> >> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
> >>
? I seem to recall people on this mailing
list using mbuffer to speed it up because it was so bursty and slow at one
point, e.g.:
http://blogs.everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
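The usual pattern from that write-up, sketched here with placeholder names and
sizes, puts an mbuffer on each end of the pipe so the receiver never stalls
the sender:

# on the receiving host
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backup/fs
# on the sending host
zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O backuphost:9090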
--Tim
Do you still need to do the grub install?
On Dec 15, 2011 5:40 PM, "Cindy Swearingen"
wrote:
> Hi Anon,
>
> The disk that you attach to the root pool will need an SMI label
> and a slice 0.
>
> The syntax to attach a disk to create a mirrored root pool
> is like this, for example:
>
> # zpool att
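For reference, a hedged sketch of the full sequence being described, with
placeholder device names:

zpool attach rpool c0t0d0s0 c0t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

On older x86 releases the installgrub step was still needed to make the newly
attached disk bootable, which is the question raised above; SPARC uses
installboot instead.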
you are proactively looking for them.
>
> myers
>
>
>
>
Or, if you aren't scrubbing on a regular basis, just change your zpool
failmode property. Had you set it to wait or panic, it would've been very
clear, very quickly that something was wrong.
http:/
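For example (the pool name is a placeholder):

zpool get failmode tank
zpool set failmode=wait tank   # wait blocks I/O until the fault is cleared; panic halts the box; continue just returns errors

Either wait or panic makes a failing device hard to miss, which is the point above.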
g
to be a nightmare long-term (which is why most products use a version
number in the first place).
--Tim
ttp://www.RichardElling.com
> illumos meetup, Jan 10, 2012, Menlo Park, CA
> http://www.meetup.com/illumos-User-Group/events/41665962/
>
>
>
Speaking of illumos, what exactly is the deal with the zfs discuss mailing
list? There's all of 3 posts that show up for
gt; > RAID" would go into just making another write-block allocator
> > in the same league "raidz" or "mirror" are nowadays...
> > BTW, are such allocators pluggable (as software modules)?
> >
> > What do you think - can and should
bably skip that step.
>
You will, however, have an issue replacing them if one should fail. You need
to have the same block count to replace a device, which is why I asked for
a "right-sizing" years ago. Deaf ears :/
--Tim
>
>
On Fri, Apr 13, 2012 at 11:46 AM, Freddie Cash wrote:
> On Fri, Apr 13, 2012 at 9:30 AM, Tim Cook wrote:
> > You will however have an issue replacing them if one should fail. You
> need
> > to have the same block count to replace a device, which is why I asked
> for a
>
Oracle never promised anything. A leaked internal memo does not signify an
official company policy or statement.
On Apr 18, 2012 11:13 AM, "Freddie Cash" wrote:
> On Wed, Apr 18, 2012 at 7:54 AM, Cindy Swearingen
> wrote:
> >>Hmmm, how come they have encryption and we don't?
> >
> > As in Solar
to the disk. Scrub more often!
>
> --
> Dan.
>
>
>
>
Personally, unless the dataset is huge and you're using z3, I'd be scrubbing
once a week. Even if it's z3, just do a window on Sundays or something so
that you at
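A root crontab entry along those lines, with a placeholder pool name:

# scrub every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank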
; So, the "10 extra reads" will sometimes be true - if the duplicate block
> doesn't already exist in ARC. And the "10 extra reads" will sometimes be
> false - if the duplicate block is already in ARC.
Saso: yes, it's absolutely worth implementing a higher
No. Missing slogs is a potential data-loss condition. Importing the pool
> without
> slogs requires acceptance of the data-loss -- human interaction.
> -- richard
>
> --
> ZFS Performance and Training
> richard.ell...@richardelling.com
> +1-760-896-4422
>
&
change rate, and
how long you keep the snapshots around, it may very well be true. It's not
universally true, but it's also not universally false.
--Tim
ore you
take snapshot 2, snapshot 2 will only capture the final state of the file.
You will not get 50 revisions of the file. This is not continuous data
protection; it's a point-in-time copy.
--Tim
force import it on the other
> host.
>
> ** **
>
> Can anybody think of a reason why Option 2 would be stupid, or can you
> think of a better solution?
>
>
>
I would suggest that if you're doing a crossover between systems, you use
InfiniBand rather than Etherne
On Thu, Sep 27, 2012 at 12:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> > Sent: Wednesday, September 26, 2012 3:45 PM
> >
> > I would sugge
On 10/01/2012 09:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Just perform a bunch of writes, time it. Then set sync=disabled,
perform the same set of writes, time it. Then enable sync, add a ZIL
device, time it. The third option will be somewhere in between the
first
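A rough version of that three-step comparison, assuming a throwaway dataset, a
write-heavy test job, and a spare device (all names below are placeholders):

ptime tar xf big.tar            # baseline: sync writes go through the ZIL on the pool disks
zfs set sync=disabled tank/fs
ptime tar xf big.tar            # upper bound: synchronous semantics ignored entirely
zfs set sync=standard tank/fs
zpool add tank log c4t0d0       # dedicate a separate log (slog) device
ptime tar xf big.tar            # should land somewhere between the two earlier timings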
file format" are related to these
> > two formats. "FITS btrfs" didn't return anything specific to the file
> > format, either.
>
> It's not too late to change it, but I have a hard time coming up with
> some better name. Also, the format is stil
On Friday, October 19, 2012, Christof Haemmerle wrote:
> hi there,
> i need to connect some old raid subsystems to a opensolaris box via fibre
> channel. can you recommend any FC HBA?
>
> thanx
> __
>
How old? If it's 1Gb you'll need a 4Gb or slower HBA. QLogic woul
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen wrote:
> On 10/20/2012 01:10 AM, Tim Cook wrote:
> >
> >
> > On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen > <mailto:sensi...@gmx.net>> wrote:
> >
> > On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
&g