Hi,
I'm trying to do a simple data retention hack wherein I keep
hourly, daily, weekly and monthly zfs auto snapshots.
To save space,
I want the dailies to go away when the weekly is taken.
I want the weeklies to go away when the monthly is taken.
From what I've gathered, it seems ti
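A minimal sketch of the cleanup step I have in mind, assuming the snapshots follow the zfs-auto-snapshot naming scheme with "daily" somewhere in the snapshot name (check zfs list -t snapshot first; the dataset name is a placeholder). It would run right after the weekly snapshot fires:

# drop all daily auto-snapshots of the dataset once the weekly exists
pool=tank/data                      # placeholder dataset
zfs list -H -r -t snapshot -o name "$pool" \
  | grep daily \
  | xargs -n1 zfs destroy

The same pattern with "weekly" would run after the monthly fires.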
Right, put some small (30GB or something trivial) disks in for root and
then make a nice fast multi-spindle pool for your data. If your 320s
are around the same performance as your 500s, you could stripe and
mirror them all into a big pool. ZFS will waste the extra 180 on the
bigger disks but
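For the record, a stripe of mirrors across mixed 320/500GB disks would be built roughly like this (device names are placeholders; each mirror is sized to its smaller member):

zpool create tank \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0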
On 12/10/10 09:54, Bob Friesenhahn wrote:
On Fri, 10 Dec 2010, Edward Ned Harvey wrote:
It's been a while since I last heard anybody say anything about this.
What's the latest version of publicly
released ZFS? Has Oracle made it closed-source moving forward?
Nice troll.
Bob
Totally! But
Thanks for posting your findings. What was incorrect about the client's
config?
On Oct 7, 2010 4:15 PM, "Eff Norwood" wrote:
Figured it out - it was the NFS client. I used snoop and then some dtrace
magic to prove that the client (which was using O_SYNC) was sending very
bursty requests to the
+1: This thread is relevant and productive discourse that'll assist
OpenSolaris orphans in pending migration choices.
On 08/18/10 12:27, Edward Ned Harvey wrote:
Compatibility of ZFS & Linux, as well as the future development of ZFS, and
the health and future of OpenSolaris / Solaris, Oracle &
Well, OK, but where do I find it?
I'd still expect some problems with FCode vs. BIOS issues if it's
not SPARC firmware.
thx
jake
On 07/07/10 17:46, Garrett D'Amore wrote:
On Wed, 2010-07-07 at 17:33 -0400, Jacob Ritorto wrote:
Thank goodness! Where, specifically, does
Thank goodness! Where, specifically, does one obtain this firmware for
SPARC?
On 07/07/10 17:04, Daniel Bakken wrote:
Upgrade the HBA firmware to version 1.30. We had the same problem, but
upgrading solved it for us.
Daniel Bakken
On Wed, Jul 7, 2010 at 1:57 PM, Joeri Vanthienen
mailto:m...
ppreciate Tim's clueless
would-be cynicisms :)
On Tue, Mar 23, 2010 at 9:48 AM, Tim Cook wrote:
>
>
> On Tue, Mar 23, 2010 at 7:11 AM, Jacob Ritorto
> wrote:
>>
>> Sorry to beat the dead horse, but I've just found perhaps the only
>> written proof that OpenSol
Sorry to beat the dead horse, but I've just found perhaps the only
written proof that OpenSolaris is supportable. For those of you who
deny that this is an issue, its existence as a supported OS has been
recently erased from every other place I've seen on the Oracle sites.
Everyone please grab a c
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us. If I even mentioned using an
unsupported operating system to the higher-ups here, it'd be considered
absurd. I like free stuff to fo
On Mon, Feb 22, 2010 at 12:47 PM, Justin Lee Ewing
wrote:
> I'm not sure how there is mistreatment when known that Solaris 10 is the
> current production-grade product and OpenSolaris, for all intents and
> purposes, a beta product that is currently under active development. I was
> actually surp
On Mon, Feb 22, 2010 at 12:42 PM, Tim Cook wrote:
>
>
> On Mon, Feb 22, 2010 at 11:12 AM, Jacob Ritorto
> wrote:
>>
>> 2010/2/22 Matthias Pfützner :
>> > You (Jacob Ritorto) wrote:
>> >> FWIW, I suspect that this situation does not warrant a
2010/2/22 Matthias Pfützner :
> You (Jacob Ritorto) wrote:
>> FWIW, I suspect that this situation does not warrant a "Wait and See"
>> response. We're being badly mistreated here and it's probably too
>> late to do anything about it. Probably the only ch
On Mon, Feb 22, 2010 at 10:04 AM, Henrik Johansen wrote:
> On 02/22/10 03:35 PM, Jacob Ritorto wrote:
>>
>> On 02/22/10 09:19, Henrik Johansen wrote:
>>>
>>> On 02/22/10 02:33 PM, Jacob Ritorto wrote:
>>>>
>>>> On 02/22/10 06:12, Henrik Jo
On 02/22/10 09:19, Henrik Johansen wrote:
On 02/22/10 02:33 PM, Jacob Ritorto wrote:
On 02/22/10 06:12, Henrik Johansen wrote:
Well - one thing that makes me feel a bit uncomfortable is the fact
that you no longer can buy OpenSolaris Support subscriptions.
Almost every trace of it has
Seems your controller is actually doing only harm here, or am I missing
something?
On Feb 4, 2010 8:46 AM, "Karl Pielorz" wrote:
--On 04 February 2010 11:31 + Karl Pielorz
wrote:
> What would happen...
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immed
Hey Mark,
I spent *so* many hours looking for that firmware. Would you please
post the link? Did the firmware download you found come with FCode? Running a
Blade 2000 here (SPARC).
Thx
Jake
On Jan 26, 2010 11:52 AM, "Mark Nipper" wrote:
> It may depend on the firmware you're running. We've
>
Thomas Burgess wrote:
I'm not used to the whole /home vs /export/home difference and when you
add zones to the mix it's quite confusing.
I'm just playing around with this zone to learn, but in the next REAL
zone I'll probably:
mount the home directories from the base system (this machine
Thomas,
If you're trying to make user home directories on your local machine in
/home, you have to watch out because the initial Solaris config assumes
that you're in an enterprise environment and the convention is to have a
filer somewhere that serves everyone's home directories which, with t
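As a sketch of the local-homes workaround (the service and file names are the stock Solaris ones, and the username is a placeholder): take /home away from the automounter, then create the home datasets yourself.

# 1. comment out the "/home  auto_home  -nobrowse" line in /etc/auto_master
# 2. restart autofs so /home is no longer automount-managed
svcadm restart svc:/system/filesystem/autofs:default
# 3. create local homes as ZFS datasets mounted under /home
zfs create -o mountpoint=/home/thomas rpool/export/home/thomas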
Hi all,
I need to move a filesystem off of one host and onto another smaller
one. The fs in question, with no compression enabled, is using 1.2 TB
(refer). I'm hoping that zfs compression will dramatically reduce this
requirement and allow me to keep the dataset on an 800 GB store. Does
th
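One way to get a number before committing (a sketch; pool, dataset, and host names are made up) is to receive a representative chunk into a compression-enabled pool and read off the ratio:

# on the smaller box, enable compression so received data inherits it
zfs set compression=on smallpool
# from the big box, send a snapshot of the data (or a subset of it)
zfs snapshot bigpool/data@sizetest
zfs send bigpool/data@sizetest | ssh smallhost zfs recv smallpool/data
# then check how well it squeezed
zfs get compressratio,used smallpool/data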
OK, it's been used twice now. Please closely define 'temporal failure.'
On Dec 16, 2009 1:25 PM, "Frank Cusack" wrote:
On December 16, 2009 9:37:08 AM -0800 Richard Elling <
richard.ell...@gmail.com> wrote:
> > > On Dec 15, 2009, at 11:04 PM, Frank Cusack wrote:
>
> > AVS can also be used. Thi
Hi all,
Is it sound to put rpool and ZIL on a pair of SSDs (with rpool
mirrored)? I have (16) 500GB SATA disks for the data pools and they're
doing lots of database work, so I'd been hoping to cut down the seeks a
bit this way. Is this a sane, safe, practical thing to do and if so,
how m
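If it turns out to be sane, the layout I had in mind is roughly this (slice names are assumptions): mirror rpool across slice 0 of both SSDs at install time, then hand another slice of each SSD to the data pool as a mirrored log device.

# add the leftover SSD slices to the data pool as a mirrored slog
zpool add datapool log mirror c3t0d0s3 c3t1d0s3
zpool status datapool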
Hi,
Can anyone identify whether this is a known issue (perhaps 6667208) and
if the fix is going to be pushed out to Solaris 10 anytime soon? I'm
getting badly beaten up over this weekly, essentially anytime we drop a
packet between our twenty-odd iscsi-backed zones and the filer.
Chris was
You need Solaris for the zfs webconsole, not OpenSolaris.
Paul wrote:
Hi there, my first post (yay).
I have done much googling and everywhere I look I see people saying "just browse to
https://localhost:6789 and it is there". Well, it's not. I am running 2009.06
(snv_111b) the current latest st
Tim Cook wrote:
> Also, I never said anything about setting it to panic. I'm not sure why
> you can't set it to continue while alerting you that a vdev has failed?
Ah, right, thanks for the reminder Tim!
Now I'd asked about this some months ago, but didn't get an answer so
forgive me for ask
I don't wish to hijack, but along these same comparing lines, is there
anyone able to compare the 7200 to the HP LeftHand series? I'll start
another thread if this goes too far astray.
thx
jake
Darren J Moffat wrote:
Len Zaifman wrote:
We are looking at adding to our storage. We would li
Hi all,
Not sure if you missed my last response or what, but yes, the pool is
set to wait because it's one of many pools on this prod server and we
can't just panic everything because one pool goes away.
I just need a way to reset one pool that's stuck.
If the architecture of zfs ca
On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook wrote:
> Is your failmode set to wait?
Yes. This box has like ten prod zones and ten corresponding zpools
that initiate to iscsi targets on the filers. We can't panic the
whole box just because one {zone/zpool/iscsi target} fails. Are there
undocumente
The zpool for a zone of a customer-facing production appserver hung due to iscsi
transport errors. How can I {forcibly} reset this pool? zfs commands
are hanging and iscsiadm remove refuses.
r...@raadiku~[8]8:48#iscsiadm remove static-config
iqn.1986-03.com.sun:02:aef78e-955a-4072-c7f6-afe087723466
With the web redesign, how does one get to zfs-discuss via the
opensolaris.org website?
Sorry for the ot question, but I'm becoming desperate after clicking
circular links for the better part of the last hour :(
My goal is to have a big, fast, HA filer that holds nearly everything
for a bunch of development services, each running in its own Solaris
zone. So when I need a new service, test box, etc., I provision a new
zone and hand it to the dev requesters and they load their stuff on it
and go.
Ea
Gaëtan Lehmann wrote:
zfs list -r -t snapshot -o name -H pool | xargs -tl zfs destroy
should destroy all the snapshots in a pool
Thanks Gaëtan. I added 'grep auto' to filter on just the rolling snaps
and found that xargs wouldn't let me put both flags on the same dash, so:
zfs list -r
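Presumably something along these lines, with the xargs flags split (a reconstruction, not the original command; adjust the 'auto' pattern to match your snapshot names):

zfs list -r -t snapshot -o name -H pool | grep auto | xargs -t -l zfs destroy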
Sorry if this is a FAQ, but I just got a time-sensitive dictum from the
higher-ups to disable and remove all remnants of rolling snapshots on our
DR filer. Is there a way for me to nuke all snapshots with a single
command, or do I have to manually destroy all 600+ snapshots with zfs
destroy?
Torrey McMahon wrote:
3) Performance isn't going to be that great with their design but...they
might not need it.
Would you be able to qualify this assertion? Thinking through it a bit,
even if the disks are better than average and can achieve 1000Mb/s each,
each uplink from the multiplier
've made the wrong decision on vendor/platform.
Anyway, looking forward to shrink. Thanks for the tips.
Kyle McDonald wrote:
Kyle McDonald wrote:
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my
filer's rpool to an SSD mirror to free up bigdisk slo
+1
Thanks for putting this in a real world perspective, Martin. I'm faced with
this exact circumstance right now (see my post to the list from earlier today).
Our ZFS filers are highly utilised, highly trusted components at the core of
our enterprise and serve out OS images, mail storage, cus
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool
to an SSD mirror to free up bigdisk slots currently used by the OS and need to
shrink rpool from 40GB to 15GB. (Only using 2.7GB for the install.)
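Since pools can't be shrunk in place, the plan is essentially to rebuild (a sketch only; device names are placeholders and the boot-block step differs between SPARC and x86): create the new, smaller rpool on the SSD mirror and send the root datasets across.

zpool create -f newrpool mirror c2t0d0s0 c2t1d0s0
zfs snapshot -r rpool@move
zfs send -R rpool@move | zfs recv -F -d newrpool
# then install boot blocks on the SSDs, point the boot config at newrpool,
# and only then retire the old rpool disks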
thx
jake
I think this is the board that shipped in the original T2000 machines
before they began putting the sas/sata onboard: LSISAS3080X-R
Can anyone verify this?
Justin Stringfellow wrote:
Richard Elling wrote:
Miles Nordin wrote:
"ave" == Andre van Eyssen writes:
"et" == Erik Trimble writes
Is there a card for OpenSolaris 2009.06 SPARC that will do SATA correctly yet?
Need it for a super cheapie, low expectations, SunBlade 100 filer, so I think
it has to be notched for 5v PCI slot, iirc. I'm OK with slow -- main goals here
are power saving (sleep all 4 disks) and 1TB+ space. Oh,
I've been dealing with this at an unusually high frequency these days.
It's even dodgier on SPARC. My recipe has been to run format -e and
first try to label as SMI. Solaris PCs sometimes complain that the disk
needs fdisk partitioning and I always delete *all* partitions, exit
fdisk, enter f
Caution: I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older. It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iscsi initiator boxes
I like that, although it's a bit of an intelligence insulter. Reminds
me of the old pdp11 install (
http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --
This step makes an empty file system.
6. The next thing to do is to restore the data onto the new empty
file system. To do this y
I hear that. Had it been a prod box, I'd have been a lot more
paranoid and careful. This was a new vdev with a fresh zone installed
on it, so I only lost a half hour of effort (whew). The seriousness
of the zfs destroy command, though, really hit home during this
process, and I wanted to find ou
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
> Yes, iozone does support threading. Here is a test with a record size of
> 8KB, eight threads, synchronous writes, and a 2GB test file:
>
>Multi_buffer. Work area 16777216 bytes
>OPS Mode. Output is in operations per second.
>Record Size 8 KB
>SYNC Mode.
>
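For reference, a run like the one quoted above would come from something along these lines (flag spellings are from the iozone docs; the test directory is an assumption):

# throughput mode: 8 threads, 8KB records, 2GB file per thread,
# O_SYNC writes, results reported in operations per second
cd /tank/bench && iozone -O -o -t 8 -r 8k -s 2g -i 0 -i 1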
I have that iozone program loaded, but its results were rather cryptic
for me. Is it adequate if I learn how to decipher the results? Can
it thread out and use all of my CPUs?
> Do you have tools to do random I/O exercises?
>
> --
> Darren
OK, so use a real I/O test program or at least pre-generate files large
enough to exceed RAM caching?
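i.e., something like this to stage a file bigger than RAM before timing reads (size and path are placeholders; random data keeps compression and the ARC from flattering the numbers, though /dev/urandom makes the staging step slow):

# 32GB of incompressible data on a 16GB-RAM box; only has to be built once
dd if=/dev/urandom of=/tank/bench/bigfile bs=1024k count=32768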
On Tue, Jan 6, 2009 at 1:19 PM, Bob Friesenhahn
wrote:
> On Tue, 6 Jan 2009, Jacob Ritorto wrote:
>
>> Is urandom nonblocking?
>
> The OS provided random devices need to be
Is urandom nonblocking?
On Tue, Jan 6, 2009 at 1:12 PM, Bob Friesenhahn
wrote:
> On Tue, 6 Jan 2009, Keith Bierman wrote:
>
>> Do you get the same sort of results from /dev/random?
>
> /dev/random is very slow and should not be used for benchmarking.
>
> Bob
> ==
My OpenSolaris 2008.11 PC seems to attain better throughput with one big
sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ. I know it's by
no means an exhaustive test, but catting /dev/zero to a file in the pool now
frequently exceeds 600 Megabytes per second, whereas before with t
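For anyone reproducing the comparison, the two layouts were essentially these (disk names are placeholders):

# one wide double-parity group
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0
# versus four single-parity groups striped together
zpool create tank \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
  raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
  raidz c1t12d0 c1t13d0 c1t14d0 c1t15d0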
Update: It would appear that the bug I was complaining about nearly a
year ago is still at play here:
http://opensolaris.org/jive/thread.jspa?threadID=49372&tstart=0
Unfortunate Solution: Ditch Solaris 10 and run Nevada. The nice folks
in the OpenSolaris project fixed the problem a long tim
FWIW, my attempt to Live Upgrade from Sol 10 u6 to b101 failed miserably with lots
of broken services, etc. I ditched it but was able to revert to Sol 10 u6.
Vincent Boisard wrote:
> Do you have an idea if your problem is due to live upgrade or b101
> itself ?
>
> Vincent
>
> On Thu, Nov 13, 2008 at 8:06
FWIW:
[EMAIL PROTECTED]:01#kstat vmem::heap
module: vmem                            instance: 1
name:   heap                            class:    vmem
        alloc                           25055
        contains                        0
        contains_search                 0
        crt
It's a 64-bit, dual-processor, 4-core Xeon kit. 16GB RAM. Supermicro Marvell
SATA boards featuring the same SATA chips as the Sun X4500.
--
This message posted from opensolaris.org
Thanks for the reply and corroboration, Brent. I just liveupgraded the machine
from Solaris 10 u5 to Solaris 10 u6, which purports to have fixed all known
issues with the Marvell device, and am still experiencing the hang. So I guess
this set of facts would imply one of:
1) they missed one, o
I have a PC server running Solaris 10 5/08 which seems to frequently become
unable to share zfs filesystems via the shareiscsi and sharenfs options. It
appears, from the outside, to be hung -- all clients just freeze, and while
they're able to ping the host, they're not able to transfer nfs or
Pls pardon the off-topic question, but is there a Solaris backport of the fix?
On Tue, Oct 21, 2008 at 2:15 PM, Victor Latushkin
<[EMAIL PROTECTED]> wrote:
> Blake Irvin wrote:
>> Looks like there is a closed bug for this:
>>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>>
>> It's be
Hi,
I made a zvol and set it up as a target like this:
[EMAIL PROTECTED]:19#zfs create -V20g Allika/joberg
[EMAIL PROTECTED]:19#zfs set shareiscsi=on Allika/joberg
[EMAIL PROTECTED]:19#iscsitadm list target
Target: Allika/joberg
iSCSI Name: iqn.1986-03.com.sun:02:085ec10a-16f7-e09d-968a
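For completeness, the initiator end would then be pointed at it with something like this (the address is a placeholder):

# enable sendtargets discovery against the target host, then rescan
iscsiadm add discovery-address 192.168.0.10:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi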
While on the subject, in a home scenario where one actually notices
the electric bill personally, is it more economical to purchase a big
expensive 1TB disk and save on electric to run it for five years, or to
purchase two cheap 1/2TB disks and spend double on electric for them
for 5 years? Has any
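Rough numbers, using assumed figures of about 8W idle per 3.5" drive and $0.12/kWh:

# cost of powering one extra always-on drive for five years
echo '8 * 24 * 365 * 5 / 1000 * 0.12' | bc -l    # roughly 42 dollars

So unless the single big drive carries a hefty price premium, the power delta alone rarely decides it.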
I bought similar kit from them, but when I received the machine,
uninstalled, I looked at the install manual for the Areca card and
found that it's a manual driver add that is documented to
_occasionally hang_ and you have to _kill it off manually_ if it does.
I'm really not having that in a produ
Right, a nice depiction of the failure modes involved and their
probabilities based on typical published mtbf of components and other
arguments/caveats, please? Does anyone have the cycles to actually
illustrate this or have urls to such studies?
On Tue, Apr 15, 2008 at 1:03 PM, Keith Bierman <[E
Hi all,
Did anyone ever confirm whether this ssr212 box, without hardware raid
option, works reliably under OpenSolaris without fooling around with external
drivers, etc.? I need a box like this, but can't find a vendor that will give
me a try & buy. (Yes, I'm spoiled by Sun).
thx
jak
Hi,
I too am waiting for this to ship so I can build a Proof of
concept / demo cluster. Is there any other feature or enhancement
I'll have to wait for before pointing my cluster nodes at a Solaris
ISCSI target as global storage? Has anyone been actively developing
toward and testing thi