Hi,
One of my server's zfs faulted and it shows following:
NAME        STATE     READ WRITE CKSUM
backup      UNAVAIL      0     0     0  insufficient replicas
  raidz2-0  UNAVAIL      0     0     0  insufficient replicas
    c4t0d0  ONLINE       0     0     0
    c
on-updated versions of everything.
On Tue, Oct 16, 2012 at 2:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
> > Behalf Of Jason
--- On Tue, 9/25/12, Volker A. Brandt wrote:
> Well, he is telling you to run the dtrace program as root in
> one
> window, and run the "zfs get all" command on a dataset in
> your pool
> in another window, to trigger the dataset_stats variable to
> be filled.
>
> > none can hide from dtrace
--- On Mon, 9/24/12, Richard Elling wrote:
I'm hoping the answer is yes - I've been looking but do not see it ...
none can hide from dtrace!
# dtrace -qn 'dsl_dataset_stats:entry {this->ds = (dsl_dataset_t *)arg0; printf("%s\tcompressed size = %d\tuncompressed size=%d\n", this->ds->ds_dir->dd_m
Oh, and one other thing ...
--- On Fri, 9/21/12, Jason Usher wrote:
> > It shows the allocated number of bytes used by the
> > filesystem, i.e. after compression. To get the uncompressed size,
> > multiply "used" by "compressratio".
--- On Fri, 9/21/12, Sašo Kiselkov wrote:
> > I have a ZFS filesystem with compression turned
> on. Does the "used" property show me the actual data
> size, or the compressed data size ? If it shows me the
> compressed size, where can I see the actual data size ?
>
> It shows the allocated number of bytes used by the filesystem, i.e. after
> compression. To get the uncompressed size, multiply "used" by "compressratio".
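To make that concrete, a minimal sketch (dataset name is hypothetical) that pulls "used" and "compressratio" and multiplies them to estimate the uncompressed size:

  # "used" is in bytes with -p; compressratio prints like "1.53x", so strip the x
  used=$(zfs get -Hp -o value used tank/fs)
  ratio=$(zfs get -H -o value compressratio tank/fs | tr -d x)
  echo "$used * $ratio" | bc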
Hi,
I have a ZFS filesystem with compression turned on. Does the "used" property
show me the actual data size, or the compressed data size ? If it shows me the
compressed size, where can I see the actual data size ?
I also wonder about checking status of dedupe - I created my pool without
de
on LSI 9211-8i)
To: "Jason Usher"
Cc: zfs-discuss@opensolaris.org
Date: Tuesday, July 17, 2012, 5:05 PM
Hi Jason,
I have done this in the past. (3x LSI 1068E - IBM BR10i).
Your pool has no tie with the hardware used to host it (including your HBA).
You could change all your hardware, and s
We have a running zpool with a 12 disk raidz3 vdev in it ... we gave ZFS the
full, raw disks ... all is well.
However, we built it on two LSI 9211-8i cards and we forgot to change from IR
firmware to IT firmware.
Is there any danger in shutting down the OS, flashing the cards to IT firmware,
a
Did you try rm -- filename?
Sent from my iPhone
On Nov 23, 2011, at 1:43 PM, Harry Putnam wrote:
> Somehow I touched some rather peculiar file names in ~. Experimenting
> with something I've now forgotten I guess.
>
> Anyway I now have 3 zero length files with names -O, -c, -k.
>
> I've tri
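For reference, the two standard ways to remove files whose names begin with a dash (using the file names mentioned above):

  $ rm -- -O -c -k
  # or give paths that don't start with a dash:
  $ rm ./-O ./-c ./-k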
if you get rid of the HBA and log device, and run with ZIL
> disabled (if your work load is compatible with a disabled ZIL.)
By "get rid of the HBA" I assume you mean put in a battery-backed RAID
card instead?
-J
oller and a CSE-SAS-833TQ SAS backplane.
Have run ZFS with both Solaris and FreeBSD without a problem for a
couple years now. Had one drive go bad, but it was caught early by
running periodic scrubs.
--
Jason Fortezzo
forte...@mechanicalism.net
This might be related to your issue:
http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html
On Saturday, August 6, 2011, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or
WD's drives have gotten better the last few years but their quality is still
not very good. I doubt they test their drives extensively for heavy duty server
configs, particularly since you don't see them inside any of the major server
manufacturers' boxes.
Hitachi in particular does well in mas
Use the Solaris cp (/usr/bin/cp) instead
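A quick sketch of the suggested workaround (paths are hypothetical): the native Solaris cp preserves ZFS/NFSv4 ACLs with -p, and /bin/ls -V lets you verify the result:

  # /usr/bin/cp -p /export/data/file /export/backup/file
  # /bin/ls -V /export/backup/file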
On Wed, Mar 16, 2011 at 8:59 AM, Fred Liu wrote:
> It is from ZFS ACL.
>
>
>
> Thanks.
>
>
>
> Fred
>
>
>
> From: Fred Liu
> Sent: Wednesday, March 16, 2011 9:57 PM
> To: ZFS Discussions
> Subject: GNU 'cp -p' can't work well with ZFS-based-NFS
>
>
>
> Alw
HyperDrive5 = ACard ANS9010
I have personally been wanting to try one of these for some time as a
ZIL device.
On 12/29/2010 06:35 PM, Kevin Walker wrote:
You do seem to misunderstand ZIL.
ZIL is quite simply write cache and using a short stroked rotating
drive is never going to provide a pe
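For what it's worth, if one did want to try such a device as a dedicated log (slog), adding it is a one-liner (pool and device names hypothetical):

  # zpool add tank log c5t0d0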
On Tue, Dec 21, 2010 at 7:58 AM, Jeff Bacon wrote:
> One thing I've been confused about for a long time is the relationship
> between ZFS, the ARC, and the page cache.
>
> We have an application that's a quasi-database. It reads files by
> mmap()ing them. (writes are done via write()). We're talki
I've done mpxio over multiple ip links in linux using multipathd. Works just
fine. It's not part of the initiator but accomplishes the same thing.
It was a linux IET target. Need to try it here with a COMSTAR target.
-Original Message-
From: Ross Walker
Sender: zfs-discuss-boun...@op
Just for history as to why Fishworks was running on this box...we were
in the beta program and have upgraded along the way. This box is an
X4240 with 16x 146GB disks running the Feb 2010 release of FW with
de-dupe.
We were getting ready to re-purpose the box and getting our data off.
We then delet
Replace it. Resilvering should not be as painful if all your disks are functioning
normally.
Thanks Tuomas. I'll run the scrub. It's an aging X4500.
-J
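A typical sequence for confirming whether those checksum errors persist (pool name hypothetical):

  # zpool scrub tank
  # zpool status -v tank   # re-check the error counters once the scrub finishes
  # zpool clear tank       # reset the counters after the errors are understood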
On Thu, Sep 30, 2010 at 3:25 AM, Tuomas Leikola wrote:
> On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams <
> jasonjwwilli...@gmail.com> wrote:
>
>>
>> Should I be worried about these check
Hi,
I just replaced a drive (c12t5d0 in the listing below). For the first 6
hours of the resilver I saw no issues. However, sometime during the last
hour of the resilver, the new drive and two others in the same RAID-Z2 stripe
threw a couple checksum errors. Also, two of the other drives in the str
If one was sticking with OpenSolaris for the short term, is something older
than 134 more stable/less buggy? Not using de-dupe.
-J
On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling wrote:
> Hi Charles,
> There are quite a few bugs in b134 that can lead to this. Alas, due to the
> new
> regime, the
Err...I meant Nexenta Core.
-J
On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
> 134 it is. This is an OpenSolaris rig that's going to be replaced within
> the next 60 days, so just need to get it to something that won't throw
134 it is. This is an OpenSolaris rig that's going to be replaced within the
next 60 days, so just need to get it to something that won't throw false
checksum errors like the 120-123 builds do and has decent rebuild times.
Future boxes will be NexentaStor.
Thank you guys. :)
-J
On Sun, Sep 26
Upgrading is definitely an option. What is the current snv favorite for ZFS
stability? I apologize, with all the Oracle/Sun changes I haven't been paying
as close attention to bug reports on zfs-discuss as I used to.
-J
Sent via iPhone
Is your e-mail Premiere?
On Sep 26, 2010, at 10:22, Roy
I just witnessed a resilver that took 4 hours for 27 GB of data. Setup is 3x raid-z2
stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150io/s each during
::spa -ev
::arc
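These are mdb(1) kernel dcmds and can be run non-interactively against the live kernel, for example:

  # echo "::spa -ev" | mdb -k
  # echo "::arc" | mdb -k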
Kind regards,
Jason
I can think of two rather ghetto ways to go.
1. Write data, then set the read-only property. If you need to make updates,
cycle back to rw, write the data, and set read-only again.
2. Write data, snapshot the fs, expose the snapshot instead of the r/w file
system. Your mileage may vary depending on the impleme
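A rough sketch of both approaches (dataset names hypothetical):

  # 1. flip the dataset read-only between update cycles
  zfs set readonly=on tank/export
  # ... later, when an update is needed:
  zfs set readonly=off tank/export

  # 2. publish a snapshot instead of the live filesystem
  zfs snapshot tank/export@published
  # expose tank/export/.zfs/snapshot/published (or a clone of it) to consumers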
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?
Just recently, it would seem a poorly timed unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0 byte
nsswitch.conf (which when t
On Mon, Jul 12, 2010 at 11:09 AM, Garrett D'Amore wrote:
> On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote:
>> Linder, Doug wrote:
>> > Out of sheer curiosity - and I'm not disagreeing with you, just wondering
>> > - how does ZFS make money for Oracle when they don't charge for it? Do
>
On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble wrote:
> On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
>>
>> On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal
>> wrote:
>>
>>>
>>> We at KQInfotech, initially started on an independent port of ZFS to
>>> linux.
>>> When we posted our progress
Ok,
I got it working; however, I set up two partitions on each disk using fdisk
inside of format. What is the difference compared to slices? (I checked with
gparted.)
Bye
Hi,
something like this
Disk #   Slice 1   Slice 2
1        raid5     raid0
2        raid5     raid0
3        raid5     raid0
I want to have some fast scratch space (raid0) and some protected (raidz)
Greetings
J
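A sketch of that layout with three disks split into two slices each (device and slice names hypothetical):

  # protected pool across slice 0 of each disk
  zpool create protected raidz c1t1d0s0 c1t2d0s0 c1t3d0s0
  # fast scratch pool striped across slice 1 of each disk
  zpool create scratch c1t1d0s1 c1t2d0s1 c1t3d0s1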
In the meantime, you can use autofs to do something close to this if
you like (sort of like the pam_mkhomedir module) -- you can have it
execute a script that returns the appropriate auto_user entry (given a
username as input). I wrote one a long time ago that would do a zfs
create if the dataset
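A minimal sketch of such an executable automount map (pool name, paths, and the exact entry format returned are simplified assumptions):

  #!/bin/ksh
  # /etc/auto_home_exec -- automountd passes the map key (username) as $1
  user=$1
  ds=tank/home/$user
  zfs list "$ds" >/dev/null 2>&1 || zfs create "$ds"
  # print the map entry for this key
  echo "localhost:/tank/home/$user"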
Well, the GUI I think is just Windows; it's all just APIs that are
presented to Windows.
On Mon, May 3, 2010 at 10:16 PM, Edward Ned Harvey
wrote:
>> From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
>> Behalf Of Jason King
>>
>> If you're
If you're just wanting to do something like the netapp .snapshot
(where it's in every directory), I'd be curious if the CIFS shadow
copy support might already have done a lot of the heavy lifting for
this. That might be a good place to look
On Mon, May 3, 2010 at 7:25 PM, Peter Jeremy
wrote:
> On
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).
On Wed, Apr 21, 2010 at 6:01 PM, Brandon High wrote:
> On Wed, Apr 21, 2010 at 10:38 AM, Edward Ned Harvey
> wrote
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the snapdir property ('directory' ?) that provides this behavior (with
suitable warnings ab
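For reference, the snapdir property that exists today only controls the visibility of the .zfs directory at the dataset root (dataset name hypothetical):

  # zfs set snapdir=visible tank/home
  # ls /tank/home/.zfs/snapshot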
Well, I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 X 7 RaidZ2 vdevs in one poo
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested to me to look into
www.nexenta.org and I must say it is quite interesting. Someone correct me if I
am wrong, but it looks like it is OpenSolaris based and has basically
ev
Freddie,
now you have brought up another question :) I had always assumed that I would
just use OpenSolaris for this file server build, as I had not actually done
any research in regards to other operating systems that support ZFS. Does anyone
have any advice as to whether I should be consideri
I am booting from a single 74 GB WD Raptor attached to the motherboard's onboard
SATA port.
Ahh,
Thank you for the reply, Bob, that is the info I was after. It looks like I will
be going with the 2 x 7 RaidZ2 option.
And just to clarify: as far as expanding this pool in the future, my only option
is to add another 7-spindle RaidZ2 array, correct?
Thanks for all the help, guys!
Thank you for the replies, guys!
I was actually already planning to get another 4 GB of RAM for the box right
away anyway, but thank you for mentioning it! As there appear to be a couple of
ways to "skin the cat" here, I think I am going to try both a 14-spindle RaidZ2
and a 2 x 7 RaidZ2 configura
I have been searching this forum and just about every ZFS document I can find
trying to find the answer to my questions. But I believe the answer I am
looking for is not going to be documented and is probably best learned from
experience.
This is my first time playing around with OpenSolaris
On Thu, Apr 1, 2010 at 9:06 AM, David Magda wrote:
> On Wed, March 31, 2010 21:25, Bart Smaalders wrote:
>
>> ZFS root will be the supported root filesystem for Solaris Next; we've
>> been using it for OpenSolaris for a couple of years.
>
> This is already supported:
>
>> Starting in the Solaris 1
On Wed, Mar 31, 2010 at 7:53 PM, Erik Trimble wrote:
> Brett wrote:
>>
>> Hi Folks,
>>
>> Im in a shop thats very resistant to change. The management here are
>> looking for major justification of a move away from ufs to zfs for root file
>> systems. Does anyone know if there are any whitepapers/b
illing to go through more hackery if needed.
(If I need to destroy and re-create these LUNS on the storage array, I can do
that too, but I'm hoping for something more host based)
--Jason
Did you try adding:
nfs4: mode = special
vfs objects = zfsacl
To the shares in smb.conf? While we haven't done extensive work on
S10, it appears to work well enough for our (limited) purposes (along
with setting the acl properties to passthrough on the fs).
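A sketch of the combination being described (share name and dataset are hypothetical):

  # in smb.conf, on the share in question:
  [export]
      path = /tank/export
      vfs objects = zfsacl
      nfs4: mode = special

  # and on the ZFS side:
  zfs set aclinherit=passthrough tank/export
  zfs set aclmode=passthrough tank/export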
On Fri, Feb 26, 2010 at
Could also try /usr/gnu/bin/ls -U.
I'm working on improving the memory profile of /bin/ls (as it gets
somewhat excessive when dealing with large directories), which as a
side effect should also help with this.
Currently /bin/ls allocates a structure for every file, and doesn't
output anything unt
If you're doing anything with ACLs, the GNU utilities have no
knowledge of ACLs, so GNU chmod will not modify them (nor will GNU ls
show ACLs), you need to use /bin/chmod and /bin/ls to manipulate them.
It does sound though that GNU chmod is explicitly testing and skipping
any entry that's a link
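For example, inspecting and editing an ACL with the native tools (the file name and ACL entry are hypothetical):

  $ /bin/ls -V file.txt
  $ /bin/chmod A+user:webservd:read_data:allow file.txt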
My problem is when you have 100+ luns divided between OS and DB,
keeping track of what's for what can become problematic. It becomes
even worse when you start adding luns -- the chance of accidentally
grabbing a DB lun instead of one of the new ones is non-trivial (then
there's also the chance th
On Sat, Feb 13, 2010 at 9:58 AM, Jim Mauro wrote:
> Using ZFS for Oracle can be configured to deliver very good performance.
> Depending on what your priorities are in terms of critical metrics, keep in
> mind
> that the most performant solution is to use Oracle ASM on raw disk devices.
> That is
On Wed, Feb 10, 2010 at 6:45 PM, Paul B. Henson wrote:
>
> We have an open bug which results in new directories created over NFSv4
> from a linux client having the wrong group ownership. While waiting for a
> patch to resolve the issue, we have a script running hourly on the server
> which finds d
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens wrote:
> Michael Schuster wrote:
>>
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn
wrote:
> On Thu, 3 Dec 2009, Erik Ableson wrote:
>>
>> Much depends on the contents of the files. Fixed size binary blobs that
>> align nicely with 16/32/64k boundaries, or variable sized text files.
>
> Note that the default zfs block size is 128K a
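The default can be tuned per dataset if a workload calls for it (dataset name and value are hypothetical); note it only affects newly written blocks:

  # zfs get recordsize tank/data
  # zfs set recordsize=64K tank/data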
failed (5)
I've searched the forums and they've been very helpful but I don't see anything
about this. I created a pool with the internal sata drives and there are no
issues transferring data on those ports. What should I try to isolate and
hopefully resolve the issue
On Sun, Nov 8, 2009 at 7:55 AM, Robert Milkowski wrote:
>
> fyi
>
> Robert Milkowski wrote:
>>
>> XXX wrote:
>>>
>>> | Have you actually tried to roll-back to previous uberblocks when you
>>> | hit the issue? I'm asking as I haven't yet heard about any case
>>> | of the issue witch was not solved
it's beefs with Sun does). But, I can
live with detaching them if I have to.
Another thing that would be nice would be to receive notification of
disk failures from the OS via email or SMS (like the vendor I
previously alluded to), but I know I'm talking crazy now.
Jason
On Thu, Oct 2
On Thu, Oct 15, 2009 at 9:25 AM, Enda O'Connor wrote:
>
>
> Jason King wrote:
>>
>> On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins wrote:
>>>
>>> Dale Ghent wrote:
>>>>
>>>> So looking at the README for patch 14144[45]-09,
On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins wrote:
> Dale Ghent wrote:
>>
>> So looking at the README for patch 14144[45]-09, there are ton of ZFS
>> fixes and feature adds.
>>
>> The big features are already described in the update 8 release docs, but
>> would anyone in-the-know care to comment
X read errors in Y minutes", Then we can really see
what happened.
Jason
On Wed, Oct 14, 2009 at 4:32 PM, Eric Schrock wrote:
> On 10/14/09 14:26, Jason Frank wrote:
>>
>> Thank you, that did the trick. That's not terribly obvious from the
>> man page though. The man
lot of attempts out there, but nothing I've found is comprehensive.
Jason
On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock wrote:
> On 10/14/09 14:17, Cindy Swearingen wrote:
>>
>> Hi Jason,
>>
>> I think you are asking how do you tell ZFS that you want to replace t
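For completeness, the command in question (pool and device names hypothetical):

  # zpool replace tank c8t7d0            # new disk inserted into the same slot
  # zpool replace tank c8t7d0 c8t9d0     # or replace with a disk in another slot
  # zpool status tank                    # watch the resilver progress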
So, my Areca controller has been complaining via email of read errors for a
couple days on SATA channel 8. The disk finally gave up last night at 17:40.
I got to say I really appreciate the Areca controller taking such good care of
me.
For some reason, I wasn't able to log into the server las
It does seem to come up regularly... perhaps someone with access could
throw up a page under the ZFS community with the conclusions (and
periodic updates as appropriate)..
On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble wrote:
> Nathan wrote:
>>
>> While I am about to embark on building a home NAS
@now > /datapool/data/Temp/test.zfs
What am I doing wrong? Why won't the whole thing copy? I've tried an
incremental from origin to @now, but it still doesn't work right...
Thanks for all your help.
-Jason
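For reference, the difference between a full and an incremental send stream (dataset and snapshot names here are hypothetical, following the paths above):

  # full stream containing everything up to @now
  zfs send datapool/data@now > /datapool/data/Temp/test.zfs
  # incremental stream containing only the changes between @origin and @now
  zfs send -i datapool/data@origin datapool/data@now > /datapool/data/Temp/incr.zfs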
I guess I should come at it from the other side:
If you have 1 iscsi target box and it goes down, you're dead in the water.
If you have 2 iscsi target boxes that replicate and one dies, you are OK but
you then have to have a 2:1 total storage to usable ratio (excluding expensive
shared disks).
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The idea
being that I can just throw up another x86/x64 box to add more storage. Has
anyone tried anything similar?
So aside from the NFS debate, would this 2 tier approach work? I am a bit
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to
the VMware host as a raw device. Is that possible or is my understanding
wrong? Also could it be defined as a clustered resource?
Specifically I remember storage vmotion being supported on NFS last as well as
jumbo frames. Just the impression I get from past features, perhaps they are
doing better with that.
I know the performance problem had specifically to do with ZFS and the way it
handled something. I know lots of i
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically they abandoned using that because there was
a huge performance hit using ZFS over NFS. I didn’t get the specifics but his
group is usually pretty sharp. I’ll have to check back with him.
I've been looking to build my own cheap SAN to explore HA scenarios with VMware
hosts, though not for a production environment. I'm new to opensolaris but I
am familiar with other clustered HA systems. The features of ZFS seem like
they would fit right in with attempting to build an HA storage
Thanks for the reply!
The reason I'm not waiting until I have the disks is mostly because it will
take me several months to get the funds together and in the meantime, I need
the extra space 1 or 2 drives gets me. Since the sparse files will only take
up the space in use, if I've migrated 2 of
As you can add multiple vdevs to a pool, my suggestion would be to do several
smaller raidz1 or raidz2 vdevs in the pool.
With your setup - assuming 2 HBAs @ 24 drives each your setup would have
yielded 20 drives usable storage (about) (assuming raidz2 with 2 spares on each
HBA) and then mirror
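A sketch of what several smaller vdevs in one pool looks like at creation time (device names hypothetical):

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      spare  c1t6d0 c2t6d0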
This is an odd question, to be certain, but I need to find out what size a 1.5
TB drive is to help me create a sparse/fake array.
Basically, if I could have someone do a dd if=<1.5 TB disk> of= and
then post the ls -l size of that file, it would greatly assist me.
Here's what I'm doing:
I hav
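Once the byte count is known, a sparse backing file of exactly that size can be created without consuming the space (the size below is a placeholder, not the real capacity of any particular drive):

  # mkfile -n 1500000000000 /tank/fake/disk1.img
  # ls -ls /tank/fake/disk1.img    # first column shows blocks actually allocated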
thousands and thousands of zpools. I started
collecting such zpools back in 2005. None have been lost.
Best regards, Jason
Jason A. Hoffman, PhD | Founder, CTO, Joyent Inc.
ja...@joyent.com
http://joyent.com/
mobile: +1-415-279-6196
John Hoogerdijk wrote:
So I guess there is some porting to do - no O_DIRECT in Solaris...
Anyone have bonnie++ 1.03e ported already?
For your purposes, couldn't you replace O_DIRECT with O_SYNC as a hack?
If you're trying to benchmark the log device, the important thing is to
generate synch
Mark J Musante wrote:
On Tue, 30 Jun 2009, John Hoogerdijk wrote:
I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
device. To see how well it works, I ran bonnie++, but never saw any
I/Os on the log device (using iostat -nxce). Pool status is good -
no issues or errors.
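To watch per-vdev activity (including the log device) during the benchmark run, zpool iostat breaks it out (pool name hypothetical):

  # zpool iostat -v tank 5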
On Tue, Jun 30, 2009 at 1:36 PM, Erik Trimble wrote:
> Bob Friesenhahn wrote:
>>
>> On Tue, 30 Jun 2009, Neal Pollack wrote:
>>
>>> Actually, they do quite a bit more than that. They create jobs, generate
>>> revenue for battery manufacturers, and tech's that change batteries and do
>>> PM maintena
Nevermind, found it at
http://www.gluster.org/docs/index.php/Install_guide#Solaris
-J
On May 14, 2009, at 1:15 PM, Jason A. Hoffman wrote:
Is there a solaris build or any information on how you're compiling
it on solaris something?
Regards, Jason
On May 14, 2009, at 5:17 AM, Sh
Is there a solaris build or any information on how you're compiling it
on solaris something?
Regards, Jason
On May 14, 2009, at 5:17 AM, Shehjar Tikoo wrote:
Hi Folks!
GlusterFS is a clustered file system that runs on commodity
off-the-shelf hardware, delivering multiple time
On Mon, Mar 9, 2009 at 5:31 PM, Jan Hlodan wrote:
> Hi Tomas,
>
> thanks for the answer.
> Unfortunately, it didn't help much.
> However I can mount all file systems, but the system is broken - the desktop
> won't come up.
>
> "Could not update ICEauthority file /.ICEauthority
> There is a problem with the
On Fri, Feb 20, 2009 at 2:59 PM, Darin Perusich
wrote:
> Hello All,
>
> I'm in the process of migrating a file server from Solaris 9, where
> we're making extensive use of POSIX-ACLs, to ZFS and I have a question
> that I'm hoping someone can clear up for me. I'm using ufsrestore to
> restore the
option somewhere to allow sharing tank/nfs/vmware and the zfs
filesystems mounted into that directory tree? It would make for a very neat
solution if it did.
If not I can get around it with one nfs mount per virtual machine, but that is
extra overhead I was hoping to avoid.
Thanks in advance
Ja
?
Or, is a new release coming out that might relieve me of some of these
issues?
Thanks,
-Jason
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum wrote:
> Hi all,
>
> If I want to make a snapshot of an
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt wrote:
> On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
> wrote:
>
>>On Jan 6, 2009, at 14:21, Rob wrote:
>>
>>> Obviously ZFS is ideal for large databases served out via
>>> application level or web servers. But what other practical ways are
>>> there to
On Aug 3, 2008, at 8:46 PM, Rahul wrote:
> hi
> can you give some disadvantages of the ZFS file system??
>
> plzz its urgent...
>
> help me.
On Tue, Jul 15, 2008 at 4:17 AM, Ross <[EMAIL PROTECTED]> wrote:
> Well I haven't used a J4500, but when we had an x4500 (Thumper) on loan they
> had Solaris pretty well integrated with the hardware. When a disk failed, I
> used cfgadm to offline it and as soon as I did that a bright blue "Ready
On Tue, Jul 1, 2008 at 8:10 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]>
>>> wrote:
Instead we should take it comple
On Wed, May 14, 2008 at 6:42 PM, Dave Koelmeyer
<[EMAIL PROTECTED]> wrote:
> Hi All, first time caller here, so please be gentle...
>
> I'm on OpenSolaris 2008.05, and following the really useful guide here to
> create a CIFs share in domain mode:
>
> http://blogs.sun.com/timthomas/entry/configuri
On Thu, May 8, 2008 at 8:59 PM, EchoB <[EMAIL PROTECTED]> wrote:
> I cannot recall if it was this (-discuss) or (-code) but a post a few
> months ago caught my attention.
> In it someone detailed having worked out the math and algorithms for a
> flexible expansion scheme for ZFS. Clearly this is
BTW, my machine doesn't have a DNS name, so I had to enter a phony one to get
nfs/server online.
Can that have any ill effects?
That doesn't work.
It looks like something may be corrupt - maybe something didn't get installed
properly, or I have a bad disc; for some reason my share command doesn't have an
-F option.
I'm going to get a new disc and reinstall everything.
Thanks for the help, everyone.
> Try sharing something else, maybe:
> share -F nfs /mnt
>
> After that, you should see the services started.
> Once you get that to work, then try sharing the
> zfs file systems. Your problems aren't zfs related...
> at least not yet.
> -- richard
# share -F nfs /mnt
share: illegal option -- F
I got all nfs/server dependencies online, but nfs/server is disabled because
"No NFS filesystems are shared":
# svcs -l nfs/server
fmri         svc:/network/nfs/server:default
name         NFS server
enabled      false (temporary)
state        disabled
next_state   none
state_time   Sun Feb 17 21:
> You're missing the server bits, check for the following packages:
> SUNWnfsskr, SUNWnfssr, and SUNWnfssu
> -- richard
I added those packages and rebooted, then did:
# svcadm enable network/nfs/server
but NFS still doesn't work:
# zfs share tank/storage
cannot share 'tank/storage': share(1M) fail
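Once the server packages are in place, a typical sequence looks like this (the dataset name follows the example above):

  # svcadm enable -r network/nfs/server    # -r also enables the dependencies
  # zfs set sharenfs=on tank/storage
  # share                                  # should now list the export
  # svcs nfs/server                        # should report "online"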