The question that has occurred to me is:
I *must* choose one of those support options for how long?
I mean if I buy support for a machine for a year and put S11 Express
in production on it, then I don't renew the support, am I now
violating the license?
That's bogus. I could be wrong but I don't
Does OpenSolaris/Solaris11 Express have a driver for it already?
Anyone used one already?
-Kyle
Hi all,
I'd like to give my machine a little more swap.
I ran:
zfs get volsize rpool/swap
and saw it was 2G
So I ran:
zfs set volsize=4G rpool/swap
to double it. zfs get shows it took effect, but swap -l doesn't show
any change.
I ran swap -d
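(For reference, not from the original post: the usual sequence on a ZFS-root box is to drop the swap device, resize the zvol, and add it back; only then does swap -l show the new size. The device path below assumes the default rpool/swap.)

  swap -d /dev/zvol/dsk/rpool/swap    # remove the old swap device
  zfs set volsize=4G rpool/swap       # resize the backing zvol
  swap -a /dev/zvol/dsk/rpool/swap    # add it back at the new size
  swap -l                             # verify the new size shows up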
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote:
>
> Since combining ZFS storage backend, via nfs or iscsi, with ESXi
> heads, I'm in love. But for one thing. The interconnect between
> the head & storage.
>
>
>
> 1G Ether is so cheap, but not as f
I'm shopping for an SSD for a ZIL.
Looking around on NewEgg, at the claimed (not sure I believe them)
IOPS, these caught my attention:
Corsair Force 80GB CSSD-F80GBP2-BRKT: 50K 4K-aligned random write IOPS
OCZ Vertex 2 120GB OC
On 8/7/2010 4:11 PM, Terry Hull wrote:
>
> It is just that lots of the PERC controllers do not do JBOD very well. I've
> done it several times making a RAID 0 for each drive. Unfortunately, that
> means the server has lots of RAID hardware that is
On 10/25/2010 3:39 AM, Markus Kovero wrote:
>
> Any other feasible alternatives for Dell hardware? Wondering, are these
> issues mostly related to Nehalem architectural problems, e.g. c-states.
> So is there anything good in switching hw vendor? HP an
Hi All,
I'm currently considering purchasing 1 or 2 Dell R515's.
With up to 14 drives, and up to 64GB of RAM, it seems like it's well
suited
for a low-end ZFS server.
I know this box is new, but I wonder if anyone out there has any
experience with
On 10/18/2010 5:40 AM, Habony, Zsolt wrote:
> (I do not mirror, as the storage gives redundancy behind LUNs.)
>
By not enabling redundancy (Mirror or RAIDZ[123]) at the ZFS level,
you are opening yourself to corruption problems that the underlying
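(A minimal sketch of what ZFS-level redundancy looks like, added for illustration; the pool and device names are assumptions, not from the thread:)

  # Mirror two SAN LUNs so ZFS can self-heal checksum errors, not just detect them
  zpool create tank mirror c2t0d0 c2t1d0
  # Or, short of a second LUN, keep two copies of each data block
  zfs set copies=2 tank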
On 10/18/2010 4:28 AM, Habony, Zsolt wrote:
>
> I worry about head thrashing.
Why?
If your SAN group gives you a LUN that is at the opposite end of the
array, I would think that was because they had already assigned the
space in the middle to othe
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote:
>
> The default blocksize is 128K. If you are using mirrors, then
> each block on disk will be 128K whenever possible. But if you're
> using raidzN with a capacity of M disks (M disks useful capacity
oc
> 0% done: 0 pages dumped, dump failed: error 5
> rebooting...
>
As I read this, it's probably a bug in the IPS driver. But I really
don't know anything about kernel panics.
This seems 100% reproducible, so I'm happy to run more tests in KDB if
it will help.
On 6/28/2010 10:30 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tristram Scott
>>
>> If you would like to try it out, download the package from:
>> http://www.qu
I've very infrequently seen the RAMSAN devices mentioned here. Probably
due to price.
However, a long time ago I think I remember someone suggesting a
build-it-yourself RAMSAN.
Where is the downside of one or 2 OS boxes with a whole lot of RAM
(and/
On 6/11/2010 12:32 AM, Erik Trimble wrote:
> On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
>> On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal
>> wrote:
>>
>>> We at KQInfotech, initially started on an independent port of ZFS to
>>> linux.
>
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote:
>
> Everything is faster with more ram. There is no limit, unless the total
> used disk in your system is smaller than the available ram in your system
> ... which seems very improbable.
>
Off topic, bu
On 5/27/2010 9:30 PM, Reshekel Shedwitz wrote:
> Some tips…
>
> (1) Do a zfs mount -a and a zfs share -a. Just in case something didn't get
> shared out correctly (though that's supposed to automatically happen, I think)
>
> (2) The Solaris automounter (i.e. in a NIS environment) does not seem to
On 5/27/2010 2:45 PM, Jan Kryl wrote:
> Hi Frank,
>
> On 24/05/10 16:52 -0400, Frank Middleton wrote:
>
>> Many many moons ago, I submitted a CR into bugs about a
>> highly reproducible panic that occurs if you try to re-share
>> a lofi mounted image. That CR has AFAIK long since
>> disappe
On 5/25/2010 11:39 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Kyle McDonald
>>
>> I've been thinking lately that I'm not sure I like the root pool being
>> un
Hi,
I know the general discussion is about flash SSD's connected through
SATA/SAS or possibly PCI-E these days. So excuse me if I'm asking
something that makes no sense...
I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
300GB drives for a data pool, and 1 18GB drive for the root pool.
I
Hi guys.
yep I know about the ZIL, and SSD Slogs.
While setting Nexenta up it offered to disable the ZIL entirely. For
now I left it on. In the end (hopefully for only specific filesystems -
once that feature is released.) I'll end up disabling the ZIL for our
software builds since:
1) The bui
Hi all,
I recently installed Nexenta Community 3.0.2 on one of my servers:
IBM eSeries X346
2.8Ghz Xeon
12GB DDR2 RAM
1 builtin BGE interface for management
4 port Intel GigE card aggregated for Data
IBM ServeRAID 7k with 256MB BB Cache (isp driver)
6 RAID0 single drive LUNS (so I can use t
On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
> "valrh...@gmail.com" writes:
>
>
>> I have been using DVDs for small backups here and there for a decade
>> now, and have a huge pile of several hundred. They have a lot of
>> overlapping content, so I was thinking of feeding the entire stack
On 5/3/2010 4:56 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Kyle McDonald
>>
>> If you're only sharing them to Linux machines, then NFS would be so
>> much
>>
On 5/3/2010 7:41 AM, Michelle Knight wrote:
> The long ls command worked, as in it created the links, but they didn't work
> properly under the ZFS SMB share.
>
I'm guessing you meant the 'long ln' command?
If you look at what those 2 commands create you'll notice (in the output
of ls -l) that
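(For illustration, a sketch with assumed file names; the point is that ls -l shows a symlink as its own small file of type 'l' pointing at a path, while a hard link is just another name for the same inode:)

  ln -s /export/data/target.txt soft-link   # symbolic link: ls -l shows type 'l' and '-> target'
  ln /export/data/target.txt hard-link      # hard link: same inode, link count goes to 2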
On 3/9/2010 1:55 PM, Matt Cowger wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
>
>
I have no way of guessing the effects it would have, but is there a
reason that the filesystem blocks can't be a multiple of the app
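(For reference, the per-dataset knob being discussed is recordsize; the dataset name and size below are assumptions, not from the thread:)

  zfs set recordsize=8k tank/db    # match, or use a multiple of, the app's block size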
On 4/17/2010 9:03 AM, Edward Ned Harvey wrote:
>
>>>> It would be cool to only list files which are different.
>>> Know of any way to do that?
>>>
>> cmp
>>
> Oh, no. Because cmp and diff require reading both files, it could take
> forever, especially if you have a lot of
On 4/16/2010 10:30 AM, Bob Friesenhahn wrote:
> On Thu, 15 Apr 2010, Eric D. Mudama wrote:
>>
>> The purpose of TRIM is to tell the drive that some # of sectors are no
>> longer important so that it doesn't have to work as hard in its
>> internal garbage collection.
>
> The sector size does not typ
On 4/12/2010 9:10 AM, Willard Korfhage wrote:
> I upgraded to the latest firmware. When I rebooted the machine, the pool was
> back, with no errors. I was surprised.
>
> I will work with it more, and see if it stays good. I've done a scrub, so now
> I'll put more data on it and stress it some mor
On 4/6/2010 3:41 PM, Erik Trimble wrote:
> On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
>
>> Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
>> latest recommendations for a log device?
>>
>> http://bit.ly/aL1dne
>>
> The Vertex LE models should do well as ZIL
I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
Is there anything out there I can just buy?
-Kyle
On 4/4/2010 11:04 PM, Edward Ned Harvey wrote:
>> Actually, It's my experience that Sun (and other vendors) do exactly
>> that for you when you buy their parts - at least for rotating drives, I
>> have no experience with SSD's.
>>
>> The Sun disk label shipped on all the drives is setup to make the
On 4/2/2010 8:08 AM, Edward Ned Harvey wrote:
>> I know it is way after the fact, but I find it best to coerce each
>> drive down to the whole GB boundary using format (create Solaris
>> partition just up to the boundary). Then if you ever get a drive a
>> little smaller it still should fit.
>>
On 3/27/2010 3:14 AM, Svein Skogen wrote:
> On 26.03.2010 23:55, Ian Collins wrote:
> > On 03/27/10 09:39 AM, Richard Elling wrote:
> >> On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
> >>
> >>> Hi,
> >>>
> >>> The jumbo-frames in my case give me a boost of around 2 mb/s, so it's
> >>> not that
On 3/30/2010 2:44 PM, Adam Leventhal wrote:
> Hey Karsten,
>
> Very interesting data. Your test is inherently single-threaded so I'm not
> surprised that the benefits aren't more impressive -- the flash modules on
> the F20 card are optimized more for concurrent IOPS than single-threaded
> laten
On 3/10/2010 3:27 PM, Robert Thurlow wrote:
As said earlier, it's the string returned from the reverse DNS lookup
that needs to be matched.
So, to make a long story short, if you log into the server
from the client and do "who am i", you will get the host
name you need for the share.
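(A sketch of the resulting share setting, with a hypothetical host name, not from the thread:)

  # The host string must match what the reverse DNS lookup returns
  zfs set sharenfs=rw=client.example.com tank/export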
Anothe
dick hoogendijk wrote:
glidic anthony wrote:
I have a solution using 'zfs set sharenfs=rw,nosuid zpool', but I prefer
to use the sharemgr command.
Then you prefer wrong.
To each their own.
ZFS filesystems are not shared this way.
They can be. I do it all the time. There's nothing
Darren J Moffat wrote:
Jozef Hamar wrote:
Hi all,
I can not find any instructions on how to set the file quota (i.e.
maximum number of files per filesystem/directory) or directory quota
(maximum size that files in particular directory can consume) in ZFS.
That is because it doesn't exist.
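(What does exist is a per-dataset space quota, so the usual workaround is one filesystem per directory you want to cap; the dataset names and sizes below are assumptions:)

  zfs create tank/projects/alpha
  zfs set quota=10G tank/projects/alpha      # caps space used, including snapshots
  zfs set refquota=10G tank/projects/alpha   # caps space used, excluding snapshots

Neither limits the number of files.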
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format isn't and vice versa.
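(For example, a sketch with assumed dataset and host names: the -D flag asks zfs send for a deduplicated stream regardless of whether dedup is enabled for the on-disk data:)

  zfs send -D tank/data@snap | ssh backuphost zfs receive backup/data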
Jacob Ritorto wrote:
With the web redesign, how does one get to zfs-discuss via the
opensolaris.org website?
Sorry for the ot question, but I'm becoming desperate after
clicking circular links for the better part of the last hour :(
You can get the web pages to load? All I get are
David Magda wrote:
On Oct 24, 2009, at 08:53, Joerg Schilling wrote:
The article that was mentioned a few hours ago did mention
licensing problems without giving any kind of evidence for
this claim. If there is evidence, I would be interested in
knowing the background, otherwise it looks to me
Bob Friesenhahn wrote:
On Fri, 23 Oct 2009, Kyle McDonald wrote:
Along these lines, it's always struck me that most of the
restrictions of the GPL fall on the entity who distributes the 'work'
in question.
A careful reading of GPLv2 shows that restrictions only apply whe
Bob Friesenhahn wrote:
On Fri, 23 Oct 2009, Anand Mitra wrote:
One of the biggest questions around this effort would be “licensing”.
As far as our understanding goes; CDDL doesn’t restrict us from
modifying ZFS code and releasing it. However GPL and CDDL code cannot
be mixed, which implies that
Mike Bo wrote:
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way to carve up the space, so I used a single
Owen Davies wrote:
Thanks. I took a look and that is exactly what I was looking for. Of course I
have since just reset all the permissions on all my shares but it seems that
the proper way to swap UIDs for users with permissions on CIFS shares is to:
Edit /etc/passwd
Edit /var/smb/smbpasswd
Scott Meilicke wrote:
I am still not buying it :) I need to research this to satisfy myself.
I can understand that the writes come from memory to disk during a txg write
for async, and that is the behavior I see in testing.
But for sync, data must be committed, and a SSD/ZIL makes that faster
Adam Sherman wrote:
On 6-Aug-09, at 11:50 , Kyle McDonald wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to
allow for usb booting. Most of todays computers DO. Personally i
like compact flash because
Adam Sherman wrote:
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of todays computers DO. Personally i like
compact flash because it is
Will Murnane wrote:
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get foun
Kyle McDonald wrote:
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my
filer's rpool to an ssd mirror to free up bigdisk slots currently
used by the os and need to shrink rpool from 40GB to 15GB. (only
using 2.7GB for the install).
Your best
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool
to an ssd mirror to free up bigdisk slots currently used by the os and need to
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).
Your best bet would be to install the new ssd
Martin wrote:
C,
I appreciate the feedback and like you, do not wish to start a side rant, but
rather understand this, because it is completely counter to my experience.
Allow me to respond based on my anecdotal experience.
What's wrong with making a new pool.. safely copy the data. verify
Volker A. Brandt wrote:
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well supported? (in AHCI mode?)
Be careful with nVidia if you want to use Samsung SATA disks.
There is a proble
Hi all,
I think I've read that the AMD 790FX/750SB chipset's SATA controller is
supported, but may have recently had bugs?
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well supported?
dick hoogendijk wrote:
On Fri, 31 Jul 2009 18:38:16 +1000
Tristan Ball wrote:
Because it means you can create zfs snapshots from a non solaris/non
local client...
Like a linux nfs client, or a windows cifs client.
So if I want a snapshot of i.e. "rpool/export/home/dick" I can do a
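(Presumably something along these lines; the client-side path and snapshot name are assumptions, and the user needs snapshot permission delegated on the server:)

  # Creating a directory under .zfs/snapshot over NFS creates a snapshot
  mkdir /net/server/export/home/dick/.zfs/snapshot/mysnap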
Markus Kovero wrote:
btw, the new Intel X25-M (G2) is coming next month; it will offer better
random reads/writes than the E-series and a seriously cheap price tag, worth a
try I'd say.
The suggested MSRP of the 80GB generation 2 (G2) is supposed to be $225.
Even though the G2 is not shippin
Ralf Gans wrote:
Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.
The Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you have not even
an sshd to login over the network.
This is why I don't use the mountpoint settings in ZFS. I se
Darren J Moffat wrote:
Kyle McDonald wrote:
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an
administrator turns
on this magic property of a filesystem, after that every mkdir *in
the root* of
that
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems hav
Miles Nordin wrote:
"km" == Kyle McDonald writes:
km> These drives do seem to do a great job at random writes, most
km> of the promise shows at sequential writes, so does the slog
km> attempt to write sequentially through the space given to it?
N
Bob Friesenhahn wrote:
Of course, it is my understanding that the zfs slog is written
sequentially so perhaps this applies instead:
Actually, reading up on these drives I've started to wonder about the
slog writing pattern. While these drives do seem to do a great job at
random writes, most
Michael McCandless wrote:
I've read in numerous threads that it's important to use ECC RAM in a
ZFS file server.
My question is: is there any technical reason, in ZFS's design, that
makes it particularly important for ZFS to require ECC RAM?
I think, basically the idea is, that if you're goin
Tristan Ball wrote:
It just so happens I have one of the 128G and two of the 32G versions in
my drawer, waiting to go into our "DR" disk array when it arrives.
Hi Tristan,
Just so I can be clear, What model/brand are the drives you were testing?
-Kyle
I dropped the 128G into a spare De
Adam Sherman wrote:
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
You're right, it supposedly has less than half the write speed, and
that probably won't matter for me, but I can't find a 64GB version of it
for sale, and the 80GB version
Greg Mason wrote:
I think it is a great idea, assuming the SSD has good write performance.
This one claims up to 230MB/s read and 180MB/s write and it's only $196.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
Compared to this one (250MB/s read and 170MB/s write) whi
Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 9:37 AM, Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 "removing a slog
do
Richard Elling wrote:
On Jul 23, 2009, at 9:37 AM, Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 "removing a slog
doesn't work"
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 "removing a slog
doesn't work" still isn't resolved. A solution is under way,
according to
Brian Hechinger wrote:
On Thu, Jul 23, 2009 at 10:28:38AM -0400, Kyle McDonald wrote:
In my case the slog slice wouldn't be the slog for the root pool, it
would be the slog for a second data pool.
I didn't think you could add a slog to the root pool anyway. O
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 "removing a slog doesn't work"
still isn't resolved. A solution is under way, according to George Wilson. But in
the meantime, IF something happens you might be in a lot of trouble. Even withou
I've started reading up on this, and I know I have a lot more reading to
do, but I've already got some questions... :)
I'm not sure yet that it will help for my purposes, but I was
considering buying 2 SSD's for mirrored boot devices anyway.
My main question is: Can a pair of say 60GB SSD's
Joseph L. Casale wrote:
Another thing to remember is the expansion slots. You mentioned putting
in a SATA controller for more drives, You'll want to make sure the board
has a slot that can handle the card you want. If you're not using
graphics then any board with a single PCI-E x16 slot should ha
chris wrote:
Thanks for your reply.
What if I wrap the ram in a sheet of lead?;-)
(hopefully the lead itself won't be radioactive)
I've been looking at the same thing recently.
I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
know whether this means ECC works
Erik Ableson wrote:
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD
Hi all,
I'm setting up a new fileserver, and while I'm not planning on enabling
CIFS right away, I know I will in the future.
I know there are several ZFS properties or attributes that affect how
CIFS behaves. I seem to recall that at least one of those needs to be
set early (like when the f
Darren J Moffat wrote:
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that
it was
faster
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait fo
Joep Vesseur wrote:
All,
I was wondering why "zfs destroy -r" is so excruciatingly slow compared to
parallel destroys.
< SNIP>
while a little handy-work with
# time for i in `zfs list | awk '/blub2\// {print $1}'` ; \
    do ( zfs destroy $i & ) ; done
yields
real    0m8.191s
On 2/20/2009 9:33 AM, Gary Mills wrote:
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote:
Gary Mills wrote:
Should I file an RFE for this addition to ZFS? The concept would be
to run ZFS on a file server, exporting storage to an application
server where ZFS also runs on
On 2/13/2009 5:58 AM, Ross wrote:
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to "zpool export"
them before you unplug.
I think there is a subcommand of cfgadm you should run to notify
Solaris that you intend to unplug the
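(A sketch of that sequence; the pool name and attachment point are assumptions, not from the thread:)

  zpool export usbpool          # flush and cleanly close the pool first
  cfgadm -al                    # list attachment points, e.g. usb1/2
  cfgadm -c unconfigure usb1/2  # tell Solaris the device is about to go away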
On 2/11/2009 1:50 PM, Richard Elling wrote:
Solaris can now (as of b105) use extended partitions.
http://www.opensolaris.org/os/community/on/flag-days/pages/2008120301/
That's interesting, but I'm not sure how it helps.
It's my understanding that Solaris doesn't like it if more than one of
t
On 2/11/2009 1:03 PM, Kyle McDonald wrote:
Since you can't mix EFI and FDisk partition tables, and you can't have
more than one Solaris fdisk partition (that I'm aware of anyway) it
looks like 1TB is all you can give Solaris at the moment.
I should have qualified that with &
On 2/11/2009 12:57 PM, Tomas Ögren wrote:
On 11 February, 2009 - Kyle McDonald sent me these 1,2K bytes:
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to
On 2/11/2009 12:35 PM, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to handle
a hot
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to use
the larger disk, even if its usable space is truncated.
I don't dispute that, but I don't understand it eith
On 2/10/2009 4:48 PM, Roman V. Shaposhnik wrote:
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
These posts do sound like someone who is blaming their parents after
breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that "bad
On 2/10/2009 3:37 PM, D. Eckert wrote:
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over
On 2/10/2009 2:54 PM, D. Eckert wrote:
I disagree, see posting above.
ZFS just accepts it 2 or 3 times. After that, your data are passed away to
nirvana for no reason.
And it should be legal to have an external USB drive with a ZFS. With all
respect, why should a user always care for redunda
On 2/10/2009 2:50 PM, D. Eckert wrote:
(..)
Dave made a mistake pulling out the drives without exporting them first.
For sure UFS/XFS/EXT4/.. also doesn't like that kind of operation, but only
with ZFS do you risk losing ALL your data.
that's the point!
(...)
I did that many times after perform
D. Eckert wrote:
> too many words wasted, but not a single word about how to restore the data.
>
> I have read the man pages carefully. But again: there's nothing that says
> 'zfs umount pool' is not allowed on USB drives.
>
It is allowed. But it's not enough. You need to read both the 'zpool '
and
Hi Dave,
Having read through the whole thread, I think there are several things
that could all be adding to your problems.
At least some of which are not related to ZFS at all.
You mentioned the ZFS docs not warning you about this, and yet I know
the docs explicitly tell you that:
1. While a ZF
I jumpstarted my machine with sNV b106, and installed with ZFS root/boot.
It left me at a shell prompt in the JumpStart environment, with my ZFS
root on /a.
I wanted to try out some things that I planned on scripting for the
JumpStart to run, one of these was creating a new ZFS pool from the
r
On 1/28/2009 12:16 PM, Nicolas Williams wrote:
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
wrote:
On Tue, 27 Jan 2009, Frank Cusack wrote:
i was wondering if you have a zfs filesystem that mounts in a su
Richard Elling wrote:
scott stanley wrote:
(i use the term loosely because i know that zfs likes whole volumes better)
when installing ubuntu, i got in the habit of using a separate partition for my
home directory so that my data and gnome settings would all remain intact when
i reinstalle
Brad Hudson wrote:
> Thanks for the response Peter. However, I'm not looking to create a
> different boot environment (bootenv). I'm actually looking for a way within
> JumpStart to separate out the ZFS filesystems from a new installation to have
> better control over quotas and reservations f
Tim Haley wrote:
> Ross wrote:
>
>> While it's good that this is at least possible, that looks horribly
>> complicated to me.
>> Does anybody know if there's any work being done on making it easy to remove
>> obsolete
>> boot environments?
>>
>
> If the clones were promoted at the time
kristof wrote:
> I don't think this is possible.
>
> I already tried to add extra vdevs after install, but I got an error message
> telling me that multiple vdevs for rpool are not allowed.
>
> K
>
Oh. Ok. Good to know.
I always put all my 'data' diskspace in a separate pool anyway to make
mi
Ian Collins wrote:
> Stephen Le wrote:
>
>> Is it possible to create a custom Jumpstart profile to install Nevada
>> on a RAID-10 rpool?
>>
>
> No, simple mirrors only.
>
Though a finish script could add additional simple mirrors to create
the config his example would have created.
Pr
Douglas R. Jones wrote:
> 4) I change the auto.ws map thusly:
> Integration chekov:/mnt/zfs1/GroupWS/&
> Upgrades    chekov:/mnt/zfs1/GroupWS/&
> cstools chekov:/mnt/zfs1/GroupWS/&
> com chekov:/mnt/zfs1/GroupWS
>
>
This is standard NFS behavior (prior to NFSv4). Chi
Darren J Moffat wrote:
> John Cecere wrote:
>
>> The man page for dumpadm says this:
>>
>> A given ZFS volume cannot be configured for both the swap area and the dump
>> device.
>>
>> And indeed when I try to use a zvol as both, I get:
>>
>> zvol cannot be used as a swap device and a dump devic
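(The usual workaround, sketched here with assumed names and sizes, is two separate zvols, one for swap and one dedicated to dump:)

  zfs create -V 4G rpool/swapvol
  swap -a /dev/zvol/dsk/rpool/swapvol
  zfs create -V 4G rpool/dumpvol
  dumpadm -d /dev/zvol/dsk/rpool/dumpvol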