Thanks for the reply.
Sorry, I confused you too. When I mentioned UFS, I just meant the UFS root scenario
(pre-U6).
Suppose I have a 136 GB HDD as my boot disk, which has been sliced like this:
s0 - 80 GB (root slice)
s1 - 55 GB (swap slice)
s7 - (SVM metadb)
My understanding was that if I had 16 GB of
On Jan 31, 2010, at 10:15 PM, Prakash Kochummen wrote:
> Hi,
>
> While using UFS root, we had an option for limiting the /tmp size, using the mount
> -o size option manually or setting size=1024m in the vfstab.
This is no different when using ZFS root. /tmp is, by default, a tmpfs file
system.
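Since /tmp is tmpfs in both cases, the same size= trick from the UFS days should still
apply; a sketch, with 1024m only as the example figure taken from the question:

   swap  -  /tmp  tmpfs  -  yes  size=1024m

in /etc/vfstab, or for a one-off manual mount:

   # mount -F tmpfs -o size=1024m swap /tmp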
> Do
Hi,
While using UFS root, we had an option for limiting the /tmp size, using the mount
-o size option manually or setting size=1024m in the vfstab.
Do we have any comparable option available when we use ZFS root? If we execute
zfs set size=1024m rpool/swap
it resizes the whole of the swap area, which
I thank each of you for all of your insights. I think if this was a production
system I'd abandon the idea of 2 drives and get a more capable system, maybe a
2U box with lots of SAS drives so I could use RAIDZ configurations. But in this
case, I think all I can do is try some things until I unde
Thanks Bill, that looks relevant. Note however that this only happens with gzip
compression, but it's definitely something I've experienced.
I've decided to wait for the next full release before upgrading. I was just
wondering if the problem was resolved.
I'll migrate to COMSTAR soon, I hope the k
Hey Mark,
I spent *so* many hours looking for that firmware. Would you please
post the link? Did the firmware download you found come with FCode? Running a
Blade 2000 here (SPARC).
Thx
Jake
On Jan 26, 2010 11:52 AM, "Mark Nipper" wrote:
> It may depend on the firmware you're running. We've
>
Two related questions:
- given an existing pool with dedup'd data, how can I find the
current size of the DDT? I presume some zdb work to find and dump the
relevant object, but what specifically?
- what's the expansion ratio for the memory overhead of L2ARC entries?
If I know my DDT
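For the archives, one way to get at the first question (the pool name is a placeholder,
and the exact output varies by build):

   # zdb -DD tank

should print the DDT histogram along with the entry count and the on-disk/in-core size
per entry, so total in-core DDT size can be estimated as entries times in-core bytes
per entry.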
Ian,
Thank you very much for your response. On Friday, as I was writing my post, I
was navigating through the zpool userland binary's source code and had
observed that method. I was just hoping that there was some other, simpler
way. I guess not. Unless the time was taken to patch ZFS with
On Sat, Jan 30 at 18:07, Frank Middleton wrote:
After more than a year or so of experience with ZFS on drive constrained
systems, I am convinced that it is a really good idea to keep the root pool
and the data pools separate.
That's what we do at the office. The data pool is a collection of
m
See also PSARC 2008/769 which considers 4 KB blocks for the entire OS
in a phased approach.
http://arc.opensolaris.org/caselog/PSARC/2008/769/inception.materials/design_doc
-- richard
Mark Bennett writes:
> Update:
>
> For the WD10EARS, the blocks appear to be aligned on the 4k boundary
> when zfs uses the whole disk (whole disk as EFI partition).
>
> Part   Tag   Flag   First Sector   Size       Last Sector
>   0    usr    wm    256            931.51Gb
On January 30, 2010 10:27:41 AM -0800 Michelle Knight
wrote:
I did this as a test because I am aware that zpools don't like drives
switching controllers without being exported first.
They don't mind it at all. It's one of the great things about zfs.
What they do mind is being remounted on a s
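For anyone following along, the export/import dance under discussion is just (pool name
is a placeholder):

   # zpool export tank
   (move or recable the disks)
   # zpool import tank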
Update:
For the WD10EARS, the blocks appear to be aligned on the 4k boundary when zfs
uses the whole disk (whole disk as EFI partition).
Part   Tag   Flag   First Sector   Size       Last Sector
  0    usr    wm    256            931.51Gb   1953508750
calc25
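Presumably the cut-off calc line was checking the same thing: with 512-byte sectors, a
first sector of 256 means 256 x 512 = 131072 bytes = 32 x 4096, so the partition does
start on a 4k boundary.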
On 01/31/10 07:07, Christo Kutrovsky wrote:
I've also experienced similar behavior (short freezes) when running
zfs send | zfs receive with compression on, locally, on ZVOLs again.
Has anyone else experienced this? Know of any bug? This is on
snv_117.
you might also get better results after the fi
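For reference, the kind of local pipeline being described is roughly (dataset names are
placeholders):

   # zfs snapshot tank/vol@snap1
   # zfs send tank/vol@snap1 | zfs receive -F tank/volcopy

with compression set on the zvols involved, or inherited by the received dataset.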
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote:
> On 01/30/10 05:33 PM, Ross Walker wrote:
>> Just install the OS on the first drive and add the second drive to form
>> a mirror.
>
> After more than a year or so of experience with ZFS on drive constrained
> systems, I am convinced
Perhaps an expert could kindly chime in with an opinion on making the drives one
large zpool (rather than separate hard partitions) and using the various
options within ZFS to ensure that there is always disk space available to the
operating system (zpool reservation) ... but the more I sit and think
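As a sketch of the reservation idea (the dataset name is a placeholder and 30g an
arbitrary figure):

   # zfs set reservation=30g rpool/ROOT

would guarantee that much space to the OS datasets no matter how full the data
filesystems in the same pool get.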
Correct that ... I have seen a bad batch of drives fail in close succession;
down to a manufacturing problem.
Hi whitetr6,
An interesting situation, to which there is no "right" answer. In fact, there are many
different answers depending on where you put your priorities.
I'm with Frank on keeping data and OS separate. As you've only got two drives,
I'd put between 30 and 40 GB as an OS pool on each drive (makin
Thanks for the info.
I will take the Napp-it question offline with Günther and see if I can
fix that.
An Intel NIC sounds like a plan... I was thinking of sticking two in anyway...
Just looking at the cards though, they are GigaNIX 2032T, and searching
for this online turns up nothing when I incl
This comment has only to do with booting an old drive on a different
computer--a bit of a tangent to this discussion:
I've also used this to migrate to a new computer with larger disks. The only
caveat I've run into is you need to move from SATA/AHCI to the same, or
SATA/IDE to the same. They c
Richard already addressed this process, but I do this basic concept all the
time (moving to a larger disk or new computer). I simply create the partition
on the new disk with format, then "zpool attach -f" the larger drive. Once
done mirroring, use installgrub as normal. Remove the smaller dr
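A rough outline of that sequence, with device names purely as placeholders:

   # zpool attach -f rpool c0t0d0s0 c0t1d0s0
   (wait for the resilver to finish; watch zpool status)
   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
   # zpool detach rpool c0t0d0s0

and once only the larger device is left, the extra space becomes usable (on recent
builds the autoexpand pool property may need to be switched on).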
Hello,
I also suggest using your 750 GB drives as a RAID-1 data pool.
I usually use one or, better, two (RAID-1) 2.5" drives in the floppy bay
as the system drive.
gea
http://www.napp-it.org/hardware/
I have an external disk that was offline yesterday, so today when I booted my
system I made sure it was turned on. ZFS of course brought it current with the
pool (I have a 3 disk zfs mirror), and for the first time I saw this result for
the resilver process:
resilver completed after 3074457345
hello
napp-it is just a simple CGI script.
Three common reasons for error 500:
- file permissions: just set all files in /napp-it/.. to 777 recursively
(rwx for all; napp-it will set them correctly at first call)
- files must be copied in ASCII mode
Have you used WinSCP? Otherwise you must take care ab
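For the permissions point, the blunt version of that fix would be something along the
lines of (the path is a placeholder; use wherever the napp-it files were copied):

   # chmod -R 777 /path/to/napp-it

and, as noted, napp-it tightens the permissions up again on first call.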
On Jan 31, 2010, at 9:39 AM, Bob Friesenhahn wrote:
> On Sun, 31 Jan 2010, Tony MacDoodle wrote:
>
>> Has anyone encountered any file corruption when snapping ZFS file systems?
>> How does ZFS handle open files when compared to other file system types that
>> use similar technology ie. Veritas,
On Sun, 31 Jan 2010, Tony MacDoodle wrote:
Has anyone encountered any file corruption when snapping ZFS file
systems? How does ZFS handle open files when compared to other file
system types that use similar technology ie. Veritas, etc...??
I see that Richard did not really answer your questio
Thanks for your replies.
I am aware of the 512-byte concept, hence my selection of 8 KB (matched with
8 KB NTFS). Even a 20% reduction is still good; that's like having 20% extra RAM
(for cache).
I haven't experimented with the default lzjb compression. If I want to compress
something usually I w
Richard Elling wrote:
It is not true that there is only a "horrible Gnome based installer." Try the
Automated
Installation (AI) version instead of the LiveCD if you've used JumpStart
previously.
But if you just want a text-based installer and AI is overkill, then b131 is available
with the
On Jan 31, 2010, at 7:21 AM, Henrik Johansson wrote:
> Hello Christo,
>
> On Jan 31, 2010, at 4:07 PM, Christo Kutrovsky wrote:
>
>> Hello All,
>>
>> I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9
>> and blocksize=8K. The server is 2 core P4 3.0 Ghz with 5 GB of RAM.
This is a topic for indiana-discuss, not zfs-discuss. If you read
through the archives of that alias you should see some pointers.
On 1/31/2010 11:38 AM, Tom Bird wrote:
Afternoon,
I note to my dismay that I can't get the "community edition" any more
past snv_129, this version was closest to
On Jan 31, 2010, at 6:55 AM, Tony MacDoodle wrote:
> Has anyone encountered any file corruption when snapping ZFS file systems?
I've had no problems. My first snapshot was in June 2006 and I've been regularly
snapshotting since then.
> How does ZFS handle open files when compared to other file s
On Jan 31, 2010, at 8:38 AM, Tom Bird wrote:
> Afternoon,
>
> I note to my dismay that I can't get the "community edition" any more past
> snv_129, this version was closest to the normal way of doing things that I am
> used to with Solaris <= 10, the standard OpenSolaris releases seem only to
Afternoon,
I note to my dismay that I can't get the "community edition" any more
past snv_129; this version was closest to the normal way of doing things
that I am used to with Solaris <= 10. The standard OpenSolaris releases
seem only to have this horrible Gnome-based installer that gives you
Hi
I am accessing files in a ZFS file system via NFSv4.
I am not logged in as root.
File permissions look as expected when I inspect them with ls -v and ls -V
I only have owner and group ACLs...nothing for everyone.
bash-3.00$ id
uid=100(timt) gid=10001(ccbcadmins)
bash-3.00$ groups
ccbcadmin
Hello Christo,
On Jan 31, 2010, at 4:07 PM, Christo Kutrovsky wrote:
> Hello All,
>
> I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 and
> blocksize=8K. The server is 2 core P4 3.0 Ghz with 5 GB of RAM.
>
> Whenever I start copying files from Windows onto the ZFS dis
Hello All,
I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 and
blocksize=8K. The server is a 2-core P4 at 3.0 GHz with 5 GB of RAM.
Whenever I start copying files from Windows onto the ZFS disk, after about
100-200 MB have been copied the server starts to experience freezes. I h
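For context, a zvol like the one described would typically have been created along
these lines (name and size are placeholders):

   # zfs create -V 200g -b 8k -o compression=gzip-9 tank/ntfsvol

and then shared over iSCSI and formatted as NTFS from the Windows side.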
Has anyone encountered any file corruption when snapping ZFS file systems?
How does ZFS handle open files when compared to other file system types that
use similar technology, i.e. Veritas, etc.?
Thanks