I have used fdisk to partition the harddisk:
Total disk size is 60800 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
      Partition   Status    Type          Start   End     Length   %
      =========   ======    ============  =====   =====   ======   ===
On Tue, Nov 3, 2009 at 2:48 PM, Cindy Swearingen
wrote:
> Alex,
>
> You can download the man page source files from this URL:
>
> http://dlc.sun.com/osol/man/downloads/current/
Thanks, that's great.
On Tue, Nov 03, 2009 at 11:39:28AM -0800, Ralf Teckelmann wrote:
> Hi and hello,
>
> I have a problem confusing me. I hope someone can help me with it.
> I followed a "best practise" - I think - using dedicated zfs filesystems for
> my virtual machines.
> Commands (for completion):
> zfs create
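(The quoted commands above are cut off; as a rough sketch of the per-VM layout being described, with purely hypothetical pool and dataset names, the idea is one filesystem per guest under a common parent:)

# zfs create rpool/vms                  # parent dataset for all guests
# zfs create rpool/vms/guest01          # dedicated filesystem for one VM
# zfs create rpool/vms/guest02          # ...and so on, one per virtual machine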
Peter Teoh wrote:
I have used fdisk to partition the harddisk:
Total disk size is 60800 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
      Partition   Status    Type          Start   End     Length   %
The real problem for us comes down to the fact that ufsdump and ufsrestore
handled tape spanning and zfs send does not.
We looked into having a wrapper to "zfs send" to a file and running gtar (which
does support tape spanning), or cpio ... then we looked at the amount we
started storing
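(A minimal sketch of that wrapper idea, with a made-up snapshot name, staging path and tape device; gtar's -M/--tape-length options handle the volume spanning that zfs send itself lacks:)

# zfs snapshot tank/fs@backup
# zfs send tank/fs@backup > /staging/tank-fs@backup.zfs
# gtar -c -M --tape-length=10240000 -f /dev/rmt/0n /staging/tank-fs@backup.zfs
#   (--tape-length is given in units of 1024 bytes)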
Hi all,
Just subscribed to the list after a debate on our helpdesk led me to the
posting about ZFS corruption and the need for a fsck repair tool of some
kind...
Has there been any update on this?
Kind regards,
Kevin Walker
Coreix Limited
DDI: (+44) 0207 183 1725 ext 90
Mobile: (+44) 0
This is on OpenSolaris b118
# zfs get all torstor/tor/fs
NAME            PROPERTY  VALUE                  SOURCE
torstor/tor/fs  type      volume                 -
torstor/tor/fs  creation  Wed May 13 17:57 2009  -
torstor/tor/fs  used      1.51T                  -
ZFS scrub will detect many types of error in your data or the filesystem
metadata.
If you have sufficient redundancy in your pool and the errors were not due to
dropped or misordered writes, then they can often be automatically corrected
during the scrub.
If ZFS detects an error from which it
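(For anyone following along, starting and checking a scrub is just two commands; the pool name here is only an example:)

# zpool scrub tank          # start a scrub of pool "tank"
# zpool status -v tank      # shows scrub progress, plus any files with permanent errors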
I believe that space shared between multiple snapshots is not assigned to any
of the snapshots. So if you have a 100 GB file and take two snapshots, then
delete it, the space used won't show up in the snapshot list, but will show up
in the 'usedbysnapshots' property.
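(You can see this split directly in the space-accounting properties; the pool/filesystem names below are hypothetical:)

# zfs list -t snapshot -r tank/data         # per-snapshot USED counts only blocks unique to that snapshot
# zfs get usedbysnapshots,usedbydataset,used tank/data
#   usedbysnapshots also includes blocks shared by several snapshots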
On Wed, 4 Nov 2009, Steven Samuel Cole wrote:
errors: Permanent errors have been detected in the following files:
zpool01:<0x3736a>
How can there be an error in a file that does not seem to exist ?
I don't know the answer to this. Maybe it is data retained by a
snapshot?
How can
Hello all.
Like many others, I've come close to making a home NAS server based on
ZFS and OpenSolaris. While this is not an enterprise solution with high IOPS
expectation, but rather a low-power system for storing everything I have,
I plan on cramming in some 6-10 5400RPM "Green" drives with low
On Wed, Nov 4, 2009 at 4:59 AM, Andrew Gabriel wrote:
> Peter Teoh wrote:
>
>>
>> I have used fdisk to partition the harddisk:
>>
>> Total disk size is 60800 cylinders
>> Cylinder size is 16065 (512 byte) blocks
>>
>> Cylinders
I've been noticing regular writing activity to my data pool while the system's
relatively idle, just a little read IO. Turns out the system's writing up to
20MB of data to the pool every 15-30 seconds. Using iotop from the DTrace
Toolkit, apparently the process responsible is sched. What's going on?
On Wed, 4 Nov 2009, Mario Goebbels wrote:
I've been noticing regular writing activity to my data pool while
the system's relatively idle, just a little read IO. Turns out the
system's writing up to 20MB of data to the pool every 15-30 seconds.
Using iotop from the DTrace Toolkit, apparently the process responsible is sched.
Not sure what you mean. Can I ask for more detailed explanation?
Does anybody else see the difference in snapshot sizes on their own filesystems?
I mean the difference between "zfs list -t snapshot" and "zfs list -o space"
Roman Naumenko
ro...@frontline.ca
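(For comparison, the two views in question, with a hypothetical dataset. The per-snapshot USED column in the first only counts space unique to each snapshot, while USEDSNAP in the second also includes space shared between snapshots, which is why the numbers differ:)

# zfs list -t snapshot -r tank/fs
# zfs list -o space tank/fs     # columns: NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD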
On Wed, Nov 04, 2009 at 09:59:05AM +, Andrew Gabriel wrote:
> It can be done by careful use of fdisk (with some risk of blowing away
> the data if you get it wrong), but I've seen other email threads here
> that indicate ZFS then won't mount the pool, because the two labels at
> the end of t
I have created an iSCSI target using ZFS on host k01:
k01# zfs create -V 100g kpool_k01/k01tgt-i21-solotest
k01# zfs set shareiscsi=on kpool_k01/k01tgt-i21-solotest
And attached it statically to an initiator node i21:
i21# iscsiadm add static-config iqn.1986-03.com.sun:
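(The command above is cut off; the general shape of a static-config entry, with a completely made-up IQN and address, plus the follow-on steps usually run on the initiator, is roughly:)

i21# iscsiadm add static-config iqn.1986-03.com.sun:02:example-target,192.168.0.10:3260
i21# iscsiadm modify discovery --static enable
i21# devfsadm -i iscsi          # rescan so the new LUN shows up under /dev/(r)dsk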
Hi Karl,
Welcome to Solaris/ZFS land ...
ZFS administration is pretty easy but our device administration
is more difficult.
I'll probably bungle this response because I don't have similar
hardware and I hope some expert will correct me.
I think you will have to experiment with various forms of
A Darren Dunham wrote:
On Wed, Nov 04, 2009 at 09:59:05AM +, Andrew Gabriel wrote:
It can be done by careful use of fdisk (with some risk of blowing away
the data if you get it wrong), but I've seen other email threads here
that indicate ZFS then won't mount the pool, because the two lab
On Wed, Nov 04, 2009 at 04:41:34PM +, Andrew Gabriel wrote:
> A Darren Dunham wrote:
> >I don't think the second fdisk partition can be used. The system
> >doesn't like to have multiple "Solaris" partitions.
> >
>
> Make sure it isn't a Solaris partition (pick some partition type which
I've been noticing regular writing activity to my data pool while the
system's relatively idle, just a little read IO. Turns out the
system's writing up to 20MB of data to the pool every 15-30 seconds.
Using iotop from the DTrace Toolkit, apparently the process
responsible is sched. What's going on?
On Wed, 4 Nov 2009, Mario Goebbels wrote:
Did you disable 'atime' updates for your filesystem? Otherwise the file
access times need to be periodically updated and this would happen maybe
every 15-30 seconds.
Not disabled. But 20MB worth of metadata updates while the system practically
does nothing?
Did you disable 'atime' updates for your filesystem? Otherwise the file
access times need to be periodically updated and this would happen maybe
every 15-30 seconds.
Not disabled. But 20MB worth of metadata updates while the system
practically does nothing? Only real things happening is a video
pl
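(For reference, if you do want atime updates off, it is a single property per filesystem; the dataset name here is hypothetical:)

# zfs get atime tank/data
# zfs set atime=off tank/data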
zfs groups writes together into transaction groups; the physical writes
to disk are generally initiated by kernel threads (which appear in
dtrace as threads of the "sched" process). Changing the attribution is
not going to be simple, as a single physical write to the pool may
contain data and metad
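(A quick way to see that attribution for yourself, using nothing more than the standard io provider; on an otherwise idle box most of the bytes land under "sched" because the txg sync runs from kernel threads:)

# dtrace -n 'io:::start { @bytes[execname] = sum(args[0]->b_bcount); }'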
Jim,
You've been able to spin down drives since about Solaris 8.
http://www.sun.com/bigadmin/features/articles/disk_power_saving.jsp
Jim Klimov wrote:
Hello all.
Like many others, I've come close to making a home NAS server based on
ZFS and OpenSolaris. While this is not an enterpr
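(The spin-down mechanism the linked article describes is driven from /etc/power.conf; a hypothetical entry for a single drive might look like the line below, with the caveat that the device may need to be given as its physical path - see power.conf(4) - and pmconfig run afterwards so the power management framework re-reads the file:)

    device-thresholds   /dev/dsk/c1t2d0   30m     # spin down after 30 minutes idle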
On Wed, November 4, 2009 15:36, Trevor Pretty wrote:
> You've been able to spin down drives since about Solaris 8.
And thanks for the link to the article.
The article mentions SAS and SCSI a lot; does this also apply to SATA?
Will anything in serving a ZFS filesystem out via in-kernel CIFS ha
zfs...@jeremykister.com said:
> unfortunately, fdisk won't help me at all:
> # fdisk -E /dev/rdsk/c12t1d0p0
> # zpool create -f testp c12t1d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
Such functionality is in the ZFS code now. It will be available to us later.
http://c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
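(Per that PSARC case, the recovery support surfaces as extra flags on zpool import; a sketch with a hypothetical pool name:)

# zpool import -F tank      # roll back the last few transactions to reach an importable state
# zpool import -Fn tank     # dry run: report whether recovery would succeed, without importing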
Also, read this:
http://c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
I read about some guy who shut off his RAID when he didn't use it. He had a
large system disk he used for temporary storage, so he copied everything to the
temp storage and immediately shut down the RAID.
Thanks for the link, but the main concern in spinning down drives of a ZFS pool
is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a
transaction
group (TXG) which requires a synchronous write of metadata to disk.
I mentioned reading many blogs/forums on the matter, and some
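(For what it's worth, the sync interval is a tunable, with the usual caveats about unsupported /etc/system settings; on builds of that era something like the following line, plus a reboot, stretches the interval:)

    set zfs:zfs_txg_timeout = 30      # seconds between txg syncs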
Joerg just posted a lengthy answer to the fsck question:
http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html
Good stuff. I see two answers to "nobody complained about lying hardware
before ZFS".
One: The user has never tried another filesystem that tests for end-to-en
Kevin Walker wrote:
Hi all,
Just subscribed to the list after a debate on our helpdesk led me to the
posting about ZFS corruption and the need for a fsck repair tool of some
kind...
Has there been any update on this?
I guess the discussion started after someone read an article on OSNE
Robert Milkowski wrote:
Kevin Walker wrote:
Hi all,
Just subscribed to the list after a debate on our helpdesk led me to
the posting about ZFS corruption and the need for a fsck repair tool
of some kind...
Has there been any update on this?
I guess the discussion started after someo
Tim Haley wrote:
Robert Milkowski wrote:
There is another CR (don't have its number at hand) which is about
implementing a delayed re-use on just freed blocks which should allow
for more data to be recovered in such a case as above. Although I'm
not sure if it has been implemented yet.
IMH
I've been researching ZFS and had a question relating to RAID-Z and the striping.
So, I was glancing over Jeff's blog (http://blogs.sun.com/bonwick/entry/raid_z):
"RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic stripe
width. Every block is its own RAID-Z stripe, regardless of block
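(As a concrete illustration of that dynamic stripe width, ignoring allocation rounding: a 128 KB block written to a 5-disk raidz1 vdev is split into four 32 KB data segments plus one 32 KB parity segment, one segment per disk, and that single block is its own variable-width stripe.)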
jimkli...@cos.ru said:
> Thanks for the link, but the main concern in spinning down drives of a ZFS
> pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes
> a transaction group (TXG) which requires a synchronous write of metadata to
> disk.
You know, it's just going to de
Forgot to add: are those four stripe units (for that one file) above considered
the stripe itself? Or is each of those stripe units on the separate disks
considered a separate stripe?
On 3 Nov 2009, at 14:48, Cindy Swearingen wrote:
Alex,
You can download the man page source files from this URL:
http://dlc.sun.com/osol/man/downloads/current/
FYI there are a couple of nits in the man pages:
* the zpool create synopsis hits the 80 char mark. Might be better to
fit it on several lines