I am wondering whether the following idea makes any sense as a way to get
ZFS to cache compressed data in DRAM.
In particular, given a 2-way zvol mirror of highly compressible data
on persistent storage devices, what would go wrong if I dynamically
added a ramdisk as a 3rd mirror device at boot?
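Concretely, I'm picturing something along these lines (pool and device names
are made up, and the ramdisk size is arbitrary):
# ramdiskadm -a zmirror 4g
# zpool attach tank c0t0d0 /dev/ramdisk/zmirror
Presumably the ramdisk comes up empty at every boot, so the third side would
have to resilver each time; is that the main problem, or is there something worse?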
Hi,
We are seeing more long delays in zpool import, say 4~5 or even
25~30 minutes, especially when backup jobs are running on the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the
same array, some pools take a few seconds to import but others take minutes.
The pattern
see
On 01.10.09 08:25, camps support wrote:
I did zpool import -R /tmp/z rootpool
It only mounted /export, and /rootpool only had /boot and /platform.
I need to be able to get to /etc and /var.
zfs set mountpoint ...
zfs mount
--
Michael Schuster          http://blogs.sun.com/recursion
Recursion, n.:
Also can someone tell me if I'm too late for an uberblock rollback to help me?
Diffing "zdb -l" output between c7t0 and c7t1 I see:
-txg=12968048
+txg=12968082
Is that too large a txg gap to roll back, or is it still possible?
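For reference, the lines above came from label dumps along these lines on each
half of the mirror (the slice number is my guess at the usual whole-disk layout):
# zdb -l /dev/rdsk/c7t0d0s0 | grep txg
# zdb -l /dev/rdsk/c7t1d0s0 | grep txg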
Carson Gaspar wrote:
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
plugging the disks, I get:
bash-3.2# /usr/sbin/zdb -eud media
zdb: can't open media: File exists
OK, things are now different (possibl
David,
When you get back to the original system, it would be helpful if
you could provide a side-by-side comparison of the zpool create
syntax and the zfs list output of both pools.
Thanks,
Cindy
On 10/01/09 13:48, David Stewart wrote:
Cindy:
I am not at the machine right now, but I installe
Also, when a pool is created, there is only metadata which uses
fletcher4[*].
So it is not a crime if you set the checksum after the pool is created
and before
data is written :-)
* note: the uberblock uses SHA-256
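In other words, something like this (pool name and device are only examples)
leaves no file data written under the old checksum:
# zpool create tank c1t1d0
# zfs set checksum=sha256 tank
All file data written from that point on is checksummed with sha256; the pool
metadata keeps using fletcher4 as noted above.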
-- richard
On Oct 1, 2009, at 12:34 PM, Cindy Swearingen wrote:
You are co
Cindy:
I am not at the machine right now, but I installed from the OpenSolaris 2009.06
LiveCD and have all of the updates installed. I have solely been using "zfs
list" to look at the size of the pools.
from a saved file on my laptop:
me...@opensolarisnas:~$ zfs list
NAME
On 1 Oct 2009, at 19:34, Andrew Gabriel wrote:
Pick a file which isn't in a snapshot (either because it's been
created since the most recent snapshot, or because it's been
rewritten since the most recent snapshot so it's no longer sharing
blocks with the snapshot version).
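For example (path and size are illustrative):
# mkfile 100m /tank/fs/scratch
# zfs list -o name,used,available tank/fs
# rm /tank/fs/scratch
# zfs list -o name,used,available tank/fs
With no snapshot taken in between, the used figure should drop back down
(it can take a few seconds for the freed blocks to be reflected).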
Out of curiosi
You are correct. The zpool create -O option isn't available in a Solaris
10 release but will be soon. This will allow you to set the file system
checksum property when the pool is created:
# zpool create -O checksum=sha256 pool c1t1d0
# zfs get checksum pool
NAME PROPERTY VALUE SOURCE
poo
Ray, if you use -o it sets properties for the pool. If you use -O (capital),
it sets the filesystem properties for the default filesystem created with the
pool.
zpool create -O can take any valid ZFS file system property.
But I agree, it's not very clearly documented.
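For example (the particular properties here are just illustrations):
# zpool create -o autoexpand=on -O compression=on tank c1t1d0
# zpool get autoexpand tank
# zfs get compression tank
The first get shows a pool property (set with -o), the second a file system
property on the pool's top-level dataset (set with -O).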
Rudolf Potucek wrote:
Hmm ... I understand this is a bug, but only in the sense that the message is
not sufficiently descriptive. Removing the file from the source filesystem will
not necessarily free any space because the blocks have to be retained in the
snapshots.
and if it's in a snapsho
On Thu, Oct 01, 2009 at 11:03:06AM -0700, Rudolf Potucek wrote:
> Hmm ... I understand this is a bug, but only in the sense that the
> message is not sufficiently descriptive. Removing the file from the
> source filesystem will not necessarily free any space because the
> blocks have to be retained
The U4 zpool does not appear to support the -o option... A current zpool
man page online lists the valid properties for zpool -o, and
checksum is not one of them. Are you mistaken, or am I missing something?
Another thought is that *perhaps* all of the blocks that comprise an em
Hmm ... I understand this is a bug, but only in the sense that the message is
not sufficiently descriptive. Removing the file from the source filesystem will
not necessarily free any space because the blocks have to be retained in the
snapshots. The same problem exists for zeroing the file with
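To actually get the space back you generally have to let go of the snapshots
that still hold those blocks, something like (names illustrative):
# zfs list -t snapshot -o name,used -r tank/fs
# zfs destroy tank/fs@old
The used column on a snapshot shows roughly how much would be freed by
destroying that snapshot alone.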
Hi David,
Which Solaris release is this?
Are you sure you are using the same ZFS command to review the sizes
of the raidz1 and raidz pools? The zpool list and zfs list commands
will display different values.
See the output below of my tank pool created with raidz or raidz1
redundancy. The pool
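In short (pool name illustrative):
# zpool list tank
# zfs list tank
zpool list reports the raw capacity of all the devices, parity included, while
zfs list reports the space actually available to file systems after the raidz
parity is taken out.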
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The
sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page
for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was
there a difference between the sizes for RAIDZ and RAIDZ1? Sho
Ray, if you don't mind me asking, what was the original problem you had on your
system that makes you think the checksum type is the problem?
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build svn_101b. I have 3 SATA disks connected to my motherboard.
The raid, a raidz, which is called "rescamp", had worked well until a
power failure yesterday. I'm now unable to import the pool. I can't export the raid,
since it isn't imported.
On 10/01/09 09:25, camps support wrote:
I did zpool import -R /tmp/z rootpool
It only mounted /export, and /rootpool only had /boot and /platform.
I need to be able to get to /etc and /var.
You need to explicitly mount the root file system (its canmount
property is set to "noauto", which means
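Roughly (the boot environment name below is a guess; check the real one with
"zfs list -r rootpool/ROOT"):
# zpool import -R /tmp/z rootpool
# zfs mount rootpool/ROOT/opensolaris
After that, /etc and /var should appear under /tmp/z (on installs where /var
is a separate dataset it needs its own zfs mount as well).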
On Oct 1, 2009, at 7:10 AM, Ray Clark wrote:
Darren, thank you very much! Not only have you answered my
question, you have made me aware of a tool to verify, and probably
do a lot more (zdb).
Can you comment on my concern regarding what checksum is used in the
base zpool before anything i
On Wed, 30 Sep 2009, Richard Elling wrote:
a big impact. With 2+ TB drives, the resilver time is becoming dominant.
As disks become larger but not faster, there will come a day when the
logistical response time will become insignificant. In other words, you
won't need a spare to improve logistica
I did zpool import -R /tmp/z rootpool
It only mounted /export, and /rootpool only had /boot and /platform.
I need to be able to get to /etc and /var.
On 01.10.09 07:20, camps support wrote:
I have a system that is having issues with the pam.conf.
I have booted to cd but am stuck at how to mount the rootpool in single-user. I need to make some changes to the pam.conf but am not sure how to do this.
I think "zpool import" should be the firs
I have a system that is having issues with the pam.conf.
I have booted to cd but am stuck at how to mount the rootpool in single-user.
I need to make some changes to the pam.conf but am not sure how to do this.
Thanks in advance.
Darren, thank you very much! Not only have you answered my question, you have
made me aware of a tool to verify, and probably do a lot more (zdb).
Can you comment on my concern regarding what checksum is used in the base zpool
before anything is created in it? (No doubt my terminology is wrong,
> Yes, this is something that should be possible once we have bp rewrite
> (the
> ability to move blocks around).
[snip]
> FYI, I am currently working on bprewrite for device removal.
>
> --matt
That's very cool. I don't code (much/enough to help), but I'd like to help
if I can. If nothing else,
On 10/01/09 05:08 AM, Darren J Moffat wrote:
In the future there will be a distinction between the local and the
received values; see the recently (yesterday) approved case PSARC/2009/510:
http://arc.opensolaris.org/caselog/PSARC/2009/510/20090924_tom.erickson
Currently non-recursive increment
I'm running OpenSolaris build svn_101b. I have 3 SATA disks connected to my
motherboard. The raid, a raidz, which is called "rescamp", had worked well
until a power failure yesterday. I'm now unable to import the pool. I
can't export the raid, since it isn't imported.
# zpool import resc
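A reasonable first pass at diagnosing this kind of failure (the -f is only
needed if the pool still looks active to another host):
# zpool import
# zpool import -f rescamp
The bare zpool import scans the devices and lists importable pools along with
the state of each vdev, which usually shows whether one of the disks went
missing or got damaged in the power failure.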
On Wed, Sep 30, 2009 at 05:03:21PM -0700, Brandon High wrote:
> Supermicro has a 3 x 5.25" bay rack that holds 5 x 3.5" drives. This
> doesn't leave space for an optical drive, but I used a USB drive to
> install the OS and don't need it anymore.
I've had such a bay rack for years, and it survived
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:
> Dennis Clarke wrote:
> It could be that Sun's NFS implementation _creates_ ACLs when star sends a
> request to _clear_ the ACLs by establishing "base ACLs" that just contain
> the UNIX file permissions. From the Sun documentation, th
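A quick way to check whether a copied file ended up with a non-trivial ACL,
and to strip it back to plain permission bits if it did (path illustrative):
# ls -V /tank/copy/somefile
# chmod A- /tank/copy/somefile
ls -V lists the individual ACL entries; a file carrying only the owner@,
group@ and everyone@ entries simply mirrors its mode bits.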
Dennis Clarke wrote:
> I use star a great deal, daily in fact. I have two versions that I am
> using because one of them seems to mysteriously create ACLs when I
> perform a copy from one directory to another.
>
> The two versions that I have are :
>
> # /opt/csw/bin/star --version
> star: star
Ray Clark wrote:
Dynamite!
I don't feel comfortable leaving things implicit. That is how misunderstandings happen.
It isn't implicit; it is explicitly inherited. That is how ZFS is designed
to work (and does).
Would you please acknowledge that zfs send | zfs receive uses the checksum
sett
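One way to see what actually happens (dataset names illustrative): the blocks
written by zfs receive are checksummed with whatever the destination dataset's
checksum property resolves to, and you can inspect that right after the receive.
# zfs snapshot tank/data@send1
# zfs send tank/data@send1 | zfs receive newpool/data
# zfs get checksum newpool/data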
> Ray Clark wrote:
>
>> Joerg, Thanks. As you (of all people) know, this area is quite a
>> quagmire.
> Be careful! Sun tar creates non-standard and thus non-portable archives with -E.
> Only star can read them.
>
>> My next problem is that I want to do an exhaustive file compare
>> after
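For the exhaustive compare, a simple recursive diff covers file contents
(directory names illustrative; it says nothing about ACLs or other metadata,
which would need a separate check):
# diff -r /original /copy
diff -r reports files whose contents differ as well as files present on only
one side.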