On 07/ 9/10 01:29 PM, zfsnoob4 wrote:
Hi,
I have a question about snapshots. If I restore a file system based on some
snapshot I took in the past, is it possible to revert back to before I
restored? ie:
zfs snapshot t...@yesterday
mkdir /test/newfolder
zfs rollback t...@yesterday
so now ne
On Fri, 2010-07-09 at 10:23 +1000, Peter Jeremy wrote:
> On 2010-Jul-09 06:46:54 +0800, Edward Ned Harvey
> wrote:
> >md5 is significantly slower (but surprisingly not much slower) and it's a
> >cryptographic hash. Probably not necessary for your needs.
>
> As someone else has pointed out, MD5
Greetings All,
I can't believe I didn't figure this out sooner. First of all, a big thank
you to everyone who gave me advice and suggestions, especially Richard. The
problem was with the -d switch. When importing a pool, if you specify -d and
a path, it ONLY looks there. So if I run:
# z
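In other words, -d replaces the default search location rather than adding to it. A minimal sketch of the behavior, with placeholder paths and pool name:
# With -d, zpool import searches ONLY the listed directory:
zpool import -d /myfiles mypool
# -d can be given more than once to search several locations:
zpool import -d /myfiles -d /dev/dsk mypool
# Without any -d, the default device directory (/dev/dsk) is searched.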
Hi,
I have a question about snapshots. If I restore a file system based on some
snapshot I took in the past, is it possible to revert back to before I
restored? ie:
zfs snapshot t...@yesterday
mkdir /test/newfolder
zfs rollback t...@yesterday
so now newfolder is gone. But is there a way to t
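A rollback discards everything written after the target snapshot, so the only way to make it reversible is to save the current state somewhere first. A minimal sketch, with a placeholder dataset name and backup path:
# Snapshot the current state, then stash a copy the rollback cannot touch:
zfs snapshot test@pre-rollback
zfs send test@pre-rollback > /backup/test-pre-rollback.zfs
# Roll back; -r is needed here (and it destroys @pre-rollback) because
# that snapshot is newer than @yesterday:
zfs rollback -r test@yesterday
# The newer state can be brought back later as a separate dataset:
zfs receive test/restored < /backup/test-pre-rollback.zfs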
On 2010-Jul-09 06:46:54 +0800, Edward Ned Harvey wrote:
>md5 is significantly slower (but surprisingly not much slower) and it's a
>cryptographic hash. Probably not necessary for your needs.
As someone else has pointed out, MD5 is no longer considered secure
(neither is SHA-1). If you want cryp
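On the speed point, the relative cost is easy to measure with the digest(1) utility shipped on Solaris-derived systems (the file path below is just a placeholder):
# Algorithms available on this system:
digest -l
# Rough speed comparison on the same large file:
time digest -a md5    /tank/somebigfile
time digest -a sha256 /tank/somebigfile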
I am getting the following error message when trying to do a zfs snapshot:
r...@pluto#zfs snapshot datapool/m...@backup1
cannot create snapshot 'datapool/m...@backup1': out of space
r...@pluto#zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
datapool   556G   110G   446G  19%  ONLINE  -
rpool      278G   12
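zpool list only reports pool-wide capacity; a snapshot can still fail if the dataset it is taken on is constrained. A hedged way to check, using the pool name from the output above:
# Per-dataset accounting (space used by snapshots, reservations, etc.):
zfs list -r -o space datapool
# Quotas and reservations can produce "out of space" even when the
# pool itself still shows free capacity:
zfs get -r quota,refquota,reservation,refreservation datapool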
> I think it is quite likely to be possible to get
> readonly access to your data, but this requires
> modified ZFS binaries. What is your pool version?
> What build do you have installed on your system disk
> or available as LiveCD?
[Prompted by an off-list e-mail from Victor asking if I was stil
On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
> apple servers contribute negative value to an infrastructure, I do know a
> lot of people who buy / have bought them. And I think that number would be
> higher, if Apple were shipping ZFS.
Yep. Provided it supported ZFS, a Mac Mini makes for a comp
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Philippe Schwarz
>
> 3Ware cards
>
> Any drawback (except that without a BBU, I've got a problem in case of power
> loss) in enabling the WC with ZFS?
If you don't have a BBU, and you care about y
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> As you may have heard, NetApp filed a lawsuit against Sun in 2007 (and
> now carried over to Oracle) for patent infringement with the zfs file
>
> Given this, I am wondering what
On 07/ 9/10 10:59 AM, Brandon High wrote:
Personally, I've started organizing datasets in a hierarchy, setting
the properties that I want for descendant datasets at a level where they
will apply to everything that should get them. So if you have your
source at tank/export/foo and your destinat
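A small sketch of that approach; tank/export is taken from the post, and compression stands in for whatever property is wanted:
# Set the property once on the common ancestor...
zfs set compression=on tank/export
# ...and every descendant inherits it unless set locally:
zfs get -r -o name,value,source compression tank/export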
On Thu, Jul 8, 2010 at 2:21 PM, Edward Ned Harvey
wrote:
> Can I "zfs send" from the fileserver to the backupserver and expect it to
be
> compressed and/or dedup'd upon receive? Does "zfs send" preserve the
> properties of the originating filesystem? Will the "zfs receive" clobber
or
> ignore th
Thanks! I just need the SATA part for the SSD serving as my L2ARC. Couldn't
care less about PATA, and have no USB3 peripherals, anyway. I'll let everyone know
how it works!
On Thu, 2010-07-08 at 18:46 -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Bertrand Augereau
> >
> > is there a way to compute very quickly some hash of a file in a zfs?
> > As I understand it, everything
On 07/ 9/10 09:21 AM, Edward Ned Harvey wrote:
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No
compression, no dedup.
Suppose I have a backupserver. I want to zfs send from the fileserver
to the backupserver, and I want the backupserver to receive and store
compressed an
On Fri, 2010-07-09 at 00:23 +0200, Ragnar Sundblad wrote:
> On 8 jul 2010, at 17.23, Garrett D'Amore wrote:
>
> > You want the write cache enabled, for sure, with ZFS. ZFS will do the
> > right thing about ensuring write cache is flushed when needed.
>
> That is not for sure at all, it all depen
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bertrand Augereau
>
> is there a way to compute very quickly some hash of a file in a zfs?
> As I understand it, everything is signed in the filesystem, so I'm
> wondering if I can avoid readin
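For what it's worth, the checksums ZFS keeps are per block, not per file, so they are not directly usable as a file hash; the algorithm is just a dataset property. A hedged sketch, with placeholder dataset and file names:
# Current checksum algorithm (the default "on" maps to a fletcher variant):
zfs get checksum tank/fs
# A cryptographic per-block checksum can be requested, but it only
# applies to blocks written afterwards:
zfs set checksum=sha256 tank/fs
# A whole-file hash still has to be computed in user space:
digest -a sha256 /tank/fs/somefile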
Hi Ryan,
What events lead up to this situation? I've seen a similar problem when
a system upgrade caused the controller numbers of the spares to change.
In that case, the workaround was to export the pool, correct the spare
device names, and import the pool. I'm not sure if this workaround
ap
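A sketch of that export/import workaround; the pool and device names below are illustrative, not taken from the affected system:
# Export, fix up the spare device entries, then re-import:
zpool export idgsun02
# ...correct the spare device names while the pool is exported...
zpool import idgsun02
# A stale spare entry can also usually be dropped and re-added:
zpool remove idgsun02 c0t7d0
zpool add idgsun02 spare c0t7d0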
On 8 jul 2010, at 17.23, Garrett D'Amore wrote:
> You want the write cache enabled, for sure, with ZFS. ZFS will do the
> right thing about ensuring write cache is flushed when needed.
That is not for sure at all, it all depends on what "the right thing"
is, which depends on the application and
On 08/07/2010 18:52, Freddie Cash wrote:
> On Thu, Jul 8, 2010 at 6:10 AM, Philippe Schwarz wrote:
>> With dual-Xeon, 4GB of RAM (will be 8GB in a couple of weeks), two PCI-X
>> 3Ware cards, 7 SATA disks (750G & 1T) over FreeBSD 8.0 (But I think it'
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No
compression, no dedup.
Suppose I have a backupserver. I want to zfs send from the fileserver to
the backupserver, and I want the backupserver to receive and store
compressed and/or dedup'd. The backupserver can be a more recen
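A hedged sketch of one common arrangement (host, pool, and dataset names are placeholders, and the exact flag behavior should be checked against zfs(1M) on the builds involved):
# Received data goes through the normal write path on the target, so
# compression/dedup are governed by the receiving dataset's properties:
zfs set compression=gzip backuppool/backups
zfs set dedup=on backuppool/backups
zfs send tank/data@snap | ssh backupserver zfs receive backuppool/backups/data
# -p (or -R for a whole tree) additionally carries the source's
# properties in the stream, so the received dataset takes those instead:
zfs send -p tank/data@snap | ssh backupserver zfs receive backuppool/backups/data2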
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrew Kener
>
> According to 'zpool upgrade' my pool versions are 22. All pools
> were upgraded several months ago, including the one in question. Here
> is what I get when I try to impo
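A quick way to see what the running bits actually support versus what the pool is at (pool name is a placeholder):
# Versions this build of ZFS understands:
zpool upgrade -v
# Version of an already-imported pool:
zpool get version mypool
# For a pool that refuses to import, a bare listing shows what the
# system can see and its reason for refusing:
zpool import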
I've got an x4500 with a zpool in a weird state. The two spares are listed
twice each, once as AVAIL, and once as FAULTED.
[IDGSUN02:/opt/src] root# zpool status
pool: idgsun02
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
idgsun02    ONLINE
On Wed, Jul 7, 2010 at 3:52 PM, valrh...@gmail.com wrote:
> Does anyone have an opinion, or some experience? Thanks in advance!
Both of them support AHCI, so they should work for SATA 6G. The USB3.0
and PATA may not work however.
-B
--
Brandon High : bh...@freaks.com
On Jul 7, 2010, at 3:27 AM, Richard Elling wrote:
>
> On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
>
>> Hello list,
>>
>> I posted this a few days ago on opensolaris-discuss@ list
>> I am posting here, because there may be too much noise on other lists
>>
>> I have been without this zfs
On Thu, Jul 8, 2010 at 6:10 AM, Philippe Schwarz wrote:
> With dual-Xeon, 4GB of RAM (will be 8GB in a couple of weeks), two PCI-X
> 3Ware cards, 7 SATA disks (750G & 1T) over FreeBSD 8.0 (But I think it's
> OS independent), I made some tests.
>
> The disks are exported as JBOD, but I tried enablin
On Jul 8, 2010, at 11:15 AM, R. Eulenberg wrote:
>
> pstack 'pgrep zdb'/1
>
> and system answers:
>
> pstack: cannot examine pgrep zdb/1: no such process or core file
use ` instead of ' in the above command.
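That is, use command substitution so the shell hands pstack the PID that pgrep prints rather than the literal string:
pstack `pgrep zdb`/1
# or, in POSIX syntax:
pstack $(pgrep zdb)/1
# (the /1 suffix restricts the output to the first LWP of that process)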
You want the write cache enabled, for sure, with ZFS. ZFS will do the
right thing about ensuring write cache is flushed when needed.
For the case of a single JBOD, I don't find it surprising that UFS beats
ZFS. ZFS is designed for more complex configurations, and provides much
better data integr
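On Solaris-derived systems, the per-disk write cache can usually be inspected and toggled from the expert menu of format(1M); a rough sketch only, since the exact menu entries depend on the disk driver:
format -e
# then, interactively:
#   select the disk
#   > cache
#   > write_cache
#   > display   (or enable / disable)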
Hi,
With dual-Xeon, 4GB of RAM (will be 8GB in a couple of weeks), two PCI-X
3Ware cards, 7 SATA disks (750G & 1T) over FreeBSD 8.0 (But I think it's
OS independent), I made some tests.
The disks are exported as JBOD, but I tried enabling/disabling write-cache.
I tried with UFS and ZFS on the sa
"Garrett D'Amore" wrote:
> This situation is why I'm coming to believe that there is almost no case
> for software patents. (I still think there may be a few exceptions --
> the RSA patent being a good example where there was significant enough
> innovation to possibly justify a patent). The sa
Hi,
today I was running
zdb -e -bcsvL tank1
and
zdb -eC tank1
again, and it doesn't come back with a reply or prompt from the system. Then I
opened a new console and ran
pstack 'pgrep zdb'/1
and system answers:
pstack: cannot examine pgrep zdb/1: no such process or core file
What's that? Why