Hi Jeffrey,
Jeffrey Huang wrote:
Hi, Jan,
On 2009/12/9 20:41, Jan Damborsky wrote:
# dumpadm -d swap
dumpadm: no swap devices could be configured as the dump device
# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/opensolaris
Been lurking for about a week and a half and this is my first post...
--- bfrie...@simple.dallas.tx.us wrote:
>On Fri, 11 Dec 2009, Bob wrote:
>> Thanks. Any alternatives, other than using enterprise-level drives?
>You can of course use normal consumer drives. Just don't expect them
>to recover
Hi, Jan,
On 2009/12/9 20:41, Jan Damborsky wrote:
# dumpadm -d swap
dumpadm: no swap devices could be configured as the dump device
# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/opensolaris
Savecore enabled: no
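If the dump device has come unconfigured, a minimal recovery sketch (paths taken from the output above; the rest is standard dumpadm usage) is to point it back at the dedicated zvol and re-enable savecore:

# dumpadm -d /dev/zvol/dsk/rpool/dump   (use the dedicated dump zvol again)
# dumpadm -s /var/crash/opensolaris     (where savecore writes crash dumps)
# dumpadm -y                            (run savecore automatically on reboot)

If rpool/dump itself is missing, something like 'zfs create -V 2G rpool/dump' (the 2G size is only a guess; scale it to RAM) recreates it first.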
On Dec 13, 2009, at 12:17 PM, Peter Tribble wrote:
On Sat, Dec 12, 2009 at 5:08 PM, Richard Elling
wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
Do I understand correctly if I read this as: OpenSolaris is able to
switch between systems without reinstalling? Just a zfs import -f
On Dec 13, 2009, at 5:04 PM, Jens Elkner wrote:
On Sat, Dec 12, 2009 at 04:23:21PM +, Andrey Kuzmin wrote:
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes
for bandwidth. I doubt both will be pleased :-)
I can't (yet!) say I've seen the same, with respect to disappearing snapshots.
However, I can confirm that I am seeing the same thing, with respect to
snapshots without the "frequent" prefix.
$ zfs list -t snapshot | fgrep :-
rp...@zfs-auto-snap:-2009-12-14-13:15
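For what it's worth, a quick sanity check is whether the 'frequent' schedule service is even online; a sketch assuming the stock OpenSolaris auto-snapshot SMF services (FMRI from memory, verify with svcs on your build):

$ svcs -a | grep auto-snapshot                                (state of each schedule instance)
$ svcs -x svc:/system/filesystem/zfs/auto-snapshot:frequent   (diagnose it if it is not online)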
On Sat, Dec 12, 2009 at 04:23:21PM +, Andrey Kuzmin wrote:
> As to whether it makes sense (as opposed to two distinct physical
> devices), you would have read cache hits competing with log writes for
> bandwidth. I doubt both will be pleased :-)
Hmm - good point. What I'm trying to accomplish
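For comparison, the two-device layout being discussed is just two zpool add calls; a sketch with placeholder pool and device names:

# zpool add tank cache c1t2d0   (read cache device, the "readzilla" role)
# zpool add tank log c1t3d0     (separate intent log, the "logzilla" role)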
A majority of the time when the server is rebooted I get this on a zpool:
pool: ipapool
state: FAULTED
status: An intent log record could not be read.
Waiting for administrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
or ignore the intent log records by running 'zpool clear'.
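If the intent log records are expendable (typically only the last few seconds of synchronous writes), the second option is the quick path; a sketch using the pool name from the output above:

# zpool clear ipapool       (discard the unreadable intent log records)
# zpool status -v ipapool   (verify the pool comes back ONLINE)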
On Sat, Dec 12, 2009 at 03:28:29PM +, Robert Milkowski wrote:
> Jens Elkner wrote:
Hi Robert,
> >
> >just got a quote from our campus reseller, that readzilla and logzilla
> >are not available for the X4540 - hmm, strange. Anyway, wondering
> >whether it is possible/supported/would make sense
I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly
slow compression - this stores backups of databases (which compress fairly
well).
The next question is: Is the CRC on the disk based on the uncompressed data
(which seems more likely to be recoverable) or based on the compressed data?
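For reference, the setup itself is only two commands (dataset name is a placeholder, and the property only affects blocks written after it is set):

# zfs set compression=gzip-9 tank/db-backups
# zfs get compression,compressratio tank/db-backups

As far as I know, ZFS computes the checksum over the block as it is stored on disk, i.e. after compression.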
On Sat, Dec 12, 2009 at 5:08 PM, Richard Elling
wrote:
> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>
>> Do I understand correctly if I read this as: OpenSolaris is able to
>> switch between systems without reinstalling? Just a zfs import -f and
>> everything runs? Wow, that would be an
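Strictly it is 'zpool import' rather than 'zfs import', but otherwise yes; a minimal sketch with a placeholder pool name:

# zpool export tank     (on the old host, if it is still able to run)
# zpool import -f tank  (on the new host; -f overrides the check that the
                         pool was last in use by another system)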
Actually, recent batches of WD drives don't let you change the TLER setting
anymore, which is why I was concerned about this.
It may not be supported, but you can swap drives between systems and it
does work very well.
I did a Solaris 8 -> Solaris 10 migration on ~200 systems in 2007. I
had a set of systems that I jumpstarted, then, once the system was
built, I pulled the drives and placed them in the new system.
On Sat, 12 Dec 2009, Brent Jones wrote:
There is a little bit of disk activity, maybe a MB/sec on average, and
about 30 iops.
So it seems the hosts are exchanging a lot of data about the snapshot,
but not actually replicating any data for a very long time.
Note that 'zfs send' is a one-way stream
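One way to see whether bytes are actually flowing is to put a visible stage in the pipe; a sketch, assuming pv(1) is installed and with placeholder host and dataset names:

# zfs send -i tank/fs@snap1 tank/fs@snap2 | pv | ssh backuphost zfs recv -d backup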
Thank you.
However, I think it should be stated more clearly in zpool(1M), perhaps
even with a reference to compressratio and an explanation that this one
is different, plus information, as shown below, on how to get a dedup
ratio that is similar in meaning to compressratio.
On 13/12/2009 11:44, Jeff Bonwick wrote:
It is by design. The idea is to report the dedup ratio for the data
you've actually attempted to dedup. To get a 'diluted' dedup ratio
of the sort you describe, just compare the space used by all datasets
to the space allocated in the pool. For example, on my desktop,
I have a pool called 'build
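To make that concrete, a sketch of the comparison with a placeholder pool name:

$ zfs get -H -o value used tank   (space consumed by all datasets in the pool)
$ zpool list tank                 (the ALLOC column: space actually allocated)
$ zpool get dedupratio tank       (the per-dedup-write ratio discussed above)

Dividing the first number by ALLOC gives the diluted, pool-wide ratio.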
On 13 Dec 2009, at 09:05, dick hoogendijk wrote:
> I just noticed that my zpool is still running v10 and my zfs filesystems
> are on v3. This is on solaris 10U3. Before upgrading the zpool and ZFS
> versions I'd like to know the versions supported by solaris 10 update 7.
> I'd rather not make my zpools inaccessible ;)
I just noticed that my zpool is still running v10 and my zfs filesystems
are on v3. This is on solaris 10U3. Before upgrading the zpool and ZFS
versions I'd like to know the versions supported by solaris 10 update 7.
I'd rather not make my zpools inaccessible ;)
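Both commands have a report-only mode, so you can see what the running release supports before committing; a sketch:

# zpool upgrade -v   (list every pool version this release supports)
# zfs upgrade -v     (same for filesystem versions)
# zpool upgrade      (no arguments: only reports pools below the current version)

Keep in mind the upgrade itself is one-way; once a pool is at a newer version, older releases can no longer import it.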
Note you don't get the better vibration control and other improvements the
enterprise drives have. So it's not exactly that easy. :)
On Sat, Dec 12, 2009 at 8:14 PM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
>> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
>> wrote:
>>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>>
>>>> I've noticed some extreme performance penalties simply by using snv_128