Hello!
I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm
playing around a bit, trying to make it break.
I've created a mirrored "Test" pool using mirrored log devices:
# zpool create Test \
mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
log mirror /dev/zvo
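For reference, a complete invocation of this shape might look like the
following. This is only a sketch: the two log zvols (LogNr1 and LogNr2)
are made-up names, since the original command is cut off above.
# zpool create Test \
    mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
    log mirror /dev/zvol/dsk/data/LogNr1 /dev/zvol/dsk/data/LogNr2
# zpool status Test
zpool status should then show the data mirror plus a separate mirrored
"logs" section.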
Hello again!
2010/9/24 Gary Mills :
> On Fri, Sep 24, 2010 at 12:01:35AM +0200, Alexander Skwar wrote:
>> Yes. I was rather thinking about RAIDZ instead of mirroring.
>
> I was just using a simpler example.
Understood. Like I just wrote, we're actually now going
to use mir
Hello.
2010/9/24 Marty Scholes :
> ZFS will ensure integrity, even when the underlying device fumbles.
Yes.
> When you mirror the iSCSI devices, be sure that they are configured
> in such a way that a failure on one iSCSI "device" does not imply a
> failure on the other iSCSI device.
Very good
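As a concrete illustration of that advice, a mirror whose two sides are
LUNs exported by two different iSCSI target hosts might be created like
this (a sketch; the device names c2t0d0 and c3t0d0 are hypothetical and
would each come from a separate target):
# zpool create tank mirror c2t0d0 c3t0d0
# zpool status tank
That way, losing one target host (or the path to it) only degrades the
mirror instead of taking out both halves at once.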
Hi!
2010/9/23 Gary Mills
>
> On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote:
> >
> > We're using ZFS via iSCSI on an S10U8 system. As the ZFS Best
> > Practices Guide http://j.mp/zfs-bp states, it's advisable to use
> > redundancy (i.e. R
Hi.
2010/9/19 R.G. Keen
> and last-generation hardware is very, very cheap.
Yes, of course it is. But is that actually true? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often don't have
ECC, d
Hi!
On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha wrote:
> On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar wrote:
>> $ sudo zfs create rpool/rb-test
>>
>> $ zfs list rpool/rb-test
>> NAME USED AVAIL REFER MOUNTPOINT
>> rpool/rb-test
Hi.
On Fri, Dec 11, 2009 at 15:35, Ross Walker wrote:
> On Dec 11, 2009, at 4:17 AM, Alexander Skwar wrote:
>
>> Hello Jeff!
>>
>> Could you (or anyone else, of course *G*) please show me how?
[...]
>> Could you please be so kind and show what exactly
>
> It's slightly indirect:
>
>- make a clone of the snapshot you want to roll back to
>- promote the clone
>
> See 'zfs promote' for details.
>
> Jeff
>
> On Fri, Dec 11, 2009 at 08:37:04AM +0100, Alexander Skwar wrote:
> > Hi.
> >
> > Is i
Hi.
Is it possible on Solaris 10 5/09, to rollback to a ZFS snapshot,
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/r...@01
--($ ~)-- sudo zfs snapshot rpool/r...@02
--($ ~)-- sudo zfs clone rpool/r...@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo
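Following the clone-and-promote approach Jeff describes above, the whole
sequence might look roughly like this. This is only a sketch: the dataset
rpool/rb-test and the clone name rpool/rb-test-01 are placeholders, since
the real names are masked in the archive.
--($ ~)-- sudo zfs clone rpool/rb-test@01 rpool/rb-test-01   # clone the snapshot to return to
--($ ~)-- sudo zfs promote rpool/rb-test-01                  # the clone takes over the older snapshots
--($ ~)-- zfs list -r rpool                                  # rpool/ROOT-02 is still there
--($ ~)-- zfs list -r -t snapshot rpool                      # ...and so is @02
Because only snapshots up to the clone's origin move to the promoted
clone, the later snapshot @02 and any clone made from it are left intact.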
Hi.
Good to know!
But how do we deal with that on older systems which don't have the
patch applied, once it is out?
Thanks, Alexander
On Tuesday, July 21, 2009, George Wilson wrote:
> Russel wrote:
>
> OK.
>
> So do we have a zpool import --txg 56574 mypoolname
> or help to do it (script?)
>
Hi.
Hm, what are you actually referring to?
On Mon, Jul 20, 2009 at 13:45, Ross wrote:
> That's the stuff. I think that is probably your best bet at the moment.
> I've not seen even a mention of an actual tool to do that, and I'd be
> surprised if we saw one this side of Christmas.
> --
> Thi
Hi!
On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq wrote:
> moreover I added "on the fly" compression using gzip
You can drop the gzip|gunzip pipe if you use SSH's on-the-fly compression
via ssh -C.
ssh compresses with zlib (the same deflate algorithm gzip uses), so there
won't be much difference.
Regards,
Alexander
--
[[ http://zen
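To make the comparison concrete, the two pipelines being discussed might
look like this (a sketch; the host remotehost, the snapshot name and the
target pool backup are made up):
$ sudo zfs send data/oracle@mysnap | gzip | ssh remotehost "gunzip | zfs receive -d backup"
$ sudo zfs send data/oracle@mysnap | ssh -C remotehost "zfs receive -d backup"
Either way the stream crosses the wire compressed; the ssh -C variant
just drops the two extra processes from the pipeline.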
Here's a more useful output, after setting the number of
files to 6000, so that the data set is larger than the
amount of RAM.
--($ ~)-- time sudo ksh zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (6000 files of 8192000 bytes) under
/rpool/zfscachetest ...
Don
Bob,
On Sun, Jul 12, 2009 at 23:38, Bob Friesenhahn wrote:
> There has been no forward progress on the ZFS read performance issue for a
> week now. A 4X reduction in file read performance due to having read the
> file before is terrible, and of course the situation is considerably worse
> if the
Hello.
I'm trying to do "zfs send -R" from an S10 U6 Sparc system to a Solaris 10 U7 Sparc
system. The filesystem in question is running version 1.
Here's what I did:
$ fs=data/oracle ; snap=transfer.hot-b ; sudo zfs send -R $...@$snap |
sudo rsh winds07-bge0 "zfs create rpool/trans/winds00r/${fs%%/
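A complete pipeline of this shape might look roughly like the following.
This is only a sketch: the original command is cut off above, so the
receive side shown here (zfs receive -d into rpool/trans/winds00r) is an
assumption.
$ fs=data/oracle ; snap=transfer.hot-b
$ sudo zfs send -R ${fs}@${snap} | \
    sudo rsh winds07-bge0 "zfs receive -d rpool/trans/winds00r"
With -d, the received datasets keep their own names (minus the source
pool name) below the target, so the target filesystem has to exist on
the receiving side first, which is presumably what the zfs create in
the original command takes care of.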
Hello Jörg!
On Tue, Jul 7, 2009 at 13:53, Joerg Schilling wrote:
> Alexander Skwar wrote:
>
>> Hi.
>>
>> I've got a fully patched Solaris 10 U7 Sparc system, on which
>> I enabled SNMP disk monitoring by adding those lines to the
>> /etc/sma/snmp/snmp
Hi.
I've got a fully patched Solaris 10 U7 Sparc system, on which
I enabled SNMP disk monitoring by adding those lines to the
/etc/sma/snmp/snmpd.conf configuration file:
disk / 5%
disk /tmp 10%
disk /data 5%
That's supposed to mean that <5% available on / is considered
critical, <10% on /tmp and
Hello.
On a Solaris 10 10/08 (137137-09) Sparc system, I set up SMA to also return
values for disk usage, by adding the following to snmpd.conf:
disk / 5%
disk /tmp 10%
disk /apps 4%
disk /data 3%
/data and /apps are on ZFS. But when I do "snmpwalk -v2c -c public 10.0.1.26
UCD-SNMP-MIB::dskPer
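For reference, the kind of walk being attempted here might look like the
following (a sketch; the address 10.0.1.26 and the community string come
from the message above, and the columns shown are the standard
UCD-SNMP-MIB disk-table objects):
$ snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskPath
$ snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskPercent
$ snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskErrorFlag
dskPercent reports the used percentage for each monitored path, and
dskErrorFlag switches to 1 once the threshold configured with the disk
directive is crossed.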