Hello everybody! Please help me!
I have a Solaris 10 x86_64 server with 5 x 40 GB HDDs.
One HDD with / (root) and /usr (and other partitions), all UFS, crashed.
It is dead.
The other 4 HDDs (ZFS) were each set up as their own pool (zpool create disk1
c0t1d0, and so on).
I installed Solaris 10 x86_64
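In case it helps, recovering pools like that after a reinstall usually just means importing them again, since the pool configuration lives on the ZFS disks themselves. A rough sketch, assuming the pools really were named disk1..disk4 and the controller numbering did not change (both assumptions):

# zpool import              # scan attached disks and list importable pools
# zpool import disk1        # repeat for disk2, disk3, disk4
# zpool import -f disk1     # only if a pool complains it was not exported

After that, zpool status should show the pools ONLINE again.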
Hi list,
I have a question about setting up zfs send/receive functionality (between
remote machines) as a non-root user.
"server1" is the server where "zfs send" will be executed.
"server2" is the server where "zfs receive" will be executed.
I am using the following zfs structure:
[server1]$ zfs l
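One possible way to do this, sketched here without having tested it on that exact setup: ZFS delegated administration (zfs allow) can hand the needed permissions to an unprivileged user, assuming the installed ZFS version already supports it. The user name "backup" and the dataset names below are made up:

[server1]# zfs allow backup send,snapshot tank/data
[server2]# zfs allow backup create,mount,receive backup/data

Then, as that user:

[server1]$ zfs send tank/data@snap | ssh backup@server2 zfs receive backup/data/copy

If zfs allow is not available on the installed release, the usual fallback is sudo or an ssh forced command that wraps the zfs receive.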
painful to reserve a pretty large
amount of disk space to store the intermediate .zfs file.
Of course, I can write to the remote tape over ssh using the command below, but I'd
like to see some kind of meaningful names on the tape:
# zfs send tank/[EMAIL PROTECTED] | ssh remote_server "cat > /d
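One way to get meaningful names onto the tape (just a sketch; the snapshot, host, and path names here are invented, and it assumes a tar and a tape drive at /dev/rmt/0 on the remote side) is to stream the snapshot into a descriptively named file first and then tar that file to tape, so the tape carries a named archive member instead of a raw stream:

# zfs send tank/fs@2007-08-01 | ssh remote_server "cat > /backup/tank-fs-2007-08-01.zfs"
# ssh remote_server "tar cf /dev/rmt/0 /backup/tank-fs-2007-08-01.zfs"

The downside is exactly the one mentioned above: it still needs the intermediate disk space on the remote side.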
I am running Solaris U4 x86_64.
It seems that something has changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
crypto ptm ]
> arc::print -a c_max
mdb: failed to derefe
ation on a Solaris box. Now I
cannot do it.
Thanks,
Sergey
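One thing worth trying if the goal is just to read the ARC tunables (a sketch, and whether it applies depends on the build): on newer bits the ARC statistics were moved into the arc_stats structure and mdb gained an ::arc dcmd, so the old arc::print incantation no longer dereferences cleanly. Something like this may work instead:

# mdb -k
> ::arc

which, on builds that have the dcmd, prints the ARC statistics including c_max.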
On August 1, 2007 08:15 am, [EMAIL PROTECTED] wrote:
> > On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
> > > Boyd Adamson <[EMAIL PROTECTED]> wrote:
> > >> Or alternatively, are you comparing ZFS(Fuse
Sergey Chechelnitskiy ([EMAIL PROTECTED])
WestGrid/SFU
The setup below works fine for me.
macmini:~ jimb$ mount | grep jimb
ride:/xraid2/home/jimb on /private/var/automount/home/jimb (nosuid, automounted)
macmini:~ jimb$ nidump fstab / | grep jimb
ride:/xraid2/home/jimb /home/jimb nfs rw,nosuid,tcp 0 0
NFS server: Solaris 10 11/06 x86_64 + patches,
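For reference, the server side of a setup like this is usually just the sharenfs property on the dataset; a sketch, assuming the exported dataset is named xraid2/home (taken from the mount output above, but it may differ):

# zfs set sharenfs=rw,nosuid xraid2/home
# share        # confirm the filesystem is being exported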
After bfu'ing from b37 to the current bits, the zpool can't start, with this error:
wis-2 ~ # zpool status -x
pool: zstore
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
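When this shows up right after a bfu, it is often the device paths having moved rather than a dead disk. A possible sequence to check (the pool name is from the output above; c1t0d0 is only a placeholder device name):

# format                        # confirm the disks are still visible to the OS
# zpool online zstore c1t0d0    # per the action text, if the device is really present

If the device paths did change across the bfu, an export/import cycle makes ZFS rescan the devices and pick up the new paths.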
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server, with ZFS on the RAID.
One day you lose the server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and restore the
ZFS layout (no
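The short answer, sketched here with an invented pool name, is that the pool configuration is stored on the RAID's disks themselves, so a replacement server can discover and import it:

# zpool import             # scans the newly attached devices and lists the pools found
# zpool import -f mypool   # -f because the dead server never got to export it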
Please read also http://docs.info.apple.com/article.html?artnum=303503.
I had the same problem. Read the following article:
http://docs.info.apple.com/article.html?artnum=302780
Most likely you have "Allow Host Cache Flushing" checked. Uncheck it and try
again.
Hi all,
I am trying to organize our small (and only) file storage, using and
thinking in ZFS style. :)
So I have a Sun Fire X4100 (2 x dual-core AMD Opteron 280, 4 GB of RAM, Solaris 10 x86
06/06 64-bit kernel + updates), a Sun Fibre Channel HBA (QLogic-based) and an
Apple Xraid 7 TB (2 RAID contro
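For what it's worth, a sketch of how such a pool is often laid out, assuming the Xraid presents one LUN per controller and those show up as c2t0d0 and c3t0d0 (the device and dataset names here are made up):

# zpool create tank c2t0d0 c3t0d0          # stripe across the two Xraid LUNs
# zfs create tank/home
# zfs set sharenfs=on tank/home

Using "zpool create tank mirror c2t0d0 c3t0d0" instead would trade half the capacity for redundancy at the pool level, on top of whatever the Xraid controllers already do.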