Just destroy the swap snapshot and it doesn't get sent when you do a full send.
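In practice that looks roughly like the following; the pool, dataset, and snapshot names here are assumptions for illustration, not taken from the thread:

```shell
# A recursive snapshot picks up the swap volume along with everything else:
zfs snapshot -r rpool@backup

# Destroying just the swap snapshot excludes it from the stream:
zfs destroy rpool/swap@backup

# The full recursive send now skips rpool/swap entirely:
zfs send -R rpool@backup | ssh backuphost zfs receive -d tank/rpool-backup
```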
2009/9/20 Frank Middleton:
> A while back I posted a script that does individual send/recvs
> for each file system, sending incremental streams if the remote
> file system exists, and regular streams if not.
>
> The re
On Mon, Sep 21, 2009 at 3:41 AM, vattini giacomo wrote:
> sudo zpool destroy hazz0
> sudo reboot
> Now OpenSolaris is not booting; everything has vanished.
ROFL
This actually has to go to the daily WTF... :-)
--
Kind regards, BM
Things that are stupid at the beginning rarely end up wisely.
If you are just building a cache, why not just make a file system and
put a reservation on it? Turn off auto snapshots and set other features
as per best practices for your workload. In other words, treat it like we
treat dump space.
I think that we are getting caught up in trying to answer th
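As a sketch of that reservation approach (the pool name, dataset name, and sizes are illustrative assumptions):

```shell
# Guarantee the cache its space, and keep the auto-snapshot service
# from pinning blocks the cache is constantly rewriting:
zfs create -o reservation=10G -o com.sun:auto-snapshot=false tank/cache

# Optionally cap it too, so the cache cannot grow past its reservation:
zfs set quota=10G tank/cache
```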
A while back I posted a script that does individual send/recvs
for each file system, sending incremental streams if the remote
file system exists, and regular streams if not.
The reason for doing it this way rather than a full recursive
stream is that there's no way to avoid sending certain file
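The per-filesystem logic can be sketched roughly like this; the pool, remote host, and snapshot names are assumptions, not the original script:

```shell
#!/bin/sh
# Sketch: walk every dataset under tank and send each one individually.
# Incremental stream if the dataset already exists on the remote side,
# full stream otherwise.
REMOTE=backuphost
for fs in $(zfs list -H -o name -r tank); do
    if ssh "$REMOTE" zfs list "$fs" >/dev/null 2>&1; then
        # Remote dataset exists: send only the delta between snapshots
        zfs send -i "$fs@prev" "$fs@now" | ssh "$REMOTE" zfs receive -F "$fs"
    else
        # Remote dataset missing: send the whole thing
        zfs send "$fs@now" | ssh "$REMOTE" zfs receive "$fs"
    fi
done
```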
On Fri, 18 Sep 2009 17:54:41 -0400
Robert Milkowski wrote:
> There will be a delay of up-to 30s currently.
>
> But how much data do you expect to be pushed within 30s?
> Let's say it would be even 10g of lots of small files, and you would
> calculate the total size by only summing up the logical size
Under the Ubuntu system I've done a zpool import -D, but no way.
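For anyone following along, zpool import -D can only recover a destroyed pool while its on-disk labels are still intact; a minimal sketch, using the pool name from this thread:

```shell
# List destroyed pools whose labels are still readable:
sudo zpool import -D

# If hazz0 shows up in that list, force-import it back:
sudo zpool import -D -f hazz0
```

Once the device has been repartitioned or the labels overwritten, this will find nothing.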
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thursday, Paul Archer wrote:
Tomorrow, Fajar A. Nugraha wrote:
There was a post from Ricardo on zfs-fuse list some time ago.
Apparently if you do a "zpool create" on whole disks, Linux and
Solaris behave differently:
- Solaris will create an EFI partition on that disk, and use the partition as
On Sun, 2009-09-20 at 11:41 -0700, vattini giacomo wrote:
> Hi there, I'm in a bad situation: under Ubuntu I was trying to import a Solaris
> zpool that is in /dev/sda1, while Ubuntu is in /dev/sda5; not being able to
> mount the Solaris pool, I decided to destroy the pool, created like that:
> sudo
Hi there, I'm in a bad situation: under Ubuntu I was trying to import a Solaris
zpool that is in /dev/sda1, while Ubuntu is in /dev/sda5; not being able to
mount the Solaris pool, I decided to destroy the pool, created like that:
sudo zfs-fuse
sudo zpool create hazz0 /dev/sda1
sudo zpool destroy hazz0
On 09/20/09 03:20 AM, dick hoogendijk wrote:
On Sat, 2009-09-19 at 22:03 -0400, Jeremy Kister wrote:
I added a disk to the rpool of my zfs root:
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
I waited for the resilver to complete, then I shut the system down.
OK, the resilver has been restarted a number of times over the past few days
due to two main issues: a drive disconnecting itself, and power failure. I
think my troubles are 100% down to these environmental factors, but would like
some confidence that after the resilver has completed, if it rep
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well, however as
of today, the receiving zfs process has started running extremely
slowly, and is running at 100% CPU on one core, completely in kernel
mode.
On Sat, 2009-09-19 at 22:03 -0400, Jeremy Kister wrote:
> I added a disk to the rpool of my zfs root:
> # zpool attach rpool c1t0d0s0 c1t1d0s0
> # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
>
> I waited for the resilver to complete, then I shut the system down.
>
> then I