Got a curious message the other day... that my tank is over 80% full and that
ZFS has deleted old backups to free up space. That's curious, since I'm not
using Time Slider for tank, only for rpool...
So what exactly did it delete??
--
This message posted from opensolaris.org
So a full NTFS defrag should result in just a long sequential ZFS write?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
So I submitted a bug almost a year ago on CIFS FQDN mapping from a Windows
system to OpenSolaris failing. In my migration to a new mail system, I
somehow lost the old saved emails I had with the bug number. In any case,
it appears that using an FQDN still fails with the latest builds of
OpenSolaris.
Paul,
Is it possible to replicate an entire zpool with AVS?
Yes. http://blogs.sun.com/AVS/entry/is_it_possible_to_replicate
- Jim
From what I see, you can replicate a zvol, because AVS is filesystem
agnostic. I can create zvols within a pool, and AVS can replicate
those, but th
On Thu, 20 Aug 2009, Johan Eliasson wrote:
Thinking about running a WinXP instance in VirtualBox on
OpenSolaris, using a 20 GB hard-disk file. However, I am worried
about fragmentation... with the constant reading/writing that WinXP
does... won't the fragmentation of the hard-disk file in ZFS
On Thu, 20 Aug 2009, Richard Elling wrote:
I will be hosting a ZFS tutorial at the USENIX Large Installation System
Administration (LISA09) conference in Baltimore, MD this November.
http://www.usenix.org/events/lisa09/
I will need to submit the presentation materials by September 14, 2009.
If
Jeff, old mate, I assume you used format -e?
Have you tried swapping the label back to SMI and then back to EFI?
Trevor
Jeff Victor wrote:
I am trying to mirror an existing zpool on OpenSolaris 2009.06. I think
I need to delete two alternate cylinders...
The existing disk in the pool (c
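For anyone following along, the SMI/EFI swap Trevor suggests is done from
format's expert mode. A rough sketch (the disk name and exact menu prompts
are illustrative; they vary by build):

```shell
# Expert mode is needed for format to offer both label types
format -e
# then, inside format:
#   - select the disk (e.g. c7d0)
#   format> label
#   [0] SMI Label
#   [1] EFI Label
#   Specify Label type[0]: 0    # write an SMI (VTOC) label
#   format> label
#   Specify Label type[0]: 1    # ...and back to EFI, if that is what you want
```

Relabeling rewrites the partition table, so this is only safe on a disk
whose data you can re-mirror afterwards.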
Hi folks!
Thinking about running a WinXP instance in VirtualBox on OpenSolaris, using a
20 GB hard-disk file. However, I am worried about fragmentation... with the
constant reading/writing that WinXP does... won't the fragmentation of the
hard-disk file in ZFS be humongous??
Best regards, gigan
I am trying to mirror an existing zpool on OpenSolaris 2009.06. I think
I need to delete two alternate cylinders...
The existing disk in the pool (c7d0s0):
Part  Tag   Flag  Cylinders   Size      Blocks
 0    root  wm    1 - 19453   149.02GB  (19453/0/0) 3125124
Matthew Stevenson wrote:
Ha ha, I know! Like I say, I do get COW principles!
I guess what I'm after is for someone to look at my specific example (in txt
file attached to first post) and tell me specifically how to find out where the
13.8GB number is coming from.
I feel like a total numpty fo
I will be hosting a ZFS tutorial at the USENIX Large Installation System
Administration (LISA09) conference in Baltimore, MD this November.
http://www.usenix.org/events/lisa09/
I will need to submit the presentation materials by September 14, 2009.
If you were to attend, what subjects should we c
>I have my own application that uses large circular buffers and a socket
>connection between hosts. The buffers keep data flowing during ZFS
>writes and the direct connection cuts out ssh.
Application, as in not script (something you can share)?
:)
jlc
Hi
I have installed OpenSolaris on an HP ProLiant ML370 G6. While creating zones
I am getting an error message for the following command:
#zfs create -o canmount=noauto rpool/ROOT/S10be/zones
cannot create 'rpool/ROOT/S10be/zones': parent does not exist.
Please let me know the best way to resolve t
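One likely fix, assuming rpool/ROOT/S10be is the boot environment name you
intend: the parent dataset doesn't exist yet, so create it first, or let
zfs create the missing ancestors for you:

```shell
# Option 1: create the missing parent dataset explicitly, then the child
zfs create rpool/ROOT/S10be
zfs create -o canmount=noauto rpool/ROOT/S10be/zones

# Option 2: -p creates any missing ancestors in one step
zfs create -p -o canmount=noauto rpool/ROOT/S10be/zones
```

Check `zfs list -r rpool/ROOT` first to see which datasets actually exist.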
Hello
I'm curious what is the best (if any) way to access ZFS snapshots from
non-global zones.
I have a common ZFS file system (on Solaris Express b116, for example) in a
global zone, mounted via lofs into several non-global zones. In each zone I
can access all files with no problem, but I'm unable
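Not a definitive answer, but one common approach is to make the .zfs/snapshot
directory visible on the shared filesystem in the global zone; it should then
be reachable through the lofs mount. Dataset and mount-point names below are
placeholders:

```shell
# In the global zone: expose the snapshot directory on the shared filesystem
zfs set snapdir=visible tank/shared

# In a non-global zone, snapshots then appear (read-only) under the
# lofs mount point:
ls /shared/.zfs/snapshot/
```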
Hi
Is it possible to get filesystem notifications, e.g. when files are created,
modified, or deleted? Or can the "activity" be exported somehow?
Thanks
Felix
Hi Kris,
The /tmp issue described below looks like a bug to me.
I will try to reproduce this and get back to you with either
a bug ID or a request for more info.
Thanks,
Cindy
On 08/20/09 12:00, Kris Kasner wrote:
Thanks for the reply!
My /var issue was a problem with my process.. I did:
Joseph L. Casale wrote:
With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
socket connection rather than ssh for full sends and 7-12MB/sec for
incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
I have my own application that u
Thanks for the reply!
My /var issue was a problem with my process. I did:
install UFS root, install extra stuff, create UFS flar,
lucreate ZFS root <- this is where my /var disappeared
flar create ZFS flar.
In my straight zfsroot install profiles I have a separate /var - I didn't even
think
> Greg Mason wrote:
> >> How about the bug "removing slog not possible"? What if this slog
> >> fails? Is there a plan for such situation (pool becomes inaccessible
> >> in this case)?
> >>
> > You can "zpool replace" a bad slog device now.
>
> And I can testify that it works as desc
Ross wrote:
Yup, that one was down to a known (and fixed) bug though, so it isn't
the normal story of ZFS problems.
Got a bug ID or anything for that, just out of interest?
As an update on my storage situation, I've got some JBODs now; we'll see
how that goes.
--
Tom
// www.portfast.co.uk -- int
> I added some preliminary ZFS/flash information here:
>
> http://opensolaris.org/os/community/zfs/boot/flash/
Cool.
Just a general comment: Since the term "flash" is quite overloaded,
especially in the context of ZFS, I suggest that you use the
term "flash archive" together whenever possible, t
Greg Mason wrote:
How about the bug "removing slog not possible"? What if this slog
fails? Is there a plan for such situation (pool becomes inaccessible
in this case)?
You can "zpool replace" a bad slog device now.
And I can testify that it works as described.
Steve
--
Stephen Green
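For reference, the replacement Greg and Steve describe is the ordinary
zpool replace syntax applied to the log device; pool and device names here
are hypothetical:

```shell
# Replace a failed log (slog) device with a new one
zpool replace tank c2t0d0 c2t1d0

# Watch the replacement complete
zpool status tank
```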
How about the bug "removing slog not possible"? What if this slog fails? Is
there a plan for such situation (pool becomes inaccessible in this case)?
You can "zpool replace" a bad slog device now.
-Greg
> Is anybody aware whether this bug is going to be fixed
> in the near future?
> IBM just started to sell the new X25 model for half
> the price.
Seems like many folks are considering SSDs in their storage setups, but this
bug may lead to pretty bad results...
Can anybody estimate the time to fix this? Next dev release?
I added some preliminary ZFS/flash information here:
http://opensolaris.org/os/community/zfs/boot/flash/
This page includes extracts from the not-yet-released Solaris 10 ZFS
Admin Guide.
Let me know if this isn't enough to get you rolling.
Whatever comments/corrections are provided will be a
> Something our users do quite a bit of is untarring
> archives with a lot
> of small files. Many small, quick writes are
> also one of the many
> workloads our users have.
>
> Real-world test: our old Linux-based NFS server
> allowed us to unpack a
> particular tar file (the source for b
>With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
>socket connection rather than ssh for full sends and 7-12MB/sec for
>incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
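Ian's exact syntax isn't in this thread, but a minimal netcat-based sketch of
a direct-socket send looks roughly like this. Hostname, port, and dataset
names are made up, and some nc builds want `-l 9999` without `-p`:

```shell
# On the receiving Thumper: listen on a port and feed the stream to zfs receive
nc -l -p 9999 | zfs receive -F tank/backup

# On the sending Thumper: pipe zfs send straight into the socket
zfs send tank/data@snap1 | nc -w 5 recvhost 9999
```

This skips ssh's encryption overhead entirely, so only do it on a trusted
network.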
Something our users do quite a bit of is untarring archives with a lot
of small files. Many small, quick writes are also one of the many
workloads our users have.
Real-world test: our old Linux-based NFS server allowed us to unpack a
particular tar file (the source for boost 1.37) in aro
This case was approved in PSARC on Wednesday 19th August 2009.
--
Darren J Moffat
Paul Kraus wrote:
There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.
pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send \
    IDR-test/data...@1250616026 > /dev/null
real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-tes