On 04/06/11 01:05 PM, Richard Elling wrote:
On Apr 6, 2011, at 12:01 PM, Linder, Doug wrote:
Torrey Minton wrote:
I'm sure someone has a really good reason to keep /var separated but those cases
are fewer and far between than I saw 10 years ago.
I agree that the causes and repercussions are less now than they were a long time ago.
On Wed, Apr 6, 2011 at 10:42 AM, Paul Kraus wrote:
> I thought I saw that with zpool 10 (or was it 15) the zfs send
> format had been committed and you *could* send/recv between different
> versions of zpool/zfs. From the Solaris 10U9 (zpool 22) manpage for zfs:
There is still a problem if the d
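The cross-version send/recv question above can be checked before committing to a migration. A minimal sketch, assuming a legacy versioned pool named `tank` and a target host `newhost` (both names hypothetical), not feature-flag pools:

```shell
# On the sending side: note the pool and dataset versions being sent.
zpool get version tank
zfs get version tank/data

# On the receiving side: list which versions this software supports;
# the receiver must support at least the sender's dataset version.
zpool upgrade -v
zfs upgrade -v

# The send itself, once versions check out.
zfs send tank/data@migrate | ssh newhost zfs receive newpool/data
```

The key point from the thread: the committed send-stream format means the pool versions need not match exactly, only that the receiver is not older than the stream it is asked to understand.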
On Apr 6, 2011, at 12:01 PM, Linder, Doug wrote:
> Torrey Minton wrote:
>
>> I'm sure someone has a really good reason to keep /var separated but those
>> cases are fewer and far between than I saw 10 years ago.
>
> I agree that the causes and repercussions are less now than they were a long
Torrey Minton wrote:
> I'm sure someone has a really good reason to keep /var separated but those
> cases are fewer and far between than I saw 10 years ago.
I agree that the causes and repercussions are less now than they were a long
time ago. But /var still can and sometimes does fill up, a
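The "/var can still fill up" concern is exactly what a separate dataset with a quota addresses. A sketch, assuming a root pool named `rpool` with a `rpool/var` dataset and a boot environment at `rpool/ROOT/s10` (all names hypothetical):

```shell
# Cap /var so a runaway log cannot consume the whole pool.
zfs set quota=8G rpool/var

# Optionally guarantee space for the root filesystem regardless of
# what other datasets in the pool consume.
zfs set reservation=2G rpool/ROOT/s10

# Verify the settings.
zfs get quota,reservation rpool/var rpool/ROOT/s10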
On 04/06/11 11:42 AM, Paul Kraus wrote:
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High wrote:
The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a da
On 04/06/11 12:43, Paul Kraus wrote:
xxx> zfs holds zpool-01/dataset-01@1299636001
NAME                            TAG            TIMESTAMP
zpool-01/dataset-01@1299636001  .send-18440-0  Tue Mar 15 20:00:39 2011
xxx> zfs holds zpool-01/dataset-01@1300233615
NAME T
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High wrote:
> The only thing to watch out for is to make sure that the receiving datasets
> aren't a higher version than the zfs version that you'll be using on the
> replacement server. Because you can't downgrade a dataset, using snv_151a
> and planning t
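Brandon's "can't downgrade a dataset" warning is easy to verify up front. A sketch, assuming a source dataset `tank/data` (hypothetical name):

```shell
# Version of the dataset you intend to send; dataset versions can only
# ever be upgraded, never downgraded.
zfs get -H -o value version tank/data

# On the prospective receiving host: the highest dataset version this
# ZFS software supports must be >= the value printed above.
zfs upgrade -v
```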
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty wrote:
> How about getting a little more crazy... What if this entire server
> temporarily hosting this data was a VM guest running ZFS? I don't foresee
> this being a problem either, but with so
>
The only thing to watch out for is to make sure that the
On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet wrote:
>
> On Tue, April 5, 2011 14:38, Joe Auty wrote:
>> Also, more generally, is ZFS send/receive mature enough that when you do
>> data migrations you don't stress about this? Piece of cake? The
>> difficulty of this whole undertaking will in
On Tue, Apr 5, 2011 at 9:26 PM, Edward Ned Harvey wrote:
> This may not apply to you, but in some other unrelated situation it was
> useful...
>
> Try zdb -d poolname
> In an older version of zpool, under certain conditions, there would
> sometimes be "hidden" clones listed with a % in the name.
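The `zdb -d` suggestion above is worth spelling out: on the older zpool versions mentioned, an interrupted receive could leave a temporary clone whose name contains a `%`, invisible to `zfs list` but able to keep `zfs destroy` failing. A sketch against a hypothetical pool `tank`:

```shell
# Dump the pool's dataset list, including objects zfs list hides.
zdb -d tank

# Any %-named entries are leftover temporary clones from an
# interrupted send/recv.
zdb -d tank | grep '%'
```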
On Tue, Apr 5, 2011 at 6:56 PM, Rich Morris wrote:
> On 04/05/11 17:29, Ian Collins wrote:
> If there are clones then zfs destroy should report that. The error being
> reported is "dataset is busy" which would be reported if there are user
> holds on the snapshots that can't be deleted.
>
> Try
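Following Rich's point, "dataset is busy" from user holds can be diagnosed and cleared directly; the snapshot and tag names below are the ones shown in the `zfs holds` output earlier in the thread:

```shell
# List user holds on the stuck snapshot.
zfs holds zpool-01/dataset-01@1299636001

# Release the hold by its tag, then the destroy should succeed.
zfs release .send-18440-0 zpool-01/dataset-01@1299636001
zfs destroy zpool-01/dataset-01@1299636001
```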
On Wed, April 6, 2011 11:29, Gary Mills wrote:
> People forget (c), the ability to set different filesystem options on
> /var. You might want to have `setuid=off' for improved security, for
> example.
Or better yet: exec=off,devices=off. Another handy one could be
"compression=on" (or even a "gz
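The per-filesystem options being discussed are set at creation (or later with `zfs set`). A sketch combining the suggestions above, assuming a hypothetical root pool `rpool`; note that exec=off on /var can break some software that runs scripts from there, so treat it as a policy choice rather than a default:

```shell
# A /var dataset with the hardening and compression options from the thread.
zfs create -o setuid=off -o exec=off -o devices=off \
           -o compression=on rpool/var

# Confirm what took effect.
zfs get setuid,exec,devices,compression rpool/var
```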
On Wed, April 6, 2011 10:51, David Dyer-Bennet wrote:
> I'm a big fan of rsync, in cronjobs or wherever. What it won't do is
> properly preserve ZFS ACLs, and ZFS snapshots, though. I moved from using
> rsync to using zfs send/receive for my backup scheme at home, and had
> considerable trouble
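The rsync-to-send/receive move described above typically looks like recursive snapshots plus incremental replication, which is what preserves the ZFS ACLs and snapshots rsync loses. A sketch with hypothetical dataset and host names:

```shell
# Take a recursive snapshot of everything under tank/home.
zfs snapshot -r tank/home@2011-04-06

# Send only the changes since the previous snapshot; -R carries
# descendant datasets, snapshots, and properties along.
zfs send -R -i tank/home@2011-04-05 tank/home@2011-04-06 | \
    ssh backuphost zfs receive -d backup
```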
On 4/6/2011 11:08 AM, Erik Trimble wrote:
Traditionally, the reason for a separate /var was one of two major items:
(a) /var was writable, and / wasn't - this was typical of diskless or
minimal local-disk configurations. Modern packaging systems are making
this kind of configuration increasi
On Wed, Apr 06, 2011 at 08:08:06AM -0700, Erik Trimble wrote:
> On 4/6/2011 7:50 AM, Lori Alt wrote:
> On 04/06/11 07:59 AM, Arjun YK wrote:
>
> I'm not sure there's a defined "best practice". Maybe someone else
> can answer that question. My guess is that in environments where,
On 4/6/2011 7:50 AM, Lori Alt wrote:
On 04/06/11 07:59 AM, Arjun YK wrote:
Hi,
I am trying to use ZFS for boot, and am kind of confused about how the
boot partitions like /var should be laid out.
With old UFS, we create /var as a separate filesystem to avoid various
logs filling up the / filesystem
On Tue, April 5, 2011 14:38, Joe Auty wrote:
> Migrating to a new machine I understand is a simple matter of ZFS
> send/receive, but reformatting the existing drives to host my existing
> data is an area I'd like to learn a little more about. In the past I've
> asked about this and was told that
On 04/06/11 07:59 AM, Arjun YK wrote:
Hi,
I am trying to use ZFS for boot, and am kind of confused about how the
boot partitions like /var should be laid out.
With old UFS, we create /var as a separate filesystem to avoid various
logs filling up the / filesystem
I believe that creating /var as a
Hello,
I'm debating an OS change and also thinking about my options
for data migration to my next server, whether it is on new or
the same hardware.
Migrating to a new machine I understand is a simple matter of
ZFS send/receive
Hi,
I am trying to use ZFS for boot, and am kind of confused about how the boot
partitions like /var should be laid out.
With old UFS, we create /var as a separate filesystem to avoid various logs
filling up the / filesystem
With ZFS, during the OS install it gives the option to "Put /var on a
separate