>My question: What apps are these? I heard mention of some SunOS 4.x
>library. I don't think that's anywhere near important enough to warrant
>changing the current ZFS behavior.
Not apps; NFS clients such as *BSD.
On Solaris the issue is next to non-existent (SunOS 4.x binaries using
scandir(3)).
[EMAIL PROTECTED] wrote:
After one aborted ufsrestore followed by some cleanup I tried
to restore again, but this time ufsrestore faltered with:
bad filesystem block size 2560
The reason was this return value for the stat of "." of the
filesystem:
8339: stat(".", 0xFFBFF818)
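(A way to see exactly what ZFS returns for that stat call, including the
block size field ufsrestore trips over, is truss's verbose mode; a minimal
sketch, assuming the current directory is on the ZFS filesystem in question:)
# cd /zfs/fs          # hypothetical ZFS mount point
# truss -t lstat64,stat64 -v lstat64,stat64 ls -d .
(The verbose stat output includes a bsz= field, which is the block size the
filesystem reports back to callers like ufsrestore.)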
Samuel Borgman wrote:
I just started using ZFS after wanting to try it out for a long while. The problem
is that I've "lost" 240GB out of 700GB.
I have a single 700GB pool on a 3510 HW RAID mounted on /nm4/data. Running
# du -sk /nm4/data
411025338 /nm4/data
While a
# df -hk
Filesyst
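(When du and df disagree on ZFS, the missing space is usually held by
snapshots or reservations, which df accounts for but du cannot see; a quick
check, assuming the pool is named nm4 as the mountpoint suggests:)
# zfs list -r nm4                     # per-dataset used/available accounting
# zfs list -t snapshot                # space pinned by snapshots
# zfs get -r quota,reservation nm4    # reservations charged against the pool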
Ross Newell wrote:
What are the issues preventing the root directory from being stored on raidz?
I'm talking specifically about root, and not boot which I can see would be
difficult.
Would it be something an amateur programmer could address in a weekend, or
is it more involved?
I believe this used
Rick Mann wrote:
Hi. I've been reading the ZFS admin guide, and I don't understand the distinction between
"adding" a device and "attaching" a device to a pool?
"attach" is used to create or add a side to a mirror.
"add" is to add a new top level vdev where that can be a raidz, mirror
or singl
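(A minimal illustration of the difference, assuming a pool named tank and
hypothetical device names:)
# zpool attach tank c0t0d0 c0t1d0      # attach: c0t0d0 becomes one side of a mirror
# zpool add tank mirror c0t2d0 c0t3d0  # add: the pool grows by a new top-level vdev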
George Plymale wrote:
Couple of questions regarding ZFS:
First, can slices and vdevs be removed from a pool? It appears to
only want to remove a hot spare from a pool, which makes sense; however,
is there some workaround that will migrate data off of a vdev and
thus allow you to remove it? (In
Hi. I've been reading the ZFS admin guide, and I don't understand the
distinction between "adding" a device and "attaching" a device to a pool?
TIA
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME    STATE    READ WRITE CKSUM
tank    ONLINE      0
Matthew Ahrens wrote:
Manoj Joseph wrote:
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno
= ENOSPC. However, this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_
On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
> As mentioned before, NetBSD's scandir(3) implementation was one. The
> NetBSD project has fixed this in their CVS. OpenBSD and FreeBSD's scandir()
> looks like another, I'll have to drop them a line.
I heard from an OpenBSD developer wh
Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD and FreeBSD's scandir()
looks like another, I'll have to drop them a line.
...
Thanks much for investigating this and pushing for fixes!
--matt
On Wed, Jun 13, 2007 at 05:27:18PM -0700, Matthew Ahrens wrote:
> To summarize my understanding of this issue: st_size on directories is
> undefined; apps/libs which do anything other than display it are broken.
> However, we should avoid exercising this bug in these broken apps if
> possible.
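(The breakage is easy to demonstrate from the shell; a sketch, assuming a
ZFS filesystem mounted at a hypothetical /tank/fs:)
$ mkdir /tank/fs/d ; touch /tank/fs/d/f1 /tank/fs/d/f2
$ ls -ld /tank/fs/d    # on ZFS the size column counts directory entries, not
                       # bytes, so a client computing st_size/24 sizes its
                       # scandir(3) buffer far too small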
Manoj Joseph wrote:
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno =
ENOSPC. However, this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_tempreserve_space+0x4e
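(The error can be reproduced on a full dataset, since even a mode change has
to allocate space for the copy-on-write metadata update; a sketch with
hypothetical dataset and file names:)
# zfs create -o quota=10m tank/tiny    # small dataset that fills quickly
# mkfile 20m /tank/tiny/f              # write until the quota is exhausted
# chmod 600 /tank/tiny/f               # the underlying fchmod(2) can now fail
                                       # with ENOSPC: the new mode is written COW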
On Thu, 14 Jun 2007, James C. McPherson wrote:
Al Hopper wrote:
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM> As far as I understand it, I do not think that a plain
JCM> jbod version of the ST2530 is supported. I believe that
JCM> a jbod attached to the ST25
On Wed, Jun 13, 2007 at 05:27:18PM -0700, Matthew Ahrens wrote:
> [EMAIL PROTECTED] wrote:
> >
> >>I believe we should rather educate other people that st_size/24 is a bad
> >>"solution".
> >
> >That's all well and good but fixing all clients, including potentially
> >really old ones, might not be
[EMAIL PROTECTED] wrote:
I believe we should rather educate other people that st_size/24 is a bad
"solution".
That's all well and good but fixing all clients, including potentially
really old ones, might not be feasible. Being correct doesn't help
our customers.
To summarize my understanding
OK, so I get the reason behind this message but I do not understand
why we're unmounting the clone filesystem in the first place?
This is bug 6472202 "'zfs rollback' and 'zfs rename' requires that clones be
unmounted".
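(Until that bug is fixed, the workaround is to unmount any clones by hand
around the operation; a sketch with hypothetical dataset names:)
# zfs unmount tank/clone        # clones of the snapshot must be unmounted first
# zfs rollback tank/fs@snap1    # the rollback (or rename) now succeeds
# zfs mount tank/clone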
Sorry about that,
--matt
Al Hopper wrote:
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM> As far as I understand it, I do not think that a plain
JCM> jbod version of the ST2530 is supported. I believe that
JCM> a jbod attached to the ST2540 (fc-connected) is supported.
If it works it doesn't have to be supported.
On 13-Jun-07, at 4:09 PM, Frank Cusack wrote:
On June 13, 2007 9:14:48 AM -0700 Rick Mann <[EMAIL PROTECTED]>
wrote:
From (http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
...
In a follow-up interview today, Croll explained, "ZFS is not the default file system
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno =
ENOSPC. However, this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_tempreserve_space+0x4e
zfs`dmu
So it's been what, a day (2?) and no one has tried to import a pool on
the Leopard beta?
-frank
On June 13, 2007 9:14:48 AM -0700 Rick Mann <[EMAIL PROTECTED]> wrote:
From (http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
...
In a follow-up interview today, Croll explained, "ZFS is not the default
file system for Leopard. We are exploring it as a file system option
2007/6/10, arb <[EMAIL PROTECTED]>:
Hello, I'm new to OpenSolaris and ZFS so my apologies if my questions are naive!
I've got Solaris Express (b52) and a ZFS mirror, but this command locks up my
box within 5 seconds:
% cmp first_4GB_file second_4GB_file
It's not just these two 4GB files, any s
So you can migrate all your ZFS volumes to HFS+ ;-)
>>> Toby Thain <[EMAIL PROTECTED]> 6/13/2007 12:22 PM >>>
On 13-Jun-07, at 1:14 PM, Rick Mann wrote:
>> From (http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
>
> ... Croll explained, "ZFS is not the default file system
The whole read-only business sounds like baloney to me. Read-only ZFS
implies that the file system would be created elsewhere - and I don't know
if there will be continuing compatibility between
Solaris/Linux(FUSE)/FreeBSD implementations - so they would presumably
support read-only of Solaris' re
Toby Thain wrote:
What possible use is "read only" ZFS?
A user of an OS that _does_ support read+write ZFS might, for
example, have one spare USB disk/drive.
The user may opt for ZFS for that one disk, gaining the benefits of
COW, rollback etc..
The user will be able to read (only) tha
Toby Thain, et al,
I am guessing here, but to just be able to access
the FS data locally without the headaches of
verifying FS consistency, write caches, etc.
Mitchell Erblich
Toby Thain wrote:
>
> On 13-Jun-07, at 1:14 PM, Rick Mann wrot
From (http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
---
[...]
Seeking to clarify a statement made on Monday by Brian Croll, senior director
of Mac OS X Product Marketing, to two InformationWeek reporters that Apple's
new "Leopard" operating system would not inclu
> I just want to be sure I understand correctly; here is my understanding (I
> hope I'm not totally wrong ;p)
>
> The slide demonstrates how an existing file is modified. The boxes in
> blue represent the existing data, and the green ones the new data. So
> when an application wants to modify the existing
On 13-Jun-07, at 1:14 PM, Rick Mann wrote:
From (http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
... Croll explained, "ZFS is not the default file system for
Leopard. We are exploring it as a file system option for high-end
storage systems with really large st
Douglas Atique wrote:
Now that I know *what*, could you perhaps explain to me *why*? I understood
zpool import and export operations much like mount and unmount - maybe some
checks on the integrity of the pool and updates to some structure on the OS to
maintain the imported/exported state o
>On Mon, 11 Jun 2007, Rick Mann wrote:
>> ZFS Readonly implemntation is loaded!
>Is that a copy-n-paste error, or is that typo in the actual output?
It's a typo in the actual output.
dudekula mastan wrote:
Is it possible to create a ZPool on SVM volumes? What are the limitations
for this?
Not as far as I am aware. libdiskmgmt gets in the
way - it protects you.
This is incorrect. If you attempt to use the same underlying disks, then
libdiskmgmt will protect you.
B
> dudekula mastan wrote:
> > Is it possible to create a ZPool on SVM volumes? What are the
> > limitations for this?
>
> Not as far as I am aware. libdiskmgmt gets in the
> way - it protects you.
Should be able to. We've had some threads about ZFS on top of SVM.
http://www.opensolaris.org/ji
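(A minimal sketch, assuming a hypothetical SVM metadevice d10 already exists:)
# metastat d10                          # confirm the metadevice is healthy
# zpool create mdpool /dev/md/dsk/d10   # build a pool on top of it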
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM> As far as I understand it, I do not think that a plain
JCM> jbod version of the ST2530 is supported. I believe that
JCM> a jbod attached to the ST2540 (fc-connected) is supported.
If it works it doesn't have to be supported.
Hello,
I have the following situation:
1) A ZFS filesystem, created with zfs create:
- multipack/u01
2) Data created in said filesystem
3) A snapshot taken of this filesystem:
- multipack/[EMAIL PROTECTED]
4) A clone filesystem created from the snapshot:
- multipack/u09
multipack/u01
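(For reference, a sketch of the commands behind steps 1-4, using a
hypothetical snapshot name since the real one is redacted above:)
# zfs create multipack/u01                       # 1) the filesystem
# zfs snapshot multipack/u01@snap1               # 3) snapshot, after writing data (2)
# zfs clone multipack/u01@snap1 multipack/u09    # 4) the clone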
Hi Lin,
A few moments after replying to your post, I had an idea. I had tweaked with
almost every part of the script but I couldn't figure out what the difference
was between the script and the manual execution.
The difference is (as I found later) that when I created the ZFS root fs by
hand,
Hello Bruno,
Wednesday, June 13, 2007, 3:45:07 PM, you wrote:
BB> Hello,
BB> as the president of the French OSUG [1], I'll give a talk about
BB> ZFS and zones at RMLL [2] (libre software meeting) and I have a few
BB> questions about Jeff Bonwick's slides [3], especially for slide 11.
BB> I just w
Hello,
as the president of the French OSUG [1], I'll give a talk about ZFS and zones
at RMLL [2] (libre software meeting) and I have a few questions about Jeff
Bonwick's slides [3], especially for slide 11.
I just want to be sure I understand correctly; here is my understanding (I hope I'm
not totally wrong ;p)
I have a system that is running Solaris 10 Update 3 TX with 1 zpool and 5
zones. Everything on it is running fine. I take the drive to my disk duplicator
and dupe it bit by bit to another drive, put the newly duped drive in the same
machine, and boot it up; everything boots fine. Then I do a zp
On Tue, 12 Jun 2007, Tim Cook wrote:
> This pool should have 7 drives total, which it does, but for some reason
> c4d0 is displayed twice. Once as online (which it is), and once as
> unavail (which it is not).
What's the name of the 7th drive? Did you take all the drives from the
old system and
Robert Milkowski wrote:
...
JCM> As far as I understand it, I do not think that a plain
JCM> jbod version of the ST2530 is supported. I believe that
JCM> a jbod attached to the ST2540 (fc-connected) is supported.
If it works it doesn't have to be supported.
and practically speaking, I expect t
Robert Milkowski wrote:
...
JCM> Yes, my team's test plan did include ST2530 array attached
JCM> to SAS hba.
But there's a 2530 with a RAID controller and SAS external ports.
To clarify, I was asking about expansion trays without any RAID
controllers - just a 2530 JBOD attached with dual links to a host
Hello James,
Wednesday, June 13, 2007, 1:06:22 PM, you wrote:
JCM> Robert Milkowski wrote:
>> Hello Louwtjie,
>>
>> Monday, June 4, 2007, 9:14:26 AM, you wrote:
>>
>> LB> On 5/30/07, James C. McPherson <[EMAIL PROTECTED]> wrote:
Louwtjie Burger wrote:
> I know the above mentioned kit (
> I would suggest that this thread be moved to an apple-related
> list since it has nothing to do with zfs anymore.
Hmm, I don't know how you figure this has nothing to do with zfs. This is all
about zfs, and it seems to me zfs-discuss is the perfect list for it.
Because the discussion tu
> Once you switch over to zfs root, adding new hardware
> should just behave as you expect on ufs root.
> Copying /devices and /dev is just a one-time thing (as part of
> 'installation') to set up the initial zfs root.
OK, but what about the first boot? Why can't /devices and /dev be generated
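(The one-time copy mentioned above was typically done with cpio in the manual
ZFS-root HOWTOs; a sketch, assuming the new root is mounted at a hypothetical
/zfsroot:)
# cd /
# find devices dev -print | cpio -pdm /zfsroot   # seed /devices and /dev on the new root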
> I would suggest that this thread be moved to an apple-related
> list since it has nothing to do with zfs anymore.
Hmm, I don't know how you figure this has nothing to do with zfs. This is all
about zfs, and it seems to me zfs-discuss is the perfect list for it.
Robert Milkowski wrote:
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB> On 5/30/07, James C. McPherson <[EMAIL PROTECTED]> wrote:
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB> On 5/30/07, James C. McPherson <[EMAIL PROTECTED]> wrote:
>> Louwtjie Burger wrote:
>> > I know the above mentioned kit (2530) is new, but has anybody tried a
>> > direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
>>