to go if it breaks existing applications which
> rely on this feature. It does break applications in our case.
Existing applications rely on the ability to corrupt UFS filesystems?
Sounds horrible.
--
-Alan Coopersmith- alan.coopersm...@oracle.com
Ora
I have an old opensolaris server (snv_101b) that had a drive bay fan
failure. There were two mirrored volumes, each one lost a disk in the
mirror (fortunately, I split the mirrors across bays). One volume was
able to come up and I started resilvering, but got disk errors and
failed to complete th
There is a ZFS Community on the Oracle Communities that was just
kicked off this month -
https://communities.oracle.com/portal/server.pt/community/oracle_solaris_zfs_file_system/526
Regards,
Alan Hargreaves
On 06/12/12 08:05, Tomas
You do it as you would any zpool. Mirroring is OK for the zpool.
It's just things like raidz* and concats that are not.
# zpool attach rpool <existing-device> <new-device>
Note the use of attach. "add" will try to make a concat.
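For concreteness, the attach step might look like the following (device names are hypothetical; the second argument is the disk already in the pool, the third is the new mirror side):

```shell
# Mirror the root pool by attaching a new disk to the existing one.
# c0t0d0s0 (existing) and c0t1d0s0 (new) are placeholder device names.
zpool attach rpool c0t0d0s0 c0t1d0s0

# Watch the resilver complete before relying on the new side:
zpool status rpool

# Note: "zpool add rpool c0t1d0s0" would instead try to extend the pool
# as a concat/stripe, which is not supported for a bootable root pool.
```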
Regards,
There is no Solaris or ZFS functionality associated with those
objects and you can safely delete them on ZFS: they will be
recreated as required whenever the dataset is shared over SMB.
For more information on those files, look for Quota Tracking in
http://msdn.microsoft.com/en-us/li
The issue was somehow a file got created in /export and ZFS prefers an empty
mount point.
rm * and the issue was resolved.
alan
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http
                        276M  9.23G    32K  legacy
rpool/export/home       276M  9.23G    32K  legacy
rpool/export/home/alan  276M  9.23G   276M  /export/home/alan
rpool/swap             2.04G  9.23G  2.04G  -
Any ideas?
thanks in advance,
alan
I tried to recreate this scenario using VirtualBox under Windows XP so I could
capture the actual messages.
Could not duplicate.
Oh well.
alan
can produce confusing results on Windows.
Alan
On 12/17/10 1:24 AM, artiepen wrote:
I'm using zfs/osol snv_134. I have 2 zfs volumes: /zpool1/test/share1 and
/zpool1/test/share2. share1 is using CIFS, share2: nfs.
I've recently put a cronjob in place that changes the ownership of s
What are the property settings on your dataset?
Alan
On 11/22/10 6:34 AM, Harry Putnam wrote:
Harry wrote:
When *.mov file reside on a windows host, and assuming your browser
has the right plugins, you can open them with either quicktime player
or firefox (which also uses the quicktime player
ecent builds don't have the problem, that's the main thing.
The following update was pushed to snv_149:
PSARC/2010/154 Unified sharing system call
6968897 sharefs: Unified sharing system call
Alan
tised, it looks like it could be.
When you say "advertised" do you mean that it appears in
/etc/dfs/sharetab when the dataset is not mounted and/or
you can see it with 'net view' from a client?
I'm using a recent build and I see the smb share disappear
from b
On 10/26/10 06:25 AM, Andy Graybeal wrote:
Yes, if you set up the directory ACLs for inheritance (include :fd:
when you specify the ACEs), the ACLs on copied files will be inherited
from the parent folder (probably best not to use cp -p).
Alan
Alan, thank you for the response.
For my example
Yes, if you set up the directory ACLs for inheritance (include :fd:
when you specify the ACEs), the ACLs on copied files will be inherited
from the parent folder (probably best not to use cp -p).
Alan
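As a sketch, setting an inheritable ACE on Solaris looks like the following (user name and path are examples, not taken from the thread):

```shell
# Add an ACE granting the user full access; the :fd: flags make it
# inherit to new files (f) and subdirectories (d) created below.
chmod A+user:andy:rwxpdDaARWcCos:fd:allow /export/share

# List the ACL, including inheritance flags, to verify:
ls -dV /export/share
```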
Am I headed the wrong direction? I need some hand-holding.
Thank you,
-Andy
could just leave it named "new directory", though. And I could rename it
on the Linux side as the same user that failed to rename it from the
Windows side.)
If you want a Windows-like permission experience: use ACLs rather than
perms and ensure
olaris.org/view_bug.do?bug_id=6582165
Is there any commonality besides the observed behaviors?
No, the SMB/CIFS share limitation is that we have not yet added
support for child mounts over SMB; this is completely unrelated
to any configuration problems encountered with
But it is surely taking in a whole boatload of anecdotal information
and precious little in the way of real data or online references.
alan.
Eugen Leitl wrote:
On Tue, Apr 20, 2010 at 06:51:01PM +0100, Bayard Bell wrote:
These folks running the relevant
o that the resources shared
between the two (such as QA) wouldn't be overloaded trying to get both
OpenSolaris 2010.12 and Solaris 10 10/09 finished up around the same time
(or when many of them would be normally out for the end-of-year holidays).
--
-Alan Coopersmith- al
Joerg Schilling wrote:
> Alan Coopersmith wrote:
>
>> If the test suite is going to be running on nv_128 or later, then
>> you are guaranteed to have a zfs filesystem, since root must be
>> zfs then (since the only install method will be IPS, which requires
>> zfs
s which the test
> suite won't have...
If the test suite is going to be running on nv_128 or later, then
you are guaranteed to have a zfs filesystem, since root must be
zfs then (since the only install method will be IPS, which requires
zfs root). Until then you could just document to
R-c---:fd-:allow
user:chris:rwxpdDaARWcCos:fd-:allow
(The "x" shouldn't be necessary, but XP seems not able to list
subdirectories without it...)
Why do you think the "x" is unnecessary?
Alan
So I thought about using NFS instead, which should be better for an
U
want.
Then retry the scenario that's causing a problem.
Alan
--
On 07/01/09 18:55, Afshin Salek wrote:
I can't really explain the changes that happen to the file's
ACL using vi over NFS. I'm CC'ing zfs-discuss maybe someone
there can help out.
Afshin
John Keiffer wrote:
they don't benchmark the operations that are
critical to business. Sure we can spend a lot of time examining the
issue and then addressing it; but would it actually help address a real
business concern, or just an "itch"?
Regards,
Alan Hargreaves
Paisit Wongsongsarn wrote:
Hi
Nicholas Lee wrote:
>
> The standard controller that has been recommended in the past is the
> AOC-SAT2-MV8 - an 8 port with a Marvell chipset. There have been several
> mentions of LSI based controllers on the mailing lists and I'm wondering
> about them.
We tried the Marvell controller, and it
I am pretty sure that Oxford 911 is a family of parts. The current Oxford
Firewire parts are the 934 and 936 families. It appears that the Oxford 911
was commonly used in drive enclosures.
The most troublesome part in my experience is the Initio INIC-1430. It does not
get along with scsa1394
Which firewire card? Any firewire card that is OHCI compliant, which is almost
any add-on firewire card that you would buy new these days.
The bigger question is the firewire drive that you want to use or, more
precisely, the 1394-to-ATA (or SATA) bridge used by the drive. Some work
better th
Thanks for the tips. I'm not sure if they will be relevant, though. We don't
talk directly with the AMS1000. We are using a USP-VM to virtualize all of our
storage and we didn't have to add anything to the drv configuration files to
see the new disk (mpxio was already turned on). We are usin
I think we found the choke point. The silver lining is that it isn't the T2000
or ZFS. We think it is the new SAN, an Hitachi AMS1000, which has 7200RPM SATA
disks with the cache turned off. This system has a very small cache, and when
we did turn it on for one of the replacement LUNs we saw
It's something we've considered here as well.
We will be considering it in the new year, but that will not happen in time to
affect our current SAN migration.
I had posted at the Sun forums, but it was recommended to me to try here as
well. For reference, please see
http://forums.sun.com/thread.jspa?threadID=5351916&tstart=0.
In the process of a large SAN migration project we are moving many large
volumes from the old SAN to the new. We are making u
I had a problem like that on my laptop that also has an rge interface, ping
worked fine, but ssh and ftp didn't. To get around it I had to add
set ip:dohwcksum = 0
to /etc/system and reboot.
That worked and is worth a try for you :)
Cheers,
Alan
Good question.
Well, the hosts are Netbackup Media servers. The idea behind the design is that
we stream the RMAN stuff to disk, via NFS mounts, and then write to tape during
the day. With the SAN attached disks sitting on these hosts and with disk
storage units configured for NBU the data strea
Thanks to all for your comments and sharing your experiences.
In my setup the pools are split and then NFS mounted to other nodes, mostly
Oracle DB boxes. These mounts will provide areas for RMAN Flash backups to be
written.
If I lose connectivity to any host I will swing the luns over to the al
I was just thinking of a similar "feature request": one of the things I'm doing
is hosting vm's. I build a base vm with standard setup in a dedicated
filesystem, then when I need a new instance "zfs clone" and voila! ready to
start tweaking for the needs of the new instance, using a fraction o
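That clone workflow can be sketched as follows (dataset names are illustrative):

```shell
# Prepare a golden image once, then stamp out instances from a snapshot.
zfs snapshot tank/vm/base@gold
zfs clone tank/vm/base@gold tank/vm/instance01

# The clone shares blocks with the snapshot, so it uses almost no extra
# space until the instance diverges from the base image:
zfs list -o name,used,refer tank/vm/instance01
```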
I took the brute force approach, but it was simple and passed the "boot from
either" test: install on both, then mirror s0, and I'm reasonably confident
identical disks will look the same ;-)
e".
Ok, thanks for the explanation :-)
--
Alan Burlison
NAME       USED   AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G  18K    legacy
What's the legacy mount for? Is it related to zones?
thanks,
--
Alan Burlison
ly.
If there was a '-nomount' flag to zfs receive, snapshotting & saving
a pool would be just 2 commands.
Looks like a RFE is needed to me...
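For what it's worth, later ZFS builds did grow a `zfs receive -u` option that skips mounting on the receiving side; assuming a build with that flag, the whole-pool save could look like this (pool and snapshot names are hypothetical):

```shell
# Recursively snapshot the source pool, then replicate it without
# mounting any of the received datasets (-u).
zfs snapshot -r sourcepool@backup
zfs send -R sourcepool@backup | zfs receive -u -d pool3
```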
--
Alan Burlison
Alan Burlison wrote:
> So how do I tell zfs receive to create the new filesystems in pool3, but
> not actually try to mount them?
This is even more of an issue with ZFS root - as far as I can tell it's
impossible to recursively back up all the filesystems in a root pool
because of
Mark J Musante wrote:
> Alan, can you point me at your machine (if it's on SWAN)? I'd like to
> see what's going on in there.
Many thanks to Mark for his help, I eventually figured out the problem:
6730154 Grub: findroot fails to find ZFS BE
Precis: USB disks, LU
Mark J Musante wrote:
> Alan, can you point me at your machine (if it's on SWAN)? I'd like to
> see what's going on in there.
Might be easiest to use sun shared shell to get you access...
--
Alan Burlison
Mark J Musante wrote:
> As a workaround, you can pre-create the swap & dump zvols. E.g.:
>
> zfs create -V 512m {pool}/swap
> zfs create -V 2g {pool}/dump
>
> If LU sees that the zvols already exist, it assumes they are correctly
> sized.
Nice tip, than
Enda O'Connor wrote:
> probably
> 6722767 lucreate did not add new BE to menu.lst ( or grub )
Yeah, I found that bug, added a CR & bumped the priority. Unfortunately
there's no analysis or workaround in the bug, so I've no idea what the
real probl
one point. I've tried blitzing and reinstalling LU entirely - still no joy.
--
Alan Burlison
'/home4': directory is not empty
So how do I tell zfs receive to create the new filesystems in pool3, but
not actually try to mount them?
--
Alan Burlison
next step:
8. copy the ZFS BE into the new pool made from the UFS BE
because I can't get LU to create the BE in a different ZFS pool.
--
Alan Burlison
work, because LU wants to create both swap & dump ZFS
filesystems in there too; my machine has 16GB of memory and the slice is
8GB, so there isn't enough space & LU throws a cog. Which is why I
wanted to get it to use the old swap partition in
want to
> continue to refine how zfs works as a root file system.
I'm really liking what I see so far, it's just a question of getting my
head around the best way of setting things up, and figuring out the
easiest way of migrating.
--
Alan Burlison
ly using swap or dump,
no space is actually used (other than the zfs filesystem overhead) - is
that correct?
I'm now coming up with a mad scheme involving ZFS boot, a USB disk,
string and prayer to enable me to get rid of the old UFS root & swap
slices on my root disk and
wap & dump.
Basically I want to migrate my root filesystem from UFS to ZFS and leave
everything else as it it, there doesn't seem to be a way to do this.
--
Alan Burlison
I'm upgrading my B92 UFS-boot system to ZFS root using Live Upgrade. It
appears to work fine so far, but I'm wondering why it allocates a ZFS
filesystem for swap when I already have a dedicated swap slice.
Shouldn't it just use any existing swap slice rather than creating a ZFS
I swear I tried that and got "file not found", but lo! it worked. argh!
thanks...
We just got a new system to use as a zfs file server with 8 drives on 2
controllers. I've installed on and mirrored c4t0d0s0 and c5t0d0s0, and am now
trying to create a mirrored set out of the rest of the disks, however all of
them are giving me i/o errors, e.g.:
# uname -a
SunOS zfs01.server.
ovell "zealot" and the
rest were just folks who make a living supporting Novell customers.
Also, NSS has apparently been ported to Linux.
alan
Alan Perry wrote:
> I gave a talk on ZFS at a local user group meeting this evening.
What I didn't
> know going in was that the meeting was hosted at a Novell consulting
shop. I got
> asked a lot of "what does ZFS do that NSS doesn't do" questions that
almost nothing about Novell).
Is there some white paper or something on the topic?
I am not on the zfs discuss list, so please remember to include my
e-mail address on any response.
alan
performs, hopefully
it'll be worth the upgrade cost and hassle.
Cheers,
Alan
> >
> >So, before I go and shout at the motherboard
> manufacturer are
> > there any components in b78 that might not be
> expecting a quad core
> > AMD cpu? Possibly in the
ipper 5.2e.
>>
>> Is this a known issue? Should I file a bug?
What Nevada build are you using?
What output do you see for:
zfs get casesensitivity
Thanks,
Alan
> I'm not aware of any such problem. This problem is better asked in
> [EMAIL PROTECTED] where the CI
before I go and shout at the motherboard manufacturer are there any
components in b78 that might not be expecting a quad core AMD cpu? Possibly in
the marvell88sx driver? Or is there anything more I can do to track this issue
down.
Thanks,
Alan
91K 0 496M
raidpool 170G 2.91T 0 3.67K 0 464M
This is certainly different to the snv_57 behaviour, and was the same after I
had upgraded the pool to version 8. Has anyone else seen this on their systems?
Cheers,
Alan
This message posted from opens
" is what gets used most of the time.
How current is that? I thought that while "Zettabyte File System"
was the original name, use of it was dropped a couple years ago and
ZFS became the only name. I don't see "Zettabyte" appearing anywhere
in the ZFS community p
goto top;
I think the snoop would be very useful to pore over.
Cheers,
Alan
Hold fire on the re-init until one of the devs chips in, maybe I'm barking up
the wrong tree ;)
--a
zdb -dd zmir
There are more options, and they give even more info if you repeat the option
letter more times ( especially the -d flag... )
These might be worth posting to help one of the developers spot something.
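Repeating the -d flag as suggested might look like this (the pool name "zmir" comes from the thread; output depth increases with each extra d):

```shell
# Progressively more detail about datasets and objects in pool "zmir":
zdb -d zmir       # one summary line per dataset
zdb -dd zmir      # adds per-object summaries
zdb -dddd zmir    # full per-object detail, handy for bug reports
```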
Cheers,
Alan
I know, bad form replying to myself, but I am wondering if it might be
related to
6438702 error handling in zfs_getpage() can trigger "page not
locked"
Which is marked "fix in progress" with a target of the current build.
alan.
Alan Hargreaves wrote:
Folks, be
ped. The dereference looks like the first
dereference in page_unlock(), which looks at pp->p_selock.
I can spend a little time looking at it, but was wondering if anyone had
seen this kind of panic previously?
I have two identical crashdumps created in exactly the same way.
alan.
--
plans for PxFS on ZFS any time soon :) ? Or any plans to release
PxFS as part of opensolaris?
Cheers,
Alan
Eh maybe it's not a problem after all, the scrub has completed well...
--a
bash-3.00# zpool status -v
pool: raidpool
state: ONLINE
scrub: scrub completed with 0 errors on Tue May 9 21:10:55 2006
config:
NAME      STATE   READ WRITE CKSUM
raidpool  ONLINE     0     0
ac 913a9 lvl=0 blkid=0
I've set off a scrub to check things, there was no resilver of any data on
boot, but there's mention of corruption... Is there any way of translating
this output to filenames? As this is a zfs root, I'd like to be absolutely
sure before d