Hi
I know some of this has been discussed in the past, but I can't quite find the
exact information I'm seeking
(and I'd check the ZFS wikis, but the websites are down at the moment).
Firstly, which is correct: the free space shown by "zfs list" or by "zpool iostat"?
zfs list:
used 50.3 TB, free 13.
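(For later readers: "zfs list" reports space usable by datasets, after raidz
parity, reservations and the like, while "zpool list"/"zpool iostat" report raw
pool capacity, so the two can legitimately disagree, especially on raidz pools.
A quick way to compare both views, with a hypothetical pool name:

  $ zfs list -o name,used,avail tank
  $ zpool list -o name,size,alloc,free tank
)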
From: Cindy Swearingen
>No doubt. This is a bad bug and we apologize.
>1. If you are running Solaris 11 or Solaris 11.1 and have separate
>cache devices, you should remove them to avoid this problem.
How is the 7000-series storage appliance affected?
>2. A MOS knowledge article (1497293.1) is available
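For anyone following the advice in point 1, removing a separate cache device is
a single operation; a minimal sketch, with hypothetical pool and device names:

  # zpool remove tank c1t2d0   # detach the cache (L2ARC) device from the pool
  # zpool status tank          # confirm the cache vdev is gone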
Here it is:
# pstack core.format1
core 'core.format1' of 3351: format
-----------------  lwp# 1 / thread# 1  -----------------
0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
08068a41 c_disk (4, 806f250, 0, 0, 0, 0) +
That's right, I'm only using the 3114 out of desperation.
Does anyone else have the marvell88sx working in Solaris 11.1?
>
> From: Andrew Gabriel
>3112 and 3114 were very early SATA controllers before there were any SATA
>drivers, which pretend to be ATA controllers
Hi
I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command, but it
crashed and dumped core.
However, the zpool 'create' command managed to create a pool on the whole disk
(2.68 TB space).
I hope that's only a problem with the format command and not with zfs or any
other part of the system
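For the record, the sequence was essentially the following; c5t0d0 stands in
for the real device name:

  # format                     # crashes and dumps core when the 3TB disk is selected
  # zpool create tank c5t0d0   # succeeds on the whole disk
  # zpool list tank            # shows the expected ~2.68 TB of space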
> From: Matthew Ahrens
>On Thu, Jan 5, 2012 at 6:53 AM, sol wrote:
I would have liked to think that there was some good-will between the ex- and
current members of the zfs team, in the sense that the people who created zfs
but then left Oracle still care about it enough
Oh, I can run the disks off a SiliconImage 3114, but it's the marvell controller
that I'm trying to get working. I'm sure it's the controller which is used in
the Thumpers, so it should surely work in Solaris 11.1.
>
> From: Bob Friesenhahn
>
> If the SATA card you
Some more information about the system:
Solaris 11.1 with latest updates (assembled 19 Sep 2012), amd64
The card is vendor 0x11ab device 0x6081
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
CardVendor 0x11ab card 0x11ab (Marvell Technology Group Ltd., Card unknown)
S
func_enable=0x5, ahci_msi_enabled=0, sata_max_queue_depth=1)
Is there anything else I can try?
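In case anyone wants to experiment with the same knobs from /etc/system rather
than the driver .conf, something along these lines should mirror the values
shown above; treat it as a guess to try, not a known fix:

  * /etc/system - SATA framework tunables (reboot required)
  set sata:sata_func_enable = 0x5
  set sata:sata_max_queue_depth = 0x1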
>
> From: Bob Friesenhahn
>To: sol
>Cc: "zfs-discuss@opensolaris.org"
>Sent: Wednesday, 12 December 2012, 14:49
>Subject: Re: [zfs-discuss] ZFS arra
Hello
I've got a ZFS box running perfectly with an 8-port SATA card
using the marvell88sx driver in opensolaris-2009.
However, when I try to run Solaris 11 it won't boot.
If I unplug some of the hard disks it might boot,
but then none of them show up in 'format'
and none of them have configured status
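For anyone else chasing this, the attach state can be inspected and nudged with
the usual tools; sata0/0 below is a placeholder for whatever attachment point
your system actually shows:

  # cfgadm -al                  # list attachment points and their occupant state
  # cfgadm -c configure sata0/0 # try to configure an unconfigured port
  # prtconf -D | grep marvell   # check whether the marvell88sx driver attached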
Other than Oracle do you think any other companies would be willing to take
over support for a clustered 7410 appliance with 6 JBODs?
(Some non-Oracle names which popped out of google:
Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)
Hello
It seems as though every time I scrub my mirror I get a few megabytes of
checksum errors on one disk (luckily corrected by the other). Is there some way
of tracking down a problem which might be persistent?
I wonder if it's anything to do with these messages which are constantly
appearing
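A few standard places to look when chasing recurring checksum errors, with a
hypothetical pool name; which one pinpoints the culprit will vary:

  # zpool status -v tank   # per-device read/write/checksum error counters
  # fmdump -eV | more      # raw FMA error telemetry (checksum vs. transport faults)
  # iostat -En             # per-disk soft/hard/transport error counts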
Thanks for that, Matt, very reassuring :-)
>
> There is plenty of good will between everyone who's worked on ZFS -- current
> Oracle employees, former employees, and those never employed by Oracle. We
> would all like to see all implementations of ZFS be the
> if a bug fixed in Illumos is never reported to Oracle by a customer,
> it would likely never get fixed in Solaris either
:-(
I would have liked to think that there was some good-will between the ex- and
current-members of the zfs team, in the sense that the people who created zfs
but then left Oracle still care about it enough
Richard Elling wrote:
> many of the former Sun ZFS team
> regularly contribute to ZFS through the illumos developer community.
Does this mean that if they provide a bug fix via illumos, then the fix won't
make it into the Oracle code?
Yes, it's moving a tree of files, and the shell ulimit is the default (which I
think is 256).
It happened twice recently in normal use, but not when I tried to replicate it
(standard test response ;-))
Anyway, it only happened moving between zfs filesystems in Solaris 11; I've
never seen it be
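If the default descriptor limit really is the trigger, raising it for one shell
before repeating the move is an easy test; the value here is arbitrary:

  $ ulimit -n      # show the current per-process file descriptor limit
  $ ulimit -n 1024 # raise it for this shell only
  $ mv /my/zfs/filesystem/files /my/zfs/otherfilesystem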
Hi
Several observations with zfs cifs/smb shares in the new Solaris 11.
1) It seems that the previously documented way to set the smb share name no
longer works:
zfs set sharesmb=name=my_share_name
You have to use the long-winded:
zfs set share=name=my_share_name,path=/my/share/path,prot=smb
This
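In case it's useful, the resulting settings can be read back with zfs get; the
dataset name here is hypothetical:

  # zfs get share tank/myshare     # show the share=... property set above
  # zfs get sharesmb tank/myshare  # see whether sharesmb reflects it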
Hello
Has anyone else come across a bug moving files between two zfs file systems?
I used "mv /my/zfs/filesystem/files /my/zfs/otherfilesystem" and got the error
"too many open files".
This is on Solaris 11
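One way to see what mv is actually exhausting, for anyone who wants to dig, is
to trace the calls that consume descriptors; a sketch, not a verified diagnosis:

  $ truss -f -t open,openat mv /my/zfs/filesystem/files /my/zfs/otherfilesystem
  (watch for a long run of opens with no matching closes before the failure)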
Hello
I have some zfs filesystems shared via cifs. Some of them I can mount and
others I can't. They appear identical in properties and ACLs; the only
difference I've found is that the successful ones have xattr {A--m} and the
others have {}. But I can't set that xattr on the share to see if
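For comparison, system attributes can at least be displayed and set on ordinary
files with the standard tools; whether the share itself accepts them is the
open question. A sketch, with a hypothetical path:

  $ ls -/c /tank/myshare            # compact attribute display, e.g. {A--m}
  $ chmod S+varchive /tank/myshare  # verbose form: try to set the archive (A) attribute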
Hi
Having just done a scrub of a mirror I've lost a file, and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However, the
zpool status shows checksum errors, not I/O errors, and I'm not sure what
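For what it's worth, the list of affected files comes from zpool status itself;
a sketch with a hypothetical pool name:

  # zpool status -v tank
  (look for the "errors: Permanent errors have been detected in the
  following files:" section at the end of the output)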
Richard Elling wrote:
> Gregory Gee wrote:
>> I am using OpenSolaris to host VM images over NFS for XenServer. I'm
>> looking for tips on what parameters can be set to help optimize my ZFS
>> pool that holds my VM images.
> There is nothing special about tuning for VMs, the normal NFS tuning a
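The knobs people usually reach for with VM images on ZFS look something like
this; the values and the tank/vmimages dataset are illustrative only:

  # zfs set recordsize=8k tank/vmimages  # match the guest's typical I/O size
  # zfs set atime=off tank/vmimages      # skip access-time updates on image files
  # zpool add tank log c4t1d0            # a dedicated slog helps NFS sync writes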