Were you able to get more insight into this problem?
U7 did not encounter such problems.
Is there a way to force ZFS to update or refresh the user quota/used values when they do not reflect the actual usage? Are there known ways for them to get out of sync that we should avoid?
SunOS x4500-11.unix 5.10 Generic_141445-09 i86pc i386 i86pc
(Solaris 10 10/09 u8)
zpool1/sd01_mail
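For reference, the per-user accounting can be inspected directly with the standard zfs commands (a sketch using the filesystem named above; "someuser" is a placeholder):
# zfs userspace zpool1/sd01_mail
# zfs get userused@someuser,userquota@someuser zpool1/sd01_mail
The userused@ values are updated asynchronously, so a short delay after large writes or deletes is normal.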
Hi,
This might be a stupid question, but I can't figure it out.
Let's say I've chosen to live with a zpool without redundancy
(SAN disks, which actually have RAID-5 in the disk cabinet).
m...@mybox:~# zpool status BACKUP
pool: BACKUP
state: ONLINE
scrub: none requested
config:
NAME
I have many disks created with ZFS on Mac OS X that I'm trying to move to
OpenSolaris 2009.06.
I created an OpenSolaris user with the same numeric userid as on the Mac system.
Then I performed a [b]zpool import macpool[/b] to mount the data.
It's all there, fine, but the OpenSolaris (non-root) user c
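As a rough sketch of what one might check in this situation (the mountpoint and user name are assumptions, not from the original post):
# zpool import macpool
# ls -ln /macpool
# id macuser
If the numeric uid shown by ls -ln matches the uid reported by id, ordinary POSIX permissions should apply; any ACLs on the files can be listed with ls -v on Solaris.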
On Fri, Oct 16, 2009 at 1:40 PM, Erik Trimble wrote:
> Prasad Unnikrishnan wrote:
>
>> Add the new disk - start writing new blocks to that disk, instead of
>> waiting to re-layout all the stripes. And when the disk is not active, do
>> a slow/safe copy-on-write to rebalance all the blocks?
>>
>>
> Con
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
> Thanks for reporting this. I have fixed this bug (6822816) in build
> 127.
Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply the patch you made to this release and import the pool.
> --matt
>
>
Hi everyone,
Currently, the device naming changes in build 125 mean that you cannot
use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
mirrored root pool.
If you are considering this release for the ZFS log device removal
feature, then also consider that you will not be able t
Thanks for reporting this. I have fixed this bug (6822816) in build
127. Here is the evaluation from the bug report:
The problem is that the clone's dsobj does not appear in the origin's
ds_next_clones_obj.
The bug can occur under certain circumstances if there was a
"botched upg
Hi Tomas,
Increasing the slice size in a pool by using the format utility is not
equivalent to increasing a LUN size. Increasing a LUN size triggers
a sysevent from the underlying device that ZFS recognizes. The
autoexpand feature takes advantage of this mechanism.
I don't know if a bug is here,
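As a hedged illustration of the LUN-growth path described above (pool and device names are placeholders, and the autoexpand property is only present in builds/updates that include the feature):
# zpool set autoexpand=on tank
# zpool online -e tank c2t0d0
With autoexpand=off, the explicit zpool online -e is what asks ZFS to grow into the newly enlarged LUN.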
On Sat, 17 Oct 2009, dick hoogendijk wrote:
> It's a bootblock issue. If you really want to get back to u6 you have to
> "installgrub /boot/grub/stage1 /boot/grub/stage2" from th update 6 image
> so mount it (with lumount or easier, with zfs mount) and make sure you
> take the stage1 stage2 from t
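Sketched out, the sequence described above might look like this (BE name, mountpoint, and disk are placeholders):
# lumount s10u6_be /mnt
# installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c1t0d0s0
# luumount s10u6_be
The point is that stage1/stage2 come from the mounted update 6 environment, not from the currently running one.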
On 19 October, 2009 - Cindy Swearingen sent me these 2,4K bytes:
> Hi Tomas,
>
> I think you are saying that you are testing what happens when you
> increase a slice under a live ZFS storage pool and then reviewing
> the zdb output of the disk labels.
>
> Increasing a slice under a live ZFS stor
Frank Cusack wrote:
On October 19, 2009 9:53:14 AM +1300 Trevor Pretty
wrote:
Frank
I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection
&id=4&Itemid=128
Thanks! I *thought* there was a Nexenta solution but a google search
didn't turn anything u
We are working on evaluating all the issues and will get problem
descriptions and resolutions posted soon. I've asked some of you to
contact us directly to provide feedback and hope those wheels are
turning.
So far, we have these issues:
1. Boot failure after LU with a separate var dataset.
Thi
Hi Tomas,
I think you are saying that you are testing what happens when you
increase a slice under a live ZFS storage pool and then reviewing
the zdb output of the disk labels.
Increasing a slice under a live ZFS storage pool isn't supported and
might break your pool.
I think you are seeing s
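For anyone following along, the disk labels being discussed can be dumped with zdb (the slice name is a placeholder):
# zdb -l /dev/rdsk/c1t1d0s0
This prints the four ZFS labels on the device, including the asize recorded for the vdev, which is the value that does not change when only the slice is grown underneath a live pool.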
Hi Markus,
The numbered VDEVs listed in your zpool status output facilitate the log
device removal feature that was integrated into build 125. Eventually, they
will also be used for removal of redundant devices when general device
removal integrates.
In build 125, if you create a pool with mirrored log devices, and
the
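As a hedged sketch of how the numbered names come into play (pool and device names are hypothetical):
# zpool create tank c0t0d0 log mirror c0t2d0 c0t3d0
# zpool status tank
# zpool remove tank mirror-1
In the status output the mirrored log appears under a numbered name such as mirror-1, and that numbered name is what zpool remove takes to pull the whole log mirror out.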
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been
released on Genunix! Many thanks to Genunix.org for download hosting and
serving the OpenSolaris community.
EON ZFS storage is available in 32/64-bit CIFS and Samba versions:
EON 64-bit x86 CIFS ISO image vers
Hi.
We've got some test machines which, among other things, have zpools of various
sizes and placements scribbled all over the disks.
0. HP DL380G3, Solaris10u8, 2x16G disks; c1t0d0 & c1t1d0
1. Took a (non-emptied) disk, created a 2GB slice0 and a ~14GB (to the
last cyl) slice7.
2. zpool create stric
On Mon, 19 Oct 2009, Jonas Nordin wrote:
Hi, thank you for replying.
I tried to set a label but I got this.
#format -e
The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.
Maybe your drives have bad fi
Hi, thank you for replying.
I tried to set a label but I got this.
#format -e
The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.
The current rpm value 0 is invalid, adjusting it to 3600
The device does
My goal is to have a big, fast, HA filer that holds nearly everything
for a bunch of development services, each running in its own Solaris
zone. So when I need a new service, test box, etc., I provision a new
zone and hand it to the dev requesters and they load their stuff on it
and go.
Ea
Hi, I just noticed this on snv_125. Is there an upcoming feature that allows the use
of numbered vdevs, or what are these for?
(raidz2-N)
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
On October 19, 2009 9:53:14 AM +1300 Trevor Pretty
wrote:
Frank
I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection
&id=4&Itemid=128
Thanks! I *thought* there was a Nexenta solution but a google search
didn't turn anything up for me. I'll defin
Thanks again for the comments. I want to clear this up with a few notes:
o In OSOL 2009.06, zones MUST be installed in a zfs filesystem (see the zonecfg sketch below).
o This is different from any dataset specified, which is like adding an fs.
And of course if you specify as a dataset the same zfs pool that you installed
into
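A minimal zonecfg sketch of the two cases noted above (zone, path, and dataset names are hypothetical):
# zonecfg -z devzone
devzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:devzone> create
zonecfg:devzone> set zonepath=/zones/devzone
zonecfg:devzone> add dataset
zonecfg:devzone:dataset> set name=tank/devzone-data
zonecfg:devzone:dataset> end
zonecfg:devzone> commit
zonecfg:devzone> exit
Putting zonepath on a ZFS filesystem covers the "zone installed in a zfs filesystem" case; the add dataset block is the separate delegated-dataset case, which behaves like adding an extra fs to the zone.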
It's updated now. Thanks for mentioning it.
Cindy
On 10/18/09 10:19, Sriram Narayanan wrote:
All:
Given that the latest S10 update includes user quotas, the FAQ here
[1] may need an update.
-- Sriram
[1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas
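For context, the per-user quota interface being referred to looks roughly like this (dataset and user names are placeholders; check the zfs man page of your specific update for availability):
# zfs set userquota@alice=10G tank/home
# zfs get userquota@alice tank/home
# zfs userspace tank/home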
Frank Middleton wrote:
On 10/13/09 18:35, Albert Chin wrote:
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem - the VM only
Hi Jonas,
At first sight it looks like your "unopenable" disks were relabeled with an
SMI label (hence the cylinder count in format's output).
I'm not sure it is completely safe data-wise, but you could try to relabel
your disks with an EFI label (use format -e to access the label choice).
F.
On 10/18/09 20
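Roughly, the relabel step looks like this (a sketch only; relabeling can destroy data, so treat the disk as expendable):
# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 1
Choosing 1 writes an EFI label; choosing 0 keeps the SMI (VTOC) style that produces the cylinder-based geometry mentioned above.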