Hi all,
Miles Nordin wrote:
>> "rl" == Rob Logan <[EMAIL PROTECTED]> writes:
>
> rl> the sata framework uses the sd driver so its:
>
> yes but this is a really tiny and basically useless amount of output
> compared to what smartctl gives on Linux with SATA disks, where SATA
> disks also
#zpool replace data c0t2d0
cannot replace c0t2d0 with c0t2d0: cannot replace a replacing device
I don't have another drive of that size, unfortunately, though since the device
was zeroed there shouldn't be any pool config data on it.
--
This message posted from opensolaris.org
And I'm also wondering if it might be worth trying a different disk. I wonder
if it's struggling now because it's seeing the same disk as it's already tried
to use, or if the zeroing of the disk confused it.
Do you have another drive of the same size you could try?
This is only a guess, but have you tried
# zpool replace data c0t2d0
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I would like to know how to include an ioctl in zfs_ioctl.c,
so I would be grateful if somebody explained how ZFS ioctls work.
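From what I understand of the OpenSolaris source, ioctls in zfs_ioctl.c are dispatched through a table of handler entries, so adding one means adding a ZFS_IOC_* number and a matching table entry. A rough sketch only; the handler name below is illustrative, and the exact field layout of the table entries should be checked against your source tree:

```c
/* Sketch only: modeled on the zfs_ioc_vec[] dispatch table in
 * usr/src/uts/common/fs/zfs/zfs_ioctl.c; details may not match
 * your source tree exactly. */

/* 1. Add a new command number to the zfs_ioc enum in
 *    sys/zfs_ioctl.h (e.g. ZFS_IOC_MY_COMMAND), at the end so
 *    existing numbers are unchanged. */

/* 2. Write a handler that receives the marshalled zfs_cmd_t
 *    copied in from userland. */
static int
zfs_ioc_my_command(zfs_cmd_t *zc)
{
        /* zc->zc_name typically carries the pool or dataset name */
        return (0);
}

/* 3. Register the handler in the zfs_ioc_vec[] table together
 *    with a security-policy callback (e.g. zfs_secpolicy_read),
 *    which is checked before the handler runs. */
```

Userland then reaches the handler via an ioctl on /dev/zfs with a zfs_cmd_t argument, which is how the zfs/zpool commands drive the kernel.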
Hello,
ZFS filesystems != zpool; this is your mistake, I think.
Have you tried to export and import your zpool? I've read something like that
on the list but I'm not sure...
-C
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of gsorin
Sent: Tuesday, December 9
Hello,
I have the following issue:
I'm running Solaris 10 in a VMware environment. I have a virtual HDD of 8 GB
(for example). At some point I can increase the hard drive to 10 GB. How can I
resize the ZFS pool to take advantage of the new available space?
The same question applies for physical
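Not a definitive answer, but the sequence commonly suggested on this list: if the underlying device got bigger, relabel it so the label covers the new capacity, then make ZFS re-read the size with an export/import pass. A sketch, assuming the pool is named "data" and the grown disk is c0t1d0; adjust both to your setup:

```shell
# Sketch only -- double-check device names before running anything.
zpool export data
# In format(1M), select c0t1d0 and let it auto-configure so the
# label reflects the new, larger capacity.
format
zpool import data
zpool list data   # Size should now show the larger device
```

The same approach applies to a physical disk or LUN that has been grown on the array side.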
my apologies... 11s, 12s, and 13s represent the number of seconds in a
read/write period, not disks. so, 11 seconds into a period, %b suddenly jumps
to 100 after having been 0 for the first 10.
On Mon, 8 Dec 2008, milosz wrote:
> compression is off across the board.
>
> svc_t is only maxed during the periods of heavy write activity (2-3
> seconds every 10 or so seconds)... otherwise disks are basically
> idling.
Check for some hardware anomaly which might impact disks 11, 12, and
13
compression is off across the board.
svc_t is only maxed during the periods of heavy write activity (2-3 seconds
every 10 or so seconds)... otherwise disks are basically idling.
On Mon, Dec 8, 2008 at 3:09 PM, milosz <[EMAIL PROTECTED]> wrote:
> hi all,
>
> currently having trouble with sustained write performance with my setup...
>
> ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly connected
> to snv_101 w/ intel e1000g nic.
>
> basically, given enough
> (with iostat -xtc 1)
it sure would be nice to know if actv > 0 so
we would know if the lun was busy because
its queue is full or just slow (svc_t > 200)
for tracking errors try `iostat -xcen 1`
and `iostat -E`
Rob
hi all,
currently having trouble with sustained write performance with my setup...
ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly connected to
snv_101 w/ intel e1000g nic.
basically, given enough time, the sustained write behavior is perfectly
periodic. if i copy a large f
On Mon, Dec 08, 2008 at 04:46:37PM -0600, Brian Cameron wrote:
> >Is there a shortcomming in VT here?
>
> I guess it depends on how you think VT should work. My understanding
> is that VT works on a first-come-first-serve basis, so the first user
> who calls logindevperm interfaces gets permissio
Nicolas:
> On Mon, Dec 08, 2008 at 03:27:49PM -0600, Brian Cameron wrote:
>> login, they get the audio device. Then you can use VT switching in GDM
>> to start up a second graphical login. If this user needs text-to-speech,
>> they are out of luck since they can't access the audio device from t
On Mon, Dec 08, 2008 at 03:27:49PM -0600, Brian Cameron wrote:
> Once VT is enabled in the Xserver and GDM, users can start multiple
> graphical logins with GDM. So, if a user logs into the first graphical
Ah, right, I'd forgotten this.
> login, they get the audio device. Then you can use VT sw
Nicolas:
>> I agree that the solution of GDM messing with ACL's is not an ideal
>> solution. No matter how we resolve this problem, I think a scenario
>> could be imagined where the audio would not be managed as expected.
>> This is because if multiple users are competing for the same audio
>> d
Will, thanks for the info on the 'zfs get origin' command. I had previously
tried to promote the BEs but saw no effect. I can now see with 'origin' that
there is a sequential promotion scheme and some of the BEs had to be promoted
twice to free them from their life of servitude and disgrace.
unfortunately i get the same thing whether i use either 11342560969745958696 or
17096229131581286394:
zpool replace data 11342560969745958696 c0t2d0
returns:
cannot replace 11342560969745958696 with c0t2d0: cannot replace a replacing
device
> "rl" == Rob Logan <[EMAIL PROTECTED]> writes:
rl> the sata framework uses the sd driver so its:
yes but this is a really tiny and basically useless amount of output
compared to what smartctl gives on Linux with SATA disks, where SATA
disks also use the sd driver (the same driver Linux u
On Mon, Dec 08, 2008 at 02:22:01PM -0600, Brian Cameron wrote:
> >That said, I don't see why di_devperm_login() couldn't stomp all over
> >the ACL too. So you'll need to make sure that di_devperm_login()
> >doesn't stomp over the ACL, which will probably mean running an ARC case
> >and updating th
Nicolas:
> On Sun, Dec 07, 2008 at 03:20:01PM -0600, Brian Cameron wrote:
>> Thanks for the information. Unfortunately, using chmod/chown does not
>> seem a workable solution to me, unless I am missing something. Normally
>> logindevperm(4) is used for managing the ownership and permissions of
the sata framework uses the sd driver so its:
4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: ATA WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type: disk
I did not use the Marvell nic.
I use an Intel gigabit pci nic (e1000g0).
On Sun, Dec 7, 2008 at 2:03 PM, SV <[EMAIL PROTECTED]> wrote:
> js.lists , or anyone else who is using a XFX MDA72P7509 Motherboard ---
>
> that onboard NIC is a Marvell? - Do you choose not to use it in favor of the
> I
Thanks for the recommendations.
I should have mentioned that I already tried that smartmontools. I've
read that there are problems with smartmontools and Solaris.
Sure enough, I get this error:
# /usr/local/sbin/smartctl -a -d ata /dev/rdsk/c2t0d0
smartctl version 5.38 [i386-pc-solaris2.11] Cop
> "jh" == Johan Hartzenberg <[EMAIL PROTECTED]> writes:
jh> raid5 suffers from the "write-hole" problem.
this is only when you use it without a battery.
> "cm" == Courtney Malone <[EMAIL PROTECTED]> writes:
cm> # zpool detach data 17096229131581286394
cm> cannot detach 17096229131581286394: no valid replicas
I think detach is only for mirrors. That slot in the raidz stripe has
to be filled with some kind of marker, even if the drive
On Mon, Dec 8, 2008 at 13:03, Seymour Krebs <[EMAIL PROTECTED]> wrote:
> ~# zfs destroy -r rpool/ROOT/b99
> cannot destroy 'rpool/ROOT/b99': filesystem has dependent clones
Take a look at the output of "zfs get origin" for the other
filesystems in the pool. One of them is a clone of rpool/ROOT/b99
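To find and break the dependency, something like the following usually works (the clone name below is illustrative; use whatever "zfs get origin" actually reports):

```shell
# Which filesystem was cloned from a snapshot of b99?
zfs get -r origin rpool | grep b99

# Promote the clone so it takes ownership of the shared snapshots;
# then the old BE no longer has dependents and can be destroyed.
zfs promote rpool/ROOT/b100
zfs destroy -r rpool/ROOT/b99
```

Note that promotion can be sequential: if the clone was itself cloned, you may need to repeat the promote step on each dataset in the chain.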
Well it shows that you're not suffering from a known bug. The symptoms
you were describing were the same as those seen when a device
spontaneously shrinks within a raid-z vdev. But it looks like the sizes
are the same ("config asize" = "asize"), so I'm at a loss.
- Eric
On Sun, Dec 07, 2008 at
Please if anyone can help with this mess, I'd appreciate it.
~# beadm list
BE    Active Mountpoint Space  Policy Created
--    ------ ---------- -----  ------ -------
b100  -      -          6.88G  static 2008-10-30 13:59
b101a -      -          960.
On Mon, 8 Dec 2008, [EMAIL PROTECTED] wrote:
>
> Solaris does the same thing.
>
> (The X server will run the "foreground" processes with a higher priority)
Yes, it does, but I suspect not to the extreme degree as seen under
Windows. The performance difference between the "foreground"
and backgr
>Microsoft Windows is surely faster than any Unix because it executes
>the "foreground" task (the program with the highlighted title bar)
>with far more priority than any other task. Microsoft Windows XP Home
>Edition is fastest since it maximally cranks up the priority on the
>foreground tas
On Mon, 8 Dec 2008, Joerg Schilling wrote:
> An OS that feels slower may actuall be much faster just because
> people have subjective impressions and because one OS may have been
> optiomized to result in best subjective impressions only.
Microsoft Windows is surely faster than any Unix because
unfortunately i've tried zpool attach -f and exporting and reimporting the pool
both with and without the disk present.
On Sun, Dec 07, 2008 at 03:20:01PM -0600, Brian Cameron wrote:
> Thanks for the information. Unfortunately, using chmod/chown does not
> seem a workable solution to me, unless I am missing something. Normally
> logindevperm(4) is used for managing the ownership and permissions of
> device files (
>> But it is a problem when you have more memory than you can map (1GB
>> will probably still work but 2GB is "too big").
>
>This here is my problem. I've got 4GB of RAM in the box. It's painful. ;)
Have you changed kernelbase? Lowering it will help performance.
Casper
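For reference, on x86 kernelbase can be changed via eeprom(1M); a sketch only, and the value below is just an example (lowering kernelbase gives the 32-bit kernel more virtual address space at the cost of user address space):

```shell
# Takes effect after a reboot. 0x50000000 is an illustrative value,
# not a recommendation -- test on a non-production box first.
eeprom kernelbase=0x50000000
```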
On Mon, Dec 08, 2008 at 10:39:45AM +0100, [EMAIL PROTECTED] wrote:
>
> >Ok, thanx for your input, guys. So Bvians comment still is valid. I
> >tell the Linux guys that "Open Solaris on 32 bit will fragment the
> >memory to the point that you have to reboot once in a while. It
> >shouldnt corrupt y
By combining two great tools arcstat and dimstat you can get ZFS statistics in:
* table view
* chart view
* any date/time interval
* host to host compare
For example, online table and chart view
Read more here:
http://blogs.sun.com/pomah/entry/monitoring_zfs_statistic
On Mon, Dec 8, 2008 at 4:39 AM, <[EMAIL PROTECTED]> wrote:
> I'm not sure that that is completely true; I've run a small 32-bit file server
> and it ran for half a year or more (except when I wanted to upgrade)
>
> But that system had only 512 MB memory and I made the kernel's VA bigger
> than th
[EMAIL PROTECTED] wrote:
> For ufs "ufsdump | ufsrestore" I have found that I prefer the buffer on the
> receive side, but it should be much bigger. ufsrestore starts with
> creating all directories and that is SLOW.
This is why copying filesystems via star is much faster:
- There is no
[EMAIL PROTECTED] wrote:
>
>> In my experimentation (using my own buffer program), it's the receive
>> side buffering you need. The size of the buffer needs to be large enough
>> to hold 5 seconds worth of data. How much data/second you get will
>> depend on which part of your system is the l
>In my experimentation (using my own buffer program), it's the receive
>side buffering you need. The size of the buffer needs to be large enough
>to hold 5 seconds worth of data. How much data/second you get will
>depend on which part of your system is the limiting factor. In my case,
>with 7
>Ok, thanx for your input, guys. So Bvians comment still is valid. I
>tell the Linux guys that "Open Solaris on 32 bit will fragment the
>memory to the point that you have to reboot once in a while. It
>shouldn't corrupt your data when it runs out of RAM."
I'm not sure that that is completely tru
Thomas Maier-Komor wrote:
> First start the receive side, then the sender side:
>
> receiver> mbuffer -s 128k -m 200M -I sender:8000 | zfs receive filesystem
>
> sender> zfs send pool/filesystem | mbuffer -s 128k -m 200M -O receiver:8000
>
> Of course, you should adjust the hostnames accordingly, a
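As a back-of-the-envelope check on the -m 200M figure quoted above: sizing the buffer to hold about 5 seconds' worth of stream, at an assumed ~40 MB/s sustained rate (the rate is an assumption here; measure your own link and disks):

```shell
# buffer_mb = sustained rate (MB/s) x seconds of slack to absorb
rate_mb_per_s=40
seconds=5
buffer_mb=$((rate_mb_per_s * seconds))
echo "${buffer_mb}M"    # -> 200M, matching the -m 200M used above
```

If the sender stalls for longer than that window, or the receiver drains slower than the assumed rate, the buffer fills and you are back to being bound by the slower side.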
"James C. McPherson" <[EMAIL PROTECTED]> wrote:
> On Sat, 06 Dec 2008 22:28:36 -0500
> Joseph Zhou <[EMAIL PROTECTED]> wrote:
>
> > Ian, Tim, again, thank you very much in answering my question.
> >
> > I am a bit disappointed that the whole discussion group does not have
> > one person to stand
Ok, thanx for your input, guys. So Bvians comment still is valid. I tell the
Linux guys that "OpenSolaris on 32 bit will fragment the memory to the point
that you have to reboot once in a while. It shouldn't corrupt your data when it
runs out of RAM."
Vodevick.