Rob Healey wrote:
Does anyone know if problems related to the panics dismissed as
"duplicate of 6746456" ever resulted in Solaris 10 patches? It sounds
like they were actually solved in OpenSolaris, but S10 still panics
predictably when Linux NFS clients try to change a nobody
UID/GID on a Z
On Wed, Jun 24, 2009 at 6:32 PM, Simon Breden wrote:
> FIRST QUESTION:
> Although, it seems possible to add a drive to form a mirror for the ZFS boot
> pool 'rpool', the main problem I see is that in my case, I would be
> attempting to form a mirror using a smaller drive (30GB) than the initial
Does anyone know if problems related to the panics dismissed as "duplicate of
6746456" ever resulted in Solaris 10 patches? It sounds like they were actually
solved in OpenSolaris, but S10 still panics predictably when Linux NFS
clients try to change a nobody UID/GID on a ZFS exported files
> On Wed, 24 Jun 2009, Richard Elling wrote:
> >>
> >> "The new code keeps track of the amount of data accepted in a TXG and the
> >> time it takes to sync. It dynamically adjusts that amount so that each TXG
> >> sync takes about 5 seconds (txg_time variable). It also clamps the limit to
Hi Mykola,
Yes, if you are speaking of the automatic TimeSlider snapshots,
the snapshots are rotated. I think the threshold is 80% full
disk space.
Cheers,
Cindy
Mykola Maslov wrote:
How to turn off the timeslider snapshots on certain file systems?
http://wikis.sun.com/display/OpenSolari
On Wed, 24 Jun 2009, Richard Elling wrote:
"The new code keeps track of the amount of data accepted in a TXG and the
time it takes to sync. It dynamically adjusts that amount so that each TXG
sync takes about 5 seconds (txg_time variable). It also clamps the limit to
no more than 1/8th of phy
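A quick way to inspect or nudge that knob on a live system, assuming the txg_time integer variable quoted above is present in your kernel build (this is only a sketch using mdb, which ships with Solaris):
echo "txg_time/D" | mdb -k        # print the current value, in seconds
echo "txg_time/W 0t10" | mdb -kw  # hypothetically stretch it to 10s until the next reboot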
On Wed, 24 Jun 2009, Eric D. Mudama wrote:
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the "available 5 months ago" category, the Intel X25-E will write
sequentially at ~170MB/s according to the data
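For reference, the usual way to put such an SSD in front of synchronous writes is a separate log device; the pool and device names below are hypothetical:
zpool add tank log c2t0d0     # dedicated ZIL (slog) device for synchronous writes
zpool add tank cache c2t1d0   # or an L2ARC cache device to absorb random reads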
Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Marcelo Leal wrote:
I think that is the purpose of the current implementation:
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle But it seems
it is not that easy... As I understood what Roch said, it seems
the cause is not always a "ha
On Jun 24, 2009, at 16:54, Philippe Schwarz wrote:
Out of curiosity, any reason why you went with iSCSI and not NFS? There
seems
to be some debate on which is better under which circumstances.
iSCSI instead of NFS?
Because of the overwhelming difference in transfer rate between
them. In fac
On Wed, Jun 24 at 15:38, Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Orvar Korvar wrote:
I thought of exchanging my PCI card with a PCIe card variant instead
to reach higher speeds. PCI-X is legacy. The problem with PCIe cards
is that soon SSD drives will be common. A ZFS raid with SSD would
> - - the VMs will mostly be low-I/O systems:
> - -- WS2003 with Trend Officescan, WSUS (for 300 XP) and RDP
> - -- Solaris10 with SRSS 4.2 (Sunray server)
>
> (File and DB servers won't move in a nearby future to VM+SAN)
>
> I thought -but could be wrong- that those systems could afford a high
> la
On Thu, 25 Jun 2009, Ian Collins wrote:
I wonder whether a filesystem property "streamed" might be appropriate? This
could act as a hint to ZFS that the data is sequential and should be streamed
direct to disk.
ZFS does not seem to offer an ability to stream direct to disk other
than perhaps
On Wed, 24 Jun 2009, Marcelo Leal wrote:
I think that is the purpose of the current implementation:
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle But it seems
it is not that easy... As I understood what Roch said, it seems
the cause is not always a "hardy" writer.
I see thi
David Magda wrote:
> On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
>
>> In my tests ESX4 seems to work fine with this, but I haven't stressed
>> it yet ;-)
>>
>> Therefore, I don't know if the 1Gb full duplex per port will be enough, I
>> do
I think that is the purpose of the current implementation:
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle
But it seems it is not that easy... As I understood what Roch said, it seems
the cause is not always a "hardy" writer.
Leal
[ http://www.eall.com.br/blog ]
On Wed, 24 Jun 2009, Orvar Korvar wrote:
I thought of exchanging my PCI card with a PCIe card variant instead
to reach higher speeds. PCI-X is legacy. The problem with PCIe cards
is that soon SSD drives will be common. A ZFS raid with SSD would
need maybe PCIe x 16 or so, to reach max band wid
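Rough per-direction numbers for comparison, assuming PCIe 1.x signalling: one lane carries about 250 MB/s, so x4 is roughly 1 GB/s, x8 roughly 2 GB/s and x16 roughly 4 GB/s, while conventional 32-bit/33 MHz PCI tops out around 133 MB/s for the whole shared bus.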
On Wed, 24 Jun 2009, Ross wrote:
Wouldn't it make sense for the timing technique to be used if the
data is coming in at a rate slower than the underlying disk storage?
I am not sure how zfs would know the rate of the underlying disk
storage without characterizing it for a while with actual I/
milosz wrote:
>> Within the thread there are instructions for using iometer to load test your
>> storage. You should test out your solution before going live, and compare
>> what you get with what you need. Just because striping 3 mirrors *will* g
Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Marcelo Leal wrote:
Hello Bob,
I think that is related to my post about "zio_taskq_threads and TXG
sync":
( http://www.opensolaris.org/jive/thread.jspa?threadID=105703&tstart=0 )
Roch did say that this is on top of the performance problems, and I
Wouldn't it make sense for the timing technique to be used if the data is
coming in at a rate slower than the underlying disk storage?
But then if the data starts to come at a faster rate, ZFS needs to start
streaming to disk as quickly as it can, and instead of re-ordering writes in
blocks, it
Hey sbreden! :o)
No, I haven't tried to tinker with my drives. They have been functioning all the
time. I suspect (cannot remember) that each SATA slot in the card has a number
attached to it? Can anyone confirm this? If I am right, OpenSolaris will say
something about "disc 6 is broken" and on
Thomas Maier-Komor wrote:
Ben wrote:
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself in a mess
and h
> "jr" == Jacob Ritorto writes:
jr> I think this is the board that shipped in the original
jr> T2000 machines before they began putting the sas/sata onboard:
jr> LSISAS3080X-R
jr> Can anyone verify this?
can't verify but FWIW i fucked it up:
I thought the L
>Dennis is correct in that there are significant areas where 32-bit
>systems will remain the norm for some time to come.
think of the hundreds of thousands of VMware ESX/Workstation/Player/Server
installations on non-VT-capable CPUs - even if the CPU has 64-bit capability, a
VM cannot run in 6
I think this is the board that shipped in the original T2000 machines
before they began putting the sas/sata onboard: LSISAS3080X-R
Can anyone verify this?
Justin Stringfellow wrote:
Richard Elling wrote:
Miles Nordin wrote:
"ave" == Andre van Eyssen writes:
"et" == Erik Trimble writes
On Wed, 24 Jun 2009, Marcelo Leal wrote:
Hello Bob,
I think that is related to my post about "zio_taskq_threads and TXG sync":
( http://www.opensolaris.org/jive/thread.jspa?threadID=105703&tstart=0 )
Roch did say that this is on top of the performance problems, and in
the same email I did ta
Chookiex wrote:
Thank you for your reply.
I had read the blog. The most interesting thing is WHY there is no
performance improvement when any compression is set.
There are many potential reasons, so I'd first try to identify what your
current bandwidth limiter is. If you're running out of CPU on
Ben wrote:
Hi all,
I have a ZFS mirror of two 500GB disks, I'd like to up these to 1TB disks, how
can I do this? I must break the mirror as I don't have enough controller ports on my
system board. My current mirror looks like this:
r...@beleg-ia:/share/media# zpool status share
pool: share
sta
Ok, this is getting weird. I just ran a zpool clear, and now it says:
# zpool clear zfspool
# zpool status
pool: zfspool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool u
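For what it's worth, the truncated action line above normally leads to the zpool upgrade steps; a minimal sketch, using the pool name from that output:
zpool upgrade            # list pools still on an older on-disk version
zpool upgrade -v         # show the versions this build supports
zpool upgrade zfspool    # upgrade the pool (one-way: older builds can no longer import it)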
Thanks Mark, it looks like that was good advice. It also appears that as
suggested, it's not the drive that's faulty... anybody have any thoughts as to
how I find what's actually the problem?
# zpool status
pool: zfspool
state: DEGRADED
status: One or more devices has experienced an unrecove
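A few places to dig further, assuming the usual Solaris FMA and iostat tooling (nothing here is specific to this poster's setup):
fmdump        # list fault management events logged so far
fmdump -eV    # dump the underlying error reports in detail
iostat -En    # per-device error counters, often points at a flaky disk, cable or controller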
Hello Bob,
I think that is related to my post about "zio_taskq_threads and TXG sync":
( http://www.opensolaris.org/jive/thread.jspa?threadID=105703&tstart=0 )
Roch did say that this is on top of the performance problems, and in the same
email I talked about the change from 5s to 30s, what I
On Wed, 24 Jun 2009, Ethan Erchinger wrote:
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
Yes, this does sound very similar. It looks to me like data from read
files is clogging the ARC so that there is no more room for more
writes when ZFS periodically goes to commit unwri
Bottom line with virtual machines is that your I/O will be random by
definition since it all goes into the same pipe. If you want to be
able to scale, go with RAID 1 vdevs. And don't skimp on the memory.
Our current experience hasn't shown a need for an SSD for the ZIL but
it might be useful
> Within the thread there are instructions for using iometer to load test your
> storage. You should test out your solution before going live, and compare
> what you get with what you need. Just because striping 3 mirrors *will* give
> you more performance than raidz2 doesn't always mean that is
> > http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
>
> Yes, this does sound very similar. It looks to me like data from read
> files is clogging the ARC so that there is no more room for more
> writes when ZFS periodically goes to commit unwritten data.
I'm wondering if chang
On Tue, 23 Jun 2009, milosz wrote:
is this a direct write to a zfs filesystem or is it some kind of zvol export?
This is direct write to a zfs filesystem implemented as six mirrors of
15K RPM 300GB drives on a Sun StorageTek 2500. This setup tests very
well under iozone and performs remarka
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0&start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what yo
> The first 2 disks are a hardware mirror of 146GB with Sol10 & a UFS filesystem on it.
> The other 6 will be used as a raidz2 ZFS volume of 535G, with
> compression and shareiscsi=on.
> I'm going to CHAP protect it soon...
you're not going to get the random read & write performance you need
for a vm backend
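For reference, a rough sketch of the configuration being discussed; the pool name and device names are hypothetical:
zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zfs set compression=on tank
zfs create -V 535g tank/iscsivol     # the zvol that gets exported over iSCSI
zfs set shareiscsi=on tank/iscsivol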
It might be easier to look for the pool status thusly
zpool get health poolname
-- richard
Tomasz Kłoczko wrote:
Hi,
At the company where I'm working we use the "zpool status -x" command output
in monitoring scripts to check the health of all ZFS pools. Everything is OK
except a few systems where "zp
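A minimal sketch of such a check built on "zpool get health" instead, assuming a Bourne shell and that parsing the third output column is acceptable:
#!/bin/sh
for pool in `zpool list -H -o name`; do
    health=`zpool get health "$pool" | awk 'NR > 1 { print $3 }'`
    if [ "$health" != "ONLINE" ]; then
        echo "WARNING: pool $pool is $health"
    fi
done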
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
> In my tests ESX4 seems to work fine with this, but I haven't stressed
> it yet ;-)
>
> Therefore, I don't know if the 1Gb full duplex per port will be enough,
> nor do I know whether I have to put some sort of redundant access from ESX to
> the SAN, etc.
On 24.06.09 17:10, Thomas Maier-Komor wrote:
Ben wrote:
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself in
Mark and all,
thank you for your reply and your explanations.
I don't want to open yet another bug before having a rough idea of how
the code should be improved. I'll see if I find some time to prepare a
suggestion (this might never happen).
At any rate, simplicity is always a good
On 24 Jun 2009, at 22:37 , Ben wrote:
Many thanks Thomas,
I have a test machine so I shall try it on that before I try it on
my main system.
This is one of the best ways I've found to do things.
Sorry to be off-topic for a minute, but VirtualBox has helped me prove
out a lot of things -
Nils Goroll wrote:
Hi,
I just noticed that Mark Shellenbaum has replied to the same question in
a thread "ACL not being inherited correctly" on zfs-discuss.
Sorry for the noise.
Out of curiosity, I would still be interested in answers to this question:
Is there a reason why inheritable AC
Many thanks Thomas,
I have a test machine so I shall try it on that before I try it on my main
system.
Thanks very much once again,
Ben
Ben wrote:
> Thomas,
>
> Could you post an example of what you mean (ie commands in the order to use
> them)? I've not played with ZFS that much and I don't want to muck my system
> up (I have data backed up, but am more concerned about getting myself in a
> mess and having to reinstall, th
cindy.swearin...@sun.com writes:
> Hi Harry,
>
> Are you attempting this change when logged in as yourself or
> as root?
my user
> The top section of this procedure describes how to add yourself
> to zfssnap role. Otherwise, if you are doing this step as a
> non-root user, it probably won't work
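A sketch of the role assignment being described, assuming standard Solaris RBAC commands and using "harry" as a placeholder login:
usermod -R zfssnap harry   # run as root; assigns the zfssnap role (replacing any existing role list)
su zfssnap                 # the user then assumes the role before running the snapshot steps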
Hi,
I'm getting involved in a pre-production test and want to be sure of the
means I'll have to use.
Take 2 SunFire x4150s & 1 Cisco 3750 Gb switch,
1 private VLAN on the Gb ports of the switch.
1 x4150 is going to be the ESX4 aka vSphere server ( 1 Hardwar
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself in a mess
and having to reinstall, thus losing my configuratio
dick hoogendijk wrote:
> On Wed, 24 Jun 2009 03:14:52 PDT
> Ben wrote:
>
>> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
>> resilver, then detach c5d0s0 and add another 1TB drive and attach
>> that to the zpool, will that up the storage of the pool?
>
> That will do the tri
Hi,
I have OpenSolaris 2009.06 currently installed on a 160 GB IDE drive.
I want to replace this with a 2-way mirror 30 GB SATA SSD boot setup.
I found these 2 threads which seem to answer some questions I had, but I still
have some questions.
http://opensolaris.org/jive/thread.jspa?messageID=38
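For context, attaching a mirror to an existing rpool normally looks like the sketch below (device names hypothetical); note that zpool attach refuses a device smaller than the one already in the vdev, which is the crux of the 30GB-vs-160GB question:
zpool attach rpool c0d0s0 c1t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0   # make the new half bootable (x86)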
How to turn off the timeslider snapshots on certain file systems?
http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service
Thank you, very handy stuff!
BTW - will ZFS automatically delete snapshots when I go low on disk space?
--
With respect,
Nik
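A minimal sketch of the per-filesystem switch, assuming the auto-snapshot service honours the com.sun:auto-snapshot user property (dataset name is hypothetical):
zfs set com.sun:auto-snapshot=false tank/scratch            # skip this file system entirely
zfs set com.sun:auto-snapshot:frequent=false tank/scratch   # or opt out of just one schedule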
Hi,
At the company where I'm working we use the "zpool status -x" command output
in monitoring scripts to check the health of all ZFS pools. Everything is OK
except a few systems where "zpool status -x" is exactly the same as "zpool
status". I'm not sure, but it looks like this behavior is not OS version
specif
Hi,
I just noticed that Mark Shellenbaum has replied to the same question in a
thread "ACL not being inherited correctly" on zfs-discuss.
Sorry for the noise.
Out of curiosity, I would still be interested in answers to this question:
Is there a reason why inheritable ACEs are always split
Cindy Swearingen writes:
> I wish we had a zpool destroy option like this:
>
> # zpool destroy -really_dead tank2
I think it would be clearer to call it
zpool export --clear-name tank # or 3280066346390919920
or alternatively,
zpool destroy --exported 3280066346390919920
I guess the rea
On Wed, 24 Jun 2009 03:14:52 PDT
Ben wrote:
> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
> resilver, then detach c5d0s0 and add another 1TB drive and attach
> that to the zpool, will that up the storage of the pool?
That will do the trick perfectly. I just did the same last
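A minimal sketch of that sequence, using the pool and device names from Ben's status output plus hypothetical names (c6d0, c7d0) for the new drives:
zpool detach share c5d1s0        # drop one half of the 500GB mirror
zpool attach share c5d0s0 c6d0   # attach the first 1TB drive and wait for the resilver
zpool status share               # confirm the resilver is complete before continuing
zpool detach share c5d0s0        # drop the remaining 500GB drive
zpool attach share c6d0 c7d0     # attach the second 1TB drive and resilver again
Depending on the build, the extra capacity may only show up after the pool is exported and re-imported.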
Thank you for your reply.
I had read the blog. The most interesting thing is WHY there is no performance
improvement when any compression is set.
The compressed read I/O is smaller than the uncompressed data, and decompression is
faster than compression,
so if lzjb write is better than non-compressed, the lzjb r
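A rough way to see where the limiter is, with made-up dataset names and a representative data file of your own:
zfs create -o compression=off tank/plain
zfs create -o compression=lzjb tank/lzjb
cp /path/to/sample.dat /tank/plain/ && cp /path/to/sample.dat /tank/lzjb/
# clear cached data (e.g. export/import the pool), then time the reads
ptime dd if=/tank/plain/sample.dat of=/dev/null bs=1024k
ptime dd if=/tank/lzjb/sample.dat of=/dev/null bs=1024k
# run 'mpstat 1' alongside to see whether decompression is CPU-bound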
Hi all,
I have a ZFS mirror of two 500GB disks, I'd like to up these to 1TB disks, how
can I do this? I must break the mirror as I don't have enough controller ports on my
system board. My current mirror looks like this:
r...@beleg-ia:/share/media# zpool status share
pool: share
state: ONLINE
sc
Hi,
In nfs-discuss, Andrew Watkins has brought up the question of why an inheritable
ACE is split into two ACEs when a descendant directory is created.
Ref:
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_acl.c#1506
I must admit that I had observed this beh
Richard Elling wrote:
Miles Nordin wrote:
"ave" == Andre van Eyssen writes:
"et" == Erik Trimble writes:
"ea" == Erik Ableson writes:
"edm" == "Eric D. Mudama" writes:
ave> The LSI SAS controllers with SATA ports work nicely with
ave> SPARC.
I think what you mean is ``s