On 13.02.2007 at 22:46, Ian Collins wrote:
[EMAIL PROTECTED] wrote:
Hello,
I switched my home server from Debian to Solaris. The main reason for
this step was stability and ZFS.
But now, after the migration (why isn't it possible to mount a Linux
fs on Solaris???), I ran a few benchmarks
and
Hi all,
My disk resources are all getting full again, so it must be time to
buy more storage :-) I'm using ZFS at home, and it's worked great
on the concat of a 74 GB IDE and a 74 GB SATA drive, especially with
redundant metadata. That's puny compared to some of the external
storage bricks I se
I'm also putting together a server on Solaris 10. My hardware so far:
Mainboard: Tyan Tiger 230 S2507
Processors: 2 x Pentium III
RAM: 512 MB PC133 ECC
Hard drives:
c0d0: ST380021A (80 GB PATA)
c0d1: ST325062 (250 GB PATA)
c1d1: ST325062 (250 GB PATA)
Not the fastest processor-wise... I have the t
The space management algorithms in many file systems don't always perform well
when they can't find a free block of the desired size. There's often a "cliff"
where on average, once the file system is too full, performance drops off
exponentially. UFS deals with this by reserving space explicitly
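One way to get a similar safety margin on ZFS (a sketch only; the pool name "tank" and the 20G/10% figure are arbitrary examples, not from the guide) is an empty dataset whose reservation keeps that much space permanently out of everyone else's reach:
zfs create tank/slack
zfs set reservation=20G tank/slack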
I did find zfs.h and libzfs.h (thanks Eric). However, when I try to compile the
latest version (4.87C) of lsof, it reports the following files as missing: dmu.h
zfs_acl.h zfs_debug.h zfs_rlock.h zil.h spa.h zfs_context.h zfs_dir.h
zfs_vfsops.h zio.h txg.h zfs_ctldir.h zfs_ioctl.h zfs_znode.h zio_impl.
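For what it's worth, those private headers are not shipped with the installed OS but do live in the OpenSolaris source tree; a sketch, assuming a checked-out usr/src workspace:
ls usr/src/uts/common/fs/zfs/sys/
(dmu.h, spa.h, txg.h, zfs_acl.h, zil.h, zio.h and friends are under that directory.)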
On 2/13/07, Anantha N. Srirama <[EMAIL PROTECTED]> wrote:
I contacted the author of 'lsof' regarding the missing ZFS support. The command
works but fails to display any files that are opened by the process in a ZFS
filesystem. He indicates that the required ZFS kernel structure definitions
(he
I contacted the author of 'lsof' regarding the missing ZFS support. The command
works but fails to display any files that are opened by the process in a ZFS
filesystem. He indicates that the required ZFS kernel structure definitions
(header files) are not shipped with the OS. He further indicate
Hello Matthew,
Wednesday, February 14, 2007, 1:50:28 AM, you wrote:
MA> Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>> A file system with a lot of small files.
>> zfs send fsA | ssh [EMAIL PROTECTED] zfs recv fsB
>>
>> On a sending site nothing else is running or touching the disks.
Howdy,
I have seen a number of folks run into issues due to ZFS file system
fragmentation, and was curious if anyone on team ZFS is working on
this issue? Would it be possible to share with the list any changes
that will be made to help address fragmentation problems?
Thanks,
- Ryan
--
UNIX A
Robert Milkowski wrote:
Hello zfs-discuss,
A file system with a lot of small files.
zfs send fsA | ssh [EMAIL PROTECTED] zfs recv fsB
On a sending site nothing else is running or touching the disks.
Yet the performance is still far from satisfactory.
When serving data the sam
[EMAIL PROTECTED] said:
> The only obvious thing would be if the exported ZFS filesystems were
> initially mounted at a point in time when zil_disable was non-null.
No changes have been made to zil_disable. It's 0 now, and we've never
changed the setting. Export/import doesn't appear to change
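For reference, one way to confirm the live value of zil_disable (a sketch; run as root on the machine in question) is to read it straight from the running kernel with mdb:
echo zil_disable/D | mdb -k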
Hello Robert,
Wednesday, February 14, 2007, 1:02:08 AM, you wrote:
RM> Hello zfs-discuss,
RM> A file system with a lot of small files.
RM> zfs send fsA | ssh [EMAIL PROTECTED] zfs recv fsB
RM> On a sending site nothing else is running or touching the disks.
RM> Yet still the performance
Hello zfs-discuss,
A file system with a lot of small files.
zfs send fsA | ssh [EMAIL PROTECTED] zfs recv fsB
On a sending site nothing else is running or touching the disks.
Yet the performance is still far from satisfactory.
When serving data the same pool/fs can read over 10
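For context, the kind of pipeline involved (host and snapshot names below are placeholders); after a first full send, later runs can ship only the delta with an incremental send:
zfs snapshot fsA@snap2
zfs send -i fsA@snap1 fsA@snap2 | ssh host zfs recv fsB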
> > And I eagerly await the day I'll get to read a blog discussing how this
> > works and what you had to do with respect to snapshot blocks. :-) (or
> > will you have to remove snapshots?)
>
> Yeah, the implementation is nontrivial.
I thought that might be the case from the tiny details I have
[EMAIL PROTECTED] wrote:
> Hello,
>
> I switched my home server from Debian to Solaris. The main reason for
> this step was stability and ZFS.
> But now, after the migration (why isn't it possible to mount a Linux
> fs on Solaris???), I ran a few benchmarks
> and now I thought about switching back
Hello Matthew,
Tuesday, February 13, 2007, 9:53:35 PM, you wrote:
MA> One of the main bugs causing this recommendation is 6495013. Fixing
MA> this is one of our top priorities.
I would be VERY interested when this is fixed.
--
Best regards,
Robert    mailto:[EMAIL P
Matty wrote:
Howdy,
We bumped into the issues described in bug #6456888 on one of our
production systems, and I was curious if any progress has been made
on this bug? Are there any workarounds available for this issue (the
workaround section in the bug is empty)?
No known workarounds, but we
Darren Dunham wrote:
Ralf Gans wrote:
No 'home user' needs shrink.
Every professional datacenter needs shrink.
Regardless of where you want or don't want to use shrink, we are
actively working on this, targeting delivery in s10u5.
And I eagerly await the day I'll get to read a blog discussing
> Ralf Gans wrote:
> > No 'home user' needs shrink.
> > Every professional datacenter needs shrink.
>
> Regardless of where you want or don't want to use shrink, we are
> actively working on this, targeting delivery in s10u5.
And I eagerly await the day I'll get to read a blog discussing how thi
Ralf Gans wrote:
No 'home user' needs shrink.
Every professional datacenter needs shrink.
Regardless of where you want or don't want to use shrink, we are
actively working on this, targeting delivery in s10u5.
--matt
ps. To answer a later poster's question, replacing a disk with a smaller
Jarod Nash - Sun UK wrote:
In the ZFS Best Practices Guide here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
``Currently, pool performance can degrade when a pool is very full
and file systems are updated frequently, such as on a busy mail
serve
Uwe Dippel wrote:
It is my impression that there has so far been a lack of activity in
listing the needs of the potential user and addressing these in a
high-level syntax. ...
Especially items like RAID, Backup, Install and Repair need to be
specified.
ZFS was designed from day 1 to be easy to u
Uwe Dippel wrote:
[EMAIL PROTECTED]:/u01/home# zfs snapshot u01/[EMAIL PROTECTED]
[EMAIL PROTECTED]:/u01/home# zfs send u01/[EMAIL PROTECTED] | zfs receive
u02/home
One caveat here is that I could not find a way to back up the base of
the zpool "u01" into the base of zpool "u02". i.e.
zfs snap
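A possible workaround, as a sketch only (the snapshot and dataset names below are made up), is to receive the root dataset of u01 into a child of u02 and then adjust its mountpoint afterwards:
zfs snapshot u01@migrate
zfs send u01@migrate | zfs receive u02/u01copy
zfs set mountpoint=/u01copy u02/u01copy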
Hello,
I switched my home server from Debian to Solaris. The main reason for
this step was stability and ZFS.
But now, after the migration (why isn't it possible to mount a Linux
fs on Solaris???), I ran a few benchmarks
and now I thought about switching back to Debian. First of all the
hardwa
> Now, so my humble guess, I need to know the commands
> to be run in the new install to de-associate c0d0s7
> from the old install and re-associate this drive with
> the new install.
> All this probably happened through the '-f' in 'zpool
> create -f newhome c0d0s7'; which seemingly takes
> preced
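If the old install still boots, the usual hand-off (a sketch, reusing the pool name from the earlier commands) is to export the pool there and then import it from the new install:
zpool export newhome
zpool import newhome
('zpool import -f' should only be needed if the pool was never cleanly exported.)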
> This is expected because of the copy-on-write nature of ZFS. During
> truncate it is trying to allocate
> new disk blocks probably to write the new metadata and fails to find them.
I realize there is a fundamental issue with copy on write, but does
this mean ZFS does not maintain some kind of re
> No 'home user' needs shrink.
I strongly disagree with this.
The ability to shrink can be useful in many specific situations, but
in the more general sense, and this is in particular for home use, it
allows you to plan much less rigidly. You can add/remove drives left
and right at your leisure a
In continuation of another thread, I feel the need to address this topic
urgently:
Despite the great and enormous potential of ZFS and its advanced
architecture, in the end success is measured by use and user acceptance.
One of the promises is (was) a high-level interface. "No more 'format'".
[EMAIL PROTECTED] wrote on 02/13/2007 09:48:54 AM:
> In the ZFS Best Practices Guide here:
>
>
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> It says:
>
>``Currently, pool performance can degrade when a pool is very full
> and file systems are updated fre
Hello Mark,
Tuesday, February 13, 2007, 3:54:36 PM, you wrote:
MM> Robert,
MM> This doesn't look like cache flushing, rather it looks like we are
MM> trying to finish up some writes... but are having a hard time allocating
MM> space for them. Is this pool almost 100% full? There are lots of
MM
In the ZFS Best Practices Guide here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
``Currently, pool performance can degrade when a pool is very full
and file systems are updated frequently, such as on a busy mail
server. Under these circumstances
Uuh, I just found out that I now have the new data ... whatever, here it is:
[I did have to boot to the old system, since the new install lost its new
'home']
[i]zpool status
pool: home
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
home
Hello,
We had a situation at a customer site where one of the zpools complains about
missing devices. We do not know which devices are missing. Here are the details:
The customer had a zpool created on a hardware RAID (SAN). There is no redundancy in
the pool. The pool had 13 LUNs; the customer wanted to i
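As a first step in a case like this (a sketch, not specific to this customer's configuration), the pool's own view of its devices can be listed, and for an exported or un-importable pool the import scan shows which devices it expects but cannot find:
zpool status -v
zpool import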
> No 'home user' needs shrink.
> Every professional datacenter needs shrink.
I can think of a scenario. I have an n-disk RAID that I built with n newly
purchased disks of m GB each. One dies. I buy a replacement disk, also m GB,
but when I put it in, it's really (m - x) GB. I need to shrink
[i]
zpool create newhome c0d0s7
zfs snapshot [EMAIL PROTECTED]
zfs send [EMAIL PROTECTED] | zfs receive newhome/home
A 1:1 copy of the zfs "home" should then exist in "/newhome/home".
[/i]
'should' was the right word. It doesn't; and it has actually destroyed my poor
chances of mounting it. I hope some
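For comparison, a minimal sketch of the intended sequence (the snapshot name is just an example), ending with a check of where the received filesystem actually mounts:
zpool create newhome c0d0s7
zfs snapshot home@copy
zfs send home@copy | zfs receive newhome/home
zfs list -r -o name,mountpoint newhome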
Robert,
This doesn't look like cache flushing, rather it looks like we are
trying to finish up some writes... but are having a hard time allocating
space for them. Is this pool almost 100% full? There are lots of
instances of zio_write_allocate_gang_members(), which indicates a very
high degree
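A quick way to check how full the pool actually is (the pool name below is only a placeholder):
zpool list home
zfs get used,available home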
On x86 try with sd_send_scsi_SYNCHRONIZE_CACHE
Leon Koll writes:
> Hi Marion,
> your one-liner works only on SPARC and doesn't work on x86:
> # dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] =
> count()}'
> dtrace: invalid probe specifier fbt::ssd_send_scsi_SYNCHRON
Hi Marion,
your one-liner works only on SPARC and doesn't work on x86:
# dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] =
count()}'
dtrace: invalid probe specifier fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:[EMAIL
PROTECTED] = count()}: probe description
fbt::ssd_send_scsi_SYN
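A hedged rewrite of that one-liner for x86, where the driver is sd rather than ssd (the aggregation key shown is just an example choice, since the original was scrubbed):
dtrace -n 'fbt::sd_send_scsi_SYNCHRONIZE_CACHE:entry{ @[execname] = count(); }'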
>
> Given ZFS's copy-on-write transactional model, would it not be almost
trivial
> to implement fbarrier()? Basically just choose to wrap up the transaction at
> the point of fbarrier() and that's it.
>
> Am I missing something?
How do you guarantee that the disk driver and/or the
Hello eric,
Monday, February 12, 2007, 7:08:20 PM, you wrote:
ek> On Feb 12, 2007, at 7:52 AM, Robert Milkowski wrote:
>> Hello Roch,
>>
>> Monday, February 12, 2007, 3:54:30 PM, you wrote:
>>
>> RP> Duh!.
>>
>> RP> Long sync (which delays the next sync) are also possible on
>> RP> a write inte
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx sr
Hello there.
I do agree, in a small environment you normally do not need to shrink.
One reason to shrink is between keyboard and chair:
you just added the wrong disk, 1 TB instead of 100 GB.
What do you do? Ask the SAN team to provide space
for a second pool of 15 TB to copy it all over into a tempo
> > That is interesting. Could this account for disproportionate kernel
> > CPU usage for applications that perform I/O one byte at a time, as
> > compared to other filesystems? (Nevermind that the application
> > shouldn't do that to begin with.)
>
> I just quickly measured this (overwriting
The only obvious thing would be if the exported ZFS
filesystems were initially mounted at a point in time when
zil_disable was non-null.
The stack trace that is relevant is:
sd_send_scsi_SYNCHRONIZE_CACHE
sd`sdioctl+0x1770
zfs`vdev_d
Peter Schuller writes:
> > I agree about the usefulness of fbarrier() vs. fsync(), BTW. The cool
> > thing is that on ZFS, fbarrier() is a no-op. It's implicit after
> > every system call.
>
> That is interesting. Could this account for disproportionate kernel
> CPU usage for applications
Hi
I'm using fairly stock S10, but this is really just a zfs/zpool question.
# uname -a
SunOS peach 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Blade-100
Misremembering the option to file for handling special files (-s), I executed
the following:
# file -m /dev/dsk/c*s2
My shell would have
Erblichs writes:
> Jeff Bonwick,
>
> Do you agree that there is a major tradeoff of
> "builds up a wad of transactions in memory"?
>
> We lose the changes if we have an unstable
> environment.
>
> Thus, I don't quite understand why a 2-phase
> approach to