Hi
What is the impact of not aligning the DB block size (16K) with the ZFS
recordsize, especially when it comes to random reads on a single HW RAID LUN?
How would one go about measuring the impact (if any) on the workload?
Thank you
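A hedged sketch of one way to measure it (tank/db is a hypothetical dataset
name; recordsize only affects newly written files, so the data files must be
recreated between runs):

# Hypothetical dataset name (tank/db).
zfs set recordsize=16k tank/db     # match the 16K DB block size
zfs get recordsize tank/db         # confirm the setting
# Reload the data files, run the random-read workload, and compare
# against a run at the default recordsize=128k:
zpool iostat -v tank 5             # physical read ops/bandwidth per vdev
iostat -xnz 5                      # device-level r/s, kr/s, asvc_t

If the recordsize is larger than the DB block size, each random 16K read
should show up as more physical read bandwidth than the workload logically
requests.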
On 11/7/07, can you guess? <[EMAIL PROTECTED]> wrote:
> > Monday, November 5, 2007, 4:42:14 AM, you wrote:
> >
> > cyg> Having gotten a bit tired of the level of ZFS hype floating
I think a personal comment might help here ...
I spend a large part of my life doing system administration, and l
Hi Lukasz,
The output of the first script gives:
bash-3.00# ./test.sh
dtrace: script './test.sh' matched 4 probes
CPU     ID                    FUNCTION:NAME
  0  42681                        :tick-10s
  0  42681                        :tick-10s
  0  42681                        :tick-10s
  0  42681                        :tick-10s
It's got to do with VMware, obviously, as we've been able to make 20TB+
filesystems with ZFS.
selim
--
Blog: http://fakoli.blogspot.com/
On 11/7/07, Chris Murray <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I am experiencing an issue when trying to set up a large ZFS volume.
On Thu, 8 Nov 2007, Ian Collins wrote:
> Dan Pritts wrote:
>> i/o to/from the disk's cache will be marginally slower but you want to
>> disable the write cache for data integrity anyway.
>>
>>
> Do you? I thought ZFS enabled the drive cache when it used the entire drive.
>
> Ian.
AFAIK the write
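One way to check what the drive's write cache is actually set to (a sketch;
format(1M) expert mode is interactive, so this is the menu path rather than
a script):

# Solaris: inspect the write cache through format's expert mode.
format -e
# select the disk, then:
#   cache -> write_cache -> display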
We have this identical problem on all 10 or so of our thumpers. They're running
stock Solaris 10, whatever came with them. We think it's starting to cause
problems, as we will see a rash of those errors on one of our machines, and
then NFS will stop serving.
On Thu, Nov 08, 2007 at 12:42:01PM +1300, Ian Collins wrote:
> True, but I'd imagine things go wonky if two PATA drives (master and
> slave) are used.
Absolutely. Never use PATA slave config if you care at all about
performance.
> > i/o to/from the disk's cache will be marginally slower but you want to
> > disable the write cache for data integrity anyway.
Dan Pritts wrote:
> On Fri, Sep 14, 2007 at 01:48:40PM -0500, Christopher Gibbs wrote:
>
>> I suspect it's probably not a good idea but I was wondering if someone
>> could clarify the details.
>>
>> I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
>> cause problems if I created a raidz1 pool across all 5 disks?
On Fri, Sep 14, 2007 at 01:48:40PM -0500, Christopher Gibbs wrote:
> I suspect it's probably not a good idea but I was wondering if someone
> could clarify the details.
>
> I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
> cause problems if I created a raidz1 pool across all 5 disks?
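For reference, a sketch of the pool in question, with hypothetical device
names (the PATA disk would typically appear on a different controller):

# Hypothetical device names: four SATA disks plus one PATA disk.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c0d0

raidz1 itself doesn't mind mixed interfaces, but each stripe spans all five
disks, so writes are paced by the slowest member; the PATA(133) disk sets
the floor.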
I am having some odd ZFS performance issues and am looking for some
assistance on where to look to figure out what the underlying problem is.
System config:
- Solaris 10 Update 3 (11/06) on a Sun V440, patch 118833-36
- 9-bay JBOD with 180GB disks on two SCSI channels: 4 on one, 5 on the
other... I forget.
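A hedged first pass at narrowing it down, using stock observability tools
(nothing here is specific to this config):

zpool status -v       # rule out a degraded vdev or checksum errors
zpool iostat -v 5     # per-vdev ops and bandwidth; look for one slow disk
iostat -xnz 5         # device service times (asvc_t) and %b
vmstat 5              # memory pressure; the ARC competing with applications

An asymmetric JBOD split (4 disks on one channel, 5 on the other) could also
show up here as one SCSI channel saturating before the other.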
Not for NexentaStor as yet, to my knowledge. I'd like to caution that
the target of the initial product release is digital
archiving/tiering/etc., and is not necessarily primary NAS usage, though
it can be used as such for those so inclined. However, interested
parties should contact them as they fles
> Monday, November 5, 2007, 4:42:14 AM, you wrote:
>
> cyg> Having gotten a bit tired of the level of ZFS hype floating
> cyg> around these days (especially that which Jonathan has chosen to
> cyg> associate with his spin surrounding the fracas with NetApp)
...
> Bill - I have a very stron
Hi all,
I am experiencing an issue when trying to set up a large ZFS volume in
OpenSolaris build 74 and the same problem in Nexenta alpha 7. I have looked on
Google for the error and have found zero (yes, ZERO) results, so I'm quite
surprised! Please can someone help?
I am setting up a test environment.
Bug id 6538014 (
http://bugs.opensolaris.org/view_bug.do?bug_id=6538014 ) was also
related to mounting many filesystems, but I have no visibility into this
bug's progress... maybe someone from Sun can update us?
selim
--
Blog: http://fakoli.blogspot.com/
I don't currently have a way to test this, but did you try:
1. make a clone of the snapshot
2. in the clone, remove the directories
3. make a snapshot of the clone
4. destroy the clone
5. destroy the old snapshot
In my mind, this should work, given no other dependencies exist.
Then again, a m
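For concreteness, a hedged sketch of those steps with hypothetical names
(tank/data, @before). One caveat, and my own addition: a clone can't
normally be destroyed while its snapshot is still needed, so the usual
recipe promotes the clone and destroys the original instead:

zfs clone tank/data@before tank/trimmed    # 1. clone the snapshot
rm -rf /tank/trimmed/unwanted-dir          # 2. remove the directories
zfs snapshot tank/trimmed@after            # 3. snapshot the clone
zfs promote tank/trimmed                   # my addition: swap clone/origin
zfs destroy tank/data                      # 4. the old fs is now the clone
zfs destroy tank/trimmed@before            # 5. destroy the old snapshot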
No compression enabled. Zpool status and more info on the config are listed
in this other thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=44033&tstart=0
Wasn't getting a response here, so I looped in the code forum.
Hello again,
Some of you may have read my earlier post about wanting to recover partial ZFS
space. It seems as though that isn't possible given the current
implementation... so, I would like to suggest the following "enhancement":
A ZFS-aware rm command (i.e. zfs rm). The idea here is that we
On 7-Nov-07, at 9:32 AM, Robert Milkowski wrote:
> Hello can,
>
> Monday, November 5, 2007, 4:42:14 AM, you wrote:
>
> cyg> Having gotten a bit tired of the level of ZFS hype floating
> cyg> around these days (especially that which Jonathan has chosen to
> cyg> associate with his spin surrounding
I have a big testing pool attached to my system running Solaris 10 8/07 (U4)
SPARC.
There is only one ZFS filesystem on the pool (testdata).
I did a big copy job to the device to fill it up completely.
Somehow there is a "big" difference of 2% in the usage of the pool. Can
someone explain this?
root / #
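A percent or two is commonly just accounting: zpool(1M) reports raw pool
capacity while zfs(1M) reports usable space after pool metadata overhead.
A sketch of how to compare the two views (testpool is a hypothetical pool
name; the post only names the dataset):

zpool list testpool                            # raw size and allocation
zfs list -o name,used,avail,refer -r testpool  # usable space per dataset
zfs get used,available,referenced testpool/testdata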
Dick Davies wrote:
> Does anybody know if the upcoming CIFS integration in b77 will
> provide a mechanism for users to see snapshots (like .zfs/snapshot/
> does for NFS)?
>
I don't believe that the version in build 77 will traverse down .zfs.
It would be a good thing to add, though.
-Mark
Hi,
I think your problem is filesystem fragmentation.
When available space is less than 40%, ZFS might have problems finding
free blocks. Use this script to check it:
#!/usr/sbin/dtrace -s
fbt::space_map_alloc:entry
{
        self->s = arg1;
}
fbt::space_map_alloc:return
/arg1 != -1/
{
        self->s = 0;
}
fbt::space_map_alloc:return
/self->s && arg1 == -1/
{
        @failed = quantize(self->s);
        self->s = 0;
}
tick-10s
{
        printa(@failed);
}
If the quantize output fills up with failed allocations at your block size,
the pool is having trouble finding free blocks of that size.
Hello can,
Monday, November 5, 2007, 4:42:14 AM, you wrote:
cyg> Having gotten a bit tired of the level of ZFS hype floating
cyg> around these days (especially that which Jonathan has chosen to
cyg> associate with his spin surrounding the fracas with NetApp), I
cyg> chose to respond to that artic
Michael McKnight wrote:
> Hi everyone,
>
> I have what I think is a simple question, but the answer is eluding me...
>
> I have a ZFS filesystem in which I needed to move part of it to a new
> pool. I want to recover the space from the part I moved so that it
> returns to the original pool, wi
Was that with compression enabled?
Got "zpool status" output?
-r
Does anybody know if the upcoming CIFS integration in b77 will
provide a mechanism for users to see snapshots (like .zfs/snapshot/
does for NFS)?
--
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/