>On Fri, 25 Sep 2009, James Lever wrote:
>>
>> NFS Version 3 introduces the concept of "safe asynchronous writes."
>
>Being "safe" then requires a responsibilty level on the client which
>is often not present. For example, if the server crashes, and then
>the client crashes, how does the client
I am about to embark on building a home NAS box using OpenSolaris with
ZFS.
Currently I have a chassis that will hold 16 hard drives, although not in
caddies - downtime doesn't bother me if I need to switch a drive; I could
probably do it running anyway, just a bit of a pain. :)
I am afte
Nathan wrote:
I am about to embark on building a home NAS box using OpenSolaris with
ZFS.
Currently I have a chassis that will hold 16 hard drives, although not in
caddies - downtime doesn't bother me if I need to switch a drive; I could
probably do it running anyway, just a bit of a pai
On Fri, 2009-09-25 at 01:32 -0700, Erik Trimble wrote:
> Go back and look through the archives for this list. We just had this
> discussion last month. Let's not rehash it again, as it seems to get
> redone way too often.
You know, this seems like such a common question to the list, would we
(t
On Fri, Sep 25, 2009 at 10:18:15AM +0100, Tim Foster wrote:
> I don't have enough experience myself in terms of knowing what's the
> best hardware on the market, but from time to time, I do think about
> upgrading my system at home, and would really appreciate a
> zfs-community-recommended configu
Hi Guys,
Maybe someone has some time to take a look at my issue; I didn't find an answer
using the search.
Here we go:
I was running a backup of a directory located on a ZFS pool named TimeMachine.
Before I started the job, I checked the size of the directory called NFS, and
du -h or du -s was
It does seem to come up regularly... perhaps someone with access could
throw up a page under the ZFS community with the conclusions (and
periodic updates as appropriate)..
On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble wrote:
> Nathan wrote:
>>
>> I am about to embark on building a home NAS
This was previously posed to the sun-managers mailing list, but the only
reply I received recommended I post here as well.
We have a production Solaris 10u5 / ZFS X4500 file server which is
reporting NLM_DENIED_NOLOCKS immediately for any NFS locking request. The
lockd does not appear to be busy s
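For reference, a hedged sketch of the usual Solaris 10 knobs for lockd thread
starvation; whether thread exhaustion is actually what produces the
NLM_DENIED_NOLOCKS replies here is an assumption:

  # lockd's thread count is set in /etc/default/nfs (the default is fairly low)
  grep LOCKD_SERVERS /etc/default/nfs
  # after raising LOCKD_SERVERS, restart the lock manager
  svcadm restart svc:/network/nfs/nlockmgr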
The opensolaris.org site will be transitioning to a wiki-based site
soon, as described here:
http://www.opensolaris.org/os/about/faq/site-transition-faq/
I think it would be best to use the new site to collect this
information because it will be much easier for community members
to contribute.
> I am after suggestions of motherboard, CPU and RAM.
> Basically I want ECC RAM and at least two PCI-E x4
> channels, as I want to run 2 x AOC-USAS-L8i cards
> for 16 drives.
Asus M4N82 Deluxe. I have one running with 2 USAS-L8i cards just fine. I don't
have all the drives loaded in yet, but t
On Thu, Sep 24, 2009 at 11:29 PM, James Lever wrote:
>
> On 25/09/2009, at 11:49 AM, Bob Friesenhahn wrote:
>
> The commentary says that normally the COMMIT operations occur during the
> close(2) or fsync(2) system calls, or when encountering memory pressure. If
> the problem is slow copying of many s
On Fri, 25 Sep 2009, Ross Walker wrote:
As an aside, a slog device will not be too beneficial for large
sequential writes, because it will be throughput bound, not latency
bound. Slog devices really help when you have lots of small sync
writes. A RAIDZ2 with the ZIL spread across it will provide mu
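For context, adding a dedicated slog is a single zpool operation; a minimal
sketch, with the pool name tank and the device names as placeholders:

  # add a single dedicated log device (SSD) to an existing pool
  zpool add tank log c3t0d0
  # or add a mirrored slog to avoid a single point of failure
  zpool add tank log mirror c3t0d0 c4t0d0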
I tried to install the flar image using the method explained in this link
http://opensolaris.org/os/community/zfs/boot/flash/
I installed the 119534-15 patch on the box whose flar image was required. Then
created a flar image using
flarcreate -n zfs_flar /flar_dir/zfs_flar.flar
I then installe
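For reference, the ZFS flash install described on that page is driven from a
JumpStart profile; a minimal sketch, with the server name, paths, and disk
used here only as placeholders:

  # JumpStart profile for installing a ZFS root from a flash archive
  install_type     flash_install
  archive_location nfs jumpserver:/export/flars/zfs_flar.flar
  partitioning     explicit
  pool rpool auto auto auto c0t0d0s0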
On Fri, Sep 25, 2009 at 11:34 AM, Bob Friesenhahn
wrote:
> On Fri, 25 Sep 2009, Ross Walker wrote:
>>
>> As an aside, a slog device will not be too beneficial for large
>> sequential writes, because it will be throughput bound, not latency
>> bound. Slog devices really help when you have lots of smal
On 09/25/09 09:59, RB wrote:
I tried to install the flar image using the method explained in this link
http://opensolaris.org/os/community/zfs/boot/flash/
I installed 119534-15 patch on the box whose flar image was required. Then created a flar image using
flarcreate -n zfs_flar /flar_dir/
Assertion failures indicate bugs. You might try another version of the
OS.
In general, they are easy to search for in the bugs database. A quick
search reveals
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6822816
but that doesn't look like it will help you. I suggest filing a new
Hi,
I have a zfs send command failing for some reason...
# uname -a
SunOS 5.11 snv_123 i86pc i386 i86pc Solaris
# zfs send -R -I
archive-1/archive/x...@rsync-2009-06-01_07:45--2009-06-01_08:50
archive-1/archive/x...@rsync-2009-09-01_07:45--2009-09-01_07:59 >/dev/null
cannot hold 'archiv
Try nfs-disc...@opensolaris.org
-- richard
On Sep 25, 2009, at 7:28 AM, Chris Banal wrote:
This was previously posed to the sun-managers mailing list, but the
only reply I received recommended I post here as well.
We have a production Solaris 10u5 / ZFS X4500 file server which is
reporting
On Sep 25, 2009, at 11:54 AM, Robert Milkowski wrote:
Hi,
I have a zfs send command failing for some reason...
# uname -a
SunOS 5.11 snv_123 i86pc i386 i86pc Solaris
# zfs send -R -I archive-1/archive/
x...@rsync-2009-06-01_07:45--2009-06-01_08:50 archive-1/archive/
x...@rsync-2009-09
Can you select the LU boot environment from the SPARC OBP if the
filesystem is ZFS? With UFS, you simply invoke 'boot [slice]'.
thanks
donour
Hi Donour,
You would use the boot -L syntax to select the ZFS BE to boot from,
like this:
ok boot -L
Rebooting with command: boot -L
Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/d...@w2104cf7fa6c7,0:a
File and args: -L
1 zfs1009BE
2 zfs10092BE
Select environment to boot: [ 1 - 2 ]: 2
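After the selection, the OBP normally prints the matching boot -Z command to
actually boot that BE; a sketch using the second entry from the listing above,
and assuming the root pool is named rpool:

  ok boot -Z rpool/ROOT/zfs10092BE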
On Sep 25, 2009, at 9:14 AM, Ross Walker wrote:
On Fri, Sep 25, 2009 at 11:34 AM, Bob Friesenhahn
wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
As an aside, a slog device will not be too beneficial for large
sequential writes, because it will be throughput bound, not latency
bound. Slog device
Hi Peter,
Do you have any notes on what you did to restore a sendfile to an existing BE?
I'm interested in creating a 'golden image' and restoring this into a
new BE on a running system as part of a hardening project.
Thanks
Peter
2009/9/14 Peter Karlsson :
> Hi Greg,
>
> We did a hack on thos
Hi Lori,
Is the u8 flash support for the whole root pool or an individual BE
using live upgrade?
Thanks
Peter
2009/9/24 Lori Alt :
> On 09/24/09 15:54, Peter Pickford wrote:
>
> Hi Cindy,
>
> Wouldn't
>
> touch /reconfigure
> mv /etc/path_to_inst* /var/tmp/
>
> regenerate all device information
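For reference, the conventional reconfiguration-boot sequence that quote
alludes to is just the following; whether moving path_to_inst aside is also
needed depends on the situation:

  # force a device reconfiguration on the next boot
  touch /reconfigure
  init 6
  # or, equivalently, from the OBP: boot -r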
What is the "Best" way to convert the checksums of an existing ZFS file system
from one checksum to another? To me "Best" means safest and most complete.
My zpool is 39% used, so there is plenty of space available.
Thanks.
The whole pool. Although you can choose to exclude individual datasets
from the flar when creating it.
lori
On 09/25/09 12:03, Peter Pickford wrote:
Hi Lori,
Is the u8 flash support for the whole root pool or an individual BE
using live upgrade?
Thanks
Peter
2009/9/24 Lori Alt :
O
On 09/25/09 11:08 AM, Travis Tabbal wrote:
... haven't heard if it's a known
bug or if it will be fixed in the next version...
Out of courtesy to our host, Sun makes some quite competitive
X86 hardware. I have absolutely no idea how difficult it is
to buy Sun machines retail, but it seems they
2009/9/24 Robert Milkowski
> Mike Gerdts wrote:
>
>> On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda
>> wrote:
>>
>>
>>> Thanks for the info Mike.
>>>
>>> Just so I'm clear. You suggest 1)create a single zpool from my LUN 2)
>>> create a single ZFS filesystem 3) create 2 zone in the ZFS filesys
Hi,
Since I don't even have a mirror for my root pool "rpool," I'd like to
move as much of my system as possible over to my raidz2 pool, "tank."
Can someone tell me which parts need to stay in rpool in order for the
system to work normally?
Thanks.
--
Dave Abrahams
BoostPro Computing
http://ww
I didn't want my question to lead to an answer, but perhaps I should have put
more information. My idea is to copy the file system with one of the following:
cp -rp
zfs send | zfs receive
tar
cpio
But I don't know what would be the best.
Then I would do a "diff -r" on them before del
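One hedged sketch of the send/receive route (pool and dataset names are
placeholders): setting the property only affects newly written blocks, so the
receive is what actually rewrites everything under the new checksum.

  # set the new checksum at the pool level so the received copy inherits it
  zfs set checksum=sha256 tank
  zfs snapshot tank/data@convert
  zfs send tank/data@convert | zfs receive tank/data_new
  # verify, e.g. diff -r /tank/data /tank/data_new, then rename/destroy the old dataset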
Chris Kirby wrote:
On Sep 25, 2009, at 11:54 AM, Robert Milkowski wrote:
Hi,
I have a zfs send command failing for some reason...
# uname -a
SunOS 5.11 snv_123 i86pc i386 i86pc Solaris
# zfs send -R -I
archive-1/archive/x...@rsync-2009-06-01_07:45--2009-06-01_08:50
archive-1/archive/
On Sep 25, 2009, at 2:43 PM, Robert Milkowski wrote:
Chris Kirby wrote:
On Sep 25, 2009, at 11:54 AM, Robert Milkowski wrote:
That's useful information indeed. I've filed this CR:
6885860 zfs send shouldn't require support for snapshot holds
Sorry for the trouble, please look for this to b
Hi David,
All system-related components should remain in the root pool, such as
the components needed for booting and running the OS.
If you have datasets like /export/home or other non-system-related
datasets in the root pool, then feel free to move them out.
Moving OS components out of the ro
on Fri Sep 25 2009, Cindy Swearingen wrote:
> Hi David,
>
> All system-related components should remain in the root pool, such as
> the components needed for booting and running the OS.
Yes, of course. But which *are* those?
> If you have datasets like /export/home or other non-system-related
Since I got my ZFS pool working under Solaris (I talked on this list
last week about moving it from Linux and BSD to Solaris, and the pain that
it was), I'm seeing very good reads, but nada for writes.
Reads:
r...@shebop:/data/dvds# rsync -aP young_frankenstein.iso /tmp
sending incremental file lis
On 09/25/09 13:35, David Abrahams wrote:
Hi,
Since I don't even have a mirror for my root pool "rpool," I'd like to
move as much of my system as possible over to my raidz2 pool, "tank."
Can someone tell me which parts need to stay in rpool in order for the
system to work normally?
Thanks.
Oh, for the record, the drives are 1.5TB SATA, in a 4+1 raidz-1 config.
All the drives are on the same LSI 150-6 PCI controller card, and the M/B
is a generic something or other with a triple-core, and 2GB RAM.
Paul
3:34pm, Paul Archer wrote:
Since I got my zfs pool working under solaris (I
I have no idea why that last mail lost its line feeds. Trying again:
On 09/25/09 13:35, David Abrahams wrote:
Hi,
Since I don't even have a mirror for my root pool "rpool," I'd like to
move as much of my system as possible over to my raidz2 pool, "tank."
Can someone tell me which parts nee
* David Abrahams (d...@boostpro.com) wrote:
>
> on Fri Sep 25 2009, Cindy Swearingen wrote:
>
> > Hi David,
> >
> > All system-related components should remain in the root pool, such as
> > the components needed for booting and running the OS.
>
> Yes, of course. But which *are* those?
>
> >
On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
On 09/25/09 11:08 AM, Travis Tabbal wrote:
... haven't heard if it's a known
bug or if it will be fixed in the next version...
Out of courtesy to our host, Sun makes some quite competitive
X86 hardware. I have absolutely no idea how difficult
Hi,
Definitely large SGA, small ARC. In fact, it's best to disable the ARC
altogether for the Oracle filesystems.
Blocks in the db_cache (Oracle cache) can be used "as is", while cached data
from the ARC needs significant CPU processing before it's inserted back into
the db_cache.
Not to mention t
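A minimal sketch of "disabling the ARC" per dataset, assuming a ZFS version
recent enough to have the primarycache property, and with the dataset name as
a placeholder:

  # cache only metadata in the ARC for the Oracle data filesystems
  zfs set primarycache=metadata tank/oradata
  # or bypass ARC data caching entirely
  zfs set primarycache=none tank/oradata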
Hi David,
I believe /opt is an essential file system, as it contains software
that is maintained by the packaging system.
In fact, anywhere you install software via pkgadd should probably be in
the BE under /rpool/ROOT/bename.
AFAIK it should not even be split from root in the BE under ZFS boot
(only
On Sep 25, 2009, at 16:39, Glenn Lagasse wrote:
There's very little you can safely move in my experience. /export
certainly. Anything else, not really (though ymmv). I tried to
create
a separate zfs dataset for /usr/local. That worked some of the time,
but it also screwed up my system a t
On 26/09/2009, at 1:14 AM, Ross Walker wrote:
By any chance do you have copies=2 set?
No, only 1. So the double data going to the slog (as reported by
iostat) is still confusing me and clearly potentially causing
significant harm to my performance.
Also, try setting zfs_write_limit_ov
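Assuming the truncated reference above is to the zfs_write_limit_override
tunable, a hedged sketch of setting it (the 256 MB value is only an example):

  # cap the per-txg write limit at 256 MB for the running kernel
  echo zfs_write_limit_override/Z 0x10000000 | mdb -kw
  # or persistently, in /etc/system:
  # set zfs:zfs_write_limit_override=0x10000000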
On Fri, Sep 25, 2009 at 5:24 PM, James Lever wrote:
>
> On 26/09/2009, at 1:14 AM, Ross Walker wrote:
>
>> By any chance do you have copies=2 set?
>
> No, only 1. So the double data going to the slog (as reported by iostat) is
> still confusing me and clearly potentially causing significant harm
On Fri, Sep 25, 2009 at 10:56 PM, Toby Thain wrote:
>
> On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
>
>> On 09/25/09 11:08 AM, Travis Tabbal wrote:
>>>
>>> ... haven't heard if it's a known
>>> bug or if it will be fixed in the next version...
>>
>> Out of courtesy to our host, Sun makes some
On Fri, Sep 25, 2009 at 1:39 PM, Richard Elling
wrote:
> On Sep 25, 2009, at 9:14 AM, Ross Walker wrote:
>
>> On Fri, Sep 25, 2009 at 11:34 AM, Bob Friesenhahn
>> wrote:
>>>
>>> On Fri, 25 Sep 2009, Ross Walker wrote:
As an aside, a slog device will not be too beneficial for large
se
j...@jamver.id.au said:
> For a predominantly NFS server purpose, it really looks like a case of the
> slog having to outperform your main pool in continuous write speed as well as
> instant response time as the primary criterion. That might as well be a
> fast (or a group of fast) SSDs or 15kRPM d
Chris Kirby wrote:
On Sep 25, 2009, at 2:43 PM, Robert Milkowski wrote:
Chris Kirby wrote:
On Sep 25, 2009, at 11:54 AM, Robert Milkowski wrote:
That's useful information indeed. I've filed this CR:
6885860 zfs send shouldn't require support for snapshot holds
Sorry for the trouble, pleas
On Fri, Sep 25, 2009 at 5:47 PM, Marion Hakanson wrote:
> j...@jamver.id.au said:
>> For a predominantly NFS server purpose, it really looks like a case of the
>> slog having to outperform your main pool in continuous write speed as well as
>> instant response time as the primary criterion. That
* David Magda (dma...@ee.ryerson.ca) wrote:
> On Sep 25, 2009, at 16:39, Glenn Lagasse wrote:
>
> >There's very little you can safely move in my experience. /export
> >certainly. Anything else, not really (though ymmv). I tried to
> >create
> >a seperate zfs dataset for /usr/local. That worked
On Fri, 25 Sep 2009, Richard Elling wrote:
By default, the txg commit will occur when 1/8 of memory is used
for writes. For 30 GBytes, that would mean a main memory of only
240 Gbytes... feasible for modern servers.
Ahem. We were advised that 7/8s of memory is currently what is
allowed for wr
On Fri, 25 Sep 2009, Ross Walker wrote:
The problem is that most SSD manufacturers list sustained throughput with large
I/O sizes, say 4MB, and not 128K, so it is tricky buying a good SSD
that can handle the throughput.
Who said that the slog SSD is written to in 128K chunks? That seems
wrong to me. P
rswwal...@gmail.com said:
> Yes, but if it's on NFS you can just figure out the workload in MB/s and use
> that as a rough guideline.
I wonder if that's the case. We have an NFS server without NVRAM cache
(X4500), and it gets huge MB/sec throughput on large-file writes over NFS.
But it's painful
On 09/25/09 04:44 PM, Lori Alt wrote:
rpool
rpool/ROOT
rpool/ROOT/snv_124 (or whatever version you're running)
rpool/ROOT/snv_124/var (you might not have this)
rpool/ROOT/snv_121 (or whatever other BEs you still have)
rpool/dump
rpool/export
rpool/export/home
rpool/swap
Unless your machine is s
From a product standpoint, expanding the variety available in the
Storage 7000 (Amber Road) line is an area where I think we (Sun) would make bank.
Things like:
[ for the home/very small business market ]
Mini-Tower sized case, 4-6 3.5" HS SATA-only bays (to take the
X2200-style spud bracket drives
Ah yes.
Thanks Cindy!
donour
On Sep 25, 2009, at 10:37 AM, Cindy Swearingen wrote:
Hi Donour,
You would use the boot -L syntax to select the ZFS BE to boot from,
like this:
ok boot -L
Rebooting with command: boot -L
Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/
d...@w2104cf7fa6c7
on Fri Sep 25 2009, Glenn Lagasse wrote:
> The question you're asking can't easily be answered. Sun doesn't test
> configs like that. If you really want to do this, you'll pretty much
> have to 'try it and see what breaks'. And you get to keep both pieces
> if anything breaks.
Heh, that does
On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
> The list of datasets in a root pool should look something like this:
...
> rpool/swap
I've had success with putting swap into other pools. I believe others
have, as well.
- Bill
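A minimal sketch of doing that with a zvol (the pool name and size are
placeholders):

  # create a swap volume in the data pool and activate it
  zfs create -V 4G tank/swap
  swap -a /dev/zvol/dsk/tank/swap
  # to make it permanent, add a line like this to /etc/vfstab:
  # /dev/zvol/dsk/tank/swap  -  -  swap  -  no  -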
On Sep 25, 2009, at 6:19 PM, Bob Friesenhahn wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
The problem is that most SSD manufacturers list sustained throughput with large
I/O sizes, say 4MB, and not 128K, so it is tricky buying a good SSD
that can handle the throughput.
Who said that the slog SSD
I have a zpool named rtank. I accidentally attached a single drive to the pool.
I am an idiot, I know :D Now I want to replace this single drive with a raidz
group. Below is the pool setup and what I tried:
NAME      STATE     READ WRITE CKSUM
rtank     ONLINE       0     0
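Not an answer to the raidz question, but a hedged aside: the lone drive became
its own unprotected top-level vdev, and it can at least be mirrored in place
(device names below are placeholders):

  # attach a second disk to the accidentally added single drive
  zpool attach rtank c8t8d0 c9t9d0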
On 09/25/09 16:19, Bob Friesenhahn wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
The problem is that most SSD manufacturers list sustained throughput with large
I/O sizes, say 4MB, and not 128K, so it is tricky buying a good SSD
that can handle the throughput.
Who said that the slog SSD is written to
On Sep 25, 2009, at 19:39, Frank Middleton wrote:
/var/tmp is a strange beast. It can get quite large, and be a
serious bottleneck if mapped to a physical disk and used by any
program that synchronously creates and deletes large numbers of
files. I have had no problems mapping /var/tmp to /tmp.
On Fri, 25 Sep 2009, Ryan Hirsch wrote:
I have a zpool named rtank. I accidentally attached a single drive to
the pool. I am an idiot, I know :D Now I want to replace this single
drive with a raidz group. Below is the pool setup and what I tried:
I think that the best you will be able to do i
Hey Jim - There's something we're missing here.
There does not appear to be enough ZFS write
activity to cause the system to pause regularly.
Were you able to capture a kernel profile during the
pause period?
Thanks,
/jim
Jim Leonard wrote:
The only thing that jumps out at me is the ARC size