On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
1. rebooting the server could take several hours right now with so many file systems
I believe this problem is being addressed right now
Well, I've done a quick test on b50 - 10K filesystems took around 5 minutes
to boot. Not bad, conside
On Oct 30, 2006, at 10:45 PM, David Dyer-Bennet wrote:
Also, stacking it on top of an existing RAID setup is kinda missing
the entire point!
Everyone keeps saying this, but I don't think it is missing the point
at all. Checksumming and all the other goodies still work fine and
you can ru
On 10/30/06, Jay Grogan <[EMAIL PROTECTED]> wrote:
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13
Hi Senthil,
We experienced a situation very close to this. Due to some instabilities, we
weren't able to export the zpool safely from the distressed system (a T2000
running SXb41). The only free system we had was an X4100, which was running S10
6/06. Both were SAN attached. The filesystem impor
Thanks again for your input Gents, I was able to get a W1100z inexpensively
with 1Gb RAM and a 2.4 GHz Opteron...now I'll just have to manufacture my own
drive slide rails since Sun won't sell the darn things [no, I don't want an 80Gb
IDE drive and apple pie with that!] and I'm not paying $100 fo
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)
Sunfire V120
1 Qlogic 2340
Solaris 10 06/06
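Back-of-the-envelope, those timings work out to the following sequential-write rates (a rough calculation, treating 6 GB as 6144 MB and ignoring any caching effects):

```python
# Approximate write throughput from the mkfile timings above (6 GB ~= 6144 MB).
timings_seconds = {
    "UFS": 2 * 60 + 2.373,             # 2m2.373s
    "UFS directio": 5 * 60 + 31.802,   # 5m31.802s
    "ZFS": 3 * 60 + 13.126,            # 3m13.126s
}
for name, secs in timings_seconds.items():
    print(f"{name}: {6144 / secs:.1f} MB/s")
# -> UFS ~50.2 MB/s, UFS directio ~18.5 MB/s, ZFS ~31.8 MB/s
```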
Wes Williams wrote:
Thanks gents for your replies. I've used a very large config W2100z and ZFS for
a while but didn't know "how low can you go" for ZFS to shine, though a 64-bit
CPU seems to be the minimum performance threshold.
Now that Sun's store is [sort of] working again, I can see s
> Though there isn't a Sun "tower server" that fits
> your description, the Ultra-40
> can hold 4 3.5" drives (80, 250, or 500 GBytes). You
> might actually prefer
> something designed for office use at home, rather
> than something designed for a
> data center.
> http://www.sun.com/desktop/
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of same-
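On the size question: raidz1 dedicates roughly one disk's worth of space to parity, so the usable capacity of an N-disk vdev is about (N-1) times the smallest disk. A minimal sketch (the per-disk size below is a hypothetical figure chosen to match the 99 GB observed, not taken from the original post):

```python
def raidz1_usable_gb(num_disks, smallest_disk_gb):
    """Approximate usable capacity of a raidz1 vdev:
    one disk's worth of space goes to parity."""
    return (num_disks - 1) * smallest_disk_gb

# A 7-disk raidz1 of ~16.5 GB disks yields roughly the 99 GB observed:
print(raidz1_usable_gb(7, 16.5))  # -> 99.0
```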
Wes Williams wrote:
Thanks gents for your replies. I've used a very large config W2100z and ZFS
for a while but didn't know "how low can you go" for ZFS to shine, though a 64-bit
CPU seems to be the minimum performance threshold.
Now that Sun's store is [sort of] working again, I can see s
Hi,
My suggestion is to direct any command output that may
print thousands of lines to a file.
I have not tried that number of FSs, so my first
suggestion is to have a lot of physical memory installed.
The second item I would be concerned about is
path tran
Jeremy Teo wrote:
This is the same problem described in
6343653 : want to quickly "copy" a file from a snapshot.
Actually it's a somewhat different problem. "Copying" a file from a
snapshot is a lot simpler than "copying" a file from a different
filesystem. With snapshots, things are a lot
On 10/30/06, Asif Iqbal <[EMAIL PROTECTED]> wrote:
On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Asif Iqbal wrote:
> > On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> >> Asif Iqbal wrote:
> >> > Hi
> >> >
> >> > I have an X2100 with two 74G disks. I built the OS on the first
Thanks gents for your replies. I've used a very large config W2100z and
ZFS for a while but didn't know "how low can you go" for ZFS to shine, though a
64-bit CPU seems to be the minimum performance threshold.
Now that Sun's store is [sort of] working again, I can see some X2100's with
the
Wes Williams wrote:
I could use the list's help.
My goal: Build a cheap ZFS file server with OpenSolaris on a UFS boot (for now)
10,000 rpm U320 SCSI drive while having a ZFS pool in the same machine. The ZFS
pool will either be a mirror or raidz setup consisting of either two or three
500Gb
I don't have the crashes anymore! What I did was on the receiving pool
explicitly set mountpoint=none
so that on the receiving side the filesystem is never mounted. Now this
shouldn't make a difference. From what I saw before - and if I've understood
the documentation - when you do have the re
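The workaround described above can be sketched as the following sequence (the pool, dataset, and snapshot names are hypothetical, and each zfs command is echoed rather than executed so the sequence can be reviewed first):

```shell
# Replication workaround sketch: set mountpoint=none on the receiving
# pool so the received filesystem is never mounted there.
run() { echo "+ $*"; }   # replace 'echo "+ $*"' with "$@" to actually execute

run zfs set mountpoint=none backup                  # receiving side: never mount
run zfs snapshot tank/data@today                    # source snapshot
run "zfs send tank/data@today | zfs receive backup/data"
```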
[Richard removes his Sun hat...]
Ceri Davies wrote:
On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote:
Chris Adams wrote:
We're looking at replacing a current Linux server with a T1000 + a fiber
channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only
has a
On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Asif Iqbal wrote:
> On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Asif Iqbal wrote:
>> > Hi
>> >
> >> > I have an X2100 with two 74G disks. I built the OS on the first disk
>> > with slice0 root 10G ufs, slice1 2.5G swap, slice6 25
Thanks Robert, Michael.
I guess that has answered my question. I now have got to do a couple
of experiments and get this under control. I will keep you posted if I
see something strange, which I don't hope for. ;o)
senthil
On 10/30/06, Michael Schuster <[EMAIL PROTECTED]> wrote:
senthil raman
> I've been looking at building this setup in some
> cheap eBay rack-mount servers that are generally
> single or dual 1.0GHz Pentium III, 1Gb PC133 RAM, and
> I'd have to add the SATA II controller into a spare
> PCI slot.
>
> For maximum file system performance of the ZFS pool,
> would anyone ca
Hello Wes,
Monday, October 30, 2006, 3:28:19 PM, you wrote:
WW> I could use the list's help.
WW> My goal: Build a cheap ZFS file server with OpenSolaris on a UFS
WW> boot (for now) 10,000 rpm U320 SCSI drive while having a ZFS pool
WW> in the same machine. The ZFS pool will either be a mirror or
Hello Rafael,
Monday, October 30, 2006, 2:58:56 PM, you wrote:
>
Hi,
An IT organization needs to implement a highly available file server using Solaris 10, SunCluster, NFS and Samba. We are talking about thousands, even tens of thousands, of ZFS file systems.
Is this doable? Should I expe
I could use the list's help.
My goal: Build a cheap ZFS file server with OpenSolaris on a UFS boot (for now)
10,000 rpm U320 SCSI drive while having a ZFS pool in the same machine. The
ZFS pool will either be a mirror or raidz setup consisting of either two or
three 500Gb 7,200 rpm SATA II driv
Hi,
An IT organization needs to implement a highly available file server
using Solaris 10, SunCluster, NFS and Samba. We are talking about
thousands, even tens of thousands, of ZFS file systems.
Is this doable? Should I expect any impact on performance or stability
due to the fact I'll have that
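At that scale, per-user filesystems are typically created in a loop; a minimal sketch (the pool name tank, the layout, and the user count are hypothetical, and the zfs commands are echoed here rather than executed):

```shell
# Generate the 'zfs create' commands for N per-user filesystems.
POOL=tank
N=5
i=1
while [ "$i" -le "$N" ]; do
    echo "zfs create -o mountpoint=/export/home/user$i $POOL/home/user$i"
    i=$((i + 1))
done
```

Piping the output through `sh` (after review) would execute the commands for real.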
senthil ramanujam wrote:
Hi,
I am experimenting with a scenario for which we would like to find a
possible solution. Has anyone out there experienced or analyzed
the scenario given below?
Scenario: The system is attached to an array. The array type really
doesn't matter, i.e., it ca
Hello senthil,
Monday, October 30, 2006, 1:12:28 PM, you wrote:
sr> Hi,
sr> I am experimenting with a scenario for which we would like to find a
sr> possible solution. Has anyone out there experienced or analyzed
sr> the scenario given below?
sr> Scenario: The system is attached to an
Hi,
I am experimenting with a scenario for which we would like to find a
possible solution. Has anyone out there experienced or analyzed
the scenario given below?
Scenario: The system is attached to an array. The array type really
doesn't matter, i.e., it can be a JBOD or a RAID arra
Thanks for the reply,
I heard separately that it's fixed in snv_52, don't know if it'll be available
as a ZFS patch or in s10u3.
Pete
This is the same problem described in
6343653 : want to quickly "copy" a file from a snapshot.
On 10/30/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Pavan Reddy wrote:
> This is the time it took to move the file:
>
> The machine is an Intel P4 - 512MB RAM.
>
> bash-3.00# time mv ../share/pav.tar .
On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote:
> Chris Adams wrote:
> >We're looking at replacing a current Linux server with a T1000 + a fiber
> >channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only
> >has a single drive bay (!) which makes it impossib
Hello Jeff,
Monday, October 30, 2006, 2:03:52 AM, you wrote:
>> Nice, this is definitely pointing the finger more definitively. Next
>> time could you try:
>>
>> dtrace -n '[EMAIL PROTECTED](20)] = count()}' -c 'sleep 5'
>>
>> (just send the last 10 or so stack traces)
>>
>> In the mean time