[zfs-discuss] CIFS in production and my experience so far, advice needed

2010-04-12 Thread charles
I am looking at OpenSolaris with ZFS and CIFS shares as an option for large-scale production use with Active Directory. I have successfully joined the OpenSolaris CIFS server to our Windows AD test domain and created an SMB share that the Windows Server 2003 can see. I have also created test us

[zfs-discuss] Problems (bug?) with slow bulk ZFS filesystem creation

2010-05-10 Thread charles
Hi, This thread refers to Solaris 10, but it was suggested that I post it here as ZFS developers may well be more likely to respond. http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502 Basically, after about 1000 ZFS filesystem creations the creation time slows down t

Re: [zfs-discuss] Problems (bug?) with slow bulk ZFS filesystem creation

2010-05-10 Thread charles
Yes, I have recently tried the userquota option (one ZFS filesystem with 60,000 quotas and 60,000 ordinary 'mkdir' home directories within), and this works fine, but you end up with less granularity of snapshots. It does seem odd that after only 1000 ZFS filesystems there is a slowdown. It
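The userquota approach described above can be sketched as follows (pool and user names are hypothetical; syntax per the ZFS user-quota feature in Solaris 10 10/09 and OpenSolaris):

```shell
# One filesystem holds all homes; per-user quotas replace per-user datasets.
zfs create tank/home
zfs set userquota@alice=10G tank/home   # cap user alice at 10 GB
mkdir /tank/home/alice                  # plain directory, not a dataset
zfs get userquota@alice tank/home       # verify the setting
zfs userspace tank/home                 # per-user space report
```

The trade-off noted in the post applies: snapshots now cover all 60,000 homes at once rather than one user at a time.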

[zfs-discuss] Problem with time-slider

2008-12-29 Thread Charles
Hi, I'm a new user of OpenSolaris 2008.11. I switched from Linux to try the time-slider, but now when I execute the time-slider I get this message: http://img115.imageshack.us/my.php?image=capturefentresansnomfx9.png Thank you and happy new year ^^ -- This message posted from opensolaris.org

Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Charles
,00G - rpool/export 19,7G 424G 19K /export rpool/export/home 19,7G 424G 19K /export/home rpool/export/home/charles 19,7G 424G 16,2G /export/home/charles I don't know what to unmount here. Thanks again for your help :)

Re: [zfs-discuss] Problem with time-slider

2008-12-30 Thread Charles
Yeah, thanks a lot to timf and mgerdts, it's working now! ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
We're getting the notorious "cannot destroy ... dataset already exists". I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's going to take me several days to get all the data back. Is t

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
Incidentally, this is on Solaris 10, but I've seen identical reports from OpenSolaris.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
# zfs destroy -r OIRT_BAK/backup_bad
cannot destroy 'OIRT_BAK/backup_bad@annex-2010-03-23-07:04:04-bad': dataset already exists
No, there are no clones.
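When destroy reports "dataset already exists", a hidden clone or an interrupted receive is a common cause; a hedged sketch of checks (dataset names taken from the post, clone name hypothetical):

```shell
# Look for datasets that depend on the stuck snapshot.
zfs list -t all -r OIRT_BAK       # any unexpected children?
zfs get -r origin OIRT_BAK        # clones list the snapshot as their origin
# If a clone exists, promoting it detaches it from the snapshot:
# zfs promote OIRT_BAK/some_clone
```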

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
So we tried recreating the pool and sending the data again. 1) compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties 2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung. 3) This is Solaris Cluster. We tried forcing a fai

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
Ah, I hadn't thought about that. That may be what was happening. Thanks.

Re: [zfs-discuss] can't destroy snapshot

2010-03-31 Thread Charles Hedrick
So that eliminates one of my concerns. However, the other one is still an issue. Presumably Solaris Cluster shouldn't import a pool that's still active on the other system. We'll be looking more carefully into that.

Re: [zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-16 Thread Charles Hedrick
We use this configuration. It works fine. However I don't know enough about the details to answer all of your questions. The disks are accessible from both systems at the same time. Of course with ZFS you had better not actually use them from both systems. Actually, let me be clear about what w

[zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
We have a server using Solaris 10. It's a pair of systems with a shared J4200, with Solaris Cluster. It works very nicely. Solaris Cluster switches over transparently. However, as an NFS server it is dog-slow. This is the usual synchronous write problem. Setting zil_disable fixes the problem. ot

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
Thanks. That's what I was looking for. Yikes! I hadn't realized how expensive the Zeus is. We're using Solaris cluster, so if the system goes down, the other one takes over. That means that if the ZIL is on a local disk, we lose it in a crash. Might as well just set zil_disable (something I'm c
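The tunable being weighed here was an all-or-nothing switch in that era; a sketch of how it was set (hedged: this disables the ZIL globally and can silently lose recent synchronous writes on a crash, which is especially dangerous for NFS clients):

```shell
# Solaris 10-era setting; takes effect at next boot.
echo 'set zfs:zil_disable = 1' >> /etc/system
# Later releases offer a supported per-dataset control instead:
# zfs set sync=disabled tank/fs
```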

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
It turns out that our storage is currently being used for * backups of various kinds, run daily by cron jobs * saving old log files from our production application * saving old versions of java files from our production application Most of the usage is write-only, and a fair amount of it involves

Re: [zfs-discuss] getting decent NFS performance

2009-12-22 Thread Charles Hedrick
Is iSCSI reliable enough for this?

Re: [zfs-discuss] zfs fast mirror resync?

2010-01-15 Thread Charles Menser
Perhaps an iSCSI mirror for a laptop? Online it when you are back "home" to keep your backup current. Charles On Thu, Jan 14, 2010 at 7:04 PM, A Darren Dunham wrote: > On Thu, Jan 14, 2010 at 06:11:10PM -0500, Miles Nordin wrote: >> zpool offline / zpool online of a mirror co

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2010-01-15 Thread Charles Edge
To have Mac OS X connect via iSCSI: http://krypted.com/mac-os-x/how-to-use-iscsi-on-mac-os-x/

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-18 Thread Charles Hedrick
From the web page it looks like this is a card that goes into the computer system. That's not very useful for enterprise applications, as they are going to want to use an external array that can be used by a redundant pair of servers. I'm very interested in a cost-effective device that will

[zfs-discuss] available space

2010-02-13 Thread Charles Hedrick
I have the following pool:
NAME  SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
OIRT  6.31T  3.72T  2.59T  58%  ONLINE  /
"zfs list" shows the following for a typical file system:
NAME                   USED   AVAIL  REFER  MOUNTPOINT
OIRT/sakai/production  1.40T  1.77T  1.40T  /OIRT/sakai/produc

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-13 Thread Charles Hedrick
I have a similar situation. I have a system that is used for backup copies of logs and other non-critical things, where the primary copy is on a Netapp. Data gets written in batches a few times a day. We use this system because storage on it is a lot less expensive than on the Netapp. It's only

Re: [zfs-discuss] available space

2010-02-15 Thread Charles Hedrick
Thanks. That makes sense. This is raidz2.

[zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
We recently moved a Mysql database from NFS (Netapp) to a local disk array (J4200 with SAS disks). Shortly after moving production, the system effectively hung. CPU was at 100%, and one disk drive was at 100%. I had tried to follow the tuning recommendations for Mysql mostly: * recordsize set to
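The recordsize recommendation the post refers to is commonly sketched like this (dataset names hypothetical; assumes InnoDB's 16K pages):

```shell
# Match recordsize to the InnoDB page size for the data files;
# leave the sequential logs at the default 128K.
zfs create -o recordsize=16K tank/mysql/data
zfs create tank/mysql/log
# Optional on memory-constrained hosts: cache only metadata for the data files.
zfs set primarycache=metadata tank/mysql/data
```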

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
We had been using the same pool for a backup Mysql server for 6 months before using it for the primary server. Neither zpool status -v nor fmdump shows any signs of problems.

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Charles Hedrick
I hadn't considered stress testing the disks. Obviously that's a good idea. We'll look at doing something in May, when we have the next opportunity to take down the database. I doubt that doing testing during production is a good idea...

Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-22 Thread Charles Hedrick
I talked with our enterprise systems people recently. I don't believe they'd consider ZFS until it's more flexible. Shrink is a big one, as is removing a slog. We also need to be able to expand a raidz, possibly by striping it with a second one and then rebalancing the sizes.

[zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Knipe, Charles
re my performance has gone? Thanks -Charles

[zfs-discuss] ZFS resilvering loop from hell

2011-07-26 Thread Charles Stephens
I'm on S11E 150.0.1.9 and I replaced one of the drives, and the pool seems to be stuck in a resilvering loop. I performed a 'zpool clear' and 'zpool scrub', and it just complains that the drives I didn't replace are degraded because of too many errors. Oddly, the replaced drive is reported as being

[zfs-discuss] How do you grow a ZVOL?

2008-07-17 Thread Charles Menser
I've looked for anything I can find on the topic, but there does not appear to be anything documented. Can a ZVOL be expanded? In particular, can a ZVOL shared via iSCSI be expanded? Thanks, Charles
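For the record, a zvol can be grown by raising its volsize property; a sketch with hypothetical names:

```shell
zfs create -V 10G tank/vol1        # 10 GB zvol
zfs set volsize=20G tank/vol1      # grow it in place
zfs get volsize tank/vol1
# An iSCSI initiator must rescan the LUN afterwards, and the filesystem
# on it must be grown with its own tools (e.g. growfs for UFS).
```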

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Charles Soto
data, which takes time and effort. If the system can say "these 500K blocks are the same as these 500K, don't bother copying them to the DR site AGAIN," then I have a less daunting data management task. De-duplication makes a lot of sense at some

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Charles Menser
controller density or if two-to-one is the norm. Charles On Wed, Jul 23, 2008 at 3:37 PM, Steve <[EMAIL PROTECTED]> wrote: > I'm a fan of ZFS since I've read about it last year. > > Now I'm on the way to build a home fileserver and I'm thinking to go with > Opensolar

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
Yes, I am very happy with the M2A-VM. Charles On Wed, Jul 23, 2008 at 5:05 PM, Steve <[EMAIL PROTECTED]> wrote: > Thank you for all the replays! > (and in the meantime I was just having a dinner! :-) > > To recap: > > tcook: > you are right, in fact I'm thinking t

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-24 Thread Charles Menser
I installed it with snv_86 in IDE controller mode, and have since upgraded ending up at snv_93. Do you know what implications there are for using AHCI vs IDE modes? Thanks, Charles On Thu, Jul 24, 2008 at 9:26 AM, Florin Iucha <[EMAIL PROTECTED]> wrote: > On Thu, Jul 24, 2008 at 0

Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Charles Meeks
Hoping this is not too off topic. Can anyone confirm you can break a mirrored zfs root pool once formed? I basically want to clone a boot drive, take it to another piece of identical hardware and have two machines (or more). I am running indiana b93 on x86 hardware. I have read that

[zfs-discuss] zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???

2008-07-28 Thread Charles Emery
New server build with Solaris-10 u5/08, on a SunFire t5220, and this is our first rollout of ZFS and Zpools. Have 8 disks, boot disk is hardware mirrored (c1t0d0 + c1t1d0) Created Zpool my_pool as RaidZ using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0 I am work

[zfs-discuss] Dribbling checksums

2008-10-28 Thread Charles Menser
some weird interaction with the AOC-SAT2-MV8) and still take errors. Has anyone had a similar problem? Any ideas what may be happening? Is there more data I can provide? Many thanks, Charles

Re: [zfs-discuss] Dribbling checksums

2008-10-30 Thread Charles Menser
I'll do that today. Thank you! Charles On Thu, Oct 30, 2008 at 2:12 AM, Marc Bevand <[EMAIL PROTECTED]> wrote: > Charles Menser gmail.com> writes: >> >> Nearly every time I scrub a pool I get small numbers of checksum >> errors on random drives on either co

[zfs-discuss] Peculiar disk loading on raidz2

2008-11-21 Thread Charles Menser
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t3d0
Thanks, Charles

Re: [zfs-discuss] Peculiar disk loading on raidz2

2008-11-21 Thread Charles Menser
0 0 c5t1d0 ONLINE 0 0 0 errors: No known data errors I appreciate your feedback, I had not thought to aggregate the stats and check the aggregate. Thanks, Charles On Fri, Nov 21, 2008 at 3:24 PM, Will Murnane <[EMAIL PROTECTED]> wrote: > On Fri, Nov 21, 2008 at 14

[zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-13 Thread Charles Wright
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML Raid card, I got errors on all drives that result from SCSI timeout errors. yoda:~ # tail -f /var/adm/messages Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block:

Re: [zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-13 Thread Charles Wright
Thanks for the reply. I've also had issues with consumer-class drives and other raid cards. The drives I have here (all 16 drives) are Seagate® Barracuda® ES enterprise hard drives, Model Number ST3500630NS. If the problem was with the drive I would expect the same behavior in both solaris and o

Re: [zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-14 Thread Charles Wright
Thanks for the info. I'm running the latest firmware for my card: V1.46 with BOOT ROM version V1.45. Could you tell me how you have your card configured? Are you using JBOD, RAID, or Pass Through? What is your Max SATA mode set to? How many drives do you have attached? What is your ZFS

Re: [zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-14 Thread Charles Wright
Here's an update: I thought that the error message arcmsr0: too many outstanding commands might be due to a SCSI queue being overrun. The areca driver has #define ARCMSR_MAX_OUTSTANDING_CMD 256 What might be happe

Re: [zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-15 Thread Charles Wright
I've tried putting this in /etc/system and rebooting: set zfs:zfs_vdev_max_pending = 16 Are we sure that number equates to a SCSI command? Perhaps I should set it to 8 and see what happens. (I have 256 SCSI commands I can queue across 16 drives.) I still got these error messages in the log. Jan 15

Re: [zfs-discuss] OpenSolaris better than Solaris 10u6 with regards to ARECA Raid Card

2009-01-16 Thread Charles Wright
I tested with zfs_vdev_max_pending=8. I hoped this would make the error messages arcmsr0: too many outstanding commands (257 > 256) go away, but it did not. With zfs_vdev_max_pending=8, only 128 commands total should have been outstanding, I would think (16 drives * 8 = 128). However
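A sketch of the tuning being tested (hedged: zfs_vdev_max_pending caps the queue per vdev, so it bounds but does not directly equal the controller-wide command count the arcmsr message complains about):

```shell
# Persistent setting, applied at next boot:
echo 'set zfs:zfs_vdev_max_pending = 8' >> /etc/system
# Live change on a running kernel (old Solaris mdb syntax):
echo 'zfs_vdev_max_pending/W0t8' | mdb -kw
```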

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-10 Thread Charles Binford
king about something else, or does ZFS request Order Queue Tags on certain commands? Charles Jeff Bonwick wrote: >> There is no substitute for cord-yank tests - many and often. The >> weird part is, the ZFS design team simulated millions of them. >> So the full explanation remains to

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-10 Thread Charles Binford
DE - could you please post the output of your 'zpool umount usbhdd1' command? I believe the output will prove useful to the point being discussed below. Charles D. Eckert wrote: > (...) > You don't move a pool with 'zfs umount', that only unmounts a single zfs

[zfs-discuss] two pools on boot disk?

2009-06-20 Thread Charles Hedrick
I have a small system that is going to be a file server. It has two disks. I'd like just one pool for data. Is it possible to create two pools on the boot disk, and then add the second disk to the second pool? The result would be a single small pool for root, and a second pool containing the res

[zfs-discuss] how to do backup

2009-06-20 Thread Charles Hedrick
I have a USB disk, to which I want to do a backup. I've used send | receive. It works fine until I try to reboot. At that point the system fails to come up because the backup copy is set to be mounted at the original location so the system tries to mount two different things the same place. I gu
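One way to avoid the mountpoint collision described above is to receive the backup unmounted and keep it from mounting at boot; a sketch with hypothetical pool names:

```shell
# -u leaves the received filesystems unmounted.
zfs send -R rpool@backup | zfs receive -Fdu backup/rpool
zfs set canmount=noauto backup/rpool      # don't auto-mount at boot
# Alternatively, import the backup pool under an alternate root:
# zpool import -R /mnt/backup backup
```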

[zfs-discuss] core dump on zfs receive

2009-06-22 Thread Charles Hedrick
I'm trying to do a simple backup. I did
zfs snapshot -r rpool@snapshot
zfs send -R rpool@snapshot | zfs receive -Fud external/rpool
zfs snapshot -r rpool@snapshot2
zfs send -RI rpool@snapshot1 rpool@snapshot2 | zfs receive -d external/rpool
The receive coredumps $c libc_hwcap1.so.1`strcmp+0xec(8

Re: [zfs-discuss] core dump on zfs receive

2009-06-27 Thread Charles Hedrick
I'd like to maintain a backup of the main pool on an external drive. Can you suggest a way to do that? I was hoping to do zfs send | zfs receive and then do that with incrementals. It seems that this isn't going to work. How do people actually back up ZFS-based systems?
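The send | receive approach the poster attempted does work as an incremental cycle when the snapshot names line up; a hedged sketch (hypothetical names):

```shell
zfs snapshot -r rpool@s1
zfs send -R rpool@s1 | zfs receive -Fdu external/rpool           # initial full copy
zfs snapshot -r rpool@s2
zfs send -RI rpool@s1 rpool@s2 | zfs receive -du external/rpool  # delta only
zfs destroy -r rpool@s1                                          # prune once superseded
```

The -u flag keeps the received copies unmounted, which sidesteps boot-time mount conflicts between the backup and the live filesystems.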

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread Charles Baker
> My testing has shown some serious problems with the > iSCSI implementation for OpenSolaris. > > I setup a VMware vSphere 4 box with RAID 10 > direct-attached storage and 3 virtual machines: > - OpenSolaris 2009.06 (snv_111b) running 64-bit > - CentOS 5.3 x64 (ran yum update) > - Ubuntu Server 9.

Re: [zfs-discuss] add-view for the zfs snapshot

2009-08-07 Thread Charles Baker
> I first create a lun by "stmfadm create-lu ", and > add-view, so the initiator can see the created > lun. > > Now I use "zfs snapshot" to create a snapshot for the > created lun. > > What can I do to make the snapshot accessible by the > Initiator? Thanks. Hi, This is a good question and some

[zfs-discuss] Question about mirror vdev performance considerations

2009-08-12 Thread Charles Menser
B B and zpool create mypool mirror A B mirror A B and zpool create mypool mirror A B mirror B A Thanks, Charles Menser

Re: [zfs-discuss] [indiana-discuss] Boot failure with snv_122 and snv_123

2009-09-23 Thread Charles Menser
d my grub config, and nothing seems out of line there (though I have edited the boot entries to remove the splashimage, foreground, background, and console=graphics). Thanks, Charles > Hi, > > A problem with your root pool - something went wrong > when you upgraded > which exp

Re: [zfs-discuss] Q: recreate pool?

2007-05-02 Thread Charles Debardeleben
native device support, but I could be wrong about the name. -Charles Gonzalo Siero wrote: Hi there, because of a problem with EMC Power Path we need to change the configuration of a ZFS pool changing "emcpower?g" devices (EMC Power Path created devices) to underlaying "c#t#d#

Re: [zfs-discuss] AVS replication vs ZFS send recieve for odd sized volume pairs

2007-05-22 Thread Charles DeBardeleben
could probably write your own agent for zfs send using our agent builder tool. However, integrating this with the HANFS agent that ships with SolarisCluster will require that you are familiar with all of the failures that you may hit and what recovery action you want to take. -Charles a habman

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-28 Thread Charles DeBardeleben
Are you sure that UFS writes a-time on read-only filesystems? I do not think that it is supposed to. If it does, I think that this is a bug. I have mounted read-only media before, and not gotten any write errors. -Charles David Olsen wrote: >> On 27/08/2007, at 12:36 AM, Rainer J.H.

Re: [zfs-discuss] [storage-discuss] server-reboot

2007-10-11 Thread Charles Baker
Hi Claus, Were you able to collect the core file? If so, please provide us with the core file so we can take a look. I can provide specific upload instructions offline. thanks Charles eric kustarz wrote: This looks like a bug in the sd driver (SCSI). Does this look familiar to anyway from

Re: [zfs-discuss] Solaris SAMBA questions

2008-05-15 Thread Charles Soto
week to look through the logs as this fails, so I may have a solution for it soon enough. Charles On 5/15/08 2:51 PM, "Mertol Ozyoney" <[EMAIL PROTECTED]> wrote: > Hi All ; > > > > Need help for figuring out a solution for customer requirements. > > &

Re: [zfs-discuss] 3510 JBOD with multipath

2008-05-23 Thread Charles Soto
The Solaris SAN Configuration and Multipathing Guide proved very helpful for me: http://docs.sun.com/app/docs/doc/820-1931/ I, too was surprised to see MPIO enabled by default on x86 (we're using Dell/EMC CX3-40 with our X4500 & X6250 systems). Charles Quoting Krutibas Biswal <[EMA

Re: [zfs-discuss] Per-user home filesystems and OS-X Leopard anomaly

2008-06-08 Thread Charles Soto
de. I had heard it was, and I have to concur. Leopard is the first OS X automounter that actually works as expected. There was zero fiddling with our Solaris 10U5 NFS server (a Thumper). Charles - Charles Soto[EMAIL PROTECTED] Director, Information Technology   

Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-09 Thread Charles Soto
I agree 100%. If we went by "this is how we always did it," then we would not have ZFS :) Charles (not to mention X64, CMT, or iPhones!;) On 6/4/08 10:55 AM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote: > On Tue, 3 Jun 2008, Dave Miner wrote: >> >>

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-13 Thread Charles Soto
ope that all storage technologies take a holistic view of the storage management picture. While ZFS goes a long way to eliminating distinctions between volume and filesystem management, it is still a niche player. As much hype as ZFS snapshots get, that's barely tiptoeing into the man

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-13 Thread Charles Soto
I think the resource emphasis on storage is quite appropriate. The DATA are the valuable things, not the servers or applications. Appropriately, servers reached commodity status before storage. But storage hardware will go that way, and the focus will be on data (storage) management, where it

Re: [zfs-discuss] raid card vs zfs

2008-06-23 Thread Charles Soto
rage stack" that Sun and the OpenSolaris project have envisioned will make such "commodity" hardware useful pieces of our solution. I love our EMC/Brocade/HP SAN gear, but it's just too expensive to scale (particularly when it comes to total data management). Charles

Re: [zfs-discuss] memory hog

2008-06-23 Thread Charles Soto
s. "I need more performance. It's worth $x to get that." > To my experiance ZFS still performs nicely on 1 GB boxes. This is probably fine for the "typical consumer usage pattern." > PS: How much 4 GB Ram costs for a desktop ? I just bought 2GB DIMMs fo

Re: [zfs-discuss] memory hog

2008-06-23 Thread Charles Soto
, you are allowed to use more RAM). The 3GB per-process limit is the real factor. But then again, who runs Oracle on Windows? :) Charles (ok, I have, but only for testing)

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-24 Thread Charles Soto
a "Solaris thing." And S10U5 at least now only defaults to TWO partitions (bigger / than before, and /export/home). Baby steps, I suppose :) Charles

Re: [zfs-discuss] raid card vs zfs

2008-06-25 Thread Charles Soto
out around 6bit/sec with > current drivers. Wow, 6bps! You need a new acoustic coupler ;) I think the X4500 designers appreciate the bandwidth ceiling, as the 10Gig card we put in ours is single port, while the cards we have for our X6250s are dual port (PCIe). Charles

Re: [zfs-discuss] raid card vs zfs

2008-06-25 Thread Charles Soto
ort are going down (even 10Gig) but you get quite good performance with a 4-link aggregate on the X4500. You could go 8-way if you add another 4-port PCI-X card. IIRC, Solaris 10 supports up to 16-way at this speed (but at some point you're pr

Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-28 Thread Charles Soto
r message: >> cannot mount '/tank': directory is not empty; >> 4. reboot. >> then the os can only be login in from console. does it a bug? > > No, I would not consider that a bug. Why? Charles (to paraphrase PBS - "be more helpful" ; conversely, "

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Charles Soto
A really smart nexus for dedup is right when archiving takes place. For systems like EMC Centera, dedup is basically a byproduct of checksumming. Two files with similar metadata that have the same hash? They're identical. Charles On 7/7/08 4:25 PM, "Neil Perrin" <[EMAIL

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Charles Soto
IIRC, WebFS uses a relational database to track this (among much of its other metadata). Charles On 7/7/08 7:40 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote: > On Tue, 8 Jul 2008, Nathan Kroenert wrote: > >> Even better would be using the ZFS block checksums (

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Charles Soto
eful, but that user isn't there to know how to "manage data" for my benefit. They're there to learn how to be filmmakers, journalists, speech pathologists, etc. Charles On 7/7/08 9:24 PM, "Bob Friesenhahn" <[EMAIL PROTECTED]> wrote: > On Mon, 7 Jul 2

Re: [zfs-discuss] ZFS and Sun Cluster....

2006-05-30 Thread Charles Debardeleben
upport Sol 10u2, but not Nevada or OpenSolaris. -Charles >Date: Fri, 26 May 2006 12:08:38 -0700 >From: Erik Trimble <[EMAIL PROTECTED]> >Subject: [zfs-discuss] ZFS and Sun Cluster >To: ZFS Discussions >MIME-version: 1.0 >Content-transfer-encoding: 7BIT >X-BeenThere: z

Re: [zfs-discuss] Re: ZFS and Sun Cluster....

2006-05-30 Thread Charles Debardeleben
bout why PxFS did not work with ZFS, contact me, and I will try to get more details. -Charles >Date: Tue, 30 May 2006 10:30:14 -0700 (PDT) >From: Tatjana S Heuser <[EMAIL PROTECTED]> >Subject: [zfs-discuss] Re: ZFS and Sun Cluster >To: zfs-discuss@opensolaris.org >MIME-ve

[zfs-discuss] Intermittent ZFS hang

2010-08-30 Thread Charles J. Knipe
. Next we suspected the SSD log disks, but we've seen the problem with those removed, as well. Has anyone seen anything like this before? Are there any tools we can use to gather information during the hang which might be useful in determining what's going wrong? Thanks for any i

Re: [zfs-discuss] Intermittent ZFS hang

2010-08-30 Thread Charles J. Knipe
. I'm doing some reading now toward being in a position to ask intelligent questions. -Charles

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-13 Thread Charles J. Knipe
> > > Charles, > > Just like UNIX, there are several ways to drill down > on the problem. I would probably start with a live crash dump (savecore > -L) when you see > the problem. Another method would be to grab > multiple "stats" commands > du

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-13 Thread Charles J. Knipe
t copying the contents of the pool over to a new pool, but considering the effort/disruption I'd want to make sure it's not just a shot in the dark. If I don't have a good theory in another week, that's when I start shooting in the dark... -Charles

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-23 Thread Charles J. Knipe
e done we could remove the deduplicated volumes. Is this sound? Thanks again for all the help! -Charles > Howdy, > > We're having a ZFS performance issue over here that I > was hoping you guys could help me troubleshoot. We > have a ZFS pool made up of 24 disks, arranged in