Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?
Kyle McDonald writes:
> I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
>
> Is there anything out there I can just buy?

In Germany, someone sells preconfigured hardware based on Nexenta:

http://www.thomas-krenn.com/de/storage-loesungen/storage-systeme/nexentastor/nexentastor-sc846-unified-storage.html

I have no experience with them, but I wish them success. :-)

Regards -- Volker
--
Volker A. Brandt                  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim                    Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513             Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Secure delete?
> The most paranoid will replace all the disks and then physically destroy
> the old ones.

I thought the most paranoid will encrypt everything and then forget
the key... :-)

Seriously, once encrypted ZFS is integrated, that is a viable method.

Regards -- Volker
Re: [zfs-discuss] RAIDZ drive "removed" status
> > How do I identify which drive it is? I hear each drive spinning (I listened
> > to them individually) so I can't simply select the one that is not spinning.
>
> You can try reading from each raw device, and looking for a blinky-light
> to identify which one is active. If you don't have individual lights,
> you may be able to hear which one is active. The "dd" command should do.

Write down the serial numbers on your drives. Then do the following
for all "good" drives (the bad one might hang). You can recognize the
good ones because format shows their SCSI targets in the initial disk
selection prompt.

Procedure:

- Run "format -e" as root.
- Select the first "good" disk.
- Type "scsi" to get into the mode pages menu (don't worry about the
  warning; you won't do anything to the disks).
- Type "inq" to see the raw inquiry string returned by the disk.
  Somewhere in there is the serial number as an ASCII string.
- Type "q" to get back to the main menu.
- Type "disk" to select the next disk.

This way, you can match serial numbers to "good" disks. The one left
over will be the bad one.

HTH -- Volker
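On many Solaris releases, "iostat -En" also reports each drive's serial
number, so the matching can be scripted instead of done interactively in
format. A rough sketch; the sample output below is invented, and on a
live system you would pipe real "iostat -En" output into the awk:

```shell
# Parse (sample) "iostat -En" output and print device -> serial pairs.
# The two drives below are made up for illustration.
iostat_sample='c0t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: HITACHI Product: HDS723030ALA640 Revision: A5C0 Serial No: MK0301X123
c0t1d0 Soft Errors: 0 Hard Errors: 12 Transport Errors: 3
Vendor: HITACHI Product: HDS723030ALA640 Revision: A5C0 Serial No: MK0301X456'

printf '%s\n' "$iostat_sample" | awk '
  /Soft Errors:/ { dev = $1 }        # a device line starts each record
  /Serial No:/   { print dev, $NF }  # pair it with the serial number
'
```

Matching the printed serials against the labels on the drives then
identifies each physical disk without powering anything down.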
Re: [zfs-discuss] Accidentally added disk instead of attaching
> I wanted to add a disk to the tank pool to create a mirror. I accidentally
> used zpool add instead of zpool attach and now the disk is added. Is
> there a way to remove the disk without losing data?

Been there, done that -- at a customer site, while showing off ZFS. :-)

Currently, you cannot remove a "simple" device. Depending on your
Solaris version, you can remove things like hot spares and cache
devices, but not simple vdevs.

Back up the pool and recreate it in the correct way.

Sorry for the bad news -- Volker
Re: [zfs-discuss] Accidentally added disk instead of attaching
> Do you know a good zfs backup restore walkthrough online?

Not really. Check out the zfs send and receive commands in the ZFS
Administration Guide:

http://docs.sun.com/app/docs/doc/819-5461/gbinw?a=view

Basically, you make a snapshot of every filesystem you want to back
up, then you use "zfs send" on each snapshot to create a stream that
you store in some other place. Next, you destroy the pool and
recreate it. Finally, you restore the filesystems one by one with the
"zfs receive" command, using the streams that you created in the
first step.

See also the section on send and receive in zfs(1M).

Good luck -- Volker
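The cycle described above can be sketched as a shell script. This is a
dry run: the pool and filesystem names are examples, and each command is
printed via echo rather than executed, so nothing is destroyed by
accident:

```shell
# Dry-run sketch of snapshot -> send -> destroy -> recreate -> receive.
# Remove the "echo" prefixes to actually run the commands (as root).
POOL=tank
BACKUP=/backup                      # must live outside the pool!
FSLIST="tank tank/home tank/data"   # example filesystems

for fs in $FSLIST; do
    stream="$BACKUP/$(echo "$fs" | tr '/' '_').zfs"
    echo zfs snapshot "${fs}@migrate"
    echo "zfs send ${fs}@migrate > $stream"
done

echo zpool destroy "$POOL"
echo zpool create "$POOL" mirror c0t0d0 c0t1d0

for fs in $FSLIST; do
    stream="$BACKUP/$(echo "$fs" | tr '/' '_').zfs"
    echo "zfs receive -F $fs < $stream"
done
```

The "tr '/' '_'" just flattens nested filesystem names into unique
stream file names; any naming scheme works as long as send and receive
agree on it.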
Re: [zfs-discuss] Accidentally added disk instead of attaching
Andriy Gapon writes:
> on 06/12/2009 19:40 Volker A. Brandt said the following:
> >> I wanted to add a disk to the tank pool to create a mirror. I accidentally
> >> used zpool add instead of zpool attach and now the disk is added. Is
> >> there a way to remove the disk without losing data?
> >
> > Been there, done that -- at a customer site while showing off
> > ZFS. :-) [...]
>
> Yep. My 2 cents -- 'add' and 'attach' are such similar words that I think
> the ZFS tools UI designers (if any) should reconsider the naming of these
> commands. Or the 'add' command should always be interactive and ask for at
> least two confirmations that the user knows what he is doing and why.
> Perhaps it should include a ZFS micro-exam too.
> Jokes aside, it is too easy to make a mistake with consequences that are
> too hard to correct. Does anyone disagree?

I absolutely, totally, like, fully agree, man! :-)

There should be some interaction, and maybe even a display of the new
pool structure, followed by a confirmation prompt. This could be
overridden with the already well-established "-f" flag.

Regards -- Volker
Re: [zfs-discuss] Any rhyme or reason to disk dev names?
> I am curious to know if there is an easy way to guess or identify
> the device names of disks.

Have a look at the file /etc/path_to_inst. There you will find all
device instances managed by a particular driver. The first entry of
each line is the physical device path.

If you then look in /dev/rdsk and check which symbolic link of the
form cXtXdXsX points to this physical device, you have your match.

One caveat: if you move disks around, /etc/path_to_inst will grow,
and there is no guarantee that any device listed in this file is
really present in the running system.

HTH -- Volker
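The matching step can be done with plain shell string operations. A
sketch using invented sample data; on a live system the physical path
would come from /etc/path_to_inst and the symlink target from
"ls -l /dev/rdsk":

```shell
# Match a /dev/rdsk entry to its physical device path.
# Both strings below are made-up samples for illustration.
phys='/pci@0,0/pci1022,7450@2/sd@0,0'    # from /etc/path_to_inst, quotes stripped
link='../../devices/pci@0,0/pci1022,7450@2/sd@0,0:a,raw'  # target of c1t0d0s0

target=${link#../../devices}   # strip the /devices prefix
target=${target%:*}            # strip the minor-node suffix (":a,raw")

if [ "$target" = "$phys" ]; then
    echo "c1t0d0s0 matches $phys"
fi
```

Looping this comparison over every symlink in /dev/rdsk produces the
full cXtXdXsX-to-physical-path table.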
Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS
> Deskstar 7K3000 (HDS723030ALA640)

We are using these disks. They work fine with ZFS.

Regards -- Volker
Re: [zfs-discuss] Drive upgrades
Michael Armstrong writes:
> Is there a way to quickly ascertain if my seagate/hitachi drives are as
> large as the 2.0tb samsungs? I'd like to avoid the situation of replacing
> all drives and then not being able to grow the pool...

Hitachi prints the block count of the drives on the physical product
label. If you compare that number to the one given in the Solaris
label, as printed by the prtvtoc command, you should be able to
answer your question.

I don't know about the Seagate drives, but they should at least have
a block count somewhere in their documentation.

HTH -- Volker
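The comparison itself is just integer arithmetic. A sketch with
invented block counts; on a live system the second number would come
from the "accessible sectors" line of the prtvtoc output:

```shell
# Compare the label block count with what Solaris reports.
# Both values below are made up for illustration.
label_blocks=3907029168      # printed on the drive's product label
prtvtoc_blocks=3907029168    # "accessible sectors" from prtvtoc

if [ "$prtvtoc_blocks" -ge "$label_blocks" ]; then
    echo "replacement drive is at least as large: OK"
else
    echo "replacement drive is smaller -- the pool cannot grow"
fi
```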
Re: [zfs-discuss] ZFS stats output - used, compressed, deduped, etc.
> I'm hoping the answer is yes - I've been looking but do not see it ...

Well, he is telling you to run the dtrace program as root in one
window, and to run the "zfs get all" command on a dataset in your
pool in another window, to trigger the dsl_dataset_stats probe:

> none can hide from dtrace!
> # dtrace -qn 'dsl_dataset_stats:entry
>   {
>     this->ds = (dsl_dataset_t *)arg0;
>     printf("%s\tcompressed size = %d\tuncompressed size=%d\n",
>         this->ds->ds_dir->dd_myname,
>         this->ds->ds_phys->ds_compressed_bytes,
>         this->ds->ds_phys->ds_uncompressed_bytes)
>   }'
> openindiana-1  compressed size = 3667988992  uncompressed size=3759321088
>
> [zfs get all rpool/openindiana-1 in another shell]

HTH -- Volker
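As a hedged aside: newer ZFS releases expose this directly as dataset
properties ("used", "logicalused", and a ready-made "compressratio"),
so no dtrace is needed there. A sketch of deriving the ratio from the
two byte counts; the numbers are the ones from the dtrace output above:

```shell
# Compute the compression ratio from compressed/uncompressed byte counts.
# On newer systems these could come from, e.g.:
#   zfs get -Hp -o value used,logicalused <dataset>
compressed=3667988992
uncompressed=3759321088

awk -v c="$compressed" -v u="$uncompressed" \
    'BEGIN { printf "compressratio = %.2fx\n", u / c }'
```

With the sample numbers this prints "compressratio = 1.02x".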
Re: [zfs-discuss] maczfs / ZEVO
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) writes:
> Anybody using maczfs / ZEVO? Have good or bad things to say, in
> terms of reliability, performance, features?

I use it on and off for data exchange between a MacBook Pro, some
iMacs, and various Solarisoid systems. I do this with single-disk
pools on external drives, connected via eSATA to Solaris servers and
via USB to the MacBook Pro. It works.

There is just one outrageously brain-damaged idiocy, and that is the
installer. I have one iMac that only has 3 GB RAM. The installer
flatly refuses to install on that box because it insists on a minimum
of 4 GB. It cannot guarantee stable performance for a single-disk
pool with only 3 GB of memory. What a load of crap. If that had been
the first box I installed it on, I would have ditched it then and
there.

I signed up to the ZEVO community forum and saw that other users had
already complained about that problem, but it seems neither an
official fix nor a workaround is available.

ZEVO is free, and Greenbytes are entitled to release it in whatever
form they want, but I found that restriction remarkably stupid.

Best regards -- Volker
Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL
Ian Collins writes:
> Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
> >> From: Tim Cook [mailto:t...@cook.ms]
> >>
> >> We can agree to disagree.
> >>
> >> I think you're still operating under the auspices of Oracle
> >> wanting to have an open discussion. This is patently false.
> >
> > I'm just going to respond to this by saying thank you, Cindy,
> > Casper, Neil, and others, for all the help over the years. I
> > think we all agree it was cooler when opensolaris was open, but
> > things are beyond our control, so be it. Moving forward, I don't
> > expect Oracle to be any more open than MS or Apple or Google,
> > which is to say, I understand there's stuff you can't talk about,
> > and support you can't give freely or openly. But to the extent
> > you're still able to discuss publicly known things, thank you.
>
> +1.

+9 :-)

-- Volker
Re: [zfs-discuss] ZFS Distro Advice
Hi Tiernan!

> But, now i am confused as to what OS to use... OpenIndiana? Nexenta?
> FreeNAS/FreeBSD?
>
> I need something that will allow me to share files over SMB (3 if
> possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would
> like something i can manage "easily" and something that works with
> the Dell...

I can recommend FreeNAS. It lives on a USB stick, thus leaving all
eight of your disk slots free. It can do all the things you have
listed above. It has a nice management GUI to bring it all together.
And it is free, so you can download it and see if it recognizes all
your hardware, especially the storage and network controllers.

Best regards -- Volker A. Brandt
Re: [zfs-discuss] ZFS Distro Advice
Tim Cook writes:
> > I need something that will allow me to share files over SMB (3 if
> > possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would
> > like something i can manage "easily" and something that works with
> > the Dell...
>
> All of them should provide the basic functionality you're looking
> for. None of them will provide SMB3 (at all) or AFP (without a third
> party package).

FreeNAS has AFP built-in, including a Time Machine discovery method.

The latest FreeNAS is still based on Samba 3.x, but they are aware of
4.x and will probably integrate it at some point in the future. Then
you should have SMB3. I don't know how far along they are...

Best regards -- Volker
Re: [zfs-discuss] ZFS disk space monitoring with SNMP
Hello Ray, hello list!

> Running on Solaris 10 U9 here. How do most of you monitor disk usage /
> capacity on your large zpools remotely via SNMP tools?
>
> Net-SNMP seems to be using a 32-bit unsigned integer (based on the MIB)
> for hrStorageSize and friends, and thus we're not able to get accurate
> numbers for sizes >2TB.
>
> Looks like potentially later versions of Net-SNMP deal with this
> (though I'm not sure on that), but the version of Net-SNMP shipped
> with Solaris 10 is, of course, not bleeding edge. :)

Sorry to be a lamer, but "me too"...

Has anyone integrated SNMP-based ZFS monitoring with their favorite
management tool? I am looking for disk usage warnings, but I am also
interested in "OFFLINE" messages, or nonzero values for
READ/WRITE/CKSUM errors.

Casual googling did not turn up anything that looked promising. There
is an older ex-Sun download of an SNMP kit, but to be candid I
haven't really looked at it yet.

Thanks -- Volker
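Lacking a ready-made kit, a home-grown check along the lines described
above could scan "zpool status" output for non-ONLINE devices and
nonzero error counters. A sketch; the quoted sample stands in for real
output, and the device names and counters are invented:

```shell
# Flag any vdev that is not ONLINE or has READ/WRITE/CKSUM errors.
# The sample below imitates the device table of "zpool status".
zpool_sample='  NAME        STATE     READ WRITE CKSUM
  tank        DEGRADED      0     0     0
    mirror-0  DEGRADED      0     0     0
      c0t0d0  ONLINE        0     0     0
      c0t1d0  OFFLINE       0     0     3'

printf '%s\n' "$zpool_sample" | awk '
  NR > 1 && ($2 != "ONLINE" || $3 > 0 || $4 > 0 || $5 > 0) {
      printf "ALERT: %s state=%s errors=%s/%s/%s\n", $1, $2, $3, $4, $5
  }'
```

The resulting ALERT lines could then be fed to whatever notification
channel the management tool provides (SNMP trap, email, syslog).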
Re: [zfs-discuss] What are .$EXTEND directories?
> On our build 147 server (pool version 22) I've noticed that some
> directories called ".$EXTEND" (no quotes) are appearing underneath some
> shared NFS filesystems, containing an empty file called "$QUOTA". We
> aren't using quotas.
>
> What are these? Googling for the names doesn't really work too well :-(
>
> I don't think they're doing any harm, but I'm curious. Someone's bound
> to notice and ask me as well :-)

Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
especially when combined with 'NTFS'. :-)

Check out the table on "Metafiles" here:

http://en.wikipedia.org/wiki/NTFS

Regards -- Volker
[zfs-discuss] [osol-announce] FRAOSUG Meeting on Apr 5th, 2011 (Tuesday)
[This is an invitation to a local event in Frankfurt, Germany. While
everyone is welcome to attend, please note that the meeting language
is German. Since this event will focus on ZFS, I have included the
ZFS list this time.]

Hello everyone!

This is the invitation to the 13th meeting of the FRAnkfurt
OpenSolaris User Group:

http://fraosug.de

When:  Tuesday, April 5th, 2011, 6 pm to about 9 pm
Where: Commerzbank AG, DLZ5/PHH, Hafenstraße 51,
       60327 Frankfurt am Main

Agenda:
1. Welcome
2. Experiences with FreeBSD STABLE and ZFS v28 (Christopher J. Ruwe)
3. ZFS in computer forensics (Norbert Freitag, Stephan Forth)
4. Current state of ZFS development inside and outside of Oracle
   (Volker A. Brandt)
5. Open discussion, outlook, "community leader", and any other topics
6. Afterwards, an optional beer around the corner at the
   "Saar-Karree"... :-)

Refreshments:
Commerzbank is kindly providing the room. But this time we have a
special highlight: Baluna GmbH is supporting this FRAOSUG meeting
with food and drinks! Baluna GmbH ( http://www.baluna.de/baluna/cms/pid/2 )
is an established IT consulting company offering know-how and
specialists for IT projects as well as software development. Baluna
offers expertise specifically for the financial industry as well as
experience in a number of other industries (telecommunications,
airlines, transport, and tourism), plus technology knowledge in Unix,
Java, Salesforce, and SAP.

Directions:
By car: take the A66 / A5 to the A648 and follow it to its end;
continue along Theodor-Heuss-Allee and Friedrich-Ebert-Anlage to
Platz der Republik, then turn right onto Mainzer Landstraße. Note:
there are a few parking spots in the evening, but only a few!
By public transport:
* Tram: lines 11 and 21, stop Güterplatz
* ... or simply walk from the Hauptbahnhof west along Niddastraße to
  its end.

Registration:
Please register here ( http://www.doodle.com/2x8ndw46q65cz66g ) by
Tuesday, April 5th, 2011, 4 pm at the latest!

On Tuesday evening, simply walk through the large revolving door into
the DLZ5 building (there are signs at various points on the
Commerzbank premises). The building is the former Post high-rise.
Inside, someone will be waiting for you at the reception desk. Please
bring an ID card or similar so that a guest badge can be issued.
Commerzbank employees who already have access to DLZ5 should still
use the Doodle registration, so that we can estimate how many
participants are coming.

The meeting takes place in room 11.60.018.

Everyone is welcome! Anyone who would like to talk about their
experiences, their interests, or their current OpenSolaris tinkering
is cordially invited to do so.

Best regards -- Volker A. Brandt
Re: [zfs-discuss] arcstat updates
Hello Richard!

> I've been working on merging the Joyent arcstat enhancements with some
> of my own and am now at the point where it is time to broaden the
> requirements gathering. The result is to be merged into the illumos tree.

Great news!

> 1. Should there be flag compatibility with vmstat, iostat, mpstat, and
> friends?

Don't bother. I find that I need to look at the man page anyway if I
want to do anything that goes beyond "-i 1". :-)

> 2. What is missing?

Nothing obvious to me.

> 3. Is it ok if the man page explains the meanings of each field, even
> though it might be many pages long?

Yes, please!!

> 4. Is there a common subset of columns that are regularly used that
> would justify a shortcut option? Or do we even need shortcuts? (eg -x)

No. Anything I need more than 1-2 times I will turn into a shell
alias anyway ("alias zlist zfs list -tall -o mounted,mountpoint,name" :-).

> 5. Who wants to help with this little project?

My first reaction was ENOTIME. :-( What kind of help do you need?

Regards -- Volker
Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28
Hello Eugen!

> I finally came around to installing NexentaCore 3.1 along with
> napp-it and AMP on an HP N36L with 8 GBytes RAM. I'm testing
> it with 4x 1 and 1.5 TByte consumer SATA drives (Seagate)
> with raidz2 and raidz3 and like what I see so far.
>
> Given http://opensolaris.org/jive/thread.jspa?threadID=139315
> I've ordered an Intel 311 series for ZIL/L2ARC.
>
> I hope to use the above with 4x 3 TByte Hitachi Deskstar 5K3000
> HDS5C3030ALA630, given the data from Backblaze in regards to their
> reliability.

I would be very interested in hearing about your success --
especially whether the Hitachi HDS5C3030ALA630 SATA-III disks work in
the N36L at all. My guess would be that the on-board SATA-II
controller will not support more than 2 TB, but I have not found a
definitive statement. HP certainly will not sell you disks bigger
than 2 TB for the N36L.

Regards -- Volker
Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28
> I'm 99% sure the N36L takes 3 TByte SATA, as we have 5 such
> systems in production using the more expensive 3 TByte Hitachis.

That is very good to hear, thank you!

> You can't boot from them, of course, but that's what the internal
> USB and external eSATA ports are good for.

Of course, but that's a BIOS limitation and has nothing to do with
the SATA controller, which was my first concern.

Regards -- Volker
Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> > Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
>
> Richard, the ZFS Best Practices Guide says not to:
>
> "Do not use the same disk or slice in both an SVM and ZFS configuration."

Hmmm... my guess is that this means that one shouldn't layer SVM and
ZFS devices. I can't see any problems with just using the same disk.

For Solaris 10 (without the ZFS root feature) I have been doing this
routinely: root and swap are mirrored metadevices, and the rest of
the root disks are a mirrored zpool providing /var, /opt, etc.

Works Just Fine(TM)

Regards -- Volker
Re: [zfs-discuss] more ZFS recovery
Anton B. Rang writes:
> dumping out the raw data structures and looking at
> them by hand is the only way to determine what
> ZFS doesn't like and deduce what went wrong (and
> how to fix it).

http://www.osdevcon.org/2008/files/osdevcon2008-max.pdf

:-)
Re: [zfs-discuss] web interface not showing up
Hello Johan!

> But when I wanted to start to play with the web admin interface of ZFS,
> it didn't show up at https://localhost:6789 as it should, according to
> all the posts I've been reading. I tried different things, like the IP
> address instead of localhost, and the server name, with or without
> https, and with or without /zfs on the end.
>
> Also tried from a different machine. No website showed up, and I
> checked services, but services said inetd was running from the start.
> It is also needed for the remote desktop, isn't it?

You need to check if the SMF service is running:

$ svcs webconsole
STATE          STIME    FMRI
disabled       Aug_25   svc:/system/webconsole:console

If it is not, you need to enable it (as root or equivalent):

# svcadm -v enable webconsole
svc:/system/webconsole:console enabled.
# svcs webconsole
STATE          STIME    FMRI
online         19:07:24 svc:/system/webconsole:console

See the man pages for more details. Hope this helps.

Many greetings from Germany -- Volker
Re: [zfs-discuss] web interface not showing up
> > You need to check if the SMF service is running:
> >
> > # svcadm -v enable webconsole
> > svc:/system/webconsole:console enabled.
> > # svcs webconsole
> > STATE          STIME    FMRI
> > online         19:07:24 svc:/system/webconsole:console
>
> Can the service be enabled for remote connections?

Yes, you need to set the corresponding SMF property. Check the value
of "options/tcp_listen":

# svcprop -p options/tcp_listen webconsole
true

If it says "false", you need to set it to "true". Here's a snippet
from the Sun documentation:

# svccfg
svc:> select system/webconsole
svc:/system/webconsole> setprop options/tcp_listen=true
svc:/system/webconsole> quit
# /usr/sbin/smcwebserver restart
# netstat -a | grep 6789

I think you could also use "svcadm" to restart the service instead of
the "smcwebserver" script. The last line is just to verify that
something is listening on the webconsole port.

Hope this helps -- Volker
Re: [zfs-discuss] web interface not showing up
> > Yes, you need to set the corresponding SMF property. Check the value
> > of "options/tcp_listen":
> >
> > # svcprop -p options/tcp_listen webconsole
> > true
> >
> > If it says "false", you need to set it to "true". Here's a snippet
> > from the Sun documentation:
> >
> > # svccfg
> > svc:> select system/webconsole
> > svc:/system/webconsole> setprop options/tcp_listen=true
> > svc:/system/webconsole> quit
> > # /usr/sbin/smcwebserver restart
> > # netstat -a | grep 6789
>
> Thanks. For some reason I had to do this twice; the first time it
> still said "false".

Odd. Maybe the smcwebserver script is too slow. I don't really know
what it does. :-)

> I was able to connect but now get a Java error... oh well

Hmmm... I run Solaris 10 U4 on SPARC. My /usr/java points to
jdk/jdk1.5.0_16, and I am using Firefox 2.0.0.16. Works For Me(TM) ;-)

Sorry, can't help you any further. Maybe a question for
desktop-discuss?

Good luck -- Volker
[zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)
> > [most people don't seem to know Solaris has ramdisk devices]
>
> That is because only a select few are able to unravel the enigma
> wrapped in a clue that is Solaris :)

Hmmm... very enigmatic, your remark. :-)

However, in this case I suspect it is because ramdisks don't really
work well on Solaris.
Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)
> Note this from vmstat(1M): > > Without options, vmstat displays a one-line summary of the > virtual memory activity since the system was booted. Oops, you're correct. I was only trying to demonstrate that there was ample free memory and ramdiskadm just didn't work. Usually I do that using top, but pasting top output into an email doesn't really parse very well. > In other words, the first line of vmstat output is some value that > does not represent the current state of the system. Try this instead: Yes, that gives correct numbers. However, it would only lead away from the problem with ramdiskadm(1M). As such, the numbers are almost identical on my system anyway. :-) > >From a free memory standpoint, the current state of the system is very > different than the typical state since boot. Quite right. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
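The point made above — that vmstat's first output line is a since-boot average, not the current state — is easiest to see by taking interval samples; only the later lines reflect what the system is doing now:

```shell
# Two samples five seconds apart: ignore the first data line (boot
# average), read the second (current state).
vmstat 5 2
```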
Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)
> > So they only work on and off. I never bothered to find out what the > > problem was (in fact, I hadn't even tried the ramdiskadm cmd in that > > version of Solaris before this email thread showed up). > > > > AIUI, the memory assigned to a ramdisk must be contiguous. > This makes some sense in that they are designed to be bootable. > If you've been running for a while, the chance of finding large, > contiguous areas decreases. This is very interesting, thanks for enlightening me! Maybe a small note about this behavior is warranted in the NOTES section of ramdiskadm(1M)? :-) Thanks -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS write performance on boot disk
> If that were the case, why would it matter if I was logged into the console, > and why would subdirectories of my home exhibit better write performance > than the top level home directory? A write to /export/home/username is > slower than to /export/home/username/blah, but ONLY if that user is logged > into the console. This smells of name resolution delays somewhere. Do you have a shell prompt that gets some host name or user name from name services? Is your /home directory owned by a non-existing user or group? Do you accidentally have something enabled in /etc/nsswitch.conf that does not exist (ldap, nis, nis+)? Maybe the first miss gets cached and all other misses get resolved from the cache? Or your nscd is confused, broken, or disabled altogether? You have an interesting problem there... Good luck -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS write performance on boot disk
> > This smells of name resolution delays somewhere. [...] > I think you've misunderstood something here, perhaps in the way I've tried > to explain it. No, I was just offering a hunch. Writing files into a directory checks access permissions for that directory, and that involves name services. It is unlikely but not impossible that your top-level directory triggers some name service delay. I thought it worth checking. > I'm having trouble figuring out what that might be. Perhaps > your response was to the wrong discussion? :-) Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Resilvering is SLOW!. Not progressing at all
> Hi, everybody. I started a resilvering with a 150 GB ZFS pool 16 hours > ago. The resilvering is not progressing at all: [...] > > [EMAIL PROTECTED] /]# zpool status Try to run "zpool status" as a non-root user and see if the resilver shows any progress. Sometimes, "zpool status" as root seems to cause the resilver to start again from the beginning. The one bug I could find in bugs.opensolaris.org was closed as "not reproducible" so I am not really sure what the status of this problem is. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
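Given the suspicion above that running "zpool status" as root may restart the resilver, progress is best watched from an unprivileged account. A polling sketch (the pool name "tank" is a placeholder, and the grep patterns match the zpool status wording of that Solaris vintage, so they may need adjusting):

```shell
# Poll resilver progress every five minutes as a non-root user.
while zpool status tank | grep -q 'resilver in progress'; do
    zpool status tank | grep 'done'
    sleep 300
done
echo "resilver no longer in progress"
```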
Re: [zfs-discuss] Error: value too large for defined data type
> 2) recompile 64 bit version of rsync > Tried that - tried setting LDFLAGS='-L/lib/64' > & recompiling rsync, but it did not seem to pick up the right libraries. > Suggestions? Assuming you are running (t)csh and have the Sun Studio compiler, do: unsetenv LD_LIBRARY_PATH unsetenv LDFLAGS setenv CC /opt/SUNWspro/bin/cc setenv CFLAGS "-fast -s -m64" ./configure -C --disable-nls --prefix=/opt/local make make install If you have a Bourne-like shell, replace "setenv XX YY" with "export XX=YY". If you want the software to land in some other place, change the "/opt/local" above. Good luck -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
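For readers on a Bourne-like shell, the same build environment looks like this (flags exactly as above; the compiler path is only valid if Sun Studio is installed in the usual place):

```shell
# Bourne/ksh/bash equivalent of the csh "setenv" lines above.
unset LD_LIBRARY_PATH LDFLAGS
CC=/opt/SUNWspro/bin/cc;  export CC
CFLAGS="-fast -s -m64";   export CFLAGS
echo "$CC $CFLAGS"   # prints: /opt/SUNWspro/bin/cc -fast -s -m64
```

With those exported, the same ./configure, make, make install sequence applies.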
Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.
> A while back, I posted here about the issues ZFS has with USB hotplugging > of ZFS formatted media when we were trying to plan an external media backup > solution for time-slider: > http://www.opensolaris.org/jive/thread.jspa?messageID=299501 [...] > There are a few minor issues however which I'd love to get some feedback on > in addition > to the overall direction of this proposal: > > 1. When the external device is disconnected, the zpool status output reports > that the > pool is in a degraded state and displays a status message that indicates > that there > was an unrecoverable error. While this is all technically correct, and is > appropriate > in the context of a setup where it is assumed that the mirrored device is > always > connected, it might lead a user to be unnecessarily alarmed when his > "backup" mirror > disk is not connected. We're trying to use a mirror configuration here in > a manner that > is a bit different than the conventional manner, but not in any way that > it's not designed > to cope with. [...] > So I'd like to ask if this is an appropriate use of ZFS mirror functionality? > It has many benefits > that we really should take advantage of. Yes, by all means. I am doing something very similar on my T1000, but I have two separate one-disk pools and copy to the backup pool using rsync. I would very much like to replace this with automatic resilvering. One prerequisite for wide adoption would be to fix the issue #1 you described above. I would advise not to integrate this anywhere before fixing that "degraded" display. BTW is this USB-specific? While it seems to imply that, you don't state it anywhere explicitly. I attach my backup disk via eSATA, power it up, import the pool, etc. Not really hotplugging... Regards -- Volker -- Volker A. 
Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
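The attach/resilver/disconnect cycle discussed in this thread can be sketched as follows (pool and device names are placeholders; using zpool offline/online rather than detach keeps the backup half reattachable, so only the deltas resilver on reconnect):

```shell
zpool attach tank c0t0d0 c1t0d0   # one-time: make the backup disk a mirror half
zpool status tank                 # wait here until the initial resilver completes
zpool offline tank c1t0d0         # take the backup half offline before unplugging
# ... later, with the disk reconnected:
zpool online tank c1t0d0          # resilvers only changes made while offline
```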
Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.
> > Send/receive has the advantage that the receiving filesystem is > > guaranteed to be in a stable state. > > Can send/receive be used on a multiuser running server system? Yes. > Will > this slow down the services on the server much? "Depends". On a modern box with good disk layout it shouldn't. > Can the zfs receiving "end" be transformed into a normal file.bz2 Yes. However, you have to carefully match the sending and receiving ZFS versions; not all versions can read all streams. If you delay receiving the stream, it can happen that you won't be able to unpack it any more. Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
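Capturing the receiving end as a plain .bz2 file, and the matching restore, might look like this (dataset and path names are placeholders):

```shell
zfs snapshot tank/data@backup
zfs send tank/data@backup | bzip2 -c > /backup/data-backup.zfs.bz2
# restore later -- the receiving zfs must understand the sender's stream version:
bzcat /backup/data-backup.zfs.bz2 | zfs receive tank/data_restored
```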
Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme
> http://docs.sun.com/app/docs/doc/817-5093/disksconcepts-20068?a=view > > (To add more confusion, partitions are also referred to as slices.) Nope, at least not on x86 systems. A partition holds the Solaris part of the disk, and that part is subdivided into slices. Partitions are visible to other OSes on the box; slices aren't. Wherever the wrong term appears in Sun docs, it should be treated as a doc bug. For Sparc systems, some people intermix the two terms, but it's not really correct there either. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
> The Samsung HD103UJ drives are nice, if you're not using > NVidia controllers - there's a bug in either the drives or the > controllers that makes them drop drives fairly frequently. Do you happen to have more details about this problem? Or some pointers? Thanks -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] 4 disk raidz1 with 3 disks...
> JZ wrote: [...] > Is this guy seriously for real? It's getting hard to stay on the list > with all this going on. No list etiquette, complete irrelevant > ramblings, need I go on? He probably has nothing better to do. Just ignore him; that's what they dislike most. He will go away eventually. Just put him in your killfile. Don't feed the trolls. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS extended ACL
> Given the massive success of GNU based systems (Linux, OS X, *BSD) Ouch! Neither OS X nor *BSD is GNU-based. They do ship with GNU-related things but that's been a long and hard battle. And the massive success has really only been Linux due to brilliant PR (and FUD about *BSD) and OS X due to Apple's commercial approach to BSD. Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS extended ACL
> > Ouch! Neither OSX nor *BSD are GNU-based. They do ship with > > GNU-related things but that's been a long and hard battle. > > While you are right, this isn't going to help here. I agree. > I see three possible types of Linux users that should be discussed. > > 1) The really dumb Linux users. These people will not notice that > they are using the Solaris/UNIX userland instead of the GNU userland. > So why by default put /usr/gnu/bin in front for them? Hmmm... I don't think a Linux user can be "really dumb". He/she would not run Linux, but a certain other system. :-) > 2) The trolls. They are a small group but very active with publishing > their "opinion" in the net. > Do we really need or like to support them? > > 3) The educated/smart Linux users. They know about the differences > and they are able to decide whether they like to use the Solaris > tools with full Solaris feature support by default or whether > their default PATH should have /usr/gnu/bin first. > So why by default put /usr/gnu/bin in front for them? I think only the third group of people are really interested in trying out Solaris/OpenSolaris. So it's really a toss-up: Linux types can be expected to fix up their path; so can we Solaris types. But as others have rightly pointed out this discussion should really take place on some advocacy list. Regards -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] terabyte or terror-byte disk drives
> yes, you've guessed it, the drive errors originated when the box was > moved. A zpool scrub generated thousands of errors on the damaged > drive. Now it's offline. Al is sad. :( [...] > Just a heads up - it might just help someone else on the list who has > developed bad habits over the years.. I've always felt squeamish when I had to move boxes with spinning disks, or when I had to watch someone else do it. Thanks for justifying my paranoia... and good luck with the replacement drives. Best regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS root pool over more than one disks?
> zpool add rpool mirror disk1 disk3 > > But when I try to add this it seems to fail with: > cannot add to 'rpool': root pool can not have multiple vdevs or separate logs What you want is "attach" instead of "add": zpool attach [-f] pool device new_device Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately. -f  Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner. So in your case: zpool attach rpool disk1 disk3 This would attach disk3 as a mirror for disk1, assuming your root pool consists of disk1 only. If indeed you want two mirrored disk pairs in your root pool, this is currently not supported, since a root pool can only have one vdev; hence the error message. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
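One detail worth adding to the attach advice above: resilvering does not copy boot blocks, so the new mirror half must get them installed by hand. A SPARC sketch (device names are placeholders; on x86 the corresponding tool is installgrub):

```shell
zpool attach rpool disk1 disk3        # attach disk3 as a mirror of disk1
# wait for the resilver to finish, then put a ZFS boot block on the new half:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/disk3s0
```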
Re: [zfs-discuss] ZFS root pool over more than one disks?
> I was just asking because Cindy said there could be a 2-way mirror > config for a root pool. > > Guess I'll either get bigger disks or live with these smaller ones.. > > > What you want is "attach" instead of "add": Ah, OK. So the problem hinges more on the question what a "two-way" mirror is. :-) But I'm afraid that your conclusion (do nothing or go to bigger disks) is absolutely correct. Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Write caches on X4540
> so, basically, my question is: Is there a way to quickly or permanently > disable the write cache on every disk in an X4540? Hmmm... the only idea I have is to see how format(1M) does it and steal the code to write a small disable-cache tool. :-) Have a look at uscsi(7I) and specifically the ca_write_disable() routine in http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/format/menu_cache.c HTH -- Volker -- ---- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
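Short of writing a uscsi-based tool, format(1M)'s expert mode can also be driven non-interactively per disk, since format -e exposes a cache submenu. This is an untested sketch: the menu entries (cache, write_cache, disable) exist for many SCSI/SAS disks but not necessarily for all 48 in an X4540, and the device list is a placeholder — verify interactively on one drive first:

```shell
# Feed format -e's cache menu from stdin for each disk in the list.
for d in c0t0d0 c0t1d0; do
    printf 'cache\nwrite_cache\ndisable\nquit\nquit\nquit\n' | format -e -d "$d"
done
```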
Re: [zfs-discuss] Supermicro AOC-SASLP-MV8
> Bouncing a thread from the device drivers list: > http://opensolaris.org/jive/thread.jspa?messageID=357176 > > Does anybody know if OpenSolaris will support this new Supermicro card, > based on the Marvell 88SE6480 chipset? It's a true PCI Express 8 port JBOD > SAS/SATA controller with pricing apparently around $125. > > If it works with OpenSolaris it sounds pretty much perfect. Yes, it does look good. If anyone finds something comparable with external connectors on a low-profile PCIe card, please post here. :-) Thanks -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
> 2) disks that were attached once leave a stale /dev/dsk entry behind > that takes full 7 seconds to stat() with kernel running at 100%. Such entries should go away with an invocation of "devfsadm -vC". If they don't, it's a bug IMHO. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
> >> 2) disks that were attached once leave a stale /dev/dsk entry behind > >> that takes full 7 seconds to stat() with kernel running at 100%. > > > > Such entries should go away with an invocation of "devfsadm -vC". > > If they don't, it's a bug IMHO. > > yes, they go away. But the problem is when you do this and replug the > disks they don't show up again... And that's even worse IMO... So you want such disks to behave more like USB sticks? If there was a good way to mark certain devices or a device tree as "volatile" then this would be an interesting RFE. I would certainly not want *all* of my disks to "come and go as they please". :-) I am not sure how feasible an implementation would be though. Regards -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only). > I thought the 1078 are supposed to work with SPARC (mega_sas). Hmmm...

uname -a
SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000

man mpt
Devices                                                   mpt(7D)

NAME
     mpt - SCSI host bus adapter driver

SYNOPSIS
     s...@unit-address

DESCRIPTION
     The mpt host bus adapter driver is a SCSA compliant nexus
     driver that supports the LSI 53C1030 SCSI, SAS1064, SAS1068
     and Dell SAS 6i/R controllers.
...

:-) -- ---- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
> The MCP55 is the chipset currently in use in the Sun X2200 M2 series of > servers. ... which has big problems with certain Samsung SATA disks. :-( So if you get such a board be sure to avoid Samsung 750GB and 1TB disks. Samsung never acknowledged the bug, nor have they released a firmware update. And nVidia never said anything about it either. Of course I only found out about it after buying lots of Samsung disks for our X2200s. Sigh... Regards -- Volker -- ---- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] SPARC SATA, please.
> > So if you get such a board be sure to avoid Samsung 750GB and > > 1TB disks. Samsung never aknowledged the bug, nor have they released > > a firmware update. And nVidia never said anything about it either. [...] > I'm a Hitachi disk user myself, and they work swell. The Seagates I have > in my X2200 M2 seem to work fine, as well. Yes, all HGST disks I've tried so far work just fine. > I've not tried any SSDs yet with the MCP55 - since they're heavily > Samsung under the hood (regardless of whose name is on the outside), I > _hope_ it was just a HD-specific firmware bug. I think it is quite HD-specific. I have another, slightly older, 160GB Samsung disk that worked fine as root disk in the X2200M2. If you do try an SSD please let us know. :-) Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Supported Motherboard SATA controller chipsets?
> I'm currently trying to decide between a MB with that chipset and > another that uses the nVidia 780a and nf200 south bridge. > > Is the nVidia SATA controller well supported? (in AHCI mode?) Be careful with nVidia if you want to use Samsung SATA disks. There is a problem with the disk freezing up. This bit me with our X2100M2 and X2200M2 systems. Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Supported Motherboard SATA controller chipsets?
Hello Kyle! Sorry for the late answer. > > Be careful with nVidia if you want to use Samsung SATA disks. > > There is a problem with the disk freezing up. This bit me with > > our X2100M2 and X2200M2 systems. > I don't know if it's related to your issue, but I have also seen > comments around about the nv-sata windows drivers hanging up when > formatting drives larger than 1024GB. But that's been fixed in the latest > nvidia windows drivers. > > Does that sound related, or like something different? Something different. The problem with the X2100M2 and X2200M2 will only occur with specific Samsung disk models, in my case the HD103UJ 1 TB disk. The system will work fine, until suddenly the disk freezes up. The disk is then no longer recognized at all. It will not respond to any command whatsoever. After a power cycle, the disk is fine -- until the next freeze. I think the same happened to some people on the 'net with the 750GB variant of the same disk, but I have only seen it with the 1 TB type. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] zfs root, jumpstart and flash archives
> I added some preliminary ZFS/flash information here: > > http://opensolaris.org/os/community/zfs/boot/flash/ Cool. Just a general comment: Since the term "flash" is quite overloaded, especially in the context of ZFS, I suggest that you always use the full term "flash archive" whenever possible, to avoid confusion. Otherwise, you might hear IBM tell people "ZFS does not support flash". Regards -- Volker PS: Just joking, I'm sure IBM sales would never say such a thing! :-) -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS Confusion
> > Can you actually see the literal commands? A bit like MySQL's 'show > > create table'? Or are you just intrepreting the output? > > Just interpreting the output. Actually you could see the commands on the "old" server by using zpool history oradata Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Raid-Z Issue
> Seagate 1.5 TB drives? This sounds somewhat ominous. Are there known problems? Thanks -- Volker -- ---- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Raid-Z Issue
Brandon Mercer writes: > On Fri, Sep 11, 2009 at 2:57 PM, Volker A. Brandt wrote: > >> Seagate 1.5 TB drives? > > > > This sounds somewhat ominous. Are there known problems? > > They are so well known that simply by asking if you were using them > suggests that they suck. :) There are actually pretty hit or miss > issues with all 1.5TB drives but that particular manufacturer has had > a few more than others. Ah, OK. I thought they were all just first generation hiccups and solved with current firmware. So should we stay away from all 1.5TBs, or just from these Seagates? Thanks -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] zfs send/recv question
> > zfs send -i z/[EMAIL PROTECTED]z/[EMAIL PROTECTED]| bzip2 -c |\ > >ssh host.com "bzcat | zfs recv -v -F -d z" > > Since I see 'bzip2' mentioned here (a rather slow compressor), I > should mention that based on a recommendation from a friend, I gave a > compressor called 'lzop' (http://www.lzop.org/) a try due to its > reputation for compression speed. If you have several cores, I can recommend bzip2smp instead of "bzip2 -c"; see http://bzip2smp.sourceforge.net/ for more info. I am using it on my T1000 with good results; performance scales linearly if you use only one thread per core (more threads will cause lots of cache misses). Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/~vab/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
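The earlier pipeline with bzip2smp dropped in for "bzip2 -c" might look like this (host, pool, and snapshot names are placeholders; bzip2smp must be installed on the sending side):

```shell
zfs send -i tank@snap1 tank@snap2 | bzip2smp | \
    ssh host.com "bzcat | zfs recv -v -F -d tank"
```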
Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
Hello Kyle! > All of these mounts are failing at bootup with messages about > non-existent mountpoints. My guess is that it's because when /etc/vfstab > is running, the ZFS '/export/OSImages' isn't mounted yet? Yes, that is absolutely correct. For details, look at the start method of svc:/system/filesystem/local:default, which lives in the script /lib/svc/method/fs-local. There you can see that ZFS is processed after the vfstab. > Any ideas? The only way I could find was to set the mountpoint of the file system to legacy, and add it to /etc/vfstab. Here's an example:

# ZFS legacy mounts:
SHELOB/var   -   /var             zfs    -   yes   -
SHELOB/opt   -   /opt             zfs    -   yes   -
SHELOB/home  -   /home            zfs    -   yes   -
#
# -- loopback mount -- begin
# loopback mount for /usr/local:
/opt/local   -   /usr/local       lofs   -   yes   ro,nodevices
/home/cvs    -   /opt/local/cvs   lofs   -   yes   rw,nodevices
# -- loopback mount -- end

Before I added /home to vfstab, the loopback for /opt/local/cvs would fail. HTH -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
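A quick follow-up to the above: flipping a file system over to legacy mounting is one property change per dataset. Roughly, using the dataset names from my vfstab example (run as root):

```
# Hand mount control over to /etc/vfstab:
zfs set mountpoint=legacy SHELOB/var
zfs set mountpoint=legacy SHELOB/opt
zfs set mountpoint=legacy SHELOB/home
```

After that, ZFS no longer mounts them itself and the matching vfstab entries take over at boot.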
Re: [zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
> > The only way I could find was to set the mountpoint of the file system > > to legacy, and add it to /etc/vfstab. Here's an example: > > I tried this last night also, after sending the message and I made it > work. Seems clunky though. Yes, I also would have liked something more streamlined. But since adding entries to vfstab worked I did not pursue it further. > I wonder if there is a technical reason why it has to be done in this order? I can only guess that anything else would have been too complex. The whole sequence seems to have room for improvement. For example in svc:/system/filesystem/root:default there are some checks to mount optimized libc and hwcap libraries, and /usr is mounted, but not the root fs (which I would have expected going by the FMRI name). > More importantly, I wonder if ZFS Boot will re-order this since the > other FS's will all be ZFS. My guess is that the whole thing will be rewritten. > (Actually I wonder what will be left in /etc/vfstab?) Good question. I would think that the file will still be around; it'll have all the non-ZFS mount points, but the root fs will be mounted by ZFS.

> > # ZFS legacy mounts:
> > SHELOB/var   -   /var             zfs    -   yes   -
> > SHELOB/opt   -   /opt             zfs    -   yes   -
> > SHELOB/home  -   /home            zfs    -   yes   -
> > #
> > # -- loopback mount -- begin
> > # loopback mount for /usr/local:
> > /opt/local   -   /usr/local       lofs   -   yes   ro,nodevices
> > /home/cvs    -   /opt/local/cvs   lofs   -   yes   rw,nodevices
> > # -- loopback mount -- end
> >
> > Before I added /home to vfstab, the loopback for /opt/local/cvs would > > fail.

> I'm guessing that /opt/local/cvs is *not* visible as /usr/local/cvs ??? Oh, but it is: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] zfs volume question on system disk
Peter: > if one has data files on a system disk and one wants to make a zfs > volume of those non-OS filesystems, > will it take more of a performance hit as a zfs volume or as a regular > filesystem or no difference at all Forgive me if I do not understand your question. Do you have a "non-OS" slice (e.g. "/data" or "/export/home") on a system disk, and you want to convert that slice into a ZFS pool? Or do you already have a ZFS pool, and want to create a ZFS volume within that pool, and then want to put a file system on that volume? Of course it depends on the usage pattern of the data, but I don't think you'd notice any performance difference either way -- unless you do things like streaming writes approaching the disk bandwidth, etc. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
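A follow-up on the second reading above (a ZFS volume inside an existing pool, with a file system on top): the mechanics are roughly as follows. The pool name, volume name, size, and mount point are all invented for illustration; adjust to your layout.

```
# Create a 10 GB ZFS volume in an existing pool "tank":
zfs create -V 10g tank/datavol
# Put a UFS file system on its raw device and mount it:
newfs /dev/zvol/rdsk/tank/datavol
mount /dev/zvol/dsk/tank/datavol /data
```

Note that with this setup you pay for two layers (UFS on top of a zvol), which is usually only worth it when you specifically need a non-ZFS file system on ZFS-managed storage.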
Re: [zfs-discuss] ZFS sharing question.
>Anyone out there remember the -d option for share? How do you set the > share description using the zfs set commands, or is it even possible? Yes, it is quite hard to find. I filed a bug about this last summer: http://bugs.opensolaris.org/view_bug.do?bug_id=6565879 The way to set the share description is shown in the bug description. Basically, you say something like: zfs set sharenfs="rw,anon=0 Some Text Here" POOL/fs Note that you will get a trailing tab character which messes up the "share" output. The bug database says "Commit to Fix s10u6_02" so I'm hopeful. :-) Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] /var/sadm on zfs?
> On my heavily-patched Solaris 10U4 system, the size of /var (on UFS) > has gotten way out of hand due to the remarkably large growth of > /var/sadm. Can this directory tree be safely moved to a zfs > filesystem? How much of /var can be moved to a zfs filesystem without > causing boot or runtime issues? It seems your original question hasn't been answered yet... :-) I have used U4 with the complete /var on zfs for quite a while and have not encountered any problems. My usual setup for mirrored root disks is:

what    where                   how
/       /dev/md/dsk/d0          ufs
swap    /dev/md/dsk/d1          swap
ROOT    cXt0d0s3 + cXt1d0s3     zpool

The ROOT pool contains /var, /opt, and /export. I set both quota and reservation for /var to be on the safe side. I have not done any stress testing with zones, but normal use is absolutely trouble-free. HTH -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
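A follow-up on the quota/reservation remark above: it boils down to two property settings. A sketch -- the dataset name matches my ROOT-pool layout, but the sizes are just examples, so adjust to taste:

```
# Guarantee /var some space (reservation) and cap how much it can
# grab (quota), so it can neither starve nor swallow the pool:
zfs set reservation=2g ROOT/var
zfs set quota=8g ROOT/var
```

With both set, a runaway /var/sadm can fill at most its quota, and other datasets in the pool cannot squeeze /var below its reservation.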
Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!
> Consider a motherboard based on the R690G/SB600 chipset or the nVidia 7050. > > The ASUS M2A-VM (690) is $70 and has on board video. I think only the > sound is not supported. Likewise the nVidia ASUS M2N-VM (7050) is $70. > I believe both have only 4 SATA ports, but that should be ok for your > build. I have an M2A-VM. It runs 2008.05 just fine. I have not checked out sound support yet, but a driver is listed by the driver detection tool. It is correct that the board has 4 SATA ports. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS Project Hardware
> Timely discussion. I too am trying to build a stable yet inexpensive storage > server for my home lab [...] > Other options are that I build a whitebox or buy a new PowerEdge or Sun > X2200 etc If this is really just a lab storage server then an X2100M2 will be enough. Just get the minimum spec, buy two 3.5" SATA-II disks (I guess the sweet spot is 750GB right now), and buy 8GB of third party memory to max out the box for ZFS. Then set up a ZFS-rooted Nevada and you're in business. Depending on your requirements, you have slightly over 1.3TB capacity, or about 690GB mirrored. I have just such a machine and am very happy. I do run S10U5 on it since I need the box for other things, too. So I don't have ZFS root. HTH -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!
Hi Darryl! > VAB: Did you have any issues with this board, or was everything detected? I > believe reading in the HCL that the sound card was not detected... I did not have any issues. Here's what prtconf says about the sound:

prtconf -vpc /dev/sound/0
pci1043,8249, instance #0
    System software properties:
        name='play-interrupts' type=int items=1 value=00af
        name='record-interrupts' type=int items=1 value=00af
        name='interrupt-priorities' type=int items=1 value=0009
    Hardware properties:
        name='acpi-namespace' type=string items=1 value='\_SB_.PCI0.AZAL'
        name='assigned-addresses' type=int items=5 value=8300a210..fe02..4000
        name='reg' type=int items=10 value=a200.....0300a210....4000
        name='compatible' type=string items=7 value='pci1002,4383.1043.8249.0' + 'pci1002,4383.1043.8249' + 'pci1043,8249' + 'pci1002,4383.0' + 'pci1002,4383' + 'pciclass,040300' + 'pciclass,0403'
        name='model' type=string items=1 value='Mixed Mode device'
        name='power-cons

The Device Driver Utility says that audiohd is attached. The audioplay tool works out of the box with some ancient .au files I have. However, I have not been able to convince either Rhythmbox or Totem to play anything at all... they complain about missing codecs etc. Since in typical Gnome fashion they don't say anywhere how to get codecs, I've given up on them. :-( For completeness, here's prtdiag:

System Configuration: System manufacturer System Product Name
BIOS Configuration: Phoenix Technologies, LTD ASUS M2A-VM ACPI BIOS Revision 1705 03/28/2008

Processor Sockets
Version                                          Location Tag
-----------------------------------------------  ------------
AMD Athlon(tm) 64 X2 Dual Core Processor 4400+   Socket AM2

Memory Device Sockets
Type   Status   Set   Device Locator   Bank Locator
-----  -------  ----  ---------------  ------------
DDR2   in use   0     A0               Bank0/1
DDR2   in use   0     A1               Bank2/3
DDR2   in use   0     A2               Bank4/5
DDR2   in use   0     A3               Bank6/7

On-Board Devices

Upgradeable Slots
ID   Status      Type          Description
---  ----------  ------------  -----------
1    available   PCI           PCI1
2    available   PCI           PCI2
3    available   PCI Express   PCIEX16
4    available   PCI Express   PCIEX1_1

HTH -- Volker -- Volker A.
Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] disk names?
> I'd like to suggest a name: lsdisk and lspart to list the disks and also > the disks/partitions that are available. (Or maybe lsdisk should just > list the disks and partitions in an indented list? Listing the > partitions is important. Listing the controllers might not hurt > anything, either.) Hmmm... the current scheme seems to be "subject verb <object>", e.g.:

disk list
disk info c1t9d0
disk info -p c1*

With partitions, I assume you mean slices. Partitions are outside of Solaris and managed by fdisk. > Linux has lspci[0], lsscsi, lsusb, lsof, and a number of other > ls-tab-tab utilities out-of-the-box[1]. These utilities would be quite > intuitive for folks who've learned Linux first, and would help people transition to Solaris quickly. Do we really need to bend and warp everything to suit Linux switchers? (only half serious :-). > When I first learned Solaris (some years ago now), it took me a > surprisingly long time to get the device naming scheme and the partition > numbering. The naming/numbering is quite intuitive (except for that > part about c0t0d0s2 being the entire device[2]), but I would have felt > that I understood it quicker if I'd seen a nice listing that matches the > concept, and also had a quick way to find out the name of that disk that I > just plugged in. My friends who are new to Solaris seem to have the > same problem out of the gate. I did not have this experience. I came from BSD where there were things like /dev/sd0d (which also exist on Solaris), but the Sun way was not too strange... > [0] Including lspci and lsusb with Solaris would be a great idea -- Well, there is scanpci. > [1] Since Solaris 10 still uses /bin/sh as the root shell, I feel that I > must explain that this is tab completion. In bash/zsh/tcsh, hitting tab > twice searches the $PATH for ls* and displays the results. I know > that most everyone on the list already knows this, but I can't help > myself! [ducks!]
At the risk of outing myself as a hardcore nit picker, in tcsh it is Control-D, and only once, not twice. :-) > > [2] If I'm giving someone a tour of Solaris administration, /dev/sda > isn't particularly different from /dev/dsk/c0t0d0. But if I open > /dev/dsk/c0t0d0s2 with a partitioning tool, repartition, then > build/mount a filesystem without Something Bad happening, then my > spectators heads usually explode. After that, they don't believe me > when I tell them that they mostly understand what's going on. Yes, ZFS > and the EFI disklabels fix this when you have a system with a ZFS root > and no UFS disks -- but UFS is still necessary in a lot of > configuration, so this kind of system-quirk should be made obvious to > Unix-literate people coming from non-Solaris backgrounds. Maybe it's because I have a Solaris background, but I fail to see any quirk here... Best regards -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89
[Just stumbled over this while catching up on email...] > +for fs in `zfs list -H | grep "^$ROOTPOOL/$ROOTFS" | grep -w "$ROOTFS" | > grep -v '@' | awk '{ print $1 };'` > > In essence, skip snapshots (@) and non-"rootpool/rootfs/subfs" paths. Note that "zfs list -Hrt filesystem -o name <pool>" will give you a one-column list of all existing file systems below <pool>. So your zfs invocation is optimized to: +for fs in `zfs list -Hrt filesystem -o name "$ROOTPOOL" | tail +2` I'm sure this will save several microseconds. :-) Regards -- Volker -- ---- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
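A small addendum in case the trailing "tail +2" above looks odd: a recursive "zfs list" prints the pool itself as its own first line, and tail simply drops it. A tiny illustration with made-up dataset names -- note that on GNU/Linux systems the portable spelling is "tail -n +2":

```shell
# Simulated 'zfs list -Hrt filesystem -o name rpool' output: the
# pool itself comes first, then its descendant file systems.
printf 'rpool\nrpool/ROOT\nrpool/ROOT/snv_89\nrpool/export\n' |
    tail -n +2
# Prints only the descendants:
#   rpool/ROOT
#   rpool/ROOT/snv_89
#   rpool/export
```

The same trick applies to any command whose first output line is a header or a parent entry you want to skip.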
Re: [zfs-discuss] memory hog
> It might just be me, and the 'feel' of it, but it still feels to me that > the system needs to be under more memory pressure before ZFS gives pages > back. This could also be because I'm typically using systems with either > 128GB, or <= 4GB of RAM, and in the smaller case, not having some > headroom costs me... I can confirm this "feeling". I have several older systems which used to have UFS and now run using ZFS, and the effect is noticeable. I have never gotten around to doing any benchmarks, but as a rule of thumb any box under 2GB RAM is not really good for ZFS. Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
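A follow-up for anyone stuck running ZFS on such a small-memory box: one common mitigation is to cap the ARC outright, so ZFS never claims the headroom in the first place. A sketch of the /etc/system tunable -- the 512 MB figure is only an example, pick what fits the box, and note that a reboot is needed for it to take effect:

```
* /etc/system fragment: cap the ZFS ARC at 512 MB (0x20000000 bytes).
* (Lines starting with '*' are comments in /etc/system.)
set zfs:zfs_arc_max = 0x20000000
```

This trades some ZFS caching performance for predictable free memory, which on a 1-2GB box is usually the better deal.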
Re: [zfs-discuss] memory hog
> I've got a couple of identical old sparc boxes running nv90 - one > on ufs, the other zfs. Everything else is the same. (SunBlade > 150 with 1G of RAM, if you want specifics.) > > The zfs root box is significantly slower all around. Not only is > initial I/O slower, but it seems much less able to cache data. Exactly the same here, though with different hardware (Netra T1 200 with 1 GB RAM and 2x 36 GB SCSI). If you put the UFS on top of an SVM mirror the difference is less noticeable but still there. -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] memory hog
> I think that if you notice the common thread; those who run SPARCs > are having performance issues vs. those who are running x86. I would not say that. For example, my T1000 with 2GB RAM had fair performance. Now that it has 16GB RAM it has improved a lot. :-) Also, I would not call it "performance issues". The initial discussion revolved around performance deltas between SVM/UFS and zpool/ZFS on old Sparc hardware. > I know > from my experience, I have a P4 3.2GHz Prescott desktop with 2.5GB > RAM, and a Lenovo T61p laptop with 4GB, both of them have no > performance issues with zfs; in fact, with zfs, the performance has > gone up. That is because the I/O paths are an intrinsic bottleneck on such hardware. And I think ZFS handles slow I/O paths better (depending on usage patterns of course). I have no issues with old hardware, it's just old. :-) Regards -- Volker -- -------- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] memory hog
> > I have a quite old machine with an AMD Athlon 900MHz with 640Mb of RAM > > serving up NFS, WebDAV locally to my house and running my webserver (Apache) > > in a Zone. For me performance is perfectly acceptable, but this isn't an > > interactive desktop. Not only is performance acceptable when I moved all > > the data (photos, etc) off the internal disk of my (PPC) Mac Mini to the > > NFSv3 accessed ZFS system things on the mac actually got faster. > > > > But surely I could afford to buy a machine with 4gb of RAM after all it is > > only US$50 right ? Yes I could but why should I need to buy more hardware > > when I can use what I already have and not fill up more landfill with non-RoHS > > components (most of this machine, everything other than the CPU fan is > > more than 5 years old). > > GREAT point. Sun shouldn't innovate in software if it doesn't run well on > hardware that should've been thrown away years ago. You are comparing apples with oranges here. The point is not to change software to accommodate obsolete hardware. The point is to optimize existing hardware and modern software. The money is better spent on more RAM than on another CPU/SATA HBA/whatever, in this particular use case. > I don't know ANYONE running around > claiming Solaris is the OS to beat on extremely slow hardware with extremely > minimal hardware specs. That isn't its target market and never will be. > THIS IS AN ENTERPRISE OS! Wrong. OpenSolaris is certainly not an Enterprise OS. It might become one when it is passed the torch from Solaris 10. > I don't expect the programmers at Sun or anywhere else to write their code > for hardware that's 10 years old, or stifle innovation based on that idea. > If that's the sort of project you're looking for I think you've stumbled > onto the wrong mailing list. I think you have just made a fool of yourself. :-) Regards -- Volker -- Volker A.
Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] What is this, who is doing it, and how do I get you to stop?
Hello Brian! > Every time I post to this list, I get an AUTOREPLY from somebody who if > you ask me is up to no good, otherwise they would set a proper From: address > instead of spoofing my domain. Everyone who posts gets this autoreply. > From: [EMAIL PROTECTED] These people are not spoofing your domain; they set a "From:" header with no "@domain". Many MTAs append the local domain in this case. Maybe it's because they use a German umlaut in the "From:" string. Judging from the word "Irrläufer" (German for a piece of misdirected mail), someone at their site has subscribed to zfs-discuss but does not exist there any more. > Received: from mail01.csw-datensysteme.de ([62.153.225.98]) It seems to be a German company (not too far away from me, too :-)). > > X-Mailer: DvISE by Tobit Software, Germany (0241.444A46454D464F4D4E50), As you can see, they use a commercial mail appliance. It's probably just misconfigured. > I don't know who you are, and honestly I don't think I care, I'm going to just > start firewalling you. I recommend everyone else on the list does the same. No need to get uptight, just tell them politely. Eventually they will figure it out. :-) Regards -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] What is this, who is doing it, and how do I get you to stop?
> > Everyone who posts gets this autoreply. > So what do the rest of you do? Ignore it? I for one do ignore it. :-) > > > From: [EMAIL PROTECTED] > > > > These people are not spoofing your domain, they set a "From:" header > > with no "@domain". Many MTAs append the local domain in this case. > > Maybe it's because they use a German umlaut in the "From:" string. > > That's not a valid email address either, which is still wrong. You're right, it's wrong. But I didn't say it was right, right? > > Judging from the word "Irrläufer", someone at their site has subscribed > > to zfs-discuss but does not exist there any more. > > Then they should send back a message pointing out that the user is no longer > there, not just send the whole message back. Preaching to the choir... > > > Received: from mail01.csw-datensysteme.de ([62.153.225.98]) > > It seems to be a German company (not too far away from me, too. :-) > > Hmmm, are you for hire? Maybe you could take a trip out there and deliver > some clue. ;) I am, and I could, but my clue-bat's on loan to a Windows guy. :-))) > > > > X-Mailer: DvISE by Tobit Software, Germany (0241.444A46454D464F4D4E50), > > > > As you can see, they use a commercial mail appliance. It's probably > > just misconfigured. > > Email appliances and Exchange servers are the bane of the internet. Amen! > If they had sent some sort of "this user is no longer here" message or some > such I > would have been less likely to get all jumpy about it. Certainly! See above, look for "choir". > I'll redirect my misguided > anger at yet another poorly managed mail server at the poor sods who admin it. Fair enough. Regards -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A.
Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Re-2: What is this, who is doing it, and how do I get you to stop?
[EMAIL PROTECTED] writes: > ..sorry, there was a misconfiguration in our email-system. I've fixed it in > this moment... > We apologize for any problems you had > > Andreas Gaida Wow, that was fast! And on a Sunday evening, too... So, everything is fixed, and we are all happy now :-) Regards -- Volker -- ------------ Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED] Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 45 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss