Re: ZFS root, error 2 when mounting root

2013-02-25 Thread Paul Kraus
On Feb 24, 2013, at 4:42 AM, bw.mail.lists  wrote:

> Basically, I tried to follow 
> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE, but ended up with 
> a system that didn't know how to mount /.
> 
> There are two scripts attached.

I did not see any attachments.



> The main difference I see between those two scripts is that one doesn't use a 
> cache file and the other one does, hence the name of the scripts. But it 
> should work without cachefile too, shouldn't it? The other difference is how 
> mountpoints are set, but I can't figure out what could be wrong there.

	I am guessing without seeing the scripts, but I assume the cache you 
refer to is the /boot/zfs/zpool.cache file. This file instructs the kernel 
which zpools to import at boot time. If this file is missing or damaged the 
kernel cannot import any zpools. So you MUST have a valid zpool.cache file in 
order to import the zpool containing the "/" zfs dataset.
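
	For reference, putting the cache file in place during an altroot 
install typically looks something like this (a sketch; the pool name zroot and 
the /mnt mount point are assumptions matching the wiki instructions):

```sh
# Sketch: generate a fresh zpool.cache and copy it to where the kernel
# expects it on the new root (pool/mount names are assumptions).
zpool set cachefile=/tmp/zpool.cache zroot
mkdir -p /mnt/boot/zfs
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
```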

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: Strange delays in ZFS scrub or resilver

2013-02-25 Thread Paul Kraus
On Feb 23, 2013, at 11:23 PM, John Levine  wrote:

> I have a raidz of three 1 TB SATA drives, in USB enclosures.  One of
> the disks went bad, so I replaced it last night and it's been
> resilvering ever since.  I can watch the activity lights on the disks
> and it cranks away for a minute or so, then stops for a minute, then
> cranks for a minute, and so forth.  If I do a zpool status while it's
> stopped, the zpool waits until the I/O resumes, and a ^T shows it
> waiting for zio->io_cv.
> 
> I'm running FreeBSD 9.1, amd64 version, totally vanilla install on a
> mini-itx box with 4GB of RAM.  The root/swap disk is an SSD separate
> from the zfs disks.  When the disks are active, top shows about 10%
> system time and 4% interrupt.  When it isn't, top shows about 99.8%
> idle.  The server isn't doing much else, and nothing else currently
> touches the disks.  (They're for remote backup of a system somewhere
> else, and I have the backup job turned off until resilvering
> completes.)

	Under 9.0 I had some external drives attached via USB and saw truly 
terrible I/O performance. I moved them to eSATA and it got much better. 
Unfortunately, my external enclosure has a SATA port multiplier, as I need to 
talk to 4 external drives. That gives me about a factor of 2 worse performance 
than the internal SATA drives (even if I am only talking to one drive via the 
external connection).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS root, error 2 when mounting root

2013-02-25 Thread Paul Kraus
On Feb 25, 2013, at 10:14 AM, bw  wrote:

> On 02/25/2013 03:13 PM, Paul Kraus wrote:
>> On Feb 24, 2013, at 4:42 AM, bw.mail.lists  wrote:
>> 
>>> Basically, I tried to follow 
>>> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE, but ended up 
>>> with a system that didn't know how to mount /.
>>> 
>>> There are two scripts attached.
>> 
>> I did not see any attachments.
>> 
> 
> The mailing list got rid of them; I didn't know it would do that. Appended 
> inline at the end of this mail. Stuff will probably get wrapped, but at least 
> it's there.
> 

> That was my understanding, too, but the instructions on the wiki say there's 
> no need to copy the cache file. In fact, there is no cache file to copy, 
> since the pool is created with
> 
> zpool create -o altroot=/mnt -O canmount=off zroot mirror /dev/gpt/g0zfs 
> /dev/gpt/g1zfs
> 
> No cache file. The wiki article was changed recently to eliminate that part, 
> the message on the wiki is: "Fix so that the default instructions does not 
> install data directly to the zroot pool. Simplify instructions regarding 
> cache files, they are no longer needed. Fixes and cleanups."
> 
> Either the instructions are wrong, or something in my script is. I assume 
> it's my script.

	The instructions noted above are now INCORRECT for 9.0 (I have not 
tried this with 9.1 yet), as you MUST manually put the zpool.cache file in 
place for it to work correctly (I tried a couple of different variations when I 
first set up my systems a few months ago and learned this the hard way :-). I 
have *lots* of experience with ZFS under Solaris 10 but am relatively new 
(about a year) to FreeBSD.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Does anyone know how to install FreeBSD 8.3 under Virtual Box 4.2.6?

2013-03-01 Thread Paul Kraus
On Mar 1, 2013, at 2:04 AM, Richard Sharpe  wrote:

> Hi,
> 
> I booted the FreeBSD 8.3 DVD1 under Virtual Box, but it crashes in VB
> 4.2.6 under Win 7 and Linux.

Can you install *other* Guest OSes under VBox on these hosts ?

I have been running lots of 9.0 VMs under VBox with only minor issues :-)

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



cmake fails to build under 9.1

2013-03-05 Thread Paul Kraus
In trying to build Nagios, one of the dependencies is cmake, and it is failing 
to build. See below. If I run make again it will fail on a different file; see 
further down. Any ideas ? I am running 64-bit 9.1 under VBox 4.2.6 and the 
parent host is Mac OS X 10.8.

[ 61%] Building CXX object 
Source/CMakeFiles/CMakeLib.dir/cmComputeLinkInformation.cxx.o
{standard input}: Assembler messages:
{standard input}:14628: Warning: end of file not at end of a line; newline 
inserted
{standard input}:16093: Error: bad register name `%r1'
c++: Internal error: Killed: 9 (program cc1plus)
Please submit a full bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
*** [Source/CMakeFiles/CMakeLib.dir/cmBootstrapCommands.cxx.o] Error code 1
1 error
*** [Source/CMakeFiles/CMakeLib.dir/all] Error code 2
1 error
*** [all] Error code 2
1 error
*** [do-build] Error code 1

Stop in /usr/ports/devel/cmake.

[ 68%] Building CXX object 
Source/CMakeFiles/CMakeLib.dir/cmGlobalGenerator.cxx.o
[ 68%] Building CXX object 
Source/CMakeFiles/CMakeLib.dir/cmGlobalUnixMakefileGenerator3.cxx.o
[ 68%] Building CXX object Source/CMakeFiles/CMakeLib.dir/cmGraphVizWriter.cxx.o
{standard input}: Assembler messages:
{standard input}:119141: Warning: end of file not at end of a line; newline 
inserted
{standard input}:119618: Error: bad register name `%rb'
c++: Internal error: Killed: 9 (program cc1plus)
Please submit a full bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
*** [Source/CMakeFiles/CMakeLib.dir/cmBootstrapCommands.cxx.o] Error code 1
1 error
*** [Source/CMakeFiles/CMakeLib.dir/all] Error code 2
1 error
*** [all] Error code 2
1 error
*** [do-build] Error code 1

Stop in /usr/ports/devel/cmake.

Thanks.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: cmake fails to build under 9.1

2013-03-06 Thread Paul Kraus
On Mar 5, 2013, at 12:19 PM, Volodymyr Kostyrko  wrote:

> 05.03.2013 18:51, Paul Kraus:
>> In trying to build Nagios, one of the dependencies is cmake, and it is 
>> failing to build. See below. If I run make again it will fail on a 
>> different file; see further down. Any ideas ? I am running 64-bit 9.1 under 
>> VBox 4.2.6 and the parent host is Mac OS X 10.8.

> 
> Any messages on the system console?

No.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Booting from an arbitrary disk in ZFS RAIDZ on 8.x

2013-03-06 Thread Paul Kraus
On Mar 5, 2013, at 1:44 PM, Doug Poland  wrote:

> I'm running ZFS filesystem ver 3, storage pool ver 14, on 8-STABLE
> amd64. The kernel build is rather dated from around Feb 2010.
> 
> I have 6 disks in a RAIDZ configuration.  All disks were sliced
> the same with gpart (da(n)p1,p2,p3) with bootcode written to index 1,
> swap on index 2 and freebsd-zfs on index 3.
> 
> Given this configuration, I should be able to boot from any of the 6
> disks in the RAIDZ.  If this is a true statement, how do I make that
> happen from the loader prompt?

Boot in terms of the root FS or in terms of the boot loader ? 

Which drive the boot loader is read from is set in your BIOS (it determines 
which physical drive's bootcode is used).

/ comes from the zpool/zfs dataset once the boot loader loads enough code 
to find and mount the filesystem. That comes from all the drives in the zpool.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



9.1 Postfix problem

2013-04-16 Thread Paul Kraus
	When building postfix under 9.1 I am running into an odd problem. I use 
the INST_BASE option, which seems to cause the problem (it worked fine with 
9.0). The 'make' goes fine, but the 'make install' fails when trying to install 
the startup script to /usr/etc/rc.d instead of /etc/rc.d. It works fine if 
INST_BASE is disabled. I looked through the Makefile but could not suss out how 
that difference in configuration was actually causing the problem.

Has anyone else run into this problem and what was the fix (or did you 
just install into /usr/local) ?

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: 9.1 Postfix problem

2013-04-26 Thread Paul Kraus
On Apr 17, 2013, at 10:04 AM, Lowell Gilbert 
 wrote:

> Paul Kraus  writes:
> 
>>  When building postfix under 9.1 I am running into an odd
>> problem. I use the INST_BASE option, which seems to cause the problem
>> (it worked fine with 9.0). The 'make' goes fine, but the 'make
>> install' fails when trying to install the startup script to
>> /usr/etc/rc.d instead of /etc/rc.d. It works fine if INST_BASE is
>> disabled. I looked through the Makefile but could not suss out how
>> that difference in configuration was actually causing the problem.
>> 
>>  Has anyone else run into this problem and what was the fix (or did you 
>> just install into /usr/local) ?
> 
> I use /usr/local, but this seems to be a typo in the last checkin, 
> which changed the internal names of the port options to our brave new
> naming scheme. 
> 
> If you look in the Makefile clause for installing to base, renaming the
> option itself went correctly, but both halves of the '.if' now invoke
> USE_RC_SUBR. That's correct for PREFIX, but for installing into base
> should be USE_RCORDER instead.

Lowell,
That was exactly the problem. I knew it was in the installation 
configuration *somewhere*, but I just could not find it. Thanks.

Should I report this as a bug in the postfix port ?
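
	For reference, the broken clause Lowell describes would look roughly 
like this (a sketch reconstructed from his description, not the actual port 
Makefile; the option and script names are assumptions):

```make
# Sketch of the corrected port Makefile clause per Lowell's description.
.if ${PORT_OPTIONS:MINST_BASE}
USE_RCORDER=	postfix		# startup script goes into base /etc/rc.d
.else
USE_RC_SUBR=	postfix		# startup script goes into ${PREFIX}/etc/rc.d
.endif
```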

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: update from apache22 to apache24

2013-05-03 Thread Paul Kraus
On May 3, 2013, at 10:47 AM, Jerry  wrote:

> I was just wondering if anyone had updated from apache22 to apache24.
> Specifically, are there any problems to be overcome? Does the existing
> httpd.conf file work with the apache24 branch.

There are some changes.

I was not upgrading from 22 to 24, but as part of building a new server to do 
the same task went from 22 to 24. The "allow/deny" syntax has changed, I'm sure 
there are others.
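
As an illustration of the allow/deny change (a sketch; the directory path is 
made up, and you should consult the Apache 2.4 upgrade notes for your actual 
configuration):

```apache
# Apache 2.2 style:
#   Order allow,deny
#   Allow from all
# Apache 2.4 equivalent:
<Directory "/usr/local/www/data">
    Require all granted
</Directory>
```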

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Tell me how to increase the virtual disk with ZFS?

2013-05-11 Thread Paul Kraus
On May 11, 2013, at 8:59 AM, "Vladislav Prodan"  wrote:

> Add another virtual disk and do a RAID0 - not an option. It is not clear how 
> to distribute the data from the old virtual disk to the new virtual disk.

When you add an additional "disk" to a zpool (to create a STRIPE), the ZFS code 
automatically stripes new writes across all top level vdevs (disks in this 
case). You will see a performance penalty until the data distribution evens 
out. One way to force that (if you do NOT have snapshots) is to just copy 
everything. The new copy will be striped across all top level vdevs.
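
The rebalance-by-copying approach might look like this (a sketch; the pool, 
device, and dataset names are all made up for illustration):

```sh
# Sketch: add a second disk as a new top-level vdev, then rewrite the
# data so it stripes across both vdevs (all names are assumptions).
zpool add tank da1                        # tank is now a 2-vdev stripe
zfs create tank/data.new
cp -Rp /tank/data/. /tank/data.new/       # new writes stripe across both
zfs destroy tank/data
zfs rename tank/data.new tank/data
```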

The other option would be to add an additional disk that is as large as you 
want to the VM and attach it to the zpool as a mirror. The mirror vdev will 
only be as large as the original device, but once the mirror completes 
resilvering, you can remove the old device and grow the remaining device to 
full size (it may do that anyway, based on the setting of the autoexpand 
property of the zpool). The default under 9.1 is NOT to autoexpand:

root@FreeBSD2:/root # zpool get autoexpand rootpool
NAME  PROPERTYVALUE   SOURCE
rootpool  autoexpand  off default
root@FreeBSD2:/root # 
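
The attach-and-grow sequence might look like this (a sketch; device names are 
assumptions, and you should verify the resilver is complete before detaching):

```sh
# Sketch: grow a single-disk zpool by mirroring onto a larger disk,
# then dropping the small one (device names are assumptions).
zpool set autoexpand=on tank
zpool attach tank da0 da1     # da1 is the new, larger disk
# ...wait until 'zpool status tank' shows the resilver has completed...
zpool detach tank da0         # pool grows to da1's size via autoexpand
```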

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Tell me how to increase the virtual disk with ZFS?

2013-05-11 Thread Paul Kraus
On May 11, 2013, at 10:03 AM, Alexander Yerenkow  wrote:

> There's no mature (or flexible, or "can do what I want" ) way to
> increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}.
> Best and quickest way - to have twice spare space, copy data, create new
> sufficient disk and copy back.

Is this a statement or a question ? If a statement, then it is factually FALSE. 
If it is supposed to be a question, it does not ask anything.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Tell me how to increase the virtual disk with ZFS?

2013-05-11 Thread Paul Kraus
On May 11, 2013, at 10:09 AM, "Vladislav Prodan"  wrote:
> 
> Thanks.
> I did not realize that there was such an interesting and useful option :)
> 
> # zpool get autoexpand tank
> NAME  PROPERTYVALUE   SOURCE
> tank  autoexpand  off default

The man pages for zpool and zfs are full of such useful information :-)

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Tell me how to increase the virtual disk with ZFS?

2013-05-11 Thread Paul Kraus
On May 11, 2013, at 11:13 AM, Alexander Yerenkow  wrote:

> 2013/5/11 Paul Kraus:
> > On May 11, 2013, at 10:03 AM, Alexander Yerenkow  wrote:
> 
> > There's no mature (or flexible, or "can do what I want" ) way to
> > increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}.
> > Best and quickest way - to have twice spare space, copy data, create new
> > sufficient disk and copy back.
> 
> Is this a statement or a question ? If a statement, then it is factually 
> FALSE. If it is supposed to be a question, it does not ask anything.
> 
> It was a statement, and luckily I was partially wrong, as Vladislav did 
> manage to do what he wanted.
> However, last time I checked there were no such easy ways to decrease zpools

Correct, there is currently no way to decrease the size of a zpool. That would 
require a defragmentation utility, which is on the roadmap as part of the 
bp_rewrite code enhancement (and has been for many, many years :-)

> or increase/decrease UFS partitions.

> Or grow mirrored ZFS as easily as single zpool.

This one I do not understand. I have grown mirrored zpools many times. Let's 
say you have a 2-way mirror of 1 TB drives. You can do one of two things to 
grow the zpool:

1) add another pair of drives (of any size) as another top level vdev  mirror 
device (you *can* use a different type of top level vdev, raidZ, simple, etc, 
but that is not recommended for both redundancy and performance predictability 
reasons).

2) swap out one of the 1 TB drives for a 2 TB (zpool replace). You can even 
offline one of the halves of the mirror to do this (but remember that you are 
vulnerable to a failure of the remaining drive during the resilver period). Let 
the zpool resilver, then swap out the other 1 TB drive for a 2 TB. If the 
autoexpand property is set, then once the resilver finishes you have doubled 
your net capacity.
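
Option 2 as commands (a sketch; disk names are assumptions, and each resilver 
must finish before you touch the next drive):

```sh
# Sketch: grow a 2-way mirror from 1 TB to 2 TB by replacing each half.
zpool set autoexpand=on tank
zpool replace tank da0 da2    # swap the first 1 TB half for a 2 TB disk
# ...wait for the resilver to complete ('zpool status tank')...
zpool replace tank da1 da3    # then swap the second half
# once the second resilver completes, net capacity doubles
```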

> Or (killer one) remove added by mistake vdev from zpool ;)

Don't make that mistake. Seriously. If you are managing storage you need to be 
double checking every single command you issue if you care about your data 
integrity. You could easily make the same complaint about issuing an 'rm -rf' 
in the wrong directory (I know people who have done that). If you are using 
snapshots you may be safe, if not your data is probably gone.

On the other hand, depending on where in the tree you added the vdev, you may 
be able to remove it. If it is a top level vdev, then you have just changed the 
configuration of the zpool. While very much unsupported, you just might be able 
to remove the vdev using zdb, rolling back to a TXG before you added the 
device. A good place to ask that question and have the discussion would be the 
ZFS discuss list at illumos (the list discussion is not limited to illumos, but 
covers all aspects of ZFS on all platforms). Archives here: 
http://www.listbox.com/member/archive/182191/sort/time_rev/ 

> Of course I'm not talking about real hw, rather virtual one.

It doesn't matter to ZFS: whether a drive is physical, a partition, or a 
virtual disk, you perform the same operations.

> If you happen to point me somewhere to have such task solved I'd be much 
> appreciated.

See above :-) Some of your issues I addressed above, others are not there (and 
may never be).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS partitioning

2013-05-12 Thread Paul Kraus
On May 12, 2013, at 9:48 AM, Outback Dingo  wrote:

> notice my boot pool is a mirror, so disk 2 is identical to disk1, so if
> disk1 ever dies, logically i could boot from disk two

The zpool mirror does not mirror the bootblock. You need to manually 
add that to all the drives you may want to boot from.
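
On a GPT layout like the one quoted below, that would mean something like this 
(a sketch; the partition index assumes bootcode lives at index 1, as in the 
usual RootOnZFS layout):

```sh
# Sketch: write the same boot code to every disk in the boot mirror
# so either one can boot the system.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da34
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da35
```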

> pool: tank
> state: ONLINE
>  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 11 13:20:41 2013
> config:
> 
>NAMESTATE READ WRITE CKSUM
>tankONLINE   0 0 0
>  mirror-0  ONLINE   0 0 0
>da34p3  ONLINE   0 0 0
>da35p3  ONLINE   0 0 0
> 
> errors: No known data errors

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Paul Kraus
On May 13, 2013, at 1:58 AM, Trond Endrestøl 
 wrote:

> Due to advances in hard drive technology, for the worse I'm afraid, 
> i.e. 4K disk blocks, I wouldn't bother enabling compression on any ZFS 
> file systems. I might change my blog posts to reflect this stop gap.
> 
> If you do happen to have 4K drives, you might want to check out this 
> blog post:
> 
> https://ximalas.info/2012/01/11/new-server-and-first-attempt-at-running-freebsdamd64-with-zfs-for-all-storage/

	I did look; it doesn't explain why not to enable compression on 4K 
sector drives.

	From discussion on the zfs-discuss lists (both the old one from 
OpenSolaris and the new one at Illumos), the only issue with 4K sector drives 
is mixing 0.5K sector and 4K sector drives. You can tune the zpool alignment to 
handle 4K sector drives just fine, but it is a pool-wide tuning.

http://zfsday.com/wp-content/uploads/2012/08/Why-4k_.pdf has some 4K 
background, and the only mention I see of compression and 4K is that you may 
get less. But… you really need to test your data to see if turning compression 
on is beneficial with any dataset. There is noticeable computational overhead 
to enabling compression. If you are CPU bound, then you will get better 
performance with compression off. If you are limited by the I/O bandwidth to 
your drives, then *if* your data is highly compressible, then you will get 
better performance with compression on. I have managed large pools of both data 
that compresses well and data that does not.

http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks 
discusses the issue and presents solutions using Illumos. I could find no such 
examples for FreeBSD, but I'm sure some of the same techniques would work 
(manually setting the ashift to 12 for 4K disks).
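
One commonly cited FreeBSD workaround for this (not from the linked page; a 
sketch, with device and pool names assumed) is to create the pool through a 
gnop(8) device that advertises 4K sectors, so ZFS picks ashift=12:

```sh
# Sketch: force ashift=12 on FreeBSD 9.x via a temporary 4K gnop device.
gnop create -S 4096 /dev/gpt/disk0
zpool create tank /dev/gpt/disk0.nop   # pool is created with ashift=12
zpool export tank
gnop destroy /dev/gpt/disk0.nop        # remove the shim
zpool import tank                      # pool keeps ashift=12
zdb -C tank | grep ashift              # verify
```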

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-13 Thread Paul Kraus
On May 13, 2013, at 9:25 AM, Trond Endrestøl 
 wrote:
> 
> I guess it's due to my (mis)understanding that files shorter than 4KB 
> stored on 4K drives never will be subject to compression. And as you 
> state below, the degree of compression depends largely on the data at 
> hand.

	Not a misunderstanding at all. With a 4K minimum block size (which is 
what a 4K sector size implies), a file less than 4KB will not compress at all. 
While ZFS does have a variable block size (512B to 128KB), with a 4K minimum 
block size (just like with any fixed-block FS with a 4KB block size), small 
files take up more space than they should (a 1KB file takes up an entire 4KB 
block). This ends up being an artifact of the block size and not ZFS; any FS on 
a 4K sector drive will have similar behavior.

	I leave compression off on most of my datasets, only turning it on on 
ones where I see a real benefit. /var compresses very well (I turn off 
compression in /etc/newsyslog.conf and let ZFS compress even the current logs 
:-). I find that some VMs compress very well; media files do NOT compress very 
well (they tend to already be compressed); generic data compresses well, as do 
scanned documents (uncompressed PDFs). Your individual results will vary :-)

Also remember, if you start with compression on and after a while you 
are not seeing good compression ratios, go ahead and turn it off. The already 
written data will remain compressed but new writes will not be.
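
Since compression is a per-dataset property, this trial-and-error is cheap to 
do (a sketch; dataset names are assumptions):

```sh
# Sketch: enable compression on one dataset and check what it buys you.
zfs set compression=lzjb tank/var    # lzjb: the default algorithm on 9.x
zfs get compressratio tank/var       # a ratio near 1.00x means little gain
zfs set compression=off tank/var     # existing data stays compressed
```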

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS mirror install /mnt is empty

2013-05-14 Thread Paul Kraus
On May 14, 2013, at 12:10 AM, Shane Ambler  wrote:

> When it comes to disk compression I think people overlook the fact that
> it can impact on more than one level.

Compression has effects at multiple levels:

1) CPU resources to compress (and decompress) the data
2) Disk space used
3) I/O to/from disks

> The size of disks these days means that compression doesn't make a big
> difference to storage capacity for most people and 4k blocks mean little
> change in final disk space used.

	The 4K block issue is *huge* if the majority of your data is less than 
4K files. It is also large when you consider that a 5K file will now occupy 8K 
on disk. I am not a UFS on FreeBSD expert, but UFS on Solaris uses a default 
block size of 4K with a fragment size of 1K, so files are stored on disk with 
1K resolution (so to speak). By going to a 4K minimum block size you are 
forcing all data up to the next 4K boundary.

	Now, if the majority of your data is in large files (1MB or more), then 
the 4K minimum block size probably gets lost in the noise.

	The other factor is the actual compressibility of the data. Most media 
files (JPEG, MPEG, GIF, PNG, MP3, AAC, etc.) are already compressed, and trying 
to compress them again is not likely to garner any real reduction in size. In 
my experience with the default compression algorithm (lzjb), even uncompressed 
audio files (.AIFF or .WAV) do not compress enough to make the CPU overhead 
worthwhile.

> One thing people seem to miss is the fact that compressed files are
> going to reduce the amount of data sent through the bottle neck that is
> the wire between motherboard and drive. While a 3k file compressed to 1k
> still uses a 4k block on disk it does (should) reduce the true data
> transferred to disk. Given a 9.1 source tree using 865M, if it
> compresses to 400M then it is going to reduce the time to read the
> entire tree during compilation. This would impact a 32 thread build more
> than a 4 thread build.

If the data does not compress well, then you get hit with the CPU 
overhead of compression to no bandwidth or space benefit. How compressible is 
the source tree ? [Not a loaded question, I haven't tried to compress it]

> While it is said that compression adds little overhead, time wise,

Compression most certainly DOES add overhead in terms of time, based on 
the speed of your CPU and how busy your system is. My home server is an HP 
Proliant Micro with a dual core AMD N36 running at 1.3 GHz. Turning on 
compression hurts performance *if* I am getting less than 1.2:1 compression 
ratio (5 drive RAIDz2 of 1TB Enterprise disks). Above that the I/O bandwidth 
reduction due to the compression makes up for the lost CPU cycles. I have 
managed servers where each case prevailed… CPU limited so compression hurt 
performance and I/O limited where compression helped performance.

> it is
> going to take time to compress the data which is going to increase
> latency. Going from a 6ms platter disk latency to a 0.2ms SSD latency
> gives a noticeable improvement to responsiveness. Adding compression is
> going to bring that back up - possibly higher than 6ms.

Interesting point. I am not sure of the data flow through the code to 
know if compression has a defined latency component, or is just throughput 
limited by CPU cycles to do the compression.

> Together these two factors may level out the total time to read a file.
> 
> One question there is whether the zfs cache uses compressed file data
> therefore keeping the latency while eliminating the bandwidth.

Data cached in the ZFS ARC or L2ARC is uncompressed. Data sent via zfs 
send / zfs receive is uncompressed; there had been talk of an option to send / 
receive compressed data, but I do not think it has gone anywhere.

> Personally I have compression turned off (desktop). My thought is that
> the latency added for compression would negate the bandwidth savings.
> 
> For a file server I would consider turning it on as network overhead is
> going to hide the latency.

Once again, it all depends on the compressibility of the data, the 
available CPU resources, the speed of the CPU resources, and the I/O bandwidth 
to/from the drives.

Note also that RAIDz (RAIDz2, RAIDz3) have their own computational 
overhead, so compression may be a bigger advantage in this case than in the 
case of a mirror, as the RAID code will have less data to process after being 
compressed.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: List Spam Filtering

2013-05-15 Thread Paul Kraus
On May 14, 2013, at 10:18 PM, Da Rock 
 wrote:

> I'm a big fan of _not_ having to subscribe to a list to get a quick hand with 
> a one off problem (obviously not this one!)- otherwise too many lists get 
> subscribed to, oodles of messages come in which you can't do anything about 
> and so forth (so its not simply just a matter of subscribe, unsubscribe as 
> noted). Unfortunately, many see it as a spam filter and thereby abuse it. How 
> often do you need help with an issue with libreoffice, mozilla whatever, or 
> other application? And yet subscription is compulsory and a ton of messages 
> (devs convs mostly) come flooding in within minutes.

Other lists I have been on had both a list and a forum that accessed 
the same content. While I see that FreeBSD has both, I do not think they share 
content. A forum gateway to the list would permit folks to sign up for the 
forum and NOT get a ton of email. If the forum were publicly readable that 
would also provide a way to look through (if not search) the archives.

I am not trying to make work for people, just suggesting another way to 
address the competing issues of SPAM reduction and ease of access.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [offtopic] ZFS mirror install /mnt is empty

2013-05-15 Thread Paul Kraus
I responded to Trond privately.

On May 15, 2013, at 2:25 AM, Trond Endrestøl 
 wrote:

> Am I the only one to receive these emails twice, delayed only by a 
> couple of days since receiving the original emails?
> 
> Judging by the headers below, this is either a misconfiguration, a MITM 
> attack or something else.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS install on a partition

2013-05-17 Thread Paul Kraus
On May 17, 2013, at 6:24 PM, "b...@todoo.biz"  wrote:

> I know I should install a system using HBA and JBOD configuration - but 
> unfortunately this is not an option for this server. 

I ran many ZFS pools on top of hardware RAID units, because that is what we 
had. It works fine, and the NVRAM write cache of the better hardware RAID 
systems gives you a performance boost.

> What would you advise ? 
> 
> 1. Can I use an existing partition and setup ZFS on this partition using a 
> standard Zpool (no RAID). 

Sure. Be careful when you say RAID… I assume you mean RAIDz-configured top 
level vdevs. Remember, a mirror is RAID-1 and the base ZFS striping is 
considered RAID-0. So set it up as a plain stripe of one vdev :-)

> 2. Should I use any other solution in order to setup this (like full ZFS 
> install on disk using the entire pool with ZFS). 

If the system is configured with existing LUNs, use them.

> 3. Should I avoid using ZFS since my system is not well tuned and It would be 
> asking for trouble to use ZFS in these conditions. 

No. One of the biggest benefits of ZFS is the end to end data integrity. IF 
there is a silent fault in the HW RAID (it happens), ZFS will detect the 
corrupt data and note it. If you had a mirror or other redundant device, ZFS 
would then read the data from the *other* copy and rewrite the bad block (or 
mark that physical block bad and use another).

> P.S. Stability is a must for this system - so I won't die if you answer "3" 
> and tell me to keep on using UFS. 

ZFS is stable; it is just NOT as tuned as UFS, due to age. UFS in all of its 
various incarnations has been tuned far more than any filesystem has any right 
to be. I spent many years managing Solaris systems and I was truly amazed at how 
tuned the Solaris version of UFS was.

I have been running a number of 9.0 and 9.1 servers in production, all running 
ZFS for both OS and data, with no FS related issues.

> 
> 
> Thanks. 
> 
> 
> 
> BSD - BSD - BSD - BSD - BSD - BSD - BSD - BSD -
> 
> PGP ID --> 0x1BA3C2FD
> 

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS install on a partition

2013-05-18 Thread Paul Kraus
On May 18, 2013, at 3:21 AM, Ivailo Tanusheff  
wrote:

> If you use HBA/JBOD then you will rely on the software RAID of the ZFS 
> system. Yes, this RAID is good, but unless you use SSD disks to boost 
> performance and a lot of RAM the hardware raid should be more reliable and 
> much faster.

Why will the hardware RAID be more reliable? Hardware RAID is 
susceptible to uncorrectable errors from the physical drives (hardware RAID 
controllers rely on the drives to report bad reads and writes), and the 
uncorrectable error rate for modern high-capacity drives (1TB and over) is such 
that you are almost certain to run into a couple over the operational life of 
the drive: 10^-14 for cheap drives and 10^-15 for better drives; very 
occasionally I see a drive rated for 10^-16. Run the math and see how many TB 
worth of data you have to write and read (remember these failures are generally 
read failures with NO indication that a failure occurred, bad data is just 
returned to the system).

In terms of performance HW RAID is faster, generally due to the cache 
RAM built into the HW RAID controller. ZFS makes good use of system RAM for 
the same function. An SSD can help with performance if the majority of writes 
are sync (NFS is a good example of this) or if you can benefit from a much 
larger read cache. SSDs are deployed with ZFS either as write LOG devices (in 
which case they should be mirrored), which only come into play for SYNC 
writes; or as an extension of the ARC, the L2ARC, which does not have to be 
mirrored as it is only a cache of existing data for speeding up reads.

> I didn't get if you want to use the system to dual boot Linux/FreeBSD or just 
> to share FreeBSD space with linux.
> But I would advise you to go with option 1 - you will get most of the system 
> and obviously you don't need zpool with raid, as your LSI controller will do 
> all the redundancy for you. Making software RAID over the hardware one will 
> only decrease performance and will NOT increase the reliability, as you will 
> not be sure which information is stored on which physical disk.
> 
> If stability is a MUST, then I will also advise you to go with a bunch of pools 
> and a disk designated as hot spare - in case some disk dies you will rely on 
> the automatic recovery. Also you should run a monitoring tool on your raid 
> controller.

I think you misunderstand the difference between stability and 
reliability. Any ZFS configuration I have tried on FreeBSD is STABLE; having 
redundant vdevs (mirrors or RAIDz) along with hot spares can increase 
RELIABILITY. The only advantage to having a hot spare is that when a drive 
fails (and they all fail eventually), the REPLACE operation can start 
immediately, without waiting for you to notice and manually replace the failed 
drive.

Reliability is a combination of MTBF (mean time between failures) and 
MTTR (mean time to repair); you improve it by raising the MTBF or reducing the 
MTTR. Having a hot spare reduces the MTTR. The other way to improve MTTR is to 
go with smaller drives to reduce the time it takes the system to resilver a 
failed drive. This is NOT applicable in the OP's situation. I try very hard not 
to use drives larger than 1TB because resilver times can be days. Resilver time 
also depends on the total size of the data in a zpool, as a resilver operation 
walks the FS in time order, replaying all the writes and confirming that all 
the data on disk is good (it does not actually rewrite the data unless it finds 
bad data). This means a couple of things, the first of which is that the 
resilver time will be dependent on the amount of data you have written, not the 
capacity. A zpool with a capacity of multiple TB will resilver in seconds if 
there is only a few hundred MB written to it. Since the resilver operation is 
not just a block by block copy, but a replay, it is IOPS limited, not bandwidth 
limited. You might be able to stream sequential data from a drive at hundreds 
of MB/sec., but most SATA drives will not sustain more than one to two hundred 
RANDOM IOPS (sequentially they can do much more).
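To put rough numbers on that IOPS-limited replay, here is an illustrative back-of-the-envelope calculation (a sketch only: the 128 KiB average block size and 150 random IOPS figures are assumptions, not measurements from any particular drive):

```python
def resilver_hours(data_tib, avg_block_kib=128, random_iops=150):
    """Rough resilver-time estimate for an IOPS-bound replay.

    Assumes the resilver must touch every written block individually;
    avg_block_kib and random_iops are illustrative guesses.
    """
    blocks = data_tib * 2**40 / (avg_block_kib * 1024)
    return blocks / random_iops / 3600

print(round(resilver_hours(1), 1))   # ~15.5 hours for 1 TiB written
print(round(resilver_hours(4), 1))   # ~62.1 hours -- days, as noted above
```

Note the estimate scales with the amount of data written, not the drive capacity, which is exactly why a nearly empty multi-TB pool resilvers quickly.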

> You can also set copies=2/3 just in case some errors occur, so ZFS can 
> auto-repair the data. If you run ZFS over several LUNs this will make even 
> more sense. 

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS install on a partition

2013-05-18 Thread Paul Kraus
On May 18, 2013, at 12:49 AM, kpn...@pobox.com wrote:

> On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
>> On May 17, 2013, at 6:24 PM, "b...@todoo.biz"  wrote:
>>> 3. Should I avoid using ZFS since my system is not well tuned and It would 
>>> be asking for trouble to use ZFS in these conditions. 
>> 
>> No. One of the biggest benefits of ZFS is the end to end data integrity.
>> IF there is a silent fault in the HW RAID (it happens), ZFS will detect
>> the corrupt data and note it. If you had a mirror or other redundant device,
>> ZFS would then read the data from the *other* copy and rewrite the bad
>> block (or mark that physical block bad and use another).
> 
> I believe the "copies=2" and "copies=3" option exists to enable ZFS to
> self heal despite ZFS not being in charge of RAID. If ZFS only has a single
> LUN to work with, but the copies=2 or more option is set, then if ZFS
> detects an error it can still correct it.

Yes, but… What the "copies=" parameter does is tell ZFS to make 
that many copies of every block written on the top level device. So if you set 
copies=2 and then write a 2MB file, it will take up 4MB of space since ZFS will 
keep two copies of it. ZFS will attempt to put them on different devices if it 
can, but there are no guarantees here. If you have a single vdev stripe and you 
lose that one device, you *will* lose all your data (assuming you did not have 
another backup copy someplace else). On the other hand, if the single device 
develops some bad blocks, with copies=2 you will *probably* not lose data as 
there will be other copies of those disk blocks elsewhere to recover from.
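The space accounting above can be illustrated with a trivial calculation (a sketch; real usage also includes metadata overhead, which is ignored here):

```python
def space_used_mb(logical_mb, copies=1):
    """Physical space consumed by a file under the dataset's
    copies= property (metadata overhead ignored)."""
    return logical_mb * copies

def effective_capacity_gb(pool_gb, copies=1):
    """Usable capacity if everything is written with copies=N."""
    return pool_gb / copies

print(space_used_mb(2, copies=2))        # the 2MB file above consumes 4MB
print(effective_capacity_gb(1000, 2))    # copies=2 halves usable capacity
```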

From my experience on the ZFS Discuss lists, the place people seem to 
use copies= is on laptops where they only have one drive, and copies= is 
better than no protection at all; it is just not complete protection.

> This option is a dataset option, is inheritable by child datasets, and can
> be changed at any time affecting data written after the change. To get the
> full benefit you'll therefore want to set the option before putting data
> into the relevant dataset.

You can change it at any time and it will only affect data written from 
that point on. This can be useful if you have both high value data and low 
value data, and you can control when each is written. For example, you leave 
copies=1 most of the time; then when you want to save your wedding photos, you 
set copies=3, write all the wedding photos, and then set copies=1 again. You 
will have three copies of the wedding photos and one copy of everything else.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS install on a partition

2013-05-19 Thread Paul Kraus
On May 18, 2013, at 10:16 PM, kpn...@pobox.com wrote:

> On Sat, May 18, 2013 at 01:29:58PM +, Ivailo Tanusheff wrote:

>> Not sure about your calculations, hope you trust them, but in my previous 
>> company we had a 3-4 month period when a disk failed almost every day on 2 
>> year old servers, so trust me - I do NOT trust those calculations, as I've 
>> seen the opposite. Maybe it was a failed batch of disks, shipped in the 
>> country, but no one is insured against this. Yes, you can use several hot 
>> spares on the software raid, but:
> 
> What calculations are you talking about? He posted the uncorrectable read
> error probabilities manufacturers put into drive datasheets. The probability
> of a URE is distinct from and very different from the probability of the
> entire drive failing.

I think he is referring to the calculation I did based on uncorrectable 
error rate and whether you will run into that type of error over the life of 
the drive.

1 TB == 8,796,093,022,208 bits

10^15 (in bits) / 1 TB ~= 113.687

So if over the life of the drive you READ a TOTAL of 113.687 TB, then 
you will, statistically speaking, run into one uncorrectable read error and 
potentially return bad data to the application or OS. This does NOT scale with 
the size of the drive; it is the same for all drives with an uncorrectable 
error rate of 10^-15 per bit. So if you read the entirety of a 1 TB drive 114 
times, or a 4 TB drive 29 times, you get the same result.
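The arithmetic above is easy to check directly (the 1 TB here is 2^40 bytes, matching the bit count quoted earlier):

```python
TB_BITS = 8 * 2**40  # 8,796,093,022,208 bits in 1 TB, as above

def tb_read_per_ure(ber_exponent):
    """Expected TB read before one uncorrectable read error,
    for a drive rated at one error per 10**ber_exponent bits."""
    return 10**ber_exponent / TB_BITS

print(round(tb_read_per_ure(15), 3))  # 113.687 -> ~114 full passes of a 1 TB drive
print(round(tb_read_per_ure(14), 3))  # 11.369  -> a cheap drive hits it ~10x sooner
```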

But this is a statistical probability, and some drives will have more 
(much more) uncorrectable errors and others will have fewer (far fewer), 
although I don't know if the distribution falls on a typical Gaussian (bell) 
curve.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: More than 32 CPUs under 8.4-P

2013-05-19 Thread Paul Kraus
On May 19, 2013, at 11:51 AM, Dennis Glatting  wrote:

> ZFS hangs on multi-socket systems (Tyan, Supermicro) under 9.1. ZFS does
> not hang under 8.4. This (and one other 4 socket) is a production
> system.

Can you be more specific? I have been running 9.0 and 9.1 systems with 
multiple CPUs and all ZFS with no (CPU related*) issues.

* I say no CPU-related issues because I have run into SATA timeout issues with 
an external SATA enclosure with 4 drives (I know, SATA port expanders are evil, 
but it is my best option here). Sometimes the zpool hangs hard, sometimes it 
just becomes unresponsive for a while. My "fix", such as it is, is to tune the 
ZFS per-vdev queue depth as follows:

vfs.zfs.vdev.min_pending="3"
vfs.zfs.vdev.max_pending="5"

The defaults are 5 and 10 respectively, and when I run with those I have the 
timeout issues, but only under very heavy I/O load. I only generate such load 
when migrating large amounts of data, which thankfully does not happen all that 
often.
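For reference, on FreeBSD 9.x these are loader tunables, so (assuming the usual mechanism) they belong in /boot/loader.conf and take effect at the next boot:

```
# /boot/loader.conf -- reduce the ZFS per-vdev queue depth
# (defaults are 5 and 10 respectively)
vfs.zfs.vdev.min_pending="3"
vfs.zfs.vdev.max_pending="5"
```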

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS install on a partition

2013-05-23 Thread Paul Kraus
On May 23, 2013, at 4:53 AM, Albert Shih  wrote:

> Have you ever try to update a ZFS Pool on 9.0 to 9.1 ? 

I recently upgraded my home server from 9.0 to 9.1. Actually, I exported my 
data zpool (raidZ2), did a clean installation of 9.1, then imported my data 
zpool. Everything went perfectly. zpool upgrade did NOT indicate that there was 
a newer version of zpool, so I did not even have to upgrade the on-disk zpool 
format (currently 28).

> I've a server with a big zpool on 9.0. I'm wondering if it's a good idea to
> upgrade to 9.1. If I lose the data I'm close to a dead person. If I'm thinking
> of upgrading to 9.1 it's because I have small issues with NFSD and LACP.

My data zpool is not that big, only five 1TB drives in a raidZ2 for a net 
capacity of about 3TB, plus one 1TB hot spare.

My suggestion is to do the following (which is how I did the "upgrade"):

1) on a different physical system install 9.1, get the OS configured how you 
want it
2) on the production server, export the data zpool
3) shutdown the production server
4) remove the OS drives from the production server and replace with the drives 
you just installed 9.1 on
5) boot the production server with the 9.1 OS drives, make sure everything is 
working the way you want
6) import the data zpool

If the import fails, you can always put the 9.0 drives back in and get back up 
and running fairly quickly.

My system has the OS on a mirror zpool of two drives for just the OS.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: file corruption solution (soft-update or ZFS)

2013-05-25 Thread Paul Kraus
On May 23, 2013, at 11:09 AM, Michael Sierchio  wrote:

> On Thu, May 23, 2013 at 5:33 AM, Warren Block  wrote:
> 
>> ..
> 
>> One thing mentioned earlier is that ZFS wants lots of memory.  4G-8G
>> minimum, some might say as much as the server will hold.
>> 
>> 
> Not necessarily so - deduplication places great demands on memory, but that
> can be satisfied with dedicated cache devices (on SSD for performance and
> safety reasons).  Without dedup, the requirements are more modest.

The rule of thumb for DeDupe is 1GB of physical RAM for every 1TB of capacity. 
The issue is that the DeDupe metadata table must live in the ARC for good 
performance. The discussion I have seen on the ZFS lists indicates that L2ARC 
is not really adequate for this, so adding cache devices (SSDs) doesn't really 
help.

On the other hand, you can use ZFS without DeDupe with as little as 2GB of 
total system RAM (depending on what else the system is doing). In my 
experience, the amount of RAM depends on the amount of I/O, not the amount of 
storage. I find between 1GB and 3GB of space for the ARC is adequate.
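As a sanity check on that rule of thumb, here is a rough dedup-table size estimate. This is a sketch: the ~320 bytes per DDT entry and the average block size are commonly cited assumptions, real pools vary widely, and that variation is why published rules of thumb range from roughly 1GB to 5GB of RAM per TB of deduped data:

```python
def ddt_ram_gib(data_tib, avg_block_kib=128, bytes_per_entry=320):
    """Rough in-core dedup table size for data_tib of unique data.
    Every figure here is an illustrative assumption, not a measurement."""
    entries = data_tib * 2**40 / (avg_block_kib * 1024)
    return entries * bytes_per_entry / 2**30

print(ddt_ram_gib(1))      # 2.5 GiB per TiB with 128 KiB records
print(ddt_ram_gib(1, 8))   # 40.0 GiB per TiB with small 8 KiB records
```

The strong dependence on average block size is the main reason the L2ARC-versus-RAM question matters so much for dedup workloads with small records.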

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: BSD sleep

2013-05-29 Thread Paul Kraus
On May 29, 2013, at 7:58 AM, Jason Birch  wrote:

>> Seriously, that explanation about different hours is not enough to prevent
>> at least useful option.
>> like
>> sleep -f 1h
>> (-f means force convert, without it you can see good explanation why sleep
>> for 1 hour will be not sleep for 1 hour, and etc, and not get sleep at
>> all.).
>> 
> 
> Do one thing, and do it well. What you have proposed involves:
> * an additional force flag
> * interpolation of what follows the force flag (does m mean minutes, or
> months?)
> * expectations around time, time zones, and what an hour is.
> 
> That fails the litmus test on complexity for me personally - it seems like
> a lot of complexity for not much gain.

Agreed. When I first started dealing with Unix professionally (1995, I started 
playing with Unix-like OSes almost 10 years earlier) I was taught that each 
Unix command does one thing and does it well. That simplicity is one of the 
core strengths of Unix (and Unix-like) OSes. With the popularization of Linux I 
see many movements towards a "dumbing down" of the OS, making it behave more 
like more common OSes, even if those changes make it less robust and flexible.

One of the reasons I choose FreeBSD over Linux in many cases is that FreeBSD is 
closer to the roots of Unix in terms of keeping things simple and reliability 
being more important than convenience.

Disclaimer: I spent most of my time between 1995 and 2012 managing Solaris 
systems. An occasional Linux system would crop up. When I started really 
looking at FreeBSD in 2012 (I wanted ZFS and OpenSolaris / OpenIndiana / 
NexentaCore / Illumos did not support my hardware) I was very happily surprised 
that it "felt" like a grown up OS and not the toy that many Linux distributions 
feel like to me.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: "swap" partition leads to instability?

2013-05-29 Thread Paul Kraus
On May 29, 2013, at 3:52 PM, jb  wrote:

> Yes, there is some confusion about the diff, if any, between paging and 
> swapping.
> 
> Paging - copying or moving pages between physical memory (RAM) and secondary
>  storage (e.g. hard disk), in both directions.
> Swapping - nowadays is synonymous with "paging".

> You say that FB supports both, Linux supports paging only.
> Well, Linux utilizes swap space as part of virtual memory.
> So, can you elaborate more on that - what is the essence of the diff, why
> should I avoid the term "swapping" when referring to Linux, assuming VMM
> systems on both ?

When I started working professionally with Unix systems in 1995, I was 
taught that paging was the process of copying least used "pages" of RAM onto 
disk so that the RAM could be freed if the system needed more RAM. Swapping was 
the process of moving an entire program from RAM to disk in order to free up 
RAM.

In other words, a process can be "swapped out" and placed on disk until 
it comes up to run again, at which point it can be "swapped in" and executed.

I think that much of the confusion comes from the use of the SWAP 
device by the PAGING system. When the concept of paging came about, it just 
used the already existing SWAP space to store its "paged out" pages of memory.

On the systems I worked on at the time (SunOS / Solaris), paging was a 
sign of pressure on the physical memory (RAM) of a system, swapping was a sign 
of _severe_ physical memory pressure. This was a time when we configured 2 to 4 
times the amount of physical RAM as SWAP space. RAM was very expensive and hard 
drives just expensive :-) It was common on a "normally" operating system to see 
the page scanner* running up to 100 times per second. A scan rate of over 100 
was considered a sign of pressure on RAM that needed to be addressed; any 
swapping was considered a sign that the system needed more physical RAM.

Today RAM is so cheap that _any_ paging is often considered bad and an 
indication that more RAM should be added.

*Solaris Page Scanner: This is a kernel level process that wakes up, examines 
the amount of free RAM, and takes action based on that value. The thresholds 
are all dynamic and based on the amount of RAM in the system. Above a high 
water mark the scanner does nothing. As the amount of free RAM drops, various 
pages of RAM are copied to SWAP space and the RAM freed. Eventually, if the 
amount of free Ram falls low enough, even parts of the kernel will be paged 
out. This is very bad and can lead to a system "thrashing" where it spends the 
vast majority of it's time just paging in and out and not actually getting 
anything done.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: OT: rsync on Mac OSX

2013-07-18 Thread Paul Kraus
On Jul 12, 2013, at 2:57 PM, kpn...@pobox.com wrote:

> I thought MacOS X's rsync did handle resource forks if you gave it the
> proper option. The resource fork is reported by rsync in the usual
> convention of having "._" prefixed to the filename.

My understanding was that the files named ._ were plain files that 
included the metadata that makes up the resource fork. The ._ file is not 
really the resource fork, but a workaround for filesystems that do not support 
resource forks.

As such, they would be copied by rsync just fine.

Now as to the Mac OS X rsync understanding resource forks, that I cannot speak 
to, but it should be easy to test. Copy a directory from an HFS+ volume to a 
non-Mac OS X volume (NFS for example) using rsync and see if it creates the ._ 
files to go with the data.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: cksum entire dir??

2012-09-11 Thread Paul Kraus
On Tue, Sep 11, 2012 at 9:18 PM,   wrote:

> It's a real shame Unix doesn't have a really good tool for comparing
> two directory trees. You can use 'diff -r' (even on binaries), but that
> fails if you have devices, named pipes, or named sockets in the
> filesystem. And diff or cksum don't tell you if symlinks are different.
> Plus you may care about file ownership, and that's where the stat
> command comes in handy.

Solaris and at least a few versions of Linux have a "dircmp" command
that is in reality a wrapper for diff that handles special files. The
problem with it is that it tends to be slow (I had to validate
millions of files).

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Principal Consultant, Business Information Technology Systems
-> Deputy Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players


Re: cksum entire dir??

2012-09-11 Thread Paul Kraus
On Tue, Sep 11, 2012 at 10:03 PM, Gary Kline  wrote:

> I'm not concerned about a file having been changed, just whether
>
>% cp -rp /home/klinebak/foodir   /home/kline/
>
> is 100% reliable.  down to the bit!

If "cp" is not reliable (down to the bit), then you have much bigger problems.

On the other hand, hard drives do have uncorrectable errors and
they report the read operation as valid even though they returned bad
data. That is part of why ZFS was created and ported to FreeBSD, it
checksums the data so a bad bit coming from a drive does not corrupt a
file.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Principal Consultant, Business Information Technology Systems
-> Deputy Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players


Odd X11 over SSH issue

2012-11-23 Thread Paul Kraus
)
10709: fstat(8,{ mode=-rw--- ,inode=131090,size=199,blksize=4096
}) = 0 (0x0)
10709: read(8,"\^A\0\0\^Dsrv1\0\^A0\0\^RMIT-MAG"...,4096) = 199 (0xc7)
10709: read(8,0x801848000,4096)  = 0 (0x0)
10709: close(8)  = 0 (0x0)
10709: getsockname(7,{ AF_INET 127.0.0.1:52920 },0x7fffc2d4) = 0 (0x0)
10709: fcntl(7,F_GETFL,) = 2 (0x2)
10709: fcntl(7,F_SETFL,O_NONBLOCK|0x2)   = 0 (0x0)
10709: fcntl(7,F_SETFD,FD_CLOEXEC)   = 0 (0x0)
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc420,0x6,0x0,0x50,0x0) = 48 (0x30)
10709: read(7,0x80181a138,8) ERR#35 'Resource
temporarily unavailable'
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^A\0\v\0\0\0\M-3\^S",8)  = 8 (0x8)
10709: read(7,"\M-P\M-8\M^^\0\0\0\M-@\0\M^?\M^?"...,20172) = 8184 (0x1ff8)
10709: read(7,0x80198e000,11988) ERR#35 'Resource
temporarily unavailable'
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\0\M^?\0\0\M^?\0\0\0\^A\0\0\0s"...,11988) = 11988 (0x2ed4)
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc4d0,0x1,0x0,0x0,0x0) = 20 (0x14)
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^AM\^A\0\0\0\0\0\^A\M^G\0\0\^A"...,4096) = 32 (0x20)
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc510,0x1,0x0,0x0,0x0) = 4 (0x4)
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^AM\^B\0\0\0\0\0\M^?\M^??\0\^A"...,4096) = 32 (0x20)
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc520,0x3,0x0,0x0,0x0) = 44 (0x2c)
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^A\0\^D\0\0\0\0\0\0\0\0\0\0\0\0"...,4096) = 32 (0x20)
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc490,0x3,0x0,0x0,0x0) = 20 (0x14)
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^AM\^E\0\0\0\0\0\^A\M^L_\M^O\^A"...,4096) = 32 (0x20)
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
10709: poll({7/POLLIN|POLLOUT},1,-1) = 1 (0x1)
10709: writev(0x7,0x7fffc4c0,0x3,0x0,0x0,0x0) = 8 (0x8)
10709: poll({7/POLLIN},1,-1) = 1 (0x1)
10709: read(7,"\^A\^A\^F\0\0\0\0\0\^A\0\0\0\^A"...,4096) = 32 (0x20)
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
10709: read(7,0x80193a02c,4096)  ERR#35 'Resource
temporarily unavailable'
...

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Principal Consultant, Business Information Technology Systems
-> Deputy Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players


Re: Odd X11 over SSH issue

2012-11-23 Thread Paul Kraus
On Fri, Nov 23, 2012 at 1:01 PM, Adam Vande More  wrote:
> On Fri, Nov 23, 2012 at 11:43 AM, Paul Kraus  wrote:
>>
>> I am seeing very poor response time running the VirtualBox GUI via X11
>> tunneled over SSH via the Internet. The issue _appears_ to be limited
>> to the VBox GUI as Firefox is reasonable. I am well aware of the
>> latency issues tunneling X11 over SSH across the Internet, but that is
>> what we are stuck with for the moment. The server is running FreeBSD 9
>> and is patched as of about 4 weeks ago.
>
> Start it with "--graphicssystem native"

Tried it, did not make any noticeable difference, still over a
minute to open the window, but thanks for the suggestion. VBox is
version 4.1.22_OSE.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Principal Consultant, Business Information Technology Systems
-> Deputy Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players


Re: Odd X11 over SSH issue

2012-11-23 Thread Paul Kraus
On Fri, Nov 23, 2012 at 4:31 PM, Lowell Gilbert
 wrote:

>> Observations:
>>
>> 1. When I first SSH into the box I see a long delay after the SSH
>> tunnel is setup before being prompted for a password, and I do not
>> know if this delay is related to the VBox issue. Details below.
>
> Running the ssh server with more debugging will probably tell you what's
> happening in this area.

Yup, I just have not had a chance to chase that one down, and
given that it happens once per SSH session, has not been a high
priority. I mentioned it in the spirit of full disclosure.

>> I would chock it up to network slowness, but I
>> do not see the same behavior with Firefox, xload, or xclock.
>
> That's not a fair comparison, because tunneling a whole X server
> involves passing a lot more events than tunneling an application to run
> on your local server. This is particularly painful because the X
> protocols are highly serial.

The VirtualBox GUI (not the underlying VM console) should be
comparable to Firefox in terms of network load. Yes, xclock and xload
are much lower overhead as they are simpler apps. The difference
between Firefox (measured at under 10 seconds to open the window) and
VirtualBox (measured at 157 seconds to open the window) indicates that
_something_ is wrong.

Sorry if I was unclear. I am running 3 different VMs on this
server (soon to be more :-). One is Win 2008 Server as an RDP host for
a specific application; the others are FreeBSD VMs, one for DNS and
DHCP, and the other for email / webmail. I manage the underlying Win
2008 instance via RDP (and that is how the end users connect), the two
FreeBSD VMs do not run a window manager at all and they are managed
via SSH connections. I use the VBoxHeadless executable to run the VMs
for production use. Normally I make config changes with the command
line tool VBoxManage, but in this case I had a FreeBSD VM that was not
booting so I needed the console (and to make various changes to the
config).

It is the VBox management GUI, running on the physical server,
that I am having fits with.

> Is there any particular reason you don't let the X server run remotely
> and attach to it with something more latency-friendly, like vnc? I would
> expect that to work vastly better on any OS, just because you get X
> (specifically, its tendency to head-of-line blocking) out of its own way.

The short answer to why X11 via SSH and not VNC for the management
is that I have not found a very clean way to have the VNC service
running for root without manual intervention to start it. Yes, I know
I could script it, but that adds one additional layer that needs to be
supported.

P.S. I did get my VM repaired, very slowly and painfully, but I still
need to track down the VBox GUI issue.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Principal Consultant, Business Information Technology Systems
-> Deputy Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"


Re: Upgrading FreeBSD 8.3 amd64

2012-12-20 Thread Paul Kraus
On Dec 20, 2012, at 10:51 AM, Ralf Mardorf wrote:

> On Thu, 2012-12-20 at 05:29 +0100, Polytropon wrote:
>> I'd say: Use the source Luke. :-)
> 
> :)
> 
> Strange question: Is the FreeBSD handbook available as iBook?

	You can get it as a PDF at 
ftp://ftp.freebsd.org/pub/FreeBSD/doc/en_US.ISO8859-1/books/handbook/ and you 
can then view that on your iPad. Look for it in the Bookshelf or some such; I use an 
iPod Touch, and while similar, it is not identical to the iPad.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



ZFS info WAS: new backup server file system options

2012-12-21 Thread Paul Kraus
On Dec 21, 2012, at 7:49 AM, yudi v wrote:

> I am building a new freebsd fileserver to use for backups, will be using 2
> disk raid mirroring in a HP microserver n40l.
> I have gone through some of the documentation and would like to know what
> file systems to choose.
> 
> According to the docs, ufs is suggested for the system partitions but
> someone on the freebsd irc channel suggested using zfs for the rootfs as
> well
> 
> Are there any disadvantages of using zfs for the whole system rather than
> going with ufs for the system files and zfs for the user data?

	First a disclaimer: I have been working with Solaris since 1995 and have 
managed lots of data under ZFS; I have only been working with FreeBSD for about 
the past 6 months.

UFS is clearly very stable and solid, but to get redundancy you need to 
use a separate "volume manager".
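
For comparison, here is a minimal sketch of what UFS redundancy looks like with GEOM's gmirror. The device names are made up, labeling a disk destroys its contents, and this must run as root — treat it as an illustration, not a recipe.

```shell
# Load the mirror module and build a two-disk mirror (hypothetical devices).
gmirror load
gmirror label -v gm0 /dev/ada1 /dev/ada2
# Create a UFS filesystem with soft updates on the mirror and mount it.
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt
```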

ZFS is a completely different way of thinking about managing storage 
(not just a filesystem). I prefer ZFS for a number of reasons:

1) End-to-end data integrity through checksums. With the advent of 1 TB plus 
drives, the uncorrectable error rate (typically 10^-14 or 10^-15 per bit read) means that 
over the life of any drive you *are* now likely to run into uncorrectable 
errors. Traditional volume managers (which rely on the drive 
reporting bad reads and writes) cannot detect these errors, so bad data will 
be returned to the application.

2) Simplicity of management. Since the volume management and filesystem layers 
have been combined, you don't have to manage each separately.

3) Flexibility of storage. Once you build a zpool, the filesystems that reside 
on it share the storage of the entire zpool. This means you don't have to 
decide how much space to commit to a given filesystem at creation. It also 
means that all the filesystems residing in that one zpool share the performance 
of all the drives in that zpool.

4) Specific to booting off of a ZFS, if you move drives around (as I tend to do 
in at least one of my lab systems) the bootloader can still find the root 
filesystem under ZFS as it refers to it by zfs device name, not physical drive 
device name. Yes, you can tell the bootloader where to find root if you move 
it, but zfs does that automatically.

5) Zero performance penalty snapshots. The only cost to snapshots is the space 
necessary to hold the data. I have managed systems with over 100,000 snapshots.
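
As a rough illustration of points 2, 3, and 5: a mirrored pool, shared-space datasets, and a snapshot take only a handful of commands. Pool, dataset, and device names below are made up, and this needs root on a system with free disks.

```shell
# One command creates both the redundancy layer and the pool (hypothetical disks).
zpool create tank mirror /dev/ada1 /dev/ada2
# Datasets draw on the whole pool; no per-filesystem sizing decisions up front.
zfs create tank/home
zfs create tank/backups
# Snapshots cost nothing until changed blocks accumulate.
zfs snapshot tank/home@before-upgrade
# Roll back if the change goes badly.
zfs rollback tank/home@before-upgrade
```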

	I am running two production systems, one lab system, and a bunch of VBox VMs all with 
ZFS. The only issue I have seen is one I have also seen under Solaris with ZFS: 
certain kinds of hardware layer faults will cause the ZFS management tools (the 
zpool and zfs commands) to hang waiting on a blocking I/O that will never 
return. The data continues to be available, you just can't manage the ZFS 
infrastructure until the device issues are cleared. For example, if you remove 
a USB drive that hosts a mounted ZFS, then any attempt to manage that ZFS 
device will hang (zpool export -f hangs until a reboot).

Previously I had been running (at home) a fileserver under OpenSolaris 
using ZFS and it saved my data when I had multiple drive failures. At a certain 
client we had a 45 TB configuration built on top of 120 750GB drives. We had 
multiple redundancy and could survive a complete failure of 2 of the 5 disk 
enclosures (yes, we tested this in pre-production).

	There are a number of good writeups on how to set up a FreeBSD system to 
boot off of ZFS. I like this one the best: 
http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE , but I do the 
zpool/zfs configuration slightly differently (based on some hard-learned 
lessons on Solaris). I am writing up my configuration (and why I do it this 
way), but it is not ready yet.

Make sure you look at all the information here: 
http://wiki.freebsd.org/ZFS , keeping in mind that lots of it was written 
before FreeBSD 9. I would NOT use ZFS, especially for booting, prior to release 
9 of FreeBSD. Some of the reason for this is the bugs that were fixed in zpool 
version 28 (included in release 9).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



OpenSSL Certificate issue

2013-01-10 Thread Paul Kraus
DB93CA49F19D794E1DD399BE4350364F
Key-Arg   : None
Start Time: 1357834496
Timeout   : 300 (sec)
Verify return code: 0 (ok)
---
+OK Gpop ready for requests from 208.105.14.76 cz12pf1272748vdb.40
^C


And this does not work...

[root@MailArch /usr/local/openssl/certs]# openssl s_client -connect 
pop.gmail.com:995 -CApath /usr/local/openssl/certs
CONNECTED(0003)
depth=1 /C=US/O=Google Inc/CN=Google Internet Authority
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=pop.gmail.com
   i:/C=US/O=Google Inc/CN=Google Internet Authority
 1 s:/C=US/O=Google Inc/CN=Google Internet Authority
   i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
---
Server certificate
-BEGIN CERTIFICATE-
MIIDfjCCAuegAwIBAgIKO3SUyABopzANBgkqhkiG9w0BAQUFADBGMQswCQYD
VQQGEwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzEiMCAGA1UEAxMZR29vZ2xlIElu
dGVybmV0IEF1dGhvcml0eTAeFw0xMjA5MTIxMTU3MjNaFw0xMzA2MDcxOTQzMjda
MGcxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1N
b3VudGFpbiBWaWV3MRMwEQYDVQQKEwpHb29nbGUgSW5jMRYwFAYDVQQDEw1wb3Au
Z21haWwuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDWvVlprqQFc95x
O5yfdTl7Hxqvs7C9PPKNdgegVio9c8lOyXoAZSei35xdrNPNbZhxqj5IKbQ+Sqy4
W3H9VVcYnf7MLiKWYCv6TisatKaj98LCd8A5soKp5vidtC+UyCelvB7BsE+rPUm1
CWURHnkNOWEInpJ0grX9ySx2n4hK/wIDAQABo4IBUDCCAUwwHQYDVR0lBBYwFAYI
KwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBQu/gVNhWx5xU5lNECDJANUvwdT
wDAfBgNVHSMEGDAWgBS/wDDr9UMRPme6npH7/Gra42sSJDBbBgNVHR8EVDBSMFCg
TqBMhkpodHRwOi8vd3d3LmdzdGF0aWMuY29tL0dvb2dsZUludGVybmV0QXV0aG9y
aXR5L0dvb2dsZUludGVybmV0QXV0aG9yaXR5LmNybDBmBggrBgEFBQcBAQRaMFgw
VgYIKwYBBQUHMAKGSmh0dHA6Ly93d3cuZ3N0YXRpYy5jb20vR29vZ2xlSW50ZXJu
ZXRBdXRob3JpdHkvR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkuY3J0MAwGA1UdEwEB
/wQCMAAwGAYDVR0RBBEwD4INcG9wLmdtYWlsLmNvbTANBgkqhkiG9w0BAQUFAAOB
gQC4TtLHlv9CIxcIYr5THHpQ8TtQ7vtZyBBJM/RGF7omUSrWPp5Q0ehVnHH5HT4l
zrlskssLcq8PLsO/prVIxDZUmmcJwMzKw2c//zaCew13Ms/Dq0UbO2Q6IqzppXQL
nHIP7STcClUMZkgiOpzLfrM3jMKa+LuFVVfdRvGh0XVogg==
-END CERTIFICATE-
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=pop.gmail.com
issuer=/C=US/O=Google Inc/CN=Google Internet Authority
---
No client certificate CA names sent
---
SSL handshake has read 1750 bytes and written 325 bytes
---
New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol  : TLSv1
Cipher: RC4-SHA
Session-ID: 4797C67363287F3C528509AAB91A0852BF265D6DFAEB144048815047CA3595DB
Session-ID-ctx: 
Master-Key: 
1A0FAD1AA041894DEDB7329984DBC513D3EE7B4B92901F7700D5C15D767C3E9E5761561BBD47647605D0852D2A24501E
Key-Arg   : None
Start Time: 1357834512
Timeout   : 300 (sec)
Verify return code: 20 (unable to get local issuer certificate)
---
+OK Gpop ready for requests from 208.105.14.76 j10pf1276456vde.5
^C
[root@MailArch /usr/local/openssl/certs]# 

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: OpenSSL Certificate issue

2013-01-10 Thread Paul Kraus
On Jan 10, 2013, at 12:38 PM, Greg Larkin wrote:

> It looks like you don't have the Gmail certificate installed locally,
> unless I'm mistaken.

I do not need to have the Google cert installed as long as I have the 
Root Cert that signed it installed, and I do have that cert. The fact that I 
can point to the certificate file itself and the test connection works fine 
shows that I have the correct cert file. I agree that it is probably NOT 
installed correctly, but ...

>  Check the instructions here, and let us know if
> that fixes the problem for you:
> http://squeezesetup.wordpress.com/install-mail-part-2-gmail-certs/

These instructions appear to be for Linux and not FreeBSD, and there are 
configuration and path differences, which is probably the core of my problem. I 
expect that I have not installed the root certs into the correct directory (but 
they are in the directory that c_rehash is working in).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: OpenSSL Certificate issue

2013-01-10 Thread Paul Kraus
> On 1/10/13 12:49 PM, Paul Kraus wrote:
>> On Jan 10, 2013, at 12:38 PM, Greg Larkin wrote:
>> 
>>> It looks like you don't have the Gmail certificate installed
>>> locally, unless I'm mistaken.
>> 
>> I do not need to have the Google cert installed as long as I have
>> the Root Cert that signed it installed, and I do have that cert.
>> The fact that I can point to the certificate file itself and the
>> test connection works fine shows that I have the correct cert file.
>> I agree that it is probably NOT installed correctly, but ...
>> 
>>> Check the instructions here, and let us know if that fixes the
>>> problem for you: 
>>> http://squeezesetup.wordpress.com/install-mail-part-2-gmail-certs/
>> 
>>> 
>> these instructions appear to be for Linux and not FreeBSD and there
>> are configuration and path differences, which is probably the core
>> of my problem. I expect that I have not installed the root certs
>> into the correct directory (but they are in the directory that
>> c_rehash is working in).
>> 
>> 
> 
> My guess is that you're using the c_rehash supplied with OpenSSL 1.x
> (installed as a port?) to hash the certs and then the OpenSSL 0.9.x
> binary from the base system to connect to the Gmail POP server.
> 
> Give your s_client command another try with the fully specified path
> to the OpenSSL 1.x binary to see if that corrects the verification error.

That appears to be the problem: using /usr/local/bin/openssl works, but I still 
need to know where the base system needs to have the certs placed (and how to 
hash them, as the only c_rehash script is the one that came with the port of 
openssl). There are a number of utilities (most important here is fetchmail) 
which use the base openssl libraries.

NOTE: I did not explicitly install the openssl port, it must have been brought 
in as a dependency by another port.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: OpenSSL Certificate issue

2013-01-10 Thread Paul Kraus
On Jan 10, 2013, at 2:06 PM, Greg Larkin wrote:
> On 1/10/13 1:38 PM, Paul Kraus wrote:
> 
> I put the certs for my test in /etc/ssl/certs when using the base
> system openssl and in /usr/local/openssl/certs when using the openssl
> port.
> 
> c_rehash uses a specific openssl binary when invoked like so:
> 
> env OPENSSL=/usr/bin/openssl c_rehash /etc/ssl/certs
> 
> You can set the OPENSSL and SSL_CERT_DIR environment variables
> permanently, and that would ensure everything is consistent going
> forward, even if the openssl port is present.

	That almost worked; the default directory for certs is /etc/ssl:

[root@MailArch /etc/ssl]# pwd
/etc/ssl
[root@MailArch /etc/ssl]# ls -l
total 12
lrwxr-xr-x  1 root  wheel 8 Jan 10 15:26 882de061.0 -> cert.pem
lrwxr-xr-x  1 root  wheel38 Jan 10 15:22 cert.pem -> 
/usr/local/share/certs/ca-root-nss.crt
-rw-r--r--  1 root  wheel  9468 Jan  3  2012 openssl.cnf
[root@MailArch /etc/ssl]#

The clue was in the ca_root_nss port. If you enable etc symlink creation it 
creates the link in /etc/ssl. After running c_rehash (using the correct 
openssl) in that directory, the other tools that just call the openssl 
libraries find the root certs just fine.
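
What c_rehash does can be sketched by hand: it links each cert as <subject-hash>.0, and — crucially for this thread — OpenSSL 0.9.x and 1.x compute different subject hashes, so a directory hashed with one binary is invisible to the other's libraries. A self-contained illustration using a throwaway self-signed cert (all names below are made up):

```shell
# Make a scratch directory with a throwaway self-signed "CA" cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
# This is the link c_rehash would create: <subject-hash>.0 -> cert.
hash=$(openssl x509 -noout -hash -in "$dir/ca.pem")
ln -s ca.pem "$dir/$hash.0"
# The library-style lookup via -CApath now succeeds and reports "... OK".
openssl verify -CApath "$dir" "$dir/ca.pem"
```

Running the same commands with the other openssl binary produces a different hash name, which is exactly why the cert "installed" for one build is not found by the other.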

Thanks for the help.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS - whole disk or partition or BSD slice?

2013-01-28 Thread Paul Kraus
On Jan 27, 2013, at 8:36 PM, Shane Ambler wrote:

> I recall reading that using partitions for zfs on FreeBSD was as good as full 
> disks. For a boot zpool we need to at least have a partition for the 
> boot-code and one for zfs preventing the use of a full disk.

I have been using ZFS with GPT partitions with no issues. I have NOT 
compared performance between whole disk and partitioned, which is where the 
difference in Solaris arises (ZFS makes better use of the physical drive's 
write cache).

> ZFS is meant to be compatible between different endian systems (x86 and 
> sparc) From what I have read and heard it sounds like zpools are expected be 
> compatible between different OS's as well - as far as zpool versions are 
> compatible - but I do expect it would depend on the partition tables being 
> readable - while full disk usage should work I would also think GPT is 
> compatible. OSX 10.5 (x86 and ppc) included a read-only zfs kext (before 
> Apple canned the project) so it must have been able to read Solaris or 
> FreeBSD created zpools which does indicate a fairly high level of 
> compatibility.

The target OS must be able to read the partitioning scheme used. I am 
not aware of Solaris / OpenSolaris / Illumos being able to read GPT partitions, 
but it has been over 6 months since I played with any of them.

> I believe the way ZFS marks disks/partitions with the zpool data is so that 
> the zpools can be recognised between systems and controllers - it would be 
> interesting to know if and under what conditions a zpool can be accessed, 
> both between different FreeBSD machines as well as the possibility of reading 
> on a Solaris/Indiana machine. Anyone have the resources to test?

	When you give ZFS the whole disk, it writes an EFI-like label on the 
drive and makes use of one partition for the ZFS data. So there *is* a form of 
partitioning at the lowest layer, it is just *not* user-managed 
partitioning.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Starting with ZFS on fresh install

2013-01-28 Thread Paul Kraus
On Jan 28, 2013, at 10:39 AM, Mike Clarke wrote:

> If you're going to be using ZFS then you'll probably be better off not having 
> separate partitions and letting ZFS manage space allocation if you want to 
> limit the size of /var or any other part of the system,

You can manage space within a ZFS pool regardless of whether you give the 
zpool whole disks or a partition. For example, zfs list on one of my systems shows:

rootpool 6.13G  56.4G31K  none
rootpool/do-not-remove 31K  1024M31K  none
rootpool/root5.01G  56.4G  5.01G  /
rootpool/tmp 60.5K  56.4G  60.5K  /tmp
rootpool/var  111M  56.4G   111M  /var

This shows a system with a rootpool and, within the rootpool, three separate 
mounted filesystems:

/ (root)
/var
/tmp

You can control space usage with the zfs quota property.

Note the rootpool/do-not-remove dataset. This has a quota and reservation of 1 
GB. Its purpose is to permit recovery in case the zpool is accidentally 
filled. ZFS requires *some* free space to process file / directory remove 
operations. If the zpool is completely filled you will NOT be able to remove 
anything to free up space. By having a dataset with a quota and reservation of 
1 GB, that space is already marked as used so it will not be allocated. If the 
remainder of the zpool fills, then you can quiesce the system (so running 
processes don't steal the space you are about to free up), change the quota / 
reservation (I like going down to 512 MB), and then remove some files / 
directories to free up space. 

Note that the zpool itself (rootpool) is NOT used as a dataset and is NOT 
mounted. My experience with ZFS under Solaris taught me that while you *can* 
use that dataset, if you have any child datasets (and any other datasets 
created will, by definition, be children of the rootpool) you will end up with 
hierarchical datasets. This means that future operations on datasets will have 
to take place in a very specific order (such as mounting and unmounting). By 
avoiding hierarchical datasets (that are actually used) you avoid that 
complexity.
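
The safety-net dataset described above can be sketched like this. Dataset names follow the listing; it assumes an existing pool called rootpool and a root shell, so take it as an illustration rather than a recipe.

```shell
# Reserve 1 GB that no other dataset can consume; never mount or use it.
zfs create -o mountpoint=none -o quota=1g -o reservation=1g \
    rootpool/do-not-remove
# If the pool ever fills: shrink the reservation (then the quota),
# which frees space so file/directory removes can proceed.
# zfs set reservation=512m rootpool/do-not-remove
# zfs set quota=512m rootpool/do-not-remove
```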

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-28 Thread Paul Kraus
On Jan 28, 2013, at 3:43 PM, Artem Kuchin wrote:

> I have to made a decision on choosing a dedicated server.
> The problem i see is that while i can find very affordable and good options 
> they do not
> provide hardware raid or even if they do it is not the best hardware for 
> freebsd.

I prefer SW RAID, specifically ZFS, for two very large reasons:

1) Visibility: From the OS layer you have very good visibility into the health 
of the RAID set and the underlying drives. All of the lower end HW RAID 
solutions I have seen require proprietary software to "manage" the RAID 
configuration, usually from the physical system's BIOS layer. Finding good OS 
layer software to monitor the RAID and the drives has been very painful. If you 
don't know you have a failure, then you can't do anything about it and when you 
have a second failure you lose data. Running a HW RAID system and not being 
able to issue a simple command from the OS and see the status of the RAID 
scares me.

2) Error Detection and Correction: HW RAID relies on the drives to report read 
and write errors. With UNCORRECTABLE error rates of 10^-14 and 10^-15 and LARGE 
(1 TB plus) drives you are almost guaranteed to statistically run into 
UNCORRECTABLE errors over the life of a typical drive. ZFS has end to end 
checksums and can detect a single bad bit from a drive, if the set is redundant 
it can recreate the correct data and re-write it, effectively correcting the 
bad data on disk.

NOTE: Larger, more expensive HW RAID systems address both of the above issues, 
but at a much higher cost in terms of money and management overhead.

DISCLAIMER: I have been managing mission critical, cannot afford to lose it 
data under ZFS for over 5 years, with no loss of data (even with some horribly 
unreliable low cost HW RAID systems under the ZFS layer... if we had not used 
ZFS we would have lost data multiple times).  
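
The back-of-the-envelope arithmetic behind point 2 can be checked with a one-liner: at a 10^-14 per-bit unrecoverable error rate, a single full read of a 1 TB drive already carries a several-percent chance of hitting an error.

```shell
awk 'BEGIN {
    bits = 1e12 * 8            # 1 TB in bits
    ber  = 1e-14               # unrecoverable errors per bit read
    p = 1 - exp(-ber * bits)   # Poisson approximation for >= 1 error
    printf "P(error on one full read) = %.3f\n", p
}'
# prints: P(error on one full read) = 0.077
```

Over hundreds of full-drive reads in a disk's lifetime, that per-pass probability compounds toward near-certainty, which is the point being made above.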

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS - whole disk or partition or BSD slice?

2013-01-29 Thread Paul Kraus
On Jan 28, 2013, at 9:37 PM, Thomas Mueller wrote:

>> Presumably the disks are currently FreeBSD-specific.  If I used raw
>> disks instead of slices, could I read them from a Solaris system too?
> 
>> I'm mostly sure you would be able to read disks from Solaris/x86.
>> However Solaris/Sparc uses another labeling scheme. If you want to be
>> fully compatible with other systems, GPT is a better choice.
> 
> Is GPT compatible with Solaris, can Solaris access a GPT disk?

AFAIK, none of the Solaris derived OSes can read a GPT disk label.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: ZFS - whole disk or partition or BSD slice?

2013-01-29 Thread Paul Kraus
On Jan 29, 2013, at 6:59 AM, Volodymyr Kostyrko wrote:
>> 
>> Is GPT compatible with Solaris, can Solaris access a GPT disk?
> 
> Yes. I'm not sure if it can boot off GPT disk but on Solaris zpool 
> automatically creates boundary GPT partition to protect ZFS vdev.

Under the Solaris-based OSes I have used*, ZFS creates an EFI-like disk 
label, NOT a GPT label. FreeBSD (9.0) will read and use the EFI-like disk label 
that ZFS creates (or perhaps it is the ZFS code that is parsing the disk 
label). So if you want to move a zpool between FreeBSD and a Solaris-derived 
OS, then the safe bet is to give ZFS the entire disk and let it create the disk 
label.

*Solaris-based OSes that I have used:
Solaris 10
OpenSolaris 
NCP (Nexenta Core Platform)

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-30 Thread Paul Kraus
On Jan 30, 2013, at 8:10 AM, Andrea Venturoli wrote:

> You can spend the extra money you spare on the controller buying good disks; 
> as someone else pointed out don't get "desktop-class" ones, but "24x7" ones.

Server Class drives buy you some improvement, but my recent experience with 
Seagate Barracuda ES.2 drives is not that good. I have had 50% of them fail 
within the 5-year warranty period. My disks run 24x7 and I use ZFS under 
FreeBSD 9 so I have not lost any data. I have:

2 x Seagate ES.2 250 GB (one has failed)
4 x Seagate ES.2 1 TB (two have failed)
2 x Hitachi UltraStar 1 TB (pre-WD acquisition), no failures, but they are less 
than 2 years old. They are also noticeably faster than the Seagate ES.2

I just ordered 2 x WD RE4 500 GB drives; we'll see how those do.

I go out of my way to purchase disks with a 5-year warranty, they are still out 
there but you have to look for them.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-30 Thread Paul Kraus
On Jan 30, 2013, at 10:22 AM, Warren Block wrote:

> If you want to use the same drive for booting, it's possible.  Create all 
> three partitions on both drives manually.  Then mirror the freebsd-ufs 
> partition only.  The contents of the freebsd-boot partition don't change 
> often, and swap does not have to be mirrored.

Note that if you do NOT mirror SWAP, then in the event of a disk 
failure you will most likely crash when the system tries to swap in some data 
from the failed drive. If you mirror swap then you do not risk a crash due to 
missing swap data.
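
A minimal sketch of mirrored swap with gmirror follows; the partition names are assumptions, so adjust them to the actual layout, and this requires root.

```shell
# Mirror the two swap partitions (hypothetical partition names).
gmirror label swapmirror /dev/ada0p3 /dev/ada1p3
# Then point /etc/fstab at the mirror instead of a raw partition:
# /dev/mirror/swapmirror  none  swap  sw  0  0
```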

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-04 Thread Paul Kraus
On Jan 31, 2013, at 5:16 PM, Albert Shih wrote:

> Well I've server running FreeBSD 9.0 with (don't count / on differents
> disks) zfs pool with 36 disk.
> 
> The performance is very very good on the server.
> 
> I've one NFS client running FreeBSD 8.3 and the performance over NFS is
> very good : 
> 
> For example : Read from the client and write over NFS to ZFS:
> 
> [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar 
> 
> real1m7.244s
> user0m0.921s
> sys 0m8.990s
> 
> this client is on 1Gbits/s network cable and same network switch as the
> server.
> 
> I've a second NFS client running FreeBSD 9.1-Stable, and on this second
> client the performance is catastrophic. After 1 hour the tar isn't finish.
> OK this second client is connect with 100Mbit/s and not on the same switch.
> But well from 2 min --> ~ 90 min ...:-(
> 
> I've try for this second client to change on the ZFS-NFS server the
> 
>   zfs set sync=disabled 
> 
> and that change nothing.

I have been using FreeBSD 9 with ZFS and NFS to a couple of Mac OS X (10.6.8 Snow 
Leopard) boxes and I get between 40 and 50 MB/sec throughput on a Gigabit 
Ethernet link. Since you have already ruled out the known sync-write issue with ZFS 
when there is no SSD-based write log, perhaps you are running into an NFS 3 vs. 
NFS 4 issue. I am not sure whether Mac OS X is using NFS 3 or NFS 4.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
