> geli: 31 / 40 (MB/s write / read)
> zfs: 86 / 71
31 MB/s writes and 40 MB/s reads is something that I guess I could
potentially live with. I am guessing the main problem of stacking ZFS
on top of geli like this is the fact that writing to a mirror requires
double the CPU work: every block has to be encrypted once per disk.
>Thanks, could you do the same, but using 2 .eli vdevs mirrored
>together in a zfs mirror?
>
>- Sincerely,
>Dan Naumov
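For reference, a minimal sketch of the two-.eli-vdev mirror being requested, reusing the geli init lines from the benchmark later in the thread; the attach step and the pool name "tank" are assumptions:

geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
geli attach -k /etc/keys/ad4s2.key /dev/ad4s2   # creates /dev/ad4s2.eli
geli attach -k /etc/keys/ad6s2.key /dev/ad6s2   # creates /dev/ad6s2.eli
zpool create tank mirror ad4s2.eli ad6s2.eli    # each write is encrypted once per disk, hence the doubled CPU cost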
Hi,
Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
Chipset: Intel 82945G
Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
empty file: /boot/loader.conf
Hdd:
On Mon, 11 Jan 2010 19:45, fjwcash@ wrote:
On Mon, Jan 11, 2010 at 4:24 PM, Dan Naumov wrote:
On Tue, Jan 12, 2010 at 1:29 AM, K. Macy wrote:
If performance is an issue, you may want to consider carving off a
partition on that SSD, geli-fying it, and using it as a ZIL device.
On Mon, Jan 11, 2010 at 4:24 PM, Dan Naumov wrote:
> On Tue, Jan 12, 2010 at 1:29 AM, K. Macy wrote:
> >>> If performance is an issue, you may want to consider carving off a
> >>> partition on that SSD, geli-fying it, and using it as a ZIL device.
> >>> You'll probably see a marked performance improvement with such a setup.
> Ok, let's assume we have a dedicated ZIL on a single non-redundant
> disk. This disk dies. How do you remove the dedicated ZIL from the
> pool or replace it with a new one? Solaris ZFS documentation indicates
> that this is possible for a dedicated L2ARC - you can remove a dedicated
> l2arc from a pool
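For context, a hedged sketch of what the suggestion and the concern look like in practice; the pool name and SSD partition are illustrative, and whether the last step works at all depends on the pool version (log device removal arrived with pool version 19, which FreeBSD 8.0 predates):

geli init -s 4096 /dev/ad0s3            # the partition carved off the SSD
geli attach /dev/ad0s3
zpool add tank log ad0s3.eli            # dedicated ZIL on the encrypted SSD
zpool remove tank ad0s3.eli             # only on pool versions with log removal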
Two hdd Seagate ES2, Intel Atom 330 (2x1.6GHz), 2GB RAM:
geli:
geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
zfs:
zpool create data01 ad4s2.eli
df -h:
/dev/ad6s2.eli.journal    857G    8.0K    788G     0%    /data02
data01
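The /data02 line suggests UFS on gjournal on geli next to the pool; a sketch of how such a provider is typically built (an assumption about the setup, not the actual commands used):

gjournal load                           # geom_journal module
gjournal label ad6s2.eli                # creates /dev/ad6s2.eli.journal
newfs -J /dev/ad6s2.eli.journal         # UFS2 with the gjournal flag set
mount -o async /dev/ad6s2.eli.journal /data02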
On Tue, Jan 12, 2010 at 1:29 AM, K. Macy wrote:
>>>
>>> If performance is an issue, you may want to consider carving off a partition
>>> on that SSD, geli-fying it, and using it as a ZIL device. You'll probably
>>> see a marked performance improvement with such a setup.
>>
>> That is true, but using a single device for a dedicated ZIL is a
>> huge no-no.
2010/1/12 Rafał Jackiewicz :
> Two hdd Seagate ES2, Intel Atom 330 (2x1.6GHz), 2GB RAM:
>
> geli:
> geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
> geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
>
> zfs:
> zpool create data01 ad4s2.eli
>
> df -h:
> /dev/ad6s2.eli.journal    857G
>>
>> If performance is an issue, you may want to consider carving off a partition
>> on that SSD, geli-fying it, and using it as a ZIL device. You'll probably
>> see a marked performance improvement with such a setup.
>
> That is true, but using a single device for a dedicated ZIL is a huge
> no-no.
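The standard mitigation for that objection is to mirror the log device as well, so losing one SSD does not take the ZIL with it; a sketch, with illustrative partition names:

zpool add data01 log mirror ad0s3.eli ad1s3.eli   # mirrored dedicated ZIL (both providers already geli-attached)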
On Mon, Jan 11, 2010 at 11:39 AM, Dan Naumov wrote:
> On Mon, Jan 11, 2010 at 7:30 PM, Pete French wrote:
> How fast is the CPU in the system showing no overhead? Having no
> noticeable overhead whatsoever sounds extremely unlikely unless you are
> actually using it on something like a very modern dualcore or better.
> How fast is the CPU in the system showing no overhead? Having no
> noticeable overhead whatsoever sounds extremely unlikely unless you are
> actually using it on something like a very modern dualcore or better.
It's a very modern dual core :-) Phenom 550 - the other machine is an old
Opteron 252.
On Mon, Jan 11, 2010 at 7:30 PM, Pete French wrote:
>> GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
>> made any benchmarks regarding how much of a performance hit is caused
>> by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a
>
> I haven't done it directly
> GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
> made any benchmarks regarding how much of a performance hit is caused
> by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a
I haven't done it directly on the same boxes, but I have two systems
with identical d
On Sun, Jan 10, 2010 at 8:46 PM, Damian Gerow wrote:
> Dan Naumov wrote:
> : Yes, this is what I was basically considering:
> :
> : new AHCI driver => 40gb Intel SSD => UFS2 with Softupdates for the
> : system installation
> : new AHCI driver => 2 x 2tb disks, each fully encrypted with geli => 2
>
On Sun, Jan 10, 2010 at 6:12 PM, Damian Gerow wrote:
> Dan Naumov wrote:
> : I am mostly interested in benchmarks on lower end hardware, the system
> : is an Atom 330 which is currently using Windows 2008 server with
> : TrueCrypt in a non-raid configuration and with that setup, I am
> : getting r
On Sun, Jan 10, 2010 at 05:08:29PM +0200, Dan Naumov wrote:
> Hello list.
>
> I am evaluating options for my new upcoming storage system, where for
> various reasons the data will be stored on 2 x 2tb SATA disks in a
> mirror and has to be encrypted (a 40gb Intel SSD will be used for the
> system d
Hello list.
I am evaluating options for my new upcoming storage system, where for
various reasons the data will be stored on 2 x 2tb SATA disks in a
mirror and has to be encrypted (a 40gb Intel SSD will be used for the
system disk). Right now I am considering the options of FreeBSD with
GELI+ZFS and Debian Linux with MDRAID and cryptofs.
On Sun, 31.05.2009 at 19:28:51 +0300, Dan Naumov wrote:
> Hi
>
> Since you are suggesting 2 x 8GB USB for a root partition, what is
> your experience with read/write speed and lifetime expectation of
> modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?
Well, my current setup is using
On Sun, May 31, 2009 at 9:05 AM, Ulrich Spörlein wrote:
> everybody has different needs, but what exactly are you doing with 128GB
> of / ? What I did is the following:
>
> 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
> CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)
Ulrich Spörlein wrote:
2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)
Many have an internal USB header.
http://www.logicsupply.com/products/afap_082usb
Hi
Since you are suggesting 2 x 8GB USB for a root partition, what is
your experience with read/write speed and lifetime expectation of
modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?
- Dan Naumov
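For reference, the usual FreeBSD tool for mirroring two USB sticks is gmirror(8); a minimal sketch, assuming the sticks appear as da0 and da1 (device names are illustrative):

gmirror load                            # geom_mirror module
gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1
newfs -U /dev/mirror/gm0                # UFS2 with soft updates
mount /dev/mirror/gm0 /mnt
echo 'geom_mirror_load="YES"' >> /boot/loader.conf   # assemble the mirror at boot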
> Hi Dan,
>
> everybody has different needs, but what exactly are you doing with
On Fri, 29.05.2009 at 11:19:44 +0300, Dan Naumov wrote:
> Also, feel free to criticize my planned filesystem layout for the
> first disk of this system; the idea behind /mnt/sysbackup is to take a
> snapshot of the FreeBSD installation and its settings before doing
> potentially hazardous things l
> Hi Morgan,
>
> thanks for the nice benchmarking trick. I tried this on two ~7.2
> systems:
>
> CPU: Intel Pentium III (996.77-MHz 686-class CPU)
> -> 14.3MB/s
>
> CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
> -> 47.5MB/s
>
> Reading a big file from the pool of this P4 r
On Fri, 29.05.2009 at 12:47:38 +0200, Morgan Wesström wrote:
> You can benchmark the encryption subsystem only, like this:
>
> # kldload geom_zero
> # geli onetime -s 4096 -l 256 gzero
> # sysctl kern.geom.zero.clear=0
> # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
>
> 512+0 records in
> 512+0 records out
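The same trick can compare ciphers before any disk is involved; a sketch (the cipher and key-length flags here are illustrative):

# geli detach gzero.eli                     # drop the previous onetime provider
# geli onetime -e blowfish -l 128 -s 4096 gzero
# dd if=/dev/gzero.eli of=/dev/null bs=1M count=512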
I am pretty sure that adding more disks wouldn't solve anything in
this case, only either using a faster CPU or a faster crypto system.
When you are capable of 70 MB/s reads on a single unencrypted disk, but
only 24 MB/s reads off the same disk while encrypted, your disk speed
isn't the problem.
-
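On the faster-crypto-system point, a hedged sketch: CPUs with AES-NI plus a FreeBSD release that ships the aesni(4) driver can offload geli's AES work (neither applies to the Atom 330 discussed here):

# kldload aesni                             # accelerated AES for the kernel crypto framework
# geli onetime -e aes -l 128 -s 4096 gzero  # then re-run the gzero benchmark
# dd if=/dev/gzero.eli of=/dev/null bs=1M count=512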
On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov
wrote:
Now that I have evaluated the numbers and my needs a bit, I am really
confused about what appropriate course of action for me would be.
1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
Solaris and ported to FreeBSD "soon"
Quoting Dan Naumov :
Ouch, that does indeed sound quite slow, especially considering that
a dual core Athlon 6400 is a pretty fast CPU. Have you done any
comparison benchmarks between UFS2 with Softupdates and ZFS on the
same system? What are the read/write numbers like? Have you done any
investigating regarding possible
Pardon my ignorance, but what do these numbers mean and what
information is deducible from them?
- Dan Naumov
> I don't mean to take this off-topic wrt -stable but just
> for fun, I built a -current kernel with dtrace and did:
>
> geli onetime gzero
> ./hotkernel &
> dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
On Fri, May 29, 2009 at 01:49:54PM +0200, Ivan Voras wrote:
> Emil Mikulic wrote:
[...]
> > kernel`SHA256_Transform 1178 6.3%
> > kernel`rijndaelEncrypt 5574 29.7%
> > kernel`acpi_cpu_c1 8383
On Fri, May 29, 2009 at 2:49 PM, Ivan Voras wrote:
>
> Hi,
>
> What is the meaning of counts? Number of calls made or time?
>
>
The former.
Emil Mikulic wrote:
> On Fri, May 29, 2009 at 12:47:38PM +0200, Morgan Wesström wrote:
>> You can benchmark the encryption subsystem only, like this:
>>
>> # kldload geom_zero
>> # geli onetime -s 4096 -l 256 gzero
>> # sysctl kern.geom.zero.clear=0
>> # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
On Fri, May 29, 2009 at 12:47:38PM +0200, Morgan Wesström wrote:
> You can benchmark the encryption subsystem only, like this:
>
> # kldload geom_zero
> # geli onetime -s 4096 -l 256 gzero
> # sysctl kern.geom.zero.clear=0
> # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
I don't mean to take this off-topic wrt -stable but just for fun, I
built a -current kernel with dtrace and did:
geli onetime gzero
./hotkernel &
dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
Now that I have evaluated the numbers and my needs a bit, I am really
confused about what appropriate course of action for me would be.
1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
Solaris and ported to FreeBSD "soon" and that when it does, it won't
come with such a dramatic
Dan Naumov wrote:
> Thank you for your numbers, now I know what to expect when I get my
> new machine, since our system specs look identical.
>
> So basically on this system:
>
> unencrypted ZFS read: ~70 MB/s per disk
>
> 128bit Blowfish GELI/ZFS write: 35 MB/s per disk
> 128bit Blowfish GELI
Thank you for your numbers, now I know what to expect when I get my
new machine, since our system specs look identical.
So basically on this system:
unencrypted ZFS read: ~70 MB/s per disk
128bit Blowfish GELI/ZFS write: 35 MB/s per disk
128bit Blowfish GELI/ZFS read: 24 MB/s per disk
I am curious
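For anyone reproducing these per-disk figures, the usual measurement is a large sequential dd against the raw device and against its .eli provider (device names and count are illustrative):

# dd if=/dev/ad4 of=/dev/null bs=1M count=1024        # raw disk read
# dd if=/dev/ad4s2.eli of=/dev/null bs=1M count=1024  # the same read through geli decryption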
> Ouch, that does indeed sound quite slow, especially considering that
> a dual core Athlon 6400 is a pretty fast CPU. Have you done any
> comparison benchmarks between UFS2 with Softupdates and ZFS on the
Not at all - but, now you have got me curious, I just went to
a completely different system (
Dan Naumov wrote:
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
> are the performance implications of doing this? The reason I am asking
> is that I am in the process of building a small home NAS/webserver
Ouch, that does indeed sound quite slow, especially considering that
a dual core Athlon 6400 is a pretty fast CPU. Have you done any
comparison benchmarks between UFS2 with Softupdates and ZFS on the
same system? What are the read/write numbers like? Have you done any
investigating regarding possible
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
I run a mirrored zpool on top of a pair of 1TB SATA drives - they are
only 7200 rpm so pretty dog slow as far as I'm concerned. The
CPU is a dual core Athlon 6400
Is there anyone here using ZFS on top of a GELI-encrypted provider on
hardware which could be considered "slow" by today's standards? What
are the performance implications of doing this? The reason I am asking
is that I am in the process of building a small home NAS/webserver,
starting with a single