right in thinking I
could also use this by first upgrading my host and then running this command
to write the /basejail over with the updated files from the host to bring
them into sync? I still don't know how I would then fix the /etc under each
individual jail though.
dware, the system
is an Atom 330 which is currently using Windows 2008 server with
TrueCrypt in a non-raid configuration and with that setup, I am
getting roughly 55mb/s reads and writes when using TrueCrypt
(nonencrypted it's around 115mb/s).
Thanks.
- Sincer
On Sun, Jan 10, 2010 at 6:12 PM, Damian Gerow wrote:
> Dan Naumov wrote:
> : I am mostly interested in benchmarks on lower end hardware, the system
> : is an Atom 330 which is currently using Windows 2008 server with
> : TrueCrypt in a non-raid configuration and with that setup, I am
On Sun, Jan 10, 2010 at 8:46 PM, Damian Gerow wrote:
> Dan Naumov wrote:
> : Yes, this is what I was basically considering:
> :
> : new AHCI driver => 40gb Intel SSD => UFS2 with Softupdates for the
> : system installation
> : new AHCI driver => 2 x 2tb disks, each ful
ebsd.org/cgi/man.cgi comes up with nothing? Also, does
this mean that GPT is _NOT_ in fact fixed regarding this bug?
Thanks.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To
ves throughput.
How fast is the CPU in the system showing no overhead? Having no
noticable overhead whatsoever sounds extremely unlikely unless you are
actually using it on something like a very modern dualcore or better.
- Sincerely,
Dan Naumov
___
freeb
691 secs (59559969 bytes/sec)
> srebrny# dd if=/dev/zero of=/data02/test bs=1M count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes transferred in 20.090274 secs (26096608 bytes/sec)
>
> Rafal Jackiewicz
Thanks, could you do the same, but using 2 .e
IO fail on the
l2arc, the system will gracefully continue to run, reverting said IO
to be processed by the actual default built-in ZIL on the disks of the
pool. However the capability to remove dedicated ZIL or gracefully
handle the death of a non-redundant dedicated ZIL vdev does not
currently exi
the SSD host the swap, boot, root and a few
other partitions. What do I need to know in regards to partition
alignment and filesystem block sizes to get the best performance out
of the Intel SSDs?
Thanks.
- Sincerely,
Dan Naumov
___
freebsd-stable
2010/1/12 Rafał Jackiewicz :
>>Thanks, could you do the same, but using 2 .eli vdevs mirrorred
>>together in a zfs mirror?
>>
>>- Sincerely,
>>Dan Naumov
>
> Hi,
>
> Proc: Intell Atom 330 (2x1.6Ghz) - 1 package(s) x 2 core(s) x 2 HTT threads
> Chipset
as being set to the beginning of the disk
(0x010100).) and applying it to his disk with DD. Can anyone point me
towards an explanation regarding how to edit and apply my own PMBR to
my disk to see if it helps?
Thanks.
Sincerely,
Dan Naumov
___
freebs
longer
needed in 8-STABLE, because the pmbr will be marked as active during
the installation of the bootcode. Is there anything I can do to
archieve the same result in 8.0-RELEASE or is installing from a
snapshop of 8-STABLE my only option?
Thanks.
- Sincerely,
Dan Naumov
__
On 1/19/2010 12:11 PM, Dan Naumov wrote:
> It seems that quite a few BIOSes have serious issues booting off disks
> using GPT partitioning when no partition present is marked as
> "active". See http://www.freebsd.org/cgi/query-pr.cgi?pr=115406&cat=bin
> for a prime e
install created installs using MBR partitioning and that I had swap
as my first partition inside the slice and that it all worked dandy.
Has this changed at some point? Oh, and for the curious the
installation script is here: http://jago.pp.fi/zfsmbrv1-work
On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. wrote:
> On Fri, Jan 22, 2010 at 05:57:23AM +0200, Dan Naumov wrote:
>
> Hi,
>
>> I recently found a nifty "FreeBSD ZFS root installation script" and
>> been reworking it a bit to suit my needs better, includ
On Fri, Jan 22, 2010 at 6:49 AM, Dan Naumov wrote:
> On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. wrote:
>> On Fri, Jan 22, 2010 at 05:57:23AM +0200, Dan Naumov wrote:
>>
>> Hi,
>>
>>> I recently found a nifty "FreeBSD ZFS root installation script" an
or through the FreeBSD
Foundation? And how would one go about calculating the appropriate
amount of money for such a thing?
Thanks.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd
On Sun, Jan 24, 2010 at 5:29 PM, John wrote:
> On Fri, Jan 22, 2010 at 07:02:53AM +0200, Dan Naumov wrote:
>> On Fri, Jan 22, 2010 at 6:49 AM, Dan Naumov wrote:
>> > On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. wrote:
>> >> On Fri, Jan 22, 2010 at 05:5
2008 Server and NTFS. So what would be the
cause of these very low Bonnie result numbers in my case? Should I try
some other benchmark and if so, with what parameters?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebs
guration in the past,
where I would boot off an UFS disk and have the ZFS mirror consist of
2 discs directly. The bonnie numbers in that case were in line with my
expectations, I was seeing 65-70mb/s. Note: again, exact same
hardware, exact same disks attached to the exact same controller. In
my
On Sun, Jan 24, 2010 at 7:42 PM, Dan Naumov wrote:
> On Sun, Jan 24, 2010 at 7:05 PM, Jason Edwards wrote:
>> Hi Dan,
>>
>> I read on FreeBSD mailinglist you had some performance issues with ZFS.
>> Perhaps i can help you with that.
>>
>> You seem to be r
On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn
wrote:
> On Sun, 24 Jan 2010, Dan Naumov wrote:
>>
>> This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
>> 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
>> bonnie results. It also
On Sun, Jan 24, 2010 at 8:34 PM, Jason Edwards wrote:
>> ZFS writes to a mirror pair
>> requires two independent writes. If these writes go down independent I/O
>> paths, then there is hardly any overhead from the 2nd write. If the
>> writes
>> go through a bandwidth-limited shared path then the
On Sun, Jan 24, 2010 at 11:53 PM, Alexander Motin wrote:
> Dan Naumov wrote:
>> This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
>> 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
>> bonnie results. It also sadly seems to confir
On Mon, Jan 25, 2010 at 2:14 AM, Dan Naumov wrote:
> On Sun, Jan 24, 2010 at 11:53 PM, Alexander Motin wrote:
>> Dan Naumov wrote:
>>> This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
>>> 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
wrote:
> On Mon, 25 Jan 2010, Dan Naumov wrote:
>>
>> I've checked with the manufacturer and it seems that the Sil3124 in
>> this NAS is indeed a PCI card. More info on the card in question is
>> available at
>
On Mon, Jan 25, 2010 at 9:34 AM, Dan Naumov wrote:
> On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
> wrote:
>> On Mon, 25 Jan 2010, Dan Naumov wrote:
>>>
>>> I've checked with the manufacturer and it seems that the Sil3124 in
>>> this NAS is
do you
think of these 2 for use in a FreeBSD8 ZFS NAS:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H&IPMI=Y
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
On Mon, Jan 25, 2010 at 8:32 PM, Alexander Motin wrote:
> Dan Naumov wrote:
>> Alexander, since you seem to be experienced in the area, what do you
>> think of these 2 for use in a FreeBSD8 ZFS NAS:
>>
>> http://www.supermicro.com/products/motherboard/ATOM/I
e
Always - 5908
The disks are of exact same model and look to be same firmware. Should
I be worried that the newer disk has, in 136 hours reached a higher
Load Cycle count twice as big as on the disk thats 5253 hours old?
- Sincerely,
Dan Naumov
network unless I absolutely have to :)
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
:)
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
tion past 400,000 (over 600,000 all bets are off though). The
people who need(ed) to worry were people like me, who were seeing the
rate increase at a rate of 43+ per hour.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.free
misunderstanding something or is the Supermicro support tech misguided?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-uns
) A less clean solution would be to setup a script that polls the
SMART data of all disks affected by the problem every 8-9 seconds and
have this script launch on boot. This will keep the affected drives
just busy enough to not park their heads.
2010/2/8 Gerrit Kühn :
> On Mon, 8 Feb 2010 15:43:46 +0200 Dan Naumov wrote
> about RE: one more load-cycle-count problem:
>
> DN> >Any further ideas how to get rid of this "feature"?
>
> DN> 1) The most "clean" solution is probably using the
right now?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
ther. Is this a known issue and/or should I submit a PR?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
On Sun, Feb 14, 2010 at 2:24 AM, Dan Naumov wrote:
> Hello
>
> From the SUN ZFS Administration Guide:
> http://docs.sun.com/app/docs/doc/819-5461/gaztn?a=view
>
> "If ZFS is currently managing the file system but it is currently
> unmounted, and the mountpoint property i
2. SuperMicro 5046A $750 (+$43 shipping)
>> 3. LSI SAS 3081E-R $235
>> 4. SATA cables $60
>> 5. Crucial 3×2G ECC DDR3-1333 $191 (+ $6 shipping)
>> 6. Xeon W3520 $310
You do realise how much of a massive overkill this is and how much you
are overspending?
- Dan
On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille wrote:
> Dan Naumov wrote:
>>>
>>> On Sun, 14 Feb 2010, Dan Langille wrote:
>>>>
>>>> After creating three different system configurations (Athena,
>>>> Supermicro, and HP), my configuratio
On Mon, Feb 15, 2010 at 12:42 AM, Dan Naumov wrote:
> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille wrote:
>> Dan Naumov wrote:
>>>>
>>>> On Sun, 14 Feb 2010, Dan Langille wrote:
>>>>>
>>>>> After creating three differe
bout 8 watt/disk for "green" disks to 20
watt/disk for really powerhungry ones. So yes.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
g. md(4)) to a pool for
> L2ARC/cache? The ZFS documentation explicitly states that cache
> device content is considered volatile.
Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
have RAM to spare, it should be used by regu
On Mon, Feb 15, 2010 at 7:14 PM, Dan Langille wrote:
> Dan Naumov wrote:
>>
>> On Sun, Feb 14, 2010 at 11:38 PM, Dan Langille wrote:
>>>
>>> Dan Naumov wrote:
>>>>>
>>>>> On Sun, 14 Feb 2010, Dan Langille wrote:
>>>
re expensive 1366 socket CPUs
>> and boards.
>>
>> - Sincerely,
>> Dan Naumov
>
> Hi,
>
> Do have test about this? I'm not really impressed with the i5 series.
>
> Regards,
> Andras
There: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3634&p
stripe
doesn't work or is that "everywhere" also out of date after your
changes? :)
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
On Fri, Feb 19, 2010 at 1:03 AM, Matt Reimer wrote:
> On Thu, Feb 18, 2010 at 10:57 AM, Matt Reimer wrote:
>>
>> On Tue, Feb 16, 2010 at 12:38 AM, Dan Naumov wrote:
>>>
>>> > I don't know, but I plan to test that scenario in a few days.
>>>
e now seems to have increased by a factor of 2 to 3
and is now definately in line with the expected performance of the
disks in question (cheap 2TB WD20EADS with 32mb cache). Thanks to
everyone who has offered help and tips!
- Sincerely,
Dan N
ut it? :)
Thanks!
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
Hello
Is powerd finally considered stable and safe to use on 8.0? At least
on 7.2, it consistently caused panics when used on Atom systems with
Hyper-Threading enabled, but I recall that Attilio Rao was looking
into it.
- Sincerely,
Dan Naumov
ate why does the CPU get stuck at 1249 Mhz
after boot by default when not using powerd and why it gets stuck at
1666 Mhz with powerd enabled and doesn't scale back down when IDLE?
Out of curiosity, I stopped powerd but the CPU remained at 1666 Mhz.
- Sincerely,
Dan Naumov
omething puts my CPU to
1249 Mhz upon boot with powerd disabled and it gets stuck there, this
shouldn't happen.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubsc
haviour, CPU is downclocked
both before and after issuing that command :)
Still doesn't explain why the system boots up at 1249 Mhz, but that's
not that big of an issue at this point now I see that powerd is
behaving correctly.
- Sincerely,
Dan Naumov
_
On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
using an Intel gigabit nic with the em driver, running on top of a ZFS
mirror, I was seeing a strange issue. Local reads and writes to the
pool easily saturate the disks with roughly 75mb/s throughput, which
is roughly the best t
On Fri, Mar 19, 2010 at 11:14 PM, Dan Naumov wrote:
> On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
> using an Intel gigabit nic with the em driver, running on top of a ZFS
> mirror, I was seeing a strange issue. Local reads and writes to the
> pool easily
34592 bytes transferred in 76.031399 secs (112978779 bytes/sec)
(107,74mb/s)
Individual disks read capability: 75mb/s
Reading off a mirror of 2 disks with prefetch disabled: 60mb/s
Reading off a mirror of 2 disks with prefetch enabled: 107mb/s
- Sincerely,
reason you are upgrading from a production
release to a development branch of the OS?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "
ntually 8.1 will be
RELENG_8_0: 8.0-RELEASE + latest critical security and reliability
updates (8.0 is up to patchset #2, hence -p2)
Same line of thinking applies to 7-STABLE, 7.3-RELEASE and so on.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
system
with 2gb ram or more or is this i386 + 1-2gb ram? Amd64 systems with
2gb ram or more don't really usually require any tuning whatsoever
(except for tweaking performance for a specific workload), but if this
is i386, tuning will be generally required to archiev
Blowfish ad1s1a.eli 4GB swap
ad1s1b 128GB ufs2+s /
ad1s1c 128GB ufs2+s noauto /mnt/sysbackup
ad1s2 => 128bit Blowfish ad1s2.eli
zpool
/home
/mnt/data1
Thanks for your input.
- Dan Naumov
___
fre
possible causes of ZFS working so slow on your
system? Just wondering if its an ATA chipset problem, a drive problem,
a ZFS problem or what...
- Dan Naumov
On Fri, May 29, 2009 at 12:10 PM, Pete French
wrote:
>> Is there anyone here using ZFS on top of a GELI-encrypted provider on
>> ha
-encrypted NTFS partition: ~65 MB/s
As you can see, the performance drop is noticeable, but not anywhere
nearly as dramatic.
- Dan Naumov
> I have a zpool mirror on top of two 128bit GELI blowfish devices with
> Sectorsize 4096, my system is a D945GCLF2 with 2GB RAM and a Intel Arom
> 33
+ GHz or a quad core at 2.6. Budget is a concern...
Our difference is that my hardware is already ordered and Intel Atom
330 + D945GCLF2 + 2GB ram is what it's going to have :)
- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.f
Pardon my ignorance, but what do these numbers mean and what
information is deductible from them?
- Dan Naumov
> I don't mean to take this off-topic wrt -stable but just
> for fun, I built a -current kernel with dtrace and did:
>
> geli onetime gzero
> ./hot
stuck with 2 x 2 disk mirrors or is there some 3+1
configuration possible?
Sincerely,
- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-
exact same size or bigger than the old
device)?
- Dan Naumov
On Sat, May 30, 2009 at 10:06 PM, Louis Mamakos wrote:
> I built a system recently with 5 drives and ZFS. I'm not booting off a ZFS
> root, though it does mount a ZFS file system once the system has booted from
> a U
he problem.
- Dan Naumov
On Sun, May 31, 2009 at 5:29 PM, Ronald Klop
wrote:
> On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov wrote:
>
>> Now that I have evaluated the numbers and my needs a bit, I am really
>> confused about what appropriate course of action for me would be.
&
Hi
Since you are suggesting 2 x 8GB USB for a root partition, what is
your experience with read/write speed and lifetime expectation of
modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?
- Dan Naumov
> Hi Dan,
>
> everybody has different needs, but what exactly are you d
host a root partition on,
without having to setup some crazy GEOM mirror setup using 2 of them?
- Dan Naumov
2009/6/2 Gerrit Kühn
> On Sat, 30 May 2009 21:41:36 +0300 Dan Naumov wrote
> about ZFS NAS configuration question:
>
> DN> So, this leaves me with 1 SATA port used f
procedure_.
Reading that made me pause for a second and made me go "WOW", this is how
UNIX system upgrades should be done. Any hope of us lowly users ever seeing
something like this implemented in FreeBSD? :)
- Dan Naumov
On Tue, Jun 2, 2009 at 9:47 PM, Zaphod Beeblebrox wrote:
>
>
A little more info for the (perhaps) curious:
Managing Multiple Boot Environments:
http://dlc.sun.com/osol/docs/content/2009.06/getstart/bootenv.html#bootenvmgr
Introduction to Boot Environments:
http://dlc.sun.com/osol/docs/content/2009.06/snapupgrade/index.html
- Dan Naumov
On Tue, Jun 2
Anyone else think that this combined with freebsd-update integration
and a simplistic menu GUI for choosing the preferred boot environment
would make an _awesome_ addition to the base system? :)
- Dan Naumov
On Wed, Jun 3, 2009 at 5:42 PM, Philipp Wuensche wrote:
> I wrote a script implement
s possible goes directly to benefit
the development of ZFS support on FreeBSD, should I continue donating
to the foundation or should I be sending donations directly to
specific developers?
Thank you
- Dan Naumov
___
freebsd-stable@freebsd.org ma
Hello list
Any ideas if gptzfsboot is going to be MFC'ed into RELENG_7 anytime
soon? I am going to be building a NAS soon and I would like to have a
"full ZFS" system without having to resort to running 8-CURRENT :)
Sincerely,
- Dan Naumov
gptboot. I didn't make any changes to the stock
Makefiles and used GENERIC kernel config. Do I need to adjust some
options for gptzfsboot to get built?
- Dan Naumov
>>
>
> 5/25/09 - last month
>
___
freebsd-stable@freebsd.o
from 8-CURRENT? Is that getting MFC'ed into
into RELENG_7 anytime soon?
Where are all make.conf options documented by the way? Neither
/usr/share/examples/etc/make.conf nor "man make.conf" make any
reference to the LOADER_ZFS_SUPPORT option.
- Dan Naumov
On Mon, Jun 8, 2009 at 7
===
And...
===
agathon# which install
/usr/bin/install
===
Any ideas?
- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send
You need to mount your /dev/ad6s1d.journal as /usr and not
/dev/ad6s1d, because this is the new device provided to you by GEOM.
- Dan Naumov
On Thu, Jun 11, 2009 at 5:50 AM, Garrett Cooper wrote:
> On Wed, Jun 10, 2009 at 7:44 PM, Garrett Cooper wrote:
>> Hi Pawel, ATA, and Sta
or a 3-way one?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
fs on partitions instead and keeping a few gb unused on each
disk leaves us with some room to play and be able to avoid this issue.
- Dan Naumov
On Mon, Jun 15, 2009 at 5:16 AM, Freddie Cash wrote:
> I don't know for sure if it's the same on FreeBSD, but on Solaris, ZFS will
> disable t
If this is true, some magic has been done to the FreeBSD port of ZFS,
because according to SUN documentation is is definitely not supposed
to be possible.
- Dan Naumov
On Mon, Jun 15, 2009 at 10:48 AM, Pete
French wrote:
>> The new 2tb disk you buy can very often be actually a few s
e. I guess this probably varies from
manufacturer to manufacturer, but some average estimates would be
nice, just so that one could evaluate whether this 64k barrier is
enough.
- Dan Naumov
On Mon, Jun 15, 2009 at 11:35 AM, Pete
French wrote:
>> If this is true, some magic has been done to th
make up for the
huge difference in performance or is there something else in play? The
system is an Intel Atom 330 dualcore, 2gb ram, Western Digital Green
2tb disk. Also what would be another good way to get good numbers for
comparing the performance of UFS2 vs ZFS on the sam
All the ZFS tuning guides for FreeBSD (including one on the FreeBSD
ZFS wiki) have recommended values between 64M and 128M to improve
stability, so that what I went with. How much of my max kmem is it
safe to give to ZFS?
- Dan Naumov
On Thu, Jun 18, 2009 at 2:51 AM, Ronald Klop wrote
a GJOURNAL for the root
filesystem...
Sincerely,
- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
es and potential data loss. In your
case, I would have the pool built as a group of 2 x 6-disk raidz.
Sincerely,
- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
nstall the new file.
This would help avoid having to manually approve installation of
hundreds of files in /etc when you upgrade to new releases using
freebsd-update.
- Sincerely,
Dan Naumov
On Fri, Jul 3, 2009 at 11:51 AM, Dominic Fandrey wrote:
> I'd really like mergemaster to tell m
of tank/DATA 1.8T
while the others are 1.5T?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
On Sun, Jul 5, 2009 at 2:26 AM, Freddie Cash wrote:
>
>
> On Sat, Jul 4, 2009 at 2:55 PM, Dan Naumov wrote:
>>
>> Hello list.
>>
>> I have a single 2tb disk used on a 7.2-release/amd64 system with a
>> small part of it given to UFS and most of the disk give
stem is reserved, the amount reserved
has historically varied between 5-8%. This is adjustable. See the "-m"
switch to tunefs.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
md64 updated to -p1 with
freebsd-update, 2 kernels is the maximum that would fit into the
default 512mb partition size for /, a bit too tight for my liking.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/
ly serving files to my home network over Samba and running a few
irssi instances in a screen. What do I need to do to catch more
information if/when this happens again?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.f
On Tue, Jul 7, 2009 at 4:18 AM, Attilio Rao wrote:
> 2009/7/7 Dan Naumov :
>> I just got a panic following by a reboot a few seconds after running
>> "portsnap update", /var/log/messages shows the following:
>>
>> Jul 7 03:49:38 atom syslogd: kernel boot fil
ll isn't exactly up to the
task.
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
en't available for
>> > sparc64?
>>
>> Yes, That's probably it.
>
> It was just a theory; I don't have sparc64. What's your output of
> "ls -1 /boot/kernel | wc"?
>
> -- Rick C. Petty
atom# uname -a
FreeBSD atom.localdomain
every filesystem
used by FreeBSD (ufs, zfs, etc) hardcoded to ignore the last few
sectors of any disk and/or partition and not write data to it to avoid
such issues?
- Sincerely,
Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
ts this
command to make the freebsd-zfs partition take "entiredisk minus last
sector" ? I can understand the logic of metadata being protected if I
do a: "gpart add -b 1 -s -t freebsd-zfs
/dev/label/disk01" since gpart will have to go through the actual
la
On Tue, Jul 7, 2009 at 4:27 AM, Attilio Rao wrote:
> 2009/7/7 Dan Naumov :
>> On Tue, Jul 7, 2009 at 4:18 AM, Attilio Rao wrote:
>>> 2009/7/7 Dan Naumov :
>>>> I just got a panic following by a reboot a few seconds after running
>>>> "portsnap
On Wed, Jul 8, 2009 at 3:57 AM, Dan Naumov wrote:
> On Tue, Jul 7, 2009 at 4:27 AM, Attilio Rao wrote:
>> 2009/7/7 Dan Naumov :
>>> On Tue, Jul 7, 2009 at 4:18 AM, Attilio Rao wrote:
>>>> 2009/7/7 Dan Naumov :
>>>>> I just got a panic foll
1 - 100 of 109 matches
Mail list logo