On 24-Jul-09, at 6:41 PM, Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
No, the problematic default in VirtualBox is flushes being *ignored*,
whic
On Jul 24, 2009, at 22:17, Bob Friesenhahn wrote:
A journaling filesystem uses a journal (transaction log) to roll
back (replace with previous data) the unordered writes in an
incomplete transaction. In the case of ZFS, it is only necessary to
go back to the most recent checkpoint and any
On Fri, 24 Jul 2009, Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
From my understanding, ZFS is not a journalled file system. ZFS
relies on ordere
Kyle wrote:
If I run `zpool create -f tank raidz1 c3d0 c3d1 c6d0 c6d1` it causes the OS not to boot
saying "Cannot find active partition". If I leave c3d1 out.. ie. `zpool create
-f tank raidz1 c3d0 c6d0 c6d1` and reboot everything is fine. This makes no sense to me
since c4d0 is showing up
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green
drives to create a ZFS NAS. The intended install is one drive dedicated to the
OS and the remaining 4 drives in a raidz1 configuration. The install is
working fine, but creating the raidz1 pool and rebooting is caus
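Before handing four of the drives to raidz1 it may be worth double-checking which disk the root pool actually lives on, so that one is left out of the data pool. A minimal sketch, assuming the default OpenSolaris root pool name rpool (device names are placeholders):

  # show which device backs the root pool
  zpool status rpool
  # list every disk the system sees, with controller/target numbers,
  # then exit without making changes
  format < /dev/null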
That is because you had only one other choice: filesystem level copy.
With ZFS I believe you will find that snapshots will allow you to have
better control over this. The send/receive process is very, very similar
to a mirror resilver, so you are only carrying your previous process
forward into
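In case it helps, the snapshot-based equivalent of that cloning step is roughly the sketch below; the dataset, snapshot, and host names are placeholders for illustration:

  # snapshot the source filesystem and replicate it to the new machine
  zfs snapshot rpool/export/home@clone
  zfs send rpool/export/home@clone | ssh newhost zfs receive -F tank/export/home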
On Jul 24, 2009, at 16:00, Miles Nordin wrote:
Is there a correct way to configure it, or will any component of the
overall system other than ZFS always get blamed when ZFS
loses a pool?
By default VB does not respect the 'disk sync' command that a guest OS
could send--it's just ignored.
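For reference, VirtualBox exposes a per-VM setting that controls whether guest flush requests are honoured. A minimal sketch of setting it, assuming an IDE-attached disk on LUN 0; the VM name is a placeholder and the device path varies with controller type, so the VirtualBox manual should be checked for the exact key:

  # run while the VM is powered off; 0 means "do not ignore flushes"
  VBoxManage setextradata "OpenSolaris VM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0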
Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system!
Even a journalled file system has to trust the journal. If the storage
says the journal is committed and it isn't, all bets are off.
Hi,
I am trying to understand in detail how much metadata is being cached in the ARC
and L2ARC for my workload.
Looking at 'kstat -n arcstats', I see:
ARC Current Size: 19217 MB (size=19,644,754,928)
ARC Metadata Size: 112 MB (hdr_size=117,896,760)
I am trying to understand what l2_hdr_size means.
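As far as I understand it, l2_hdr_size counts the ARC memory consumed by the headers that describe buffers currently held in the L2ARC, so it is overhead charged to the ARC rather than data cached on the L2ARC device itself. A quick way to pull the header- and metadata-related counters out of the same kstat, as a sketch (field names can differ between builds):

  # parseable ARC statistics, filtered to header/metadata counters
  kstat -p zfs:0:arcstats | egrep 'hdr_size|meta'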
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
it's just more likely in a VM, especially when anything Microsoft
is involved, and the whole point of journalling is to prevent th
Rob Logan wrote:
> The post I read said OpenSolaris guest crashed, and the guy clicked
> the ``power off guest'' button on the virtual machine.
I seem to recall "guest hung". 99% of solaris hangs (without
a crash dump) are "hardware" in nature. (my experience backed by
an uptime of 1116 days), so
On Jul 24, 2009, at 2:33 PM, Bob Friesenhahn wrote:
On Fri, 24 Jul 2009, Kyle McDonald wrote:
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8
This is an interesting test report. Something quite interesting for
zfs is that if the write rate is continually high, then the write
performanc
On Fri, 24 Jul 2009, Kyle McDonald wrote:
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8
This is an interesting test report. Something quite interesting for zfs
is that if the write rate is continually high, then the write performance
will be limited by the FLASH erase performance, regard
Have you ever wondered if adding a separate log device
can improve your performance? zilstat is a DTrace script
which helps answer that question.
I have updated zilstat to offer the option of tracking ZIL
activity on a per-txg commit basis. By default, ZIL activity
is tracked chronologically at f
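For anyone trying it out, a typical invocation looks roughly like the sketch below; the script name, its location, and the interval/count arguments are assumptions on my part, and the option that switches to per-txg reporting is described in the script's usage output:

  # sample ZIL activity once per second for ten samples
  ./zilstat.ksh 1 10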
On Fri, Jul 24, 2009 at 4:35 PM, Bob
Friesenhahn wrote:
> On Fri, 24 Jul 2009, Miles Nordin wrote:
>>
>> The post I read said OpenSolaris guest crashed, and the guy clicked
>> the ``power off guest'' button on the virtual machine. The host never
>> crashed. so whether the IDE cache flush paramete
Miles Nordin wrote:
"km" == Kyle McDonald writes:
km> These drives do seem to do a great job at random writes; most
km> of the promise shows at sequential writes. So does the slog
km> attempt to write sequentially through the space given to it?
NO! Everyone who is u
On Fri, 24 Jul 2009, Miles Nordin wrote:
The post I read said OpenSolaris guest crashed, and the guy clicked
the ``power off guest'' button on the virtual machine. The host never
crashed. so whether the IDE cache flush parameter was set or not,
Clicking ``power off guest'' is the same as walk
On Jul 24, 2009, at 10:46 AM, Kyle McDonald wrote:
Bob Friesenhahn wrote:
Of course, it is my understanding that the zfs slog is written
sequentially so perhaps this applies instead:
Actually, reading up on these drives I've started to wonder about
the slog writing pattern. While these
> The post I read said OpenSolaris guest crashed, and the guy clicked
> the ``power off guest'' button on the virtual machine.
I seem to recall "guest hung". 99% of solaris hangs (without
a crash dump) are "hardware" in nature. (my experience backed by
an uptime of 1116 days), so the finger is stil
> "km" == Kyle McDonald writes:
km> These drives do seem to do a great job at random writes; most
km> of the promise shows at sequential writes. So does the slog
km> attempt to write sequentially through the space given to it?
when writing to the slog, some user-visible applicatio
Ok -- thanks for your reply.
I just wonder what's in those 7 GB, if a 3 GB pool is enough. Why has it grown
so much?
I think I do not understand exactly the relationship between a BE and the ZFS
pools: if I destroy the BE, that doesn't destroy the data, does it? It puts
back the content of rpo
> "re" == Richard Elling writes:
re> The root cause of this thread's woes has absolutely nothing
re> to do with ECC RAM. It has everything to do with VirtualBox
re> configuration.
What part of VirtualBox configuration?
The post I read said OpenSolaris guest crashed, and the guy
> "c" == chris writes:
> "hk" == Haudy Kazemi writes:
c> why would anyone use something called basic? But there must be
c> a catch if they provided several ECC support modes.
They are just Taiwanese. They have no clue wtf they are doing and do
not care about quality since t
On Fri, 24 Jul 2009 19:36:52 +0200
dick hoogendijk wrote:
> Thank you for your support 'till now. One final question:..
Alas, it's not a final question. It still does not work. I have no idea
what else I could have forgotten. This is what I have on arwen (local)
and westmark (remote):
r...@westm
Bob Friesenhahn wrote:
Of course, it is my understanding that the zfs slog is written
sequentially so perhaps this applies instead:
Actually, reading up on these drives I've started to wonder about the
slog writing pattern. While these drives do seem to do a great job at
random writes, most
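For context, attaching a dedicated log device to an existing pool is a one-liner; the pool and device names below are placeholders, not a recommendation for any particular SSD:

  # add a separate intent-log (slog) device to the pool
  zpool add tank log c5t0d0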
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from
you whether there is some known bug about it. I'm aware of the "picket fencing"
problem, and that ZFS is not choosing correctly whether a write to the slog is
better or not (i.e. whether we would get better throughput from the disks)
Ok, I re-tested my rotating rust with these iozone options (note that
-o requests synchronous writes):
iozone -t 6 -k 8 -i 0 -i 2 -O -r 8K -o -s 1G
and obtained these results:
Children see throughput for 6 random writers=5700.49 ops/sec
Parent sees throughput for 6 ran
On Fri, 24 Jul 2009 10:00:30 -0600
cindy.swearin...@sun.com wrote:
> Reproducing this will be difficult in my environment since
> our domain info is automatically set up...
Hey, no sweat ;-) I only asked because I don't want to do the "send
blah" again. But then again, computers don't get tired.
On Fri, Jul 24, 2009 at 05:01:15PM +0200, dick hoogendijk wrote:
> On Fri, 24 Jul 2009 10:44:36 -0400
> Kyle McDonald wrote:
> > ... then it seems like a shame (or a waste?) not to equally
> > protect the data both before it's given to ZFS for writing, and after
> > ZFS reads it back and returns
dick hoogendijk wrote:
On Fri, 24 Jul 2009 10:44:36 -0400
Kyle McDonald wrote:
... then it seems like a shame (or a waste?) not to equally
protect the data both before it's given to ZFS for writing, and after
ZFS reads it back and returns it to you.
But that was not the question.
Th
On Fri, 24 Jul 2009, Bob Friesenhahn wrote:
This seems like rather low random write performance. My 12-drive array of
rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems
that a synchronous write log should perform considerably better than the
backing store.
Actually,
On Jul 14, 2009, at 10:45 PM, Jorgen Lundman wrote:
Hello list,
Before we started changing to ZFS bootfs, we used DiskSuite mirrored
ufs boot.
Very often, if we needed to grow a cluster by another machine or
two, we would simply clone a running live server. Generally the
procedure for
On Jul 24, 2009, at 3:18 AM, Michael McCandless wrote:
I've read in numerous threads that it's important to use ECC RAM in a
ZFS file server.
It is important to use ECC RAM. The embedded market and
server market demand ECC RAM. It is only the el-cheapo PC
market that does not. Going back to s
On Fri, 24 Jul 2009, Tristan Ball wrote:
I've used 8K IO sizes for all the stage one tests - I know I might get
it to go faster with a larger size, but I like to know how well systems
will do when I treat them badly!
The Stage_1_Ops_thru_run is interesting. 2000+ ops/sec on random writes,
5000
In general, questions about beadm and related tools should be sent or at
least cross-posted to install-disc...@opensolaris.org.
Lori
On 07/24/09 07:04, Jean-Noël Mattern wrote:
Axelle,
You can safely run "beadm destroy opensolaris" if everything's
all right with your new opensolaris-1 boot
Hi Dick,
I haven't seen this problem when I've tested these steps.
And it's been a while since I've seen the nobody:nobody problem, but it
sounds like NFSMAPID didn't get set correctly.
I think this question is asked during installation and generally is set
to the default DNS domain name.
The dom
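If the nfsmapid domain is indeed the culprit, it can be inspected and corrected along these lines; this is only a sketch and the domain value is a placeholder:

  # check the current NFSv4 mapid domain
  sharectl get nfs | grep nfsmapid_domain
  # set it and restart the mapid service so names stop mapping to nobody:nobody
  sharectl set -p nfsmapid_domain=example.com nfs
  svcadm restart svc:/network/nfs/mapid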
On Fri, 24 Jul 2009 10:44:36 -0400
Kyle McDonald wrote:
> ... then it seems like a shame (or a waste?) not to equally
> protect the data both before it's given to ZFS for writing, and after
> ZFS reads it back and returns it to you.
But that was not the question.
The question was: [quote] "My q
On Fri, 24 Jul 2009 07:19:40 -0700 (PDT)
Rich Teer wrote:
> Given that data integrity is presumably important in every non-gaming
> computing use, I don't understand why people even consider not using
> ECC RAM all the time. The hardware cost delta is a red herring:
I live in Holland and it is
Michael McCandless wrote:
I've read in numerous threads that it's important to use ECC RAM in a
ZFS file server.
My question is: is there any technical reason, in ZFS's design, that
makes it particularly important for ZFS to require ECC RAM?
I think the idea is basically that if you're goin
This sounds like a bug I hit - if you have zvols on your pool, and
automatic snapshots enabled, the thousands of resultant snapshots have
to be polled by devfsadm during boot, which takes a long time - several
seconds per zvol.
I removed the auto-snapshot property from my zvols and the slow boot sto
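For reference, the property in question is the user property consumed by the auto-snapshot service; a sketch of turning it off on one volume (the dataset name is a placeholder):

  # stop the automatic snapshot service from snapshotting this zvol, so
  # devfsadm has far fewer snapshot device nodes to create at boot
  zfs set com.sun:auto-snapshot=false pool/vol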
Tristan Ball wrote:
It just so happens I have one of the 128G and two of the 32G versions in
my drawer, waiting to go into our "DR" disk array when it arrives.
Hi Tristan,
Just so I can be clear, What model/brand are the drives you were testing?
-Kyle
I dropped the 128G into a spare De
On Fri, 24 Jul 2009, Michael McCandless wrote:
> I've read in numerous threads that it's important to use ECC RAM in a
> ZFS file server.
>
> My question is: is there any technical reason, in ZFS's design, that
> makes it particularly important for ZFS to require ECC RAM?
[...]
> Some of the po
On Fri, 24 Jul 2009 15:55:02 +0200
dick hoogendijk wrote:
> [share to local system]
> westmark# zfs set sharenfs=on store/snaps
I left out the options and changed the /store/snaps directory
permissions to 777. Now the snapshot can be sent from the host but it
gets u:g permissions like nobody:nobo
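One hedged suggestion: granting the sending host root access in the share options usually avoids the root-to-nobody squashing; the host name below is taken from the thread and the option list is only a sketch:

  # share the snapshot area read/write to the sending host and map root
  zfs set sharenfs='rw=arwen,root=arwen' store/snaps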
Hi, I followed the FAQ on this, but get errors I can't understand. As I
do want to make backups, I really hope someone can tell me what's wrong.
== [ what I did ]
[my remote system]
westmark# zfs create store/snaps
westmark# zfs list
NAME USED AVAIL REFER MOUNTPOINT
store 108
Axelle,
You can safely run "beadm destroy opensolaris" if everything's all right
with your new opensolaris-1 boot env.
You will get back your space (something around 7.18 GB).
There's something strange with the mountpoint of rpool, which should be
/rpool and not /a/rpool, maybe you'll have to f
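For completeness, the usual sequence is to check the BE list first and then destroy the old environment once the new one has proven bootable; a minimal sketch:

  # list boot environments and their space usage
  beadm list
  # remove the old BE and reclaim its space
  beadm destroy opensolaris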
Hi,
I have upgraded from 2008.11 to 2009.06. The upgrade process created a new
boot environment (named opensolaris-1 in my case), but I am now running out of
space in my ZFS pool. So, can I safely erase the old boot environment, and if
so will that get me back the disk space I need?
BE
I've read in numerous threads that it's important to use ECC RAM in a
ZFS file server.
My question is: is there any technical reason, in ZFS's design, that
makes it particularly important for ZFS to require ECC RAM?
Is ZFS especially vulnerable, moreso than other filesystems, to bit
errors in RAM
Darren J Moffat wrote:
Jorgen Lundman wrote:
Jorgen Lundman wrote:
However, "zpool detach" appears to mark the disk as blank, so
nothing will find any pools (import, import -D etc). zdb -l will
show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
the first 1
Hi, thanks for pointing out the issue; we haven't run updates on the server yet.
Yours
Markus Kovero
-Original Message-
From: Henrik Johansson [mailto:henr...@henkis.net]
Sent: 24 July 2009 12:26
To: Markus Kovero
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] No files but poo
Darren J Moffat wrote:
Maybe the 2 disk mirror is a special enough case that this could be
worth allowing without having to deal with all the other cases as well.
The only reason I think it is a special enough case is because it is
the config we use for the root/boot pool.
See 6849185 an
On 24 jul 2009, at 09.33, Markus Kovero wrote:
During our tests we noticed very disturbing behavior, what would be
causing this?
System is running latest stable opensolaris.
Any other means to remove ghost files rather than destroying pool
and restoring from backups?
This looks like bu
Jorgen Lundman wrote:
Jorgen Lundman wrote:
However, "zpool detach" appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
the first 1024 1k blocks from the di
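For anyone following along, the labels can be inspected directly with zdb; the device path below is a placeholder:

  # print the vdev labels on the detached disk's slice
  zdb -l /dev/rdsk/c1t1d0s0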
Yes, the server has been rebooted several times and there is no available space. Is
it possible to somehow delete the ghosts that zdb sees? How can this happen?
Yours
Markus Kovero
-Original Message-
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias
Pantzare
Sent: 24 July 2009
On Fri, Jul 24, 2009 at 09:57, Markus Kovero wrote:
> r...@~# zfs list -t snapshot
> NAME USED AVAIL REFER MOUNTPOINT
> rpool/ROOT/opensola...@install 146M - 2.82G -
> r...@~#
Then it is probably some process that has a deleted file open. You can
find those
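A sketch of how to find such processes on Solaris; the mountpoint and pid are placeholders:

  # report processes with files open on the pool's mountpoint
  fuser -c /testpool
  # inspect a suspect process's open files; restarting it releases the space
  pfiles 1234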
r...@~# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT/opensola...@install 146M - 2.82G -
r...@~#
-Original Message-
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias
Pantzare
Sent: 24 July 2009 10:56
On Fri, Jul 24, 2009 at 09:33, Markus Kovero wrote:
> During our tests we noticed very disturbing behavior, what would be causing
> this?
>
> System is running latest stable opensolaris.
>
> Any other means to remove ghost files rather than destroying pool and
> restoring from backups?
You may hav
Interesting, so the more drive failures you have, the slower the array gets?
Would I be right in assuming that the slowdown is only up to the point where
FMA / ZFS marks the drive as faulted?
During our tests we noticed very disturbing behavior. What would be causing
this?
System is running latest stable opensolaris.
Any other means to remove ghost files rather than destroying pool and restoring
from backups?
r...@~# zpool status testpool
pool: testpool
state: ONLINE
scrub: scrub