dding c4d0. Any suggestions on how to remove it and re-add it correctly?
Sincerely,
Michael
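A rough sketch of the usual sequence, not taken from this thread (the pool name
tank and the device names are placeholders, and whether removal is possible at
all depends on how c4d0 was attached):

    zpool status tank              # see how c4d0 ended up in the pool
    zpool detach tank c4d0         # if it became one side of a mirror
    zpool remove tank c4d0         # if it was added as a log, cache, or spare device
    zpool attach tank c3d0 c4d0    # then re-add it, e.g. as a mirror of existing disk c3d0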
I'm new to this group, so hello everyone! I am having some issues with my
Nexenta system, which I set up about two months ago as a ZFS/RAID-Z server. I have
two new Maxtor 500GB SATA drives and an Adaptec controller which I believe has a
Silicon Image chipset. Also I have a Seasonic 80+ power s
" for my zpool to be mounted on reboot? zfs set mountpoint
does nothing.
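For reference, a minimal checklist (the pool name tank is a placeholder, not
from the thread): the mountpoint, canmount, and mounted properties usually tell
the story, and zfs mount -a picks up anything eligible that is not mounted.

    zfs get mountpoint,canmount,mounted tank
    zfs set mountpoint=/export/tank tank
    zfs mount -a    # mount every dataset that can be mounted but is not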
BTW, to answer some other concerns, the Seasonic supply is 400 watts with a
guaranteed minimum efficiency of 80%. Using a Kill A Watt meter I measure about
120 watts of power consumption
works perspective and from an OpenSolaris perspective.
Regards
Henrik
http://sparcv9.blogspot.com
My rep says "Use dedupe at your o
ion of the
dedupratio or it used a method that is giving unexpected results.
Paul
beadm list -a
and/or other snapshots that were taken before turning off dedup?
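A sketch of how one might inspect what dedup is actually doing, assuming a pool
named tank and a dataset tank/fs (both names are placeholders, not from the
thread):

    zpool get dedupratio tank              # pool-wide ratio reported by ZFS
    zfs get dedup,usedbysnapshots tank/fs  # per-dataset setting and snapshot usage
    zdb -DD tank                           # dedup table (DDT) summary; can take a while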
arity drives?
I realize this issue is not addressed because there is too much variability in
the environments, etc., but I thought it would be interesting to see if anyone
has experienced multiple drive failures in close time proximity.
HBA to IT-mode firmware, which I had nearly forgotten. I'll try out the
behavior of ZFS once I get the system together; I'm still waiting for
additional computer parts (a CPU backplate to mount the heat sink is missing).
I'll see if it is possible to at least cold-swap hard dr
zpool/zfs history does not record version upgrade events; those seem like
important events worth keeping in either the public or internal history.
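For comparison, the two views of pool history that do exist (tank is a
placeholder pool name):

    zpool history tank        # user-initiated commands
    zpool history -il tank    # long format, including internally logged events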
mall low-cost dom0's.
On 2/14/10 7:02 PM, zfs ml wrote:
On 2/14/10 4:12 PM, Kjetil Torgrim Homme wrote:
Bogdan Ćulibrk writes:
What are my options from here? To move onto a zvol with a greater
blocksize? 64k? 128k? Or will I get into other trouble going that
way when I have small reads coming from the domU (ext3 with
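As a rough sketch only (dataset names are placeholders, not from the thread):
volblocksize is fixed at creation time, so trying a larger block size means
creating a new zvol and migrating the domU image onto it. Larger blocks reduce
per-block overhead but make small random reads and sub-block writes from the
guest more expensive.

    zfs create -V 20g -o volblocksize=64k tank/xen/domu-disk1
    zfs get volblocksize tank/xen/domu-disk1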
rk to tape or disk.
-David
On 10/01/2011 05:00 AM, zfs-discuss-requ...@opensolaris.org wrote:
Send zfs-discuss mailing list submissions to
zfs-discuss@opensolaris.org
To subscribe or unsubscribe via the World Wide Web, visit
http://mail.opensolaris.org/mailman/listinfo/zfs-discu
On 02/18/2012 04:00 AM, zfs-discuss-requ...@opensolaris.org wrote:
Message: 2
Date: Sat, 18 Feb 2012 00:12:44 -0500
From: Roberto Waltman
To:zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Cannot mount encrypted filesystems.
Message-ID:<4f3f334c.4090...@rwaltman.com>
Content-Type: text
Thanks anyway; the lock must be the problem. The scenario here is that Apache
issues a bunch of stat() syscalls, which leads to a bunch of ZFS vnode
accesses beyond the normal read/write operations. The problem is that every
ZFS vnode access needs the **same ZFS root** lock. When the number of
httpd processes an
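A sketch of how one could quantify this, assuming DTrace and lockstat are
available (not from the thread):

    # count stat-family syscalls issued by httpd over ten seconds
    dtrace -n 'syscall::stat*:entry /execname == "httpd"/ { @[probefunc] = count(); } tick-10s { exit(0); }'

    # top kernel lock-contention events while the load is running
    lockstat -C -D 15 sleep 10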
evel degraded status since it doesn't "do" anything.
fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1,
SEVERITY: Major
EVENT-TIME: Wed May 16 03:27:31 MSK 2012
PLATFORM: Sun Fire X4500, CSN: 0804AMT023, HOSTNAME: thumper
SOURCE: zfs-d
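The usual follow-up commands for a fault report like this (my sketch, not from
the thread):

    fmadm faulty       # active faults with their SUNW-MSG-IDs and UUIDs
    fmdump -v          # fault log history
    zpool status -xv   # which pool and device the fault maps to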
e:
Hi all,
Halcyon recently started to add ZFS pool stats to our Solaris Agent, and
because many people were interested in the previous OpenSolaris beta* we've
rolled it into our OpenSolaris build as well.
I've already heard some great feedback about supporting ZIL and ARC stats,
which we
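For anyone who wants the raw counters such an agent is presumably reading, the
kstat interface exposes them (a sketch, assuming a Solaris/OpenSolaris system):

    kstat -m zfs -n arcstats          # full ARC statistics block
    kstat -p zfs:0:arcstats:size      # just the current ARC size, parseable output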
for your help.
Regards,
Peter
zfs filesystems using mbuffer from an old 134 system with 6 drives - it ran at
about 50MB/s or slightly more for much of the process and mbuffer worked great.
I am wondering what commands people would recommend running to retrieve/save
config info, logs, history, etc., to document and save
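One possible capture list, with a placeholder pool name tank (my assumption,
not a recommendation from the thread):

    zpool status -v          > zpool-status.txt
    zpool get all tank       > zpool-props.txt
    zfs get -r all tank      > zfs-props.txt
    zpool history -il tank   > zpool-history.txt
    fmdump -eV               > fm-error-log.txt
    prtconf -v               > hw-config.txt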
:
http://www.solarisinternals.com/wiki/index.php/Arcstat
Richard's zilstat:
http://blog.richardelling.com/2009/02/zilstat-improved.html
Other arc tools:
http://vt-100.blogspot.com/2010/03/top-with-zfs-arc.html
http://www.cuddletech.com/blog/pivot/entry.php?id=979
On 10/30/10 5:48 AM, Ian D
Here is a total guess, but what if it has to do with ZFS processing running
on one CPU having to talk to memory "owned" by a different CPU? I don't
know if many people are running fully populated boxes like you are, so maybe
it is something people are not seeing due to
, etc.
"I am seeing some spotty performance with my new Mangy Cours CPU"...
It is like they are asking for it. I think they be better off doing something
like Intel core arch names using city names "Santa Rosa", etc.
On 10/30/10 3:49 PM, Eugen Leitl wrote:
On Sat, Oct 30, 2010
2 PM, Erik Trimble wrote:
On 10/30/2010 7:07 PM, zfs user wrote:
I did it deliberately. How dumb are these product managers, that they name
products with weird names and don't expect them to be abused? On the other
hand, if you do a search for mangy cours you'll find a bunch of hits where
it
is that they are slow to move with
external communication about Solaris.
adequate and relevant synonyms in Norwegian.
93373117 segments
16 VDEVs × 116 metaslabs = 1856 metaslabs in total
93373117 / 1856 = 50308 segments per metaslab on average
50308 × 1856 × 64 bytes = 5975785472 bytes
5975785472 / 1024 / 1024 / 1024 ≈ 5.56
= 5.56 GB
Yours
Markus Kovero
I am creating a custom Solaris 11 Express CD used for disaster recovery.
I have included the necessary files on the system to run zfs commands
without error (no apparent missing libraries or drivers). However, when
I create a zvol, the device in /devices and the link to
/dev/zvol/dsk/rpool do
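A sketch of how one might check this by hand in that environment (the volume
name is a placeholder, not from the thread):

    zfs create -V 1g rpool/testvol
    ls -lL /dev/zvol/dsk/rpool/testvol
    devfsadm -v      # rebuild /dev and /devices entries and report what it creates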
Francois Dion wrote:
> >>"Francois Dion" wrote:
> >> Source is local to rsync, copying from a zfs file system,
> >> destination is remote over a dsl connection. Takes forever to just
> >> go through the unchanged files. Going the other way is no
>> since parity would have to be calculated twice. I was wondering
>> what the alternatives are here.
>
> Parity calculations are in the noise. You are reading the wrong FAQs.
> It is likely that, if you take care, you can carve out individual
> disks as RAID-0 volumes. The
m/jmcp/
is in
> but not yet mirrored.
> Motherboard: Tyan Thunder K8W S2885 (Dual AMD CPU) with 1GB ECC Ram
>
> Anything else I can provide?
>
> (thanks again)
to run the test?
>
> If your system has quite a lot of memory, the number of files should be
> increased to at least match the amount of memory.
>
>> We have two kinds of x4500/x4540, those with Sol 10 10/08, and 2
>> running svn117 for ZFS quotas. Worth trying on both?
n: http://www.tpc.org). I then wanted to try
relocating the database storage from the zone (UFS filesystem) over to a
ZFS-based filesystem (where I could do things like set quotas, etc.). When I do
this, I get roughly half the performance (X/2) I did on the UFS system.
I ran some low-level
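Not from this thread, but a common first check when a database moves from UFS
to ZFS is whether the dataset recordsize matches the database block size; with
the default 128k records, an 8k database I/O turns into much larger physical
reads and read-modify-write cycles. A sketch with placeholder names and an
assumed 8k database block size:

    zfs get recordsize tank/db
    zfs set recordsize=8k tank/db    # affects newly written files only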
- Original Message
> From: Marion Hakanson <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: zfs-discuss@opensolaris.org
> Sent: Friday, February 1, 2008 1:01:46 PM
> Subject: Re: [zfs-discuss] Un/Expected ZFS performance?
>
> [EMAIL PROTECTED] said:
>
We are encountering the same issue. Essentially, ZFS has trouble stopping access
to a dead drive. We are testing out Solaris/ZFS and this has become a very
serious issue for us.
Any help/fix for this would be greatly appreciated.
Regarding: cfgadm --unconfigure ...
The recommendation seems to be
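For what it's worth, the sequence usually cited for isolating a dying disk
looks roughly like this (pool, device, and attachment-point names are
hypothetical; cfgadm takes -c unconfigure rather than a long option):

    zpool status -x                  # identify the faulted device
    zpool offline tank c1t5d0        # stop ZFS from issuing I/O to it
    cfgadm -al                       # find the matching attachment point
    cfgadm -c unconfigure sata1/5    # then detach it from the controller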