> >
> The two plugs that I indicated are multi-lane SAS
> ports, which /require/
> using a breakout cable; don't worry - that's the
> design for them.
> "multi-lane" means exactly that - several actual SAS
> connections in a
> single plug. The other 6 ports next to them (in
> black) are SATA
Hi!
I'm a Mac user, but I think that I will get more of a response to this
question here than on a Mac forum.
And first, sorry for my approximate English.
I have a ZFS pool named "MyPool" with two devices (two external USB drives),
configured as a mirror:
NAME STATE READ WRITE C
I don't have an answer to your question exactly, because I'm a noob and I'm
not using a Mac, but I can say that on FreeBSD, which I'm using at the moment,
there is a method to name devices ahead of time, so if the drive letters change
you avoid this exact problem. I'm sure OpenSolaris and the Mac have something
similar.
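On FreeBSD it would be roughly this - a sketch only, the label and device names
below are made up:
# label each disk once, before (re)building the pool
$ glabel label usb0 /dev/da0
$ glabel label usb1 /dev/da1
# then build the mirror on the stable /dev/label/* names, so it no longer
# matters if da0/da1 get renumbered after a reboot or replug
$ zpool create MyPool mirror label/usb0 label/usb1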
I think people can understand the concept of missing flushes. The big
conceptual problem is how this manages to hose an entire filesystem, which is
assumed to have rather a lot of data which ZFS has already verified to be ok.
Hardware ignoring flushes and losing recent data is understandable,
Thank you for your response, wonslung.
I can export / import, yes, but for that I would have to unmount all the
filesystems depending on the pool, and that's not always possible (and it's sad
to be forced to do it).
As for giving the devices stable names, I don't know how to do that. I will look into it.
There is a small mistake:
"If I attach disk4s2 to disk2s2, it says that disk3s2 is busy
(which is suspicious: the drive is not in use)"
The correct version is:
"If I attach disk4s2 to disk2s2, it says that disk4s2 is busy
(which is suspicious: the drive is not in use)"
(disk3s
Sometimes the disk will show as busy just because your shell's working
directory is on it, or because something is trying to access it.
Again, I'm no expert, so I'm going to refrain from commenting on your issue
further.
2009/7/28 Avérous Julien-Pierre
> There is a little mistake :
>
> "If I do a attach of disk4s2 on dis
We are upgrading to new storage hardware. We currently have a zfs pool with
the old storage volumes. I would like to create a new zfs pool, completely
separate, with the new storage volumes. I do not want to just replace the old
volumes with new volumes in the pool we are currently using. I
Thomas Walker wrote:
We are upgrading to new storage hardware. We currently have a zfs pool with
the old storage volumes. I would like to create a new zfs pool, completely
separate, with the new storage volumes. I do not want to just replace the old
volumes with new volumes in the pool we a
> zpool create newpool
> zfs snapshot -r oldpool@sendit
> zfs send -R oldpool@sendit | zfs recv -vFd newpool
I think this is probably something like what I want; the problem is I'm
not really "getting it" yet. Could you explain just what is happening
here with an example? Let's say I have this setup:
oldpool = 10 x 500GB volumes, with two mounted filesystems: fs1 and fs2
I create newpool = 12 x 1TB
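For what it's worth, my reading of the suggested commands using those names - a
sketch, where the device list for newpool is whatever the 12 new volumes are:
# create the new pool from the new storage
$ zpool create newpool <the 12 new 1TB volumes>
# take a recursive snapshot: oldpool itself plus fs1 and fs2, all at the
# same point in time
$ zfs snapshot -r oldpool@sendit
# -R sends everything below oldpool@sendit (datasets, snapshots, properties);
# -d on the receiving side recreates the datasets under newpool, so you
# should end up with newpool/fs1 and newpool/fs2
$ zfs send -R oldpool@sendit | zfs recv -vFd newpool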
What is the best way to attach a USB hard disk to Solaris 10u7?
I know some program runs to auto-detect such a device (I have
forgotten the name, because I do almost all my work on OSOL, where it is hal).
Do I use "that program", or disable it and manually attach the drive to
the system?
--
Dick Hoogendijk --
Thanks for that Brian.
I've logged a bug:
CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub rpool
causes zpool hang
I've just discovered, after trying to create a further crash dump, that it's
failing and rebooting with the following error (I just caught it prior
to the reboot):
I think you've given me enough information to get started on a test of the
procedure. Thanks very much.
Thomas Walker
Ok Bob, but I think that is the problem with picket fencing... and so we are
talking about committing the sync operations to disk. What I'm seeing is no read
activity from the disks while the slog is being written. The disks are at "zero"
(no reads, no writes).
Thanks a lot for your reply.
Leal
Hi Dick,
The Solaris 10 volume management service is volfs.
If you attach the USB hard disk and run volcheck, the disk should
be mounted under the /rmdisk directory.
If the auto-mounting doesn't occur, you can disable volfs and mount
it manually.
You can read more about this feature here:
htt
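Roughly, on Solaris 10 (the device name below is just an example):
$ volcheck                            # ask vold to look for the new media; if vold
                                      # is managing it, it shows up under /rmdisk
# or take vold out of the picture and handle the disk yourself:
$ pfexec svcadm disable volfs
$ rmformat                            # lists removable devices so you can find the name
$ pfexec zpool create archive c2t0d0  # or mount an existing filesystem by hand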
On 27/07/2009, at 10:14 PM, Tobias Exner wrote:
Hi list,
I've done some tests and run into a very strange situation...
I created a zvol using "zfs create -V" and initialized a SAM
filesystem on this zvol.
After that I restored some test data using a dump from another system.
So far so good.
Yes:
Make sure your dumpadm is set up beforehand to enable savecore, and that
you have a dump device. In my case the output looks like this:
$ pfexec dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/opensolaris
Sav
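For reference, setting it up beforehand looks something like this (the savecore
directory is just the one from my output above; adjust to taste):
$ pfexec dumpadm -d /dev/zvol/dsk/rpool/dump   # dedicate the dump zvol as the dump device
$ pfexec dumpadm -s /var/crash/opensolaris     # where savecore will write the crash dump
$ pfexec dumpadm -y                            # make sure savecore runs automatically on reboot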
On 28 July, 2009 - David Gwynne sent me these 1,9K bytes:
>
> On 27/07/2009, at 10:14 PM, Tobias Exner wrote:
>
>> Hi list,
>>
>> I've done some tests and run into a very strange situation...
>>
>> I created a zvol using "zfs create -V" and initialized a SAM
>> filesystem on this zvol.
>> After
On Tue, 28 Jul 2009, Marcelo Leal wrote:
Ok Bob, but I think that is the problem with picket fencing... and
so we are talking about committing the sync operations to disk. What I'm
seeing is no read activity from the disks while the slog is being
written. The disks are at "zero" (no reads, no writes).
T
On 28 July 2009, at 15:54, Darren J Moffat wrote:
> How do I monitor the progress of the transfer? Once
Unfortunately there is no easy way to do that just now. When the
'zfs recv' finishes, it is done.
I've just found pv (pipe viewer) today (http://www.ivarch.com/programs/pv.shtml
) w
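Something like this should give a running byte count and throughput while the
stream is in flight (pool and host names here are just examples):
$ zfs send -R mypool@snap | pv | ssh otherhost "pfexec /usr/sbin/zfs recv -vFd newpool"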
Hi Again,
A bit more futzing around and I notice that output from a plain 'zdb' returns
this:
store
    version=14
    name='store'
    state=0
    txg=0
    pool_guid=13934602390719084200
    hostid=8462299
    hostname='store'
    vdev_tree
        type='root'
        id=0
        guid=1393460
On Tue, Jul 28, 2009 at 03:04, Brian wrote:
> Just a quick question before I address everyone else.
> I bought this connector
> http://www.newegg.com/Product/Product.aspx?Item=N82E16812198020
>
> However it's pretty clear to me now (after I've ordered it) that it won't at
> all fit in the SAS connec
On Jul 28, 2009, at 8:53 AM, Tomas Ögren wrote:
On 28 July, 2009 - David Gwynne sent me these 1,9K bytes:
On 27/07/2009, at 10:14 PM, Tobias Exner wrote:
Hi list,
I've done some tests and run into a very strange situation...
I created a zvol using "zfs create -V" and initialized a SAM
fi
On Tue, 28 Jul 2009 09:03:14 -0600
cindy.swearin...@sun.com wrote:
> The Solaris 10 volume management service is volfs.
#svcs -a | grep vol has told me that ;-)
> If the auto-mounting doesn't occur, you can disable volfs and mount
> it manually.
I don't want the automounting to occur, so I disabled volfs.
My understanding is that there's never any need for a reader to wait for a
write in progress. ZFS keeps all writes in memory until they're committed to
disk - if you ever try to read something that's either waiting to be, or is
being written to disk, ZFS will serve it straight from RAM.
One qu
On Mon, Jul 27, 2009 at 3:58 AM, Markus Kovero wrote:
> Oh well, whole system seems to be deadlocked.
>
> nice. Little too keen keeping data safe :-P
>
> Yours
> Markus Kovero
>
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus
>
> I submitted a bug, but I don't think it's been assigned a case number yet.
> I see this exact same behavior on my X4540's. I create a lot of
> snapshots, and when I tidy up, zfs destroy can 'stall' any and all ZFS
> related commands for hours, or even days (in the case of nested
> snapshot
On Tue, 28 Jul 2009, dick hoogendijk wrote:
I don't want the automounting to occur, so I disabled volfs.
I then did a "rmformat" to learn the device name, followed by a "zpool
create archive /dev/rdsk/devicename".
It is better to edit /etc/vold.conf, since vold is used for other
purposes as well.
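From memory, vold.conf drives this with per-media-type "use" lines, something
like the sketch below - don't take the exact lines as gospel, check your own file:
# /etc/vold.conf (excerpt, approximate):
#   use cdrom  drive /dev/rdsk/c*s2 dev_cdrom.so  cdrom%d
#   use rmdisk drive /dev/rdsk/c*s2 dev_rmdisk.so rmdisk%d
# commenting out (or narrowing) the "use rmdisk" line keeps vold away from
# removable hard disks while leaving CD/DVD handling alone; then restart:
$ pfexec svcadm restart volfs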
On 07/27/09 03:39, Markus Kovero wrote:
Hi, how come zfs destroy is so slow? E.g. destroying a 6TB dataset
renders the zfs admin commands useless for the time being, in this case for hours?
(running osol 111b with latest patches.)
I'm not sure what "latest patches" means w.r.t. ON build, but this is
On 28.07.09 20:31, Graeme Clark wrote:
Hi Again,
A bit more futzing around and I notice that output from a plain 'zdb' returns
this:
store
    version=14
    name='store'
    state=0
    txg=0
    pool_guid=13934602390719084200
    hostid=8462299
    hostname='store'
    vdev_tree
        type
Do any of you know how to set the default ZFS ACLs for newly created
files and folders when those files and folders are created through Samba?
I want all new files and folders to inherit only the extended
(non-trivial) ACLs that are set on the parent folders. But when a file
is created through s
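For the ZFS side of it, this is roughly what I'd try - dataset, path and group
names below are examples, and the Samba side (e.g. the zfsacl VFS module) is a
separate question:
# have inherited ACEs passed through unchanged instead of being recomputed
$ zfs set aclinherit=passthrough tank/share
# keep chmod from stripping the extended ACEs back to the trivial set
$ zfs set aclmode=passthrough tank/share
# put an inheritable ACE on the parent folder; file_inherit/dir_inherit make
# it apply to new files and sub-folders created underneath
$ chmod A+group:staff:read_data/write_data/execute:file_inherit/dir_inherit:allow /tank/share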
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state at High
priority.
CR 6859997 has been accepted and is actively being worked on. The
following info has been added to that CR:
This is a problem with the ZFS file pref
Is it possible to send an entire pool (including all its zfs filesystems)
to a zfs filesystem in a different pool on another host? Or must I send each
zfs filesystem one at a time?
Thanks!
jlc
This is my first ZFS pool. I'm using an X4500 with 48 TB drives. Solaris is
5/09.
After the create, zfs list shows 40.8T, but after creating 4
filesystems/mountpoints the available space drops by 8.8TB to 32.1TB. What
happened to the 8.8TB? Is this much overhead normal?
zpool create -f zpool1 raidz c1
This is my first ZFS pool. I'm using an X4500 with 48 TB drives. Solaris is
5/09.
After the create zfs list shows 40.8T but after creating 4
filesystems/mountpoints the available drops 8.8TB to 32.1TB. What happened to
the 8.8TB. Is this much overhead normal?
IIRC zpool list includes the p
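A rough back-of-the-envelope check - purely illustrative, since the actual vdev
layout isn't shown here:
# if the 40.8T of raw space were raidz groups of 5 disks (an assumption),
# one disk per group holds parity, so about 4/5 of the raw space is usable:
$ echo '40.8 * 4 / 5' | bc -l    # ~32.6T, in the neighbourhood of the 32.1T
                                 # that zfs list reports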
On Wed 29/07/09 10:09 , "Joseph L. Casale" jcas...@activenetwerx.com sent:
> Is it possible to send an entire pool (including all its zfsfilesystems)
> to a zfs filesystem in a different pool on another host? Or must I send each
> zfs filesystem one at a time?
Yes, use -R on the sending side a
Glen Gunselman wrote:
This is my first ZFS pool. I'm using an X4500 with 48 TB drives. Solaris is
5/09.
After the create zfs list shows 40.8T but after creating 4
filesystems/mountpoints the available drops 8.8TB to 32.1TB. What happened to
the 8.8TB. Is this much overhead normal?
zpool
>Yes, use -R on the sending side and -d on the receiving side.
I tried that first, going from Solaris 10 to osol 0906:
# zfs send -vR mypo...@snap | ssh j...@catania "pfexec /usr/sbin/zfs recv -dF
mypool/somename"
It didn't create any of the zfs filesystems under mypool2?
Thanks!
jlc
On Tue, 28 Jul 2009, Rich Morris wrote:
6412053 is a related CR which mentions that the zfetch code may not be
issuing I/O at a sufficient pace. This behavior is also seen on a Thumper
running the test script in CR 6859997 since, even when prefetch is ramping up
as expected, less than half o
Try send/receive to the same host (ssh localhost). I used this when
trying send/receive, as it removes the ssh-between-hosts "problems".
The on-disk format of ZFS has changed (there is something about it in
the man pages, from memory), so I don't think you can go S10 ->
OpenSolaris without doing an up
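A quick local sanity test might look like this (pool names are made up):
$ zfs snapshot -r tank@test
$ zfs send -R tank@test | ssh localhost "pfexec /usr/sbin/zfs recv -vFd scratch"
# same plumbing as the real transfer, but any failure is now clearly
# zfs-side rather than ssh/host-side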
On Tue, 28 Jul 2009, Rich Morris wrote:
The fix for this problem may be more feedback between the ARC and the zfetch
code. Or it may make sense to restart the prefetch stream after some time
has passed or perhaps whenever there's a miss on a block that was expected to
have already been prefe
>
> Can *someone* please name a single drive+firmware or
> RAID
> controller+firmware that ignores FLUSH CACHE / FLUSH
> CACHE EXT
> commands? Or worse, responds "ok" when the flush
> hasn't occurred?
I think it would be a shorter list if one were to name the drives/controllers
that actually imp
On Wed 29/07/09 10:49 , "Joseph L. Casale" jcas...@activenetwerx.com sent:
> >Yes, use -R on the sending side and -d on the receiving side.
> I tried that first, going from Solaris 10 to osol 0906:
>
> # zfs send -vR mypo...@snap|ssh j...@catania "pfexec /usr/sbin/zfs recv -dF
> mypool/somenam
> This is also (theoretically) why a drive purchased
> from Sun is more
> expensive than a drive purchased from your
> neighbourhood computer
> shop:
It's more significant than that. Drives aimed at the consumer market are at a
competitive disadvantage if they do handle cache flush corr
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypo...@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset properties.
What was the
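Putting Cindy's point together with the earlier command line, the whole sequence
would look roughly like this (names are examples):
# the snapshot must be taken recursively (-r) so every descendant dataset
# has it; zfs send -R then replicates the whole tree, properties included
$ zfs snapshot -r mypool@snap
$ zfs send -R mypool@snap | ssh user@otherhost "pfexec /usr/sbin/zfs recv -vFd mypool2"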
This thread started over in nfs-discuss, as it appeared to be an nfs
problem initially. Or at the very least, interaction between nfs and zil.
Just summarising speeds we have found when untarring something. Always
in a new/empty directory. Only looking at write speed; read is always
very fas
On Wed, 29 Jul 2009, Jorgen Lundman wrote:
For example, I know rsync and tar do not use fdsync (but dovecot does) on
close(), but does NFS make it fdsync anyway?
NFS is required to do synchronous writes. This is what allows NFS
clients to recover seamlessly if the server spontaneously
On Mon, Jul 27 at 13:50, Richard Elling wrote:
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
Can *someone* please name a single drive+firmware or RAID
controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
commands? Or worse, responds "ok" when the flush hasn't occurred?
two seco
I was greeted by this today. The Sun Message ID page says this should happen
when there were errors in a replicated configuration. Clearly there's only one
drive here. If there are unrecoverable errors how can my applications not be
affected since there's no mirror or parity to recover from?
#
On Tue, 28 Jul 2009, fyleow wrote:
I was greeted by this today. The Sun Message ID page says this
should happen when there were errors in a replicated configuration.
Clearly there's only one drive here. If there are unrecoverable
errors how can my applications not be affected since there's no
We just picked up the fastest SSD we could find in the local Bic Camera, which
turned out to be a CSSD-SM32NI, with a supposed 95MB/s write speed.
I put it in place and swapped the slog over to it:
0m49.173s
0m48.809s
So, it is slower than the CF test. This is disappointing. Everyone else