> If you like, you can later add a fifth drive
> relatively easily by
> replacing one of the slices with a whole drive.
>
how does this affect my available storage if I were to replace both of those
sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have
expanded my storage?
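A raidz vdev is sized by its smallest member, so replacing the two sparse
500GB files with a real 1TB drive should give no extra space until every
member is at least 1TB. A minimal sketch of the swap, with hypothetical pool
and device names:

  zpool replace tank /files/sparse0 c2t0d0   # swap a file for the real drive
  zpool set autoexpand=on tank               # on newer pools, grow the vdev
                                             # once all members are big enough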
On Thu, Apr 22, 2010 at 09:58:12PM -0700, thomas wrote:
> Assuming newer version zpools, this sounds like it could be even
> safer since there is (supposedly) less of a chance of catastrophic
> failure if your ramdisk setup fails. Use just one remote ramdisk or
> two with battery backup.. whatever
Ian Collins wrote:
On 04/20/10 04:13 PM, Sunil wrote:
Hi,
I have a strange requirement. My pool consists of two 500GB disks in a
stripe, which I am trying to convert into a RAIDZ setup without data loss,
but I have only two additional disks: 750GB and 1TB. So, here is what I
thought:
1. Carve a
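The numbered plan is truncated in the archive, but the approach usually
floated for this is the degraded-raidz trick: include a sparse file as a
stand-in member, offline it before writing any data, and replace it with a
real disk later. A sketch, with hypothetical names:

  mkfile -n 500g /var/tmp/fake                     # sparse stand-in member
  zpool create newpool raidz c1t2d0 c1t3d0 /var/tmp/fake
  zpool offline newpool /var/tmp/fake              # run degraded; the file
                                                   # never holds real data
  # ... copy the data over from the old stripe ...
  zpool replace newpool /var/tmp/fake c1t0d0       # hand an old disk to raidz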
Someone on this list threw out the idea a year or so ago to just set up 2
ramdisk servers, export a ramdisk from each and create a mirror slog from them.
Assuming newer version zpools, this sounds like it could be even safer since
there is (supposedly) less of a chance of catastrophic failure if
On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson wrote:
> Greetings All:
>
> Granted there has been much fear, uncertainty, and doubt following
> Oracle's takeover of Sun, but I ran across this on a FreeBSD mailing
> list post dated 4/20/2010:
>
> "...Seems that Oracle won't offer support for ZFS o
Hi Andreas
I will explain what I need. You say it is IMHO simpler than AVS; that's good.
I have set up 2 NexentaCore boxes with ZFS pools and NFS on the first node. Now
I need to install the Open HA Cluster software with non-shared disks.
I now need to make ZFS with NFS HA. I understand the
On Fri, 23 Apr 2010, Andreas Höschler wrote:
Maybe I am lucky since I have run three VirtualBox instances at a time (2GB
allocation each) on my system with no problem at all.
I have inserted
set zfs:zfs_arc_max = 0x2
in /etc/system and rebooted the machine having 64GB of memory
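(The tunable takes a byte count; the value above is cut off in the archive.
A hypothetical cap of 8GB would read:

  set zfs:zfs_arc_max = 0x200000000

followed by a reboot, as noted.)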
On Wed, Apr 21, 2010 at 04:49:30PM +0100, Darren J Moffat wrote:
> /foo is the filesystem
> /foo/bar is a directory in the filesystem
>
> cd /foo/bar/
> touch stuff
>
> [ you wait, time passes; a snapshot is taken ]
>
> At this point /foo/bar/.snapshot/.../stuff exists
>
> Now do this:
>
> rm
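For comparison, ZFS exposes snapshots only at the root of each filesystem
rather than per-directory, so the equivalent path here (snapshot name
hypothetical) would be:

  ls /foo/.zfs/snapshot/snap1/bar/stuff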
>From: Ross Walker [mailto:rswwal...@gmail.com]
>Sent: Thursday, April 22, 2010 6:34 AM
>
>On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote:
>
>
>If you combine the hypervisor and storage server and have students
>connect to the VMs via RDP or VNC or XDM then you will have the
>performance of local
On Thu, Apr 22, 2010 at 02:53:37PM -0700, Rich Teer wrote:
> On Thu, 22 Apr 2010, Mike Mackovitch wrote:
>
> > I would also check /var/log/system.log and /var/log/kernel.log on the Mac to
> > see if any other useful messages are getting logged.
>
> Ah, we're getting closer. The latter shows nothing interesting
Hi Bob,
The problem could be due to a faulty/failing disk, a poor connection
with a disk, or some other hardware issue. A failing disk can easily
make the system pause temporarily like that.
As root you can run '/usr/sbin/fmdump -ef' to see all the fault events
as they are reported. Be sur
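Two related checks that may help narrow this down:

  /usr/sbin/fmdump -ef      # follow fault events as they are reported
  iostat -En                # per-device soft/hard/transport error counters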
On Thu, 22 Apr 2010, Mike Mackovitch wrote:
> I would also check /var/log/system.log and /var/log/kernel.log on the Mac to
> see if any other useful messages are getting logged.
Ah, we're getting closer. The latter shows nothing interesting, but system.log
has this line appended the minute I try
On Wed, Apr 21, 2010 at 10:10:09PM -0400, Edward Ned Harvey wrote:
> > From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
> >
> > POSIX doesn't allow us to have special dot files/directories outside
> > filesystem root directories.
>
> So? Tell it to Netapp. They don't seem to have any
On Thu, Apr 22, 2010 at 01:54:26PM -0700, Rich Teer wrote:
> On Thu, 22 Apr 2010, Mike Mackovitch wrote:
>
> Hi Mike,
>
> > So, it looks like you need to investigate why the client isn't
> > getting responses from the server's "lockd".
> >
> > This is usually caused by a firewall or NAT getting
On Thu, 22 Apr 2010, Mike Mackovitch wrote:
Hi Mike,
> So, it looks like you need to investigate why the client isn't
> getting responses from the server's "lockd".
>
> This is usually caused by a firewall or NAT getting in the way.
Great idea--I was indeed connected to my network using the Air
On Thu, 22 Apr 2010, Andreas Höschler wrote:
we are encountering severe problems on our X4240 (64GB, 16 disks) running
Solaris 10 and ZFS. From time to time (5-6 times a day)
• FrontBase hangs or crashes
• VBox virtual machines hang
• Other applications show rubber effect (white screen) while moving the windows
On Thu, Apr 22, 2010 at 12:40:37PM -0700, Rich Teer wrote:
> On Thu, 22 Apr 2010, Tomas Ögren wrote:
>
> > Copying via terminal (and cp) works.
>
> Interesting: if I copy a file *which has no extended attributes* using cp in
> a terminal, it works fine. If I try to cp a file that has EA (to the
On Thu, 22 Apr 2010, Alex Blewitt wrote:
Hi Alex,
> For your information, the ZFS project lives (well, limps really) on
> at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
> from there and we're working on moving forwards from the ancient pool
> support to something more recent
On Thu, 22 Apr 2010, Tomas Ögren wrote:
> Copying via terminal (and cp) works.
Interesting: if I copy a file *which has no extended attributes* using cp in
a terminal, it works fine. If I try to cp a file that has EA (to the same
destination), it hangs. But I get this error message after a few
Rich, Shawn,
> Of course, it probably doesn't help that Apple, in their infinite wisdom,
> canned
> native support for ZFS in Snow Leopard (idiots).
For your information, the ZFS project lives (well, limps really) on at
http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard from there
If I want to reduce the I/O accesses, for example to SSD media on a laptop,
and I don't plan to run any big applications, is it safe to delete the swap
file?
How do I configure OpenSolaris to run without swap?
I've tried 'swap -d /dev/zvol/dsk/rpool/swap'
but 'swap -s' still shows the same amount
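A sketch of removing swap entirely, assuming the default rpool layout: detach
the device, make sure /etc/vfstab no longer lists it, then destroy the zvol.

  swap -d /dev/zvol/dsk/rpool/swap   # detach the swap device
  swap -l                            # confirm nothing is still listed
  # remove the corresponding swap line from /etc/vfstab, then:
  zfs destroy rpool/swap             # reclaim the space in the pool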
On 22 April, 2010 - Rich Teer sent me these 1,1K bytes:
> Hi all,
>
> I have a server running SXCE b130 and I use ZFS for all file systems. I
> also have a couple of workstations running the same OS, and all is well.
> But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
> an
On Thu, 22 Apr 2010, Shawn Ferry wrote:
> I haven't seen this behavior. However, all of my file systems used by my
> Mac are pool version 8 fs ver 2. I don't know if that could be part of your
> problem or not.
Thanks for the info. I should have said that all the file systems I'm using
were crea
Hi all,
we are encountering severe problems on our X4240 (64GB, 16 disks)
running Solaris 10 and ZFS. From time to time (5-6 times a day)
• FrontBase hangs or crashes
• VBox virtual machines hang
• Other applications show rubber effect (white screen) while moving
the windows
I have been t
On Apr 22, 2010, at 1:26 PM, Rich Teer wrote:
> Hi all,
>
> I have a server running SXCE b130 and I use ZFS for all file systems. I
> also have a couple of workstations running the same OS, and all is well.
> But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
> and I have
Hi Clint,
Your symptoms point to disk label problems, dangling device links,
or overlapping partitions. All could be related to the power failure.
The OpenSolaris error message (b134, I think you mean) brings up these
bugs:
6912251 describes the dangling links problem, which you might be ab
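A few checks that fit these symptoms (device name hypothetical):

  devfsadm -Cv                 # clean up dangling /dev links
  prtvtoc /dev/rdsk/c1t0d0s2   # inspect the label for overlapping partitions
  zpool import                 # list pools/devices the system can still see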
Hi all,
I have a server running SXCE b130 and I use ZFS for all file systems. I
also have a couple of workstations running the same OS, and all is well.
But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
and I have troubles creating files on exported ZFS file systems.
From
This sounds like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775.
It seems this can be avoided by switching to an LSI card that uses
mpt_sas. For example, the 9211.
However, certain drives, such as the Western Digital
WD2002FYPS-01U1B0, can also result in the behavior.
A
Richard Elling wrote:
> IIRC, POSIX does not permit hard links to directories. Moving or renaming
> the directory structure gets disconnected from the original because these
> are relative relationships. Clearly, NetApp achieves this in some manner
> which is not constrained by POSIX -- a manner
On Apr 22, 2010, at 4:50 AM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Repeating my previous question in another way...
>> So how do they handle "mv home/joeuser home/moeuser" ?
>> Does that mv delete all snapshots below home/joeuser?
>> To make this
If you read this
http://hub.opensolaris.org/bin/download/Project+colorado/files/Whitepaper-OpenHAClusterOnOpenSolaris-external.pdf
and especially starting at page 25, you will find a detailed explanation of
how to implement a storage cluster with shared storage based on COMSTAR and
iSCSI.
If you want
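The COMSTAR side of such a setup is roughly the following sketch (service
names per stock OpenSolaris; the zvol name is made up):

  svcadm enable stmf                          # COMSTAR framework
  zfs create -V 100g tank/lun0                # zvol backing the LU
  stmfadm create-lu /dev/zvol/rdsk/tank/lun0  # prints the LU GUID
  stmfadm add-view <GUID-from-create-lu>      # export the LU
  itadm create-target                         # create an iSCSI target
  svcadm enable -r svc:/network/iscsi/target:default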
On Thu, 22 Apr 2010, Edward Ned Harvey wrote:
To move closer to RFE status ... I think the description would have to be
written in verbiage pertaining to ZFS, which is more than I know. I can
describe how they each work, but I can't make it technical enough to be an
RFE for ZFS.
Someone would a
Hi
On Thursday 22 April 2010 16:33:51 Peter Tribble wrote:
> fsstat?
>
> Typically along the lines of
>
> fsstat /tank/* 1
>
Sh**, I knew about fsstat but never ever even tried to run it on many file
systems at once. D'oh.
*sigh* well, at least a good one for the archives...
Thanks a lot!
On 22/04/2010 15:30, Carsten Aulbert wrote:
sorry if this is in any FAQ - then I've clearly missed it.
Is there an easy or at least straightforward way to determine which of n ZFS
file systems is currently under heavy NFS load?
DTrace Analytics in the SS7000 appliance would be perfect for this.
Once upo
On Thu, Apr 22, 2010 at 3:30 PM, Carsten Aulbert
wrote:
> Hi all,
>
> sorry if this is in any FAQ - then I've clearly missed it.
>
> Is there an easy or at least straightforward way to determine which of n ZFS
> file systems is currently under heavy NFS load?
>
> Once upon a time, when one had old style file
Hi all,
sorry if this is in any FAQ - then I've clearly missed it.
Is there an easy or at least straightforward way to determine which of n ZFS
file systems is currently under heavy NFS load?
Once upon a time, when one had old style file systems and exported these as a
whole, iostat -x came in handy; however
Hi Andreas,
The paper looks good. Are there any basic examples of AVS or Open HA that
explain the components thoroughly and offer guidance? What books or resources
do you recommend for more information about this, as I can't find any books?
Hi Richard
What do you mean by "A mirror would be simple"? Do you mean to use zfs send
and receive? Also, is the auto-cdp plugin free with NexentaStor Developer?
Is there a detailed explanation of AVS where they explain all the components
involved, like what the bitmap is for, etc.? If AVS is there around
On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote:
From: matthew patton [mailto:patto...@yahoo.com]
Sent: Tuesday, April 20, 2010 12:54 PM
Geoff Nordli wrote:
With our particular use case we are going to do a "save
state" on their
virtual machines, which is going to write 100-400 MB
per VM v
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Repeating my previous question in another way...
> So how do they handle "mv home/joeuser home/moeuser" ?
> Does that mv delete all snapshots below home/joeuser?
> To make this work in ZFS, does this require that the mv(1)
> command only
On 22/04/2010 00:14, Jason King wrote:
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).
For CIFS, ZFS provides the Volume Shadow Service (Previous Versions in
Windows
You may have a look at the whitepaper from Torsten Frueauf.
See here: http://sun.systemnews.com/articles/137/4/OpenSolaris/22016
This should give you the functionality of a DRBD cluster.
Andreas
On Wed, Apr 21, 2010 at 10:13 PM, Richard Elling
wrote:
> Repeating my previous question in another way...
> So how do they handle "mv home/joeuser home/moeuser" ?
> Does that mv delete all snapshots below home/joeuser?
If you wanted to go into home/joeuser/.snapshot, I think you'd have
to look