Hi Richard,
Yes, I did miss that one, but could you remind me what exactly the sd and
ssd drivers are? I can find lots of details about configuring them, but no basic
documentation telling me what they are.
I'm also a little confused as to whether it would have helped our case. The
logs abov
> The GRUB menu is presented, no problem there, and
> then the opensolaris progress bar. But I'm unable to
> find a way to view any details on what's happening
> there. The progress bar just keeps scrolling and
> scrolling.
Press the ESC key; this should switch back from
graphics to text mode and mos
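Alternatively, you can skip the splash screen entirely: in the GRUB menu,
edit the OpenSolaris entry and add -v to the kernel$ line (and drop
console=graphics if it's there) for a verbose boot. A rough sketch, untested,
assuming the stock 2009.06 entry:
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
That way the console prints each driver and service as it comes up.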
Great idea, much neater than most of my suggestions too :-)
James C. McPherson wrote:
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
A *very* interesting article. Not sure why James didn't link to it directly,
but it's courtesy of Valerie Aurora (formerly Henson):
http://lwn.net
Well, that seems to work well! :) Still, now the issue has changed from not
being able to install to USB to not being able to properly boot from USB.
The GRUB menu is presented, no problem there, and then the opensolaris progress
bar. But I'm unable to find a way to view any details on what's happening
there. The progress bar just keeps scrolling and scrolling.
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Kernel Conference Australia
X25-E would be good, but some pools have no spares, and since you can't
remove vdevs, we'd have to move all customers off the x4500 before we
can use it.
Ah, it just occurred to me that perhaps for our specific problem, we will
buy two X25-Es and replace the root mirror. The OS and ZIL logs c
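A sketch of how the swap could go, one side of the mirror at a time
(device names are made up; wait for each resilver to finish, and note the
new disks must be at least as large as the old slices):
# zpool replace rpool c0t0d0s0 c0t6d0s0
# zpool status rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t6d0s0
# zpool replace rpool c0t1d0s0 c0t7d0s0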
On Thu, Jul 30, 2009 at 3:54 PM, Joseph L. Casale wrote:
> Anyone come up with a solution to manage the replication of ZFS snapshots?
> The send/recv criteria gets tricky with all but the first unless you purge
> the destination of snapshots, then force a full stream into it.
>
> I was hoping to script a daily update but I see that I would have to keep
> track of w
We found lots of SAS controller resets and errors to the SSDs on our servers
(OpenSolaris 2008.05 and 2009.06 with third-party JBOD and X25-E). Whenever
there is an error, the MySQL insert takes more than 4 seconds. It was quite
scary.
Eventually our engineer disabled the Fault Management SMART polling
Anyone come up with a solution to manage the replication of ZFS snapshots?
The send/recv criteria gets tricky with all but the first unless you purge
the destination of snapshots, then force a full stream into it.
I was hoping to script a daily update but I see that I would have to keep track
of w
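Something like this minimal sketch is what I have in mind (untested; the
pool, host, and state-file names are made up). It records the last snapshot
sent, so the next run can do an incremental against it:
#!/bin/sh
FS=tank/data
LAST=`cat /var/run/zfs-last-sent`
NOW=daily-`date +%Y%m%d`
zfs snapshot $FS@$NOW
zfs send -i $FS@$LAST $FS@$NOW | ssh backuphost zfs recv -F backup/data \
  && echo $NOW > /var/run/zfs-last-sent
The -F on recv rolls the destination back to the last common snapshot if
anything wrote to it in between.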
* Rob Terhaar (rob...@robbyt.net) wrote:
> I'm sure this has been discussed in the past. But it's very hard to
> understand, or even patch, incredibly advanced software such as ZFS
> without a deep understanding of the internals.
It's also very hard for the primary ZFS developers to satisfy everyone
Rob Terhaar wrote:
I'm sure this has been discussed in the past. But it's very hard to
understand, or even patch, incredibly advanced software such as ZFS
without a deep understanding of the internals.
It will take quite a while before anyone can start understanding a
file system which was developed behind closed doors
I'm sure this has been discussed in the past. But it's very hard to
understand, or even patch, incredibly advanced software such as ZFS
without a deep understanding of the internals.
It will take quite a while before anyone can start understanding a
file system which was developed behind closed doors
On Thu, 2009-07-30 at 09:33 +0100, Darren J Moffat wrote:
> Roman V Shaposhnik wrote:
> > On the read-only front: wouldn't it be cool to *not* run zfs sends
> > explicitly but have:
> > .zfs/send/<snapshot>
> > .zfs/sendr/<fromsnap>-<tosnap>
> > give you the same data automagically?
> >
> > On the read-write front:
Hello!
How can I export a filesystem /export1 so that sub-filesystems within that
filesystem will be available and usable on the client side without additional
"mount/share effort"?
This is possible with Linux nfsd, and I wonder how this can be done with
Solaris NFS.
I'd like to use /export
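In case it helps, a sketch of the ZFS/NFSv4 way (assuming an NFSv4 client;
dataset name is made up). On the server, child filesystems inherit the
sharenfs property from the parent:
# zfs set sharenfs=on tank/export1
On the client, mirror mounts then make the sub-filesystems appear as you
cross into them:
# mount -F nfs -o vers=4 server:/export1 /mnt
(on a Linux client: mount -t nfs4 server:/export1 /mnt)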
I'll maintain hope for seeing/hearing the presentation until you guys announce
that you had NASA store the tape for safe-keeping.
Bump'd.
On Thu, 30 Jul 2009, Richard Elling wrote:
According to Gartner, enterprise SSDs accounted for $92.6M of a
$585.5M SSD market in June 2009, representing 15.8% of the SSD
market. STEC recently announced an order for $120M of ZeusIOPS
drives from "a single enterprise storage customer." From 20
On Jul 30, 2009, at 2:04 PM, Ross wrote:
Supermicro AOC-SAT2-MV8, based on the Marvell chipset. I figured it
was the best available at the time since it's using the same chipset
as the x4500 Thumper servers.
Our next machine will be using LSI controllers, but I'm still not
entirely happy with the way ZFS handles timeout type errors.
Ross wrote:
Supermicro AOC-SAT2-MV8, based on the Marvell chipset. I figured it was the
best available at the time since it's using the same chipset as the x4500
Thumper servers.
Our next machine will be using LSI controllers, but I'm still not entirely
happy with the way ZFS handles timeout type errors.
On Jul 30, 2009, at 12:07 PM, Bob Friesenhahn wrote:
On Thu, 30 Jul 2009, Andrew Gabriel wrote:
Except for price/GB, it is game over for HDDs. Since price/GB is based on
Moore's Law, it is just a matter of time.
SSDs are a sufficiently new technology that I suspect there's a significant
probability of discovering new techniques which give larger s
Supermicro AOC-SAT2-MV8, based on the Marvell chipset. I figured it was the
best available at the time since it's using the same chipset as the x4500
Thumper servers.
Our next machine will be using LSI controllers, but I'm still not entirely
happy with the way ZFS handles timeout type errors.
James Lever wrote:
On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:
On the host that has the ZFS datasets (i.e. the NFS/CIFS server) you
need to give the user the delegation to create snapshots and to mount
them:
# zfs allow -u james snapshot,mount,destroy tank/home/james
Ahh, it was the lack of mount that
On Wed, 2009-07-29 at 06:50 -0700, Glen Gunselman wrote:
> There was a time when manufacturers knew about base-2, but those days
> are long gone.
Oh, they know all about base-2; it's just that disks seem bigger when
you use base-10 units.
Measure a disk's size in 10^(3n)-based KB/MB/GB/TB units,
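For example, a "1 TB" drive holds 10^12 bytes, which is only about 0.91 TiB:
$ echo 'scale=2; 10^12 / 2^40' | bc
.90
At the KB level the gap is 2.4%; by the TB level it has grown to about 9%.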
On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:
On the host that has the ZFS datasets (i.e. the NFS/CIFS server) you
need to give the user the delegation to create snapshots and to
mount them:
# zfs allow -u james snapshot,mount,destroy tank/home/james
Ahh, it was the lack of mount that
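For the record, once both delegations are in place the user can do it
directly (example snapshot name made up):
$ zfs snapshot tank/home/james@before-upgrade
$ zfs destroy tank/home/james@before-upgrade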
What's your disk controller?
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
> Hi all,
> I need to know if it is possible to expand the capacity of a zpool
> without loss of data by growing the LUN (2TB) presented from an HP EVA
> to a Solaris 10 host.
Yes.
> I know that there is a possible way in Solaris Express Community Edition
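The usual sequence after growing the LUN on the array, sketched with
made-up names (the autoexpand property needs a recent build; on older
releases an export/import of the pool may be needed instead):
# zpool set autoexpand=on mypool
# zpool online -e mypool c2t0d0
# zpool list mypool
zpool online -e asks ZFS to expand onto the newly visible space of an
already-online device.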
On Thu, Jul 30, 2009 at 14:50, Kurt Olsen wrote:
> I'm using an Acard ANS-9010B (configured with 12 GB battery backed ECC RAM w/
> 16 GB CF card for longer term power losses. Device cost $250, RAM cost about
> $120, and the CF around $100.) It just shows up as a SATA drive. Works fine
> attached to an LSI 1068E. Since -- as I understand it -- one's ZI
On Thu, 30 Jul 2009, Andrew Gabriel wrote:
Except for price/GB, it is game over for HDDs. Since price/GB is based on
Moore's Law, it is just a matter of time.
SSDs are a sufficiently new technology that I suspect there's a significant
probability of discovering new techniques which give larger s
I'm using an Acard ANS-9010B (configured with 12 GB battery backed ECC RAM w/
16 GB CF card for longer term power losses. Device cost $250, RAM cost about
$120, and the CF around $100.) It just shows up as a SATA drive. Works fine
attached to an LSI 1068E. Since -- as I understand it -- one's ZI
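For reference, attaching such a device as a slog is a one-liner (device
name made up):
# zpool add tank log c3t0d0
or mirrored, if the pool should survive a log-device failure:
# zpool add tank log mirror c3t0d0 c3t1d0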
Richard Elling wrote:
On Jul 30, 2009, at 9:26 AM, Bob Friesenhahn wrote:
Do these SSDs require a lot of cooling?
No. During the "Turbo Charge your Apps" presentations I was doing around
the UK, I often pulled one out of a server to hand around the audience
when I'd finished the demos on i
On Jul 30, 2009, at 9:26 AM, Bob Friesenhahn wrote:
On Thu, 30 Jul 2009, Ross wrote:
Without spare drive bays I don't think you're going to find one
solution that works for x4500 and x4540 servers. However, are
these servers physically close together? Have you considered
running the slog devices externally?
That should work just as well Bob, although rather than velcro I'd be tempted
to drill some holes into the server chassis somewhere and screw the drives on.
These things do use a bit of power, but with the airflow in a thumper I don't
think I'd be worried.
If they were my own servers I'd be ve
On Thu, 30 Jul 2009, Ross wrote:
Without spare drive bays I don't think you're going to find one
solution that works for x4500 and x4540 servers. However, are these
servers physically close together? Have you considered running the
slog devices externally?
This all sounds really sophisticated
Markus Kovero wrote:
btw, there's a new Intel X25-M (G2) coming next month that will offer better
random reads/writes than the E-series and a seriously cheap price tag; worth
a try, I'd say.
The MSRP of the 80GB generation 2 (G2) is supposed to be $225.
Even though the G2 is not shipping
Ralf Gans wrote:
Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.
Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you don't even
have an sshd to log in over the network.
This is why I don't use the mountpoint settings in ZFS. I se
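For example (dataset name made up), taking a dataset out of ZFS's automatic
mounting and into the vfstab ordering:
# zfs set mountpoint=legacy tank/export
and then in /etc/vfstab:
tank/export  -  /export  zfs  -  yes  -
so mountall handles it like any other filesystem.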
On Jul 30, 2009, at 2:15 AM, Cyril Plisko wrote:
On Thu, Jul 30, 2009 at 11:33 AM, Darren J Moffat wrote:
Roman V Shaposhnik wrote:
On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
.zfs/send/<snapshot>
.zfs/sendr/<fromsnap>-<tosnap>
give you the same data automagically?
On th
James Lever wrote:
Hi Darren,
On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:
That already works if you have the snapshot delegation as that user.
It even works over NFS and CIFS.
Can you give us an example of how to correctly get this working?
On the host that has the ZFS datasets (i.e.
On Thu, Jul 30, 2009 at 5:27 AM, Ross wrote:
> Without spare drive bays I don't think you're going to find one solution that
> works for x4500 and x4540 servers. However, are these servers physically
> close together? Have you considered running the slog devices externally?
It appears as thoug
Hi Darren,
On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:
That already works if you have the snapshot delegation as that
user. It even works over NFS and CIFS.
Can you give us an example of how to correctly get this working?
I've read through the manpage but have not managed to get the
Whoah! Seriously? When did that get added and how did I miss it?
That is absolutely superb! And an even stronger case for mkdir creating
filesystems. A filesystem per user that they can snapshot at will o_0
Ok, it'll need some automated pruning of old snapshots, but even so, that has
so
Without spare drive bays I don't think you're going to find one solution that
works for x4500 and x4540 servers. However, are these servers physically close
together? Have you considered running the slog devices externally?
One possible choice may be to run something like the Supermicro SC216
Cyril Plisko wrote:
On Thu, Jul 30, 2009 at 11:33 AM, Darren J Moffat wrote:
Roman V Shaposhnik wrote:
On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
.zfs/send/<snapshot>
.zfs/sendr/<fromsnap>-<tosnap>
give you the same data automagically?
On the read-write front: wouldn't it
Hello there,
I'm working for a bigger customer in Germany.
The customer is some thousand TB big.
The information that the zpool shrink feature will not be implemented soon
is no problem; we'll just keep using Veritas Storage Foundation.
Shrinking a pool is not the only problem with ZFS;
try setti
On Thu, Jul 30, 2009 at 11:33 AM, Darren J Moffat wrote:
> Roman V Shaposhnik wrote:
>>
>> On the read-only front: wouldn't it be cool to *not* run zfs sends
>> explicitly but have:
>> .zfs/send/<snapshot>
>> .zfs/sendr/<fromsnap>-<tosnap>
>> give you the same data automagically?
>> On the read-write front: wouldn't it
Roman V Shaposhnik wrote:
On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
.zfs/send/<snapshot>
.zfs/sendr/<fromsnap>-<tosnap>
give you the same data automagically?
On the read-write front: wouldn't it be cool to be able to snapshot
things by:
$ mkdir .zfs/snapshot/<snapname>
T
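Over NFS that would look something like this (snapshot name made up):
$ mkdir /home/james/.zfs/snapshot/before-upgrade
$ rmdir /home/james/.zfs/snapshot/before-upgrade
with the mkdir creating the snapshot and the rmdir destroying it.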
Hi
> I've tried to find any hard information on how to install, and boot,
> opensolaris from a USB stick. I've seen a few people write successful
> stories about this, but I can't seem to get it to work.
>
> The procedure:
> Boot from LiveCD, insert USB drive, find it using `format', start
btw, there's a new Intel X25-M (G2) coming next month that will offer better
random reads/writes than the E-series and a seriously cheap price tag; worth
a try, I'd say.
Yours
Markus Kovero