> On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles wrote:
> > What supporting applications are there on Ubuntu for RAIDZ?
>
> None. Ubuntu doesn't officially support ZFS.
>
> You can kind of make it work using the ZFS-FUSE project, but it's
> neither stable nor recommended.
I have been using zf
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel wrote:
> I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the
> 10g link is.
>
> I've also tried mbuffer, but I get broken pipe errors part way through the
> transfer.
>
> I'm open to ideas for faster ways to do either zfs
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Jahnel
>
> I've also tried mbuffer, but I get broken pipe errors part way through
> the transfer.
The standard answer is mbuffer. I think you should ask yourself what's
going wrong wi
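For reference, a minimal sketch of the usual mbuffer pipeline (pool, snapshot
and host names are placeholders, and the -s/-m values are only examples):

  # on the receiving host
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # on the sending host
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090

If either end exits early, the other side reports a broken pipe, which is
worth keeping in mind when debugging the errors described above.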
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Pasi Kärkkäinen
>
> Redhat Fedora 13 includes BTRFS, but it's not used as a default (yet).
>
> RHEL6 beta also includes BTRFS support (tech preview), but again,
>
> Upcoming Ubuntu 10.10 will
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard L. Hamilton
>
> I would imagine that if it's read-mostly, it's a win, but
> otherwise it costs more than it saves. Even more conventional
> compression tends to be more resource intens
On Mon, Jul 19, 2010 at 06:00:04PM -0700, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell wrote:
> > FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> > correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
> > and also
Rodrigo E. De León Plicet wrote:
On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble wrote:
(2) Ubuntu is a desktop distribution. Don't be fooled by their "server"
version. It's not - it has too many idiosyncrasies and bad design choices to
be a stable server OS. Use something like Debian, SLES,
On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote:
> On 20/07/10 10:40 AM, Chad Cantwell wrote:
> > FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> > correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
> > and also the nexan
Erik's experiences echo mine. I've never seen a white-box in a medium to
large company that I've visited. Always a name brand.
His comments on sysadmin staffing are dead on.
Jim Litchfield
Oracle Consulting
On 7/19/2010 5:35 PM, Erik Trimble wrote:
On Mon, 2010-07-19
On Mon, 2010-07-19 at 17:40 -0700, Chad Cantwell wrote:
> FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
> and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
The ide
On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell wrote:
> FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
> and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
>
> I fin
On 20/07/10 10:40 AM, Chad Cantwell wrote:
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
I finally got around to
more below...
On Jul 19, 2010, at 4:42 PM, Michael Shadle wrote:
> On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling wrote:
>
>> It depends on whether the problem was fixed or not. What says
>> zpool status -xv
>>
>> -- richard
>
> [r...@nas01 ~]# zpool status -xv
> pool: tank
> state: DEGR
Yuri Homchuk wrote:
Well, this REALLY is a 300-user production server with 12 VMs
running on it, so I definitely won't play with the firmware :)
I can easily identify which drive is which by physically looking at it.
It's just sad to realize that I cannot trust Solaris anymore.
I never
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
I finally got around to trying Rich Lowe's snv 142 compilation in pla
On Mon, 2010-07-19 at 17:54 -0600, Eric D. Mudama wrote:
> On Wed, Jul 14 at 23:51, Tim Cook wrote:
> > Out of the Fortune 500, I'd be willing to bet there are exactly zero
> > companies that use whitebox systems, and for a reason.
> > --Tim
>
> Sure, some core SAP system or HR data warehouse runs o
I am using OpenSolaris to host VM images over NFS for XenServer. I'm looking
for tips on what parameters can be set to help optimize my ZFS pool that holds
my VM images. XenServer runs the VMs from NFS storage on my OpenSolaris
server. Are there parameters that I sho
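For what it's worth, the knobs most often mentioned for NFS-hosted VM images
are the sync-write path and atime updates. A hedged sketch, with pool, dataset
and device names as placeholders (whether they help depends entirely on the
workload):

  # stop updating access times on image files
  zfs set atime=off tank/vmstore

  # give the sync-heavy NFS writes a dedicated log device (e.g. an SSD)
  zpool add tank log c4t0d0

NFS from XenServer issues mostly synchronous writes, so a fast separate log
device is usually where people report the largest gain; measure before and
after in any case.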
On Wed, Jul 14 at 23:51, Tim Cook wrote:
Out of the Fortune 500, I'd be willing to bet there are exactly zero
companies that use whitebox systems, and for a reason.
--Tim
Sure, some core SAP system or HR data warehouse runs on name-brand
gear, and maybe they have massive SANs with various capabil
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling wrote:
> It depends on whether the problem was fixed or not. What says
> zpool status -xv
>
> -- richard
[r...@nas01 ~]# zpool status -xv
pool: tank
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
On 07/20/10 08:20 AM, Richard Jahnel wrote:
I've used mbuffer to transfer hundreds of TB without a problem in mbuffer
itself. You will get disconnected if the send or receive prematurely ends,
though.
mbuffer itself very specifically ends with a broken pipe error: very quickly
with -s set
On Jul 19, 2010, at 4:30 PM, Michael Shadle wrote:
> On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling wrote:
>
>> Aren't you assuming the I/O error comes from the drive?
>> fmdump -eV
>
> okay - I guess I am. Is this just telling me "hey stupid, a checksum
> failed" ? In which case why did this
Marty Scholes wrote:
' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
can't understand how it helps me to identify the model of a specific drive.
Get and install smartmontools. Period. I resisted it for a few weeks but it
has been an amazing tool. It will tell you
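A quick, hedged example of the identification part (the device path is a
placeholder; SATA disks behind some controllers also need a '-d sat' type
hint):

  # print vendor, model, serial number and firmware for one disk
  smartctl -i /dev/rdsk/c2t3d0s0
  smartctl -i -d sat /dev/rdsk/c2t3d0s0

The model and serial number it prints can then be matched against the labels
on the physical drives.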
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling wrote:
> Aren't you assuming the I/O error comes from the drive?
> fmdump -eV
okay - I guess I am. Is this just telling me "hey stupid, a checksum
failed" ? In which case why did this never resolve itself and the
specific device get marked as degra
On Jul 19, 2010, at 4:21 PM, Michael Shadle wrote:
> On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote:
>
>> Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core"
>> and watch the drive activity lights. The drive in the pool which isn't
>> blinking like crazy is a fau
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote:
> Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core"
> and watch the drive activity lights. The drive in the pool which isn't
> blinking like crazy is a faulted/offlined drive.
Actually I guess my real question is
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote:
> Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core"
> and watch the drive activity lights. The drive in the pool which isn't
> blinking like crazy is a faulted/offlined drive.
>
> Ugly and oh-so-hackerish, but it
> > ' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
> > can't understand how it helps me to identify the model of a specific drive.
Get and install smartmontools. Period. I resisted it for a few weeks but it
has been an amazing tool. It will tell you more than you
This is a Supermicro server.
I really don't remember the controller model; I set it up about 3 years
ago. I just remember that I needed to reflash the controller firmware to
make it work in JBOD mode.
Remember, changing controller firmware may affect your ability to access
drives. Back up first, as
On Mon, Jul 19, 2010 at 1:42 PM, Wolfraider wrote:
> Our server locked up hard yesterday and we had to hard power it off and back
> on. The server locked up again on reading ZFS config (I left it trying to
> read the zfs config for 24 hours). I went through and removed the drives for
> the data
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi wrote:
> ' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
> can't understand how it helps me to identify the model of a specific drive.
Curious:
[r...@nas01 ~]# zpool status -x
pool: tank
state: DEGRADED
status: One or more de
> I've found plenty of documentation on how to create a
> ZFS volume, share it over iSCSI, and then do a fresh
> install of Fedora or Windows on the volume.
Really? I have found just the opposite: how to move your functioning
Windows/Linux install to iSCSI.
I am fumbling through this process for Ubu
3.) on some systems I've found another version of the iostat command to be more
useful, particularly when iostat -En leaves the serial number field empty or
otherwise doesn't read the serial number correctly. Try
this:
' iostat -Eni ' indeed outputs a Device ID on some of the drives, bu
On Jul 19, 2010, at 2:38 PM, Horace Demmink wrote:
> Hello,
>
> I'm working on building an iSCSI storage server to use as the backend for
> virtual servers. I am far more familiar with FreeBSD and Linux, but want to
> use OpenSolaris for this project because of COMSTAR & ZFS. My plan was to
> ha
On Mon, 2010-07-19 at 17:19 -0400, Max Levine wrote:
> I was looking for a way to do this without downtime... It seems that
> this kind of basic relayout operation should be easy to do.
>
> On Mon, Jul 19, 2010 at 12:44 PM, Freddie Cash wrote:
> > On Mon, Jul 19, 2010 at 9:06 AM, Max Levine wrot
Hello,
I'm working on building an iSCSI storage server to use as the backend for
virtual servers. I am far more familiar with FreeBSD and Linux, but want to use
OpenSolaris for this project because of COMSTAR & ZFS. My plan was to have 24
2TB Hitachi SATA drives connected via SAS expanders to
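For anyone following along, the COMSTAR side of such a setup is roughly the
following (a sketch only; pool, zvol name, size and the LU GUID are
placeholders):

  # carve a zvol out of the pool and expose it as a SCSI logical unit
  zfs create -V 100g tank/lun0
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0

  # make the LU visible to initiators and create an iSCSI target
  stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx
  itadm create-target

The GUID on the add-view line comes from the output of sbdadm create-lu.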
I'm currently running a Sun Fire V880 with snv_134, but would like to
upgrade the machine to a self-built snv_144. Unfortunately, boot
environment creation fails:
# beadm create snv_134-svr4
Unable to create snv_134-svr4.
Mount failed.
In truss output, I find
2514: mount("rpool", "/rpool", MS
I was looking for a way to do this without downtime... It seems that
this kind of basic relayout operation should be easy to do.
On Mon, Jul 19, 2010 at 12:44 PM, Freddie Cash wrote:
> On Mon, Jul 19, 2010 at 9:06 AM, Max Levine wrote:
>> Is it possible in ZFS to do the following.
>>
>> I have a
FWIW I found netcat over at CSW.
http://www.opencsw.org/packages/CSWnetcat/
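In case it helps the thread, the netcat variant generally looks like this
(pool, snapshot, host and port are placeholders, and the exact nc flags differ
between netcat builds):

  # on the receiving host
  nc -l -p 9090 | zfs receive -F tank/backup

  # on the sending host
  zfs send tank/data@snap | nc receiver 9090

There is no authentication or encryption here beyond what TCP provides, so it
only makes sense on a trusted network.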
Using SunOS X 5.11 snv_133 i86pc i386 i86pc. So the network thing that
was fixed in 129 shouldn't be the issue.
-Original Message-
From: Brent Jones [mailto:br...@servuhome.net]
Sent: Monday, July 19, 2010 1:02 PM
To: Richard Jahnel
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-di
>>1.) did you move your drives around or change which controller each one was
>>connected to sometime after installing and setting up OpenSolaris?
>>If so, a pool export and re-import may be in order.
No, I didn't. It was the original setup.
2.) are you sure the drive is failing? Does the problem o
No, the pool tank consists of 7 physical drives (5 Seagate and 2 Western
Digital).
See output below
# zpool status tank
pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2
I know that the ST3500320AS is a Seagate Barracuda.
That's exactly why I am confused.
I looked physically at the drives and I confirm again that 5 drives are Seagate
and 2 drives are Western Digital.
But Solaris tells me that all 7 drives are Seagate Barracudas, which is
definitely not correct.
This is
Hi, thanks for answering,
> How large is your ARC / your main memory?
> Probably too small to hold all metadata (1/1000 of the data amount).
> => metadata has to be read again and again
Main memory is 8GB. ARC (according to arcstat.pl) usually stays at 5-7GB
> A recordsize smaller than 128k
Thanks, Cindy,
but format shows exactly the same thing:
all of them appear as Seagate, no WD at all...
How could that be?
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/p...@0,0/pci15d9,a...@5/d...@0,0
1. c1t1d0
/p...@0,0/pci15d9,a...@
Hi,
some information is missing...
How large is your ARC / your main memory?
Probably too small to hold all metadata (1/1000 of the data amount).
=> metadata has to be read again and again
A recordsize smaller than 128k increases the problem.
It's a data volume, perhaps raidz or raidz2, and
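A quick way to sanity-check the ARC theory (the numbers are obviously
machine-specific):

  # current ARC size and the portion used for metadata, in bytes
  kstat -p zfs:0:arcstats:size
  kstat -p zfs:0:arcstats:arc_meta_used
  kstat -p zfs:0:arcstats:arc_meta_limit

If arc_meta_used sits at arc_meta_limit while the workload is metadata-heavy,
the "metadata has to be read again and again" explanation above fits.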
>I've used mbuffer to transfer hundreds of TB without a problem in mbuffer
>itself. You will get disconnected if the send or receive prematurely ends,
>though.
mbuffer itself very specifically ends with a broken pipe error: very quickly
with -s set to 128, or after some time with -s set over 1024.
On Jul 19, 2010, at 10:49 AM, Richard Jahnel wrote:
>> Any idea why? Does the zfs send or zfs receive bomb out part way through?
>
> I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it
> take longer to occur and slowed it down by about 20%, but didn't resolve the
> issue.
On 16/07/2010 23:57, Richard Elling wrote:
On Jul 15, 2010, at 4:48 AM, BM wrote:
2. No community = stale outdated code.
But there is a community. What is lacking is that Oracle, in their infinite
wisdom, has stopped producing OpenSolaris developer binary releases.
Not to be outdone
ap> 2) there are still bugs that *must* be fixed before Btrfs can
ap> be seriously considered:
ap> http://www.mail-archive.com/linux-bt...@vger.kernel.org/msg05130.html
I really don't think that's a show-stopper. He filled the disk with
2KB files. HE FILLED THE DISK WITH 2KB
On 19-7-2010 20:36, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa wrote:
>
>> Hi,
>>
>> If you can share those scripts that make use of mbuffer, please feel
>> free to do so ;)
>>
>>
>> Bruno
>> On 19-7-2010 20:02, Brent Jones wrote:
>>
>>> On Mon, Jul 19, 2010 at 9:06
On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa wrote:
> Hi,
>
> If you can share those scripts that make use of mbuffer, please feel
> free to do so ;)
>
>
> Bruno
> On 19-7-2010 20:02, Brent Jones wrote:
>> On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel
>> wrote:
>>
>>> I've tried ssh blowfish
Richard Jahnel wrote:
Any idea why? Does the zfs send or zfs receive bomb out part way through?
I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it take
longer to occur and slowed it down by about 20%, but didn't resolve the issue.
It just meant I might get as far as
On Mon, Jul 19, 2010 at 01:34:58AM -0700, Garrett D'Amore wrote:
...snip...
>
> Very simple. 2 vdevs give 2 active "spindles", so you get about twice
> the performance of a single disk.
>
> raidz2 generally gives the performance of a single disk.
>
> For high performance, if you can sacrifice t
Richard,
On 19 Jul 2010, at 18:49, Richard Jahnel wrote:
I heard of some folks using netcat.
I haven't figured out where to get netcat nor the syntax for using
it yet.
I also did a bit of research into using netcat and found this...
http://www.mail-archive.com/storage-disc...@opensolaris.
Hi,
If you can share those scripts that make use of mbuffer, please feel
free to do so ;)
Bruno
On 19-7-2010 20:02, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel
> wrote:
>
>> I've tried ssh blowfish and scp arcfour. both are CPU limited long before
>> the 10g link i
On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel wrote:
> I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the
> 10g link is.
>
> I've also tried mbuffer, but I get broken pipe errors part way through the
> transfer.
>
> I'm open to ideas for faster ways to do either zfs
>If this is across a trusted link, have a look at the HPN patches to
>SSH. There are three main benefits to these patches:
>- increased (and dynamic) buffers internal to SSH
>- adds a multi-threaded AES cipher
>- adds the NONE cipher for non-encrypted data transfers
>(authentication is still encryp
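If an HPN-patched ssh is installed, the NONE cipher is typically requested per
connection roughly like this (a sketch only; the option names come from the
HPN patch set and may vary by version, the server side has to allow them too,
and the data channel is unencrypted, so use it only on a trusted link; pool,
snapshot and host names are placeholders):

  zfs send tank/data@snap | \
    ssh -o NoneEnabled=yes -o NoneSwitch=yes receiver zfs receive -F tank/backup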
>Any idea why? Does the zfs send or zfs receive bomb out part way through?
I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it take
longer to occur and slowed it down by about 20%, but didn't resolve the issue.
It just meant I might get as far as 2.5 GB before mbuffer bombed
This is now CR 6970210.
I've been experimenting with a two-system setup in snv_134 where
each system exports a zvol via COMSTAR iSCSI. One system imports
both its own zvol and the one from the other system and puts them
together in a ZFS mirror.
I manually faulted the zvol on one system by p
If the format utility is not displaying the WD drives correctly,
then ZFS won't see them correctly either. You need to find out why.
I would export this pool and recheck all of your device connections.
cs
On 07/19/10 10:37, Yuri Homchuk wrote:
No, the pool tank consists of 7 physical drives
On Mon, 2010-07-19 at 12:06 -0500, Bob Friesenhahn wrote:
> On Mon, 19 Jul 2010, Garrett D'Amore wrote:
> >
> > With those same 14 drives, you can get 7x the performance instead of 2x
> > the performance by using mirrors instead of raidz2.
>
> This is of course constrained by the limits of the I/O
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
How much RAM do you have?
Scott
A few things:
1.) did you move your drives around or change which controller each one
was connected to sometime after installing and setting up OpenSolaris?
If so, a pool export and re-import may be in order.
2.) are you sure the drive is failing? Does the problem only affect
this drive or
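On point 1, the export and re-import is just the following (the pool name is a
placeholder; make sure nothing is using the pool first):

  zpool export tank
  zpool import tank

The import rescans the devices, so device paths that changed when drives were
moved between controllers get picked up again.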
On Mon, 19 Jul 2010, Garrett D'Amore wrote:
With those same 14 drives, you can get 7x the performance instead of 2x
the performance by using mirrors instead of raidz2.
This is of course constrained by the limits of the I/O channel.
Sometimes the limits of PCI-E or interface cards become the d
On Mon, Jul 19, 2010 at 9:06 AM, Max Levine wrote:
> Is it possible in ZFS to do the following:
>
> I have an 800GB LUN as a single device in a pool, and I want to migrate
> that to 8 100GB LUNs. Is it possible to create an 800GB concat out of
> the 8 devices, and mirror that to the original device, t
On 07/18/10 17:39, Packet Boy wrote:
What I cannot find is how to take an existing Fedora image and copy
its contents into a ZFS volume so that I can migrate this image
from my existing Fedora iSCSI target to a Solaris iSCSI target (and
of course get the advantages of having that disk imag
Our server locked up hard yesterday and we had to hard power it off and back
on. The server locked up again on reading ZFS config (I left it trying to read
the zfs config for 24 hours). I went through and removed the drives for the
data pool we created and powered on the server and it booted suc
Richard Jahnel wrote:
I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the
10g link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
Any idea why? Does the zfs send or zfs receive bomb out part way through?
Might be worth t
On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel wrote:
> I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the
> 10g link is.
>
> I've also tried mbuffer, but I get broken pipe errors part way through the
> transfer.
>
> I'm open to ideas for faster ways to do either zfs
On Mon, 19 Jul 2010, Joerg Schilling wrote:
The missing requirement to provide build scripts is a drawback of the CDDL.
...But believe me that the GPL would not help you here, as the GPL cannot
force the original author (in this case Sun/Oracle or whoever) to supply the
scripts in question.
T
I think you are saying that even though format shows 9 devices (0-8) on
this system, there's really only 7 and the pool tank has only 5 (?).
I'm not sure why some devices would show up as duplicates.
Any recent changes to this system?
You might try exporting this pool and make sure that all
Is it possible in ZFS to do the following:
I have an 800GB LUN as a single device in a pool, and I want to migrate
that to 8 100GB LUNs. Is it possible to create an 800GB concat out of
the 8 devices, and mirror that to the original device, then detach the
original device? It is possible to do this onl
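If the attach/detach route turns out not to work, one hedged alternative
(device, pool and snapshot names below are placeholders) is to build a second
pool on the 8 new LUNs and replicate onto it:

  zpool create newpool c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -F -d newpool

This needs a window in which writes to the old pool are quiesced (or a second,
incremental send), which is exactly the downtime the thread is trying to avoid.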
I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the
10g link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
I'm open to ideas for faster ways to do either zfs send directly or through a
compressed file of the zfs send output.
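On the compressed-file variant, the sketch is simply the following (names are
placeholders; any streaming compressor can stand in for gzip):

  # capture the stream to a compressed file
  zfs send tank/data@snap | gzip -1 > /backup/data_snap.zfs.gz

  # restore it later
  gunzip -c /backup/data_snap.zfs.gz | zfs receive -F tank/data

Keep in mind that a stored send stream has no redundancy of its own; zfs
receive rejects a stream that is corrupted anywhere, so the file should live
on protected storage.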
Hi--
A Google search of ST3500320AS turns up Seagate Barracuda drives.
All 7 drives in the pool tank are ST3500320AS. The other two c1t0d0
and c3d0 are unknown, but are not part of this pool.
You can also use fmdump -eV to see how long c2t3d0 has had problems.
Thanks,
Cindy
On 07/19/10 09:29
On Fri, Jul 16 at 18:32, Jordan McQuown wrote:
I'm curious to know what other people are running for HDs in white box
systems. I'm currently looking at Seagate Barracudas and Hitachi
Deskstars, in the 1TB models. These will be attached to an LSI
expander in an SC847E2 chassis
Hi--
I don't know what's up with iostat -En but I think I remember a problem
where iostat does not correctly report drives running in legacy IDE mode.
You might use the format utility to identify these devices.
Thanks,
Cindy
On 07/18/10 14:15, Alxen4 wrote:
This is the situation:
I've got an e
Hello,
I think this is the second time this has happened to me. A couple of years ago,
I deleted a big (500G) zvol and then the machine started to hang some 20 minutes
later (out of memory); even rebooting didn't help. But with the great support
from Victor Latushkin, who on a weekend helped me debug t
Hi,
if you regard only changes to a single file as transactions,
then flock() and fsync() are sufficient to reach an ACID level with ZFS.
To achieve transactions which change multiple files,
you need flock() and fsync(), plus snapshots for transaction commit
or rollback for transaction abort.
But the
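A minimal sketch of the snapshot part (dataset and snapshot names are
placeholders):

  # begin the multi-file transaction
  zfs snapshot tank/appdata@txn

  # ... modify several files under /tank/appdata ...

  # abort: roll every file back to the snapshot state
  zfs rollback tank/appdata@txn

  # commit: keep the new state and drop the safety snapshot
  zfs destroy tank/appdata@txn

Note that the rollback discards everything written to the dataset after the
snapshot, so this only works if the dataset is dedicated to the transactional
data.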
On 07/19/10 07:26, Andrej Podzimek wrote:
I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a
serious issue with any of them so far.
Moblin/Meego ships with btrfs by default. COW file system on a
cell phone :-). Unsurprisingly for a read-mostly file system it
seems pretty stab
On 12/07/2010 16:32, Erik Trimble wrote:
ZFS is NOT automatically ACID. There is no guarantee of commits for
async write operations. You would have to use synchronous writes to
guarantee commits. And, furthermore, I think that there is a strong
# zfs set sync=always pool
will force all I/O
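For completeness, the property can be checked and reverted per pool or dataset
(the name is a placeholder), since sync=always trades throughput for the
guarantee:

  zfs get sync pool
  zfs set sync=standard pool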
Ubuntu always likes to be "on the edge", even if btrfs is far from being
'stable'. I would not want to run a release that does this. Servers need
stability and reliability. Btrfs is far from this.
Well, it seems to me that this is a well-known and very popular "circular
argument":
A: XYZ is far
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
wrote:
> Giovanni Tirloni wrote:
>
>> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin wrote:
>> > IMHO it's important we don't get stuck running Nexenta in the same
>> > spot we're now stuck with OpenSolaris: with a bunch of CDDL-protected
>> > sou
On 19-7-2010 12:27, Anil Gulecha wrote:
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen wrote:
Upcoming Ubuntu 10.10 will use BTRFS as a default.
Though there was some discussion around this, I don't think the above
is a given. The Ubuntu devs would look at the status of the project,
and dec
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen wrote:
>
> Upcoming Ubuntu 10.10 will use BTRFS as a default.
>
Though there was some discussion around this, I don't think the above
is a given. The Ubuntu devs would look at the status of the project,
and decide closer to the release.
~Anil
PS
Giovanni Tirloni wrote:
> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin wrote:
> > IMHO it's important we don't get stuck running Nexenta in the same
> > spot we're now stuck with OpenSolaris: with a bunch of CDDL-protected
> > source that few people know how to use in practice because the buil
Thanks, seems simple.
On Sat, Jul 17, 2010 at 12:57:40AM +0200, Richard Elling wrote:
>
> > Because of BTRFS for Linux, Linux's popularity itself, and also thanks
> > to Oracle's help.
>
> BTRFS does not matter until it is a primary file system for a dominant
> distribution.
> From what I can tell, the dominant
On Mon, 2010-07-19 at 01:28 -0700, tomwaters wrote:
> Hi guys, I am about to reshape my data pool and am wondering what
> performance difference I can expect from the new config vs. the old.
>
> The old config is a pool of a single vdev of 8 disks in raidz2.
> The new pool config is 2 vdevs of 7-disk
Hi guys, I am about to reshape my data pool and am wondering what performance
difference I can expect from the new config vs. the old.
The old config is a pool of a single vdev of 8 disks in raidz2.
The new pool config is 2 vdevs of 7-disk raidz2 in a single pool.
I understand it should be better wit
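For reference, the new layout would be created in one step, with both raidz2
vdevs in the same pool (the device names below are placeholders for the 14
disks):

  zpool create tank2 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

ZFS stripes writes across the two vdevs, which is where the expected
improvement over the single 8-disk raidz2 vdev comes from.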