On Feb 15, 2013, at 11:08 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
Anybody using maczfs / ZEVO? Have good or bad things to say, in terms of
reliability, performance, features?
My main reason for asking is
On Sep 18, 2012, at 10:40 AM, Dan Swartzendruber wrote:
On 9/18/2012 10:31 AM, Eugen Leitl wrote:
I'm currently thinking about rolling a variant of
http://www.napp-it.org/napp-it/all-in-one/index_en.html
with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for produc
we target (enterprise customers).
The beauty of ZFS is the flexibility of its implementation. By supporting
multiple log device types and configurations, it ultimately enables a broad
range of performance capabilities!
Best regards,
Chris
--
C
ing more than to continue to
design and offer our unique ZIL accelerators as an alternative to Flash-only
SSDs and hopefully help (in some small way) the success of ZFS.
Thanks again for taking the time to share your thoughts!
The drive for speed,
Chris
----
Christopher Geor
ta" to mean the SSD's internal meta data...
I'm curious, any other interpretations?
Thanks,
Chris
----
Christopher George
cgeorge at ddrdrive.com
http://www.ddrdrive.com/
?
Yes! Customers using Illumos-derived distros make up a
good portion of our customer base.
Thanks,
Christopher George
www.ddrdrive.com
rotection" at:
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html
Intel's brief also clears up a prior controversy over what types of
data are actually cached; per the brief, it's both user and system
data!
Best regards,
Christophe
On Jan 3, 2012, at 9:35 AM, Svavar Örn Eysteinsson wrote:
> Hello.
>
> I'm planning to replace my old Apple XRAID and XSAN filesystem (1.4.2) Fibre
> environment.
> This setup only hosted AFP/CIFS for a large advertising agency.
> Now that Fibre is damn expensive and for one thing, we do not ne
slice instead of the entire device will
automatically disable the on-board write cache.
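For completeness, the cache can be re-enabled by hand when the device is
known to be safe to cache. A sketch using format(1M)'s expert mode; menu
names from memory, so verify on your build:

  # format -e
  (select the log device from the menu)
  format> cache
  cache> write_cache
  write_cache> display    (reports whether the write cache is enabled)
  write_cache> enable
  write_cache> quit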
Christopher George
Founder / CTO
http://www.ddrdrive.com/
is immune to TRIM support status and
thus unaffected. Actually, TRIM support would only add
unnecessary overhead to the DDRdrive X1's device driver.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
SATA cable, see slides 15-17.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
ing.com/Home/scripts-and-programs-1/zilstat
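If memory serves, zilstat takes the usual interval/count arguments (a
sketch; it is a DTrace-based script, so run it with the necessary
privileges):

  # ./zilstat 1 10    (one-second samples, ten of them, of ZIL write activity)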
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
aster,
> assuming that cache disabled on a rotating drive is roughly 100
> IOPS with queueing), that it'll still provide a huge performance boost
> when used as a ZIL in their system.
I agree 100%. I never intended to insinuate otherwise :-)
Best regards,
Christopher George
Fou
Above excerpts were written by an OCZ-employed thread moderator (Tony).
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
g in a larger context:
http://www.oug.org/files/presentations/zfszilsynchronicity.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
ng to perform a Secure Erase every hour, day, or even
week really be the most cost-effective use of an administrator's time?
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
y" than sync=disabled.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> got it attached to a UPS with very conservative shut-down timing. Or
> are there other host failures aside from power a ZIL would be
> vulnerable to (system hard-locks?)?
Correct, a system hard-lock is another example...
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
e.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
are valid, the resulting degradation
will vary depending on the controller used.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
he size of the resultant binaries?
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
is: drive inactivity has no effect on the eventual outcome. So with either a
bursty or sustained workload the end result is always the same: dramatic write
IOPS degradation after unpackaging or secure erase of the tested Flash-based SSDs.
Best regards,
Christopher George
Founder/CTO
www.
> TRIM was putback in July... You're telling me it didn't make it into S11
> Express?
Without top-level ZFS TRIM support, SATA Framework (sata.c) support
has no bearing on this discussion.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
the hour time-limit.
The reason the graphs are done in a timeline fashion is so you can look
at any point in the 1-hour series to see how each device performs.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
IOPS / $1,995) = 19.40
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
1 Express!
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> Any opinions? stories? other models I missed?
I was a speaker at the recent OpenStorage Summit,
my presentation "ZIL Accelerator: DRAM or Flash?"
might be of interest:
http://www.ddrdrive.com/zil_accelerator.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
nt (or aggregate) write pattern trends to random. Over
50% random with a pool containing just 5 filesystems. This makes
intuitive sense knowing each filesystem has its own ZIL and they
all share the dedicated log (ZIL Accelerator).
Best regards,
Christopher George
Founder/CTO
www.d
en though we delineate the storage media used depending on host
power condition. The X1 exclusively uses DRAM for all IO
processing (host is on) and then Flash for permanent non-volatility
(host is off).
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
SSD does *not* suffer the same fate, as its
performance is not bound by, nor does it vary with, partition (mis)alignment.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
are a ZIL accelerator well matched to the 24/7 demands of enterprise use.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
Here is another very recent blog post from ConstantThinking:
http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained
Very well done, and a highly recommended read.
Christopher George
Founder/CTO
www.ddrdrive.com
. The same principles
and benefits of multi-core processing apply here with multiple controllers.
The performance potential of NVRAM-based SSDs dictates moving away
from a single/separate HBA-based controller.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
as not power protecting on-board volatile caches. The X25-E does
implement the ATA FLUSH CACHE command, but it does not have the
required power protection to avoid transaction (data) loss.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
SSDs that fully comply with the POSIX requirements for synchronous write
transactions and do not lose transactions on a host power failure, we are
competitively priced at $1,995 SRP.
Christopher George
Founder/CTO
www.ddrdrive.com
> No Slogs as I haven't seen a compliant SSD drive yet.
As the architect of the DDRdrive X1, I can state categorically the X1
correctly implements the SCSI Synchronize Cache (flush cache)
command.
Christopher George
Founder/CTO
www.ddrdrive.com
We have a pair of opensolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
# zfs destroy -f z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried:
Un
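One cause worth ruling out: from pool version 18 onward a snapshot can carry
a user hold, and a held snapshot fails to destroy with this same "dataset is
busy" error. A sketch against the snapshot above, where <tag> is whatever
hold tag the first command reports:

  # zfs holds z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00
  # zfs release <tag> z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00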
to/from removable media.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
market.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
and an overwhelming attention to detail.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
ers well.
We are actively designing our soon-to-be-available support plans. Your voice
will be heard; please email directly at for requests, comments, and/or
questions.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
rs (non-clustered) an additional option.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
> Personally I'd say it's a must. Most DCs I operate in wouldn't tolerate
> having a card separately wired from the chassis power.
May I ask the list, if this is a hard requirement for anyone else?
Please email me directly "cgeorge at ddrdrive dot com".
Th
t for the DC jack to be
unpopulated so that an internal power source could be utilized. We will
make this modification available to any customer who asks.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
ree to disagree. I respect your point of view, and do
agree strongly that Li-Ion batteries play a critical and highly valued role in
many industries.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
because it is a proven and industry-standard method of enterprise-class data backup.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
oduct cannot be supported by
any of the BBUs currently found on RAID controllers. It would require either a
substantial increase in energy density or a decrease in packaging volume,
both of which incur additional risks.
> Interesting product though!
Thanks,
Christopher George
Founder/CTO
www
r HBAs,
which do require an x4 or x8 PCIe connection.
Very appreciative of the feedback!
Christopher George
Founder/CTO
www.ddrdrive.com
ovides an optional (user-configured) backup/restore feature.
Christopher George
Founder/CTO
www.ddrdrive.com
advancement of Open
Storage and explore the far-reaching potential of ZFS
based Hybrid Storage Pools?
If so, please send an inquiry to "zfs at ddrdrive dot com".
The drive for speed,
Christopher George
Founder/CTO
www.ddrdrive.com
*** Special thanks go out to Sun employees Garrett D'
Bob - thanks, that makes sense.
The classroom book refers to "top-level virtual devices," which were referred
to as TLDs throughout the class (Top-Level Devices). As you noted, those
are either the base LUN, mirror, raidz, or raidz2.
So there's no limit to the number of TLDs/vdevs we can have, the
been updated for RAIDZ-3 yet, but you
> will get some ideas about how to configure a redundant configuration
> of many disks, here:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
>
> See ZFS Configuration Example (x4500 with raidz2)
>
> Cindy
All,
We're going to start testing ZFS and I had a question about Top Level
Devices (TLDs). In Sun's class, they specifically said not to use more than
9 TLDs due to performance concerns. Our storage admins make LUNs roughly
15G in size -- so how would we make a large pool (1TB) if we're limited
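Back-of-envelope arithmetic for squaring the two constraints, assuming both
the 9-TLD guideline and the 15G LUN size are fixed:

  1TB target / 15G per LUN         ~ 68 LUNs
  68 LUNs across 9 TLDs            ~ 8 LUNs per top-level raidz vdev
  usable ~ 9 vdevs x (8-1) x 15G   ~ 945G (one parity LUN per raidz vdev)

In other words, the guideline limits the number of top-level vdevs, not the
number of LUNs; grouping LUNs into raidz (or mirror) vdevs keeps the pool at
9 TLDs regardless of capacity.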
x/g'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=
                path='/dev/dsk/c2d0s6'
                devid='id1,c...@xxx/g'
                whole_disk=0
< --- The remain
ment then boots fine.
-Original Message-
From: Nicolas Williams [mailto:nicolas.willi...@sun.com]
Sent: Tuesday, February 24, 2009 5:43 PM
To: Christopher Mera
Cc: lori@sun.com; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs streams & data corruption
On Mon, Feb 23,
Chris
From: lori@sun.com [mailto:lori@sun.com]
Sent: Tuesday, February 24, 2009 3:13 PM
To: Christopher Mera
Cc: Brent Jones; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs streams & data corruption
On 02/24/09 12:57, Christopher Mera wrote:
How is it that flash archives can avoid these headaches?
Ultimately I'm doing this to clone ZFS root systems, because at the moment
Flash Archives are UFS-only.
-Original Message-
From: Brent Jones [mailto:br...@servuhome.net]
Sent: Tuesday, February 24, 2009 2:49 PM
To: Christ
Thanks for your responses..
Brent:
And I'd have to do that for every system that I'd want to clone? There
must be a simpler way.. perhaps I'm missing something.
Regards,
Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.openso
Either way - it would be ideal to quiesce the system before a snapshot anyway,
no?
My next question now is what particular steps would be recommended to quiesce a
system for the clone/zfs stream that I'm looking to achieve...
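For what it's worth, since ZFS snapshots are atomic and point-in-time, the
usual pattern is to pause or checkpoint the writing applications only for the
instant of the snapshot; the send can then run at leisure. A sketch with a
hypothetical pool name:

  (quiesce or checkpoint the applications writing to the pool)
  # zfs snapshot -r rpool@golden
  (resume the applications)
  # zfs send -R rpool@golden > /backup/rpool.golden.zfs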
All your help is appreciated.
Regards,
Christopher
d9f61e94 genunix:lookupnameat+52 (807b51c, 0, 1, 0, d)
d9f61ef4 genunix:cstatat_getvp+15d (ffd19553, 807b51c, )
d9f61f54 genunix:cstatat64+68 (ffd19553, 807b51c, )
d9f61f84 genunix:stat64+1c (807b51c, 8047b50, 8)
From: lori@sun.com [mailto:lori@sun.com]
Sent: Monday, February 23, 2009 1:17 PM
To: C
about this I'd love to hear them!
Thanks,
Christopher Mera
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Evening all, I'm new to Solaris but after drooling over ZFS for ages I finally
took the plunge.
First off I had 2x 1TB HDDs in RAID-1 XFS format using mdadm, so using an
OpenSolaris VM image I transferred one side of the mirror to the other in ZFS
(using rsync, and it took 3 days!).
So with a 1 disk
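Assuming the plan is the usual one-disk-at-a-time migration, the second drive
can later be attached to turn the single-disk pool back into a mirror
(hypothetical pool and device names):

  # zpool attach tank c1t0d0 c1t1d0
  # zpool status tank    (watch the resilver run to completion)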
es,
> but it's still *stable* (or has been for me to date).
>
> --Tim
>
> On Fri, May 23, 2008 at 2:43 PM, Christopher Gibbs <[EMAIL PROTECTED]>
> wrote:
>>
>> Pretty much what the subject says. I'm wondering which platform will
>> have the best stability/p
So now I have to ask: should I go with the OpenSolaris (.com) release
instead? Also, is there one that has better/newer driver support?
(Mostly in relation to SATA controllers.)
Not sure if this is the right place to post this, but since my main
goal is a ZFS server, I should get your guys' opinions.
-
vg: 3792  compression: 2.84
> SPA allocated: 121344  used: 0.06%
>
>                 capacity     operations    bandwidth      errors
> description     used avail   read  write   read  write  read write cksum
> t               119K  178M    152      0  1.91M      0     0     0     0
>   /tmp/t/1     42.5K 59.5M     80      0   650K      0     0     0     0
>   /tmp/t/2     33.5K 59.5M     31      0   653K      0     0     0     0
>   /tmp/t/3     42.5K 59.5M     41      0   650K      0     0     0     0
--
Christopher Gibbs
Programmer / Analyst
Web Integration & Programming
Abilene Christian University
Tue, May 20, 2008 at 4:47 PM, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> Here's what I get back:
>
> $ sudo zdb -e tank
> zdb: can't open tank: Invalid argument
>
> "tank" is the name of my pool. I tried a bogus name and get this back
's missing, and once you get it to show up you can import the pool.
>
>Rob
>
--
Christopher Gibbs
Programmer / Analyst
Web Integration & Programming
Abilene Christian University
May 20, 2008 at 3:46 PM, Rob Logan <[EMAIL PROTECTED]> wrote:
>> There's also a spare attached to the pool that's not showing here.
>
> can you make it show?
>
>Rob
>
--
Christopher Gibbs
Programmer / Analyst
W
e pool was functioning perfectly before the restart. There's also a
spare attached to the pool that's not showing here.
On Tue, May 20, 2008 at 3:03 PM, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> So just as the subject says, I replaced a failed disk. Resilver
> completed succ
back:
$ sudo zpool import -f tank
cannot import 'tank': invalid vdev configuration
I'm using: Solaris Express Developer Edition 1/08 snv_79b X86
Any ideas?
--
Christopher Gibbs
Programmer / Analyst
Web Integration & Programming
Abil
Oops, I forgot a step. I also upgraded the zpool in snv79b before I
tried the remove. It is now version 10.
On 2/15/08, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> The pool was exported from snv_73 and the spare was disconnected from
> the system. The OS was upgraded to snv_79
2/15/08, Robin Guo <[EMAIL PROTECTED]> wrote:
> Hi, Christopher,
>
> I tried using raw files as the spare, removed the file, then ran 'zpool
> remove'; it looks like the raw files could be eliminated from the pool.
>
> But since you use the physical device, I s
By default the webconsole only listens locally, but it looks like
you've already set it to listen for external TCP requests.
Whenever I've changed this property I had to do a full disable and
then enable for the change to take effect. Might be worth a try. :)
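For anyone following along, the full sequence, property included, would look
like this; a sketch assuming the stock webconsole SMF service and its
tcp_listen option:

  # svccfg -s svc:/system/webconsole setprop options/tcp_listen=true
  # svcadm refresh svc:/system/webconsole
  # svcadm disable svc:/system/webconsole
  # svcadm enable svc:/system/webconsole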
On 2/14/08, Michael Schuster <[EMAIL P
It should be there... try starting the webconsole service.
On 2/14/08, Tim Thomas <[EMAIL PROTECTED]> wrote:
>
> Hi
>
> I just loaded up opensolaris on an X4500 (Thumper) and tried to connect to
> the ZFS GUI (https://x:6789)...and it is not there.
>
> Is this not part of Open Solaris...or do
I have a hot spare that was part of my zpool but is no longer
connected to the system. I can run the zpool remove command and it
returns fine but doesn't seem to do anything.
I have tried adding and removing spares that are connected to the
system and that works properly. Is zpool remove failing becaus
If a machine has very obvious memory corruption due to bad ram, is there
anything beyond scrub that can verify the integrity of a pool? Am I
correct in assuming that scrub will fix checksum errors, but not
metadata errors?
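For reference, scrub traverses every allocated block in the pool, metadata
included, verifying checksums and repairing from redundancy where possible;
the cycle is just (hypothetical pool name):

  # zpool scrub tank
  # zpool status -v tank    (scrub progress plus any unrecoverable errors)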
--
Christopher Gorski
nov2002/eujpg
and
# ls
/tmp/pond/testdir/pond/photos/unsorted/drive-452a/\[E\]/drive/archives/seconddisk_20nov2002/eujpg
103-0398_IMG.JPG and other files should be missing.
I filed a bug report, but I can't find the link to it.
This seems to work on zfs or ufs.
--
Christopher Gorski
Joerg Schilling wrote:
> "Will Murnane" <[EMAIL PROTECTED]> wrote:
>
>> On Jan 30, 2008 1:34 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
>>> If this is Sun's cp, file a bug. It's failing to notice that it didn't
>>> provide a large enough buffer to getdents(), so it only got partial results.
>>>
Carson Gaspar wrote:
> Christopher Gorski wrote:
>
>> I noticed that the first calls in the "cp" and "ls" to getdents() return
>> similar file lists, with the same values.
>>
>> However, in the "ls", it makes a second call to getde
Christopher Gorski wrote:
> Christopher Gorski wrote:
>> Robert Milkowski wrote:
>>> As Joerg suggested - please check getdents() - remember to use truss
>>> -v getdents so you should see all directory listings.
>>>
>>> I would check both get
Christopher Gorski wrote:
> Robert Milkowski wrote:
>>
>> As Joerg suggested - please check getdents() - remember to use truss
>> -v getdents so you should see all directory listings.
>>
>> I would check both getdents and open - so if it appears in getde
Robert Milkowski wrote:
>
>
> As Joerg suggested - please check getdents() - remember to use truss
> -v getdents so you should see all directory listings.
>
> I would check both getdents and open - so if it appears in getdents
> but is not opened later on...
>
>
I ran the copy procedure with
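Presumably an invocation along these lines (a sketch, not necessarily the
poster's exact command; -t limits tracing to the named system calls, -v
prints their structures, -f follows forks):

  # truss -f -t getdents,getdents64 -v getdents,getdents64 \
      -o /tmp/cp.truss cp -rp /pond/photos/* /pond/read-only/copytest/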
Christopher Gorski wrote:
> "unsorted/photosbackup/laptopd600/[D]/cag2b/eujpg/103-0398_IMG.JPG" is a
> file that is always missing in the new tree.
Oops, I meant:
"unsorted/drive-452a/[E]/drive/archives/seconddisk_20nov2002/eujpg/103-0398_IMG.JPG"
is alwa
Robert Milkowski wrote:
> Hello Christopher,
>
> Friday, January 25, 2008, 5:37:58 AM, you wrote:
>
> CG> michael schuster wrote:
>>> I assume you've assured that there's enough space in /pond ...
>>>
>>> can you try
>>>
>
michael schuster wrote:
>
> I assume you've assured that there's enough space in /pond ...
>
> can you try
>
> (cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
I tried it, and it worked. The new tree is an exact copy of the old one.
-Chris
Nicolas Williams wrote:
> Are there so many files that the glob expansion results in too large an
> argument list for cp?
There are only four subdirs in /pond/photos:
# ls /pond/photos
2006-02-15 2006-06-09 2007-12-20 unsorted
> On Thu, Jan 24, 2008 at 11:06:13PM -0500, Christopher Gorski wrote:
>> I'm missing actual files.
>>
>>> Christopher Gorski wrote:
>>>> zfs create pond/read-only
>>>> mkdir /pond/read-only/copytest
>>>> cp -rp /pond/photos/* /pond
/copytestsame
The original samba copy from another PC to /pond/photos copied
everything correctly.
-Chris
Christopher Gorski wrote:
> I'm missing actual files.
>
> I did this a second time, with the exact same result. It appears that
> the missing files in each copy are the s
s
michael schuster wrote:
> Christopher Gorski wrote:
>> Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 500GB usb
>> drives mirrored into one pool.
>>
>> I did this (intending to set the rdonly flag after I copy my data):
>>
>> zfs cr
Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 500GB usb
drives mirrored into one pool.
I did this (intending to set the rdonly flag after I copy my data):
zfs create pond/read-only
mkdir /pond/read-only/copytest
cp -rp /pond/photos/* /pond/read-only/copytest/
After the copy is c
I went ahead and bought an M9N-SLI motherboard with 6 SATA controllers and also
a Promise TX4 (4x SATA300 non-RAID) PCI controller. Anyone know if the TX4 is
supported in OpenSolaris? If it's as badly supported as the (crappy) SiI
chipsets, I'm better off with OpenFiler (Linux), I think.
I have ProFTPD successfully installed and running, though I would like to
virtually mount some directories from my ZFS configurations. In a previous
ProFTPD install on Ubuntu, I had an entry in my /etc/fstab file like this:
/HDD ID/directory /home/FTP-shared/information vfat bind 0 0
Thou
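The Solaris analogue of Linux's "bind" mount is a loopback (lofs) mount. A
sketch with hypothetical paths, as a one-off command and as the equivalent
/etc/vfstab entry:

  # mount -F lofs /pool/directory /home/FTP-shared/information

  /etc/vfstab:
  /pool/directory  -  /home/FTP-shared/information  lofs  -  yes  -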
Would the nv_sata driver also be used on nForce 590 SLI? I found an Asus
M2N32-WS PRO at my hardware shop which has 9 internal SATA connectors.
I'm new to the list so this is probably a noob question: is this forum part of
a mailing list or something? I keep getting some answers to my posts in this
thread by email as well as some here, but it seems that those answers/posts on
email aren't posted on this forum..?? Or do I just get a copy
--
Christopher Gibbs
Email / LDAP Administrator
Web Integration & Programming
Abilene Christian University
Hmm.. Thanks for the input. I want to have the most space but still need RAID
in some way to have redundancy.
I've added it up and found this:
ggendel - your suggestion makes me "lose" 1TB - lose 250GB x2 for the RAID-1
ones and then 500GB from a 3x500GB = 1TB
bonwick - your first suggestion
ot that
hard to admin) - Would I be able to join the two RAIDZs together for one BIG
volume altogether? And will it survive one disk failure?
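What's being asked for is one pool with two raidz top-level vdevs; a sketch
with hypothetical device names. ZFS stripes across the vdevs automatically,
presenting one big volume, and the pool survives one disk failure in each
raidz:

  # zpool create bigpool \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz c2t0d0 c2t1d0 c2t2d0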
/Christopher
Anyone?
On 9/14/07, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> I suspect it's probably not a good idea but I was wondering if someone
> could clarify the details.
>
> I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
> cause problems if I created
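ZFS itself doesn't object to mixing interfaces in a single raidz (a sketch
with hypothetical device names; the caveat is that writes will pace at the
slowest, here PATA, disk):

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c0d0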