> This controller card, you have turned off any raid functionality, yes? ZFS
> has total control of all discs, by itself? No hw raid intervening?
yes, it's an LSI 150-6, with the BIOS turned off, which turns it into
e SC846 I got has a single backplane for the SAS/SATA drives,
and one connector to the LSI card. Of course, for what I'm doing, that's
fine.
Paul
Oh, I think the SC846 I got was about $1100.
http://www.cdw.com/shop/search/results.aspx?key=sc846&searchscope=All&sr=1&a
> they mean.
>
Have you ruled out using 'zfs send' / 'zfs receive' for some reason? And
have you looked at rsync? I generally find rsync to be the easiest and
most reliable tool for replicating directory structures. You may want to
look at the GNU v
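As a minimal sketch of both approaches (host, pool, and path names below are placeholders, not anything from this thread):
# rsync -av --delete /tank/data/ remotehost:/backup/data/
or, to replicate a whole dataset at the block level:
# zfs snapshot tank/data@copy1
# zfs send tank/data@copy1 | ssh remotehost zfs receive -F backup/data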
FWIW, most enclosures like the ones we have been discussing lately have an
internal bay for a boot/OS drive--so you'll probably have all 12 hot-swap
bays available for data drives.
Paul
c7t2d0 ONLINE
> c7t4d0 ONLINE
> c7t3d0 ONLINE
> c7t0d0 OFFLINE
> c7t7d0 ONLINE
> c7t1d0 ONLINE
> c7t6d0 ONLINE
zpool online media c7t0d0
Paul
the fact that the
pool wasn't imported.
My guess is that if you move /etc/zfs/zpool.cache out of the way, then
reboot, ZFS will have to figure out what disks are out there again, find
your disk, and realize it is online.
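Roughly, using the pool name 'media' from the command above (and setting the cache file aside rather than deleting it):
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# reboot
and after the reboot, if the pool does not come back on its own:
# zpool import media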
Paul
to help
if I can. If nothing else, my wife makes a mean chocolate chip cookie!
Think a batch of those would help?
Paul Archer
x, portability is not a primary goal at this stage but
if you have portability patches they are welcome."
Unfortunately, I'm trying for a Solaris solution. I already had a Linux
solution (the 'inotify' I started out with).
Paul
Hi there, my first post (yay).
I have done much googling, and everywhere I look I see people saying "just
browse to https://localhost:6789 and it is there". Well, it's not. I am running
2009.06 (snv_111b), the current latest stable release, I believe.
This is my first major foray into the world o
As most of the ZFS recovery problems seem to stem from ZFS's own strict
insistence that data be exactly consistent with its corresponding checksum,
which of course is good when consistent data can be recovered from somewhere
but catastrophic otherwise, it seems clear that
zfs must s
Although I don't know for sure that most such errors are in fact single-bit
in nature, I can only surmise that, absent any detection to the contrary, they
most likely statistically are; since, with the exception of error-corrected
memory systems and/or check-summed communication channels, each transition of
data betw
Given that the checksum algorithms utilized in zfs are already fairly CPU
intensive, I can't help but wonder, if it's verified that a majority of
checksum inconsistency failures appear to be single-bit, whether it may be
advantageous to utilize some computationally simpler hybrid form of a
checksum/ha
Bob wrote:
> ... Given the many hardware safeguards against single (and several) bit
> errors,
> the most common data error will be large. For example, the disk drive may
> return data from the wrong sector.
- actually data integrity check bits as may exist within memory systems and/or
communi
Bob wrote:
> On Wed, 13 Aug 2008, paul wrote:
>
>> Shy extremely noisy hardware and/or literal hard failure, most
>> errors will most likely always be expressed as 1 bit out of some
>> very large N number of bits.
>
> This claim ignores the fact that most compute
Yes, Thank you.
I apologize for in effect suggesting that which was previously suggested in an
earlier thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046234.html
And discovering that the feature to attempt worst case single bit recovery had
apparently already been present in some form in
Kyle wrote:
> ... If I recall, the low priority was based on the perceived low demand
> for the feature in enterprise organizations. As I understood it, shrinking a
> pool is perceived as being a feature most desired by home/hobby/development
> users, and that enterprises mainly only grow their po
Hi,
Can a ZFS snapshot be performed on a zvol that is 100GB in size?
I have no problem with zvol snapshots at sizes of 1GB or 10GB.
Thanks,
Paul
I apologize for the lack of info in the previous post.
# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
gwvm_zpool  3.35T  3.16T   190G  94%  ONLINE  -
rpool        135G  27.5G   107G  20%  ONLINE  -
...
# zfs list
...
gwvm_zpool/gwpo19stby
First, I would like to thank everyone for the responses.
Second, here is the output for clarification:
# zfs list
...
NAME                    USED  AVAIL  REFER  MOUNTPOINT
gwvm_zpool/gwpo19stby   100G  2.49G    18K
d are
already there.
I was trapped by this some time ago, some libs were on /usr :/
Now I'm fine with UFS root on SVM mirror and /var on ZFS RAID 0+1
(mountpoint=legacy).
FYI I'm on SPARC.
Cheers, Paul
e="ssd" parent="scsi_vhci" sd_max_xfer_size=0x80;
(I have FC drives)
Where can I teach myself about the disadvantages? I searched for an article or
paper about "Why 128k blocksize is enough" written by the ZFS designer, but
could not find it...
Thx in adv
Hi,
I was first attracted to ZFS (and therefore OpenSolaris) because I thought that
ZFS allowed the use of different-sized disks in raidz pools without wasted
disk space. Further research has confirmed that this isn't possible--by default.
I have seen a little bit of documentation around using
ut being limited to a non-striped
mirror (i.e. vdev mirror a b mirror c d mirror e f)?
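Spelled out as a command, that layout would look something like this (disk names are placeholders); ZFS then stripes data dynamically across the three mirror vdevs:
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0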
Merry Xmas and Happy New Year, Paul
the FS compression
rev:max_revisions=integer|"none" (default)|"unlimited"
rev:min_revisions=integer|"none" (default)
rev:min_free=integer[specifier] # spec can be the usual b,k,M,G,T, %
or "none"
and for convenience a few file attributes like:
rev:max
a loopback mount, not a dataset, does what you want.
in zonecfg, do:
> add fs
> set special=/export/home
> set dir=/home
> set type=lofs
> add options rw,nodevices,noexec,nosetuid
> end
> verify
# man zonecfg
Make sure the local zones have the same userids as the global zone, best would
be to use
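One practical note (my addition, assuming the zone is already installed): after 'verify', also run 'commit' in zonecfg, then reboot the zone so the new lofs mount appears inside it:
# zoneadm -z myzone reboot
(the zone name 'myzone' is a placeholder).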
([EMAIL PROTECTED],min-10,min-20,...} and every hour ([EMAIL
PROTECTED],hourly-01,...}, delete these
snapshots prior to the send/receive operation.
Thanks in advance, Paul
re in two AIC JBODs connected via SAS.
- HBA is an LSI 3801E
- Server is 1RU SuperMicro Intel.
Any advice appreciated!
:-)
Paul Tetley
NearMap Pty Ltd
would advise giving up on my zpool on what was apparently a
transient error.
Regards,
Paul Tetley
On Fri, Mar 12, 2010 at 4:12 PM, Richard Elling wrote:
> On Mar 11, 2010, at 11:28 PM, Paul Tetley wrote:
> > Hi,
> >
> > My zpool is reporting unrecoverable errors with the metadat
OK I have a very large zfs snapshot I want to destroy. When I do this, the
system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with
128GB of memory. Now this may be more of a function of the IO device, but let's
say I don't care that this zfs destroy finishes quickly. I actual
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
bpool/backups/oracle_bac...@20100411-023130   479G      -   681G  -
bpool/backups/oracle_bac...@20100411-104428   515G      -   721G  -
bpool/backups/oracle_bac...@20100412-144700      0      -   734G  -
Thanks for any help,
Paul
Yesterday, Arne Jansen wrote:
Paul Archer wrote:
Because it's easier to change what I'm doing than what my DBA does, I
decided that I would put rsync back in place, but locally. So I changed
things so that the backups go to a staging FS, and then are rsync'ed
over to another
I haven't turned dedup off again yet, because I'd like to figure out how to
get past this problem.
Can anyone give me an idea of why the mounts might be hanging, or where to
look for clues? And has anyone had this problem with dedup and NFS before?
FWIW, the clients are a mix of Solar
this point, but I'd have to destroy
the snapshot first, so I'm in the same boat, yes?
TIA,
Paul
n try adding more ram to the system.
--
Thanks for the info. Unfortunately, I'm not sure I'll be able to add more RAM
any time soon. But I'm certainly going to try, as this is the primary backup
server for our Oracle databases.
Thanks again,
Paul
PS It's got 8GB right now. Y
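For what it's worth (my addition, and the pool name is a placeholder): zdb can show how big the dedup table actually is, which makes the RAM question less of a guess:
# zdb -DD tank
The entry counts it prints, at very roughly 300-plus bytes per entry in core, give a ballpark for how much ARC (or L2ARC) the DDT wants.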
3:08pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on
Yesterday, Erik Trimble wrote:
Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short perio
3:26pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
I realize that I did things in the wrong order. I should have removed the
oldest snapshot first, on to the newest, and then removed the data in the
FS itself.
For the problem in question, this is
On 04/26/10 11:54 PM, Yuri Vorobyev wrote:
Hello.
If anybody uses SSD for rpool more than half-year, can you post SMART
information about HostWrites attribute?
I want to see how SSD wear for system disk purposes.
I'd be happy to, exactly what commands shall I run?
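If smartmontools happens to be installed (it is not part of the base install, and the device path below is only an example), something like this dumps the vendor SMART attributes, which is where a host-writes counter would show up if the drive exposes one:
# smartctl -a -d sat /dev/rdsk/c5t0d0s0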
Revision: 1.01 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 10 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
bash-4.0$
If you can come up with a way I can get you more info, post a response.
Paul
000 000 000Old_age
Always - 0
#
Is all this data what you're looking for?
Paul
dozen VMs suddenly losing
their datastore? I'd love to hear from your experience.
Thanks,
-Paul Choi
Roy,
Thanks for the info. Yeah, the bug you mentioned is pretty critical. In
terms of SSDs, I have Intel X25-M for L2ARC and X25-E for ZIL. And the
host has 24G RAM. I'm just waiting for that "2010.03" release or
whatever we want to call it when it's released...
-Paul
example).
--
Paul Kraus
looking for
general recommendations and experiences. Thanks.
--
Paul Kraus
hould be much faster) ? The
first full might run afoul of the 2 hour snapshots (and deletions),
but I would not expect the incremental to. I am syncing about 20 TB of
data between sites this way every 4 hours over a 100 Mb link. I put
the snapshot management and the site to site replication in the
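As a sketch, each incremental cycle boils down to something like this (dataset, snapshot, and host names are placeholders):
# zfs snapshot tank/data@2010-04-14-0400
# zfs send -i tank/data@2010-04-14-0000 tank/data@2010-04-14-0400 | ssh remotesite zfs receive -F backup/data
Only the blocks changed since the previous snapshot cross the wire, which is what makes a frequent cycle workable over a 100 Mb link.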
iSCSI and are
looking to learn from other's experience as well as our own. For
example, is anyone using NFS with Oracle Cluster for HA storage for
VMs or are sites trusting to a single NFS server ?
--
Paul Kraus
Something to do with the fact that this is
a very old SATA card (LSI 150-6)?
This is driving me crazy. I finally got my zpool working under Solaris so
I'd have some stability, and I've got no performance.
Paul Archer
Friday, Paul Archer wrote:
Since I got my zfs pool working under
.0 0.3 0.33.33.1 9 14 c11d0
0.0    0.0    0.0    0.0   0.0   0.0    0.0    0.0    0   0  c12t0d0
Paul Archer
0d0
Try using 'format -e' on the drives, go into 'cache' then 'write-cache' and
display the current state. You can try to manually enable it from there.
I tried this, but the 'cache' menu item didn't show up.
d that I hadn't used before
because it's PCI-X, and won't fit on my current motherboard.)
I'll report back what I get with it tomorrow or the next day, depending on
the timing on the resilver.
Paul Archer
Yesterday, Paul Archer wrote:
I estimate another 10-15 hours before this disk is finished resilvering and
the zpool is OK again. At that time, I'm going to switch some hardware out
(I've got a newer and higher-end LSI card that I hadn't used before because
it's PCI-X,
8:30am, Paul Archer wrote:
And the hits just keep coming...
The resilver finished last night, so I rebooted the box as I had just upgraded
to the latest Dev build. Not only did the upgrade fail (love that instant
rollback!), but now the zpool won't come online:
r...@shebop:~# zpool i
ors
* 2930277101 accessible sectors
*
* Flags:
* 1: unmountable
* 10: read-only
*
*                          First        Sector         Last
* Partition  Tag  Flags    Sector       Count          Sector       Mount Directory
       0      17    00         34   2930277101     2930277134
Thanks for the help!
Paul Archer
In light of all the trouble I've been having with this zpool, I bought a
2TB drive, and I'm going to move all my data over to it, then destroy the
pool and start over.
Before I do that, what is the best way on an x86 system to format/label
the disks?
Tha
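For what it's worth (my note, not from the thread): if you give zpool create whole disks rather than slices, ZFS writes an EFI label on them itself, so no separate format/fdisk step is needed, e.g.:
# zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
A manual label really only matters if you want to use slices (for example a boot disk, which still needs an SMI label and a slice).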
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing with the old card.
Paul
Tom
11:04pm, Paul Archer wrote:
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing wit
a fair
bit of noise--but I think if you had it in a closet with some
soundproofing, it wouldn't be bad. And if you went with a smaller
enclosure (12 drives, for instance) that would help.
Paul
connector to the LSI card. Of course, for what I'm doing, that's
fine.
Paul
Oh, I think the SC846 I got was about $1100.
http://www.cdw.com/shop/search/results.aspx?key=sc846&searchscope=All&sr=1&Find+it.x=0&Find+it.y=0
One thing I forgot to mention: there is a wart w
Someone posted this link: https://slx.sun.com/1179275620 for a video on ZFS
deduplication. But the site isn't responding (which is typical of Sun, since
I've been dealing with them for the last 12 years).
Does anyone know of a mirror site, or if the video is on YouT
would
like to merge 2# and 3# to get more disk space in OpenSolaris.
Is it possible to eliminate the NTFS partition and add it to the ZFS partition?
Thanks in advance and regards,
Julio
Why don't you just format partition 2 to zfs, then add it to pool
Solaris2 or rpool, whatever it'
/data/images/incoming, and
a /data/images/incoming/100canon directory gets created, then the files under
that directory will automatically be monitored as well.
Thanks,
Paul Archer
s out to be the best way to go). I was
hoping that there'd be a script out there already, but I haven't turned up
anything yet.
Paul
e the "miniroot" from the
install media is only version 10.
This is not good.
Any advice? I am already thinking about installing U7 on my test box to
demonstrate. Glad I haven't rolled out u8 into production.
Thanks,
Paul
>>>
>>>
>>> On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons <paulrly...@gmail.com> wrote:
>>>
>>>When I boot off Solaris 10 U8 I get the error that pool is
>>>formatted using an incompatible version.
>>>
tself ? Is there
another benchmark I should be using ?
P.S. I posted an OpenOffice.org spreadsheet of my test results here:
http://www.ilk.org/~ppk/Geek/throughput-summary.ods
--
Paul Kraus
rall. A big SAMBA file server.
--
Paul Kraus
Richard,
First, thank you for the detailed reply ... (comments in line below)
On Tue, Nov 24, 2009 at 6:31 PM, Richard Elling
wrote:
> more below...
>
> On Nov 24, 2009, at 9:29 AM, Paul Kraus wrote:
>
>> On Tue, Nov 24, 2009 at 11:03 AM, Richard Elling
>> wro
--
Paul Kraus
Hi,
I'm just about to build a ZFS system as a home file server in raidz, but I
have one question - pre-empting the need to replace one of the drives if it
ever fails.
How on earth do you determine the actual physical drive that has failed ?
I've got the whole zpool status thing worked out, but h
c1t50060E8010037135d41 ONLINE
c1t50060E8010037135d45 ONLINE
c1t50060E8010037135d49 ONLINE
c1t50060E8010037135d53 ONLINE
c1t50060E8010037135d57 ONLINE
Thanks,
Paul
bash-4.0# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 10
stack size              (kbytes, -s) 10240
cpu time
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
ZFS file data from ::memstat just went do
he disks:
LABEL 3
failed to unpack label 3
Thanks,
Paul
Rather than hacking something like that, he could use a Disk on Module
(http://en.wikipedia.org/wiki/Disk_on_module) or something like
http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html
(which I suspect may be a DOM but I've not poked around sufficiently to see).
(because they eat 80% of disk space) it seems
to be quite challenging.
I've been following this thread. Would it be faster to do the reverse:
copy the 20% of the disk, then format, then move the 20% back?
Paul
On 01/24/10 04:10 AM, Lutz Schumann wrote:
Is there a way (besides format and causing heavy I/O on the device in question)
how to identify a drive. Is there some kind of SES (enclosure service) for this
??
(e.g. "and now let the red led blink")
Try /usr/bin/iostat -En
. I have been told by Oracle Support
(but have not yet confirmed) that just running the latest zfs code
(Solaris 10U10) will disable the aclmode property, even if you do not
upgrade the zpool version beyond 22. I expect to test this next week,
as we _need_ ACLs to work for our data.
--
Paul Kraus
On Wed, Oct 5, 2011 at 5:56 PM, Paul B. Henson wrote:
> On Thu, Sep 29, 2011 at 07:13:40PM -0700, Paul Kraus wrote:
>
>> Another potential difference ... I have been told by Oracle Support
>> (but have not yet confirmed) that just running the latest zfs code
>> (Solaris
c3t5000C5001A55F7A6d0 ONLINE 0 0 0 114K repaired
c3t5000C5001A5347FEd0 ONLINE 0 0 0
spares
c3t5000C5001A485C88d0    AVAIL
c3t5000C50026A0EC78d0    AVAIL
errors: No known data errors
--
Paul Kraus
not a substitute for a real online rebalance,
but it gets the job done (if you can take the data offline, I do it a
small chunk at a time).
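Assuming 'it' here is the usual copy-off-and-back trick (my reading, since the start of the message is cut off), a chunk-at-a-time version looks roughly like:
# zfs snapshot tank/data@move
# zfs send tank/data@move | zfs receive tank/data.new
# zfs destroy -r tank/data
# zfs rename tank/data.new tank/data
Rewriting the data this way lets ZFS reallocate the blocks across all of the current top-level vdevs.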
--
Paul Kraus
I have seen too many
horror stories on this list that I just avoid it).
--
Paul Kraus
ss. So far, ZFS is one of the
technologies that has not let me down. Of course, in some cases it has
taken weeks if not months to resolve or work around a "bug" in the
code, but in all cases the data was recovered.
--
Paul Kraus
operation rewrote the data that had been corrupted on the failing
component. No corrupt data was ever presented to the application.
--
Paul Kraus
>
> Can you elaborate #3? In what situation will it happen?
>
>
> Thanks.
>
> Fred
>
--
Paul Kraus
e
as it does not try to change the data).
This was originally reported to me as a problem with ZFS, SAMBA,
or the ACLs I had set up. It is amazing how much _changing_ of data
goes on with no knowledge by the end users.
--
Paul Kraus
On Sat, Oct 22, 2011 at 12:36 AM, Paul Kraus wrote:
> Recently someone posted to this list of that _exact_ situation, they loaded
> an OS to a pair of drives while a pair of different drives containing an OS
> were still attached. The zpool on the first pair ended up not being abl
--
Paul Kraus
ort use only according to the documentation),
so I created RAID0 sets of 2 drives each and ZFS sees 6 x 1TB LUNs.
ZFS then provides my redundancy and data integrity.
--
Paul Kraus
I had not yet posted a summary as we
are still working through the overall problem (we tripped over this on
the replica, now we are working on it on the production copy).
--
Paul Kraus
On Mon, Oct 31, 2011 at 9:07 AM, Jim Klimov wrote:
> 2011-10-31 16:28, Paul Kraus wrote:
>> Oracle has provided a loaner system with 128 GB RAM and it took 75 GB of
>> RAM
>> to destroy the problem snapshot). I had not yet posted a summary as we
>> are still working
test server, so any ideas to try and help me understand greatly
> appreciated.
What do real benchmarks (iozone, filebench, orion) show ?
--
Paul Kraus
to (in fact, in the early days of Google Mail
I did just that as a backup).
--
Paul Kraus
apdir=hidden " to set the parameter.
--
Paul Kraus
uch above 0 or is growing.
Keep in mind that any type of hardware RAID should report back 0
for both to the OS.
--
Paul Kraus
On Fri, Nov 11, 2011 at 1:39 PM, Linder, Doug
wrote:
> Paul Kraus wrote:
>
>>> My main reasons for using zfs are pretty basic compared to some here
>>
>> What are they ? (the reasons for using ZFS)
>
> All technical reasons aside, I can tell you one huge reason I
t of address bits? Or is it something that offers functionality
> that other filesystems don't have? ;-)
The stories I have heard indicate that the name came after the TLA.
"zfs" came first and "zettabyte" later.
--
Paul Kraus