First time user of Solaris and ZFS
I have Solaris 10 installed on the primary IDE drive of my motherboard. I also
have a 4-disk RAIDZ on my SATA connections. I set up a successful
1.5TB ZFS server with all disks operational.
Well ... I was trying out something new and I borked my [...]
not that hard to admin) - Would I be able to join the two RAIDZs together into
one big volume? And would it survive one disk failure?
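For concreteness, the kind of thing I'm imagining (pool and device names
invented):
# zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
As I understand the docs, that stripes a second raidz vdev alongside the
first; each raidz vdev can still lose one disk, so a single-disk failure is
survivable (even two, if they happen to land in different vdevs).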
/Christopher
Hmm.. Thanks for the input. I want to have the most space but still need RAID
in some way to have redundancy.
I've added it up and found this:
ggendel - your suggestion makes me "lose" 1TB - lose 250GB x 2 for the RAID-1
ones and then 500GB from a 3x500GB = 1TB
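Spelling out my arithmetic, in case I've miscounted (assuming the 3x500GB
group is a raidz):
  2 x 250GB lost to the two RAID-1 pairs  = 500GB
  1 x 500GB lost to parity in the 3x500GB = 500GB
  total capacity given up                 = 1TB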
bonwick - your first suggestion
I'm new to the list so this is probably a noob question: is this forum part of
a mailing list or something? I keep getting some answers to my posts in this
thread by email as well as some here, but it seems that those answers/posts on
email aren't posted on this forum? Or do I just get a copy
Would the nv_sata driver also be used on the nForce 590 SLI? I found the Asus
M2N32 WS PRO at my hardware shop, which has 9 internal SATA connectors.
I have ProFTPD successfully installed and running, though I would like to
virtually mount some directories from my ZFS configurations. In a previous
ProFTPD install on Ubuntu, I had an entry like this in my /etc/fstab file:
/HDD ID/directory /home/FTP-shared/information vfat bind 0 0
Thou
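For later readers: the usual Solaris stand-in for a Linux bind mount is a
loopback (lofs) mount. A sketch with invented paths:
# mount -F lofs /tank/information /home/FTP-shared/information
or, to survive reboot, an /etc/vfstab line:
/tank/information - /home/FTP-shared/information lofs - yes -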
I went ahead and bought an M9N-SLI motherboard with 6 SATA controllers and also
a Promise TX4 (4x SATA300, non-RAID) PCI controller. Anyone know if the TX4 is
supported in OpenSolaris? If it's as badly supported as the (crappy) Sil
chipsets, I'm better off with OpenFiler (Linux), I think.
ers well.
We are actively designing our soon-to-be-available support plans. Your voice
will be heard; please email directly at [...] for requests, comments, and/or
questions.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
and an overwhelming attention to detail.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
market.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
to/from removable media.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
We have a pair of opensolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
# zfs destroy -f z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried:
Un
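For later readers: on pool version 18 a "dataset is busy" snapshot is often
pinned by a user hold or a dependent clone. One way to check (snapshot name
abbreviated as above):
# zfs holds z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00
# zfs list -o name,origin -r z/Users
A hold is dropped with "zfs release <tag> <snapshot>"; a clone must be
promoted or destroyed before the snapshot can go.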
> No Slogs as I haven't seen a compliant SSD drive yet.
As the architect of the DDRdrive X1, I can state categorically that the X1
correctly implements the SCSI SYNCHRONIZE CACHE (flush cache)
command.
Christopher George
Founder/CTO
www.ddrdrive.com
SSDs that fully comply with the POSIX requirements for synchronous write
transactions and do not lose transactions on a host power failure, we are
competitively priced at $1,995 SRP.
Christopher George
Founder/CTO
www.ddrdrive.com
as not power protecting on-board volatile caches. The X25-E does implement
the ATA FLUSH CACHE command, but does not have the required power
protection to avoid transaction (data) loss.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
. The same principles
and benefits of multi-core processing apply here with multiple controllers.
The performance potential of NVRAM based SSDs dictates moving away
from a single/separate HBA based controller.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
Here is another very recent blog post from ConstantThinking:
http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained
Very well done, a highly recommended read.
Christopher George
Founder/CTO
www.ddrdrive.com
are a
ZIL accelerator well matched to the 24/7 demands of enterprise use.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
All,
We're going to start testing ZFS and I had a question about Top Level
Devices (TLDs). In Sun's class, they specifically said not to use more than
9 TLDs due to performance concerns. Our storage admins make LUNs roughly
15G in size -- so how would we make a large pool (1TB) if we're limited
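Back-of-the-envelope, taking the 9-TLD guidance at face value:
  1TB at 15GB per LUN     ~= 69 LUNs
  69 LUNs across 9 TLDs   ~= 8 LUNs per top-level vdev
  9 x (8-LUN raidz2)       = 9 x 6 x 15GB ~= 810GB usable
so a 1TB pool wouldn't quite fit without more or larger LUNs.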
> been updated for RAIDZ-3 yet, but you
> will get some ideas about how to configure a redundant configuration
> of many disks, here:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
>
> See ZFS Configuration Example (x4500 with raidz2)
>
> Cindy
Bob - thanks, that makes sense.
The classroom book refers to "top-level virtual devices," which were referred
to as TLDs (Top-Level Devices) throughout the class. As you noted, those
are either the base LUN, mirror, raidz, or raidz2.
So there's no limit to the number of TLDs/vdevs we can have, the
advancement of Open Storage and explore the far-reaching potential of
ZFS-based Hybrid Storage Pools?
If so, please send an inquiry to "zfs at ddrdrive dot com".
The drive for speed,
Christopher George
Founder/CTO
www.ddrdrive.com
*** Special thanks goes out to SUN employees Garrett D'
ovides an optional (user-configured) backup/restore feature.
Christopher George
Founder/CTO
www.ddrdrive.com
r HBAs,
which do require an x4 or x8 PCIe connection.
Very appreciative of the feedback!
Christopher George
Founder/CTO
www.ddrdrive.com
oduct cannot be supported by any of the BBUs currently found on RAID
controllers. It would require either a substantial increase in energy density
or a decrease in packaging volume, both of which incur additional risks.
> Interesting product though!
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
because it is a proven and industry
standard method of enterprise class data backup.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
ree to disagree. I respect your point of view, and do
agree strongly that Li-Ion batteries play a critical and highly valued role in
many industries.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
t for the DC jack to be
unpopulated so that an internal power source could be utilized. We will
make this modification available to any customer who asks.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
> Personally I'd say it's a must. Most DC's I operate in wouldn't tolerate
> having a card separately wired from the chassis power.
May I ask the list if this is a hard requirement for anyone else?
Please email me directly "cgeorge at ddrdrive dot com".
Thanks,
rs (non-clustered) an
additional option.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
On Jan 3, 2012, at 9:35 AM, Svavar Örn Eysteinsson wrote:
> Hello.
>
> I'm planning to replace my old Apple XRAID and XSAN Filesystem (1.4.2) Fibre
> environment.
> This setup only hosted AFP and CIFS for a large advertising agency.
> Now that Fibre is damn expensive and for one thing, we do not ne
rotection" at:
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html
Intel's brief also clears up a prior controversy about what types of
data are actually cached; per the brief, it's both user and system
data!
Best regards,
Christopher George
?
Yes! Customers using Illumos-derived distros make up a
good portion of our customer base.
Thanks,
Christopher George
www.ddrdrive.com
ta" to mean the SSD's internal meta data...
I'm curious, any other interpretations?
Thanks,
Chris
----
Christopher George
cgeorge at ddrdrive.com
http://www.ddrdrive.com/
ing more than to continue to
design and offer our unique ZIL accelerators as an alternative to Flash
only SSDs and hopefully help (in some small way) the success of ZFS.
Thanks again for taking the time to share your thoughts!
The drive for speed,
Chris
----
Christopher George
we target (enterprise customers).
The beauty of ZFS is the flexibility of its implementation. By supporting
multiple log device types and configurations, it ultimately enables a broad
range of performance capabilities!
Best regards,
Chris
----
Christopher George
On Sep 18, 2012, at 10:40 AM, Dan Swartzendruber wrote:
> On 9/18/2012 10:31 AM, Eugen Leitl wrote:
>> I'm currently thinking about rolling a variant of
>> http://www.napp-it.org/napp-it/all-in-one/index_en.html
>> with remote backup (via snapshot and send) to 2-3
>> other (HP N40L-based) zfs boxes for produc
On Feb 15, 2013, at 11:08 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
<opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
Anybody using maczfs / ZEVO? Have good or bad things to say, in terms of
reliability, performance, features?
My main reason for asking is
SSD does *not* suffer the same fate, as its performance is not bound by
and does not vary with partition (mis)alignment.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
en
though we delineate the storage media used depending on host
power condition. The X1 exclusively uses DRAM for all IO
processing (host is on) and then Flash for permanent non-volatility
(host is off).
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
nt (or aggregate) write pattern trends to random. Over
50% random with a pool containing just 5 filesystems. This makes
intuitive sense knowing each filesystem has its own ZIL and they
all share the dedicated log (ZIL Accelerator).
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> Any opinions? stories? other models I missed?
I was a speaker at the recent OpenStorage Summit,
my presentation "ZIL Accelerator: DRAM or Flash?"
might be of interest:
http://www.ddrdrive.com/zil_accelerator.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
1 Express!
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
IOPS / $1,995) = 19.40
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
the hour time-limit.
The reason the graphs are done in a timeline fashion is so you can look
at any point in the one-hour series to see how each device performs.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> TRIM was putback in July... You're telling me it didn't make it into S11
> Express?
Without top level ZFS TRIM support, SATA Framework (sata.c) support
has no bearing on this discussion.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
is
drive inactivity has no effect on the eventual outcome. So with either a
bursty or sustained workload the end result is always the same: dramatic
write IOPS degradation after unpackaging or secure erase of the tested
Flash-based SSDs.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
he size of the resultant binaries?
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
are valid, the resulting degradation
will vary depending on the controller used.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
e.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> got it attached to a UPS with very conservative shut-down timing. Or
> are there other host failures aside from power a ZIL would be
> vulnerable to (system hard-locks?)?
Correct, a system hard-lock is another example...
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
y" than sync=disabled.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
ng to perform a Secure Erase every hour, day, or even
week really be the most cost-effective use of an administrator's time?
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
g in a larger context:
http://www.oug.org/files/presentations/zfszilsynchronicity.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
Above excerpts written by an OCZ-employed thread moderator (Tony).
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
aster,
> assuming that cache disabled on a rotating drive is roughly 100
> IOPS with queueing), that it'll still provide a huge performance boost
> when used as a ZIL in their system.
I agree 100%. I never intended to insinuate otherwise :-)
Best regards,
Christopher George
Founder/CTO
ing.com/Home/scripts-and-programs-1/zilstat
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
SATA cable, see slides 15-17.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
is immune to TRIM support status and
thus unaffected. Actually, TRIM support would only add
unnecessary overhead to the DDRdrive X1's device driver.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
slice instead of the entire device will
automatically disable the on-board write cache.
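Illustrated with an invented device name:
# zpool create tank c1t2d0     (whole device: ZFS can enable the write cache)
# zpool create tank c1t2d0s0   (slice: the on-board write cache stays off)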
Christopher George
Founder / CTO
http://www.ddrdrive.com/
Evening all, I'm new to Solaris but after drooling over ZFS for ages I finally
took the plunge.
First off I had 2x1TB HDDs in RAID-1 XFS format using mdadm, so using an
OpenSolaris VM image I transferred one side of the mirror to the other in ZFS
(using rsync, and it took 3 days!).
So with a 1 disk
about this I'd love to hear them!
Thanks,
Christopher Mera
d9f61e94 genunix:lookupnameat+52 (807b51c, 0, 1, 0, d)
d9f61ef4 genunix:cstatat_getvp+15d (ffd19553, 807b51c, )
d9f61f54 genunix:cstatat64+68 (ffd19553, 807b51c, )
d9f61f84 genunix:stat64+1c (807b51c, 8047b50, 8)
From: lori@sun.com [mailto:lori@sun.com]
Sent: Monday, February 23, 2009 1:17 PM
To: C
Either way - it would be ideal to quiesce the system before a snapshot anyway,
no?
My next question now is what particular steps would be recommended to quiesce a
system for the clone/zfs stream that I'm looking to achieve...
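Roughly the sequence I'm imagining (service names are placeholders):
# svcadm disable appserver       (stop anything writing to the datasets)
# sync
# zfs snapshot -r rpool@golden
# zfs send -R rpool@golden > /backup/golden.zfs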
All your help is appreciated.
Regards,
Christopher
Thanks for your responses...
Brent:
And I'd have to do that for every system that I'd want to clone? There
must be a simpler way... perhaps I'm missing something.
Regards,
Chris
How is it that flash archives can avoid these headaches?
Ultimately I'm doing this to clone ZFS root systems because at the moment Flash
Archives are UFS only.
-Original Message-
From: Brent Jones [mailto:br...@servuhome.net]
Sent: Tuesday, February 24, 2009 2:49 PM
To: Christ
Chris
From: lori@sun.com [mailto:lori@sun.com]
Sent: Tuesday, February 24, 2009 3:13 PM
To: Christopher Mera
Cc: Brent Jones; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs streams & data corruption
On 02/24/09 12:57, Christopher Mera wrote:
How is it that f
ment then boots fine.
-Original Message-
From: Nicolas Williams [mailto:nicolas.willi...@sun.com]
Sent: Tuesday, February 24, 2009 5:43 PM
To: Christopher Mera
Cc: lori@sun.com; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs streams & data corruption
On Mon, Feb 23,
x/g'
whole_disk=0
children[1]
        type='disk'
        id=1
        guid=
        path='/dev/dsk/c2d0s6'
        devid='id1,c...@xxx/g'
        whole_disk=0
< --- The remain
[EMAIL PROTECTED],0
Specify disk (enter its number):
Since they are labeled as "pci-ide", is it safe to assume they are
using the ide/ata driver? If so, is there a performance gain when
using a SATA driver?
On 5/21/07, Carson Gaspar <[EMAIL PROTECTED]> wrote:
Christopher Gibbs wrote:
>
We use TSM to back up our Messaging Server mailstores on ZFS.
Tomas is right, though: the TSM client doesn't recognize ZFS, so
whichever method you use, you just have to specify the path manually.
We kick it off with cron but Tomas' method looks a little nicer.
- Chris
On 7/9/07, Tomas Ögren <[EMAI
--
Christopher Gibbs
Email / LDAP Administrator
Web Integration & Programming
Abilene Christian University
ct_id=139#
On 8/6/07, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> No idea... mine are all 250GB and the only thing I could find on it is
> this blurb from their product description:
> "Breaks the 137GB barrier! Supports various brands of large capacity
> Serial AT
--
Christopher Gibbs
Email / LDAP Administrator
Web Integration & Programming
Abilene Christian University
> http://www.pcsilenzioso.it/forum/showthread.php?t=2397
I suspect it's probably not a good idea but I was wondering if someone
could clarify the details.
I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
cause problems if I created a raidz1 pool across all 5 drives?
I know the PATA drive is slower so would it slow the access across th
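For concreteness, the pool I have in mind (device names invented, c0d0
being the PATA disk):
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c0d0
Since raidz stripes every write across all members, I assume the whole
vdev would run at the pace of the slowest disk?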
ning success.
>
> - Eric
>
> On Mon, Sep 17, 2007 at 01:22:40PM -0500, Christopher Gibbs wrote:
> > Anyone?
> >
> > On 9/14/07, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> > > I suspect it's probably not a good idea but I was wondering if someone
Anyone?
On 9/14/07, Christopher Gibbs <[EMAIL PROTECTED]> wrote:
> I suspect it's probably not a good idea but I was wondering if someone
> could clarify the details.
>
> I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
> cause problems if I created
--
Christopher Gibbs
Email / LDAP Administrator
Web Integration & Programming
Abilene Christian University
Hi, I'm running snv_78 on a dual-core 64-bit x86 system with two 500GB USB
drives mirrored into one pool.
I did this (intending to set the rdonly flag after I copy my data):
zfs create pond/read-only
mkdir /pond/read-only/copytest
cp -rp /pond/photos/* /pond/read-only/copytest/
After the copy is c
s
michael schuster wrote:
> Christopher Gorski wrote:
>> Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 500GB usb
>> drives mirrored into one pool.
>>
>> I did this (intending to set the rdonly flag after I copy my data):
>>
>> zfs cr
/copytestsame
The original Samba copy from another PC to /pond/photos copied
everything correctly.
-Chris
Christopher Gorski wrote:
> I'm missing actual files.
>
> I did this a second time, with the exact same result. It appears that
> the missing files in each copy are the s
> On Thu, Jan 24, 2008 at 11:06:13PM -0500, Christopher Gorski wrote:
>> I'm missing actual files.
>>
>>> Christopher Gorski wrote:
>>>> zfs create pond/read-only
>>>> mkdir /pond/read-only/copytest
>>>> cp -rp /pond/photos/* /pond
Nicolas Williams wrote:
> Are there so many files that the glob expansion results in too large an
> argument list for cp?
There are only four subdirs in /pond/photos:
# ls /pond/photos
2006-02-15 2006-06-09 2007-12-20 unsorted
michael schuster wrote:
>
> I assume you've assured that there's enough space in /pond ...
>
> can you try
>
> (cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
I tried it, and it worked. The new tree is an exact copy of the old one.
-Chris
Christopher Gorski wrote:
> "unsorted/photosbackup/laptopd600/[D]/cag2b/eujpg/103-0398_IMG.JPG" is a
> file that is always missing in the new tree.
Oops, I meant:
"unsorted/drive-452a/[E]/drive/archives/seconddisk_20nov2002/eujpg/103-0398_IMG.JPG"
is alwa
Robert Milkowski wrote:
> Hello Christopher,
>
> Friday, January 25, 2008, 5:37:58 AM, you wrote:
>
> CG> michael schuster wrote:
>>> I assume you've assured that there's enough space in /pond ...
>>>
>>> can you try
>>>
Robert Milkowski wrote:
>
>
> As Joerg suggested - please check getdents() - remember to use truss
> -v getdents so you should see all directory listings.
>
> I would check both getdents and open - so if it appears in getdents
> but is not opened later on...
>
>
I ran the copy procedure with
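The kind of invocation Robert describes would be something like (output
path invented):
# truss -t getdents,open -v getdents -o /tmp/cp.truss \
    cp -rp /pond/photos/* /pond/read-only/copytest/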
Christopher Gorski wrote:
> Robert Milkowski wrote:
>>
>> As Joerg suggested - please check getdents() - remember to use truss
>> -v getdents so you should see all directory listings.
>>
>> I would check both getdents and open - so if it appears in getde
Christopher Gorski wrote:
> Christopher Gorski wrote:
>> Robert Milkowski wrote:
>>> As Joerg suggested - please check getdents() - remember to use truss
>>> -v getdents so you should see all directory listings.
>>>
>>> I would check both get
Carson Gaspar wrote:
> Christopher Gorski wrote:
>
>> I noticed that the first calls in the "cp" and "ls" to getdents() return
>> similar file lists, with the same values.
>>
>> However, in the "ls", it makes a second call to getde
Joerg Schilling wrote:
> "Will Murnane" <[EMAIL PROTECTED]> wrote:
>
>> On Jan 30, 2008 1:34 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
>>> If this is Sun's cp, file a bug. It's failing to notice that it didn't
>>> provide a large enough buffer to getdents(), so it only got partial results.
>>>
nov2002/eujpg
and
# ls
/tmp/pond/testdir/pond/photos/unsorted/drive-452a/\[E\]/drive/archives/seconddisk_20nov2002/eujpg
103-0398_IMG.JPG and other files should be missing.
I filed a bug report, but I can't find the link to it.
This seems to work on zfs or ufs.
- --
Christopher Gorski
If a machine has very obvious memory corruption due to bad ram, is there
anything beyond scrub that can verify the integrity of a pool? Am I
correct in assuming that scrub will fix checksum errors, but not
metadata errors?
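For context, the check in question (pool name invented):
# zpool scrub tank
# zpool status -v tank    (repaired counts, plus any files with permanent errors)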
- --
Christopher Gorski
I have a hot spare that was part of my zpool but is no longer
connected to the system. I can run the zpool remove command and it
returns fine but doesn't seem to do anything.
I have tried adding and removing spares that are connected to the
system and works properly. Is zpool remove failing becaus
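The sequence, with names invented:
# zpool remove tank c2t3d0    (returns cleanly, but...)
# zpool status tank           (the spare is still listed)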
It should be there... try starting the webconsole service.
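On Solaris 10 / OpenSolaris that should be something like:
# svcadm enable svc:/system/webconsole:console
# svcs webconsole             (confirm it shows "online")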
On 2/14/08, Tim Thomas <[EMAIL PROTECTED]> wrote:
>
> Hi
>
> I just loaded up opensolaris on an X4500 (Thumper) and tried to connect to
> the ZFS GUI (https://x:6789)...and it is not there.
>
> Is this not part of Open Solaris...or do
By default the webconsole only listens locally but it looks like
you've already set it to listen to external TCP requests.
Whenever I've changed this property I had to do a full disable and
then enable for the change to take effect. Might be worth a try. : )
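Roughly the sequence I mean:
# svccfg -s svc:/system/webconsole setprop options/tcp_listen=true
# svcadm disable svc:/system/webconsole:console
# svcadm enable svc:/system/webconsole:console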
On 2/14/08, Michael Schuster <[EMAIL P