On Sat, Feb 27, 2010 at 6:21 PM, Yariv Graf wrote:
>
> Hi,
> It seems I can't import a single external HDD.
>
> pool: HD
> id: 8012429942861870778
> state: UNAVAIL
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
> d
Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly, and I noticed a considerable
imbalance in both free space and write operations. The pool is
currently feeding a tape backup while receiving a large filesystem.
Is this
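A per-vdev view makes this kind of imbalance easy to spot. A sketch (the pool name `tank` is a placeholder):

```shell
# Show per-vdev capacity and I/O, sampled every 5 seconds.
# -v breaks the statistics down by vdev, so a raidz2 group that is
# nearly full (and therefore receiving fewer writes) stands out
# against its siblings in the stripe.
zpool iostat -v tank 5
```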
Hi,
Thanks for the reply.
I can arrange the lost SSD, but I already formatted it.
Second, even though the external HDD is, for instance, /dev/rdsk/c16t0d0, when I try to
debug using zdb
it shows me another “path”:
path='/dev/dsk/c11t0d0s0'
devid='id1,s...@tst31500341as2ge
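The path recorded in the label is whatever device name the vdev had at its last import, so it can legitimately differ from the current /dev/rdsk name. For reference, the labels can be read directly (device name here is just an example):

```shell
# Dump all four ZFS labels from the device; each label records the
# pool GUID plus the path and devid the vdev had when last imported.
zdb -l /dev/dsk/c16t0d0s0
```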
Hello list,
it is damn difficult to destroy ZFS labels :)
I try to remove the vdev labels of disks that were used in a pool before. According to
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf
I created a script that removes the first 512 KB and the last 512 KB, h
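The approach can be sketched like this, demonstrated on a scratch file rather than a real disk (the 64 MB size and /tmp path are arbitrary; point it at a /dev/rdsk device only if you are certain the disk is expendable):

```shell
# ZFS keeps four 256 KB labels per vdev: L0 and L1 in the first 512 KB,
# L2 and L3 in the last 512 KB (per the on-disk format document).
VDEV=/tmp/fake_vdev
dd if=/dev/urandom of="$VDEV" bs=1024k count=64 2>/dev/null  # scratch "disk"
SIZE=$(wc -c < "$VDEV")
# Wipe the first 512 KB (labels L0 and L1)...
dd if=/dev/zero of="$VDEV" bs=512k count=1 conv=notrunc 2>/dev/null
# ...and the last 512 KB (labels L2 and L3), without truncating the file.
dd if=/dev/zero of="$VDEV" bs=512k count=1 conv=notrunc \
   seek=$(( SIZE / 524288 - 1 )) 2>/dev/null
echo "size after wipe: $(wc -c < "$VDEV")"
```

Note that if the vdev was partitioned, the labels live inside the slice, not at the edges of the whole disk, which is one reason wipes like this sometimes appear not to take.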
hi all
I have a server running snv_131 and the scrub is very slow. I have a cron job
for starting it every week and now it's been running for a while, and it's
very, very slow
scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go
The configuration is listed below, consisting of thre
"Paul B. Henson" writes:
> On Fri, 26 Feb 2010, David Dyer-Bennet wrote:
>> I think of using ACLs to extend extra access beyond what the
>> permission bits grant. Are you talking about using them to prevent
>> things that the permission bits appear to grant? Because so long as
>> they're only gr
Speaking of long boot times, I've heard that IBM Power servers can take 90
minutes or more to boot.
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Feb 28, 2010, at 5:05 AM, Lutz Schumann wrote:
> Hello list,
>
> it is damn difficult to destroy ZFS labels :)
Some people seem to have a knack of doing it accidentally :-)
> I try to remove the vdev labels of disks that were used in a pool before. According to
> http://hub.opensolaris.org/bin/down
Andrew Gabriel wrote:
Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly, and I noticed a considerable
imbalance in both free space and write operations. The pool is
currently feeding a tape backup while receiving a larg
On Sun, Feb 28, 2010 at 1:12 PM, Richard Elling wrote:
> On Feb 28, 2010, at 5:05 AM, Lutz Schumann wrote:
> > Hello list,
> >
> > it is damn difficult to destroy ZFS labels :)
>
> Some people seem to have a knack of doing it accidentally :-)
>
> > I try to remove the vdev labels of disks used in
On Sun, Feb 28, 2010 at 2:06 PM, Yariv Graf wrote:
> Hi,
> Thanks for the reply.
> I can arrange the lost SSD, but I already formatted it.
> Second, even though the external HDD is, for instance, /dev/rdsk/c16t0d0, when I try
> to debug using zdb
> it shows me another “path”:
> path='/dev/dsk/c11t0d0s0'
>
I am running my root pool on a 60 GB SLC SSD (OCZ Agility EX). At present, my
rpool/ROOT has no compression, and no deduplication. I was wondering about
whether it would be a good idea, from a performance and data integrity
standpoint, to use one, the other, or both, on the root pool. My current
On Sat, 27 Feb 2010, Roy Sigurd Karlsbakk wrote:
hi all
I have a server running snv_131 and the scrub is very slow. I have a
cron job for starting it every week and now it's been running for a
while, and it's very, very slow
Have you checked the output of 'iostat -xe' to see if there are
u
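For reference, the kind of check being suggested looks like this (sample interval is arbitrary):

```shell
# -x: extended per-device statistics; -e: error counters.
# A scrub that crawls often traces back to a single disk showing a
# high %b (busy), a large asvc_t (service time), or nonzero errors.
iostat -xe 5
```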
On Sun, 28 Feb 2010, valrh...@gmail.com wrote:
backup server, I should be able to compress by about a factor of
1.5x. If I enable both on the rpool filesystem, then clone the boot
environment, that should enable it on the new BE (which would be a
child of rpool/ROOT), right?
If by 'clone' yo
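Assuming "clone" here means creating a new boot environment, the sequence would look roughly like this (a sketch; the BE name is a placeholder, and only blocks written after the property change get compressed):

```shell
# Enable compression on the BE container; datasets created or cloned
# under it inherit the property.
zfs set compression=on rpool/ROOT
# Create and activate a new BE. The clone starts out sharing blocks
# with the old BE, so existing data stays uncompressed until rewritten.
beadm create compressed-be
beadm activate compressed-be
```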
I'm finally at the point of adding an SSD to my system, so I can get
reasonable dedup performance.
The question here goes to sizing of the SSD for use as an L2ARC device.
Noodling around, I found Richard's old posting on ARC->L2ARC memory
requirements, which is mighty helpful in making sure I d
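The arithmetic from that posting reduces to a one-liner. A sketch, assuming roughly 200 bytes of ARC header per L2ARC-resident record (the figure commonly cited on this list; the exact struct size varies by build) and a small 8 KB worst-case recordsize:

```shell
l2arc_bytes=$((100 * 1024 * 1024 * 1024))  # hypothetical 100 GB L2ARC device
recordsize=$((8 * 1024))                   # worst case: 8 KB records
hdr_bytes=200                              # assumed ARC header per L2ARC record
ram_mb=$(( l2arc_bytes / recordsize * hdr_bytes / 1024 / 1024 ))
echo "ARC overhead to index L2ARC: ${ram_mb} MB"
```

With large 128 KB records the overhead shrinks sixteenfold, which is why dedup workloads (lots of small blocks) are the painful case.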
Hi guys, on my home server I have a variety of directories under a single
pool/filesystem, Cloud.
Things like
cloud/movies -> 4TB
cloud/music -> 100Gig
cloud/winbackups -> 1TB
cloud/data -> 1TB
etc.
After doing some reading, I see recommendations to have separate filesystems to
improve p
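That recommendation translates to one dataset per top-level directory, so each can carry its own properties, quotas, and snapshot schedule. A sketch using the names above (pool name `cloud` assumed):

```shell
# One filesystem per directory; each inherits defaults from cloud
# but can be tuned and snapshotted independently.
zfs create cloud/movies
zfs create cloud/music
zfs create cloud/winbackups
zfs create cloud/data
# Example of a per-dataset property: backups often compress well,
# while already-compressed media would gain nothing.
zfs set compression=on cloud/winbackups
```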
I have one host with Solaris 10 update 8 linked to an STK6540 array (the
host type set to Traffic Manager). The host has four paths: two linked to
controller A and the other two linked to controller B. When I disabled MPxIO
and rebooted the host, I checked the zpool status, and the testpoo
On 02/28/10 15:58, valrh...@gmail.com wrote:
> Also, I don't have the numbers to prove this, but it seems to me
> that the actual size of rpool/ROOT has grown substantially since I
> did a clean install of build 129a (I'm now at build 133). Without
> compression, either, that was around 24 GB, but
tomwaters wrote:
Hi guys, on my home server I have a variety of directories under a single
pool/filesystem, Cloud.
Things like
cloud/movies -> 4TB
cloud/music -> 100Gig
cloud/winbackups -> 1TB
cloud/data -> 1TB
etc.
After doing some reading, I see recommendations to have separate files
If anyone has specific SSD drives they would recommend for ZIL use, would you
mind a quick response to the list? My understanding is that I need to look for:
1) Respects cache flush commands (which is my real question... the answer to this
isn't very obvious in most cases)
2) Fast on small writes
It s
On Feb 28, 2010, at 11:51 PM, rwali...@washdcmail.com wrote:
> And what won't work are:
>
> - Intel X-25M
> - Most/all of the consumer drives priced beneath the X-25M
>
> all because they use capacitors to get write speed w/o respecting cache flush
> requests.
Sorry, meant to say "they use c
> Is there anything that is safe to use as a ZIL, faster than the
> Mtron but more appropriate for home than a Stec?
ACARD ANS-9010, as mentioned several times here recently (also sold as
hyperdrive5)
--
Dan.
On Sun, Feb 28, 2010 at 07:36:30PM -0800, Bill Sommerfeld wrote:
> To avoid this in the future, set PKG_CACHEDIR in your environment to
> point at a filesystem which isn't cloned by beadm -- something outside
> rpool/ROOT, for instance.
+1 - I've just used a dataset mounted at /var/pkg/downloa
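For reference, the suggested setup looks roughly like this (dataset name and mountpoint are examples):

```shell
# Keep the pkg(5) download cache outside rpool/ROOT so that beadm
# clones don't carry copies of it in every boot environment.
zfs create -o mountpoint=/var/pkg/download rpool/pkgcache
# Point pkg at it (e.g. in your profile or the service environment).
export PKG_CACHEDIR=/var/pkg/download
```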
On Mar 1, 2010, at 12:05 AM, Daniel Carosone wrote:
>> Is there anything that is safe to use as a ZIL, faster than the
>> Mtron but more appropriate for home than a Stec?
>
> ACARD ANS-9010, as mentioned several times here recently (also sold as
> hyperdrive5)
You are right. I saw that in a
rwali...@washdcmail.com wrote:
On Feb 28, 2010, at 11:51 PM, rwali...@washdcmail.com wrote:
And what won't work are:
- Intel X-25M
- Most/all of the consumer drives priced beneath the X-25M
all because they use capacitors to get write speed w/o respecting cache flush requests.
Sorr
Hi Cyril,
Thanks for the response.
In simple words, this is what has been done:
1- zpool import HD (external HDD[single drive])
2- zpool add HD log c0t4d0 (SSD drive)
3- play with it a bit.
4- zpool export HD
5- reinstall opensolaris on SSD drive (ex slog above).
Is there any chance to recover the HD zp
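A pool whose separate log device is gone can, on sufficiently recent builds (pool version 19 and later, if I recall correctly), be imported without it; on older builds the import fails outright. A hedged sketch:

```shell
# -m tells zpool to import even though the log vdev is missing; any
# ZIL records that were uncommitted on the lost SSD are gone for good.
zpool import -m HD
# Then drop the dangling log vdev from the pool configuration
# (device name as it was originally added):
zpool remove HD c0t4d0
```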
On Feb 28, 2010, at 7:11 PM, Erik Trimble wrote:
> I'm finally at the point of adding an SSD to my system, so I can get
> reasonable dedup performance.
>
> The question here goes to sizing of the SSD for use as an L2ARC device.
>
> Noodling around, I found Richard's old posting on ARC->L2ARC memo
Richard Elling wrote:
On Feb 28, 2010, at 7:11 PM, Erik Trimble wrote:
I'm finally at the point of adding an SSD to my system, so I can get reasonable
dedup performance.
The question here goes to sizing of the SSD for use as an L2ARC device.
Noodling around, I found Richard's old posting on