Richard,
Thanks for this; it explains what I am seeing. I am using snapshots because I
am replicating the data to other servers (via zfs send/receive). Is there
another way to prevent this behaviour and still use snapshots? Or do I need to
create these volumes as thin provisioned to get around it?
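(In case it helps: the space being held back is, as I understand it, the refreservation that a plain "zfs create -V" places on the volume, so that a full overwrite still fits even with snapshots present. A sparse volume skips that, or the reservation can be dropped afterwards; the pool/volume names below are only examples.)
  # create a thin-provisioned (sparse) 1 TB volume: no refreservation is set
  zfs create -s -V 1T tank/vol1
  # or drop the reservation on an existing volume
  zfs set refreservation=none tank/vol1
(The usual trade-off applies: with no reservation, writes to the volume can fail if the pool fills up.)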
Guys,
Forgive my ignorance on this, but I am wondering how ZFS uses space when a
volume is created with the -V parameter and then shared as an iSCSI LUN.
For example, I created 5 volumes, 2 * 1TB in size and 3 * 2TB in size, but the
space usage appears to be in addition to this size specification.
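(A quick way to see where the space on each volume is actually going is the per-dataset space accounting; the pool/volume names here are only examples.)
  zfs get volsize,used,referenced,refreservation,usedbysnapshots tank/vol1
  zfs list -o space -r tank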
Darren J Moffat wrote:
Len Zaifman wrote:
We are looking at adding to our storage. We would like ~20TB-30 TB.
we have ~ 200 nodes (1100 cores) to feed data to using nfs, and we
are looking for high reliability, good performance (up to at least
350 MBytes /second over 10 GigE connection) and
Chris Du wrote:
> You can get the E2 version of the chassis that supports multipathing,
> but you have to use dual-port SAS disks. Or you can use a separate SAS
> HBA to connect to each separate JBOD chassis and do a mirror over the 2 chassis.
> The backplane is just a pass-through fabric, which is very unlikely to fail.
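(To make the second option concrete: each mirror vdev pairs one disk from each chassis, so losing an entire chassis only takes out one side of every mirror. The controller/disk names below are invented for the sketch.)
  # c1t*d0 sit behind the first HBA/chassis, c2t*d0 behind the second
  zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0 mirror c1t2d0 c2t2d0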
> In that case, instead of rewriting the part of my code which handles
> quota creation/updating/checking, I would need to completely rewrite
> the quota logic. :-(
So what do you do just now with UFS? Is it a separate filesystem for
the mail directory? If so, it really shouldn't be that big of a deal.
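(For what it's worth, the closest ZFS substitute for a per-directory mail quota is one filesystem per mailbox with a quota, or refquota, set on it; filesystems are cheap to create in bulk. Dataset names and sizes below are just an illustration.)
  zfs create tank/mail
  zfs create -o quota=500M tank/mail/alice
  zfs create -o quota=500M tank/mail/bob
  # change a limit later with:
  zfs set quota=1G tank/mail/alice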
On Tue, Nov 17, 2009 at 10:32 AM, Ed Plese wrote:
> You can reclaim this space with the SDelete utility from Microsoft.
> With the -c option it will zero any free space on the volume. For
> example:
>
> C:\>sdelete -c C:
>
> I've tested this with xVM and with compression enabled for the zvol,
> b
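(A sketch of the host-side half of that trick, with made-up pool/volume names: once compression is on, the runs of zeros that sdelete writes compress away to almost nothing, so the zvol's used space drops back down.)
  # on the Solaris host exporting the zvol
  zfs set compression=on tank/winvol
  # run "sdelete -c C:" inside the Windows guest, then compare before/after:
  zfs get used,referenced,compressratio tank/winvol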
On 2009-Nov-19 02:57:31 +0300, Victor Latushkin wrote:
>> all the cabling, Solaris panic'd before reaching single user.
>
> Do you have a crash dump of this panic saved?
Yes. It was provided to Sun Support.
> Option -F is a new one, added with pool recovery support, so it'll be
> available in build 1
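(For anyone else hitting this: once you are on a build with the recovery support, the rollback is driven from zpool import, and a dry run first is the cautious order. The pool name is only an example.)
  # dry run: reports whether the pool can be recovered, without importing it
  zpool import -nF tank
  # actual recovery import, discarding the last few transactions if necessary
  zpool import -F tank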
Peter Jeremy wrote:
I have a zpool on a JBOD SE3320 that I was using for data with Solaris
10 (the root/usr/var filesystems were all UFS). Unfortunately, we had
a bit of a mixup with SCSI cabling and I believe that we created a
SCSI target clash. The system was unloaded and nothing happened unt
Tim Cook wrote:
> Also, I never said anything about setting it to panic. I'm not sure why
> you can't set it to continue while alerting you that a vdev has failed?
Ah, right, thanks for the reminder Tim!
Now, I'd asked about this some months ago but didn't get an answer, so
forgive me for asking again.
I don't wish to hijack, but along the same lines of comparison, is there
anyone able to compare the 7200 to the HP LeftHand series? I'll start
another thread if this goes too far astray.
thx
jake
Scott Meilicke wrote:
> I second the use of zilstat - very useful, especially if you don't want to mess
> around with adding a log device and then having to destroy the pool if you
> don't want the log device any longer.
Log devices can be removed as of zpool version 19.
--
Darren J Moffat
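(Which makes experimenting fairly low-risk on a current build; adding and later removing a slog is roughly the following, with an example device name.)
  # attach an SSD as a separate log device
  zpool add tank log c3t0d0
  # and take it back out once you've finished measuring
  zpool remove tank c3t0d0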
I'm seeing a performance anomaly where opening a large file (but doing
*no* I/O to it) seems to cause (or correlates with) a significant
performance hit on a mirrored ZFS filesystem. Counterintuitively, if I
set zfs_prefetch_disable (i.e. turn prefetching off), I don't see the
performance degradation. It doesn't make sense to me
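(For anyone who wants to reproduce this: the tunable can be flipped on a live system with mdb, or set persistently in /etc/system; treat the lines below as a sketch of the standard incantation.)
  # on the fly, takes effect immediately:
  echo zfs_prefetch_disable/W0t1 | mdb -kw
  # persistently, add to /etc/system and reboot:
  set zfs:zfs_prefetch_disable = 1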
We are looking at adding to our storage. We would like ~20-30 TB.
We have ~200 nodes (1100 cores) to feed data to using NFS, and we are
looking for high reliability, good performance (up to at least 350 MBytes
/second over a 10 GigE connection) and large capacity.
For the X45xx (aka thumper)
Hi guys, after reading the mailings yesterday I noticed someone was after
upgrading to zfs v21 (deduplication). I'm after the same. I installed
osol-dev-127 earlier, which comes with v19, and then followed the instructions on
http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date
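(In case it saves someone a search, the intended sequence is roughly the following; the pool name is an example, and dedup needs the pool itself upgraded to v21 after booting the new build.)
  pkg image-update
  # reboot into the new boot environment, then:
  zpool upgrade -v        # list the pool versions this build supports
  zpool upgrade rpool     # one-way upgrade of the pool to the newest supported version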
And I like the cut of your jib, my young fellow me lad!
There is a new PSARC in b126(?) that allows rolling back to the latest functioning
uberblock. Maybe it can help you?
Consider using a modern mail system. The mail system can handle quotas
much better than a file system.
http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
-- richard
On Nov 18, 2009, at 2:20 AM, Dushyanth wrote:
> Just to clarify: does iSCSI traffic from a Solaris iSCSI initiator
> to a third-party target go through the ZIL?
ZFS doesn't know what is behind a block device. So if you configure your pool
to use iSCSI devices, then it will use them.
To measure ZIL activity
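(One rough way to do that: watch the pool's vdevs, including any log device, while the iSCSI writes run, or use Richard's zilstat script. The pool name is made up, and zilstat's arguments are assumed to be the usual interval/count pair.)
  # per-vdev I/O refreshed every second; a busy log device means the ZIL is being used
  zpool iostat -v tank 1
  # or, with the zilstat.ksh script copied onto the host
  ./zilstat.ksh 1 10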
Hi Darren,
thanks for the reply.
E.g., I have mail quota implemented as a per-directory quota. I know this
can be solved in another way, but still, I would have to change many
things in my system in order to make it work. And this is quite an easy
implementation of mail quota. Now I'm using UFS and UFS quotas
Hi all,
Not sure if you missed my last response or what, but yes, the pool is
set to wait because it's one of many pools on this prod server and we
can't just panic everything because one pool goes away.
I just need a way to reset one pool that's stuck.
If the architecture of zfs ca
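(Not an answer to the architectural question, but for completeness: the knobs involved are the per-pool failmode property and zpool clear once the missing device is reachable again. The pool name is only an example.)
  zpool get failmode tank
  # return errors to applications instead of blocking when the pool loses its devices
  zpool set failmode=continue tank
  # after the missing LUN/device comes back, resume I/O on the suspended pool
  zpool clear tank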
Jozef Hamar wrote:
> Hi all,
> I cannot find any instructions on how to set a file quota (i.e. maximum
> number of files per filesystem/directory) or a directory quota (maximum size
> that files in a particular directory can consume) in ZFS.
That is because it doesn't exist.
Hi all,
I cannot find any instructions on how to set a file quota (i.e. the
maximum number of files per filesystem/directory) or a directory quota
(the maximum size that the files in a particular directory can consume) in ZFS.
I understand ZFS has no support for this. Am I right? If I am, are
there any plans to add it?
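(As far as I know there is still no file-count quota, but besides the per-dataset quota/refquota properties, recent builds also have per-user and per-group space quotas on a filesystem, which cover some of the same ground. The names below are invented.)
  zfs set userquota@jozef=2G tank/export/mail
  zfs get userquota@jozef tank/export/mail
  zfs userspace tank/export/mail     # per-user usage and quota on that filesystem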
> the SSD's black-box-filesystem is fragmented?
Not very sure - it's a Transcend TS8GSSD25S 2.5" SLC SSD that I could find in
our store immediately. I also have an ACARD ANS-9010 DRAM device (http://bit.ly/3cQ4fK)
that I am experimenting with.
The Intel X25-E should arrive soon. Are there any other recommendations?
Hi,
Thanks for all the inputs. I did run some postmark tests without the slog and with
it, and did not see any performance benefits on the iSCSI volume.
I will repeat them again and post the results here.
Also please note that the Solaris box is the initiator and the target is an
Infortrend S16-R1130 iSCSI