It's simply a shell grokking issue: when you allow your (l)users to
self-name your files you end up with spaces etc. in the filenames, which
breaks shell arguments. In this case the '[E]' is breaking your command
line argument parsing. We have the same issue in our photos tree. We
have to use non
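For example, square brackets are glob characters and spaces split
arguments, so a name containing '[E]' has to be quoted or escaped. A
quick sketch (the photo path below is made up):

    # Unquoted, the shell splits on the spaces and tries to expand '[E]':
    ls -l /photos/Summer [E] 2007.jpg      # becomes three arguments
    # Quote the whole name so it is passed through literally:
    ls -l '/photos/Summer [E] 2007.jpg'
    # Or escape each special character:
    ls -l /photos/Summer\ \[E\]\ 2007.jpg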
Hey folks,
We're at Solaris Nevada snv_64a SPARC. We have a number of iSCSI
volumes shared out with different sizes (1GB, 50GB, 10GB etc.) and ACLs
to limit which windoze machines can access what.
We've had issues in the past with zpool devices being removed, which
resulted in corrupted zpools with
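For context, this is roughly how those volumes get created and shared
(the pool and volume names are made up):

    # Create fixed-size zvols and export them over iSCSI; shareiscsi
    # is the mechanism in the Nevada builds of this era:
    zfs create -V 1g  tank/win-vol01
    zfs create -V 50g tank/win-vol02
    zfs create -V 10g tank/win-vol03
    zfs set shareiscsi=on tank/win-vol01
    zfs set shareiscsi=on tank/win-vol02
    zfs set shareiscsi=on tank/win-vol03
    # The per-initiator ACLs (which Windows hosts may see which target)
    # are then managed through the iSCSI target admin tool, iscsitadm.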
I don't have time to RTFS, so I was curious if there is a guide on
using zdb, and does it do any writing of the ZFS information? The binary
has a lot of options and it isn't clear what they all do.
I'm looking for any tools that let you do low-level fiddling with things
such as broken zpools.
ta,
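A few commonly used zdb invocations, in case they help (the pool and
device names are placeholders; as far as I know zdb only reads the
on-disk state and doesn't modify it):

    zdb -C tank                 # print the pool configuration
    zdb -u tank                 # dump the active uberblock
    zdb -l /dev/dsk/c1t0d0s0    # dump the four ZFS labels on a device
    zdb -d tank                 # list datasets and object summaries
    zdb -b tank                 # traverse all blocks, checking for leaks
    zdb -e tank                 # examine an exported (not imported) pool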
We had a 'windoze' zpool on two internal disks. It had a number of
zvols which were iSCSI'd out to a few hosts. This has been in place and
running for some months.
Recently someone added some external SE6140 LUNs to the zpool as well,
and last Friday those LUNs were deleted from the SE6140 itself, as
Economics for one.
We run a number of testing environments which mimic the production one.
But we don't want to spend $750,000 on EMC storage each time when
something costing $200,000 will do the job we need.
At the moment we have over 100TB on four SE6140s and we're very happy
with the soluti
Please don't do this as a rule; it makes for horrendous support issues
and breaks a lot of health check tools.
>> Actually, you can use the existing namespace for this. By default,
>> ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
>> set up your own space, say /dev/mykno
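For reference, the mechanics being described look roughly like this
(the directory and device names are invented for illustration):

    # Populate a private device directory with symlinks to the real devices:
    mkdir /dev/mydisks
    ln -s /dev/dsk/c4t600A0B8000111111d0s0 /dev/mydisks/oradata01
    ln -s /dev/dsk/c4t600A0B8000222222d0s0 /dev/mydisks/oradata02
    # Build the pool from the friendly names:
    zpool create tank /dev/mydisks/oradata01 /dev/mydisks/oradata02
    # Later imports need to be told where to look for the devices:
    zpool import -d /dev/mydisks tank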
I'm going to go out on a limb here and say you have an A5000 with the
1.6" disks in it. Because of their design (all drives see each other on
both the A and B loops), it's possible for one disk that is behaving
badly to take over the FC-AL loop and require human intervention. You
can physic
- One large filesystem
- 70TB
- No downtime growth/expansion
Since it seems that you have several 6140's under ZFS control ... any
problems/comments for me?
Thank you.
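Roughly the kind of thing I have in mind, for the sake of discussion
(the 6140 LUN device names below are placeholders):

    # One pool built from 6140 LUNs, presented as a single large filesystem:
    zpool create bigpool \
        c6t600A0B8000111111d0 c6t600A0B8000222222d0 \
        c6t600A0B8000333333d0 c6t600A0B8000444444d0
    # Growing it later is an online operation -- just add more LUNs:
    zpool add bigpool c6t600A0B8000555555d0
    zfs list bigpool            # capacity grows with no downtime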
On 7/19/07, Mark Ashley <[EMAIL PROTECTED]> wrote:
> Hi folks,
> One of the things I'm really hanging out fo
Hi folks,
One of the things I'm really hanging out for is the ability to evacuate
the data from a zpool device onto the other devices and then remove the
device, without mirroring it first etc. The zpool would of course shrink
in size according to how much space you just took away.
Our situati
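As far as I know that isn't possible in snv_64a, where zpool remove
only handles hot spares. Much later ZFS releases did add top-level vdev
evacuation, which looks roughly like this (pool and device names are
placeholders):

    # Evacuate a top-level vdev onto the remaining devices, then drop it:
    zpool remove tank c2t3d0
    # Watch the evacuation progress, then confirm the smaller pool size:
    zpool status tank
    zpool list tank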