On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote:
> Well basically I am trying to analyze giving up 1/7th of my space for
> the off chance that one drive fails during resilvering. I just don't
> know what kind of time to expect for a resilver. I'm sure it also
> depends on the build of Nevada.
Right.
Another difference to be aware of is that ZFS reports the total
space consumed, including space for metadata -- typically around 1%.
Traditional filesystems like ufs and ext2 preallocate metadata and
don't count it as using space. I don't know how reiserfs does its
bookkeeping, but I would
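If it helps to see that difference side by side, you can compare what ZFS
itself reports with what df shows (dataset and mount point names here are
just examples):

  % zfs get used,referenced,available tank/home
  % df -k /tank/home

The 'used' value from ZFS includes metadata and any snapshots/descendants,
so it will usually run a bit higher than what a traditional df-style
accounting would suggest.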
"https:/localhost:6789/zfs"
Tim wrote:
On Mon, Mar 30, 2009 at 4:58 PM, Blake wrote:
Can you list the exact command you used to launch the control panel?
I'm not sure what tool you are referring to.
2009/3/25 Howard Huntley
Blake writes:
> There is a bug where the automatic snapshot service dies if there are
> multiple boot environments. Do you have these? I think you can check
> with Update Manager.
Yeah I have them but due to another bug beadm can't destroy/remove
any.
Update Manager/BE Manager can't delete them.
There is a bug where the automatic snapshot service dies if there are
multiple boot environments. Do you have these? I think you can check
with Update Manager.
On Mon, Mar 30, 2009 at 7:20 PM, Harry Putnam wrote:
>> you need zfs list -t snapshot
>>
>> by default, snapshots aren't shown in zfs list anymore, hence the -t option
Michael Shadle wrote:
On Mon, Mar 30, 2009 at 4:00 PM, David Magda wrote:
There is a background process in ZFS (see "scrub" in zpool(1M)) that goes
through and makes sure all the checksums match reality (and corrects things
if it can). It's reading all the data, but unlike hardware RAID arrays
On Mar 30, 2009, at 19:13, Michael Shadle wrote:
Normally it seems like raid5 is perfectly fine for a workload like this
but maybe I'd sleep better at night knowing I could have 2 disks fail,
but the odds of that are pretty slim. I've never had 2 disks fail, and
if I did, the whole array is probably
> you need zfs list -t snapshot
>
> by default, snapshots aren't shown in zfs list anymore, hence the -t option
>
Yikes, I've got dozens of the things... I monkeyed around a bit with
timeslider but thought I canceled out whatever settings I'd messed
with.
Frequent and hourly both are way too often.
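If frequent and hourly really are more than you want, you should be able to
just disable those two instances of the auto-snapshot service (service names
from memory, so double-check against what svcs shows on your build):

  % svcs -a | grep auto-snapshot
  % pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
  % pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly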
On Mon, Mar 30, 2009 at 4:00 PM, David Magda wrote:
> There is a background process in ZFS (see "scrub" in zpool(1M)) that goes
> through and makes sure all the checksums match reality (and corrects things
> if it can). It's reading all the data, but unlike hardware RAID arrays, it
> only checks t
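For anyone following along, kicking off a scrub by hand and watching it is
just this (pool name is an example):

  % pfexec zpool scrub tank      # start a scrub of pool 'tank'
  % zpool status -v tank         # shows scrub progress and any files with errors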
On Mar 30, 2009, at 13:48, Michael Shadle wrote:
My only question is how long it takes to resilver... Supposedly
the entire array has to be checked which means 6x1.5tb. It has a
quad core CPU that's basically dedicated to it. Anyone have any
estimates?
Sounds like it is a lot slower than a normal raid5 style rebuild.
Tim,
You are correct, it does sound like the Java WebConsole ZFS
Administration tool. The patch IDs below look to fix the two issues I
am aware of: one was the registration of the tool with the WebConsole page,
the second was a Java JAR file bug that displayed only a white screen
when the ZFS administration tool was opened.
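If it is the Java Web Console tool, the usual sanity checks are that the
webconsole SMF service is online and then pointing a browser at port 6789
(FMRI and URL from memory, so verify on your build):

  % svcs svc:/system/webconsole:console
  % pfexec svcadm restart svc:/system/webconsole:console
  # then browse to https://<hostname>:6789/zfs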
On Mon, Mar 30, 2009 at 4:58 PM, Blake wrote:
> Can you list the exact command you used to launch the control panel?
> I'm not sure what tool you are referring to.
>
> 2009/3/25 Howard Huntley :
> > I once installed ZFS on my home Sun Blade 100 and it worked fine on the
> > Sun Blade 100 running Solaris 10.
Can you list the exact command you used to launch the control panel?
I'm not sure what tool you are referring to.
2009/3/25 Howard Huntley :
> I once installed ZFS on my home Sun Blade 100 and it worked fine on the
> Sun Blade 100 running Solaris 10. I reinstalled the Solaris 10 09 version and cre
| > FWIW, it looks like someone at Sun saw the complaints in this thread and/or
| > (more likely) had enough customer complaints. It appears you can buy the
| > tray independently now. Although, it's $500 (so they're apparently made
| > entirely of diamond and platinum). In Sun marketing's de
you need zfs list -t snapshot
by default, snapshots aren't shown in zfs list anymore, hence the -t option
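In other words, something like this (pool name is just an example):

  % zfs list -t snapshot -r tank
  % zfs list -t snapshot -o name,used,creation -r tank   # with sizes and creation times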
On Mon, Mar 30, 2009 at 11:41 AM, Harry Putnam wrote:
> Richard Elling writes:
>
>> It can go very fine, though you'll need to set the parameters yourself,
>> if you want to use different
no idea how many of these there are:
http://www.google.com/products?q=570-1182&hl=en&show=li
2009/3/30 Tim :
>
>
> On Mon, Mar 30, 2009 at 3:56 AM, Mike Futerko wrote:
>>
>> Hello
>>
>> > 1) Dual IO module option
>> > 2) Multipath support
>> > 3) Zone support [multi host connecting to same JBOD
Sounds like the best way - I was about to suggest that anyway :)
On Mon, Mar 30, 2009 at 3:03 PM, Harry Putnam wrote:
> Blake writes:
>> You are seeing snapshots from Time-Slider's automatic snapshot service.
>>
>> If you have a copy of each of these 58 files elsewhere, I suppose you
>> could re
Do you have more than one Boot Environment?
pfexec beadm list
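For example (the BE name below is made up; substitute whatever beadm list
shows):

  % pfexec beadm list                     # list boot environments, shows which is active
  % pfexec beadm destroy opensolaris-1    # remove an old, inactive BE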
On Mon, Mar 30, 2009 at 1:33 PM, Harry Putnam wrote:
> After messing around with Timeslider... I started getting errors and
> the frequent and hourly services were failing, causing the service to
> be put into maintenance status.
>
Hello,
New here, and I'm not sure if this is the correct mailing list to post this
question or not.
Anyway, we are having some questions about multi-protocol (CIFS/NFS) access to
the same files, specifically when not using AD or LDAP.
Summary:
Accessing the same folder from CIFS or NFS when wo
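Not sure this is exactly what you're after, but for a workgroup-mode setup
(no AD/LDAP) the basic moving parts are sharing the dataset over both
protocols and then mapping Windows identities to local Unix users; the
dataset, share and user names below are made-up examples:

  % pfexec zfs set sharenfs=on tank/shared
  % pfexec zfs set sharesmb=name=shared tank/shared
  % pfexec idmap add winname:joe unixuser:joe    # map a Windows user to a local Unix account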
Blake writes:
> You are seeing snapshots from Time-Slider's automatic snapshot service.
>
> If you have a copy of each of these 58 files elsewhere, I suppose you
> could re-copy them to the mirror and then do 'zpool clear [poolname]'
> to reset the error counter.
>
Thanks... I did try copying from
I've run into this too... I believe the issue is that the block
size/allocation unit size in ZFS is much larger than the default size
on older filesystems (ufs, ext2, ext3).
The result is that if you have lots of files smaller than the
block size, they take up more total space on the filesystem.
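If you want to see what you're working with, the dataset's block size
ceiling is the recordsize property, and you can compare that against what a
directory actually occupies on disk (dataset and path names are examples):

  % zfs get recordsize tank/data     # default is 128K
  % du -sk /tank/data                # KB actually allocated on disk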
Mattias Pantzare writes:
>> Nice, I see by default it appears gnu/bin is put ahead of /bin in
>> $PATH, or maybe some of my meddling did it, but running the Solaris df
>> I see several more, and confusing, entries too:
>>
>> /system/contract (ctfs ): 0 blocks 2147483609 files
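If the GNU-vs-Solaris tool mixup is the annoyance, you can either call the
two by full path or put /usr/bin ahead of /usr/gnu/bin in PATH (paths are
the OpenSolaris defaults as I remember them):

  % /usr/gnu/bin/df -h      # GNU df
  % /usr/bin/df -k          # Solaris df
  % export PATH=/usr/bin:/usr/gnu/bin:$PATH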
On Sat, Mar 28, 2009 at 6:57 PM, Ian Collins wrote:
> Please stop top-posting to threads where everyone else is normal-posting, it
> mucks up the flow of the thread.
>
> Thanks,
>
> --
> Ian.
Apologies - top-posting seems to be the Gmail default (or I set it so
long ago that I forgot it was there).
You are seeing snapshots from Time-Slider's automatic snapshot service.
If you have a copy of each of these 58 files elsewhere, I suppose you
could re-copy them to the mirror and then do 'zpool clear [poolname]'
to reset the error counter.
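In other words, roughly (pool name is an example):

  % zpool status -v tank     # lists the files with permanent errors
  # ...re-copy those files from a good source...
  % pfexec zpool clear tank  # reset the error counters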
On Sun, Mar 29, 2009 at 10:28 PM, Harry Putnam wrote:
I rsynced an 11gb pile of data from a remote linux machine to a zfs
filesystem with compression turned on.
The data appears to have grown in size rather than having been compressed.
Many, even most, of the files are in formats that are already compressed,
such as mpg, jpg, avi, and several others. But also ma
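One quick thing to check is the compression ratio ZFS thinks it is getting
on that filesystem (dataset name is an example); for already-compressed
mpg/jpg/avi data it will typically be close to 1.00x:

  % zfs get compression,compressratio tank/backup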
>
>> A useful way to obtain the mount point for a directory is with the
>> df' command. Just do 'df .' while in a directory to see where its
>> filesystem mount point is:
>>
>> % df .
>> Filesystem 1K-blocks Used Available Use% Mounted on
>> Sun_2540/home/bfriesen
>>
My only question is how long it takes to resilver... Supposedly the
entire array has to be checked which means 6x1.5tb. It has a quad core
CPU that's basically dedicated to it. Anyone have any estimates?
Sounds like it is a lot slower than a normal raid5 style rebuild. Is
there a way to
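For what it's worth, the resilver reports its own progress and a time
estimate while it runs, so you can just watch it (pool name is an example):

  % zpool status tank    # shows 'resilver in progress', percent done and estimated time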
I don't understand why disabling ZFS prefetch solved this
problem. The test case was a single threaded sequential write, followed
by a single threaded sequential read.
Anyone listening on ZFS have an explanation as to why disabling
prefetch solved Roland's very poor bandwidth problem?
My only th
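For context, the way prefetch usually gets disabled on these builds is the
zfs_prefetch_disable tunable, either in /etc/system (takes effect after a
reboot) or live with mdb; this is from memory, so test on a scratch box first:

  # in /etc/system:
  set zfs:zfs_prefetch_disable = 1

  # or on a running system, as root:
  echo zfs_prefetch_disable/W0t1 | mdb -kw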
After messing around with Timeslider... I started getting errors and
the frequent and hourly services were failing, causing the service to
be put into maintenance status.
Not really being sure what to do with it I first tried
`svcadm restart' on them. But they went right back into maintenance
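When they land in maintenance the service log usually says why; the rough
sequence I'd try is below (log path and abbreviated FMRI from memory, adjust
to whatever svcs -xv reports):

  % svcs -xv                                  # shows why a service is in maintenance and its log file
  % tail /var/svc/log/system-filesystem-zfs-auto-snapshot:frequent.log
  % pfexec svcadm clear auto-snapshot:frequent   # only after fixing the underlying problem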
Cross posting to zfs-discuss.
By my math, here's what you're getting:
4.6MB/sec on writes to ZFS.
2.2MB/sec on reads from ZFS.
90MB/sec on read from block device.
What is c0t1d0 - I assume it's a hardware RAID LUN,
but how many disks, and what type of LUN?
What version of Solaris (cat /etc/release)?
On Mon, Mar 30, 2009 at 3:56 AM, Mike Futerko wrote:
> Hello
>
> > 1) Dual IO module option
> > 2) Multipath support
> > 3) Zone support [multi host connecting to same JBOD or same set of JBOD's
> > connected in series. ]
>
> This sounds interesting - where I can read more about connecting two
>
Richard Elling writes:
> It can go very fine, though you'll need to set the parameters yourself,
> if you want to use different settings. A few weeks ago, I posted a way
> to see the settings, which the time slider admin tool won't show. There
> is a diminishing return for exposing such complex
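If it's the schedule/retention settings being referred to, they live as SMF
properties on each auto-snapshot instance, so something along these lines
should dump them (the 'zfs' property group name is my assumption):

  % svcprop svc:/system/filesystem/zfs/auto-snapshot:frequent | grep zfs/
  % svccfg -s auto-snapshot:frequent listprop zfs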
Carson Gaspar wrote:
Darren J Moffat wrote:
...
Agreed, but other than pattern based I can't at the moment think of a
nice way to pass all the names over the /dev/zfs ioctl call while
maintaining the fact that it is pretty much all fixed size.
I'm not saying passing a list of names over the ioctl
Hello
> 1) Dual IO module option
> 2) Multipath support
> 3) Zone support [multi host connecting to same JBOD or same set of JBOD's
> connected in series. ]
This sounds interesting - where can I read more about connecting two
hosts to the same J4200, etc.?
Thanks
Mike