Hi
I just loaded up OpenSolaris on an X4500 (Thumper) and tried to connect
to the ZFS GUI (https://x:6789)...and it is not there.
Is this not part of OpenSolaris, or do I just need to work out how to
switch it on?
Thanks
Tim
--
Tim Thomas
Staff Engineer
I am running snv_82, a fresh install: is there anything else that I
should enable/disable?
I have seen something about "Secure by Default" in OpenSolaris, but
thought that "netservices open" opened everything up.
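(As an aside: one quick way to check whether Secure by Default has left the
console listening only on loopback is to query the service property directly
- a minimal check, assuming the standard svc:/system/webconsole FMRI:

# svcprop -p options/tcp_listen svc:/system/webconsole

If that prints "false", the console is only accepting connections from
localhost.)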
Hmmm...
Tim
Christopher Gibbs said the following:
Are you sure the service is actually running? Does "svcs -a | grep
webconsole" say "online"?
Yes, it is online
--
Tim Thomas
Staff Engineer
Storage
Systems Product Group
Sun Microsystems, Inc.
Internal E
A reboot did it.
Tim Thomas said the following:
Thanks Chris
Someone else suggested that to me, but it still does not work.
I also tried...
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
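(For reference: a refresh alone may not be enough for webconsole to pick the
property up - a sketch of the full sequence, ending with a service restart
rather than the reboot mentioned above:

# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
# svcadm restart svc:/system/webconsole
)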
do some
manipulation of the byte strings *when comparing* names, but the on-disk
name should be untouched from what the user requested.
-tim
> Bye,
> Roland
Adding the -p option
to cp appeared to be a workaround at one point, but that was a red
herring; the failure with cp can be intermittent, as it involves
scandir() scribbling somewhat randomly over memory.
-tim
> Thanks
> Sachin Palav
Hi,
I'm interested in the overhead of making, cloning, and destroying snapshots.
It sounds like the cost for all of these is low, but how low?
For example, could I make snapshots of a system every 5 seconds? Every second?
More often than that?
I'm primarily interested in the time/computation
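(One rough way to answer this empirically - a sketch, with a hypothetical
pool/filesystem name:

# ptime zfs snapshot tank/test@bench1
# ptime zfs destroy tank/test@bench1

ptime(1) reports the real/user/sys time of each operation; repeating it in a
loop would show whether per-second snapshots are feasible on your hardware.)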
Hi,
I have a pool /zfs01 with two sub-filesystems, /zfs01/rep1 and /zfs01/rep2. I
used "zfs share" to make all of these mountable over NFS, but clients have
to mount either rep1 or rep2 individually. If I try to mount /zfs01, it shows
directories for rep1 and rep2, but none of their contents
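(For reference, a sketch of the sort of setup described, with hypothetical
names - each ZFS filesystem is a separate NFS share, so a client has to
mount each one individually:

server# zfs set sharenfs=on zfs01
client# mount -F nfs server:/zfs01/rep1 /mnt/rep1
client# mount -F nfs server:/zfs01/rep2 /mnt/rep2
)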
zfs root though ]
cheers,
tim
cpio?
cpio was updated to support ZFS ACLs as part of "PSARC 2002/240 ZFS", if
that's what you're referring to?
cheers,
tim
Hi Joerg,
On Thu, 2008-06-05 at 15:09 +0200, Joerg Schilling wrote:
> Tim Foster <[EMAIL PROTECTED]> wrote:
> > On Thu, 2008-06-05 at 13:47 +0200, Joerg Schilling wrote:
> > > Isn't flar based on the outdated cpio?
> >
> > cpio was updated to support ZFS
just haven't had a chance to dig
into your code yet. I wanted to spend time following up properly,
rather than giving you a quick (but probably un-researched!) answer :-)
I'll try to follow up on the list tomorrow...
cheers,
tim
> Seriously,
Not just yet, unfortunately, but we are working on it. This is bug 6647661.
http://www.opensolaris.org/isearch.jspa?query=6647661&Submit=Search
-tim
ux?) that can simply hold everything as is.
If someone feels like coding a tool up that basically makes a file of
checksums and counts how many times a particular checksum gets hit over
a dataset, I would be willing to run it and provide feedback. :)
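(A crude sketch of such a tool in shell, using Solaris digest(1) -
file-level rather than block-level, and the path is hypothetical:

# count how many times each file checksum occurs under a dataset
find /tank/data -type f -exec digest -a md5 {} + 2>/dev/null |
    sort | uniq -c | sort -rn | head

Any count greater than 1 is duplicate data, at file granularity.)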
-Tim
Charles Soto wrote:
ld love a ksh93 version of this!
cheers,
tim
that the overhead of SSH really hampered my ability to transfer
data between thumpers as well. When I simply ran a set of sockets and a
pipe things went much faster (filled a 1G link). Essentially I used
netcat instead of SSH.
-Tim
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:43, Glaser, David <[EMAIL PROTECTED]> wrote:
>
>> I guess what I was wondering if there was a direct method rather than the
>> overhead of ssh.
>>
> On receiving machine:
> nc -l 12345 | zfs recv mypool/filesystem@snapshot
> and on sending mac
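(The quote is cut off above, and the archive's address obfuscation has also
mangled the dataset name in the receive line - mypool/filesystem@snapshot is
a stand-in. The sending side is presumably the mirror image, a sketch with
hypothetical names:

# zfs send mypool/filesystem@snapshot | nc receivinghost 12345
)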
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
The console is running.
Guess I need to work out how to use wcadmin now :-)
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
hey all,
I just posted some stuff I'd been playing around with wrt. more desktop
integration of ZFS functionality, full story at:
http://blogs.sun.com/roller/page/timf?entry=zfs_on_your_desktop
it's not much, but it's a start...
cheers,
tim
[
k fine.
It'd be interesting to see how badly people wanted this functionality,
before boiling the ocean (again!) to provide it :-)
Of course, "redo" is a little trickier, as your application would need
to know about the snapshot namespace, but at least your data is safe.
> of stored
> file depends on who's doing the deletion)
Aah right, okay - those are reasons against my previous post about
having an application register its interest in getting undelete
capability. Good points, Eric!
cheers,
tim
--
Tim Foster, S
dow. I was
thinking that RFE 6425091 or 6370738 was what I was waiting for, but
views would make this even easier to implement.
cheers,
tim
[1] http://blogs.sun.com/roller/page/timf?entry=zfs_on_your_desktop
--
Tim Foster, Sun Microsystems Inc, Operating
losed in parentheses) from df instead.
Does this help?
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
the system for.
I've got an old 733MHz Pentium III machine running the latest Nevada
build, which happily runs ZFS -- it gets used as a simple backup-via-rsync
server once a month or so, and comfortably manages ~180GB of data.
cheers,
tim
--
Tim Foster, Sun
mple implementation of the
scheduled snapshot requirement[1], deleting snapshots as required
according to some user-defined rules (Google for 'zfs automatic
snapshots' + "I'm Feeling Lucky" and you'll find it). Other than the RFEs
that Eric mentioned, how much more
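(The simplest form of the idea - a sketch, with hypothetical names; note
that % must be escaped in crontab entries:

# crontab entry: snapshot tank/home on the hour, tagged with a timestamp
0 * * * * /usr/sbin/zfs snapshot tank/home@auto-`date +\%Y\%m\%d-\%H\%M`

A companion cleanup job would then "zfs destroy" everything but the newest N
of those snapshots, according to whatever retention rule the user defined.)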
On Wed, 19 Jul 2006, Eric Lowe wrote:
(Also, BTW, that page has a typo - you might want to get it fixed. I
didn't know where the doc bugs should go for those messages.)
- Eric
Product: event_registry
Category: events
Sub-Category: msg
" here, in which
case, no, I didn't run into any problems, but see the disclaimer above.
Does Networker traditionally scan /etc/vfstab in any way? If it doesn't,
then I'm guessing you shouldn't have any problems.
cheers,
tim
--
Tim Foster, Sun
Och, sorry - a clarification might be needed to my reply:
Tim Foster wrote:
Darren Dunham wrote:
I meant, rather than tarring it up, can you just pass the snapshot mount
point to Networker as a saveset?
Yup, in my brief testing, I was able to backup a snapdir using
Networker.
... ** with the
'll get your people to port iTunes to
OpenSolaris for us[1]?)
cheers,
tim
[1] http://blogs.sun.com/roller/page/timf?entry=zfs_on_osx
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
said,
the devil's in the details (I'm paraphrasing), but I'm only speculating
here - I don't have anything concrete.
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
ys are thrilled at the prospect of DTrace on
their platform.
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
was
playing with too...
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
ose all your data" message?
If there's anything that doesn't make sense, please let me know and I'll try to
clarify.
Thanks,
Tim
ssing about with in my free
time - it's not an official ZFS project and may not be bullet-proof
enough for production.
It works for me, but bug reports would be appreciated ;-)
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
Boot comes
along, but in the meantime it scratches an itch!
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
two machines?
Does "ssh -v" tell you any more?
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
mes said :-)
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
Being resilvered    444.00 GB    168.21 GB    158.73 GB
Just wondering if anyone has any rough guesstimate of how long this will take?
It's 3x1200JB ATA drives and one Seagate SATA drive. The SATA drive is the one
that was replaced. Any idea how long this will take? As in, 5 hours?
The status showed 19.46% the first time I ran it, then 9.46% the second. The
question I have is: I added the new disk, but it's showing the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
The disk is currently unpartitioned and unformatted. I was under
hrmm... "cannot replace c5d0 with c5d0: cannot replace a replacing device"
Says it's online now, so I can only assume it's working. Doesn't seem to be
reading from any of the other disks in the array though. Can it resilver without
traffic to any other disks? /noob
Sent: Friday, September 15, 2006 4:45 PM
To: Tim Cook
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: resilvering, how long will it take?
On Fri, Sep 15, 2006 at 01:26:21PM -0700, Tim Cook wrote:
> says it's online now so I can only assume it's working. Doesn
d on:
> $ uname -a
> SunOS azalin 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-2500
I wasn't able to reproduce this on similar bits, nor on recent s10 bits
(ultimately destined for s10_u3) or nevada bits. Do you have a
consistently reproducible test case?
cheers,
aluation. I'll also keep an eye out for those pools during
testing.
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf
emental backups after that; losing an incremental backup somewhere
in the middle of your range will make it impossible to restore the
subsequent incremental backups [see example below].
Does that help at all?
cheers,
tim
[1] Here's an example:
Take a fi
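(The rest of the example is lost in the archive; the dependency it describes
looks like this - a sketch with hypothetical snapshot names:

# one full backup, then two incrementals
zfs send tank/fs@a > /backup/full-a
zfs send -i tank/fs@a tank/fs@b > /backup/incr-a-b
zfs send -i tank/fs@b tank/fs@c > /backup/incr-b-c

Restoring @c means receiving full-a, then incr-a-b, then incr-b-c, in order.
If incr-a-b is lost, incr-b-c is useless: it can only be received on top
of @b.)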
do "zpool offline <pool> <device>", then remove the disk, replace it
with working hardware, and do "zpool online <pool> <device>"
This is pretty well covered in the ZFS Administration Guide at:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view
cheers,
tim
--
Tim Foster, Sun Microsyste
FS admin guide at:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qt0?a=view
Hope this helps,
cheers,
tim
# zpool status -v
  pool: ts-auto-pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Nov 30 17:14:25 2006
config:
hysical
disks, the better.
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
eally want I/O throttling of send/recv operations as
against "normal" pool operations - I don't know enough to suggest how
this could be implemented (except via brutal pstop/prun hacks on the
"zfs send" process whenever your pool exceeds some given I/O threshold)
t_magic
Matt has mentioned some additional features to zfs send/recv coming
soon, including ways to send all incremental snapshots, send nested
filesystems, and ways to preserve filesystem properties while sending.
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solari
So are there any PCI-Express cards based on the Marvell chipset? And/or
is there something with native SATA support with the same general
specifications (8 ports, non-RAID), just based on a different chipset but
using a PCI-E interface?
-Original Message-
From: [EMAIL PROTECTED]
[mailto
e with a path dropping. I have
the setup above in my test lab and pull cables all the time and have yet
to see a ZFS kernel panic. Is this something you've considered? I
haven't seen the bug in question, but I definitely have not run into it
when running mpxio.
--Tim
-Original Messag
es?
What HBAs are you using? What switches?
What version of snv are you running, and which driver?
Yay for slow Fridays before x-mas - I have a bit of time to play in the
lab today.
--Tim
-Original Message-
From: Jason J. W. Williams [mailto:[EMAIL PROTECTED]
Sent: Friday
Again, I haven't tested this scenario, but I can only imagine
it's not something that can be/should be/is recovered from gracefully.
--Tim
-Original Message-
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Friday, December 22, 2006 3:18 PM
To: Jason J. W. Williams
Cc: T
ales on 500GB HDDs, it should more than get you started
on this.
--Tim
uld be refined - but does anyone think
this is the right direction to go in?
cheers,
tim
[1] I'm not yet sure if SMF instance names allow '/' characters, sorry
[2] http://blogs.sun.com/roller/page/dep?entry=visual_panels_debut
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations
http://blogs.sun.com/timf