On Sun, Nov 16, 2008 at 09:27:32AM -0800, Ed Clark wrote:
Hi Ed,
> > > 1. a copy of the 137137-09 patchadd log if you have
> > http://iws.cs.uni-magdeburg.de/~elkner/137137-09/
> thanks for the info - what you provided here is the patch pkg installation log,
Yes, actually the only one I have/coul
On Mon, Nov 17, 2008 at 07:27:50AM +0200, Johan Hartzenberg wrote:
>
>Thank you for the details. A few more questions: After booting into
>build 102, do you run zpool online on the root pool? And the above disable -t
>is "temporary" till the next reboot - any specific reason for doing it
>
Just to clarify that last answer, we are planning on releasing SSDs for
many of our existing systems and storage. They may be a little
different than what's used in the 7000, but they're intended for the
same purpose.
Your sales rep should be able to give you a better idea of when, but
they'r
Does this affect a fresh install into a single, full-disk pool? I just did
this with nv102 on my laptop. So far, it seems to be working ...
Unfortunately, this would be a pain to unwind. Is there a quick fix
binary patch available?
The only thing that appears to have broken is resume. Could t
Richard Elling wrote:
> Chris Gerhard wrote:
>> My home server running snv_94 is tripping the same assertion when
>> someone lists a particular file:
> Failed assertions indicate software bugs. Please file one.
> http://en.wikipedia.org/wiki/Assertion_(computing)
A colleague pointed out that it i
> Would be interesting to hear more about how Fishworks differs from
> Opensolaris, what build it is based on, what package mechanism you are
> using (IPS already?), and other differences...
I'm sure these details will be examined in the coming weeks on the blogs
of members of the Fishworks team
Adam Leventhal wrote:
> Yes. The Sun Storage 7000 Series uses the same ZFS that's in OpenSolaris
> today. A pool created on the appliance could potentially be imported on an
> OpenSolaris system; that is, of course, not explicitly supported in the
> service contract.
>
Would be interesting to he
On Mon, Nov 17, 2008 at 3:33 PM, Will Murnane <[EMAIL PROTECTED]>wrote:
> On Mon, Nov 17, 2008 at 20:54, BJ Quinn <[EMAIL PROTECTED]> wrote:
> > 1. Dedup is what I really want, but it's not implemented yet.
> Yes, as I read it. greenBytes [1] claims to have dedup on their
> system; you might inv
On Mon, Nov 17, 2008 at 20:54, BJ Quinn <[EMAIL PROTECTED]> wrote:
> 1. Dedup is what I really want, but it's not implemented yet.
Yes, as I read it. greenBytes [1] claims to have dedup on their
system; you might investigate them if you decide rsync won't work for
your application.
> 2. The onl
On Mon, Nov 17, 2008 at 2:36 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 17, 2008 at 01:38:29PM -0600, Tim wrote:
> >
> > And this passage:
> > "If there is a broken or missing disk, we don't let you proceed without
> > explicit confirmation. The reason we do this is that once the st
Thank you both for your responses. Let me see if I understand correctly -
1. Dedup is what I really want, but it's not implemented yet.
2. The only other way to accomplish this sort of thing is rsync (in other
words, don't overwrite the block in the first place if it's not different), and
i
On Mon, Nov 17, 2008 at 01:38:29PM -0600, Tim wrote:
>
> And this passage:
> "If there is a broken or missing disk, we don't let you proceed without
> explicit confirmation. The reason we do this is that once the storage pool
> is configured, there is no way to add those disks to the pool without
>
Chris Gerhard wrote:
> My home server running snv_94 is tripping the same assertion when someone
> lists a particular file:
>
Failed assertions indicate software bugs. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
-- richard
> ::status
> Loading modules: [ unix genu
On Mon, Nov 17, 2008 at 1:14 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
>
>
> Yes, we support adding whole or half JBODs. We do not support adding
> individual disks or arbitrarily populated JBODs. If you want the
> ability to survive JBOD failure ("NSPF" in our storage config terms),
> you mus
Thanks to Iain Curtain.
1. boot into single user mode from dvd, then su.
2. mount the rootpool as r/w on /a when prompted
3. run: installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
4. reboot
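In case it helps the next person searching the archives, a hedged sketch of the
same sequence from the media shell (assumptions on my part: the root pool is
named rpool, the disk is c0d0s0 as above, and with the pool mounted on /a the
stage files may need the /a prefix):
# zpool import -f -R /a rpool
# installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c0d0s0
# reboot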
On Mon, Nov 17, 2008 at 01:07:12PM -0600, Tim wrote:
>
> So that leads me to my second question then: If I buy a 7410 with a single
> JBOD attached, can I easily attach a second JBOD and grow the pool? It
> would seem the logical answer is "yes", since growing the pool would just
> require addin
>I'm not sure if this is the right place for the question or not, but I'll
>throw it out there anyways. Does anyone know, if you create your pool(s)
>with a system running fishworks, can that pool later be imported by a
>standard solaris system? IE: If for some reason the head running fishworks
On Mon, Nov 17, 2008 at 12:48 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
> Yes, the on-disk format is compatible. You cannot, however, do the
> reverse. Importing arbitrary Solaris pools (or former Fishworks pools)
> into the Fishworks environment is not supported. While the on-disk
> format i
On Mon, Nov 17, 2008 at 12:35:38PM -0600, Tim wrote:
> I'm not sure if this is the right place for the question or not, but I'll
> throw it out there anyways. Does anyone know, if you create your pool(s)
> with a system running fishworks, can that pool later be imported by a
> standard solaris sys
> "ah" == Andrew Hisgen <[EMAIL PROTECTED]> writes:
ah> (If it helps for concreteness, let us say that the disk is
ah> implemented as an iSCSI target
for me, iSCSI targets do come back online automatically.
then, they don't resilver, or don't resilver enough.
If they were gone for t
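When a mirror half has been away long enough that I don't trust the automatic
resilver, I kick it by hand. A rough sketch, with pool and device names made
up:
# zpool online tank c2t0d0
# zpool scrub tank
# zpool status tank
The scrub forces a full re-verify of both halves, which covers the case where
the resilver after reattach looked suspiciously short.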
One of my zpools (sol-10-u4-ga-sparc) has experienced some permanent errors.
At this point, I don't care about the contents of the files in question, I
merely want to cleanup the zpool. zpool clear doesn't seem to do anything at
all. Any suggestions?
# zpool status -v ccm01
pool: ccm01
state: O
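In case it's useful to anyone hitting the same thing, the sequence I would try
(a sketch, not tested against this pool; ccm01 comes from the output above):
remove or restore the files named in the errors list, scrub so ZFS can
re-verify the pool, then clear the counters. My understanding is that the
permanent-error list only drops off after a successful scrub once the damaged
files are gone.
# zpool status -v ccm01
# rm /path/to/each/damaged/file
# zpool scrub ccm01
# zpool clear ccm01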
I'm not sure if this is the right place for the question or not, but I'll
throw it out there anyways. Does anyone know, if you create your pool(s)
with a system running fishworks, can that pool later be imported by a
standard solaris system? IE: If for some reason the head running fishworks
were
Consider the following scenario. A disk that is part of
a zfs zpool and is also part of a mirror in that zpool
becomes disconnected. Imagine that the disconnection
lasts minutes or hours or days.
Then imagine that the disk becomes reconnected, but,
in a manner where the hot-plug event does not cause
I'll have to dig deeper, but I believe this panic was caused by the
controlling terminal of an incremental receive via ssh exiting.
The system is Solaris 10 Update 5.
Nov 17 18:44:07 antaeus unix: [ID 10 kern.notice]
Nov 17 18:44:07 antaeus genunix: [ID 802836 kern.notice]
fe80019f7b50 ff
So it's safe to use the pool after forcing its creation. Good.
I was surprised to see it working properly in snv77 but not in snv101.
Thank you,
Vincent
On Mon, Nov 17, 2008 at 11:51, BJ Quinn <[EMAIL PROTECTED]> wrote:
> I believe rsync can do this, but some of the servers in question are Windows
> servers and rsync/cygwin might not be an option.
I'd check to make sure rsync has the correct behavior first, but there
is a Windows-based rsync daemo
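The specific behavior to verify is --inplace: by default rsync writes a
changed file to a temporary copy and renames it into place, so every changed
file looks brand-new to a ZFS snapshot. A hedged sketch of a nightly pull,
with host and module names invented:
rsync -a --inplace --delete backuphost::share/ /backup/data/
With --inplace, only the blocks rsync actually rewrites diverge from the
previous snapshot, which is what keeps the snapshots small.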
Wow, it was actually just a simple process:
1. boot into single user mode from dvd - the only copy I did have was a build
80 version, which was no good as I had recently upgraded the pools and it
didn't recognise the rootpool (no real surprise there). So I needed to
download and burn the latest
On Mon, 17 Nov 2008, Vincent Boisard wrote:
> #zpool create pool1 c1d1s0
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2
That's CR 6419310.
Regards,
markm
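For what it's worth, two ways around it until that CR is fixed, as I
understand them: hand zpool the whole disk and let ZFS write its own label, or
override the check, since s2 is just the traditional whole-disk slice:
# zpool create pool1 c1d1
# zpool create -f pool1 c1d1s0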
On Mon, Nov 17, 2008 at 06:33:46PM +0100, Vincent Boisard wrote:
> Hi,
>
> As I was experimenting with snv101, I discovered that every attempt to create
> a zpool gives this error:
>
> #zpool create pool1 c1d1s0
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c
Hi,
As I was experimenting with snv101, I discovered that every attempt to create
a zpool gives this error:
#zpool create pool1 c1d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2
I am testing with vmware, so I used the same v
I think what you describe IS dedup. You can search the archives for
dedup.
Best regards
Mertol
Sent from a mobile device
Mertol Ozyoney
On 17.Kas.2008, at 18:51, BJ Quinn <[EMAIL PROTECTED]> wrote:
> We're considering using an OpenSolaris server as a backup server.
> Some of the servers to
Hi,
is it safe to change the root pool mount point from /rpool to, let's say,
/zpools/rpool?
I'd like to have all my pools under one dir so that / stays clean.
Thank you in advance,
Vincent
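If it does turn out to be safe, the change itself would just be a mountpoint
property on the pool's top-level dataset. A sketch I have not tried on a live
root pool, with the caveat that GRUB's menu.lst lives in that dataset
(/rpool/boot/grub) and tools that update it may expect the usual path:
# zfs set mountpoint=/zpools/rpool rpool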
We're considering using an OpenSolaris server as a backup server. Some of the
servers to be backed up would be Linux and Windows servers, and potentially
Windows desktops as well. What I had imagined was that we could copy files
over to the ZFS-based server nightly, take a snapshot, and only t
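To make the snapshot side of that concrete, a minimal sketch with pool and
dataset names invented:
# zfs snapshot backup/data@$(date +%Y-%m-%d)
# zfs list -t snapshot -r backup/data
Each snapshot is charged only for the blocks that changed since the previous
one, so the copy step just has to avoid rewriting unchanged blocks.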
IIRC, the 32-bit reference you see is only for the installer, which
doesn't need to be 64-bit. The installer does detect 64-bit hardware
during the install, and behaves accordingly.
Blake
On Fri, Nov 7, 2008 at 5:56 PM, Peter Bridge <[EMAIL PROTECTED]> wrote:
> Just as a follow up. I went ahe
The only time I have done it, I used installgrub to do it.
Malachi
On Mon, Nov 17, 2008 at 8:27 AM, Iain Curtain <[EMAIL PROTECTED]> wrote:
> I decided after upgrading to zfs boot to remove my old ufs slice, however
> that had all the boot info on it. Any tips on how to re-create my grub and
>
I decided after upgrading to zfs boot to remove my old ufs slice; however, that
slice had all the boot info on it. Any tips on how to re-create my grub setup
and get it working again?
I presume the same steps as ufs: boot into single user, mount, and recreate
with the installgrub command?
Any tips gratefully rece
Tarik,
thank you.
I did some tests and that's the solution..
regards,
Tobias Exner
Tarik Soydan - Sun BOS Software schrieb:
> On 11/14/08 04:29, Tobias Exner wrote:
>> Hi experts,
>>
>> I need a little help from your side to understand what's going on.
>>
>>
>> I've got a SUN X4540 Thumper an
hi,
> We noticed the following postpatch error while installing
> patch 137137-09 on systems with STMS enabled (MPxIO)
> and Fibre Channel system disks:
>
>
> Patch 137137-09 has been successfully installed.
> See
> /var/run/.patchSafeMode/root/var/sadm/patch/137137-09/
> log for details
> Exe
We noticed the following postpatch error while installing patch 137137-09 on
systems with STMS enabled (MPxIO) and Fibre Channel system disks:
Patch 137137-09 has been successfully installed.
See /var/run/.patchSafeMode/root/var/sadm/patch/137137-09/log for details
Executing postpatch script..