Richard Elling wrote:
//etc/svc/repository-boot-20090419_174236
This file is created at boot time, not when power has failed.
So the fault likely occurred during the boot. With this knowledge,
the rest of your argument makes no sense.
reboot   system boot   Sun Apr 1
Bob Friesenhahn wrote:
OpenSolaris desktop users are surely less than 0.5% of the desktop
population. Are the 90+% of the normal desktop users you are talking
about the Microsoft Windows users, which is indeed something like 90%?
If you really want to be part of the majority, perhaps you ins
Toby Thain wrote:
Chances are. The Ubuntu dual-boot here never finds anything
wrong, never crashes, etc.
Why should it? It isn't designed to do so.
I knew this would inevitably creep up. :)
Why are you running a non-redundant pool?
Because.
90+% of the normal desktop users will run
dick hoogendijk wrote:
Why don't you quit using it
and focus a little more on installing SunStudio (which isn't that hard
to do; at least not so hard as you want us to believe it is in another
thread). All I ever had to do was start the installer (in a GUI) and
-all- software was placed where it
casper@sun.com wrote:
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool status -v
thinks:
zpool status -v
pool: rpool
state: ONLINE
status: One or more devices has experience
casper@sun.com wrote:
I would suggest that you follow my recipe: do not check the boot-archive
during a reboot. And then report back. (I'm assuming that will take
several weeks.)
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Drew Balfour wrote:
Does anyone know why it's "applications" and not "data"?
Perhaps something like:
status: One or more devices has experienced an error. A successful
        attempt to correct the error was made using a replicated copy
        of the data. Data on the pool is unaffected.
On Thu, Apr 16, 2009 at 1:05 AM, Fajar A. Nugraha wrote:
[...]
Thanks, Fajar, et al.
What this thread actually shows, alas, is that ZFS is rocket science.
In 2009, one would expect a file system to 'just work'. Why would
anyone want to have to 'status' it regularly, perhaps 'scrub' it, and
if s
Bob Friesenhahn wrote:
Since it was not reported that user data was impacted, it seems likely
that there was a read failure (or bad checksum) for ZFS metadata which
is redundantly stored.
(Maybe I am too much of a linguist not to stumble over the wording
here.) If it is 'redundant', it is '
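A hedged illustration of what "redundantly stored" means here, using plain files and sha256sum in place of ZFS's internal ditto blocks and Fletcher/SHA-256 checksums (the file names below are made up for the sketch):

```shell
# Store the same "metadata" twice with one recorded checksum; if one
# copy rots, the other still verifies and is used. This mimics ZFS
# ditto blocks with ordinary files.
printf 'important metadata' > copy1
cp copy1 copy2
sum=$(sha256sum copy1 | awk '{print $1}')
printf 'garbage' > copy1          # simulate silent corruption of copy 1
good=""
for c in copy1 copy2; do
  if [ "$(sha256sum "$c" | awk '{print $1}')" = "$sum" ]; then
    good=$c
    break
  fi
done
echo "good copy: $good"
```

Because the checksum is stored separately from each copy, a corrupted copy is detected rather than silently returned, which is why a single-disk pool can still repair metadata errors.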
Richard Elling wrote:
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
        NAME      STATE   READ WRITE CKSUM
        rpool     ONLINE     0     0     0
          c1d0s0  ONLINE     0     0     1
errors:
My question is related to this:
# zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
usin
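The action line is truncated above; the usual sequence (a sketch, assuming the pool and device names shown in the status output) is:

```shell
# Inspect, clear the logged error counters, then scrub to re-verify
# every block in the pool. Run as root; 'rpool' and 'c1d0s0' are the
# names from the status output above.
zpool status -v rpool
zpool clear rpool c1d0s0
zpool scrub rpool
# check scrub progress / results later:
zpool status -v rpool
```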
C. wrote:
I've worked hard to resolve this problem.. google opensolaris rescue
will show I've hit it a few times... Anyway, short version is it's
not zfs at all, but stupid handling of bootarchive. If you've
installed something like a 3rd party driver (OSS/Virtualbox) you'll
likely hit thi
Since I moved to ZFS, sorry, I tend to have more problems after power
failures. We have around one outage per week on average, and the
machine(s) don't boot up as one might expect (from ZFS).
Just today: reboot, and rebooting in circles; with no chance on my side
to see the 30-40 lines of hex-stu
bcirvin,
you proposed "something to allow us to try to pull data from a failed pool".
Yes and no. 'Yes' as a pragmatic solution; 'no' for what ZFS was 'sold' to be:
the last filesystem mankind would need. It was conceived as a filesystem that
does not need recovery, due to its guaranteed consist
May I doubt that there are drives that don't 'sync'? That means you have a good
chance of corrupted data at a normal 'reboot'; or just at a 'umount' (without
considering ZFS here).
May I doubt the marketing blurb that you need to buy a USCSI or whatnot to have
functional 'sync' at a shutdown or
Toby,
sad that you fall for the last resort of the marketing droids here. All
manufacturers (and there are only a few left) will sue the hell out of you if
you state that their drives don't 'sync'. And each and every drive I have ever
used did. So the talk about a distinct borderline between 'en
I need to disappoint you here: an LED inactive for a few seconds is a very bad
indicator of pending writes. I used to experience this with a stick on Ubuntu,
which was silent until the 'umount', and then it started to write for some 10
seconds.
On the other hand, you are spot-on w.r.t. 'umount'. Once t
[Still waiting for answers on my earlier questions]
So I take it that ZFS solves one problem perfectly well: integrity of data
blocks. It uses checksums and atomic writes for this purpose, and as far as I
could follow this list, nobody has ever had any problems in this respect.
However, it also - at l
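That data-block integrity can be sketched with ordinary tools; sha256sum here stands in for ZFS's internal checksums (ZFS actually uses Fletcher or SHA-256, computed per block):

```shell
# Record a checksum at write time, verify it at read time; a mismatch
# means silent corruption is detected before bad data reaches the
# application.
printf 'block payload' > block
sum=$(sha256sum block | awk '{print $1}')
# ... later, on "read":
if [ "$(sha256sum block | awk '{print $1}')" = "$sum" ]; then
  echo "checksum ok"
else
  echo "corruption detected"
fi
```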
We have seen some unfortunate miscommunication here, and misinterpretation.
This extends into differences of culture. One of the vocal persons here is
surely not 'Anti-xyz'; rather I sense his intense desire to further
progress by pointing his finger at some potential wounds.
May I repeat
Full of sympathy, I still feel you might as well relax a bit.
It is the XkbVariant that starts X without any chance to return.
But look at the many "boot stops after the third line" reports, and, from my
side, the non-working network settings, even without nwam.
The worst part was a so-called engineer sta
Thanks, Richard,
for another clarification! I personally have always considered your posts
enlightening and helpful. Thank you!
It was not my intention to step on anyone's feet with my remarks. I simply
wished that there was a source of all that information that needed to be
extracted from various
[i]If you want to add the entire Solaris partition to the zfs pool as a mirror,
use
zpool attach -f rpool c1d0s0 c2d0s2[/i]
So my mistake in the first place (see first post), in short, was only the last
digit: I ought to have used the complete drive (slice 2), instead of *thinking*
that it is u
[i]prtvtoc /dev/dsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2[/i]
Tim,
I understand what you are trying to do here, and had thought of something
similar myself. But - please see my first post - it is not just a mirror that
I want; the disk is of a different size, and so is the bf-partition. If I simply
Now I modified the slice s0, so that it doesn't overlap with s2 (the whole
disk) any longer:
Part  Tag         Flag  Cylinders    Size      Blocks
  0   root        wm    3 - 10432    159.80GB  (10430/0/0) 335115900
  1   unassigned  wm    0            0         (0/0/0)
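With the slice no longer overlapping, the mirror can be attached; a sketch of the whole recipe, using the device names from this thread:

```shell
# Copy the partition table from the source disk to the new disk,
# then attach the new slice as a mirror of the existing one.
prtvtoc /dev/dsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2
zpool attach -f rpool c1d0s0 c2d0s2
# watch the resilver:
zpool status rpool
```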
Gary,
thanks. All my servers run OpenBSD, so I know the difference between a
DOS-partition and a slice. :)
My confusion is about the labels. I could not label it whatever I wanted, like
'zfsed' or 'pool'; it had to be 'root'. And since we can have only a single
bf-partition per drive (dsk), I was thinkin
This is what I did:
partition> print
Current partition table (original):
Total disk cylinders available: 10442 + 2 (reserved cylinders)
Part  Tag         Flag  Cylinders    Size      Blocks
  0   root        wm    3 - 10441    159.93GB  (10439/0/0) 335405070
  1   unassigned  w
Thanks, Peter!
(and I really wish the Admin Guide were more practical). So we still need
the somewhat arcane format -> partition tool! I guess the step that ZFS saves
is newfs, then?!
Uwe
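For comparison, a hedged sketch of the two workflows (device and pool names assumed for illustration):

```shell
# Legacy: label the disk, create a filesystem, mount by hand.
#   format            (interactive: partition the disk)
#   newfs /dev/rdsk/c0d0s7
#   mount /dev/dsk/c0d0s7 /export/home
# ZFS: one command replaces newfs + mount (the slice must still exist):
zpool create home c0d0s7
```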
--
This message posted from opensolaris.org
This might sound sooo simple, but it isn't. I read the ZFS Administration Guide
and it did not give an answer; at least no simple answer, simple enough for me
to understand.
The intention is to follow the thread "Easiest way to replace a boot disk with
a larger one".
The command given would be
[i]Is there a current Linux distro that actually configures itself so
this can happen? Most of the ones I've seen don't bother.[/i]
Mike, does 'Debian' or 'Ubuntu' ring a bell? Both cater for this situation in
the text-based installer. And surely a few more that I just haven't tried.
I *am* disa
Orvar Korvar wrote:
> I don't think the motherboard is on the HCL. But everything worked fine in
> b90.
>
> I realize I haven't provided all the necessary info. Here is more info.
> http://www.opensolaris.org/jive/thread.jspa?threadID=69654&tstart=0
>
> The thing is, I've upgraded ZFS to the newest vers
Peter Tribble wrote:
> On Tue, Jun 10, 2008 at 12:02 PM, <[EMAIL PROTECTED]> wrote:
>
>> Here I made the opposite observation: Just installed nv90 on a dated
>> notebook DELL D400; unmodified except for an 80GB 2.5" hard disk and -
>> of course! - an extra strip of 1 GB of RAM, making it 1.2 GB
Can someone in the know please provide a recipe to upgrade a nv81 (e.g.) to
ZFS-root, if possible?
That is, just listing the commands step by step for the uninitiated; for me.
me.
Uwe
[i]Consider this to be your life's mission.[/i]
Bob, I can do without this.
Richard,
[i]Actually I use several browsers every day. Each
browser has a cache located somewhere in my home
directory and the cache is managed so that it won't
grow very large. With CDP, I would fill my disk in
a week o
[i]Even then, I'm still confused as to how I
would do anything much useful with this over and above, say, 1 minute
snapshots.[/i]
Hi Nathan,
I was hoping to be clear with my examples.
Within that 1 minute the user has easily received the mail alert that 5 mails
have arrived, has seen the sender
[i]I think you're just looking for frequent backups, not necessarily capturing
every unique file version.[/i]
Thanks for your reply, Joe, but this is not my intention. I agree that my
arguments here look like moving targets. They simply developed along the lines
of the discussion. I'd still target
> atomic view?
Your post was about the gory details of how ZFS writes. "Atomic view" here means
that 'save' of a file is an 'atomic' operation: at one moment in time you click
'save', and at some other moment in time it is done. It means indivisible, and
from the perspective of the user this is how it
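That user-level "atomic save" can be sketched with the classic write-then-rename idiom, which is also the spirit of ZFS's copy-on-write update (a plain-file illustration, not ZFS itself):

```shell
# Write the new version to a fresh file; the old version stays intact
# until the rename, which flips to the new version in one indivisible
# step. A reader sees either version 1 or version 2, never a mix.
printf 'version 1' > data
printf 'version 2' > data.new
mv data.new data        # atomic replace on the same filesystem
cat data
```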
On Tue, Feb 26, 2008 at 2:07 PM, Nicolas Williams
<[EMAIL PROTECTED]> wrote:
> How do you use CDP "backups"? How do you decide at which write(2) (or
> dirty page write, or fsync(2), ...) to restore some file? What if the
> app has many files? Point-in-time? Sure, but since you can't restore
>
[i]And would drive storage requirements through the roof!![/i]
The interesting part is, Nathan, you're probably wrong.
First, though, some of my contacts in the enterprise gladly spent millions for
third-party applications running on Microsoft to do exactly that.
[But we all know that SUN is fam
[i]google found that solaris does have file change notification:
http://blogs.sun.com/praks/entry/file_events_notification
[/i]
Didn't see that one, thanks.
[i]Would that do the job?[/i]
It is not supposed to do a job, thanks :), it is for a presentation at a
conference I will be giving. I was
Come on! Nobody?!
I read through documents for several hours, and have obviously done my homework.
Can someone please point me to a link, or just unambiguously say 'yes' or 'no' to
my question, whether ZFS could produce a snapshot of whatever type, initiated with a
signal that in turn is derived from a change (e
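In the absence of a built-in trigger, the event-driven snapshot asked about can be approximated by pairing a change detector with 'zfs snapshot'. Everything here is an assumption for illustration: the dataset 'tank/home', the watched file, and the GNU-style 'stat -c %Y' (on Solaris the mtime would come from e.g. 'ls -E' or perl instead):

```shell
# Poll a file's modification time; whenever it changes, take a ZFS
# snapshot named after the new timestamp.
watch=/tank/home/report.doc
last=$(stat -c %Y "$watch")
while sleep 1; do
  now=$(stat -c %Y "$watch")
  if [ "$now" != "$last" ]; then
    zfs snapshot "tank/home@auto-$now"
    last=$now
  fi
done
```

The file-events notification mechanism mentioned earlier in the thread could replace the polling loop with a true event source.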
Hi, checked all Wiki and documentation here on this site, and still need an
answer for a conference paper I am writing:
Can ZFS produce event-driven snapshots? Of course, I mean snapshots of specific
files/filesystems in the event of a change?
This question has eluded me until now.
Uwe
Hi,
back after half a year ...
... and still, after reading documents for the last half day, I am not
a bit wiser.
Someone had promised to update (and simplify) some examples, but - at
least to me - that hasn't happened. :(
[Just read the 'Legacy Mount Points' section, and tell me as a beginner what to do
Since nobody seems to have a clue and I didn't want to give up - nor
install from scratch - I kept playing. Suddenly everything was back in place,
after I hit, by intuition, on
% zfs set mountpoint=legacy home
It beats me, why and how this brought back the desired state, since I had
issued
%
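For reference, what mountpoint=legacy means in practice (a sketch; the vfstab line is an assumption matching the pool name 'home'):

```shell
# With mountpoint=legacy, ZFS stops managing the mount itself and the
# dataset is mounted the traditional way:
zfs set mountpoint=legacy home
mount -F zfs home /export/home
# or persistently via /etc/vfstab:
#   home  -  /export/home  zfs  -  yes  -
```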
Matt,
thanks for some examples and your understanding.
While I am still struggling to get a pool mounted,
I still find some unexpected (at least in my legacy terms) behaviour:
% zfs mount pool /export/home
is a clear intention. Maybe too much legacy?
% zpool import
" can be imported us
[i]I create the default storage pool during the install, but then when it
reboots, the hostname/hostid has changed so I need to re-associate the pool. I
know you're frustrated with this stuff, but once you've figured it out it
really is very powerful. :-)[/i]
If you read my contributions, I hav
[EMAIL PROTECTED]:~# zpool import
pool: new_zpool
id: 3042040702885268372
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
          new_zpool   ONLINE
            c0t2d0s6  ONLINE
It shows that there is one filesystem available for import on one of my disks.
Here is a list of
In continuation of another thread, I feel the need to address this topic
urgently:
Despite the great and enormous potential of ZFS and its advanced
architecture, in the end success is measured in use and user acceptance.
One of the promises is (was) a high-level interface. "No more 'format'".
Uuh, I just found out that I now have the new data ... whatever, here it is:
[I did have to boot to the old system, since the new install lost its new
'home']
[i]zpool status
pool: home
state: ONLINE
scrub: none requested
config:
        NAME    STATE   READ WRITE CKSUM
home
[i]
zpool create newhome c0d0s7
zfs snapshot [EMAIL PROTECTED]
zfs send [EMAIL PROTECTED] | zfs receive newhome/home
A 1:1 copy of the zfs "home" should then exist in "/newhome/home".
[/i]
'should' was the right word. It doesn't; and has actually destroyed my poor
chances to mount it. I hope some
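One plausible cause of the "destroyed my chances to mount it" outcome is that the received dataset inherits the same mountpoint as the source, so the two fight over /export/home. A hedged variant of the recipe that avoids the clash (pool and snapshot names assumed):

```shell
zfs snapshot home@move
# -u: receive without mounting, so the new copy does not collide with
# the live filesystem's mountpoint
zfs send home@move | zfs receive -u newhome/home
# give the copy its own mountpoint before mounting it:
zfs set mountpoint=/newhome/home newhome/home
```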
Andy,
my excuses, I didn't really appreciate your input in my earlier mail!
[i]I can't get to the console of a system to take it to single user, but you
might try
"svcadm enable -tr filesystem/local" or "zfs mount -a".
[/i]
Both work properly. Half of the job done; now I have the new home mount
[EMAIL PROTECTED]:/u01/home# zfs snapshot u01/[EMAIL PROTECTED]
[EMAIL PROTECTED]:/u01/home# zfs send u01/[EMAIL PROTECTED] | zfs receive
u02/home
One caveat here is that I could not find a way to back up the base of the zpool
"u01" into the base of zpool "u02". i.e.
zfs snapshot [EMAIL PROTECT
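Receiving into the root dataset of the target pool is possible, but since that dataset already exists, the receive must be forced; a sketch, assuming reasonably recent zfs bits:

```shell
zfs snapshot u01@backup
# -F rolls the existing root dataset of u02 back so the incoming
# stream can replace it
zfs send u01@backup | zfs receive -F u02
```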
... though I tried, read and typed for the last 4 hours; still no clue.
Please, can anyone give a clear idea on how this works:
Get the content of c0d1s1 to c0d0s7 ?
c0d1s1 is pool home and active; c0d0s7 is not active.
I have followed the suggestion on
http://www.opensolaris.org/os/community/zfs/dem