yah, unfortunately this is the first send. i'm trying to send 9 TB of data.
It really sucks because i was at 6 TB when it lost power
On Sat, May 22, 2010 at 2:34 AM, Brandon High wrote:
> You can "resume" a send if the destination has a snapshot in common with
> the source. If you don't, t
I should mention that iostat -En doesn't return any information. Is there a
reliable way of reading SMART information natively in OpenSolaris?
Cheers,
Andre
From: andreas_wants_the_w...@hotmail.com
To: zfs-discuss@opensolaris.org
Date: Sat, 22 May 2010 16:49:15 +1000
Subject: [zfs-discuss]
install smartmontools
There is no package for it but it's EASY to install
once you do, you can get output like this:
pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://sm
Thanks Thomas, I thought there'd already be a package in the repo for it.
Cheers,
Andre
Date: Sat, 22 May 2010 03:17:38 -0400
Subject: Re: [zfs-discuss] HDD Serial numbers for ZFS
From: wonsl...@gmail.com
To: andreas_wants_the_w...@hotmail.com
CC: zfs-discuss@opensolaris.org
install sma
i don't think there is but it's dirt simple to install.
I followed the instructions here:
http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
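For anyone following along, the build described at that link boils down to the usual configure/make dance; a rough sketch (the tarball version and install prefix here are my assumptions, not from the post):

```shell
# Hypothetical source build of smartmontools on OpenSolaris
# (tarball version and install prefix are assumptions):
gzip -dc smartmontools-5.39.1.tar.gz | tar xf -
cd smartmontools-5.39.1
./configure --prefix=/usr/local
make
pfexec make install

# Then query a SATA drive, as shown elsewhere in the thread:
pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
```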
On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:
> Thanks Thomas, I thought
On Fri, May 21, 2010 at 10:22 PM, Thomas Burgess wrote:
> yah, it seems that rsync is faster for what i need anyways, at least right
> now...
If you don't have snapshots you want to keep in the new copy, then probably...
-B
--
Brandon High : bh...@freaks.com
On 22/05/10 05:09 PM, Andreas Iannou wrote:
I should mention that iostat -En doesn't return any information. Is
there a reliable way of reading SMART information natively in OpenSolaris?
Cheers,
Andre
From: andreas_wants_t
i only care about the most recent snapshot, as this is a growing video
collection.
i do have snapshots, but i only keep them for when/if i accidentally delete
something, or rename something wrong.
On Sat, May 22, 2010 at 3:43 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 10:22 PM, Thomas Bur
On 22 maj 2010, at 07.40, Don wrote:
>> The SATA power connector supplies 3.3, 5 and 12v. A "complete"
>> solution will have all three. Most drives use just the 5v, so you can
>> probably ignore 3.3v and 12v.
> I'm not interested in building something that's going to work for every
> possible dr
Basic electronics, go!
The linked capacitor from Elna (
http://www.elna.co.jp/en/capacitor/double_layer/catalog/pdf/dk_e.pdf) has an
internal resistance of 30 ohms.
Intel rate their 32GB X25-E at 2.4W active (we aren't interested in idle
power usage; if it's idle, we don't need the capacitor in th
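The message is cut off above, but the arithmetic it sets up is easy to finish: at 2.4 W on a 5 V rail the drive draws 0.48 A, and pushing 0.48 A through 30 ohms of internal resistance drops 14.4 V, roughly three times the supply voltage, so that particular capacitor can't deliver the current on its own:

```shell
# Back-of-envelope check of the figures above: a 2.4 W drive on a 5 V
# rail, fed through a capacitor with 30 ohms of internal resistance.
awk 'BEGIN {
    watts = 2.4; volts = 5.0; esr = 30.0
    amps = watts / volts          # current the drive draws
    drop = amps * esr             # voltage lost across the ESR at that current
    printf "draw: %.2f A, ESR drop: %.1f V\n", amps, drop
}'
# prints: draw: 0.48 A, ESR drop: 14.4 V
```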
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brent Jones
>
> Problem with mbuffer, if you do scripted send/receives, you'd have to
> pre-start an Mbuffer session on the receiving end somehow.
> SSH is always running on the receiving end,
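One way around the pre-start problem is to launch the listener yourself over the same ssh channel; a sketch, where the host, port, dataset names, and buffer sizes are all placeholders, and mbuffer is assumed installed on both ends:

```shell
# Sketch: pre-start a receiving mbuffer over ssh, then stream to it.
# Host, port, and dataset names here are placeholders.
RECV=receiver.example.com
PORT=9090

ssh "$RECV" "mbuffer -q -I $PORT -s 128k -m 1G | pfexec /usr/sbin/zfs recv tank/backup" &
sleep 5   # crude: give the remote listener a moment to come up

zfs send tank/data@snap | mbuffer -q -s 128k -m 1G -O "$RECV:$PORT"
wait      # reap the backgrounded ssh once the receive finishes
```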
On Fri, 21 May 2010, David Dyer-Bennet wrote:
To be comfortable (I don't ask for "know for a certainty"; I'm not sure
that exists outside of "faith"), I want a claim by the manufacturer and
multiple outside tests in "significant" journals -- which could be the
blog of somebody I trusted, as well
On Fri, 21 May 2010, Brandon High wrote:
My understanding is that the controller contains enough cache to
buffer enough data to write a complete erase block size, eliminating
the need to read / erase / write that a partial block write entails.
It's reported to do a copy-on-write, so it doesn't n
On Fri, 21 May 2010, Don wrote:
You know- it would probably be sufficient to provide the SSD with
_just_ a big capacitor bank. If the host lost power it would stop
writing and if the SSD still had power it would probably use the
idle time to flush its buffers. Then there would be world peace!
On Fri, 21 May 2010, Demian Phillips wrote:
For years I have been running a zpool using a Fibre Channel array with
no problems. I would scrub every so often and dump huge amounts of
data (tens or hundreds of GB) around and it never had a problem
outside of one confirmed (by the array) disk failu
On Fri, 21 May 2010, Don wrote:
You could literally split a sata cable and add in some capacitors
for just the cost of the caps themselves. The issue there is whether
the caps would present too large a current drain on initial charge
up- If they do then you need to add in charge controllers an
I am new to OSOL/ZFS but have just finished building my first system.
I detailed the system setup here:
http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15
I ended up having to add an additional controller card as two ports on the
motherboard did not work as standard SATA ports.
iostat -xen 1 will provide the same device names as the rest of the system
(as well as show error columns).
zpool status will show you which drive is in which pool.
As for the controllers, cfgadm -al groups them nicely.
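Putting those three together, a quick mapping session might look like this (the pool name is a placeholder):

```shell
# Map devices to pools and controllers.
iostat -xen 1 5      # per-device throughput, plus s/w h/w trn tot error columns
zpool status tank    # which device name sits in which vdev of the pool
cfgadm -al           # attachment points grouped by controller; sata0/0-style
                     # entries appearing here suggest the port is in AHCI mode
```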
On 23 May 2010 03:50, Brian wrote:
> I am new to OSOL/ZFS but have just
Following up with some more information here:
This is the output of "iostat -xen 30":
                     extended device statistics              ---- errors ----
   r/s   w/s     kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
 296.8   2.9  36640.2   7.5   7.8   2.0    26.1
On Sat, 22 May 2010, Brian wrote:
The -xen helped me determine that it was disks c7d0 and c7d1 that were slower.
You may be right, but is not totally clear since you really need to
apply a workload which is assured to consistently load the disks. I
don't think that 'scrub' is necessarily b
>                      extended device statistics              ---- errors ----
>    r/s   w/s     kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
>  296.8   2.9  36640.2   7.5   7.8   2.0    26.1     6.6  99  99    0    0    0    0  c7d0
>  296.7   2.5  36618.1   7.5   7.8   2.0
Bob Friesenhahn wrote:
On Fri, 21 May 2010, Don wrote:
You could literally split a sata cable and add in some capacitors for
just the cost of the caps themselves. The issue there is whether the
caps would present too large a current drain on initial charge up- If
they do then you need to add
On Sat, May 22, 2010 at 11:41 AM, Brian wrote:
> If I look at c7d0, I get a message about no "Alt Slice" found and I don't
> have access to the cache settings. Not sure if this is part of my problem or
> not:
That can happen if the controller is not using AHCI. It'll affect your
performance pr
Is there a way within opensolaris to detect if AHCI is being used by various
controllers?
I suspect you may be accurate and AHCI is not turned on. The bios for this
particular motherboard is fairly confusing on the AHCI settings. The only
setting I have is actually in the raid section, and it
If you install Opensolaris with the AHCI settings off, then switch them on,
it will fail to boot
I had to reinstall with the settings correct.
the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on
if not, then you may need to reinstall with it on (for
I am not sure I fully understand the question... It is setup as raidz2 - is
that what you wanted to know?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/
Thanks -
I can give reinstalling a shot. Is there anything else I should do first?
Should I export my tank pool?
just to make sure i understand what is going on here,
you have a rpool which is having performance issues, and you discovered ahci
was disabled?
you enabled it, and now it won't boot. correct?
This happened to me and the solution was to export my storage pool and
reinstall my rpool with the ah
On 05/23/10 08:52 AM, Thomas Burgess wrote:
If you install Opensolaris with the AHCI settings off, then switch
them on, it will fail to boot
I had to reinstall with the settings correct.
Well you probably didn't have to. Booting from the live CD and
importing the pool would have put things
On 05/23/10 08:43 AM, Brian wrote:
Is there a way within opensolaris to detect if AHCI is being used by various
controllers?
I suspect you may be accurate and AHCI is not turned on. The bios for this particular
motherboard is fairly confusing on the AHCI settings. The only setting I have is
This didn't work for me. I had the exact same issue a few days ago.
My motherboard had the following:
Native IDE
AHCI
RAID
Legacy IDE
so naturally i chose AHCI, but it ALSO had a mode called "IDE/SATA combined
mode"
I thought i needed this to use both the IDE and SATA ports, turns out it
w
this old thread has info on how to switch from ide->sata mode
http://opensolaris.org/jive/thread.jspa?messageID=448758
On Sat, May 22, 2010 at 5:32 PM, Ian Collins wrote:
> On 05/23/10 08:43 AM, Brian wrote:
>
>> Is there a way within opensolaris to detect if AHCI is being used by
>> vario
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
Not completely. I noticed my performance problem in my "tank" rather than my
rpool. But my rpool was sharing a controller (the motherboard controller) with
some devices in both the rpool and tank.
Ok. What worked for me was booting with the live CD and doing:
pfexec zpool import -f rpool
reboot
After that I was able to boot with AHCI enabled. The performance issues I was
seeing are now also gone. I am getting around 100 to 110 MB/s during a scrub.
Scrubs are completing in 20 minutes
GREAT, glad it worked for you!
On Sat, May 22, 2010 at 7:39 PM, Brian wrote:
> Ok. What worked for me was booting with the live CD and doing:
>
> pfexec zpool import -f rpool
> reboot
>
> After that I was able to boot with AHCI enabled. The performance issues I
> was seeing are now also gone
On 05/23/10 11:31 AM, Brian wrote:
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
From my /
I'm confused. I have a filesystem on server 1 called tank/nas/dump
I made a snapshot called first
zfs snapshot tank/nas/dump@first
then i did a zfs send/recv like:
zfs send tank/nas/dump@first | ssh wonsl...@192.168.1.xx "/bin/pfexec
/usr/sbin/zfs recv tank/nas/dump"
this worked fine, next
On 05/23/10 01:18 PM, Thomas Burgess wrote:
this worked fine. Next, today, i wanted to send what has changed
i did
zfs snapshot tank/nas/dump@second
now, here's where i'm confused. From reading the man page i thought
this command would work:
pfexec zfs send -i tank/nas/dump@first tank/n
On Sat, May 22, 2010 at 9:26 PM, Ian Collins wrote:
> On 05/23/10 01:18 PM, Thomas Burgess wrote:
>
>>
>> this worked fine. Next, today, i wanted to send what has changed
>>
>> i did
>> zfs snapshot tank/nas/dump@second
>>
>> now, here's where i'm confused. From reading the man page i thought
Hi,
I have fixed this problem a couple weeks ago, but haven't found the
time to report it until now.
Cindy Swearingen was very kind in contacting me to resolve this issue,
I would like to take this opportunity to express my gratitude to her.
We have not found the root cause of the error.
Brian wrote:
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
Look at these threads:
OpenSola
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Thomas Burgess
>
> but i get an error:
>
> cannot receive incremental stream: destination tank/nas/dump has been
> modified
> since most recent snapshot
Whenever you send a snap, and you inte
oh, this makes sense
let me ask a question though.
Lets say i have a filesystem
tank/something
i make the snapshot
tank/something@one
i send/recv it
then i do something (add a file... remove something, whatever) on the send
side, then i do a send/recv and force it on the next filesystem
On 05/23/10 03:56 PM, Thomas Burgess wrote:
let me ask a question though.
Lets say i have a filesystem
tank/something
i make the snapshot
tank/something@one
i send/recv it
then i do something (add a file...remove something, whatever) on the
send side, then i do a send/recv and force it of
ok, so forcing just basically makes it drop whatever "changes" were made.
That's what i was wondering... this is what i expected
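Pulling the exchange together, the working incremental round trip looks something like this (the remote user and host are elided in the archive, so user@host below is a stand-in):

```shell
# Incremental send of the changes between @first and @second.
zfs snapshot tank/nas/dump@second

# -F on the receive rolls the destination back to the most recent common
# snapshot first, discarding any local changes made on the receive side.
zfs send -i tank/nas/dump@first tank/nas/dump@second | \
    ssh user@host "pfexec /usr/sbin/zfs recv -F tank/nas/dump"
```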
On Sun, May 23, 2010 at 12:05 AM, Ian Collins wrote:
> On 05/23/10 03:56 PM, Thomas Burgess wrote:
>
>> let me ask a question though.
>>
>> Lets say i have a filesystem
On May 21, 2010, at 7:03 PM, Brandon High wrote:
> On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wrote:
>> shouldn't the newer server have LESS load?
>> Please forgive my ubernoobness.
>
> Depends on what it's doing!
>
> Load average is really how many process are waiting to run, so it's
> no