, will they?
Solaris 11.1 has ZFS with SCSI UNMAP support.
Seem to have skipped that one... Are there any related tools e.g. to
release all "zero" blocks or the like? Of course it's up to the admin
then to know what all this is about or to wreck th
Thanks for all the answers (more inline)
On 01/18/2013 02:42 AM, Richard Elling wrote:
> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
>
>> On Wed, 16 Jan 2013, Thomas Nau wrote:
>>
>>> Dear all
>>> I&
ading through all the blocks and we hardly
see network average traffic going over 45MB/s (almost idle 1G link).
So here's the question: would increasing/decreasing the volblocksize improve
the send/receive operation, and what influence might it have on the iSCSI side?
Thanks for any h
Jamie
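(For anyone hitting this in the archive: a minimal sketch of how to compare volblocksize settings. The dataset and snapshot names below are placeholders, and volblocksize cannot be changed after creation, so trying a different value means creating a new zvol and copying the data over.)
# check the block size of the existing zvol (placeholder name)
zfs get volblocksize,used tank/iscsivol
# create a test zvol with a larger block size for comparison
zfs create -V 500G -o volblocksize=64K tank/iscsivol-64k
# rough way to time the raw send stream with the network taken out of the picture
time zfs send tank/iscsivol@testsnap > /dev/null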
We ran into the same and had to migrate the pool while imported read-only. On
top we were advised to NOT use an L2ARC. Maybe you should consider that as well.
Thomas
On 12.12.2012 at 19:21, Jamie Krier wrote:
> I've hit this bug on four of my Solaris 11 servers. Looking for any
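(A sketch of the read-only migration mentioned above, with placeholder pool names; -o readonly=on needs a reasonably recent zpool, and only snapshots that already exist can be sent, since none can be created while the pool is read-only.)
# import the affected pool without letting anything write to it
zpool import -o readonly=on tank
# stream an existing snapshot hierarchy over to a freshly built pool
zfs send -R tank@lastgood | zfs recv -d newpool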
ve to rewrite a
tool to do it?
Subsidiary question: is there an official response from Oracle for such a
case? How do they "officially" deal with binary-copied disks, as it's
common to do such copies with UFS when cloning SAP environments or databases...
Thanks
On Thu, 11 Oct 2012, Richard Elling wrote:
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom
wrote:
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire di
On Thu, 11 Oct 2012, Freddie Cash wrote:
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way.
According to a Sun document called something like 'ZFS best practice' I
read some time ago, best practice was to use the entire disk for ZFS and
not to partition or slice it in any way. Does this advice hold good for
FreeBSD as well?
I looked at a server earlier this week that was running Free
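(For reference, the two layouts being compared, with placeholder device names; on Solaris, handing ZFS the whole disk lets it write an EFI label and manage the device itself, while the s0 form ties the pool to a slice you partitioned beforehand.)
# whole-disk vdev: ZFS labels and manages the entire device
zpool create tank c1t0d0
# slice-based vdev: uses only the s0 slice created with format/fdisk
zpool create tank c1t0d0s0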
I have a ZFS filesystem and create weekly snapshots over a period of 5
weeks called week01, week02, week03, week04 and week05 respectively. My
question is: how do the snapshots relate to each other - does week03
contain the changes made since week02 or does it contain all the changes
made since
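(A short illustration, using a placeholder dataset name: each snapshot is a complete point-in-time view of the filesystem, not a delta; the "changes since week02" only materialize when you ask for an incremental stream between two snapshots.)
# every snapshot lists the full state of the filesystem at that moment
zfs list -t snapshot -r tank/home
# an incremental stream carries only what changed between week02 and week03
zfs send -i tank/home@week02 tank/home@week03 > /backup/week02-to-week03.zfs
# destroying week02 does not lose blocks still referenced by week03 or the live fs
zfs destroy tank/home@week02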
then change it back to start at cylinder 1.
I always leave cylinder 0 alone since then.
Thomas
2012-06-16 18:23, Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1, do not use 0
There is no requirement for skipping
Dear all
I'm about to answer my own question with some really useful hints
from Steve, thanks for that!!!
On 03/02/2012 07:43 AM, Thomas Nau wrote:
> Dear all
> I asked before but without much feedback. As the issue
> is persistent I want to give it another try. We disabled
>
there any way to identify which object (file?)
causes this?
Any hints are greatly appreciated
Thomas
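(One way to chase an object number back to a file, assuming you have the dataset and object number from the error or panic message; both names below are placeholders.)
# dump the dnode; for a plain file the output includes its path
zdb -ddddd tank/somefs 12345
# for checksum errors, zpool status -v already resolves file paths where it can
zpool status -v tank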
On Thu, 16 Feb 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of andy thomas
One of my most vital servers is a Netra 150 dating from 1997 - still going
strong, crammed with 12 x 300 Gb disks and running Solaris 9
On Wed, 15 Feb 2012, David Dyer-Bennet wrote:
While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).
One of my most vital servers is a Netra 15
On Tue, 14 Feb 2012, Richard Elling wrote:
Hi Andy
On Feb 14, 2012, at 10:37 AM, andy thomas wrote:
On one of our servers, we have a RAIDz1 ZFS pool called 'maths2' consisting of
7 x 300 Gb disks which in turn contains a single ZFS filesystem called 'home'.
Yest
& running for nearly a
year with no problems to date - there are two other RAIDz1 pools on this
server but these are working fine.
Andy
-
Andy Thomas,
Time Domain Systems
Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
once a week to tape; depends on how much
time the TSM client needs to walk the filesystems
> You need to answer all these questions first
did so
Thomas
Bob,
On 01/31/2012 09:54 PM, Bob Friesenhahn wrote:
> On Tue, 31 Jan 2012, Thomas Nau wrote:
>
>> Dear all
>> We have two JBODs with 20 or 21 drives available per JBOD hooked up
>> to a server. We are considering the following setups:
>>
>> RAIDZ2 made of
effective but the system goes down when
a JBOD goes down. Each of the JBODs comes with dual controllers, redundant
fans and power supplies, so do I need to be paranoid and use option #1?
Of course it also gives us more IOPS but high-end logging devices should take
care of that
Thanks for any h
ur workload as
well as you are able to describe it, but it takes some time to get things
set up if you cannot find your workload in one of the many provided
examples
Thomas
Does anyone know where I can still find the SUNWsmbs and SUNWsmbskr
packages for the Sparc version of OpenSolaris? I wanted to experiment with
ZFS/CIFS on my Sparc server but the ZFS share command fails with:
zfs set sharesmb=on tank1/windows
cannot share 'tank1/windows': smb ad
Dear all
We use a STEC ZeusRAM as a log device for a 200TB RAID-Z2 pool.
As they are supposed to be read only after a crash or when booting and
those nice things are pretty expensive I'm wondering if mirroring
the log devices is a "must / highly recommende
128 87486464 87486464 87486464 90 0 0 90
6925136 6925136 6925136 86118400 86118400 86118400 83 0 0 83
So does it look good, bad or ugly? ;)
Thomas
Tim,
the client is identical to the server but has no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.
Thomas
On 18.08.2011 at 17:49, Tim Cook wrote:
> What are the specs on the client?
>
> On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote:
> > Dea
Have you already extracted the core file from the kernel crash?
(and by the way, activated a dump device so that such a dump happens at the next reboot...)
Have you also tried applying the latest kernel/zfs patches and try
importing the pool afterwards ?
Thomas
On 08/18/2011 06:40 PM, Stu Whitefish wrote:
Hi
.6ms/op 683us/op-cpu
Disabling the ZIL is not an option but I expected much better performance,
especially as the ZeusRAM only gets us a speed-up of about 1.8x
Is this test realistic for a typical fileserver scenario or does it require many
more clients to push t
You're probably hitting bug 7056738 -> http://wesunsolve.net/bugid/id/7056738
Looks like it's not fixed yet @ oracle anyway...
Were you using crypto on your datasets ?
Regards,
Thomas
On Tue, 16 Aug 2011 09:33:34 -0700 (PDT)
Stu Whitefish wrote:
> - Original Message -
On Sat, 13 Aug 2011, Joerg Schilling wrote:
andy thomas wrote:
What 'tar' program were you using? Make sure to also try using the
Solaris-provided tar rather than something like GNU tar.
I was using GNU tar actually as the original archive was created on a
Linux machine. I w
On Sat, 13 Aug 2011, Bob Friesenhahn wrote:
On Sat, 13 Aug 2011, andy thomas wrote:
However, one of our users recently put a 35 Gb tar.gz file on this server
and uncompressed it to a 215 Gb tar file. But when he tried to untar it,
after about 43 Gb had been extracted we noticed the disk usage
t are there any
other things I should take into consideration? It's not a major problem as
the system is intended for storage and users are not supposed to go in and
untar huge tarfiles on it as it's not a fast system ;-)
Andy
--------
Andy Thomas,
Time Domain Syste
in FreeBSD, a little bit
> more
> tricky in OpenIndiana (patches and source are available for a few different
> implementations). Or you can just trick them out by starting the pool with a
> 4K
> sector device that doesn't lie (eg, iscsi target).
Are you referring to the &quo
Hi,
I am testing Solaris Express 11 with napp-it on two machines. In both
cases the same problem: Enabling encryption on a folder, filling it with
data will result in errors indicated by a subsequent scrub. I did not
find the topic on the web, nor any experiences shared by people
using e
any times in the docs: these failover backplanes
> require use of SAS drives, no SATA (while the single-path BPs are okay with
> both
> SAS and SATA). Still, according to the forums, SATA disks on shared backplanes
> often give too much headache and may give too little performance in
> com
Dear all
Sorry if it's kind of off-topic for the list but after talking
to lots of vendors I'm running out of ideas...
We are looking for JBOD systems which
(1) hold 20+ 3.5" SATA drives
(2) are rack mountable
(3) have all the nice hot-swap stuff
(4) allow 2 hosts to connect via SAS (4+ lines
So there is no current way to specify the creation of a 3 disk raid-z
array with a known missing disk?
On 12/5/06, David Bustos wrote:
> Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
> > I currently have a 400GB disk that is full of data on a linux system.
> >
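(The workaround usually suggested for the question above: stand in a sparse file for the missing disk and offline it right away. Sizes and device names are placeholders, and the pool runs degraded until the real disk is attached.)
# sparse file the size of the future third disk; no space is actually allocated
mkfile -n 400g /var/tmp/fake-disk
# build the 3-way raidz with the sparse file as the third member
zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fake-disk
# take the placeholder offline immediately so nothing is ever written to it
zpool offline tank /var/tmp/fake-disk
# when the real disk shows up, swap it in and let it resilver
zpool replace tank /var/tmp/fake-disk c0t3d0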
I'm having some very strange NFS issues that are driving me somewhat mad.
I'm running b134 and have been for months now, without issue. Recently I
enabled 2 services to get Bonjour notifications working in OS X:
/network/dns/multicast:default
/system/avahi-bridge-dsd:default
and i added a few .se
Thanks, I'm going to do that. I'm just worried about corrupting my data, or
other problems. I wanted to make sure there is nothing I really should be
careful with.
Hi all
I'm currently moving a fairly big dataset (~2TB) within the same zpool. Data is
being moved from one dataset to another, which has dedup enabled.
The transfer started at quite a slow speed — maybe 12MB/s. But it is
now crawling to a near halt. Only 800GB has been moved in 48 hours
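(If anyone wants to check whether the dedup table still fits in memory while such a copy crawls, something like this, with a placeholder pool name; each DDT entry costs a few hundred bytes of RAM or L2ARC, so the histogram gives a rough memory footprint.)
# histogram and total entry count of the dedup table
zdb -DD tank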
You are saying ZFS will detect and rectify this kind of corruption in a
> deduped pool automatically if enough redundancy is present? Can that fail
> sometimes? Under what conditions?
>
> I would hate to restore a 1.5TB pool from backup just because one 5MB file
> is gone bust. And I have a known g
you can upgrade by changing to the dev repository, or if you don't mind
re-installing you can download the b134 image at genunix
http://www.genunix.org/
On Sat, Aug 21, 2010 at 1:25 AM, Long Tran wrote:
> Hi,
> I hit a ZFS bug that should be resolved in snv 134 or later.
> I'm running S
I've been running opensolaris for months, and today while poking around, i
noticed a ton of errors in my logs...I'm wondering what they mean and if
it's anything to worry about
I've found a few things on google but not a whole lot...anyways, here's a
pastie of the log
http://pastie.org/1104916
alldefault
> cn03/3  usedbysnapshots       46.8G  -
> cn03/3  usedbydataset         154K   -
> cn03/3  usedbychildren        456G   -
> cn03/3  usedbyrefreservation  0      -
> cn03/3  lo
as for the difference between the two df's, one is the GNU df (like you'd
have on Linux) and the other is the Solaris df.
2010/8/20 Thomas Burgess
> can't the "zfs" command provide that information?
>
>
> 2010/8/20 Fred Liu
>
> Can you shed mor
can't the "zfs" command provide that information?
2010/8/20 Fred Liu
> Can you shed more lights on **other commands** which output that
> information?
>
> Appreciations.
>
>
>
> Fred
>
>
>
> *From:* Thomas Burgess [mailto:wonsl...@gmail.com
df serves a purpose though.
There are other commands which output that information..
On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu wrote:
> Not sure if there was similar threads in this list before.
> Three scenarios:
> 1): df cannot count snapshot space in a file system with quota set.
> 2): df ca
On Thu, Aug 19, 2010 at 4:33 PM, Mike Kirk wrote:
> Hi all,
>
> Halcyon recently started to add ZFS pool stats to our Solaris Agent, and
> because many people were interested in the previous OpenSolaris beta* we've
> rolled it into our OpenSolaris build as well.
>
> I've already heard some great
On Mon, Aug 16, 2010 at 11:17 PM, Frank Cusack
wrote:
> On 8/16/10 9:57 AM -0400 Ross Walker wrote:
>
>> No, the only real issue is the license and I highly doubt Oracle will
>> re-release ZFS under GPL to dilute its competitive advantage.
>>
>
> You're saying Oracle wants to keep zfs out of Linu
On Wed, Aug 11, 2010 at 4:05 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:
> Someone posted about CERN having a bad network card which injected faulty
> bits into the data stream. And ZFS detected it, because of end-to-end
> checksum. Does anyone has more information on this?
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps wrote:
> Hi Eric,
>
> Thank you for your help. At least one part is clear now.
>
> I still am confused about how the system is still functional after one disk
> fails.
>
> Consider my earlier example of 3 disks zpool configured for raidz-1. To
> keep i
On Fri, Aug 6, 2010 at 6:44 AM, P-O Yliniemi wrote:
> Hello!
>
> I have built a OpenSolaris / ZFS based storage system for one of our
> customers. The configuration is about this:
>
> Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't remember
> and do not have my specification n
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie wrote:
> I see I have already received several replies, thanks to all!
>
> I would not like to risk losing any data, so I believe a ZIL device would
> be the way for me. I see
> these exists in different prices. Any reason why I would not buy a cheap
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie wrote:
> Hi,
>
> I've been searching around on the Internet to fine some help with this, but
> have been
> unsuccessfull so far.
>
> I have some performance issues with my file server. I have an OpenSolaris
> server with a Pentium D
> 3GHz CPU, 4GB of
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:
> Are there any drawbacks to partition a SSD in two parts and use L2ARC on
> one partition, and ZIL on the other? Any thoughts?
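(For the archive, the mechanics of the split itself, with placeholder slice names; whether it pays off depends on how much sync-write and read traffic end up fighting over the single SSD.)
# one slice as a separate intent log, the other as L2ARC
zpool add tank log c2t0d0s0
zpool add tank cache c2t0d0s1
# both should now show up under their own headings
zpool status tank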
I've found the Seagate 7200.12 1tb drives and Hitachi 7k2000 2TB drives to
be by far the best.
I've read lots of horror stories about any WD drive with 4k
sectors...it's best to stay away from them.
I've also read plenty of people say that the green drives are terrible.
>
>
> Conclusion: This device will make an excellent slog device. I'll order
> them today ;)
>
>
I have one and i love it...I sliced it though, used 9 gb for ZIL and the
rest for L2ARC (my server is on a smallish network with about 10 clients)
It made a huge difference in NFS performance and other
p://www.maier-komor.de/mbuffer.html
binary package: http://www.opencsw.org/packages/CSWmbuffer/
- Thomas
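(Typical way mbuffer gets dropped into a send/receive pipeline; host and dataset names are placeholders. The buffers on both ends smooth out zfs send's bursty output so the link stays busy.)
# buffer locally, push the stream through ssh, buffer again on the receiver
zfs send tank/data@snap | mbuffer -s 128k -m 1G | \
    ssh receiver "mbuffer -s 128k -m 1G | zfs recv -d backup"
If ssh turns out to be the bottleneck, mbuffer can also carry the stream over its own TCP socket (-O on the sending side, -I on the receiving side).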
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. wrote:
> Oh! Yes. dedup. not compression, but dedup, yes.
dedup may be your problem...it requires some heavy RAM and/or a decent L2ARC
from what i've been reading.
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen wrote:
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason for
> this.
> >
> > I'm getting bad to medium performance with my new test storage device.
> I've got 24 1.5T
>
>
>
> Also, the disks were replaced one at a time last year from 73GB to 300GB to
> increase the size of the pool. Any idea why the pool is showing up as the
> wrong size in b134 and have anything else to try? I don't want to upgrade
> the pool version yet and then not be able to revert back...
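(What usually sorts out the "replaced every disk but the pool still shows the old size" situation, with a placeholder pool name; it is a pool property rather than a version upgrade, so it should not affect the ability to revert.)
# grow automatically once every device in the vdev is larger
zpool set autoexpand=on tank
# or expand the already-replaced devices one by one
zpool online -e tank c0t1d0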
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen wrote:
> Hi,
>
> I known it's been discussed here more than once, and I read the
> Evil tuning guide, but I didn't find a definitive statement:
>
> There is absolutely no sense in having slog devices larger than
> the main memory, because it will neve
Arne,
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpool
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in use) the machine pan
Thanks for the link Arne.
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpool
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in
in production after pulling data from
the backup tapes. Scrubbing didn't show any error so any idea what's
behind the problem? Any chance to fix the FS?
Thomas
---
panic[cpu3]/thread=ff0503498400: BAD TRAP: type=e (#pf Page fault)
rp=ff001e937320 addr=20 occurred in module &quo
On Sun, Jun 13, 2010 at 12:18 AM, Joe Auty wrote:
> Thomas Burgess wrote:
>
>
>> Yeah, this is what I was thinking too...
>>
>> Is there any way to retain snapshot data this way? I've read about the ZFS
>> replay/mirror features, but my impression was that
>
>
> Yeah, this is what I was thinking too...
>
> Is there any way to retain snapshot data this way? I've read about the ZFS
> replay/mirror features, but my impression was that this was more so for a
> development mirror for testing rather than a reliable backup? This is the
> only way I know of
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-dis
sw.org.
If your system is really running on VirtualBox I'd recommend that you
turn off disk write caching in VirtualBox. Search the OpenSolaris forum
of VirtualBox. There is an article somewhere on how to do this. IIRC the
subject is something like 'zfs pool curruption'. But it is also
somewhere in the docs.
HTH,
Thomas
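(A guess at the VirtualBox-side setting being referred to; the VM and controller names are placeholders and the article may recommend a different knob, so treat this as a sketch rather than the documented fix.)
# disable the host I/O cache on the guest's storage controller
VBoxManage storagectl "opensolaris-vm" --name "SATA Controller" --hostiocache off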
I thought it did...I couldn't imagine Sun using that chip in the original
Thumper if it didn't support NCQ...also, I've read where people have had to
DISABLE NCQ on this driver to fix one bug or another (as a workaround)
On Wed, May 26, 2010 at 8:40 PM, Marty Faltesek
wrote:
> On Wed, 2010-05
On Wed, May 26, 2010 at 5:47 PM, Brandon High wrote:
> On Sat, May 15, 2010 at 4:01 AM, Marc Bevand wrote:
> > I have done quite some research over the past few years on the best (ie.
> > simple, robust, inexpensive, and performant) SATA/SAS controllers for
> ZFS.
>
> I've spent some time lookin
Also, let me note, it came with a 3 year warranty so I expect it to last at
least 3 years...but if it doesn't, i'll just return it under the warranty.
On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wrote:
>
>
> On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
> bfr
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 24 May 2010, Thomas Burgess wrote:
>
>>
>> It's a SandForce SF-1500 model but without a supercap...here's some info
>> on it:
>>
>> Maximum Performan
>
>
> At least to me, this was not clearly "not asking about losing zil" and was
> not clearly "asking about power loss." Sorry for answering the question
> you
> thought you didn't ask.
>
I was only responding to your response of WRONG!!! The guy wasn't wrong in
regards to my questions. I'm s
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Nicolas Williams
> >
> > > I recently got a new SSD (ocz vertex LE 50gb)
> > >
> > > It seems to work really well as a ZIL perform
i am running the last release from the genunix page
uname -a output:
SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris
On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:
> Hi Thomas,
>
> This looks like a display bug. I'm se
Is there a best practice on keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
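(For what it's worth: the file lives at /etc/zfs/zpool.cache and is rewritten whenever pool or vdev configuration changes; it is not required for recovery, since pools can be imported by scanning devices, but a saved copy can speed up import on systems with many disks. A rough sketch with placeholder paths:)
# keep a copy alongside other configuration backups
cp /etc/zfs/zpool.cache /backup/zpool.cache
# import using the saved cache file instead of scanning every device
zpool import -c /backup/zpool.cache -a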
The last couple times I've read this question, people normally responded
with:
It depends
you might not even NEED a slog, there is a script floating around which can
help determine that...
If you could benefit from one, it's going to be IOPS which help you...so if
the usb drive has more io
I was just wondering:
I added a SLOG/ZIL to my new system today...I noticed that the L2ARC shows
up under its own heading...but the SLOG/ZIL doesn't...is this correct?
see:
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
>
>
>
> From earlier in the thread, it sounds like none of the SF-1500 based
> drives even have a supercap, so it doesn't seem that they'd necessarily
> be a better choice than the SLC-based X-25E at this point unless you
> need more write IOPS...
>
> Ray
>
I think the upcoming OCZ Vertex 2 Pro wi
>
>
> Not familiar with that model
>
>
It's a SandForce SF-1500 model but without a supercap...here's some info on
it:
Maximum Performance
- Max Read: up to 270MB/s
- Max Write: up to 250MB/s
- Sustained Write: up to 235MB/s
- Random Write 4k: 15,000 IOPS
- Max 4k IOPS: 50,00
>
>
> ZFS is always consistent on-disk, by design. Loss of the ZIL will result
> in loss of the data in the ZIL which hasn't been flushed out to the hard
> drives, but otherwise, the data on the hard drives is consistent and
> uncorrupted.
>
>
>
> This is what i thought. I have read this list on
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL performance wise. My question is, how
safe is it? I know it doesn't have a supercap so let's say data loss
occurs...is it just data loss or is it pool loss?
also, does the fact that i have a UPS matter?
the nu
never mind...just found more info on this...should have held back from
asking
On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess wrote:
> did this come out?
>
> http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
>
> i was googling trying to find info about the n
did this come out?
http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
i was googling trying to find info about the next release and ran across
this
Does this mean it's actually about to come out before the end of the month
or is this something else?
ok, so forcing just basically makes it drop whatever "changes" were made.
That's what I was wondering...this is what I expected.
On Sun, May 23, 2010 at 12:05 AM, Ian Collins wrote:
> On 05/23/10 03:56 PM, Thomas Burgess wrote:
>
>> let me ask a question though.
&
will the new recv'd filesystem be identical to the original forced snapshot
or will it be a combination of the 2?
On Sat, May 22, 2010 at 11:50 PM, Edward Ned Harvey
wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On
On Sat, May 22, 2010 at 9:26 PM, Ian Collins wrote:
> On 05/23/10 01:18 PM, Thomas Burgess wrote:
>
>>
>> this worked fine, next today, i wanted to send what has changed
>>
>> i did
>> zfs snapshot tank/nas/d...@second
>>
>> now, heres w
I'm confused...I have a filesystem on server 1 called tank/nas/dump
I made a snapshot called first
zfs snapshot tank/nas/d...@first
then i did a zfs send/recv like:
zfs send tank/nas/d...@first | ssh wonsl...@192.168.1.xx "/bin/pfexec
/usr/sbin/zfs recv tank/nas/dump"
this worked fine, next
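(The incremental step that usually follows, spelled out; the dataset name is the one from the first message, but the ssh target is a placeholder since the archive obscures it. -i sends only the delta between the two snapshots, and -F lets the receiving side roll back any local changes before applying it.)
zfs snapshot tank/nas/dump@second
zfs send -i tank/nas/dump@first tank/nas/dump@second | \
    ssh user@receiver "pfexec /usr/sbin/zfs recv -F tank/nas/dump"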
GREAT, glad it worked for you!
On Sat, May 22, 2010 at 7:39 PM, Brian wrote:
> Ok. What worked for me was booting with the live CD and doing:
>
> pfexec zpool import -f rpool
> reboot
>
> After that I was able to boot with AHCI enabled. The performance issues I
> was seeing are now also gone
this old thread has info on how to switch from ide->sata mode
http://opensolaris.org/jive/thread.jspa?messageID=448758
On Sat, May 22, 2010 at 5:32 PM, Ian Collins wrote:
> On 05/23/10 08:43 AM, Brian wrote:
>
>> Is there a way within opensolaris to detect if AHCI is being used by
>> vario
s, turns out it
was basically an IDE emulation mode for SATA; long story short I ended up
with OpenSolaris installed in IDE mode.
I had to reinstall. I tried the livecd/import method and it still failed to
boot.
On Sat, May 22, 2010 at 5:30 PM, Ian Collins wrote:
> On 05/23/10 08:52 AM, Th
just to make sure i understand what is going on here,
you have a rpool which is having performance issues, and you discovered ahci
was disabled?
you enabled it, and now it won't boot. correct?
This happened to me and the solution was to export my storage pool and
reinstall my rpool with the ah
If you install OpenSolaris with the AHCI settings off, then switch them on,
it will fail to boot.
I had to reinstall with the settings correct.
the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on
if not, then you may need to reinstall with it on (for
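(Roughly what the cfgadm check looks like when AHCI is active; controller and target numbers below are just examples.)
# with AHCI enabled the disks appear as sata attachment points
cfgadm -al | grep sata
# sata0/0::dsk/c4t0d0   disk   connected   configured   ok
# sata0/1::dsk/c4t1d0   disk   connected   configured   ok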
i only care about the most recent snapshot, as this is a growing video
collection.
I do have snapshots, but I only keep them for when/if I accidentally delete
something, or rename something wrong.
On Sat, May 22, 2010 at 3:43 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 10:22 PM, Tho
i don't think there is but it's dirt simple to install.
I followed the instructions here:
http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:
>
install smartmontools
There is no package for it but it's EASY to install
once you do, you can get output like this:
pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://sm
source. If you don't, there's nothing you can do.
>
> It probably taking a while to restart because the sends that were
> interrupted need to be rolled back.
>
> Sent from my Nexus One.
>
> On May 21, 2010 9:44 PM, "Thomas Burgess" wrote:
>
> I can
well it wasn't.
it was running pretty slow.
I had one "really big" filesystem...with rsync I'm able to do multiple
streams and it's moving much faster
On Sat, May 22, 2010 at 1:45 AM, Ian Collins wrote:
> On 05/22/10 05:22 PM, Thomas Burgess wrote:
>
>&g
    3.1    4.2   6  13 c6t6d0
    0.9  201.9   34.2 25338.0  3.8  0.5   18.9    2.6  51  52 c8t5d0
    0.0    0.0    0.0     0.0  0.0  0.0    0.0    0.0   0   0 c4t7d0
On Sat, May 22, 2010 at 12:26 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess
> wrote:
> &
yah, it seems that rsync is faster for what I need anyways...at least right
now...
On Sat, May 22, 2010 at 1:07 AM, Ian Collins wrote:
> On 05/22/10 04:44 PM, Thomas Burgess wrote:
>
>> I can't tell you for sure
>>
>> For some reason the server lost power an