On 9 Dec. 2010, at 13:41, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> Also, if you have a NFS datastore, which is not available at the time of
> ESX
>> bootup, then the NFS datastore d
On 19 Nov. 2010, at 15:04, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Günther
>>
>> Disabling the ZIL (Don't)
>
> This is relative. There are indeed situations where it's acceptable to
> disable ZIL.
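For reference, a rough sketch of the two mechanisms that get discussed on the list for doing so; the dataset name below is hypothetical:
  # older builds: disables the ZIL for every pool on the system, requires a reboot
  echo "set zfs:zil_disable = 1" >> /etc/system
  # builds with the per-dataset sync property (roughly snv_140 and later)
  zfs set sync=disabled tank/nfs-datastore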
On 19 Nov. 2010, at 03:53, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>
>> SAS Controller
>> and all ZFS Disks/ Pools are passed-through to Nexenta to have full
> ZFS-Disk
>> control like on real hardware.
>
> This is precisely the thing I'm int
On 26 Oct. 2010, at 16:21, Matthieu Fecteau wrote:
> Hi,
>
> I'm planning to use the replication scripts on that page :
> http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
>
> It uses the timeslider (other way possible) to take snapshots, uses zfs
> send/rece
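The core of such a script is an incremental zfs send of the newest auto-snapshot piped into a receive on the remote side; a minimal sketch, with hypothetical dataset, snapshot and host names:
  zfs send -i tank/data@zfs-auto-snap_daily-2010-10-25 \
      tank/data@zfs-auto-snap_daily-2010-10-26 \
      | ssh backuphost zfs receive -F backup/data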
On 15 Oct. 2010, at 22:19, Ian D wrote:
> A little setback: We found out that we also have the issue with the Dell
> H800 controllers, not just the LSI 9200-16e. With the Dell it's initially
> faster as we benefit from the cache, but after a little while it goes sour,
> from 350MB/sec down
On 13 Oct. 2010, at 18:37, Marty Scholes wrote:
> The only thing that still stands out is that network operations (iSCSI and
> NFS) to external drives are slow, correct?
>
> Just for completeness, what happens if you scp a file to the three different
> pools? If the results are the same as NF
Just a note to pass on in case anyone runs into the same situation.
I have a DELL R510 that is running just fine, up until the day that I needed to
import a pool from a USB hard drive. I plug in the disk, check it with rmformat
and try to import the zpool. And it sits there for practically fore
On 15 Sept. 2010, at 22:04, Mike Mackovitch wrote:
> On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote:
>> any resolution to this issue? I'm experiencing the same annoying
>> lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver
>> 3. Would somehow going back to the earlier 8/
Hmmm, that's odd. I have a number of VMs running on NFS (hosted on ESX, rather
than Xen) with no problems at all. I did add a SLOG device to get performance
up to a reasonable level, but it's been running flawlessly for a few months
now. Previously I was using iSCSI for most of the connections,
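For anyone wanting to try the same, attaching a separate log device is a one-liner; a sketch with hypothetical pool and device names (mirroring the slog is the cautious choice):
  zpool add tank log c2t1d0
  # or, mirrored:
  zpool add tank log mirror c2t1d0 c2t2d0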
On 30 Apr. 2010, at 13:47, Euan Thoms wrote:
> Well, I'm so impressed with zfs at the moment! I just got steps 5 and 6 (from
> my last post) to work, and it works well. Not only does it send the increment
> over to the backup drive, but the latest increment/snapshot appears in the
> mounted filesys
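Those steps come down to an incremental send received into a pool on the backup drive; a minimal sketch with hypothetical names, after which the new snapshot can be browsed under the received dataset's mountpoint or its .zfs/snapshot directory:
  zfs send -i tank/data@backup-1 tank/data@backup-2 | zfs receive -F backup/data
  ls /backup/data/.zfs/snapshot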
No idea about the build quality, but is this the sort of thing you're looking
for?
Not cheap, integrated RAID (sigh), but one cable only
http://www.pc-pitstop.com/das/fit-500.asp
Cheap, simple, 4 eSATA connections on one box
http://www.pc-pitstop.com/sata_enclosures/scsat4eb.asp
Still cheap, us
On 19 Mar. 2010, at 17:11, Joerg Schilling wrote:
>> I'm curious: why isn't a 'zfs send' stream that is stored on a tape considered
>> a backup, yet the implication is that a tar archive stored on a tape is
>> considered a backup?
>
> You cannot get a single file out of the zfs send datastream.
zfs send is a bloc
On 18 Mar. 2010, at 15:51, Damon Atkins wrote:
> A system with 100TB of data is 80% full, and a user asks their local
> system admin to restore a directory with large files, as it was 30 days ago,
> with all Windows/CIFS ACLs and NFSv4 ACLs etc.
>
> If we used zfs send, we need to go back to
On 18 Mar. 2010, at 16:58, David Dyer-Bennet wrote:
> On Thu, March 18, 2010 04:50, erik.ableson wrote:
>
>> It would appear that the bus bandwidth is limited to about 10MB/sec
>> (~80Mbps) which is well below the theoretical 400Mbps that 1394 is
>> supposed to be able
An interesting thing I just noticed here testing out some Firewire drives with
OpenSolaris.
Setup :
OpenSolaris 2009.06 and a dev version (snv_129)
2 x 500GB Firewire 400 drives with integrated hubs for daisy-chaining (net: 4
devices on the chain)
- one SATA bridge
- one PATA bridge
Created a zp
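The excerpt cuts off at the pool creation step; for completeness, a minimal sketch of putting the two drives into a mirrored pool (the excerpt doesn't show whether a mirror or a stripe was actually used, and the device names are hypothetical):
  zpool create fwpool mirror c8t0d0 c9t0d0
  zpool status fwpool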
On 16 Mar. 2010, at 21:00, Marc Nicholas wrote:
> On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen wrote:
>
> > I'll write you a Perl script :)
>
> I think there are ... several people that'd like a script that gave us
> back some of the ease of the old shareiscsi one-off, instead of having
> to s
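What shareiscsi=on used to do in one property now takes a handful of COMSTAR commands, which is what such a script would wrap; a sketch with a hypothetical zvol name:
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  zfs create -V 100g tank/iscsivol
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol   # prints the LU GUID
  stmfadm add-view <GUID-from-sbdadm-output>      # expose the LU to all initiators
  itadm create-target                             # create a default iSCSI target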
I've found that the NFS host-based settings require the FQDN, and that the
reverse lookup must be available in your DNS.
Try "rw,root=host1.mydomain.net"
Cheers,
Erik
On 10 Mar. 2010, at 05:47, mingli wrote:
> And I update the sharenfs option with "rw,ro...@100.198.100.0/24", it works
> fin
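In other words, the host form wants a name that resolves both ways, while a whole subnet can be granted with the @network/prefix syntax and involves no DNS lookup at all; a sketch, with a hypothetical dataset and network:
  zfs set sharenfs=rw,root=host1.mydomain.net tank/datastore
  # or open it to a subnet without relying on DNS:
  zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/datastore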
On 8 Mar. 2010, at 11:33, Svein Skogen wrote:
> Let's say for a moment I should go for this solution, with the rpool tucked
> away on a USB stick in the same case as the LTO-3 tapes it "matches"
> timelinewise (I'm using HP C8017A kits) as a zfs send -R to a file on the
> USB stick. (If, and
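Mechanically that would look something like the sketch below (snapshot name and mount path are hypothetical); the usual list caveat applies: a stream stored as a file can only be restored in full, and a single corrupted bit can make the whole file unreceivable.
  zfs snapshot -r rpool@tapeset1
  zfs send -R rpool@tapeset1 > /media/usbstick/rpool-tapeset1.zfs
  # and later, to restore:
  # zfs receive -Fd rpool < /media/usbstick/rpool-tapeset1.zfs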
On 27 Jan. 2010, at 12:10, Georg S. Duck wrote:
> Hi,
> I was suffering for weeks from the following problem:
> a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of
> data. The dataset was deprecated, so I chose to destroy it after I had
> deleted some files; eventually
On 24 Jan. 2010, at 08:36, Erik Trimble wrote:
> These days, I've switched to 2.5" SATA laptop drives for large-storage
> requirements.
> They're going to cost more $/GB than 3.5" drives, but they're still not
> horrible ($100 for a 500GB/7200rpm Seagate Momentus). They're also easier to
> cr
On 21 Jan. 2010, at 22:55, Daniel Carosone wrote:
> On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
>
>> What I'm trying to get a handle on is how to estimate the memory
>> overhead required for dedup on that amount of storage.
>
> We'd a
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage. From what I gather
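As a rough back-of-the-envelope, assuming an average block size of 128KB and the commonly cited figure of roughly 250-320 bytes of DDT per unique block: 1.7TB / 128KB is about 13 million blocks, which puts the dedup table somewhere around 3-4GB, all of which wants to stay in ARC (or at least L2ARC) for write performance to hold up. A smaller average block size pushes that figure up proportionally.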
Or in OS X with smart folders, where you define a set of search terms
and, as write operations occur on the known filesystems, the folder
contents will be updated to reflect the current state of the attached
filesystems.
The structures you defined seemed to be designed around the idea of
On 13 Oct. 2009, at 15:24, Derek Anderson wrote:
Simple answer: Man hour math. I have 150 virtual machines on these
disks for shared storage. They hold no actual data so who really
cares if they get lost. However 150 users of these virtual machines
will save 5 minutes or so every day of
Depending on the data content that you're dealing with, you can compress the
snapshots inline with the send/receive operations by piping the data
through gzip. Given that we've been talking about 500MB text files,
this seems to be a very likely solution. There was some mention in the
Kernel Keyn
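A minimal sketch of that pipeline, with hypothetical dataset, snapshot and host names:
  zfs send tank/text@snap1 | gzip -c | ssh backuphost "gunzip -c | zfs receive -F backup/text"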
Heh :-) Disk usage is directly related to available space.
At home I have a 4x1TB raidz filled to overflowing with music, photos,
movies, archives, and backups for 4 other machines in the house. I'll
be adding another 4 and an SSD shortly.
It starts with importing CDs into iTunes or WMP, t
On 7 Aug. 09, at 02:03, Stephen Green wrote:
I used a 2GB ram disk (the machine has 12GB of RAM) and this jumped
the backup up to somewhere between 18-40MB/s, which means that I'm
only a couple of hours away from finishing my backup. This is, as
far as I can tell, magic (since I started th
You're running into the same problem I had with 2009.06, as they have
"corrected" a bug where the iSCSI target prior to 2009.06 didn't completely
honor SCSI sync commands issued by the initiator.
Some background :
Discussion:
http://opensolaris.org/jive/thread.jspa?messageID=388492
"correcte
The zfs send command generates a differential file between the two
selected snapshots so you can send that to anything you'd like. The
catch of course is that then you have a collection of files on your
Linux box that are pretty much useless since you can't mount them or
read the contents
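For example (all names hypothetical), the differential can be parked as a plain file on the Linux box, and it only becomes useful again once it is fed back into zfs receive on a ZFS host:
  zfs send -i tank/fs@monday tank/fs@tuesday | ssh linuxbox "cat > /backup/fs_mon-tue.zfs"
  # restore path, back on the ZFS side:
  # ssh linuxbox cat /backup/fs_mon-tue.zfs | zfs receive tank/fs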
ync mount, but I can't set this on the server side
where it would be used by the servers automatically.
Erik
erik.ableson wrote:
OK - I'm at my wit's end here as I've looked everywhere to find
some means of tuning NFS performance with ESX into returning
something accepta
OK - I'm at my wit's end here as I've looked everywhere to find some
means of tuning NFS performance with ESX into returning something
acceptable using osol 2008.11. I've eliminated everything but the NFS
portion of the equation and am looking for some pointers in the right
direction.
Co
This is something that I've run into as well across various installs
very similar to the one described (PE2950 backed by an MD1000). I
find that overall the write performance across NFS is absolutely
horrible on 2008.11 and 2009.06. Worse, I use iSCSI under 2008.11 and
it's just fine with
On 7 May 09, at 04:03, Adam Leventhal wrote:
After all this discussion, I am not sure if anyone adequately answered the
original poster's question as to whether a 2540 with SAS 15K drives would
provide substantial synchronous write throughput improvement when used as
an L2ARC device.
I
Begin forwarded message:
From: "erik.ableson"
Date: 17 April 2009 13:15:21 CEST
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] sharenfs settings ignored
Hi there,
I'm working on a new OS 2008.11 setup here and running into a few
issues with the nfs integration. Notably, it appears that subnet
valu
Hi there,
I'm working on a new OS 2008.11 setup here and running into a few
issues with the nfs integration. Notably, it appears that subnet
values attributed to sharenfs are ignored and gives back a permission
denied for all connection attempts. I have another environment where
permissi