I have a 12-disk ZFS volume, and this morning I tried to look at some data on it
and the 'ls' command just hung. So I ran 'zpool status', which also proceeded
to hang and return no data. I had to leave for work, and when I came home the
computer had shut down, which is weird. So I started the m
Leave the default recordsize. With a 128K recordsize, files smaller than
128K are stored as a single record,
tightly fitted to the smallest possible number of disk sectors. Reads and
writes are then handled with fewer ops.
Not tuning the recordsize is generally more space efficient and
more perf
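For reference, a quick way to confirm what a dataset is actually using (the dataset name below is just a placeholder; the property names are standard ZFS):

  zfs get recordsize tank/data          # 128K unless someone changed it
  zfs inherit recordsize tank/data      # puts a tuned dataset back to the default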
> "da" == David Abrahams <[EMAIL PROTECTED]> writes:
da> Is there a cheaper alternative that will securely and
da> persistently store a copy of my data offsite?
Rented dedicated servers with disks in them? I have not shopped for
this, but for backups it just needs to not lose your da
Hi,
I rebooted the server after I submitted the information to release the locks
set up on my ESX host.
After the reboot, I reran 'iscsitadm list target -v' and the GUIDs showed up.
Only remaining problem: the GUIDs are identical (any problem with that?)
[EMAIL PROTECTED]:~# iscsitadm li
Hello Tano,
The issue here is not the target or VMware, but a missing GUID on the target.
Observe the target's SMF properties using:
iscsitadm list target -v
You have
iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
Connections: 0
ACL list:
TPGT list:
TPG
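For anyone following along at home, a quick way to check for the missing value (the exact output layout varies a little between builds, so treat this as a sketch):

  iscsitadm list target -v | grep -i guid
  # a healthy LUN entry shows a non-empty GUID; an empty or missing line
  # here matches the symptom described above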
Marcelo Leal <[EMAIL PROTECTED]> wrote:
> Hello all,
> I think he has a point here... maybe that would be an interesting
> feature for that kind of workload. Caching all the metadata would make
> the rsync task much faster (for many files). Trying to cache the data
> is really a waste of time, bec
> Do you have an active interface on the OpenSolaris box that is configured
> for 0.0.0.0 right now?
Not anymore:
> By default, since you haven't configured the TPGT on the iSCSI target,
> Solaris will broadcast all active interfaces in its SendTargets response.
> On the ESX side, ESX will att
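For reference, the usual way to pin the target to a single portal group so SendTargets only advertises the address you want looks roughly like this (the TPGT number, IP address and target name are placeholders; check iscsitadm(1M) on your build for the exact syntax):

  iscsitadm create tpgt 1
  iscsitadm modify tpgt -i 192.168.1.10 1
  iscsitadm modify target -p 1 mytarget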
I have a volume shared via iSCSI that has become unusable. Both
target and initiator nodes are running Nevada b99. Running "newfs" on
the initiator node fails immediately with an "I/O error" (no other
details). The pool in which the "bad" volume resides includes other
volumes exported vi
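Two quick checks worth running from both ends before digging deeper (device and pool names will obviously differ; -S on the initiator side includes the LUN-to-OS-device mapping):

  zpool status -v            # on the target host: any errors against the pool backing the zvol?
  iscsiadm list target -S    # on the initiator host: is the LUN still mapped to an OS device?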
On Fri, Oct 17, 2008 at 2:48 PM, Richard Elling <[EMAIL PROTECTED]>wrote:
> Keep in mind that any changes required for Solaris 10 will first
> be available in OpenSolaris, including any changes which may
> have already been implemented.
>
For me (a Solaris 10 user), it is the only way I can get infor
On Fri Oct 17 2008, Miles Nordin wrote:
>> "da" == David Abrahams <[EMAIL PROTECTED]> writes:
>
> da> how to deal with backups to my Amazon S3 storage area. Does
> da> zfs send avoid duplicating common data in clones and
> da> snapshots?
>
> how can you afford to use something s
Hello all,
I think he has a point here... maybe that would be an interesting feature
for that kind of workload. Caching all the metadata would make the rsync task
much faster (for many files). Trying to cache the data is really a waste of
time, because the data will not be read again, and will jus
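For what it's worth, that behaviour can already be approximated per dataset on recent builds (the dataset name is a placeholder; secondarycache only matters if an L2ARC device is attached):

  zfs set primarycache=metadata tank/rsyncsrc
  zfs set secondarycache=metadata tank/rsyncsrc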
Scott Williamson wrote:
> Hi All,
>
> I have opened a ticket with Sun support #66104157 regarding zfs send /
> receive and will let you know what I find out.
Thanks.
>
> Keep in mind that this is for Solaris 10, not OpenSolaris.
Keep in mind that any changes required for Solaris 10 will firs
> "da" == David Abrahams <[EMAIL PROTECTED]> writes:
da> how to deal with backups to my Amazon S3 storage area. Does
da> zfs send avoid duplicating common data in clones and
da> snapshots?
How can you afford to use something as expensive as S3 for backups?
Anyway 'zfs send' does
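As a point of reference, the usual way to avoid resending blocks shared with an earlier snapshot is an incremental stream; a minimal sketch with placeholder dataset and snapshot names:

  zfs send tank/home@monday > full.zfs                        # full stream, sent once
  zfs send -i tank/home@monday tank/home@tuesday > incr.zfs   # only blocks changed since monday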
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> do you have bug IDs for any of those problems?
Yeah, some of them, so maybe they will be fixed in s10u6. Sometimes
the bug report writer has a narrower idea of the problem than I do,
but bugs.opensolaris.org is still encouraging. Also note
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> figures so close to 10MB/s. All three servers are running
r> full duplex gigabit though
There is one tricky way 100Mbit/s could still bite you, but it's
probably not happening to you. It mostly affects home users with
unmanaged switche
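If anyone wants to rule that out, the negotiated speed and duplex can be read on the Solaris side as well as on the switch (interface names differ per box, and dladm's subcommands have shifted a bit between releases):

  dladm show-dev       # look for 1000 / full on the links carrying the traffic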
On Wed Oct 15 2008, Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>
> s> the use of zfs
> s> clones/snapshots encompasses the entire zfs filesystem
>
> I use one ZFS filesystem per VDI file. It might be better to use
> VMDKs and zvols, but right now that's not
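As a point of reference, carving out a zvol per guest is a one-liner (names and size are placeholders; -V creates a volume instead of a filesystem):

  zfs create -V 20G tank/vm/guest1     # shows up as /dev/zvol/dsk/tank/vm/guest1
  zfs snapshot tank/vm/guest1@clean    # snapshots and clones work on zvols too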
On Fri, Oct 17, 2008 at 10:51 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 17 Oct 2008, Al Hopper wrote:
>>
>> a) inexpensive, large capacity SATA drives running at 7,200 RPM and
>> providing, approximately, 300 IOPS.
>> b) expensive, small capacity, SAS drives running at 15k RPM and
>>
Additional information after continuing to tinker:
After importing the zpool, if I use "root" to
manually 'chmod' the file permissions on the zpool's mount point, then
non-privileged users can access the pool. This alone doesn't solve the
problem, since all files in the pool need to be simi
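A blanket fix along these lines should cover the whole tree rather than just the mount point (pool name and user are placeholders; the capital X keeps directories traversable without marking plain files executable):

  chown -R someuser /tank
  chmod -R u+rwX,go+rX /tank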
On Fri, 17 Oct 2008, Al Hopper wrote:
>
> a) inexpensive, large capacity SATA drives running at 7,200 RPM and
> providing, approximately, 300 IOPS.
> b) expensive, small capacity, SAS drives running at 15k RPM and
> providing, approx, 700 IOPS.
Al,
Where are you getting the above IOPS estimates f
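For context, the usual back-of-the-envelope for a single outstanding random read is average seek plus half a rotation, which lands well below those figures (the seek times below are typical vendor numbers, not measurements):

  7,200 RPM SATA: 1000 / (8.5 ms seek + 4.17 ms half-rotation) ≈ 79 IOPS
  15,000 RPM SAS: 1000 / (3.5 ms seek + 2.0 ms half-rotation)  ≈ 180 IOPS

Figures in the hundreds usually assume deep command queues or short-stroked drives.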
Hi All,
I have opened a ticket with Sun support #66104157 regarding zfs send /
receive and will let you know what I find out.
Keep in mind that this is for Solaris 10, not OpenSolaris.
Issue: Privileged (root) account required to access a zpool imported
from Mac OS X.
Just installed the b119 bits onto my OS X 10.5.5 system today in an attempt
to share VirtualBox disk image files between my Mac and my OpenSolaris
(2008.11 b99) laptop.
Install worked well and I was able to crea
Yup, that's one of the first things I checked when it came out with
figures so close to 10MB/s. All three servers are running full duplex
gigabit though, as reported by both Solaris and the switch. And both
the NFS at 60+MB/s, and the zfs send / receive are all going over the
same network link, i
Hi Ross,
On Fri, Oct 17, 2008 at 1:35 PM, Ross <[EMAIL PROTECTED]> wrote:
> Ok, just did some more testing on this machine to try to find where my
> bottlenecks are. Something very odd is going on here. As best I can tell
> there are two separate problems now:
>
> - something is throttling net
Ok, just did some more testing on this machine to try to find where my
bottlenecks are. Something very odd is going on here. As best I can tell
there are two separate problems now:
- something is throttling network output to 10MB/s
- something is throttling zfs send to around 20MB/s
The netwo
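One way to tell the two apart is to time each stage in isolation; a rough sketch with placeholder snapshot and host names:

  zfs send tank/fs@snap > /dev/null                                      # send speed alone, no network
  dd if=/dev/zero bs=128k count=8000 | ssh otherhost 'cat > /dev/null'   # network path alone, no ZFS

Bear in mind ssh's cipher overhead can itself cap throughput well below wire speed, so if the second number looks low it is worth repeating the test with netcat.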
On Thu, Oct 16, 2008 at 6:52 AM, Tomas Ögren <[EMAIL PROTECTED]> wrote:
> On 16 October, 2008 - Ross sent me these 1,1K bytes:
>
>> I might be misunderstanding here, but I don't see how you're going to
>> improve on "zfs set primarycache=metadata".
>>
>> You complain that ZFS throws away 96kb of da
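For anyone who wants to watch what the ARC is actually holding while experimenting with this, the kstats are the easiest window in (the counter names are the standard arcstats fields):

  kstat -n arcstats | egrep 'size|hits|misses'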
dick hoogendijk wrote:
> Vincent Fox wrote:
>
>>> Or perhaps compression should be the default.
>
> No way please! Things taking even more memory should never be the default.
> An installation switch would be nice though.
> Freedom of choice ;-)
Compression does not take more memory; the data is
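For anyone wanting to judge the trade-off on their own data, it is easy to enable per dataset and measure the result (the dataset name is a placeholder; plain compression=on selects lzjb):

  zfs set compression=on tank/docs
  zfs get compressratio tank/docs   # existing data is not rewritten; only new writes are compressed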
Some of that is very worrying, Miles. Do you have bug IDs for any of those
problems?
I'm guessing the problem of the device being reported OK after the reboot
could be this one:
http://bugs.opensolaris.org/view_bug.do?bug_id=6582549
And could the errors after the reboot be one of these?
http://