If you are already using rsync, I would run it on the server in daemon mode.
And there are Windows clients that support the rsync protocol.
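For the server side, a minimal rsync daemon setup looks roughly like this (a
sketch; the module name, path and permissions are placeholders, and Windows
clients such as cwRsync or DeltaCopy can then push to it over the rsync
protocol):

    # /etc/rsyncd.conf
    uid = nobody
    gid = nobody
    use chroot = yes

    [backup]
        path = /tank/backup
        read only = no
        comment = backup area on the ZFS pool

    # start the daemon on the server (or run it from inetd)
    rsync --daemon

    # from a client, push a directory tree to the module
    rsync -av /some/dir/ rsync://server/backup/dir/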
Hi all,
I have built out an 8TB SAN at home using OpenSolaris + ZFS. I have
yet to put it into 'production' as a lot of the issues raised on this
mailing list are putting me off trusting the platform with my data
right now.
Over the years, I have stored my personal data on NetWare and now NT
an
Hi again,
a brief update:
the process ended successfully (at least a snapshot was created) after
close to 2 hours. Since the load is still the same as before taking the
snapshot, I blame other users' processes reading from that array for the
long snapshot duration.
Carsten Aulbert wrote:
> My remain
Richard Elling wrote:
>> Keep in mind that this is for Solaris 10 not opensolaris.
>
> Keep in mind that any changes required for Solaris 10 will first
> be available in OpenSolaris, including any changes which may
> have already been implemented.
Indeed. For example, less than a week ago a fix for
If you have any backups of your boot volume, I found that the pool can be
mounted on boot provided it's still listed in your /etc/zfs/zpool.cache file.
I've moved to OpenSolaris now purely so I can take snapshots of my boot volume
and back up that file.
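What that backup amounts to is roughly this (a sketch; the pool and dataset
names are placeholders for whatever your boot environment uses):

    # snapshot the boot environment and keep a copy of the cache file
    zfs snapshot rpool/ROOT/opensolaris@pre-change
    cp -p /etc/zfs/zpool.cache /rpool/zpool.cache.backup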
The relevant bug that needs fixing is this
> Is there a way to recover from this problem? I'm
> pretty sure the data is still OK, it's just labels
> that get "corrupted" by controller or zfs. :(
And this is confirmed by zdb, after a long wait while it compared data and
checksums: no data errors.
Well, I don't have a huge amount of experience in Solaris, but I can certainly
share my thoughts.
1. BACKUPS
Always ensure you have backups of the pool, ideally stored in a neutral
format. Our plan is to ensure that all ZFS stores are also kept:
- on tape via star
- on an off-site ZFS s
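For the off-site ZFS copy, the basic send/receive pattern is roughly the
following (a sketch; host, pool and dataset names are placeholders, and
incremental sends with -i follow the same pattern):

    SNAP=tank/data@backup-$(date +%Y%m%d)
    zfs snapshot "$SNAP"
    zfs send "$SNAP" | ssh offsite-host zfs receive -F backuppool/data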
No takers? :)
benr.
Hello there...
I did see that already, and talked with some guys without getting an answer
too... ;-)
Actually, this week I did not see a discrepancy between the tools, but the
pool information (space used) was wrong. Exporting/importing, scrubbing, etc.
did not solve it. I know that zfs is "async" in the status report
I'm also seeing a very slow import on the 2008.11 build 98 prerelease.
I have the following setup:
a striped zpool of two mirrors; each mirror has one local disk and one iSCSI
disk. I was testing a setup with iSCSI boot (Windows Vista) via gPXE; every
client was booted from an iSCSI-exposed zvol.
I'm also seeing a slow import on the 2008.11 build 98 prerelease,
but my situation is a little different:
I have the following setup:
a striped zpool of two mirrors; each mirror has one local disk and one iSCSI
disk. I was testing iSCSI boot (Windows Vista) with gPXE; every client was
booted f
Is it possible to unload the zfs module without rebooting the computer?
I am making some changes to the zfs kernel code and compiling it; I then want
to reload the newly rebuilt module without rebooting.
I tried modunload, but it always gives "can't unload the module: Device busy".
What can be the
Yuvraj,
I see that you are using files as disks.
You could write a few random bytes to one of the files and that would
induce corruption.
To make a particular disk faulty you could mv the file to a new name.
Also, you can explore zinject from the ZFS test suite. It probably has a
way to induce
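A concrete sketch of the first suggestion, assuming file-backed vdevs (file
name, pool name and offsets are placeholders; seek well past the front vdev
labels so you corrupt data rather than the labels):

    # overwrite a few blocks in the middle of a file-backed vdev
    dd if=/dev/urandom of=/var/tmp/disk1 bs=512 seek=20000 count=64 conv=notrunc

    # a scrub should then report checksum errors on that vdev
    zpool scrub testpool
    zpool status -v testpool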
shelly wrote:
> Is it possible to unload zfs module without rebooting the computer.
>
> I am making some changes in zfs kernel code. then compiling it. i then want
> to reload the newly rebuilt module without rebooting.
>
> I tried modunload. But its always givin "can't unload the module: Devic
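For reference, the usual sequence is roughly the following (a sketch; every
pool has to be exported first, and anything else still using ZFS, e.g. a
process holding /dev/zfs open, can keep the module busy):

    zpool export testpool        # export every imported pool first
    modinfo | grep zfs           # note the module id in the first column
    modunload -i <id>            # <id> is the number reported by modinfo
    # then modload the rebuilt binary, e.g. on a 64-bit x86 kernel:
    modload /kernel/fs/amd64/zfs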
On Mon, Oct 20, 2008 at 1:52 AM, Victor Latushkin
<[EMAIL PROTECTED]> wrote:
> Indeed. For example, less than a week ago a fix for the following two CRs
> (along with some others) was put back into Solaris Nevada:
>
> 6333409 traversal code should be able to issue multiple reads in parallel
> 6418042
On Mon, Oct 20, 2008 at 03:10, gm_sjo <[EMAIL PROTECTED]> wrote:
> I appreciate that 99% of the time people only comment if they have a
> problem, which is why I think it'd be nice for some people who have
> successfully implemented ZFS, including making various use of the
> features (recovery, replaci
[EMAIL PROTECTED] wrote on 10/19/2008 01:59:29 AM:
> Ares Drake wrote:
> > Greetings.
> >
> > I am currently looking into setting up a better backup solution for our
> > family.
> >
> > I own a ZFS Fileserver with a 5x500GB raidz. I want to back up data (not
> > the OS itself) from multiple PCs r
Paul B. Henson wrote:
>
>
> At about 5000 filesystems, it starts taking over 30 seconds to
> create/delete additional filesystems.
>
> At 7848, over a minute:
>
> # time zfs create export/user/test
>
> real    1m22.950s
> user    1m12.268s
> sys     0m10.184s
>
> I did a little experiment with t
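Something along these lines reproduces the measurement (a rough sketch; the
pool and dataset names are placeholders):

    #!/bin/ksh
    # create filesystems in bulk and time an extra create every 1000
    i=0
    while [ $i -lt 10000 ]; do
            zfs create export/user/u$i
            if [ $((i % 1000)) -eq 0 ]; then
                    echo "existing filesystems: $i"
                    time zfs create export/user/probe$i
            fi
            i=$((i + 1))
    done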
Hi Richard,
Richard Elling wrote:
> Karthik Krishnamoorthy wrote:
>> We did try with this
>>
>> zpool set failmode=continue option
>>
>> and the wait option, before running the cp command and pulling out
>> the mirrors, and in both cases there was a hang and I have a core
>> dump of the ha
Greetings.
I have an X4500 with an 8TB RAIDZ datapool, currently 75% full. I have it
carved up into several filesystems. I share out two of the filesystems,
/datapool/data4 (approx 1.5TB) and /datapool/data5 (approx 3.5TB). The data is
imagery, and the primary application on the PCs is Socetset.
Hi all,
I have a little question.
With the RAID-Z rules, what is the true usable disk space?
Is there a formula like for other RAID levels (e.g. RAID5 = number of disks - 1
for parity)?
Thank you for your help; I have searched everywhere on the web and have not
found my answer...
On Mon, Oct 20, 2008 at 11:32 AM, William Saadi <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have a little question.
> With the RAID-Z rules, what is the true usable disk space?
> Is there a formula like for other RAID levels (e.g. RAID5 = number of disks
> - 1 for parity)?
>
>
# of disks - 1 for parity
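As a rough worked example (this ignores raidz allocation overhead, metadata
and the small amount of space ZFS reserves for itself):

    5 x 500 GB in raidz1: (5 - 1) x 500 GB ~= 2 TB usable
    8 x 1 TB  in raidz2:  (8 - 2) x 1 TB  ~= 6 TB usable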
Eugene Gladchenko wrote:
> Hi,
>
> I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've
> encountered a FreeBSD problem (PR kern/128083) and decided to update the
> motherboard BIOS. It looked like the update went right, but after that I was
> shocked to see my ZFS dest
William Saadi wrote:
> Hi all,
>
> I have a little question.
> With the RAID-Z rules, what is the true usable disk space?
>
It depends on what data you write to it, how the writes are done, and
what compression or redundancy parameters are set.
> Is there a formula like for other RAID levels (e.g. RAID5 = number of
On Thu, Oct 16, 2008 at 03:50:19PM +0800, Gray Carper wrote:
>
>Sidenote: Today we made eight network/iSCSI related tweaks that, in
>aggregate, have resulted in dramatic performance improvements (some I
>just hadn't gotten around to yet, others suggested by Sun's Mertol
>Ozyoney)..
Are there plans to make the Sun Web Console available in OpenSolaris? I've used
it on Solaris 10 and it's a great tool; I think it'd make using/administering
ZFS a lot easier for those new to Solaris and ZFS. Thoughts?
I've had serious problems trying to get Windows to run as an NFS server with
SFU. On a modern RAID array I can't get it above 4MB/s transfer rates. It's
slow enough that virtual machines running off it time out almost every time I
try to boot them.
Oddly, it worked OK when I used an old IDE dis
On Sun, 19 Oct 2008, Ed Plese wrote:
> The biggest problem I ran into was the boot time, specifically when "zfs
> volinit" is executing. With ~3500 filesystems on S10U3 the boot time for
> our X4500 was around 40 minutes. Any idea what your boot time is like
> with that many filesystems on the n
On Mon, 20 Oct 2008, Paul B. Henson wrote:
>
> I haven't rebooted it yet; I somewhat naively assumed performance would be
> much better and just started a script to create test file systems for about
> 10,000 people. I'm going to delete the pool and re-create it, then create
> 1000 filesystems at a
We have 135 TB capacity with about 75 TB in use on zfs based storage.
zfs use started about 2 years ago, and has grown from there. This spans
9 SAN appliances, with 5 "head nodes", and 2 more recent servers running
zfs on JBOD with vdevs made up of raidz2.
So far, the experience has been ve
Gary,
>> Sidenote: Today we made eight network/iSCSI related tweaks that, in
>> aggregate, have resulted in dramatic performance improvements
>> (some I
>> just hadn't gotten around to yet, others suggested by Sun's Mertol
>> Ozyoney)...
>> - disabling the Nagle algorithm on the head n
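For anyone wanting to try that particular tweak, Nagle is normally disabled on
Solaris with ndd (a sketch; the setting is not persistent across reboots
unless it is added to a startup script):

    ndd -set /dev/tcp tcp_naglim_def 1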
On Mon, Oct 20, 2008 at 9:29 AM, Bob Bencze <[EMAIL PROTECTED]> wrote:
> Greetings.
> I have a X4500 with an 8TB RAIDZ datapool, currently 75% full. I have it
> carved up into several filesystems. I share out two of the filesystems
> /datapool/data4 (approx 1.5TB) and /datapool/data5 (approx 3.
I've had a report that the mismatch between SQLite3's default block size and
ZFS's causes some performance problems for Thunderbird users.
It'd be great if there were an API by which SQLite3 could set its block size
to match the hosting filesystem's, or by which it could set the DB file's
record size to match
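Until something like that exists, the mismatch can be narrowed from either
side (a sketch; the dataset, file path and sizes are placeholders, recordsize
only affects newly written blocks, and changing the SQLite page size requires
rewriting the database):

    # lower the record size of the dataset holding the profile
    zfs set recordsize=8K tank/home

    # or raise SQLite's page size and rebuild the database file
    sqlite3 /path/to/profile/some.sqlite 'PRAGMA page_size = 8192; VACUUM;'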
On Mon, 20 Oct 2008, Pramod Batni wrote:
> Yes, the implementation of the above ioctl walks the list of mounted
> filesystems 'vfslist' [in this case it walks 5000 nodes of a linked list
> before the ioctl returns] This in-kernel traversal of the filesystems is
> taking time.
Hmm, O(n) :(... I gu
A couple of updates:
Installed OpenSolaris on a PowerEdge 1850 with a single network card and the
default iscsitarget configuration (no special tweaks or tpgt settings); VMotion
was about 10 percent successful before I received write errors on disk.
That's 10 percent better than the PowerEdge 1900 iscsitarge
Hey, Jim! Thanks so much for the excellent assist on this - much better than
I could have ever answered it!
I thought I'd add a little bit on the other four...
- raising ddi_msix_alloc_limit to 8
For PCI cards that use up to 8 interrupts, which our 10GbE adapters do. The
previous value of 2 cou
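For reference, that tunable normally goes into /etc/system and takes effect at
the next boot (a sketch; use whatever limit your adapters actually need):

    * raise the per-device MSI-X allocation limit (default is 2)
    set ddi_msix_alloc_limit=8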