Hi all,
I've just seen something weird. On a zpool that looks a bit busy right
now (~100 read ops/s, 100 kB/s) I started a zfs snapshot about an hour
ago. Until now, taking a snapshot usually took a few seconds at most,
even for largish ~TByte file systems. I don't know if the read IOs are
curr
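One way to watch what the pool is actually doing while the snapshot hangs (the pool name tank below is just a placeholder) is zpool iostat, which breaks the load down per vdev, together with a quick check of whether the snapshot has appeared yet:

  # zpool iostat -v tank 5
  # zfs list -t snapshot | grep tank

The trailing 5 makes iostat print a fresh sample every five seconds, so any snapshot-related I/O stands out from the background load.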
On 2 Oct 08, at 09:21, Christiaan Willemsen wrote:
> Hi there.
>
> I just got a new Adaptec RAID 51645 controller in because the old one (a
> different model) was malfunctioning. It is paired with 16 Seagate 15k5
> disks, of which two are used with hardware RAID 1 for OpenSolaris
> snv_98, and the r
On Sun, Oct 19, 2008 at 4:08 PM, Paul B. Henson <[EMAIL PROTECTED]> wrote:
> At about 5000 filesystems, it starts taking over 30 seconds to
> create/delete additional filesystems.
The biggest problem I ran into was the boot time, specifically when
"zfs volinit" is executing. With ~3500 filesystem
I originally started testing a prototype for an enterprise file service
implementation on our campus using S10U4. Scalability in terms of file
system count was pretty bad: with anything over a couple of thousand,
operations started taking far too long.
I had thought there were a number of improveme
I was able to install os0805 onto a USB stick and boot from it. It works
really well.
However, after image-updating to build 95, I only get the GRUB prompt.
I have also installed the 0811_95 LiveDVD onto a USB stick, but the machine
just keeps rebooting itself.
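One thing that is sometimes worth trying when an updated stick stops at the GRUB prompt is reinstalling the GRUB stages from the new boot environment (the device name below is only an example; use whatever the stick shows up as):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

installgrub(1M) writes stage1 and stage2 onto the slice, so a stage2 left over from the old build gets replaced.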
I have a zfs pool made of two vdevs, each using one whole physical disk, under
OpenSolaris 2008.5.
The disks live on a Netcell SATA/RAID controller, which has three ports (I
had planned to use three disks there and configure mirrors in zfs), but as it
turned out it could only provide one or two disks t
Would it help to move the RAID into another computer and import the pool there?
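Attempting the import on another box is cheap to test, since zpool import without arguments only scans the attached devices for pool labels and prints what it finds (tank is a placeholder for the real pool name):

  # zpool import
  # zpool import -f tank

The -f is only needed when the pool was not exported cleanly on the original machine.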
With ZFS, there are four identical labels on each
physical vdev, in this case a single hard drive:
L0/L1 at the start of the vdev, and
L2/L3 at the end of the vdev.
As I understand it, part of the reason for having
four identical labels is to make it difficult
to completely lose the information in th
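The labels can be inspected directly with zdb (the device path is only an example):

  # zdb -l /dev/dsk/c0t0d0s0

This prints all four copies, L0 through L3, each containing the pool configuration, so even if the start of the disk is overwritten, L2/L3 at the end can still identify the pool.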
On Mon, 13 Oct 2008 12:15:50 -0600
Lori Alt <[EMAIL PROTECTED]> wrote:
> Then reboot and see if your problem is solved. If not,
> we'll dig deeper with kmdb into what's happening.
Just for the record: we were not able to solve the problem. The ROOT
dataset remained missing from my snv_99 boot disk.
I have b
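For anyone debugging something similar, two quick checks are the pool's bootfs property and the datasets under ROOT (rpool being the conventional root-pool name):

  # zpool get bootfs rpool
  # zfs list -r rpool/ROOT

If bootfs does not point at an existing dataset under rpool/ROOT, GRUB has nothing to mount as the root filesystem.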
I apologize if this has been addressed countless times, but I have searched &
searched and have not found the answer.
I'm rather new to ZFS and have learned a lot about it so far. At least one
thing confuses me, however. I've noticed that writes to the boot disk in
OpenSolaris (i.e. pool rpoo
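Those writes can be observed directly; zpool iostat shows them at the pool level, and iostat(1M) on the underlying device confirms they reach the disk (rpool is the default root-pool name):

  # zpool iostat rpool 5
  # iostat -xn 5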
Hi,
I'm running FreeBSD 7.1-PRERELEASE with a 500 GB ZFS drive. Recently I
encountered a FreeBSD problem (PR kern/128083) and decided to update the
motherboard BIOS. The update itself appeared to go fine, but after it I was
shocked to see my ZFS pool destroyed! Rolling the BIOS back did not
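Before treating the pool as lost, it is worth checking whether the labels survived the BIOS update; a plain import scan is read-only (the device name is only a guess for a FreeBSD 7.x SATA disk):

  # zpool import
  # zdb -l /dev/ad4

If the pool still shows up in the scan, the on-disk data is most likely intact and only the way the BIOS presents the disk has changed.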
Ares Drake wrote:
> Greetings.
>
> I am currently looking into setting up a better backup solution for our
> family.
>
> I own a ZFS fileserver with a 5x500GB raidz. I want to back up data (not
> the OS itself) from multiple PCs running Linux or Windows XP. The Linux
> boxes are connected via 1000
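One common pattern for this kind of setup (all names below are illustrative) is to give each client machine its own dataset, rsync into it, and snapshot after every run, so each backup becomes a cheap point-in-time copy:

  # zfs create tank/backup/laptop
  # rsync -a --delete laptop:/home/ /tank/backup/laptop/
  # zfs snapshot tank/backup/laptop@`date +%Y%m%d`

The Windows XP boxes can write to the same datasets over Samba/CIFS and be snapshotted in exactly the same way.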