On Dec 21, 2009, at 4:09 PM, Michael Herf wrote:
Anyone who's lost data this way: were you doing weekly scrubs, or
did you find out about the simultaneous failures after not touching
the bits for months?
Scrubbing on a routine basis is good for detecting problems early, but
it doesn't so
Hey James,
> Personally, I think mirroring (and especially 3-way mirroring) is safer than raidz/z2/5.
> All my "boot from zfs" systems have 3-way mirrored root/usr/var disks (using
> 9 disks) but all my data partitions are 2-way mirrors (usually 8 disks or
> more, plus a spare).
Double-parity (or triple-pa
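For reference, a rough sketch of the two layouts being compared, with made-up device names (not the poster's actual commands):

# 3-way mirror, as used for the system datasets above
zpool create syspool mirror c0t0d0 c0t1d0 c0t2d0

# double-parity raidz2 data pool with a hot spare, for comparison
zpool create datapool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 spare c1t6d0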
I was able to recover! Thank you both for replying and thank you victor for the
step-by-step.
I downloaded dev-129 from the site and booted off of it. I first ran:
zpool import -nfF -R /mnt rpool
and the command's output said that I could go back to the point when the box
rebooted itself. Therefore, I ran the cm
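For reference, the recovery sequence described above presumably looked roughly like this (a sketch, not the exact transcript): -n is a dry run, -f forces the import, -F rewinds to the last importable txg, and -R sets an alternate root:

# dry run: report whether rpool can be rewound, and to which point
zpool import -nfF -R /mnt rpool

# if the dry run looks sane, do the real rewind import
zpool import -fF -R /mnt rpool

# then verify pool health
zpool status rpool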
Sassy,
this is the zfs-discuss forum. You might have better luck asking at the
cifs-discuss forum.
http://mail.opensolaris.org/mailman/listinfo/cifs-discuss
-- richard
On Dec 21, 2009, at 2:36 PM, Sassy Natan wrote:
Hi Group
I have installed the latest version of OpenSolaris (version 129) on m
JD Trout wrote:
Hello,
I am running OpenSol 2009.06 and after a power outage opensol will no longer
boot past GRUB. Booting from the liveCD shows me the following:
r...@opensolaris:~# zpool import -f rpool
cannot import 'rpool': I/O error
r...@opensolaris:~# zpool import -f
pool: rpool
On Mon, Dec 21, 2009 at 6:50 PM, JD Trout wrote:
> Hello,
> I am running OpenSol 2009.06 and after a power outage opensol will no
> longer boot past GRUB. Booting from the liveCD shows me the following:
>
> r...@opensolaris:~# zpool import -f rpool
> cannot import 'rpool': I/O error
>
> r...@op
Hello,
I am running OpenSol 2009.06 and after a power outage opensol will no longer
boot past GRUB. Booting from the liveCD shows me the following:
r...@opensolaris:~# zpool import -f rpool
cannot import 'rpool': I/O error
r...@opensolaris:~# zpool import -f
pool: rpool
id: 153786572483
I don't mean to sound ungrateful (because I really do appreciate all the help I
have received here), but I am really missing the use of my server.
Over Christmas, I want to be able to use my laptop (right now, it's acting as a
server for some of the things my OpenSolaris server did). This means
Kjetil Torgrim Homme wrote:
Note also that the compress/encrypt/checksum and the dedup are
separate pipeline stages so while dedup is happening for block N block
N+1 can be getting transformed - so this is designed to take advantage
of multiple scheduling units (threads, CPUs, cores, etc.).
nice.
Daniel Carosone wrote:
Your parenthetical comments here raise some concerns, or at least eyebrows,
with me. Hopefully you can lower them again.
compress, encrypt, checksum, dedup.
(and you need to use zdb to get enough info to see the
leak - and that means you have access to the raw devic
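As an aside, the kind of zdb poking being referred to is probably along these lines (pool name is a placeholder):

# dump dedup table (DDT) statistics and histogram for a pool
zdb -DD tank

# simulate dedup on a pool without actually enabling it
zdb -S tank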
Hi Group
I have installed the latest version of OpenSolaris (version 129) on my machine.
I have configured the DNS, Kerberos, PAM and LDAP client to use my Windows
2003R2 domain.
My Windows domain includes the RFC2307 POSIX account attributes, so each user has a UID and GID
configured.
This was very easy to configu
Anyone who's lost data this way: were you doing weekly scrubs, or did you
find out about the simultaneous failures after not touching the bits for
months?
mike
On Dec 21, 2009, at 8:23 PM, Joerg Schilling wrote:
Matthew Ahrens wrote:
Gaëtan Lehmann wrote:
Hi,
On opensolaris, I use du with the -b option to get the uncompressed size
of a directory:
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/loc
On Dec 21, 2009, at 7:28 PM, Matthew Ahrens wrote:
Gaëtan Lehmann wrote:
Hi,
On opensolaris, I use du with the -b option to get the uncompressed
size of a directory:
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r..
Matthew Ahrens wrote:
> Gaëtan Lehmann wrote:
> >
> > Hi,
> >
> > On opensolaris, I use du with the -b option to get the uncompressed size
> > of a directory:
> >
> > r...@opensolaris:~# du -sh /usr/local/
> > 399M    /usr/local/
> > r...@opensolaris:~# du -sbh /usr/local/
> > 915M
In case the overhead of calculating SHA256 was the cause, I set ZFS
checksums to SHA256 at the pool level and left it that way for a number of days.
This worked fine.
Setting dedup=on immediately crippled performance, and then setting
dedup=off fixed things again. I did notice through a zpool iostat that
dis
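For anyone wanting to repeat the experiment, the property changes involved are presumably just these (pool name is a placeholder):

# force sha256 checksums pool-wide to measure the hashing overhead alone
zfs set checksum=sha256 tank

# then toggle dedup and watch per-vdev I/O while writing
zfs set dedup=on tank
zpool iostat -v tank 5
zfs set dedup=off tank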
Gaëtan Lehmann wrote:
Hi,
On opensolaris, I use du with the -b option to get the uncompressed size
of a directory:
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r...@opensolaris:~# zfs list -o space,refer,rat
I've just bought a second drive for my home PC and decided to set up a mirror. I
ran
pfexec zpool attach rpool c9d0s0 c13d0s0
waited for the scrub and tried to install grub on the second disk:
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0s0
cannot open/stat device /dev/rdsk/c13
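Hard to tell from the truncated error, but a couple of things worth checking before retrying (a diagnostic sketch, not a confirmed fix):

# rebuild /dev/rdsk links in case the device node is missing or stale
devfsadm -Cv

# confirm the new disk actually has a Solaris label and the s0 slice
prtvtoc /dev/rdsk/c13d0s2

# then retry
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0s0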
Dear all.
We use an "old" 48TB 4500, aka Thumper, as an iSCSI server based on snv_129.
As the machine has only 16GB of RAM we are wondering if it's sufficient
for holding the bigger part of the DDT in memory without affecting
performance by limiting the ARC. Any hints about scaling memory vs. disk
space
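A rough back-of-the-envelope calculation, using a commonly quoted (not exact) figure of roughly 300 bytes of in-core DDT per unique block:

48 TB of unique data / 128 KB average block size  ~= 400 million DDT entries
400 million entries x ~300 bytes                  ~= 120 GB of DDT

so 16GB of RAM would hold only a small fraction of a full DDT unless the average block size is much larger or only part of the pool is deduped; an L2ARC device can hold the overflow.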
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is
released on Genunix! This is the first EON release with inline Deduplication
features! Many thanks to Genunix.org for download hosting and serving the
opensolaris community.
EON Deduplication ZFS storage is availabl
Yes, a coworker lost a second disk during a rebuild of a raid5 and lost all
data. I have not had a failure, however when migrating EqualLogic arrays in and
out of pools, I lost a disk on an array. No data loss, but it concerns me
because during the moves, you are essentially reading and writing
The question: is there an issue running: ZFS and QFS on the same file server?
The details:
We have a 2540 raid controller with 4 raidsets. Each raidset presents 2 slices
to the OS. One slice (slice 0) from each raidset is a separate qfs filesystem
shared among 7 servers running qfs 4.6patch6.
If you are asking whether anyone has experienced two drive failures simultaneously,
the answer is yes.
It has happened to me (at home) and to at least one client that I can
remember. In both cases, I was able to dd off one of the failed disks (the one
with just bad sectors, or at least fewer bad sectors) and recons
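For the record, the usual way to dd off a disk with bad sectors is to tell dd to keep going and zero-fill the unreadable blocks (device and output paths here are placeholders):

# copy a failing disk, skipping unreadable sectors and padding them with zeros
dd if=/dev/rdsk/c2t1d0s2 of=/backup/failing-disk.img bs=512 conv=noerror,sync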
Hi,
On opensolaris, I use du with the -b option to get the uncompressed
size of a directory:
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r...@opensolaris:~# zfs list -o space,refer,ratio,compress data/local
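Another way to get at roughly the same numbers, since the uncompressed size is approximately used multiplied by the compression ratio:

# allocated size and compression ratio for the dataset from the post
zfs get used,compressratio data/local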
On Mon, 21 Dec 2009, Tristan Ball wrote:
Yes, primarily since if there is no more memory immediately available,
performance when starting new processes would suck. You need to reserve
some working space for processes and short term requirements.
Why is that a given? There are several system
Mart van Santen wrote:
Hi,
Do the I/O problems go away when only one of the SSDs is attached?
No, the problem stays with only one SSD. The problem is only reduced while
resilvering, not totally gone (maybe because of the
resilver overhead).
The resilver is likely masking some under
Hi,
Do the I/O problems go away when only one of the SSDs is attached?
No, the problem stays with only one SSD. The problem is only reduced while
resilvering, not totally gone (maybe because of the resilver
overhead).
Frankly, I'm betting that your SSDs are wearing out. Resilver
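Whatever the cause turns out to be, per-device numbers would help narrow it down; something like (pool name is a placeholder):

# per-vdev latency and throughput, refreshed every 5 seconds
zpool iostat -v tank 5

# per-device service times and soft/hard/transport error counters
iostat -xn 5
iostat -En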
It might be helpful to contact SSD vendor, report the issue and
inquire whether wearing out after half a year is expected behavior for this
model. Further, if you have an option to replace one (or both) SSDs
with fresh ones, this could tell for sure if they are the root cause.
Regards,
Andrey
On Mon, Dec
Mart van Santen wrote:
Hi,
We have an X4150 with a J4400 attached. Configured with 2x 32GB SSDs,
in mirror configuration (ZIL) and 12x 500GB SATA disks. We have been running
this setup in production for over half a year now for NFS and iSCSI
for a bunch of virtual machines (currently about 100 VM's
Brandon High wrote:
On Sat, Dec 19, 2009 at 8:34 AM, Colin Raven wrote:
If snapshots reside within the confines of the pool, are you saying that
dedup will also count what's contained inside the snapshots? I'm not sure
why, but that thought is vaguely disturbing on some level.
Sure, w
Hi,
We have an X4150 with a J4400 attached. Configured with 2x 32GB SSDs, in
mirror configuration (ZIL) and 12x 500GB SATA disks. We have been running this
setup in production for over half a year now for NFS and iSCSI for a
bunch of virtual machines (currently about 100 VM's, mostly Linux, some
Wi
On 21 December, 2009 - Tristan Ball sent me these 4,5K bytes:
> Richard Elling wrote:
> >
> > On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:
> >
> >> I've got an opensolaris snv_118 machine that does nothing except
> >> serve up NFS and ISCSI.
> >>
> >> The machine has 8G of ram, and I've got
On Sat, Dec 19, 2009 at 8:34 AM, Colin Raven wrote:
> If snapshots reside within the confines of the pool, are you saying that
> dedup will also count what's contained inside the snapshots? I'm not sure
> why, but that thought is vaguely disturbing on some level.
Sure, why not? Let's say you have
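A toy illustration of why that's natural (dataset and file names are made up): blocks referenced only by a snapshot are still live blocks in the pool, so the dedup code sees them like any others.

zfs create -o dedup=on tank/demo
cp /path/to/somefile /tank/demo/a     # any reasonably large file
zfs snapshot tank/demo@snap
rm /tank/demo/a                       # the blocks stay referenced by the snapshot
cp /path/to/somefile /tank/demo/b     # the new copy dedups against the snapshot's blocks
zpool get dedupratio tank             # the ratio counts the snapshot's references too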