Yeah. It is also not so easy to capture the possible data loss during a power loss.
There is no reliable way to figure it out.
Thanks.
Fred.
-Original Message-
From: rwali...@washdcmail.com [mailto:rwali...@washdcmail.com]
Sent: Tuesday, May 25, 2010 11:42
To: Erik Trimble
Cc: Fred Liu; ZFS Discussions
S
On May 24, 2010, at 4:28 AM, Erik Trimble wrote:
> yes, both the X25-M (both G1 and G2) plus the X25-E have a DRAM buffer on the
> controller, and neither has a supercapacitor (or other battery) to back it
> up, so there is the potential for data loss (but /not/ data corruption) in a
> power-lo
Hi,
1): Is it possible to do this at all?
2): What backplane hardware is required for "luxadm led_blink" to put a
disk LED into blink mode? (See the sketch below.)
Thanks.
Fred
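A hedged sketch of the invocation in question, assuming an SES-capable enclosure/backplane and a hypothetical device path (without enclosure services the locate LED generally cannot be driven):
   luxadm led_blink /dev/rdsk/c1t12d0s2   # ask the enclosure to blink the disk's locate LED
   luxadm led /dev/rdsk/c1t12d0s2         # query the current LED state
   luxadm led_off /dev/rdsk/c1t12d0s2     # turn the LED back off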
>
>
>
> From earlier in the thread, it sounds like none of the SF-1500 based
> drives even have a supercap, so it doesn't seem that they'd necessarily
> be a better choice than the SLC-based X-25E at this point unless you
> need more write IOPS...
>
> Ray
>
I think the upcoming OCZ Vertex 2 Pro wi
>
>
> Not familiar with that model
>
>
It's a SandForce SF-1500 model but without a supercap. Here's some info on
it:
Maximum Performance
- Max Read: up to 270MB/s
- Max Write: up to 250MB/s
- Sustained Write: up to 235MB/s
- Random Write 4k: 15,000 IOPS
- Max 4k IOPS: 50,00
>
>
> ZFS is always consistent on-disk, by design. Loss of the ZIL will result
> in loss of the data in the ZIL which hasn't been flushed out to the hard
> drives, but otherwise, the data on the hard drives is consistent and
> uncorrupted.
>
>
>
> This is what I thought. I have read this list on
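A hedged aside, not from the thread: on builds whose zpool import supports the -m flag, recovery after losing a dedicated log device looks roughly like this (the pool name "tank" is hypothetical):
   zpool import            # the pool is listed but complains about the missing log device
   zpool import -m tank    # import anyway; unflushed synchronous writes in the lost log are discarded
   zpool status tank       # confirm the pool is otherwise healthy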
On 5/24/2010 2:48 PM, Thomas Burgess wrote:
I recently got a new SSD (OCZ Vertex LE 50GB)
Not familiar with that model
It seems to work really well as a ZIL, performance-wise. My question
is, how safe is it? I know it doesn't have a supercap, so let's say
data loss occurs, is it just
On 5/24/2010 2:48 PM, Thomas Burgess wrote:
I recently got a new SSD (OCZ Vertex LE 50GB)
It seems to work really well as a ZIL, performance-wise. My question
is, how safe is it? I know it doesn't have a supercap, so let's say
data loss occurs, is it just data loss or is it pool loss?
ZFS is
On Mon, May 24, 2010 at 05:48:56PM -0400, Thomas Burgess wrote:
> I recently got a new SSD (OCZ Vertex LE 50GB)
>
> It seems to work really well as a ZIL, performance-wise. My question is, how
> safe is it? I know it doesn't have a supercap, so let's say data loss
> occurs, is it just data loss or
I recently got a new SSD (OCZ Vertex LE 50GB)
It seems to work really well as a ZIL, performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs, is it just data loss or is it pool loss?
Also, does the fact that I have a UPS matter?
the nu
I had a similar problem with a RAID shelf (switched to JBOD mode, with each
physical disk presented as a LUN) connected via FC (qlc driver, but no MPIO).
Running a scrub would eventually generate I/O errors and many messages like
this:
Sep 6 15:12:53 imsfs scsi: [ID 107833 kern.warning] WARNI
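A hedged aside: the usual Solaris commands for digging into errors like the one above (the pool name is hypothetical):
   iostat -En              # per-device soft/hard/transport error counters
   fmdump -eV | more       # FMA error telemetry, including SCSI/transport ereports
   zpool status -v tank    # whether ZFS recorded read/write/checksum errors, and on which files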
Many many moons ago, I submitted a CR into bugs about a
highly reproducible panic that occurs if you try to re-share
a lofi mounted image. That CR has AFAIK long since
disappeared - I even forget what it was called.
This server is used for doing network installs. Let's say
you have a 64 bit iso
Hi,
I did some zpool import/export performance testing on OpenSolaris build 134:
1). Create 100 ZFS filesystems and 100 snapshots, then do zpool export/import
export takes about 5 seconds
import takes about 5 seconds
2). Create 200 ZFS filesystems and 200 snapshots, then do zpool export/import
export takes a
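A minimal sketch of how such a test might be scripted (not the poster's actual script; the pool name "testpool" and dataset names are hypothetical):
   #!/bin/ksh
   i=1
   while [ $i -le 100 ]; do
       zfs create testpool/fs$i              # one filesystem per iteration
       zfs snapshot testpool/fs$i@snap1      # plus one snapshot
       i=$((i + 1))
   done
   time zpool export testpool
   time zpool import testpool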
Forrest Aldrich wrote:
I've seen this product mentioned before - the problem is, we use
Veritas heavily on a public network and adding yet another software
dependency would be a hard sell. :(
Be very certain that you need synchronous replication before you do
this. For some ACID systems it re
> > Thanks for the pointer, I will look into it.
> >
> > The first thing that comes to mind is a possible performance hit,
> > somewhere with the VxFS code. I could be wrong, tho.
No worries, certainly worth looking into though - if performance is acceptable,
it could be a good solution. Let
I've seen this product mentioned before - the problem is, we use Veritas
heavily on a public network and adding yet another software dependency would be
a hard sell. :(
On Mon, May 24, 2010 at 11:30:20AM -0700, Ray Van Dolson wrote:
> This thread has grown giant, so apologies for screwing up threading
> with an out of place reply. :)
>
> So, as far as SF-1500 based SSD's, the only ones currently in existence
> are the Vertex 2 LE and Vertex 2 EX, correct (I under
This thread has grown giant, so apologies for screwing up threading
with an out of place reply. :)
So, as far as SF-1500-based SSDs go, the only ones currently in existence
are the Vertex 2 LE and Vertex 2 EX, correct? (I understand the Vertex 2
Pro was never mass-produced.)
Both of these are based
VMware will properly handle sharing a single iSCSI volume across multiple ESX
hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems.
-Scott
> > Can you elaborate?
> >
> > Veritas has its own filesystem -- we need the block-level
> > replication functionality to back up our data (live) over the WAN to
> > a disaster
> > recovery location. Therefore, you wouldn't be able to use Veritas
> > with the ZFS filesystem.
zfs create -V 10G test/
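A hedged aside, not from the thread: one ZFS-native way to replicate a volume over a WAN is periodic snapshot send/receive; the pool, volume, and host names below are hypothetical:
   zfs create -V 10G tank/vol1                                                      # 10 GB zvol
   zfs snapshot tank/vol1@rep1
   zfs send tank/vol1@rep1 | ssh dr-host zfs receive drpool/vol1                    # initial full copy
   zfs snapshot tank/vol1@rep2
   zfs send -i tank/vol1@rep1 tank/vol1@rep2 | ssh dr-host zfs receive drpool/vol1  # later, incrementals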
> "d" == Don writes:
> "hk" == Haudy Kazemi writes:
d> You could literally split a sata cable and add in some
d> capacitors for just the cost of the caps themselves.
no, this is no good. The energy only flows in and out of the
capacitor when the voltage across it changes. I
On May 24, 2010, at 10:47 AM, Forrest Aldrich wrote:
> We have a Sun thumper 34 terabyte, with 24T free. I've been asked to find
> out whether we can remove some disks from the zpool/ZRAID config (say about
> 10T) and install Veritas volumes on those, then migrate some data to it for
> block-
- Original Message -
> From: "Forrest Aldrich"
> To: zfs-discuss@opensolaris.org
> Sent: Monday, 24 May, 2010 6:47:40 PM
> Subject: [zfs-discuss] Removing disks from a ZRAID config?
> We have a Sun thumper 34 terabyte, with 24T free. I've been asked to
> find out whether we can remove s
We have a 34-terabyte Sun Thumper with 24T free. I've been asked to find out
whether we can remove some disks from the zpool/ZRAID config (say about 10T)
and install Veritas volumes on those, then migrate some data to them for
block-level replication over a WAN.
I know, horrifying - but the pr
> "ai" == Asif Iqbal writes:
>> If you disable the ZIL for locally run Oracle and you have an
>> unscheduled outage, then it is highly probable that you will
>> lose data.
ai> yep. that is why I am not doing it until we replace the
ai> battery
no, wait please, you st
On Mon, 24 May 2010, h wrote:
but... wait... that can't be.
I disconnected the 1TB drives and plugged in the 2TB's before doing the replace
command. No information could be written to the 1TB drives at all since they
were physically offline.
Do the labels still exist? What does 'zdb -l /dev/rds
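A hedged sketch of the label check being asked about; the device path is hypothetical (point it at a slice of one of the old 1TB drives):
   zdb -l /dev/rdsk/c2t0d0s0   # prints the ZFS labels (pool name, guid, txg, vdev tree), if any survive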
but... wait... that can't be.
I disconnected the 1TB drives and plugged in the 2TB's before doing the replace
command. No information could be written to the 1TB drives at all since they
were physically offline.
Seagate is planning on releasing a disk that's part spinning rust and
part flash:
http://www.theregister.co.uk/2010/05/21/seagate_momentus_xt/
The design will have the flash be transparent to the operating system,
but I wish they would have some way to access the two components
sep
I even have this problem on my (production) backup server. I lost my system HDD
and my separate ZIL device when the system crashed, and now I'm in trouble. The
old system was running the latest version of osol/dev with zfs v22.
10 days ago, after the server crashed, I was very optimistic of so
On May 24, 2010, at 4:06 AM, Demian Phillips wrote:
> On Sun, May 23, 2010 at 12:02 PM, Torrey McMahon wrote:
>> On 5/23/2010 11:49 AM, Richard Elling wrote:
>>>
>>> FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life
>>> (EOSL) in 2006. Personally, I hate them with a passion
Yes, I used "zpool replace".
Why is one drive recognized?
Shouldn't the labels be wiped on all of them?
Am I screwed?
On Mon, 24 May 2010, h wrote:
I had 6 disks in a raidz1 pool where I replaced the 1TB drives with 2TB
drives. I have installed the older 1TB drives in another system and
would like to import the old pool to access some files I accidentally
deleted from the new pool.
Did you use the 'zpool
Hi!
I had 6 disks in a raidz1 pool where I replaced the 1TB drives with 2TB drives.
I have installed the older 1TB drives in another system and would like to import
the old pool to access some files I accidentally deleted from the new pool.
The first system (with the 2TB's) is an OpenSolaris system a
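A hedged sketch of the import attempt under discussion; the pool name "oldpool" is hypothetical:
   zpool import                      # scan the attached 1TB drives and list anything importable
   zpool import -f oldpool           # import by name; -f because the pool was last used on the first system
   zpool import -f oldpool rescue    # same, but rename it on import to avoid a name clash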
On Sun, May 23, 2010 at 12:02 PM, Torrey McMahon wrote:
> On 5/23/2010 11:49 AM, Richard Elling wrote:
>>
>> FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life
>> (EOSL) in 2006. Personally, I hate them with a passion and would like to
>> extend an offer to use my tractor to
Yes. I mentioned this in my thread, and I also contacted Chris. ;-)
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of J.P. King
Sent: Monday, May 24, 2010 18:41
To: Andrew Gabriel
Cc: ZFS Discussions
Subject: Re: [zfs-discus
Yeah. If it also had the ability to back up the data to the BIOS/EPROM on the
motherboard, that would be the ultimate solution...
From: Andrew Gabriel [mailto:andrew.gabr...@oracle.com]
Sent: Monday, May 24, 2010 18:37
To: Erik Trimble
Cc: Fred Liu; ZFS Discussions
Subject: Re: [zfs-discuss] [ZIL device brains
What you probably want is a motherboard which has a small area of main
memory protected by battery, and a ramdisk driver which knows how to use it.
Then you'd get the 1,000,000 IOPS. No idea if anyone makes such a thing.
You are correct that ZFS gets an enormous benefit from even tiny amounts i
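A hedged, benchmarking-only sketch of the ramdisk idea (a plain ramdisk is volatile, so without the battery-backed hardware described above it is unsafe as a real ZIL); the pool and ramdisk names are hypothetical:
   ramdiskadm -a zilbench 1g                   # carve a 1 GB ramdisk out of main memory
   zpool add tank log /dev/ramdisk/zilbench    # attach it as a separate log device
   # ... run the synchronous-write workload ...
   zpool remove tank /dev/ramdisk/zilbench     # detach the log device again
   ramdiskadm -d zilbench                      # free the ramdisk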
Erik Trimble wrote:
Frankly, I'm really surprised that there's no solution, given that the
*amount* of NVRAM needed for ZIL (or similar usage) is really quite
small. A dozen GB is more than sufficient, and really, most systems do
fine with just a couple of GB (3-4 or so). Producing a small
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Monday, May 24, 2010 16:28
To: Fred Liu
Cc: ZFS Discussions
Subject: Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?
On 5/23/2010 11:30 PM, Fred Liu wrote:
Hi,
I have hit the synchronous NFS writing wall just like man
On 24 May 2010, at 10.26, Brandon High wrote:
> On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad wrote:
>> Is that really true if you use the "zpool replace" command with both
>> the old and the new drive online?
>
> Yes.
(Don't you mean "no" then? :-)
> zpool replace [-f] pool old_device
On 5/23/2010 11:30 PM, Fred Liu wrote:
Hi,
I have hit the synchronous NFS writing wall just like many people do.
There has also been lots of discussion about the solutions here.
I want to post all of the exploring I have done recently to discuss
and share:
1): using the normal SATA SSDs (intel
On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad wrote:
> Is that really true if you use the "zpool replace" command with both
> the old and the new drive online?
Yes.
zpool replace [-f] pool old_device [new_device]
Replaces old_device with new_device. This is equivalent
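A hedged usage sketch of the command quoted above; the pool and device names are hypothetical:
   zpool replace tank c1t2d0 c2t2d0   # resilver onto the new disk, then detach the old one
   zpool status tank                  # watch resilver progress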
On 24 May 2010, at 02.44, Erik Trimble wrote:
> On 5/23/2010 5:00 PM, Andreas Iannou wrote:
>> Is it safe or possible to do a zpool replace for multiple drives at once? I
>> think I have one of the troublesome WD Green drives, as replacing it has
>> taken 39 hrs and only resilvered 58GB. I have a
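A hedged illustration of what replacing multiple drives at once looks like on the command line (pool and device names hypothetical); whether it is advisable with slow drives is exactly what the thread is asking:
   zpool replace tank c1t1d0 c2t1d0
   zpool replace tank c1t2d0 c2t2d0   # a second replacement issued while the first is still resilvering
   zpool status tank                  # shows all in-progress replacements and the resilver estimate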