> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> ~= 5.1E-57
Bah. My math is wrong. I was never very good at P&S. I'll ask someone at
work tomorrow to look at it and show me the folly. Wikipedia has it right,
but I c
No compression, no dedup.
I also forgot to mention it's on snv_134
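For reference, those settings can be confirmed on the dataset with something like
(dataset name here is just a placeholder):
# zfs get compression,dedup tank/fs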
On 01/11/11 11:40 AM, fred wrote:
Hello,
I'm having a weird issue with my incremental setup.
Here is the filesystem as it shows up with zfs list:
NAME          USED  AVAIL  REFER  MOUNTPOINT
Data/FS1      771M  16.1T   116M  /Data/FS1
Data/f...@05  10.3G      -  1.93T
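When the numbers look surprising like this, the standard space-accounting
properties can help narrow down where the space is going (nothing specific to
this setup, just the usual properties):
# zfs get -r used,referenced,usedbydataset,usedbysnapshots Data/FS1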
- Original Message -
> Running "zpool status -x" gives the results below. Do I have any
> options besides restoring from tape?
>
> David
>
> $ zpool status -x
...
This may be a little off-topic, but using 20 drives in a single vdev - isn't
that a little more than recommended?
Kind regards
Hi Karl,
I would keep your mirrored root pool separate on the smaller disks, as
you have it set up now.
You can move your root pool; it's easy enough. You can even replace
or attach larger disks to the root pool and detach the smaller disks.
You can't currently boot from snapshots; you must boot from
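As for attaching larger disks and detaching the smaller ones, the sequence is
roughly the following (device names are placeholders, and on x86 the new disk
also needs the boot blocks installed before the old one is detached):
# zpool attach rpool c0t0d0s0 c0t2d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  (wait for the resilver to finish: zpool status rpool)
# zpool detach rpool c0t0d0s0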
Hi everyone
I am currently testing Solaris 11 Express. I have a root pool on a
mirrored pair of small disks, and a data pool consisting of 2 mirrored pairs
of 1.5TB drives.
I have enabled auto snapshots on my root pool, and plan to archive the daily
snapshots onto my data pool. I
Hi David,
You might try importing this pool on an Oracle Solaris Express system,
where a pool recovery feature is available that might be able to bring this
pool back (it rolls back to a previous transaction), or if that fails,
you could import this pool by using the read-only option to at least
recover
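A rough sketch of those two approaches (pool name is a placeholder, and
-o readonly=on needs a build that supports read-only imports, such as
Solaris 11 Express):
# zpool import -F tank                 (recovery mode: rolls back the last few transactions)
# zpool import -o readonly=on tank     (read-only import, to copy the data off)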
As a follow-up, I tried a SuperMicro enclosure (SC847E26-RJBOD1). I have 3
sets of 15 drives. I got the same results when I loaded the second set of
drives (15 to 30).
Then, I tried changing the LSI 9200's BIOS setting for max INT 13 drives from
24 (the default) to 15. From then on, the Supe
Hi David,
Don't know whether my info is still helpful, but here it is anyway.
Had the same problem and solved it using the format -e command.
When you then enter the label option, you will get two options.
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]:
Choose zero and your di
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Magda
>
> Knowing exactly how the math (?) works is not necessary, but understanding
Understanding the math is not necessary, but it is pretty easy. And
unfortunately it becomes kind of
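To illustrate with the usual birthday-bound approximation (my numbers, not
necessarily the ones behind the figure quoted earlier): hashing n unique blocks
to a b-bit digest gives P(collision) ~= n^2 / 2^(b+1). With SHA-256 and, say,
2^35 unique 4K blocks (128 TiB of unique data):
P ~= (2^35)^2 / 2^257 = 2^-187 ~= 5E-57
which is many orders of magnitude below the rate of undetected hardware errors.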
> From: Pawel Jakub Dawidek [mailto:p...@freebsd.org]
>
> Well, I find it quite reasonable. If your block is referenced 100 times,
> it is probably quite important.
If your block is referenced 1 time, it is probably quite important. Hence
redundancy in the pool.
> There are many corruption po
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> I haven't looked at the link that talks about the probability of collision.
> Intuitively, I still wonder how the chances of collision can be so low. We are
> reducing a 4K block
On Mon, January 10, 2011 02:41, Eric D. Mudama wrote:
> On Sun, Jan 9 at 22:54, Peter Taps wrote:
>> Thank you all for your help. I am the OP.
>>
>> I haven't looked at the link that talks about the probability of
>> collision. Intuitively, I still wonder how the chances of collision
>> can be so
Hi,
after a node panic I have an issue with importing one of my zpools:
# zpool import dmysqlb2
cannot iterate filesystems: I/O error
so I tried to list zfs filesystems:
# zfs list -r dmysqlb2
cannot iterate filesystems: I/O error
NAME      USED  AVAIL  REFER  MOUNTPOINT
dmysqlb2
Actually, it is not my blog ;)
To answer your question: unfortunately, you first need to create a new vdev
that is 4K-aligned. I am not aware of any other means to accomplish what you seek.
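Incidentally, the alignment a vdev was created with can be checked with zdb,
which reports it as ashift (ashift=12 means 4K sectors, ashift=9 means 512 bytes):
# zdb | grep ashift
            ashift: 12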
On Sat, Jan 08, 2011 at 12:59:17PM -0500, Edward Ned Harvey wrote:
> Has anybody measured the cost of enabling or disabling verification?
Of course there is no easy answer :)
Let me explain exactly how verification works first.
You try to write a block. You see that block is already in dedup tabl
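For context, verification is enabled through the dedup property itself; the
standard syntax is below, with a placeholder dataset name:
# zfs set dedup=sha256,verify tank/data
or, to use the default dedup checksum with verification:
# zfs set dedup=verify tank/data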
On 01/ 8/11 05:59 PM, Edward Ned Harvey wrote:
Has anybody measured the cost of enabling or disabling verification?
The cost of disabling verification is an infinitesimally small number
multiplied by possibly all your data. Basically lim->0 times lim->infinity.
This can only be evaluated on a
On Sun, Jan 09, 2011 at 07:27:52PM -0500, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek
> >
> > Dedupditto doesn't work exactly that way. You can have at most 3 copies
> > of your block. Ded
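For reference, dedupditto is set as a pool property; a sketch with a placeholder
pool name and threshold:
# zpool set dedupditto=100 tank
  (store an extra copy of a deduped block once it is referenced that many times)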