On Sun, Jun 12, 2011 at 5:28 PM, Nico Williams wrote:
> On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
> wrote:
> > I have an interesting question that may or may not be answerable from some
> > internal ZFS semantics.
>
> This is really standard Unix filesystem semantics.
>
> > [...]
> >
>
On 6/12/11 7:25 PM, "Richard Elling" wrote:
>>>
>>
>> Here's the timeline:
>>
>> - The Intel X25-M was marked "FAULTED" Monday evening, 6pm. This was not
>> detected by NexentaStor.
>
>Is the volume-check runner enabled? All of the check runner results are
>logged in the report database and
On Jun 12, 2011, at 5:04 PM, Edmund White wrote:
> On 6/12/11 6:18 PM, "Jim Klimov" wrote:
>> 2011-06-12 23:57, Richard Elling wrote:
>>>
>>> How long should it wait? Before you answer, read through the thread:
>>> http://lists.illumos.org/pipermail/developer/2011-April/001996.html
>>> Then a
On 13/06/11 11:36 AM, Jim Klimov wrote:
Some time ago I wrote a script to find any "duplicate" files and replace
them with hardlinks to one inode. Apparently this is only good for same
files which don't change separately in future, such as distro archives.
I can send it to you offlist, but it wo
On 6/12/11 6:18 PM, "Jim Klimov" wrote:
>2011-06-12 23:57, Richard Elling wrote:
>>
>> How long should it wait? Before you answer, read through the thread:
>> http://lists.illumos.org/pipermail/developer/2011-April/001996.html
>> Then add your comments :-)
>> -- richard
>
>But the point o
On 13/06/11 10:28 AM, Nico Williams wrote:
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
wrote:
I have an interesting question that may or may not be answerable from some
internal
ZFS semantics.
This is really standard Unix filesystem semantics.
I understand this, just wanting t
2011-06-13 2:28, Nico Williams wrote:
PS: Is it really the case that Exchange still doesn't deduplicate
e-mails? Really? It's much simpler to implement dedup in a mail
store than in a filesystem...
That's especially strange, because NTFS has hardlinks and softlinks...
Not that Microsoft provi
Some time ago I wrote a script to find any "duplicate" files and replace
them with hardlinks to one inode. Apparently this is only good for same
files which don't change separately in future, such as distro archives.
I can send it to you offlist, but it would be slow in your case because it
is no
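For readers curious about the approach, here is a minimal sketch of that kind of script (not Jim's actual code): checksum every file, sort by checksum, and emit commands that re-link each duplicate to the first copy of each checksum. It assumes GNU md5sum is available, that filenames contain no whitespace, and that identical checksums mean identical files; review the output before piping it to sh.

find /path/to/dir -type f -exec md5sum {} + | sort |
  awk '$1 == prev { printf "ln -f \"%s\" \"%s\"\n", keep, $2; next }
       { prev = $1; keep = $2 }'

A more careful version would compare actual file contents rather than trusting the checksum, and would skip files that are expected to diverge later, which is exactly the caveat above about files that change separately.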
On Jun 12, 2011, at 4:18 PM, Jim Klimov wrote:
> 2011-06-12 23:57, Richard Elling wrote:
>>
>> How long should it wait? Before you answer, read through the thread:
>> http://lists.illumos.org/pipermail/developer/2011-April/001996.html
>> Then add your comments :-)
>> -- richard
>
> Interes
2011-06-12 23:57, Richard Elling wrote:
How long should it wait? Before you answer, read through the thread:
http://lists.illumos.org/pipermail/developer/2011-April/001996.html
Then add your comments :-)
-- richard
Interesting thread. I did not quite get the resentment against
a tuna
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
wrote:
> I have an interesting question that may or may not be answerable from some
> internal
> ZFS semantics.
This is really standard Unix filesystem semantics.
> [...]
>
> So total storage used is around ~7.5MB due to the hard linking taking place
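To make Nico's point concrete: a hard link is just an extra directory entry for the same inode, so the blocks are stored and counted once no matter how many names point at them. A quick illustration with made-up file names:

$ dd if=/dev/urandom of=msg1 bs=1024k count=1   # one 1 MB "message"
$ ln msg1 msg2                                  # second name for the same inode
$ ls -li msg1 msg2                              # same inode number, link count 2
$ du -sh .                                      # about 1 MB, not 2 MB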
Hi All,
I have an interesting question that may or may not be answerable from some internal ZFS semantics.
I have a Sun Messaging Server which has 5 ZFS based email stores. The Sun Messaging server uses hard links to link identical messages together. Messages are stored in standard SMTP MIME
On Jun 11, 2011, at 9:26 AM, Jim Klimov wrote:
> 2011-06-11 19:15, Pasi Kärkkäinen wrote:
>> On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>>>I've had two incidents where performance tanked suddenly, leaving the VM
>>>guests and Nexenta SSH/Web consoles inaccessible and req
On Jun 11, 2011, at 6:35 AM, Edmund White wrote:
> Posted in greater detail at Server Fault -
> http://serverfault.com/q/277966/13325
>
Replied in greater detail at same.
> I have an HP ProLiant DL380 G7 system running NexentaStor. The server has
> 36GB RAM, 2 LSI 9211-8i SAS controllers (no S
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> See FEC suggestion from another poster ;)
>
> Well, of course, all storage mediums have built-in hardware FEC. At lea
On May 10, 2011, at 9:18 AM, Ray Van Dolson wrote:
> We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
> arrays (Solaris 10 U9).
>
> The disk began throwing errors like this:
>
> May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING:
> /pci@0,0/pci8086,3410@9/pci15d9
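For context, a rough sketch of the usual Solaris 10 sequence once a disk starts erroring like that (pool and device names below are placeholders, not taken from the original report):

zpool status -xv           # confirm which pool and vdev are reporting the errors
fmadm faulty               # see whether FMA has already diagnosed the drive
zpool offline tank c1t5d0  # take the suspect disk out of service
# (swap the drive physically, then:)
zpool replace tank c1t5d0  # resilver onto the new disk in the same slot
zpool status tank          # watch the resilver progress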
Did you try a read-only import as well? I THINK it goes like this:
zpool import -o ro -o cachefile=none -F -f badpool
Did you manage to capture any error output? For example, is it an option for
you to set up a serial console and copy-paste the error text from the serial
terminal on another mac
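For anyone following along, roughly what those flags do (an annotated copy of the same command; worth double-checking against zpool(1M) on the system in question):

zpool import -o ro -o cachefile=none -F -f badpool
#   -o ro              mount the pool's datasets read-only so nothing gets written
#   -o cachefile=none  do not record the pool in the zpool.cache file
#   -F                 recovery mode: roll back the last few transactions if the pool will not import cleanly
#   -f                 force the import even if the pool looks active on another system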
On Sat, Jun 11, 2011 at 08:26:34PM +0400, Jim Klimov wrote:
> 2011-06-11 19:15, Pasi Kärkkäinen wrote:
>> On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>>> I've had two incidents where performance tanked suddenly, leaving the VM
>>> guests and Nexenta SSH/Web consoles i
Indeed it was!
Thanks!!
On Sun, Jun 12, 2011 at 3:54 AM, Johan Eliasson <johan.eliasson.j...@gmail.com> wrote:
> I replaced a smaller disk in my tank2, so now they're all 2TB. But look,
> zfs still thinks it's a pool of 1.5 TB disks:
>
> nebol@filez:~# zpool list tank2
> NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH
I replaced a smaller disk in my tank2, so now they're all 2TB. But look, zfs
still thinks it's a pool of 1.5 TB disks:
nebol@filez:~# zpool list tank2
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
tank2  5.44T  4.20T  1.24T  77%  1.00x  ONLINE  -
nebol@filez:~# zpool status tank2
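The usual culprit when every disk has been swapped for a larger one but SIZE stays put is that the pool's autoexpand property is off (the default). A sketch of the usual fix, with a placeholder device name:

nebol@filez:~# zpool set autoexpand=on tank2
nebol@filez:~# zpool online -e tank2 c0t0d0   # repeat per disk; -e grows the vdev onto the new space
nebol@filez:~# zpool list tank2               # SIZE should now reflect the 2TB drives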