So Cindy, Simon (or anyone else)... now that we are over a year past when Simon
wrote his excellent blog introduction, is there an updated "best practices" for
ACLs with CIFS? Or, is this blog entry still the best word on the street?
In my case, I am supporting multiple PCs (Workgroup) and Macs
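For reference, the dataset settings that usually come up in these ZFS-plus-CIFS discussions look roughly like this ("tank/share" is only a placeholder name, and availability of the aclmode property varies by build, so treat it as a sketch rather than gospel):
# zfs create -o casesensitivity=mixed -o nbmand=on tank/share   (create-time options suited to Windows/Mac clients)
# zfs set aclinherit=passthrough tank/share                     (keep inherited ACEs exactly as the clients set them)
# zfs set aclmode=passthrough tank/share                        (stop chmod from discarding ACEs, where the property exists)
# zfs set sharesmb=name=share tank/share                        (publish through the in-kernel CIFS service)
From there the ACLs themselves are usually managed with chmod A... on the share root, or from the Windows security tab.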
Marc Bevand writes:
>
> This discrepancy between tests with random data and zero data is puzzling
> to me. Does this suggest that the SSD does transparent compression between
> its Sandforce SF-1500 controller and the NAND flash chips?
Replying to myself: yes, the SF-1500 does transparent compression.
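A rough way to see this from the host, assuming a scratch device you can overwrite (the c5t0d0p0 path below is purely a placeholder, and the writes are destructive): zeros should fly, pre-generated random data should not, if the controller is compressing.
# dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=1024k count=4096
# dd if=/dev/urandom of=/var/tmp/rand.bin bs=1024k count=4096
# dd if=/var/tmp/rand.bin of=/dev/rdsk/c5t0d0p0 bs=1024k count=4096
Generating the random file first keeps /dev/urandom's CPU cost out of the timed write.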
Orvar Korvar wrote:
> ZFS does not handle 4K sector drives well, you need to create a new zpool
> with "4K" property (ashift) set.
> http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html
>
> Are there plans to allow resilver to handle 4K sector drives?
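For what it's worth, you can check what an existing pool was built with, and on later OpenZFS-based systems the value can be forced at creation time; that option did not exist in b134-era OpenSolaris, where a patched zpool binary or an sd.conf override was the usual workaround ("tank" and the device names are placeholders):
# zdb -C tank | grep ashift            (9 = 512-byte sectors, 12 = 4 KB sectors)
# zpool create -o ashift=12 tank mirror c5t0d0 c5t1d0     (later OpenZFS only)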
On Sep 12, 2010, at 8:27 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>>
>> I am not really worried about fragmentation. I was just wondering whether
>> attaching new drives and doing zfs send/receive to a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>
> I am not really worried about fragmentation. I was just wondering whether
> attaching new drives and doing zfs send/receive to a new zpool would count
> as defrag. But apparently not.
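In case it helps anyone searching later, the whole-pool copy being discussed looks roughly like this (pool names are placeholders); the receive lays the data out afresh on the new pool, which is the closest thing ZFS has to a defrag, but the original pool itself is untouched:
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F newtank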
"
I recently lost all of the data on my single-parity raidz array. Each of the
drives was encrypted, with the zfs array built within the encrypted volumes.
I am not exactly sure what happened. The files were there and accessible and
then they were all gone. The server apparently crashed and rebooted
(I am aware I am replying to an old post...)
Arne Jansen writes:
>
> Now the test for the Vertex 2 Pro. This was fun.
> For more explanation please see the thread "Crucial RealSSD C300 and cache
> flush?"
> This time I made sure the device is attached via 3GBit SATA. This is also
> only
On Sun, Sep 12, 2010 at 3:42 PM, Richard Elling wrote:
> OSol source yes, binaries no :-( You will need another distro besides
> OpenSolaris.
> The needed support in sd was added around the b137 timeframe.
Do you know if it's been backported to Nexenta Core? There doesn't
seem to be a list of b
On Sun, Sep 12, 2010 at 5:42 PM, Richard Elling wrote:
> On Sep 12, 2010, at 10:11 AM, Brandon High wrote:
>
>> On Sun, Sep 12, 2010 at 10:07 AM, Orvar Korvar
>> wrote:
>>> No replies. Does this mean that you should avoid large drives with 4KB
>>> sectors, that is, new drives? ZFS does not handle new drives?
On Sep 12, 2010, at 10:11 AM, Brandon High wrote:
> On Sun, Sep 12, 2010 at 10:07 AM, Orvar Korvar
> wrote:
>> No replies. Does this mean that you should avoid large drives with 4KB
>> sectors, that is, new drives? ZFS does not handle new drives?
>
> Solaris 10u9 handles 4k sectors, so it might be in a post-b134 release of osol.
Comments below...
On Sep 12, 2010, at 2:56 PM, Warren Strange wrote:
>> So we are clear, you are running VirtualBox on ZFS,
>> rather than ZFS on VirtualBox?
>>
>
> Correct
>
>>
>> Bad power supply, HBA, cables, or other common cause.
>> To help you determine the sort of corruption, for
>> mirrored pools FMA will record the nature of the discrepancies.
> So we are clear, you are running VirtualBox on ZFS,
> rather than ZFS on VirtualBox?
>
Correct
>
> Bad power supply, HBA, cables, or other common cause.
> To help you determine the sort of corruption, for
> mirrored pools FMA will record
> the nature of the discrepancies.
> fmdump -eV
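For anyone following along, these are the two commands that show where the disagreement actually is (the pool name is a placeholder):
# fmdump -eV | less            (raw FMA ereports, including checksum errors per vdev)
# zpool status -v tank         (per-file list of anything ZFS could not repair)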
On Sep 12, 2010, at 11:05 AM, Warren Strange wrote:
> I posted the following to the VirtualBox forum. I would be interested in
> finding out if anyone else has ever seen zpool corruption with VirtualBox as
> a host on OpenSolaris:
>
> -
> I am running OpenSolaris b134 as a VirtualBox host, with a Linux guest.
On Sep 12, 2010, at 8:24 AM, Humberto Ramirez wrote:
> I'm trying to replicate a 300 GB pool with this command
>
> zfs send al...@3 | zfs receive -F omega
>
> about 2 hours into the process it fails with this error
>
> "cannot receive new filesystem stream: invalid backup stream"
>
> I have tried setting the target read only (zfs set readonly=on omega)
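One way to narrow it down, assuming you have the scratch space: dump the stream to a file so a send-side failure can be told apart from a receive-side one ("alpha@3" below stands in for the munged snapshot name above, and zstreamdump is only there on builds that ship it):
# zfs send alpha@3 > /var/tmp/alpha-3.zstream
# zfs receive -F omega < /var/tmp/alpha-3.zstream
# zstreamdump < /var/tmp/alpha-3.zstream | head      (quick sanity check of the stream headers)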
Hi Warren,
This may not help much, except perhaps as a way to eliminate possible
causes, but I ran b134 with VirtualBox and guests on ZFS for quite a
long time without any such symptoms. My pool is a simple, unmirrored
one, so the difference may be there. I used shared folders without
incident.
Absolutely spot on George. The import with -N took seconds.
Working on the assumption that esx_prod is the one with the problem, I bumped
that to the bottom of the list. Each mount was done in a second:
# zfs mount zp
# zfs mount zp/nfs
# zfs mount zp/nfs/esx_dev
# zfs mount zp/nfs/esx_hedgehog
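For anyone repeating this, the remaining datasets and their mount state can be listed, and then everything still unmounted picked up in one go once the suspect dataset has been dealt with:
# zfs list -r -o name,mountpoint,mounted zp
# zfs mount -a            (mounts whatever is still unmounted, so hold off until the suspect is handled)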
I posted the following to the VirtualBox forum. I would be interested in
finding out if anyone else has ever seen zpool corruption with VirtualBox as a
host on OpenSolaris:
-
I am running OpenSolaris b134 as a VirtualBox host, with a Linux guest.
I have e
Chris Murray wrote:
Another "hang on zpool import" thread, I'm afraid; I haven't seen any great
successes reported in the others, and I hope there's a way of saving
my data ...
In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems,
enabled dedup on them, moved content into them
On Sun, Sep 12, 2010 at 10:07 AM, Orvar Korvar
wrote:
> No replies. Does this mean that you should avoid large drives with 4KB
> sectors, that is, new drives? ZFS does not handle new drives?
Solaris 10u9 handles 4k sectors, so it might be in a post-b134 release of osol.
-B
Another "hang on zpool import" thread, I'm afraid; I haven't seen any great
successes reported in the others, and I hope there's a way of saving
my data ...
In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems,
enabled dedup on them, moved content into them
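If it is the dedup bookkeeping that is wedging the import, one thing that has been suggested is importing without mounting and, on builds new enough to have it (the option arrived after b134), importing read-only so nothing gets written while you copy data off ("zp" matches the pool above, the dataset name is a placeholder):
# zpool import -N -o readonly=on zp
# zfs mount zp/some_fs            (then mount datasets one at a time, as in the other thread)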
No replies. Does this mean that you should avoid large drives with 4KB sectors,
that is, new drives? ZFS does not handle new drives?
Thanks for the reply Andrew, they're both 22 (I checked on that prior to posting).
Thanks.
Humberto Ramirez wrote:
I'm trying to replicate a 300 GB pool with this command
zfs send al...@3 | zfs receive -F omega
about 2 hours into the process it fails with this error
"cannot receive new filesystem stream: invalid backup stream"
I have tried setting the target read only (zfs set readonly=on omega)
I'm trying to replicate a 300 GB pool with this command
zfs send al...@3 | zfs receive -F omega
about 2 hours into the process it fails with this error
"cannot receive new filesystem stream: invalid backup stream"
I have tried setting the target read only (zfs set readonly=on omega)
also disa
On Sep 12, 2010, at 3:47 AM, Roy Sigurd Karlsbakk wrote:
>> If not, will any possible problems be avoided if I remove (transfer
>> data away from) any filesystems with dedup=on ?
>
> I would think re-copying data from a deduped dataset to a non-deduped dataset
> will fix it, yes. But then, who knows, perhaps Oracle will fix dedup and make
> it usable one day...
> If not, will any possible problems be avoided if I remove (transfer
> data away from) any filesystems with dedup=on ?
I would think re-copying data from a deduped dataset to a non-deduped dataset
will fix it, yes. But then, who knows, perhaps Oracle will fix dedup and make
it usable one day...
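A rough sketch of that re-copy, with placeholder dataset names; dedup is a write-time property, so the data has to be rewritten into a dedup=off dataset before its DDT entries go away, and destroying the old deduped copy can itself be painfully slow:
# zfs set dedup=off tank                                    (stop deduplicating new writes pool-wide)
# zfs snapshot tank/data@undedup
# zfs send tank/data@undedup | zfs receive tank/data_new    (rewritten without DDT entries)
# zfs destroy -r tank/data                                  (freeing the old copy prunes the DDT, slowly)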