On Mar 1, 2010, at 12:44, Richard Elling wrote:
On Mar 1, 2010, at 7:42 AM, Thomas Burgess wrote:
Also consider that you might not want to snapshot the entire pool.
Snapshots work on the dataset, not the pool (there is no "zpool
snapshot" command :-)
Wouldn't a "zfs snapshot -r mypool"
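For reference, a recursive snapshot does cover every dataset beneath the pool's
top-level dataset; a minimal sketch (pool and snapshot names are illustrative):

  # take an atomic snapshot of mypool and all of its descendant datasets
  zfs snapshot -r mypool@backup-20100301
  # list what was created
  zfs list -t snapshot -r mypool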
On Wed, March 10, 2010 14:47, Svein Skogen wrote:
> On 10.03.2010 18:18, Edward Ned Harvey wrote:
>> The advantage of the tapes is an official support channel, and much greater
>> archive life. The advantage of the removable disks is that you need no
>> special software to do a restore, and yo
On Mar 18, 2010, at 14:23, Bob Friesenhahn wrote:
On Thu, 18 Mar 2010, erik.ableson wrote:
Ditto on the Linux front. I was hoping that Solaris would be the
exception, but no luck. I wonder if Apple wouldn't mind lending
one of the driver engineers to OpenSolaris for a few months...
Per
On Mar 18, 2010, at 15:00, Miles Nordin wrote:
Admittedly the second bullet is hard to manage while still backing up
zvol's, pNFS / Lustre data-node datasets, windows ACL's, properties,
Some commercial backup products are able to parse VMware's VMDK files
to get file system information of th
On Mar 20, 2010, at 00:57, Edward Ned Harvey wrote:
I used NDMP up till November, when we replaced our NetApp with a Solaris Sun
box. In NDMP, to choose the source files, we had the ability to browse the
fileserver, select files, and specify file matching patterns. My point is:
NDMP is fi
On Mar 20, 2010, at 14:37, Remco Lengers wrote:
You seem to be concerned about the availability? Open HA seems to be
a package last updated in 2005 (version 0.3.6). (?) It seems to me
like a real fun toy project to build but I would be pretty reserved
about the actual availability and putti
On Wed, March 24, 2010 10:36, Joerg Schilling wrote:
>> > - A public interface to get the property state
>>
>> That would come from libzfs. There are private interfaces just now that
>> are very likely what you need zfs_prop_get()/zfs_prop_set(). They aren't
>> documented or public though and ar
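For shell scripting, the stable route is the zfs(1M) command rather than the
private libzfs calls; a minimal sketch (dataset and property names are
illustrative):

  # read a single property value, script-friendly (no header, value only)
  zfs get -H -o value compression tank/home
  # change it
  zfs set compression=on tank/home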
On Wed, March 24, 2010 14:36, Edward Ned Harvey wrote:
> The question is not how to create quotas for users.
>
> The question is how to create reservations for users.
There is currently no way to do per-user reservations. That ZFS property
is only available per-file system.
Even per-user and per
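The usual workaround is one dataset per user with a reservation (or
refreservation) set on it; a minimal sketch (names are illustrative):

  zfs create tank/home/alice
  # guarantee alice 10 GB, counting snapshots and descendants
  zfs set reservation=10G tank/home/alice
  # or guarantee space for the dataset itself only
  zfs set refreservation=10G tank/home/alice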
On Thu, March 25, 2010 10:20, Wolfraider wrote:
> What I am thinking is basically having 2 servers. One has the zpool
> attached and sharing out our data. The other is a cold spare. The zpool is
> stored on 3 JBOD chassis attached with Fibrechannel. I would like to
> export the config at specific
On Thu, March 25, 2010 11:28, Wolfraider wrote:
> Which, when I asked the question, I wasn't sure how it all worked. I
> didn't know if the import process needs a config file or not. I am learning
> a lot, very quickly. We will be looking into the HA cluster in the future.
> The spare is a cold spar
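For reference, the pool configuration lives in labels on the pool's own disks,
so a cold-spare head only needs to see the JBODs and import the pool; a rough
sketch (pool name is illustrative):

  # on the primary, if it is still up
  zpool export tank
  # on the cold spare, once it can see the FC-attached disks
  zpool import tank
  # if the primary died without exporting cleanly
  zpool import -f tank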
On Fri, March 26, 2010 07:38, Edward Ned Harvey wrote:
>> Coolio. Learn something new everyday. One more way that raidz is
>> different from RAID5/6/etc.
>
> Freddie, again, you're wrong. Yes, it's perfectly acceptable to create
> either raid-5 or raidz using 2 disks. It's not degraded, but it
On Fri, March 26, 2010 09:46, David Dyer-Bennet wrote:
> I don't know that it makes sense to. There are lots of existing filter
> packages that do compression; so if you want compression, just put them in
> your pipeline. That way you're not limited by what zfs send has
> implemented, either. W
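A minimal sketch of that kind of pipeline (host and dataset names are
illustrative):

  zfs send tank/data@today | gzip -c | \
      ssh backuphost 'gunzip -c | zfs recv backup/data'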
A new ARC case:
There is a long-standing RFE for zfs to be able to describe what has
changed between the snapshots of a dataset. To provide this
capability, we propose a new 'zfs diff' sub-command. When run with
appropriate privilege the sub-command describes what file system level
changes hav
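The proposed usage would look roughly like this (names are illustrative; the
syntax was not final at the time of the case):

  # describe file system level changes between two snapshots of a dataset
  zfs diff tank/home@monday tank/home@tuesday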
On Tue, March 30, 2010 22:40, Edward Ned Harvey wrote:
> Here's a snippet from man zpool. (Latest version available today in
> solaris)
>
> zpool remove pool device ...
> Removes the specified device from the pool. This command
> currently only supports removing hot spares and cach
On Wed, March 31, 2010 12:23, Bob Friesenhahn wrote:
> Yesterday I noticed that the Sun Studio 12 compiler (used to build
> OpenSolaris) now costs a minimum of $1,015/year. The "Premium"
> service plan costs $200 more.
I feel a great disturbance in the force. It is as if a great multitude of
dev
On Mar 31, 2010, at 19:41, Robert Milkowski wrote:
I double checked the documentation and you're right - the default
has changed to sync.
I haven't found in which RH version it happened but it doesn't
really matter.
From the SourceForge site:
Since version 1.0.1 of the NFS utilities tarbal
On Wed, March 31, 2010 21:25, Bart Smaalders wrote:
> ZFS root will be the supported root filesystem for Solaris Next; we've
> been using it for OpenSolaris for a couple of years.
This is already supported:
> Starting in the Solaris 10 10/08 release, you can install and boot from a
> ZFS root fi
On Apr 7, 2010, at 16:47, Bob Friesenhahn wrote:
Solaris 10's Live Upgrade (and the OpenSolaris equivalent) is quite
valuable in that it allows you to upgrade the OS without more than a
few minutes of down-time and with a quick fall-back if things don't
work as expected.
It is more straig
On Apr 7, 2010, at 19:58, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to look
into the multi-platform pkgsrc for third-party
On Mon, April 12, 2010 09:10, Willard Korfhage wrote:
> If the firmware upgrade fixed everything, then I've got a question about
> which I am better off doing: keep it as-is, with the raid card providing
> redundancy, or turn it all back into pass-through drives and let ZFS
> handle it, making th
On Mon, April 12, 2010 10:48, Tomas Ögren wrote:
> On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes:
>
>> Zfs is designed for high thoughput, and TRIM does not seem to improve
>> throughput. Perhaps it is most useful for low-grade devices like USB
>> dongles and compact flash.
>
> For
On Mon, April 12, 2010 12:28, Tomas Ögren wrote:
> On 12 April, 2010 - David Magda sent me these 0,7K bytes:
>
>> On Mon, April 12, 2010 10:48, Tomas Ögren wrote:
>>
>> > For flash to overwrite a block, it needs to clear it first.. so yes,
>> > clearing it
On Mon, April 19, 2010 07:32, Edward Ned Harvey wrote:
> I'm saying that even a single pair of disks (maybe 4 disks if you're using
> cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck
> is the 1Gb Ethernet, you won't gain anything (significant) by accelerating
> the stuff th
On Mon, April 19, 2010 06:26, Michael DeMan wrote:
> B. The current implementation stores that cache file on the zil device,
> so if for some reason, that device is totally lost (along with said .cache
> file), it is nigh impossible to recover the entire pool it correlates
> with.
Given that ZFS
On Mon, April 19, 2010 23:05, Don wrote:
>> A STEC Zeus IOPS SSD (45K IOPS) will behave quite differently than an
>> Intel X-25E (~3.3K IOPS).
>
> Where can you even get the Zeus drives? I thought they were only in the
> OEM market and last time I checked they were ludicrously expensive. I'm
> look
On Wed, April 21, 2010 02:41, Schachar Levin wrote:
> Hi,
> We are currently using NetApp file clone option to clone multiple VMs on
> our FS.
>
> ZFS dedup feature is great storage space wise but when we need to clone
> a lot of VMs it just takes a lot of time.
>
> Is there a way (or a planned way
On Wed, April 21, 2010 09:18, Schachar Levin wrote:
> NetApp has the ability to instantly clone single files and that would also
> solve our problem if it's somewhere in the ZFS road-map (unless the issues we
> have above can be resolved)
Beyond things like dedupe (and compression), ZFS currently does
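ZFS clones work on whole datasets (via a snapshot) rather than single files,
so the usual approach is one dataset per VM image; a minimal sketch (names are
illustrative):

  zfs snapshot tank/vm/golden@base
  # each clone is near-instant and shares unchanged blocks with the snapshot
  zfs clone tank/vm/golden@base tank/vm/clone01
  zfs clone tank/vm/golden@base tank/vm/clone02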
On Fri, May 7, 2010 04:32, Darren J Moffat wrote:
> Remember also that unless you are very CPU bound you might actually
> improve performance from enabling compression. This isn't new to ZFS,
> people (my self included) used to do this back in MS-DOS days with
> Stacker and Doublespace.
CPU has
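A minimal sketch of turning compression on and checking the effect (dataset
name is illustrative):

  zfs set compression=on tank/data
  # only blocks written after this point are compressed
  zfs get compression,compressratio tank/data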
I have a suggestion on modifying useradd(1M) and am not sure where to
input it.
Since individual ZFS file systems often make it easy to manage things,
would it be possible to modify useradd(1M) so that if the 'base_dir' is in
a zpool, a new dataset is created for the user's homedir?
So if you spe
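Pending any change to useradd(1M) itself, the same effect can be scripted by
hand; a rough sketch, assuming home directories live under a dataset mounted
at /export/home (names are illustrative):

  zfs create rpool/export/home/jsmith
  useradd -d /export/home/jsmith jsmith
  chown jsmith /export/home/jsmith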
On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote:
> The whole point of a log device is to accelerate sync writes, by providing
> nonvolatile storage which is faster than the primary storage. You're not
> going to get this if any part of the log device is at the other side of a
> WAN. So eithe
On Wed, May 19, 2010 02:09, thomas wrote:
> Is it even possible to buy a zeus iops anywhere? I haven't been able to
> find one. I get the impression they mostly sell to other vendors like sun?
> I'd be curious what the price on a 9GB zeus iops is these days?
Correct, their Zeus products are on
A recent post on StorageMojo has some interesting numbers on how
vibrations can affect disks, especially consumer drives:
http://storagemojo.com/2010/05/19/shock-vibe-and-awe/
He mentions a 2005 study that I wasn't aware of. In its conclusion it
states:
Based on the results of thes
On Thu, May 20, 2010 13:58, Roy Sigurd Karlsbakk wrote:
> - "Travis Tabbal" skrev:
>
>> Disable ZIL and test again. NFS does a lot of sync writes and kills
>> performance. Disabling ZIL (or using the synchronicity option if a
>> build with that ever comes out) will prevent that behavior, and s
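For reference, at that time the only switch was a global tunable that disables
the ZIL for every dataset on the host (set in /etc/system, takes effect after
a reboot); the per-dataset "synchronicity"/sync property came later:

  # /etc/system -- affects all pools on the machine; use only for testing
  set zfs:zil_disable = 1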
On Thu, May 20, 2010 14:12, Travis Tabbal wrote:
>> On May 19, 2010, at 2:29 PM, Don wrote:
>
>> The data risk is a few moments of data loss. However, if the order of the
>> uberblock updates is not preserved (which is why the caches are flushed)
>> then recovery from a reboot may require man
Seagate is planning on releasing a disk that's part spinning rust and
part flash:
http://www.theregister.co.uk/2010/05/21/seagate_momentus_xt/
The design will have the flash be transparent to the operating system,
but I wish they would have some way to access the two components
sep
On Jun 2, 2010, at 02:20, Sigbjorn Lie wrote:
What the hell? I don't have a support contract for my home
machines... I don't feel like this is the right way to go for an
open source project...
Write a letter demanding a refund. Join the OpenSolaris Governing Board.
I'm not sure Oracle's fo
On Wed, June 2, 2010 02:20, Sigbjorn Lie wrote:
> I have just recovered from a ZFS crash. During the agonizing time
> this took, I was surprised to learn how undocumented the tools and
> options for ZFS recovery were. I managed to recover thanks to some great
> forum posts from Victor Latushki
On Jun 3, 2010, at 13:36, Garrett D'Amore wrote:
Perhaps you have been unlucky. Certainly, there is a window with N
+1 redundancy where a single failure leaves the system exposed in
the face of a 2nd fault. This is a statistics game...
It doesn't even have to be a drive failure, but an unr
On Jun 4, 2010, at 14:28, zfsnoob4 wrote:
Does anyone know if opensolaris supports Trim?
Not at this time.
Are you referring to a read cache or a write cache?
On Jun 7, 2010, at 00:15, Richard Jahnel wrote:
I use 4 intel 32gb ssds as read cache for each pool of 10 Patriot
Torx drives which are running in a raidz2 configuration. No Slogs as
I haven't seen a compliant SSD drive yet.
Besides STEC's Zeus drives you mean? (Which aren't available in re
On Mon, June 7, 2010 09:21, Richard Jahnel wrote:
> I'll have to take your word on the Zeus drives. I don't see anything in
> their literature that explicitly states that cache flushes are obeyed or
> otherwise protected against power loss.
The STEC units are what Oracle/Sun use in their 7000 ser
On Mon, June 7, 2010 10:34, Toyama Shunji wrote:
> Can I extract one or more specific files from a zfs snapshot stream,
> without restoring the full file system,
> like the ufs-based 'restore' tool?
No.
(Check the archives of zfs-discuss for more details. Send/recv has been
discussed at length many times.
On Mon, June 7, 2010 12:56, Tim Cook wrote:
>> The STEC units are what Oracle/Sun use in their 7000 series appliances,
>> and I believe EMC and many others use them as well.
>
> When did that start? Every 7000 I've seen uses Intel drives.
According to the Sun System Handbook for the 7310, the 18
On Jun 7, 2010, at 16:32, Richard Elling wrote:
Please don't confuse Ethernet with IP. Ethernet has no routing and
no back-off other than that required for the link.
Not entirely accurate going forward. IEEE 802.1Qau defines an
end-to-end congestion notification management system:
On Jun 8, 2010, at 20:17, Moazam Raja wrote:
One of the major concerns I have is what happens when the primary
storage server fails. Will the secondary take over automatically
(using some sort of heartbeat mechanism)? Once the secondary node
takes over, can it fail-back to the primary node
On Jun 10, 2010, at 03:50, Fredrich Maney wrote:
David Magda wrote:
Either the primary node OR the secondary node can have active writes
to a volume, but NOT BOTH at the same time. Once the secondary
becomes active, and has made changes, you have to replicate the
changes back to the primary
On Jun 15, 2010, at 14:20, Fco Javier Garcia wrote:
I think dedup may have its greatest appeal in VDI environments
(think about an environment where 85% of the data that the virtual
machine needs is in ARC or L2ARC... it's like a dream... almost
instantaneous response... and you can boot a new
On Wed, June 16, 2010 03:03, Arne Jansen wrote:
> Christopher George wrote:
>
>> For the record, not all SSDs "ignore cache flushes". There are at least
>> two SSDs sold today that guarantee synchronous write semantics; the
>> Sun/Oracle LogZilla and the DDRdrive X1. Also, I believe it is more
>
On Wed, June 16, 2010 07:59, Arve Paalsrud wrote:
> Got prices from a retailer now:
>
> 100GB - DENRSTE251E10-0100 - ~1100 USD
> 200GB - DENRSTE251E10-0200 - ~1900 USD
> 400GB - DENRSTE251E10-0400 - ~4500 USD
>
> Prices were given to a country in Europe, so USD prices might be lower.
Heh. I jus
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
> David Magda wrote:
>
>> I'm not sure you'd get the same latency and IOps with disk that you can
>> with a good SSD:
>>
>> http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
> Please keep in mi
On Wed, June 16, 2010 11:02, David Magda wrote:
[...]
> Yes, I understood it as suck, and that link is for ZIL. For L2ARC SSD
> numbers see:
s/suck/such/
:)
On Wed, June 16, 2010 14:15, Lasse Osterild wrote:
> Hi,
>
> Have any of you looked at SSD's from Virident ?
> http://virident.com/products.php
> Looks pretty impressive to me, though I am sure the price is as well.
Only Linux distributions are listed under "Platform Support".
On Wed, June 16, 2010 15:15, Arne Jansen wrote:
> I double checked before posting: I can nearly saturate a 15k disk if I
> make full use of the 32 queue slots giving 137 MB/s or 34k IOPS/s. Times
> 3 nearly matches the above mentioned 114k IOPS :)
34K*3 = 102K. 12K isn't anything to sneeze at :)
On Thu, June 17, 2010 09:36, Darren J Moffat wrote:
> On 17/06/2010 14:12, Edward Ned Harvey wrote:
>>> From: Fredrich Maney [mailto:fredrichma...@gmail.com]
>>>
>>> Have you looked at 'lsof' or the native BSM auditing features?
>>> Admittedly audit is not really intended for realtime, but lsof
>>>
On Fri, June 18, 2010 08:29, Sendil wrote:
> I can create 400+ file systems, one for each user,
> but will this affect my system performance during system boot-up?
> Is this recommended, or is an alternative available for this issue?
You can create a dataset for each user, and then set a per-dataset
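A minimal sketch of that approach, plus the newer per-user quota alternative
(names are illustrative):

  # one dataset per user, each with its own quota
  zfs create -o quota=5G tank/home/user001
  # or a per-user quota on a single shared dataset
  zfs set userquota@user001=5G tank/home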
On Jun 20, 2010, at 11:55, Roy Sigurd Karlsbakk wrote:
There will also be a few common areas for each department and
perhaps a backup area.
The back up area should be on a different set of disks.
IMHO, a back up isn't a back up unless it is an /independent/ copy of
the data. The copy can b
On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
So far the plan is to keep it in one pool for design and
administration simplicity. Why would you want to split up (net) 40TB
into more pools? Seems to me that'll mess up things a bit, having to
split up SSDs for use on different pools,
On Tue, June 22, 2010 17:32, Bob Friesenhahn wrote:
> On Tue, 22 Jun 2010, Brian wrote:
>>
>> Is what I did wrong? I was under the impression that zfs wrote a
>> label to each disk so you can move it around between controllers...?
>
> You are correct. Normally exporting and importing the pool shoul
On Jun 26, 2010, at 02:09, Arne Jansen wrote:
Geoff Nordli wrote:
Is this the one
(http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
with the built-in supercap?
Yes.
C
On Jul 1, 2010, at 10:39, Pasi Kärkkäinen wrote:
Basically, 5-30 seconds after the login prompt shows up on the console
the server will reboot due to a kernel crash.
the error seems to be about the broadcom nic driver..
Is this a known bug?
Please contact Nexenta via their support infrastructure (web
On Jul 10, 2010, at 14:20, Edward Ned Harvey wrote:
A few companies have already backed out of zfs
as they cannot afford to go through a lawsuit.
Or, in the case of Apple, who could definitely afford a lawsuit, but chose
to avoid it anyway.
This was covered already:
http://mail.opensola
On Mon, July 12, 2010 10:03, Tim Cook wrote:
> Everyone's SNAPSHOTS are copy on write BESIDES ZFS and WAFL's. The
> filesystem itself is copy-on-write for NetApp/Oracle, which is why there
> is no performance degradation when you take them.
>
> Per Microsoft:
> When a change to the original volu
On Jul 14, 2010, at 05:15, Ian Collins wrote:
Use a version control tool like hg or svn!
Or Unison:
http://en.wikipedia.org/wiki/Unison_(file_synchronizer)
On Jul 24, 2010, at 01:20, Sam Fourman Jr. wrote:
I think it should go like what NetApp's snapshot does.
There was a long thread on this topic earlier this year. Please
see the
archives for details.
Do you have the URL? I don't have a long subscription
I too do not have a long subscrip
On Mon, July 26, 2010 14:17, Dav Banks wrote:
> Ah. Thanks! I should have said RAID51 - a mirror of RAID5 elements.
>
> Thanks for the info. Bummer that it can't be done.
Out of curiosity, any particular reason why you want to do this?
On Mon, July 26, 2010 14:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but
> I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace
> the drives, add the new raidz to the mirror and let it resilver and sit
> for
Hello,
TRIM support has just been committed into OpenSolaris:
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html
Via:
http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
On Wed, August 4, 2010 12:25, valrh...@gmail.com wrote:
> Actually, no. I could care less about incrementals, and multivolume
> handling. My purpose is to have occasional, long-term archival backup of
> big experimental data sets. The challenge is keeping everything organized,
> and readable severa
For those who missed it, Oracle/Sun announcement on Solaris 11:
Solaris 11 will be based on technologies currently available for
preview in OpenSolaris including:
* Image packaging system
* Crossbow network virtualization
* ZFS de-duplication
* CIFS file servic
On Aug 11, 2010, at 04:05, Orvar Korvar wrote:
Someone posted about CERN having a bad network card which injected
faulty bits into the data stream. And ZFS detected it, because of
end-to-end checksum. Does anyone have more information on this?
CERN generally uses Linux AFAICT:
http:
On Fri, August 13, 2010 07:21, F. Wessels wrote:
> I fully agree with your post. NFS is much simpler in administration.
> Although I don't have any experience with the DDRdrive X1, I've read and
> heard from various people actually using them that it's the best
> "available" SLOG device. Before eve
On Fri, August 13, 2010 07:52, Edward Ned Harvey wrote:
> If you have ZIL disabled, then sync=async. Up to 30sec of all writes are
> lost. Period.
>
> But there is no corruption or data written out-of-order. The end result
> is as-if you halted the server suddenly, flushed all the buffers to di
On Fri, August 13, 2010 11:39, F. Wessels wrote:
> I wasn't planning to buy any SSD as a ZIL. I merely acknowledged that a
> SandForce with supercap MIGHT be a solution. At least the supercap should
> take care of the data loss in case of a power failure. But they are still
> in the consumer realm
On Fri, August 13, 2010 12:35, Handojo wrote:
> Hi
> I am moving /opt and /export to a newly created zpool named 'dpool'
>
> The steps I am working on might be wrong, but here are my steps:
>
> I renamed /export to /export2
> I renamed /opt to /opt2
[...]
> But when I reboot, the system is unable to
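For reference, the usual way to relocate such datasets is with send/receive
and the mountpoint property rather than renaming directories; a rough sketch
(names are illustrative, and the exact dataset layout will differ):

  zfs snapshot -r rpool/export@move
  zfs send -R rpool/export@move | zfs recv -d dpool
  # retire the old copy, then mount the new one in its place
  zfs set mountpoint=none rpool/export
  zfs set mountpoint=/export dpool/export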
On Aug 13, 2010, at 16:39, Tim Cook wrote:
I'm a bit surprised at this development... Oracle really just
doesn't get it.
Why are you surprised? Larry Ellison is about making money, not
community. He's been fairly successful at it as well.
Sun was an engineering company at its heart; Oracl
On Aug 14, 2010, at 19:39, Kevin Walker wrote:
I once watched a video interview with Larry from Oracle; this ass rambled on
about how he hates cloud computing and that everyone was getting into cloud
computing and in his opinion no one understood cloud computing, apart from
him... :-|
If
On Aug 14, 2010, at 14:54, Edward Ned Harvey wrote:
From: Russ Price
For me, Solaris had zero mindshare since its beginning, on account of
being prohibitively expensive.
I hear that a lot, and I don't get it. $400/yr does move it out of people's
basements generally, and keeps sol10 ou
On Sun, August 15, 2010 21:44, Peter Jeremy wrote:
> Given that both provide similar features, it's difficult to see why
> Oracle would continue to invest in both. Given that ZFS is the more
> mature product, it would seem more logical to transfer all the effort
> to ZFS and leave btrfs to die.
On Mon, August 16, 2010 09:06, Edward Ned Harvey wrote:
> ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS,
> and CIFS, and iSCSI, and device management via vdev's, and so on. So ZFS
> steps on a lot of linux peoples' toes. They already have code to do this,
> or that, wh
On Sep 27, 2009, at 10:41, Frank Middleton wrote:
You bet! Provided the compiler doesn't use /var/tmp as IIRC early
versions of gcc once did...
I find using "-pipe" better:
-pipe
    Use pipes rather than temporary files for communication
    between the various stages
On Sep 28, 2009, at 19:39, Richard Elling wrote:
Finally, there are two basic types of scrubs: read-only and
rewrite. ZFS does
read-only. Other scrubbers can do rewrite. There is evidence that
rewrites
are better for attacking superparamagnetic decay issues.
Something that may be possible
On Sep 29, 2009, at 17:46, Cyril Plisko wrote:
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online
now:
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
It should probably be me
On Oct 10, 2009, at 01:26, Erik Trimble wrote:
That is, there used to be an issue in this scenario:
(1) zpool constructed from a single LUN on a SAN device
(2) SAN experiences temporary outage, while ZFS host remains running.
(3) zpool is permanently corrupted, even if no I/O occurred during
o
On Oct 24, 2009, at 08:53, Joerg Schilling wrote:
The article that was mentioned a few hours ago did mention
licensing problems without giving any kind of evidence for
this claim. If there is evidence, I would be interested in
knowing the background, otherwise it looks to me like FUD.
I'm gue
On Oct 23, 2009, at 19:27, BJ Quinn wrote:
Anyone know if this means that this will actually show up in SNV
soon, or whether it will make 2010.02? (on disk dedup specifically)
It will go in when it goes in. If you have a support contract call up
Sun and ask for details; if you're using a f
On Oct 26, 2009, at 20:42, Carson Gaspar wrote:
Unfortunately, I'm trying for a Solaris solution. I already had a Linux
solution (the 'inotify' I started out with).
And we're on a Solaris mailing list, trying to give you solutions
that work on Solaris. Don't believe everything you read on
On Wed, October 28, 2009 11:18, Tim Cook wrote:
> Either they don't like you, or you don't read your emails :)
>
> It's now hub.opensolaris.org for the main page.
>
> The forums can be found at:
> http://opensolaris.org/jive/index.jspa?categoryID=1
>
> Although they appear to be having technical di
On Wed, October 28, 2009 11:24, Frank Middleton wrote:
> However, you are certainly correct that Sun's business model isn't
> aimed at retail, although one wonders about the size of the market
> for robust SOHO/Home file/media servers that no one seems to be
> addressing right now (well, Apple, ma
On Oct 29, 2009, at 15:08, Henrik Johansson wrote:
On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:
On Thu, 29 Oct 2009, Orvar Korvar wrote:
So the solution is to never get more than 90% full disk space, dammit?
Right. While UFS created artificial limits to keep the filesystem
from
Deduplication was committed last night by Mr. Bonwick:
Log message:
PSARC 2009/571 ZFS Deduplication Properties
6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
Via c0t0d0s0.org.
On Nov 10, 2009, at 20:55, Mark A. Carlson wrote:
Typically this is called "Sanitization" and could be done as part of
an evacuation of data from the disk in preparation for removal.
You would want to specify the patterns to write and the number of
passes.
See also "remanence":
http:
On Wed, November 11, 2009 13:29, Darren J Moffat wrote:
> No I won't be doing that as part of the zfs-crypto project. As I said
> some jurisdictions are happy that if the data is encrypted then
> overwrite of the blocks isn't required. For those that aren't use
> dd(1M) or format(1M) may be suff
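A rough sketch of the dd(1M) route mentioned, run after the pool is destroyed
(device name is illustrative; pass counts and verification are a matter of
local policy):

  zpool destroy tank
  dd if=/dev/urandom of=/dev/rdsk/c0t1d0s0 bs=1024k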
On Nov 11, 2009, at 17:40, Bob Friesenhahn wrote:
Zfs is absolutely useless for this if the underlying storage uses
copy-on-write. Therefore, it is absolutely useless to put it in
zfs. No one should even consider it.
The use of encrypted blocks is much better, even though encrypted
block
On Nov 23, 2009, at 14:46, Len Zaifman wrote:
Under these circumstances what advantage would a 7310 cluster have over 2
X4540s backing each other up and splitting the load?
Do you want to worry about your storage system at 3 AM?
That's what all these appliances (regardless of vendor) get you for
Deirdre has posted a video of the presentation Darren Moffat gave at
the November 2009 Solaris Security Summit:
http://blogs.sun.com/video/entry/zfs_crypto_data_encryption_for
Slides (470 KB PDF):
http://wikis.sun.com/download/attachments/164725359/osol-sec-sum-09-zfs.pdf
On Dec 31, 2009, at 13:44, Joerg Schilling wrote:
ZFS is COW, but does the SSD know which block is "in use" and which
is not?
If the SSD did know whether a block is in use, it could erase unused blocks
in advance. But what is an "unused block" on a filesystem that supports
snapshots?
P
On Jan 1, 2010, at 03:30, Eric D. Mudama wrote:
On Thu, Dec 31 at 16:53, David Magda wrote:
Just as the first 4096-byte block disks are silently emulating
4096-to-512 blocks, SSDs are currently re-mapping LBAs behind the
scenes. Perhaps in the future there will be a setting to say "
On Jan 1, 2010, at 04:33, Ragnar Sundblad wrote:
I see the possible win that you could always use all the working
blocks on the disk, and when blocks go bad your disk will shrink.
I am not sure that is really what people expect, though. Apart from
that, I am not sure what the gain would be.
C
On Jan 1, 2010, at 11:04, Ragnar Sundblad wrote:
But that would only move the hardware specific and dependent flash
chip handling code into the file system code, wouldn't it? What
is won with that? As long as the flash chips have larger pages than
the file system blocks, someone will have to shu
On Jan 2, 2010, at 19:44, Erik Trimble wrote:
I do think the market is slightly larger: Hitachi and EMC storage
arrays/big SAN controllers, plus all Linux boxes once Btrfs
actually matures enough to be usable. I don't see MSFT making any
NTFS changes to help here, but they are doing some r