On Tue, Aug 11, 2009 at 9:39 AM, Ed Spencer wrote:
> We back up 2 filesystems on Tuesday, 2 filesystems on Thursday, and 2 on
> Saturday. We back up to disk and then clone to tape. Our backup people
> can only handle doing 2 filesystems per night.
>
> Creating more filesystems to increase the paralle
On Aug 11, 2009, at 17:07, Marcelo Leal wrote:
My question is about SSDs, and the differences between using SLC for
Readzillas instead of MLC.
Sun uses MLCs for Readzillas in its 7000 series. I would think that
if SLCs (which are generally more expensive) were really needed, they
would be
>SSDs with capacitor-backed write caches
>seem to be fastest.
How do you distinguish them from SSDs without one?
I never saw this explicitly mentioned in the specs.
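Presumably the goal is to use one as a dedicated log device; as a sketch, with
pool and device names that are only placeholders:
# zpool add tank log c3t0d0
# zpool status tank
zpool status then lists the device under its own "logs" section.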
On Tue, 2009-08-11 at 14:56, Scott Lawson wrote:
> > Also, is atime on?
> Turning atime off may make a big difference for you. It certainly does
> for Sun Messaging server.
> Maybe worth doing and reposting result?
Yes. All these results were attained with atime=off. We made that change
on all t
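For reference, the change is a one-liner and easy to verify recursively; the
dataset names below are just examples:
# zfs set atime=off mailpool/imap
# zfs get -r atime mailpool
Descendant filesystems inherit the setting unless they override it.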
>I tried making my NFS mount at a higher zvol level. But I cannot traverse to the
>sub-zvols from this mount.
I really wonder when someone will come up with a little patch that implements a
crossmnt option for the Solaris nfsd (like the one that exists for the Linux nfsd).
OK, even if it's a hack - if it works it
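Until something like that exists, one workaround sketch (dataset names made up)
is to share the parent so the children inherit sharenfs, and mount each child
explicitly on the client:
# zfs set sharenfs=on tank/export
# zfs get -r sharenfs tank/export
The property is inherited by the child datasets, but a Solaris NFS client still
needs one mount per child filesystem.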
On Tue, 2009-08-11 at 08:04 -0700, Richard Elling wrote:
> On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
> > I suspect that if we 'rsync' one of these filesystems to a second
> > server/pool that we would also see a performance increase equal to
> > what
> > we see on the development server. (I
glidic anthony wrote:
Thanks, but if it's experimental I prefer not to use it. My server is used as an NFS
share for an ESXi host, so I prefer that it be stable.
But I think the best way is to add another HDD for the install and make
my raidz with these 3 disks.
Do you really consider OpenSolaris prod
Hello David...
Thanks for your answer, but I was not talking about buying disks...
I think you misunderstood my email (or my bad English), but I know about the
performance improvements when using a cache device.
My question is about SSDs, and the differences between using SLC for Readzillas
instead of MLC.
Tha
Thanks, but if it's experimental I prefer not to use it. My server is used as an NFS
share for an ESXi host, so I prefer that it be stable.
But I think the best way is to add another HDD for the install and make
my raidz with these 3 disks.
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> Then creating a zpool:
> zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
>
> zpool list
> NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> huhctmppool  59.5G  103K  59.5G   0%
On Aug 11, 2009, at 1:21 PM, Ed Spencer wrote:
Concurrency/Parallelism testing.
I have 6 different filesystems populated with email data on our mail
development server.
I rebooted the server before beginning the tests.
The server is a T2000 (sun4v) machine, so it's ideally suited for this
type of
glidic anthony wrote:
Hi,
I have only 3 HDDs and I want to make a raidz. But I think it's not possible to
install OpenSolaris on a raidz system. So what is the solution? Maybe to create
a slice on the first disk (where OpenSolaris is installed) and create a raidz
with this slice and the two other di
Hi,
I have only 3 HDDs and I want to make a raidz. But I think it's not possible to
install OpenSolaris on a raidz system. So what is the solution? Maybe to create
a slice on the first disk (where OpenSolaris is installed) and create a raidz
with this slice and the two other disks?
I try that but when
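The command I tried looked roughly like this (device names here are only
examples):
# zpool create tank raidz c0t0d0s7 c0t1d0 c0t2d0
where c0t0d0s7 is a slice on the boot disk and the other two are whole disks.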
Hi all,
In six short weeks (Sept. 14th) all content on opensolaris.org will be
migrated to XWiki. To prepare your web pages for this migration, please
check the first test migration here:
http://hub.opensolaris.org/bin/view/Main/
If you find any problems, please refer to the tips and tricks
Concurrency/Parallelism testing.
I have 6 different filesystems populated with email data on our mail
development server.
I rebooted the server before beginning the tests.
The server is a T2000 (sun4v) machine, so it's ideally suited for this
type of testing.
The test was to tar (to /dev/null) each o
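In shell terms the concurrent runs looked roughly like this (the paths are
placeholders for the six filesystems):
# for fs in /mail/fs1 /mail/fs2 /mail/fs3 /mail/fs4 /mail/fs5 /mail/fs6; do tar cf /dev/null $fs & done; wait
Running the tars in parallel is what exercises the T2000's many hardware threads.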
Richard Elling wrote:
On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote:
At a first glance, your production server's numbers are looking fairly
similar to the "small file workload" results of your development
server.
I thought you were saying t
Background:
Have a test server running SECE Build 119 configured as a CIFS server in Domain
Mode. Build 119 was required because in our test environment we have a Windows
2008 domain and the DCs have SP2 installed. I joined the server to the domain,
created a ZFS file system and shared it out.
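The sequence was roughly the following (user and domain names here are just
examples, as is the dataset):
# smbadm join -u Administrator w2k8test.local
# zfs create tank/cifsshare
# zfs set sharesmb=on tank/cifsshare
After that the share was visible from the Windows side.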
>Re-surfacing an old thread. I was wondering myself if there are any
>home-use commercial NAS devices with ZFS. I did find that there is the
>Thecus 7700. But it appears to come with Linux and use ZFS via FUSE,
>which I (perhaps unjustly) don't feel comfortable with :)
No, you justly feel uncomforta
Hi Darren,
I tried exactly the same, but it doesn't seem to work.
First the size of the disk:
# echo | format -e | grep -i 05A
17. c6t6001438002A5435A0001005Ad0
    /scsi_vhci/s...@g6001438002a5435a0001005a
Then creating a zpool:
# zpool create -m /zones/
On 08/11/09 06:03, Darren J Moffat wrote:
I thought so too initially, but then I changed my mind and I like it the way
it is. The reason is that describing the intent allows changing
the implementation while keeping the meaning. It is the intent that
matters to the administrator, not the imp
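For reference, the intent as it stands today is set and read like any other
dataset property; the dataset name here is only an example:
# zfs set logbias=throughput tank/db/logs
# zfs get logbias tank/db/logs
Setting it back to latency, the default, restores the normal slog behaviour.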
On Tue, August 11, 2009 10:39, Ed Spencer wrote:
> I suspect that if we 'rsync' one of these filesystems to a second
> server/pool that we would also see a performance increase equal to what
> we see on the development server. (I don't know how zfs send a receive
Rsync has to traverse the entire
On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote:
At a first glance, your production server's numbers are looking
fairly
similar to the "small file workload" results of your development
server.
I thought you were saying that the development ser
On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote:
> At a first glance, your production server's numbers are looking fairly
> similar to the "small file workload" results of your development
> server.
>
> I thought you were saying that the development server has faster performance?
The developmen
On Tue, August 11, 2009 09:24, Marcelo Leal wrote:
> Many companies (including Sun) have hardware with support only for SLC...
> as I need both, I just want to hear your experiences about using SLC SSDs
> for ZFS cache. One point is cost, but I want to know if the performance
> is much different, bec
przemol...@poczta.fm wrote:
On Tue, Aug 11, 2009 at 05:42:31AM -0700, Robert Milkowski wrote:
Hi,
I like the new feature but I was thinking that maybe the keywords being used
should be different?
Currently it is:
# zfs set logbias=latency {dataset}
# zfs set logbias=throughput {dataset}
May
On Tue, Aug 11, 2009 at 7:33 AM, Ed Spencer wrote:
> I've come up with a better name for the concept of file and directory
> fragmentation, which is "Filesystem Entropy": where, over time, an
> active and volatile filesystem moves from an organized state to a
> disorganized state resulting in back
Hello there...
Many companies (including Sun) have hardware with support only for SLC... as I
need both, I just want to hear your experiences about using SLC SSDs for ZFS
cache. One point is cost, but I want to know if the performance is much
different, because the two are created specifically to p
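Either way, the cache device is attached in the same way no matter which flash
type is underneath; pool and device names below are placeholders:
# zpool add tank cache c4t0d0
# zpool iostat -v tank 5
The second command shows how much read traffic the cache device actually absorbs.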
On Tue, Aug 11, 2009 at 05:42:31AM -0700, Robert Milkowski wrote:
> Hi,
>
> I like the new feature but I was thinking that maybe the keywords being used
> should be different?
>
> Currently it is:
>
> # zfs set logbias=latency {dataset}
> # zfs set logbias=throughput {dataset}
>
> Maybe it wou
Robert Milkowski wrote:
Hi,
I like the new feature but I was thinking that maybe the keywords being used
should be different?
Currently it is:
# zfs set logbias=latency {dataset}
# zfs set logbias=throughput {dataset}
Maybe it would be more clear this way:
# zfs set logdest=dedicated {datas
At a first glance, your production server's numbers are looking fairly
similar to the "small file workload" results of your development
server.
I thought you were saying that the development server has faster performance?
Alex.
On Tue, Aug 11, 2009 at 1:33 PM, Ed Spencer wrote:
> I've come up w
Hi,
I like the new feature but I was thinking that maybe the keywords being used
should be different?
Currently it is:
# zfs set logbias=latency {dataset}
# zfs set logbias=throughput {dataset}
Maybe it would be more clear this way:
# zfs set logdest=dedicated {dataset}
# zfs set logdest=pool
I've come up with a better name for the concept of file and directory
fragmentation, which is "Filesystem Entropy": where, over time, an
active and volatile filesystem moves from an organized state to a
disorganized state, resulting in backup difficulties.
Here are some stats which illustrate the
... which sounds very similar to issues I've raised many times. ZFS should
have the ability to double check what a drive is doing, and speculatively time
out a device that appears to be failing in order to maintain pool performance.
If a single drive in a redundant pool can be seen to be respon
On Tue, Aug 11, 2009 at 4:14 PM, Martin Wheatley wrote:
> Did anyone reply to this question?
>
> We have the same issue and our Windows admins do not see why the iSCSI target
> should be disconnected when the underlying storage is extended
Is there any iSCSI target that can be extended without discon
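Assuming the LU is backed by a zvol, growing the backing store itself is one
command (names made up):
# zfs set volsize=200G tank/iscsi/vol1
The open question is whether the target then advertises the new size and the
initiator picks it up without a reconnect.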
Hi Chris,
On Sun, Aug 09, 2009 at 05:53:12PM -0700, Chris Baker wrote:
> OK - had a chance to do more testing over the weekend. Firstly some extra
> data:
>
> Moving the mirror to both drives on ICH10R ports and on sudden disk power-off
> the mirror faulted cleanly to the remaining drive no pro
Did anyone reply to this question?
We have the same issue and our Windows admins do not see why the iSCSI target
should be disconnected when the underlying storage is extended
This sounds right. There's also the problem of adding non-redundant vdev types to
redundant types... but yeah, it kind of defeats the purpose. The thing
people seem to miss is that ZFS comes with a price, and that price is that you need
to plan your pool AND its expansion ahead of time. If you want to grow pools
w
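For example, growing a raidz pool cleanly means adding a whole new raidz vdev of
the same width rather than single disks (device names are hypothetical):
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
zpool warns (and wants -f) if the new vdev's redundancy level doesn't match the
existing ones, which is exactly the mismatch mentioned above.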