What exactly does the zfs_space function do?
The comments suggest it allocates and frees space in a file. What does this
mean? And through what operation can I invoke this function? For example, whenever I
edit/write to a file, zfs_write is called. So what operation can be used to
call this function?
Thanks.
On Wed, Jan 7, 2009 at 6:30 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
> Since iSCSI is block-level, I don't think the iSCSI intelligence at
> the file level you're asking for is feasible. VSS is used at the
> file-system level on either NTFS partitions or over CIFS.
>
> -J
Folks, I have had much fun and caused much trouble.
I hope we now have learned the "open" spirit of storage.
I will be less involved with the list discussion going forward, since I too
have much work to do in my super domain.
[but I still have lunch hours, so be good!]
As I always say, thank you
On Wed, Jan 7, 2009 at 6:43 PM, JZ wrote:
> ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
>
> But do I have to spell this out to you -- some things are invented not for
> home use?
Yeah, I'm sincere, but I've ordered more or less the same type of
hardware for commercial use.
ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
But do I have to spell this out to you -- some things are invented not for
home use?
Cindy, would you want to do ZFS at home, or just have some wine and music?
Can we focus on commercial usage?
please!
- Original Message -
On Wed, Jan 7, 2009 at 4:53 PM, Brandon High wrote:
> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley wrote:
>> How much is your time worth?
>
> Quite a bit.
>
>> Consider the engineering effort going into every Sun Server.
>> Any system from Sun is more than sufficient for a home server.
>> You want more disks, then buy one with more slots. Done.
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> I vaguely remember a time when UFS had limits to prevent
> ordinary users from consuming past a certain limit, allowing
> only the super-user to use it. Not that I'm advocating that
> approach for ZFS.
looks to me like zfs already provides a
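A minimal sketch of the quota/reservation controls ZFS provides, assuming that
is what the truncated sentence above refers to; pool and dataset names are made
up:

    # cap an ordinary user's dataset so it cannot consume the whole pool
    zfs set quota=400G tank/home/users

    # hold space back for the administrator's dataset
    zfs set reservation=50G tank/admin

Everything not covered by a quota or reservation remains shared pool space.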
Hello high buckley,
OMG, in that spirit, I would suggest getting a $99 per year per 1 TB
web-based cloudy storage somewhere, if you don't care about your baby
data...
what's $8K for enterprises?!
;-)
z, see pic again
- Original Message -
From: "Brandon High"
To: "Joel Buckley"
Cc
On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley wrote:
> How much is your time worth?
Quite a bit.
> Consider the engineering effort going into every Sun Server.
> Any system from Sun is more than sufficient for a home server.
> You want more disks, then buy one with more slots. Done.
A few years
OMG, no safety feature?!
Sorry, even on ZFS turf,
if you use HyperV, and the HyperV VSS Writer, it could be a lot safer
-- if you don't know how to do a block-level Super thing...
best,
zStorageAnalyst
- Original Message -
From: "Jason J. W. Williams"
To: "Mr Stephen Yum"
Cc: ;
Sent:
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum wrote:
> Hi all,
>
> If I want to make a snapshot of an
Hi all,
If I want to make a snapshot of an iscsi volume while there's a transfer going
on, is there a way to detect this and either 1) not include the file being
transferred, or 2) wait until the transfer is finished before making the
snapshot?
If I understand correctly, this is what Microsoft VSS does.
long live the king
- Original Message -
From: "Jason King"
To:
Sent: Wednesday, January 07, 2009 5:33 PM
Subject: Re: [zfs-discuss] Practical Application of ZFS
> On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt wrote:
>> On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
>> wrote:
>>
>>>On Ja
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt wrote:
> On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
> wrote:
>
>>On Jan 6, 2009, at 14:21, Rob wrote:
>>
>>> Obviously ZFS is ideal for large databases served out via
>>> application level or web servers. But what other practical ways are
>>> there to
> The Samsung HD103UJ drives are nice, if you're not using
> NVidia controllers - there's a bug in either the drives or the
> controllers that makes them drop drives fairly frequently.
Do you happen to have more details about this problem? Or some
pointers?
Thanks -- Volker
--
On Tue, 06 Jan 2009 22:18:40 -0700, Neil Perrin
wrote:
>I vaguely remember a time when UFS had limits to prevent
>ordinary users from consuming past a certain limit, allowing
>only the super-user to use it. Not that I'm advocating that
>approach for ZFS.
I know that approach from other operating systems.
For SuperUsers, and the little environments, the JAVA embedded thing does it
all...
http://java-source.net/open-source/database-engines
;-)
z
- Original Message -
From: "Kees Nuyt"
To:
Sent: Wednesday, January 07, 2009 4:51 PM
Subject: Re: [zfs-discuss] Practical Application of ZFS
> O
On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
wrote:
>On Jan 6, 2009, at 14:21, Rob wrote:
>
>> Obviously ZFS is ideal for large databases served out via
>> application level or web servers. But what other practical ways are
>> there to integrate the use of ZFS into existing setups to experi
Why is it impossible to have a ZFS pool with a log device for the rpool (the
device used for the root partition)?
Is this a bug?
I can't boot from a ZFS root (/) on a zpool which also uses a log device. Maybe
it's not supported because then GRUB would have to support it too?
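For illustration only (device names are made up, and this is a sketch of the
usual workaround rather than an explanation of why it is unsupported): keep the
root pool plain and put the separate log device on a data pool instead:

    # root pool: a plain mirror on disk slices, no separate log device
    zpool create rpool mirror c0t0d0s0 c0t1d0s0

    # data pool: a separate log device is fine here
    zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0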
Ok, Cindy,
Next question is, let me do this for Orvar --
do any of the existing snapshot settings need to be changed for the new disks,
before they corrupt the baby data?
:-)
best,
z
- Original Message -
From:
To: "Orvar Korvar"
Cc:
Sent: Wednesday, January 07, 2009 4:31 PM
Subject: Re:
Hi Orvar,
Option A effectively doubles your existing pool (500GB x 4-->1TB x 4)
*and* provides increased reliability. This is the difference between
options A and B.
I also like the convenience of just replacing the smaller disks with
larger disks in the existing pool and not having to create a new pool.
On Thu 08/01/09 08:08 , "Brent Jones" br...@servuhome.net sent:
>
> I have yet to devise a script that starts Mbuffer zfs recv on the
> receiving side with proper parameters, then start an Mbuffer ZFS send
> on the sending side, but I may work on one later this week.
> I'd like the snapshots to be
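A hedged sketch of what the core of such a script might look like; host names,
pool and snapshot names, port, and buffer sizes are all assumptions:

    # receiving side: listen, buffer, and feed zfs receive
    mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup

    # sending side: stream the snapshot through mbuffer to the receiver
    zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O recvhost:9090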
Cindy and you all, thanks for your answers! I have gained several more
OpenSolaris converts in the meantime. One guy said, "Why didn't I try ZFS before??" :o)
A quick question, in scenario A)
My old 4 Samsung 500GB drives are in a raidz1. If I exchange each drive and finally add a
hot spare, it is not the same re
This post from close to a year ago never received a response. We just had this
same thing happen to another server that is running Solaris 10 U6. One of the
disks was marked as removed and the pool degraded, but 'zpool status -x' says
all pools are healthy. After doing a 'zpool online' on th
On Wed, Jan 7, 2009 at 12:33 PM, Sam wrote:
> Ok so the capacity is ruled out, it still bothers me that after
> experiencing the error if I do a 'zpool status' it just hangs (forever) but
> if I reboot the system everything comes back up fine (for a little while).
>
> Last night I installed the latest SXDE and I'm going to see if that fixes it, if
> Marcelo,
Hello there...
>
> I did some more tests.
You are getting very useful information from your tests. Thanks a lot!!
>
> I found that not each uberblock_update() is also
> followed by a write to
> the disk (although the txg is increased every 30
> seconds for each of the
> three zp
On Wed, Jan 7, 2009 at 12:36 AM, Andrew Gabriel wrote:
> Brent Jones wrote:
>
>> Reviving an old discussion, but has the core issue been addressed in
>> regards to zfs send/recv performance issues? I'm not able to find any
>> new bug reports on bugs.opensolaris.org related to this, but my search
>
Additional comment:
zfs receive verifies the data sent. It can also maintain the snapshots, which
is handy.
rsync will also verify the data sent between source and destination. rsync
doesn't know anything about snapshots, though it might be a best practice
to use a snapshot as an rsync source.
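A minimal sketch of the two approaches; dataset, host, and snapshot names are
made up:

    # zfs send/receive: snapshots are preserved on the target
    zfs snapshot tank/data@2009-01-07
    zfs send tank/data@2009-01-07 | ssh backuphost zfs receive backup/data

    # rsync: use a snapshot as a stable, read-only source
    rsync -a /tank/data/.zfs/snapshot/2009-01-07/ backuphost:/backup/data/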
OMG, open folks are really budget concerned.
In enterprises, a 90% policy as a safety feature is ok... alerts will be
sent and POs will be issued...
:-)
z
- Original Message -
From: "Sam"
To:
Sent: Wednesday, January 07, 2009 1:33 PM
Subject: Re: [zfs-discuss] Problems at 90% zpool capa
Ok so the capacity is ruled out, it still bothers me that after experiencing
the error if I do a 'zpool status' it just hangs (forever) but if I reboot the
system everything comes back up fine (for a little while).
Last night I installed the latest SXDE and I'm going to see if that fixes it,
if
Orvar,
Two choices are described below, where safety is the priority.
I prefer the first one (A).
Cindy
A. Replace each 500GB disk in the existing pool with a 1 TB drive.
Then, add the 5th 1TB drive as a spare. Depending on the Solaris
release you are running, you might need to export/import the pool.
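For illustration, a rough sketch of option A with made-up pool and device names
(wait for each resilver to complete before replacing the next disk):

    zpool replace tank c1t1d0 c2t1d0          # repeat for each of the four disks
    zpool status tank                         # check resilver progress
    zpool add tank spare c2t5d0               # fifth 1TB disk as hot spare
    zpool export tank && zpool import tank    # on older releases, to see the new size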
So I have just finished building something similar to this...
I'm finally replacing my Pentium II 400MHz fileserver!
My setup is:
Opensolaris 2008.11
http://www.newegg.com/Product/Product.aspx?Item=N82E16813138117
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145184
http://www.newegg.c
[a quick reply to the beloved Orvar, on my lunch hour...]
economy is bad, save some $ --
don't pay for tools, do your own open solution for this.
know JAVA?
Hash your data and see if they are cool.
http://oceanstore.sourceforge.net/javadoc/pond/ostore/dataobj/DataObject.DataBlock.html#computeVhash
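If Java is not your thing, a minimal sketch of the same "hash and compare" idea
with the Solaris digest(1) command (mount points are made up; this only compares
file contents):

    cd /oldpool && find . -type f -exec digest -v -a sha1 {} + | sort > /tmp/old.sum
    cd /newpool && find . -type f -exec digest -v -a sha1 {} + | sort > /tmp/new.sum
    diff /tmp/old.sum /tmp/new.sum   # no output means the copies match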
On Wed, Jan 7, 2009 at 11:45 AM, Tim wrote:
>
>
>
>> >> Decision #2: 1.5TB Seagate vs. 1TB WD (or someone else)
>> The 1.5TB drives have a sketchy reputation as compared to any other
>> Seagate drives. The rumor is that reliability was not high enough for
>> the OEMs to carry them, so that's why
On Wed, Jan 7, 2009 at 10:45, Joel Buckley wrote:
> Consider the engineering effort going into every Sun Server.
> Any system from Sun is more than sufficient for a home server.
> You want more disks, then buy one with more slots. Done.
In my experience, buying disks (or disk-related things---JBO
On Wed, January 7, 2009 04:29, Peter Korn wrote:
> Decision #4: file system layout
> I'd like to have ZFS root mirrored. Do we simply use a portion of the
> existing disks for this, or add two disks just for root? Use USB-2
> flash as those 2 disks? And where does swap go?
The default install in
I have a ZFS raidz1 with 4 Samsung 500GB disks. I now want 5 Samsung 1TB drives
instead. So I connect the 5 drives, create a zpool raidz1 and copy the content
from the old zpool to the new zpool.
Is there a way to safely copy the zpool? How can I make sure that it really has
been copied safely? Ideall
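One commonly suggested way, sketched here with made-up pool and snapshot names:
take a recursive snapshot, send the whole tree, then scrub the new pool so every
block is verified against its checksums:

    zfs snapshot -r oldtank@move
    zfs send -R oldtank@move | zfs receive -d -F newtank
    zpool scrub newtank
    zpool status -v newtank    # should report 0 errors when the scrub finishes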
On 6-Jan-09, at 1:19 PM, Bob Friesenhahn wrote:
> On Tue, 6 Jan 2009, Jacob Ritorto wrote:
>
>> Is urandom nonblocking?
>
> The OS provided random devices need to be secure and so they depend on
> collecting "entropy" from the system so the random values are truly
> random. They also execute co
On 01/06/09 09:07, Chris Gerhard wrote:
> To improve the performance of scripts that manipulate zfs snapshots and the
> zfs snapshot service in particular, there needs to be a way to list all the
> snapshots for a given object and only the snapshots for that object.
>
> There are two RFEs filed th
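For reference, a hedged example of the kind of invocation involved (dataset name
is made up); the RFEs are about making exactly this sort of listing fast and
restricted to the one object of interest:

    zfs list -r -t snapshot -o name tank/home/fred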
Dude,
How much is your time worth?
Consider the engineering effort going into every Sun Server.
Any system from Sun is more than sufficient for a home server.
You want more disks, then buy one with more slots. Done.
Search "http://store.sun.com"; for the item that matches your
needs and run wit
I have two 280R systems. System A has Solaris 10u6, and its (2) drives
are configured as a ZFS rpool, and are mirrored. I would like to pull
these drives, and move them to my other 280, system B, which is
currently hard drive-less.
Although unsupported by Sun, I have done this before without
Marcelo,
I did some more tests.
I found that not each uberblock_update() is also followed by a write to
the disk (although the txg is increased every 30 seconds for each of the
three zpools of my 2008.11 system). In these cases, ub_rootbp.blk_birth
stays at the same value while txg is incremen
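For anyone repeating the test, one userland way to watch this (pool name is a
placeholder) is to dump the active uberblock and compare txg and timestamp
between runs:

    zdb -u rpool    # prints the active uberblock: txg, timestamp, rootbp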
Hello Bernd,
Now I see your point... ;-)
Well, following some "very simple" math:
- One txg every 5 seconds = 17280/day;
- Each txg writing 1MB (L0-L3) = ~17GB/day
In the paper the math was 10 years = (2.7 * the size of the USB drive) writes
per day, right?
So, on a 4GB drive, that would be ~10GB/day
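Spelling the arithmetic out, assuming the 10,000-write-cycle lower bound quoted
from the vendor document elsewhere in this thread:

    write budget over 10 years = 4 GB * 10,000 cycles        = 40,000 GB
    per-day budget             = 40,000 GB / 3,650 days     ~= 11 GB/day  (~2.7 * 4 GB)
    estimated txg traffic      = 17,280 txg/day * 1 MB/txg  ~= 17 GB/day

so the estimated traffic already exceeds the 10-year budget.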
I'm looking to do the same thing - home NAS with ZFS.
I'm debating several routes/options, and I'd appreciate opinions from folks
here.
My system will primarily be a file & music server, serving CIFS and some NFS as
well as driving multiple concurrent audio streams via SqueezeCenter, and
perha
--On 06 January 2009 16:37 -0800 Carson Gaspar wrote:
> On 1/6/2009 4:19 PM, Sam wrote:
>> I was hoping that this was the problem (because just buying more
>> discs is the cheapest solution given time=$$) but running it by
>> somebody at work they said going over 90% can cause decreased
>> perf
Marcelo,
the problem which I mentioned is with the limited number of write cycles
in flash memory chips. The following document published in June 2007 by
a USB flash drive vendor says the guaranteed number of write cycles for
their USB flash drives is between 10,000 and 100,000:
http://www.cor
Brent Jones wrote:
> Reviving an old discussion, but has the core issue been addressed in
> regards to zfs send/recv performance issues? I'm not able to find any
> new bug reports on bugs.opensolaris.org related to this, but my search
> kung-fu may be weak.
I raised:
CR 6729347 Poor zfs receive p
On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulb...@aei.mpg.de sent:
> Brent Jones wrote:
> >
> > Using mbuffer can speed it up dramatically, but this seems like a hack
> > without addressing a real problem with zfs send/recv. Trying to send any
> > meaningful sized snapshots from say an