It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but at a minimum, describing your setup, use case,
the hardware, and other such facts would give people a place to start.
There are much brighter stars on this list than myself, but if you are
sha
thomas wrote:
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
No problem. I'll contact you off-list.
To answer the question you asked here... the answer is "no". There have been
MANY discussions of this in the past. Here's the long thread I started
back
in May about backup strategies for ZFS pools and file systems:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html
But
On Jun 7, 2010, at 4:50 PM, besson3c wrote:
> Hello,
>
> I have a drive that was a part of the pool showing up as "removed". I made no
> changes to the machine, and there are no errors being displayed, which is
> rather weird:
>
> # zpool status nm
> pool: nm
> state: DEGRADED
> scrub: none r
Thank you David,
Thank you Cindy,
I understand that it is difficult, but is it logically impossible to write a
filter program to do that with reasonable memory use?
On Jun 7, 2010, at 16:32, Richard Elling wrote:
Please don't confuse Ethernet with IP. Ethernet has no routing and
no back-off other than that required for the link.
Not entirely accurate going forward. IEEE 802.1Qau defines an end-to-
end congestion notification management system:
On Mon, 7 Jun 2010, Mark S Durney wrote:
The customer states that he backed out kernel patch 142901-12 and then
the x4500 boots successfully. Has anyone seen this? It almost seems like
the zfs root pool is not being seen upon reboot.
You should find out from your customer what kernel r
Our e-mail server started to slow down today. One of the disk devices
is frequently at 100% usage. The heavy writes seem to cause reads to
run quite slowly. In the statistics below, `c0t0d0' is UFS, containing
the / and /var slices. `c0t1d0' is ZFS, containing /var/log/syslog,
a couple of datab
Everyone, thank you for the comments, you've given me lots of great info to
research further.
On Mon, Jun 7, 2010 at 15:57, Ross Walker wrote:
> On Jun 7, 2010, at 2:10 AM, Erik Trimble wrote:
>
> Comments in-line.
>
>
> On 6/6/2010 9:16 PM, Ken wrote:
>
> I'm looking at VMWare, ESXi 4, but I'l
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Garrett D'Amore
>
> On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote:
> > Do you lose the data if you lose that 9v feed at the same time the
> computer loses power?
>
> Yes. Hence the
Hello,
I'm wondering if somebody can kindly direct me to a sort of newbie way of
assessing whether my ZFS pool performance is a bottleneck that can be improved
upon, and/or whether I ought to invest in an SSD ZIL mirrored pair? I'm a little
confused by what the output of iostat, fsstat, the zils
Hello,
I have a drive that was a part of the pool showing up as "removed". I made no
changes to the machine, and there are no errors being displayed, which is
rather weird:
# zpool status nm
pool: nm
state: DEGRADED
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
Hi Mark:
On Mon, Jun 7, 2010 at 23:21, Mark S Durney wrote:
> IHAC
>
> Who has an x4500 (x86 box) with a ZFS root filesystem. They installed
> patches today, the latest Solaris 10 x86 recommended patch cluster, and the
> patching seemed to complete successfully. Then when they tried to reboo
IHAC
who has an x4500 (x86 box) with a ZFS root filesystem. They installed
patches today, the latest Solaris 10 x86 recommended patch cluster, and the
patching seemed to complete successfully. Then when they tried to reboot the
box, the machine would not boot. They get the following error
N
When I looked for references on the ARC freeing algorithm, I did find some
lines of code talking about freeing ARC when memory is under pressure.
Nice... but what counts as "memory under pressure" in kernel terms?
Jumping from C lines to blogs to docs, I went back
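For what it's worth, a rough way to watch this from userland (assuming a
Solaris/OpenSolaris box with the standard kstat(1M) and mdb(1) tools) is to
poll the ARC kstats; the target size "c" drops when the kernel decides free
memory is getting tight:
% kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
# echo ::arc | mdb -k          (as root; the same counters plus the ARC tunables)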
On Jun 7, 2010, at 2:10 AM, Erik Trimble
wrote:
Comments in-line.
On 6/6/2010 9:16 PM, Ken wrote:
I'm looking at VMWare, ESXi 4, but I'll take any advice offered.
On Sun, Jun 6, 2010 at 19:40, Erik Trimble
wrote:
On 6/6/2010 6:22 PM, Ken wrote:
Hi,
I'm looking to build a virtualiz
> Native ZFS for Linux
Very good to see that there is such an effort in progress.
On Mon, Jun 7, 2010 at 1:47 PM, Fredrich Maney wrote:
> Not to be too harsh, but as long as you can't mount filesystems, it
> seems to just be hype/vaporware to me.
It's a big step in the right direction.
You can still use zvols to create ext3 filesystems, and use the zpool
for disk management a
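For example, something along these lines should already work with the port
(a sketch only: the pool, zvol, and device names are made up, and the exact
/dev path for zvols may differ depending on how the port exposes them):
# zpool create tank /dev/sdb /dev/sdc
# zfs create -V 20G tank/vm1
# mkfs.ext3 /dev/zvol/tank/vm1     (or /dev/tank/vm1, depending on the port)
# mount /dev/zvol/tank/vm1 /mnt/vm1
You still get checksumming, snapshots, and clones underneath the ext3
filesystem; only the POSIX layer on top is missing.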
On Mon, 2010-06-07 at 13:32 -0700, Richard Elling wrote:
> On Jun 7, 2010, at 11:06 AM, Miles Nordin wrote:
> >
> > the other difference is in the latest comstar which runs in
> > sync-everything mode by default, AIUI. Or it does use that mode only
> > when zvol-backed? Or something.
>
> It d
Thanks for posting this, but these two sentences seem to contradict each other:
"Employees of Lawrence Livermore National Laboratory have ported
Sun's/Oracle's ZFS natively to Linux."
"The ZFS Posix Layer has not been implemented yet, therefore mounting
file systems is not yet possible"
Not to b
On Jun 7, 2010, at 11:06 AM, Miles Nordin wrote:
>
> the other difference is in the latest comstar which runs in
> sync-everything mode by default, AIUI. Or it does use that mode only
> when zvol-backed? Or something.
It depends on your definition of "latest." The latest OpenSolaris release
http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux
Native ZFS Port for Linux
posted by Thom Holwerda on Mon 7th Jun 2010 10:15 UTC, submitted by kragil
Employees of Lawrence Livermore National Laboratory have ported
Sun's/Oracle's ZFS natively to Linux. Linux already had a ZFS port in
u
On Mon, 7 Jun 2010, Miles Nordin wrote:
FC has different QoS properties than Ethernet because of the buffer
credit mechanism---it can exert back-pressure all the way through the
fabric. same with IB, which is HOL-blocking. This is a big deal with
storage, with its large blocks of bursty writes
On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote:
> Do you lose the data if you lose that 9v feed at the same time the computer
> loses power?
Yes. Hence the need for a separate UPS.
- Garrett
Do you lose the data if you lose that 9v feed at the same time the computer
loses power?
- "Ray Van Dolson" skrev:
> FYI;
>
> With 4K recordsize, I am seeing 1.26x dedupe ratio between the RHEL
> 5.4
> ISO and the RHEL 5.5 ISO file.
>
> However, it took about 33 minutes to copy the 2.9GB ISO file onto the
> filesystem. :) Definitely would need more RAM in this setup...
>
> Ra
> "et" == Erik Trimble writes:
et> With NFS-hosted VM disks, do the same thing: create a single
et> filesystem on the X4540 for each VM.
previous posters pointed out there are unreasonable hard limits in
vmware to the number of NFS mounts or iSCSI connections or something,
so you wil
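For reference, the per-VM layout Erik describes is just something like this
(a sketch, with made-up pool and guest names):
# zfs create -o sharenfs=on tank/vm
# zfs create tank/vm/guest01
# zfs create tank/vm/guest02
Each guest then gets its own snapshot/clone/send granularity, but each child
filesystem is also its own NFS mount on the ESX side, which is exactly where
the mount-count limit mentioned above bites.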
On Fri, Jun 04, 2010 at 01:10:44PM -0700, Ray Van Dolson wrote:
> On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
> > On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson wrote:
> > > Makes sense. So, as someone else suggested, decreasing my block size
> > > may improve the deduplication
On Mon, June 7, 2010 12:56, Tim Cook wrote:
>> The STEC units are what Oracle/Sun use in their 7000 series appliances,
>> and I believe EMC and many others use them as well.
>
> When did that start? Every 7000 I've seen uses Intel drives.
According to the Sun System Handbook for the 7310, the 18
Thanks Garrett!
> 2) it is dependent on an external power source (a little wall wart
> provides low voltage power to the card... I don't recall the voltage off
> hand)
9V DC.
> 3) the contents of the card's DDR ram are never flushed to non-volatile
> storage automatically, but require an explici
Hi Toyama,
You cannot restore an individual file from a snapshot stream the way you can
with the ufsrestore command. If you have snapshots stored on your
system, you might be able to access them from the .zfs/snapshot
directory. See below.
Thanks,
Cindy
% rm reallyimportantfile
% cd .zfs/snapshot
% cd recent-
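Spelled out, with a hypothetical dataset mounted at /mydata and a snapshot
named "recent" (both names made up for the example), the recovery is just:
% cd /mydata
% rm reallyimportantfile
% ls .zfs/snapshot
% cp .zfs/snapshot/recent/reallyimportantfile .
The .zfs directory is hidden but is always there at the root of every
dataset; "zfs list -t snapshot" will show which snapshots you have to
choose from.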
On Mon, Jun 7, 2010 at 9:45 AM, David Magda wrote:
> On Mon, June 7, 2010 09:21, Richard Jahnel wrote:
> > I'll have to take your word on the Zeus drives. I don't see anything in
> > their literature that explicitly states that cache flushes are obeyed or
> > otherwise protected against power l
Hi--
Pool names must contain alphanumeric characters as described here:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/common/zfs/zfs_namecheck.c
The problem you are having is probably with special characters, such as
umlauts or accents (?). Pool names only allow 4 specia
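As a quick sanity check (a sketch; the pool and disk names are made up),
a plain-ASCII name works and a non-ASCII one should be rejected with an
invalid-character error:
# zpool create datenpool c1t1d0     <- fine: letters and digits only
# zpool create датапул c1t1d0       <- should fail: Cyrillic characters are not in the allowed set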
On Mon, 2010-06-07 at 07:51 -0700, Christopher George wrote:
> > No Slogs as I haven't seen a compliant SSD drive yet.
>
> As the architect of the DDRdrive X1, I can state categorically the X1
> correctly implements the SCSI Synchronize Cache (flush cache)
> command.
>
> Christopher George
> Foun
And a very nice device it is indeed.
However, for my purposes it doesn't work: it doesn't fit into a 2.5" slot or
use SATA/SAS connections.
Unfortunately, all my PCI Express slots are in use:
2 RAID controllers
1 Fibre HBA
1 10Gb Ethernet card.
> No Slogs as I haven't seen a compliant SSD drive yet.
As the architect of the DDRdrive X1, I can state categorically the X1
correctly implements the SCSI Synchronize Cache (flush cache)
command.
Christopher George
Founder/CTO
www.ddrdrive.com
On Mon, June 7, 2010 10:34, Toyama Shunji wrote:
> Can I extract one or more specific files from a zfs snapshot stream,
> without restoring the full file system, like the UFS-based 'restore' tool?
No.
(Check the archives of zfs-discuss for more details. Send/recv has been
discussed at length many times.
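The usual workaround (just a sketch; the dataset and file names are made up)
is to receive the stream into a scratch dataset and copy the file back out:
# zfs receive tank/restore < mydata.zsend
# cp /tank/restore/path/to/the/file /where/you/want/it
# zfs destroy -r tank/restore
In other words, you always pay for a full receive, even if you only want a
single file back.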
On Mon, June 7, 2010 09:21, Richard Jahnel wrote:
> I'll have to take your word on the Zeus drives. I don't see anything in
> their literature that explicitly states that cache flushes are obeyed or
> otherwise protected against power loss.
The STEC units are what Oracle/Sun use in their 7000 ser
Can I extract one or more specific files from a zfs snapshot stream,
without restoring the full file system, like the UFS-based 'restore' tool?
I'll have to take your word on the Zeus drives. I don't see anything in their
literature that explicitly states that cache flushes are obeyed or otherwise
protected against power loss.
As for OCZ, they cancelled the Vertex 2 Pro, which was to be the one with the
supercap. For the moment they a
Hi All!
Can I create a pool or dataset with a name that contains non-Latin letters
(Russian letters, German-specific letters, etc.)?
I tried to create a pool with non-Latin letters, but could not.
In the ZFS User Guide I see the following information:
> Each ZFS component must be named according to the follow
On Jun 7, 2010, at 00:15, Richard Jahnel wrote:
I use 4 Intel 32GB SSDs as read cache for each pool of 10 Patriot
Torx drives, which are running in a raidz2 configuration. No slogs, as
I haven't seen a compliant SSD drive yet.
Besides STEC's Zeus drives you mean? (Which aren't available in re
- "Brandon High" skrev:
> On Sun, Jun 6, 2010 at 10:46 AM, Brandon High
> wrote:
> > No, that's the number that stuck in my head though.
>
> Here's a reference from Richard Elling:
> (http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038018.html)
> "Around 270 bytes, or one 512
Which Virtual Machine technology are you going to use?
VirtualBox
VMWare
Xen
Solaris Zones
Something else...
It will make a difference as to my recommendation (or, do you want me to
recommend a VM type, too?)
This is somewhat off-topic for zfs-discuss, but still. After trying to fight a b