total capacity of all
filesystems to back up is 1161GB; however, only 410GB are used.
Thanks,
Alex
Hello,
I have a zpool consisting of several mirrored vdevs. I was in the middle of
adding another mirrored vdev today, but found out one of the new drives is bad.
I will be receiving the replacement drive in a few days. In the meantime, I
need the additional storage on my zpool.
Is the command
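One likely shape for it, sketched with a hypothetical pool name (tank) and
hypothetical device names: add the good drive as a single-disk top-level vdev
now, then attach its mirror once the replacement arrives:

  # zpool add tank c2t0d0
  # zpool attach tank c2t0d0 c2t1d0

Until the attach finishes resilvering, that vdev has no redundancy, and a
top-level vdev can't simply be removed from the pool later.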
Sure enough, Cindy, the eSATA cables had been crossed. I exported, powered off,
reversed the cables, booted, imported, and the pool is currently resilvering
with both c5t0d0 & c5t1d0 present in the mirror. :) Thank you!!
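(A minimal sketch of that sequence, assuming the pool is named tank:

  # zpool export tank
    ... power off, swap the eSATA cables, boot ...
  # zpool import tank
  # zpool status tank

with the last command showing the resilver in progress.)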
Alex
On May 24, 2011, at 9:58 AM, Cindy Swearingen wrote:
it does again at the moment, no reboot!). I will also
try switching the eSATA cables to opposite ports.
Thanks,
Alex
Command output follows:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c5t1d0
/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0
I thought this was interesting - it looks like we have a failing drive in our
mirror, but the two device nodes in the mirror are the same:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
MacZFS has been running on OSX since Apple dropped the ball, but only up to
onnv_74 for the stable branch.
Alex
Sent from my iPhone 4
On 15 Mar 2011, at 15:21, Jerry Kemp wrote:
> FYI.
>
> This came across a Mac OS X server list that I am subscribed to.
prtvtoc | fmthard on the drive, I was never able to change it to an SMI label.
So I went in there, changed the cylinder info, relabeled, changed it back,
relabeled... and voila, now I can mirror again!!
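For the archives, the label copy is usually spelled like this (a sketch with
hypothetical device names; s2 is the backup slice covering the whole disk):

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2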
Thank you for taking the time to personally email me with my issue.
-Alex
(for some reason I cannot find my original thread, so I'm reposting it)
I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive.
This is in a Netra running Solaris 10.
Originally what I did was:
zpool attach -f rpool c0t0d0 c0t2d0.
Then I did an installboot on c0t2d0s0.
Di
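On SPARC (this being a Netra) the installboot step is, as a sketch assuming a
Solaris 10 ZFS root:

  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
      /dev/rdsk/c0t2d0s0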
On Dec 11, 2010, at 14:15, Frank Van Damme wrote:
> 2010/12/10 Freddie Cash:
>> On Fri, Dec 10, 2010 at 5:31 AM, Edward Ned Harvey wrote:
>>> It's been a while since I last heard anybody say anything about this.
>>> What's the latest version of publicly released ZFS? Has Oracle made it
>>> closed-sourc
> nd it terribly cumbersome. But now I've
> gotten used to it, and MegaCLI commands just roll off the tongue.
Can you paste them anyway ?
-Alex
www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
Alex
(@alblue on Twitter)
check fler.us
Solaris 10 iSCSI Target for Vmware ESX
0n Sun, Aug 29, 2010 at 08:09:22PM -0700, Brian wrote:
>The fix:
>"""the trick was to modify mode in in-kernel buffer containing
znode_phys_t and then force ZFS to flush it out to disk."""
Can you give an example of how you did this ?
-Alex
On 28 Aug 2010, at 16:25, Norbert Harder wrote:
> Later, since the development of the ZFS extension was discontinued ...
The MacZFS project lives on at Google Code and http://github.com/alblue/mac-zfs
Not that it helps if the data has already become corrupted.
A
0n Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao wrote:
>IMHO, you want the -E for ZIL and the -M for L2ARC
Why ?
-Alex
e copies of data, set copies=2 and zfs will try to schedule writes
across different mirrored pairs.
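A minimal sketch, assuming a hypothetical dataset tank/home (copies only
affects data written after the property is set):

  # zfs set copies=2 tank/home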
Alex
get the space back.
If you destroy all snapshots and then do a cp/rm on a file-by-file basis, you
may be able to do an in-place compression.
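A rough sketch, assuming a hypothetical dataset tank/fs mounted at /tank/fs
(the snapshots must go first, since they pin the old uncompressed blocks):

  # zfs list -H -t snapshot -o name -r tank/fs | xargs -n 1 zfs destroy
  # cp -p /tank/fs/file /tank/fs/file.tmp && mv /tank/fs/file.tmp /tank/fs/file

repeated per file; each copy is rewritten through the compressed dataset.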
Alex
> From this output it appears as if Solaris, via the
> BIOS I presume, thinks it doesn't have ECC RAM,
> even though all the memory modules are indeed ECC modules.
>
> Might be time to check (1) my current BIOS settings,
> even though I felt sure ECC was enabled in the BIOS
On 9 Jul 2010, at 20:38, Garrett D'Amore wrote:
> On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote:
>>>>>>> "ab" == Alex Blewitt writes:
>>
>>ab> All Mac Minis have FireWire - the new ones have FW800.
>>
>> I t
them amenable to mirroring.
The Mac ZFS port limps on in any case - though I've not managed to spend much
time on it recently, I have been making progress this week.
The Google code project is at http://code.google.com/p/maczfs/ and my Github is
at http://github.com/al
On Jun 11, 2010, at 11:03, Joerg Schilling wrote:
> Alex Blewitt wrote:
>> On Jun 11, 2010, at 10:43, Joerg Schilling wrote:
>>> Jason King wrote:
>>>> Well technically they could start with the GRUB zfs code, which is GPL
>>>> licensed, but I don't think that's the case.
> As explained in
unfortunate in the CDDL is its use of the term “intellectual
property”.
Whether a license is classified as "Open Source" or not does not imply
that all open source licenses are compatible with each other.
Alex
't).
It gives me an extra, say, 10g on my laptop's 80g SSD which isn't bad.
Alex
Sent from my (new) iPhone
On 6 May 2010, at 02:06, Richard Jahnel wrote:
> I've googled this for a bit, but can't seem to find the answer.
> What does compression bring to the party that
I was having this same problem with snv_134. I executed all the same commands
as you did. The cloned disk booted up to the "Hostname:" line and then died.
Booting with the "-kv" kernel option in GRUB, it died at a different point each
time, most commonly after:
"srn0 is /pseudo/s...@0"
What's
On 22 Apr 2010, at 20:50, Rich Teer wrote:
On Thu, 22 Apr 2010, Alex Blewitt wrote:
Hi Alex,
For your information, the ZFS project lives (well, limps really) on
at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
from there and we're working on moving forwards fro
Sv4 - AFAIK the Mac
client is an alpha stage of that on Snow Leopard.
You could try listing the files (from OSX) with ls -l@e, which should show you
all the extended attributes and ACLs to see if that's causing a problem.
Alex
> since Firewire is important to Apple, they may have selected a particular
> Firewire chip which performs particularly well?
Darwin is open-source.
http://www.opensource.apple.com/source/xnu/xnu-1486.2.11/
http://www.opensource.apple.com/source/IOFireWireFamily/IOFi
of performance numbers you can come up with, or if you
run into the same kind of problems with USB that existed for slower
models. (Sadly, the planned FW3200 seems to have disappeared into a
hole in the ground.)
Alex
until the first set of
paying customers wants to get invoiced for the investigation. But to claim it's
FUD without any real data to back it up is just FUD^2.
Alex
> 1 messages in one mailbox folder. That's not because of ZFS but dovecot
> just handles dbox files (one for each message, like maildir) better in
> terms of indexing.
Got a link to this magic dbox format ?
-Alex
Very interesting product indeed!
Given the volume one of these cards takes up inside the server, though,
I couldn't help but think that 4GB is a bit on the low side.
Alex.
On Wed, Jan 13, 2010 at 5:51 PM, Christopher George wrote:
> The DDRdrive X1 OpenSolaris device driver is now
0n Thu, Jan 07, 2010 at 10:49:50AM -0800, Richard Elling wrote:
>I have posted my ZFS Tutorial slides from USENIX LISA09 on
>slideshare.net.
>http://richardelling.blogspot.com/2010/01/zfs-tutorial-at-usenix-lisa09-slides.html
Is there a PDF available of this ?
0n Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote:
>On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
>>
>>0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
>>
>>> Rather, ZFS works very nicely with "hardware RAID"
0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
>Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
>iSCSI, et.al. You can happily add the
I'm not sure how ZFS works very nicely with, say, an EMC Cx310 array ?
-Alex
. There is nothing OS specific about EFI,
regardless of whether any given OS supports EFI or not. Nor does it
need to be a "PC" - I have several Mac PPCs that can read EFI
partitioned disks (as well as some Intel ones). These can also be read
by other sys
; however, that doesn't necessarily imply that all OSs can
read EFI disks. My Commodore 128D could boot CP/M but couldn't
understand FAT32 - that doesn't mean that therefore FAT32 isn't OS
independent either.
Alex
sk1 disk2
zpool create fivehundred disk1 disk2
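For two identical disks the choice is capacity versus redundancy - a sketch
using the same placeholder device names:

  # zpool create fivehundred disk1 disk2       (striped: ~500GB, no redundancy)
  # zpool create twofifty mirror disk1 disk2   (mirrored: ~250GB, survives one
                                                disk failure)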
Alex
On Nov 8, 2009, at 15:09, Wael Nasreddine (a.k.a eMxyzptlk) wrote:
> Hello,
> I'm sure this question has been asked many times already, but I
> couldn't find the answer myself. Anyway, I have a laptop with 2
> identical hard disks 250Gb
On 3 Nov 2009, at 14:48, Cindy Swearingen wrote:
> Alex,
> You can download the man page source files from this URL:
> http://dlc.sun.com/osol/man/downloads/current/
FYI, there are a couple of nits in the man pages:
* the zpool create synopsis hits the 80 char mark. Might be better to
f
On Tue, Nov 3, 2009 at 2:48 PM, Cindy Swearingen wrote:
> Alex,
>
> You can download the man page source files from this URL:
>
> http://dlc.sun.com/osol/man/downloads/current/
Thanks, that's great.
xt on-line
(http://docs.sun.com/app/docs/doc/819-2240/zfs-1m)
Can anyone point me to where these are stored, so that we can update
the documentation in the Apple fork?
Alex
operate on
the existing dataset?
As a workaround I can move files in and out of the pool through an
external 500GB HDD, and with the ZFS snapshots I don't really risk
losing much data if anything goes wrong (not too horribly, anyway).
Thanks to you guys again for the great work!
Alex.
Terrific! Can't wait to read the man pages / blogs about how to use it...
Alex.
On Mon, Nov 2, 2009 at 12:21 PM, David Magda wrote:
> Deduplication was committed last night by Mr. Bonwick:
>
>> Log message:
>> PSARC 2009/571 ZFS Deduplication Properties
y various shell initialisations
which may not get run for cron jobs. In any case, it's safer to assume
it's not.
> Is it ok to specify /usr/sbin/zpool in crontab file?
It is in fact preferable to specify fully qualified paths in crontabs
generally, so yes.
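For example, a weekly scrub with a fully qualified path (pool name
hypothetical):

  0 2 * * 0 /usr/sbin/zpool scrub tank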
Alex
Apple has finally canned [1] the ZFS port [2]. To try and keep momentum up and
continue to use the best filing system available, a group of fans have set up a
continuation project and mailing list [3,4].
If anyone's interested in joining in to help, please join in the mailing list.
[1] http://a
We finally resolved this issue by changing the LSI driver. For details, please refer
to here http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Hi there,
I have just upgraded to b118 - except that the mouse is now unable to
select / focus on any other windows except the ones that the desktop
randomly decided to give the focus to.
So I guess I have a bigger problem than just SMB not working here...
Alex.
On Fri, Aug 14, 2009 at 7:32
n order to make it available again. Restarting the SMB
server service without rebooting does not resolve the issue.
Is this a known issue, or have I missed something obvious?
Regards,
Alex.
--
Pablo Picasso - "Computers are useless. They can only give you
answers." - http://www.br
Just to answer my own question - this one might be interesting:
http://www.quicklz.com/
Alex.
On Fri, Aug 14, 2009 at 3:15 PM, Alex Lam S.L. wrote:
> Thanks for the informative analysis!
>
> Just wondering - are there better candidates out there than even LZO
> for this purpo
Thanks for the informative analysis!
Just wondering - are there better candidates out there than even LZO
for this purpose?
Alex.
On Fri, Aug 14, 2009 at 8:05 AM, Denis Ahrens wrote:
> Hi
>
> Some developers here said a long time ago that someone should show
> the code for LZO
At first glance, your production server's numbers are looking fairly
similar to the "small file workload" results of your development
server.
I thought you were saying that the development server has faster performance?
Alex.
On Tue, Aug 11, 2009 at 1:33 PM, Ed Spencer wrote:
We found lots of SAS Controller Reset and errors to SSD on our servers
(OpenSolaris 2008.05 and 2009.06 with third-party JBOD and X25-E). Whenever
there is an error, the MySQL insert takes more than 4 seconds. It was quite
scary.
Eventually our engineer disabled the Fault Management SMART polling
ror them for what comparable arrays of the time cost.
>
> -Aaron
I'd very much doubt that, but I guess one can always push their time
budgets around ;-)
Alex.
>
> On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn
> wrote:
>>
>> On Wed, 10 Jun 2009,
0n Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:
>On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
>>
>> I currently have a single 17TB MetaLUN that I am about to present to an
>> OpenSolaris initiator and it will obviously be ZFS. Howe
Hi all,
In terms of best practices and high performance would it be better to present a
JBOD to an OpenSolaris initiator or a single MetaLUN ?
The scenario is:
I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator and it will obviously be ZFS. However, I a
I am having trouble getting ZFS to behave as I would expect.
I am using the HP driver (cpqary3) for the Smart Array P400 (in a HP Proliant
DL385 G2) with 10k 2.5" 146GB SAS drives. The drives appear correctly; however,
due to the controller not offering JBOD functionality I had to configure each
order to be able to
read zfs file systems. I'm just glad zpool attach warned me that I need to
invoke installgrub manually!
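For reference, the x86 invocation is, with a hypothetical target device:

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0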
Thank you for making things less mysterious.
Alex
Thanks for clearing that up. That all makes sense.
I was wondering why ZFS doesn't use the whole disk in the standard OpenSolaris
install. That explains it.
in, and it says partition 8 "Contains GRUB boot information". So partition 8
is the master boot sector and contains GRUB stage1?
Alex
Hi all,
I did an install of OpenSolaris in which I specified that the whole disk should
be used for the installation. Here is what "format> verify" produces for that
disk:
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm      1 - 60797        465.73GB    (6
Hi again Cindy,
Well, I got the two new 1.5 TB disks, but I ran into a snag:
> alex@diotima:~# zpool attach rpool c3t0d0s0 c3t1d0
> cannot label 'c3t1d0': EFI labeled devices are not supported on root pools.
The Solaris 10 System Administration Guide: Devices and File Systems gives some
pertine
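The usual workaround is to put an SMI label on the new disk and attach a
slice instead of the whole disk - a sketch, assuming slice 0 is sized to
cover the disk:

  # format -e c3t1d0
    (label, choosing "0. SMI label", then size slice 0 in the partition menu)
  # zpool attach rpool c3t0d0s0 c3t1d0s0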
rored pool, even though I've
never mirrored disks or used RAID before. So I've already put in a word with
Santa about a second disk.
Btw, I would never consider using a disk with bleeding-edge capacity for my
system (as opposed to for expendable data like movies) with any file system
o
Thanks, that's what I thought. Just wanted to make sure.
I guess the writers of the documentation think that this is so obviously the
way things would work in a well designed system that there is no reason to
mention it explicitly.
Maybe this has been discussed before, but I haven't been able to find any
relevant threads.
I have a simple OpenSolaris 2008.11 setup with one ZFS pool consisting of the
whole of the single hard drive on the system. What I want to do is to replace
the present 500 GB drive with a 1.5 TB drive. (
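The usual migration is attach/resilver/detach - a sketch, assuming the pool
is rpool, the old disk c0t0d0 and the new one c0t1d0:

  # zpool attach rpool c0t0d0 c0t1d0
  # zpool status rpool        (wait for the resilver to complete)
  # zpool detach rpool c0t0d0

The extra capacity appears once the pool is re-opened (export/import or
reboot; later builds expose this as the autoexpand property).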
/mail.opensolaris.org/pipermail/zfs-discuss/2008-August/thread.html#50609
--
alex black, founder
the turing studio, inc.
888.603.6023 / main
510.666.0074 / office
[EMAIL PROTECTED]
e available properties, then "zfs set com" 'Tab key' will become "zfs set
compression=", and another 'Tab key' here would show me
"on/off/lzjb/gzip/gzip-[1-9]".
..
Looks like a good RFE.
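A toy bash sketch of the idea (the real completion would pull the property
list from zfs itself; the names here are abbreviated and hypothetical):

  _zfs_set() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "compression= checksum= copies= atime=" -- "$cur") )
  }
  complete -F _zfs_set zfs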
Thanks,
-Alex
0n Thu, Oct 09, 2008 at 06:37:23AM -0500, Mike Gerdts wrote:
>FWIW, I believe that I have hit the same type of bug as the OP in the
>following combinations:
>
>- T2000, LDoms 1.0, various builds of Nevada in control and guest
> domains.
>- Laptop, VirtualBox 1.6.2, Wi
0n Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
>The big thing here is I ended up getting a MASSIVE boost in
>performance even with the overhead of the 1GB link, and iSCSI.
>The iorate test I was using went from 3073 IOPS on 90% sequential
>writes to 23953 IOPS w
0n Mon, Sep 29, 2008 at 09:28:53PM -0700, Richard Elling wrote:
>EMC does not, and cannot, provide end-to-end data validation. So how
>would you measure its data reliability? If you search the ZFS-discuss archives,
>you will find instances where people using high-end storage also
0n Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote:
>I tried installing the Sun provided samba source code package to try to do
>some debugging on my own, but it won't even compile, configure fails with:
Oh, where did you get that from ?
-aW
0n Thu, Aug 14, 2008 at 09:00:12AM -0700, Rich Teer wrote:
>Summary: Solaris Express Community Edition (SXCE) is like the OpenSolaris
>of old; OpenSolaris .xx is apparently Sun's intended future direction
>for Solaris. Based on what I've heard, I've not tried the latter. If
What do you mean by "mirrored vdevs" ? Hardware RAID1? Because I have only
ICH9R and OpenSolaris doesn't know about it.
Would network boot be a good idea?
Hi,
Using the OpenSolaris installer I've created a raidz array from two 500GB HDDs,
but the installer keeps seeing two HDDs, not the array I've just made.
How do I install OpenSolaris on a raidz array?
Thanks!
d for pointing out that I need to pay
attention to backups. You are obviously right and it's easy to dismiss personal
data as non-essential. Though when you think of it as hundreds of hours of
processing vinyl -> mp3, it's a different story.
-Alex
Thanks a bunch! I'll look into this very config. Just one Q, where did you get
the case?
Hi,
I'm sure this has been asked many times and though a quick search didn't reveal
anything illuminating, I'll post regardless.
I am looking to make a storage system available on my home network. I need
storage space in the order of terabytes as I have a growing iTunes collection
and tons of
Hi, I had a question regarding a situation I have with my zfs pool.
I have a zfs pool "ftp" and within it are 3 250GB drives in a raidz and 2
400GB drives in a simple mirror. The pool itself has more than 400GB free and I
would like to remove the 400gb drives from the server. My concern is how t
Hi,
we are running a v240 with a zfs pool mirrored onto two 3310s (SCSI). During
redundancy testing, when offlining one 3310, all zfs data are unusable:
- zpool hangs without displaying any info
- trying to read the filesystem hangs the command (df, ls, ...)
- /var/log/messages keeps sending errors for the fault
Drive in my solaris box that had the OS on it decided to kick the bucket this
evening, a joyous occasion for all, but luckily all my data is stored on a zpool
and the OS is nothing but a shell to serve it up on. One quick install later
and I'm back trying to import my pool, and things are not goin
ntegrity with
asynchronous writes to such write cache, it would seem that such a solution
would give certain disk cabinet manufacturers a run for their money.
-Alex
d research what NTFS and others do.
You can also use dtrace to simulate the every-single-write case and see
for yourself the massive explosion of snapshots that would occur as a
result.
Yeah, this would be bad.
Thank you, will try and see if other filesystems do anything with a
closure hook
. I can see this being
useful in high security environments and companies that have extreme
regulatory requirements.
If not, would there be a way besides scripts/programs to emulate this feature?
Thank You,
Alex