Terrible COMSTAR performance when the
initiator FS was using anything less than 8KB blocks; too many small sync
writes over iSCSI = death for storage performance.
Go the usual route of looking at jumbo frames, flow control on the switches,
etc.
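For what it's worth, a rough sketch of the block-size side of it (the pool and
zvol names are made up, and the right volblocksize depends on what the
initiator FS actually writes):

  # Create the iSCSI backing zvol with a volblocksize that matches the
  # initiator filesystem's block size (8K here, just as an example).
  # Note volblocksize is fixed at creation time.
  zfs create -V 500G -o volblocksize=8k tank/iscsi/lun0

  # Verify what you ended up with
  zfs get volblocksize tank/iscsi/lun0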
--
Brent Jones
br...@servuhome.net
> > >regression.
>> > >
>> >
>> > Are you sure it is not a debug vs. non-debug issue?
>> >
>> >
>> > --
>> > Robert Milkowski
>> > http://milek.blogspot.com
>> >
Could it somehow not be compiling 64-bit support?
--
Brent Jones
br...@servuhome.net
I'm surprised you're even getting 400MB/s on the "fast"
configurations, with only 16 drives in a Raidz3 configuration.
To me, 16 drives in Raidz3 (single Vdev) would do about 150MB/sec, as
your "slow" speeds suggest.
--
Brent Jones
br...@servuhome.net
On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa wrote:
> Hi,
>
> If you can share those scripts that make use of mbuffer, please feel
> free to do so ;)
>
>
> Bruno
> On 19-7-2010 20:02, Brent Jones wrote:
>> On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel wrote:
nd any other show stoppers.
Mbuffer is probably your best bet; I rolled mbuffer into my
replication scripts, which I could share if anyone's interested.
Older versions of my script are on www.brentrjones.com, but I have a
new one which uses mbuffer.
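The guts of it are just a send piped through mbuffer on both ends; a
stripped-down sketch (host, port, and dataset names here are placeholders,
not what the script actually uses):

  # Receiving host: listen on a port and feed the stream into zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backup/data

  # Sending host: push an incremental snapshot through mbuffer instead of ssh
  zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O backuphost:9090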
--
Brent Jones
br...@servuhome.net
I opened a case on Sunsolve, but I fear since I am running a dev build
that I will be out of luck. I cannot run 2009.06 due to CIFS
segfaults, and problems with zfs send/recv hanging pools (well
documented issues).
I'd run Solaris proper, but not having in-kernel CIFS or COMSTAR wo
an discern.
Upon a reboot, performance is respectable for a little while, but
within days, it will sink back to those levels. I suspect a memory
leak, but both systems run the same software versions and packages, so
I can't envision that.
Would anyone have any ideas what may cause this?
--
ems? 2008.05 didn't, and I'm considering moving
> back to that rather than using a development build.
>
I would guess you would have fewer problems on 132 or 134 than you
would on 2009.06 :)
Just from my experience.
--
Brent Jones
br...@servuhome.net
t it isn't a file system
at all. For iSCSI, you need to configure data fencing, typically
handled by clustering suites from various operating systems to control
which host has access to the iSCSI volumes at one time.
You should stick to CIFS or NFS, or investigate a real clustered file system.
Try jumbo frames, and make sure flow control is enabled on your
iSCSI switches and all
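On the Solaris side that looks roughly like this (the link name is
hypothetical, not every driver lets you set it this way, and the switch ports
have to be configured to match):

  # Check the current MTU on the iSCSI-facing link
  dladm show-linkprop -p mtu e1000g1

  # Enable jumbo frames (9000-byte MTU) on that link
  dladm set-linkprop -p mtu=9000 e1000g1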
The problem with mbuffer is that if you do scripted send/receives, you'd have
to pre-start an mbuffer session on the receiving end somehow.
SSH is always r
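One way around it in a script (just a sketch; the host, port, dataset names,
and the sleep are all placeholders/hand-waving) is to start the listener over
SSH first, then kick off the send:

  # Fire up a one-shot mbuffer -> zfs receive listener on the remote side
  ssh backuphost "mbuffer -q -s 128k -m 512M -I 9090 | zfs receive -F backup/data" &

  # Crude, but give the listener a moment to come up
  sleep 5

  # Now stream the incremental into it
  zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -q -s 128k -m 512M -O backuphost:9090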
Use something other than Open/Solaris with ZFS as an NFS server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying for more than a year, and
watching dozens, if not hundreds, of threads.
Gettin
s by enabling de-dupe on large datasets. Myself
included.
Check CR 6924390 for updates (if any)
--
Brent Jones
br...@servuhome.net
My rep says "Use dedupe at your own risk at this time".
Guess they've been seeing a lot of issues, and regardless of whether it's
'supported' or not, he said not to use it.
--
Brent Jones
br...@servuhome.net
ential and block sizes.
Using dsk/rdsk, I was not able to see that level of performance at all.
--
Brent Jones
br...@servuhome.net
o
> perform.
Do you have an SSD log device? If not, try disabling the ZIL
temporarily to see if that helps. Your workload will likely benefit
from a log device.
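If you do add an SSD later, attaching it as a log device is a one-liner
(pool and device names are just examples):

  # Add a dedicated log (slog) device to the pool
  zpool add tank log c2t5d0

  # Or, with a pair of SSDs, mirror the log
  zpool add tank log mirror c2t5d0 c2t6d0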
--
Brent Jones
br...@servuhome.net
On Wed, Feb 10, 2010 at 4:05 PM, Brent Jones wrote:
> On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas wrote:
>> How does lowering the flush interval help? If he can't ingress data
>> fast enough, faster flushing is a Bad Thing(tm).
>>
>> -marc
>>
>
On Tue, Feb 2, 2010 at 7:41 PM, Brent Jones wrote:
> On Tue, Feb 2, 2010 at 12:05 PM, Arnaud Brand wrote:
>> Hi folks,
>>
>> I'm having (as the title suggests) a problem with zfs send/receive.
>> Command line is like this :
>> pfexec zfs send -Rp tank/t..
or me. I get "connection reset by peer", or transfers (of any kind) simply
time out.
Smaller transfers succeed most of the time, while larger ones usually
fail. Rolling back to snv_127 (my last one) does not exhibit this
issue. I have not had time to narrow down any
nd checks, the script is relatively simple. Seems Infrageeks added some
better documentation, which is very helpful.
You'll want to make sure your remote side doesn't differ, i.e., that it has the
same current snapshots as the sender side. If the replication f
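A quick sanity check for that (dataset and host names are examples) is to
compare the newest snapshots on both sides before kicking off the incremental:

  # Newest snapshots on the sending side
  zfs list -t snapshot -o name -s creation -r tank/data | tail -5

  # Newest snapshots on the receiving side
  ssh backuphost "zfs list -t snapshot -o name -s creation -r backup/data | tail -5"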
Saso
>
I shouldn't dare suggest this, but what about disabling the ZIL? Since
this sounds like transient data to begin with, any risks would be
pretty low I'd imagine.
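For reference, actually doing it (entirely at your own risk) depends on the
build; roughly:

  # Older builds: system-wide tunable in /etc/system, takes effect after a reboot
  echo "set zfs:zil_disable = 1" >> /etc/system

  # Builds with the per-dataset sync property: much less of a blunt instrument
  zfs set sync=disabled tank/scratch
  zfs get sync tank/scratch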
--
Brent Jones
br...@servuhome.net
540's and switch to 7000's with expensive SSDs, or
switch to file-based Comstar LUNs and disable the ZIL :(
Sad when a $50k piece of equipment requires such sacrifice.
--
Brent Jones
br...@servuhome.net
Why did you make the ZFS file system have 4k blocks?
I'd let ZFS manage that for you, which I believe defaults to 128K.
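To check what it's set to and put it back to the default (the dataset name is
a placeholder; note this only affects newly written blocks):

  # See the current setting
  zfs get recordsize tank/data

  # Drop the local 4k override so it falls back to the inherited default (128K)
  zfs inherit recordsize tank/data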
--
Brent Jones
br...@servuhome.net
On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones wrote:
> On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach
> wrote:
>> Brent,
>>
>> I had known about that bug a couple of weeks ago, but that bug has been
>> filed against v111 and we're at v130. I have also sea
KB/sec at times, most notably during ZFS send/recv, delete, and
destroying filesystems and snapshots.
Even with de-dupe turned off, if you had blocks that had been
de-duped, that file system will always be slow. I found I had to
completely destroy a file system once de-dupe had been enabled, then
re-create the file system to restore the previously high performance.
A bit of a let down, so I will wait on the sidelines for this feature to mature.
--
Brent Jones
br...@servuhome.net
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach wrote:
> Brent,
>
> I had known about that bug a couple of weeks ago, but that bug has been filed
> against v111 and we're at v130. I have also searched the ZFS part of this
> forum and really couldn't find much about
On Fri, Dec 25, 2009 at 9:56 PM, Tim Cook wrote:
>
>
> On Fri, Dec 25, 2009 at 11:43 PM, Brent Jones wrote:
>>
>> >>
>> >>
>> >
>> >
>> > Hang on... if you've got 77 concurrent threads going, I don't see how
>>
e however with ZFS, and the behavior of the
transaction group writes. If you have a big write that needs to land
on disk, it seems all other I/O, CPU and "niceness" is thrown out the
window in favor of getting all that data on disk.
I was on a watch list for a ZFS I/O scheduler bug with my paid Solaris
support; I'll try to find that bug number, but I believe some
improvements were done in 129 and 130.
--
Brent Jones
br...@servuhome.net
sage.
These problems seem to have manifested after snv_128, and seemingly
only affect ZFS receive speeds. Local pool performance is still very
fast.
--
Brent Jones
br...@servuhome.net
ious
speed.
129 fixed it a bit; I was literally getting just a couple hundred
-BYTES- a second on 128, but on 129 I can get about 9-10MB/sec if I'm
lucky, but usually 4-5MB/sec. No other configuration changes on the
network occurred, except for my X4540's being upgraded to sn
block/parity rewrite tool will
"freshen" up a pool that's heavily fragmented, without having to redo
the pools.
--
Brent Jones
br...@servuhome.net
On Sat, Dec 12, 2009 at 8:14 PM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
>> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
>> wrote:
>>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>>
>>>> I've noticed some ext
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
> wrote:
>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>
>>> I've noticed some extreme performance penalties simply by using snv_128
>>
>> Does the
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
wrote:
> On Sat, 12 Dec 2009, Brent Jones wrote:
>
>> I've noticed some extreme performance penalties simply by using snv_128
>
> Does the 'zpool scrub' rate seem similar to before? Do you notice any read
> pe
which prevents me from rolling back to snv_127,
which would send at many tens of megabytes a second.
This is on an X4540, dual quad cores, and 64GB RAM.
Anyone else seeing similar issues?
--
Brent Jones
br...@servuhome.net
I submitted a bug a while ago about this:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855208
I'll escalate since I have a support contract. But yes, I see this as
a serious bug; I thought my machine ha
-site storage service (unless
it's stored at the owner's home), you could probably co-locate a server
with fast internet connectivity, a bundle of local storage, and just
ZFS snapshot your relevant pools to that server.
I second the recommendation of Amanda from Richard as well though,
pretty
On Wed, Nov 18, 2009 at 4:09 PM, Brent Jones wrote:
> On Tue, Nov 17, 2009 at 10:32 AM, Ed Plese wrote:
>> You can reclaim this space with the SDelete utility from Microsoft.
>> With the -c option it will zero any free space on the volume. For
>> example:
>>
>
ith compression
turned on to see how it behaves next :)
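Roughly what that experiment looks like (the zvol name and drive letter are
made up):

  # On the ZFS side: compression on the backing zvol, so zeroed blocks take
  # up (almost) no space
  zfs set compression=on tank/iscsi/winvol

  # On the Windows initiator: zero the free space on the mapped volume
  # (SDelete with -c, as mentioned above)
  sdelete -c E: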
--
Brent Jones
br...@servuhome.net
blocks as soon as Windows needs
space, and Windows will eventually not need that space again.
Is there a way to reclaim unused space on a thin-provisioned iSCSI target?
--
Brent Jones
br...@servuhome.net
om Oracle?
Or is it business as usual for ZFS developments?
--
Brent Jones
br...@servuhome.net
make sure the filesystem in
question is NOT mounted, and just delete the directory that it's trying
to mount into.
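Something like this (the dataset name is just an example):

  # Make sure the dataset isn't mounted
  zfs unmount tank/home/foo

  # Remove the stale directory that's blocking the mountpoint, then remount
  rmdir /tank/home/foo
  zfs mount tank/home/foo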
--
Brent Jones
br...@servuhome.net
iles.
Explorer will crash, run your system out of memory and slow it down,
or just plain hard-lock Windows for hours on end.
This is on brand-new hardware: 64-bit, 32GB RAM, and 15k SAS disks.
Regardless of filesystem, I'd suggest splitting your directory
structure into a hierarchy. It makes sense
ss data? Not a chance.
I'm sure at the price they offer storage, this was the only way they
could be profitable, and it's a pretty creative solution.
For my personal data backups, I'm sure their service would meet all my
needs, but that's about as far as I would trust these systems -
I see this issue on each of my X4540's, 64GB of ECC memory, 1TB drives.
Rolling back to snv_118 does not reveal any checksum errors; only snv_121 does.
So, the commodity hardware here doesn't hold up, unless Sun isn't
validating their equipment (not likely, as these servers have had no
hardware issues prior to this build).
--
Brent Jones
br...@servuhome.net
esolution in sight.
Given that Sun now makes the 7000, I can only assume their support on
the more "whitebox" version, AKA X4540, is either near an end, or they
don't intend to support any advanced monitoring whatsoever.
Sad, really.. as my $900 Dell and HP servers can sen
On Fri, Jul 31, 2009 at 10:25 AM, Joseph L.
Casale wrote:
>>I came up with a somewhat custom script, using some pre-existing
>>scripts I found about the land.
>>
>>http://www.brentrjones.com/?p=45
>
> Brent,
> That was super helpful. I had to make some simple chang
run it from
the other system.
I expanded on it by being able to handle A-B and B-A replication
(mirror half of A to B, and half of B to A for paired redundancy).
I'll post that version up in a few weeks when I clean it up a little.
Credits go to Constantin Gonzalez for inspiration
for hours, or even days (in the case of nested
> snapshots).
> The only resolution is not to ever use zfs destroy, or just simply
> wait it out. It will eventually finish, just not in any reasonable
> timeframe.
>
> --
> Brent Jones
> br...@servuhome.net
>
Cor
ven days (in the case of nested
snapshots).
The only resolution is not to ever use zfs destroy, or just simply
wait it out. It will eventually finish, just not in any reasonable
timeframe.
--
Brent Jones
br...@servuhome.net
Looking at this external array by HP:
http://h18006.www1.hp.com/products/storageworks/600mds/index.html
70 disks in 5U, which could probably be configured in JBOD.
Has anyone attempted to connect this to a box running OpenSolaris to
create a 70-disk pool?
--
Brent Jones
br...@servuhome.net
ly a little
bit more info on that CR?
Seems there's a lot of people bitten by this, from low-end to extremely
high-end hardware.
--
Brent Jones
br...@servuhome.net
of the above products?
I'd like to say this was an unfortunate circumstance, but there are
many levels of fail here, and to blame ZFS seems misplaced, and the
subject of this thread especially inflammatory.
--
Brent Jones
br...@servuhome.net
>> zfs:zfsdev_ioctl+0x14c()
>> genunix:cdev_ioctl+0x1d()
>> specfs:spec_ioctl+0x50()
>> genunix:fop_ioctl+0x25()
>> genunix:ioctl+0xac()
>> unix:_syscall32_save+0xbf()
>> -- switch to user thread's user stack --
>>
>> The box is an x4500, S
as a second slog, it made no difference
> to the write operations.
>
> I'm not sure where to go from here, these results are appalling (about 3x
> the time of the old system with 8x 10kRPM spindles) even with two Enterprise
> SSDs as separate log devices.
>
> cheers,
> Jam
Is there a supported way to multipath NFS? That's one benefit of iSCSI:
your VMware can multipath to a target to get more speed/HA...
--
Brent Jones
br...@servuhome.net
Maybe there could be a supported ZFS tuneable (per file system even?)
that is optimized for 'background' tasks, or 'foreground'.
Beyond that, I will give this tuneable a shot and see how it impacts
my own workload.
iSCSI. When those writes occur to my
RaidZ volume, all activity pauses until the writes are fully flushed.
One thing to note: on 117 the effects are seemingly reduced and
performance is a bit more even, but the problem is still there.
--
Brent Jones
br...@servuhome.net
On Fri, Jun 26, 2009 at 10:14 AM, Brent Jones wrote:
> On Thu, Jun 25, 2009 at 12:00 AM, James Lever wrote:
>>
>> On 25/06/2009, at 4:38 PM, John Ryan wrote:
>>
>>> Can I ask the same question - does anyone know when the 113 build will
>>> show up on pkg.op
CC'ing the storage-discuss group as well for coverage, as this
covers ZFS and storage.
If anyone has some thoughts, code, or tests, I can run them on my
X4540's and see how it goes.
Thanks
--
Brent Jones
br...@servuhome.net
Do you know when new builds will show up on pkg.opensolaris.org/dev?
--
Brent Jones
br...@servuhome.net
On Mon, Jun 8, 2009 at 9:38 PM, Richard Lowe wrote:
> Brent Jones writes:
>
>
> I've had similar issues with similar traces. I think you're waiting on
> a transaction that's never going to come.
>
> I thought at the time that I was hitting:
>
o open a support case with Sun (have a support contract),
> but Opensolaris doesn't seem to be well understood by the support
> folks yet, so not sure how far it will get.
>
> --
> Brent Jones
> br...@servuhome.net
>
I can reproduce this 100% by sending about 6 or more sna
On Sun, Jun 7, 2009 at 3:50 AM, Ian Collins wrote:
> Ian Collins wrote:
>>
>> Tim Haley wrote:
>>>
>>> Brent Jones wrote:
>>>>
>>>> On the sending side, I CAN kill the ZFS send process, but the remote
>>>> side leaves its proc
>
> Well, I think I found a specific file system that is causing this.
> I kicked off a zpool scrub to see if there might be corruption on
> either end, but that takes well over 40 hours on these servers.
>
>
> --
> Brent Jones
> br...@servuhome.net
>
It turns out t
On Fri, Jun 5, 2009 at 4:20 PM, Tim Haley wrote:
> Brent Jones wrote:
>>
>> Hello all,
>> I had been running snv_106 for about 3 or 4 months on a pair of X4540's.
>> I would ship snapshots from the primary server to the secondary server
>> nightly, which was
On Fri, Jun 5, 2009 at 4:20 PM, Tim Haley wrote:
> Brent Jones wrote:
>>
>> Hello all,
>> I had been running snv_106 for about 3 or 4 months on a pair of X4540's.
>> I would ship snapshots from the primary server to the secondary server
>> nightly, which was
On Fri, Jun 5, 2009 at 3:25 PM, Ian Collins wrote:
> Brent Jones wrote:
>>
>> On the sending side, I CAN kill the ZFS send process, but the remote
>> side leaves its processes going, and I CANNOT kill -9 them. I also
>> cannot reboot the receiving system, at init 6, t
On Fri, Jun 5, 2009 at 2:49 PM, Rick Romero wrote:
> On Fri, 2009-06-05 at 14:45 -0700, Brent Jones wrote:
>> On Fri, Jun 5, 2009 at 2:28 PM, Mike La Spina
>> wrote:
>> > Hi,
>> >
>> > I have replications between hosts and they are working fine wit
mpt for user
input, and the zfs receive on the remote side is un-killable (and
hangs the server when trying to restart).
It appears to be the receiving end choking on a snapshot, and not
allowing any more to run.
Once one snapshot freezes, running another (for a different file
system) zfs send/re
has
never occurred on that version.
Any thoughts?
--
Brent Jones
br...@servuhome.net
Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!
--
Brent Jones
br...@servuhome.net
to remote, not sure if it happens local to
local, but I experienced it doing local-remote send/recv.
Not sure of the best way to handle moving data around when space is
tight, though...
--
Brent Jones
br...@servuhome.net
You can add user properties to file systems, but AFAIK they would not
show up in zpool status.
For example:
zfs set note:purpose="This file system is important" somefilesystem
zfs get note:purpose somefilesystem
Maybe that helps...
--
Brent Jones
br...@servuhome.net
On Sat, Mar 28, 2009 at 5:40 PM, Fajar A. Nugraha wrote:
> On Sun, Mar 29, 2009 at 3:40 AM, Brent Jones wrote:
>> I have since modified some scripts out there, and rolled them into my
>> own, you can see it here at pastebin.com:
>>
>> http://pastebin.com/m3871e478
nspiration:
http://blogs.sun.com/constantin/entry/zfs_replicator_script_new_edition
http://blogs.sun.com/timf/en_IE/entry/zfs_automatic_snapshots_in_nv
Those are some good resources; from those, you can make something work
that is tailored to your environment.
--
Brent Jones
br...@servuhome.
On Wed, Mar 18, 2009 at 11:28 AM, Miles Nordin wrote:
>>>>>> "bj" == Brent Jones writes:
>
> bj> I only have about 50 filesystems, and just a handful of
> bj> snapshots for each filesystem.
>
> there were earlier stories of people who had im
and it's showing
~230 ops/sec, with 100% ARC misses.
Whatever it's doing, the load is very random I/O, and heavy, but little
progress appears to be happening.
I only have about 50 filesystems, and just a handful of snapshots for
each filesystem.
Thanks!
--
Brent Jones
br...@servuhome.net
ssues, it failed to compile on another. Currently, I'm
installing OpenSolaris in a VirtualBox VM on a Linux host using raw disk
passthrough so I can use zfs with this I/O card. We'll see how it
goes :)
Thanks again,
-Brent
On Tue, 2009-03-17 at 18:02 -0700, Craig Cory wrote:
> Brent,
>
Can someone point me to a document describing how available space in
ZFS is calculated, or review the data below and tell me what I'm
missing?
Thanks in advance,
-Brent
===
I have a home project with 3x250 GB + 3x300 GB in raidz, so I expect to
lose 1x300 GB to parity.
Total size: 1650 GB
On Tue, Feb 24, 2009 at 11:32 AM, Christopher Mera wrote:
> Thanks for your responses..
>
> Brent:
> And I'd have to do that for every system that I'd want to clone? There
> must be a simpler way.. perhaps I'm missing something.
>
>
> Regards,
> Chris
/backups, you could
issue an SMF command to stop the service before taking the snapshot.
Or at the very minimum, perform an SQL dump of the DB so you at least
have a consistent full copy of the DB as a flat file in case you can't
stop the DB service.
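Roughly (the service FMRI and dataset here are placeholders for whatever your
database actually runs under):

  # Stop the DB service temporarily (-s waits for it to finish, -t doesn't persist)
  svcadm disable -st svc:/application/database/mysql:default

  # Take the snapshot while the on-disk files are quiescent
  zfs snapshot tank/db@nightly-$(date +%Y%m%d)

  # Bring the DB back up
  svcadm enable svc:/application/database/mysql:default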
--
Brent Jones
br...@servuhome.net
before any data is
transferred.
I'm going to open a case in the morning, and see if I can't get an
engineer to look at this.
--
Brent Jones
br...@servuhome.net
Thanks for all the help guys. Based on the success reports, I'll give it a shot
in my Intel S3210SHLC board next week when the UIO card arrives. I'll report
back on the success or destruction that follows... now I just hope Solaris 10
10/08 works, but it sounds like it should.
Chee
Does anyone know if this card will work in a standard PCI Express slot?
We use several X4540's over here as well. What type of workload do you
have, and how much of a performance increase did you see by disabling the
write caches?
--
Brent Jones
br...@servuhome.net
mary, there are a good few problems here, many of which I've
>>> already reported as bugs:
>>>
>>> 1. ZFS still accepts read and write operations for a faulted pool, causing
>>> data loss that isn't necessarily reported by zpool status.
>>> 2.
On Tue, Jan 27, 2009 at 5:47 PM, Richard Elling
wrote:
> comment far below...
>
> Brent Jones wrote:
>>
>> On Mon, Jan 26, 2009 at 10:40 PM, Brent Jones wrote:
>>
>>>
>>>
>>>
>>> --
>>> Brent Jones
>>> br...@ser
On Mon, Jan 26, 2009 at 10:40 PM, Brent Jones wrote:
> While doing some performance testing on a pair of X4540's running
> snv_105, I noticed some odd behavior while using CIFS.
> I am copying a 6TB database file (yes, a single file) over our GigE
> network to the X4540, then
c7t3d0  AVAIL
c8t4d0  AVAIL
c9t5d0  AVAIL
--
Brent Jones
br...@servuhome.net
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones wrote:
> On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins wrote:
>> Ian Collins wrote:
>>> Send/receive speeds appear to be very data dependent. I have several
>>> different filesystems containing differing data types. The s
On Sat, Jan 17, 2009 at 2:46 PM, JZ wrote:
>
>
> I don't know if this email is even relevant to the list discussion. I will
> leave that conclusion to the smart mail server policy here.
*cough*
--
Brent Jones
br...@servuhome.net
space, and guessable file count / file size distribution?
I'm also trying to put together the puzzle to provide more detail to a
case I opened with Sun regarding this.
--
Brent Jones
br...@servuhome.net
On Wed, Jan 7, 2009 at 12:36 AM, Andrew Gabriel wrote:
> Brent Jones wrote:
>
>> Reviving an old discussion, but has the core issue been addressed in
>> regards to zfs send/recv performance issues? I'm not able to find any
>> new bug reports on bugs.opensolaris.org r
Reviving an old discussion, but has the core issue been addressed with
regard to zfs send/recv performance? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this, but my search
kung-fu may be weak.
Using mbuffer can speed it up dramatically, but this seems like a hack
that doesn't address the real problem with zfs send/recv.
Trying to send any meaningfully sized snapshot from, say, an X4540 takes
up to 24 hours, for as little as a 300GB change rate.
--
Brent Jones
br...@servuhome.net
On Mon, Jan 5, 2009 at 4:29 PM, Brent Jones wrote:
> On Mon, Jan 5, 2009 at 2:50 PM, Richard Elling wrote:
>> Correlation question below...
>>
>> Brent Jones wrote:
>>>
>>> On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
>>> wrote:
>
ad 48,
1TB drives all working fine).
So, at least you're able to see your drives... sorta.
I -wish- I could see my drives' cache status, state, FRU, etc... :(
--
Brent Jones
br...@servuhome.net
On Mon, Jan 5, 2009 at 2:50 PM, Richard Elling wrote:
> Correlation question below...
>
> Brent Jones wrote:
>>
>> On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
>> wrote:
>>
>>>
>>> Hi Brent,
>>>
>>> Brent Jones wrote:
>
On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
wrote:
> Hi Brent,
>
> Brent Jones wrote:
>> I am using 2008.11 with the Timeslider automatic snapshots, and using
>> it to automatically send snapshots to a remote host every 15 minutes.
>> Both sides are X4540