Why would you have to buy smaller disks? You can replace the 320s
with 1TB drives, and after the last 320 is out of the raid group, it
will grow automatically.
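As a sketch of that workflow (the pool and device names here are hypothetical,
and the autoexpand property only exists on newer builds; older builds grow the
pool on their own once the last small disk is replaced):

# zpool replace tank c1t2d0       # swap one 320 for a 1TB disk; wait for resilver
# zpool set autoexpand=on tank    # newer builds only: allow use of the new space
# zpool list tank                 # capacity grows once the last 320 is gone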
On 6/16/08, Miles Nordin <[EMAIL PROTECTED]> wrote:
> Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
> happen with
I guess I find it ridiculous you're complaining about RAM when I can
purchase 4GB for under 50 dollars on a desktop.
It's not like we're talking about a 500-dollar purchase.
On 6/16/08, Peter Tribble <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk <[EMAIL PROTECTE
Remind me again how much a Veritas license is. If you can't find RAM for
less than that, you need to find a new VAR/disti.
On 6/16/08, Chris Siebenmann <[EMAIL PROTECTED]> wrote:
> | I guess I find it ridiculous you're complaining about RAM when I can
> | purchase 4GB for under 50 dollars on a desk
On Tue, Jun 17, 2008 at 5:33 AM, Darren J Moffat <[EMAIL PROTECTED]>
wrote:
> Tim wrote:
>
>> I guess I find it ridiculous you're complaining about RAM when I can
>> purchase 4GB for under 50 dollars on a desktop.
>>
>
> For many people around the wo
On Tue, Jun 17, 2008 at 8:42 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
> > > I have a quite old machine with an AMD Athlon 900MHz with 640MB of RAM
> > > serving up NFS, WebDAV locally to my house and running my webserver
> (Apache)
> > > in a Zone. For me performance is perfectly acceptabl
Samba CIFS has been in OpenSolaris from day one.
No, it cannot be used to meet Sun's end goal, which is CIFS INTEGRATION
with the core kernel. Sun's CIFS supports Windows ACLs from the kernel
up. Samba does not.
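As a rough sketch of that integration from the admin side (the service name is
the stock SMF one; the pool/dataset is hypothetical):

# svcadm enable -r smb/server             # start the in-kernel CIFS service
# zfs set sharesmb=name=data tank/data    # share a dataset over kernel CIFS

Since the server sits in the kernel, Windows ACLs travel with the ZFS dataset
instead of being mapped through a userland daemon the way Samba does it.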
On 6/22/08, Marcelo Leal <[EMAIL PROTECTED]> wrote:
> Hello all,
> I would like to c
It is indeed true, and you can.
On 6/22/08, kevin williams <[EMAIL PROTECTED]> wrote:
> digg linked to an article related to the apple port of ZFS
> (http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
> I don't have a Mac but was interested in ZFS.
>
> Th
On Mon, Jun 23, 2008 at 11:18 AM, Edward <[EMAIL PROTECTED]> wrote:
> Yes, you are all correct. RAM costs nothing today, even though it might be
> bouncing back to its normal margin. DDR2 RAM is relatively cheap. Not to
> mention DDR3 will bring us double or more memory capacity.
>
Not likely.
On Mon, Jun 23, 2008 at 1:26 PM, Charles Soto <[EMAIL PROTECTED]> wrote:
>
>
>
> On 6/23/08 11:59 AM, "Tim" <[EMAIL PROTECTED]> wrote:
>
> > On Mon, Jun 23, 2008 at 11:18 AM, Edward <[EMAIL PROTECTED]> wrote:
> >
> >> Bu
On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:
> I see that the configuration tested in this X4500 writeup only uses
> the four built-in gigabit ethernet interfaces. This places a natural
> limit on the amount of data which can stream from the system. For
> local h
On Wed, Jun 25, 2008 at 1:19 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:
> On Wed, 25 Jun 2008, Tim wrote:
>
>>
>> Uhhh... 64bit/133MHz is ~8.5Gbit/sec. I *HIGHLY* doubt that bus will be a
>> limit. Without some serious offloading, you aren't pushing that
On Wed, Jun 25, 2008 at 3:13 PM, Lida Horn <[EMAIL PROTECTED]> wrote:
> Tim wrote:
>
>>
>>
>> On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn <
>> [EMAIL PROTECTED]>
>> wrote:
>>
>>I see that the c
On Fri, Jun 27, 2008 at 2:47 PM, Christophe Dupre <[EMAIL PROTECTED]>
wrote:
> Hi all,
> based on comments on this list, I bought a new server with 8 SATA bays
> and an AOC-SAT2-MV8 SATA controller. I then fired up a JumpStart of
> Solaris 10 5/08 on the server. Install runs through perfectly, wit
On Fri, Jun 27, 2008 at 11:50 AM, Albert Chin <
[EMAIL PROTECTED]> wrote:
> On Fri, Jun 27, 2008 at 08:13:14AM -0700, Ross wrote:
> > Bleh, just found out the i-RAM is 5v PCI only. Won't work on PCI-X
> > slots which puts that out of the question for the motherboard I'm
> > using. Vmetro have a 2
On Sat, Jun 28, 2008 at 1:42 AM, Erik Trimble <[EMAIL PROTECTED]> wrote:
> Brian Hechinger wrote:
> > On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote:
> >
> >> Unfortunately, we need to be careful here with our terminology.
> >>
> >
> > You are completely and 100% correct, Erik. I've
BIOS revs? Any other PCI cards in the system?
On Sun, Jun 29, 2008 at 5:16 PM, Christophe Dupre <[EMAIL PROTECTED]>
wrote:
> Tim,
> the system is a Silicon Mechanics A266; the motherboard is a SuperMicro
> H8DM8E-2
>
> I tried plugging the Marvell card in both 133MHz PCI-
On Sun, Jun 29, 2008 at 8:30 PM, Matthew Gardiner <[EMAIL PROTECTED]>
wrote:
> I think Kyle might be onto something here. With ZFS it is so easy
>> to create file systems, one could expect many people to do so.
>> In the past, it was so difficult and required planning, so people
>> tended to be m
On Sun, Jun 29, 2008 at 8:34 PM, Matthew Gardiner <[EMAIL PROTECTED]>
wrote:
>
>
> 2008/6/30 Tim <[EMAIL PROTECTED]>:
>
>
>>
>> On Sun, Jun 29, 2008 at 8:30 PM, Matthew Gardiner <
>> [EMAIL PROTECTED]> wrote:
>>
>>> I think Kyle mi
So what version is on your new card? It seems it'd be far easier to
request from Supermicro if we knew what to ask for.
On 7/1/08, Marc Bevand <[EMAIL PROTECTED]> wrote:
> I remember a similar problem with an AOC-SAT2-MV8 controller in a system of mine:
> Solaris rebooted each time the marvell88sx driver
So when are they going to release MSRP?
On 7/2/08, Mertol Ozyoney <[EMAIL PROTECTED]> wrote:
> Availability may depend on where you are located, but the J4200 and J4400 are
> available for most regions.
> That equipment is engineered to go well with Sun open storage components
> like ZFS.
> Besides p
Thanks in advance!!
>
> best regards,
>
> Y
Might
Do we have drivers available for ANY OS for these cards currently? It'd be
nice to at least be able to test if they function properly.
--Tim
he system IO-to-network bandwidth."
http://www.sun.com/servers/x64/x4540/
--Tim
with
real drive trays instead of useless blanks?
--Tim
On Wed, Jul 9, 2008 at 3:09 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
> The X4540 uses on-board LSI SAS controllers (LSI1068E).
>
> - Eric
>
> On Wed, Jul 09, 2008 at 02:59:26PM -0500, Tim wrote:
> > So, I see Sun finally updated the Thumper, and it appears they
Sun?
Does it ship with real drive trays in the *empty* slots, or those worthless
blanks that won't hold a drive?
--Tim
Is the X4540 still running a RageXL? I find that somewhat humorous. If it's
an NVIDIA chipset with Xeons now, someone was misquoted, or someone was
confused :)
http://www.byteandswitch.com/document.asp?doc_id=158533&WT.svl=news1_1
--Tim
On Wed, Jul 9, 2008 at 3:44 PM, Richard Elling <[EMAIL PROTECTED]>
wrote:
> Yes, thanks for catching this. I'm sure it is just a copy
Dunno how old it is, but James is right: no RAID, which is why it's cheaper.
Also why I like it ;)
On Wed, Jul 9, 2008 at 7:34 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Wed, Jul 9, 2008 at 1:12 PM, Tim <[EMAIL PROTECTED]> wrote:
> > Perfect. Which means good
I believe NetApp has several (valid) patents in this area that may be
preventing Sun from doing this. Perhaps that's on the table in the
cross-licensing negotiations?
--Tim
for a high-end STK SAN anyway.
It's the same reason you don't see HDS or EMC rushing to adjust the price of
the SYM or USP-V based on Sun releasing the Thumpers.
--Tim
g I'd build a business case
> around, but they're a reality.
>
> --Joe
>
>
Why not? There are several in the market today who I suspect have done just
that :D I won't name names, but for anyone in the industry I doubt I have
to.
--Tim
$20k list gets you into a decked-out StoreVault with FCP/iSCSI/NFS... For
being "just a JBOD" this thing is ridiculously overpriced, sorry.
I'm normally the first one to defend Sun when it comes to decisions made due
to an enterprise customer base, but this will not be one of those
situations.
On
Bryan,
Where did you find the SAS-to-SATA cables? I've been looking but
haven't found anything at the usual watering holes. I assume you
grabbed mini-SAS to 4x SATA?
Thanks!
--tim
On 7/12/08, Bryan Wagoner <[EMAIL PROTECTED]> wrote:
> Here's to hoping it works, I
4 dBA - almost
> silent.
>
>
I think you'd be better off with a NORCO for a cheap storage server case.
--Tim
nstead.
>
>
> hth,
> James C. McPherson
> --
> Senior Software Engineer, Solaris
> Sun Microsystems
> http://www.jmcp.homeunix.com/blog
>
>
Or manually load the driver onto the older version of OSOL :)
--Tim
smartctl will do what you're looking for. I'm not sure if it's included by
default or not with the latest builds. Here's the package if you need to
build from source:
http://smartmontools.sourceforge.net/
--Tim
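A hedged sketch of using it once installed (the device path is hypothetical;
SATA disks behind some HBAs need the ATA passthrough selected with -d sat):

# smartctl -H -d sat /dev/rdsk/c5t0d0s0    # overall health verdict only
# smartctl -a -d sat /dev/rdsk/c5t0d0s0    # full SMART attribute report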
> > smartctl will do what you're looking for. I'm not sure if it's included
> > by
> > default or not with the latest builds. Here's the package if you need to
> > build from source:
> > http://smartmontools.sourceforge.net
On Sun, Mar 7, 2010 at 3:12 PM, Ethan wrote:
> On Sun, Mar 7, 2010 at 15:30, Tim Cook wrote:
>> On Sun, Mar 7, 2010 at 2:10 PM, Ethan wrote:
>>> On Sun, Mar 7, 2010 at 14:55, Tim Cook wrote:
incremental snapshots using
zfs-auto-snapshot.
cheers,
tim
Forwarded Message
From: Brandon High
To: ZFS discuss , zfs-auto-snapshot
Subject: Using zfs-auto-snapshot for automatic backups
Date: Mon, 08 Mar 2010 13:16:02 +
The recent discussion of backin
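The machinery under that kind of backup is an incremental send between two of
the auto-snapshots. A rough sketch, with hypothetical dataset, snapshot, and
host names (the exact snapshot naming depends on the zfs-auto-snapshot
version):

# zfs send -i tank/home@zfs-auto-snap_daily-2010-03-07 \
    tank/home@zfs-auto-snap_daily-2010-03-08 | \
    ssh backuphost zfs receive -d backup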
spares
c3t6d0 AVAIL
--Tim
ing to this list announcing new features or code releases for the
fishworks project. If they have on a regular basis and I've just been
missing it, feel free to link to the threads. I'm fairly certain his
response is that if you want to discuss fishworks, you should go about the
proper channe
On Mon, Mar 8, 2010 at 5:47 PM, Miles Nordin wrote:
> >>>>> "tc" == Tim Cook writes:
>
>  tc> I'm betting it's more the fact that zfs-discuss is not
>
> Firstly, there's no need for you to respond on anyone's behalf,
> especially
ask anyway.
-tim
is by doing:
# svccfg -s auto-snapshot setprop zfs/value_authorization = astring: solaris.smf.manage.zfs-auto-snapshot
Sorry about that!
cheers,
tim
and
sold for less money. Same drivers, same firmware, IIRC, it's even the same
PCI device ID. When I ordered the card, I thought there was a mistake, as
the previous poster already mentioned, it comes in a box with an LSI
sticker, and the card says LSI all over it. The only place
tion. If that doesn't work,
your only remaining option is to restore from backup:
http://docs.sun.com/app/docs/doc/817-2271/gbctt?l=ja&a=view
--Tim
ndancy can't be provided below ZFS; it's just that if you
> want auto recovery, you need redundancy within ZFS itself as well.
>
> You can have 2 separate raid arrays served up via iSCSI to ZFS which then
> makes a mirror out of the storage.
>
> -Ross
>
>
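A sketch of that topology once the two iSCSI LUNs show up as local disks
(device names hypothetical):

# zpool create tank mirror c2t1d0 c3t1d0    # one LUN from each array

With the mirror inside ZFS, a checksum error on one side can be repaired from
the other; redundancy that lives only inside each array can't do that.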
running around since day one claiming the basic concept of ZFS flies in
the face of that very concept. Rather than do one thing well, it's unifying
two things (file system and RAID/disk management) into one. :)
--Tim
I see he mentioned hour being
the granularity of the frequency, but that hardly means you'd HAVE to run
scrubs every hour. Nobody is stopping you from setting it to 3600 hours if
you so choose.
--Tim
On Sat, Mar 20, 2010 at 5:36 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sat, 20 Mar 2010, Tim Cook wrote:
>
>>
>> Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have
>> been running around since day
>> one claim
tion level support
Both Standard and Premium support offerings are available for deployment of
Open HA Cluster 2009.06 with OpenSolaris 2009.06 with following
configurations:
*
etc. etc. etc.
So do you get paid directly by IBM then, or is it more of a "consultant"
type role?
--Tim
NAME       PROPERTY         VALUE  SOURCE
data/test  userquota@user1  50G    local
Anyone have an idea of a fix, please? Or is this a known limitation?
Many thanks,
Tim
. That limit should go away if you're using it as a
separate storage pool.
--Tim
mirror drive5 drive6 mirror drive7 drive8
See here:
http://www.stringliterals.com/?p=132
--Tim
mirroring?
>
> So do you not mirror drives with RAIDZ2 or RAIDZ3 because you would have
> nothing for space left
>
> -Jason
>
>
Triple parity did not get added until version 17. FreeBSD cannot do
raidz3.
--Tim
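A quick way to check what a given build supports is the version listing; a
sketch (output abbreviated, and the exact wording is from memory):

# zpool upgrade -v | grep -i parity
 17  Triple-parity RAID-Z

A kernel whose supported pool version is below 17 simply won't show that line.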
ntly from all of the major players using BSD,
if you're talking kernel code, I would say every single one of them has
pulled code from the 7-branch, and likely the 8-branch as well.
--Tim
>
>
What build? How long have you waited for the boot? It almost sounds to me
like it's waiting for the drive and hasn't timed out before you give up and
power it off.
--Tim
sent, and everything's fine.
>
> How long should I expect to wait if a drive is missing? It shouldn't take
> more than 30 seconds, IMHO.
>
>
Depends on a lot of things. I'd let it sit for at least half an hour to see
if you get any mess
>
Have you tried booting from a livecd and importing the pool from there? It
might help narrow down exactly where the problem lies.
--Tim
ave to check and
see if I can dig up one of those old threads. I vaguely recall someone here
had a single drive fail on an import and it took forever to import the pool,
running out of memory every time. I think he eventually added significantly
more memory and was a
stion of implementation if it can affect the
output, especially if it can make it internally inconsistent.
--
Dan.
Yes, a snapshot is taken and removed once the compare is performed.
-tim
xentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
Newer bits (>=129) will try to determine if your un-importable pool can
be helped with a "rewind". This reverts
the pool back in time a short while, so some data i
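The user-visible knob for that rewind, as a sketch (pool name hypothetical;
-F requests the rewind, and -n first reports what would happen without
changing anything):

# zpool import -nF tank    # dry run: report whether a rewind could recover the pool
# zpool import -F tank     # roll back to the last consistent txg and import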
and Premium support offerings are available for deployment of
Open HA Cluster 2009.06 with OpenSolaris 2009.06 with following
configurations:
--Tim
<http://www.opensolaris.com/learn/features/availability/>
On Wed, Mar 31, 2010 at 9:47 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 31 Mar 2010, Tim Cook wrote:
>
>>
>> http://www.opensolaris.com/learn/features/availability/
>>
>> Full production level support
>>
>> Both Standard
On Wed, Mar 31, 2010 at 11:23 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 31 Mar 2010, Tim Cook wrote:
>
>>
>> If there is ever another OpenSolaris formal release, then the situation
>> will be different.
>>
>> Cmon now, have a
can find the documentation for the release:
- Sun Studio 12 Update 1: The Sun Studio 12 Update 1 release is the
latest full production release of Sun Studio software. It has recently been
added to the OpenSolaris IPS repository.
To install this release in you
On OpenSolaris? Did you try deleting any old BEs?
-tim
also for the same
> reasons.
> I'm a little surprised that the engineers would suddenly stop doing it
> only on SSD's. But who knows.
>
> -Kyle
>
>
If I were forced to ignorantly cast a stone, it would be into Intel's lap
(if the SSD
and chop it off the end of every disk. I'm betting it's no more
than 1GB, and probably less than that. When we're talking about a 2TB
drive, I'm willing to give up a gig to be guaranteed I won't have any issues
when it comes time to swap it out.
--Tim
On Sat, Apr 3, 2010 at 6:53 PM, Robert Milkowski wrote:
> On 03/04/2010 19:24, Tim Cook wrote:
>
>
>
> On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey wrote:
>
>> Momentarily, I will begin scouring the omniscient interweb for
>> information, but I’d
On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
>
>
> On Sat, Apr 3, 2010 at 6:53 PM, Robert Milkowski wrote:
>
>> On 03/04/2010 19:24, Tim Cook wrote:
>>
>>
>>
>> On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey <
>> guacam...@nedharvey.com>
On Sat, Apr 3, 2010 at 9:52 PM, Richard Elling wrote:
> On Apr 3, 2010, at 5:56 PM, Tim Cook wrote:
> >
> > On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
> >> Your experience is exactly why I suggested ZFS start doing some "right
> sizing" if you will. Ch
ating like he's talking about. Writes aren't "duplicated on
each port". The path a read OR write goes down depends on the host-side
mpio stack, and how you have it configured to load-balance. It could be
simple round-robin, it could be based on queue depth, it could be most
recent
is, I'd be a happy
> man. ;-)
>
>
Have you tried pointing that bug out to the support engineers who have your
case at Oracle? If the fixed code is already out there, it's just a matter
of porting the code, right? :)
--Tim
issued
> on the passive controller to keep the cache mirrored?
>
>
He's talking about multipathing, he just has no clue what
he's talking about. He explicitly calls out applications that are used
for multipathing.
--Tim
be dying. Load wouldn't really matter in that scenario (although
a high load will generally help it out the door a bit quicker due to higher
heat/etc.).
--Tim
> is?
>
> It's hidden in iostat -E, of all places.
>
> --
> Dan.
>
>
I think he wants to know how to identify which physical drive maps to the
dev ID in Solaris. The only way I can think of is to run something like dd
against the d
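A sketch of that trick (device and slice are hypothetical; use whichever slice
actually spans data on your label). Reading the raw device pegs its activity
LED, which points out the physical slot:

# dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=1024k &
(watch the bays for the one solidly lit LED, then kill the dd)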
On Tue, Apr 6, 2010 at 12:47 AM, Daniel Carosone wrote:
> On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote:
> > On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone
> wrote:
> >
> > > On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
> > >
modified,
and non-open-source version of ZFS.
--Tim
ters with this NAS software?
I wouldn't waste your time. My last go-round, LACP was completely
broken for no apparent reason. The community is basically
non-existent.
--Tim
On Wed, Apr 7, 2010 at 5:59 PM, Richard Elling wrote:
> On Apr 7, 2010, at 3:24 PM, Tim Cook wrote:
> > On Wednesday, April 7, 2010, Jason S wrote:
> >> Since i already have Open Solaris installed on the box, i probably wont
> jump over to FreeBSD. However someone has su
> c8t3d0  UNAVAIL  0  0  0  cannot open
>
> # zpool remove junkpool c8t3d0
> # zpool status junkpool
>
>   pool: junkpool
>  state: ONLINE
>  scrub: none requested
> config:
>
rs are a bit "light" perhaps, but it works just fine for my
> needs and holds drives securely. The small fans are a bit noisy, but
> since the box lives in the basement I don't really care.
>
> --eric
>
>
> --
> Eric D. Mudama
> edmud...@mail.bounceswoosh.org
messages?
Try explicitly enabling fmd to send to syslog in
/usr/lib/fm/fmd/plugins/syslog-msgs.conf
-tim
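A sketch of that edit; the property name here is from memory, so verify it
against the shipped file before trusting it:

In /usr/lib/fm/fmd/plugins/syslog-msgs.conf:
    setprop syslogd true
then restart the daemon so it re-reads the plugin configuration:
    # svcadm restart fmd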
-insensitive behavior ARC case
http://arc.opensolaris.org/caselog/PSARC/2007/244/spec.txt
Not sure why it's not in the man page...
-tim
>
Depends on what sort of interface you're looking for. The Supermicro
AOC-SAT2-MV8s work great. They're PCI-X based, have 8 ports, come with SATA
cables, and are relatively cheap (<$150 most places).
--Tim
s of what the card came with. Why
would you buy the UIO card when you can get the Intel SASUC8i for the same
price or cheaper, and it comes in a standard form factor?
The (potential) problem with the 1068 cards is that they don't support AHCI
with SATA.
--Tim
GB
> transfer rates.
>
> It's older base hardware... athlon64 3400+ 2.2 ghz 3GB Ram
> With A-open AK86-L Motherboard.
>
> So what do any of you know about a PCI card that fills the bill?
>
>
>
If you're talking about standard PCI, and not P
On Sat, Apr 17, 2010 at 2:12 PM, Harry Putnam wrote:
> Tim Cook writes:
>
> > On Fri, Apr 16, 2010 at 7:35 PM, Harry Putnam
> wrote:
> >
> >> "Eric D. Mudama" writes:
> >>
> >> > On Thu, Apr 15 at 23:57, Günther wrote:
> >
3 volt signal
> voltage (peak transfer rate of 533 MB/s), but at 33 MHz both 5 volt and 3.3
> volt signal voltages are still allowed. It also added transaction latency
> limits to the specification.[7]).
>
> roy
2009
and I believe you can retrieve it with fgetattr() from the read/write
view.
-tim
The snapshot dir is just another directory, and over NFS it shows the
ctime, mtime, and atime of the top-level directory of the ZFS dataset at
the time the snapshot was created.
to 4k (to be
like NTFS), it jumped up to 1.29x after that. But it should be a lot better,
right?
Is there something I missed?
Regards
Tim
It was active all the time.
Made a new ZFS with -o dedup=on, copied with the default record size, got no
dedup; deleted the files, set recordsize=4k, dedup ratio 1.29x
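For anyone repeating the experiment, a sketch of that sequence (pool and
dataset names hypothetical). Note that recordsize only applies to files
written after it is set, which is why the copy had to be redone:

# zfs create -o dedup=on -o recordsize=4k tank/vhd
(copy the VHDs into the new dataset)
# zpool list -o name,dedupratio tank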
Dedup is a key element for my purpose, because I am planning a central
repository for some 150 Windows Server 2008 (R2) servers, which would take a
lot less storage if they dedup right.
Hi,
The setup was this:
Fresh installation of 2008 R2 -> server backup with the backup feature -> move
VHD to ZFS -> install Active Directory role -> backup again -> move VHD to the
same share
I am kinda confused by the change in dedup ratio from changing the record
size, since it should ded
I found the VHD specification here:
http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc
I am not sure if I understand it right, but it seems like data on disk gets
"compressed" into the VHD (no empty space), so even
The problem is the Solaris team and LSI have put a lot of work into the new
2008-series cards. Claiming there are issues without listing specific bugs
they can address is, I'm sure, frustrating to say the least.
On May 12, 2010 8:22 AM, "Thomas Burgess" wrote:
>>
>
> Now wait just a minute. You're cast
Yes, it requires a clustered filesystem to share out a single LUN to
multiple hosts. VMFS3, however bad an implementation, is in fact a
clustered filesystem. I highly doubt NFS is your problem though. I'd take
NFS over iSCSI and VMFS any day.
On May 23, 2010 8:06 PM, "Chris Dunbar - Earthside
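For the NFS route, the ZFS side is a single property; a sketch with
hypothetical dataset and host names (the value takes the usual share_nfs
options):

# zfs create tank/vmware
# zfs set sharenfs=rw=esx1:esx2,root=esx1:esx2 tank/vmware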
er wise protected against power loss.
>
> The STEC units are what Oracle/Sun use in their 7000 series appliances, and
> I believe EMC and many others use them as well.
>
>
When did that start? Every 7000 I've seen uses Intel drives.
--Tim