Marc Bevand writes:
>
> What I hate about mobos with no onboard video is that these days it is
> impossible to find cheap fanless video cards. So usually I just go headless.
Didn't finish my sentence: ...fanless and *power-efficient*.
Most cards consume 20+W when idle. This alone is
Brandon High writes:
>
> I'm going to be putting together a home NAS
> based on OpenSolaris using the following:
> 1 SUPERMICRO CSE-743T-645B Black Chassis
> 1 ASUS M2N-LR AM2 NVIDIA nForce Professional 3600 ATX Server Motherboard
> 1 SUPERMICRO AOC-SAT2-MV8 64-bit PCI
Brandon High wrote:
> On Fri, May 30, 2008 at 6:57 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>> USED hardware is your friend :) He wasn't quoting new prices.
>>
>
> Not really an apples-to-apples comparison then, is it? Cruising eBay
> for parts isn't my idea of reproducible or supportable.
>
>
Brandon High wrote:
> On Fri, May 30, 2008 at 5:59 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
>> One thought on this: for a small server, which is unlikely to ever be CPU
>> bound, I would suggest looking for an older dual-Socket 940 Opteron
>> motherboard. They almost all have many PCI-X
On Fri, May 30, 2008 at 6:57 PM, Tim <[EMAIL PROTECTED]> wrote:
> USED hardware is your friend :) He wasn't quoting new prices.
Not really an apples-to-apples comparison then, is it? Cruising eBay
for parts isn't my idea of reproducible or supportable.
Sure, an older server could possibly fall i
USED hardware is your friend :) He wasn't quoting new prices.
On Fri, May 30, 2008 at 8:54 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Fri, May 30, 2008 at 5:59 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
> > One thought on this: for a small server, which is unlikely to ever be CPU
On Fri, May 30, 2008 at 5:59 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
> One thought on this: for a small server, which is unlikely to ever be CPU
> bound, I would suggest looking for an older dual-Socket 940 Opteron
> motherboard. They almost all have many PCI-X slots, and single-core
> Opte
Brandon High wrote:
> On Fri, May 30, 2008 at 12:48 PM, Orvar Korvar
> <[EMAIL PROTECTED]> wrote:
>
>> In a PCI-X slot, you will reach something like 1.5GB/sec which should
>> suffice for most needs. Maybe it is cheaper to buy that card + PCI-X
>> motherboard (only found on server mobos) than
> It seems when a zfs filesystem with reserv/quota is 100% full, users can no
> longer even delete files to fix the situation, getting errors like these:
>
> $ rm rh.pm6895.medial.V2.tif
> rm: cannot remove `rh.pm6895.medial.V2.tif': Disk quota exceeded
We've run into the same problem here.
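A sketch of the usual workarounds (the dataset name and quota value below are
made up; the file name is from the report above):

  # temporarily raise or drop the quota, delete, then put it back
  zfs get quota tank/home/user
  zfs set quota=none tank/home/user
  rm rh.pm6895.medial.V2.tif
  zfs set quota=200G tank/home/user

  # if snapshots are pinning the blocks, freeing one also makes room
  zfs list -t snapshot -r tank/home/user
  zfs destroy tank/home/user@oldest

  # truncating in place before the rm sometimes works when rm alone fails
  cp /dev/null rh.pm6895.medial.V2.tif && rm rh.pm6895.medial.V2.tif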
Hello. I'm still having problems with my array. It's been replaying the ZIL (I
think) for a week now and it hasn't finished. Now I don't know if it will ever
finish: is it starting from scratch every time? I'm dtracing the ZIL and this
is what I get:
  0  46882  dsl_pool_zil_clean:return
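For anyone who wants to watch the same thing, something along these lines
should show it (I'm assuming the fbt provider here, which is what the
FUNCTION:NAME output above suggests):

  dtrace -n 'fbt::dsl_pool_zil_clean:entry, fbt::dsl_pool_zil_clean:return {}'

  # same idea, but timing each call to see whether the replay is progressing
  dtrace -n 'fbt::dsl_pool_zil_clean:entry { self->t = timestamp; }
             fbt::dsl_pool_zil_clean:return /self->t/ {
                 @["ns per call"] = quantize(timestamp - self->t);
                 self->t = 0;
             }'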
On Fri, May 30, 2008 at 12:48 PM, Orvar Korvar
<[EMAIL PROTECTED]> wrote:
> In a PCI-X slot, you will reach something like 1.5GB/sec which should suffice
> for most needs. Maybe it is cheaper to buy that card + PCI-X motherboard
> (only found on server mobos) than buying a SAS or PCI-express, if
On May 30, 2008, at 6:49 AM, Craig J Smith wrote:
>
> It also should be noted that I am having to run on Solaris and not
> OpenSolaris due to adaptec am79c973 scsi driver issues in OpenSolaris.
Well that is probably a showstopper then, since the in-kernel support
isn't in the pr
Hi Orvar,
This section describes the operations you can do with a mirrored storage
pool:
http://docs.sun.com/app/docs/doc/817-2271/gazhv?a=view
This section describes the operations you can do with a raidz storage
pool:
http://docs.sun.com/app/docs/doc/817-2271/gcvjg?a=view
Go with mirrored s
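If it helps, day-to-day operations on a mirrored pool look roughly like this
(device names are just placeholders):

  zpool create tank mirror c1t0d0 c1t1d0    # two-way mirror
  zpool attach tank c1t0d0 c1t2d0           # grow it into a three-way mirror
  zpool detach tank c1t2d0                  # drop one side again
  zpool replace tank c1t1d0 c1t3d0          # swap out a failing disk
  zpool add tank mirror c2t0d0 c2t1d0       # stripe in a second mirrored vdev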
I'm using the AOC card with 8 SATA-2 ports too. It got detected automatically
during the Solaris install. Works great. And it is cheap. I've heard that it is
the same chipset as used in the X4500 Thumper with 48 drives?
In a plain PCI slot, the bus bottlenecks at ~150MB/sec or so.
In a PCI-X slot, you will reach
On Fri, May 30, 2008 at 12:00 PM, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> I'm curious - is the current stream format tagged with a version number?
Richard Elling posted something about the send format on 5/14/2008:
> To date, the only incompatibility is with send streams created prior
> to Nev
So, it basically boils down to this: what operations can I do with a vdev? Any
links? I've googled a bit, but there is no comprehensive list of what I can do.
On May 30, 2008, at 10:49, J.P. King wrote:
> For _my_ purposes I'd be happy with zfs send/receive, if only it was
> guaranteed to be compatible between versions.
How often are you going to be doing restores from these, and for how
long? Since the zfs send/receive stream format has only changed
> replace a current raidz2 vdev with a mirror.
You're asking for vdev removal or pool shrink, which isn't
finished yet.
Rob
> 1) an l2arc or log device needs to be evacuation-possible
how about evacuation of any vdev? (pool shrink!)
> 2) any failure of a l2arc or log device should never prevent
> importation of a pool.
how about import or creation of any kinda degraded pool?
Rob
> making all the drives in a *zpool* the same size.
The only issue with having vdevs of different sizes is when
one fills up, reducing the stripe size for writes.
> making all the drives in a *vdev* (of almost any type) the same
The only issue is the unused space of the largest device, but
then we c
> Is there a way to efficiently replicating a complete zfs-pool
> including all filesystems and snapshots?
zfs send -R
  -R    Generate a replication stream package, which will
        replicate the specified filesystem, and
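Roughly, pushing everything (snapshots included) to another pool or host looks
like this (pool, host, and snapshot names are only examples):

  zfs snapshot -r tank@rep1
  zfs send -R tank@rep1 | ssh backuphost zfs receive -F -d backuppool

  # later, send only what changed since the last replication
  zfs snapshot -r tank@rep2
  zfs send -R -I tank@rep1 tank@rep2 | ssh backuphost zfs receive -d backuppool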
> I'd like to take a backup of a live filesystem without modifying
> the last accessed time.
why not take a snapshot?
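Something along these lines, for example (dataset and paths are made up);
reading the files through .zfs/snapshot doesn't touch the live filesystem's
atimes:

  zfs snapshot tank/home@backup
  tar cf /var/tmp/home-backup.tar -C /tank/home/.zfs/snapshot/backup .
  zfs destroy tank/home@backup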
Rob
On May 30, 2008, at 10:45 AM, Craig Smith wrote:
> The tough thing is trying to make this fit
> well in a Windows world.
If you hang all the disks off the OpenSolaris system directly, and
export via CIFS ... isn't it just a NAS box from the Windows
perspective? If so, how is it any harder to
Justin,
Thanks for the reply
In the environment I currently work in, the "powers that be" are almost
completely anti-Unix. Installing the NFS client on all machines would take
a real good sales pitch. Nonetheless I am still playing with the client
in our sandbox. As I install this on a test mac
On Fri, May 30, 2008 at 12:37 PM, Justin Vassallo
<[EMAIL PROTECTED]> wrote:
> Is it possible to mirror a vdev within a zpool?
Not that I know of.
> My aim is to replace a current raidz2 vdev with a mirror. I was wondering if
> it is possible to create a mirrored vdev, use it to mirror my current
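The usual way around it, since vdev removal / pool shrink isn't finished yet,
is to build the mirror as a second pool and move the data over with
send/receive, roughly (pool and device names are placeholders):

  zpool create newpool mirror c2t0d0 c2t1d0
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -F -d newpool
  # verify, then retire the old pool and take over its name if you want
  zpool destroy oldpool
  zpool export newpool
  zpool import newpool oldpool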
Hi,
Is it possible to mirror a vdev within a zpool?
My aim is to replace a current raidz2 vdev with a mirror. I was wondering if
it is possible to create a mirrored vdev, use it to mirror my current vdev,
then when resilvering completes remove the old vdev
justin
Orvar Korvar wrote:
> Ok, that was a very good explanation. Thanx a lot!
>
> So, I have a 8 ports SATA card, and I have one ZFS raid with 4 discs,
> 500gb each.
> These 4 discs are one vdev, right?
Yes, you have a pool with one 4-disk *RAIDZ* type vdev.
> And then I can add 4 more discs and create an
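Right - and adding the second vdev is roughly a one-liner (pool and device
names are placeholders for your four new discs):

  zpool add tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
  zpool status tank     # should now show two raidz1 vdevs striped together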
On Fri, May 30, 2008 at 7:07 AM, Hugh Saunders <[EMAIL PROTECTED]> wrote:
> On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai
> <[EMAIL PROTECTED]> wrote:
>> I think it's right. You'd have to move to a 64-bit kernel. Any reasons to
>> stick to a 32-bit kernel?
>
> My reason would be lack of 64-bit hardware :(
No, I did not set that property; not now, not in previous releases.
Nice to see "secure by default" coming to the admin tools as well.
Waiting for SSH to become 127.0.0.1:22 sometime... just kidding ;)
Thanks for the tip!
Any ideas about the stacktrace? It's still there instead of the web-GUI.
On Fri, May 30, 2008 at 6:30 AM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
> Ok, here is where I'm at:
>
> My install of OS 2008.05 (snv_86?) will not even come up in single user.
>
> The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of
> the missing log (and I have to import to fix the log ...).
Alas, didn't work so far.
Can the problem be that the zfs-root disk is not the first on the controller
(system boots from the grub on the older ufs-root slice), and/or that zfs is
mirrored? And that I have snapshots and a data pool too?
These are the boot disks (SVM mirror with ufs and grub):
On Fri, May 30, 2008 at 7:43 AM, Paul Raines <[EMAIL PROTECTED]> wrote:
>
> It seems when a zfs filesystem with reserv/quota is 100% full, users can no
> longer even delete files to fix the situation, getting errors like these:
>
> $ rm rh.pm6895.medial.V2.tif
> rm: cannot remove `rh.pm6895.medial.V2.tif': Disk quota exceeded
On 30 May 2008, at 15:49, J.P. King wrote:
> For _my_ purposes I'd be happy with zfs send/receive, if only it was
> guaranteed to be compatible between versions. I agree that the inability
> to extract single files is an irritation - I am not sure why this is
> anything more than an implement
> A cleanly written filesystem provides clean and abstract interfaces to do
> anything you like with the filesystem, it's content and metadata. In such an
> environment, there is no need for a utility that knows the disk layout (like
> ufsdump does).
I'd like to take a backup of a live filesystem without modifying the last
accessed time.
It seems when a zfs filesystem with reserv/quota is 100% full, users can no
longer even delete files to fix the situation, getting errors like these:
$ rm rh.pm6895.medial.V2.tif
rm: cannot remove `rh.pm6895.medial.V2.tif': Disk quota exceeded
(this is over NFS from a RHEL4 Linux box)
I can log
Hi,
I have imported a pool on a SAN volume with an alternate root.
After a reboot it is not possible to import the pool again
without force (-f). This leads me to think that alternate root
pools are not exported during shutdown. Is this intended?
Best regards,
Werner.
--
Werner Donné
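FWIW, a clean export before shutdown avoids the -f on the next import; the
dance looks roughly like this (the pool name and alternate root are
placeholders):

  zpool import -f -R /altroot sanpool
  # ... use the pool ...
  zpool export sanpool     # before shutdown, so the next import needs no -f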
You mean this:
https://www.opensolaris.org/jive/thread.jspa?threadID=46626&tstart=120
Elegant script, I like it, thanks :)
Trying now...
Some patching follows:
-for fs in `zfs list -H | grep "^$ROOTPOOL/$ROOTFS" | awk '{ print $1 };'`
+for fs in `zfs list -H | grep "^$ROOTPOOL/$ROOTFS" | grep -w
On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai
<[EMAIL PROTECTED]> wrote:
> I think it's right. You'd have to move to a 64-bit kernel. Any reasons to
> stick to a 32-bit kernel?
My reason would be lack of 64-bit hardware :(
Is this an iSCSI-specific limitation? Or will any multi-TB pool h
Ok, here is where I'm at:
My install of OS 2008.05 (snv_86?) will not even come up in single user.
The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of the
missing log (and I have to import to fix the log ...).
So I think I'll boot from the live cd, import my rootpool, mo
Chris Siebenmann <[EMAIL PROTECTED]> wrote:
> The first issue alone makes 'zfs send' completely unsuitable for the
> purposes that we currently use ufsdump. I don't believe that we've lost
> a complete filesystem in years, but we restore accidentally deleted
> files all the time. (And snapshots a
Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
> Joerg Schilling wrote:
> > Darren J Moffat <[EMAIL PROTECTED]> wrote:
> >
> >>> The closest equivalent to ufsdump and ufsrestore is "star".
> >> I very strongly disagree. The closest ZFS equivalent to ufsdump is 'zfs
> >> send'. 'zfs send', like ufsdump, has intimate awareness of the actual
> >> on-disk layout and is an integrated part of the filesystem implementation.
Very cool! Just one comment. You said:
> We'll try compression level #9.
gzip-9 is *really* CPU-intensive, often for little gain over gzip-1.
As in, it can take 100 times longer and yield just a few percent gain.
The CPU cost will limit write bandwidth to a few MB/sec per core.
I'd suggest tha
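Something like this is usually the better trade-off (the dataset name is just
an example):

  zfs set compression=gzip-1 tank/data    # cheap, captures most of the win
  # rather than
  zfs set compression=gzip-9 tank/data    # ~100x the CPU for a few percent more
  zfs get compressratio tank/data         # check what you're actually getting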
Thomas Maier-Komor <[EMAIL PROTECTED]> wrote:
> > I very strongly disagree. The closest ZFS equivalent to ufsdump is 'zfs
> > send'. 'zfs send', like ufsdump, has intimate awareness of the
> > actual on-disk layout and is an integrated part of the filesystem
> > implementation.
> >
> > st
Ok, that was a very good explanation. Thanx a lot!
So, I have an 8-port SATA card, and I have one ZFS raid with 4 discs, 500GB
each. These 4 discs are one vdev, right? And then I can add 4 more discs and
create another vdev of them.
1) Vdev of 4 Samsung 500GB discs. -> zpool, consisting of 1.5T
Leal,
The entire configuration through our corporation is being defined. One of our
team members is heavy into EMC - 200TB is his "normal" operating range.
However, for this need we are focused just on local "smart appliances", the
purpose of which is to do more than just automatically mirror t