Install Nexenta on a Dell PowerEdge?
Or one of these: http://www.pogolinux.com/products/storage_director
On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald wrote:
> I've seen the Nexenta and EON webpages, but I'm not looking to build my
> own.
>
> Is there anything out there I can just buy?
>
> -Kyle
How is the quality of the ZFS Linux port today? Is it comparable to Illumos,
or at least FreeBSD? Can I trust production data to it?
On Wed, Feb 27, 2013 at 5:22 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Tue, 26 Feb 2013, Gary Driggs wrote:
>
> On Feb 26, 2013, at 12:44 AM
Hi everyone,
We're a small Linux shop (20 users). I am currently using a Linux server to
host our 2 TB of data. I am considering better options for our data storage
needs. I mostly need instant snapshots and better data protection. I have
been considering EMC NS20 filers and ZFS-based solutions. F
Thanks for all the answers. Please find more questions below :)
- Good to know EMC filers do not have end-to-end checksums! What about NetApp?
- Any other limitations of the big two NAS vendors as compared to ZFS?
- I still don't have my original question answered: I want to somehow assess the
I guess I am mostly interested in MTTDL for a ZFS system on whitebox hardware
(like Pogo), vs. Data ONTAP on NetApp hardware. Any numbers?
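For reference, the usual back-of-envelope model (the one Richard Elling has
written up) treats disk failures as independent and ignores unrecoverable
read errors, so take it as a sketch rather than a vendor number:

  MTTDL(single parity, N disks)  ~= MTBF^2 / (N * (N-1) * MTTR)
  MTTDL(double parity, N disks)  ~= MTBF^3 / (N * (N-1) * (N-2) * MTTR^2)

E.g. with MTBF = 100 years and MTTR = 24 hours (~0.0027 years), a 2-disk
mirror vdev comes out around 100^2 / (2 * 1 * 0.0027), roughly 1.8 million
years; divide by the number of vdevs for the whole pool.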
On Tue, Sep 30, 2008 at 4:36 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:
> On Tue, 30 Sep 2008, Ahmed Kamal wrote:
>
>>
>> - I stil
On Tue, Sep 30, 2008 at 8:40 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> >>>>> "ak" == Ahmed Kamal <[EMAIL PROTECTED]> writes:
>
> ak> I need to answer and weigh against the cost.
>
> I suggest translating the reliability problems into a c
>
> Intel mainstream (and indeed many tech companies') stuff is purposely
> stratified from the enterprise stuff by cutting out features like ECC and
> higher memory capacity and using different interface form factors.
Well, I guess I am getting a Xeon anyway.
> There is nothing magical about SAS
>
> Well, if you can probably afford more SATA drives for the purchase
> price, you can put them in a striped-mirror setup, and that may help
> things. If your disks are cheap you can afford to buy more of them
> (space, heat, and power notwithstanding).
>
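If I read that right, the suggested layout is something like this (a sketch
with hypothetical device names):

  # zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0 \
        mirror c0t2d0 c1t2d0 mirror c0t3d0 c1t3d0

ZFS stripes writes across the four mirror vdevs, so you get RAID-10-style
behaviour: more spindles for random I/O, and any one disk per pair can fail.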
Hmm, that's actually cool!
If I config
>
> I observe that there are no disk vendors supplying SATA disks
> with speed > 7,200 rpm. It is no wonder that a 10k rpm disk
> outperforms a 7,200 rpm disk for random workloads. I'll attribute
> this to intentional market segmentation by the industry rather than
> a deficiency in the transfer
>
>
> So, performance aside, does SAS have other benefits? Data integrity? How
> would 8 SATA disks in raid1 compare vs. another 8 smaller SAS disks in raidz(2)?
> Like apples and pomegranates. Both should be able to saturate a GbE link.
>
You're the expert, but isn't the 100 MB/s for streaming, not random I/O?
extra SAS performance), and offers better performance and
MTTDL than 8 SATA raidz2, I guess I will go with 8-SATA-raid1 then!
Hope I'm not horribly mistaken :)
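For contrast, the raidz2 layout I was weighing against it would be a single
8-disk vdev (same hypothetical devices): more usable space, and any two
disks may fail, at the cost of random-read IOPS:

  # zpool create tank raidz2 c0t0d0 c1t0d0 c0t1d0 c1t1d0 \
        c0t2d0 c1t2d0 c0t3d0 c1t3d0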
On Wed, Oct 1, 2008 at 3:18 AM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, Sep 30, 2008 at 8:13 PM, Ahmed Kama
Thanks for all the opinions everyone, my current impression is:
- I do need as much RAM as I can afford (16 GB looks good enough for me)
- SAS disks offer better IOPS & better MTBF than SATA. But SATA offers
enough performance for me (to saturate a gig link), and its MTBF is around
100 years, which
Thanks for the info. I am not really after big performance; I am already on
SATA and it's good enough for me. What I really, really can't afford is data
loss. The CAD designs our engineers are working on can sometimes really be
worth a lot. But still, we're a small company and would rather save and b
>
>In the past year I've lost more ZFS file systems than I have any other
>type of file system in the past 5 years. With other file systems I
>can almost always get some data back. With ZFS I can't get any back.
That's scary to hear!
>
>
I am really scared now! I was the one trying to
For *nix: rsync.
For Windows: rsyncshare:
http://www.nexenta.com/corp/index.php?option=com_remository&Itemid=77&func=startdown&id=18
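For example, a minimal nightly pull into a ZFS dataset, followed by a
snapshot (hypothetical host and dataset names, assuming tank/backups
already exists):

  rsync -aH --delete user@clientbox:/home/ /tank/backups/clientbox/
  zfs snapshot tank/backups@`date +%Y-%m-%d`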
On Sat, Oct 18, 2008 at 1:56 PM, Ares Drake <[EMAIL PROTECTED]>wrote:
> Greetings.
>
> I am currently looking into setting up a better backup solution for our
> family
Hi,
Unfortunately, every now and then someone has his zpool corrupted, with no
tools to fix it! This is due to either ZFS bugs, or hardware lying about
whether the bits really hit the platters. I am evaluating what I should be
using for storing VMware ESX VM images (ext3 or ZFS on NFS). I really real
zfs list -t snapshot?
On Sat, Nov 22, 2008 at 1:14 AM, Pawel Tecza <[EMAIL PROTECTED]> wrote:
> Hello All,
>
> This is my zfs list:
>
> # zfs list
> NAME         USED  AVAIL  REFER  MOUNTPOINT
> rpool       10,5G  3,85G    61K  /rpool
> rpool/ROOT  9,04G
Hi,
Not sure if this is the best place to ask, but do Sun's new Amber Road
storage boxes have any kind of integration with ESX? Most importantly,
quiescing the VMs before snapshotting the zvols, and/or some level of
management integration through either the web UI or ESX's console? If there's
nothing
Hi,
I have been doing some basic performance tests, and I am getting a big hit
when I run UFS over a zvol, instead of directly using ZFS. Any hints or
explanations are very welcome. Here's the scenario. The machine has 30 GB RAM,
and two IDE disks attached. The disks have 2 fdisk partitions (c4d0p2,
c
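(For anyone reproducing this kind of comparison, the zvol side of the test
is typically set up like so; names are hypothetical:

  # zfs create -V 10g tank/testvol
  # newfs /dev/zvol/rdsk/tank/testvol
  # mount /dev/zvol/dsk/tank/testvol /mnt

The direct-ZFS side is just a plain filesystem dataset on the same pool.)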
Well, I checked and it is 8K:
volblocksize  8K
Any other suggestions on how to begin debugging such an issue?
On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 15 Dec 2008, Ahmed Kamal wrote:
>
>>
>>
Hi,
I have set up AVS replication between two zvols on two OpenSolaris 2008.11
nodes. I have been seeing BIG performance issues, so I tried to set up the
system to be as fast as possible using a couple of tricks. The detailed
setup and performance data are below:
* A 100G zvol has been set up o
You might want to look at AVS for real-time replication:
http://www.opensolaris.org/os/project/avs/
However, I have had huge performance hits after enabling it. The
replicated volume is almost 10% of the speed of a normal one.
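For the record, enabling a replicated set on a zvol looked roughly like
this; I am quoting the shape from memory, so check sndradm(1M) before
trusting the argument order:

  # sndradm -e host1 /dev/zvol/rdsk/tank/vol /dev/zvol/rdsk/tank/bitmap \
        host2 /dev/zvol/rdsk/tank/vol /dev/zvol/rdsk/tank/bitmap ip async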
On Thu, Jan 15, 2009 at 1:28 PM, Ian Mather wrote:
> Fairly new to ZFS. I
Hi Jim,
Thanks for your informative reply. I am involved with Kristof
(the original poster) in this setup; please allow me to reply below.
> Was the following 'test' run during resynchronization mode or replication
> mode?
>
Neither; testing was done while in logging mode. This was chosen simply
to avoid
Has anyone shared a script to send/recv a tree of ZFS filesystems in
parallel, ideally with a cap on concurrency?
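Something along these lines is what I have in mind (a rough sketch only:
it assumes GNU xargs for the -P concurrency flag, that a @today snapshot
already exists on every dataset, and hypothetical pool/host names):

  #!/bin/sh
  # Replicate every filesystem under tank/data, at most 4 streams at a time.
  zfs list -H -o name -r -t filesystem tank/data | \
    xargs -n1 -P4 -I{} sh -c \
      'zfs send {}@today | ssh backuphost zfs recv -dF backup'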
Richard, how fast were you taking those snapshots, and how fast were the
syncs over the network? For example, assuming a snapshot every 10 mins,
is it reasonable to expect to syn
Hi Jim,
The setup is not there anymore; however, I will share as much detail
as I have documented. Could you please post the commands you used,
and any differences you think might be important? Did you ever test
with 2008.11 instead of SXCE?
I will probably be testing again soon. Any tips
>
> "Unmount" is not sufficient.
>
Well, umount is not the "right" way to do it, so he'd be simulating a
power-loss/system-crash. That still doesn't explain why massive data loss
would occur, though. I would understand the last txg being lost, but 90%,
according to the OP?!
>
> The good news is that ZFS is getting popular enough on consumer-grade
> hardware. The bad news is that said hardware has a different set of
> failure modes, so it takes a bit of work to become resilient to them.
> This is pretty high on my short list.
So does this basically mean ZFS rolls ba
ZFS replication basics at http://cuddletech.com/blog/pivot/entry.php?id=984
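The gist, for the archives (hypothetical dataset and host names):

  # zfs snapshot tank/home@monday
  # zfs send tank/home@monday | ssh otherhost zfs recv backup/home
  # ...and from then on, incrementals only:
  # zfs send -i monday tank/home@tuesday | ssh otherhost zfs recv backup/home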
Regards
On Sat, Mar 28, 2009 at 1:57 AM, Harry Putnam wrote:
>
> [...]
>
> Harry wrote:
> >> Now I'm wondering if the export/import sub commands might not be a
> >> good bit faster.
> >>
> Ian Collins answered:
> > I th
Hi zfs gurus,
I am wondering whether the reliability of Solaris/ZFS is still guaranteed if
I run ZFS not directly on real hardware, but on top of Xen
virtualization? The plan is to give the Xen guest raw physical access to
the disks. I remember ZFS having problems with hardware that
Is anyone even using ZFS under Xen in production in some form? If so, what's
your impression of its reliability?
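To be specific, the plan is the usual phy: passthrough in the domU config,
something like this sketch (hypothetical device names):

  disk = [ 'phy:/dev/sdb,xvdb,w',
           'phy:/dev/sdc,xvdc,w' ]

so the guest sees the whole disks and ZFS can own them end to end.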
Regards
On Sun, May 17, 2009 at 2:16 PM, Ahmed Kamal <
email.ahmedka...@googlemail.com> wrote:
> Hi zfs gurus,
>
> I am wondering whether the reliability of
>
> However, if you need to decide whether to use Xen, test your setup
> before going into production, and ask your boss whether he can live with
> innovative ... solutions ;-)
>
Thanks a lot for the informative reply. It has definitely been helpful.
I am, however, interested in the reliability of r
>
> It's worth a try, although like you I'll have to bow to the gurus on the
> list. It's not the end of the world if she can't get it back, but if anyone
> does know of a method like this, I'd love to know, for future reference as
> much as anything.
>
Perhaps you're looking for http://www.cgsecur