Hi,
I wrote a Hobbit script around the lunmap/hbamap commands to monitor SAN health.
I'd like to add detail on what is being hosted by those LUNs.
With SVM, metastat -p is helpful.
With ZFS, the zpool status output is awful for scripting.
Is there a utility somewhere to show zpool information in a script
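For what it's worth, the -H flag exists for exactly this: it suppresses headers and emits tab-separated fields. A minimal sketch, assuming a pool named "tank" (the pool name is illustrative):

```shell
# Parseable pool summary: -H drops the header line and separates
# columns with tabs, so awk/cut can consume it directly
zpool list -H -o name,size,used,avail,health tank

# Per-dataset detail, useful for mapping LUNs back to filesystems
zfs list -H -o name,used,avail,mountpoint -r tank

# Health check for monitoring: "zpool status -x" prints
# "all pools are healthy" and exits cleanly when nothing is wrong
zpool status -x
```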
Below is my customer's issue. I am stuck on this one. I would appreciate it
if someone could help me out on this. Thanks in advance!
ZFS Checksum feature:
I/O checksum is one of the main ZFS features; however, there is also
block checksum done by Oracle. This is
good when utilizing UFS since it
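For context on the ZFS side of this question: ZFS checksums every block by default, and the algorithm is a per-dataset property that can be inspected or changed. A sketch, with "tank/oradata" as a hypothetical dataset name:

```shell
# Show the current checksum setting for a dataset
zfs get checksum tank/oradata

# The property is tunable per dataset; accepted values include
# on, off, fletcher2, fletcher4, and sha256
zfs set checksum=sha256 tank/oradata
```

Oracle's own block checksumming (the db_block_checksum parameter) is a separate layer inside the database and is independent of whatever the filesystem does.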
Hi,
I am new to ZFS and recently managed to get a ZFS root to work.
These are the steps I took:
1. Installed b81 (fresh install)
2. Unmounted /second_root on c0d0s4
3. Removed /etc/vfstab entry of /second_root
4. Executed ./zfs-actual-root-install.sh c0d0s4
5. Rebooted (init 6)
After selec
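A quick sanity check after a reboot like the one above (the pool name depends on what the install script created):

```shell
# Confirm the root filesystem type is now zfs
df -n /

# The pool built on c0d0s4 should show as ONLINE
zpool status

# List the root dataset and its mountpoint
zfs list
```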
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global
zone. I originally had the database "storage" in the non-global zone (e.g.
/var/local/pgsql/data on a UFS filesystem) and was getting performance of "X"
(e.g. from a TPC-like application: http://www.tpc.org). I then
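For Postgres-on-ZFS setups like the one described above, a commonly suggested tuning is matching the dataset recordsize to the 8K PostgreSQL block size; "pgpool/data" below is a hypothetical dataset name, not from the original post:

```shell
# Match ZFS recordsize to the Postgres 8K block size
# (set this before loading data; it only affects new writes)
zfs set recordsize=8k pgpool/data
zfs get recordsize pgpool/data

# On the database side, "show block_size;" in psql reports the
# block size in use (8192 is the Postgres default)
```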
Hi all,
Does anyone have any data to show how ZFS raidz with the on-disk cache
enabled for small, random IOs compares to a raid controller card with
cache in raid 5.
I'm working on a very competitive RFP, and one thing that could give us
an advantage is the ability to remove this controller ca
[EMAIL PROTECTED] said:
> . . .
> ZFS filesystem [on StorageTek 2530 Array in RAID 1+0 configuration
> with a 512K segment size]
> . . .
> Comparing run 1 and 3 shows that ZFS is roughly 20% faster on
> (unsynchronized) writes versus UFS. What's really surprising, to me at least,
> is
Does anyone have any particularly creative ZFS replication strategies they
could share?
I have 5 high-performance Cyrus mail servers, each with about a terabyte of
storage, of which only 200-300 GB is used, even including 14 days of
snapshot space.
I am thinking about setting up a singl
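The usual building block for this kind of replication is incremental zfs send/receive. A sketch, where "mailpool/cyrus" and "backuphost" are hypothetical names:

```shell
# Take today's snapshot
zfs snapshot mailpool/cyrus@today

# Ship only the delta since the previous snapshot to another box;
# -F on the receive side rolls the target back to the common snapshot
zfs send -i mailpool/cyrus@yesterday mailpool/cyrus@today | \
    ssh backuphost zfs receive -F mailpool/cyrus
```

Since only 200-300 GB of each terabyte is in use, the first full send is the only expensive transfer; the incrementals afterward are proportional to daily churn.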
Scott Macdonald - Sun Microsystems wrote:
> Below is my customer's issue. I am stuck on this one. I would appreciate it
> if someone could help me out on this. Thanks in advance!
>
>
>
> ZFS Checksum feature:
>
> I/O checksum is one of the main ZFS features; however, there is also
> block checksum d
On Feb 1, 2008, at 1:15 PM, Vincent Fox wrote:
> Ideally I'd love it if ZFS directly supported the idea of rolling
> snapshots out into slower secondary storage disks on the SAN, but in
> the meanwhile looks like we have to roll our own solutions.
If you're running some recent SXCE build, you
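In the meantime, a roll-your-own version of "rolling snapshots into slower disks" can be sketched with two pools on the same host; "fastpool" and "slowpool" are hypothetical names, with slowpool built from the slower SAN LUNs:

```shell
# Name snapshots by date so rotation is easy to script
SNAP=`date +%Y%m%d`
zfs snapshot fastpool/data@$SNAP

# Replicate into the pool on the slow disks
# (incremental with -i once the first full send has been done)
zfs send fastpool/data@$SNAP | zfs receive -F slowpool/data

# Then prune old snapshots from the fast pool only, keeping the
# long history on the slow tier:
# zfs destroy fastpool/data@<oldest>
```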
Take a look at NexentaStor - it's a complete 2nd-tier solution:
http://www.nexenta.com/products
and AVS is nicely integrated via a management RPC interface which
connects multiple NexentaStor nodes together and greatly simplifies
AVS usage with ZFS... See the demo here:
http://www.nexenta.com/demo
[EMAIL PROTECTED] said:
> Depending on needs for space vs. performance, I'd probably pick either 5*9 or
> 9*5, with 1 hot spare.
[EMAIL PROTECTED] said:
> How can you check the speed? (I'm a total newbie on Solaris)
We're deploying a new Thumper w/750GB drives, and did space vs performance
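The space side of the 5*9 vs. 9*5 trade-off is easy to put numbers on, assuming single-parity raidz and the 750 GB drives mentioned above:

```shell
# usable GB = vdevs * (disks per vdev - 1 parity disk) * 750
echo "5 x raidz(9): $((5 * (9 - 1) * 750)) GB usable"
echo "9 x raidz(5): $((9 * (5 - 1) * 750)) GB usable"
# 5x9 yields 30000 GB, 9x5 yields 27000 GB: nine vdevs trade
# roughly 10% of the space for almost double the random IOPS,
# since random-read throughput scales with the vdev count
```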
Hi all
we are considering ZFS for various storage uses (DB, etc). Most features are
great, especially the ease of use.
Nevertheless, a few questions:
- we are using SAN disks, so most JBOD recommendations don't apply, but I did
not find many reports of multi-terabyte zpools on LUNs... anybody
Erast,
> Take a look at NexentaStor - it's a complete 2nd-tier solution:
>
> http://www.nexenta.com/products
>
> and AVS is nicely integrated via a management RPC interface which
> connects multiple NexentaStor nodes together and greatly simplifies
> AVS usage with ZFS... See the demo here:
>
> http
For small random I/O operations I would expect a substantial performance
penalty for ZFS. The reason is that RAID-Z is more akin to RAID-3 than RAID-5;
each read and write operation touches all of the drives. RAID-5 allows multiple
I/O operations to proceed in parallel since each read and write
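The argument above can be put in back-of-envelope numbers, assuming roughly 100 random IOPS per spindle (an assumed figure, not from the original post) and a 9-disk group:

```shell
DISK_IOPS=100   # assumed random IOPS per spindle
DISKS=9

# A raidz vdev touches every data disk on each small read, so the
# whole vdev delivers about one disk's worth of random-read IOPS
echo "raidz(9):  $DISK_IOPS IOPS"

# RAID-5 can serve independent small reads from separate spindles
# in parallel, so reads scale with the disk count
echo "RAID-5(9): $((DISKS * DISK_IOPS)) IOPS"
```

The gap closes for large sequential I/O, where all spindles stream in both layouts; it is specifically the small random workload where raidz gives up parallelism.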
Matt Ingenthron wrote:
> Hi all,
>
> Does anyone have any data to show how ZFS raidz with the on-disk cache
> enabled for small, random IOs compares to a raid controller card with
> cache in raid 5.
>
> I'm working on a very competitive RFP, and one thing that could give us
> an advantage is the
On 01/02/2008 at 11:17:14 -0800, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > Depending on needs for space vs. performance, I'd probably pick either 5*9
> > or
> > 9*5, with 1 hot spare.
>
> [EMAIL PROTECTED] said:
> > How can you check the speed? (I'm a total newbie on Solaris)
>