[CentOS] ssd quandry

2011-10-23 Thread John R Pierce
On a CentOS 6 64-bit system, I added a couple of prototype SAS SSDs on an HP 
P411 RAID controller (I believe this is a rebranded LSI MegaRAID with HP 
firmware) and am trying to format them for the best random I/O performance 
with something like PostgreSQL.

So I used the controller's command-line tool (hpacucli) to build a RAID 0 
volume from the two SAS SSDs.
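
The create step itself isn't captured below; with hpacucli it would have 
looked roughly like this (drive IDs taken from the output further down -- 
exact syntax can vary between hpacucli versions):

# hpacucli ctrl slot=1 create type=ld drives=1I:1:23,1I:1:24 raid=0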

# hpacucli ctrl slot=1 logicaldrive 3 show detail

Smart Array P410 in Slot 1

array C

   Logical Drive: 3
  Size: 186.3 GB
  Fault Tolerance: RAID 0
  Heads: 255
  Sectors Per Track: 32
  Cylinders: 47869
  Strip Size: 256 KB
  Status: OK
  Array Accelerator: Enabled
  Unique Identifier: 600508B1001C2EDB6026F9ADF9F88A09
  Disk Name: /dev/sdc
  Mount Points: /ssd 186.3 GB
  Logical Drive Label: AF36B716PACCRCN810E1R9J646A

# hpacucli ctrl slot=1 show config

Smart Array P410 in Slot 1(sn: PACCRCN810E1R9J)

array C (Solid State SAS, Unused Space: 0 MB)


   logicaldrive 3 (186.3 GB, RAID 0, OK)

   physicaldrive 1I:1:23 (port 1I:box 1:bay 23, Solid State SAS, 100 
GB, OK)
   physicaldrive 1I:1:24 (port 1I:box 1:bay 24, Solid State SAS, 100 
GB, OK)

# hpacucli ctrl slot=1 show ssdinfo detail

Smart Array P410 in Slot 1
Total Solid State Drives with Wearout Status: 0
Total Smart Array Solid State Drives: 2
Total Solid State SAS Drives: 2
Total Solid State Drives: 2


array C

   physicaldrive 1I:1:23
  Port: 1I
  Box: 1
  Bay: 23
  Status: OK
  Drive Type: Data Drive
  Interface Type: Solid State SAS
  Size: 100 GB
  Firmware Revision: 1234
  Serial Number: 99
  Model: XYZZY M2011
  Current Temperature (C): 30
  Maximum Temperature (C): 37
  SSD Smart Trip Wearout: Not Supported
  PHY Count: 2
  PHY Transfer Rate: 6.0GBPS, Unknown

   physicaldrive 1I:1:24
  Port: 1I
  Box: 1
  Bay: 24
  Status: OK
  Drive Type: Data Drive
  Interface Type: Solid State SAS
  Size: 100 GB
  Firmware Revision: 1234
  Serial Number: 99
  Model: XYZZY M2011
  Current Temperature (C): 29
  Maximum Temperature (C): 36
  SSD Smart Trip Wearout: Not Supported
  PHY Count: 2
  PHY Transfer Rate: 6.0GBPS, Unknown



# tail /var/log/messages
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: Attached scsi generic 
sg3 type 0
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: [sdc] 390611040 
512-byte logical blocks: (199 GB/186 GiB)
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: [sdc] 8192-byte 
physical blocks
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: [sdc] Write Protect is off
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: [sdc] Write cache: 
disabled, read cache: enabled, doesn't support DPO or FUA
Oct 22 22:56:24 svfis-dl180b kernel: sdc: unknown partition table
Oct 22 22:56:24 svfis-dl180b kernel: sd 0:0:0:3: [sdc] Attached SCSI disk
Oct 22 22:56:36 svfis-dl180b cmaeventd[2540]: Logical drive 3 of Array 
Controller in slot 1, has changed from status Unconfigured to OK

# mkfs.ext4 /dev/sdc
mke2fs 1.41.12 (17-May-2010)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
Stride=1 blocks, Stripe width=0 blocks
12210528 inodes, 24413190 blocks
1220659 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4311218176
373 block groups
65528 blocks per group, 65528 fragments per group
32736 inodes per group
Superblock backups stored on blocks:
 65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,
 5307768, 8191000, 15923304, 22476104

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done


# mount -t ext4 /dev/sdc /ssd
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so

# tail /var/log/messages
...
Oct 22 23:54:36 svfis-dl180b kernel: EXT4-fs (sdc): bad block size 8192

ok, so let's try 4K blocks?

# mkfs.ext4 -b 4096 /dev/sdc
mke2fs 1.41.12 (17-May-2010)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
mkfs.ext4: Invalid argument while setting blocksize; too small for device



hmmm... can't do that either?

Can I configure this 64-bit system for large pages or something so it 
will support 8K blocks?
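
For reference, the mismatch the kernel is complaining about can be read 
straight from sysfs and getconf (device name assumed to be the same 
/dev/sdc as above); on x86_64 the base page size is 4K, which is what the 
ext4 block size is checked against:

# cat /sys/block/sdc/queue/logical_block_size
512
# cat /sys/block/sdc/queue/physical_block_size
8192
# getconf PAGESIZE
4096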




-- 
john r pierce                            N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ssd quandry

2011-10-23 Thread Ken godee

Maybe try to partition it to see what happens.



On 10/23/2011 12:07 AM, John R Pierce wrote:
> On a CentOS 6 64bit system, I added a couple prototype SAS SSDs on a HP
> P411 raid controller (I believe this is a rebranded LSI megaraid with HP
> firmware) and am trying to format them for best random IO performance
> with something like postgresql.
> [...]

Re: [CentOS] ssd quandry

2011-10-23 Thread John R Pierce
On 10/23/11 12:23 AM, Ken godee wrote:
> Maybe try to partition it to see what happens.

with parted, at least, I'm stuck in a vicious circle that won't let me 
align the data right:

# parted /dev/sdc
GNU Parted 2.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos
Warning: The existing disk label on /dev/sdc will be destroyed and all 
data on this disk will be lost. Do you want to continue?
Yes/No? y
(parted) mkpart primary ext4 512k -1s
Warning: The resulting partition is not properly aligned for best 
performance.
Ignore/Cancel? i
(parted) quit

# mkfs.ext4 /dev/sdc1
mke2fs 1.41.12 (17-May-2010)
/dev/sdc1 alignment is offset by 4096 bytes.
This may result in very poor performance, (re)-partitioning suggested.
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
Stride=1 blocks, Stripe width=0 blocks
12210528 inodes, 24413127 blocks
1220656 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4311218176
373 block groups
65528 blocks per group, 65528 fragments per group
32736 inodes per group
Superblock backups stored on blocks:
 65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,
 5307768, 8191000, 15923304, 22476104

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

# mount -t ext4 /dev/sdc1 /ssd
mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so


# tail /var/log/messages

Oct 23 00:27:43 svfis-dl180b kernel: EXT4-fs (sdc1): bad block size 8192




GRRR  ok.  um.


# parted /dev/sdc
GNU Parted 2.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit b
(parted) print
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sdc: 12852480B
Sector size (logical/physical): 512B/8192B
Partition Table: msdos

Number  StartEndSize   Type File system  Flags
  1  512000B  12852479B  12340480B  primary  ext4

(parted) rm 1
(parted) mkpart primary ext4 1024s -1s
Warning: The resulting partition is not properly aligned for best 
performance.
Ignore/Cancel? y
parted: invalid token: y
Ignore/Cancel? ignore
(parted) print
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sdc: 12852480B
Sector size (logical/physical): 512B/8192B
Partition Table: msdos

Number  StartEndSize   Type File system  Flags
  1  524288B  12852479B  12328192B  primary

(parted) quit

# mkfs.ext4 /dev/sdc1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
Stride=1 blocks, Stripe width=0 blocks
12210528 inodes, 24413126 blocks
1220656 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4311218176
373 block groups
65528 blocks per group, 65528 fragments per group
32736 inodes per group
Superblock backups stored on blocks:
 65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,
 5307768, 8191000, 15923304, 22476104

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

*and, yup, 8K blocks still won't mount*

so...

# mkfs.ext4 -b 4096 -F /dev/sdc1
mke2fs 1.41.12 (17-May-2010)
Warning: specified blocksize 4096 is less than device physical 
sectorsize 8192, forced to continue
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
   ^^
...


fixes it.  have to use -F to format this thing.  now I'm seeing I/O 
more like what I'd expect to see.
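
Boiled down, the sequence that ended up working, shown non-interactively 
(the 2048-sector start is just one easy way to keep the partition 1 MiB 
aligned, which is also a multiple of the 8K physical sector):

# parted -s /dev/sdc mklabel msdos mkpart primary ext4 2048s 100%
# mkfs.ext4 -b 4096 -F /dev/sdc1
# mount -t ext4 /dev/sdc1 /ssd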


-- 
john r pierce                            N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] How to remove a Trash folder from a mounted ntfs partition

2011-10-23 Thread Ljubomir Ljubojevic
On 10/23/2011 02:35 AM, Yves Bellefeuille wrote:
> I've seen your correction, but I still don't understand where
> this .Trash-root directory comes from.
>
> The user says that he's running CentOS 5.7 and Gnome, but under Gnome
> the trash directory is simply named .Trash, not .Trash-root, and
> deleting a file from an NTFS file system mounted under Linux doesn't
> move it to .Trash.
>

My observation is that .Trash is for normal users and .Trash-root is 
created when you delete as root. I sometimes use Krusader (under Gnome) in 
root mode, and that could account for .Trash-root in my case. Maybe he did 
something similar.
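
Either way, getting rid of it from a shell is just a matter of deleting the 
directory on the mounted partition (the mount point below is only an 
example):

# rm -rf /mnt/ntfs/.Trash-root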


-- 

Ljubomir Ljubojevic
(Love is in the Air)
PL Computers
Serbia, Europe

Google is the Mother, Google is the Father, and traceroute is your
trusty Spiderman...
StarOS, Mikrotik and CentOS/RHEL/Linux consultant
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] How to remove a Trash folder from a mounted ntfs partition

2011-10-23 Thread John R Pierce
On 10/23/11 2:24 AM, Ljubomir Ljubojevic wrote:
> My observation is that .Trash is for normal users and .Trash-root is
> created when you delete as root. I sometimes use Krusader (under Gnome) in
> root mode, and that could account for .Trash-root in my case. Maybe he did
> something similar.

when I use the shell, I don't get any of that junk... thankfully.



-- 
john r pierce                            N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ssd quandry

2011-10-23 Thread Patrick Lists
On 10/23/2011 09:48 AM, John R Pierce wrote:
> On 10/23/11 12:23 AM, Ken godee wrote:
>> Maybe try to partition it to see what happens.
>
> with parted at least, I'm stuck with a vicious circle that won't let me
> align the data right?

Didn't parted have issues with alignment? Here are two links with info 
about alignment of SSDs which I found helpful in the past:

http://www.ocztechnologyforum.com/forum/showthread.php?54379-Linux-Tips-tweaks-and-alignment&p=373226&viewfull=1#post373226

http://www.linux-mag.com/id/8397/

Hope this helps.

Regards,
Patrick
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - named with internal zone file and forwarding

2011-10-23 Thread Giles Coochey

On 21/10/2011 16:27, Les Mikesell wrote:
> On Fri, Oct 21, 2011 at 4:12 AM, Giles Coochey wrote:
>> I have two Centos 6 servers running BIND.
>>
>> I have configured the two servers to run internal zones as a master /
>> slave setup.
>>
>> My gateway runs DNSmasq and I would like all other requests for lookups to
>> be sent to the DNSmasq system.
>>
>> I have added the following:
>>
>> forward first;
>>   forwarders {
>>     172.16.0.1;
>>   };
>>
>> Where 172.16.0.1 is the host running DNSmasq.
>>
>> For some reason I still cannot resolve anything outside my network.
>>
>> Any pointers?
>
> Your servers running bind should be able to resolve outside names
> with or without a forwarder if they have internet access.  Are you
> sure /etc/resolv.conf is pointing to the right nameservers?  Try
> using 'dig @resolver_ip some_name' to test both the dnsmasq system
> and your bind servers individually.
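
Spelled out, that per-server test is just something like this (the hostname 
is only an example), run against the dnsmasq box and then against each BIND 
server:

# dig @172.16.0.1 www.centos.org +short
# dig @127.0.0.1 www.centos.org +short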



ugg... it turns out I had a rather embarrassing typo in named.conf... it all 
works now.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - named with internal zone file and forwarding

2011-10-23 Thread Ljubomir Ljubojevic
On 10/23/2011 01:18 PM, Giles Coochey wrote:
> On 21/10/2011 16:27, Les Mikesell wrote:
>> Your servers running bind should be able to resolve outside names
>> with or without a forwarder if they have internet access. Are you
>> sure /etc/resolv.conf is pointing to the right nameservers? Try
>> using 'dig @resolver_ip some_name' to test both the dnsmasq system
>> and your bind servers individually.
>>
>
> ugg... it turns out I had a rather embarrassing typo in named.conf... it
> all works now.

It is nothing to be ashamed of. We have all made similar errors, and most 
likely will make them again in the future.

There is a human ability to correct words as we read, so we overlook 
"obvious" errors:

Acrcndiog to resecarh at Cimdrgabe Unierivsty, it deson't meattr what 
oderr the lertets in a word are, the only ionramptt thnig is that the 
frist and last lttrees are at the right palce. The rset can be a toatl 
mess and you can sitll raed it wutihot a pboelrm. This is becsuae we do 
not raed every letetr by itself but the wrod as a whloe.

-- 

Ljubomir Ljubojevic
(Love is in the Air)
PL Computers
Serbia, Europe

Google is the Mother, Google is the Father, and traceroute is your
trusty Spiderman...
StarOS, Mikrotik and CentOS/RHEL/Linux consultant
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread Scott McKenzie

Hello,

I'm researching the best method of giving about 20 users in a production 
environment the same functionality they would have on a NetApp NFS share. 
The OS will be CentOS 5 or 6 (maximum flexibility on which one), and the 
hardware is a brand-new, unconfigured disk array (12 SAS disks, 7 TB) 
directly attached to an HP 580 G7.

I've done some reading on ZFS on Linux, zfs-fuse, btrfs, rsnapshot, and 
snapFS.

Anyone have some advice or experiences to share?
  
Thanks,
spuds 
  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread Fajar Priyanto
On Sun, Oct 23, 2011 at 9:56 PM, Scott McKenzie  wrote:
>
> Hello,
>
> I'm researching the best method of providing about 20 users in a production 
> environment the same functionality as they would have on a Netapp NFS share.
> The O/S I will be using is CentOS 5 or 6 (max flex on which one) and the 
> hardware is a disk array directly (12 SAS disks 7TB un-configured brand new) 
> attached to a HP 580 G 7.
>
> I've done some reading on ZFS on Linux ,fuse-ZFS, BRTFS ,rsnapshot, snapFS.
>
> Any one have some advice or experiences to share?

IMHO, currently nothing can beat ZFS's feature set. If you look at the 
comparison on Wikipedia, only ZFS has "YES" in all the columns.
I've tried zfs-fuse; it's not bad, and the snapshots work great. However, 
performance is rather heavy.
ZFS on Linux is worth exploring.
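
The snapshot workflow is the appealing part -- roughly like this (pool and 
dataset names are only examples):

# zfs snapshot tank/home@before-upgrade
# zfs list -t snapshot
# zfs rollback tank/home@before-upgrade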
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - named with internal zone file and forwarding

2011-10-23 Thread Peter Eckel
Hi Giles,

On 23.10.2011 at 13:18, Giles Coochey wrote:

> ugg... it turns out I had a rather embarrassing typo in named.conf... it 
> all works now.

That tends to happen pretty easily, I know. When I make changes to the BIND 
configs, I've made a habit of running named-checkconf/named-checkzone every 
time and of checking the log after restarting the daemon.

It's still no sure-fire remedy, but it helps a lot.
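
Something along these lines (the zone name and file path are only examples):

# named-checkconf /etc/named.conf
# named-checkzone example.com /var/named/example.com.zone
# service named restart && tail /var/log/messages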

Cheers,

  Peter.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 4 Dovecot Problem

2011-10-23 Thread John Hinton
For those of you who are still running CentOS 4 (I have one system that is 
still going), there is a problem with the newest release of Dovecot under 
mbox: certain spam messages trigger this error when users try to log on.

file lib.c: line 37 (nearest_power): assertion failed: (num <= 
((size_t)1 << (BITS_IN_SIZE_T-1)))

Rolling back to a previous release fixes the issue. I'm not bothering to 
file a bug with Red Hat, as EOL is rapidly approaching and I just about 
have my one system's users moved to a new server.
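
The rollback itself is just an rpm downgrade; the version string below is a 
placeholder for whichever build was previously installed:

# rpm -Uvh --oldpackage dovecot-<previous-version>.rpm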

I have not yet seen this problem on CentOS 5 mbox systems, but I don't have 
many users on those either, as I'm 'slowly' migrating everyone to CentOS 6 
Maildir systems.

-- 
John Hinton
877-777-1407 ext 502
http://www.ew3d.com
Comprehensive Online Solutions

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread Ray Van Dolson
On Sun, Oct 23, 2011 at 09:56:52AM -0400, Scott McKenzie wrote:
> 
> Hello,
> 
> I'm researching the best method of providing about 20 users in a
> production environment the same functionality as they would have on a
> Netapp NFS share.  The O/S I will be using is CentOS 5 or 6 (max flex
> on which one) and the hardware is a disk array directly (12 SAS disks
> 7TB un-configured brand new) attached to a HP 580 G 7.
> 
> I've done some reading on ZFS on Linux ,fuse-ZFS, BRTFS ,rsnapshot,
> snapFS. 
> 
> Any one have some advice or experiences to share?
>   
> Thanks,
> spuds 

ZFS will be the best, but FUSE ZFS is going to be slower and native ZFS
on Linux is still pretty young.

If you're tied to Linux and your users need absolute stability, I'd go
with tried and true LVM.  If they can be a little more tolerant to
churn / downtime / adventure, the other options you mentioned could
become doable.
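
With LVM, a snapshot would look something like this (the VG/LV names and 
size are only examples):

# lvcreate --snapshot --size 10G --name home_snap /dev/vg_data/lv_home
# mount -o ro /dev/vg_data/home_snap /mnt/snap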

If you're _not_ tied to Linux, take a look at Nexenta Community Edition
or Illumos / Solaris Express.

I'm not familiar with your array, but if it does hardware-based snapshots, 
that might be an option as well.

Ray
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread Lorenzo Martínez Rodríguez

Hello, is it mandatory for you to use Linux?
I am using FreeNAS to share ZFS volumes via NFS and CIFS, and it works 
really great!
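
On the ZFS side, the NFS export can be as simple as setting a dataset 
property (names are only examples; FreeNAS normally drives this from its 
web GUI):

# zfs set sharenfs=on tank/shares
# zfs get sharenfs tank/shares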
Regards,

On 23/10/11 20:33, Ray Van Dolson wrote:
> On Sun, Oct 23, 2011 at 09:56:52AM -0400, Scott McKenzie wrote:
>> [...]
> ZFS will be the best, but FUSE ZFS is going to be slower and native ZFS
> on Linux is still pretty young.
> [...]


-- 


Lorenzo Martinez Rodriguez

Visit me:   http://www.lorenzomartinez.es
Mail me to: lore...@lorenzomartinez.es
My blog: http://www.securitybydefault.com
My twitter: @lawwait
PGP Fingerprint: 97CC 2584 7A04 B2BA 00F1 76C9 0D76 83A2 9BBC BDE2

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] ssd quandry

2011-10-23 Thread John R Pierce
On 10/23/11 4:00 AM, Patrick Lists wrote:
> Didn't parted have issues with alignment? Here are two links with info
> about alignment of SSDs which I found helpful in the past:

parted handles alignment as well as or better than fdisk, which that blog 
suggested using.

Anyway, I have it formatted, mounted, and aligned now.

This SSD RAID is telling the OS it has 8K physical sectors (512-byte 
logical). mkfs for ext4 or xfs will create a file system with 8K logical 
blocks, but the kernel won't let me mount it because that's larger than 
the system's 4K page size, so I have to force mkfs to build a 4K-block 
file system.

My database (Postgres) uses 8K blocks.  The storage has 8K physical 
blocks.  It seems to me that having the file system block size match the 
database and physical blocks would be a Very Good Thing...


... so, what's the status of large page support in Linux, and specifically 
in CentOS 6?
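
As far as I can tell, CentOS 6 does ship hugepage support (hugetlbfs and 
transparent huge pages), but that only covers memory mappings; the page 
cache still works in 4K pages, which is why ext4 tops out at a 4K block 
size here. Easy enough to check (the second path is the RHEL 6 kernel's 
name for the THP toggle):

# grep -i huge /proc/meminfo
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled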



-- 
john r pierce                            N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread aurf alien
Look into XFS+LVM.

For XFS, you'll have to yum it via:

yum install kmod-xfs xfs-progs xfs-dump

This is in the centosplus repo.

- aurf

On Sun, Oct 23, 2011 at 6:56 AM, Scott McKenzie  wrote:

> [...]
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Netapp like snapshots using Centos 5/6 direct attached storage

2011-10-23 Thread John R Pierce
On 10/23/11 7:52 PM, aurf alien wrote:
> Look into XFS+LVM.
>
> For XFS, you'll have to yum it via;
>
> yum install kmod-xfs xfs-progs xfs-dump
>
> This is in the centosplus repo.

XFS is native in C6; you just need xfsprogs from Base.
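
i.e., something like this (the device name is only an example):

# yum install xfsprogs
# mkfs.xfs /dev/sdX1
# mount -t xfs /dev/sdX1 /mnt/data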



-- 
john r pierce                            N 37, W 122
santa cruz ca mid-left coast

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos