We have a server with a couple of X-25Es and a bunch of larger SATA
disks.
To save space, we want to install Solaris 10 (our install is only about
1.4GB) on the X-25Es and use the remaining space on the SSDs for a ZIL
attached to a zpool created from the SATA drives.
Currently we do this by install
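A minimal sketch of that layout, assuming the OS goes in slice 0 of each X-25E
and a hypothetical slice s3 holds the leftover space (all device names below
are placeholders):
# zpool create data raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
# zpool add data log mirror c1t0d0s3 c1t1d0s3
Mirroring the log slices across both SSDs is optional, but it avoids losing
the ZIL if a single SSD dies.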
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using ARC?
Or is there some sort of memory pressure point where the DDT gets
moved from ARC to L2ARC?
I created a zpool called 'data' from 7 disks.
I created zfs filesystems on the zpool for each Xen VM.
I can choose to recursively snapshot all of 'data'.
I can choose to snapshot the individual 'directories'.
If you use mkdir, I don't believe you can snapshot/restore at that level.
Malachi de Ælfweal
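To make the two snapshot options above concrete, a quick sketch (the
filesystem name data/vm01 is a placeholder):
# zfs snapshot -r data@nightly
# zfs snapshot data/vm01@nightly
The first takes a snapshot of every filesystem under 'data' at once; the
second covers just one VM. A plain mkdir inside a filesystem is only a
directory, so it has no snapshots of its own; you would snapshot or roll back
the enclosing filesystem instead.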
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an L2ARC device
if one exists in the pool, instead of using ARC?
Or is there some sort of memory pressure point where the DDT gets moved from
ARC to L2ARC?
Thanks,
Geoff
Go
> Actually, I think the rule-of-thumb is 270 bytes/DDT entry. It's 200
> bytes of ARC for every L2ARC entry.
>
> DDT doesn't count for this ARC space usage.
>
> E.g.: I have 1TB of 4k files that are to be deduped, and it turns
> out that I have about a 5:1 dedup ratio. I'd also lik
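Working that example through with the rules of thumb just quoted, and treating
every unique block as one DDT entry (the exact per-entry sizes vary by release,
so this is only a rough estimate):
1TB of 4k blocks, 5:1 dedup ratio  ->  roughly 268 million blocks written,
                                       about 54 million of them unique (= DDT entries)
DDT held in ARC:    54M x 270 bytes  ~  14.5 GB of RAM
DDT held in L2ARC:  54M x 200 bytes  ~  10.7 GB of RAM just for the L2ARC
                                        headers, plus the table itself on the
                                        cache device
Either way, dedup at this block size wants a lot of memory.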
Folks,
While going through a quick tutorial on zfs, I came across a way to create a zfs
filesystem within a filesystem. For example:
# zfs create mytest/peter
where mytest is a zpool filesystem.
When done this way, the new filesystem gets /mytest/peter as its mount point.
When does it make sense
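A hedged sketch of how that plays out: child filesystems inherit their
mountpoint from the parent unless you override it, so
# zfs create mytest/peter
# zfs get -r mountpoint mytest
should show mytest mounted at /mytest and mytest/peter at /mytest/peter, while
# zfs create -o mountpoint=/export/peter mytest/peter
would put the same filesystem somewhere else entirely (the /export/peter path
is just an example).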
doh! It turns out the host in question is actually a Solaris 10 update 6 host.
It appears that a Solaris 10 update 8 host actually sets the start sector at
256.
So, to simplify the question: if I'm using ZFS with an EFI label and the full disk,
do I even need to worry about LUN alignment? I was a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> I am learning more about zfs storage. It appears a zfs pool can be
> created on a raw disk. There is no need to create any partitions, etc.,
> on the disk. Does this mean there is
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
>
> It looks like I have some leftovers of old clones that I cannot delete:
>
> Clone name is tank/WinSrv/Latest
>
> I'm trying:
>
> zfs destroy -f -R tank/WinSrv/Latest
> cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Benjamin Grogg
>
> When I scrub my pool I get a lot of checksum errors:
>
> NAME        STATE     READ WRITE CKSUM
> rpool       DEGRADED     0     0     5
>   c8d0s0    DEGRA
Awesome. Thank you, Cindy.
Regards,
Peter
Folks,
My env is Solaris 10 update 8 amd64. Does LUN alignment matter when I'm
creating zpools on disks (LUNs) with EFI labels and giving zpool the
entire disk?
I recently read some Sun/Oracle docs and blog posts about adjusting the
starting sector for partition 0 (in format -e) to a
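For what it's worth, a quick way to check what a given release does (pool and
disk names here are placeholders):
# zpool create tank c8t1d0
# prtvtoc /dev/rdsk/c8t1d0
Giving zpool the whole disk makes ZFS write the EFI label itself; prtvtoc then
shows the first sector of slice 0, which should be 256 on newer updates (34 was
common on earlier ones).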
Victor,
A little more info on the crash, from the messages file is attached here. I
have also decompressed the dump with savecore to generate unix.0, vmcore.0, and
vmdump.0.
Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice]
Jun 30 19:39:10 HL-SAN ^Mpanic[cpu3]/thread=ff0017909c60:
Jun
Even easier, use the zpool create command to create a pool
on c8t1d0, using the whole disk. Try this:
# zpool create MyData c8t1d0
cs
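To double-check the result afterwards (both commands are read-only):
# zpool status MyData
# zfs list -r MyData
The first shows the pool layout and health, the second the filesystem that was
created along with it, mounted at /MyData by default.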
On 07/01/10 16:01, Peter Taps wrote:
Folks,
I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk.
There is no need to cre
Folks,
I am learning more about zfs storage. It appears a zfs pool can be created on a
raw disk. There is no need to create any partitions, etc., on the disk. Does
this mean there is no need to run "format" on a raw disk?
I have added a new disk to my system. It shows up as /dev/rdsk/c8t1d0s0. Do
On 7/1/2010 12:23 PM, Lo Zio wrote:
Thanks Roy, I read a lot around and was also thinking it was a dedup-related problem,
although I did not find any indication of how much RAM is enough, and never found
anything saying "Do not use dedup, it will definitely crash your server". I'm
using a Dell
- Original Message -
> Thanks Roy, I read a lot around and was also thinking it was a
> dedup-related problem, although I did not find any indication of how
> much RAM is enough, and never found anything saying "Do not use dedup,
> it will definitely crash your server". I'm using a Dell Xeo
Thanks Roy, I read a lot around and was also thinking it was a dedup-related
problem, although I did not find any indication of how much RAM is enough, and
never found anything saying "Do not use dedup, it will definitely crash your
server". I'm using a Dell Xeon with 4 GB of RAM, maybe it is no
Hello,
this may not apply to your machine. My setup differs from yours in two ways:
* OpenSolaris instead of Nexenta
* DL585 G1 instead of your DL380 G4
Here's my problem: reproducible crash after a certain time (1:30h in my case).
Explanation: the HP machine has enterprise features (ECC RAM) and perfor
- Original Message -
> > As the 15k drives are faster seek-wise (and possibly faster for
> > linear I/O), you may want to separate them into different VDEVs or
> > even pools, but then, it's quite impossible to give a "correct"
> > answer unless knowing what it's going to be used for.
>
>
> As the 15k drives are faster seek-wise (and possibly faster for linear I/O),
> you may want to separate them into different VDEVs or even pools, but then,
> it's quite impossible to give a "correct" answer unless knowing what it's
> going to be used for.
>
> Mostly database duty.
>
> > Also, using 10
- Original Message -
> Another question...
> We're building a ZFS NAS/SAN out of the following JBODs we already
> own:
>
>
> 2x 15x 1000GB SATA
> 3x 15x 750GB SATA
> 2x 12x 600GB SAS 15K
> 4x 15x 300GB SAS 15K
>
>
> That's a lot of spindles we'd like to benefit from, but our assumption
Sorry for the formatting, that's
2x 15x 1000GB SATA
3x 15x 750GB SATA
2x 12x 600GB SAS 15K
4x 15x 300GB SAS 15K
Another question... We're building a ZFS NAS/SAN out of the following JBODs we
already own:
2x 15x 1000GB SATA
3x 15x 750GB SATA
2x 12x 600GB SAS 15K
4x 15x 300GB SAS 15K
That's a lot of spindles we'd like to benefit from, but our assumption is that
we should split these into two separate pools, on
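One hedged way to carve that up, keeping the drive types in separate pools so
the SATA spindles never drag down the 15K SAS vdevs (controller and target
names below are placeholders):
# zpool create fast raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zpool add fast raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
# zpool create bulk raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
That is, several narrow raidz2 vdevs per pool rather than one wide vdev per pool.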
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get
about 80MB/s in sequential read or write. We're running local tests on the
server itself (no network involved). Is that what we should be expecting? It
seems slow to me.
Please read the ZFS best practices
> On a slightly different but related topic, anyone have advice on how
> to connect up my drives? I've got room for 20 pool drives in the case.
> I'll have two AOC-USAS-L8i cards along with cables to connect 16 SATA2
> drives. The motherboard has 6 SATA2 connectors plus 2 SATA3
> connectors. I was
- Original Message -
> I also have this problem: with build 134, if I delete big snapshots the
> server hangs, only responding to ping.
> I also have the ZVOL issue.
> Any news about having them solved?
> In my case this is a big problem since I'm using osol as a file
> server...
Are you using ded
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get
about 80MB/s in sequential read or write. We're running local tests on the
server itself (no network involved). Is that what we should be expecting? It
seems slow to me.
Please read the ZFS best practices guide
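To make the guide's advice concrete, a sketch of the same 28 drives arranged
as four 7-disk raidz2 vdevs instead of one 28-wide vdev (disk names are
placeholders):
# zpool create tank \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
    raidz2 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 \
    raidz2 c4t7d0 c4t8d0 c4t9d0 c4t10d0 c4t11d0 c4t12d0 c4t13d0
Each raidz vdev delivers roughly the random IOPS of a single disk, so four
vdevs should be noticeably faster than one, at the cost of six more disks'
worth of parity.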
> > The best would be to export the drives in JBOD style, one "array" per
> > drive. If you rely on the Promise RAID, you won't be able to
> > recover from "silent" errors. I'm in the process of moving from a
> > NexSAN RAID to a JBOD-like style just because of that (we had data
> > corruption on t
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we
get about 80MB/s in sequential read or write. We're running local tests on the
server itself (no network involved). Is that what we should be expecting? It
seems slow to me.
Thanks
- Original Message -
> I'm new with ZFS, but I have had good success using it with raw
> physical disks. One of my systems has access to an iSCSI storage
> target. The underlying physical array is in a proprietary disk storage
> device from Promise. So the question is, when building an OpenS
It looks like I have some leftovers of old clones that I cannot delete:
Clone name is tank/WinSrv/Latest
I'm trying:
zfs destroy -f -R tank/WinSrv/Latest
cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed
Please help me to get rid of this garbage.
Thanks a lot.
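A hedged guess at untangling that, since the complaint is about unsharing a
path that no longer exists: clear the share properties on the clone (and check
whether anything else was cloned from it) before destroying it again. Only the
dataset names from the post are real here; the rest is a sketch:
# zfs get -r origin,sharenfs,sharesmb tank/WinSrv
# zfs set sharenfs=off tank/WinSrv/Latest
# zfs set sharesmb=off tank/WinSrv/Latest
# zfs destroy -R tank/WinSrv/Latest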
Hi Benjamin,
I'm not familiar with this disk, but you can see from the fmstat output that the
disk, system-event, and zfs-related diagnosis engines are working overtime on
something, and it's probably this disk.
You can get further details from fmdump -eV, and you will probably
see lots of checksum errors on this di
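A short sketch of the commands meant here, all read-only (rpool as in the post):
# fmstat
# fmdump -eV | more
# zpool status -v rpool
fmstat shows which diagnosis engines are busy, fmdump -eV dumps the raw error
telemetry (the zfs checksum ereports in this case), and zpool status -v lists
any files already affected.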
I'm new with ZFS, but I have had good success using it with raw physical disks.
One of my systems has access to an iSCSI storage target. The underlying
physical array is in a proprietary disk storage device from Promise. So the
question is, when building an OpenSolaris host to store its data on a
Joachim Worringen wrote:
> Greetings,
>
> we are running a few databases of currently 200GB
> (growing) in total for data warehousing:
> - new data via INSERTs for (up to) millions of rows
> per day; sometimes with UPDATEs
> - most data in a single table (=> 10 to 100s of
> millions of rows)
> - q
Dear Forum
I use a KINGSTON SNV125-S2/30GB SSD on an ASUS M3A78-CM Motherboard (AMD SB700
Chipset).
SATA Type (in BIOS) is SATA
Os : SunOS homesvr 5.11 snv_134 i86pc i386 i86pc
When I scrub my pool I get a lot of checksum errors:
NAME        STATE     READ WRITE CKSUM
rpool       DEGRA
On Jul 1, 2010, at 10:39, Pasi Kärkkäinen wrote:
basically 5-30 seconds after the login prompt shows up on the console
the server will reboot due to a kernel crash.
the error seems to be about the broadcom nic driver..
Is this a known bug?
Please contact Nexenta via their support infrastructure (web
> From: Asif Iqbal [mailto:vad...@gmail.com]
>
> currently to speed up the zfs send | zfs recv I am using mbuffer. It
> moves the data a lot faster than using netcat (or ssh) as the transport
> method
Yup, this works because network and disk latency can both be variable. So
without buffering, your
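For reference, the usual mbuffer pattern looks roughly like this; the buffer
size, port, hostnames, and dataset names are placeholders, and the receiver is
started first:
receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
sender# zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090
The large memory buffer on each end smooths out the bursts, so neither the
disks nor the network sit idle waiting for the other.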
On Tue, Jun 15, 2010 at 10:57:53PM +0530, Anil Gulecha wrote:
> Hi All,
>
> On behalf of NexentaStor team, I'm happy to announce the release of
> NexentaStor Community Edition 3.0.3. This release is the result of the
> community efforts of Nexenta Partners and users.
>
> Changes over 3.0.2 includ
Greetings,
we are running a few databases of currently 200GB (growing) in total for data
warehousing:
- new data via INSERTs for (up to) millions of rows per day; sometimes with
UPDATEs
- most data in a single table (=> 10 to 100s of millions of rows)
- queries SELECT subsets of this table via a
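One knob that usually matters for this kind of workload, sketched with a
hypothetical dataset name and an 8k database block size: set recordsize to
match the database block size before loading the data, and give the tables
their own filesystem.
# zfs create tank/db
# zfs set recordsize=8k tank/db
# zfs set atime=off tank/db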
I also have this problem: with build 134, if I delete big snapshots the server
hangs, only responding to ping.
I also have the ZVOL issue.
Any news about having them solved?
In my case this is a big problem since I'm using osol as a file server...
Thanks
Hello list,
I wanted to test deduplication a little and did an experiment.
My question was: can I dedupe infinitely, or is there an upper limit?
So for that I did a very basic test:
- I created a ramdisk pool (1GB)
- enabled dedup and
- wrote zeros to it (in one single file) until an error is r
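The experiment can be reproduced with something along these lines (ramdisk and
pool names are placeholders):
# ramdiskadm -a rd1 1g
# zpool create rdpool /dev/ramdisk/rd1
# zfs set dedup=on rdpool
# dd if=/dev/zero of=/rdpool/zeros bs=128k
Since every 128k block of zeros dedupes to the same on-disk block, the question
becomes how far the writes get before something other than pool space runs out.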
> I plan on removing the second USAS-L8i and connect
> all 16 drives to the
> first USAS-L8i when I need more storage capacity. I
> have no doubt that
> it will work as intended. I will report to the list
> otherwise.
I'm a little late to the party here. First, I'd like to thank those pioneers