On Fri, Feb 12, 2010 at 1:08 PM, Daniel Carosone wrote:
> With dedup and bp-rewrite, a new operation could be created that takes
> the shared data and makes it uniquely-referenced but deduplicated data.
> This could be a lot more efficient and less disruptive because of the
> advanced knowledge t
Hello,
# /usr/sbin/zfs list -r rgd3
NAME            USED  AVAIL  REFER  MOUNTPOINT
rgd3           16.5G  23.4G    20K  /rgd3
rgd3/fs1         19K  23.4G    21K  /app/fs1
rgd3/fs1-patch
On Fri, Feb 12, 2010 at 02:25:51PM -0800, TMB wrote:
> I have a similar question, I put together a cheapo RAID with four 1TB WD
> Black (7200) SATAs, in a 3TB RAIDZ1, and I added a 64GB OCZ Vertex SSD, with
> slice 0 (5GB) for ZIL and the rest of the SSD for cache:
> # zpool status dpool
> pool: dpool
G'Day,
On Sat, Feb 13, 2010 at 09:02:58AM +1100, Daniel Carosone wrote:
> On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote:
> > Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation):
> > size (GB)          300
> > size (sectors)     585937500
I have a similar question, I put together a cheapo RAID with four 1TB WD Black
(7200) SATAs, in a 3TB RAIDZ1, and I added a 64GB OCZ Vertex SSD, with slice 0
(5GB) for ZIL and the rest of the SSD for cache:
# zpool status dpool
pool: dpool
state: ONLINE
scrub: none requested
config:
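For reference, a pool laid out like that is typically assembled along these lines; the device names below are made up, with slice s0 standing in for the small log slice and s1 for the cache slice:
# zpool create dpool raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool add dpool log c1t0d0s0
# zpool add dpool cache c1t0d0s1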
On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote:
> Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation):
> size (GB)            300
> size (sectors)       585937500
> labels (sectors)     9232
> available
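A quick sanity check on the quoted figures, assuming 512-byte sectors: 300 GB = 300,000,000,000 bytes, and 300,000,000,000 / 512 = 585,937,500, which matches the sector count above.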
On Fri, Feb 12, 2010 at 09:50:32AM -0500, Mark J Musante wrote:
> The other option is to zfs send the snapshot to create a copy
> instead of a clone.
One day, in the future, I hope there might be a third option, somewhat
as an optimisation.
With dedup and bp-rewrite, a new operation could be created that takes
the shared data and makes it uniquely-referenced but deduplicated data.
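Until something like that exists, the closest approximation with today's tools is the second option above: send the snapshot into a dedup-enabled destination, so the copy is logically independent but still shares blocks on disk (pool and dataset names here are hypothetical):
# zfs set dedup=on tank
# zfs snapshot tank/fs@split
# zfs send tank/fs@split | zfs recv tank/fs-copy
The downside is that all the data still gets read and rewritten, which is exactly the work the bp-rewrite approach would avoid.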
On Fri, Feb 12, 2010 at 12:11 PM, Al Hopper wrote:
> There's your first mistake. You're probably eligible for a very nice
> Federal Systems discount. My *guess* would be about 40%.
Promise JBOD and similar systems are often the only affordable choice
for those of us who can't get sweetheart discounts
I don't think adding an SSD mirror to an existing pool will do much for
performance. Some of your data will surely go to those SSDs, but I don't think
Solaris will know they are SSDs and move blocks in and out according to
usage patterns to give you an all-around boost. They will just be used
On Tue, Feb 9, 2010 at 12:55 PM, matthew patton wrote:
[snip]
> Enter the J4500, 48 drives in 4U, what looks to be solid engineering, and
> redundancy in all the right places. An empty chassis at $3000 is totally
> justifiable. Maybe as high as $4000. In comparison a naked Dell MD1000
On Feb 12, 2010, at 9:36 AM, Felix Buenemann wrote:
> Am 12.02.10 18:17, schrieb Richard Elling:
>> On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote:
>>
>>> Hi Mickaël,
>>>
>>> Am 12.02.10 13:49, schrieb Mickaël Maillot:
>>>> Intel X-25 M are MLC, not SLC; they are very good for L2ARC.
>>>
>>
On 02/12/10 09:36, Felix Buenemann wrote:
> given I've got ~300GB L2ARC, I'd
> need about 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the
> L2ARC.
But that would only leave ~800MB free for everything else the server
needs to do.
- Bill
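As a rough cross-check of that figure: the commonly quoted ballpark is around 200 bytes of ARC header per L2ARC record, so with an 8 KB average record size, 300 GB / 8 KB is roughly 37 million records, and 37 million x 200 bytes is about 7.3 GB of RAM, in line with the ~7.2 GB above. Smaller average block sizes push the requirement up quickly.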
Am 12.02.10 18:17, schrieb Richard Elling:
> On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote:
>> Hi Mickaël,
>> Am 12.02.10 13:49, schrieb Mickaël Maillot:
>>> Intel X-25 M are MLC, not SLC; they are very good for L2ARC.
>> Yes, I'm only using those for L2ARC, I'm planning on getting two Mtron Pro 7500
On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote:
> Hi Mickaël,
>
> Am 12.02.10 13:49, schrieb Mickaël Maillot:
>> Intel X-25 M are MLC, not SLC; they are very good for L2ARC.
>
> Yes, I'm only using those for L2ARC, I'm planning on getting two Mtron Pro 7500
> 16GB SLC SSDs for ZIL.
>
>> and
Hi Mickaël,
Am 12.02.10 13:49, schrieb Mickaël Maillot:
> Intel X-25 M are MLC, not SLC; they are very good for L2ARC.
Yes, I'm only using those for L2ARC, I'm planning on getting two Mtron Pro
7500 16GB SLC SSDs for ZIL.
> and next, you need more RAM:
> ZFS can't handle 4x 80 GB of L2ARC with onl
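For what it's worth, once those SLC drives arrive, attaching them as a mirrored log device is a one-liner (device names hypothetical):
# zpool add tank log mirror c3t0d0 c3t1d0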
On Fri, 12 Feb 2010, Daniel Carosone wrote:
> You can use zfs promote to change around which dataset owns the base
> snapshot, and which is the dependent clone with a parent, so you can
> delete the other - but if you want both datasets you will need to keep
> the snapshot they share.
Right. The othe
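For anyone following along, the promote-and-delete sequence looks roughly like this, with hypothetical names, where tank/clone was originally cloned from tank/original@snap:
# zfs promote tank/clone
# zfs destroy tank/original
After the promote, the shared snapshot lives under tank/clone (as tank/clone@snap), so the old parent can be destroyed while the promoted dataset and the snapshot remain.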
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here. So sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool ov
Hi
Intel X-25 M are MLC, not SLC; they are very good for L2ARC.
And next, you need more RAM:
ZFS can't handle 4x 80 GB of L2ARC with only 4 GB of RAM, because ZFS
uses memory to allocate and manage the L2ARC.
2010/2/10 Felix Buenemann :
> Am 09.02.10 09:58, schrieb Felix Buenemann:
>>
>> Am 09.02.10 02
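A quick way to see that overhead on a live system is the ARC kstats; l2_hdr_size is the memory currently held by L2ARC headers:
# kstat -m zfs -n arcstats | grep l2_hdr_size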
Darren J Moffat wrote:
> On 12/02/2010 09:55, Andrew Gabriel wrote:
>> Can anyone suggest how I can get around the above error when
>> sending/receiving a ZFS filesystem? It seems to fail when about 2/3rds
>> of the data have been passed from send to recv. Is it possible to get
>> more diagnostics out?
> You
On 12/02/2010 09:55, Andrew Gabriel wrote:
> Can anyone suggest how I can get around the above error when
> sending/receiving a ZFS filesystem? It seems to fail when about 2/3rds
> of the data have been passed from send to recv. Is it possible to get
> more diagnostics out?
You could try using /usr/bin
Can anyone suggest how I can get around the above error when
sending/receiving a ZFS filesystem? It seems to fail when about 2/3rds
of the data have been passed from send to recv. Is it possible to get
more diagnostics out?
This filesystem has failed in this way for a long time, and I've ignor
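One low-tech way to narrow this kind of failure down is to capture the stream to a file first, so you can tell whether it's the send side or the receive side that trips up (paths and dataset names hypothetical):
# zfs send tank/fs@snap > /var/tmp/fs.stream
# zfs receive -v tank/fs-test < /var/tmp/fs.stream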
I did some more digging through forum posts and found this:
http://opensolaris.org/jive/thread.jspa?threadID=104654
I ran zdb -l again and saw that even though zpool references /dev/dsk/c4d0,
zdb shows no labels at that point... the labels are on /dev/dsk/c4d0s0.
However, the log label entries mat
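The comparison described above boils down to running zdb -l against both paths, the one the pool references (no labels found) and slice 0 (where the labels actually are):
# zdb -l /dev/dsk/c4d0
# zdb -l /dev/dsk/c4d0s0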