Tim Foster wrote:
> Hi Joe,
>
> On Thu, 2007-09-13 at 11:39 -0400, Poulos, Joe wrote:
>> Is there a way to automate destroying snapshots that are older
>> than x days?
>>
> Yeah, a few people have done stuff like this.
>
> Chris has this:
> http://blogs.sun.com/chrisg/entry/a
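For reference, the general shape of such a script (not Chris's actual one) is
roughly the sketch below. The pool name "tank" and the assumption that
"zfs get -p creation" reports seconds since the epoch are mine, so check both
before trusting it with real snapshots:

  #!/bin/sh
  # Destroy snapshots under the pool "tank" older than $DAYS days.
  DAYS=30
  cutoff=`perl -e "print time() - $DAYS * 86400"`
  zfs list -r tank -H -t snapshot -o name | while read snap; do
      created=`zfs get -Hp -o value creation "$snap"`
      if [ "$created" -lt "$cutoff" ]; then
          echo "destroying $snap"
          zfs destroy "$snap"
      fi
  done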
Tim Cook wrote:
> Won't come cheap, but this mobo comes with 6x pci-x slots... should get the
> job done :)
>
> http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
Yes, but where do you buy SuperMicro toys?
SuperMicro doesn't sell online; anything "neat" that I've found is
> Paul Kraus wrote:
> > In the ZFS case I could replace the disk and the zpool would
> > resilver automatically. I could also take the removed disk and put it
> > into the second system and have it recognize the zpool (and that it
> > was missing half of a mirror) and the data was all
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
> I have a huge problem with space maps on thumper. Space maps take
> over 3GB,
> and write operations generate massive read operations.
> Before every spa sync phase zfs reads space maps from disk.
>
> I decided to turn on compression for the pool (only
On Sat, 15 Sep 2007, Ian Collins wrote:
> Kent Watsen wrote:
>> Getting there - can anybody clue me into how much CPU/Mem ZFS needs?
>> I have an old 1.2Ghz with 1Gb of mem laying around - would it be sufficient?
>>
>>
>>
> It'll use as much memory as you can spare and it has a strong preference
>
Kent Watsen wrote:
> Getting there - can anybody clue me into how much CPU/Mem ZFS needs?
> I have an old 1.2Ghz with 1Gb of mem laying around - would it be sufficient?
>
>
>
It'll use as much memory as you can spare and it has a strong preference
for 64 bit systems. Considering how much yo
Go look at Intel - they have a pretty decent mb with 6 SATA ports
Tim Cook wrote:
> Won't come cheap, but this mobo comes with 6x pci-x slots... should get the
> job done :)
>
> http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
Please see the following link:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache
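In short, on releases that support the tunable (S10 8/07 and recent Nevada
builds, if I remember right) that page boils down to an /etc/system setting,
for example:

  * Cap the ZFS ARC at 1 GB (value is in bytes); takes effect after a reboot.
  * zfs_arc_max is a private tunable, so verify it against your release first.
  set zfs:zfs_arc_max = 0x40000000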
Hth,
Victor
Sergey wrote:
> I am running Solaris U4 x86_64.
>
> Seems that something has changed regarding mdb:
>
> # mdb -k
> Loading modules: [ unix krtld genunix specfs
I’d like to report the ZFS-related crash/bug described below. How do I go about
reporting the crash and what additional information is needed?
I’m using my own very simple test app that creates numerous directories and
files of randomly generated data. I have run the test app on two machines, bo
Hello Brian,
On Fri, Sep 14, 2007 at 11:45:27AM -0700, Brian King wrote:
> Is there a way to convert a 2 disk raid-z file system to a mirror without
> backing up the data and restoring?
>
Currently there isn't a way to do this without having some
additional buffer disk space available.
What you
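As a very rough sketch only (device names here are hypothetical: c2t0d0 as the
spare/buffer disk, c0t0d0 and c0t1d0 as the current raid-z disks, and a pool
named "tank"), the usual dance looks something like:

  zpool create temppool c2t0d0            # temporary single-disk pool on the buffer disk
  zfs snapshot -r tank@migrate
  zfs send tank@migrate | zfs receive temppool/data   # repeat for each filesystem
  zpool destroy tank                      # frees the two raid-z disks
  zpool attach temppool c2t0d0 c0t0d0     # start mirroring onto a freed disk
  zpool attach temppool c2t0d0 c0t1d0     # then the second one (let each resilver finish)
  zpool detach temppool c2t0d0            # finally drop the buffer disk

Verify the copied data before the destroy step, obviously.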
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the job
done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
Sorry, but looking again at the RMP page, I see that the chassis I
recommended is actually different from the one we have. I can't find
this chassis sold on its own online, but here's what we bought:
http://www.siliconmechanics.com/i10561/intel-storage-server.php?cat=625
From their picture gallery, you
I suspect it's probably not a good idea but I was wondering if someone
could clarify the details.
I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
cause problems if I created a raidz1 pool across all 5 drives?
I know the PATA drive is slower so would it slow the access across th
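For what it's worth, the pool you're describing would be created roughly like
this (device names are made up, with the PATA disk last):

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c0d0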
Is there a way to convert a 2 disk raid-z file system to a mirror without
backing up the data and restoring?
We have this:
bash-3.00# zpool status
  pool: archives
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        archives    ONLINE       0
On 9/14/07, Moore, Joe <[EMAIL PROTECTED]> wrote:
> I was trying to compose an email asking almost the exact same question,
> but in the context of array-based replication. They're similar in the
> sense that you're asking about using already-written space, rather than
> to go off into virgin sect
Mike Gerdts wrote:
> I'm curious as to how ZFS manages space (free and used) and how
> its usage interacts with thin provisioning provided by HDS
> arrays. Is there any effort to minimize the number of provisioned
> disk blocks that get writes so as to not negate any space
> benefits that thin pr
Mike Gerdts wrote:
> Short question:
Not so short really :-)
Answers to some questions inline. I think others will correct me if I'm
wrong.
> I'm curious as to how ZFS manages space (free and used) and how
> its usage interacts with thin provisioning provided by HDS
> arrays. Is there any effor
Just checking status on the resilver/scrub + snap reset issue-- it is very
painful for large pools such as exist on thumpers that make heavy use of
snaps. Is this still on track for u5/pre-u5 or has it changed? Is there a
different view of these bugs with more information so I do not need to
pes
comments from a RAS guy below...
Adam Lindsay wrote:
> Kent Watsen wrote:
> What are you *most* interested in for this server? Reliability?
> Capacity? High Performance? Reading or writing? Large contiguous reads
> or small seeks?
> One thing that I did that got good feedback from this list was
> pi
I have a huge problem with space maps on thumper. Space maps take over 3GB,
and write operations generate massive read operations.
Before every spa sync phase zfs reads space maps from disk.
I decided to turn on compression for the pool (only for the pool, not
filesystems) and it helps.
Now space map
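In command terms that would presumably be something like the following
(the pool and filesystem names are only examples):

  zfs set compression=on thumper          # the pool-level dataset only
  zfs set compression=off thumper/export  # keep it off on each existing filesystem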
Kent Watsen wrote:
>> What are you *most* interested in for this server? Reliability?
>> Capacity? High Performance? Reading or writing? Large contiguous reads
>> or small seeks?
>>
>> One thing that I did that got good feedback from this list was
>> picking apart the requirements of the most
Short question:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes so as to not negate any space
benefits that thin provisioning ma
> Fun exercise! :)
>
Indeed! - though my wife and kids don't seem to appreciate it so much ;)
>> I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for
>> the OS & 4*(4+2) RAIDZ2 for SAN]
>
> What are you *most* interested in for this server? Reliability?
> Capacity? High Perform
On Fri, 14 Sep 2007, Sergey wrote:
> I am running Solaris U4 x86_64.
>
> Seems that something has changed regarding mdb:
>
> # mdb -k
> Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
> pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
> crypto
Kent Watsen wrote:
> I'm putting together an OpenSolaris ZFS-based system and need help
> picking hardware.
Fun exercise! :)
> I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
> OS & 4*(4+2) RAIDZ2 for SAN]
What are you *most* interested in for this server? Reliability?
>
> I will only comment on the chassis, as this is made by AIC (short for
> American Industrial Computer), and I have three of these in service at
> my work. These chassis are quite well made, but I have experienced
> the following two problems:
>
>
Oh my, thanks for the heads-up! Charlie at
I am running Solaris U4 x86_64.
Seems that something has changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
crypto ptm ]
> arc::print -a c_max
mdb: failed to derefe
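If the arc structure can't be dereferenced with that macro on U4, the same
numbers should also be visible as kstats (assuming the zfs arcstats kstat is
present on this release):

  kstat -m zfs -n arcstats -s c_max
  kstat -m zfs -n arcstats -s size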
Back in April, I pinged this list[1] for help in specifying a ZFS server
that would handle high-capacity reads and writes. That server was
finally built and delivered, and I've blogged the results[2] as part of
a larger series[3] about that server.
[1] http://www.opensolaris.org/jive/thread.jsp