Hi,
How difficult would it be to write some code to change the GUID of a pool?
Thanks
Peter
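For what it's worth: there is no supported command for this in current bits, and the GUID is replicated in every vdev label on every device, so any code would have to rewrite all labels consistently. You can at least see where it lives (a sketch; the device path is hypothetical):

  zdb -l /dev/dsk/c1t0d0s0 | grep -i guid   # dumps pool_guid/guid from the vdev labels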
On Tue, 1 Jul 2008, Brian McBride wrote:
>
> Customer:
> I would like to know more about zfs's checksum feature. I'm guessing
> it is something that is applied to the data and not the disks (as in
> raid-5).
Data and metadata.
> For performance reasons, I turned off checksum on our zfs filesyste
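For context, a minimal sketch of how the checksum knob works per dataset (pool/dataset names hypothetical). Even with checksum=off, ZFS still checksums all of its own metadata:

  zfs get checksum tank/data        # current setting (on by default, fletcher2)
  zfs set checksum=off tank/data    # affects newly written user data only
  zpool status -v tank              # CKSUM column counts checksum errors found on read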
I have some questions from a customer about zfs checksums.
Could anyone answer some of these? Thanks.
Brian
Customer:
I would like to know more about zfs's checksum feature. I'm guessing
it is something that is applied to the data and not the disks (as in
raid-5).
For performance reasons, I
Marc Bevand gmail.com> writes:
>
> I have recently had to replace this AOC-SAT2-MV8 controller with another one
> (we accidentally broke a SATA connector during a maintenance operation). Its
> firmware version is using a totally different numbering scheme (it's probably
> more recent) and it
On Wed, 2008-07-02 at 02:22 +0200, Justin Vassallo wrote:
> When set up with multi-pathing to dual redundant controllers, is
> layering zfs on top of the 6140 of any benefit? AFAIK this array does
> have internal redundant paths up to the disk connection.
>
>
>
> justin
>
Multipathing and r
On Tue, 1 Jul 2008, Miles Nordin wrote:
>
> But, just read the assumptions. They're not really assumptions.
> They're just definitions of what is RAM, and what is a time-sharing
> system. They're givens.
In today's systems with two or three levels of cache in front of
"RAM", variable page sizes
When set up with multi-pathing to dual redundant controllers, is layering
zfs on top of the 6140 of any benefit? AFAIK this array does have internal
redundant paths up to the disk connection.
justin
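One concrete benefit even with fully redundant controllers and paths: a ZFS mirror across two 6140 LUNs adds end-to-end checksums and self-healing, which the array alone cannot provide since it never sees the data the application handed to the filesystem. A sketch with hypothetical LUN names:

  mpathadm list lu                        # confirm MPxIO sees both paths to each LUN
  zpool create tank mirror c6t0d0 c6t1d0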
> It looks pretty lively from my browser :-)
Now that you showed up ;)
In my case it is OpenSolaris in VirtualBox, so I was expecting more cooperation,
or at least people striving to make the two cooperate.
But like you said, this is likely just a case of OpenSolaris being optimized
for big iron a
Richard L. Hamilton writes:
> _FIOSATIME - why doesn't zfs support this (assuming I didn't just miss it)?
> Might be handy for backups.
Roch Bourbonnais writes:
> Are these syscall sufficent ?
> int utimes(const char *path, const struct timeval times[2]);
> int futimesat(int fildes, const char *pa
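Lacking _FIOSATIME, a backup tool can restore the atime from userland after reading a file: utimes() from C, or from a shell the POSIX touch -a -r idiom. A non-atomic sketch with hypothetical paths:

  touch -a -r /tank/home/file /tmp/atime.ref   # stash the file's atime on a scratch file
  cat /tank/home/file > /backup/file           # the backup read bumps atime
  touch -a -r /tmp/atime.ref /tank/home/file   # put the original atime back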
Hello jan,
Tuesday, July 1, 2008, 11:09:54 AM, you wrote:
jd> Hi all,
jd> Based on the further comments I received, the
jd> following approach will be taken for calculating
jd> the default size of swap and dump devices on ZFS
jd> volumes in the Caiman installer.
jd> [1] The following formul
> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
bf> sequential access to virtual memory causes reasonably
bf> sequential I/O requests to disk.
no, thrashing is not when memory is accessed randomly instead of
sequentially. It's when the working set of pages is too big to fit in
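On Solaris the page scanner gives a direct signal for this: a sustained nonzero 'sr' (scan rate) column means the pageout scanner is hunting for memory, which is the precursor to thrashing:

  vmstat 5      # watch the 'sr' column; sustained high values mean memory pressure
  vmstat -p 5   # breaks page-ins/outs down by type (executable/anonymous/filesystem)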
MC wrote:
> I mentioned this too, but on the performance forum:
> http://www.opensolaris.org/jive/thread.jspa?threadID=64907&tstart=0
>
> Unfortunately the performance forum has tumbleweeds blowing through it, so
> that was probably the wrong place to complain. Not that people don't care
> abou
I mentioned this too, but on the performance forum:
http://www.opensolaris.org/jive/thread.jspa?threadID=64907&tstart=0
Unfortunately the performance forum has tumbleweeds blowing through it, so that
was probably the wrong place to complain. Not that people don't care about
performance, but th
I am using an LSI PCI-X dual-port HBA in a two-socket Opteron system.
Connected to the HBA is a Sun StorageTek A1000 populated with 14 36GB disks.
I have two questions that I think are related.
Initially I set up two zpools, one on each channel, so the pool looked like this:
share
On Tue, 1 Jul 2008, Miles Nordin wrote:
>
>bf> What is the relationship between the size of the memory
>bf> reservation and thrashing?
>
> The problem is that size-capping is the only control we have over
> thrashing right now. Maybe there are better ways to predict thrashing
> than throug
On Tue, Jul 1, 2008 at 16:34, Juho Mäkinen <[EMAIL PROTECTED]> wrote:
> Here's bonnie++ output with default settings:
> Version 1.03      --Sequential Output--  --Sequential Input-  --Random-
>                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine
> The problem is that size-capping is the only control we have over
> thrashing right now.
It's not just thrashing, it's also any application that leaks memory.
Without a cap, the broken application would continue plowing through
memory until it had consumed every free block in the storage pool.
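With swap on a ZVOL that cap falls out naturally: the volume size bounds swap, and the volume's reservation keeps a runaway process from starving the rest of the pool. A minimal sketch, names and size hypothetical:

  zfs create -V 4g rpool/swap        # fixed-size volume with a matching reservation
  swap -a /dev/zvol/dsk/rpool/swap
  swap -l                            # verify the new swap device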
> To be honest, it is not quite clear to me, how we might utilize
> dumpadm(1M) to help us to calculate/recommend size of dump device.
> Could you please elaborate more on this ?
dumpadm(1M) -c specifies the dump content, which can be kernel, kernel plus
current process, or all memory. If the dum
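For reference, a sketch of inspecting and setting this (dump device path hypothetical):

  dumpadm                                         # show device, content type, savecore directory
  dumpadm -c kernel -d /dev/zvol/dsk/rpool/dump   # kernel pages only, dumped to a ZVOL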
> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
bf> What is the relationship between the size of the memory
bf> reservation and thrashing?
The problem is that size-capping is the only control we have over
thrashing right now. Maybe there are better ways to predict thrashing
tha
Here's bonnie++ output with default settings:
Version 1.03      --Sequential Output--  --Sequential Input-  --Random-
                  -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
sonas
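A caveat with default settings on a large-memory box: unless the test file is at least twice physical RAM, the sequential numbers largely measure the ARC rather than the disks. A hypothetical invocation forcing the size (in MiB) and an unprivileged user:

  bonnie++ -d /tank/bench -s 16384 -u nobody   # 16 GiB file on an 8 GiB machine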
I feel I'm being misunderstood.
RAID - "Redundant" Array of Inexpensive Disks.
I meant to state: let ZFS deal with redundancy.
If you want an "AID", by all means have your "RAID" controller do every
kind of striping/mirroring it can to help with throughput or ease of managing
drive
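In command form: present the disks to the OS individually (JBOD) and build the redundancy in ZFS, which can then both detect and repair bad blocks from parity. Device names hypothetical:

  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zpool status tank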
On Mon, Jun 30, 2008 at 11:43 AM, Akhilesh Mritunjai
<[EMAIL PROTECTED]> wrote:
>> I'll probably be having 16 Seagate 15K5 SAS disks,
>> 150 GB each. Two in HW raid1 for the OS, two in HW
>> raid 1 or 10 for the transaction log. The OS does not
>> need to be on ZFS, but could be.
>
> Whatever you
On Tue, 1 Jul 2008, Johan Hartzenberg wrote:
>
> Larger disks can put more data on the outer edge, where performance is
> better.
On the flip side, disks with a smaller form factor produce less
vibration and are less sensitive to it, so seeks stabilize faster with
less chance of error. The platt
On Tue, 1 Jul 2008, Miles Nordin wrote:
>
> okay. But what is the point?
>
> Pinwheels are a symptom of thrashing.
They seem like the equivalent of the meaningless hourglass icon to me.
> Pinwheels are not showing up when the OS is returning ENOMEM.
> Pinwheels are not ``things fail'', they ar
On Mon, Jun 30, 2008 at 10:17 AM, Christiaan Willemsen <
[EMAIL PROTECTED]> wrote:
> The question is: how can we maximize IO by using the best possible
> combination of hardware and ZFS RAID?
>
> Here are some generic concepts that still hold true:
More disks can handle more IOs.
Larger disks ca
> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> If you run out of space, things fail. Pinwheels are a symptom
re> of running out of RAM, not running out of swap.
okay. But what is the point?
Pinwheels are a symptom
On Tue, Jul 1, 2008 at 14:47, Juho Mäkinen <[EMAIL PROTECTED]> wrote:
> Streaming video or even audio from the exported shares to Windows XP gives
> laggy performance. Seeking in a video can take ages, audio (playing mp3 with
> winamp from the cifs share) stops from time to time and also the vid
I built a NAS with three 750 GB SATA disks in a RAIDZ configuration and I've
exported some filesystems using the Solaris kernel CIFS server.
Streaming video or even audio from the exported shares to Windows XP gives
laggy performance. Seeking in a video can take ages, audio (playing mp3 with
winamp from t
Miles Nordin wrote:
>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>>
>
> re> Mike, many people use this all day long and seem to be quite
> re> happy. I think the slow death spiral might be overrated :-)
>
> I don't think it's overrated at all. People all arou
On Tue, 1 Jul 2008, Miles Nordin wrote:
>
> I don't think it's overrated at all. People all around me are using
> this dynamic_pager right now, and they just reboot when they see too
> many pinwheels. If they are ``quite happy,'' it's not with their
> pager.
While we have seen these "pinwheels"
On Jul 1, 2008, at 10:55 AM, Miles Nordin wrote:
>
> I don't think it's overrated at all. People all around me are using
> this dynamic_pager right now, and they just reboot when they see too
> many pinwheels. If they are ``quite happy,'' it's not with their
> pager.
I often exist in a sea of m
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> Mike, many people use this all day long and seem to be quite
re> happy. I think the slow death spiral might be overrated :-)
I don't think it's overrated at all. People all around me are using
this dynamic_pager right now, and
Christiaan Willemsen wrote:
>> Why not go to 128-256 GBytes of RAM? It isn't that
>> expensive and would
>> significantly help give you a "big performance boost"
>> ;-)
>>
>
> Would be nice, but it's not that inexpensive since we'd have to move up a
> class in server choice, and besides t
So what version is on your new card? It'd be far easier to
request it from Supermicro if we knew what to ask for.
On 7/1/08, Marc Bevand <[EMAIL PROTECTED]> wrote:
> I remember a similar problem with an AOC-SAT2-MV8 controller in a system of mine:
> Solaris rebooted each time the marvell88sx driver
Hi--
I'm not quite sure about the exact sequence of events here, but it
sounds like you had two spares and replaced the failed disk with one of
the spares, which you can do manually with the zpool replace command.
The remaining spare should drop back into the spare pool if you detached
it. Check
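A sketch of that sequence with hypothetical device names:

  zpool replace tank c1t3d0 c1t9d0   # manually swap spare c1t9d0 in for failed c1t3d0
  zpool detach tank c1t9d0           # detaching the spare returns it to the spare pool
  zpool status tank                  # confirm the spare is listed as AVAIL again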
> Why not go to 128-256 GBytes of RAM? It isn't that
> expensive and would
> significantly help give you a "big performance boost"
> ;-)
Would be nice, but it's not that inexpensive since we'd have to move up a
class in server choice, and besides the extra memory cost, it also brings some
more
Darren J Moffat wrote:
> Mike Gerdts wrote:
>
>
>>> Not at all, and I don't see how you could get that assumption from what I
>>> said. I said "dynamically when it is needed".
>>>
>> I think I came off wrong in my initial message. I've seen times when
>> vmstat reports only megabytes of
Mike Gerdts wrote:
>> Not at all, and I don't see how you could get that assumption from what I
>> said. I said "dynamically when it is needed".
>
> I think I came off wrong in my initial message. I've seen times when
> vmstat reports only megabytes of free swap while gigabytes of RAM were
> av
On Tue, Jul 1, 2008 at 8:10 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]>
>>> wrote:
Instead we should take it comple
On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
>>
>> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Instead we should take it completely out of their hands and do it all
>>> dynamically when it is needed. Now t
Mike Gerdts wrote:
> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Instead we should take it completely out of their hands and do it all
>> dynamically when it is needed. Now that we can swap on a ZVOL and ZVOLs
>> can be extended this is much easier to deal with an
On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Instead we should take it completely out of their hands and do it all
> dynamically when it is needed. Now that we can swap on a ZVOL and ZVOLs
> can be extended this is much easier to deal with and we don't lose the
> be
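Extending swap at runtime then becomes a simple resize; the volume does have to be momentarily removed from swap first (names and size hypothetical):

  swap -d /dev/zvol/dsk/rpool/swap
  zfs set volsize=8g rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap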
Jeff Bonwick wrote:
>> Neither swap nor dump is mandatory for running Solaris.
>
> Dump is mandatory in the sense that losing crash dumps is criminal.
Agreed on that point. I remember all too well, from my days in Sun Service,
when the first dump was always lost because savecore didn't used
Mike Gerdts wrote:
> By default, only kernel memory is dumped to the dump device. Further,
> this is compressed. I have heard that 3x compression is common and
> the samples that I have range from 3.51x - 6.97x.
My samples are in the range 1.95x - 3.66x. And yes, I lost
a few crash dumps on a b
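A back-of-envelope sizing sketch, assuming (and this is only an assumption) that roughly half of RAM is kernel pages at panic time: a 32 GiB machine dumping kernel-only content at the low 1.95x ratio needs about 16/1.95 ≈ 8 GiB of dump device, so sizing the dump ZVOL at a quarter of RAM leaves some headroom.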
Hi all,
Based on the further comments I received, the following
approach will be taken for calculating the default size
of swap and dump devices on ZFS volumes in the Caiman
installer.
[1] The following formula will be used to calculate
swap and dump sizes:
size_of_swap = MAX(512 MiB
Dave Miner wrote:
>> I agree - I am just wondering whether it is fine in general to allow a
>> normal, non-experienced user (who is the target audience for the Slim
>> installer) to run the system without swap. To be honest, I don't know,
>> since I am not very experienced in this area.
>> If people agree that thi
Mike Gerdts wrote:
> On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote:
>> Hi Mike,
>>
>>
>> Mike Gerdts wrote:
>>> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]>
>>> wrote:
Thank you very much all for this valuable input.
Based on the coll
Hi Jeff,
Jeff Bonwick wrote:
>> Neither swap nor dump is mandatory for running Solaris.
>
> Dump is mandatory in the sense that losing crash dumps is criminal.
I think the installer should be tolerant on this point and shouldn't
refuse to proceed with installation if the user doesn't provide enough
Robert Milkowski writes:
> Hello Roch,
>
> Saturday, June 28, 2008, 11:25:17 AM, you wrote:
>
>
> RB> I suspect, a single dd is cpu bound.
>
> I don't think so.
>
We are nearly so, as your numbers show. More below.
> Se below one with a stripe of 48x disks again. Single dd with 1024k
> blo
Erik Trimble Sun.COM> writes:
>
> * Huge RAM drive in a 1U small case (ala Cisco 2500-series routers),
> with SAS or FC attachment.
Almost what you want:
http://www.superssd.com/products/ramsan-400/
128 GB RAM-based device, 3U chassis, FC and Infiniband connectivity.
However as a commenter poi
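If the intended use is accelerating synchronous writes, a RAM-backed LUN like that would attach to an existing pool as a separate intent-log device (device name hypothetical):

  zpool add tank log c5t0d0   # dedicated slog vdev for the ZIL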