[CentOS] does having a centos wiki acct grant edit access?

2010-09-25 Thread Robert P. J. Day

  i just registered for a centos wiki account, and was wondering if
this gives me edit capability.  i'm prepping for the first of a number
of RHEL/centos basic admin courses and currently working my way thru
the wiki, collecting neat tricks and ideas and, occasionally, i'll
stumble over typoes -- some minor, some not so much.

  for instance, here:

http://wiki.centos.org/TipsAndTricks/YumAndRPM#head-61731905d2e34ac343baeef06e5dd296aeed67b9

the command is listed as:

  yum --disable "*" --enable "rpmforge" list available

i'm guessing that should actually be:

  yum --disablerepo "*" --enablerepo "rpmforge" list available

no?  but even if i got edit access, i'd still be happier if someone
eventually checked out any changes i made to validate them.  whatever
works best.

rday

-- 


Robert P. J. Day   Waterloo, Ontario, CANADA

Top-notch, inexpensive online Linux/OSS/kernel courses
http://crashcourse.ca

Twitter:   http://twitter.com/rpjday
LinkedIn:   http://ca.linkedin.com/in/rpjday



Re: [CentOS] does having a centos wiki acct grant edit access?

2010-09-25 Thread Ned Slider
On 25/09/10 11:36, Robert P. J. Day wrote:
>
>i just registered for a centos wiki account, and was wondering if
> this gives me edit capability.  i'm prepping for the first of a number
> of RHEL/centos basic admin courses and currently working my way thru
> the wiki, collecting neat tricks and ideas and, occasionally, i'll
> stumble over typoes -- some minor, some not so much.
>
>for instance, here:
>
> http://wiki.centos.org/TipsAndTricks/YumAndRPM#head-61731905d2e34ac343baeef06e5dd296aeed67b9
>
> the command is listed as:
>
>yum --disable "*" --enable "rpmforge" list available
>
> i'm guessing that should actually be:
>
>yum --disablerepo "*" --enablerepo "rpmforge" list available
>
> no?  but even if i got edit access, i'd still be happier if someone
> eventually checked out any changes i made to validate them.  whatever
> works best.
>
> rday
>

No, an account does not automatically give you edit rights, and you need 
to take this to the centos-docs list where a wiki editor will be happy 
to make the edits for you.

Obtaining wiki edit rights isn't hard - you just need to demonstrate 
that you wish to contribute.



Re: [CentOS] does having a centos wiki acct grant edit access?

2010-09-25 Thread Robert P. J. Day
On Sat, 25 Sep 2010, Ned Slider wrote:

> No, an account does not automatically give you edit rights, and you
> need to take this to the centos-docs list where a wiki editor will
> be happy to make the edits for you.

  thanks muchly.

rday

-- 


Robert P. J. Day   Waterloo, Ontario, CANADA

Top-notch, inexpensive online Linux/OSS/kernel courses
http://crashcourse.ca

Twitter:   http://twitter.com/rpjday
LinkedIn:   http://ca.linkedin.com/in/rpjday



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Benjamin Franz
On 09/24/2010 07:50 PM, Digimer wrote:
> Raid 10 requires 4 drives. First you would make two RAID 0 arrays, then
> create a third array that is RAID 1 using the two RAID 0 arrays for it's
> devices.
>
> With only two drives, your option is RAID 1 (mirroring - proper
> redundancy) or RAID 0 (striping only - lose one drive and you lose *all*
> data).
>
>

That's 0+1 not 1+0.

And don't do it that way.

If you have a single drive failure with RAID 0+1 you've lost *all* of 
your redundancy - one more failure and you are dead. If you create two 
RAID1 sets and then stripe them into a RAID0 you get pretty much the same 
performance and space efficiency characteristics, but if you have a 
drive failure you still have partial redundancy. You could actually take 
a *second* drive failure as long as it was in the other RAID1 pair. With 
4 drives, raid0+1 can only survive 1 drive failure. With 4 drives in raid 
1+0 you can survive an average of 1.67 drive failures.
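
For example, a minimal sketch of the nested 1+0 layout with mdadm (the 
device names here are just placeholders for your four disks):

  # two RAID1 pairs
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  # a RAID0 stripe over the two mirrors
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1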

-- 
Benjamin Franz


Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Karanbir Singh
On 09/25/2010 01:06 PM, Benjamin Franz wrote:
> If you have a single drive failure with RAID 0+1 you've lost *all* of
> your redundancy - one more failure and you are dead. If you create two
>

Things get a bit 'grey' with mdraid10 and its extensions; look at: 
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
for an overview.

- KB


Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Christopher Chan
Jacob Bresciani wrote:
> RAID10 requires at least 4 drives does it not?
> 
> Since it's a strip set of mirrored disks, the smallest configuration I 
> can see is 4 disks, 2 mirrored pairs stripped.

He might be referring to what he can get from the mdraid10 (i know, Neil 
Brown could have chosen a better name) which is not quite the same as 
nested 1+0. Doing it the nested way, you need at least 4 drives. Using 
mdraid10 is another story. Thanks Neil for muddying the waters!


[CentOS] ca-bundle.crt expired

2010-09-25 Thread Timothy Murphy
I notice that the certificate /etc/pki/tls/certs/ca-bundle.crt
on my CentOS-5.5 system expired on 7 Jan 2010,
although the openssl-0.9.8e-12.el5_4.6 package was updated in March.

What is the point of this certificate?

-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Ross Walker
On Sep 25, 2010, at 9:11 AM, Christopher Chan 
 wrote:

> Jacob Bresciani wrote:
>> RAID10 requires at least 4 drives does it not?
>> 
>> Since it's a strip set of mirrored disks, the smallest configuration I 
>> can see is 4 disks, 2 mirrored pairs stripped.
> 
> He might be referring to what he can get from the mdraid10 (i know, Neil 
> Brown could have chosen a better name) which is not quite the same as 
> nested 1+0. Doing it the nested way, you need at least 4 drives. Using 
> mdraid10 is another story. Thanks Neil for muddying the waters!

True, but if you work it through, mdraid10 with 2 drives = raid1; you would need 3 
drives to get the distributed-copy feature of Neil's mdraid10.

Mdraid10 actually allows for a 3-drive raid10 set. It isn't raid10 per se, but 
a raid level based on distributing copies of chunks around the spindles for 
redundancy.

For true RAID10 support in Linux you create multiple mdraid1 physical volumes, 
create an LVM volume group out of them, and create logical volumes that 
interleave (stripe) between these physical volumes.

This can give you the ability to extend an LVM RAID10 VG by adding RAID1 PVs to 
the VG. Unfortunately LVM has no restripe feature, so to spread existing data 
across all the members afterward you need to create a new LV, so leave room in 
the VG to do that.
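
For illustration, a rough sketch of that layout (device, VG and LV names are 
made up; sizes are examples only):

  # mirrored pairs become the physical volumes
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_r10 /dev/md0 /dev/md1
  # -i 2 stripes the logical volume across both PVs
  lvcreate -i 2 -I 64 -L 100G -n lv_data vg_r10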

-Ross



Re: [CentOS] Configuring BIND to answer to two domain names (four IP addresses)

2010-09-25 Thread Les Mikesell
On 9/24/10 11:12 PM, cpol...@surewest.net wrote:
> On Fri, Sep 24, 2010 at 10:28:41PM +0200, Dotan Cohen wrote:
>> On Fri, Sep 24, 2010 at 22:24, Alexander Dalloz  wrote:
>>> http://www.zytrax.com/books/dns/
>>>
>>> That is a good source to read up about bind configuration.
>>>
>>> As a sidenote please be aware, that if someone directly queries your
>>> ns1.exampleA.com for exampleB.com zone records he will get proper
>>> answers. If you would need to prevent this for any reason you would need
>>> a extended bind config design using views.
>>>
>>> While the zytrax book has lessons about views you can too find a resource in
>>>
>>> http://www.cymru.com/Documents/secure-bind-template.html
>>>
>>
>> Wow, thank you! There is some good reading there, especially the
>> security link. Lots of little holes to exploit!
>>
>> I will be up for the night!
>
> For completeness: there is the BIND 9 Administrator Reference Manual,
> known as the ARM, usually supplied under /usr/share/doc/.
> And what many consider to be the standard reference, Liu and Albitz's
> "DNS and BIND" published by O'Reilly. I believe it's up to the
> 5th edition now; an earlier edition used to be provided online.
> If you're serious about learning DNS you ought to consider this book.

Learning bind is sort of like learning sendmail though.  They both do a million 
things you'll never need (and if you do you should probably change your 
design...).  The trick - especially when you start with the full references - is 
to figure out the simple part you need to understand and ignore the rest.  And 
when using distribution-packaged versions, most of what you need is already 
there.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Tom H
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker  wrote:
> On Sep 25, 2010, at 9:11 AM, Christopher Chan 
>  wrote:
>> Jacob Bresciani wrote:
>>> RAID10 requires at least 4 drives does it not?
>>>
>>> Since it's a strip set of mirrored disks, the smallest configuration I
>>> can see is 4 disks, 2 mirrored pairs stripped.
>>
>> He might be referring to what he can get from the mdraid10 (i know, Neil
>> Brown could have chosen a better name) which is not quite the same as
>> nested 1+0. Doing it the nested way, you need at least 4 drives. Using
>> mdraid10 is another story. Thanks Neil for muddying the waters!


> True, but if you figure it out mdraid10 with 2 drives = raid1, you would need 
> 3
> drives to get the distributed copy feature of Neil's mdraid10.

I had posted earlier (
http://lists.centos.org/pipermail/centos/2010-September/099473.html )
that mdraid10 with two drives is basically raid1 but that it has some
mirroring options. In the "far layout" mirroring option (where,
according to WP, "all the drives are divided into f sections and all
the chunks are repeated in each section but offset by one device")
reads are faster than mdraid1 or vanilla mdraid10 on two drives.
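
For the record, a two-drive far-layout array can be created directly; a 
minimal sketch (partition names assumed):

  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1

(--layout=n2 is the default near layout, --layout=o2 the offset layout.)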


> For true RAID10 support in Linux you create multiple mdraid1 physical
> volumes, create a LVM volume group out of them and create logical
> volumes that interleave between these physical volumes.

Vanilla mdraid10 with four drives is "true raid10".


Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Ross Walker
On Sep 25, 2010, at 1:52 PM, Tom H  wrote:

> On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker  wrote:
>> On Sep 25, 2010, at 9:11 AM, Christopher Chan 
>>  wrote:
>>> Jacob Bresciani wrote:
>>>> RAID10 requires at least 4 drives does it not?
>>>>
>>>> Since it's a strip set of mirrored disks, the smallest configuration I
>>>> can see is 4 disks, 2 mirrored pairs stripped.
>>> 
>>> He might be referring to what he can get from the mdraid10 (i know, Neil
>>> Brown could have chosen a better name) which is not quite the same as
>>> nested 1+0. Doing it the nested way, you need at least 4 drives. Using
>>> mdraid10 is another story. Thanks Neil for muddying the waters!
> 
> 
>> True, but if you figure it out mdraid10 with 2 drives = raid1, you would 
>> need 3
>> drives to get the distributed copy feature of Neil's mdraid10.
> 
> I had posted earlier (
> http://lists.centos.org/pipermail/centos/2010-September/099473.html )
> that mdraid10 with two drives is basically raid1 but that it has some
> mirroring options. In the "far layout" mirroring option (where,
> according to WP, "all the drives are divided into f sections and all
> the chunks are repeated in each section but offset by one device")
> reads are faster than mdraid1 or vanilla mdraid10 on two drives.

If you have any two copies of the same chunk on the same drive then redundancy 
is completely lost.

Therefore, without losing redundancy, mdraid10 over two drives will have to be 
identical to raid1.

Reads on a raid1 can be serviced by either side of the mirror; I believe the 
policy is hard-coded to round-robin. I don't know if it is smart enough to 
distinguish a sequential pattern from random and only service sequential reads 
from one side or not.

>> For true RAID10 support in Linux you create multiple mdraid1 physical
>> volumes, create a LVM volume group out of them and create logical
>> volumes that interleave between these physical volumes.
> 
> Vanilla mdraid10 with four drives is "true raid10".

Well, like you stated above, that depends on the near or far layout pattern. You 
can get the same performance as a raid10 or better in certain workloads, but it 
really isn't a true raid10 in the sense that it isn't a stripe set of raid1s; 
it's a distributed mirror set.

Now don't get me wrong, I'm not saying it's not as good as a true raid10; in 
fact I believe it to be better, as it provides way more flexibility and is a lot 
simpler an implementation. But it isn't really a raid10; it's something 
completely new.

-Ross



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Miguel Medalha

> And don't do it that way.
>
> If you have a single drive failure with RAID 0+1 you've lost *all* of
> your redundancy - one more failure and you are dead. If you create two
> RAID1 sets and then strip them into a RAID0 you get pretty much the same
> performance and space efficiency characteristics, but if you have a
> drive failure you still have partial redundancy. You could actually take
> a *second* drive failure as long as it was in the other RAID1 pair. With
> 4 drives raid0+1 can only survive 1 drive failure. With 4 drives in raid
> 1+0 you can survive an average of 1.67 drive failures.

Indeed.

This article explains the odds of losing data with RAID 1+0 vs 0+1:


Why is RAID 1+0 better than RAID 0+1?
http://www.aput.net/~jheiss/raid10/



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Tom Bishop
Thanks for all of the input... I finally came across a good article
summarizing what I needed. It looks like I am going to try the f2 option and
then do some testing vs the default n2 option.  I am building the array as
we speak, but it looks like building with the f2 option will take 24hrs vs
2hrs for the n2 option... this is on two 1TB drives.
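
If the resync throttle is what's slowing the build down, it can be raised 
while the array syncs; a sketch, with example values only:

  cat /proc/mdstat                           # watch resync progress
  sysctl -w dev.raid.speed_limit_min=50000   # KB/s, example value
  sysctl -w dev.raid.speed_limit_max=200000  # KB/s, example value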

On Sat, Sep 25, 2010 at 3:04 PM, Ross Walker  wrote:

> On Sep 25, 2010, at 1:52 PM, Tom H  wrote:
>
> > On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker 
> wrote:
> >> On Sep 25, 2010, at 9:11 AM, Christopher Chan <
> christopher.c...@bradbury.edu.hk> wrote:
> >>> Jacob Bresciani wrote:
> >>>> RAID10 requires at least 4 drives does it not?
> >>>>
> >>>> Since it's a strip set of mirrored disks, the smallest configuration I
> >>>> can see is 4 disks, 2 mirrored pairs stripped.
> >>>
> >>> He might be referring to what he can get from the mdraid10 (i know,
> Neil
> >>> Brown could have chosen a better name) which is not quite the same as
> >>> nested 1+0. Doing it the nested way, you need at least 4 drives. Using
> >>> mdraid10 is another story. Thanks Neil for muddying the waters!
> >
> >
> >> True, but if you figure it out mdraid10 with 2 drives = raid1, you would
> need 3
> >> drives to get the distributed copy feature of Neil's mdraid10.
> >
> > I had posted earlier (
> > http://lists.centos.org/pipermail/centos/2010-September/099473.html )
> > that mdraid10 with two drives is basically raid1 but that it has some
> > mirroring options. In the "far layout" mirroring option (where,
> > according to WP, "all the drives are divided into f sections and all
> > the chunks are repeated in each section but offset by one device")
> > reads are faster than mdraid1 or vanilla mdraid10 on two drives.
>
> If you have any two copies of the same chunk on the same drive then
> redundancy is completely lost.
>
> Therefore without loosing redundancy mdraid10 over two drives will have to
> be identical to raid1.
>
> Reads on a raid1 can be serviced by either side of the mirror, I believe
> the policy is hard coded to round robin. I don't know if it is smart enough
> to distinguish sequential pattern from random and only service sequential
> reads from one side or not.
>
> >> For true RAID10 support in Linux you create multiple mdraid1 physical
> >> volumes, create a LVM volume group out of them and create logical
> >> volumes that interleave between these physical volumes.
> >
> > Vanilla mdraid10 with four drives is "true raid10".
>
> Well like you stated above that depends on the near or far layout pattern,
> you can get the same performance as a raid10 or better in certain workloads,
> but it really isn't a true raid10 in the sense that it isn't a stripe set of
> raid1s, but a distributed mirror set.
>
> Now don't get me wrong I'm not saying it's not as good as a true raid10, in
> fact I believe it to be better as it provides way more flexibility and is a
> lot simpler of an implementation, but not really a raid10, but something
> completely new.
>
> -Ross
>


Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Miguel Medalha

> Mdraid10 actually allows for a 3 drive raid10 set. It isn't raid10 per say 
> but a raid level based on distributing copies of chunks around the spindles 
> for redundancy.

Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a 
minimum of 3 drives?

This seems to me a much better name for it than calling it "RAID 10"...



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Nataraj
Miguel Medalha wrote:
>> Mdraid10 actually allows for a 3 drive raid10 set. It isn't raid10 per say 
>> but a raid level based on distributing copies of chunks around the spindles 
>> for redundancy.
>> 
>
> Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a 
> minimum of 3 drives?
>
> This seems to me a much better name for it than calling it "RAID 10"...
>
>   
Yes it is Raid-1E.  This is explicitly documented in the link that 
Karanbir provided...
http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10

Nataraj



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Ross Walker
On Sep 25, 2010, at 4:15 PM, Miguel Medalha  wrote:

> 
>> Mdraid10 actually allows for a 3 drive raid10 set. It isn't raid10 per say 
>> but a raid level based on distributing copies of chunks around the spindles 
>> for redundancy.
> 
> Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a minimum of 
> 3 drives?
> 
> This seems to me a much better name for it than calling it "RAID 10"...

The raid1e type probably didn't exist when Neil Brown came up with the 
algorithm.

He should have patented it though...

Maybe he started out with the idea to create a raid10, but didn't want the 
complexity of managing sub-arrays, so he decided just to redistribute chunk 
copies instead, and it took off from there.

-Ross



[CentOS] Mock on 5.5 x86_64 failing with Could not find useradd in chroot, maybe the install failed?

2010-09-25 Thread Jason Pyeron
I am attempting to work with some RPMs in the c5-testing repo.

When I add:

[c5-testing]
name=CentOS-5 Testing
baseurl=http://dev.centos.org/centos/5/testing/x86_64/
gpgcheck=1
gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing

to centos-5-x86_64.cfg, saving it as centos-5-x86_64-testdev.cfg, I get the
aforementioned error.

Example

[mockbu...@localhost ~]$ mock -r centos-5-x86_64-testdev.cfg init
init
clean
prep
This may take a while
Could not find useradd in chroot, maybe the install failed?
ending
done

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
-   -
- Jason Pyeron  PD Inc. http://www.pdinc.us -
- Principal Consultant  10 West 24th Street #100-
- +1 (443) 269-1555 x333Baltimore, Maryland 21218   -
-   -
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
This message is copyright PD Inc, subject to license 20080407P00.





Re: [CentOS] Mock on 5.5 x86_64 failing with Could not find useradd in chroot, maybe the install failed? [SOLVED]

2010-09-25 Thread Jason Pyeron
Solved,

> -Original Message-
> From: centos-boun...@centos.org 
> [mailto:centos-boun...@centos.org] On Behalf Of Jason Pyeron
> Sent: Saturday, September 25, 2010 17:22
> To: 'CentOS mailing list'
> Subject: [CentOS] Mock on 5.5 x86_64 failing with Could not 
> find useradd inchroot, maybe the install failed?
> 
> I am atempting to work with some rpms in the c5-testing repo.
> 
> When I add:
> 
> [c5-testing]
> name=CentOS-5 Testing
> baseurl=http://dev.centos.org/centos/5/testing/x86_64/

Commenting out the GPG lines avoids the issue.

> gpgcheck=1
> gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing
> 
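
So the stanza that initializes cleanly ends up looking like this (only the two
GPG lines are commented out):

[c5-testing]
name=CentOS-5 Testing
baseurl=http://dev.centos.org/centos/5/testing/x86_64/
#gpgcheck=1
#gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing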

Ex:

[mockbu...@localhost ~]$ mock -r centos-5-x86_64-testdev.cfg init
init
clean
prep
This may take a while
ending
done
Finished initializing root
[mockbu...@localhost ~]$ mock -r centos-5-x86_64-testdev.cfg
backuppc-3.1.0-1.el5.centos.src.rpm
init
clean
prep
This may take a while
setup
build
ending
done
Results and/or logs in: /var/lib/mock/centos-5-x86_64/result


--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
-   -
- Jason Pyeron  PD Inc. http://www.pdinc.us -
- Principal Consultant  10 West 24th Street #100-
- +1 (443) 269-1555 x333Baltimore, Maryland 21218   -
-   -
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
This message is copyright PD Inc, subject to license 20080407P00.

 




Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Miguel Medalha

> The raid1e type probably didn't exist when Neil Brown came up with the 
> algorithm.

You are probably right.

> He should have patented it though...

Maybe...

> Maybe he started out with the idea to create a raid10, but didn't want the 
> complexity of managing sub-arrays so decided just to redistribute chunk 
> copies instead and then it took off from there.

Yes. I didn't want to sound harsh to him. I am VERY grateful for his 
outstanding work.



Re: [CentOS] Raid 10 questions...2 drive

2010-09-25 Thread Tom H
On Sat, Sep 25, 2010 at 4:04 PM, Ross Walker  wrote:
> On Sep 25, 2010, at 1:52 PM, Tom H  wrote:
>> On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker  wrote:
>>> On Sep 25, 2010, at 9:11 AM, Christopher Chan 
>>>  wrote:
>>>> Jacob Bresciani wrote:
>>>>> RAID10 requires at least 4 drives does it not?
>>>>>
>>>>> Since it's a strip set of mirrored disks, the smallest configuration I
>>>>> can see is 4 disks, 2 mirrored pairs stripped.
>>>>
>>>> He might be referring to what he can get from the mdraid10 (i know, Neil
>>>> Brown could have chosen a better name) which is not quite the same as
>>>> nested 1+0. Doing it the nested way, you need at least 4 drives. Using
>>>> mdraid10 is another story. Thanks Neil for muddying the waters!
>>
>>
>>> True, but if you figure it out mdraid10 with 2 drives = raid1, you would 
>>> need 3
>>> drives to get the distributed copy feature of Neil's mdraid10.
>>
>> I had posted earlier (
>> http://lists.centos.org/pipermail/centos/2010-September/099473.html )
>> that mdraid10 with two drives is basically raid1 but that it has some
>> mirroring options. In the "far layout" mirroring option (where,
>> according to WP, "all the drives are divided into f sections and all
>> the chunks are repeated in each section but offset by one device")
>> reads are faster than mdraid1 or vanilla mdraid10 on two drives.
>
> If you have any two copies of the same chunk on the same drive
> then redundancy is completely lost.
>
> Therefore without loosing redundancy mdraid10 over two drives
> will have to be identical to raid1.
>
> Reads on a raid1 can be serviced by either side of the mirror, I
> believe the policy is hard coded to round robin. I don't know if it is
> smart enough to distinguish sequential pattern from random and
> only service sequential reads from one side or not.
>
>>> For true RAID10 support in Linux you create multiple mdraid1 physical
>>> volumes, create a LVM volume group out of them and create logical
>>> volumes that interleave between these physical volumes.
>>
>> Vanilla mdraid10 with four drives is "true raid10".
>
> Well like you stated above that depends on the near or far layout pattern,
> you can get the same performance as a raid10 or better in certain
> workloads, but it really isn't a true raid10 in the sense that it isn't a 
> stripe
> set of raid1s, but a distributed mirror set.
>
> Now don't get me wrong I'm not saying it's not as good as a true raid10,
> in fact I believe it to be better as it provides way more flexibility and is a
> lot simpler of an implementation, but not really a raid10, but something
> completely new.

You must've misunderstood me.

mdraid10 on two disks: it is raid1 but you have the option of
mirroring, for example, cylinder 24 on disk 1 with cylinder 48 on disk
2; the Wikipedia article says that it makes reads faster (I don't
understand why but that's a different story).

mdraid10 on four disks: it is true raid10 but you also have various
"--layout=" options.
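
If you want to check which variant a given array is actually running, mdadm 
reports it (md device name assumed):

  mdadm --detail /dev/md0 | grep -i layout

For an md raid10 array the Layout line shows near=2, far=2, offset=2, and so on.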


[CentOS] (In)(x)sane privilege/access issue

2010-09-25 Thread Mark
I just picked up a new HP OfficeJet J3680 all-in-one (because, as
previously posted, my 4315 broke), and I was able to install the
printer using the latest hplip's hp-setup command, BUT:

My sane:

libsane-hpaio.x86_64   1.6.7-4.1.el5.4 installed
sane-backends.x86_64   1.0.18-5.el5installed
sane-backends-devel.i386   1.0.18-5.el5installed
sane-backends-devel.x86_64 1.0.18-5.el5installed
sane-backends-libs.i3861.0.18-5.el5installed
sane-backends-libs.x86_64  1.0.18-5.el5installed
sane-frontends.x86_64  1.0.14-1.2.2installed
xsane.x86_64   0.991-5.el5 installed
xsane-gimp.x86_64  0.991-5.el5 installed

doesn't recognize that there is a scanner out there and only loads the
interface for my old ScanJet 3300C:

[...@marichter ~]$ lsusb
Bus 001 Device 001: ID :
Bus 002 Device 012: ID 03f0:0205 Hewlett-Packard ScanJet 3300c
Bus 002 Device 006: ID 04f9:0033 Brother Industries, Ltd
Bus 002 Device 001: ID :

In fact, the new scanner doesn't even show up unless:

[...@marichter ~]$ sudo lsusb
[sudo] password for mar:
Bus 001 Device 001: ID :
Bus 002 Device 015: ID 03f0:3112 Hewlett-Packard
Bus 002 Device 012: ID 03f0:0205 Hewlett-Packard ScanJet 3300c
Bus 002 Device 006: ID 04f9:0033 Brother Industries, Ltd
Bus 002 Device 001: ID :

If I run xsane from the command line as root (i.e., with sudo), then
it works, but then I also get that WARNING DANGER WILL ROBINSON
message, which is both annoying and probably likely to cause problems
with file permissions on my scans.

So, how do I (a non-root user) get x/sane to see the new USB device
when it only appears to be visible to the root user?  What am I missing?

Thanks.

Mark
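
For what it's worth, one common approach to making a USB device visible to 
non-root users is a udev rule keyed on the vendor/product ID shown by lsusb; 
a rough, unverified sketch (the rule file name, the keys and the group are 
assumptions and depend on the udev/sane-backends versions on the system):

  # e.g. /etc/udev/rules.d/99-hp-officejet.rules (hypothetical file name)
  # older udev releases use BUS=="usb" and SYSFS{...} instead of SUBSYSTEM/ATTRS{...}
  SUBSYSTEM=="usb", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="3112", MODE="0664", GROUP="lp"

After reloading the udev rules and replugging the device, running 
sane-find-scanner as the regular user should show whether it is now accessible.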