Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Timothy Murphy
Les Mikesell wrote:

>> Does XFS have any advantages over ext4 for normal users, eg with laptops?
>> I've only seen it touted for machines with enormous disks, 200TB plus.

> It is generally better at handling a lot of files - faster
> creation/deletion when there are a large number in the same directory.

I'm wondering if, for the home user, BackupPC would be a good test of that?
Otherwise I can't think of a case where I would have a very large number
of files in the same directory.
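If anyone wants a rough feel for the difference, a quick single-directory metadata benchmark is easy to improvise (a sketch; COUNT and the target directory are placeholders to point at the filesystem under test):

```shell
# Crude single-directory create/delete benchmark (sketch).
# Run it once on an ext4 mount and once on an XFS mount and compare.
DIR="${DIR:-./fs-bench.$$}"
COUNT="${COUNT:-10000}"

mkdir -p "$DIR"
start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    : > "$DIR/file.$i"          # create an empty file
    i=$((i + 1))
done
create_secs=$(( $(date +%s) - start ))

start=$(date +%s)
rm -rf "$DIR"
delete_secs=$(( $(date +%s) - start ))

echo "created $COUNT files in ${create_secs}s, deleted in ${delete_secs}s"
```

Note that date +%s only gives whole-second resolution, so push COUNT up until the numbers are meaningful.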

>The only down side for a long time has been on 32bit machines where
> the RH default 4k kernel stacks were too small.

Do you mean that that is a down side of XFS, or ext4?

>> Does XFS have the same problems that LVM has if there are disk faults?

> You can't really expect any file system to work if the disk underneath
> is bad.  Raid is your friend there.

In my meagre experience, when a disk shows signs of going bad
I have been able to copy most of an ext3/ext4 disk before complete failure,
while LVM disks have been beyond (my) rescue.
Actually, this was in the time of SCSI disks,
which seemed quite good at giving advance warning of failure.
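For what it's worth, smartmontools exposes that kind of advance warning on SATA disks as well (a sketch; /dev/sda is a placeholder, and the commands are only printed when smartctl is not installed):

```shell
# Check a disk's SMART health self-assessment and the attributes that
# tend to predict failure (sketch; /dev/sda is a placeholder).
smart_check() {
    if command -v smartctl >/dev/null 2>&1; then
        smartctl -H "$1" || true    # overall PASSED/FAILED verdict
        smartctl -A "$1" || true    # attributes: reallocated sectors, etc.
    else
        echo "would run: smartctl -H $1"
        echo "would run: smartctl -A $1"
    fi
}
smart_check "${disk:-/dev/sda}"
```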
 
-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
School of Mathematics, Trinity College, Dublin 2, Ireland


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS-announce Digest, Vol 112, Issue 6

2014-06-12 Thread centos-announce-request
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-requ...@centos.org

You can reach the person managing the list at
centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-announce digest..."


Today's Topics:

   1. CESA-2014:0741 Critical CentOS 5 firefox Update (Johnny Hughes)
   2. CESA-2014:0742 Important CentOS 5 thunderbird Update
  (Johnny Hughes)
   3. CESA-2014:0740 Important CentOS 5 kernel Update (Johnny Hughes)
   4. CESA-2014:0743 Moderate CentOS 6 qemu-kvm Update (Johnny Hughes)
   5. CESA-2014:0741 Critical CentOS 6 firefox Update (Johnny Hughes)
   6. CESA-2014:0742 Important CentOS 6 thunderbird Update
  (Johnny Hughes)
   7. CESA-2014:0747 Moderate CentOS 6 python-jinja2 Update
  (Johnny Hughes)
   8. CEBA-2014:0749 CentOS 6 mobile-broadband-provider-info
  FASTTRACK Update (Johnny Hughes)
   9. CEBA-2014:0750 CentOS 6 pulseaudio FASTTRACK  Update
  (Johnny Hughes)


--

Message: 1
Date: Wed, 11 Jun 2014 10:49:01 +
From: Johnny Hughes 
Subject: [CentOS-announce] CESA-2014:0741 Critical CentOS 5 firefox
Update
To: centos-annou...@centos.org
Message-ID: <20140611104901.ga5...@chakra.karan.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2014:0741 Critical

Upstream details at : https://rhn.redhat.com/errata/RHSA-2014-0741.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

i386:
582f7cb91e6ab5d93be7af50a22e14cd6e05d3f66203b77b37878634b29dcc6d  
firefox-24.6.0-1.el5.centos.i386.rpm

x86_64:
582f7cb91e6ab5d93be7af50a22e14cd6e05d3f66203b77b37878634b29dcc6d  
firefox-24.6.0-1.el5.centos.i386.rpm
9edc3f96ec241383a0436f87d27ccf2079ed668a70fa1dff02465a60ec24e327  
firefox-24.6.0-1.el5.centos.x86_64.rpm

Source:
73f5bff644185f97780d7db55c29a91b1257180061f303c788b0cc7f4c63ac94  
firefox-24.6.0-1.el5.centos.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net



--

Message: 2
Date: Wed, 11 Jun 2014 10:54:29 +
From: Johnny Hughes 
Subject: [CentOS-announce] CESA-2014:0742 Important CentOS 5
thunderbird Update
To: centos-annou...@centos.org
Message-ID: <20140611105429.ga5...@chakra.karan.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2014:0742 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2014-0742.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

i386:
fe8e39883bc8fce0f23370adeed35a1064a75fed6f2dff225464b001389f1a1f  
thunderbird-24.6.0-1.el5.centos.i386.rpm

x86_64:
f4f45e86dc38c2ecaab9c5316b8b06ac6b2228c8121e7a0d964db39a97dece98  
thunderbird-24.6.0-1.el5.centos.x86_64.rpm

Source:
c91f239f8d5df36088789be70874ad3adfd12a3cf7ed7ff2b3b0450323cc923b  
thunderbird-24.6.0-1.el5.centos.src.rpm



-- 
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net



--

Message: 3
Date: Wed, 11 Jun 2014 11:01:17 +
From: Johnny Hughes 
Subject: [CentOS-announce] CESA-2014:0740 Important CentOS 5 kernel
Update
To: centos-annou...@centos.org
Message-ID: <2014060117.ga6...@chakra.karan.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2014:0740 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2014-0740.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

i386:
b08076941db58fbeb148989f6fcaba087b5de45990aea2f43011718251b755b0  
kernel-2.6.18-371.9.1.el5.i686.rpm
9076b8645681ba78ae87efa47eb3ff24872eeccd8dd89c33ad08ffb129aff336  
kernel-debug-2.6.18-371.9.1.el5.i686.rpm
bad67dd7af8ba071bb2609d9da1e8c7600e84b926c59e4a8a3119e80c10b10d6  
kernel-debug-devel-2.6.18-371.9.1.el5.i686.rpm
79294c50d5c65c4167eb3885d8750a6c00743c1ffae1bc1c517df3d17a6425be  
kernel-devel-2.6.18-371.9.1.el5.i686.rpm
81942e7d0504f2557a62a953d6788f4ead3cafa9b8b59f1cb08347e001a58d28  
kernel-doc-2.6.18-371.9.1.el5.noarch.rpm
1e83a1ee3bec37d7ff0d813f549f70ce246f67b119046dfb291fde3c50f4c88e  
kernel-headers-2.6.18-371.9.1.el5.i386.rpm
26f5595940d362d0060602141350fb6a08dc7113d95d3ddc144a8f5843e52083  
kernel-PAE-2.6.18-371.9.1.el5.i686.rpm
e634a1f4cdce6cdeefe1a1ba6f76ec68ff2f76ba6051e7e6e47d90bb08da9ddb  
kernel-PAE-devel-2.6.18-371.9.1.el5.i686.rpm
a4e8d7d94619340f65bf2f29f6bb371e0c96054f1bc9899892bff1e930d21913  
kernel-xen-2.6.18-371.9.1.el5.i686.rpm
971328ef760a05c19a4ad6ac02fffcd0

Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread James B. Byrne

On Wed, June 11, 2014 18:31, Frank Cox wrote:
> I decided that the next time I reformatted my main desktop computer (this one)
> I would have a ssd installed in it to use for the boot drive.  Now that Centos
> 7 is on the horizon, I'm thinking that the time is approaching when I'll want
> to do that.
>

I have a question about SSDs with respect to security.  Recently I have been
investigating sanitizing these devices, together with smartphones, tablets
and pads which use flash memory for persistent storage, not to mention the
ubiquitous USB 'memory stick'.  I have come to the rather unsettling
conclusion that it is effectively impossible to 'sanitize' these things short
of complete and utter physical destruction, preferably by incineration.  Is
this in fact the case?



-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne  mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3



Re: [CentOS] Google chrome vs network settings proxy?

2014-06-12 Thread Les Mikesell
On Wed, Jun 11, 2014 at 11:22 PM, Gé Weijers  wrote:
> On Wed, Jun 11, 2014 at 11:10 AM, Les Mikesell 
> wrote:
>
>>  However, I can start a chrome connection to gmail and it just goes
>> direct (which happens to work, I just prefer the proxy which will use
>> a different outbound route).   If I go to any non-google site, it uses
>> the proxy and will pop up the expected authentication dialog on the
>> first connection.   Does anyone know (a) why it bypasses the proxy
>> when going to a google site, (b) why it doesn't have its own internal
>> proxy settings, or (c) how to fix it?
>>
>
> Did you configure the proxy for HTTPS? Gmail uses HTTPS exclusively these
> days, the certificate is pinned (hard coded) in Chrome to prevent spoofing,
> maybe the protocol is too. Time for 'tcpdump'?

Yes, that turned out to be the problem.  I had only set http in the
system settings and must have bookmarked/saved the https url so it
didn't even need the initial redirect.
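For the archives: the scheme-specific proxy settings are independent, so https has to be configured alongside http. A sketch with the classic environment variables (host and port are placeholders):

```shell
# Each URL scheme has its own proxy setting; setting only http_proxy
# leaves https traffic going direct.  proxy.example.com:3128 is a
# placeholder for your real proxy.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"   # the one that's easy to forget
export no_proxy="localhost,127.0.0.1"

echo "http  -> $http_proxy"
echo "https -> $https_proxy"
```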

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Google chrome vs network settings proxy?

2014-06-12 Thread Billy Crook
Makes me wonder what happens if a site uses spdy://


On Thu, Jun 12, 2014 at 10:10 AM, Les Mikesell 
wrote:

> On Wed, Jun 11, 2014 at 11:22 PM, Gé Weijers  wrote:
> > On Wed, Jun 11, 2014 at 11:10 AM, Les Mikesell 
> > wrote:
> >
> >>  However, I can start a chrome connection to gmail and it just goes
> >> direct (which happens to work, I just prefer the proxy which will use
> >> a different outbound route).   If I go to any non-google site, it uses
> >> the proxy and will pop up the expected authentication dialog on the
> >> first connection.   Does anyone know (a) why it bypasses the proxy
> >> when going to a google site, (b) why it doesn't have its own internal
> >> proxy settings, or (c) how to fix it?
> >>
> >
> > Did you configure the proxy for HTTPS? Gmail uses HTTPS exclusively these
> > days, the certificate is pinned (hard coded) in Chrome to prevent
> spoofing,
> > maybe the protocol is too. Time for 'tcpdump'?
>
> Yes, that turned out to be the problem.  I had only set http in the
> system settings and must have bookmarked/saved the https url so it
> didn't even need the initial redirect.
>
> --
>Les Mikesell
>   lesmikes...@gmail.com



-- 
Billy Crook • Network and Security Administrator • RiskAnalytics, LLC


Re: [CentOS] Google chrome vs network settings proxy?

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 10:33 AM, Billy Crook  wrote:
> Makes me wonder what happens if a site uses spdy://
>

I'd expect that to be the case for chrome talking to gmail.  But it is
supposed to run over https://.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread SilverTip257
On Thu, Jun 12, 2014 at 10:35 AM, James B. Byrne 
wrote:

>
> On Wed, June 11, 2014 18:31, Frank Cox wrote:
> > I decided that the next time I reformatted my main desktop computer
> (this one)
> > I would have a ssd installed in it to use for the boot drive.  Now that
> Centos
> > 7 is on the horizon, I'm thinking that the time is approaching when I'll
> want
> > to do that.
> >
>
> I have a question about SSD respecting security.  Recently I have been
> investigating sanitizing these devices, together with 'smart-phones,
> tablets
> and pads which use flash memory persistent storage. Not to mention the
> ubiquitous USB 'memory stick'.  I have come to the rather unsettling
> conclusion that it is effectively impossible to 'sanitize' these things
> short
> of complete and utter physical destruction, preferably by incineration.  Is
> this in fact the case?
>

* Hopefully someone who is more of an expert on this matter will speak up.

I've come to the same conclusion.  Due to controller wear leveling and
TRIM, it is difficult to fully sanitize flash memory (USB flash, SSD).

A former employer of mine contracts out destruction of conventional hard
drives with a machine that has a hydraulic arm and a wedge, effectively
bending the platters and part of the drive.  Hardware destruction (prior to
recycling/disposal) is commonplace in certain business sectors.
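For SSDs, the closest thing to a software answer is the drive's own ATA Secure Erase command, e.g. via hdparm, though you are then trusting the firmware to really do it. The sketch below only prints the commands, since the real ones destroy all data; the device name and password are placeholders:

```shell
# ATA Secure Erase via hdparm (sketch -- the real commands DESTROY ALL
# DATA).  Printed rather than executed; /dev/sdX and Pass123 are
# placeholders.  The drive must not be in the "frozen" state
# (check the Security section of 'hdparm -I' first).
secure_erase() {
    echo "hdparm --user-master u --security-set-pass Pass123 $1"
    echo "hdparm --user-master u --security-erase Pass123 $1"
}
secure_erase "/dev/sdX"
```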

-- 
---~~.~~---
Mike
//  SilverTip257  //


Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread John R Pierce
On 6/12/2014 9:38 AM, SilverTip257 wrote:
> A former employer of mine contracts out destruction of conventional hard
> drives with a machine that has a hydraulic arm and a wedge.  Effectively
> bending the platters and some of the drive.  Hardware destruction (prior to
> recycling/disposal) in certain business sectors is common place.

my employer uses a service that shows up monthly and has a metal chipper 
in the back of their truck.  disks go in and are fully ground up into 
metal chips, under the supervision of our security people.



-- 
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread m . roth
SilverTip257 wrote:
> On Thu, Jun 12, 2014 at 10:35 AM, James B. Byrne 
> wrote:
>> On Wed, June 11, 2014 18:31, Frank Cox wrote:

>> I have a question about SSD respecting security.  Recently I have been
>> investigating sanitizing these devices, together with 'smart-phones,
>> tablets and pads which use flash memory persistent storage. Not to
mention the
>> ubiquitous USB 'memory stick'.  I have come to the rather unsettling
>> conclusion that it is effectively impossible to 'sanitize' these things
>> short of complete and utter physical destruction, preferably by
incineration.
>> Is this in fact the case?

> I've come to the same conclusion.  Due to controller wear leveling and
> TRIM, it is difficult to fully sanitize a flash memory (USB flash, SSD).
>
> A former employer of mine contracts out destruction of conventional hard
> drives with a machine that has a hydraulic arm and a wedge.  Effectively
> bending the platters and some of the drive.  Hardware destruction (prior
> to recycling/disposal) in certain business sectors is common place.

Where I work, some of the systems (which are behind an *internal*
firewall) have PII and HIPAA data - we're serious about protecting that
stuff. When we surplus a server, the drive must be certified to be
sanitized - that is, for the ones I do, which is most of them, I need to
sign my name to a form that gets stuck on the outside that it's sanitized,
making me *personally* responsible for that.

We use two methods: for the drives that are totally dead, or *sigh* the
SCSI drives, they get deGaussed. For SATA that's still running, we use
DBAN. *Great* software. From what I've read, one pass would probably be
good enough, given how data's written these days. With my name certifying
it, I do paranoid, and tell DBAN the full 7-pass, DoD 5220.22-M. I
*really* don't think anyone's getting anything off that.
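The overwrite idea is easy to demonstrate on a scratch file with shred from coreutils (a sketch; on real hardware the target would be a block device, and remapped sectors and SSD wear leveling remain outside its reach):

```shell
# Demonstrate an overwrite pass on a scratch file (sketch).  On real
# hardware the target would be a block device such as /dev/sdX.
f=$(mktemp)
printf 'top secret payroll data' > "$f"

# One random pass plus a final zeroing pass -- a minimal DBAN-style run:
shred -n 1 -z "$f"

# After the zeroing pass, nothing but NUL bytes should remain:
leftover=$(tr -d '\0' < "$f" | wc -c)
echo "non-zero bytes left: $leftover"
rm -f "$f"
```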

We don't have any SSDs, so I can't speak to that. Bet you could deGauss
them, easily enough. Or maybe stick 'em on a burner on a stove to get over
the Curie point*

  mark

* Techniques that a techie group I belong to refer to as "things to do in
someone else's kitchen"



Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread John R Pierce
On 6/12/2014 10:12 AM, m.r...@5-cent.us wrote:
> We use two methods: for the drives that are totally dead, or*sigh*  the
> SCSI drives, they get deGaussed. For SATA that's still running, we use
> DBAN.*Great*  software. From what I've read, one pass would probably be
> good enough, given how data's written these days. With my name certifying
> it, I do paranoid, and tell DBAN the full 7-pass, DoD 5220.22-M. I
> *really*  don't think anyone's getting anything off that.

if the drive has remapped tracks, there's stale data on there you can't 
erase with DBAN.

> We don't have any SSDs, so I can't speak to that. Bet you could deGauss
> them, easily enough. Or maybe stick 'em on a burner on a stove to get over
> the Curie point*

degaussing would do nothing to flash memory; it's semiconductor, not 
magnetic.


-- 
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Warren Young
On 6/11/2014 07:11, Timothy Murphy wrote:
>
> Does XFS have any advantages over ext4 for normal users, eg with laptops?

If you graph machine size -- in whatever dimension you like -- vs number 
deployed, I think you'd find all laptops over on the left side of the 
CentOS deployment curve.  I'd expect that curve to be a skewed bell, 
with a long tail of huge servers over on the right side.

ext* came up from the consumer world at the same time that XFS was 
coming down from the Big Iron world.  The gap between them has thus been 
shrinking, so that as implemented in EL7, ext4 has an awful lot of 
overlap with XFS in terms of features and capabilities.

XFS still offers a lot more upside, and is more appropriate for the 
server systems that CentOS will most often be used on.  It is a more 
sensible default, being the right answer for the biggest subset of the 
CentOS user base.

Since you're over there on the left side of the curve, you may well 
decide that ext4 still makes more sense for you.

That said, there really isn't anything about laptop use that argues 
*against* using XFS.  It isn't a perfect filesystem, but then, neither 
is ext4.

> I've only seen it touted for machines with enormous disks, 200TB plus.

ext4 in EL7 only goes to 50 TiB, whereas XFS is effectively 
unlimited[*].  Red Hat will only support up to 500 TiB with XFS in EL7, 
but I suspect it isn't due to any XFS implementation limit, but just a 
more professional way for them to say "Don't be silly."



[*] The absolute XFS filesystem size limit is about 8 million terabytes, 
which requires about 500 cubic meters of the densest HDDs available 
today.  You'd need 13 standard shipping containers (1 TEU) to transport 
them all, without any space for packing material.  If we add 20% more 
disks for a reasonable level of redundancy and put them in 24-disk 4U 
chassis and mount those chassis in full-size racks, we need about half a 
soccer field of floor space -- something like ~4000 m^2 -- after 
accounting for walking space, network switches, redundant power, and 
whatnot to run it all.  It's so many HDDs that you'd need four or five 
full-time employees in 3 shifts to respond to drive failures fast enough 
to keep an 8 EiB array from falling over due to insufficient redundancy. 
  You simply wouldn't make a single XFS filesystem that big today, so 
QED: effectively unlimited.
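A quick awk sanity check of those figures, assuming 2014-era drives of ~6 TB at roughly 390 cm^3 apiece (both drive numbers are assumptions, not from the post above):

```shell
# Back-of-the-envelope check of the footnote (sketch).  Drive capacity
# and volume are 2014-era assumptions: 6 TB per ~390 cm^3 3.5" drive.
summary=$(awk 'BEGIN {
    fs_bytes  = 2^63                      # 8 EiB, the XFS on-disk limit
    drives    = fs_bytes / (6 * 1e12)     # at 6 TB per drive
    volume_m3 = drives * 390e-6           # at ~390 cm^3 per drive
    printf "capacity: %.1f million TB\n", fs_bytes / 1e18
    printf "drives:   %.2f million\n", drives / 1e6
    printf "volume:   %.0f m^3\n", volume_m3
}')
echo "$summary"
```

Same ballpark as the footnote: around 9 million TB, ~1.5 million drives, and several hundred cubic meters of hardware.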


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 6:45 AM, Timothy Murphy  wrote:
>
>>> Does XFS have any advantages over ext4 for normal users, eg with laptops?
>>> I've only seen it touted for machines with enormous disks, 200TB plus.
>
>> It is generally better at handling a lot of files - faster
>> creation/deletion when there are a large number in the same directory.
>
> I'm wondering if, for the home user, BackupPC would be a good test of that?
> Otherwise I can't think of a case where I would have a very large number
> of files in the same directory.

There are users on the backuppc list that recommend XFS - but for
'home' size systems it probably doesn't matter that much.

>>The only down side for a long time has been on 32bit machines where
>> the RH default 4k kernel stacks were too small.
>
> Do you mean that that is a down side of XFS, or ext4?

XFS - it needs more working space.  Red Hat's choice to configure the
kernel for 4k stacks on 32bit systems is probably the reason XFS
wasn't the default filesystem in earlier versions.  And now that I
think of it, this may be an issue again if CentOS revives 32bit
support.

>>> Does XFS have the same problems that LVM has if there are disk faults?
>
>> You can't really expect any file system to work if the disk underneath
>> is bad.  Raid is your friend there.
>
> In my meagre experience, when a disk shows signs of going bad
> I have been able to copy most of an ext3/ext4 disk before complete failure,
> while LVM disks have been beyond (my) rescue.
> Actually, this was in the time of SCSI disks,
> which seemed quite good at giving advance warning of failure.

I'm not sure what controls the number of soft retries before giving up
at the hardware layer.  My only experience is that with RAID1 pairs a
mirror drive seems to get kicked out at the first hint of an error but
the last remaining drive will try much harder before giving up.
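On drives that support it, that retry window is tunable: SCT Error Recovery Control (the "TLER" setting on RAID-class drives) caps how long the drive itself retries, and the kernel keeps its own per-device command timeout. A sketch (/dev/sda is a placeholder; many desktop drives don't support SCT ERC, and the command is only printed when smartctl is absent):

```shell
# Inspect the knobs that bound how long a failing read is retried
# (sketch; /dev/sda is a placeholder).
show_error_recovery() {
    if command -v smartctl >/dev/null 2>&1; then
        smartctl -l scterc "$1" || true     # drive-side recovery limits
    else
        echo "would run: smartctl -l scterc $1"
    fi
    # Kernel-side SCSI command timeout (seconds) for the same device:
    t="/sys/block/$(basename "$1")/device/timeout"
    if [ -r "$t" ]; then
        printf 'kernel timeout: '
        cat "$t"
    fi
}
show_error_recovery "${dev:-/dev/sda}"

# A common setting for RAID members is ~7 s (units are 0.1 s):
#   smartctl -l scterc,70,70 /dev/sda
```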

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Jeremy Hoel
This little bit here is awesome and made me laugh.  Thanks!



On Thu, Jun 12, 2014 at 5:27 PM, Warren Young  wrote:

>
> [*] The absolute XFS filesystem size limit is about 8 million terabytes,
> which requires about 500 cubic meters of the densest HDDs available
> today.  You'd need 13 standard shipping containers (1 TEU) to transport
> them all, without any space for packing material.  If we add 20% more
> disks for a reasonable level of redundancy and put them in 24-disk 4U
> chassis and mount those chassis in full-size racks, we need about half a
> soccer field of floor space -- something like ~4000 m^2 -- after
> accounting for walking space, network switches, redundant power, and
> whatnot to run it all.  It's so many HDDs that you'd need four or five
> full-time employees in 3 shifts to respond to drive failures fast enough
> to keep an 8 EiB array from falling over due to insufficient redundancy.
>   You simply wouldn't make a single XFS filesystem that big today, so
> QED: effectively unlimited.


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 12:27 PM, Warren Young  wrote:
>
>
> [*] The absolute XFS filesystem size limit is about 8 million terabytes,
>

Isn't there some ratio of RAM to filesystem size (or maybe number of
files or inodes) that you need to make it through an fsck?

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread James B. Byrne


On Thu Jun 12 17:21:43 UTC 2014, John R Pierce pierce at hogranch.com wrote:

> On 6/12/2014 10:12 AM, m.roth at 5-cent.us wrote:
>> We use two methods: for the drives that are totally dead, or*sigh*  the
>> SCSI drives, they get deGaussed. For SATA that's still running, we use
>> DBAN.*Great*  software. From what I've read, one pass would probably be
>> good enough, given how data's written these days. With my name certifying
>> it, I do paranoid, and tell DBAN the full 7-pass, DoD 5220.22-M. I
>> *really*  don't think anyone's getting anything off that.
>
> if the drive has remapped tracks, there's stale data on there you can't
> erase with DBAN.
>
>> We don't have any SSDs, so I can't speak to that. Bet you could deGauss
>> them, easily enough. Or maybe stick 'em on a burner on a stove to get over
>> the Curie point*
>
> degaussing would do nothing to flash memory, its semiconductor,
> not magnetic.

An EMP gun on the other hand. . .

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne  mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3



Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread m . roth
James B. Byrne wrote:
> On Thu Jun 12 17:21:43 UTC 2014, John R Pierce pierce at hogranch.com
> wrote:
>
>> On 6/12/2014 10:12 AM, m.roth at 5-cent.us wrote:
>>> We use two methods: for the drives that are totally dead, or*sigh*  the
>>> SCSI drives, they get deGaussed. For SATA that's still running, we use
>>> DBAN.*Great*  software. From what I've read, one pass would probably be
>>> good enough, given how data's written these days. With my name
>>> certifying it, I do paranoid, and tell DBAN the full 7-pass, DoD
5220.22-M. I
>>> *really*  don't think anyone's getting anything off that.
>>
>> if the drive has remapped tracks, there's stale data on there you can't
>> erase with DBAN.
>>
>>> We don't have any SSDs, so I can't speak to that. Bet you could deGauss
>>> them, easily enough. Or maybe stick 'em on a burner on a stove to get
>>> over
>>> the Curie point*
>>
>> degaussing would do nothing to flash memory, its semiconductor,
>> not magnetic.
>
> An EMP gun on the other hand. . .

I could try out my new welding rig

  mark



Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Paul Heinlein
On Thu, 12 Jun 2014, Jeremy Hoel wrote:

> This little bit here is awesome and made me laugh.  Thanks!

Agreed. Warren wins the Internet today.

> On Thu, Jun 12, 2014 at 5:27 PM, Warren Young  wrote:
>
>>
>> [*] The absolute XFS filesystem size limit is about 8 million 
>> terabytes, which requires about 500 cubic meters of the densest 
>> HDDs available today.  You'd need 13 standard shipping containers 
>> (1 TEU) to transport them all, without any space for packing 
>> material.  If we add 20% more disks for a reasonable level of 
>> redundancy and put them in 24-disk 4U chassis and mount those 
>> chassis in full-size racks, we need about half a soccer field of 
>> floor space -- something like ~4000 m^2 -- after accounting for 
>> walking space, network switches, redundant power, and whatnot to 
>> run it all.  It's so many HDDs that you'd need four or five 
>> full-time employees in 3 shifts to respond to drive failures fast 
>> enough to keep an 8 EiB array from falling over due to insufficient 
>> redundancy. You simply wouldn't make a single XFS filesystem that 
>> big today, so QED: effectively unlimited.



Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Warren Young
On 6/12/2014 12:54, Paul Heinlein wrote:
> On Thu, 12 Jun 2014, Jeremy Hoel wrote:
>
>> This little bit here is awesome and made me laugh.  Thanks!
>
> Agreed. Warren wins the Internet today.

Thank you, thank you.

Now go read some "What if?" to see how a true master plays this game.


[*] https://what-if.xkcd.com/


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Jeremy Hoel
Oh yeah... He does great work.  I'm looking forward to his book when it
comes out.


On Thu, Jun 12, 2014 at 7:27 PM, Warren Young  wrote:

> On 6/12/2014 12:54, Paul Heinlein wrote:
> > On Thu, 12 Jun 2014, Jeremy Hoel wrote:
> >
> >> This little bit here is awesome and made me laugh.  Thanks!
> >
> > Agreed. Warren wins the Internet today.
>
> Thank you, thank you.
>
> Now go read some "What if?" to see how a true master plays this game.
>
>
> [*] https://what-if.xkcd.com/


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread m . roth
Paul Heinlein wrote:
> On Thu, 12 Jun 2014, Jeremy Hoel wrote:
>
>> This little bit here is awesome and made me laugh.  Thanks!
>
> Agreed. Warren wins the Internet today.
>
>> On Thu, Jun 12, 2014 at 5:27 PM, Warren Young  wrote:
>>>
>>> [*] The absolute XFS filesystem size limit is about 8 million
>>> terabytes, which requires about 500 cubic meters of the densest HDDs
>>> available today.  You'd need 13 standard shipping containers (1 TEU)
>>> to transport them all, without any space for packing
>>> material.  If we add 20% more disks for a reasonable level of
>>> redundancy and put them in 24-disk 4U chassis and mount those
>>> chassis in full-size racks, we need about half a soccer field of floor
space --
>>> something like ~4000 m^2 -- after accounting for walking space, network
>>> switches, redundant power, and whatnot to run it all.  It's so many HDDs
>>> that you'd need four or five
>>> full-time employees in 3 shifts to respond to drive failures fast
enough to keep an 8 EiB array from falling over due to insufficient
redundancy. You simply wouldn't make a single XFS filesystem that big
today, so QED: effectively unlimited.

Let's see, how many grad students did the first digital computer need to
replace the burned-out tubes? Was it about that many?


But I agree, he does win the 'Net for today. I propose we award him one
(1) valuable resource... say, an IPv4 address. 

mark





Re: [CentOS] issue_discards in lvm.conf

2014-06-12 Thread Devin Reade
--On Thursday, June 12, 2014 10:35:26 AM -0400 "James B. Byrne" 
 wrote:

> I have a question about SSD respecting security.  [...]
> I have come to the rather
> unsettling conclusion that it is effectively impossible to 'sanitize'
> these things short of complete and utter physical destruction, preferably
> by incineration.

I would concur with that assessment.  Similar to what others have
mentioned, with spinning platters I use DBAN for relatively
insensitive disks and physical destruction for sensitive stuff
(preferably after DBAN, if it is still a working disk). When it comes
to SSD and other memory-based technologies, physical destruction only.

A couple of weeks ago I was buying some consumer-grade disks for a
particular project.  The sales guy was of course trying to up-sell
me on their in-store replacement plan.  I tried to explain to him
that even if I thought such plans were actually worth something, it
would be pointless because I *never* RMA a hard drive.

I think he was dense; he didn't seem to grasp the concept.

Devin



Re: [CentOS] yum install to a portable location

2014-06-12 Thread Robert Stuart
Hi Dan,

Chroot gets you a space that "looks" like it is a separate system. Given 
this is R, I assume you probably want this for HPC-like 
purposes... Could I suggest building your own version of R and 
installing it into an NFS area? You may also wish to investigate the 
facilities provided by the environment-modules package - they can be 
quite handy (despite the name, they are not for environmental monitoring).

My R users tend to need the latest versions of R. I configure R with 
something like:
./configure --prefix=/nfs_apps --enable-R-shlib --with-x --with-tcltk
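Spelled out a bit further as a command sketch (printed rather than executed here; the prefix and configure flags are from above, while the CRAN mirror URL pattern is an assumption):

```shell
# Build steps for a shared R install under /nfs_apps (sketch; printed,
# not executed -- the CRAN mirror URL is an assumption).
build_r() {
    ver="$1"; prefix="$2"
    echo "curl -LO https://cran.r-project.org/src/base/R-3/R-${ver}.tar.gz"
    echo "tar xzf R-${ver}.tar.gz && cd R-${ver}"
    echo "./configure --prefix=${prefix} --enable-R-shlib --with-x --with-tcltk"
    echo "make && make install"
    echo "# then on each client: export PATH=${prefix}/bin:\$PATH"
}
build_r 3.1.0 /nfs_apps
```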

Regards
Robert

On 06/12/2014 05:12 AM, Dan Hyatt wrote:
> What will chroot get me?
> I have root on the server, and I have a filesystem mounted on all servers.
>
> What I want to do is contain the binaries and dependencies on the NFS
> filesystem.
> On 6/11/2014 11:30 AM, Andrew Holway wrote:
>> Can you use chroot?
>>
>>
>> On 11 June 2014 18:26, Dan Hyatt  wrote:
>>
>>> I have googled, read the man page, and such.
>>>
>>> What I am trying to do is install applications to a NFS mounted drive,
>>> where the libraries and everything are locally installed on that
>>> filesystem so that it is portable across servers (I have over 100
>>> servers which each need specific applications installed via yum and we
>>> do not want to install 100 copies).
>>>
>>> We tried yum relocate, but it was not available on CentOS 6.4
>>>
>>> and
>>> yum --nogpgcheck localinstall R-3.1.0-5.el6.x86_64
>>>
>>> I want the binaries and all dependencies in the application filesystem
>>> which is remote mounted on all servers.
>>>
>>> Thanks,
>>>
>>> --
>>>
>>> Dan Hyatt
>>>



Re: [CentOS] Mailman 2.1.16 RPMs?

2014-06-12 Thread Devin Reade
--On Wednesday, April 30, 2014 07:52:56 AM -0400 Robert Heller
 wrote:

> Before I go through the hassle of building it myself I want to know if
> someone  else has built RPMS for Mailman 2.1.16.

Following up on this, has anyone got a documented procedure, .spec
files, or whatever for running 2.1.16 or later on CentOS 6? (Other
than the brute-force compile-from-source option?)
