From: Kaushal Shriyan
> I get the information below. Please help me understand "SMART
> Health Status: FAILURE PREDICTION THRESHOLD EXCEEDED: ascq=0x5
> [asc=5d, ascq=5]" and what that error means.
The way I understand it:
- Disks accumulate errors over their lifetime.
- After a defined number of errors, the drive crosses its vendor-defined
  failure-prediction threshold and starts reporting that it expects to fail
  soon, so back up the data and plan to replace the disk (see the smartctl
  sketch below).
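A quick way to see what tripped the threshold, assuming smartmontools is
available and the drive is /dev/sda (both assumptions; drives behind a
hardware RAID controller usually need an extra -d option such as -d megaraid,N):

  # install the tools if they aren't already present
  yum install smartmontools

  # overall SMART health verdict (PASSED / FAILED)
  smartctl -H /dev/sda

  # full attribute table plus the drive's error log; look for attributes
  # whose normalized VALUE has dropped to or below THRESH
  smartctl -a /dev/sda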
On 07/20/2012 02:34 PM, m.r...@5-cent.us wrote:
> Ned Slider wrote:
>> On 20/07/12 16:55, m.r...@5-cent.us wrote:
>>> Jay Leafey wrote:
On 07/20/2012 10:32 AM, m.r...@5-cent.us wrote:
> Now that he's up, which was my highest priority, I'm back to looking
> around. I did a yum clean all
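For reference, the usual cache-reset sequence when stale or broken repository
metadata is suspected (a minimal sketch; the snippet doesn't show which repos
were actually involved):

  # discard all cached metadata and packages, then rebuild the cache
  yum clean all
  yum makecache

  # see what the repositories now report as available updates
  yum check-update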
Johnny Hughes wrote:
> On 07/20/2012 02:34 PM, m.r...@5-cent.us wrote:
>> Ned Slider wrote:
>>> On 20/07/12 16:55, m.r...@5-cent.us wrote:
Jay Leafey wrote:
> On 07/20/2012 10:32 AM, m.r...@5-cent.us wrote:
>> Now that he's up, which was my highest priority, I'm back to looking
>>
Hello,
Where can I find remmina for CentOS 6.3?
Thanks,
Lázaro.
On 30.07.2012 18:04, Lázaro Morales wrote:
> Hello,
>
> Where can I find remmina for CentOS 6.3?
http://pkgs.org/search/?keyword=remmina&search_on=name&distro=82&arch=32-bit
> Thanks,
> Lázaro.
Alexander
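Once a suitable package turns up in that search, installation is
straightforward; a minimal sketch (the RPM filename below is a placeholder for
whatever the search actually returns, and remmina may also be available from
EPEL if that repository is enabled on the box):

  # install a locally downloaded RPM and let yum resolve its dependencies
  yum localinstall remmina-<version>.el6.x86_64.rpm

  # or, if EPEL carries it for this release:
  yum --enablerepo=epel install remmina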
I just installed gnome-devel-docs with the objective of having a local copy of
the GNOME hig-book, and now that I have it installed I don't know how to use it.
I had assumed that it would show up in devhelp, but it doesn't. What I have is
a bunch of XML files in /usr/share/gnome/help/hig-book/C.
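Two ways to read those DocBook sources locally; a sketch, assuming the
top-level file is hig-book.xml (check the actual filenames in that directory):

  # open the book directly in the GNOME help browser
  yelp /usr/share/gnome/help/hig-book/C/hig-book.xml

  # or render it to HTML with xmlto and read it in any browser
  mkdir -p ~/hig-html
  xmlto -o ~/hig-html html /usr/share/gnome/help/hig-book/C/hig-book.xml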
I'm trying to rsync an 8TB data folder containing squillions of small files and
it's taking forever (i.e. weeks) to get anywhere.
I'm assuming the slow bit is check-summing everything with a single CPU (even
though it's on a 12-core server ;-( )
Is it possible to do something simple like scp the whole dir in one go so
they're duplicates in the first instance, then get rsync to just keep them in
sync without an initial transfer? Or is there a better way?
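One common pattern (a sketch only; host and path names are placeholders): seed
the destination once with a single streaming copy, then let rsync handle the
deltas. Note that rsync only checksums whole files when given -c; by default it
compares size and mtime, so the per-file overhead is usually stat calls and
latency rather than CPU.

  # one-off bulk seed: stream a tar over ssh instead of per-file scp
  tar -C /data -cf - . | ssh backuphost 'tar -C /data -xf -'

  # afterwards, keep the copies in sync; files whose size and mtime already
  # match are skipped without being checksummed
  rsync -a --delete /data/ backuphost:/data/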
On 07/30/12 10:05 PM, Smithies, Russell wrote:
> I'm trying to rsync an 8TB data folder containing squillions of small files
> and it's taking forever (i.e. weeks) to get anywhere.
> I'm assuming the slow bit is check-summing everything with a single CPU (even
> though it's on a 12-core server ;-(
On an LDAP-enabled CentOS 6.3 x64 system, I'm trying to make it so home
directories are auto-created. I added this:
session required pam_mkhomedir.so skel=/etc/skel/ umask=0077
to my /etc/pam.d/system-auth
And it does nothing. I restarted messagebus (I've seen references to
that) and sshd,
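A couple of things that usually get this working on EL6 (a sketch, not a
definitive fix): sshd's PAM configuration typically includes
/etc/pam.d/password-auth rather than system-auth for the session phase, so the
line needs to be there too; authconfig can wire it into both files, and the
oddjob-based variant plays better with SELinux.

  # simplest route: let authconfig add mkhomedir to both PAM stacks
  authconfig --enablemkhomedir --update

  # or add the same session line by hand to /etc/pam.d/password-auth:
  #   session     required      pam_mkhomedir.so skel=/etc/skel/ umask=0077

  # SELinux-friendly alternative: have oddjobd create the directory over D-Bus
  yum install oddjob-mkhomedir
  chkconfig oddjobd on && service oddjobd start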
Nicolas Ross wrote:
> On an LDAP-enabled CentOS 6.3 x64 system, I'm trying to make it so home
> directories are auto-created. I added this:
>
> session required pam_mkhomedir.so skel=/etc/skel/ umask=0077
>
> to my /etc/pam.d/system-auth
>
> And it does nothing. I restarted messagebus (I've
- Original Message -
| On an LDAP-enabled CentOS 6.3 x64 system, I'm trying to make it so home
| directories are auto-created. I added this:
|
| session required pam_mkhomedir.so skel=/etc/skel/ umask=0077
|
| to my /etc/pam.d/system-auth
|
| And it does nothing. I restarted messagebus (I've
On 07/31/2012 07:05 AM, Smithies, Russell wrote:
> Is it possible to do something simple like scp the whole dir in one go so
> they're duplicates in the first instance, then get rsync to just keep them in
> sync without an initial transfer?
>
> Or is there a better way?
I use tar and ttcp for an initial transfer like that.
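The general shape of that approach (a sketch only; ttcp option syntax varies
between builds, so nc stands in here for the raw-TCP leg, and host/path names
are placeholders):

  # receiving host: listen and unpack the stream
  # (traditional netcat wants: nc -l -p 5000)
  nc -l 5000 | tar -C /data -xf -

  # sending host: stream the tree over a plain, unencrypted TCP connection
  tar -C /data -cf - . | nc backuphost 5000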