>> > How about hardened? Does ZFS have any problems interacting with
>> > grsecurity or a hardened profile?
>>
>> Has anyone tried hardened and ZFS together?
>
> I did - I had some problems, but I'm not sure if they were caused by the
> combination of ZFS and hardened. There were some issues updat
On Thu, Sep 19, 2013 at 06:41:47PM -0400, Douglas J Hunley wrote:
>
> On Tue, Sep 17, 2013 at 12:32 PM, wrote:
>
> So do I need that overlay at all, or just emerge zfs and its module?
>
>
> You do *not* need the overlay. Everything you need is in portage nowadays
>
Afaik the overlay even com
On Fri, Sep 20, 2013 at 11:20:53AM -0700, Grant wrote:
> > How about hardened? Does ZFS have any problems interacting with
> > grsecurity or a hardened profile?
>
> Has anyone tried hardened and ZFS together?
>
Hi,
I did - I had some problems, but I'm not sure if they were caused by the
combination of ZFS and hardened.
On Fri, Sep 20, 2013 at 6:20 AM, Tanstaafl wrote:
> Hi all,
>
> Being that one of the big reasons I stopped using RAID5/6 was the rebuild
> times - can be DAYS for a large array - I am very curious if anyone has
> done, or knows of anyone who has done any tests comparing rebuild times when
> using slow SATA, faster SAS and fastest SSD drives.
> How about hardened? Does ZFS have any problems interacting with
> grsecurity or a hardened profile?
Has anyone tried hardened and ZFS together?
- Grant
Am 20.09.2013 18:50, schrieb Canek Peláez Valdés:
>> OK. I send this message now and test another few reboots.
>
> Forgot to mention it: I also enabled mdadm.service.
That service is enabled here as well and running fine.
# systemctl status lvm2-activation-net.service
lvm2-activation-net.serv
On Fri, Sep 20, 2013 at 11:17 AM, Stefan G. Weichinger wrote:
>
> I haven't yet worked through all your suggestions/descriptions.
>
> Edited USE-flags and dracut-modules, worked around bug
>
> https://bugs.gentoo.org/show_bug.cgi?id=485202
>
> and rebuilt kernel and initrd.
>
> Didn't activate LVs
I haven't yet worked through all your suggestions/descriptions.
Edited USE-flags and dracut-modules, worked around bug
https://bugs.gentoo.org/show_bug.cgi?id=485202
and rebuilt kernel and initrd.
Didn't activate LVs ...
Now I edited fstab:
I had the option "systemd.automount" enabled, like
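An fstab entry using systemd's automount generator typically looks like the sketch below (the device and mount point are hypothetical; note the option is spelled x-systemd.automount, not systemd.automount):

```
# /etc/fstab -- hypothetical entry using systemd's automount generator
/dev/vg0/data  /data  ext4  noatime,x-systemd.automount  0 2
```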
On 09/20/2013 04:37 AM, Dale wrote:
> Alexander Kapshuk wrote:
>> On 09/19/2013 10:50 PM, Alan McKinnon wrote:
>>> On 19/09/2013 20:58, Alexander Kapshuk wrote:
Howdy,
Is having duplicate packages a good or a bad thing in gentoo? I'm clear
about having duplicate packages for the
Am 19.09.2013 06:47, schrieb Grant:
>>>> turn off readahead. ZFS' own readahead and the kernel's clash - badly.
>>>> Turn off kernel's readahead for a visible performance boon.
>>> You are probably not talking about ZFS readahead but about the ARC.
>> which does prefetching. So yes.
> I'm taking no
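One way to turn off the kernel-side readahead discussed above is a udev rule that zeroes read_ahead_kb for the disks backing the pool (a sketch only; the rule file name and the sd[a-z] match are assumptions about your setup):

```
# /etc/udev/rules.d/90-zfs-readahead.rules  (hypothetical file name)
# zero the kernel readahead on whole disks used by ZFS
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/read_ahead_kb}="0"
```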
Am 20.09.2013 10:46, schrieb Canek Peláez Valdés:
> Sorry I took my time, I was busy.
>
> Well, yours is a complex setup. This is a similar, although simpler, version:
At first: thank you for the extended test setup you did and described
... I will dig through it as soon as I find time ... I am q
* Pandu Poluan [130920 03:45]:
> Hello list!
>
> Does anyone know the meaning of the 'number between brackets' in the
> "power management" line of /proc/cpuinfo?
>
> For instance (I snipped the "flags" line to not clutter the email):
>
> processor : 0
> vendor_id : AuthenticAMD
> cpu family : 21
Hi all,
Being that one of the big reasons I stopped using RAID5/6 was the
rebuild times - can be DAYS for a large array - I am very curious if
anyone has done, or knows of anyone who has done any tests comparing
rebuild times when using slow SATA, faster SAS and fastest SSD drives.
Of course
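For a rough feel of why rebuilds can take days: a back-of-the-envelope estimate is drive capacity divided by sustained rebuild throughput. The numbers below are illustrative assumptions, not measurements:

```shell
# rough rebuild-time estimate: time = capacity / sustained throughput
size_mb=$((2 * 1000 * 1000))   # 2 TB drive, in MB (decimal)
speed_mb=50                    # slow SATA under rebuild load, MB/s (assumption)
seconds=$(( size_mb / speed_mb ))
echo "$(( seconds / 3600 )) hours"
```

Swap in SAS or SSD throughput figures to see how the estimate shrinks; real rebuild times also depend on how full the array is and on concurrent load.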
On 2013-09-20 5:17 AM, Joerg Schilling wrote:
Douglas J Hunley wrote:
> 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
> You'll see people argue for both sides at this size, but the 'saner
> default' would be to use RAIDZ2. You're going to lose storage space, but
> gain an extra parity drive (think RAID6).
Douglas J Hunley wrote:
> 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
> You'll see people argue for both sides at this size, but the 'saner
> default' would be to use RAIDZ2. You're going to lose storage space, but
> gain an extra parity drive (think RAID6). Consumer gra
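The space trade-off mentioned above is simple arithmetic: RAIDZ reserves one drive's worth of capacity for parity, RAIDZ2 two. A quick sketch for a hypothetical six-drive pool of 1 TB disks:

```shell
# usable capacity: (drives - parity drives) * drive size
n=6            # six 1 TB drives (example pool)
size_tb=1
raidz1=$(( (n - 1) * size_tb ))   # one parity drive
raidz2=$(( (n - 2) * size_tb ))   # two parity drives (think RAID6)
echo "raidz1: ${raidz1} TB usable"
echo "raidz2: ${raidz2} TB usable"
```

So moving from RAIDZ to RAIDZ2 here costs one drive of capacity in exchange for surviving a second failure during a resilver.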
On Fri, Sep 13, 2013 at 7:42 AM, Stefan G. Weichinger wrote:
> Am 12.09.2013 20:23, schrieb Canek Peláez Valdés:
>
>> Stefan, what initramfs are you using?
>
> dracut, run via your kerninst-script.
>
>> Could you please explain how is exactly your layout? From drives to
>> partitions to PVs, VGs a
Hello list!
Does anyone know the meaning of the 'number between brackets' in the
"power management" line of /proc/cpuinfo?
For instance (I snipped the "flags" line to not clutter the email):
processor : 0
vendor_id : AuthenticAMD
cpu family : 21
model : 2
model name
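As far as I can tell, the bracketed numbers on the "power management" line are power-management feature bits the kernel has no name for: it prints the raw bit position in brackets instead of a flag name. A small sketch that picks them out of a sample line (the sample values are made up):

```shell
# sample "power management" value; the bracketed tokens are invented examples
pm="ts ttp tm 100mhzsteps hwpstate cpb [13] [14]"
unnamed=""
for tok in $pm; do
  case $tok in
    \[*\])  # bracketed token: a bit the kernel could not name
      n=$(echo "$tok" | tr -d '[]')
      unnamed="$unnamed $n"
      echo "unnamed power-management bit: $n"
      ;;
  esac
done
```

On a real box you would feed it the line from /proc/cpuinfo; checking the CPU vendor's documentation for that bit position is the only way to learn what the bit actually means.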