From: Andi Kleen
We had a case where a 4-socket system spent >80% of its total CPU time
contending on the global urandom nonblocking pool spinlock. While the
application could probably have used its own PRNG, it may have valid
reasons to use the best possible key for different session keys.
On Thu, Sep 24 2015, Andi Kleen wrote:
>
> v2: Fix name of pool 0. Fix race with interrupts. Make
> iteration loops slightly more efficient. Add ifdefs to avoid
> any extra code on non-NUMA. Delay other pool use to when
> the original pool initialized and initialize the pools from
> pool 0. Add c
On 2015-09-25 16:24, Theodore Ts'o wrote:
On Fri, Sep 25, 2015 at 03:07:54PM -0400, Austin S Hemmelgarn wrote:
Interestingly, based on what dieharder is already saying about performance,
/dev/urandom is slower than AES_OFB (at least, on this particular system,
happy to provide hardware specs if someone wants).
On 2015-09-25 15:07, Austin S Hemmelgarn wrote:
On 2015-09-25 07:41, Austin S Hemmelgarn wrote:
On 2015-09-24 16:14, Theodore Ts'o wrote:
On Thu, Sep 24, 2015 at 03:11:23PM -0400, Austin S Hemmelgarn wrote:
That is a startling result. Please say what architecture, kernel
version, dieharder version and commandline arguments you are using to
get 10% WEAK or FAILED assessments from dieharder on /dev/urandom.
On Fri, Sep 25, 2015 at 03:07:54PM -0400, Austin S Hemmelgarn wrote:
>
> Interestingly, based on what dieharder is already saying about performance,
> /dev/urandom is slower than AES_OFB (at least, on this particular system,
> happy to provide hardware specs if someone wants).
Yeah, not surprised
On 2015-09-24 16:14, Theodore Ts'o wrote:
On Thu, Sep 24, 2015 at 03:11:23PM -0400, Austin S Hemmelgarn wrote:
That is a startling result. Please say what architecture, kernel
version, dieharder version and commandline arguments you are using to
get 10% WEAK or FAILED assessments from dieharder on /dev/urandom.
On Thu, Sep 24, 2015 at 03:11:23PM -0400, Austin S Hemmelgarn wrote:
> >That is a startling result. Please say what architecture, kernel
> >version, dieharder version and commandline arguments you are using to
> >get 10% WEAK or FAILED assessments from dieharder on /dev/urandom.
>
> I do not remember
On Thu, Sep 24, 2015 at 03:11:23PM -0400, Austin S Hemmelgarn wrote:
> I will make a point however to run some tests over the weekend on a
> current kernel version (4.2.1), with the current dieharder version I
> have available (3.31.1).
Please report your findings. If urandom is worse than AES_OFB
On 2015-09-24 12:52, Jeff Epler wrote:
On Thu, Sep 24, 2015 at 12:00:44PM -0400, Austin S Hemmelgarn wrote:
I've had cases where I've done thousands of dieharder runs, and it
failed almost 10% of the time, while stuff like mt19937 fails in
otherwise identical tests only about 1-2% of the time
On Thu, Sep 24, 2015 at 12:00:44PM -0400, Austin S Hemmelgarn wrote:
> I've had cases where I've done thousands of dieharder runs, and it
> failed almost 10% of the time, while stuff like mt19937 fails in
> otherwise identical tests only about 1-2% of the time
That is a startling result. Please say what architecture, kernel
version, dieharder version and commandline arguments you are using to
get 10% WEAK or FAILED assessments from dieharder on /dev/urandom.
On 2015-09-24 09:12, Theodore Ts'o wrote:
On Thu, Sep 24, 2015 at 07:37:39AM -0400, Austin S Hemmelgarn wrote:
Using /dev/urandom directly, yes, that doesn't make sense, because it
consistently returns non-uniformly random numbers when used to generate larger
amounts of entropy than the blocking pool can source
On Thu, Sep 24, 2015 at 07:37:39AM -0400, Austin S Hemmelgarn wrote:
> Using /dev/urandom directly, yes that doesn't make sense because it
> consistently returns non-uniformly random numbers when used to generate larger
> amounts of entropy than the blocking pool can source
Why do you think this is
On 2015-09-23 19:28, Andi Kleen wrote:
I'd almost say that making the partitioning level configurable at
build time might be useful. I can see possible value to being able
to at least partition down to physical cores (so, shared between
HyperThreads on Intel processors, and between Compute Module cores
on AMD processors)
> I'd almost say that making the partitioning level configurable at
> build time might be useful. I can see possible value to being able
> to at least partition down to physical cores (so, shared between
> HyperThreads on Intel processors, and between Compute Module cores
> on AMD processors), as
> > +{
> > + struct entropy_store *pool = &nonblocking_pool;
> > +
> > +	/*
> > +	 * Non node 0 pools may take longer to initialize. Keep using
> > +	 * the boot nonblocking pool while this happens.
> > +	 */
> > + if (nonblocking_node_pool)
> > + pool = nonblocking_node_pool[
> Does that sound reasonable?
Sounds good. I can do that.
-Andi
On Tue, Sep 22, 2015 at 04:16:05PM -0700, Andi Kleen wrote:
>
> This patch changes the random driver to use distributed per NUMA node
> nonblocking pools. The basic structure is not changed: entropy is
> first fed into the input pool and later from there distributed
> round-robin into the blocking
On 2015-09-22 19:16, Andi Kleen wrote:
From: Andi Kleen
We had a case where a 4 socket system spent >80% of its total CPU time
contending on the global urandom nonblocking pool spinlock. While the
application could probably have used its own PRNG, it may have valid
reasons to use the best possible key for different session keys.
On Wed, Sep 23 2015, Andi Kleen wrote:
> @@ -467,7 +478,7 @@ static struct entropy_store blocking_pool = {
>
> static struct entropy_store nonblocking_pool = {
> .poolinfo = &poolinfo_table[1],
> - .name = "nonblocking",
> + .name = "nonblocking 0",
> .pull = &input_pool,
>
Andi Kleen writes:
>
> With the patchkit applied:
>
> 1 node: 1x
> 2 nodes: 2x
> 3 nodes: 3.4x
> 4 nodes: 6x
Sorry there was a typo in the numbers. Correct results are:
With the patchkit applied:
1 node: 1x
2 nodes: 2x
3 nodes: 2.4x
4 nodes: 3x
So it's not quite linear scalability.
From: Andi Kleen
We had a case where a 4 socket system spent >80% of its total CPU time
contending on the global urandom nonblocking pool spinlock. While the
application could probably have used an own PRNG, it may have valid
reasons to use the best possible key for different session keys.
The a
25 matches