If BCLK is used as PLL input, the sysclk is determined by the hw
params. So it must be updated here to match the input frequency, based
on sample rate, format and channels.
Signed-off-by: Ariel D'Alessandro
Signed-off-by: Michael Trimarchi
---
sound/soc/codecs/tlv320aic31xx.c | 14 +
Add divisors for rates needed when the clk_in is set to BCLK.
Signed-off-by: Michael Trimarchi
Signed-off-by: Ariel D'Alessandro
---
sound/soc/codecs/tlv320aic31xx.c | 20
1 file changed, 20 insertions(+)
diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv
Add entry for fsl,imx-audio-tlv320aic31xx audio codec. This codec is
configured to use BCLK as clock input.
Signed-off-by: Michael Trimarchi
Signed-off-by: Ariel D'Alessandro
---
sound/soc/fsl/fsl-asoc-card.c | 12
1 file changed, 12 insertions(+)
diff --git a/sound/soc/fsl/fsl-as
The tlv320aic31xx codec allows using BCLK as the input clock for PLL,
deriving all the frequencies through a set of divisors.
In this case, codec sysclk is determined by the hwparams sample
rate/format. So its frequency must be updated from the codec itself when
these are changed.
This patchset m
When the clock used by the codec is BCLK, the operation parameters need
to be calculated from the input sample rate and format. Low sample
rates require different R multipliers in order to achieve a higher PLL
output frequency.
Signed-off-by: Michael Trimarchi
Signed-off-by: Ariel D'Alessandro
---
sound/soc/codecs/tlv320aic31xx.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/soc/codecs/tlv320aic31xx.h b/sound/soc/codecs/tlv320aic31xx.h
index 2513922a0292..80d062578fb5 100644
--- a/sound/soc/codecs/tlv320aic31xx.h
+++ b/sound
prom_getprop() can return PROM_ERROR. A plain binary operator comparison cannot identify it.
Signed-off-by: Peiwei Hu
---
arch/powerpc/kernel/prom_init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 18b04b08b983..f84506
On Fri, Nov 19, 2021 at 11:56:08AM +0200, Andy Shevchenko wrote:
>On Fri, Nov 19, 2021 at 03:58:19PM +0800, Calvin Zhang wrote:
>> Change to allocate reserved_mems dynamically. Static reserved regions
>> must be reserved before any memblock allocations. The reserved_mems
>> array couldn't be alloca
On Fri, Nov 19, 2021 at 03:58:19PM +0800, Calvin Zhang wrote:
> Change to allocate reserved_mems dynamically. Static reserved regions
> must be reserved before any memblock allocations. The reserved_mems
> array couldn't be allocated until memblock and linear mapping are ready.
>
> So move the all
On Fri, 2021-11-19 at 12:18 +0100, Michal Suchánek wrote:
> Maybe I was not clear enough. If you happen to focus on an architecture
> that supports IMA fully it's great.
>
> My point of view is maintaining multiple architectures. Both end users
> and people concerned with security are rarely fami
On Fri, Nov 19, 2021 at 05:35:00PM +0100, Christophe Leroy wrote:
> Neither do I. I was just scared by what I saw while reviewing your patch. A
> cleanup is probably required but it can be another patch.
Heh, understood! For my end, my objective with the fortify work is to
either split cross-membe
Le 19/11/2021 à 17:28, Kees Cook a écrit :
On Fri, Nov 19, 2021 at 08:46:27AM +, LEROY Christophe wrote:
Le 18/11/2021 à 21:36, Kees Cook a écrit :
In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memset(), avoid intentionally writing acr
On Fri, Nov 19, 2021 at 08:46:27AM +, LEROY Christophe wrote:
>
>
> Le 18/11/2021 à 21:36, Kees Cook a écrit :
> > In preparation for FORTIFY_SOURCE performing compile-time and run-time
> > field bounds checking for memset(), avoid intentionally writing across
> > neighboring fields.
> >
> >
Le 19/11/2021 à 12:31, Nicholas Piggin a écrit :
The printk layer at the moment does not seem to have a good way to force
flush printk messages that are created in NMI context, except in the
panic path.
NMI-context printk messages normally get to the console with irq_work,
but that won't help if
On 19 Nov 2021, at 7:33, Vlastimil Babka wrote:
> On 11/15/21 20:37, Zi Yan wrote:
>> From: Zi Yan
>>
>> Hi David,
>>
>> You suggested to make alloc_contig_range() deal with pageblock_order
>> instead of MAX_ORDER - 1 and get rid of MAX_ORDER - 1 dependency in
>> virtio_mem[1]. This patchset
> On 21-Jul-2021, at 11:18 AM, Athira Rajeev wrote:
>
> Running perf fuzzer testsuite popped up below messages
> in the dmesg logs:
>
> "Can't find PMC that caused IRQ"
>
> This means a PMU exception happened, but none of the PMC's (Performance
> Monitor Counter) were found to be overflow
On 11/15/21 20:37, Zi Yan wrote:
> From: Zi Yan
>
> Hi David,
>
> You suggested to make alloc_contig_range() deal with pageblock_order
> instead of MAX_ORDER - 1 and get rid of MAX_ORDER - 1 dependency in
> virtio_mem[1]. This patchset is my attempt to achieve that. Please take a look and let
The printk layer at the moment does not seem to have a good way to force
flush printk messages that are created in NMI context, except in the
panic path.
NMI-context printk messages normally get to the console with irq_work,
but that won't help if the CPU is stuck with irqs disabled, as can be
the
When taking watchdog actions, printing messages, comparing and
re-setting wd_smp_last_reset_tb, etc., read TB close to the point of use
and under wd_smp_lock or printing lock (if applicable).
This should keep timebase mostly monotonic with kernel log messages, and
could prevent (in theory) a laggy
There is a deadlock with the console_owner lock and the wd_smp_lock:
CPU x takes the console_owner lock
CPU y takes a watchdog timer interrupt and takes __wd_smp_lock
CPU x takes a soft-NMI interrupt, detects deadlock, spins on __wd_smp_lock
CPU y detects deadlock, tries to print something and spi
Most updates to wd_smp_cpus_pending are under lock except the watchdog
interrupt bit clear.
This can race with non-atomic RMW updates to the mask under lock, which
can happen in two instances:
Firstly, if another CPU detects this one is stuck, removes it from the
mask, mask becomes empty and is r
It is possible for all CPUs to miss the pending cpumask becoming clear,
and then nobody resetting it, which will cause the lockup detector to
stop working. It will eventually expire, but watchdog_smp_panic will
avoid doing anything if the pending mask is clear and it will never be
reset.
Order the
These are some watchdog fixes and improvements, in particular a
deadlock between the wd_smp_lock and console lock when the watchdog
fires, found by Laurent.
Thanks,
Nick
Since v3:
- Rebased on upstream.
- Brought patch 5 into the series.
- Fix bug with SMP watchdog last heartbeat time reporting.
Hello,
On Thu, Nov 18, 2021 at 05:34:01PM -0500, Nayna wrote:
>
> On 11/16/21 04:53, Michal Suchánek wrote:
> > On Mon, Nov 15, 2021 at 06:53:53PM -0500, Nayna wrote:
> > > On 11/12/21 03:30, Michal Suchánek wrote:
> > > > Hello,
> > > >
> > > > On Thu, Nov 11, 2021 at 05:26:41PM -0500, Nayna wr
Excerpts from Nicholas Piggin's message of November 10, 2021 12:50 pm:
> @@ -160,11 +187,26 @@ static void watchdog_smp_panic(int cpu, u64 tb)
> goto out;
> if (cpumask_test_cpu(cpu, &wd_smp_cpus_pending))
> goto out;
> - if (cpumask_weight(&wd_smp_cpus_pending
On Fri, Nov 19, 2021 at 05:12:18PM +0800, Peiwei Hu wrote:
> prom_getprop() can return PROM_ERROR. Binary operator can not identify it.
Fixes: 94d2dde738a5 ("[POWERPC] Efika: prune fixups and make them more carefull")
?
P.S. Try to use my script [1] to send patches, it should be smart enough to
Change to allocate reserved_mems dynamically. Static reserved regions
must be reserved before any memblock allocations. The reserved_mems
array couldn't be allocated until memblock and linear mapping are ready.
So move the allocation and initialization of records and reserved memory
from early_ini
Move code about parsing /reserved-memory and initializing of
reserved_mems array to of_reserved_mem.c for better modularity.
Rename array name from reserved_mem to reserved_mems to distinguish
from type definition.
Signed-off-by: Calvin Zhang
---
drivers/of/fdt.c | 108 +-
The count of reserved regions in /reserved-memory was limited because
the struct reserved_mem array was defined statically. This series sorts
out reserved memory code and allocates that array from early allocator.
Note: reserved region with fixed location must be reserved before any
memory allocat
Le 19/11/2021 à 10:05, Nicholas Piggin a écrit :
Excerpts from Laurent Dufour's message of November 16, 2021 1:09 am:
Le 10/11/2021 à 03:50, Nicholas Piggin a écrit :
It is possible for all CPUs to miss the pending cpumask becoming clear,
and then nobody resetting it, which will cause the locku
On Thu, 2021-11-18 at 19:46 +, Sean Christopherson wrote:
> It is sufficient for the current physical CPU to have an active vCPU, which is
> generally guaranteed in the MMU code because, with a few exceptions,
> populating
> SPTEs is done in vCPU context.
>
> mmap() will never directly trigge
Excerpts from Daniel Axtens's message of November 12, 2021 4:08 pm:
> Hi,
>
>> The printk layer at the moment does not seem to have a good way to force
>> flush printk messages that are created in NMI context, except in the
>> panic path.
>>
>> NMI-context printk messages normally get to the conso
sparse warnings: (new ones prefixed by >>)
>> drivers/w1/slaves/w1_ds28e04.c:342:13: sparse: sparse: incorrect type in
>> initializer (different address spaces) @@ expected char [noderef] __user
>> *_pu_addr @@ got char *buf @@
drivers/w1/slaves/w1_ds28e04.c:342:13: sparse: expecte
Excerpts from Laurent Dufour's message of November 16, 2021 1:09 am:
> Le 10/11/2021 à 03:50, Nicholas Piggin a écrit :
>> It is possible for all CPUs to miss the pending cpumask becoming clear,
>> and then nobody resetting it, which will cause the lockup detector to
>> stop working. It will eventu
Le 18/11/2021 à 21:36, Kees Cook a écrit :
> In preparation for FORTIFY_SOURCE performing compile-time and run-time
> field bounds checking for memset(), avoid intentionally writing across
> neighboring fields.
>
> Add a struct_group() for the spe registers so that memset() can correctly
> reas