On 08.05.23 06:09, Viresh Kumar wrote:
On 05-05-23, 16:11, Oleksandr Tyshchenko wrote:
I was going to propose an idea, but I have just realized that you already
voiced it here [1] ))
So what you proposed there sounds reasonable to me.
I will just rephrase it according to my understanding:
We p
On 04.05.2023 21:39, Andrew Cooper wrote:
> When adding new words to a featureset, there is a reasonable amount of
> boilerplate and it is preferable to split the addition into multiple patches.
>
> GCC 12 spotted a real (transient) error which occurs when splitting additions
> like this. Right n
On 03.05.2023 18:31, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -70,12 +70,23 @@
>name:
> #endif
>
> -#define XEN_VIRT_START _AT(UL, 0x8020)
> +#ifdef CONFIG_RISCV_64
> +#define XEN_VIRT_START 0xC000 /
On 05.05.2023 23:48, Marek Marczykowski-Górecki wrote:
> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
> Note the MMIO-based devices in practice need a "pci" sub-option,
> otherwise a few parameters are no
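A minimal sketch of the described change, assuming Xen's pci_conf_read16()/pci_conf_write16() accessors; the function name, the sbdf field, and the way the MMIO case is detected are assumptions, not the actual patch:

static void __init enable_uart_decode(struct ns16550 *uart, bool mmio)
{
    uint16_t cr = pci_conf_read16(uart->sbdf, PCI_COMMAND);

    if ( mmio )
        cr |= PCI_COMMAND_MEMORY;   /* MMIO-based UART: enable memory decoding */
    else
        cr |= PCI_COMMAND_IO;       /* IO-port based UART: enable IO decoding */

    pci_conf_write16(uart->sbdf, PCI_COMMAND, cr);
}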
On 03.05.23 17:01, Olaf Hering wrote:
clang complains about the signed type:
implicit truncation from 'int' to a one-bit wide bit-field changes value from 1
to -1 [-Wsingle-bit-bitfield-constant-conversion]
The potential ABI change in libxenvchan is covered by the Xen version based
SONAME.
T
On 05.05.2023 19:57, Alejandro Vallejo wrote:
> Nowadays AMD supports trapping the CPUID instruction from ring3 to ring0,
Since it's relevant for PV32: Their doc talks about CPL > 0, i.e. not just
ring 3. Therefore I wonder whether ...
> (CpuidUserDis)
... we shouldn't deviate from the PM and av
On 05.05.2023 19:57, Alejandro Vallejo wrote:
> Includes a refactor to move vendor-specific probes to vendor-specific
> files.
I wonder whether the refactoring parts wouldn't better be split off.
> @@ -363,6 +375,21 @@ static void __init noinline amd_init_levelling(void)
> ctxt_swit
flight 180572 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180572/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-i386-libvirt 7 xen-install fail in 180566 pass in 180572
test-amd64-i386-xl-qemut-stubdom
On 01.05.2023 21:30, Jason Andryuk wrote:
> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -13,6 +13,8 @@
> #include
> #include
>
> +static bool hwp_in_use;
__ro_after_init again, please.
> --- a/xen/include/acpi/pdc_intel.h
> +++ b/xen/include/acpi/pdc_i
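For reference, the annotation being asked for above would look something like this (a sketch of the requested change, not the actual follow-up patch):

/* Written only during boot-time initialisation, hence: */
static bool __ro_after_init hwp_in_use;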
Mon, 8 May 2023 11:06:11 +0200 Juergen Gross :
> > -int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
> > +unsigned short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
> Please use "unsigned int" instead of a pure "unsigned".
The entire file uses just 'unsigned' for bitfields.
Ol
On 01.05.2023 21:30, Jason Andryuk wrote:
> Extend xen_get_cpufreq_para to return hwp parameters. These match the
> hardware rather closely.
>
> We need the features bitmask to indicate fields supported by the actual
> hardware.
>
> The use of uint8_t parameters matches the hardware size. uint
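To illustrate the interface shape described above (a features bitmask telling which of the 8-bit hardware-sized fields are valid), here is a hypothetical sketch; the struct, field, and flag names are assumptions, not the actual ones from the patch:

struct xen_hwp_para {
#define XEN_HWP_FEAT_ENERGY_PERF (1u << 0)  /* energy/perf preference supported */
#define XEN_HWP_FEAT_ACT_WINDOW  (1u << 1)  /* activity window supported */
    uint16_t features;      /* which of the fields below the hardware implements */
    uint8_t lowest;         /* uint8_t matches the 8-bit hardware fields */
    uint8_t most_efficient;
    uint8_t guaranteed;
    uint8_t highest;
};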
On 01.05.2023 21:30, Jason Andryuk wrote:
> Print HWP-specific parameters. Some are always present, but others
> depend on hardware support.
>
> Signed-off-by: Jason Andryuk
> ---
> v2:
> Style fixes
> Declare i outside loop
> Replace repeated hardware/configured limits with spaces
> Fixup for
On 08.05.2023 12:25, Jan Beulich wrote:
> On 01.05.2023 21:30, Jason Andryuk wrote:
>> Extend xen_get_cpufreq_para to return hwp parameters. These match the
>> hardware rather closely.
>>
>> We need the features bitmask to indicate fields supported by the actual
>> hardware.
>>
>> The use of uint
On 08.05.23 12:00, Olaf Hering wrote:
Mon, 8 May 2023 11:06:11 +0200 Juergen Gross :
-int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
+unsigned short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
Please use "unsigned int" instead of a pure "unsigned".
The entire file uses jus
On 01.05.2023 21:30, Jason Andryuk wrote:
> @@ -531,6 +533,100 @@ int get_hwp_para(const struct cpufreq_policy *policy,
> return 0;
> }
>
> +int set_hwp_para(struct cpufreq_policy *policy,
> + struct xen_set_hwp_para *set_hwp)
const?
> +{
> +unsigned int cpu = policy->
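I.e. the suggestion is for the input to be taken via a const pointer, along the lines of:

int set_hwp_para(struct cpufreq_policy *policy,
                 const struct xen_set_hwp_para *set_hwp);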
This series reworks the Xenstore internal accounting to use a uniform
generic framework. It adds some useful diagnostic
information, like an accounting trace and the maximum per-domain and global quota
values seen.
Changes in V2:
- added patch 1 (leftover from previous series)
- rebase
Chan
The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as it checks the node quota
only against the number of nodes outside the transaction.
This can result in the transaction ultimately failing, as the node quota is
checked at the end of the transactio
In order to prepare for keeping accounting data in an array instead of
using independent fields, switch the struct changed_domain accounting
data to that scheme, for now only using an array with one element.
In order to be able to extend this scheme, add the needed indexing enum
to xenstored_domain.h.
Introduce the scheme of an accounting data array for per-domain
accounting data and use it initially for the number of nodes owned by
a domain.
Make the accounting data type unsigned int, as no data is allowed
to be negative at any time.
Signed-off-by: Juergen Gross
Reviewed-by: Julien Gra
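A rough sketch of the array-based scheme described in the two patches above; the enum and struct names are illustrative assumptions, not the actual xenstored code:

/* Index into the per-domain accounting array; more items can be added later. */
enum accitem {
    ACC_NODES,      /* number of nodes owned by the domain */
    ACC_N           /* number of accounting items */
};

struct domain_acc {
    /* unsigned int, as no accounting value may ever be negative. */
    unsigned int acc[ACC_N];
};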
Instead of modifying accounting data and undoing those modifications in
case of an error during further processing, add a framework for
collecting the needed changes and commit them only when the whole
operation has succeeded.
This scheme can reuse large parts of the per transaction accounting.
The c
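The buffering idea could be sketched like this, reusing the ACC_N index count from the sketch above (illustrative only, with assumed names; as noted, the real framework reuses the per-transaction accounting code):

/* Collect deltas instead of applying them immediately. */
struct acc_change {
    unsigned int domid;
    int delta[ACC_N];               /* per-item change, may be negative here */
};

/* Apply the buffered deltas only if the whole operation succeeded. */
static void acc_commit(const struct acc_change *chg, bool success)
{
    unsigned int i;

    if (!success)
        return;                     /* nothing to undo: changes were never applied */

    for (i = 0; i < ACC_N; i++)
        domain_acc_add(chg->domid, i, chg->delta[i]);   /* assumed helper */
}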
Add the node accounting to the accounting information buffering in
order to avoid having to undo it in case of failure.
This requires calling domain_nbentry_dec() before any changes to the
database, as it can return an error now.
Signed-off-by: Juergen Gross
---
V5:
- add error handling after d
In order to enable switching memory accounting to the generic array
based accounting, add the current connection to the parameters of
domain_memory_add().
This requires adding the connection to some other functions, too.
Signed-off-by: Juergen Gross
Acked-by: Julien Grall
---
tools/xenstore/xe
Add the accounting of per-domain usage of Xenstore memory, watches, and
outstanding requests to the array based mechanism.
Signed-off-by: Juergen Gross
---
V5:
- drop domid parameter from domain_outstanding_inc() (Julien Grall)
---
tools/xenstore/xenstored_core.c | 4 +-
tools/xenstore/xenst
Add a new trace switch "acc" and the related trace calls.
The "acc" switch is off per default.
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
tools/xenstore/xenstored_core.c | 2 +-
tools/xenstore/xenstored_core.h | 1 +
tools/xenstore/xenstored_domain.c | 10 ++
3 fi
As transaction accounting is active for unprivileged domains only, it
can easily be added to the generic per-domain accounting.
Signed-off-by: Juergen Gross
---
V5:
- use list_empty(&conn->transaction_list) for detection of "no
transaction active" (Julien Grall)
---
tools/xenstore/xenstored_co
On 01.05.2023 21:30, Jason Andryuk wrote:
> @@ -67,6 +68,27 @@ void show_help(void)
> " set-max-cstate|'unlimited' [|'unlimited']\n"
> " set the C-State limitation
> ( >= 0) and\n"
> "
Let get_optval_int() return an unsigned value and rename it
accordingly.
Signed-off-by: Juergen Gross
---
V5:
- new patch, carved out from next patch in series (Julien Grall)
---
tools/xenstore/xenstored_core.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tool
The maxrequests, node size, number of node permissions, and path length
quota are a little bit special, as they are either active in
transactions only (maxrequests), or they are just per item instead of
count values. Nevertheless, being able to know the maximum number of
those quota-related values p
Add a new trace switch "tdb" and the related trace calls.
The "tdb" switch is off per default.
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
tools/xenstore/xenstored_core.c| 8 +++-
tools/xenstore/xenstored_core.h| 7 +++
tools/xenstore/xenstored_transactio
Add saving the maximum values of the different accounting data seen
per domain and (for unprivileged domains) globally, and print those
values via the xenstore-control quota command. Add a sub-command for
resetting the global maximum values seen.
This should help with deciding how to set the rela
Instead of having individual quota variables switch to a table based
approach like the generic accounting. Include all the related data in
the same table and add accessor functions.
This enables using the command line --quota parameter for setting all
possible quota values, keeping the previous p
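A hedged sketch of what such a table-plus-accessors approach could look like; the item names and default values here are made up for illustration:

enum quota_item {
    Q_NODES,
    Q_WATCHES,
    Q_TRANSACTIONS,
    Q_ITEMS
};

static struct quota {
    const char *name;     /* key accepted by the --quota command line option */
    unsigned int val;
} quotas[Q_ITEMS] = {
    [Q_NODES]        = { "nodes",        1000 },
    [Q_WATCHES]      = { "watches",       128 },
    [Q_TRANSACTIONS] = { "transactions",   10 },
};

static unsigned int quota_get(enum quota_item item)
{
    return quotas[item].val;
}

static void quota_set(enum quota_item item, unsigned int val)
{
    quotas[item].val = val;
}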
On 21.04.23 04:50, Tejun Heo wrote:
BACKGROUND
==
When multiple work items are queued to a workqueue, their execution order
doesn't match the queueing order. They may get executed in any order and
simultaneously. When fully serialized execution - one by one in the queueing
order - is nee
On 08.05.2023 13:56, Jan Beulich wrote:
> On 01.05.2023 21:30, Jason Andryuk wrote:
>> +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
>> + int argc, char *argv[])
>> +{
>> +int i = 0;
>> +
>> +if ( argc < 1 ) {
>> +fprintf(stderr, "Missin
On 01.05.2023 21:30, Jason Andryuk wrote:
> Allow cpuid_parse to be re-used without terminating xenpm. HWP will
> re-use it to optionally parse a cpuid. Unlike other uses of
> cpuid_parse, parse_hwp_opts will take a variable number of arguments and
> cannot just check argc.
>
> Signed-off-by: Ja
This series adds support for a number of more or less recently announced
ISA extensions. The series interacts mildly (and only contextually) with
the AVX512-FP16 one. Note that the last patch is kind of incomplete: It
doesn't enable the feature for guest use, for lack of detail in the
specification
Provide support for this insn, which is a prereq to FRED. CPUID-wise,
introduce both its and FRED's bit on this occasion, thus allowing the
dependency to be expressed right away as well.
While adding a testcase, also add a SWAPGS one. In order to not affect
the behavior of pre-existing tests, install write
Unconditionally wire this through the ->rmw() hook. Since x86_emul_rmw()
now wants to construct and invoke a stub, make stub_exn available to it
via a new field in the emulator state structure.
Signed-off-by: Jan Beulich
---
v2: Use X86_EXC_*. Move past introduction of stub_exn in struct
x86_
This is a prereq to enabling the MSRLIST feature.
Note that the PROCBASED_CTLS3 MSR is different from other VMX feature
reporting MSRs, in that all 64 bits report allowed 1-settings.
vVMX code is left alone, though, for the time being.
Signed-off-by: Jan Beulich
---
v2: New.
--- a/xen/arch/x86
These are "compound" instructions to issue a series of RDMSR / WRMSR
respectively. In the emulator we can therefore implement them by using
the existing msr_{read,write}() hooks. The memory accesses utilize that
the HVM ->read() / ->write() hooks are already linear-address
(x86_seg_none) aware (by
Hello all,
We want to virtualize a camera that uses the V4L2 Linux drivers, i.e. we want
to use a camera app in DomU.
We searched online and found two approaches to virtualizing the camera.
FE and BE:
FrontEnd Driver is available at
https://github.com/andr2000/l
The remaining users calling __skb_frag_set_page() with
page being NULL seem to be doing defensive programming, as
shinfo->nr_frags is already decremented, so remove them.
Signed-off-by: Yunsheng Lin
---
drivers/net/ethernet/broadcom/bnx2.c | 1 -
drivers/net/ethernet/broadcom/bnxt/bnxt.c |
Most users use __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() to fill the page desc for a skb frag.
Introduce skb_frag_fill_page_desc() to do that.
net/bpf/test_run.c does not call skb_frag_off_set() to
set the offset; "copy_from_user(page_address(page), ...)"
suggests that it is as
Most users use __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() to fill the page desc for a skb frag.
It does not make much sense to call __skb_frag_set_page()
without calling skb_frag_off_set(), as the offset may depend
on whether the page is a head page or a tail page, so add
skb_frag
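Composed from the three setters named above, the new helper could roughly look like this (the actual patch may fill the frag fields directly instead):

static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
                                           struct page *page,
                                           int off, int size)
{
    __skb_frag_set_page(frag, page);
    skb_frag_off_set(frag, off);
    skb_frag_size_set(frag, size);
}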
Re-posting patch 1 merely because of the still missing RISC-V ack,
while the new patch 2 contextually depends on it.
1: shorten macro references
2: use $(dot-target)
Jan
Presumably by copy-and-paste we've accumulated a number of instances of
$(@D)/$(@F), which really is nothing else than $@. The split form only
needs using when we want to e.g. insert a leading . at the beginning of
the file name portion of the full name.
Signed-off-by: Jan Beulich
Acked-by: Andre
While slightly longer, I agree with Andrew that using it helps
readability. Where touching them anyway, also wrap some overly long
lines.
Suggested-by: Andrew Cooper
Signed-off-by: Jan Beulich
---
v2: New.
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -93,17 +93,19 @@ endif
$(TA
flight 180574 linux-linus real [real]
flight 180576 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180574/
http://logs.test-lab.xenproject.org/osstest/logs/180576/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run
On 05.05.2023 19:57, Alejandro Vallejo wrote:
> This is in order to aid guests of AMD hardware that we have exposed
> CPUID faulting to. If they try to modify the Intel MSR that enables
> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
> is used instead.
>
> Signed-off-by: Ale
flight 180575 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180575/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf d89492456f79e014679cb6c29b144ea26b910918
baseline version:
ovmf 8dbf868e02c71b407e31f
On 5/5/23 02:20, Jan Beulich wrote:
> On 04.05.2023 23:54, Stewart Hildebrand wrote:
>> On 5/2/23 03:44, Jan Beulich wrote:
>>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
@@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
* (IOMMU is not enabled/present or devi
On 5/2/23 03:50, Jan Beulich wrote:
> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>
>> static int iommu_add_device(struct pci_dev *pdev)
>> {
>> -const st
On Mon, May 08, 2023 at 08:39:22PM +0800, Yunsheng Lin wrote:
> The remaining users calling __skb_frag_set_page() with
> page being NULL seem to be doing defensive programming, as
> shinfo->nr_frags is already decremented, so remove them.
>
> Signed-off-by: Yunsheng Lin
...
> diff --git a/drivers
On Mon, May 08, 2023 at 11:01:18AM +0200, Jan Beulich wrote:
> On 05.05.2023 23:48, Marek Marczykowski-Górecki wrote:
> > pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> > devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
> > Note the MMIO-based devices in
On 08.05.2023 16:16, Stewart Hildebrand wrote:
> On 5/2/23 03:50, Jan Beulich wrote:
>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>> --- a/xen/drivers/passthrough/pci.c
>>> +++ b/xen/drivers/passthrough/pci.c
>>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>>
>>> static int iommu_
flight 180577 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180577/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 1
Mon, 8 May 2023 13:23:27 +0200 Juergen Gross :
> I have found 18 lines using "unsigned int" for bitfields in this file.
There is indeed a mix of both variants in this file.
I scrolled just down to line 6999, only looking for ':1'.
Olaf
On 03.05.23 15:16, Maximilian Heyne wrote:
Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the
creation of sysfs entries for MSI IRQs. The creation used to be in
msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs.
Then it moved into __msi_domain_alloc_irq
On 03.05.23 17:11, Dan Carpenter wrote:
In the pvcalls_new_active_socket() function, most error paths call
pvcalls_back_release_active(fedata->dev, fedata, map) which calls
sock_release() on "sock". The bug is that the caller also frees sock.
Fix this by making every error path in pvcalls_new_a
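Schematically, the hazard being fixed is a double release on the error path, along these lines (simplified with assumed helper names, not the actual pvcalls code):

/* Callee: on error, releases the socket itself ... */
static int new_active_socket(struct socket *sock)
{
    if (setup_failed(sock)) {       /* assumed condition */
        sock_release(sock);         /* socket released here */
        return -ENOMEM;
    }
    return 0;
}

/* ... so the caller must not release it again on failure: */
static int caller(struct socket *sock)
{
    int ret = new_active_socket(sock);

    if (ret)
        sock_release(sock);         /* BUG: double release, as described above */
    return ret;
}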
flight 180579 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180579/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 6eeb58ece38060be3b0f7111649a93cc8b2dca49
baseline version:
ovmf d89492456f79e014679cb
clang complains about the signed type:
implicit truncation from 'int' to a one-bit wide bit-field changes value from 1
to -1 [-Wsingle-bit-bitfield-constant-conversion]
The potential ABI change in libxenvchan is covered by the Xen version based
SONAME.
Signed-off-by: Olaf Hering
---
v2: cover
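A minimal illustration of what clang is warning about: a one-bit signed bit-field can only represent 0 and -1, so the unsigned form is needed to store 1.

struct example {
    int          s:1;   /* range is -1..0: assigning 1 stores -1 */
    unsigned int u:1;   /* range is 0..1:  assigning 1 stores 1 */
};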
On a Fedora system, if you run `sudo sh install.sh`, you break your
system. The installation clobbers /var/run, a symlink to /run. A
subsequent boot fails when /var/run and /run are different since
accesses through /var/run can't find items that now only exist in /run
and vice-versa.
Skip populat
Wed, 26 Apr 2023 09:31:44 -0400 Jason Andryuk :
> On Wed, Apr 26, 2023 at 6:40 AM Olaf Hering wrote:
> > +++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
> > @@ -49,6 +49,7 @@ fi
> >
> > do_start () {
> > echo Starting xl devd...
> > + mkdir -m700 -p ${XEN_RUN_DIR}
> This one
On Fri, May 05, 2023 at 05:20:40PM +0200, Mickaël Salaün wrote:
> From: Madhavan T. Venkataraman
>
> Hypervisor Enforced Kernel Integrity (Heki) is a feature that will use
> the hypervisor to enhance guest virtual machine security.
>
> Configuration
> =
>
> Define the config variabl
From: Thomas Gleixner
When the TSC is synchronized across sockets, there is no reason to
calibrate the delay for the first CPU which comes up on a socket.
Just reuse the existing calibration value.
This removes 100ms of pointlessly wasted time from CPU hotplug per socket.
Signed-off-by: Thomas Gl
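The idea can be sketched as follows; the predicate and function names are assumptions, not the actual patch:

static void ap_calibrate_delay(void)
{
    /* When the TSC is synchronized, the value measured on the boot CPU is
     * valid here too, so skip the ~100ms measurement. */
    if (tsc_is_synchronized() && loops_per_jiffy)   /* assumed predicate */
        return;                                     /* reuse the existing calibration */

    calibrate_delay();
}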
From: Thomas Gleixner
Make topology_phys_to_logical_pkg_die() static as it's only used in
smpboot.c and fix up the kernel-doc warnings for both functions.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/topology.h |3 ---
arch/x86/kernel/smpboot.c
From: Thomas Gleixner
This is used in the SEV play_dead() implementation to re-online CPUs. But
that has nothing to do with CPU0.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/cpu.h |2 +-
arch/x86/kernel/callthunks.c |2 +-
arch/x86/kernel/head
From: Thomas Gleixner
No point in keeping them around.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/smpboot.c |4 ++--
kernel/cpu.c |2 +-
kernel/smp.c |2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
---
--- a/ar
Hi!
This is version 3 of the reworked parallel bringup series. Version 2 can be
found here:
https://lore.kernel.org/lkml/20230504185733.126511...@linutronix.de
This is just a quick reiteration to address the following details:
1) Drop the two extended topology leaf patches as they are not
From: Thomas Gleixner
Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/callthunks.c |2 +-
arch/x86/kernel/head_32.S| 14 --
arch/x86/kernel/head_64.S|2 +-
3 files c
From: Thomas Gleixner
This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:
"We'll change this code in the future to wake up hard offlined CPU0 if
real platform and request are available.
From: Thomas Gleixner
This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:
"We'll change this code in the future to wake up hard offlined CPU0 if
real platform and request are a
From: David Woodhouse
There are four logical parts to what native_cpu_up() does on the BSP (or
on the controlling CPU for a later hotplug):
1) Wake the AP by sending the INIT/SIPI/SIPI sequence.
2) Wait for the AP to make it as far as wait_for_master_cpu() which
sets that CPU's bit in cpu
From: Thomas Gleixner
The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().
Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.
No functional change.
Signed-off-by: Thomas Gleixner
From: Thomas Gleixner
Spin-waiting on the control CPU until the AP reaches the TSC
synchronization is just a waste, especially in the case that there is no
synchronization required.
As the synchronization has to run with interrupts disabled, the control CPU
part can just be done from an SMP functio
From: Thomas Gleixner
cpu_callout_mask is used for the stop machine based MTRR/PAT init.
In preparation of moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug state.
Signed-off-by: Thomas Gleixner
Tested-by
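Conceptually, the private mask managed in the starting/dying hotplug states could look like this (names assumed, not the actual cacheinfo code):

static cpumask_t cache_map;                       /* assumed name */

/* Set in the CPU "starting" hotplug callback ... */
static int cacheinfo_cpu_starting(unsigned int cpu)
{
    cpumask_set_cpu(cpu, &cache_map);
    return 0;
}

/* ... and cleared in the "dying" callback. */
static int cacheinfo_cpu_dying(unsigned int cpu)
{
    cpumask_clear_cpu(cpu, &cache_map);
    return 0;
}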
From: Thomas Gleixner
There is no harm in holding the sparse_irq lock until the upcoming CPU completes
in cpuhp_online_idle(). This allows removing the cpu_online() synchronization
from architecture code.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
kernel/cpu.c | 28 +++
From: Thomas Gleixner
The usage is in smpboot.c and not in the CPU initialization code.
The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.
Signed-off-by: Thomas Gleixner
Tested-by:
From: Thomas Gleixner
Now that the core code drops sparse_irq_lock after the idle thread
synchronized, it's pointless to wait for the AP to mark itself online.
Whether the control CPU runs in a wait loop or sleeps in the core code
waiting for the online operation to complete makes no difference.
From: Thomas Gleixner
Now that TSC synchronization is SMP function call based there is no reason
to wait for the AP to be set in smp_callin_mask. The control CPU waits for
the AP to set itself in the online mask anyway.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/k
From: Thomas Gleixner
Now that the core code drops sparse_irq_lock after the idle thread
synchronized, it's pointless to wait for the AP to mark itself online.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/smpboot.c | 26 ++
1 file ch
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Acked-by: Palmer Dabbelt
Tested-by: Michael Kelley
---
arch/riscv/Kconfig |1 +
arch/riscv/include/asm/smp.h|
From: Thomas Gleixner
Implement the validation function which tells the core code whether
parallel bringup is possible.
The only condition for now is that the kernel does not run in an encrypted
guest as these will trap the RDMSR via #VC, which cannot be handled at that
point in early startup.
From: Thomas Gleixner
Parallel AP bringup requires that the APs can run fully parallel through
the early startup code including the real mode trampoline.
To prepare for this implement a bit-spinlock to serialize access to the
real mode stack so that parallel upcoming APs are not going to corrupt
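The real serialization lives in the 16/32-bit trampoline assembly; this C-level sketch with assumed names just shows the bit-spinlock idea:

static unsigned long rm_stack_in_use;             /* assumed lock word */

static void acquire_realmode_stack(void)
{
    while (test_and_set_bit(0, &rm_stack_in_use))
        cpu_relax();                              /* spin until the stack is free */
}

static void release_realmode_stack(void)
{
    clear_bit(0, &rm_stack_in_use);
}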
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/csky/Kconfig |1 +
arch/csky/include/asm/smp.h |2 +-
arch/csky/kernel/smp.c
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/arm/Kconfig |1 +
arch/arm/include/asm/smp.h |2 +-
arch/arm/kernel/smp.c
From: David Woodhouse
In parallel startup mode the APs are kicked alive by the control CPU
quickly after each other and run through the early startup code in
parallel. The real-mode startup code is already serialized with a
bit-spinlock to protect the real-mode stack.
In parallel startup mode th
From: Thomas Gleixner
There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
then waiting for it to respond before moving on to the next.
Allow a platform to enable parallel setup which brings all to be o
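In rough pseudo-C, the two-phase scheme being described looks like this; the helper names are illustrative, not the kernel's actual hotplug API:

static void __init bringup_all_aps(void)          /* illustrative name */
{
    unsigned int cpu;

    /* Phase 1: kick every AP (e.g. INIT/SIPI/SIPI on x86) without waiting. */
    for_each_present_cpu(cpu)
        if (cpu != smp_processor_id())
            kick_ap_alive(cpu);                   /* assumed helper */

    /* Phase 2: wait for each AP to report itself alive, then release it to
     * complete the full bringup. */
    for_each_present_cpu(cpu)
        if (cpu != smp_processor_id()) {
            wait_for_ap_alive(cpu);               /* assumed helper */
            release_ap(cpu);                      /* assumed helper */
        }
}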
From: Thomas Gleixner
The bringup logic of a to-be-onlined CPU consists of several parts, which
are considered to be a single hotplug state:
1) Control CPU issues the wake-up
2) To be onlined CPU starts up, does the minimal initialization,
reports to be alive and waits for release int
From: Thomas Gleixner
The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.
Except for the state reporting of the AP there is absolutely nothing
architecture specific a
From: Thomas Gleixner
The x86 CPU bringup state currently does AP wake-up, wait for AP to
respond and then release it for full bringup.
It is safe to be split into a wake-up and a separate wait+release
state.
Provide the required functions and enable the split CPU bringup, which
prepares fo
From: David Woodhouse
Commit dce1ca0525bf ("sched/scs: Reset task stack state in bringup_cpu()")
ensured that the shadow call stack and KASAN poisoning were removed from
a CPU's stack each time that CPU is brought up, not just once.
This is not incorrect. However, with parallel bringup the idle
From: Thomas Gleixner
Make the primary thread tracking CPU mask based in preparation for simpler
handling of parallel bootup.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/apic.h |2 --
arch/x86/include/asm/topology.h | 19 +++
a
From: Thomas Gleixner
The new AP state tracking and synchronization mechanism in the CPU hotplug
core code allows removing quite some x86-specific code:
1) The AP alive synchronization based on cpumasks
2) The decision whether an AP can be brought up again
Signed-off-by: Thomas Gleixner
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Mark Rutland
Tested-by: Michael Kelley
---
arch/arm64/Kconfig |1 +
arch/arm64/include/asm/smp.h |2 +-
From: Thomas Gleixner
No point in this conditional voodoo. Un-initializing the lock mechanism is
safe to do unconditionally, even if it was already invoked when the
CPU died.
Remove the invocation of xen_smp_intr_free() as that has been already
cleaned up in xen_cpu_dead_hvm().
Signed-off
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms as CPS is the only user, but it allows an overall consolidation
of this functionality.
No functional change intended.
Sign
From: Thomas Gleixner
No more users.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
include/linux/cpu.h |2 -
kernel/smpboot.c| 90
2 files changed, 92 deletions(-)
---
--- a/include/linux/cpu.h
+++ b/include/linu
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/parisc/Kconfig |1 +
arch/parisc/kernel/process.c |4 ++--
arch/parisc/kernel/s
From: Thomas Gleixner
For parallel CPU bringup it's required to read the APIC ID in the low level
startup code. The virtual APIC base address is a constant because it's a
fix-mapped address. Exposing that constant, which is composed via macros, to
assembly code is non-trivial due to header inclusio
From: Thomas Gleixner
All users converted to the hotplug core mechanism.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
include/linux/cpu.h |2 -
kernel/smpboot.c| 75
2 files changed, 77 deletions(-)
---
--- a/in