q_ops always live further down)
>
> I believe the problem is this:
>
> #define PV_INDIRECT(addr) *addr(%rip)
>
> The displacement that the linker computes will be relative to the where
> this instruction is placed at the time of linking, which is in
> .pv_altinstructions (and not .text). So when we copy it into .text the
> displacement becomes bogus.
>
> Replacing the macro with
>
> #define PV_INDIRECT(addr) *addr // well, it's not so much indirect anymore
>
> makes things work. Or maybe it can be adjusted to be kept truly indirect.
That is still an indirect call, just using absolute addressing for the
pointer instead of RIP-relative. The alternatives mechanism has very
limited relocation capabilities: it will only handle a single call or
jmp replacement. Using absolute addressing is slightly less efficient
(it takes one extra byte to encode, and needs a relocation for KASLR),
but it works just as well. You could also relocate the instruction
manually by adding the delta between the original and replacement code
to the displacement.
--
Brian Gerst
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
;
> ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
> "__raw_callee_save___kvm_vcpu_is_preempted:"
> -"movq __per_cpu_offset(,%rdi,8), %rax;"
> -"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> +"
The stack protector canary lives at %gs:40, so
this patch is incompatible with CONFIG_STACK_PROTECTOR.
--
Brian Gerst
larger than 2G text or data. Small-PIC would still
allow it to be placed anywhere in the address space, and would
generate far better code.
--
Brian Gerst
On Wed, Jul 19, 2017 at 11:58 AM, Thomas Garnier wrote:
> On Tue, Jul 18, 2017 at 8:59 PM, Brian Gerst wrote:
>> On Tue, Jul 18, 2017 at 9:35 PM, H. Peter Anvin wrote:
>>> On 07/18/17 15:33, Thomas Garnier wrote:
>>>> With PIE support and KASLR extended ran
32-bit compat sysenter target */
> ENTRY(xen_sysenter_target)
> - undo_xen_syscall
> + mov 0*8(%rsp), %rcx
> + mov 1*8(%rsp), %r11
> + mov 5*8(%rsp), %rsp
> jmp entry_SYSENTER_compat
> ENDPROC(xen_sysenter_target)
This patch causes the iopl_32 and ioperm_32 self-tests to fail on a
64-bit PV kernel. The 64-bit versions pass. It gets a seg fault after
"parent: write to 0x80 (should fail)", and the fault isn't caught by
the signal handler. It just dumps back to the shell. The tests pass
after reverting this.
--
Brian Gerst
On Mon, Aug 14, 2017 at 1:53 AM, Andy Lutomirski wrote:
> On Sun, Aug 13, 2017 at 7:44 PM, Brian Gerst wrote:
>> On Mon, Aug 7, 2017 at 11:59 PM, Andy Lutomirski wrote:
>>> /* Normal 64-bit system call target */
>>> ENTRY(xen_syscall_target)
>>> -
> { X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 },
> { X86_FEATURE_PROC_FEEDBACK,CPUID_EDX, 11, 0x80000007, 0 },
> + { X86_FEATURE_SME, CPUID_EAX, 0, 0x8000001f, 0 },
This should also be conditional. We don't want to set this feature on
32-bit, even if the processor has support.
> { 0, 0, 0, 0, 0 }
> };
--
Brian Gerst
s_addr >> PAGE_SHIFT;
>
Removing this also affects 32-bit, which is more likely to access
legacy devices in this range. Put in a check for SME instead
(provided you follow my recommendations to not set the SME feature bit
on 32-bit even when the processor supports it).
--
Brian Gerst
On Mon, Jul 10, 2017 at 3:50 PM, Tom Lendacky wrote:
> On 7/8/2017 7:57 AM, Brian Gerst wrote:
>>
>> On Fri, Jul 7, 2017 at 9:39 AM, Tom Lendacky
>> wrote:
>>>
>>> Currently there is a check if the address being mapped is in the ISA
>>> range (
On Mon, Jul 10, 2017 at 3:41 PM, Tom Lendacky wrote:
> On 7/8/2017 7:50 AM, Brian Gerst wrote:
>>
>> On Fri, Jul 7, 2017 at 9:38 AM, Tom Lendacky
>> wrote:
>>>
>>> Update the CPU features to include identifying and reporting on the
>>> Secure Mem
On Tue, Jul 11, 2017 at 4:35 AM, Arnd Bergmann wrote:
> On Tue, Jul 11, 2017 at 6:58 AM, Brian Gerst wrote:
>> On Mon, Jul 10, 2017 at 3:50 PM, Tom Lendacky
>> wrote:
>>> On 7/8/2017 7:57 AM, Brian Gerst wrote:
>>>> On Fri, Jul 7, 2017 at 9:39 AM, Tom Lendacky
On Tue, Jul 11, 2017 at 11:02 AM, Tom Lendacky wrote:
> On 7/10/2017 11:58 PM, Brian Gerst wrote:
>>
>> On Mon, Jul 10, 2017 at 3:50 PM, Tom Lendacky
>> wrote:
>>>
>>> On 7/8/2017 7:57 AM, Brian Gerst wrote:
>>>>
>>>>
.Lsyscall_32_done", X86_FEATURE_XENPV
>
> Borislav, what do you think?
>
> Ditto for the others.
Can you just add !xen_pv_domain() to the opportunistic SYSRET check
instead? Bury the alternatives in that macro, i.e.
static_cpu_has(X86_FEATURE_XENPV). That would likely benefit other
code as well.
--
Brian Gerst
info
> + mov $init_thread_union+THREAD_SIZE,REG(sp)
> +
> jmp xen_start_kernel
>
> __FINIT
Use the existing macros instead of defining your own. Also,
xorl %eax,%eax is good for 64-bit too, since the upper bits are
cleared.
--
Brian Gerst
+
> #ifdef CONFIG_X86_32
> mov %esi,xen_start_info
> mov $init_thread_union+THREAD_SIZE,%esp
Better, but can still be improved. Replace WSIZE_SHIFT with
__ASM_SEL(2, 3), and use the macros for the registers (i.e. __ASM_DI).
--
Brian Gerst
th zero when the
> program begins to run" which I read as it's up to runtime and not the loader
> to do so.
>
> And since kernel does it explicitly on baremetal path I think it's a good
> idea for PV to do the same.
It does it on bare metal because bzImage is a raw binary.