On 06/29/2010 04:06 PM, Alexander Graf wrote:
> Are we looking at the same link? Looks good to me there.
We're probably looking at the same link but looking at different
things. I'm whining about
static u64 f() {
	...
}

as opposed to the more sober

static u64 f()
{
	...
}
Avi Kivity wrote:
> On 06/29/2010 03:56 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>> On 06/28/2010 11:55 AM, Alexander Graf wrote:
>>>> +
>>>> +static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
>>>> +	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
>>>> +}
On 06/29/2010 03:56 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>> On 06/28/2010 11:55 AM, Alexander Graf wrote:
>>> +
>>> +static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
>>> +	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
>>> +}
>>> +
>>> +static inline u64 kvmppc_mmu_hash_vpte(u64 vpage) {
>>> +	return hash_64(vpage & 0xfULL, HPTEG_HASH_BITS_VPTE);
>>> +}
Avi Kivity wrote:
> On 06/28/2010 11:55 AM, Alexander Graf wrote:
>> +
>> +static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
>> +	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
>> +}
>> +
>> +static inline u64 kvmppc_mmu_hash_vpte(u64 vpage) {
>> +	return hash_64(vpage & 0xfULL, HPTEG_HASH_BITS_VPTE);
>> +}
Avi Kivity wrote:
> On 06/28/2010 04:25 PM, Alexander Graf wrote:
> Less and simpler code, better reporting through slabtop, less wastage
> of partially allocated slab pages.

But it also means that one VM can spill the global slab cache and kill
another VM's mm performance, no?
On 06/28/2010 04:25 PM, Alexander Graf wrote:
>> Less and simpler code, better reporting through slabtop, less wastage
>> of partially allocated slab pages.
> But it also means that one VM can spill the global slab cache and kill
> another VM's mm performance, no?

What do you mean by spill?
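[For readers following along, a minimal sketch of the dedicated-slab approach being debated. The init hook, the cache label, and the stand-in struct hpte_cache fields are illustrative assumptions, not the patch's code:]

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Stand-in for the patch's struct; the real fields differ. */
struct hpte_cache {
	struct hlist_node list_pte;
	u64 host_va;
};

/* One global cache shared by all VMs -- the "spill" concern above. */
static struct kmem_cache *hpte_cache_slab;

int kvmppc_mmu_hpte_sysinit(void)	/* hypothetical init hook */
{
	/* A named cache shows up in slabtop and packs objects tightly,
	 * unlike kzalloc() from the generic kmalloc caches. */
	hpte_cache_slab = kmem_cache_create("hpte_cache",
					    sizeof(struct hpte_cache),
					    0, 0, NULL);
	return hpte_cache_slab ? 0 : -ENOMEM;
}

static struct hpte_cache *alloc_hpte(void)
{
	/* Every guest allocates from the same pool; heavy use by one VM
	 * grows the shared cache rather than a per-VM one. */
	return kmem_cache_zalloc(hpte_cache_slab, GFP_KERNEL);
}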
Avi Kivity wrote:
> On 06/28/2010 12:55 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>>>> Am I looking at old code?
>>>> Apparently. Check book3s_mmu_*.c
>>> I don't have that pattern.
On 06/28/2010 12:55 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>>> Am I looking at old code?
>>> Apparently. Check book3s_mmu_*.c
>> I don't have that pattern.
> It's in this patch.

Yes. Silly me.

> +static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
Avi Kivity wrote:
> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>> Am I looking at old code?
>> Apparently. Check book3s_mmu_*.c
> I don't have that pattern.

It's in this patch.

> +static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> +{
> +	dprintk_mmu("KVM:
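[The function in question, sketched from the fragments visible in this thread. The debug string, the field names, and the backend hook are assumptions, not the patch's actual body:]

/* A sketch only: unhook one shadow PTE and release it. The field names
 * (list_pte, list_vpte) and the per-arch backend call are assumed. */
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
	dprintk_mmu("KVM: invalidating shadow PTE at %p\n", pte);

	/* Let the 32-bit/64-bit host MMU backend drop the real HPTE
	 * (assumed name for that hook). */
	kvmppc_mmu_invalidate_pte(vcpu, pte);

	/* Remove the entry from each hash chain it was added to. */
	hlist_del(&pte->list_pte);
	hlist_del(&pte->list_vpte);

	kmem_cache_free(hpte_cache_slab, pte);	/* slab from the earlier sketch */
}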
On 06/28/2010 12:27 PM, Alexander Graf wrote:
>> Am I looking at old code?
> Apparently. Check book3s_mmu_*.c

I don't have that pattern.

>> (another difference is using struct hlist_head instead of list_head,
>> which I recommend since it saves space)
> Hrm. I thought about this quite a bit before
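[To make the space argument concrete, a userspace mock of the two head layouts; the struct shapes mirror the kernel's, and the 4096-bucket count is an arbitrary illustration:]

#include <stdio.h>

/* Simplified copies of the kernel's list heads, for sizing only. */
struct list_head  { struct list_head *next, *prev; };	/* two pointers */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };	/* one pointer */

int main(void)
{
	/* A hash table stores an array of bucket heads, so the head type
	 * dominates the table's footprint; nodes cost the same either way. */
	size_t buckets = 4096;

	printf("list_head buckets:  %zu bytes\n", buckets * sizeof(struct list_head));
	printf("hlist_head buckets: %zu bytes\n", buckets * sizeof(struct hlist_head));
	return 0;	/* 64 KiB vs 32 KiB on a 64-bit build */
}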
On 28.06.2010, at 11:12, Avi Kivity wrote:

> On 06/28/2010 11:55 AM, Alexander Graf wrote:
>> +
>> +static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
>> +	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
>> +}
>> +
>> +static inline u64 kvmppc_mmu_hash_vpte(u64 vpage) {
>> +	return hash_64(vpage & 0xfULL, HPTEG_HASH_BITS_VPTE);
>> +}
On 06/28/2010 11:55 AM, Alexander Graf wrote:
> +
> +static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
> +	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
> +}
> +
> +static inline u64 kvmppc_mmu_hash_vpte(u64 vpage) {
> +	return hash_64(vpage & 0xfULL, HPTEG_HASH_BITS_VPTE);
> +}
> +
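[These helpers just pick a bucket in a per-vcpu hash array. A sketch of how they get used, with invented sizes and field names; the real PTE_SIZE and HPTEG_HASH_BITS_* values come from the patch's headers:]

#include <linux/hash.h>
#include <linux/list.h>
#include <linux/types.h>

/* Illustrative values; the patch defines its own. */
#define PTE_SIZE		12
#define HPTEG_HASH_BITS_PTE	13
#define HPTEG_HASH_NUM_PTE	(1 << HPTEG_HASH_BITS_PTE)

struct hpte_cache {
	struct hlist_node list_pte;	/* chain keyed by hash of eaddr */
	u64 eaddr;
};

/* One bucket array (per vcpu in the patch); hash_64() spreads the
 * shifted effective address over HPTEG_HASH_NUM_PTE chains. */
static struct hlist_head hpte_hash_pte[HPTEG_HASH_NUM_PTE];

static void hpte_insert(struct hpte_cache *pte)
{
	u64 idx = hash_64(pte->eaddr >> PTE_SIZE, HPTEG_HASH_BITS_PTE);

	hlist_add_head(&pte->list_pte, &hpte_hash_pte[idx]);
}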
On 28.06.2010, at 10:28, Avi Kivity wrote:

> On 06/26/2010 02:16 AM, Alexander Graf wrote:
>> Currently the shadow paging code keeps an array of entries it knows about.
>> Whenever the guest invalidates an entry, we loop through that entry,
>> trying to invalidate matching parts.
>>
>> While this is a really simple implementation, it is probably the most
>> ineffective one possible.
On 06/26/2010 02:16 AM, Alexander Graf wrote:
> Currently the shadow paging code keeps an array of entries it knows about.
> Whenever the guest invalidates an entry, we loop through that entry,
> trying to invalidate matching parts.
>
> While this is a really simple implementation, it is probably the most
> ineffective one possible.
On 26.06.2010, at 01:16, Alexander Graf wrote:

> Currently the shadow paging code keeps an array of entries it knows about.
> Whenever the guest invalidates an entry, we loop through that entry,
> trying to invalidate matching parts.
>
> While this is a really simple implementation, it is probably the most
> ineffective one possible.
Currently the shadow paging code keeps an array of entries it knows about.
Whenever the guest invalidates an entry, we loop through that entry,
trying to invalidate matching parts.

While this is a really simple implementation, it is probably the most
ineffective one possible. So instead, let's keep
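[What the description is driving at, as a before/after sketch. The helper names, the bucket array, and the vcpu fields are invented; the 5-argument hlist_for_each_entry_safe() form is the 2010-era kernel API:]

/* Before: every guest invalidation scans the whole array. */
static void flush_ea_linear(struct kvm_vcpu *vcpu, u64 guest_ea)
{
	int i;

	for (i = 0; i < vcpu->arch.hpte_cache_count; i++)	/* assumed field */
		if (pte_match(&vcpu->arch.hpte_cache[i], guest_ea))
			invalidate_pte(vcpu, &vcpu->arch.hpte_cache[i]);
}

/* After: hash the address and walk one short chain instead. */
static void flush_ea_hashed(struct kvm_vcpu *vcpu, u64 guest_ea)
{
	u64 idx = hash_64(guest_ea >> PTE_SIZE, HPTEG_HASH_BITS_PTE);
	struct hpte_cache *pte;
	struct hlist_node *node, *tmp;

	hlist_for_each_entry_safe(pte, node, tmp, &hpte_hash_pte[idx], list_pte)
		if (pte_match(pte, guest_ea))
			invalidate_pte(vcpu, pte);
}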