Hi Marcelo,
I kicked the tires on the 4.4.0-85-generic kernel from xenial-proposed.
The fixes look good: the PTP device shows up, and TimeSync no longer
causes "Time has been changed" messages in systemd. I also see that the
apt-daily timer is no longer being randomly delayed by the clock changes.
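For anyone else verifying, here is a rough sketch of the checks involved
(the ptp0 index and the "hyperv" clock name are assumptions and may
differ on your setup):

  # Confirm the Hyper-V PTP clock source is registered
  ls /sys/class/ptp/
  cat /sys/class/ptp/ptp0/clock_name
  # Confirm TimeSync is no longer generating time-change events
  journalctl -b | grep "Time has been changed"
  # Confirm the apt-daily timer is no longer being pushed out by clock jumps
  systemctl list-timers apt-daily.timer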
--
Please ignore my last comment #13, I intended it for another bug.
--
https://bugs.launchpad.net/bugs/1676635
Title:
[Hyper-V] Implement Hyper-V PTP Source
Here are a few upstream commits that should also be included.
Drivers: hv: util: don't forget to init host_ts.lock
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/drivers/hv/hv_util.c?id=5a16dfc855127906fcd2935fb039bc8989313915
hv_utils: drop .getcrosststamp() support f
@Alberto Ornaghi:
Are you running in Azure or on your own Hyper-V host? If you have access
to the Hyper-V host, you can disable the TimeSync integration service
from the VM settings.
Otherwise, you can create a script that runs "echo
2dd1ce17-079e-403c-b352-a1921ee207ee > /sys/bus/vmbus/drivers/
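For reference, a fuller sketch of that approach, assuming the standard
sysfs unbind interface and that hv_utils is the driver directory name
(check ls /sys/bus/vmbus/drivers/ on your kernel, since the name may
differ):

  # Unbind the TimeSync VMBus device (instance GUID above) from its driver
  echo 2dd1ce17-079e-403c-b352-a1921ee207ee > /sys/bus/vmbus/drivers/hv_utils/unbind
  # Rebind it later if needed
  echo 2dd1ce17-079e-403c-b352-a1921ee207ee > /sys/bus/vmbus/drivers/hv_utils/bind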
The patch from @faulpeltz hasn't been mainlined because of review
feedback that it shouldn't need to close and reopen the /dev/vmbus/hv_vss
device after a failure.
I addressed this comment in a modified version of @faulpeltz's patch
(see comment #345).
I haven't heard from @faulpeltz whether he'
Hi @jsalisbury,
Any status update on the patches for this issue?
It appears the test kernels have resolved the issue.
Let us know if you need additional testing or have questions about the
patches.
Thanks,
Alex
--
Thanks Joseph for compiling the list. The patches you've outlined should
be sufficient.
--
https://bugs.launchpad.net/bugs/1470250
@faulpeltz
I agree. Your patch should also be included in case a FREEZE operation does
exceed the increased timeout.
I've attached a modified version of your patch that addresses some
concerns we had offline. Could you give it a try?
** Patch added:
"0004-Tools-hv-vss-Thaw-the-filesystem-and-co
I'll let @jsalisbury comment on the status of his backported patches.
@jsalisbury, I'd also take this recently submitted upstream patch. It
ensures that the VSS driver doesn't time out long-running FREEZE
operations too early. This should preclude the need for
faulpeltz's pa
Thanks @faulpeltz for the info.
I sent you a private message with the list of maintainers to send the
patch to (I'm avoiding pasting it here in case spam bots crawl this
archive).
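For anyone else preparing a patch, the kernel tree can generate that
maintainer list itself; a quick sketch (the file and patch names below
are only placeholders):

  # From the top of a kernel source tree
  ./scripts/get_maintainer.pl tools/hv/hv_vss_daemon.c
  # or run it directly against the patch file
  ./scripts/get_maintainer.pl 0001-your-change.patch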
--
Hi @faulpeltz,
A few questions/comments about your patch:
1) Can you submit your patch to the upstream kernel?
2) Under load, were you able to measure how long the FIFREEZE operation took
before it succeeded? I'm trying to see if we can increase the timeout of the
kernel driver before it hits t
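For reference, one way to measure this from user space (a sketch; /data
is just a placeholder mount point, and fsfreeze issues the same
FIFREEZE/FITHAW ioctls the VSS path uses):

  # Freeze the filesystem under load and time how long it takes
  time fsfreeze --freeze /data
  # Thaw it again immediately afterwards
  fsfreeze --unfreeze /data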
Evan, from your screenshots, I can see that the memory demand values and
the /proc/meminfo values match each other closely.
On the 16.04.01 VM, /proc/meminfo shows a Committed_AS value of ~1800MB.
The balloon driver then adds a buffer of ~300MB (this buffer is
calculated according to how much
Bernhard (galmok), the memory corruption you're seeing is likely a known
issue that happens when Linux guests are running on Windows Server 2012.
This was recently fixed in the upstream Linux kernel, so hopefully a
future update will include those patches.
--
One reason you may see the timeout messages is a mismatch between the
user-space hv_vss_daemon version and the kernel version.
Can you rebuild the user-space hv_vss_daemon under the source tree's
tools/hv directory and replace the one that's provided in Ubuntu by
default?
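Roughly something like this (a sketch; the installed path and the
service name are what I'd expect on Ubuntu, so please verify both on
your system first):

  # From the top of a kernel source tree matching the running kernel
  make -C tools/hv
  # Swap in the freshly built daemon (path/service name may differ by release)
  systemctl stop hv-vss-daemon.service
  cp tools/hv/hv_vss_daemon /usr/sbin/hv_vss_daemon
  systemctl start hv-vss-daemon.service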
--
And in response to Joseph's comment #309, the second patch shouldn't be
required as it's related to a feature introduced in Windows Server 2016
(I'm assuming you folks are testing on Windows Server 2012 R2).
--
Might be worth trying this patchset: https://lkml.org/lkml/2016/8/18/859
The first patch in the set addresses some issues with VSS that would
cause it to take a long time to initiate a backup (and possibly even
time out).
The second patch is not necessary (but will require the VSS daemon to be
replaced if you c
> Committed_AS: 7291440 kB
The amount of committed memory reported by meminfo is about 7GB. So it
shouldn't be surprising to see that the demand as seen by Hyper-V is
also high.
In general, the Hyper-V demand is calculated as the Committed_AS value
plus some buffer (roughly 700MB if you have 8GB of to
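As a rough back-of-the-envelope check (the ~700MB figure here is just
the approximation mentioned above, not the exact buffer the hv_balloon
driver computes):

  # Committed_AS plus ~700MB, printed in GB
  awk '/^Committed_AS/ { printf "approx. demand: %.1f GB\n", ($2 + 700*1024) / (1024*1024) }' /proc/meminfo
  # With Committed_AS = 7291440 kB this comes out to roughly 7.6 GB,
  # which is in the same ballpark as the demand Hyper-V reports.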
The hv_balloon driver hasn't changed between 15.10 and 16.04, so there
shouldn't be any difference in the way the driver reports demand to
Hyper-V.
To provide a further breakdown of the memory usage, can you show the
output of "cat /proc/meminfo"?
Might help to compare this info between 16.04 and
One other caveat I should mention:
Even if the VM has reduced its memory consumption, Hyper-V does not
necessarily reclaim the unused memory from that VM.
Generally, Hyper-V reclaims unused memory from a VM if it's seeing
memory pressure from other VMs or if the Hyper-V host itself is seeing
memo
Thanks. In both VMs, it looks like the buff/cache memory dropped by
2GB. Did the Hyper-V host eventually reclaim this memory after some
time? Can you check whether the Hyper-V host's assigned memory also
drops?
Also, can you tell us if you tried this on an older build where it
doesn't
In both screenshots, it appears that the "buff/cache" value is very
high.
Perhaps the balloon driver is reporting the "buff/cache" memory as in
use to the host. Can you try running the following commands to flush
write buffers and free the page cache?
1) sync
2) echo 3 > /proc/sys/vm/drop_caches
After running
Hi Jason,
As Josh mentioned, the screenshot shows that demand as seen by Hyper-V
is still quite high compared to what is displayed inside the guest. If
this number doesn't settle, then Hyper-V would have no reason to reclaim
memory. Does the Hyper-V memory demand number ever settle after some
time
One other thing to add: there were a bunch of commits made upstream to
the storvsc driver in the last few months.
Can we try them out to see if they have any impact on this issue? In
particular:
1) 81988a0e6b031bc80da15257201810ddcf989e64 - storvsc: get rid of bounce buffer
2) 3209f9d780d13
To help see if this is an issue in the hv_storvsc driver, I took the
storvsc driver code from 3.13.0-34.60 (presumably a good build) and
applied it to the 4.2.0-27 kernel (presumably a bad build). I ran
tiobench with backups enabled and was able to repro after about 48 hours.
This implies that either:
1) Th
Don't think this has been asked before, but has anyone had a repro when
backups were turned off? Or does this only happen when backups are
enabled?
I'm verifying this on my own as well, but if this happens regardless of
whether backup is enabled or disabled, then it'll help us narrow down
the cause f
Any updates on whether the adbb4e646 test kernel is able to repro this
issue?
In any case, if we can get the next test kernel to try, I can help try
to repro on it as well.
--
I'm also going to try reverting this commit on a 4.2.0-27 kernel where I
was able to see this issue, and see if I can repro it there as well.
--
Hi Joseph,
Any updates on the bisect?
--
https://bugs.launchpad.net/bugs/1470250
Title:
[Hyper-V] Ubuntu 14.04.2 LTS Generation 2 SCSI Errors on VSS Based
Backups
Thanks for the update @jsalisbury.
7af024a isn't likely to be the cause, as that commit only changes
behavior during VMBus shutdown (i.e., when we clean up VMBus during
guest shutdown).
This leads me to think that commit c8c38b3 is more likely to be a
factor. Nonetheless, I'll withhold further comm
Thanks for going through this testing.
Based on your results, I looked at the Hyper-V related commits in
http://kernel.ubuntu.com/git/ubuntu/ubuntu-trusty.git between
3.13.0-26.48 (GOOD) and 3.13.0-35.62 (BAD).
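For the record, this is roughly how such a list can be pulled out of the
tree (a sketch; double-check the exact Ubuntu-* tag names with git tag
first):

  git clone http://kernel.ubuntu.com/git/ubuntu/ubuntu-trusty.git
  cd ubuntu-trusty
  # Hyper-V related changes between the known-good and known-bad releases
  git log --oneline Ubuntu-3.13.0-26.48..Ubuntu-3.13.0-35.62 -- drivers/hv drivers/scsi/storvsc_drv.c drivers/net/hyperv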
From reviewing these commits, there doesn't seem to be an obvious
culprit. But it's p
Thanks for testing this.
We will continue looking at this internally. It's possible that the
issue's with the storage drivers.
--
https://bugs.launchpad.net/bugs/1470250
While we're not completely certain this is the smoking gun, I've
observed in the logs posted here that no freeze/thaw operations are
taking place when the issue occurs. The commits that @jrp identified fix
the messaging between the VSS utility driver and the host, and will
hopefully e
In addition to what @jrp mentioned, I would also add:
b9830d120cbe155863399f25eaef6aa8353e767f "Drivers: hv: util: Pass the
channel information during the init call"
--
Hi Jay,
Does this issue occur even when VXLAN is not configured between the two
instances?
--
https://bugs.launchpad.net/bugs/1508706
This test kernel fixes the issue.
--
https://bugs.launchpad.net/bugs/1439780
Title:
[Hyper-V] Fiber Channel critical target error
Hi,
I'm finished with the bisect, and the following commit fixes this
issue:
dc45708ca9988656d706940df5fd102672c5de92 [storvsc: Set the SRB flags
correctly when no data transfer is needed]
Looks like we'll want to include this patch.
Here's the rest of my bisect log, in case you're interested
Hi,
The issue also shows up in this kernel.
I can take over the bisect. My current bisect log is the same as yours.
I'm currently building the next test kernel up to the following commit:
74856fbf441929918c49ff262ace9835048e4e6a
Will let you know when I narrow it down further.
--
Hi Joseph,
A kernel with commits up to 7ce14f6ff26460819345fe8495cf2dd6538b7cdc is
also seeing this issue.
I can run my own bisect as well so we can save some time going back
and forth. Are you basing your bisect off of the linux-stable repository?
Thanks,
Alex
--
The issue also shows up in this kernel.
--
https://bugs.launchpad.net/bugs/1439780
Still not working with the kernel up to this commit:
0b6280c62026168f79ff4dd1437df131bdfd24f2
--
https://bugs.launchpad.net/bugs/1439780
The test kernel still hits this issue.
--
https://bugs.launchpad.net/bugs/1439780
To answer your question, it looks like the issue can be reproduced as
recently as v4.1-rc4.
So the last kernel version in which we see this bug is: v4.1-rc4
And the first kernel version in which we don't see this bug is: v4.1-rc5
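For anyone curious, this is roughly how the range above gets narrowed to
a single fixing commit with git bisect (a sketch using custom terms,
since here the newer kernel is the good one):

  git bisect start --term-old=broken --term-new=fixed
  git bisect broken v4.1-rc4
  git bisect fixed v4.1-rc5
  # Build and boot each commit git suggests, run a VSS backup, then mark it:
  git bisect broken   # the critical target error still reproduces
  git bisect fixed    # the error no longer reproduces
  git bisect reset    # when finished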
--
So I tried to repro this on the upstream v4.1-rc5 kernel from
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.1-rc5-unstable/.
It looks like the issue's been fixed there.
--