Hi Stefan,
I looked at this a long time back (circa 2011), and things may have
changed since then. See:
https://forums.aws.amazon.com/thread.jspa?threadID=59753
When I last looked, we weren't emulating the TSC, and the CPUID flags
that advertise an invariant TSC came through. This was making the
On the contrary, I would say we can be quite sure this is *not* a
re-introduction of the TSC problem. As I wrote in the other bug report, this
time we do not have unusually high process times. In other words, the time
increments normally after boot. It is just this one jump at boot. And the ju
Have we been able to confirm whether this is a reintroduction of, or
related to, the issue that was resolved in
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/727459?
--
For reference, this was Amazon's response on the forum post:
https://forums.aws.amazon.com/message.jspa?messageID=561728#561728
Basically "we are not changing, move to current generation instances."
Is there some sort of assistance I should request there, or through
Amazon support? Has anyone fro
Got feedback from the Amazon dev I was asking. Sounds like they have it
on their plate. I have not heard any details yet, but they got a link
to this report and might jump in.
--
I've started a thread on the AWS forums about this.
https://forums.aws.amazon.com/thread.jspa?threadID=15
If I don't get a response, I'll poke our account rep.
--
I was wrong about this being rare; it is apparently happening quite
often on most of our m1.small instances.
--
For what it's worth, we've seen this in an AWS instance. It is rare, and
probably not reproducible in a test environment.
--
Right now, I am at a point where I think I need someone from the other
side (Amazon). The only thing that is certain is that, in the cases we
see, the vcpu time info appears to be missing the correction offset
that adapts the system uptime (host) to the uptime of the guest. But
without im
Of course, after stating that I would look for an instance that is not
affected, all the instances I start are on the same CPU type and affected. :/
>From further research it sounds like Xen 3.4.3 has a default of running
guests in native tsc mode while Xen 4.0 and later have a default of
native if the ts
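For reference, the knob in question is the per-domain tsc_mode setting. A
sketch, assuming the xl config syntax (mode names as documented in xl.cfg(5);
older xm configs used numeric values for the same modes):

# TSC handling for this guest:
#   default        - native RDTSC while Xen considers the host TSC safe,
#                    emulation otherwise (e.g. after migration)
#   always_emulate - every RDTSC is emulated
#   native         - RDTSC always executes natively
tsc_mode = "default"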
I had a dream... well, honestly, this idea came to me around then. So I was
fishing in the same availability zone to get another case of this (I was not
sure I could just go back to yours). Then I installed a 12.04 (3.2.x) kernel
and, bingo, same problem.
So at least I can stop trying to think of a way
To clarify, I'm pretty sure that whether this happens depends on the
Amazon hardware the instance runs on.
My warning was that, while the instance was currently running on
hardware that causes the problem, if it was stopped and then started
later, it might not end up on the same hardware, and t
When inspecting the vcpu info on a local PV guest and an EC2 guest provided by
Joe, I found that:
  time = {
    version = 33456,
    pad0 = 0,
    tsc_timestamp = 86813831140545,
    system_time = 37805107510066,
    tsc_to_system_mul = 3743284562,
    tsc_shift = -1 '\377',
    flags = 0 '
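To make those numbers concrete: the guest clock is computed by scaling the
TSC delta with tsc_to_system_mul/tsc_shift and adding system_time. Below is a
minimal user-space sketch of that scaling (mirroring what pvclock_scale_delta()
does in arch/x86/kernel/pvclock.c; the function and variable names are mine,
and the 128-bit multiply is a GCC/Clang extension), fed with the values from
the dump:

#include <stdint.h>
#include <stdio.h>

/* ns = system_time + (((tsc - tsc_timestamp) * 2^tsc_shift
 *                       * tsc_to_system_mul) >> 32)            */
static uint64_t pvclock_ns(uint64_t tsc, uint64_t tsc_timestamp,
                           uint64_t system_time,
                           uint32_t tsc_to_system_mul, int8_t tsc_shift)
{
    uint64_t delta = tsc - tsc_timestamp;

    /* Pre-scale the TSC delta by 2^tsc_shift (here -1, i.e. halve it) */
    if (tsc_shift >= 0)
        delta <<= tsc_shift;
    else
        delta >>= -tsc_shift;

    /* 32.32 fixed-point multiply by the frequency factor */
    return system_time +
           (uint64_t)(((__uint128_t)delta * tsc_to_system_mul) >> 32);
}

int main(void)
{
    /* Values from the EC2 guest dump above; with the delta at zero the
     * clock starts at system_time, i.e. ~37805 s already at guest boot. */
    uint64_t ns = pvclock_ns(86813831140545ULL, 86813831140545ULL,
                             37805107510066ULL, 3743284562U, -1);
    printf("%llu ns (~%llu s)\n", (unsigned long long)ns,
           (unsigned long long)(ns / 1000000000ULL));
    return 0;
}

If system_time reflects the host's uptime and nothing subtracts a per-guest
offset, that ~37805 s is exactly the kind of jump reported here.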
The Xen side (moving the vcpu info, which contains the vcpu time info,
from the shared struct to the per-cpu area through the register hypercall) has
not changed much between 12.04 and 14.04. The vcpu clock read has, though; it
involves some slightly different memory barriers. What has not chan
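For reference, the barriers in question guard the lock-free snapshot of the
time info: Xen makes 'version' odd while it updates the struct, and the guest
retries until it reads a stable, even version. A simplified user-space
paraphrase of that read loop (not a verbatim kernel copy; the fences stand in
for the kernel's rmb()):

#include <stdint.h>

/* Layout as in the dump above (Xen's vcpu_time_info) */
struct vcpu_time_info {
    uint32_t version;
    uint32_t pad0;
    uint64_t tsc_timestamp;
    uint64_t system_time;
    uint32_t tsc_to_system_mul;
    int8_t   tsc_shift;
    uint8_t  flags;
    uint8_t  pad[2];
};

static struct vcpu_time_info
read_time_info(volatile struct vcpu_time_info *src)
{
    struct vcpu_time_info dst = {0};
    uint32_t version;

    do {
        version = src->version;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);  /* rmb() */
        dst.tsc_timestamp     = src->tsc_timestamp;
        dst.system_time       = src->system_time;
        dst.tsc_to_system_mul = src->tsc_to_system_mul;
        dst.tsc_shift         = src->tsc_shift;
        dst.flags             = src->flags;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);  /* rmb() */
        /* Odd version: update in progress; changed version: we raced */
    } while ((version & 1) || version != src->version);

    dst.version = version;
    return dst;
}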
You're correct in your findings. m1.small, paravirtual guest.
I'm in the process of migrating my company's servers from 12.04 to
14.04. It was during this migration that I realized I was seeing syslog-
ng logs for October 2014 or later (i.e. months into the future) on the new
14.04 instances, and I'
Writing up my findings so far:
- this looks to be an m1.small instance (1 vcpu, 1.7GB memory)
- this instance type is a PV guest
So sched_clock (which is used by printk) will be using
paravirt_sched_clock, which is set to xen_clocksource_read. The time
info is kept in a per-cpu variable (vcpu_info).
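Tying that together with the two sketches above, a user-space analogue of
what (as far as I can tell) xen_clocksource_read() boils down to; __rdtsc()
needs x86intrin.h on GCC/Clang, and in the kernel 'src' would be the 'time'
member of the per-cpu vcpu_info:

#include <x86intrin.h>  /* __rdtsc() */

/* printk's timestamp -> sched_clock() -> paravirt sched_clock hook
 *   -> xen_clocksource_read(), which amounts to:                  */
static uint64_t guest_clock_ns(volatile struct vcpu_time_info *src)
{
    struct vcpu_time_info t = read_time_info(src);  /* snapshot, see above */
    return pvclock_ns(__rdtsc(), t.tsc_timestamp, t.system_time,
                      t.tsc_to_system_mul, t.tsc_shift);
}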
** Changed in: linux (Ubuntu)
Importance: Undecided => Medium
--
https://bugs.launchpad.net/bugs/1349883
Title:
dmesg time wildly incorrect on paravirtual EC2 instances.
** Changed in: linux (Ubuntu)
Assignee: (unassigned) => Stefan Bader (smb)
--
apport information
** Tags added: apport-collected
** Description changed:
On Ubuntu 14.04 EC2 instances, during startup the dmesg time jumps from 0
seconds to thousands of seconds. This causes kernel logs to be recorded
with dates months in the future.
This appears to be related to the kerne