On Wed, Dec 20, 2023 at 01:12:35PM +0000, Albretch Mueller wrote:
> The only way I see is for the running computer on "exposed mode" to
> check via systemd if the time zone has been changed
Huh?  None of this makes any sense.

First of all, the system's default time zone only changes if a user
with root privileges decides to change it.  This is done by running
"dpkg-reconfigure tzdata", or by manually editing one file and
redirecting one symbolic link, if you're stubborn.

The selection of the computer's default time zone by its owner is not
in ANY way related to the computer's geographic location.

The default time zone has nothing to do with systemd, nor with any
other init system that may be in place.  Systemd does not know or care
about the system's default time zone.  "Checking via systemd" is a
phrase with no meaning, in this case.

> and reset it

What?  You want your software to *undo* the choices made by the
computer's owner?

> in
> that case before you use date for file naming for all data that is
> kept in a measured way.

So... no, you didn't actually mean "reset the system's default time
zone to a previous value".  You mean something like "use the currently
selected default time zone" -- but that's what everything does all the
time, unless the TZ environment variable is set.

I think you need to brush up on the fundamentals here.

> In the case of the person flying from Paris to Boston he will find
> out that he flew away from continental Europe once he reboots his
> computer in "exposed mode" (pun intended).

Again, you're not making any sense.  How does this computer know where
it is geographically located?  Does it have GPS hardware inside it?

And even knowing a user's geographic location is *not* enough to know
which time zone the user wishes to use.  Time zones are political.
There are places on earth where two groups of people in the same
geographic location use two different time zones.  Because politics.

> He will notified of the
> time zone he had used before and all such deltas will be kept.

Stop this.  Stop this IMMEDIATELY.  Learn how time works.  Then
rewrite everything.
The system clock stores time as an offset from a fixed point in time
known as "the epoch" -- which is midnight, 1 January 1970, UTC.  This
storage form is known as "epoch time" or "Unix time".

When displaying the current time to a user, the epoch time value is
converted into a human-readable time string, using the user's chosen
locale and time zone.  Users select these things by setting
environment variables.  If the environment variables are not set, the
system-wide default values are used instead.

Right now, as I write this, the epoch time is 1703082118 which is
1.703... billion seconds after the epoch.  Using the standard
Gregorian calendar, in the America/New_York time zone, this epoch
time value can be displayed as "Wed Dec 20 09:21:58 EST 2023" which
is a string that makes sense to me, a human being, who lives in this
time zone.  It could also be displayed as
"Wed Dec 20 09:21:58 AM EST 2023" and this still makes sense to me.
The difference between these two strings is the locale definition
that I, the end user, have chosen for my environment.  It is a
personal preference.

Any piece of software that wants to calculate durations needs to work
with epoch times, or something equivalent to epoch times.  Two moments
in time must be encoded in a way that they can be subtracted from each
other to determine how much time elapsed between moment one and moment
two.

Since we're on Unix systems which work with epoch time internally,
there's no reason to make up an equivalent, or to reinvent any wheels.
Just use the epoch time values that the system clock is working with
already.

So, to be completely blunt here, what you want to do is store ALL
timestamps in epoch time format.  This could be a string like
"1703082118", or it could be the raw 64-bit integer which this string
represents.  Either way's fine, depending on your programming language
and tool set.
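A minimal sketch of what "capture and store epoch time" looks like in
bash (the variable name and log file here are invented for
illustration):

```shell
#!/bin/bash
# Capture the current moment as an epoch time -- an integer count of
# seconds since 1970-01-01 00:00:00 UTC.  Two equivalent ways:

printf -v start '%(%s)T' -1   # bash 4.2+ builtin; -1 means "now"
start=$(date +%s)             # external GNU date; same value

# Store the raw number, never a formatted time string:
echo "$start backup-started" >> events.log
```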
Because you have all your timestamps in epoch format, calculating
intervals is easy -- you just subtract, and that's how many seconds
have elapsed.  You can convert this number of seconds to an interval
expressed in other units, like "1 hour, 23 minutes and 17 seconds"
using a bit of arithmetic.  Some programming languages may have tools
to do this for you.

If you want to display human-readable time strings corresponding to
your stored epoch times, then you use a tool which takes an epoch
time value as input, and produces a human-readable time string as
output.  This is *incredibly complicated* so you do *not* do this
yourself using stone knives and bear skins.  You use a *tool* that
has been developed and honed and debugged for decades.

Since your chosen language seems to be bash, you have two good
choices for the tool to do this: bash's builtin printf, or GNU date.

GNU date is easy enough:

unicorn:~$ date -d @1703082118
Wed Dec 20 09:21:58 EST 2023

You can specify a format string, if you don't want the default
format.  The end user's environment variables (TZ, LANG, LC_TIME,
LC_ALL) will help determine the output, and so will the system's
default time zone (if TZ is not set).

Bash's printf works quite similarly:

unicorn:~$ printf '%(%c)T\n' 1703082118
Wed Dec 20 09:21:58 2023

Again, the output string is determined by the user's environment and
the system's default time zone (in the absence of a TZ variable).
You may specify a format, and if you do so, it will be used *in
conjunction* with the user's environment to generate the output
string.

The key point here is that you don't STORE these human-readable time
strings anywhere.  You simply *produce* them on demand, using the
epoch time values that you *do* store.

Think of time strings as "write only".  You never read one.  You only
write one, and only when the output is intended for human consumption.
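For what it's worth, the arithmetic behind a breakdown like "1 hour,
23 minutes and 17 seconds" is just integer division and modulus.  A
sketch, using two made-up stored timestamps:

```shell
#!/bin/bash
# Subtract two stored epoch times to get an elapsed duration in
# seconds, then break it down with bash integer arithmetic.

start=1703082118   # made-up stored timestamps
end=1703087115

elapsed=$(( end - start ))   # 4997 seconds

printf '%d hour(s), %d minute(s) and %d second(s)\n' \
    $(( elapsed / 3600 )) \
    $(( elapsed % 3600 / 60 )) \
    $(( elapsed % 60 ))
# 1 hour(s), 23 minute(s) and 17 second(s)
```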
If a computer's default time zone is changed, or if a user's TZ
variable is changed, then your programs should translate your stored
epoch time values to different output strings.  This is all 100%
automatic.  If your software is written properly, you don't need to
*do* anything specifically to handle this.  The underlying tools will
handle it.

If you have two timestamps (stored in epoch time format) which were
generated when the computer's geographic and political time zones
were different, *you do not care*.  The epoch time values are
independent of time zones.  When the timestamp values are displayed
to the end user, they will be displayed in whatever time zone the end
user is currently choosing to operate in, regardless of what time
zone the user may have been operating in when the timestamps were
originally stored.

The end user may need to be aware of this.  Maybe.  It's up to them
to decide whether this is a significant piece of information.

If you, the *developer*, have talked to your end users, and you've
collectively decided that certain timestamp values must be reported
using the time zone that was in use when they were collected, then
you will need to store time zone values alongside the epoch time
values.  This will introduce a great deal of additional complexity to
your application, so you should only do this if you've *actually*
determined that it's needed.

In this case, you would need to override the TZ environment variable
each time you display one of these "local timestamps" to the end
user.  Once again, the underlying tools will take care of the
translation for you.  You "only" need to worry about storing,
retrieving, and temporarily setting these TZ values correctly, and
using the right TZ value with its corresponding epoch time value.
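A sketch of that last case, assuming you've stored each record as an
epoch time plus a time zone name (the record layout here is invented):

```shell
#!/bin/bash
# Display a stored epoch time in the time zone that was stored
# alongside it, by overriding TZ for a single command only.

ts=1703082118
zone='Europe/Paris'   # recorded when the timestamp was taken

# The TZ assignment applies only to this one invocation of GNU date;
# the user's own environment is untouched.
TZ="$zone" date -d "@$ts"

# The same epoch time, shown in the user's current time zone:
date -d "@$ts"
```

Same epoch value, two different strings -- the translation is done
entirely by date and the tzdata files; your script only supplies the
right TZ value with the right timestamp.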