[Cloud] Re: Redis eviction stats?

2022-03-21 Thread David Caro

Unfortunately we don't have good observability on the redis instances
themselves at the moment, so I can't easily give you a number/graph. But
looking at the memory usage and the logs, evictions seem pretty infrequent
(as in, none appear to have happened since the VM last rebooted), though I'm
not fully confident in that statement.

Once I review the observability of those instances we will have a bit more
data, but we might not have historical data. So if it's not a lot of trouble,
you could try using it, see if it's stable enough, and decide then.
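If you do try it, one low-effort way to track stability yourself is to poll the server's own counters: `redis-cli INFO stats` reports `evicted_keys`, the number of evictions since the last restart, so a value that stays flat between polls means no new evictions. A minimal sketch (the INFO output here is a canned sample so the snippet is self-contained; in practice you would capture it with `redis-cli -h <redis-host> INFO stats`):

```shell
# Assumption: $info holds output captured earlier from `redis-cli INFO stats`.
# A canned sample is used here so the snippet runs without a live server.
info='# Stats
total_connections_received:100
evicted_keys:0
expired_keys:123'

# Extract the counter; INFO lines are "field:value", sometimes CR-terminated.
evicted=$(printf '%s\n' "$info" | awk -F: '/^evicted_keys:/ {print $2}' | tr -d '\r')
echo "evicted_keys=$evicted"
```

Running that periodically (e.g. from cron) and watching whether the counter grows would give you the eviction rate the list archives currently can't.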

Keep in mind also that we use RBD only for persistence, so if there's a
reboot you might lose data too (it's meant to be a cache service in the end).
That hasn't happened in 10 months, but it might happen at some point without
notice.


Cheers

On 03/19 11:17, Roy Smith wrote:
> I'm thinking about using the Toolforge Redis instance for a persistent data 
> store.  For the data I want to store, it's OK if there are occasional 
> evictions due to memory exhaustion, but only for fairly small values of 
> "occasional".
> 
> So, I'm curious what the situation is for this particular instance.  How 
> often does data tend to get evicted?  If the answer is, "We haven't seen that 
> happen in many months", then I'm fine.  If the answer is, "It happens every 
> day", then I need to be looking at a different data store.
> ___
> Cloud mailing list -- cloud@lists.wikimedia.org
> List information: 
> https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/

-- 
David Caro
SRE - Cloud Services
Wikimedia Foundation 
PGP Signature: 7180 83A2 AC8B 314F B4CE  1171 4071 C7E1 D262 69C3

"Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment."




[Cloud] [Cloud-announce] Cloud-vps bastion upgrades today

2022-03-21 Thread Andrew Bogott
In a few hours we'll be replacing the existing cloud-vps bastions with 
new systems running Debian Bullseye.

Because this is a DNS change, existing bastion sessions should not be 
interrupted. New connections will produce fingerprint warnings that will 
require you to update your .ssh/known_hosts. Here are the fingerprints 
for the new systems:

primary.bastion.wmcloud.org, eqiad1.bastion.wmcloud.org, 
bastion.wmcloud.org:

ED25519 key fingerprint is 
SHA256:QlZONtScYR4O5jGnrmKRhWVF9lJE+aReENpHXqeOL/4



secondary.bastion.wmcloud.org:

ED25519 key fingerprint is 
SHA256:tRgnLMmISSuByzzeX8yXWcdFKjZad8Hdy6Y7E6jgaGI
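For anyone who prefers to clear the stale entries ahead of time rather than editing known_hosts by hand, `ssh-keygen -R` removes all recorded keys for a host. A minimal sketch using the hostnames from this announcement (the `KNOWN_HOSTS` variable is just so you can point it at a test file instead of your real one):

```shell
# Drop the old host keys so the next connection records the new ones.
KNOWN_HOSTS="${KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"
touch "$KNOWN_HOSTS"  # ssh-keygen -R fails if the file does not exist
for h in primary.bastion.wmcloud.org secondary.bastion.wmcloud.org \
         eqiad1.bastion.wmcloud.org bastion.wmcloud.org; do
  # -R removes every entry for the host; ignore "host not found" noise
  ssh-keygen -R "$h" -f "$KNOWN_HOSTS" >/dev/null 2>&1 || true
done
```

On your next `ssh`, compare the fingerprint the client prints against the SHA256 values above before accepting it.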



-Andrew + the WMCS team

___
Cloud-announce mailing list -- cloud-annou...@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud-announce.lists.wikimedia.org/


[Cloud] [Cloud-announce] [REMINDER] Toolforge Debian Stretch Grid Engine deprecation

2022-03-21 Thread Seyram Komla Sapaty
Hello, all!

This is a follow-up on our earlier announcement[0] of the above.

Thanks to those who have already migrated their tool(s) from the Debian
Stretch grid or are in the process of doing so.
At the start of this process, there were 867 tools running on the Stretch
grid. The current number is 821.

=== Recap ===
We are migrating all Toolforge servers away from Debian Stretch[1] to
Debian Buster, and the most affected piece is the Grid Engine backend in
particular.

We need to shut down all Stretch hosts before the end of support date to
ensure that
Toolforge remains a secure platform. This migration will take several
months because many people still use the Stretch hosts and our users
are working on tools in their spare time.

== What should I do? ==
You should migrate your Toolforge tool to a newer environment.
You have two options:
* migrate from Toolforge Stretch Grid Engine to Toolforge Kubernetes[2].
* migrate from Toolforge Stretch Grid Engine to Toolforge Buster Grid
Engine.[3]

== Timeline ==
* 2022-02-15: Availability of Debian Buster grid announced to community -
DONE
* 2022-03-21: Weekly reminders via email to tool maintainers for tools
still running on Stretch - IN PROGRESS
* Week of 2022-04-21:
** Daily reminders via email to tool maintainers for tools still running on
Stretch
** Switch login.toolforge.org to point to Buster bastion
* Week of 2022-05-02: Evaluate migration status and formulate plan for
final shutdown of Stretch grid
* Week of 2022-05-21: Shut down the Stretch grid

We thank all of you for your support during this migration process.

You can always reach out via any of our communication channels[4].

[0]
https://lists.wikimedia.org/hyperkitty/list/cloud-annou...@lists.wikimedia.org/thread/EPJFISC52T7OOEFH5YYMZNL57O4VGSPR/
[1] https://wikitech.wikimedia.org/wiki/News/Toolforge_Stretch_deprecation
[2]
https://wikitech.wikimedia.org/wiki/Help:Toolforge/Jobs_framework#Grid_Engine_migration
[3]
https://wikitech.wikimedia.org/wiki/News/Toolforge_Stretch_deprecation#Move_a_grid_engine_webservice
[4]
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/About_Toolforge#Communication_and_support

Thanks.
-- 
Seyram Komla Sapaty
Developer Advocate
Wikimedia Cloud Services