Mirror of https://github.com/GameServerManagers/LinuxGSM.git (synced 2026-04-25 14:15:59 +03:00)
[GH-ISSUE #626] Testing Infrastructure #499
Originally created by @jaredballou on GitHub (Jan 12, 2016).
Original GitHub issue: https://github.com/GameServerManagers/LinuxGSM/issues/626
This discussion is to map out a better way to support testing of the script. The current methods rely entirely on @dgibbs64 managing and testing everything, and that's not only a huge load for him to handle, but it's rife with opportunities for mistakes. I have spent the last 10 years joining teams and helping to automate build/deploy/test cycles in a way that requires minimal developer effort and avoids "dropped balls" of tests not getting run, so I think I am in a unique position to help here. I'd also enjoy having a few others with an interest in automation or DevOps work with me, so I can teach people these skills and make sure the project has more than one person who can do this stuff.
My thinking is we need to draw up a matrix of what we need to test, starting with a listing of "test manifests" that describe in detail what each test will do, and define branching from those cases. For example: "I want to test installing Insurgency, then do an update. I then want to delete $FILE, run validate, and make sure the files are correct." These test cases will be stored in a standardized format in Git so that any of us can pull and push tests, and so community bug reports can be converted into tests fairly easily. They may simply be more Bash scripts that call LGSM, perform actions, and then verify the output and state of the system, or we may need to get more complex. Once we have a good idea of what and how we want to test, we can discuss tools that will suit those needs.
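A manifest of this kind could start as nothing fancier than a Bash script with a small pass/fail helper. A minimal sketch, assuming nothing about the project's actual layout (the checked commands below are placeholders, not real LGSM calls):

```shell
#!/bin/bash
# Minimal test-manifest runner sketch. The steps below are placeholder
# checks; a real manifest would invoke the LGSM script itself
# (install/update/validate) and then inspect files on disk.

pass=0
fail=0

run_step() {
  # Usage: run_step "description" command [args...]
  local desc="$1"; shift
  if "$@" > /dev/null 2>&1; then
    echo "PASS: ${desc}"
    pass=$((pass + 1))
  else
    echo "FAIL: ${desc}"
    fail=$((fail + 1))
  fi
}

# Example manifest steps (placeholder checks, not game-server state):
run_step "shell binary present"   test -e /bin/sh
run_step "temp directory present" test -d /tmp

echo "Results: ${pass} passed, ${fail} failed"
```

Manifests like this could live in Git next to the main script, and converting a community bug report into a regression test would mostly be a matter of adding a `run_step` line.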
To do the software installs themselves, we will likely need a collection of server binaries and/or Steam keys, also kept in a centralized location. My guess is the best way to do this is to look into Steam Family Sharing, or even to create a testing-specific Steam account where we can add these games to be installed from. We would need to share the password among the developers who need to deploy local test instances, which is a security risk we would need to mitigate. As an alternative, we could use that account only on the official infrastructure, and require devs without access to the main test rig to handle their own software acquisition.
My thinking is that if someone wants us to add a new game, they can gift that account a copy, and we use it to deploy to development and test servers, so we can both add features and maintain tests over them long-term without needing to buy a bunch of software ourselves. We may even be able to contact the developers of the games offered via Steam and tell them of our plans; we could probably get a lot of free keys, since a well-tested tool that lets server admins deploy their games only benefits the developers.
Once that's done, we need to define the platforms we wish to test. My guess here is we use Amazon/Rackspace as a "baseline" for our machines, and go from there. I think a good best practice here is to create Docker images of the baselines we expect, sanitized to the point where we can share them publicly for other testers, or simply for end users who want to containerize their instances and know that it will just work.
We need to define the scope of OS combinations we want to support, and have that list managed centrally, so that it is cross-referenced in the docs. So, let's say the list is "Ubuntu 2014.10, 2015.6, Debian Sarge, CentOS 6.2, CentOS 6.7, CentOS 7.0". We then have the test infrastructure use that list, run each test case on each OS, and report back. That way, if a game or feature is not working on only one system, we can investigate whether it's just that OS or something deeper, and we can keep the docs up to date as to what we can really support without guessing.
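Crossing the central OS list with the test cases could be a simple nested loop. A sketch, with illustrative image and test names (a real runner might hand each matrix cell to Docker, as commented):

```shell
#!/bin/bash
# Build the OS x test-case matrix from two centrally managed lists.
# Image names and test names here are illustrative placeholders.
supported_os=("ubuntu:14.10" "debian:8" "centos:6" "centos:7")
test_cases=("install" "update" "validate")

matrix=()
for os in "${supported_os[@]}"; do
  for tc in "${test_cases[@]}"; do
    # A real runner would dispatch each cell, e.g.:
    #   docker run --rm "$os" ./run-test.sh "$tc"
    matrix+=("${os}/${tc}")
  done
done

echo "matrix cells to run: ${#matrix[@]}"
```

Because both lists live in one place, adding an OS or a test case automatically widens the matrix, and the docs can be generated from the same lists.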
I seriously recommend we look into adopting a Git Flow workflow going forward - see http://nvie.com/posts/a-successful-git-branching-model/ for details. I could add a WebHook to the project so any commit/PR kicks off some automated testing of the code, and then reports back success or failure before we even consider a merge. A nice side benefit of this method is that we can start adding features without breaking the master branch, and we can allow a lot more third-party developers to contribute in a way that contains and slowly merges work, without requiring an "all or nothing" merge as we have now. This will let us have testing simply built into the develop/commit workflow, rather than having to train developers to clone and test on their own.
For further testing, I'm thinking I can make a Vagrant environment for us to test on local workstations, and use something like Puppet or Chef to fully automate the "main" architecture. The more we can automate here, the more likely this will be used, and the less likely things will be broken or missed. If we have other developers or well-wishing users who have capacity, I can set the system up so we can "plug in" additional hosts that only need SSH availability, and manage them all as a single testing environment for this. I don't think we need a whole lot of compute power here, especially if we are able to "gate off" changes and be able to look programmatically at a commit manifest to determine what on the game/OS matrix needs testing based upon the changes. Worst case scenario, I have an instance I am willing to donate for this cause, and I will have the automation simply queue tests to be run. This way, if we have many machines the testing goes quickly, if we only have one it just takes longer, but the end result is an identical set of results.
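The queueing idea above might look like a round-robin dispatch of matrix cells onto whatever hosts are available. A sketch with placeholder host and job names (real dispatch would go over SSH, as commented; nothing here is an existing LGSM tool):

```shell
#!/bin/bash
# Round-robin assignment of test jobs to available hosts.
# Host names and job names are placeholders for illustration.
hosts=("build1.example" "build2.example")
jobs=("centos:7/install" "centos:7/update" "ubuntu:14.10/install" "ubuntu:14.10/update")

queued=0
for job in "${jobs[@]}"; do
  host="${hosts[$((queued % ${#hosts[@]}))]}"
  # Real dispatch would be something like:
  #   ssh "$host" ./run-test.sh "$job"
  echo "queued ${job} on ${host}"
  queued=$((queued + 1))
done
echo "queued ${queued} jobs across ${#hosts[@]} hosts"
```

With many hosts the queue drains quickly; with one host it just takes longer, but the result set is identical either way, which matches the goal stated above.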
I believe the first step will simply be to define the supported OSes, and come up with a representative suite of tests so we can understand how we want the tool to work. Then we need to look into licensing and how we need to acquire all games - in particular, non-Steam games will need to have some way to be downloaded and installed, and I have no experience here to even know how that looks. When we have a good starting point, I can spin us up some baseline machines and begin the automation work. Naturally, whatever we build for the automation toolchain will also be in GitHub, so that we can all understand what is happening and continue to add features and stability to the testing architecture itself.
Looking forward to hearing from the rest of you. If we'd like to break this discussion out into several issues to separate the main parts, let me know. I figured I'd drop it all here and see what people thought before clogging up the issue board. Thanks!
@dgibbs64 commented on GitHub (Sep 19, 2017):
There is now some automated testing in place using Travis; it is not perfect, but it helps catch any major issues. LinuxGSM now uses Git Flow, which is a great way to manage the project.
@dgibbs64 commented on GitHub (Sep 19, 2017):
Over time more automation will be added as I learn new stuff. This is definitely something that will be implemented slowly over a long period.
@lock[bot] commented on GitHub (Apr 1, 2019):
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.