I can understand patch updates, but what else are the devs doing?
They could be upgrading hosting infrastructure - sometimes this requires servers to be shut down or restarted. They might also be applying database changes such as migrating data from one server to another, or updating the structure of the database to improve performance or support new features.
Honestly, there are quite a number of reasons for planned downtime.
Unplanned downtime is a different story. Usually that's because something unexpected went wrong and there will be engineers trying to get things back up and running ASAP.
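For a rough sense of what "migrating data from one server to another" can involve, here's a minimal sketch, assuming a Postgres-backed game and the psycopg2 driver; the players table and connection strings are made up for illustration:

```python
# Rough sketch: copy player rows from the old database server to the new one
# in batches while the game is offline, so no writes can race the copy.
# Assumes PostgreSQL and psycopg2; table and connection details are placeholders.
import psycopg2

BATCH = 1000

old = psycopg2.connect("host=db-old.internal dbname=game user=migrator")
new = psycopg2.connect("host=db-new.internal dbname=game user=migrator")
src, dst = old.cursor(), new.cursor()

last_id = 0
while True:
    src.execute(
        "SELECT id, name, gold FROM players WHERE id > %s ORDER BY id LIMIT %s",
        (last_id, BATCH),
    )
    rows = src.fetchall()
    if not rows:
        break
    dst.executemany(
        "INSERT INTO players (id, name, gold) VALUES (%s, %s, %s)", rows
    )
    last_id = rows[-1][0]

new.commit()              # make the copy durable on the new server
old.close(); new.close()
```

Doing this during a maintenance window guarantees nobody is writing to the table mid-copy, which is exactly why the game gets taken down for it.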
Over a decade ago, I worked in a big tech company that had a scheduled downtime on one Saturday a month. That was for database schema changes.
When you're changing the structure of how you keep track of customer data, you need to make sure that no customers are making changes at that same time. So you take the whole customer-facing service down for a little while, make the schema changes, test them, and then bring the customer-facing service back up. Ideally this takes a few minutes ... but you're prepared for it to take hours.
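To make that concrete, here's a minimal sketch of the "take it down, alter, test, bring it back" step, assuming a PostgreSQL database and psycopg2; the accounts table and last_login column are invented for illustration:

```python
# Sketch of a maintenance-window schema change: with customer traffic stopped,
# apply the ALTER, run a quick sanity check, and only then commit.
# Assumes PostgreSQL + psycopg2; "accounts"/"last_login" are placeholder names.
import psycopg2

conn = psycopg2.connect("host=db.internal dbname=prod user=dba")
cur = conn.cursor()
try:
    cur.execute("ALTER TABLE accounts ADD COLUMN last_login timestamptz")
    # Quick test while still inside the transaction: can we read the new column?
    cur.execute("SELECT last_login FROM accounts LIMIT 1")
    conn.commit()      # schema change becomes visible; safe to bring the service back
except Exception:
    conn.rollback()    # anything unexpected -> leave the schema untouched
    raise
finally:
    conn.close()
```

Because DDL is transactional in Postgres, a failed sanity check rolls the whole change back and you simply bring the service up unchanged.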
As the technology improved, and as the developers learned better how to make changes to the system without requiring deep interventions, long downtime for schema changes became less necessary ... for that particular business.
Every tech company pretty much has to learn how to do these sorts of changes for themselves, though.
This is the most informed answer in this thread. It really does come down to schema changes. There are even ways to avoid downtime during schema changes, but it's often complicated. For example, you don't see YouTube go offline for schema changes, but they're willing to make this effort and investment, even for very large databases.
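One common way to pull off those no-downtime schema changes is the expand/contract pattern: add the new column as nullable, backfill it in small batches while the service stays up, and only tighten constraints once nothing depends on the old shape. A hedged sketch, again assuming PostgreSQL and psycopg2, with invented names:

```python
# Sketch of an online ("expand/contract") schema change: the new column is added
# as nullable (cheap, no long lock), then backfilled in small batches so no
# single statement blocks live traffic for long. Names are placeholders.
import time
import psycopg2

conn = psycopg2.connect("host=db.internal dbname=prod user=dba")
cur = conn.cursor()

# Step 1 ("expand"): nullable column, no table rewrite, no long lock.
cur.execute("ALTER TABLE accounts ADD COLUMN IF NOT EXISTS region text")
conn.commit()

# Step 2: backfill in small batches while normal traffic keeps flowing.
while True:
    cur.execute(
        """
        UPDATE accounts
           SET region = 'unknown'
         WHERE id IN (SELECT id FROM accounts WHERE region IS NULL LIMIT 500)
        """
    )
    conn.commit()
    if cur.rowcount == 0:
        break
    time.sleep(0.1)   # give autovacuum and real traffic room to breathe

# Step 3 ("contract"), done later once all code writes the column:
#   ALTER TABLE accounts ALTER COLUMN region SET NOT NULL;
conn.close()
```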
Lots of other database tasks can happen while remaining online. For backups, use a read-only connection. For upgrades, you should have a distributed and scaled database, so take them down in sections during upgrades. For "cleaning up," you can do vacuum operations on part of your database while it's live. Etc etc.
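As an example of the backup case, a dump over a read-only connection might look like this; a sketch assuming PostgreSQL and psycopg2 (in practice you'd usually point pg_dump at a replica, but the idea is the same):

```python
# Sketch: dump a table over a read-only session while the database stays online.
# Assumes PostgreSQL + psycopg2; host, user, and table names are made up.
import psycopg2

conn = psycopg2.connect("host=db-replica.internal dbname=prod user=backup")
conn.set_session(readonly=True)   # refuse any accidental writes

with open("players_backup.csv", "w") as out, conn.cursor() as cur:
    cur.copy_expert("COPY players TO STDOUT WITH CSV HEADER", out)

conn.close()
```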
Ultimately, there is almost never a technical reason why a database has to go offline. It's a matter of devotion to the stability and uptime of your infra. Toss enough engineering hours at a database problem and you can pretty much have 100% uptime in the scope of maintenance (not incidents, of course). But even with incidents, there are fail-over plans, replicas, and a ton of other things you can do to stay online. Instead of downtime, you have degraded performance that the users may not even notice.
The other big one that usually requires downtime is the network. You may not be touching your game servers all that often, but if you need to do a major OS upgrade on a load balancer or a switch, everything behind it loses connectivity - and unless you're talking about one of the big hitters like WoW, they're probably not funding redundant dual network paths that would let them take it down without downtime.
If you are running metal, and the health of your entire network relies on a single load balancer or a single network switch, you're far from being production-ready from a redundancy and scaling perspective.
I don't disagree, but at the same time, running a whole setup that is fully ready for hot-swap live failover whenever you have maintenance tasks to do is potentially just not desirable when you have the option of simply taking the environment down instead - after all, gamers are pretty much conditioned to expect downtime at this point.
"running a whole setup that is fully ready for hot-swap live failover whenever you have maintenance tasks to do"
This is basically "ready for production 101." It's even easier to run an entire service on a computer under a desk, but this isn't how you run stuff in production.
Even if it's "easier" in the short term, you'll be paying more for not being production-ready in the long term (and get a reputation for not having good uptime).
Yeah, I feel you're wildly overestimating the setup that's in place at smaller online games companies. We're not talking about Activision or some high-frequency fixed-income trading firm here. "Give me something that people can play on that costs as close to nothing as possible" is usually the main driver.
Gross
One thing in particular that older MMORPGs needed was essentially just a weekly restart. The developers could never quite root out some bug or another that slowly increased memory usage, so eventually the server would just break because of it.
To alleviate this, they did weekly restarts. That was also a good time to do longer full backups, integrity checks, etc., but the main impetus was needing to restart everything.
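Purely as an illustration of the kind of bug involved: a cache that fills up on login and never gets emptied will grow until the process falls over, and a scheduled restart is the cheap fix. A hypothetical example:

```python
# Purely illustrative: the kind of slow leak that made weekly restarts necessary.
# A per-session cache that is filled on login but never cleaned up on logout
# grows forever, so memory climbs a little with every player who connects.
session_cache: dict[int, dict] = {}

def on_player_login(player_id: int, profile: dict) -> None:
    session_cache[player_id] = profile      # entry added...

def on_player_logout(player_id: int) -> None:
    pass                                    # ...but never removed: the leak

# The band-aid: restart the process on a schedule (cron, systemd timer, etc.)
# before the leak becomes a crash, and use the window for backups and checks.
```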
On top of what's already been said, to your question specifically of what the devs are doing - a lot of the time it's nothing out of the ordinary as the Ops teams are the ones conducting the maintenance. There will likely be a dev or devs on call, but that's routine anyway so it's ultimately just another day for them. Sure, when big patches are pushed they're typically more attentive to the process - but even then, they're essentially informed observers.
maintenance
thanks
Yes, but what are they doing?
maintaining
Yes, but what are the devs doing?
Developing
Same as everyone else, waiting.
Not just database migrations, as others have mentioned, but database state. Databases accumulate a lot of dead data because of how transactions and locks work. Cleaning that up can block use of the database for a short time, so it's easiest to do it periodically when there's downtime.
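In PostgreSQL terms that dead data is "dead tuples". A plain VACUUM can reclaim most of it while the table stays usable, but the aggressive version (VACUUM FULL) holds an exclusive lock, which is why it tends to land in a maintenance window. A sketch with placeholder names, assuming psycopg2:

```python
# Sketch: check how much dead data a table has accumulated, then reclaim it.
# Plain VACUUM runs while the table stays usable; VACUUM FULL rewrites the table
# under an exclusive lock, which is why it is saved for downtime.
# Assumes PostgreSQL + psycopg2; "players" is a placeholder table name.
import psycopg2

conn = psycopg2.connect("host=db.internal dbname=prod user=dba")
conn.autocommit = True   # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    cur.execute(
        "SELECT n_dead_tup, n_live_tup FROM pg_stat_user_tables WHERE relname = %s",
        ("players",),
    )
    dead, live = cur.fetchone()
    print(f"{dead} dead rows vs {live} live rows")

    if dead > live:                          # arbitrary threshold for the example
        cur.execute("VACUUM FULL players")   # blocks all access until it finishes

conn.close()
```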
databases are weirdly mechanical in that you have to shut them off now and then to sort of straighten out the rows and columns, and chuck out abandoned or corrupted files.. maybe add some grease in the form of optimizations and then fire it back up so users can get it all messy again inside.. mostly because they're all written just well enough to function..
How do you straighten out rows and columns?
just give the tape drives a little jiggle as they come online
That doesn't sound right. Why would turning a database off let it do anything? It's off. Most databases periodically do stuff like this in the background.
Database schemas can be updated, new services and special functionalities can be first activated and afterwards tested with specific accounts, among a myriad of other things, depending on the game and the update.
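The "activate first, test with specific accounts" part is often just a feature gate; roughly something like this sketch, where the flag name and account IDs are invented:

```python
# Sketch of gating a freshly deployed feature to a handful of test accounts
# before everyone gets it. The flag name and account IDs are invented.
TEST_ACCOUNTS = {1001, 1002, 1003}             # QA / dev accounts
FEATURE_FLAGS = {"new_trading_post": False}    # off for the general population

def feature_enabled(flag: str, account_id: int) -> bool:
    if account_id in TEST_ACCOUNTS:
        return True                  # testers always see the new functionality
    return FEATURE_FLAGS.get(flag, False)

# During maintenance: deploy the code, leave the flag off, let testers verify,
# then set FEATURE_FLAGS["new_trading_post"] = True for everyone.
```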
To add to others' posts: it can be a huge variety of things that risk making the service unstable or unresponsive, and that in the worst case could corrupt data in flight.
Customers view scheduled maintenance as a minor inconvenience, an unplanned outage as an annoyance, and loss of data as a dealbreaker.
So any time there's a chance that what we need to do would limit functionality - or otherwise make the system unstable - it's best to take the system offline for scheduled maintenance.
Windows updates
The servers run on regular operating systems. They might wish to back up the storage (and databases), update the OS, or update their game server software, all of which is a lot easier if the service is stopped.
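A stripped-down version of that maintenance run might look like the script below; it assumes a Linux host with systemd and a made-up gameserver unit, so it's a sketch of the idea rather than anyone's real tooling:

```python
# Sketch of a maintenance run on a single game host: stop the service, back up
# its data, apply OS and game updates, then bring it back. Assumes a Linux host
# with systemd; "gameserver" and the paths are placeholders.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)      # abort the window if any step fails

run(["systemctl", "stop", "gameserver"])                  # players are already off
run(["tar", "-czf", "/backups/world-data.tar.gz", "/srv/gameserver/data"])
run(["apt-get", "update"])                                # OS packages...
run(["apt-get", "-y", "upgrade"])
run(["rsync", "-a", "/releases/latest/", "/srv/gameserver/bin/"])  # new build
run(["systemctl", "start", "gameserver"])                 # maintenance over
```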
They interrogate the player characters 1 by 1 and question if their human has any suspicious activities.
Within cells, interlinked
Interlinked.
How hot is Emma Stone? Very
Very
This is great
Maintenance.