Getting folks to work together can be a grueling exercise in itself, as the different people who administer, develop and use a given server may have trouble agreeing on an appropriate downtime window: this is especially true for enterprise applications that require 24/7 uptime. One option that may come up is a late-night restart (yes, someone has to do it). In most cases, the best time to restart a server is when there is the least activity on it; that could be lunchtime, right after normal working hours or midnight. Either way, it has to be done, but be sure you're ready to respond if something should go wrong. Another option is to schedule your downtime during a larger-scale planned outage. Just be aware of the system dependencies your application relies on, like network connectivity or authentication. Those opportune times may not be as good as they look, and they can lead to further havoc if you can't verify that your machine is back in a good state.
Now, for proper scheduling to happen, you need to have a clear and effective Service Level Agreement (SLA) with your clients. Without this in place, no rules are set and you, as the sys admin, won’t have any ground to stand on when it comes to working with others’ schedules. Hours of operation need to be defined with those who depend on your system, so that you can easily identify downtime windows for working on a machine.
One effective way to shorten or eliminate downtime during a patch cycle is to configure fail-over partners. Generally, this just means building two machines that run the same app, keeping one server as the primary box and the second as a backup. One machine stays available to the user community while the second sits in hot fail-over mode, in case the first server should go down. When it comes to patching, the sys admin can patch the backup box, confirm that the application runs correctly on it, and then switch the application's functionality from the primary server to the secondary. This can be done using built-in clustering utilities or a manual DNS change. Either way, it helps prevent long-term downtime, so users can continue their work with minimal or no interruption at all.
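The patch-then-switch cycle can be sketched as a short script. This is a dry run, not a definitive implementation: the hostname `app-standby`, the health-check URL and the `nsupdate` batch file are made-up assumptions, and each step is echoed rather than executed (drop the `echo`s to run the steps for real; use `apt-get` instead of `yum` on Debian-family distros).

```shell
#!/bin/sh
# Dry-run sketch of patching a fail-over partner, then switching traffic.
# Host name, URL and nsupdate file are illustrative assumptions.
patch_and_switch() {
    standby=$1
    # 1. Patch the standby box while the primary keeps serving users.
    echo ssh "$standby" yum -y update
    # 2. Confirm the application restarted cleanly on the patched box.
    echo curl -fsS "http://$standby/status"
    # 3. Switch traffic with a manual DNS change (or your clustering
    #    utility's own failover command).
    echo nsupdate /root/switch-to-standby.txt
}

patch_and_switch app-standby
```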
Start-Up Scripts
Patches are great, but if you're not careful and don't bother to test your machine before you reboot, you may find that your start-up scripts have been rearranged: this is especially likely to happen to third-party and custom applications. A change in the start-up order can hang the machine during boot. If an application's script is moved to a position before the network start-up script is executed, the app will hang for lack of a networking process to start with.
To avoid problems, after applying the last patch, check your start-up scripts in /etc/init.d/rcX.d or /etc/rcX.d (depending on your flavor of Linux) and verify that your scripts haven't been renamed and moved earlier in the start-up order. This will save you the trouble of rebooting the machine into single-user mode, or using a rescue disk, just to rename the start-up files.
Be sure to check your startup scripts after patching your machines. Reorganized RC directories can keep your machine from starting up correctly, especially for applications that require network connectivity.
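A quick check can be scripted. This is a minimal sketch under a few assumptions: it looks only at `S`-prefixed symlinks, it finds the network script by grepping for "network" in the name, and the rc directory path varies by distro (`/etc/rc3.d`, `/etc/rc.d/rc3.d`, ...), so adjust both to match your layout. Because init runs `S` links in lexical order, anything that sorts before the network script starts without networking available.

```shell
#!/bin/sh
# Sketch: flag init symlinks that start before the network script
# in a System V rc directory. Path and grep pattern are assumptions.
check_rc_order() {
    rc_dir=$1
    # Find the S-prefixed network start script (e.g. S10network).
    net_script=$(ls "$rc_dir" | grep '^S' | grep -i 'network' | head -n 1)
    if [ -z "$net_script" ]; then
        echo "no network start script found in $rc_dir" >&2
        return 1
    fi
    # S-links run in lexical order, so anything sorting before the
    # network script comes up with no network to talk to.
    for link in $(ls "$rc_dir" | grep '^S'); do
        first=$(printf '%s\n%s\n' "$link" "$net_script" | sort | head -n 1)
        if [ "$link" != "$net_script" ] && [ "$first" = "$link" ]; then
            echo "starts before network: $link"
        fi
    done
}
```

Run it against the runlevel you boot into, e.g. `check_rc_order /etc/rc3.d`, after your last patch and before the reboot.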
Maybe I'm biased, but updating Debian or ArchLinux (more of a desktop distro) has been so easy that I don't even have to think about it.
If you buy Red Hat Enterprise Linux with a Satellite subscription, Red Hat does the patching.
If you have Novell, ZenWorks will do the trick.
If you're running a non-commercial, unsupported version, then sure, some of the options you mention might make sense, but a simple cron job with yum/apt will do it all with one command line.
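For illustration, the one-line cron approach the commenter describes might look like the fragment below. The file path, schedule and log file are assumptions, not a recommendation; the second (commented) line is the Debian-family equivalent.

```
# /etc/cron.d/autopatch -- nightly unattended patching at 03:30
# (illustrative path, time and log file; pick what suits your SLA)
30 3 * * * root yum -y update >> /var/log/autopatch.log 2>&1
#30 3 * * * root apt-get update && apt-get -y upgrade >> /var/log/autopatch.log 2>&1
```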
Sad, Tom's... sad. You were an amazing site once upon a time...
I'm a Windows guy most of the time but I enjoy playing with Linux from time to time.
Actually, as a Linux beginner (some time ago), I had no problem patching my Linux.
It was very easy...
I don't remember reading or doing anything special before patching it that first time.
Patching most GNU/Linux installs is a simple task that is highly scalable and can be fully automated through cron scheduling, etc. NO EXTRA SOFTWARE should be required to update or maintain ANY enterprise-level GNU/Linux server distro (and if your server has a GUI on it, it's not running in an enterprise-level configuration).
I find the mention of Windows Server in the article strange, since it can't run services like Bind9 (DNS), it only makes up roughly 38% of the current market share of net servers, and since it can't run Bind9, it runs NONE of the internet backbone (DNS routing servers).
I am a huge fan of Tom's, but this article should never have been published.
While there are many Linux solutions, everybody will find what works best for them. I myself have become a fan of distributions like ArchLinux. I use it on my 3 servers at work and on my desktop and server at home. The package manager, pacman, is by far the best I've ever used. While it may not categorize some things into software groups, it does break them down into core, extra and then everything else. It is also extremely easy to configure and create wrappers or optional interfaces that utilize pacman (just like some of the others mentioned). There is also a package called the "arch build system" that allows you to create your own packages from source with simple modifications to a PKGBUILD file, making recompiling and rebuilding easy and efficient. My latest server was not fully supported by a vanilla or even a patched kernel, so a few quick modifications to the PKGBUILD and the kernel config and one command later, the package was compiled from source and installed without me sweating, swearing or crying.
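The Arch Build System workflow the commenter describes looks roughly like the transcript below. This is an illustrative sketch, not runnable verbatim: the ABS tree path, package name and tarball suffix are assumptions that vary by Arch release.

```
# Illustrative ABS workflow (paths and package name are assumptions):
cp -r /var/abs/core/linux ~/build && cd ~/build/linux
vi PKGBUILD                   # point at your patches, bump pkgver
vi config                     # tweak the kernel config
makepkg -s                    # build the package from source, pulling deps
pacman -U linux-*.pkg.tar.gz  # install the freshly built package
```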
I don't want this to come off as a "YAY ARCH - EVERYBODY SWITCH" comment so much as a "do a little more research, or even a community probe, could get you better information" comment. The concept of the article wasn't bad, just slightly "mis-informative", especially seeing as not everything that is open source and an OS is Linux/Unix. Most are Linux-like or Unix-like (as is the nature of progression).
As a note for the naysayers: I've used Windows Server, Debian, Gentoo, RedHat, SuSE, ubuntu, FreeBSD, OpenBSD, Solaris and many spin-offs of some of those. All of them have their strengths and weaknesses (most notably, the flaw of the Windows Server platform would be any machine that loads it; THAT is a biased opinion).
In Debian-based distros like *ubuntu you can set up automatic daily updates without _any_ user intervention and without installing additional software.
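For reference, the usual way to enable this on Debian/Ubuntu is through APT's periodic settings. The file name below is the conventional one on Ubuntu; check your release's documentation, as the exact path and defaults vary.

```
# /etc/apt/apt.conf.d/20auto-upgrades (Debian/Ubuntu)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```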
It's the first time I've seen such a badly written article on Tom's Hardware.