Patching Linux - Pain or Gain?

Problems Continued

Scheduling

Getting folks to work together can be a grueling exercise in itself, as the different people who administer, develop, and use a given server may have trouble coordinating an appropriate downtime window; this is especially true for enterprise applications that are expected to be up 24/7. One option that may come up is a late-night restart, and yes, someone has to do it. In most cases, the best time to restart a server is when activity on it is lowest; that could be lunchtime, right after normal working hours, or midnight. Either way, it has to be done, but be sure you’re ready to respond if something should go wrong. Another option is to schedule your downtime during a larger planned outage. Just be aware of system dependencies that your application may rely on, like network connectivity or authentication. Those opportune times may not be as good as you think they are, and they can lead to further havoc if you can’t verify that your machine came back in a good state.
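If you do settle on a late-night window, the restart itself can be scripted rather than babysat. Below is a minimal sketch, assuming the at daemon is available and the commands run as root, of scheduling a one-off reboot and then spot-checking a couple of dependencies once the box is back; the gateway address and account name are placeholders for whatever your application actually depends on.

    # Schedule a one-time reboot for 00:30 (requires atd to be running)
    echo "/sbin/shutdown -r now" | at 00:30

    # After the machine comes back, verify basic dependencies before signing off
    ping -c 3 192.0.2.1       || echo "WARNING: cannot reach the default gateway"
    getent passwd appuser     || echo "WARNING: directory/authentication lookup failed"

Simple checks like these can be rolled into a post-reboot script so that whoever is on call only has to read one pass/fail summary.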

Now, for proper scheduling to happen, you need to have a clear and effective Service Level Agreement (SLA) with your clients. Without this in place, no rules are set and you, as the sys admin, won’t have any ground to stand on when it comes to working with others’ schedules. Hours of operation need to be defined with those who depend on your system, so that you can easily identify downtime windows for working on a machine.

Fail Over

One effective way to shorten or eliminate downtime during a patch cycle is to configure fail-over partners. Generally, this just means building two machines that run the same application, with one server acting as the primary box and the second as a backup. One machine stays available to the user community while the second sits in hot fail-over mode, ready to take over should the first server go down. When it comes to patching, the sys admin can patch the backup box, confirm that the application runs correctly on it, and then switch the application’s traffic from the primary server to the secondary. This can be done using built-in clustering utilities or a manual DNS change. Either way, it helps prevent long-term downtime, so users can continue their work with minimal or no interruption at all.
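As a rough illustration of the manual DNS approach, here is what a cutover might look like using BIND’s nsupdate against a zone that accepts dynamic updates; the zone, hostnames, key file, and address below are hypothetical, and a clustering suite would normally handle this step for you.

    # Repoint the application's hostname at the standby server (all values are placeholders)
    nsupdate -k /etc/bind/app-update.key <<'EOF'
    server ns1.example.com
    zone example.com
    update delete app.example.com. A
    update add app.example.com. 300 A 192.0.2.20
    send
    EOF

Keeping the record’s TTL short (300 seconds here) limits how long clients hold on to the old address after the switch.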

Start Up Scripts

Patches are great, but if you’re not careful and don’t test your machine before you reboot, you may find that your start-up scripts have been rearranged; this is especially likely to happen to third-party and custom applications. A change in the start-up order can hang the machine during boot if the moved item depends on the network daemon already being up. In some cases, an application is moved to a position before the network start-up script executes, causing the app to hang because there is no networking process for it to start with.

To avoid problems, after applying the last patch, check your start-up scripts in /etc/rc.d/rcX.d or /etc/rcX.d (depending on your flavor of Linux) and verify that your scripts haven’t been renamed and moved earlier in the start-up sequence. This will save you the trouble of having to reboot the machine into single-user mode or boot from a rescue disk just to rename the start-up files.
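A quick way to keep an eye on the ordering is to snapshot the runlevel directory before patching and compare it afterwards; the runlevel and the script name myapp below are only examples.

    # Snapshot the boot order before patching
    ls /etc/rc3.d/ > /tmp/rc3-before.txt
    # ... apply patches here, then snapshot again and compare ...
    ls /etc/rc3.d/ > /tmp/rc3-after.txt
    diff /tmp/rc3-before.txt /tmp/rc3-after.txt

    # Confirm the application's S-script still sorts after the network script
    ls /etc/rc3.d/ | grep -n -E 'network|myapp'

If the diff shows a start-up link renamed to a lower sequence number than the network script, fix it before you reboot rather than after.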

RC Scripts

Be sure to check your startup scripts after patching your machines. Reorganized RC directories can keep your machine from starting up correctly, especially for applications that require network connectivity.

  • Darkk
    Very nice article. Patching is just a way of life for sys admins everywhere, regardless of what flavor of server or desktop OS. At least the article explains in detail what to expect and the gotchas.

    Good job!

    Darkk
  • Patching harder on Linux than Windows?!?
    Maybe I'm biased, but updating Debian or ArchLinux (more of a desktop distro) has been so easy as not to even think about it.
  • You really don't know what you're talking about here... and readers should avoid this article.

    If you buy Red Hat Enterprise Linux with a Satellite subscription Red Hat does the patching.

    If you have Novell - ZenWorks will do the trick.

    If you're running a non-commercial, unsupported version, then sure, some of the options you mention might make sense, but a simple cron job with yum/apt will do it all with one command line.

  • Was this article written by Steve Ballmer? And interestingly, there is only a single line about the Debian-based distros? Why, Mr. Anderson, why didn't you mention the details about APT? Now Steve Ballmer, let me tell you something - my close friend is a sysadmin at my university (University Of Toronto) and he doesn't even bother patching the systems because they are fully automated (over 200 machines). As well, he deploys 50 machines with brand-new OS installations with no more than 5 lines of commands.

    Sad Tom's.. sad.. you have been an amazing site once upon a time ...
  • hmmm.
    I'm a Windows guy most of the time but I enjoy playing with Linux from time to time.
    Actually, as a Linux starter (some time ago) I had no problem patching my Linux.
    It was very easy...
    I do not remember reading or doing something special before patching it at that first time.
  • Correction: Patch Quest by Advent Net was cited as patching only RedHat, which is incorrect. It also patches Debian. In my experience, finding a patch solution for your particular OS has not been that terribly difficult. Finding one that has robust scheduling, push on demand, and can handle the multitude of necessary evil apps, such as Adobe Reader, Quicktime, Realplayer, Instant Messaging, etc., is the real challenge.
  • GoK
    The author of this article needs to go back through their information and edit this article. It is highly inaccurate! The fact that he gave Debian-based GNU/Linux flavors (i.e., Ubuntu & Gentoo) less time in the article than his praise for Microsoft's upstream ability for patches seems a bad sign for this article.

    Patching most GNU/Linux installs is a simple task, which is highly scalable and can be fully automated through the use of CRON scheduling, etc. NO EXTRA SOFTWARE should be required to update/maintain ANY enterprise-level GNU/Linux server distro (also, if your server has a GUI on it, it's not running in an enterprise-level configuration).

    I find the mention of Windows Server in the article strange, since it can't run services like Bind9 (DNS); it only makes up roughly 38% of the current market share of net servers, and since it can't run Bind9, it runs NONE of the internet backbone (DNS routing servers).

    I am a huge fan of Tom's, but this article should never have been published.
  • nochternus
    <rant>
    While there are many Linux solutions, everybody will find what works best for them. I myself have become a fan of distributions like ArchLinux. I use it on my 3 servers at work and on my desktop and server at home. The package manager, pacman, is by far the best I've ever used. While it may not categorize some things into software groups, it does have things broken down into core, extra and then everything else. It is also extremely easy to configure and create wrappers or optional interfaces that utilize pacman (just like some of the others mentioned). There is also a package called the "Arch Build System" that allows you to create your own packages from source with simple modifications of a PKGBUILD file, making recompiling and rebuilding easy and efficient. My latest server was not fully supported by a vanilla or even a patched kernel, so a few quick modifications to the PKGBUILD and the kernel config and one command later, the package was compiled from source and installed without me sweating, swearing or crying.

    I don't want this to come off as a "YAY ARCH - EVERYBODY SWITCH" comment so much as a "do a little more research, or even a community probe could get you better information" comment. The concept of the article wasn't bad, just slightly "mis-informative", especially seeing as how not everything that is open-source and is an OS is Linux/Unix. Most are Linux-like or Unix-like (as is the nature of progression).

    As a note for the naysayers, I've used Windows Server, Debian, Gentoo, RedHat, SuSE, ubuntu, FreeBSD, OpenBSD, Solaris and many spin offs of some of those. All of them have their strengths and weaknesses (most notably the flaw of the Windows Server platform would be any machine that loads it - THAT is a biased opinion.)
    </rant>
  • malici0usc0de
    With Ubuntu you can also set it up to silently install them in the background; it just prompts for a password, then goes away. I don't know how long Ubuntu has had this, but I have been using it as my only OS at home for about 2 years now and have never had a problem with patches. I use XP at work, as almost everyone does, and I notice it operates almost exactly the same way, except it doesn't ask you for a password. So if it works for the less techie MS user base, then I don't see why so many problems are occurring with this same basic system running under Linux. sudo apt-get install brain
  • resistance
    The writer of this article has 0% knowledge of _present-day_ GNU/Linux, or this article was sponsored by a software monopolist.

    In Debian-based distros like *ubuntu you can set up automatic daily updates without _any_ user intervention and without installing additional software.

    It's the first time I've seen such a badly written article on tomshardware.