Downtime matters. For Amazon.com, a minute of downtime can equate to $66,000. So while the need for high availability is clear, it also needs to be affordable for smaller businesses. Most companies are not going to buy a Fort Knox system, but something needs to be done to prevent a massive loss of access to systems and data.
Most IT managers are aware that to keep their IT systems going, and their internal information protected, they need two things: a robust defense against intrusions, disasters and viruses, and a backup plan for when everything breaks. Those who are really focused on protection design for both the inevitability that things will break and the steps that will be applied after the fact, i.e. the proverbial “Plan B.”
The idea of high availability architecture has taken root where companies have a core need to keep their systems running and their files accessible even if the worst occurs. That means continuous operation with multiple alternative connections and no practical downtime, essentially the Shangri-La of the 100 percent operational network. In reality, no network can run through everything; eventually some event will cause a stoppage. The goal of a high availability architecture is to get back up as fast as possible, using automation, where possible, to recover faster than a human repair person could.
High availability architecture is an intentional network design that provides redundancy, anticipating a failure.
Here are some of the key resources you can implement to make high availability possible:
- Implement multiple application servers. When a single server becomes overburdened, it can slow down or even crash. Deploying an application across multiple servers keeps it running efficiently and reduces downtime; for the user, the result is a sense of always being operational no matter what (a minimal load-balancing sketch follows this list).
- Scaling and slaves matter. Remember the old saying, “don’t put all your eggs in one basket”? The same applies to servers. Databases and information should be scaled out so that different servers hold different pieces of the company puzzle. Additionally, each primary server should have a slave (replica) or two standing by, ready to take over if the primary fails (see the read/write routing sketch after this list).
- Spread out physically. Core network servers should not all be kept in the same physical location. Companies have to invest in server locations that are geographically spread out. Having a backup server even a few miles away can mean the difference between operating again in a few days and being shut down completely for months.
- Maintain a recurring online backup system alongside the hardware. Automated backup fills the gap when we forget to manually save and protect files in multiple versions, and it pays dividends in all kinds of situations, from file corruption to natural disaster to internal sabotage by disgruntled employees (see the versioned backup sketch after this list).
- Use a virtualized server for zero-downtime recovery. Servers backed up with Nordic Backup Server Pro Preferred include our Preferred Server Hosting and can be pre-emptively virtualized so that there is no waiting before a user can access a cloud backup server.
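To make the first point concrete, here is a minimal Python sketch of round-robin load balancing across several application servers. The hostnames (app1.internal and so on) and the RoundRobinPool class are hypothetical assumptions for illustration; a production setup would normally use a dedicated load balancer or reverse proxy rather than hand-rolled code.

```python
import itertools

# Hypothetical application servers; substitute your own hostnames.
APP_SERVERS = [
    "http://app1.internal:8080",
    "http://app2.internal:8080",
    "http://app3.internal:8080",
]

class RoundRobinPool:
    """Hands out healthy backends in rotation so no single server takes all the traffic."""

    def __init__(self, backends):
        self._backends = list(backends)
        self._healthy = set(self._backends)
        self._cycle = itertools.cycle(self._backends)

    def mark_down(self, backend):
        """Remove a backend from rotation after a failed health check."""
        self._healthy.discard(backend)

    def mark_up(self, backend):
        """Return a recovered backend to rotation."""
        self._healthy.add(backend)

    def next_backend(self):
        """Return the next healthy backend, skipping any that are down."""
        if not self._healthy:
            raise RuntimeError("no healthy application servers available")
        for _ in range(len(self._backends)):
            candidate = next(self._cycle)
            if candidate in self._healthy:
                return candidate
        raise RuntimeError("no healthy application servers available")

pool = RoundRobinPool(APP_SERVERS)
pool.mark_down("http://app2.internal:8080")       # simulate one server crashing
for _ in range(4):
    print("route request to:", pool.next_backend())  # traffic keeps flowing to app1/app3
```

The point of the sketch is the behavior, not the code: when one server drops out, requests simply continue against the remaining healthy servers.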
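In the same spirit, this sketch illustrates the scaling-and-slaves idea: writes are routed to a primary database while reads are spread across replicas, and a replica can be promoted if the primary fails. The endpoint names and the route/promote_replica helpers are illustrative assumptions, not any particular database product's API.

```python
import random

# Hypothetical database endpoints; replace with your own connection strings.
PRIMARY = "db-primary.internal:5432"
REPLICAS = ["db-replica1.internal:5432", "db-replica2.internal:5432"]

def route(statement: str) -> str:
    """Send writes to the primary and spread read queries across the replicas."""
    is_read = statement.lstrip().lower().startswith("select")
    if is_read and REPLICAS:
        return random.choice(REPLICAS)    # reads can be served by any replica
    return PRIMARY                        # writes must go to the primary

def promote_replica() -> str:
    """If the primary fails, promote a replica so writes can resume."""
    global PRIMARY
    if not REPLICAS:
        raise RuntimeError("no replica available to promote")
    PRIMARY = REPLICAS.pop(0)
    return PRIMARY

print(route("SELECT * FROM orders"))           # goes to a replica
print(route("INSERT INTO orders VALUES (1)"))  # goes to the primary
print("new primary after failover:", promote_replica())
```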
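Finally, a small sketch of a recurring, versioned backup job: each run copies the data to a new timestamped folder and prunes the oldest copies. The paths and the retention count are placeholder assumptions; in practice this kind of job runs from a scheduler (cron, Task Scheduler) or, as described above, through a managed backup service.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical paths; point these at your own data and backup volume.
SOURCE = Path("/srv/company-data")
BACKUP_ROOT = Path("/mnt/backups/company-data")
KEEP_VERSIONS = 14  # number of timestamped copies to retain

def run_backup() -> Path:
    """Copy the source directory into a timestamped folder, then prune old versions."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    destination = BACKUP_ROOT / stamp
    shutil.copytree(SOURCE, destination)       # full copy of the current state

    versions = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
    for old in versions[:-KEEP_VERSIONS]:      # keep only the newest KEEP_VERSIONS copies
        shutil.rmtree(old)                     # older versions are deleted
    return destination

if __name__ == "__main__":
    print("backup written to:", run_backup())
```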
High availability architecture goes hand in hand with online backup and emergency recovery, but it is focused far more on the original network design and on investing early. This is where long-term planning, ahead of any disaster, really pays off.