Michael O.
5 min read

The Art of Infrastructure


IT Infrastructure is an often underrated and misjudged part of IT. The observation is sometimes made that “any techie can do it”.

The unfortunate reality is that many do!

What is this “Infra” – Structure?

Infrastructure is also sometimes confused with architecture. This is not too much of a stretch, as infrastructure design can be part of system architecture, but it is not always so. Infrastructure is the mix of components needed to enable software to work effectively and reliably, preferably in a cost-effective manner. This means that infrastructure constantly crosses the boundary between physical and virtual. It involves the integration of disparate systems such as the OS (Operating System) and the hosting / hypervisor layers. Backup systems and “fail-over and recovery” are all part of this merry mix of parts. To top it off, network components such as switches, routers and cabling add to the confusion.

Let us look at each section, starting with the hardware, or “tin” as it is known.

“Tin”

With datacenter infrastructure, you cannot afford to run highly critical systems on “desktop” or “home use” hardware. We run our infrastructure on professional “server” or “enterprise” grade hardware. The difference between a desktop and a dedicated server is more than just price. Server motherboards are designed with redundancy and performance in mind. Most have at least 2 CPU sockets and can have up to 4. The CPUs used are mostly Intel Xeon processors, which cost more per processor than most complete desktop machines. They have higher core densities, mostly starting at 8 cores and going up to 24 cores. The motherboard also has a lot more DIMM slots for memory (RAM). A typical server motherboard can accommodate anywhere from 8 to 24 DIMMs, allowing memory from 32 GB up to more than 1 TB on board. Servers also have at least 2, but usually 4 or more, network ports.
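As a rough illustration of how those slot counts translate into capacity, here is a minimal sketch of the arithmetic (the slot counts and module sizes below are assumptions for illustration, not a spec sheet):

```python
# Rough illustration: total RAM = number of DIMM slots x module size.
# The slot counts and module sizes here are illustrative assumptions.
dimm_slots = [8, 16, 24]        # typical server DIMM slot counts
module_sizes_gb = [4, 32, 64]   # common DIMM module sizes in GB

for slots in dimm_slots:
    for size_gb in module_sizes_gb:
        total_gb = slots * size_gb
        print(f"{slots} slots x {size_gb} GB = {total_gb} GB ({total_gb / 1024:.1f} TB)")
```

A fully populated board with 24 slots of 64 GB modules already lands at 1.5 TB, which is consistent with the “more than 1 TB on board” figure above.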

Server motherboards also often have a backplane – a special riser card, frequently with a built-in RAID controller – and servers can handle from 8 up to 48 hard drives in one chassis. This also means that power is a big issue, which is why most servers have at least 2 and up to 8 power supplies, not only for extra power but also for redundancy. Servers are mounted in racks which usually have dual power feeds (A and B). These feeds run off different UPS circuits and different power circuits in the datacenter.

The hard drives used are usually rated for high usage or RAID certified. SSDs for datacenter use can cost up to five times the price of the same size SSDs for desktop use. They are rated for more read-write cycles and mostly carry higher speed ratings as well. All this adds up to a typical mid-range server costing around R150,000 to R200,000, with high-end servers as much as R1.2 million per machine.

Storage Infrastructure

Storage comes in two basic types:

  • Direct Attached Storage – this includes local hard drives in the server and devices attached directly via SCSI or eSATA connectors (also known as JBOD devices)
  • Network Storage – sometimes called “Shared Storage” – this includes SAN (iSCSI) devices and NAS (NFS/CIFS) devices

These two types differ in some important ways. Network or shared storage usually has a higher cost as well as a higher complexity in setup and installation. The advantage is that the storage can be accessed by more than one host – the “shared” part of the description. This enables virtual machines to be migrated quickly from one host to another for maintenance or in case of hardware failure, commonly known as a high-availability setup. Because shared storage inherently concentrates risk – more data can be lost if the storage device fails – it is usually installed in a redundant fashion, meaning it is at least duplicated. Network storage also uses RAID, usually RAID 5 or RAID 10, to protect against individual disk failures.
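To make the RAID 5 versus RAID 10 trade-off a little more concrete, here is a minimal sketch of the usable-capacity arithmetic (the disk count and disk size are assumptions chosen purely for illustration):

```python
# Minimal sketch: usable capacity of RAID 5 vs RAID 10.
# Disk count and disk size are illustrative assumptions.
disks = 8
disk_size_tb = 4

raid5_usable = (disks - 1) * disk_size_tb    # one disk's worth of capacity goes to parity
raid10_usable = (disks // 2) * disk_size_tb  # every disk is mirrored, halving capacity

print(f"RAID 5:  {raid5_usable} TB usable, tolerates 1 failed disk")
print(f"RAID 10: {raid10_usable} TB usable, tolerates 1 failed disk per mirror pair")
```

RAID 5 gives more usable space for the same disks, while RAID 10 trades capacity for simpler rebuilds and better write performance, which is why shared storage arrays commonly offer both.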

Direct attached storage spreads the risk at the cost of lower availability: if a host goes down, virtual machine images usually have to be restored from backup. The upside is that directly attached storage usually allows higher performance due to reduced latency, as there is no network link between the host and the storage.

Both types of storage need an effective backup strategy as redundancy does not equal recoverability.
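One small part of proving recoverability can be sketched in a few lines: verifying that a backup copy still matches its source by checksum. The file paths below are hypothetical, and a real backup strategy (restore tests, retention, off-site copies) involves far more:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: compare a source file with its backup copy.
source = "/data/customers.db"
backup = "/mnt/backup/customers.db"

if sha256_of(source) == sha256_of(backup):
    print("Backup matches source")
else:
    print("Backup differs from source - investigate before you need to restore")
```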

Networking Infrastructure

Connecting all these elements is the network. Network equipment can be divided into three groups:

  1. NICs – Network Interface Cards – these are what connect a server to the network
  2. SWITCHES – these allow many hosts to connect to each other and to our final element,
  3. ROUTERS – these direct traffic in and out of a network

There are other types of devices as well, but the main ones are listed above. The network is connected via copper wire (Ethernet or UTP cable) or, for very fast networks, fibre. Fibre equipment is very expensive compared to copper.
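As a small illustration of the NIC layer mentioned above, Python's standard library can list the network interfaces a host exposes on Linux/Unix, which is handy when checking that all of a server's ports are visible to the operating system:

```python
import socket

# socket.if_nameindex() returns (index, name) pairs for the host's
# network interfaces, e.g. the loopback device and each NIC port.
for index, name in socket.if_nameindex():
    print(f"Interface {index}: {name}")
```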

The “Soft” Stuff

All these elements work together. Infrastructure engineers configure servers to communicate with the storage, with each other and with the network. They configure the systems and load the correct software to enable these functions. Websites need software such as Nginx, Apache, Lighttpd or Caravan to translate the code in a text file (your website code) into a web page. These may in turn need various languages such as Java, PHP, Ruby or JavaScript/CGI to run properly. The pages may also need a database, which requires database server software such as MySQL, MariaDB, PostgreSQL, DrizzleDB, SQLite or Firebird. The websites’ domain names need to be resolved by a DNS server running software such as BIND or EasyDNS. Email is handled by Exim or Postfix (SMTP) and Dovecot or Courier (POP3 and IMAP).
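Nginx, Apache and the rest do far more, but the basic job – turning an HTTP request for a path into a page sent back to the browser – can be sketched with Python's built-in HTTP server. This is purely illustrative and not how a production web server is run:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Turn the requested path into a (very simple) HTML page.
        body = f"<html><body><h1>You asked for {self.path}</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Serve on port 8080; a real web server also handles TLS,
    # virtual hosts, caching, compression and much more.
    HTTPServer(("0.0.0.0", 8080), PageHandler).serve_forever()
```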

Then there are various safety and security elements such as firewalling (firewalld, UFW, CSF), mostly built on iptables. Intrusion detection agents such as Snort, Fail2Ban and LFD are also used. These elements need consistent configuration to eliminate failure, so a configuration manager such as Chef, Puppet or Salt is used. These settings need to be version controlled, for which we use Git or SVN. Routers and switches have their own configuration languages, which are mostly vendor specific. We are fortunate enough to have all these skills in-house, as well as a team of virtualization experts.
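The idea behind configuration managers like Chef, Puppet and Salt is idempotency: describe the desired state and apply changes only where the system differs from it. A minimal sketch of that pattern is below; the file path and setting are hypothetical, and real tools add templating, dependency ordering and reporting on top:

```python
import os

def ensure_line(path: str, line: str) -> bool:
    """Idempotently ensure that `line` is present in the file at `path`.

    Returns True if the file was changed, False if it was already in the
    desired state - the check-then-apply pattern configuration managers
    are built around.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # already compliant, nothing to do
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Hypothetical example: make sure an SSH daemon setting is present.
changed = ensure_line("/tmp/sshd_config.demo", "PermitRootLogin no")
print("changed" if changed else "already compliant")
```

Running it twice in a row changes nothing the second time, which is exactly the behaviour you want from a configuration run.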

Happy Hosting!

HOSTAFRICA


The Author

Michael O.

Michael is the founder, managing director, and CEO of HOSTAFRICA. He studied at Friedrich Schiller University Jena and was inspired by Cape Town's beauty to bring his German expertise to Africa. Before HOSTAFRICA, Michael was the Managing Director of Deutsche Börse Cloud Exchange AG, one of Germany's largest virtual server providers.
