(Hyper) Converged Infrastructure

In a traditional infrastructure deployment, compute, storage and networking are deployed and managed independently, often based on components from multiple vendors. In a converged infrastructure, the compute, storage, and network components are designed, assembled, and delivered by one vendor and managed as one system, typically deployed in one or more racks. A converged infrastructure minimizes compatibility issues between servers, storage systems and network devices while also reducing costs for cabling, cooling, power and floor space.

The technology is usually difficult to expand on demand: adding new resources typically requires deploying another rack of infrastructure. The following picture shows an example of a converged system.


While a converged infrastructure is deployed as individual components in a rack, a hyperconverged infrastructure (HCI) brings the same components together within a single server node.

A hyperconverged infrastructure comprises a large number of identical physical servers from one vendor, each with direct-attached storage, and special software that manages all servers, storage, and networking as one cluster running virtual machines.

The technology is easy to expand on demand by adding servers to the hyperconverged cluster. The following picture shows an example of a hyperconverged system.


Hyperconverged systems are an ideal candidate for deploying VDI environments (see section 12.3.3), because the storage is close to the compute (it is in the same box) and the solution scales well as the number of users grows.

A big advantage of converged and hyperconverged infrastructures is that there is only one firmware and software vendor to deal with. Vendors of hyperconverged infrastructures provide all updates for compute, storage, and networking in one service pack, and deploying these patches is typically much easier than upgrading all individual components in a traditional infrastructure deployment.

Drawbacks of converged and hyperconverged infrastructures are:

  • Vendor lock-in – the solution is only beneficial if all infrastructure is from the same vendor
  • Scaling can only be done in fixed building blocks – if more storage is needed, compute must also be purchased. This can have a side effect: since some software licenses are based on the number of used CPUs or CPU cores, adding storage also means adding CPUs and hence leads to extra license costs.

This entry was posted on Friday 21 October 2016

Object storage

Object storage stores data in a flat address space. Data is stored and retrieved using RESTful API calls over HTTP. This is in contrast with regular file systems, which store data in hierarchical, directory-based structures and are accessed using specialized storage protocols.

Where a traditional file system provides a structure that simplifies locating files (for example, a log file is stored in /var/log/proxy/proxy.log), in object storage the application itself must keep track of a file’s location using its object ID. For example, an application keeps track of the fact that its log file has object ID 8932189023.

An object storage container stores the actual data (for example, a document, an image, or a video file), its metadata (for example, date and size), and a unique Object ID. Amazon’s S3 service pioneered object storage and its protocol became the de facto standard for object storage.
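
As an illustration, the sketch below stores and retrieves an object through an S3-compatible API using Python's boto3 library. The endpoint URL, bucket name, and object key are hypothetical examples.

```python
# Minimal sketch: storing and retrieving an object through an S3-compatible
# API. The endpoint URL, bucket name, and key are hypothetical examples.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Store the object: the flat key acts as the Object ID the application must
# keep track of; there is no directory hierarchy on the storage side.
s3.put_object(
    Bucket="app-logs",
    Key="8932189023",                  # Object ID administered by the application
    Body=b"2016-10-07 12:00:01 proxy started",
    Metadata={"source": "proxy", "type": "logfile"},
)

# Retrieve the object later using the same Object ID.
response = s3.get_object(Bucket="app-logs", Key="8932189023")
data = response["Body"].read()
```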

The flat address space in an object storage system enables simplicity and massive scalability of the storage system, as the Object ID is a link to a physical file that can be stored anywhere.

Data in object-based storage typically can’t be modified in place. Instead, modified files must be deleted and rewritten, each time leading to a new Object ID that the application must store for reference.

While object storage was not designed to be used as a file system, some systems emulate a file system using object storage. For instance, Amazon’s S3FS creates a virtual filesystem, based on S3 object storage, that can be mounted to an operating system in the traditional way.

This entry was posted on Friday 07 October 2016

Software Defined Networking (SDN) and Network Function Virtualization (NFV)

Software Defined Networking (SDN) is a relatively new concept. It allows networks to be defined and controlled using software external to the physical networking devices.

With SDN, a relatively simple physical network can be programmed to act as a complex virtual network. It can become a hierarchical, complex and secured virtual structure that can easily be changed without touching the physical network components.

An SDN can be controlled from a single management console and open APIs can be used to manage the network using third party software. This is particularly useful in a cloud environment, where networks change frequently as machines are added or removed from a tenant’s environment. With a single click of a button or a single API call, complex networks can be created within seconds.
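
As an illustration of such an API call, the sketch below creates a virtual network for a tenant with a single REST request, written in Python. The controller URL, token, and payload fields are hypothetical; real SDN controllers and cloud platforms each expose their own API.

```python
# Minimal sketch: creating a virtual network for a tenant with one API call.
# The controller endpoint, token, and payload fields are hypothetical.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

payload = {
    "network": {
        "name": "tenant42-web-tier",
        "subnet": "10.42.1.0/24",
        "gateway": "10.42.1.1",
        "isolated": True,              # keep this tenant's traffic separated
    }
}

response = requests.post(f"{CONTROLLER}/networks", json=payload, headers=HEADERS)
response.raise_for_status()
print("Created virtual network:", response.json())
```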

SDN works by decoupling the control plane and data plane from each other, such that the control plane resides centrally and the data plane (the physical switches) remain distributed, as shown in the next figure.

Software Defined Networking (SDN)

In a traditional switch or router, the network device dynamically learns packet forwarding rules and stores them in each device as ARP or routing tables. In an SDN, the distributed data plane devices forward network packets based on ARP or routing rules that are loaded into the devices by an SDN controller in the central control plane. This allows the physical devices to be much simpler and more cost effective.
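
To make this decoupling concrete, the sketch below shows the control plane installing a single forwarding rule on a data plane switch. The URL and field names loosely follow the REST interface of the Ryu OpenFlow controller, but should be read as an assumption; other controllers use different schemas.

```python
# Minimal sketch: the central control plane loads a forwarding rule into a
# distributed data plane switch. Field names follow Ryu-style REST conventions
# and are an assumption; other SDN controllers use different schemas.
import requests

flow_rule = {
    "dpid": 1,                                    # datapath ID of the switch
    "priority": 100,
    "match": {"in_port": 1, "eth_dst": "00:11:22:33:44:55"},
    "actions": [{"type": "OUTPUT", "port": 2}],   # forward matching frames to port 2
}

# The switch no longer learns this rule itself; the controller installs it.
response = requests.post(
    "http://sdn-controller.example.com:8080/stats/flowentry/add", json=flow_rule
)
response.raise_for_status()
```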


Network Function Virtualization

In addition to SDN, Network Function Virtualization (NFV) is a way to virtualize networking devices like firewalls, VPN gateways, and load balancers. Instead of using a hardware appliance for each network function, NFV implements these appliances as virtual machines running software that performs the network functions.

Using APIs, NFV virtual appliances can be created and configured dynamically and on demand, leading to a flexible network configuration. It allows, for instance, deploying a new firewall as part of a script that creates a number of connected virtual machines in a cloud environment, as sketched below.
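
A minimal sketch of such a script is shown below, using the OpenStack SDK as an example cloud API; the cloud name, appliance image, flavor, and network name are assumptions.

```python
# Minimal sketch: deploying a firewall as a virtual appliance (NFV) from a
# script, using the OpenStack SDK. Cloud name, image, flavor, and network
# names are hypothetical examples.
import openstack

conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("vendor-firewall-appliance")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant42-web-tier")

# The firewall is just another virtual machine: created, configured, and
# later removed on demand through the same API as the application servers.
firewall = conn.compute.create_server(
    name="tenant42-fw01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
firewall = conn.compute.wait_for_server(firewall)
print("Firewall appliance running:", firewall.name)
```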

This entry was posted on Friday 23 September 2016

Software Defined Storage (SDS)

Software Defined Storage (SDS) abstracts data and storage capabilities (the control plane) from the underlying physical storage systems (the data plane). This allows data to be stored in a variety of storage systems while being presented and managed as one storage pool to the servers consuming the storage. The figure below shows the SDS model.

Software Defined Storage (SDS) model

Heterogeneous physical storage devices can be made part of the SDS system. SDS enables the use of standard commodity hardware, where storage is implemented as software running on commodity x86-based servers with direct attached disks. But the physical storage can also be a Storage Area Network, a Network Attached Storage system, or an Object storage system. SDS virtualizes this physical storage into one large shared virtual storage pool. From this storage pool, software provides data services like:

  • Deduplication
  • Compression
  • Caching
  • Snapshotting
  • Cloning
  • Replication
  • Tiering

SDS provides servers with virtualized data storage pools with the required performance, availability and security, delivered as block, file, or object storage, based on policies. As an example, a newly deployed database server can invoke an SDS policy that mounts storage configured to have its data striped across a number of disks, creates a daily snapshot, and has data stored on tier 1 disks.

APIs can be used to provision storage pools and set the availability, security and performance levels of the virtualized storage. In addition, using APIs, storage consumers can monitor and manage their own storage consumption.
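
What a policy-driven provisioning request might look like is sketched below. The policy fields and the SDS controller endpoint are entirely hypothetical, since every SDS product exposes its own API.

```python
# Minimal sketch: provisioning a volume from the virtual storage pool based on
# a policy. The policy fields and the SDS controller API are hypothetical.
import requests

policy = {
    "name": "tier1-database",
    "type": "block",                     # block, file, or object storage
    "striping": {"disks": 8},            # stripe data across a number of disks
    "snapshots": {"schedule": "daily", "retain": 14},
    "tier": 1,                           # store data on tier 1 (fast) disks
    "replication": {"copies": 2},
}

volume_request = {"server": "db-server-07", "size_gb": 500, "policy": policy}

response = requests.post(
    "https://sds-controller.example.com/api/volumes", json=volume_request
)
response.raise_for_status()
print("Provisioned volume:", response.json()["id"])
```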

This entry was posted on Friday 09 September 2016

What's the point of using Docker containers?


Originally, operating systems were designed to run a large number of independent processes. In practice, however, dependencies on specific versions of libraries and specific resource requirements for each application process led to using one operating system – and hence one server – per application. For instance, a database server typically only runs a database, while an application server is hosted on another machine.

Compute virtualization solves this problem, but at a price – each application needs a full operating system, leading to high license and systems management cost. And because even the smallest application needs a full operating system, much memory and many CPU cycles are wasted just to get isolation between applications. Container technology is a way to solve this issue.

Container isolation versus overhead

The figure above shows the relation between isolation between applications and the overhead of running the application. While running each application on a dedicated physical machine provides the highest isolation, the overhead is very high. An operating system, on the other hand, provides much less isolation, but at a very low overhead per application.

Container technology, also known as operating-system-level virtualization, is a server virtualization method in which the kernel of an operating system provides multiple isolated user-space instances, instead of just one. Each container looks and feels like a real server from the point of view of its owners and users, but all containers share the same operating system kernel. This isolation enables the operating system to run multiple processes, where each process shares nothing but the kernel.


Containers are not new – the first UNIX based containers, introduced in 1979, provided isolation of the root file system via the chroot operation. Solaris subsequently pioneered and explored many enhancements, and Linux control groups (cgroups) adopted many of these ideas.

Containers have been part of the Linux kernel since 2008. What is new is the use of containers to encapsulate all application components, such as dependencies and services. When all dependencies are encapsulated, applications become portable.

Using containers has a number of benefits:

  • Isolation – applications or application components can be encapsulated in containers, each operating independently and isolated from each other.
  • Portability – since containers typically contain all components the embedded application or application component needs to function, including libraries and patches, containers can run on any infrastructure that is capable of running containers using the same kernel version.
  • Easy deployment – containers allow developers to quickly deploy new software versions, as the containers they define can be moved to production unaltered.

Container technology

Containers are based on three technologies that are all part of the Linux kernel (combined in the sketch after this list):

  • Chroot (also known as a jail) - changes the apparent root directory for the current running process and its children and ensures that these processes cannot access files outside the designated directory tree. Chroot was available in Unix as early as 1979.
  • Cgroups - limits and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. Cgroups has been part of the Linux kernel since 2008.
  • Namespaces - allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs, and mounted file systems. Namespaces have been part of the Linux kernel since 2002.
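
The sketch below combines two of these building blocks – a cgroup that caps memory usage and a chroot jail – to confine a single process; namespaces are left out for brevity. It assumes a Linux host with a cgroup v2 hierarchy, a prepared root file system in /srv/jail, and root privileges.

```python
# Minimal sketch of two container building blocks: a cgroup (v2) limits the
# memory of a process and chroot confines its file system view. Assumes a
# Linux host, root privileges, and a prepared root file system in /srv/jail.
import os

JAIL = "/srv/jail"                       # hypothetical prepared root file system
CGROUP = "/sys/fs/cgroup/demo"           # cgroup v2 hierarchy assumed

# Cgroups: create a group and cap its memory usage at 256 MB.
os.makedirs(CGROUP, exist_ok=True)
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))

pid = os.fork()
if pid == 0:
    # Child process: join the cgroup, then change its apparent root directory.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))
    os.chroot(JAIL)                      # chroot: files outside the jail are invisible
    os.chdir("/")
    os.execv("/bin/sh", ["/bin/sh"])     # run a shell inside the confined environment
else:
    os.waitpid(pid, 0)
```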

Linux Containers (LXC), introduced in 2008, is a combination of chroot, cgroups, and namespaces, providing isolated environments, called containers.

Docker can use LXC as one of its execution drivers. It adds a union file system – a way of combining multiple directories into one that appears to contain their combined contents – to the containers, allowing multiple layers of software to be "stacked". Docker also automates the deployment of applications inside containers, as sketched below.
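
As an illustration of how this looks in practice, the example below runs an application in a container using the Docker SDK for Python; the image name, port mapping, and memory limit are arbitrary examples.

```python
# Minimal sketch: running an application inside a container with the Docker
# SDK for Python. Image name, port mapping, and memory limit are examples.
import docker

client = docker.from_env()

# The image consists of stacked read-only layers (base OS libraries, runtime,
# application); the container adds a thin writable layer on top.
container = client.containers.run(
    "nginx:1.21",                  # image is pulled from a registry if not cached
    detach=True,
    name="web01",
    mem_limit="256m",              # resource limit, enforced through cgroups
    ports={"80/tcp": 8080},        # map container port 80 to host port 8080
)

print(container.status)
container.stop()
container.remove()
```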

Containers and security

While containers provide some isolation, they still use the same underlying kernel and libraries. Isolation between containers on the same machine is much lower than virtual machine isolation. Virtual machines get their isolation from hardware, using specialized CPU instructions. Containers don't have this level of isolation. However, there are some operating systems, like Joyent's SmartOS, that run on bare metal and provide containers with hardware-based isolation using the same specialized CPU instructions.

Since developers define the contents of containers, security officers lose control over them, which could lead to unnoticed vulnerabilities: multiple versions of tools, unpatched software, outdated software, or unlicensed software. To solve this issue, a repository with predefined and approved container components and container hierarchies can be implemented.

Container orchestration

Where an operating system abstracts resources such as CPU, RAM, and network connectivity and provides services to applications, container orchestration, also known as a datacenter operating system, abstracts the resources of a cluster of machines and provides services to containers. A container orchestrator allows containers to be run anywhere on the cluster of machines – it schedules the containers to any machine that has resources available. It acts like a kernel for the combined resources of an entire datacenter instead of the resources of just a single computer.
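
The sketch below illustrates this with Kubernetes, one of the orchestrators listed below: it declares that three replicas of a container must run somewhere on the cluster and leaves the placement to the orchestrator. The names and image are examples, and the official Python client is assumed to find credentials in the local kubeconfig.

```python
# Minimal sketch: asking a container orchestrator (Kubernetes) to run three
# replicas of a container somewhere on the cluster. Names and image are
# examples; credentials are read from the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                  # desired number of containers
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.21")]
            ),
        ),
    ),
)

# The orchestrator decides on which machines in the cluster the containers run.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```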


There are many frameworks for managing container images and orchestrating the container lifecycle. Some examples are:

  • Docker Swarm
  • Apache Mesos
  • Google's Kubernetes
  • Rancher
  • Pivotal Cloud Foundry
  • Mesosphere DC/OS

This entry was posted on Wednesday 22 June 2016
