Cloud computing and Infrastructure

In recent years, we have seen the widespread adoption of cloud computing, one of the most important paradigm shifts in computing of the last decades. Many organizations now have a cloud-first strategy and are taking steps to move applications from their own on-premises datacenters to clouds managed by cloud providers.

The term cloud is not new. As early as 1997, Ramnath Chellappa of the University of Texas stated:

Computing has evolved from a mainframe-based structure to a network-based architecture. While many terms have appeared to describe these new forms, the advent of electronic commerce has led to the emergence of 'cloud computing'.

While there are many public cloud service providers today, the three largest are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Together, these three hold 66% of the market share and operate a large number of datacenters around the world. The following picture shows when each of these cloud providers started.

The three major cloud providers offer similar services, but sometimes under different names. For instance, a virtual machine in Azure is simply called a virtual machine, but in GCP it is called a Compute Engine instance and in AWS an EC2 instance.
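These naming differences can be captured in a small lookup table. The sketch below is illustrative only; the service names are the commonly used ones, and the list is far from exhaustive:

```python
# Illustrative mapping of equivalent services across the three major
# cloud providers (names as commonly used; not an exhaustive list).
SERVICE_NAMES = {
    "virtual machine": {
        "AWS": "EC2 instance",
        "Azure": "Virtual Machine",
        "GCP": "Compute Engine instance",
    },
    "object storage": {
        "AWS": "S3",
        "Azure": "Blob Storage",
        "GCP": "Cloud Storage",
    },
}

def provider_name(generic: str, provider: str) -> str:
    """Translate a generic service name to a provider-specific one."""
    return SERVICE_NAMES[generic][provider]

print(provider_name("virtual machine", "AWS"))  # EC2 instance
```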

While cloud computing can be seen as the new infrastructure, many organizations will keep using on-premises infrastructure for many years to come. Migrating a complex application landscape to a cloud provider is no simple task and can take years. And an organization may not be allowed to move all of its applications to the cloud. In many cases, there will be a hybrid situation, with part of the infrastructure on-premises and another part in one or more clouds.

Please be aware that the cloud is just a number of datacenters that are still filled with hardware – compute, networking and storage. Therefore, it is good to understand infrastructure building blocks and principles even when moving to the cloud.

Cloud definition

The most accepted definition of cloud computing is that of the National Institute of Standards and Technology (NIST):

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

It is important to realize that cloud computing is not about technology; it is an outsourcing business model. It enables organizations to cut costs while at the same time focusing on their primary business – they should focus on running their business instead of running a mail server.

The cloud model is composed of five essential characteristics, four deployment models, and three service models.

Cloud characteristics

Essential cloud characteristics are:

  • On-demand self-service – As a result of optimal automation and orchestration, minimal systems management effort is needed to deploy systems or applications in a cloud environment. In most cases, end users can configure, deploy, start, and stop systems or applications on demand.
  • Rapid elasticity – A cloud is able to quickly scale resources up and down. When temporarily more processing power or storage is needed, for instance as a result of a high-exposure business marketing campaign, a cloud can scale up very quickly on demand. When demand decreases, cloud resources can rapidly scale down, leading to elasticity of resources.
  • Resource pooling – Instead of providing each application with a fixed amount of processing power and storage, cloud computing provides applications with resources from a shared pool. This is typically implemented using virtualization technologies.
  • Measured service – In a cloud environment, the actual resource usage is measured and billed. There are no capital expenses, only operational expenses. This is in contrast to the investments needed to build a traditional infrastructure.
  • Broad network access – Capabilities are available over the network and accessed through standard mechanisms.
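The measured-service characteristic can be illustrated with a small pay-per-use calculation. The rates below are purely fictional and chosen only to show the principle of paying for actual usage:

```python
# Pay-per-use billing sketch: you pay only for what you actually consume.
# Rates are fictional and chosen for illustration only.
HOURLY_RATE_VM = 0.05      # price per VM-hour (fictional)
RATE_PER_GB_MONTH = 0.02   # price per GB of storage per month (fictional)

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Operational cost for one month: no up-front investment, only usage."""
    return vm_hours * HOURLY_RATE_VM + storage_gb * RATE_PER_GB_MONTH

# A VM running 24x7 for a 30-day month, plus 100 GB of storage:
print(round(monthly_bill(24 * 30, 100), 2))  # 38.0
```

Scaling down to zero usage means a bill of zero – something a purchased server can never offer.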

Be aware that when using public cloud based solutions, the internet connection becomes a Single Point of Failure. Internet availability and performance become critical, and redundant connectivity is therefore key.

Cloud deployment models

A cloud can be implemented in one of four deployment models.

  • A public cloud deployment is delivered by a cloud service provider, is accessible through the internet, and available to the general public. Because of their large customer base, public clouds largely benefit from economies of scale.
  • A private cloud is operated solely for a single organization, whether managed internally or by a third party, and hosted either on premises or externally. It extensively uses virtualization and standardization to reduce systems management costs and staff.
  • A community cloud is much like a private cloud, but shared with a community of organizations that have shared concerns (like compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination, and it may exist on or off premises.
  • In a hybrid cloud deployment, a service or application is provided by a combination of a public cloud, and a community cloud and/or a private cloud. This enables running generic services (like email servers) in the public cloud while hosting specialized services (like a business specific application) in the private or community cloud.

Cloud service models

Clouds can be delivered in one of three service models:

  • Software-as-a-Service (SaaS) delivers full applications that can be used by business users and need little or no configuration. Examples are Microsoft Office 365, LinkedIn, Facebook, Twitter, and Salesforce.com.
  • Platform-as-a-Service (PaaS) delivers a scalable, highly available, open programming platform that can be used by developers to build bespoke applications that run on the PaaS platform. Examples are Microsoft Azure Cloud Services and Google App Engine.
  • Infrastructure-as-a-Service (IaaS) delivers (virtual) machines, networking, and storage. The user needs to install and maintain the operating systems and the layers above that. Examples are Amazon Elastic Compute Cloud (EC2), Amazon S3, and Microsoft Azure IaaS.
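The division of responsibilities between provider and user in each service model can be sketched as a simple mapping. The layer names below are a simplification chosen for illustration; real stacks are more fine-grained:

```python
# Which layers of the stack the *cloud provider* manages in each service
# model. Layer names are a simplification for illustration.
STACK = ["hardware", "virtualization", "operating system", "runtime", "application"]

PROVIDER_MANAGED = {
    "IaaS": {"hardware", "virtualization"},
    "PaaS": {"hardware", "virtualization", "operating system", "runtime"},
    "SaaS": set(STACK),
}

def user_managed(model: str) -> list:
    """Layers left to the user for a given service model."""
    return [layer for layer in STACK if layer not in PROVIDER_MANAGED[model]]

print(user_managed("IaaS"))  # ['operating system', 'runtime', 'application']
print(user_managed("SaaS"))  # []
```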

The following figure shows the responsibility of the cloud provider for each service model.

 

In the context of infrastructure, IaaS is the most relevant service model.

When we combine both deployment and service models, we get the following picture.

The next section describes Infrastructure as a Service in more detail.

Infrastructure as a Service (IaaS)

Infrastructure as a Service provides virtual machines, virtualized storage, virtualized networking and the systems management tools to manage them. IaaS can be configured using a graphical user interface (GUI), a command line interface (CLI), or application programming interfaces (APIs).
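Provisioning via an API means infrastructure can be managed entirely from code. The client below is a hypothetical, in-memory stand-in – no real cloud API is called – that only illustrates the programmatic style of creating and controlling virtual machines:

```python
# Hypothetical, in-memory IaaS client: illustrates the programmatic
# (API-driven) style of provisioning. No real cloud API is called.
import uuid

class FakeIaaSClient:
    def __init__(self):
        self._machines = {}

    def create_vm(self, name: str, size: str) -> str:
        """Request a new virtual machine; returns its identifier."""
        vm_id = str(uuid.uuid4())
        self._machines[vm_id] = {"name": name, "size": size, "state": "running"}
        return vm_id

    def stop_vm(self, vm_id: str) -> None:
        self._machines[vm_id]["state"] = "stopped"

    def state(self, vm_id: str) -> str:
        return self._machines[vm_id]["state"]

client = FakeIaaSClient()
vm = client.create_vm("webserver-1", "small")
print(client.state(vm))  # running
client.stop_vm(vm)
print(client.state(vm))  # stopped
```

Real provider APIs (and the CLIs and GUIs built on top of them) follow this same request/identifier pattern, with authentication and many more parameters added.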

IaaS is typically based on cheap, commodity, white-label hardware. The philosophy is to keep costs down by accepting that hardware fails every now and then. Failed components are either replaced or simply removed from the pool of available resources.

IaaS provides simple, highly standardized building blocks to applications. It does not provide high availability, guaranteed performance, or extensive security controls. Consequently, applications running on IaaS should be robust enough to cope with failing hardware and should be horizontally scalable to increase performance.

In order to use IaaS, users must create and start a new server, and then install an operating system and their applications. Since the cloud provider only provides basic services, like billing and monitoring, the user is responsible for patching and maintaining the operating systems and application software.

Not all operating systems and applications can be used in an IaaS cloud; some software licenses prohibit the use of a fully scalable, virtual environment like IaaS, where it is impossible to know in advance on which machines software will run.



This entry was posted on Friday 28 February 2025

What is IT architecture?

Most of today's infrastructure landscapes are the result of a history of application implementation projects that brought in their own specialized hardware and infrastructure components. Mergers and acquisitions have made matters worse, leaving many organizations with multiple sets of the same infrastructure services that are difficult to interconnect, let alone integrate and consolidate.

Organizations benefit from infrastructure architecture when they want to be more flexible and agile because a solid, scalable, and modular infrastructure provides a solid foundation for agile adaptations. The market demands a level of agility that can no longer be supported by infrastructures that are inconsistent and difficult to scale. We need infrastructures built with standardized, modular components. And to make infrastructures consistent and aligned with business needs, architecture is critical.

Architecture is the philosophy that underlies a system and defines its purpose, intent, and structure. Different areas of architecture can be defined, including business architecture, enterprise architecture, data architecture, application architecture, and infrastructure architecture. Each of these areas has certain unique characteristics, but at their most basic level, they all aim to map IT solutions to business value.

Architecture is needed to govern an infrastructure as it is designed, as it is used, and as it is changed. We can broadly categorize architects into three groups: enterprise architects, domain architects, and solution architects, each with their own role.

Solution architects

Solution architects create IT solutions, usually as a member of a project team. A solution architect is finished when the project is complete. Solution architects are the technical conscience and authority of a project, are responsible for architectural decisions in the project, and work closely with the project manager.

Where the project manager manages the process of a project, the solution architect manages the technical solution of the project, based on business and technical requirements.

Domain architects

Domain architects are experts on a particular business or technology topic. Because solution architects cannot always be fully knowledgeable about all technological details or specific business domain issues, domain architects often assist solution architects on projects. Domain architects also support enterprise architects because they are aware of the latest developments in their field and can inform enterprise architects about new technologies and roadmaps. Examples of domain architects are cloud architects, network architects, and VMware architects.

Domain architects most often work for infrastructure or software vendors, where they help customers implement the vendor's technologies.

Enterprise architects

Enterprise architects continuously align an organization's entire IT landscape with the business activities of the organization. Using a structured approach, enterprise architects enable transformations of the IT landscape (including the IT infrastructure). Therefore, an enterprise architect is never finished (unlike the solution architect in a project, who is finished when the project is finished).

Enterprise architects typically work closely with the CIO and business units to align the needs of the business with the current and future IT landscape. Enterprise architects build bridges and act as advisors to the business and IT.


This entry was posted on Friday 31 January 2025

Infrastructure as Code pipelines

Infra-as-code pipelines are tools that perform predefined steps to deploy infrastructure. There are many tools available for building pipelines, including Jenkins, Bamboo, AWS CodePipeline, and Azure DevOps.

As shown in the figure above, a pipeline for IaC can perform the following steps to create a new infrastructure environment.

  • The IaC code is stored in a version control system. Any change made to the infrastructure code triggers the pipeline to run automatically.
  • The new code is fetched from the repository.
  • A test run is performed to check that the code has no errors and can be deployed in the target environment.
  • After the code passes the test, it is deployed to the target environment using IaC tools.
  • After all infrastructure components are created, the configuration definition is fetched from the repository.
  • The configuration tool automatically configures the infrastructure components, based on the configuration definitions, leading to a running, configured infrastructure component.

Once the infrastructure is deployed, it needs to be validated to ensure that everything is working as expected.


This entry was posted on Sunday 29 December 2024

Quantum computing

A quantum computer is a computer based on quantum mechanics. Quantum mechanics is a scientific theory that explains how tiny particles like atoms and electrons behave and interact with each other. Quantum mechanics deals with very small particles and operates on principles like probability and uncertainty.

A quantum computer does not use classical CPUs or GPUs, but a processor based on so-called qubits. A qubit (or quantum bit) is the basic unit of quantum information. Unlike classical bits, which can be either 0 or 1, qubits can exist in a superposition of states, representing multiple values simultaneously. This property enables quantum computers to perform certain tasks much faster than classical computers.

The number of qubits in a quantum computer is not comparable to the number of transistors in a CPU. The idea behind a quantum computer is that instead of calculating all the possibilities of a problem one by one, a quantum computer can evaluate all of them at once. Since n qubits can represent 2^n states in superposition, a problem with a billion possibilities can be handled with just 30 qubits.
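This works because n qubits span 2^n basis states; a quick check of the arithmetic:

```python
# n qubits span 2**n basis states, so 30 qubits cover roughly a billion
# possibilities simultaneously.
n_qubits = 30
states = 2 ** n_qubits
print(states)                  # 1073741824
print(states > 1_000_000_000)  # True
```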

But computing is not quite the right word. Traditional computers are deterministic, while quantum computers are probabilistic. Deterministic means that the result is predetermined: every time a calculation is performed, the answer will be the same. Probabilistic means that there is a high probability that the result is correct, but each computation is an approximation that may produce a different result each time. Because of the uncertainty inherent in quantum mechanics, the answer is by definition an approximation.

Qubits are also highly unstable - they must be cooled to near absolute zero to become superconducting, and they can only hold a stable position for a few milliseconds. This means that calculations have to be repeated many times to get a sufficiently reliable answer.
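The effect of repeating a probabilistic computation can be illustrated with a classical toy model – this simulates nothing quantum, it only shows why repetition increases reliability. A single run is correct with, say, 80% probability, but a majority vote over many runs is almost always correct:

```python
# Classical toy model: each run returns the correct answer with
# probability 0.8; repeating and taking a majority vote boosts reliability.
import random

random.seed(42)  # fixed seed so the example is reproducible

def noisy_run(correct: int = 1, p_correct: float = 0.8) -> int:
    """One unreliable 'computation': right 80% of the time."""
    return correct if random.random() < p_correct else 1 - correct

def majority_vote(repetitions: int) -> int:
    """Repeat the noisy run and return the most frequent answer."""
    ones = sum(noisy_run() for _ in range(repetitions))
    return 1 if ones > repetitions / 2 else 0

print(majority_vote(1001))  # 1 (the correct answer, with overwhelming probability)
```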

Quantum computers are still in the experimental stage. A few research centers and large companies like IBM are working on them. Given the complexity and cooling requirements, quantum computing capabilities will most likely be offered as a cloud service in the future.

Quantum computing can be used in medicine: for example, it could speed up drug discovery and medical research by simulating chemical reactions or protein folding, something that is practically impossible on classical computers because it would take thousands of years on a classical supercomputer.

Because of its properties, quantum computing could easily break current encryption systems. Therefore, cryptographers are working on post-quantum algorithms.

IBM has built the largest quantum computer yet, with 433 qubits. This figure shows the progression of the number of qubits in the largest quantum computers.


This entry was posted on Thursday 20 April 2023

US can still claim European Microsoft data despite new rules

Recent years have seen an enormous increase in the use of cloud services from American companies such as Microsoft, Amazon, and Google by companies and institutions in the Netherlands. It is a good step that Microsoft has now chosen to keep data of European citizens and organizations that is stored in the Azure cloud within Europe. This step is particularly important because the Dutch government is also expected to make increasing use of cloud services, now that the national government cloud policy (Rijkscloudbeleid) passed the Dutch House of Representatives (Tweede Kamer) in August 2022.

Other cloud providers are expected to take similar steps. However, although Microsoft's EU Data Boundary initiative is an important step, it remains possible for the United States to claim data of European citizens and companies under the American CLOUD Act. This is a legal problem that cannot be solved with technical measures or processes.


This entry was posted on Friday 06 January 2023



Disclaimer

The postings on this site are my opinions and do not necessarily represent CGI’s strategies, views or opinions.

 

Copyright Sjaak Laan