Getting Started with Amazon Web Services (AWS)


Amazon Web Services is a little-known division of the online retail giant, except to those of us in the business of IT. It's interesting to see that AWS accounted for 56 percent of Amazon's total operating income, on $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may wonder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses already have some cloud presence in their IT infrastructure; the most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here with AWS is how businesses use it for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates applications currently running on a business's own servers to a hosted cloud provider. Businesses consider it to reduce hardware costs, become more agile with their IT, and even improve security. The following are the five simple steps we have developed for moving to IaaS with AWS.


1) Define the workloads to migrate - The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good place to start, and where many businesses do, is a data backup solution. You can use your existing backup software or one from a current AWS partner; these include industry leaders such as Commvault and Veritas, and if you already use one of these solutions, that is even better. Start small, and you may even find you can operate within the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).

2) Calculate cost and Return on Investment (ROI) - Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs seem to offer the greatest savings as you first start your cloud presence. These costs include the server hardware used, if the cloud isn't already utilized, as well as the time needed to assemble and configure it. When configuring a physical server, a hardware technician has to estimate the application's growth in order to size the server properly. With AWS it is pay as you go; you rent only what you actually use. Other hard costs, such as power consumption and networking, will be saved as well. Many times when starting small, it doesn't take a formal ROI process, or documenting soft costs such as customer satisfaction, to see that the move makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.

3) Determine cloud compatibility - There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. It can be as simple as an application that requires a large amount of bandwidth or is sensitive to data latency. Additionally, industries that are subject to regulation, such as PCI DSS or HIPAA, have a further incentive to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI); this regulated data should be encrypted both in transit and at rest. This encryption requirement wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance practice is always a good idea and can help assure smooth sailing for years to come.
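As a rough illustration of the encryption-at-rest requirement, the sketch below uses an AWS CloudFormation template to declare a backup bucket that is encrypted by default. This is a minimal example only; the bucket name is a hypothetical placeholder, and encryption in transit would still be handled separately (for example, by using TLS endpoints).

```yaml
# Minimal sketch: an S3 backup bucket with default encryption at rest.
# The bucket name is a hypothetical placeholder and must be globally unique.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example backup bucket encrypted at rest by default
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-phi-backups
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms   # server-side encryption with AWS-managed KMS keys
```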

4) Determine how to migrate existing data to the cloud - AWS provides many ways to migrate data, most of which do not incur additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. The most popular methods follow.

a) Virtual Private Network - This common, secure transport method moves data over the internet and is suited to data that is not sensitive to latency. In most cases a separate virtual server running an AWS storage gateway will be used.
b) Direct Connect - AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These circuits are typically 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at an Amazon partner data center; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the data center.
c) Import/Export - AWS allows customers to ship their own storage devices containing data to AWS to be migrated into their cloud instance. AWS publishes a list of compatible devices and returns the hardware when the migration is complete.
d) Snowball - Snowball is similar to Import/Export except that Amazon provides the storage device. A Snowball can store up to 50 terabytes (TB) of data and can be chained with up to four other Snowballs. It also makes sense for sites with little or no internet connectivity. The device ships as is; there is no need to box it up. It can encrypt the data and has two 10 Gbps Ethernet ports for data transfer. Devices like the Snowball are vital for migrations involving large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred; transfer time is roughly the amount of data divided by 80 percent of the link's throughput. It is easy to see that large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
Company's internet speed | Theoretical days to transfer 100 TB at 80% utilization | Amount of data to consider a device
T3 (44.73 Mbps)          | 269 days                                               | 2 TB or more
100 Mbps                 | 120 days                                               | 5 TB or more
1,000 Mbps (1 Gbps)      | 12 days                                                | 60 TB or more

5) Test and Monitor - Once your instance is set up and all the data migrated, it's time to test. Best practice is to test the application in the most realistic setting possible: during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your AWS resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
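To make the idea of an alarm rule concrete, here is a minimal sketch of how the CPU example above might be declared in a CloudFormation template. The instance ID and SNS notification topic are placeholders, not resources from a real account.

```yaml
# Minimal sketch: a CloudWatch alarm that notifies when average CPU on one
# instance stays above 80% for two consecutive 5-minute periods.
Resources:
  HighCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Alert when average CPU stays above 80% for 10 minutes
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - Name: InstanceId
          Value: i-0123456789abcdef0            # placeholder instance ID
      Statistic: Average
      Period: 300                               # evaluate in 5-minute windows
      EvaluationPeriods: 2
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - arn:aws:sns:us-east-1:111122223333:ops-alerts   # placeholder SNS topic
```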

To learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.

 


If Software is Eating the World, then the Community is Serving it Up!

In 2011, Marc Andreessen, the co-founder of Netscape, wrote his famous essay for the Wall Street Journal, "Why Software Is Eating the World". The premise is as true today as ever; however, the path it is taking is evolving. Open-source, or community-based, software development has increased in popularity recently. According to Wikipedia, open-source software "is a type of computer software whose source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose". In the following article, I will describe the origins of open source and how it has evolved to become a primary way software is developed.

There are many different opinions as to the origins of open source. Many believe its real momentum coincided with the beginnings of the Linux operating system, developed by Linus Torvalds in 1991. Linux grew quickly in the mid-1990s, creating a new segment of the software industry, and this fast-paced growth was complemented by a high-quality product. In the late 1990s, Netscape released the source code for its browser as the Mozilla project, which became some of the first widely used software available for free. Soon after, many of the major players in the open-source movement began to organize to oversee the process, and the term "open-source software" was coined.

Can you make money on a free product?

To many, Red Hat is the most common example of a company built on open-source software, with a business model built on support. Red Hat has developers who contribute to the software, but its primary strategy is to stabilize and support open-source projects. Red Hat reported overall revenue of $2.4 billion in 2017 while maintaining higher-than-usual margins for the technology industry. Other early adopters of the open-source trend include IBM and Google; both technology titans have developed their own strategies to monetize open source.

How do developers GIT started?

When software is being produced by multiple developers in many places, consistency and version control are vital. This was discovered early on as Linus Torvalds was developing the Linux kernel. Linus and other early adopters developed a version control system called Git for tracking changes in files and software, and it has allowed open-source software development to flourish. Developers can work on the same software project simultaneously and keep it consistent. Git controls the pushes and pulls of updates from the individual contributors to the primary software product. Often the entire team works against a central repository hosted on a service such as GitHub. GitHub is an independent hosting organization that provides a remote repository for each of its users, along with the centralized management developers need to make changes and update the software.

Will open-source software continue to grow?

The trend is clear and the forecast obvious: open source will not only continue to grow, but the growth will accelerate. There are too many benefits for it not to: lower cost, no vendor lock-in, and the primary benefit of many minds collaborating. Some recent actions from the largest technology organizations in the world back this up. Google, which developed the container orchestration platform Kubernetes, donated the entire project to the Cloud Native Computing Foundation. Google believes it will profit more from the software in the open-source community than by keeping it proprietary. The most convincing argument comes from the largest software provider in the world, Microsoft. Microsoft had struggled to grow and keep its competitive edge until Satya Nadella was named CEO in 2014. Nadella has promoted open source and embraced the competing Linux operating system in Microsoft's cloud platform, Azure. This has changed the culture of the entire organization. His leadership has transformed Microsoft back into a growth company focused on its cloud offering instead of a monopolistic software provider. In June 2018 Microsoft announced the purchase of GitHub for $7.5 billion. This acquisition will accelerate the use of GitHub in the enterprise space and bring Microsoft development tools to a new audience. Nadella was quoted as saying "we recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world's most pressing challenges". When these two giants of the cloud and technology business make such bold statements on a trend, it is likely to grow.

What Andreessen first alluded to is true: software will play an ever larger role in society going forward. This is being driven in large part by the open-source, community-based development movement and the connectivity that cloud computing brings to the process.

If you would like to talk more about cloud strategies for software development contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 


What is Code as a Service?

 


When I first started experimenting with the public cloud providers, I, like many, began by setting up demo accounts and creating virtual servers. It isn't a complicated process to create servers, particularly if you compare it to the process of buying hardware and loading software that was required ten years ago. Cloud computing's self-service capabilities have caused a major disruption in the business of information technology. But even as I "spun up" servers in a matter of minutes or seconds, I began to wonder: how does the large enterprise migrate to and manage its cloud environment? How does it maintain IT governance and frameworks in its cloud infrastructure as it has with its on-premises infrastructure? How does it maintain standards given all the ever-changing choices so commonly provided by the cloud vendors? These questions seem manageable for small implementations, but how does the enterprise handle them across dozens or even hundreds of cloud architects creating virtual servers? In short, the question I attempt to answer here is: what tools are available to maintain IT governance and security compliance in the "move fast and break things" world of the cloud? The answer to all of these questions can be found in what has been coined Code as a Service (CaaS), or Infrastructure as Code (IaC).

Automation with Code as a Service

CaaS's primary function is automation. It uses software to automate repetitive practices to hasten and simplify implementations and processes. A valuable byproduct of this automation is consistency. When processes are automated, they can be designed from the start to follow the organization's rules for regulation and governance. This helps assure that no matter how fast a process is moving or how many users are involved, governance is maintained.

Popular Code as a Service tools

There are a host of these tools designed to automate and govern the development of software and IT infrastructure. The following are examples, starting with the most general IT automation systems and moving to tools designed specifically for cloud infrastructure.

Ansible

Ansible is open-source automation software sponsored by Red Hat. In addition to cloud provisioning, it assists with application deployment, intra-service orchestration, and configuration. Ansible playbooks are written in YAML, a simple, human-readable format. Ansible has many modules that integrate with the most common cloud solutions, such as AWS, Google Cloud Platform (GCP), and VMware.
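As a hedged sketch of what such a playbook can look like, the example below provisions a single small EC2 instance. It assumes the amazon.aws collection is installed and AWS credentials are already configured; the AMI ID, region, and instance name are placeholders rather than values from any real environment.

```yaml
# Minimal sketch of an Ansible playbook that provisions one EC2 instance.
# Assumes the amazon.aws collection and AWS credentials are in place;
# the AMI ID, region, and instance name are placeholders.
- name: Provision a small web server in AWS
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch a t2.micro instance from a placeholder AMI
      amazon.aws.ec2_instance:
        name: demo-web-01
        region: us-east-1
        instance_type: t2.micro
        image_id: ami-0123456789abcdef0   # placeholder AMI ID
        state: running
        tags:
          Environment: pilot
```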

Terraform

Terraform is infrastructure-as-code software from HashiCorp. It primarily focuses on creating data center infrastructure provided by the large public clouds. Terraform uses its own HashiCorp Configuration Language (HCL), with JSON as an alternative syntax, to define infrastructure templates, and it integrates with AWS, Azure, GCP, and IBM Cloud.

Kubernetes

Kubernetes is an open-source project started by Google and donated in its entirety to the Cloud Native Computing Foundation (CNCF). It orchestrates and automates the deployment of containers. Containers are a lighter-weight alternative to virtual servers that has helped drive the popularity of microservices. Microservices build business applications by combining many smaller services into a complete solution; they are used to increase agility and uptime and to make application maintenance easier and less disruptive.
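To give a sense of how Kubernetes is driven by declarative files, here is a minimal sketch of a Deployment manifest for a single microservice. The service name, image, and replica count are illustrative placeholders.

```yaml
# Minimal sketch of a Kubernetes Deployment for one microservice.
# The service name, image, and replica count are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                      # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```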

CloudFormation

CloudFormation is Amazon Web Services' CaaS offering and is provided to customers at no charge. CloudFormation templates can be written in YAML or JSON and make the deployment of AWS services at scale quicker and more secure. CloudFormation saves massive amounts of time for the enterprise cloud architect and ensures that every instance maintains the organization's IT governance.
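Below is a minimal sketch of the kind of template an enterprise might standardize on: a pre-approved instance size launched from a hardened image, with the organization's required tags. The AMI ID and tag values are placeholders, not recommendations.

```yaml
# Minimal sketch of a CloudFormation template that launches a standard,
# pre-approved instance type with required governance tags.
# The AMI ID and tag values are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Standardized application server with governance tags
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro               # pre-approved size for pilot workloads
      ImageId: ami-0123456789abcdef0       # placeholder hardened AMI
      Tags:
        - Key: CostCenter
          Value: cloud-pilot
        - Key: Owner
          Value: it-operations
```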

Code as a Service is a valuable tool for cloud architects and businesses looking to create cloud-native applications or migrate their applications to cloud service providers. There are many products, but most are open source and use playbooks or templates to help create cloud infrastructure in a compliant manner.

If you would like to talk more about strategies for migrating to or creating cloud infrastructure contact us at:
Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net
www.twoearsonemouth.net
we listen first…

Are Containers the Forecast for Cloud?

(Image courtesy of kubernetes.io)

One of the most exciting and simultaneously challenging things about working in technology is the speed at which change occurs. The journey from cutting-edge technology to ubiquitous, commoditized product can happen in the blink of an eye. Now that the cloud has made its way into businesses of all sizes and types, the next related technology has emerged: containers. So it is fair to ask: are containers the forecast for cloud?

How we got to this port

VMware's introduction of virtualization was thought by many to be the predecessor of cloud as we know it today. This revolutionary technology allowed early adopters to reduce costs and enhance their IT agility through virtualization software. The days of a physical server for each application are over. Cloud technology has since evolved from virtualization software run within the enterprise to an outsourced service provided by major technology companies such as Amazon, Microsoft, and Google. Most recently, containers have emerged as a next step for the cloud, developed largely to suit the needs of software developers.
The difference between Virtual Machines (VMs) and Containers
A container is defined by Docker as a stand-alone executable software package that includes everything needed to run an application: code, runtime, system libraries, and settings. In many ways, that sounds like a VM. However, there are significant differences. Above the physical infrastructure, a VM environment uses a hypervisor to manage the VMs, and each VM has its own guest operating system such as Windows or Linux (see Image 1). A container instead shares the host operating system and the physical infrastructure, which support a container platform such as Docker; Docker then supports the binaries and libraries of the applications. Containers do a much better job of isolating applications from their surroundings, and this allows the enterprise to use the same container instance from development to production.
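One common way to capture that "everything travels with the application" idea is a Docker Compose file, sketched below. This is a hedged, minimal example; the image, port, and environment setting are illustrative placeholders rather than a real application.

```yaml
# Minimal sketch of a Docker Compose file: the application, its image, and
# its settings are defined together, so the same definition can move from a
# developer's laptop to production. Values here are illustrative only.
services:
  web:
    image: nginx:1.25            # the image bundles the app's binaries and libraries
    ports:
      - "8080:80"                # expose the container's port 80 on the host's port 8080
    environment:
      APP_ENV: development       # hypothetical setting that differs per environment
```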


(Image 1 and Image 2 courtesy of docker.com)

How can Containers be used in the Enterprise today?

Docker is currently the most popular company driving the movement toward container-based solutions in the enterprise. The Docker platform enables independence between applications and infrastructure, allowing applications to move from development to production quickly and seamlessly. By isolating software from its surroundings, it can help reduce conflicts between teams running different software on the same infrastructure. While containers were originally designed for software developers, they are becoming a valuable IT infrastructure solution for the enterprise.
One popular platform allowing the enterprise to benefit from container technology is Kubernetes. Kubernetes is an open-source system originally designed by Google and donated to the Cloud Native Computing Foundation (CNCF). Kubernetes assists with three primary functions in running containers: deployment, scaling, and monitoring. Finally, open-source companies such as Red Hat are developing products to help businesses of all types put these tools to work and simplify containers. OpenShift, designed by Red Hat, is a container application platform that has helped simplify Docker and Kubernetes for the business IT manager. The adoption of new technology, such as cloud computing, often takes time in the enterprise. Containers seem to be bucking that trend and have been accepted and implemented quickly in businesses of all types and sizes.