Getting Started with Amazon Web Services (AWS)


Amazon Web Services is a little-known division of the online retail giant, except for those of us in the business of IT. It's interesting to see that profits from AWS represented 56 percent of Amazon's total operating income, with $2.57 billion in revenue. While AWS amounted to only about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may wonder what it takes to get started with Amazon Web Services.

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses have had some cloud presence in their IT infrastructure. The most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here with AWS is how businesses use it for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates the applications a business currently runs on its own servers to a hosted cloud provider. Businesses consider this move to reduce hardware costs, become more agile with their IT and even improve security. What follows are the five simple steps we have developed for moving to IaaS with AWS.


1)      Define the workloads to migrate- The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good place to start, and where many businesses do, is a data backup solution. You can use your existing backup software or one from a current AWS partner; these include industry leaders such as Commvault and Veritas, and if you already use their solutions, that is even better. Start small and you may even find you can operate within the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).
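As a rough illustration of how small such a first workload can be, here is a minimal sketch in Python using the boto3 SDK (an assumption; any AWS SDK or backup tool would do) that creates a storage bucket and uploads a backup archive. The bucket name and file path are hypothetical placeholders.

```python
# A minimal sketch of a first backup workload on AWS, assuming Python 3 and
# the boto3 SDK (pip install boto3) with AWS credentials already configured.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an example choice

bucket = "example-company-backups"  # hypothetical name; S3 bucket names must be globally unique
s3.create_bucket(Bucket=bucket)

# Upload a nightly backup archive (local path and key are placeholders).
s3.upload_file("/backups/nightly.tar.gz", bucket, "nightly/nightly.tar.gz")
print("Backup uploaded to s3://%s" % bucket)
```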

2)      Calculate cost and Return on Investment (ROI)- Of the two primary types of costs used to calculate ROI, hard costs and soft costs, hard costs tend to deliver the greatest savings as you first start your cloud presence. These costs include the server hardware itself, if cloud isn't already utilized, as well as the time needed to assemble and configure it. When configuring a physical server, a hardware technician has to estimate the application's growth in order to size the server properly. With AWS it's pay as you go: you rent only what you actually use. Other hard costs, such as power consumption and networking, are saved as well. Often, when starting small, it doesn't take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that the move makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.
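The back-of-the-envelope comparison below shows the kind of hard-cost math involved. Every figure is an assumption made for the sketch, not actual AWS or hardware pricing.

```python
# Illustrative hard-cost comparison for a small backup workload.
# All numbers are assumptions for the sketch, not actual AWS or vendor pricing.

server_hardware = 4000.00        # up-front cost of an on-premises server (assumed)
setup_labor_hours = 16           # assemble, rack, and configure (assumed)
labor_rate = 75.00               # hourly rate for a hardware technician (assumed)
power_and_network_yearly = 600   # power, cooling, and switch ports per year (assumed)

on_prem_3yr = server_hardware + setup_labor_hours * labor_rate + 3 * power_and_network_yearly

aws_monthly = 55.00              # pay-as-you-go estimate for equivalent capacity (assumed)
aws_3yr = 36 * aws_monthly

print(f"3-year on-premises hard cost: ${on_prem_3yr:,.2f}")
print(f"3-year AWS hard cost:         ${aws_3yr:,.2f}")
print(f"Estimated hard-cost savings:  ${on_prem_3yr - aws_3yr:,.2f}")
```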

3)      Determine cloud compatibility- There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. It can be as simple as an application that requires a great deal of bandwidth or is sensitive to data latency. Additionally, industries that are subject to regulation, such as PCI DSS or HIPAA, have further incentive to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI); this regulated data should be encrypted both in transit and at rest. Encryption of this kind wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for the years to come.
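As a hedged example of what "encrypted at rest" can look like in practice, the boto3 sketch below turns on default server-side encryption for the hypothetical backup bucket from step 1; it is one common approach, not the only one.

```python
# Sketch: enable default server-side encryption (AES-256) on an S3 bucket,
# one common way to cover the "encrypted at rest" requirement.
# Assumes boto3 and the hypothetical bucket from the earlier example.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-company-backups",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
# Data in transit is handled separately, e.g. by using HTTPS endpoints for all transfers.
```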

4)      Determine how to migrate existing data to the cloud- AWS provides many ways to migrate data, most of which will not incur any additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. The most popular methods follow.

a)      Virtual Private Network- This common but secure transport method is available to move data that is not sensitive to latency over the internet. In most cases a separate virtual server running an AWS storage gateway will be used.
b)      Direct Connect- AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These circuits are typically either 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at an Amazon partner datacenter; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the datacenter.
c)      Import/Export- AWS allows customers to ship their own storage devices containing data to AWS to be migrated to their cloud instance. AWS publishes a list of compatible devices and will return the hardware when the migration is completed.
d)      Snowball- Snowball is similar to Import/Export except that Amazon provides the storage device. A Snowball can store up to 50 Terabytes (TB) of data and can be combined in series with up to four other Snowballs. It also makes sense for sites with little or no internet connectivity. This unique device is built to ship as is; there is no need to box it up. It can encrypt the data and has two 10 Gbps Ethernet ports for data transfer. Devices like the Snowball are vital for migrations with large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred, followed by a short calculation sketch; it is easy to see that large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
Company's Internet Speed | Theoretical days to transfer 100 TB @ 80% utilization | Amount of data to consider a device
T3 (44.73 Mbps)          | 269 days                                              | 2 TB or more
100 Mbps                 | 120 days                                              | 5 TB or more
1000 Mbps (GIG)          | 12 days                                               | 60 TB or more
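The rough math behind the chart can be reproduced with a few lines of Python. The 80 percent utilization figure comes from the table above; the exact assumptions behind the published numbers aren't stated, so this sketch lands close to, but not exactly on, the figures in the chart.

```python
# Sketch: approximate days to transfer a given amount of data over a link,
# using the same 80% utilization assumption as the chart above.

def transfer_days(data_tb, link_mbps, utilization=0.80):
    data_bits = data_tb * 8 * 10**12           # terabytes (decimal) to bits
    effective_bps = link_mbps * 10**6 * utilization
    return data_bits / effective_bps / 86400   # seconds in a day

for label, mbps in [("T3 (44.73 Mbps)", 44.73), ("100 Mbps", 100), ("1000 Mbps (GIG)", 1000)]:
    print(f"{label}: ~{transfer_days(100, mbps):.0f} days for 100 TB")
```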

5)      Test and Monitor- Once your instance is set up and all the data is migrated, it's time to test. Best practice is to test the application in the most realistic setting possible: during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
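To make the CloudWatch example concrete, the boto3 sketch below creates an illustrative alarm on EC2 CPU utilization. The instance ID and SNS topic ARN are placeholders you would replace with your own.

```python
# Sketch: a CloudWatch alarm that notifies when an EC2 instance's average CPU
# exceeds 80% for two consecutive 5-minute periods. Assumes boto3; the
# instance ID and SNS topic below are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="backup-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```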

To meet and learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.

 

Edge Computing and the Cloud

[Image: edge computing architecture, courtesy of openedgecomputing.org]

This article is intended to be a simple introduction to what can be a complicated technical process. It usually helps to begin the articles I write about a specific cloud technology with a definition. Edge computing's definition, like that of many other technologies, has evolved in a very short period of time. In the past, edge computing could describe devices that connect the local area network (LAN) to the wide area network (WAN), such as firewalls and routers. Today's definitions of edge computing are more focused on the cloud and how to overcome some of the challenges of cloud computing. The definition I will use as a basis for this article is: bringing compute and data storage resources as close as possible to the people and machines that require the data. Many times this will include creating a hybrid environment to supplement a distant or relatively slow public cloud solution. The hybrid environment will consist of an alternate location with resources that can provide the faster response required.

Benefits of Edge Computing

The primary benefit of living on the edge is increased performance, most often described in the networking world as reduced latency. Latency is the time it takes for data packets to be stored or retrieved. With the growth of Machine Learning (ML), Machine-to-Machine communications (M2M) and Artificial Intelligence (AI), awareness of latency has increased across the industry. A human working at a workstation can easily tolerate a data latency of 100-200 milliseconds (ms) without much frustration. If you're a gamer, you would like to see latency at 30 ms or less. Many machines and the applications they run are far less tolerant of data latency. The latency tolerance for machine-based applications can range from 10 ms down to effectively none, needing the data in real time. There are applications humans interface with that are more latency sensitive, a primary example being voice communications. In the past decade, businesses' demand for Voice over Internet Protocol (VoIP) phone systems has grown, which has in turn driven the need for better managed, low-latency networks. Although data transmissions move at the speed of light, distance still matters. As a result, we look to reduce latency for our applications by moving the data closer to the edge and its users. This can then produce the secondary benefit of reduced cost: the closer the data is to the applications, the fewer network resources are required to transmit it.
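A quick way to put those millisecond figures in context is to measure round-trip latency yourself. The Python sketch below times the TCP handshake to a host; the hostname is an arbitrary example, and real application latency will also include processing and transfer time.

```python
# Sketch: measure round-trip TCP connection latency to a host, to put the
# millisecond figures above in context. The hostname is an arbitrary example.
import socket
import time

def connect_latency_ms(host, port=443, samples=5):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # measure only the TCP handshake, then close
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

print(f"average latency: {connect_latency_ms('example.com'):.1f} ms")
```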

Use Cases for Edge Computing

Content Delivery Networks (CDNs) are thought of as the predecessor of today's edge computing solutions. CDNs are geographically distributed networks of content servers designed to deliver video or other web content to the end user. Edge Computing seeks to take this to the next step by delivering all types of data even closer to real time.

Internet of Things (IoT) devices are a large part of what is driving the demand for Edge Computing. A common application involves the video surveillance system an organization uses for security. A large amount of data is stored, of which only a fraction is needed or will ever be accessed. An edge device or system collects all the data, stores it, and transfers only the data that is needed to a public cloud for authorized access.
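The pattern is simple to sketch: keep everything local and push only the interesting events to the cloud. The Python sketch below is purely illustrative; the motion-detection routine, frame source, and bucket name are hypothetical placeholders rather than any particular vendor's API.

```python
# Illustrative sketch of the edge pattern described above: store all frames
# locally on the edge device and upload only the flagged ones to the cloud.
# The motion detector and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
CLOUD_BUCKET = "example-surveillance-events"  # placeholder

def has_motion(frame_bytes):
    # Placeholder for a real motion-detection routine running on the edge device.
    return False

def process_frame(frame_id, frame_bytes, local_store):
    local_store[frame_id] = frame_bytes  # everything stays at the edge
    if has_motion(frame_bytes):          # only a fraction goes to the public cloud
        s3.put_object(Bucket=CLOUD_BUCKET, Key=f"events/{frame_id}.jpg", Body=frame_bytes)
```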

Cellular networks and cell towers provide another use case for Edge Computing. Data for analysis is sent from subscriber phones to an edge system at the cell site. Some of this data is used immediately for call control and call processing. Most of the data, which is not time sensitive, is then transmitted to the cloud for later analysis.

As the technology and acceptance of driverless cars increase, a similar type of edge strategy will be used. With driverless applications, however, the edge devices will be located in the car itself because of the need for real-time responses.

These examples all demonstrate that the need for speed in our data applications is constantly increasing and will continue to grow. As fast as our networks become, there will always be a need to hasten processing time for our critical applications.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…