Getting Started with Amazon Web Services (AWS)


Amazon Web Services is a little-known division of the online retail giant, except for those of us in the business of IT. It's interesting to see that profits from AWS represented 56 percent of Amazon's total operating income, on $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may ponder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses already have some cloud presence in their IT infrastructure. The most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here with AWS is how businesses use it for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates the applications a business currently runs on its own servers to a hosted cloud provider. Businesses consider this to reduce hardware costs, become more agile with their IT and even improve security. To follow are the five simple steps we have developed to move to IaaS with AWS.


1)      Define the workloads to migrate- The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good idea, and where many businesses start, is a data backup solution. You can use your existing backup software or a solution from a current AWS partner; these include industry leaders such as Commvault and Veritas, and if you already use one of them, that is even better. Start small and you may even find you can operate in the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).
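As a concrete illustration of how small this first workload can be, below is a minimal sketch, assuming the boto3 library and AWS credentials are already configured; the bucket name and file name are hypothetical placeholders, not part of any real environment.

```python
# Minimal sketch: stage a nightly backup archive in S3 as a first, low-risk AWS workload.
# Assumes boto3 is installed and credentials are configured (e.g. via `aws configure`).
# The bucket name and file path are placeholders.
import boto3

s3 = boto3.client("s3")

bucket = "example-first-backup-bucket"   # hypothetical bucket name; must be globally unique
s3.create_bucket(Bucket=bucket)          # outside us-east-1, add CreateBucketConfiguration

# Upload a local backup archive; boto3 handles multipart upload automatically.
s3.upload_file("nightly-backup.tar.gz", bucket, "backups/nightly-backup.tar.gz")
```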

2)      Calculate cost and Return on Investment (ROI)- Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs tend to deliver the greatest savings as you first start your cloud presence. These costs include the server hardware used, if cloud isn't already utilized, as well as the time needed to assemble and configure it. When configuring a physical server, a hardware technician has to estimate the application's growth in order to size the server properly. With AWS it's pay as you go, renting only what you actually use. Other hard costs, such as power consumption and networking, will be saved as well. Many times, when starting small, it doesn't take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that the move makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.
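To make the hard-cost comparison concrete, here is a back-of-the-envelope sketch; every figure is a hypothetical placeholder to be replaced with your own hardware quotes and AWS estimates.

```python
# Back-of-the-envelope hard-cost comparison for a single workload over three years.
# All figures are hypothetical placeholders -- substitute your own quotes and estimates.
server_hardware   = 7_000       # physical server sized for three years of projected growth
setup_labor       = 1_500       # technician time to rack, cable, and configure
power_and_network = 40 * 36     # estimated monthly power/cooling/network over 36 months

on_prem_3yr = server_hardware + setup_labor + power_and_network

cloud_monthly = 160             # pay-as-you-go estimate for an instance sized to today's load
cloud_3yr     = cloud_monthly * 36

savings = on_prem_3yr - cloud_3yr
print(f"3-year on-prem cost : ${on_prem_3yr:,}")
print(f"3-year cloud cost   : ${cloud_3yr:,}")
print(f"Estimated savings   : ${savings:,} ({savings / cloud_3yr:.0%} of the cloud spend)")
```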

3)      Determine cloud compatibility- There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. It can be as simple as an application that requires a great deal of bandwidth or is sensitive to data latency. Additionally, industries that are subject to regulation, such as PCI DSS or HIPAA, are further incentivized to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI). This regulated data should be encrypted both in transit and at rest. This example of encryption wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for the years to come.
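For the encryption-at-rest example above, here is a minimal sketch of one way to enforce it on an S3 backup bucket, assuming boto3 and a bucket like the one in the earlier example; encryption in transit comes from the HTTPS endpoints the AWS SDKs use by default.

```python
# Sketch: turn on default server-side encryption for a backup bucket so objects
# are encrypted at rest. Assumes boto3/credentials are configured; the bucket
# name is a placeholder carried over from the earlier sketch.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-first-backup-bucket",   # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```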

4)      Determine how to migrate existing data to the cloud- Amazon AWS provides many ways to migrate data, most of which will not incur any additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. To follow are the most popular ways.

  a) Virtual Private Network- This common but secure transport method is available to move data via the internet that is not sensitive to latency. In most cases a separate virtual server running an AWS storage gateway will be used.
  b) Direct Connect- AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These pipes are typically either 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at the far end in an Amazon partner datacenter; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the datacenter.
  c) Import/Export- AWS allows customers to ship their own storage devices containing data to AWS to be migrated to their cloud instance. AWS publishes a list of compatible devices and will return the hardware when the migration is completed.
  d) Snowball- Snowball is similar to Import/Export except that Amazon provides the storage devices for this product. A Snowball can store up to 50 terabytes (TB) of data and can be combined in series with up to four other Snowballs. It also makes sense for sites with little or no internet connectivity. This unique device ships as is; there is no need to box it up. It can encrypt the data and has two 10 Gbps Ethernet ports for data transfer. Devices like the Snowball are vital for migrations with large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred, followed by a short sketch of how these estimates are derived. It is easy to see that large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
    Company's Internet Speed | Theoretical Days to Transfer 100 TB @ 80% Utilization | Amount of Data to Consider a Device
    T3 (44.73 Mbps)          | 269 days                                              | 2 TB or more
    100 Mbps                 | 120 days                                              | 5 TB or more
    1000 Mbps (GIG)          | 12 days                                               | 60 TB or more
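The sketch below shows roughly how the chart's figures can be reproduced; it assumes decimal terabytes and ignores protocol overhead, so the results land in the same ballpark as the chart rather than matching it exactly.

```python
# Rough transfer-time estimate behind the chart above. Assumes decimal terabytes
# and ignores protocol overhead, so results approximate the chart rather than
# matching it exactly.
def transfer_days(data_tb, link_mbps, utilization=0.8):
    bits = data_tb * 1e12 * 8                        # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86_400                          # seconds -> days

for label, mbps in [("T3 (44.73 Mbps)", 44.73), ("100 Mbps", 100), ("1000 Mbps (GIG)", 1000)]:
    print(f"{label:16s} ~{transfer_days(100, mbps):6.1f} days to move 100 TB")
```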

5)      Test and Monitor- Once your instance is set up and all the data is migrated, it's time to test. Best practices are to test the application in the most realistic setting possible. This means during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
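As a small illustration of the CPU-usage example above, here is a sketch of creating a CloudWatch alarm with boto3; the instance ID and SNS topic ARN are placeholders, and the thresholds are arbitrary starting points rather than recommendations.

```python
# Sketch: alarm when an instance's average CPU stays above 80% for two 5-minute periods.
# Assumes boto3/credentials are configured; instance ID and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="first-instance-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=2,        # two consecutive breaches before the alarm fires
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder topic
)
```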

To meet and learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.

 


A Buyer’s Guide to Cloud


Most businesses have discovered the value that cloud computing can bring to their IT operations. Some have discovered how it helps them meet regulatory compliance priorities by being in a SOC 2 audited data center. Others see a cost advantage as they approach a server refresh, when costly hardware needs to be replaced; they recognize the advantage of paying for this hardware as an operational expense as opposed to the large capital expense they would otherwise make every three years. No matter the business driver, the typical business person isn't sure where to start to find the right cloud provider. In this fast-paced and ever-changing technology environment, these IT managers may wonder: is there a buyer's guide to the cloud?

Where Exactly is the Cloud?…and Where is My Data?

Except for the cloud hyperscalers (Amazon AWS, Microsoft Azure, and Google), cloud providers create their product in a multi-tenant data center. A multi-tenant data center is a purpose-built facility designed specifically for the needs of business IT infrastructure and accommodates many businesses. These facilities are highly secured and often unknown to the public. Many offer additional colocation services that allow their customers to enter the center to manage their own servers. This is a primary difference from the hyperscalers, which offer no possibility of customers seeing the sites where their data resides. The hyperscale customer doesn't know where their data is beyond a region of the country or availability zone. The hyperscaler's customer must base their buying decision on trusting the security practices of the large technology companies Google, Amazon, and Microsoft. These are some of the same organizations that are currently under scrutiny from governments around the world for data privacy concerns. For most cloud seekers, the buying decision should therefore start at the multi-tenant data center, and a buyer's guide for the cloud should begin with the primary characteristics to evaluate in that data center, listed below.

  1. Location– Location is a multi-faceted consideration in a datacenter. First, the datacenter needs to be close to a highly available power grid and possibly alternate power companies. Similarly, the telecommunications bandwidth needs to be abundant, diverse and redundant. Finally, the proximity of the data center to its data users is crucial because speed matters. The closer the users are to the data, the less data latency, which means happier cloud users.
  2. Security– As is in all forms of IT today, security is paramount. It is important to review the data center’s security practices. This will include physical as well as technical security.
  3. People behind the data– The support staff at the datacenter creating and servicing your cloud instances can be the key to success. They should have the proper technical skills, be responsive, and be available around the clock.

Is My Cloud Infrastructure Portable?

The key technology that has enabled cloud computing is virtualization. Virtualization adds a software layer called a hypervisor that allows hardware resources to be shared, so multiple virtual servers (VMs) can be created on a single hardware server. Businesses have used virtualization for years, VMware and Microsoft Hyper-V being the most popular choices. If you are familiar with, and have some secondary or backup infrastructure on, the same hypervisor as your cloud provider, you can create a portable environment. A solution where VMs can be moved or replicated with relative ease avoids vendor lock-in. One primary criticism of the hyperscalers is that it can be easy to move data in but much more difficult to migrate the data out. This lack of portability is reinforced by the proprietary nature of their systems. One of the technologies the hyperscalers are beginning to use to become more portable is containers. Containers are similar to VMs; however, they don't require a guest operating system for each virtual server. This has had a limited effect on portability so far, because containers are a leading-edge technology and have not yet gained widespread acceptance.

What Kind of Commitment Do I Make?

The multi-tenant data center offering a virtualized cloud solution will include an implementation fee and require a commitment term with the contract. Its customized solution will require pre-implementation engineering time, so it will be looking to recoup those costs. Both fees are typically negotiable, and this is a good example of where an advisor like Two Ears One Mouth can guide you through the process and save you money.

The hyperscaler will require neither charge: it doesn't provide custom solutions, and because its platform is difficult to leave, a contract term isn't needed to keep your business. The hyperscaler will, however, offer a discount in exchange for a term commitment; these offerings are called reserved instances. With a reserved instance, your monthly recurring charge (MRC) is discounted for a one- or three-year commitment.
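To illustrate how a reserved-instance discount plays out over a term, here is a small sketch; the on-demand rate and discount percentage are hypothetical placeholders, not published pricing.

```python
# Hypothetical comparison of on-demand vs. reserved-instance spend over a term.
# The rate and discount below are illustrative placeholders, not published AWS pricing.
on_demand_mrc = 300.0          # monthly recurring charge at on-demand rates ($)
reserved_discount = 0.35       # assumed discount for a three-year reserved instance
term_months = 36

reserved_mrc = on_demand_mrc * (1 - reserved_discount)

print(f"On-demand over the term : ${on_demand_mrc * term_months:,.0f}")
print(f"Reserved over the term  : ${reserved_mrc * term_months:,.0f}")
print(f"Savings                 : ${(on_demand_mrc - reserved_mrc) * term_months:,.0f}")
```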

Finding the best cloud provider for your business is a time-consuming and difficult process. When considering a hyperscaler, the business user will receive no support or guidance. Working directly with a multi-tenant data center is more service-oriented but can consume a great deal of the cloud buyer's time. The cloud consumer can work with a single data center representative who states "we are the best" and trust them, or they can interview multiple data center representatives and create the ambiguous "apples to apples" spreadsheet of prospective vendors. Neither approach is effective.

At Two Ears One Mouth IT consulting we will listen to your needs first and then guide you through the process. With our expertise and market knowledge, you can be confident we will come to the right decision for your company's specific requirements. We save our customers time and money and provide our services at little or no cost to them!

If you would like assistance in selecting a cloud provider for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Edge Computing and the Cloud


image courtesy of openedgecomputing.org

This article is intended to be a simple introduction to what can be a complicated technical process. It usually helps to begin the articles I write involving a specific cloud technology with a definition. Edge computing's definition, like that of many other technologies, has evolved in a very short period of time. In the past, edge computing could describe the devices that connect the local area network (LAN) to the wide area network (WAN), such as firewalls and routers. Today's definitions of edge computing are more focused on the cloud and how to overcome some of the challenges of cloud computing. The definition I will use as a basis for this article is bringing compute and data storage resources as close as possible to the people and machines that require the data. Many times this will involve creating a hybrid environment to supplement a distant or relatively slow public cloud solution; the hybrid environment adds an alternate location with resources that can provide the faster response required.

Benefits of Edge Computing

The primary benefit of living on the edge is increased performance. This is most often defined in the networking world as reduced latency. Latency is the time it takes for data packets to be stored or retrieved. With the explosive growth of Machine Learning (ML), Machine-to-Machine communications (M2M) and Artificial Intelligence (AI), latency awareness has increased across the industry. A human working at a workstation can easily tolerate a data latency of 100-200 milliseconds (ms) without much frustration. If you're a gamer, you would like to see latency of 30 ms or less. Many machines and the applications they run are far less tolerant of data latency. The latency tolerance for machine-based applications can range from 10 ms down to effectively none, needing the data in real time. There are also applications humans interface with that are more latency sensitive, a primary example being voice communications. In the past decade, businesses' demand for Voice over Internet Protocol (VoIP) phone systems has grown, which has in turn driven the need for better-managed, low-latency networks. Although data transmission moves at the speed of light, distance still matters. As a result, we look to reduce latency for our applications by moving the data closer to the edge and its users. This can then produce the secondary benefit of reduced cost: the closer the data is to the applications, the fewer network resources are required to transmit it.
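To see why distance still matters even at the speed of light, here is a rough sketch of the round-trip-time floor imposed by fiber distance alone; it assumes propagation at roughly 200,000 km/s in fiber, ignores routing, queuing, and processing delays, and uses illustrative distances.

```python
# Rough lower bound on round-trip time from fiber distance alone.
# Assumes ~200,000 km/s propagation in fiber (about 2/3 the speed of light in a vacuum)
# and ignores routing, queuing, and processing delays, so real latency is higher.
def min_rtt_ms(distance_km, fiber_speed_km_s=200_000):
    return (2 * distance_km / fiber_speed_km_s) * 1000

for label, km in [("Same-metro edge site", 50),
                  ("Regional cloud zone", 800),
                  ("Cross-country region", 4_000)]:
    print(f"{label:22s} >= {min_rtt_ms(km):5.1f} ms round trip")
```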

Use Cases for Edge Computing

Content Delivery Networks (CDNs) are thought of as the predecessor of the edge computing solutions of today. CDNs are geographically distributed networks of content servers designed to deliver video or other web content to the end user. Edge computing seeks to take this to the next step by delivering all types of data even closer to real time.

Internet of Things (IoT) devices are a large part of what is driving the demand for edge computing. A common application involves a video surveillance system an organization would use for security. A large amount of data is captured, of which only a fraction is needed or will ever be accessed. An edge device or system collects all the data, stores it, and transfers only the data that is needed to a public cloud for authorized access.
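The pattern described above can be sketched in a few lines; the simulated camera feed and the "motion detected" test below are placeholders standing in for real capture hardware and analytics.

```python
# Sketch of the edge pattern above: retain everything locally, forward only the
# fraction of data that matters to the cloud. The camera feed and motion test are
# simulated placeholders, not a real surveillance integration.
import random

def camera_frames(count=100):
    """Simulated surveillance feed; in practice this would read from a camera."""
    for i in range(count):
        yield {"frame_id": i, "motion": random.random() < 0.05}

local_store, uploaded = [], []

for frame in camera_frames():
    local_store.append(frame)      # everything is retained at the edge
    if frame["motion"]:            # only frames of interest leave the site
        uploaded.append(frame)     # in practice: push to object storage in the public cloud

print(f"Stored at the edge: {len(local_store)} frames; sent to the cloud: {len(uploaded)}")
```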

Cellular networks and cell towers provide another use case for edge computing. Data is sent from subscriber phones to an edge system at the cell site. Some of this data is used immediately for call control and call processing. Most of the data, which is not time sensitive, is then transmitted to the cloud for later analysis.

As the technology for driverless cars advances and acceptance increases, a similar type of edge strategy will be used. With driverless applications, however, the edge devices will be located in the car itself because of the need for real-time responses.

These examples all demonstrate that the need for speed in our data applications is constantly increasing and will continue to grow. However fast our networks become, there will always be a need to hasten processing time for our critical applications.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Three Reasons to Use a Local Datacenter and Cloud Provider


photo courtesy of scripps.com

Now that the business cloud market has matured, it has become easier to recognize the leaders of the technology as well as the providers that make the most sense to partner with your business. Many times that can be a local datacenter and cloud provider. There are many large public cloud providers, and most agree on three leaders: Amazon Web Services (AWS), Microsoft Azure and Google Cloud. Google has been an uncharacteristic laggard in the space and seems to be struggling with the Business-to-Business (B2B) model. Clearly, a B2B strategy can evolve from a Business-to-Consumer (B2C) strategy; one need look no further than the public cloud leader, AWS.

Whether Google Cloud can succeed is unclear. What is clear, however, is that there will always be a place for large public cloud providers. They have fundamentally changed how IT in business is done. The mentality the public cloud helped to create, "go fast and break things", has been an important concept for the enterprise IT sandbox.

Where Does the Local Data Center Fit in?  

I also believe there will always be a place in business IT for the local data center and cloud provider. The local data center and cloud provider mentioned here is not an engineer putting a rack up in his basement, or even the IT service provider whose name you recognize hosted in another data center. The local data center I am referencing has been in business many years, most likely since before the technology of "cloud" was invented. My hometown of Cincinnati, Ohio has such a respected data center, 3z.net. 3z has been in business for over 25 years and offers its clients a 100% uptime Service Level Agreement (SLA). It has all the characteristics a business looks for in an organization it trusts its data with: backup generators, multiple layers of security, and SOC 2 compliance. It uses only top-tier telecom providers for bandwidth, and its cloud infrastructure is built on technology leaders such as Cisco and VMware. Most of all, 3z is easy to do business with.

To follow are three primary reasons to use a local datacenter.

Known and Predictable Cost-

The local data center's cloud costs may appear more expensive in the initial evaluation; however, they are often lower in the long run. There are many reasons for this, but most often it comes down to the rate charged for transmitting and receiving data to and from your cloud. Large public clouds charge per gigabyte of outbound data. While it is pennies per gigabyte, it can add up quickly, and with per-gigabyte charges the business doesn't know all of its costs up front. The local datacenter will typically charge a flat monthly bandwidth fee that includes all the data coming and going, creating an "all you can eat" model and a fixed cost.
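A quick sketch makes the difference concrete; the monthly egress volume, per-gigabyte rate, and flat fee below are hypothetical placeholders, not quoted prices from any provider.

```python
# Illustrative comparison of metered egress vs. a flat bandwidth fee.
# The egress volume, per-gigabyte rate, and flat fee are placeholders, not quoted prices.
monthly_egress_gb = 12_000          # estimated outbound data per month (GB)
per_gb_rate = 0.09                  # hypothetical public-cloud egress rate ($/GB)
flat_bandwidth_fee = 600.0          # hypothetical local data center flat monthly fee ($)

metered_cost = monthly_egress_gb * per_gb_rate

print(f"Metered egress cost : ${metered_cost:,.2f}/month (varies with usage)")
print(f"Flat bandwidth fee  : ${flat_bandwidth_fee:,.2f}/month (fixed)")
```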

Customized and Increased Support for Applications-

Many of the applications the enterprise will run in the cloud may require customization and additional support from the cloud provider. A good example of this is Disaster Recovery (DR), or Disaster Recovery as a Service (DRaaS). DRaaS requires a higher level of support for the enterprise in the planning phases, as most IT leaders have not been exposed to DR best practices. Additionally, the IT leaders in the enterprise want the assurance of a trusted partner to rely on in the unlikely event they declare a DR emergency. At many of the local cloud providers and datacenters I work with, the president of the datacenter will happily provide his private cell phone number for assistance.

Known and Defined Security and Compliance-

Most enterprise leaders feel a certain assurance in knowing exactly where their data resides. This may never change, at least not for an IT auditor. Knowing the location and state of your data also helps the enterprise "check the boxes" for regulatory compliance. Many times the SOC certifications are not enough; more specific details are required. 3z in Cincinnati will encrypt all of your data at rest as a matter of standard process. Additional services like these can ease the IT leader's mind when the time for an audit comes.

It is my opinion that the established local datacenter will survive and flourish. However, it may need to adjust to stay relevant and competitive with the large public cloud providers. For example, it will need to emulate some of the popular public cloud offerings, such as an easy-to-use self-service portal and a "try it for free" cloud offering. I believe the local datacenter's personalized approach is important, and I support 3z and its competitive peers as they work to prosper in the future.

If you would like to learn more or visit 3z please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net