
5G or Not 5G: What, When, and How It Affects IT and Cloud

 

Before entering the cloud and IT business, I spent more than a decade working with wireless technologies for business. During that time, I saw the advent of data on cell phones and the transitions between the generations of data service that followed. Second-generation (2G) networks brought very rudimentary data to cell phones, such as text messaging. 3G brought the internet and its many applications, such as mobile email. 4G brought the high-speed internet we use today, offering instant access to applications such as real-time video. With each transition, corporate marketing and spin became more extravagant, widening the gap between a product's introduction and its practical delivery. Now comes 5G, and I expect this trend to continue. Although wireless carriers already advertise 5G as available, the products are not fully developed for practical use and are likely years away from business applications.

What is 5G and who will provide it?

The latest generation of wireless data, 5G, will be provided by the same carriers that delivered wireless service in the past: AT&T, Verizon, and Sprint. Although the primary standards for 5G have been set, there is still much to be developed, and the technology will likely be introduced as different versions. This is similar to 4G when it first launched with its distinct alternatives of WiMAX and LTE. 5G has already been split into different delivery types: 5G, 5GE, and 5 GHz. Verizon's first introduction of 5G is designed for the home and small office, while AT&T is focused on mobile devices in very limited markets. Most believe there will also be fixed wireless versions for point-to-point circuits for business. At this point, it isn't clear which versions each provider will offer as 5G matures and becomes universally delivered.

The technology of 5G

As with all previous generations in the evolution of wireless data, 5G offers greater speed as its primary driver for acceptance. What may slow widespread deployment of 5G is the fact that 4G technology continues to improve and provide greater speeds for users. However, the wireless spectrum available to 4G providers is running short, so the transition to 5G is imminent. Most 5G service will be provided on an alternate swath of wireless spectrum, above 6 GHz, that has not been offered to wireless consumers previously. This new spectrum will offer much greater capacity and speed but won't come without its own challenges. To achieve these higher speeds the carriers will need to use much higher frequency transmissions called millimeter waves. Millimeter waves cannot penetrate buildings, foliage, and weather as well as lower frequencies can. To overcome this, wireless carriers will need to implement additional, smaller cell sites called microcells. Many wireless carriers have already deployed microcells to complement the macrocells used in previous offerings of wireless service. Building out the additional data network and cell sites such as microcells is expensive and time-consuming. This will add to the delay of a fully implemented 5G offering from the carriers.

Business advantages of 5G

To say that one of the advantages of 5G is greater data speed would be true, but there is much more to it for business applications. The following are the primary speed-related advantages that 5G will provide for business cloud computing.

  • Lower latency – 5G networks will greatly decrease latency, the time it takes a data packet to travel across the network. This will benefit many business applications such as voice, video, and artificial intelligence (AI).
  • Multiple connections – The base stations, or cell sites, of 5G will handle many more simultaneous connections than 4G. This will increase speed for users and capacity for providers.
  • Full duplex transmission – 5G networks can transmit and receive data simultaneously. This full duplex transmission increases the speed and reliability of wireless connectivity, enabling new applications and enhancing existing ones.

Cloud and business solutions enhanced by 5G

It is difficult to say exactly how businesses will benefit from 5G service since it is still being developed. However, the advantages listed above lend themselves to several applications which are sure to be enhanced for business.

The increased speeds and decreased latency 5G offers will expand options and availability for disaster recovery (DR) and data network backups for businesses. When speeds previously available to business only via wireline can be delivered without wires, business continuity will improve. Many business outages today are caused by accidental cable cuts and power outages that wireless 5G can eliminate. It is also possible that wireless point-to-point circuits could replace traditionally wired circuits for a business's primary data and internet service.

The growing number of Internet of Things (IoT) applications will also be enhanced by 5G. The increased speed and connection capacity will allow this ubiquitous technology to continue to grow. Similarly, the trend toward more and faster edge computing connectivity will benefit. This will enhance applications such as autonomous vehicles that require instant connectivity to networks and other vehicles. Content delivery networks, like the ones used to deliver Netflix, will be able to deliver their products faster and more reliably. These are just a few examples of technologies that are already demanding 5G's advantages and will expedite its availability.

While the technology to deliver 5G is mostly complete, the timing of widespread implementation for business is still unclear. This is attributable in part to improving 4G speeds, which continue to satisfy most of today's consumer needs. More importantly, new technologies are not accepted in the marketplace simply because the technology is ready, but because business applications demand them. 5G will be driven by many business applications, but widespread acceptance won't occur for at least another two years. If you want to consult with a partner that has expertise in all aspects of telecom, wireless, and cloud technologies, give us a call and we will be glad to find the right solution for your business.

Contact @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

Colocation’s Relevance Today for Business

The past several years have seen a "cloud-first" strategy evolve for business IT infrastructure. The inclination toward an entirely on-premises infrastructure has decreased as hybrid cloud solutions have expanded. While off-premises solutions are the up-and-coming choice for many businesses, on-premises infrastructure continues to be used by most companies. As businesses look at their IT strategies for the future, they should explore alternatives to the cloud and consider the reasons why cloud may or may not be the best fit for all their applications. Businesses have seen the value of taking their data off site for years without handing it over to a cloud provider. The primary alternative has been colocation (colo). Many have seen a renewed interest in colo with the growth of hybrid cloud, as the large public cloud providers have implemented changes to their products to promote hybrid cloud architectures. Here I will review these changes and discuss colocation's relevance today for business.

Colo defined and best use cases

Colo allows organizations to continue to own and maintain control of their IT hardware while taking advantage of an off-premises solution that offers increased uptime and security. As part of the colo agreement, the data center offers space, power, and bandwidth to its clients in a secured and compliant area within its facility. Although data centers are some of the most secure places in the world, they can still offer clients access to their IT resources 24 hours a day, 365 days a year. They accomplish this through multiple layers of security, including security guards, video monitoring, and biometrics. This ability for colo customers to access and touch their own equipment provides a psychological advantage for many businesses.

Another advantage of colo is power, which can be offered with options including feeds from multiple power utilities. Redundant power offers additional safeguards against an IT outage. This type of power configuration is not available in most businesses' office buildings. A data center can also offer power at a reduced rate because of its purchasing power with the utility. With more power comes more cooling requirements; the data center provides better cooling as well, again with spare resources to assure it is always available. Finally, bandwidth is a commodity the data center buys in bulk and offers to its colo customers at a savings.

Regulatory compliance is another important advantage driving users to a colo solution. Colo provides its customers instant access to an audited data center, such as one with SOC 2 compliance. Colo has long been believed to offer more security and compliance than on-premises or cloud.

Considerations before moving to colo

The primary items to consider before moving to colo in a data center relate to the space and power components of the solution. Colocation space is typically offered by the data center provider by the rack or by a private cage consisting of multiple racks. In either offering, a prospective buyer should consider the requirements for expansion of their infrastructure. In a cage, a customer is typically offered "reserved space" within that cage, which can be purchased up front and activated when required. When the customer doesn't require the segregation of a cage, they will purchase racks that sit adjacent to other business customers, which can make expansion more complex. Customer-focused data centers allow a business to reserve adjacent racks without activating the power, and those reserved racks are priced at a discounted rate. It is important to have contiguous space in a data center colo, so consider additional space for growth with the initial purchase.

Regarding power, make sure you research the amperage and voltage requirements for your infrastructure and its potential for growth. Data centers have many diverse power offerings, so consult with an expert like TEOM about the requirements of your IT equipment.

Today’s evolving advantages of colo

Most of today's business IT infrastructures, whether on-premises or colocation, will utilize some type of cloud presence for required applications such as disaster recovery. The byproduct of this growing trend is hybrid cloud implementation. Like the term cloud, hybrid cloud can have many definitions. For our purposes here, hybrid cloud will be defined as resources complementing your primary on-premises infrastructure with a cloud solution. The large public cloud providers most often used by businesses have expanded their presence beyond their own data centers to occupy cages of colo in large multi-tenant data centers. This lets the cloud providers get physically closer to their customers, which creates an advantage for a business user in that data center needing to implement a hybrid cloud solution.

Previously, customers of the large public clouds have relied on either the internet for inexpensive connectivity or expensive dedicated telecom circuits to connect "directly" to their cloud provider. Direct connections have been prohibitively expensive for most businesses because of the high cost of the telecom circuits required to reach the public cloud. Some have justified the high cost of direct connect due to increased security and the greatly reduced cost of data egress. Egress charges are the cost to move data from the public cloud back to the business. Typical egress charges for public cloud providers can be as much as $0.14 per gigabyte. When direct connections are established, egress charges drop to as low as $0.02 per gigabyte as of the time this article was written. Because of this, direct connect can save users thousands of dollars while greatly increasing security. When the public cloud provider is in the same data center as the colo customer, a direct connection can take the form of a "cross connect" within the data center. This common data center service is a fraction of the cost of the telecom circuits mentioned previously. This enormous economic benefit can be multiplied if the business connects to multiple public clouds (multi-cloud).
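To put rough numbers on that difference, here is a minimal sketch, assuming the per-gigabyte rates quoted above (which vary by provider, region, and volume tier) and an illustrative 5 TB of monthly egress:

```python
# Rough egress cost comparison. The rates are the illustrative figures
# quoted in this article; actual pricing varies by provider and tier.
INTERNET_EGRESS_PER_GB = 0.14   # $/GB over the public internet
DIRECT_CONNECT_PER_GB = 0.02    # $/GB over a direct connection / cross connect

def monthly_egress_cost(gb_per_month: float, rate_per_gb: float) -> float:
    return gb_per_month * rate_per_gb

gb = 5_000  # example volume: roughly 5 TB leaving the cloud each month
internet = monthly_egress_cost(gb, INTERNET_EGRESS_PER_GB)
direct = monthly_egress_cost(gb, DIRECT_CONNECT_PER_GB)
print(f"Internet egress:       ${internet:,.2f}/month")
print(f"Direct connect egress: ${direct:,.2f}/month")
print(f"Estimated savings:     ${internet - direct:,.2f}/month")
```

At that volume the difference is roughly $600 per month, which can offset much or all of the recurring cross-connect fee.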

A more recent trend has the large public cloud providers creating a hybrid cloud on the customer's premises. Microsoft's solution, called Azure Stack, was the first introduced, and AWS now has a competing product called Outposts. These products, to be covered in a future article, put the hardware and cloud infrastructure of these providers on the customer's site. This is additional validation that hybrid is here to stay.

Colo remains relevant today for many of the same reasons it has been chosen for years: availability, security, and compliance. As the large public cloud providers expand outside of their own data centers to get closer to their customers, new advantages for businesses have emerged. When a fiber cross connection in a common data center can be used to direct connect to a public cloud provider, enormous benefits are realized. Ironically, as the public cloud providers grow, colocation has found new life and will remain relevant for the foreseeable future.

If your business wants to stay competitive in this ever-changing environment

Contact us @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

Disaster Recovery, Which Option is Right for Your Business?

 

An active-active disaster recovery solution performs the quickest recovery.

 

In a recent article, I described how an outsourced, or hosted, provider can deliver Disaster Recovery (DR) as a Service. In this article, I would like to look at the advantages a business can achieve by creating its own disaster recovery and answer the question: which option is right for your business? First, a reminder that DR is not a backup of data but rather a replication of data to ensure its availability and business continuity. DR solutions that are created using the business's own IT infrastructure can be divided into two primary categories, active-active and active-passive. Since active-passive was covered in the previous blog, I will focus on active-active here. While both attempt to achieve the same goal, keeping the business's IT systems up at all times, they are created and maintained differently. Because of the unique nature of DR solutions, it is generally accepted to engage an expert such as Two Ears One Mouth IT Consulting to determine the right DR solution for an organization. I will compare the two DR strategies on complexity, cost, and the most common DR metrics: Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Active-Active Disaster Recovery

An active-active, or stretched clustering, configuration is the deployment of a second, identical, live infrastructure that continually replicates with the first site. This framework will typically consist of only two sites. Because of the simplicity of the concept and the speed and ease with which recovery can occur, it is usually the client's first choice. Ironically, after all the pertinent information is uncovered, it is rarely selected by the small and medium-sized business seeking disaster recovery.

The two primary reasons it isn't chosen by most businesses are its cost and its requirement for high bandwidth with low latency. Its high initial cost is due to the purchase of a duplicate set of the primary site's hardware infrastructure. In an active-active scenario, either site can handle the entire workload for the business. Every time a request is made in a software application at one site, it must be written to the other site immediately before the request completes. An active-active solution requires a high level of connectivity, or bandwidth, between sites, such as dedicated fiber optics. Even with dedicated (dark) fiber between sites, data latency is still a consideration. Best practices dictate that the distance between active-active sites should be less than 100 miles. These two requirements eliminate many prospects from considering an active-active solution.

Advantages of Active-Active

Now I will describe the advantages of an active-active configuration and the businesses that can benefit from it. There are many benefits to this configuration, as it is a remarkable approach to business continuity. After seeing the upfront cost, many businesses need to determine whether it is a nice-to-have or a need-to-have solution. To follow are some of the benefits of an active-active DR solution.

1)      No “big red” button to push-

One of the most difficult processes of any DR solution is knowing how and when to declare an IT outage a disaster and quickly executing the DR plan. Many solutions require a detailed process and action plan that involves the entire IT team. Invoking the DR plan is much simpler in an active-active configuration because all workloads simply transfer to one of the continually running and replicated systems. In addition, it requires very little testing and can be engaged automatically with minimal human intervention.

2)     Cross-site load balancing-

Although it can be simple to transition to DR mode, an active-active DR configuration is very complex to design and create. Some of the factors that make it difficult to create are the very same ones that provide additional benefits beyond DR. One such benefit is load balancing of the data transmitted between sites and off site. Since both sites are always actively processing data, the solution can be designed so that any process runs at the optimal site available at that time. This can eliminate the challenge of slow data responses and maximize bandwidth availability for the business.

3)      Less management means less cost-

The argument can be made that the active-active DR solution is the more cost-effective over the long term. The time and technical resources needed to test, maintain, and initiate an active-passive DR solution are much greater than for active-active. Additionally, in analyzing a DR solution, most don't consider the operational task of "falling back" to normal mode after DR has been implemented; this can be more difficult than the original DR transition. Although expensive initially, the active-active solution has very little ongoing cost.

Active-Passive Disaster Recovery

An active-passive DR solution creates an environment that is not intended to be live for IT production until a disaster is declared by the business. The infrastructure is oversubscribed for resources and stays dormant until needed. This creates large initial cost savings on hardware. Many times, a business will repurpose its aged IT equipment and servers for the DR site to realize even greater financial benefit.

One of the most popular active-passive software platforms for disaster recovery today is Zerto. Zerto creates DR at the hypervisor level of the virtualized environment. This allows for a quick and complete transition to the DR resources when an outage occurs. Zerto works with the most popular hypervisors, such as VMware and Microsoft's Hyper-V. An active-passive solution such as Zerto can also be more customized: a business may select only the small percentage of its application servers that are critical to the business and enable the DR solution for those applications only. An active-passive solution is more accommodating to multi-site or multi-cloud business DR. Active-passive solutions are also used to provide Disaster Recovery as a Service (DRaaS) from data center and cloud providers.

When a business looks to create a DR solution, it has three primary options: active-active, active-passive, and DRaaS. It is not a quick or simple decision as to which works best for your business. You need a trusted advisor like Two Ears One Mouth IT Consulting to investigate your IT environment, understand your budget, and guide you down the path to assured business continuity.

If your business is unique and requires a custom DR solution for IT Support

Contact us @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

   

 

Getting Started with Amazon Web Services (AWS)


Amazon Web Services is a little-known division of the online retail giant, except to those of us in the business of IT. It's interesting to see that AWS's profits represented 56 percent of Amazon's total operating income, with $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may ponder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses already have some cloud presence in their IT infrastructure. The most common, Software as a Service (SaaS), has led the hypergrowth of the cloud. What I will consider here with AWS is how businesses use it for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates a business's applications from its own servers to a hosted cloud provider. Businesses consider this to reduce hardware cost, become more agile with their IT, and even improve security. To follow are the five simple steps we have developed to move to IaaS with AWS.


1)      Define the workloads to migrate- The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good starting point, and where many businesses begin, is a data backup solution. You can use your existing backup software or one that currently partners with AWS; industry leaders such as Commvault and Veritas fall into this category, and if you already use one of these solutions, that is even better. Start small and you may even find you can operate in the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).

2)      Calculate cost and Return on Investment (ROI)- Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs seem to offer the greatest savings as you first start your cloud presence. These costs include the server hardware used, if cloud isn't already utilized, as well as the time needed to assemble and configure it. When configuring a physical hardware server, a hardware technician has to estimate the application's growth in order to size the server properly. With AWS it's pay as you go: you rent only what you actually use. Other hard costs such as power consumption and networking will be saved as well. Many times, when starting small, it doesn't take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that the move makes sense; a back-of-the-envelope comparison like the sketch following this step is often enough. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.
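As a back-of-the-envelope illustration of the hard-cost comparison described in this step, the sketch below uses hypothetical placeholder figures (not AWS pricing); substitute your own hardware quotes and instance estimates.

```python
# Simple hard-cost comparison over a 3-year server refresh cycle.
# All figures are hypothetical placeholders, not vendor pricing.
server_hardware = 6_000          # on-premises server purchase ($)
setup_labor = 1_500              # technician time to assemble and configure ($)
power_and_networking = 40 * 36   # estimated $/month over 36 months
on_prem_total = server_hardware + setup_labor + power_and_networking

cloud_monthly = 150              # estimated pay-as-you-go cost of an equivalent instance ($/month)
cloud_total = cloud_monthly * 36

print(f"On-premises, 3 years: ${on_prem_total:,}")
print(f"Cloud IaaS, 3 years:  ${cloud_total:,}")
print(f"Hard-cost difference: ${on_prem_total - cloud_total:,}")
```

Soft costs such as agility and customer satisfaction are harder to quantify, which is why a rough hard-cost delta is usually the first sanity check.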

3)      Determine cloud compatibility- There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. It can be as simple as an application that demands a premium of bandwidth or is sensitive to data latency. Additionally, industries that are subject to regulation, such as PCI DSS or HIPAA, are further incentivized to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI). This regulated data should be encrypted both in transit and at rest. This encryption example wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for the years to come.

4)      Determine how to migrate existing data to the cloud- Amazon AWS provides many ways to migrate data, most of which will not incur any additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. To follow are the most popular ways.

  a) Virtual Private Network- This common but secure transport method is available to move data via the internet when the data is not sensitive to latency. In most cases a separate virtual server acting as an AWS storage gateway will be used.
  b) Direct Connect- AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These pipes are typically either 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at an Amazon partner data center; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the data center.
  c) Import/Export- AWS will allow customers to ship their own storage devices containing data to AWS to be migrated to their cloud instance. AWS publishes a list of compatible devices and will return the hardware when the migration is completed.
  d) Snowball- Snowball is similar to Import/Export except that Amazon provides the storage devices. A Snowball can store up to 50 terabytes (TB) of data and can be combined in series with up to 4 other Snowballs. It also makes sense for sites with little or no internet connectivity. This unique device ships as is; there is no need to box it up. It can encrypt the data and has two 10-gigabit Ethernet ports for data transfer. Devices like the Snowball are vital for migrations with large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred; a rough calculation of how such figures are derived follows the chart. It is easy to see that large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
    Company's Internet Speed   Theoretical days to transfer 100 TB @ 80% utilization   Amount of data at which to consider a device
    T3 (44.73 Mbps)            269 days                                                2 TB or more
    100 Mbps                   120 days                                                5 TB or more
    1000 Mbps (GIG)            12 days                                                 60 TB or more
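The figures in the chart can be reproduced, at least to a rough order of magnitude, with simple arithmetic. The sketch below assumes a steady 80% link utilization and ignores protocol overhead, so its results differ slightly from the chart above.

```python
# Approximate transfer time for a data set over a given link, assuming
# steady 80% utilization and ignoring protocol overhead and retries.
def transfer_days(data_tb: float, link_mbps: float, utilization: float = 0.80) -> float:
    bits = data_tb * 8 * 10**12              # decimal terabytes -> bits
    effective_bps = link_mbps * 10**6 * utilization
    return bits / effective_bps / 86_400     # seconds -> days

for speed in (44.73, 100, 1000):             # T3, 100 Mbps, 1 Gbps
    print(f"{speed:>7} Mbps: ~{transfer_days(100, speed):.0f} days to move 100 TB")
```

When the calculated transfer time runs into months, shipping a seeded device such as a Snowball is usually the more practical option.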

5)      Test and Monitor- Once your instance is set up and all the data migrated, it's time to test. Best practices are to test the application in the most realistic setting possible: during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health. A brief sketch of defining such an alarm follows.
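As one hedged example of the kind of alarm rule described above, the boto3 sketch below creates a CloudWatch alarm on a single instance's average CPU utilization. The region, instance ID, threshold, and SNS topic ARN are placeholders to replace with your own values.

```python
import boto3

# Sketch: alarm when average CPU on one EC2 instance exceeds 70%
# for two consecutive 5-minute periods. All identifiers are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                  # 5-minute samples
    EvaluationPeriods=2,         # two consecutive periods = 10 minutes
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```

The same pattern works for disk, network, or custom metrics; the alarm action can notify an operator or trigger scaling.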

To meet and learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.

 


A Buyer’s Guide to Cloud


Most businesses have discovered the value that cloud computing can bring to their IT operations. Some have discovered how it helps them meet regulatory compliance priorities by placing workloads in a SOC 2 audited data center. Others see a cost advantage as they approach a server refresh, when costly hardware needs to be replaced; they recognize the advantage of paying for this hardware as an operational expense as opposed to the large capital expense they need to make every three years. No matter the business driver, the typical business person isn't sure where to start to find the right cloud provider. In this fast-paced and ever-changing technology environment, these IT managers may wonder: is there a buyer's guide to cloud?

Where Exactly is the Cloud?…and Where is My Data?

Except for the cloud hyperscalers (Amazon AWS, Microsoft Azure, and Google), cloud providers create their product in a multi-tenant data center. A multi-tenant data center is a purpose-built facility designed specifically for the needs of business IT infrastructure and accommodates many businesses. These facilities are highly secured and most times unknown to the public. Many offer additional colocation services that allow customers to enter the center to manage their own servers. This is a primary difference from the hyperscalers, which offer no possibility of customers seeing the sites where their data resides. The hyperscale customer doesn't know where their data is beyond a region of the country or an availability zone. The hyperscaler's customer must base their buying decision on trusting the security practices of the large technology companies Google, Amazon, and Microsoft. These are some of the same organizations that are currently under scrutiny from governments around the world for data privacy concerns. The buying decision for cloud seekers should therefore start at the multi-tenant data center, and a buyer's guide for the cloud should start with the primary characteristics to evaluate in a data center, listed below.

  1. Location– Location is a multi-faceted consideration in a datacenter. First, the datacenter needs to be close to a highly available power grid and possibly alternate power companies. Similarly, the telecommunications bandwidth needs to be abundant, diverse and redundant. Finally, the proximity of the data center to its data users is crucial because speed matters. The closer the users are to the data, the less data latency, which means happier cloud users.
  2. Security– As is in all forms of IT today, security is paramount. It is important to review the data center’s security practices. This will include physical as well as technical security.
  3. People behind the data– The support staff at the datacenter creating and servicing your cloud instances can be the key to success. They should have the proper technical skills, responsiveness and be available around the clock.

Is My Cloud Infrastructure Portable?

The key technology that has enabled cloud computing is virtualization. Virtualization adds a layer of software called a hypervisor between the hardware and the operating systems, allowing the hardware resources to be shared. This allows multiple virtual servers (VMs) to be created on a single hardware server. Businesses have used virtualization for years, VMware and Microsoft Hyper-V being the most popular choices. If you are familiar with, and have some secondary or backup infrastructure on, the same hypervisor as your cloud provider, you can create a portable environment. A solution where VMs can be moved or replicated with relative ease avoids vendor lock-in. One primary criticism of the hyperscalers is that it can be easy to move data in but much more difficult to migrate the data out. This lack of portability is reinforced by the proprietary nature of their systems. One of the technologies the hyperscalers are beginning to use to become more portable is containers. Containers are similar to VMs; however, they don't require a full guest operating system for each virtual server. So far this has had a limited effect on portability because containers are a leading-edge technology and have not yet achieved widespread acceptance.

What Kind of Commitment Do I Make?

A multi-tenant data center offering a virtualized cloud solution will typically include an implementation fee and require a commitment term with the contract. Its customized solution requires pre-implementation engineering time, so the provider will be looking to recoup those costs. Both fees are typically negotiable, and this is a good example of where an advisor like Two Ears One Mouth can assist you through the process and save you money.

The hyperscaler will not require either charge: it doesn't provide custom solutions, and because it is difficult to leave, a term commitment is not required. The hyperscaler will, however, offer a discount as an incentive for a term commitment; these offerings are called reserved instances. With a reserved instance, the provider discounts your monthly recurring charge (MRC) in exchange for a term commitment, typically one or three years.

Finding the best cloud provider for your business is a time-consuming and difficult process. When considering a hyperscaler, the business user will receive no support or guidance. Working directly with a multi-tenant data center is more service-oriented but can consume a great deal of the cloud buyer's time. The cloud consumer can work with a single data center representative who states "we are the best" and trust them, or they can interview multiple data center representatives and build the ambiguous "apples to apples" spreadsheet of prospective vendors. Neither approach is effective.

At Two Ears One Mouth IT Consulting we will listen to your needs first and then guide you through the process. With our expertise and market knowledge, you can be confident we have come to the right decision for your company's specific requirements. We save our customers time and money and provide our services at little or no cost to them!

If you would like assistance in selecting a cloud provider for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Creating a Successful Cloud Migration

If you've been a part of the growth of cloud computing technology, you know that creating a successful cloud migration goes far beyond what can be covered in a short essay. However, this article will communicate guidelines and best practices that will greatly improve the success of your migration project. A successful cloud migration will include at least three stages: planning, design, and execution. Each phase builds on the previous one, and no step should be ignored or downplayed. A business cloud migration requires an expert, internal or external to the organization, to manage the process.

Planning: what type of cloud works best?

When we speak of a cloud migration, we are referring to a business's transition to Infrastructure as a Service (IaaS). Migrating to IaaS is the process of moving your on-site IT infrastructure to a cloud service provider and initiating an OpEx financial model for the business. When approaching this migration, the business will investigate three provider solution types: a hyperscaler, a national cloud service provider, or a hybrid of a cloud provider with a portion of the infrastructure remaining on-premises.

The largest public cloud providers, AWS, Azure, and Google, are often referred to as hyperscalers. The name is appropriate, as it describes what they do best: allow customers to scale, or expand, very quickly. This scaling is served up by a self-service model via the provider's web portal, which can be very attractive to large organizations. Small and medium-sized businesses (SMBs) have a harder time adjusting to this model because there is very little support. Self-service means the customer is on their own to develop and manage the cloud instances. Another drawback of the hyperscaler for the SMB is that it is nearly impossible to budget what your cloud infrastructure is going to cost. The hyperscalers' transactional charges and billing make costs difficult to predict. The larger enterprise will often take the strategy of building the infrastructure as needed and then scaling back to control the cost. The SMB typically does not have this kind of latitude, given its budget constraints, and will opt for the more predictable national or regional cloud provider.

The regional or national data center is a better fit for the SMB because of its ability to conform to the business's needs. Often the SMB will have unique circumstances requiring a customized plan for compliance and security, or special network requirements. Also, this type of cloud provider will include an allowance of internet bandwidth in the monthly charges. This eliminates the unpredictable transaction fees the hyperscaler charges. In this way, the business can predict its monthly cloud cost and budget accordingly.

There are times when an application doesn't work well in a cloud infrastructure, yet it is still required for the business. This is when a hybrid cloud environment can be implemented. Hybrid cloud in this instance is created when some applications move off-site while others stay on-premises and are managed separately. The challenge is to integrate, or make seamless, this non-cloud application with the other business processes. Over the long term, the application creating the hybrid environment can be repurposed to fit the cloud strategy. Options include redeveloping the existing software to a cloud-native architecture or finding a similar application that works more efficiently in a cloud environment.

Design: a cloud strategy.

A cloud strategy requires not only a strong knowledge of IT infrastructure but also a clear understanding of the business's operations and processes. It is vital that the customer's operations and management teams are involved in developing the cloud strategy. Details regarding regulatory compliance and IT security need to be considered in the initial phases of development rather than later. The technical leader of the project will communicate a common strategy of building a cloud infrastructure wider as opposed to taller: cloud infrastructure is better suited to many servers with individual applications (wide) instead of one more powerful server handling many applications (tall).

Once all the critical business operations are considered, a cloud readiness assessment (CRA) can be developed. A CRA digs deep into the business's critical and non-critical applications and determines the cloud infrastructure needed to support them. In this stage, each application can be considered for its appropriate migration type: a "lift and shift" migration moves the application off-site as is, although some type of cloud customization may be completed before it is migrated. Connectivity also needs to be considered at this stage. This includes the bandwidth required for the business and its customers to connect with the cloud applications. Many times, an additional private and secure connection is required for access by IT managers or software developers; this is typically provided through a VPN that is restricted and has very limited access. IP addresses may need to be changed to a supplier-issued IP block to accommodate the migration, which can create temporary Domain Name System (DNS) issues that require preparation. Finally, data backups and disaster recovery (DR) need to be considered. Many believe migrating to the cloud inherently assures backup and disaster recovery, and it does not! Both backup and DR objectives need to be uncovered and planned out carefully.
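One small, concrete preparation step for the DNS issue mentioned above is to check, and then lower, record TTLs well before cutover so cached records expire quickly. The sketch below uses the dnspython library; the domain name is a placeholder.

```python
import dns.resolver  # pip install dnspython

# Check current TTLs before a migration. Long TTLs mean clients may keep
# resolving to the old IP addresses for hours after cutover.
for record_type in ("A", "MX"):
    try:
        answer = dns.resolver.resolve("example.com", record_type)  # placeholder domain
        print(f"{record_type}: TTL {answer.rrset.ttl} seconds")
    except dns.resolver.NoAnswer:
        print(f"{record_type}: no records of this type found")
```

If the TTLs are measured in hours or days, lowering them to a few minutes ahead of the migration shortens the window of stale lookups.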

Execution and day 2 cloud.

Now that the best cloud provider and the application migration timeline have been determined, the project is ready for the execution phase. The migration team should have performed tests on the applications as a proof of concept (POC) to assure everything will work as planned. After the tests are complete, the data is migrated to the provider via an internet connection or a physical disk delivered to the provider. The business's IT infrastructure has now been moved to the cloud, but the work is not over: the infrastructure is now in a place called cloud day 2.

The two services that deliver and assure success in your cloud going forward are monitoring and support. These can be handled internally, or they can be provided by the cloud supplier or another third party. When purchasing professional services from the cloud provider, it is important to understand their helpdesk operations and set expectations for response times. Make sure you discuss service level agreements (SLAs) for response both during business hours and after. The service provider should be monitoring the health, or "state," of all VMs and network edge devices; security falls under these ongoing services. Many security-minded organizations prefer a more security-focused third-party provider over the cloud provider itself. It is also critical to understand the data backup services that have been included with your cloud instances. Don't assume an off-site backup is included in the cloud service; many data center providers charge extra for off-site backup. DR goes well beyond backups and creates data replication with aggressive SLAs to restore service during an outage. An often-overlooked part of a DR strategy is the "fallback" to your primary service location once the primary site has been restored to service.

A migration of IT infrastructure is a complicated process that needs to be performed by a team of experts. Just as important, the team needs to be managed by a seasoned project manager who puts your business interests first. This is best accomplished when the project manager is not part of the cloud provider's team. Having the right manager and team can assure your business migrates to the cloud without disruption. Two Ears One Mouth IT Consulting can be the partner that guarantees a successful cloud migration.

If you would like to talk more about cloud migration strategies contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Disaster Recovery as a Service (DRaaS)


One of the most useful concepts to come from the "as a service" model that cloud computing created is Disaster Recovery as a Service (DRaaS). DRaaS allows a business to outsource the critical part of its IT infrastructure strategy that assures the organization will still operate in the event of an IT outage. The primary technology that has allowed Disaster Recovery (DR) to be outsourced, or offered as a service, is virtualization. DRaaS providers operate their own data centers and provide a cloud infrastructure in which they rent servers for replication and recovery of their customers' data. DRaaS solutions have grown in popularity partly because small and medium-sized businesses (SMBs) increasingly need to include DR in their IT strategy. DR plans have become mandated by the larger companies to whom SMBs supply services, as well as by insurers and regulatory agencies. These entities require proof of the DR plan and of the ability to recover quickly from an outage. It's a complicated process that few organizations take the proper time to address. A custom solution for each business needs to be designed by an experienced IT professional who focuses on cloud and DR. Most times, an expert such as Two Ears One Mouth Consulting partnered with a DRaaS provider will create the best custom solution.

The Principles and Best Practices for Disaster Recovery (DR)

Disaster Recovery (DR) plans and strategies can vary greatly. One extreme notion is the idea that "my data is in the cloud, so I'm covered." At the other end of the spectrum is "I want a duplicate of my entire infrastructure off site and replicated continually," an active-active strategy. Most businesses today have some sort of backup; however, backup is not a DR plan. IT leadership at larger organizations favors the idea of a duplicated IT infrastructure, as the active-active strategy dictates, but balks at the cost. The answer for your company will depend on your tolerance for an IT outage, how long you're willing to be offline, as well as your company's financial constraints.

First, it's important to understand the primary causes of IT outages. Many times, we think of weather events and the power outages they create. Disruptive weather such as hurricanes, tornadoes, and lightning strikes from severe thunderstorms affects us all. These weather-related events make the news but are not the most common causes. Human error is the greatest source of IT outages. This type of outage can come from failed upgrades and updates, errors by IT employees, or even mistakes by end users. Another growing source of IT outages is malware and IT security breaches (see the previous article on phishing). Ransomware outages require an organization to recover from backups, as the organization's data has been encrypted and will only be unlocked with a ransom payment. It is vital that security threats are addressed, understood, and planned for in the DR recovery process.

Two important concepts in DR are Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RPO defines how much data, measured as an interval of time, the organization can tolerate losing during an outage. The RPO concept also applies to a ransomware attack, described above, where you fall back to data from a time before the breach. More often, RPO is used to define how far back in time the customer is willing to go for the data to be restored; this determines the frequency of data replication and ultimately the cost of the solution. The RTO defines the amount of time within which the vendor must have the customer up and running on the DR solution during an outage, and how they will "fall back" when the outage is over.
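As a simple illustration of how the RPO drives the replication design, the sketch below uses hypothetical numbers to check whether a replication interval satisfies a stated RPO target.

```python
# Hypothetical example: worst-case data loss equals the replication interval.
rpo_target_minutes = 60              # the business tolerates losing at most 1 hour of data
replication_interval_minutes = 240   # data is currently replicated every 4 hours

worst_case_loss = replication_interval_minutes
if worst_case_loss <= rpo_target_minutes:
    print("The replication schedule meets the RPO target.")
else:
    print(f"RPO gap: worst-case loss is {worst_case_loss} minutes "
          f"against a {rpo_target_minutes}-minute target; replicate more often.")
```

Tightening the interval raises the cost of the solution, which is exactly the trade-off the RPO conversation is meant to surface.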

If a company is unable to create an active-active DR solution, it is important to rate and prioritize critical applications. The business leadership needs to decide which applications are most important to the operations of the company and set them to recover first in a DR solution. Typically these applications will be grouped into "phases" according to their importance to the business and the order in which they are to be restored.

Telecommunications networking can sometimes be the cause of an IT outage and is often the most complicated part of the recovery process. Customers directed to one site under normal circumstances need to be redirected to another when the DR plan is engaged. In the early days of DR, there was a critical piece of documentation called a playbook. A playbook was a physical document with step-by-step instructions detailing what needs to happen in the event of an IT outage. It would also define what is considered a disaster and at what point the DR plan is engaged. Software automation has partially replaced the playbook; however, the playbook concept remains. While automating the process is often beneficial, there are steps that can't be automated. Adjusting the networking of the IT infrastructure when the DR plan is initiated is one example.

Considerations for the DRaaS Solution

DRaaS, like other outsourced solutions, has special considerations. The agreement with the DRaaS provider needs to include Service Level Agreements (SLAs). SLAs are not exclusive to DRaaS but are critical to it. An SLA defines all the metrics you expect your vendor to attain in the recovery process; RTO and RPO are important metrics in an SLA. SLAs need to be in writing and have well-defined penalties if deliverables are not met. There should also be consideration for how the recovery of an application is defined. A vendor can point out that the application is working at the server level but may not consider whether it's working at the desktop and at all sites. If the customer has multiple sites, the details of the networking between sites are a critical part of the DR plan. That is why a partner that understands both DR and telecommunications, like Two Ears One Mouth IT Consulting, is critical.

The financial benefits of an outsourced solution such as DRaaS are a primary consideration. Making a CapEx purchase of the required infrastructure to be implemented in a remote and secure facility is very costly. Most businesses see the value of renting DR infrastructure that is already implemented and tested in a secure and telecom-rich site.

DR is a complicated and very important technology that a business pays for but may never use. Like other insurance policies, it's important and worth the expense. However, because it is complicated, it should be designed and executed by professionals, which may make an outsourced service the best alternative.

If you need assistance designing your DR Solution (in Cincinnati or remotely), please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net


Financial Benefits of Moving to Cloud



There are many benefits that cloud technology can offer a business; however, a business doesn't buy technology for technology's sake, it buys it for positive business outcomes. The two outcomes most businesses want are to increase revenue and reduce cost. Information Technology (IT) has long been known to be one of the costliest departments in a business, so it makes sense that if we're going to recommend a cloud solution, we look at the financial benefits. The financial advantages, paired with the expertise in determining which applications should migrate to the cloud, create a cloud strategy. This consultation is not completed just once but needs to be repeated periodically by a strategic partner like Two Ears One Mouth. Just as telecommunications and internet circuits can become financially burdensome as a business grows, so can a cloud solution. Telecom cost recovery became a financial necessity for businesses when telecom costs spiraled out of control; a consultant would examine all the vendors and circuits to help the business reduce IT spend by eliminating waste. The cloud user faces a similar problem, as cloud services can automatically grow as demand increases, and that growth includes the cloud solution's cost as well as its resources.

 

To follow are the three primary financial benefits of a cloud migration.

 

CapEx vs OpEx

The primary financial benefit most organizations plan for with their first cloud implementation is the shift to an operational expense (OpEx) instead of a capital expense (CapEx). This is particularly beneficial for startup companies and organizations that are financially constrained. They find comfort in the "pay as you go" model, similar to other services they need, such as utilities. Conversely, enterprises that invest in equipping their own data centers have racks of equipment that depreciate quickly and utilize only a fraction of the capacity purchased. It has been estimated that most enterprises have an IT hardware utilization rate of about 20% of total capacity. Cloud services allow you to pay only for what you use and seldom pay for resources sitting idle.

 

Agility and scale

Regardless of the size of your business, it would be financially impractical to build an IT infrastructure that could scale as quickly as the one you rent from a cloud provider. This agility allows businesses to react quickly to IT resource needs while simultaneously reducing cost. Many cloud solutions can predict when additional resources are needed and scale the solution appropriately. This provides obvious benefits for the IT manager but can create problems for the IT budget: if the cloud solution continues to scale upward, and it is billed transactionally, the cost can escalate quickly. Cloud instances need to be monitored constantly for growth and cost. For this reason, Two Ears One Mouth consultants have developed a product known as cloud billing and support services (CBASS). CBASS makes sure the benefits originally realized with the cloud migration remain intact.

 

Mitigate risk

Many best practices in setting up a cloud infrastructure also enhance IT security. For instance, because their data resides elsewhere, cloud users tend to implement data encryption. This encryption can cover not only the data at rest in the cloud provider's data center but also data in transit between the data center and the customer. This is a wise practice for IT security: it can reduce the risk of data breaches and, in some cases, benefit regulatory compliance. Additionally, security software and hardware, such as firewalls, tend to be superior in larger IT data centers, such as a cloud provider's. Ironically, IT security, which started as a concern about cloud computing, has become an advantage.
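As a minimal illustration of encrypting data before it ever leaves your premises, the sketch below uses the widely available Python cryptography library; key management, which matters far more in practice, is omitted here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt a record client-side before uploading it to the cloud provider,
# so the data is protected at rest even if the remote store is exposed.
key = Fernet.generate_key()      # in practice, store and rotate this key securely
cipher = Fernet(key)

record = b"customer invoice #1001 ..."
ciphertext = cipher.encrypt(record)     # what gets uploaded
restored = cipher.decrypt(ciphertext)   # what you read back
assert restored == record
```

Transport encryption (TLS) covers the data in transit; client-side encryption like this covers it at rest, independent of the provider's own controls.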

 

Cloud has long been a proven technology and is here to stay. It has reduced IT budgets while improving IT response times. However, the cost savings of cloud are not automatic or permanent; the savings, as well as the solution, need to be measured and affirmed regularly. Two Ears One Mouth consultants can monitor your cloud environment, leaving you to focus on the business.

If you need assistance with your current IT cloud project, please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net

HIPAA: The Who, the When, and What Is Its Primary Purpose?


The Health Insurance Portability and Accountability Act (HIPAA) was signed into law by President Bill Clinton in 1996. It created some of the most sweeping changes in healthcare reform of its time and for many years after. HIPAA was designed to eliminate discrimination by protecting and securing patients' health data. The law has since grown into a government regulation with a much larger scope and a focus on information technology as it relates to healthcare. HIPAA's three primary functions are:

  1. Protect the privacy and provide security for Protected Health Information (PHI).
  2. Increase the efficiency and effectiveness of the healthcare system.
  3. Establish standards for accessing, sharing and transmitting PHI.

HIPAA was originally segmented into three primary components: the Privacy Rule, the Security Rule, and the Enforcement Rule. Several years later it was amended to include the Health Information Technology for Economic and Clinical Health Act (HITECH) and the Omnibus Rule.

The Privacy Rule

The Privacy Rule was designed to protect and keep private all of our Protected Health Information (PHI). PHI includes information such as a patient's street address, city, birth date, email address, Social Security number, or any type of identifiable information obtained in the process of receiving care. Individuals may be charged with either civil or criminal penalties for violating HIPAA privacy rules.

The primary goal is to protect individuals' PHI while promoting an efficient "flow" of information. The rule applies to covered entities, which are defined as hospitals, doctors' offices, insurance companies, or any organization that accepts health insurance. It also applies to business associates: organizations that create, maintain, or transmit PHI on behalf of a covered entity. These entities must protect PHI transmitted in any form: electronic, oral, or written.

The Privacy Rule also grants individuals the right to access, review, and obtain copies of their PHI. In addition, it authorizes the right to amend or request restrictions on the use of their PHI. As part of the Privacy Rule, covered entities and business associates are required to appoint a privacy officer, complete workforce training on HIPAA compliance, and establish business associate agreements with any entity with whom they disclose or share information.

The Security Rule

The Security Rule sets standards for covered entities and business associates for the security of electronic health information.

The Security Rule has three primary components:

  1. Administrative safeguards– These begin the security management process by identifying a security officer and performing a risk assessment. The goal is to evaluate risk and make sure only authorized personnel can access PHI. Also, contingency and business continuity plans must be addressed and documented in the event of a disaster or disruption of business.
  2. Physical safeguards – These cover facility access controls (badges), alarms and locks. Any PHI data must be encrypted at rest and in motion and protected with adequate passwords. The use of tablets, phones and other mobile devices must also be considered.
  3. Technical safeguards – These include audit controls (such as SSAE audits), which record and monitor transactions, as well as access controls such as passwords, PINs or biometrics. A minimal sketch of an audit trail follows this list.
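
As referenced in the technical safeguards item above, here is a minimal sketch of an application-level audit trail for PHI access. The record fields, log file name and example identifiers are illustrative assumptions, not a HIPAA prescription or a specific vendor's API.

```python
# A minimal sketch of an application-level audit trail for PHI access,
# in the spirit of the technical safeguards described above.
import json
import logging
from datetime import datetime, timezone

# Illustrative log destination; real systems would use protected,
# centrally retained storage.
logging.basicConfig(filename="phi_audit.log", level=logging.INFO,
                    format="%(message)s")

def audit_phi_access(user_id: str, patient_id: str, action: str) -> None:
    """Write one audit entry per PHI transaction: who, what, when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "patient": patient_id,
        "action": action,  # e.g. "view", "amend", "export"
    }
    logging.info(json.dumps(entry))

# Example: record that a clinician viewed a chart.
audit_phi_access(user_id="dr.smith", patient_id="PT-1042", action="view")
```

Entries like these are what make it possible to document security activity and produce it on demand, as the retention requirement below describes.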

All security information must be documented and accessible on demand. It must be kept up to date and archived for six years.

The Enforcement Rule

The Enforcement Rule sets the standards for penalties in the event of a HIPAA violation or breach. Initially, very few violations were reported and few penalties were assessed. Today, penalties are still rare compared to the number of violations, which occur frequently.

The most common infractions include:

  • Unauthorized disclosures of PHI
  • Lack of protection of health information
  • Inability of patients to access their health information
  • Disclosing more than the minimum necessary protected health information
  • Absence of safeguards for electronic protected health information

According to HHS, the covered entities most often required to take corrective action to achieve voluntary compliance are:

  • Private practices
  • Hospitals
  • Outpatient facilities
  • Group plans such as insurance groups
  • Pharmacies

(source: hhs.gov/enforcement, 2013)

HITECH and the Omnibus Rule

In 2009 Congress passed an amendment to HIPAA: the Health Information Technology for Economic and Clinical Health Act (HITECH). This amendment was designed to reduce cost and streamline healthcare through information technology. HITECH expanded HIPAA and implemented new requirements for the protection of PHI in Information Technology.

In 2013 the HHS Office for Civil Rights issued the Final Rule, or Omnibus Rule, as a means of implementing the changes of HITECH. Those changes included:

  • It allowed individuals to request changes to their PHI and required their direct approval before any sale of PHI.
  • Business associates became directly liable and are required to provide workforce training, appoint a privacy officer and complete risk assessments. HITECH also assigned liability to subcontractors of business associates.
  • All HIPAA breaches must be reported to the affected individuals as well as to the Secretary of HHS. A risk assessment must then be completed for each breach.
  • HITECH introduced a tiered approach to breach penalties, with recurring infractions in the same year totaling up to $1,500,000. It also gave state attorneys general the power to enforce HIPAA violations.

HIPAA is one of the most sweeping and all-encompassing changes ever to impact the healthcare industry. It has evolved to regulate the use of information technology within the scope of healthcare in addition to protecting the privacy of a patient's PHI. Unfortunately, like many government regulations, it is vague and difficult to enforce. Even so, it has created valuable safeguards for the protection of our personal health records and has encouraged improvements to the flow and integration of healthcare data.

If you need assistance with any current IT projects (Cincinnati or remote), or a risk assessment for your practice, please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net

What’s a Managed Service Provider (MSP)?

 


Most organizations, big and small, have gone through this exercise with Information Technology, as well as with other services: "Should I hire a dedicated person, assign the work to someone in the organization as an additional responsibility, or outsource it?" When posing this question for IT services, size matters. In this exercise, we will assume the organization considering an MSP has between 20 and 100 IT users.

Size Matters

When a company I consult with is near the lower end of this user count, they will often tell me that an employee's relative, a brother, sister or spouse, does their IT work. I call this type of IT provider a "trunker," as their office and tools are in the trunk of their car. A trunker can be a smart way to go, offering prompt and personalized service. However, it is important that the trunker has a way to stay current with technology. Also, at least one employee of the organization should be aware of everything the trunker does and should document all passwords and major tasks.

I've seen that the same level of service can be achieved with an IT MSP as the organization outgrows the trunker. The MSP will typically charge an upfront cost to inspect and become familiar with the IT infrastructure. Then there will be a recurring charge, monthly or quarterly, for help-desk support handled either remotely or on the customer's site. With few exceptions, organizations of 100 employees or fewer are serviced satisfactorily with a remote agreement. When an issue calls for onsite service, they pay a predetermined labor rate. Another factor determined up front is the Service Level Agreement (SLA), which defines how quickly the MSP will respond. As with the trunker, it is up to the organization to keep track of the IT provider and their tasks. This is made easier by the fact that an MSP, because it engages multiple technicians for one customer, needs to document everything for its own benefit.

Why Use an MSP for My Business?

The MSP is the model I see work most often. So let me answer my original question: why outsource my IT?

1)   Consistency and predictability of service. Based on the MSP's reputation and the SLAs provided, most organizations experience responsive service with high continuity. When the agreement ends, they can expect a smooth transition to the new vendor or person. I have witnessed many times when the relationship with a trunker ends poorly, leaving the organization with no documentation and without even the passwords needed to access their own systems.

2)   Transparency. Most MSPs, as part of their service, offer dashboards showing the real-time status of devices on the network. Many even give your business remote access to monitor your own network. This represents a major cost reduction compared to hosting or maintaining monitoring yourself.

3)   Expertise. There is knowledge in numbers. Although you may only see or speak with one person as the face of your IT partner, you are working with a team with broad experience and knowledge. The technical staff of an MSP will generally have a greater level of experience and a better grasp of trends in technology. This is particularly valuable in regulated organizations such as healthcare and financial businesses.

Contact us for a free analysis of your business and what will serve it best.

Jim Conwell     (513) 227-4131     jim.conwell@twoearsonemouth.net http://www.twoearsonemouth.net