
5G or not 5G: What, When, and How It Affects IT and Cloud

 

Before entering the cloud and IT business I spent more than a decade working with wireless technologies for business. During this time, I saw the advent of data on cell phones and the transitions between the generations of data offerings that have been delivered. Second-generation (2G) networks brought very rudimentary data to cell phones, such as text messaging. 3G brought the internet and its many applications, such as mobile email. 4G brought us the high-speed internet we use today, offering instant access to applications such as real-time video. With each transition of the technology, corporate marketing and spin became more extraordinary, stretching the time between the introduction of a new product and its practical delivery. Now comes 5G, and I expect this trend to continue. Although we hear of the current availability of 5G from wireless carriers, the products are not fully developed for practical use and are likely years away for business applications.

What is 5G and who will provide it?

The latest generation of wireless data, 5G, will be provided to us by the same carriers that delivered wireless service in the past: AT&T, Verizon, and Sprint. Although the primary standards for 5G have been set, there is still much to be developed, and the technology will likely be introduced as different versions. This will be similar to 4G when it was first launched with its distinct alternatives of WiMAX and LTE. 5G has already been split into different delivery types: 5G, 5GE, and 5 GHz. Verizon’s first introduction to 5G is designed for the home and small office, while AT&T is focused on mobile devices in very limited markets. Most believe there will be fixed wireless versions for point-to-point circuits for business. At this point, it isn’t clear which versions each provider will offer as 5G matures and becomes universally delivered.

The technology of 5G

Like all previous generations in the evolution of wireless data, 5G offers greater speeds as its primary driver for acceptance. What may slow widespread deployment of 5G is the fact that 4G technology continues to improve and provide greater speeds for users. However, the wireless spectrum available to 4G providers is running short, so the transition to 5G is imminent. Most of the 5G technology will be provided on an alternate portion of the wireless spectrum, above 6 GHz, not previously offered to wireless consumers. This new swath of spectrum will offer much greater capacity and speed but won’t come without its own challenges. To achieve these higher speeds the carriers will need to use much higher frequency transmissions called millimeter waves. Millimeter waves cannot penetrate buildings, weather, and foliage as well as the lower frequencies used previously. To overcome this, wireless carriers will need to implement additional, smaller cell sites called microcells. Many wireless carriers have already implemented microcells to complement the macrocells used in previous offerings of wireless service. Building out additional data networks and cell sites such as microcells is expensive and time-consuming. This will add to the delay of a fully implemented 5G offering from the carriers.

Business advantages of 5G

To say that one of the advantages of 5G is greater data speed would be true, but there is much more to it for business applications. The following are the primary advantages, related to speed, that 5G will provide for business cloud computing.

  • Lower latency – Wireless 5G networks will greatly decrease latency, the time it takes data packets to travel across the network. This will benefit many business applications such as voice, video, and artificial intelligence (AI).
  • Multiple connections – The base stations, or cell sites, of 5G will handle many more simultaneous connections than 4G. This will increase speed for users and capacity for providers.
  • Full duplex transmission – 5G networks can transmit and receive data simultaneously. This full duplex transmission increases the speed and reliability of wireless connectivity, enabling new applications and enhancing existing ones.

Cloud and business solutions enhanced by 5G

It is difficult to say exactly how businesses will benefit from 5G service since it is still being developed. However, the advantages listed above lend themselves to several applications which are sure to be enhanced for business.

The increased speeds and decreased latency 5G offers will expand options and availability for disaster recovery (DR) and data network backups for businesses. When speeds previously available to business only via wireline can be delivered without wires, business continuity will be improved. Many business outages today are caused by accidental cable cuts and power outages that wireless 5G can eliminate. It is also possible that wireless point-to-point circuits could replace traditionally wired circuits for a business’s primary data and internet service.

The growing number of internet of things (IoT) applications will also be enhanced by 5G. The increased speed and connection capacity will allow this ubiquitous technology to continue to grow. Similarly, the trend toward more and faster edge computing connectivity will benefit. This will enhance applications such as autonomous vehicles that require instant connectivity to networks and other vehicles. Content delivery networks, like the ones used to deliver Netflix, will be able to deliver their products faster and more reliably. These are just a few examples of the technologies that are already demanding 5G’s advantages and will expedite its availability.

While the technology to deliver 5G is mostly complete, the timing of widespread implementation for business is still unclear. This is attributable in part to improving 4G speeds, which still satisfy most of today’s consumer needs. More importantly, new technologies are not accepted in the marketplace simply because the technology is ready, but rather because business applications demand them. 5G technologies will be driven by many business applications, but widespread acceptance won’t occur for at least another two years. If you want to consult with a partner that has expertise in all aspects of telecom, wireless, and cloud technologies, give us a call and we will be glad to find the right solution for your business.

Contact: Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

Getting Started with Amazon Web Services (AWS)


Amazon Web Services (AWS) is a little-known division of the online retail giant, except for those of us in the business of IT. It’s interesting to see that profits from AWS represented 56 percent of Amazon’s total operating income, with $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may ponder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses have had some cloud presence in their IT infrastructure. The most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here with AWS is how businesses use it for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates a business’s applications from its own servers to a hosted cloud provider. Businesses consider this to reduce hardware cost, become more agile with their IT, and even improve security. To follow are the five simple steps we have developed for moving to IaaS with AWS.


1)      Define the workloads to migrate- The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good idea, and where many businesses start, is a data backup solution. You can use your existing backup software or one that currently partners with AWS. These include industry leaders such as Commvault and Veritas, and if you already use these solutions that is even better. Start small and you may even find you can operate in the free tier of Amazon virtual servers, or instances. (https://aws.amazon.com/free/)
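As a rough illustration of how small this starting point can be, here is a minimal sketch, using boto3 (the AWS SDK for Python), that pushes a backup archive into S3. The bucket and file names are placeholders, and in practice your backup software would normally handle this step for you.

```python
import boto3  # AWS SDK for Python

# Placeholder names; replace with your own bucket and backup archive.
BUCKET = "example-nightly-backups"
ARCHIVE = "backup-2019-01-31.tar.gz"

s3 = boto3.client("s3")

# Create the bucket once. (Outside us-east-1 you must also pass a
# CreateBucketConfiguration with a LocationConstraint for your region.)
s3.create_bucket(Bucket=BUCKET)

# Upload the backup archive produced by your existing backup software.
s3.upload_file(ARCHIVE, BUCKET, f"nightly/{ARCHIVE}")
print(f"Uploaded {ARCHIVE} to s3://{BUCKET}/nightly/{ARCHIVE}")
```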

2)      Calculate cost and Return on Investment (ROI)- Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs seem to offer the greatest savings as you first start your cloud presence. These costs include the server hardware used, if cloud isn’t already utilized, as well as the time needed to assemble and configure it. When configuring a physical hardware server, a technician has to estimate the application’s growth in order to size the server properly. With AWS it’s pay as you go, renting only what you actually use. Other hard costs, such as power consumption and networking, will be saved as well. Many times when starting small, it doesn’t take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that it makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.
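For a back-of-the-envelope comparison, a short sketch like the one below can stand in for a formal ROI exercise; every figure in it is an illustrative assumption, not vendor pricing.

```python
# Rough hard-cost comparison for a small backup workload over three years.
# All figures below are illustrative assumptions, not quotes from AWS or any vendor.
server_hardware = 6000.0           # purchase price of a physical server
setup_labor = 1500.0               # technician time to rack, cable, and configure
power_and_network_per_month = 60.0

cloud_storage_gb = 500
s3_price_per_gb_month = 0.023      # assumed standard-storage rate; check current pricing
cloud_per_month = cloud_storage_gb * s3_price_per_gb_month

months = 36
on_prem_total = server_hardware + setup_labor + power_and_network_per_month * months
cloud_total = cloud_per_month * months

print(f"On-premises (3 yr):        ${on_prem_total:,.2f}")
print(f"Cloud pay-as-you-go (3 yr): ${cloud_total:,.2f}")
print(f"Estimated hard-cost savings: ${on_prem_total - cloud_total:,.2f}")
```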

3)      Determine cloud compatibility- There are still applications that don’t work well in a cloud environment. That is why it is important to work with a partner that has experience in cloud implementation. It can be as simple as an application that requires a premium amount of bandwidth or is sensitive to data latency. Additionally, industries that are subject to regulation, such as PCI DSS or HIPAA, are further incentivized to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI). This regulated data should be encrypted both in transit and at rest. This example of encryption wouldn’t necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for the years to come.

4)      Determine how to migrate existing data to the cloud- Amazon AWS provides many ways to migrate data, most of which will not incur any additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. To follow are the most popular ways.

  a) Virtual Private Network- This common but secure transport method is available to move data via the internet that is not sensitive to latency. In most cases a separate virtual server running an AWS storage gateway will be used.
  b) Direct Connect- AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These pipes are typically either 1 or 10 Gbps and are provided by the customer’s telecommunications provider. They terminate at the far end in an Amazon partner datacenter; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the datacenter.
  c) Import/Export- AWS allows customers to ship their own storage devices containing data to AWS to be migrated to their cloud instance. AWS publishes a list of compatible devices and will return the hardware when the migration is completed.
  d) Snowball- Snowball is similar to Import/Export except that Amazon provides the storage devices for this product. A Snowball can store up to 50 Terabytes (TB) of data and can be combined in series with up to 4 other Snowballs. It also makes sense for sites with little or no internet connectivity. This unique device is set to ship as is; there is no need to box it up. It can encrypt the data and has two 10 Gigabit Ethernet ports for data transfer. Devices like the Snowball are vital for migrations with large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred; a rough version of this calculation is sketched in the code after the chart. It is easy to see large migrations couldn’t happen without these devices. The final column shows the amount of data at which it makes sense to “seed” the data with a hardware device rather than transfer it over the internet or a direct connection.
    Company’s Internet Speed | Theoretical days to transfer 100 TB @ 80% utilization | Amount of data to consider a device
    T3 (44.73 Mbps) | 269 days | 2 TB or more
    100 Mbps | 120 days | 5 TB or more
    1000 Mbps (1 Gbps) | 12 days | 60 TB or more
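Here is that rough calculation as a minimal sketch; it assumes decimal terabytes and the 80% utilization figure from the chart, so the results are approximations that differ slightly from the chart’s rounded values.

```python
def days_to_transfer(data_tb, link_mbps, utilization=0.8):
    """Approximate days to move a data set over an internet link.

    data_tb: data set size in terabytes (decimal)
    link_mbps: nominal link speed in megabits per second
    utilization: fraction of the link realistically available for the transfer
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400                         # seconds -> days

# Reproduce the chart's scenarios (figures are approximations).
for label, mbps in [("T3 (44.73 Mbps)", 44.73), ("100 Mbps", 100), ("1000 Mbps", 1000)]:
    print(f"{label}: about {days_to_transfer(100, mbps):.0f} days for 100 TB")
```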

5)      Test and Monitor- Once your instance is set up, and all the data migrated, it’s time to test. Best practices are to test the application in the most realistic setting possible. This means during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won’t need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
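As a small example of the alarm rules described above, the following sketch uses boto3 to create a CloudWatch alarm on sustained high CPU; the instance ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when a single instance's CPU stays high; the instance ID and SNS
# topic ARN below are placeholders for illustration only.
cloudwatch.put_metric_alarm(
    AlarmName="backup-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=3,        # three consecutive periods above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-2:123456789012:ops-alerts"],
)
```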

To meet and learn more about how AWS can benefit your organization contact me at (513) 227-4131 or jim.conwell@outlook.com.

 

Edge Computing and the Cloud


image courtesy of openedgecomputing.org

This article is intended to be a simple introduction to what can be a complicated technical process. It usually helps to begin the articles I write involving a specific cloud technology with a definition. Edge computing’s definition, like that of many other technologies, has evolved in a very short period of time. In the past, edge computing could describe devices that connect the local area network (LAN) to the wide area network (WAN), such as firewalls and routers. Today’s definitions of edge computing are more focused on the cloud and how to overcome some of the challenges of cloud computing. The definition I will use as a basis for this article is bringing computing and data storage resources as close as possible to the people and machines that require the data. Many times, this will include creating a hybrid environment for a distant or relatively slow public cloud solution. The hybrid environment will consist of an alternate location with resources that can provide the faster response required.

Benefits of Edge Computing

The primary benefit of living on the edge is increased performance. This is most often defined in the networking world as reduced latency. Latency is the time it takes for data packets to travel between where they are stored and where they are used. With the explosive growth of Machine Learning (ML), Machine-to-Machine communications (M2M), and Artificial Intelligence (AI), latency awareness has increased across the industry. A human working at a workstation can easily tolerate a data latency of 100-200 milliseconds (ms) without much frustration. If you’re a gamer you would like to see latency at 30 ms or less. Many machines and the applications they run are far less tolerant of data latency. The latency tolerance for machine-based applications can range from 10 ms down to effectively none, needing the data in real time. There are applications humans interface with that are more latency sensitive, a primary example being voice communications. In the past decade, businesses’ demand for Voice over Internet Protocol (VoIP) phone systems has grown, which has in turn driven the need for better-managed, low-latency networks. Although data transmissions travel at nearly the speed of light, distance still matters. As a result, we look to reduce latency for our applications by moving the data closer to the edge and its users. This can then produce the secondary benefit of reduced cost. The closer the data is to the applications, the fewer network resources are required to transmit it.
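To put these numbers in context, a rough sketch like the following times a TCP connection from Python as a crude stand-in for round-trip latency; the hostnames are placeholders, and connection setup time is only a proxy for application latency.

```python
import socket
import time

def tcp_round_trip_ms(host, port=443, samples=5):
    """Rough round-trip latency: average time to open a TCP connection, in ms."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

# Compare a distant public endpoint with a nearby edge node (hostnames are placeholders).
for host in ("example.com", "edge.example.local"):
    print(f"{host}: {tcp_round_trip_ms(host):.1f} ms")
```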

Use Cases for Edge Computing

Content Delivery Networks (CDNs) are thought of as the predecessors of the edge computing solutions of today. CDNs are geographically distributed networks of content servers designed to deliver video or other web content to the end user. Edge computing seeks to take this to the next step by delivering all types of data even closer to real time.

Internet of Things (IoT) devices are a large part of what is driving the demand for edge computing. A common application involves a video surveillance system an organization would use for security. A large amount of data is captured, of which only a fraction is needed or will ever be accessed. An edge device or system collects all the data, stores it, and transfers only the data needed to a public cloud for authorized access.
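A minimal sketch of that filter-and-forward pattern might look like the following, assuming Python on the edge device, boto3 for the cloud upload, and placeholder bucket and path names.

```python
import shutil
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "example-surveillance-events"        # placeholder bucket name
LOCAL_ARCHIVE = Path("/var/edge/archive")     # placeholder edge storage path

def handle_clip(clip: Path, motion_detected: bool) -> None:
    """Archive every clip at the edge; forward only clips of interest to the cloud."""
    LOCAL_ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.copy(clip, LOCAL_ARCHIVE / clip.name)   # the full data set stays local
    if motion_detected:                            # only the needed fraction leaves the site
        s3.upload_file(str(clip), BUCKET, f"events/{clip.name}")
```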

Cellular networks and cell towers provide another use case for edge computing. Data for analysis is sent from subscriber phones to an edge system at the cell site. Some of this data is used immediately for call control and call processing. Most of the data, which is not time sensitive, is then transmitted to the cloud for later analysis.

As the technology and acceptance of driverless cars increase, a similar type of edge strategy will be used. However, with driverless applications, the edge devices will be located in the car because of the need for real-time responses.

These examples all demonstrate that the need for speed is constantly increasing and will continue to grow in our data applications. As fast as our networks become, there will always be a need to hasten processing time for our critical applications.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Getting Started with Microsoft Azure

 


image courtesy of Microsoft.com

A few months ago I wrote an article on getting started with Amazon Web Services (AWS); now I want to follow up by writing the same about getting started with Microsoft Azure. Microsoft Azure is the public cloud offering deployed through Microsoft’s global network of datacenters. Azure has continued to gain market share from its chief rival, AWS. Being in second place is not something Microsoft is used to with its offerings. However, in the cloud, as with internet web browsers, Microsoft got off to a slow start. Capturing market share will not prove as simple against AWS as it was against Netscape and the web browser market in the 90’s, but in the last two years progress has been made. Much of the progress can be attributed to Satya Nadella, Microsoft’s current CEO. Nadella proclaimed from his start a commitment to the cloud. Most recently, Microsoft has expressed its commitment to support Linux and other operating systems (OS) within Azure. Embracing other operating systems and open source projects is new for Microsoft and seems to be paying off.

Like the other large public cloud providers, Microsoft has an easy-to-use self-service portal for Azure that can make it simple to get started. In addition to the portal, Microsoft entices small and new users with a free month of service. The second version of the portal, released last year, has improved the user experience greatly. Their library of pre-configured cloud instances is one of the best in the market. A portal user can select a preconfigured group of servers that creates a complex solution such as SharePoint. The SharePoint instance includes all the components required: the Windows Server, SQL Server, and SharePoint Server. What previously took hours can now be “spun up” in the cloud with a few clicks of your mouse. There are dozens of pre-configured solutions like this SharePoint example. The greatest advantage Microsoft has over its cloud rivals is its deep and long-established channel of partners and providers. These partners, and the channel Microsoft developed for its legacy products, allow it to provide the best support of all the public cloud offerings.

Considerations for Getting Started with Microsoft Azure

Decide the type of workload

It is very important to decide not only what workloads can go to the cloud but also what applications you want to start with. Start with non-production applications that are non-critical to the business.

Define your goals and budget

Think about what you want to achieve with your migration to the cloud. Cost savings? Transferring IT from a capital expense to an operational expense? Be sure you calculate your budget for your cloud instance; Azure has a great tool for cost estimation. In addition, make sure you check costs as you go. The cloud has developed a reputation for starting out with low costs and increasing them quickly.

Determine your user identity strategy

Most IT professionals are familiar with Microsoft Active Directory (AD), Microsoft’s application that authenticates users to the network behind the corporate firewall. AD has become somewhat outdated, not only by the cloud’s off-site applications but also by today’s countless mobile devices. Today, Microsoft offers Azure Active Directory (AAD). AAD is designed for the cloud and works across platforms. At first, you may implement a hybrid approach between AD, AAD, and Office 365 users. You can start this hybrid approach through a synchronization of the two authentication technologies. At some point, you may need to add federation, which will provide additional connectivity to other applications such as commonly used SaaS applications.

Security

An authentication strategy is a start for security, but additional work will need to be done. A future article will detail cloud security best practices in more detail. While it is always best to have a security expert recommend a security solution, there are some general best practices we can mention here. Try to use virtual machine appliances whenever possible. Virtual firewall, intrusion detection, and antivirus appliances add another level of security without adding hardware. Devices such as these can be found in the Azure Marketplace. Use dedicated links for connectivity if possible; these incur a greater expense but eliminate threats from the open internet. Disable remote desktop and secure shell access to virtual machines. These protocols exist to offer easier access to manage virtual machines over the internet. After you disable them, use point-to-point or site-to-site Virtual Private Networks (VPNs) instead. Finally, encrypt all data at rest in virtual machines to help secure it.
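As one small example of locking down remote desktop access, here is a minimal sketch using the Azure SDK for Python to add a deny rule for inbound RDP to a network security group; the subscription, resource group, and NSG names are placeholders, and your own rules and priorities will differ.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder identifiers; substitute your own subscription, resource group, and NSG.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "example-rg"
NSG_NAME = "example-nsg"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Block inbound RDP (TCP 3389) at the network security group level.
client.security_rules.begin_create_or_update(
    RESOURCE_GROUP,
    NSG_NAME,
    "deny-inbound-rdp",
    {
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Deny",
        "priority": 100,
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "3389",
    },
).result()
```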

Practically every business can find applications to migrate to a public cloud infrastructure such as Azure. Very few businesses put their entire IT infrastructure in a public cloud environment. A sound cloud strategy, including determining which applications to migrate, enables the enterprise to get the most from a public cloud vendor.

If you would like to learn more about Azure and a cloud strategy for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

Three Reasons to Use a Local Datacenter and Cloud Provider


photo courtesy of scripps.com

Now that the business cloud market has matured, it has become easier to recognize the leaders of the technology as well as the providers that make the most sense to partner with your business. Many times that can be a local datacenter and cloud provider. There are many large public cloud providers, and most agree on three leaders: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Google has been an uncharacteristic laggard in the space and seems to be struggling with the Business-to-Business (B2B) model. Clearly, a B2B strategy can evolve from a Business-to-Consumer (B2C) strategy; one need look no further than the public cloud leader, AWS.

Whether Google Cloud can succeed is unclear. What is clear, however, is that there will always be a place for large public cloud providers. They have fundamentally changed how IT in business is done. The mentality the public cloud helped to create, “move fast and break things”, has been an important concept for the enterprise IT sandbox.

Where Does the Local Data Center Fit in?  

I also believe there will always be a place in business IT for the local data center and cloud provider. The local data center and cloud provider mentioned here is not an engineer putting a rack up in his basement, or even the IT service provider whose name you recognize hosted in another data center. The local data center I am referencing has been in business many years, most likely before the technology of “cloud” was invented. My hometown of Cincinnati, Ohio has such a respected data center, 3z.net. 3z has been in business for over 25 years and offers its clients a 100% uptime Service Level Agreement (SLA). It has all the characteristics a business looks for in an organization it trusts its data with: generators, multiple layers of security, and SOC 2 compliance. It uses only top-tier telecom providers for bandwidth, and its cloud infrastructure uses technology leaders such as Cisco and VMware. Most of all, 3z is easy to do business with.

To follow are three primary reasons to use a local datacenter.

Known and Predictable Cost-

The local data center’s cloud costs may appear more expensive in an initial cost evaluation; however, they are often less expensive in the long run. There are many reasons for this, but most often it comes down to the rate charged for transmitting and receiving data to and from your cloud. Large public clouds charge fees based on each gigabyte of outbound data. While it is pennies per gigabyte, it can add up quickly. With per-gigabyte charges, the business doesn’t know all of its costs up front. The local datacenter will typically charge a flat fee for monthly bandwidth that includes all the data coming and going. This creates an “all you can eat” model and a fixed cost.
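A quick sketch makes the difference concrete; the traffic volume and rates below are illustrative assumptions, not published pricing from any provider.

```python
# Compare metered public-cloud egress with a local data center's flat bandwidth fee.
# All figures below are illustrative assumptions, not published pricing.
egress_gb_per_month = 8_000          # outbound data your applications serve each month
public_cloud_per_gb = 0.09           # assumed per-gigabyte egress rate
local_dc_flat_fee = 450.00           # assumed flat monthly bandwidth charge

public_cloud_monthly = egress_gb_per_month * public_cloud_per_gb
print(f"Public cloud egress: ${public_cloud_monthly:,.2f} per month (grows with traffic)")
print(f"Local data center:   ${local_dc_flat_fee:,.2f} per month (flat, known up front)")
```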

Customized and Increased Support for Applications-

Many of the applications the enterprise will run in the cloud may require customization and additional support from the cloud provider. A good example of this is Disaster Recovery (DR), or Disaster Recovery as a Service (DRaaS). DRaaS requires a higher level of support for the enterprise in the planning phases, as most IT leaders have not been exposed to DR best practices. Additionally, the IT leaders in the enterprise want the assurance of a trusted partner to rely on in the unlikely event they declare a DR emergency. In many of the local cloud providers and datacenters I work with, the president of the datacenter will happily provide his private cell phone number for assistance.

Known and Defined Security and Compliance-

Most enterprise leaders feel a certain assurance in knowing exactly where their data resides. This may never change, at least not for an IT auditor. Knowing the location and state of your data also helps the enterprise “check the boxes” for regulatory compliance. Many times the SOC certifications are not enough; more specific details are required. 3z in Cincinnati will encrypt all of your data at rest as a matter of standard process. Additional services like these can ease the IT leader’s mind when the time for an audit comes.

It is my opinion that the established local datacenter will survive and flourish. However, it may need to adjust to stay relevant and competitive with the large public cloud providers. For example, it will need to emulate some of the popular public cloud offerings, such as an easy-to-use self-service portal and a “try it for free” cloud offering. I believe the local datacenter’s personalized processes are important, and I am rooting for 3z and its competitive peers to prosper in the future.

If you would like to learn more or visit 3z please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

Ohio Datacenter with AWS Direct Connect Now Open


Datacenter Trends

It’s beginning to feel more like Silicon Valley in Central Ohio: there is now an Ohio datacenter with AWS Direct Connect. If you haven’t seen or heard about the new Cologix datacenter, take a minute to read on.

Cologix has been in the Columbus area for many years and operates 27 network-neutral datacenters in North America. Its newest facility, COL3, is the largest multi-tenant datacenter in Columbus and resides on the same 8-acre campus as its existing datacenters, COL1 and COL2. It offers over 50 network service providers, including the Ohio-IX Internet Exchange peering connection.

Most exciting of all are its 20+ cloud service providers, which include a direct connection to the market-leading Amazon Web Services (AWS). This is the first AWS Direct Connect point in the region, providing customers with low-latency access to AWS US East Region 2. With Direct Connect, AWS customers create a dedicated connection to the AWS infrastructure in their region. When AWS is in the same datacenter where your IT infrastructure resides, such as Cologix, all that is needed for connectivity is a small cross-connect fee.

Here are some pertinent specifications of Cologix COL3:

Facility:

    • Owned & operated 200,000+ SQF purpose-built facility on an 8-acre campus
    • Rated to Miami-Dade hurricane standards
    • 4 Data Halls – up to 20 megawatts (MW)
    • 24” raised floor with anti-static tiles
    • 150 lbs/SQF floor loading capacity with dedicated, sunken loading deck

Power:

  • 2N Electrical, N+1 Mechanical Configurations
  • 2N diverse feeds from discrete substations
  • Redundant parallel IEM power bus systems serve functionality and eliminate all single points of failure
  • 2N generator configuration- Two (2) MW Caterpillar side A and Two (2) MW Caterpillar side B
  • On-site fuel capacity for 72 hours run time at full load
  • Redundant 48,000-gallon tanks onsite, priority refueling from diverse supplies & facility exemption from emergency power

Cooling:

  • Raised floor cold air plenum supply; return air plenum
  • 770 tons per Data Hall cooling capacity
  • Liebert DSE units with pumped-refrigerant economization
  • Concurrently maintainable A & B systems

Network:

  • 50+ unique networks in the Cologix-controlled Meet-Me-Room
  • Network neutral facility with 16+ fiber entrances
  • Managed BGP IP (IPv4 & IPv6); multi-carrier blend with quad-redundant routers & Cologix provided customer AS numbers & IP space
  • Most densely connected interconnect site in the region including dark fiber network access
  • Connected to the Columbus FiberNet system plus fiber feeds reaching all 88 Ohio counties
  • Metro area dark fiber available

 

If you would like to learn more or visit COL3 please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net  

www.twoearsonemouth.net

 

What is a Software Defined Wide Area Network (SD-WAN)?


image courtesy of catchsp.com

The trend for software or applications to manage technology and its processes has become commonplace in the world of enterprise IT. So common, in fact, that it has created its own prefix for IT solutions: Software Defined, or SD. Virtualization software from companies like VMware revolutionized the way the enterprise built datacenters and coined the phrase “software defined network”. Today this concept has expanded from the corporate datacenter to the Wide Area Network (WAN), and ultimately to the enterprise branch offices and even to customers. The Software Defined WAN (SD-WAN) can simplify management of the WAN and significantly reduce the cost of the telecommunication circuits that create it.

What’s a WAN?

A Wide Area Network, or WAN, allows companies to extend their computer networks to connect remote branch offices to data centers and deliver the applications and services required to perform business functions. Historically, when companies extend networks over greater distances, and sometimes across multiple telecommunication carriers’ networks, they face operational challenges. Additionally, with the increase of bandwidth-intensive applications like Voice over Internet Protocol (VoIP) and video conferencing, costs and complications grew. WAN technology has evolved to accommodate bandwidth requirements. In the early 2000s Frame Relay gave way to Multi-Protocol Label Switching (MPLS). However, MPLS has recently fallen out of favor, primarily because it has remained a costly, carrier-controlled technology.

Why SD-Wan?

MPLS, a very mature and stable WAN platform, has grown costly and less effective with age. The business enterprise needs to select one MPLS vendor and use it at all sites. That MPLS provider needs to look to a local telecom provider to provide the last mile to remote branches, and possibly even the head end. This has historically brought unwelcome blame and finger-pointing when a circuit develops trouble or goes out of service. It also creates a very slow implementation timeline for a new site. MPLS solutions are typically designed with one internet source at the head end that supports the entire WAN for web browsing. This creates a poor internet experience for the branch and many trouble tickets and frustrations for the IT team at the head end. SD-WAN can eliminate these problems, but if it isn’t designed correctly it has the potential to create problems of its own.

SD-WAN uses broadband internet connections at each site for connectivity. The software component of the solution (the “SD”) allows for the management and monitoring of these circuits provided by multiple vendors. The broadband connections are ubiquitous and inexpensive, provided by local cable TV providers. Broadband internet connections offer more bandwidth and are much less expensive than an MPLS node. Additionally, broadband circuits can be installed in weeks instead of the months required for a typical new MPLS site. In an SD-WAN deployment, each site has its own internet access over the same broadband circuit that delivers the WAN connectivity. This greatly increases branch users’ satisfaction with internet speed and reduces total traffic over the WAN. However, it creates a challenge for the cyber security of the enterprise: when each remote site has its own internet, each site needs its own cyber security solution. Providing a valid cyber security solution can reduce the cost savings that result from the broadband internet.

Gartner recently labeled SD-WAN a disruptive technology due to both its superior management of a WAN and its reduced costs. Implementation of an SD-WAN solution requires a partner with expertise. Some providers today pride themselves on having the best database for finding the cheapest broadband circuits for each site. However, it is vital to pick a partner that can also provide ongoing management of the circuits at each site and a deep understanding of the cyber security risks of an SD-WAN solution.

If you need assistance designing your SD-WAN Solution please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net

#sdwan #sd-wan

 


Financial Benefits of Moving to Cloud


image courtesy of betanews.com

There are many benefits that cloud technology can offer a business; however, business doesn’t buy technology for technology’s sake, it buys it for positive business outcomes. The two business outcomes desired most are to increase revenue and reduce cost. Information Technology (IT) has long been known to be one of the costliest departments in a business. So it makes sense, if we’re going to recommend a cloud solution, that we look at the financial benefits. The financial advantages, paired with the expertise to determine which applications should migrate to the cloud, create a cloud strategy. This consultation is not completed just once but needs to be repeated periodically by a strategic partner like Two Ears One Mouth. Just as telecommunications and internet circuits can become financially burdensome as a business grows, so can a cloud solution. Telecom cost recovery became a financial necessity for businesses when telecom costs spiraled out of control; a consultant would examine all the vendors and circuits to help the business reduce IT spend by eliminating waste. The cloud user faces a similar problem, as cloud services can automatically grow as demand increases. That growth includes the cloud solution’s cost as well as its resources.

 

To follow are the three primary financial benefits of a cloud migration.

 

CapEx vs OpEx

The primary financial benefit most organizations plan for with their first cloud implementation is the shift to an operational expense (OpEx) instead of a capital expense (CapEx). This is particularly beneficial for startup companies and organizations that are financially constrained. They find comfort in the “pay as you go” model, similar to other services they need, such as utilities. Conversely, enterprises that invest in equipping their own data centers have racks of equipment that depreciate quickly and utilize a fraction of the capacity purchased. It has been estimated that most enterprises have an IT hardware utilization rate of about 20% of total capacity. Cloud services allow you to pay only for what you use and seldom pay for resources sitting idle.

 

Agility and scale

Regardless of the size of your business, it would be financially impractical to build an IT infrastructure that could scale as quickly as the one you rent from a cloud provider. This agility allows businesses to react quickly to IT resource needs while simultaneously reducing cost. Many cloud solutions can predict when additional resources are needed and are able to scale the solution appropriately. This provides obvious benefits for the IT manager but can create problems with the IT budget. If the cloud solution continues to scale upward, and it is billed transactionally, the cost can escalate quickly. Cloud instances need to be monitored constantly for growth and cost. For this reason, Two Ears One Mouth consultants have developed a product known as cloud billing and support services (CBASS). CBASS makes sure the benefits originally realized with the cloud migration remain intact.
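As one example of the kind of check this monitoring involves, here is a minimal sketch that pulls a month of spend by service from AWS Cost Explorer with boto3; the billing period is a placeholder, and other clouds expose equivalent cost APIs.

```python
import boto3

# Pull one month's spend broken out by service, the kind of review a
# recurring billing check automates.
ce = boto3.client("ce")   # AWS Cost Explorer
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},   # example billing period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```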

 

Mitigate risk

Many best practices in setting up a cloud infrastructure also enhance IT security. For instance, because your data resides elsewhere, cloud users tend to implement data encryption. This encryption can cover not only the data at rest in the cloud provider’s datacenter but also data in transit between the datacenter and the customer. This is a wise practice for IT security: it can help prevent data breaches and benefit regulatory compliance in some cases. Additionally, security software and hardware, such as firewalls, tend to be superior in larger IT datacenters, such as those of a cloud provider. Ironically, IT security, which started as a concern about cloud computing, has become an advantage.

 

Cloud technology has long since proven itself and is here to stay. It has reduced IT budgets while improving IT response time. However, the cost savings of cloud are not automatic or permanent. Savings, as well as the solution itself, need to be measured and affirmed regularly. Our consultants can monitor your cloud environment, leaving you to focus on the business.

If you need assistance with your current IT cloud project  please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net

Are Containers the Forecast for Cloud?

image courtesy of kubernetes.io

One of the most exciting and simultaneously challenging things about working in technology is the speed at which change occurs. The process from cutting-edge technology to ubiquitous, commoditized product can happen in the blink of an eye. Now that the cloud has made its way into businesses of all sizes and types, the next related technology has emerged: containers. So it is fair to ask: are containers the forecast for cloud?

How we got to this port

VMware’s introduction of virtualization was thought by many to be the predecessor of cloud as we know it today. This revolutionary technology allowed early adopters to reduce costs and enhance their IT agility through virtualization software. The days of a physical server for each application are over. Cloud technology has evolved from a single piece of software for the enterprise to an outsourced product provided to businesses by major technology institutions such as Amazon, Microsoft, and Google. Most recently, containers have emerged as a next step for cloud, developed largely to suit the needs of software developers.
The difference between Virtual Machines (VMs) and Containers

A container is defined by Docker as a stand-alone executable software package that includes everything needed to run an application: code, runtime, system libraries, and settings. In many ways that sounds like a VM, but there are significant differences. Above the physical infrastructure, a VM platform uses a hypervisor to manage the VMs, and each VM has its own guest operating system such as Windows or Linux (see image #1). A container uses the host operating system and the physical infrastructure, which supports the container platform such as Docker (see image #2). Docker then supports the binaries and libraries of the applications. Containers do a much better job of isolating applications from their surroundings, and this allows the enterprise to use the same container instance from development to production.


(Image 1)                                                            (Image 2) Images courtesy of docker.com
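To make the distinction concrete, here is a minimal sketch using the Docker SDK for Python to start a container; nothing boots a guest operating system, the container simply runs on the host kernel with the libraries packaged in the image.

```python
import docker  # Docker SDK for Python (pip install docker); assumes a local Docker engine

client = docker.from_env()

# Start a container from a small public image. There is no guest OS to boot:
# the process runs on the host kernel with the image's packaged libraries.
output = client.containers.run("alpine:3.9", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```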

How can Containers be used in the Enterprise today?

Docker is currently the most popular company driving the movement toward container-based solutions in the enterprise. The Docker platform enables independence between applications and infrastructure, allowing applications to move from development to production quickly and seamlessly. By isolating software from its surroundings, it can help reduce conflicts between teams running different software on the same infrastructure. While containers were originally designed for software developers, they are becoming a valuable IT infrastructure solution for the enterprise.

One popular platform allowing the enterprise to benefit from container technology is Kubernetes. Kubernetes is an open-source system originally designed by Google and donated to the Cloud Native Computing Foundation (CNCF). Kubernetes assists with three primary functions in running containers: deployment, scaling, and monitoring. Finally, open source companies such as Red Hat are developing products to help utilize these tools and simplify containers for all types of business. OpenShift, designed by Red Hat, is a container application platform that has helped simplify Docker and Kubernetes for the business IT manager. The adoption of new technology, such as cloud computing, often takes time to be accepted in the enterprise. Containers seem to be avoiding this trend and have been accepted and implemented quickly in businesses of all types and sizes.
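For example, a deployment already running in a cluster can be scaled with a few lines of Python using the official Kubernetes client; the deployment name and namespace below are placeholders, and a working kubeconfig is assumed.

```python
from kubernetes import client, config   # pip install kubernetes

# Load credentials from the local kubeconfig (placeholder cluster assumed).
config.load_kube_config()
apps = client.AppsV1Api()

# Scale an existing deployment to three replicas, one of the management
# tasks (deployment, scaling, monitoring) Kubernetes exposes through its API.
apps.patch_namespaced_deployment_scale(
    name="example-web",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```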