Unique Ways to Reduce IT Cost

(Information in italics courtesy of BOERGER CONSULTING, LLC)


image courtesy of connectedgear.com

All businesses today benefit from Information Technology (IT), but many still consider it a problematic cost center. As technology has progressed, more forward-thinking organizations have started to see IT as a valuable tool and an investment in their business's future. Still, every business is looking to reduce costs, and IT is often the first place they look to trim the budget.

Some of the traditional approaches to reducing IT cost began on the hardware side with the advent of cloud computing and virtualization. Virtualization is a software layer that allows IT hardware to deliver much greater capacity and functionality, reducing hardware costs. Other popular savings have come from hiring less expensive personnel, such as interns, or outsourcing IT staff entirely. A more recent trend has been to utilize open-source software for business processes. Open-source software is developed within a community and offered for free, with options to pay for technical support. However, many businesses and business functions don't have the flexibility for open source and must maintain relationships with software companies, with expensive and complex licensing. This has led larger organizations to pursue cost reduction by investigating their physical assets and software licenses, a practice commonly referred to as IT Asset Management (ITAM). With a proper management plan, the savings in both cost and time can be staggering.

Boerger Consulting, a partner firm of Two Ears One Mouth IT Consulting (TEOM), is an innovator in the ITAM consulting realm. Their focus is helping businesses reduce the overall operational cost of their IT department by properly managing, measuring, and tracking their hardware and software assets. According to Boerger Consulting, the threat is real:

“Some software publishing firms rely on software license audits to generate 40% of their sales revenue. These companies wait and watch your volume license agreements, your merger & acquisition announcements, and the types of support tickets called in, to pinpoint exactly when your organization is out of license compliance. They count on your CMDB and SAM tools to be inaccurate. They make sure the volume license agreement language is confusing and convoluted. And they make sure their auditors always find something – unlicensed software, expired warrantees, unknown network segments – to justify your penalty.”

The payoff, however, is also real. Citing a 2016 paper from Gartner, Boerger Consulting suggests an organization could eliminate thirty percent (30%) of its IT software budget with a proper software asset management (SAM) program:

“Part of that savings is finding the hardware and software lost in closets and under desks and odd corners of your warehouse. Another part is identifying and eliminating licenses, support, and warranties for programs and hardware your organization no longer needs. The last part is proactively locating and eliminating audit risks before the auditors do, and then pushing back on them when it is time to renew your volume license agreements.”
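To make the potential 30% figure concrete, the sketch below models how a SAM program's savings might break down across the three sources Boerger Consulting describes: recovered lost assets, eliminated unused licenses and support, and avoided audit risk. All the percentages are illustrative assumptions, not figures from Gartner or Boerger Consulting.

```python
# Illustrative SAM savings model; every percentage here is a
# hypothetical assumption for demonstration purposes only.
def sam_savings(it_software_budget, lost_assets_pct=0.08,
                unused_licenses_pct=0.15, audit_risk_pct=0.07):
    """Split an assumed ~30% total savings into the three sources
    the article describes."""
    breakdown = {
        "recovered_lost_assets": it_software_budget * lost_assets_pct,
        "eliminated_unused_licenses": it_software_budget * unused_licenses_pct,
        "avoided_audit_risk": it_software_budget * audit_risk_pct,
    }
    breakdown["total"] = sum(breakdown.values())
    return breakdown

savings = sam_savings(1_000_000)  # on a $1M budget, total ≈ $300,000
```

Even at half these assumed rates, the recovered dollars typically dwarf the cost of running the SAM program itself.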

Although we focus on different technologies, a synergy has developed between TEOM and Boerger Consulting. We have found that when a business decides to move its IT infrastructure off-site, unforeseen challenges can occur. These challenges may involve privacy and data-protection compliance when retiring on-premises servers, or the different software licensing practices required in the cloud. These are two examples of issues that TEOM and Boerger Consulting can solve.

If after reading this article you still have questions about any of these technologies, Jim Conwell and Jeremy Boerger would be happy to meet for a free initial consultation. Please contact:

Jim Conwell (513) 227-4131     jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 


5G or Not 5G: What, When, and How It Affects IT and Cloud

 

Before entering the cloud and IT business, I spent more than a decade working with wireless technologies for business. During this time, I saw the advent of data on cell phones and the transitions between the generations of data service that have been delivered. Second-generation (2G) networks brought very rudimentary data to cell phones, such as text messaging. 3G brought the internet and applications such as mobile email. 4G brought the high-speed internet we use today, offering instant access to applications such as real-time video. With each transition, corporate marketing and spin became more extravagant, widening the gap between a product's introduction and its practical delivery. Now comes 5G, and I expect this trend to continue. Although wireless carriers already advertise 5G availability, the products are not fully developed for practical use and are likely years away from business applications.

What is 5G and who will provide it?

The latest generation of wireless data, 5G, will be provided by the same carriers that delivered wireless service in the past: AT&T, Verizon, and Sprint. Although the primary standards for 5G have been set, much of the technology is still under development and will likely be introduced in different versions. This is similar to 4G when it first launched with its distinct alternatives, WiMAX and LTE. 5G marketing has already split into different labels: 5G, 5GE (which is actually enhanced 4G LTE), and 5 GHz (a Wi-Fi frequency band unrelated to cellular 5G). Verizon's first 5G introduction is designed for the home and small office, while AT&T is focused on mobile devices in very limited markets. Most believe there will be fixed wireless versions for point-to-point circuits for business. At this point, it isn't clear which versions each provider will offer as 5G matures and becomes universally delivered.

The technology of 5G

Like all previous generations in the evolution of wireless data, 5G offers greater speed as its primary driver for acceptance. What may slow widespread deployment of 5G is that 4G technology continues to improve and provide greater speeds for users. However, the wireless spectrum available to 4G providers is running short, so the transition to 5G is imminent. Much of the 5G technology will be delivered on new spectrum, above 6 GHz, not previously used for consumer wireless. This new swath of spectrum offers much greater capacity and speed but won't come without its own challenges. To achieve these higher speeds, carriers will need to use much higher-frequency transmissions called millimeter waves. Millimeter waves cannot penetrate buildings, weather, and trees as well as lower frequencies can. To overcome this, wireless carriers will need to deploy additional, smaller cell sites called microcells. Many carriers have already implemented microcells to complement the macrocells used in previous generations of wireless service. Building out additional network capacity and cell sites such as microcells is expensive and time-consuming, which will add to the delay of a fully implemented 5G offering from the carriers.

Business advantages of 5G

To say that one of the advantages of 5G is greater data speed would be true, but there is much more to it for business applications. The following are the primary speed-related advantages that 5G will provide for business cloud computing.

  • Lower latency – 5G networks will greatly decrease latency, the time it takes a data packet to travel across the network. This will benefit many business applications such as voice, video, and artificial intelligence (AI).
  • Multiple connections – 5G base stations, or cell sites, will handle many more simultaneous connections than 4G. This will increase speed for users and capacity for providers.
  • Full-duplex transmission – 5G networks can transmit and receive data simultaneously. This full-duplex transmission increases the speed and reliability of wireless connectivity, enabling new applications and enhancing existing ones.
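To see why lower latency matters as much as raw bandwidth, consider a chatty application that makes many small sequential requests. The round-trip times below are rough illustrative assumptions (tens of milliseconds on 4G, single digits targeted by 5G), not measured values:

```python
# Illustrative only: the latency figures are generic assumptions.
def total_time_ms(n_requests, rtt_ms):
    """Time a chatty application spends waiting on sequential
    network round trips (ignores transfer and processing time)."""
    return n_requests * rtt_ms

requests = 200  # e.g., API calls behind one page load or control loop
t_4g = total_time_ms(requests, rtt_ms=50)  # 10,000 ms of round-trip wait
t_5g = total_time_ms(requests, rtt_ms=10)  # 2,000 ms of round-trip wait
```

For latency-sensitive applications like voice and AI inference, that fivefold reduction in wait time can matter more than the headline bandwidth increase.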

Cloud and business solutions enhanced by 5G

It is difficult to say exactly how businesses will benefit from 5G service since it is still being developed. However, the advantages listed above lend themselves to several applications which are sure to be enhanced for business.

The increased speeds and decreased latency 5G offers will expand options and availability for disaster recovery (DR) and data network backups. When speeds previously available to business only via wireline can be delivered without wires, business continuity will improve. Many business outages today are caused by accidental cable cuts and power outages, which wireless 5G connections can avoid. It is also possible that wireless point-to-point circuits could replace traditionally wired circuits for a business's primary data and internet service.

The growing number of internet of things (IoT) applications will also be enhanced by 5G. The increased speed and connection capacity will allow this ubiquitous technology to continue to grow. Similarly, the trend toward more and faster edge computing connectivity will benefit. This will enhance applications such as autonomous vehicles, which require instant connectivity to networks and other vehicles. Content delivery networks, like those used to deliver Netflix, will be able to deliver their products faster and more reliably. These are just a few examples of technologies that demand 5G's advantages today and will expedite its availability.

While the technology to deliver 5G is mostly complete, the timing of widespread implementation for business is still unclear. This is attributable in part to improving 4G speeds, which continue to satisfy today's consumer needs. More importantly, new technologies are accepted in the marketplace not simply because the technology is ready, but because business applications demand them. 5G will be driven by many business applications, but widespread acceptance won't occur for at least another two years. If you want to consult with a partner with expertise in all aspects of telecom, wireless, and cloud technologies, give us a call and we will be glad to find the right solution for your business.

Contact @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

Colocation’s Relevance Today for Business

The past several years have seen a "cloud-first" IT infrastructure strategy evolve for business. The inclination toward a totally on-premises infrastructure has decreased as hybrid cloud solutions have expanded. While off-premises solutions are the up-and-coming choice for many businesses, on-premises infrastructure continues to be utilized by most companies. As businesses plan their IT strategies for the future, they should explore alternatives to the cloud and consider the reasons why cloud may or may not be the best fit for all their applications. Businesses have seen the value of taking their data off-site for years without handing it over to a cloud provider. The primary alternative has been colocation (colo). Many have seen a renewed interest in colo with the growth of hybrid cloud, as the large public cloud providers have changed their products to promote hybrid cloud architectures. Here I will review these changes and discuss colocation's relevance today for business.

Colo defined and best use cases

Colo allows organizations to continue to own and maintain control of their IT hardware while taking advantage of an off-premises solution offering increased uptime and security. As part of the colo agreement, the data center offers space, power, and bandwidth to its clients in a secured and compliant area within its facility. Although data centers are some of the most secure places in the world, they can still offer clients access to their IT resources 24 hours a day, 365 days a year. They accomplish this through multiple layers of security, including security guards, video monitoring, and biometrics. This ability for colo customers to access and touch their equipment provides a psychological advantage for many businesses.

Another advantage of colo is power, which can be offered with options including feeds from multiple power utilities. Redundant power offers additional safeguards against an IT outage; this type of power configuration is not available in most business office buildings. A data center can also offer power at a reduced rate because of its purchasing power with the utility. With more power comes more cooling requirements, and the data center provides better cooling as well, again with spare resources to assure it's always available. Finally, bandwidth is a commodity the data center buys in bulk and offers to its colo customers at a savings.

Regulatory compliance is another important advantage driving users to a colo solution. Colo provides its customers instant access to an audited data center, such as one with SOC 2 compliance. Colo has long been believed to offer more security and compliance than on-premises or cloud.

Considerations before moving to colo

The primary items to consider before moving to colo relate to the space and power components of the solution. Colocation space is typically offered by the rack or by a private cage consisting of multiple racks. In either offering, a prospective buyer should consider the requirements for expanding their infrastructure. In a cage, a customer is typically offered "reserved space" that can be purchased up front and activated when required. When the customer doesn't require the segregation of a cage, they will purchase racks adjacent to other business customers, which can make expansion more complex. Customer-focused data centers allow a business to reserve adjacent racks without activating the power, typically at a discounted rate. Contiguous space is important in a colo, so consider additional space for growth with the initial purchase.

Regarding power, make sure you research the amperage and voltage requirements of your infrastructure and its potential for growth. Data centers have many diverse power offerings, so consult with an expert like TEOM about the requirements of your IT equipment.
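As a starting point for that power conversation, a rack circuit's usable continuous capacity can be estimated from its voltage and amperage. The 80% derating below reflects the common electrical practice of not loading a breaker beyond 80% of its rating for continuous loads; confirm the exact figures with your data center's electrical specifications.

```python
def usable_rack_kw(volts, amps, derate=0.8):
    """Estimate continuous usable power (kW) for a rack circuit.
    The 0.8 derate is the common 80%-of-breaker-rating rule for
    continuous loads; verify against your facility's specs."""
    return volts * amps * derate / 1000

# A common US colo circuit: 208V / 30A single-phase
kw = usable_rack_kw(208, 30)  # ≈ 4.99 kW usable per circuit
```

Comparing that figure against the summed nameplate draw of your servers (plus growth headroom) tells you how many circuits, and therefore racks, to contract for.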

Today’s evolving advantages of colo

Most business IT infrastructures today, whether on-premises or colocation, utilize some type of cloud presence for required applications such as disaster recovery. The byproduct of this growing trend is hybrid cloud implementation. Like the term cloud, hybrid cloud can have many definitions; for our purposes here, hybrid cloud means resources that complement your primary on-premises infrastructure with a cloud solution. The large public cloud providers most often used by businesses have expanded their presence beyond their own data centers to occupy cages of colo in large multi-tenant data centers. This lets the cloud providers get physically closer to their customers, which creates an advantage for a business user in that data center who needs to implement a hybrid cloud solution.

Previously, customers of the large public clouds have relied either on the internet for inexpensive connectivity or on expensive dedicated telecom circuits to connect "directly" to their cloud provider. Direct connections have been prohibitively expensive for most businesses because of the high cost of the telecom circuits required to reach the public cloud. Some have justified the high cost of direct connect with its increased security and greatly reduced data egress costs. Egress charges are the cost to move data from the public cloud to the business. Typical egress charges for public cloud providers can be as much as $0.14 per gigabyte; with a direct connection, they drop to as low as $0.02 per gigabyte as of the time this article was written. Because of this, direct connect can save users thousands of dollars while greatly increasing security. When the public cloud provider is in the same data center as the colo customer, a direct connection can take the form of a "cross connect" within the data center. This common data center service costs a fraction of the telecom circuits mentioned previously. The economic benefit multiplies if the business connects to multiple public clouds (multi-cloud).
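A quick sketch of the egress math, using the per-gigabyte rates quoted above and a hypothetical 50 TB of monthly egress (this deliberately ignores the port and cross-connect fees a direct connection adds, so treat it as an upper bound on the savings):

```python
def monthly_egress_cost(gb_per_month, per_gb_rate):
    """Monthly cost of moving data out of the public cloud."""
    return gb_per_month * per_gb_rate

# Per-GB rates cited in the article (at time of writing)
internet_rate = 0.14
direct_connect_rate = 0.02

gb = 50_000  # hypothetical 50 TB/month of egress
cost_internet = monthly_egress_cost(gb, internet_rate)       # ≈ $7,000/month
cost_direct = monthly_egress_cost(gb, direct_connect_rate)   # ≈ $1,000/month
monthly_savings = cost_internet - cost_direct                # ≈ $6,000/month
```

Even after subtracting a few hundred dollars a month for the cross connect itself, the savings at this volume remain substantial.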

A more recent trend has the large public cloud providers creating a hybrid cloud on the customer's premises. Microsoft's solution, called Azure Stack, was introduced first and now has a competing product from AWS called Outposts. These products, to be covered in a future article, place the providers' hardware and cloud infrastructure on the customer's site, further validating that hybrid is here to stay.

Colo remains relevant today for many of the same reasons it has been chosen for years: availability, security, and compliance. As the large public cloud providers expand outside of their own data centers to get closer to their customers, new advantages for businesses have emerged. When a fiber cross connection in a common data center can be used to direct connect to a public cloud provider, enormous benefits are realized. Ironically, as the public cloud providers grow, colocation has found new life and will remain relevant for the foreseeable future.

If your business wants to stay competitive in this ever-changing environment

Contact us @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 


InfoCase Case Study

About InfoCase

For over 24 years, InfoCase has been an industry leader in the design and manufacturing of cases, harnesses, and other protective solutions for mobile devices. Their mission is to reduce damage to their customers' mobile technology while increasing the efficiency of its use. They accomplish this by crafting innovative, solution-oriented products that exceed their clients' expectations. What follows is the InfoCase case study of the solutions provided by Two Ears One Mouth IT Consulting.

The Challenges

After more than twenty years of steady growth, InfoCase's expansion has accelerated in the past several years. The company has outgrown some of its communications and IT platforms, each requiring a review for updates. Their premises-based Cisco phone system, while still working, was aging and out of manufacturer support; if a failure occurred, it could take days or weeks to restore phone service. InfoCase's internet service was a shared cable service at a price point that was competitive when purchased but no longer advantageous in a time of falling telecom prices. The cable company also provided their four analog phone lines, which restricted growth and lacked today's features such as direct inward dialing (DID) and calling party identification. Similarly, their Cisco firewall support contract had expired, and manufacturer support was not available. The company had aging file servers in an un-virtualized environment. Finally, many of their business applications, such as Customer Relationship Management (CRM) and their accounting software, needed to move to cloud-based versions to enable off-site access.

The Solutions

With several tasks to complete, priorities needed to be set. Two Ears One Mouth IT Consulting (TEOM) recommended the voice communications system be upgraded first, and InfoCase agreed they would be best served by a Unified Communications as a Service (UCaaS) offering. Familiar with InfoCase's needs as well as the UCaaS providers, TEOM recommended and presented three UCaaS solution providers. TEOM's upfront discovery enabled InfoCase to make an informed and quick decision on the best provider for this service. The winning provider had a reliable, feature-rich UCaaS solution that included a dedicated fiber internet connection to support voice and data communications. This dedicated service offered advantages to their voice communications unavailable from their existing service.

Next, InfoCase needed an IT solution partner to replace aged hardware, create backups for business continuity, and provide ongoing IT services. Under TEOM's direction, the best provider for InfoCase was selected and given an overview of the customer's needs. The IT provider systematically identified the most critical hardware replacement needs, presented a plan with pricing, and started the upgrades.

A typical engagement with TEOM, as with InfoCase, does not end with any single project. We continued an ongoing dialogue addressing less urgent needs and recommending plans for keeping all technology current. InfoCase appreciated TEOM’s personalized approach and their wide variety of supplier/partner options. In this approach, TEOM has established the groundwork for a long-term partnership and allowed InfoCase to focus on their business. 

About Two Ears One Mouth IT Consulting

TEOM provides insight to help organizations determine the best suppliers for cloud, data center, and telecommunications services. We provide a personalized strategy and solution, saving our clients' teams hours of investigation and negotiation with potential suppliers. We offer our clients a wide breadth of provider partners and create ongoing, long-term business relationships.

jim.conwell@twoearsonemouth.net (513) 227-4131

 

Trends for Cloud and IT Providers from the Past Year


One of the primary benefits I offer my customers is insight and expertise in cloud and IT services for business. I develop my insight and best practices by working closely with a wide breadth of supplier partners that set the trends in their technologies. These IT innovators range from the largest public companies, earning billions of dollars each quarter, to small entrepreneurs providing IT services to small and medium-sized businesses (SMBs). Staying current with technology is vital to my customers, so once a year I take time to review the trends in cloud and IT. Here, I will describe recent trends in the primary technologies that are my focus: Infrastructure as a Service (IaaS), Unified Communications as a Service (UCaaS), and IT Managed Service Providers (MSPs).

IaaS

Much of the change in IaaS is driven by technologies and services delivered by the cloud hyperscalers such as AWS, Microsoft Azure, and Google Cloud Platform. They have created environments open to virtually all operating systems and software applications. In a similar fashion, the regional data centers and cloud providers I partner with have evolved to hyper-converged platforms. Hyper-converged platforms create a software-defined IT infrastructure that replaces some traditional components of cloud, such as storage area networks (SANs) and networking components like firewalls and switches. This trend has also spread to private clouds for organizations that build their own cloud infrastructure on premises.

In addition to hyper-convergence, most IaaS providers have also capitalized on traditional strengths, like bandwidth, that allow them to better compete with the hyperscalers. These include cloud configurations with a fixed, budget-friendly cost structure for data transfer, or egress, charges. Many hyperscaler customers have been shocked when a low initial cost rises quickly as their data requirements increase. Most trends in IaaS happen first at the hyperscalers and then move downstream to the regional cloud providers as they reach general acceptance.

UCaaS

UCaaS, or hosted IP phone service, has experienced exponential growth with both business users and cloud providers. The purchase of BroadSoft by Cisco early in 2018 led the way for many cost-effective UCaaS solutions with enhanced communication features. Providers are beginning to reach a critical mass of prospects where the product is becoming commoditized and price is a key component of the buyer's decision. A handful of providers have differentiated their UCaaS services through integrations with Customer Relationship Management (CRM) software or other SaaS products. Additionally, some innovative software developers have intrigued customers by taking an "out of the box" view of voice communications. Companies like Dialpad, created by ex-Google engineers, have guided their customers to rethink UCaaS as more than a phone system hosted in the cloud. They have created an open communications platform that integrates all the enterprise's communication tools; their solutions often deliver a voice communication platform without a traditional desk phone. Whatever the technology or provider, UCaaS has become ubiquitous. Once a business accepts the OpEx model of monthly rental for voice communications, the advantages of UCaaS are undeniable.

Managed Service Providers (MSP)      

In my work guiding clients to the best cloud providers, I often uncover needs for traditional on-site IT services. These needs are most often driven by the loss of IT personnel or rapid company growth. I wrote an article last year, Is the MSP Model Right for Your Business, that covered this subject in greater detail. The trend described there continues to evolve toward a partner-like relationship between MSP and customer, covering the full range of services an internal IT department would provide. This mindset is effective if the MSP listens to the customer's needs and is flexible enough to customize its offering to the customer's specific requirements.

Because I stay close to and communicate frequently with my supplier partners, I stay abreast of providers' changes and how they relate to the industry. I look forward to 2019 as a time of continued growth in cloud computing offerings. As the technology matures, it will provide more opportunities to demonstrate how the cloud adds value to a business's IT strategy. Understanding these technology trends as they evolve will allow Two Ears One Mouth IT Consulting to provide valuable insight to clients for years to come.

          If your business is unique and requires a personalized IT provider strategy and solution

Contact us @ Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Cloud Security is a Shared Responsibility


When I first became familiar with enterprise cloud computing, one of the primary objections to cloud adoption was the security of the data within the applications. Today that has changed; as cloud has matured, it is now seen as an option for IT infrastructure that can be designed to be more secure than on-premises solutions. Through the process of designing cloud infrastructure, IT professionals have become aware of the increased security benefits cloud offers. Concurrently, IT data exposures and breaches have become more widespread, and security has become a greater responsibility. These factors have led cloud architects, whether at a cloud provider or within an enterprise, to realize that cloud security is a shared responsibility. It is shared in two respects: first, among the different groups within the enterprise; and second, between the enterprise and the cloud provider. For the purposes of this article, we will focus on the sharing of security responsibilities between the enterprise and the provider. To simplify the roles and responsibilities, I will divide cloud computing into three categories: infrastructure; operating systems and applications; and customer data.

Private Cloud

Most IT users today consider a virtualized stack of on-premises IT infrastructure a private cloud. In this scenario, as before cloud computing, the enterprise is responsible for all aspects of security. In an on-premises private cloud, the enterprise must secure its data from all physical, technical, and administrative threats. In large organizations, these responsibilities can be shared among groups within the IT department, which may include network, security, application, and compliance teams.

Infrastructure as a Service (IaaS)

The greatest security coordination concerns arise in a public or hybrid cloud configuration such as Infrastructure as a Service (IaaS). In an IaaS environment, the enterprise has agreed to have the provider manage the infrastructure component of IT security. This lets the enterprise outsource all security and regulatory concerns related to the actual server hardware. The enterprise also gains physical security benefits because its IT infrastructure is off premises in a secured facility. Often, regulators or even large customers will mandate an audited data center standard, such as SOC 2, as a requirement of a business partnership. Creating an audited, SOC 2-compliant data center on premises can be costly and time-consuming; hosting IT infrastructure in an audited, physically secure data center is one of the greatest benefits of IaaS.

Beyond the physical infrastructure, the IaaS provider also assures the security of the hypervisor software that orchestrates the virtualized operating systems and services. However, the enterprise is still responsible for the operating systems of the virtual servers and for applying the security patches their developers issue. Additionally, the enterprise is responsible for the security of its own software applications and the data that resides on them. Some cloud providers offer managed services that include security functions, such as a managed firewall, monitoring, and even malware protection for the virtual servers they host. These services add value when the provider is more familiar with security best practices in the IT infrastructure stack than the enterprise. Still, security remains a shared responsibility, with the enterprise always responsible for its own data.

Software as a Service (SaaS)

SaaS is the cloud technology most businesses have the most experience with and understand best. Common SaaS platforms like Microsoft Office 365, Google G Suite, and CRM software like salesforce.com have made SaaS commonplace. In a SaaS platform, virtually the whole IT stack is owned by the provider; however, the enterprise still has security responsibilities, primarily concerning its own data. The business owns its data and needs to ensure it is free of malware and other external threats. It also needs to protect the endpoints, such as laptops and tablets, used to access the SaaS data.
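The division of duties across the three models discussed above can be summarized in a simple responsibility matrix. This is a sketch distilled from the article; actual provider agreements vary, so verify the split in your own contracts.

```python
# Simplified shared-responsibility matrix; entries follow the
# article's discussion and are not any provider's official terms.
MODELS = ("private", "iaas", "saas")

RESPONSIBILITY = {
    # layer:                  (private cloud, IaaS,        SaaS)
    "physical_infrastructure": ("enterprise", "provider",   "provider"),
    "hypervisor":              ("enterprise", "provider",   "provider"),
    "operating_systems":       ("enterprise", "enterprise", "provider"),
    "applications":            ("enterprise", "enterprise", "provider"),
    "customer_data":           ("enterprise", "enterprise", "enterprise"),
}

def who_secures(layer, model):
    """Look up which party secures a given layer under a model."""
    return RESPONSIBILITY[layer][MODELS.index(model)]
```

Note the bottom row: whatever the model, the customer data layer never leaves the enterprise's column.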

Additional Considerations

Other IT security responsibilities the enterprise needs to consider in any cloud environment are connectivity, authentication and identification services, and managing abandoned resources.

Connectivity to the cloud provider is most secure when a private circuit or connection can be implemented. If a private connection is not practical, the enterprise should establish a secure tunnel, such as a virtual private network (VPN), over the public internet.

Authentication and identification of network users is an integral part of any enterprise IT network, and it is equally important to integrate any authentication or directory service with the cloud solution. A solution like Microsoft Azure AD is considered by many to be a best practice for this complicated process; it was described in some detail in a previous article, Active Directory (AD) in the Cloud. Finally, a frequent cause for concern, especially in enterprises with large IT staffs, is abandoned resources. These are cloud instances that were created, lost their relevance, and have been forgotten. They can reside in a public cloud for years, with continued billing, and their data may be open to the public because they were created at a time with less stringent security policies. Periodic billing reviews and the monitoring services security platforms offer can eliminate this waste.
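To make the abandoned-resource audit concrete, here is a minimal sketch in Python of flagging instances that have not been touched within a retention window. The inventory records and field names ('name', 'last_used') are hypothetical; a real audit would pull this data from the provider's billing or inventory API.

```python
from datetime import datetime, timedelta, timezone

def flag_abandoned(instances, max_idle_days=180, now=None):
    """Return the names of instances not touched within max_idle_days.

    `instances` is a list of dicts with hypothetical keys 'name' and
    'last_used' (an aware datetime). In practice this data would come
    from the cloud provider's billing or inventory API.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [i["name"] for i in instances if i["last_used"] < cutoff]

# Example inventory (fabricated for illustration):
now = datetime(2019, 1, 1, tzinfo=timezone.utc)
inventory = [
    {"name": "web-prod-01",   "last_used": now - timedelta(days=3)},
    {"name": "test-2015-poc", "last_used": now - timedelta(days=900)},
]
print(flag_abandoned(inventory, now=now))  # ['test-2015-poc']
```

Pairing a report like this with the periodic billing review described above makes forgotten instances easy to spot before they accumulate years of charges.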

Business cloud solutions offered to the enterprise come in many different configurations that vary in the type of infrastructure, software, and services offered. In all cloud environments, security requires a shared responsibility as well as a layered approach coordinated between the cloud provider and the enterprise. A supplier-agnostic advisor like Two Ears One Mouth IT Consulting can help a business find the right provider and security services for its applications.

 

If your business is unique and requires a custom cloud security solution for IT Support

Contact Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

         

Disaster Recovery: Which Option is Right for Your Business?

 

An Active-Active Disaster Recovery Solution (active-active performs the quickest recovery)

 

In a recent article, I described how an outsourced, or hosted, provider can deliver Disaster Recovery (DR) as a Service. In this article, I would like to look at the advantages a business can achieve by creating its own disaster recovery and answer the question: which option is right for your business? First, a reminder that DR is not a backup of data but rather a replication of data to ensure its availability and business continuity. DR solutions created using the business's own IT infrastructure can be divided into two primary categories, active-active and active-passive. Since active-passive was covered in the previous blog, I will focus on active-active here. While both attempt to achieve the same goal, keeping the business's IT systems up at all times, they are created and maintained differently. Because of the unique nature of DR solutions, it is generally accepted to engage an expert such as Two Ears One Mouth IT Consulting to determine the right DR solution for an organization. I will compare the two DR strategies by complexity, cost, and the most common DR metrics: Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Active-Active Disaster Recovery

An active-active, or stretched-cluster, configuration is the deployment of a second identical live infrastructure which continually replicates with the first site. This framework will typically consist of only two sites. Because of the simplicity of the concept and the speed and ease with which recovery can occur, it is usually the client's first choice. Ironically, after all the pertinent information is uncovered, it is rarely selected by the small and medium-sized businesses seeking disaster recovery.

The two primary reasons it isn't chosen by most businesses are its cost and its requirement for high bandwidth with low latency. Its high initial cost is due to the purchase of a second set of hardware that duplicates the primary site's infrastructure. In an active-active scenario, either site can handle the entire workload for the business. Every time a request is made in a software application at one site, it must be written to the other site immediately before the request completes. An active-active solution therefore requires a high level of connectivity, or bandwidth, between sites, such as dedicated fiber optics. Even with dedicated (dark) fiber between sites, data latency is still a consideration. Best practices dictate that the distance between active-active sites should be less than 100 miles. These two requirements eliminate many prospects from considering an active-active solution.
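The 100-mile guideline follows directly from physics: light in fiber travels at roughly two-thirds the speed of light in a vacuum, and every synchronous write must wait at least one round trip between sites. The following sketch shows the arithmetic; the fiber speed is an approximation, and real paths add equipment and routing delay on top of this floor.

```python
# Best-case round-trip time between replication sites, limited only by
# the speed of light in glass (~204 km of fiber per millisecond, one way).
FIBER_KM_PER_MS = 204.0
MILES_TO_KM = 1.60934

def min_rtt_ms(distance_miles):
    """Best-case round-trip time in milliseconds for a site separation."""
    one_way_ms = distance_miles * MILES_TO_KM / FIBER_KM_PER_MS
    return 2 * one_way_ms

for miles in (50, 100, 500):
    print(f"{miles:>4} miles -> ~{min_rtt_ms(miles):.2f} ms minimum RTT")
```

At 100 miles the floor is under 2 ms per write, which most applications tolerate; at several hundred miles, the added delay on every synchronous write begins to degrade application response times noticeably.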

Advantages of Active-Active

Now I will describe the advantages of an active-active configuration and the businesses that can benefit from it. There are many benefits to this configuration, as it is a remarkable process for business continuity. After weighing the upfront cost, many businesses need to determine whether it is a nice-to-have or a need-to-have solution. The following are some of the benefits of an active-active DR solution.

1)      No “big red” button to push-

One of the most difficult parts of any DR solution is knowing how and when to declare an IT outage a disaster and quickly executing the DR plan. Many solutions require a detailed process and action plan that involves the entire IT team. With an active-active configuration, invoking the DR plan is much simpler because all workloads transfer to one of the continually running and replicated systems. In addition, it requires very little testing and can be engaged automatically with minimal human intervention.

2)     Cross-site load balancing-

Although it can be simple to transition to DR mode, an active-active DR configuration is very complex to design and create. Some of the factors that make it difficult to create are the very same ones that provide additional benefits beyond DR. One such benefit is load balancing of the data transmitted between sites and offsite. Since both sites are always actively processing data, the design can allow any process to run at the optimal site available at that time. This can eliminate the challenge of slow data responses and maximize bandwidth availability for the business.

3)      Less management means less cost-

The argument can be made that the active-active DR solution is the more cost-effective over the long term. The time and technical resources required to test, maintain, and initiate an active-passive DR solution are much greater than for active-active. Additionally, in analyzing a DR solution, most don't consider the operational task of "falling back" to normal mode after DR has been implemented; this can be more difficult than the original DR transition. Although expensive initially, the active-active solution has very little ongoing cost.

Active-Passive Disaster Recovery

An active-passive DR solution creates an environment that is not intended to be live for IT production until a disaster is declared by the business. The infrastructure is oversubscribed for resources and dormant until needed, which creates a large initial cost savings on hardware. Many times, a business will repurpose its aged IT equipment and servers for the DR site to realize even greater financial benefit.

One of the most popular active-passive software platforms for disaster recovery today is Zerto. Zerto creates DR at the hypervisor level of the virtualized environment, which allows for a quick and complete transition to the DR resources when an outage occurs. Zerto works with the most popular hypervisors, such as VMware and Microsoft's Hyper-V. An active-passive solution such as Zerto can also be more customized: a business may select only the small percentage of its application servers that are critical to the business and enable DR for those applications only. An active-passive solution is more accommodating to multi-site or multi-cloud DR, and active-passive solutions are also used to provide Disaster Recovery as a Service (DRaaS) from data center and cloud providers.

When a business looks to create a DR solution, it has three primary options: active-active, active-passive, and DRaaS. It is not a quick or simple decision as to which works best for your business. You need a trusted advisor like Two Ears One Mouth IT Consulting to investigate your IT environment, understand your budget, and guide you down the path to assured business continuity.

If your business is unique and requires a custom DR solution for IT Support

Contact Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

   

 


Is the MSP Model Right for Your Business?


In my initial article, What's a Managed Service Provider (MSP)?, I introduced the concept of an MSP and its advantages. The article describes how a growing organization evolves when it transitions from calling an individual IT service provider each time an issue presents itself to developing a relationship with a trusted partner that delivers a full scope of IT services. To review some pertinent definitions: an IT services provider follows a traditional model of being contacted when needed and is paid for its services by time and materials. An MSP provides the full scope of services, and in many cases the outsourced MSP is the business's IT department. Today the MSP deliverable of a flat fee for services has become widespread and accepted. For a business to transition to an MSP, one vital characteristic must be present: trust. Trust can be difficult to create if the company has no prior experience with the provider. On the other hand, trust can be built when the provider is transparent about its motivations and offerings within the MSP model. In this article, I will dig deeper into the MSP offering to help answer the question: is the MSP model right for your business?

Primary Components of the MSP

The concept of managed services has become so popular that some IT providers fail to offer other options to their prospects. A typical MSP agreement will include all phone and remote support services as well as an allowance for on-site labor. Projects outside the scope of the agreement are billed on time and materials, though MSP customers will typically receive a discounted labor rate on project work. The MSP model also allows the provider to include a rental fee for certain critical IT infrastructure hardware, such as firewalls, the first line of defense in IT security, and Ethernet switches, the foundation of the IT network. The MSP requires a detailed understanding and control of all the devices on the network in order to manage them properly.

What’s driving the MSP model?

The growth of the MSP model has come from the way it benefits customers as well as the advantages realized by the MSP. While the MSP model is not always the customer's first choice, there are factors in the market driving customers to embrace it. The following are the primary factors that have driven customers to accept the model.

A scarce and competitive marketplace for talent-

Most small and medium-sized businesses can't find or can't afford the IT resources their company requires. When they do find affordable candidates, those candidates typically have a specific skill set that can't match the depth of expertise the MSP can deliver.

Organic growth and mergers-

Because of an organization's explosive growth, sometimes through mergers, it can be impossible for the customer to maintain, or even be aware of, the IT team it requires at any given time. The MSP relationship and its technical staff allow the business to quickly scale its IT support up or down.

Chief Information Officer (CIO) as a service-

Since their inception, IT providers have always looked for ways to create additional value for their clients. One of the first ways they accomplished this was by making recommendations for future technology to test and implement. This is the type of service a CIO provides for a large enterprise, and it can take the form of periodic meetings where the provider is updated on the business strategy to help shape technology recommendations. These regularly scheduled meetings, or Quarterly Business Reviews (QBRs), initiate a mutually beneficial relationship that leads to a long-term partnership.

As a service instead of purchase-

Renting technology infrastructure instead of purchasing it outright may be advantageous due to IT hardware's limited life. It can also create positive cash flow and other financial advantages for the business. This is the "as a service" model introduced by the cloud computing industry, where expensive server hardware is rented instead of purchased.

The IT provider also receives benefits from the MSP model, which was in large part designed by providers to solve the challenges of both parties.

Consistent revenue-

Historically, the small IT service provider has struggled, like any small business, to create consistent revenue and cash flow. The MSP model, with its monthly recurring charge (MRC), helps to relieve this challenge. Predictable revenue, along with an educated customer who is more aware and consistent with their IT demands, helps build a successful model. In a similar fashion, the MSP model helps with hiring decisions and scheduling technicians for customer service calls. Through this partnership, the MSP gets to know the needs of the customer better and can predict their requirements more accurately.

Outsourcing by the Outsource-

Some parts of the total MSP solution are not provided by the local provider but rather outsourced to one of its vendors. These are typically security and monitoring offerings that deliver great value but are costly to implement without a large client base. They take the form of malware and antivirus software for endpoints coupled with proactive monitoring as a service. These services enhance the offering and can add profit margin for the MSP. Some popular providers of these types of services are SolarWinds, Webroot, and Datto, companies that have grown significantly as part of the MSP trend. They work exclusively with MSPs, never end users, which helps protect the MSP product.

The MSP model makes sense for most businesses, but not for all businesses all the time. When a client has recently experienced growth and is desperate for quality support, it can be an easier conversion for the MSP provider. It will be a challenge to justify the cost if the customer's experience has been with the pay-as-you-go model. This is where the MSP needs to show flexibility and understand that trust is a major part of the solution. The MSP may need to scale back some services and present a custom solution that eases the customer into the MSP model and builds the required trust. A supplier-agnostic advisor like Two Ears One Mouth IT Consulting can ensure the supplier selection process is transparent and that the option chosen enables trust and a long-term partnership.

If your business is unique and requires a custom solution for IT Support

Contact Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Getting Started with Amazon Web Services (AWS)


Amazon Web Services is a little-known division of the online retail giant, except to those of us in the business of IT. It's interesting to see that profits from AWS represented 56 percent of Amazon's total operating income, with $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may ponder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses already have some cloud presence in their IT infrastructure; the most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here is how businesses use AWS for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates the applications currently running on a business's own servers to a hosted cloud provider. Businesses consider it to reduce hardware cost, become more agile with their IT, and even improve security. The following are the five simple steps we have developed for moving to IaaS with AWS.


1)      Define the workloads to migrate- The first cloud migration should be kept as simple as possible. Do not start your cloud practice with any business-critical or production applications. A good place to start, and where many businesses do, is a data backup solution. You can use your existing backup software or one that partners with AWS, such as industry leaders Commvault and Veritas; if you already use these solutions, that is even better. Start small, and you may even find you can operate within the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).

2)      Calculate cost and Return on Investment (ROI)- Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs seem to offer the greatest savings as you first start your cloud presence. These costs include the server hardware used, if cloud isn't already utilized, as well as the time needed to assemble and configure it. When configuring a physical hardware server, a technician has to estimate the application's growth in order to size the server properly. With AWS, it's pay as you go: you rent only what you actually use. Other hard costs, such as power consumption and networking, are saved as well. Many times, when starting small, it doesn't take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that it makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration.
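As a rough illustration of the hard-cost comparison described above, the sketch below amortizes a purchased server against an equivalent rented cloud instance over a refresh cycle. Every figure is a fabricated placeholder; substitute your own hardware quotes and current AWS pricing.

```python
# Back-of-the-envelope hard-cost comparison: buy a server vs. rent a
# cloud instance. All dollar figures are illustrative placeholders.

def on_prem_cost(hardware, setup_hours, hourly_rate, monthly_power, months):
    """Purchase price plus assembly labor plus power over the period."""
    return hardware + setup_hours * hourly_rate + monthly_power * months

def cloud_cost(monthly_instance, months):
    """Pay-as-you-go rental over the same period."""
    return monthly_instance * months

months = 36  # a typical server refresh cycle
on_prem = on_prem_cost(hardware=8000, setup_hours=20,
                       hourly_rate=100, monthly_power=60, months=months)
cloud = cloud_cost(monthly_instance=220, months=months)

print(f"On-prem over {months} months: ${on_prem:,}")
print(f"Cloud over {months} months:   ${cloud:,}")
```

The soft costs mentioned above (customer satisfaction, agility) sit outside a calculation like this, which is why a modest pilot is often more persuasive than the spreadsheet alone.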

3)      Determine cloud compatibility- There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. The problem can be as simple as an application that demands a premium of bandwidth or is sensitive to data latency. Additionally, industries subject to regulation, such as PCI DSS or HIPAA, are further incentivized to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI); this regulated data should be encrypted both in transit and at rest. Encryption like this wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for the years to come.

4)      Determine how to migrate existing data to the cloud- Amazon AWS provides many ways to migrate data, most of which will not incur any additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. The following are the most popular methods.

  a) Virtual Private Network- This common but secure transport method is available to move data via the internet when the data is not sensitive to latency. In most cases, a separate virtual server running an AWS storage gateway will be used.
  b) Direct Connect- AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These pipes are typically either 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at an Amazon partner data center; for example, in the Midwest this location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the data center.
  c) Import/Export- AWS allows customers to ship their own storage devices containing data to AWS to be migrated to their cloud instance. AWS publishes a list of compatible devices and will return the hardware when the migration is completed.
  d) Snowball- Snowball is similar to Import/Export except that Amazon provides the storage device. A Snowball can store up to 50 terabytes (TB) of data and can be combined in series with up to four other Snowballs. It also makes sense for sites with little or no internet connectivity. This unique device ships as is; there is no need to box it up. It can encrypt the data and has two 10 Gbps Ethernet ports for data transfer. Devices like the Snowball are vital for migrations with large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred; it is easy to see that large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
    Company's Internet Speed | Theoretical days to transfer 100 TB @ 80% utilization | Amount of data to consider a device
    T3 (44.73 Mbps)          | 269 days                                              | 2 TB or more
    100 Mbps                 | 120 days                                              | 5 TB or more
    1000 Mbps (GIG)          | 12 days                                               | 60 TB or more
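The chart above can be reproduced, approximately, with the same arithmetic: divide the data size in bits by the effective link rate. The figures in the chart are rounded, so this sketch's results differ slightly.

```python
# Approximate bulk-transfer time over an internet link, the arithmetic
# behind the chart above (whose published figures are rounded).

def transfer_days(data_tb, link_mbps, utilization=0.8):
    """Days to move data_tb decimal terabytes over a link at the
    given rate, sustained at the given utilization."""
    bits = data_tb * 1e12 * 8                       # TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)  # effective rate
    return seconds / 86400

for mbps in (44.73, 100, 1000):
    print(f"{mbps:>7} Mbps: ~{transfer_days(100, mbps):.0f} days for 100 TB")
```

Even at a full gigabit, 100 TB takes on the order of two weeks, which is why shipped devices like Snowball exist for large seed migrations.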

5)      Test and Monitor- Once your instance is set up and all the data migrated, it's time to test. Best practice is to test the application in the most realistic setting possible: during business hours and in an environment where bandwidth consumption will be similar to the production environment. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your AWS resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
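As a concrete illustration of the CPU-alarm example above, here is a minimal sketch of defining a CloudWatch alarm. The parameter names follow the CloudWatch PutMetricAlarm API, but the instance ID and thresholds are placeholders, and the boto3 call is left commented out so the sketch runs without an AWS account.

```python
# Parameters for a CloudWatch alarm that fires when average CPU stays
# above 80% for three consecutive 5-minute periods. Instance ID and
# thresholds are placeholders to adapt for your environment.
alarm = {
    "AlarmName": "high-cpu-backup-server",  # placeholder name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,               # 5-minute samples
    "EvaluationPeriods": 3,      # 15 minutes above threshold
    "Threshold": 80.0,           # percent CPU
    "ComparisonOperator": "GreaterThanThreshold",
}

# To apply it against a real account (requires credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)

print(alarm["AlarmName"], alarm["Threshold"])
```

The same structure, with a lower threshold and a different comparison operator, can flag under-used instances that are candidates for shutdown to save money.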

To meet and learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.