
A Buyer’s Guide to Cloud


Most businesses have discovered the value that cloud computing can bring to their IT operations. Some have found that it helps them meet regulatory compliance priorities by placing their infrastructure in a SOC 2 audited data center. Others see a cost advantage as they approach a server refresh, when costly hardware needs to be replaced; they recognize the benefit of treating this hardware as an operational expense rather than the large capital expense they would otherwise incur every three years. No matter the business driver, the typical business person isn't sure where to start looking for the right cloud provider. In this fast-paced and ever-changing technology environment, these IT managers may wonder: is there a buyer's guide to the cloud?

Where Exactly is the Cloud?…and Where is My Data?

Except for the cloud hyperscalers (Amazon AWS, Microsoft Azure, and Google), cloud providers create their product in a multi-tenant data center. A multi-tenant data center is a purpose-built facility designed specifically for the needs of business IT infrastructure, and it accommodates many businesses. These facilities are highly secured and most of the time unknown to the public. Many offer additional colocation services that allow their customers to enter the center to manage their own servers. This is a primary difference from the hyperscalers, which offer no possibility of customers seeing the sites where their data resides. The hyperscale customer doesn't know where their data is beyond a region of the country or an availability zone. The hyperscaler's customer must base their buying decision on trusting the security practices of the large technology companies Google, Amazon, and Microsoft, some of the same organizations currently under scrutiny from governments around the world over data privacy concerns. For cloud seekers, the buying decision should therefore start at the multi-tenant data center, and a buyer's guide to the cloud should start with the primary characteristics to evaluate in a data center, listed below.

  1. Location – Location is a multi-faceted consideration in a data center. First, the data center needs to be close to a highly available power grid, and ideally to alternate power companies. Similarly, the telecommunications bandwidth needs to be abundant, diverse, and redundant. Finally, the proximity of the data center to its data users is crucial because speed matters: the closer the users are to the data, the lower the latency, which means happier cloud users.
  2. Security – As in all areas of IT today, security is paramount. It is important to review the data center's security practices, both physical and technical.
  3. People behind the data – The support staff at the data center creating and servicing your cloud instances can be the key to success. They should have the proper technical skills, be responsive, and be available around the clock.

Is My Cloud Infrastructure Portable?

The key technology that has enabled cloud computing is virtualization. Virtualization inserts a software layer, called a hypervisor, between the physical hardware and the operating systems it hosts, allowing hardware resources to be shared. This allows multiple virtual servers (VMs) to be created on a single hardware server. Businesses have used virtualization for years, VMware and Microsoft Hyper-V being the most popular choices. If you are familiar with, and have some secondary or backup infrastructure on, the same hypervisor as your cloud provider, you can create a portable environment. A solution where VMs can be moved or replicated with relative ease avoids vendor lock-in. One primary criticism of the hyperscalers is that it can be easy to move data in but much more difficult to migrate the data out. This lack of portability is reinforced by the proprietary nature of their systems. One of the technologies the hyperscalers are beginning to use to become more portable is containers. Containers are similar to VMs, but they don't require a guest operating system for each virtual server. So far this has had a limited effect on portability, because containers are a leading-edge technology and have not yet achieved widespread acceptance.

What Kind of Commitment Do I Make?

The multi-tenant data center offering a virtualized cloud solution will include an implementation fee and require a commitment term with the contract. Their customized solution will require pre-implementation engineering time, so they will be looking to recoup those costs. Both fees are typically negotiable, and they are a good example of where an advisor like Two Ears One Mouth can assist you through the process and save you money.

The hyperscaler will not require either charge: they don't provide custom solutions, and because they are difficult to leave, a term commitment is not required. The hyperscaler will, however, offer a discount in exchange for a contract term; these offerings are called reserved instances. With a reserved instance, they will discount your monthly recurring charge (MRC) for a one- or three-year commitment.
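
To put the reserved-instance math in concrete terms, here is a quick back-of-the-envelope calculation. All prices and the discount rate are hypothetical, for illustration only; real hyperscaler pricing varies by region, instance type, and payment option.

```python
# Hypothetical reserved-instance savings calculation; numbers are
# illustrative only, not any provider's actual pricing.
on_demand_mrc = 300.00      # monthly recurring charge, on demand ($)
reserved_discount = 0.40    # assumed discount for a multi-year commitment
term_months = 36

reserved_mrc = on_demand_mrc * (1 - reserved_discount)
savings = (on_demand_mrc - reserved_mrc) * term_months

print(f"On-demand MRC: ${on_demand_mrc:,.2f}")
print(f"Reserved MRC:  ${reserved_mrc:,.2f}")
print(f"Savings over {term_months} months: ${savings:,.2f}")
```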

Finding the best cloud provider for your business is a time-consuming and difficult process. When considering a hyperscaler, the business user will receive no support or guidance. Working directly with a multi-tenant data center is more service-oriented but can waste the cloud buyer's time. The cloud consumer can work with a single data center representative who states "we are the best" and trust them, or they can interview multiple data center representatives and build the inevitably ambiguous "apples to apples" spreadsheet of prospective vendors. Neither approach is effective.

At Two Ears One Mouth IT consulting we will listen to your needs first and then guide you through the process. With our expertise and market knowledge, you will be comforted to know we have come to the right decision for your company's specific requirements. We save our customers time and money and provide our services at little or no cost to them!

If you would like assistance in selecting a cloud provider for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Will an Indirect Consultant Cost My Business More?

Getting more while paying less

Recently, a prospective customer asked me, "Will an indirect consultant cost my business more than negotiating with the service provider directly?" It's a fair question, and one I can answer easily. Working on a cloud or telecommunications solution with a supplier-agnostic (indirect) advisor doesn't cost anything additional and often will reduce the total cost of a project.

The indirect model

Most cloud and telecommunications providers today utilize both the direct and indirect sales consultant models. A direct sales consultant is an employee of the supplier, given a sales quota and a limited variety of solutions to propose. An indirect sales consultant represents a variety of suppliers and solutions. The consultant will typically focus on a select group of suppliers and narrow the prospective vendors down for their client based on the client's specific needs. The indirect consultant's supplier relationships carry no quota or demands that create false urgency or bias for the buyer. The primary goal of the indirect consultant is to save their client time and use their expertise to find the best supplier and solution for the customer. Since virtually all suppliers now embrace the indirect model, the list of available suppliers is nearly unlimited; the consultant will focus on a few but is prepared to engage any supplier needed for a unique situation. The past decade has shown a clear trend of suppliers moving to the indirect model, and many have engaged the indirect channel exclusively.

Why pay more?!

In "What is the Difference Between a Direct and Indirect Channel?", I covered some of the advantages of the indirect partnership and why it enables long-term sustainability. Many times, the indirect consultant will save the end user money on a technology solution. Contrary to what some may surmise, suppliers that utilize the indirect model don't add cost to the solution because an indirect consultant is involved. All suppliers budget for the cost of sales in their pricing, regardless of the channel it comes from. They understand that the opportunities from the indirect channel are distinct from their direct sales funnel and don't incur any additional cost of sales.

Due to their vast knowledge and experience, the indirect representative is very familiar with the sales process of cloud computing and telecommunications and knows where the supplier may have flexibility. It may be in the contract term, the installation charges, or the monthly recurring charges (MRC) for the service. The indirect consultant becomes a trusted guide through the discovery, sales, and implementation process. This has its greatest value in a cloud or data center acquisition. The discovery and decision process for this type of service may be completed once or twice in an IT leader's tenure, and many years can pass between engagements. As a result, they are unable to stay apprised of current technology, sales trends, and processes. Conversely, the indirect consultant may lead clients through a similar process several times a month. They know how to get the best value and are rewarded for it. Cost is not the best reason to use an indirect consultant, but it is never a downside of the indirect consulting process.

Today’s suppliers of telecom and cloud services have come to embrace the indirect sales channel because of its propensity to create a “win-win” for all parties involved. It provides a more customized and less expensive solution to the potential customer while introducing new opportunities and reducing the cost of sales for the supplier.

If you would like to understand more about getting more and paying less contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Creating a Successful Cloud Migration

If you've been a part of the growth of cloud computing technology, you know that creating a successful cloud migration goes far beyond what can be covered in a short essay. However, this article will communicate guidelines and best practices that will greatly improve the success of your migration project. A successful cloud migration will include at least three stages: planning, design, and execution. Each phase builds on the previous one, and no step should be ignored or downplayed. A business cloud migration requires an expert, internal or external to the organization, to manage the process.

Planning: what type of cloud works best?

When we speak of a cloud migration, we are referring to a business's transition to Infrastructure as a Service (IaaS). Migrating to IaaS is the process of moving your on-site IT infrastructure to a cloud service provider and initiating an OpEx financial model for the business. When approaching this migration, the business will investigate three provider solution types: a hyperscaler, a national cloud service provider, or a hybrid of a cloud provider with a portion of the infrastructure remaining on-premises.

The largest public cloud providers, AWS, Azure, and Google, are often referred to as hyperscalers. The name is appropriate because it describes what they do best: allow customers to scale or expand very quickly. This scaling is served up through a self-service model via the provider's web portal, which can be very attractive to large organizations. Small and medium-sized businesses (SMBs) have a harder time adjusting to this model because there is very little support; self-service means the customer is on their own to develop and manage the cloud instances. Another drawback of the hyperscaler for the SMB is that it is nearly impossible to budget what your cloud infrastructure is going to cost. The hyperscalers' transactional charges and billing make costs difficult to predict. The larger enterprise will often take the strategy of building the infrastructure as needed and then scaling back to meet or reduce the cost. The SMB typically does not have that kind of budget latitude and will opt for the more predictable national or regional cloud provider.

The regional or national data center is a better fit for the SMB because of its ability to conform to the business's needs. Often an SMB will have unique circumstances requiring a customized plan for compliance and security, or special network requirements. Also, this type of cloud provider will include an allowance of internet bandwidth in the monthly charges, which eliminates the unpredictable transaction fees the hyperscaler charges. In this way, the business can predict its monthly cloud cost and budget accordingly.

There are times when an application doesn't work well in cloud infrastructure, yet it is still required by the business. This is when a hybrid cloud environment can be implemented. Hybrid cloud in this instance is created when some applications move off-site while others stay and are managed separately. The challenge is to integrate, or make seamless, the non-cloud application with the other business processes. Over the long term, the application creating the hybrid environment can be repurposed to fit the cloud strategy. Options include redeveloping the existing software to a cloud-native architecture or finding a similar application that works more efficiently in a cloud environment.

Design: a cloud strategy.

A cloud strategy requires not only a strong knowledge of IT infrastructure but also a clear understanding of the business's operations and processes. It is vital that the customer's operations and management teams are involved in developing the cloud strategy. Details regarding regulatory compliance and IT security need to be considered in the initial phases of development rather than later. The technical leader of the project will communicate a common strategy of building cloud infrastructure wider as opposed to taller: cloud infrastructure is better suited to many servers each running an individual application (wide) rather than one more powerful server handling many applications (tall).

Once all the critical business operations are considered, a cloud readiness assessment (CRA) can be developed. A CRA digs deep into the business's critical and non-critical applications and determines the cloud infrastructure needed to support them. In this stage, each application can be considered for its appropriate migration type; a "lift and shift" migration will move the application off-site as is, though some cloud customization may be completed before it is migrated. Connectivity also needs to be considered at this stage, including the bandwidth required for the business and its customers to connect with the cloud applications. Many times an additional private, secure connection is required for access by IT managers or software developers through a VPN that is restricted and has very limited access. IP addresses may need to be changed to a supplier-issued IP block to accommodate the migration; this can create temporary Domain Name System (DNS) issues that require preparation. Finally, data backups and disaster recovery (DR) need to be considered. Many believe migrating to the cloud inherently assures backup and disaster recovery, and it does not! Both backup and DR objectives need to be uncovered and planned carefully.
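
As a small illustration of the DNS concern, a script like the sketch below can confirm that a migrated hostname has actually cut over to the provider-issued address block before the old records expire. The hostname and IP prefix are placeholders.

```python
# Minimal sketch: verify DNS now resolves a migrated hostname to the
# provider-issued block. Hostname and addresses are hypothetical.
import socket

HOSTNAME = "app.example.com"    # placeholder for the migrated service
NEW_IP_PREFIX = "203.0.113."    # placeholder provider-issued block

resolved_ip = socket.gethostbyname(HOSTNAME)
if resolved_ip.startswith(NEW_IP_PREFIX):
    print(f"{HOSTNAME} -> {resolved_ip}: cutover complete")
else:
    print(f"{HOSTNAME} -> {resolved_ip}: old record still cached; wait for TTL expiry")
```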

Execution and day 2 cloud.

Now that the best cloud provider and the application migration timeline have been determined, the project is ready for the execution phase. The migration team should have tested the applications as a proof of concept (POC) to assure everything will work as planned. After the tests are complete, the data is migrated to the provider via an internet connection or a physical disk delivered to the provider. The business's IT infrastructure has now been moved to the cloud, but the work is not over: the infrastructure is now in a place called cloud day 2.

The two services that deliver and assure success in your cloud going forward are monitoring and support. These can be handled internally, or they can be provided by the cloud supplier or another third party. When purchasing professional services from the cloud provider, it is important to understand their helpdesk operations and set expectations for response times. Make sure you discuss service level agreements (SLAs) for response both during business hours and after. The service provider should be monitoring the health or "state" of all VMs and network edge devices; security falls under these ongoing services. Many security-minded organizations prefer a more security-focused third-party provider over the cloud provider itself. It is also critical to understand the data backup services included with your cloud instances. Don't assume an off-site backup is included in the cloud service; many data center providers charge extra for off-site backup. DR goes well beyond backups, creating data replication with aggressive SLAs to restore service during an outage. An often-overlooked part of a DR strategy is the "failback" to your primary service location once the primary site has been restored to service.
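
To make the monitoring idea concrete, here is a minimal sketch of the kind of health or "state" check a provider or internal team might run on a schedule. It uses only the Python standard library, and the endpoints are placeholders for your own VMs or edge devices.

```python
# Minimal health-check sketch; the URLs below are placeholders.
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://app1.example.com/health",
    "https://app2.example.com/health",
]

def check(url: str, timeout: float = 5.0) -> str:
    """Return 'up' if the endpoint answers HTTP 200, else a failure note."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "up" if resp.status == 200 else f"degraded ({resp.status})"
    except (urllib.error.URLError, OSError) as exc:
        return f"down ({exc})"

for endpoint in ENDPOINTS:
    print(f"{endpoint}: {check(endpoint)}")
```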

A migration of IT infrastructure is a complicated process that needs to be performed by a team of experts. Just as important, the team needs to be managed by a seasoned project manager who has your business interests as the priority. This is best accomplished when the project manager is not part of the cloud provider's team. Having the right manager and team can assure your business migrates to the cloud without disruption. Two Ears One Mouth IT Consulting can be the partner that guarantees a successful cloud migration.

If you would like to talk more about cloud migration strategies contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Security in the Cloud

When cloud computing first gained acceptance and began to build momentum in business IT, security became a headwind holding it back from even greater acceptance. After all, the IT manager may have thought, moving data from the premises to an off-site location was sure to be risky. Similarly, they wondered how their data could be secure when they don't own or manage the hardware it resides on, or even know where it is. While these arguments seem logical, logic does not equal security. Where security is concerned, how the data is protected is far more important than where it sits geographically. Many times the data center or cloud provider is better at laying the foundation for IT security than the IT leader of a business, but it is best when there is a team effort between the two.

Beginning with Compliance

Many businesses today are faced with the challenge of regulatory compliance in their IT services. Compliance is a complicated and tedious process that involves not only IT operations but virtually all aspects of the business. A regulated business needs to consider processes that affect the data center as well as other departments, such as employee and visitor access to data, audits and reporting, and disaster recovery. These are functions that data center providers treat as a primary part of their business. These practices are defined by certifications, the most common today being Service Organization Controls (SOC). You will find most data centers using SOC 2, a set of standards the data center complies with and reports on to satisfy its customers' requirements. SOC 2 audits verify that the data center is doing what it says it does regarding monitoring, alerts, and physical security. When a business migrates its IT infrastructure to a SOC 2 compliant data center, it is assured of meeting those compliance goals without managing the difficult process itself.

Encryption, Cloud Security's Best Practice

Many of the most valued practices of IT security in general hold true for a cloud and data center environment. No single exercise is as important as encrypting the vital data of the business. Encryption is one of the most effective data protection tools because it converts data into a secret code that renders it useless without a key; the encryption software produces a key that must be used to unlock and read the data. Data can be encrypted at rest, as when it resides in storage in the data center, and in transit between the data center and the data users. Encryption in transit is typically provided by an appliance that creates a Virtual Private Network (VPN). Encryption is a vital technology for securing data wherever it resides, and encrypting data in transit is an additional layer of security that keeps data secure as it moves on and off site.
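
As a simple illustration of encryption at rest, the sketch below uses the widely available Python cryptography package (not any particular provider's tooling). The data becomes unreadable ciphertext, and only the holder of the key can recover it.

```python
# Minimal sketch of symmetric encryption using the `cryptography` package
# (pip install cryptography). The data is useless without the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store this in a key vault
cipher = Fernet(key)

plaintext = b"customer records destined for cloud storage"
token = cipher.encrypt(plaintext)  # ciphertext, safe to store off-site

print(token)                       # unreadable without the key
print(cipher.decrypt(token))       # original data, recovered with the key
```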

The Future of Security in the Cloud

It is difficult to predict future trends in any industry, but the exercise proves especially difficult in technology. To consider how security in the cloud will be handled in the future, it is important to understand how the cloud itself will be evolving. In cloud technology, containers are the technology gaining acceptance and market share at the current time. Containers are similar to the virtual machines (VMs) of today's infrastructure but are more independent and create an environment for the use of microservices. Microservices is the concept that a single business application should consist of many smaller services instead of one monolithic application. This allows for greater overall uptime, as the entire application doesn't need to be taken down when a single service requires maintenance or an update. The same benefit can be realized for security. However, microservices can create a very complicated "mesh" of services that complicates all aspects of the infrastructure, including security. To alleviate these complications, open-source software packages have been developed. One helpful example is Istio, an open-source package that allows the infrastructure manager to secure, connect, and monitor microservices. Istio can be implemented in a "sidecar" deployment, where it secures services from outside the service or container. Today we often think of security services, such as anti-malware, as another application running within the server or VM it is protecting. Software like Istio makes security an integral part of the application as opposed to something added to a completed solution. Open-source services like Istio are making complicated systems easier to manage. Containers and microservices are the strongest evolving trends for the cloud, so one should look to them for the future of security in the cloud.

With each change in technology, the landscape seems to get more complicated. Security can add to the complication; however, it can be simplified if it is considered before the service is developed rather than after. The cloud computing industry is taking the lead in corporate IT infrastructure while also taking on the dual role of creating new ways to secure a business's data.

If you would like to talk more about security in cloud strategies contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


If Software is Eating the World, then the Community is Serving it Up!

In 2011, Marc Andreessen, the co-founder of Netscape, wrote his famous essay for the Wall Street Journal, "Why Software Is Eating the World". The premise is as true today as ever; however, the path it is taking is evolving. Open-source, or community-based, software development has increased in popularity recently. According to Wikipedia, open-source software "is a type of computer software whose source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose". In the following article, I will describe the origins of open-source and how it has evolved to become a primary way software is developed.

There are many different opinions as to the origins of open-source. Many believe its real momentum coincided with the beginnings of the Linux operating system, developed by Linus Torvalds in 1991. Linux grew quickly in the mid-1990s, creating a new segment of the software industry, and this fast-paced growth was complemented by a high-quality product. In the late 1990s, Netscape released the source code of its browser software as Mozilla, one of the first widely used applications to be made freely available. Soon after, many of the major players in the open-source movement began to organize to oversee the process, and the term "open-source software" was coined.

Can you make money on a free product?

To many, Red Hat is the most common example of a company built on open-source software, with a business model built on support. Red Hat has developers who contribute to the software, but its primary strategy is to stabilize and support open-source projects. Red Hat reported overall revenue of $2.4 billion in 2017 while keeping higher-than-usual margins for the technology industry. Other early adopters of the open-source trend have been IBM and Google; both technology titans have developed unique strategies to monetize open-source.

How do developers GIT started?

When software is being produced by multiple developers in many places, consistency and version control are vital. This was discovered early on as Linus Torvalds was developing the Linux kernel. Torvalds and other early contributors developed a version control system called Git for tracking changes in files and software, and it has allowed open-source development to flourish. Developers can work on the same software project simultaneously and keep it consistent; Git controls the push and pull of updates from individual contributors to the primary software product. Many times the entire team works with a central repository hosted by a service such as GitHub. GitHub is an independent hosting organization that provides a remote repository for each of its users and the centralized management for developers to make changes and update the software.
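
As a minimal sketch of that push-and-pull workflow, the script below drives the standard git command line from Python to clone a repository, commit a change, fold in other contributors' updates, and push. The repository URL and file name are placeholders, and Git itself must be installed.

```python
# Minimal sketch of the clone -> commit -> pull -> push round trip.
# The repository URL is hypothetical.
import subprocess

REPO_URL = "https://github.com/example/project.git"

def git(*args: str) -> None:
    """Run a git command inside the cloned project, raising on failure."""
    subprocess.run(["git", *args], cwd="project", check=True)

subprocess.run(["git", "clone", REPO_URL, "project"], check=True)

with open("project/CONTRIBUTING.md", "a") as f:
    f.write("\nDocument your changes before pushing.\n")

git("add", "CONTRIBUTING.md")
git("commit", "-m", "Clarify contribution guidelines")
git("pull", "--rebase")   # fold in other contributors' updates first
git("push")               # publish the change to the shared repository
```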

Will open-source software continue to grow?

The trend is clear and the forecast obvious: open-source will not only continue to grow, the growth will accelerate. There are too many benefits for it not to: lower cost, no vendor lock-in, and the primary benefit of many minds collaborating. Some recent actions from the largest technology organizations in the world back this up. Google, which developed the cloud container orchestration package Kubernetes, donated the entire project to the Cloud Native Computing Foundation; Google believes it will profit more from the software in the open-source community than by keeping it proprietary. The most convincing argument comes from the largest software provider in the world, Microsoft. Microsoft had struggled to grow and keep its competitive edge until Satya Nadella was named CEO in 2014. Nadella has promoted open source and embraced the once-competitive Linux operating system in Microsoft's cloud platform, Azure, changing the culture of the entire organization. His leadership has transformed Microsoft back into a growth company focused on its cloud offering instead of a monopolistic software provider. In June 2018, Microsoft announced the purchase of GitHub for $7.5 billion, an acquisition that will accelerate the use of GitHub in the enterprise space and bring Microsoft development tools to a new audience. Nadella was quoted as saying, "we recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world's most pressing challenges". When these giants of the cloud and technology business make such bold statements on a trend, it is likely to grow.

It's true what Andreessen first alluded to: software will play an ever larger role in society going forward. This is being driven in large part by the open-source community development movement and the connectivity cloud computing brings to the process.

If you would like to talk more about cloud strategies for software development contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

What is the Difference Between a Direct and Indirect Channel?

It’s not just a transaction, it’s a relationship

In my 25-year career consulting on and selling IT solutions, most of my time has been spent as a direct employee of a company. As a direct employee and sales consultant, I received healthcare benefits as well as a salary complemented by commissions or bonuses. My product focus was simply what my company offered, and I would tailor that offering to fit the customer's needs. There were occasions when, due to the limitations of my company or an offering, I had to take the opposite approach: tailor my customer's needs to my solution. Recently I've changed to the indirect channel, joining an independent organization that allows for many suppliers and their vast array of solutions.

Most IT and telecommunications suppliers utilize both direct and indirect channels. Some, I have found, don't manage this well, which results in channel conflicts. A channel conflict occurs when a direct and an indirect consultant compete against each other for the same customer, many times proposing the exact same solution. This situation leaves only two differentiating factors for the customer: relationship and price. When there is no clear relationship advantage for either consultant, the supplier will be under pressure to lower their price, reducing their margins due to the conflict.

This has caused most suppliers to create strict rules of engagement for their channel partners. Like the technologies these suppliers provide, some manage the channel better than others. In future articles, I will detail some of the suppliers I have worked with and how they have dealt with or avoided channel conflict.

My experience in both channels has shown the indirect channel to provide the best alternatives and solutions for the customer. Below are the three primary characteristics of an indirect consultant that create this competitive advantage.

Better overall industry and solutions knowledge

When I first left the world of the direct consultation and sales channel, I was concerned I wouldn't be able to stay abreast of current technology and trends. After all, the company I worked for had provided all my "training" to that point. I soon discovered my concern was unfounded and the exact opposite was true. What I have found as an independent consultant is that there is an abundance of information available to anyone who takes the time to seek it. Each supplier I partner with has product training and information available, as well as general industry information, to keep their representatives current.

My initial concerns also led me to seek out an unbelievable amount of unbiased technical information available on the web and through podcasts. I now listen to my favorite cloud-computing podcast at the gym in lieu of music, receiving an hour of high-quality, technical, and current information on cloud computing every day. What I initially perceived as a shortfall of the indirect model has turned into an advantage.

An unbiased approach to customer challenges and solutions

When the consulting partner you're working with has greater knowledge and expertise, coupled with a larger solution set to choose from, you are with the right partner. Almost every provider I am aware of in telecommunications and IT utilizes the indirect channel. They embrace it due to its lower cost and the increased flexibility it brings to their consulting and sales teams. Many suppliers align with "master agents", allowing indirect consultants to work with many suppliers through a single partnering agreement with the master agent. If an indirect consultant discovers a solution provider that is not aligned with their master agent, most times they can engage directly with that provider to establish a relationship.

Compensation models for direct vs. indirect

I am usually not comfortable talking about compensation, and I would not ordinarily bring it up as an advantage for the consultant. However, the typical indirect consultant compensation plan benefits the customer, at least if the customer is looking for a long-term relationship. The standard compensation plan for the direct employee is based on a one-time commission or bonus for bringing a new customer to the business. This creates an incentive to move on and find the next prospect, not to build the relationship with that client. Compensation plans for indirect, or independent, consultants are paid as a small percentage of the monthly recurring revenue (MRC) created by bringing the new business to the provider. These payments are residuals that continue as long as the customer stays with the provider. From the inception of the agreement, this incentivizes the consultant to stay in close contact with the customer and assure their satisfaction stays high. For this reason, the independent consultant tends to provide a more consistent, better level of service to the customer.
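
A quick back-of-the-envelope comparison shows why the residual model aligns the consultant's incentive with customer retention. All rates and dollar figures below are hypothetical, not actual commission schedules.

```python
# Hypothetical comparison of one-time vs. residual compensation.
mrc = 5_000.00         # customer's monthly recurring revenue ($)
one_time_rate = 1.0    # direct rep: one month of MRC as a one-time bonus
residual_rate = 0.10   # indirect consultant: 10% of MRC, ongoing

one_time_payout = mrc * one_time_rate

for months_retained in (12, 36, 60):
    residual_total = mrc * residual_rate * months_retained
    print(f"{months_retained} months retained: "
          f"one-time ${one_time_payout:,.0f} vs. residual ${residual_total:,.0f}")
```

The longer the customer stays satisfied, the more the indirect consultant earns, which is exactly the alignment the customer wants.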

There are several ways to find the right partner to lead you through the process of making the right decisions for your IT infrastructure. Most companies will choose to work with a consultant that is unbiased toward providers, has deep industry knowledge, and is incentivized to stand behind the solution for the long run. That describes an indirect consultant like Two Ears One Mouth IT Consulting.

The indirect channel wins…for you!

If you would like to talk more about how the channel does or does not work contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


What is Code as a Service?

 


When I first started experimenting with the public cloud providers I, like many, began by setting up demo accounts and creating virtual servers. It isn't a complicated process to create servers, particularly if you compare it to the process of buying hardware and loading software that was required ten years ago. Cloud computing's self-service capabilities have caused a major disruption in the business of information technology. But even as I "spun up" servers in a matter of minutes or seconds, I began to wonder: how does the large enterprise migrate to and manage its cloud environment? How does it maintain IT governance and framework in its cloud infrastructure as it has on-premises? How does it maintain standards considering all the ever-changing choices the cloud vendors provide? I could see these questions being manageable in small implementations, but how does the enterprise handle this across dozens or even hundreds of cloud architects creating virtual servers? In short, the question I attempt to answer here is: what tools are available to maintain IT governance and security compliance in the "move fast and break things" world of the cloud? The answer can be found in what has been coined Code as a Service (CaaS) or Infrastructure as Code (IaC).

Automation with Code as a Service

CaaS's primary function is automation: it uses software to automate repetitive practices to hasten and simplify implementations and processes. A valuable byproduct of this automation is consistency. When processes are automated, they can be designed from the start to follow the organization's rules of regulation and governance, helping assure that no matter how fast a process is moving or how many users are involved, governance is maintained.
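
As an illustrative sketch of governance baked into automation, the example below provisions a server through code so that required tags are applied every time rather than left to each architect's memory. It assumes the AWS boto3 SDK and configured credentials; the AMI ID and tag values are hypothetical.

```python
# Minimal sketch: provision an instance with mandatory governance tags.
# Requires boto3 (pip install boto3); the image ID and tags are placeholders.
import boto3

REQUIRED_TAGS = [
    {"Key": "CostCenter", "Value": "IT-1234"},
    {"Key": "DataClass", "Value": "internal"},
    {"Key": "Owner", "Value": "cloud-team@example.com"},
]

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{"ResourceType": "instance", "Tags": REQUIRED_TAGS}],
)
```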

Popular Code as a Service tools

There are a host of tools designed to automate and govern the development of software and IT infrastructure. Below are examples, starting with the most general IT automation systems and moving toward tools designed to work more specifically with cloud infrastructure.

Ansible

Ansible is open-source automation software promoted by Red Hat. In addition to cloud provisioning, it assists in application deployment, intra-service orchestration, and configuration. Ansible uses the simple YAML language to create "playbooks" for automation, and it has many modules that integrate with the most common cloud platforms such as AWS, Google Cloud Platform (GCP), and VMware.
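
To give a feel for the playbook approach, the sketch below writes a one-task YAML playbook from Python and runs it with the ansible-playbook command line. It assumes Ansible is installed and an inventory file defining a "webservers" group; the package chosen is arbitrary.

```python
# Minimal sketch: generate and run a one-task Ansible playbook.
# Assumes Ansible is installed and inventory.ini defines [webservers].
import subprocess
from pathlib import Path

PLAYBOOK = """\
- name: Baseline configuration for new cloud servers
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
"""

Path("baseline.yml").write_text(PLAYBOOK)
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "baseline.yml"],
    check=True,
)
```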

Terraform

Terraform is infrastructure-as-code software from HashiCorp. It focuses primarily on creating the data center infrastructure provided by the large public clouds. Terraform defines infrastructure templates in the HashiCorp Configuration Language (HCL), with JSON as an alternative, and integrates with AWS, Azure, GCP, and IBM Cloud.

Kubernetes

Kubernetes is an open-source project started by Google and donated in its entirety to the Cloud Native Computing Foundation (CNCF). It orchestrates and automates the deployment of containers. Containers are a different type of virtual server that has promoted and added to the popularity of microservices. Microservices create business applications by combining many smaller applications into a complete solution; they are used to increase agility and uptime and to make maintenance of the application easier and less disruptive.
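
For a sense of how deployments are expressed in code, here is a hedged sketch using the official Kubernetes Python client to run three replicas of a containerized service. It assumes a working kubeconfig; the names and image are placeholders.

```python
# Minimal sketch: deploy 3 replicas of a container with the official
# `kubernetes` Python client (pip install kubernetes). Names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```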

CloudFormation

CloudFormation is Amazon Web Services' CaaS application, provided to its customers at no charge. CloudFormation templates can be written in YAML or JSON and make the deployment of AWS services at scale quicker and more secure. CloudFormation saves massive amounts of time for the enterprise cloud architect and ensures all instances maintain the IT governance of the organization.
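
As a hedged sketch of a template deployment, the example below launches a trivial CloudFormation stack with boto3. The template body is a placeholder; real templates describe whole environments, but governance tags ride along in exactly the same way.

```python
# Minimal sketch: deploy a CloudFormation template with boto3.
# The template is a trivial placeholder (a single tagged S3 bucket).
import boto3

TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  GovernedBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: CostCenter
          Value: IT-1234
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="governed-baseline", TemplateBody=TEMPLATE)
```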

Code as a Service is a valuable tool for cloud architects and businesses creating cloud-native applications or migrating their applications to cloud service providers. There are many products, but most are open-source and utilize playbooks or templates to help create cloud infrastructure in a compliant manner.

If you would like to talk more about strategies for migrating to or creating cloud infrastructure contact us at:
Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net
www.twoearsonemouth.net
we listen first…

How to Select a Cloud or Data Center Provider


Two Ears One Mouth Business Concept and Purpose

Two Ears One Mouth IT Consulting (TEOM) began with the monumental decision Judge Greene made divesting AT&T in 1984, a decision that created the telecommunications and Information Technology (IT) industries of today. My personal mentor taught me an important sales consultation concept early in my career: when consulting and selling, we must first listen to our customers intently before we speak to offer solutions. Later in my career, I heard the same concept described another way: we have two ears and one mouth so we can listen more than we speak.

What does our product do?

I've always enjoyed working with technologies in the growth portion of their life cycle. Most recently my passion has been information technology as it relates to cloud computing and the data center. Cloud computing has become a mature and reliable product, even while experiencing continued solid growth. TEOM's primary product focus is protecting business IT infrastructure through its two primary models, colocation and cloud. We assist organizations by analyzing their IT infrastructure and the applications they run to help decide the best path forward for the infrastructure that houses their data.

We answer questions such as:

· Should the business migrate from an on-premises, server-based infrastructure to the cloud?

· Should the business continue to own its infrastructure with a CapEx model and place it in a secure data center (colocation)?

· How best can the goals of regulatory compliance and maximum uptime be accomplished?

· Can a hybrid model be created that allows the business to migrate to the desired solution over time?

TEOM will also assist businesses that have been through the technical exercise but are looking for a better price or level of service. TEOM has the experience, expertise, and partnerships to help our clients make the right decisions.     

How is TEOM different from the others?    

There is a growing number of multi-tenant data centers offering their customers secure data center services, as well as their own flavor of cloud. These businesses typically field a direct sales force to market their products and services. This model falls short in the depth of expertise and breadth of solutions offered to the client. With a limited focus and product offering, they can't compete with a consultant who represents multiple providers; they are forced to make their limited solutions fit their customers' requirements.

TEOM utilizes an indirect consultation and sales process in which the products and services are brokered from a wide array of these providers. We take an unbiased approach and consider all partners to determine the best solution for our client.

During customer analysis we even consider the largest cloud providers, such as Amazon Web Services (AWS), which use a direct-to-the-end-user, self-service model. We make sure to understand all the current technology options and utilize them in our client's solution.

Who is the right client for TEOM?      

Virtually any business can benefit from cloud services. However, for a business to derive a benefit from a TEOM consultation, a certain amount of infrastructure is required. Our typical customer has at least five active servers; our most common engagement is an organization with dozens of servers and a headcount of over 100.

Many times our clients have an IT department, including a CIO, who realizes the benefit of an outsourced solution for analyzing data center providers. In addition to saving critical time, it helps to have a fresh set of eyes analyzing the infrastructure with a level of expertise they can't match internally. Using TEOM also saves the time and tedious effort of interviewing and pricing potential vendors. There is no maximum size of business for a TEOM consult, since we can look at parts or independent departments of the largest enterprise or government organization.

TEOM is the best choice for an expert, unbiased consultation on your organization's cloud and data center needs. We have deep expertise, in large part due to our vast array of partners. Our indirect consultation and sales strategy allows us to eliminate many hours of research and vendor interviews and to offer you the best choices from our full breadth of suppliers, all while charging little or nothing for our services.

If you would like to talk more about an IT infrastructure analysis contact TEOM:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 


Edge Computing and the Cloud

[Architecture diagram: image courtesy of openedgecomputing.org]

This article is intended to be a simple introduction to what can be a complicated technical process. It usually helps to begin articles involving a specific cloud technology with a definition. Edge computing's definition, like that of many other technologies, has evolved in a very short period of time. In the past, edge computing could describe the devices that connect the local area network (LAN) to the wide area network (WAN), such as firewalls and routers. Today's definitions are more focused on the cloud and how to overcome some of the challenges of cloud computing. The definition I will use as a basis for this article is: bringing compute and data storage resources as close as possible to the people and machines that require the data. Many times this will mean creating a hybrid environment for a distant or relatively slow public cloud solution, where an alternate location provides the resources that deliver the faster response required.

Benefits of Edge Computing

The primary benefit of living on the edge is increased performance, most often described in the networking world as reduced latency. Latency is the time it takes for data packets to be stored or retrieved. With the growth of Machine Learning (ML), Machine-to-Machine communications (M2M), and Artificial Intelligence (AI), latency awareness has exploded across the industry. A human working at a workstation can easily tolerate a data latency of 100-200 milliseconds (ms) without much frustration; a gamer would like to see latency at 30 ms or less. Many machines and the applications they run are far less tolerant, with requirements ranging from 10 ms down to effectively none, needing the data in real time. There are applications humans interface with that are more latency sensitive, a primary example being voice communications. In the past decade, business demand for Voice over Internet Protocol (VoIP) phone systems has grown, which in turn has driven the need for better-managed, low-latency networks. Although data transmissions move at nearly the speed of light, distance still matters. As a result, we reduce latency for our applications by moving the data closer to the edge and its users. This can then produce the secondary benefit of reduced cost: the closer the data is to the applications, the fewer network resources are required to transmit it.
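
To make latency tangible, the sketch below times a TCP connection to a host using only the Python standard library. The endpoint is a placeholder, and a real monitoring system would sample many probes and track percentiles, but the principle of measuring round-trip time is the same.

```python
# Minimal sketch: measure round-trip latency by timing a TCP connect.
# The hostname is hypothetical.
import socket
import time

HOST, PORT = "app.example.com", 443

samples = []
for _ in range(5):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"median round trip: {sorted(samples)[len(samples) // 2]:.1f} ms")
```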

Use Cases for Edge Computing

Content Delivery Networks (CDNs) are thought of as the predecessors of today's edge computing solutions. A CDN is a geographically distributed network of content servers designed to deliver video or other web content to the end user. Edge computing takes this a step further by delivering all types of data even closer to real time.

Internet of Things (IoT) devices are a large part of what is driving the demand for edge computing. A common application involves the video surveillance system an organization uses for security: a large amount of data is stored, of which only a fraction is needed or will ever be accessed. An edge device or system collects all the data, stores it, and transfers only the data needed to a public cloud for authorized access.
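
The sketch below illustrates that edge-filtering pattern: every clip stays on local edge storage, and only flagged events leave the site. The upload function is a stub standing in for whatever cloud storage API the organization uses.

```python
# Minimal sketch of edge filtering: store everything locally, ship only
# flagged events to the cloud. upload_to_cloud is a hypothetical stub.
from dataclasses import dataclass

@dataclass
class Clip:
    camera: str
    motion_detected: bool
    size_mb: float

def upload_to_cloud(clip: Clip) -> None:
    print(f"uploading {clip.size_mb:.0f} MB from {clip.camera}")  # stub

clips = [
    Clip("lobby", motion_detected=False, size_mb=480),
    Clip("dock", motion_detected=True, size_mb=510),
    Clip("garage", motion_detected=False, size_mb=495),
]

for clip in clips:              # all clips remain on the edge device
    if clip.motion_detected:
        upload_to_cloud(clip)   # only events traverse the WAN
```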

Cellular networks and cell towers provide another use case for edge computing. Data for analysis is sent from subscriber phones to an edge system at the cell site. Some of this data is used immediately for call control and call processing; most of it, which is not time sensitive, is transmitted to the cloud for later analysis.

As the technology behind and acceptance of driverless cars increase, a similar edge strategy will be used. With driverless applications, however, the edge devices will be located in the car itself because of the need for real-time responses.

These examples all demonstrate that the need for speed is constantly increasing and will continue to grow in our data applications. As fast as our networks become, there will always be a need to hasten processing time for our critical applications.

If you would like to talk more about strategies for cloud migration contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…