Unique Ways to Reduce IT Cost

(Information in italics courtesy of BOERGER CONSULTING, LLC)


image courtesy of connectedgear.com

All businesses today receive benefits from Information Technology (IT), but many still consider it a problematic cost center. As technology has progressed, more forward-thinking organizations have started to see IT as a valuable tool and an investment in their business's future. Still, every business is looking to reduce costs, and IT is often the first place they look to trim their budget.

Some of the earliest IT cost reductions came from hardware, with the advent of virtualization and cloud computing. Virtualization is software that lets a single piece of IT hardware deliver much greater capacity and functionality, reducing hardware costs. Other popular savings have come from hiring less expensive personnel, such as interns, or outsourcing IT staff entirely. A more recent trend has been to use open source software for business processes. Open source software is developed within a community and offered for free, with options to pay for technical support. However, many businesses and business functions don't have the flexibility for open source and must maintain relationships with software companies, with expensive and complex licensing for their software. This has led larger organizations to take cost reduction steps by examining their physical assets and software licenses, a practice commonly referred to as IT Asset Management (ITAM). With a proper business management plan, the savings in both cost and time are staggering.

Boerger Consulting, a partner firm of Two Ears One Mouth IT Consulting (TEOM), is an innovator in the ITAM consulting realm. Their focus is helping businesses reduce the overall operational cost of their IT department by properly managing, measuring, and tracking their hardware and software assets. According to Boerger Consulting, the threat is real:

“Some software publishing firms rely on software license audits to generate 40% of their sales revenue. These companies wait and watch your volume license agreements, your merger & acquisition announcements, and the types of support tickets called in, to pinpoint exactly when your organization is out of license compliance. They count on your CMDB and SAM tools to be inaccurate. They make sure the volume license agreement language is confusing and convoluted. And they make sure their auditors always find something – unlicensed software, expired warrantees, unknown network segments – to justify your penalty.”

The payoff, however, is also real. Citing a 2016 paper from Gartner, Boerger Consulting suggests an organization could eliminate THIRTY PERCENT (30%) of their IT software budget with a proper software asset management (SAM) program:

“Part of that savings is finding the hardware and software lost in closets and under desks and odd corners of your warehouse. Another part is identifying and eliminating licenses, support, and warranties for programs and hardware your organization no longer needs. The last part is proactively locating and eliminating audit risks before the auditors do, and then pushing back on them when it is time to renew your volume license agreements.”

Although we focus on different technologies, TEOM and Boerger Consulting have developed a natural synergy. We have found that when a business decides to move its IT infrastructure off-site, unforeseen challenges can occur. These challenges may involve privacy and data protection compliance when retiring on-premises servers, or the different software licensing practices required in the cloud. These are two examples of issues that TEOM and Boerger Consulting can solve together.

If after reading this article you still have questions about any of these technologies, Jim Conwell and Jeremy Boerger would be happy to meet for a free initial consultation. Please contact:

Jim Conwell (513) 227-4131     jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

 

Getting Started with Amazon Web Services (AWS)


Amazon Web Services (AWS) is a little-known division of the online retail giant, except to those of us in the business of IT. It's interesting to see that profits from AWS represented 56 percent of Amazon's total operating income, with $2.57 billion in revenue. While AWS amounted to about 9 percent of total revenue, its margins and sustained growth make it stand out on Wall Street. As businesses make the move to the cloud, they may wonder what it takes to get started with Amazon Web Services (AWS).

When we have helped organizations evolve by moving part or all of their IT infrastructure to the AWS cloud, we have found that planning is the key to their success. Most businesses already have some cloud presence in their IT infrastructure. The most common, Software as a Service (SaaS), has led the hyper-growth of the cloud. What I will consider here is how businesses use AWS for Infrastructure as a Service (IaaS). IaaS is a form of cloud computing that relocates applications currently running on a business's own servers to a hosted cloud provider. Businesses consider it to reduce hardware cost, become more agile with their IT, and even improve security. Below are the five simple steps we have developed for moving to IaaS with AWS.


1) Define the workloads to migrate – The first cloud migration should be kept as simple as possible. Do not start your cloud practice with business-critical or production applications. A good place to start, and where many businesses do, is a data backup solution. You can use your existing backup software or one that currently partners with AWS; industry leaders such as Commvault and Veritas fall into this category, and if you already use one of these solutions, that is even better. Start small and you may even find you can operate within the free tier of Amazon virtual servers, or instances (https://aws.amazon.com/free/).

2) Calculate cost and Return on Investment (ROI) – Of the two primary types of costs used to calculate ROI, hard and soft costs, hard costs tend to offer the greatest savings as you first start your cloud presence. These costs include the server hardware that would otherwise be purchased, as well as the time needed to assemble and configure it. When configuring a physical server, a hardware technician has to estimate the application's growth in order to size the server properly. With AWS it's pay as you go; you rent only what you actually use. Other hard costs, such as power consumption and networking, are saved as well. When starting small, it often doesn't take a formal ROI process or documentation of soft costs, such as customer satisfaction, to see that the move makes sense. Finally, another advantage of starting with a modest presence in the AWS infrastructure is that you may be able to stay within the free tier for the first year. This offering includes certain types of storage suitable for backups and the networking needed for data migration. A quick sketch of this hard-cost arithmetic follows below.
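To make the hard-cost comparison concrete, here is a minimal sketch of the arithmetic. Every figure in it (hardware price, labor rate, power cost, and the hourly instance rate) is an illustrative placeholder, not actual AWS pricing.

```python
# Rough hard-cost comparison for a small workload: owned server vs. pay-as-you-go cloud.
# All dollar figures and hours below are illustrative placeholders, not real AWS prices.

server_hardware = 8000.0          # physical server sized for three years of growth
setup_hours = 40                  # technician time to rack, cable, and configure
hourly_labor = 75.0               # loaded labor rate
power_and_network_per_year = 600.0

cloud_rate_per_hour = 0.05        # hypothetical on-demand rate for a comparable instance
hours_per_month = 730

on_prem_3yr = server_hardware + setup_hours * hourly_labor + 3 * power_and_network_per_year
cloud_3yr = cloud_rate_per_hour * hours_per_month * 36

print(f"On-premises, 3-year hard cost: ${on_prem_3yr:,.2f}")
print(f"Cloud pay-as-you-go, 3 years:  ${cloud_3yr:,.2f}")
```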

3) Determine cloud compatibility – There are still applications that don't work well in a cloud environment, which is why it is important to work with a partner that has experience in cloud implementation. The issue can be as simple as an application that requires a great deal of bandwidth or is sensitive to data latency. Additionally, industries subject to regulation, such as PCI DSS or HIPAA, have further incentive to understand what is required and the associated costs. For instance, healthcare organizations are bound to secure their Protected Health Information (PHI); this regulated data should be encrypted both in transit and at rest. Encryption like this wouldn't necessarily change your ROI, but it needs to be considered. A strong IT governance platform is always a good idea and can assure smooth sailing for years to come.

4) Determine how to migrate existing data to the cloud – AWS provides many ways to migrate data, most of which will not incur additional fees. These proven methods not only help secure your data but also speed up the implementation of your first cloud instance. The most popular are listed below.

a) Virtual Private Network (VPN) – This common but secure transport method can move data that is not sensitive to latency over the internet. In most cases a separate virtual server acting as an AWS storage gateway will be used.
b) Direct Connect – AWS customers can create a dedicated telecom connection to the AWS infrastructure in their region of the world. These pipes are typically either 1 or 10 Gbps and are provided by the customer's telecommunications provider. They terminate at an Amazon partner data center; for customers in the Midwest, for example, that location is in Virginia. The AWS customer pays for the circuit as well as a small recurring cross-connect fee for the data center.
c) Import/Export – AWS allows customers to ship their own storage devices containing data to AWS to be migrated into their cloud instance. AWS publishes a list of compatible devices and returns the hardware when the migration is completed.
d) Snowball – Snowball is similar to Import/Export except that Amazon provides the storage devices. A Snowball can store up to 50 terabytes (TB) of data and can be combined in series with up to four other Snowballs. It also makes sense for sites with little or no internet connectivity. The device ships as is, with no need to box it up; it can encrypt the data and has two 10 Gigabit Ethernet ports for data transfer. Devices like the Snowball are vital for migrations involving large amounts of data. Below is a chart showing approximate transfer times depending on the internet connection speed and the amount of data to be transferred; a short calculation after the chart shows how such estimates can be derived. It is easy to see large migrations couldn't happen without these devices. The final column shows the amount of data at which it makes sense to "seed" the data with a hardware device rather than transfer it over the internet or a direct connection.
Company's Internet Speed | Theoretical days to transfer 100 TB @ 80% utilization | Amount of data at which to consider a device
T3 (44.73 Mbps) | 269 days | 2 TB or more
100 Mbps | 120 days | 5 TB or more
1000 Mbps (1 Gbps) | 12 days | 60 TB or more
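For readers who want to check the chart, here is a minimal Python sketch of the arithmetic behind it, assuming decimal terabytes and 80% sustained link utilization. It lands close to the published figures; the exact numbers depend on protocol overhead assumptions.

```python
# Rough estimate of bulk transfer time over a given link at 80% sustained utilization.

def transfer_days(data_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    bits = data_tb * 1e12 * 8                  # terabytes -> bits (decimal TB)
    effective_mbps = link_mbps * utilization   # usable share of the link
    seconds = bits / (effective_mbps * 1e6)
    return seconds / 86400                     # seconds -> days

for label, mbps in [("T3 (44.73 Mbps)", 44.73), ("100 Mbps", 100), ("1000 Mbps", 1000)]:
    print(f"{label}: ~{transfer_days(100, mbps):.0f} days to move 100 TB")
```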

5) Test and Monitor – Once your instance is set up and all the data migrated, it's time to test. Best practice is to test the application in the most realistic setting possible: during business hours and in an environment where bandwidth consumption will be similar to production. You won't need to look far to find products that can monitor the health of your AWS instances; AWS provides a free utility called CloudWatch. CloudWatch monitors your AWS resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
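As a concrete illustration of the kind of rule CloudWatch can enforce, below is a minimal sketch using the boto3 Python SDK to raise an alarm when an instance's CPU stays high. The instance ID, SNS topic ARN, region, and threshold are placeholders to adapt to your own environment.

```python
# Minimal sketch: a CloudWatch alarm on EC2 CPU utilization via boto3.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-backup-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                       # evaluate in 5-minute windows
    EvaluationPeriods=2,              # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],       # placeholder SNS topic
)
```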

To meet and learn more about how AWS can benefit your organization, contact me at (513) 227-4131 or jim.conwell@outlook.com.

 


A Buyer’s Guide to Cloud


Most businesses have discovered the value that cloud computing can bring to their IT operations. Some have found that it helps them meet regulatory compliance priorities by placing their infrastructure in a SOC 2 audited data center. Others see a cost advantage as they approach a server refresh, when costly hardware needs to be replaced; they recognize the advantage of paying for this hardware as an operational expense rather than the large capital expense they incur every three years. No matter the business driver, the typical business person isn't sure where to start to find the right cloud provider. In this fast-paced and ever-changing technology environment, these IT managers may wonder: is there a buyer's guide to cloud?

Where Exactly is the Cloud?…and Where is My Data?

Except for the cloud hyperscalers (Amazon AWS, Microsoft Azure, and Google), cloud providers create their product in a multi-tenant data center. A multi-tenant data center is a purpose-built facility designed specifically for the needs of business IT infrastructure and accommodates many businesses. These facilities are highly secured and most times unknown to the public. Many offer additional colocation services that allow their customers to enter the facility to manage their own servers. This is a primary difference from the hyperscalers, which offer no possibility of customers seeing the sites where their data resides. The hyperscale customer doesn't know where their data is beyond a region of the country or an availability zone. The hyperscaler's customer must base their buying decision on trusting the security practices of the large technology companies Google, Amazon, and Microsoft, some of the same organizations currently under scrutiny from governments around the world for data privacy concerns. For cloud seekers, the buying decision should therefore start at the multi-tenant data center, and the first considerations in a buyer's guide for the cloud are the primary characteristics to evaluate in the data center, listed below.

  1. Location– Location is a multi-faceted consideration in a datacenter. First, the datacenter needs to be close to a highly available power grid and possibly alternate power companies. Similarly, the telecommunications bandwidth needs to be abundant, diverse and redundant. Finally, the proximity of the data center to its data users is crucial because speed matters. The closer the users are to the data, the less data latency, which means happier cloud users.
  2. Security– As is in all forms of IT today, security is paramount. It is important to review the data center’s security practices. This will include physical as well as technical security.
People behind the data– The support staff at the data center creating and servicing your cloud instances can be the key to success. They should have the proper technical skills, be responsive, and be available around the clock.

Is My Cloud Infrastructure Portable?

The key technology that has enabled cloud computing is virtualization. Virtualization inserts a software layer called a hypervisor between the hardware and the operating systems, allowing hardware resources to be shared. This allows multiple virtual servers (VMs) to be created on a single hardware server. Businesses have used virtualization for years, with VMware and Microsoft Hyper-V being the most popular choices. If you are familiar with, and have some secondary or backup infrastructure on, the same hypervisor as your cloud provider, you can create a portable environment. A solution where VMs can be moved or replicated with relative ease avoids vendor lock-in. One primary criticism of the hyperscalers is that it can be easy to move data in but much more difficult to migrate it out. This lack of portability is reinforced by the proprietary nature of their systems. One technology the hyperscalers are beginning to use to become more portable is containers. Containers are similar to VMs, but they don't require a guest operating system for each virtual server. So far this has had a limited effect on portability because containers are a leading-edge technology and have not yet achieved widespread acceptance.

What Kind of Commitment Do I Make?

The multi-tenant data center offering a virtualized cloud solution will include an implementation fee and require a commitment term with the contract. Their customized solution requires pre-implementation engineering time, so they will be looking to recoup those costs. Both fees are typically negotiable, a good example of where an advisor like Two Ears One Mouth can guide you through the process and save you money.

The hyperscaler will not require either charge: it doesn't provide custom solutions, and because it is difficult to leave, a term commitment isn't needed to keep your business. The hyperscaler will, however, offer a discount as an incentive for a term commitment; these offerings are called reserved instances. With a reserved instance, they discount your monthly recurring charge (MRC) in exchange for a one- or three-year commitment.

Finding the best cloud provider for your business is a time-consuming and difficult process. When considering a hyperscaler, the business user will receive no support or guidance. Working directly with a multi-tenant data center is more service-oriented but can consume a great deal of the cloud buyer's time. The cloud consumer can work with a single data center representative who claims "we are the best" and simply trust them, or they can interview multiple data center representatives and build the ambiguous "apples to apples" spreadsheet of prospective vendors. Neither approach is effective.

At Two Ears One Mouth IT Consulting we listen to your needs first and then guide you through the process. With our expertise and market knowledge, you can be confident we have come to the right decision for your company's specific requirements. We save our customers time and money and provide our services at little or no cost to them!

If you would like assistance in selecting a cloud provider for your business contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Creating a Successful Cloud Migration

If you've been a part of the growth of cloud computing technology, you know that creating a successful cloud migration goes far beyond what can be covered in a short essay. However, this article will communicate guidelines and best practices that will greatly improve the success of your migration project. A successful cloud migration will include at least three stages: planning, design, and execution. Each phase builds on the previous one, and no step should be ignored or downplayed. A business cloud migration requires an expert, internal or external to the organization, to manage the process.

Planning: what type of cloud works best?

When we speak of a cloud migration, we are referring to a business's transition to Infrastructure as a Service (IaaS). Migrating to IaaS is the process of converting your on-site IT infrastructure to a cloud service provider and initiating an OpEx financial model for the business. When approaching this migration, the business will investigate three provider solution types: a hyperscaler, a national cloud service provider, or a hybrid in which a portion of the infrastructure remains on-premises.

The largest public cloud providers, AWS, Azure, and Google, are often referred to as hyperscalers. The name is appropriate because it describes what they do best: allowing customers to scale, or expand, very quickly. This scaling is served up through a self-service model via the provider's web portal, which can be very attractive to large organizations. Small and medium-sized businesses (SMBs) have a harder time adjusting to this model because there is very little support; self-service means the customer is on their own to develop and manage the cloud instances. Another drawback of the hyperscaler for the SMB is that it is nearly impossible to budget what your cloud infrastructure is going to cost. The hyperscalers' transactional charges and billing make costs difficult to predict. The larger enterprise will often take the strategy of building the infrastructure as needed and then scaling back to meet or reduce the cost. The SMB typically does not have that kind of latitude in its budget and will opt for the more predictable national or regional cloud provider.

The regional or national data center is a better fit for the SMB because of its ability to conform to the business's needs. Often the SMB will have unique circumstances requiring a customized plan for compliance and security, or special network requirements. Also, this type of cloud provider will include an allowance of internet bandwidth in the monthly charges, which eliminates the unpredictable transaction fees the hyperscaler charges. In this way, the business can predict its monthly cloud cost and budget accordingly.

There are times when an application doesn't work well in cloud infrastructure yet is still required by the business. This is when a hybrid cloud environment can be implemented. A hybrid cloud in this instance is created when some applications move off-site while others stay and are managed separately. The challenge is to integrate, or make seamless, the non-cloud application with the other business processes. Over the long term, the application creating the hybrid environment can be repurposed to fit the cloud strategy. Options include redeveloping the existing software to a cloud-native architecture or finding a similar application that works more efficiently in a cloud environment.

Design: a cloud strategy.

A cloud strategy requires not only a strong knowledge of IT infrastructure but also a clear understanding of the business's operations and processes. It is vital that the customer's operations and management teams are involved in developing the cloud strategy. Details regarding regulatory compliance and IT security need to be considered in the initial phases of development rather than later. The technical leader of the project will often communicate a common strategy of building the cloud infrastructure wide as opposed to tall: cloud infrastructure is better suited to many servers each running an individual application (wide) than to one more powerful server handling many applications (tall).

Once all the critical business operations are considered, a cloud readiness assessment (CRA) can be developed. A CRA digs deep into the business's critical and non-critical applications and determines the cloud infrastructure needed to support them. In this stage, each application can be considered for its appropriate migration type: a "lift and shift" migration moves the application off-site as is, though some cloud customization may be completed before it is migrated. Connectivity also needs to be considered at this stage, including the bandwidth required for the business and its customers to connect with the cloud applications. Many times an additional private, secure, and tightly restricted VPN connection is required for access by IT managers or software developers. IP addresses may need to be changed to a provider-issued IP block to accommodate the migration, which can create temporary Domain Name System (DNS) issues that require preparation. Finally, data backups and disaster recovery (DR) need to be considered. Many believe migrating to the cloud inherently assures backup and disaster recovery, and it does not! Both backup and DR objectives need to be uncovered and planned carefully.

Execution and day 2 cloud.

Now that the best cloud provider and the application migration timeline have been determined, the project is ready for the execution phase. The migration team should have performed tests on the applications as a proof of concept (POC) to assure everything will work as planned. After the tests are complete, the data will be migrated to the provider via an internet connection or a physical disk delivered to the provider. The business's IT infrastructure has now been moved to the cloud, but the work is not over; the infrastructure is now in a place called cloud day 2.

The two services that deliver and assure success in the cloud going forward are monitoring and support. These can be handled internally, or they can be provided by the cloud supplier or another third party. When purchasing professional services from the cloud provider, it is important to understand their help desk operations and set expectations for response times. Make sure you discuss service level agreements (SLAs) for response both during business hours and after. The service provider should be monitoring the health, or "state," of all VMs and network edge devices, and security falls under these ongoing services. Many security-minded organizations prefer a security-focused third-party provider rather than the cloud provider itself. It is critical to understand the data backup services that have been included with your cloud instances. Don't assume an off-site backup is included in the cloud service; many data center providers charge extra for off-site backup. DR goes well beyond backups, creating data replication with aggressive SLAs to restore service during an outage. An often-overlooked part of a DR strategy is the "fallback" to your primary service location once the primary site has been restored to service.

A migration of IT infrastructure is a complicated process that needs to be performed by a team of experts. Just as important, the team needs to be managed by a seasoned project manager who puts your business interests first. This works best when the project manager is not part of the cloud provider's team. Having the right manager and team ensures your business can migrate to the cloud without disruption. Two Ears One Mouth IT Consulting can be the partner that guarantees a successful cloud migration.

If you would like to talk more about cloud migration strategies contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


Security in the Cloud

When cloud computing first gained acceptance and began to gain momentum in business IT, security became a headwind holding it back from even greater adoption. After all, the IT manager may have thought, moving data from the premises to an off-site location is sure to be risky. Similarly, they wondered how their data could be secure when they don't own or manage the hardware it resides on, or even know where it is. While these arguments seem logical, logic does not equal security. How the data is protected matters far more than where it sits geographically. Many times the data center or cloud provider is better at laying the foundation for IT security than the IT leader of a business, but it is best when there is a team effort between the two.

Beginning with Compliance

Many businesses today are faced with the challenge of regulatory compliance in their IT services. Compliance is a complicated and tedious process that includes not only IT operations but virtually all aspects of the business. A regulated business needs to consider processes that affect the data center as well as other departments, such as employee and visitor access to data, audits and reporting, and disaster recovery. These are functions that data center providers treat as a primary part of their business. The practices are defined by certifications, the most common today being Service Organization Controls (SOC), and most data centers now use SOC 2. SOC 2 is a set of standards the data center complies with and reports on to satisfy its customers' requirements. SOC 2 audits verify the data center is doing what it says it does regarding monitoring, alerts, and physical security. When a business migrates its IT infrastructure to a SOC 2 compliant data center, it can meet its compliance goals without managing the difficult process itself.

Encryption, Cloud Security's Best Practice

Most of the valued practices of IT security as a whole hold true in a cloud and data center environment, and no single exercise is as important as encrypting the vital data of the business. Encryption is one of the most effective data protection tools because it converts data into a secret code that renders it useless without a key; the encryption software produces a key that must be used to unlock and read the data. Data can be encrypted at rest, when it resides in storage in the data center, or in transit between the data center and the data users. Encryption in transit is typically created by an appliance that establishes a Virtual Private Network (VPN). Encryption is a vital technology to secure data wherever it resides, and encrypting data in transit adds a further layer of security as it moves on and off site.
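To make the idea of key-based encryption at rest concrete, here is a minimal Python sketch using the open source cryptography package's Fernet recipe. It is an illustration only, not a recommendation of a particular product, and in practice the key would be kept in a key management system rather than alongside the data.

```python
# Minimal sketch of symmetric encryption at rest with the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret that unlocks the data; store it separately
cipher = Fernet(key)

record = b"patient: Jane Doe, DOB 1970-01-01"   # e.g. PHI that must be protected
token = cipher.encrypt(record)     # ciphertext is safe to store in the data center

assert cipher.decrypt(token) == record           # only the key holder can read it back
```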

The Future of Security in the Cloud

It is difficult to predict future trends in any industry, but the exercise proves especially difficult in technology. To consider how security in the cloud will be handled in the future, it is important to understand how the cloud itself will be evolving. In cloud technology, containers are currently the technology gaining acceptance and market share. Containers are similar to the virtual machines (VMs) of today's infrastructure but are more independent and create an environment for the use of microservices. Microservices is the concept that a single business application should consist of many smaller services instead of one monolithic application. This allows for greater overall uptime, as the entire application doesn't need to be taken down when a single service requires maintenance or an update. The same benefit can be realized for security. However, microservices can create a very complicated "mesh" of services that complicates all aspects of the infrastructure, including security. To alleviate these complications, open source software packages have been developed. One helpful package is Istio, an open source project that allows the infrastructure manager to secure, connect, and monitor microservices. Istio can be implemented in a "sidecar" deployment, where it secures a service from outside the service or container. Today we often think of security services, such as anti-malware, as another application running within the server or VM it is protecting; software like Istio makes security an integral part of the application rather than something added to a completed solution. Open source services like Istio are making complicated systems easier to manage. Containers and microservices are the strongest evolving trends in the cloud, so one should look to them for the future of security in the cloud.

With each change in technology, the landscape seems to get more complicated. Security can add to that complication; however, it can be simplified if it is considered before a service is developed rather than after. The cloud computing industry is taking the lead in corporate IT infrastructure while also playing the dual role of creating new ways to secure a business's data.

If you would like to talk more about security in cloud strategies contact us at:

Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…


What is Code as a Service?

 


When I first started experimenting with the public cloud providers I, like many, began by setting up demo accounts and creating virtual servers. It isn't a complicated process to create servers, particularly if you compare it to the process of buying hardware and loading software that was required ten years ago. Cloud computing's self-service capabilities have caused a major disruption in the business of information technology. But even as I "spun up" servers in a matter of minutes or seconds, I began to wonder: how does the large enterprise migrate to and manage its cloud environment? How does it maintain IT governance and frameworks in its cloud infrastructure as it has on premises? How does it maintain standards amid all the ever-changing choices the cloud vendors provide? I could see these questions as an issue with small implementations, but how does the enterprise handle this across dozens or even hundreds of cloud architects creating virtual servers? In short, the question I attempt to answer here is: what tools are available to maintain IT governance and security compliance in the "move fast and break things" world of the cloud? The answer can be found in what has been coined Code as a Service (CaaS) or Infrastructure as Code (IaC).

Automation with Code as a Service

CaaS's primary function is automation. It uses software to automate repetitive practices to hasten and simplify implementations and processes. A valuable byproduct of this automation is consistency. When processes are automated, they can be designed from the start to follow the organization's rules of regulation and governance. They help assure that no matter how fast the process is moving or how many users are involved, governance is maintained.

 Popular Code as a Service tools

There are a host of tools designed to automate and govern the development of software and IT infrastructure. Examples follow, starting with the most general IT automation systems and moving to tools designed specifically for cloud infrastructure.

Ansible

Ansible is open source automation software promoted by Red Hat. In addition to cloud provisioning, it assists with application deployment, intra-service orchestration, and configuration. Ansible uses the simple YAML language to create playbooks for automation, and it has many modules that integrate with the most common cloud solutions such as AWS, Google Cloud Platform (GCP), and VMware.

Terraform

Terraform is infrastructure-as-code software from HashiCorp. It primarily focuses on creating the data center infrastructure provided by the large public clouds. Terraform defines infrastructure templates in the HashiCorp Configuration Language (HCL), with a JSON variant also supported, and offers integrations with AWS, Azure, GCP, and IBM Cloud.

Kubernetes

Kubernetes is an open source project started by Google and donated in its entirety to the Cloud Native Computing Foundation (CNCF). It orchestrates and automates the deployment of containers. Containers are a different type of virtual server that has promoted and added to the popularity of microservices. Microservices create business applications by combining many smaller applications into the whole solution; they are used to increase agility and uptime and to make maintenance of the application easier and less disruptive.

CloudFormation

CloudFormation is Amazon Web Services' CaaS offering, provided to its customers at no charge. CloudFormation templates can be written in YAML or JSON and make the deployment of AWS services at scale quicker and more secure. CloudFormation saves massive amounts of time for the enterprise cloud architect and ensures all instances maintain the IT governance of the organization.
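As a minimal sketch of the template-driven approach, the Python snippet below uses boto3 to deploy a tiny CloudFormation stack. The template body, AMI ID, and stack name are placeholders; a real template would carry the organization's full tagging and governance standards.

```python
# Minimal sketch: deploying a CloudFormation template with boto3 so every server
# is created from the same governed definition. Values below are illustrative only.
import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: t3.micro
      Tags:
        - Key: CostCenter
          Value: IT-Ops
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-app-server", TemplateBody=template)
```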

Code as a Service is a valuable tool for cloud architects and businesses creating cloud-native applications or migrating their applications to cloud service providers. There are many products, but most are open source and use playbooks or templates to help create cloud infrastructure in a compliant manner.

If you would like to talk more about strategies for migrating to or creating cloud infrastructure contact us at:
Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net
www.twoearsonemouth.net
we listen first…

Eliminating Cloud Waste

image courtesy of parkmycloud.com

In the beginning, the cloud was presented and sold with cost savings as a primary benefit. Unfortunately, most businesses that have implemented a cloud solution today have not realized those savings. However, cloud adoption has flourished because of its many other benefits: agility, business continuity, and yes, even IT security have made the cloud an integral part of business IT infrastructure. While businesses have been able to forgo the cost savings of their new IT infrastructure, they won't tolerate a significant cost increase, particularly if it comes in the form of wasted resources. Today a primary trend in cloud computing is to regain control of cloud cost and eliminate cloud waste.

Ironically, the cloud's beginnings in virtualization were based on reducing unused resources. Virtualization software, the hypervisor, created logical server instances that could share fixed resources and provide better overall utilization. As this virtualized IT stack started to move out of the local data center and into public cloud environments, some of that enhanced utilization was lost. Cloud pricing became hard to equate with the IT infrastructure of the past: cloud was OpEx instead of CapEx, billed in unusual increments of pennies per hour of usage. Astute financially minded managers began to ask questions like "how many hours are in a month, and how many servers do we need?" Then the first bills arrived, and the answers to those questions, along with the public cloud's problematic cost structure, became clearer. They quickly realized the need to track cloud resources closely to keep costs in line.

Pricing from the cloud provider comes in three distinct categories: storage, compute, and the transactions the server experiences. Storage and bandwidth (part of traffic, or transactions) have become commodities and are very inexpensive. Compute is by far the most expensive resource and therefore the best place to optimize to reduce cost; most public cloud customers see compute make up about 40% of their entire invoice. As customers look to reduce their cloud cost, they should begin with the migration process, particularly in a lift-and-shift migration (see https://twoearsonemouth.net/2018/04/17/the-aws-vmware-partnership for lift and shift). Best practices require the migration to be completed entirely before the infrastructure is optimized. Detailed monitoring needs to be kept on all instances for unusual spikes in activity or increased billing, and all temporary instances, especially those in another availability zone, should be shut down as soon as they are no longer needed.
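One practical way to keep that watch is to pull spend programmatically so a spike stands out quickly. Below is a minimal sketch using boto3 and the AWS Cost Explorer API to list daily spend; the date range is a placeholder.

```python
# Minimal sketch: daily spend from the Cost Explorer API, to watch for spikes after a migration.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-06-01", "End": "2019-06-30"},   # placeholder dates
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)

for day in response["ResultsByTime"]:
    amount = float(day["Total"]["UnblendedCost"]["Amount"])
    print(day["TimePeriod"]["Start"], f"${amount:,.2f}")
```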

 In addition to monitoring and good documentation, the following are the most common tools to reduce computing costs and cloud waste. 

1. Reserved Instances – All public cloud providers offer an option called reserved instances to reduce cost. A reserved instance is a reservation of cloud resources and capacity for either a one- or a three-year term in a specific availability zone. The commitment for one or three years allows the provider to offer significant discounts, sometimes as much as 75%. The downside is that you are giving up one of the biggest advantages of the cloud: on-demand resources. However, many organizations tolerate the commitment because they were used to it in previous procurements of physical IT infrastructure.

2. Shutting Down Non-Production Applications – While the cloud was designed to eliminate waste and better utilize resources, it is not uncommon for server instances to sit idle for long periods. Although the instances are idle, the billing for them is not. To avoid paying for idle resources, organizations have looked to shut down non-production applications temporarily, perhaps at night or on weekends when usage is very low. Shutting down servers runs chills through operational IT engineers (Ops) because it can be a complicated process; Ops managers don't worry about shutting applications down so much as starting them back up. It is a process that needs to be planned and watched closely. Many times servers depend on other servers already running in order to boot up properly. Run books are often created to document the steps to shut down and restart server instances. Any server considered for this process should be non-production and have tolerance for downtime. There are Software as a Service (SaaS) applications that help manage the process; they will help determine the appropriate servers to consider for shutdown as well as manage the entire process. Shutting down servers to avoid idle costs can be complicated, but with the right process or partner, significant savings in cloud deployments can be realized, as the sketch below illustrates.
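For illustration, here is a minimal sketch of what an automated shutdown can look like with boto3, stopping running instances that carry a non-production tag. The tag name and region are assumptions; in practice a run book and dependency order should still govern when and how this runs.

```python
# Minimal sketch: stop running EC2 instances tagged as non-production to avoid idle charges.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["non-production"]},   # assumed tag convention
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopping:", instance_ids)
```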

Cloud adoption has changed many aspects of enterprise IT, not the least of which is how IT is budgeted. Costs that were once fixed CapEx are now OpEx and fluid, and IT budgets need to be realigned to accommodate this change. I have seen organizations that haven't adjusted and as a result have little flexibility in their operational budgets, yet still have a large expense account with a corporate credit card. This has allowed desperate managers in any department to use their expense account to spin up public cloud instances and bypass the budget process. These ad hoc servers, created quickly and outside the process, are the type that become forgotten and create ongoing billing problems.

While much of cloud technology has reached widespread acceptance in business operations, cost analysis, cloud waste management, and budgeting can fall behind in some organizations. Don't let this happen to your business; contact us for a tailor-made cloud solution.

 

If you would like to talk more about strategies to eliminate cloud waste for your business, contact us at:        Jim Conwell (513) 227-4131      jim.conwell@twoearsonemouth.net

www.twoearsonemouth.net

we listen first…

Disaster Recovery as a Service (DRaaS)


One of the most useful concepts to come from the "as a service" model that cloud computing created is Disaster Recovery as a Service (DRaaS). DRaaS allows a business to outsource the critical part of its IT strategy that assures the organization will still operate in the event of an IT outage. The primary technology that has allowed Disaster Recovery (DR) to be outsourced, or provided as a service, is virtualization. DRaaS providers operate their own data centers and provide a cloud infrastructure where they rent servers for replication and recovery of their customers' data. DRaaS solutions have grown in popularity partly because of the increased need for small and medium-sized businesses (SMBs) to include DR in their IT strategy. DR plans have become mandated by the larger companies the SMB supplies services to, as well as by insurers and regulatory agencies. These entities require proof of the DR plan and of the ability to recover quickly from an outage. It's a complicated process that few organizations take the proper time to address. A custom solution for each business needs to be designed by an experienced IT professional who focuses on cloud and DR. Most times an expert such as Two Ears One Mouth Consulting, partnered with a DRaaS provider, will create the best custom solution.

The Principles and Best Practices for Disaster Recovery (DR)

Disaster Recovery (DR) plans and strategies can vary greatly. One extreme notion is the idea that "my data is in the cloud, so I'm covered." The other end of the spectrum is "I want a duplication of my entire infrastructure off-site and replicated continually," an active-active strategy. Most businesses today have some sort of backup; however, backup is not a DR plan. IT leadership of larger organizations favors the idea of a duplicated IT infrastructure as the active-active strategy dictates, but balks when they see the cost. The answer for your company will depend on your tolerance for an IT outage, how long you're willing to be off-line, as well as your company's financial constraints.

First, it's important to understand the primary causes of IT outages. We often think of weather events and the power outages they create; disruptive weather such as hurricanes, tornadoes, and lightning strikes from severe thunderstorms affects us all. These weather-related events make the news but are not the most common causes. Human error is the greatest source of IT outages. This type of outage can come from failed upgrades and updates, errors by IT employees, or even mistakes by end users. Another growing source of IT outages is malware and IT security breaches (see the previous article on phishing). Ransomware outages require an organization to recover from backups because the organization's data has been encrypted and will only be unlocked with a ransom payment. It is vital that security threats are addressed, understood, and planned for in the DR recovery process.

Two important concepts of DR are the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). The RPO details the interval of time that can pass during an outage before the organization's tolerance for data loss is exceeded. The RPO concept can also be applied to a ransomware attack, described above, by falling back to data from a point before the breach. More often, RPO defines how far back in time the customer is willing to go for the data to be restored; this determines the frequency of data replication and ultimately the cost of the solution. The RTO defines how quickly the vendor will have the customer up and running on the DR solution during an outage and how the customer will "fall back" when the outage is over.
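A quick way to reason about these objectives is to compare them against the replication schedule: worst-case data loss is roughly the replication interval. The sketch below uses illustrative numbers only, not figures from any particular DRaaS contract.

```python
# Quick sanity check of a DR design against stated objectives (illustrative values).

rpo_hours = 4                     # business tolerates losing at most 4 hours of data
rto_hours = 8                     # business must be running again within 8 hours
replication_interval_hours = 6    # how often data is copied to the DR site

worst_case_data_loss = replication_interval_hours
print("Meets RPO:", worst_case_data_loss <= rpo_hours)   # False here: replicate more often or relax the RPO
print("RTO target to hold the vendor to (hours):", rto_hours)
```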

If the company is unable to create an active-active DR solution, it is important to rate and prioritize critical applications. Business leadership needs to decide which applications are most important to the operations of the company and set them to recover first in a DR solution. Typically these applications are grouped in "phases" according to their priority to the business and the order in which they will be restored.

Telecommunications networking can sometimes be the cause of an IT outage and is often the most complicated part of the recovery process. Customers directed to one site in normal circumstances need to be redirected to another when the DR plan is engaged. In the early days of DR there was a critical piece of documentation called a playbook: a physical document with step-by-step instructions detailing what needs to happen in the event of an IT outage. It would also define what is considered a disaster and at what point the DR plan is engaged. Software automation has partially replaced the playbook; however, the playbook concept remains. While automating the process is often beneficial, there are steps that can't be automated. Adjusting the networking of the IT infrastructure when the DR plan is initiated is one example.

Considerations for the DRaaS Solution

DRaaS, like other outsourced solutions, has special considerations. The agreement with the DRaaS provider needs to include Service Level Agreements (SLAs). SLAs are not exclusive to DRaaS, but they are critical to it. An SLA will define all the metrics you expect your vendor to attain in the recovery process; RTO and RPO are important metrics in an SLA. SLAs need to be in writing and have well-defined penalties if deliverables are not met. There should also be consideration for how the recovery of an application is defined: a vendor can point out that the application is working at the server level but may not consider whether it's working at the desktop and at all sites. If the customer has multiple sites, the networking between sites is a critical part of the DR plan. That is why a partner that understands both DR and telecommunications, like Two Ears One Mouth IT Consulting, is critical.

The financial benefits of an outsourced solution such as DRaaS are a primary consideration. Making a CapEx purchase of the required infrastructure and implementing it in a remote and secure facility is very costly. Most businesses see the value of renting DR infrastructure that is already implemented and tested in a secure, telecom-rich site.

DR is a complicated and very important technology that a business will pay for but may never use. Like other insurance policies, it's important and worth the expense. However, because it is complicated, it should be designed and executed by professionals, which may make an outsourced service the best alternative.

If you need assistance designing your DR Solution (in Cincinnati or remotely), please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net


Financial Benefits of Moving to Cloud


image courtesy of betanews.com

There are many benefits that cloud technology can offer a business; however, businesses don't buy technology for technology's sake, they buy it for positive business outcomes. The two most popular outcomes businesses seek are increased revenue and reduced cost. Information Technology (IT) has long been known as one of the costliest departments in a business, so it makes sense that if we're going to recommend a cloud solution, we look at the financial benefits. Those financial advantages, paired with expertise in determining which applications should migrate to the cloud, create a cloud strategy. This consultation is not completed just once; it needs to be repeated periodically by a strategic partner like Two Ears One Mouth. Just as telecommunications and internet circuits can become financially burdensome as a business grows, so can a cloud solution. Telecom cost recovery became a financial necessity for businesses when telecom costs spiraled out of control: a consultant would examine all the vendors and circuits to help the business reduce IT spend by eliminating waste. The cloud user faces a similar problem, as cloud services can automatically grow as demand increases, and that growth includes the cloud solution's cost as well as its resources.

 

To follow are the three primary financial benefits of a cloud migration.

 

CapEx vs OpEx

The primary financial benefit most organizations plan for with their first cloud implementation is an operational expense (OpEx) instead of a capital expense (CapEx). This is particularly beneficial for startup companies and organizations that are financially constrained; they find comfort in the "pay as you go" model, similar to other services they need, such as utilities. Conversely, enterprises that invest in equipping their own data centers have racks of equipment that depreciate quickly and utilize only a fraction of the capacity purchased. It has been estimated that most enterprises have an IT hardware utilization rate of about 20% of total capacity. Cloud services let you pay only for what you use and seldom pay for resources sitting idle.
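A back-of-the-envelope calculation shows why that 20% utilization figure matters. The prices below are illustrative placeholders, not quotes from any provider.

```python
# Illustrative look at low utilization: an owned server's cost per *used* hour
# next to renting only the hours you actually need. All prices are placeholders.

purchase_price = 10000.0          # server CapEx, depreciated over 3 years
hours_per_year = 8760
utilization = 0.20                # share of capacity actually used

owned_cost_per_used_hour = purchase_price / (3 * hours_per_year * utilization)
cloud_rate_per_hour = 0.10        # hypothetical on-demand rate, paid only while running

print(f"Owned server, cost per used hour:  ${owned_cost_per_used_hour:.2f}")
print(f"Cloud instance, cost per used hour: ${cloud_rate_per_hour:.2f}")
```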

 

Agility and scale

Regardless of the size of your business, it would be financially impractical to build an IT infrastructure that could scale as quickly as the one you rent from a cloud provider. This agility allows businesses to react quickly to IT resource needs while simultaneously reducing cost. Many cloud solutions can predict when additional resources are needed and scale the solution appropriately. This provides obvious benefits for the IT manager but can create problems with the IT budget: if the cloud solution continues to scale upward, and it is billed transactionally, the cost can escalate quickly. Cloud instances need to be monitored constantly for growth and cost. For this reason, Two Ears One Mouth consultants have developed a product known as cloud billing and support services (CBASS). CBASS makes sure the benefits originally realized with the cloud migration remain intact.

 

Mitigate risk

Many best practices in setting up a cloud infrastructure also enhance IT security. For instance, because your data resides elsewhere, cloud users tend to implement data encryption. This encryption can cover not only the data at rest in the cloud provider's data center but also data in transit between the data center and the customer. This is a wise practice for IT security; it can prevent data breaches and benefit regulatory compliance in some cases. Additionally, security software and hardware, such as firewalls, tend to be superior in larger IT data centers such as a cloud provider's. Ironically, IT security, which started as a concern about cloud computing, has become one of its advantages.

 

Cloud has long been a proven technology and is here to stay. It has reduced IT budgets while improving IT response time. However, the cost savings of cloud are not automatic or permanent; savings, as well as the solution itself, need to be measured and affirmed regularly. Our consultants can monitor your cloud environment, leaving you to focus on the business.

If you need assistance with your current IT cloud project, please contact us at:

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net

Preparing for the Cost of a Data Breach


One of the biggest challenges, particularly for small and medium businesses (SMBs), is trying to anticipate and budget for the cost of a data breach. While larger, often publicly owned, corporations can sustain huge financial losses to litigation or regulatory penalties, organizations with less than $100 million in revenue cannot. Even as the leadership of the SMB becomes aware of the inevitability of an attack, they don't understand what the potential costs could be and how to prepare for them. This may cause them to task their Chief Information Officer (CIO) or Chief Information Security Officer (CISO) with estimating the cost of a breach for budgetary purposes. The CISO, understanding that addressing the breach issue starts with IT governance, may attempt to educate the company's leadership on the tools necessary to help prevent a breach. Both leaders face a difficult decision: how much money should be set aside for data security, and should the focus be on prevention or recovery? Most would agree the answer is a combination of the two; for this exercise I will focus on the components of cost once a data breach has occurred. The four primary silos of cost are response and notification, litigation, regulatory fines, and the negative impact to reputation. When the affected enterprise forecasts costs for a potential breach, it not only gains an idea of the financial burden it would incur but also is prompted to document the steps to take in the event a breach is discovered.

Notification, the first cost incurred, is the easiest to forecast. Most businesses have a good idea of who their customers are and how best to notify them. A good social media presence can simplify this as well as reduce total costs. After the breach is discovered, the first task is to try to discover which customers were affected. Once that is determined, the business needs to decide the best way to notify them. US Mail, email or social media are the most common methods. The most efficient process for each must be determined. Many states have laws around breach notification and timing, which need to be considered and understood as a part of the process. The larger the organization, and the associated breach, the more complicated this process becomes. In a recent breach of a large healthcare organization, deciding how to contact the affected customers took longer than it should have because the company wasn’t prepared for a breach of the magnitude they faced. The breach affected tens of millions of customers. It was decided that a conventional mail notification was required at a cost of several million dollars!

Litigation and regulatory penalties are similar and can be prepared for in the same way. While regulatory penalties can be better estimated up front, both costs can get out of control quickly. The best way to prepare for these types of costs are with Data Breach Insurance, also known as Cyber Liability Insurance. Cyber Liability Insurance provides coverage for the loss of both first-party and third-party data. This means that whether the data breach happens directly to your company or to a company whose data you are working with, the coverage will be in effect. While most of the time Cyber Liability Insurance is considered for the larger expenses, like lawsuits and regulatory penalties, the right plan can be used for all four types of aforementioned costs: notification, litigation, regulatory fines and damage to reputation.

The hardest to define, and many times the costliest, is the damage to the breached company’s reputation. In a recent study, the three occurrences that have the greatest impact on brand reputation are data breaches, inadequate customer service, and environmental disasters. Of these, the survey found that data breaches have the most negative impact on reputation. If the affected company is in the IT industry, and specifically IT security, the effects are likely to be devastating to the organization. The only trend that seems to be softening that damage is that breaches have become so common that people are more likely to disregard the notification. Greater frequency certainly is occurring, but it isn’t anything the affected company can include in their plan. What you must include in your plan is the message you will communicate with the public to lessen the negative consequences. This should include how you fixed the problem and how you plan to prevent additional breaches in the future. In a recent healthcare breach, the organization partnered with a well-known security platform to better protect patient records going forward.

Considering these four primary areas affected is critical to helping leadership determine the costs associated with a data breach. If you have any questions about determining the cost for your business, contact us today.

Contact us so we can learn more about the IT challenges with your organization.

Jim Conwell (513) 227-4131      jim.conwell@outlook.com      www.twoearsonemouth.net